\begin{document} \title{Making the cut: two methods for breaking down a quantum algorithm} \author{Miguel \surname{Mur\c{c}a}} \thanks{These authors contributed equally to this work} \email[corresponding author address: ]{[email protected]} \affiliation{Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal} \affiliation{Centro de Física e Engenharia de Materiais Avançados (CeFEMA), Physics of Information and Quantum Technologies Group, Portugal} \affiliation{PQI -- Portuguese Quantum Institute, Portugal} \author{Duarte \surname{Magano}} \thanks{These authors contributed equally to this work} \email[corresponding author address: ]{[email protected]} \affiliation{Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal} \affiliation{Instituto de Telecomunica\c{c}\~{o}es, Lisboa, Portugal} \author{Yasser Omar} \affiliation{Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal} \affiliation{Centro de Física e Engenharia de Materiais Avançados (CeFEMA), Physics of Information and Quantum Technologies Group, Portugal} \affiliation{PQI -- Portuguese Quantum Institute, Portugal} \date{\today} \begin{abstract} Despite the promise that fault-tolerant quantum computers can efficiently solve classically intractable problems, it remains a major challenge to find quantum algorithms that may reach computational advantage in the present era of noisy, small-scale quantum hardware. Thus, there is substantial ongoing effort to create new quantum algorithms (or adapt existing ones) to accommodate depth and space restrictions. By adopting a hybrid query perspective, we identify and characterize two methods of ``breaking down'' quantum algorithms into rounds of lower (query) depth, designating these approaches as ``parallelization'' and ``interpolation''. To the best of our knowledge, these had not been explicitly identified and compared side-by-side, although one can find instances of them in the literature. We apply them to two problems with known quantum speedup: calculating the $k$-threshold function and computing a NAND tree. We show that for the first problem parallelization offers the best performance, while for the second interpolation is the better choice. This illustrates that no approach is strictly better than the other, and so that there is more than one good way to break down a quantum algorithm into a hybrid quantum-classical algorithm. \end{abstract} \maketitle \section{Introduction} Algorithms that combine classical processing with limited quantum computational resources hold an attractive promise: to provide computational advantage over completely classical computation, while remaining compatible with the technological landscape of quantum computing. The appeal of this kind of algorithm is well reflected in some of the key modern proposals for quantum advantage, usually based on variational principles \cite{Bharti2022}. Prominent examples include the Quantum Approximate Optimization Algorithm \cite{Farhi2014}, the Variational Quantum Eigensolver \cite{VQE,McClean2016,Wecker,Kandala2017}, and some versions of Quantum Machine Learning \cite{farhi2018classification,Benedetti2019,chen2020variational,Banchi2022,Buffoni2020}. All of these attempt to exploit circuits of limited coherence to obtain computational advantage. However, variational approaches often cannot offer theoretical performance guarantees, as discussed in refs.~\cite{McClean2018,Anschuetz2022}. 
Consider instead a setting where we are given a quantum algorithm with guaranteed advantage for a certain computational problem, but the available hardware is too noisy to execute the algorithm with a reasonable fidelity. We would need to limit the circuit depths to values much shorter than the ones prescribed by the original algorithm to prevent errors from dominating the calculations. Is it still possible to guarantee some quantum advantage? We may phrase this question more precisely. Say we are faced with a computational problem $f$ that can be solved by a classical computer in time $C(f)$, and we know a quantum algorithm that solves $f$ with complexity $Q(f)$ ($Q(f) < C(f)$) by running quantum circuits of depth $D$; what is the best we can do if we are only permitted to run quantum circuits up to a depth $D'$ smaller than $D$? The expectation is that the best strategy yields an algorithm with a complexity between $C(f)$ and $Q(f)$. We believe that understanding this question may contribute to finding practical but provable advantage in near-term quantum computers. For oracular problems, the notion of limited coherence is captured by the hybrid query (or decision tree) complexity $Q(f; D)$, introduced by Sun and Zheng~\cite{Sun2019}. In this setting, only the input accesses (or queries) contribute to the complexity count, while the intermediate computations are free. $Q(f; D)$ is defined as the minimum number of queries required to solve $f$ when limited to running quantum circuits of depth $D$. That is, we can only perform $D$ queries before being forced to measure the state of the circuit and restart it. It is known that quantum decision trees are strictly more powerful than hybrid decision trees, which are strictly more powerful than classical decision trees. Concretely, there is a problem $f$ for which $C(f)$ and $Q(f; \bigO(1))$ are (super-)exponentially separated \cite{AaronsonAmbainis}, and similarly there is a problem $f$ for which $Q(f; \bigO(1))$ and $Q(f)$ are exponentially separated \cite{Sun2019}. There are also problems $f$ that exhibit a continuous trade-off between speedup and circuit depth \cite{Wang2019}, i.e., $Q(f;D) > Q(f;D+1)$ for every $D$ between $1$ and $Q(f)$. \begin{figure*} \caption{Diagrammatic representation of the procedures of parallelization (procedure $1$) and interpolation (procedure $2$), as defined and exemplified in this paper. For both procedures, the overall goal is to carry out a quantum algorithm (as described by some unitary $U$) for some input $X$ to calculate a property $f$ of $X$, as shown in the box to the left. However, this unitary may require a prohibitive depth (modelled by us as a prohibitive number of coherent quantum queries). In the case of parallelization (procedure $1$), this is dealt with by identifying independent, smaller instances of the same problem that can be dealt with within the query constraints; in other words, by partitioning the input appropriately into sub-problems. Interpolation (procedure $2$) involves, instead, considering multiple repetitions of some unitary (or sequence of unitaries) that require, individually, fewer coherent queries, but that collectively yield the same information as a single run of $U$. For both of these approaches, ``breaking up'' the original algorithm may come at a cost of more overall queries -- indeed, we expect that this will be the case for most algorithms.
We show that no method is strictly better than the other, and that the best choice depends on the problem at hand.} \label{fig:representation} \end{figure*} Despite these landmark results, it is not always obvious how to optimally ``break up'' an algorithm into circuits of smaller sizes. For example, when given a quantum circuit that is too deep, P\'{e}rez-Salinas \textit{et al.}~\cite{PeresSalinas2022} propose a heuristic algorithm where one performs intermediate measurements in a parametrized basis, as given by a shallow variational circuit, optimized to minimize the effect of measuring and restarting the quantum operation. However, the expressiveness of the shallow circuit determining the measurement basis and the difficulty of minimizing the cost function may limit the success of this approach. In this paper, we identify and discuss two general strategies with theoretical guarantees to limit the depth of an algorithm to some specified value $D$. We refer to them as ``parallelization'' and ``interpolation''. Parallelization applies when a problem can be broken down into a number of smaller, independent sub-problems, such that the algorithm that solves these sub-problems fits the permitted circuit depth. In contrast, with interpolation the entire problem is tackled at each circuit run. It applies whenever there is a trade-off between the information content of the measurement and the depth of the corresponding circuit. In these cases, we may compensate the information loss caused by shortening the circuit depth with repeated runs of the shorter circuit. Intuitively, we can say that interpolation methods ``break up the unitary'' that solves the problem instead of breaking up the problem itself. See Figure \ref{fig:representation} for an illustration of these notions. To the best of our knowledge, neither of these methods had been explicitly identified and compared side-by-side, even though several works that fit into these labels can be found in the literature. For example, parallelization approaches are present in refs.~\cite{Zalka1999,GroverRadhakrishnan2004,Jeffery2017}, while refs.~\cite{Wang2019,WangKohJohnsonCao2021,GiurgicaTiron2022,Magano2022} describe interpolation methods. We argue that Quantum Singular Value Transformations (QSVTs) \cite{Gilyen2019} provide a natural framework for thinking about interpolation. In a seminal work, Gily\'{e}n \textit{et al.}~\cite{Gilyen2019} have shown that it is possible to perform polynomial transformations on the singular values of large matrices with circuits whose depth is proportional to the degree of the corresponding polynomial, and that many important quantum algorithms can be described this way. Taking this perspective, limiting the circuit depth means implementing a rougher approximation to the target function. As a consequence, each measurement provides less accurate information about the quantity that we are trying to estimate. Sometimes, this effect can be compensated with statistical sampling. That is, we can trade off circuit depth against the number of circuit runs. We illustrate these methods with two well-known problems: the \(k\)-threshold function and perfectly balanced NAND trees. These problems are known to exhibit quantum speed-ups (\cite{Beals2001} and \cite{FarhiGoldstoneGutmann2007,ChildsCleveJordanYongeMallo2009,Ambainis2010}), but, to the best of our knowledge, neither has been discussed in the context of a limited-depth computing model. Both problems are amenable to both parallelization and interpolation.
We show that for the \(k\)-threshold function parallelization offers the best performance, while for evaluating perfectly balanced NAND trees the interpolation method is the most efficient. This reinforces the relevance of the distinction between parallelization and interpolation, and demonstrates that no technique is \textit{a priori} better than the other, as the best option depends on the problem at hand. \section{Preliminaries} \subsection{Hybrid Query Model \label{sec:querymodel}} We will be working mostly within the query model of quantum computing. Here we quickly review the main concepts, referring to Ambainis \cite{AndrisReview2017} for a more in-depth discussion. The quantum query complexity model, a generalization of decision tree complexity \cite{BdW02}, is widely used to study the power of quantum computers. On the one hand, it captures the important features of most quantum algorithms, including search \cite{Grover1996}, period-finding \cite{Shor1994}, and element distinctness \cite{Ambainis2007}. On the other hand, it is simple enough to make the proof of lower bounds attainable \cite{Beals2001,Ambainis2002}. In the query model, the goal is to compute a Boolean function $f(x_1, \ldots, x_N)$ of variables \(x_i \in \zo\). The function can be total (defined on $\zo^N$) or partial (defined on a subset of $\zo^N$). We only get information about the input variables by querying a black-box quantum operator $O$ acting as \begin{equation} O \ket{i}\ket{b} = \ket{i} \ket{b \oplus x_i} \end{equation} for every $b\in \zo$ and $i \in \{1, \ldots, N\}$. A quantum query algorithm is specified by a set of input-independent unitaries $U_0, U_1,\ldots, U_T$. The algorithm consists of performing the transformation \begin{equation} \label{eq:querycircuit} U_T \, O \, U_{T-1} \, \ldots U_2 \, O \, U_1 \, O \, U_0 \ket{0} \end{equation} and measuring the result, which is then converted into the answer of the problem according to a predefined rule. In the query model, the algorithm's complexity increases with each query, while the intermediate computations are free. That is, the complexity of the algorithm corresponding to transformation \eqref{eq:querycircuit} is $T$, independently of how the unitaries $U_i$ are chosen. We say that a quantum algorithm computes $f$ with bounded error if, for all $x \in \zo^N$, the answer of the algorithm agrees with $f(x)$ with probability at least $2/3$, where the probability is over the randomness of the algorithm's measuring process. The minimum query complexity of any bounded-error algorithm computing $f$ is the quantum (bounded-error) complexity of $f$, denoted as $Q(f)$. The hybrid query model introduced by Sun and Zheng \cite{Sun2019} captures the idea of restricted-depth computation in an oracular setting. Hybrid algorithms are in direct correspondence with hybrid decision trees. A hybrid decision tree is similar to a (classical) decision tree, but the decision at each node is determined by the output of a quantum algorithm with query complexity no more than a value $D$, which we refer to as the depth of the hybrid algorithm. The hybrid algorithm's answer is the output of the algorithm at the leaf node. More plainly, a hybrid algorithm works by running and measuring sequences of circuits like \eqref{eq:querycircuit} with $T \leq D$, using the intermediate measurements to decide what quantum circuit to run next.
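As a purely illustrative aside (a toy numerical sketch with hypothetical sizes, not an algorithm analysed in this paper), the query operator $O$ and a circuit of the form \eqref{eq:querycircuit} can be simulated directly for small instances, which makes the bookkeeping of queries explicit:
\begin{verbatim}
import numpy as np

# Toy simulation of the query model (hypothetical sizes: N = 4, n = 2).
N, n = 4, 2
x = np.array([0, 1, 0, 0])               # hidden input string x_1 ... x_N

# Oracle O|i>|b> = |i>|b XOR x_i>, as a permutation matrix; basis states
# are ordered so that |i>|b> sits at position 2*i + b.
O = np.zeros((2 * N, 2 * N))
for i in range(N):
    for b in range(2):
        O[2 * i + (b ^ x[i]), 2 * i + b] = 1

# One-query circuit U_1 O U_0 |0>, with U_0 = Hadamards on the index
# register and U_1 = identity (a simple, non-optimal choice of the U_t).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U0 = np.kron(np.kron(H, H), np.eye(2))
state = np.zeros(2 * N); state[0] = 1.0
state = O @ (U0 @ state)                 # exactly T = 1 query is charged

# The answer qubit reads 1 with probability |x|/N -- the same quantity
# that is block-encoded in the threshold-function section.
p_one = sum(abs(state[2 * i + 1]) ** 2 for i in range(N))
print(p_one)                             # ~0.25 for the x above
\end{verbatim}
In the hybrid model, a classical outer loop repeats such run-measure-restart cycles, each charged at most $D$ queries, and uses the outcomes to decide what to run next.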
A hybrid algorithm computes $f$ with bounded error if, for all $x \in \zo^N$, the answer of the algorithm agrees with $f(x)$ with probability at least $2/3$, where the probability is over the randomness of the internal measurements. The complexity of a path in a hybrid tree is the sum of the complexities of the algorithms associated to each node in the path. The complexity of a hybrid algorithm that computes a function $f$ is the maximal complexity of any path that connects the root and a leaf, that is, it is the total number of queries needed to evaluate $f$ in the worst case. The minimum query complexity of any bounded-error hybrid algorithm computing $f$ is the hybrid (bounded-error) complexity of $f$, denoted as $Q(f; D)$. \subsection{Quantum Singular Value Transformations \label{sec:QSVT}} In this paper we make extensive use of Quantum Singular Value Transformations (QSVTs) \cite{Gilyen2019,Martyn2021,Lin2022}. As a generalization of the work on Quantum Signal Processing \cite{LowChuang19}, QSVTs have provided a unifying description of several algorithms, including amplitude estimation, quantum simulation, and quantum methods for linear systems. Recently, Magano and Mur\c{c}a \cite{Magano2022} have shown that QSVTs also constitute a natural framework for reasoning about interpolation methods. By the singular value decomposition theorem, an arbitrary matrix $A$ of rank $r$ can be written as \begin{equation} A = \sum_{k=1}^r \sigma_k \ket{w_k} \bra{v_k}, \end{equation} where $\{w_k\}_k$ and $\{v_k\}_k$ are orthonormal sets (known as the left and right singular vectors of $A$, respectively) and $\{\sigma_k\}_k$ are positive real numbers (known as the singular values of $A$). For functions $P: \R \rightarrow \C$, we call \begin{equation} P^{\text{(SV)}}(A) := \sum_{k=1}^r P(\sigma_k) \ket{w_k} \bra{v_k} \end{equation} a singular value transformation of $A$. When considering performing such transformations on arbitrary matrices with quantum computers, we are immediately faced with the difficulty that quantum states evolve according to unitary transformations. The introduction of \emph{block-encodings} overcomes this apparent limitation \cite{Chakraborty2018}. Let \(\Pi\) and \(\tilde\Pi\) be orthogonal projectors and \(U\) be a unitary; we say that \(\Pi\), \(\tilde\Pi\), and $U$ form a block-encoding of the operator $A$ if \begin{equation} A = \tilde\Pi U \Pi. \end{equation} For instance, a single-qubit rotation $U = e^{-i \theta Y}$ together with $\Pi = \tilde\Pi = \dyad{0}$ block-encodes the $1 \times 1$ matrix $(\cos\theta)$. Based on this concept, the main theorem of QSVTs can be phrased as follows. \begin{theorem}[QSVTs \cite{Gilyen2019}] \label{thm:qsvt} Let \(\Pi\), \(\tilde\Pi\), and \(U\) be a block-encoding of a matrix $A$, and let $P : [-1,1] \rightarrow [-1,1]$ be a polynomial of degree $d$. Then, we can implement a unitary $U_P$ such that \(\Pi\), \(\tilde\Pi\), and \(U_P\) form a block-encoding of $P^{\text{(SV)}}(A)$ using $\bigO(d)$ calls to $U$, $U^{\dg}$ and $\Pi/\tilde\Pi$-controlled-NOT operations.\footnote{By $\Pi$-controlled-NOT we mean an operation that acts as a NOT gate controlled on a given state being in the image of $\Pi/\tilde\Pi$.} \end{theorem} A transformation that will be particularly useful for us is the step (or Heaviside) function, \begin{equation} \sigma \mapsto \begin{cases} 1, \text{ if } \sigma \geq \mu \\ 0, \text{ if } \sigma < \mu \end{cases}, \end{equation} for some $\mu \in [-1,1]$.
Refs.~\cite{LowPhD,Gilyen2019} show that we can approximate this transformation up to arbitrary accuracy by a polynomial approximation of the error function, defined as \begin{equation} \erf(x) := \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \dd{t}. \end{equation} The result is stated below. \begin{theorem}[Polynomial approximation of step function \cite{Gilyen2019}] \label{thm:stepfunction} There is a polynomial $P_{\delta, \eta, \mu}:[-1,1] \rightarrow [-1,1]$ of degree \begin{equation} \bigO \left( \frac{1}{\delta} \log\left(\frac{1}{\eta} \right) \right) \end{equation} satisfying \begin{align} \vert P_{\delta, \eta, \mu}(\sigma) \vert &\leq \eta, \quad \forall \sigma \in [-1,\mu - \delta], \\ P_{\delta, \eta, \mu}(\sigma) &\geq 1 - \eta, \quad \forall \sigma \in [\mu+\delta,1]. \end{align} \end{theorem} We will also be interested in performing a step transformation on the modulus of the singular values (also known as a window function due to the shape of its plot), \begin{equation} \sigma \rightarrow \begin{cases} 1, \text{ if } \vert \sigma\vert \leq \mu \\ 0, \text{ if } \vert \sigma\vert > \mu \end{cases}. \end{equation} Noting that $P_{\delta, \eta, -\mu} - P_{\delta, \eta,\mu}$ is a polynomial with the same degree as $P_{\delta, \eta, \mu}$, we immediately derive the following. \begin{corollary}[Polynomial approximation of window function] \label{thm:bumpfunction} There is a polynomial $P'_{\delta, \eta, \mu}:[-1,1] \rightarrow [-1,1]$ of degree \begin{equation} \bigO \left( \frac{1}{\delta} \log\left(\frac{1}{\eta} \right) \right) \end{equation} satisfying \begin{align} \vert P'_{\delta, \eta, \mu}(\sigma) \vert &\leq \eta, \quad \forall \sigma \in [-1 , -\mu - \delta] \cup [\mu + \delta, 1], \\ P'_{\delta, \eta, \mu}(\sigma) &\geq 1 - \eta, \quad \forall \sigma \in [-\mu + \delta, \mu-\delta]. \end{align} \end{corollary} Combining Theorems \ref{thm:qsvt} and \ref{thm:stepfunction} we find a method to distinguish the singular values of a block-encoded matrix that are above or below a given threshold. Similarly, from Theorem \ref{thm:qsvt} and Corollary \ref{thm:bumpfunction} we can distinguish the singular values of a block-encoded matrix whose modulus is above or below a given threshold. \section{Two approaches to restricted-depth computation} \subsection{Parallelization} In many cases the problem at hand can be broken down into a number of smaller, independent sub-problems. As an example, consider the problem of computing the OR function on $N$ bits. We can partition the domain into $p$ subdomains of size approximately $N/p$. If for any of those subdomains there is an index $i$ for which $x_i = 1$, then we return $1$; otherwise the answer is $0$. In other words, the problem is reduced to evaluating $p$ OR functions on $N/p$ bits each. With Grover's algorithm \cite{Grover1996} we can evaluate each subdomain with $\bigO(\sqrt{N/p})$ queries. In total, this strategy has a query complexity of \begin{equation} \bigO*(\sqrt{p N}). \end{equation} If we are limited to circuits of depth $D$, we set $p = \bigO(N / D^2)$, finding that \begin{equation} Q(\text{OR}; D) = \bigO*(\frac{N}{D} + \sqrt{N}). \end{equation} By Corollary~1.5 of Sun and Zheng \cite{Sun2019}, this is optimal. We say that the algorithms that employ this kind of strategy -- breaking the problem into smaller problems that fit the permitted depth -- fall into the category of parallelization methods. Note that this procedure does not require multiple quantum processors operating at the same time, even though it is amenable to it.
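As a minimal illustration of this strategy (a toy sketch, assuming each sub-problem is handled by a textbook Grover search simulated exactly; the function names are ours and purely illustrative), the classical outer loop below partitions the input into chunks of $D^2$ bits, runs a depth-capped Grover search on each chunk, and tallies the total number of queries:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def grover_or(chunk, depth_cap, rng):
    """Guess OR(chunk) by Grover search; return (guess, queries used)."""
    m = len(chunk)
    signs = 1 - 2 * np.asarray(chunk)          # oracle phase: -1 on marked bits
    n_iter = max(1, min(depth_cap, int(np.floor(np.pi / 4 * np.sqrt(m)))))
    psi = np.full(m, 1 / np.sqrt(m))           # uniform superposition over the chunk
    for _ in range(n_iter):
        psi = signs * psi                      # one oracle query
        psi = 2 * psi.mean() - psi             # diffusion (inversion about the mean)
    i = rng.choice(m, p=np.abs(psi) ** 2)      # measure an index, then restart
    return chunk[i] == 1, n_iter

N, D = 64, 4                                   # depth budget D; chunks of D^2 = 16 bits
x = np.zeros(N, dtype=int)
x[rng.integers(N)] = 1                         # a single marked input bit

answer, total_queries = 0, 0
for start in range(0, N, D * D):               # ~N/D^2 independent sub-problems
    found, q = grover_or(x[start:start + D * D], D, rng)
    total_queries += q
    if found:
        answer = 1
        break                                  # classical control: stop at the first 1
print(answer, total_queries)                   # worst case ~ (N/D^2) * D = N/D queries
\end{verbatim}
A complete treatment must also bound the error probability accumulated over the $\bigO(N/D^2)$ sub-problems; the optimal bound quoted above shows that this can be done without affecting the leading-order query count.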
The important point is that the different sub-problems considered are independent and may be treated as such. This should be contrasted with the notion of parallel quantum algorithms as defined by Jeffery \textit{et al.}\ \cite{Jeffery2017}, where a number of queries are realized at the same time (in parallel), but by a number of quantum registers that may, for example, be entangled with each other. Arguably, parallelization as described above is the most natural approach to ``breaking up'' a quantum algorithm into circuits of lower quantum depth. Examples of parallelization include Zalka \cite{Zalka1999}, containing the OR function discussed above, Grover and Radhakrishnan \cite{GroverRadhakrishnan2004}, searching for marked elements over many copies of a database, and Jeffery \textit{et al.} \cite{Jeffery2017}, with the problems of element distinctness and $k$-sum. Although these references were originally motivated by the idea of quantum processors acting in parallel, they easily translate to the discussion of the restricted-depth setting, and fit into the description of the procedure of parallelization we have made above. \subsection{Interpolation} Contrary to parallelization, interpolation methods do not distribute the problem into different sub-problems. Instead, the entire problem is tackled at each run, with the answer being assembled over several quantum circuit runs. Since the circuit depth is limited, each circuit measurement can only yield partial information about the answer to the problem; the definitive answer is recovered by repeating the computation multiple times. We illustrate this approach with an information-theoretic argument (similar to that of Wang \textit{et al.}~\cite{WangKohJohnsonCao2021}). Say that we have a quantum routine $\mathcal{A}$ that prepares the state \begin{equation} \label{eq:AAstate} \ket{0^n} \xrightarrow{\mathcal{A}} \sqrt{1-p} \ket{\psi_0} + \sqrt{p} \ket{\psi_1} \end{equation} for some unknown $p \in [0,1]$, and assume that we can efficiently distinguish between $\ket{\psi_0}$ and $\ket{\psi_1}$. The goal is to estimate $p$, noting that many query problems can be reduced to estimating an amplitude. With Grover's iterator \cite{Grover1996}, we can prepare the state \begin{equation} \label{eq:iteratedAAstate} \cos\big( (1 + 2k) \theta\big) \ket{\psi_0} + \sin\big((1 + 2k) \theta\big) \ket{\psi_1}, \end{equation} where $\theta = \arcsin(\sqrt{p})$, with $\bigO(k)$ calls to $\mathcal{A}$. Now suppose that we prepare and measure the state \eqref{eq:iteratedAAstate} in the $\{\ket{\psi_0}, \ket{\psi_1}\}$ basis $l$ times, recording the outcomes. The Fisher information associated with this experiment is \begin{align} I(\pi) := & \, l \sum_{i=0,1} \frac{1}{\mathbb{P}[\ket{\psi_i} \vert \pi] } \left( \frac{\partial}{\partial \pi} \mathbb{P}[\ket{\psi_i} \vert \pi] \right)^2 \nonumber \\ = & \, \frac{l (1+ 2k)^2}{\pi(1-\pi)}, \label{eq:Fisherinfo} \end{align} where $\mathbb{P}[\ket{\psi_i} \vert \pi]$ is the probability of observing outcome $\ket{\psi_i}$ in a single trial assuming that $p = \pi$. Expression \eqref{eq:Fisherinfo} reveals that the measurement is more informative the larger the value of $k$ (in particular, that it grows quadratically with $k$, justifying the quadratic speedup of Grover's algorithm). Refs.~\cite{Wang2019,GiurgicaTiron2022,Magano2022} have suggested different schemes to harness the enhanced information of deeper circuits.
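For completeness, the second equality in \eqref{eq:Fisherinfo} follows from a short computation with the state \eqref{eq:iteratedAAstate}: writing $\theta_\pi = \arcsin \sqrt{\pi}$,
\begin{align*}
\mathbb{P}[\ket{\psi_1} \vert \pi] &= \sin^2\big( (1 + 2k) \theta_\pi \big), \\
\frac{\partial}{\partial \pi} \mathbb{P}[\ket{\psi_1} \vert \pi] &= \frac{(1+2k) \sin\big( 2 (1 + 2k) \theta_\pi \big)}{2 \sqrt{\pi (1 - \pi)}},
\end{align*}
and, noting that $\partial_\pi \mathbb{P}[\ket{\psi_0} \vert \pi] = - \partial_\pi \mathbb{P}[\ket{\psi_1} \vert \pi]$ and that $\mathbb{P}[\ket{\psi_0} \vert \pi] \, \mathbb{P}[\ket{\psi_1} \vert \pi] = \tfrac{1}{4} \sin^2\big( 2 (1 + 2k) \theta_\pi \big)$, the sum in \eqref{eq:Fisherinfo} indeed collapses to $l (1+2k)^2 / (\pi(1-\pi))$.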
Here we adopt the perspective put forward by Magano and Mur\c{c}a \cite{Magano2022}, according to which QSVTs constitute a natural framework for interpolation methods. The idea is to trade off the quality of the polynomial approximation to the target function against the amount of statistical sampling. That is, we can compensate for using polynomials of lower degree (corresponding to lower circuit depths) by running the quantum circuits more times. The result is a continuous trade-off between circuit depth and quantum speed-up, without ever needing to identify independent sub-problems. In the subsequent sections we demonstrate how QSVTs can be used to interpolate specific problems. \section{Sometimes Parallelization is Better: Threshold Function} Consider the \(k\)-threshold function, a total symmetric Boolean function defined as follows: \begin{equation} \Threshold_k (x_1, \ldots, x_N) = \begin{cases} 0 & \text{if } \sum_{i=1}^N x_i \leq k \\ 1 & \text{otherwise} \end{cases}. \end{equation} This function admits a quantum query speed-up: whereas in the classical case \(\Theta(N)\) queries are required (easily concluded by an adversarial argument), the quantum query complexity is \(\Theta(\sqrt{N \min(k, N-k)})\) (as follows from Beals \textit{et al.} \cite{Beals2001}), resulting in the aforementioned quadratic speed-up when \(\min(k, N-k)=\bigO(1)\), and no speed-up when \(\min(k, N-k) = \Omega(N)\). For simplicity, we assume from now on that $k \leq N/2$. We approach the problem from the perspective of QSVTs. This is a departure from the original proof of Beals \textit{et al.}~\cite{Beals2001}, where the problem of evaluating any totally symmetric Boolean function is reduced to quantum counting. Arguably, QSVTs permit tackling the $k$-threshold problem more directly, while also offering a more natural route towards interpolation. We show in Appendix \ref{sec:symmfunctions} that our approach can also be generalized to any totally symmetric Boolean function, although in that case the proof resembles more closely that of Beals \textit{et al.}~\cite{Beals2001}. We start by making the (trivial) observation that the $k$-threshold function can be written as a function of the Hamming weight of the input, which we denote by $\HammingWeight{x}$. The first step of our algorithm will be to block-encode $\sqrt{\HammingWeight{x} / N}$ (or, more technically, to block-encode the $1\times1$ matrix whose only entry is $\sqrt{\HammingWeight{x} / N}$). Then, we will perform a QSVT on this value to prepare the desired function of $\HammingWeight{x}$. Consider the unitary transformation \begin{equation} U = \quad\raisebox{1.2em}{ \Qcircuit @C=1.1em @R=.5em { & \qw{{}^{n}/} & \gate{H^{\otimes n}} & \multigate{1}{O_X} & \qw \\ & \qw & \qw & \ghost{O_X} & \qw \\ } }, \end{equation} where \(n = \log_2(N)\) -- assuming, without loss of generality, that \(N\) is exactly a power of two -- and \(O_X\) is our query operator (defined in Section \ref{sec:querymodel}). We have that \begin{flalign} U \ket*{0^{n+1}} & = \sqrt{\frac{1}{N}} \left(\quad\sum_{\mathclap{i \SuchThat x_i = 0}} \ket{i}\ket{0} + \sum_{\mathclap{i \SuchThat x_i = 1}} \ket{i} \ket{1}\;\right) \nonumber \\ & = \sqrt{1 - \frac{\HammingWeight{x}}{N}} \ket{\phi_0} \ket{0} + \sqrt{\frac{\HammingWeight{x}}{N}} \ket{\phi_1} \ket{1} , \label{eq:threshold_blockencoding} \end{flalign} where \(\ket{\phi_0}\) and \(\ket{\phi_1}\) are normalized states.
Choosing \begin{equation} \Pi = \dyad*{0^{n+1}} \, \text{ and } \, \tilde\Pi = \Identity_{2^n} \otimes \dyad{1} \end{equation} we find that \begin{equation} \tilde\Pi U \Pi = \sqrt{\frac{\HammingWeight{x}}{N}}. \end{equation} That is, $\tilde\Pi$, $\Pi$, and $U$ form a block-encoding of $\sqrt{\HammingWeight{x} / N}$. We would like to distinguish between cases where $\sqrt{\HammingWeight{x} / N}$ is smaller than or equal to $\sqrt{k / N}$ and those where it is larger than $\sqrt{k / N}$. From the results on QSVTs (Theorems \ref{thm:qsvt} and \ref{thm:stepfunction}) we can perform the transformation \begin{equation} \label{eq:threshold_transformation} \ket*{0^{n+1}} \rightarrow P_{\delta, \eta, \mu}\left(\sqrt{\frac{\HammingWeight{x}}{N}} \right) \ket{\phi_1} \ket{1} + \ket{\perp_1}, \end{equation} where $\ket{\perp_1}$ is such that $\tilde\Pi \ket{\perp_1} = 0$, using $\bigO ( (1/\delta) \log (1 / \eta ) )$ calls to $U$. As $U$ only calls the query operator $O$ once, the operation \eqref{eq:threshold_transformation} only involves $\bigO ( (1/\delta) \log (1 / \eta ) )$ queries. We choose the parameters as \begin{align} \eta &= 1/8, \label{eq:eta_def}\\ \delta &= \frac{1}{2} \left(\sqrt{(k+1)/N} - \sqrt{k/N} \right) = \bigO*(\frac{1}{\sqrt{kN}}), \label{eq:par_delta_def} \\ \mu &= \frac{1}{2} \left(\sqrt{(k+1)/N} + \sqrt{k/N} \right), \end{align} in which case the operation \eqref{eq:threshold_transformation} consumes $\bigO(\sqrt{kN})$ queries. The final step is simply to measure the last qubit of the resulting state, outputting $0$ if we measure $\ket{0}$ and outputting $1$ if we measure $\ket{1}$. To verify that this yields the desired answer, consider the two possible scenarios: \begin{itemize} \item $\Threshold_k (x_1, \ldots, x_N) = 0$. Then, $\sqrt{\HammingWeight{x} / N} \leq \sqrt{k / N}$, which means that $\vert P_{\delta, \eta, \mu}(\sqrt{\HammingWeight{x} / N} ) \vert \leq \eta = 1/8$. So, the probability of measuring the last qubit in state $\ket{1}$ is less than $1/3$. \item $\Threshold_k (x_1, \ldots, x_N) = 1$. Then, $\sqrt{\HammingWeight{x} / N} \geq \sqrt{(k+1) / N}$, which means that $P_{\delta, \eta, \mu}(\sqrt{\HammingWeight{x} / N} ) \geq 1 - \eta = 7/8$. So, the probability of measuring the last qubit in state $\ket{1}$ is greater than $2/3$. \end{itemize} If instead $k > N/2$, the algorithm does not change significantly: denote the logical negation of $x$ by $\neg x$, and note that $\Threshold_k (x_1, \ldots, x_N) = 1 - \Threshold_k (\neg x_1, \ldots, \neg x_N)$. It follows that we just need to evaluate the threshold function on $\neg x$, whose Hamming weight is $\HammingWeight{\neg x} = N - \HammingWeight{x}$. Looking at expression \eqref{eq:threshold_blockencoding}, we see that $U$ already provides a block-encoding of $\sqrt{\HammingWeight{\neg x}/N}$: we just need to replace $\tilde\Pi$ by $\Identity_{2^n} \otimes \dyad{0}$. Everything else follows as before. \paragraph{Interpolation.} The algorithm that we have just presented can be interpolated using the same strategy as in Magano and Mur\c{c}a \cite{Magano2022}. Recall from the theory of QSVTs (Section \ref{sec:QSVT}) that with deeper circuits we can prepare polynomial transformations of higher degree. Conversely, by limiting the circuit depths we are forced to implement a rougher approximation to the target function (in this case, the step function). The idea is to compensate this effect by performing a larger number of measurements.
Concretely, the trade-off between circuit depth and repetitions of the circuit can be controlled by the parameter $\eta$, which we had previously fixed to be $\bigO(1)$ (\textit{cf.} \eqref{eq:eta_def}). Now we choose \begin{equation} \eta = \bigO\left(2^{-\delta D}\right) \end{equation} in such a way that the circuit depth associated with the transformation by $P_{\delta, \eta, \mu}$ is upper bounded by $D$. If we measure the last qubit of state \eqref{eq:threshold_transformation}, the probability that we see $\ket{1}$ is \begin{align} & \leq \eta^2, \text{ if } \Threshold_k (x) = 0, \text{ or} \\ & \geq (1-\eta)^2, \text{ if } \Threshold_k (x) = 1. \end{align} So, the problem is reduced to distinguishing the bias of a Bernoulli distribution with precision $1 - 2 \eta$. It is well-known that $\Theta(1 / (1-2\eta)^2)$ samples are sufficient (and necessary) to achieve such a precision with bounded-error probability. That is, we prepare and measure state \eqref{eq:threshold_transformation} \begin{equation} \bigO*(\frac{1}{(1-2\eta)^2}) = \bigO*(\frac{1}{(\delta D)^2}) \end{equation} times. The total number of queries to $O$ is \begin{equation} \bigO*( \frac{1}{(1-2\eta)^2} \times \frac{1}{\delta} \log\left(\frac{1}{\eta} \right) ) = \bigO*(\frac{1}{\delta^2 D}). \end{equation} Inserting the definition of $\delta$ \eqref{eq:par_delta_def}, we conclude that \begin{equation} \label{eq:thresholdcomplexity_interpolation} Q(\Threshold_k; D) = \bigO*(\frac{k N}{D} + \sqrt{k N}). \end{equation} \paragraph{Parallelization.} The approach of \cite{Magano2022} was originally developed in the context of phase estimation. In phase estimation the parameter $\phi$ to be estimated is accessed via a black-box oracle that changes the phase of a particular state by an angle proportional to $\phi$. In that case, the interpolation is likely optimal. However, the threshold problem has more structure than phase estimation. Indeed, we can choose to query only a subset of the input variables, in which case the block-encoding holds information about the Hamming weight of that subset of input variables, whereas we cannot choose to query a ``fractional phase''. It is the parallelization approach that yields the optimal algorithm for evaluating the threshold function in a restricted-depth setting. To show this, we follow a procedure similar to that of Grover and Radhakrishnan \cite{GroverRadhakrishnan2004}. First, we partition the set \(\{1, 2, \ldots, N\}\) uniformly at random into \(p\) disjoint subsets $V_1, \ldots, V_p$ of size $N/p$ (to simplify the notation, we assume that $N/p$ is an integer). Then, for each subset \(V_i\), we prepare the uniform superposition \(\sqrt{p/N} \sum_{j\in V_i} \ket{j}\ket{0}\) and apply to it the query operator \(O\). The resulting state is \begin{equation} \sqrt{\frac{p \HammingWeight{x \Sslash V_i}}{N}} \ket{\phi'_1} \ket{1} + \sqrt{1 - \frac{p \HammingWeight{x \Sslash V_i}}{N}} \ket{\phi'_0} \ket{0} \end{equation} where \(\ket{\phi'_0}\), \(\ket{\phi'_1}\) are normalized states and \(\HammingWeight{x \Sslash V_i} := \vert\{ j \in V_i \SuchThat x_j = 1 \} \vert\). If we run the amplitude estimation algorithm of Brassard \textit{et al.}~\cite{Brassard2002} for $D$ steps we get an estimate of the probability $p\HammingWeight{x \Sslash V_i} / N$ (the squared amplitude of the $\ket{1}$ component) up to precision \begin{equation} \bigO*( \frac{1}{D} \sqrt{\frac{p \HammingWeight{x \Sslash V_i}}{N}}) \label{eq:AAprecision} \end{equation} with a constant probability.
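To make an implicit step explicit: since $\HammingWeight{x \Sslash V_i}$ equals $N/p$ times the estimated quantity, the corresponding additive error on $\HammingWeight{x \Sslash V_i}$ is
\begin{equation*}
\frac{N}{p} \times \bigO*( \frac{1}{D} \sqrt{\frac{p \HammingWeight{x \Sslash V_i}}{N}}) = \bigO*( \frac{1}{D} \sqrt{\frac{N \HammingWeight{x \Sslash V_i}}{p}}),
\end{equation*}
which is the form used in Appendix \ref{sec:threshold_proof} and which, for the choices of $D$ below, reduces to \eqref{eq:xprecision}.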
To lower the probability that the algorithm fails to $1/p$, we repeat the amplitude estimation routine $\bigO(\log p)$ times; this guarantees that, with constant probability, all the amplitude estimations succeed in returning a precision as in \eqref{eq:AAprecision}. We set \begin{equation} D = \begin{cases} \bigO*(\sqrt{\frac{N \log p}{p}}) & \text{if } k \leq p \log p \\ \bigO*(\frac{\sqrt{N k}}{p}) & \text{if } k \geq p \log p. \end{cases} \label{eq:depth_def} \end{equation} Then, for every subset $V_i$, we are estimating $\HammingWeight{x \Sslash V_i}$ with precision \begin{equation} \epsilon_i = \begin{cases} \bigO*(\sqrt{\frac{\HammingWeight{x \Sslash V_i}}{\log p}}) & \text{if } k \leq p \log p \\ \bigO*(\sqrt{\frac{\HammingWeight{x \Sslash V_i}}{k / p}}) & \text{if } k \geq p \log p. \end{cases} \label{eq:xprecision} \end{equation} We estimate $\HammingWeight{x}$ as the sum of our estimates for $\HammingWeight{x \Sslash V_i}$. If it exceeds $k$, we output $1$, and otherwise we output $0$. The actual behaviour of the algorithm depends on how the $1$-input variables are distributed among the subsets $V_1, \ldots, V_p$. In the worst-case scenario, all the ones are concentrated in a single bin. However, this scenario is extremely unlikely. Raab and Steger's ``balls into bins'' theorem \cite{BallsIntoBins} states that, with probability greater than $2/3$, \begin{equation} \label{eq:ballsintobins} \max_i \HammingWeight{x \Sslash V_i} = \begin{cases} \bigO*(\log p) & \text{if } \HammingWeight{x} \leq p \log p \\ \bigO*(\frac{\HammingWeight{x}}{p}) & \text{if } \HammingWeight{x} \geq p \log p . \end{cases} \end{equation} Using this result, we show in Appendix \ref{sec:threshold_proof} that there is a choice for the constant factors in \eqref{eq:depth_def} that guarantees that our estimate for $\HammingWeight{x}$ is larger than $k$ if $\Threshold_k(x) = 1$ and smaller than or equal to $k$ if $\Threshold_k(x) = 0$. Putting everything together, we conclude that \begin{equation} Q(\Threshold_k; D) = \bigO*(\frac{N}{D} \log^2\left(\frac{N}{D}\right) + \sqrt{Nk} \log k). \end{equation} Comparing with the upper bound that we derived with the interpolation method (equation \eqref{eq:thresholdcomplexity_interpolation}), we see that parallelization offers the best performance. Indeed, for short circuit depths the complexity of the parallelization method is smaller by a factor of $k$ (up to logarithmic factors). \section{Sometimes Interpolation is Better: NAND trees} We now apply the interpolation and parallelization techniques to the problem of evaluating a balanced binary NAND formula. This problem has been widely studied in the literature: Farhi \textit{et al.} \cite{FarhiGoldstoneGutmann2007} proposed a quantum walk algorithm that runs in $\bigO*(N^{1/2})$ time with an unconventional, continuous-time query model. Later, Childs \textit{et al.} \cite{ChildsCleveJordanYongeMallo2009} understood that this algorithm could be translated into the discrete query model (as presented in Section \ref{sec:querymodel}) with just an $\bigO*(N^{o(1)})$ overhead. Finally, Ambainis \textit{et al.} \cite{Ambainis2010} presented an optimal $\bigO*(N^{1/2})$-time algorithm in the conventional query model. We adapt their approach to a restricted-depth setting. Let $\Phi$ be a Boolean function on $N$ inputs $x_1, \ldots, x_N$ expressed with NAND gates. We treat each occurrence of a variable separately, so that $N$ counts variables with multiplicity.
Equivalently, we could be considering a formula expressed in terms of the gate set $\{\text{AND}, \text{OR}, \text{NOT}\}$. The input is accessed via the conventional query operator $O$ as defined in Section \ref{sec:querymodel}. The formula $\Phi$ can be represented by a tree, where the internal nodes are NAND gates acting on their children and the leaves hold the input variables. Here we restrict our attention to formulas that are represented by perfectly balanced binary trees. We note that Ambainis \textit{et al.}'s algorithm can be applied to general formulas after a proper rebalancing of the corresponding tree \cite{Bshouty1995,Bonnet1994}. Similarly, our arguments could also be extended to the general case. Ambainis \textit{et al.} \cite{Ambainis2010} prove that (after efficient classical pre-processing) $\Phi(x)$ can be evaluated with bounded-error probability using $\bigO(\sqrt{N})$ queries to $O$. The main idea is to build a weighted graph, whose adjacency matrix, denoted as $H$, has a spectrum that relates to the value of $\Phi(x)$. Then, one simulates a discrete-time quantum walk on this graph. By applying phase estimation to this process for a special starting state, one is able to infer the value of $\Phi(x)$. Starting from the graph construction of Ambainis \textit{et al.} \cite{Ambainis2010}, we present a different, QSVT-based approach to infer the value of $\Phi(x)$, circumventing the quantum walk and phase estimation steps. With the aforementioned principle of trading off lower-degree polynomial approximations for longer statistical sampling, we immediately derive an interpolating algorithm for evaluating general NAND trees. We present a succinct definition of $H$, referring the reader to the original paper \cite{Ambainis2010} for a more detailed explanation. We construct a symmetric weighted graph from the formula's tree, attaching to the root node (call it $r$) a tail of two nodes, $r'$ and $r''$. For each node $v$, let $s_v$ be the number of variables of the subformula rooted at $v$. The weights on the graph are defined in the following manner. If $p$ is the parent of a node $v$, then \begin{equation} \bra{v} H \ket{p} := \left( \frac{ s_v}{s_p} \right)^{1/4}, \end{equation} with two exceptions: \begin{enumerate} \item if $v$ is a leaf reading $1$, then $\bra{v} H \ket{p} := 0$ (effectively removing the edge $(v,p)$ from the graph); \item $\bra{r'} H \ket{r''} := 1 / (\sqrt{2} N^{1/4})$. \end{enumerate} The spectrum of $H$ has the following properties \cite[Theorem~2]{Ambainis2010}: \begin{enumerate} \item if $\Phi(x)=0$, then there is a zero-eigenvalue eigenstate $\ket{g}$ of $H$ with $\vert \bra{r''}\ket{g} \vert \geq 1 / \sqrt{2}$; \label{item:Phi=0} \item if $\Phi(x)=1$, then every eigenstate with support on $\ket{r''}$ has eigenvalue at least $1 / (18 \sqrt{2N})$ in absolute value. \label{item:Phi=1} \end{enumerate} That is, we can evaluate $\Phi$ by determining whether $\ket{r''}$ has a large zero-eigenvalue component. We propose doing this within the QSVT framework. \paragraph{Interpolation.} The first step in our interpolation approach to evaluating NAND trees is to construct a block-encoding of $H$. As $H$ has bounded degree and the weights of its edges are upper bounded by $1$, we can use standard block-encoding techniques for sparse matrices \cite{Chakraborty2018,Lin2022}. Namely, for projectors \begin{equation} \Pi = \tilde\Pi = \dyad{0^m}, \end{equation} with $m = \bigO(\log N)$, there is a unitary $U_H$ that block-encodes $H / 3$ (the normalization by $3$ reflecting the maximum degree of the graph) with $\bigO(1)$ calls to $O$.
By definition, the unitary $U_H$ is such that, for an arbitrary state $\ket{\psi}$, \begin{equation} U_H \ket{0^m} \ket{\psi} = \ket{0^m} \left( \frac{H}{3} \ket{\psi} \right) + \ket{\perp}, \end{equation} where $\ket{\perp}$ is orthogonal to $\ket{0^m}$. We would like to distinguish between the eigenstates of $H/3$ whose eigenvalue is close to zero and those whose eigenvalue is larger than \begin{equation} \label{eq:delta_def} \frac{1}{3} \times \frac{1}{18 \sqrt{2N} } =: \delta \end{equation} in absolute value. We treat this as a QSVT problem, as discussed in Section \ref{sec:QSVT}. Indeed, let $\{\lambda_i, \ket{v_i}\}_i$ be an eigenvalue decomposition of $H/3$ and $\ket{\psi} = \sum_i \alpha_i \ket{v_i}$ be an arbitrary state. From Theorem \ref{thm:qsvt} and Corollary \ref{thm:bumpfunction}, we can perform the transformation \begin{align} \ket{0^m} \ket{\psi} = & \ket{0^m} \left( \sum_i \alpha_i \ket{v_i}\right) \nonumber \\ \rightarrow & \ket{0^m} \left( \sum_i P'_{\delta, \eta, \mu}(\lambda_i)\alpha_i \ket{v_i}\right) + \ket{\perp}, \label{eq:bumpQSVTtransformation} \end{align} where $P'_{\delta, \eta, \mu}$ is an approximation to the window function (as defined in Corollary \ref{thm:bumpfunction}), with $\bigO ( (1/\delta) \log (1 / \eta ) )$ queries to $O$. We now have all the necessary tools to solve the problem. We start by preparing the state $\ket{r''}$ (this does not involve any oracle queries). We then transform $\ket{r''}$ as in \eqref{eq:bumpQSVTtransformation}. We measure the first $m$ qubits (i.e., the block-encoding register) of the resulting state, assigning an outcome ``yes'' if we observe $\ket{0^m}$ and an outcome ``no'' otherwise. From the spectral properties of $H$ we know that \begin{equation} \mathbb{P}[\text{``yes''}] \begin{cases} \geq \frac{(1-\eta)^2}{2}, \text{ if } \Phi(x) = 0 \\ \leq \eta^2, \text{ if } \Phi(x) = 1 \end{cases}. \end{equation} So, we need to determine the bias of a Bernoulli distribution with precision no larger than $(1- \eta)/4$. It is well-known that $\bigO(1 / (1-\eta)^2)$ samples are sufficient (and necessary) to achieve such a precision with bounded-error probability. In summary, we can evaluate $\Phi(x)$ with bounded-error probability by running $\bigO ( (1/\delta) \log (1 / \eta ) )$-deep circuits $\bigO(1 / (1-\eta)^2)$ times, amounting to a total of \begin{equation} \bigO \left( \frac{1}{(1-\eta)^2} \times \frac{1}{\delta} \log\left(\frac{1}{\eta} \right) \right) \label{eq:NANDcomplexity} \end{equation} queries to $O$. We have purposely left $\eta$ as a free parameter in our algorithm. We get the best possible complexity by choosing $\eta = 1 -\Omega(1)$, in which case the algorithm's query complexity is (using definition \eqref{eq:delta_def}) \begin{equation} \bigO\left(\frac{1}{\delta}\right) = \bigO\left( \sqrt{N}\right), \end{equation} recovering the scaling of Ambainis \textit{et al.} \cite{Ambainis2010}. But this choice of $\eta$ requires running circuits of depth also in $\bigO(\sqrt{N})$. Suppose now that we want to limit the circuit depth to some maximum value $D$. We can run the same algorithm, setting this time $\eta$ to be \begin{equation} \eta = \bigO\left(2^{-\delta D}\right). \end{equation} Replacing into expression \eqref{eq:NANDcomplexity}, we find that \begin{equation} Q(\Phi; D) = \bigO\left( \frac{ N}{D} + \sqrt{N}\right). \end{equation} \paragraph{Parallelization.} The problem of evaluating NAND trees is also amenable to parallelization.
The key observation is that, if for any given level of the tree we know the logical value of all the nodes at that level, then we can infer $\Phi(x)$ without performing any more queries to the input. Therefore, we solve the problem if, for every node $v$ at that level, we run the quantum algorithm for evaluating the NAND tree rooted at $v$. Say that we want to limit our circuit depths to $D$. We partition the input variables into $\bigO( N/D^2)$ subsets of $\bigO(D^2)$ variables each. To each subset of variables corresponds a subtree of the total tree. For each such subtree, we evaluate the logical value of the root node with an error probability bounded by $D^2/N$, which we can do with $\bigO(\sqrt{D^2} \log(N/D^2))$ queries to $O$. Since we repeat this for all subtrees, the hybrid query complexity becomes \begin{equation} Q(\Phi; D) = \bigO\left(\frac{N}{D} \log\left(\frac{N}{D}\right)+ \sqrt{N} \right). \end{equation} We find that both the interpolation and parallelization methods can be applied for evaluating balanced binary NAND trees. Although the resulting complexities are close, the parallelization approach comes with an extra $\log(N/D)$ factor. This problem illustrates that there are also situations where interpolation is advantageous over parallelization. \section{Conclusions} In this paper, we suggest two distinct approaches for adapting a quantum algorithm to a restricted-depth setting: parallelization and interpolation. An algorithm is said to be ``parallelizable'' whenever we can split its action into smaller, independent sub-problems; and ``interpolatable'' if the loss of information caused by shortening the circuit depth can be compensated by repeated runs of the shorter circuit. Therefore, informally, these two methods can be understood as either ``breaking up the input'' (for parallelization) or ``breaking up the unitary procedure'' (for interpolation). We argue that Quantum Singular Value Transformations (QSVT) closely relate to the notion of interpolation, rather than parallelization. For QSVTs, a smaller circuit depth corresponds to a polynomial approximation to a target function of lower degree, which needs to be compensated by longer statistical sampling. We apply these approaches to two problems with known quantum speed-ups: the \(k\)-threshold function and perfectly balanced NAND trees. To the best of our knowledge, neither of these problems had been studied in a hybrid, restricted-depth setting. For the \(k\)-threshold function, we show that parallelization offers the best performance by a factor of $\tbigO(k)$ (in terms of query complexity). In contrast, for evaluating perfectly balanced NAND trees the interpolation method is the most efficient, differing by a factor of $\bigO(\log (N/D))$. This way, we demonstrate that no technique (parallelization or interpolation) is strictly better than the other -- each one may be the best option depending on the problem at hand. This shows that, when designing a quantum-classical hybrid algorithm obeying certain (query) depth limitations, both of the proposed techniques can be explored as a strategy for maintaining some of the speedup (over a fully classical approach) of a quantum unrestricted-depth counterpart. 
Furthermore, given the close connection between (depth unrestricted) algorithms formulated in terms of QSVTs and the interpolation method, this implies that, when searching for hybrid quantum-classical algorithms for a particular problem, it may be a good option to start by formulating a (depth unrestricted) QSVT algorithm for the problem, and then seeking to interpolate it. We note that we only offered an example of a problem (perfectly balanced NAND trees) where the interpolation beats parallelization by a logarithmic factor. It would be interesting to find a problem for which the interpolation procedure is polynomially more efficient than the corresponding parallelization, to rule out the possibility that parallelization, whenever applicable, is always optimal up to logarithmic factors. We leave the existence of such a problem as an open question. The definitions we have provided for the terms ``parallelization'' and ``interpolation'' are not strictly rigorous; they should be seen as general strategies for restricted-depth computing, rather than formal notions. This does not preclude that in some situations the two strategies may be simultaneously at play, or that these classifications may not apply. As such, we expect there is room for discussion on what other classes of methods may exist besides the ones discussed here, and for other systematic approaches to hybridization. \begin{acknowledgments} We thank ~R.~de~Wolf for his comments on quantum query lower bounds for the problem of quantum counting, in the context of calculating the threshold function, and N.~Stamatopoulos for his comments regarding the proof of the parallelization method for the threshold function. We also thank the support from FCT -- Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (Portugal), namely through project UIDB/04540/2020, as well as from projects QuantHEP and HQCC supported by the EU QuantERA ERA-NET Cofund in Quantum Technologies and by FCT (QuantERA/0001/2019 and QuantERA/004/2021, respectively), and from the EU Horizon Europe Quantum Flagship project EuRyQa (101070144). DM and MM acknowledge the support from FCT through scholarships 2020.04677.BD and 2021.05528.BD, respectively. \end{acknowledgments} \appendix \section{Threshold function -- proof of parallelization method\label{sec:threshold_proof}} From Brassard \textit{et al.} \cite{Brassard2002}, there is a constant $c$ such that the error for our estimate of $\HammingWeight{x \Sslash V_i}$ is bounded as \begin{equation} \epsilon_i < c \frac{\sqrt{N \HammingWeight{x \Sslash V_i} / p}}{D}. \end{equation} We analyse separately the cases where $\Threshold_k(x) = 0$ and $\Threshold_k(x) = 1$. If $\Threshold_k(x) = 0$, the following possible relations between $p$, $k$, and $\HammingWeight{x}$ need to be considered. \begin{enumerate} \item $p \log p \leq \HammingWeight{x} \leq k$. From the result of Raab and Steger (equation \eqref{eq:ballsintobins}), we know that $\HammingWeight{x \Sslash V_i} = \bigO(\HammingWeight{x} / p)$ for all $i$. So, by our expression for the error \eqref{eq:xprecision}, we see that $\epsilon_i = \bigO(\sqrt{\HammingWeight{x} / k}) = \bigO(1)$. \item $\HammingWeight{x} \leq p \log p \leq k$. Now we know that $\HammingWeight{x \Sslash V_i} = \bigO(\log p)$. So, for all $i$, $\epsilon_i = \bigO(\sqrt{p \log p / k}) = \bigO(1)$. \item $\HammingWeight{x} \leq k \leq p \log p$. Equation \eqref{eq:ballsintobins} ensures that $\HammingWeight{x \Sslash V_i} = \bigO(\log p)$. 
From the expression for the error, we see that $\epsilon_i = \bigO(\sqrt{\log p / \log p}) = \bigO(1)$. \end{enumerate} That is, there is a choice of constants that guarantees that $\epsilon_i < 1/2$ for all $i$ with constant probability. In that case, we estimate each $\HammingWeight{x \Sslash V_i}$ exactly, and so we exactly infer $\HammingWeight{x}$ and consequently the value of $\Threshold_k(x)$. If $\Threshold_k(x) = 1$, the proof is slightly different. Again, we consider three scenarios. \begin{enumerate} \item $p \log p \leq k \leq \HammingWeight{x}$. Equation \eqref{eq:ballsintobins} tells us that $\HammingWeight{x \Sslash V_i} = \bigO(\HammingWeight{x} / p)$. Combining with \eqref{eq:xprecision} we see that there is a (controllable) constant $C$ for which \begin{equation} \epsilon_i < C \sqrt{\frac{\HammingWeight{x}}{k}} \label{eq:gettablebound} \end{equation} for all $\HammingWeight{x}, k$. Unlike before, we cannot guarantee that $\epsilon_i$ is kept below $1/2$ for all $\HammingWeight{x}$. But we can make sure that our estimate for the Hamming weight is always greater than $k$. Let \(X_j\) be the random variables corresponding to the estimations of each $\HammingWeight{x \Sslash V_j}$, and $\sigma_j^2$ the corresponding variances. From the Chebyshev bound, \begin{multline} \Probability\biggl[\bigl\vert{\sum_j X_j - \HammingWeight{x}}\bigr\vert > \HammingWeight{x} - k\biggr] < \abs{\frac{\sum_j \sigma_j}{\HammingWeight{x} - k}}^2 < {} \\ {} \abs{C' \, p\, \frac{\sqrt{\HammingWeight{x}/k}}{\HammingWeight{x}-k}}^2 \end{multline} for some constant $C'$. Thus we can attain, with constant probability, an estimate of $\HammingWeight{x}$ with error within $\HammingWeight{x}-k$ if there exist a constant $C'$ and a value $\HammingWeight{x}^*$ satisfying: \begin{itemize} \item If $\HammingWeight{x}<\HammingWeight{x}^*$, the error in the estimation of each $\HammingWeight{x \Sslash V_j}$ is less than $1/2$, so that the estimate of $\HammingWeight{x}$ is exact; \item If $\HammingWeight{x}>\HammingWeight{x}^*$, then $C' \sqrt{\HammingWeight{x}/k} < (\HammingWeight{x}-k)/p$, bounding the error probability by a constant. \end{itemize} Choosing $\HammingWeight{x}^* = k + p \log p$, one can check that $C = 1/(4C')$ satisfies the conditions above. \item $k \leq p \log p \leq \HammingWeight{x}$. Again, for all $i$, $\HammingWeight{x \Sslash V_i} = \bigO(\HammingWeight{x} / p)$. Combining this with the expression for the error \eqref{eq:xprecision}, we get $\epsilon_i = \bigO(\sqrt{\HammingWeight{x} / (p \log p)})$. The proof follows the same steps as the ``$p \log p \leq k \leq \HammingWeight{x}$'' case. \item $k \leq \HammingWeight{x} \leq p \log p$. From equation \eqref{eq:ballsintobins} we know that $\HammingWeight{x \Sslash V_i} = \bigO(\log p)$ for all $i$. Then, $\epsilon_i = \bigO(\sqrt{\log p / \log p}) = \bigO(1)$. So, in this case we can also ensure that we estimate $\sum_i \HammingWeight{x \Sslash V_i}$ exactly. \end{enumerate} \section{Total Non-Constant Symmetric Boolean Functions \label{sec:symmfunctions}} We have shown above how to interpolate the \(k\)-threshold function based on Quantum Singular Value Transformations. A similar interpolation scheme can actually be applied to the calculation of any total non-constant symmetric Boolean function, as we now show. Furthermore, we show that a similar difference exists between the scaling for this interpolation and the scaling for a parallelization procedure.
We start by reviewing an intermediate claim of Beals \textit{et al.}~\cite{Beals2001}: \begin{lemma}(Part of Theorem 4.10 of Beals \textit{et al.}\ \cite{Beals2001}) \label{lemma:beals_threshold} For a symmetric Boolean function \(f\), given an algorithm that outputs \(\HammingWeight{X}\) if \(\HammingWeight{X} < (N - \Gamma(f))/2\) or \(\HammingWeight{X} > (N + \Gamma(f))/2\), and outputs ``in'' otherwise, with \(Q\) queries to the oracle, there is immediately an algorithm that computes \(f\) with \(Q\) queries to the oracle. \end{lemma} \begin{proof} Let \Algorithm{A} be an algorithm as outlined in the lemma, requiring \(Q\) queries to the oracle. By definition of \(\Gamma(f)\), \(f\) is constant for \(X\) such that \(\HammingWeight{X} \in [(N-\Gamma(f))/2, (N+\Gamma(f))/2]\). Therefore, let \Algorithm{A'} be an algorithm that runs \Algorithm{A}, and then: \begin{itemize} \item If \Algorithm{A} outputs ``in'', \Algorithm{A'} outputs \(f((N-\Gamma(f))/2)\), \item If \Algorithm{A} outputs \(\HammingWeight{X}\), \Algorithm{A'} outputs \(f(\HammingWeight{X})\). \end{itemize} \Algorithm{A'} requires only as many queries as \Algorithm{A}. \end{proof} Now, departing from Beals \textit{et al.}'s proof, we rephrase the construction of an algorithm matching the description of \cref{lemma:beals_threshold} in terms of Quantum Singular Value Transformations. We start with the following lemma of Low and Chuang \cite{Low2017}: \begin{lemma}[\cite{Low2017}] \label{lemma:qsvt_erf} For a given \(k\in\Reals\), \(\delta \in [-1, 1]\) and \(\epsilon\in(0, \bigO(1))\), there exists a real polynomial \(p(x)\) satisfying \begin{gather*} \abs{p(x)} \leq 1 \Where x \in [-1, 1], \text{ and} \\ \abs{p(x) - \erf(k (x - \delta))} \leq \epsilon \Where x \in [-1, 1], \end{gather*} with polynomial degree \begin{equation}\label{eq:qsvt_erf} \deg(p) = \bigO*(\sqrt{\qty(\log\frac{1}{\epsilon}) \qty(k^2 + \log\frac{1}{\epsilon})}). \end{equation} \end{lemma} From this lemma follows the already mentioned construction for a polynomial approximation to the threshold function, which we restate: \begin{corollary}[\cite{Low2017}] \label{corollary:qsvt_threshold} For a given \(\delta \in [-1, 1]\), \(\epsilon \in (0, \bigO(1))\), \(\eta \in (0, 1/4)\), there exists a real polynomial \(p\) satisfying \begin{align*} \abs{p(x)} \leq 1 & \Where x \in [-1, 1], \\ \abs{p(x) - 1} \leq \eta & \Where x \in [-1, \delta - \epsilon], \\ \abs{p(x)} \leq \eta & \Where x \in [\delta + \epsilon, 1], \end{align*} and with polynomial degree \[ \deg(p) = \bigO*(\frac{1}{\epsilon} \log \frac{1}{\eta}). \] \end{corollary} To make use of these polynomial transformations, we also recall the block encoding of the quantities of interest, which are the same as for the \(k\)-threshold case; for unitary \[ U = \quad\raisebox{1.2em}{ \Qcircuit @C=1.1em @R=.5em { & \qw{{}^{n}/} & \gate{H^{\otimes n}} & \multigate{1}{O_X} & \qw \\ & \qw & \qw & \ghost{O_X} & \qw \\ } } \] and \(\Pi = \dyad*{0^{n+1}}\), we have that \((\Identity_{2^n} \otimes \dyad{1}) U \Pi\) is a block encoding of \(\sqrt{\HammingWeight{X}/N}\), and \((\Identity_{2^n} \otimes \dyad{0}) U \Pi\) is a block encoding of \(\sqrt{(N-\HammingWeight{X})/N}\). Now we first determine if the Hamming weight of the input should produce output ``in'' or not, which is, essentially, the task of calculating the \(k\)-threshold function with \(k = (N - \Gamma(f))/2\) and with \(k' = (N + \Gamma(f))/2\).
As stated in the body text, the case of threshold \(k'\) can be reduced to the case of threshold \(k\) calculated for the complement of the Hamming weight \(N - \HammingWeight{X}\), and so we conclude that this step requires \(\bigO(2\sqrt{N(N-\Gamma(f))})=\bigO(\sqrt{N(N-\Gamma(f))})\) applications of the oracle. In the event that we find \(\HammingWeight{X}\) to be smaller than \((N-\Gamma(f))/2\), or larger than \((N+\Gamma(f))/2\), it remains to output the Hamming weight of \(X\), or of \(\neg{X}\), respectively. We consider henceforth the case of \(\HammingWeight{X} < (N-\Gamma(f))/2\), from which the generalization is straightforward. Note first that performing bisections on \(\HammingWeight{X}\) for \(\HammingWeight{X} \in [0, (N-\Gamma(f))/2]\) corresponds to performing successive threshold operations for thresholds \(k' < (N-\Gamma(f))/2\), so, by binary search, we may find \(\HammingWeight{X}\) with \(\bigO[\sqrt{N(N-\Gamma(f))} \log^2(N-\Gamma(f))]\) applications of the oracle, where one of the \(\log\) factors is due to the binary search, and the other to error probability bounding. However, by making direct use of \cref{lemma:qsvt_erf}, the \(\log\) factors can be significantly lowered. Consider the following lemma: \begin{lemma} \label{lemma:cut_in_half} Given the block encoding of a value \(z \in [a,b] \subseteq [-1, 1]\), it is possible to determine \([a', b'] \subseteq [a,b]\) such that \(z \in [a', b']\), and \((b' - a') \leq (b - a)/2\), with \begin{equation} D_\text{round} = \bigO*(\frac{1}{b - a}) \end{equation} coherent applications of the oracle, and \begin{equation} T_\text{round} = \bigO*(\frac{1}{b-a} \log \frac{1}{E}) \end{equation} total applications of the oracle, with probability of error at most \(E\). \end{lemma} \begin{proof} Fix \(\eta \in \bigO(1)\), for example, \(\eta = 1/8\). Using QSVT, create a block encoding of \(P(z)\), where \(P\) is the polynomial approximating \(\erf(k(x-\delta))\) up to absolute error \(\epsilon\) (to be determined), with \(k = \frac{2}{b-a} \erf\inv(1-2\eta)\) and \(\delta = \mu = (a+b)/2\). For a choice of \(\sigma\), after \(\bigO(\log(1/E)/\sigma^2)\) samples of this encoding, one obtains an estimate for \(\erf(k(z-\mu))^2\) up to precision \(\sigma + \epsilon\) with error probability \(E\); denote this estimate \(\tilde p\). This estimate implies a new window \([a', b']\) for \(z\) satisfying \begin{multline} b' - a' \leq \frac{1}{k} \big(\erf\inv(2 \sqrt{\tilde p} - 1 + 2(\sigma + \epsilon)) - {} \\ \erf\inv(2\sqrt{\tilde p} - 1 - 2(\sigma + \epsilon)) \big) \end{multline} which in turn satisfies, with our choice of \(k\), \begin{multline} \frac{b'-a'}{b-a} \leq \frac{2}{\erf\inv(1-2\eta)} (\sigma+\epsilon) \times {} \\ {} \times \max_{y\in[-2,2]} (\erf\inv)' (2\sqrt{\tilde{p}}-1 + y(\sigma+\epsilon)). \end{multline} Demanding that \begin{equation} \label{eq:constraint} \sigma + \epsilon \leq \eta/4, \end{equation} we have \begin{equation} \frac{b'-a'}{b-a} \leq \frac{\eta}{4} \frac{\sqrt{\pi}}{\erf\inv(1-2\eta)} e^{(\erf\inv(1-\eta))^2} \end{equation} which, for \(\eta = 1/8\), has a right-hand side less than one half. 
Since \(\eta \in \bigO(1)\), we may choose \(\sigma \in \bigO(1)\) and \(\epsilon \in \bigO(1)\) satisfying the constraint \eqref{eq:constraint}, and it thus follows that this procedure requires \begin{equation} M_\text{round} = \bigO*(\frac{1}{\sigma^2} \log\frac{1}{E}) = \bigO*(\log\frac{1}{E}) \end{equation} measurements of a circuit encoding a polynomial transformation of degree (cf.\ equation \eqref{eq:qsvt_erf}) \begin{equation} D_\text{round} = \bigO*(k) = \bigO*(\frac{1}{b-a}) \end{equation} or, equivalently, the same number of coherent queries. The total number of queries is therefore \begin{equation} T_\text{round} = D_\text{round} \cdot M_\text{round} = \bigO*(\frac{\log E\inv}{b-a}) \end{equation} as claimed. \end{proof} By repeating the procedure given in the lemma above, we may reduce the window for \(\sqrt{\HammingWeight{X}/N}\) until this value is unambiguous. This requires a final window of size \(\Delta = \frac{1}{2}(\sqrt{(N-\Gamma(f))/N}-\sqrt{(N-\Gamma(f)-1)/N})\), which in turn requires \(\log(\sqrt{(N-\Gamma(f))/N}/\Delta) = \bigO[\log\{\sqrt{N}(N - \Gamma(f))\}]\) rounds of application of the lemma. Since we wish the overall procedure to fail with probability at most \(1/3\), it suffices that each round fails with probability at most \(1/(3\log[\sqrt{N}(N-\Gamma(f))])\). Therefore, overall, this procedure requires \begin{equation} D = \bigO*(\sqrt{N(N - \Gamma(f))}) \end{equation} maximum coherent oracle calls, and a total number of oracle calls \begin{equation} T = \bigO*(\sqrt{N(N - \Gamma(f))} \log\log[\sqrt{N}(N-\Gamma(f))]). \end{equation} Performing the interpolation now is straightforward: instead of demanding that the procedure from lemma \ref{lemma:cut_in_half} be repeated until the window is so small that the Hamming weight of \(X\) is unambiguous, we instead choose a final window size \(\Delta'\) that respects the given coherent query limit. After this limit has been reached, we ``switch'' to statistical sampling until the final window size for the value of \(\sqrt{\HammingWeight{X}/N}\) is \(\Delta\). This procedure therefore is split into two steps; following an analysis analogous to the one for the unbounded case, we conclude that for some choice of \(\Delta'\), the first phase requires maximum coherent query depth \begin{equation} D_\text{first} = \bigO*(\frac{1}{\Delta'}) \end{equation} and total query count \begin{equation} T_\text{first} = \bigO*(\frac{\log E\inv}{\Delta'}). \end{equation} Using the fact that \((\erf\inv)'(x) \geq \sqrt{\pi}/2\), one may then conclude that the second phase requires, correspondingly, \begin{gather} D_\text{second} = \bigO*(\frac{1}{\Delta'} \sqrt{\log(C\frac{\Delta'}{\Delta})}) \\ T_\text{second} = \bigO*(\frac{\Delta'}{\Delta^2} \sqrt{\log(C\frac{\Delta'}{\Delta})}\log E\inv) \end{gather} where, again, \(E\) is the error probability, and \(C\) is a constant in \(\bigO(1)\). 
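For concreteness, the query bookkeeping behind this two-phase procedure can be illustrated with a short classical sketch. The listing below is purely illustrative and makes assumptions beyond the text: each round of lemma \ref{lemma:cut_in_half} is idealized as a primitive that halves the current window at a coherent depth of order one over the window width, and all constants and \(\log(1/E)\) factors are suppressed.
\begin{verbatim}
# Toy bookkeeping (not an implementation of the quantum subroutines):
# estimate a value known to final precision Delta, with interpolation
# parameter alpha in [0, 1]; Delta' = Delta^(1 - alpha) is the hand-over
# window between the coherent phase and the sampling phase.
import math

def interpolated_costs(Delta, alpha):
    Delta_prime = Delta ** (1 - alpha)
    width, max_depth, total = 1.0, 0, 0
    # Phase 1: coherent window halving down to Delta'.
    while width > Delta_prime:
        depth = math.ceil(1 / width)      # D_round ~ 1 / (b - a)
        max_depth = max(max_depth, depth)
        total += depth                    # T_round, with log(1/E) ~ 1
        width /= 2
    # Phase 2: statistical sampling at fixed depth ~ 1 / Delta'.
    depth = math.ceil(1 / Delta_prime)
    samples = math.ceil((Delta_prime / Delta) ** 2)
    max_depth = max(max_depth, depth)
    total += depth * samples
    return max_depth, total

for alpha in (0.0, 0.5, 1.0):
    print(alpha, interpolated_costs(1e-4, alpha))
\end{verbatim}
Running the sketch for a few values of \(\alpha\) reproduces, up to constants and logarithmic factors, the trade-off \(D \sim \Delta^{\alpha - 1}\) and \(T \sim \Delta^{-(1+\alpha)}\) derived below.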
Choosing this intermediate window size \(\Delta'\) to be \(\Delta^{1-\alpha}\), for \(\alpha \in [0,1]\), we recover complexities analogous to those verified for \(\alpha\)-Quantum Phase Estimation \cite{GiurgicaTiron2022,Magano2022}: \begin{multline} D(\alpha) = D_\text{first} + D_\text{second} = {} \\ {} = \bigO*(\Delta^{\alpha-1} \sqrt{\log(C\Delta^{-\alpha})}) \end{multline} \begin{multline} T(\alpha) = T_\text{first} + T_\text{second} = {}\\ {}=\bigO*(\Delta^{-(1+\alpha)} \sqrt{\log(C\Delta^{-\alpha})} \log E\inv) \end{multline} Using again the fact that \(\Delta\inv = \bigO(\sqrt{N(N-\Gamma(f))})\), and with considerations as to the error probability identical to before, we finally have \begin{gather} D(\alpha) = \tbigO*({[ N(N-\Gamma(f)) ]}^{(1-\alpha)/2}) \\ T(\alpha) = \tbigO*({[N(N-\Gamma(f))]}^{(1+\alpha)/2}). \end{gather} \end{document}
\begin{document} \vglue -2mm \title{A shifted convolution sum for $GL(3)\times GL(2)$} \author{Ping Xi} \address{Department of Mathematics, Xi'an Jiaotong University, Xi'an 710049, P. R. China} \email{[email protected]} \subjclass[2010]{11F30, 11M41, 11L07, 11T23} \keywords{shifted convolution sum, Fourier coefficients of cusp forms, exponential sums} \begin{abstract} In this paper, we estimate the shifted convolution sum \[\sum_{n\geqslant1}\lambda_1(1,n)\lambda_2(n+h)V\Big(\frac{n}{X}\Big),\] where $V$ is a smooth function with support in $[1,2]$, $1\leqslant|h|\leqslant X$, $\lambda_1(1,n)$ and $\lambda_2(n)$ are the $n$-th Fourier coefficients of $SL(3,\mathbf{Z})$ and $SL(2,\mathbf{Z})$ Hecke--Maass cusp forms, respectively. We prove an upper bound $O(X^{\frac{21}{22}+\varepsilon})$, improving upon a recent result of Munshi. \end{abstract} \maketitle \section{Introduction} \label{sec:Introduction} Given two arithmetic sequences $\alpha$ and $\beta$, one usually encounters in analytic number theory the shifted convolution sum \begin{align*} \sum_{n\leqslant X}\alpha(n)\beta(n+h) \end{align*} with $h\neq0$. To indicate the status of this problem, we would like to recall the following special cases that have been studied in the literature: \begin{itemize} \item $(\alpha,\beta)=(\Lambda,\Lambda)$: this is related to the twin prime conjecture ($h=2$) \item $(\alpha,\beta)=(\mu,\mu)$: this is related to the Chowla conjecture \item $(\alpha,\beta)=(\mu^2,\mu^2)$: this counts pairs of consecutive squarefree numbers, a problem initiated by Heath-Brown $(h=1)$ \item $(\alpha,\beta)=(\Lambda,\tau)$: this is the focus of the classical Titchmarsh divisor problem \item $(\alpha,\beta)=(\tau,\tau)$: this is known as the additive divisor problem, related to the fourth moment of the Riemann zeta function; one can also consider general cases with $\tau$ replaced by $\tau_k$ \item $(\alpha,\beta)=(\lambda_f,\lambda_g)$: this is the so-called shifted convolution problem for $GL(2)\times GL(2)$ as a cuspidal analogue of the additive divisor problem, which is quite important in the study of $GL(2)$ $L$-functions \end{itemize} Here $\Lambda,\mu,\tau$ denote the von Mangoldt, M\"obius and divisor functions, respectively; and $\lambda_f,\lambda_g$ denote the Fourier coefficients of $GL(2)$ cusp forms $f,g$. More details and relevant developments can be found in \cite{Bl04,Bl05,BHM07,BFI86,DFI94,Ha03,HM06,HB84,Ho09,KMV02,LS03,MRT15,Mic04,Sa01,Se65} for instance. In this paper, we consider the following smoothed version of the shifted convolution sum \begin{align*} \mathscr{D}_h(X)=\sum_{n\geqslant1}\lambda_1(1,n)\lambda_2(n+h)V\Big(\frac{n}{X}\Big), \end{align*} where $V$ is a fixed smooth function with compact support in $[1,2]$ and \begin{itemize} \item $\lambda_1(1,n)$ is the $n$-th Fourier coefficient of an $SL(3,\mathbf{Z})$ Hecke--Maass cusp form $\pi_1$, \item $\lambda_2(n)$ is the $n$-th Fourier coefficient of an $SL(2,\mathbf{Z})$ Hecke--Maass or Hecke holomorphic cusp form $\pi_2$. \end{itemize} By Cauchy's inequality and Rankin--Selberg theory (see Lemma \ref{lm:second-GL3} below), one has \begin{align*} \mathscr{D}_h(X)\ll X^{1+\varepsilon}, \end{align*} where the implied constant is allowed to depend on $\pi_1,\pi_2$ and of course on $\varepsilon.$ The interest of studying $\mathscr{D}_h(X)$ lies in capturing the non-correlation between the $GL(3)$ and $GL(2)$ objects by obtaining a power saving in $X$. 
Munshi \cite{Mu13a} was the first to beat the above (trivial) bound, saving an exponent $\frac{1}{26}$ in $X$ (after a suitable correction of his estimates for exponential sums in Lemma 11). The main ingredient in \cite{Mu13a} is a variant of the circle method due to Jutila \cite{Ju92,Ju97}, where the moduli can be chosen at one's demand. The innovation of Munshi is to specialize the moduli to be the product of two primes of certain sizes, so that he can utilize the bilinear structures in the estimates for the resulting exponential sums. Following Munshi's approach, we prove the following stronger power saving by exploiting the bilinear structure more efficiently in the circle method. \begin{theorem}\label{thm:individual} For $1\leqslant|h|\leqslant X,$ we have \begin{align*} \mathscr{D}_h(X)\ll X^{\frac{21}{22}+\varepsilon}, \end{align*} where the implied constant depends on $\varepsilon,\pi_1$ and $\pi_2.$ \end{theorem} As noted in \cite{Mu13a}, the dependence of the implied constant on the conductors of $\pi_1,\pi_2$ can be made explicit. Much earlier than Munshi, Pitt \cite{Pi95} studied a similar shifted convolution sum with respect to the ternary divisor function $\tau_3(n)$ instead of $\lambda_1(1,n)$. One can also refer to \cite{Mu13b}, \cite{Ta15} and \cite{Su17} for recent progress. \subsection*{Notation and convention} As usual, we write $\mathrm{e}(t)=\mathrm{e}^{2\pi it}$. The variable $p$ is reserved for prime numbers. For a function $f$ defined over $\mathbf{Z}/q\mathbf{Z},$ the Fourier transform is defined as \begin{align*} \widehat{f}(y) := \frac{1}{\sqrt{q}}\sum_{a\bmod{q}}f(a)\mathrm{e}\Big(-\frac{ya}{q}\Big). \end{align*} For a function $g\in L^1(\mathbf{R})$, its Fourier transform is defined as \begin{align*} \widehat{g}(y) := \int_\mathbf{R} g(x) \mathrm{e}(-yx)\mathrm{d} x. \end{align*} We use $\varepsilon$ to denote a very small positive number, which might be different at each occurrence; we also write $X^\varepsilon \log X\ll X^\varepsilon.$ The notation $n\sim N$ means $N<n\leqslant2N.$ If $X$ and $Y$ are two quantities depending on $x$, we say that $X=O(Y)$ or $X\ll Y$ if one has $|X|\leqslant CY$ for some fixed $C$, and $X=o(Y)$ if $X/Y$ tends to zero as $x\rightarrow+\infty$. We use $X\llcurlyurly Y$ to denote the estimate $X\ll x^{o(1)}Y$. \section{Background on automorphic forms} \label{sec:Automorphicforms} In this section, we recall some basic concepts and results on $GL(2)$ and $GL(3)$ automorphic forms. We shall first briefly recall some basic facts about $SL(3,\mathbf{Z})$ automorphic forms, and the details can be found in Goldfeld's book \cite{Go06}. Suppose $\pi_1$ is a Maass form of type $(\nu_1,\nu_2)$ for $SL(3,\mathbf{Z})$ which is an eigenfunction of all the Hecke operators with Fourier coefficients $\lambda_1(m_1, m_2)$, normalized so that $\lambda_1(1, 1)=1$. We introduce the Langlands parameters $(\alpha_1, \alpha_2, \alpha_3)$, defined by \[ \alpha_1=-\nu_1-2\nu_2+1,\ \ \ \alpha_2=-\nu_1+\nu_2\ \ \ \text{and}\ \ \ \alpha_3=2\nu_1+\nu_2-1. \] The Ramanujan--Selberg conjecture predicts that $|\text{Re}(\alpha_i)|=0$, and from the work of Jacquet and Shalika we at least know that $|\text{Re}(\alpha_i)|<\frac{1}{2}$. We also recall the following (inverse) Hecke multiplicativity \begin{align}\label{eq:Heckemultiplicativity} \lambda_1(m_1,m_2)=\sum_{d|(m_1,m_2)}\mu(d)\lambda_1\Big(\frac{m_1}{d},1\Big)\lambda_1\Big(1,\frac{m_2}{d}\Big). 
\end{align} Let $g$ be a compactly supported function on $\mathbf{R}^+$, and let $$\widetilde{g}(s)=\int_0^\infty g(x)x^{s-1}\mathrm{d} x$$ be the Mellin transform. For $\sigma>-1+\max\{-\text{Re}(\alpha_1),-\text{Re}(\alpha_2),-\text{Re}(\alpha_3)\}$ and $\ell=0,1$ define \[G_{\ell}(y)=\frac{1}{2\pi i}\int_{(\sigma)}\frac{\widetilde g(-s)}{(\pi^3 y)^{s}}\prod_{j=1}^3\frac{\Gamma(\frac{1+s+\alpha_j+\ell}{2})}{\Gamma(\frac{-s-\alpha_j+\ell}{2})}\mathrm{d} s\] and set \begin{align} \label{eq:Gpm} G_\pm(y)=\frac{1}{2\pi^{3/2}}\left(G_{0}(y)\mp iG_{1}(y)\right). \end{align} We are ready to state the following Voronoi summation formula due to Miller--Schmid \cite{MS06} and Goldfeld--Li \cite{GL06}. \begin{lemma}\label{lm:Voronoi3} Let $g$ be a compactly supported smooth function on $\mathbf{R}^+.$ For $(a,q)=1,$ we have \begin{align}\label{eq:Voronoi3} \sum_{m\geqslant1} \lambda_1(1,m)\mathrm{e}\Big(\frac{am}{q}\Big)g(m)=&q\sum_{\pm}\sum_{m_1|q}\sum_{m_2\geqslant1} \frac{\lambda_1(m_2,m_1)}{m_1m_2}S(\bar a,\pm m_2; q/m_1)G_\pm\Big(\frac{m_1^2m_2}{q^3}\Big), \end{align} where $\bar{a}$ denotes the multiplicative inverse of $a\bmod{q}$ and \begin{align}\label{eq:Klsum} S(m,n;c)=\sideset{}{^*}\sum_{x\bmod c}\mathrm{e}\Big(\frac{mx+n\overline{x}}{c}\Big)\end{align} denotes the classical Kloosterman sum. \end{lemma} Although the Ramanujan--Selberg conjecture has not yet been proved for $\pi_1$, we have the following alternative estimates; see \cite[Remark 12.1.8]{Go06} or \cite[Theorem 2]{Mo02}. \begin{lemma}\label{lm:second-GL3} We have $$ \sum_{n\leq N}|\lambda_1(n,1)|^2\ll N^{1+\varepsilon} $$ and $$ \sum_{n\leq N}|\lambda_1(1,n)|^2\ll N^{1+\varepsilon}, $$ where the implied constants depend on the form $\pi_1$ and $\varepsilon$. \end{lemma} We now turn to $SL(2,\mathbf{Z})$. For the sake of exposition we only present the case of Maass forms, and the case of holomorphic forms is similar or even simpler. Furthermore, for technical simplicity, we restrict to the case of full level. Let $\pi_2$ be a Maass cusp form with Laplace eigenvalue $\frac{1}{4}+t^2\geq 0$, and with Fourier expansion $$ \sqrt{y}\sum_{n\neq 0}\lambda_2(n)K_{it}(2\pi|n|y)\mathrm{e}(nx). $$ We will use the following Voronoi type summation formula (see Meurman \cite{Me88}). \begin{lemma}\label{lm:Voronoi2} Let $h$ be a compactly supported smooth function on $\mathbf{R}^+$. For $(a,q)=1,$ we have \begin{align} \label{eq:Voronoi2} \sum_{n=1}^\infty \lambda_2(n)\mathrm{e}\Big(\frac{an}{q}\Big)h(n)=\frac{1}{q}\sum_{\pm}\sum_{n=1}^\infty \lambda_2(\mp n)\mathrm{e}\Big(\pm\frac{\overline{a}n}{q}\Big)H^{\pm}\Big(\frac{n}{q^2}\Big) \end{align} where $\bar{a}$ is the multiplicative inverse of $a\bmod{q}$, and \begin{align} H^-(y)=&\frac{-\pi}{\cosh(\pi t)}\int_\mathbf{R} h(x)\{Y_{2it}+Y_{-2it}\}\left(4\pi\sqrt{xy}\right)\mathrm{d} x\label{eq:H-}\\ H^+(y)=&4\cosh(\pi t)\int_\mathbf{R} h(x)K_{2it}\left(4\pi\sqrt{xy}\right)\mathrm{d} x.\label{eq:H+} \end{align} \end{lemma} \begin{remark} If $g$ is supported in $[X,2X]$, satisfying $x^jg^{(j)}(x)\ll_j 1$, then the integral transform $G_\pm(y)$ satisfies \begin{align} \label{eq:Gtransform-bound} G_{\pm}(y)\ll \sqrt{yX}(1+yX)^{-A} \end{align} for any fixed $A>0$ (see \cite[Remark 1]{Mu13a} for comments). 
Therefore, the sums on the right hand side of \eqref{eq:Voronoi3} are essentially supported on $m_1^2m_2\ll q^3(qX)^{\varepsilon}/X$ (where the implied constant depends on $\pi_1$ and $\varepsilon$), and the contribution from the terms with $m_1^2m_2\gg q^3(qX)^{\varepsilon}/X$ is negligibly small, say $O((qX)^{-A})$ for any $A>0$. If $h$ is supported in $[Y,2Y]$, satisfying $y^jh^{(j)}(y)\ll_j 1$, one has \begin{align}\label{eq:Htransform-bound} H^{\pm}(y)\ll Y(1+|y|Y)^{-A} \end{align} for any fixed $A>0$. Therefore, the sums on the right hand side of \eqref{eq:Voronoi2} are essentially supported on $n\ll q^2(qY)^{\varepsilon}/Y$ (where the implied constant depends on $\pi_2$ and $\varepsilon$) and the contribution from the terms with $n\gg q^2(qY)^{\varepsilon}/Y$ is also negligibly small. \end{remark} The following lemma characterizes the non-correlation of Fourier coefficients with additive characters. \begin{lemma}\label{lm:Wilton-Miller} Uniformly in $\alpha\in[0,1],$ we have \begin{align} \label{eq:Miller} \sum_{n\leqslant X}\lambda_1(1,n)\mathrm{e}(\alpha n)\ll X^{\frac{3}{4}+\varepsilon} \end{align} and \begin{align} \label{eq:Wilton} \sum_{n\leqslant X}\lambda_2(n)\mathrm{e}(\alpha n)\ll X^{\frac{1}{2}+\varepsilon}, \end{align} where the implied constants depend on $\varepsilon$ and the form. \end{lemma} The inequality \eqref{eq:Wilton} is classical in the case of holomorphic forms of full level due to Wilton \cite{Wi29}; the case of Maass forms can be proved in a similar way, which can be found, for instance, in \cite[Proposition 4]{HM06}. The inequality \eqref{eq:Miller} is due to Miller \cite{Mil06} in a slightly more general setting. \section{Trace functions and exponential sums} \label{sec:trace-expsums} This section is devoted to introducing the terminology and concepts of trace functions over finite fields, as well as estimates for certain averages of them. More precisely, we shall state the arithmetic exponent pairs for {\it composite} trace functions developed in \cite{WX16}, which will be employed in the later estimates for certain exponential sums. \subsection{Trace functions} Let $p$ be a prime and $\ell\neq p$ an auxiliary prime, and fix an isomorphism $\iota : \overline{\mathbf{Q}}_\ell\rightarrow\mathbf{C}$. Let $\mathcal{F}$ be an $\ell$-adic middle-extension sheaf on $\mathbf{A}^1_{\mathbf{F}_p}$, which is pure of weight zero and of rank $\mathrm{rank}(\mathcal{F}).$ The trace function associated to $\mathcal{F}$ is defined to be \begin{align*} K(x) := \iota((\mathrm{tr}\,\mathcal{F})(\mathbf{F}_p, x))\end{align*} for $x\in\mathbf{F}_p$, following the manner of Katz \cite[Section 7.3.7]{Ka88}. The (analytic) conductor of $\mathcal{F}$, which is also called the conductor of $K$, is defined to be \begin{align*} \mathfrak{c}(\mathcal{F}) := \mathrm{rank}(\mathcal{F}) + \sum_{x\in S(\mathcal{F})} (1+\mathrm{Swan}_x(\mathcal{F})), \end{align*} where $S(\mathcal{F})\subset\mathbf{P}^1(\overline{\mathbf{F}}_p)$ denotes the $($finite$)$ set of singularities of $\mathcal{F}$, and $\mathrm{Swan}_x(\mathcal{F})$ $(\geq 0)$ denotes the Swan conductor of $\mathcal{F}$ at $x$ $($see {\rm \cite{Ka80}}$).$ Let $q\geqslant3$ be a squarefree number. For each prime factor $p$ of $q$, we may introduce an $\ell$-adic middle-extension sheaf $\mathcal{F}_p$ on $\mathbf{A}^1_{\mathbf{F}_p}$, as well as its trace function $K_p(\cdot)$. 
The {\it composite} trace function $K=K_q(\cdot)$ is then defined by the product \begin{align}\label{eq:def-Kq} K_q(n)=\prod_{p\mid q}K_p(n),\end{align} and conductor of $K$ can be defined in terms of the conductor of each $\mathcal{F}_p$. There are many typical examples of trace functions and regarding the applications in this paper, we only focus the one example: Kloosterman sums given by \eqref{eq:Klsum}. However, for the sake of geometric discussions, we will focus on the normalized sum \begin{align*}\mathrm{Kl}(n,q)=\frac{S(n,1;q)}{\sqrt{q}}\end{align*} for squarefree $q\geqslantslantqslant3$, which is a composite trace function mod $q$. Recall the classical Weil bound \begin{align*}|\mathrm{Kl}(n,q)|\leqslantslantqslant\tau(q).\end{align*} More deeply, according to Deligne, there exists an $\ell$-adic middle-extension sheaf $\mathcal{K}\ell$ modulo $p$, called a Kloosterman sheaf with the corresponding trace function \begin{align*}K_{\mathcal{K}\ell}(x)=\mathrm{Kl}(x,p),\ \ x\in\mathbf{F}_p^\times.\end{align*} Such a sheaf was constructed by Deligne \cite{De80}. According to him, $\mathcal{K}\ell$ is geometrically irreducible, of rank $2$, and with conductor bounded by $5$. \subsection{$\ell$-adic transforms and conductors} For a non-trivial additive character $\psi$ and a function $f: \mathbf{F}_p\rightarrow\mathbf{C}$, we define the Fourier transform $\mathrm{FT}_\psi(f):\mathbf{F}_p\rightarrow\mathbf{C}$ by \begin{align*}\mathrm{FT}_\psi(f)(t)=\frac{-1}{\sqrt{p}}\sum_{x\in\mathbf{F}_p}f(x)\psi(tx)\end{align*} for $t\in\mathbf{F}_p$. According to Deligne \cite[3.4.1]{De80}, a middle-extension sheaf modulo $p$ of weight 0 is geometrically a direct sum of irreducible sheaves over $\mathbf{F}_p$. Hence one can define a Fourier sheaf modulo $p$ to be one where no such geometrically irreducible component is isomorphic to an Artin--Schreier sheaf $\mathcal{L}_{\psi(aX)}$ for some $a\in\overline{\mathbf{F}}_p.$ Such local Fourier transforms were studied in depth by Laumon \cite{La80}, Brylinski and Katz \cite{Ka80,Ka88}, and shown to satisfy the following properties (many of which are, intuitively, analogues of classical properties of the Fourier transform): \begin{lemma}\label{lm:fouriertransform} Let $\psi$ be a non-trivial additive character of $\mathbf{F}_p$ and $\mathcal{F}$ a Fourier sheaf on $\mathbf{A}_{\mathbf{F}_p}^1.$ Then there exists an $\ell$-adic sheaf $$\mathcal{G}=\mathrm{FT}_\psi(\mathcal{F})$$ called the Fourier transform of $\mathcal{F}$, which is also an $\ell$-adic Fourier sheaf, with the property that $$K_{\mathrm{FT}_\psi(\mathcal{F})}(y)=\mathrm{FT}_\psi(K_\mathcal{F})(y)=\frac{1}{\sqrt{p}}\sum_{x\in\mathbf{F}_p}K_\mathcal{F}(x)\psi(yx).$$ Furthermore, we have $(1)$ The sheaf $\mathcal{G}$ is geometrically irreducible if and only if $\mathcal{F}$ is; $(2)$ The Fourier transform is involutive, in the sense that we have a canonical arithmetic isomorphism $$\mathrm{FT}_\psi(\mathcal{G})\simeq[\times(-1)]^*\mathcal{F},$$ where $[\times(-1)]^*$ denotes the pull-back by the map $x\mapsto-x;$ $(3)$ We have \begin{align}\label{eq:cond-fouriertranform} \mathfrak{c}(\mathrm{FT}_\psi(\mathcal{F}))\leqslantslantqslant10\mathfrak{c}(\mathcal{F})^2.\end{align} \end{lemma} \proof The last claim was proved by Fouvry, Kowalski and Michel \cite{FKM15} using the theory of local Fourier transforms developed by Laumon \cite{La80}, and the others have been established for instance in \cite[Theorem 8.4.1]{Ka88}. 
\endproof The inequality (\ref{eq:cond-fouriertranform}) is essential in analytic applications, since it implies that if $p$ varies but $\mathcal{F}$ has a bounded conductor, so do the Fourier transforms. \subsection{Arithmetic exponent pairs for trace functions} Let $q$ be a squarefree number and $K$ defined by \eqref{eq:def-Kq}. For a given interval $I$, we consider the following average of $K$ over $I$: \begin{align*} \mathfrak{S}(K_q,W_\delta)=\sum_{n\in I}K_q(n)W_\delta(n), \end{align*} where $W_\delta$ is an arbitrary function defined over $\mathbf{Z}/\delta\mathbf{Z}$ satisfying $\|W_\delta\|_\infty\leqslantslantqslant1$. Roughly speaking, the P\'olya--Vinogradov bound is non-trivial for $|I|>q^{\frac{1}{2}+\varepsilon}$ (at least for $\delta=1$). The situation is much easier if $q$ allows certain factorizations. In \cite{WX16}, we developed the method of arithmetic exponent pairs for averages of composite trace functions that may go far beyond the P\'olya--Vinogradov bound, as long as the factorization of $q$ is good enough. Such observation can be at least dated back to Heath-Brown and his proof of Weyl-type subconvex bound for Dirichlet $L$-functions to well-factorable moduli. Recall that a middle-extension sheaf $\mathcal{F}_p$ on $\mathbf{A}_{\mathbf{F}_p}^1$, which is pointwise pure of weight $0$ is said to be $d$-amiable if no geometrically irreducible component of $\mathcal{F}_p$ is geometrically isomorphic to an Artin--Schreier sheaf of the form $\mathcal{L}_{\psi(P)},$ where $P\in\mathbf{F}_p[X]$ is a polynomial of degree $\leqslantslantqslant d.$ In such case, we also say the associated trace function $K_p$ is $d$-amiable. The composite trace function $K_q$ is called to be $d$-amiable if $K_p$ is $d$-amiable for each $p\mid q.$ In addition, a sheaf $($or its associated trace function$)$ is said to be $\infty$-amiable if it is amiable for any fixed $d\geqslantslantqslant1.$ We are ready to state the arithmetic exponent pairs developed in \cite{WX16}. \begin{lemma}\label{lm:AEP} Let $\eta>0$ be a sufficiently small number. Suppose $q$ is a squarefree number with no prime factors exceeding $q^\eta$, and $K=K_q(\cdot)$ is an $\infty$-amiable trace function $\bmod q$. For $|I|<q\delta,$ there exists $(\kappa,\lambda,\nu,\mu)$ such that \begin{align}\label{eq:exponentpair} \mathfrak{S}(K_q,W_\delta)\ll q^\varepsilon\Big(\frac{q}{|I|}\Big)^\kappa|I|^\lambda\delta^\nu\|\widehat{W}_\delta\|_\infty^\mu, \end{align} where $\varepsilon>0$ and the implied constant depends only on $\varepsilon$ and the conductor of $\mathcal{F}_p$ for each $p\mid q$. In particular, we may take $(\kappa,\lambda,\nu,\mu)=(\frac12,\frac12,\frac12,1), (\frac{11}{30}, \frac{16}{30}, \frac{1}{6}, 1)$ and $(\frac{2}{18}, \frac{13}{18}, \frac{11}{28}, 0)$. \end{lemma} \begin{remark} We call $(\kappa,\lambda,\nu,\mu)$ satisfying \eqref{eq:exponentpair} to be an arithmetic exponent pair for $(K_q,W_\delta)$. The values of $\nu,\mu$ are usually not too large so that these do not impact the applications. Note that one can take $(\kappa,\lambda,\nu,\mu)=(0,1,0,0)$ trivially and a sequence of exponent pairs can be produced starting from $(0,1,0,0)$ by virtue of $A$- and $B$-processes in the $q$-analogue of the van der Corput method. 
In the following application, however, we only invoke the choice $(\kappa,\lambda,\nu,\mu)=(\frac12,\frac12,\frac12,1).$ Since we do not impose extra conditions on $W_\delta$, it is not a bad choice to utilize the trivial bound $\|\widehat{W}_\delta\|_\infty\leqslant\sqrt{\delta}.$ \end{remark} \subsection{Babies of Kloosterman sums} By virtue of Kloosterman sums, we define two relevant algebraic sums: \begin{itemize} \item For $d\mid c,$ define \begin{align}\label{eq:S(h,n,m;c,d)} \mathscr{S}(h,n,m;c,d)=\sideset{}{^*}\sum_{x\bmod c}S(m,x;d)\mathrm{e}\Big(\frac{h\overline{x}-nx}{c}\Big). \end{align} \item \begin{align*} T(a,b,m;c)&=\frac{1}{c}~\sideset{}{^*}\sum_{x\bmod c}S(\overline{x}+a,-b;c)\mathrm{e}\Big(\frac{-mx}{c}\Big). \end{align*} \end{itemize} For $d\mid c,$ put $\ell=c/d$. If $(d,\ell)=1,$ from the Chinese remainder theorem it follows that \begin{align*} \mathscr{S}(h,n,m;c,d)=S(h,-n\overline{d}^2;\ell)\sideset{}{^*}\sum_{x\bmod d}S(m,x;d)\mathrm{e}\Big(\frac{\overline{\ell}(h\overline{x}-nx)}{d}\Big). \end{align*} Furthermore, opening the Kloosterman sums and rearranging the summations, we may conclude the following lemma. \begin{lemma} Let $c=d\ell$ with $(d,\ell)=1$. Then we have \begin{align*} \mathscr{S}(h,n,m;c,d)=d\cdot S(h,-n\overline{d}^2;\ell)T(n\overline{\ell},h\overline{\ell},m;d).\end{align*} \end{lemma} The Chinese remainder theorem also yields the following twisted multiplicativity of $T(a,b,m;c).$ \begin{lemma} For $c=c_1c_2$ with $(c_1,c_2)=1,$ we have \begin{align*} T(a,b,m;c_1c_2)&=T(a,b\overline{c}_2^2,m\overline{c}_2;c_1)T(a,b\overline{c}_1^2,m\overline{c}_1;c_2). \end{align*} \end{lemma} For the sake of subsequent applications of arithmetic exponent pairs, we would like to determine when $T(a_1,b_1,y;p)$ and $T(a_2,b_2,y;p)$, as functions in $y$, do not correlate for given tuples $(a_1,b_1)$ and $(a_2,b_2).$ Note that $y\mapsto T(a,b,y;p)$ is the Fourier transform of $x\mapsto S(\overline{x}+a,-b;p)/\sqrt{p}$. For $p\nmid b$, we have \[\frac{S(\overline{x}+a,-b;p)}{\sqrt{p}}=\mathrm{Kl}(-b(\overline{x}+a),p),\] which is a trace function of $[\gamma\cdot x]^*\mathcal{K}\ell$ with \begin{align*} \gamma= \begin{pmatrix} ab & b\\ -1 & 0 \end{pmatrix}. \end{align*} In view of Lemma \ref{lm:fouriertransform}, it suffices to consider $[\gamma\cdot x]^*\mathcal{K}\ell$. On the other hand, it was proved by Katz that there does not exist a rank 1 sheaf $\mathcal{L}$ and a geometric isomorphism \begin{align*} [\gamma\cdot x]^*\mathcal{K}\ell\simeq\mathcal{K}\ell\otimes\mathcal{L} \end{align*} for $\mathbf{1}\neq\gamma\in PGL_2(\mathbf{F}_p).$ As a consequence, if $\gamma\neq \mathbf{1}$, we find, for any rank 1 sheaf $\mathcal{L},$ the triple tensor sheaf \begin{align*} [\gamma\cdot x]^*\mathcal{K}\ell\otimes\mathcal{K}\ell\otimes\mathcal{L} \end{align*} is geometrically irreducible, of rank $\geqslant2,$ and thus $\infty$-amiable. Put \begin{align*} \gamma_1= \begin{pmatrix} a_1b_1 & b_1\\ -1 & 0 \end{pmatrix},\ \ \ \gamma_2= \begin{pmatrix} a_2b_2 & b_2\\ -1 & 0 \end{pmatrix} \end{align*} and suppose $p\nmid b_1b_2.$ Hence $\gamma_1,\gamma_2\in PGL_2(\mathbf{F}_p)$. We would like to determine when \begin{align*} [\gamma_1\cdot x]^*\mathcal{K}\ell\otimes[\gamma_2\cdot x]^*\mathcal{K}\ell\otimes\mathcal{L} \end{align*} is $\infty$-amiable. After a bijective change of variable, it suffices to check whether $\gamma_2\gamma_1^{-1}$ is the identity or not. 
In fact, \begin{align*} \gamma_2\gamma_1^{-1}= \begin{pmatrix} a_2b_2 & b_2\\ -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1\\ 1/b_1 & a_1 \end{pmatrix} =\begin{pmatrix} b_2/b_1 & b_2(a_1-a_2)\\ 0 & 1 \end{pmatrix}, \end{align*} which is the identity if and only if $a_1\equiv a_2\bmod p$ and $b_1\equiv b_2\bmod p$. Hence we may conclude the following lemma. \begin{lemma}\label{lm:amiable} Suppose $p\nmid b_1b_2.$ If $a_1\not\equiv a_2\bmod p$ or $b_1\not\equiv b_2\bmod p$, the trace function \begin{align*} y\mapsto T(a_1,b_1,y;p)\overline{T(a_2,b_2,y;p)} \end{align*} is $\infty$-amiable. \end{lemma} \begin{remark} Suppose $p\nmid h\ell k_1k_2$. Since $T(n_j\overline{\ell},h\overline{\ell k_j^2},y\overline{k}_j;p)=T(n_jk_j\overline{\ell},h\overline{\ell k_j^3},y;p)$ for $j=1,2$, it follows from Lemma \ref{lm:amiable} that $y\mapsto T(n_1\overline{\ell},h\overline{\ell k_1^2},y\overline{k}_1;p) \overline{T(n_2\overline{\ell},h\overline{\ell k_2^2},y\overline{k}_2;p)}$ is $\infty$-amiable unless \[n_1\equiv n_2\bmod p\ \ \text{and}\ \ k_1^3\equiv k_2^3\bmod p.\] In particular, if $p\equiv2\bmod3$, the latter congruence condition is equivalent to $k_1\equiv k_2\bmod p.$ This observation will be invoked in the proof of Theorem \ref{thm:individual}. \end{remark} \section{Applying the circle method} \label{sec:circlemethod} \subsection{Jutila's variant of the circle method} In the study of $\mathscr{D}_h(X)$, we would like to detect the condition $m=n\in\mathbf{Z}$ starting from the trivial identity \begin{align*} \int_0^1\mathrm{e}(\alpha(m-n))\mathrm{d}\alpha= \begin{cases} 1,\ \ &\text{if }m=n,\\ 0,&\text{if }m\neq n. \end{cases} \end{align*} The circle method is devoted to decomposing the unit interval $[0,1]$ in a certain way such that the subsequent evaluations can proceed non-trivially. There is a flexible variant due to Jutila \cite{Ju92,Ju97} with overlapping intervals, which has found many important applications in the analytic theory of automorphic forms. Let $Q\geqslant1$. For any moduli set $\mathscr{Q} \subseteq [Q,2Q]$ and a positive real number $\varDelta$ with $ Q^{-2}\ll\varDelta\ll Q^{-1}$, we define the function $$ I_{\mathscr{Q},\varDelta} (x)=\frac{1}{2\varDelta\varPhi}\sum_{q\in\mathscr{Q}}\;\sideset{}{^*}\sum_{a\bmod{q}}\mathbf{1}_\varDelta\Big(\frac{a}{q}-x\Big), $$ where $\varPhi=\sum_{q\in\mathscr{Q}}\varphi(q)$ and $\mathbf{1}_\varDelta$ is the characteristic function of $[-\varDelta,\varDelta]$. If we view the fractions $a/q$ as random variables, the expected deviation of $I_{\mathscr{Q},\varDelta} (x)$ from 1 is $O(1/(\varDelta Q^2))$. In fact, Jutila proved this is the case in the following average sense, provided $\varPhi\gg Q^{2-\varepsilon}$, which holds for many interesting choices. The proof can be found in \cite{Ju92}. \begin{lemma}\label{lm:jutila} Let $Q^{-2}\ll \varDelta\ll Q^{-1} $. Then we have \begin{align} \label{variance} \frac{1}{\varDelta Q^2}\int_0^1\left|1-I_{\mathscr{Q},\varDelta}(x)\right|^2\mathrm{d} x\llcurlyurly \frac{1}{(\varDelta\varPhi)^2}. \end{align} \end{lemma} \subsection{Setting up the circle method} Let $W$ be a smooth function supported in $[\frac{1}{2},3]$ satisfying $W(x)=1$ for $x\in [\frac{2}{3},\frac{5}{2}]$. 
Then we may write \begin{align}\label{eq:unit-identity} \mathscr{D}_h(X) &=\int_0^1\mathrm{e}(xh)\mathfrak{S}_1(x,X)\mathfrak{S}_2(x,X)\mathrm{d} x, \end{align} where \begin{align}\label{eq:S1(x,X)} \mathfrak{S}_1(x,X)=\sum_{m\geqslant1}\lambda_1(1,m)\mathrm{e}(xm)V\left(\frac{m}{X}\right) \end{align} and \begin{align}\label{eq:S2(x,X)} \mathfrak{S}_2(x,X)=\sum_{n\geqslant1}\lambda_2(n)\mathrm{e}(-xn)W\left(\frac{n}{X}\right). \end{align} In \cite{Mu13a}, Munshi constructed $\mathscr{Q}$ to be a set of products of distinct primes with prescribed sizes. In order to set up a general framework, we here allow $\mathscr{Q}$ to be a collection of squarefree numbers contained in $[Q/2,Q]$. Let $\varDelta>0$ be a parameter to be chosen later such that $Q^{-2}\ll \varDelta\ll Q^{-1}.$ Define \begin{align*} \mathscr{D}_h^*(X) &=\int_0^1I_{\mathscr{Q},\varDelta}(x)\mathrm{e}(xh)\mathfrak{S}_1(x,X)\mathfrak{S}_2(x,X)\mathrm{d} x. \end{align*} We first consider the difference between $\mathscr{D}_h(X)$ and $\mathscr{D}_h^*(X).$ From \eqref{eq:Wilton} and partial summation, it follows that $$\mathfrak{S}_2(x,X)\llcurlyurly X^{\frac{1}{2}}, $$ which is uniform in $x\in[0,1]$. We then have \begin{align*} |\mathscr{D}_h(X)-\mathscr{D}^*_h(X)|\llcurlyurly X^{\frac{1}{2}}\int_0^1|\mathfrak{S}_1(x,X)|\left|1- I_{\mathscr{Q},\varDelta}(x)\right|\mathrm{d} x. \end{align*} By Cauchy's inequality, it follows from Lemmas \ref{lm:jutila} and \ref{lm:second-GL3} that \begin{align*} |\mathscr{D}_h(X)-\mathscr{D}^*_h(X)|^2 &\llcurlyurly \frac{(QX)^2}{\varDelta\varPhi^2}. \end{align*} If we specialize the moduli set $\mathscr{Q}$ such that $\varPhi\gg Q^{2-\varepsilon}$, we then conclude the following approximation. \begin{lemma} \label{lm:Dh(X)-approximation} For $Q^{-2}\ll \varDelta\ll Q^{-1}$ and $\varPhi\gg Q^{2-\varepsilon}$, we have \begin{align}\label{eq:Dh(X)-approximation} \mathscr{D}_h(X)=\mathscr{D}_h^*(X)+O\Big(\frac{X^{1+\varepsilon}}{\sqrt{\varDelta}Q}\Big), \end{align} where the $O$-constant depends on $\varepsilon.$ \end{lemma} \begin{remark} We will specialize $\varDelta=1/X$, which implies an error term $O(Q^{-1}X^{\frac{3}{2}+\varepsilon})$ in \eqref{eq:Dh(X)-approximation}. To beat the trivial bound of $\mathscr{D}_h(X)$, we require that $Q\gg \sqrt{X}$, which is always kept in mind henceforth. \end{remark} \section{Transformation of $\mathscr{D}_h^*(X)$} \label{sec:transformation} \subsection{Applying Voronoi summation formulae} In this section, we make some initial transformations of $\mathscr{D}_h^*(X)$ by appealing to the Voronoi summation formulae on $GL(3)$ and $GL(2)$. 
From the definition of $I_{\mathscr{Q},\varDelta}(x)$, we get \begin{align*} \mathscr{D}_h^*(X)&=\frac{1}{2\varDelta\varPhi}\int_{-\varDelta}^{\varDelta}\sum_{q\in\mathscr{Q}}~\sideset{}{^*}\sum_{a\bmod q}\mathrm{e}\Big(\frac{ah}{q}\Big)\mathop{\sum\sum}_{m,n\in\mathbf{Z}}\lambda_1(1,m)\lambda_2(n)\mathrm{e}\Big(\frac{a(m-n)}{q}\Big)\\ &\ \ \ \ \ \times V\left(\frac{m}{X}\right)W\left(\frac{n}{X}\right)\mathrm{e}(\alpha(m-n+h))\mathrm{d}\alpha.\end{align*} For any fixed $\alpha\in[-\varDelta,\varDelta],$ put \begin{align}\label{eq:D(h,alpha)} \mathscr{D}_{h,\alpha}^*(X)&=\sum_{q\in\mathscr{Q}}~\sideset{}{^*}\sum_{a\bmod q}\mathrm{e}\Big(\frac{ah}{q}\Big)\mathop{\sum\sum}_{m,n\in\mathbf{Z}}\lambda_1(1,m)\lambda_2(n)\mathrm{e}\Big(\frac{a(m-n)}{q}\Big)g(m)h(n)\end{align} with \begin{align}\label{eq:choices:gh} g(x)=V\left(\frac{x}{X}\right)\mathrm{e}(\alpha x),\ \ h(y)=W\left(\frac{y}{X}\right)\mathrm{e}(-\alpha y),\end{align} so that \begin{align} \label{eq:Dh*} \mathscr{D}_h^*(X)&=\frac{1}{2\varDelta\varPhi}\int_{-\varDelta}^{\varDelta}\mathrm{e}(\alpha h)\mathscr{D}_{h,\alpha}^*(X)\mathrm{d}\alpha.\end{align} We are now in a position to apply Voronoi summation formulae (Lemmas \ref{lm:Voronoi3} and \ref{lm:Voronoi2}) to the sums over $m,n$ in \eqref{eq:D(h,alpha)}, getting \begin{align*} \mathscr{D}_{h,\alpha}^*(X)&=\mathop{\sum\sum}_{\sigma_1,\sigma_2\in\{+,-\}}\sum_{q\in\mathscr{Q}}\sum_{m_1|q}\sum_{m_2\geqslant1} \frac{\lambda_1(m_2,m_1)}{m_1m_2}\\ &\ \ \ \ \ \times\sum_{n\geqslant1} \lambda_2(-\sigma_2n)\mathscr{S}(h,n,m_2;q,q/m_1)G_{\sigma_1}\Big(\frac{m_1^2m_2}{q^3}\Big)H^{\sigma_2}\Big(\frac{n}{q^2}\Big), \end{align*} where $G_\pm$ and $H^\pm$ are integral transforms defined by \eqref{eq:Gpm}, \eqref{eq:H-} and \eqref{eq:H+} with respect to the choices in \eqref{eq:choices:gh}, and the exponential sum $\mathscr{S}(h,n,m_2;q,q/m_1)$ is given by \eqref{eq:S(h,n,m;c,d)}. \subsection{Well-factorable moduli coming into play} Due to the choices of $\sigma_1,\sigma_2$ in the above expression for $\mathscr{D}_{h,\alpha}^*(X)$, there will be four sub-contributions with strong similarities; we choose only one to ease the complexity of presentation. More precisely, we consider \begin{align*} \varSigma_{h,\alpha}(X)&=\sum_{q\in\mathscr{Q}}\sum_{\ell|q}\mathop{\sum\sum}_{m,n\geqslant1}\frac{\lambda_1(m,\ell)\lambda_2(n)}{m\ell}\mathscr{S}(h,n,m;q,q/\ell)G_+\Big(\frac{\ell^2m}{q^3}\Big)H^-\Big(\frac{n}{q^2}\Big).\end{align*} Alternatively, we have the expression \begin{align*} \varSigma_{h,\alpha}(X)&=\sum_{\ell d\in\mathscr{Q}}\mathop{\sum\sum}_{m,n\geqslant1}\frac{\lambda_1(m,\ell)\lambda_2(n)}{m\ell}\mathscr{S}(h,n,m;\ell d,d)G_+\Big(\frac{m}{\ell d^3}\Big)H^-\Big(\frac{n}{\ell^2d^2}\Big).\end{align*} The moduli set $\mathscr{Q}$ is chosen to be the set of squarefree numbers in $(Q/2,Q],$ whose prime factors are $\equiv2\bmod3$ and do not exceed $Q^\eta$ for a small parameter $\eta>0.$ In particular, we may take $\eta=\varepsilon^2.$ It is known that \[\varPhi:=\sum_{q\in\mathscr{Q}}\varphi(q)\gg Q^2,\] where the implied constant depends only on $\eta$. 
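Before estimating $\varSigma_{h,\alpha}(X)$, we recall that the complete exponential sums entering the analysis are built out of the Kloosterman sums \eqref{eq:Klsum}. As a purely illustrative aside, which plays no role in the argument, the Weil bound recalled in Section \ref{sec:trace-expsums} can be checked numerically in a few lines of code (the function names below are ours):
\begin{verbatim}
# Toy check of |S(n,1;q)| <= tau(q) sqrt(q), i.e. |Kl(n,q)| <= tau(q),
# for small squarefree q; purely illustrative.
from math import gcd, isqrt
import cmath

def kloosterman(m, n, c):
    # S(m,n;c) = sum over x mod c with (x,c)=1 of e((m x + n xbar)/c)
    s = 0j
    for x in range(1, c):
        if gcd(x, c) == 1:
            xinv = pow(x, -1, c)
            s += cmath.exp(2j * cmath.pi * (m * x + n * xinv) / c)
    return s

tau = lambda c: sum(1 for d in range(1, c + 1) if c % d == 0)
squarefree = lambda c: all(c % (p * p) for p in range(2, isqrt(c) + 1))

for q in (x for x in range(2, 200) if squarefree(x)):
    for n in (1, 2, 7):
        assert abs(kloosterman(n, 1, q)) <= tau(q) * q ** 0.5 + 1e-6
\end{verbatim}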
By dyadic devices, we consider \begin{equation}\label{eq:Sigma_h,alpha(X;M,N)} \begin{split} \varSigma_{h,\alpha}(X;M,N)&=\sum_{\ell\sim L}\sum_{d\sim D}\sum_{k\sim K}\sum_{m\sim M}\sum_{n\sim N}\frac{\lambda_1(m,\ell)\lambda_2(n)}{m\ell}\mathscr{S}(h,n,m;\ell dk,dk)\\ &\ \ \ \ \ \ \times G_+\Big(\frac{m}{\ell d^3k^3}\Big)H^-\Big(\frac{n}{\ell^2d^2k^2}\Big)\end{split} \end{equation} for all $L\leqslantslantqslant Q$ with $LDK\asymp Q$, where $D,K$ can be specialized on our demand. In addition, the sums are void unless $\mu^2(\ell dk)=1,$ which we always assume henceforth. From the supports and decays of $G_+,H^-$ given by \eqref{eq:Gtransform-bound} and \eqref{eq:Htransform-bound}, we may assume \begin{align}\label{eq:MN-size} M\llcurlyurly X^{-1}QD^2K^2,\ \ N\llcurlyurly X^{-1}Q^2. \end{align} Our next goal is to bound $\varSigma_{h,\alpha}(X;M,N)$ from above uniformly in $\alpha\in[-\varDelta,\varDelta]$. Collecting all possible tuples $(M,N)$ and integrating over $\alpha$ trivially, one can obtain an upper bound for $\mathscr{D}^*_h(X)$, from which and Lemma \ref{lm:Dh(X)-approximation} the theorem follows by optimizing the parameters. \section{Proof of Theorem \ref{thm:individual}} \subsection{Invoking bilinear structures} By Cauchy's inequality, it follows from \eqref{eq:Sigma_h,alpha(X;M,N)} that \begin{align*} |\varSigma_{h,\alpha}(X;M,N)|^2\leqslantslantqslant\varSigma_1\varSigma_2,\end{align*} where \begin{align*} \varSigma_1&=\sum_{\ell\sim L}\sum_{d\sim D}\sum_{m\sim M}\leqslantslantft|\frac{\lambda_1(m,\ell)}{m\ell}\right|^2\end{align*} and \begin{align*} \varSigma_2&=\sum_{\ell\sim L}\sum_{d\sim D}\sum_{m\sim M}\Bigg|\sum_{k\sim K}\sum_{n\sim N}\lambda_2(n)\mathscr{S}(h,n,m;\ell dk,dk)G_+\Big(\frac{m}{\ell d^3k^3}\Big)H^-\Big(\frac{n}{\ell^2d^2k^2}\Big)\Bigg|^2.\end{align*} On one hand, it follows from Lemma \ref{lm:second-GL3} that \begin{align}\label{eq:Sigma1-upperbound} \varSigma_1 &\llcurlyurly\frac{D}{(LM)^2}\sum_{d}\sum_{\ell\sim L/d}\sum_{m\sim M/d}|\lambda_1(m,1)|^2|\lambda_1(\ell,1)|^2\llcurlyurly\frac{D}{LM}. \end{align} Squaring out and switching summations, \begin{align}\label{eq:Sigma2-upperbound} \varSigma_2&\llcurlyurly X^2\sum_{\ell\sim L}\sum_{d\sim D}\mathop{\sum\sum}_{k_1,k_2\sim K}\mathop{\sum\sum}_{n_1,n_2\sim N}|\lambda_2(n_1)||\lambda_2(n_2)||\mathscr{B}(\ell,d,\mathbf{k},\mathbf{n})|,\end{align} where \begin{align*} \mathscr{B}(\ell,d,\mathbf{k},\mathbf{n})&=\sum_{m\sim M}\mathscr{S}(h,n_1,m;\ell dk_1,dk_1)\overline{\mathscr{S}(h,n_2,m;\ell dk_2,dk_2)}G_+\Big(\frac{m}{\ell d^3k_1^3}\Big)\overline{G_+\Big(\frac{m}{\ell d^3k_2^3}\Big)}.\end{align*} \subsection{Estimates for $\varSigma_2$ and $\mathscr{D}^*_h(X)$} The next step is to capture the oscillations within $\mathscr{B}(\ell,d,\mathbf{k},\mathbf{n})$ by virtue of the method of arithmetic exponent pairs given in Lemma \ref{lm:AEP}. Put $k_0=(k_1,k_2)$ and $k_1=k_0k_1',k_2=k_0k_2',$ so that $(k_1',k_2')=1.$ We then decompose \begin{align*} \mathscr{S}(h,n_1,m;\ell dk_1,dk_1) &=dk_1\cdot S(h,-n_1\overline{(dk_1)^2};\ell)T(n_1\overline{\ell},h\overline{\ell},m;dk_1)\\ &=dk_1\cdot S(h,-n_1\overline{(dk_1)^2};\ell)T(n_1\overline{\ell},h\overline{\ell k_1^2},m\overline{k}_1;d)\\ &\ \ \ \ \ \cdot T(n_1\overline{\ell},h\overline{\ell(dk_1')^2},m\overline{dk_1'};k_0) T(n_1\overline{\ell},h\overline{\ell(dk_0)^2},m\overline{dk}_0;k_1'),\end{align*} and we have a relevant decomposition for $\mathscr{S}(h,n_2,m;\ell dk_2,dk_2)$. 
As a function in $m$, the product $\mathscr{S}(h,n_1,m;\ell dk_1,dk_1)\overline{\mathscr{S}(h,n_2,m;\ell dk_2,dk_2)}$ is well-defined over $\mathbf{Z}/dk\mathbf{Z}$ with $k=k_0k_1'k_2'.$ Due to the possible correlation between $T(n_1\overline{\ell},h\overline{\ell k_1^2},m\overline{k}_1;d)$ and $T(n_2\overline{\ell},h\overline{\ell k_2^2},m\overline{k}_2;d),$ we would like to divide the tuples $(n_1,n_2,k_1,k_2)$ into the following three cases: \begin{itemize} \item $n_1=n_2,k_1=k_2,$ \item $n_1\neq n_2,k_1=k_2,$ \item $k_1\neq k_2.$ \end{itemize} Correspondingly, we denote by $\varSigma_{21},\varSigma_{22}$ and $\varSigma_{23}$ the relevant contributions from these tuples to the RHS in \eqref{eq:Sigma2-upperbound}. In Case I, we have $k_0=k$ and $k_1'=k_2'=1$, and we invoke the trivial bound \begin{align*} |\mathscr{S}(\cdots)\overline{\mathscr{S}(\cdots)}|&\llcurlyurly\ell d^2k_1k_2\cdot(h,n_1,\ell), \end{align*} getting \begin{align*} \varSigma_{21}&\llcurlyurly M^2NLX^3.\end{align*} In Case II, we also have $k_0=k=k_1=k_2$ and $k_1'=k_2'=1$. Put $\mathfrak{p}=(dk,n_1-n_2)$ and $\mathfrak{q}=dk/\mathfrak{p}.$ In such case, we may have $(n_1-n_2,\mathfrak{q})=1$ and write \begin{align*} &\mathscr{S}(h,n_1,m;\ell dk_1,dk_1)\overline{\mathscr{S}(h,n_2,m;\ell dk_2,dk_2)}\\ =&(dk)^2\cdot S(h,-n_1\overline{(dk)^2};\ell)S(h,-n_2\overline{(dk)^2};\ell)\\ &\ \ \ \ \times T(n_1\overline{\ell},h\overline{\ell},m;dk)\overline{T(n_2\overline{\ell},h\overline{\ell},m;dk)}\\ =&(dk)^2\cdot S(h,-n_1\overline{(dk)^2};\ell)S(h,-n_2\overline{(dk)^2};\ell)|T(n_1\overline{\ell},h\overline{\ell\mathfrak{q}^2},m\overline{\mathfrak{q}};\mathfrak{p})|^2\\ &\ \ \ \ \times T(n_1\overline{\ell},h\overline{\ell\mathfrak{p}^2},m\overline{\mathfrak{p}};\mathfrak{q})\overline{T(n_2\overline{\ell},h\overline{\ell\mathfrak{p}^2},m\overline{\mathfrak{p}};\mathfrak{q})}.\end{align*} Thanks to Lemma \ref{lm:amiable}, we may apply Lemma \ref{lm:AEP} with \begin{align*} q=\mathfrak{q},\ \ \delta=\mathfrak{p}, \ \ (\kappa,\lambda,\nu,\mu)=(\frac12,\frac12,\frac12,1), \end{align*} \begin{align*} K_q:x\mapsto T(n_1\overline{\ell},h\overline{\ell\mathfrak{p}^2},m\overline{\mathfrak{p}};\mathfrak{q})\overline{T(n_2\overline{\ell},h\overline{\ell\mathfrak{p}^2},m\overline{\mathfrak{p}};\mathfrak{q})} \end{align*} and \begin{align*} W_\delta:x\mapsto|T(n_1\overline{\ell},h\overline{\ell\mathfrak{q}^2},m\overline{\mathfrak{q}};\mathfrak{p})|^2, \end{align*} getting \begin{align*} \mathscr{B}_h(\ell,d,\mathbf{k},\mathbf{n})&\llcurlyurly\frac{MX}{L(DK)^3}L(DK)^2(h,n_1,\ell)^{\frac{1}{2}}(h,n_2,\ell)^{\frac{1}{2}}\Big(\mathfrak{q}^{\frac{1}{2}}\mathfrak{p}+\frac{M}{\sqrt{DK}}\Big)\\ &\llcurlyurly\frac{MX}{DK}(h,\ell)(dk)^{\frac{1}{2}}(dk,n_1-n_2)^{\frac{1}{2}}+\frac{M^2X}{(DK)^{\frac{3}{2}}}(h,\ell),\end{align*} from which we conclude \begin{align*} \varSigma_{22}&\llcurlyurly L(DK)^{\frac{1}{2}} MN^2X^3+\frac{L(MN)^2X^3}{\sqrt{DK}}.\end{align*} In Case III, we can follow the arguments in Case II by pulling out $(d,k_1-k_2)$, the g.c.d. 
of $d$ and $k_1-k_2$, then apply Lemmas \ref{lm:AEP} and \ref{lm:amiable} with \[q=k_1'k_2'd/(d,k_1-k_2),\ \ \delta=k_0(d,k_1-k_2),\ \ (\kappa,\lambda,\nu,\mu)=(\frac12,\frac12,\frac12,1),\] and the relevant $K_q,W_\delta$, of which we do not intend to display the explicit shapes, getting \begin{align*} \mathscr{B}_h(\ell,d,\mathbf{k},\mathbf{n}) &\llcurly\frac{MX}{DK}(h,\ell)\Big((k_1'k_2'd)^{\frac{1}{2}}(d,k_1-k_2)^{\frac{1}{2}}k_0+\frac{M}{\sqrt{dk_1'k_2'k_0}}\Big)\\ &\ll\frac{MX}{\sqrt{D}}(h,\ell)(d,k_1-k_2)^{\frac{1}{2}}+\frac{M^2X}{D^{\frac{3}{2}}K^2}(h,\ell)(k_1,k_2)^\frac{1}{2}.\end{align*} Summing over $\ell,d,k_1,k_2,n_1,n_2$ in Case III, we derive that \begin{align*} \varSigma_{23} &\llcurly LD^{\frac{1}{2}}K^2MN^2X^3 +\frac{L(MN)^2X^3}{\sqrt{D}}.\end{align*} Collecting all above estimates for $\varSigma_{21},\varSigma_{22}$ and $\varSigma_{23}$, we find \begin{align*} \varSigma_2&\llcurly LM^2NX^3+LD^{\frac{1}{2}}K^2MN^2X^3 +\frac{L(MN)^2X^3}{\sqrt{D}},\end{align*} from which and \eqref{eq:Sigma1-upperbound}, it follows that \begin{align*} |\varSigma_{h,\alpha}(X;M,N)|^2 &\llcurly X^3(DMN+D^{\frac{3}{2}} K^2N^2 +\sqrt{D}MN^2).\end{align*} Collecting all possible tuples $(M,N)$ subject to \eqref{eq:MN-size}, we find \begin{align*} \varSigma_{h,\alpha}(X)&\llcurly(DQ^5X+D^{-\kappa}Q^{5+3\lambda-\kappa}X^{1+\kappa-\lambda} +\sqrt{D}Q^7)^{\frac{1}{2}},\end{align*} and the same upper bound also works for $\mathscr{D}^*_{h,\alpha}(X).$ Integrating over $\alpha\in[-\varDelta,\varDelta],$ we obtain \begin{align}\label{eq:D*h(X)-estimate} \mathscr{D}^*_h(X)&\llcurly(DQX+D^{-\frac{1}{2}}Q^2X +\sqrt{D}Q^3)^{\frac{1}{2}}.\end{align} \subsection{Concluding Theorem \ref{thm:individual}} Taking $D=Q^{\frac{2}{3}}$ in \eqref{eq:D*h(X)-estimate} to balance the first two terms on the RHS, we get \begin{align*} \mathscr{D}^*_h(X)& \llcurlyurly Q^{\frac{5}{6}}X^{\frac{1}{2}}+Q^{\frac{5}{3}}, \end{align*} from which and \eqref{eq:Dh(X)-approximation} we conclude \begin{align*} \mathscr{D}_h(X)&\llcurlyurly Q^{\frac{5}{6}}X^{\frac{1}{2}}+Q^{\frac{5}{3}}+\frac{X}{\sqrt{\varDelta}Q}. \end{align*} Taking $Q=X^{\frac{6}{11}}$ and $\varDelta=X^{-1}\geqslantslantqslant Q^{-2}$, we arrive at \begin{align*} \mathscr{D}_h(X)&\llcurlyurly X^{\frac{21}{22}} \end{align*} as stated in Theorem \ref{thm:individual}. \section{Remarks} Regarding the estimates for exponential sums, Munshi \cite{Mu13a} appealed to the works of Adolphson--Sperber \cite{AS89} and Bombieri--Sperber \cite{BS95} on multi-dimensional exponential sums over finite fields. Alternatively, we utilize the arithmetic exponent pairs (Lemma \ref{lm:AEP}) developed in \cite{WX16}, which allow us to deal with these exponential sums simultaneously. On the other hand, that the function in Lemma \ref{lm:amiable} is $\infty$-amiable is much more than what we need in the proof of Theorem \ref{thm:individual}, since we only utilize the exponent pair $(\frac12,\frac12,\frac12,1).$ As in the case of $GL(2)$, one can also consider the average of $\mathscr{D}_h(X)$ over $h$. In fact, we may write \begin{align*} \sum_{h\in\mathbf{Z}}|\mathscr{D}_h(X)|^2&=\sum_{h\in\mathbf{Z}}\leqslantslantft|\sum_{\substack{m,n\geqslantslantqslant1\\m+h=n}}\lambda_1(1,m)\lambda_2(n)V\leqslantslantft(\frac{m}{X}\right) W\leqslantslantft(\frac{n}{X}\right)\right|^2, \end{align*} where $W$ is a smooth function supported in $[\frac{1}{2},3]$ satisfying $W(x)=1$ for $x\in [\frac{2}{3},\frac{5}{2}]$. 
From the orthogonality of additive characters, it follows that \begin{align*} \sum_{h\in\mathbf{Z}}|\mathscr{D}_h(X)|^2&=\int_0^1|\mathfrak{S}_1(x,X)\mathfrak{S}_2(x,X)|^2\mathrm{d} x, \end{align*} where $\mathfrak{S}_1(x,X)$ and $\mathfrak{S}_2(x,X)$ are defined by \eqref{eq:S1(x,X)} and \eqref{eq:S2(x,X)}. We then conclude from Lemmas \ref{lm:Wilton-Miller} and \ref{lm:second-GL3} that \begin{align*} \sum_{h\in\mathbf{Z}}|\mathscr{D}_h(X)|^2&\llcurlyurly X\int_0^1|\mathfrak{S}_1(x,X)|^2\mathrm{d} x\llcurlyurly X^2. \end{align*} This gives the square-root cancellation in $\mathscr{D}_h(X)$ on average. As an immediate consequence, we also have \begin{align}\label{eq:Dh(X)-average} \sum_{h}\gamma(h)\mathscr{D}_h(X)\llcurlyurly X\Big(\sum_{h}|\gamma(h)|^2\Big)^{\frac12}\end{align} for an arbitrary coefficient $\boldsymbol\gamma=(\gamma(h))\in\ell^2(\mathbf{Z})$. It is expected that the approach in this paper can be utilized to improve \eqref{eq:Dh(X)-average} in some special cases. Unfortunately, a lot of information is lost in the application of Lemma \ref{lm:Dh(X)-approximation}, although one can choose better exponent pairs while applying Lemma \ref{lm:AEP}. In fact, in the context of $GL(2)$, one can choose the moduli set to be consecutive integers (with at most certain necessary divisibility conditions), and one will encounter sums of Kloosterman sums after Voronoi summation, so that the Kuznetsov trace formula is applicable since one is summing over consecutive integers. Hence, many more cancellations become possible thanks to Kloostermania. See \cite{Bl05,BHM07} for instance. On the other hand, in the case of $GL(3)\times GL(2)$, Sun \cite{Su17} proved that \begin{align}\label{eq:Dh(X)-smoothaverage} \sum_{h\geqslant1}U\Big(\frac{h}{H}\Big)\mathscr{D}_h(X)\ll X^{-A} \end{align} for any $A>0$, provided that $H>X^{\frac{1}{2}+\varepsilon}$, where $U$ is a fixed smooth function with compact support in $\mathbf{R}^+.$ Instead of using Jutila's variant of the circle method, she employed Kloosterman's circle method in the version of Heath-Brown \cite{HB96}, in which case no approximation such as Lemma \ref{lm:Dh(X)-approximation} is required. The saving in \eqref{eq:Dh(X)-smoothaverage} comes from the estimate \[\sum_{n\geqslant1}\lambda_2(n)V\Big(\frac{n}{X}\Big)\ll X^{-B},\ \ B>0\] due to Booker \cite{Bo05}. \end{document}
\begin{document} \title{Experimental distillation of squeezing from non-Gaussian quantum states} \author{J. Heersink} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \email{[email protected]} \author{Ch. Marquardt} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \author{R. Dong} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \author{R. Filip} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \affiliation{Department of Optics, Palack\'y University, 17. listopadu 50, 77200 Olomouc, Czech Republic} \author{S. Lorenz} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \author{G. Leuchs} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \author{U.L. Andersen} \affiliation{Institut f\"{u}r Optik, Information und Photonik, Max-Planck Forschungsgruppe, Universit\"{a}t Erlangen-N\"{u}rnberg, G\"{u}nther-Scharowsky-Str. 1, 91058, Erlangen, Germany} \date{\today} \begin{abstract} We show theoretically and experimentally that single copy distillation of squeezing from continuous variable non-Gaussian states is possible using linear optics and conditional homodyne detection. A specific non-Gaussian noise source, corresponding to a random linear displacement, is investigated. Conditioning the signal on a tap measurement, we observe probabilistic recovery of squeezing. \end{abstract} \pacs{03.67.-a, 42.50.Dv} \maketitle Non-classical states such as continuous variable (CV) entangled and squeezed states serve as enabling resources for many CV quantum information protocols~\cite{braunstein05.rmp} as well as for highly sensitive measurements beyond the shot noise limit~\cite{giovannetti04.sci}. The efficiency of these applications relies crucially on the state's nonclassicality (i.e. the degree of single- or two-mode squeezing). Therefore, uncontrolled and unavoidable interaction of the system with the environment and the resultant loss of squeezing in generation or transmission should be combated. This can be done by using a distillation protocol which probabilistically selects out squeezed states from a mixture, hereby increasing the output state's squeezing. Various protocols exploiting non-Gaussian operations to probabilistically distill two-mode squeezed {\it Gaussian states} have been proposed \cite{duan00.prl}. These protocols are, however, experimentally challenging. In addition it has been proven that the distillation of two-mode squeezed Gaussian states by means of more feasible local Gaussian operations is impossible~\cite{giedke02.pra}. Similarly, following a set of simple arguments, we conjecture that the single copy distillation of single-mode Gaussian squeezed states is impossible using only linear optics and homodyne detection. 
There has however been no work devoted to the distillation of CV Gaussian states corrupted by {\it non-Gaussian noise}. This occurs naturally in channels with fluctuating properties, e.g. gain or phase, examples of which are the fading channel~\cite{haas96.i3esac} or channels producing mixture noise~\cite{middleton77.i3etec}. Recently, non-Gaussian telegraph noise has been discussed in qubit systems~\cite{gutmann05.pra}. Therefore, extending the work on Gaussian noise, we pose the question: Is it possible to distill single-mode Gaussian squeezed states with superimposed {\it non-Gaussian noise} using linear optics and homodyne detectors? We answer this question in the affirmative and provide an experimental demonstration. The set of non-Gaussian noise sources is large. Thus we restrict our attention to a specific case: a squeezed vacuum state perturbed by phase kicks or jitter, attributable to either imperfect generation or transmission through a noisy channel. Assuming these perturbations cause a linear phase space displacement, a convex mixture of two Gaussian squeezed states is created \begin{equation}\label{W} W(x,p)=(1-\gamma) W_0(x,p)+\gamma W_1(x,p), \end{equation} where $\gamma$ is the probability of displacement, and the individual constituents of the mixture ($i=0,1$) are described by the Wigner functions \begin{equation}\label{sq} W_i(x,p)= \frac{ \exp \left(-\frac{(x-\bar{x}_i)^2}{2 \Delta^2 X_{sq}} -\frac{(p-\bar{p}_i)^2}{2 \Delta^2 P_{sq}}\right) } {2\pi\sqrt{\Delta^2 X_{sq} \Delta^2 P_{sq}}} .\nonumber \end{equation} Here $x$ and $p$ are the amplitude and phase quadratures. $\Delta^2 X_{sq}$ and $\Delta^2 P_{sq}$ are the corresponding variances of the input state. $\bar{x}_0,\bar{p}_0=0$ and $\bar{x}_1,\bar{p}_1$ are the mean values of the initial and displaced squeezed states, respectively. We assume the two individual Gaussians to be equally squeezed in $x$: $\Delta^2 X_{sq}<1$, where $\Delta^2 X_{sq} \Delta^2 P_{sq} \geq 1$. The first two moments of the amplitude quadrature of $W(x,p)$ are $\langle x \rangle=\gamma \bar{x}_1$ and $\langle x^2 \rangle= \Delta^2 X_{sq}+\gamma \bar{x}^2_1$, and thus the variance of the amplitude quadrature of the corrupted state of Eq.~\ref{W} is $\Delta^2 X = \Delta^2 X_{sq}+\gamma(1-\gamma)\bar{x}_{1}^{2}$. The second term here originates from the noise and degrades the squeezing. The aim is to recover the squeezing by distilling the initial squeezed state from this mixture. A schematic of the distillation protocol is shown in Fig.~\ref{setup}. A polarization squeezed state, mathematically equivalent to a squeezed vacuum state~\cite{josse03.prl,heersink05.ol}, is modulated to generate a noisy non-Gaussian state. This is incident upon a beam splitter with reflection $R$ and transmission $T$, producing correlated output states. Using a Stokes (or, equivalently, homodyne) detector, a given quadrature of the tap beam is measured. Conditioned on this measurement, the signal is selected only if the outcome lies above a given threshold value, as in Ref.~\cite{laurat03.prl}. Due to the correlations between the signal and tap beams, the scheme accomplishes a probabilistic distillation of the noisy input state. A similar strategy was proposed to purify decohered Schr\"odinger cat states~\cite{suzuki05.xxx}. We now present a theoretical description of the distillation. 
The tap beam splitter transforms the quadratures of the Wigner function to, for the transmitted signal beam, $x_s=\sqrt{T}x+\sqrt{R}x_v$ and $p_s=\sqrt{T}p+\sqrt{R}p_v$ where $x_v$ and $p_v$ are uncorrelated vacuum contributions. We write the signal Wigner function after i) detection of the tapped signal and ii) post selection of the signal as \begin{eqnarray} W(x_s,p_s)&=&\frac{1}{\Pi}\left[(1-\gamma) G_0(x_s,p_s)W_0(x_s,p_s)+\right. \nonumber \\ & &\left. \gamma G_1(x_s,p_s)W_1(x_s,p_s)\right], \end{eqnarray} where $\Pi$ is the success probability and $G_i(x_s,p_s)$ is a filter function which incorporates the effect of the tap measurement and post selection. It thus depends on the measured quadrature and the threshold value. Since the goal of the distillation is to recover the initial squeezing, we consider only the marginal quadrature distribution associated with the squeezed quadrature, $x$. Measuring the phase quadrature in the tapped signal $p_t$~\cite{note}, the resulting probability distribution of the squeezed quadrature in the signal reads \begin{equation}\label{P} P(x_s)=\frac{1}{\Pi}\left\{(1-\gamma) g_0P_0(x_s)+\gamma g_1P_1(x_s)\right\}, \end{equation} where $\Pi=(1-\gamma)g_0+\gamma g_1$ and the individual marginals $P_0(x_s)$ and $P_1(x_s)$ are Gaussian functions with variance $\Delta^2 X_s =T \Delta^2 X_{sq} + R \Delta^2 X_v$ and centered at $\bar{x}_0=0$ and $\sqrt{T}\bar{x}_1$, respectively. The filter function in Eq.~\ref{P} is \begin{equation} g_i=\frac{1}{2}\mbox{Erfc}\left[\frac{p_{th}-\bar{p}_i\sqrt{R}}{\sqrt{2 \Delta^2 P_t}} \right].\nonumber \end{equation} Here $p_{th}$ is the post selection threshold and $\Delta^2 P_t=R \Delta^2 P_{sq} + T \Delta^2 P_v$. After distillation the first two moments of the signal are $\langle x_s\rangle=(\sqrt{T}\bar{x}_{1})/(1+r)$ and $\langle x_s^2\rangle=\Delta^2 X_s+(T\bar{x}_{1}^{2})/(1+r)$ where $r=(1-\gamma)g_0/\gamma g_1$. Thus the distilled squeezing is given by \begin{equation} \Delta^2 X_s^{\mathit{distill}}=\Delta^2 X_s+T\bar{x}_1^2\frac{r}{(1+r)^2}. \label{distill} \end{equation} The signal variance can be decreased or even the squeezing recovered by minimizing the second term. The probability $\gamma$ and the displacement $\bar{x}_1$ are parameters of the noisy process and thus cannot be altered in the distillation optimization. However, through the choice of the threshold value $p_{th}$, the ratio between the filter functions $r$ can be controlled to yield efficient distillation, corresponding to $r\rightarrow \infty$ or $r\rightarrow 0$. \begin{figure} \caption{Schematic of the experimental setup for the generation, distillation and verification of non-Gaussian squeezed states.} \label{setup} \end{figure} Our experiment for the distillation of corrupted squeezed states consists of three parts (Fig.~\ref{setup}): the preparation, distillation and verification. We observe the squeezing of sidebands at $17.5\pm0.5$~MHz relative to the optical carrier frequency to avoid low frequency technical noise. The preparation of the mixed state of Eq.~\ref{W} is accomplished by combining a squeezer with a controllable noise source. We use a polarization squeezer exploiting the Kerr nonlinearity experienced by ultrashort laser pulses in optical fibers~\cite{heersink05.ol}. Using a birefringent fiber, two quadrature squeezed states can be simultaneously and independently generated. Stable overlap of these pulses allows us to generate Stokes parameter squeezing.
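For illustration, the threshold dependence of Eq.~(\ref{distill}) is straightforward to evaluate numerically. The short Python sketch below is not part of the original analysis, and all parameter values in it are purely illustrative (variances are normalized to the shot-noise level); it simply implements the filter functions $g_i$ and the ratio $r$, and shows the distilled variance dropping back below the shot-noise level as the threshold is raised.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

# Illustrative parameters, normalized to the shot noise (not the measured values).
T, R = 0.9, 0.1                 # tap beam-splitter transmission / reflection
V_X_sq, V_P_sq = 0.5, 50.0      # squeezed / anti-squeezed input variances
gamma = 0.5                     # probability of the displacement
x1, p1 = 2.0, 20.0              # displacement components in x and p

V_X_s = T * V_X_sq + R * 1.0    # signal variance of each Gaussian component
V_P_t = R * V_P_sq + T * 1.0    # variance of the tapped (measured) quadrature

def distilled_variance(p_th):
    """Distilled signal variance of Eq. (distill) for threshold p_th."""
    g0 = 0.5 * erfc(p_th / np.sqrt(2.0 * V_P_t))                      # undisplaced
    g1 = 0.5 * erfc((p_th - p1 * np.sqrt(R)) / np.sqrt(2.0 * V_P_t))  # displaced
    r = (1.0 - gamma) * g0 / (gamma * g1)
    return V_X_s + T * x1**2 * r / (1.0 + r)**2

for p_th in (0.0, 3.0, 6.0):
    print(f"p_th = {p_th:.1f}:  distilled variance = {distilled_variance(p_th):.2f}")
\end{verbatim}
With these (arbitrary) numbers the variance decreases towards $\Delta^2 X_s$ as $p_{th}$ grows, at the cost of a vanishing success probability, mirroring the trade-off discussed below.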
Considering the Stokes plane orthogonal to the classical excitation ($S_3$, circularly polarized), the 'dark' plane ($S_1-S_2$), it is found that the polarization squeezing observed in this mode is mathematically equivalent to quadrature vacuum squeezing~\cite{josse03.prl,heersink05.ol}. We treat the two synonymously. The classical excitation can then be thought of as a perfectly matched local oscillator. From this source we observed $\Delta^2 X_{sq} =-3.1 \pm 0.3$~dB relative to the quantum noise level. The anti-squeezed quadrature contains the large excess phase noise characteristic of pulse propagation in glass fibers, here $\Delta^2 P_{sq} = +27 \pm 0.3$~dB. These noise signals are observed using balanced detector pairs with 85\% quantum efficiency. \begin{figure} \caption{Experimentally measured marginal distributions, centered at zero for convenience, outlining the distillation of a squeezed state from a non-Gaussian mixture of squeezed states; a) tap measurement $p_t$, b) signal measurement $x_s$. Inset: Phase space representation of the mixed state and the projection axes used in the measurements.} \label{marginals} \end{figure} The non-Gaussian noise source is implemented by executing a fixed phase space displacement of the squeezed state with a probability $\gamma =0.5$. The displacement is generated by a phase modulation in one of the linear polarization modes at the fiber input using an electro-optic modulator at 17.5~MHz. This produces a corresponding displacement along the $S_2$ polarization after the fiber (Fig.~\ref{marginals}, inset). The modulation depth governs the amount of phase space displacement. By periodically switching the modulation on and off, the displacement is toggled from maximum to zero at a frequency of 500~kHz. This non-Gaussian state, with $\Delta^2 X =+1.4 \pm 0.3$~dB, is fed into the distiller. It consists of two operations: i) the tap measurement of a certain quadrature on a small portion of the beam; ii) the signal post selection conditioned on the tap measurement. The latter could be implemented electro-optically to probabilistically generate a freely propagating distilled signal state. To avoid such complications, our conditioning is instead based on data post selection using a verification measurement. The tap and the signal are recorded simultaneously, yielding data pairs, and the signal is selected dependent on the tap value. These measurements are implemented as Stokes measurements in the 'dark' plane. For a circularly polarized beam the rotation of a half-wave plate introduces a relative phase shift between the right-hand circular polarization (squeezed state) and the left-hand polarization (local oscillator) when observing the difference signal after a polarization beam splitter~\cite{heersink05.ol}. This is equivalent to the phase scan of a homodyne detector measuring arbitrary quadratures in the 'dark' plane. \begin{figure} \caption{Experimentally and theoretically distilled squeezing (left) and success probability (right) as a function of post selection threshold for two displacements. The threshold is given relative to the center of the marginal distributions.} \label{threshold} \end{figure} The RF photocurrents of the photodetectors are mixed with an electronic local oscillator at 17.5~MHz and digitized with a fast AD converter at $10^7$~samples per second with a 16~bit resolution. We define our state of the electro-magnetic field to be a time window of 1~$\mu$s.
By digital filtering and averaging over time bins of 1~$\mu$s we derive a photocurrent value for each bin. In this process the 1~$\mu$s time bins of our signal are synchronized to the modulator switching period, such that each bin is recorded entirely during an 'on' or an 'off' period. Thus by measuring the anti-squeezed quadrature in the tap on an ensemble of identically prepared noisy states we construct the distributions in Fig.~\ref{marginals}(a). The simultaneous measurement of the signal beam recorded the orthogonal, squeezed quadrature. The modulation was chosen such that the variance of the noisy signal was just greater than that of the shot noise (Fig.~\ref{marginals}(b)). Performing post selection on this data by conditioning it on the tap measurement, we observe a recovery of the squeezing. That is, the distilled signal distribution is narrower than that of the shot noise (Fig.~\ref{marginals}(b)). We measured $\Delta^2 P_t = +17.5 \pm 0.3$~dB relative to the shot noise. Conditioning on the tap, the noisy signal variance, $+1.1 \pm 0.3$~dB, fell to $-2.6 \pm 0.3$~dB after distillation. Using the data shown in Fig.~\ref{marginals}, the distilled signal variance was investigated as a function of the post selection threshold. In Fig.~\ref{threshold} we notice that an increasing threshold decreases the signal variance, ultimately approaching the input squeezing. This agrees well with the exponential increase in squeezing predicted by Eq.~\ref{distill}, given by the dashed line. As the threshold increases, the success probability or amount of distilled data decreases to zero, causing an increase in the statistical error on the variance. Thus a compromise between the post selected variance and probability of success must be made. \begin{figure} \caption{Distilled variance (left axis) and success probability (right axis) as a function of the quadrature angle relative to the squeezed quadrature (angle $\beta$ in Fig.~\ref{marginals}).\label{tapturn}} \end{figure} The effectiveness of a given threshold depends on i) the projection of the displacement onto the measured quadrature and ii) the variance of the measured quadrature. Fig.~\ref{tapturn}(a) and (b), each with a different threshold, show this effect. The measured tap quadrature was rotated by an angle $\beta$ (see Fig.~\ref{marginals}), effectively changing the displacement size. We observe the best distillation for small angles where the displacement ($\bar{x}_0-\bar{x}_1$) to threshold ($p_t$) difference is largest. It is seen that the quality of the distillation decreases with increasing $\beta$ as the projection of the displacement onto the measured quadrature increases. We note however that for large thresholds the distillation quality is approximately independent of $\beta$ or the measured quadrature. We also measured the Wigner function, for the first time in fiber-based systems, of both the mixed and the distilled states. Rotating the half-wave plate in the verifier (Fig.~\ref{setup}) allows observation of all the squeezed state's quadratures for a constant tap measurement. We made 128 equally spaced projections in phase space, each of $3.5\cdot 10^6$ data points to which we applied the inverse Radon transformation~\cite{toft96.phd} to derive the Wigner functions. Fig.~\ref{wigner}(a) shows the density plots of the Wigner function associated with the noisy state; its non-Gaussian nature is evident.
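The conditioning used for Figs.~\ref{marginals} and \ref{threshold} amounts to a simple selection on the recorded (tap, signal) pairs. As a rough illustration of this post-selection step only, the following Python sketch uses synthetic Gaussian data in place of the measured photocurrents (the widths, displacement and mixing probability are loosely chosen for illustration and are not the experimental values).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Synthetic (tap, signal) pairs standing in for the digitized photocurrents:
# with probability gamma the state carries the displacement, which shifts
# both the tap reading and the squeezed signal quadrature.
gamma = 0.5
displaced = rng.random(n) < gamma
tap = rng.normal(0.0, 2.4, n) + 6.3 * displaced      # noisy tap quadrature
signal = rng.normal(0.0, 0.74, n) + 1.9 * displaced  # squeezed signal quadrature

def variance_db(x):
    """Variance relative to a unit shot-noise level, in dB."""
    return 10.0 * np.log10(np.var(x))

print(f"no post selection: {variance_db(signal):+.2f} dB")
for p_th in (0.0, 3.0, 6.0):
    keep = tap > p_th
    print(f"threshold {p_th:.1f} (success {keep.mean():.2f}): "
          f"{variance_db(signal[keep]):+.2f} dB")
\end{verbatim}
Raising the threshold selects predominantly one component of the mixture, and the variance of the kept signal samples falls below the shot-noise reference at the expense of the success probability.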
In Fig.~\ref{wigner}(b) we present the Wigner function of the post selected signal and observe a distribution very similar to that of a single squeezed state. Thus the purity of the state relative to the non-Gaussian state is increased and a corresponding recovery of the squeezing is seen. The shift to the right reflects the post selection process as well as the renormalization of the distilled data. Further investigation of the Wigner functions of fiber-based squeezed states is described elsewhere~\cite{marquardt06.tbp}. \begin{figure} \caption{Density plots of the Wigner function distributions for a) the non-Gaussian mixed state measured at the verification setup and b) the Wigner function of the corresponding distilled state. Note: To aid visualization the plots have been vertically rescaled by a factor of two.} \label{wigner} \end{figure} We have successfully demonstrated experimentally the probabilistic distillation of continuous variable non-classical states from a non-Gaussian mixture of squeezed states. This was accomplished by the thorough investigation of a specific source of non-Gaussian noise, namely a linear shift in phase space. The methods presented here can be implemented for many other forms of non-Gaussian noise, e.g. phase space rotations, which will be the subject of further experiments. Another extension of the work presented here would be to perform a conditional optical operation, i.e. phase shift or displacement, on the signal beam. Thus not only the distillation demonstrated here but also a purification of the excess noise of the initially squeezed states could be implemented, generating an even purer non-classical resource than produced here. Whilst we have focused on single-mode squeezed states, these techniques can assuredly be extended to two-mode squeezed systems. This means that continuous variable entanglement distillation is possible using local Gaussian operations and classical communication if the two-mode squeezing is corrupted by non-Gaussian noise. We thank M.~Chekhova for fruitful collaboration. This work has been supported by the EU project COVAQIAL (project no. FP6-511004). R.~F. was supported by: 202/03/D239 of GACR, MSM6198959213 of MSMT CR and by the Alexander von Humboldt foundation. \end{document}
\begin{document} \title{Polarization Squeezing in Degenerate Parametric Amplification of Coherent Light} \author{Namrata Shukla} \email{[email protected]} \affiliation{Now at Quantum Information and Computation Group,\\ Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India} \affiliation{Department of Physics,\\ University of Allahabad, Allahabad, Allahabad-211001, UP, India} \author{Ranjana Prakash} \affiliation{Department of Physics,\\ University of Allahabad, Allahabad, Allahabad-211001, UP, India} \date{\today} \begin{abstract} We study polarization squeezing of a light beam initially in the coherent state using the nonlinear interaction Hamiltonian $ H=k\big(\hat a_{x}^{\dagger2}+{\hat a_{x}}^2\big)$. For the degree of polarization squeezing, we use a definition written in the final form by the authors and also used in earlier papers. We find that the polarization squeezing can be very high and the degree of polarization squeezing can be less than unity by a very small amount. We achieve the polarization squeezing even for very low beam intensity under some conditions involving phase angles in the polarization modes. \end{abstract} \keywords{Polarization squeezing; Stokes operators; Coherent state; Parametric amplification.} \maketitle \section*{Introduction} Polarization of light was initially defined with the help of Stokes parameters \cite{1,2} $(S_1, S_2, S_3)$. The point having coordinates $(S_1, S_2, S_3) $ lies on a sphere of radius $S_0$, called the Poincar\'e sphere, represented by \begin{equation} \label{eq1} \bm S^2=S_{0}^2=S_{1}^2+S_{2}^2+S_{3}^2. \end{equation} For unpolarized light $ |{\bm S}|=0 $ and for partially polarized light $ S_0>|{\bm S}| $. Stokes parameters involve coherence functions\cite{3} of order (1,1) and it has been realized that these are insufficient to describe polarization completely, as $\bm{S}=0$ does not represent only unpolarized light.\cite{4} These parameters are still relevant because of the non-classicalities associated with polarization, {\it viz}, the polarization squeezing and polarization entanglement.\cite{5} \\ Appropriate continuous variables for non-classical polarization states are the Stokes operators\cite{6}, which have a clear analogy with the Stokes parameters. These operators $\hat S_{0}$ and ${\hat{\bm S}}=(\hat S_{1}, \hat S_{2}, \hat S_{3})$ have a similarity with spin variables of atomic systems and are given by \begin{equation} \label{eq2} \hat S_{0, 1}=\hat a_{x}^\dagger \hat a_{x}\pm \hat a_{y}^\dagger \hat a_{y},~ \hat S_{2}+i \hat S_{3}=2 \hat a_{x}^\dagger \hat a_{y}. \end{equation} Simultaneous exact measurement of these variables is impossible, as they obey the uncertainty relations limiting their variances, written as \begin{equation} \label{eq3} V_{j} V_{k}\geqslant{\expec{\hat S_{l}}}^2,~ V_{j}\equiv\expec{\hat S_{j}^2}- {\expec{\hat S_{j}}}^2, \end{equation} which can be obtained by using the commutation relations \begin{equation} \label{eq4} [\hat S_0, \hat S_j]=0,~[\hat S_j, \hat S_k]=2i{\sum_{l}\epsilon_{jkl}}~\hat S_{l}. \end{equation} Here $\epsilon_{jkl}$ is the Levi-Civita symbol for $(j,k,l=1,2,$ or $3,~j\neq k\neq l\neq j)$, since the annihilation and creation operators follow the commutation relations $ [\hat a_{j}, \hat a_{k}^\dagger]=\delta_{jk}$.\\ Similar to the concept of ordinary squeezing, polarization squeezing is defined using the commutation relations followed by Stokes operators and uncertainty products.
Since polarization squeezing has applications in quantum information theory, it is desirable to devise methods for generating states with appreciable polarization squeezing. \\ In this paper, we investigate polarization squeezing, using the most general criterion for it, of coherent light subjected to a squeezing operation, namely degenerate parametric amplification with a nonlinear interaction Hamiltonian, and we report a closed-form result. The incident coherent beam shows high polarization squeezing after the nonlinear interaction, and the degree of polarization squeezing can fall short of unity by only a very small amount for certain photon intensities and phases in the two polarization modes. It is important to note that in most realizations of polarization squeezing the beam intensity has been very large, in order to efficiently realize a quadrature measurement. However, in this study we obtain a tighter bound on the beam intensity, with some additional conditions. \section*{Criterion for Polarization squeezing} Polarization squeezing was first defined by Chirkin et al.\cite{7} as \begin{equation} \label{eq5} V_{j}<V_{j} (coh)=\hat S_{0}, \end{equation} {\it i.e.}, $\hat S_{j}$ is squeezed if the variance of a Stokes operator is less than that for an equally intense coherent state.\\ Heersink et al.\cite{8}, in an earlier paper, defined polarization squeezing using the uncertainty relations given by Eq.~(\ref{eq3}) in the form \begin{equation} \label{eq6} V_{j}<\mid\expec{\hat S_{l}}\mid<V_{k} ~~for~~ j\neq k \neq l, \end{equation} for squeezing of the component $\hat S_{j}$ of the Stokes operator vector. \\ Luis and Korolkova\cite{9} finally compared various criteria and gave the following criterion for polarization squeezing of a component of $\hat {\bm S}$ along a unit vector ${\bm n}$ as \begin{equation} \label{eq7} V_{\bm n}<\mid\expec{\hat S_{\bm n_{\perp}}}\mid, \end{equation} where $ \hat S_{\bm n_{\perp}}$ is the component of $\hat {\bm S}$ along a unit vector $\bm n_{\perp}$ which is perpendicular to $ \bm n $. For suitable orthogonal components $ \hat S_{\bm n} $ and $ \hat S_{\bm n_{\perp}}$, they have discussed the order of stringency of the various criteria as \begin{equation} \label{eq8} V_{\bm n}<\expec{\hat S_{\bm n_{\perp}}}^2 / \expec{\hat S_{0}} <\mid\expec{\hat S_{\bm n_{\perp}}}\mid<\expec{\hat S_0}. \end{equation} The authors finally have written the criterion for polarization squeezing in the form \cite{10} \begin{eqnarray} \label{eq9} V_{\bm n}\equiv\expec{\Delta \hat S_{\bm n}^2} &< ~~~~ {|\expec{\hat S_{\bm n_{\perp}}}|}_{max} \nonumber \\ &~~~~ =\sqrt{{|\expec{\hat {\bm S}}|}^2-{\expec{\hat S_{\bm n}}}^2}, \end{eqnarray} arguing that there are infinitely many directions $ \bm n_{\perp} $ for a given component $ \hat S_{\bm n} $ and that it is therefore necessary to consider the maximum possible value of $ \mid\expec{\hat S_{\bm n_{\perp}}}\mid $. We shall use the criterion in Eq.~(\ref{eq9}) for polarization squeezing, which is the most general and is based on the actual uncertainty relations. We define the polarization squeezing factor $ \mathcal{S}_{\bm n}$ and the degree of polarization squeezing $ \mathcal{D}_{\bm n} $ as \begin{equation} \label{eq10} \mathcal{S}_{\bm n} =\frac{V_{\bm n}}{\sqrt{{\mid\expec{\hat {\bm S}}\mid}^2-{\expec{\hat S_{\bm n}}}^2}},~ \mathcal{D}_{\bm n}=1-\mathcal{S}_{\bm n}.
\end{equation} Non-classicalities are observed when $ 1>\mathcal{S}_{\bm n}>0 $ and the degree of polarization squeezing $ \mathcal{D}_{\bm n}$ lies between $0$ and $1$. \section*{Hamiltonian and Polarization squeezing} Consider a beam of two-mode coherent radiation which we pass through a squeezing operation, degenerate parametric amplification, that changes the $x$ mode through the nonlinear interaction. The Hamiltonian in the interaction picture\cite{11} for this kind of squeezing operation can be written as \begin{equation} \label{eq11} H=k\big(\hat a_{x}^{\dagger2}+{\hat a_{x}}^2\big), \end{equation} where $ \hat a_{x,y} $ are the annihilation operators for the two linear polarizations $x$ and $y$. The annihilation operators at time $t$ after this nonlinear interaction are given by \begin{eqnarray} \label{eq12} \hat a_x(t)&=&c~\hat a_x-is~\hat a_{x}^\dagger, \nonumber \\ \hat a_y(t)&=&\hat a_y, \end{eqnarray} where $c=\cosh 2kt,~s=\sinh 2kt$, $k$ being a coupling constant.\\ If the incident light is in the coherent state $ \ket{\alpha, \beta}$ with $ \alpha=A\cos{\theta}~e^{i\phi_x},~\beta=A\sin{\theta}~e^{i\phi_y}$, straightforward calculations give the expectation values and variances of the Stokes operators after the interaction time $ kt $ as \begin{eqnarray} \label{eq13} \expec{\hat S_1}&=&(c^2+s^2)|\alpha|^2-|\beta|^2-2cs\big(|\alpha|^2\sin2\phi_x\big),\nonumber\\ \expec{\hat S_2}&=&2c|\alpha||\beta|\cos(\phi_x-\phi_y)-2s|\alpha||\beta|\sin(\phi_x+\phi_y),\nonumber\\ \expec{\hat S_3}&=&2s|\alpha||\beta|\cos(\phi_x+\phi_y)-2c|\alpha||\beta|\sin(\phi_x-\phi_y),\nonumber\\ \end{eqnarray} and \begin{eqnarray} \label{eq14} V_1&=&(c^2+s^2)|\alpha|^2+2c^2s^2\big(2|\alpha|^2+1\big)+|\beta|^2\nonumber\\ &&-4cs(c^2+s^2)|\alpha|^2\sin2\phi_x,\nonumber\\ V_2&=&c^2\big(|\alpha|^2+|\beta|^2\big)+s^2\big(|\alpha|^2+|\beta|^2+1\big)\nonumber\\ &&-2cs\big(|\alpha|^2\sin2\phi_x+|\beta|^2\sin2\phi_y\big),\nonumber\\ V_3&=&c^2\big(|\alpha|^2+|\beta|^2\big)+s^2\big(|\alpha|^2+|\beta|^2+1\big)\nonumber\\ &&-2cs\big(|\alpha|^2\sin2\phi_x+|\beta|^2\sin2\phi_y\big).\nonumber\\ \end{eqnarray} In order to study the maximum polarization squeezing, we perform extensive numerics for the minimization of the squeezing factors obtained by plugging in the values for all three Stokes operators $\hat S_1 $, $ \hat S_2 $ and $\hat S_3 $ using the criterion mentioned in Eq.~(\ref{eq10}). This minimization was done for a fixed number of photons in both polarization modes, and we observe that, with the forms of the polarization squeezing factors obtained for the sets of values $(\phi_x,\phi_y)$ corresponding to maximum polarization squeezing in $\hat S_1 $ and $\hat S_3 $, it is challenging to perform analytical calculations. \\ However, we find that $\mathcal{S}_{2}$ attains its minimum at $(\phi_x,\phi_y)=\big(\frac{\pi}{4},\frac{\pi}{4}\big), \big(\frac{5\pi}{4},\frac{\pi}{4}\big), \big(\frac{\pi}{4},\frac{5\pi}{4}\big)$ and $\big(\frac{5\pi}{4},\frac{5\pi}{4}\big)$, making it analytically feasible to find the conditions on photon intensity and time for maximum polarization squeezing to occur, though it is difficult to show that these are the only minima.
The polarization squeezing factor and degree of polarization squeezing for the Stokes operator $\hat S_2$ have the form \begin{equation} \label{eq15} \mathcal{S}_{2}=\frac{V_2}{\sqrt{{\expec{\hat S_1}}^2+{\expec{\hat S_3}}^2}}, ~\mathcal{D}_{2}=1-\mathcal{S}_{2}. \end{equation} For these sets of values $(\phi_x,\phi_y)$, the expression for $\mathcal{S}_{2}$ can be written as \begin{equation} \label{eq16} \mathcal{S}_{2}=\frac{(c-s)^2\big(|\alpha|^2+|\beta|^2\big)+s^2}{\big|(c-s)^2|\alpha|^2-|\beta|^2\big|}, \end{equation} or \begin{equation} \label{eq17} \mathcal{S}_{2}=\frac{Y}{|X|}, \end{equation} where $ X=(c-s)^2|\alpha|^2-|\beta|^2,~Y=(c-s)^2\big(|\alpha|^2+|\beta|^2\big)+s^2$. Polarization squeezing, {\it i.e.}, $\mathcal{S}_{2}<1$, cannot occur when $X>0$, as $Y-|X|=|\beta|^2(1+e^{-4kt})$ cannot take negative values. For polarization squeezing, therefore, we look for $X<0$ and $|X|=-X$. Let us write \begin{equation} \label{eq18} X=|\alpha|^2(e^{-4kt})+\frac{1}{4}\big(e^{4kt}+e^{-4kt}-2\big)-|\beta|^2; \end{equation} for $kt\to\infty,~ X>0$, and the continuously varying function $X$ can change sign; if it passes through zero, $X=0$ gives the two solutions \begin{equation} \label{eq19} t_{01,02}=\frac{1}{4k}\log\big[1+2|\beta|^2\mp2\sqrt{|\beta|^4+|\beta|^2-|\alpha|^2}\big]. \end{equation} For $t_{01,02}$ to be real, we require the condition $|\beta|^4+|\beta|^2-|\alpha|^2>0$. When this condition holds, a real and positive $t_{02}$ is obtained. $t_{01}$ is also real, as $ 1+2|\beta|^2>2\sqrt{|\beta|^4+|\beta|^2-|\alpha|^2}$, but $t_{01}$ is positive only for $2|\beta|^2>2\sqrt{|\beta|^4+|\beta|^2-|\alpha|^2}$, {\it i.e.}, $|\alpha|^2>|\beta|^2$. Thus, we have the following two cases: \begin{itemize} \item [Case 1]: For $|\alpha|^2>|\beta|^2$, $ 0<t_{01}<t_{02}$ and $X$ takes negative values for the time interval $t_{01}<t<t_{02}$. \item [Case 2]: For $|\alpha|^2\eqslantless|\beta|^2$, $t_{01}\eqslantless0<t_{02}$ and hence $X$ is negative for the interval $0<t<t_{02}$. \end{itemize} To find a more confined region of polarization squeezing on the time axis, we need to look for the time interval in which $\mathcal{S}_{2}<1$, or $Y<|X|=-X$, as is clear from Eq.~(\ref{eq17}). The boundaries of this time interval are given by the roots of the equation $Y-|X|=Y+X=0$, as at these times $\mathcal{S}_{2}=1$, and this equation gives the real roots \begin{equation} \label{eq20} t_{1,2}=\frac{1}{4k}\log\big[1+2|\beta|^2\mp 2\sqrt{|\beta|^4-4|\alpha|^2}\big], \end{equation} for $|\beta|^4>4|\alpha|^2$. It shows that $0<t_1<t_2$. It can also be seen that $t_{01}<t_1$ as $ e^{-4kt_1}-e^{-4kt_{01}}=-|\beta|^2-\sqrt{|\beta|^4-4|\alpha|^2} +2\sqrt{|\beta|^4+|\beta|^2-|\alpha|^2}$ is positive for $|\alpha|^2=0$ and its derivative with respect to $|\alpha|^2$, {\it viz.}, $\frac{2}{\sqrt{|\beta|^4-4|\alpha|^2}}-\frac{1}{\sqrt{|\beta|^4+|\beta|^2-|\alpha|^2}}$, is positive. We can similarly show that $t_2<t_{02}$, and thus we have $t_{01}<t_1<t_2<t_{02}$.
Therefore, for $|\alpha|^2>|\beta|^2$, $t_{01}>0$ and $X<0$ in the interval $t_{01}<t<t_{02}$, within which polarization squeezing occurs in the interval $t_1<t<t_2$. For $|\alpha|^2\eqslantless|\beta|^2$, however, $t_{01}\eqslantless0$ and hence $X<0$ in the interval $0<t<t_{02}$, within which polarization squeezing again occurs in the interval $t_1<t<t_2$. This becomes clear if we write \begin{equation} \label{eq21} Y-|X|=\frac{1}{2}e^{-4kt}(e^{4kt}-e^{4kt_1})(e^{4kt}-e^{4kt_2}). \end{equation} If $x=e^{4kt}-1$ and $x_{1,2}$ are the values of $x$ for $t_1$ and $t_2$, respectively, the expression for the polarization squeezing factor can be written as \begin{equation} \label{eq22} \mathcal{S}_{2}=\frac{\big(4|\alpha|^2+|\beta|^2\big)+x^2}{(x-x_1)(x-x_2)}. \end{equation} Minimization of this expression with respect to $x$ by the derivative method, for a positive root of the equation $\frac{d\mathcal{S}_2}{dx}=0$, gives the minimum value of the polarization squeezing factor, corresponding to maximum polarization squeezing, as \begin{equation} \label{eq23} \mathcal{S}_{2 min}=\frac{\sqrt{1+|\alpha|^2+|\beta|^2}-1}{1+|\beta|^2-\sqrt{1+|\alpha|^2+|\beta|^2}}, \end{equation} at time \begin{equation} \label{eq24} t=t_{2min}=\frac{1}{4k}\log \big[{2\sqrt{1+|\alpha|^2+|\beta|^2}-1}\big]. \end{equation} $\mathcal{S}_{2}$ is less than 1 for $|\beta|^4>4|\alpha|^2$. We thus note that for polarization squeezing to occur one must have the two conditions $|\beta|^4>4|\alpha|^2$ and $|\beta|^4>|\alpha|^2-|\beta|^2$. The condition $|\beta|^4>4|\alpha|^2$ is the stronger condition, and the other one is certainly satisfied whenever it holds. \section*{Special case of equally intense modes} As a special case of the above, when $|\alpha|^2=|\beta|^2$, {\it i.e.}, equal numbers of photons in the two polarization modes, Eqs.~(\ref{eq19}) and (\ref{eq20}) reduce to \begin{equation} \label{eq25} t_{01,02}=0, \frac{1}{4k}\log\big[1+|\alpha|^2\big], \end{equation} and \begin{equation} \label{eq26} t_{1,2}=0, \frac{1}{4k}\log\big[1+|\alpha|^2\mp|\alpha|\sqrt{|\alpha|^2-4}\big]. \end{equation} Polarization squeezing in this case is quantified by the polarization squeezing factor \begin{equation} \label{eq27} \mathcal{S}_{2min}=\frac{\sqrt{1+2|\alpha|^2}-1}{\big(1+|\alpha|^2\big)-\sqrt{1+2|\alpha|^2}}, \end{equation} at the time \begin{equation} \label{eq28} t=t_{2min}=\frac{1}{4k}\log\big[2\sqrt{1+2|\alpha|^2}-1\big]. \end{equation} The general condition for polarization squeezing to occur, {\it i.e.}, $|\beta|^4>4|\alpha|^2$, reduces to $|\alpha|^2>4$ in this case. \\ \section*{Discussion of Results} For a given number of incident photons, we can have the maximum polarization squeezing \begin{equation} \label{eq29} \mathcal{S}_{2min}=\frac{\sqrt{1+N^2}-1}{1+N^2\sin^2\chi-\sqrt{1+N^2}},\nonumber \end{equation} where $ N=|\alpha|^2+|\beta|^2$ and $\chi= \tan^{-1}\big(\frac{|\beta|}{|\alpha|}\big)$.
This gives the further minimized value of the polarization squeezing factor, exhibiting the maximum polarization squeezing, as \begin{equation} \label{eq30} \mathcal{S}_{min}=\frac{1}{\sqrt{1+N^2}},\nonumber \end{equation} at time $t_{min}=\frac{1}{4k}\log[2\sqrt{1+N}-1]$ and $\chi=\frac{\pi}{2}$, {\it i.e.}, $ N=|\beta|^2$ and $|\alpha|^2=0$. It is observed that, in order to achieve maximum polarization squeezing, the intensity of light should tend to zero in the parametrically amplified mode and be maximal in the counter-mode. We found that the maximum polarization squeezing can be calculated analytically for the Stokes parameter $\hat S_2 $ by minimization of the polarization squeezing factor $\mathcal{S}_2$. It is shown that the degree of polarization squeezing is maximal for certain definite combinations of $\phi_x$ and $\phi_y$, {\it e.g.}, $(\phi_x,\phi_y)=\big(\frac{\pi}{4},\frac{\pi}{4}\big)$ and other values. However, it is not possible to show analytically that these are the only maxima. In this particular case, we see analytically that polarization squeezing may occur only if the denominator $X$ in the expression for the polarization squeezing factor in Eq.~(\ref{eq17}) takes a negative value. The variation of polarization squeezing with the negativity of the denominator $X$ is depicted in Fig.~\ref{f1}, which indicates the time range for the occurrence of polarization squeezing in the case $|\alpha|^2>|\beta|^2$ as an example. For polarization squeezing to occur, the condition on photon number $|\beta|^4>4|\alpha|^2$ automatically covers the condition $|\beta|^4>|\alpha|^2-|\beta|^2$, since $4|\alpha|^2>|\alpha|^2-|\beta|^2$, and it reduces to $ |\alpha|^2>4$ for equal numbers of photons in the two polarization modes. It should be noted that this condition on the beam intensity allows polarization squeezing to be achieved even for a low beam intensity. Fig.~\ref{f2} shows the degree of polarization squeezing plotted against the interaction time $kt$, as in Eq.~(\ref{eq16}), in all three possible cases for the photon intensities in the two polarization modes. Some examples of photon number values are studied, and the plot shows that the maximum polarization squeezing is achieved for equal intensity of light in both modes. This plot also presents the pattern for the occurrence of polarization squeezing on the time axis. The contour plot for $\mathcal{S}_{2min}$ from Eq.~(\ref{eq16}) in the plane $ (|\alpha|^2, |\beta|^2)$ in Fig.~\ref{f3} clearly shows that $\mathcal{S}_{2min}<1$ for $|\beta|^2>2|\alpha|$, which is consistent with the essential condition $|\beta|^4>4|\alpha|^2$ for polarization squeezing. Since efficient polarization squeezing has been shown to be a resource for quantum communication, achieving a good degree of polarization squeezing in coherent light via a nonlinear interaction is important. This work gives the added advantage that polarization squeezing can be observed in such a system at low beam intensity, for significant interaction times, under some specific conditions.
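As a simple cross-check of the behaviour described above, the squeezing factor of Eq.~(\ref{eq15}) can also be evaluated numerically from the moments in Eqs.~(\ref{eq13}) and (\ref{eq14}). The short Python sketch below is only illustrative (the photon numbers and the scan range are arbitrary choices, not values singled out in the text); it scans the interaction time at the phase setting $(\phi_x,\phi_y)=(\pi/4,\pi/4)$ and reports the location and value of the minimum of $\mathcal{S}_2$.
\begin{verbatim}
import numpy as np

def squeezing_factor_S2(kt, n_alpha, n_beta, phi_x, phi_y):
    """S_2 of Eq. (15), evaluated from the moments of Eqs. (13)-(14);
    n_alpha = |alpha|^2 and n_beta = |beta|^2."""
    a, b = np.sqrt(n_alpha), np.sqrt(n_beta)
    c, s = np.cosh(2 * kt), np.sinh(2 * kt)
    S1 = (c**2 + s**2) * n_alpha - n_beta - 2 * c * s * n_alpha * np.sin(2 * phi_x)
    S3 = 2 * s * a * b * np.cos(phi_x + phi_y) - 2 * c * a * b * np.sin(phi_x - phi_y)
    V2 = (c**2 * (n_alpha + n_beta) + s**2 * (n_alpha + n_beta + 1)
          - 2 * c * s * (n_alpha * np.sin(2 * phi_x) + n_beta * np.sin(2 * phi_y)))
    return V2 / np.sqrt(S1**2 + S3**2)

# Equal intensities and the phase setting (pi/4, pi/4) identified in the text.
kt = np.linspace(0.01, 1.0, 2000)
S2 = squeezing_factor_S2(kt, 10.0, 10.0, np.pi / 4, np.pi / 4)
i = np.argmin(S2)
print(f"min S_2 = {S2[i]:.3f} at kt = {kt[i]:.3f}  (squeezing wherever S_2 < 1)")
\end{verbatim}
The same function can be swept over the photon numbers $|\alpha|^2$ and $|\beta|^2$ to explore the intensity dependence discussed above.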
\begin{figure}[th] \centerline{\psfig{file=SX1.eps,width=8cm}} \vspace*{8pt} \caption{Variation of the polarization squeezing factor $\mathcal{S}_2$ compared to the negativity of $X$ with interaction time $kt$, for $|\alpha|^2=10,~|\beta|^2=8$.\label{f1}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=Squeezing.eps,width=8cm}} \vspace*{8pt} \caption{Variation of the degree of polarization squeezing $\mathcal{D}_2$ with interaction time $kt$.\label{f2}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=Contour.eps,width=8cm}} \vspace*{8pt} \caption{Contour plots for different values of the minimum polarization squeezing factor $\mathcal{S}_{2min}$ in the $(|\alpha|^2$,~$|\beta|^2)$ plane. $\mathcal{S}_{2min}$ increases with increasing value of $|\alpha|^2$ but decreases with $|\beta|^2$.\label{f3}} \end{figure} \begin{thebibliography}{0} \bibitem{1} G. G. Stokes, {\it Trans. Cambridge Philos. Soc.} {\bf 9}, 399 (1852). \bibitem{2} M. Born and E. Wolf, {\it Principles of Optics} (Cambridge University Press, England, 1999). \bibitem{3} L. Mandel and E. Wolf, {\it Optical Coherence and Quantum Optics} (Cambridge Univ. Press, England, 1995). \bibitem{4} H. Prakash and N. Chandra, {\it Physical Review A} {\bf 4}, 796 (1971); H. Prakash and N. Chandra, {\it Physical Review A} {\bf 9}, 1021 (1974); H. Prakash and N. Chandra, {\it Physics Letters A} {\bf 34}, 28 (1971). \bibitem{5} U. L. Anderson and P. Buchhave, {\it Journal of Optics B} {\bf 5}, 486 (2003); O. Glockl, J. Heersink, N. Korolkova, G. Leuchs, and S. Lorenz, {\it Journal of Optics B} {\bf 5}, 492 (2003); N. Korolkova and R. Loudon, {\it Physical Review A} {\bf 71}, 032343 (2005); N. Korolkova, G. Leuchs, R. Loudon, T. C. Ralph, and Ch. Silberhorn, {\it Physical Review A} {\bf 65}, 052306 (2002); Yu. M. Golubeva, T. Yu. Golubev, M. I. Kolobov, and E. Giacobino, {\it Physical Review A} {\bf 70}, 053817 (2004); W. P. Bowen, R. Schnabel, H. A. Bachor, and P. K. Lam, {\it Physical Review Letters} {\bf 88}, 093601 (2002); A. P. Aldojants and S. M. Arakelian, {\it Journal of Modern Optics} {\bf 46}, 475 (1999); N. V. Korolkova and A. S. Chirkin, {\it Journal of Modern Optics} {\bf 43}, 869 (1996); M. Lassen, M. Sabunch, P. Buchhave, and U. L. Anderson, {\it Optics Express} {\bf 15}, 5077 (2007); D. N. Klyscho, {\it JETP} {\bf 84}, 1065 (1997); A. P. Aldojants, S. M. Arakelian, and A. S. Chirkin, {\it Quantum Semiclassical Optics} {\bf 9}, 311 (1997); J. F. Sherson and K. M\o lmer, {\it Physical Review Letters} {\bf 97}, 143602 (2006); J. Heersink, V. Josse, G. Leuchs, and V. L. Anderson, {\it Optics Letters} {\bf 30}, 1192 (2005); P.~Usachev, J.~Soderholm, G.~Bjork, and A.~Trifonov, {\it Optics Communications} {\bf 193}, 161 (2001); A. P. Aldojants, A. M. Arakelian, and A. S. Chirkin, {\it JETP} {\bf 108}, 63 (1995); A. S. Chirkin and V. V. Volokhovsky, {\it Journal of Russian Laser Research} {\bf 16}, 6 (1995). \bibitem{6} See Ref.~[4]. \bibitem{7} A. S. Chirkin, A. A. Orlov, and D. Y. Parashchuk, {\it Quantum Electronics} {\bf 23}, 870 (1993). \bibitem{8} J. Heersink, T. Lorenz, O. Glockl, N. Korolkova, and G. Leuchs, {\it Physical Review A} {\bf 68}, 013815 (2003). \bibitem{9} A. Luis and N. Korolkova, {\it Physical Review A} {\bf 74}, 043817 (2006). \bibitem{10} R. Prakash and N. Shukla, {\it Optics Communications} {\bf 284}, 3568 (2011). \bibitem{11} M. O. Scully and M. S. Zubairy, {\it Quantum Optics} (Cambridge Univ. Press, Cambridge, 1997). \end{thebibliography} \end{document}
\begin{document} \title[Meet-reducible submaximal clones and nontrivial equivalence relations]{Meet-reducible submaximal clones determined by nontrivial equivalence relations.} \author[L. E. F. Di\'ekouam]{Luc E. F. Di\'ekouam} \email{[email protected]} \address{Department of Mathematics\\Ecole Normale Sup\'erieure\\ University of Maroua \\\text{ }\hspace{2cm} P.O. Box 55 Maroua \\Cameroon} \author[E. R. A. Temgoua]{\'Etienne R. A. Temgoua} \email{[email protected]} \address{Department of Mathematics\\Ecole Normale Sup\'erieure\\University of Yaound\'e-1\\\text{ }\hspace{2cm} P.O. Box 47 Yaound\'e\\Cameroon} \author[M. TONGA]{Marcel TONGA} \email{[email protected]} \address{Department of Mathematics\\Faculty of Sciences\\University of Yaound\'e-1\\ P.O. Box 812 Yaound\'e\\Cameroon} \subjclass[2010]{Primary: 08A40; Secondary: 08A02, 18B35.} \keywords{clones, submaximal, equivalence relations, maximal} \begin{abstract} The structure of the lattice of clones on a finite set has been proven to be very complex. To better understand the top of this lattice, it is important to provide a characterization of submaximal clones in the lattice of clones. It is known that the clones $\text{Pol}(\theta)$ and $\text{Pol}(\rho)$ (where $\theta$ is a nontrivial equivalence relation on $E_k = \{0, \ldots, k - 1\}$, and $\rho$ is among the six types of relations which characterize maximal clones) are maximal clones. In this paper, we provide a classification of the relations $\rho$ (from Rosenberg's list) on $E_k$ such that the clone $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. \end{abstract} \maketitle \section{Introduction} The structure of the lattice of clones on a finite set of more than two elements is quite complex. An indication of this complexity is its cardinality, which is $2^{\aleph_0}$. For a better picture of some intervals in this lattice, it is important to provide a characterization of maximal and submaximal clones. Maximal clones have been investigated extensively by I. G. Rosenberg, and a complete characterization of these can be found in \cite{ROS65}. More precisely, it is proved that for a given nontrivial equivalence relation $\theta$ on a finite set and a central relation $\rho$ on the same set, the clones $\text{Pol}(\theta)$ and $\text{Pol}(\rho)$ are maximal. For a unary central relation on an arbitrary finite set, Rosenberg and Szendrei \cite{ROS70-2, ROS85} investigated the submaximal clones of their polymorphisms and obtained new results on polymorphisms of prime permutations on a finite set. Submaximal clones for sets with two and three elements were completely described and classified in \cite{LAU,PO21,PO41}. However, for sets with more than three elements, only partial results on their submaximal clones are found in the literature (see, e.g., \cite{LAU}). Recently, Temgoua and Rosenberg \cite{TEM-ROS} obtained a characterization of all binary central relations such that the clone $\text{Pol}(\theta)\cap\text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$, given any nontrivial equivalence relation $\theta$ and a binary central relation $\rho$. In this paper, we characterize all relations $\rho$ such that the clone $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$, where $\theta$ is a nontrivial equivalence relation on a given finite set. The rest of the paper is organized as follows: Section \ref{sec2} recalls the necessary basic definitions and notations for the clarity of our presentation.
In Section \ref{sec3}, we show that there is no submaximality when $\rho$ is a partial order or a prime affine relation. Section \ref{sec4} is devoted to the characterization of the types of equivalence relations and prime permutation relations which give submaximality. In Section \ref{sec5}, we characterize the relations $\rho$ (central relations or $h$-regularly generated relations) for which $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. Section \ref{sec8} concludes the paper. \section{Preliminaries}\label{sec2} In this section, we recall some of the main definitions and notations. Readers needing more background on the topic are encouraged to consult \cite{LAU}. Let $E_k=\{0,1,\hdots,k-1\}$ be a finite set of $k$ elements with $k\geq 3$. Let $n,s\in\mathbb{N}^{*}$. An $n$-ary operation on $E_k$ is a function from $E_k^n$ to $E_k.$ The set of all $n$-ary operations on $E_k$ is denoted by $\mathcal{O}^n(E_k)$ and we set $\mathcal{O}(E_k)=\bigcup\limits_{0<n<\omega}\mathcal{O}^n(E_k).$ For $1\leq i \leq s$, the $s$-ary $i$-th projection $e_i^s$ is defined as $e_i^s(x_1,\hdots,x_s)=x_i$ for all $x_1,\hdots,x_s.$ For $f\in \mathcal{O}^n(E_k)$ and $g_1,\hdots, g_n\in \mathcal{O}^m(E_k)$, we define their composition to be the $m$-ary operation $f[g_1,\hdots,g_n]$ defined by: $$f[g_1,\hdots,g_n](x_1,\hdots,x_m)=f(g_1(x_1,\hdots,x_m),\hdots,g_n(x_1,\hdots,x_m)).$$ A clone on $E_k$ is a subset $F$ of $\mathcal{O}(E_k)$ which contains all the projections and is closed under composition. It is known that the intersection of an arbitrary set of clones on $E_k$ is a clone on $E_k.$ Thus for $F\subseteq \mathcal{O}(E_k)$, there exists a smallest clone containing $F$, called the clone generated by $F$ and denoted by $\langle F\rangle.$ $\langle F\rangle$ is also the set of term operations of the non-indexed algebra $\mathcal{A}=(A;F)$, with $A=E_{k}$. The clones on $E_k$, ordered by inclusion, form a complete lattice denoted by $\mathcal{L}(E_k).$ A clone $C\in\mathcal{L}(E_k)$ is called maximal if it is covered only by $\mathcal{O}(E_k)$. A clone $C\in\mathcal{L}(E_k)$ is called \textit{submaximal} if it is covered only by a maximal clone. Let $h$ be a positive integer. An $h$-ary relation $\rho$ is a subset of $E_k^h.$ For $\rho\subseteq E_k^2$, we write $a \,\rho\, b$ for $(a,b)\in \rho.$ An $h$-ary relation $\rho$ is called totally symmetric if for every permutation $\sigma$ of $\{1,\hdots,h\}$ and each $h$-tuple $(a_1,\hdots, a_h)\in E_k^h$, $$(a_1,\hdots, a_h)\in\rho \text{ if } (a_{\sigma(1)},\hdots, a_{\sigma(h)})\in\rho.$$ $\tau_h^{E_k}$ is the $h$-ary relation defined by $(a_1,\hdots, a_h)\in\tau_h^{E_k}$ if there exist $i,j\in \{1,\hdots,h\}$ such that $i\neq j$ and $a_i=a_j.$ An $h$-ary relation $\rho$ is called totally reflexive if $\tau_h^{E_k}\subseteq \rho.$ For $h=2$, the concepts totally reflexive and totally symmetric coincide with the usual notions of reflexive and symmetric. If $\rho$ is totally reflexive and totally symmetric, we define the center of $\rho$, denoted by $C_{\rho}$, as follows: $$C_{\rho}=\{a\in E_k: (a,a_2,\hdots,a_h)\in \rho \text{ for all } a_2,\hdots,a_h\in E_k\}.$$ Let $\theta$ be a binary relation and $m\in\mathbb{N}^{*}$; for $\textbf{a} = (a_1, \ldots, a_m)\in E^{m}_{k}$ and $\textbf{b} = (b_1,\ldots, b_m)\in E_{k}^{m}$, we write $\textbf{a}\theta\textbf{b}$ if $(a_i,b_i) \in \theta$ for $1 \leq i \leq m$. Assume that $\theta$ is an equivalence relation on $E_{k}$.
The $\theta$-class of $a\in E_k$ will be denoted by $[a]_{\theta}$. A permutation $\pi$ on $E_{k}$ is prime if all cycles of $\pi$ have the same prime length. A subset $\sigma\subseteq E_{k}^{4}$ is called affine if there is a binary operation $+$ on $E_{k}$ such that $(E_{k},+)$ is an abelian group and $(a, b, c, d)\in\sigma\Leftrightarrow a + b = c + d$. An affine relation $\sigma$ is prime if $(E_{k},+)$ is an abelian $p$-group for some prime $p$, that is, all elements of the group have the same prime order $p$. An $h$-ary relation $\rho$ on $E_k$ is called central if either $h=1$ and $\rho$ is a nonempty proper subset of $E_{k}$, or $\rho$ has the following three properties: $\rho$ is totally reflexive; $\rho$ is totally symmetric; and $ C_{\rho}$ is a nonempty proper subset of $E_{k}$. For $h\geq3$, a family $T = \{\mathcal{V}_{1};\cdots;\mathcal{V}_{m}\}$ of equivalence relations on $E_{k}$ is called $h$-regular if each $\mathcal{V}_{i}$ has exactly $h$ equivalence classes and $\cap\{B_i| 1\leq i\leq m\}$ is nonempty for arbitrary equivalence classes $B_i$ of $\mathcal{V}_{i}$, $1\leq i\leq m$. For $3 \leq h\leq k$, an $h$-regular (or $h$-regularly generated) relation on $E_{k}$ determined by the $h$-regular family $T$ (often denoted by $\lambda_{T}$) consists of all $h$-tuples whose set of components meets at most $h-1$ classes of each $\mathcal{V}_{i}$ ($1\leq i\leq m$). Let $f\in\mathcal{O}^n(E_k)$ and $\rho$ be an $h$-ary relation on $E_{k}$. The operation $f$ preserves $\rho$ if for all $(a_{1,i}, \hdots , a_{h,i})\in\rho$ $(i = 1,\ldots, n)$, we have $$(f(a_{1,1},\hdots , a_{1,n}), f(a_{2,1}, \hdots, a_{2,n}), \hdots , f(a_{h,1}, \hdots , a_{h,n}))\in\rho.$$ The set of operations on $E_{k}$ preserving $\rho$ is a clone denoted by $\text{Pol}(\rho)$. The maximal clones have been described for $k = 2$ (respectively $k = 3$ and $k \geq 4$) in Post \cite{PO21} (respectively Jablonskij \cite{JAB} and Rosenberg \cite{ROS65,ROS70}). They are of the form $\text{Pol}(\rho)$ where $\rho$ belongs to one of six families of relations which include some familiar and easily defined relations. For clones $C$ and $D$ on $E_{k}$, we say that $C$ is maximal in $D$ if $D$ covers $C$ in $\mathcal{L}(E_k)$; we also say that $C$ is submaximal if $C$ is maximal in at least one maximal clone. All submaximal clones are known for $k = 2$ (see \cite{PO21}) and $k = 3$ (see \cite{LAU}). For $n \geq 3$, an $n$-ary operation $f$ is called a near-unanimity operation provided that $f(y,x,\hdots,x)\approx f(x,y,\hdots,x)\approx\hdots \approx f(x,x,\hdots,y)\approx x$ for all $x,y\in E_{k}.$ We recall the following Baker-Pixley Theorem which will be used to prove our results: \begin{theorem}\label{baker-pixley} {\cite{BA-PI}} Let $\mathcal{A}=(A,F)$ be a finite algebra which contains a ``near unanimity function'' of arity $d+1$ (a $(d+1)$-ary near-unanimity term or $nu$-term). Then, an operation $f:A^{n}\rightarrow A$ is a term function of $\mathcal{A}$ iff each subuniverse of $\mathcal{A}^{d}$ is preserved by $f$. \end{theorem} \section{Partial order, prime affine relations}\label{sec3} In this section we prove that the clone $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$, where $\theta$ is a nontrivial equivalence relation on $E_{k}$ and $\rho$ is either a partial order with least and greatest elements or a prime affine relation on $E_{k}$.
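To make the preservation condition just recalled concrete, the following brute-force Python sketch (purely illustrative and not part of the paper's development) tests whether a given operation preserves an $h$-ary relation on $E_k$, i.e.\ whether it belongs to $\text{Pol}(\rho)$; on small sets this is a direct transcription of the definition.
\begin{verbatim}
from itertools import product

def preserves(f, arity, relation):
    """Return True iff the `arity`-ary operation f preserves the h-ary
    relation `relation` (a set of h-tuples over E_k): applying f row-wise
    to any choice of `arity` tuples of the relation must again yield a
    tuple of the relation."""
    rel = set(relation)
    h = len(next(iter(rel)))
    for cols in product(rel, repeat=arity):        # one h-tuple per argument
        image = tuple(f(*(col[i] for col in cols)) for i in range(h))
        if image not in rel:
            return False
    return True

# Example on E_3 = {0,1,2}: theta is the equivalence relation with blocks {0,1}, {2}.
k = 3
theta = {(a, b) for a in range(k) for b in range(k) if (a < 2) == (b < 2)}

print(preserves(max, 2, theta))                    # True:  max lies in Pol(theta)
print(preserves(lambda x: (x + 1) % k, 1, theta))  # False: the cyclic shift does not
\end{verbatim}
The same predicate, applied to all operations up to a given arity, is what a brute-force check of inclusions between clones of the form $\text{Pol}(\theta)\cap\text{Pol}(\rho)$ would rest on for very small $k$.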
\begin{theorem}\label{maintheo-order-equiv} If $ \theta $ is a nontrivial equivalence relation and $ \rho $ is a partial order with least and greatest elements, on a finite set $E_{k} $, then $ \text{Pol}(\theta) \cap \text{Pol}(\rho)$ is not submaximal in $\text{Pol}(\theta)$. \end{theorem} \begin{proof} $\theta$ and $\rho$ are incomparable. In fact, $\theta\nsubseteq\rho$ because $\theta $ is a nontrivial symmetric relation and $\rho$ is an antisymmetric relation. We also have $\rho\nsubseteq\theta$; otherwise, since $\rho$ is a partial order with least and greatest elements and $\theta$ is a transitive relation, $\theta$ would be trivial (equal to $E_{k}^{2}$), a contradiction. Without loss of generality we can consider that the least element of $\rho$ is $0$ and the greatest element of $\rho$ is $1$. Let $\rho'$ and $r$ be the relations defined by: $\rho':=\theta\circ\rho\circ\theta$ and $r:=\rho\cap\theta$. If $\rho'\neq E_{k}^{2}$, we have $\text{Pol}(\theta)\cap\text{Pol}(\rho)\varsubsetneq\text{Pol}(\rho')\varsubsetneq\text{Pol}(\theta)$. Suppose that $\rho'= E_{k}^{2}$; then $(1,0)\in E_{k}^{2}=\rho'$ and it follows that $(1,0)\in\theta$ or there is $b\in E_{k}\setminus\{0\}$ such that $(b,0)\in\theta$. Therefore $r\neq \Delta_{E_{k}}=\{(a,a):\ \ a\in E_{k}\}$. Since $r$ and $r^{-1}$ are subsets of $\theta$, $r\circ r^{-1}$ is a subset of $\theta$. If $r\circ r^{-1}\neq \theta$, then $\text{Pol}(\theta)\cap\text{Pol}(\rho)\varsubsetneq\text{Pol}(\theta)\cap\text{Pol}(r\circ r^{-1})\varsubsetneq\text{Pol}(\theta)$. If $r\circ r^{-1}= \theta$, then $\text{Pol}(r)\varsubsetneq\text{Pol}(\theta)$ and it can be proved that each equivalence class of $\theta$ contains a least and a greatest element. It follows by Theorem 3.3 of \cite{TEM-meet-irred} that $\text{Pol}(r)$ is a meet-irreducible maximal subclone of $\text{Pol}(\theta)$. Using the fact that $\text{Pol}(\theta)\cap\text{Pol}(\rho)\varsubsetneq \text{Pol}(r)$, we conclude that $\text{Pol}(\theta)\cap\text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$. \end{proof} For a prime affine relation, there is no meet-reducible submaximality in the set of polymorphisms of a nontrivial equivalence relation. This is proved by the following theorem. \begin{theorem}\label{maintheo-primeaffine-equiv} Let $\alpha$ be a prime affine relation and $\theta$ a nontrivial equivalence relation. We have $$ \text{Pol}(\theta)\cap\text{Pol}(\alpha)\varsubsetneq\text{Pol}(\alpha_{1})\varsubsetneq\text{Pol}(\theta), $$ where $$\alpha_1 = \{(a, b, c, d)\in\alpha: (a, b),(a,c),(a,d) \in \theta\}.$$ \end{theorem} \begin{proof} Let $(a,b)\in\theta$ with $a\neq b$; then the binary operation $g_1$ defined on $E_k$ by: \begin{displaymath} g_1(x,y)=\left\{ \begin{array}{l l} a & \textrm{if}\;(x,y)\in\{(a,a);(a,b);(b,a)\}\\ b & \textrm{otherwise,} \end{array} \right. \end{displaymath} preserves $\theta$ and does not preserve $\alpha_{1}$. Therefore $\text{Pol}(\alpha_{1})\varsubsetneq \text{Pol}(\theta)$. Also it is easy to see that $\text{Pol}(\theta)\cap\text{Pol}(\alpha)\subseteq\text{Pol}(\alpha_{1})$. To continue, let $(a,b)\notin\theta$ and let $g_{2}$ be a ternary operation on $E_{k}$ defined by: \begin{displaymath} g_2(x,y,z)=\left\{ \begin{array}{l l} b & \textrm{if}\;(x,y,z)\theta(a,a,b)\\ a & \textrm{otherwise.} \end{array} \right. \end{displaymath} $g_2$ preserves $\alpha_{1}$ and does not preserve $\alpha$; therefore $\text{Pol}(\theta)\cap\text{Pol}(\alpha)\varsubsetneq \text{Pol}(\alpha_{1})$.
\end{proof} \section{Equivalence relations, prime permutation relations}\label{sec4} Let $k > 1$, $\theta$ a nontrivial equivalence relation on $E_{k}$ with blocks (equivalence classes) $B_0,\ldots,B_{t_{1}-1}$ (where $2\leq t_1\leq k$) and $\rho$ a nontrivial equivalence relation distinct from $\theta$ with blocks $C_0,\ldots,C_{t_{2}-1}$ (where $2\leq t_2\leq k$). In this section we determine the meet-reducible clones of the form $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ maximal in $\text{Pol}(\theta)$, where $\theta$ and $\rho$ are two distinct nontrivial equivalence relations on $E_{k}$. We define $\mu : E_{k} \rightarrow E_{t_{1}}$ by $\mu(x) = i$ if $x \in B_i$ and $\nu: E_{k} \rightarrow E_{t_{2}}$ by $\nu(x) = i$ if $x \in C_i$. Set $D = \text{Pol}(\theta) \cap \text{Pol}(\rho)$, $\nabla_{E_{k}} =E_{k}^{2}$ and $\gamma = \theta \cap \rho$. Clearly $ \gamma $ is an equivalence relation on $E_{k}$ and $D\subseteq \text{Pol}(\theta) \cap \text{Pol}(\gamma).$ \begin{theorem}\label{maintheo-equiv-equiv} Let $ \theta $ and $ \rho $ be two distinct nontrivial equivalence relations on a finite set $E_{k} $. $ \text{Pol}(\theta) \cap \text{Pol}(\rho)$ is submaximal in $\text{Pol}(\theta)$ if and only if $\theta$ and $\rho$ satisfy one of the following statements: \begin{enumerate} \item [(a)] $\theta \varsubsetneq \rho$ or $\rho \varsubsetneq \theta$; \item [(b)] $ \rho\cap\theta=\Delta_{E_{k}} $ and $ \rho\circ\theta=\nabla_{E_{k}}. $ \end{enumerate} \end{theorem} The proof of Theorem \ref{maintheo-equiv-equiv} follows from the next lemmas, giving the sufficiency part and the necessity part of this theorem. \begin{remark} In condition $(b)$, it follows from $ \rho\circ\theta=\nabla_{E_{k}} $ that there exist $u_i \in B_i$ $(i = 0,\ldots, t_1-1)$ such that $(u_p,u_q)\in\rho$ for all $0\leq p,$ $q\leq t_{1}-1$. Also, $ \rho\circ\theta=\nabla_{E_{k}} $ is equivalent to $ \theta\circ\rho=\nabla_{E_{k}}. $ \end{remark} \begin{lemma}\label{equivequivbin} Let $\beta$ be a binary relation on $E_k$, and let $\rho$ and $\theta$ satisfy condition $(a)$ or $(b)$ of Theorem \ref{maintheo-equiv-equiv}. $$\text{ If }\text{Pol}(\theta) \cap \text{Pol}(\rho)\subseteq \text{Pol}(\beta), \text{ then } \beta\in\{\Delta_{E_{k}};\rho;\theta;\nabla_{E_{k}}\}.$$ \end{lemma} \begin{proof}\text{ } \begin{itemize} \item Assume that $\rho$ and $\theta$ satisfy condition $(a)$; without loss of generality, suppose that $\rho \varsubsetneq \theta$ (the case $\theta \varsubsetneq \rho$ is analogous). Moreover, suppose that $\text{Pol}(\theta) \cap \text{Pol}(\rho)\subseteq \text{Pol}(\beta)$ and $ \beta\neq\Delta_{E_{k}} $. For any distinct $a_1, a_2 \in E_k$ and for any $b_i \in [a_i]_{\rho}$ ($i = 1, 2$), the function $f: E_k \rightarrow E_k$ defined by $f(a_i) = b_i, (i = 1 , 2)$ and $f(x) = x$ for all $x \in E_k\setminus \{a_1; a_2\}$ satisfies $f \in \text{Pol}(\theta) \cap \text{Pol}(\rho)$. Since $\text{Pol}(\theta) \cap \text{Pol}(\rho)\subseteq \text{Pol}(\beta)$, $f \in \text{Pol}(\beta).$ It follows that $[a_1]_{\rho}\times[a_2]_{\rho}\subseteq \beta $ whenever $(a_{1},a_{2})\in\beta$. In the following we denote by $\beta/\rho $ the relation on $E_{k}/\rho:=\{[a]_{\rho}| a\in E_{k}\}$ defined by: $([a_1]_{\rho},[a_2]_{\rho})\in\beta/\rho\Leftrightarrow (a_1,a_2)\in\beta.$ Similarly we define $\rho/\rho $ and $\theta/\rho $ and we have $\text{Pol}(\theta/\rho) \cap \text{Pol}(\rho/\rho)\subseteq \text{Pol}(\beta/\rho)$.
Since $\rho/\rho=\Delta_{E_{k}/\rho} $, $\text{Pol}(\theta/\rho) \cap \text{Pol}(\rho/\rho)=\text{Pol}(\theta/\rho)$ is the maximal clone on $E_{k}/\rho$ determined by the nontrivial equivalence relation $\theta/\rho$ and it follows that $\beta/\rho\in\{\rho/\rho=\Delta_{E_{k}/\rho};\theta/\rho;\nabla_{E_{k}/\rho}=\nabla_{E_{k}}/\rho\}.$ Hence $\beta\in\{\rho;\theta;\nabla_{E_{k}}\}$. \item Assume that $\rho$ and $\theta$ satisfy condition $(b)$. In this case the function $\varphi: E_{k}\rightarrow E_{k}/\rho\times E_{k}/\theta $, defined by $a\mapsto ([a]_{\rho},[a]_{\theta}) $, is a bijection. This bijection gives a decomposition of $E_{k}$ into a Cartesian product of $E_{k}/\rho$ and $E_{k}/\theta $, and one deduces that the operations in $ \text{Pol}(\theta) \cap \text{Pol}(\rho)$ correspond to the operations that act coordinatewise on $E_{k}/\rho\times E_{k}/\theta.$ \end{itemize} \end{proof} \begin{lemma} If $\rho$ and $\theta$ satisfy condition $(a)$ or $(b)$ in Theorem \ref{maintheo-equiv-equiv}, then $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ contains a majority operation. \end{lemma} \begin{proof}\text{ } \begin{itemize} \item Assume that $\rho$ and $\theta$ satisfy condition $(a)$. In addition, assume that $\rho \varsubsetneq \theta$. In each set $B_{i}$, we fix an element $v_{B_{i}}$ $(i=0,\ldots,t_{1}-1).$ We consider the majority operation $m$ defined on $E_{k}$ by: $$ m(x_1,x_2,x_3)= \left\{\begin{array}{ll} x_{i}& \text{ if } (x_i,x_j)\in\rho \text{ and } x_l\notin[x_i]_{\rho}, \\ & \qquad \text{ for some } 1\leq i<j\leq 3, \text{ such that } l\notin\{i;j\},\\ x_1& \text{ if } \{x_1;x_2\}\subseteq [x_3]_{\rho},\\ v_{[x_{l}]_{\theta}}& \text{ if } x_i\notin [x_j]_{\rho} \text{ for } 1\leq i<j\leq 3 \text{ and }\\ &\qquad (x_l,x_m)\in\theta \text{ for some } 1\leq l<m\leq 3,\\ 0 & \text{ otherwise.} \end{array} \right.$$ Let us show that $m\in \text{Pol}(\rho)\cap\text{Pol}(\theta) $. Let $(a_i,b_i)\in\rho$, $1\leq i\leq 3$. \begin{itemize} \item if $\{a_1;a_2\}\subseteq [a_3]_{\rho}$ then $\{b_1;b_2\}\subseteq [b_3]_{\rho}$ and $m(a_1,a_2,a_3)=a_1\rho b_1=m(b_1,b_2,b_3)$ \item otherwise, if there exist $1\leq i<j\leq 3$ such that $(a_i,a_j)\in\rho$, $l\notin\{i;j\}$ and $a_l\notin[a_i]_{\rho}$, then $(b_i,b_j)\in\rho$, $l\notin\{i;j\}$ and $b_l\notin[b_i]_{\rho}$, hence $m(a_1,a_2,a_3)=a_i\rho b_i=m(b_1,b_2,b_3)$ \item otherwise, if there exist $1\leq l<m\leq 3$ such that $(a_l,a_m)\in\theta$, then with transitivity of $\theta$ and the fact that $\rho\varsubsetneq \theta $ we have $(b_l,b_m)\in\theta$ and $m(a_1,a_2,a_3)=v_{[a_{l}]_{\theta}}=v_{[b_{l}]_{\theta}}=m(b_1,b_2,b_3)$ \item otherwise, we have $m(a_1,a_2,a_3)=0=m(b_1,b_2,b_3)$. \end{itemize} Therefore $m\in \text{Pol}(\rho).$ Let $(a_i,b_i)\in\theta$, $1\leq i\leq 3$. \begin{itemize} \item if $\{a_1;a_2\}\subseteq [a_3]_{\rho}$ then with transitivity of $\theta$ and the fact that $\rho\varsubsetneq \theta $, we have $\{a_1;a_2\}\subseteq [a_3]_{\theta}$ and $\{b_1;b_2\}\subseteq [b_3]_{\theta}$; therefore $$(m(a_1,a_2,a_3), m(b_1,b_2,b_3))\in \{a_i;b_i;v_{[a_{i}]_{\theta}};v_{[b_{l}]_{\theta}} \}^{2}\subseteq \theta $$ \item otherwise, if there exist $1\leq i<j\leq 3$ such that $(a_i,a_j)\in\rho$, $l\notin\{i;j\}$ and $a_l\notin[a_i]_{\rho}$, then $(b_i,b_j)\in\theta$; it follows that $m(b_1,b_2,b_3)\neq 0$ and $m(b_1,b_2,b_3)\in [m(a_1,a_2,a_3)]_{\theta}$.
Indeed, since $l\notin\{i;j\}$ and $b_l\notin[b_i]_{\rho}$, $$m(a_1,a_2,a_3)=a_i\,\theta\, b_i=m(b_1,b_2,b_3);$$
\item otherwise, if there exist $1\leqslant l<m\leqslant 3$ such that $(a_l,a_m)\in\theta$, then by transitivity of $\theta$ we have $(b_l,b_m)\in\theta$, and $$m(a_1,a_2,a_3)=v_{[a_{l}]_{\theta}}=v_{[b_{l}]_{\theta}}=m(b_1,b_2,b_3);$$
\item otherwise, we have $m(a_1,a_2,a_3)=0=m(b_1,b_2,b_3)$.
\end{itemize}
Therefore $m\in \text{Pol}(\theta)$.
\item Assume that $\rho$ and $\theta$ satisfy condition $(b)$. Using the decomposition of $E_{k}$ into a cartesian product of $E_{k}/\rho$ and $E_{k}/\theta$, we can say that if $m_1$ is a majority operation on $E_{k}/\rho$ and $m_2$ is a majority operation on $E_{k}/\theta$, then the operation $m$ on $E_{k}/\rho\times E_{k}/\theta$ that acts like $m_i$ in the $i$th coordinate ($i=1,2$) is a majority operation on $E_{k}/\rho\times E_{k}/\theta$ that preserves $\rho$ and $\theta$.
\end{itemize}
\end{proof}
The two previous lemmas, together with Theorem \ref{baker-pixley}, prove the sufficiency part of Theorem \ref{maintheo-equiv-equiv}. Our next step is to prove the necessity part of Theorem \ref{maintheo-equiv-equiv}; it is done in the following three lemmas.
\begin{lemma}\label{equivequivL1}
If $\Delta_{E_{k}} \varsubsetneq \gamma \varsubsetneq \theta$ and $\gamma\varsubsetneq \rho$, then $D \varsubsetneq \text{Pol}(\theta) \cap \text{Pol}(\gamma) \varsubsetneq \text{Pol}(\theta).$
\end{lemma}
\begin{proof}
$\gamma$ is a nontrivial equivalence relation on $E_{k}$ distinct from $\theta$ and $\rho$. Thus $D \subseteq \text{Pol}(\theta) \cap \text{Pol}(\gamma) \varsubsetneq \text{Pol}(\theta)$. Let us prove that $D \neq \text{Pol}(\theta) \cap \text{Pol}(\gamma)$; let $(a, b) \in \theta \setminus \gamma$ and $(c, d) \in \rho\setminus\gamma$. Then $(a, b) \notin \rho$ and $(c, d) \notin \theta$. Define $f \in \mathcal{O}^{1}(E_{k})$ by:
$$f(x)= \left\{\begin{array}{ll}
a &\text{ if } x\in B_{\mu(c)}, \\
b &\text{ otherwise.}
\end{array} \right.$$
Since $a\,\theta\, b$, $f \in \text{Pol}(\theta)$. In addition, $\gamma\subseteq \theta$ and $f$ is constant on each block of $\theta$, hence $f \in \text{Pol}(\gamma)$. Therefore $f \in \text{Pol}(\theta) \cap \text{Pol}(\gamma)$, while $f \notin D$ since $c\,\rho\, d$ and $(f(c), f(d)) = (a, b) \notin \rho$.
\end{proof}
\begin{lemma}
Let $\rho$ and $\theta$ be two nontrivial equivalence relations which are incomparable.\\
If $\rho\cap\theta\neq\Delta_{E_{k}}$, then $\text{Pol}(\theta) \cap \text{Pol}(\rho)\varsubsetneq \text{Pol}(\theta) \cap \text{Pol}(\rho\cap\theta)\varsubsetneq\text{Pol}(\theta)$ and $\text{Pol}(\theta) \cap \text{Pol}(\rho\cap\theta)$ is maximal in $\text{Pol}(\theta)$.
\end{lemma}
\begin{proof}
It follows from the assumptions and Lemma \ref{equivequivL1} that $\text{Pol}(\theta) \cap \text{Pol}(\rho)\varsubsetneq \text{Pol}(\theta) \cap \text{Pol}(\rho\cap\theta)\varsubsetneq\text{Pol}(\theta)$. As $\Delta_{E_{k}}\neq\rho\cap\theta\varsubsetneq\theta\neq \nabla_{E_{k}}$, the sufficiency part yields that $\text{Pol}(\theta) \cap \text{Pol}(\rho\cap\theta)$ is maximal in $\text{Pol}(\theta)$.
\end{proof}
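For a concrete illustration of the dichotomy described above, one may consider the following small example; the relations in it are chosen here purely for illustration.
\begin{example}
On $E_{4}$, let $\theta$ be the equivalence relation with blocks $\{0;1;2\}$ and $\{3\}$, and let $\rho$ be the equivalence relation with blocks $\{0;1\}$ and $\{2;3\}$. These relations are incomparable and $\rho\cap\theta\neq\Delta_{E_{4}}$, so, by the previous lemma, $\text{Pol}(\theta)\cap\text{Pol}(\rho)$ is not submaximal in $\text{Pol}(\theta)$. On the other hand, if $\theta'$ has blocks $\{0;1\}$, $\{2;3\}$ and $\rho'$ has blocks $\{0;2\}$, $\{1;3\}$, then $\rho'\cap\theta'=\Delta_{E_{4}}$ and $\rho'\circ\theta'=\nabla_{E_{4}}$ (the map $a\mapsto([a]_{\rho'},[a]_{\theta'})$ identifies $E_{4}$ with $E_{2}\times E_{2}$), so condition $(b)$ of Theorem \ref{maintheo-equiv-equiv} holds and $\text{Pol}(\theta')\cap\text{Pol}(\rho')$ is submaximal in $\text{Pol}(\theta')$.
\end{example}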
\begin{lemma}
Let $\rho$ and $\theta$ be two nontrivial equivalence relations which are incomparable.\\
If $\rho\cap\theta=\Delta_{E_{k}}$ and $\rho\circ\theta \neq \nabla_{E_{k}}$, then for $\sigma=\rho\circ\theta\cap\theta\circ\rho$ we have $\text{Pol}(\theta) \cap \text{Pol}(\rho)\varsubsetneq \text{Pol}(\theta) \cap \text{Pol}(\sigma)\varsubsetneq\text{Pol}(\theta)$.
\end{lemma}
\begin{proof}
By the assumptions we have $\sigma\neq \nabla_{E_{k}}$, hence $\text{Pol}(\theta) \cap \text{Pol}(\sigma)\varsubsetneq\text{Pol}(\theta)$. From the definition of $\sigma$, we get $\text{Pol}(\theta) \cap \text{Pol}(\rho)\subseteq \text{Pol}(\theta) \cap \text{Pol}(\sigma)$. Let us prove that $\text{Pol}(\theta) \cap \text{Pol}(\rho)\varsubsetneq \text{Pol}(\theta) \cap \text{Pol}(\sigma)$; choose $a\,\theta\, b$ with $a \neq b$ and $u\,\rho\, v$ with $u \neq v$, and define the following unary operation $g$ on $E_{k}$:
$$g(x)= \left\{\begin{array}{ll}
a &\text{ if } x=u, \\
b &\text{ otherwise.}
\end{array} \right.$$
As $(a,b)\in\theta$, we have $g\in \text{Pol}(\theta)$ and $g\in \text{Pol}(\sigma)$ $(\theta\subseteq \sigma)$. Since $(g(u),g(v))=(a,b)$, $(a,b)\in\theta\setminus\Delta_{E_{k}}$ and $\rho\cap\theta=\Delta_{E_{k}}$, we have $g\notin \text{Pol}(\rho)$. Hence
$$g\notin\text{Pol}(\theta) \cap \text{Pol}(\rho)\text{ while }g\in\text{Pol}(\theta) \cap \text{Pol}(\sigma).$$
\end{proof}
The two previous lemmas prove that if $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is a submaximal clone of $\text{Pol}(\theta)$ and $\rho$ and $\theta$ are incomparable, then $\rho\cap\theta=\Delta_{E_{k}}$ and $\rho\circ\theta=\nabla_{E_{k}}$, so condition $(b)$ holds.
\begin{proof}[Proof of Theorem \ref{maintheo-equiv-equiv}]
It follows from the previous lemmas.
\end{proof}
We conclude this section with the following theorem, due to Lau and Rosenberg, which characterizes the case of prime permutation relations.
\begin{theorem}[\cite{GRE}]\label{maintheo-primepermut-equiv}
If $\theta$ is a nontrivial equivalence relation and $\rho$ is the graph of a prime permutation $\pi$ on a finite set $E_{k}$, then $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is submaximal in $\text{Pol}(\theta)$ if and only if $\theta$ and $\rho$ satisfy one of the following statements:
\begin{enumerate}
\item [(a)] $\rho \varsubsetneq \theta$;
\item [(b)] the image under $\pi$ of each equivalence class of $\theta$ is an equivalence class of $\theta$.
\end{enumerate}
\end{theorem}
\section{Central relations and $h$-regular relations}\label{sec5}
In this section, $\theta$ is a nontrivial equivalence relation on $E_{k}$ whose equivalence classes are $C_0,C_1, \ldots ,C_{t-1}$. Our aim is to characterize the central relations or $h$-regular relations $\rho$ such that $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$. Firstly, we give some definitions to be used. Let $\rho$ be an $h$-ary relation $(h>1)$ on $E_{k}$. For $i\in\{0;1;2;\cdots;h-1\}$ we define the relation $\rho_{i,\theta}$ by
\[\rho_{0,\theta}=\{(a_{1},\ldots,a_{h})\in E_{k}^{h}/ \exists u_{i}\in[a_{i}]_{\theta}, i\in\{1;\cdots;h\}, \text{ with } (u_{1},u_{2},\ldots,u_{h})\in \rho\}\]
and, for $i\geq 1$,
\begin{eqnarray*}
\rho_{i,\theta}=\left\{(a_{1},\ldots,a_{h})\in E_{k}^{h}/ \exists u_{j}\in[a_{j}]_{\theta}, j\in\{i+1;\cdots;h\},\right.\\
\left.\qquad\text{ with } (a_{1},a_{2},\ldots,a_{i},u_{i+1},\ldots,u_{h})\in \rho\right\}.
\end{eqnarray*}
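As a small illustration of these operators, with a relation chosen here only as an example, consider the following computation.
\begin{example}
Let $k=4$, let $\theta$ have the classes $C_{0}=\{0;1\}$ and $C_{1}=\{2;3\}$, and let $\rho$ be the binary central relation $\Delta_{E_{4}}\cup(\{0\}\times E_{4})\cup(E_{4}\times\{0\})$, whose center is $\{0\}$. Then $\rho_{0,\theta}=E_{4}^{2}$, while $\rho_{1,\theta}=E_{4}^{2}\setminus\{(1,2);(1,3)\}$; in particular $\rho\varsubsetneq\rho_{1,\theta}\varsubsetneq\rho_{0,\theta}$.
\end{example}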
For $\sigma\in \mathcal{S}_{h}$ and $\gamma$ an $h$-ary relation, we set
$$\gamma_{\sigma}=\{(a_{\sigma(1)},\ldots,a_{\sigma(h)})/ (a_{1},\ldots,a_{h})\in\gamma\}.$$
For $J=\{j_1; \ldots; j_{n}\} \subseteq \{1; \ldots; h\}$ with $j_1< \ldots <j_{n}$, we define the $h$-ary relation $\rho_{J}$ or $\rho_{j_{1} \ldots j_{n}}$ on $E_k$ as follows:
\begin{eqnarray*}
\rho_{J} &=& \left\{\left( a_1, a_2,\ldots, a_h \right) | \exists u_{i.J}\in[a_{i}]_{\theta}, i\in \{1;\cdots;h\}\setminus J=\{i_1,...,i_{h-n}\}, \text{ such that }\right. \\
& & \left.\left( a_{j_1},\ldots, a_{j_n}, u_{i_1.J},\ldots,u_{i_{h-n}.J} \right)\in\rho\right\}.
\end{eqnarray*}
The next remark gives some properties of these relations.
\begin{remark}\text{ }
\begin{enumerate}
\item For $i\in\{0;1;2;\cdots;h-1\}$, $\rho\subseteq\rho_{i,\theta}$ and $\text{Pol}(\rho)\cap\text{Pol}(\theta)\subseteq\text{Pol}(\rho_{i,\theta})$.
\item $\rho_{0,\theta}$ is totally symmetric.
\item For $\sigma\in \mathcal{S}_{h}$, $\text{Pol}(\rho)=\text{Pol}(\rho_{\sigma})$.
\item If $\{j_1,...,j_n\}\subseteq\{r_1,...,r_m\}$, then $\rho_{\{r_1,...,r_m\}}\subseteq\rho_{\{j_1,...,j_n\}}$.
\item For $J=\{1;\cdots;n\}$ we have $\rho_{J}=\rho_{n,\theta}$.
\item For all $1\leqslant j_1< \ldots <j_{n}\leqslant h$, there exists a permutation $\sigma\in\mathcal{S}_{h}$ such that $\rho_{j_{1} \ldots j_{n}}=(\rho_{n,\theta})_{\sigma}$.
\end{enumerate}
\end{remark}
\begin{definition}\label{def-theta-transversal}
Let $\rho$ be an $h$-ary relation and $\theta$ be a nontrivial equivalence relation on $E_k$ with $t$ classes.
\begin{enumerate}
\item We say that there is a transversal $T$ for the $\theta$-classes if there exist $u_1,\ldots, u_t\in E_k$ such that $(u_i, u_j) \notin\theta$ for all $1 \leqslant i < j\leqslant t$, $(u_{i_{1}},u_{i_{2}}, \ldots ,u_{i_{h}})\in \rho$ for all $1 \leqslant i_{1}, \ldots ,i_{h} \leqslant t$, and $T = \{u_1, \ldots , u_t\}$.
\item We say that there is a transversal $T$ of order $l$ ($1\leqslant l\leqslant h-1$) for the $\theta$-classes if there exist $u_1,\ldots, u_t\in E_k$ such that $(u_i, u_j) \notin\theta$ for all $1 \leqslant i < j\leqslant t$, $(a_1,a_2,\ldots,a_l,v_{l+1},v_{l+2}, \ldots , v_{h})\in \rho$ for all $a_1,a_2,...,a_l\in E_k$ and all $v_{l+1},v_{l+2}, \ldots , v_{h}\in\{u_{1};\cdots;u_{t}\}$, and $T=\{u_1,...,u_t\}$.
\end{enumerate}
\end{definition}
\begin{definition}
A transversal of order $0$ for the $\theta$-classes means a transversal for the $\theta$-classes.
\end{definition}
\begin{definition}\label{def-theta-fermeture}
Let $\rho$ be an $h$-ary relation and $\theta$ be a nontrivial equivalence relation on $E_k$ with $t$ classes.
\begin{enumerate}
\item $\rho$ is $\theta$-closed if $\rho=\rho_{0,\theta}$.
\item $\rho$ is weakly $\theta$-closed of order $l$ ($1\leqslant l\leqslant h-1$) if there is a transversal $T=\{u_{1};\cdots;u_{t}\}$ of order $l-1$ for the $\theta$-classes and $\rho= \underset{\substack{\sigma\in\mathcal{S}_{h}}}{\bigcap} (\rho_{l,\theta})_{\sigma}$.
\end{enumerate}
\end{definition}
Secondly, we characterize some particular relations. We consider the surjective map
$$\begin{array}{rccl}
\varphi: & E_{k} & \rightarrow & E_{t} \\
& x & \mapsto & \varphi(x)=i \text{ if } x\in C_{i}.
\end{array} $$
For an $n$-ary relation $\alpha$ on $E_{t}$, we set
\[\varphi^{-1}(\alpha)=\{(a_{1},\ldots,a_{n})\in E_{k}^{n}: (\varphi(a_{1}),\varphi(a_{2}),\ldots,\varphi(a_{n}))\in \alpha\};\]
for an $n$-ary relation $\beta$ on $E_{k}$, we set
\[\varphi(\beta)=\{(\varphi(a_{1}),\varphi(a_{2}),\ldots,\varphi(a_{n})): (a_{1},\ldots,a_{n})\in \beta\}.\]
\begin{remark}
With the previous considerations and for a central relation $\rho$, we have:
\begin{itemize}
\item $\rho$ is $\theta$-closed if and only if there exists an $h$-ary central relation $\gamma$ on $E_{t}$ such that $\rho=\varphi^{-1}(\gamma)$;
\item if $\rho$ is weakly $\theta$-closed of order $l$ ($1\leqslant l\leqslant h-1$), then $\text{Pol}((\rho_{l,\theta})_{\sigma_{1}}\cap\cdots\cap(\rho_{l,\theta})_{\sigma_{n}})\subseteq\text{Pol}(\rho)$ for every $\{\sigma_{1};\cdots;\sigma_{n}\}\subseteq\mathcal{S}_{h}$.
\end{itemize}
\end{remark}
\begin{remark}\label{def-theta-close-trans-binary}
Let $\rho$ be a binary relation and $\theta$ be a nontrivial equivalence relation on $E_k$ with $t$ classes.
\begin{enumerate}
\item [(1)] $\rho$ is $\theta$-closed if and only if $\rho = \theta\circ \rho\circ \theta$;
\item [(2)] $\rho$ is weakly $\theta$-closed of order $1$ (or simply weakly $\theta$-closed) if and only if $\rho = (\theta\circ\rho)\cap(\rho\circ\theta)$ and there is a transversal $T$ for the $\theta$-classes.
\end{enumerate}
\end{remark}
Thirdly, we characterize the central relations $\rho$ generating the submaximality.
\subsection{Central relations}\label{central-relation-subsection}
We recall that for $k=2$ the result is contained in Post's description. If $k\geq 3$ and $h\in\{1;2\}$, the following results give the characterization of the existing submaximal clones.
\begin{lemma}\cite{LAU}
If $h=1$, then $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$ if and only if the following condition holds:
$$\Big(\exists I\subset\{0;\cdots;t-1\}: \rho=\bigcup_{i\in I} C_{i}\Big)\vee \Big(\forall j\in\{0;\cdots;t-1\},\ \rho\cap C_{j}\neq \emptyset\Big).$$
\end{lemma}
\begin{theorem}\cite{TEM-ROS}
If $h=2$, then $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$ if and only if one of the following conditions is valid:
\begin{enumerate}
\item [(i)] $\theta\subseteq\rho$ and every $\theta$-class contains a central element of $\rho$;
\item [(ii)] $\rho$ is $\theta$-closed;
\item [(iii)] $\rho$ is weakly $\theta$-closed of order $1$.
\end{enumerate}
\end{theorem}
In the remainder of this subsection we suppose that $h\geq 3$ and $k\geq 3$. For a nontrivial equivalence relation $\theta$, we define the $h$-ary relation $\eta$ by
$$\eta=\left\{\left(u_1 , u_2 ,\ldots,u_h\right)\in E_{k}^{h} / u_1\,\theta\, u_2\right\}.$$
Here we state the main theorem of this subsection:
\begin{theorem}\label{maintheorem}
Let $k \geq 3$ and let $\theta$ be a nontrivial equivalence relation on $E_k$ with $t$ equivalence classes. For an $h$-ary central relation $\rho$ on $E_k$, the clone $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is a submaximal clone of $\text{Pol}(\theta)$ if and only if $h\leqslant t$ and $\rho$ satisfies one of the following three conditions:
\begin{enumerate}
\item [I.] $\eta\subseteq \rho$ and every $\theta$-class contains a central element of $\rho$;
\item [II.] $\rho$ is $\theta$-closed;
\item [III.] $\rho$ is weakly $\theta$-closed of order $l$ for some $1\leqslant l\leqslant h-1$, and $\eta\subseteq \rho$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{maintheorem} will follow from the results obtained below; it will be given at the end of this subsection.
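Although this subsection assumes $h\geq 3$, the following observation, offered only as a guide, compares the conditions above with the binary case.
\begin{remark}
For $h=2$ one would have $\eta=\{(u_1,u_2)\in E_{k}^{2} / u_1\,\theta\, u_2\}=\theta$, so condition I is the $h$-ary analogue of condition (i) of the preceding theorem (with $\theta\subseteq\rho$ replaced by $\eta\subseteq\rho$), condition II coincides with condition (ii), and condition III generalizes condition (iii) to an arbitrary order $l$, with the additional requirement $\eta\subseteq\rho$.
\end{remark}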
\begin{definition}
Let $l\in\{I; II; III\}$. $\rho$ is of type $l$ if $\rho$ satisfies condition $l$ of Theorem \ref{maintheorem}.
\end{definition}
The following examples clarify the types of relations defined above.
\begin{example}
Let $k\geq 3$ be an integer and $0\leqslant i<j<r<n\leqslant k-1$; we denote by $A_{i,j,r}$ and $A_{i,j,r,n}$ the sets
$$A_{i,j,r}:=\{(\sigma(i),\sigma(j),\sigma(r)); \sigma\in\mathcal{S}_{\{i;j;r\}}\}$$
and
$$A_{i,j,r,n}:=\{(\sigma(i),\sigma(j),\sigma(r),\sigma(n)); \sigma\in\mathcal{S}_{\{i;j;r;n\}}\}.$$
We consider the following equivalence relations $\theta_{i}$, defined by their equivalence classes denoted by $C_{m}^{i}$:
\begin{itemize}
\item $\theta_{1}$ is defined on $E_{6}$ by $C_{0}^{1}=\{0;1\}$, $C_{1}^{1}=\{2;3\}$, $C_{2}^{1}=\{4;5\}$;
\item $\theta_{2}$ is defined on $E_{5}$ by $C_{0}^{2}=\{0;1\}$, $C_{1}^{2}=\{2\}$, $C_{2}^{2}=\{3\}$, $C_{3}^{2}=\{4\}$;
\item $\theta_{3}$ is defined on $E_{4}$ by $C_{0}^{3}=\{0;1\}$, $C_{1}^{3}=\{2\}$, $C_{2}^{3}=\{3\}$;
\item $\theta_{4}$ is defined on $E_{8}$ by $C_{0}^{4}=\{0;1;2\}$, $C_{1}^{4}=\{3;4;5\}$, $C_{2}^{4}=\{6;7\}$;
\item $\theta_{5}$ is defined on $E_{8}$ by $C_{0}^{5}=\{0;1;2\}$, $C_{1}^{5}=\{3;4\}$, $C_{2}^{5}=\{5;6\}$, $C_{3}^{5}=\{7\}$;
\end{itemize}
and the relations
$$\Upsilon_{1}=E_{6}^{3}\setminus A_{1,2,5},\quad\Upsilon_{2}=E_{5}^{3}\setminus A_{2,3,4},\quad\Upsilon_{3}=E_{4}^{3}\setminus A_{1,2,3},$$
$$\Upsilon_{4}=E_{8}^{3}\setminus (A_{1,4,6}\cup A_{1,4,7}\cup A_{1,5,7}\cup A_{1,5,6}\cup A_{2,3,7}\cup A_{2,4,6}\cup A_{2,4,7}\cup A_{2,5,6}\cup A_{2,5,7}),$$
$$\Upsilon_{5}=E_{8}^{4}\setminus (A_{1,4,6,7}\cup A_{2,4,6,7}).$$
It is easy to see that: $\Upsilon_{1}$ is a central relation of type I with respect to $\theta_{1}$; $\Upsilon_{2}$ is a central relation of type II, but not of type I, with respect to $\theta_{2}$; with respect to $\theta_{3}$, $\Upsilon_{3}$ is a central relation whose center is $\{0\}$ but which is of none of the types I, II and III; $\Upsilon_{4}$ is weakly $\theta_{4}$-closed of order $2$ with a transversal of order $1$, $T_{1}=\{0;3;6\}$; and $\Upsilon_{5}$ is weakly $\theta_{5}$-closed of order $3$ with a transversal of order $2$, $T_{2}=\{0;3;5;7\}$.
\end{example}
\begin{definition}\label{def-rel-diagonale}
Let $\theta$ be an equivalence relation on $E_k$. An $h$-ary relation $\tau$ on $E_k$ is said to be diagonal through $\theta$ if there exist an equivalence relation $\varepsilon_{1}$ on $\{1;2; \ldots ;h\}$ with equivalence classes $A_1,A_2, \ldots ,A_l$ and an equivalence relation $\varepsilon_{2}$ on $\{\min (A_m); 1\leqslant m\leqslant l\}$ such that
$$\tau=\left\{\left(a_{1},a_{2},\hdots, a_{h} \right)\in E_k^{h}/ ((i,j)\in\varepsilon_{1}\Rightarrow a_{i}=a_{j})\text{ and }((i,j)\in\varepsilon_{2}\Rightarrow a_{i}\,\theta\, a_{j})\right\}.$$
\end{definition}
Given two equivalence relations $\varepsilon_1$ and $\varepsilon_2$ as in Definition \ref{def-rel-diagonale}, we denote by $D_{\varepsilon_1\varepsilon_2}$ the corresponding diagonal relation through $\theta$.
\subsubsection{Proof of the sufficiency criterion in Theorem \ref{maintheorem}}\label{sec5'-subsec5}
\begin{proposition}\label{sufficiency-direction}
If $k \geq 3$, $\theta$ is a nontrivial equivalence relation on $E_k$, and $\rho$ is an $h$-ary central relation on $E_k$ such that one of conditions I-III is satisfied, then the clone $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is a submaximal clone of $\text{Pol}(\theta)$.
\end{proposition} Before the proof of Proposition \ref{sufficiency-direction}, we give some results characterizing relations containing $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ and we show that $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ contains an $h$-near-unanimity operation. For the proof of this proposition we choose a fixed central element $c$ of $\rho $; we denote by $C_\rho$ the set of all central elements of $\rho$. If $\rho $ is of type I, choose a central element $c_B$ from the $\theta$-class $B$ which can be $\min(B\cap C_\rho)$, and if $\rho $ is of type III and of order $l$, choose a transversal $T$ of order $l-1$ of the $\theta$-classes and denote by $T_B$ the element of $T$ representing the $\theta$-class $B$. We begin with the following lemma characterizing the diagonal relations through $\theta$. \begin{lemma}\label{caracterisation-I} For an equivalence relation $\theta$ on $E_k$ and a diagonal relation $\tau$ through $\theta$, with arity $h$ on $E_k$, we have $\text{Pol}(\tau)=\text{Pol}(\theta)$ or $\text{Pol}(\tau)=\mathcal{O}(E_k)$. \end{lemma} \begin{proof} Let $\theta$ be an equivalence relation on $E_k$ and $\tau=D_{\varepsilon_{1}\varepsilon_{2}}$ be a diagonal relation through $\theta$. Let $T=\{\min A_m; 1\leqslantq m\leqslantq l\}$ where $A_{m}, 1\leqslantq m\leqslantq l$ are as in Definition $\ref{def-rel-diagonale}$. We will distinguish two cases: (a) $\varepsilon_{2}\neq \Delta_{T}$ and (b) $\varepsilon_{2}= \Delta_{T}.$ \begin{enumerate} \item [(a)]Assume that $\varepsilon_{2}\neq \Delta_{T}.$\\ There exist $u,v\in T$ with $u<v$ such that for all $(a_1,a_2,\hdots,a_h)\in\tau$, we have $(a_u,a_v)\in\theta$. Using the definition of $\tau$, we have $\text{Pol}(\theta)\subseteq \text{Pol}(\tau).$ By setting $$pr_{uv}(\tau):=\{(e_{u}^{h}(\textbf{a}),e_{v}^{h}(\textbf{a})); \textbf{a}=(a_1,a_2,\ldots,a_h)\in\tau\},$$ it follows that $pr_{uv}(\tau)=\theta$, therefore $\text{Pol}(\tau)\subseteq \text{Pol}(\theta)$, and it appears that $\text{Pol}(\tau)=\text{Pol}(\theta)$. \item [(b)]Assume that $\varepsilon_{2}= \Delta_{T}.$\\ It is clear that $$\tau=D_{\varepsilon_{1}\Delta_{T}}=\leqslantft\{\leqslantft(a_{1},a_{2},\hdots, a_{h} \right)\in E_k^{h}/ i\varepsilon_{1}j\Rightarrow a_{i}=a_{j}\right\}.$$ Hence $\text{Pol}(\tau)=\mathcal{O}(E_k).$ \end{enumerate} \end{proof} \begin{lemma}\label{caracterisation-diagonal-all} Under the assumptions of Proposition \ref{sufficiency-direction}, we have: \begin{enumerate} \item [(a)] $\eta\subseteq \rho$; \item [(b)] If $\rho $ is of type I or II, then an $h$-ary relation $\tau $ on $E_k$ is preserved by every operation in $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ if and only if $\tau $ is either the empty relation, a diagonal relation through $\theta$, or the relation $\rho$; \item [(c)] If $\rho $ is of type III and of order $l$, then an $h$-ary relation $\tau $ on $E_k$ is preserved by every operation in $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ if and only if $\tau $ is either the empty relation, a diagonal relation through $\theta$, or an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ with $\sigma\in\mathcal{S}_{h}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item [(a)] The proof of $\eta\subseteq \rho$ is straightforward. \item [(b)] Assume $\tau $ is either the empty relation, or a diagonal relation through $\theta$. 
Then $\text{Pol}(\tau)\in\{\text{Pol}(\theta);\mathcal{O}(E_k)\}$ (see Lemma \ref{caracterisation-I}); hence $ \text{Pol}(\rho)\cap\text{Pol}(\theta)\subseteq\text{Pol}(\tau)$. Assume $\tau$ is of the form $(\rho_{l,\theta})_{\sigma}$. Then it is easy to see that $ \text{Pol}(\rho)\cap\text{Pol}(\theta)\subseteq\text{Pol}((\rho_{l,\theta})_{\sigma})$. Hence, if $\tau$ is an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ then $\text{Pol}(\rho)\cap\text{Pol}(\theta)\subseteq\text{Pol}(\tau)$. Conversely, assume $\tau$ is preserved by all operations in $\text{Pol}(\rho)\cap\text{Pol}(\theta)$. Let us suppose that $\tau$ is not the empty relation. We have to prove that $\tau$ is either a diagonal relation through $\theta$, or an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ with $l$ the order of $\rho$ and $\sigma\in\mathcal{S}_{h}$. \\ For this purpose, we define two equivalence relations. The first one denoted by $\epsilon_{1}$ is defined on $\{1;2; \ldots ;h\}$ by: $$i\epsilon_{1}j\text{ iff }\forall \leqslantft(a_1, a_2, \ldots, a_h\right)\in \tau, a_i=a_j.$$ $\epsilon_{1}$ is an equivalence relation with classes $A_{0}, A_{1},\ldots, A_{m}$. The second one, denoted by $\epsilon_{2}$ is defined on $T=\{\min(A_i); 0\leqslantq i\leqslantq m\}$ by: $$i\epsilon_{2}j \text{ iff }\forall \leqslantft(a_1, a_2, \ldots, a_h\right)\in \tau,(a_i,a_j)\in\theta.$$ It follows that $D_{\epsilon_{1}\epsilon_{2}}$ is a diagonal relation through $\theta$. In order to complete the proof of this lemma, we distinguish two cases: $(\epsilon_{1}\neq\Delta_{\{1;2; \ldots ;h\}} \text{ or } \epsilon_{2}\neq\Delta_{T})$ and $(\epsilon_{1}=\Delta_{\{1;2; \ldots ;h\}} \text{ and } \epsilon_{2} = \Delta_{T})$. \begin{enumerate} \item [(i)] We suppose that $\epsilon_{1}\neq\Delta_{\{1;2; \ldots ;h\}} \text{ or } \epsilon_{2}\neq\Delta_{T}.$ We will prove that $D_{\epsilon_{1}\epsilon_{2}}=\tau$. It suffices to show that $D_{\epsilon_{1}\epsilon_{2}}\subseteq\tau$; $\tau\subseteq D_{\epsilon_{1}\epsilon_{2}}$ by definitions of $\epsilon_{1}$ and $\epsilon_{2}$. We need only consider three subcases: $(\epsilon_{1}=\nabla_{\{1;2; \ldots ;h\}})$, $(\epsilon_{1}\neq\nabla_{\{1;2; \ldots ;h\}} \text{ and } \epsilon_{2}=\nabla_{T})$, and $(\epsilon_{1}\neq\nabla_{\{1;2; \ldots ;h\}} \text{ and } \epsilon_{2}\neq\nabla_{T})$. \begin{enumerate} \item [a)] If $\epsilon_{1}=\nabla_{\{1;2; \ldots ;h\}},$ then $D_{\epsilon_{1}\epsilon_{2}}=\{(x,...,x); x\in E_{k}\}$. Since $\tau\subseteq D_{\epsilon_{1}\epsilon_{2}}$ and $\tau$ is not the empty relation, for each $b\in E_k$ the constant function of value $b$ preserves $\theta$ and $\rho$; hence $(b,...,b)\in \tau$ and $\tau=D_{\epsilon_{1}\epsilon_{2}}.$ \item [b)] If $\epsilon_{1}\neq\nabla_{\{1; \ldots ;h\}} \text{ and } \epsilon_{2}=\nabla_{T}$, then there exists $(i,j)\in \nabla_{\{1; \ldots ;h\}}$ such that $(i,j)\notin\epsilon_{1}.$ For all $1\leqslantq i,j\leqslantq h$ such that $(i,j)\notin\epsilon_{1}$, we choose $\mathbf{a}^{ij}=(a^{ij}_{1},...,a^{ij}_{h})\in\tau$ such that $a^{ij}_{i}\neq a^{ij}_{j}$ and we set $B=\{\mathbf{a}^{ij}; (i,j)\notin \epsilon_{1}\}$. Let $q=|B|$; to simplify our notation, we suppose that $B=\{\mathbf{b}^1;\mathbf{b}^2; \ldots ;\mathbf{b}^q\}$ and we define the sequence $(\mathbf{x}_i)_{1\leqslantq i\leqslantq h}$ by $\mathbf{x}_i=(b^1_{i};b^2_{i}; \ldots ;b^q_{i})$. It is easy to see that: $(i,j)\notin \epsilon_{1}\Rightarrow \mathbf{x}_i\neq \mathbf{x}_j$. 
Let $(u_1,...u_h)\in D_{\epsilon_{1}\epsilon_{2}}$, we consider the $q$-ary operation defined on $E_k$ by: $$f(\mathbf{y})= \leqslantft\{\begin{array}{ll} u_l &\text{ if } \mathbf{y}=\mathbf{x}_l\in\{\mathbf{x}_1;\mathbf{x}_2; \ldots ;\mathbf{x}_h\} \\ u_1 &\text{ otherwise }. \end{array} \right. $$ Since $\{u_1,u_2,...u_h\}^{2}\subseteq \theta$ and $\,\eta\subseteq \rho$, $f\in\text{Pol}(\theta)\cap\text{Pol}(\rho)$. Therefore $f\in\text{Pol}(\tau)$ and $(u_1,...,u_h)\in\tau$. \item [c)] If $\epsilon_{1}\neq\nabla_{\{1;2; \ldots ;h\}} \text{ and } \epsilon_{2}\neq\nabla_{T}$, then there exists $(i,j)\in\nabla_{T}$ such that $(i,j)\notin\epsilon_{2}$. For all $1\leqslantq i,j\leqslantq h$ such that $(i,j)\notin \epsilon_{2}$ we choose $ \mathbf{a}^{ij}=\leqslantft(a^{ij}_1, a^{ij}_2, \ldots, a^{ij}_h \right)\in \tau$ such that $(a^{ij}_i,a^{ij}_j)\notin \theta$ and we consider the set $B=\{\mathbf{a}^{ij}, (i,j)\notin \epsilon_{2}\}$. We set $q=|B|$. To simplify our notation we set $B=\{\mathbf{b}^1;\mathbf{b}^2; \ldots ;\mathbf{b}^q\}$ which allows us to define $\mathbf{x}_1=(b^1_1,b^2_1, \ldots ,b^q_1)$, $\mathbf{x}_2=(b^1_2,b^2_2, \ldots ,b^q_2)$, \ldots , $\mathbf{x}_s=(b^1_s,b^2_s, \ldots ,b^q_s)$. We remark that $(i,j)\notin \epsilon_{2}\Rightarrow (\mathbf{x}_i,\mathbf{x}_j)\notin \theta$.\\ Let $\leqslantft(u_1, u_2, \ldots, u_h\right)\in D_{\epsilon_{1}\epsilon_{2}}$ and consider the $q$-ary operation defined on $E_k$ by: $$ f(\mathbf{y})= \leqslantft\{\begin{array}{ll} u_l &\text{ if } \mathbf{y}=\mathbf{x}_l \\ u_{\sigma(\mathbf{y})} &\text{ if } \mathbf{y}\notin\{\mathbf{x}_1;\mathbf{x}_2; \ldots ;\mathbf{x}_h\}, \exists l\in\{1,2, \ldots ,h\}/ \mathbf{y}\theta \mathbf{x}_l \text{ and}\\% &\,\,\,\,\,\,\,\,\,\,\,\,\sigma(\mathbf{y})=min\{t / \mathbf{y} \theta \mathbf{x}_{t}\} \\ c &\text{ otherwise, } \end{array} \right. $$ where $c$ is a central element. Since $\rho$ is totally reflexive and $D_{\epsilon_{1}\epsilon_{2}}\subseteq \rho$, it follows that $f\in \text{Pol}(\rho)$. By the definition of $f$, we have $f\in \text{Pol}(\theta)$, thus $f\in \text{Pol}(\tau)$. Since $\mathbf{b}^1,..,\mathbf{b}^q\in \tau$, it follows that $f(\mathbf{b}^1,\mathbf{b}^2, \ldots ,\mathbf{b}^q)=\leqslantft(f(\mathbf{x}_1), f(\mathbf{x}_2), \ldots, f(\mathbf{x}_h)\right)=\leqslantft(u_1, u_2, \ldots, u_h\right)\in \tau$, therefore $D_{\epsilon_{1}\epsilon_{2}}\subseteq \tau.$ \end{enumerate} \item [(ii)] We suppose that $\epsilon_{1}=\Delta_{\{1;2; \ldots ;h\}} \text{ and } \epsilon_{2} = \Delta_{T}$. We show that $\tau$ is an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ or the relation $E_{k}^{h}$.\\ For all $1\leqslantq i,j\leqslantq h$ such that $i \neq j$, choose $\mathbf{a}^{ij}=\leqslantft(a^{ij}_1,\ldots, a^{ij}_s\right)\in\tau$ such that $(a^{ij}_i,a^{ij}_j)\notin \theta$ and consider the set $B=\{\mathbf{a}^{ij}, (i,j)\notin \epsilon_{2}\}$. Using similar notation as in part (c) of (i) and the same $q$-ary operation $f$ on $E_k$ for a given $\leqslantft(u_1, u_2, \ldots, u_h\right)\in \rho $, we have $$f(\mathbf{b}^1,\mathbf{b}^2, \ldots ,\mathbf{b}^q)=\leqslantft(f(\mathbf{x}_1), f(\mathbf{x}_2), \ldots, f(\mathbf{x}_h)\right)=\leqslantft(u_1, u_2,\ldots, u_h\right)\in \tau ,$$ and then $\rho\subseteq \tau$. 
By the definition of $(\rho_{l,\theta})_{\sigma}$ with $l$ the order of $\rho$ and $\sigma\in\mathcal{S}_{h}$, we have: $\rho\subseteq (\rho_{l,\theta})_{\sigma}$ for all $\sigma\in\mathcal{S}_{h}.$ We distinguish once more two subcases: $\rho=\tau$ or $\rho\neq\tau$. \begin{enumerate} \item [a)]If $\rho=\tau$, then it is finished. \item [b)]Otherwise $\rho\varsubsetneq\tau\subseteq E_{k}^{h}.$ If $\rho$ is of type I or II, then we will show that $\tau= E_{k}^{h}$. \\ As $\rho\varsubsetneq\tau$ then $\tau\setminus\rho\neq\emptyset$. Let us consider $\leqslantft(a_1,a_2,\ldots,a_h\right)\in\tau\setminus\rho$. Let $\leqslantft(u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}$; assume $\rho$ is of type I and consider the unary operation $f$ defined on $E_k$ by:$$f(x)= \leqslantft\{\begin{array}{l} u_i \text{ if } x= a_i \\ c_{[a_i]_{\theta}} \text{ if } x\theta a_i \text{ and } x\neq a_i\ \\ c \text{ otherwise,} \end{array} \right. $$ where $c$ is a central element. $f\in \text{Pol}(\rho)$ because $ \eta\subseteq\rho$ and $\rho$ is totally symmetric. Moreover $f\in \text{Pol}(\theta)$, then $f\in \text{Pol}(\tau)$ and $$\leqslantft(u_1, u_2, \ldots, u_h\right)=\leqslantft(f(a_1), f(a_2), \ldots, f(a_h)\right)\in\tau.$$ Assume $\rho$ is of type II and consider the unary operation $f$ defined on $E_k$ by:$$f(x)= \leqslantft\{\begin{array}{l} u_i \text{ if } x\theta a_i \\ c \text{ otherwise } \end{array} \right. $$ where $c$ is a central element. $f\in \text{Pol}(\rho)$ because $\rho$ is $\theta$-closed. Moreover $f\in \text{Pol}(\theta)$, then $f\in \text{Pol}(\tau)$ and $\leqslantft(u_1, u_2, \ldots, u_h\right)=\leqslantft(f(a_1), f(a_2), \ldots, f(a_h)\right)\in\tau$. Thus $\tau=E_{k}^{h}=D_{\Delta_{\{1;\cdots;h\}}\Delta_{\{1;\cdots;h\}}}.$ Let's suppose that $\rho$ is of type III and of order $l$, i.e., $\rho$ is weakly $\theta$-closed of order $l$ and $\eta\subseteq\rho$.\\ Since $\tau\setminus\rho$ is not empty, there exists $\leqslantft(a_1, a_2, \ldots, a_h\right)\in(\tau\setminus\rho).$ Therefore $\leqslantft(a_1, a_2, \ldots, a_h\right)\notin\underset{\sigma\in\mathcal{S}_{h}}\bigcap(\rho_{l,\theta})_{\sigma}$. We suppose that there is $\sigma'\in\mathcal{S}_{h}$ such that $(a_1,a_2,\ldots,a_h)\in(\rho_{l,\theta})_{\sigma'}.$ We consider the set $R_{1}:=\{\sigma'\in\mathcal{S}_{h}/ \,\,(a_1,a_2,\ldots,a_h)\in(\rho_{l,\theta})_{\sigma'}\}$ and define the relation $\varphi$ by $$\varphi=\underset{\sigma'\in R_1}\bigcap(\rho_{l,\theta})_{\sigma'}.$$ We have $\rho\varsubsetneq \varphi$. We will show that $\varphi\subseteq \tau$. Let $(u_1,...u_h)\in\varphi$ and set \begin{eqnarray*} D=\{(b_1,\ldots,b_h)\in E_{k}^{h}; b_i\in\{u_1,...,u_h\}\cup\{c,T_{[u_{1}]_{\theta}},\dots, T_{[u_{h}]_{\theta}}\}, 1\leqslantq i\leqslantq h, \\ \text{ and } \{b_1,...b_s\}\neq\{u_1,...,u_h\}\}. \end{eqnarray*} We define the unary operation $h$ on $E_k$ by: $$h(x)= \leqslantft\{\begin{array}{l} u_i \text{ if } x= a_i \\ T_{[u_i]_{\theta}} \text{ if } x\in [a_i]_{\theta}\setminus\{a_i\} \\ c \text{ otherwise } \end{array} \right. $$ For $(y_1,...,y_h)\in\rho$, we have $(h(y_1),...,h(y_h))\in D\cap\rho\subseteq\rho$. Therefore $h\in \text{Pol}(\rho)\cap\text{Pol}(\theta)$. Hence $h\in \text{Pol}(\tau)$ and $(u_1,...,u_h)=(h(a_1),...,h(a_h))\in\tau.$\\ If $\tau=\varphi$, then it is finished. Otherwise, we have $\varphi\varsubsetneq\tau$. There exists $(a_1,a_2,\ldots,a_h)\in \tau\setminus\varphi$. 
If there exists $s\in\mathcal{S}_{h}$ such that $(a_1,a_2,\ldots,a_h)\in(\rho_{l,\theta})_{s}$, then we use the same argument to construct $\varphi'$ such that $\varphi\varsubsetneq\varphi'$ and $\varphi'\subseteq \tau$. Therefore $\tau=\varphi'$ or $\varphi'\varsubsetneq\tau$, and we reach the same conclusion as above. We continue this process until we obtain an $h$-tuple $(a_1, a_2, \ldots, a_h)\in\tau$ such that, for each $\sigma\in\mathcal{S}_{h}$, $(a_1, a_2, \ldots, a_h)\notin (\rho_{l,\theta})_{\sigma}$. Let $(u_1,...,u_h)\in E_{k}^{h}$; using the unary operation $h$ defined above and the fact that $\rho$ is weakly $\theta$-closed of order $l$ with a transversal $T$ of order $l-1$ for the $\theta$-classes, we show that $(u_1,...,u_h)\in \tau$. Therefore $\tau=E_{k}^{h}$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{proof}
\begin{lemma}\label{existence-unanimity-all}
Under the assumptions of Proposition \ref{sufficiency-direction}, $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ contains an $h$-near-unanimity operation.
\end{lemma}
\begin{proof}
If $\rho$ is of type I, let us consider the $(h+1)$-ary operation $m$ defined on $E_k$ by:
$$ m(x_1, \ldots ,x_h,x_{h+1})= \left\{\begin{array}{ll}
x_{i_1}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } x_{i_1}=x_{i_2}= \ldots =x_{i_h}, \\
c_{[x_{i_1}]_{\theta}}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } [x_{i_1}]_{\theta}=[x_{i_2}]_{\theta}= \ldots =[x_{i_h}]_{\theta}, \\
c & \text{ otherwise;}
\end{array} \right.$$
if $\rho$ is of type II, let us consider the $(h+1)$-ary operation $m$ defined on $E_k$ by:
$$ m(x_1, \ldots ,x_h,x_{h+1})= \left\{\begin{array}{ll}
x_{i_1}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } x_{i_1}=x_{i_2}= \ldots =x_{i_h}, \\
x_{i_1}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } [x_{i_1}]_{\theta}=[x_{i_2}]_{\theta}= \ldots =[x_{i_h}]_{\theta}, \\
c & \text{ otherwise;}
\end{array} \right.$$
if $\rho$ is of type III, let us consider the $(h+1)$-ary operation $m$ defined on $E_k$ by:
$$ m(x_1, \ldots ,x_h,x_{h+1})= \left\{\begin{array}{ll}
x_{i_1}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } x_{i_1}=x_{i_2}= \ldots =x_{i_h}, \\
T_{[x_{i_1}]_{\theta}}& \text{ if there exist } 1\leqslant i_1 < \ldots <i_h\leqslant h+1 \\
& \,\, \text{ such that } [x_{i_1}]_{\theta}=[x_{i_2}]_{\theta}= \ldots =[x_{i_h}]_{\theta}, \\
c & \text{ otherwise;}
\end{array} \right.$$
where $c$ is a central element. By definition, $m$ is a near-unanimity operation of order $h+1$. We will show that $m\in \text{Pol}(\rho)\cap \text{Pol}(\theta)$. To do this, we prove firstly that $m\in \text{Pol}(\theta)$ and secondly that $m\in \text{Pol}(\rho)$.
\begin{enumerate}
\item [a.]
To see that $m$ preserves $\theta$, assume that $\leqslantft( u_i ,v_i \right)\in\theta$ for $1\leqslantq i \leqslantq s+1.$ Since $$|\{[u_{1}]_{\theta};[u_2]_{\theta}; \ldots ;[u_{h+1}]_{\theta}\}|=|\{[v_1]_{\theta};[v_2]_{\theta}; \ldots ;[v_{h+1}]_{\theta}\}|,$$ if there exist $1\leqslantq i_1< i_2<..<i_h\leqslantq h+1$ such that $[u_{i_1}]_{\theta}=[u_{i_2}]_{\theta}= \ldots =[u_{i_h}]_{\theta}$, then $m(u_1,u_2, \ldots ,u_{h+1})\in [u_{i_1}]_{\theta}= [v_{i_1}]_{\theta}$ because $[u_{i}]_{\theta}=[v_{i}]_{\theta}$ for $i = 1, 2,..,h+1$.\\ Therefore, $(m(u_1,u_2, \ldots ,u_{h+1}),m(v_1,v_2, \ldots ,v_{h+1}))\in [u_{i_1}]_{\theta}^{2}\subseteq\theta$. In the other case we have $(m(u_1,u_2, \ldots ,u_{h+1}),m(v_1,v_2, \ldots ,v_{h+1}))=\leqslantft( c ,c \right)\in\theta$; then $m\in \text{Pol}(\theta)$. \item [b.] Let us prove now that $m$ preserves $\rho$.\\ Let $\leqslantft(u_{11}, u_{21},\ldots, u_{h1}\right)$,$\leqslantft(u_{12}, u_{22}, \ldots, u_{h2}\right)$,\ldots,$\leqslantft(u_{1 h+1}, u_{2 h+1}, \ldots, u_{h h+1}\right)$ $\in\rho.$ \\ If $\{m(u_{i1},u_{i2}, \ldots ,u_{i h+1}); \ 1\leqslantq i \leqslantq h\}$ contains a central element of $\rho$ then $$\leqslantft(m(u_{11},u_{12}, \ldots ,u_{1 h+1}), \ldots, m(u_{h1},u_{h2}, \ldots ,u_{h h+1})\right)\in\rho;$$ else we will distinguish the three type of $\rho$.\\ Suppose $\rho$ be of type I, there exist $1\leqslantq i_{r}^{1}<i_{r}^{2}< \ldots <i_{r}^{h}\leqslantq h+1, \ 1\leqslantq r \leqslantq h$ such that $u_{r i_{r}^{1}}=u_{r i_{r}^{2}}= \ldots =u_{r i_{r}^{h}}$ for $r\in\{1;\cdots;h\}$; as $\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap, \ldots ,\cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}\neq \emptyset$, let us consider a fixed element $i\in\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap, \ldots ,\cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}$. We have $$( m(u_{11},u_{12}, \ldots ,u_{1 h+1}), \ldots ,m(u_{h1},u_{h2}, \ldots ,u_{h h+1}))=(u_{1i},...,u_{hi})\in\rho.$$ Therefore $m\in \text{Pol}(\rho)$.\\ If $\rho$ is of type II there exist $1\leqslantq i_{r}^{1}<i_{r}^{2}< \ldots <i_{r}^{h}\leqslantq h+1, \ 1\leqslantq r \leqslantq h$ such that $[u_{r i_{r}^{1}}]_{\theta}=[u_{r i_{r}^{2}}]_{\theta}= \ldots =[u_{r i_{r}^{h}}]_{\theta}$ for $r\in\{1;\cdots;h\}$; as $\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap, \ldots ,\cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}\neq \emptyset$ then, with $i\in\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap, \ldots ,\cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}$ we have $$( m(u_{11},u_{12}, \ldots ,u_{1 h+1}), \ldots ,m(u_{h1},u_{h2}, \ldots ,u_{h h+1}))\in [u_{1 i}]_{\theta}\times\ldots\times[u_{h i}]_{\theta}\subseteq\rho$$ because $\rho$ is $\theta$-closed. 
Thus $m\in \text{Pol}(\rho)$.\\
Finally, if $\rho$ is of type III, there exist $1\leqslant i_{r}^{1}<i_{r}^{2}< \ldots <i_{r}^{h}\leqslant h+1$, $1\leqslant r \leqslant h$, such that $[u_{r i_{r}^{1}}]_{\theta}=[u_{r i_{r}^{2}}]_{\theta}= \ldots =[u_{r i_{r}^{h}}]_{\theta}$ for $r\in\{1;\cdots;h\}$; as
$$\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap \ldots \cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}\neq \emptyset,$$
then, with $i\in\{i_{1}^{1};i_{1}^{2}; \ldots ;i_{1}^{h}\}\cap\{i_{2}^{1};i_{2}^{2}; \ldots ;i_{2}^{h}\}\cap \ldots \cap\{i_{h}^{1};i_{h}^{2}; \ldots ;i_{h}^{h}\}$, we have (with $Z:=(m(u_{11},\ldots,u_{1 h+1}),\ldots,m(u_{h1}, \ldots ,u_{h h+1}))$)
$$Z\in \{u_{1i},...,u_{hi},T_{[u_{1i}]_{\theta}},...,T_{[u_{hi}]_{\theta}}\}^{h}\subseteq \rho.$$
Hence $m\in \text{Pol}(\rho)$.
\end{enumerate}
\end{proof}
\begin{proof}[\textbf{Proof of Proposition \ref{sufficiency-direction}}]\text{ }
Let $f\in \text{Pol}(\theta)\setminus(\text{Pol}(\theta)\cap \text{Pol}(\rho))$. We will prove that $G=\langle\text{Pol}(\theta)\cap \text{Pol}(\rho)\cup\{f\}\rangle$ is equal to $\text{Pol}(\theta)$. From Lemma \ref{existence-unanimity-all}, it follows that $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ contains an $h$-near-unanimity operation $m$. According to Theorem \ref{baker-pixley} and the fact that $m\in G$, we have $G=\text{Pol}\,\text{Inv}^{(h)} G$. If $\tau\in \text{Inv}^{(h)} G$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)\subseteq \langle\text{Pol}(\theta)\cap \text{Pol}(\rho)\cup\{f\}\rangle= G=\text{Pol}\,\text{Inv}^{(h)} G\subseteq \text{Pol}(\tau)$. By Lemma \ref{caracterisation-diagonal-all}, if $\rho$ is of type I or II, then $\tau$ is either the empty relation, a diagonal relation through $\theta$, or $\rho$; if $\rho$ is of type III and of order $l$, then $\tau$ is either the empty relation, a diagonal relation through $\theta$, or an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ with $\sigma\in\mathcal{S}_{h}$. Since $f\notin \text{Pol}(\rho)$, $f$ cannot preserve $\rho$, nor an intersection of relations of the form $(\rho_{l,\theta})_{\sigma}$ with $\sigma\in\mathcal{S}_{h}$; therefore $\tau$ is the empty relation or a diagonal relation through $\theta$. In the light of Lemma \ref{caracterisation-I}, we have $\text{Pol}(\theta)\subseteq G$.
\end{proof}
\begin{remark}
Let us mention that if $\rho$ is of type I or II, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is also maximal in $\text{Pol}(\rho)$, whereas if $\rho$ is of type III, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\rho)$.
\end{remark}
\subsubsection{Proof of the necessity criterion in Theorem \ref{maintheorem}}\label{sec6-subsec5}
In this subsection, we show that the relations of type I, II and III are the only central relations $\rho$ such that $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. We therefore suppose that $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$ and show that, outside the types announced above, maximality fails.
\begin{proposition}\label{completeness-criterion}
If $\text{Pol}(\rho)\cap \text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$, then $\rho$ is either of type I, II, or III.
\end{proposition}
Before the proof of Proposition \ref{completeness-criterion}, we give some results on the properties of the relation $\rho$.
\begin{lemma}\label{major-criterion}
If $\text{Pol}(\rho)\cap \text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$, then $\eta\subseteq \rho$.
\end{lemma}
\begin{proof}
By contraposition, suppose that $\eta\nsubseteq \rho$; then there exists
$$\left(u_1, u_2, \ldots, u_h\right)\in \eta\quad \text{ such that } \quad \left(u_1, u_2, \ldots, u_h\right)\notin \rho.$$
We denote by $\eta^{1}$ the relation
$$\eta^{1}=\left\{\left(u_1, u_2, \ldots, u_h\right)\in \rho / u_1\,\theta\, u_2\right\}.$$
Let us prove the following inclusions:
$$\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\eta^{1})\varsubsetneq \text{Pol}(\theta).$$
\begin{enumerate}
\item [(i)] Firstly, let us prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\eta^{1})$. Let $f\in \text{Pol}(\rho)\cap \text{Pol}(\theta)$ be of arity $n$; we will prove that $f \in \text{Pol}(\eta^{1})$.\\
Let $\left(u_{11}, u_{21}, \ldots,u_{h1}\right), \ldots ,\left(u_{1n}, u_{2n}, \ldots, u_{hn}\right) \in \eta^{1}$.\\
We have $f(u_{11}, \ldots ,u_{1n})\,\theta\, f(u_{21}, \ldots ,u_{2n})$ since $f\in \text{Pol}(\theta)$, and
$$\left(f(u_{11}, \ldots ,u_{1n}),f(u_{21}, \ldots ,u_{2n}), \ldots, f(u_{h1}, \ldots ,u_{hn})\right)\in \rho$$
since $f\in \text{Pol}(\rho)$. Therefore $f\in \text{Pol}(\eta^{1})$.\\
Now let $\left(u_1, u_2, \ldots, u_h\right)\in \eta\setminus \rho$ and $(a,b)\notin \theta$ be fixed; we define the $h$-ary operation $f$ on $E_{k}$ by:
\begin{equation}\label{function-for-strict-inclusion-f}
f(x_1,x_2, \ldots ,x_h)= \left\{\begin{array}{l}
u_1 \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,a, \ldots ,a), \\
u_2 \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,b,a, \ldots ,a), \\
\vdots\\
u_h \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,a, \ldots ,a,b), \\
u_1 \text{ otherwise. }
\end{array} \right.
\end{equation}
For $\left(u_{11}, u_{21}, \ldots, u_{h1}\right), \ldots ,\left(u_{1n}, u_{2n}, \ldots, u_{hn}\right) \in \eta^{1}$, according to the definition of $\eta^{1}$, we have $f(u_{11}, \ldots ,u_{1n})= f(u_{21}, \ldots ,u_{2n})$, since $f$ is constant on the $\theta$-classes of $h$-tuples and $(u_{11}, \ldots ,u_{1n})\,\theta\,(u_{21}, \ldots ,u_{2n})$. From the fact that $\rho$ is totally reflexive we have
$$\left(f(u_{11}, \ldots ,u_{1n}),f(u_{21}, \ldots ,u_{2n}), \ldots, f(u_{h1}, \ldots ,u_{hn})\right)\in \eta^{1}.$$
Hence $f\in \text{Pol}(\eta^{1})$. Furthermore, the columns of the matrix $\left(\begin{array}{cccccc} a&a&a&\cdots &a&a \\a&b&a&\cdots&a&a \\ \vdots&&&&&\vdots \\a&a&a&\cdots&a&b \end{array}\right)$ belong to $\rho$, and by the definition of $f$ we have
$$\left(u_1, u_2, \ldots, u_h\right)=\left(f(a,a,a,\ldots,a), f(a,b,a,\ldots,a), \ldots, f(a,a,\ldots,a,b)\right)\notin \rho.$$
Therefore $f\notin \text{Pol}(\rho)$ and $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\eta^{1})$.
\item [(ii)] Secondly, we will show that $\text{Pol}(\eta^{1})\varsubsetneq \text{Pol}(\theta)$.\\
From the equality $pr_{12}(\eta^{1})=\theta$, it follows that $\text{Pol}(\eta^{1})\subseteq \text{Pol}(\theta)$. Let $\left(u_1, u_2, \ldots, u_h\right)\in \eta\setminus \rho$ be a fixed element. It is obvious that $\left(u_1, u_2, \ldots, u_h\right)\notin \eta^{1}$.
Let $a,b,c\in E_{k}$ be such that $(a,b)\in\theta$, $a\neq b$ and $(a,c)\notin\theta$.\\
For $1\leqslant i\leqslant h$, we define the $(h-1)$-tuple $\mathbf{W}_{i}=(w_{i1},\ldots,w_{i\, h-1})$ by:
\[w_{1l}=a\text{ for } 1\leqslant l\leqslant h-1,\]
\[w_{2l}=\left\{\begin{array}{cc}
b & \text{ if } l=h-1, \\
a & \text{ elsewhere, }
\end{array} \right.\]
\[\text{and, for } m\geq 3,\quad w_{ml}=\left\{\begin{array}{cc}
c & \text{ if } l=m-2, \\
a & \text{ elsewhere. }
\end{array} \right.\]
We have $\mathbf{W}_{1}\neq \mathbf{W}_{2}$ and, for every $1\leqslant i<j\leqslant h$, $\mathbf{W}_{i}\,\theta\, \mathbf{W}_{j}$ if and only if $i=1$ and $j=2$. We define the $(h-1)$-ary operation $g$ on $E_{k}$ by:
\[g(x_1,x_2, \ldots ,x_{h-1})= \left\{\begin{array}{l}
u_1 \text{ if } (x_1,x_2, \ldots ,x_{h-1})=\mathbf{W}_{1}, \\
u_2 \text{ if }(x_1,x_2, \ldots ,x_{h-1})=\mathbf{W}_{2},\\
u_i \text{ if } (x_1,x_2, \ldots ,x_{h-1})\,\theta\, \mathbf{W}_{i}\text{ for some } 3\leqslant i\leqslant h-1, \\
u_{h} \text{ elsewhere. }
\end{array} \right.\]
Since $g(\theta)\subseteq \{(u_1,u_2);(u_2,u_1)\}\cup\{(u_i,u_i): i\in\{1;\cdots;h\}\}$, $g\in \text{Pol}(\theta)$.\\
The columns of the matrix $(w_{ij})_{\begin{subarray}{l} 1\leqslant i\leqslant h\\ 1\leqslant j\leqslant h-1 \end{subarray}}$ belong to $\eta^{1}$, and
\[g\left((w_{ij})_{\begin{subarray}{l} 1\leqslant i\leqslant h\\ 1\leqslant j\leqslant h-1 \end{subarray}}\right)=(g(\mathbf{W}_{1}),\ldots,g(\mathbf{W}_{h}))=(u_1,\ldots,u_h)\notin\eta^{1}.\]
Hence $g\notin \text{Pol}(\eta^{1})$. Therefore $\text{Pol}(\eta^{1})\varsubsetneq \text{Pol}(\theta)$.
\end{enumerate}
\end{proof}
\begin{remark}\label{arity-criterion}
If $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$, then the arity of $\rho$ is less than or equal to $t$, where $t$ is the number of equivalence classes of $\theta$. Indeed, suppose that the arity $h$ of $\rho$ is greater than $t$. Since $\rho$ is totally symmetric, the inclusion $\eta\subseteq\rho$ would imply that every $h$-tuple having two $\theta$-equivalent coordinates belongs to $\rho$; as $h>t$, every $h$-tuple has two $\theta$-equivalent coordinates, so we would get $\rho=E_k^{\textit{arity}(\rho)}$, which is impossible. Hence $\eta\nsubseteq \rho$, and $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$ by Lemma \ref{major-criterion}.
\end{remark}
From now on we suppose that $\eta\subseteq\rho$. According to the definition of $\rho_{0,\theta}$ we have $\rho\subseteq\rho_{0,\theta}$.
\begin{lemma}\label{lem-rec-O}
If $\rho= \rho_{0,\theta}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$.
\end{lemma}
\begin{proof}
If $\rho=\rho_{0,\theta}$, then $\rho$ is $\theta$-closed. Hence $\rho$ is of type II and $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$ by Proposition \ref{sufficiency-direction}.
\end{proof}
Now we suppose that $\rho\varsubsetneq\rho_{0,\theta}$; the two possible cases are treated in the following lemmas.
\begin{lemma}\label{lem-rec-I}
If $\rho\varsubsetneq\rho_{0,\theta}\varsubsetneq E_{k}^{h}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$.
\end{lemma}
\begin{proof}
It is easy to see that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\theta)\cap \text{Pol}(\rho_{0,\theta})\subseteq \text{Pol}(\theta)$.\\
Let $a,b\in E_{k}$ be such that $(a,b)\notin \theta$, and let $\left( u_1, u_2, \ldots, u_h\right)\in \rho_{0,\theta}\setminus \rho$ (respectively $E_{k}^{h}\setminus\rho_{0,\theta}$).
Using the $h$-ary operation $f$ defined in the proof of Lemma \ref{major-criterion}, we show that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\rho_{0,\theta})$ (respectively $\text{Pol}(\theta)\cap \text{Pol}(\rho_{0,\theta})\varsubsetneq \text{Pol}(\theta)$).\\
Hence $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\rho_{0,\theta})\varsubsetneq \text{Pol}(\theta)$.
\end{proof}
\begin{lemma}\label{lem-rec-II}
If $\rho\varsubsetneq\rho_{0,\theta}= E_{k}^{h}$ and there exists an integer $l>h$ such that $\rho_{0,\theta}^{l}\neq E_k^{l}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$, where $\rho_{0,\theta}^{l}$ is the relation
$$\rho_{0,\theta}^{l}=\left\{\mathbf{u}\in E_{k}^{l} / \exists (e_1,\ldots,e_l)\in [u_1]_\theta\times\cdots\times[u_l]_\theta: \{e_1,\ldots,e_l\}^{h}\subseteq \rho\right\}.$$
\end{lemma}
\begin{proof}
To prove our lemma, we will show that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$, where $m=\min\{l\in \mathbb{N}\setminus\{0;1;2; \ldots ;h\};\ \rho_{0,\theta}^{l}\neq E_k^{l}\}$. We will distinguish two cases, one for each inclusion.
\begin{enumerate}
\item [(i)] Let us prove here that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)$.\\
Let $f\in \mathcal{O}^{n}(E_k)$ be an $n$-ary operation on $E_k$ such that $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$. According to the definition of $\rho_{0,\theta}^{m}$, for
$$\left(u_{11}, \ldots, u_{m1}\right), \ldots ,\left(u_{1n}, \ldots, u_{mn} \right)\in \rho_{0,\theta}^{m}$$
there exist $a_{ij}\in [u_{ij}]_{\theta}$ such that for all $j\in\{1;\cdots;n\}$ we have $\{a_{1j},\ldots,a_{mj}\}^{h}\subseteq\rho$. Then it follows that there exist $a_{ij}\in [u_{ij}]_{\theta}$ such that for all $i_1,i_2, \ldots ,i_h\in\{1;\cdots;m\}$ we have
$$\left(a_{i_{1}1}, a_{i_{2}1}, \ldots, a_{i_{h}1} \right), \ldots ,\left(a_{i_{1}n}, a_{i_{2}n}, \ldots, a_{i_{h}n}\right) \in \rho,$$
which implies that there exist $a_{ij}\in [u_{ij}]_{\theta}$ such that for all $i_1,i_2, \ldots ,i_h\in\{1;\cdots;m\}$ we have
$$\left(f(a_{i_{1}1}, \ldots ,a_{i_{1}n}), f(a_{i_{2}1}, \ldots ,a_{i_{2}n}), \ldots, f(a_{i_{h}1}, \ldots ,a_{i_{h}n})\right)\in \rho.$$
Since $f\in \text{Pol}(\theta)$ and, for all $i \in \{1;\cdots;m\}$, $(u_{i1}, \ldots ,u_{in})\,\theta\,(a_{i1}, \ldots ,a_{in})$, it follows that
$$\left(f(u_{11}, \ldots ,u_{1n}), \ldots , f(u_{m1}, \ldots ,u_{mn})\right)\in \rho_{0,\theta}^{m},$$
that is, $f$ preserves $\rho_{0,\theta}^{m}$. Hence $f\in \text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)$. Now let $\left(v_1, \ldots, v_h\right)\in E_{k}^{h}\setminus \rho$ and $(a,b)\notin \theta$ be fixed. Using the $h$-ary operation $f$ on $E_k$ specified by:
$$f(x_1,x_2, \ldots ,x_h)= \left\{\begin{array}{l}
v_1 \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,a, \ldots ,a), \\
v_2 \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,b,a, \ldots ,a), \\
\vdots\\
v_{h-1} \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,a, \ldots ,a,b,a), \\
v_h \text{ if } (x_1,x_2, \ldots ,x_h)\,\theta\, (a,a, \ldots ,a,a,b), \\
v_1 \text{ otherwise.
} \end{array} \right.$$ and the fact that $m>h$ and $\rho_{0,\theta}^{m}$ is totally reflexive, we obtain $f\notin \text{Pol}(\rho)$ and $f\in \text{Pol}(\rho_{0,\theta}^{m}).$ \item [(ii)] This item is devoted to prove that $\text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta).$\\ Let $\leqslantft(u_{1}, \ldots, u_{m}\right)\in E_k^{m}\setminus\, \rho_{0,\theta}^{m}$ ($E_k^{m}\setminus\, \rho_{0,\theta}^{m}$ is not empty because $\rho_{0,\theta}^{m}\neq E_{k}^{m}$) and $(a,b)\notin \theta.$ Let us consider the $m$-ary operation $h$ on $E_{k}$ defined by: \begin{equation}\label{function-for-strict-inclusion-h-in-theta} h(x_1, \ldots ,x_m)= \leqslantft\{\begin{array}{l} u_1 \text{ if } (x_1, \ldots ,x_m)\theta (a, \ldots ,a,a)=a_1 \\ u_2 \text{ if } (x_1, \ldots ,x_m)\theta (a,b,a, \ldots ,,a)=a_2 \\ u_3 \text{ if } (x_1, \ldots ,x_m)\theta (a,a,b,a \ldots ,a)=a_3 \\ \vdots\\ u_{m} \text{ if } (x_1, \ldots ,x_m)\theta (a,a,a, \ldots a,b)=a_m\\ u_m \text{ otherwise. } \end{array} \right. \end{equation} The implication $a_i\theta a_j\Rightarrow i=j$ allows us to say that $h\in \text{Pol}(\theta).$ It is clear that $$\leqslantft(\begin{array}{c} a \\\vdots\\a\\a\\a \\a\\a \\ \end{array}\right), \ \leqslantft(\begin{array}{c} a \\b\\a \\\vdots\\a\\a\\a \\ \end{array}\right), \ \leqslantft(\begin{array}{c} a \\a\\b\\a\\\vdots \\a\\a \\ \end{array}\right), \ \ldots , \ \leqslantft(\begin{array}{c} a \\a\\a\\\vdots\\a\\a \\b \\ \end{array}\right)\in\, \rho_{0,\theta}^{m}$$ but following the definition of $h$, it appears that $$\leqslantft(u_1, u_2, u_3,\ldots, u_m\right)= \leqslantft( h(a_1), h(a_2), h(a_3), \ldots, h(a_m)\right)\notin\, \rho_{0,\theta}^{m};$$ Therefore, $h\notin \text{Pol}(\rho_{0,\theta}^{m})$ and $\text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta).$ \end{enumerate} Hence $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\rho_{0,\theta}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta).$ \end{proof} In the light of Lemma \ref{lem-rec-II}, it is natural to suppose $\rho_{0,\theta}= E_{k}^{h}$ and $\rho_{0,\theta}^{l}= E_k^{l}$ for all $l>h.$ In particular, for $l=t$ we have $\rho_{0,\theta}^{t}= E_k^{t}$ and there exists $(x_1,x_2, \ldots ,x_t)\in C_0\times C_1\times \ldots \times C_{t-1}$ such that $\{x_1,x_2, \ldots ,x_t\}^{h} \subseteq\rho$. Let $\,\varsigma$ be the relation $$\, \varsigma=\leqslantft\{\mathbf{a}\in E_{k}^{h}| \exists u_{ij}\in [a_{i}]_{\theta}, 1\leqslantq i,j\leqslantq h\text{ with } i\neq j\right.\text{ such that } \forall j\in\{1;\cdots;h\},$$ $$\leqslantft. \leqslantft(a_j, u_{j_{1}j}, u_{j_{2}j}, \ldots, u_{j_{h-1}j}\right)\in \rho, \text{ with } \{j_{1}; \cdots; j_{h-1}\}=\{1; \cdots; h\}\setminus\{j\}\right\}.$$ Since $\rho $ is totally symmetric, $\, \varsigma=\underset{\sigma\in\mathcal{S}_{h}}\bigcap(\rho_{1,\theta})_{\sigma}$ and it is obvious that $\rho\subseteq\, \varsigma$. \begin{lemma}\label{lem-rec-I-gamma3-eal} If $\rho= \varsigma$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} If $\rho= \varsigma$, then $\rho$ is weakly $\theta$-closed of order $1$. Hence $\rho$ is a relation of type III. By Proposition \ref{sufficiency-direction} $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. 
\end{proof}
\begin{lemma}\label{lem-rec-I-gamma3}
If $\rho\varsubsetneq \varsigma\varsubsetneq E_{k}^{h}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$.
\end{lemma}
\begin{proof}
We will prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\varsigma)\varsubsetneq \text{Pol}(\theta)$. Let $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$ be $n$-ary.\\
Let $\left(a_{11}, a_{21},\ldots, a_{h1}\right), \ldots ,\left(a_{1n}, a_{2n}, \ldots, a_{hn}\right)\in \varsigma$; we will show that
$$\left(f(a_{11}, \ldots ,a_{1n}), f(a_{21}, \ldots ,a_{2n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in \varsigma.$$
Firstly, we show that $\left(f(a_{11}, \ldots ,a_{1n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in \rho_{1,\theta}$. Since $\left(a_{1i},\ldots, a_{hi}\right)\in \varsigma$ for $1\leqslant i\leqslant n$, there exist $u_{ji}\in[a_{ji}]_{\theta}$, $2\leqslant j\leqslant h$, such that $\left(a_{1i}, u_{2i},\ldots, u_{hi}\right)\in\rho$ for $1\leqslant i\leqslant n$. Therefore
$$\left(f(a_{11}, \ldots ,a_{1n}), f(u_{21}, \ldots ,u_{2n}), \ldots, f(u_{h1}, \ldots ,u_{hn}) \right)\in\rho$$
and $f(u_{j1}, \ldots ,u_{jn})\,\theta\, f(a_{j1}, \ldots ,a_{jn})$ for $2\leqslant j \leqslant h$. Hence
$$\left(f(a_{11}, \ldots ,a_{1n}), f(a_{21}, \ldots ,a_{2n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in \rho_{1,\theta}.$$
Secondly, we show that $\left(f(a_{11}, \ldots ,a_{1n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in (\rho_{1,\theta})_{\sigma}$ for every $\sigma\in \mathcal{S}_{h}$. Let $\sigma\in \mathcal{S}_{h}$. Since $\left(a_{1i},\ldots, a_{hi}\right)\in (\rho_{1,\theta})_{\sigma}$ for $1\leqslant i\leqslant n$, it follows that $\left(a_{\sigma^{-1}(1)i},\ldots, a_{\sigma^{-1}(h)i}\right)\in \rho_{1,\theta}$ for $1\leqslant i\leqslant n$. Hence
$$\left(f(a_{\sigma^{-1}(1)1}, \ldots ,a_{\sigma^{-1}(1)n}), \ldots, f(a_{\sigma^{-1}(h)1}, \ldots ,a_{\sigma^{-1}(h)n}) \right)\in \rho_{1,\theta}.$$
Therefore $\left(f(a_{11}, \ldots ,a_{1n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in (\rho_{1,\theta})_{\sigma}$.\\
Thus $\left(f(a_{11}, \ldots ,a_{1n}), \ldots, f(a_{h1}, \ldots ,a_{hn}) \right)\in \varsigma$, and it follows that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\theta)\cap \text{Pol}(\varsigma)$.\\
As $\rho\varsubsetneq \varsigma$, we have $\varsigma\setminus \rho\neq \emptyset$; then, with $\left( u_1, u_2, \ldots, u_h\right)\in \varsigma\setminus \rho$ and $(a,b)\notin \theta$ fixed, using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}) and the same argument as in the proof of Lemma \ref{lem-rec-I}, it follows that $f\in \text{Pol}(\varsigma)$ and $f\notin \text{Pol}(\rho)$. Then $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\varsigma)$.\\
Finally, we show that $\text{Pol}(\theta)\cap \text{Pol}(\varsigma)\varsubsetneq \text{Pol}(\theta)$ (recall that $\varsigma\neq E_{k}^{h}$).\\
Let us take a fixed element $\left(u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}\setminus \varsigma$ and $(a,b)\notin \theta$.
Using the operation $f$ defined above, we easily obtain $f\in \text{Pol}(\theta)$.\\
On the other hand, the columns of
$$\left(\begin{array}{cccccc} a&a&a&\cdots&a&a \\a&b&a&\cdots&a&a \\&&\cdots&\cdots&& \\a&a&\cdots&a&b&a \\a&a&\cdots&a&a&b \\ \end{array}\right)$$
belong to $\varsigma$, but by the definition of $f$ we have
$$\left(u_1, u_2, \ldots, u_h\right)=\left(f(a,a, \ldots ,a), f(a,b,a, \ldots ,a), \ldots, f(a,a, \ldots ,a,a,b)\right)\notin \varsigma.$$
It follows that $f\notin \text{Pol}(\varsigma)$; thus $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\varsigma)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$.
\end{proof}
\begin{lemma}\label{lem-rec-II-gamma3}
If $\rho\varsubsetneq \varsigma= E_{k}^{h}$ and there exists $h<l\leqslant t$ with $\varsigma^{l}\neq E_k^{l}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$, where $\varsigma^{l}=\underset{\sigma\in\mathcal{S}_{l}}\bigcap(\rho_{1,\theta}^{l})_{\sigma}$ with
$$\rho_{1,\theta}^{l}=\left\{\left(a_1, \ldots, a_l \right)\in E_{k}^{l} / \exists u_{i}\in [a_{i}]_{\theta}, 2\leqslant i\leqslant l: \{a_1;u_{2};u_{3};\ldots;u_{l}\}^{h}\subseteq \rho\right\}.$$
\end{lemma}
\begin{proof}
We set $m:=\min\{l\in \{0;1;\cdots;t\}\setminus\{0;1;2; \ldots ;h\}/ \varsigma^{l}\neq E_k^{l}\}$. We will prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\varsigma^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$.\\
We use an argument similar to that of the proof of Lemma \ref{lem-rec-I-gamma3} to show that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\varsigma^{m})\cap \text{Pol}(\theta)$.\\
Let $\left( u_1, u_2, \ldots, u_h\right)\in \varsigma\setminus \rho$ and $(a,b)\notin \theta$ be fixed. Using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}), we have again $f\notin \text{Pol}(\rho)$ and $f\in \text{Pol}(\varsigma^{m})$; indeed, since $\left( u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}=\varsigma$, there exist $a_{ij}\in [u_{i}]_{\theta}$, $i,j\in\{1;\cdots;h\}$ with $i\neq j$, such that for all $j\in\{1; \ldots ;h\}$,
$$\left(u_j, a_{j_{1}j}, a_{j_{2}j}, \ldots, a_{j_{h-1}j}\right)\in \rho,$$
with $\{j_{1}, \ldots ,j_{h-1}\}=\{1, \ldots ,h\}\setminus\{j\}$; then it follows that $\{u_1,u_2, \ldots ,u_h\}^{m}\subseteq \varsigma^m$.\\
Moreover, since $\varsigma^{m}\neq E_{k}^{m}$, there exists $\left(u_{1}, \ldots, u_{m}\right)\in E_k^{m}\setminus \varsigma^{m}$. Given such a tuple and $(a,b)\notin \theta$, using the $m$-ary operation $h$ defined by (\ref{function-for-strict-inclusion-h-in-theta}) and the same argument as in the proof of Lemma \ref{lem-rec-II}, we have $h\in \text{Pol}(\theta)$ but $h\notin \text{Pol}(\varsigma^{m})$. Thus $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\varsigma^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$.
\end{proof}
The previous lemma leads us to suppose that for each $l\in\{h;\cdots; t\}$, $\varsigma^{l}= E_k^l$. Let $m=\max\{|C_i|;\ 0\leqslant i\leqslant t-1\}$ and denote by $\gamma'$ the relation
\begin{eqnarray*}
\gamma'&=&\left\{\left(a_1, \ldots, a_m, a_{m+1}, \ldots, a_{m+t-1} \right)\in E_{k}^{m+t-1} /\ \forall \{i;j\}\subseteq \{1;\cdots;m\},\ a_i\,\theta\, a_j;\right.\\
& &\quad \exists u_{i}\in [a_{i}]_{\theta},\ m+1\leqslant i \leqslant m+t-1,\ \text{such that } \forall\, 1\leqslant i\leqslant m,\\
& &\quad\left.\{a_i;u_{m+1}; \ldots ;u_{m+t-1}\}^{h}\subseteq \rho\right\}.
\end{eqnarray*}
\qquad \exists u_{i}\in [a_{i}]_{\theta}, m+1\leq i \leq m+t-1, \forall 1\leq i\leq m,\right.$$ $$\left.\{a_i;u_{m+1}; \ldots ;u_{m+t-1}\}^{h}\subseteq \rho\right\}.$$ We have $\text{Pol}(\gamma')\subseteq \text{Pol}(\theta)$. We define two relations $\epsilon_{=}$ and $\epsilon'_{\theta}$ on $\{1;\cdots;m+t-1\}$ by: $$(i,j)\in\epsilon_{=}\Leftrightarrow i=j \text{ and } (i,j)\in \epsilon'_{\theta}\Leftrightarrow (i=j \text{ or } \{i,j\}\subseteq \{1; \ldots ;m\}).$$ $\epsilon_{=}$ and $\epsilon'_{\theta}$ are equivalence relations and $\gamma' \subseteq D_{\epsilon_{=}\epsilon'_{\theta}}$. \begin{lemma}\label{lem-rec-II-gamma3'}\text{ } If for all $l\in\{h,h+1,\ldots,t\}$ we have $\varsigma^{l}= E_{k}^{l}$, and $\gamma'\neq D_{\epsilon_{=}\epsilon'_{\theta}}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} We just have to prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\gamma') \varsubsetneq \text{Pol}(\theta)$. \\ Let $f\in \mathcal{O}^{n}(E_k)$ be an $n$-ary operation on $E_k$ such that $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$. Let $$\left(a_{11}, \ldots, a_{m+t-1\, 1} \right), \ldots ,\left(a_{1n}, \ldots, a_{m+t-1\, n} \right)\in \gamma'.$$ According to the definition of $\gamma'$, for all $1\leq j \leq n$ and all $\alpha,\beta\in \{1;\cdots;m\}$, we have $[a_{\alpha j}]_{\theta}=[a_{\beta j}]_{\theta}$, and for all $j\in \{1;\cdots;n\}$, there exists $u_{rj}\in [a_{rj}]_{\theta}$ with $r\in \{m+1;\cdots;m+t-1\}$ such that for all $i\in \{1;\cdots;m\}$, $$\{a_{ij};u_{m+1j};u_{m+2j};\ldots;u_{m+t-1j}\}^{h}\subseteq \rho.$$ Since $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$, for all $\alpha,\beta\in \{1;\cdots;m\}$, $[f(a_{\alpha 1}, \ldots ,a_{\alpha n})]_{\theta}=[f(a_{\beta 1}, \ldots ,a_{\beta n})]_{\theta}$, and for all $r\in \{m+1;\cdots;m+t-1\}$, $f(u_{r1}, \ldots ,u_{rn})\in [f(a_{r1}, \ldots ,a_{rn})]_{\theta}.$ It follows that for all $i\in\{1;\cdots;m\}$ $$\{f(a_{i1}, \ldots ,a_{in});f(u_{m+11}, \ldots ,u_{m+1n}); \ldots ;f(u_{m+t-11}, \ldots ,u_{m+t-1n})\}^{h}\subseteq\rho .$$ Consequently $$\left(f(a_{11}, \ldots ,a_{1n}), \ldots, f(a_{m+t-11}, \ldots ,a_{m+t-1n})\right)\in \gamma'$$ and then $f\in \text{Pol}(\gamma').$ Let $\left(u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}\setminus \rho$ and $(a,b)\notin \theta$ be fixed. Using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}), we have $f\notin \text{Pol}(\rho)$ and $f\in \text{Pol}(\gamma')$; since $\left(u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}=\varsigma$, it follows that there exist $ a_{ij}\in [u_{i}]_{\theta}$, $i,j\in\{1;\cdots; h\}$ with $i\neq j$, such that for all $j\in\{1;\cdots; h\},$ $$ \left(u_j, a_{j_{1}j}, a_{j_{2}j}, \ldots, a_{j_{h-1}j} \right)\in \rho,$$ with $\{j_{1}, \ldots ,j_{h-1}\}=\{1;\cdots; h\}\setminus\{j\}$.
It follows that $$\bigcup\limits_{i\in\{1;\cdots;h\}} \{u_i\}^{m}\times\{u_{l}; l\in \{1;\cdots;h\}\setminus\{i\}\}^{t-1}\subseteq \gamma'.$$ From the fact that $\gamma'\neq D_{\epsilon_{=}\epsilon'_{\theta}}$, we deduce that $ D_{\epsilon_{=}\epsilon'_{\theta}}\setminus \gamma'\neq \emptyset $.\\ Given $\left(v_{1}, \ldots, v_{m+t-1}\right)\in D_{\epsilon_{=}\epsilon'_{\theta}}\setminus \gamma'$ and $(a,b)\notin \theta$, consider the following $(m+t-1)$-ary operation defined on $E_k$ by: $$ h(x_1, \ldots ,x_{m+t-1})= \left\{\begin{array}{ll} v_1& \text{ if } (x_1, \ldots ,x_{m+t-1})\theta (a, \ldots ,a,a)=a_1 \\ v_{m+1}& \text{ if } (x_1, \ldots ,x_{m+t-1})\theta (a,b,a, \ldots ,a)=a_2 \\ v_{m+2} &\text{ if } (x_1, \ldots ,x_{m+t-1})\theta (a,a,b,a, \ldots ,a)=a_3 \\ &\vdots\\ v_{m+t-1} &\text{ if } (x_1, \ldots ,x_{m+t-1})\theta (a,a,a, \ldots ,a,b)=a_t\\ v_{m+t-1} &\text{ otherwise. } \end{array} \right. $$ Since $a_i\theta a_j\Rightarrow i=j$, $h\in \text{Pol}(\theta).$ From the fact that $\{a,b\}^{h}\subseteq\rho$ we have $h\notin \text{Pol}(\gamma').$ Indeed, \\ $$\left(\begin{array}{c} a \\\vdots\\a \\\vdots\\a\\a\\a \\a\\a \\ \end{array}\right), \left(\begin{array}{c} a \\\vdots\\a \\b\\a \\\vdots\\a\\a\\a \\ \end{array}\right), \left(\begin{array}{c} a \\\vdots\\a \\a\\b\\a\\\vdots \\a\\a \\ \end{array}\right), \ldots ,\left(\begin{array}{c}a \\\vdots\\ a \\a\\a\\\vdots\\a\\a \\b \\ \end{array}\right) \in \gamma' $$ but $$\left(v_1,\ldots,v_1, v_{m+1},\ldots, v_{m+t-1}\right)=\left(h(a_1), \ldots, h(a_1), h(a_2), \ldots, h(a_{t})\right)\notin \gamma'.$$ \end{proof} In our next step, we assume that \begin{equation}\label{transversale-classe} \gamma'= D_{\epsilon_{=}\epsilon'_{\theta}},\end{equation} i.e., for all $i\in\{0;\cdots; t-1\},$ there exist $u_{ji}\in C_j$, $j\in\{0;\cdots; t-1\}\setminus \{i\}$, such that for all $a\in C_{i},$ $\{a;u_{0i}; \ldots ;u_{i-1\, i};u_{i+1\, i}; \ldots ;u_{t-1\, i}\}^{h}\subseteq\rho.$ Let $\zeta$ be the relation defined by: $$\zeta=\left\{\mathbf{a}\in E_{k}^{h}\ |\ \exists u_i\in[a_i]_{\theta}\right. \left.\text{ such that } \left(a_i, u_{i_1}, \ldots, u_{i_{h-1}}\right)\in \rho, i\in\{1;\cdots;h\} \right. $$ $$\left.\text{ and } \{i_1; \ldots ;i_{h-1}\}=\{1;\cdots;h\}\setminus\{i\}\right\}.$$ Our goal is to show that if $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$, then $\zeta=E_{k}^{h}$. Before that, let us prove the following result. \begin{lemma}\label{lem19} If $\rho=\zeta$, then $\rho=E_{k}^{h}$. \end{lemma} \begin{proof} We suppose that $\rho=\zeta$. Let $a=(a_1,a_2, \ldots ,a_h)\in E_{k}^{h}.$ If there exists $\{i;j\}\subseteq\{1;\cdots;h\}$ with $i\neq j$ such that $(a_i,a_j)\in\theta$, then $a\in \rho$ since $\eta\subseteq \rho.$ Else, according to (\ref{transversale-classe}), for all $i\in\{0,\ldots,t-1\}$, there exist $u_{ji}\in C_j$, $j\in\{0;\cdots; t-1\}\setminus \{i\}$, such that for all $b\in C_{i}$, $$\{b;u_{0i}; \ldots ;u_{i-1i};u_{i+1i}; \ldots ;u_{t-1 i}\}^{h}\subseteq\rho $$ where $C_j=[a_j]_{\theta}, j\in\{1;\cdots;h\}$. Hence $$(C_i\cup\{u_{0i},\ldots,u_{t-1 i}\}\setminus\{u_{ii}\})^{h}\subseteq\rho, \quad 0\leq i\leq t-1.$$ Therefore $$\left(a_1, a_2, \ldots, a_h\right)\in \zeta=\rho.$$ It follows that $\rho= E_{k}^{h}$. \end{proof} From Lemma \ref{lem19} and the fact that $\rho\neq E_{k}^{h}$, we obtain $\rho\varsubsetneq \zeta\subseteq E_{k}^{h}$. Before proving the next lemma, we illustrate below the preservation condition $f\in\text{Pol}(\rho)$ that these proofs repeatedly verify by hand.
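\par The following minimal sketch (Python; the domain, relation and operation are toy choices, not objects from this paper) makes the preservation test explicit as a brute-force check over all choices of argument tuples.
\begin{verbatim}
# Minimal sketch: brute-force test of the preservation condition f in Pol(rho)
# on a small domain E_k = {0,...,k-1}. The relation and operation below are
# illustrative choices only.
from itertools import product

def preserves(f, arity, rho):
    """True iff the arity-ary operation f preserves the relation rho."""
    h = len(next(iter(rho)))
    for cols in product(rho, repeat=arity):
        # cols[j] is the j-th argument tuple; apply f row by row.
        image = tuple(f(*(cols[j][i] for j in range(arity))) for i in range(h))
        if image not in rho:
            return False
    return True

# Toy example on E_3: theta has classes {0,1} and {2}, viewed as a binary relation.
theta = {(a, b) for a in range(3) for b in range(3) if a == b or {a, b} <= {0, 1}}
print(preserves(max, 2, theta))   # max preserves these classes: prints True
\end{verbatim}
Encoded this way, inclusions of the form $\text{Pol}(\rho)\cap\text{Pol}(\theta)\subseteq\text{Pol}(\varsigma)$ established above can be spot-checked on small carriers $E_k$.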
\begin{lemma}\label{lem-rec-O-gamma4} If $\zeta\neq E_{k}^{h}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof}We have to prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\zeta)\varsubsetneq \text{Pol}(\theta).$ Let $f\in \mathcal{O}^{n}(E_k)$ be an $n$-ary operation on $E_k$ such that $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$. Let $$\left(a_{11}, a_{21}, \ldots, a_{h1}\right), \ldots ,\left(a_{1n}, a_{2n}, \ldots, a_{hn} \right)\in \zeta.$$ From the definition of $\zeta$, for all $i\in\{1;\cdots;h\}$ and all $j\in\{1, \ldots ,n\}$, there exist $u_{ij}\in [a_{ij}]_{\theta}$ such that $$\left(a_{1j}, u_{2j}, \ldots, u_{hj}\right),\left(u_{1j}, a_{2j}, \ldots, u_{hj}\right), \ldots ,\left(u_{1j}, u_{2j}, \ldots, u_{(h-1)j}, a_{hj} \right)\in\rho;$$ and as $f\in \text{Pol}(\rho)$ we have $$\left(f(a_{11}, \ldots ,a_{1n}), f(u_{21}, \ldots ,u_{2n}), \ldots, f(u_{h1}, \ldots ,u_{hn}) \right)\in \rho,$$ $$\left(f(u_{11}, \ldots ,u_{1n}), f(a_{21}, \ldots ,a_{2n}), f(u_{31}, \ldots ,u_{3n}), \ldots, f(u_{h1}, \ldots ,u_{hn})\right)\in \rho, \ldots,$$ $$\left(f(u_{11}, \ldots ,u_{1n}), \ldots, f(u_{(h-1)1}, \ldots ,u_{(h-1)n}), f(a_{h1}, \ldots ,a_{hn}) \right)\in \rho.$$ Moreover, as $f\in \text{Pol}(\theta)$, we have $f(u_{i1}, \ldots ,u_{in})\theta f(a_{i1}, \ldots ,a_{in})$; therefore, using the definition of $\zeta$, $$\left(f(a_{11}, \ldots ,a_{1n}), f(a_{21}, \ldots ,a_{2n}), \ldots, f(a_{h1}, \ldots ,a_{hn})\right)\in \zeta.$$ Then $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\theta)\cap \text{Pol}(\zeta)$. \\ As $\rho\varsubsetneq \zeta$, the set $\zeta\setminus \rho$ is not empty. Given $\left( u_1, u_2, \ldots, u_h\right)\in \zeta\setminus \rho$ and $(a,b)\notin \theta$, using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}) and the same argument used in the proof of Lemma \ref{lem-rec-I}, we obtain $f\in \text{Pol}(\zeta)$ and $f\notin \text{Pol}(\rho)$. Therefore, $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\zeta).$ It is obvious that $\text{Pol}(\theta)\cap \text{Pol}(\zeta)\varsubsetneq \text{Pol}(\theta)$ (because $\zeta\neq E_{k}^{h}$): given $\left(u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}\setminus \zeta$ and $(a,b)\notin \theta$, the operation $f$ defined by (\ref{function-for-strict-inclusion-f}) for this tuple satisfies $f\in \text{Pol}(\theta)$ and $f\notin \text{Pol}(\zeta)$; thus $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\zeta)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$. \end{proof} \begin{lemma}\label{lem-rec-I-gamma4} If $\zeta=E_{k}^{h}$ and $\zeta^k\neq E_k^{k}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$, where $\zeta^{l}$ is the $l$-ary relation on $E_k$ specified by \begin{eqnarray*} \zeta^{l} &=& \left\{\left(a_1, \ldots, a_l\right)\in E_{k}^{l} / \exists u_{i}\in [a_{i}]_{\theta}/ \forall i\in \{1;\cdots;l\},\right. \\ && \hspace{1cm}\left.\{u_1; \ldots ;u_{i-1};a_i;u_{i+1}; \ldots ;u_l\}^{h}\subseteq\rho\right\}.
\end{eqnarray*} \end{lemma} \begin{proof} We will prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\zeta^k)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$.\\ It is easy to see that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\zeta^k)$.\\ Given $\left( u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}\setminus \rho$ and $(a,b)\notin \theta$, using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}), we have again $f\notin \text{Pol}(\rho)$ and $f\in \text{Pol}(\zeta^{k})$; indeed, since $\left( u_1, u_2, \ldots, u_h\right)\in E_{k}^{h}=\zeta$, there exist $ v_i \in [u_i]_{\theta}$, $i\in\{1;\cdots;h\}$, such that for all $ i\in\{1;\cdots;h\}$, $\{u_i;v_{i_1}; \ldots ;v_{i_{h-1}}\}^{h}\subseteq \rho $, with $ \{i_1,i_2, \ldots ,i_{h-1}\}=\{1;\cdots;h\}\setminus \{i\} $, and it follows that $\{u_1;u_2; \ldots ;u_h\}^{k}\subseteq \zeta^k$.\\ Moreover, since $\zeta^{k}\neq E_{k}^{k}$, we have $E_{k}^{k}\setminus \zeta^{k}\neq \emptyset$. Given $\left(w_{1}, \ldots, w_{k}\right)\in E_k^{k}\setminus \zeta^{k} $ and $(a,b)\notin \theta$, consider the $k$-ary operation $h$ defined on $E_k$ by: $$ h(x_1, \ldots ,x_k)= \left\{\begin{array}{ll} w_1 &\text{ if } (x_1, \ldots ,x_k)\theta (a, \ldots ,a,a)=a_1 \\ w_2 &\text{ if } (x_1, \ldots ,x_k)\theta (a,b,a, \ldots ,a)=a_2 \\ w_3 &\text{ if } (x_1, \ldots ,x_k)\theta (a,a,b,a, \ldots ,a)=a_3 \\ &\vdots\\ w_{k} &\text{ if } (x_1, \ldots ,x_k)\theta (a,a,a, \ldots ,a,b)=a_k\\ w_k &\text{ otherwise; } \end{array} \right. $$ the same argument used for the similar operation $h$ in the proof of Lemma \ref{lem-rec-II} yields $h\in \text{Pol}(\theta)$ and $h\notin \text{Pol}(\zeta^{k})$. Thus $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\zeta^k)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$. \end{proof} Now, we continue our induction process with the assumption $\varsigma^{l}= E_k^{l}$ for all $h\leq l\leq t$ and $\zeta^{k}= E_k^{k}$. Since $\left(0, \ldots, k-1 \right)\in E_{k}^{k}=\zeta^{k}$, there exist $ u_{i}\in [i]_{\theta}$ such that $$\{u_0;u_1; \ldots ;u_{i-1};i;u_{i+1}; \ldots ;u_{k-1}\}^{h}\subseteq\rho$$ for all $i\in E_{k}$. We set $$T_i:= \min \left(C_i\cap\{u_0;u_1; \ldots ;u_{k-1}\}\right)$$ for all $i\in \{0;\cdots;t-1\}$. Therefore $\{T_{0};\cdots; T_{t-1}\}$ is a transversal of order $1$. Before the main induction, we define the sequence $(^{h}\beta_{n})$ by: $ ^{h}\beta_0= \eta $, $^{h}\beta_1= \rho_{0,\theta} $, $ ^{h}\beta_2= \varsigma $, and for $l\geq 3$, $$^{h}\beta_{l}=\underset{\sigma\in\mathcal{S}_{h}}\bigcap (\rho_{l-1,\theta})_{\sigma}.$$ Let $n\in\{1;\cdots;h-1\}$ and assume that there exists a transversal $T$ of order $ n-1$ for the $\theta$-classes. Set $T=\{u_0;u_1;\cdots;u_{t-1}\}$; then for every $a_1,a_2,\ldots,a_{n-1}\in E_{k}$, $\{a_1;a_2;\cdots; a_{n-1};u_0;u_1;\cdots; u_{t-1}\}^{h}\subseteq \rho$ and $\rho\subseteq\,^{h}\beta_{n+1}$. \begin{lemma}\label{lem-rec-I-gamma5-egal} If $\rho=\, ^{h}\beta_{n+1}\varsubsetneq E_{k}^{h}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta).$ \end{lemma} \begin{proof} If $\rho= \, ^{h}\beta_{n+1}\varsubsetneq E_{k}^{h}$, then $\rho$ is weakly $\theta$-closed of order $n$. Hence $\rho$ is a relation of type III. Using Proposition \ref{sufficiency-direction}, $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$.
\end{proof} \begin{lemma}\label{lem-rec-I-gamma5} If $\rho\varsubsetneq\, ^{h}\beta_{n+1}\varsubsetneq E_{k}^{h}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta).$ \end{lemma} \begin{proof}We have to prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)\cap \text{Pol}(\, ^{h}\beta_{n+1})\varsubsetneq \text{Pol}(\theta)$.\\ It is easy to show that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\theta)\cap \text{Pol}(\, ^{h}\beta_{n+1})$.\\ Using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}), we show that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\, ^{h}\beta_{n+1})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$. \end{proof} \begin{lemma}\label{lem-rec-II-gamma5}\text{ } If $\rho\varsubsetneq\, ^{h}\beta_{n+1}= E_{k}^{h}$ and $\, ^{h}\beta_{n+1}^{l}\neq E_k^{l}$ for some $l>h$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$, where $ ^{h}\beta_{n+1}^{l}=\underset{\sigma\in\mathcal{S}_{l}}\bigcap(\rho_{n,\theta}^{l})_{\sigma}$ with \begin{eqnarray*} \rho_{n,\theta}^{l} &:=& \left\{\left(a_1, \ldots, a_l \right)\in E_{k}^{l} / \exists u_{i}\in [a_{i}]_{\theta}, n+1\leq i\leq l: \right. \\ && \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left.\{a_1;a_2;\cdots;a_n;u_{n+1};\ldots;u_{l}\}^{h}\subseteq \rho\right\}. \end{eqnarray*} \end{lemma} \begin{proof} Denote $m:=\min\{l\in \mathbb{N}\setminus\{0;1;2; \ldots ;h\}/^{h}\beta_{n+1}^{l}\neq E_k^{l}\}$. Before proving that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\, ^{h}\beta_{n+1}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$, we first prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\subseteq \text{Pol}(\, ^{h}\beta_{n+1}^{m})\cap \text{Pol}(\theta)$.\\ Let $f\in \text{Pol}(\theta)\cap \text{Pol}(\rho)$ be a $p$-ary operation on $E_k$. Let $$\left(a_{11}, \ldots, a_{m1}\right), \ldots ,\left(a_{1p}, \ldots, a_{mp} \right)\in \, ^{h}\beta_{n+1}^{m}.$$ By definition of $\, ^{h}\beta_{n+1}^{m}$, for all $ c\in \{1;\cdots;p\}$ and all $\sigma\in\mathcal{S}_{m}$, there exist $ u_{\sigma(r)c}\in [a_{\sigma(r)c}]_{\theta}$, $n+1 \leq r\leq m$, such that $$ \{ a_{\sigma(1)c};\ldots; a_{\sigma(n)c}; u_{\sigma(n+1)c} ;\ldots;u_{\sigma(m)c} \}^{h}\subseteq \rho,$$ and from the fact that $f\in \text{Pol}(\rho)$ it follows that for all $ \sigma\in\mathcal{S}_{m}$, \[\begin{array}{ccr} \{f(a_{\sigma(1)1}, \ldots ,a_{\sigma(1)p}) ;\ldots; f(a_{\sigma(n)1}, \ldots ,a_{\sigma(n)p}); & & \\ \hspace{2cm}f(u_{\sigma(n+1)1}, \ldots ,u_{\sigma(n+1)p});\ldots; f(u_{\sigma(m)1}, \ldots,u_{\sigma(m)p})\}^{h} &\subseteq & \rho. \end{array} \] Since $f\in \text{Pol}(\theta)$, it follows that for all $ i\in \{n+1;\cdots;m\}$ and all $ d\in \{n+1;\cdots;m\}$, $$\sigma(d)=i \Rightarrow f(u_{\sigma(d)1},u_{\sigma(d)2}, \ldots ,u_{\sigma(d)p})\theta f(a_{i1}, \ldots ,a_{ip}).$$ From the definition of $\, ^{h}\beta_{n+1}^{m}$, it follows that $$\left(f(a_{11}, \ldots ,a_{1p}), \ldots, f(a_{m1}, \ldots ,a_{mp})\right)\in \, ^{h}\beta_{n+1}^{m}.$$ Then $f\in \text{Pol}(\, ^{h}\beta_{n+1}^{m})\cap \text{Pol}(\theta)$.\\ Fix $\left(v_{1}, \ldots, v_{h}\right)\in E_{k}^{h}\setminus \rho$ and let $(a,b)\notin \theta$.
Using the $h$-ary operation $f$ defined by (\ref{function-for-strict-inclusion-f}) (where we replace $u_{i}$ by $v_{i}$, $1\leq i \leq h$), we have $f\notin \text{Pol}(\rho)$ and $f\in \text{Pol}(\, ^{h}\beta_{n+1}^{m})$. Indeed, due to the fact that $$\left(v_{1}, \ldots, v_{h}\right)\in E_{k}^{h}=\, ^{h}\beta_{n+1},$$ for all $\sigma\in\mathcal{S}_{h}$ there exist $d_{r}\in [v_{r}]_{\theta}$, $n+1\leq r\leq h$, such that $$ \{ v_{\sigma(1)};\ldots; v_{\sigma(n)}; d_{\sigma(n+1)};\ldots;d_{\sigma(h)} \}^{h}\subseteq \rho.$$ Therefore, it follows that $\{v_1;v_2; \ldots ;v_h\}^{m}\subseteq \, ^{h}\beta_{n+1}^m$. \\ Moreover, since $\, ^{h}\beta_{n+1}^{m}\neq E_{k}^{m}$, the set $ E_k^{m}\setminus \, ^{h}\beta_{n+1}^{m}$ is not empty. Given $\left(v_{1}, \ldots, v_{m}\right)\in E_k^{m}\setminus \, ^{h}\beta_{n+1}^{m}$ and $(a,b)\notin \theta$, using the $m$-ary operation $h$ defined by (\ref{function-for-strict-inclusion-h-in-theta}) (where we replace $u_{i}$ by $v_{i}$, $1\leq i \leq m$) and the same argument used in the proof of Lemma \ref{lem-rec-II}, it follows that $h\in \text{Pol}(\theta)$ and $h\notin \text{Pol}(\, ^{h}\beta_{n+1}^{m})$. Thus $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\, ^{h}\beta_{n+1}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$. \end{proof} It is now natural to assume that for each $l\in\{h;\cdots; t\}$, $^{h}\beta_{n+1}^{l}= E_k^l$. Let $m_1,\ldots,m_n$ be integers such that $m_{1}>m_{2}>\dots>m_{n}$ and, for each $i\in\{0;\cdots; t-1\}$, $|C_i|\leq \min\{m_1,\ldots,m_n\}$ or $|C_i|\in \{m_1,\ldots,m_n\}$. We set $m=m_1+\cdots+m_n$. In this part we will use the following notation: $$\mathbf{a}:=\left(a_1, \ldots, a_{m_1},a_{m_{1}+1}, \ldots, a_{m_1+m_2},\ldots, a_{m_1+\cdots+m_n},a_{m+1}, \ldots, a_{m+t-n} \right),$$ $$\Pi=\{1;\cdots;m_1\}\times\{m_1+1;\cdots;m_1+m_2\}\times\cdots\times\{m_1+\cdots+m_{n-1}+1;\cdots;m\},$$ and $$\Pi'=\{1;\cdots;m_1\}^{2}\cup\{m_1+1;\cdots;m_1+m_2\}^{2}\cup\cdots\cup\{m_1+\cdots+m_{n-1}+1;\cdots;m\}^{2}.$$ Let $\, ^{h}\beta_{n+1}^{'}$ be the relation \begin{eqnarray*} \, ^{h}\beta_{n+1}^{'}&=&\left\{\mathbf{a}\in E_{k}^{m+t-n} / \forall (i,j)\in \Pi', a_i\theta a_j;\right. \\ &&\exists u_{i}\in [a_{i}]_{\theta}, m+1\leq i \leq m+t-n,\text{ such that } \\ && \left. \forall(i_1,\ldots,i_n)\in\Pi, \{ a_{i_1}; \ldots; a_{i_{n}};u_{m+1}; \ldots ;u_{m+t-n}\}^{h}\subseteq \rho\right\}. \end{eqnarray*} We have $\text{Pol}(\, ^{h}\beta_{n+1}^{'})\subseteq \text{Pol}(\theta)$. We define two relations $\varrho_{1}$ and $\varrho_{2}$ on $\{1;\cdots;m+t-n\}$ by: $$(i,j)\in\varrho_{1}\Leftrightarrow i=j \text{ and } (i,j)\in \varrho_{2}\Leftrightarrow (i=j \text{ or } (i,j)\in \Pi').$$ $\varrho_{1}$ and $\varrho_{2}$ are equivalence relations and $\, ^{h}\beta_{n+1}^{'} \subseteq D_{\varrho_{1}\varrho_{2}}$. \begin{lemma}\label{lem-rec-II-beta-n'}\text{ } If for all $l\in\{h,h+1,\ldots,t\}$ we have $\, ^{h}\beta_{n+1}^{l}= E_{k}^{l}$, and $\, ^{h}\beta_{n+1}^{'}\neq D_{\varrho_{1}\varrho_{2}}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is not maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} It is similar to the proof of Lemma \ref{lem-rec-II-gamma3'}.
\end{proof} In what follows, we assume that \begin{equation}\label{transversale-classe-ind} \, ^{h}\beta_{n+1}^{'}= D_{\varrho_{1}\varrho_{2}},\end{equation} i.e., for all $i_1,\ldots,i_n\in\{0;\cdots; t-1\},$ there exist $u_{jI}\in C_j$, $j\in\{0;\cdots; t-1\}\setminus \{i_1,\ldots,i_n\}$, with $I=\{i_1,\ldots,i_n\}$, such that for all $a_{i_{r}}\in C_{i_{r}}$, $r\in\{1;\cdots;n\}$, we have $$\{a_{i_1};\cdots;a_{i_{n}};u_{1I}; \ldots ;u_{t-n \,I}\}^{h}\subseteq\rho.$$\\ We obtain a transversal of order $n$ for the $\theta$-classes. Hence we can continue the previous induction until $$\, ^{h}\beta_{h}=\underset{\sigma\in\mathcal{S}_{h}}\bigcap(\rho_{h-1,\theta})_{\sigma}= E_{k}^{h}.$$ From now on, we suppose that $\, ^{h}\beta_{h}= E_{k}^{h}$. \begin{lemma}\label{lemma-reciproque-gamma5} If $\, ^{h}\beta_{h}= E_{k}^{h}$ and there exists $l>h $ such that $\, ^{h}\beta_{h}^{l}\neq E_{k}^{l}$, then $\text{Pol}(\rho)\cap \text{Pol}(\theta)$ is not maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} Set $m=\min\{l\in\{1;\cdots;k\}\setminus\{1;\cdots;h\}/ \, ^{h}\beta_{h}^{l}\neq E_{k}^{l}\}$. It is easy to prove that $\text{Pol}(\rho)\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\, ^{h}\beta_{h}^{m})\cap \text{Pol}(\theta)\varsubsetneq \text{Pol}(\theta)$. \end{proof} \begin{lemma}\label{lemma-classe-central}\text{ } If $\, ^{h}\beta_{h}^{k}= E_k^{k}$, then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} If $\, ^{h}\beta_{h}^{k}= E_k^{k}$, then each equivalence class of $\theta$ has a central element of $\rho$. Since $\eta\subseteq\rho$, $\rho$ is of type I and $\text{Pol}(\rho)\cap\text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$ by Proposition \ref{sufficiency-direction}. \end{proof} We are now ready to give the proofs of Proposition \ref{completeness-criterion} and Theorem \ref{maintheorem}. \begin{proof}[Proof of Proposition \ref{completeness-criterion}] Combining Lemmas \ref{major-criterion}, \ref{lem-rec-I}, \ref{lem-rec-II}, \ref{lem-rec-I-gamma3}, \ref{lem-rec-II-gamma3}, \ref{lem-rec-II-gamma3'}, \ref{lem-rec-O-gamma4}, \ref{lem-rec-I-gamma4}, \ref{lem-rec-I-gamma5}, \ref{lem-rec-II-gamma5}, \ref{lemma-reciproque-gamma5}, \ref{lemma-classe-central} and Remark \ref{arity-criterion}, we obtain the result. \end{proof} \begin{proof}[Proof of Theorem \ref{maintheorem}] It follows from Propositions \ref{sufficiency-direction} and \ref{completeness-criterion}. \end{proof} Fourthly, we look at $h$-regular relations. \subsection{$h$-regular relations}\label{sec7} As an $h$-regular relation is totally reflexive and totally symmetric, some of the results stated in the previous subsection can be applied to $h$-regular relations. Moreover, an $h$-regularly generated relation does not contain a central element. We will prove that there is submaximality if and only if $\rho$ is $\theta$-closed (or of type II). We begin this subsection with some examples of $h$-regular relations.
\begin{example} Let $k\geq 3$ be an integer and $0\leq i<j<r<n\leq k-1$; we denote by $A_{i,j,r}$ and $A_{i,j,r,n}$ the sets $$A_{i,j,r}:=\{(\sigma(i),\sigma(j),\sigma(r)); \sigma\in\mathcal{S}_{\{i;j;r\}}\} $$ and $$A_{i,j,r,n}:=\{(\sigma(i),\sigma(j),\sigma(r),\sigma(n)); \sigma\in\mathcal{S}_{\{i;j;r;n\}}\}.$$ We consider the following equivalence relations $\theta_{6}$, $\theta_{7}$, $\theta_{8}$ on $E_{12}$, defined respectively by their equivalence classes denoted by $C_{m}^{i}$, $6\leq i\leq 8$, as follows: $$ C_{0}^{6}=\{0;1;2;3;4\}, C_{1}^{6}=\{5;6;7\}, C_{2}^{6}=\{8;9;10;11\};$$ $$ C_{0}^{7}=\{0;1;5;8\}, C_{1}^{7}=\{2;6;9;11\}, C_{2}^{7}=\{3;4;7;10\};$$ $$ C_{0}^{8}=\{0;1\}, C_{1}^{8}=\{2\}, C_{2}^{8}=\{3;4\}, C_{3}^{8}=\{5\}, C_{4}^{8}=\{6\}, C_{5}^{8}=\{7\},$$ $$C_{6}^{8}=\{8\}, C_{7}^{8}=\{9\}, C_{8}^{8}=\{10\}, C_{9}^{8}=\{11\};$$ and the relations $$\Upsilon_{6}=\{(x_1,x_2,x_3)\in E_{12}^{3}: (x_1,x_2)\in\theta_{6},\textit{ or } (x_2,x_3)\in\theta_{6},\textit{ or } (x_1,x_3)\in\theta_{6}\},$$ $$\Upsilon_{7}=\underset{\sigma\in\mathcal{S}_{3}}\cup(\Upsilon)_{\sigma},$$ where $$\Upsilon=\{(x_1,x_2,x_3)\in E_{12}^{3}: (x_1,x_2)\in\theta_{6}, (x_2,x_3)\in\theta_{7}\}.$$ It is easy to see that $\Upsilon_{6}$ is a $\theta_{6}$-closed $3$-regular relation associated to $T=\{\theta_{6}\}$, and $\Upsilon_{7}$ is a $\theta_{8}$-closed $3$-regular relation associated to $T=\{\theta_{6};\theta_{7}\}$. \end{example} We continue with the characterization of $\theta$-closed $h$-regular relations. \begin{lemma}\label{caracterisation-relation-h-reguliere-theta-fermee} For $h\geq3$, let $\rho$ be an $h$-regular relation on $E_{k}$ determined by the $h$-regular family $T = \{\theta_{1};\cdots;\theta_{m}\}$ and $\theta$ a nontrivial equivalence relation on $E_{k}$. Then $\rho$ is $\theta$-closed iff $\theta\subseteq\theta_{i}$ for all $1\leq i\leq m$. \end{lemma} \begin{proof} $\Rightarrow)$ Firstly, we show that $\eta=\{(a_{1},\dots,a_{h})\in E_{k}^{h}:\ \ (a_{1},a_{2})\in\theta\}\subseteq\rho$. Let $(a_{1},\dots,a_{h})\in\eta$; since $\rho$ is totally reflexive, we have $(a_{1},a_{1},a_{3},\dots,a_{h})\in\rho$ and $(a_{1},\dots,a_{h})\theta(a_{1},a_{1},a_{3},\dots,a_{h})$. Hence $(a_{1},\dots,a_{h})\in\rho_{0,\theta}=\rho$. Our next step is to show that $\theta\subseteq\theta_{i}$ for all $1\leq i\leq m$. Let $i\in\{1,\dots,m\}$ and $(a,b)\in\theta$; set $A_{i}=E_{k}/\theta_{i}\setminus\{[a]_{\theta_{i}},[b]_{\theta_{i}}\}$. It is easy to see that $|A_{i}|\geq h-2$; choose $(a_{1},\dots,a_{h-2})\in E_{k}^{h-2}$ such that $[a_{p}]_{\theta_{i}}\in A_{i}$ for all $1\leq p\leq h-2$ and $(a_{p},a_{q})\notin\theta_{i}$ for all $1\leq p<q\leq h-2$. Due to $\eta\subseteq\rho$, we have $(a,b,a_{1},\dots,a_{h-2})\in\rho$; therefore $(a,b)\in\theta_{i}$ and $\theta\subseteq\theta_{i}$. $\Leftarrow)$ It follows from the fact that $\theta\circ\theta_{i}\circ\theta\subseteq\theta_{i}$ for all $1\leq i\leq m$. \end{proof} \begin{lemma}\label{caracterisation-relation-h-reguliere-theta-fermee-2} For $h\geq3$, let $\rho$ be an $h$-ary relation and $\theta$ a nontrivial equivalence relation on $E_{k}$. Then $\rho$ is a $\theta$-closed $h$-regular relation iff there exists an $h$-regular relation $\psi$ on $E_{t}$ such that $\rho=\varphi^{-1}(\psi)$. \end{lemma} \begin{proof} Firstly, let us assume that there exists an $h$-regular relation $\psi$ on $E_{t}$, and let $\bot=\{\nu_{1};\cdots;\nu_{n}\}$ be the $h$-regular family associated to $\psi$.
Clearly $\varphi^{-1}(\bot)=\{\varphi^{-1}(\nu_{1});\cdots;\varphi^{-1}(\nu_{n})\}$ is an $h$-regular family and $\rho=\varphi^{-1}(\psi)$ is exactly the $h$-regular relation associated to $\varphi^{-1}(\bot)$. Moreover, for all $1\leq i\leq n$ we have $\theta\subseteq\varphi^{-1}(\nu_{i})$. Therefore, by Lemma \ref{caracterisation-relation-h-reguliere-theta-fermee}, $\rho$ is $\theta$-closed.\\ Conversely, assume that $\rho$ is an $h$-regular relation determined by the $h$-regular family $T = \{\theta_{1};\cdots;\theta_{r}\}$ and that $\rho$ is $\theta$-closed. Clearly, for all $1\leq i\leq r$, $\varphi(\theta_{i})$ is an equivalence relation with exactly $h$ equivalence classes, which are the images of the equivalence classes of $\theta_{i}$ by $\varphi$. It follows that $\cap\{\varphi(B_i)\ |\ 1\leq i\leq r\}$ is non-empty for arbitrary equivalence classes $\varphi(B_i)$ of $\varphi(\theta_{i})$, $1\leq i\leq r$. Therefore $\varphi(T)= \{\varphi(\theta_{1});\cdots;\varphi(\theta_{r})\}$ is an $h$-regular family and $\varphi(\rho)$ is an $h$-regular relation on $E_{t}$, associated to $\varphi(T)$. Moreover $\varphi^{-1}(\varphi(\rho))=\rho$ because $\rho$ is $\theta$-closed. Therefore we have the result. \end{proof} We end this subsection with the characterization of the $h$-regular relations $\rho$ such that $\text{Pol}(\theta)\cap\text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$. We need the next lemma to prove our main result. \begin{lemma}\cite{LAU}\label{caracterisation-relation-h-reguliere} Let $\chi\subseteq E_{t}^{h}$ be such that $\text{Pol}(\chi)$ is maximal in $\mathcal{L}_{t}$. Then $\text{Pol}(\theta)\cap\text{Pol}(\varphi^{-1}(\chi))$ is submaximal in $\text{Pol}(\theta)$. \end{lemma} \begin{proof} See the proof of Lemma 18.2.5, page 565, in \cite{LAU}. \end{proof} \begin{lemma}\label{major-criterion-regular} If $\text{Pol}(\rho)\cap \text{Pol}(\theta)$ is maximal in $\text{Pol}(\theta)$, then $\eta\subseteq \rho$. \end{lemma} \begin{proof} See the proof of Lemma \ref{major-criterion}. \end{proof} Now we give the main result of this subsection. \begin{proposition}\label{reg-max-in-equiv-completeness} Let $\theta$ be a nontrivial equivalence relation and $\rho$ be an $h$-regular relation on $E_{k}$ determined by the $h$-regular family $T = \{\theta_{1};\cdots;\theta_{m}\}$. Then $\text{Pol}(\theta)\cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$ if and only if $\rho$ is $\theta$-closed. \end{proposition} \begin{proof} $\Rightarrow)$ It follows from Lemma \ref{major-criterion-regular} that $\eta\subseteq\rho$; and in the light of the proof of Lemma \ref{caracterisation-relation-h-reguliere-theta-fermee}, we conclude that $\theta\subseteq\theta_{i}$ for all $1\leq i\leq m$. It follows from Lemma \ref{caracterisation-relation-h-reguliere-theta-fermee} that $\rho$ is $\theta$-closed. $\Leftarrow)$ Combining Lemmas \ref{caracterisation-relation-h-reguliere-theta-fermee}, \ref{caracterisation-relation-h-reguliere-theta-fermee-2} and \ref{caracterisation-relation-h-reguliere}, we have the result. \end{proof} \section{Conclusion}\label{sec8} In this paper, we characterized the relations $\rho$ from Rosenberg's list for which $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$, where $\theta$ is a nontrivial equivalence relation.
The classification of all central relations $\rho$ on $E_k$ such that the clone $\text{Pol}(\theta) \cap \text{Pol}(\rho)$ is maximal in $\text{Pol}(\theta)$ improves Temgoua and Rosenberg's results \cite{TEM-ROS}. We plan in a future project to characterize the meet-irreducible submaximal clones of $\text{Pol}(\theta)$ for a nontrivial equivalence relation $\theta$. \end{document}
\begin{document} \title{Baker-Gross theorem revisited} \maketitle \begin{abstract} F. Gross conjectured that the meromorphic solutions of the Fermat cubic $F_3\colon\ x^3+y^3=1$ are elliptic functions composed with entire functions. The conjecture was solved affirmatively first by I. N. Baker, who found explicit formulas for those elliptic functions, and later F. Gross gave another proof, showing that in fact one of them uniformizes the Fermat cubic. In this paper we give an alternative proof of the Baker and Gross theorems. With our method we obtain other analogous formulas. Some remarks on Fermat curves of higher degree are given. \end{abstract} \section*{Introduction} Consider the Fermat cubic \begin{equation}\label{Fermat-cubic} F_3\colon \ x^3+y^3=1. \end{equation} This algebraic curve defines an elliptic curve, i.e., a compact Riemann surface of genus 1 (taking the zeros in $\mathbb{CP}^2$ of its homogenization). A meromorphic solution of this equation is, by definition, a pair of meromorphic functions $f$ and $g$ in the plane such that $f^3+g^3=1$. In his paper \cite{Gross}, F. Gross conjectures that any meromorphic solution of the Fermat cubic is obtained by composing elliptic functions with entire functions. The conjecture was solved affirmatively by I. N. Baker in \cite{Baker}. He proved that any solution is the composition of the following elliptic functions with an entire function: \begin{equation}\label{elliptic-solutions} f(z)=\frac{1}{2\wp(z)}\left(1-3^{-1/2}\wp'(z)\right), \quad g(z)=\frac{1}{2\wp(z)}\left(1+3^{-1/2}\wp'(z)\right), \end{equation} where $\wp$ is the Weierstrass elliptic function satisfying $(\wp')^2=4\wp^3-1$. In what follows we denote by $\Lambda'$ the lattice in $\mathbb C$ that defines this $\wp$. In particular these functions are solutions of the Fermat cubic, but these formulas differ from the analogous ones that appear in \cite{Gross}, \cite{Gross-erratum}, which seem to contain an error. Later, F. Gross gave another proof in \cite{Gross-II}, proving in fact that the function $f$ in (\ref{elliptic-solutions}) gives a uniformization of the Fermat cubic (\ref{Fermat-cubic}). In our context we formulate the previous results in the following theorem:\\ \par \noindent \textbf{Theorem (Baker-Gross).} \textit{Let $\Lambda'$ and $\wp$ be as above. Then the map $\mathbb{C}/\Lambda'\to F_3$ given in affine coordinates by \begin{equation}\label{map-uniform} z\mapsto \left(\frac{1}{2\wp(z)}\left(1-3^{-1/2}\wp'(z)\right), \frac{1}{2\wp(z)}\left(1+3^{-1/2}\wp'(z)\right)\right) \end{equation} is a biholomorphism between the two elliptic curves. Consequently, by the lifting property of coverings, any pair of functions $F$ and $G$ which are meromorphic in the plane and satisfy (\ref{Fermat-cubic}) has the form: \begin{equation}\label{general-form} F=\frac{1}{2\wp(\alpha)}\left(1-3^{-1/2}\wp'(\alpha)\right), \quad G=\frac{1}{2\wp(\alpha)}\left(1+3^{-1/2}\wp'(\alpha)\right), \end{equation} where $\alpha$ is an entire function.}\\ \par In this paper we give a proof of this theorem by using Riemann surface theory and an explicit map from a Weierstrass normal form to the Fermat cubic. Our proof could clarify the nature of the previous formulas, which are not obvious. Also, by this method, other formulas analogous to (\ref{map-uniform}) and (\ref{general-form}) are obtained (see (\ref{map-uniform2}) and (\ref{general-form2})). \par In Section 1 we recall some basic facts about elliptic curves and compute a Weierstrass normal form of the Fermat cubic, as well as the corresponding isomorphism.
In the next section we prove the main theorem. Finally, in the last section we give some remarks on Fermat curves of higher degree. \par Recently, N. Steinmetz communicated to the author another proof of the Gross conjecture in \cite{Steinmetz} (\S 2.3.5 pp. 56-57) by using Nevanlinna theory. He proved, without reference to the Uniformization Theorem, the following:\\ \par \noindent \textbf{Theorem (Steinmetz).} \textit{Suppose that non-constant meromorphic functions $f$ and $g$ parametrize the algebraic curve \begin{equation*} F\colon\ x^n+y^m=1\quad (n\geq m\geq 2) \end{equation*} with $\frac{1}{m}+\frac{1}{n}<1$. Then $(m,n)$ equals $(4,2)$ or $(3,3)$ or $(3,2)$. In any case $f$ and $g$ are given by \begin{equation*} f=E\circ \psi\quad \text{and}\quad g=\sqrt[m-1]{E'}\circ \psi, \end{equation*} where $E$ is an elliptic function satisfying \begin{equation*} E'^2=1-E^4,\quad E'^3=(1-E^3)^2\quad \text{and} \quad E'^2=1-E^3, \end{equation*} respectively, and $\psi$ is any non-constant entire function.}\\ The present paper contains part of the Undergraduate Thesis of the author written under the supervision of Dr. Alberto Verjovsky at the Cuernavaca Branch of the Institute of Mathematics of the National Autonomous University of Mexico (UNAM). \section{The normal form of the Fermat cubic} \subsection{Basic facts on elliptic curves} A complex elliptic curve $X$ is by definition a compact Riemann surface of genus 1. The Plücker formula tells us that a non-singular projective curve of degree 3 in $\mathbb{CP}^2$ is a Riemann surface of genus 1, i.e., an elliptic curve. The converse is also true, and we will briefly discuss it. For this, we recall the uniformization theorem and the Weierstrass normal form. \par The \emph{Uniformization Theorem} says that every simply connected Riemann surface is conformally equivalent to one of the three Riemann surfaces: the Riemann sphere $\overline{\mathbb{C}}$, the complex plane $\mathbb{C}$, or the open unit disk $\Delta$. This theorem combined with the theory of covering spaces gives us a classification of Riemann surfaces: every Riemann surface $X$ is conformally equivalent to a quotient $\tilde{X}/G$, where $\tilde{X}$ is the universal holomorphic cover of $X$ (hence isomorphic to one of the three previous Riemann surfaces) and $G$ is a subgroup of holomorphic automorphisms of $\tilde{X}$ which acts on $\tilde{X}$ freely and properly discontinuously. In particular, when the Riemann surface is of genus 1, it has the complex plane as its universal holomorphic cover, so $X$ is conformally equivalent to $\mathbb{C}/\Lambda$, for some lattice $\Lambda\subset \mathbb{C}$. For an introduction to Riemann surfaces and a proof of the uniformization theorem see \cite{Forster}. \par The homogeneous polynomial with complex coefficients \begin{equation}\label{homog-cubic} Y^2Z-4X^3+g_2XZ^2+g_3Z^3, \end{equation} obtained by homogenization of the polynomial \begin{equation}\label{no-hom-cubic} y^2=4x^3-g_2x-g_3, \end{equation} defines a non-singular curve if and only if the discriminant $\Delta=g_2^3-27g_3^2$ does not vanish. Hence, (\ref{homog-cubic}) defines an elliptic curve if and only if $\Delta\neq 0$. We call a \emph{Weierstrass normal form} of an elliptic curve $X$ an elliptic curve given by an equation of the form (\ref{homog-cubic}) which is isomorphic as a Riemann surface to $X$.
\par Recall also that given a lattice $\Lambda \subset \mathbb{C}$ we can associate the Weierstrass elliptic function $\wp$ or $\wp_\Lambda$ given by the series: \begin{equation} \wp(z)=\frac{1}{z^2}+\sum_{\omega\in \Lambda^*}\left(\frac{1}{(z+\omega)^2}-\frac{1}{\omega^2}\right). \end{equation} This function satisfies the differential equation \begin{equation} (\wp')^2=4\wp^3-g_2\wp-g_3, \end{equation} where $g_2$ and $g_3$ are constants depending on $\Lambda$ given by: \begin{equation*} g_2= 60 \sum_{\omega\in \Lambda^*}\frac{1}{\omega^4}, \quad g_3=140 \sum_{\omega\in \Lambda^*}\frac{1}{\omega^6}, \end{equation*} satisfying $\Delta=g_2^3-27g_3^2\neq 0$. Thus this function gives us a map $\Psi\colon \mathbb{C}/\Lambda\to E$, in affine coordinates given by: \begin{equation}\label{Psi} \Psi(z)=(\wp(z),\wp'(z)), \end{equation} from $\mathbb{C}/\Lambda$ to the elliptic curve $E\colon y^2=4x^3-g_2x-g_3$. This map is a biholomorphism which sends $\Lambda$ to the point at infinity $[0:1:0]$. \par From the previous results and the Uniformization Theorem we can conclude that every elliptic curve has a Weierstrass normal form. \par Also, it is true that given a non-singular equation (\ref{no-hom-cubic}), there exists a lattice $\Lambda$ with the same constants $g_2$ and $g_3$. For more information, refer to \cite[p. 176]{Silverman}. \subsection{Computing the Weierstrass normal form of the Fermat cubic}\label{computing} Although a Weierstrass normal form is in general difficult to compute starting from an abstract Riemann surface of genus 1, the case of the Fermat cubic is relatively easy by choosing suitable changes of variables. Since this process will be applied to other Fermat curves in Section \ref{Remarks}, we describe it step-by-step below: \begin{enumerate} \item[1.] Change $(x,y)$ to $(x-y,x+y)$ in order to eliminate the cubic term $y^3$, obtaining: \begin{equation*} E_1\colon\ 2x^3+6xy^2=1. \end{equation*} \item[2.] Change $(x,y)$ to $(1/x,y/x)$ to get: \begin{equation*} E_2\colon\ 2+6y^2=x^3. \end{equation*} \item[3.] At this point, we could use any change of variables for which the coefficient of $y^2$ is 1 and the coefficient of $x^3$ is $4$; for instance, with $(x, y/\sqrt{24})$ we obtain the case $g_2=0$ and $g_3=8$: \begin{equation*} E_3\colon\ y^2=4x^3-8. \end{equation*} \end{enumerate} Observe that we obtain a map from the curve obtained in the change of variables to the original curve. For example, in step 1 we obtain $E_1\to F_3$, $(x,y)\mapsto (x-y,x+y)$. Then, the maps associated to the previous changes of variables are: \begin{eqnarray}\label{Phi-} E_3\to E_2 \quad \quad & E_2\to E_1 & \quad E_1\to F_{3}\\ \nonumber \quad (x,y)\mapsto \left(x,\frac{y}{\sqrt{24}}\right), &\quad (x,y) \mapsto \left(\frac{1}{x},\frac{y}{x}\right), &\quad (x,y)\mapsto (x-y,x+y). \end{eqnarray} The inverse maps are (in the reverse order, respectively): \begin{eqnarray}\label{Phi} F_3\to E_1 \quad\quad&\quad E_1\to E_2 &\quad E_2\to E_3\\ \nonumber (x,y)\mapsto \left(\frac{y+x}{2},\frac{y-x}{2}\right), &\quad (x,y)\mapsto \left(\frac{1}{x},\frac{y}{x}\right), &\quad (x,y)\mapsto (x,\sqrt{24}y). \end{eqnarray} So in each step we have a birational isomorphism between these non-singular algebraic curves, hence a biholomorphism between their Riemann surfaces.
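\par The following small symbolic computation (a sketch assuming the SymPy library; it is not part of the original argument) verifies the three changes of variables above, transforming $F_3$ successively into $E_1$, $E_2$ and $E_3$:
\begin{verbatim}
# Sketch: verify the chain of substitutions F_3 -> E_1 -> E_2 -> E_3 with SymPy.
import sympy as sp

x, y = sp.symbols('x y')

def fermat3(u, v):            # defining polynomial of F_3
    return u**3 + v**3 - 1

# Step 1: (x, y) -> (x - y, x + y).
E1 = sp.expand(fermat3(x - y, x + y))
print(E1)                     # 2*x**3 + 6*x*y**2 - 1, i.e. E_1

# Step 2: (x, y) -> (1/x, y/x), then clear denominators by x**3.
E2 = sp.expand(E1.subs([(x, 1/x), (y, y/x)], simultaneous=True) * x**3)
print(E2)                     # -x**3 + 6*y**2 + 2, i.e. E_2

# Step 3: (x, y) -> (x, y/sqrt(24)), rescaled by 4.
E3 = sp.expand(E2.subs(y, y/sp.sqrt(24)) * 4)
print(E3)                     # -4*x**3 + y**2 + 8, i.e. y^2 = 4x^3 - 8 (E_3)
\end{verbatim}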
So we obtain, composing the maps of (\ref{Phi}) and (\ref{Phi-}), respectively, the biholomorphisms $\Phi\colon F_3\to E_3$ and $\Phi^{-1}\colon E_3\to F_3$: \begin{eqnarray}\label{Phis} \Phi(x,y) & = & \left(\frac{2}{y+x},\sqrt{24}\frac{y-x}{y+x}\right),\\ \nonumber \Phi^{-1}(x,y) & = &\left(\frac{1}{x}-\frac{y}{\sqrt{24}x}, \frac{1}{x}+\frac{y}{\sqrt{24}x}\right). \end{eqnarray} \section{Proof of the Baker-Gross theorem} From the previous explicit formulas the Baker-Gross theorem follows easily. Consider $\Lambda$ associated to $g_2=0$ and $g_3=8$ and the biholomorphism $\Psi\colon\mathbb{C}/\Lambda\to E_3$ defined in (\ref{Psi}); then the composition $\Phi^{-1}\circ\Psi\colon \mathbb{C}/\Lambda\to F_3$ is a biholomorphism, \begin{equation}\label{map-uniform2} \Phi^{-1}\circ\Psi (z)= \left(\frac{1}{\wp(z)}-\frac{1}{\sqrt{24}}\frac{\wp'(z)}{\wp(z)}, \frac{1}{\wp(z)}+\frac{1}{\sqrt{24}}\frac{\wp'(z)}{\wp(z)}\right), \end{equation} where $\wp$ satisfies $(\wp')^2=4\wp^3-8$. \par If we continue from step 3 applying the change of variables $(2x,\sqrt{2^3}y)$, we obtain the curve $E_3'\colon y^2=4x^3-1$ and the map $\overline{\Phi}=\Phi^{-1}(2x,\sqrt{2^3}y)\colon E_3'\to F_3$, \begin{eqnarray} \overline{\Phi}(x,y) & = & \Phi^{-1}(2x,\sqrt{2^3}y)\\ \nonumber & = &\left(\frac{1}{2x}-\frac{\sqrt{2^3}y}{2\sqrt{24}x},\frac{1}{2x}+ \frac{\sqrt{2^3}y}{2\sqrt{24}x}\right)\\ \nonumber & = & \left(\frac{1}{2x}\left(1-\frac{y}{\sqrt{3}}\right),\frac{1}{2x} \left(1+\frac{y}{\sqrt{3}}\right) \right), \end{eqnarray} and taking $\Lambda'$ associated to $g_2=0$ and $g_3=1$, and $\Psi'\colon\mathbb{C}/\Lambda'\to E_3'$ as in (\ref{Psi}), composing these two isomorphisms we obtain the biholomorphism announced in (\ref{map-uniform}), $\overline{\Phi}\circ \Psi'\colon \mathbb{C}/\Lambda'\to F_3$: \begin{equation*} \overline{\Phi}\circ \Psi'(z)=\left(\frac{1}{2\wp(z)}\left(1-3^{-1/2}\wp'(z)\right), \frac{1}{2\wp(z)}\left(1+3^{-1/2}\wp'(z)\right)\right), \end{equation*} where the Weierstrass elliptic function $\wp$ satisfies here $(\wp')^2=4\wp^3-1$. \par On the other hand, let $\pi\colon\mathbb{C}\to \mathbb{C}/\Lambda'$ be the natural projection; this map is an unbranched holomorphic covering, so the map $\overline{\Phi}\circ\Psi'\circ \pi\colon \mathbb{C}\to F_3$ is an unbranched holomorphic covering as well. Hence, given $F$ and $G$ a meromorphic solution of the Fermat cubic, the map $\phi(z)=(F(z),G(z))$ defines a holomorphic map $\phi\colon \mathbb{C}\to F_3$. Since $\mathbb{C}$ is simply connected, $\phi$ has a holomorphic lifting $\alpha\colon \mathbb{C}\to \mathbb{C}$ with respect to this covering, i.e., the following diagram commutes: \begin{equation} \xymatrix{ & \mathbb{C}\ar[d]^{\overline{\Phi}\circ\Psi'\circ \pi}\\ \mathbb{C}\ar^{\alpha}[ru]\ar^{\phi}[r] & F_3 } \end{equation} Composing with $\alpha$ we obtain \begin{equation} F=\frac{1}{2\wp(\alpha)}\left(1-3^{-1/2}\wp'(\alpha)\right), \quad G=\frac{1}{2\wp(\alpha)}\left(1+3^{-1/2}\wp'(\alpha)\right), \end{equation} which are the desired formulas. This proves the theorem.
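\par A quick symbolic sanity check of the two parametrizations (a sketch assuming the SymPy library; the symbols \verb|p| and \verb|q| stand for $\wp(z)$ and $\wp'(z)$) confirms that $F^3+G^3=1$ once the corresponding Weierstrass relation is imposed:
\begin{verbatim}
# Sketch: check F^3 + G^3 = 1 modulo the Weierstrass relation (SymPy assumed).
import sympy as sp

p, q = sp.symbols('p q')      # stand-ins for wp(z) and wp'(z)

# Formulas of (map-uniform), with (wp')^2 = 4 wp^3 - 1:
F = (1 - q/sp.sqrt(3)) / (2*p)
G = (1 + q/sp.sqrt(3)) / (2*p)
print(sp.simplify(sp.expand(F**3 + G**3).subs(q**2, 4*p**3 - 1)))    # 1

# Formulas of (map-uniform2), with (wp')^2 = 4 wp^3 - 8:
F2 = 1/p - q/(sp.sqrt(24)*p)
G2 = 1/p + q/(sp.sqrt(24)*p)
print(sp.simplify(sp.expand(F2**3 + G2**3).subs(q**2, 4*p**3 - 8)))  # 1
\end{verbatim}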
\par Note that we could use the map $\Phi^{-1}\circ \Psi\colon \mathbb{C}/\Lambda\to F_3$ given in (\ref{map-uniform2}) instead of $\overline{\Phi}\circ \Psi'$ in the above argument to obtain that any meromorphic solution of the Fermat cubic is of the form \begin{equation}\label{general-form2} F=\frac{1}{\wp(\alpha)}\left(1-\frac{1}{\sqrt{24}}\wp'(\alpha)\right), \quad G=\frac{1}{\wp(\alpha)}\left(1+\frac{1}{\sqrt{24}}\wp'(\alpha)\right), \end{equation} where in this case $\wp$ satisfies $(\wp')^2=4\wp^3-8$. We could obtain similar solutions depending on which factor we choose in step 3, but we can always obtain one from the other by this process. \section{Some remarks on Fermat curves of higher degree}\label{Remarks} We conclude by discussing the application of the changes of variables described in Subsection \ref{computing} to the Fermat curves of higher degree (see (\ref{Fermat-curve})). When the curve is of odd degree the process gives us directly an interesting equation, but when the degree is even we need to apply a slight modification in step 1. From these equations we give a meromorphic function on the Fermat curves. \subsection{The odd case} The changes of variables in steps 1 and 2 described in Subsection \ref{computing} can be applied to any Fermat curve, \begin{equation}\label{Fermat-curve} F_n\colon x^n+y^n=1, \end{equation} but in the case of $n$ odd we get an interesting formula. By a straightforward calculation, following steps 1 and 2, we find the curve $E_2$: \begin{equation}\label{new-Fermat} E_2\colon\ 2+2\sum_{k=1}^{\frac{n-1}{2}}\binom{n}{2k}y^{2k}=x^n. \end{equation} Since we did not modify the above steps, we get the same correspondence $\Phi\colon F_n\to E_2$ as in (\ref{Phis}), but without step 3; so we get in this case: \begin{eqnarray}\label{Phis2} \Phi(x,y)=\left(\frac{2}{y+x},\frac{y-x}{y+x}\right),\\ \nonumber \Phi^{-1}(x,y)=\left(\frac{1}{x}-\frac{y}{x}, \frac{1}{x}+\frac{y}{x}\right). \end{eqnarray} Note that $E_2$ has a holomorphic involution $I(x,y)=(x,-y)$. It is easy to check that it is conjugate by $\Phi$ to the canonical involution of $F_n$, $\overline{I}(x,y)=(y,x)$, i.e., the following diagram commutes \begin{equation} \xymatrix{F_n\ar_\Phi[d]\ar^{\overline{I}}[r] & F_n\ar^{\Phi}[d]\\ E_2\ar^{I}[r] & E_2} \end{equation} Note that the projection onto the first coordinate is a meromorphic function of degree $n-1$ on $E_2$, so composing with $\Phi$ we obtain the meromorphic function $2/(y+x)$ on $F_n$ of degree $n-1$; for example, in the case $n=3$ we obtain a degree $2$ meromorphic function on the elliptic curve $F_3$. \subsection{The even case} Similar formulas can be obtained in the even case by using the change $(x+\omega y,x+y)$ instead of $(x-y,x+y)$ in the first step, where $\omega$ is a root of $x^n=-1$, keeping the other steps unchanged. In this case we have \begin{equation} E_2\colon 2+\sum_{k=1}^{n-1}\binom{n}{k}(1+\omega^k)y^{k}=x^n, \end{equation} and $\Phi\colon F_n\to E_2$ becomes \begin{eqnarray} \Phi(x,y) & = & \left(\frac{\omega-1}{\omega y-x},\frac{x-y}{\omega y-x}\right),\\ \nonumber \Phi^{-1}(x,y) & = & \left(\frac{1}{x}+\omega\frac{y}{x}, \frac{1}{x}+\frac{y}{x}\right). \end{eqnarray} As above, the map $(\omega-1)/(\omega y-x)$ is a meromorphic function of degree $n-1$ on the Fermat curve $F_n$, for $n$ even. \section*{Acknowledgments} \par I would like to thank my advisor Alberto Verjovsky for his constant support and for his encouragement in writing this paper. This work was partially supported by PAPIIT IN100811. \end{document}
\begin{document} \title{ Examples of Berezin-Toeplitz Quantization: Finite sets and Unit Interval } \author{J.-P. Gazeau} \address{ LPTMC and F\'ed\'eration de Recherches Astroparticules et Cosmologie, Boite 7020, Universit\'e Paris 7 Denis Diderot, F-75251 Paris Cedex 05, France} \email{[email protected] } \author{T. Garidi} \address{ LPTMC and F\'ed\'eration de Recherches Astroparticules et Cosmologie, Boite 7020, Universit\'e Paris 7 Denis Diderot, F-75251 Paris Cedex 05, France} \email{[email protected]} \author{E. Huguet} \address{ D\'epartement d'Astrophysique Stellaire et Galactique and F\'ed\'eration de Recherches Astroparticules et Cosmologie, Observatoire de Paris-Meudon, 92195 Meudon, France} \email{[email protected]} \author{M. Lachi\`eze Rey} \address{ Service d'Astrophysique du CEA et F\'ed\'eration de Recherches Astroparticules et Cosmologie, CEA Saclay, 91191 Gif sur Yvette Cedex, France } \email{[email protected]} \author{J. Renaud} \address{LPTMC and F\'ed\'eration de Recherches Astroparticules et Cosmologie, Boite 7020, Universit\'e Paris 7 Denis Diderot, F-75251 Paris Cedex 05, France} \email{[email protected] } \subjclass{Primary 81R30, 81R60, 81S30, 81S10} \date{today} \dedicatory{In memory of Bob Sharp} \keywords{Quantization, Signal Processing, Coherent States} \begin{abstract} We present a quantization scheme of an arbitrary measure space based on overcomplete families of states and generalizing the Klauder and the Berezin-Toeplitz approaches. This scheme could reveal itself as an efficient tool for quantizing physical systems for which more traditional methods like geometric quantization are uneasy to implement. The procedure is illustrated by (mostly two-dimensional) elementary examples in which the measure space is a $N$-element set and the unit interval. Spaces of states for the $N$-element set and the unit interval are the 2-dimensional euclidean $\mathbb{R}^2$ and hermitian $\mathbb{C}^2$ planes. \end{abstract} \maketitle . \section{Quantum processing of a measure space} Quantum Physics and Signal Analysis have many aspects in common. As a departure point of their respective formalism, one finds a {\it raw} set $X=\{x\}$ of basic parameters or data. This set may be a classical phase space in the former case whereas it might be a temporal line or a time-frequency half-plane in the latter one. In reality it can be any set of data accessible to observation. The minimal significant structure one requires of it is the existence of a measure $\mu(dx)$, together with a $\sigma$-algebra of measurable subsets. As a measure space, $X$ will be given the name of an {\it observation} set in the present context, and the existence of a measure provides us with a statistical reading of the set of measurable real or complex valued functions $f(x)$ on $X$: computing for instance average values on subsets with bounded measure. Actually, both approaches deal with quadratic mean values and correlation/convolution involving signal pairs, and the natural frameworks of studies are the real (Euclidean) or complex (Hilbert) spaces, $L^2(X, \mu) \equiv L_{\mathbb{R}}^2(X, \mu)$ or $L_{\mathbb{C}}^2(X, \mu)$ of square integrable functions $f(x)$ on the observation set $X$: $\int_X \vert f(x)\vert^2 \, \mu(dx) < \infty $. One will speak of {\it finite-energy} signal in Signal Analysis and of quantum state in Quantum Mechanics. 
However, it is precisely at this stage that ``quantum processing'' of $X$ differs from signal processing on at least three points: \begin{enumerate} \item not all square integrable functions are eligible as quantum states, \item a quantum state is defined up to a nonzero factor, \item those functions $f(x)$ that are eligible as quantum states and have unit norm, $\int_X \vert f(x)\vert^2\, \mu(dx) = 1$, give rise to a probability interpretation: $X \supset \Delta \rightarrow \int_{\Delta} \vert f(x) \vert^2 \mu(dx)$ is a probability measure interpretable in terms of localisation in the measurable set $\Delta$. This is inherent to the computing of mean values of quantum observables, (essentially) self-adjoint operators with domain included in the set of quantum states. \end{enumerate} The first point lies at the heart of the {\it quantization} problem: what is the more or less canonical procedure allowing one to select quantum states among simple signals? In other words, how to select the right (projective) Hilbert space ${\mathcal H}$, a closed subspace of $L^2(X, \mu)$, or equivalently the corresponding orthogonal projector $\mathbb{I}_{{\mathcal H}}$? In various circumstances, this question may be answered through the selection, among elements of $L^2(X, \mu)$, of an orthonormal set $\mathcal{S}_N = \{ \phi_n(x) \}_{n = 1}^N$, $N$ being finite or infinite, which spans, by definition, the separable Hilbert subspace ${\mathcal H} \equiv {\mathcal H}_N$. Furthermore, and this is a crucial assumption \cite{aag,klau2,csbook}, we require that \begin{equation} {\mathcal N} (x) \equiv \sum_n \vert \phi_n (x) \vert^2 < \infty \ \mbox{almost everywhere}. \label{factor} \end{equation} Of course, if $N$ is finite the above condition is trivially checked. We then consider the family of states $\{ | x \rangle \}_{x\in X}$ through the following linear superpositions: \begin{equation} | x\rangle \equiv \frac{1}{\sqrt{{\mathcal N} (x)}} \sum_n \phi_n (x) | n\rangle, \label{cs} \end{equation} in which the ket $| n\rangle$ designates the element $\phi_n$ in a ``Fock'' notation. This defines an injective map \begin{equation} X \ni x \rightarrow | x \rangle \in {\mathcal H}_N \end{equation} (in Dirac notation), and it is not difficult to check that states (\ref{cs}) are \textit{coherent} in the sense that they obey the following two conditions: \begin{itemize} \item {\bf Normalisation} \begin{equation}\langle \, x\, | x \rangle = 1, \label{norma} \end{equation}\item {\bf Resolution of the unity in ${\mathcal H}_N$} \begin{equation}\int_X | x\rangle \langle x | \,\, \nu(dx)= \mathbb{I}_{{\mathcal H}_N}, \label{iden} \end{equation}where $\nu(dx) = {\mathcal N}(x)\,\mu(dx)$ is another measure on $X$, absolutely continuous with respect to $\mu(dx)$. The coherent states (\ref{cs}) form in general an overcomplete (continuous) basis of ${\mathcal H}$. \end{itemize} The resolution of the unity in ${\mathcal H}_N$ can alternatively be understood in terms of the scalar product $\langle \, x\, | x' \rangle$ of two states of the family. Indeed, (\ref{iden}) implies that, to any vector $| \phi \rangle$ in ${\mathcal H}_N$ one can (anti-)isometrically associate the function \begin{equation} \phi^{\star}(x) \equiv \sqrt{\mathcal N(x)}\langle x\, | \phi \rangle \end{equation} in $L^2(X, \mu)$, and this function obeys \begin{equation} \phi^{\star}(x) = \int_X \sqrt{\mathcal N(x)\mathcal N(x')} \langle x| x' \rangle \phi^{\star}(x')\, \mu(dx') .
\end{equation} Hence, ${\mathcal H_N}$ is (anti-)isometric to a reproducing Hilbert space with kernel \begin{equation} {\mathcal K}(x,x') = \sqrt{\mathcal N(x)\mathcal N(x')} \langle x\, | x' \rangle, \end{equation} and the latter assumes finite diagonal values ({\it a.e.}), ${\mathcal K}(x,x) = \mathcal N(x)$, by construction. A {\it classical} observable is a function $f(x)$ on $X$ having specific properties in relationship with some supplementary structure allocated to $X$ (topology, geometry, or something else). Its quantization \cite{klau1,ber} simply consists in associating to $f(x)$ the operator \begin{equation}A_f := \int_X f(x) | x\rangle \langle x| \, \nu(dx). \label{oper} \end{equation}In this context, $f(x)$ is said to be the upper (or contravariant) symbol of the operator $A_f$, denoted by $f = \hat{A}_f $, whereas the mean value $\langle x| A_f | x\rangle$ is said to be the lower (or covariant) symbol of $A_f$ \cite{ber,csfks}, denoted by $\check{A}_f$. Through this approach, one can say that a quantization of the observation set is in one-to-one correspondence with the choice of a frame in the sense of (\ref{norma}) and (\ref{iden}). To a certain extent, a quantization scheme consists in adopting a certain point of view in dealing with $X$. This frame can be discrete or continuous, depending on the topology furthermore allocated to the set $X$, and it can of course be overcomplete. The validity of a precise frame choice is assessed by comparing spectral characteristics of quantum observables $A_f$ with data issued from a predefined experimental protocol. Of course, not all operators acting in ${\mathcal H}_N$ are of the ``diagonal'' type $A_f$, and many different classical $f(x)$'s can give rise to the \textit{same} operator $A_f$. The frame should be complete or rich enough in order to meet all experimental possibilities determined by the protocol. Let us illustrate the above construction with the well-known Klauder-Glauber-Sudarshan coherent states \cite{csfks} and the subsequent so-called canonical quantization. The observation set $X$ is the classical phase space $\mathbb{R}^2 \simeq \mathbb{C} = \{ x \equiv z = \frac{1}{\sqrt2}(q+ip) \}$ (in complex notations) of a particle with one degree of freedom. The measure on $X$ is Gaussian, $\mu(dx) = \frac{1}{\pi}\, e^{-\vert z \vert^2}\, d^2 z $ where $d^2 z$ is the Lebesgue measure of the plane. The functions $\phi_n (x)$ are the normalised powers of the complex variable $z$, $\phi_n (x) \equiv \frac{z^n}{\sqrt{n!}}$, so that the Hilbert subspace ${\mathcal H}$ is the so-called Fock-Bargmann space of all entire functions that are square integrable with respect to the Gaussian measure. Since $ \sum_n \frac{\vert z \vert^{2n}}{n!} = e^{\vert z \vert^2}$, the coherent states read \begin{equation}| z\rangle = e^{-\frac{\vert z \vert^2}{2}} \sum_n \frac{z^n}{\sqrt{n!}}| n\rangle, \label{scs} \end{equation}and one easily checks the normalisation and unity resolution: \begin{equation}\langle z\, | z \rangle = 1,\ \ \frac{1}{\pi}\int_{\mathbb{C}} | z\rangle \langle z| \, d^2 z= \mathbb{I}_{{\mathcal H}}. \label{pscs} \end{equation} Note that the reproducing kernel is simply given by $e^{\bar{z}z'}$. The quantization of the observation set is hence achieved by selecting in the original Hilbert space $L^2(\mathbb{C}, \frac{1}{\pi}e^{-\vert z \vert^2}\, d^2 z)$ all holomorphic entire functions, which geometric quantization specialists would call a choice of polarization.
Quantum operators acting on ${\mathcal H}$ are obtained by using (\ref{oper}). We thus have for the most basic one, \begin{equation}\frac{1}{\pi}\int_{\mathbb{C}} z\, | z\rangle \langle z| \, d^2 z = \sum_n \sqrt{n+1} | n\rangle \langle n+1| \equiv a, \label{low} \end{equation}which is the lowering operator, $a | n\rangle = \sqrt{n} | n - 1\rangle$. Its adjoint $a^{\dagger}$ is obtained by replacing $z$ by $\bar{z}$ in (\ref{low}), and we get the factorisation $N = a^{\dagger}a$ for the number operator, together with the commutation rule $\lbrack a, a^{\dagger} \rbrack = \mathbb{I}_{{\mathcal H}}$. Also note that $ a^{\dagger}$ and $a$ are realized on ${\mathcal H}$ as the multiplication and derivation operators respectively, $a^{\dagger}f(z) = zf(z), \ af(z) = df(z)/dz$. From $q = \frac{1}{\sqrt{2}}(z + \bar{z})$ and $p = \frac{1}{\sqrt{2}\, i}(z - \bar{z})$, one easily infers by linearity that $q$ and $p$ are upper symbols for $\frac{1}{\sqrt{2}}(a + a^{\dagger}) \equiv Q$ and $\frac{1}{\sqrt{2}\, i}(a - a^{\dagger}) \equiv P$ respectively. In consequence, the self-adjoint operators $Q$ and $P$ obey the canonical commutation rule $\lbrack Q, P \rbrack = i\mathbb{I}_{{\mathcal H}}$, and for this reason fully deserve the name of position and momentum operators of the usual (Galilean) quantum mechanics, together with all localisation properties specific to the latter. The next examples presented in this paper, although elementary, are rather unusual. In particular, we start with observation sets which are not necessarily phase spaces, and such sets are far from having any physical meaning in the common sense. We first consider a two-dimensional quantization of an $N$-element set which leads, for $N\geq 4$, to a Pauli algebra of observables. We then study two-dimensional (and higher-dimensional) quantizations of the unit segment. In the conclusion, we shall mention some questions of physical interest which are currently under investigation. \section{Quantum processing of an $N$-element set} An elementary (but not trivial!) exercise for illustrating the quantization scheme introduced in the previous section involves an arbitrary $N$-element set $X= \{x_i \}$ as observation set. An arbitrary non-degenerate measure on it is given by a sum of Dirac measures: \begin{equation} \mu(dx) = \sum_{i=1}^N a_i \delta_{\{x_i\}}, \ a_i > 0. \end{equation} The Hilbert space $L^2(X,\mu)$ is simply isomorphic to $\mathbb{C}^N$. An obvious orthonormal basis is given by $\left\{\frac{1}{\sqrt{a_i}} \chi_{\{x_i\}}(x), \ i=1, \cdots , N\right\}$, where $\chi_{\{a\}}$ is the characteristic function of the singleton $\{a\}$. We now consider the two-element orthonormal set $\{ \phi_1\equiv \phi_{\alpha} \equiv| {\pmb \alpha }\rangle , \phi_2 \equiv \phi_{\beta} \equiv | {\pmb \beta } \rangle\} $ defined in the most generic way by: \begin{equation} \label{osN} \phi_{\alpha}(x) = \sum_{i=1}^N \alpha_i \frac{1}{\sqrt{a_i}} \chi_{\{x_i\}}(x), \ \ \phi_{\beta}(x) = \sum_{i=1}^N \beta_i \frac{1}{\sqrt{a_i}} \chi_{\{x_i\}}(x), \end{equation} where the complex coefficients $\alpha_i$ and $\beta_i$ obey \begin{equation} \sum_{i=1}^N \vert \alpha_i \vert^2 = 1 = \sum_{i=1}^N \vert \beta_i \vert^2, \ \sum_{i=1}^N \alpha_i \overline{\beta_i} = 0. \end{equation} In the language of Hermitian geometry, our choice of $\{\phi_{\alpha}, \phi_{\beta} \} $ amounts to selecting in $\mathbb{C}^N$ the two orthonormal vectors ${\pmb \alpha }= \{\alpha_i \}, {\pmb \beta }= \{\beta_i\}$, and this justifies our notations for indices.
The expression for the coherent states follows: \begin{equation}\label{Ncs} | x \rangle = \frac{1}{\sqrt{\mathcal N(x)}} \left\lbrack \phi_{\alpha}(x) ~| {\pmb \alpha }\rangle + \phi_{\beta}(x) ~| {\pmb \beta} \rangle \right\rbrack, \end{equation} in which $\mathcal N(x)$ is given by \begin{equation} \mathcal N(x) = \sum_{i=1}^N \frac{\vert \alpha_i \vert^2 + \vert \beta_i \vert^2}{a_i} \chi_{\{x_i\}}(x). \end{equation} The resolution of unity (\ref{iden}) here reads as: \begin{equation} \mathbb{I} = \sum_{i=1}^N \left( \vert \alpha_i \vert^2 + \vert \beta_i \vert^2 \right) | x_i \rangle \langle x_i |. \end{equation} The overlap between two coherent states is given by the following kernel: \begin{equation} \label{overlap} \langle x_i | x_j \rangle = \frac{\overline{\alpha_i}\alpha_j + \overline{\beta_i}\beta_j }{\sqrt{\vert \alpha_i \vert^2 + \vert \beta_i \vert^2}\sqrt{\vert \alpha_j \vert^2 + \vert \beta_j \vert^2}}. \end{equation} To any real-valued function $f(x)$ on $X$, {\it i.e.} to any vector ${\pmb f} \equiv (f(x_i))$ in $\mathbb{R}^N$, there corresponds the following Hermitian operator $A_f$ in $\mathbb{C}^2$, expressed in matrix form with respect to the orthonormal basis (\ref{osN}): \begin{equation}\label{aefN} \begin{split} A_f &= \int_X \mu(dx)\,\mathcal N(x)\, f(x) | x \rangle \langle x | \\ & = \begin{pmatrix} \sum_{i=1}^N \vert \alpha_i \vert^2 f(x_i) & \sum_{i=1}^N \alpha_i \overline{\beta_i} f(x_i) \\ \sum_{i=1}^N \overline{\alpha_i} \beta_i f(x_i) & \sum_{i=1}^N \vert \beta_i \vert^2 f(x_i) \end{pmatrix} \equiv \begin{pmatrix} \langle {\pmb F} \rangle_{\pmb \alpha } & \langle{\pmb \beta } | {\pmb F} | {\pmb \alpha }\rangle \\ \langle{\pmb \alpha } | {\pmb F} | {\pmb \beta }\rangle & \langle {\pmb F} \rangle_{\pmb \beta } \end{pmatrix}, \end{split} \end{equation} where ${\pmb F}$ stands for the diagonal matrix $\mathrm{diag}\left(f(x_i)\right)$. It is clear that, for a generic choice of the complex $\alpha_i$'s and $\beta_i$'s, all possible Hermitian $2\times2$-matrices can be obtained in this way if $N\geq 4$. By {\it generic} we mean that the following $4\times N$-real matrix \begin{equation}\label{matc} \mathcal C = \begin{pmatrix} \vert \alpha_1 \vert^2 & \vert \alpha_2 \vert^2 & \cdots & \vert \alpha_N \vert^2 \\ \vert \beta_1 \vert^2 & \vert \beta_2 \vert^2 & \cdots & \vert \beta_N \vert^2 \\ \mathbb{R}e( \alpha_1 \overline{\beta_1}) & \mathbb{R}e( \alpha_2 \overline{\beta_2}) & \cdots & \mathbb{R}e( \alpha_N \overline{\beta_N}) \\ \mathbb{I}m( \alpha_1 \overline{\beta_1}) & \mathbb{I}m( \alpha_2 \overline{\beta_2}) & \cdots & \mathbb{I}m( \alpha_N \overline{\beta_N}) \end{pmatrix} \end{equation} has rank equal to 4. The case $N=4$ with $\det \mathcal C \not= 0 $ is particularly interesting since then one has uniqueness of the upper symbols of the Pauli matrices $\sigma_{1} = \bigl( \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \bigr) ,\ \sigma_{2} = \bigl( \begin{smallmatrix} 0 & -i \\ i & 0 \end{smallmatrix} \bigr) ,\ \sigma_{3} = \bigl( \begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix} \bigr) ,\ \sigma_{0} = \mathbb{I}$, which form a basis of the four-dimensional Lie algebra of complex Hermitian $2\times 2$-matrices.
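As a concrete illustration of the previous formulas (a sketch of our own, with a randomly drawn orthonormal pair in $\mathbb{C}^{4}$; none of the numerical choices are part of the construction), one can check the resolution of the unity, the Hermiticity of $A_f$, and the generic rank-$4$ condition on $\mathcal C$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4

# a generic orthonormal pair (alpha, beta) in C^N, obtained here by a QR decomposition
M = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))
Q, _ = np.linalg.qr(M)
alpha, beta = Q[:, 0], Q[:, 1]

# coherent states |x_i> as normalised 2-component vectors (the weights a_i drop out)
norms = np.abs(alpha) ** 2 + np.abs(beta) ** 2
kets = np.stack([alpha, beta], axis=1) / np.sqrt(norms)[:, None]   # shape (N, 2)

# resolution of the unity: sum_i (|alpha_i|^2 + |beta_i|^2) |x_i><x_i| = I
res = sum(norms[i] * np.outer(kets[i], kets[i].conj()) for i in range(N))
print(np.allclose(res, np.eye(2)))

# quantized observable A_f for a real function f on X, eq. (aefN)
f = rng.normal(size=N)
A_f = np.array([[np.sum(np.abs(alpha) ** 2 * f), np.sum(alpha * np.conj(beta) * f)],
                [np.sum(np.conj(alpha) * beta * f), np.sum(np.abs(beta) ** 2 * f)]])
print(np.allclose(A_f, A_f.conj().T))          # A_f is Hermitian

# the 4 x N matrix C of eq. (matc); rank 4 means every Hermitian 2x2 matrix is some A_f
C = np.array([np.abs(alpha) ** 2,
              np.abs(beta) ** 2,
              np.real(alpha * np.conj(beta)),
              np.imag(alpha * np.conj(beta))])
print(np.linalg.matrix_rank(C) == 4)
\end{verbatim}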
As a matter of fact, the operator (\ref{aefN}) decomposes with respect to the Pauli basis $\{\sigma_0, \sigma_1, \sigma_2, \sigma_3\}$ as: \begin{equation} A_f = \langle f \rangle_+ \sigma_0 + \langle f \rangle_- \sigma_3 + \mathbb{R}e\left(\langle{\pmb \beta } | {\pmb F} | {\pmb \alpha }\rangle \right)\sigma_1 - \mathbb{I}m\left(\langle{\pmb \beta } | {\pmb F} | {\pmb \alpha }\rangle\right)\sigma_2, \end{equation} where the symbols $\langle f \rangle$ stand for the following averagings: \begin{equation}\label{avf} \langle f \rangle_{\pm} = \frac{1}{2}\,\sum_{i=1}^N \left(\vert \alpha_i \vert^2 \pm \vert \beta_i \vert^2\right)f(x_i) = \frac{1}{2}\left( \langle {\pmb F} \rangle_{\pmb \alpha } \pm \langle {\pmb F} \rangle_{\pmb \beta } \right). \end{equation} Note that $\langle f \rangle_{+}$ alone has a mean-value status, precisely with respect to the probability distribution \begin{equation}\label{probf} p_i = \frac{1}{2}\,\left(\vert \alpha_i \vert^2 + \vert \beta_i \vert^2\right). \end{equation} Also note the appearance of these averagings in the spectral values of the quantum observable $A_f$: \begin{equation}\label{outcome1} \mbox{Sp}(f) = \left\{ \langle f \rangle_{+} \pm \sqrt{\left(\langle f \rangle_{-}\right)^2 + \vert \langle{\pmb \beta } | {\pmb F} | {\pmb \alpha }\rangle\vert^2}\right\}. \end{equation} Note that if the vector ${\pmb \alpha } = (1,0, \cdots , 0)$ is part of the canonical basis and ${\pmb \beta } = \left(0, \beta_2, \cdots, \beta_N \right)$ is a unit vector orthogonal to ${\pmb \alpha }$, then $A_f$ is diagonal and $\mbox{Sp}(f)$ is trivially reduced to $\left( f(x_1), \langle {\pmb F} \rangle_{\pmb \beta }\right)$. The upper symbols for the Pauli matrices read in vector form as \begin{equation} \hat{\pmb \sigma}_0 = \begin{pmatrix} 1 \\ 1\\ 1 \\ 1 \end{pmatrix}, \ \hat{\pmb \sigma}_1 = {\mathcal C}^{-1}\begin{pmatrix} 0 \\ 0\\ 1 \\ 0 \end{pmatrix}, \ \hat{\pmb \sigma}_2 = {\mathcal C}^{-1}\begin{pmatrix} 0 \\ 0\\ 0 \\ -1 \end{pmatrix}, \ \hat{\pmb \sigma}_3 = {\mathcal C}^{-1}\begin{pmatrix} 1 \\ -1\\ 0 \\ 0 \end{pmatrix}. \end{equation} On the other hand, and for any $N$, the components of the lower symbol of $A_f$ are given in terms of another probability distribution in which the importance of each point is precisely doubled relative to its counterpart in (\ref{probf}): \begin{equation} \langle x_l| A_f |x_l \rangle = \check{A}_f (x_l) = \sum_{i=1}^N \varpi_{li} f(x_i), \end{equation} with \begin{equation} \varpi_{ll} = \vert \alpha_l \vert^2 + \vert \beta_l \vert^2, \ \varpi_{li} = \frac{\vert \overline{\alpha_l} \alpha_i + \overline{\beta_l} \beta_i\vert^2}{\vert \alpha_l \vert^2 + \vert \beta_l \vert^2}, \ i\not= l. \end{equation} Note that the matrix $(\varpi_{li} ) $ is stochastic. As a matter of fact, the components of the lower symbols of the Pauli matrices are given by: \begin{align} \check{\sigma}_0 (x_l) &= 1, & \check{\sigma}_1 (x_l) & = \frac{2 \mathbb{R}e\left(\overline{\alpha_l} \beta_l\right)}{\vert \alpha_l \vert^2 + \vert \beta_l \vert^2}, \\ \check{\sigma}_2 (x_l) & = \frac{2 \mathbb{I}m\left(\overline{\alpha_l} \beta_l\right)}{\vert \alpha_l \vert^2 + \vert \beta_l \vert^2}, & \check{\sigma}_3 (x_l) &= \frac{\vert \alpha_l \vert^2 - \vert \beta_l \vert^2}{\vert \alpha_l \vert^2 + \vert \beta_l \vert^2}. \end{align} Hidden behind this formal game lies an interpretation resorting to Hermitian geometric probability.
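The following short sketch (again our own illustration; the setup lines of the previous snippet are repeated so that it runs on its own) checks the upper symbols obtained from ${\mathcal C}^{-1}$, the stochasticity of the matrix $(\varpi_{li})$, and the lower-symbol formulas given above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4
Q, _ = np.linalg.qr(rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2)))
alpha, beta = Q[:, 0], Q[:, 1]                    # a generic orthonormal pair in C^4
norms = np.abs(alpha) ** 2 + np.abs(beta) ** 2
kets = np.stack([alpha, beta], axis=1) / np.sqrt(norms)[:, None]   # the states |x_l>

def A(f):
    """Quantized operator A_f of eq. (aefN) for a real vector f = (f(x_i))."""
    return np.array([[np.sum(np.abs(alpha)**2 * f), np.sum(alpha*np.conj(beta)*f)],
                     [np.sum(np.conj(alpha)*beta*f), np.sum(np.abs(beta)**2*f)]])

C = np.array([np.abs(alpha)**2, np.abs(beta)**2,
              np.real(alpha*np.conj(beta)), np.imag(alpha*np.conj(beta))])

sigma = {1: np.array([[0, 1], [1, 0]], dtype=complex),
         2: np.array([[0, -1j], [1j, 0]]),
         3: np.array([[1, 0], [0, -1]], dtype=complex)}
rhs = {1: [0, 0, 1, 0], 2: [0, 0, 0, -1], 3: [1, -1, 0, 0]}

# upper symbols: real functions on X whose quantization reproduces the Pauli matrices
for k in (1, 2, 3):
    f_hat = np.linalg.solve(C, rhs[k])
    print(k, np.allclose(A(f_hat), sigma[k]))

# lower symbols and the stochastic matrix (varpi_{li})
overlaps = kets @ kets.conj().T
varpi = norms[None, :] * np.abs(overlaps) ** 2    # varpi[l, i]
print(np.allclose(varpi.sum(axis=1), 1.0))        # each row is a probability distribution

f = rng.normal(size=N)
lower = np.array([np.real(kets[l].conj() @ A(f) @ kets[l]) for l in range(N)])
print(np.allclose(lower, varpi @ f))
print(np.allclose([np.real(kets[l].conj() @ sigma[3] @ kets[l]) for l in range(N)],
                  (np.abs(alpha)**2 - np.abs(beta)**2) / norms))
\end{verbatim}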
For instance, consider $X= \{x_i \}$ as a set of $N$ real numbers. One can then view the real-valued function $f$ defined by $f(x_i) = x_i$ as the \textit{position} observable, the measurement of which on the quantum level determined by the choice of ${\pmb \alpha }= \{\alpha_i \}, {\pmb \beta }= \{\beta_i\}$ has the two possible outcomes given by (\ref{outcome1}). Moreover, the \textit{position} $x_l$ is privileged to a certain (quantitative) extent in the expression of the average value of the \textit{position} operator when computed in state $ |x_l \rangle$. Before ending this section, let us examine the lower-dimensional cases $N=2$ and $N=3$. When $N=2$ the basis change (\ref{osN}) reduces to a $U(2)$ transformation with $SU(2)$ parameters $\alpha = \alpha_1$, $\beta = - \overline{\beta}_1$, $\vert \alpha \vert^2 + \vert \beta \vert^2 = 1$, and some global phase factor. The operator (\ref{aefN}) simplifies as \begin{equation} A_f = f_+ \mathbb{I} + f_- \begin{pmatrix} \vert \alpha \vert^2 - \vert \beta \vert^2 & - 2 \alpha \beta\\ -2 \overline{\alpha \beta} & \vert \beta \vert^2 - \vert \alpha \vert^2 \end{pmatrix}, \end{equation} with $f_{\pm} := (f(x_1) \pm f(x_2))/2 $. We now have a two-dimensional commutative algebra of ``observables'' $A_f$, generated by the identity matrix $\mathbb{I} = \sigma_0$ and the $SU(2)$ transform of $\sigma_3$: $\sigma_3 \rightarrow g \sigma_3 g^{\dagger}$ with $g = \bigl( \begin{smallmatrix} \alpha & \beta \\ -\bar{\beta} & \bar{\alpha} \end{smallmatrix}\bigr) \in SU(2)$. As is easily expected in this case, lower symbols reduce to components: \begin{equation} \langle x_l| A_f |x_l \rangle = \check{A}_f (x_l) = f(x_l), \ l= 1,2. \end{equation} It is also interesting to consider the $N=3$ case when all considered vector spaces are real. The basis change (\ref{osN}) involves four real independent parameters, say $\alpha_1, \alpha_2, \beta_1$, and $\beta_2$, all with modulus $< 1$. The counterpart of (\ref{matc}) reads here as \begin{equation}\label{matcr} \mathcal C_3 = \begin{pmatrix} ( \alpha_1 )^2 & ( \alpha_2 )^2 & 1 - (\alpha_1 )^2 - ( \alpha_2 )^2 \\ ( \beta_1 )^2 & ( \beta_2 )^2 & 1 - (\beta_1 )^2 -( \beta_2 )^2\\ \alpha_1 \beta_1 & \alpha_2 \beta_2 & - \alpha_1 \beta_1 - \alpha_2 \beta_2 \end{pmatrix} \end{equation} If $\det \mathcal C_3 = ( \alpha_1 \beta_2 - \alpha_2 \beta_1)( \beta_1 \beta_2- \alpha_1 \alpha_2 )\not= 0 $, then one has uniqueness of the upper symbols of the Pauli matrices $\sigma_{1}, \sigma_{3}$, and $\sigma_{0} = \mathbb{I}$, which form a basis of the three-dimensional Jordan algebra of real symmetric $2\times 2$-matrices. These upper symbols read in vector form as \begin{equation} \hat{\pmb \sigma}_0 = \begin{pmatrix} 1 \\ 1\\ 1 \end{pmatrix}, \ \hat{\pmb \sigma}_1 = {\mathcal C_3}^{-1}\begin{pmatrix} 0 \\ 0\\ 1 \end{pmatrix}, \ \hat{\pmb \sigma}_3 = {\mathcal C_3}^{-1}\begin{pmatrix} 1 \\ -1\\ 0 \end{pmatrix}. \end{equation} Finally, the extension of this quantization formalism to $N'$-dimensional subspaces of the original $L^2(X, \mu) \simeq \mathbb{C}^N$ appears to be straightforward on a technical, if not interpretational, level. \section{Quantum processing of the unit interval} \subsection{Quantization with finite subfamilies of Haar wavelets} Further simple examples of quantization are provided when we deal with the unit interval $X = \lbrack 0,1 \rbrack$ of the real line and its associated Hilbert space $L^2 \lbrack 0,1 \rbrack$.
Let us start out by simply selecting the first two elements of the orthonormal Haar basis \cite{daube}, namely the characteristic function $\mathbf1 (x)$ of the unit interval and the Haar wavelet: \begin{equation}\label{haar} \phi_1(x) = \mathbf1 (x), \ \phi_2 (x) = \mathbf1 (2x) - \mathbf1 (2x-1). \end{equation} Then we have \begin{equation} \mathcal N(x) = \sum_{n=1}^2 \vert \phi_n(x) \vert^2 = 2 \ \ \mbox{\it a.e.}. \end{equation} The corresponding coherent states read as \begin{equation}\label{uics} | x \rangle = \frac{1}{\sqrt{2}} \left\lbrack \phi_1(x) ~|1 \rangle + \phi_2(x) ~|2 \rangle \right\rbrack. \end{equation} To any integrable function $f(x)$ on the interval there corresponds the linear operator $A_f$ on $\mathbb{R}^2$ or $\mathbb{C}^2$: \begin{equation}\label{aefui} \begin{split} A_f &= 2\int_0^1 dx\, f(x) | x \rangle \langle x | \\ & = \left\lbrack \int_0^1 dx\, f(x) \right\rbrack \left\lbrack |1\rangle \langle 1 | + |2\rangle \langle 2 | \right\rbrack + \left\lbrack \int_0^1 dx\, f(x)\phi_2(x) \right\rbrack \left\lbrack |1\rangle \langle 2 | + |2\rangle \langle 1 | \right\rbrack, \end{split} \end{equation} or, in matrix form with respect to the orthonormal basis (\ref{haar}), \begin{equation} A_f = \begin{pmatrix} \int_0^1 dx\, f(x) & \int_0^1 dx\, f(x)\phi_2(x) \\ \int_0^1 dx\, f(x)\phi_2(x) & \int_0^1 dx\, f(x) \end{pmatrix} . \end{equation} In particular, with the choice $f=\phi_1 $ we recover the identity whereas for $f=\phi_2 $, $A_{\phi_2} = \bigl( \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \bigr) = \sigma_1$, the first Pauli matrix. With the choice $f(x) = x^p$, $\mathbb{R}e \,p > -1 $, \begin{equation} A_{x^p} = \frac{1}{p+1} \begin{pmatrix} 1 & 2^{-p} - 1\\ 2^{-p} - 1 & 1 \end{pmatrix} . \end{equation} For an arbitrary coherent state $ | x_0 \rangle,\, x_0 \in \lbrack 0,1 \rbrack $, it is interesting to evaluate the average values (lower symbols) of $A_{x^p} $. This gives \begin{equation} \langle x_0 | A_{x^p} | x_0 \rangle = \left\{ \begin{array}{ll} \frac{2^{-p} }{p+1} & 0 \leq x_0 \leq \frac{1}{2}, \\ \frac{2 - 2^{-p} }{p+1} & \frac{1}{2} \leq x_0 \leq 1 , \end{array} \right. \end{equation} the two possible values being precisely the eigenvalues of the above matrix. Note the average values of the ``position'' operator: $\langle x_0 | A_{x} | x_0 \rangle =1/4$ if $0 \leq x_0 \leq \frac{1}{2}$ and $3/4$ if $\frac{1}{2} \leq x_0 \leq 1$. Clearly, like in the $N=2$ case of the previous section, all operators $A_f$ commute, since they are linear combinations of the identity matrix and the Pauli matrix $\sigma_1$. The procedure is easily generalized to higher dimensions. Let us add to the previous set $\{\phi_1, \phi_2\}$ other elements of the Haar basis, say up to ``scale'' $J$: \begin{multline} \lbrace\phi_1(x), \phi_2 (x), \phi_3 (x) = \sqrt 2 \phi_2(2x), \phi_4(x) = \sqrt 2 \phi_2(2x -1), \\ \cdots, \phi_s (x) =2^{j/2} \phi_2(2^j x - k), \phi_N (x) = 2^{J/2}\phi_2(2^J x - 2^J + 1) \rbrace, \end{multline} where, at given $j = 1, 2, \cdots, J$, the integer $k$ assumes its values in the range $ 0 \leq k \leq 2^j -1$. The total number of elements of this orthonormal system is $N = 2^{J+1}$. The expression (\ref{factor}) is now given by $\mathcal N(x) = 2^{J+1}$, and this clearly diverges in the limit $J \to \infty$.
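As a simple numerical sketch of this construction (our own illustration; the scale $J$, the quadrature grid, and the use of Python are arbitrary choices and not part of the construction), one can assemble the system above, build the corresponding \textit{position} operator $A_x$ from (\ref{oper}), and read off its lower symbols and eigenvalues:
\begin{verbatim}
import numpy as np

J = 2                                    # highest wavelet scale kept (a choice for this sketch)
M = 2 ** 14                              # quadrature points on [0,1]
x = (np.arange(M) + 0.5) / M
dx = 1.0 / M

def haar(t):
    """Mother Haar wavelet phi_2(t) = 1(2t) - 1(2t-1), supported on [0,1)."""
    return np.where((0 <= t) & (t < 0.5), 1.0, 0.0) - np.where((0.5 <= t) & (t < 1.0), 1.0, 0.0)

# orthonormal system: phi_1 = 1, plus 2^{j/2} phi_2(2^j x - k) for j = 0..J, 0 <= k < 2^j
Phi = [np.ones_like(x)]
for j in range(J + 1):
    for k in range(2 ** j):
        Phi.append(2 ** (j / 2) * haar(2 ** j * x - k))
Phi = np.array(Phi)                      # shape (N, M), with N = 2^{J+1}
N = len(Phi)
print(N == 2 ** (J + 1))
print(np.allclose((Phi ** 2).sum(axis=0), N))            # N(x) = 2^{J+1} a.e.

# position operator in this basis: (A_x)_{mn} = \int_0^1 t phi_m(t) phi_n(t) dt
A_x = (Phi * (x * dx)) @ Phi.T

# lower symbols <x0|A_x|x0>, which turn out to be the dyadic midpoints (2k+1)/2^{J+2}
kets = Phi / np.sqrt(N)                                   # coherent-state coefficients at each x0
lower = np.einsum('nm,nk,km->m', kets, A_x, kets)
expected = (2 * np.floor(x * 2 ** (J + 1)) + 1) / 2 ** (J + 2)
print(np.allclose(lower, expected, atol=1e-3))

# the same dyadic midpoints are the eigenvalues of A_x
print(np.allclose(np.sort(np.linalg.eigvalsh(A_x)), np.unique(expected), atol=1e-3))
\end{verbatim}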
It is then remarkable, if not expected, that the spectral values as well as the average values of the ``position'' operator are given by $\langle x_0 | A_{x} | x_0 \rangle =(2k+1)/2^{J+2}$ for $ k/2^{J+1} \leq x_0 \leq (k+1)/2^{J+1}$, where $ 0 \leq k \leq 2^{J+1} -1$. Our quantization scheme in the present case achieves a dyadic discretization of the localization in the unit interval. \subsection{A two-dimensional non-commutative quantization of the unit interval} Now we choose another orthonormal system, in the form of the first two elements of the trigonometric Fourier basis, \begin{equation}\label{trig} \phi_1(x) = \mathbf1 (x), \ \phi_2 (x) = \sqrt{2}\sin{2\pi x}. \end{equation} Then we have \begin{equation} \mathcal N(x) = \sum_{n=1}^2 \vert \phi_n(x) \vert^2 = 1 + 2 \sin^2{2\pi x}, \end{equation} and the corresponding coherent states read as \begin{equation}\label{uics2} | x \rangle = \frac{1}{\sqrt{1 + 2~ \sin^2{2\pi x}}} \left\lbrack |1 \rangle + \sqrt{2}~\sin{2\pi x} ~|2 \rangle \right\rbrack. \end{equation} To any integrable function $f(x)$ on the interval there corresponds the linear operator $A_f$ on $\mathbb{R}^2$ or $\mathbb{C}^2$ (in matrix form), \begin{equation} A_f = \begin{pmatrix} \int_0^1 dx\, f(x) & \sqrt2\int_0^1 dx\, f(x) \sin{2\pi x} \\ \sqrt2\int_0^1 dx\, f(x) \sin{2\pi x} & 2\int_0^1 dx\, f(x) \sin^2{2\pi x} \end{pmatrix} . \end{equation} As in the previous case, with the choice $f=\phi_1 $ we recover the identity whereas for $f=\phi_2 $, $A_{\phi_2} = \sigma_1$, the first Pauli matrix. We now have to deal with a non-commutative Jordan algebra of operators $A_f$, as in the $N=3$ real case of the previous section. It is generated by the identity matrix and the two real Pauli matrices $\sigma_1$ and $\sigma_3$. In this context, the \textit{position} operator is given by: \begin{equation*} A_x = \begin{pmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2} \pi} \\ - \frac{1}{\sqrt{2} \pi} & \frac{1}{2}\end{pmatrix} , \end{equation*} with eigenvalues $\frac{1}{2} \pm \frac{1}{\sqrt{2} \pi}$. Note its average values as a function of the coherent state parameter $x_0 \in \lbrack 0,1 \rbrack $: \begin{equation*} \langle x_0 |A_x | x_0 \rangle = \frac{1}{2} - \frac{2}{\pi} \frac{\sin{2\pi x_0}}{1 + 2~ \sin^2{2\pi x_0}} \end{equation*} In Fig.~\ref{ffirstfig} we give the curve of $\langle x_0 |A_x | x_0 \rangle$ as a function of $x_0$. It is interesting to compare with the two-dimensional Haar quantization presented in the previous subsection. \begin{figure} \caption{Average value $\langle x_0 |A_x | x_0 \rangle$ of the \textit{position} operator as a function of $x_0$.} \label{ffirstfig} \end{figure} \section{Conclusion} The examples we have given in this contribution are mainly of a pedagogical nature. Other examples, especially devoted to Euclidean and pseudo-Euclidean spheres, will be presented elsewhere, having in view possible connections with objects of noncommutative geometry (such as fuzzy spheres; see for instance \cite{frkr}). They show the extreme freedom we have in analyzing a set $X$ of \textit{data} or \textit{possibilities} just equipped with a measure by following a quantum-like procedure. The crucial step lies in the choice of a countable orthonormal subset in $ L^2(X, \mu)$ obeying (\ref{factor}). A $\mathbb{C}^N$ (or $l^2$ if $N = \infty$) unitary transform of this original subset would actually lead to the same specific quantization, and the latter could as well be obtained by using unitarily equivalent \textit{continuous} orthonormal distributions defined within the framework of some Gel'fand triplet.
Of course, further structure, such as a symplectic manifold structure combined with spectral constraints imposed on some specific observables, will considerably restrict that freedom and will hopefully lead to a unique solution, as Weyl quantization, deformation quantization, or geometric quantization are able to achieve in specific situations. Nevertheless, we believe that the generalization of Berezin quantization which has been described here, and which goes far beyond the context of Classical and Quantum Mechanics, will not only shed light on the specific nature of the latter, but will also help to solve some quantization problems in a simpler way. \end{document}
\begin{document} \begin{spacing}{1.3} \newtheorem{definition}{Definition}[] \newtheorem{question}{Question}[] \newtheorem{observation}{Observation}[] \newtheorem{conjecture}{Conjecture}[] \newtheorem{claim}{Claim}[] \newtheorem{lemma}{Lemma}[] \newtheorem{proposition}{Proposition}[] \newtheorem{theorem}{Theorem}[] \newcounter{example}[section] \newenvironment{example}[1][]{\refstepcounter{example}\par \noindent \textbf{Example~\theexample. #1} \rmfamily}{ } \title{Single-peaked domains with designer uncertainty \footnote{Guidance by Arunava Sen and Debasis Mishra is gratefully acknowledged. I am also thankful to Stephen Morris and Alex Wolitzky for insightful comments.}} \author{Aroon Narayanan \footnote{Department of Economics, MIT}} \date{} \maketitle \begin{abstract} This paper studies single-peaked domains where the designer is uncertain about the underlying alignment according to which the domain is single-peaked. The underlying alignment is common knowledge amongst agents, but preferences are private knowledge. Thus, the state of the world has both a public and private element, with the designer uninformed of both. I first posit a relevant solution concept called implementation in mixed information equilibria, which requires Nash implementation in the public information and dominant strategy implementation in the private information given the public information. I then identify necessary and sufficient conditions for social rules to be implementable. The characterization is used to identify unanimous and anonymous implementable social rules for various belief structures of the designer, which basically boils down to picking the right rules from the large class of median rules identified by Moulin (1980), and hence this result can be seen as identifying which median rules are robust to designer uncertainty.\\ \noindent Keywords: social choice function, single-peakedness, implementation, designer uncertainty \\ \noindent JEL Classification: D71 \end{abstract} \section{Introduction} Consider the following situation: a group of people, say voters, need to select an alternative, say a political party, amongst many that are available to them. An independent authority is established to determine what the selection rule should be i.e. what alternative to select based on the preferences of each person in the group. In many such situations, it is reasonable to assume that the agents, probably after much discussion amongst themselves, have a common concern, such as national defence. They would each have some ideal amount of national defense spend in mind, which would correspond to their ``peak", and they would find any amount farther away from this peak less attractive than amounts that are closer to it. This captures their preference over the alternatives. In evaluating the parties based on their position on how much to spend on national defense, the voters thus find themselves having ``single-peaked preferences" over them. This basically means that if a party proposes a spend closer to a voter's peak, the voter will like it more. First introduced by \cite{black}, single-peaked domains were studied extensively by \cite{Moulin1980}. Since then, the literature has studied them in the context of some properties of rules that seem quite natural: first, everybody should find it optimal to report their true preference regardless of what the others say they prefer, second, if everyone agrees on an alternative, then that should be chosen, and third, every agent should be ex-ante equal. 
These are the properties of strategy-proofness, unanimity, and anonymity, respectively. A nice class of rules - the median rules - satisfy these conditions, and in fact only they satisfy these conditions. Essentially such rules pick the median of the $n$ reported peaks and $n-1$ fixed ``phantom peaks", using an exogenous underlying alignment according to which the preferences of the agents are known to be single-peaked. Each median rule can be uniquely identified by these $n-1$ ``phantom peaks". \cite{Moulin1980} also showed that an even more general characterization of rules - which \cite{sprumont1995} terms min-max rules - can be obtained if one drops anonymity. However, in this analysis it was important to assume that the underlying alignment according to which everyone evaluates the alternatives is known to the authority. In many situations, this may not be reasonable since the authority may not be privy to such information. For example, when the legislature sets up a committee to decide funding for a program, it is quite likely that the committee members will arrive at a common concern after discussions, and then vote for the optimal funding based on that concern. But it is unlikely that the legislature would know the specific concern beforehand. It would then be prudent to set up a rule that accounts for this uncertainty. It may even be the case that priorities that produce such alignments amongst voters changes drastically over time. Witness the remarkable realignment in the United States Democratic Party's position on race, from being anti-abolitionist in the 19th century to passing the Voting Rights Act in the 20th century. It is reasonable to expect that long-lasting rules be designed keeping such shifting alignments in mind. Applications also extend to similar settings in auctions, regulation and a variety of other domains. The objective of this paper is to identify social rules that will be robust to the designer's uncertainty about the underlying ordering. In particular, we would like to relate this to Moulin's class of median rules. Given $m$ alternatives and $n$ players, his median class contains ${m+n-2}\choose{n-1}$ rules for each possible underlying alignment. Clearly a designer then faces a problem of plenty, and we would like our results to help her in identifying which of these will have this additional property i.e. the property of being robust to uncertainty about the underlying ordering. In order to achieve this objective, it is imperative to first formalize what this robustness notion should be. Note that the information of each agent can be split into two - a public state, which is the underlying alignment, and a private state, which is his own preference. The state of the world then is composed of an alignment that is in the support of the designer's belief and a profile of preferences that is single-peaked according to this alignment, and any SCF picks an alternative for each state. Given the mixed information nature of our setting, we posit a suitable solution concept for implementation of SCFs. With respect to the underlying alignment, which is common knowledge amongst the agents, we require Nash implementation. Then, given that the underlying alignment is being truthfully revealed, we require implementation in dominant strategies with respect to the private information i.e. their preferences. 
Of course, applying our solution concept directly to rules is cumbersome, and hence first we must obtain a characterization of this concept in terms of easy-to-check conditions on social rules. By the first requirement, implementable SCFs will be strategy-proof for states that share the same alignment. By the second, they will have to satisfy a condition that we call shared-monotonicity, which requires that across two states that share a preference profile, the SCF must choose the same outcome at that profile. This is because the lower contour sets of the agents match exactly across the two states when they hold such preferences, so Nash equilibria in one state remain Nash equilibria in the other. These conditions, with no veto power, also turn out to be sufficient. This characterization enables us to carry out our original objective - identify robust unanimous and anonymous SCFs. In fact, this objective had a greater role in inspiring the kind of solution concept that we use than may be apparent from the deductive narrative we have laid out so far. The axioms characterizing the solution are such that adding unanimity and anonymity implies that implementable SCFs, fixing an alignment from the set of all alignments that the designer believes plausible, must be Moulin's median rules. We can then check which median rules satisfy the additional conditions imposed by implementability for some interesting belief structures of the designer. This then enables us to cleanly answer some of the questions of robustness within Moulin's class that inspired us. In fact, this will form the bulk of our study. The most important of these is the case when the designer believes that the preferences may have been generated by any possible alignment - in this case, only the true median rule, i.e. the rule that picks the median of the reported peaks of the agents, survives the demanding notion of implementability. We also derive results for other interesting forms of designer uncertainty. We are not aware of any paper that studies settings in which the state of the world has both public and private elements. In that sense, our setting is novel. There are papers that look at settings which feature multiple single-peaked domains devoid of any asymmetry in information between the designer and the agents about the alignments, such as \cite{REFFGEN2015349}. Another related strand of literature studies single-peaked preferences on multi-dimensional domains, as in \cite{barb}, \cite{border}, and \cite{chich}. \section{Model} The environment is the tuple $(N, X, \mathcal{P}, \mu)$ where $N := \{1, \ldots , n\}$ is a set of $n$ agents, $X$ is a finite set of alternatives, $\mathcal{P}$ is the set of all strict alignments over $X$, and $\mu$ is a non-degenerate probability distribution over $\mathcal{P}$. We assume $n \geq 3$. The agents in this setting have single-peaked preferences according to some alignment in the support of $\mu$ \footnote{Note that any alignment and its exact reverse, for example $a \succ b \succ c$ and $c \succ b \succ a$, produce the same single-peaked domain, and hence are equivalent for our purposes. All claims pertain to this equivalence class.}, and this alignment is common knowledge amongst them. $\mu$ represents the designer's belief over which possible alignment could be underlying the agents' preferences. The preferences of agents are private knowledge, i.e. unknown to both other agents and the designer.
Throughout this study, we will attempt to derive results that do not depend heavily on the specific prior the designer has. This will be achieved by working with only the support of the prior, so that our results will be robust to small changes in the probability assigned to the alignments in the support of the prior. We assume wlog that the message space of each agent $i$ in any mechanism must be of the form $M_i = L_i \times \mathcal{K}$ where $L_i$ is allowed to be any arbitrary space, and $\mathcal{K} \subseteq \mathcal{P} \times \mathcal{P}$ contains elements of the form $(\succ, P_i)$ where $P_i$ is single-peaked according to $\succ$. \begin{definition} A \textbf{mechanism} is the tuple $(M,g)$ where $M = (M_i)_{i \in N}$ is the message space and $g:M \to X$ assigns an outcome to each message profile. \end{definition} Let $\Theta$ denote the set of all states. Then, for any state $\theta \in \Theta$, we can represent it as $\theta = (\succ^{\theta}, (P^{\theta}_{i})_{i \in N})$, where $\succ^{\theta}$ is an alignment in the support of $\mu$ and represents the public state while $P^{\theta}_{i}$ represents the private state of agent $i$. The tuple $(P^{\theta}_{i})_{i \in N}$ identifies a profile of preferences that is single-peaked according to the public state. \begin{definition} A \textbf{social choice function} $f:\Theta \to X$ assigns an alternative to each state $\theta$. \end{definition} Finally, we formally define the solution concept that we set forth in the introduction. The idea is simple and intuitive - we require Nash implementation in the public component and dominant strategy implementation in the private component of the state given truthful reporting of the public component. Essentially, Nash implementation in the public component ensures nobody has an incentive to unilaterally misreport the common ordering. Then, given the common ordering is being reported truthfully, dominant strategy implementation ensures that the private preferences are also reported truthfully. \begin{definition} A mechanism $(M,g)$ implements a social choice function $f$ in \textbf{mixed information equilibria} if for every state $\theta = (\succ^{\theta}, (P^{\theta}_{i})_{i \in N})$ there exists a message $m^* = (l^{*}_i, \succ^{\theta}, (P^{\theta}_{i})_{i \in N})$ such that $g(m^*) = f(\theta)$ and we have: \begin{enumerate} \item Nash implementation in public information: \begin{itemize} \item for all $i$, $$g(m^{*}_{i}, (l_{-i}, \succ^{\theta}, P^{\theta}_{-i})) \ R^{\theta}_i \ g((l_{i}, \succ^{'}, P^{'}_{i}), (l_{-i}, \succ^{\theta}, P^{'}_{-i}))$$ for all $(l_i)_{i \in N}$, for all $(\succ^{'},P^{'}_{i}) \in \mathcal{K}$ \item if $\Bar{m}$ is a Nash equilibrium and $\Bar{m}_i = (\Bar{l}_i, \Bar{\succ}_i, P^{\theta}_{i})$ for some $(\Bar{l}_i, \Bar{\succ}_i)$, then $g(\Bar{m}) = g(m^*)$ \end{itemize} \item Dominant strategy implementation in private information given truthful public state reporting: for all $i$, $$g(m^{*}_{i}, (l_{-i}, \succ^{\theta}, P_{-i}^{'})) \ R^{\theta}_i \ g((l_{i}, \succ^{\theta}, P_{i}^{'}), (l_{-i}, \succ^{\theta}, P_{-i}^{'}))$$ for all $(l_i)_{i \in N}$, for all $P_{i}^{'}$, for all $P_{-i}^{'}$, with strict preference for some $(l_{-i}, \succ^{\theta}, P_{-i}^{'})$ \end{enumerate} A social choice function $f$ is said to be \textbf{implementable} in mixed information equilibria if there exists a mechanism that implements it. 
\end{definition} To further expand on this definition, the first condition of Nash implementation requires that agents not be able to unilaterally deviate along both private and public dimensions while the second ensures that the equilibrium message is the unique Nash equilibrium with respect to the public state. This ensures that the public state is conveyed accurately to the designer. Then, the second condition says that given this public state, we must have dominant strategy implementation in the private state. \section{Implementable choice functions} The primary motive of this study is to understand the nature of choice functions which can be implemented by the designer in this context. Clearly the definition above of implementability is onerous to check in practice. The obvious first step then would be to arrive at a characterization in terms of some properties that can be directly verified for choice functions, and this is indeed the step we take. Then, we can apply these properties directly to the choice functions to identify which of them satisfy these properties. Our first result will be to identify necessary and sufficient conditions for SCFs to be implementable in mixed information equilibria. In order to state the result, a few notions will be required. Denote by $f_{\succ}$ the SCF $f$ restricted to $\{\theta = (\succ, P)\}$ where $P$ is allowed to be any profile of preferences that is single-peaked according to $\succ$. \begin{definition} The SCF $f_{\succ}$ is \textbf{strategy-proof} if for every preference $P_i$ that is single-peaked according to $\succ$, either $f_{\succ}(P_i,P_{-i})$ $P_i$ $f_{\succ}(P_i^{'}, P_{-i})$ or $f_{\succ}(P_i,P_{-i}) = f_{\succ}(P_i^{'}, P_{-i})$ for all deviating preference reports $P_i^{'}$ and for all preferences of the other agents $(P_{-i})$. \end{definition} \begin{definition} The SCF $f$ is \textbf{shared-monotonic} if for all pairs of alignments $\succ, \succ^{'}$ such that there exists a profile of preferences $P$ that is single-peaked according to both $\succ$ and $\succ^{'}$, $f(\succ, P) = f(\succ^{'}, P)$. \end{definition} \begin{definition} The SCF $f$ satisfies \textbf{no veto power (NVP)} if for all $\theta = (\succ, P)$ such that $P_i(1) = a$ for all but one agent, then $f(\theta) = a$. \end{definition} Before we present the theorem, let us try to intuitively understand why these conditions are relevant to the notion of implementability. Consider any implementable SCF. By the dominant strategy requirement of implementation, we must have strategy-proofness of $f_\succ$ for all $\succ$ in the support of the designer's belief. Hence the necessity of strategy-proofness is obvious. Moreover, if a message is a Nash equilibrium at a state $\theta$, it will continue to be a Nash equilibrium for states which share the profile of preferences in $\theta$, since the lower contour sets coincide exactly for each player across the states. So if two different alignments generate some common set of single-peaked preferences, whenever these preferences are held by agents, the outcomes must also be exactly the same. By the requirement of Nash implementation in the public information, we must then have shared-monotonicity. It turns out that these two conditions, along with NVP, are also sufficient for any SCF to be implementable. We use a fairly standard mechanism to show this. Agents are asked to report the state i.e. an alignment and a preference single-peaked according to that alignment, and also their preferred alternative and an integer. 
If at least $n-1$ agents agree on the alignment, then that alignment is fixed as the public state. If not, the agent reporting the highest integer gets his preferred alternative. \begin{theorem}\label{neccsuffthm} If a social choice function $f$ is implementable in mixed information equilibria, then $f_{\succ}$ is strategy-proof for all $\succ$ in the support of $\mu$, and $f$ is shared-monotonic. Conversely, if a social choice function $f$ is such that $f_{\succ}$ is strategy-proof for all $\succ$ in the support of $\mu$, $f$ is shared-monotonic, and $f$ satisfies no veto power, then it is implementable in mixed information equilibria. \end{theorem} \section{Implementability of unanimous and anonymous rules} Now, after identifying the right solution concept and the right set of tools to check for it, we turn towards identifying what implementable rules actually look like. Recall that our motivation was to apply robustness requirements to Moulin (1980), looking at unanimous and anonymous rules. This classic paper led to the characterization that a rule is strategy-proof, unanimous, and anonymous if and only if it is a generalized median rule. These rules have an intuitive and appealing visual representation. First, place the alternatives on a line left to right according to the underlying alignment. Then, place $n-1$ ``phantom tops" on the alternatives on the line - each generalized median rule is associated with a unique placement of these phantom tops. Given any profile of preferences of the agents, identify each agent's most preferred alternative, or ``top", and mark them on the line. The outcome of the generalized median rule is the median of these $n$ agent tops and the $n-1$ phantom tops. Part of the appeal of these properties lies in their normative nicety. If all the agents like the same alternative, it would be quite reasonable for a social rule to assign them that alternative. Equality amongst the agents as well is a fairly standard requirement. The intuitive nature of the generalized median rules also adds to the appeal of these properties, since in essence they are tied to these rules by the characterization. However, importantly, the class of generalized median rules is vast. In fact, if we have $m$ alternatives and $n$ agents, his result gives us ${m+n-2}\choose{n-1}$ rules to choose from. Our results will be able to identify which of these will be robust to designer uncertainty in the underlying alignment. At this point, we should be more specific about what we mean by designer uncertainty. We assume that the designer has some belief over the possible alignments according to which the domain of agents' preferences is single-peaked. The result in the previous section identifies general conditions on implementable SCFs. The strength of these conditions, of course, is dependent on the kind of belief that the designer has. Varying her belief will vary the type of SCFs that will be implementable. In order to be robust to changes in belief probabilities, we work with the support of the belief i.e. the designer only uses information about the alignments that she places a positive probability on, and not the specific probability that she places on them. It will also be useful to state Moulin's result here, since we will draw on it later. \begin{theorem}[Moulin (1980)] In a single-peaked domain, a social choice function is strategy-proof, anonymous and unanimous if and only if it is a generalized median rule.
\end{theorem} A generalized median rule specifies $n-1$ ``phantom" voters who each vote for a given alternative regardless of the preferences of the agents. The rule chooses the median of the $n$ most-preferred alternatives of the agents and these $n-1$ phantom votes. An example is shown in Figure \ref{fig:MoulMed} - note that $P_1$ and $P_2$ are the top preferences of the agents and $F_1$ is the fixed phantom vote that identifies the rule. \begin{figure} \caption{Moulin's generalized median SCF.} \label{fig:MoulMed} \end{figure} For us, the following definitions of unanimity and anonymity will be applicable. \begin{definition} Let $P(1)$ identify the most preferred alternative in $P$, also called its peak. The SCF $f$ is \textbf{unanimous} if for all preference profiles $(P_i)$ such that $P_i(1) = a$ for some $a \in X$ and for all $i \in N$, we have $f(\succ, P) = a$. \end{definition} \begin{definition} The SCF $f$ is \textbf{anonymous} if for all preference profiles $(P_i)$ and for all bijective functions $\sigma:N \to N$, we have $f(\succ,(P_i)) = f(\succ,(P_{\sigma(i)}))$. \end{definition} Since implementable SCFs must be strategy-proof for each $\succ$, it is clear that adding unanimity and anonymity implies that, for an implementable SCF, each $f_{\succ}$ must be some median rule. A slightly more subtle point relates to the property of no veto power. Consider what happens when there are no phantoms of a median rule at one of the two extreme ends. Then, one can always construct a profile where a single agent would indeed have a veto - for example with all but one agent having their top preference at the end without a phantom. At the same time, having at least one phantom on both ends ensures that there is no veto at any profile. \begin{observation} If an anonymous and unanimous SCF $f$ is implementable, then $f_{\succ}$ must be a median rule for all alignments $\succ$ in the support of $\mu$. It satisfies no veto power if and only if each $f_{\succ}$ places at least one phantom on the extreme ends of $\succ$. We call such SCFs \textbf{no veto projected median SCFs (NVPMS)}. \end{observation} \begin{figure} \caption{An NVPMS. Note that the three alignments that the designer believes possible are $acb$, $abc$, and $bac$. The green arrows represent the positions of the phantoms.} \label{fig:NVPMS} \end{figure} We present five results here, progressing in a natural way over possible beliefs of the designer. First, suppose the designer places positive probabilities over exactly two alignments, each of which is the reverse of the other. We may think of this as a good starting point, since the single-peaked domain produced by each alignment is exactly the same. \begin{observation} Let $\hat{\succ}^{-1}$ denote the alignment where the ordering of alternatives of $\hat{\succ}$ is exactly reversed. If $supp(\mu) = \{\hat{\succ}, \hat{\succ}^{-1} \}$ for some $\hat{\succ} \in \mathcal{P}$, then any NVPMS with $f_{\hat{\succ}} = f_{\hat{\succ}^{-1}}$ is implementable. \end{observation} When the designer is dealing with such a belief, any report of preference profiles could have been produced by either of the alignments. Shared-monotonicity then kicks in with full force - it must be the case that the SCFs chosen for the two alignments are exactly the same. This leads us to ask whether there might be cases where shared-monotonicity has no bite at all.
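The observation above is easy to check by direct enumeration. The following small Python sketch (our own illustration; the alternatives and alignments are arbitrary examples, not part of the formal argument) lists the single-peaked domain of an alignment and confirms that an alignment and its reverse generate exactly the same domain, while a genuinely different alignment does not:
\begin{verbatim}
from itertools import permutations

def is_single_peaked(pref, alignment):
    """pref: tuple of alternatives, most preferred first; alignment: tuple, left to right."""
    pos = {a: i for i, a in enumerate(alignment)}
    rank = {a: i for i, a in enumerate(pref)}          # smaller rank = more preferred
    peak = pos[pref[0]]
    for a in alignment:
        for b in alignment:
            between = min(peak, pos[a]) < pos[b] < max(peak, pos[a])
            if between and rank[b] > rank[a]:          # b lies between the peak and a, yet is ranked below a
                return False
    return True

def domain(alignment):
    """All strict preferences over the alternatives that are single-peaked w.r.t. alignment."""
    return {p for p in permutations(alignment) if is_single_peaked(p, alignment)}

align = ('a', 'b', 'c', 'd')
D = domain(align)
print(len(D) == 2 ** (len(align) - 1))                 # a single-peaked domain has 2^{m-1} orders
print(domain(align[::-1]) == D)                        # an alignment and its reverse give the same domain
print(domain(('b', 'a', 'c', 'd')) != D)               # a genuinely different alignment does not
\end{verbatim}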
Indeed, whenever the two sets of single-peaked preferences for two alignments are disjoint, shared-monotonicity is satisfied vacuously, and we have the following result. \begin{proposition}\label{delta} Let $\mathcal{D}_{\succ}$ be the set of all preferences that are single-peaked according to $\succ$. If for all $\succ, \succ^{'} \in supp(\mu)$, $\mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}} = \emptyset$, then any NVPMS is implementable. \end{proposition} This is a very permissive result, since it gives a free hand to the designer to choose any median rule for each alignment, as long as it places at least one phantom on both ends of each alignment. We would quite naturally be interested to know the circumstances under which the designer would be much more constrained than this in her choices. Consider then the case where the designer is uncertain entirely about the underlying alignment, that is, she places a positive probability over every possible alignment in $\mathcal{P}$. This would be the case when the designer wants her mechanism to be robust to any possible underlying alignment, for example when designing a voting system where the voters may end up having any common concern, or even any change in the alignment over time. The result for such a belief structure requires a few notations, which we present before the theorem. \begin{definition} Given $|X| = 3$, we call an SCF a \textbf{symmetric order-statistic SCF} if for any alignment $\succ \ \in \{abc, bca, cab\}$ where $a,b,c \in X$, $f_{\succ}$ places $k$ phantoms on the leftmost alternative and $(n-1)-k$ on the rightmost alternative, with $1 \leq k \leq (n-2)$. Note that in alignment $abc$, $a$ is the leftmost alternative and $c$ is the rightmost alternative. \end{definition} \begin{figure} \caption{A symmetric order-statistic SCF. Note that for each alignment, the phantoms are at the ends and in the same pattern across alignments.} \label{fig:OrdStat} \end{figure} \begin{definition} Given $n$ is odd, we call an SCF a \textbf{true median SCF} if $\forall \succ \ \in supp(\mu)$, $f_{\succ}$ chooses the median of the reported peaks of the agents. \end{definition} \begin{figure} \caption{A true median SCF. Note that all the phantoms must be equally distributed at the two ends.} \label{fig:TMM} \end{figure} \begin{theorem}\label{FullSupp} Suppose $supp(\mu) = \mathcal{P}$ and $|X| \geq 3$. Then \begin{enumerate} \item If $|X| = 3$, then an SCF is implementable if and only if it is a symmetric order-statistic mechanism. \item If $n$ is even and $|X| > 3$, then there are no implementable SCFs. \item If $n$ is odd and $|X| > 3$, then an SCF is implementable if and only if it is the true median mechanism. \end{enumerate} \end{theorem} The surprising thing about this result is how it picks a handful of median rules from the vast class identified by Moulin. From ${m+n-2}\choose{n-1}$ possible rules, these have been whittled down to just $1$ when we have $m>3$ and $n$ is odd. What this means is that if the designer actually has some (possibly very small) non-zero belief over all possible alignments, she should choose the true median so that she can have this additional bulwark of implementability in mixed information equilibria. This result (and others in this study) can thus also be seen as identifying special mechanisms within the class of median rules, so that designers can be assisted in choosing from within it.
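To make the surviving rules concrete, here is a small Python sketch (our own illustration, not drawn from Moulin (1980); the alignment, reports, and phantom placements are arbitrary examples) of a generalized median rule parametrized by its phantom tops; the true median SCF is the special case in which the $n-1$ phantoms are split equally between the two extreme alternatives:
\begin{verbatim}
def generalized_median(alignment, peaks, phantoms):
    """A generalized median rule (sketch). alignment: tuple of alternatives, left to right;
    peaks: the n reported peaks; phantoms: the n-1 fixed phantom tops, as alternatives."""
    pos = {a: i for i, a in enumerate(alignment)}
    votes = sorted(pos[p] for p in list(peaks) + list(phantoms))
    return alignment[votes[len(votes) // 2]]        # median of the 2n-1 positions

align = ('a', 'b', 'c', 'd', 'e')
peaks = ('b', 'e', 'c')                             # n = 3 agents

# the true median SCF: (n-1)/2 phantoms at each end, so the agents' median peak wins
print(generalized_median(align, peaks, ('a', 'e')))              # 'c'

# another member of the class: both phantoms at the left end
print(generalized_median(align, peaks, ('a', 'a')))              # 'b'

# unanimity holds for any placement of the phantoms
print(generalized_median(align, ('d', 'd', 'd'), ('a', 'a')) == 'd')
\end{verbatim}
Placing both phantoms at the left end, as in the second call, yields the rule that always selects the leftmost reported peak, which illustrates how different phantom placements single out different members of the same class.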
Let us now try to address what happens when we do not have the full support, but there are still alignments with common preferences in their single-peaked domains. Because a general characterization is bound to be messy, we consider an important special case. \begin{definition} Let $T_{\succ, \succ^{'}} = \{x \in X \ | \ x = P(1) $ for some $ P \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}} \}$. We say that the belief $\mu$ has \textbf{constant shared peaks} if $T_{\succ, \succ^{'}} = T$ for all $\succ, \succ^{'} \in supp(\mu)$, where $T \subseteq X$ is a fixed subset of the set of alternatives. \end{definition} \begin{figure} \caption{A set of alignments that have constant shared peaks. All preferences that are shared across these alignments have peaks on the red colored alternatives.} \label{fig:SP} \end{figure} In this case, as we show in a lemma that we use to prove our theorems, the shared peaks must be a contiguous subordering of each alignment, so one can imagine these tops as being lumped together in the same order, and then the other alternatives moving around to the left and right of this lump to generate the entire support. For such beliefs, there is a fairly large possibility result. \begin{proposition}\label{SameShare} Suppose $\mu$ has constant shared peaks. Then $T$ is a contiguous subordering of each alignment in $supp(\mu)$. Denote this subordering by $R$. An SCF is implementable if it is a projected median mechanism that, given any $\succ, \succ^{'} \ \in supp(\mu)$, places \begin{enumerate} \item the same number of phantoms before and after $R$ for both $\succ$ and $\succ^{'}$ \item at least one phantom on either end of both $\succ$ and $\succ^{'}$ \item the same pattern of phantoms on $R$ for both $\succ$ and $\succ^{'}$ \end{enumerate} \end{proposition} Proposition \ref{SameShare} admits an interesting corollary. Suppose $|supp(\mu)| = 2$. In this case, regardless of which alignments are part of $supp(\mu)$, either the two alignments share a given set of preferences or they do not. Thus, the premise of the proposition is met, and we can identify implementable SCFs. $|supp(\mu)| = 2$ includes practically important situations such as when the designer is uncertain just about the relative alignment of two adjacent alternatives. Finally, we show that if we impose a fairly weak additional assumption, we again get a very permissive result. Dutta and Sen (2012) \cite{dutta2012nash} introduced the concept of a ``partially honest" agent, who strictly prefers to reveal the true state as long as he is not worse off. \begin{definition} An agent $i$ is partially honest if, given that $\hat{\succ}$ is the true underlying alignment and $m,m^{'}$ are such that $m_{i} = (\_,\hat{\succ},\_)$ and it is not the case that $g(m^{'}) P_i g(m)$, he strictly prefers to send $m_{i}$ rather than $m^{'}_{i}$. \end{definition} They show that having just one partially honest agent can make it possible to Nash implement SCFs that satisfy only No Veto Power, doing away with the otherwise necessary Maskin Monotonicity requirement. Unsurprisingly, this assumption is strong in our setting as well - in the presence of just one partially honest agent, we are able to implement a very large class of SCFs, regardless of the belief structure. \begin{proposition}\label{PartHon} Let $|supp(\mu)| \geq 3$. Suppose that at least one agent is partially honest. Then, any NVPMS is implementable.
\end{proposition} \section{Discussion} The primary innovation of this paper is the study of a setting in which the state of the world has public and private components, with the designer being uninformed about both components and agents informed of the public state and their own private state. Settings like this merit practical interest, since not all information available to an agent can be neatly bucketed into either something only privately observed or only publicly observed. The solution concept that we use is relevant for our specific case, but there may be others or generalizations of the same which could lead to interesting results. We focus on the single peaked domain since our interest is in answering questions related to robustness when the designer is not certain about the underlying alignment of the domain. Our general result on implementation in mixed information equilibria, which involves Nash implementation in the public component and dominant strategy implementation in the private component, identifies necessary and sufficient conditions on SCFs for them to be implementable. This is in and of itself useful in practice, since these conditions can be directly checked and it can be verified whether the SCF can be implemented. It is for anonymous and unanimous rules that we go further and identify implementable SCFs. Fixing the public part of the state i.e. the underlying alignment, implementability necessitates that the SCF behave as a Moulin generalized median mechanism. Implementability could also impose further constraints on which median mechanisms these can be, depending on which alignments are possible. For some sets of alignments, there are no additional constraints imposed, while if the set of alignments is the set of all possible alignments, then only one median mechanism is admissible for each alignment. This is how we answer the robustness questions - depending on which alignments the designer thinks are possible, she can identify which SCF she should choose from all of Moulin's median mechanisms. A more general question to tackle in this setting would be to identify the structure of unanimous implementable rules. As alluded to earlier, it is known that unanimous strategy-proof rules must be ``min-max" rules, and the same questions of robustness to designer uncertainty can be asked about these rules. Here our choice was to investigate the generalized median rules since they have an intuitive visualization and also because anonymity is a fairly standard normative requirement. The literature post \cite{Moulin1980} has also focused a great deal on median rules for similar reasons. \begin{appendices} \section{Proofs} \begin{proof}[\textbf{Proof of Theorem \ref{neccsuffthm}}] Let $f$ be an implementable SCF, and let $(M,g)$ be the mechanism that implements it. Let $\succ$ be an alignment in the support of $\mu$. Fix an agent $i$. Let $P_i$, $P_{i}^{'}$ and $P_{-i}^{'}$ be arbitrary preferences single peaked according to $\succ$. Then, there exists an $m^*$ such that $m^{*}_{i} = (l^{*}_i, \succ, P_{i})$, $m^{*}_{-i} = (l^{*}_{-i}, \succ, P^{'}_{-i})$, and $g(m*) = f(\succ, P_i, P_{-i}^{'})$. Also, there exists an $m^{**}$ such that $m^{**}_{i} = (l^{**}_i, \succ, P_{i}^{'})$, $m^{**}_{-i} = (l^{**}_{-i}, \succ, P^{'}_{-i})$, and $g(m^{**}) = f(\succ, P_{i}^{'}, P_{-i}^{'})$. 
By dominant strategy implementation in private information, $g(m^{*}_{i}, (l^{*}_{-i}, \succ, P_{-i}^{'})) \ R_i \ g((l^{**}_{i}, \succ, P_{i}^{'}), (l^{**}_{-i}, \succ, P_{-i}^{'}))$, which implies $g(m^*) R_i g(m^{**})$, and hence we have $f(\succ, P_i, P_{-i}^{'}) R_i f(\succ, P_{i}^{'}, P_{-i}^{'})$. Since $P_{i}^{'}$ and $P_{-i}^{'}$ were arbitrary, this means that $f_{\succ}$ is strategy-proof. Let $(\succ, P)$ and $(\succ^{'}, P)$ be two states that share preferences. Let $m^*$, with $m_{i}^* = (l^{*}_i, \succ, P_{i})$, be the implementing message profile in state $(\succ, P)$ and $m^{**}$ that in $(\succ^{'}, P)$. We must have that deviating with respect to the underlying alignment leads to some outcome in the lower contour set for each agent $i$. Then, at state $(\succ^{'}, P)$ as well, deviations from $m^*$ will lead to some outcome in the lower contour set for each agent $i$, and hence $m^*$ will be a Nash equilibrium at $(\succ^{'}, P)$. By Nash implementation in public information, $f(\succ, P) = g(m^*) = g(m^{**}) = f(\succ^{'}, P)$. For the other direction, let $f$ be such that $f_{\succ}$ is strategy-proof for all $\succ$ in the support of $\mu$, $f$ is shared-monotonic, and $f$ satisfies no veto power. Consider the following mechanism: \begin{itemize} \item $M_i = \mathbb{N} \times X \times \mathcal{K}$ for all $i$, where $\mathbb{N}$ is the set of natural numbers. \item if messages are of the form $m_i = (l_i, x_i, \succ, P_i)$ for all except at most one agent i.e. at least $n-1$ agents agree on the alignment, $g(m) = f(\succ, P)$. \item if not, select the agent with the lowest index amongst those sending the highest natural number, say $j$. $g(m) = x_j$. \end{itemize} To verify dominant strategy implementation in private information, note that given $\succ$, truth-telling is a dominant strategy since $f_{\succ}$ is strategy-proof. Unilateral deviations along $\mathbb{N} \times X$ do not change the outcome. To verify Nash implementation in public information, note first that unilateral deviations for the alignment do not change the outcome. Second, if all but one agent $i$ report the same alignment but $i$ reports a different alignment, we can have a Nash equilibrium in the public information only if the outcome chosen is the best alternative for all agents but $i$. But then, by NVP, this outcome coincides with $f(\succ, P)$. Finally, if all agents report the same alignment $\succ^{'} \neq \succ$, by shared-monotonicity the outcome is the same as $f(\succ, P)$. \end{proof} \begin{lemma}[Consistency]\label{ShareLem} Suppose $\succ$ and $\succ^{'}$ are such that $\mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}} \neq \emptyset$. Let $T_{\succ, \succ^{'}} = \{x \in X \ | \ x = P(1) $ for some $ P \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}} \}$. Then the following hold: \begin{enumerate} \item $|T_{\succ, \succ^{'}}| \geq 2$. \item Let $T_{\succ, \succ^{'}} = \{x_1, x_2,...,x_{|T_{\succ, \succ^{'}}|}\}$. Then $x_1, x_2,...,x_{|T_{\succ, \succ^{'}}|}$ are adjacent in both $\succ$ and $\succ^{'}$, and they are in the same order in both $\succ$ and $\succ^{'}$ \footnote{We reiterate that an alignment and its exact reverse are equivalent for our purposes, so this statement and the rest in this paper should be considered up to these equivalence classes.}. \end{enumerate} \end{lemma} \begin{proof}[\textbf{Proof of Lemma \ref{ShareLem}}] \begin{enumerate} \item Note that $T_{\succ, \succ^{'}} \neq \emptyset$. Let $x \in T_{\succ, \succ^{'}}$. Then, $\exists \ P \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$ such that $x = P(1)$. Let $y = P(2)$.
Consider a preference $P^{'}$ such that $P^{'}(1) = y$ and $P^{'}(2) = x$ and $P^{'}(i) = P(i) \ \forall \ i > 2$. Since this change continues to make the new ordering single-peaked with respect to whatever $P$ was single-peaked according to, we must have $P^{'} \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$. Since this implies $y \in T_{\succ, \succ^{'}}$, we are done. \item Let $x, y \in T_{\succ, \succ^{'}}$ such that in $\succ$, $t$ lies between $x$ and $y$ for all $t \in T_{\succ, \succ^{'}}$. Now, let $z \in X$ be in between $x$ and $y$ in $\succ$. Let us check where $z$ will be in $\succ^{'}$. Suppose it is not in between the two. Without loss of generality, let it be to the left of both $x$ and $y$ in $\succ^{'}$. Since $y \in T_{\succ, \succ^{'}}$, $\exists \ P \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$ such that $y = P(1)$. By single peakedness, $zPx$ in $\succ$ but $xPz$ in $\succ^{'}$, which is a contradiction. Thus, $z$ must be in between $x$ and $y$ in $\succ^{'}$ as well. \\ It is straightforward to show by a similar argument that if both $z_1$ and $z_2$ are in between $x$ and $y$ in $\succ$, then the four must be in the same order in $\succ^{'}$ as well. Thus, the order between $x$ and $y$ is preserved across $\succ$ and $\succ^{'}$. \\ Now, let $z$ be next to $x$ in $\succ$ and between $x$ and $y$. Since $x \in T_{\succ, \succ^{'}}$, $\exists \ P^{'} \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$ such that $x = P^{'}(1)$. Consider a preference $P^{''}$ such that $P^{''}(1) = z$ and $P^{''}(2) = x$ and $P^{''}(i) = P^{'}(i) \ \forall \ i > 2$. We must have $P^{''} \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$. In this manner, $z \in T_{\succ, \succ^{'}}$ for all $z$ in between $x$ and $y$, which proves our hypothesis. \end{enumerate} \end{proof} \begin{lemma}\label{2Lemm}[Symmetry] Suppose $\succ$ and $\succ^{'}$ are such that $\mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}} \neq \emptyset$. Let $x,y \in T_{\succ, \succ^{'}}$ be such that $x$ and $y$ are adjacent to each other, with $x$ before $y$. Then, $f_{\succ}$ and $f_{\succ^{'}}$ must have the same number of phantoms with tops on or before $x$. \end{lemma} \begin{proof}[\textbf{Proof of Lemma \ref{2Lemm}}] Let $P_x \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$ denote the preference with $x$ at top and $P_y \in \mathcal{D}_{\succ} \cap \mathcal{D}_{\succ^{'}}$ denote the preference with $y$ at top. For any $k,l \in \{0,1,...,n\}$ such that $k+l=n$, we must have that reports of $k$ number of $P_x$ and $l$ number of $P_y$ preferences must lead to the same outcome across $\succ$ and $\succ^{'}$. If the number of phantoms before $x$ were not the same across the two alignments, this would not be possible. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{FullSupp}}] Since Lemma \ref{2Lemm} establishes that Symmetry is necessary, we first show that everything but the order statistic mechanisms with odd agents will violate Symmetry and, for $|A| > 3$, we show that all except the true median will violate it. Then we show that for these surviving mechanisms, NVP is necessary. Showing that the surviving mechanisms satisfy Symmetry and NVP completes the proof. Our first claim is that phantoms cannot be placed on the interior for any alignment, where interior of an alignment is anything not at the two ends. Suppose the median SCF chosen for some alignment, say $\succ$, contains phantoms in the interior. 
Select one alternative in the interior which has a phantom on it, and call it $y$. Let the alternative to the left of it be $x$ and the one to its right be $z$. Suppose there are $k$ phantoms on or before $x$, and $l$ phantoms on or before $y$. By assumption, $k < l$. Consider the alignment, say $\succ^{'}$, with $z$ pulled to the left of $x$, but everything else in the same position. Since we have full support, this alignment is in the support of $\mu$. Clearly, $\succ$ and $\succ^{'}$ share preferences with tops at $x$ and $y$. Thus, there must continue to be $k$ phantoms before $x$ in $\succ^{'}$. Consider the alignment, say $\succ^{''}$, with $y$ and $z$ interchanged in $\succ$, with all other alternatives at the same position. $\succ$ and $\succ^{''}$ share preferences with tops at $y$ and $z$, so after $y$ in $\succ^{''}$, we must have $l$ phantoms. Now, note that $\succ^{'}$ and $\succ^{''}$ share preferences with tops at $x$ and $z$. Before $x$ in $\succ^{'}$, we have $k$ phantoms but after $x$ in $\succ^{''}$ we have at least $l$ (since $y$ is to the right of $x$), which is a contradiction. Second, we claim that the number of phantoms must be the same on both ends for all alignments when $|A| > 3$. Suppose the median SCF chosen for some alignment, say $\succ$, contains $k$ phantoms at the left end. Let $a$, $b$, $c$, $d$ be the first four alternatives in $\succ$, in that order. Consider the alignment, say $\succ^{'}$, with $a$, $d$, $b$, $c$ as the first four alternatives, in that order, and the rest the same as $\succ$. Clearly, $\succ$ and $\succ^{'}$ share preferences with tops at $b$ and $c$, so $\succ^{'}$ must have $k$ phantoms on $a$. Consider the alignment, say $\succ^{''}$, with $c$, $b$, $a$, $d$ as the first four alternatives, in that order, and the rest the same as $\succ$. Clearly, $\succ^{'}$ and $\succ^{''}$ share preferences with tops at $a$ and $d$, so $\succ^{''}$ must have $k$ phantoms on $c$. Now, note that $\succ^{'}$ and $\succ^{''}$ share preferences with tops at $a$ and $b$, so in $\succ$, there must be $k$ phantoms on the right end. Thus, we have ruled out everything but the true-median. Now, we note that No Veto Power (NVP) is also a necessary condition for the surviving mechanisms under full support. In the class of order statistics mechanisms, a mechanism does not satisfy NVP if for some $\succ$, the SCF chosen for $\succ$ has all phantoms placed on one or the other side. Suppose such a $\succ$ exists and without loss of generality, let all phantoms be on the left end. Let $a,b,c$ be the first three alternatives in $\succ$. Consider a profile of preferences with $n-1$ peaks at $a$ and $b$, and $1$ peak at $c$ (call this agent $i$). Consider a $\succ^{'}$ where $c,a,b$ are the first three alternatives and everything else is the same as $\succ$. By Symmetry, $\succ^{'}$ must also have all the peaks at $c$. Then, by misreporting to a preference $\succ^{'}$ with top at $c$, agent $i$ can profitably deviate. We close by noting that the true median and the symmetric order statistic mechanisms satisfy both NVP and Symmetry. Consider the true median mechanism. For any pair of alternatives, Symmetry is satisfied by definition since the number of phantoms is the same on both ends of every alignment. For any profile of preferences with $n-1$ them on the same alternative, this alternative must be chosen since there will be at least one phantom on each end. Consider the symmetric order statistic mechanism. 
For any pair $a,b$, they are shared across exactly two alignments, and in the same order. Thus, symmetry is guaranteed. NVP follows by the fact that they have at least one phantom on either side. \end{proof} \begin{proof}[\textbf{Proof of Proposition \ref{SameShare}}] This follows directly from Lemmas \ref{ShareLem} and \ref{2Lemm}. Note that Consistency implies that the shared tops must be a contiguous subordering of each alignment in $S$, and Symmetry then implies parts $1.$ and $3.$ of the proposition. $2.$ is a consequence of NVP, as noted earlier. \end{proof} \begin{proof}[\textbf{Proof of Proposition \ref{PartHon}}] Consider the following indirect mechanism $\mathcal{M}$: each agent sends a message $m_i = (\succ_i, P_i, f_i, z)$ such that $\succ_i \in \mathcal{P}$, $P_i \in \mathcal{D}_{\succ_i}$, $f_i$ is a median SCF, and $z$ is an integer. Let $x_i$ be the top of $P_i$. \begin{enumerate} \item If at least $N-1$ agents send $\succ_i = \succ^{*}$, run any median mechanism $\mathcal{M}^c$ satisfying No Veto Power (NVP) using $\succ^{*}$ as the underlying alignment and $x_i$ as the peaks, and \item otherwise, choose the agent with the lowest index agent, say $j$, amongst those sending the largest integer in their message, and run the median mechanism $\mathcal{M}^j$ dictated by $f_j$ and $\succ_j$, using $x_i$ as the peaks. \end{enumerate} Suppose all agents send $\hat{\succ}$ as part of their message. Then, $\mathcal{M}^c$ is run using the correct alignment and all the equilibria are the same as a strategy-proof, unanimous, and anonymous direct mechanism. Deviations in reporting of $\succ_{i}$ do not lead to a change in the outcome, and hence they cannot be profitable. Suppose all agents send $\succ^{'} \neq \hat{\succ}$ as part of their message. Since there exists an agent with preference for honesty, this agent will deviate to sending $\hat{\succ}$ since that doesn't change the outcome, and hence this cannot be an equilibrium message profile. Suppose there are two alignments $\succ^{'}$ and $\succ^{''}$ sent by the agents. Consider two cases: \begin{adjustwidth}{1 cm}{} \textit{Case 1}: The true peaks of all but one agents are at the same alternative, say $a$. If at least one of the agents with the true peak at $a$ sends an alignment $\succ^{'}$ while the others send $\succ^{''} \neq \succ^{'}$, then the agent with the true peak not at $a$ can deviate to some $\succ^{'''} \notin \{\succ^{'}, \succ^{''}\}$ and get his true peak selected. \\ Suppose instead that all the agents with the true peak at $a$ send the same $\succ^{'}$ and the remaining agent sends $\succ^{''} \neq \succ^{'}$. Note that for such a message to be a Nash equilibrium, it must be that $a$ is the outcome, since if some $b \neq a$ is chosen, then, one of the agents with peak at $a$ can deviate to report some $\succ^{'''} \notin \{\succ^{'}, \succ^{''}\}$ and get $a$ chosen, which is a profitable deviation. By NVP, the alternative chosen in the dominant strategy equilibrium of $\mathcal{M}^c$ under knowledge of $\succ$ is $a$ as well. \\ \textit{Case 2}: If the true peaks of fewer than $N-1$ agents are the same, then there will be at least two agents who do not get their preferred alternative. At least one of these agents can profitably deviate by sending a larger integer and a different alignment $\succ^{'''}$ (along with the appropriate $f_j$) to get his preferred alternative given the messages of others.\end{adjustwidth} \end{proof} \end{appendices} \end{spacing} \end{document}
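The outcome rule of the canonical mechanism constructed in the sufficiency part of the proof of Theorem \ref{neccsuffthm} (each agent reports an integer, an alternative, an alignment and a preference; the SCF is followed when at least $n-1$ reported alignments agree, and otherwise the lowest-indexed agent among those naming the largest integer dictates her announced alternative) can be written out operationally. The following Python sketch is only illustrative and is not part of the manuscript above; the social choice function \texttt{f} and the tuple encoding of messages are hypothetical placeholders.

\begin{verbatim}
from collections import Counter

def outcome(messages, f):
    # messages[i] = (l_i, x_i, align_i, pref_i): an integer, an alternative,
    # an announced alignment and an announced single-peaked preference.
    n = len(messages)
    (align, count), = Counter(m[2] for m in messages).most_common(1)
    if count >= n - 1:
        # At least n-1 agents agree on the alignment: apply the SCF f to the
        # agreed alignment and the announced preference profile.
        return f(align, tuple(m[3] for m in messages))
    # Otherwise, the lowest-indexed agent among those announcing the largest
    # integer is decisive and her announced alternative is chosen.
    top = max(m[0] for m in messages)
    j = min(i for i, m in enumerate(messages) if m[0] == top)
    return messages[j][1]
\end{verbatim}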
\begin{document} \title{{\bf {\large Prime component-preservingly amphicheiral link with odd minimal crossing number}} \footnotetext[0]{ 2010 {\it Mathematics Subject Classification}: 57M25, 57M27.\\ {\it Keywords}: component-preservingly amphicheiral link; minimal crossing number; Tait's conjecture ; invertibility. }} \author{ {\footnotesize Teruhisa KADOKAMI and Yoji KOBATAKE}} \date{{\footnotesize March 11, 2015}} \maketitle \newcommand{\circlenum}[1]{{\ooalign{ $\scriptstyle#1$ \crcr$\bigcirc$}}} \newcommand{\svline}[1]{\multicolumn{1}{|c}{#1}} \newfont{\bg}{cmr10 scaled\magstep4} \newcommand{\smash{\hbox{\bg 0}}}{\smash{\hbox{\bg 0}}} \newcommand{\smash{\lower1.7ex\hbox{\bg 0}}}{\smash{\lower1.7ex\hbox{\bg 0}}} \newcommand{\bsquare}{\hbox{\rule{6pt}{6pt}}} \newcommand{\qed}{\hbox{\rule[-2pt]{3pt}{6pt}}} \newcommand{\mathrm{Int}\ \! }{\mathrm{Int}\ \! } \newcommand{\mathrm{Ker}\ \! }{\mathrm{Ker}\ \! } \newcommand{\mathrm{Im}\ \! }{\mathrm{Im}\ \! } \newcommand{\mathrm{aug}\ \!}{\mathrm{aug}\ \!} \newcommand{\mathrm{pr}\ \!}{\mathrm{pr}\ \!} \newcommand{\mathrm{Tor}\ \!}{\mathrm{Tor}\ \!} \newcommand{\mathrm{Spin}\ \!}{\mathrm{Spin}\ \!} \newcommand{\mathrm{Eul}\ \!}{\mathrm{Eul}\ \!} \newcommand{\mathrm{Vect}\ \!}{\mathrm{Vect}\ \!} \newcommand{\mathrm{HULL}\ \!}{\mathrm{HULL}\ \!} \newcommand{\mathrm{real}\ \!}{\mathrm{real}\ \!} \newcommand{\mathrm{rank}\ \!}{\mathrm{rank}\ \!} \newcommand{\mathrm{ord}\ \!}{\mathrm{ord}\ \!} \newcommand{\mathrm{Sign}\ \!}{\mathrm{Sign}\ \!} \newcommand{\mathrm{Hom}\ \!}{\mathrm{Hom}\ \!} \newcommand{\mathrm{ad}\ \!}{\mathrm{ad}\ \!} \newcommand{\mathrm{Det}\ \!}{\mathrm{Det}\ \!} \newcommand{\mathrm{lk}\ \!}{\mathrm{lk}\ \!} \newcommand{\mathrm{pt}}{\mathrm{pt}} \newcommand{$\alpha$}{$$\alpha$pha$} \newcommand{\displaystyle}{\displaystyleplaystyle} \newtheorem{df}{Definition}[section] \newtheorem{lm}[df]{Lemma} \newtheorem{theo}[df]{Theorem} \newtheorem{re}[df]{Remark} \newtheorem{pr}[df]{Proposition} \newtheorem{ex}[df]{Example} \newtheorem{co}[df]{Corollary} \newtheorem{cl}[df]{Claim} \newtheorem{qu}[df]{Question} \newtheorem{pb}[df]{Problem} \newtheorem{cj}[df]{Conjecture} \makeatletter \renewcommand{\theequation}{ \thesection.\arabic{equation}} \@addtoreset{equation}{section} \makeatother \begin{abstract} {\footnotesize \setlength{\baselineskip}{10pt} \setlength{\oddsidemargin}{0.25in} \setlength{\evensidemargin}{0.25in} For every odd integer $c\ge 21$, we raise an example of a prime component-preservingly amphicheiral link with the minimal crossing number $c$. The link has two components, and consists of an unknot and a knot which is $(-)$-amphicheiral with odd minimal crossing number. We call the latter knot a {\it Stoimenow knot}. We also show that the Stoimenow knot is not invertible by the Alexander polynomials.} \end{abstract} \tableofcontents \section{Introduction}\label{sec:intro} Let $L=K_1\cup \cdots \cup K_r$ be an oriented $r$-component link in $S^3$. A $1$-component link is called a knot. For an oriented knot $K$, we denote the orientation-reversed knot by $-K$. If $\varphi$ is an orientation-reversing homeomorphism of $S^3$ so that $\varphi(K_i)=\varepsilon_{\sigma(i)} K_{\sigma(i)}$ for all $i=1, \ldots, r$ where $\varepsilon_i=+$ or $-$, and $\sigma$ is a permutation of $\{1, 2, \ldots, r\}$, then $L$ is called an {\it $(\varepsilon_1, \ldots, \varepsilon_r; \sigma)$-amphicheiral link}. A term ``amphicheiral link" is used as a general term for an $(\varepsilon_1, \ldots, \varepsilon_r; \sigma)$-amphicheiral link. 
If $\varphi$ can be taken as an involution (i.e.\ $\varphi^2=\mathrm{id}$), then $L$ is called a {\it strongly} amphicheiral link. If $\sigma$ is the identity, then an amphicheiral link is called a {\it component-preservingly amphicheiral link}, and $\sigma$ may be omitted from the notation. If every $\varepsilon_i=\varepsilon$ is identical for all $i=1, \ldots, r$ (including the case that $\sigma$ is not the identity), then an $(\varepsilon_1, \ldots, \varepsilon_r; \sigma)$-amphicheiral link is called an $(\varepsilon)$-amphicheiral link. We use the notations $+=+1=1$ and $-=-1$. For the case of invertibility, we only replace $\varphi$ with an orientation-preserving homeomorphism of $S^3$. We refer the reader to \cite{Wh, Hi, Kd1, Kd2, Kd3, KK}. The minimal crossing number of an alternating amphicheiral link is known to be even (cf.\ \cite[Lemma 1.4]{Kd3}) from the positive answer for the {\it flyping conjecture} due to W.~Menasco and M.~Thistlethwaite \cite{MT}. The flyping conjecture is one of famous Tait's conjectures on alternating links, and it is also called Tait's conjecture III in \cite{St1}. The positive answer for the flyping conjecture implies those of Tait's conjecture I on the minimal crossing number (cf.\ \cite{Mu1}), and Tait's conjecture II on the writhe (cf.\ \cite{Mu2}). A.~Stoimenow \cite[Conjecture 2.4]{St1} sets a conjecture: \begin{center} ``{\it Amphicheiral (alternating?) knots have even crossing number}." \end{center} as Tait's conjecture IV by guessing what Tait had in mind (i.e.\ Tait has not stated it explicitly). We pose the following conjecture: \begin{cj}\label{cj:even} {\rm (a generalized version of Tait's conjecture IV)} The minimal crossing number of an amphicheiral link is even. \end{cj} For the case of alternating amphicheiral links, Conjecture \ref{cj:even} is affirmative as mentioned above from the answer for Tait's conjecture II. Hence it motivates to find an amphicheiral link with odd minimal crossing number. If there exists a counter-example for Conjecture \ref{cj:even}, then it should be non-alternating. A non-split link is {\it prime} if it is not a connected sum of non-trivial links. We assume that a prime link is non-split. There exists a prime amphicheiral knot with minimal crossing number $15$ in the table of J.~Hoste, M.~Thistlethwaite and J.~Weeks \cite{HTW}, which gives a negative answer for Conjecture \ref{cj:even} (the original Tait's conjecture IV). The knot is named $15_{224980}$ (Figure 1). \begin{figure} \caption{$15_{224980} \label{fig:HTW} \end{figure} Stoimenow \cite{St2} showed that for every odd integer $c\ge 15$, there exists an example of a prime amphicheiral knot with minimal crossing number $c$. The case $c=15$ corresponds to $15_{224980}$. We call the sequence of knots {\it Stoimenow knots} (see Section \ref{sec:Stoimenow}). He also pointed out that there are no such examples for the case $c\le 13$. The first author and A.~Kawauchi \cite{KK}, and the first author \cite{Kd3} determined prime amphicheiral links with minimal crossing number up to $11$. Then there are two prime amphicheiral links with odd minimal crossing numbers named $9_{61}^2$ and $11_{n247}^2$ (Figure 2), where we use modified notations from Rolfsen's table \cite{Ro} and Thistlethwaite's table on the web site maintained by D.~Bar-Natan and S.~Morrison \cite{BM}. \begin{figure} \caption{$9_{61} \label{fig:odd} \end{figure} These examples show that Conjecture \ref{cj:even} is negative for links. 
Since both $9_{61}^2$ and $11_{n247}^2$ are not component-preservingly amphicheiral, we ask the following question (see also Question \ref{qu:mincr}) : \begin{qu}\label{qu:cp} Is there a prime component-preservingly amphicheiral link with odd minimal crossing number ? \end{qu} If we remove `prime' from Question \ref{qu:cp}, then we can obtain nugatory examples by taking split sum of a Stoimenow knot and an unknot, or connected sum of Stoimenow knot and the Hopf link. Our main theorem is an affirmative answer for Question \ref{qu:cp} which is a negative answer for Conjecture \ref{cj:even}: \begin{theo}\label{thm:main} For every odd integer $c\ge 21$, there exists a prime component-preservingly amphicheiral link with minimal crossing number $c$ (Figure 10). \end{theo} Our example is a $2$-component link with linking number $3$ whose components are a Stoimenow knot and an unknot. We prove it in Section \ref{sec:proof}. The proof is divided into three parts such as to show amphicheirality, to determine the minimal crossing number, and to show primeness. We can immediately see its amphicheirality by construction. Though to find the way of linking of the two components was not so easy, to determine the minimal crossing number is easy by the help of Stoimenow's result \cite{St2} (cf.\ Theorem \ref{thm:Stoimenow}). In \cite{St2}, to determine the minimal crossing number and to show primeness of his knot were very hard. Finally we show primeness by using the Kauffman bracket (cf.\ Subsection \ref{ssec:Kauffman}). This part is also eased by Stoimenow's result. In Section \ref{sec:noninv}, by R.~Hartley \cite{Ha}, R.~Hartley and A.~Kawauchi \cite{HK}, and A.~Kawauchi \cite{Kw1}'s necessary conditions on the Alexander polynomials of amphicheiral knots, we show that a Stoimenow knot is not invertible (Theorem \ref{th:Stoimenow}). \section{Link invariants}\label{sec:inv} \subsection{Kauffman bracket}\label{ssec:Kauffman} Let $L$ be an $r$-component oriented link, and $D$ a diagram of $L$. Firstly we regard $D$ as an unoriented diagram. On a crossing of $D$, a {\it splice} is a replacement from the left-hand side (the crossing) to the right-hand side as in Figure 3. Precisely, a {\it $0$-splice} is to the upper right-hand side, and an {\it $\infty$-splice} is to the down right-hand side, respectively. \begin{figure} \caption{splice} \label{splice} \end{figure} The resulting diagram is a {\it state}, and it is a diagram of an unlink without crossings. Let $s$ be a state, $|s|$ the number of components of $s$, $t_0(s)$ the number of $0$-splices to obtain $s$, $t_{\infty}(s)$ the number of $\infty$-splices to obtain $s$, $t(s)=t_0(s)-t_{\infty}(s)$, and $\mathcal{S}$ the set of states from $D$. Let $A$ be an indeterminate, and $d=-A^2-A^{-2}$. Then $$\langle D\rangle =\sum_{s\in \mathcal{S}} A^{t(s)}d^{|s|-1}\in \mathbb{Z}[A, A^{-1}]$$ is the {\it Kauffman bracket} of $D$, and \begin{equation}\label{eq:f} f_L(A)=(-A^3)^{-w(D)}\langle D\rangle \end{equation} is the {\it $f$-polynomial} of $L$ where $w(D)$ is the writhe of $D$ as an oriented diagram. Then $f_L(A)$ is an invariant of $L$, and \begin{equation}\label{eq:J} V_L(t)=f_L\left(t^{\frac 14}\right)\in \mathbb{Z}\left[t^{\frac 12}, t^{-\frac 12}\right] \end{equation} is the {\it Jones polynomial} of $L$. We denote $\langle D\rangle$ as $\langle D\rangle(A)$ when we emphasis it as a function of $A$. We have the following facts: \begin{lm}\label{lm:Kauffman} Let $L$ be an $r$-component oriented link, and $D$ a diagram of $L$. 
\begin{enumerate} \item[(1)] The Kauffman bracket $\langle D\rangle$ is an invariant of $L$ up to multiplication by $(-A^3)$. In particular, if we substitute a root of unity for $A$ and take its absolute value, then it is an invariant of $L$, which is a non-negative real number. \item[(2)] We have the following skein relation (Figure 4), which can be taken as an axiom of the Kauffman bracket: \begin{figure} \caption{skein relation I} \label{skein1} \end{figure} \item[(3)] Let $L_i\ (i=1, 2)$ be a link, $D_i$ a link diagram of $L_i$, and $D_1\amalg D_2$ ($L_1\amalg L_2$, respectively) the split sum of $D_1$ and $D_2$ ($L_1$ and $L_2$, respectively). Then we have $$\langle D_1\amalg D_2\rangle =d\langle D_1\rangle\langle D_2\rangle,\ f_{L_1\amalg L_2}(A)=d\cdot f_{L_1}(A)f_{L_2}(A).$$ \item[(4)] Let $L_i\ (i=1, 2)$ be a link, $D_i$ a link diagram of $L_i$, and $D_1\sharp D_2$ ($L_1\sharp L_2$, respectively) the connected sum of $D_1$ and $D_2$ ($L_1$ and $L_2$, respectively). Then we have $$\langle D_1\sharp D_2\rangle =\langle D_1\rangle\langle D_2\rangle,\ f_{L_1\sharp L_2}(A)=f_{L_1}(A)f_{L_2}(A).$$ \item[(5)] We have a skein relation as in Figure 5: \begin{figure} \caption{skein relation II} \label{skein2} \end{figure} \item[(6)] Let $D^*$ ($L^*$, respectively) be the mirror image of $D$ ($L$, respectively). Then we have $$\langle D^*\rangle(A) =\langle D\rangle(A^{-1}),\ f_{L^*}(A)=f_L(A^{-1}).$$ \item[(7)] $f_L(A)\in A^{2(r+1)}\cdot \mathbb{Z}[A^4, A^{-4}]$. \item[(8)] Let $\zeta$ be a primitive $8$-th root of unity (i.e.\ $\zeta^4=-1$ and $\zeta^8=1$). Suppose that the crossing number of $D$ is even. Then $\langle D\rangle (\zeta)$ is an integer or of the form $\sqrt{-1}\times (\mbox{integer})$, which depends on $r$ and the writhe. In particular, for $r=1$, $\langle D\rangle (\zeta)$ is an integer if and only if the writhe is $0\ (\mathrm{mod}\ \! 4)$. \item[(9)] Let $\zeta$ be a primitive $8$-th root of unity. Then we have $|\langle D\rangle (\zeta)|=|V_L(-1)|$. \end{enumerate} \end{lm} Lemma \ref{lm:Kauffman} (8) is obtained from (7) and (\ref{eq:f}), and it is a special case of (1). Lemma \ref{lm:Kauffman} (9) is obtained from (\ref{eq:J}). Let $T_m$ be an $m$-half twist tangle for $m\in \mathbb{Z}$, and $T_{\infty}$ the tangle in Figure 6. \begin{figure} \caption{$m$-half twists} \label{mfwist} \end{figure} By Lemma \ref{lm:Kauffman} (2), (3), (4) and (5), we have the following: \begin{lm}\label{lm:Tm} \begin{enumerate} \item[(1)] We have $$\langle T_m\rangle=A^m\langle T_0\rangle +\alpha_m(A) \langle T_{\infty}\rangle,$$ where $$\alpha_m(A) =A^{m-2}\cdot \frac{1-(-A^{-4})^m}{1-(-A^{-4})}.$$ \item[(2)] $\alpha_{-m}(A)=\alpha_m(A^{-1})$. \item[(3)] Let $\zeta$ be a primitive $8$-th root of unity. Then we have $$\alpha_m(\zeta)=m\zeta^{m-2}\quad \mbox{and}\quad \alpha_m(\zeta)\cdot \alpha_{-m}(\zeta)=m^2.$$ \end{enumerate} \end{lm}
For a skein triple $(L_+, L_-, L_0)$, the Conway polynomial is defined by the following skein relation : $$\nabla_{L_+}(z)-\nabla_{L_-}(z)=z\nabla_{L_0}(z),\quad \nabla_O(z)=1,$$ where $O$ is the trivial knot. \begin{lm}\label{lm:Conway} Let $L$ be an $r$-component oriented link, and $L^*$ the mirror image of $L$. Then we have $$\nabla_{L^*}(z)=\nabla_L(-z).$$ More precisely, $\nabla_{L^*}(z)=\nabla_L(z)$ if $r$ is odd, and $\nabla_{L^*}(z)=-\nabla_L(z)$ if $r$ is even. \end{lm} For an $r$-component oriented link $L$, the {\it (normalized one variable) Alexander polynomial} ${\mit \Delta}_L(t)$ is defined by $${\mit \Delta}_L(t)=\nabla_L\left(t^{\frac 12}-t^{-\frac 12}\right) \in \mathbb{Z}\left[ t^{\frac 12}, t^{-\frac 12}\right].$$ For $A, B\in \mathbb{Z}\left[ t^{\frac 12}, t^{-\frac 12}\right]$, $A\doteq B$ implies $A=\pm t^{\frac m2}B$ for some $m\in \mathbb{Z}$. For $f, g\in \mathbb{Z}[z]$ or $\mathbb{Z}\left[ t^{\frac 12}, t^{-\frac 12}\right]$, if they are equal as elements in $(\mathbb{Z}/d\mathbb{Z})[z]$ or $(\mathbb{Z}/d\mathbb{Z})\left[ t^{\frac 12}, t^{-\frac 12}\right]$, then we denote by $f=_dg$. For an oriented link $L$, if $\nabla_L(z)$ and ${\mit \Delta}_L(t)$ are regarded as elements in $(\mathbb{Z}/d\mathbb{Z})[z]$ and $(\mathbb{Z}/d\mathbb{Z})\left[ t^{\frac 12}, t^{-\frac 12}\right]$ respectively, then we call them the {\it mod $d$ Conway polynomial} of $L$ and the {\it mod $d$ Alexander polynomial} of $L$ respectively. \section{Stoimenow knots}\label{sec:Stoimenow} Let $\sigma_i$\ $(i=1, \ldots, m-1)$ be a generator of the $m$-string braid group, and $\delta_i$ and $\overline{\delta}_i$\ $(i=1, \ldots, m-1)$ tangles in Figure 8. \begin{figure} \caption{generator $\sigma_i$ of braid group, and $\delta_i$ and $\overline{\delta} \label{braid} \end{figure} For an odd number $n\ge 15$, a {\it Stoimenow knot} with crossing number $n$, denoted by $S_n$, is the closure of the following composition of $\sigma_i$, $\delta_i$ and $\overline{\delta}_i$\ $(i=1, \ldots, m-1)$: $$\begin{array}{ll} 3\ -1\ 2^2\ 3^{2k}\ 4\ -3\ 2\ -1\ (-2)^{2k}\ (-3)^2\ 4\ -2 & (n=4k+11), \\ \delta_3\ -1\ 2^2\ 3^{2k}\ 4\ -3\ 2\ -1\ (-2)^{2k}\ (-3)^2\ 4\ \overline{\delta}_2 & (n=4k+13), \end{array}$$ where in the sequence above, $m=5$, $\sigma_i$ is translated into $i$ and $\sigma_i^{-1}$ is translated into $-i$, and $i^l$ implies that $i$ is repeated $l$ times with $l\ge 1$. The former is {\it of type I}, and the latter is {\it of type II}, respectively. Note that $S_{15}=15_{224980}$ in Figure 1, and both two tangles above have $(n+1)$ crossings. We can see strong $(-)$-amphicheirality of $S_n$ from its diagram with $(n+1)$ crossings in the righthand side of Figure 9. \begin{theo}\label{thm:Stoimenow} {\rm (Stoimenow \cite{St1, St2})} A Stoimenow knot $S_n$ is a prime strongly $(-)$-amphicheiral knot with minimal crossing number $n$. \end{theo} \begin{figure} \caption{Stoimenow knot $S_n$} \label{Stoimenow} \end{figure} \section{Proof of Theorem \ref{thm:main}}\label{sec:proof} We take a $2$-component link $L_n=S_n\cup U$ whose components are a Stoimenow knot $S_n$ and an unknot $U$ as in Figure 10. The link $L_n$ is {\it of type I} if $S_n$ is of type I, and is {\it of type II} if $S_n$ is of type II. We prove that $L_n$ is a prime component-preservingly amphicheiral link with minimal crossing number $n+6$, where $n+6$ is odd with $n+6\ge 21$ because $n$ is odd with $n\ge 15$. 
\begin{figure} \caption{prime component-preservingly amphicheiral link $L_n$} \label{ex} \end{figure} \noindent {\bf Proof of Theorem \ref{thm:main}}\ By the right-hand side of Figure 10, $L_n$ is a component-preservingly strongly $(-, +)$-amphicheiral link. The linking number of $L_n$, $\mathrm{lk}\ \! (L_n)$, is $3$ for a suitable orientation. Let $c(\cdot)$ denote the minimal crossing number of a link. Since $$c(L_n)\ge c(S_n)+c(U)+2|\mathrm{lk}\ \! (L_n)|=n+6,$$ and the left-hand side of Figure 10 realizes the lower bound, we have $c(L_n)=n+6$, which is odd. Finally we show that $L_n$ is prime by using the Kauffman bracket. Suppose that $L_n$ is not prime. Then, by Theorem \ref{thm:Stoimenow}, $L_n$ is a connected sum of two links such that one is a Stoimenow knot $S_n$ and the other is a $2$-component link with unknotted components and with linking number $3$. Hence $\langle L_n\rangle$ should be divisible by $\langle S_n\rangle$ by Lemma \ref{lm:Kauffman} (4). We compute $\langle L_n\rangle (\zeta)$ and $\langle S_n\rangle (\zeta)$, where $\zeta$ is a primitive $8$-th root of unity. By Lemma \ref{lm:Kauffman} (4) and (8), $|\langle L_n\rangle (\zeta)|$ should be divisible by $|\langle S_n\rangle (\zeta)|$. To compute $\langle S_n\rangle$ and $\langle L_n\rangle$, we set $K=S_n$ and $L=L_n$, and we denote the results of the splicings by $K_{00}$, $K_{0\infty}$, $K_{\infty 0}$, $K_{\infty \infty}$, $L_{00}$, $L_{0\infty}$, $L_{\infty 0}$ and $L_{\infty \infty}$, respectively, as in Figure 11. Here we draw only the type I case; the type II case is obtained in a similar way. \begin{figure} \caption{splices of $L_n$} \label{splice2} \end{figure} Then by Lemma \ref{lm:Tm} (1), we have: \begin{equation}\label{eq:K} \begin{matrix} \langle K\rangle & = & \langle K_{00}\rangle +A^{-2k}\alpha_{2k}(A)\langle K_{0\infty}\rangle +A^{2k}\alpha_{-2k}(A)\langle K_{\infty 0}\rangle \\ & & +\alpha_{2k}(A)\alpha_{-2k}(A)\langle K_{\infty \infty}\rangle \end{matrix} \end{equation} and \begin{equation}\label{eq:L} \begin{matrix} \langle L\rangle & = & \langle L_{00}\rangle +A^{-2k}\alpha_{2k}(A)\langle L_{0\infty}\rangle +A^{2k}\alpha_{-2k}(A)\langle L_{\infty 0}\rangle \\ & & +\alpha_{2k}(A)\alpha_{-2k}(A)\langle L_{\infty \infty}\rangle. \end{matrix} \end{equation} We can see that $K_{00}$ and $K_{\infty \infty}$ are amphicheiral knot diagrams with writhe $0$, $K_{0\infty}=(K_{\infty 0})^*$, the writhe of $K_{0\infty}$ is $-10$, the writhe of $K_{\infty 0}$ is $10$, $L_{00}$ and $L_{\infty \infty}$ are $2$-component amphicheiral link diagrams with writhe $6$, $L_{0\infty}=(L_{\infty 0})^*$, the writhe of $L_{0\infty}$ is $-4$, and the writhe of $L_{\infty 0}$ is $16$. By Lemma \ref{lm:Kauffman} (6), we have $$K_{00}(A)=K_{00}(A^{-1}),\ K_{\infty \infty}(A)=K_{\infty \infty}(A^{-1}),\ K_{\infty 0}(A)=K_{0\infty}(A^{-1}),$$ $$L_{00}(A)=L_{00}(A^{-1}),\ L_{\infty \infty}(A)=L_{\infty \infty}(A^{-1}),\quad \mbox{and}\quad L_{\infty 0}(A)=L_{0\infty}(A^{-1}).$$ By Lemma \ref{lm:Tm} (2), $A^{2k}\alpha_{-2k}(A)$ is obtained by replacing $A$ with $A^{-1}$ in $A^{-2k}\alpha_{2k}(A)$.
By straight calculations using Lemma \ref{lm:Kauffman} and Lemma \ref{lm:Tm}, we have: \noindent (type I) \begin{equation}\label{eq:typeIK} \begin{matrix} \langle K_{00}\rangle & = & A^{16}-4A^{12}+6A^8-7A^4+9-7A^{-4}+6A^{-8}-4A^{-12}+A^{-16}, \\ \langle K_{0\infty}\rangle & = & -A^{18}+3A^{14}-5A^{10}+6A^6-7A^2+6A^{-2} -5A^{-6}+4A^{-10} \\ & & -A^{-14}+A^{-18}, \\ \langle K_{\infty \infty}\rangle & = & A^{16}-3A^{12}+5A^8-6A^4+7-6A^{-4}+5A^{-8}-3A^{-12}+A^{-16}. \end{matrix} \end{equation} \begin{equation}\label{eq:typeIL} \begin{matrix} \langle L_{00}\rangle & = & -A^{20}+4A^{16}-8A^{12}+12A^8-16A^4+16 -16A^{-4}+12A^{-8} \\ & & -8A^{-12}+4A^{-16}-A^{-20}, \\ \langle L_{0\infty}\rangle & = & A^{22}-3A^{18}+6A^{14}-9A^{10}+12A^6-12A^2 +11A^{-2}-9A^{-6} \\ & & +5A^{-10}-3A^{-14}-A^{-26}, \\ \langle L_{\infty \infty}\rangle & = & -A^{20}+3A^{16}-7A^{12}+10A^8-13A^4+14 -13A^{-4}+10A^{-8} \\ & & -7A^{-12}+3A^{-16}-A^{-20}. \end{matrix} \end{equation} \noindent (type II) \begin{equation}\label{eq:typeIIK} \begin{matrix} \langle K_{00}\rangle & = & -A^{20}+4A^{16}-9A^{12}+14A^8-17A^4+19 -17A^{-4}+14A^{-8} \\ & & -9A^{-12}+4A^{-16}-A^{-20}, \\ \langle K_{0\infty}\rangle & = & A^{22}-4A^{18}+10A^{14}-15A^{10}+19A^6-22A^2 +20A^{-2}-18A^{-6} \\ & & +12A^{-10}-7A^{-14}+3A^{-18}-A^{-22}, \\ \langle K_{\infty \infty}\rangle & = & -2A^{20}+6A^{16}-13A^{12}+21A^8-24A^4+28 -24A^{-4}+21A^{-8} \\ & & -13A^{-12}+6A^{-16}-2A^{-20}. \end{matrix} \end{equation} \begin{equation}\label{eq:typeIIL} \begin{matrix} \langle L_{00}\rangle & = & A^{24}-5A^{20}+13A^{16}-24A^{12}+35A^8-44A^4 +46-44A^{-4} \\ & & +35A^{-8}-24A^{-12}+13A^{-16}-5A^{-20}+A^{-24}, \\ \langle L_{0\infty}\rangle & = & -A^{26}+4A^{22}-11A^{18}+20A^{14}-31A^{10}+40A^6 -42A^2+42A^{-2} \\ & & -33A^{-6}+24A^{-10}-13A^{-14}+5A^{-18}-A^{-26}+A^{-30}, \\ \langle L_{\infty \infty}\rangle & = & A^{24}-5A^{20}+14A^{16}-27A^{12}+38A^8-50A^4 +50-50A^{-4} \\ & & +38A^{-8}-27A^{-12}+14A^{-16}-5A^{-20}+A^{-24}. \end{matrix} \end{equation} We substitute $A=\zeta$ to (\ref{eq:K}) and (\ref{eq:L}). We set $\zeta^2=\sqrt{-1}$. By Lemma \ref{lm:Tm} (2) and the arguments above, we have \begin{equation}\label{eq:Kzeta} \begin{matrix} \langle K\rangle (\zeta) & = & \langle K_{00}\rangle (\zeta) -4k\sqrt{-1}\langle K_{0\infty}\rangle (\zeta) +4k^2\langle K_{\infty \infty}\rangle (\zeta) \end{matrix} \end{equation} and \begin{equation}\label{eq:Lzeta} \begin{matrix} \langle L\rangle (\zeta) & = & \langle L_{00}\rangle (\zeta) -4k\sqrt{-1}\langle L_{0\infty}\rangle (\zeta) +4k^2\langle L_{\infty \infty}\rangle (\zeta). \end{matrix} \end{equation} By (\ref{eq:typeIK}), (\ref{eq:typeIL}), (\ref{eq:typeIIK}) and (\ref{eq:typeIIL}), we have \noindent (type I) \begin{equation}\label{eq:typeIKzeta} \begin{matrix} \langle K_{00}\rangle (\zeta) & = & 45, \\ \langle K_{0\infty}\rangle (\zeta) & = & -39\sqrt{-1}, \\ \langle K_{\infty \infty}\rangle (\zeta) & = & 37. \end{matrix} \end{equation} \begin{equation}\label{eq:typeILzeta} \begin{matrix} \langle L_{00}\rangle (\zeta) & = & 98, \\ \langle L_{0\infty}\rangle (\zeta) & = & -70\sqrt{-1}, \\ \langle L_{\infty \infty}\rangle (\zeta) & = & 82. \end{matrix} \end{equation} \noindent (type II) \begin{equation}\label{eq:typeIIKzeta} \begin{matrix} \langle K_{00}\rangle (\zeta) & = & 109, \\ \langle K_{0\infty}\rangle (\zeta) & = & -132\sqrt{-1}, \\ \langle K_{\infty \infty}\rangle (\zeta) & = & 160. 
\end{matrix} \end{equation} \begin{equation}\label{eq:typeIILzeta} \begin{matrix} \langle L_{00}\rangle (\zeta) & = & 290, \\ \langle L_{0\infty}\rangle (\zeta) & = & -264\sqrt{-1}, \\ \langle L_{\infty \infty}\rangle (\zeta) & = & 320. \end{matrix} \end{equation} By (\ref{eq:Kzeta}), (\ref{eq:Lzeta}), (\ref{eq:typeIKzeta}), (\ref{eq:typeILzeta}), (\ref{eq:typeIIKzeta}) and (\ref{eq:typeIILzeta}), we have \noindent (type I) \begin{equation*}\label{eq:typeI} \begin{matrix} \langle K\rangle (\zeta) & = & 148k^2-156k+45, \\ \langle L\rangle (\zeta) & = & 328k^2-280k+98. \end{matrix} \end{equation*} \noindent (type II) \begin{equation*}\label{eq:typeII} \begin{matrix} \langle K\rangle (\zeta) & = & 640k^2-528k+109, \\ \langle L\rangle (\zeta) & = & 1280k^2-1056k+290. \end{matrix} \end{equation*} Note that $148k^2-156k+45$ and $640k^2-528k+109$ are odd and $328k^2-280k+98$ and $1280k^2-1056k+290$ are of the form $2\times$(odd), and they are positive for $k\ge 1$. Hence if $148k^2-156k+45$ divides $328k^2-280k+98$ ($640k^2-528k+109$ divides $1280k^2-1056k+290$, respectively), then $148k^2-156k+45$ divides $164k^2-140k+49$ ($640k^2-528k+109$ divides $640k^2-528k+145$, respectively), and the quantity is odd. \noindent (type I) Suppose that $164k^2-140k+49$ is divisible by $148k^2-156k+45$. Since $$(164k^2-140k+49)-(148k^2-156k+45) =16k^2+16k+4>0,$$ the quantity is not $1$. Since $$3(148k^2-156k+45)-(164k^2-140k+49) =280k^2-328k+86>0,$$ the quantity is not greater than $1$. It is a contradiction. \noindent (type II) Suppose that $640k^2-528k+145$ is divisible by $640k^2-528k+109$. Since $$(640k^2-528k+145)-(640k^2-528k+109) =36>0,$$ the quantity is not $1$. Since $$3(640k^2-528k+109)-(640k^2-528k+145) =1280k^2-1056k+182>0,$$ the quantity is not greater than $1$. It is a contradiction. \qed \begin{re}\label{re:Remark} {\rm In \cite{Ko}, the second author computes the J polynomials, which are modified Jones polynomials, of $S_n$ and $L_n$ explicitly. The J polynomial is an invariant of unoriented links.} \end{re} \section{Non-invertibility of Stoimenow knots}\label{sec:noninv} In this section, we show that a Stoimenow knot $S_n$ is not invertible by using the Alexander polynomials. Since $S_n$ is $(-)$-amphicheiral, we show that it is not $(+)$-amphicheiral, which is equivalent to that it is not invertble. Let $L$ be a link, and ${\mit \Delta}_L(t)\in \mathbb{Z}[t, t^{-1}]$ the Alexander polynomial of $L$. For two elements $A$ and $B$ in $\mathbb{Z}[t, t^{-1}]$ ($(\mathbb{Z}/d\mathbb{Z})[t, t^{-1}]$, respectively), we denote by $A\doteq B$ ($A\doteq_d B$, respectively) if they are equal up to multiplications of trivial units. A one variable Laurent polynomial $r(t)\in \mathbb{Z}[t^{\pm 1}]$ is {\it of type $X$} if there are integers $n\ge 0$ and $\lambda \ge 3$ such that $\lambda$ is odd, and $f_i(t)\in \mathbb{Z}[t, t^{-1}]$ $(i=0, 1, \ldots, n)$ such that $f_i(t)\doteq f_i(t^{-1})$, $|f_i(1)|=1$, and for $i>0$, $f_i(t)\doteq_2 f_0(t)^{2^i}p_{\lambda}(t)^{2^{i-1}}$ where $p_{\lambda}(t)=(t^{\lambda}-1)/(t-1)$, and \begin{equation}\label{eq:r} r(t)\doteq \left\{ \begin{array}{ll} f_0(t)^2 & (n=0), \\ f_0(t)^2f_1(t)\cdots f_n(t) & (n\ge 1). \end{array} \right. \end{equation} R.~Hartley \cite{Ha}, R.~Hartley and A.~Kawauchi \cite{HK}, and A.~Kawauchi \cite{Kw1} gave necessary conditions on the Alexander polynomials of amphicheiral knots. 
\begin{lm}\label{lm:HK} {\rm (Hartley \cite{Ha}; Hartley and Kawauchi \cite{HK}; Kawauchi \cite{Kw1})} \begin{enumerate} \item[(1)] Let $K$ be a $(-)$-amphicheiral knot. Then there exists an element $f(t)\in \mathbb{Z}[t, t^{-1}]$ such that $|f(1)|=1$, $f(t^{-1})\doteq f(-t)$, and $${\mit \Delta}_K(t^2)\doteq f(t)f(t^{-1}).$$ \item[(2)] Let $K$ be a $(+)$-amphicheiral knot. Then there exist $r_j(t)\in \mathbb{Z}[t, t^{-1}]$ of type $X$ and positive odd numbers $\alpha_j$\ $(j=1, \ldots, m)$ such that $${\mit \Delta}_K(t)\doteq \prod_{j=1}^m r_j(t^{\alpha_j}).$$ In particular, if $K$ is hyperbolic, then we can take $m=1$ and $\alpha_1=1$. \end{enumerate} \end{lm} We generalize Stoimenow knots as in Figure 12. The link on the left-hand side is called a {\it generalized Stoimenow link of type I}, and is denoted by $S^1_{p,q}$. The link on the right-hand side is called a {\it generalized Stoimenow link of type II}, and is denoted by $S^2_{r,s}$. The numbers in the rectangles are the numbers of half twists. We note that $S^1_{2k,2k}=S_{4k+11}$ and $S^2_{k,k}=S_{4k+13}$. We denote the Alexander polynomials (the Conway polynomials) of $S^1_{p,q}$ and $S^2_{r,s}$ by ${\mit \Delta}^{(1)}_{p,q}(t)$ and ${\mit \Delta}^{(2)}_{r,s}(t)$ ($\nabla^{(1)}_{p,q}(z)$ and $\nabla^{(2)}_{r,s}(z)$), respectively. We compute ${\mit \Delta}^{(1)}_{2k,2k}(t)$ and ${\mit \Delta}^{(2)}_{k,k}(t)$ as the mod $2$ Alexander polynomials. \begin{figure} \caption{generalized Stoimenow links $S^1_{p,q}$ and $S^2_{r,s}$} \label{general} \end{figure} \begin{lm}\label{lm:StAlex} The Alexander and the mod $2$ Alexander polynomials of $S^1_{2k,2k}$ and $S^2_{k,k}$ are as follows: \begin{eqnarray*} (t+1)^2 {\mit \Delta}^{(1)}_{2k,2k}(t) & \doteq_2 & t^{4k+6}+t^{4k+5}+t^{4k+4}+t^{4k+2}+t^{4k-1}+t^7+t^4+t^2+t+1 \\ & =_2 & (t^2+t+1)^2(t^{4k+2}+t^{4k+1}+t^{4k-1}+t^3+t+1). \\ {\mit \Delta}^{(2)}_{k,k}(t) & \doteq & t^3(-t^6+9t^5-26t^4+37t^3-26t^2+9t-1)\\ & & -2kt^2(t-1)^2(2t^6-7t^5+15t^4-18t^3+15t^2-7t+2)\\ & & +k^2(t-1)^2(t^{10}-3t^9+7t^8-17t^7+32t^6-40t^5+32t^4-17t^3\\ & & +7t^2-3t+1) \\ & \doteq_2 & \left\{ \begin{array}{cl} t^6+t^5+t^3+t+1 & (\mbox{$k$ is even}), \\ t^{12}+t^{11}+t^9+t^7+t^6+t^5+t^3+t+1 & (\mbox{$k$ is odd}). \end{array} \right. \end{eqnarray*} \end{lm} \noindent {\bf Proof}\ \ We have the following relations on the Conway polynomials from the skein relation in Subsection \ref{ssec:Alexander}: \begin{equation}\label{eq:rel1} \left\{ \begin{array}{ccccl} \nabla_{p,q}^{(1)}(z) & - & \nabla_{p-2,q}^{(1)}(z) & = & z\nabla_{p-1,q}^{(1)}(z) \\ \nabla_{p,q-2}^{(1)}(z) & - & \nabla_{p,q}^{(1)}(z) & = & z\nabla_{p,q-1}^{(1)}(z), \end{array} \right. \end{equation} and \begin{equation}\label{eq:rel2} \left\{ \begin{array}{ccccl} \nabla_{r-1,s}^{(2)}(z) & - & \nabla_{r,s}^{(2)}(z) & = & z\nabla_{\infty,s}^{(2)}(z) \\ \nabla_{r,s}^{(2)}(z) & - & \nabla_{r,s-1}^{(2)}(z) & = & z\nabla_{r,\infty}^{(2)}(z). \end{array} \right. \end{equation} For the meaning of $\infty$, see Figure 6. \noindent (type I) From (\ref{eq:rel1}), we have: \begin{equation}\label{eq:rel3} \left\{ \begin{array}{ccl} {\mit \Delta}_{p,q}^{(1)}(t)-t^{\frac 12}{\mit \Delta}_{p-1,q}^{(1)}(t) & = & (-t^{-\frac 12})^{p-1}({\mit \Delta}_{1,q}^{(1)}(t)-t^{\frac 12}{\mit \Delta}_{0,q}^{(1)}(t)) \\ {\mit \Delta}_{p,q}^{(1)}(t)+t^{-\frac 12}{\mit \Delta}_{p-1,q}^{(1)}(t) & = & (t^{-\frac 12})^{p-1}({\mit \Delta}_{1,q}^{(1)}(t)+t^{-\frac 12}{\mit \Delta}_{0,q}^{(1)}(t)), \end{array} \right.
\end{equation} and \begin{equation}\label{eq:rel4} \left\{ \begin{array}{ccl} {\mit \Delta}_{p,q}^{(1)}(t)+t^{\frac 12}{\mit \Delta}_{p,q-1}^{(1)}(t) & = & (t^{-\frac 12})^{q-1}({\mit \Delta}_{p,1}^{(1)}(t)+t^{\frac 12}{\mit \Delta}_{p,0}^{(1)}(t)) \\ {\mit \Delta}_{p,q}^{(1)}(t)-t^{-\frac 12}{\mit \Delta}_{p,q-1}^{(1)}(t) & = & (-t^{-\frac 12})^{q-1}({\mit \Delta}_{p,1}^{(1)}(t)-t^{-\frac 12}{\mit \Delta}_{p,0}^{(1)}(t)). \end{array} \right. \end{equation} From (\ref{eq:rel3}) and (\ref{eq:rel4}), we have : \begin{equation}\label{eq:rel5} \left\{ \begin{array}{ccl} (t^{\frac 12}+t^{-\frac 12}){\mit \Delta}_{p,q}^{(1)}(t) & = & (t^{\frac p2}-(-1)^pt^{-\frac p2}){\mit \Delta}_{1,q}^{(1)}(t) \\ & & +(t^{\frac{p-1}{2}}+(-1)^pt^{-\frac{p-1}{2}}){\mit \Delta}_{0,q}^{(1)}(t) \\ -(t^{\frac 12}+t^{-\frac 12}){\mit \Delta}_{p,q}^{(1)}(t) & = & ((-1)^qt^{\frac q2}-t^{-\frac q2}){\mit \Delta}_{p,1}^{(1)}(t) \\ & & -((-1)^qt^{\frac{q-1}{2}}+t^{-\frac{q-1}{2}}){\mit \Delta}_{p,0}^{(1)}(t). \end{array} \right. \end{equation} From (\ref{eq:rel5}), if $p=q=2k$, then we have a skein relation among the Alexander polynomials of $S^1_{2k,2k}$, $S^1_{0,0}$, $S^1_{1,0}$, $S^1_{0,1}$ and $S^1_{1,1}$ (cf.\ Figure 13) : \begin{equation}\label{eq:type11} \begin{array}{ccl} \left( t^{\frac 12}+t^{-\frac 12}\right)^2 {\mit \Delta}^{(1)}_{2k,2k}(t) & = & \left( t^{k-\frac 12}+t^{-k+\frac 12}\right)^2 {\mit \Delta}^{(1)}_{0,0}(t) \\ & & -( t^k+t^{-k})\left( t^{k-\frac 12}+t^{-k+\frac 12}\right) ({\mit \Delta}^{(1)}_{1,0}(t)-{\mit \Delta}^{(1)}_{0,1}(t)) \\ & & -( t^k+t^{-k})^2 {\mit \Delta}^{(1)}_{1,1}(t). \end{array} \end{equation} Since $S^1_{1,0}$ and $S^1_{0,1}$ are $2$-component links with $S^1_{0,1}=-(S^1_{1,0})^*$, and (\ref{eq:type11}), we have $\nabla^{(1)}_{0,1}(z)=-\nabla^{(1)}_{1,0}(z)$ and ${\mit \Delta}^{(1)}_{0,1}(t)=-{\mit \Delta}^{(1)}_{1,0}(t)$ by Lemma \ref{lm:Conway}, and \begin{equation}\label{eq:type12} \begin{array}{ccl} \left( t^{\frac 12}+t^{-\frac 12}\right)^2 {\mit \Delta}^{(1)}_{2k,2k}(t) & = & \left( t^{k-\frac 12}+t^{-k+\frac 12}\right)^2 {\mit \Delta}^{(1)}_{0,0}(t) \\ & & -2( t^k+t^{-k})\left( t^{k-\frac 12}+t^{-k+\frac 12}\right) {\mit \Delta}^{(1)}_{1,0}(t) \\ & & -( t^k+t^{-k})^2 {\mit \Delta}^{(1)}_{1,1}(t). \end{array} \end{equation} Since $S^1_{0,0}=8_{18}$, $$\begin{array}{rll} {\mit \Delta}^{(1)}_{0,0}(t)={\mit \Delta}_{8_{18}}(t) & = & -t^3+5t^2-10t+13-10t^{-1}+5t^{-2}-t^{-3}\\ & =_2 & t^3+t^2+1+t^{-2}+t^{-3}, \\ {\mit \Delta}^{(1)}_{1,1}(t) & = & -t^{-3}(t^3-1)^2=_2t^3+t^{-3}, \end{array}$$ and (\ref{eq:type12}), we have $$\begin{array}{rcl} (t+1)^2 {\mit \Delta}^{(1)}_{2k,2k}(t) & \doteq_2 & t^{4k+6}+t^{4k+5}+t^{4k+4}+t^{4k+2}+t^{4k-1}+t^7+t^4+t^2+t+1 \\ & =_2 & (t^2+t+1)^2(t^{4k+2}+t^{4k+1}+t^{4k-1}+t^3+t+1). \end{array}$$ \noindent (type II) From (\ref{eq:rel2}), we have : \begin{equation}\label{eq:rel6} \left\{ \begin{array}{ccl} \nabla_{r,s}^{(2)}(z) & = & \nabla_{0,s}^{(2)}(z)-rz\nabla_{\infty,s}^{(2)}(z) \\ \nabla_{r,s}^{(2)}(z) & = & \nabla_{r,0}^{(2)}(z)+sz\nabla_{r,\infty}^{(2)}(z). \end{array} \right. \end{equation} From (\ref{eq:rel6}), we have : \begin{equation*} \nabla_{r,s}^{(2)}(z)= \nabla_{0,0}^{(2)}(z)-rz\nabla_{\infty,0}^{(2)}(z)+sz\nabla_{0,\infty}^{(2)}(z) -rsz^2\nabla_{\infty,\infty}^{(2)}(z). 
\end{equation*} In particular, if $r=s=k$, then we have a skein relation among the Conway polynomials of $S^2_{k,k}$, $S^2_{0,0}$, $S^2_{0,\infty}$, $S^2_{\infty,0}$ and $S^2_{\infty,\infty}$ (cf.\ Figure 14): \begin{equation}\label{eq:rel7} \nabla_{k,k}^{(2)}(z)= \nabla_{0,0}^{(2)}(z)+kz(\nabla_{0,\infty}^{(2)}(z)-\nabla_{\infty,0}^{(2)}(z)) -k^2z^2\nabla_{\infty,\infty}^{(2)}(z). \end{equation} Since $S^2_{0,\infty}$ and $S^2_{\infty,0}$ are $2$-component links with $S^2_{0,\infty}=-(S^2_{\infty,0})^*$, $$\begin{array}{cll} \nabla^{(2)}_{0,0}(z) & = & -z^6+3z^4+z^2+1, \\ \nabla^{(2)}_{0,\infty}(z) & = & -2z^7-5z^5-5z^3-2z, \\ \nabla^{(2)}_{\infty,\infty}(z) & = & -z^{10}-7z^8-18z^6-15z^4-4z^2, \end{array}$$ and (\ref{eq:rel7}), we have $\nabla^{(2)}_{0,\infty}(z)=-\nabla^{(2)}_{\infty,0}(z)$ by Lemma \ref{lm:Conway}, and \begin{eqnarray*} {\mit \Delta}^{(2)}_{k,k}(t) & \doteq & t^3(-t^6+9t^5-26t^4+37t^3-26t^2+9t-1)\\ & & -2kt^2(t-1)^2(2t^6-7t^5+15t^4-18t^3+15t^2-7t+2)\\ & & +k^2(t-1)^2(t^{10}-3t^9+7t^8-17t^7+32t^6-40t^5+32t^4-17t^3\\ & & +7t^2-3t+1) \\ & \doteq_2 & \left\{ \begin{array}{cl} t^6+t^5+t^3+t+1 & (\mbox{$k$ is even}), \\ t^{12}+t^{11}+t^9+t^7+t^6+t^5+t^3+t+1 & (\mbox{$k$ is odd}).\ \qed \end{array} \right. \end{eqnarray*} \begin{figure} \caption{$S^1_{0,0}$, $S^1_{1,0}$, $S^1_{0,1}$ and $S^1_{1,1}$} \label{type1} \end{figure} \begin{figure} \caption{$S^2_{0,0}$, $S^2_{0,\infty}$, $S^2_{\infty,0}$ and $S^2_{\infty,\infty}$} \label{type2} \end{figure} Every element $f\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$ is of the form $$f=t^{k_d}+t^{k_{d-1}}+\cdots +t^{k_1}+t^{k_0}$$ where $k_0, \ldots, k_d$ are integers such that $k_0<k_1<\cdots <k_{d-1}<k_d$. Then we define the {\it mod $2$ trace}, denoted by $\mathrm{tr}_2(f)\in \mathbb{Z}/2\mathbb{Z}=\{0, 1\}$, as $$\mathrm{tr}_2(f) =\left\{ \begin{array}{cl} 1 & (k_d-k_{d-1}=1), \\ 0 & (k_d-k_{d-1}\ge 2). \end{array} \right.$$ For $f_1, f_2\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$, $\mathrm{tr}_2(f_1f_2)=\mathrm{tr}_2(f_1)+\mathrm{tr}_2(f_2)$. There exists an element $g\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$ such that $f=g^2$ if and only if every $k_i$\ $(i=0, \ldots, d)$ is even. Then we call $f$ a {\it square polynomial}, and we have $$g=t^{k_d/2}+\cdots +t^{k_1/2}+t^{k_0/2}$$ and $\mathrm{tr}_2(f)=0$. \begin{lm}\label{lm:square} Let $r(t)$ be of type $X$ as in (\ref{eq:r}), and $\alpha$ a positive odd integer. \begin{enumerate} \item[(1)] If $n=0$, then $r(t^{\alpha})$ is a square polynomial. If $n\ge 1$, then $r(t^{\alpha})$ is of the form $$r(t^{\alpha})=g^2p_{\lambda}(t^{\alpha})$$ where $g\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$ and $p_{\lambda}(t)=(t^{\lambda}-1)/(t-1)$. \item[(2)] $\mathrm{tr}_2(r(t^{\alpha}))=1$ if and only if $n\ge 1$ and $\alpha=1$. \end{enumerate} \end{lm} Let $\zeta_m$ be a primitive $m$-th root of unity, and $\mathbf{\Phi}_m(t)\in \mathbb{Z}[t]$ the $m$-th cyclotomic polynomial defined by $$\mathbf{\Phi}_m(t)=\prod_{ {\scriptstyle 1\le i\le m-1} \atop {\scriptstyle \gcd(i, m)=1}}(t-\zeta_m^i).$$ The cyclotomic polynomial is a monic symmetric irreducible polynomial over $\mathbb{Z}$. For a prime $q$ and a positive integer $r$, $$\mathbf{\Phi}_{q^r}(t)=\frac{t^{q^r}-1}{t^{q^{r-1}}-1} =t^{q^{r-1}(q-1)}+t^{q^{r-1}(q-2)}+\cdots +t^{q^{r-1}}+1.$$ Since $$t^m-1=\prod_{d\ge 1, d|m}\mathbf{\Phi}_d(t),$$ we have \begin{equation}\label{eq:cyclotomic} p_{\lambda}(t^{\alpha}) =\frac{t^{\alpha \lambda}-1}{t^{\alpha}-1} =\prod_{d|\alpha \lambda, d\not\ \! | \alpha}\mathbf{\Phi}_d(t).
\end{equation} \begin{theo}\label{th:Stoimenow} A Stoimenow knot $S_n$ is not invertible. \end{theo} \noindent {\bf Proof}\ \ We show that, for $k\ge 1$, neither $S^1_{2k,2k}$ nor $S^2_{k,k}$ is $(+)$-amphicheiral. \noindent (type I) Suppose that ${\mit \Delta}^{(1)}_{2k,2k}(t)$ satisfies the condition in Lemma \ref{lm:HK} (2). We set $$h=_2 t^{4k+2}+t^{4k+1}+t^{4k-1}+t^3+t+1,$$ and $m=q^r$ with an odd prime $q\ge 3$ and $r\ge 1$. Then \begin{equation}\label{eq:h} (t+1)^2{\mit \Delta}^{(1)}_{2k,2k}(t)\doteq_2 (t^2+t+1)^2h. \end{equation} \noindent {\bf Claim 1}\ {\it $\mathbf{\Phi}_m(t)$ is a mod $2$ divisor of $h$ only if $m=3, 5$ or $9$.} \noindent {\bf Proof}\ \ Take $Q(t), R(t)\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$ such that $h=_2\mathbf{\Phi}_m(t)Q(t)+R(t)$. We can take $R(t)$ of the form $$R(t)=_2t^{d+3}+t^{d+2}+t^d+t^3+t+1$$ where $-m/2<d<m/2$. The span of $R(t)$ is less than $m/2+3$. \noindent {\bf Case 1}\ $r\ge 2$, except the case $(q, r)=(3, 2)$. Since the degree of $\mathbf{\Phi}_m(t)$ is $q^{r-1}(q-1)$, which is greater than $q^r/2+3$, we would need $R(t)=0$, which does not occur. \noindent {\bf Case 2}\ $(q, r)=(3, 2)$ ($m=9$). $R(t)$ is not mod $2$ divisible by $\mathbf{\Phi}_9(t)=t^6+t^3+1$ except in the case $d=4$. \noindent {\bf Case 3}\ $r=1$. We only need to check the cases $m=3, 5$ and $7$. The case $m=7$ does not occur. Hence we have the result. \qed \noindent {\bf Claim 2}\ {\it $h$ is mod $2$ divisible by $\mathbf{\Phi}_3(t)$ if and only if $k\equiv 0\ (\mathrm{mod}\ \! 3)$. $h$ is mod $2$ divisible by $\mathbf{\Phi}_5(t)$ if and only if $k\equiv 1\ (\mathrm{mod}\ \! 5)$. $h$ is mod $2$ divisible by $\mathbf{\Phi}_9(t)$ if and only if $k\equiv -1\ (\mathrm{mod}\ \! 9)$.} \noindent {\bf Proof}\ \ $h$ is mod $2$ divisible by $\mathbf{\Phi}_3(t)$ if and only if $4k+1\equiv 1\ (\mathrm{mod}\ \! 3)$, which is equivalent to $k\equiv 0\ (\mathrm{mod}\ \! 3)$. $h$ is mod $2$ divisible by $\mathbf{\Phi}_5(t)$ if and only if $4k+1\equiv 0\ (\mathrm{mod}\ \! 5)$ and $4k-1\equiv 3\ (\mathrm{mod}\ \! 5)$, which is equivalent to $k\equiv 1\ (\mathrm{mod}\ \! 5)$. $h$ is mod $2$ divisible by $\mathbf{\Phi}_9(t)$ if and only if $4k+1\equiv 6\ (\mathrm{mod}\ \! 9)$, which is equivalent to $k\equiv -1\ (\mathrm{mod}\ \! 9)$. \qed \noindent {\bf Claim 3}\ {\it $\mathbf{\Phi}_{15}(t)$ is a mod $2$ divisor of $h$ if and only if $k\equiv -5\ (\mathrm{mod}\ \! 15)$. $\mathbf{\Phi}_{45}(t)$ is not a mod $2$ divisor of $h$.} \noindent {\bf Proof}\ \ For $\mathbf{\Phi}_{15}(t)=t^8-t^7+t^5-t^4+t^3-t+1$, we only need to check the cases $d=\pm 5, \pm 6$ and $\pm 7$. In these cases, $R(t)$ is mod $2$ divisible by $\mathbf{\Phi}_{15}(t)$ if and only if $4k-1\equiv -6\ (\mathrm{mod}\ \! 15)$, which is equivalent to $k\equiv -5\ (\mathrm{mod}\ \! 15)$. For $\mathbf{\Phi}_{45}(t)=t^{24}-t^{21}+t^{15}-t^{12}+t^9-t^3+1$, we only need to check the cases $d=\pm 21$ and $\pm 22$. In these cases, $R(t)$ is not mod $2$ divisible by $\mathbf{\Phi}_{45}(t)$. \qed \noindent {\bf Claim 4}\ {\it $p_{\lambda}(t^{\alpha})$ is a mod $2$ divisor of $h$ only if it equals $p_3(t)=\mathbf{\Phi}_3(t)=t^2+t+1$, $p_5(t)=\mathbf{\Phi}_5(t)=t^4+t^3+t^2+t+1$ or $p_3(t^3)=\mathbf{\Phi}_9(t)=t^6+t^3+1$.} \noindent {\bf Proof}\ \ By Claim 1, Claim 2, Claim 3 and (\ref{eq:cyclotomic}), we have the result. \qed By Lemma \ref{lm:StAlex}, we have $\mathrm{tr}_2({\mit \Delta}^{(1)}_{2k,2k}(t))=1$.
By Lemma \ref{lm:square}, Claim 1, Claim 2, Claim 3, Claim 4 and (\ref{eq:h}), $h$ is of the form : $$h\doteq_2 g^2p_3(t),\ g^2p_5(t)\ \mbox{or}\ g^2p_5(t)p_3(t^3)$$ for some $g\in (\mathbb{Z}/2\mathbb{Z})[t, t^{-1}]$. However we have $$\frac{h}{t^2+t+1}=_2t^{4k}+\cdots+t^5+t^4+t^2+1$$ for $k\equiv 0\ (\mathrm{mod}\ \! 3), k\ge 3$, $$\frac{h}{t^4+t^3+t^2+t+1}=_2t^{4k-2}+\cdots+t^3+t^2+1$$ for $k\equiv 1\ (\mathrm{mod}\ \! 5), k\ge 6$, and $$\frac{h}{(t^4+t^3+t^2+t+1)(t^6+t^3+1)}=_2 t^{4k-8}+\cdots+t^5+t^2+1$$ for $k\equiv 26\ (\mathrm{mod}\ \! 45), k\ge 26$ are not square polynomials. It is a contradiction. \noindent (type II) Suppose that ${\mit \Delta}^{(2)}_{k,k}(t)$ satisfies the condition in Lemma \ref{lm:HK} (2). By Lemma \ref{lm:StAlex}, we have $\mathrm{tr}_2({\mit \Delta}^{(2)}_{k,k}(t))=1$. By Lemma \ref{lm:square}, there exists an odd $\lambda \ge 3$ such that $p_{\lambda}(t)$ is a mod $2$ divisor of ${\mit \Delta}^{(2)}_{k,k}(t)$. If $k$ is odd, then there is no such $\lambda$ (Check only the cases $\lambda=3, 5, 7, 9, 11$). Hence we suppose that $k$ is even. Since $${\mit \Delta}^{(2)}_{k,k}(t)\doteq_2 (t^2+t+1)^3,$$ we have $\lambda=3$. By the forms (\ref{eq:r}) and Lemma \ref{lm:HK} (2), ${\mit \Delta}^{(2)}_{k,k}(t)$ is of the form : \begin{equation}\label{eq:form} {\mit \Delta}^{(2)}_{k,k}(t)\doteq r_1(t)r_2(t)r_3(t) \end{equation} where $r_i(t)\doteq r_i(t^{-1})$, $|r_i(1)|=1$ and $r_i(t)\doteq_2t^2+t+1$\ ($i=1, 2, 3$). That is, ${\mit \Delta}^{(2)}_{k,k}(t)$ is decomposed into at least three non-trivial factors in $\mathbb{Z}[t, t^{-1}]$. We set $d_i$ as the degree (span) of $r_i(t)$\ $(i=1, 2, 3)$, and assume $d_1\le d_2\le d_3$. There are two cases : \noindent {\bf Case 1}\ $k\equiv 0\ (\mathrm{mod}\ \! 4)$. By Lemma \ref{lm:StAlex}, we have the mod $8$ Alexander polynomial : $${\mit \Delta}^{(2)}_{k,k}(t)\doteq_8 t^6-t^5+2t^4+3t^3+2t^2-t+1.$$ Since $t^2\pm t+1$ and $t^2\pm 3t+1$ are not mod $8$ divisors of ${\mit \Delta}^{(2)}_{k,k}(t)$, the case does not occur. \noindent {\bf Case 2}\ $k\equiv 2\ (\mathrm{mod}\ \! 4)$. By Lemma \ref{lm:StAlex}, we have the mod $8$ Alexander polynomial : $$\begin{array}{rcl} {\mit \Delta}^{(2)}_{k,k}(t) & \doteq_8 & 4t^{12}+4t^{11}+3t^9+t^8-2t^7-3t^6-2t^5+t^4+3t^3+4t+4 \\ & \doteq_8 & (t^2-t+1)(4t^{10}+4t^8-t^7+4t^6+3t^5+4t^4-t^3+4t^2+4). \end{array}$$ We set $s=4t^{10}+4t^8-t^7+4t^6+3t^5+4t^4-t^3+4t^2+4$. In this case, the $\mathbb{Z}$-degree of ${\mit \Delta}^{(2)}_{k,k}(t)$ is $12$ which is equal to the mod $8$ degree of it. By the assumption, there are three cases for the triple $(d_1, d_2, d_3)$ : $(d_1, d_2, d_3)=(2, 2, 8)$, $(2, 4, 6)$ or $(4, 4, 4)$. The possibilities of the degree $2$ mod $8$ factors are $t^2\pm t+1$ and $t^2\pm 3t+1$. Since $t^2\pm t+1$ and $t^2\pm 3t+1$ are not mod $8$ divisors of $s$, $s$ is decomposed into $s=s_1s_2$ such that the degrees of $s_1$ and $s_2$ are $4$ and $6$ respectively, they are both irreducible, and $s_1\doteq_2s_2\doteq_2 t^2+t+1$. By (\ref{eq:form}), $s_1$ and $s_2$ are of the form : $$\begin{array}{ccl} s_1 & \doteq_8 & 2t^4+a_1t^3+a_2t^2+a_1t+2\doteq_2 t^2+t+1, \\ s_2 & \doteq_8 & 2t^6+b_1t^5+b_2t^4+b_3t^3+b_2t^2+b_1t+2 \doteq_2 t^2+t+1 \end{array}$$ where $a_1$, $a_2$, $b_2$ and $b_3$ are odd, and $b_1$ is even. Then the $9$-th coefficient of $s_1s_2$ is odd (non-zero). However it contradicts the form of $s$. 
\qed At the end of the paper, we raise refined questions related to Question \ref{qu:cp}: \begin{qu}\label{qu:mincr} \begin{enumerate} \item[(1)] Is there a prime component-preservingly amphicheiral link with odd minimal crossing number less than $21$? \item[(2)] Is there a prime component-preservingly $(\varepsilon)$-amphicheiral link with odd minimal crossing number? \end{enumerate} \end{qu} Regarding (1), it is already known that there are no such examples with minimal crossing number $\le 11$ (cf.\ \cite{Kd3}). If we need to use an amphicheiral knot with odd minimal crossing number as a component, then by primeness the minimal crossing number should be greater than or equal to $19$. Under this restriction, if there exists an example $L$ for Question \ref{qu:mincr} (1) with minimal crossing number $19$, then $L$ is a $2$-component link such that (i)\ its components are a knot with minimal crossing number $15$ and the unknot, (ii)\ $\mathrm{lk}\ \! (L)=0$, and (iii)\ in a diagram of $L$ realizing the minimal crossing number, its components also realize their minimal crossing numbers (i.e.\ $15$ and $0$). Regarding (2), our example $L_n$ is a prime component-preservingly $(-, +)$-amphicheiral link with odd minimal crossing number. In general, the linking number of a $2$-component $(\varepsilon)$-amphicheiral link is $0$. The link $11_{n247}^2$ in Figure 2 is a prime $(\varepsilon)$-amphicheiral link with odd minimal crossing number; however, it is not component-preservingly $(\varepsilon)$-amphicheiral. {\noindent {\bf Acknowledgements}}\ The authors would like to express their gratitude to Professor Akio Kawauchi, Professor Taizo Kanenobu, Kenji Shibata, the members of the Topology Seminar at Osaka City University, and the anonymous referee for their useful comments. {\footnotesize } {\footnotesize \par Teruhisa KADOKAMI\par Department of Mathematics,\par East China Normal University,\par Dongchuan-lu 500, Shanghai, 200241, China \par {\tt [email protected]}\par {\tt [email protected]}\par Yoji KOBATAKE \par {\tt [email protected]}\par } \end{document}
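The arithmetic underlying the proof of Theorem \ref{thm:main} (the evaluations of the Kauffman brackets at a primitive $8$-th root of unity and the resulting divisibility contradiction) is elementary, and it can be double-checked numerically for small $k$. The following Python sketch is not part of the paper above; it only re-evaluates the bracket values stated in Section 4 and checks that $|\langle L_n\rangle(\zeta)|$ is never divisible by $|\langle S_n\rangle(\zeta)|$ in the range tested.

\begin{verbatim}
# zeta is a primitive 8-th root of unity; by the equations in Section 4 the
# brackets evaluate as <D>(zeta) = <D_00> - 4k*sqrt(-1)*<D_0oo> + 4k^2*<D_oooo>.
I = complex(0, 1)

def bracket(b00, b0inf, binfinf, k):
    v = b00 - 4 * k * I * b0inf + 4 * k**2 * binfinf
    assert abs(v.imag) < 1e-9          # the result should be real here
    return round(v.real)

for k in range(1, 201):
    K1 = bracket(45, -39 * I, 37, k)       # type I knot:  148k^2 - 156k + 45
    L1 = bracket(98, -70 * I, 82, k)       # type I link:  328k^2 - 280k + 98
    K2 = bracket(109, -132 * I, 160, k)    # type II knot: 640k^2 - 528k + 109
    L2 = bracket(290, -264 * I, 320, k)    # type II link: 1280k^2 - 1056k + 290
    assert K1 == 148*k**2 - 156*k + 45 and L1 == 328*k**2 - 280*k + 98
    assert K2 == 640*k**2 - 528*k + 109 and L2 == 1280*k**2 - 1056*k + 290
    # If L_n were a connected sum containing S_n, |<L_n>(zeta)| would be
    # divisible by |<S_n>(zeta)|; this never happens in the range tested.
    assert L1 % K1 != 0 and L2 % K2 != 0
print("divisibility fails for 1 <= k <= 200, as claimed in the proof")
\end{verbatim}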
\begin{document} \setlength{\baselineskip}{4.9mm} \setlength{\abovedisplayskip}{4.5mm} \setlength{\belowdisplayskip}{4.5mm} \renewcommand{\roman{enumi}}{\roman{enumi}} \renewcommand{(\theenumi)}{(\roman{enumi})} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \allowdisplaybreaks[2] \parindent=20pt \begin{center} {\bf Diagram automorphisms and quantum groups } \par Toshiaki Shoji and Zhiping Zhou \\ \title{} \end{center} \begin{abstract} Let $\BU^-_q = \BU^-_q(\Fg)$ be the negative part of the quantum group associated to a finite dimensional simple Lie algebra $\Fg$, and $\s : \Fg \to \Fg$ be the automorphism obtained from the diagram automorphism. Let $\Fg^{\s}$ be the fixed point subalgebra of $\Fg$, and put $\ul\BU^-_q = \BU^-_q(\Fg^{\s})$. Let $\BB$ be the canonical basis of $\BU_q^-$ and $\ul\BB$ the canonical basis of $\ul\BU_q^-$. $\s$ induces a natural action on $\BB$, and we denote by $\BB^{\s}$ the set of $\s$-fixed elements in $\BB$. Lusztig proved that there exists a canonical bijection $\BB^{\s} \simeq \ul\BB$ by using geometric considerations. In this paper, we construct such a bijection in an elementary way. We also consider such a bijection in the case of certain affine quantum groups, by making use of PBW-bases constructed by Beck and Nakajima. \end{abstract} \let\fnsymbol{footnote}\relax\ \footnote{2010 {\it Mathematics Subject Classification.} Primary 17B37; Secondary 20G42, 81R50.} \footnote{{\it Key Words and Phrases.} Quantum groups, canonical bases, PBW-bases} \maketitle \markboth{SHOJI-ZHOU}{Diagram automorphisms} \pagestyle{myheadings} \begin{center} {\sc Introduction } \end{center} \para{0.1.} Let $X$ be a Dynkin diagram with vertex set $I$, and $\Fg$ the semisimple Lie algebra associated to $X$. We denote by $\BU_q = \BU_q(\Fg)$ the quantum enveloping algebra of $\Fg$, and by $\BU_q^-$ its negative part, which are associative algebras over $\BQ(q)$. Let $W$ be the Weyl group of $\Fg$, and $w_0$ the longest element of $W$. Let $\Bh = (i_1, \dots, i_{\nu})$ be a sequence of $i_k \in I$ such that $w_0 = s_{i_1}\cdots s_{i_{\nu}}$ gives a reduced expression of $w_0$, where $s_i (i \in I)$ are simple reflections in $W$. For each $\Bh$ as above, there exists a basis $\SX_{\Bh}$ of $\BU^-_q$, called the PBW-basis of $\BU_q^-$. Put $\BA = \BZ[q, q\iv]$, and let ${}_{\BA}\BU_q^-$ be Lusztig's integral form of $\BU_q^-$. We consider the following statements. \par \noindent (0.1.1) \begin{enumerate} \item The $\BZ[q]$-submodule of $\BU_q^-$ generated by $\SX_{\Bh}$ is independent of the choice of $\Bh$, which we denote by $\SL_{\BZ}(\infty)$. \item The $\BZ$-basis of $\SL_{\BZ}(\infty)/q\SL_{\BZ}(\infty)$ induced from $\SX_{\Bh}$ is independent of the choice of $\Bh$. \item For each $\Bh$, PBW-basis $\SX_{\Bh}$ gives rise to an $\BA$-basis of ${}_{\BA}\BU_q^-$. \end{enumerate} We also consider a weaker version of (iii), \par (iii$\,'$) \ For each $\Bh$, any element of $\SX_{\Bh}$ is contained in ${}_{\BA}\BU_q^-$. \par The canonical basis $\BB$ of $\BU^-_q$ was constructed by Lusztig [L2, L3] by using a geometric method. It is known that it coincides with the global crystal basis of Kashiwara [K1]. \par The statement (0.1.1) can be verified in general by making use of the canonical basis or Kashiwara's global crystal basis. 
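To fix ideas, we note the simplest example (which will not be used later): if $X$ is of type $A_1$ with $I = \{ i \}$, then there is only one choice $\Bh = (i)$, and the corresponding PBW-basis consists of the divided powers of the unique generator,
\begin{equation*}
\SX_{\Bh} = \{ f_i^{(c)} \mid c \in \BN \},
\end{equation*}
so that (i), (ii) and (iii) in (0.1.1) hold trivially; in this case $\SX_{\Bh}$ already coincides with the canonical basis $\BB$.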
\para{0.2.} We are interested in an elementary construction of canonical bases, in the sense that we appeal neither to Lusztig's geometric theory of canonical bases nor to Kashiwara's theory of crystal bases. We shall construct canonical bases (as discussed in [L2]) by making use of PBW-bases, based on the properties (0.1.1). In fact, in those theories, canonical bases or crystal bases are constructed independently of PBW-bases. However, those constructions look like a huge black box, and it is not easy to trace the construction even in small rank cases. On the other hand, the construction of PBW-bases is more explicit, their parametrization is easy, and they are well suited to direct computations. So it is important to express the canonical basis in terms of PBW-bases, a problem closely related to the elementary construction of the canonical basis. \par In the case where $X$ is simply laced, the verification of (0.1.1) is rather easy. In the non-simply laced case, the problem is reduced to the case of type $B_2$ or $G_2$. The properties (i) and (iii) were verified in [L1], by computing the commutation relations of root vectors in the case of type $B_2$, and by applying the method of Kostant on the $\BZ$-form of Chevalley groups in the case of type $G_2$. Later, [X1] gave a proof of (iii) similar to the case of $B_2$. But in any case, it requires a hard computation. In [X2], Xi computed, in the case of $B_2$, the canonical basis of $\BU^-_q$ explicitly in terms of the PBW-basis. The property (ii) follows from his result. But the property (ii) for $G_2$ has not yet been verified (by an elementary method). \par Assuming (i) and (iii) in (0.1.1), one can construct the ``canonical basis'', which is independent of $\Bh$ only up to $\pm 1$. We call it the signed basis of $\BU^-_q$. Thus in the non-simply laced case, one can construct the signed basis. \para{0.3.} Assume that $X$ is simply laced, and let $\s$ be a graph automorphism of $X$. We denote by $\ul I$ the set of orbits in $I$ under the action of $\s : I \to I$. Then $\s$ determines a Dynkin diagram $\ul X$ whose vertex set is given by $\ul I$. $\ul X$ corresponds to the $\s$-fixed point subalgebra $\Fg^{\s}$ of $\Fg$, and we denote by $\ul\BU_q = \BU_q(\Fg^{\s})$ the corresponding quantum enveloping algebra, and $\ul\BU_q^-$ its negative part. Let $\BB$ be the canonical basis of $\BU^-_q$. Then $\s$ permutes $\BB$, and we denote by $\BB^{\s}$ the set of $\s$-fixed elements in $\BB$. We also denote by $\ul\BB$ the canonical basis of $\ul\BU^-_q$. In [L4] (and in [L3]), Lusztig proved that there exists a canonical bijection between $\BB^{\s}$ and $\ul\BB$, based on geometric considerations of canonical bases. \par In this paper, we construct the bijection $\BB^{\s} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ul \BB$ in an elementary way. We assume that $\s$ is admissible, namely for $\eta \in \ul I$, if $i, j \in \eta$ with $i \ne j$, then $i$ and $j$ are not joined in $X$. Let $\ve$ be the order of $\s$. We assume that $\ve = 2$ or 3 (note that if $X$ is irreducible, then $\ve = 2$ or 3). Let $\BF$ be the finite field $\BZ/\ve\BZ$, and put $\BA' = \BF[q,q\iv] = \BA/\ve\BA$. Let ${}_{\BA}\BU^{-,\s}_q$ be the subalgebra of ${}_{\BA}\BU^-_q$ consisting of $\s$-fixed elements, and consider the $\BA'$-algebra ${}_{\BA'}\BU^{-,\s}_q = {}_{\BA}\BU^{-,\s}_q\otimes_{\BA}\BA'$.
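To fix ideas, note that passing from $\BA$ to $\BA'$ simply means reading all coefficients modulo $\ve$ (this remark is only an illustration and is not used later). In particular, when $\ve = 2$, all signs disappear in ${}_{\BA'}\BU^{-,\s}_q$: for instance, for any $x, y \in {}_{\BA}\BU^{-,\s}_q$,
\begin{equation*}
xy - yx = xy + yx \quad\text{ in } {}_{\BA'}\BU^{-,\s}_q,
\end{equation*}
so two elements of ${}_{\BA}\BU^{-,\s}_q$ which differ only in signs have the same image in ${}_{\BA'}\BU^{-,\s}_q$.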
Let $J$ be the $\BA'$-submodule of ${}_{\BA'}\BU^{-,\s}_q$ consisting of elements of the form $\sum_{0 \le i < \ve}\s^i(x)$ for $x \in {}_{\BA'}\BU^-_q$. Then $J$ is a two-sided ideal of ${}_{\BA'}\BU^{-,\s}_q$, and we denote by $\BV_q$ the quotient algebra ${}_{\BA'}\BU^{-,\s}_q/J$. We define ${}_{\BA'}\ul\BU_q^-$ similarly to ${}_{\BA'}\BU_q^-$. We can prove the following result (Proposition 1.20 and Corollary 1.21). \begin{thm} Assume that {\rm (iii)} in {\rm (0.1.1)} holds for ${}_{\BA}\BU_q^-$, and {\rm (iii$\, '$)} holds for ${}_{\BA}\ul\BU_q^-$. Then we have an isomorphism of $\BA'$-algebras \begin{equation*} \tag{0.4.1} {}_{\BA'}\ul\BU_q^- \simeq \BV_q. \end{equation*} Moreover {\rm (iii)} holds for ${}_{\BA}\ul\BU_q^-$. \end{thm} By Theorem 0.4, one can define the signed basis for $\ul\BU^-_q$ by assuming (iii$\,'$). But in the case of $G_2$, we have a more precise result (Proposition 1.23), namely \begin{prop} Let $\ul\BU_q^-$ be of type $G_2$. Then the ambiguity of the sign can be removed in the signed basis, hence {\rm (ii)} of {\rm (0.1.1)} holds for $\ul\BU_q^-$. \end{prop} Combined with the natural surjection ${}_{\BA'}\BU_q^{-,\s} \to \BV_q$, (0.4.1) gives a surjective map ${}_{\BA'}\BU_q^{-,\s} \to {}_{\BA'}\ul\BU_q^-$. This map is compatible with PBW-bases, hence induces a natural map $\BB^{\s} \to \ul\BB$, which is shown to be bijective (see Remark 1.24). Thus we can recover Lusztig's bijection $\BB^{\s} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ul\BB$ by an elementary method. \para{0.6.} In Beck and Nakajima [BN], PBW-bases were constructed for the affine quantum enveloping algebras $\BU_q^-$. They showed that an analogue of (iii$'$) holds for those PBW-bases, and that an analogue of (iii) holds if the corresponding diagram $X$ is simply laced. We apply the previous discussion to the case where $X$ is simply laced of type $A_{2n+1}^{(1)}$ ($n \ge 1$), $D_n^{(1)}$ ($n \ge 4$), $E_6^{(1)}$ with $\ve = 2$, and $D_4^{(1)}$ with $\ve = 3$. Then $\ul X$ is twisted affine of type $D^{(2)}_{n+2}$, $A^{(2)}_{2n-3}$, $E_6^{(2)}$ and $D_4^{(3)}$, respectively (under the notation in [Ka, 4.8]). We have (Corollary 2.17), \begin{thm} Assume that $\ul X$ is twisted of type $D^{(2)}_n$, $A^{(2)}_{2n-1}$, $E^{(2)}_6$ or $D^{(3)}_4$. Then {\rm (iii)} holds for $\ul\BU_q^-$. Moreover the surjective map ${}_{\BA'}\BU_q^{-,\s} \to {}_{\BA'}\ul\BU_q^-$ gives a natural bijection $\BB^{\s} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ul\BB$. \end{thm} \remark{0.8.} Assume that $\Fg$ is an affine Lie algebra, and $\Fg_0$ the associated finite dimensional subalgebra of $\Fg$. We consider the automorphism $\s : \Fg \to \Fg$. In order to apply the construction of PBW-basis in [BN] to our $\s$-setting, we need to assume that $\s$ leaves $\Fg_0$ invariant. Then $\Fg^{\s}$ is necessarily of twisted affine type. Our discussion cannot cover the case where $\Fg^{\s}$ is of untwisted type. \para{0.9.} As mentioned in 0.3, Lusztig has given a canonical bijection between the set of $\s$-stable canonical bases of $\BU_q^-$ and the set of canonical bases of $\ul\BU_q^-$. A closely related problem for crystal bases was also studied by many researchers, such as Naito and Sagaki [NS] and Savage [S]. However, those results are concerned only with the level of parametrization, since there exists no direct relationship between $\BU_q^{\s}$ and $\ul\BU_q^-$.
The main observation in our work is that if we replace $\BA = \BZ[q,q\iv]$ by $\BA' = (\BZ/\ve\BZ)[q,q\iv]$, we obtain a natural surjective map of $\BA'$-algebras from ${}_{\BA'}\BU_q^{-,\s}$ to ${}_{\BA'}\ul\BU_q^-$. This has the advantage that we can directly compare the algebra structures of $\ul\BU_q^-$ and of $\BU_q^{-,\s}$, and not only the correspondence of bases. For example, the following is an easy consequence of our results. (Notations are as in Section 1 for the Dynkin case. A similar result also holds for the affine case.) \begin{thm} Let $b(\ul\Bc, \ul\Bh)$ be a canonical basis of $\ul\BU_q^-$, and $b(\Bc, \Bh)$ the corresponding $\s$-stable canonical basis of $\BU_q^-$. We write them as \begin{align*} b(\ul\Bc, \ul\Bh) &= L(\ul\Bc, \ul\Bh) + \sum_{\ul\Bd > \ul \Bc}a_{\ul\Bd}L(\ul\Bd, \ul\Bh), \\ b(\Bc, \Bh) &= L(\Bc, \Bh) + \sum_{\Bd' > \Bc}a'_{\Bd'}L(\Bd', \Bh), \end{align*} with $a'_{\Bd'}, a_{\ul\Bd} \in q\BZ[q]$. If $L(\Bd', \Bh)$ is the $\s$-stable PBW-basis corresponding to $L(\ul\Bd, \ul\Bh)$, then we have $a'_{\Bd'} \equiv a_{\ul\Bd} \pmod \ve$. \end{thm} Some examples of Theorem 0.10 for small rank cases were computed in [MNZ]. \par This research grew out of a question concerning the elementary construction of canonical bases, posed by H. Nakajima in his lecture notes [N] for the lectures at Sophia University, 2006. The authors are grateful to him for his helpful suggestions. \par \section{PBW-bases and canonical bases} \para{1.1.} In this paper, we understand that a Cartan datum is a pair $X = (I, (\ , \ ))$, where $(\ ,\ )$ is a symmetric bilinear form on $\bigoplus_{i \in I}\BQ\a_i$ (a finite dimensional vector space over $\BQ$ with the basis $\{ \a_i\}$ indexed by $I$) such that $(\a_i, \a_j) \in \BZ$, satisfying the following properties \par \begin{itemize} \item $(\a_i,\a_i) \in 2\BZ_{> 0}$ for any $i \in I$, \item $\frac{2(\a_i,\a_j)}{(\a_i,\a_i)} \in \BZ_{\le 0}$ for any $i \ne j$ in $I$. \end{itemize} The Cartan datum $X$ is called simply laced if $(\a_i, \a_j) \in \{ 0,-1\}$ for any $i \ne j$ in $I$, and $(\a_i,\a_i) = 2$ for any $i \in I$. The Cartan datum $X$ determines a graph with the vertex set $I$. If the associated graph is connected, $X$ is said to be irreducible. Put $a_{ij} = 2(\a_i, \a_j)/(\a_i,\a_i)$ for any $i,j \in I$. The matrix $(a_{ij})$ is called the Cartan matrix. \par In the case where the bilinear form is positive definite, $X$ is called of finite type. In that case, the associated graph is a Dynkin diagram. In the case where the bilinear form is positive semi-definite, $X$ is called of affine type. In that case, the associated graph is a Euclidean diagram. In this paper, we are concerned with $X$ of finite type or affine type. \para{1.2.} Let $X = (I, (\ ,\ ))$ be a simply laced Cartan datum, and let $\s : I \to I$ be a permutation such that $(\s(\a_i), \s(\a_j)) = (\a_i,\a_j)$ for any $i, j \in I$. Let $\ul I$ be the set of orbits of $\s$ on $I$. We assume that $\s$ is admissible, namely for each orbit $\eta \in \ul I$, $(\a_i, \a_j) = 0$ for any $i \ne j$ in $\eta$. \par We define a symmetric bilinear form $(\ ,\ )_1$ on $\bigoplus_{\eta \in \ul I}\BQ \a_{\eta}$ by \begin{equation*} (\a_{\eta}, \a_{\eta'})_1 = \begin{cases} 2|\eta| &\quad\text{ if } \eta = \eta', \\ - |\{ (i, j) \in \eta \times \eta' \mid (\a_i, \a_j) \ne 0\}| &\quad\text{ if } \eta \ne \eta'. \end{cases} \end{equation*} It is easy to see that $\ul X = (\ul I, (\ ,\ )_1)$ defines a Cartan datum. \para{1.3.} Let $I = \{1,2, \dots, 2n -1\}$ for $n \ge 1$.
For $i, j \in I$, we put $(\a_i, \a_j) = 2$ if $i = j$, $(\a_i,\a_j) = -1$ if $i - j = \pm 1$, and $(\a_i, \a_j) = 0$ otherwise. Then $(I, (\ ,\ ))$ is a simply laced irreducible Cartan datum of type $A_{2n -1}$. We define a permutation $\s : I \to I$ by $\s(i) = 2n -i$ for all $i$. Then $\s$ satisfies the condition in 1.2. We can identify $\ul I$ with the set $\{ \ul 1, \dots, \ul n \}$, where $\ul i = \{ i, 2n - i\}$ for $1 \le i \le n-1$ and $\ul n = \{ n \}$. Then $(\ul I, (\ ,\ )_1)$ is the Cartan datum of type $B_n$. \para{1.4.} Let $I = \{ 1, 2, 2',2''\}$. We define a permutation $\s : I \to I$ of order 3 by $\s(1) = 1$ and $\s : 2 \mapsto 2' \mapsto 2'' \mapsto 2$. The set $\ul I$ of orbits of $\s$ in $I$ is given by $\ul I = \{ \ul 1, \ul 2\}$, where $\ul 1 = \{ 1 \}$ and $\ul 2 = \{ 2, 2', 2'' \}$. We define a symmetric bilinear form on $\bigoplus_{i \in I}\BQ\a_i$ by \begin{equation*} (\a_i, \a_j) = \begin{cases} 2 &\quad\text{ if } i = j, \\ -1 &\quad\text{ if $i \in \ul 1, j \in \ul 2$ or $i \in \ul 2, j \in \ul 1$}, \\ 0 &\quad\text{ if } i, j \in \ul 2, i \ne j. \end{cases} \end{equation*} Then $(I, (\ ,\ ))$ gives the Cartan datum of type $D_4$. $\s : I \to I$ satisfies the condition in 1.2, and $(\ul I, (\ ,\ )_1)$ gives the Cartan datum of type $G_2$. \para{1.5.} Let $q$ be an indeterminate, and for an integer $n$ and a positive integer $m$, put \begin{equation*} [n]_q = \frac{q^n - q^{-n}}{q - q\iv}, \quad [m]_q^! = \prod_{i = 1}^m[i]_q, \quad [0]^!_q = 1. \end{equation*} For each $i \in I$, put $q_i = q^{(\a_i,\a_i)/2}$, and consider $[n]_{q_i}$, etc. by replacing $q$ by $q_i$ in the above formulas. Let $\BU_q^-$ be the negative part of the quantum enveloping algebra $\BU_q$ associated to a Cartan datum $X = (I, (\ ,\ ))$. Hence $\BU_q^-$ is an associative algebra over $\BQ(q)$ with generators $f_i$ ($i \in I$) satisfying the fundamental relations \begin{equation*} \tag{1.5.1} \sum_{k = 0}^{1-a_{ij}}(-1)^kf_i^{(k)}f_jf_i^{(1- a_{ij} - k)} = 0 \end{equation*} for any $i \ne j \in I$, where $f_i^{(n)} = f_i^n/[n]^!_{q_i}$ for a non-negative integer $n$. \par We now assume that the Cartan datum $X$ is simply laced. Then $[n]_{q_i} = [n]_q$ for any $i \in I$. Let $\s : I \to I$ be the automorphism as in 1.2. Then $\s$ induces an algebra automorphism $\s : \BU_q^- \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \BU_q^-$ by $f_i \mapsto f_{\s(i)}$. We denote by $\BU_q^{-,\s}$ the subalgebra of $\BU_q^-$ consisting of $\s$-fixed elements. Let $\BA = \BZ[q,q\iv]$, and ${}_{\BA}\!\BU_q^-$ be the $\BA$-subalgebra of $\BU_q^-$ generated by $f_i^{(a)}$ for $i \in I$ and $a \in \BN$. Then $\s$ stabilizes ${}_{\BA}\!\BU_q^-$, and we can define ${}_{\BA}\!\BU_q^{-,\s}$, the subalgebra of ${}_{\BA}\!\BU_q^-$ consisting of $\s$-fixed elements. \par Let $\ul X = (\ul I, (\ ,\ )_1)$ be the Cartan datum obtained from $\s$ as in 1.2. We denote by $\ul{\BU}_q^-$ the negative part of the quantum enveloping algebra associated to $\ul X$, namely, $\ul{\BU}_q^-$ is the $\BQ(q)$-algebra generated by $\ul f_{\eta}$ with $\eta \in \ul I$ satisfying relations similar to (1.5.1). \par Let $\ve$ be the order of $\s$ (here we assume that $\ve = 2$ or 3), and let $\BF = \BZ/\ve\BZ$ be the finite field with $\ve$ elements. Put $\BA' = \BF[q,q\iv]$, and consider the $\BA'$-algebra \begin{equation*} \tag{1.5.2} {}_{\BA'}\BU_q^{-,\s} = {}_{\BA}\BU_q^{-,\s}\otimes_{\BA}\BA' \simeq {}_{\BA}\BU_q^{-,\s}/\ve({}_{\BA}\BU_q^{-,\s}).
\end{equation*} Let $J$ be the $\BA'$-submodule of ${}_{\BA'}\BU_q^{-,\s}$ consisting of elements of the form $\sum_{0 \le i < \ve}\s^i(x)$ for $x \in {}_{\BA'}\BU_q^-$. Then $J$ is a two-sided ideal of ${}_{\BA'}\BU_q^{-,\s}$, and we denote by $\BV_q$ the quotient algebra ${}_{\BA'}\BU_q^{-,\s}/J$. Let $\pi : {}_{\BA'}\BU_q^{-,\s} \to \BV_q$ be the natural map. \par Let $\ul\BU_q^-$ be as before. We can define ${}_{\BA}\ul\BU_q^-$ and ${}_{\BA'}\ul\BU_q^-$ similarly to ${}_{\BA}\BU_q^-$ and ${}_{\BA'}\BU_q^-$. \para{1.6.} In the rest of this section, we assume that $X$ is of finite type. Let $W$ be the Weyl group associated to the Cartan datum $X$, with simple reflections $\{ s_i \mid i \in I\}$. Let $l : W \to \BN$ be the standard length function of $W$ relative to the generators $s_i \ (i \in I)$. Let $w_0$ be the unique longest element in $W$ with respect to $l$, and put $\nu = l(w_0)$. Let $\ul W$ be the Weyl group associated to the Cartan datum $\ul X$, with simple reflections $\{ \ul s_{\eta} \mid \eta \in \ul I\}$. Then $\ul l, \ul w_0, \ul \nu$ with respect to $\ul X$ are defined similarly to $l, w_0, \nu$. For any $\eta \in \ul I$, let $w_{\eta}$ be the product of $s_i$ for $i \in \eta$ (note, by our assumption, that such $s_i$ are mutually commuting). Then $\ul W$ can be identified with the subgroup of $W$ generated by $\{ w_{\eta} \mid \eta \in \ul I\}$ under the correspondence $\ul s_{\eta} \lra w_{\eta}$. The map $s_i \mapsto s_{\s(i)}$ defines an automorphism $\s : W \to W$, and $\ul W$ coincides with the subgroup $W^{\s} = \{ w \in W \mid \s(w) = w\}$ of $W$ under the above identification. We have $w_0 = \ul w_0$, and if $\ul w_0 = \ul s_{\eta_1}\cdots \ul s_{\eta_{\ul \nu}}$ is a reduced expression of $\ul w_0$, then $w_0 = w_{\eta_1}\cdots w_{\eta_{\ul \nu}}$, which satisfies the relation $\sum_{k = 1}^{\ul \nu} l(w_{\eta_k}) = \nu$. Thus if we write $w_{\eta} = \prod_{i \in \eta}s_i$ for any $\eta \in \ul I$, $w_0 = w_{\eta_1}\cdots w_{\eta_{\ul \nu}}$ induces a reduced expression of $w_0$, \begin{equation*} \tag{1.6.1} w_0 = (\prod_{k_1 \in \eta_1}s_{k_1})\cdots (\prod_{k_{\ul \nu} \in \eta_{\ul \nu}}s_{k_{\ul \nu}}) = s_{i_1}\cdots s_{i_{\nu}}. \end{equation*} We write $\ul \Bh = (\eta_1, \dots, \eta_{\ul \nu})$ and $\Bh = (i_1, \dots, i_{\nu})$. Note that $\Bh$ is determined from $\ul \Bh$ by choosing the expression $w_{\eta} = s_{k_1}\cdots s_{k_{|\eta|}}$ for each $\eta$. \para{1.7.} For any $i \in I$ the braid group action $T_i : \BU_q \to \BU_q$ is defined as in [L4, \S 39] (denoted by $T''_{s_i,1}$ there). Let $\Bh = (i_1, \dots, i_{\nu})$ be a sequence such that $w_0 = s_{i_1}\cdots s_{i_{\nu}}$ is a reduced expression. For $\Bc = (c_1, \dots, c_{\nu}) \in \BN^{\nu}$, put \begin{equation*} \tag{1.7.1} L(\Bc, \Bh) = f_{i_1}^{(c_1)}T_{i_1}(f_{i_2}^{(c_2)})\cdots (T_{i_1}\cdots T_{i_{{\nu}-1}})(f_{\nu}^{(c_{\nu})}). \end{equation*} Then $\{ L(\Bc, \Bh) \mid \Bc \in \BN^{\nu} \}$ gives a PBW-basis of $\BU_q^-$, which we denote by $\SX_{\Bh}$. Now assume given $\s : I \to I$ as in 1.2. Then $\s\circ T_i \circ \s\iv = T_{\s(i)}$ and $T_iT_j = T_jT_i$ if $i,j \in \eta$. Hence one can define $R_{\eta} = \prod_{i \in \eta}T_i$ for each $\eta \in \ul I$, and $R_{\eta}$ commutes with $\s$. \par We consider the braid group action $\ul T_{\eta} : \ul\BU_q \to \ul\BU_q$. Let $\ul\Bh = (\eta_1, \dots, \eta_{\ul \nu})$ be a sequence for $\ul w_0$. 
For any $\ul\Bc = (\g_1, \dots, \g_{\ul \nu}) \in \BN^{\ul \nu}$, $L(\ul\Bc, \ul\Bh)$ is defined in a similar way as in (1.7.1), \begin{equation*} \tag{1.7.2} L(\ul\Bc, \ul\Bh) = \ul f_{\eta_1}^{(\g_1)}\ul T_{\eta_1}(\ul f_{\eta_2}^{(\g_2)})\cdots (\ul T_{\eta_1}\cdots \ul T_{\eta_{\ul \nu -1}})(\ul f_{\eta_{\ul \nu}}^{(\g_{\ul \nu})}). \end{equation*} Then $\{ L(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu}\}$ gives a PBW-basis of $\ul\BU_q^-$, which we denote by $\ul\SX_{\ul\Bh}$. \par Now assume that $\Bh$ is obtained from $\ul\Bh$ as in 1.6. Then $L(\Bc, \Bh)$ can be written as follows. For $k = 1, \dots, \ul \nu$, let $I_k$ be the interval in $[1, \nu]$ corresponding to $\eta_k$ so that $w_{\eta_k} = \prod_{j \in I_k}s_{i_j}$ in the expression of $w_0$ in (1.6.1). Put $F_{\eta_k}(\Bc) = \prod_{j \in I_k}f_{i_j}^{(c_j)}$ for each $k$. Then we have \begin{equation*} \tag{1.7.3} L(\Bc, \Bh) = F_{\eta_1}(\Bc)R_{\eta_1}(F_{\eta_2}(\Bc))\cdots (R_{\eta_1}\cdots R_{\eta_{\ul \nu -1}})(F_{\eta_{\ul \nu}}(\Bc)). \end{equation*} In particular, the following holds. \begin{lem} Under the notation as above, \begin{enumerate} \item \ $\s$ gives a permutation of the PBW-basis $\SX_{\Bh}$, namely $\s(L(\Bc, \Bh)) = L(\Bc', \Bh)$ for some $\Bc' \in \BN^{\nu}$. $L(\Bc, \Bh)$ is $\s$-invariant if and only if $c_j$ is constant on $j \in I_k$ for each $k = 1, \dots, \ul \nu$. \item For each $\ul\Bc \in \BN^{\ul\nu}$, let $\Bc \in \BN^{\nu}$ be the unique element such that $c_j = \g_k$ for each $j \in I_k$. Then $L(\ul\Bc, \ul\Bh) \mapsto L(\Bc, \Bh)$ gives a bijection \begin{equation*} \ul\SX_{\ul\Bh} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SX_{\Bh}^{\s}, \end{equation*} where $\SX^{\s}_{\Bh}$ is the set of $\s$-stable PBW-basis in $\SX_{\Bh}$. \end{enumerate} \end{lem} \para{1.9.} For each $\eta \in \ul I$ and $a \in \BN$, put $\wt f_{\eta}^{(a)} = \prod_{i \in \eta}f_i^{(a)}$. Since $f_i^{(a)}$ and $f_j^{(a)}$ commute with each other for $i,j \in \eta$, we have $\wt f^{(a)}_{\eta} \in {}_{\BA}\BU_q^{-,\s}$. We denote its image in ${}_{\BA'}\BU_q^{-,\s}$ also by $\wt f^{(a)}_{\eta}$. Thus we can define $g_{\eta}^{(a)} \in \BV_q$ by \begin{equation*} \tag{1.9.1} g^{(a)}_{\eta} = \pi(\wt f_{\eta}^{(a)}). \end{equation*} In the case where $a = 1$, we put $\wt f_{\eta}^{(1)} = \wt f_{\eta} = \prod_{i \in \eta}f_i$ and $g^{(1)}_{\eta} = g_{\eta}$. Recall that ${}_{\BA'}\ul\BU_q^-$ is generated by $\ul f^{(a)}_{\eta}$ for $\eta \in \ul I$ and $a \in \BN$. We have the following result. \begin{prop} The correspondence $\ul f_{\eta}^{(a)} \mapsto g_{\eta}^{(a)}$ gives rise to a homomorphism $\Phi : {}_{\BA'}\ul\BU^-_q \to \BV_q$ of $\BA'$-algebras. \end{prop} \para{1.11.} Proposition 1.10 will be proved in Section 3. Here, assuming the proposition, we continue the discussion. Let $\SX_{\Bh}$ be as in Lemma 1.8. It is known that the PBW-basis $\SX_{\Bh}$ is contained in ${}_{\BA}\BU_q^-$ (see the Introduction). Thus any $\s$-stable PBW-basis element $L(\Bc,\Bh)$ in $\SX^{\s}_{\Bh}$ is contained in ${}_{\BA}\BU_q^{-,\s}$.
By Lemma 1.8 such an $L(\Bc, \Bh)$ can be written as \begin{equation*} \tag{1.11.1} L(\Bc, \Bh) = \wt f_{\eta_1}^{(\g_1)}R_{\eta_1}(\wt f_{\eta_2}^{(\g_2)}) \cdots (R_{\eta_1}\cdots R_{\eta_{\ul \nu - 1}}) (\wt f_{\eta_{\ul \nu}}^{(\g_{\ul \nu})}), \end{equation*} where $\ul\Bc = (\g_1, \dots, \g_{\ul \nu})$ and \begin{equation*} \tag{1.11.2} \Bc = (c_1, \dots, c_{\nu}) = (\underbrace{\g_1, \dots, \g_1}_{|\eta_1|\text{-times}}, \underbrace{\g_2, \dots, \g_2}_{|\eta_2|\text{-times}}, \cdots, \underbrace{\g_{\ul \nu}, \dots, \g_{\ul \nu}}_{|\eta_{\ul \nu}|\text{-times}}). \end{equation*} For each $L(\Bc, \Bh) \in \SX^{\s}_{\Bh}$, put $E(\ul\Bc, \ul\Bh) = \pi(L(\Bc, \Bh))$ under the correspondence in (1.11.2). By Lemma 1.8 (i), any element $x \in {}_{\BA'}\BU_q^{-,\s}$ can be written as an $\BA'$-linear combination of $\s$-stable PBW-basis modulo $J$. Thus we have \par \noindent (1.11.3) \ The set $\{ E(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu} \}$ generates $\BV_q$ as $\BA'$-module. \para{1.12.} It is known, for any Cartan datum $X$, that there exists a canonical symmetric bilinear form $(\ ,\ )$ on $\BU_q^-$, which satisfies the property, \begin{equation*} \tag{1.12.1} (L(\Bc, \Bh), L(\Bc',\Bh)) = \prod_{k = 1}^{\nu}(f_{i_k}^{(c_k)}, f_{i_k}^{(c'_k)}) = \prod_{k = 1}^{\nu}\d_{c_k,c'_k} \prod_{d = 1}^{c_k}\frac{1}{1 - q_{i_k}} \end{equation*} for $\Bc = (c_1, \dots, c_{\nu}), \Bc' = (c_1', \dots, c_{\nu}')$. In particular, $(L(\Bc, \Bh), L(\Bc', \Bh)) = 0$ if $\Bc \ne \Bc'$, and the form $(\ ,\ )$ is non-degenerate. Assume that $X$ is as in 1.2. Then $\s$ preserves the form, namely, $(\s(x), \s(y)) = (x,y)$ for any $x,y \in \BU_q^-$. \par Let $\BF(q)$ be the field of rational functions over $\BF$, and put ${}_{\BF(q)}\BV_q = \BV_q\otimes_{\BA'}\BF(q)$. Then the form $(\ ,\ )$ on $\BU_q^-$ induces a symmetric bilinear form on ${}_{\BF(q)}\BV_q$ (note that $(\sum_i \s^i(x), \sum_i \s^i(y)) = 0$ in $\BF(q)$). We have $(E(\ul\Bc, \ul\Bh), E(\ul\Bc',\ul\Bh')) = 0$ if $\ul\Bc \ne \ul\Bc'$, and $(E(\ul\Bc, \ul\Bh), E(\ul\Bc, \ul\Bh)) \ne 0$. Thus $\{ E(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu}\}$ gives rise to an orthogonal basis of ${}_{\BF(q)}\BV_q$. \par Put ${}_{\BF(q)}\ul\BU_q^- = {}_{\BA'}\ul\BU_q^-\otimes_{\BA'}\BF(q)$. We can regard $\{ L(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu}\}$ as an $\BF(q)$-basis of ${}_{\BF(q)}\ul\BU_q^{-}$. The map $\Phi : {}_{\BA'}\ul\BU_q^- \to \BV_q$ induces an algebra homomorphism ${}_{\BF(q)}\ul\BU_q^- \to {}_{\BF(q)}\BV_q$, which we denote also by $\Phi$. We need a lemma. \begin{lem} Assume that $\ul X$ has rank 2, and $\ul\Bh = (\eta_1, \dots, \eta_{\ul \nu})$. Then for $k = 1, \dots, \ul \nu$, we have \begin{equation*} \tag{1.13.1} \Phi(\ul T_{\eta_1} \dots \ul T_{\eta_{k-1}}(\ul f_{\eta_k})) = \pi(R_{\eta_1}\cdots R_{\eta_{k-1}}(\wt f_{\eta_k})). \end{equation*} \end{lem} \par Lemma 1.13 will be proved in Section 4. We continue the discussion assuming the lemma. By using Lemma 1.13, we can prove the following theorem. \begin{thm} Let $\Bh$ and $\ul\Bh$ be as in 1.6. \begin{enumerate} \item For any $\ul\Bc \in \BN^{\ul \nu}$, we have $\Phi(L(\ul\Bc, \ul\Bh)) = E(\ul\Bc, \ul\Bh)$. \item $\Phi$ gives an algebra isomorphism ${}_{\BF(q)}\ul\BU_q^- \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, {}_{\BF(q)}\BV_q$. \end{enumerate} \end{thm} \begin{proof} Since $R_{\eta}$'s satisfy the braid relation, we can define $R_w = R_{\eta_1}\cdots R_{\eta_k}$ for a reduced expression $w = \ul s_{\eta_1}\cdots \ul s_{\eta_k} \in \ul W$. 
Let $\ul \vD^+$ be the set of positive roots in $\bigoplus_{\eta \in \ul I}\BQ\a_{\eta}$. We consider the following statement: \par \noindent (1.14.1) \ Assume that $w(\a_{\eta}) \in \ul\vD^+$. Then $\pi\bigl(R_w(\wt f_{\eta})\bigr) = \Phi(\ul T_w(\ul f_{\eta}))$. \par Note that (1.14.1) certainly holds in the case where $\ul X$ has rank 2, in view of Lemma 1.13. We prove (1.14.1) by induction on $l(w)$. (1.14.1) holds if $l(w) = 0$. Thus we assume that $l(w) > 0$, and choose $\eta' \in \ul I$ such that $l(ws_{\eta'}) = l(w)-1$. From the assumption in (1.14.1), $\eta' \ne \eta$. It is known that there exist $w', w'' \in \ul W$ such that $w = w'w''$, which satisfy the following conditions: \par (i) \ $w''$ is contained in the subgroup of $\ul W$ generated by $s_{\eta}$ and $s_{\eta'}$, \par (ii) \ $l(w) = l(w') + l(w'')$, \par (iii) \ $l(w's_{\eta}) = l(w') +1$, $l(w's_{\eta'}) = l(w') + 1$. \par By applying (1.14.1) in the case where $\ul X$ has rank 2, we see that $\pi(R_{w''}(\wt f_{\eta})) = \Phi(\ul T_{w''}(\ul f_{\eta}))$. Since $w \ne w'$, we have $l(w') < l(w)$. Also note that $w'(\a_{\eta}), w'(\a_{\eta'}) \in \ul\vD^+$. Thus by induction, we have \begin{equation*} \pi(R_{w'}(\wt f_{\eta})) = \Phi(\ul T_{w'}(\ul f_{\eta})), \quad \pi(R_{w'}(\wt f_{\eta'})) = \Phi(\ul T_{w'}(\ul f_{\eta'})). \end{equation*} Since $R_{w}(\wt f_{\eta}) = R_{w'}R_{w''}(\wt f_{\eta})$ and $\ul T_w(\ul f_{\eta}) = \ul T_{w'}\ul T_{w''}(\ul f_{\eta})$, (1.14.1) holds for $w$. Thus (1.14.1) is proved. \par Now the claim (i) in the theorem follows from (1.14.1). Let $Z$ be the $\BF(q)$-subspace of ${}_{\BF(q)}\ul\BU_q^-$ spanned by $\{ L(\ul\Bc, \ul\Bh)\}$. Since $\{ E(\ul\Bc, \ul\Bh)\}$ is a basis of ${}_{\BF(q)}\BV_q$, $\Phi$ gives an isomorphism $Z \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, {}_{\BF(q)}\BV_q$ by (i), and so $Z$ is an algebra over $\BF(q)$. Since $\ul f_{\eta}^{(a)} = ([a]^!_{q_{\eta}})\iv \ul f_{\eta}^a$ is contained in $Z$, we see that $Z = {}_{\BF(q)}\ul\BU_q^-$. Thus (ii) holds. The theorem is proved. \end{proof} \para{1.15.} We follow the point of view explained in the Introduction. In the simply laced case, the properties (i), (ii) and (iii) in (0.1.1) are known to hold. Hence there exists the canonical basis $\{ b(\Bc, \Bh) \mid \Bc \in \BN^{\nu} \}$ in $\SL_{\BZ}(\infty)$, which is characterized by the following properties, \begin{align*} \tag{1.15.1} \ol {b(\Bc, \Bh)} &= b(\Bc, \Bh), \\ b(\Bc, \Bh) &\equiv L(\Bc, \Bh) \mod q\SL_{\BZ}(\infty), \end{align*} where $x \mapsto \ol{x}$ is the bar involution in $\BU_q^-$. Note that $\{ b(\Bc, \Bh) \mid \Bc \in \BN^{\nu}\}$ is independent of the choice of $\Bh$, which we denote by $\BB$. \par We define a total order on $\BN^{\nu}$ by making use of the lexicographic order, i.e., for $\Bc = (c_1, \dots, c_{\nu}), \Bd = (d_1, \dots, d_{\nu}) \in \BN^{\nu}$, $\Bc < \Bd$ if and only if there exists $k$ such that $c_i = d_i$ for $i < k$ and $c_k < d_k$. Then the second formula in (1.15.1) can be written more precisely as \begin{equation*} \tag{1.15.2} b(\Bc,\Bh) = L(\Bc,\Bh) + \sum_{\Bc < \Bd}a_{\Bd}L(\Bd,\Bh) \end{equation*} with $a_{\Bd} \in q\BZ[q]$. \para{1.16.} We choose $\Bh$ and $\ul\Bh$ as in 1.6. Since $\s$ permutes the PBW-basis $L(\Bc,\Bh)$, $\s$ permutes the canonical basis $\BB$. We denote by $\BB^{\s}$ the set of $\s$-stable canonical basis of $\BU_q^-$. Take $\Bb = \Bb(\Bc, \Bh) \in \BB^{\s}$. Then $L(\Bc, \Bh)$ is $\s$-stable, and $\Bc$ is obtained from $\ul\Bc$ as in 1.11. Since $\Bb \in {}_{\BA}\BU_q^{-,\s}$, one can consider $\pi(\Bb)$.
Then we can write as \begin{equation*} \tag{1.16.1} \pi(\Bb) = E(\ul\Bc, \ul\Bh) + \sum_{\ul\Bc < \ul\Bd}a_{\ul\Bd}E(\ul\Bd, \ul\Bh) \end{equation*} with $a_{\ul\Bd} \in q\BF[q]$. The total order $\ul\Bc < \ul\Bd$ on $\BN^{\ul\nu}$ is defined similarly. The bar involution can be defined on $\BV_q$, and the map $\pi$ is compatible with those bar involutions. Thus we have \begin{equation*} \tag{1.16.2} \ol{\pi(\Bb)} = \pi(\Bb). \end{equation*} Let $\wt\SL_{\BF}(\infty)$ be the $\BF[q]$-submodule of $\BV_q$ generated by $E(\ul\Bc, \ul\Bh)$. Then the set $\{ \pi(\Bb) \mid \Bb \in \BB^{\s}\}$ gives rise to an $\BF[q]$-basis of $\wt\SL_{\BF}(\infty)$ satisfying the properties (1.16.1) and (1.16.2). Note that the set $\{ \pi(\Bb) \mid \Bb \in \BB^{\s}\}$ is characterized by those properties, and this set is independent of the choice of $\ul\Bh$, which we call the canonical basis of $\BV_q$. \par Let $\ul\SL_{\BF}(\infty)$ be the $\BF[q]$-submodule of ${}_{\BF(q)}\ul\BU_q^-$ generated by $\{ L(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu} \}$. We have the following result. \begin{prop} There exists a unique $\BF[q]$-basis $\{ \Bb(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu}\}$ in $\ul\SL_{\BF}(\infty)$ satisfying the following properties, \begin{align*} \tag{1.17.1} \ol{\Bb(\ul\Bc, \ul\Bh)} &= \Bb(\ul\Bc, \ul\Bh), \\ \Bb(\ul\Bc, \ul\Bh) &= L(\ul\Bc, \ul\Bh) + \sum_{\ul\Bc < \ul\Bd}a_{\ul\Bd}L(\ul\Bd, \ul\Bh), \qquad (a_{\ul\Bd} \in q\BF[q]). \end{align*} Moreover, the set $\{ \Bb(\ul\Bc, \ul\Bh)\}$ is independent of the choice of $\ul\Bh$, and $\ul\SL_{\BF}(\infty)$ does not depend on the choice of $\ul\Bh$. \end{prop} \begin{proof} It is clear that the map $\Phi : {}_{\BF(q)}\ul\BU_q^- \to {}_{\BF(q)}\BV_q$ is compatible with the bar involutions. Then the proposition immediately follows from Theorem 1.14. \end{proof} \para{1.18.} For any $\ul X$, we consider the following statements corresponding to (iii) and (iii$'$) in (0.1.1). \par \noindent (1.18.1) \ PBW-basis $\ul\SX_{\ul\Bh}$ gives an $\BA$-basis of ${}_{\BA}\ul\BU_q^-$. \par \noindent (1.18.2) \ Any element $L(\ul\Bc, \ul\Bh) \in \ul\SX_{\ul\Bh}$ is contained in ${}_{\BA}\ul\BU_q^-$. \par As was explained in Introduction, the proof of (1.18.1) is reduced to the case of rank 2, namely the case of type $B_2$ and $G_2$, and in that case, (1.18.2) was proved by Lusztig [L1] and Xi [X1]. In any case, the computation in the case of $G_2$ is not easy. (1.18.2) can be proved by computing the commutation relations of root vectors, which is relatively easy compared to (1.18.1). \par In the discussion below, we only assume that (1.18.2) holds for ${}_{\BA}\ul\BU_q^-$, and will prove that (1.18.1) holds for ${}_{\BA}\ul\BU_q^-$. \para{1.19.} We return to our original setting, and consider the map $\Phi : {}_{\BA'}\ul\BU_q^- \to \BV_q$. By (1.18.2), the PBW-basis $\ul\SX_{\ul\Bh} = \{ L(\ul\Bc, \ul\Bh)\}$ is contained in ${}_{\BA'}\ul\BU_q^-$. Since $\{ E(\ul\Bc, \ul\Bh)\}$ is an $\BA'$-basis of $\BV_q$, we see that $\Phi$ is surjective, by Theorem 1.14 (i). Let ${}_{\BA'}\wt{\ul\BU}^-_q$ be the $\BA'$-module generated by $\{ L(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul \nu}\}$. Again by Theorem 1.14, ${}_{\BA'}\wt{\ul\BU}_q^-$ is an $\BA'$-submodule of ${}_{\BF(q)}\ul\BU_q^-$, which is independent of the choice of $\ul\Bh$. We show that \begin{equation*} \tag{1.19.1} {}_{\BA'}\wt{\ul\BU}_q^- = {}_{\BA'}\ul\BU_q^-. \end{equation*} By (1.18.2), we know that ${}_{\BA'}\wt{\ul\BU}_q^- \subset {}_{\BA'}\ul\BU_q^-$. 
On the other hand, for each $\eta \in \ul I$, one can find a sequence $\ul\Bh = (\eta_1, \dots, \eta_{\ul \nu})$ such that $\eta_1 = \eta$. This implies that ${}_{\BA'}\wt{\ul\BU}_q^-$ is invariant under the left multiplication by $\ul f^{(a)}_{\eta}$. Since this is true for any $\eta$, we see that ${}_{\BA'}\ul\BU_q^-$ is contained in ${}_{\BA'}\wt{\ul\BU}_q^-$. Thus (1.19.1) holds. \par Summing up the above arguments, we have the following integral form of Theorem 1.14. \begin{prop} Assume that $(1.18.2)$ holds for ${}_{\BA}\ul\BU_q^-$. Then $\Phi$ induces an isomorphism ${}_{\BA'}\ul\BU_q^- \simeq \BV_q$. In particular, the PBW-basis $\ul\SX_{\ul\Bh}$ gives an $\BA'$-basis of ${}_{\BA'}\ul\BU_q^-$. \end{prop} As a corollary, we have \begin{cor} Assume that $(1.18.2)$ holds for ${}_{\BA}\ul\BU_q^-$. Then $(1.18.1)$ also holds. \end{cor} \begin{proof} Let ${}_{\BA}\wh{\ul\BU}_q^-$ be the inverse limit of ${}_{\BA}\ul\BU_q^-/\ve^n({}_{\BA}\ul\BU_q^-)$. Then ${}_{\BA}\wh{\ul\BU}_q^-$ has a natural structure of a module over $\displaystyle\BZ_{\ve}[q,q\iv] = \lim_{\leftarrow}\BA/\ve^n\BA$, where $\BZ_{\ve}$ is the ring of $\ve$-adic integers. We have a natural embedding ${}_{\BA}\ul\BU_q^- \subset {}_{\BA}\wh{\ul\BU}_q^-$. Now take $x \in {}_{\BA}\ul\BU_q^-$. (1.19.1) shows that $x$ can be written as a linear combination of PBW-basis with coefficients in $\BA$ modulo $\ve({}_{\BA}\ul\BU_q^-)$. Regarding $x$ as an element of ${}_{\BA}\wh{\ul\BU}_q^-$ and iterating this argument, we see that $x$ can be written as a linear combination of PBW-basis with coefficients in $\BZ_{\ve}[q,q\iv]$. On the other hand, we know that $x$ is a linear combination of PBW-basis with coefficients in $\BQ(q)$. Thus those coefficients belong to $\BA = \BZ[q,q\iv]$, and we obtain (1.18.1). \end{proof} \para{1.22.} We assume that (1.18.2) holds for $\ul\BU_q^-$. Then by Corollary 1.21, we have \par \noindent (1.22.1) \ In $\ul\BU_q^-$, $\ol{L(\ul\Bc, \ul\Bh)}$ is a linear combination of various $L(\ul\Bd, \ul\Bh)$ with coefficients in $\BA$. \par Then by [L3, Lemma 24.2.1], one can define a basis $\{ \Bb(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul\nu}\}$ of $\ul\BU_q^-$, satisfying the properties \begin{align*} \tag{1.22.2} \ol{\Bb(\ul\Bc, \ul\Bh)} &= \Bb(\ul\Bc, \ul\Bh), \\ \Bb(\ul\Bc,\ul\Bh) &= L(\ul\Bc, \ul\Bh) + \sum_{\ul\Bc < \ul\Bd}a_{\ul\Bd}L(\ul\Bd, \ul\Bh), \qquad (a_{\ul\Bd} \in q\BZ[q]). \end{align*} In this construction, we cannot deduce the independence of the basis $\{ \Bb(\ul\Bc, \ul\Bh)\}$ of the choice of $\ul\Bh$. But by using the almost orthogonality of PBW-basis (1.12.1), one can prove a weaker property, namely, the independence from $\ul\Bh$, up to sign (see [L3, Thm. 14.2.3]); if we fix $\ul\Bh, \ul\Bh'$, then for any $\ul\Bc$, there exists a unique $\ul\Bc'$ such that \begin{equation*} \tag{1.22.3} \Bb(\ul\Bc, \ul\Bh) = \pm \Bb(\ul \Bc', \ul\Bh'). \end{equation*} We denote by $\ul\BB$ the set of canonical basis $\{ \Bb(\ul\Bc, \ul\Bh) \}$ in $\ul\BU_q^-$. On the other hand, let $\ul\BB'$ be the canonical basis in ${}_{\BA'}\ul\BU_q^-$ given in Proposition 1.17. We temporarily write it as $\{ \Bb'(\ul\Bc, \ul\Bh) \}$. Then the image of $\Bb(\ul\Bc, \ul\Bh)$ under the natural map ${}_{\BA}\ul\BU_q^- \to {}_{\BA'}\ul\BU_q^-$ coincides with $\Bb'(\ul\Bc, \ul\Bh)$, and this gives a bijection $\ul\BB \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ul\BB'$. In the case where $\ve = 2$, this does not give any new information on the sign of $\Bb(\ul\Bc, \ul\Bh)$. But in the case where $\ve = 3$, we have the following result.
\begin{prop} Assume that $\ve = 3$, and $\ul X$ is of type $G_2$. Then the canonical basis $\{ \Bb(\ul\Bc, \ul\Bh) \mid \ul\Bc \in \BN^{\ul\nu}\}$ is independent of the choice of $\ul\Bh$, namely, if we fix $\ul\Bh, \ul\Bh'$, then for any $\ul\Bc$, there exists a unique $\ul\Bc'$ such that \begin{equation*} \Bb(\ul\Bc, \ul\Bh) = \Bb(\ul\Bc', \ul\Bh'). \end{equation*} \end{prop} \begin{proof} By (1.22.3), we have $\Bb'(\ul\Bc, \ul\Bh) = a \Bb'(\ul\Bc',\ul\Bh')$ for some $a = \pm 1$. But by Proposition 1.17, $\Bb'(\ul\Bc, \ul\Bh)$ is determined uniquely as an element in $\ul\SL_{\BF}(\infty)$, which is independent of the choice of $\ul\Bh$. It follows that $a \equiv 1 \mod 3$. This implies that $a = 1$, and the proposition is proved. \end{proof} \remark{1.24.} By Proposition 1.20, we have a natural bijection $\ul\BB' \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \BB^{\s}$. By the discussion in 1.22, we have $\ul\BB \simeq \ul\BB'$. Hence \begin{equation*} \tag{1.24.1} \BB^{\s} \simeq \ul\BB' \simeq \ul\BB. \end{equation*} Thus we have a natural correspondence $\BB^{\s} \lra \ul \BB$ between the set of $\s$-stable canonical basis of $\BU_q^-$ and the set of canonical basis of $\ul\BU_q^-$. This is nothing but a reformulation, in our elementary setting, of Lusztig's result [L4, 1.12 (b)] (see also [L3, Thm. 14.4.9]), which was obtained by geometric considerations. \par \section{PBW-bases for affine quantum groups} \para{2.1.} In Beck and Nakajima [BN], the PBW-bases were constructed in the case of affine quantum groups. In this section, by making use of their PBW-bases, we shall extend the results in the previous section to the case of affine quantum groups. \par Let $\Fg$ be an untwisted affine Lie algebra associated to the simply laced Cartan datum $X$, with the vertex set $I$, and $\Fg_0$ the simple Lie algebra over $\BC$ with the vertex set $I_0$ associated to the simply laced Cartan datum $X_0$ such that \begin{align*} L\Fg_0 &= \Fg_0\otimes_{\BC}\BC[t, t\iv], \\ \Fg &= L\Fg_0 \oplus \BC c \oplus \BC d, \end{align*} where $c$ is the center of $\Fg$ and $d$ is the degree operator. Here $L\Fg_0 \oplus \BC c$ is the central extension of the loop algebra $L\Fg_0$. \par Let $\Fg_0 = \Fh_0 \oplus \bigoplus_{\a \in \vD_0}(\Fg_0)_{\a}$ be the root space decomposition of $\Fg_0$ with respect to a Cartan subalgebra $\Fh_0$ of $\Fg_0$, where $\vD_0$ is the set of roots in $\Fg_0$. Then $\Fh = \Fh_0 \oplus \BC c \oplus \BC d$ is a Cartan subalgebra of $\Fg$, and $\Fg$ is decomposed as \begin{equation*} \tag{2.1.1} \Fg = \Fh \oplus \biggl(\bigoplus_{\substack{\a \in \vD_0 \\ m \in \BZ}}(\Fg_0)_{\a} \otimes t^m\biggr) \oplus \biggl(\bigoplus_{m \in \BZ - \{0\}}\Fh_0 \otimes t^m\biggr). \end{equation*} We define $\d \in \Fh^*$ by $\lp d, \d \rp = 1, \lp \Fh_0 \oplus \BC c, \d \rp = 0$. We regard $\a \in \vD_0 \subset \Fh_0^*$ as an element in $\Fh^*$ by $\a(c) = 0, \a(d) = 0$. Then $(\Fg_0)_{\a} \otimes t^m$ and $\Fh_0 \otimes t^m$ correspond to the root spaces with roots $\a + m\d$ and $m\d$, respectively, and (2.1.1) gives a root space decomposition of $\Fg$ with respect to $\Fh$. Let $\vD$ (resp. $\vD^+$) be the set of roots (resp. the set of positive roots) in $\Fg$. Also let $\vD_0^+$ be the set of positive roots in $\vD_0$.
Then $\vD^+$ is given by \begin{equation*} \tag{2.1.2} \vD^+ = \vD^{\re,+}_{>} \sqcup \vD^{\re,+}_{<} \sqcup \BZ_{>0}\d, \end{equation*} where \begin{align*} \vD^{\re,+}_{>} &= \{ \a + m\d \mid \a \in \vD_0^+, m \in \BZ_{\ge 0}\}, \\ \vD^{\re,+}_{<} &= \{ \a + m\d \mid \a \in -\vD_0^+, m \in \BZ_{> 0} \}. \end{align*} $\vD^{\re,+}_{>} \sqcup \vD^{\re,+}_{<}$ is the set of positive real roots, and $\BZ_{>0}\d$ is the set of positive imaginary roots. The simple roots $\Pi$ are given by \begin{equation*} \Pi = \{ \a_i \mid i \in I_0 \} \sqcup \{ \a_0 = \d - \th \}, \end{equation*} where $\th$ is the highest root in $\vD_0^+$. \para{2.2.} Let $\s : I \to I$ be the permutation as in 1.2. We assume that $\s$ preserves $I_0$. Thus if $X$ is irreducible, $X$ has type $A_{2n+1}^{(1)} (n \ge 1), D_n^{(1)} (n \ge 4), E_6^{(1)}$ for $\ve = 2$, and $D_4^{(1)}$ for $\ve = 3$. Correspondingly, $\ul X$ has type $D^{(2)}_{n+2}, (n \ge 1), A^{(2)}_{2n -3}, (n \ge 4), E_6^{(2)}$ and $D_4^{(3)}$ under the notation of the table in [Ka, Section 4.8]. Let $\ul I_0$ be the set of $\s$-orbits in $I_0$, and $\ul X_0$ be the corresponding Cartan datum. Then $\ul X_0$ has type $B_{n+1}, C_{n-1}, F_4, G_2$, respectively. \par $\s$ induces a Lie algebra automorphism $\s : \Fg \to \Fg$, and let $\Fg^{\s}$ be the subalgebra of $\Fg$ consisting of $\s$-fixed elements. $\s$ preserves $\Fg_0$, and $\s(c) = c, \s(d) = d$. We define $\Fg_0^{\s}$ similarly. Then $\Fg^{\s}_0$ is a simple Lie algebra, and $\Fg^{\s} = L\Fg^{\s}_0 \oplus \BC c \oplus \BC d$ is the affine Lie algebra associated to $\Fg^{\s}_0$. Note that $\Fg^{\s}$ is isomorphic to the affine Lie algebra $\ul \Fg$ associated to $\ul X$, which is the twisted affine Lie algebra of type $X^{(r)}_k$ given above (here $r$ coincides with $\ve$). Moreover $\Fg_0^{\s}$ is isomorphic to $\ul \Fg_0$ associated to $\ul X_0$. We have $\Fh^{\s} = \Fh^{\s}_0 \oplus \BC c \oplus \BC d$, and $\Fh^{\s} \simeq \ul\Fh$, $\Fh_0^{\s} \simeq \ul\Fh_0$ (Cartan subalgebras of $\ul\Fg$ and $\ul\Fg_0$). \par Note that $\s$ acts on $\vD^+$, leaving $\vD_0^+$ invariant. Moreover, $\s(\d) = \d$. Thus $\vD^{\re,+}_{>}$ and $\vD^{\re,+}_{<}$ are stable by $\s$. \par Let $\ul\vD^+$ (resp. $\ul\vD^{\re,+}$, $\ul\vD^{\ima,+}$) be the set of positive roots (resp. positive real roots, positive imaginary roots) in the root system $\ul\vD$ of $\ul\Fg$. Since $\ul\Fg$ is twisted of type $X^{(r)}_n$, by [K, Prop.6.3], $\ul\vD^{\re,+}$ can be written as $\ul\vD^{\re,+} = \ul\vD^{\re,+}_{>} \sqcup \ul\vD^{\re,+}_{<}$ and $\ul\vD^{\ima,+} = \BZ_{>0}\d$, where \begin{align*} \tag{2.2.1} \ul\vD^{\re,+}_{>} &= \{\a + m\d \mid \a \in (\ul\vD^+_0)_s, m \in \BZ_{\ge 0} \} \sqcup \{ \a + mr\d \mid \a \in (\ul\vD^+_0)_l, m \in \BZ_{\ge 0} \}, \\ \ul\vD^{\re,+}_{<} &= \{\a + m\d \mid \a \in -(\ul\vD^+_0)_s, m \in \BZ_{> 0} \} \sqcup \{ \a + mr\d \mid \a \in -(\ul\vD^+_0)_l, m \in \BZ_{>0} \}. \end{align*} Here $(\ul\vD^+_0)_s$ (resp. $(\ul\vD^+_0)_l$) is the set of positive short roots (resp. positive long roots) in the root system $\ul\vD_0$ of $\ul\Fg_0$. \para{2.3.} Let $\Fh^{*0} = \{ \la \in \Fh^* \mid \lp c, \la \rp = 0\} = \{ \la \in \Fh^* \mid (\la, \d) = 0\}$ be the subspace of $\Fh^*$. Then $\Fh^{*0} = \bigoplus_{i \in I_0}\BC\a_i \oplus \BC \d$. We define a map \begin{equation*} \cl : \Fh^{*0} \to \Fh_0^* \end{equation*} by $\cl(\a_i) = \a_i \ (i \in I_0)$ and $\cl(\d) = 0$, where $\Fh_0^* = (\Fh_0)^*$. Then $\cl$ induces an isomorphism $\Fh^{*0}/\BC\d \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \Fh_0^*$. 
$\s$ acts on $\Fh^{*0}$ and on $\Fh_0^*$, and $\cl$ is compatible with those $\s$-actions. Hence $\cl$ induces a map $(\Fh^{*0})^{\s} \to (\Fh_0^*)^{\s}$. The restriction map $\Fh^* \to (\Fh^{\s})^*$ induces an isomorphism $(\Fh^*)^{\s} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, (\Fh^{\s})^*$, which implies that $(\Fh^{*0})^{\s} \simeq \ul\Fh^{*0}$ since $\Fh^{\s} \simeq \ul\Fh$. Similarly we have $(\Fh_0^*)^{\s} \simeq (\Fh_0^{\s})^* \simeq \ul\Fh_0^*$. Under those identifications, the induced map $(\Fh^{*0})^{\s} \to (\Fh_0^*)^{\s}$ coincides with the map $\cl : \ul \Fh^{*0} \to \ul \Fh_0^*$ defined for $\ul\Fg$ similarly to $\Fg$. \para{2.4.} Let $Q_{\cl}$ be the image of $\bigoplus_{i \in I_0}\BZ \a_i$ in $\Fh^{*0}/\BC\d$. Then $Q_{\cl}$ can be identified with the root lattice of $\Fg_0$ via $\cl$. We define $t : \Fh^{*0} \to GL(\Fh^*)$ by \begin{equation*} \tag{2.4.1} t(\xi)(\la) = \la + (\la, \d)\xi - \biggl\{ (\la, \xi) + \frac{(\xi,\xi)}{2}(\la,\d)\biggr\}\d, \quad (\xi \in \Fh^{*0}, \la \in \Fh^*), \end{equation*} which induces a map $t : \Fh^{*0}/\BC\d \to GL(\Fh^*)$, and consider the restriction of $t$ on $Q_{\cl}$. Note that in the case where $\la \in \Fh^{*0}$, (2.4.1) can be written in a simple form \begin{equation*} \tag{2.4.2} t(\xi)(\la) = \la - (\la, \xi)\d. \end{equation*} Let $W$ be the Weyl group of $\Fg$ and $W_0$ the Weyl group of $\Fg_0$. Then we have an exact sequence \begin{equation*} \tag{2.4.3} \begin{CD} 1 @>>> Q_{\cl} @>t>> W @>>> W_0 @>>> 1. \end{CD} \end{equation*} Put \begin{equation*} P_{\cl} = \{ \la \in \Fh^{*0} \mid (\la, \a_i) \in \BZ \text{ for any } i \in I_0 \}/\BC\d. \end{equation*} Then $P_{\cl}$ is identified with the weight lattice of $\Fg_0$ via $\cl$. We define an extended affine Weyl group $\wt W$ by $\wt W = P_{\cl}\rtimes W_0$ (note that $\Fg$ is simply laced). \par Let $\ul W$ be the Weyl group of $\ul\Fg$ and $\ul W_0$ the Weyl group of $\ul\Fg_0$. Let $(\ ,\ )_1$ be the non-degenerate symmetric bilinear form on $\ul\Fh^*_0$, normalized that $(\a_i, \a_i)_1 = 2$ for a short root $\a_i$ ($i \in \ul I_0)$ (see 1.2). The form $(\ ,\ )_1$ is extended uniquely to a non-degenerate symmetric bilinear form $(\ , \ )_1$ on $\ul\Fh^*$ by the condition that $(\la,\d) = \lp c, \la \rp$ for any $\la \in \ul\Fh^*$. For $\a \in \ul\vD_0$, put $\a^{\vee} = 2\a/(\a,\a)_1$. Put $\ul Q_{\cl} = \bigoplus_{\eta \in \ul I_0}\BZ \a_{\eta}$ and $\ul Q^{\vee}_{\cl} = \bigoplus_{\eta \in \ul I_0}\BZ\a_{\eta}^{\vee}$. Since $\ul\Fg$ is the dual of the untwisted algebra, we have $\ul Q_{\cl} \subset \ul Q^{\vee}_{\cl}$. As in (2.4.1), we can define a map $t : \ul\Fh^{*0}/\BC\d \to GL(\ul\Fh^*)$, and we have an exact sequence \begin{equation*} \tag{2.4.4} \begin{CD} 1 @>>> \ul Q_{\cl}^{\vee} @>t>> \ul W @>>> \ul W_0 @>>> 1. \end{CD} \end{equation*} \par For each $i \in I_0$, let $\w_i$ be the fundamental weight of $(\vD_0, \Fh_0^*)$, defined by $(\w_i, \a_j) = \d_{ij}$ ($i,j \in I_0$). Then under the isomorphism $\cl : \Fh^{*0}/\BC\d \simeq \Fh_0^*$, $P_{\cl} \simeq \bigoplus_{i \in I_0}\BZ \w_i$. The action of $\s$ on $\Fh^{*0}$ induces an action of $\s$ on $P_{\cl}$, which is given by $\w_i \mapsto \w_{\s(i)}$ $(i \in I_0)$. Thus we have an action of $\s$ on $\wt W$, which preserves $W_0$. On the other hand, we define the fundamental coweight $\ul \w^{\vee}_{\eta}$ of $(\ul\vD_0, \ul\Fh_0^*)$ by $(\ul \w^{\vee}_{\eta}, \a_{\eta'})_1 = \d_{\eta\eta'}$ ($\eta, \eta' \in \ul I_0$), and put $\ul{\wt \w}_{\eta} = |\eta|\ul\w^{\vee}_{\eta}$. 
We define $\ul{\wt P}_{\cl} = \bigoplus_{\eta \in \ul I_0}\BZ\wt{\ul\w}_{\eta}$, which we regard as a lattice in $\ul\Fh^{*0}/\BC\d$ dual to $\ul Q^{\vee}_{\cl}$. Define the extended affine Weyl group by $\ul{\wt W} = \ul{\wt P}_{\cl} \rtimes \ul W_0$. Since the map \begin{equation*} \tag{2.4.5} (P_{\cl})^{\s} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ul{\wt P}_{\cl}, \quad \sum_{i \in \eta}\w_i \mapsto |\eta|\ul\w^{\vee}_{\eta} = \wt{\ul\w}_{\eta} \end{equation*} is compatible with the action of $W_0^{\s}\simeq \ul W_0$, we have an isomorphism \begin{equation*} \tag{2.4.6} \wt W^{\s} = (P_{\cl})^{\s} \rtimes W_0^{\s} \simeq \wt{\ul P}_{\cl} \rtimes \ul W_0 = \wt{\ul W}. \end{equation*} Let $\ST = \{ w \in \wt W \mid w(\vD^+) \subset \vD^+ \}$, which is a subgroup of the automorphism group of the ambient diagram. Then we have $\wt W = \ST \ltimes W$. Similarly we define $\ul\ST = \{ w \in \wt{\ul W} \mid w(\ul\vD^+) \subset \ul\vD^+\}$, so that $\wt{\ul W} = \ul\ST \ltimes \ul W$. The action of $\s$ on $\wt W$ preserves $\ST$, and we have $\ST^{\s} = \ul \ST$. \para{2.5.} Following [BN, 3.1], put \begin{equation*} \tag{2.5.1} \xi = \sum_{i \in I_0}\w_i \in P_{\cl}, \end{equation*} and consider $t(\xi) \in \wt W$, which we simply denote by $\xi$. Here $\xi \in \wt W^{\s} \simeq \wt{\ul W} = \ul\ST \ltimes \ul W$, and one can express $\xi$ as \begin{equation*} \tag{2.5.2} \xi = s_{\eta_1}\cdots s_{\eta_{\ul \nu}}\tau \end{equation*} with $\tau \in \ul\ST = \ST^{\s}$, where $w = s_{\eta_1}\cdots s_{\eta_{\ul \nu}}$ is a reduced expression of $w \in \ul W$ ($w$ is the $\ul W$-component of $\xi$). Accordingly, we obtain a reduced expression of $w = s_{i_1} \cdots s_{i_{\nu}} \in W$ such that \begin{equation*} \tag{2.5.3} w = \biggl(\prod_{k_1 \in \eta_1}s_{k_1}\biggr)\cdots \biggl(\prod_{k_{\ul \nu} \in \eta_{\ul \nu}}s_{k_{\ul \nu}}\biggr) = s_{i_1}\cdots s_{i_{\nu}}. \end{equation*} \par As in [BN, (3.1)], we define a doubly infinite sequence attached to $\Fg$ \begin{equation*} \tag{2.5.4} \Bh = (\dots, i_{-1}, i_0, i_1, \dots) \end{equation*} by setting $i_{k + \nu} = \tau(i_k)$ for $k \in \BZ$. Then for any integers $m < p$, the product $s_{i_m}s_{i_{m +1}} \cdots s_{i_p} \in W$ is a reduced expression. Similarly, we define a doubly infinite sequence \begin{equation*} \tag{2.5.5} \ul \Bh = (\dots, \eta_{-1}, \eta_0, \eta_{1}, \dots) \end{equation*} by the condition that $\eta_{k + \ul \nu} = \tau(\eta_k)$ for $k \in \BZ$, which satisfies the property that $s_{\eta_m}s_{\eta_{m+1}}\cdots s_{\eta_p} \in \ul W$ is a reduced expression for $m < p$. Note that $\xi \in (P_{\cl})^{\s}$, and under the isomorphism $(P_{\cl})^{\s} \simeq \wt{\ul P}_{\cl}$ in (2.4.5), $\xi$ coincides with the element $\sum_{\eta \in \ul I_0}\wt{\ul\w}_{\eta}$. Thus the sequence (2.5.5) is exactly the sequence defined in [BN, 3.1] attached to $\ul\Fg$. \par By (2.4.2), for $\b = \a + m\d \in \vD^{\re,+}_{>}$ and $n \in \BZ$, $(n\xi)\iv (\b) = \b + n(\xi, \b)\d = \a + (m + n(\xi,\b))\d$. Since $(\xi,\b) > 0$ by (2.5.1), $(n\xi)\iv(\b) \in \vD^-$ if $n < 0$ is small enough. A similar argument also holds for $\b \in \vD^{\re,+}_{<}$ by replacing $n < 0$ by $n > 0$. It follows that \begin{align*} \tag{2.5.6} \bigcup_{n \in \BZ_{< 0}}(\vD^{\re,+}_{>} \cap w^n(\vD^-)) = \vD^{\re,+}_{>}, \qquad \bigcup_{n \in \BZ_{> 0}}(\vD^{\re,+}_{<} \cap w^n(\vD^-)) = \vD^{\re,+}_{<}. \end{align*} Similar formulas hold also for the root system $\ul \vD^+$ of $\ul\Fg$.
As a corollary of (2.5.6), we have \par \noindent (2.5.7) \ Let $\Bh$ be as in (2.5.4). Then any $i \in I$ appears in the infinite sequence $(\dots i_{-1}, i_0, i_1, \dots)$. Similarly, let $\ul\Bh$ be as in (2.5.5). Then any $\eta \in \ul I$ appears in the infinite sequence $(\dots, \eta_{-1}, \eta_0, \eta_{1}, \dots)$. \para{2.6.} Let $\BU_q^-$ (resp. $\ul\BU_q^-$) be the negative part of the quantum enveloping algebra $\BU_q$ (resp $\ul\BU_q$) associated to $X$ (resp. $\ul X$). We follow the notation in 1.5. We fix $\Bh$ as in (2.5.4), and define $\b_k \in \vD^+$ for $k \in \BZ$ by \begin{equation*} \tag{2.6.1} \b_k = \begin{cases} s_{i_0}s_{i_{-1}}\cdots s_{i_{k + 1}}(\a_{i_k}) &\text{ if $k \le 0$}, \\ s_{i_1}s_{i_2} \cdots s_{i_{k-1}}(\a_{i_k}) &\text{ if $k > 0$}. \end{cases} \end{equation*} Then, as in [BN, 3.1], we have \begin{align*} \tag{2.6.2} \vD^{\re,+}_{>} = \{ \b_k \mid k \in \BZ_{\le 0} \}, \qquad \vD^{\re,+}_{<} = \{ \b_k \mid k \in \BZ_{> 0} \}. \end{align*} We define root vectors $f^{(c)}_{\b_k} \in \BU_q^-$ by \begin{equation*} \tag{2.6.3} f^{(c)}_{\b_k} = \begin{cases} T_{i_0}T_{i_{-1}}\cdots T_{i_{k+1}}(f^{(c)}_{i_k}), &\quad\text{ if } k \le 0, \\ T_{i_1}\iv T_{i_2}\iv \cdots T_{i_{k-1}}\iv (f^{(c)}_{i_k}), &\quad\text{ if } k > 0. \end{cases} \end{equation*} We fix $p \in \BZ$, and let $\Bc_{+_p} = (c_p, c_{p-1}, \dots) \in \BN^{\BZ_{\le p}}, \Bc_{-_p} = (c_{p+1}, c_{p+2}, \dots) \in \BN^{\BZ_{> p}}$ be functions which are almost everywhere 0. We define $L(\Bc_{+_p}), L(\Bc_{-_p}) \in \BU_q^-$ by \begin{align*} \tag{2.6.4} L(\Bc_{+_p}) &= f_{i_p}^{(c_p)}T_{i_p}(f^{(c_{p-1})}_{i_{p-1}}) T_{i_p}T_{i_{p-1}}(f^{(c_{p-2})}_{i_{p-2}}) \cdots \\ L(\Bc_{-_p}) &= \cdots T_{i_{p+1}}\iv T_{i_{p+2}}\iv(f_{i_{p+3}}^{(c_{p+3})}) T_{i_{p+1}}\iv (f_{i_{p+2}}^{(c_{p+2})})f_{i_{p+1}}^{(c_{p+1})}. \end{align*} In the case where $p = 0$, we simply write $\Bc_{+_p}, \Bc_{-_p}$ as $\Bc_+, \Bc_-$. Thus $(\Bc_{+_p}, \Bc_{-_p})$ is obtained from $(\Bc_+, \Bc_-)$ by the shift by $p$. Note that $L(\Bc_+)$ (resp. $L(\Bc_-)$) coincides with $f_{\b_0}^{(c_0)}f_{\b_{-1}}^{(c_{-1})}f_{\b_{-2}}^{(c_{-2})}\cdots$ (resp. $\cdots f_{\b_3}^{(c_3)}f_{\b_2}^{(c_2)}f_{\b_1}^{(c_1)}$). A similar discussion works for $\ul\BU_q^-$. We fix $\ul \Bh$ as in (2.5.5). $\b_k \in \ul\vD^+$ for $k \in \BZ$ is defined similarly to (2.6.1), and the root vectors $\ul f_{\b_k} \in \ul \BU_q^-$ are defined as in (2.6.3). For $\ul\Bc_{+_p} = (\g_p, \g_{p-1}, \dots) \in \BN^{\BZ_{\le p}}$, $\ul\Bc_{-_p} = (\g_{p+1}, \g_{p+2}, \dots) \in \BN^{\BZ_{> p}}$, define $L(\ul\Bc_{+_p}), L(\ul\Bc_{-_p}) \in \ul\BU_q^-$ similarly to (2.6.4). \par It is known by [BN, Remark 3.6], for $i \in I_0, \eta \in \ul I_0$, \begin{align*} \tag{2.6.5} f_{k\d + \a_i} &= T^k_{-\w_i}f_i, \ (k \ge 0), &\quad f_{k\d - \a_i} &= T^{-k}_{-\w_i}T_if_i, \ (k > 0), \\ \tag{2.6.6} \ul f_{k|\eta|\d + \a_{\eta}} &= T^k_{-\wt{\ul\w}_{\eta}}\ul f_{\eta}, \ (k \ge 0), &\quad \ul f_{k|\eta|\d - \a_{\eta}} &= T^{-k}_{-\wt{\ul\w}_{\eta}}T_{\eta}\ul f_{\eta}, \ (k \ge 0). \end{align*} \para{2.7.} For $i \in I_0, \eta \in \ul I_0, k > 0$, put \begin{align*} \tag{2.7.1} \wt\psi_{i,k} &= f_{k\d - \a_i}f_i - q^2f_if_{k\d - \a_i}, \\ \tag{2.7.2} \wt{\ul\psi}_{\eta, k|\eta|} &= \ul f_{k|\eta|\d - \a_{\eta}}\ul f_{\eta} - q^2_{\eta}\ul f_{\eta}\ul f_{k|\eta|\d - \a_{\eta}}. 
\end{align*} It is known that $\wt\psi_{i,k}$ ($i \in I_0, k \in \BZ_{>0}$) are mutually commuting, and similarly, $\ul\psi_{\eta, k|\eta|}$ ($\eta \in \ul I_0, k \in \BZ_{> 0}$) are mutually commuting. For each $i \in I_0, k \in \BZ_{>0}$, we define $\wt P_{i,k} \in \BU_q^-$ by the following recursive identity; \begin{equation*} \tag{2.7.3} \wt P_{i,k} = \frac{1}{[k]_q}\sum_{s = 1}^kq^{s-k}\wt\psi_{i,s}\wt P_{i,k-s}. \end{equation*} Similarly, for $\eta \in \ul I_0, k \in \BZ_{> 0}$, we define $\wt{\ul P}_{\eta, k|\eta|} \in \ul\BU_q^-$ by \begin{equation*} \tag{2.7.4} \wt{\ul P}_{\eta, k|\eta|} = \frac{1}{[k]_{q_{\eta}}}\sum_{s = 1}^kq_{|\eta|}^{s-k}\wt{\ul\psi}_{\eta, s|\eta|} \wt{\ul P}_{\eta, (k-s)|\eta|}. \end{equation*} For a fixed $i \in I_0$, regarding $\wt P_{i,k}$ ($k \in \BZ_{>0}$) as elementary symmetric functions, we define Schur polynomials by making use of the determinant formula; for each partition $\r^{(i)}$, put \begin{equation*} \tag{2.7.5} S_{\r^{(i)}} = \det \bigl(\wt P_{i, \r'_k - k + m}\bigr)_{1 \le k, m \le t} \end{equation*} where $(\r'_1, \dots, \r'_t)$ is the dual partition of $\r^{(i)}$. For an $|I_0|$-tuple of partitions $\Bc_0 = (\r^{(i)})_{i \in I_0}$, we define $S_{\Bc_0}$ by \begin{equation*} \tag{2.7.6} S_{\Bc_0} = \prod_{i \in I_0}S_{\r^{(i)}}. \end{equation*} Similarly, for a fixed $\eta \in \ul I_0$, choose a partition $\ul\r^{(\eta)}$, and define a Schur polynomial by \begin{equation*} \tag{2.7.7} \ul S_{\ul\r^{(\eta)}} = \det\bigl(\wt{\ul P}_{\eta, (\r'_k - k + m)|\eta|}\bigr)_{1 \le k,m \le t} \end{equation*} where $(\r'_1, \dots, \r'_t)$ is the dual partition of $\ul\r^{(\eta)}$. For an $\ul I_0$-tuple of partitions $\ul\Bc_0 = (\ul\r^{(\eta)})_{\eta \in \ul I_0}$, we define \begin{equation*} \tag{2.7.8} \ul S_{\ul\Bc_0} = \prod_{\eta \in \ul I_0}\ul S_{\ul \r^{(\eta)}}. \end{equation*} \par We denote by $\SC$ the set of triples $\Bc = (\Bc_+, \Bc_0, \Bc_-)$, where $\Bc_+ \in \BN^{\BZ_{\le 0}}, \Bc_- \in \BN^{\BZ_{> 0}}$, and $\Bc_0$ is an $I_0$-tuple of partitions. For each $\Bc \in \SC, p \in \BZ$, we define $L(\Bc,p) \in \BU_q^-$ by \begin{equation*} \tag{2.7.9} L(\Bc, p) = \begin{cases} L(\Bc_{+_p})\times \bigl(T\iv_{i_{p +1}}T\iv_{i_{p+2}}\cdots T\iv_{i_0}(S_{\Bc_0})\bigr) \times L(\Bc_{-_p}), &\quad\text{ if } p \le 0, \\ L(\Bc_{+_p}) \times \bigl(T_{i_p}\cdots T_{i_2} T_{i_1} (S_{\Bc_0})\bigr) \times L(\Bc_{-p}), &\quad\text{ if } p > 0. \end{cases} \end{equation*} Similarly, we denote by $\ul\SC$ the set of triples $\ul\Bc = (\ul\Bc_+, \ul\Bc_0, \ul\Bc_-)$, where $\ul\Bc_+ \in \BN^{\BZ_{\le 0}}, \ul\Bc_- \in \BN^{\BZ_{> 0}}$, and $\ul\Bc_0$ is the set of $\ul I_0$-tuples of partitions. We define $L(\ul\Bc, p) \in \ul\BU_q^-$ in a similar way as in (2.7.9). The following results are proved in [BN]. Note that Lemma 3.39 in [BN] can be applied to the case where $X$ is simply laced. \begin{prop}[{[BN, Prop. 3.16]}] $L(\Bc, p) \in {}_{\BA}\BU_q^-$, and $L(\ul\Bc, p) \in {}_{\BA}\ul\BU_q^-$. \end{prop} \begin{prop}[{[BN, Thm 3.13 (i), Lemma 3.39]}] We fix $\Bh$ and $p$ as before. \begin{enumerate} \item For various $\Bc \in \SC$, $L(\Bc, p)$ are almost orthonormal, namely, \begin{equation*} (L(\Bc, p), L(\Bc', p)) \in \d_{\Bc,\Bc'} + q\BZ[[q]] \cap \BQ(q). \end{equation*} In particular, for a fixed $\Bh, p$, $\{ L(\Bc, p) \mid \Bc \in \SC\}$ gives a $\BQ(q)$-basis of $\BU_q^-$. \par Similarly, $L(\ul\Bc, p)$ are almost orthonormal, and $\{ L(\ul\Bc, p)\mid \ul\Bc \in \ul \SC\}$ gives a $\BQ(q)$-basis of $\ul\BU_q^-$. 
\item $\{ L(\Bc, p) \mid \Bc \in \SC \}$ gives an $\BA$-basis of ${}_{\BA}\BU_q^-$. \end{enumerate} \end{prop} \para{2.10.} We first fix $\ul\Bh$ as in (2.5.5), then construct $\Bh$ as in (2.5.4) from $\ul\Bh$ by making use of the relation (2.5.3). We also fix $\ul p > 0$, and consider the sequence $w_{\ul p} = s_{\eta_{\ul p}}s_{\eta_{\ul p -1}}s_{\eta_{\ul p-2}} \cdots$ in $\ul W \simeq W^{\s}$. Then $w_{\ul p}$ determines an integer $p > 0$ such that $w_{\ul p}$ corresponds to $w_p = s_{i_p}s_{i_{p-1}}s_{i_{p-2}}\cdots$ in $W$. For each $s_{\eta_k}$ appearing in $w_{\ul p}$, let $I_k$ be an interval in $\BZ$ such that $s_{\eta_k} = \prod_{j \in I_k}s_{i_j}$ corresponds to a subexpression of $w_p$ as above. Put $F_{\eta_k}(\Bc_{\pm_p}) = \prod_{j \in I_k}f_{i_j}^{(c_j)}$. We also define $R_{\eta} = \prod_{j \in \eta}T_j$ for $\eta \in \ul I_0$. Then $\s$ commutes with $R_{\eta}$. Note that $L(\Bc_{+_p}), L(\Bc_{-_p})$ can be expressed as \begin{align*} \tag{2.10.1} L(\Bc_{+_p}) &= F_{\eta_{\ul p}}(\Bc_{+_p})R_{\eta_{\ul p}}(F_{\eta_{\ul p -1}}(\Bc_{+_p})) R_{\eta_{\ul p}}R_{\eta_{\ul p-1}}(F_{\eta_{\ul p -2}}(\Bc_{+_p}))\cdots, \\ L(\Bc_{-_p}) &= \cdots R_{\eta_{\ul p+1}}\iv R_{\eta_{\ul p+2}}\iv (F_{\eta_{\ul p+3}}(\Bc_{-_p})) R_{\eta_{\ul p+1}}\iv (F_{\eta_{\ul p+2}}(\Bc_{-_p}))F_{\eta_{\ul p+1}}(\Bc_{-_p}). \end{align*} We have a lemma. \begin{lem} Take $\Bh, p$ as in 2.10. \begin{enumerate} \item $\s$ permutes the PBW-basis $\{ L(\Bc, p)\}$ of $\BU_q^-$, namely, $\s(L(\Bc, p)) = L(\Bc',p)$ for some $\Bc' \in \SC$. \item Let $\Bc = (\Bc_+, \Bc_0, \Bc_-) \in \SC$. Then $L(\Bc,p)$ is $\s$-stable if and only if $c_j$ is constant for each $j \in I_k$ corresponding to $s_{\eta_k}$ in $w_{\ul p}$, and $\r^{(i)}$ is constant on $i \in \eta$ for each $\eta \in \ul I_0$. In particular, the set of $\s$-stable PBW-basis in $\BU_q^-$ with respect to $\Bh, p$ is in bijection with the set of PBW-basis $\{ L(\ul\Bc, \ul p)\}$ in $\ul\BU_q^-$ if $\Bh,p$ are obtained from $\ul\Bh, \ul p$. \end{enumerate} \end{lem} \begin{proof} By (2.10.1), we have $\s(L(\Bc_{+_p})) = L(\Bc'_{+_p})$, $\s(L(\Bc_{-_p})) = L(\Bc'_{-_p})$ for some $\Bc'_{+_p} \in \BN^{\BZ_{\le p}}, \Bc'_{-_p} \in \BN^{\BZ_{>p}}$. On the other hand, since $\s(f_{k\d \pm \a_i}) = f_{k\d \pm \a_{\s(i)}}$ for $i \in I_0, k > 0$ by (2.6.5), we have $\s(\wt\psi_{i,k}) = \wt\psi_{\s(i),k}$, and so $\s(\wt P_{i,k}) = \wt P_{\s(i),k}$. This implies that $\s(S_{\r^{(i)}}) = S_{\r^{(\s(i))}}$ for each $i \in I_0$. We see that $\s(S_{\Bc_0}) = S_{\Bc_0'}$ for some $I_0$-tuple of partitions $\Bc_0'$. Thus we obtain (i). (ii) follows from (i). \end{proof} \para{2.12.} We apply the discussion in 1.5 to the affine case, and we can define a homomorphism $\pi : {}_{\BA'}\BU_q^{-,\s} \to \BV_q$. For any $\eta \in \ul I$, and $a \in \BN$, we define $\wt f^{(a)}_{\eta} = \prod_{i \in \eta}f_i^{(a)}$, and put $g^{(a)}_{\eta} = \pi(\wt f^{(a)}_{\eta})$ as in 1.9. Then Proposition 1.10 still holds for the affine case, and we can define an algebra homomorphism $\Phi : {}_{\BA'}\ul \BU_q^- \to \BV_q$ of $\BA'$-algebras. Assume that $\Bh, p$ are obtained from $\ul\Bh, \ul p$ as in 2.10. We denote by $\SX_{\Bh, p}$ the set of PBW-basis $\{ L(\Bc, p) \mid \Bc \in \SC\}$ of $\BU_q^-$, and $\SX^{\s}_{\Bh,p}$ the subset of $\SX_{\Bh,p}$ consisting of $\s$-stable PBW-basis. Similarly, we denote by $\ul\SX_{\ul\Bh, \ul p}$ the set of PBW-basis $\{ L(\ul\Bc, \ul p) \mid \ul\Bc \in \ul\SC\}$ of $\ul\BU_q^-$.
By Lemma 2.11 (ii), we have a natural bijection $\SX^{\s}_{\Bh,p} \simeq \ul\SX_{\ul\Bh, \ul p}$, by $L(\Bc, p) \lra L(\ul\Bc, \ul p)$. We put $E(\ul\Bc, \ul p) = \pi(L(\Bc, p))$ under this correspondence. Then by Lemma 2.11 (i), and by Proposition 2.9 (see the discussion in 1.12), we see that $\{ E(\ul\Bc, \ul p)\}$ gives rise to an $\BA'$-basis of $\BV_q$. \par Assume that $L(\Bc, p) \in \SX^{\s}_{\Bh,p}$ corresponds to $L(\ul\Bc, \ul p) \in \ul\SX_{\ul\Bh, \ul p}$ with $\Bc = (\Bc_+, \Bc_0, \Bc_-)$, $\ul\Bc = (\ul\Bc_+, \ul\Bc_0, \ul\Bc_-)$. We consider $L(\Bc_{+_p}), L(\Bc_{-_p}) \in \BU_q^{-,\s}$ and $L(\ul\Bc_{+_{\ul p}}), L(\ul\Bc_{-_{\ul p}}) \in \ul\BU_q^-$. The following result can be proved in a similar way as in Theorem 1.14 (i). \begin{prop} $\Phi(L(\ul\Bc_{+_{\ul p}})) = \pi(L(\Bc_{+_p}))$ and $\Phi(L(\ul\Bc_{-_{\ul p}})) = \pi(L(\Bc_{-_p}))$. \end{prop} \para{2.14.} Let $\Bc_0 = (\r^{(i)})_{i \in I_0}$ be an $I_0$-tuple of partitions appearing in $\Bc$, and $\ul\Bc_0 = (\ul\r^{(\eta)})_{\eta \in \ul I_0}$ be an $\ul I_0$tuple of partitions appearing in $\ul\Bc$ as in 2.7. We have $\r^{(i)} = \ul\r^{(\eta)}$ if $i \in \eta$ for each $\eta \in \ul I_0$. Then $S_{\Bc_0} \in \BU_q^{-,\s}$, and we consider $\pi(S_{\Bc_0}) \in \BV_q$. On the other hand, we can consider $\ul S_{\ul\Bc_0} \in \ul\BU_q^-$. We show a lemma. \begin{lem} $\Phi(\ul S_{\ul \Bc_0}) = \pi(S_{\Bc_0})$. \end{lem} \begin{proof} Take $i \in I_0$ such that $i \in \eta$. We consider $\prod_{i \in \eta}f_{k\d + \a_i} \in \BU_q^{-,\s}$ and $\ul f_{k|\eta|\d + \a_{\eta}} \in \ul\BU_q^-$, and similar elements obtained by replacing $\a_i$ by $-\a_i$, $\a_{\eta}$ by $-\a_{\eta}$. By applying Proposition 2.13 for the case where $p = 0$, we have \begin{align*} \tag{2.15.1} \Phi(\ul f_{k|\eta|\d + \a_{\eta}}) = \pi(\prod_{i \in \eta}f_{k\d + \a_i}), \qquad \Phi(\ul f_{k|\eta|\d - \a_{\eta}}) = \pi(\prod_{i \in \eta}f_{k\d - \a_i}). \end{align*} Next we show, for $\eta \in \ul I_0, k > 0$, that \begin{equation*} \tag{2.15.2} \Phi(\wt{\ul\psi}_{\eta, k|\eta|}) = \pi(\prod_{i \in \eta}\wt\psi_{i,k}). \end{equation*} It is known by [B, BCP] that $T_{\w_i}(f_{k\d \pm \a_j}) = f_{k\d \pm \a_j}$ for $i \ne j, k\ge 0$. Hence if $(\a_i, \a_j) = 0$, we have \begin{equation*} \tag{2.15.3} f_jf_{k\d - \a_i} = f_jT^{-k}_{-\w_i}T_i(f_i) = T^{-k}_{\w_i}T_i(f_jf_i) = T^{-k}_{\w_i}T_i(f_if_j) = f_{k\d - \a_i}f_j \end{equation*} by (2.6.5). Again by using (2.6.5) we have \begin{equation*} \tag{2.15.4} f_{k\d - \a_i}f_{k\d - \a_j} = f_{k\d - \a_j}f_{k\d - \a_i}. \end{equation*} \par In the case where $|\eta| = 1$, (2.15.2) immediately follows from (2.15.1). We assume that $|\eta| = 2$, and put $\eta = \{ i, j\}$. Then by using commutation relations (2.15.3), (2.15.4), we have \begin{align*} \wt\psi_{i, k}\wt\psi_{j, k} &= (f_{k\d - \a_i}f_i - q^2f_if_{k\d - \a_i}) (f_{k\d - \a_j}f_j - q^2f_jf_{k\d - \a_j}) \\ &= f_{k\d - \a_i}f_{k\d - \a_j}f_if_j + q^4f_if_jf_{k\d - \a_i}f_{k\d - \a_j} - q^2Z , \end{align*} where \begin{align*} Z &= f_{k\d - \a_i}f_if_jf_{k\d - \a_j} + f_if_{k\d - \a_i}f_{k\d - \a_j}f_j \\ &= f_jf_{k\d - \a_i}f_{k\d - \a_j}f_i + f_if_{k\d - \a_i}f_{k\d - \a_j}f_j \\ &= f_jf_{k\d - \a_j}f_{k\d - \a_i}f_i + \s(f_jf_{k\d - \a_j}f_{k\d - \a_i}f_i). \end{align*} Since $Z \in J$, we have \begin{equation*} \pi(\wt\psi_{i,k}\wt\psi_{j,k}) = \pi (f_{k\d - \a_i}f_{k\d - \a_j}f_if_j - q^2_{\eta}f_if_jf_{k\d - \a_i}f_{k\d - \a_j}). \end{equation*} Now (2.15.2) follows from (2.15.1). The proof for the case $|\eta| = 3$ is similar. 
Thus (2.15.2) is proved. \par Since $\wt\psi_{i,k}$ and $\wt\psi_{j, \ell}$ commute for any pair, $\wt\psi_{i,k}$ commutes with $\wt P_{j,\ell}$ for any pair $i,j, k,\ell$. Then by a similar argument as in the proof of (2.15.2), for each $\eta \in \ul I_0$ we have \begin{equation*} \tag{2.15.5} \Phi(\wt {\ul P}_{\eta, k}) = \pi\bigl(\prod_{i \in \eta}\wt P_{i,k}\bigr). \end{equation*} (Note that $([k]_q)^{|\eta|} = [k]_{q_{\eta}}$ in $\BA'$. ) \par Since $\wt P_{i,k}$ are commuting for any pair $i,k$, (2.15.5) implies, by a similar argument as above, that \begin{equation*} \tag{2.15.6} \Phi(\ul S_{\r^{(\eta)}}) = \pi(\prod_{i \in \eta}S_{\r^{(i)}}) \end{equation*} for any $\eta \in \ul I_0$. Lemma 2.15 follows from this. \end{proof} The following result is an analogue of Theorem 1.14 and Proposition 1.20. \begin{thm} \begin{enumerate} \item For any $\ul\Bc \in \ul\SC$, we have $\Phi(L(\ul\Bc, \ul 0)) = E(\ul\Bc, \ul 0)$. \item PBW-basis $\{ L(\ul\Bc, \ul 0) \mid \ul\Bc \in \ul\SC\}$ gives an $\BA'$-basis of ${}_{\BA'}\ul\BU_q^-$. \item $\Phi$ gives an isomorphism ${}_{\BA'}\ul\BU_q^- \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \BV_q$. \end{enumerate} \end{thm} \begin{proof} (i) follows from Proposition 2.13 and Lemma 2.15. By Proposition 2.8, (the image of ) $L(\ul\Bc, \ul 0)$ is contained in ${}_{\BA'}\ul\BU_q^-$. Hence the map $\Phi : {}_{\BA'}\ul\BU_q^- \to \BV_q$ is surjective. As in the proof of Theorem 1.14, $\Phi$ can be extended to the map ${}_{\BF(q)}\ul\BU^-_q \to {}_{\BF(q)}\BV_q$, which gives an isomorphism of $\BF(q)$-algebras. Let ${}_{\BA'}\wt{\ul\BU}_q^-$ be the $\BA'$-submodule of ${}_{\BF(q)}\ul\BU_q^-$ spanned by $L(\ul\Bc,0)$. Then $\Phi$ gives an isomorphism ${}_{\BA'}\wt{\ul\BU}_q^- \simeq \BV_q$ of $\BA'$-modules. In particular, ${}_{\BA'}\wt{\ul\BU}_q^-$ is an algebra over $\BA'$. We note that \begin{equation*} \tag{2.16.1} {}_{\BA'}\wt{\ul\BU}_q^- = {}_{\BA'}\ul\BU_q^-. \end{equation*} In fact, ${}_{\BA'}\wt{\ul\BU}_q^- \subset {}_{\BA'}\ul\BU_q^-$ by Proposition 2.8. Since $\{ E(\ul\Bc, \ul p) \mid \ul\Bc \in \ul\SC \}$ is an $\BA'$-basis of $\BV_q$, $\{ \Phi\iv(E(\ul\Bc, \ul p)) \mid \ul\Bc \in \ul\SC\}$ gives an $\BA'$-basis of ${}_{\BA'}\wt{\ul\BU}_q^-$ for any $\ul p$.. Hence by (2.10.1), ${}_{\BA'}\wt{\ul\BU}_q^-$ is invariant under the left multiplication by $\ul f_{\eta_{\ul p}}^{(k)}$. By (2.5.7), for any $\eta \in \ul I$, there exists $\ul p$ such that $\eta = \eta_{\ul p}$. Thus ${}_{\BA'}\wt{\ul\BU}_q^-$ is invariant under the left multiplication by any $\ul f^{(k)}_{\eta}$, and (2.16.1) follows. Now (ii) and (iii) follows from (2.16.1). The theorem is proved. \end{proof} \begin{cor} For any $\ul p \in \BZ$, the PBW-basis $\{L(\ul\Bc, \ul p) \mid \ul\Bc \in \ul\SC \}$ gives an $\BA$-basis of ${}_{\BA}\ul\BU_q^-$. \end{cor} \begin{proof} By a similar argument as in the proof of Corollary 1.21, we see that $\{ L(\ul\Bc, \ul 0) \mid \ul\Bc \in \ul\SC \}$ gives an $\BA$-basis of ${}_{\BA}\ul\BU_q^-$ thanks to Theorem 2.16. Then by [BN, Lemma 3.39], $\{ L(\ul\Bc, \ul p)\}$ gives an $\BA$-basis of ${}_{\BA}\ul\BU_q^-$. The corollary is proved. \end{proof} \remark{2.18.} In the case where $\Fg$ is a simply laced affine algebra, the fact that $\{ L(\Bc, p) \mid \Bc \in \SC\}$ gives an $\BA$-basis of ${}_{\BA}\BU_q^-$ (Proposition 2.9 (ii)) was known by [BCP] for $p = 0$, and was proved by [BN] for arbitrary $p$. Corollary 2.17 is a generalization of this fact to the case of twisted affine algebras. 
Once this is done, one can define the (signed) canonical basis $b(\Bc, p)$ parametrized by $L(\Bc, p)$ as in (1.22.2). The basis $\{ b(\Bc,p) \mid \Bc \in \SC\}$ is independent of the choice of $\Bh$ and $p$, up to $\pm 1$. In [BN], in the simply laced case, this ambiguity of the sign was removed by using the theory of extremal weight modules due to [K2]. It is likely that our result makes it possible to extend their results to the case of twisted affine Lie algebras. \par \section{The proof of Proposition 1.10} \para{3.1.} In this and next section we write $[a]_{q^i}$ as $[a]_i$ for any $i \in \BZ$. Thus $[a]_q = [a]_1$ and $[a]_{q_{\eta}} = [a]_{|\eta|}$ since $(\a_{\eta}, \a_{\eta})_1/2 = |\eta|$. ${}_{\BA'}\ul\BU_q^-$ is the $\BA'$-algebra with generators $ \ul f_{\eta}^{(a)}$ $(\eta \in \ul I, a \in \BN)$ with fundamental relations \begin{align*} \tag{3.1.1} &\sum_{k= 0}^{1 - a_{\eta\eta'}}(-1)^k \ul f_{\eta}^{(k)}\ul f_{\eta'}\ul f_{\eta}^{(1 - a_{\eta\eta'} - k)} = 0, \qquad (\eta \ne \eta'), \\ \tag{3.1.2} &[a]^!_{|\eta|}\ul f_{\eta}^{(a)} = \ul f_{\eta}^a, \qquad (a \in \BN), \end{align*} where $A = (a_{\eta\eta'})$ is the Cartan matrix of $\ul X$. In order to prove Proposition 1.10, it is enough to show that $g_{\eta}^{(a)}$ satisfies a similar relations as above, namely, \begin{align*} \tag{3.1.3} &\sum_{k= 0}^{1 - a_{\eta\eta'}}(-1)^k g_{\eta}^{(k)}g_{\eta'}g_{\eta}^{(1 - a_{\eta\eta'} - k)} = 0, \qquad (\eta \ne \eta'), \\ \tag{3.1.4} &[a]^!_{|\eta|}g_{\eta}^{(a)} = g_{\eta}^a, \qquad (a \in \BN). \end{align*} First we show (3.1.4). We have \begin{align*} \wt f_{\eta}^{(a)} = \prod_{i \in \eta}f_i^{(a)} = ([a]^!_1)^{-|\eta|}\prod_{i \in \eta}f_i^a = ([a]^!_1)^{-|\eta|}\wt f_{\eta}^a. \end{align*} Since $|\eta| = 1$ or $\ve$, we have $([a]^!_1)^{|\eta|} = [a]^!_{|\eta|}$ in $\BA' = \BF[q, q\iv]$ with $\BF = \BZ/\ve\BZ$. Thus (3.1.4) follows. \par For the proof of (3.1.3), we may assume that $\ul X$ is of rank 2. Here we change the notation from 1.3, and consider $\ul I = \{ \ul 1, \ul 2 \}$ with Cartan matrix \begin{equation*} A = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \quad\text{ or } \quad \begin{pmatrix} 2 & a \\ -1 & 2 \end{pmatrix} \end{equation*} where $\ul X$ is of type $A_1 \times A_1$ in the first case, and $a = -1, -2, -3$ according to the cases $\ul X$ is of type $A_2, B_2, G_2$. \par Assume that $\ul X$ is of type $A_1 \times A_1$. In this case, we have $(\a_i, \a_j) = 0$ for any $i, j \in I$ such that $i \ne j$. It is easily seen that $g_{\ul 1}g_{\ul 2} = g_{\ul 2}g_{\ul 1}$, which coincides with the relation (3.1.3). Thus (3.1.3) holds. \para{3.2.} Assume that $\ul X$ is of type $A_2$. We have two possibilities for $I$, $\ul i = \{ i \}$ or $\ul i = \{ i,i'\}$ for $i = 1,2$. In the former case, (3.1.3) clearly holds. So we may assume that $I = \{ 1,2,1',2'\}$ with $\ul 1 = \{ 1,1'\}, \ul 2 = \{ 2,2'\}$, where $(\a_i, \a_j) = -1$ for $\{ i, j\} = \{ 1,2\}$ or $\{ 1',2'\}$, and is equal to zero for other cases. We have $g_{\ul 1} = \pi(f_1f_{1'})$ and $g_{\ul 2} = \pi(f_2f_{2'})$. The relation (3.1.3) is given by \begin{align*} \tag{3.2.1} g_{\ul 1}g_{\ul 2}^{(2)} - g_{\ul 2}g_{\ul 1}g_{\ul 2} + g_{\ul 2}^{(2)}g_{\ul 1} = 0. \end{align*} By (3.1.4), this is equivalent to \begin{equation*} \tag{3.2.2} g_{\ul 1}g_{\ul 2}^2 - (q^2 + q^{-2})g_{\ul 2}g_{\ul 1}g_{\ul 2} + g_{\ul 2}^2g_{\ul 1} = 0. \end{equation*} We show (3.2.2). 
By the Serre relations for $A_2$, we have \begin{align*} \tag{3.2.3} &f_1f_2^2 - (q + q\iv)f_2f_1f_2 + f^2_2f_1 = 0, \\ &f_2f_1^2 - (q + q\iv)f_1f_2f_1 + f^2_1f_2 = 0, \end{align*} and formulas obtained from (3.2.3) by replacing $f_1, f_2$ by $f_{1'}, f_{2'}$. By multiplying these two formulas, and by using the commutation relations $f_if_j = f_jf_i$ unless $\{ i,j\} = \{ 1, 2\}$ or $\{ 1', 2'\}$, we have \begin{align*} (f_1f_{1'})(f_2f_{2'})^2 + (q + q\iv)^2 (f_2f_{2'})(f_1f_{1'})(f_2f_{2'}) + (f_2f_{2'})^2(f_1f_{1'}) + Z = 0, \end{align*} where \begin{equation*} Z = -(q + q\iv)\bigl( f_2f_{2'}^2f_1f_{1'}f_2 + f_{2'}f_2^2f_{1'}f_1f_{2'} \bigr). \end{equation*} Since $\ve = 2$ and $Z \in J$, we obtain (3.2.2). Thus (3.1.3) is verified for $\ul X$ of type $A_2$. \para{3.3.} Assume that $\ul X$ is of type $B_2$ and $X$ is of type $A_3$. We have $I = \{ 1, 2, 2'\}$, $\ul I = \{ \ul 1, \ul 2\}$ with $\ul 1 = \{ 1 \}$ and $\ul 2 = \{ 2,2'\}$, where $(\a_i,\a_j) = -1$ if $\{ i, j\} = \{ 1, 2\}$ or $\{ 1,2'\}$ and is equal to zero for all other $i \ne j$. By (3.1.4), (3.1.3) is equivalent to the formulas \begin{align*} \tag{3.3.1} &g_{\ul 1}g_{\ul 2}^2 - (q^2 + q^{-2})g_{\ul 2}g_{\ul 1}g_{\ul 2} + g_{\ul 2}^2g_{\ul 1} = 0, \\ \tag{3.3.2} &g_{\ul 2}g_{\ul 1}^3 - [3]_1g_{\ul 1}g_{\ul 2}g_{\ul 1}^2 + [3]_1g_{\ul 1}^2g_{\ul 2}g_{\ul 1} - g_{\ul 1}^3g_{\ul 2} = 0. \end{align*} We show (3.3.1). Here $\BU_q^-$ satisfies the formulas (3.2.3) and the formulas obtained from (3.2.3) by replacing $f_1, f_2$ by $f_1, f_{2'}$. By multiplying $f_{2'}^2$ from the right on (3.2.3) for $f_1f_2^2$, we have \begin{equation*} \tag{3.3.3} f_1f_2^2f_{2'}^2 - (q + q\iv)f_2f_1f_{2'}^2f_2 + f_2^2f_1f_{2'}^2 = 0. \end{equation*} Here by applying (3.2.3) for $f_1f_{2'}^2$, we have \begin{align*} f_2(f_1f_{2'}^2)f_2 &= (q + q\iv)(f_2f_{2'})f_1(f_2f_{2'}) - f_2f_{2'}^2f_1f_2, \\ f_2^2(f_1f_{2'}^2) &= (q + q\iv)f_2^2f_{2'}f_1f_{2'} - (f_2f_{2'})^2f_1. \end{align*} Substituting these formulas into (3.3.3), we have \begin{equation*} f_1(f_2f_{2'})^2 - (q + q\iv)^2(f_2f_{2'})f_1(f_2f_{2'}) - (f_2f_{2'})^2f_1 + Z = 0, \end{equation*} where \begin{equation*} Z = (q + q\iv) \bigl( f_{2'}^2f_2f_1f_2 + f_2^2f_{2'}f_1f_{2'}\bigr). \end{equation*} Since $\ve = 2$ and $Z \in J$, we obtain (3.3.1). \par Next we show (3.3.2). First note the following equality. By using (3.2.3) for $f_1f_2f_1$ and for $f_{2'}f_1^2$, we have \begin{align*} \tag{3.3.4} (q + q\iv)f_1f_{2'}(f_1f_2f_1) &= f_1f_{2'}(f_2f_1^2 + f_1^2f_2) \\ &= f_1(f_2f_{2'})f_1^2 + f_1(f_{2'}f_1^2)f_2 \\ &= f_1(f_2f_{2'})f_1^2 + (q + q\iv)f_1^2f_{2'}f_1f_2 - f_1^3f_{2'}f_2. \end{align*} Here by applying (3.2.3) for $f_2f_1^2$ and for $f_{2'}f_1^2$ twice, we have \begin{align*} f_{2'}f_1(f_2f_1^2) &= f_{2'}f_1\bigl((q + q\iv)f_1f_2f_1 - f_1^2f_2\bigr) \\ &= f_{2'}f_1^2\bigl((q + q\iv)f_2f_1 - f_1f_2\bigr) \\ &= \bigl((q + q\iv)f_1f_{2'}f_1 - f_1^2f_{2'}\bigr) \bigl((q + q\iv)f_2f_1 - f_1f_2\bigr) \\ &= (q + q\iv)^2f_1f_{2'}f_1f_2f_1 - (q + q\iv)f_1(f_{2'}f_1^2)f_2 - (q + q\iv)f_1^2f_{2'}f_2f_1 + f_1^2f_{2'}f_1f_2 \\ &= (q + q\iv)^2f_1f_{2'}f_1f_2f_1 - \bigl((q + q\iv)^2 - 1\bigr)f_1^2f_{2'}f_1f_2 \\ &\phantom{***} - (q + q\iv)f_1^2f_2f_{2'}f_1 + (q + q\iv)f_1^3f_2f_{2'}. \end{align*} Substituting (3.3.4) into the last equality, we obtain \begin{align*} \tag{3.3.5} f_{2'}f_1f_2f_1^2 &= (q + q\iv)f_1(f_2f_{2'})f_1^2 - (q + q\iv)f_1^2(f_2f_{2'})f_1 + f_1^2f_{2'}f_1f_2.
\end{align*} On the other hand, by applying (3.2.3) for $f_{2'}f_1^2$, we have \begin{align*} \tag{3.3.6} (f_{2'}f_1^2)f_2f_1 &= (q + q\iv)f_1f_{2'}f_1f_2f_1 - f_1^2f_{2'}f_2f_1 \\ &= f_1(f_2f_{2'})f_1^2 + (q + q\iv)f_1^2f_{2'}f_1f_2 - f_1^3f_2f_{2'} - f_1^2f_{2'}f_2f_1. \end{align*} The second identity is obtained by substituting (3.3.4) into the first identity. Now by applying (3.2.3) for $f_2f_1^2$ we have \begin{align*} f_{2'}f_2f_1^3 &= f_{2'}(f_2f_1^2)f_1 = (q + q\iv)f_{2'}f_1f_2f_1^2 - f_{2'}f_1^2f_2f_1. \end{align*} By substituting (3.3.5) and (3.3.6) into the last formula, we have \begin{align*} \tag{3.3.7} (f_2f_{2'})f_1^3 &= (q^2 + 1 + q^{-2}) f_1(f_2f_{2'})f_1^2 - (q^2 + 1 + q^{-2})f_1^2(f_2f_{2'})f_1 + f_1^3(f_2f_{2'}). \end{align*} Since $[3]_1 = q^2 + 1 + q^{-2}$, by applying $\pi$, we obtain (3.3.2). Note that the formula (3.3.7) is obtained without appealing modulo 2. Thus (3.1.3) is verified for $\ul X$ of type $B_2$. \para{3.4.} Assume that $\ul X$ is of type $G_2$ and $X$ is of type $D_4$. We have $I = \{ 1,2,2',2''\}$, $\ul I = \{ \ul 1, \ul 2\}$ with $\ul 1 = \{ 1 \}$ and $\ul 2 = \{ 2,2',2''\}$, where $(\a_i, \a_j) = -1$ if $\{ i, j\} = \{ 1, 2\}, \{1, 2'\}$ or $\{ 1, 2''\}$, and is equal to zero for all other $i \ne j$. By (3.1.4), (3.1.3) is equivalent to the formulas \begin{align*} \tag{3.4.1} &g_{\ul 1}g_{\ul 2}^2 - (q^3 + q^{-3})g_{\ul 2}g_{\ul 1}g_{\ul 2} + g_{\ul 2}^2g_{\ul 1} = 0, \\ \tag{3.4.2} &g_{\ul 2}g_{\ul 1}^4 - [4]_1g_{\ul 1}g_{\ul 2}g_{\ul 1}^3 + \begin{bmatrix} 4 \\ 2 \end{bmatrix}_1 g_{\ul 1}^2g_{\ul 2}g_{\ul 1}^2 - [4]_1g_{\ul 1}^3g_{\ul 2}g_{\ul 1} + g_{\ul 1}^4g_{\ul 2} = 0, \end{align*} where $[4]_1 = q^3 + q + q\iv + q^{-3}$ and $\begin{bmatrix} 4 \\ 2 \end{bmatrix}_1 = q^4 + q^2 + 2 + q^{-2} + q^{-4}$. We show (3.4.1). Here $\BU_q^-$ satisfies the formulas (3.2.3) and the formulas obtained from (3.2.3) by replacing $f_1, f_2$ by $f_1, f_{2'}$ or $f_1, f_{2''}$. By multiplying $f_{2'}^2f_{2''}^2$ from the right on (3.2.3) for $f_1f_2^2$, we have \begin{equation*} \tag{3.4.3} f_1f_2^2f_{2'}^2f_{2''}^2 - (q + q\iv)f_2f_1f_{2'}^2f_{2''}^2f_2 + f_2^2f_1f_{2'}^2f_{2''}^2 = 0. \end{equation*} Concerning the middle term, by applying (3.2.3) for $f_1f_{2'}^2$, then for $f_1f_{2''}^2$, we have \begin{align*} \tag{3.4.4} f_2(f_1f_{2'}^2)f_{2''}^2f_2 &= (q + q\iv)f_2f_{2'}(f_1f_{2''}^2)f_{2'}f_2 - f_2f_{2'}^2f_1f_{2''}^2f_2 \\ &= (q + q\iv)^2f_2f_{2'}f_{2''}f_1f_2f_{2'}f_{2''} - (q + q\iv)f_2f_{2'}f_{2''}^2f_1f_2f_{2'} \\ &\phantom {*****} - f_2f_{2'}^2f_1f_2f_{2''}^2. \end{align*} Concerning the third term, by applying (3.2.3) for $f_1f_{2''}^2$, then for $f_1f_{2'}^2$, and finally for $f_2^2f_1$, we have \begin{align*} f_2^2(f_1f_{2''}^2)f_{2'}^2 &= (q + q\iv)f_2^2f_{2''}f_1f_{2''}f_{2'}^2 - f_2^2f_{2''}^2(f_1f_{2'}^2) \\ &= (q + q\iv)f_2^2f_{2''}f_1f_{2''}f_{2'}^2 - (q + q\iv)f_{2'}f_{2''}^2(f_2^2f_1)f_{2'} + f_2^2f_{2''}^2f_{2'}^2f_1 \\ &= (q + q\iv)f_2^2f_{2''}f_1f_{2''}f_{2'}^2 - (q + q\iv)^2f_{2'}f_{2''}^2f_2f_1f_2f_{2'} \\ &\phantom{*****} + (q + q\iv)f_{2'}f_{2''}^2f_1f_2^2f_{2'} + f_2^2f_{2''}^2f_{2'}^2f_1. \end{align*} It follows that \begin{equation*} f_1(f_2f_{2'}f_{2''})^2 - (q + q\iv)^3(f_2f_{2'}f_{2''})f_1(f_2f_{2'}f_{2''}) + (f_2f_{2'}f_{2''})^2f_1 + Z = 0, \end{equation*} where \begin{equation*} Z = (q + q\iv)\bigl( f_2f_{2'}^2f_1f_{2''}^2f_2 + f_{2''}f_2^2f_1f_{2'}^2f_{2''} + f_{2'}f_{2''}^2f_1f_2^2f_{2'}\bigr). \end{equation*} Since $\ve = 3$, $Z \in J$, we obtain (3.4.1). \para{3.5.} It remains to prove (3.4.2). 
We shall prove the following formula in ${}_{\BA'}\BU_q^-$. \begin{align*} \tag{3.5.1} (f_2f_{2'}f_{2''})f_1^4 &- [4]_1f_1(f_2f_{2'}f_{2''})f_1^3 + \begin{bmatrix} 4 \\ 2 \end{bmatrix}_1f_1^2(f_2f_{2'}f_{2''})f_1^2 \\ &- [4]_1f_1^3(f_2f_{2'}f_{2''})f_1 + f_1^4(f_2f_{2'}f_{2''}) \equiv 0 \mod J. \end{align*} Clearly (3.5.1) will imply (3.4.2). The proof of (3.5.1) by the direct computation as in the case of $B_2$ seems to be difficult. Instead, we will prove (3.5.1) by making use of PBW-basis of $\BU_q^-$. \par Let $\Bh = (i_1, \dots, i_{\nu})$ be a sequence associated to the longest element $w_0$ of $W$. Here $W$ is of type $D_4$, and $\nu = 12$. We choose $\Bh$ as \begin{equation*} \tag{3.5.2} \Bh = (2, 2', 2'', 1, 2, 2', 2'', 1, 2, 2', 2'',1). \end{equation*} We define $\b_k = s_{i_1}\cdots s_{i_{k-1}}(\a_{i_k})$ for $k = 1, \dots, \nu = 12$. Then the set $\vD^+$ of positive roots is given as \begin{align*} \tag{3.5.3} \vD^+ &=\{ \b_1, \dots, \b_{12}\} \\ &= \{ 2, 2', 2'', 122'2'', 12'2'', 122'', 122', 1122'2'', 12, 12', 12'', 1\}, \end{align*} where we use the notation for positive roots such as $12 \lra \a_1 + \a_2$, $12'2'' \lra \a_1 + \a_{2'} + \a_{2''}$, etc. For $k = 1, \dots, \nu$, the root vector $f_{\b_k}^{(c)}$ is defined by $f_{\b_k}^{(c)} = T_{i_1}\cdots T_{i_{k-1}}(f_{i_k}^{(c)})$. Then PBW-basis of $\BU_q^-$ is given as $\{ L(\Bc, \Bh) \mid \Bc \in \BN^{12} \}$, where for $\Bc = (c_1, \dots, c_{12})$, \begin{equation*} L(\Bc,\Bh) = f_2^{(c_1)}f_{2'}^{(c_2)}f_{2''}^{(c_3)} f_{122'2''}^{(c_4)}f_{12'2''}^{(c_5)}f_{122''}^{(c_6)}f_{122'}^{(c_7)} f_{1122'2''}^{(c_8)}f_{12}^{(c_9)}f_{12'}^{(c_{10})}f_{12''}^{(c_{11})} f_1^{(c_{12})}. \end{equation*} We use the following commutation relations, \begin{align*} \tag{3.5.4} f_{12} &= f_1f_2 - qf_2f_1 \quad (\text{similarly for } \ f_{12'}, f_{12''}), \\ f_{122'} &= f_{12}f_{2'} - qf_{2'}f_{12} = f_{12'}f_2 - qf_2f_{12'}, \quad (\text{similarly for } \ f_{12'2''}, f_{122''}) \\ f_{122'2''} &= f_{122'}f_{2''} - qf_{2''}f_{122'} = f_{12'2''}f_2 - qf_2f_{12'2''} = f_{122''}f_{2'} - qf_{2'}f_{122''}, \\ f_{1122'2''} &= f_{12''}f_{122'} - qf_{122'}f_{12''} = f_{12}f_{12'2''} - qf_{12'2''}f_{12} = f_{12'}f_{122''} - qf_{122''}f_{12'}, \end{align*} The following formulas are obtained by applying the commutation formula of Levendorskii and Soibelman [LS]. \begin{align*} f_{122'2''}f_{2} &= q\iv f_{2}f_{122'2''}, \quad (\text{similarly for } \ f_{122'2''}f_{2'}, f_{122'2''}f_{2''}), \\ f_{12'2''}f_{122'2''} &= q\iv f_{122'2''}f_{12'2''}, \quad(\text{similarly for } f_{122'}f_{122'2''}, f_{122''}f_{122'2''}), \\ f_{122''}f_{12'2''} &= f_{12'2''}f_{122''}, \quad(\text{similarly for } \ f_{122'}f_{122''}, f_{122'}f_{12'2''}), \\ f_{1122'2''}f_{122'} &= q\iv f_{122'}f_{1122'2''}, \quad(\text{similarly for } f_{1122'2''}f_{122''}, f_{1122'2''}f_{12'2''}), \\ f_{12}f_{1122'2''} &= q\iv f_{1122'2''}f_{12}, \quad(\text{similarly for } f_{12'}f_{1122'2''}, f_{12''}f_{1122'2''}), \\ f_{12'}f_{12} &= f_{12}f_{12'}, \quad(\text{similarly for } f_{12''}f_{12'}, f_{12''}f_{12}), \\ f_1f_{12} &= q\iv f_{12}f_1, \quad(\text{similarly for } f_1f_{12'}, f_1f_{12''}). \end{align*} By using those relations, we obtain \begin{equation*} f_1f_{12'2''} = f_{12'2''}f_1 - (q - q\iv)f_{12'}f_{12''}, \quad(\text{similarly for } f_1f_{122''}, f_1f_{122'}). 
\end{equation*} Also we can compute \begin{align*} f_1(f_2f_{2'}f_{2''}) &= f_{122'2''} + q(f_{2''}f_{122'} + f_{2'}f_{122''} + f_2f_{12'2''}) \\ &\phantom{*****} + q^2(f_{2'}f_{2''}f_{12} + f_2f_{2''}f_{12'} + f_2f_{2'}f_{12''}) + q^3f_2f_{2'}f_{2''}f_1. \end{align*} It follows that \begin{equation*} \tag{3.5.5} f_1(f_2f_{2'}f_{2''}) \equiv f_{122'2''} + q^3f_2f_{2'}f_{2''}f_1 \mod J. \end{equation*} By multiplying $f_1$ from the left on both sides of (3.5.5), we have \begin{align*} f_1^2(f_2f_{2'}f_{2''}) &\equiv f_1f_{122'2''} + q^3f_1(f_2f_{2'}f_{2''}f_1) \\ &\equiv f_1f_{122'2''} + q^3(f_{122'2''} + q^3f_2f_{2'}f_{2''}f_1)f_1 \\ &= f_1f_{122'2''} + q^3f_{122'2''}f_1 + q^6f_2f_{2'}f_{2''}f^2_1, \end{align*} where we again used (3.5.5) in the second identity. On the other hand, we can compute \begin{align*} \tag{3.5.6} f_1f_{122'2''} &= qf_{122'2''}f_1 - q(q - q\iv) \{ f_{12'2''}f_{12} + f_{122''}f_{12'} + f_{122'}f_{12''}\} \\ &\phantom{*****} + (q\iv -2q)f_{1122'2''} \\ &\equiv qf_{122'2''}f_1 + (q + q\iv)f_{1122'2''} \mod J. \end{align*} Hence we have \begin{equation*} \tag{3.5.7} f_1^2(f_2f_{2'}f_{2''}) \equiv (q + q^3)f_{122'2''}f_1 + (q + q\iv)f_{1122'2''} + q^6f_2f_{2'}f_{2''}f_1^2. \end{equation*} Next by multiplying $f_1$ from the left on both sides of (3.5.7), we have \begin{equation*} \tag{3.5.8} f_1^3(f_2f_{2'}f_{2''}) \equiv (q + q^3)f_1f_{122'2''}f_1 + (q + q\iv)f_1f_{1122'2''} + q^6f_1f_2f_{2'}f_{2''}f_1^2. \end{equation*} Here we can compute \begin{equation*} \tag{3.5.9} f_1f_{1122'2''} = q\iv f_{1122'2''}f_1 + (q - q\iv)^2f_{12}f_{12'}f_{12''}. \end{equation*} Thus by applying (3.5.6), (3.5.9) and (3.5.5) to (3.5.8), we have \begin{align*} \tag{3.5.10} f_1^3(f_2f_{2'}f_{2''}) \equiv &(q^6 + q^4 + q^2)f_{122'2''}f_1^2 + (q^4 + 2q^2 + 2 + q^{-2})f_{1122'2''}f_1 \\ &+ (q^3 + 2q + 2q\iv + q^{-3})f_{12}f_{12'}f_{12''} + q^9f_2f_{2'}f_{2''}f_1^3. \end{align*} Here we note that \begin{equation*} \tag{3.5.11} f_1f_{12}f_{12'}f_{12''} = q^{-3}f_{12}f_{12'}f_{12''}f_1. \end{equation*} Then by multiplying $f_1$ from the left on both sides of (3.5.10), and by applying (3.5.6), (3.5.9), (3.5.11) and (3.5.5), we have \begin{align*} \tag{3.5.12} f_1^4(f_2f_{2'}f_{2''}) \equiv (q^9 + &q^7 + q^5 + q^3)f_{122'2''}f_1^3 + (q^7 + 2q^5 + 2q\iv + q^{-3})f_{1122'2''}f_1^2 \\ &+ (q^6 + 2q^2 + 2q^{-2} + q^{-6})f_{12}f_{12'}f_{12''}f_1 + q^{12}f_2f_{2'}f_{2''}f_1^4. \end{align*} Now (3.5.1) can be verified easily by (3.5.5), (3.5.7), (3.5.10) and (3.5.12). Thus (3.4.2) is verified, and (3.1.3) holds for the case $\ul X$ is of type $G_2$. This completes the proof of Proposition 1.10. \remark{3.6.} In the case where $\ul X$ is of type $B_2$, the equality (3.3.7) holds in $\BU_q^-$. This is also true for the case of type $G_2$. In fact, a more precise computation shows that (3.5.1) holds in $\BU_q^-$, without appealing modulo $J$ nor modulo 3. \section{The proof of Lemma 1.13} \para{4.1.} We consider the Cartan matrix as in 3.1. Since $\ul X$ has rank 2, $\ul w_0$ has two reduced expressions $\ul\Bh = (\eta_1, \dots, \eta_{\ul \nu})$ and $\ul\Bh' = (\eta_1', \dots, \eta'_{\ul \nu})$. Let $*$ be the anti-algebra automorphism of $\BU_q^-$ and of $\ul\BU_q^-$. 
It is known that \begin{align*} (\ul T_{\eta_1}\cdots \ul T_{\eta_{k-1}}(\ul f_{\eta_k}))^* &= \ul T_{\eta'_1}\cdots \ul T_{\eta'_{\ul \nu - k}}(\ul f_{\eta'_{\ul \nu - k +1}}), \end{align*} and the following formula is obtained from the corresponding formula for $\BU_q^-$, \begin{equation*} (R_{\eta_1}\cdots R_{\eta_{k-1}}(\wt f_{\eta_k}))^* = R_{\eta'_1}\cdots R_{\eta'_{\ul \nu - k}}(\wt f_{\eta'_{\ul \nu - k + 1}}). \end{equation*} Thus we may verify (1.13.1) for a fixed $\ul\Bh$. \par In the case where $\ul X$ has type $A_1 \times A_1$, there is nothing to prove. \para{4.2.} Assume that $\ul X$ has type $A_2$. We write $I = \{ 1, 1', 2, 2' \}$ with $\ul I = \{ \ul 1, \ul 2\}$, where $\ul 1 = \{ 1,1'\}, \ul 2 = \{ 2,2'\}$. Put $\ul\Bh = (\ul 2, \ul 1, \ul 2)$. Then $\ul \vD^+ = \{\ul 2, \ul{12}, \ul 1\}$. We have \begin{equation*} \ul T_{\ul 2}(\ul f_{\ul 1}) = \ul f_{\ul 1}\ul f_{\ul 2} - q^2\ul f_{\ul 2}\ul f_{\ul 1}, \quad \ul T_{\ul 2}\ul T_{\ul 1}(\ul f_{\ul 2}) = \ul f_{\ul 1}. \end{equation*} We have \begin{align*} \tag{4.2.1} R_{\ul 2}(\wt f_{\ul 1}) &= T_2T_{2'}(f_1f_{1'}) = T_2(f_1)T_{2'}(f_{1'}) \\ &= (f_1f_2 - qf_2f_1)(f_{1'}f_{2'} - qf_{2'}f_{1'}) \\ &= f_{1}f_{1'}f_2f_{2'} + q^2f_2f_{2'}f_1f_{1'} - qZ, \end{align*} with \begin{align*} Z &= f_1f_2f_{2'}f_{1'} + f_2f_1f_{1'}f_{2'} \\ &= (f_{12} + qf_2f_1)f_{2'}f_{1'} + f_2f_1(f_{1'2'} + qf_{2'}f_{1'}) \\ &= f_{12}f_{2'}f_{1'} + f_{1'2'}f_2f_1 + 2qf_2f_1f_{2'}f_{1'}, \end{align*} where $f_{12} = T_2(f_1) = f_1f_2 - qf_2f_1$ and $f_{1'2'} = T_{2'}(f_{1'}) = f_{1'}f_{2'}-qf_{2'}f_{1'}$. Since $\s(f_{12}) = f_{1'2'}$, we see that $Z \in J$. Thus $\pi(R_{\ul 2}(\wt f_{\ul 1})) = g_{\ul 1}g_{\ul 2} - q^2g_{\ul 2}g_{\ul 1}$ and (1.13.1) holds for $\ul T_{\ul 2}(\ul f_{\ul 1})$. Moreover, \begin{align*} \tag{4.2.2} R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2}) &= T_2T_{2'}T_1T_{1'}(f_2f_{2'}) \\ &= T_2T_1(f_2)T_{2'}T_{1'}(f_{2'}) \\ &= f_1f_{1'}. \end{align*} Hence $\pi(R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2})) = g_{\ul 1}$, and (1.13.1) holds for $\ul T_{\ul 2}\ul T_{\ul 1}(\ul f_{\ul 2})$. The lemma holds for $\ul X$ of type $A_2$. \para{4.3.} Next assume that $\ul X$ has type $B_2$, and $X$ has type $A_3$. We write $I = \{ 2,1,2'\}$ and $\ul I = \{ \ul 1, \ul 2\}$, where $\ul 1 = \{ 1 \}, \ul 2 = \{ 2, 2' \}$. Put $\Bh = (2,2',1, 2,2',1)$ and $\vD^+ = \{ 2,2',122',12',12,1 \}$. Then $\ul\Bh = (\ul 2, \ul 1, \ul 2, \ul 1)$ and $\ul\vD^+ = \{ \ul 2, \ul{12}, \ul{112}, \ul 1\}$. We define root vectors and PBW-bases of $\BU_q^-$ and $\ul\BU_q^-$ similarly to the case of $G_2$ in 3.5. Then we have \begin{align*} \tag{4.3.1} \ul f_{\ul{12}} &= \ul T_{\ul 2}(\ul f_{\ul 1}) = \ul f_{\ul 1}\ul f_{\ul 2} - q^2\ul f_{\ul 2}\ul f_{\ul 1}, \\ \ul f_{\ul{112}} &= \ul T_{\ul 2}\ul T_{\ul 1}(\ul f_{\ul 2}) = (q + q\iv)\iv (\ul f_{\ul 1} \ul f_{\ul{12}} - \ul f_{\ul{12}} \ul f_{\ul 1}), \\ \ul f_{\ul 1} &= \ul T_{\ul 2}\ul T_{\ul 1}\ul T_{\ul 2}(\ul f_{\ul 1}). \end{align*} We compute \begin{align*} \tag{4.3.2} R_{\ul 2}(\wt f_{\ul 1}) &= T_2T_{2'}(f_1) = T_2(f_1f_{2'} - qf_{2'}f_1) \\ &= (f_1f_2 - qf_2f_1)f_{2'} - qf_{2'}(f_1f_2 - qf_2f_1) \\ &= f_1f_2f_{2'} + q^2 f_2f_{2'}f_1 - q(f_2f_1f_{2'} + f_{2'}f_1f_2). \end{align*} Hence $\pi(R_{\ul 2}(\wt f_{\ul 1})) = g_{\ul 1}g_{\ul 2} - q^2 g_{\ul 2}g_{\ul 1}$ and (1.13.1) holds for $\ul f_{\ul{12}}$. Also \begin{align*} \tag{4.3.3} R_{\ul 2}R_{\ul 1}R_{\ul 2}(\wt f_{\ul 1}) &= T_2(T_{2'}T_1T_{2'})T_{2}(f_1) \\ &= T_2(T_1T_{2'}T_1)T_2(f_1) \\ &= T_2T_1T_{2'}(f_2) \\ &= T_2T_1(f_2) = f_1.
\end{align*} Hence $\pi(R_{\ul 2}R_{\ul 1}R_{\ul 2}(\wt f_{\ul 1})) = g_{\ul 1}$, and (1.13.1) holds for $\ul f_{\ul 1}$. \par Finally consider \begin{align*} \tag{4.3.4} R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2}) &= T_2T_{2'}T_1(f_2f_{2'}) \\ &= T_{2'}(T_2T_1(f_2))\cdot T_2(T_{2'}T_1(f_{2'})) \\ &= T_{2'}(f_1)T_2(f_1) \\ &= f_{12'}f_{12}. \end{align*} Put \begin{align*} \tag{4.3.5} Z_{\ul{112}} &= f_1(f_1f_2f_{2'} - q^2f_2f_{2'}f_1) - (f_1f_2f_{2'} - q^2f_2f_{2'}f_1)f_1 \\ &= f_1^2f_2f_{2'} - (q^2 + 1)f_1f_2f_{2'}f_1 + q^2f_2f_{2'}f_1^2. \end{align*} Clearly $Z_{\ul{112}} \in \BU_q^{-,\s}$, and $\pi(Z_{\ul{112}}) = (q + q\iv) \Phi(\ul f_{\ul{112}})$ by (4.3.1). We express $Z_{\ul{112}}$ in terms of the PBW-basis of $\BU_q^-$. By using (3.2.3) for $f_1^2f_2$ and $f_1^2f_{2'}$, we have \begin{equation*} \tag{4.3.6} f_1^2f_2f_{2'} = (q + q\iv)f_1f_2f_1f_{2'} - (q + q\iv)f_2f_1f_{2'}f_1 + f_2f_{2'}f_1^2. \end{equation*} \begin{align*} f_1f_2f_1f_{2'} &= (qf_2f_1 + f_{12})(qf_{2'}f_1 + f_{12'}) \\ &= q^2f_2f_1f_{2'}f_1 + qf_{12}f_{2'}f_1 + qf_2f_1f_{12'} + f_{12}f_{12'} \\ &= q^3f_2f_{2'}f_1^2 + qf_{122'}f_1 + f_2f_{12'}f_1 + f_{12}f_{12'} + q^2(f_2f_{12'}f_1 + f_{2'}f_{12}f_1), \\ f_2f_1f_{2'}f_1 &= qf_2f_{2'}f_1^2 + f_2f_{12'}f_1. \end{align*} Here we have used the formula $f_1f_{12'} = q\iv f_{12'}f_1$. Moreover, by using $f_{12}f_{2'} = qf_{2'}f_{12} + f_{122'}$, we have \begin{align*} \tag{4.3.7} f_1f_2f_{2'}f_1 &= q^2f_2f_{2'}f_1^2 + q(f_2f_{12'}f_1 + f_{2'}f_{12}f_1) + f_{122'}f_1. \end{align*} Substituting these formulas into (4.3.5), we see that \begin{equation*} \tag{4.3.8} Z_{\ul{112}} = (q + q\iv)f_{12}f_{12'}. \end{equation*} Combining this with (4.3.4), (4.3.5), we obtain $\Phi(\ul f_{\ul{112}}) = \pi(R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2}))$. Thus the lemma holds for $\ul X$ of type $B_2$. \para{4.4.} Finally assume that $\ul X$ has type $G_2$. We follow the notation in 3.5. Put $\ul\Bh = (\ul 2, \ul 1, \ul 2, \ul 1, \ul 2, \ul 1)$. Then $\ul\vD^+ = \{\ul 2, \ul{12}, \ul{11122}, \ul{112}, \ul{1112}, \ul 1 \}$. We have \begin{align*} \tag{4.4.1} \ul f_{\ul{12}} &= \ul T_{\ul 2}(\ul f_{\ul 1}) = \ul f_{\ul 1}\ul f_{\ul 2} - q^3\ul f_{\ul 2}\ul f_{\ul 1}, \\ \ul f_{\ul{11122}} &= \ul T_{\ul 2}\ul T_{\ul 1}(\ul f_{\ul 2}) = [3]_1\iv(\ul f_{\ul{112}}\ul f_{\ul{12}} - q\iv \ul f_{\ul{12}}\ul f_{\ul{112}}), \\ \ul f_{\ul{112}} &= \ul T_{\ul 2}\ul T_{\ul 1}\ul T_{\ul 2}(\ul f_{\ul 1}) = [2]_1\iv(\ul f_{\ul 1}\ul f_{\ul{12}} - q\ul f_{\ul{12}}\ul f_{\ul 1}), \\ \ul f_{\ul{1112}} &= \ul T_{\ul 2}\ul T_{\ul 1}\ul T_{\ul 2}\ul T_{\ul 1}(\ul f_{\ul 2}) = [3]_1\iv(\ul f_{\ul 1}\ul f_{\ul{112}} - q\iv \ul f_{\ul{112}}\ul f_{\ul 1}), \\ \ul f_{\ul 1} &= \ul T_{\ul 2}\ul T_{\ul 1}\ul T_{\ul 2}\ul T_{\ul 1}\ul T_{\ul 2}(\ul f_{\ul 1}). \end{align*} First consider the case $\ul f_{\ul{12}}$. By using (4.3.2), we have \begin{align*} R_{\ul 2}(\wt f_{\ul 1}) &= T_2T_{2'}T_{2''}(f_1) \\ &= T_{2''}\bigl(f_1f_2f_{2'} + q^2f_2f_{2'}f_1 - q(f_2f_1f_{2'} + f_{2'}f_1f_2) \bigr) \\ &= f_1(f_2f_{2'}f_{2''}) - q^3(f_2f_{2'}f_{2''})f_1 \\ &\phantom{***} -q(f_2f_1f_{2'}f_{2''} + f_{2'}f_1f_{2''}f_2 + f_{2''}f_1f_2f_{2'}) \\ & \phantom{***}+q^2(f_2f_{2'}f_1f_{2''} + f_{2'}f_{2''}f_1f_2 + f_{2''}f_2f_1f_{2'}). \end{align*} Hence $\pi(R_{\ul 2}(\wt f_{\ul 1})) = g_{\ul 1}g_{\ul 2} - q^3g_{\ul 2}g_{\ul 1}$ and (1.13.1) holds for $\ul f_{\ul{12}}$. \par Next consider the case $\ul f_{\ul{112}}$. Put \begin{align*} Z_{\ul{12}} &= f_1(f_2f_{2'}f_{2''}) - q^3(f_2f_{2'}f_{2''})f_1, \\ Z_{\ul{112}} &= f_1Z_{\ul {12}} - qZ_{\ul{12}}f_1.
\end{align*} Then we have \begin{align*} \tag{4.4.2} Z_{\ul{112}} = f_1^2f_2f_{2'}f_{2''} - (q^3 + q)f_1f_2f_{2'}f_{2''}f_1 + q^4f_2f_{2'}f_{2''}f_1^2. \end{align*} Clearly $Z_{\ul{112}} \in \BU_q^{-,\s}$, and we have \begin{equation*} \tag{4.4.3} \pi(Z_{\ul{112}}) = (q + q\iv)\Phi(\ul f_{\ul{112}}) \end{equation*} by (4.4.1). We express each term of $Z_{\ul{112}}$ in terms of the PBW-basis. By (3.5.5), we have \begin{align*} \tag{4.4.4} f_1(f_2f_{2'}f_{2''})f_1 \equiv f_{122'2''}f_1 + q^3f_2f_{2'}f_{2''}f_1^2 \quad \mod J. \end{align*} By (3.5.7), we have \begin{align*} \tag{4.4.5} f_1^2(f_2f_{2'}f_{2''}) \equiv (q &+ q^3)f_{122'2''}f_1 + (q + q\iv)f_{1122'2''} + q^6f_2f_{2'}f_{2''}f_1^2 \quad \mod J. \end{align*} Substituting these formulas into (4.4.2), we have $Z_{\ul{112}} \equiv (q + q\iv)f_{1122'2''} \mod J$, which implies that \begin{equation*} \tag{4.4.6} \pi(Z_{\ul{112}}) = (q + q\iv)\pi(f_{1122'2''}). \end{equation*} Note that by (3.5.2) and (3.5.3), we have \begin{equation*} R_{\ul 2}R_{\ul 1}R_{\ul 2}(f_1) = T_2T_{2'}T_{2''}T_1T_2T_{2'}T_{2''}(f_1) = f_{1122'2''}. \end{equation*} By comparing (4.4.3) and (4.4.6), we obtain \begin{equation*} \tag{4.4.7} \pi(R_{\ul 2}R_{\ul 1}R_{\ul 2}(f_1)) = \Phi(\ul f_{\ul{112}}). \end{equation*} Thus (1.13.1) holds for $\ul f_{\ul{112}}$. \par Next consider the case of $\ul f_{\ul{1112}}$. Put \begin{equation*} Z_{\ul{1112}} = f_1Z_{\ul{112}} - q\iv Z_{\ul{112}}f_1. \end{equation*} From the computation of $Z_{\ul{112}}$ in (4.4.2), we have \begin{align*} \tag{4.4.8} Z_{\ul{1112}} = f_1^3&f_2f_{2'}f_{2''} - (q^3 + q + q\iv)f_1^2f_2f_{2'}f_{2''}f_1 \\ &+ (q^4 + q^2 + 1)f_1f_2f_{2'}f_{2''}f_1^2 - q^3f_2f_{2'}f_{2''}f_1^3. \end{align*} \par Clearly $Z_{\ul{1112}} \in \BU_q^{-,\s}$, and we have \begin{equation*} \tag{4.4.9} \pi(Z_{\ul{1112}}) = [2]_1[3]_1\Phi(\ul f_{\ul{1112}}). \end{equation*} By (3.5.10), we have \begin{align*} f_1^3(f_2f_{2'}f_{2''}) \equiv (q^6 &+ q^4 + q^2)f_{122'2''}f_1^2 + (q^4 + 2q^2 + 2 + q^{-2})f_{1122'2''}f_1 \\ &+ (q^3 + 2q + 2q\iv + q^{-3})f_{12}f_{12'}f_{12''} + q^9f_2f_{2'}f_{2''}f_1^3. \end{align*} By this formula together with (3.5.7) and (3.5.5), we have $Z_{\ul{1112}} \equiv [2]_1[3]_1f_{12}f_{12'}f_{12''} \mod J$, which implies that \begin{align*} \tag{4.4.10} \pi(Z_{\ul{1112}}) = [2]_1[3]_1\pi(f_{12}f_{12'}f_{12''}). \end{align*} Note that by (3.5.2) and (3.5.3), we have \begin{align*} R_{\ul 2}R_{\ul 1}R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2}) &= T_2T_{2'}T_{2''}T_1T_2T_{2'}T_{2''}T_1(f_2f_{2'}f_{2''}) \\ &= f_{12}f_{12'}f_{12''}. \end{align*} By comparing (4.4.9) and (4.4.10), we obtain \begin{equation*} \pi(R_{\ul 2}R_{\ul 1}R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2})) = \Phi(\ul f_{\ul{1112}}). \end{equation*} Thus (1.13.1) holds for $\ul f_{\ul{1112}}$. \par Finally consider the case of $\ul f_{\ul{11122}}$. Put \begin{equation*} Z_{\ul{11122}} = f_{1122'2''}f_{122'2''} - q\iv f_{122'2''}f_{1122'2''}. \end{equation*} By (3.5.2) and (3.5.3), we have \begin{align*} R_{\ul 2}(f_1) = T_2T_{2'}T_{2''}(f_1) = f_{122'2''}. \end{align*} Hence, by the previous computation, we know that $\pi(f_{122'2''}) = \Phi(\ul f_{\ul{12}})$. On the other hand, by (4.4.7), we have $\pi(f_{1122'2''}) = \Phi(\ul f_{\ul{112}})$. It follows, by (4.4.1), that \begin{align*} \tag{4.4.11} \pi(Z_{\ul{11122}}) = [3]_1\Phi(\ul f_{\ul{11122}}). \end{align*} We note, by (3.5.2) and (3.5.3), that \begin{align*} R_{\ul 2}R_{\ul 1}(\wt f_{\ul 2}) = T_2T_{2'}T_{2''}T_1(f_2f_{2'}f_{2''}) = f_{12'2''}f_{122''}f_{122'}.
\end{align*} Thus in order to prove (1.13.1) for $\ul f_{\ul{11122}}$, it is enough to see that \begin{equation*} \tag{4.4.12} Z_{\ul{11122}} \equiv [3]_1f_{12'2''}f_{122''}f_{122'} \mod J. \end{equation*} We shall express $Z_{\ul{11122}}$ in terms of the PBW-basis of $\BU_q^-$. In the computation below, in addition to the formulas in 3.5, we need to use the following commutation relations, which are deduced from the formula of Levendorskii and Soibelman [LS] applied for the subalgebra of type $A_3$. . \begin{align*} \tag{4.4.13} f_{12}f_2 &= q\iv f_2f_{12}, \\ f_{122''}f_2 &= q\iv f_2f_{122''}, \\ f_{122'}f_2 &= q\iv f_2f_{122'}, \\ f_{12}f_{122'} &= q\iv f_{122'}f_{12}, \\ f_{12}f_{122''} &= q\iv f_{122''}f_{12}, \end{align*} and the formulas (two for each) by applying the operation $\s$ on both sides. By using these relations, we have \begin{align*} \tag{4.4.14} f_{12}f_{12'2''} &= f_{122'2''}f_{12} + (q\iv - q)f_{122''}f_{122'}, \\ f_{1122'2''}f_2 &= f_2f_{1122'2''} + (q\iv -q)f_{122''}f_{122'}, \end{align*} and the formulas (two for each) by applying the operation $\s$ on both sides. \par Now we can compute (note that the second formula in (4.4.14) is not used in this computation) \begin{align*} f_{1122'2''}f_{122'2''} = (q^2 - 2 + q^{-2})f_{12'2''}f_{122''}f_{122'} + q\iv f_{122'2''}f_{1122'2''}. \end{align*} Hence \begin{align*} Z_{\ul{11122}} &= f_{1122'2''}f_{122'2''} - q\iv f_{122'2''}f_{1122'2''} \\ &= (q^2 - 2 + q^{-2})f_{12'2''}f_{122''}f_{122'} \\ &\equiv [3]_1f_{12'2''}f_{122''}f_{122'} \mod J. \end{align*} Thus (4.4.12) holds, and (1.13.1) is proved for $\ul f_{\ul{11122}}$. The lemma holds for $\ul X$ of type $G_2$. \par \par \noindent T. Shoji \\ School of Mathematical Sciences, Tongji University \\ 1239 Siping Road, Shanghai 200092, P.R. China \\ E-mail: \verb|[email protected]| \par \noindent Z. Zhou \\ School of Mathematical Sciences, Tongji University \\ 1239 Siping Road, Shanghai 200092, P.R. China \\ E-mail: \verb|[email protected]| \end{document}
\begin{document} \twocolumn[ \icmltitle{Hiring Under Uncertainty}\ \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Manish Raghavan}{cornell} \icmlauthor{Manish Purohit}{goo} \icmlauthor{Sreenivas Gollapudi}{goo} \end{icmlauthorlist} \icmlaffiliation{cornell}{Department of Computer Science, Cornell University} \icmlaffiliation{goo}{Google, Inc.} \icmlcorrespondingauthor{Manish Raghavan}{[email protected]} \icmlkeywords{Machine Learning, IML} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} In this paper we introduce the hiring under uncertainty problem to model the questions faced by hiring committees in large enterprises and universities alike. Given a set of $n$ eligible candidates, the decision maker needs to choose the sequence of candidates to make offers so as to hire the $k$ best candidates. However, candidates may choose to reject an offer (for instance, due to a competing offer) and the decision maker has a time limit by which all positions must be filled. Given an estimate of the probabilities of acceptance for each candidate, the hiring under uncertainty problem is to design a strategy of making offers so that the total expected value of all candidates hired by the time limit is maximized. We provide a 2-approximation algorithm for the setting where offers must be made in sequence, an 8-approximation when offers may be made in parallel, and a 10-approximation for the more general stochastic knapsack setting with finite probes. \end{abstract} \section{Introduction} \label{sec:intro} Hiring is a core activity of any enterprise where the timely fulfillment of staffing needs is critical to its functioning. In addition to estimating the quality and suitability of a candidate, the enterprise also needs to deal with uncertainty that arises as the result of good candidates rejecting the job offer. Balancing this trade-off between hiring good quality candidates while at the same time ensuring that all staffing needs are met by a deadline is one of the most challenging aspects of hiring in practice. A number of algorithmic questions that are inspired by hiring settings have been well-studied in literature (see Section \ref{sec:rw}) including the popular \emph{secretary problem} and its many variants. This line of work focuses on the online nature of the problem and the key question tackled is how to find a good set of candidates when the pool of future candidates is unknown. However, this line of research does not model the other source of uncertainty, i.e., the candidate itself may choose to reject the job offer (for instance, due to a better competing offer), which in turn raises the question of hiring enough candidates by the deadline. During the hiring process, the ``quality'' (or value) of a candidate is often estimated by traditional hiring processes such as resume screening and formal interviews and even via algorithmic techniques (see Section \ref{sec:rw}). On the other hand, machine learning models can estimate the probability that a given candidate will accept a job offer based on various features such as the candidate's educational background, salary expectations, location preferences, and so on. Considering both the value as well as offer acceptance probability of each candidate leads to a rich collection of optimization problems. 
In this paper, we initiate the study of the hiring under uncertainty problem that aims to tackle the inherent trade-off at the heart of the hiring process - {\em how should we make job offers under uncertainty so that all staffing needs are met by a deadline, and yet hire the best available candidates}? Formally, we consider the following model as the basis for all the variants we present in this paper. There is a set of $n$ candidates, and we need to hire $k$ of them. We do this by making offers to candidates, which we'll also refer to more abstractly as ``probing'' a candidate. Each candidate $i$ has a known value $v_i$ and probability $p_i$ of accepting an offer, independent of all other candidates. We have a deadline of $t$ time steps, after which we can't make any further offers. It takes one time step to make an offer and receive an answer from a candidate. Job offers are irrevocable, i.e., once a candidate accepts an offer, that position is ``filled'' and we cannot replace that candidate with a better candidate in the future. The total value of a chosen set of candidates is simply the sum of the individual candidate values. Our goal is to maximize the total expected value of the hired candidates. We also consider two natural generalizations of this model. First, we allow making parallel offers to multiple candidates in a given time step. Second, we consider the knapsack hiring problem where each candidate $i$ has a size $s_i$ and we have a budget $B$ on the total size of hired candidates. The knapsack hiring problem models the scenario when the enterprise has a fixed budget and different candidates need to be offered different salaries. We note that in all settings, we do not require $v_i$ to be known precisely; all of our results hold if $v_i$ is only known in expectation. However, our results are sensitive to errors in $p_i$ and $s_i$. Making them robust to such errors is an interesting subject for future work. \subsection{Our Contributions} We summarize our contributions in this study. \begin{itemize} \item In Section \ref{sec:star}, we offer a $2$-approximation algorithm for hiring $k$ candidates with a constraint of making at most $t$ sequential offers. \item In Section \ref{sec:parallel}, we consider the parallel offers model where we are allowed to make as many parallel offers each time step as the number of unfilled positions remaining and design a $8$-approximation algorithm. \item In Section \ref{sec:knapsack10}, we present a $10$-approximation for the {\em knapsack hiring} problem where each candidate has a different size and the decision-maker is constrained by a total budget (as opposed to hiring $k$ candidates). \item We offer a connection to other stochastic optimization problems such as stochastic matching and present a lower-bound for the stochastic matching problem. \item Finally, we show the efficacy of our algorithms using simulations on data drawn from different distributions. \end{itemize} \subsection{Related Work} \label{sec:rw} Theoretical questions inspired by hiring scenarios have long been studied in the online setting under the names of optimal stopping or ``secretary'' problems~\cite{dynkin1963optimum,chow1964optimal}. A few extensions of this setting incorporate elements of our model. Kleinberg considers the case of hiring multiple candidates instead of the traditional single-hire case~\cite{kleinberg2005multiple}. 
An older line of work considers a version of the secretary problem in which candidates may stochastically reject offers, although this is typically modeled as a fixed rejection probability~\cite{smith1975secretary,tamaki1991secretary,tamaki2000minimal,ano2000note}. In addition, more recent work on stochastic optimization considers a variety of related problems in the offline setting. This includes stochastic versions of submodular optimization~\cite{asadpour2008stochastic,gupta2017adaptivity}, knapsack~\cite{dean2004approximating,dean2005adaptivity,bhalgat2011improved}, bandits~\cite{gupta2011approximation,ma2014improvements}, and matching~\cite{bansal2010lp,adamczyk2015improved,baveja2018improved}. Some special cases of our model (specifically, when one candidate is being hired) can be considered a special case of matching, and in fact, the results we derive here will provide lower bounds for stochastic matching. However, our model cannot in general be captured by any of these prior works. Algorithmic and data-driven approaches to hiring have become increasingly common with the rise of machine learning~\cite{miller2015can,carmichael2015hiring}. In particular, there is a long line of work focused on predicting teacher quality from data~\cite{kane2008estimating,dobbie2011teacher,chalfin2016productivity,jacob2018teacher}. More broadly,~\citet{mullainathan2017machine} describe the integration of machine learning with traditional econometric techniques to better estimate quantities like employee performance. Furthermore, studying the gig economy, \citet{kokkodis2015hiring} use machine learning to estimate the likelihood that freelancers get hired. \section{Hiring Problem: How to fill $k$ positions sequentially?} \label{sec:star} In this section, we consider the basic hiring problem where we want to hire $k$ employees out of $n$ potential candidates with a constraint of making at most $t$ sequential offers. \subsection{Special case: Hiring a single employee ($k = 1$)} \label{sec:singlecandidate} To develop some intuition about the problem as well as to illustrate some of the challenges posed, we begin with the case where $k=1$, i.e.\ , we only want to hire one candidate. One might hope that a simple greedy algorithm is optimal in this special case. Unfortunately, as we will show, a number of seemingly natural greedy algorithms\footnote{For instance, sorting the candidates by decreasing $p_i$, $v_i$, or $p_i \cdot v_i$ and then making at most $t$ offers until one accepts.} do not yield optimal solutions. However, we can still take advantage of structural properties of the solution. In particular, given a set of $t$ candidates, the optimal order in which to make offers to them is in decreasing order of $v_i$. To see why, for any two candidates $i$ and $j$, consider the four possible outcomes of making offers to them: both $i$ and $j$ accept, both reject, $i$ accepts and $j$ rejects, and vice versa. The only outcome in which the order of offers matters is when they both accept, since the position will go to the candidate receiving first offer, and the second offer will never be made. In this case, it is clearly better to make the first offer to the candidate with higher value. Since the optimal algorithm must always make offers to candidates in decreasing order by value, we can write a dynamic program to compute the optimal subset of $t$ candidates to potentially make offers to. Assume the candidates are sorted in non-increasing order of $v_i$, i.e., $v_1 \geq v_2 \geq \ldots \geq v_n$. 
Let $S(i, s)$ be the optimal expected value that can be achieved with $s$ time steps remaining by only considering candidates $i$ through $n$. Then, we have the recurrence \[ S(i, s) = \max\{p_i v_i + (1-p_i) S(i+1, s-1), S(i+1, s)\} \] where the two terms correspond to either making an offer to candidate $i$ or not. Note that $S(1, t)$ then gives the value of the optimal solution, and the offer strategy can be found by retracing the choices of the dynamic program. \subsection{General Problem: Hiring $k$ employees ($k > 1$)} While the $k=1$ case admits a clean solution, the general case where $k > 1$ is more complex. We first note that a simple $k$-approximation exists: using the dynamic program from Section~\ref{sec:singlecandidate}, we know how to optimally fill a single slot. Doing so yields a candidate who is in expectation at least as good as any of the $k$ candidates hired by the optimal strategy. In general, the optimal solution may display several non-monotonicities that make it difficult to extend the $k=1$ solution. \begin{example} \label{ex:non_monotone} Consider the following instance with $n = 4$, $t = 3$, and $k = 2$. \begin{align*} (p_1, v_1) &= (1, 1) & (p_2, v_2) &= (0.5, 1) \\ (p_3, v_3) &= (0.5, 1) & (p_4, v_4) &= (0.1, 2) \end{align*} \end{example} We will show that in the optimal strategy, the offers made are not necessarily monotone in acceptance probability, value, or expected value. First, note that any deterministic strategy can be represented as a binary decision tree, where each node in the tree corresponds to a candidate to whom an offer is made. The two branches are the resulting strategies if the offer is accepted or rejected. Taking the convention that the right branch corresponds to acceptance, the optimal solution for the above instance is as shown in Figure~\ref{fig:ex_sol}. \begin{wrapfigure}{l}{.19\textwidth} \centering \begin{tikzpicture}[scale=.6] \node (2) at (0, 0) {2}; \node (4) at (2, -1) {4}; \node (1l) at (.8, -2) {1}; \node (3) at (-2, -1) {3}; \node (1m) at (-.8, -2) {1}; \node (1r) at (-3.2, -2) {1}; \draw[green] (2) -- (4) (3) -- (1m) ; \draw[red] (2) -- (3) (4) -- (1l) (3) -- (1r) ; \end{tikzpicture} \caption{An optimal solution to Example~\ref{ex:non_monotone}} \label{fig:ex_sol} \end{wrapfigure} Note that there are several counter-intuitive effects at play here. First, despite having the lowest acceptance probability and expected value, candidate 4 still receives an offer with probability $1/2$. Second, the candidate with the highest expected value (candidate 1) receives an offer either last or not at all. Finally, despite the fact that candidates 1, 2, and 3 all have the same value, it is strictly optimal to make an offer to candidate 2 (or 3) before candidate 1, even though candidate 1 accepts with higher probability. Thus, unlike in the $k=1$ scenario, the optimal solution may not be value-ordered, so the dynamic programming approach discussed above cannot be optimal here. We conjecture that this problem is NP-hard for general $k$. In the remainder of this section, we present an approximation algorithm that runs in polynomial time and yields at least half of the optimal expected value. We first show that there exists a non-adaptive algorithm yielding a 2-approximation. Then, we show that a dynamic program similar to that in Section \ref{sec:singlecandidate} gives an adaptive algorithm that is better than \emph{any} non-adaptive algorithm, and hence is also a 2-approximation. 
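\paragraph*{An illustrative implementation.} For concreteness, the single-hire recurrence from Section~\ref{sec:singlecandidate} can be computed directly by a small dynamic program. The following Python sketch is purely illustrative (the function name \texttt{hire\_one\_value} and the encoding of candidates as $(p_i, v_i)$ pairs are illustrative conventions, not artifacts of this paper); it returns $S(1, t)$ for a given candidate pool and deadline $t$, and the optimal offer sequence can be recovered by retracing which branch of the $\max$ is taken at each state.
\begin{verbatim}
# Illustrative sketch (not from the paper): the k = 1 dynamic program
#   S(i, s) = max{ p_i v_i + (1 - p_i) S(i+1, s-1), S(i+1, s) },
# with candidates considered in non-increasing order of value.
def hire_one_value(candidates, t):
    cands = sorted(candidates, key=lambda pv: pv[1], reverse=True)
    n = len(cands)
    S = [[0.0] * (t + 1) for _ in range(n + 1)]  # S[n][s] = 0: no candidates left
    for i in range(n - 1, -1, -1):
        p, v = cands[i]
        for s in range(1, t + 1):
            offer = p * v + (1 - p) * S[i + 1][s - 1]  # make an offer to candidate i
            skip = S[i + 1][s]                         # do not make an offer to candidate i
            S[i][s] = max(offer, skip)
    return S[0][t]
\end{verbatim}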
\subsubsection{Establishing an adaptivity gap of 2} \citet{gupta2017adaptivity} study adaptivity gaps for stochastic probing problems where the goal is to maximize a given submodular function (or XOS function) over the set of active, probed elements. In this setting, each element $e$ is active independently with probability $p_e$ and the set of elements that are probed must satisfy given prefix-closed constraints. The {\textsc{Hiring with Uncertainty}}\ problem does not quite fit into their framework, since their framework allows one to choose the ``best'' set of active, probed elements, while in our setting we are forced to hire the \emph{first} $k$ candidates that are active. Nevertheless, we can leverage some insights from~\cite{gupta2017adaptivity} to show an adaptivity gap of 2 (as opposed to 3 obtained by them for stochastic probing). Similar to the one shown in Figure~\ref{fig:ex_sol}, the optimal solution to any instance can be represented by a binary tree $\mathcal{T}$. Each node $u$ of $\mathcal{T}$ corresponds to a candidate $i$ (denoted by $cand(u)$) and has two outgoing edges leading to subtrees in case the candidate $i$ is active (happens with probability $p_i$) or inactive (happens with probability $(1-p_i)$). Any root to leaf path in this tree represents the sequence of offers made by the optimal algorithm in a particular realization. The tree $\mathcal{T}$ naturally defines a probability distribution $\pi_\mathcal{T}$ over root to leaf paths - start at the root and at each node $u$, follow the ``yes'' edge with probability $p_i$ where $i = cand(u)$ and the ``no'' edge otherwise. Since the optimal strategy can make offers to at most $t$ candidates, any such path must have at most $t$ nodes. \begin{wrapfigure}{r}{0.17\textwidth} \centering \begin{tikzpicture}[scale=.5,thick] \node (a) at (0, .3) {$A$}; \node (b) at (-1, 0) {$B$}; \node (c) at (-2, -0.5) {$C$}; \node (d) at (-1, -1.5) {$D$}; \node (e) at (-3, -2) {$E$}; \node (f) at (-2, -3) {$F$}; \node (g) at (-2.8, -3.5) {$G$}; \node (h) at (-3.8, -4) {$H$}; \node (i) at (-4.8, -4.5) {$I$}; \draw[red,->] (0, 0) -> (-1, -.5); \draw[red,->] (-1, -.5)-> (-2, -1) ; \draw[green,->] (-2, -1) -> (-1.5, -1.5) ; \draw[red,->] (-1.5, -1.5)-> (-2.5, -2) ; \draw[green,->] (-2.5, -2) -> (-2, -2.5) ; \draw[red,->] (-2, -2.5)-> (-3, -3) ; \draw[red,->] (-3, -3)-> (-4, -3.5) ; \draw[red,->] (-4, -3.5)-> (-5, -4) ; \end{tikzpicture} \caption{A path with 3 segments: $ABC$, $DE$, and $FGHI$.} \label{fig:segment} \end{wrapfigure} Further, since any strategy can only hire at most $k$ candidates, any root to leaf path in $\mathcal{T}$ must have at most $k-1$ ``yes'' edges. Thus any root to leaf path $P$ can be decomposed into at most $k$ ``segments'' where a segment is a maximal sub-path composed of only ``no'' edges as shown in Figure~\ref{fig:segment}. Let $\mathsf{segments}(P) = \{S_1, S_2, \ldots, S_\ell\}$ denote the set of segments in $P$. For each segment $S \in \mathsf{segments}(P)$, let $\mathsf{last}(S)$ denote the \emph{last} node on segment $S$. Given the optimal tree $\mathcal{T}$, Procedure~\ref{alg:app_given_tree} samples a single path $P$ according to the distribution $\pi_\mathcal{T}$ and then probes the candidates on each segment of $P$ in descending order by value to hire at most one candidate from each segment. In the rest of this section, we show that Procedure~\ref{alg:app_given_tree} yields at least half of the total expected value of $\mathcal{T}$ in expectation. 
\begin{algorithm} \begin{algorithmic}[1] \caption{ApproxGivenTree$(\mathcal{T})$} \label{alg:app_given_tree} \STATE $P \gets$ a random path sampled from $\pi_{\mathcal{T}}$ \STATE $S_1, \dots, S_\ell \gets P$ divided into at most $k$ segments \\ \COMMENT{Each $S_j$ is a list of candidates} \FOR {$j \gets 1, \dots, \ell$} \STATE $S_j' \gets S_j$ sorted in decreasing order of value \FOR{each candidate $i$ in $S_j'$} \STATE Make an offer to candidate $i$ \IF{$i$ accepts} \STATE \textbf{break} \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Let $\mathsf{val}(\mathcal{T}) = \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 1}^{\ell} v_{\mathsf{last}(S_j)}}$ be the total expected value of the tree $\mathcal{T}$ (note that $\ell$ is a random variable). Similarly, let $\mathsf{alg}o(\mathcal{T})$ be the expected value obtained by Procedure~\ref{alg:app_given_tree} on tree $\mathcal{T}$. For any segment $S_j$, we define $\mathsf{segval}(S_j)$ to be the expected value of the active candidate from $S_j$ with largest value. Formally, if $S_j$ consists of candidates $\{1, 2, \ldots, |S_j|\}$ sorted in non-increasing order of their values, then \[\mathsf{segval}(S_j) \triangleq \sum_{i=1}^{|S_j|} p_i v_i \prod_{j < i} (1-p_j).\] We observe that $\mathsf{alg}o(\mathcal{T}) = \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 1}^\ell \mathsf{segval}(S_j)}$. In other words, Procedure~\ref{alg:app_given_tree} obtains the value of the active element with the largest value in each segment. The following lemma shows that in expectation, this is a 2-approximation to $\mathsf{val}(\mathcal{T})$. \begin{lemma} \label{lem:seq_approx} \[ \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 1}^\ell \mathsf{segval}(S_j)} \ge \frac{1}{2} \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 1}^{\ell} v_{\mathsf{last}(S_j)}} \] \end{lemma} \begin{proof} We proceed by induction over the segments. Let $I$ be $\mathsf{last}(S_1)$, and let $J$ be the random variable denoting the index of the first active candidate on $S_1$, so $J \le I$. If no candidate on this segment is active, we'll say $J = 0$. In the base case, $\ell=1$. Otherwise, using the inductive hypothesis, \[ \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 2}^\ell \mathsf{segval}(S_j)} \ge \frac{1}{2} \EM{P \sim \pi_\mathcal{T}}{\sum_{j = 2}^{\ell} v_{\mathsf{last}(S_j)}}. \] In either case, it suffices to show that $\E{v_J} \ge \frac{1}{2} \E{v_I}$. This follows from Lemma 3.3 of \citet{gupta2017adaptivity} and the complete proof is deferred to Appendix~\ref{app:proofs}. \end{proof} \paragraph*{Removing adaptivity.} Note that Procedure~\ref{alg:app_given_tree} is adaptive, since it probes within a segment until it finds an active element. However, we can use it to argue about a simpler non-adaptive algorithm: pick a random path down the optimal tree $\mathcal{T}$, sort all the items in it in decreasing order of value, and make offers in that order. This has value at least as large as $\mathsf{alg}o(\mathcal{T})$ because for any realization of which elements are active and inactive, making offers in decreasing order of value is always beneficial. Thus, the adaptivity gap for this problem is at most $2$. \subsubsection{A constructive 2-approximation} In the above section, we have shown that there exists a non-adaptive algorithm whose total expected value is at least half of the expected value of the optimal algorithm. However, this algorithm relied on the knowledge of the optimal decision tree $\mathcal{T}$ and is thus non-constructive. 
We now design a polynomial time algorithm ($\mathsf{seq}alg$) whose expected value is at least the expected value of \emph{any non-adaptive algorithm}, and hence is also at least half the expected value of the optimal algorithm. We observe that by definition, any non-adaptive algorithm must choose a fixed sequence of $t$ potential candidates and make offers to them in order until $k$ of them accept. Further, as discussed in Section \ref{sec:singlecandidate}, the optimal such algorithm must probe the candidates in non-increasing order by value. However, using a dynamic programming strategy similar to that in Section \ref{sec:singlecandidate}, we can find the \emph{optimal} algorithm (not necessarily non-adaptive) that probes candidates in non-increasing order by value. This must be better than the optimal non-adaptive algorithm and hence is also a 2-approximation. \paragraph*{Dynamic Program ($\mathsf{seq}alg$).} We again assume that the candidates are sorted in non-increasing order of their values $v_i$. Let $S(i, \ell, s)$ be the optimal expected value that can be achieved by hiring at most $\ell$ candidates in $s$ time steps by only considering candidates $i$ through $n$ in sorted order. We obtain the following recurrence: \begin{align*} S(i, \ell, s) = &\max\{p_i (v_i + S(i+1, \ell-1, s-1)) \numberthis \label{eq:mult_recurrence} \\ &+ (1-p_i) S(i+1, \ell, s-1), S(i+1, \ell, s)\}. \end{align*} where the two terms correspond to either making an offer to candidate $i$ or not. Let $\mathsf{seq}alg$ denote the dynamic program constructed using the above recurrence. We abuse notation slightly and let $\mathsf{seq}alg_{k,t} = S(1, k, t)$ be the expected value obtained by this algorithm. Let $\mathsf{seq}opt_{k,t}$ be the value of the optimal adaptive strategy. Lemma~\ref{lem:seq_approx} shows that the optimal non-adaptive strategy is a 2-approximation to $\mathsf{seq}opt_{k,t}$. Because $\mathsf{seq}alg$ is at least as good as any non-adaptive strategy, we have that for a set of candidates $\mathcal{C}$, $\mathsf{seq}alg_{k,t}(\mathcal{C}) \ge \frac{1}{2} \mathsf{seq}opt_{k,t}(\mathcal{C})$. \paragraph*{A lower bound.} It is an open question as to whether the above analysis is tight, i.e., whether this algorithm may actually be closer to optimal than a factor of two. However, by modifying the probabilities and values in Example~\ref{ex:non_monotone}, we show in Appendix~\ref{app:gap} that no algorithm that provides a value-sorted solution (including $\mathsf{seq}alg$) can get more than $0.927$ of the optimal algorithm in general. \section{Filling $k$ Positions in Parallel} \label{sec:parallel} In the previous section, we considered the problem of hiring with sequential offers. However, if we have $k$ positions to fill, we could in principle make $k$ offers per timestep. This is clearly more powerful than the sequential offer model, since any sequence of sequential offers is valid in the parallel model. We'll treat the constraint of filling $k$ positions as hard, meaning that if at a particular timestep there are $\ell < k$ remaining unfilled positions, we can only make $\ell$ offers at that time, though it would be an interesting future direction to consider a relaxed version in which we hire at most $k$ candidates with high probability. Intuitively, the more slots remain available, the more offers can be made, which is beneficial when there are many high-value low-probability candidates. 
This means an optimal strategy must somehow balance the tension between two conflicting objectives: filling slots and maximizing the number of offers that can be made to risky candidates. The following example demonstrates this tension.
\begin{example} \label{ex:parallel} Consider the example with $n = 2t - 1$ candidates and $k=2$. $2t-2$ of the candidates have $p_i = 1/(2t-2)$ and $v_i = 1$, and the last candidate has $p_n = 1$ and $v_n = 1$. \end{example}
Even though candidate $n$ will surely accept an offer, the optimal strategy here is to make offers to all of the low-probability candidates (2 at a time) until one of them accepts, and then to make an offer to candidate $n$, who will definitely accept. As $t$ gets large, this yields value approximately $\frac{2e-1}{e} \approx 1.63$ in expectation. Making an offer to candidate $n$ first can only get value approximately $\frac{2\sqrt{e}-1}{\sqrt{e}} \approx 1.39$, since we can only make one offer per timestep after we fill the first slot. Thus, the order in which offers are made significantly impacts the overall value.
\paragraph*{An 8-approximation algorithm ($\mathsf{paralg}$).} We now design $\mathsf{paralg}$, a constructive 8-approximation algorithm, drawing on the results in Section~\ref{sec:star}. The basic idea is to relax the parallel offer instance with $t$ timesteps to a sequential offer instance with $k \cdot t$ timesteps, solve this using $\mathsf{seq}alg_{k,kt}$, and use this solution to construct a solution to the original instance. Given a set of candidates $\mathcal{C}$, let $\mathsf{paropt}_{k, t}(\mathcal{C})$ be the expected value of the optimal solution with parallel offers, filling $k$ slots in $t$ timesteps. Then, $\mathsf{paropt}_{k, t}(\mathcal{C}) \le \mathsf{seq}opt_{k, kt}(\mathcal{C})$, since any sequence of parallel offers can be done in sequence over $kt$ timesteps. We can apply the dynamic programming algorithm $\mathsf{seq}alg$ from Section \ref{sec:star} to $\mathcal{C}$ to get a sequential-offer strategy over $kt$ timesteps yielding expected value at least $\frac{1}{2} \mathsf{seq}opt_{k, kt}(\mathcal{C})$. Let $\mathcal{T}$ be the resulting decision tree. We'll show how to convert this sequential-offer decision tree over $kt$ timesteps into a parallel-offer strategy over $t$ timesteps.
\begin{algorithm} \begin{algorithmic}[1] \caption{ParallelFromSequential$(\mathcal{T})$} \label{alg:p_from_s} \STATE $P \gets$ a random path sampled from $\pi_\mathcal{T}$ \STATE $S_1, \dots, S_\ell \gets$ the segments of $P$ \STATE $S_1', \dots, S_m' \gets$ segments split such that each has length at most $t$. \STATE Sort each segment $S_j'$ in decreasing order of $v_i$. \STATE Let $U$ be the indices of the $k$ segments with highest $\mathsf{segval}(\cdot)$ \FOR {$s \gets 1, \dots, t$} \FOR {$j \in U$} \STATE Make an offer to candidate $i$, the $s^{\text{th}}$ candidate of $S_j'$ (if it exists), at timestep $s$. \IF {$i$ accepts} \STATE Remove $j$ from $U$ \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm}
Let $\mathsf{alg}t(\mathcal{T}^*)$ be the expected value of the parallel-offer strategy produced by Procedure~\ref{alg:p_from_s}, where $\mathcal{T}^*$ is the output of $\mathsf{seq}alg_{k,kt}(\mathcal{C})$. Then, we have the following.
\begin{lemma} \label{lem:par_approx} \[ \mathsf{alg}t(\mathcal{T}^*) \ge \frac{1}{8} \mathsf{paropt}_{k, t}(\mathcal{C}) \] \end{lemma}
\begin{proof} By Lemma~\ref{lem:seq_approx}, \[ \mathsf{seq}alg_{k,kt}(\mathcal{C}) \ge \frac{1}{2} \mathsf{seq}opt_{k, kt}(\mathcal{C}) \ge \frac{1}{2} \mathsf{paropt}_{k, t}(\mathcal{C}). \] To complete the proof, we must show that $\mathsf{alg}t(\mathcal{T}^*) \ge \frac{1}{4} \mathsf{seq}alg_{k,kt}(\mathcal{C})$. Note that Procedure~\ref{alg:p_from_s} yields an offer strategy in the parallel model, while $\mathcal{T}^*$ represents a strategy in the sequential model with $kt$ timesteps. First, observe that by applying Lemma~\ref{lem:seq_approx} again, if we could make offers along each segment of a random path down $\mathcal{T}^*$ in decreasing order of value, we'd get a 2-approximation to $\mathsf{seq}alg_{k,kt}(\mathcal{C})$, since we'd get the maximum active element on each segment. Since there are at most $k$ segments, we could make an offer to the highest valued candidate from each segment in the first time step and proceed down each segment in parallel, discarding a segment once a candidate accepts an offer. However, since some segments may have length more than $t$, we may not have enough offers to go all the way down each segment. Consequently, in step 3 of Procedure~\ref{alg:p_from_s}, we partition the segments further so that each new segment contains at most $t$ candidates. More formally, if a segment $S_j \in \mathsf{segments}(P)$ has length $at + b$ for some integers $a,b \geq 0$ and $b < t$, arbitrarily split the candidates in $S_j$ into $a+1$ new segments such that $a$ of those segments have exactly $t$ candidates. Let $|S_j|$ be the size of the $j$th segment. When we split it up into new segments of length at most $t$, $S_j$ will be turned into $\lceil \frac{|S_j|}{t} \rceil$ segments. Thus, after splitting, the number of new segments is at most \[ \sum_{j=1}^\ell \left\lceil \frac{|S_j|}{t} \right\rceil \le \sum_{j=1}^\ell \left( \frac{|S_j|}{t} + 1 \right) \le k + \frac{\sum_{j=1}^\ell |S_j|}{t} \le 2k. \] Hence, if we were to pick $k$ segments uniformly at random, each segment would have probability at least $1/2$ of being selected into $U$, meaning that \begin{align*} \sum_{j=1}^m \E{\mathsf{segval}(S_j') \Pr[j \in U]} &\ge \frac{1}{2} \sum_{j=1}^m \E{\mathsf{segval}(S_j')} \\ &\ge \frac{1}{2} \sum_{j=1}^\ell \E{\mathsf{segval}(S_j)} \\ & \ge \frac{1}{4} \mathsf{seq}alg_{k,kt}(\mathcal{C}), \end{align*} where the last inequality follows from Lemma~\ref{lem:seq_approx}. We can derandomize this by choosing the $k$ segments with highest expected value, which must be at least as good as $k$ random segments. As a result, we have \[ \mathsf{alg}t(\mathcal{T}^*) \ge \frac{1}{4} \mathsf{seq}alg_{k,kt}(\mathcal{C}) \ge \frac{1}{8} \mathsf{paropt}_{k, t}(\mathcal{C}). \] \end{proof}
Thus, our final algorithm (which we call $\mathsf{paralg}_{k,t}$, or just $\mathsf{paralg}$ when $k$ and $t$ are clear) is to first apply $\mathsf{seq}alg_{k,kt}$ to $\mathcal{C}$, producing a tree $\mathcal{T}^*$, and build a parallel-offer strategy from $\mathcal{T}^*$ with Procedure~\ref{alg:p_from_s}. \begin{equation} \label{eq:paralg_def} \mathsf{paralg}_{k,t}(\mathcal{C}) \triangleq \text{ParallelFromSequential}(\mathsf{seq}alg_{k,kt}(\mathcal{C})) \end{equation} As noted above, our final approximation factor will be $\min(k, 8)$, since we get a $k$-approximation simply by filling one position optimally.
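
Both building blocks of $\mathsf{paralg}$ are simple to prototype. The sketch below is an illustration added for concreteness (it is not the authors' code): it computes the value $S(1, k, t)$ of recurrence \eqref{eq:mult_recurrence}, assuming zero-based indices and candidates sorted in non-increasing order of value, together with the quantity $\mathsf{segval}$ used in step 5 of Procedure~\ref{alg:p_from_s}. The offer tree $\mathcal{T}^*$ can be recovered by recording which branch attains each maximum.
\begin{verbatim}
def seqalg_value(p, v, k, t):
    # S[i][l][s]: best expected value when hiring at most l more candidates
    # with s offers left, considering only candidates i..n-1.
    n = len(p)
    S = [[[0.0] * (t + 1) for _ in range(k + 1)] for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for l in range(1, k + 1):
            for s in range(1, t + 1):
                offer = (p[i] * (v[i] + S[i + 1][l - 1][s - 1])
                         + (1 - p[i]) * S[i + 1][l][s - 1])
                S[i][l][s] = max(offer, S[i + 1][l][s])
    return S[0][k][t]

def segval(segment):
    # Expected value of the highest-value active candidate on a segment,
    # where `segment` is a list of (p, v) pairs.
    total, none_active_so_far = 0.0, 1.0
    for pi, vi in sorted(segment, key=lambda c: c[1], reverse=True):
        total += none_active_so_far * pi * vi
        none_active_so_far *= 1 - pi
    return total
\end{verbatim}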
\section{Knapsack Hiring Problem} \label{sec:knapsack10} We now consider the knapsack hiring problem that directly generalizes the vanilla hiring problem studied in Section \ref{sec:star}. In this case, in addition to a value $v_i$ and probability $p_i$, each candidate $i$ also has a size $s_i$. Instead of a number of slots $k$, we have a budget $B$ on the total size of the hired candidates. As earlier, we have a deadline of $t$ time steps and can make only one offer per time step. The knapsack hiring problem is closely related to the well-studied stochastic knapsack problem~\cite{dean2004approximating,dean2005adaptivity}, which is as follows: we are given $n$ items, each with a known value and ``size distribution''. When an item is added to the knapsack, its size is drawn from this distribution. Once an item exceeds the capacity, this item must be discarded and no further items can be added to the knapsack. In the multidimensional version, both the capacity of the knapsack and the size of an item are vectors, and the process ends once any component of the vector capacity is exceeded. We observe that the two models differ slightly since in the knapsack hiring problem, both the value and size of an item (candidate) is stochastic. We first give a reduction from the knapsack hiring problem to the multidimensional stochastic knapsack problem. For simplicity, we assume that the budget $B = 1$ without loss of generality. We construct an instance of 2-dimensional stochastic knapsack as follows - the knapsack capacity is $[1 ~ t]^\top$ where the first dimension represents the budget constraint and the second dimension represents the number of allowed probes. The size of item $i$ is represented by the vector $[s_i ~ 1]^\top$ if the item exists when it is probed (happens with probability $p_i$) and $[0 ~ 1]^\top$ otherwise. The value $v_i'$ of item $i$ is set to the expected value obtained from candidate $i$, i.e., $v_i' = p_i v_i$. With this reduction, the optimal solution to the knapsack hiring problem remains unchanged if items deterministically contribute value $v_i' = p_i v_i$, as we show in Appendix~\ref{app:equivalence}. \citet{dean2005adaptivity} give a general $1+6d$-approximation to multidimensional stochastic knapsack, where $d$ is the number of knapsack constraints. Directly applying this, we would get a 13-approximation in our 2-dimensional case. However, by leveraging the structure of the finite-probe problem, we can tighten this to a 10-approximation. Without loss of generality, we assume that $s_i \le 1$ for all $i$ (otherwise the item would never fit in the knapsack). We also normalize the number of probes to 1, so each item uses $1/t$ probes. Let $\mu(i)$ denote the vector of expected size of item $i$, meaning $\mu(i) = [p_i s_i ~ ~ 1/t]^{\top}$. Let $\mu(S) = \sum_{i \in S} \mu(i)$. We use the notation $\mu_1(S)$ and $\mu_2(S)$ to denote the first and second components of $\mu(S)$ respectively. Further, let $\mathsf{size}(i)$ denote the vector of the realized size of item $i$ and let $\mathsf{size}(S) = \sum_{i \in S} \mathsf{size}(i)$. 
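
To make the reduction concrete, the short sketch below (an illustration added here, not taken from the original) assembles the deterministic values $v_i'$ and the mean-size vectors $\mu(i)$ for an instance with budget $B = 1$ and the probe budget normalized to $1$; these are the only quantities that Algorithm~\ref{alg:greedy_knapsack} works with.
\begin{verbatim}
def reduce_to_2d_knapsack(p, v, s, t):
    # Knapsack hiring instance (p_i, v_i, s_i) with budget B = 1 and t probes,
    # mapped to the 2-dimensional stochastic knapsack of the reduction:
    # item i has deterministic value v'_i = p_i * v_i, realized size
    # (s_i, 1/t) with probability p_i and (0, 1/t) otherwise, and hence
    # mean size mu(i) = (p_i * s_i, 1/t); the knapsack capacity is (1, 1).
    vprime = [pi * vi for pi, vi in zip(p, v)]
    mu = [(pi * si, 1.0 / t) for pi, si in zip(p, s)]
    capacity = (1.0, 1.0)
    return vprime, mu, capacity
\end{verbatim}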
\begin{algorithm} \begin{algorithmic}[1] \caption{KnapsackFiniteProbes$(p, v, s)$} \label{alg:greedy_knapsack} \STATE $m_1 \gets \max_i v_i' = \max_i p_i v_i$ \STATE $\mathcal{L} \gets$ the sequence of all items with $\|\mu(i)\|_1 \le 1/3$, sorted in non-increasing order of $\dfrac{v_i'}{\|\mu(i)\|_1}$ \STATE $m_{\mathcal{L}} \gets \sum_{i=1}^\ell v_i' (1 - \sum_{j \le i} \mu_1(j))$, where $\ell$ is the largest integer such that $\sum_{i=1}^\ell \|\mu(i)\|_1 < 1$ \IF {$m_1 \ge m_{\mathcal{L}}$} \STATE Probe the item with highest expected value $v_i'$ \ELSE \STATE Probe the items in $\mathcal{L}$ until the knapsack is full \ENDIF \end{algorithmic} \end{algorithm}
Our algorithm (Algorithm~\ref{alg:greedy_knapsack}) takes the better of two strategies: probing the item with highest expected value and probing a sequence of ``small'' items. Exactly evaluating $m_{\mathcal{L}}^*$, the expected value of the second of these strategies, may be difficult; however, we can show that $m_{\mathcal{L}} = \sum_{i=1}^\ell v_i' (1 - \sum_{j \le i} \mu_1(j)) \le m_{\mathcal{L}}^*$. We obtain $v_i'$ for item $i$ if and only if the first $i$ items in $\mathcal{L}$ all fit inside the knapsack. Thus, $m_\mathcal{L}^* = \sum_{i=1}^\ell v_i' \Pr[\|\mathsf{size}(\mathcal{L}_i)\|_\infty \leq 1]$, where $\mathcal{L}_i$ denotes the set of first $i$ items in $\mathcal{L}$. By Claim \ref{clm:size}, $m_\mathcal{L}^* \geq m_{\mathcal{L}}$. Note that Claim \ref{clm:size} applies since the constraint $\sum_{i=1}^\ell \|\mu(i)\|_1 < 1$ implies $\ell < t$.
\begin{claim} \label{clm:size} For any set $A$ of at most $t$ items, $\Pr[\|\mathsf{size}(A)\|_\infty \leq 1] \ge 1 - \mu_1(A)$. \end{claim}
See Appendix~\ref{app:proofs} for a proof. Let $\mathsf{greedy} = \max\{m_1, m_\mathcal{L}\}$ be a lower bound on the expected value of Algorithm~\ref{alg:greedy_knapsack}. Let $A$ be the random set of items that are probed by the optimal adaptive algorithm, and let $\mathsf{opt}$ be the expected value of the optimal algorithm.
\begin{lemma}[\cite{dean2005adaptivity}, Lemma 4.2 and Lemma 4.3] \label{lem:deanknapsack} $\mathsf{opt} \le (1 + 3\E{\|\mu(A)\|_1}) \mathsf{greedy}$. \end{lemma}
For any adaptive algorithm, we can bound the expected size of the set of items probed using Lemma 2 from \citet{dean2004approximating}. In particular, we can bound the expected first component as $\E{\mu_1(A)} \leq 2$. On the other hand, since the optimal adaptive algorithm can never probe more than $t$ items, $\E{\mu_2(A)} \leq 1$. Substituting these bounds into Lemma \ref{lem:deanknapsack} gives us the desired $10$-approximation: \begin{align*} \mathsf{opt} &\le (1 + 3\E{\|\mu(A)\|_1}) \mathsf{greedy} \\ &\le (1 + 3(\E{\mu_1(A) + \mu_2(A)})) \mathsf{greedy}\\ &\le (1 + 3 \cdot 3) \mathsf{greedy} = 10 \cdot \mathsf{greedy}. \end{align*}
\section{A Lower Bound for Stochastic Matching} \label{sec:matching} The hiring with uncertainty problem with $k=1$ can be viewed as a special case of the stochastic matching problem, which is as follows: given a graph $G = (V, E)$, probabilities $p_e$ and values $v_e$ for all $e \in E$, and patience parameters $t_v$ for all $v \in V$, the goal is to obtain a matching with maximum expected weight. As in our hiring problem, edges can be probed sequentially. If an edge $e$ is found to exist, it must be added to the matching, contributing value $v_e$. Each probe decreases the patience parameters of the incident vertices by 1, and when a vertex runs out of patience, it cannot be matched.
The state-of-the-art approach for this problem is to form a probing strategy by solving the linear program relaxation: \begin{align*} \max_{x \in [0,1]^{|E|}} \sum_{e \in E} p_e v_e x_e && \text{s.t.} ~~ & \forall v ~ \sum_{e \in \delta(v)} x_e \le t_v \numberthis \label{eq:matching_lp} \\ &&& \forall v ~ \sum_{e \in \delta(v)} p_e x_e \le 1 \end{align*} This LP relaxation has been the primary approach for stochastic matching since~\citet{bansal2010lp}, yielding a 2.845-approximation for bipartite graphs~\cite{adamczyk2015improved} and a 3.224-approximation for general graphs~\cite{baveja2018improved}. However, little is known about the tightness of the upper bound produced by the LP. Not only is there an integrality gap, but there is also a \textit{probing} gap -- the LP does not fully account for the random realizations of probes. With $k=1$, the hiring problem is a special case of stochastic matching, since it can be expressed as matching on a star-shaped graph. Thus, we can use it to provide a lower bound on the worst-case slack created by the LP. In Appendix~\ref{app:lower_bound}, we provide an example showing that the expected value of the optimal probing strategy can be as small as a $1 - 1/e$ fraction of the LP value, meaning no probing strategy can approximate the optimal LP value to a factor better than $\frac{e}{e-1} \approx 1.581$.
\begin{figure*} \caption{Comparison of different algorithms on three simulated data sets.} \label{fig:neg} \label{fig:pos} \label{fig:no} \end{figure*}
\section{Experiments} \label{sec:experiments} We test the performance of our algorithms for the {\textsc{Hiring with Uncertainty}}\ problem in both the sequential and parallel offer settings via simulations. We generate simulated data sets as follows. The values for $n = 100$ candidates are chosen uniformly at random from $[0, 1]$. We consider three models to generate the probabilities: \begin{itemize}[topsep=0pt, partopsep=0pt, itemsep=-.5ex] \item \textbf{Negative correlation:} Higher-value candidates are less likely to accept offers. We sample $p_i$'s according to a Beta distribution, with $p_i \sim \text{Beta}(10(1-v_i), 10 v_i)$. \item \textbf{Positive correlation:} Higher-value candidates are more likely to accept: $p_i \sim \text{Beta}(10v_i, 10(1-v_i))$. \item \textbf{No correlation:} $p_i \sim \text{Uniform}[0,1]$. \end{itemize} On each of these data sets, we consider the performance of our three algorithms, each with $k = 20$, namely \begin{itemize}[topsep=0pt, partopsep=0pt, itemsep=-.5ex] \item {$\boldsymbol{\mathsf{seq}alg}$:} The dynamic programming algorithm from Section \ref{sec:star}, which makes $t$ sequential offers. \item {$\boldsymbol{\mathsf{paralg}}$:} The parallel approximation algorithm from Section \ref{sec:parallel}. We take the best solution over 100 random samples (of paths). \item {$\boldsymbol{\mathsf{parheur}}$: } We consider the following heuristic strategy to make offers in the parallel model. Note that the parallel approximation algorithm effectively partitions the set of candidates into up to $2k$ sets, selects the best $k$ of them, and makes offers to candidates in those sets in decreasing order of value. Our heuristic, then, is to randomly partition the set of candidates into $k$ disjoint sets and use the optimal single-slot solution from Section~\ref{sec:singlecandidate} on each set independently to decide which of them to make offers to. These offers can be made in parallel since the sets are disjoint.
\end{itemize} For comparison, we include two natural greedy baselines. We probe candidates in decreasing order of expected value $p_i \cdot v_i$ ($\boldsymbol{\mathsf{GE}}$) and value $v_i$ ($\boldsymbol{\mathsf{GV}}$). We also plot two upper bounds on the value obtained by an optimal algorithm: $\boldsymbol{\mathsf{LP}}$, the value obtained by a natural LP relaxation (similar to \eqref{eq:matching_lp}), and $\boldsymbol{\mathsf{inf}}$, the optimal algorithm with $t = \infty$ (sort the candidates by decreasing value and make offers until $k$ candidates accept). Figures~\ref{fig:neg},~\ref{fig:pos}, and~\ref{fig:no} demonstrate the performance of our algorithms on the three data sets with negative correlation, positive correlation, and no correlation respectively. Beyond the theoretical guarantees, $\mathsf{seq}alg$ performs well empirically and dominates the greedy baselines, especially in the more natural setting where values and probabilities are negatively correlated. $\mathsf{seq}alg$ is in general quite close to the LP upper bound -- much closer than the theoretical guarantee of 2. Thus, the LP is a fairly tight upper bound on the maximum value achievable by probing. Even for moderately small values of $t$, $\mathsf{seq}alg$ outperforms $\mathsf{paralg}$, despite the fact that it makes 1 offer per time step when $\mathsf{paralg}$ makes multiple offers at a time. Moreover, $\mathsf{parheur}$ almost always outperforms $\mathsf{paralg}$. The relatively poor performance of $\mathsf{paralg}$ is to be expected. Recall that $\mathsf{paralg}$ takes the solution tree to $\mathsf{seq}alg$ with $kt$ offers and probes candidates on the segments of a random path down this tree. By construction, candidates on this path are sorted by value, so high-value candidates are concentrated in a small number of segments. $\mathsf{paralg}$ can only select at most one candidate per segment, so it must ignore some high-value candidates. In contrast, $\mathsf{parheur}$ partitions the candidates randomly, making it more likely that each set in the partition contains high-value candidates. \subsection{Knapsack setting} We also provide simulated results for a slightly modified version of the 10-approximation algorithm ($\boldsymbol{\mathsf{approx}}$) in the knapsack setting, where instead of calculating the lower bound $m_{\mathcal{L}}$ on the greedy strategy as in Algorithm~\ref{alg:greedy_knapsack}, we estimate the true expected value $m_{\mathcal{L}}^*$ by simulating runs of the greedy branch of the algorithm. We compare the LP relaxation ($\boldsymbol{\mathsf{LP}}$) to our algorithm. In addition, we compare to a natural greedy baseline ($\boldsymbol{\mathsf{greedy}}$), which probes items in decreasing order of $p_i v_i/s_i$. We sample $s_i$ from a truncated Pareto distribution on $[0, 1]$, and sample $v_i \in [0, 1]$ from a Beta distribution positively correlated with $\sqrt{s_i}$. We use $\sqrt{s_i}$ so that expected $v_i$'s exhibit diminishing returns in $s_i$. We choose $p_i \sim \text{Uniform}[0, 1]$ and set a budget of $B = 1$. \begin{figure} \caption{Experimental results for knapsack setting} \label{fig:knapsack_results} \end{figure} As the results in Figure~\ref{fig:knapsack_results} show, our algorithm performs roughly as well as $\boldsymbol{\mathsf{greedy}}$, but it does better for very small values of $t$. \section{Conclusions} \label{sec:conclusion} With the increased use of data-driven techniques in hiring, predictions for employment outcomes are becoming increasingly accurate. 
Leveraging these predictions can be non-trivial, leading to the family of stochastic optimization problems we have considered here. As we have shown, imposing a finite number of offers can lead to highly complex solutions; however, by restricting to an intuitively structured solution space, we are able to derive approximation algorithms that perform well, both theoretically and in practice.
\appendix \section{Appendix} \label{app:gap} \subsection{Gap between optimal adaptive and value-ordered strategies} The following example shows a gap between the optimal adaptive strategy and any value-ordered strategy in the sequential setting. \begin{align*} (p_1, v_1) &= (1, 1) & (p_2, v_2) &= (q, 1) \\ (p_3, v_3) &= (q, 1) & (p_4, v_4) &= (q(1-q)/(v-q),v) \end{align*} Here, we set $q = 0.63667$, and take the limit as $v$ goes to $\infty$. The optimal value-ordered strategy is to make offers to $1$, $2$, and possibly $3$ if $2$ rejects. This yields value $1+2q-q^2$. The optimal strategy is shown in Figure~\ref{fig:ex_sol} and yields value $1+2q-q^2 + (1-q)q^2(v-1)/(v-q)$. As $v \to \infty$, the approximation ratio approaches \[ \frac{1+2q-q^3}{1+2q-q^2} \approx 1.0788. \] Moreover, this example demonstrates that simple greedy algorithms are suboptimal -- in particular, making offers greedily in decreasing order of $p_i$, of $v_i$, or of $p_i v_i$ all yield suboptimal value.
\subsection{Lower bound for LP-based stochastic matching} \label{app:lower_bound} \begin{figure} \caption{Lower bound for integrality gap} \label{fig:int_gap} \end{figure} Consider the star graph as shown in Figure~\ref{fig:int_gap} with $n+1$ vertices, with $n$ leaves and one vertex in the middle. Each edge has $p_e = \frac{1}{n}$ and value 1. Let the number of probes be $t = n$. The value of the LP~\eqref{eq:matching_lp} is 1, assigning $x_e = 1$ to all edges. Since all edges are identical, any strategy is an optimal probing strategy, yielding expected value \begin{align*} \sum_{i=1}^n \p{\frac{1}{n}} \p{1 - \frac{1}{n}}^{i-1} &= \frac{1}{n} \sum_{i=1}^n \p{1 - \frac{1}{n}}^{i-1} \\ &= \frac{1}{n} \cdot \frac{1 - \p{1-\frac{1}{n}}^n}{\frac{1}{n}} \\ &= 1 - \p{1-\frac{1}{n}}^n. \end{align*} In the limit, this is $1 - 1/e$, so no probing strategy can be better than an $\frac{e}{e-1} \approx 1.581$-approximation.
\subsection{Deferred Proofs} \label{app:proofs} \begin{proof}[Proof of Claim~\ref{clm:size}] For any set $A$ of at most $t$ items, $\mathsf{size}_2(A) = \sum_{i \in A} \mathsf{size}_2(i) = |A|/t \leq 1$. Further, by Markov's inequality, we have $\Pr[\mathsf{size}_1(A) \ge 1] \leq \E{\min\{\mathsf{size}_1(A), 1\}} \leq \E{\sum_{i \in A} \min\{\mathsf{size}_1(i), 1\}} = \mu_1(A)$, using $s_i \leq 1$ for all $i$. Consequently, we have $\Pr[\|\mathsf{size}(A)\|_\infty \leq 1] \geq 1 - \mu_1(A)$. \end{proof}
\begin{claim} $\E{v_J} \geq \dfrac{1}{2} \E{v_I}$ \end{claim} \begin{proof} Let $W$ be the random set of elements on this segment, up to and including $I$, and let $W_x \subseteq W$ be the subset of those elements with value at least $x$. For ease of notation, we define $q_i = 1 - p_i$. Then, we can write \begin{equation} \label{eq:evI} \E{v_I} = \int_0^\infty \Pr[v_I \ge x] \; dx = \int_0^\infty \sum_{i \in W_x} p_i \prod_{j < i} q_j \; dx. \end{equation} Let $A$ be the random set of ``active'' candidates who will accept an offer if they receive one.
Then, we have \begin{align*} \E{v_J} &= \int_0^\infty \Pr[v_J \ge x] \; dx \\ &= \int_0^\infty \Pr[A \cap W_x \ne \emptyset] \; dx \\ &= \int_0^\infty \sum_{i \in W_x} p_i \cdot \Pr[I \ge i] \cdot \Pr\b{\bigcap_{j < i, j \in W_x} j \notin A} \; dx \\ &= \int_0^\infty \sum_{i \in W_x} p_i \p{\prod_{j < i} q_j} \p{\prod_{j < i, j \in W_x} q_j} \; dx \\ &= \int_0^\infty \sum_{i \in W_x} p_i \p{\prod_{j < i, j \in W_x} q_j^2} \p{\prod_{j < i, j \notin W_x} q_j} \; dx \numberthis \label{eq:evJ} \end{align*} We can write~\eqref{eq:evI} as \begin{equation} \E{\sum_{i \in W_x} p_i \p{\prod_{j<i, j \in W_x} q_j} \p{\prod_{j < i, j \notin W_x} \mathbf{1}_{q_j}}}, \end{equation} where the expectation is taken over the indicators $\mathbf{1}_{q_j}$ for $j < i, j \notin W_x$. Similarly, we write~\eqref{eq:evJ} as \begin{equation} \E{\sum_{i \in W_x} p_i \p{\prod_{j<i, j \in W_x} q_j^2} \p{\prod_{j < i, j \notin W_x} \mathbf{1}_{q_j}}}. \end{equation} Conditioning on the realizations of these indicators, it is sufficient to show that \begin{equation} \sum_{i \in W_x} p_i \p{\prod_{j<i, j \in W_x} q_j^2} \ge \frac{1}{2} \sum_{i \in W_x} p_i \p{\prod_{j<i, j \in W_x} q_j}, \end{equation} which is true by Claim~\ref{claim:gupta}. \end{proof}
\begin{claim}[\cite{gupta2017adaptivity}, Claim 3.4] \label{claim:gupta} For any ordered set $A$ of probabilities $\{a_1, a_2, \dots, a_{|A|}\}$, let $b_j$ denote $1-a_j$ for $j \in [1, |A|]$. Then, \[ \sum_i a_i \p{\prod_{j < i} b_j}^2 \ge \frac{1}{2} \sum_i a_i \prod_{j < i} b_j. \] \end{claim}
\subsection{Equivalence to Stochastic Knapsack} \label{app:equivalence} We must show that the optimal solution remains unchanged whether values are received stochastically or deterministically. It is easy to verify that the vector item sizes and knapsack capacities capture the budget and deadline requirements of the knapsack hiring problem. However, in the reduction, item $i$ deterministically yields a value of $p_i v_i$ instead of value $v_i$ when $i$ is active (which happens with probability $p_i$) and value 0 otherwise. To account for this, observe that the optimal item to probe next depends only on the subset of remaining items, the number of probes left, and the capacity of the knapsack -- the value accumulated thus far has no bearing on the next action. Let $\mathsf{opt}(S, t, b)$ be the optimal value achievable with items (candidates) $S$, number of probes $t$, and budget $b$ remaining. The optimal strategy is then given by an exponential-sized dynamic program, with the following recurrence: \begin{align*} \mathsf{opt}(S, t, b) = \max_{i \in S} \Big\{\ &p_i(v_i + \mathsf{opt}(S \backslash \{i\}, t - 1, b - s_i)) \\ +& (1-p_i) \mathsf{opt}(S \backslash \{i\}, t - 1, b) \Big\}. \numberthis \label{eq:knap_opt} \end{align*} Assuming inductively that, for all smaller sets $S'$, $\mathsf{opt}(S', t, b)$ is unchanged whether each item contributes value $v_i$ with probability $p_i$ or deterministic value $p_i v_i$, we see that~\eqref{eq:knap_opt} is optimized by the same $i$ in both cases. Thus, the optimal strategy is unchanged in the deterministic and random cases and our reduction is complete. \end{document}
\begin{document} \title{Vertex cut of a graph and connectivity of its neighbourhood complex} \author{ Rekha Santhanam\footnote{Department of Mathematics, Indian Institute of Technology Bombay, India. [email protected]}, Samir Shukla\footnote{School of Mathematical and Statistical Sciences, Indian Institute of Technology Mandi, India. [email protected]}} \maketitle
\begin{abstract} We show that if a graph $G$ satisfies certain conditions, then the connectivity of the neighbourhood complex $\mathcal{N}(G)$ is strictly less than the vertex connectivity of $G$. As an application, we give a relation between the connectivity of the neighbourhood complex and the vertex connectivity for stiff chordal graphs, and for weakly triangulated graphs satisfying certain properties. Next, we consider graphs with a vertex $v$ such that for any $k$-subset $S$ of neighbours of $v$, there exists a vertex $v_S \neq v$ such that $S$ is a subset of the neighbours of $v_S$. We prove that for any graph $G$ with a vertex $v$ as above, if $\mathcal{N}(G-\{v\})$ is $(k-1)$-connected, then $\mathcal{N}(G)$ is $(k-1)$-connected. We use this to show that: (i) the neighbourhood complexes of queen and king graphs are simply connected, and (ii) if $G$ is a non-complete stiff chordal graph, then the vertex connectivity of $G$ is $n+1$ if and only if $\mathrm{Conn}(\mathcal{N}(G)) = n$. \end{abstract}
\noindent {\bf Keywords} : Neighbourhood complex, vertex connectivity, chordal graphs.
\noindent 2020 {\it Mathematics Subject Classification:} 05C15, 57M15, 05E45
\section{Introduction} In 1978, L. Lov{\'a}sz (\cite{l}) introduced the notion of a simplicial complex called the neighbourhood complex $\mathcal{N}(G)$ for a graph $G$. The {\it neighbourhood complex} $\mathcal{N}(G)$ of a graph $G$ is the simplicial complex whose simplices are those subsets of vertices of $G$ which have a common neighbour. A topological space $X$ is said to be $k$-{\it connected} if every map $\text{S}^m \to X$ from the $m$-dimensional sphere can be extended to a map $\text{D}^{m+1} \to X$ from the $(m+1)$-dimensional disk, for $m = 0, 1, \ldots , k$. The connectivity of $X$, denoted $\mathrm{Conn}(X)$, is the largest integer $k$ such that $X$ is $k$-connected. In \cite{l}, Lov{\'a}sz relates the chromatic number of a graph $G$ to the connectivity of the neighbourhood complex $\mathcal{N}(G)$ (see \Cref{lovasz}), and as an application of this he proved the Kneser conjecture, which gives the chromatic number of a class of graphs called the Kneser graphs. In this article, we derive graph-theoretic conditions on a graph which are sufficient to imply $n$-connectedness of its neighbourhood complex for an appropriate $n \in \mathbb{N}$. In this vein, we first show that if $\mathcal{N}(H)$ is simply connected and the vertex $v$ of $G = H \cup \{v\}$ satisfies an appropriate condition (cf. \Cref{thm:simplyconnected}), then $\mathcal{N}(G)$ is simply connected. As an application, we show that the neighbourhood complexes of queen and king graphs are simply connected (cf. \Cref{kingqueen}). This then implies that the connectivity of the neighbourhood complex can be determined by computing the homology of the complex (see \Cref{rmk:homologyofqueen}). We further prove that for a graph $G$, if there exists a vertex $v$ satisfying the property that for any $k$-subset $S$ of neighbours of $v$, there exists a vertex $v_S \neq v$ such that $S$ is a subset of the neighbours of $v_S$, then $\mathcal{N}(G-\{v\})$ being $(k-1)$-connected implies that $\mathcal{N}(G)$ is $(k-1)$-connected (cf.
\Cref{thm:general}). As a consequence of this, we show that if $G$ is an $(n+1)$-connected chordal graph which cannot be folded (see \Cref{defn_fold}) onto a clique of size $n+2$, then $\mathcal{N}(G)$ is $n$-connected (cf. \Cref{thm:chordal}). We show that if a graph $G$ satisfies a certain property, then the connectivity of $\mathcal{N}(G)$ is strictly less than the vertex connectivity of $G$ (cf. Theorems \ref{thm:cutiscomplete} and \ref{thm:vertexconnectivity}). Finally, in the case of chordal graphs, we show that the vertex connectivity completely determines the connectivity of the neighbourhood complex. We prove that if $G$ is a non-complete stiff (see \Cref{defn_fold}) chordal graph, then the vertex connectivity of $G$ is $n+1$ if and only if $\mathrm{Conn}(\mathcal{N}(G)) = n$ (cf. \Cref{thm:chordalcomplete}). In the last section, we show that the direction of the inequality between the vertex connectivity of a graph and the connectivity of its neighbourhood complex depends on the class of graphs. Apart from the classes of graphs in Theorems \ref{thm:cutiscomplete} and \ref{thm:vertexconnectivity}, we give examples of a class of graphs for which the vertex connectivity is higher than the connectivity of the neighbourhood complex. On the other hand, there are classes of graphs for which the inequality is reversed. There are easy examples in the non-stiff case which we describe in \Cref{sec:conclusion}. More interestingly, for any $r \geq 1$ and $p \geq 5$, we construct a stiff graph $G_{r,p}$ (see \Cref{sec:conclusion}) such that the connectivity of its neighbourhood complex is $\min\{2r,p-3\}$ but the vertex connectivity is $1$.
\section{Preliminaries} We begin by defining our objects of interest. These definitions are standard and are available in \cite{bondy} and \cite{dk}. For completeness, we include them here. A graph $G$ is a pair $(V(G), E(G))$, where $V(G)$ is called the set of vertices of $G$ and $E(G) \subseteq \binom{V(G)}{2}$ denotes the set of unordered edges. If $(x, y) \in E(G)$, it is also denoted by $x \sim y$. A {\it subgraph} $H$ of $G$ is a graph with $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. For a subset $S \subseteq V(G)$, the induced subgraph $G[S]$ is the subgraph with vertex set $V(G[S]) = S$ and edge set $E(G[S]) = \{(v, w) \in E(G) \ | \ v, w \in S\}$. We denote the graph $G[V(G) \setminus S]$ by $G - S$. The complement graph $\bar{G}$ of $G$ is the graph on $V(G)$ with edge set $E(\bar{G}) = \{(x, y) \ | \ (x, y) \notin E(G)\}$. A {\it graph homomorphism} from $G$ to $H$ is a function $\phi: V(G) \to V(H)$ such that $(v,w) \in E(G) \implies (\phi(v),\phi(w)) \in E(H).$ A graph homomorphism $f$ is called an {\it isomorphism} if $f$ is bijective and $f^{-1}$ is also a graph homomorphism. Two graphs are called {\it isomorphic} if there exists an isomorphism between them; if $G$ and $H$ are isomorphic, we write $G \cong H$. A {\it clique} of size $n$ or {\it complete graph} on $n$ vertices, denoted by $K_n$, is a graph on $n$ vertices where any two distinct vertices are adjacent by an edge. The {\it chromatic number} $\chi(G)$ of a graph $G$ is defined as $\chi(G) := \min\{n \ | \ \text{there exists a graph homomorphism from } G \text{ to } K_n\}$. Let $G$ be a graph and $v$ be a vertex of $G$. The {\it neighbourhood of $v$} is defined as $N_G(v)=\{ w \in V(G) \ | \ (v,w) \in E(G)\}$. The {\it degree} of a vertex $v$ is $|N_G(v)|$. For $A \subseteq V(G)$, the common neighbourhood of $A$ is $N_G(A) = \{v \in V(G) \ | \ v \sim x \ \text{for all} \ x \in A\}$.
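
As an aside, these neighbourhoods are all one needs to build $\mathcal{N}(G)$ in practice: every simplex of $\mathcal{N}(G)$ is contained in $N_G(v)$ for some vertex $v$, so its maximal simplices are exactly the inclusion-maximal sets among the $N_G(v)$. The following short Python sketch is added here only as an illustration (it is not part of the computations reported later); it lists these maximal simplices for a graph given by its adjacency sets.
\begin{verbatim}
def neighbourhood_complex_facets(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    # Returns the inclusion-maximal sets among the neighbourhoods N_G(v),
    # i.e. the maximal simplices of the neighbourhood complex N(G).
    nbhds = [frozenset(adj[v]) for v in adj if adj[v]]
    return {N for N in nbhds if not any(N < M for M in nbhds)}

# Example: the 4-cycle; its neighbourhood complex is two disjoint edges.
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(neighbourhood_complex_facets(C4))
# -> {frozenset({1, 3}), frozenset({2, 4})} (up to ordering)
\end{verbatim}
Facet lists of this kind can be handed to a computer algebra system to compute homology; the homology computations for queen graphs quoted later were carried out in SAGE.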
Let $x$ and $y$ be two distinct vertices of $G$. An {\it $xy$-path} $P$ is a sequence $x v_0 \ldots v_n y$ of vertices of $G$ such that $x \sim v_0, v_n \sim y$ and $v_i \sim v_{i+1}$ for all $0 \leq i\leq n-1$. For an $xy$-path $P = x v_0 \ldots v_n y $, the vertex set of $P$ is $V(P) = \{x, v_0, \ldots, v_n, y\}$. Given an $xy$-path $P = x v_0 \ldots v_n y$, the path $P^{-1} = y v_n \ldots v_0 x$ is a $yx$-path. Given two paths $P= x v_0 \ldots v_m y$ and $Q = y u_0 \ldots u_n z$, we define the $xz$-path $PQ = x v_0 \ldots v_m y u_0 \ldots u_n z$. Two $xy$-paths $P$ and $Q$ are called {\it internally disjoint} if $V(P) \cap V(Q) = \{x, y\}$. A graph $G$ is called {\it $k$-connected} if for any two distinct vertices $x$ and $y$, there exist at least $k$ internally disjoint $xy$-paths. A graph is called connected if it is $1$-connected. The {\it vertex connectivity $\kappa(G)$} is the maximum value of $k$ for which $G$ is $k$-connected. A {\it vertex cut} of $G$ is a subset $S \subseteq V(G)$ such that $G-S$ is a disconnected graph. If $X$ is the vertex set of a component of $G-S$, then the subgraph $G[S \cup X]$ is called an {\it $S$-component} of $G$. A vertex cut $S$ is called {\it minimal} if $G-S'$ is connected for any $S' \subseteq V(G)$ such that $|S'| < |S| $. It is well known that $\kappa(G) = |S|$, where $S$ is a minimal vertex cut of $G$. For $k \geq 3$, a {\it cycle graph} on $k$ vertices, denoted by $C_k$, is a graph with $V(C_k) = \{1, \ldots, k\}$ and $E(C_k) = \{(i, i+1) \ | \ 1 \leq i \leq k-1\} \cup \{(1, k)\}$. A {\it chordal graph} is a graph having no induced subgraph which is isomorphic to $C_k$ for $k \geq 4$. It is well known that a chordal graph is a perfect graph, {\it i.e.}, the chromatic number of every induced subgraph equals the size of a largest clique in that subgraph. A {\it finite abstract simplicial complex X} is a collection of finite sets such that if $\tau \in X$ and $\sigma \subseteq \tau$, then $\sigma \in X$. The elements of $X$ are called {\it simplices} of $X$. The dimension of a simplex $\sigma$ is equal to $|\sigma| - 1$, where $|\cdot |$ denotes the cardinality. The $0$-dimensional simplices are called vertices of $X$ and we denote the set of all vertices of $X$ by $V(X)$. For a subset $S \subseteq V(X)$, the induced subcomplex of $X$ on $S$, denoted $X[S]$, is the simplicial complex whose simplices are $\sigma \in X$ such that $\sigma \subseteq S$. The $k$-skeleton of $X$ consists of all simplices of $X$ of dimension at most $k$. The \emph{neighbourhood complex} $\mathcal{N}(G)$ of a graph $G$ is the simplicial complex whose simplices are $\sigma \subseteq V(G)$ such that $N_G(\sigma) \neq \emptyset$. Lov{\'a}sz proved the following statement.
\begin{thm} \cite[Lov{\'a}sz]{l} \label{lovasz} For a graph $G$, $\chi(G) \geq \mathrm{Conn}(\mathcal{N}(G)) + 3$. \end{thm}
\begin{defn} \label{defn_fold} Let $G$ be a graph and $N_G(u) \subseteq N_G(v)$ for $u,v \in V(G), u \neq v$. The graph $G- \{u\}$ is called a {\it fold} of $G$ and we denote it by $G \searrow G -\{u\}$. The graph $G$ is called {\it stiff} if no vertex of $G$ can be folded, {\it i.e.}, if there are no distinct vertices $u, v \in V(G)$ with $N_G(u) \subseteq N_G(v)$. \end{defn}
\begin{prop}\cite[Proposition $4.2$ and Proposition $5.1$]{BK} \label{fold} Let $G$ be a graph and $u \in V(G)$. If $G$ folds onto $G - \{u\}$, then $\mathcal{N}(G)$ has the same homotopy type as $\mathcal{N}(G -\{u\})$. \end{prop}
In this article, if no confusion arises, we simply write $n$-connected for both $n$-vertex connectivity and $n$-topological connectivity.
The vertex connectivity is always used in the context of a graph, and topological connectivity is used in the context of higher dimensional complexes. \section{Simply connectedness of neighbourhood complex}\label{sec:main} In this section, we explore a sufficient condition on a vertex $v$ of a graph $G$ under which simply connectedness of $\mathcal{N}(G-\{v\})$ implies the simply connectedness of $\mathcal{N}(G)$. We first recall the following results from \cite{bjorner}, which we use throughout this article. \begin{defn} The nerve of a family of sets $(A_i)_{i \in I}$ is the simplicial complex $\mathbf{N} = \mathbf{N}(\{A_i\})$ defined on the vertex set $I$ so that a finite subset $\sigma \subseteq I$ is in $\mathbf{N}$ precisely when $\bigcap\limits_{i \in \sigma} A_i \neq \emptyset$. \end{defn} \begin{thm}\cite[Theorem 10.6]{bjorner}\label{thm:nerve} Let $\text{D}elta$ be a simplicial complex and $(\text{D}elta_i)_{i \in I}$ be a family of subcomplexes such that $\text{D}elta = \bigcup\limits_{i \in I} \text{D}elta_i$. \begin{itemize} \item[(i)] \label{nerve1}Suppose every non-empty finite intersection $\text{D}elta_{i_1} \cap \ldots \cap \text{D}elta_{i_t}$ for $i_j \in I, t \in \mathbf{N}$ is contractible, then $\text{D}elta$ and $\mathbf{N}(\{\text{D}elta_i\})$ are homotopy equivalent. \item[(ii)] Suppose every non-empty finite intersection $\text{D}elta_{i_1} \cap \ldots \cap \text{D}elta_{i_t}$ is $(k-t+1)$-connected. Then $\text{D}elta$ is $k$-connected if and only if $\mathbf{N}(\{\text{D}elta_i\})$ is $k$-connected. \end{itemize} \end{thm} \begin{lem}\cite[Lemma 10.3(ii)]{bjorner}\label{thm:union} Let $\text{D}elta$ be a simplicial complex and $\text{D}elta_1$, $\text{D}elta_2$ be the sub-complexes of $\text{D}elta$ such that $\text{D}elta = \text{D}elta_1 \cup \text{D}elta_2$. If $\text{D}elta_1$ and $\text{D}elta_2$ are $k$-connected and $\text{D}elta_1 \cap \text{D}elta_2$ is $(k-1)$- connected, then $\text{D}elta$ is $k$-connected. \end{lem} Let $X$ be a simplicial complex and let $\eta \in X$. The {\it link} of $\eta$ is the simplicial complex defined as $$ lk_X(\eta) := \{\sigma \in X \ | \ \sigma \cup \eta \in X \ \text{and} \ \sigma \cap \eta = \emptyset\}. $$ The {\it star} of $\eta$ is defined as $$ st_X(\eta) := \{\sigma \in X \ | \ \sigma \cup \eta \in X \}. $$ Observe that {\it star} of a simplex is always contractible. \begin{thm} \label{thm:simplyconnected} Let $G$ be a connected graph and let there exist a vertex $v$ in G such that for $S=N_G(v)$, the complex $\mathcal{N}(G-\{v\})[S]$ is path connected. Then $\pi_1(\mathcal{N}(G-\{v\})) = 0$ implies that $\pi_1(\mathcal{N}(G)) = 0$. \end{thm} \begin{proof} Suppose $\pi_1(\mathcal{N}(G-\{v\})) = 0$. Let $S=N_G(v) = \{x_1, \ldots, x_r\}$. For each $1 \leq i \leq r$, let $\text{D}elta_i$ be a simplex on vertex set $N_G(x_i) \setminus \{v\}$. Observe that $lk_{\mathcal{N}(G)}(v) = \text{D}elta_1\cup \ldots \cup \text{D}elta_r$. Clearly, any non-empty finite intersection $\text{D}elta_{i_1} \cap \text{D}elta_{i_2} \cap \ldots \cap \text{D}elta_{i_t}$, $1 \leq t \leq r$ is a simplex and therefore contractible. Hence, by Theorem \ref{thm:nerve}$(i)$, $lk_{\mathcal{N}(G)}(v)$ and the nerve $\mathbf{N}(\{\text{D}elta_i\})$ are homotopy equivalent. Since $\mathcal{N}(G-\{v\})[S]$ is path connected, we observe that $\mathbf{N}(\{\text{D}elta_i\})$ is path connected and therefore $lk_{\mathcal{N}(G)}(v)$ is path connected. Let $\text{D}elta$ be a simplex on $S$. 
Let $X$ be the induced subcomplex of $\mathcal{N}(G)$ on $V(G) \setminus \{v\}$. Clearly, $X= \mathcal{N}(G-\{v\}) \cup \text{D}elta$ and $\mathcal{N}(G-\{v\}) \cap \text{D}elta = \mathcal{N}(G-\{v\})[S]$. Since $\mathcal{N}(G-\{v\})$ is simply connected and $\mathcal{N}(G-\{v\})[S]$ is path connected, using, \Cref{thm:union} we conclude that $X$ is simply connected. Observe that $\mathcal{N}(G) = X \cup st_{\mathcal{N}(G)}(\{v\})$ and $X \cap st_{\mathcal{N}(G)} (\{v\})= lk_{\mathcal{N}(G)}(\{v\})$. Since $st_{\mathcal{N}(G)} (\{v\})$ is contractible and $lk_{\mathcal{N}(G)}(\{v\})$ is path connected, the result follows from Lemma \ref{thm:union}. \end{proof} As an application of, \Cref{thm:simplyconnected} we show that the neighbourhood complexes of queen graphs and king graphs are simply connected. \begin{defn} The $m\times n$ queen graph $\mathcal{Q}_{m,n}$ is a graph with $mn$ vertices in which each vertex represents a square in an $m\times n$ chessboard, and each edge corresponds to a legal move by a queen (see \Cref{fig:queen and king} (a)). \end{defn} \begin{defn} The $m\times n$ king graph $\mathcal{K}_{m,n}$ is a graph with $mn$ vertices in which each vertex represents a square in an $m\times n$ chessboard, and each edge corresponds to a legal move by a king (see \Cref{fig:queen and king} (b)). \end{defn} \begin{figure} \caption{$\mathcal{Q} \label{queen} \caption{$\mathcal{K} \label{king} \caption{Examples of Queen and King graphs} \label{fig:queen and king} \end{figure} \begin{prop} \label{prop:queen}For any $p, q \geq 2, \mathcal{N}(\mathcal{Q}_{p, q})$ has full $1$-skeleton, {\it i.e.}, $\{(i, j), (k,l)\} \in \mathcal{N}(\mathcal{Q}_{p, q})$ for all $(i,j), (k,l) \in V(\mathcal{Q}_{p,q})$. \end{prop} \begin{proof} Let $(i, j), (k, l) \in V(\mathcal{Q}_{p, q})$. Without loss of generality we assume that $i \leq k$. If $i \neq k$, then $\{(i, j), (k,l)\} \subseteq N_{\mathcal{Q}_{p, q}}((i, l))$. So, assume that $i=k$. If $q> 2$, then there exists $t \in \{1, \ldots, q\} \setminus \{j, l\}$ and $\{(i, j), (k,l)\} \subseteq N_{\mathcal{Q}_{p, q}}((i, t))$. If $q= 2$, then without loss of generality we assume that $j=1$ and $l= 2$. In this case, if $(i-1, 1) \in V(\mathcal{Q}_{p, q})$, then $\{(i, j), (k,l)\} \subseteq N_{\mathcal{Q}_{p, q}}((i-1, 1))$ and if $(i+1, 1) \in V(\mathcal{Q}_{p, q})$, then $\{(i, j), (k,l)\} \subseteq N_{\mathcal{Q}_{p, q}}((i+1, 1))$. \end{proof} \begin{thm} \label{kingqueen} Let $p, q \geq 2$ be positive integers. Then $\mathcal{N}(\mathcal{Q}_{p, q})$ and $\mathcal{N}(\mathcal{K}_{p, q})$ are simply connected. \begin{proof} We will first induct on $p$ and then on $q$ to get the general result. Clearly, $\mathcal{Q}_{2, 2} \cong \mathcal{K}_{2,2} \cong K_4$ and therefore $\mathcal{N}(\mathcal{Q}_{2, 2}) \simeq \mathcal{N}(\mathcal{K}_{2,2,}) \simeq \mathcal{N}(K_4) \simeq S^2$, which is simply connected. Let $p, q $ be positive integers where $\min\{p, q\} \geq 2$ and assume that $\mathcal{N}(\mathcal{Q}_{p, q})$ and $\mathcal{N}(\mathcal{K}_{p, q})$ are simply connected. We first show that $\mathcal{Q}_{p+1, q}$ is simply connected. For $1 \leq i \leq q$, let $G_i$ be the induced subgraph of $\mathcal{Q}_{p+1, q}$ on vertex set $V(\mathcal{Q}_{p, q}) \cup \{( p+1, 1), \ldots, ( p+1, i)\}$. From \Cref{prop:queen}, $\mathcal{N}(\mathcal{Q}_{p, q})$ has full $1$-skeleton and therefore $\mathcal{N}(\mathcal{Q}_{p,q})[N_{G_1}((p+1, 1))]$ is path connected. 
Since $\mathcal{N}(\mathcal{Q}_{p, q})$ is simply connected, by using \Cref{thm:simplyconnected}, we conclude that $\mathcal{N}(G_1)$ is simply connected. Fix $ 2 \leq i \leq q$ and inductively assume that $\mathcal{N}(G_{i-1})$ is simply connected. Write $N_{G_i} ((p+1, i)) = A \sqcup B$, where, $A = \{(p+1, j) | 1 \leq j \leq i-1\}$ and $B = N_{G_i} ((p+1, i)) \cap V(\mathcal{Q}_{p,q})$. Since $\mathcal{N}(\mathcal{Q}_{p,q})[B]$ is path connected, $\mathcal{N}(G_{i-1})[B]$ is also path connected. Clearly, $\{(p+1, j), (p, i)\} \subseteq N_{G_{i-1}}((p, j))$ for each $1 \leq j \leq i-1$ and therefore using the fact that $(p, i) \in B$, we conclude that $\mathcal{N}(G_{i-1})[N_{G_i} ((p+1, i))]$ is path connected. Since $\mathcal{N}(G_{i-1})$ is simply connected, $\mathcal{N}(G_i)$ is simply connected from \Cref{thm:simplyconnected}. Hence, by induction $\mathcal{N}(\mathcal{Q}_{p+1, q})$ is simply connected. Similarly, if we are given that $\mathcal{N}(\mathcal{Q}_{p,q})$ is simply connected, then we can show that $\mathcal{N}(\mathcal{Q}_{p , q+1})$ is simply connected. We now show that $\mathcal{N}(\mathcal{K}_{p+1, q})$ is simply connected. For $1 \leq i \leq q$, let $\mathcal{K}_i$ be the induced subgraph of $\mathcal{K}_{p+1, q}$ on vertex set $V(\mathcal{K}_{p,q}) \cup \{( p+1, 1), \ldots, ( p+1, i)\}$. Since $N_{\mathcal{K}_{p+1, 1}} ((p+1, 1)) = \{(p, 1), (p, 2)\} \subseteq N_{\mathcal{K}_{p,q}}((p-1, 1))$, $\mathcal{K}_1$ is simply connected by \Cref{thm:simplyconnected}. Fix $ 2 \leq i \leq q$ and inductively assume that $\mathcal{N}(\mathcal{K}_{i-1})$ is simply connected. If $i < q$, then $N_{\mathcal{K}_i}((p+1, i)) = \{(p+1, i-1),(p, i-1), (p, i), (p, i+1)\}$ and if $i = q$, then $N_{\mathcal{K}_i}((p+1, i)) = \{(p+1, i-1),(p, i-1), (p, i)\}$. Here, since $\{(p+1, i-1), (p, i-1)\} \subseteq N_{\mathcal{K}_{i-1}}((p, i)), \{(p, i-1), (p, i)\} \subseteq N_{\mathcal{K}_{i-1}}((p-1, i)) $ and $\{(p, i), (p, i+1)\} \subseteq N_{\mathcal{K}_{i-1}}((p-1, i))$, we have that $\mathcal{N}(\mathcal{K}_{i-1})[N_{\mathcal{K}_i}((p+1, i))]$ is path connected. Since $\mathcal{K}_{i-1}$ is simply connected, $\mathcal{K}_i$ is simply connected by \Cref{thm:simplyconnected}. From induction, $\mathcal{K}_{p+1, q}$ is simply connected. By similar argument, we can show that $\mathcal{N}(K_{p,q})$ is simply connected implies $\mathcal{N}(\mathcal{K}_{p, q+1})$ is simply connected. Hence, the theorem is proved. \end{proof} \end{thm} \begin{rmk} \label{rmk:homologyofqueen} In \Cref{table:1}, we have computed the homology of neighbourhood complexes of queen graphs for some values of $m$ and $n$ by computer (using SAGE). Since $Q_{m, n} \cong Q_{n, m}$ and neighbourhood complexes of queen graphs are simply connected, we conclude that $\mathrm{Conn}(\mathcal{N}(Q_{m, n})) = 2$ for $2 \leq m \leq 4, 5\leq n \leq 6$. 
\end{rmk} \begin{table}[H] \centering \scalebox{0.58}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \backslashbox[20mm]{k}{(m,n)}&$(2,2)$& $(2,3)$& $(2,4)$& $(2,5)$& $(2,6)$& $(2,7)$& $(2,8)$&$(2,9)$& $(2,10)$&$(3,3)$ & $(3,4)$& $(3,5)$& $(3,6)$ & $(3,7)$&$(3,8)$& $(4,2)$& $(4,4)$& $(4,5)$& $(4,6)$\\ [10pt] \hline \hline $0$&$0$& $0$& $0$& $0$& $0$& $0$& $0$&$0$& $0$&$0$ & $0$& $0$& $0$ & $0$&$0$& $0$& $0$& $0$& $0$\\ [5pt] \hline $1$&$0$& $0$& $0$& $0$& $0$& $0$& $0$&$0$& $0$&$0$ & $0$& $0$& $0$ & $0$&$0$& $0$& $0$& $0$& $0$\\ [5pt] \hline $2$&$\mathbb{Z}$& $\mathbb{Z}$& $\mathbb{Z}$& $0$& $0$& $0$& $0$&$0$& $0$ &$0$ & $0$& $0$& $0$ & $0$&$0$& $\mathbb{Z}$& $0$& $0$& $0$\\ [5pt] \hline $3$&$0$& $0$& $0$& $\mathbb{Z}^3$& $\mathbb{Z}$& $\mathbb{Z}$& $\mathbb{Z}$&$\mathbb{Z}$& $\mathbb{Z}$ &$\mathbb{Z}^3$ & $\mathbb{Z}^5$& $\mathbb{Z}^{11}$& $\mathbb{Z}^{8}$ & $\mathbb{Z}^5$&$\mathbb{Z}^3$& $0$& $\mathbb{Z}^5$& $\mathbb{Z}^9$& $\mathbb{Z}^4$\\ [5pt] \hline \end{tabular}} \caption{\small{Reduced Homology $\tilde{H}_k(\mathcal{N}(Q_{m,n});\mathbb{Z})$}} \label{table:2} \end{table} \section{Vertex cut and connectivity of neighbourhood complex} In this section, we consider graphs $G$ which can be written as a union of two subgraphs $G_1$, $G_2$ such that the connectivity of their neighbourhood complexes are known. We then give conditions on the the subgraph induced by their intersection to give an estimate for the connectivity of the neighbourhood complex of $G$. We first recall the following results from \cite{bjorner}. \begin{lem}\cite[Lemma 10.3(iii)]{bjorner}\label{thm:intersection}Let $\text{D}elta_1$ and $\text{D}elta_2$ be two simplicial complexes. If $\text{D}elta_1 \cap \text{D}elta_2$ and $\text{D}elta_1 \cup \text{D}elta_2$ are $k$-connected, then so are $\text{D}elta_1$ and $\text{D}elta_2$. \end{lem} \begin{lem}\cite[Lemma $10.4(ii)$]{bjorner}\label{lem:suspension} Let $\text{D}elta_1$ and $\text{D}elta_2$ be two contractible subcomplexes of a simplicial complex $\text{D}elta$ such that $\text{D}elta=\text{D}elta_1 \cup \text{D}elta_2$. Then $\text{D}elta \simeq \ \text{S}igma(\text{D}elta_1 \cap \text{D}elta_2)$, where $\text{S}igma(X)$ denotes the suspension of space $X$. \end{lem} For a positive integer $n$, let $[n]$ denote the set $\{1, \ldots, n\}$. We first consider the case when a graph can be written as a union of two subgraphs whose intersection is a complete graph which is connected to a vertex in each subgraph. \begin{thm} \label{thm:cutiscomplete} Let $G = G_1 \cup G_2$ such that $G_1 \cap G_2 = H \cong K_{n}$. Let there exist $a \in V(G_1), b \in V(G_2)$ such that $a$ and $b$ are adjacent to each vertex of $H$. If $\mathcal{N}(G_1)$ and $\mathcal{N}(G_2)$ are $(n-1)$-connected, then $\mathrm{Conn}(\mathcal{N}(G)) = n-1$. \end{thm} \begin{proof} Let $V(H) = \{x_1, \ldots, x_n\}$. For each $1 \leq i \leq n$, let $\text{D}elta_i$ be a simplex on the vertex set $N_G(x_i)$. Let $\text{D}elta_{n+1} = \mathcal{N}(G_1)$ and $\text{D}elta_{n+2} = \mathcal{N}(G_2)$. Then $\mathcal{N}(G) = \text{D}elta_1 \cup \ldots \cup \text{D}elta_{n+2}$. We compute the nerve $\mathbf{N}(\{\text{D}elta_{i, 1 \leq i \leq n+2}\})$. First we show that, for any subset, $\{i_1, \ldots, i_{n+1}\} \subset [n+2]$ the intersection $\text{D}elta_{i_1} \cap \ldots \cap \text{D}elta_{i_{n+1}}$ is non-empty. Let $j = [n+2] \setminus \{i_1, \ldots, i_{n+1}\}$. 
If $j = n+1$, then $\{b\} \in \Delta_{i_1} \cap \ldots \cap \Delta_{i_{n+1}} $ and if $j= n+2$, then $\{a\} \in \Delta_{i_1} \cap \ldots \cap \Delta_{i_{n+1}} $. If $j \in [n]$, then $\{x_j\} \in \Delta_{i_1} \cap \ldots \cap \Delta_{i_{n+1}} $. Since $\bigcap\limits_{i=1}^{n+2} \Delta_i = \emptyset$, we conclude that $\mathbf{N}(\{\Delta_{i, 1\leq i \leq n+2}\})$ is homotopy equivalent to the simplicial boundary of an $(n+1)$-dimensional simplex, which is of the same homotopy type as $S^{n}$. Observe that $\Delta_{n+1} \cap \Delta_{n+2}$ is a simplex on vertex set $\{x_1, \ldots, x_n\}$. Further, $\Delta_{n+1} \cap \Delta_{j_1} \cap \ldots \cap \Delta_{j_k}$, $\Delta_{n+2} \cap \Delta_{j_1} \cap \ldots \cap \Delta_{j_k}$ and $ \Delta_{j_1} \cap \ldots \cap \Delta_{j_k}$ are all simplices of $\mathcal{N}(G)$ for all $\{j_1, \ldots, j_k\} \subset [n]$. Therefore, each non-empty intersection $\Delta_{i_1} \cap \ldots \cap \Delta_{i_t}$ is a simplex of $\mathcal{N}(G)$ for $2 \leq t \leq n+2$. Since $\mathcal{N}(G_1)$ and $\mathcal{N}(G_2)$ are $(n-1)$-connected, by taking $k = n-1$ in Theorem \ref{thm:nerve}$(ii)$ we conclude that $\mathrm{Conn}(\mathcal{N}(G)) \geq n-1$. Suppose $\mathcal{N}(G)$ is $n$-connected. Let $X = \mathcal{N}(G_2) \cup \bigcup\limits_{i=1}^{n} \Delta_i$. \begin{claim} \label{claim:X1X2} $\mathcal{N}(G_1) \cap X \simeq S^{n-1}$. \end{claim} \begin{proof}[Proof of Claim \ref{claim:X1X2}] For $1 \leq i \leq n$, let $\Gamma_i = \Delta_i \cap \mathcal{N}(G_1)$ and let $\Gamma_{n+1}$ be a simplex on vertex set $\{x_1, \ldots, x_n\}$. Then $\mathcal{N}(G_1) \cap X = \bigcup\limits_{i=1}^{n+1}\Gamma_i$. Since each $\Gamma_i$ is a simplex, we see that each non-empty intersection $\Gamma_{i_1} \cap \ldots \cap \Gamma_{i_t}$ is a simplex and therefore contractible. Thus, from Theorem \ref{thm:nerve}$(i)$, $\mathcal{N}(G_1) \cap X \simeq \mathbf{N}(\{\Gamma_{i, 1 \leq i \leq n+1}\})$. Observe that $\bigcap\limits_{i=1}^{n+1} \Gamma_i = \emptyset$. Further, for any $1 \leq t \leq n$, $\{a\} \in \Gamma_{i_1} \cap \ldots \cap \Gamma_{i_t}$ if $n+1 \notin \{i_1, \ldots, i_t\}$, and $\{x_j\} \in \Gamma_{i_1} \cap \ldots \cap \Gamma_{i_t}$ if $n+1 \in \{i_1, \ldots, i_t\}$, where $j \in [n] \setminus \{i_1, i_2, \ldots, i_t\}$. Therefore, $\mathbf{N}(\{\Gamma_{i, 1 \leq i \leq n+1}\}) \simeq S^{n-1}$. \end{proof} Clearly $\mathcal{N}(G) = \mathcal{N}(G_1) \cup X$. By the Mayer-Vietoris sequence for homology, \begin{center} $ \cdots \longrightarrow \tilde{H}_n(\mathcal{N}(G))\longrightarrow \tilde{H}_{n-1}(\mathcal{N}(G_1) \cap X) \longrightarrow \tilde{H}_{n-1}(\mathcal{N}(G_1)) \oplus \tilde{H}_{n-1}(X) \longrightarrow \tilde{H}_{n-1}(\mathcal{N}(G)) \longrightarrow \cdots$. \end{center} Since $\mathcal{N}(G)$ is $n$-connected and $\mathcal{N}(G_1)$ is $(n-1)$-connected, we have $\tilde{H}_{n}(\mathcal{N}(G)) = 0$, $\tilde{H}_{n-1}(\mathcal{N}(G)) = 0$ and $\tilde{H}_{n-1}(\mathcal{N}(G_1)) = 0$. So, $\tilde{H}_{n-1}(X) \cong \tilde{H}_{n-1} (\mathcal{N}(G_1) \cap X) $ and therefore Claim \ref{claim:X1X2} implies that $\tilde{H}_{n-1}(X) \cong \mathbb{Z}$. Let $\Delta = \bigcup\limits_{i=1}^n \Delta_i$. Then $X = \mathcal{N}(G_2) \cup \Delta$. Since $a, b \in \bigcap\limits_{i=1}^n \Delta_i$, by using Theorem \ref{thm:nerve}$(i)$, we conclude that $\Delta$ is contractible.
We now show that $\mathcal{N}(G_2)\cap \Delta$ is contractible. For each $1 \leq i \leq n$, let $T_i = \Delta_i \cap \mathcal{N}(G_2)$. Then $\mathcal{N}(G_2)\cap \Delta = T_1 \cup \ldots \cup T_n$. Since each $T_i$ is a simplex and $b \in \bigcap\limits_{i=1}^{n} T_i$, we see that $\mathbf{N}(\{T_{i, 1 \leq i \leq n}\}) \simeq \mathcal{N}(G_2)\cap \Delta$ and $\mathbf{N}(\{T_{i, 1 \leq i \leq n}\})$ is an $(n-1)$-dimensional simplex. Therefore, $\mathcal{N}(G_2)\cap \Delta$ is contractible. By the Mayer-Vietoris sequence for homology, we have \begin{center} $ \cdots \longrightarrow \tilde{H}_{n-1}(\mathcal{N}(G_2)) \oplus \tilde{H}_{n-1}(\Delta) \longrightarrow \tilde{H}_{n-1}(X) \longrightarrow \tilde{H}_{n-2}(\mathcal{N}(G_2) \cap \Delta) \longrightarrow \cdots$. \end{center} Since $\mathcal{N}(G_2)$ is $(n-1)$-connected, $\tilde{H}_{n-1}(\mathcal{N}(G_2)) = 0$. Further, since $\Delta$ and $\mathcal{N}(G_2) \cap \Delta$ are contractible, we conclude that $\tilde{H}_{n-1}(X) = 0$, which is a contradiction. \end{proof} We now prove that even if the intersection of the two subgraphs of $G$ is not a complete graph, we get a result about the cardinality of the intersection in terms of the connectivity of $\mathcal{N}(G)$. \begin{thm}\label{thm:vertexconnectivity} Let $G = G_1 \cup G_2$ and $V(G_1) \cap V(G_2) = S$. Let there exist $a \in V(G_1), b \in V(G_2)$ such that $a \sim x, b \sim x$ for all $x \in S$. Let $k = \min\{\mathrm{Conn}(\mathcal{N}(G_1)), \mathrm{Conn}( \mathcal{N}(G_2))\}$. \begin{itemize} \item[($i$)] If $k \geq |S|$, then $ |S| \geq \mathrm{Conn}(\mathcal{N}(G)) +1.$ \item[($ii$)] $\mathrm{Conn}(\mathcal{N}(G)) \leq k$. In particular, if $k \leq \mathrm{Conn}(\mathcal{N}(G[S]))$, then $|S| \geq \mathrm{Conn}(\mathcal{N}(G)) +3.$ \end{itemize} \end{thm} \begin{proof} If $S = \emptyset$, then $G$ is a disconnected graph and therefore $\mathcal{N}(G)$ is disconnected. Since the connectivity of a disconnected space is $-1$, the result is true in this case. So assume that $S \neq \emptyset$. Let $S = \{x_1, \ldots, x_n\}$. For each $1 \leq i \leq n$, let $\Delta_{x_i}$ be a simplex on vertex set $N_G(x_i)$, $\Delta_{n+1} = \mathcal{N}(G_1)$ and $\Delta_{n+2} = \mathcal{N}(G_2)$. Clearly, $\mathcal{N}(G) = \Delta_{n+1} \cup \Delta_{n+2} \cup \bigcup\limits_{i=1}^{n} \Delta_{x_i}$. Let $\mathbf{N} := \mathbf{N}(\{\Delta_{n+1}, \Delta_{n+2}, \Delta_{x_i, 1 \leq i \leq n} \} )$. For a simplicial complex $\mathcal{K}$, let $\mathcal{M}(\mathcal{K})$ denote the set of maximal simplices of $\mathcal{K}$. We first prove the following: \begin{claim} \label{claim:intersection} $\mathbf{N} \simeq \Sigma (\Sigma (\mathcal{N}(G[S])))$. \end{claim} \begin{proof}[Proof of Claim \ref{claim:intersection}] Observe that $\mathcal{M}(\mathbf{N} )= \{\sigma \cup \{n+1, n+2\} \ | \ \sigma \in \mathcal{M}(\mathcal{N}(G[S]))\} \cup \{S \cup \{n+1\} , S \cup \{n+2\} \}$. Write $\mathbf{N} = X_1 \cup X_2$, where $\mathcal{M}(X_1) = \{\sigma \cup \{n+1, n+2\} \ | \ \sigma \in \mathcal{M}(\mathcal{N}(G[S]))\} $ and $\mathcal{M}(X_2) = \{S \cup \{n+1\}, S \cup \{n+2\}\}$. Observe that $\mathcal{M}(X_1 \cap X_2) = \{\sigma \cup \{n+ 1\} \ | \ \sigma \in \mathcal{M}(\mathcal{N}(G[S])) \} \cup \{\sigma \cup \{n+2\} \ | \ \sigma \in \mathcal{M}(\mathcal{N}(G[S])) \}$ and thus we conclude that $ X_1 \cap X_2 \simeq \Sigma(\mathcal{N}(G[S]))$.
Since $X_1$ and $X_2$ are contractible, the result follows from Lemma \ref{lem:suspension}. This completes the proof of Claim \ref{claim:intersection}. \end{proof} \begin{itemize} \item[($i$)] Let $k \geq n$. Since $\chi(G[S]) \leq n$, by \Cref{lovasz} we see that $\mathrm{Conn}(\mathcal{N}(G[S])) \leq n-3$. Hence, $\mathrm{Conn}(\mathbf{N}) \leq n-1$ by Claim \ref{claim:intersection}. Observe that each non-empty intersection $\Delta_{i_1} \cap \ldots \cap \Delta_{i_t}$, $t \geq 2$, is a simplex for $\{i_1, \ldots, i_t\} \subseteq S \cup \{n+1,n+ 2\}$. Also, since $\Delta_{n+1}, \Delta_{n+2}$ are $n$-connected and $\Delta_{x_i}$ is a simplex for all $1 \leq i \leq n$, we see that each non-empty intersection $\Delta_{i_1} \cap \ldots \cap \Delta_{i_t}$ is $(n-t+1)$-connected for all $t \geq 1$ and $i_1, \ldots, i_t \in S \cup \{n+1,n+2\}$. Since $\mathrm{Conn}(\mathbf{N}) \leq n-1$, using Theorem \ref{thm:nerve}$(ii)$ we conclude that $\mathrm{Conn}(\mathcal{N}(G)) \leq n-1$. \item[($ii$)] Without loss of generality, we assume that $k = \mathrm{Conn}(\mathcal{N}(G_2))$. Let $Y_1 = \mathcal{N}(G_1) \cup \bigcup\limits_{i=1}^n \Delta_{x_i}$. Clearly, $\mathcal{N}(G) = Y_1 \cup \mathcal{N}(G_2)$. Observe that $\mathcal{M}(Y_1 \cap \mathcal{N}(G_2))= \{ N_{G_2}(x) \ | \ x \in S\} \cup \{S\}$. For each $1 \leq i \leq n$, let $Z_{x_i}$ be the simplex on vertex set $N_{G_2}(x_i)$ and $Z_{n+1}$ be the simplex on vertex set $S$. Clearly, $Y_1 \cap \mathcal{N}(G_2) = Z_{n+1} \cup \bigcup\limits_{i=1}^{n} Z_{x_i}$. Observe that $\mathcal{M}(\mathbf{N}(\{Z_{n+1}, Z_{x_i, 1 \leq i \leq n}\}) ) = \{S \} \cup \{\sigma \cup \{n+1\} \ | \ \sigma \in \mathcal{M}(\mathcal{N}(G[S]))\}$. Since $V(\mathcal{N}(G[S])) \subseteq S$, we see that $ \mathbf{N}(\{Z_{n+1}, Z_{x_i, 1 \leq i \leq n}\}) \simeq \Sigma (\mathcal{N}(G[S]))$. Since each $Z_{x_i}$ and $Z_{n+1}$ are simplices, any non-empty finite intersection of these simplices is also a simplex and hence contractible. Thus $Y_1 \cap \mathcal{N}(G_2) \simeq \mathbf{N}(\{Z_{n+1}, Z_{x_i, 1 \leq i \leq n}\}) \simeq \Sigma(\mathcal{N}(G[S]))$. Therefore, $Y_1\cap \mathcal{N}(G_2)$ is at least $(k+1)$-connected. If $\mathcal{N}(G)$ is $(k+1)$-connected, then from Lemma \ref{thm:intersection}, $\mathcal{N}(G_2)$ has to be $(k+1)$-connected, which is a contradiction. Hence, $\mathrm{Conn}(\mathcal{N}(G)) \leq k $. If $k \leq \mathrm{Conn}(\mathcal{N}(G[S])) $, then we have $\mathrm{Conn}(\mathcal{N}(G)) \leq k \leq \mathrm{Conn}(\mathcal{N}(G[S])) \leq n-3$. \end{itemize} \end{proof} The {\it complement graph} $\bar{G}$ of $G$ is the graph with the same vertex set as $G$ and $E(\bar{G}) = \{(x, y) \ | \ (x, y) \notin E(G)\} $. A graph $G$ is called \emph{weakly triangulated} if it has no induced subgraph isomorphic to a cycle with five or more vertices, or to the complement of such a cycle. \begin{thm}\cite[Theorem 1] {hayward} \label{thm:hayward} Let $S$ be a minimal vertex cut of a weakly triangulated graph $G$ and let $S$ induce a connected subgraph of $\bar{G}$. Then each component of $G - S$ includes at least one vertex adjacent to all the vertices of $S$. \end{thm} The above property of weakly triangulated graphs and Theorem \ref{thm:vertexconnectivity} imply the following statement. \begin{thm} \label{thm:weaklytriangulated} Let $G$ be a weakly triangulated graph and $S$ be a minimal vertex cut of $G$ which induces a connected subgraph of $\bar{G}$.
Let $G_1, G_2$ be subgraphs of $G$ such that $G = G_1 \cup G_2$ and $V(G_1 \cap G_2) = S$. If, for $k = \min\{\mathrm{Conn}(\mathcal{N}(G_1)), \mathrm{Conn}( \mathcal{N}(G_2))\}$, either $k \geq |S|$ or $k \leq \mathrm{Conn}(\mathcal{N}(G[S]))$, then $|S| \geq \mathrm{Conn}(\mathcal{N}(G)) +1$. \end{thm} \section{Vertex connectivity and neighbourhood complexes of chordal graphs} In this section, we show that for stiff chordal graphs, the vertex connectivity completely determines the connectivity of the neighbourhood complex. The following theorem is a generalization of \Cref{thm:simplyconnected}. \begin{thm}\label{thm:general} Let $G$ be a graph and $n \geq 0$ be an integer. Let $v\in V(G)$ be such that for each $S \subseteq N_G(v)$ with $|S| \leq n+1$ there exists a vertex $v_S \neq v$ satisfying $S \subseteq N_G(v_S)$. Then for all $k \leq n$, if $\mathcal{N}(G - \{v\})$ is $k$-connected, then $\mathcal{N}(G)$ is $k$-connected. \end{thm} \begin{proof} Let $N_G(v) = \{x_1, \ldots, x_r\}$. Assume that $\mathcal{N}(G-\{v\})$ is $k$-connected. \begin{claim} \label{claim:link} The simplicial complex $lk_{\mathcal{N}(G)}(v)$ is at least $(n-1)$-connected. \end{claim} \begin{proof}[Proof of Claim \ref{claim:link}] For each $1 \leq i \leq r$, let $\Delta_i$ be a simplex on vertex set $N_G(x_i) \setminus \{v\}$. Then observe that $lk_{\mathcal{N}(G)}(v) = \Delta_1\cup \ldots \cup \Delta_r$. Clearly, any non-empty finite intersection $\Delta_{i_1} \cap \Delta_{i_2} \cap \ldots \cap \Delta_{i_t}$, $1 \leq t \leq r$, is a simplex and therefore contractible. Hence, by Theorem \ref{thm:nerve}$(i)$, $lk_{\mathcal{N}(G)}(v)$ and the nerve $\mathbf{N}(\{\Delta_i\})$ are homotopy equivalent. Let $S= \{i_1, \ldots, i_t\} \subseteq \{1, \ldots, r\}$ and $t\leq n+1$. Then by assumption there exists a vertex $v_S \neq v$ such that $\{x_{i_1}, \ldots, x_{i_t}\} \subseteq N_G(v_S)$, thereby showing that $v_S \in \bigcap\limits_{i \in S}\Delta_{i}$. Hence $S \in \mathbf{N}(\{\Delta_i\})$. Thus $\mathbf{N}(\{\Delta_i\})$ has full $n$-skeleton and therefore the result follows. This completes the proof of Claim \ref{claim:link}. \end{proof} Let $X = \mathcal{N}(G - \{v\})$ and $Y = \mathcal{N}(G)[V(G) \setminus \{v\}]$. We now show that $Y$ is at least $k$-connected. Let $\Delta$ be a simplex on vertex set $N_G(v)$. Observe that $X \cup \Delta= Y$ and $V(X \cap \Delta) = N_G(v)$. Let $\sigma \subseteq N_G(v)$ such that $|\sigma| \leq n+1$. By assumption there exists a vertex $v_{\sigma}\neq v$ such that $\sigma \subseteq N_G(v_{\sigma})$, thereby showing that $\sigma \in X \cap \Delta$. Hence, $X \cap \Delta$ has a full $n$-skeleton, and therefore it is at least $(n-1)$-connected. Since $k \leq n$, $X \cap \Delta$ is at least $(k-1)$-connected. Further, since $X$ is $k$-connected and $\Delta$ is contractible, using Lemma \ref{thm:union} we conclude that $Y$ is $k$-connected. Observe that $\mathcal{N}(G) = Y \cup st_{\mathcal{N}(G)}(v)$ and $Y \cap st_{\mathcal{N}(G)}(v) = lk_{\mathcal{N}(G)}(v)$. Since $st_{\mathcal{N}(G)}(v)$ is a cone over $v$, it is contractible. Further, since $Y$ is $k$-connected, \Cref{thm:general} follows from Claim \ref{claim:link} and Lemma \ref{thm:union}. \end{proof} As an application of Theorem \ref{thm:general}, we prove that under mild assumptions the neighbourhood complex of an $(n+1)$-connected chordal graph is $n$-connected. To prove this, we first establish two lemmas for chordal graphs.
We recall the following result from \cite{bondy}. \begin{thm}\cite[Theorem 9.21]{bondy} \label{thm:simplicial} Every chordal graph which is not complete has two non-adjacent simplicial\footnote{ A vertex $v$ is called {\it simplicial} if $G[N_G(v)]$ is a clique.} vertices. \end{thm} \begin{lem} \label{lem:simplicial} Let $n \geq 1$ and let $G$ be an $n$-connected non-complete chordal graph. Then there exists a simplicial vertex $v$ such that $\chi(G- \{v\}) = \chi(G)$ and $G - \{v\}$ is $n$-connected. \end{lem} \begin{proof} Since chordal graphs are perfect graphs and $G$ is non-complete, Theorem \ref{thm:simplicial} implies that there exists a simplicial vertex $v$ such that $\chi(G - \{v\}) = \chi(G)$. We show that $G - \{v\}$ is $n$-connected. To prove this, it is enough to show that for any two vertices $x, y \in V(G- \{v\})$ there exist $n$ internally disjoint $xy$-paths in $G - \{v\}$. Let $x, y \in V(G - \{v\})$. Since $G$ is $n$-connected, we get $n$ internally disjoint $xy$-paths $P_1, \ldots, P_n$ in $G$. Without loss of generality, we assume that these paths are chosen as short as possible. If $v \notin V(P_i)$ for all $1 \leq i \leq n$, then all the $P_i$'s are $xy$-paths in $G-\{v\}$ and we are done. So, assume that there exists $1 \leq i \leq n$ such that $v \in V(P_i)$. Without loss of generality, we assume that $i=1$. We will replace the path $P_1$ by an $xy$-path $P_1'$ in $G-\{v\}$ such that $P_1'$ is internally disjoint from $P_i$ for all $2 \leq i \leq n$. If $x, y \notin N_G(v)$, then there exist $w, w' \in V(P_1)$ distinct from $x$ and $y$ such that $w \sim v \sim w'$. Since $v$ is a simplicial vertex, $w \sim w'$ and we get an $xy$-path $P_1' = x \ldots w w' \ldots y$ from $P_1 = x \ldots w vw' \ldots y$ by removing $v$ from $P_1$, which contradicts the fact that $P_1$ is a shortest path. Hence, $\{x, y\} \cap N_G(v) \neq \emptyset$. Now, suppose $|\{x, y\} \cap N_G(v)| = 1$ and say $x \sim v$. Let $P_1 = x \ldots v w_1 \ldots w_k y, k \geq 1$. In this case we can replace the path $P_1$ by $P_1' = x w_1 \ldots w_k y$, which is again a contradiction. Hence, $x, y \in N_G(v)$. Let $N_G(v) = \{x, y, z_1, \ldots, z_m\}$. If there exists a $z \in N_G(v)$ different from $x, y$ such that $z \notin V(P_i)$ for all $1 \leq i \leq n$, then we replace the path $P_1$ by the path $P_1'= x z y$. Since $G$ is $n$-connected, $m \geq n-2$. If $m \geq n-1$, then we can easily construct $m+1$ internally disjoint $xy$-paths, namely $P_1 = xy, P_2 = xz_1 y, \ldots, P_{m+1} = x z_m y$. So, assume that $m = n-2$, {\it i.e.}, $\deg(v) = n$ and for each $z \in N_G(v)$ there exists an $i$ such that $z \in V(P_i)$. Since $P_1, \ldots, P_n$ are chosen as short as possible, we can assume that $P_1 = x v y, P_2 = xy, P_3 = xz_1 y, \ldots, P_n = x z_{n-2} y$. Since $G$ is non-complete, there exists $w \in V(G)$ such that $w \notin \{v\} \cup N_G(v)$. Further, since $G$ is $n$-connected, we have $n$ internally disjoint $wx$-paths $L_1, \ldots, L_n$ and $n$ internally disjoint $wy$-paths $Q_1, \ldots, Q_n$ in $G$. We consider the following cases. {\bf Case 1.} $v$ does not belong to $V(L_i)$ or $V(Q_j)$ for any $1 \leq i, j \leq n$. If $x \sim w$ and $y \sim w$, then we replace $P_1$ by $xwy$. If $x \nsim w$ and $y \sim w$, then since $\deg(v) = n$, there exists $j_1$ such that $V(L_{j_1})$ is disjoint from $y, z_1, \ldots, z_{n-2}$. Then we replace $P_1$ by the path $L_{j_1}^{-1}y$.
If $x \nsim w$ and $y \nsim w$, then there exist $j_1$ and $j_2$ such that $\{y, z_1, \ldots, z_{n-2}\} \cap V(L_{j_1}) = \emptyset$ and $\{x, z_1, \ldots, z_{n-2}\} \cap V(Q_{j_2}) = \emptyset$. In this case we replace $P_1$ by $L_{j_1}^{-1}Q_{j_2}$. {\bf Case 2.} There exist $i_0$ and $j_0$ such that $v$ belongs to $V(L_{i_0})$ and $V(Q_{j_0})$. Since $v \in V(L_{i_0})$ and $w \nsim v$, there exists $t_1 \in \{y, z_1, \ldots, z_{n-2}\}$ such that $t_1 \in V(L_{i_0})$. Then $v$ and $t_1$ do not belong to $V(L_l)$ for any $1 \leq l \leq n, l \neq i_0$. There exists $i_1$ such that $V(L_{i_1}) \cap \{v, y, z_1, \ldots, z_{n-2}\} = \emptyset$. By a similar argument, there exists $j_1$ such that $V(Q_{j_1}) \cap \{v, x, z_1, \ldots, z_{n-2}\} = \emptyset$. Then, we replace the path $P_1$ by the path $L_{i_1}^{-1} Q_{j_1}$, which is internally disjoint from $P_1, \ldots, P_n$. {\bf Case 3.} There exists $i_0$ such that $v \in V(L_{i_0})$ and $v \notin V(Q_j)$ for all $j$. By a similar argument as in Case 2, there exists $i_1$ such that $V(L_{i_1}) \cap \{v, y, z_1, \ldots, z_{n-2}\} = \emptyset$. Since $v \notin V(Q_j)$ for all $j$ and $|\{x, z_1,\ldots, z_{n-2} \}| = n-1$, there exists $j_1$ such that $Q_{j_1}$ is disjoint from $v, x, z_1, \ldots, z_{n-2}$. We replace the path $P_1$ by $L_{i_1}^{-1} Q_{j_1}$ and get $n$ internally disjoint $xy$-paths. \end{proof} \begin{lem} \label{lem:neighbor} Let $n \geq 1$ and let $G$ be an $n$-connected non-complete chordal graph. Let $v$ be a simplicial vertex such that $G - \{v\}$ is $n$-connected. Then for any $m \leq n$ and $\{x_1, \ldots, x_m\} \subseteq N_G(v)$, there exists $v' \neq v$ such that $\{x_1, \ldots, x_m \}\subseteq N_G(v')$. \end{lem} \begin{proof} Let $\{x_1, \ldots, x_m\} \subseteq N_G(v), m \leq n$. Since $G$ is $n$-connected, $\deg(v) \geq n$. If $\deg(v) \geq n+1$, then clearly there exists a vertex $v' \in N_G(v) \setminus \{x_1, \ldots, x_m\}$. Since $v$ is simplicial, $v' \sim x_i$ for all $1 \leq i \leq m$. So assume $\deg(v) = n$. If $G - \{v\}$ is a complete graph, then since $G$ is non-complete, there exists $w \in V(G)$ such that $w \neq v$ and $w \notin N_G(v)$. In this case, we take $v' = w$. Assume that $G-\{v\}$ is non-complete. Let $T$ be a maximal clique of $G -\{v\}$ containing $\{x_1, \ldots, x_m\}$. Since $G -\{v\}$ is non-complete and $n$-connected, using Proposition \ref{prop:cliquesize}, we conclude that $T$ has size greater than $n$, and the result follows. \end{proof} \begin{thm} \label{thm:chordal} Let $n \geq 0$ and let $G$ be an $(n+1)$-connected chordal graph. If $G$ cannot be folded onto a clique of size $n+2$, then $\mathcal{N}(G)$ is $n$-connected. \end{thm} \begin{proof} Since $G$ is $(n+1)$-connected and chordal, by \Cref{prop:cliquesize} it has a clique of size at least $n+2$. Suppose, for contradiction, that each maximal clique of $G$ has size exactly $n+2$. Since $G$ cannot be folded onto a clique of size $n+2$, $G$ has at least two maximal cliques. Let $V_1$ be a maximal clique of $G$. Then from \Cref{thm:simplicaildecomposition} and \Cref{prop:cliquesize}, the maximal cliques of $G$ can be arranged in a sequence $(V_1, \ldots, V_k)$ such that $V_j \cap (\bigcup\limits_{i=1}^{j-1} V_i)$ is a clique of size $n+1$ for $2 \leq j \leq k$. Since $V_k$ is a clique of size $n+2$, we see that there exists a vertex $v_k \in V_k$ such that $v_k \notin \bigcup\limits_{i=1}^{k-1} V_i$. Further, since $V_k \cap (\bigcup\limits_{i=1}^{k-1} V_i)$ is a clique of size $n+1$, we see that $G$ is folded onto $G - \{v_k\}$.
Clearly, $G-\{v_k\}$ has simplicial decomposition $(V_1, \ldots, V_{k-1})$. From \Cref{prop:cliquesize}, we observe that $G - \{v_k\}$ is also $(n+1)$-connected. Since $G$ cannot be folded onto a clique of size $n+2$, $G-\{v_k\}$ is not a complete graph. Now, by a similar argument there exists a $v_{k-1} \in V_{k-1}$ such that $G- \{v_k, v_{k-1}\}$ is an $(n+1)$-connected non-complete chordal graph. Since $k$ is finite, after finitely many steps $G$ is folded onto $V_1$, which is a contradiction. Thus, $G$ has a clique of size at least $n+3$. If $n = 0$, then since $G$ has a clique of size $3$, $\chi(G) \geq 3$. It is well known that for any connected graph $G$ of chromatic number greater than $2$, $\mathcal{N}(G)$ is path connected. The proof is by induction on the number of vertices of the graph $G$. If $G$ is isomorphic to a complete graph $K_p$, then $p \geq n+3$ and in this case $\mathcal{N}(G) \simeq S^{p-2}$, so the result is true. So, we assume that $G$ is non-complete. By Lemma \ref{lem:simplicial}, there exists a simplicial vertex $v$ such that $G-\{v\}$ is $(n+1)$-connected and $\chi(G) = \chi(G-\{v\})$. Since $G$ has a clique of size $n+3$ and $G-\{v\}$ is a perfect graph, we see that $G-\{v\}$ also has a clique of size $n+3$ and therefore $G-\{v\}$ cannot be folded onto a clique of size $n+2$. By the induction hypothesis, $\mathcal{N}(G-\{v\})$ is $n$-connected. By Lemma \ref{lem:neighbor}, for any $S \subseteq N_G(v)$ such that $|S| \leq n+1$, there exists a vertex $v_S \neq v$ such that $S \subseteq N_G(v_S)$. The result follows from Theorem \ref{thm:general}. \end{proof} The following is an immediate corollary of Theorem \ref{thm:chordal}. \begin{cor} Let $G$ be an $n$-connected chordal graph. If $\mathrm{Conn}(\mathcal{N}(G)) < n-1$, then $\chi(G) = n+1$. \end{cor} \begin{proof} Using \Cref{prop:cliquesize}, we conclude that $\chi(G) \geq n+1$. Suppose $\chi(G) \geq n+2$. If $G$ is folded onto $K_{n+2}$, then from \Cref{fold}, $\mathcal{N}(G) \simeq \mathcal{N}(K_{n+2}) \simeq S^{n}$ and therefore $\mathrm{Conn}(\mathcal{N}(G)) = n-1$. If $G$ is not folded onto $K_{n+2}$, then by Theorem \ref{thm:chordal}, $\mathrm{Conn}(\mathcal{N}(G)) \geq n-1$, which is a contradiction. \end{proof} \begin{thm}\cite[Theorem 9.19]{bondy} \label{thm:cut} Let $G$ be a connected chordal graph which is not complete, and let $S$ be a minimal vertex cut of $G$. Then $G[S]$ is a clique of $G$. \end{thm} We now prove the following as a consequence of Theorem \ref{thm:cutiscomplete}. \begin{thm}\label{thm:converse} Let $G$ be a chordal graph. If $G$ is stiff and the vertex connectivity of $G$ is $n$, then $\mathrm{Conn}(\mathcal{N}(G)) < n$. \end{thm} \begin{proof} If $\chi(G) \leq n+2$, then by \Cref{lovasz}, $\mathrm{Conn}(\mathcal{N}(G)) < n$. Let $\chi(G) \geq n+3$. Since $G$ is $n$-connected, from Theorem \ref{thm:chordal}, $\mathcal{N}(G)$ is $(n-1)$-connected. Let $S = \{x_1, \ldots, x_n\}$ be a minimal vertex cut of $G$. From \Cref{thm:cut}, $G[S] \cong K_n$. Let $X_1, X_2, \ldots, X_r$ be the $S$-components of $G$. Clearly, for each $1 \leq i \leq r$, $X_i$ is an $n$-connected chordal graph. Let $G_1= X_1$ and $G_2 = \cup_{i=2}^r X_i$. Observe that $G_1$ and $G_2$ are at least $n$-connected. From Proposition \ref{prop:cliquesize}, we conclude that there exist $a_i \in V(G_i)$ such that $a_i \sim x_j$ for all $1 \leq i \leq 2$ and $ 1 \leq j \leq n$. Since $G$ contains a clique of size $n+3$, either $G_1$ or $G_2$ contains a clique of size $n+3$. Suppose that $G_1$ contains a clique of size $n+3$.
From \Cref{thm:chordal}, $\mathcal{N}(G_1)$ is $(n-1)$-connected. If all the maximal cliques of $G_2$ are of size $n+1$, then using \Cref{prop:cliquesize}, we conclude that $G_2$ is folded onto $K_{n+1}$. Hence, $G$ can be folded onto $G_1$, which contradicts the fact that $G$ is stiff. Thus, from Theorem \ref{thm:chordal}, $\mathcal{N}(G_2)$ is $(n-1)$-connected. Using Theorem \ref{thm:cutiscomplete}, we see that $\mathrm{Conn}(\mathcal{N}(G)) = n-1$. By a similar argument, if $G_2$ contains a clique of size $n+3$, then we can show that $\mathrm{Conn}(\mathcal{N}(G)) = n-1$. \end{proof} Combining Theorems \ref{thm:chordal} and \ref{thm:converse}, we get the main result of this section. \begin{thm} \label{thm:chordalcomplete} Let $G$ be a non-complete stiff chordal graph. Then the vertex connectivity $\kappa(G)= n+1$ if and only if $\mathrm{Conn}(\mathcal{N}(G)) = n$. \end{thm} \begin{proof} If $\kappa(G) = n+1$, then Theorems \ref{thm:chordal} and \ref{thm:converse} imply that $\mathrm{Conn}(\mathcal{N}(G)) = n$. Now, let $\mathrm{Conn}(\mathcal{N}(G)) = n$. From \Cref{thm:converse}, $\kappa(G) \geq n+1$. But, if $\kappa(G) \geq n+2$, then \Cref{thm:chordal} implies that $\mathcal{N}(G)$ is $(n+1)$-connected, which is a contradiction. Thus $\kappa(G)=n+1$. \end{proof} \begin{rmk} In \cite{csorba2}, Csorba proved that the box complexes of chordal graphs are homotopy equivalent to wedges of spheres. It is well known that the box complex and the neighbourhood complex of a graph are homotopy equivalent \cite{csorba3}. So, the neighbourhood complexes of chordal graphs are homotopy equivalent to wedges of spheres. In his proof, Csorba used the simplicial decomposition of chordal graphs (see \Cref{thm:simplicaildecomposition}). He also remarked that, using the simplicial decomposition structure, one can tell the possible dimensions of the spheres appearing in the wedge. In fact, by following his proof and by using \Cref{prop:cliquesize}, we can conclude that if $G$ is an $(n+1)$-connected, non-complete, stiff chordal graph, then $\mathrm{Conn}(\mathcal{N}(G)) \geq n$. \end{rmk} \section{Concluding Remarks}\label{sec:conclusion} In the previous section, we showed that the vertex connectivity of a non-complete stiff chordal graph is exactly one more than the connectivity of its neighbourhood complex. In the case of queen graphs, the vertex connectivity of the graph can be much larger than the connectivity of its neighbourhood complex, as shown in the following table. \begin{table}[H] \centering \scalebox{0.58}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline (m, n)&$(2,2)$& $(2,3)$& $(2,4)$& $(2,5)$& $(2,6)$& $(2,7)$& $(2,8)$&$(2,9)$& $(2,10)$&$(3,3)$ & $(3,4)$& $(3,5)$& $(3,6)$ & $(3,7)$&$(3,8)$& $(4,2)$& $(4,4)$& $(4,5)$& $(4,6)$\\ [10pt] \hline $\kappa(Q_{m, n})$&$3$& $4$& $5$& $6$& $7$& $8$& $9$&$10$& $11$&$6$ & $7$& $8$& $9$ & $10$&$11$& $5$& $9$& $10$& $11$\\ [5pt] \hline $\mathrm{Conn}(\mathcal{N}(Q_{m,n}))$&$1$ &$1$&$1$ &$2$&$2$&$2$&$2$&$2$&$2$&$2$&$2$&$2$&$2$&$2$&$2$&$1$&$2$&$2$&$2$ \\ \hline \end{tabular}} \caption{\small{Vertex connectivity vs Connectivity of Neighbourhood Complex of $Q_{m, n}$}} \label{table:1} \end{table} From \Cref{thm:weaklytriangulated}, for a class of weakly triangulated graphs, $\kappa(G) > \mathrm{Conn}(\mathcal{N}(G))$. For any graph $G$ of chromatic number less than or equal to three, it is clear from Theorem \ref{lovasz} that $\kappa(G) > \mathrm{Conn}(\mathcal{N}(G))$.
We can create more classes of graphs whose vertex connectivity is much larger than the connectivity of the neighbourhood complex by using the notion of the Mycielskian of a graph. The Mycielskian $\mathscr{M}(G)$ of $G$ with $V(G) = \{v_1, \ldots, v_n\}$ is the graph with $V(\mathscr{M}(G)) = \{v_1, \ldots, v_n\} \sqcup \{u_1, \ldots, u_n\} \sqcup \{w\}$ and $E(\mathscr{M}(G)) = E(G) \sqcup \{(u_i, v_j), (u_j, v_i) \ | \ (v_i, v_j) \in E(G)\} \sqcup \{(w, u_j ) \ | \ 1 \leq j \leq n\}. $ In \cite{csorba1}, Csorba proved that the neighbourhood complex $\mathcal{N}(\mathscr{M}(G))$ is homotopy equivalent to the suspension of $\mathcal{N}(G)$. In \cite{chang}, Chang et al. showed that $\kappa(\mathscr{M}(G)) > \kappa(G)$. Hence, using the results of Csorba and Chang et al., we see that $\kappa(G) > \mathrm{Conn}(\mathcal{N}(G))$ implies $\kappa(\mathscr{M}(G)) > \mathrm{Conn}(\mathcal{N}(\mathscr{M}(G)))$. From \Cref{thm:chordalcomplete}, for a non-complete stiff chordal graph $G$, $\kappa(G) > \mathrm{Conn}(\mathcal{N}(G))$. So, for all of these classes of graphs, the vertex connectivity is larger than the connectivity of the neighbourhood complex. For $n \geq 5$, let $\tilde{K}_n$ be the graph defined as a complete graph $K_n$ with one pendant edge (see \Cref{Fig:a}). Then clearly $\tilde{K}_n$ can be folded onto $K_n$ and its vertex connectivity is $1$. But $\mathcal{N}(\tilde{K}_n) \simeq \mathcal{N}(K_n) \simeq S^{n-2}$ and therefore $\mathrm{Conn}(\mathcal{N}(\tilde{K}_n)) = n-3$. More generally, we construct a class of stiff graphs in which the vertex connectivity is arbitrarily smaller than the topological connectivity of the neighbourhood complex. \begin{figure} \caption{$\tilde{K}_n$} \label{Fig:a} \caption{$G_{1, 5}$} \label{Figure:b} \caption{Examples of the graphs $\tilde{K}_n$ and $G_{r, p}$.} \label{ex} \end{figure} Let $n \geq 2$ be a positive integer and $S \subset \{ 1, 2, \ldots, n-1\}$. The {\it circulant graph} $C_n(S)$ is the graph with vertex set $V(C_n(S)) = \{1, 2, \ldots, n\}$ in which two vertices $x$ and $y$ are adjacent if and only if $x-y \pmod{n} \in S \cup (-S)$, where $-S = \{n-a : a \in S\}$. Observe that for any $t \in \{1, 2, \ldots, n\}$, $N_{C_n(S)}(t) = \{s+t : s \in S\} \cup \{ n-s+t : s \in S \}$. Let $r \geq 1, p \geq 3, n = 4r+6$ and $S = \{1, 3, \ldots, 2r+ 1\}$. Let $H_n^p$ be the complete graph on vertex set $\{n+1, \ldots, n+p\}$. Let $G_{r,p}$ be the graph on vertex set $\{1, 2, \ldots, n+p\}$ with edge set $E(G_{r,p}) = E(C_n(S)) \cup E(H_n^p) \cup \{( n+1, 1), (n+1, 3)\}$ (see \Cref{Figure:b}). For a set $A$, let $\Delta^{A}$ denote a simplex on vertex set $A$ and $\partial(\Delta^{A})$ the simplicial boundary of $\Delta^{A}$. It is easy to check that $\mathcal{N}(C_n(S)) = \partial(\Delta^{\{1, 3, 5, \ldots, n-1\}}) \sqcup \partial(\Delta^{\{2, 4, 6, \ldots, n\}})$. Hence $\mathcal{N}(C_n(S)) \simeq S^{2r+1} \sqcup S^{2r+1} $. Since $\mathcal{N}(H_n^p) \simeq S^{p-2}$, by \Cref{claim:example}, we conclude that $\mathcal{N}(G_{r,p}) \simeq S^{2r+1} \vee S^{2r+1} \vee S^{p-2}$. By construction, the vertex connectivity of $G_{r,p}$ is $1$, whereas the connectivity of its neighbourhood complex can be made arbitrarily large by appropriate choices of $r$ and $p$. \section{Appendix} \begin{thm}\cite[Theorem 9.20]{bondy} \label{thm:simplicaildecomposition} Let $G$ be a chordal graph and let $V_1$ be a maximal clique of $G$. Then the maximal cliques of $G$ can be arranged in a sequence $(V_1, \ldots, V_k)$ such that $V_j \cap (\bigcup\limits_{i=1}^{j-1} V_i)$ is a clique for $2 \leq j \leq k$.
Such a sequence $(V_1, \ldots, V_k)$ is called a simplicial decomposition of $G$. \end{thm} The following statement is probably well known to experts. We prove it here for completeness. \begin{prop} \label{prop:cliquesize} Let $G$ be an $n$-connected chordal graph and let $V_1$ be a maximal clique of $G$. Then either $G = V_1$ or the maximal cliques of $G$ can be arranged in a sequence $(V_1, \ldots, V_k)$ such that $V_j \cap (\bigcup\limits_{i=1}^{j-1} V_i)$ is a clique of size at least $n$ for $2 \leq j \leq k$. \end{prop} \begin{proof} Assume that $G \neq V_1$, {\it i.e.,} $G$ is non-complete. The proof is by induction on the number of vertices of $G$. Let $S$ be a minimal vertex cut of $G$ and let $X_1, \ldots, X_N$ be the $S$-components of $G$. Since $G$ is $n$-connected, $|S| \geq n$. Clearly, each $X_i$ is an $n$-connected chordal graph. Without loss of generality, we assume that $V_1$ is a maximal clique of $X_1$. From \Cref{thm:cut}, $G[S]$ is a clique. Let $C^{i}$ be a maximal clique of $X_i$ containing $S$, $1 \leq i \leq N$. Clearly, each maximal clique of $X_i$ is also a maximal clique of $G$. By induction, for each $1 \leq i \leq N$, either $X_i$ is a complete graph or we can arrange the maximal cliques of $X_i$ in a sequence $ M_i = (T^{i}_{j_1}, \ldots, T^{i}_{j_{l_i}})$ such that $T^{i}_{j_t} \cap (\bigcup\limits_{m=1}^{t-1} T^i_{j_m})$, $2 \leq t \leq l_i$, is a clique of size at least $n$, where $T^{1}_{j_1} = V_1$ and $T^{i}_{j_1} = C^{i}$ for $2 \leq i \leq N$. Let $M = (T_1, \ldots, T_{l_1}, T_{l_1+1}, \ldots, T_{l_1+l_2}, \ldots, T_{l_1 + \cdots + l_{N-1}+1}, \ldots, T_{l_1 + \cdots +l_N})$, where $M_1= (T_1, \ldots, T_{l_1})$ and $M_i = (T_{l_1 + \cdots+ l_{i-1}+1}, \ldots, T_{l_1+ \cdots+ l_{i}})$ for $2 \leq i \leq N$. Let $2 \leq t \leq l_1 + \ldots + l_N$. If $t \leq l_1$, then since $M_1$ is a simplicial decomposition of $X_1$, $ T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i)$ is a clique of size at least $n$. So assume $t \geq l_1+1$. There exists $1 \leq p \leq N-1$ such that $l_1 + \ldots + l_{p}+1 \leq t\leq l_1 + \ldots + l_{p}+ l_{p+1} $. If $t= l_1 + \ldots + l_{p}+1$, then $V(T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i)) = S$ and therefore $T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i)$ is a clique of size at least $n$ by Theorem \ref{thm:cut}. Let $t > l_1 + \ldots + l_{p}+1$. Observe that the vertex set of $T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i)$ is a subset of the vertex set of $X_{p+1}$. Hence, $ T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i) = T_t \cap (\bigcup\limits_{i = l_1 + \ldots + l_{p}+1}^{t-1} T_i) $. Since $M_{p+1}$ is a simplicial decomposition of $X_{p+1}$, by induction $ T_t \cap (\bigcup\limits_{i = l_1 + \ldots + l_{p}+1}^{t-1} T_i)$ is a clique of size at least $n$. Hence, $M$ is a simplicial decomposition of $G$ such that $T_t \cap (\bigcup\limits_{i=1}^{t-1} T_i)$ is a clique of size at least $n$ for $2 \leq t \leq l_1 + \ldots + l_N$. \end{proof} Let $X$ be a simplicial complex and $\tau, \sigma \in X$ such that $\sigma \subsetneq \tau$ and $\tau$ is the only maximal simplex in $X$ that contains $\sigma$. A {\it simplicial collapse} of $X$ is the simplicial complex $Y$ obtained from $X$ by removing all those simplices $\gamma$ of $X$ such that $\sigma \subseteq \gamma \subseteq \tau$. Here, $\sigma$ is called a {\it free face} of $\tau$ and $(\sigma, \tau)$ is called a {\it collapsible pair}. It is well known that if $X$ collapses to $Y$, then $X \simeq Y$ \cite[Proposition 6.14]{dk}.
Recall that for a simplicial complex $\mathcal{K}$, the set of maximal simplices of $\mathcal{K}$ is denoted by $\mathcal{M}(\mathcal{K})$. \begin{claim} \label{claim:example} Let $r \geq 1, p \geq 3, n = 4r+6$ and $S = \{1, 3, \ldots, 2r+ 1\}$. Let the graphs $G_{r, p}$ and $H_n^p$ be as defined in \Cref{sec:conclusion}. Then $\mathcal{N}(G_{r, p})$ collapses to a subcomplex $X$, where $$ \mathcal{M}(X) = \mathcal{M}(\mathcal{N}(C_n(S))) \sqcup \mathcal{M}(\mathcal{N}(H_n^p)) \sqcup \{\{3, n+p\} \}\sqcup \{\{n, n+1\}\}.$$ \end{claim} \begin{proof} For convenience of notation, we denote the graph $G_{r, p}$ by $G$ and the graph $H_n^p$ by $H$. Clearly, $N_G(1) = N_{C_n(S)}(1) \cup \{n+1\}, N_G(3) = N_{C_n(S)}(3) \cup \{n+1\}, N_G(n+1) = \{n+2, \ldots, n+p\} \cup \{1, 3\}$. For any $i \in [n] \setminus \{1, 3\}$, $N_G(i) = N_{C_n(S)}(i)$ and for any $j \in \{n+2, \ldots, n+p\}$, $N_G(j) = N_H(j)$. Observe that $(\{1, n+2\}, N_G(n+1))$ is a collapsible pair in $\mathcal{N}(G)$. By using this collapsible pair, $N_G(n+1)$ collapses to $\delta_1 = N_G(n+1) \setminus \{n+2\}$ and $\delta_2 = N_G(n+1) \setminus \{1\}$. Thus, $\mathcal{N}(G)$ collapses to a subcomplex $\Delta_1$, where $\mathcal{M}(\Delta_1) = (\mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1)\}) \sqcup \{\delta_1, \delta_2\}$. Now $(\{1, n+3\}, \delta_1)$ is a collapsible pair in $\Delta_1$ and therefore $\delta_1$ collapses to $\delta_1 \setminus \{n+3\} $ and $\delta_1 \setminus \{1\} \subseteq \delta_2$. Hence, $\Delta_1$ collapses to $\Delta_2$, where $\mathcal{M}(\Delta_2) = (\mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1)\}) \sqcup \{\delta_2, \delta_1 \setminus \{n+3\}\}$. Since $\{1, 3\} \subseteq N_G(2)$, by applying a sequence $$ (\{1, n+4\}, \delta_1 \setminus \{n+3\} ), (\{1, n+5\}, \delta_1 \setminus \{n+3, n+4\}), \ldots, (\{1, n+p\}, \delta_1 \setminus \{n+3, \ldots, n+p-1\}) $$ of collapsible pairs, $\Delta_2$ collapses to $\Delta_3$, where $\mathcal{M}(\Delta_3) = (\mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1)\}) \sqcup \{\delta_2\}$. Observe that for any $2 \leq i \leq p$, $(\{3, n+i\}, \delta_2)$ is a collapsible pair in $\Delta_3$. Hence, by using the collapsible pair $(\{3, n+2\}, \delta_2)$, $\delta_2$ collapses to $\delta_2 \setminus\{3\} = N_{H}(n+1)$ and $\delta_2 \setminus \{n+2\}$. Let $\sigma = \delta_2 \setminus\{n+2\}$. Then, $\Delta_3$ collapses to a subcomplex $\Delta_4$, where the set of maximal simplices is $\mathcal{M}(\Delta_4) = (\mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1)\} )\sqcup \{N_H(n+1), \sigma\}$. Now $(\{3, n+3\}, \sigma)$ is a collapsible pair in $\Delta_4$ and therefore $\sigma$ collapses to $\sigma \setminus \{n+3\}$ and $\sigma \setminus \{3\} \subset N_H(n+1)$. By applying a sequence $$ (\{3, n+4\}, \sigma \setminus \{n+3\} ), (\{3, n+5\}, \sigma \setminus \{n+3, n+4\}), \ldots, (\{3, n+p-1\}, \sigma \setminus \{n+3, \ldots, n+p-2\}) $$ of collapsible pairs, we see that $\Delta_4$ collapses to a subcomplex $\Delta_5$, where the set of maximal simplices is $\mathcal{M}(\Delta_5) = ( \mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1)\} )\sqcup \{N_H(n+1)\} \sqcup \{\{3, n+p\}\}$. Observe that $(\{n+1, 2r+6\}, N_G(1))$ and $(\{n+1, 2r+4\}, N_G(3))$ are collapsible pairs in $\mathcal{N}(G)$ and hence in $\Delta_5$.
Therefore, by using these collapsible pairs we get that $N_G(1)$ collapses to $N_G(1) \setminus \{n+1\}$ and $N_G(1) \setminus \{2r+6\}$, and $N_G(3)$ collapses to $N_G(3) \setminus \{n+1\}$ and $N_G(3) \setminus \{2r+4\}$. Clearly, $N_G(1) \setminus \{n+1\} = N_{C_n(S)}(1)$, $N_G(3) \setminus \{n+1\} = N_{C_n(S)}(3)$ and $N_G(1) \setminus \{2r+6\} = N_G(3) \setminus \{2r+4\} = \{n+1\} \cup \{2, 4, 6, \ldots, n\} \setminus \{2r+4, 2r+6\}$. Let $\tau = N_G(1) \setminus \{2r+6\}$. Hence $\Delta_5$ collapses to a subcomplex $\Delta_6$, where \begin{align*} \mathcal{M}(\Delta_6) = & ( \mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1), N_G(1), N_G(3)\} ) \\ & \sqcup \{N_H(n+1), \{3, n+p\}, N_{C_n(S)}(1), N_{C_n(S)}(3), \tau \}. \end{align*} Observe that for any $j \in \tau, j \neq n+1$, $(\{n+1, j\}, \tau)$ is a collapsible pair in $\Delta_6$. Since $\tau \setminus \{n+1\} \subseteq N_{C_n(S)}(1)$, by applying a sequence $$ (\{n+1, 2\}, \tau), ( \{n+1, 4\}, \tau \setminus \{2\} ), \ldots, (\{n+1, 2r+2\}, \tau\setminus \{2, 4, \ldots, 2r\}), $$ $$ (\{n+1, 2r+8\}, \tau\setminus \{2, 4, \ldots, 2r, 2r+2\}), (\{n+1, 2r+10\}, \tau\setminus \{2, 4, \ldots, 2r, 2r+2, 2r+8\}), $$ $$ \ldots, (\{n+1, 4r+4\}, \tau \setminus \{2, 4, \ldots, 2r+2, 2r+8, \ldots, 4r+2 \}) $$ of collapsible pairs, we see that $\Delta_6$ collapses to a subcomplex $\Delta_7$, where \begin{align*} \mathcal{M}(\Delta_7) =& (\mathcal{M}(\mathcal{N}(G)) \setminus \{N_G(n+1), N_G(1), N_G(3)\} )\sqcup \{N_H(n+1),N_{C_n(S)}(1), N_{C_n(S)}(3)\} \\ & \sqcup\{\{3, n+p\}, \{n+1, n\}\}. \end{align*} Clearly, $\mathcal{M}(\Delta_7) = \mathcal{M}(\mathcal{N}(C_n(S))) \sqcup \mathcal{M}(\mathcal{N}(H)) \sqcup \{\{3, n+p\} \}\sqcup \{\{n, n+1\}\}$. We take $X= \Delta_7$. \end{proof} \end{document}
\begin{document} \title{Task Assignment for Multiplayer Reach-Avoid Games in Convex Domains via Analytical Barriers} \author{Rui~Yan,~\IEEEmembership{Student~Member,~IEEE,} Zongying~Shi,~\IEEEmembership{Member,~IEEE,} and~Yisheng~Zhong \thanks{This work was supported by the National Natural Science Foundation of China under Grants 61374034 and 61210012.} \thanks{R. Yan, Z. Shi, and Y. Zhong are with the Department of Automation, Tsinghua University, Beijing 100084, China e-mail: [email protected] and \{szy, zys-dau\}@mail.tsinghua.edu.cn} } \maketitle \begin{abstract} This work considers a multiplayer reach-avoid game between two adversarial teams in a general convex domain which consists of a target region and a play region. The evasion team, initially lying in the play region, aims to send as many its team members into the target region as possible, while the pursuit team with its team members initially distributed in both play region and target region, strives to prevent that by capturing the evaders. We aim at investigating a task assignment about the pursuer-evader matching, which can maximize the number of the evaders who can be captured before reaching the target region safely when both teams play optimally. To address this, two winning regions for a group of pursuers to intercept an evader are determined by constructing an analytical barrier which divides these two parts. Then, a task assignment to guarantee the most evaders intercepted is provided by solving a simplified 0-1 integer programming instead of a non-deterministic polynomial problem, easing the computation burden dramatically. It is worth noting that except the task assignment, the whole analysis is analytical. Finally, simulation results are also presented. \end{abstract} \begin{IEEEkeywords} Reach-avoid games; Multi-agent systems; Barriers; Pursuit-evasion games; Optimal control; Differential games \end{IEEEkeywords} \section{Introduction} \IEEEPARstart{T}{his} paper studies a multiplayer reach-avoid (RA) game between two adversarial teams of cooperative players playing in a convex bounded planar domain, which is partitioned into a target region and a play region by a straight line. Starting from the play region, the evasion team aims to send as many its team members, called evaders, into the target region as possible. Conversely, the pursuit team initially lying in the target region and play region, strives to prevent the evasion team from doing so by attempting to capture the evaders. Actually, from another side, this game can also be viewed as an evasion team tries to escape from a bounded region through an exit which is represented by a straight line, while avoiding adversaries and moving obstacles formulated as a pursuit team. Such a differential game is a powerful theoretical tool for analyzing realistic situations in robotics, aircraft control, security, reachability analysis and other domains \cite{Ba1999Dynamic,Chung2011,shaferman2017cooperative,Dong7862774,DING20132665}. For example, in collision avoidance and path planning, how a group of vehicles can get into some target set or escape from a bounded region through an exit, while avoiding dangerous situations, such as collisions with static or moving obstacles \cite{Zhou6426643,Panagou6767157,Pan2012Pursuit}. In region pursuit games, multiple pursuers are used to intercept multiple adversarial intruders \cite{ChenMo2016Multiplayer,HAYOUN2017On,Raslan2016A}. 
In safety verification, an agent often needs to judge whether it can guarantee its arrival into a safe region throughout plenty of dynamic dangers, such as disturbances and adversaries \cite{Huang2015Aut}. In the references \cite{Scott6580287} and \cite{Garcia8340791}, cooperative behaviors within pursuit-evasion games are analyzed in order to help or rescue teammates in the presence of adversarial players. However, finding cooperative strategies and task assignment among multiple players, especially when two teams both consist of multiple members, can be challenging \cite{Chung8424838}, as computing solutions over the joint state space of multiple players can greatly increase computational complexity, beyond the scope of traditional dynamic programming \cite{Ba1999Dynamic}. In \cite{Gerkey04045564}, a formal analysis and taxonomy of task allocation in multi-robot systems is presented. More recently, a distributed version of the Hungarian method \cite{Chopra7932518} and consensus-based decentralized auction and bundle algorithms \cite{Choi5072249} are proposed to solve the multirobot assignment problem. The authors in \cite{Zhou5735231} study the problem of multirobot active target tracking by processing relative observations. Due to the conflicting and asymmetric goals between two teams, complex cooperations and non-intuitive strategies within each team may exist. Although some techniques have been used to analyze RA games and demonstrated to work well in some conditions, they are mostly numerical algorithms and suffer from some weaknesses, such as highly computational complexity, conservation, strong assumption and application limitations \cite{Xue2017Reach,Chen8267187,Vamvoudakis2011Multi,ZHOU201828}. Although the problem considered in this paper is different from the classical pursuit-evasion games that have been thoroughly studied, we borrow several existing notions and modify their definitions slightly to address our current scenario. For RA games, as Isaacs' book \cite{Is1967Diff} shows, the core point is to construct the barrier, which is the boundary of the RA set, splitting the entire state space into two disjoint parts: Pursuit Winning Region (PWR) and Evasion Winning Region (EWR). The PWR is the region of initial conditions, from which the pursuit team can ensure the capture before the evader enters the target region. The EWR, complementary to the PWR, is the region of initial conditions, from which the evader can succeed to reach the target region regardless of the pursuit team's strategies. The surface that separates the PWR from the EWR is called barrier. In principle, the Hamilton-Jacob-Isaacs (HJI) approach is an ideal tool for solving general RA games when the game is low-dimensional, such as autonomous river navigation for underactuated vehicles \cite{Weekly6838996} and safety specifications in hybrid systems \cite{Tomlin2000A}. By defining a value function merging the payoff function and discriminator function with minmax operation, this approach involves solving a HJI partial differential equation (PDE) in the joint state space of the players and locating the barrier by finding the zero sublevel set of this value function. The players' optimal strategies can be extracted from the gradient of the value function. Generally, there are two approaches to solve the HJI PDEs: the method of characteristics \cite{Garcia8340791} and numerical approximation of the value function on a grid of the continuous state space \cite{Mitchell2005A,Margellos2011Ham,Fis2015Rea}. 
However, in practical applications, two approaches both face computational challenges. Non-unique terminal conditions in RA game setups, capture or entry into the target set, make it difficult to generate strategies by characteristic solutions which require backward integration from terminal manifold, as different backward trajectories may produce complicated singular surfaces for which there exist no systematic analysis methods \cite{lewin2012differential}. On the other hand, a number of numerical tools for solving HJI PDEs on grids have been provided to solve practical problems \cite{Margellos2011Ham}. Unfortunately, the curse of dimensionality makes these approaches computationally intractable in our multiplayer RA games, as the grid required for approximating the value function scales exponentially with the number of players. For certain games and game setups, geometric method shows an incredible power in providing strategies for the players \cite{BOPARDIKAR20091771,Yan2017Escape,lopez2016optimal}. For example, Voronoi diagrams, dividing a plane into regions of points that are closest to a predetermined set of seed points, are widely used for generating strategies in pursuit-evasion games, usually when each player possesses the same speed. Especially in group pursuit of a single evader or multiple evaders, Voronoi-based approaches can provide very constructive cooperative strategies, such as minimizing the area of the generalized Voronoi partition of the evader \cite{Pierson2017Int,Zhou2016Cooperative} or pursuing the evader in a relay way \cite{Bakolas2012Relay}. As for unequal speed scenarios, the Apollonius circle, first introduced by Isaacs, is a useful tool for analyzing the capture of a high-speed evader by using multiple pursuers \cite{Ramana2016Pur,Yan2017Rea}. More realistically, when pursuit-evasion games are played in the presence of obstacles that inhibit the motions of the players, Euclidean shortest path method is employed to construct the dominance region in \cite{Oyler2016Pursuit}, and visibility-based target tracking games are addressed in \cite{Zou8588382} and \cite{Yu6112246}. The work by Katsev \emph{et al.} \cite{Katsev5685659} introduces a simple wall-following robot to map and solve pursuit-evasion strategies in an unknown polygonal environment. The authors \cite{Noori913498291} revisit the lion and man problem by introducing line-of-sight visibility. In \cite{Bhadauria4912452894}, the number of pursuers which can guarantee to capture an equal speed evader in polygonal environment with obstacles, is investigated. For RA games, a number of straight lines called paths of defense, separating the target set from evaders, are created successively to compute an approximate 2D slice of the reach-avoid set, which eases the computational burden sharply \cite{ChenMo2016Multiplayer}. The construction of barrier, the most central and important part in RA games, has attracted a lot of attention in pursuit-evasion games and until now, achieved remarkable results \cite{ruiz2016differential,Sun2015Pur,Zou2016Vis,Yan2017D}. For example, in \cite{Ruiz6525413} and \cite{MACIAS2018271}, the authors compute the barrier for a pursuit-evasion game between an omnidirectional evader and a differential drive robot. For the problem of tracking an evader in an environment containing a corner, the method of explicit policy is used to investigate the escape set and the track set \cite{Sourabh11415885}. 
In the reference \cite{Barron2018}, the boundary of reach-avoid set is studied for general reach-avoid differential games. By adopting the method of characteristics or numerical approximation on grid, the barrier for two players or at most three players is constructed analytically or numerically. However, computing the barrier directly for more than three players in pursuit-evasion games is missing, resulting from the intrinsic high dimensionality of the joint state space. Compared with the traditional pursuit-evasion games, RA games are more complicated and have more practical significance, as the evaders aim to not only avoid the capture, but also strive to reach a target set. To our best knowledge, the current work for RA games mainly focuses on the construction of barrier in two-player scenarios \cite{ChenMo2016Multiplayer,Margellos2011Ham,Fis2015Rea}. The methods in \cite{ChenMo2016Multiplayer,Margellos2011Ham,Fis2015Rea} are numerical and cannot directly obtain the barrier for multiple players due to the high computation burden. The work \cite{Yan2017Rea} is most similar to this work, but it only considers two pursuers and one evader case without involving the complicated cooperations among the pursuit team occurring in this work, and it only focuses on a square game domain which is quite limited. Moreover, our current work also allows the pursuers to start the game in the target region, which is not considered in \cite{Yan2017Rea}. In this work, an analytical study on the number of the evaders which the pursuit team would be able to prevent from reaching the target region, is presented by generating a task assignment, namely, pursuer-evader matching pairs. Actually, the above analysis refers to a game of kind \cite{Is1967Diff}. To this end, all pursuers are first classified into all possible coalitions. Then, an evasion region method is proposed to construct the barrier analytically for each pursuit coalition versus one evader in convex domains, which is the first time in the existing literature to construct the barrier directly for multiplayer games with more than three players involved. More importantly, the constructed barrier is analytical and overcomes the curse of dimensionality. Finally, with rich prior information in hand about which evaders can be intercepted by a specified pursuit coalition, a maximum pursuer-evader matching is given such that the most evaders are intercepted. The original contributions of this paper are as follows. First, the analytical barrier is constructed for one pursuer and two pursuers versus one evader in convex domains. Second, the analytical barrier for multiple pursuers versus one evader is given, involving two kinds of initial deployments shown in Assumptions \ref{constrainedinitial} and \ref{relaxedinitial}, which can determine the capturable and uncapturable regions for these pursuers. Third, since all possible cooperations among the pursuit team are considered, the upper bound on the number of the evaders which the pursuit team can guarantee to intercept, is given by solving a 0-1 integer programming instead of a non-deterministic polynomial problem, greatly easing the computation burden. Fourth, except the task assignment, the whole analysis is analytical, allowing for real-time updates. These contributions provide a complete solution from the perspective of the task assignment to multiplayer RA games in any convex domains consisting of a target region and a play region separated by a straight line. 
The rest of this paper is organized as follows. In Section \ref{problmstate}, the problem statement is given. In Section \ref{preliminaries}, some important preliminaries are presented. In Section \ref{pursuitcoalition}, the barrier and winning regions for one pursuit coalition versus one evader are found. In Section \ref{task}, the task assignment for the pursuit team to capture the most evaders is designed. In Section \ref{simulation}, simulation results are presented. Finally, Section \ref{conclusion} concludes the paper and discusses future work. \begin{figure} \caption{Multiplayer reach-avoid games in convex domains, where the pursuit team with multiple pursuers (blue circles) wants to capture the most evaders (red triangles) before these evaders enter the target region. Our goal is to find a task assignment for the pursuit team that guarantees the most evaders captured, involving pursuer-evader matching pairs.\label{fig:1}} \end{figure} \section{Problem Statement}\label{problmstate} \subsection{Multiplayer Reach-Avoid Games} Consider $N_p+N_e$ players partitioned into two teams, a pursuit team of $N_p$ pursuers, $\{P_i\}_{i=1}^{N_p}=\{{P_1},...,{P_{N_p}}\}$, and an evasion team of $N_e$ evaders, $\{E_j\}_{j=1}^{N_e}=\{{E_1},...,{E_{N_e}}\}$, whose states are constrained in a bounded, convex domain $\Omega\subset \mathbb{R}^2$ with boundary $\partial\Omega$. Each player is assumed to be a mass point. As Fig. \ref{fig:1} shows, a straight line $\mathcal{T}\subset\Omega$, called the target line, divides $\Omega$ into two disjoint parts $\Omega_{\rm tar}$ and $\Omega_{\rm play}$. The compact set $\Omega_{\rm tar}$, called the target region, represents the region which the evasion team strives to enter and the pursuit team tries to protect. The play region $\Omega_{\rm play}=\Omega\setminus\Omega_{\rm tar}$ corresponds to the region in which the two teams play the game. Also note that $\mathcal{T}\subset\Omega_{\rm tar}$. Let $\mathbf{x}_{P_i}(t)=({x}_{P_i}(t),y_{P_i}(t))\in\mathbb{R}^2$ and $\mathbf{x}_{E_j}(t)=({x}_{E_j}(t),y_{E_j}(t))\in\mathbb{R}^2$ be the positions of $P_i$ and $E_j$ at time $t$, respectively. The dynamics of the players are described by the following decoupled system for $t\ge0$: \begin{equation}\begin{aligned} \dot{\mathbf{x}}_{P_i}(t)&=v_{P_i}\mathbf{u}_{P_i}(t),& \hspace{0.5mm} \mathbf{x}_{P_i}(0)&=\mathbf{x}^0_{P_i},& i&=1,...,N_p \\\dot{\mathbf{x}}_{E_j}(t)&=v_{E_j}\mathbf{u}_{E_j}(t),&\mathbf{x}_{E_j}(0)&=\mathbf{x}^0_{E_j},& j&=1,...,N_e \end{aligned} \end{equation} where $\mathbf{x}^0_{P_i}=({x}_{P_i}^0,{y}_{P_i}^0)\in\mathbb{R}^2$ and $\mathbf{x}^0_{E_j}=({x}_{E_j}^0,{y}_{E_j}^0)\in\mathbb{R}^2$ are the initial positions of $P_i$ and $E_j$, respectively. The maximum speeds of $P_i$ and $E_j$ are denoted by $v_{P_i}$ and $v_{E_j}$, respectively. The control inputs at time $t$ for $P_i$ and $E_j$ are $\mathbf{u}_{P_i}(t)$ and $\mathbf{u}_{E_j}(t)$, respectively, and they satisfy the constraint $\mathbf{u}_{P_i}(t),\mathbf{u}_{E_j}(t)\in\mathcal{U}=\{\mathbf{u}\in\mathbb{R}^2|\|\mathbf{u}\|_2=1\}$, where $\|\cdot\|_2$ stands for the Euclidean norm in $\mathbb{R}^2$. Unless needed for clarity, the time argument $t$ will be omitted hereinafter to simplify notation. The goal of the evasion team is to send as many evaders as possible into $\Omega_{\rm tar}$ without being captured, while the pursuit team strives to prevent this by capturing the evaders. Naturally, once an evader enters $\Omega_{\rm tar}$, the pursuers can no longer capture it.
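For intuition, the simple single-integrator kinematics above can be simulated with a forward-Euler update. The following minimal Python sketch is purely illustrative (the time step, speeds and heading rules are hypothetical choices, not taken from this paper):
\begin{verbatim}
import numpy as np

def step(x, u, v_max, dt):
    # Forward-Euler update of xdot = v_max * u, with u normalised to a unit vector.
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)              # enforce ||u||_2 = 1
    return np.asarray(x, dtype=float) + v_max * dt * u

# Hypothetical example: one pursuer chases one evader by pure pursuit,
# while the evader heads straight towards the target region.
dt, v_P, v_E = 0.01, 1.0, 0.6              # speed ratio alpha = v_E / v_P < 1
x_P, x_E = np.array([0.0, -1.0]), np.array([0.5, -0.5])
for _ in range(100):
    u_P = x_E - x_P                        # pursuer heads towards the evader
    u_E = np.array([0.0, 1.0])             # evader heads towards the target region
    x_P = step(x_P, u_P, v_P, dt)
    x_E = step(x_E, u_E, v_E, dt)
\end{verbatim}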
Assume that $E_j$ is captured by $P_i$ in $\Omega_{\rm play}$ if $E_j$'s position coincides with $P_i$'s position, that is, point capture is considered. Our paper aims to investigate an optimal interception matching scheme for the pursuit team such that the most evaders can be intercepted, and also to provide strategy guidance for the evasion team. In view of the pursuit team's goal, all possible cooperations among the pursuers must be considered, which is not addressed in the existing literature on multiplayer RA games. The maximum matching employed by \cite{ChenMo2016Multiplayer} can only provide a suboptimal strategy for the pursuit team, as cooperation is only introduced at the matching step. Since the pursuit team has $N_p$ pursuers, one can form one-pursuer coalitions, two-pursuer coalitions, and so on, up to the $N_p$-pursuer coalition, as Fig. \ref{fig:4} shows. Therefore, there are $2^{N_p}-1$ alternatives for pursuit coalitions among $N_p$ pursuers. These coalitions are labeled from 1 to $2^{N_p}-1$ successively, so that they can be coded in a binary way as follows. \begin{figure} \caption{All possible pursuit coalitions in a pursuit team with $N_p$ pursuers, coded in a binary way. For example, the binary code of 3 is $11$, and thus the members of pursuit coalition 3 are $P_1$ and $P_2$, namely, $m_1=1$ and $m_2=2$.\label{fig:4}} \end{figure} \begin{defi}[Binary Coalitions for Pursuit Team]\label{binary coalitions}\rm $\forall k=1,...,2^{N_p}-1$, $P_i$ belongs to the pursuit coalition $k$ if the $i$-th bit (from the low-order side) in the binary representation of $k$ is 1. Then, denote the index set of the pursuers in pursuit coalition $k$ by $\mathcal{I}_k=\{m_j|1\leq m_j\leq N_p,j=1,2,...,n_k\}$ satisfying $m_{i}<m_{j}$ for $i<j$, where $n_k$ is the number of the pursuers in pursuit coalition $k$. \end{defi} \begin{defi}[Pursuit Subcoalition]\label{Pursuit Subcoalition}\rm Consider two pursuit coalitions $k_1$ and $k_2$. If every pursuer in $k_1$ occurs in $k_2$, then $k_1$ is called a pursuit subcoalition of $k_2$. \end{defi} Obviously, there are plenty of ways to code these pursuit coalitions, but this binary coding makes it very convenient to determine every coalition's members from its index alone, as Fig. \ref{fig:4} shows. For example, for the pursuit coalition $k=5$, since the binary representation of $5$ is $101$, its members are $P_1$ and $P_3$, namely, $m_1=1$ and $m_2=3$. The notion of pursuit subcoalition will greatly simplify our problems, as discussed below. \subsection{Information Structure and Assumptions}\label{assumtionsection} As is usual in differential game theory, the equilibrium outcomes crucially depend on the information structure employed by each player. Classically, the state feedback information structure allows each player to choose its current input, $\mathbf{u}_{P_i}$ or $\mathbf{u}_{E_j}$, based on the current value of the information set $\{\mathbf{x}_{P_1},\cdots,\mathbf{x}_{P_{N_p}},\mathbf{x}_{E_1},\cdots,\mathbf{x}_{E_{N_e}}\}$. This paper focuses on a non-anticipative information structure, as commonly adopted in the differential game literature (see for example \cite{Mitchell2005A}, \cite{Elliott1972The}). Under this information structure, the pursuit team is allowed to make decisions about its current input based on all the state feedback information plus the evasion team's current input.
Although the evasion team is at a slight disadvantage under this information structure, it at least has access to sufficient information to use state feedback, because the pursuit team must declare its strategy before the evasion team chooses a specific input, and thus the evasion team can determine the response of the pursuit team to any input signal. Thus, the multiplayer reach-avoid games formulated here are an instantiation of the Stackelberg game \cite{Ba1999Dynamic}. As Fig. \ref{fig:1} shows, let $\bm{m}$ and $\bm{n}$ denote the endpoints of $\mathcal{T}$, and assume $\|\bm{m}-\bm{n}\|_2=l$. Fix the origin at $\bm{m}$, and build a Cartesian coordinate system with the $x$-axis along the straight line through $\bm{m}$ and $\bm{n}$, and the $y$-axis perpendicular to the $x$-axis and pointing towards $\Omega_{\rm tar}$. For uniformity, we identify $\mathbb{R}^n$ with $\mathbb{R}^{1\times n}$, where $n$ is a positive integer. Next, it is assumed that the following conditions are satisfied by the initial configurations of the players, where Assumptions \ref{constrainedinitial} and \ref{relaxedinitial} will be considered separately. \begin{asp}[Isolated Initial Deployment]\label{isolateinitial}\rm The initial positions of the players satisfy the three conditions:\\ 1) $\|\mathbf{x}^0_{P_i}-\mathbf{x}^0_{P_j}\|_2>0$ for all $i,j=1,...,N_p,i\neq j$;\\ 2) $\|\mathbf{x}^0_{E_i}-\mathbf{x}^0_{E_j}\|_2>0$ for all $i,j=1,...,N_e,i\neq j$;\\ 3) $\|\mathbf{x}^0_{P_i}-\mathbf{x}^0_{E_j}\|_2>0$ for all $i=1,...,N_p,j=1,...,N_e$. \end{asp} It can be seen that \autoref{isolateinitial} guarantees that all players start the game from different initial positions and that no evader is captured by the pursuers initially. \begin{asp}[Constrained Initial Deployment]\label{constrainedinitial}\rm Suppose that $\mathbf{x}^0_{P_i}\in\Omega_{\rm play}\cup\mathcal{T}$ for all $i=1,...,N_p$ and $\mathbf{x}^0_{E_j}\in\Omega_{\rm play}$ for all $j=1,...,N_e$. \end{asp} \begin{asp}[Relaxed Initial Deployment]\label{relaxedinitial}\rm Suppose that $\mathbf{x}^0_{P_i}\in\Omega$ for all $i=1,...,N_p$ and $\mathbf{x}^0_{E_j}\in\Omega_{\rm play}$ for all $j=1,...,N_e$. \end{asp} In Assumptions 2 and 3, restricting the evaders' initial positions to $\Omega_{\rm play}$ is reasonable for our RA games, since an evader wins once it enters $\Omega_{\rm tar}$. As for the initial positions of the pursuers, Assumption 2 is adopted in order to develop an extensible basic approach. Then, the more practical initial configuration described in Assumption 3 is investigated by building a bridge to this basic approach. Moreover, an initial deployment is called admissible if Assumptions 1 and 2, or Assumptions 1 and 3, hold. Generally, in multiplayer RA games the pursuers are homogeneous, and likewise the evaders, as in confrontations between two species or collision avoidance in environments with similar dynamic obstacles. Thus, our discussion assumes that the pursuers have the same maximum speed $v_P$, and the evaders have the same maximum speed $v_E$. Define $\alpha=v_E/v_P$ as the speed ratio; this paper focuses on the case of faster pursuers. \begin{asp}[Speed Ratio]\label{speedratio}\rm Suppose that $v_{P_i}=v_P>0,i=1,...,N_p$, and $v_{E_j}=v_E>0,j=1,...,N_e$. Assume that $\alpha=v_E/v_P$ satisfies $0<\alpha<1$. \end{asp} \begin{figure} \caption{The evasion region (ER) and the boundary of the ER (BER). $(a)$ The ER and BER determined by $P_i$ and $E_j$ are $\mathcal{R}_e(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$ and ${\mathcal{A}}(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$, respectively. $(b)$ The dominance regions of $E_j$ (interior of ${\mathcal{A}}$) and of $P_i$ (exterior of ${\mathcal{A}}$).\label{figT:1}} \end{figure}
\section{Preliminaries}\label{preliminaries} \subsection{Computation of the ER and BER} For $\bm{x},\bm{y}\in\mathbb{R}^2$, define two maps $r(\bm{x},\bm{y})=\frac{\alpha\|\bm{x}-\bm{y}\|_2}{1-\alpha^2}$ and $\eta(\bm{x},\bm{y})=\frac{\bm{x}-\alpha^2\bm{y}}{1-\alpha^2}$ \cite{Ramana2016Pur}, whose geometric meanings will be stated below. Let the set of points in $\mathbb{R}^2$ that one evader can reach before one pursuer, regardless of the pursuer's best effort, be called the evasion region (ER), and let the curve bounding the ER be called the boundary of the ER (BER). Denote the ER and BER determined by $P_i$ and $E_j$ by $\mathcal{R}_e(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$ and ${\mathcal{A}}(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$ respectively, which can be mathematically formulated as follows: \begin{equation}\begin{aligned}\label{ARBAR1v1} \mathcal{R}_e&=\big\{\mathbf{z}\in\mathbb{R}^2|\|\mathbf{z}-\mathbf{x}_{E_j}^0\|_2<\alpha\|\mathbf{z}-\mathbf{x}_{P_i}^0\|_2\big\}\\ {\mathcal{A}}&=\big\{\mathbf{z}\in\mathbb{R}^2|\|\mathbf{z}-\mathbf{x}_{E_j}^0\|_2=\alpha\|\mathbf{z}-\mathbf{x}_{P_i}^0\|_2\big\}. \end{aligned}\end{equation} Also note that\begin{equation}\begin{aligned}\label{apolloniuscircle22} \|\mathbf{z}-\mathbf{x}_{E_j}^0\|^2_2<\alpha^2\|\mathbf{z}-\mathbf{x}_{P_i}^0\|^2_2\Rightarrow\Big\|\mathbf{z}-\frac{\mathbf{x}_{E_j}^0-\alpha^2\mathbf{x}_{P_i}^0}{1-\alpha^2}\Big\|_2^2\qquad&\\ <\frac{\alpha^2\|\mathbf{x}_{E_j}^0-\mathbf{x}_{P_i}^0\|^2_2}{(1-\alpha^2)^2}\Rightarrow\|\mathbf{z}-\eta(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)\|_2^2<r^2(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)& \end{aligned}\end{equation} implying that (\ref{ARBAR1v1}) can be equivalently rewritten as \begin{equation}\begin{aligned}\label{apolloniuscircle33} \mathcal{R}_e&=\big\{\mathbf{z}\in\mathbb{R}^2|\|\mathbf{z}-\eta(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)\|_2<r(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)\big\}\\ {\mathcal{A}}&=\big\{\mathbf{z}\in\mathbb{R}^2|\|\mathbf{z}-\eta(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)\|_2=r(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)\big\}. \end{aligned}\end{equation} Thus, it can be seen that ${\mathcal{A}}$ is a circle of radius $r(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$ centered at $\eta(\mathbf{x}_{E_j}^0,\mathbf{x}_{P_i}^0)$, and $\mathcal{R}_e$ is the interior of ${\mathcal{A}}$, as shown in Fig. \ref{figT:1}(a). This circle ${\mathcal{A}}$, also called the Apollonius circle \cite{Is1967Diff}, divides $\mathbb{R}^2$ into two parts: the interior of the circle is $E_j$'s dominance region, i.e., the ER $\mathcal{R}_e$, since $E_j$ can reach any point inside the circle before $P_i$; the exterior of the circle is $P_i$'s dominance region, since $P_i$ can reach any point outside the circle before $E_j$, as Fig. \ref{figT:1}(b) illustrates. Unless needed for clarity, to simplify notation, we drop the initial positions occurring in the expressions of $\mathcal{R}_e$ and ${\mathcal{A}}$. \subsection{Key Function} Next, we present an important lemma about a key function.
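Before stating the lemma, we note that the ER and BER in (\ref{apolloniuscircle33}) are straightforward to evaluate numerically. The following Python sketch (illustrative only; the function names and the sample positions are our own choices, not part of the formal development) returns the center $\eta$ and radius $r$ of the Apollonius circle and tests membership in $\mathcal{R}_e$.
\begin{verbatim}
import numpy as np

def apollonius_circle(xE0, xP0, alpha):
    # Center eta(xE0, xP0) and radius r(xE0, xP0) of the BER, for a speed
    # ratio alpha = vE / vP with 0 < alpha < 1.
    xE0, xP0 = np.asarray(xE0, float), np.asarray(xP0, float)
    center = (xE0 - alpha**2 * xP0) / (1.0 - alpha**2)
    radius = alpha * np.linalg.norm(xE0 - xP0) / (1.0 - alpha**2)
    return center, radius

def in_evasion_region(z, xE0, xP0, alpha):
    # Membership test for the ER, directly from its definition.
    z, xE0, xP0 = (np.asarray(v, float) for v in (z, xE0, xP0))
    return np.linalg.norm(z - xE0) < alpha * np.linalg.norm(z - xP0)

center, radius = apollonius_circle([1.0, -2.0], [3.0, -1.0], alpha=0.5)
\end{verbatim}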
\begin{lema}[Monotonicity of a Key Function]\label{monotonic function1}\rm Given $P_i$ and $E_j$'s initial positions $\mathbf{x}_{P_i}^0$ and $\mathbf{x}_{E_j}^0$ satisfying $y_{E_j}^0<0$ and $x_{P_i}^0\leq x_{E_j}^0$, if ${\mathcal{A}}\cap\{\mathbf{z}\in\mathbb{R}^2|y=0\}=\big\{(c_1,0),(c_2,0)\big\}$ with $c_1<c_2$, the function \begin{equation}\label{monotonicfunction1} G_1(x_p)=\|\bm{p}-\mathbf{x}_{P_i}^0\|_2-\frac{\|\bm{p}-\mathbf{x}_{E_j}^0\|_2}{\alpha},\bm{p}=(x_p,0) \end{equation} is strictly increasing for $x_p\in[c_1,x_p^*]$ and strictly decreasing for $x_p\in[x_p^*,c_2]$, where $x_p^*$ is the unique solution of the quartic equation \begin{equation}\label{quarticlemma} \frac{x_p^*-x_{P_i}^0}{\|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2}=\frac{x_p^*-x_{E_j}^0}{\alpha\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2},\bm{p}^*=(x_p^*,0) \end{equation} in the interval $[c_1,c_2]$. \emph{Proof:} See Fig. \ref{figT:2}(a). Take $\bm{p}=(x_p,0)$. It can be seen that if $x_p\in[c_1,c_2]$, then $G_1(x_p)$ in (\ref{monotonicfunction1}) is exactly the distance (depicted by the dashed line) between $P_i$ and $E_j$ at the moment $E_j$ arrives at $\bm{p}$, provided that $P_i$ and $E_j$ both move directly towards $\bm{p}$. Note that ${\mathcal{A}}$ is a circle. Thus, based on (\ref{ARBAR1v1}), ${\mathcal{A}}\cap\{\mathbf{z}\in\mathbb{R}^2|y=0\}=\big\{(c_1,0),(c_2,0)\big\}$ with $c_1<c_2$ implies that $G_1(c_1)=G_1(c_2)=0$, $G_1(x_p)>0$ for $x_p\in(c_1,c_2)$, and $G_1(x_p)<0$ for $x_p\in(-\infty,c_1)\cup(c_2,+\infty)$. Thus, the maximum point $x_p^*$ of $G_1(x_p)$ lies in $[c_1,c_2]$ and satisfies $G_1'(x_p^*)=0$. If we can show that $G_1'(x_p)=0$ admits a unique solution $x_p^*$ in the interval $[c_1,c_2]$, the lemma follows immediately. First, the existence of such an $x_p^*$ in $[c_1,c_2]$ is clear, since $G_1(x_p)$ is continuous on this interval. Next, we prove uniqueness. \begin{figure} \caption{The monotonicity of the function $G_1(x_p)$, where $\bm{p}=(x_p,0)$.\label{figT:2}} \end{figure} Take $\bm{p}^*=(x_p^*,0)$. By differentiating (\ref{monotonicfunction1}) with respect to $x_p$, $G_1'(x_p^*)=0$ means that (\ref{quarticlemma}) holds. There are two cases, depending on whether $x_{P_i}^0<x_{E_j}^0$ or $x_{P_i}^0=x_{E_j}^0$, which are discussed separately below. Consider $x_{P_i}^0<x_{E_j}^0$ first. Since $x_p^*\in[c_1,c_2]$, as Fig. \ref{figT:2}(a) shows, we have $G_1(x_p^*)\ge0$, implying that \begin{equation}\label{lemmaG1de344} \|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2\ge\frac{\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2}{\alpha}>\alpha\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2. \end{equation} Then, combining (\ref{quarticlemma}) and (\ref{lemmaG1de344}) leads to \begin{equation}\label{lemmaG1de3554} |x_p^*-x_{P_i}^0|>|x_p^*-x_{E_j}^0|. \end{equation} Also note that (\ref{quarticlemma}) guarantees that $x_p^*-x_{P_i}^0$ and $x_p^*-x_{E_j}^0$ have the same sign. Thus, by also noting that $x_{P_i}^0<x_{E_j}^0$, (\ref{lemmaG1de3554}) means that $x_p^*>x_{E_j}^0>x_{P_i}^0$, as Fig. \ref{figT:2}(a) illustrates. Define \begin{equation}\label{lemmaG1de3} G_2(x_p^*)=\frac{\|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2^2(x_p^*-x_{E_j}^0)^2}{\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2^2(x_p^*-x_{P_i}^0)^2} \end{equation} and thus (\ref{quarticlemma}) can be rewritten as $G_2(x_p^*)=\alpha^2$.
Since $\alpha^2<1$, then $G_2(x_p^*)<1$, implying that \begin{equation}\begin{aligned}\label{lemmaG1de4} &\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2^2(x_p^*-x_{P_i}^0)^2>\|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2^2(x_p^*-x_{E_j}^0)^2\\ &\Rightarrow(y_{E_j}^0)^2(x_p^*-x_{P_i}^0)^2-(y_{P_i}^0)^2(x_p^*-x_{E_j}^0)^2>0. \end{aligned}\end{equation} By computing the derivative of $G_2(x_p^*)$ with respect to $x_p^*$, it can be verified that $G_2(x_p^*)$ is strictly monotonic for $x_p^*$ satisfying (\ref{lemmaG1de4}) and $x_p^*>x_{E_j}^0>x_{P_i}^0$. Thus, $G_2(x_p^*)=\alpha^2$, i.e., $G'_1(x_p^*)=0$, admits a unique solution $x_p^*$ in the interval $[c_1,c_2]$. If $x_{P_i}^0=x_{E_j}^0$, (\ref{quarticlemma}) implies that $x_p^*=x_{E_j}^0=x_{P_i}^0$ by noting (\ref{lemmaG1de344}). Thus, $G'_1(x_p^*)=0$ still admits a unique solution $x_p^*$ in this case, and we finish the proof. \qed \end{lema} For simplicity of description, in Lemma \ref{monotonic function1}, only $x_{P_i}^0\leq x_{E_j}^0$ is considered. As for $x_{P_i}^0>x_{E_j}^0$, the similar conclusion can be obtained. \subsection{Base Curves} In the construction of the barrier in Section \ref{pursuitcoalition}, the following base curves are utilized, which are very essential to characterize the barrier. We emphasize that the parameters $\bm{h}_1,\bm{h}_2$ and $\bm{h}_3$ defined below represent the initial positions of three different pursuers. \begin{defi}[Base Curves]\label{base}\rm For $\bm{h}_1=(x_1,y_1),\bm{h}_2=(x_2,y_2)$ and $\bm{h}_3=(x_3,y_3)$ satisfying $y_i\leq0(i=1,2,3)$ and $x_1<x_2<x_3$, define the following curves.\\ (\romannumeral1). One pursuer case: \begin{equation}\begin{aligned}\label{base11} &F^1_1(\bm{h}_1)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{m}\|_2\\ &\qquad\qquad\qquad\qquad\ =\alpha\|\bm{h}_1-\bm{m}\|_2,x\leq k_1,y<0\big\} \\ &F^1_2(\bm{h}_1)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_1)^2+\\ &(1-1/\alpha^2)y^2+(1-\alpha^2)y_1^2=0,x\in(k_1,k_2),y<0\big\} \\ &F^1_3(\bm{h}_1)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{n}\|_2\\ &\qquad\qquad\qquad\qquad\ \, \ =\alpha \|\bm{h}_1-\bm{n}\|_2,x\ge k_2,y<0\big\} \end{aligned} \end{equation} where $k_1=\alpha^2x_1$ and $k_2=(1-\alpha^2)l+\alpha^2 x_1$.\\ (\romannumeral2). Two pursuers case: \begin{equation}\begin{aligned}\label{basecurves22} &F^2_1(\bm{h}_1,\bm{h}_2)=F_1^1(\bm{h}_1)\\ &F^2_2(\bm{h}_1,\bm{h}_2)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_1)^2+\\ &\ (1-1/\alpha^2)y^2+(1-\alpha^2)y_1^2=0,x\in(k_3,k_4),y<0\big\} \\ &F^2_3(\bm{h}_1,\bm{h}_2)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{p}_c\|_2\\ &\qquad\qquad\qquad\quad =\alpha\|\bm{h}_1-\bm{p}_c\|_2,x\in[k_4,k_5],y<0\big\}\\ &F^2_4(\bm{h}_1,\bm{h}_2)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_2)^2+\\ &\ (1-1/\alpha^2)y^2+(1-\alpha^2)y_2^2=0,x\in(k_5,k_6),y<0\big\}\\ &F^2_5(\bm{h}_1,\bm{h}_2)=F_3^1(\bm{h}_2)\\ \end{aligned}\end{equation} where $\bm{p}_c=(x_c,0)$ is the unique point on the $x$-axis such that $\|\bm{p}_c-\bm{h}_1\|_2=\|\bm{p}_c-\bm{h}_2\|_2$. Here, $k_3=\alpha^2x_1,k_4=(1-\alpha^2)x_{c}+\alpha^2x_1,k_5=(1-\alpha^2)x_{c}+\alpha^2x_2$ and $k_6=(1-\alpha^2)l+\alpha^2x_2$.\\ (\romannumeral3). Three pursuers case: \begin{equation}\begin{aligned}\label{basecurvethree} &F^3(\bm{h}_1,\bm{h}_2,\bm{h}_3)=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_2)^2+\\ &\ (1-1/\alpha^2)y^2+(1-\alpha^2)y_2^2=0,x\in(k_7,k_8),y<0\big\} \end{aligned}\end{equation} where $k_7=(1-\alpha^2)x_{c1}+\alpha^2x_2$ and $k_8=(1-\alpha^2)x_{c2}+\alpha^2x_2$. 
Two points $\bm{p}_{c1}=(x_{c1},0)$ and $\bm{p}_{c2}=(x_{c2},0)$ are respectively given by $\|\bm{h}_1-\bm{p}_{c1}\|_2=\|\bm{h}_2-\bm{p}_{c1}\|_2$ and $\|\bm{h}_2-\bm{p}_{c2}\|_2=\|\bm{h}_3-\bm{p}_{c2}\|_2$. Although it is called three pursuers case, $F^3$ only depends on $\bm{h}_2$ while the roles of $\bm{h}_1$ and $\bm{h}_3$ are to decide two boundaries $k_7$ and $k_8$ for $x$. Thus, each point on $F^3$ only depends on at most two pursuers by cutting $F^3$ into two parts. \end{defi} \begin{rek}\label{rek32}\rm It can be verified that given $\bm{h}_1,\bm{h}_2$ and $\bm{h}_3$, all base curves are explicit and smooth if they are not empty. Actually, the following discussion will show that the barrier, a focus in our paper, is composed by these curves. How to use these base curves will be stated clearly in the next section. \end{rek} \section {One Pursuit Coalition Versus One Evader}\label{pursuitcoalition} \subsection{Problem Formulation}\label{noProblem Formulation} Before investigating the number of the evaders which would be captured in $\Omega_{\rm play}$ before entering $\Omega_{\rm tar}$, a subgame between a pursuit coalition $k$ and $E_j$ is analyzed as a building block. Denote the joint game domain for the pursuit coalition $k$ by $\Omega^{n_k}=\Omega\times\cdots\times\Omega$. Similarly, denote the joint play region, target region, target line and control constraint for the pursuit coalition $k$ by $\Omega_{\rm play}^{n_k},\Omega_{\rm tar}^{n_k},\mathcal{T}^{n_k}$ and $\mathcal{U}^{n_k}$, respectively. Denote the joint control input and initial state of the pursuit coalition $k$ by $\mathcal{P}_k=\big\{\mathbf{u}_{P_{m_1}},...,\mathbf{u}_{P_{m_{n_k}}}\big\}\in\mathcal{U}^{n_k}$ and $\mathcal{X}_k^0=\big\{\mathbf{x}_{P_{m_1}}^{0},...,\mathbf{x}_{P_{m_{n_k}}}^{0}\big\}\in\mathbb{R}^{2n_k}$, respectively. In order to develop a basic approach, we first restrict $\mathcal{X}^0_k\in\Omega_{\rm play}^{n_k}\cup\mathcal{T}^{n_k}$ in Sections \ref{section2v1} and \ref{sectionmultiplev1}, that is, all pursuers initially lie in the play region $\Omega_{\rm play}$ or target line $\mathcal{T}$ as Assumption 2 states. Then, the case $\mathcal{X}^0_k\in\Omega^{n_k}$, that is, all pursuers can start the game from any positions in $\Omega$ as Assumption 3 states, will be discussed in Section \ref{sectionextension}. For clarity of the description, $k$ refers to the pursuit coalition $k$. Given $\Omega$ and $\mathcal{T}$, namely, specifying $\Omega_{\rm tar}$ and $\Omega_{\rm play}$, the following problems will be addressed in this section. \begin{pbm}\label{p31}\rm Consider $k$ and $E_j$. Given $\mathcal{X}_k^0$, find the region $\mathcal{W}_P^{n_k}(\mathcal{X}^0_k)\subset\Omega_{\rm play}$ in which if $E_j$ initially lies, there exists a joint pursuit control input $\mathcal{P}_k\in\mathcal{U}^{n_k}$ such that $E_j$ can be captured before entering $\Omega_{\rm tar}$ regardless of its evasion control input $\mathbf{u}_{E_j}\in\mathcal{U}$. \end{pbm} \begin{pbm}\label{p32}\rm Consider $k$ and $E_j$. Given $\mathcal{X}_k^0$, find the region $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)\subset\Omega_{\rm play}$ in which if $E_j$ initially lies, an evasion control input $\mathbf{u}_{E_j}\in\mathcal{U}$ exists such that $E_j$ can enter $\Omega_{\rm tar}$ without being captured regardless of the joint pursuit control $\mathcal{P}_k\in\mathcal{U}^{n_k}$. 
\end{pbm} Thus, $\mathcal{W}_P^{n_k}(\mathcal{X}^0_k)$ and $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)$ are the respective winning regions for $k$ and $E_j$, and we call them the PWR and the EWR, respectively. According to Isaacs' book \cite{Is1967Diff}, the barrier, denoted by $\mathcal{B}^{n_k}(\mathcal{X}_k^0)$, is the surface separating $\mathcal{W}_P^{n_k}(\mathcal{X}^0_k)$ from $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)$, on which no team can guarantee its own winning. In this case, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)$ is a curve, and $\mathcal{B}^{n_k}(\mathcal{X}_k^0)\cup\mathcal{W}_P^{n_k}(\mathcal{X}_k^0)\cup\mathcal{W}_E^{n_k}(\mathcal{X}_k^0)=\Omega_{\rm play}$. Unless needed for clarity, the initial condition occurring in the expressions of the barrier and winning regions will be dropped from now on. More intuitively, given $\mathcal{X}_k^0$, the winning region $\mathcal{W}_P^{n_k}$ can be interpreted as the capturable region of $k$ when facing one evader, while the winning region $\mathcal{W}_E^{n_k}$ corresponds to its uncapturable region. Note that once $\mathcal{B}^{n_k}$ is obtained, $\mathcal{W}_P^{n_k}$ and $\mathcal{W}_E^{n_k}$, split by $\mathcal{B}^{n_k}$, follow immediately: the region closer to $\mathcal{T}$ is $\mathcal{W}_E^{n_k}$, and the other is $\mathcal{W}_P^{n_k}$. Thus, the primary focus in this section is to construct the barrier $\mathcal{B}^{n_k}$. Regarding notation, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)$ is the barrier determined by $k$ with initial position $\mathcal{X}_k^0$ and $n_k$ pursuers. More generally, $\mathcal{B}^{n_k-1}(\mathcal{X}_k^0\setminus\mathbf{x}_{P_i}^0)$ denotes the barrier determined by a pursuit coalition with initial position $\mathcal{X}_k^0\setminus\mathbf{x}_{P_i}^0$ and $n_k-1$ pursuers, where $\mathcal{X}_k^0\setminus\mathbf{x}_{P_i}^0$ denotes what remains of $\mathcal{X}_k^0$ when $\mathbf{x}_{P_i}^0$ is removed. Similar notation is also used for $\mathcal{W}_P^{n_k}$ and $\mathcal{W}_E^{n_k}$. Let $\breve{\mathcal{B}}^{n_k},\breve{\mathcal{W}}_P^{n_k}$ and $\breve{\mathcal{W}}_E^{n_k}$ respectively denote the barrier, PWR, and EWR of $k$ when the boundary of $\Omega_{\rm play}$ is ignored, which will be used to construct $\mathcal{B}^{n_k},\mathcal{W}_P^{n_k}$ and $\mathcal{W}_E^{n_k}$. These three new symbols carry the same meanings, referencing conventions and notational forms as $\mathcal{B}^{n_k},\mathcal{W}_P^{n_k}$ and $\mathcal{W}_E^{n_k}$, respectively; the only difference is that they are used without considering the boundary of $\Omega_{\rm play}$. They are introduced only to make the proofs clearer. For clarity, all pursuers in $k$ are classified into two categories according to their relative positions, a classification which plays a crucial role in the barrier construction. \begin{defi}[Active and Inactive Pursuers]\label{active}\rm For a pursuer $P_i(i\in\mathcal{I}_k)$ in $k$, if there exists at least one point in $\mathcal{T}$ that $P_i$ can reach before all other pursuers in $k$, we say that $P_i$ is an active pursuer in $k$. Otherwise, we say that $P_i$ is an inactive pursuer in $k$. \end{defi} \begin{rek}\label{rek31}\rm As illustrated below, whether a pursuer is active or inactive in a pursuit coalition depends precisely on whether it contributes to the barrier construction of this coalition; in brief, contributing means being active. Note that a pursuer that is inactive in one pursuit coalition may be active in another pursuit coalition, and vice versa.
\end{rek} Next, we introduce a critical payoff function, which is employed to construct the barrier and provides strategies for the players. \begin{defi}[Payoff Function]\label{payoffADR}\rm For $k$ and $E_j$, if $E_j$ succeeds in reaching $\mathcal{T}$, take the distance of $E_j$ to the closest pursuer exactly when $E_j$ arrives at $\mathcal{T}$ as the payoff function. This payoff function $J$ and the associated value function $V$ are respectively given by \begin{equation}\begin{aligned}\label{valueADR1v1} &J=\min_{i\in\mathcal{I}_k}\|\mathbf{x}_{P_{i}}(t_1)-\mathbf{x}_{E_j}(t_1)\|_2,V=\min_{\mathcal{P}_k\in\mathcal{U}^{n_k}}\max_{\mathbf{u}_{E_j}\in\mathcal{U}}J \end{aligned}\end{equation} where $t_1$ is the first arrival time when $E_j$ reaches $\mathcal{T}$. \end{defi} The above payoff function, also called the safe distance on arrival, can be interpreted as follows: $E_j$ desires to reach $\mathcal{T}$ under the safest condition, while $k$ wants to approach $E_j$ as closely as possible even though capture cannot be guaranteed. \subsection{Two Pursuers Versus One Evader}\label{section2v1} We begin the discussion of the barrier construction by focusing on an important class of RA subgames with two pursuers and one evader, namely, when $k$ contains two pursuers $P_{i1}$ and $P_{i2}$. Focusing on this special case enables us to develop a scalable and analytical barrier, while also providing key insights into the barrier construction for general pursuit coalitions. When $\Omega_{\rm play}$ is square, the barrier $\mathcal{B}^2$ can be computed by the method proposed in \cite{Yan2017Rea}. However, in our setting $\Omega_{\rm play}$ is a general convex region rather than only a square, which is beyond the scope of that work yet quite common in general RA games. Our current work also allows the pursuers to start the game from $\Omega_{\rm tar}$, as Section \ref{sectionextension} shows, which is not considered in \cite{Yan2017Rea}. Next, we present a necessary condition that the optimal trajectories should satisfy if the players adopt their optimal strategies from (\ref{valueADR1v1}). \begin{lema}[Optimal Trajectories]\label{optstraight1v1}\rm For ${P_i}$ and $E_j$, consider the payoff function (\ref{valueADR1v1}) when only $\mathbf{x}_{P_i}^0(i\in\mathcal{I}_k)$ is considered in $\mathcal{X}_k^0$. Then, the optimal trajectories for $P_i$ and $E_j$ are both straight lines. \emph{Proof:} The Hamiltonian function for this problem is $H=v_P\mathbf{u}_{P_{i}}\lambda_1^\mathsf{T}+v_E\mathbf{u}_{E_j}\lambda_2^\mathsf{T}$, where $\lambda_1\in\mathbb{R}^2$ and $\lambda_2\in\mathbb{R}^2$ are costate vectors. Therefore, based on the classical Isaacs' method, the optimal controls $\mathbf{u}_{P_i}^*$ and $\mathbf{u}_{E_j}^*$ satisfy\begin{equation}\label{1v1Hamiltonnece} \mathbf{u}_{P_i}^*= -\frac{\lambda_1}{\|\lambda_1\|_2},\mathbf{u}_{E_j}^* = \frac{\lambda_2}{\|\lambda_2\|_2},\dot{\lambda}_1=0,\dot{\lambda}_2=0. \end{equation} Thus, the optimal controls $\mathbf{u}_{P_i}^*$ and $\mathbf{u}_{E_j}^*$ are time-invariant, and their optimal trajectories are straight lines. \qed \end{lema} First, consider the case when $k$ contains only one pursuer, i.e., $\mathcal{X}_k^0=\mathbf{x}^0_{P_i}(1\leq i\leq N_p)$. \begin{thom}[Barrier for One Pursuer]\label{barrierone}\rm Consider the system (1) satisfying Assumptions \ref{isolateinitial}, \ref{constrainedinitial} and \ref{speedratio}.
If a pursuit coalition $k$ only contains one pursuer $P_i$, then $\mathcal{B}^1(\mathcal{X}_k^0)=\breve{\mathcal{B}}^1\cap\Omega_{\rm play}$, where $\breve{\mathcal{B}}^1=\cup_{s=1}^3\breve{\mathcal{B}}_s^1$ and $\breve{\mathcal{B}}_s^1=F_s^1(\mathbf{x}^0_{P_i})$ for all $s=1,2,3$. \end{thom} \emph{Proof:} Since $k$ only contains one pursuer $P_i$, according to \autoref{binary coalitions}, we have $\mathcal{X}_k^0=\mathbf{x}^0_{P_i}$, $n_k=1$, and $\mathcal{I}_k=m_1=i$. \autoref{constrainedinitial} confines $\mathbf{x}^0_{P_i}$ in $\Omega_{\rm play}\cup\mathcal{T}$. First, we do not consider the boundary of $\Omega_{\rm play}$, namely, focus on the computation of $\breve{\mathcal{B}}^1$ defined in Section \ref{noProblem Formulation} which is proved to consist of three parts $\breve{\mathcal{B}}^1_1,\breve{\mathcal{B}}^1_2$ and $\breve{\mathcal{B}}^1_3$ in the following discussion, as depicted in Fig. \ref{figT:3}(a). Assume that $E_j$ can succeed to reach $\mathcal{T}$, and denote $E_j$'s optimal target point (OTP) in $\mathcal{T}$ by $\bm{p}^*=(x_p^*,0)$ such that $J$ in (\ref{valueADR1v1}) is maximized. It can be seen that in this case, (\ref{valueADR1v1}) is simplified to \begin{equation}\begin{aligned}\label{onepursuerpayoffAWR} &J=\|\mathbf{x}_{P_i}(t_1)-\mathbf{x}_{E_j}(t_1)\|_2,V=\min_{\mathbf{u}_{P_i}\in\mathcal{U}}\max_{\mathbf{u}_{E_j}\in\mathcal{U}}J \end{aligned}\end{equation} where $t_1$ is the first arrival time when $E_j$ reaches $\mathcal{T}$. It follows from \autoref{optstraight1v1} that for the payoff function (\ref{onepursuerpayoffAWR}), the optimal trajectories for $P_i$ and $E_j$ are both straight lines. Also note that the non-anticipative information structure stated in Section \ref{assumtionsection} implies that $P_i$ knows $E_j$'s current input (i.e., direction of motion). Since $P_i$ aims to minimize $J$ in (\ref{onepursuerpayoffAWR}), the optimal strategy for $P_i$ is to move towards the same target point in $\mathcal{T}$ as $E_j$ does, as Fig. \ref{figT:3}(a) illustrates. Hence, if $E_j$ selects $\bm{p}$ in $\mathcal{T}$ to go, the payoff function $J$ in (\ref{onepursuerpayoffAWR}) is equivalent to the function\begin{equation}\label{function1v1} G_1(x_p)=\|\bm{p}-\mathbf{x}_{P_i}^0\|_2-\frac{\|\bm{p}-\mathbf{x}_{E_j}^0\|_2}{\alpha},\bm{p}=(x_p,0)\in\mathcal{T} \end{equation} which is $P_i$'s distance to $E_j$ exactly when $E_j$ arrives at $\bm{p}$, if $P_i$ and $E_j$ both move directly towards $\bm{p}$. Notice that $0\leq x_p\leq l$, where $l$ is the length of $\mathcal{T}$. Note that $E_j$ strives to maximize $J$, i.e., $G_1(x_p)$, as (\ref{onepursuerpayoffAWR}) shows. Thus, if $P_i$ and $E_j$ both adopt their optimal strategies, $x_p^*$ must be a maximum point of $G_1(x_p)$ for $x_p\in[0,l]$. Note that the monotony of $G_1(x_p)$ has been shown in Lemma \ref{monotonic function1}. If the maximum point $x_p^*$ of $G_1(x_p)$ for $x_p\in[0,l]$ occurs in the interior of $[0,l]$, i.e., $(0,l)$, then $x_p^*$ is an extreme point of $G_1(x_p)$, that is, \begin{equation}\label{FNC} \frac{{\rm d}G_1(x_p^*)}{{\rm d}x_p}=0\Rightarrow\frac{x_p^*-x_{P_i}^0}{\|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2}=\frac{x_p^*-x_{E_j}^0}{\alpha\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2}. \end{equation} \hspace{0.2 in}Furthermore, assume $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^1$, as Fig. \ref{figT:3}(a) shows. Then, $E_j$ will be captured by $P_i$ exactly when reaching $\mathcal{T}$ under two players' optimal strategies from (\ref{onepursuerpayoffAWR}), namely, $V=0$. 
Thus, $P_i$ and $E_j$ will reach $\bm{p}^*$ simultaneously as follows: \begin{equation}\label{distanceunequ} G_1(x_p^*)=0\Rightarrow\|\bm{p}^*-\mathbf{x}_{E_j}^0\|_2=\alpha\|\bm{p}^*-\mathbf{x}_{P_i}^0\|_2. \end{equation} This feature (\ref{distanceunequ}) reflects that when $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^1$, under the two players' optimal strategies from (\ref{onepursuerpayoffAWR}), no player can guarantee its winning, namely, the capture and the arrival happen at the same time. The following analysis will consider three cases separately: $x_p^*\in(0,l),x_p^*=0$ and $x_p^*=l$, due to their different features. \begin{figure} \caption{The barrier and two winning regions determined by $P_i$ when playing with $E_j$. $(a)$ Without considering the boundary of $\Omega_{\rm play}$. $(b)$ Considering the boundary of $\Omega_{\rm play}$.\label{figT:3}} \end{figure} First, consider $x_p^*\in(0,l)$. It has been shown above that in this case, $x_p^*$ is an extreme point of $G_1(x_p)$. Denote this part of $\breve{\mathcal{B}}^1$ by $\breve{\mathcal{B}}^1_2$, which is shown in black in Fig. \ref{figT:3}(a). Next, we compute the expression of $\breve{\mathcal{B}}^1_2$. Substituting (\ref{distanceunequ}) into (\ref{FNC}) yields\begin{equation}\label{OTPunequ11} \alpha^2(x_p^*-x_{P_i}^0)=x_p^*-x_{E_j}^0\Rightarrow x_p^*=\frac{x_{E_j}^0-\alpha^2x_{P_i}^0}{1-\alpha^2}. \end{equation} Then, substituting $x_p^*$ given by (\ref{OTPunequ11}) into (\ref{distanceunequ}) leads to \begin{equation}\begin{aligned}\label{barrier1v1explicit} &\Big\|\big(\frac{x_{E_j}^0-\alpha^2x_{P_i}^0}{1-\alpha^2},0\big)-\mathbf{x}_{E_j}^0\Big\|_2^2\\ &\qquad\qquad\qquad\qquad=\alpha^2\Big\|\big(\frac{x_{E_j}^0-\alpha^2x_{P_i}^0}{1-\alpha^2},0\big)-\mathbf{x}_{P_i}^0\Big\|_2^2\Rightarrow\\ &(x_{E_j}^0-x_{P_i}^0)^2+(1-1/\alpha^2)(y_{E_j}^0)^2+(1-\alpha^2)(y_{P_i}^0)^2 =0 \end{aligned}\end{equation} which characterizes the relationship between the initial positions of $P_i$ and $E_j$ when $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^1_2$. Since $x_p^*\in(0,l)$, (\ref{OTPunequ11}) implies \begin{equation}\begin{aligned}\label{barrier1v1explicitinterral} x_{E_j}^0\in\big(\alpha^2x_{P_i}^0,(1-\alpha^2)l+\alpha^2x_{P_i}^0\big). \end{aligned}\end{equation} Also note that \autoref{constrainedinitial} confines $\mathbf{x}^0_{E_j}$ to $\Omega_{\rm play}$, that is, $y_{E_j}^0<0$. Hence, given $\mathbf{x}_{P_i}^0$, by taking all positions $\mathbf{x}_{E_j}^0$ satisfying (\ref{barrier1v1explicit}) and (\ref{barrier1v1explicitinterral}) with $y_{E_j}^0<0$, all initial positions of $E_j$ lying in $\breve{\mathcal{B}}^1_2$ are found. Equivalently, these initial positions of $E_j$ form $\breve{\mathcal{B}}^1_2$. Thus, by replacing $\mathbf{x}_{E_j}^0$ with a general variable $\mathbf{z}\in\mathbb{R}^2$, it follows from (\ref{barrier1v1explicit}), (\ref{barrier1v1explicitinterral}) and $y_{E_j}^0<0$ that $\breve{\mathcal{B}}_2^1$ is as follows: \begin{equation}\begin{aligned}\label{barrier1v1expression11} \breve{\mathcal{B}}_2^1=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_{P_i}^0)^2+(1-1/\alpha^2)y^2\\ +(1-\alpha^2)(y_{P_i}^0)^2 =0,x\in(k_1,k_2),y<0\big\} \end{aligned}\end{equation} where $k_1=\alpha^2x_{P_i}^0$ and $k_2=(1-\alpha^2)l+\alpha^2x_{P_i}^0$. For clarity, it can be seen that $\breve{\mathcal{B}}_2^1$ can be expressed by a base curve in (\ref{base11}), that is, $\breve{\mathcal{B}}_2^1=F_2^1(\mathbf{x}^0_{P_i})$. Then, we focus on the case $x_p^*=0$, namely, $\bm{p}^*=\bm{m}$. Denote this part of $\breve{\mathcal{B}}^1$ by $\breve{\mathcal{B}}^1_1$, which is the left orange curve in Fig.
\ref{figT:3}(a). Next, we compute the expression of $\breve{\mathcal{B}}^1_1$. In this scenario, (\ref{distanceunequ}) still holds and becomes \begin{equation}\begin{aligned}\label{barrier1v1expression12233} \|\mathbf{x}_{E_j}^0-\bm{m}\|_2=\alpha\|\mathbf{x}_{P_i}^0-\bm{m}\|_2. \end{aligned}\end{equation} Note that $x_{E_j}^0\leq\alpha^2x_{P_{i}}^0$ is straightforward based on the interval (\ref{barrier1v1explicitinterral}) of $x_{E_j}^0$ for $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^1_2$ and the monotony of $G_1(x_p)$ given by \autoref{monotonic function1}, as Fig. \ref{figT:3}(a) shows. Thus, by replacing $\mathbf{x}_{E_j}^0$ with a general variable $\mathbf{z}\in\mathbb{R}^2$, it follows from (\ref{barrier1v1expression12233}) and $x_{E_j}^0\leq\alpha^2x_{P_{i}}^0$ that $\breve{\mathcal{B}}_1^1$ is as follows:\begin{equation}\begin{aligned}\label{barrier1v1b123} \breve{\mathcal{B}}_1^1=\big\{&\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{m}\|_2\\ &=\alpha\|\mathbf{x}_{P_i}^0-\bm{m}\|_2,x\leq k_1,y<0\big\} \end{aligned}\end{equation} which can also be represented by a base curve in (\ref{base11}), that is, $\breve{\mathcal{B}}_1^1=F_1^1(\mathbf{x}^0_{P_i})$. If $x_p^*=l$, namely, $\bm{p}^*=\bm{n}$, denote this part of $\breve{\mathcal{B}}^1$ by $\breve{\mathcal{B}}^1_3$ which is the right orange curve in Fig. \ref{figT:3}(a). Similar to the case $x_p^*=0$, $\breve{\mathcal{B}}_3^1$ is as follows:\begin{equation}\begin{aligned}\label{barrier1v1b12333} \breve{\mathcal{B}}_3^1=\big\{&\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{n}\|_2\\ &=\alpha\|\mathbf{x}_{P_i}^0-\bm{n}\|_2,x\ge k_2,y<0\big\}. \end{aligned}\end{equation} Thus, $\breve{\mathcal{B}}_3^1=F_3^1(\mathbf{x}^0_{P_i})$ can be obtained. Therefore, we have constructed the barrier $\breve{\mathcal{B}}^1=\cup_{s=1}^3\breve{\mathcal{B}}_s^1$, without considering the boundary of $\Omega_{\rm play}$, shown in Fig. \ref{figT:3}(a). Then, consider the effect of the boundary of $\Omega_{\rm play}$. Since the optimal trajectories for two players are both straight lines and $\Omega_{\rm play}$ is convex, it can be concluded that $\mathcal{B}^1=\breve{\mathcal{B}}^1\cap\Omega_{\rm play}$, as Fig. \ref{figT:3}(b) shows. The shape of $\Omega_{\rm play}$ in Fig. \ref{figT:3}(b), which is general, is introduced just for showing that this theorem is applied for any convex $\Omega_{\rm play}$. As depicted in Fig. \ref{figT:3}(b), since $\mathcal{W}_P^1$ and $\mathcal{W}_E^1$ are two subregions of $\Omega_{\rm play}$ and separated by $\mathcal{B}^1$, and $\mathcal{W}_E^1$ is closer to $\mathcal{T}$ than $\mathcal{W}_P^1$, then the green region is the PWR $\mathcal{W}_P^1$ and the red region is the EWR $\mathcal{W}_E^1$. Thus, we finish the proof. \qed Next, we consider the case when $k$ contains two pursuers, i.e., $\mathcal{X}_k^0=\big\{\mathbf{x}^0_{P_{i1}},\mathbf{x}^0_{P_{i2}}\big\}(1\leq i1,i2\leq N_p,i1\neq i2)$. The main result of this section is presented below, which gives the analytical barrier for two pursuers case. \begin{thom}[Barrier for Two Pursuers]\label{barrier two}\rm Consider the system (1) and suppose that Assumptions \ref{isolateinitial}, \ref{constrainedinitial} and \ref{speedratio} hold. If a pursuit coalition $k$ contains two pursuers $P_{i1}$ and $P_{i2}$, its barrier can be analytically calculated as follows: \begin{itemize} \item[a] (Only one active pursuer). If only ${P_i}$ is an active pursuer, $\mathcal{B}^2(\mathcal{X}_k^0)=\mathcal{B}^1(\mathbf{x}^0_{P_i})$ where $i=i1$ or $i2$. 
\\ \item[b] (Two active pursuers). If $P_{i1}$ and ${P_{i2}}$ are both active pursuers, assume $x^0_{P_{i1}}<x^0_{P_{i2}}$. Then, $\mathcal{B}^2(\mathcal{X}_k^0)=\breve{\mathcal{B}}^2\cap\Omega_{\rm play}$, where $\breve{\mathcal{B}}^2=\cup_{s=1}^5\breve{\mathcal{B}}_s^2$ and $\breve{\mathcal{B}}_s^2=F_s^2(\mathbf{x}^0_{P_{i1}},\mathbf{x}^0_{P_{i2}})$ for all $s=1,...,5$. \end{itemize} \end{thom} \emph{Proof:} Since $k$ contains two pursuers $P_{i1}$ and $P_{i2}$, according to \autoref{binary coalitions}, we have $\mathcal{X}_k^0=\{\mathbf{x}^0_{P_{i1}},\mathbf{x}^0_{P_{i2}}\}$, $n_k=2$, and $\mathcal{I}_k=\{m_1,m_2\}=\{i1,i2\}$. Similar to the proof of \autoref{barrierone}, ignore the boundary of $\Omega_{\rm play}$ and consider $\breve{\mathcal{B}}^2$ first, as shown in Fig. \ref{figT:4}(a). Part (a): It suffices to consider the case where $P_{i1}$ is an active pursuer and $P_{i2}$ is an inactive pursuer. Since $P_{i2}$ is an inactive pursuer, it follows from \autoref{active} that\begin{equation}\label{theromequ12} \|\mathbf{x}^0_{P_{i1}}-\bm{p}\|_2\leq\|\mathbf{x}^0_{P_{i2}}-\bm{p}\|_2 \end{equation} holds for all $\bm{p}\in\mathcal{T}$. Hence, $P_{i1}$ can reach any point in $\mathcal{T}$ no later than $P_{i2}$, and this feature guarantees that $P_{i1}$ can determine the barrier $\breve{\mathcal{B}}^2$ alone. Thus, $\breve{\mathcal{B}}^2=\breve{\mathcal{B}}^1(\mathbf{x}_{P_{i1}}^0)$. Part (b): For two active pursuers case, we will show that $\breve{\mathcal{B}}^2$ includes five parts $\breve{\mathcal{B}}^2_s(s=1,...,5)$, as depicted in Fig. \ref{figT:4}(a). Since two pursuers are both active, it follows from \autoref{active} that there must exist two points $\bm{p}_1$ and $\bm{p}_2$ in $\mathcal{T}$ such that\begin{equation}\label{theromequ1233} \|\mathbf{x}^0_{P_{i1}}-\bm{p}_1\|_2<\|\mathbf{x}^0_{P_{i2}}-\bm{p}_1\|_2,\|\mathbf{x}^0_{P_{i2}}-\bm{p}_2\|_2<\|\mathbf{x}^0_{P_{i1}}-\bm{p}_2\|_2. \end{equation} The former in (\ref{theromequ1233}) implies that $\breve{\mathcal{B}}^2$ depends on $P_{i1}$, and the latter in (\ref{theromequ1233}) guarantees that $\breve{\mathcal{B}}^2$ depends on $P_{i2}$. Thus, in this case, $\breve{\mathcal{B}}^2$ depends on both two pursuers. Naturally, there exists a unique point $\bm{p}_{c}=(x_{c},0)$ in $\mathcal{T}$ that two pursuers can reach at the same time, as Fig. \ref{figT:4}(a) shows. Note that in this part, $x^0_{P_{i1}}\neq x^0_{P_{i2}}$; otherwise, it can be easily verified that one pursuer can reach any point in $\mathcal{T}$ no later than the other, which corresponds to the Part (a). Without loss of generality, assume $x^0_{P_{i1}}<x^0_{P_{i2}}$ as this theorem states. Thus, $\bm{p}_{c}$ is given by\begin{equation}\begin{aligned}\label{pcv2v1} \|\mathbf{x}_{P_{i1}}^0-\bm{p}_{c}\|_2=\|\mathbf{x}_{P_{i2}}^0-\bm{p}_{c}\|_2\Rightarrow x_{c}=\frac{\|\mathbf{x}_{P_{i2}}^0\|_2^2-\|\mathbf{x}_{P_{i1}}^0\|_2^2}{2(x^0_{P_{i2}}-x^0_{P_{i1}})}. \end{aligned}\end{equation} If $x_c=0$, i.e., $\bm{p}_c=\bm{m}$, it can be seen from Fig. \ref{figT:4}(a) that $P_{i2}$ can reach any point in $\mathcal{T}$ no later than $P_{i1}$. However, $P_{i1}$ is an active pursuer, so $x_c\neq0$. Similarly, $x_c\neq l$ can be obtained. Thus, we have $0<x_c<l$. Take a point $\bm{p}=(x_p,0)$ in $\mathcal{T}$. It can be observed in Fig. \ref{figT:4}(a) that if $x_p\in[0,x_c)$, $\|\mathbf{x}^0_{P_{i1}}-\bm{p}\|_2<\|\mathbf{x}^0_{P_{i2}}-\bm{p}\|_2$, and if $x_p\in(x_c,l]$, $\|\mathbf{x}^0_{P_{i2}}-\bm{p}\|_2<\|\mathbf{x}^0_{P_{i1}}-\bm{p}\|_2$. 
Assume $E_j$ can succeed to reach $\mathcal{T}$, and denote $E_j$'s OTP in $\mathcal{T}$ by $\bm{p}^*=(x_p^*,0)$ such that $J$ in (\ref{valueADR1v1}) is maximized. It can be noted that in this case, (\ref{valueADR1v1}) becomes \begin{equation}\begin{aligned}\label{valueADR2v1barrier} J&=\min_{i=i1,i2}\|\mathbf{x}_{P_i}(t_1)-\mathbf{x}_{E_j}(t_1)\|_2\\ V&=\min_{\mathbf{u}_{P_{i1}},\mathbf{u}_{P_{i2}}\in\mathcal{U}}\max_{\mathbf{u}_{E_j}\in\mathcal{U}}J \end{aligned}\end{equation} where $t_1$ is the first arrival time when $E_j$ reaches $\mathcal{T}$. Furthermore, assume $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^2$. Therefore, under three players' optimal strategies from (\ref{valueADR2v1barrier}), $E_j$ will be captured exactly when reaching $\mathcal{T}$, namely, $V=0$. If $x_p^*\in[0,x_c)$, as Fig. \ref{figT:4}(a) shows, this part of $\breve{\mathcal{B}}^2$ is determined by $P_{i1}$ alone, as $P_{i1}$ can reach $\bm{p}^*$ before $P_{i2}$. Thus, by following the proof of \autoref{barrierone}, it can be obtained that if $x_p^*=0$, i.e., $\bm{p}^*=\bm{m}$, and denote the related barrier by $\breve{\mathcal{B}}^2_1$, then $\breve{\mathcal{B}}^2_1=\breve{\mathcal{B}}^1_1(\mathbf{x}_{P_{i1}}^0)$, which is the leftmost orange curve in Fig. \ref{figT:4}(a) and given by (\ref{barrier1v1b123}) with $\mathbf{x}_{P_i}^0$ replaced by $\mathbf{x}_{P_{i1}}^0$. Thus, it follows from (\ref{basecurves22}) and \autoref{barrierone} that $\breve{\mathcal{B}}^2_1$ can be expressed by a base curve, i.e., $\breve{\mathcal{B}}^2_1=F_1^2(\mathbf{x}_{P_{i1}}^0,\mathbf{x}_{P_{i2}}^0)$. Consider the remainder $x_p^*\in(0,x_c)$ and denote the related barrier by $\breve{\mathcal{B}}^2_2$, which is the leftmost black curve in Fig. \ref{figT:4}(a). Next, we compute the expression of $\breve{\mathcal{B}}^2_2$. Since $x_p^*\in(0,x_c)$ and $J$ in (\ref{valueADR2v1barrier}) only depends on $P_{i1}$, it follows from the proof of \autoref{barrierone} that (\ref{OTPunequ11}) and (\ref{barrier1v1explicit}) still hold for $P_{i1}$ and $E_j$, that is, $x_p^*=\frac{x_{E_j}^0-\alpha^2x_{P_{i1}}^0}{1-\alpha^2}$ and \begin{equation}\begin{aligned}\label{barrier2v1explicitDi1} (x_{E_j}^0-x_{P_{i1}}^0)^2+(1-1/\alpha^2)(y_{E_j}^0)^2+(1-\alpha^2)(y_{P_{i1}}^0)^2 =0 \end{aligned}\end{equation} representing the relationship between the initial positions of $P_{i1}$ and $E_j$ when $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^2_2$. From $x_p^*\in(0,x_c)$, we have \begin{equation}\begin{aligned}\label{barrier2v1intervalb22} x_{E_j}^0\in\big(\alpha^2x_{P_{i1}}^0,(1-\alpha^2)x_c+\alpha^2x_{P_{i1}}^0\big). \end{aligned}\end{equation} Hence, by replacing $\mathbf{x}_{E_j}^0$ with a general variable $\mathbf{z}\in\mathbb{R}^2$, it follows from (\ref{barrier2v1explicitDi1}), (\ref{barrier2v1intervalb22}) and $y_{E_j}^0<0$ that $\breve{\mathcal{B}}_2^2$ is as follows: \begin{equation}\begin{aligned}\label{barrier2v1B22} \breve{\mathcal{B}}_2^2=\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_{P_{i1}}^0)^2+(1-1/\alpha^2)y^2\\ +(1-\alpha^2)(y_{P_{i1}}^0)^2 =0,x\in(k_3,k_4),y<0\big\} \end{aligned}\end{equation} where $k_3=\alpha^2x_{P_{i1}}^0$ and $k_4=(1-\alpha^2)x_c+\alpha^2x_{P_{i1}}^0$. For clarity, it can be seen that $\breve{\mathcal{B}}_2^2$ can be expressed by a base curve in (\ref{basecurves22}), that is, $\breve{\mathcal{B}}^2_2=F_2^2(\mathbf{x}_{P_{i1}}^0,\mathbf{x}_{P_{i2}}^0)$. 
\begin{figure} \caption{The barrier and two winning regions determined by two active pursuers $P_{i1}$ and $P_{i2}$ when playing with $E_j$. $(a)$ Without considering the boundary of $\Omega_{\rm play}$. $(b)$ Considering the boundary of $\Omega_{\rm play}$.\label{figT:4}} \end{figure} Analogously, when $P_{i2}$ can reach $\bm{p}^*$ before $P_{i1}$, i.e., $x_p^*\in(x_c,l]$, as Fig. \ref{figT:4}(a) shows, $\breve{\mathcal{B}}^2_4=F_4^2(\mathbf{x}_{P_{i1}}^0,\mathbf{x}_{P_{i2}}^0)$ and $\breve{\mathcal{B}}^2_5=F_5^2(\mathbf{x}_{P_{i1}}^0,\mathbf{x}_{P_{i2}}^0)$ can be derived, respectively, for $x_p^*\in(x_c,l)$ and $x_p^*=l$, where $\breve{\mathcal{B}}^2_4$ is the rightmost black curve and $\breve{\mathcal{B}}^2_5$ is the rightmost orange curve. Finally, consider the case $x_p^*=x_c$, i.e., $\bm{p}^*=\bm{p}_c$, which corresponds to the middle orange barrier $\breve{\mathcal{B}}^2_3$ in Fig. \ref{figT:4}(a). Note that (\ref{distanceunequ}) still holds for $P_{i1}$ and $E_j$ and can be rewritten as \begin{equation}\label{distanceunequ22} \|\bm{p}_c-\mathbf{x}_{E_j}^0\|_2=\alpha\|\bm{p}_c-\mathbf{x}_{P_{i1}}^0\|_2. \end{equation} Naturally, $x_{E_j}^0$ should lie in $[k_4,k_5]$, where $k_5=(1-\alpha^2)x_{c}+\alpha^2x_{P_{i2}}^0$, which is precisely the boundary of the $x$-range of $\breve{\mathcal{B}}^2_4$. Thus, by replacing $\mathbf{x}_{E_j}^0$ with a general variable $\mathbf{z}\in\mathbb{R}^2$, it follows from (\ref{distanceunequ22}) and $y_{E_j}^0<0$ that $\breve{\mathcal{B}}_3^2$ is as follows: \begin{equation}\begin{aligned}\label{barrier2v1B23} \breve{\mathcal{B}}_3^2=\big\{&\mathbf{z}=(x,y)\in\mathbb{R}^2|\|\mathbf{z}-\bm{p}_c\|_2\\ &=\alpha\|\mathbf{x}_{P_{i1}}^0-\bm{p}_c\|_2,x\in[k_4,k_5],y<0\big\}. \end{aligned}\end{equation} It can also be noted that $\breve{\mathcal{B}}_3^2$ can be expressed by a base curve in (\ref{basecurves22}), that is, $\breve{\mathcal{B}}^2_3=F_3^2(\mathbf{x}_{P_{i1}}^0,\mathbf{x}_{P_{i2}}^0)$. Therefore, we have constructed the barrier $\breve{\mathcal{B}}^2=\cup_{s=1}^5\breve{\mathcal{B}}_s^2$, without considering the boundary of $\Omega_{\rm play}$, shown in Fig. \ref{figT:4}(a). Now, consider the effect of the boundary of $\Omega_{\rm play}$. Since the optimal trajectories of the players that enter the payoff function $J$ are straight lines and $\Omega_{\rm play}$ is convex, $\mathcal{B}^2=\breve{\mathcal{B}}^2\cap\Omega_{\rm play}$ holds, as Fig. \ref{figT:4}(b) shows. Intuitively, the PWR $\mathcal{W}_P^2$ is the green region, and the EWR $\mathcal{W}_E^2$ is the red region. \qed \subsection{General Pursuit Coalitions Versus One Evader}\label{sectionmultiplev1} The objective in this section is to construct the barrier for general pursuit coalitions, drawing inspiration from the results derived in the two-pursuer case. First, the barrier for a special class of pursuit coalitions, whose pursuers are all active, is constructed. Then, it is demonstrated that every pursuit coalition reduces to a unique pursuit subcoalition of this class. Finally, this pursuit subcoalition and the original pursuit coalition are proved to possess the same barrier. To this end, the definition of this special class of pursuit coalitions is formally introduced as follows. \begin{defi}[Full-Active Pursuit Coalition]\label{full active}\rm Given a pursuit coalition $k$, if for any pursuer ${P_i}(i\in\mathcal{I}_k)$ in $k$, there always exists a point in $\mathcal{T}$ that $P_i$ can reach before all other pursuers in $k$, then $k$ is called a full-active pursuit coalition. \end{defi} It can be verified that if a pursuit coalition $k$ is full-active, its members must have different $x$-coordinates.
Otherwise, assume that in $k$, $P_{i1}$ and $P_{i2}$ have the same $x$-coordinate. Thus, it can be noted that one pursuer can reach any point in $\mathcal{T}$ no later than the other one, which contradicts with the fact that two pursuers are both active. Hence, for every full-active pursuit coalition $k$ with $n_k\ge2$, introduce an auxiliary index set $\mathcal{\hat{I}}_k=\{\hat{m}_j|1\leq \hat{m}_j\leq N_p,j=1,2,...,n_k\}=\mathcal{I}_k$ such that $x_{P_{\hat{m}_i}}^0<x_{P_{\hat{m}_{i+1}}}^0$ holds for all $i=1,2,...,n_k-1$. More intuitively, we mean that $P_{\hat{m}_i}$ lies at the left side of $P_{\hat{m}_{i+1}}$ along the $x$-axis. We call $\mathcal{\hat{I}}_k$ as $x$-rank index set, and obviously it is unique. With the above $x$-rank index set $\mathcal{\hat{I}}_k$, the theorem presented below provides a scheme to construct the barrier for full-active pursuit coalitions. \begin{thom}[Barrier for Full-Active Pursuit Coalition]\label{barrierfull}\rm Consider the system (1) and suppose that Assumptions \ref{isolateinitial}, \ref{constrainedinitial} and \ref{speedratio} hold. If a pursuit coalition $k$ with the initial condition $\mathcal{X}_k^0$ and $n_k$ pursuers, is full-active, its corresponding barrier can be constructed analytically as follows:\\ (\romannumeral1). If $n_k=1$, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)$ is given by \autoref{barrierone}. If $n_k=2$, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)$ is given by \autoref{barrier two}(b).\\ (\romannumeral2). If $3\leq n_k\leq N_p$, let $\mathcal{\hat{I}}_k$ denote its $x$-rank index set. Then, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)=\breve{\mathcal{B}}^{n_k}\cap\Omega_{\rm play}$, where \begin{equation}\begin{aligned}\label{barriergeneralnv1} \breve{\mathcal{B}}^{n_k}=&\cup_{i=1}^2F^2_i(\mathbf{x}^0_{P_{\hat{m}_1}},\mathbf{x}^0_{P_{\hat{m}_2}})\cup_{i=1}^{n_k-1}F^2_3(\mathbf{x}^0_{P_{\hat{m}_i}},\mathbf{x}^0_{P_{\hat{m}_{i+1}}})\\ &\cup_{i=2}^{n_k-1}F^3(\mathbf{x}^0_{P_{\hat{m}_{i-1}}},\mathbf{x}^0_{P_{\hat{m}_i}},\mathbf{x}^0_{P_{\hat{m}_{i+1}}})\\ &\cup_{i=4}^5F_i^2(\mathbf{x}^0_{P_{\hat{m}_{n_k-1}}},\mathbf{x}^0_{P_{\hat{m}_{n_k}}}). \end{aligned}\end{equation} \end{thom} \emph{Proof:} Note that the conclusion (\romannumeral1) is straightforward. Thus, we focus on the conclusion (\romannumeral2). Construct $\breve{\mathcal{B}}^{n_k}$ first, namely, ignore the boundary of $\Omega_{\rm play}$. Since $k$ is full-active, according to \autoref{full active}, for every pursuer in $k$, there always exists a point in $\mathcal{T}$ that it can reach before all other pursuers in $k$. Thus, every pursuer in $k$ contributes to $\breve{\mathcal{B}}^{n_k}$. Next, a method is presented to construct the barrier $\breve{\mathcal{B}}^{n_k}$. First, by \autoref{barrier two}(b), construct the barrier $\breve{\mathcal{B}}^2$ determined by $P_{\hat{m}_1}$ and $P_{\hat{m}_2}$, as Fig. \ref{figT:4}(a) shows by replacing $P_{i1}$ and $P_{i2}$ with $P_{\hat{m}_1}$ and $P_{\hat{m}_2}$ respectively. Then, add the third pursuer $P_{\hat{m}_3}$ and compute the associated $\breve{\mathcal{B}}^3$ which is proved below to consist of seven parts, as Fig. \ref{figT:5}(a) shows. According to the definition of the $x$-rank index set presented above, $x^0_{P_{\hat{m}_{1}}}<x^0_{P_{\hat{m}_{2}}}<x^0_{P_{\hat{m}_{3}}}$ holds. Take two points $\bm{p}_{c1}$ and $\bm{p}_{c2}$ in $\mathcal{T}$ such that\begin{equation}\label{pcv2v1definition} \|\mathbf{x}_{P_{\hat{m}_i}}^0-\bm{p}_{ci}\|_2=\|\mathbf{x}_{P_{\hat{m}_{i+1}}}^0-\bm{p}_{ci}\|_2,i=1,2 \end{equation} holds, as depicted in Fig. 
\ref{figT:5}(a), that is, $\bm{p}_{ci}$ is the point in $\mathcal{T}$ having the same distance to $P_{\hat{m}_i}$ and $P_{\hat{m}_{i+1}}$. Thus, from (\ref{pcv2v1definition}), $\bm{p}_{ci}=(x_{ci},0)$ can be computed as follows:\begin{equation}\label{pcv2v1} x_{ci}=\frac{\|\mathbf{x}_{P_{\hat{m}_{i+1}}}^0\|_2^2-\|\mathbf{x}_{P_{\hat{m}_i}}^0\|_2^2}{2(x^0_{P_{\hat{m}_{i+1}}}-x^0_{P_{\hat{m}_i}})},i=1,2. \end{equation} Since $P_{\hat{m}_1}$ is an active pursuer, there must exist a point in $\mathcal{T}$ that $P_{\hat{m}_1}$ can reach before $P_{\hat{m}_2}$. Thus, $x_{c1}>0$, as Fig. \ref{figT:5}(a) shows. Similarly, $x_{c2}<l$ also holds. Take a point $\bm{p}=(x_p,0)$ in $\mathcal{T}$. It can be seen that if $x_p\in[0,x_{c1})$, $P_{\hat{m}_1}$ can reach $\bm{p}$ before $P_{\hat{m}_2}$, and if $x_p\in(x_{c2},l]$, $P_{\hat{m}_3}$ can reach $\bm{p}$ before $P_{\hat{m}_2}$. Since $P_{\hat{m}_2}$ is an active pursuer, there must exist a point in $\mathcal{T}$ that $P_{\hat{m}_2}$ can reach before both $P_{\hat{m}_1}$ and $P_{\hat{m}_3}$. Thus, $x_{c1}<x_{c2}$ holds. In conclusion, $0<x_{c1}<x_{c2}<l$ is derived, as Fig. \ref{figT:5}(a) illustrates. Assume that $E_j$ succeeds in reaching $\mathcal{T}$, and denote $E_j$'s OTP in $\mathcal{T}$ by $\bm{p}^*=(x_p^*,0)$ such that $J$ in (\ref{valueADR1v1}), now involving only these three pursuers, is maximized. Thus, in this case, (\ref{valueADR1v1}) becomes \begin{equation}\begin{aligned}\label{valueADR3v1payoff} J&=\min_{i=\hat{m}_1,\hat{m}_2,\hat{m}_3}\|\mathbf{x}_{P_i}(t_1)-\mathbf{x}_{E_j}(t_1)\|_2\\ V&=\min_{\mathbf{u}_{P_{\hat{m}_1}},\mathbf{u}_{P_{\hat{m}_2}}, \mathbf{u}_{P_{\hat{m}_3}}\in\mathcal{U}}\max_{\mathbf{u}_{E_j}\in\mathcal{U}}J \end{aligned}\end{equation} where $t_1$ is the first arrival time when $E_j$ reaches $\mathcal{T}$. Similarly, we further assume $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^3$. Therefore, under the four players' optimal strategies from (\ref{valueADR3v1payoff}), $E_j$ will be captured exactly when reaching $\mathcal{T}$, namely, $V=0$. \begin{figure} \caption{The barrier and two winning regions determined by three active pursuers $P_{\hat{m}_1}$, $P_{\hat{m}_2}$ and $P_{\hat{m}_3}$ when playing with $E_j$. $(a)$ Without considering the boundary of $\Omega_{\rm play}$. $(b)$ Considering the boundary of $\Omega_{\rm play}$.\label{figT:5}} \end{figure} If $x_p^*\in[0,x_{c1})$, as Fig. \ref{figT:5}(a) shows, $P_{\hat{m}_1}$ can reach $\bm{p}^*$ before both $P_{\hat{m}_2}$ and $P_{\hat{m}_3}$. Thus, this part of $\breve{\mathcal{B}}^{3}$ is determined by $P_{\hat{m}_1}$ alone, namely, $J$ only depends on $P_{\hat{m}_1}$. By following the proof of \autoref{barrier two}, the part related to $P_{\hat{m}_1}$ alone is given by $F^2_1(\mathbf{x}^0_{P_{\hat{m}_1}},\mathbf{x}^0_{P_{\hat{m}_2}})\cup F_2^2(\mathbf{x}^0_{P_{\hat{m}_1}},\mathbf{x}^0_{P_{\hat{m}_{2}}})$, respectively for $x_p^*=0$ and $x_p^*\in(0,x_{c1})$, which consists of the leftmost orange and black curves in Fig. \ref{figT:5}(a). If $x_p^*\in(x_{c1},x_{c2})$, as Fig. \ref{figT:5}(a) shows, $P_{\hat{m}_2}$ can reach $\bm{p}^*$ before both $P_{\hat{m}_1}$ and $P_{\hat{m}_3}$, implying that this part of $\breve{\mathcal{B}}^{3}$ is determined by $P_{\hat{m}_2}$ alone, namely, $J$ only depends on $P_{\hat{m}_2}$.
Thus, it follows from the proof of \autoref{barrierone} that (\ref{OTPunequ11}) and (\ref{barrier1v1explicit}) hold for $P_{\hat{m}_2}$ and $E_j$, that is, $x_p^*=\frac{x_{E_j}^0-\alpha^2x_{P_{\hat{m}_2}}^0}{1-\alpha^2}$ and \begin{equation}\begin{aligned}\label{barrier3v1explicitDim2} (x_{E_j}^0-x_{P_{\hat{m}_2}}^0)^2+(1-1/\alpha^2)(y_{E_j}^0)^2+(1-\alpha^2)(y_{P_{\hat{m}_2}}^0)^2 =0 \end{aligned}\end{equation} which characterizes the relationship between the initial positions of $P_{\hat{m}_2}$ and $E_j$ when $\mathbf{x}_{E_j}^0\in\breve{\mathcal{B}}^3$ and $x_p^*\in(x_{c1},x_{c2})$. Also note that from $x_p^*\in(x_{c1},x_{c2})$, we have \begin{equation}\begin{aligned}\label{barrier3v1intervalb33} x_{E_j}^0\in\big((1-\alpha^2)x_{c1}+\alpha^2x_{P_{\hat{m}_2}}^0,(1-\alpha^2)x_{c2}+\alpha^2x_{P_{\hat{m}_2}}^0\big). \end{aligned}\end{equation} Therefore, by replacing $\mathbf{x}_{E_j}^0$ with a general variable $\mathbf{z}\in\mathbb{R}^2$, it follows from (\ref{barrier3v1explicitDim2}), (\ref{barrier3v1intervalb33}) and $y_{E_j}^0<0$ that this part of $\breve{\mathcal{B}}^{3}$, which is the middle black curve in Fig. \ref{figT:5}(a), is given by \begin{equation}\begin{aligned}\label{barrier2v1B22} &\big\{\mathbf{z}=(x,y)\in\mathbb{R}^2|(x-x_{P_{\hat{m}_2}}^0)^2+(1-1/\alpha^2)y^2\\ &\qquad +(1-\alpha^2)(y_{P_{\hat{m}_2}}^0)^2 =0,x\in(k_7,k_8),y<0\big\} \end{aligned}\end{equation} where $k_7=(1-\alpha^2)x_{c1}+\alpha^2x_{P_{\hat{m}_2}}^0$ and $k_8=(1-\alpha^2)x_{c2}+\alpha^2x_{P_{\hat{m}_2}}^0$. From (\ref{basecurvethree}), it can be noted that (\ref{barrier2v1B22}) can be expressed by a base curve, that is, $F^3(\mathbf{x}^0_{P_{\hat{m}_1}},\mathbf{x}^0_{P_{\hat{m}_{2}}},\mathbf{x}^0_{P_{\hat{m}_3}})$. If $x_p^*\in(x_{c2},l]$, as Fig. \ref{figT:5}(a) shows, this part of $\breve{\mathcal{B}}^3$ is only related to $P_{\hat{m}_3}$. Based on the similar analysis for the case $x_p^*\in[0,x_{c1})$, this part, which consists of the rightmost black and orange curves in Fig. \ref{figT:5}(a), is given by $F_4^2(\mathbf{x}^0_{P_{\hat{m}_2}},\mathbf{x}^0_{P_{\hat{m}_3}})\cup F_5^2(\mathbf{x}^0_{P_{\hat{m}_2}},\mathbf{x}^0_{P_{\hat{m}_3}})$. Now, we consider the remainder $x_p^*=x_{c1}$ or $x_p^*=x_{c2}$. For simplicity, we only consider $x_p^*=x_{c1}$, and similar analysis can be conducted for $x_p^*=x_{c2}$. Note that $x_p^*=x_{c1}$, i.e., $\bm{p}^*=\bm{p}_{c1}$, and (\ref{distanceunequ}) holds for $P_{\hat{m}_1}$ and $E_j$. Similar to the analysis for $x_p^*=x_c$ in the proof of \autoref{barrier two}, this part of $\breve{\mathcal{B}}^3$, which is the second orange curve from the left in Fig. \ref{figT:5}(a), is given by a base curve in (\ref{basecurves22}), that is, $F^2_3(\mathbf{x}^0_{P_{\hat{m}_1}},\mathbf{x}^0_{P_{\hat{m}_{2}}})$. Thus, by connecting all these parts of $\breve{\mathcal{B}}^{3}$, we can obtain $\breve{\mathcal{B}}^{3}$ given by (\ref{barriergeneralnv1}) with $n_k=3$, shown in Fig. \ref{figT:5}(a). Similarly, add the fourth pursuer $P_{\hat{m}_4}$, and $\bm{p}_{c3}=(x_{c3},0)$ is given by (\ref{pcv2v1}) with $i=3$. Then, $0<x_{c1}<x_{c2}<x_{c3}<l$ can be obtained. In the same way, $\breve{\mathcal{B}}^{4}$ is given by (\ref{barriergeneralnv1}) with $n_k=4$. Therefore, by adding the remaining pursuers one by one along the $x$-rank index set $\mathcal{\hat{I}}_k$, $\breve{\mathcal{B}}^{n_k}$ is obtained as (\ref{barriergeneralnv1}) shows. Then, consider the effect of the boundary of $\Omega_{\rm play}$. 
Since the optimal trajectories for the players who work in the payoff function $J$, are straight lines and $\Omega_{\rm play}$ is convex, it can be obtained that $\mathcal{B}^{n_k}=\breve{\mathcal{B}}^{n_k}\cap\Omega_{\rm play}$, as Fig. \ref{figT:5}(b) shows when $n_k=3$. Naturally, the PWR $\mathcal{W}_P^3$ is the green region, and the EWR $\mathcal{W}_E^3$ is the red region. Thus we finish the proof.\qed Note that in general pursuit coalitions, there may exist some inactive pursuers who as asserted below, have no effect on the barrier construction but instead hinder our analysis. In light of this, if all active pursuers can be extracted out from any given pursuit coalition, namely, its largest full-active pursuit subcoalition, then the barrier for this general pursuit coalition can be obtained from \autoref{barrierfull}. First, the following lemma formally establishes a connection of the barrier between a general pursuit coalition and its largest full-active pursuit subcoalition. \begin{lema}[Barrier Equivalence]\label{barriereqal}\rm For any pursuit coalition $k$, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)=\mathcal{B}^{\bar{n}_k}(\mathcal{\bar{X}}_k^0)$ holds, where $\mathcal{\bar{X}}_k^0$ with $\bar{n}_k$ pursuers denotes the initial position of the unique largest full-active pursuit subcoalition of $\mathcal{X}_k^0$. \end{lema} \emph{Proof:} Obviously, we only need to focus on the case that $\mathcal{X}_k^0\setminus\mathcal{\bar{X}}^0_k$ is nonempty. Thus, $n_k>\bar{n}_k$. For simplicity, we prove $\mathcal{W}_E^{n_k}=\mathcal{W}_E^{\bar{n}_k}$. By definition, $\mathcal{W}_E^{n_k}\subseteq\mathcal{W}_E^{\bar{n}_k}$ holds, as reducing the pursuer will result in the same or expansion of the EWR. Thus we only need to verify $\mathcal{W}_E^{\bar{n}_k}\subseteq\mathcal{W}_E^{n_k}$, equivalently, $\breve{\mathcal{W}}_E^{\bar{n}_k}\subseteq\breve{\mathcal{W}}_E^{n_k}$. Suppose $\bm{p}\in\breve{\mathcal{W}}_E^{\bar{n}_k}$. Then, there must exist a point $\bm{p}_1\in\mathcal{T}$ such that\begin{equation}\label{barrierequa11} \|\bm{p}-\bm{p}_1\|_2<\alpha \|\mathbf{x}_{P_i}^0-\bm{p}_1\|_2 \end{equation} holds for all $\mathbf{x}_{P_i}^0\in\mathcal{\bar{X}}_k^0$. Assume that there exists a pursuer $P_j$ satisfying $\mathbf{x}_{P_j}^0\in\mathcal{X}_k^0\setminus\mathcal{\bar{X}}^0_k$ and \begin{equation}\label{barrierequa22} \alpha \|\mathbf{x}_{P_j}^0-\bm{p}_1\|_2\leq\|\bm{p}-\bm{p}_1\|_2 \end{equation} Then, by (\ref{barrierequa11}) and (\ref{barrierequa22}), it can be stated that\begin{equation}\label{barrierequa33} \|\mathbf{x}_{P_j}^0-\bm{p}_1\|_2<\|\mathbf{x}_{P_i}^0-\bm{p}_1\|_2 \end{equation} holds for all $\mathbf{x}_{P_i}^0\in\mathcal{\bar{X}}_k^0$. Thus, there exists a point, i.e, $\bm{p}_1$, in $\mathcal{T}$ that $P_j$ can reach before all pursuers in $\mathcal{\bar{X}}_k^0$, which contradicts with the fact that $\mathcal{\bar{X}}_k^0$ is the largest full-active pursuit subcoalition of $\mathcal{X}_k^0$. Therefore, we can conclude that (\ref{barrierequa22}) does not hold, that is, \begin{equation}\label{barrierequa44} \|\bm{p}-\bm{p}_1\|_2<\alpha \|\mathbf{x}_{P_j}^0-\bm{p}_1\|_2 \end{equation} holds for all $\mathbf{x}_{P_j}^0\in\mathcal{X}_k^0\setminus\mathcal{\bar{X}}^0_k$. Combining (\ref{barrierequa11}) with (\ref{barrierequa44}) shows that there exists a point, i.e., $\bm{p}_1$, in $\mathcal{T}$ that $\bm{p}$ can reach before all pursuers in $\mathcal{X}_k^0$ with a speed ratio $\alpha$. 
Thus, $\bm{p}\in\breve{\mathcal{W}}_E^{n_k}$, implying that $\breve{\mathcal{W}}_E^{\bar{n}_k}\subseteq\breve{\mathcal{W}}_E^{n_k}$. The existence of $\mathcal{\bar{X}}_k^0$ is straightforward, as $\mathcal{\bar{X}}_k^0$ is a subset of $\mathcal{X}_k^0$. As for the uniqueness, assume that $\mathcal{\bar{X}}_k^0$ with $\bar{n}_k$ pursuers and $\mathcal{\bar{Y}}_k^0$ with $\bar{m}_k$ pursuers are two distinct largest full-active pursuit subcoalitions of $\mathcal{X}_k^0$. Take a pursuer $P_i$ satisfying $\mathbf{x}_{P_i}^0\in\mathcal{\bar{X}}_k^0$ and $\mathbf{x}_{P_i}^0\notin\mathcal{\bar{Y}}_k^0$. Then, it follows from \autoref{full active} that $\mathbf{x}_{P_i}^0\notin\mathcal{\bar{Y}}_k^0$ implies that there is no point in $\mathcal{T}$ that $P_i$ can reach before all pursuers in $\mathcal{\bar{Y}}_k^0$. In other words, there is no point in $\mathcal{T}$ that $P_i$ can reach before all other pursuers in $\mathcal{X}_k^0$, which contradicts with the fact that $\mathbf{x}_{P_i}^0$ is an active pursuer by noting that $\mathbf{x}_{P_i}^0\in\mathcal{\bar{X}}_k^0$. Thus, $\mathcal{\bar{X}}_k^0=\mathcal{\bar{Y}}_k^0$ and the uniqueness is proved.\qed Denote by $\mathcal{R}_D$ the set of points in $\mathbb{R}^2$ that $P_{m_i}$ can reach before $P_{m_j}$, which is mathematically formulated as follows: \begin{equation}\begin{aligned}\label{pursuerdominancere} \mathcal{R}_D\big(\mathbf{x}_{P_{m_i}}^0,\mathbf{x}_{P_{m_j}}^0\big)=\big\{\mathbf{z}\in\mathbb{R}^2|\|\mathbf{z}&-\mathbf{x}_{P_{m_i}}^0\|_2\\ &<\|\mathbf{z}-\mathbf{x}_{P_{m_j}}^0\|_2\big\}. \end{aligned}\end{equation} It can be noted that $\mathcal{R}_D$ is a half plane. Next, for a pursuit coalition $k$, an algorithm is presented to find its unique largest full-active pursuit subcoalition, namely, $\mathcal{\bar{X}}_k^0$. To avoid redundant illustration, the proof of this algorithm is left in the next theorem. \begin{algm}\label{algm1}\rm (Algorithm for Finding the Largest Full-Active Pursuit Subcoalition in $k$).\\ 1: \textbf{Input:} $\mathcal{X}_k^0\in\Omega_{\rm play}^{n_k}\cup\mathcal{T}^{n_k},\bar{\mathcal{X}}_k^0\gets\emptyset,\bar{n}_k\gets0$.\\ 2: \textbf{for} $i\in\{1,...,n_k\}$ \textbf{do}\\ 3: \hspace{0.2 in}$\mathcal{T}_1=\mathcal{T}$\\ 4: \hspace{0.2 in}\textbf{for} $j\in\{1,...,n_k\},j\neq i$ \textbf{do}\\ 5: \hspace{0.4 in}$\mathcal{T}_1=\mathcal{R}_D(\mathbf{x}_{P_{m_i}}^0,\mathbf{x}_{P_{m_j}}^0)\cap\mathcal{T}_1$ \textbf{end for}\\ 6: \hspace{0.2 in}\textbf{if} $\mathcal{T}_1\neq\emptyset$ \textbf{then}\\ 7: \hspace{0.4 in}$\bar{\mathcal{X}}_k^0\gets\bar{\mathcal{X}}_k^0\cup \mathbf{x}^0_{P_{m_i}},\bar{n}_k\gets\bar{n}_k+1$ \textbf{end if} \textbf{end for}\\ 8: \textbf{Output:} $\bar{\mathcal{X}}_k^0,\bar{n}_k$. \end{algm} Now it suffices to present the main result of this section. \begin{thom}[Barrier for General Pursuit Coalition]\label{barriergeneral}\rm Consider the system (1) satisfying Assumptions \ref{isolateinitial}, \ref{constrainedinitial} and \ref{speedratio}. For a pursuit coalition $k$, its largest full-active pursuit subcoalition $\mathcal{\bar{X}}_k^0$ with $\bar{n}_k$ pursuers can be found by \autoref{algm1}. Then, $\mathcal{B}^{n_k}(\mathcal{X}_k^0)=\mathcal{B}^{\bar{n}_k}(\mathcal{\bar{X}}_k^0)$, where $\mathcal{B}^{\bar{n}_k}(\mathcal{\bar{X}}_k^0)$ can be computed by \autoref{barrierfull}. \end{thom} \emph{Proof:} According to \autoref{barriereqal} and \autoref{barrierfull}, if the validity of \autoref{algm1} can be verified, this theorem holds naturally. 
From line 3 to line 5 in \autoref{algm1}, the set of points in $\mathcal{T}$ that $P_{m_i}$ can reach before all other pursuers in $k$ is computed and denoted by $\mathcal{T}_1$, where $\mathcal{R}_D(\mathbf{x}_{P_{m_i}}^0,\mathbf{x}_{P_{m_j}}^0)$ denotes the set of points in $\mathbb{R}^2$ that $P_{m_i}$ can reach before $P_{m_j}$, as (\ref{pursuerdominancere}) shows. Thus, if $\mathcal{T}_1\neq\emptyset$, that is, there exists at least one point that $P_{m_i}$ can reach before all other pursuers in $k$, then $P_{m_i}$ must be an active pursuer in $k$. Conversely, if $\mathcal{T}_1=\emptyset$, namely, there is no point in $\mathcal{T}$ that $P_{m_i}$ can reach before all other pursuers in $k$, then $P_{m_i}$ must be an inactive pursuer in $k$.\qed \subsection{Extensions to Relaxed Initial Deployment}\label{sectionextension} In this section, the barrier construction method is extended to the more realistic initial deployment described in \autoref{relaxedinitial}, in which pursuers are allowed to initially lie in $\Omega_{\rm tar}$. For example, some pursuers may be patrolling in $\Omega_{\rm tar}$ exactly when the evaders are detected. As discussed below, by introducing a specific virtual pursuer, a bridge between this case and the known results is established. More concretely, it is shown that this virtual pursuer plays the same role as its original pursuer in the barrier construction. \begin{defi}[Virtual Pursuer]\label{virtual}\rm For every pursuer $P_i$ whose initial condition satisfies $\mathbf{x}_{P_i}^0=(x_{P_i}^0,y_{P_i}^0)\in\Omega_{\rm tar}$, introduce a virtual pursuer $\tilde{P}_i$ with the initial position $\tilde{\mathbf{x}}_{P_i}^0=(\tilde{x}_{P_i}^0,\tilde{y}_{P_i}^0)$ such that $\tilde{x}_{P_i}^0=x_{P_i}^0$ and $\tilde{y}_{P_i}^0=-y_{P_i}^0$. \end{defi} \begin{rek}\label{rek34}\rm It can be easily observed that the virtual pursuer and its original pursuer are symmetric with respect to $\mathcal{T}$; a three-pursuer pursuit coalition example is presented in Fig. \ref{figBT:1}, where $P_{\hat{m}_1}$ lies in $\Omega_{\rm tar}$ and its virtual pursuer is the red circle $\tilde{P}_{\hat{m}_1}$. Since $\Omega_{\rm play}$ and $\Omega_{\rm tar}$ may have different shapes, the virtual pursuer may lie outside $\Omega$. Recalling the barrier construction process under Assumptions \ref{isolateinitial} and \ref{constrainedinitial}, this case can also be handled in a unified way by constructing $\breve{\mathcal{B}}^{n_k}$ first and then computing $\breve{\mathcal{B}}^{n_k}\cap\Omega_{\rm play}$, as the optimal trajectories for the players are straight lines and $\Omega$ is convex. \end{rek} \begin{rek}\label{rek35}\rm Note that the virtual pursuer generated by one pursuer may coincide with another pursuer, which would violate \autoref{isolateinitial}. Thus, we assume that no virtual pursuer coincides with any other pursuer. However, a virtual pursuer is allowed to coincide with its own original pursuer, namely, when $\mathbf{x}_{P_i}^0\in\mathcal{T}$. \end{rek} Next, an important property of virtual pursuers with respect to the barrier construction is stated, which is a key step in simplifying the problem.
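As an aside, the reflection of \autoref{virtual} combines naturally with the active-pursuer test of \autoref{algm1}. The following Python sketch is purely illustrative: it assumes $\mathcal{T}=\{(x,0):0\le x\le l\}$ with $\Omega_{\rm play}$ below and $\Omega_{\rm tar}$ above the $x$-axis, and the positions and the value of $l$ are made up for the example.
\begin{verbatim}
import numpy as np

def reflect(p):
    """Virtual pursuer of Definition (Virtual Pursuer): mirror across T."""
    return np.array([p[0], -p[1]])

def active_pursuers(positions, l):
    """Sketch of Algorithm 1: indices of pursuers that can reach some point
    of T = {(x,0): 0 <= x <= l} strictly before every other pursuer."""
    active = []
    for i, a in enumerate(positions):
        lo, hi = 0.0, l                    # current interval T_1 (lines 3-5)
        for j, b in enumerate(positions):
            if j == i:
                continue
            # ||(x,0)-a|| < ||(x,0)-b||  <=>  2(b_x - a_x) x < ||b||^2 - ||a||^2
            rhs = np.dot(b, b) - np.dot(a, a)
            slope = 2.0 * (b[0] - a[0])
            if slope > 0:
                hi = min(hi, rhs / slope)
            elif slope < 0:
                lo = max(lo, rhs / slope)
            elif rhs <= 0:                 # same x, b at least as close: empty
                lo, hi = 1.0, 0.0
        if lo < hi:                        # T_1 nonempty, so P_{m_i} is active
            active.append(i)
    return active

l = 10.0
raw = [np.array([2.0, 1.0]), np.array([5.0, -2.0]), np.array([8.5, -1.5])]
# Reflect any pursuer starting in the target region (taken here as y > 0).
positions = [reflect(p) if p[1] > 0 else p for p in raw]
print(active_pursuers(positions, l))
\end{verbatim}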
\begin{lema}[Mirror Property]\label{mirror}\rm For a pursuit coalition $k$, if $\mathbf{x}_{P_i}^0\in\Omega_{\rm tar}$ $(i\in\mathcal{I}_k)$, let $\mathcal{X}_k^0(-i)$ denote the remaining pursuers in $k$ when $\mathbf{x}_{P_i}^0$ is eliminated, and $\tilde{\mathbf{x}}_{P_i}^0$ denote the virtual pursuer of $\mathbf{x}_{P_i}^0$. Then, $\mathcal{B}^{n_k}(\mathcal{X}^0_k)=\mathcal{B}^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)$. \end{lema} \begin{figure} \caption{The barrier and winning regions for relaxed initial deployment which allows the pursuer to start the game from $\Omega_{\rm tar} \label{figBT:1} \end{figure} \emph{Proof:} Suppose $\bm{p}\in\mathcal{W}_E^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)$, and then there must exist a point $\bm{p}_1\in\mathcal{T}$ such that \begin{equation}\label{mirriorequ1} \|\bm{p}-\bm{p}_1\|_2<\alpha \|\mathbf{x}_{P_j}^0-\bm{p}_1\|_2 \end{equation} holds for all $\mathbf{x}_{P_j}^0\in\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0$. According to \autoref{virtual}, we have \begin{equation}\label{mirriorequ2} \|\mathbf{x}_{P_i}^0-\bm{p}_1\|_2=\|\tilde{\mathbf{x}}_{P_i}^0-\bm{p}_1\|_2. \end{equation} Naturally, (\ref{mirriorequ1}) holds for all $\mathbf{x}_{P_j}^0\in\mathcal{X}^0_k$, which implies that $\bm{p}\in\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)$. Therefore, $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)\subset\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)$ is obtained. On the other side, suppose $\bm{p}\in\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)$ and in the similar way, $\bm{p}\in\mathcal{W}_E^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)$ can be derived, implying $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)\subset\mathcal{W}_E^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)$. Thus, $\mathcal{W}_E^{n_k}(\mathcal{X}^0_k)=\mathcal{W}_E^{n_k}(\mathcal{X}^0_k(-i)\cup\tilde{\mathbf{x}}_{P_i}^0)$, which finishes the proof by noting that $\mathcal{B}^{n_k}$ is the separating curve between $\mathcal{W}_P^{n_k}$ and $\mathcal{W}_E^{n_k}$. A three pursuer case is presented in Fig. \ref{figBT:1}, where $P_{\hat{m}_1}$ lies in $\Omega_{\rm tar}$.\qed The main result in this section is now given, which provides an efficient way to construct the barrier under Assumptions \ref{isolateinitial} and \ref{relaxedinitial} by projecting this problem into the field of the former section. \begin{corol}[Barrier for Relaxed Initial Deployment]\label{barrier relaxed}\rm Consider the system (1) and suppose that Assumptions \ref{isolateinitial}, \ref{relaxedinitial} and \ref{speedratio} hold. For a pursuit coalition $k$, let $\mathcal{X}_{k,1}^0$ and $\mathcal{X}_{k,2}^0$ (maybe empty) denote the initial sets of its pursuers who lie in $\Omega_{\rm play}$ and $\Omega_{\rm tar}$ respectively. By one-to-one mapping of $\mathcal{X}_{k,2}^0$, a set of virtual pursuers can be obtained and denote it by $\mathcal{\tilde{X}}_{k,2}^0$. Then, $\mathcal{B}^{n_k}(\mathcal{X}^0_k)=\mathcal{B}^{n_k}(\mathcal{X}_{k,1}^0\cup\mathcal{\tilde{X}}_{k,2}^0)$ holds, where $\mathcal{B}^{n_k}(\mathcal{X}_{k,1}^0\cup\mathcal{\tilde{X}}_{k,2}^0)$ can be computed by \autoref{barriergeneral}. 
\end{corol} \emph{Proof:} Since all players in $\mathcal{X}_{k,1}^0\cup\mathcal{\tilde{X}}_{k,2}^0$ lie in $\Omega_{\rm play}\cup\mathcal{T}$, this corollary is straightforward by considering \autoref{mirror} and \autoref{barriergeneral}.\qed \section{Pursuit Task Assignment}\label{task} In the next, by matching pursuit coalitions with evaders, an optimal task assignment scheme for the pursuit team to guarantee the most evaders intercepted, will be investigated. Intuitively, for every evader, we want to designate a pursuit coalition which can make sure of capturing it. In view of the characteristics of winning regions, the selection of an adequate pursuit coalition can be realized by checking if this evader lies in the capturable region of this pursuit coalition, namely, $\mathcal{W}_P^{n_k}$. In this way, rich prior information about which evaders can be captured by a specified pursuit coalition, is collected. Then, pursuit coalitions can be matched with evaders one by one such that the most evaders are captured. Finally, this maximum matching is formulated as a simplified 0-1 integer programming problem instead of solving a constrained bipartite matching problem \cite{lawler1976combinatorial}. Let $G=(P,E,\mathcal{E})$ denote an undirected bipartite graph, consisting of two independent sets $P,E$ of nodes, and a set $\mathcal{E}$ of unordered pairs of nodes called edges each of which connects a node in $P$ to one in $E$. In this case, we take $P=\{1,2,...,2^{N_p}-1\}$, representing all pursuit coalitions, and $E=\{1,2,..,N_e\}$ on behalf of all evaders. An edge from node $i$ in $P$ to node $j$ in $E$ is denoted by $e_{ij}$, referring to using pursuit coalition $i$ to intercept evader $E_j$. Define $e_{ij}\in \mathcal{E}$ if pursuit coalition $i$ can guarantee the capture of evader $E_j$ in $\Omega_{\rm play}$ or $\mathcal{T}$; otherwise, $e_{ij}\notin \mathcal{E}$. Thus, by computing the barrier for every pursuit coalition, all edges contained in $\mathcal{E}$ can be found. Traditionally, a subset $M\subseteq \mathcal{E}$ is said to be a matching if no two edges in $M$ are incident to the same node, and our goal is to find a matching containing a maximum number of edges. However, note that every pursuer can appear in at most one pursuit coalition when the interception scheme is executed. Thus, the pursuit coalitions containing at least one same pursuer cannot coexist, which results in a constrained bipartite matching problem. To solve it, this problem is transformed into the framework of 0-1 integer programming. Before proceeding with this transformation, the following lemma is presented, which will dramatically decrease the complexity of the 0-1 integer programming proposed later. \begin{lema}[Degeneration of Pursuit Coalition]\label{degene}\rm For any pursuit coalition $k$ with $n_k\ge3$, if there exists an evader $E_j$ such that $\mathbf{x}_{E_j}^0\in\mathcal{W}_P^{n_k}$, then there must exist a pursuit subcoalition $k_1$ of $k$ satisfying $n_{k_1}=2$ and $\mathbf{x}_{E_j}^0\in\mathcal{W}_P^{n_{k_1}}$. \end{lema} \emph{Proof:} Note that \autoref{barrierfull} manifests a special feature of the barrier that its every part is only associated with at most two pursuers as (\ref{barriergeneralnv1}) shows. Even for the part $F^3$, one can split it into two parts both of which are only associated with two pursuers. 
Thus, if a pursuit coalition $k$ can guarantee to capture an evader, then there must exist an its two-pursuer subcoalition which can also guarantee the capture of this evader.\qed \begin{rek}\label{rek41}\rm From \autoref{degene}, it can be stated that any maximum matching can be reduced to a simpler version in which every pursuit coalition contains at most two pursuers. Therefore, seeking for a maximum matching in the class of this simple version suffices to obtain a matching that is global optimal in the sense of the most evaders intercepted. \end{rek} \begin{rek}\label{rekadd41}\rm Although it follows from \autoref{degene} and \autoref{rek41} that the pursuit coalitions with more than two pursuers are not necessary in the maximum matching, the analysis for general pursuit coalitions in Section \ref{sectionmultiplev1} can provide instructions to determine the capturable and uncapturable regions of multiple pursuers with any numbers when pursuing one evader. Additionally, this result can also help to deploy the pursuers such that their capturable region is desirable, as sometimes the pursuit team should position its members before an evader occurs. More importantly, it is \autoref{barrierfull} that reveals the degeneration property of the pursuit coalition described in \autoref{degene}. \end{rek} Thus, in the following discussion, attention will be focused on this specific class of pursuit coalitions with at most two pursuers, and due to its crucial role in extracting out an optimal task assignment, it is stated formally as follows. \begin{defi}[Execution Pursuit Coalition]\label{execution}\rm A pursuit coalition $k$ is called an execution pursuit coalition, if $n_k=1$ or $n_k=2$. \end{defi} According to the barrier construction in Section \ref{pursuitcoalition}, the following prior information vector can be acquired. For notational convenience, define $N_v=N_eN_p(N_p+1)/2$. \begin{defi}[Prior Information Vector]\label{prior}\rm For $P_i$, define $\bm{r}_i^1=[r_i^1(1),...,r_i^1(N_e)]\in\mathbb{R}^{N_e}$, where for $j=1,...,N_e$, set $r_i^1(j)=0$ if $\mathbf{x}^0_{E_j}\in\mathcal{W}^1_E(\mathbf{x}^0_{P_i})$, that is, $P_i$ cannot guarantee to capture $E_j$ prior to its arrival in $\mathcal{T}$; otherwise, set $r_i^1(j)=1$. Similarly, for $P_{i1}$ and $P_{i2}$, define $\bm{r}_{i1,i2}^2=[r_{i1,2}^2(1),...,r_{i1,i2}^2(N_e)]\in\mathbb{R}^{N_e}$, where for $j=1,...,N_e$, set $r_{i1,i2}^2(j)=0$ if $\mathbf{x}^0_{E_j}\in\mathcal{W}^2_E(\mathbf{x}^0_{P_{i1}}\cup\mathbf{x}^0_{P_{i2}})$; otherwise, set $r_{i1,i2}^2(j)=1$. Then we call this vector $\bm{r}=[\bm{r}_1^1,...,\bm{r}_{N_p}^1,\bm{r}_{1,2}^2,...\bm{r}_{1,N_p}^2,\bm{r}^2_{2,3},...,\bm{r}_{N_p-1,N_p}^2]\in\mathbb{R}^{N_v}$ as prior information vector. \end{defi} \begin{rek}\label{rek42}\rm Note that the prior information vector $\bm{r}$ contains all information by which for any given evader, one can judge whether an execution pursuit coalition can guarantee to capture it. This vector is the only input for the maximum matching. \end{rek} Let $\bm{s}_i^1=[s_i^1(1),...,s_i^1(N_e)]\in\mathbb{R}^{N_e}$ denote the strategy vector of $P_i$. Its elements are either 1 or 0, where $s^1_i(j)=1$ indicates the assignment of $P_i$ to intercept $E_j$, and $s^1_i(j)=0$ means no assignment. Clearly, $\sum_{j=1}^{N_e}s^1_i(j)\leq1$ must be satisfied, namely, $P_i$ at a time can pursue at most one evader. Let $\bm{s}_{i1,i2}^2=[s_{i1,i2}^2(1),...,s_{i1,i2}^2(N_e)]\in\mathbb{R}^{N_e}$ denote the strategy vector of the pursuit pair $\{P_{i1},P_{i2}\}$. 
Its elements are either 1 or 0. Specifically, $s_{i1,i2}^2(j)=1$ indicates that the pursuit pair $\{P_{i1},P_{i2}\}$ cooperates to intercept $E_j$, and $s_{i1,i2}^2(j)=0$ means no assignment. Obviously, $\sum_{j=1}^{N_e}s_{i1,i2}^2(j)\leq1$. Denote the joint strategy vector of all one-pursuer pursuit coalitions by $\bm{s}^1=[\bm{s}_1^{1},...,\bm{s}_{N_p}^{1}]\in\mathbb{R}^{N_pN_e}$, and denote $\bm{s}^2=[\bm{s}_{1,2}^{2},...,\bm{s}^2_{1,N_p},\bm{s}_{2,3}^{2},...,\bm{s}_{N_p-1,N_p}^{2}]\in\mathbb{R}^{N_eN_p(N_p-1)/2}$ as the joint strategy vector of all two-pursuer pursuit coalitions. Thus, $\bm{z}=[\bm{s}^1,\bm{s}^2]^\mathsf{T}\in\mathbb{R}^{N_v\times 1}$ denotes the strategy vector of all execution pursuit coalitions. Let ${\rm ones}(m,n)$ denote the $m\times n$ matrix each element of which is 1, ${\rm zeros}(m,n)$ denote the $m\times n$ zero matrix, $I_n$ denote the identity matrix of size $n$, $\otimes$ denote the Kronecker product, and $\mathcal{X}^0_P,\mathcal{X}^0_E$ denote the initial positions of all pursuers and all evaders, respectively. Now, the main result of this section is presented below, which gives a maximum matching solution by solving a 0-1 integer program. \begin{thom}[Maximum Matching]\label{maximum}\rm Consider the system (1) and suppose that Assumptions \ref{isolateinitial}, \ref{relaxedinitial} and \ref{speedratio} hold. Given $\mathcal{X}^0_P$ and $\mathcal{X}^0_E$, the number $q$ of the evaders which the pursuit team can guarantee to prevent from reaching the target region $\Omega_{\rm tar}$ is given by \begin{equation}\begin{aligned}\label{maximizeequal11} q&=\text{maximize }\bm{c}^\mathsf{T}\bm{z}\qquad\qquad\qquad\qquad\qquad\qquad\\ &\quad\ \text{s. t. }A_1\bm{z}\leq \bm{b}_1,A_2\bm{z}\leq \bm{b}_2,A_3\bm{z}\leq \bm{b}_3\\ &\quad\ \bm{z}=[\bm{s}^1,\bm{s}^2]^\mathsf{T}=[z(1),...,z(N_v)]^\mathsf{T},z(i)\in\{0,1\} \end{aligned}\end{equation} and the maximum matching is given by $\bm{z}^*={\rm argmax}_{\bm{z}}(\bm{c}^\mathsf{T}\bm{z})$. The parameter matrices and vectors are defined as follows: \begin{equation}\begin{aligned} &\bm{c}={\rm ones}(N_v,1),\bm{b}_1=\bm{r}^\mathsf{T},A_1=I_{N_v},\bm{b}_2={\rm ones}(N_e,1),\\ &A_2={\rm ones}(1,N_v/N_e)\otimes I_{N_e},\bm{b}_3={\rm ones}(N_p,1) \end{aligned}\end{equation} and $A_3$ is computed by \autoref{matrix} presented below. \end{thom} \emph{Proof:} Note that the objective function counts the number of evaders which are assigned execution pursuit coalitions, that is, the number of evaders to be captured. The first inequality constraint in (\ref{maximizeequal11}) enforces the prior information, i.e., an evader may only be assigned an execution pursuit coalition that guarantees its capture. The second inequality constraint indicates that each evader is assigned at most one pursuit coalition, and the third one ensures that each pursuer appears in at most one pursuit coalition when the game runs. It can be verified that \autoref{matrix} finds all execution pursuit coalitions which contain a specific pursuer. Thus, this 0-1 integer program is solvable.\qed
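As an illustration of (\ref{maximizeequal11}), the following Python sketch builds the constraint matrices directly from coalition membership (equivalent in effect to $A_2$ and $A_3$ above, although not constructed via \autoref{matrix}) and solves the resulting 0-1 integer program with an off-the-shelf solver. The prior-information vector $\bm{r}$ below is made up for the example rather than computed from winning regions.
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.optimize import milp, LinearConstraint, Bounds

N_p, N_e = 3, 3
coalitions = [(i,) for i in range(N_p)] + list(combinations(range(N_p), 2))
N_v = len(coalitions) * N_e          # = N_e * N_p * (N_p + 1) / 2

def var(c_idx, j):                   # index of "coalition c_idx intercepts E_{j+1}"
    return c_idx * N_e + j

# Assumed prior information r (1 = capture guaranteed): P_1 or P_2 alone can
# capture E_1, and the pair {P_1, P_3} can capture E_2.
r = np.zeros(N_v)
r[var(0, 0)] = 1                             # (P_1)      -> E_1
r[var(1, 0)] = 1                             # (P_2)      -> E_1
r[var(coalitions.index((0, 2)), 1)] = 1      # {P_1, P_3} -> E_2

A2 = np.zeros((N_e, N_v))            # each evader gets at most one coalition
A3 = np.zeros((N_p, N_v))            # each pursuer joins at most one coalition
for c_idx, coal in enumerate(coalitions):
    for j in range(N_e):
        A2[j, var(c_idx, j)] = 1
        for p in coal:
            A3[p, var(c_idx, j)] = 1

constraints = [LinearConstraint(np.eye(N_v), -np.inf, r),        # A_1 z <= r
               LinearConstraint(A2, -np.inf, np.ones(N_e)),      # A_2 z <= b_2
               LinearConstraint(A3, -np.inf, np.ones(N_p))]      # A_3 z <= b_3
res = milp(c=-np.ones(N_v), constraints=constraints,
           integrality=np.ones(N_v), bounds=Bounds(0, 1))
print("q =", int(round(-res.fun)))   # 2: assign (P_2) to E_1 and {P_1,P_3} to E_2
\end{verbatim}
Note that a greedy assignment of $(P_1)$ to $E_1$ would block the pair $\{P_1,P_3\}$ and capture only one evader, whereas the program above recovers $q=2$.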
\begin{algm}\label{matrix}\rm (Matrix for Inequality Constraint).\\ 1: \textbf{Input:} $N_p,N_e$, and $A_3\gets$null matrix.\\ 2: \textbf{if} $N_p\ge2$ \textbf{then}\\ 3: \textbf{for} $i\in\{1,2,...,N_p\}$ \textbf{do}\\ 4: \hspace{0.2 in}$\rm temp\gets$null matrix\\ 5: \hspace{0.2 in}\textbf{for} $j\in\{1,2,...,N_p(N_p-1)/2\}$ \textbf{do}\\ 6: \hspace{0.4 in}$\rm tag\gets0$\\ 7: \hspace{0.4 in}\textbf{for} $k\in\{1,2,...,i-1\}(i\ge2)$ \textbf{do}\\ 8: \hspace{0.6 in}\textbf{if} $j==i-k+(k-1)(N_p-k/2)$ \textbf{then}\\ 9: \hspace{0.77 in}$\rm tag\gets1$,\textbf{break} \textbf{ end if end for}\\ 10:\hspace{0.37 in}\textbf{if} $j\ge(i-1)(N_p-i/2)+1$ \&\&\\ 11:\hspace{0.51 in}$j\leq i(N_p-(i+1)/2)$ \textbf{then}\\ 12:\hspace{0.57 in}$\rm tag\gets1$ \textbf{end if}\\ 13:\hspace{0.37 in}$\rm temp\gets[temp,tag]$ \textbf{end for}\\ 14:\hspace{0.17 in}$A_3\gets[A_3;\rm temp]$ \textbf{end for}\\ 15: $A_3\gets A_3\otimes {\rm{ones}}(1,N_e)$ \textbf{end if}\\ 16: $A_3\gets [I_{N_p}\otimes {\rm ones}(1,N_e),A_3]$\\ 17:\textbf{Output:} $A_3$. \end{algm} \begin{rek}\label{rek43}\rm It can be observed that $q$ is unique, while the maximum matching $\bm{z}^*$ may have multiple solutions. Moreover, the original matching is a $(2^{N_p}-1)\times N_e$ constrained bipartite matching problem, which is NP-hard in general. However, the formulation in \autoref{maximum} involves only $N_v$ variables and $N_v+N_e+N_p$ inequality constraints, that is, its size is polynomial in $N_p$ and $N_e$. The first inequality constraint in (\ref{maximizeequal11}) can be simplified considerably when $\bm{r}$ contains many zero elements. \end{rek} \begin{defi}[Maximum Matching Pairs]\label{pairs}\rm For a maximum matching $\bm{z}^*=[\bm{s}^{1*},\bm{s}^{2*}]^\mathsf{T}$, define the following sets of matching pairs:\begin{equation}\begin{aligned} M^1(\bm{z}^*)&=\big\{(i,j)|s_i^{1*}(j)=1,1\leq i\leq N_p,1\leq j\leq N_e\big\}\\ M^2(\bm{z}^*)&=\big\{(i1,i2,j)|s_{i1,i2}^{2*}(j)=1,1\leq i1<i2\leq N_p,\ \ \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ 1\leq j\leq N_e\big\}. \end{aligned}\end{equation} \end{defi} \begin{figure} \caption{5 pursuers versus 6 evaders. $(a)-(e)$ Computation of the barrier and winning regions for all pursuit coalitions when facing one evader: $(a)$ all one-pursuer pursuit coalitions; $(b)$ two-pursuer pursuit coalitions (only two are depicted); $(c)$ three-pursuer pursuit coalitions (only two are depicted); $(d)$ four-pursuer pursuit coalitions (only two are depicted); $(e)$ five-pursuer pursuit coalition. Fig. \ref{fig:sub-5} also depicts the resulting maximum matching.} \label{fig:sub-1} \label{fig:sub-2} \label{fig:sub-3} \label{fig:sub-4} \label{fig:sub-5} \label{fig:sub-6} \label{figureexample1} \end{figure} Note that $M^1(\bm{z}^*)$ represents all one-to-one matching pairs in $\bm{z}^*$, and $M^2(\bm{z}^*)$ represents all two-to-one matching pairs. \section{Simulation Results}\label{simulation} In this section, simulation results are presented to illustrate the previous theoretical developments. Assume that $v_P=1{\rm m/s},v_E=0.7{\rm m/s}$, namely, $\alpha=0.7$, $N_p=5$ and $N_e=6$. Fig. \ref{figureexample1} shows the barrier and winning regions for all pursuit coalitions, namely, their capturable and uncapturable regions, where Fig. \ref{fig:sub-1} and Fig. \ref{fig:sub-5} refer to all one-pursuer pursuit coalitions and the five-pursuer pursuit coalition, respectively. For clarity, only two barriers and winning regions are depicted for two-pursuer, three-pursuer, and four-pursuer pursuit coalitions in Fig. \ref{fig:sub-2}, Fig. \ref{fig:sub-3}, and Fig. \ref{fig:sub-4}, respectively. Fig.
\ref{fig:sub-5} also shows the maximum matching obtained from the computed barriers and winning regions, including two one-to-one matching pairs and one two-to-one matching pair. Thus, the maximum matching is of size 3. This guarantees that if each pursuer appearing in the maximum matching plays optimally against its matched evader, that evader will be captured before reaching $\mathcal{T}$. \section{Conclusions and Future Work}\label{conclusion} \subsection{Conclusions} This paper considered a multiplayer reach-avoid game in a general convex domain. The key achievement is an analytical description of the winning regions when each possible pursuit coalition competes with one evader. Furthermore, an interception scheme involving pursuer-evader matching is generated for the pursuit team such that as many evaders as possible are captured before entering the target region. The constructed barrier, which separates the winning regions, shows that at most two pursuers are needed to intercept one evader whenever capture is possible, greatly simplifying the matching search. The winning regions for the case of the whole pursuit team against one evader can also help the evasion team to determine which evaders can enter the target region or escape from the play region for certain, no matter what strategy the pursuit team uses. These evaders can then receive more attention from the evasion team and be chosen as key mission performers. More generally, this result can provide guarantees on goal satisfaction and safety of optimal system trajectories for safety-critical systems, where a group of vehicles aim to reach their destinations in the presence of dynamic obstacles. The results in this paper are almost fully analytical and suitable for real-time updates. More importantly, all possible pursuit coalitions are considered and our results are optimal in the sense of intercepting the most evaders. \subsection{Future Work} All players considered in this paper move with simple motion and are able to turn instantaneously. In future work, more practical and complex dynamic models will be considered, for example, those of the Isaacs-Dubins car. Extending the results obtained in general convex domains to nonconvex domains is another direction worth pursuing; some conservative results can be obtained straightforwardly, for example, by approximating the nonconvex domain with an appropriate convex domain. In view of the limited communication and computing power of a single player, another interesting and promising extension is to consider distributed multiplayer reach-avoid games, which will be the focus of our future research. \end{document}
\begin{document} \title{Provably Efficient Algorithms for Multi-Objective Competitive RL} \author{\name Tiancheng Yu \email{[email protected]}\\ \addr{Massachusetts Institute of Technology}\\ \name Yi Tian \email{[email protected]}\\ \addr{Massachusetts Institute of Technology}\\ \name Jingzhao Zhang \email{[email protected]}\\ \addr{Massachusetts Institute of Technology}\\ \name Suvrit Sra \email{[email protected]}\\ \addr{Massachusetts Institute of Technology}\\ } \maketitle \begin{abstract} We study multi-objective reinforcement learning (RL) where an agent's reward is represented as a vector. In settings where an agent competes against opponents, its performance is measured by the distance of its average return vector to a target set. We develop statistically and computationally efficient algorithms to approach the associated target set. Our results extend Blackwell's approachability theorem~\citep{blackwell1956analog} to tabular RL, where strategic exploration becomes essential. The algorithms presented are adaptive; their guarantees hold even without Blackwell's approachability condition. If the opponents use fixed policies, we give an improved rate of approaching the target set while also tackling the more ambitious goal of simultaneously minimizing a scalar cost function. We discuss our analysis for this special case by relating our results to previous works on constrained RL. To our knowledge, this work provides the first provably efficient algorithms for vector-valued Markov games and our theoretical guarantees are near-optimal. \end{abstract} \section{Introduction} What can a player expect to achieve in competitive games when pursuing multiple objectives? If the player has a single objective, the answer is clear from von Neumann's minimax theorem~\citep{neumann1928theorie}: the player can follow a fixed strategy to ensure that its cost is no worse than a certain threshold, the \emph{minimax value} of the game, no matter how the opponents play. But if the player has multiple objectives, the answer is less clear and it must define some tradeoffs. One important way to capture tradeoffs is to define a certain \emph{target set} of vectors, and then to ensure that player's vector of returns lies in this set. The player's performance can then be measured via the distance of its reward vector from the target set. In 1956, Blackwell showed that in a repeated game, the player of interest can make the distance of its average return to a target set small as long as this set satisfies a condition called \emph{approachability}~\citep{blackwell1956analog}. The approachability theorem applies to multi-objective games with a decision horizon of a \emph{single} time step. However, in many practical domains such as robotics, self-driving, video games, and recommendation systems, the decision horizons span \emph{multiple} time steps. For example, in a robot control task, we may hope the robot arm reaches a certain region in a 3D space; while, in self-driving, we may hope the car takes care of speed, safety and comfort simultaneously. In these problems, the state of the decision process transitions based on both the actions taken by the players and the unknown dynamics.
Though a generalization (Assumption~\ref{ass:Approachability}) of Blackwell's approachability condition~\citet{blackwell1956analog} is relatively direct, efficient exploration and the need to learn the unknown transitions is what poses a challenge in the multiple time step setting. This challenge motivates us to ask: \vect{v}ect{e}mph{How can a player approach a target set that satisfies a generalized notion of approachability?} We answer this question by modeling multi-objective competitive reinforcement learning (RL) as an online learning problem in a vector-valued Markov game (MG), for which we provide efficient algorithms as instances of a generic meta-algorithm that we propose. Going one step further, we can ask a more ambitious question: \vect{v}ect{e}mph{Can we minimize a scalar cost function while also satisfying approachability?} Our answer is affirmative if the opponents play fixed policies; equivalently, if the agent interacts with a fixed environment (without opponents), in which case the model reduces to a vector-valued Markov decision process (MDP). In this setting, the target set can be viewed as a set of constraints, and our results improve on the rich literature on constrained MDP in multiple aspects. In Table~\ref{tab:placement} we give a comparison of different multi-objective RL settings. Our work can be seen as a generalization of both~\citep{blackwell1956analog} and~\citep{agrawal2014bandits} to cases with an $H$ step horizon. \begin{table}[t] \centering \caption{The settings of this work with reference to the literature} \langlebel{tab:placement} \begin{tabular}{ccc} \toprule & w/o opponents & w/ adversarial opponents \\ \midrule single-state single-horizon & \makecell{constrained bandits \\ (e.g.,~\citep{agrawal2014bandits})} & \makecell{vector-valued games \\ (e.g.,~\citep{blackwell1956analog})} \\ \midrule multi-state $H$-horizon & \makecell{constrained MDPs \\ (e.g.,~\citep{brantley2020constrained}; \textbf{this work})} & \makecell{vector-valued Markov games \\ (\textbf{this work})} \\ \bottomrule \vect{v}ect{e}nd{tabular} \vect{v}ect{e}nd{table} \noindent\textbf{Summary of our contributions.} \begin{list}{{\tiny$\blacktriangleright$}}{\leftmargin=1em} \setlength{\itemsep}{-1pt} \item For online learning in vector-valued Markov games, we propose two provably efficient algorithms to approach a target set under a generic framework. Strategic exploration is essential to obtain statistical efficiency (Theorems~\ref{thm:basic-approachable} and~\ref{thm:OCO-delta-approachable}) for both algorithms. The second algorithm has the merit of being more computationally efficient. \item When the chosen target set is not approachable, both our algorithms adapt automatically. Concretely, we describe the guarantees (Theorems~\ref{thm:basic-delta-approachable} and~\ref{thm:OCO-delta-approachable}) of the algorithms using a notion of $\delta$-approachability (Assumption~\ref{ass:delta-Approachability}). \item For vector-valued MDPs, via a more dedicated design of the exploration bonus, we obtain a near-optimal rate of making the average reward vector approach (Theorem~\ref{thm:bernstein}) the target set. Moreover, under a mild assumption, we present a modified algorithm that can simultaneously minimize a convex cost function (Theorem~\ref{thm:satisfiable}). 
Comparing with existing results in constrained MDP, our bounds on regret and constraint violation are the sharpest with respect to their dependence on the parameters $S$, $A$, and $K$, where $S$ is the number of states, $A$ is the number of actions and $K$ is the number of episodes. \vect{v}ect{e}nd{list} \subsection{Related Work} \noindent\textbf{Blackwell's approachability.} \citet{blackwell1956analog} initiated the study of multi-objective learning in repeated matrix games by introducing the notion of approachability and an algorithm to approach a given set. Using a dual formulation of the distance from a point to a convex cone, \citet{abernethy2011blackwell} show the equivalence of approachability problems and online linear optimization. \citet{shimkin2016online} further extends the equivalence to online convex optimization (OCO) via a dual formulation of the distance from a point to a convex set. Our primal and dual algorithms generalize respectively Blackwell's algorithm~\citep{blackwell1956analog} and the OCO-based algorithm~\citep{shimkin2016online} to Markov games. \textbf{Learning in Markov games.} Markov games, also known as stochastic games~\citep{shapley1953stochastic, littman1994markov}, are a general model for multi-agent reinforcement learning. In recent years, much attention has been given to learning in scalar-valued Markov games with unknown transitions. In the self-play setting~\citep{bai2020provable, xie2020learning, bai2020near, liu2020sharp}, the goal is to learn a Nash equlibrium with sample complexity guarantees. \citet{bai2020provable, xie2020learning, bai2020near} consider zero-sum Markov games while \citet{liu2020sharp} provide results for general-sum Markov games. In the online setting~\citep{brafman2002r, xie2020learning, tian2020provably}, the goal is to achieve low regret in presence of an adversarial opponent. We also study the online setting, but in contrast, we consider vector-valued returns and the goal is to make the average return approach a given set. \textbf{Online learning with constraints.} Multi-objective RL is closely related to RL with constraints since satisfying the constraints is tantamount to having extra objectives. \citet{badanidiyuru2013bandits} study bandits with knapsacks, and \citet{agrawal2014bandits} study the more general setting with concave rewards and convex constraints that the method needs to approach. Beyond bandits, \citet{jenatton2016adaptive,yuan2018online} study online convex optimization with constraints given by convex functions. \textbf{Constrained MDPs.} For MDPs with linear constraints, \citet{efroni2020exploration, ding2020provably, qiu2020upper, brantley2020constrained} provide algorithms with both regret and total constraint violation guarantees. As a generalization of~\citep{agrawal2014bandits}, \citet{brantley2020constrained} also consider MDPs with convex constraints and concave rewards and discuss as a special case MDPs with knapsacks on \vect{v}ect{e}mph{all} episodes. \citet{chen2020efficient} formulate MDPs with knapsacks on \vect{v}ect{e}mph{each} episode as factored MDPs, to which the regret bounds of factored MDPs~\citep{osband2014near, tian2020towards, chen2020efficient} apply. See the discussion at the end of Section~\ref{sec:satisfiable} for a detailed comparison. \textbf{Multi-objective RL with preference.} More recently, \citet{wu2020accommodating} study single-agent multi-objective RL to accommodate potentially adversarial preference vectors. 
In contrast, we assume a potentially adversarial opponent that influences both the transition and the return vector. Their goal also differs from ours in that they aim to maximize the cumulative rewards defined by the observed preference vectors in each episode. The preference vector in their setting is similar to the dual variable in our algorithm. Nonetheless, our dual variable is learned by an update procedure. All of the aforementioned works on MGs or MDPs focus on the episodic setting. See, e.g.,~\citep{cheung2019non, singh2020learning}, for the studies of multi-objective or constrained RL in the nonepisodic setting. \section{Background and Problem Setup} In this section, we formulate the problem of two-player zero-sum Markov Games. We control one of the players, whom we call the \vect{v}ect{e}mph{agent}. The other player is referred to as the \vect{v}ect{e}mph{adversary}. We use the two-player zero-sum condition for simplicity. We can handle multi-player general-sum games by considering the product of all the opponents' actions as an augmented action (an idea also recently exploited in~\citep{tian2020provably}). Now we are ready to explain how players interact and learn in the Markov game setup. \subsection{Vector-valued Markov Games} \textbf{Model.} Let $[N]:=\{1,2,\ldots,N\}$, and let $\mathbb{D}elta(\mathbb{X})$ be the set of probability distribution on set $\mathbb{X}$. Then, an episodic two-player zero-sum vector-valued MG can be denoted by the tuple $\mathrm{MG}(\mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, \mathbf{r}, H)$, where \begin{list}{–}{\leftmargin=1.5em} \setlength\itemsep{-1pt} \item $H$ is the number of steps in each episode, \item $\mathcal{S}$ is the state space, \item $\mathcal{A}$ and $\mathcal{B}$ are the action spaces of both players, \item $\mathbb{P}$ is a collection of \vect{v}ect{e}mph{unknown} transition kernels $\{\mathbb{P}_h: \mathcal{S} \times \mathcal{A} \times \mathcal{B} \to \mathbb{D}elta(\mathcal{S})\}_{h\in [H]}$, and \item $\mathbf{r}$ is a collection of \vect{v}ect{e}mph{known} $d$-dimensional return functions $\{\mathbf{r}_h: \mathcal{S} \times \mathcal{A} \times \mathcal{B} \to [0, 1]^d\}_{h\in [H]}$, where $d\vect{v}ect{g}e 2$ is the \vect{v}ect{e}mph{dimensionality} of the MG. We assume known $\mathbf{r}$ only for simplicity; learning $\mathbf{r}$ poses no real difficulty--see e.g.,~\cite{azar2017minimax, jin2018q}. \vect{v}ect{e}nd{list} Let $|\cdot|$ denote set cardinality. Then, we define the three key cardinalities $S := |\mathcal{S}|$, $A := |\mathcal{A}|$, and $B := |\mathcal{B}|$. \paragraph{Interaction protocol.} Without loss of generality, in each episode the MG starts at a \vect{v}ect{e}mph{fixed} initial state $s_1 \in \mathcal{S}_1$. At each step $h\in [H]$, the two players observe the state $s_h\in \mathcal{S}$ and simultaneously take actions $a_h\in \mathcal{A}$, $b_h\in \mathcal{B}$. This decision is specified by the players' policies $\mu_h(s_h) \in \mathbb{D}elta(\mathcal{A})$ and $\nu_h(s_h) \in \mathbb{D}elta(\mathcal{B})$. Then the environment transitions to the next state $s_{h+1} \sim \mathbb{P}_{h}(\cdot\vect{v}ert s_h, a_h, b_h)$ and outputs the return $\mathbf{r}_{h}(s_h, a_h, b_h)$. Let $\mathcal{F}_h^k$ be the filtration generated by all these random variables until the $k$-th episode and $i$-th step. 
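The interaction protocol above can be made concrete with a short simulation sketch. The transition kernels, return functions, and policies below are arbitrary toy inputs (assumed for illustration, not quantities from this paper).
\begin{verbatim}
# One episode of a tabular vector-valued Markov game, following the protocol:
# P[h][s, a, b] is a distribution over next states, r[h][s, a, b] lies in [0,1]^d.
import numpy as np

def play_episode(P, r, mu, nu, s1, rng):
    """Roll out one episode and return the cumulative d-dimensional return."""
    H, d = len(P), r[0].shape[-1]
    V_hat, s = np.zeros(d), s1
    for h in range(H):
        a = rng.choice(mu[h][s].shape[0], p=mu[h][s])    # agent's action
        b = rng.choice(nu[h][s].shape[0], p=nu[h][s])    # adversary's action
        V_hat += r[h][s, a, b]
        s = rng.choice(P[h].shape[-1], p=P[h][s, a, b])  # next state
    return V_hat

# Toy instance: S = 2 states, A = B = 2 actions, H = 3, d = 2, uniform policies.
rng = np.random.default_rng(0)
S, A, B, H, d = 2, 2, 2, 3, 2
P = [rng.dirichlet(np.ones(S), size=(S, A, B)) for _ in range(H)]
r = [rng.uniform(size=(S, A, B, d)) for _ in range(H)]
mu = [np.full((S, A), 1.0 / A) for _ in range(H)]
nu = [np.full((S, B), 1.0 / B) for _ in range(H)]
print(play_episode(P, r, mu, nu, s1=0, rng=rng))
\end{verbatim}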
\paragraph{Value functions.} Analogous to usual MDPs, for a policy pair $(\mu, \nu)$, step $h\in [H]$, state $s\in \mathcal{S}$ and actions $a\in \mathcal{A}, b\in \mathcal{B}$, we define the state- and Q-value functions as: \begin{align*} &\mathbf{V}_{h}^{\mu, \nu}(s) := \mathbb{E}_{\mu, \nu}\Bigl[\sum\nolimits_{l=h}^{H} \mathbf{r}_{l}(s_{l}, a_{l}, b_{l}) \vert s_h = s\Bigr], \\ &\mathbf{Q}_{h}^{\mu, \nu}(s, a, b) :=\mathbb{E}_{\mu, \nu}\Bigl[ \sum\nolimits_{l=h}^{H} \mathbf{r}_{l}(s_{l}, a_{l}, b_{l}) \vert s_h = s, a_h = a, b_h = b \Bigr]. \end{align*} For compactness of notation, for any $\mathbf{V} \in [0,H]^{dS}$ and $\mathbf{Q} \in [0, H]^{dSAB}$ we introduce the operators $\mathbb{P}$ and $\mathbb{D}$ by \begin{align*} \mathbb{P}_h [\mathbf{V}](s, a, b) := \mathbb{E}_{s'\sim \mathbb{P}_h(\cdot\vert s, a, b)}[\mathbf{V}(s')], \,\,\,\, \mathbb{D}_{\mu, \nu}[\mathbf{Q}](s) := \mathbb{E}_{a\sim \mu(\cdot\vert s), b\sim \nu(\cdot\vert s)}[\mathbf{Q}(s, a, b)]. \end{align*} With this notation we obtain the Bellman equations: \begin{align*} \mathbf{V}_{h}^{\mu, \nu}(s) = \mathbb{D}_{\mu_h, \nu_h} [\mathbf{Q}_{h}^{\mu, \nu}](s), \,\,\,\, \mathbf{Q}_{h}^{\mu, \nu}(s, a, b) = (\mathbf{r}_h + \mathbb{P}_{h}[\mathbf{V}_{h+1}^{\mu, \nu}])(s, a, b). \end{align*} For convenience define $\mathbf{V}_{H+1}^{\mu, \nu}(s) = 0$ for any $s\in \mathcal{S}$. \paragraph{Satisfiability.} Let $\mathbb{W}^\star$ denote a desired target set. Henceforth, we assume that $\mathbb{W}^\star$ is a closed and convex subset of $[0,H]^d$. Let $\hat{\mathbf{V}}^k$ be the cumulative return received by the agent in the $k$-th episode and $\mathbf{W}^K:=\frac1K\sum_{k=1}^K \hat{\mathbf{V}}^k$ be the average for the first $K$ episodes. The goal of the agent is to guarantee that $\mathbf{W}^K \in \mathbb{W}^{\star}$. This goal is achievable under the following satisfiability assumption. \begin{assumption}[Satisfiability] \label{ass:satisfiability} Given a vector-valued MG $\mathrm{MG}(\mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, \mathbf{r}, H)$, we say a closed and convex target set $\mathbb{W}^{\star}$ is satisfiable, if there exists a policy $\mu$ such that for any policy $\nu$, the vector value $\mathbf{V}_{1}^{\mu, \nu}(s_1) \in \mathbb{W}^{\star}$. \end{assumption} Informally, satisfiability means that the agent can ensure the cumulative return is contained in the target set, regardless of the opponent's actions. A weaker notion is whether, upon knowing the opponent's policy, the agent can reach the target set. We therefore call it \emph{response-satisfiability}. \begin{assumption}[Response-satisfiability] \label{ass:response-satisfiability} Given a vector-valued MG $\mathrm{MG}(\mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, \mathbf{r}, H)$, we say a closed and convex target set $\mathbb{W}^{\star}$ is response-satisfiable, if for any policy $\nu$, there exists a policy $\mu$ such that $\mathbf{V}_{1}^{\mu, \nu}(s_1) \in \mathbb{W}^{\star}$. \end{assumption} Both notions coincide in a scalar-valued zero-sum game, as a result of von Neumann's minimax theorem. However, for vector-valued games, satisfiability is strictly stronger. Indeed, satisfiability may fail even in simple games where response-satisfiability holds. See the discussion in Section 2.1 of \cite{abernethy2011blackwell} for a concrete example.
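As a sanity check on the Bellman equations above, the value $\mathbf{V}_1^{\mu,\nu}(s_1)$ can be computed exactly by backward induction. The sketch below reuses the same kind of toy inputs as the previous one (again assumed for illustration only).
\begin{verbatim}
# Exact policy evaluation of the d-dimensional value by backward induction.
import numpy as np

def evaluate(P, r, mu, nu, s1):
    """Return V_1^{mu,nu}(s1) in R^d for fixed product policies mu, nu."""
    H = len(P)
    S, A, B, d = r[0].shape
    V = np.zeros((S, d))                            # V_{H+1} = 0
    for h in reversed(range(H)):
        # Q_h(s,a,b) = r_h(s,a,b) + E_{s' ~ P_h(.|s,a,b)} [V_{h+1}(s')]
        Q = r[h] + np.einsum('sabn,nd->sabd', P[h], V)
        # V_h(s) = E_{a ~ mu_h(.|s), b ~ nu_h(.|s)} [Q_h(s,a,b)]   (operator D)
        V = np.einsum('sa,sb,sabd->sd', mu[h], nu[h], Q)
    return V[s1]

rng = np.random.default_rng(0)
S, A, B, H, d = 2, 2, 2, 3, 2
P = [rng.dirichlet(np.ones(S), size=(S, A, B)) for _ in range(H)]
r = [rng.uniform(size=(S, A, B, d)) for _ in range(H)]
mu = [np.full((S, A), 1.0 / A) for _ in range(H)]
nu = [np.full((S, B), 1.0 / B) for _ in range(H)]
print(evaluate(P, r, mu, nu, s1=0))
\end{verbatim}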
Without satisfiability, we cannot expect to reach the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$. Luckily, approaching a response-satisfiable set $\mathbb{W}^{\mathrm{s.t.~}ar}$ on average is still possible. To that end, we can reduce the vector-valued MG to a scalar-valued one, as shown below. \subsection{Scalar Reduction and Minimax Theorem} We can convert a vector-valued MG to a scalar-valued one by replacing the return vector $\mathbf{r}$ by the scalar $\mathbf{r} \cdot \bm{\theta}$, where $\bm{\theta} \in \mathbb{R}^d$ is a fixed vector. Importantly, we will treat $\bm{\theta}$ as a dual variable in our algorithms. For the resulting MG we can define $V_{h}^{\mu, \nu}(\bm{\theta},s)$ and $Q_{h}^{\mu, \nu}(\bm{\theta},s,a,b)$ similarly. We call the two players the ``min-player'' and the ``max-player''\footnote{To accommodate conventions in Approachability, we make the agent the min-player (usually the max-player in MG literature).}. Let $\nu$ be a policy of the max-player. There exists a \vect{v}ect{e}mph{best response} $\mu^{\dagger} $ to $\nu$, such that for any step $h\in [H]$ and state $s\in \mathcal{S}$ we have $V_{h}^{\mu^{\dagger}, \nu}(s) = V_{h}^{\dagger, \nu}(s) := \min_{\mu} V_{h}^{\mu, \nu}(s)$. A symmetric discussion applies to the best response to a min-player's policy. The following minimax equality holds: for any step $h\in [H]$ and state $s\in \mathcal{S}$, \begin{align*} \min_{\mu} \max_{\nu} V_{h}^{\mu, \nu}(\bm{\theta},s) = \max_{\nu} \min_{\mu} V_{h}^{\mu, \nu}(\bm{\theta},s). \vect{v}ect{e}nd{align*} A policy pair $(\mu^{\mathrm{s.t.~}ar}, \nu^{\mathrm{s.t.~}ar}) $ that achieves the equality is known as a \vect{v}ect{e}mph{Nash equilibrium}. We use $V^{\mathrm{s.t.~}ar}_h(\bm{\theta},s):=V_h^{\mu^{\mathrm{s.t.~}ar},\nu^{\mathrm{s.t.~}ar}}(\bm{\theta},s)$ to denote the value at the Nash equilibrium, which is unique for the MG and we call the \vect{v}ect{e}mph{minimax value} of the MG. \paragraph{Approachability.} Scalarizing a vector-valued MG is equivalent to considering a half-space that contains $\mathbb{W}^{\mathrm{s.t.~}ar}$ instead of $\mathbb{W}^{\mathrm{s.t.~}ar}$ itself. If we can reach $\mathbb{W}^{\mathrm{s.t.~}ar}$, then we can reach any half-space that contains $\mathbb{W}^{\mathrm{s.t.~}ar}$. Therefore, satisfiability of half-spaces that contain $\mathbb{W}^{\mathrm{s.t.~}ar}$ is weaker than satisfiability of $\mathbb{W}^{\mathrm{s.t.~}ar}$ itself. We state this condition formally below. \begin{assumption}[Approachability] \langlebel{ass:Approachability} Given a vector-valued MG $\mathrm{MG}(\mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, \mathbf{r}, H)$, we say a closed and convex target set $\mathbb{W}^{\mathrm{s.t.~}ar}$ is approachable, if for any vector $\bm{\theta}$, $$ \vect{v}ect{u}nderset{\mathbf{x}\in \mathbb{W}^{\mathrm{s.t.~}ar}}{\max}\ \ \bm{\theta}\cdot \mathbf{x} \vect{v}ect{g}e V_1^{\mathrm{s.t.~}ar}\left(\bm{\theta}, s_1 \right). $$ \vect{v}ect{e}nd{assumption} Assumption~\ref{ass:Approachability} is also known as ``half-space satisfiability'' in the literature~\citep{blackwell1956analog}. Indeed, it is equivalent to response-satisfiability (See Lemma 7 in \cite{abernethy2011blackwell}. The proof therein carries over for MGs directly, since it only depends on the geometric property of $\mathbb{W}^{\mathrm{s.t.~}ar}$.). We will only use this approachability condition in the sequel; it results in no loss of generality, and moreover, it is easier to extend to the non-approachable case. 
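To make the scalar reduction concrete, the sketch below computes the minimax value $V_1^{\star}(\bm{\theta}, s_1)$ of the scalarized game for a fixed direction $\bm{\theta}$ by backward induction, solving the stage matrix game at every state with a small linear program (the agent is the min-player, as above). The model is again an assumed toy instance; the final comment restates the half-space condition of Assumption~\ref{ass:Approachability}.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(G):
    """Value min_mu max_b  mu^T G[:, b] of the matrix game G (rows = min-player)."""
    A, B = G.shape
    c = np.r_[np.zeros(A), 1.0]                   # minimize the slack variable v
    A_ub = np.c_[G.T, -np.ones(B)]                # G[:, b] . mu - v <= 0 for all b
    A_eq = np.r_[np.ones(A), 0.0].reshape(1, -1)  # mu is a probability vector
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(B), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * A + [(None, None)], method="highs")
    return res.fun

def minimax_value(P, r, theta, s1):
    """V_1^*(theta, s1) for the game with scalarized returns r_h . theta."""
    H, S = len(P), P[0].shape[0]
    V = np.zeros(S)
    for h in reversed(range(H)):
        Q = r[h] @ theta + P[h] @ V               # shape (S, A, B)
        V = np.array([matrix_game_value(Q[s]) for s in range(S)])
    return V[s1]

rng = np.random.default_rng(0)
S, A, B, H, d = 2, 2, 2, 3, 2
P = [rng.dirichlet(np.ones(S), size=(S, A, B)) for _ in range(H)]
r = [rng.uniform(size=(S, A, B, d)) for _ in range(H)]
theta = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(minimax_value(P, r, theta, s1=0))
# Approachability requires max_{x in W*} theta . x to be at least this value
# for every unit direction theta.
\end{verbatim}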
So far we assumed that the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$ is approachable. In practice, this assumption may or may not hold. In both cases, we can still seek to minimize the Euclidean distance $\mathrm{dist}(\mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar})$ of the average return to the target set. This is analogous to the agnostic learning setting for supervised learning. Toward this end, the following condition is useful. \begin{assumption}[$\delta$-Approachability] \langlebel{ass:delta-Approachability} Given a vector-valued MG $\mathrm{MG}(\mathcal{S}, \mathcal{A}, \mathcal{B}, \mathbb{P}, \mathbf{r}, H)$, we say a closed and convex target set $\mathbb{W}^{\mathrm{s.t.~}ar}$ is $\delta$-approachable, if for any vector $\bm{\theta}$, $$ \vect{v}ect{u}nderset{\mathbf{x}\in \mathbb{W}^{\mathrm{s.t.~}ar}}{\max}\ \ \bm{\theta}\cdot \mathbf{x} + \delta \vect{v}ect{g}e V_1^{\mathrm{s.t.~}ar}\left(\bm{\theta}, s_1 \right). $$ \vect{v}ect{e}nd{assumption} Equivalently, this means the $\delta$-expansion of $\mathbb{W}^{\mathrm{s.t.~}ar}$ is approachable. So, a larger $\delta$ means $\mathbb{W}^\mathrm{s.t.~}ar$ is harder to approach. \section{Multi-objective Meta-algorithm} \langlebel{sec:basic} Equipped with the generalized concepts of approachability for vector-valued MGs, we are ready to present our algorithmic framework. To make the exposition modular, we first present \textbf{M}ulti-\textbf{O}bjective \textbf{M}eta-\textbf{A}lgorithm (\textsc{MOMA}{}), our generic learning algorithm that is displayed as Algorithm~\ref{algorithm:Blackwell-VI}. Subsequently, we explain its key components. \begin{algorithm}[h] \caption{Multi-objective Meta-algorithm (\textsc{MOMA}{})} \langlebel{algorithm:Blackwell-VI} \begin{algorithmic}[1] \mathbb{S}tate {\bfseries Initialize:} for any $(s, a, b, h,s')$, $Q_{h}(s,a, b)\leftarrow \sqrt{d}H$, $N_{h}(s,a, b)\leftarrow 0$, $N_h(s,a,b,s')\leftarrow 0$, $\mathbf{W} \leftarrow \mathbf{0}$, $\bm{\theta} \leftarrow$ any unit verctor, $\hat{\mathbb{P}} \leftarrow$ any probability distribution. \For{Episode $k=1,\dots,K$} \mathbb{S}tate $\pi \leftarrow \textsc{Planning}{}(\bm{\theta},\mathbf{r},N, \hat{\mathbb{P}})$ \langlebel{line:planning} \mathbb{S}tate $\hat{\mathbf{V}} \leftarrow \mathbf{0}$. \langlebel{line:model_update_start} \For{step $h=1,\dots, H$} \mathbb{S}tate take action $(a_h, \cdot) \sim \pi_h(\cdot, \cdot| s_h)$. \mathbb{S}tate Observe opponent's action $b_h\sim \nu_h(s_h)$ and next state $s_{h+1}$. \mathbb{S}tate $\hat{\mathbf{V}} \leftarrow \hat{\mathbf{V}} + \mathbf{r}_h(s_h, a_h, b_h)$. \mathbb{S}tate $N_{h}(s_h, a_h, b_h)\leftarrow N_{h}(s_h, a_h, b_h) + 1$. \mathbb{S}tate $N_h(s_h, a_h, b_h, s_{h+1}) \leftarrow N_h(s_h, a_h, b_h, s_{h+1}) + 1$ \mathbb{S}tate $\hat{\mathbb{P}}_h(\cdot|s_h, a_h, b_h)\leftarrow \frac{N_h(s_h, a_h, b_h, \cdot)}{N_h(s_h, a_h, b_h)}$. \mathbb{E}ndFor \mathbb{S}tate $\mathbf{W} \leftarrow ((k-1)\mathbf{W}+\hat{\mathbf{V}})/k$. \langlebel{line:model_update_end} \mathbb{S}tate $\bm{\theta} \leftarrow \textsc{Dual-Update}{}(\mathbf{W},\mathbb{W}^{\mathrm{s.t.~}ar},\hat{\mathbf{V}})$ \langlebel{line:dual_update} \mathbb{E}ndFor \vect{v}ect{e}nd{algorithmic} \vect{v}ect{e}nd{algorithm} MOMA is partitioned into into three components: \begin{list}{–}{\leftmargin=1.5em} \setlength\itemsep{0em} \item \textbf{Planning} (Line 3): In each episode, we convert the vector-valued MG into a scalar-valued one by projecting onto the direction specified by the dual variable $\bm{\theta}$ and by computing the policy $\pi$. 
\item \textbf{Model Update} (Line 4 to 13): We accumulate the (vector-valued) return in each episode in $\hat{\mathbf{V}}$, and $\mathbf{W}$ is the average cumulative return. Then, we update the empirical estimators of the transition kernel. \item \textbf{Dual Update} (Line 14): Finally, we need to determine which direction we want to project the vector-valued MG onto in the next episode. \vect{v}ect{e}nd{list} Notice that $\pi$ actually defines policies for both players, but we only execute it for the agent. Let $\mu_h(\cdot| s_h)$ and $\omega _h(\cdot| s_h)$ be the marginal distributions of $\pi_h(\cdot, \cdot| s_h)$. Then action $a_h$ is indeed sampled from the marginal $\mu_h(\cdot| s_h)$, while $b_h$ is sampled from $\nu_h(\cdot| s_h)$, which is not necessarily equal to $\omega _h(\cdot| s_h)$. Using this notation, we can observe that $\hat{\mathbf{V}}$ is unbiased in the sense that $\mathbb{E}[\mathbf\bm{\theta} \cdot \hat{\mathbf{V}}] = V_1^{\mu ,\omega}\left(\bm{\theta}, s_1 \right) $. The idea behind Algorithm~\ref{algorithm:Blackwell-VI} is simple: In each episode we fix a direction and try to approach the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$. In this way, we can reduce the problem to a scalar-valued MG and benefit from existing work on scalar-valued MGs~\cite{bai2020provable, xie2020learning, bai2020near,liu2020sharp}. The implementation of model updates is described in Algorithm~\ref{algorithm:Blackwell-VI}. The other two sub-procedures vary slightly in different settings as follows: \begin{list}{–}{\leftmargin=1.5em} \setlength\itemsep{0em} \item \textbf{\textsc{Planning}{}}: A planning algorithm to determine the policy $\pi$ based on the current estimated transition kernel $\hat{\mathbb{P}}$. For MGs we will use \textsc{VI-Hoeffding}{} (Algorithm~\ref{alg:VI-Hoeffding}). For MDPs, we can design a finer \textsc{VI-Bernstein}{} (Algorithm~\ref{alg:VI-Bernstein}) to achieve a sharper convergence rate. In Line 11 of \textsc{VI-Hoeffding}{}, we use \mathbb{N}ASH{} to denote computing the minimax policy w.r.t. a \vect{v}ect{e}mph{matrix} game, which is standard in model-based method for MGs \cite{bai2020provable,xie2020learning, liu2020sharp}. \item \textbf{\textsc{Dual-Update}{}}: A dual update algorithm to update the variable $\bm{\theta}$, which describes the direction to approach $\mathbb{W}^{\mathrm{s.t.~}ar}$ in the next episode. We propose two different candidates: (\ref{equ:PDU}) and (\ref{equ:ODU}) in the following two sections. A variant of \textsc{Projection-free-Dual-Update}{}, \textsc{Double-Dual-Update}{} is proposed in Section~\ref{sec:satisfiable} to simutaneously optimize a cost function. \vect{v}ect{e}nd{list} \begin{algorithm}[h] \caption{VI-Hoeffding ({\textsc{VI-Hoeffding}{}})} \langlebel{alg:VI-Hoeffding} \begin{algorithmic}[1] \For{step $h=H,H-1,\dots,1$} \langlebel{line:VI_start} \For{$(s, a, b)\in\mathcal{S}\times\mathcal{A}\times \mathcal{B}$} \mathbb{S}tate $t \leftarrow N_{h}(s, a, b)$. \mat{I}f{$t>0$} \mathbb{S}tate $r_h(s,a,b)=\bm{\theta} \cdot \mathbf{r}_h(s,a,b)$; \mathbb{S}tate $\beta \leftarrow c\sqrt{\min\{d,S\}H^2d\iota/t}$. \mathbb{S}tate $Q_{h}(s, a, b)\leftarrow \max\{(r_h + \vect{v}ect{w}idehat{\mathbb{P}}_{h} V_{h+1})(s, a, b) - \beta, -\sqrt{d}H\}$. \mathbb{E}ndIf \mathbb{E}ndFor \For{$s \in \mathcal{S}$} \mathbb{S}tate $\pi_h(\cdot, \cdot|s) \leftarrow \mathbb{N}ASH (Q_h(s, \cdot, \cdot))$. \mathbb{S}tate $V_h(s) \leftarrow (\mathbb{D}_{\pi_h}Q_h)(s)$. 
\mathbb{E}ndFor \mathbb{E}ndFor \vect{v}ect{e}nd{algorithmic} \vect{v}ect{e}nd{algorithm} \section{Projection-based Dual Update} \langlebel{sec:projection} We begin with the most intuitive way to choose the dual variable: follow the direction that minimizes the distance of a candidate vector $\mathbf{W}$ to the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$: \begin{equation} \langlebel{equ:PDU} \bm{\theta} \leftarrow \begin{cases} \frac{\mathbf{W}-\mathbb{P}i _{\mathbb{W}^{\mathrm{s.t.~}ar}}\left( \mathbf{W} \right) }{\left\| \mathbf{W}-\mathbb{P}i _{\mathbb{W}^{\mathrm{s.t.~}ar}}\left( \mathbf{W} \right) \right\|_2 }, \text{if}\ \mathbf{W} \notin \mathbb{W}^{\mathrm{s.t.~}ar},\\ \text{any unit vector}, \,\, \text{otherwise}. \vect{v}ect{e}nd{cases} \tag{\textsc{Projection-based-Dual-Update}{}} \vect{v}ect{e}nd{equation} To find this direction, we need to compute the orthogonal projection onto $\mathbb{W}^{\mathrm{s.t.~}ar}$, thus we call it \textsc{Projection-based-Dual-Update}{}. To give theoretical guarantees, we will prove upper bounds on the Euclidean distance from our average cumulative return in the first $K$ episodes $\mathbf{W}^K$ to the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$. If $\mathbb{W}^{\mathrm{s.t.~}ar}$ is approachable, $\mathrm{dist}(\mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar})$ will converge to zero. \begin{theorem} \langlebel{thm:basic-approachable} Following \textsc{MOMA}{} with \textsc{VI-Hoeffding}{} (Algorithm~\ref{alg:VI-Hoeffding}) for \textsc{Planning}{} and \textsc{Projection-based-Dual-Update}{} for \textsc{Dual-Update}{}, if $\mathbb{W}^\mathrm{s.t.~}ar$ is approachable, with probability $1-p$, \begin{align*} \mathrm{dist}(\mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar}) \le \mathcal{O}\bigl( \sqrt{\min\{d,S\}dH^4SAB\iota/K}\bigr), \vect{v}ect{e}nd{align*} where $\iota =\log(SABKH/p)$. \vect{v}ect{e}nd{theorem} The approachability condition (Assumption~\ref{ass:Approachability}) is standard in the literature \cite{blackwell1956analog}. However in practice, the desired target set $\mathbb{W}^\mathrm{s.t.~}ar$ may rarely also happen to be approachable (since it is chosen to meet the needs of an application, not to meet our demands on approachability). In this case, one may be unable to guarantee $\mathrm{dist}( \mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar} )$ converges to zero, but can only minimize the distance. A natural way to model this scenario is to assume $\mathbb{W}^{\mathrm{s.t.~}ar}$ is $\delta$-approachable, whence the following Theorem~\ref{thm:basic-delta-approachable} applies. \begin{theorem} \langlebel{thm:basic-delta-approachable} If we use \textsc{VI-Hoeffding}{} (Algorithm~\ref{alg:VI-Hoeffding}) for \textsc{Planning}{} and (\ref{equ:PDU}) for \textsc{Dual-Update}{} in \textsc{MOMA}{}, and if $W^{*}$ is $\delta$-approachable, then with probability $1-p$, \begin{align*} \mathrm{dist} \left( \mathbf{W}^K,W^* \right) \le \delta + \mathcal{O}\left( \sqrt{\min\{d,S\}dH^4SAB\iota/K}\right) \vect{v}ect{e}nd{align*} where $\iota =\log \left( SABKH/p \right) $. \vect{v}ect{e}nd{theorem} \textbf{Remark.} Although we assume $\mathbb{W}^{\mathrm{s.t.~}ar}$ is $\delta$-approachable, the algorithm does not need to know $\delta$. Instead, we just run the same algorithm and the guarantee is adaptive. \paragraph{Rationale behind the criterion.} When characterizing the performance of our method, we choose to compete with $\delta$, the ``non-approachability gap''. 
This choice is simple and similar to the notion of regret used in scalar-valued MGs \cite{xie2020learning,tian2020provably}. One may aim to be more ambitious: compete with the best response in hindsight, as in \cite{mannor2014approachability} for the bandit (single-horizon) setting. Unfortunately, such a choice is not computationally feasible: it is computationally hard even for scalar-valued MGs; see \cite{bai2020near} for an exponential lower bound. \section{Projection-free Dual Update} \label{sec:OCO} The per-iteration computational bottleneck of \textsc{Projection-based-Dual-Update}{} is to compute the projection onto $\mathbb{W}^{\star}$, which requires solving a quadratic program and can be computationally demanding. However, if we can find $\argmax_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}\cdot \mathbf{x}$ efficiently (e.g., when $\mathbb{W}^{\star}$ is a polytope), then we can develop a computation-friendly dual update based on online convex optimization (OCO) techniques~\citep{abernethy2011blackwell, shimkin2016online}. To convey the intuition behind \textsc{Projection-free-Dual-Update}{}, we proceed via Fenchel duality. Consider a convex, closed, 1-Lipschitz function $f:\left[ 0,H \right]^d \rightarrow \mathbb{R}$. Its Fenchel conjugate is $$f^*\left( \bm{\theta} \right) :=\underset{\mathbf{x}\in \left[ 0,H \right]^d}{\max}\left\{ \bm{\theta}\cdot \mathbf{x}-f\left( \mathbf{x} \right) \right\}. $$ Then $f^*$ is $\sqrt{d}H$-Lipschitz by Corollary 13.3.3 in \cite{rockafellar1970convex}. Fenchel duality implies \begin{equation} \label{equ:duality} f\left( \mathbf{x} \right) =\underset{\left\| \bm{\theta} \right\| \le 1}{\max}\left\{ \bm{\theta}\cdot \mathbf{x}-f^*\left(\bm{\theta} \right) \right\}. \end{equation} In particular, if $f(\mathbf{x}) = \mathrm{dist}( \mathbf{x},\mathbb{W}^{\star})$, its Fenchel conjugate is $f^*(\bm{\theta}) = \max_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}\cdot \mathbf{x}$ and its subdifferential is $\partial f^*\left( \bm{\theta} \right) = \argmax_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}\cdot \mathbf{x}$. Therefore, we can use the dual representation to ``linearize'' the distance. That is, \begin{align*} K\mathrm{dist}(\mathbf{W}^K,\mathbb{W}^{\star}) = \max_{\left\| \bm{\theta} \right\| \le 1}\biggl\{ \bm{\theta}\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^K \max_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}\cdot \mathbf{x} \biggr\}. \end{align*} Ideally, if we could find the dual variable $\bm{\theta}^{\star}$ that maximizes the right-hand side above, minimizing the distance would be equivalent to minimizing a linear function of $\hat{\mathbf{V}}^k$, which can be handled as before if we use \textsc{VI-Hoeffding}{} as the planning algorithm. Although we cannot find $\bm{\theta}^{\star}$ directly, we can find a sequence of dual variables $\{\bm{\theta}^k\}_{k=1}^K$ such that $ \sum_{k=1}^K{\bigl\{ \bm{\theta}^k\cdot \mathbf{\hat{V}}^k - \max_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}^k\cdot \mathbf{x} \bigr\}}$ is close to $\max_{\left\| \bm{\theta} \right\| \le 1}\bigl\{ \bm{\theta}\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^K \max_{\mathbf{x}\in \mathbb{W}^{\star}} \bm{\theta}\cdot \mathbf{x} \bigr\}$. Producing such a sequence is precisely the task that online convex optimization (OCO) addresses.
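As a quick sanity check of the dual representation~\eqref{equ:duality} when $f$ is the distance to the target set, the following minimal sketch (illustrative Python; the box-shaped target set and all numbers are our own toy choices, not taken from the paper) compares the primal distance, computed by direct projection, with the support-function form. The maximizing direction of the dual form is exactly the direction selected by \textsc{Projection-based-Dual-Update}{}.
\begin{verbatim}
import numpy as np

# Toy check of  dist(x, W) = max_{||theta||<=1} { theta.x - max_{w in W} theta.w }.
# Assumption (illustration only): W is the box [0, H]^2 with H = 5.
H = 5.0
lo, hi = np.zeros(2), np.full(2, H)

def dist_primal(x):
    # Euclidean projection onto a box is coordinate-wise clipping.
    return np.linalg.norm(x - np.clip(x, lo, hi))

def support(theta):
    # max_{w in [lo, hi]^2} theta . w, solved coordinate-wise.
    return float(np.sum(np.where(theta > 0, hi, lo) * theta))

def dist_dual(x, n_grid=20000):
    # Approximate the max over the unit ball with a grid of unit vectors;
    # theta = 0 contributes the value 0, covered by the initialisation.
    best = 0.0
    for a in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        theta = np.array([np.cos(a), np.sin(a)])
        best = max(best, float(theta @ x) - support(theta))
    return best

for x in ([7.0, 2.0], [-1.0, 8.0], [3.0, 4.0]):
    x = np.asarray(x)
    print(x, dist_primal(x), dist_dual(x))
# The two distance columns agree up to the grid resolution.
\end{verbatim}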
The simplest solution is to use online subgradient method with step size $\vect{v}ect{e}ta^k= \sqrt{1/dH^2k}$. We define \textsc{Projection-free-Dual-Update}{} formally below: \begin{equation} \langlebel{equ:ODU} \bm{\theta}^{k+1}:=\mathbb{P}i_{\mathbb{B}^d}\bigl\{ \bm{\theta}^k+\vect{v}ect{e}ta^k \bigl(\mathbf{\hat{V}}^k-\partial f^*\bigl(\bm{\theta}^k \bigr) \bigr) \bigr\}, \tag{\textsc{Projection-free-Dual-Update}{}}\vect{v}space{4pt} \vect{v}ect{e}nd{equation} where $\mathbb{P}i_{\mathbb{B}^d}$ denotes projection onto the $d$-dimensional unit Euclidean ball and $\partial f^*\bigl(\bm{\theta}^k \bigr)$ is a subgradient vector of $f^*$ at $\bm{\theta}^k$ (not a set). Similarly, we provide theoretical guarantees for the new dual update rule. The proof is much simpler compared with that of Theorem~\ref{thm:basic-approachable} and Theorem~\ref{thm:basic-delta-approachable}. \begin{theorem} \langlebel{thm:OCO-delta-approachable} Following \textsc{MOMA}{} with \textsc{VI-Hoeffding}{} (Algorithm~\ref{alg:VI-Hoeffding}) for \textsc{Planning}{} and \textsc{Projection-free-Dual-Update}{} for \textsc{Dual-Update}{}, if $\mathbb{W}^{\mathrm{s.t.~}ar}$ is $\delta$-approachable, with probability $1-p$, \begin{align*} \mathrm{dist} \left( \mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar} \right) \le \delta+ \mathcal{O}\left( \sqrt{\min\{d,S\}dH^4SAB\iota/K}\right), \vect{v}ect{e}nd{align*} where $\iota =\log \left( SABKH/p \right) $. \vect{v}ect{e}nd{theorem} \section{Application to CMDPs: Near Optimal Rate} \langlebel{sec:MDP_upper} In this section, we apply our algorithmic framework to MDPs, which can be considered as a special case of MGs where the adversary cannot change the game. The stationary environment enables us to use the Bernstein-type concentration and achieve sharper dependence on the horizon $H$. The corresponding planning algorithm \textsc{VI-Bernstein}{} is formalized in Algorithm~\ref{alg:VI-Bernstein}. In Line 6 we use the empirical variance operator defined by $\vect{v}ect{w}idehat{\mathbb{V}}^k_h[V](s,a) :=\text{Var}_{s'\sim \vect{v}ect{w}idehat{\mathbb{P}}^k_h(\cdot|s,a)}V(s')$ for any function $V \in [-\sqrt{d}H,\sqrt{d}H]^{S}$. Notice that this approach does \vect{v}ect{e}mph{not} work for MGs, because we need to estimate the variance of the value function $V^{\mu ,\vect{v}ect{u}psilon}$, a task that is impossible when the adversary's policy $\vect{v}ect{u}psilon$ is unknown. \begin{algorithm}[h] \caption{\textsc{VI-Bernstein}{}} \langlebel{alg:VI-Bernstein} \begin{algorithmic}[1] \For{step $h=H,H-1,\dots,1$} \For{$(s, a)\in\mathcal{S}\times\mathcal{A}$} \mathbb{S}tate $t \leftarrow N_{h}(s, a)$. \mat{I}f{$t>0$} \mathbb{S}tate $r_h(s,a)=\bm{\theta} \cdot \mathbf{r}_h(s,a)$; \mathbb{S}tate { $\beta \leftarrow c\big(\sqrt{\hat{\mathbb{V}}_h \low{V}_{h+1}(s,a)\min\{d,S\}\iota/t} + \hat{\mathbb{P}}_{h}(\vect{v}ect{u}p{V}_{h+1}-\low{V}_{h+1})(s,a)/H + \min\{d,S\}\sqrt{d}H^2\iota/t\big)$. } \mathbb{S}tate { $\low{Q}_{h}(s, a)\leftarrow \max\{(r_h + \vect{v}ect{w}idehat{\mathbb{P}}_{h} \vect{v}ect{u}p{V}_{h+1})(s, a) - \beta, -\sqrt{d}H\}$.} \mathbb{S}tate {$\vect{v}ect{u}p{Q}_{h}(s, a)\leftarrow \min\{(r_h + \vect{v}ect{w}idehat{\mathbb{P}}_{h} \low{V}_{h+1})(s, a) + \beta, \sqrt{d}H\}$. } \mathbb{E}ndIf \mathbb{E}ndFor \For{$s \in \mathcal{S}$} \mathbb{S}tate $\pi_h(s) \leftarrow \argmin (\low{Q}_h(s, \cdot))$. \mathbb{S}tate $\low{V}_h(s) \leftarrow \low{Q}_h(s, \pi_h(s)), \vect{v}ect{u}p{V}_h(s) \leftarrow \vect{v}ect{u}p{Q}_h(s, \pi_h(s))$. 
\mathbb{E}ndFor \mathbb{E}ndFor \vect{v}ect{e}nd{algorithmic} \vect{v}ect{e}nd{algorithm} The sharper theoretical guarantee is as follows: \begin{theorem} \langlebel{thm:bernstein} If we use \textsc{VI-Bernstein}{} (Algorithm~\ref{alg:VI-Bernstein}) for \textsc{Planning}{} and \vect{v}ect{e}qref{equ:PDU} or \vect{v}ect{e}qref{equ:ODU} for \textsc{Dual-Update}{} in \textsc{MOMA}{}, and if $\mathbb{W}^{\mathrm{s.t.~}ar}$ is $\delta$-approachable, then with probability $1-p$, $$ \mathrm{dist}( \mathbf{W}^K, \mathbb{W}^*) \le \delta+ \mathcal{O}\bigl( \sqrt{\min\{d,S\}dH^3SA\iota/K}\bigr), $$ where $\iota =\log \left( SAKH/p \right)$. \vect{v}ect{e}nd{theorem} When $d \le S$ (as is in most cases), our result is minimax optimal up to log-factors in $S, A, H, K$ according to the lower bound $\Omega \left( \sqrt{H^3SA\iota/K}\right)$ proven in \citep{domingues2020episodic}. The tightness of our result in $d$ remains open. In particular, we can get a naive $\Omega \left( \sqrt{dH^3SA\iota/K}\right)$ lower bound by duplicating the negative MDP example from \cite{domingues2020episodic} $d$ times in $d$ dimensions, and the distance naturally scales up by $d$. With such a lower bound, there is still a $\sqrt{d}$ gap open. More details on the difficulty of providing a tigher lower bound are discussed in Section~\ref{sec:conclusion}. The upper bound in Theorem~\ref{thm:bernstein} allows us to find a policy that approaches the target set $\mathbb{W}^\mathrm{s.t.~}ar$ efficiently. Next, we generalize the result to the constrained MDP setting where we want to simultaneously minimize a cost function. \subsection{Optimizing a Cost Function Simultaneously} \langlebel{sec:satisfiable} In this section, we show how to extend our algorithm to the constrained MDP setup \citep{efroni2020exploration, ding2020provably, qiu2020upper, brantley2020constrained}, in which one wants to simultaneously minimize a cost function $g : \mathbb{R}^d \to [0,1]$ defined on the return vector space. The goal is two-fold: (i) satisfy constraints defined by the target set; and (ii) minimize the cumulative cost. Note that our setup \vect{v}ect{e}mph{subsumes} the canonical cost function in which the cost function is defined on the state-action pair (e.g., \citep{efroni2020exploration}). Particularly, we can add an extra coordinate in the return vector space to denote the cost for each state-action pair, and pick $g$ to solely extract that cost coordinate. A more detailed comparison against constrained MDP setups from previous works can be found in Appendix~\ref{sec:cmdp-compare}. For our analysis, we assume that the cost function $g(\cdot)$ is $1$-Lipschitz and convex. Following~\citep{efroni2020exploration,ding2020provably, qiu2020upper, brantley2020constrained}, we also assume $\mathbb{W}^{\mathrm{s.t.~}ar}$ is satisfiable and that we want to compete with a policy $\mu^{\mathrm{s.t.~}ar}$ such that $\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1) \in \mathbb{W}^{\mathrm{s.t.~}ar}$. One might hope to bound the regret $\sum_{k=1}^K g(\hat{\mathbf{V}}^{k})-K g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))$. But this goal is hard. Its counterpart is unknown even in the bandit setup \citet{agrawal2014bandits}. 
Instead, we aim to upper bound both the regret $[g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))]$ and the constraint violation $\mathrm{dist}(\mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar}).$ \paragraph{Constraint geometry.} Toward achieving our aim, we need to impose some geometric requirements on the constraints that will help us quantify algorithmic complexity in a non-asymptotic manner. Previous works that use a primal-dual approach (e.g., \citep{efroni2020exploration,qiu2020upper,ding2020provably}) assume knowledge of explicit structure of the constraint set, concretely by requiring $\mathbb{W}^{\mathrm{s.t.~}ar} = \{ x \ \| \forall i, g_i(x) \le 0\}$. Subsequently, they control complexity of the constraint set by assuming Lipschitzness of the $g_i$ and a strong Slater condition, i.e., there is a strictly feasible interior point $x_0$ such that $g_i(x_0) \le -\vect{v}ect{e}psilonilon$ for a universal constant $\vect{v}ect{e}psilonilon > 0$. In contrast, we do not impose explicit structure on $\mathbb{W}^\mathrm{s.t.~}ar$. Instead, we assume that we can solve linear or quadratic optimization over $\mathbb{W}^\mathrm{s.t.~}ar \subset \mathbb{R}^d$. A naive way to cast our setup into the previous form would be use the inequality $g_0(\cdot) := \mathrm{dist}(\cdot, \mathbb{W}^{\mathrm{s.t.~}ar}) \le 0$. But since $g_0$ is a distance function, we cannot satisfy the strict interiority condition needed by the previous setup. Consequently, we need to limit the complexity of our constraint set through a more refined alternative. To this end, we propose a geometric condition. In particular, we assume that the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$ intersects with the set of achievable value vectors $\mathcal{V} = \{{\mathbf{V}}^\pi_1(s_1) |\ \text{any policy $\pi$}\}$ \vect{v}ect{e}mph{nonsingularly}---Figure~\ref{fig:intersection} illustrates this concept. Formally, denote the set of achievable returns within the target set as $\mathcal{W} = \mathcal{V} \cap \mathbb{W}^{\mathrm{s.t.~}ar}$ and $\partial \mathcal{W} = \partial \mathcal{V} \cap \partial\mathbb{W}^{\mathrm{s.t.~}ar}$ as the intersection of the boundaries of $\mathbb{W}^{\mathrm{s.t.~}ar}$ and the achievable value vector set $\mathcal{V}$. Then, Assumption~\ref{assump:angle} describes nonsingular intersection. \begin{assumption}\langlebel{assump:angle} If $\partial \mathcal{W}$ is not empty, then for each vector $\mathbf{W} \in \partial \mathcal{W}$, denote the maximum angle $\alpha \in [0, \pi]$ between the support vectors $\vect{v}ec{a}$ of $\mathbb{W}^{\mathrm{s.t.~}ar}$ at $\mathbf{W}$ and the support vectors $\vect{v}ec{b}$ of $\mathcal{V}$ at $\mathbf{W}$ as \begin{align*} \alpha(\mathbf{W}) := \min\{\angle(\vect{v}ec{a}, \vect{v}ec{b})\ |\ \vect{v}ec{a}, \vect{v}ec{b}\ \text{are support} \text{vectors of sets}\ \mathbb{W}^{\mathrm{s.t.~}ar}\ \text{and}\ \mathcal{V}\ \text{at}\ \mathbf{W}\}. \vect{v}ect{e}nd{align*} We assume there exists a constant $\alpha_{\max} \in [\pi/2, \pi)$ such that $\max_{w \in \partial\mathcal{W}} \alpha(w) < \alpha_{\max}$. With this upper bound on $\alpha$, we denote $\vect{v}ect{g}amma_{\min}= \sin(\pi - \alpha_{\max}) > 0$. \vect{v}ect{e}nd{assumption} \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{intersection.png} \caption{The target set intersects with the achievable return vectors nonsingularly. The angle $\alpha(\prod_\mathcal{W} \mathbf{W}^k)$ is upper bounded. 
}\label{fig:intersection} \end{figure} Assumption~\ref{assump:angle} excludes the case where the sets $\mathcal{V}$ and $\mathbb{W}^{\star}$ intersect tangentially (i.e., share the same supporting hyperplane), resulting in $\alpha = \pi$. The necessity of such a geometric assumption is discussed in Appendix~\ref{sec:intersect-proof}. At a high level, Assumption~\ref{assump:angle} is a geometric analog of the previously noted strict interiority condition, which excludes a singular intersection of the constraint functions $g_i$. Our assumption provides a way to lower-bound the distance to the target set $\mathbb{W}^{\star}$ by the distance to the actual constraint set $\mathcal{W}=\mathcal{V}\cap \mathbb{W}^{\star}$, and thus prevents an algorithm from trading off too much constraint violation in exchange for a lower cost value $g(\mathbf{V}^k)$. To minimize the cost and avoid constraint violation simultaneously, we need a ``double'' version of the dual-variable update. This idea is formalized in \textsc{Double-Dual-Update}{} below: \begin{align} \bm{\varphi}^{k+1} & = \Pi _{\mathbb{B}^d}\bigl\{ \bm{\varphi }^k+\eta^k \bigl( \mathbf{\hat{V}}^k-\underset{\mathbf{x} \in \mathbb{W}^{\star}}{\arg\max}\ \bm{\varphi }^k \cdot \mathbf{x} \bigr) \bigr\}, \nonumber\\ \bm{\phi}^{k+1} &= \Pi _{\mathbb{B}^d}\bigl\{ \bm{\phi }^{k} +\eta^k \bigl( \mathbf{\hat{V}}^k -\partial g^{*}( \bm{\phi}^k) \bigr) \bigr\}, \nonumber\\ \bm{\theta}^{k+1} &= \rho \bm{\varphi }^{k+1} + \bm{\phi}^{k+1}, \tag{\textsc{Double-Dual-Update}{}} \end{align} where $\Pi_{\mathbb{B}^d}$ denotes projection onto the $d$-dimensional unit Euclidean ball and $\partial g^*\bigl(\bm{\phi}^k \bigr)$ is a subgradient vector of $g^*$ at $\bm{\phi}^k$ (not a set). \begin{table}[t!]
\centering \resizebox{0.95\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Algorithms} & \textbf{Regret} & \textbf{Constraint Violation} & \makecell{\textbf{Nonlinear} \\ \textbf{Cost and} \\ \textbf{Constraints}} & \makecell{\textbf{Computionally} \\ \textbf{Efficient}} \\ \hline \makecell{OptCMDP-bonus \\ \cite{efroni2020exploration}}& $ \mathcal{\tilde{O}}\left(\sqrt{H^4S^2AK}\right)$ & $ \mathcal{\tilde{O}}\left(\sqrt{dH^4S^2AK}\right)$ & & \checkmark\\ \hline \cite{brantley2020constrained}& $ \mathcal{\tilde{O}}\left(\sqrt{H^3S^2AK}\right)$ & $ \mathcal{\tilde{O}}\left(\sqrt{d^3H^3S^2AK}\right)$ & \checkmark & \\ \hline \makecell{OptPD-CMDP \\ \cite{efroni2020exploration}}& $ \mathcal{\tilde{O}}\left(\sqrt{(S^2A + d^2)H^4K}\right)$ & $ \mathcal{\tilde{O}}\left(\sqrt{(S^2Ad^2 + d^3)H^4K}\right)$ & & \checkmark \\ \hline \makecell{OPDOP \\ \cite{ding2020provably}} &$ \mathcal{\tilde{O}}\left(\sqrt{H^5S^4A^2K}\right)$ & $ \mathcal{\tilde{O}}\left(\sqrt{H^5S^4A^2K}\right)$ & & \checkmark \\ \hline \makecell{UCPD \\ \cite{qiu2020upper}} &$ \mathcal{\tilde{O}}\left(\sqrt{H^5S^2AK}\right)$ & $ \mathcal{\tilde{O}}\left(\sqrt{H^5S^2AK}\right)$ & & \checkmark \\ \hline \textbf{This Paper} \cellcolor{gray!25}& $ \mathcal{\tilde{O}}\left( \sqrt{\min\{d,S\}dH^3SAK}\right)$ \cellcolor{gray!25} & $ \mathcal{\tilde{O}}\left( \sqrt{\min\{d,S\}dH^3SAK}\right)$ \cellcolor{gray!25} & \checkmark \cellcolor{gray!25} & \checkmark \cellcolor{gray!25}\\ \hline \vect{v}ect{e}nd{tabular}} \caption{Comparison with constrained MDP literature.} \langlebel{tab:comparison} \vect{v}ect{e}nd{table} Here comes our theoretical guarantee for both constraint violation and regret. \begin{theorem} \langlebel{thm:satisfiable} Following \textsc{MOMA}{} with \textsc{VI-Bernstein}{} (Algorithm~\ref{alg:VI-Bernstein}) for \textsc{Planning}{} and \textsc{Double-Dual-Update}{} for \textsc{Dual-Update}{}, if $\mathbb{W}^{\mathrm{s.t.~}ar}$ is approachable and $\mu^{\mathrm{s.t.~}ar}$ is a policy s.t. $\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1) \in \mathbb{W}^{\mathrm{s.t.~}ar}$, with probability $1-p$ we can bound the constraint violation and the regret respectively as follows: \begin{align*} \mathrm{dist}( \mathbf{W}^K,W^{\mathrm{s.t.~}ar}) & \le \mathcal{O}\bigl( \sqrt{\min\{d,S\}dH^3SA\iota/K}\bigr),\\ g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1)) & \le \mathcal{O}\bigl( \rho \sqrt{\min\{d,S\}dH^3SA\iota/K}\bigr), \vect{v}ect{e}nd{align*} where $\iota =\log \left( SAKH/p \right) $, $\rho = 2/\vect{v}ect{g}amma_{\min}$. \vect{v}ect{e}nd{theorem} Known results on constrained MDP problems do not share a common setup and hence make a precise comparison tricky. In short, our result aims to provide a \vect{v}ect{e}mph{computationally efficient} algorithm for non-linear constraints (target set) and a convex cost function (see Table~\ref{tab:comparison}). Please see Appendix ~\ref{sec:comparsion} for a more detailed discussion of the subtleties among different constrained MDP setups, and some minor modifications needed to unify the exposition. With the existing results, our result is significant in the following aspects: \begin{itemize} \setlength{\itemsep}{0em} \item First, our algorithm is the most general in terms of being able to handle non-linearity in the cost and constraints. The constrained MDP setting we study in Section~\ref{sec:satisfiable} is a direct generalization of~\citep{agrawal2014bandits}, and is closest to~\citep{brantley2020constrained}. 
While our constraint assumption is equivalent to the one in~\citep{brantley2020constrained}, our cost functions are more general. The domain of \citeauthor{brantley2020constrained}'s cost function is scalars, while that of ours is vectors. \item Furthermore, our proposed algorithm is \vect{v}ect{e}mph{computationally efficient} because we do not require solving a large-scale convex optimization sub-problem with the number of variables and constraints scaling as $\mathcal{O}(SAH)$ per iteration (see Table~\ref{tab:comparison}). Indeed, our algorithms only comprise planning and model update procedures with a total of $\mathcal{O}(S^2AH)$ basic algebraic updates in each episode, along with a dual space optimization procedure whose computational complexity is free of $S$, $A$ and $H$. \item Our bounds on regret and constraint violation are also the sharpest with respect to their dependence on the parameters $S$, $A$, and $K$. \vect{v}ect{e}nd{itemize} \section{Conclusion and Future Work}\langlebel{sec:conclusion} In this paper, we formulate online learning in vector-valued Markov games via the lens of approaching a fixed convex target set within which the vector-valued objective should lie. We provide efficient model-based algorithms as instances of a generic meta-algorithm. Two key ideas contribute to our algorithmic design: (i) reduction of the vector-valued Markov game to a scalar-valued one, where the scalarization is iteratively updated; and (ii) exploration of the environment strategically. For vector-valued MDPs, our algorithms, after some modifications, achieve a tight rate in approaching the target set (in terms of $S, A, H, K$), while simultaneously minimizing a convex cost function. Moreover, when the given target set is non-approachable, our algorithms automatically adapt to the degree of non-approachability. Several problems are left open. Currently, there is still a $\sqrt{d}$ gap ($d$ is the dimensionality of the vector-valued cost) between our upper bound and the lower bound. How to close this gap to achieve the minimax rate remains unknown. The challenge in providing a tighter lower bound is that estimating a discrete distribution under the $L^2$ distance does not get harder as the dimensionality increases. Since we use the Euclidean distance to measure the performance of our algorithms, we cannot get stronger dependence on $d$. Lower bounds such as the one in \citep{jin2020reward} use a multiple hypothesis testing approach successfully because they work with an $L^1$ loss, whereas we study the standard Euclidean loss. A second question is that our result in Section~\ref{sec:satisfiable} has somewhat worse dependence on $d$ and $\rho$ compared to previous results. We leave improving the dimension dependency as a future direction. Another future direction that is worth pursuing is that of redefining the notion of regret and error. Our work measures approachability error using the Euclidean distance. In practice, this choice may not be the only useful measure. Can we develop provably efficient algorithms under other geometries and measures of approachability? Answering this question might help exploit the geometry of the target set better, and potentially lead to tighter complexity analyses. \appendix \section{Proofs for Section~\ref{sec:basic} } \langlebel{sec:basic_proof} In this section, we give detailed proofs needed in Section~\ref{sec:basic}. 
Beginning with a recapitulation of the notation, we write $V^k$, $Q^k$, $\pi^k$, $\mu^k$, $\nu^k$ and $\bm{\theta}^k$ for the values, policies and dual vector at the \emph{beginning} of the $k$-th episode. In particular, $Q^k_h(s, a, b) = \bm{\theta}^k \cdot \mathbf{Q}^{\mu^k, \nu^k}_h(s, a, b)$, and $\hat{\mathbf{V}}^k$ is the cumulative reward in the $k$-th episode. Moreover, $N_h^k(s,a,b)$ is the number of times we have visited the state-action tuple $(s,a,b)$ at the $h$-th step before the $k$-th episode, and $N_h^k(s,a,b,s')$ is defined analogously. Using this notation, we can further define the empirical transition kernel and the exploration bonus by $ \widehat{\mathbb{P}}^k_h(s'|s, a, b):= N^k_h(s, a, b, s')/N^k_h(s, a, b) $ and $ \beta_h^k(s,a,b) := C\sqrt{\min\{d,S\}d\iota H^2/N_h^k(s,a,b)} $. We first give a uniform convergence guarantee, which will also be used later. The first simple lemma is from \cite{liu2020sharp}. \begin{lemma} \label{lem:lipschitz-matrix} Let $\mathbf{X},\mathbf{Y},\mathbf{Z}\in\mathbb{R}^{A\times B}$ and let $\Delta_n$ denote the $n$-dimensional simplex. Suppose $\left|\mathbf{X}-\mathbf{Y} \right|\le \mathbf{Z}$, where the inequality is entry-wise. Then \begin{equation} \left|\max_{\mu\in\Delta_A}\min_{\nu\in\Delta_B} \mu^{\top} \mathbf{X} \nu -\max_{\mu\in\Delta_A}\min_{\nu\in\Delta_B} \mu^{\top} \mathbf{Y} \nu \right| \le \max_{i,j} \mathbf{Z}_{ij}. \end{equation} \end{lemma} We also need the following lemma, which characterizes the dependence of $V^{\star}(\bm{\theta}^k,\cdot)$ on $\bm{\theta}^k$, in order to apply the covering argument. \begin{lemma} [Lipschitz property of $V^{\star}$] \label{lem:lipschitz-markov} For any $s \in \mathcal{S}$, $$ \left| V^{\star}_h(\bm{\theta},s)-V^{\star}_h(\bm{\theta}',s) \right|\le \sqrt{d} (H-h+1)\left\| \bm{\theta}-\bm{\theta}' \right\| _2 . $$ \end{lemma} \begin{proof} By Cauchy-Schwarz, $| V^{\star}_h(\bm{\theta},s)| \le \sqrt{d} (H-h+1)$. The rest of the proof follows by induction via the Bellman equation and Lemma~\ref{lem:lipschitz-matrix}. \end{proof} Equipped with this Lipschitz property, we are ready to prove a uniform concentration result. Notice $\mathbb{B}^d$ is the $d$-dimensional unit Euclidean ball centered at $0$. \begin{lemma}[Uniform Concentration of $V^{\star}(\bm{\theta},\cdot)$] \label{lem:uniform_V_star} Consider the value function class \begin{equation*} \mathcal{V}_{h+1} = \set{V:\mathcal{S}\to\mathbb{R}~\mid~V(\cdot)=V_{h+1}^{\star}\left( \bm{\theta},\cdot \right) ~\textrm{for some}~ \bm{\theta}\in \mathbb{B}^d}. \end{equation*} There exists an absolute constant $c$ such that, with probability at least $1-p$, for all $(s, a, b, k, h)$ and all $V\in \mathcal{V}_{h+1}$ we have: \begin{equation*} \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)V](s, a, b)} \le c \sqrt{\frac{\min\{d,S\} dH^2\iota }{N_h^k(s,a)}} , \end{equation*} where $\iota = \log(dSABKH /p)$ is a logarithmic factor. \end{lemma} \begin{proof} Let $\mathcal{D}_\epsilon$ be an $\epsilon$-covering of $\mathbb{B}^d$ in the $\mathcal{L}^2$ norm, i.e., for any $\bm{\theta} \in \mathbb{B}^d$ there exists $\hat{\bm{\theta}}\in \mathcal{D}_\epsilon$ such that $\left\| \bm{\theta}-\hat{\bm{\theta}} \right\| _2 \le \epsilon$.
For each $\hat{\bm{\theta}}\in \mathcal{D}_\epsilon$, we can define the corresponding value function $V_{h+1}^{\star}\left( \hat{\bm{\theta}},\cdot \right)$. In this way, by Lemma~\ref{lem:lipschitz-markov}, we can generate a set $\mathcal{V}_\epsilon$ which is a $\sqrt{d}H\epsilon$-covering of $\mathcal{V}_{h+1}$ in the infinity norm, i.e., for any $V \in \mathcal{V}_{h+1}$ there exists $\hat{V}\in \mathcal{V}_\epsilon$ such that for any $s \in \mathcal{S}$, $|V(s)-\hat{V}(s)| \le \sqrt{d}H\epsilon$. Since $|\mathcal{D}_\epsilon|\le (1/\epsilon)^d$, we also have $|\mathcal{V}_\epsilon|\le (1/\epsilon)^d$. Since $| V^{\star}_h(\bm{\theta},s)| \le \sqrt{d} (H-h+1)$, by the Hoeffding inequality and a union bound, with probability at least $1-p$, \begin{align*} \sup_{\hat{V}\in\mathcal{V}_\epsilon} \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)\hat{V}](s, a, b)} \le \mathcal{O} \left( \sqrt{\frac{d^2H^2\iota' }{N_h^k(s,a)}} \right), \end{align*} where $\iota' = \iota +\log (1/\epsilon)$. At the same time, for any $V\in\mathcal{V}_{h+1}$, there exists $\hat{V}\in\mathcal{V}_\epsilon$ such that $\sup_s |V(s) - \hat{V}(s)|\le \sqrt{d}H\epsilon$. Therefore, $$ \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)V](s, a, b)} \le \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)\hat{V}](s, a, b)} +\sqrt{d}H \epsilon. $$ Taking $\epsilon = d\iota/N_h^k(s,a)$ proves \begin{equation*} \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)V](s, a, b)} \le c \sqrt{\frac{d^2H^2\iota }{N_h^k(s,a)}} . \end{equation*} Similarly, we also have (see, for example, Lemma 12 in \cite{bai2020provable}) \begin{equation*} \abs{[(\hat{\mathbb{P}}^k_h - \mathbb{P}_h)V](s, a, b)} \le c \sqrt{\frac{ dSH^2\iota }{N_h^k(s,a)}}, \end{equation*} which completes the proof. \end{proof} Using the concentration result, we can prove that the ``lower confidence bounds'' are indeed lower bounds with high probability. To do this, we need to introduce a little more notation. Similar to $V^{\star}_h$, we can also define $Q^{\star}$. By the Bellman equation, $$Q^{\star}_h(\bm{\theta},s,a,b)=[\bm{\theta} \cdot \mathbf{r}_h+ \mathbb{P}_hV^{\star}_{h+1}(\bm{\theta},\cdot)](s,a,b).$$ \begin{lemma}[Upper confidence bound] \label{lem:ULCB} With probability $1-p$, for all $h,s,a,b$ and $k\in[K]$, we have \begin{equation} Q^{k}_h(s,a,b) \le Q^{\star}_h(\bm{\theta}^k,s,a,b), \,\,\,\, V^{k}_h(s) \le V^{\star}_h(\bm{\theta}^k,s). \end{equation} \end{lemma} \begin{proof} Again, the proof is by backward induction. Suppose the bounds hold for the Q-values at the $(h+1)$-th step; we now establish the bounds for the values at the $(h+1)$-th step and for the Q-values at the $h$-th step. Consider a fixed state $s$: \begin{equation} \begin{aligned} V^{k}_{h+1}(s)&= \mathbb{D}_{\pi^k_{h+1}} Q^{k}_{h+1}(s)\\ &= \min_{\upsilon} \mathbb{D}_{\mu^k_{h+1} \times \upsilon} Q^{k}_{h+1}(s)\\ &\le \min_{\upsilon} \mathbb{D}_{\mu^k_{h+1} \times \upsilon} Q^{\star}_{h+1}(\bm{\theta}^k,\cdot,\cdot,\cdot)(s)\\ &\le V^{\star}_{h+1}(\bm{\theta}^k,s). \end{aligned} \end{equation} Now consider a fixed triple $(s,a,b)$ at the $h$-th step.
We have \begin{equation}\label{eq:Q-decomposition} \begin{aligned} Q^{k}_h(s,a,b) - Q^{\star}_h(\bm{\theta}^k,s,a,b) \le &(\widehat{\mathbb{P}}_h^kV^{k}_{h+1} - \mathbb{P}_h V^{\star}_{h+1}(\bm{\theta}^k,\cdot) - \beta_h^k)(s,a,b)\\ \overset{\left( i \right)}{\le} & [(\widehat{\mathbb{P}}_h^k- \mathbb{P}_h) V^{\star}_{h+1}(\bm{\theta}^k,\cdot)] (s,a,b) - \beta_h^k(s,a,b)\\ \overset{\left( ii \right)}{\le}& 0, \end{aligned} \end{equation} where $(i)$ is by the induction hypothesis and $(ii)$ is by Lemma~\ref{lem:uniform_V_star} and the definition of $\beta$. \end{proof} A handy decomposition will help us simplify the quantity we want to bound in Theorem~\ref{thm:basic-approachable} and Theorem~\ref{thm:basic-delta-approachable}. To simplify the notation, when there is no confusion, we use the shorthand $V^{\mu ^k,\nu ^k}$ and $Q^{\mu ^k,\nu ^k}$ for $\bm{\theta}^k \cdot \mathbf{V}^{\mu ^k,\nu ^k}$ and $\bm{\theta}^k \cdot \mathbf{Q}^{\mu ^k,\nu ^k}$. \begin{lemma}[Regret decomposition] \label{lem:regret-decomposition} The ``regret'' $[V_{1}^{\mu ^k,\nu ^k}-V_{1}^{k}](s_{1}^{k})$ can be decomposed as $$ [V_{1}^{\mu ^k,\nu ^k}-V_{1}^{k}](s_{1}^{k})\le \sum_{h=1}^H{\left( \beta _{h}^{k}+\xi _{h}^{k}+\zeta _{h}^{k} \right)}, $$ where \begin{align*} \xi _{h}^{k}:=&\left(\mathbb{D}_{\mu ^k\times \nu ^k}Q_{h}^{\mu ^k,\nu ^k}- \mathbb{D}_{\mu ^k\times \nu ^k}Q_{h}^{k} \right) (s_{h}^{k})-\left( Q_{h}^{\mu ^k,\nu ^k}-Q_{h}^{k} \right) \left( s_{h}^{k},a_{h}^{k},b_{h}^{k} \right) \in \left[ -4\sqrt{d}H,4\sqrt{d}H \right], \\ \zeta _{h}^{k}:=&\mathbb{P}_h\left( V_{h+1}^{\mu ^k,\nu ^k}-V_{h+1}^{k} \right) \left( s_{h}^{k},a_{h}^{k},b_{h}^{k} \right) -\left( V_{h+1}^{\mu ^k,\nu ^k}-V_{h+1}^{k} \right) \left( s_{h+1}^{k} \right) \in \left[ -4\sqrt{d}H,4\sqrt{d}H \right] \end{align*} are martingale difference sequences adapted to $\mathcal{F}_h^k$. \end{lemma} \begin{proof} We have \begin{align*} [V_{h}^{\mu ^k,\nu ^k}-V_{h}^{k}](s_{h}^{k}) =&\left( \mathbb{D}_{\mu ^k\times \nu ^k}Q_{h}^{\mu ^k,\nu ^k}-\mathbb{D}_{\pi ^k}Q_{h}^{k} \right) (s_{h}^{k}) \\ \overset{\left( i \right)}{\le}&\left( \mathbb{D}_{\mu ^k\times \nu ^k}Q_{h}^{\mu ^k,\nu ^k}-\mathbb{D}_{\mu ^k\times \nu ^k}Q_{h}^{k} \right) (s_{h}^{k}) \\ =&\left( Q_{h}^{\mu ^k,\nu ^k}-Q_{h}^{k} \right) \left( s_{h}^{k},a_{h}^{k},b_{h}^{k} \right) +\xi _{h}^{k} \\ =&\mathbb{P}_h\left( V_{h+1}^{\mu ^k,\nu ^k}-V_{h+1}^{k} \right) \left( s_{h}^{k},a_{h}^{k},b_{h}^{k} \right) +\beta _{h}^{k}+\xi _{h}^{k} \\ =&\left( V_{h+1}^{\mu ^k,\nu ^k} -V_{h+1}^{k}\right) \left( s_{h+1}^{k} \right) +\beta _{h}^{k}+\xi _{h}^{k}+\zeta _{h}^{k}, \end{align*} where $(i)$ is by the definition of the Nash equilibrium. Repeating the recursion, we have $$ [V_{1}^{\mu ^k,\nu ^k}-V_{1}^{k}](s_{1}^{k})\le \sum_{h=1}^H{\left( \beta _{h}^{k}+\xi _{h}^{k}+\zeta _{h}^{k} \right)}. $$ \end{proof} The sum of the exploration bonus can be bounded easily.
\begin{lemma}[Sum of bonus] \label{lem:sum-of-bonus} $$ \sum_{k=1}^K{\sum_{h=1}^H{\beta _{h}^{k}}}\le O\left( \sqrt{\min\{d,S\}dH^4SABK\iota} \right) . $$ \end{lemma} \begin{proof} By the definition of $\beta$ and the pigeonhole principle, \begin{align*} \sum_{k=1}^K{\sum_{h=1}^H{\beta _{h}^{k}}}\le& \sum_{k=1}^K{\sum_{h=1}^H{O\left( \sqrt{\frac{\min\{d,S\}dH^2\iota}{N_{h}^{k}\left( s_{h}^{k},a_{h}^{k},b_{h}^{k} \right)}} \right)}} \\ \le& O\left( \sum_{h=1}^H{\sum_{s,a,b}^{}{\sum_{t=1}^{N_{h}^{K}\left( s,a,b \right)}{\sqrt{\frac{\min\{d,S\}dH^2\iota}{t}}}}} \right) \\ \le& O\left( \sum_{h=1}^H{\sum_{s,a,b}^{}{\sqrt{\min\{d,S\}dH^2\iota N_{h}^{K}\left( s,a,b \right)}}} \right) \\ \le& O\left( \sqrt{\min\{d,S\}dH^4SABK\iota} \right) . \end{align*} \end{proof} Now we are ready to prove Theorem~\ref{thm:basic-approachable} and Theorem~\ref{thm:basic-delta-approachable}. \begin{proof}[Proof of Theorem~\ref{thm:basic-approachable}] The squared distance can be decomposed as \begin{align*} \mathrm{dist}\left( \mathbf{W}^k,W^{\star} \right) ^2=&\lVert \mathbf{W}^k-\Pi _{W^{\star}}\left( \mathbf{W}^k \right) \rVert _{2}^{2} \\ \overset{\left( i \right)}{\le} & \lVert \mathbf{W}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \rVert _{2}^{2} \\ =&\lVert \frac{k-1}{k}\mathbf{W}^{k-1}+\frac{1}{k}\mathbf{\hat{V}}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \rVert _{2}^{2} \\ =&\left( \frac{k-1}{k} \right) ^2\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\star} \right) ^2+\frac{1}{k^2}\underbrace{\left\| \mathbf{\hat{V}}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \right\| _{2}^{2}}_{\left( A \right)}\\ &+\frac{2\left( k-1 \right)}{k^2}\underbrace{\left( \mathbf{W}^{k-1}-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \right) \cdot \left( \mathbf{\hat{V}}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \right) }_{\left( B \right)}, \end{align*} where $(i)$ is by the definition of the (Euclidean) projection. By boundedness of the returns, $(A)=\lVert \mathbf{\hat{V}}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \rVert _{2}^{2}\le dH^2$. To bound $(B)$, we notice that if $\mathbf{W}^{k-1} \in W^{\star}$, then $\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\star} \right) = 0$ and $$ (B)= 0 = \mathrm{dist}\left( \mathbf{W}^{k-1},W^{\star} \right) \bm{\theta}^k\cdot \left(\mathbf{\hat{V}}^k-\Pi _{W^{\star}}\left( \mathbf{W}^{k-1} \right) \right) $$ for any $\bm{\theta}^k$.
Otherwise, \begin{align*} (B)= &\left( \mathbf{W}^{k-1}-\mathbb{P}i _{W^{\mathrm{s.t.~}ar}}\left( \mathbf{W}^{k-1} \right) \right) \cdot \left( \mathbf{\hat{V}}^k-\mathbb{P}i _{W^{\mathrm{s.t.~}ar}}\left( \mathbf{W}^{k-1} \right) \right) \\ =&\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \bm{\theta}^k\cdot \left( \mathbf{\hat{V}}^k-\mathbb{P}i _{W^{\mathrm{s.t.~}ar}}\left( \mathbf{W}^{k-1} \right) \right) \\ \overset{\left( i \right)}{\le}&\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left( \bm{\theta}^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mathrm{s.t.~}ar}\left( \bm{\theta}^k,s_{1}^{k} \right) \right) \\ \overset{\left( ii \right)}{\le}&\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left(\bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{k}\left( s_{1}^{k} \right) \right) \\ =&\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left[ \left( V_{1}^{\mu ^k,\nu ^k}-V_{1}^{k} \right) \left( s_{1}^{k} \right) +\left( \bm{\theta}^k\cdot \mathbf{\hat{V}}^k -V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right)\right) \right] \\ \overset{\left( iii \right)}{\le}&\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left[ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left( \bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right)\right) \right] \vect{v}ect{e}nd{align*} where $(i)$ is by Approachability (Assumption~\ref{ass:Approachability}), $(ii)$ is by optimism (Lemma~\ref{lem:ULCB}) and $(iii)$ is by regret decomposition (Lemma~\ref{lem:regret-decomposition}). Putting everything together and repete the recursion we have $$ K\mathrm{dist}\left( \mathbf{W}^K,W^* \right) ^2\le dH^2+2\sum_{k=1}^K{\frac{k-1}{K}}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left[ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left(\bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] $$ Now we can begin to prove the theorem by induction. Suppose $$ \mathrm{dist} \left( \mathbf{W}^k,W^{\mathrm{s.t.~}ar} \right) \le c_0 \sqrt{\min\{d,S\}dH^4SAB\iota/k}. $$ for $\forall k \le K-1$, let's prove the claim holds for $k=K$. We first consider the optimistic bonus. \begin{align*} \sum_{k=1}^K{\sum_{h=1}^H{\frac{k-1}{K}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \beta _{h}^{k}}}\le& c_0\sqrt{\min\{d,S\}dH^4SAB\iota}\sum_{k=1}^K{\sum_{h=1}^H{\frac{\sqrt{k}}{K}\beta _{h}^{k}}} \\ \le& c_0\sqrt{\min\{d,S\}dH^4SAB\iota /K}\sum_{k=1}^K{\sum_{h=1}^H{\beta _{h}^{k}}} \\ \overset{\left( i \right)}{\le}&c_0c_1\min\{d,S\}dH^4SAB\iota \vect{v}ect{e}nd{align*} where $(i)$ is by Lemma~\ref{lem:sum-of-bonus} and $c_1$ is the constant coefficient there. The remaining terms are martingale difference sequence, so we only need to bound the variance. \begin{align*} \sum_{k=1}^K{\sum_{h=1}^H{\frac{k-1}{K}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \vect{v}ect{x}i _{h}^{k}}}\overset{\left( i \right)}{\le}&c_2\sqrt{\sum_{k=1}^K{\left( \frac{k-1}{K} \right) ^2}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) ^2dH^3\iota} \\ \le& c_2c_0\sqrt{\min\{d,S\}d^2H^4SAB\iota ^2}\sqrt{\sum_{k=1}^K{\frac{kH^3}{K^2}}} \\ \le& c_0c_1\sqrt{\min\{d,S\}d^2H^7SAB\iota ^2} \vect{v}ect{e}nd{align*} where $(i)$ is by Azuma-Hoeffding. 
Similarly, $\sum_{k=1}^K{\sum_{h=1}^H{\frac{k-1}{K}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \vect{v}ect{z}eta _{h}^{k}}}\le c_2c_0\sqrt{\min\{d,S\}d^2H^7SAB\iota ^2}$. The last term can be handled similarly but we need to be more carefully because different coordinates of $\mathbf{\hat{V}}^k$ are correlated. \begin{align*} \sum_{k=1}^K{\frac{k-1}{K}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left(\bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right)} \le& c_2\sum_{j=1}^m{\sqrt{\sum_{k=1}^K{\left( \frac{k-1}{K} \right) ^2}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) ^2\left( \bm{\theta}^k_j \right) ^2H^2\iota}} \\ \le& c_2c_0\sqrt{\min\{d,S\}dH^4SAB\iota ^2}\sum_{j=1}^m{\sqrt{\sum_{k=1}^K{\left( \bm{\theta}^k_j \right) ^2\frac{kH^2}{K^2}}}} \\ \overset{\left( i \right)}{\le}& c_2c_0\sqrt{\min\{d,S\}dH^4SAB\iota ^2}\sqrt{m\sum_{j=1}^m{\sum_{k=1}^K{\left( \bm{\theta}^k_j \right) ^2\frac{kH^2}{K^2}}}} \\ =&c_2c_0\sqrt{\min\{d,S\}d^2H^4SAB\iota ^2}\sqrt{\sum_{k=1}^K{\frac{kH^2}{K^2}}} \\ \le& c_2c_0\sqrt{\min\{d,S\}d^2H^6SAB\iota ^2} \vect{v}ect{e}nd{align*} where $(i)$ is by Cauchy-Schwarz. After taking a union bound w.r.t. $[K]$, to prove the claim for $k=K$, we only need to guarantee $$ dH^2+8c_0\max \left\{ c_1,c_2 \right\} \min\{d,S\}dH^4SAB\iota \le c_0^2\min\{d,S\}dH^4SAB\iota $$ which is satisfied as long as $c_0\vect{v}ect{g}e \max \left\{ 16\max \left\{ c_1,c_2 \right\} ,\sqrt{\frac{2}{SABH^2\iota}} \right\}$. \vect{v}ect{e}nd{proof} We can prve Theorem~\ref{thm:basic-delta-approachable} similarly. \begin{proof}[Proof of Theorem~\ref{thm:basic-delta-approachable}] As in the proof of Theorem~\ref{thm:basic-approachable} we have $$ K\mathrm{dist}\left( \mathbf{W}^K,W^* \right) ^2\le dH^2+2\sum_{k=1}^K{\frac{k-1}{K}}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left[\delta+ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left(\bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] $$ Again we prove the theorem by induction. Suppose $$ \mathrm{dist} \left( \mathbf{W}^k,W^{\mathrm{s.t.~}ar} \right) \le \delta+ c_0 \sqrt{\min\{d,S\}dH^4SAB\iota/k} $$ for $\forall k \le K-1$, let's prove the claim holds for $k=K$. Now we have a new term to bound, which is $$ 2\delta \sum_{k=1}^K{\frac{k-1}{K}}\left[ \delta +c_0\sqrt{\min \{d,S\}dH^4SAB\iota /k}+\sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left( \boldsymbol{\theta }^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] =\left( A \right) +\left( B \right) +\left( C \right) $$ where $$ \left( A \right) =2\delta ^2\sum_{k=1}^K{\frac{k-1}{K}}\le \left( K-1 \right) \delta ^2, $$ $$ \left( B \right) =2\delta \sum_{k=1}^K{\frac{k-1}{K}}c_0\sqrt{\min \{d,S\}dH^4SAB\iota /k}\le \frac{4}{3}c_0\delta \sqrt{\min \{d,S\}dH^4SAB\iota K}, $$ and by Lemma~\ref{lem:sum-of-bonus} and Azuma-Hoeffding inequality $$ \left( C \right) =2\delta \sum_{k=1}^K{\frac{k-1}{K}}\left[ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left( \boldsymbol{\theta }^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] \le c_1c_0\delta \sqrt{\min \{d,S\}dH^4SAB\iota K}. 
$$ To prove the induction hypothesis, we only need to guarantee \begin{align*} &H^2+8c_0\max \left\{ c_1,c_2 \right\} \min \{d,S\}dH^4SAB\iota +\left( K-1 \right) \delta ^2+c_1\delta \sqrt{\min \{d,S\}dH^4SAB\iota K} \\ \le& K\left[ \delta +c_0\sqrt{\min \{d,S\}dH^4SAB\iota /K} \right] ^2. \vect{v}ect{e}nd{align*} Comparing the coefficients, we can see this is satisfied by setting $c_0\vect{v}ect{g}e \max \left\{ c_1,\sqrt{\frac{2}{SABH^2\iota}} \right\}$ . \vect{v}ect{e}nd{proof} \section{Proof for Section~\ref{sec:OCO} } \begin{proof}[Proof of Theorem~\ref{thm:OCO-delta-approachable}] The guarantee of online sub-gradient descent yields with high probability \begin{align} \langlebel{equ:sgd-regret} \vect{v}ect{u}nderset{\left\| \bm{\theta} \right\| \le 1}{\max}\left\{ \bm{\theta}\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^K\vect{v}ect{u}nderset{\mathbf{x}\in \mathbb{W}^{\mathrm{s.t.~}ar}}{\max}\bm{\theta}\cdot \mathbf{x} \right\} \le \sum_{k=1}^K{\left\{ \bm{\theta}^k\cdot \mathbf{\hat{V}}^k-\sum_{k=1}^K\vect{v}ect{u}nderset{\mathbf{x}\in \mathbb{W}^{\mathrm{s.t.~}ar}}{\max}\bm{\theta}^k\cdot \mathbf{x} \right\}}+\mathcal{O}\left( \sqrt{dH^2K} \right) . \vect{v}ect{e}nd{align} Since $f\left( \mathbf{x} \right) =\mathrm{dist}\left( \mathbf{x},\mathbb{W}^{\mathrm{s.t.~}ar} \right) $ is a closed, 1-Lipschitz convex function, the dual representation implies \begin{align*} &K\mathrm{dist}\left( \mathbf{W}^k,W^{\mathrm{s.t.~}ar} \right) \\ =&\vect{v}ect{u}nderset{\left\| \bm{\theta} \right\| \le 1}{\max}\left\{ \bm{\theta}\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^K\vect{v}ect{u}nderset{\mathbf{x}\in W^{\mathrm{s.t.~}ar}}{\max}\bm{\theta}\cdot \mathbf{x} \right\} \\ \le & \sum_{k=1}^K{\left\{ \bm{\theta}^k\cdot \mathbf{\hat{V}}^k-\sum_{k=1}^K\vect{v}ect{u}nderset{\mathbf{x}\in W^{\mathrm{s.t.~}ar}}{\max}\bm{\theta}^k\cdot \mathbf{x} \right\}}+\mathcal{O}\left( \sqrt{dH^2K} \right) \\ \overset{\left( i \right)}{\le}&\sum_{k=1}^K{\left\{ \bm{\theta}^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mathrm{s.t.~}ar}\left( \bm{\theta}^k,s_1 \right) +\delta \right\}}+\mathcal{O}\left( \sqrt{dH^2K} \right) \\ \overset{\left( ii \right)}{\le}&K\delta +\sum_{k=1}^K{\bm{\theta}^k\cdot \left( \mathbf{\hat{V}}^k-\mathbf{V}_1^{\pi ^k,\mu ^k}\left( s_1 \right) \right)}+\sum_{k=1}^K{\left\{ \bm{\theta}^k\cdot \mathbf{V}_1^{\pi ^k,\mu ^k}\left( s_1 \right) -V_{1}^{k}\left( s_1 \right) \right\}}+\mathcal{O}\left( \sqrt{dH^2K} \right) \\ \overset{\left( iii \right)}{\le}&K\delta +\sum_{k=1}^K{\bm{\theta}^k\cdot \left( \mathbf{\hat{V}}^k-\mathbf{V}_1^{\pi ^k,\mu ^k}\left( s_1 \right) \right)}+\sum_{k=1}^K{\sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}}+\mathcal{O}\left( \sqrt{dH^2K} \right) \\ \overset{\left( iv \right)}{\le}&K\delta +\mathcal{O}\left( \sqrt{\min\{d,S\}dH^4SABK\iota} \right) \vect{v}ect{e}nd{align*} where $(i)$ is by $\delta$-approachability, $(ii)$ is by Lemma~\ref{lem:ULCB}, $(iii)$ is by Lemma~\ref{lem:regret-decomposition} and $(iv)$ is by Lemma~\ref{lem:sum-of-bonus} and Azuma-Hoeffding inequality. The claim is proved by taking the union bound with the event that \vect{v}ect{e}qref{equ:sgd-regret} holds. \vect{v}ect{e}nd{proof} \section{Comparison with CMDP literature.} \langlebel{sec:cmdp-compare} \langlebel{sec:comparsion} We compare our results with existing works on provably efficient algorithms for CMDP \cite{efroni2020exploration,ding2020provably,brantley2020constrained} in Table~\ref{tab:comparison}. 
Since the settings in these works differ slightly, we unify the results as follows: \begin{list}{–}{\leftmargin=1.5em} \setlength\itemsep{0em} \item When measuring constraint violation, \citet{efroni2020exploration, ding2020provably, brantley2020constrained} all consider the $\mathcal{L}^{\infty}$ norm. To compare with our result, we have translated their bounds to the $\mathcal{L}^2$ norm. \item Compared with the other algorithms in Table~\ref{tab:comparison}, OptCMDP-bonus actually uses an even stronger notion of constraint violation, summing up only the non-negative part of the violation in each coordinate. \citet{efroni2020exploration} also propose two more algorithms, OptCMDP and OptDual-CMDP, whose theoretical guarantees are similar to the ones we present and are thus omitted. \item OptPrimalDual-CMDP and OPDOP need an upper bound $\rho$ on the dual variable, obtained by assuming the target set is strictly achievable. We use the geometric assumption (Assumption~\ref{assump:angle}) instead, because constraints measured by a distance cannot be ``strictly'' satisfied (the distance function cannot be below zero). The dependence on $d$ and $\rho$ is not written explicitly in~\citep{ding2020provably}. \item \citet{ding2020provably} also consider the linear function approximation setting, and we translate their result to the tabular setting. \citet{brantley2020constrained} also consider a knapsack setting. \item \citet{qiu2020upper} consider MDPs with adversarial reward functions and linear constraints. They assume $\mathcal{S}_{i} \cap \mathcal{S}_j = \emptyset$ for $i\neq j$. Therefore, if $|\mathcal{S}_2| = \cdots = |\mathcal{S}_H| = S$ then $|\mathcal{S}| = (H - 1) S + 2 = \mathcal{O}(HS)$, according to which we translate their regret bound to $\mathcal{\tilde{O}}(\sqrt{H^5 S^2 A K})$ in our setting. \item \citet{qiu2020upper} and \citet{ding2020provably} also consider adversarial rewards, but they then require full-information feedback. Notice that to handle adversarial transition kernels, we still need the game-theoretic formulation from the previous sections. \end{list} \section{Proofs for Section~\ref{sec:MDP_upper}} Besides the notation we introduced at the beginning of Appendix~\ref{sec:basic_proof}, we also define the empirical and population variance operators by $$ \widehat{\mathbb{V}}^k_h[V](s,a) :=\text{Var}_{s'\sim \widehat{\mathbb{P}}^k_h(\cdot|s,a)}V(s'), \,\,\,\,\, \mathbb{V}_h[V](s,a) :=\text{Var}_{s'\sim \mathbb{P}_h(\cdot|s,a)}V(s') $$ for any function $V \in [-\sqrt{d}H,\sqrt{d}H]^{S}$. As a result, the bonus terms can be written as \begin{equation} \beta := C\big(\sqrt{\frac{\hat{\mathbb{V}}_h \low{V}_{h+1}(s,a)\min\{d,S\}\iota}{N_h^k(s,a)}} +\frac{1}{H} \hat{\mathbb{P}}_{h}(\up{V}_{h+1}-\low{V}_{h+1})(s,a) +\frac{ \min\{d,S\}\sqrt{d}H^2\iota}{N_h^k(s,a)}\big) \end{equation} for some absolute constant $C>0$, which is \emph{different} from the one we used in Appendix~\ref{sec:basic_proof}. Another major difference from Appendix~\ref{sec:basic_proof} is that we are now considering an MDP instead of an MG.
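To make the role of the empirical variance operator concrete, the following minimal sketch (illustrative Python; the visit counts, value estimates, and constant $C$ are made-up placeholders rather than quantities from the paper) computes $\widehat{\mathbb{V}}^k_h[V](s,a)$ and the Bernstein-style bonus displayed above for a single $(s,a)$ pair.
\begin{verbatim}
import numpy as np

# Illustrative computation of the empirical variance operator and the
# Bernstein-style bonus for one (s, a) pair; all numbers are placeholders.
d, S, H, C, iota = 3, 4, 10, 1.0, 5.0

counts = np.array([8, 2, 0, 5])         # N_h^k(s, a, s') for each next state s'
t = counts.sum()                        # N_h^k(s, a)
p_hat = counts / t                      # empirical transition P-hat_h^k(. | s, a)

V_low = np.array([1.0, 4.0, 2.0, 3.0])  # lower value estimates at step h + 1
V_up  = np.array([2.5, 5.0, 4.0, 3.5])  # upper value estimates at step h + 1

# Empirical variance operator: Var_{s' ~ p_hat}[ V_low(s') ].
mean = p_hat @ V_low
var_hat = p_hat @ (V_low - mean) ** 2

# Bernstein-style bonus, mirroring the displayed beta up to the constant C.
beta = C * (np.sqrt(var_hat * min(d, S) * iota / t)
            + (p_hat @ (V_up - V_low)) / H
            + min(d, S) * np.sqrt(d) * H ** 2 * iota / t)
print(var_hat, beta)
\end{verbatim}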
We still begin with optimism, which is an upper- and lower-bound version of Lemma~\ref{lem:ULCB}: \begin{lemma}\label{lem:optimism_Bernstein} With probability $1-p$, for all $h,s,a$ and $k\in[K]$, we have \begin{equation} \up{Q}^{k}_h(s,a) \ge Q^{\star}_h(s,a) \ge \low{Q}^{k}_h(s,a), \,\,\,\,\,\, \up{V}^{k}_h(s) \ge V^{\star}_h(s) \ge \low{V}^{k}_h(s). \end{equation} \end{lemma} \begin{proof} The proof is very similar to that of Lemma~\ref{lem:ULCB}. We only need to additionally bound the variance term using the induction hypothesis: \begin{equation*} \begin{aligned} &|\hat{\mathbb{V}}_h^k \low{V}^k_{h+1} - \hat{\mathbb{V}}_h^kV^{\star}_{h+1}|(s,a)\\ \le & |(\hat{\mathbb{P}}_h^k \low{V}^k_{h+1})^2-(\hat{\mathbb{P}}_h^kV^{\star}_{h+1})^2|(s,a)+|\hat{\mathbb{P}}_h^k (\low{V}^k_{h+1})^2-\hat{\mathbb{P}}_h^k(V^{\star}_{h+1})^2|(s,a) \\ \le& 4\sqrt{d}H\hat{\mathbb{P}}_h^k | V^{\star}_{h+1}-\low{V}^k_{h+1} |(s,a)\\ \le & 4\sqrt{d}H\hat{\mathbb{P}}_h^k (\up{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a). \end{aligned} \end{equation*} As a result, \begin{equation*} \begin{aligned} \sqrt{\frac{\min\{d,S\}\iota\hat{\mathbb{V}}_h^k V^{\star}_{h+1}(s,a) }{N_h^k(s,a)}} &\le \sqrt{\frac{[\min\{d,S\}\iota\hat{\mathbb{V}}_h^k\low{V}^k_{h+1} + 4\iota \min\{d,S\}\sqrt{d} H\hat{\mathbb{P}}_h^k (\up{V}^{k}_{h+1} - \low{V}^{k}_{h+1})](s,a)}{N_h^k(s,a)}}\\ & \le \sqrt{\frac{\min\{d,S\}\iota\hat{\mathbb{V}}_h^k \low{V}^k_{h+1} (s,a) }{N_h^k(s,a)}}+ \sqrt{\frac{4\iota \min\{d,S\}\sqrt{d} H[\hat{\mathbb{P}}_h^k (\up{V}^{k}_{h+1} - \low{V}^{k}_{h+1})](s,a) }{N_h^k(s,a)}}\\ & \overset{\left( i \right)}{\le} \sqrt{\frac{\min\{d,S\}\iota\hat{\mathbb{V}}_h^k \low{V}^k_{h+1} (s,a) }{N_h^k(s,a)}} + \frac{\hat{\mathbb{P}}_h^k (\up{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a)}{H} + \frac{4 \min\{d,S\}\sqrt{d}H^2\iota}{N_h^k(s,a)}, \end{aligned} \end{equation*} where $(i)$ is by the AM--GM inequality. \end{proof} Since we are estimating the deviation in the exploration bonus using the empirical variance estimator, we need to prove that it is actually close to the population variance. This is true if the corresponding state-action pair has been visited frequently. \begin{lemma} \label{lem:bound_variance} Consider a fixed $(s,a)$ pair at the $h$-th step. With probability $1-p$, $$ | \hat{\mathbb{V}}_h^k\low{V}^k_{h+1} - \mathbb{V}_h V^{\pi^k}_{h+1}|(s,a) \le 4\sqrt{d}H\hat{\mathbb{P}}_h^k(\up{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a) + \mathcal{O}(1 + \frac{d^2H^4S\iota}{N_h^k(s,a)}). $$ \end{lemma} \begin{proof} Following the same argument as in Lemma~\ref{lem:optimism_Bernstein}, we have $\up{V}^{k}_h(s) \ge V_h^{\pi^k}(s) \ge \low{V}^{k}_h(s)$.
As a result, \begin{align*} & |\hat{\mathbb{V}}_h^k \low{V}^k_{h+1}) - \mathbb{V}_hV^{\pi^k}_{h+1}|(s,a) \\ = & | [\hat{\mathbb{P}}_h^k(\low{V}^k_{h+1})^2 - \mathbb{P}_h(V^{\pi^k}_{h+1})^2](s,a) - [(\hat{\mathbb{P}}_h^k (\low{V}^k_{h+1}))^2 - (\mathbb{P}_hV^{\pi^k}_{h+1})^2](s,a)| \\ \le & [\hat{\mathbb{P}}_h^k( \vect{v}ect{u}p{V}^k_{h+1})^2 - \mathbb{P}_h(\low{V}^k_{h+1})^2 - (\hat{\mathbb{P}}_h^k \low{V}^k_{h+1})^2 + (\mathbb{P}_h\vect{v}ect{u}p{V}^k_{h+1})^2](s,a) \\ \le &[|(\hat{\mathbb{P}}_h^k-\mathbb{P}_h)( \vect{v}ect{u}p{V}^k_{h+1})^2|+|\mathbb{P}_h[( \vect{v}ect{u}p{V}^k_{h+1})^2-(\low{V}^k_{h+1})^2]|+|(\hat{\mathbb{P}}_h^k \low{V}^k_{h+1})^2-(\mathbb{P}_h \low{V}^k_{h+1})^2|+|(\mathbb{P}_h \low{V}^k_{h+1})^2-(\mathbb{P}_h\vect{v}ect{u}p{V}^k_{h+1})^2|](s,a) \vect{v}ect{e}nd{align*} These terms can be bounded separately by \begin{align*} |(\hat{\mathbb{P}}_h^k-\mathbb{P}_h)( \vect{v}ect{u}p{V}^k_{h+1})^2|(s,a) &\le \mathcal{O}(dH^2\sqrt{\frac{S\iota}{N_h^k(s,a)}}), \\ |\mathbb{P}_h[( \vect{v}ect{u}p{V}^k_{h+1})^2-(\low{V}^k_{h+1})^2]|(s,a,b) &\le 2\sqrt{d}H[\mathbb{P}_h( \vect{v}ect{u}p{V}^k_{h+1}-\low{V}^k_{h+1})](s,a), \\ |(\hat{\mathbb{P}}_h^k \low{V}^k_{h+1})^2-(\mathbb{P}_h \low{V}^k_{h+1})^2|(s,a,b) &\le 2\sqrt{d}H[(\hat{\mathbb{P}}_h^k-\mathbb{P}_h)\low{V}^k_{h+1}](s,a) \le \mathcal{O}(dH^2\sqrt{\frac{S\iota}{N_h^k(s,a)}}), \\ |(\mathbb{P}_h \low{V}^k_{h+1})^2-(\mathbb{P}_h\vect{v}ect{u}p{V}^k_{h+1})^2|(s,a) &\le 2\sqrt{d}H[\mathbb{P}_h( \vect{v}ect{u}p{V}^k_{h+1}-\low{V}^k_{h+1})](s,a). \vect{v}ect{e}nd{align*} Combining with $dH^2\sqrt{\frac{S\iota}{N_h^k(s,a)}} \le 1 + \frac{d^2H^4S\iota}{N_h^k(s,a)}$ completes the proof. \vect{v}ect{e}nd{proof} The last auxiliary lemma is borrowed from \cite{liu2020sharp} to handle the $\hat{\mathbb{P}}_h^k(\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a)$ term. For completeness we give a proof here due to difference in setting. \begin{lemma} \langlebel{lem:lower-order} For any function $V \in [0, H]^{\mathcal{S}}$ s.t. $ |V|(s) \le (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s) $ for any $s$, with probability $1-p$, \begin{align*} |(\hat{\mathbb{P}}_h^k- \mathbb{P}_h)V(s,a)| \le \mathcal{O}\bigg(\frac{1}{H}\min\{\hat{\mathbb{P}}_h^k (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a),\mathbb{P}_h (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a)\} + \frac{H^2S\iota}{N_h^k(s,a)}\bigg). \vect{v}ect{e}nd{align*} \vect{v}ect{e}nd{lemma} \begin{proof} By triangle inequality, \begin{align*} |(\hat{\mathbb{P}}_h^k- \mathbb{P}_h)V(s,a)| \le& \sum_{s'}{|(\hat{\mathbb{P}}_h^k- \mathbb{P}_h)(s'|s,a,b)||V|(s')}\\ \le& \sum_{s'}{|(\hat{\mathbb{P}}_h^k- \mathbb{P}_h)(s'|s,a)|(\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s')}\\ \overset{\left( i \right)}{\le}& \mathcal{O}\left(\sum_{s'}{(\sqrt{\frac{\iota \hat{\mathbb{P}}_h^k(s'|s,a)}{N_h^k(s,a)}}+\frac{\iota }{N_h^k(s,a)})(\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s')}\right)\\ \overset{\left( ii \right)}{\le}& \mathcal{O}\left(\sum_{s'}{(\frac{\hat{\mathbb{P}}_h^k(s'|s,a) }{H}+\frac{H\iota }{N_h^k(s,a)})(\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s')}\right)\\ \le& \mathcal{O}\left(\frac{\hat{\mathbb{P}}_h^k (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a)}{H} + \frac{H^2S\iota}{N_h^k(s,a)}\right), \vect{v}ect{e}nd{align*} where $(i)$ is by empirical Bernstein bound \citep{maurer2009empirical} and $(ii)$ is by AM-GM inequality. This proves the empirical version. Use the standard Bernstein bound, we get the a similar upper bound. 
Combining the two bounds completes the proof. \vect{v}ect{e}nd{proof} Combining the previous results we can prove a tighter version of Lemma~\ref{lem:sum-of-bonus}, which is the key lemma in the proof of Theorem~\ref{thm:bernstein}. \begin{lemma}[Sum of bonus] \langlebel{lem:sum-of-bonus-Bernstein} $$ \sum_{k=1}^K{\sum_{h=1}^H{\beta _{h}^{k}}}\le O\left( \sqrt{\min\{d,S\}dH^3SABK\iota} \right) $$ \vect{v}ect{e}nd{lemma} \begin{proof} Define $\mathbb{D}elta_h^k:= [\vect{v}ect{u}p{V}_{h}^{k}-\low{V}_{h}^{k}](s_{h}^{k})$. Then \begin{align} \langlebel{equ:Bernstein_decomposition} \mathbb{D}elta_h^k =\left( \vect{v}ect{u}p{Q}^k_{h}-\low{Q}_{h}^{k} \right) (s_{h}^{k},a_{h}^{k}) \le \mathbb{P}_h\left( \vect{v}ect{u}p{V}^k_{h+1}-\low{V}_{h+1}^k \right) \left( s_{h}^{k},a_{h}^{k} \right) +\beta _{h}^{k} =\mathbb{D}elta_{h+1}^k +\beta _{h}^{k}+\vect{v}ect{z}eta _{h}^{k}. \vect{v}ect{e}nd{align} where \begin{align*} \vect{v}ect{z}eta _{h}^{k}:=&\mathbb{P}_h\left( \vect{v}ect{u}p{V}_{h+1}-\low{V}_{h+1}^k \right) \left( s_{h}^{k},a_{h}^{k} \right) -\left( \vect{v}ect{u}p{V}_{h+1}^k-\low{V}_{h+1}^k \right) \left( s_{h}^{k} \right) \in \left[ -4\sqrt{d}H,4\sqrt{d}H \right] . \vect{v}ect{e}nd{align*} We only need to carefully bound $\beta _{h}^{k}$. \begin{align*} \beta _{h}^{k} = \mathcal{O} \left(\sqrt{\frac{\hat{\mathbb{V}}_h \low{V}^k_{h+1}(s_{h}^{k},a_{h}^{k})\min\{d,S\}\iota}{N_h^k(s_{h}^{k},a_{h}^{k})}} + \frac{1}{H}\hat{\mathbb{P}}_{h}(\vect{v}ect{u}p{V}^k_{h+1}-\low{V}^k_{h+1})(s_{h}^{k},a_{h}^{k}) + \frac{\min\{d,S\}\sqrt{d}H^2\iota}{N_h^k(s_{h}^{k},a_{h}^{k})}\right) \vect{v}ect{e}nd{align*} By Lemma~\ref{lem:bound_variance} and AM-GM inequality, \begin{equation} \begin{aligned} &\sqrt{\frac{\min\{d,S\}\iota\hat{\mathbb{V}}_h^k \low{V}^k_{h+1}(s,a)}{N_h^k(s,a)}} \\ \le& \sqrt{\frac{\min\{d,S\}\iota\mathbb{V}_h V^{\pi^k}_{h+1}(s,a) + \mathcal{O}(\min\{d,S\}\iota)}{N_h^k(s,a)}} + \sqrt{\frac{4\min\{d,S\}\sqrt{d}H\iota\hat{\mathbb{P}}_h^k (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a,b)}{N_h^k(s,a)}}+ \mathcal{O}(\frac{dH^2\sqrt{S}\iota}{N_h^k(s,a)})\\ \le& \sqrt{\frac{\min\{d,S\}\iota\mathbb{V}_h\mathbb{V}^{\pi^k}_{h+1}(s,a) + \mathcal{O}(\min\{d,S\}\iota)}{N_h^k(s,a)}} + \frac{1}{H}\hat{\mathbb{P}}_h^k (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a) + \mathcal{O}(\frac{\min\{d,S\}\sqrt{d}H^2\sqrt{S}\iota}{N_h^k(s,a)}). \vect{v}ect{e}nd{aligned} \vect{v}ect{e}nd{equation} Using Lemma~\ref{lem:lower-order} we have \begin{equation*} \hat{\mathbb{P}}_h^k (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a) \le \paren{1 + \frac{\mathcal{O}(1)}{H}}\mathbb{P}_h (\vect{v}ect{u}p{V}^{k}_{h+1} - \low{V}^{k}_{h+1})(s,a)+\mathcal{O}(\frac{H^2S\iota}{N_h^k(s,a)}). \vect{v}ect{e}nd{equation*} Plugging back into inequality~\vect{v}ect{e}qref{equ:Bernstein_decomposition} we have \begin{equation} \begin{aligned} \mathbb{D}elta_h^k\le \paren{1 + \frac{\mathcal{O}(1)}{H}}\mathbb{B}igg\{\mathbb{D}elta_{h+1}^k+ \vect{v}ect{z}eta_h^k+\mathcal{O}\bigg(\sqrt{\frac{\min\{d,S\}\iota\mathbb{V}_hV^{\pi^k}_{h+1}(s_h^k,a_h^k)}{N_h^k(s_h^k,a_h^k)}} + \sqrt{\frac{\min\{d,S\}\iota}{N_h^k(s_h^k,a_h^k)}} + \frac{\min\{d,S\}\sqrt{d}H^2S\iota}{N_h^k(s_h^k,a_h^k)}\bigg)\mathbb{B}igg\}. 
\vect{v}ect{e}nd{aligned} \vect{v}ect{e}nd{equation} Recursing this argument for $h \in [H]$ and taking the sum, \begin{align*} \sum_{k=1}^{K}{\mathbb{D}elta_1^k} \le \sum_{k=1}^{K}\sum_{h=1}^{H}{\mathcal{O}\left(\vect{v}ect{z}eta_h^k +\sqrt{\frac{\min\{d,S\}\iota\mathbb{V}_h V^{\pi^k}_{h+1}(s_h^k,a_h^k)}{N_h^k(s_h^k,a_h^k)}} + \sqrt{\frac{\min\{d,S\}\iota}{N_h^k(s_h^k,a_h^k)}} + \frac{\min\{d,S\}\sqrt{d}H^2S\iota}{N_h^k(s_h^k,a_h^k)}\right)} . \vect{v}ect{e}nd{align*} The remaining steps are exactly the same as that in the proof of Theorem~\ref{thm:basic-approachable}. The only difference is that we need to bound the sum of variance term by Cauchy-Schwarz, \begin{equation} \langlebel{eq:sumlog} \sum_{k=1}^{K}\sum_{h=1}^{H}{\frac{1}{N_h^k(s_h^k,a_h^k)}} \le \sum_{s,a,h}\sum_{n=1}^{N_h^k(s,a)}{\frac{1}{n}}\le \mathcal{O} \paren{HSA\iota}. \vect{v}ect{e}nd{equation} and \begin{align*} \sum_{k=1}^{K}\sum_{h=1}^{H}{\sqrt{\frac{\mathbb{V}_h V^{\pi^k}_{h+1}(s_h^k,a_h^k)}{N_h^k(s_h^k,a_h^k)}}}\le & \sqrt{\sum_{k=1}^{K}\sum_{h=1}^{H}{\mathbb{V}_h V^{\pi^k}_{h+1}(s_h^k,a_h^k)}\cdot \sum_{k=1}^{K}\sum_{h=1}^{H}{\frac{1}{N_h^k(s_h^k,a_h^k)}}}\\ \overset{\left( i \right)}{\le} & \mathcal{O} \paren{\sqrt{d(H^2K+H^3\iota)\cdot HSA\iota}}\\ =&\mathcal{O} \paren{\sqrt{dH^3SAK}+\sqrt{dH^4SA\iota^2}}, \vect{v}ect{e}nd{align*} where $(i)$ is by Law of total variation (for example, Lemma 8 in \citet{azar2017minimax}) and inequality~\vect{v}ect{e}qref{eq:sumlog}. \vect{v}ect{e}nd{proof} \begin{proof}[Proof of Theorem~\ref{thm:bernstein}] We consider the two possibilities separately. \textbf{Using \textsc{Projection-based-Dual-Update}{} for \textsc{Dual-Update}{}.} As in the proof of Theorem~\ref{thm:basic-approachable} we have $$ K\mathrm{dist}\left( \mathbf{W}^K,W^* \right) ^2\le dH^2+2\sum_{k=1}^K{\frac{k-1}{K}}\mathrm{dist}\left( \mathbf{W}^{k-1},W^{\mathrm{s.t.~}ar} \right) \left[\delta+ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left(\bm{\theta}^k\cdot \mathbf{\hat{V}}^k- V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] $$ Again we prove the theorem by induction. Suppose $$ \mathrm{dist} \left( \mathbf{W}^k,W^{\mathrm{s.t.~}ar} \right) \le \delta+ c_0 \sqrt{\min\{d,S\}dH^3SAB\iota/k} $$ for $\forall k \le K-1$, let's prove the claim holds for $k=K$. Now we have a new term to bound, which is $$ 2\delta \sum_{k=1}^K{\frac{k-1}{K}}\left[ \delta +c_0\sqrt{\min \{d,S\}dH^3SAB\iota /k}+\sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left( \boldsymbol{\theta }^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] =\left( A \right) +\left( B \right) +\left( C \right) $$ where $$ \left( A \right) =2\delta ^2\sum_{k=1}^K{\frac{k-1}{K}}\le \left( K-1 \right) \delta ^2, $$ $$ \left( B \right) =2\delta \sum_{k=1}^K{\frac{k-1}{K}}c_0\sqrt{\min \{d,S\}dH^3SAB\iota /k}\le \frac{4}{3}c_0\delta \sqrt{\min \{d,S\}dH^3SAB\iota K}, $$ and by Lemma~\ref{lem:sum-of-bonus-Bernstein} and Azuma-Hoeffding inequality, $$ \left( C \right) =2\delta \sum_{k=1}^K{\frac{k-1}{K}}\left[ \sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}+\left( \boldsymbol{\theta }^k\cdot \mathbf{\hat{V}}^k-V_{1}^{\mu ^k,\nu ^k}\left( s_{1}^{k} \right) \right) \right] \le c_1c_0\delta \sqrt{\min \{d,S\}dH^3SAB\iota K}. 
$$ To prove the induction hypothesis, we only need to guarantee \begin{align*} &H^2+8c_0\max \left\{ c_1,c_2 \right\} \min \{d,S\}dH^3SAB\iota +\left( K-1 \right) \delta ^2+c_1\delta \sqrt{\min \{d,S\}dH^3SAB\iota K} \\ \le& K\left[ \delta +c_0\sqrt{\min \{d,S\}dH^3SAB\iota /K} \right] ^2. \vect{v}ect{e}nd{align*} Comparing the coefficients, we can see this is satisfied by setting $c_0\vect{v}ect{g}e \max \left\{ c_1,\sqrt{\frac{2}{SABH\iota}} \right\}$ . \textbf{Using \textsc{Projection-free-Dual-Update}{} for \textsc{Dual-Update}{}.} We can expand the distance using the same argument in the proof of Theorem~\ref{thm:OCO-delta-approachable} \begin{align*} K\mathrm{dist}\left( \mathbf{W}^k,W^{\mathrm{s.t.~}ar} \right) \le&K\delta +\sum_{k=1}^K{\bm{\theta}^k\cdot \left( \mathbf{\hat{V}}^k-\mathbf{V}_1^{\pi ^k,\mu ^k}\left( s_1 \right) \right)}+\sum_{k=1}^K{\sum_{h=1}^H{\left( \beta _{h}^{k}+\vect{v}ect{x}i _{h}^{k}+\vect{v}ect{z}eta _{h}^{k} \right)}}+\mathcal{O}\left( \sqrt{dH^2K} \right) \\ \overset{\left( i \right)}{\le}&K\delta +\mathcal{O}\left( \sqrt{\min\{d,S\}dH^3SABK\iota} \right) \vect{v}ect{e}nd{align*} where $(i)$ by Lemma~\ref{lem:sum-of-bonus-Bernstein} and Azuma-Hoeffding inequality. The claim is proved by taking the union bound. \vect{v}ect{e}nd{proof} \section{Proofs for Section~\ref{sec:satisfiable}} \langlebel{sec:opt} \begin{proof}[Proof of Theorem~\ref{thm:satisfiable}] A crucial property we will use is, by the definition of fenchel duality, \begin{equation} \langlebel{equ:fenchel-property} g^*\left( \bm{\theta} \right)=\vect{v}ect{u}nderset{\mathbf{x}\in X}{\max}\left\{ \bm{\theta}\cdot \mathbf{x}-g\left( \mathbf{x} \right) \right\} \vect{v}ect{g}e \bm{\theta}\cdot \mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1)-g\left( \mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1) \right) \vect{v}ect{e}nd{equation} and \begin{equation} \langlebel{equ:satisfiable} \vect{v}ect{u}nderset{\mathbf{x}\in W^{\mathrm{s.t.~}ar}}{\max}\bm{\theta}\cdot \mathbf{x} \vect{v}ect{g}e \bm{\theta}\cdot \mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1) \vect{v}ect{e}nd{equation} for any $\nu$. Let's try to bound the regret and constraint violation. 
\begin{align*} & K\left[g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))+\rho \mathrm{dist}\left( \mathbf{W}^K,W^{\mathrm{s.t.~}ar} \right) \right] \\ =&\vect{v}ect{u}nderset{\left\| \bm{\phi } \right\| \le 1}{\max}\left\{ \bm{\phi }\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^Kg^*\left( \bm{\phi } \right) \right\}-K g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))+\rho \vect{v}ect{u}nderset{\left\| \bm{\vect{v}arphi } \right\| \le 1}{\max}\left\{ \bm{\vect{v}arphi }\cdot \sum_{k=1}^K{\mathbf{\hat{V}}^k}- \sum_{k=1}^K \vect{v}ect{u}nderset{\mathbf{x}\in W^{\mathrm{s.t.~}ar}}{\max}\bm{\vect{v}arphi}\cdot \mathbf{x} \right\} \\ \le &\sum_{k=1}^K{\left\{ \bm{\phi}^k\cdot \mathbf{\hat{V}}^k-g^*\left( \bm{\phi^k } \right) \right\}}- K g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1)) +\rho \sum_{k=1}^K{\left\{ \bm{\vect{v}arphi}^k\cdot \mathbf{\hat{V}}^k-\vect{v}ect{u}nderset{\mathbf{x}\in W^{\mathrm{s.t.~}ar}}{\max}\bm{\vect{v}arphi}^k\cdot \mathbf{x} \right\}}+\mathcal{O}\left( \rho \sqrt{dH^2K} \right) \\ \overset{\left( i \right)}{\le}&\sum_{k=1}^K\left[ \bm{\theta}^k \cdot (\mathbf{\hat{V}}^k -\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1) )\right]+\mathcal{O}\left( \rho \sqrt{dH^2K} \right) \\ \overset{\left( ii \right)}{\le}&\sum_{k=1}^K\left[ \bm{\theta}^k \cdot (\mathbf{\hat{V}}^k -\mathbf{V}^{\mu^{k}}_1(s_1) )\right]+\sum_{k=1}^K\left[ \mathbf{V}^{\mu^{k}}_1(s_1)- V^{k}_1(s_1) \right]+\mathcal{O}\left( \rho \sqrt{dH^2K} \right) \\ \overset{\left( iii \right)}{\le}&\mathcal{O}\left( \rho \sqrt{\min\{d,S\}dH^3SAK\iota} \right), \vect{v}ect{e}nd{align*} where $(i)$ is by the update \textsc{Double-Dual-Update}{} and inequality~\vect{v}ect{e}qref{equ:fenchel-property}~\vect{v}ect{e}qref{equ:satisfiable}, $(ii)$ is by optimism and $(iii)$ is by Lemma~\ref{lem:sum-of-bonus}. \paragraph{Bound constraint violation in constrained MDP} We need to define a few notations. Recall that return vectors $\mathbf{r}_h(s, a)$ live in a space $\mathbb{R}^d$, and $\mathbb{W}^{\mathrm{s.t.~}ar} \subseteq \mathbb{R}^d$ denotes the set of desired expected future return. Note that $ \mathrm{dist}\left( \mathbf{W}^K,W^{\mathrm{s.t.~}ar} \right) \vect{v}ect{g}e 0$; hence we have $K \left[g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1)) \right] = \mathcal{O}\left( \rho \sqrt{\min\{d,S\}dH^3SAK\iota} \right)$. To bound the constraint violation separately, we only need the lemma below. \begin{figure} \centering \subfigure[]{\langlebel{fig:a}\includegraphics[width=60mm]{proof1.png}} \subfigure[]{\langlebel{fig:b}\includegraphics[width=60mm]{proof2.png}} \caption{ The projected point is : (a)In the interior ; (b)on the boundary.} \langlebel{fig:proof} \vect{v}ect{e}nd{figure} \begin{lemma} Let $\mathbf{W}^{\mathrm{s.t.~}ar}$ denote a return vector in set $\mathcal{W}$ that achieves the lowest cost, i.e. $\forall \ \mathbf{W} \in \mathcal{W}, g(\mathbf{W}) \vect{v}ect{g}e g(\mathbf{W}^{\mathrm{s.t.~}ar})$, then under Assumption~\ref{assump:angle}, \begin{align*} \left[g(\mathbf{W}^K)- g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \right] \vect{v}ect{g}e - \mathrm{dist}\left( \mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar} \right) /\vect{v}ect{g}amma_{\min}. 
\vect{v}ect{e}nd{align*} \vect{v}ect{e}nd{lemma} \begin{proof} Note that if $\mathbf{W}^K \in \mathcal{W}$, then by optimality of $\mathbf{W}^{\mathrm{s.t.~}ar}$, \begin{align*} \left[g(\mathbf{W}^K)- g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \right] \vect{v}ect{g}e 0 \vect{v}ect{g}e - \mathrm{dist}\left( \mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar} \right) /\vect{v}ect{g}amma_{\min}. \vect{v}ect{e}nd{align*} We focus on the case when $\mathbf{W}^K \not\in \mathcal{W}$. By convexity of $\mathcal{W}$, there exists a unique $\prod_\mathcal{W} \mathbf{W}^K = \argmin_{\mathbf{W} \in \mathcal{W}} \mathrm{dist}(\mathbf{W}, \mathbf{W}^k)$. Again we study two cases as illustrated in Figure~\ref{fig:proof}: whether or not the projected point $\prod_{\mathcal{W}} \mathbf{W}^K$ is in the interior of $\mathcal{V}$. \paragraph{Case 1:in the interior.} Note that the projection can be described as an optimization operation: $\prod_\mathcal{W} \mathbf{W}^K = \argmin_{w \in \mathcal{V}, w \in \mathbb{W}^{\mathrm{s.t.~}ar}} \mathrm{dist}(\mathbf{W}^K, w)$. When the projected point is in the interior of $\mathcal{V}$, we know that the constraint $w \in \mathcal{V}$ is not active at the optimal solution. Hence by complementary slackness, $\mathrm{dist}\left( \mathbf{W}^K, \mathcal{W} \right) = \min_{\mathbf{W} \in \mathcal{V}, \mathbf{W} \in \mathbb{W}^{\mathrm{s.t.~}ar}} \mathrm{dist}(\mathbf{W}^K, \mathbf{W} ) = \min_{\mathbf{W} \in \mathbb{W}^{\mathrm{s.t.~}ar}} \mathrm{dist}(\mathbf{W}^K, \mathbf{W} ) = \mathrm{dist}\left( \mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar} \right) $. Then the inequality simply follows by \begin{align*} \left[g(\mathbf{W}^K)- g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \right] =& g(\mathbf{W}^K)- g(\prod_\mathcal{W} \mathbf{W}^K) + g(\prod_\mathcal{W} \mathbf{W}^K) - g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \\ \vect{v}ect{g}e& - \mathrm{dist}(\mathbf{W}^K, \mathcal{W}) + 0 \vect{v}ect{g}e - \mathrm{dist}(\mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar}) \vect{v}ect{e}nd{align*} \paragraph{Case 2:on the boundary.} In this case, the distance $\mathrm{dist}\left( \mathbf{W}^K, \mathcal{W} \right)$ may not equal $\mathrm{dist}\left( \mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar} \right) $. Instead, we know by convexity and Assumption~\ref{assump:angle} that the support hyperplanes of $\mathcal{V}$ and $\mathbb{W}^{\mathrm{s.t.~}ar}$ at $\prod_\mathcal{W} \mathbf{W}^K $ intersects with an angle $\alpha(\prod_\mathcal{W} \mathbf{W}^K) < \pi $, where $\alpha$ is defined in Assumption~\ref{assump:angle}. By optimality of the $\prod_\mathcal{W} \mathbf{W}^K$ in solving $ \min_{\mathbf{W} \in \mathcal{V}, \mathbf{W} \in \mathbb{W}^{\mathrm{s.t.~}ar}} \mathrm{dist}(\mathbf{W}^K, \mathbf{W} ) $, the vector $\prod_\mathcal{W} \mathbf{W}^K \to \mathbf{W}^K$ must lie in the cone formed by the support vectors. Further by $\mathbf{W}^K \in \mathcal{V}$, we have $\alpha(\prod_\mathcal{W} \mathbf{W}^K) \vect{v}ect{g}e \pi/2 $. Then we know that \begin{align*} \left[g(\mathbf{W}^K)- g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \right] =& g(\mathbf{W}^K)- g(\prod_\mathcal{W} \mathbf{W}^K) + g(\prod_\mathcal{W} \mathbf{W}^K) - g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \\ \vect{v}ect{g}e& - \mathrm{dist}(\mathbf{W}^K, \prod_\mathcal{W} \mathbf{W}^K) + 0 \vect{v}ect{e}nd{align*} where the second line follows by the Lipschitzness of $g$. Denote $\mathcal{H} = \mathcal{H}(\prod_\mathcal{W} \mathbf{W}^K)$ as the hyperspace that is supported by the support vector of $W^{\mathrm{s.t.~}ar}$ at $\prod_\mathcal{W} \mathbf{W}^K$. 
Then by the fact that $\mathcal{W^{\mathrm{s.t.~}ar}} \subseteq \mathcal{H} $ and assumption \ref{assump:angle}, we get \begin{align} \mathrm{dist}(\mathbf{W}^K, \mathbb{W}^{\mathrm{s.t.~}ar}) \vect{v}ect{g}e \mathrm{dist}(\mathbf{W}^K, \mathcal{H}) \vect{v}ect{g}e \mathrm{dist}(\mathbf{W}^K, \prod_\mathcal{W} \mathbf{W}^K) \sin(\pi - \alpha(\prod_\mathcal{W} \mathbf{W}^K)) \vect{v}ect{e}nd{align} Rearrange and we get \begin{align*} \left[g(\mathbf{W}^K)- g( \mathbf{W}^{\mathrm{s.t.~}ar} ) \right] \vect{v}ect{g}e - \mathrm{dist}\left( \mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar} \right) /\vect{v}ect{g}amma_{\min}. \vect{v}ect{e}nd{align*} \vect{v}ect{e}nd{proof} With the above lemma, we see that if $\rho = 2 / \vect{v}ect{g}amma_{\min}$, then \begin{align*} \frac{1}{ \vect{v}ect{g}amma_{\min}} K \mathrm{dist}\left( \mathbf{W}^K,W^{\mathrm{s.t.~}ar} \right) &\le K\left[g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))+ \frac{2}{ \vect{v}ect{g}amma_{\min}} \mathrm{dist}\left( \mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar} \right) \right] \\ & = K\left[g(\mathbf{W}^K)- g(\mathbf{V}^{\mu^{\mathrm{s.t.~}ar}}_1(s_1))+ \rho \mathrm{dist}\left( \mathbf{W}^K,\mathbb{W}^{\mathrm{s.t.~}ar} \right) \right] \le \mathcal{O}\left( \rho\sqrt{\min\{d,S\}dH^3SAK\iota} \right). \vect{v}ect{e}nd{align*} Divide both side by $\rho/2$ and we get the desired result. \vect{v}ect{e}nd{proof} \subsection{Necessity of nonsingular intersection}\langlebel{sec:intersect-proof} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{intersection2.png} \caption{If the support hyperplains intersect with angle $0$ (exact cut), then the point $\mathbf{W}^k$ can be arbitrarily close to the set $\mathbb{W}^\mathrm{s.t.~}ar$ while remaining away from $\mathcal{W}$.}\langlebel{fig:intersection2} \vect{v}ect{e}nd{figure} In this section, we explain the high level intuition of why the intersection between the constrain set $\mathbb{W}^{\mathrm{s.t.~}ar}$ and the feasible return vectors $\mathcal{V}$ needs to be nonsingular. The key problem arises from the fact that $\mathbb{W}^\mathrm{s.t.~}ar$ is defined in space $\mathbb{R}^d$ where the set of feasible return vectors $\mathcal{V}$ may not be of full dimension. In such cases, for the actual achievable constrain set of interest $\mathcal{W} = \mathbb{W}^{\mathrm{s.t.~}ar} \cap \mathcal{V}$, there are too much freedom in selecting $\mathbb{W}$ as long as its elements on $\mathcal{V}$ remains fixed. In particular, an achievable return vector $\mathbf{W}^K$ can be very far away from the constrained feasible set $\mathcal{W}$, where as being arbitrarily close to the set $\mathbb{W}^\mathrm{s.t.~}ar$. The process is illustrated in Figure~\ref{fig:intersection2} by sending the radius $R$ to infinity. Note that $\mathcal{W}$ remains unchanged in this process. Since the cost is measured by the distance to $\mathbb{W}^\mathrm{s.t.~}ar$ instead of to the actual set of interest $\mathcal{W}$, the deviation from $\mathbf{W}$ to $\mathcal{W}$ cannot be reduced to 0 with any fixed algorithm given that point $\mathbf{W}$ is already very close to the target set $\mathbb{W}^{\mathrm{s.t.~}ar}$. Quantifying the level of non-singularity is necessary, whereas lower bounding the angle at intersection is one natural way of many to do so. \vect{v}ect{e}nd{document}
\bold egin{document} \renewcommand{\thepropositionA}{} \renewcommand{\thepropositionB}{} \title{Classifying spaces of degenerating mixed Hodge structures, IV: The fundamental diagram} \author {Kazuya Kato, Chikara Nakayama, Sampei Usui} \maketitle \renewcommand{\bold}{\bold} \operatorname{naive}ewcommand{\bold{C}}al{\mathcal} \operatorname{naive}ewcommand\define{\operatorname{naive}ewcommand} \define{{\gamma}mma}p{\mathrm{gp}} \define\fs{\mathrm{fs}} \define\an{\mathrm{an}} \define\mult{\mathrm{mult}} \define\add{\mathrm{add}} \define\Ker{\mathrm{Ker}\,} \define{\bold{C}}oker{\mathrm{Coker}\,} \define\Hom{\mathrm{Hom}\,} \define\Ext{\mathrm{Ext}\,} \define\ranglek{\mathrm{rank}\,} \define{{\gamma}mma}r{\mathrm{gr}} \define\cHom{{\bold{C}}al{Hom}} \define\cExt{{\bold{C}}al Ext\,} \define\cA{{\bold{C}}al A} \define\cC{{\bold{C}}al C} \define\cD{{\bold{C}}al D} \define\cO{{\bold{C}}al O} \define\cS{{\bold{C}}al S} \define\cM{{\bold{C}}al M} \define\cG{{\bold{C}}al G} \define\cH{{\bold{C}}al H} \define\cE{{\bold{C}}al E} \define\cF{{\bold{C}}al F} \define\cN{{\bold{C}}al N} \define\fF{\frak F} \define\fg{\frak g} \define\fh{\frak h} \define\Dc{\check{D}} \define\Ec{\check{E}} \operatorname{naive}ewcommand{{\bold{N}}}{{\bold{N}}} \operatorname{naive}ewcommand{{\bold{Q}}}{{\bold{Q}}} \operatorname{naive}ewcommand{{\bold{Z}}}{{\bold{Z}}} \operatorname{naive}ewcommand{{\bold{R}}}{{\bold{R}}} \operatorname{naive}ewcommand{{\bold{C}}}{{\bold{C}}} \operatorname{naive}ewcommand{{\bold{N}}}{{\bold{N}}} \operatorname{naive}ewcommand{{\bold{Q}}}{{\bold{Q}}} \operatorname{naive}ewcommand{{\bold{F}}}{{\bold{F}}} \operatorname{naive}ewcommand{{\bold{Z}}}{{\bold{Z}}} \operatorname{naive}ewcommand{{\bold{P}}}{{\bold{P}}} \operatorname{naive}ewcommand{{\bold{R}}}{{\bold{R}}} \operatorname{naive}ewcommand{{\bold{C}}}{{\bold{C}}} \operatorname{naive}ewcommand{{\bold{S}}}{{\bold{S}}} \operatorname{naive}ewcommand{{\bar \bold{Q}}}{{\bar \bold{Q}}} \operatorname{naive}ewcommand{\ol}[1]{\overline{#1}} \operatorname{naive}ewcommand{\longrightarrow}{\longrightarrow} \operatorname{naive}ewcommand{\rightsquigarrow}{\rightsquigarrow} \operatorname{naive}ewcommand{\leftrightsquigarrow}{\leftrightsquigarrow} \operatorname{naive}ewcommand{\upc}[1]{\overset {\lower 0.3ex \hbox{${\;}_{\circ}$}}{#1}} \operatorname{naive}ewcommand{{\bold{G}}_{m, \log}}{{\bold{G}}_{m, \log}} \operatorname{naive}ewcommand{{\bold{G}}_m}{{\bold{G}}_m} \operatorname{naive}ewcommand{\varepsilon}{\varepsilon} \operatorname{naive}ewcommand{\operatorname{Spec}}{\operatorname{Spec}} \operatorname{naive}ewcommand{{\mathrm{val}}}{{\mathrm{val}}} \operatorname{naive}ewcommand{\operatorname{naive}}{\operatorname{naive}} \operatorname{naive}ewcommand{\operatorname{\backslash}}{\operatorname{\backslash}} \operatorname{naive}ewcommand{\operatorname{{Gal}}}{\operatorname{{Gal}}} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})}{{\rm {Gal}}({\bar {\bold{Q}}}/{{\bold{Q}}})} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})p}{{\rm {Gal}}({\bar {\bold{Q}}}_p/{{\bold{Q}}}_p)} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})l}{{\rm{Gal}}({\bar {\bold{Q}}}_\ell/{\bold{Q}}_\ell)} \operatorname{naive}ewcommand{W({\bar \Q}_p/\Q_p)}{W({\bar {\bold{Q}}}_p/{\bold{Q}}_p)} \operatorname{naive}ewcommand{W({\bar \Q}_\ell/\Q_\ell)}{W({\bar {\bold{Q}}}_\ell/{\bold{Q}}_\ell)} \operatorname{naive}ewcommand{{\rm{Ad}}}{{\rm{Ad}}} \operatorname{naive}ewcommand{{\rm {BS}}}{{\rm {BS}}} \operatorname{naive}ewcommand{\operatorname{even}}{\operatorname{even}} 
\operatorname{naive}ewcommand{{\rm {End}}}{{\rm {End}}} \operatorname{naive}ewcommand{\operatorname{odd}}{\operatorname{odd}} \operatorname{naive}ewcommand{\operatorname{GL}}{\operatorname{GL}} \operatorname{naive}ewcommand{\operatorname{naive}p}{\text{non-$p$}} \operatorname{naive}ewcommand{{{\gamma}mma}}{{{{\gamma}mma}amma}} \operatorname{naive}ewcommand{{\Gamma}}{{{\Gamma}amma}} \operatorname{naive}ewcommand{{{\Lambda}mbda}}{{{{\Lambda}mbda}bda}} \operatorname{naive}ewcommand{{\Lambda}}{{{{\Lambda}mbda}bda}} \operatorname{naive}ewcommand{{{\lambda}mbda}}{{{{\lambda}mbda}bda}} \operatorname{naive}ewcommand{{\lambda}}{{{{\lambda}mbda}bda}} \operatorname{naive}ewcommand{{{\hat {L}}^{\rm {ur}}}}{{{\hat {L}}^{\rm {ur}}}} \operatorname{naive}ewcommand{{{\hat \Q}_p}^{\text{ur}}}{{{\hat {\bold{Q}}}_p}^{\text{ur}}} \operatorname{naive}ewcommand{\operatorname{Sel}}{\operatorname{Sel}} \operatorname{naive}ewcommand{{\rm{Det}}}{{\rm{Det}}} \operatorname{naive}ewcommand{\Sigma}{\Sigmama} \operatorname{naive}ewcommand{{\rm{fil}}}{{\rm{fil}}} \operatorname{naive}ewcommand{{\rm{SL}}}{{\rm{SL}}} \operatorname{naive}ewcommand{{\rm{spl}}}{{\rm{spl}}} \operatorname{naive}ewcommand{{\rm{st}}}{{\rm{st}}} \operatorname{naive}ewcommand{{\rm {Isom}}}{{\rm {Isom}}} \operatorname{naive}ewcommand{{\rm {Mor}}}{{\rm {Mor}}} \operatorname{naive}ewcommand{\bar{g}}{\bar{g}} \operatorname{naive}ewcommand{{\rm {id}}}{{\rm {id}}} \operatorname{naive}ewcommand{{\rm {cone}}}{{\rm {cone}}} \operatorname{naive}ewcommand{a}{a} \operatorname{naive}ewcommand{{\bold{C}}hL}{{\cal{C}}({\Lambda})} \operatorname{naive}ewcommand{{\rm {Image}}}{{\rm {Image}}} \operatorname{naive}ewcommand{{\operatorname{toric}}}{{\operatorname{toric}}} \operatorname{naive}ewcommand{{\operatorname{torus}}}{{\operatorname{torus}}} \operatorname{naive}ewcommand{{\rm {Aut}}}{{\rm {Aut}}} \operatorname{naive}ewcommand{{\bold{Q}}p}{{\bold{Q}}_p} \operatorname{naive}ewcommand{{\bold{Q}}_p}{{\bold{Q}}_p} \operatorname{naive}ewcommand{{\bold{Q}}pur}{{\bold{Q}}_p^{\rm {ur}}} \operatorname{naive}ewcommand{{\bold{Z}}p}{{\bold{Z}}_p} \operatorname{naive}ewcommand{{\bold{Z}}l}{{\bold{Z}}_l} \operatorname{naive}ewcommand{{\bold{Q}}l}{{\bold{Q}}_l} \operatorname{naive}ewcommand{{\bold{Q}}lur}{{\bold{Q}}_l^{\rm {ur}}} \operatorname{naive}ewcommand{{\bold{F}}}{{\bold{F}}} \operatorname{naive}ewcommand{\varepsilons}{{\varepsilonsilon}} \operatorname{naive}ewcommand{\varepsilonsLa}{{\varepsilonsilon}_{{\Lambda}}} \operatorname{naive}ewcommand{\varepsilonsLaVxi}{{\varepsilonsilon}_{{\Lambda}}(V, \xi)} \operatorname{naive}ewcommand{\varepsilonsOLaVxi}{{\varepsilonsilon}_{0,{\Lambda}}(V, \xi)} \operatorname{naive}ewcommand{{\bold{Q}}plin}{{\bold{Q}}_p(\mu_{l^{\infty}})} \operatorname{naive}ewcommand{\otimesQplin}{\otimes_{{\bold{Q}}p}{\bold{Q}}_p(\mu_{l^{\infty}})} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})Fl}{{\rm{Gal}}({\bar {\Bbb F}}_\ell/{\Bbb F}_\ell)} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})lur}{{\rm{Gal}}({\bar {\bold{Q}}}_\ell/{\bold{Q}}_\ell^{\rm {ur}})} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})FF}{{\rm {Gal}}(F_{\infty}/F)} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})Fv}{{\rm {Gal}}(\bar{F}_v/F_v)} \operatorname{naive}ewcommand{{\rm {Gal}}({\bar \Q}/{\Q})F}{{\rm {Gal}}(\bar{F}/F)} \operatorname{naive}ewcommand{\varepsilonsVxi}{{\varepsilonsilon}(V, \xi)} \operatorname{naive}ewcommand{\varepsilonsOVxi}{{\varepsilonsilon}_0(V, \xi)} \operatorname{naive}ewcommand{{\sigma}}{{{\sigma}ma}} 
\operatorname{naive}ewcommand{{{\gamma}mma}a}{{{{\gamma}mma}amma}} \operatorname{naive}ewcommand{{\delta}}{{{\delta}ta}} \operatorname{naive}ewcommand{V^{\rm {ss}}}{V^{\rm {ss}}} \operatorname{naive}ewcommand{B_{\rm {st}}}{B_{\rm {st}}} \operatorname{naive}ewcommand{D_{\rm {pst}}}{D_{\rm {pst}}} \operatorname{naive}ewcommand{D_{\rm {crys}}}{D_{\rm {crys}}} \operatorname{naive}ewcommand{D_{\rm {dR}}}{D_{\rm {dR}}} \operatorname{naive}ewcommand{{\bold{F}}in}{F_{\infty}} \operatorname{naive}ewcommand{K_{\lambda}}{K_{{{\lambda}mbda}bda}} \operatorname{naive}ewcommand{O_{\lambda}}{O_{{{\lambda}mbda}bda}} \operatorname{naive}ewcommand{M_{\lambda}}{M_{{{\lambda}mbda}bda}} \operatorname{naive}ewcommand{{\rm{Det}}}{{\rm{Det}}} \operatorname{naive}ewcommand{{\rm{Sym}}}{{\rm{Sym}}} \operatorname{naive}ewcommand{{\Lambda}Sa}{{{\Lambda}_{S^*}}} \operatorname{naive}ewcommand{{\cal {X}}}{{\cal {X}}} \operatorname{naive}ewcommand{{\frak {M}}_H(G)}{{\frak {M}}_H(G)} \operatorname{naive}ewcommand{\tau(M_{\lambda})}{\tau(M_{{{\lambda}mbda}bda})} \operatorname{naive}ewcommand{{\bold{F}}vur}{{F_v^{\rm {ur}}}} \operatorname{naive}ewcommand{{\rm {Lie}}}{{\rm {Lie}}} \operatorname{naive}ewcommand{{\cal {B}}}{{\cal {B}}} \operatorname{naive}ewcommand{{\cal {L}}}{{\cal {L}}} \operatorname{naive}ewcommand{{\cal {W}}}{{\cal {W}}} \operatorname{naive}ewcommand{{\frak {q}}}{{\frak {q}}} \operatorname{naive}ewcommand{{\rm {cont}}}{{\rm {cont}}} \operatorname{naive}ewcommand{{SC}}{{SC}} \operatorname{naive}ewcommand{{\Omega}}{{{\Omega}ega}} \operatorname{naive}ewcommand{{\rm {dR}}}{{\rm {dR}}} \operatorname{naive}ewcommand{{\rm {crys}}}{{\rm {crys}}} \operatorname{naive}ewcommand{{\hat{\Sigma}}}{{\hat{\Sigmama}}} \operatorname{naive}ewcommand{{{\rm {det}}}}{{{\rm {det}}}} \operatorname{naive}ewcommand{{{\rm {ord}}}}{{{\rm {ord}}}} \operatorname{naive}ewcommand{{B_{\rm {dR}}}}{{B_{\rm {dR}}}} \operatorname{naive}ewcommand{{B_{\rm {dR}}}O}{{B^0_{\rm {dR}}}} \operatorname{naive}ewcommand{{B_{\rm {crys}}}}{{B_{\rm {crys}}}} \operatorname{naive}ewcommand{{\bold{Q}}w}{{\bold{Q}}_w} \operatorname{naive}ewcommand{{\bar{\kappa}}}{{\bar{\kappa}}} \operatorname{naive}ewcommand{{\Cal {P}}}{{{\bold{C}}al {P}}} \operatorname{naive}ewcommand{{\Cal {Z}}}{{{\bold{C}}al {Z}}} \operatorname{naive}ewcommand{{\Lambda^{\circ}}}{{{{\Lambda}mbda}bda^{\circ}}} \operatorname{naive}ewcommand{{\bold{G}}}{{\bold{G}}} \operatorname{naive}ewcommand{{{\bold r}}}{{{\bold r}}} \operatorname{naive}ewcommand{{\rm{triv}}}{{\rm{triv}}} \operatorname{naive}ewcommand{{\subset}}{{{\subset}set}} \operatorname{naive}ewcommand{{D^{\star,{{\rm{mild}}}}_{\SL(2)}}}{{D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}}} \operatorname{naive}ewcommand{{D^{\star}_{\SL(2)}}}{{D^{{\rm{st}}ar}_{{\rm{SL}}(2)}}} \operatorname{naive}ewcommand{{D^{\star}_{\SL(2),\val}}}{{D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}}} \operatorname{naive}ewcommand{\operatorname{naive}spl}{{{\rm nspl}}} \operatorname{naive}ewcommand{\operatorname{naive}ilp}{{{\rm nilp}}} \operatorname{naive}ewcommand{{[\val]}}{{[{\mathrm{val}}]}} \operatorname{naive}ewcommand{{{\rm{mild}}}}{{{\rm{mild}}}} \operatorname{naive}ewcommand{{\lambda}n}{{\lambda}ngle} \operatorname{naive}ewcommand{\rangle}{\ranglegle} \operatorname{naive}ewcommand{{{\rm sat}}}{{{\rm sat}}} \operatorname{naive}ewcommand{{{\rm Map}}}{{{\rm Map}}} \operatorname{naive}ewcommand{\langle \Phi \rangle}{{\lambda}ngle \Phi \ranglegle} \let\t=\tilde \def\overline{x}{\overline{x}} \let\op=\oplus \let\x=\times 
\def\operatorname{|toric|}{\operatorname{|toric|}} \def\bold e{\bold e} \def\normalsize\prod{\operatorname{naive}ormalsize\prod} \def\normalsize\sum{\operatorname{naive}ormalsize\sum} \bold egin{abstract} We complete the construction of the fundamental diagram of various partial compactifications of the moduli spaces of mixed Hodge structures with polarized graded quotients. The diagram includes the space of nilpotent orbits, the space of SL(2)-orbits, and the space of Borel--Serre orbits. We give amplifications of this fundamental diagram, and amplify the relations of these spaces. We describe how this work is useful to understand asymptotic behaviors of Beilinson regulators and of local height parings in degeneration. We discuss \lq\lq mild degenerations'' in which regulators converge. \end{abstract} \section*{Contents} \operatorname{naive}oindent \S\ref{s:intro}. Introduction \operatorname{naive}oindent \S\ref{s:pre}. Preliminaries \operatorname{naive}oindent \S\ref{s:new}. The new space $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits \operatorname{naive}oindent \S\ref{s:val}. Valuative Borel--Serre orbits and valuative SL(2)-orbits \operatorname{naive}oindent \S\ref{s:newval}. New spaces $D^{\sharp}_{\Sigmama,[:]}$ and $D^{\sharp}_{\Sigmama,[{\mathrm{val}}]}$ of nilpotent orbits \operatorname{naive}oindent \S\ref{s:dia}. Mild nilpotent orbits and the space $D^{\diamond}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits \operatorname{naive}oindent \S\ref{s:NSB}. Complements \operatorname{naive}oindent \S\ref{s:Ex}. Relations with asymptotic behaviors of regulators and local height pairings \operatorname{naive}oindent \S\ref{s:co}. Corrections to \cite{KU}, supplements to Part III. \setcounter{section}{-1} \section{Introduction}{\lambda}bel{s:intro} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnote[0]{Primary 14C30; Secondary 14D07, 32G20} {\subset}section{The fundamental diagram and its amplification} \bold egin{sbpara} Let $D$ be the period domain which classifies mixed Hodge structures with polarized graded quotients with respect to the weight filtration (\cite{Gr}, \cite{U1}), with fixed Hodge numbers of graded quotients. In Part I--Part III (\cite{KNU2}) of this series of papers, we constructed extended period domains in the diagram $$\bold egin{matrix} &&&& D_{{\rm{SL}}(2),{\mathrm{val}}}&\overset{\eta}{\underset{{\subset}set}\to} &D_{{\rm {BS}},{\mathrm{val}}}\\ &&&&\downarrow&&\downarrow\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma,{\mathrm{val}}}& \leftarrow & D_{\Sigma,{\mathrm{val}}}^{\sharp}&\overset {\psi} \to &D_{{\rm{SL}}(2)}&&D_{{\rm {BS}}}\\ \downarrow &&\downarrow&&&&\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma}&\leftarrow &D_{\Sigma}^{\sharp}&&&& \end{matrix}$$ which we call the fundamental diagram, as the mixed Hodge versions of the extended period domains in \cite{KU} for the pure case. We have constructed the maps in the diagram except the map $\eta$. In this Part IV of our series of papers, we define the injective map $\eta$. There is a big issue concerning this map $\eta$, which did not appear in the pure case, as we explain below soon. In this Part IV, we amplify this fundamental diagram as in \ref{afd1} and \ref{afd2} below, and we remedy the issue as a result of the amplification. 
\end{sbpara} \bold egin{sbpara} The spaces in the fundamental diagram in 0.1.1 are topological spaces, the right six spaces have $D$ as dense open sets, and the left two spaces have the quotient ${\Gamma}amma \operatorname{\backslash} D$ of $D$ by a discrete group ${\Gamma}amma$ as dense open subsets. These left two spaces have sheaves of holomorphic functions extending that of ${\Gamma}amma \operatorname{\backslash} D$ (though these spaces need not be complex analytic spaces) and have log structures, and the right four spaces have sheaves of real analytic functions extending that of $D$ (though these spaces need not be real analytic spaces) and have log structures. The maps in the fundamental diagram except $\eta$ respect these structures. Among these eight spaces, the main spaces are the three spaces ${\Gamma}amma\operatorname{\backslash} D_{\Sigma}$ (the space of nilpotent orbits), $D_{{\rm{SL}}(2)}$ (the space of ${\rm{SL}}(2)$-orbits), and $D_{{\rm {BS}}}$ (the space of Borel--Serre orbits). We defined and studied $D_{{\rm {BS}}}$ in Part I, $D_{{\rm{SL}}(2)}$ in Part II, and ${\Gamma}amma \operatorname{\backslash} D_{\Sigma}$ in Part III. The other five spaces appear to help the connection of these three spaces. The map $\psi$ in the center of the fundamental diagram connects the four spaces in the world of nilpotent orbits on the left with the world of ${\rm{SL}}(2)$-orbits. We call $\psi$ the CKS map, for it is obtained in the pure case by using the work of Cattani-Kaplan-Schmid \cite{CKS} on the relation between nilpotent orbits and ${\rm{SL}}(2)$-orbits. However, to connect the world of SL(2)-orbits and the world of Borel--Serre orbits on the right, the map $\eta$ has the following defect. Though the map $\eta$ is a natural map and is continuous in the pure case (\cite{KU}), a big issue is that in the mixed case, the map $\eta$ is not necessarily continuous (see Section 3.5). \end{sbpara} \bold egin{sbpara} To remedy this issue and to amplify the connections of the spaces in the fundamental diagram, we will introduce new spaces $$D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\;\;\text{and}\;\; D^{\diamond}_{{\rm{SL}}(2)}\quad \text{in the world of ${\rm{SL}}(2)$-orbits (see Section 2), and}$$ $$D^{\sharp}_{\Sigma,[{\mathrm{val}}]} \;\;\text{and}\;\; D^{\sharp}_{\Sigma,[:]}\quad \text{in the world of nilpotent orbits (see Section 4)}.$$ These spaces are topological spaces, and the first two have sheaves of real analytic functions and log structures. They have the following special properties. The space $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ has better relations to Borel--Serre orbits than $D_{{\rm{SL}}(2)}$ (see Section 3.4), and this space remedies the above issue. The spirit of the definition of $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ (Section 2) is near that of $D_{{\rm {BS}}}$. As is shown in Section 5, the space $D^{\diamond}_{{\rm{SL}}(2)}$ has better relations to nilpotent orbits of \lq\lq mild degeneration'' (see \ref{ss:mild} for the meaning of mildness) than $D_{{\rm{SL}}(2)}$, though among $D_{{\rm{SL}}(2)}$, $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ and $D^{\diamond}_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2)}$ is the best for the relation with general nilpotent orbits. In the pure case, we have $$D_{{\rm{SL}}(2)}=D^{{\rm{st}}ar}_{{\rm{SL}}(2)}=D^{\diamond}_{{\rm{SL}}(2)}.$$ The space $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ has a nice relation to $D_{{\rm{SL}}(2),{\mathrm{val}}}$ (see Section 4), which $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ does not have. 
The space $D^{\sharp}_{\Sigma,[:]}$ is a quotient of $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ and also a quotient of $D^{\sharp}_{\Sigma,{\mathrm{val}}}$, and has a nice relation to $D_{{\rm{SL}}(2)}$ (see Section 4). The symbols ${\rm{st}}ar$ and $\diamond$ are used to express that the spaces are shiny like stars and diamonds in the relations to Borel--Serre orbits and nilpotent orbits, respectively. The symbol $[:]$ is used because $D^{\sharp}_{\Sigma, [:]}$ is regarded as a space of ratios. The symbol $[{\mathrm{val}}]$ similar to $[:]$ is used because $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ is the valuative space associated to $D^{\sharp}_{\Sigma,[:]}$ for a certain log structure. Actually, as is explained in Part II, $D_{{\rm{SL}}(2)}$ has two structures $D_{{\rm{SL}}(2)}^I$ and $D_{{\rm{SL}}(2)}^{II}$ of a topological space with sheaves of real analytic functions and log structures. Everything in this Introduction is true for $D^{II}_{{\rm{SL}}(2)}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{afd1} By using the above spaces, we have the following amplified fundamental diagram and supplemental amplifications in \ref{afd3} and \ref{afd2}, which connect the \lq\lq three worlds'' better. $$\bold egin{matrix} &&&&&& D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}&\overset{\eta^{{\rm{st}}ar}}{\underset{{\subset}set}\to}&D_{{\rm {BS}},{\mathrm{val}}}\\ &&&&&&\downarrow && \downarrow\\ &&&&D^{\sharp}_{\Sigmama,{[\val]}}& \overset{\psi}\to & D_{{\rm{SL}}(2),{\mathrm{val}}}& &D_{{\rm {BS}}}\\ &&&&\downarrow &&\downarrow&&\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma,{\mathrm{val}}}& \leftarrow & D_{\Sigma,{\mathrm{val}}}^{\sharp}&\to &D^{\sharp}_{\Sigma,[:]} & \overset {\psi} \to &D_{{\rm{SL}}(2)}&&\\ \downarrow &&&&\downarrow&&&&\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma}&&\leftarrow &&D_{\Sigma}^{\sharp}&&&& \end{matrix}$$ This diagram is commutative and the maps respect the structures of the spaces. As indicated in this diagram, the valuative space $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ associated to $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ has an injective morphism $\eta^{{\rm{st}}ar}: D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ (Theorem \ref{SL2BS}), which is an improved version of $\eta$, and a proper surjective morphism $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm{SL}}(2),{\mathrm{val}}}$ (Theorem \ref{0thm}). Here morphism means a morphism of topological spaces endowed with sheaves of real analytic functions and with log structures. As also indicated in the diagram, the CKS map $\psi: D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D_{{\rm{SL}}(2)}$ factors as $D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D^{\sharp}_{\Sigma, [:]}\to D_{{\rm{SL}}(2)}$, and we have a continuous map $\psi: D^{\sharp}_{\Sigma, [{\mathrm{val}}]}\to D_{{\rm{SL}}(2),{\mathrm{val}}}$ (Theorem \ref{valper}). \end{sbpara} \bold egin{sbpara}{\lambda}bel{afd3} In the case $\Sigma$ is the fan $\Xi$ of all rational nilpotent cones of rank $\leq 1$, we have $$D^{\sharp}_{\Xi,[{\mathrm{val}}]}= D^{\sharp}_{\Xi,[:]}= D^{\sharp}_{\Xi,{\mathrm{val}}}= D^{\sharp}_{\Xi}.$$ Furthermore in this case, we have a CKS map $\psi: D^{\sharp}_{\Xi}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$, and hence the three worlds are connected directly by $$ D^{\sharp}_{\Xi} \overset{\psi}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\eta^{{\rm{st}}ar}}{\underset{{\subset}set}\to} D_{{\rm {BS}},{\mathrm{val}}}.$$ See Theorem \ref{rk1}. 
\end{sbpara} \bold egin{sbpara}{\lambda}bel{star+-} As is described above, the spaces $D_{{\rm{SL}}(2)}$, $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ and $D_{{\rm {BS}}}$ are related via their associated valuative spaces $D_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ and $D_{{\rm {BS}},{\mathrm{val}}}$. The associated valuative space is a kind of a projective limit of blowing-ups. In Section 2, we will construct also spaces $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$, $D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$ and $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ which are related to $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ via kinds of blowing-ups and blowing-downs and which work as bridges between $D_{{\rm{SL}}(2)}$, $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ and $D_{{\rm {BS}}}$ before going to the valuative spaces. See Section 2. \end{sbpara} \bold egin{sbpara} A nilpotent orbit appears as the limit of a variation of mixed Hodge structure in degeneration. SL(2)-orbits are simpler objects and Borel--Serre orbits are further simpler. The theory of SL(2)-orbits (\cite{Scw}, \cite{CKS} for the pure case and \cite{Pe}, \cite{KNU1} for the mixed case) tells that, roughly speaking, an SL(2)-orbit is associated to a nilpotent orbit, and we can read real analytic behaviors of the degeneration better by looking at the simpler object SL(2)-orbit. The map $\psi$ gives the SL(2)-orbit associated to a nilpotent orbit. We hope that the above extended period domains and their relations are useful in the study of degeneration of mixed Hodge structures. Actually, as illustrated in Section \ref{relBK} below and in Section 7, our theory has an application to the study \cite{BK} of asymptotic behaviors of degenerations of Beilinson regulators and local height pairings. In these subjects, the asymptotic behaviors are understood by degeneration of mixed Hodge structures. \end{sbpara} {\subset}section{Mild degenerations}{\lambda}bel{ss:mild} \bold egin{sbpara} We will define the subsets $$D^{{{\rm{mild}}}}_{\Sigma}{\subset}set D_{\Sigma}, \quad D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}{\subset}set D^{{\rm{st}}ar}_{{\rm{SL}}(2)}, \quad D^{{{\rm{mild}}}}_{{\rm {BS}}}{\subset}set D_{{\rm {BS}}}$$ of elements with mild degenerations. Any element of $D^{\diamond}_{{\rm{SL}}(2)}$ is regarded as having mild degeneration. \end{sbpara} \bold egin{sbpara} Let $D^{{{\rm{mild}}}}_{\Sigmama}$ be the subset of $D_{\Sigma}$ consisting of all points $p$ satisfying the following condition. For any element $N$ of the monodormy cone associated to $p$, there is a splitting of $W$ which is compatible with $N$. (The splitting can depend on $N$ and need not have any relation with the Hodge filtration). Denote the subset $D_{{\rm {BS}}}^{(A)}$ of $D_{{\rm {BS}}}$ (Part I, 8.1) by $D^{{{\rm{mild}}}}_{{\rm {BS}}}$. Let $D^{{{\rm{mild}}}}_{{\rm {BS}},{\mathrm{val}}}{\subset}set D_{{\rm {BS}},{\mathrm{val}}}$ be the inverse image of $D^{{{\rm{mild}}}}_{{\rm {BS}}}$. There is also a subset $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ of $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ consisting of $A$-orbits (see Section 2) whose inverse image $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2),{\mathrm{val}}}$ in $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ coincide with the inverse image of $D^{{{\rm{mild}}}}_{{\rm {BS}},{\mathrm{val}}}$ under $\eta^{{\rm{st}}ar}$. 
We have also the mild parts of $D_{\Sigmama,[:]}^{\sharp}$ and $D^{\sharp}_{\Sigmama,[{\mathrm{val}}]}$, i.e., let $D_{\Sigmama,[:]}^{\sharp,{{\rm{mild}}}}$ and $D^{\sharp,{{\rm{mild}}}}_{\Sigmama,[{\mathrm{val}}]}$ be the inverse images of ${\Gamma}amma\operatorname{\backslash} D^{{{\rm{mild}}}}_{\Sigmama}$ in $D_{\Sigmama,[:]}^{\sharp}$ and in $D_{\Sigmama,[{\mathrm{val}}]}^{\sharp}$, respectively. All these mild parts ${\Gamma}amma \operatorname{\backslash} D^{{{\rm{mild}}}}_{\Sigmama}$, $D_{{\rm {BS}}}^{{{\rm{mild}}}}$, $\dots$, etc.\ are open sets of ${\Gamma}amma \operatorname{\backslash} D_{\Sigma}$, $D_{{\rm {BS}}}$, $\dots$ etc., respectively. \end{sbpara} \bold egin{sbpara}{\lambda}bel{afd2} For mild degenerations, we can replace the upper right part of the amplified fundamental diagram by the following commutative diagram (maps respect structures of the spaces) which contain the space $D^{\diamond}_{{\rm{SL}}(2)}$ and its associated valuative space $D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}}$ (Theorem \ref{diathm}). $$\bold egin{matrix} D^{\sharp,{{\rm{mild}}}}_{\Sigmama,{[\val]}}& \overset{\psi}\to & D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}} & \to &D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2),{\mathrm{val}}}&\overset{\eta^{{\rm{st}}ar}}{\underset{{\subset}set}\to} & D^{{{\rm{mild}}}}_{{\rm {BS}}, {\mathrm{val}}}\\ \downarrow &&\downarrow&&\downarrow &&\downarrow \\ D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]} &\overset{\psi}\to & D^{\diamond}_{{\rm{SL}}(2)} &\to & D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}&& D^{{{\rm{mild}}}}_{{\rm {BS}}}\\ &&&&\downarrow&&\\ &&&&D_{{\rm{SL}}(2)}&& \end{matrix}$$ \end{sbpara} \bold egin{sbpara}{\lambda}bel{app} In the applications of our work as in Section 7, the following part of the fundamental diagrams in Section 0.1 and Section \ref{ss:mild} becomes important. $$\bold egin{matrix} D_{\Sigma,[:]}^{\sharp,{{\rm{mild}}}}&\to& D^{\diamond}_{{\rm{SL}}(2)}\\ \cap &&\downarrow\\ D_{\Sigma,[:]}^{\sharp}&\to& D_{{\rm{SL}}(2)} \end{matrix}$$ Via this diagram, we can understand degeneration of mixed Hodge structure in the space $D_{{\rm{SL}}(2)}$, and understand mild degeneration better in $D^{\diamond}_{{\rm{SL}}(2)}$. The right vertical arrow is usually not injective, and hence $D^{\diamond}_{{\rm{SL}}(2)}$ can tell informations about mild degeneration which is lost in $D_{{\rm{SL}}(2)}$. This is explained in Section \ref{relBK} below, and in Section \ref{s:Ex} more precisely. \end{sbpara} {\subset}section{Relations with regulators and local height pairings}{\lambda}bel{relBK} We illustrate the relations of this work with the work \cite{BK}. \bold egin{sbpara}{\lambda}bel{K2intro} Let $S$ be a smooth curve over ${\bold{C}}$ and let $f:X\to S$ be a proper surjective morphism from a smooth algebraic variety $X$. Let $0\in S$ be a point, and assume that $X\smallsetminus X_0\to S\smallsetminus \{0\}$ is smooth and $X$ is of semistable reduction at $0\in S$. For $Z\in K_n(X\smallsetminus X_0)$ $(n{{\gamma}mma}eq 1)$, the asymptotic behavior of the regulator of the restriction $Z(t)\in K_n(X_t)$ of $Z$ to $X_t$ $(t\in S \smallsetminus\{0\}$, $t\to 0)$ is studied in \cite{BK} by using our theory of degeneration of MHS. For each $r{{\gamma}mma}eq 0$, $Z$ defines a variation of mixed Hodge structure $H_Z$ on $S\smallsetminus\{0\}$ with an exact sequence $0\to H^m(X/S)(r) \to H_Z \to {\bold{Z}}\to 0$, where $m=2r-n-1$, $H^m(X/S)$ is the $m$-th direct image $R^mf_*{\bold{Z}}$ on $S\smallsetminus \{0\}$ with Hodge filtration, and $(r)$ is the Tate twist. 
The ($r$-th) regulator of $Z(t)$ is determined by the fiber $H_Z(t)$ of $H_Z$ at $t$. \end{sbpara} \bold egin{sbpara} We describe how our theory is related to this subject. Our description in the rest of Section 0.3 is rough and imprecise. More precise matters are described in Section \ref{ss:reg} and details are given in \cite{BK}. We have the period map $$(S\smallsetminus \{0\}) \times K_n(X\smallsetminus X_0) \to {\Gamma}amma\operatorname{\backslash} D\quad (t, Z)\mapsto \text{class}(H_Z(t)).$$ By Part III, this extends to $$S\times K_n(X\smallsetminus X_0) \to {\Gamma}amma \operatorname{\backslash} D_{\Xi}, \quad S^{\log}\times K_n(X\smallsetminus X_0) \to {\Gamma}amma \operatorname{\backslash} D^{\sharp}_{\Xi}$$ where $S^{\log}$ is the space associated to $S$ defined in \cite{KN}. If $Z$ comes from $K_n(X)$, then $H_Z$ has mild degeneration at $0\in S$ (\ref{KXmild}). The diagram in \ref{app} produces the following commutative diagram. $$\bold egin{matrix} S^{\log}\times K_n(X) &\to &{\Gamma}amma\operatorname{\backslash} D^{\sharp,{{\rm{mild}}}}_{\Xi} &\to & {\Gamma}amma\operatorname{\backslash} D^{\diamond}_{{\rm{SL}}(2)} \\ \downarrow && \cap &&\downarrow\\ S^{\log} \times K_n(X\smallsetminus X_0)&\to &{\Gamma}amma\operatorname{\backslash} D^{\sharp}_{\Xi} &\to& {\Gamma}amma \operatorname{\backslash} D_{{\rm{SL}}(2)} \end{matrix}$$ \end{sbpara} \bold egin{sbpara} We can prove that for $Z\in K_n(X)$, the regulator of $Z(t)$ converges when $t\to 0$ (Theorem \ref{thm2}). In fact, this is a consequence of the fact that the period map $S\smallsetminus \{0\}\to {\Gamma}amma \operatorname{\backslash} D\;;\;t\mapsto \text{class}(H_Z(t))$ induced by $Z$ extends to a continuous map $S^{\log}\to {\Gamma}amma\operatorname{\backslash} D^{\diamond}_{{\rm{SL}}(2)}$ as indicated by the upper row of the above diagram. We recover the limit of the regulator of $Z(t)$ for $t\to 0$ from the image of a point $b$ of $S^{\log}$ over $0$ in ${\Gamma}amma \operatorname{\backslash} D^{\diamond}_{{\rm{SL}}(2)}$. On the other hand, for $Z\in K_n(X\smallsetminus X_0)$ which need not come from $K_n(X)$, the regulator of $Z(t)$ need not converge when $t\to 0$, and the image of $b$ in ${\Gamma}amma \operatorname{\backslash} D_{{\rm{SL}}(2)}$ tells how rapidly it diverges. When $Z$ comes from $K_n(X)$, the image of $b$ in ${\Gamma}amma \operatorname{\backslash} D_{{\rm{SL}}(2)}$ has smaller information than the image of $b$ in ${\Gamma}amma \operatorname{\backslash} D^{\diamond}_{{\rm{SL}}(2)}$, and cannot tell the limit of the regulator of $Z(t)$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{htintro} We have a similar story for the asymptotic behavior of the local height pairing (at the Archimedean place). This is introduced in Section \ref{ss:reBK}. \end{sbpara} {\subset}section{Organization of this paper, acknowledgements} \bold egin{sbpara} The organization of this paper is as follows. Section 1 is a preparation. In Section \ref{s:new}, we consider the new space $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits. In Section \ref{s:val}, we consider the spaces $D_{{\rm{SL}}(2),{\mathrm{val}}}$ and $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ of valuative ${\rm{SL}}(2)$-orbits and the space $D_{{\rm {BS}},{\mathrm{val}}}$ of valuative Borel--Serre orbits. In Section \ref{s:newval}, we consider the new spaces $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ and $D^{\sharp}_{\Sigma, [:]}$ in the world of nilpotent orbits, and improve CKS maps by using these spaces. 
In Section \ref{s:dia}, we consider the new space $D^{\diamond}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits and construct mild CKS maps. In Section \ref{s:NSB}, we give complementary results on properties of extended period domains, on relations of nilpotent orbits, SL(2)-orbits, and Borel-Serre orbits, and on extended period maps. In Section \ref{s:Ex}, we illustrate the relations to the work \cite{BK} and give examples. In the appendix Section \ref{s:co}, we give corrections to \cite{KU} and supplements to Part III. Sections A.1 and Section A.3 in this appendix are directly related to Section \ref{ss:Lthm} of this Part IV. \end{sbpara} \bold egin{sbpara} We thank Spencer Bloch. Theorem \ref{diathm} and Theorem \ref{thm2} in this Part IV were obtained in joint efforts with him related to the joint work \cite{BK}. The first author is partially supported by NFS grant DMS 1001729. The second author is partially supported by JSPS. KAKENHI (C) No. 18540017, (C) No. 22540011. The third author is partially supported by JSPS. KAKENHI (B) No. 23340008. \end{sbpara} \operatorname{naive}oindent \section{Preliminaries}{\lambda}bel{s:pre} {\subset}section{The setting}{\lambda}bel{setting} We recall the basic setting and the notation used throughout this series of papers. \bold egin{sbpara}{\lambda}bel{hodge} We fix ${{\Lambda}mbda}bda=(H_0, W, ({\lambda}ngle\;,\;\ranglegle_w)_w, (h^{p,q})_{p, q})$, where $H_0$ is a finitely generated free ${\bold{Z}}$-module, $W$ is a finite increasing rational filtration on $H_{0, {\bold{R}}} = {\bold{R}}\otimes H_0$, ${\lambda}ngle\;,\;\ranglegle_w$ for each $w\in {\bold{Z}}$ is a rational nondegenerate ${\bold{R}}$-bilinear form ${{\gamma}mma}r^W_w\times {{\gamma}mma}r^W_w\to {\bold{R}}$ which is symmetric if $w$ is even and is anti-symmetric if $w$ is odd, $h^{p,q}$ is a nonnegative integer given for each $(p, q)\in {\bold{Z}}^2$, \operatorname{naive}oindent satisfying the following conditions (1)--(3). (1) $\sum_{p, q} h^{p,q}= \text{rank}_{\bold{Z}}(H_0)$, (2) $\sum_{p+q=w} h^{p,q}= \dim_{\bold{R}}({{\gamma}mma}r^W_w)$ for any $w\in {\bold{Z}}$, (3) $h^{p,q}=h^{q,p}$ for any $(p, q)$. \end{sbpara} \bold egin{sbpara} Let $D$ be the classifying space of gradedly polarized mixed Hodge structures in \cite{U1} associated to the data fixed in \ref{hodge}. As a set, $D$ consists of all increasing filtrations $F$ on $H_{0,{\bold{C}}} = {\bold{C}} \otimes H_0$ such that $(H_0, W, ({\lambda}ngle\;,\;\ranglegle_w)_w, F)$ is a gradedly polarized mixed Hodge structures with $\dim_{\bold{C}} F^p({{\gamma}mma}r^W_{p+q})/F^{p+1}({{\gamma}mma}r^W_{p+q}) = h^{p,q}$ for all $p, q$. The space $D$ is an open subset of a simpler complex analytic manifold $\Dc$ (Part I, 1.5) which is defined by dropping the condition of positivity for ${\lambda}ngle\;,\;\ranglegle_w$ in the definition of $D$. \end{sbpara} \bold egin{sbpara} For $A={\bold{Z}}, {\bold{Q}}, {\bold{R}}, {\bold{C}}$, let $G_A$ be the group of the $A$-automorphisms of $(H_{0,A}, W)$ whose ${{\gamma}mma}r^W_w$ are compatible with ${\lambda}ngle\;,\;\ranglegle_w$ for all $w$. Here $H_{0,A} = A \otimes H_0$. Then $G_{\bold{C}}$ (resp.\ $G_{\bold{R}}$) acts on $\Dc$ (resp.\ $D$). For $A= {\bold{Q}}, {\bold{R}}, {\bold{C}}$, let $\fg_A$ be the associated Lie algebra of $G_A$. Let $G_{A,u}=\{{{\gamma}mma}\in G_A\;|\;{{\gamma}mma}r^W(g)=1\}$, $\fg_{A,u}=\{N\in \fg_A\,|\,{{\gamma}mma}r^W(N)=0\}$. 
Then $G_A/G_{A,u}$ is isomorphic to $G_A({{\gamma}mma}r^W):=\prod_w G_A({{\gamma}mma}r^W_w)$ and $\fg_A/\fg_{A,u}$ is isomorphic to $\fg_A({{\gamma}mma}r^W):=\prod_w\fg_A({{\gamma}mma}r^W_w)$, where $G_A({{\gamma}mma}r^W_w)$ (resp.\ $\fg_A({{\gamma}mma}r^W_w)$) is \lq\lq the $G_A$ (resp.\ $\fg_A$) for ${{\gamma}mma}r^W_w$''. \end{sbpara} \bold egin{sbpara} For each $w\in {\bold{Z}}$, let $D({{\gamma}mma}r^W_w)$ be the $D$ for the graded quotient $((H_0\cap W_w)/(H_0\cap W_{w-1}), {\lambda}n\;,\;\rangle_w, (h^{p,q})_{p+q=w})$. Let $D({{\gamma}mma}r^W) = \prod_{w\in {\bold{Z}}} \; D({{\gamma}mma}r^W_w).$ Then the canonical morphism $$D \to D({{\gamma}mma}r^W); F \mapsto F({{\gamma}mma}r^W):= (F({{\gamma}mma}r^W_w))_{w\in {\bold{Z}}}$$ is surjective. \end{sbpara} \bold egin{sbpara}{\lambda}bel{splW} Let $W'$ be a finite increasing filtration on $H_{0, {\bold{R}}}$. A {\it splitting} of $W'$ is an isomorphism $$s\colon {{\gamma}mma}r^{W'}:=\operatorname{naive}ormalsize\bigoplus_w {{\gamma}mma}r^{W'}_w \overset \simeq \to H_{0,{\bold{R}}}$$ of ${\bold{R}}$-vector spaces such that for any $w\in {\bold{Z}}$ and $v \in {{\gamma}mma}r^{W'}_w$, $s(v) \in W'_w$ and $v = (s(v)\bmod W'_{w-1})$. Let ${\rm{spl}}(W')$ be the set of all splittings of $W'$. Consider the case $W'=W$. Then ${\rm{spl}}(W)$ is regarded as a $G_{{\bold{R}}, u}$-torsor. Let $D_{{\rm{spl}}}:= \{s(F) \;|\; s\in {\rm{spl}}(W),\, F\in D({{\gamma}mma}r^W)\} {\subset}set D$ be the subset of {\it ${\bold{R}}$-split} elements. Here $s(F)^p:= s(\bigoplus_w F_{(w)}^p)$ for $F = (F_{(w)})_w \in D({{\gamma}mma}r^W)$. Then, $D_{{\rm{spl}}}$ is a real analytic closed submanifold of $D$, and we have a real analytic isomorphism ${\rm{spl}}(W) \times D({{\gamma}mma}r^W) \overset \sim \to D_{{\rm{spl}}}$, $(s, F)\mapsto s(F)$. Let $D_{\operatorname{naive}spl}:=D\smallsetminus D_{{\rm{spl}}}$. \end{sbpara} {\subset}section{Canonical splitting of the weight filtration and the invariant ${\delta}ta$ of non-splitting} \bold egin{sbpara}{\lambda}bel{grsd} We review the fact that the weight filtration of an ${\bold{R}}$-mixed Hodge structure has a canonical splitting over ${\bold{R}}$ (which does not split the Hodge filtration except the case of an ${\bold{R}}$-split mixed Hodge structure) and the fact that there is an important map ${\delta}ta$ which tells how the ${\bold{R}}$-mixed Hodge structure is far from ${\bold{R}}$-split. We review that we have an isomorphism of real analytic manifolds $$D\overset{\cong}\to \{(F, s, {\delta}ta)\in D({{\gamma}mma}r^W)\times {\rm{spl}}(W) \times{\cal {L}} \;|\; {\delta}ta\in {\cal {L}}(F)\}$$ $$ x\mapsto (x({{\gamma}mma}r^W), {\rm{spl}}_W(x), {\delta}ta_W(x))$$ (Part II, Proposition 1.2.5) by using the canonical splitting ${\rm{spl}}_W(x)$ of $W$ associated to $x$ and the invariant ${\delta}ta_W(x)$ of non-splitting associated to $x$. ${\cal {L}}$ and ${\cal {L}}(F)$ are explained in \ref{cL(F)}, ${\delta}ta_W(x)$ is explained in \ref{II,1.2.2}, and ${\rm{spl}}_W(x)$ is explained in \ref{II,1.2.3} below. \end{sbpara} \bold egin{sbpara}{\lambda}bel{cL(F)} Let ${\cal {L}}= W_{-2}\mathrm{End}_{{\bold{R}}}({{\gamma}mma}r^W)$ be the set of all ${\bold{R}}$-linear maps ${\delta}ta:{{\gamma}mma}r^W\to{{\gamma}mma}r^W$ such that ${\delta}ta({{\gamma}mma}r^W_w){\subset}set\bigoplus_{w'\le w-2}{{\gamma}mma}r^W_{w'}$ for all $w\in{\bold{Z}}$ (Part II, 1.2.1). This is a finite dimensional weighted ${\bold{R}}$-vector space. 
For $F\in D({{\gamma}mma}r^W)$, let ${\cal {L}}(F)$ be the weighted subspace of ${\cal {L}}$ consisting of all elements whose $(p,q)$-Hodge components for $F$ are $0$ unless $p<0$ and $q<0$. That is, ${\cal {L}}(F)$ is the set of all ${\delta}ta\in {\cal {L}}$ such that ${\delta}ta(H^{p,q}_F){\subset}set \bigoplus_{p'<p, q'<q}\; H^{p',q'}_F\;\text{for all}\;p,q\in{\bold{Z}}$. Here $H^{p,q}_F$ denotes the $(p,q)$-Hodge component of $F({{\gamma}mma}r^W_{p+q})$ (Part II, 1.2.1). \end{sbpara} \bold egin{sbpara}{\lambda}bel{II,1.2.2} We explain ${\delta}ta_W(x)\in {\cal {L}}(x({{\gamma}mma}r^W))$. For $x \in D$, there is a unique pair of $s' \in {\rm{spl}}(W)$ and ${\delta}ta \in {\cal {L}}(x({{\gamma}mma}r^W))$ such that $$ x = s'(\exp(i{\delta}ta)x({{\gamma}mma}r^W)) $$ (\cite{CKS} (2.20)). We write ${\delta}ta_W(x)$ (or ${\delta}ta(x)$) for this ${\delta}ta$. \end{sbpara} \bold egin{sbpara} Roughly speaking, ${\delta}ta_W(x)$ is the invariant of the mixed Hodge structure $x$ which measures how $x$ is far from $D_{{\rm{spl}}}$ in $D$. We have ${\delta}ta_W(x)=0$ if and only if $x\in D_{{\rm{spl}}}$ (\ref{splW}). This ${\delta}ta_W(x)$ plays important roles in our series of papers. It is related to the regulator in number theory and in arithmetic geometry as is discussed in \cite{BK} and in Section 7 of this Part IV. Hence we would like to propose to call ${\delta}ta_W(x)$ the regulator of the mixed Hodge structure $x$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{II,1.2.3} We explain ${\rm{spl}}_W(x)\in {\rm{spl}}(W)$. Let $x \in D$, and let $s'\in {\rm{spl}}(W)$ and ${\delta}ta$ be as in \ref{II,1.2.2}. Then the {\it canonical splitting} $s={\rm{spl}}_W(x)$ of $W$ associated to $x$ is defined by $$ s=s'\exp(\zeta), $$ where $\zeta=\zeta(x({{\gamma}mma}r^W), {\delta}ta)$ is a certain element of ${\rm {End}}_{{\bold{R}}}({{\gamma}mma}r^W)$ determined by $x({{\gamma}mma}r^W)$ and ${\delta}ta={\delta}ta_W(x)$ roughly as in the following way. Let ${\delta}ta_{p,q}$ $(p,q\in {\bold{Z}})$ be the $(p,q)$-Hodge component of ${\delta}ta$ with respect to $x({{\gamma}mma}r^W)$. Then the $(p,q)$-Hodge component $\zeta_{p, q}$ of $\zeta=\zeta(x({{\gamma}mma}r^W), {\delta}ta)$ with respect to $x({{\gamma}mma}r^W)$ is given as a certain universal Lie polynomial of ${\delta}ta_{p', q'}$ $(p', q'\in {\bold{Z}}$, $p'\leq -1$, $q'\leq -1)$. See \cite{CKS} (6.60), and Section 1 and Appendix of \cite{KNU1} for more explanations. For $x\in D$, $x_{{\rm{spl}}}:=s(x({{\gamma}mma}r^W))\in D_{{\rm{spl}}}$ with $s={\rm{spl}}_W(x)$ is called the {\it associated ${\bold{R}}$-split mixed Hodge structure}. We have $x\in D_{{\rm{spl}}}$ if and only if $x=x_{{\rm{spl}}}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{liftac} We have the following action of the group $\prod_{w\in {\bold{Z}}} {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ on $D$, which we call the {\it lifted action}. For $a=(a_w)_w\in \prod_{w\in {\bold{Z}}} {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$, $a$ sends $x\in D$ to $x'\in D$ which is characterized by $x'({{\gamma}mma}r^W_w)=a_wx({{\gamma}mma}r^W_w)$, ${\rm{spl}}_W(x')={\rm{spl}}_W(x)$, and ${\delta}ta_W(x')={\rm{Ad}}(a){\delta}ta_W(x)$. In other words, $a$ sends the Hodge filtration $F\in D$ to the Hodge filtration $s_F a(s_F^{-1}(F))$ where $s_F:={\rm{spl}}_W(F)$ and $s_F^{-1}(F)$ denotes the filtration on ${{\gamma}mma}r^W_{{\bold{C}}}=\prod_w {{\gamma}mma}r^W_{w,{\bold{C}}}$ induced by $F$ via $s_F^{-1}: H_{0,{\bold{C}}}\overset{\cong}\to {{\gamma}mma}r^W_{{\bold{C}}}$. 
This lifted action will be used in Section 2. \end{sbpara} {\subset}section{Spaces with real analytic structures and with fs log structures with sign} {\lambda}bel{ss:B} This is essentially a review of Section 3.1 of Part II. \bold egin{sbpara}{\lambda}bel{2.5.1} Endow ${\bold{R}}^n$ $(n{{\gamma}mma}eq 0)$ with the sheaf $\cO_{{\bold{R}}^n}$ of real analytic functions. Let ${\cal {B}}_{\bold{R}}'$ be the category of locally ringed spaces $S$ over ${\bold{R}}$ satisfying the following condition (i) locally on $S$. (i) There are $n{{\gamma}mma}eq 0$ and a morphism $\iota:S\to {\bold{R}}^n$ of locally ringed spaces over ${\bold{R}}$ such that $\iota$ is injective, the topology of $S$ coincides with the topology induced from that of ${\bold{R}}^n$, and the map $\iota^{-1}(\cO_{{\bold{R}}^n}) \to \cO_S$ is surjective. For an object $S$ of ${\cal {B}}'_{\bold{R}}$, we often call the structural sheaf $\cO_S$ the sheaf of real analytic functions on $S$ (though $S$ need not be a real analytic space). Let $\cC_{\bold{R}}$ be the category of locally ringed spaces $S$ over ${\bold{R}}$ satisfying the following condition (ii). (ii) For any open set $U$ of $S$ and for any $n{{\gamma}mma}eq 0$, the canonical map $\text{Mor}(U, {\bold{R}}^n)\to \cO_S(U)^n$ is bijective. \end{sbpara} \bold egin{sbpara} We have $${\cal {B}}'_{\bold{R}}{\subset}set \cC_{\bold{R}}.$$ For the proof, see Part II, Lemma 3.1.2. \end{sbpara} \bold egin{sbpara}{\lambda}bel{value} For a topological field $K$ and for a locally ringed space $S$ over $K$, the following three conditions (i)--(iii) are equivalent. (i) For any $s\in S$, the map $K\to \cO_{S,s}/m_s$ ($m_s$ denotes the maximal ideal of $\cO_{S,s})$ is an isomorphism. Furthermore for any open set $U$ of $S$ and for any $f\in \cO_S(U)$, the map $U\to K\;;\; s\mapsto f(s)$ is continuous. Here $f(s)$ denotes the image of $f$ in $\cO_{S,s}/m_s=K$. (ii) Let $\cO'_S$ be the sheaf on $S$ of all $K$-valued continuous functions. Then there is a homomorphism $\cO_S\to \cO'_S$ of sheaves of rings over $K$. (iii) Let $S'$ be the topological space $S$ endowed with the sheaf of all $K$-valued continuous functions. Then there is a morphism of locally ringed spaces $S'\to S$ over $K$ whose underlying map $S'\to S$ is the identity map. If these equivalent conditions are satisfied, there is only one homomorphism $\cO_S\to \cO'_S$ of sheaves of rings over $K$, and there is only one morphism $S'\to S$ of locally ringed spaces over $K$ lying over the identity map of $S$. These can be proved easily. \end{sbpara} \bold egin{sbpara} Note that objects of $\cC_{\bold{R}}$ satisfy the equivalent conditions in \ref{value} with $K={\bold{R}}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{lsign} Let $S$ be a locally ringed space over ${\bold{R}}$ satisfying the equivalent conditions in \ref{value} with $K={\bold{R}}$. By a {\it log structure with sign} on $S$, we mean a log structure $M$ on $S$ endowed with a submonoid sheaf $M_{>0}$ of $M$ satisfying the following (i) and (ii). (i) $M_{>0}\supset \cO^\times_{S,>0}$. Here $\cO^\times_{S,>0}$ denotes the subgroup sheaf of $\cO_S^\times$ consisting of all local sections whose values are $>0$. (ii) The map $M_{>0} \times \{\pm 1\}\to M\;;\;(f, \varepsilon)\mapsto \varepsilon f$ is an isomorphism of sheaves. Here we regard $\{\pm 1\}{\subset}set \cO_S^\times {\subset}set M$. Note that the map $\cO^\times_{S,>0}\times \{\pm 1\}\to \cO^\times_S\;;\;(f, \varepsilon)\mapsto \varepsilon f$ is an isomorphism. 
Indeed, if $f \in \cO_S^\times$ has value $>0$ (resp.\ $<0$) at $s\in S$, then $f$ (resp.\ $-f$) belongs to $\cO_{S,>0}^{\times}$ on some open neighborhood of $s$. Hence this map is surjective. The injectivity is clear.
\end{sbpara}

\begin{sbpara}\label{integr}
In Part II, Section 3.1, we defined the notion of a log structure with sign in a more restrictive situation, where $S$ is an object of $\cC_{\bold{R}}$ and $M$ is required to be integral (that is, the canonical map $M\to M^{\rm gp}$ is injective), and the presentation of the definition there was more complicated. Here we improve both the generality and the presentation of the definition. (But in this paper, we do not need this generalization.)

If $M$ is integral, the present definition is equivalent to the definition in Part II, 3.1.5, which uses a subgroup sheaf $M^{\rm gp}_{>0}$. The relation with the present definition is that $M^{\rm gp}_{>0}$ in Part II, 3.1.5 is obtained from $M_{>0}$ in the present definition as $M^{\rm gp}_{>0}= (M_{>0})^{\rm gp}$, and $M_{>0}$ here is obtained from $M^{\rm gp}_{>0}$ there as $M_{>0}= M\cap M^{\rm gp}_{>0}$. To prove the equivalence, the non-trivial point is to show that

(1) $M_{>0}\cap \cO_S^\times = \cO^\times_{S,>0}$

\noindent for a log structure with sign in the present sense.

We prove (1). If $f\in M_{>0}\cap \cO_S^\times$ has a value $<0$ at $s\in S$, then $-f$ belongs to $\cO^\times_{S,>0}\subset M_{>0}$ on some open neighborhood of $s$, and this contradicts condition (ii) in \ref{lsign}. Hence $f\in \cO^\times_{S,>0}$.

Note that (1) implies condition (3) in Part II, 3.1.5 on $M$, that is, the values of $f\in M_{>0}$ are $\geq 0$. (The values of $f$ mean the values of the image of $f$ in $\cO_S$.) Indeed, for $s\in S$, if the image of $f$ in $M_s$ belongs to $\cO_{S,s}^\times$, then it belongs to $\cO^\times_{S, >0,s}$ by the above (1), and hence $f$ has value $>0$ at $s$. If the image of $f$ in $M_s$ does not belong to $\cO_{S,s}^\times$, then $f$ has value $0$ at $s$.
\end{sbpara}

\begin{sbpara}\label{cblog}
Let ${\cal {B}}'_{\bold{R}}(\log)$ be the category of objects of ${\cal {B}}'_{\bold{R}}$ (\ref{2.5.1}) endowed with an fs log structure with sign. Let $\cC_{\bold{R}}({{\rm sat}})$ be the category of objects of $\cC_{\bold{R}}$ endowed with a saturated log structure with sign. Here a log structure $M$ on a locally ringed space $S$ is said to be {\it saturated} if all stalks of $M$ are saturated in the following sense. We say a commutative monoid $\cS$ is saturated if it is integral (that is, the canonical map $\cS\to \cS^{\rm gp}$ is injective) and if for any $a\in \cS^{\rm gp}$ such that $a^n\in \cS\subset \cS^{\rm gp}$ for some integer $n\geq 1$, we have $a\in \cS$.

We have $${\cal {B}}'_{\bold{R}}(\log)\subset \cC_{\bold{R}}({{\rm sat}}).$$
\end{sbpara}

\begin{sbpara}\label{2.3ex}
{\it Examples.}

(1) {\it The object ${\bold{R}}^n_{\geq 0}$ of ${\cal {B}}'_{\bold{R}}(\log)$.} The sheaf $\cO$ of real analytic functions is the inverse image of the sheaf of real analytic functions on ${\bold{R}}^n$. The log structure $M$ with sign is as follows. $M$ (resp.\ $M_{>0}$) is the multiplicative submonoid sheaf of $\cO$ generated by $\cO^\times$ (resp.\ $\cO^\times_{>0}$) and the coordinate functions $t_1, \dots, t_n$.
(2) {\it A real analytic manifold with corners} (\cite{BS}, Appendix) is regarded as an object of ${\cal {B}}'_{\bold{R}}(\log)$. The log structure with sign is given as follows. Let $S$ be a real analytic manifold with corners and let $\cO$ be the sheaf of real analytic functions. If $S$ is an open set of ${\bold{R}}^n_{\geq 0}$ (endowed with the sheaf of real analytic functions), the log structure with sign $(M, M_{>0})$ is defined as the inverse image of that of ${\bold{R}}^n_{\geq 0}$. In this situation, the canonical map $M\to \cO$ is injective and hence $M$ and $M_{>0}$ are regarded as subsheaves of $\cO$. In general, $S$ is locally isomorphic to an open set of ${\bold{R}}^n_{\geq 0}$, and the log structure with sign on $S$ induced from such an isomorphism is independent of the choice of the isomorphism ($M$ and $M_{>0}$ are independent of the choice as subsheaves of $\cO$). By this, we have

(a real analytic manifold with corners) $=$ (an object of ${\cal {B}}'_{\bold{R}}(\log)$ which is locally isomorphic to an open subobject of ${\bold{R}}^n_{\geq 0}$ $(n\geq 0)$).

(3) {\it The real toric variety $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ for an fs monoid $\cS$.} (Here ${\bold{R}}^{\mult}_{\geq 0}$ is the set ${\bold{R}}_{\geq 0}$ regarded as a multiplicative monoid.) This is also an object of ${\cal {B}}'_{\bold{R}}(\log)$. (The above (1) is the case $\cS={\bold{N}}^n$ of this (3).)

The sheaf $\cO$ of real analytic functions is defined as follows. Take a surjective homomorphism ${\bold{N}}^n\to \cS$ of monoids for some $n\geq 0$. It gives an embedding $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\subset {\bold{R}}^n$. We say an {\it ${\bold{R}}$-valued function on an open set of $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ is real analytic} if it is locally a restriction of a real analytic function on an open set of ${\bold{R}}^n$. This defines $\cO$, and it is independent of the choice of the surjective homomorphism ${\bold{N}}^n \to \cS$.

The log structure $M$ is the one associated to the canonical embedding $\cS\to \cO$. $M_{>0}$ is the submonoid sheaf of $M$ generated by $\cO^\times_{>0}$ and the image of $\cS$.

(4) {\it The compactified vector space.} Let $V$ be a finite dimensional graded ${\bold{R}}$-vector space $V=\bigoplus_{w\in {\bold{Z}}, w\leq -1} \; V_w$ of weight $\leq -1$. Then we have a real analytic manifold with corners $\bar V$ (Part I, Section 7). It is covered by two open sets $V$ and $\bar V\smallsetminus \{0\}$. Here $V$ has the usual sheaf of real analytic functions and the trivial log structure, and $\bar V\smallsetminus \{0\}$ is described as follows. For $a\in {\bold{R}}_{>0}$ and $v\in V$, let $a\circ v = \sum_w a^wv_w\in V$, where $v_w$ denotes the component of $v$ of weight $w$. By choosing a real analytic closed submanifold $V^{(1)}$ of $V\smallsetminus \{0\}$ such that ${\bold{R}}_{>0}\times V^{(1)}\to V\smallsetminus \{0\}\;;\;(a, v) \mapsto a\circ v$ is an isomorphism of real analytic manifolds, we have an isomorphism of real analytic manifolds with corners $${\bold{R}}_{\geq 0}\times V^{(1)}\cong \bar V\smallsetminus \{0\}$$ extending the above isomorphism. We will denote this extended isomorphism as $(a,v)\mapsto a\circ v$.

For example, in the cases $V={\cal {L}}$ and $V={\cal {L}}(F)$ (\ref{cL(F)}), we have the compactified vector spaces $\bar {\cal {L}}$ and $\bar {\cal {L}}(F)$, respectively.
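As a simple example of this construction: if $V={\bold{R}}$ placed in the single weight $-1$, then $a\circ v=a^{-1}v$, we may take $V^{(1)}=\{1,-1\}$, and the isomorphism ${\bold{R}}_{\geq 0}\times V^{(1)}\cong \bar V\smallsetminus \{0\}$ adds to $V\smallsetminus\{0\}$ exactly the two points with $a=0$; since $a\circ v$ tends to infinity as $a\to 0$ for $v\neq 0$, these are points at infinity, and $\bar V$ is the two-point compactification $[-\infty, +\infty]$ of ${\bold{R}}$.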
We can identify $\bar {\cal {L}}(F)$ with the closure of ${\cal {L}}(F)$ in $\bar {\cal {L}}$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{cv+0} Let $\cS$ be an fs monoid and consider the real toric variety $T:=\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$. Then if $S$ is an object of $\cC_{\bold{R}}({{\rm sat}})$, we have a natural bijection between the set of all morphisms $S\to T$ in $\cC_{\bold{R}}({{\rm sat}})$ and the set of all homomorphisms $\cS\to {\Gamma}amma(S, M_{S,>0})$. \end{sbprop} \bold egin{pf} Since $\cS{\subset}set {\Gamma}amma(T, M_{T,>0})$, a morphism $S\to T$ induces $\cS\to {\Gamma}amma(S, M_{S,>0})$. It is easy to see that this correspondence is bijective. \end{pf} \bold egin{sbpara}{\lambda}bel{pchar} If $M$ is an fs log structure with sign, locally we have a chart $\cS\to M$ whose image is contained in $M_{>0}$. (Here $\cS$ is an fs monoid.) In fact, if $\cS\to M$ is a chart, the composition $\cS\to M\cong M_{>0}\times \{\pm 1\}\to M_{>0}{\subset}set M$ is also a chart. We will call such a chart $\cS\to M_{>0}$ a {\it positive chart}. \end{sbpara} \bold egin{sbprop}{\lambda}bel{fiberpr} $(1)$ The category ${\cal {B}}'_{\bold{R}}(\log)$ has fiber products. $(2)$ A fiber product in ${\cal {B}}'_{\bold{R}}(\log)$ is a fiber product in $\cC_{\bold{R}}({{\rm sat}})$. \end{sbprop} (1) is proved in Part II, Proposition 3.1.7. We give here a proof which proves both (1) and (2). \bold egin{pf} For a diagram $S_1\to S_0\leftarrow S_2$ in ${\cal {B}}'_{\bold{R}}(\log)$, locally on $S_0, S_1, S_2$, we can find fs monoids $\cS_0, \cS_1, \cS_2$ with homomorphisms $\cS_1\leftarrow \cS_0\to \cS_2$ and a morphism $\iota_j: S_j\to T_j:= \Hom(\cS_j, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ of ${\cal {B}}'_{\bold{R}}(\log)$ for each $j=0,1,2$, satisfying the following conditions (i) and (ii). (i) The diagram $$\bold egin{matrix} S_1 &\to & S_0 &\leftarrow& S_2\\ \downarrow &&\downarrow && \downarrow \\ T_1&\to &T_0& \leftarrow & T_2\end{matrix}$$ is commutative. (ii) For each $j=0,1,2$, the underlying map $S_j\to T_j$ of $\iota_j$ is injective, the topology and the log structure of $S_j$ with sign are induced from those of $T_j$, and the homomorphism $\iota_j^{-1}(\cO_{T_j}) \to \cO_{S_j}$ is surjective. This is proved by using positive charts (\ref{pchar}) on $S_0$, $S_1$, $S_2$ which are compatible. To prove \ref{fiberpr}, it is sufficient to prove that in this situation, we have the fiber product $S_3$ of $S_1\to S_0 \leftarrow S_2$ in $\cC_{\bold{R}}({{\rm sat}})$ which belongs to ${\cal {B}}'_{\bold{R}}(\log)$. Let $\cS_3$ be the pushout of the diagram $\cS_1\leftarrow \cS_0\to \cS_2$ in the category of fs monoids. This $\cS_3$ is obtained from the pushout $\cS_3'$ of $\cS_1\leftarrow \cS_0 \to \cS_2$ in the category of commutative monoids as follows. $\cS_3$ is the submonoid of $ (\cS_3')^{{{\gamma}mma}p}$ consisting of all elements $a$ such that for some integer $n{{\gamma}mma}eq 1$, $a^n$ belongs to the submonoid of $(\cS_3')^{{{\gamma}mma}p}$ generated by the images of $\cS_1$ and $\cS_2$. Let $T_3$ be the real toric variety $\Hom(\cS_3, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$, let $S_3'$ be the fiber product of $S_1\to S_0\leftarrow S_2$ in the category of topological spaces, and let $T'_3$ be the fiber product of $T_1\to T_0\leftarrow T_2$ which is identified with $\Hom(\cS'_3, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) $ as a topological space. As a topological space, we define $S_3$ as the fiber product of $S_3' \to T_3' \leftarrow T_3$. 
Let $\iota_3: S_3\to T_3$ be the canonical injection. We define the structure sheaf $\cO_{S_3}$ on $S_3$ as follows. For $j=0, 1,2$, let $I_j$ be the kernel of $\iota^{-1}_j(\cO_{T_j})\to \cO_{S_j}$. Let $I_3$ be the ideal of $\iota^{-1}_3(\cO_{T_3})$ generated by the images of $I_1$ and $I_2$. Define $\cO_{S_3}= \iota_3^{-1}(\cO_{T_3})/I_3$. Define the log structure with sign on $S_3$ as the inverse image of that of $T_3$. Then $S_3$ is clearly an object of ${\cal {B}}'_{\bold{R}}(\log)$. We prove that $S_3$ is the fiber product of $S_1\to S_0\leftarrow S_2$ in $\cC_{\bold{R}}({{\rm sat}})$. By \ref{cv+0}, for an object $X$ of $\cC_{\bold{R}}({{\rm sat}})$ and $j=0,1,2,3$, a morphism $X\to S_j$ corresponds in one to one manner to a homomorphism $\cS_j\to {\Gamma}amma(X, M_{X,>0})$ such that the associated morphism $X\to T_j$ has the following two properties (1) and (2). (1) The image of the set $X$ in $T_j$ is contained in $S_j$. (2) The image of $I_j$ in $\cO_X$ is zero. Since ${\Gamma}amma(X, M_{X,>0})$ is a saturated monoid, a homomorphism $\cS_3'\to {\Gamma}amma(X, M_{X,>0})$ and a homomorphism $\cS_3\to {\Gamma}amma(X, M_{X,>0})$ correspond in one to one manner. These prove that $S_3$ is the fiber product of $S_1\to S_0\leftarrow S_2$ in $\cC_{\bold{R}}({{\rm sat}})$. \end{pf} \bold egin{sbpara}{\lambda}bel{gest0} The proof of \ref{fiberpr} shows that the underlying topological space of a fiber product in ${\cal {B}}'_{\bold{R}}(\log)$ need not be the fiber product of the underlying topological spaces. We consider this point. We call a homomorphism $\cS_0\to \cS_1$ of saturated commutative monoids (\ref{cblog}) {\it universally saturated} if for any commutative monoid $\cS_2$ and any homomorphism $\cS_0\to \cS_2$, the pushout of $\cS_1\leftarrow \cS_0 \to \cS_2$ in the category of commutative monoids is saturated. For a morphism $S_1\to S_0$ of ${\cal {B}}'_{\bold{R}}(\log)$, we say $f$ is universally saturated if for any $s_1\in S_1$ with image $s_0$ in $S_0$, the homomorphism $M_{S_0,s_0}\to M_{S_1,s_1}$ is universally saturated. (The last condition is equivalent to the condition that the homomorphism $(M_{S_0}/\cO^\times_{S_0})_{s_0} \to (M_{S_1}/\cO^\times_{S_1})_{s_1}$ is universally saturated.) The following can be proved easily: Let $f:S_1\to S_0$ be a morphism in ${\cal {B}}'_{\bold{R}}(\log)$. Let the triple of homomorphisms $\cS_j\to M_{S_j}$ $(j=0,1)$ and $h:\cS_0\to \cS_1$ be a chart of $f$. Then, if $h$ is universally saturated, $f$ is universally saturated. Conversely, if $f$ is universally saturated, then locally on $S_0$ and $S_1$, there are positive charts (\ref{pchar}) and a homomorphism $h$ of charts as above such that $h$ is universally saturated. \end{sbpara} \bold egin{sblem} Let $S_1\to S_0$ be a universally saturated morphism in ${\cal {B}}'_{\bold{R}}(\log)$, let $S_2\to S_0$ be a morphism in ${\cal {B}}'_{\bold{R}}(\log)$, and let $S_3$ be the fiber product of $S_1\to S_0\leftarrow S_2$ in the category ${\cal {B}}'_{\bold{R}}(\log)$. Then the underlying topological space of $S_3$ is the fiber product of the underlying topological spaces of $S_j$ $(j=0,1,2)$. \end{sblem} This follows from the proof of \ref{fiberpr}. \bold egin{sbprop}{\lambda}bel{gest5} (1) For $r{{\gamma}mma}eq 1$, the homomorphism ${\bold{N}}\to {\bold{N}}^r\;;\;m\mapsto (m,m,\dots,m)$ is universally saturated. (2) For any saturated commutative monoid $\cS$, the homomorphisms $\{1\}\to \cS$ and $\cS\to \{1\}$ are universally saturated. 
(3) Let $\cS_j$ $(j=0,1,2)$ be saturated commutative monoids, let $\cS_0 \to \cS_1$ be a universally saturated homomorphism, let $\cS_0\to \cS_2$ be a homomorphism, and let $\cS_3$ be the pushout of $\cS_1\leftarrow \cS_0\to \cS_2$ in the category of commutative monoids. Then the homomorphism $\cS_2\to \cS_3$ is universally saturated.

(4) Let $\cS_j\to \cS_j'$ $(j=1, \dots, n)$ be universally saturated homomorphisms of saturated commutative monoids. Then the homomorphism $\prod_{j=1}^n \cS_j \to \prod_{j=1}^n \cS_j'$ is universally saturated.

(5) A homomorphism $\cS\to \cS'$ of saturated commutative monoids is universally saturated if and only if the induced homomorphism $\cS/\cS^\times \to \cS'/(\cS')^\times$ is universally saturated.

(6) For a saturated commutative monoid $\cS$ and for $a\in \cS$, the canonical homomorphism $\cS\to \cS[1/a]$ is universally saturated. Here $\cS[1/a]$ denotes the submonoid $\{xa^{-n}\;|\;x\in \cS, n\geq 0\}$ of $\cS^{\rm gp}$.
\end{sbprop}

\begin{pf}
The proofs of (1), (2), (5), (6) are easy. (3) is evident. We can prove (4) by induction on $n$ as follows. We may assume $n\geq 2$. Then the homomorphism between products in (4) is the composition $(\prod_{j=1}^{n-1} \cS_j) \times \cS_n \to (\prod_{j=1}^{n-1} \cS_j') \times \cS_n \to (\prod_{j=1}^{n-1} \cS_j')\times \cS_n'$, in which the first homomorphism is universally saturated by induction on $n$ and by (3), and the second homomorphism is universally saturated by (3).
\end{pf}

\begin{sbcor}\label{gest6}
For a diagram $S_1\to S_0\leftarrow S_2$ in ${\cal {B}}'_{\bold{R}}(\log)$, the underlying topological space of the fiber product is the fiber product of the underlying topological spaces in the following cases (i) and (ii).

(i) The case where at least one of $S_1\to S_0$ and $S_2\to S_0$ is strict. Here, for a morphism $f: X\to Y$ of locally ringed spaces with log structures, we say $f$ is strict if the log structure of $X$ coincides with the inverse image of the log structure of $Y$ via $f$.

(ii) The case where the log structure of $S_0$ is trivial.
\end{sbcor}

The following will be used many times in this paper.

\begin{sbpara}\label{embstr}
Let $X$ be an object of ${\cal {B}}'_{\bold{R}}(\log)$ and let $Y$ be a subset of $X$. Assume that the following condition (C) is satisfied.

(C) The homomorphism from $\cO_X$ to the sheaf of ${\bold{R}}$-valued continuous functions on $X$ is injective.

Then we have a structure on $Y$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$, which also satisfies (C), as follows. The topology of $Y$ is the one as a subspace of $X$. $\cO_Y$ is the sheaf of ${\bold{R}}$-valued functions on $Y$ which are locally restrictions of functions in $\cO_X$. The log structure with sign is the pullback of that of $X$.

For an object $S$ of ${\cal {B}}'_{\bold{R}}(\log)$ which satisfies (C), the map ${\rm {Mor}}(S, Y)\to {\rm {Mor}}(S,X)$ is injective and the image coincides with $\{f\in {\rm {Mor}}(S, X)\;|\; f(S)\subset Y\}$.
\end{sbpara}

\subsection{Review on toric geometry}

We recall toric varieties over a field and the real toric varieties associated to fans, by comparing them.

\begin{sbpara}\label{rvtoric}
Let $L$ be a finitely generated free abelian group and let $N:=\Hom(L, {\bold{Z}})$. We will denote the group law of $L$ multiplicatively and that of $N$ additively.
For a rational finitely generated sharp cone ${\sigma}$ in $N_{\bold{R}}$, define an fs monoid $\cS({\sigma}ma)$ by $$\cS({\sigma}):=\{l \in L\,|\; l({\sigma}ma){{\gamma}mma}eq 0\}.$$ For a rational fan $\Sigma$ in $N_{\bold{R}}$, we have a toric variety ${\operatorname{toric}}_k(\Sigma)$ over a field $k$ associated to $\Sigma$ which is an fs log scheme over $k$, and a real toric variety $|{\operatorname{toric}}|(\Sigma)$ which is an object of ${\cal {B}}'_{\bold{R}}(\log)$. We review these. \end{sbpara} \bold egin{sbpara}{\lambda}bel{torsig1} The toric variety ${\operatorname{toric}}_k(\Sigma)$ over $k$ is described as $${\operatorname{toric}}_k(\Sigma)= \bigcup_{{\sigma}\in \Sigma} \operatorname{Spec}(k[\cS({\sigma}ma)])\quad \text{(an open covering)}$$ where $k[\cS({\sigma})]$ denotes the semigroup algebra of $\cS({\sigma}ma)$ over $k$ and $\operatorname{Spec}(k[\cS({\sigma})])$ is endowed with the standard log structure. It represents the contravariant functor from the category $(\fs/k)$ of fs log schemes over $k$ to the category of sets, which sends $S$ to the set of all homomorphisms $h:L\to M_S^{{{\gamma}mma}p}$ satisfying the following condition. (C) Let $s\in S$. Then there exists ${\sigma}\in \Sigma$ such that for any homomorphism $a:(M_S/\cO^\times_S)_{\bar s}\to {\bold{N}}$, the homomorphism $a\circ h: L\to {\bold{Q}}$ belongs to ${\sigma}$. Here $\bar s$ is a geometric point over $s$. Note that this condition is equivalent to the following condition. (C$^\prime)$ \'Etale locally on $S$, there is ${\sigma}\in \Sigma$ such that $h(\cS({\sigma}ma)) {\subset}set M_S$. The set ${\operatorname{toric}}_k(\Sigma)(k)$ of all $k$-rational points of ${\operatorname{toric}}_k(\Sigma)$ is identified with the set of pairs $({\sigma}, h)$ consisting of ${\sigma}\in \Sigma$ and a homomorphism $h:\cS({\sigma})^\times \to k^\times$. The point corresponding to this pair is the element of $\operatorname{Spec}(k[\cS({\sigma})])(k)=\Hom(\cS({\sigma}), k)$ which sends $a\in \cS({\sigma})^\times$ to $h(a)$ and sends $a\in \cS({\sigma})\smallsetminus \cS({\sigma})^\times$ to $0$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{torsig2} The real toric variety $|{\operatorname{toric}}|(\Sigma)$ is described as $$|{\operatorname{toric}}|(\Sigma)=\bigcup_{{\sigma} \in \Sigma} \Hom(\cS({\sigma}), {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})\quad \text{(an open covering)}$$ where $\Hom(\cS({\sigma}), {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ is regarded as an object of ${\cal {B}}'_{\bold{R}}(\log)$ as in \ref{2.3ex} (3). It represents the contravariant functor from $\cC_{\bold{R}}({{\rm sat}})$ to the category of sets, which sends $S$ to the set of all homomorphisms $h:L\to M_{S,>0}^{{{\gamma}mma}p}$ satisfying the following condition. (C) Let $s\in S$. Then there exists ${\sigma}\in \Sigma$ such that for any homomorphism $a:(M_S/\cO^\times_S)_{\bar s}\to {\bold{N}}$, the homomorphism $a\circ h: L\to {\bold{Q}}$ belongs to ${\sigma}$. Note that this condition is equivalent to the following condition. (C$^\prime)$ Locally on $S$, there is ${\sigma}\in \Sigma$ such that $h(\cS({\sigma}ma)) {\subset}set M_{S,>0}$. The set $|{\operatorname{toric}}|(\Sigma)$ is identified with the set of pairs $({\sigma}, h)$ consisting of ${\sigma}\in \Sigma$ and a homomorphism $h:\cS({\sigma})^\times \to {\bold{R}}_{>0}$. 
The point corresponding to this pair is the element of $\Hom(\cS({\sigma}), {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ which sends $a\in \cS({\sigma})^\times$ to $h(a)$ and sends $a\in \cS({\sigma})\smallsetminus \cS({\sigma})^\times$ to $0$. By this understanding, we can regard $|{\operatorname{toric}}|(\Sigma)$ as a closed subset of ${\operatorname{toric}}_{\bold{R}}(\Sigma)({\bold{R}})$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{torsig3} The set $|{\operatorname{toric}}|(\Sigma)$ is also identified with the set of all pairs $({\sigma}, Z)$ consisting of ${\sigma}\in \Sigma$ and a subset $Z$ of $\Hom(L, {\bold{R}}^{\mult}_{>0})$ which is a $\Hom(L/\cS({\sigma})^\times, {\bold{R}}^{\mult}_{>0})$-orbit. In fact, $({\sigma}, Z)$ corresponds to $({\sigma}, h)$ in \ref{torsig2} where $h$ is the restriction of any element of $Z$ to $\cS({\sigma})^\times$. \end{sbpara} \bold egin{sbpara} If $\Sigma$ is finite and $\Sigma'$ is a rational finite subdivision of $\Sigma$, we have a proper surjective morphism ${\operatorname{toric}}_k(\Sigma')\to {\operatorname{toric}}_k(\Sigma)$. In the case $k={\bold{R}}$, this induces a morphism $|{\operatorname{toric}}|(\Sigma')\to |{\operatorname{toric}}|(\Sigma)$ which is proper and surjective. \end{sbpara} \bold egin{sbpara}{\lambda}bel{logmod} A morphism $S'\to S$ in the category $(\fs/k)$ (resp.\ ${\cal {B}}'_{\bold{R}}(\log)$) is called a {\it log modification} if locally on $S$, there are a homomorphism $\cS\to M_S$ (resp.\ $\cS\to M_{S,>0}$) with $\cS$ a sharp fs monoid and a rational finite subdivision $\Sigma'$ of the fan $\Sigma$ of all faces of the cone $\Hom(\cS, {\bold{R}}^{\add}_{{{\gamma}mma}eq 0}){\subset}set \Hom(\cS^{{{\gamma}mma}p}, {\bold{R}}^{\add})$ such that $S'$ is isomorphic over $S$ to $S\times_{{\operatorname{toric}}_k(\Sigma)} {\operatorname{toric}}_k(\Sigma')$ (resp.\ $S\times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')$). The underlying map of topological spaces of a log modification is proper and surjective. \end{sbpara} \bold egin{sbpara}{\lambda}bel{logmod2} We introduce a functor $[\Sigma]$ associated to a fan $\Sigma$, and consider its relation to log modification. Let $L$ and $N$ be as in \ref{rvtoric}. For a rational fan $\Sigma$ in $N_{\bold{R}}$, let $[\Sigma]$ be the contravariant functor from $(\fs/k)$ (resp.\ ${\cal {B}}'_{\bold{R}}(\log)$) to the category of sets which sends $S$ to the set of all homomorphisms $h:L\to M^{{{\gamma}mma}p}_S/\cO^\times_S$ satisfying the condition (C) in \ref{torsig1} (resp.\ \ref{torsig2}). In the present situation, (C) is equivalent to (C$^\prime$) with $M_S$ (resp.\ $M_{S,>0}$) replaced by $M_S/\cO_S^\times$. Let $S$ be an object of $(\fs/k)$ (resp.\ ${\cal {B}}'_{\bold{R}}(\log)$) and assume that we are given $h\in [\Sigma](S)$. This induces a continuous map $S\to \Sigma$ which sends $s\in S$ to the unique cone ${\sigma}\in \Sigma$ such that $\cS({\sigma}){\subset}set L$ coincides with the inverse image of $(M_S/\cO^\times_S)_s$ under $L\to (M^{{{\gamma}mma}p}_S/\cO^\times_S)_s$. Assume $\Sigma$ is finite and let $\Sigma'$ be a rational finite subdivision of $\Sigma$. Then we have a morphism of functors $[\Sigma']\to [\Sigma]$. The contravariant functor ${\rm {Mor}}(-, S)\times_{[\Sigma]} [\Sigma']$ from $(\fs/k)$ (resp.\ ${\cal {B}}'_{\bold{R}}(\log)$) to the category of sets is represented by a log modification $S'\to S$. 
In fact, locally on $S$, $h:L\to M^{{{\gamma}mma}p}_S/\cO^\times_S$ lifts to a morphism $S\to {\operatorname{toric}}_k(\Sigma)$ (1.4.2) (resp.\ $S\to |{\operatorname{toric}}|(\Sigma)$ (1.4.3)) and this functor is represented by $S \times_{{\operatorname{toric}}_k(\Sigma)} {\operatorname{toric}}_k(\Sigma')$ (resp.\ $S\times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')$). \end{sbpara} \bold egin{sbpara}{\lambda}bel{gest1} This 1.4.8 will be used in Section 2.4--Section 2.6. Let $\cS_1$ be an fs monoid, let $T:=\Hom(\cS_1, {\bold{R}}^{\mult}_{>0})$, and let $Z$ be a $T$-torsor. The purpose of this \ref{gest1} is to introduce an object $\bar Z$ of ${\cal {B}}'_{\bold{R}}(\log)$ and to give set-theoretical descriptions (1) below of log modifications of $\bar Z$. Let $\bar T:=\Hom(\cS_1, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) \supset T$, and let $\bar Z:=Z\times^T \bar T$. We regard $\bar Z$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ as follows. Take ${{\bold r}}\in Z$. Then we have bijection $T\to Z\;;\;t\mapsto t{{\bold r}}$ and this induces a bijection $\bar T\to \bar Z$. Via the last bijection from the real toric variety $\bar T$, we obtain a structure of $\bar Z$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$. This structure is independent of the choice of ${{\bold r}}$. We prepare notation. For $s\in \bar Z$, we define a subgroup $T(s)$ of $T$ and a $T(s)$-orbit $Z(s)$ inside $Z$ as follows. In the case $Z=T$ and hence $\bar Z=\bar T$, $s$ is a homomorphism $\cS_1\to {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}$. In this case, let $T(s)$ be the subgroup of $T=\Hom(\cS_1, {\bold{R}}^{\mult}_{>0})$ consisting of all elements which kill $s^{-1}({\bold{R}}_{>0}){\subset}set \cS_1$, and let $Z(s){\subset}set T$ be the set of all elements of $\cS_1\to {\bold{R}}^{\mult}_{>0}$ whose restriction to $s^{-1}({\bold{R}}_{>0})$ coincides with the homomorphism induced by $s$. Then $Z(s)$ is a $T(s)$-orbit. In general, take ${{\bold r}}\in Z$, consider the induced isomorphism $\bar Z \cong \bar T$, let $t$ be the image of $s$ in $\bar T$, let $T(s):=T(t)$, and let $Z(s)$ be the $T(s)$-orbit in $Z$ corresponding to the $T(t)$-orbit $Z(t)$ in $T$ via the isomorphism $Z\cong T$. Then $T(s)$ and $Z(s)$ are independent of the choice of ${{\bold r}}$. Consider $L$ and $N$ in \ref{rvtoric}, let ${\sigma}$ be a rational finitely generated sharp cone in $N_{\bold{R}}$, and let $\Sigma$ be the fan of all faces of ${\sigma}$. Assume that we are given a homomorphism $\cS({\sigma})\to \cS_1$. Then we have a morphism of functors ${\rm {Mor}}(-, \bar Z)\to [\Sigma]$ where $[\Sigma]$ is as in \ref{logmod2}. This morphism is obtained as follows. The homomorphism $\cS({\sigma}) \to \cS_1$ induces ${\rm {Mor}}(-, \bar T)\to [\Sigma]$. Take ${{\bold r}}\in Z$. Then ${{\bold r}}$ gives an isomorphism $\bar Z\cong \bar T$ and hence the composite morphism ${\rm {Mor}}(-, \bar Z) \cong {\rm {Mor}}(-, \bar T) \to [\Sigma]$. This composite morphism is independent of the choice of ${{\bold r}}$. Assume further that the homomorphism $\cS({\sigma})\to \cS_1$ is universally saturated (\ref{gest0}). Let $\Sigma'$ be a rational finite subdivision of $\Sigma$, and let $E$ be the log modification of $\bar Z$ which represents the fiber product ${\rm {Mor}}(-, \bar Z)\times_{[\Sigma]} [\Sigma']$ (\ref{logmod2}). We give a description of $E$ as a set. 
For $s\in \bar Z$ and for ${\sigma}'\in \Sigma'$ such that the image $\tau$ of $s$ in $\Sigma$ coincides with the image of ${\sigma}'$ in $\Sigma$, let $T(s,{\sigma}')$ be the subgroup of $T(s)$ consisting of all elements whose image in $\Hom(L/\cS(\tau)^\times, {\bold{R}}^{\mult}_{>0})$ is contained in its subgroup $\Hom(L/\cS({\sigma}')^\times, {\bold{R}}^{\mult}_{>0})$. Then we have: (1) There is a canonical bijection between $E$ and the set of all triples $(s, {\sigma}', Z')$ where $s\in \bar Z$, ${\sigma}'$ is an element of $\Sigma'$ whose image in $\Sigma$ coincides with the image of $s$ in $\Sigma$, and $Z'$ is a $T(s, {\sigma}')$-orbit in $Z(s)$. In fact, if $Z=T$, then $E= \bar T \times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')$ and hence the bijection is given by \ref{torsig3}. In general, for ${{\bold r}}\in Z$, if $t$ denotes the image of $s$ under the isomorphism $\bar Z\cong \bar T$, we have $T(s, {\sigma}')=T(t, {\sigma}')$ and the isomorphism $Z\cong T$ sends a $T(t, {\sigma}')$-orbit in $T$ to a $T(s,{\sigma}')$-orbit in $Z$, and the induced composite bijection from the set of triples $(s, {\sigma}', Z')$ to $E$ is independent of the choice of ${{\bold r}}$. \end{sbpara} \section{The new space $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits}{\lambda}bel{s:new} In Part II, we defined and studied the space $D_{{\rm{SL}}(2)}$ of SL(2)-orbits. Here we introduce a variant $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. It is an object of the category ${\cal {B}}'_{\bold{R}}(\log)$ (\ref{cblog}). Recall that $D_{{\rm {BS}}}$ is an object of ${\cal {B}}'_{\bold{R}}(\log)$, $D_{{\rm{SL}}(2)}$ has two structures $D^I_{{\rm{SL}}(2)}$ and $D^{II} _{{\rm{SL}}(2)}$ as objects of ${\cal {B}}'_{\bold{R}}(\log)$, and the identity map of $D_{{\rm{SL}}(2)}$ gives a morphism $D^I_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ of ${\cal {B}}'_{\bold{R}}(\log)$. We will relate the three spaces $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$ and $D_{{\rm {BS}}}$ in the following way. These three spaces are not connected directly, but as we will see in this Section 2, they are connected as in the diagram $$\bold egin{matrix} D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}& \to &D^{{\rm{st}}ar}_{{\rm{SL}}(2)}& \to &D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}&\leftarrow & D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}\\ \downarrow &&&&&&\downarrow\\ D^{II}_{{\rm{SL}}(2)}&&&&&& D_{{\rm {BS}}}\end{matrix}$$ in ${\cal {B}}'_{\bold{R}}(\log)$ in which the horizontal arrows are log modifications (\ref{logmod}) and the left vertical arrow is proper surjective. As will be seen in Section 3, this diagram will induce morphisms $$D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} \to D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}, \quad D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$$ of associated valuative spaces, which appeared in Introduction, for log modifications induce isomorphisms of the associated valuative spaces $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\cong}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\cong}\to D^{{\rm{st}}ar,-}_{{\rm{SL}}(2),{\mathrm{val}}}\overset{\cong}\leftarrow D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}$. In the pure case, the arrows in $D_{{\rm{SL}}(2)}\leftarrow D^{*,+}_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$ are isomorphisms. In Section 2.1, we review SL(2)-orbits in the pure situation. In Section 2.2, we continue reviews on Part II. 
In Section 2.3, we define the spaces $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ and $D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$. After preparations in Section 2.4, we connect $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ and $D^{II}_{{\rm{SL}}(2)}$ in Section 2.5 by introducing the space $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$, and we connect $D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$ and $D_{{\rm {BS}}}$ in Section 2.6 by introducing the space $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$. In Section 2.7, we show that our spaces of SL(2)-orbits belong to a full subcategory ${\cal {B}}'_{\bold{R}}(\log)^+$ of ${\cal {B}}'_{\bold{R}}(\log)$ consisting of nice objects. {\subset}section{Review on SL(2)-orbits in the pure case} Let the setting be as in \ref{setting}, and assume that we are in the pure situation of weight $w$. \bold egin{sbpara}{\lambda}bel{sim1} In this pure case, an {\it ${\rm{SL}}(2)$-orbit in $n$ variables} means a pair $(\rho, \varphi)$, where $\rho$ is a homomorphism ${\rm{SL}}(2,{\bold{C}})^n\to G({\bold{C}})$ of algebraic groups defined over ${\bold{R}}$ and $\varphi$ is a holomorphic map ${\bold P}^1({\bold{C}})^n\to \Dc$, satisfying \bold egin{align*} &\varphi(gz)=\rho(g)\varphi(z)\quad \text{for}\;\;g\in {\rm{SL}}(2,{\bold{C}})^n\;\;\text{and}\;\;z\in {\bold P}^1({\bold{C}})^n,\\ &\varphi(\frak h^n) {\subset}set D\quad (\text{$\frak h$ is the upper half plane $\{x+iy\;|\;x, y\in {\bold{R}}, y>0\}$}),\\ &\rho_*(\text{fil}^p_z({\frak s}{\frak l}(2,{\bold{C}})^n)) {\subset}set \text{fil}^p_{\varphi(z)}({\frak g}_{{\bold{C}}})\quad (z \in {\bold P}^1({\bold{C}})^n, p\in {\bold{Z}}). \end{align*} Here $\rho_*$ denotes the homomorphism ${\frak s}{\frak l}(2, {\bold{C}})^n\to {\frak g}_{{\bold{C}}}$ of Lie algebras induced by $\rho$, and $\text{fil}^{\bullet}_z$ and ${\rm{fil}}^{\bullet}_{\varphi(z)}$ are filtrations given by $z$ and $\varphi(z)$, respectively (see Part II, 2.1.2). \end{sbpara} \bold egin{sbpara}{\lambda}bel{pure2} Let $(\rho, \varphi)$ be an ${\rm{SL}}(2)$-orbit in $n$ variables. Define the {\it associated homomorphisms} $\tau, \tau^{{\rm{st}}ar}: {\bold G}^n_{m,{\bold{R}}}\to {\rm {Aut}}_{\bold{R}}(H_{0,{\bold{R}}})$ of algebraic groups as $$\tau^{{\rm{st}}ar}(t)= \rho(g_1, \dots, g_n)\quad \text{where}\; t=(t_j)_{1\leq j\leq n}\; \text{and}$$ $$g_j =\bold egin{pmatrix} 1/\prod_{k=j}^n t_k & 0 \\ 0 & \prod_{k=j}^n t_k\end{pmatrix},$$ $$\tau(t)=(\prod_{j=1}^n t_j)^w\cdot \tau^{{\rm{st}}ar}(t).$$ The image of the homomorphism $\tau^{{\rm{st}}ar}$ is contained in $G_{\bold{R}}$. For $1\leq j\leq n$, we define the increasing filtration $W^{(j)}$ on $H_{0,{\bold{R}}}$ as follows. We have $H_{0,{\bold{R}}}= \bigoplus_{1\leq j\leq n, k\in {\bold{Z}}} \; H_{0,{\bold{R}}}(j, k)$ where $H_{0,{\bold{R}}}(j, k)$ is the part of $H_{0,{\bold{R}}}$ on which the action $\tau$ of ${\bold G}_{m,{\bold{R}}}^n$ is given by $(t_{\ell})_{1\leq\ell \leq n} \mapsto t_j^k$. Define $W^{(j)}$ by $W^{(j)}_k= \bigoplus_{k'\leq k} \; H_{0, {\bold{R}}}(j,k')$. We call $W^{(j)}$ $(1\leq j\leq n)$ the {\it associated weight filtrations}. \end{sbpara} \bold egin{sbpara}{\lambda}bel{pure1} Let $(\rho, \varphi)$ be an ${\rm{SL}}(2)$-orbit in $n$ variables. For $1\leq j\leq n$, the following conditions (i)--(iii) are equivalent. (i) The $j$-th component ${\rm{SL}}(2, {\bold{C}})\to G({\bold{C}})$ of $\rho$ is trivial. (ii) $\varphi$ factors through the projection ${\bold P}^1({\bold{C}})^n \to {\bold P}^1({\bold{C}})^{n-1}$ which removes the $j$-th component. 
(iii) Either $j{{\gamma}mma}eq 2$ and $W^{(j)}= W^{(j-1)}$, or $j=1$ and $W^{(1)}=W$ (that is, $W^{(1)}_w=H_{0,{\bold{R}}}$ and $W^{(1)}_{w-1}=0$). \end{sbpara} \bold egin{sbpara}{\lambda}bel{sl2eq1} We consider the following equivalence relation on ${\rm{SL}}(2)$-orbits. We say an ${\rm{SL}}(2)$-orbit in $n$ variables $(\rho, \varphi)$ is non-degenerate if there is no $j$ $(1\leq j\leq n)$ which satisfies the equivalent conditions in \ref{pure1}. For a non-degenerate ${\rm{SL}}(2)$-orbit $(\rho,\varphi)$ in $n$ variables and for a non-degenerate ${\rm{SL}}(2)$-orbit $(\rho', \varphi')$ in $n'$ variables, $(\rho, \varphi)$ and $(\rho', \varphi')$ are equivalent if and only if $n=n'$ and there is $t\in {\bold{R}}^n_{>0}$ such that $$\rho'(g)=\tau^{{\rm{st}}ar}(t) \rho(g) \tau^{{\rm{st}}ar}(t)^{-1}, \quad \varphi'(z)=\tau^{{\rm{st}}ar}(t)\varphi(z)$$ for any $g\in {\rm{SL}}_2({\bold{C}})^n$ and $z\in {\bf P}^1({\bold{C}})^n$. Here $\tau^{{\rm{st}}ar}$ is the homomorphism associated to $(\rho,\varphi)$ in \ref{pure2}. We have the same equivalence relation when we replace $\tau^{{\rm{st}}ar}(t)$ in the above by $\tau(t)$ in \ref{pure2} associated to $(\rho,\varphi)$. Any ${\rm{SL}}(2)$-orbit uniquely factors through a non-degenerate ${\rm{SL}}(2)$-orbit, called the associated non-degenerate ${\rm{SL}}(2)$-orbit, which is described as below. Two ${\rm{SL}}(2)$-orbits are equivalent if and only if their associated non-degenerate ${\rm{SL}}(2)$-orbits are equivalent in the above sense. For an ${\rm{SL}}(2)$-orbit $(\rho, \varphi)$ in $n$ variables, the associated non-degenerate ${\rm{SL}}(2)$-orbit $(\rho', \varphi')$ is as follows. Let $J=\{a(1), \dots, a(r)\}$ $(a(1)<\dots <a(r))$ be the set of $j$ $(1\leq j\leq n)$ such that the $j$-th component of $\rho$ is non-trivial. Then $(\rho',\varphi')$ is the ${\rm{SL}}(2)$-orbit in $r$ variables defined by $$ \rho(g_1, \dots, g_n)=\rho'(g_{a(1)},\dots, g_{a(r)}) \quad \varphi(z_1,\dots, z_n)=\varphi'(z_{a(1)},\dots, z_{a(r)}).$$ This number $r$ is called the rank of the (equivalence class of the) SL(2)-orbit $(\rho,\varphi)$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{psl2} The set $D_{{\rm{SL}}(2)}$ is defined as the set of all equivalence classes of ${\rm{SL}}(2)$-orbits $(\rho,\varphi)$ such that all members of the set of weight filtrations associated to $(\rho,\varphi)$ (\ref{pure2}) are rational (that is, defined already on $H_{0,{\bold{Q}}}$). $D$ is embedded in $D_{{\rm{SL}}(2)}$ as the set of classes of SL(2)-orbits of rank $0$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{sl2eq2} Let $p\in D_{{\rm{SL}}(2)}$. We define objects $$\tau^{{\rm{st}}ar}_p, \;\tau_p, \;Z(p), \;{\cal {W}}(p)$$ associated to $p$. Let $n$ be the rank of $p$. Let $(\rho,\varphi)$ be a non-degenerate ${\rm{SL}}(2)$-orbit which represents $p$. The homomorphism $\tau^{{\rm{st}}ar}$ (resp.\ $\tau$) (\ref{pure2}) associated to $(\rho,\varphi)$ depends only on the class $p$ (it does not depend on the choice of $(\rho,\varphi)$). We denote it as $\tau^{{\rm{st}}ar}_p$ (resp.\ $\tau_p$). The subset $$\{\varphi((iy_j)_{1\leq j\leq n}) \;|\; y_j\in {\bold{R}}_{>0} \;(1\leq j\leq n)\}= \tau^{{\rm{st}}ar}({\bold{R}}^n_{>0})\varphi({\bold i})= \tau({\bold{R}}^n_{>0})\varphi({\bold i}){\subset}set D$$ $({\bold i}:=(i,\dots,i)\in \frak h^n)$ depends only on the class $p$. We denote it as $Z(p)$ and call it the {\it torus orbit associated to $p$}. 
The family $\{W^{(j)}\; |\; 1\leq j\leq n\}$ of weight filtrations associated to $(\rho,\varphi)$ (\ref{pure2}) depends only on the class $p$. Let ${\cal {W}}(p)=\{W^{(j)}\;|\; 1\leq j\leq n\}$ and call it the set of weight filtrations associated to $p$. It consists of $n$ elements (Part II, Proposition 2.1.13). \end{sbpara} \bold egin{sbpara}{\lambda}bel{pure3} $D_{{\rm{SL}}(2)}$ has a structure as an object of ${\cal {B}}'_{\bold{R}}(\log)$. For this, see Part II, Section 3.2. A basic property of the topology of $D_{{\rm{SL}}(2)}$ is that, if $p\in D_{{\rm{SL}}(2)}$ is the class of an SL(2)-orbit $(\rho,\varphi)$, $p$ is the limit of $\varphi(iy_1, \dots, iy_n)\in D$ where $y_j\in {\bold{R}}_{>0}$ and $y_j/y_{j+1}\to \infty$ ($1\leq j\leq n$, $y_n$ denotes $1$). \end{sbpara} {\subset}section{Reviews on $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$ and $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$} We now consider the mixed Hodge situation. We review the spaces $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$ and $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ considered in Part II and prepare notation which we will use later. Actually there was an error concerning the definition of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ in Part II. We correct it in \ref{correct}. \bold egin{sbpara} Let $$D_{{\rm{SL}}(2)}({{\gamma}mma}r^W):=\prod_{w\in {\bold{Z}}} D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)$$ where $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)$ denotes the space $D_{{\rm{SL}}(2)}$ (Section 2.1) for the graded quotient ${{\gamma}mma}r^W_w$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{sim2} The set $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ is defined as follows (cf.\ Part II, 3.5.1). By an {\it ${\rm{SL}}(2)$-orbit on ${{\gamma}mma}r^W$ of rank $n$}, we mean a family $(\rho_w, \varphi_w)_{w\in {\bold{Z}}}$ of ${\rm{SL}}(2)$-orbits $(\rho_w, \varphi_w)$ on ${{\gamma}mma}r^W_w$ in $n$ variables in the sense of \ref{sim1} satisfying the following condition (1). (1) For each $1\leq j\leq n$, there is a $w\in {\bold{Z}}$ such that the $j$-th component of $\rho_w$ is non-trivial. The equivalence relation is defined as follows. For an ${\rm{SL}}(2)$-orbit $(\rho_w, \varphi_w)_w$ on ${{\gamma}mma}r^W$ of rank $n$, the homomorphisms $\tau, \tau^{{\rm{st}}ar}: {\bold G}_{m,{\bold{R}}}^n \to {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ associated to the ${\rm{SL}}(2)$-orbit $(\rho_w, \varphi_w)$ in $n$ variables of weight $w$ for $w\in {\bold{Z}}$ (\ref{pure2}) define homomorphisms $${\tau}, {\tau}^{{\rm{st}}ar}: {\bold G}_{m,{\bold{R}}}^n \to \prod_{w\in {\bold{Z}}} {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$$ of algebraic groups, respectively. An ${\rm{SL}}(2)$-orbit $(\rho_w, \varphi_w)_w$ on ${{\gamma}mma}r^W$ of rank $n$ and an ${\rm{SL}}(2)$-orbit $(\rho'_w, \varphi'_w)_w$ on ${{\gamma}mma}r^W$ of rank $n'$ are {\it equivalent} if and only if $n'=n$ and $(\rho'_w(g))_w= \tau^{{\rm{st}}ar}(t)(\rho_w(g))_w\tau^{{\rm{st}}ar}(t)^{-1}$, $(\varphi'_w(z))_w=\tau^{{\rm{st}}ar}(t)(\varphi_w(z))_w$ for some $t\in {\bold{R}}^n_{>0}$. (We have the same equivalence relation when we replace $\tau^{{\rm{st}}ar}$ here by $\tau$.) The set $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ is defined as the set of all equivalence classes of ${\rm{SL}}(2)$-orbits $(\rho_w, \varphi_w)_w$ on ${{\gamma}mma}r^W$ such that the weight filtrations on ${{\gamma}mma}r^W_w$ associated to $(\rho_w,\varphi_w)$ are rational (i.e., defined over ${\bold{Q}}$) for any $w\in {\bold{Z}}$. 
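For example, if ${\rm gr}^W$ is concentrated in a single weight $w$ (the pure situation), then condition (1) says exactly that the orbit $(\rho_w, \varphi_w)$ is non-degenerate in the sense of \ref{sl2eq1}, and $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ is identified with the space $D_{{\rm{SL}}(2)}({\rm gr}^W_w)$ of Section 2.1.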
\end{sbpara} \bold egin{sbrem}{\lambda}bel{correct} In the definition of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ in Part II, 3.5.1, we forgot to put the condition of the rationality of the associated weight filtrations. This error does not affect the rest of Part II. \end{sbrem} \bold egin{sbpara}{\lambda}bel{sim10} We have the embedding $$D({{\gamma}mma}r^W)\overset{{\subset}set}\to D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$$ by identifying $D({{\gamma}mma}r^W)$ with the set of ${\rm{SL}}(2)$-orbits on ${{\gamma}mma}r^W$ of rank $0$. We have a map $$D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}\to D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)\;;\;p\mapsto (p({{\gamma}mma}r^W_w))_w$$ which sends the class $p$ of $(\rho_w,\varphi_w)_w$ to $(\text{the class $p({{\gamma}mma}r^W_w)$ of $(\rho_w, \varphi_w)$})_w$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{simst2} For $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$, we define a finite set $\overline{{\cal {W}}}(p)$ of increasing filtrations on ${{\gamma}mma}r^W=\prod_w {{\gamma}mma}r^W_w$ as follows. Let $(\rho_w,\varphi_w)_w$ be an ${\rm{SL}}(2)$-orbit on ${{\gamma}mma}r^W$ in $n$ variables which represents $p$, let $W^{(w, j)}$ $(w\in {\bold{Z}}, 1\leq j\leq n)$ be the $j$-th weight filtration on ${{\gamma}mma}r^W_w$ associated to the ${\rm{SL}}(2)$-orbit $(\rho_w,\varphi_w)_w$ on ${{\gamma}mma}r^W_w$ in $n$ variables, and let $W^{(j)}=\bigoplus_w W^{(w,j)}$. Let $\overline{{\cal {W}}}(p):= \{W^{(j)} \; |\; 1\leq j \leq n\}$. Then $\overline{{\cal {W}}}(p)$ is independent of the choice of the representative $(\rho_w,\varphi_w)_w$ of $p$. By an {\it admissible set of weight filtrations on ${{\gamma}mma}r^W$} (Part II, 3.2.2), we mean a set of increasing filtrations on ${{\gamma}mma}r^W$ which coincides with the set $\overline{{\cal {W}}}(p)$ of weight filtrations associated to some point $p$ of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$. An admissible set $\Phi$ of weight filtrations on ${{\gamma}mma}r^W$ has a natural structure of a totally ordered set (given by the {\it variance} of $W'({{\gamma}mma}r^W)$ for $W'\in \Phi$\;; see Part II, 2.1.11 and 2.1.13). For any $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ of rank $n$ such that $\Phi=\overline{{\cal {W}}}(p)$, if $(W^{(j)})_{1\leq j\leq n}$ denotes the family of weight filtrations associated to $p$, $W^{(j)}\leq W^{(k)}$ for this order if and only if $j\leq k$. By using this ordering, we will identify $\Phi$ with the totally ordered set $\{1,\dots,n\}$. By this, we will identify ${\bold{G}}_m^{\Phi}$, ${\bold{Z}}^{\Phi}$, etc. with ${\bold{G}}_m^n$, ${\bold{Z}}^n$, etc. Let $\overline{{\cal {W}}}$ be the set of all admissible sets of weight filtrations on ${{\gamma}mma}r^W$. Let ${\cal {W}}({{\gamma}mma}r^W_w)$ be the set of all admissible sets of weight filtrations on ${{\gamma}mma}r^W_w$, that is, ${\cal {W}}({{\gamma}mma}r^W_w)=\{{\cal {W}}(p)\;|\; p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)\}$ (\ref{sl2eq2}). We have a map $$\overline{{\cal {W}}}\to \prod_w {\cal {W}}({{\gamma}mma}r^W_w)\;;\; \Phi\mapsto (\Phi(w))_w,$$ where $ \Phi(w):=\{W'({{\gamma}mma}r^W_w)\; |\; W'\in \Phi, W'({{\gamma}mma}r^W_w)\operatorname{naive}eq W({{\gamma}mma}r^W_w)\}$. This map sends $\overline{{\cal {W}}}(p)$ for $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ to $({\cal {W}}(p({{\gamma}mma}r^W_w)))_w$. 
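For instance, if $p\in D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ has rank $1$ with associated weight filtration $W^{(1)}=\bigoplus_w W^{(w,1)}$, then $\Phi=\overline{{\cal {W}}}(p)=\{W^{(1)}\}$, and $\Phi(w)$ is empty for those $w$ with $W^{(w,1)}=W({\rm gr}^W_w)$ (equivalently, by \ref{pure1}, for those $w$ for which the orbit on ${\rm gr}^W_w$ is trivial), and equals $\{W^{(w,1)}\}$ for the remaining $w$.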
\end{sbpara} \bold egin{sbpara}{\lambda}bel{simst2.5} For $\Phi\in \overline{{\cal {W}}}$ and $Q=(Q(w))_w\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$ such that $\Phi(w){\subset}set Q(w)$ for any $w\in {\bold{Z}}$, let $${\bold{G}}_m^{\Phi} \to \prod_{w\in {\bold{Z}}} {\bold{G}}_m^{Q(w)}$$ be the homomorphism which sends $(t_{W'})_{W'\in \Phi}$ to $(t'_{w,j})_{w\in {\bold{Z}}, j\in Q(w)}$, where $t'_{w,j}$ is the product of $t_{W'}$ for all elements $W'$ of $\Phi$ such that $W'({{\gamma}mma}r^W_w)=j$. If $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ and $p'=(p({{\gamma}mma}r^W_w))_w\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$, for $\Phi=\overline{{\cal {W}}}(p)$ and $Q(w)= {\cal {W}}(p({{\gamma}mma}r^W_w))$ $(w\in {\bold{Z}})$, $\tau^{{\rm{st}}ar}_p$ coincides with the composition ${\bold{G}}_m^{\Phi} \to \prod_w {\bold{G}}_m^{Q(w)}\to G_{\bold{R}}({{\gamma}mma}r^W)$ where the first arrow is as above and the second arrow is $\tau^{{\rm{st}}ar}_{p'}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{sim5} Let $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ (resp.\ $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$). We define objects $$S_p,\; X(S_p)^+, \; A_p,\; B_p,\; \bar A_p, \;\bar B_p,\; \tau^{{\rm{st}}ar}_p,\; \tau_p,\; \tilde \tau^{{\rm{st}}ar}_p,\; \tilde \tau_p,\; Z(p)$$ associated to $p$. Let $$S_p= {\bold{G}}_m^{\overline{{\cal {W}}}(p)}\quad (\text{resp.}\;\; \prod_w {\bold{G}}_m^{{\cal {W}}(p_w)}).$$ Then the character group $X(S_p)$ of $S_p$ is identified with $\prod_w {\bold{Z}}^{\overline{{\cal {W}}}(p)}$ (resp.\ $\prod_w {\bold{Z}}^{{\cal {W}}(p_w)}$). We define the submonoid $X(S_p)^+$ of $X(S_p)$ as the part corresponding to ${\bold{N}}^{\overline{{\cal {W}}}(p)}$ (resp.\ $\prod_w {\bold{N}}^{{\cal {W}}(p_w)}$). Let $A_p$ be the connected component in $S_p({\bold{R}})$ which contains the unit element. We identify $$A_p= \Hom(X(S_p), {\bold{R}}^{\mult}_{>0})= {\bold{R}}_{>0}^{\overline{{\cal {W}}}(p)} \; (\text{resp.}\ \prod_w {\bold{R}}_{>0}^{{\cal {W}}(p_w)}).$$ Let $$\bar A_p=\Hom(X(S_p)^+, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})={\bold{R}}_{{{\gamma}mma}eq 0}^{\overline{{\cal {W}}}(p)}\; (\text{resp.}\ \prod_w {\bold{R}}_{{{\gamma}mma}eq 0}^{{\cal {W}}(p_w)}) \supset A_p,$$ $$\bar B_p={\bold{R}}_{{{\gamma}mma}eq 0}\times \bar A_p\supset B_p: = {\bold{R}}_{>0}\times A_p.$$ We regard $\bar A_p$ and $\bar B_p$ as real toric varieties (\ref{2.3ex} (3)). We define homomorphisms $$\tau_p, \tau^{{\rm{st}}ar}_p: S_p \to \prod_w {\rm {Aut}}_{{\bold{R}}}({{\gamma}mma}r^W_w)$$ of algebraic groups over ${\bold{R}}$ and a subset $Z(p)$ of $D({{\gamma}mma}r^W)$. Assume first $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$. For an ${\rm{SL}}(2)$-orbit $(\rho_w, \varphi_w)_w$ on ${{\gamma}mma}r^W$ of rank $n$ which represents $p$, the associated homomorphisms $\tau, {\tau}^{{\rm{st}}ar}: S_p={\bold G}_{m,{\bold{R}}}^{\overline{{\cal {W}}}(p)}={\bold{G}}_m^n \to \prod_w {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ depend only on $p$. We denote $\tau$ as $\tau_p$ and $\tau^{{\rm{st}}ar}$ as ${\tau}^{{\rm{st}}ar}_p$. The set $$Z(p):=\{(\varphi_w(iy_1, \dots, iy_n))_w \;|\; y_j\in {\bold{R}}_{>0}\; (1\leq j\leq n)\}$$ $$=\{\tau_p(t)(\varphi_w({\bf i}))_w\; |\; t\in A_p\}=\{\tau^{{\rm{st}}ar}_p(t)(\varphi_w({\bf i}))_w\; |\; t\in A_p\}$$ $${\subset}set D({{\gamma}mma}r^W)=\prod_{w\in {\bold{Z}}} D({{\gamma}mma}r^W_w)\quad \quad (\text{here}\;\; {\bf i}=(i, \dots, i)\in {\frak h}^n)$$ depends only on $p$. 
Next for $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$, define $\tau_p$ and $\tau^{{\rm{st}}ar}_p$ as $(\tau_{p_w})_w$ and $(\tau^{{\rm{st}}ar}_{p_w})_w$, respectively, and let $Z(p)= \prod_w Z(p_w)$ (\ref{sl2eq2}). Both for $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ and for $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$, we call $Z(p)$ the {\it torus orbit} of $p$. It is an $A_p$-torsor. We define extended homomorphisms $${\tilde \tau}_p, {\tilde \tau}_p^{{\rm{st}}ar}: {\bold G}_m\times S_p\to \prod_w {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w),$$ for $t_0\in{\bold{G}}_m$ and $t\in S_p$, by $${\tilde \tau}_p(t_0,t)=(t_0^w)_w\tau_p(t)=\tau_p(t)(t_0^w)_w,$$ $$ {\tilde \tau}_p^{{\rm{st}}ar}(t_0,t)=(t_0^w)_w\tau^{{\rm{st}}ar}_p(t)=\tau^{{\rm{st}}ar}_p(t)(t_0^w)_w.$$ Here $(t_0^w)_w$ acts on ${{\gamma}mma}r^W_w$ as the multiplication by $t_0^w$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{simst3} Let $\Phi\in \overline{{\cal {W}}}$. By a {\it splitting of $\Phi$} (Part II, 3.2.3), we mean a homomorphism $apha=(apha_w)_w: {\bold{G}}_m^{\Phi} \to \prod_w{\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ of algebraic groups over ${\bold{R}}$ such that, for any $W'\in \Phi$ and $k\in {\bold{Z}}$, $W'_k$ coincides with the sum of the parts of ${{\gamma}mma}r^W$ of $apha$-weight $m$ for all $m\in {\bold{Z}}^{\Phi}$ such that $m(W')\leq k$. For a splitting $apha$ of $\Phi$, let $apha^{{\rm{st}}ar}: {\bold{G}}_m^{\Phi}\to G_{\bold{R}}({{\gamma}mma}r^W)$ be the homomorphism whose $G_{{\bold{R}}}({{\gamma}mma}r^W_w)$-component $apha^{{\rm{st}}ar}_w$ is $t=(t_j)_{j\in \Phi}\mapsto (\prod_{j\in \Phi} t_j)^{-w}\cdot apha_w(t)$. Note that the actions of $apha(t)$ and $apha^{{\rm{st}}ar}(t)$ $(t\in {\bold{R}}^{\Phi}_{>0})$ on $D({{\gamma}mma}r^W)$ are the same. A splitting of $\Phi$ exists: If $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ and $\Phi=\overline{{\cal {W}}}(p)$, ${\tau}_p$ is a splitting of $\Phi$. In this case, for $ apha= {\tau}_p$, $apha^{{\rm{st}}ar}$ in the above coincides with ${\tau}_p^{{\rm{st}}ar}$ in \ref{sim5}. Let $Q=(Q(w))_w\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$. By a {\it splitting of $Q$}, we mean a family $apha=(apha_w)_w$ where $apha_w$ is a splitting of $Q(w)$. Let $apha^{{\rm{st}}ar}=(apha^{{\rm{st}}ar}_w)_w$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{simst3.5} Let $\Phi\in \overline{{\cal {W}}}$. By a {\it distance to $\Phi$-boundary} (Part II, 3.2.4), we mean a real analytic map $\bold eta\colon D({{\gamma}mma}r^W)\to {\bold{R}}_{>0}^{\Phi}$ such that $\bold eta(apha(t)x) = t\bold eta(x)$ $(t \in {\bold{R}}^{\Phi}_{>0}, x \in D({{\gamma}mma}r^W))$ for any splitting $apha$ of $\Phi$. (The last condition is equivalent to $\bold eta(apha^{{\rm{st}}ar}(t)x)= t\bold eta(x)$ $(t\in {\bold{R}}^{\Phi}_{>0}, x\in D({{\gamma}mma}r^W)$.) A distance to $\Phi$-boundary exists (Part II, 3.2.5). Let $Q=(Q(w))_w\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$. By a {\it distance} to $Q$-boundary, we mean a family $(\bold eta_w)_{w\in {\bold{Z}}}$ where $\bold eta_w$ is a distance to $Q(w)$-boundary for the pure situation ${{\gamma}mma}r^W_w$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{simst4} In Part II, we endowed $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$ and $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ with structures as objects of ${\cal {B}}'_{\bold{R}}(\log)$. These spaces satisfy the condition (C) in \ref{embstr}, that is, the sheaf of real analytic functions is a sub-sheaf of the sheaf of all ${\bold{R}}$-valued continuous functions. 
$D_{{\rm{SL}}(2)}({\rm gr}^W)$ is just the product of the $D_{{\rm{SL}}(2)}({\rm gr}^W_w)$ (\ref{pure3}) in ${\cal {B}}'_{\bold{R}}(\log)$. The canonical map $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}\to D_{{\rm{SL}}(2)}({\rm gr}^W)$ (\ref{sim10}) is a morphism in ${\cal {B}}'_{\bold{R}}(\log)$, and it is a log modification (\ref{logmod}) as is explained in Part II, 3.5.9 and 3.5.10.

We review some properties of these spaces.
\end{sbpara}

\begin{sbpara}
Let $p\in D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ (resp.\ $p\in D_{{\rm{SL}}(2)}({\rm gr}^W)$) and let ${{\bold r}}\in Z(p)$ (\ref{sim5}). Then $p$ is the limit of $\tau_p(t){{\bold r}}=\tau^{\star}_p(t){{\bold r}}$ where $t\in A_p$ tends to $0\in \bar A_p$. Here $0\in \bar A_p$ denotes $(0,\dots,0)\in {\bold{R}}_{\geq 0}^{\Phi}$, where $\Phi=\overline{{\cal {W}}}(p)$ (resp.\ $(0,\dots,0)\in \prod_w {\bold{R}}_{\geq 0}^{Q(w)}$, where $Q(w)={\cal {W}}(p({\rm gr}^W_w))$) (\ref{simst2}), in the identifications $\bar A_p= {\bold{R}}_{\geq 0}^{\Phi}$ (resp.\ $\prod_w {\bold{R}}_{\geq 0}^{Q(w)}$) $\supset A_p= {\bold{R}}_{>0}^{\Phi}$ (resp.\ $\prod_w {\bold{R}}_{>0}^{Q(w)}$).
\end{sbpara}

\begin{sbpara}\label{phipart}
For $\Phi\in \overline{{\cal {W}}}$, let
$$D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi)= \{p\in D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim} \;|\; \overline{{\cal {W}}}(p)\subset \Phi\}.$$
For $Q=(Q(w))_{w\in {\bold{Z}}} \in \prod_{w\in {\bold{Z}}} {\cal {W}}({\rm gr}^W_w)$, let
$$D_{{\rm{SL}}(2)}({\rm gr}^W)(Q)= \{p\in D_{{\rm{SL}}(2)}({\rm gr}^W)\;|\; {\cal {W}}(p_w)\subset Q(w)\;\text{for all}\;w\in {\bold{Z}}\}.$$
Then $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)(Q)$) is open in $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)$). When $\Phi$ (resp.\ $Q$) moves, these open sets cover $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)$).

If $\Phi\in \overline{{\cal {W}}}$, $Q=(Q(w))_w\in \prod_w {\cal {W}}({\rm gr}^W_w)$, and $\Phi(w) \subset Q(w)$ (\ref{simst2}) for any $w\in {\bold{Z}}$, then the map $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}\to D_{{\rm{SL}}(2)}({\rm gr}^W)$ induces a map $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi)\to D_{{\rm{SL}}(2)}({\rm gr}^W)(Q)$.
\end{sbpara}

\begin{sbpara}\label{logst1}
Let $\Phi\in \overline{{\cal {W}}}$ (resp.\ $Q=(Q(w))_w\in \prod_w{\cal {W}}({\rm gr}^W_w)$) and let $\beta$ be a distance to $\Phi$-boundary (resp.\ $Q$-boundary). Then the map $\beta$ extends uniquely to a morphism
$$\beta: D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi) \to {\bold{R}}_{\geq 0}^{\Phi}\quad (\text{resp.}\; D_{{\rm{SL}}(2)}({\rm gr}^W)(Q) \to \prod_w {\bold{R}}_{\geq 0}^{Q(w)})$$
of ${\cal {B}}'_{\bold{R}}(\log)$. The log structure with sign of $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)(Q)$) coincides with the inverse image of the canonical log structure with sign of ${\bold{R}}^{\Phi}_{\geq 0}$ (resp.\ $\prod_w {\bold{R}}_{\geq 0}^{Q(w)}$) (\ref{2.3ex} (1)).
For a distance $\beta$ to $\Phi$-boundary (resp.\ $Q$-boundary), each component $\beta_j$ $(j\in \Phi)$ (resp.\ $\beta_{w,j}$ $(w\in {\bold{Z}}$, $j\in Q(w))$) of $\beta$ is a section of the log structure $M_S$, where $S= D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)(Q)$). We have a chart ${\bold{N}}^{\Phi}\to M_S$ (resp.\ $\prod_w {\bold{N}}^{Q(w)} \to M_S$) defined as $m\mapsto \prod_j \beta_j^{m(j)}$ (resp.\ $m\mapsto \prod_{w,j} \beta_{w,j}^{m(w,j)}$). The induced homomorphism from ${\bold{N}}^{\Phi}$ (resp.\ $\prod_w {\bold{N}}^{Q(w)}$) to $M_S/\cO^\times_S$ is independent of the choice of $\beta$. If $\Phi=\overline{{\cal {W}}}(p)$ (resp.\ $Q(w)= {\cal {W}}(p_w)$) for $p\in S$, this induces an isomorphism from ${\bold{N}}^{\Phi}$ (resp.\ $\prod_w {\bold{N}}^{Q(w)}$) to $(M_S/\cO^\times_S)_p$.

If $\Phi(w) \subset Q(w)$ (cf.\ \ref{simst2}) for any $w\in {\bold{Z}}$, we have a commutative diagram
$$\begin{matrix} \prod_w {\bold{N}}^{Q(w)} &\to & M_S/\cO^\times_S & (S:=D_{{\rm{SL}}(2)}({\rm gr}^W))\\ \downarrow &&\downarrow & \\ {\bold{N}}^{\Phi} &\to & M_{S'}/\cO_{S'}^\times & (S':=D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}) \end{matrix}$$
where the left vertical arrow is the homomorphism induced on the character groups from the homomorphism ${\bold{G}}_m^{\Phi}\to \prod_w {\bold{G}}_m^{Q(w)}$ (\ref{simst2.5}).
\end{sbpara}

\begin{sbpara}\label{bab}
Let $\Phi\in \overline{{\cal {W}}}$ (resp.\ $Q\in \prod_w {\cal {W}}({\rm gr}^W_w)$), let $\alpha$ be a splitting of $\Phi$ (resp.\ $Q$), and let $\beta$ be a distance to $\Phi$-boundary (resp.\ $Q$-boundary). Then the map $D({\rm gr}^W)\to D({\rm gr}^W) \;;\; x\mapsto \alpha(\beta(x))^{-1}x = \alpha^{\star}(\beta(x))^{-1}x$ extends uniquely to a morphism
$$b_{\alpha, \beta}: D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim} \;(\text{resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)$}) \to D({\rm gr}^W)$$
(Part II, Proposition 3.2.6).
\end{sbpara}

\subsection{The space $D^{\star}_{{\rm{SL}}(2)}$}

We define the space $D^{\star}_{{\rm{SL}}(2)}$, comparing it with the space $D^{II}_{{\rm{SL}}(2)}$ which we defined in Part II. We also define a related space $D^{\star,-}_{{\rm{SL}}(2)}$.

\begin{sbpara}\label{symb1}
Let $D^{\star}_{{\rm{SL}}(2)}$ (resp.\ $D^{\star,-}_{{\rm{SL}}(2)}$, resp.\ $D_{{\rm{SL}}(2)}$) be the set of all pairs $(p, Z)$ where $p\in D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ (resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)$, resp.\ $D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$) and $Z$ is a subset of $D$ satisfying the following two conditions (i) and (ii). Denote $\tau^{\star}_p$ (resp.\ $\tau^{\star}_p$, resp.\ $\tau_p$) by $\alpha_p$ and denote $\tilde \tau^{\star}_p$ (resp.\ $\tilde \tau^{\star}_p$, resp.\ $\tilde \tau_p$) by $\tilde \alpha_p$.

(i) $Z$ is either

(i.A) an $\alpha_p(A_p)$-orbit in $D$, or

(i.B) an $\tilde \alpha_p(B_p)$-orbit in $D_{{\rm nspl}}$ (\ref{splW})

\noindent for the lifted action (\ref{liftac}).

(ii) The image of $Z$ in $D({\rm gr}^W)$ coincides with the torus orbit $Z(p)$ (\ref{sim5}) of $p$.

We call an element $(p, Z)$ an {\it $A$-orbit} if it satisfies (i.A) and a {\it $B$-orbit} if it satisfies (i.B).
This is similar to the case of $D_{{\rm {BS}}}$, which also consists of $A_P$-orbits and $B_P$-orbits for ${\bold{Q}}$-parabolic subgroups $P$ of $G_{{\bold{R}}}({\rm gr}^W)$ (Part I, 5.1 and Definition 5.3).
\end{sbpara}

\begin{sbpara}\label{slmap}
We embed $D$ in $D^{\star}_{{\rm{SL}}(2)}$ (resp.\ $D^{\star,-}_{{\rm{SL}}(2)}$, resp.\ $D_{{\rm{SL}}(2)}$) by $F\mapsto (F({\rm gr}^W), \{F\})$.

We have canonical maps
$$D^{\star}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}, \;\;D^{\star,-}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}({\rm gr}^W),\;\;D_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$$
defined by $(p, Z) \mapsto p$. We have a canonical map
$$D^{\star}_{{\rm{SL}}(2)}\to D^{\star,-}_{{\rm{SL}}(2)}\;;\; (p, Z)\mapsto (p', Z'), \quad p':= (p({\rm gr}^W_w))_w, \quad Z':= \tau^{\star}_{p'}(A_{p'})Z.$$
\end{sbpara}

\begin{sbpara}
The style of the definition of the set $D_{{\rm{SL}}(2)}$ in \ref{symb1} is slightly different from the one in Part II, Section 2.5. We explain the relation of the two styles. Let $(p, Z)\in D_{{\rm{SL}}(2)}$ in the present style, and let $(\rho_w, \varphi_w)_w$ be an ${\rm{SL}}(2)$-orbit on ${\rm gr}^W$ which represents $p$. If $(p, Z)$ is an $A$-orbit (\ref{symb1}), it is the class of $((\rho_w,\varphi_w)_w,{{\bold r}})\in \cD'_{{\rm{SL}}(2), n}$ in Part II, 2.3.1 with ${{\bold r}}\in Z$. If $(p, Z)$ is a $B$-orbit (\ref{symb1}), it is the class of $((\rho'_w, \varphi'_w)_w, {{\bold r}})\in \cD'_{{\rm{SL}}(2),n+1}$ in Part II, 2.3.1, where ${{\bold r}} \in Z$ and $\rho'_w$ (resp.\ $\varphi'_w$) is the composition ${\rm{SL}}(2,{\bold{R}})^{n+1}\to {\rm{SL}}(2, {\bold{R}})^n\to G_{\bold{R}}({\rm gr}^W_w)$ (resp.\ ${\bold P}^1({\bold{C}})^{n+1}\to {\bold P}^1({\bold{C}})^n \to D({\rm gr}^W_w)$) of the projection to the last $n$ factors and $\rho_w$ (resp.\ $\varphi_w$).
\end{sbpara}

\begin{sbpara}\label{sm}
Let $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ (resp.\ $D^{\star,-,{{\rm{mild}}}}_{{\rm{SL}}(2)}$) be the subset of $D^{\star}_{{\rm{SL}}(2)}$ (resp.\ $D^{\star,-}_{{\rm{SL}}(2)}$) consisting of all $A$-orbits. (We do not define the mild part of $D_{{\rm{SL}}(2)}$. The part of $A$-orbits in $D_{{\rm{SL}}(2)}$ does not fit our formulation of the mild part.)
\end{sbpara}

\begin{sbpara}\label{sit}
Consider the following three situations (a)--(c).

(a) $\frak D:= D^{\star}_{{\rm{SL}}(2)}$, $\frak E= D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$.

(b) $\frak D=D^{\star,-}_{{\rm{SL}}(2)}$, $\frak E= D_{{\rm{SL}}(2)}({\rm gr}^W)$.

(c) $\frak D= D_{{\rm{SL}}(2)}$, $\frak E= D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$.

We endow $\frak D$ with a structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$ as follows in \ref{2.3.6}--\ref{2.3.11}. In the situation (c), this coincides with the structure $D^{II}_{{\rm{SL}}(2)}$ treated in Part II.
\end{sbpara}

\begin{sbpara}\label{2.3.6}
In the situations (a) and (c) (resp.\ situation (b)) in \ref{sit}, for $\Phi\in \overline{{\cal {W}}}$ (resp.\ $Q\in \prod_w {\cal {W}}({\rm gr}^W_w)$), let $\frak D(\Phi)$ (resp.\ $\frak D(Q)$) be the inverse image of $\frak E(\Phi)$ (resp.\ $\frak E(Q)$) (\ref{phipart}) in $\frak D$.
\end{sbpara}

\begin{sbpara}\label{slspl}
In the situations (a)--(c) in \ref{sit}, for $x=(p, Z)\in \frak D$, ${\rm{spl}}_W({{\bold r}})$ for ${{\bold r}}\in Z$ is independent of the choice of ${{\bold r}}$.
We denote this ${\rm{spl}}_W({{\bold r}})$ $({{\bold r}}\in Z)$ by ${\rm{spl}}_W(x)$.
\end{sbpara}

\begin{sbpara}\label{2.3.9}
In the situations (a) and (c) (resp.\ situation (b)) in \ref{sit}, let $\Phi\in \overline{{\cal {W}}}$ (resp.\ $Q=(Q(w))_w\in \prod_w {\cal {W}}({\rm gr}^W_w)$), let $\alpha$ be a splitting of $\Phi$ (resp.\ $Q$) (\ref{simst3}) and let $\beta$ be a distance to $\Phi$-boundary (resp.\ $Q$-boundary) (\ref{simst3.5}).

In the situations (a) and (b) (resp.\ situation (c)), for $x\in D$, let $\delta_{\alpha,\beta}(x)\in {\cal {L}}$ (\ref{cL(F)}) be ${\rm{Ad}}(\alpha^{\star}(\beta(p)))^{-1}\delta_W(x)$ (resp.\ ${\rm{Ad}}(\alpha(\beta(p)))^{-1}\delta_W(x)$), where $p$ denotes the image of $x$ in $D({\rm gr}^W)$ (for $\alpha^{\star}$, see \ref{simst3}).

Let $\frak D'=\frak D(\Phi)$ (resp.\ $\frak D(Q)$). Then, for $x=(p, Z)\in \frak D'$ and ${{\bold r}}\in Z$, $\delta_{\alpha,\beta}(\tau_p^{\star}(t){{\bold r}})$ (resp.\ $\delta_{\alpha,\beta}(\tau_p(t){{\bold r}})$) converges in $\bar {\cal {L}}$ (\ref{2.3ex} (4)) when $t\in A_p$ tends to $0$ in $\bar A_p$, and the limit depends only on $x$ and is independent of the choice of ${{\bold r}}$. We denote this limit by $\delta_{\alpha,\beta}(x)$. We have $\delta_{\alpha,\beta}(x) \in \bar {\cal {L}}(b_{\alpha,\beta}(p))$, where $b_{\alpha,\beta}(p)$ is as in \ref{bab}.

These $\delta_{\alpha,\beta}(x)$ and $b_{\alpha,\beta}(p)$ $(x=(p, Z))$ are described as follows. In the situations (a) and (c) (resp.\ situation (b)), let $\alpha'$ and $(\alpha^{\star})'$ be the restrictions of $\alpha$ and $\alpha^{\star}$ (\ref{simst3}) to the subgroup ${\bold{G}}_m^{\overline{{\cal {W}}}(p)}$ (resp.\ $\prod_w {\bold{G}}_m^{{\cal {W}}(p_w)}$) of ${\bold{G}}_m^{\Phi}$ (resp.\ $\prod_w {\bold{G}}_m^{Q(w)}$), respectively. Since both $\alpha'$ and $\tau_p$ split $\overline{{\cal {W}}}(p)$ (resp.\ $({\cal {W}}(p_w))_w$), there is $u\in \prod_w {\rm {Aut}}_{\bold{R}}({\rm gr}^W_w)$ such that for all $W'\in \overline{{\cal {W}}}(p)$ (resp.\ for all $w\in {\bold{Z}}$ and all $W'\in {\cal {W}}(p_w)$), $u$ preserves $W'$ and induces the identity maps on ${\rm gr}^{W'}$, and such that
$$\tau_p(t) = u\alpha'(t)u^{-1}, \quad \tau^{\star}_p(t)=u(\alpha^{\star})'(t)u^{-1}$$
for any $t\in {\bold{G}}_m^{\overline{{\cal {W}}}(p)}$ (resp.\ $\prod_w {\bold{G}}_m^{{\cal {W}}(p_w)}$).

Take ${{\bold r}} \in Z$ and let $\bar {{\bold r}}$ be the image of ${{\bold r}}$ in $Z(p)$ (\ref{sim5}). Then we have
$$b_{\alpha,\beta}(p)= b_{\alpha,\beta}(u^{-1}\bar {{\bold r}}),$$
$$\delta_{\alpha,\beta}(x)= {\rm{Ad}}(u\alpha^{\star}(\beta(u^{-1}\bar {{\bold r}})))^{-1}\delta_W({{\bold r}}) \;\;(\text{resp.}\; {\rm{Ad}}(u\alpha(\beta(u^{-1}{{\bold r}})))^{-1}\delta_W({{\bold r}})).$$
These are shown in Part II, 3.3.9 in the situation (c). The proofs for the situations (a) and (b) are similar.
\end{sbpara}

\begin{sbprop}\label{staran1}
Consider the three situations in \ref{sit}. In the situations (a) and (c), let $\Phi\in \overline{{\cal {W}}}$ and $\frak D'=\frak D(\Phi)$, $\frak E'=\frak E(\Phi)$. In the situation (b), let $Q\in \prod_w {\cal {W}}({\rm gr}^W_w)$ and $\frak D'=\frak D(Q)$, $\frak E'=\frak E(Q)$. In the situations (a) and (c) (resp.\ situation (b)), fix a splitting $\alpha$ of $\Phi$ (resp.\ $Q$) and a distance $\beta$ to $\Phi$-boundary (resp.\ $Q$-boundary).
Then we have a bijection
$$\nu: \frak D'\to \{(p, s, \delta)\in \frak E'\times {\rm{spl}}(W) \times \bar {\cal {L}}\;|\; \delta\in \bar {\cal {L}}(b_{\alpha,\beta}(p))\}$$
($\bar {\cal {L}}(\;)$ is as in \ref{2.3ex} (4)) defined as $x\mapsto (p, s, \delta)$, where $p$ is the image of $x$ in $\frak E$, $s={\rm{spl}}_W(x)$ (\ref{slspl}), and $\delta=\delta_{\alpha, \beta}(x)$ (\ref{2.3.9}).
\end{sbprop}

\begin{pf}
The inverse map of $\nu$ is defined as $(p, s, \delta)\mapsto (p, Z)$, where $Z$ is as follows. Consider the situations (a) and (c) (resp.\ situation (b)). Take $u\in G_{\bold{R}}({\rm gr}^W)$ for $p$ as in \ref{2.3.9}. In the case $\delta\in {\cal {L}}\subset \bar {\cal {L}}$, $Z$ is the subset of $D$ whose image in $D({\rm gr}^W) \times {\rm{spl}}(W) \times {\cal {L}}$ under the map in \ref{grsd} is
$$\{({{\bold r}}, s, {\rm{Ad}}(u \alpha^{\star}(\beta(u^{-1}{{\bold r}})))\delta) \;|\; {{\bold r}} \in Z(p)\} \quad (\text{resp.}\; \{({{\bold r}}, s, {\rm{Ad}}(u \alpha(\beta(u^{-1}{{\bold r}})))\delta) \;|\; {{\bold r}} \in Z(p)\}).$$
In the case $\delta=0\circ \delta'\in \bar {\cal {L}} \smallsetminus {\cal {L}}$ with $\delta'\in {\cal {L}}\smallsetminus \{0\}$ (\ref{2.3ex} (4)), $Z$ is ${\bold{R}}_{>0}\circ Z'$, where $Z'$ is the above set $Z$ for $(p, s, \delta')$.
\end{pf}

\begin{sbprop}\label{staran2}
Let the three situations be as in \ref{sit}.

$(1)$ In the situations (a) and (c) (resp.\ the situation (b)), endow $\frak D(\Phi)$ (resp.\ $\frak D(Q)$) (\ref{2.3.6}) with a structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$ by using the bijection $\nu$ in Proposition \ref{staran1} (the target of $\nu$ is regarded as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by regarding it as $Y$ in $X= \frak E' \times {\rm{spl}}(W) \times \bar {\cal {L}}$ in \ref{embstr}). Then this structure is independent of the choice of $(\alpha,\beta)$.

$(2)$ There is a unique structure on $\frak D$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ such that for any $\Phi\in \overline {\cal {W}}$ (resp.\ $Q\in \prod_w {\cal {W}}({\rm gr}^W_w)$), $\frak D':=\frak D(\Phi)$ (resp.\ $\frak D':=\frak D(Q)$) is an open subset and the restriction of this structure to $\frak D'$ coincides with the structure given in $(1)$.
\end{sbprop}

\begin{pf}
For the situation (c), this follows from Part II, Proposition 3.2.9 and Theorem 3.2.10. The proofs for the situations (a) and (b) are similar.
\end{pf}

\begin{sbpara}\label{2.3.11}
The structures of $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$ as objects of ${\cal {B}}'_{\bold{R}}(\log)$ are given by the situations (a), (b), (c) in Proposition \ref{staran2}, respectively.

In the situations (a)--(c) in \ref{sit}, the canonical map $\frak D\to \frak E$ is evidently a morphism of ${\cal {B}}'_{\bold{R}}(\log)$.
\end{sbpara}

\begin{sbpara}\label{Lpart}
Via the bijection $\nu$ of Proposition \ref{staran1}, $A$-orbits in $\frak D'$ correspond to elements $(p,s, \delta)$ of the target of $\nu$ such that $\delta\in {\cal {L}}$. Hence the subset of $\frak D$ consisting of all $A$-orbits is open in $\frak D$.
Elements $(p, Z)$ of $\frak D'$ such that $Z\subset D_{{\rm{spl}}}$ (1.1.5) correspond to elements $(p,s,\delta)$ of the target of $\nu$ such that $\delta=0$.
\end{sbpara}

\begin{sbpara}\label{thm+2}
Consider the situations (a)--(c) in \ref{sit}. In the situation (c), we consider the structure $D^{II}_{{\rm{SL}}(2)}$ of $D_{{\rm{SL}}(2)}$. In Theorem \ref{ls1} below, we extend the result of Part II, Theorem 3.4.4 on the local structure of $D^{II}_{{\rm{SL}}(2)}$ to all situations in \ref{sit}. This \ref{thm+2} is a preparation for it.

Let $p\in \frak E$. We consider the local structure of $\frak D$ around the inverse image of $p$ in $\frak D$.

Consider the situations (a) and (c) (resp.\ situation (b)). Let $\Phi:=\overline{{\cal {W}}}(p)$ (resp.\ $Q=(Q(w))_w$ with $Q(w):= {\cal {W}}(p_w)$). Fix ${{\bold r}}\in Z(p)$. Let $K_{{{\bold r}}}$ be the maximal compact subgroup of $G_{\bold{R}}({\rm gr}^W)$ associated to ${{\bold r}}$ (Part II, 3.4.1), and let $K'_{{{\bold r}}}\subset K_{{{\bold r}}}$ be the isotropy subgroup of $G_{\bold{R}}({\rm gr}^W)$ at ${{\bold r}}$. We use the notation in \ref{sim5}.

Let $R$ be an ${\bold{R}}$-subspace of ${\frak g}_{\bold{R}}({\rm gr}^W)$ satisfying the following conditions (C1) and (C2).

(C1) ${\frak g}_{\bold{R}}({\rm gr}^W)= {\rm {Lie}}(\tau^{\star}_p(A_p)) \oplus R\oplus {\rm {Lie}}(K_{{{\bold r}}})$.

(C2) $R=\sum_{m\in X(S_p)} R\cap (({\frak g}_{\bold{R}})_m + ({\frak g}_{\bold{R}})_{-m})$.

Here $(-)_m$ denotes the part of weight $m$ for the adjoint action of $S_p$ via $\tau^{\star}_p$. (The definition of the part $(-)_m$ does not change if we replace $\tau^{\star}_p$ by $\tau_p$.)

Let $S$ be an ${\bold{R}}$-subspace of ${\rm {Lie}}(K_{{{\bold r}}})$ such that ${\rm {Lie}}(K_{{{\bold r}}})={\rm {Lie}}(K'_{{{\bold r}}})\oplus S$. For a subset $J$ of $\Phi$ (resp.\ for $J=(J(w))_{w\in {\bold{Z}}}$, $J(w)\subset Q(w)$), let $S_J$ be the subset of $S$ consisting of all elements $k$ such that $\exp(k){{\bold r}} \in (K_{{{\bold r}}}\cap G_{{\bold{R}},J}({\rm gr}^W))\cdot {{\bold r}}$, where $G_{{\bold{R}},J}({\rm gr}^W)$ is the subgroup of $G_{{\bold{R}}}({\rm gr}^W)$ consisting of all $g\in G_{\bold{R}}({\rm gr}^W)$ such that $g W' = W'$ for any $W'\in J$ (resp.\ for any $w\in {\bold{Z}}$ and any $W'\in J(w)$).

We define an object $Y$ of ${\cal {B}}'_{\bold{R}}(\log)$ as follows. Let
$$X= \bar A_p \times {\frak g}_{\bold{R}}({\rm gr}^W)\times {\frak g}_{\bold{R}}({\rm gr}^W) \times {\frak g}_{\bold{R}}({\rm gr}^W)\times S.$$
Note that $\bar A_p$ is ${\bold{R}}^{\Phi}_{\geq 0}$ (resp.\ $\prod_w {\bold{R}}_{\geq 0}^{Q(w)}$) (\ref{sim5}). Let $Y$ be the subset of $X$ consisting of all elements $(t,f,g,h,k)$ satisfying the following conditions (i)--(iv). In (ii) and (iv) below, let $J=\{j\in \Phi\;|\; t_j=0\}$ (resp.\ $J=(J(w))_{w\in {\bold{Z}}}$ with $J(w)=\{j\in Q(w)\;|\; t_{w,j}=0\}$). For $\chi\in X(S_p)$, write $\chi=\chi_+(\chi_-)^{-1}$ with $\chi_+, \chi_-\in X(S_p)^+$ defined as follows. In the identification $X(S_p)= {\bold{Z}}^{\Phi}$ (resp.\ $\prod_w {\bold{Z}}^{Q(w)}$), if we denote by $m(j)\in {\bold{Z}}$ (resp.\ $m(w,j)\in {\bold{Z}}$) the $j$- (resp.\ $(w,j)$-)component of $\chi$, then the corresponding component of $\chi_+$ is $\max(m(j),0)$ (resp.\ $\max(m(w,j),0)$) and that of $\chi_-$ is $\max(-m(j),0)$ (resp.\ $\max(-m(w,j),0)$).

(i) For any $\chi\in X(S_p)$, $t(\chi_+)g_{\chi}= t(\chi_-)f_{\chi}$ and $t(\chi_+)h_{\chi}= t(\chi_-)g_{\chi}$.
Here $g_{\chi}$ etc.\ denotes the $\chi$-component for the adjoint action of $S_p$ via $\tau^{\star}_p$, and $t(\chi_+), t(\chi_-) \in {\bold{R}}_{\geq 0}$ are defined by the identification $\bar A_p= \Hom(X(S_p)^+, {\bold{R}}^{\mult}_{\geq 0})$.

(ii) Let $\chi\in X(S_p)$. If $t(\chi_+)=0$, then $g_{\chi}=f_{\chi}=0$. If $t(\chi_-)=0$, then $g_{\chi}=h_{\chi}=0$. In other words, if $m(j)\in {\bold{Z}}$ for $j\in \Phi$ (resp.\ $m(w,j)\in {\bold{Z}}$ for $w\in {\bold{Z}}$ and $j\in Q(w)$) denotes the $j$- (resp.\ $(w,j)$-)component of $\chi$ in the identification $X(S_p)= {\bold{Z}}^{\Phi}$ (resp.\ $\prod_w {\bold{Z}}^{Q(w)}$), then $f_{\chi}=0$ unless $m(j)\leq 0$ for any $j\in J$ (resp.\ unless $m(w, j)\leq 0$ for any $w\in {\bold{Z}}$ and $j\in J(w)$), $g_{\chi}=0$ unless $m(j)=0$ for any $j\in J$ (resp.\ unless $m(w, j)= 0$ for any $w\in {\bold{Z}}$ and $j\in J(w)$), and $h_{\chi}=0$ unless $m(j)\geq 0$ for any $j\in J$ (resp.\ unless $m(w, j)\geq 0$ for any $w\in {\bold{Z}}$ and $j\in J(w)$).

(iii) $g_{\chi}\in R$ and $f_{\chi}+h_{\chi^{-1}}\in R$ for any $\chi\in X(S_p)$.

(iv) $k\in S_J$.

Regard $X$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ in the natural way, and regard $Y\subset X$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by \ref{embstr}. Let
$$Y_0=\{(t,f,g,h,k)\in Y\;|\; t\in A_p\}\subset Y.$$
\end{sbpara}

\begin{sbthm}\label{ls1}
Consider the three situations in \ref{sit}. Let the notation be as in \ref{thm+2}.

(1) For a sufficiently small open neighborhood $U$ of $(0,0,0,0,0)$ in $Y$, there exists a unique open immersion $U\to \frak E$ in ${\cal {B}}'_{\bold{R}}(\log)$ which sends $(t, f, g, h, k)\in U\cap Y_0$ to the element
$$\exp(f){\tau}^{\star}_p(t) \exp(k) {{\bold r}}= \exp(f){\tau}_p(t) \exp(k) {{\bold r}}$$
of $D({\rm gr}^W)\subset \frak E$. This morphism sends $(0,\dots, 0)\in Y$ to $p$.

(2) Let $\bar L=\bar {\cal {L}}({{\bold r}})$ and $L={\cal {L}}({{\bold r}})$. Then for a sufficiently small open neighborhood $U$ of $(0,0,0,0,0)$ in $Y$, there exists a unique open immersion $U\times {\rm{spl}}(W) \times \bar L\to \frak D$ in ${\cal {B}}'_{\bold{R}}(\log)$ having the following property. In the situations (a) and (b) (resp.\ situation (c)), it sends $(t, f, g, h, k,s,\delta)\in Y\times {\rm{spl}}(W) \times L$, where $(t,f,g,h,k) \in U\cap Y_0$, $s\in {\rm{spl}}(W)$, and $\delta\in L$, to the element of $D$ whose image in $D({\rm gr}^W) \times {\rm{spl}}(W) \times {\cal {L}}$ under the isomorphism \ref{grsd} is
$$(\exp(f){\tau}^{\star}_p(t) \exp(k) {{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau^{\star}_p(t)\exp(k))\delta)$$
$$(\text{resp.}\; (\exp(f){\tau}_p(t) \exp(k) {{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau_p(t)\exp(k))\delta)).$$

(3) For a sufficiently small open neighborhood $U$ of $(0,0,0,0,0)$ in $Y$, the diagram
$$\begin{matrix} U\times {\rm{spl}}(W)\times \bar L &\to& \frak D\\ \downarrow &&\downarrow \\ U&\to& \frak E \end{matrix}$$
is cartesian in ${\cal {B}}'_{\bold{R}}(\log)$ and in the category of topological spaces.

(4) In the situations (a) and (c) (resp.\ situation (b)), the image of the map in (1) is contained in $\frak E(\Phi)$ (resp.\ $\frak E(Q)$) and the image of the map in (2) is contained in $\frak D(\Phi)$ (resp.\ $\frak D(Q)$), where $\Phi=\overline{{\cal {W}}}(p)$ (resp.\ $Q=({\cal {W}}(p_w))_w$).

(5) The underlying maps of the morphisms in (1) and (2) are described as in \ref{thm+4} below.
\end{sbthm}

\begin{pf}
In the situation (c), this is given in Part II, Theorem 3.4.4 and 3.4.12. The proofs for the situations (a) and (b) are similar.
\end{pf}

\begin{sbpara}\label{thm+4}
The maps in (1) and (2) in Theorem \ref{ls1} are induced from the maps
$$Y \to \frak E,\quad Y\times {\rm{spl}}(W) \times \bar L \to \frak D,$$
respectively, defined as follows.

The first map sends $(t, f, g, h, k) \in Y$ to the following element $p'\in \frak E$. Assume we are in the situations (a) and (c) (resp.\ situation (b)). Let $J=\{j\in \Phi\;|\; t_j=0\}$ (resp.\ $J=(J(w))_{w\in {\bold{Z}}}$ where $J(w)= \{j\in Q(w)\;|\; t_{w,j}=0\}$). Define $p_J\in \frak E$ as follows. Let $n=\sharp(\Phi)$ (resp.\ $n(w)= \sharp(Q(w))$ for $w\in {\bold{Z}}$). Let $(\rho, \varphi)$ be the ${\rm{SL}}(2)$-orbit on ${\rm gr}^W$ which represents $p$ (resp.\ let $(\rho_w,\varphi_w)$ for $w\in {\bold{Z}}$ be the ${\rm{SL}}(2)$-orbit on ${\rm gr}^W_w$ in $n(w)$ variables which represents $p_w$) such that ${{\bold r}}=\varphi(i,\dots,i)$ (resp.\ ${{\bold r}}_w=\varphi_w(i,\dots, i)$). Write $J=\{j_1,\dots, j_m\}$, $j_1<\dots< j_m$ (resp.\ $J(w)=\{j_{w,1}, \dots, j_{w,m(w)}\}$, $j_{w,1}<\dots<j_{w,m(w)}$). Then $p_J$ is the class of the following ${\rm{SL}}(2)$-orbit $(\rho',\varphi')$ on ${\rm gr}^W$ of rank $m$ (resp.\ of the family $(\rho'_w,\varphi'_w)_w$ of ${\rm{SL}}(2)$-orbits in $m(w)$ variables):
$$\rho'(g_1, \dots, g_m)= \rho(g_1',\dots, g_n'), \quad \varphi'(z_1,\dots, z_m)= \varphi(z'_1,\dots,z'_{n})$$
$$(\text{resp.}\; \rho_w'(g_1, \dots, g_{m(w)})= \rho_w(g_1',\dots, g'_{n(w)}), \quad \varphi'_w(z_1,\dots, z_{m(w)})= \varphi_w(z'_1,\dots,z'_{n(w)})).$$
Here $g'_j= g_k$ and $z'_j=z_k$, where $k$ is the smallest among the integers $a$ such that $1\leq a\leq m$ (resp.\ $1\leq a\leq m(w)$) and $j\leq j_a$ if such an $a$ exists, and $g'_j=1$ and $z'_j=i$ if no such $a$ exists.

Let $A'$ be the set of all elements $t'$ of $A_p$ such that $t'_j=t_j$ for any $j\in \Phi\smallsetminus J$ (resp.\ $t'_{w,j}=t_{w,j}$ for any $w\in {\bold{Z}}$ and any $j\in Q(w)\smallsetminus J(w)$). Then
$$p'= \exp(f)\tau_p(t')\exp(k)p_J$$
with $t'\in A'$. This $p'$ is independent of the choice of $t'\in A'$.

Next, the second map $Y\times {\rm{spl}}(W) \times \bar L\to \frak D$ sends $(t,f,g,h,k,s,\delta)$ to $(p', Z)\in \frak D$, where $p'$ is as above and $Z\subset D$ is as follows. Consider the situations (a) and (b) (resp.\ situation (c)). If $\delta\in L\subset \bar L$, $Z$ is the subset of $D$ whose image under the embedding $D\to D({\rm gr}^W) \times {\rm{spl}}(W) \times \bar {\cal {L}}$ in \ref{grsd} is the set of elements
$$(\exp(f) \tau^{\star}_p(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau^{\star}_p(t') \exp(k)) \delta)$$
$$(\text{resp.}\; (\exp(f) \tau_p(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau_p(t') \exp(k)) \delta))$$
where $t'$ ranges over all elements of $A'$.
If $\delta\in \bar L\smallsetminus L$ and $\delta=0\circ \delta^{(1)}$ for $\delta^{(1)}\in L\smallsetminus \{0\}$ (\ref{2.3ex} (4)), $Z$ is the subset of $D$ whose image under the embedding $D\to D({\rm gr}^W) \times {\rm{spl}}(W) \times \bar {\cal {L}}$ in \ref{grsd} is the set of elements
$$(\exp(f) \tau^{\star}_p(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau^{\star}_p(t') \exp(k)) (c\circ \delta^{(1)}))$$
$$(\text{resp.}\; (\exp(f) \tau_p(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau_p(t') \exp(k)) (c\circ \delta^{(1)})))$$
where $t'$ ranges over all elements of $A'$ and $c$ ranges over all elements of ${\bold{R}}_{>0}$.
\end{sbpara}

The part for $D_{{\rm{SL}}(2)}$ of the following proposition is Part II, Theorem 3.5.15.

\begin{sbprop}\label{Lbund}
Consider the situations in \ref{sit}. Fix any $F\in D({\rm gr}^W)$ and let $\bar L=\bar {\cal {L}}(F)$ (\ref{2.3ex} (4)). Then $\frak D$ is an $\bar L$-bundle over $\frak E\times {\rm{spl}}(W)$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$. Consequently, the map $\frak D\to \frak E \times {\rm{spl}}(W)$ is proper.
\end{sbprop}

\begin{pf}
This follows from \ref{ls1}.
\end{pf}

Note that the $\bar {\cal {L}}(F)$ for all $F\in D({\rm gr}^W)$ are isomorphic to each other as objects of ${\cal {B}}'_{\bold{R}}(\log)$.

\begin{sbprop}\label{2stars}
The map $D_{{\rm{SL}}(2)}^{\star}\to D_{{\rm{SL}}(2)}^{\star,-}$ (\ref{slmap}) is a morphism of ${\cal {B}}'_{\bold{R}}(\log)$. The following diagram is cartesian in ${\cal {B}}'_{\bold{R}}(\log)$ and also cartesian in the category of topological spaces.
$$\begin{matrix} D^{\star}_{{\rm{SL}}(2)} & \to & D^{\star,-}_{{\rm{SL}}(2)}\\ \downarrow &&\downarrow\\ D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim} &\to & \;D_{{\rm{SL}}(2)}({\rm gr}^W). \end{matrix}$$
\end{sbprop}

\begin{pf}
We deduce this from Theorem \ref{ls1}. Let $p\in D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$ and let $p'$ be the image of $p$ in $D_{{\rm{SL}}(2)}({\rm gr}^W)$. Take $R$ and $S$ for the situation (b) in \ref{sit} as in \ref{thm+2}, using $p'$ as the $p$ of \ref{thm+2}, and write this $R$ as $R'$. Let $C$ be an ${\bold{R}}$-subspace of ${\frak g}_{\bold{R}}({\rm gr}^W)$ such that ${\rm {Lie}}(\tau^{\star}_{p'}(A_{p'}))$ is the direct sum of ${\rm {Lie}}(\tau^{\star}_p(A_p))$ and $C$. Let $R=C\oplus R'$. Then $R$ and $S$ satisfy the conditions on $R$ and $S$ in \ref{thm+2} for the situation (a) in \ref{sit} and for $p$.

The homomorphism $S_p \to S_{p'}$ (\ref{simst2.5}) induces a homomorphism $X(S_{p'})^+\to X(S_p)^+$ and hence a morphism $\bar A_p=\Hom(X(S_p)^+, {\bold{R}}^{\mult}_{\geq 0}) \to \bar A_{p'}=\Hom(X(S_{p'})^+, {\bold{R}}^{\mult}_{\geq 0})$.

Let $Y$ be the $Y$ in \ref{thm+2} defined by $(p, R, S)$ for the situation (a) in \ref{sit}, and let $Y'$ be the $Y$ in \ref{thm+2} defined by $(p', R', S)$ for the situation (b) in \ref{sit}. For $(t,f,g,h,k)\in Y$, since $g\in R=C\oplus R'$, we can write $g=c+g'$ with $c\in C$ and $g'\in R'$ in a unique way, and we have $(t', f', g', h', k)\in Y'$, where $t'$ is the image of $t$ in $\bar A_{p'}$ and $f'=f-c$, $h'=h-c$. We have a morphism $Y\to Y'$ which sends $(t,f,g,h,k)\in Y$ to $(t't'', f',g',h',k)\in Y'$, where $t''$ is the unique element of $A_{p'}$ such that $\tau^{\star}_{p'}(t'')=\exp(c)$.
For a sufficiently small open neighborhood $U$ of $(0,0,0,0,0)$ in $Y$ and a sufficiently small open neighborhood $U'$ of $(0,0,0,0,0)$ in $Y'$ such that the image of $U$ in $Y'$ is contained in $U'$, we have commutative diagrams
$$\begin{matrix} U&\to & \frak E \\ \downarrow & &\downarrow \\ U' & \to & \frak E'\end{matrix} \quad\quad \begin{matrix}U\times {\rm{spl}}(W) \times \bar L &\to& \frak D\\ \downarrow &&\downarrow \\U'\times {\rm{spl}}(W) \times \bar L&\to & \frak D'\end{matrix}$$
where $\frak E=D_{{\rm{SL}}(2)}({\rm gr}^W)^{\sim}$, $\frak E'=D_{{\rm{SL}}(2)}({\rm gr}^W)$, $\frak D=D^{\star}_{{\rm{SL}}(2)}$, and $\frak D'=D^{\star,-}_{{\rm{SL}}(2)}$. This reduces Proposition \ref{2stars} to Theorem \ref{ls1}.
\end{pf}

\subsection{Basic facts on ${\rm{SL}}(2)$-orbits and Borel--Serre orbits}\label{ss:basic}

This Section \ref{ss:basic} is a preparation for the rest of Section 2. In \ref{BS1}--\ref{BS3} we review the space $D_{{\rm {BS}}}$ defined and studied in Part I, and then in \ref{objass}--\ref{gest8} we give some basic facts about the spaces $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$, and $D_{{\rm {BS}}}$.

\begin{sbpara}\label{BS1}
We briefly review the definition of the set $D_{{\rm {BS}}}$ (see Part I for details).

Parabolic subgroups play central roles in the theory of Borel--Serre spaces. Following \cite{BS}, for a linear algebraic group $Z$ over a field, we call an algebraic subgroup $P$ of $Z$ a {\it parabolic subgroup} if it is geometrically connected and $Z/P$ is a projective variety.

In our setting, there are bijections
$$\{\text{${\bold{Q}}$-parabolic subgroups of $G$}\} \leftrightarrow \{\text{${\bold{Q}}$-parabolic subgroups of $G({\rm gr}^W)$}\}$$
$$\leftrightarrow \{\text{families $(P_w)_{w\in {\bold{Z}}}$ of ${\bold{Q}}$-parabolic subgroups $P_w$ of $G({\rm gr}^W_w)$}\}.$$
The bijection from the last set to the second set is given by $(P_w)_w\mapsto \prod_w P_w$, and the bijection from the second set to the first set is given by taking the inverse image under $G_{\bold{R}}\to G_{\bold{R}}({\rm gr}^W)$.

Let $P$ be a ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}({\rm gr}^W)$. Let $P_u$ be the unipotent radical of $P$, let $S_P$ be the largest ${\bold{Q}}$-split torus in the center of $P/P_u$, and let $A_P$ (resp.\ $B_P$) be the connected component containing $1$ of the topological group $S_P({\bold{R}})$ (resp.\ $({\bold{G}}_m\times S_P)({\bold{R}})$).

For each $p\in D({\rm gr}^W)$, we have a canonical homomorphism $S_P\to P$ of algebraic groups over ${\bold{R}}$ such that the composition $S_P\to P\to P/P_u$ is the identity map, which we call the {\it Borel--Serre lifting at $p$} and denote by $t\mapsto t_p$. This $t_p$ is characterized by the following two properties.

(i) The image of $t_p$ in $P/P_u$ coincides with $t$.

(ii) $\theta_{K_p}(t_p)= t_p^{-1}$, where $\theta_{K_p}: G_{\bold{R}}({\rm gr}^W)\to G_{\bold{R}}({\rm gr}^W)$ denotes the Cartan involution associated to the maximal compact subgroup $K_p$ (cf.\ Part I, 2.1) of $G_{\bold{R}}({\rm gr}^W)$ associated to $p$.

We have the following action of $B_P$ on $D$, which we call the Borel--Serre action and denote by $(b, F)\mapsto b\circ F$ $(b\in B_P$, $F\in D)$.
For $b=(c,a)\in B_P$ with $c\in {\bold{R}}_{>0}$ and $a\in A_P$, we define $b\circ F:= (c^w)_w a_{F({\rm gr}^W)} F$, where $a_{F({\rm gr}^W)}$ is the Borel--Serre lifting of $a$ at $F({\rm gr}^W)$, $(c^w)_w$ is the element of $\prod_w {\rm {Aut}}_{\bold{R}}({\rm gr}^W_w)$ which acts on ${\rm gr}^W_w$ as the multiplication by $c^w$, and $(c^w)_w a_{F({\rm gr}^W)}$ acts on $D$ by the lifted action (\ref{liftac}).

The action of $A_P$ on $D$ and the action of $B_P$ on $D_{{\rm nspl}}$ are fixed point free.

$D_{{\rm {BS}}}$ is defined as the set of pairs $(P, Z)$ where $P$ is a ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}({\rm gr}^W)$ and $Z$ is either

(i) an $A_P$-orbit in $D$, or

(ii) a $B_P$-orbit in $D_{{\rm nspl}}$

\noindent for the Borel--Serre action. In the case (i), we call $(P, Z)$ an {\it $A_P$-orbit}. In the case (ii), we call $(P, Z)$ a {\it $B_P$-orbit}. We denote by $D^{{{\rm{mild}}}}_{{\rm {BS}}}$ the subset of $D_{{\rm {BS}}}$ consisting of $A_P$-orbits. This subset was written as $D^{(A)}_{{\rm {BS}}}$ in Part I.
\end{sbpara}

\begin{sbpara}\label{BS2}
We review the structure of $D_{{\rm {BS}}}$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ (actually it is a real analytic manifold with corners).

For a ${\bold{Q}}$-parabolic subgroup $P$ of $G_{\bold{R}}({\rm gr}^W)$, let
$$D_{{\rm {BS}}}(P)= \{(Q, Z)\in D_{{\rm {BS}}}\;|\; Q\supset P\}.$$
Then the $D_{{\rm {BS}}}(P)$ form an open covering of $D_{{\rm {BS}}}$ when $P$ varies. $D_{{\rm {BS}}}$ is also covered by the open sets $D^{{{\rm{mild}}}}_{{\rm {BS}}}$ (\ref{BS1}) and $D_{{\rm {BS}},{\rm nspl}}$, where $D_{{\rm {BS}},{\rm nspl}}$ denotes the subset of $D_{{\rm {BS}}}$ consisting of all elements $(P, Z)$ such that $Z\subset D_{{\rm nspl}}$.

The structures of $D^{{{\rm{mild}}}}_{{\rm {BS}}}(P):=D_{{\rm {BS}}}(P)\cap D^{{{\rm{mild}}}}_{{\rm {BS}}}$ and $D_{{\rm {BS}},{\rm nspl}}(P):= D_{{\rm {BS}}}(P)\cap D_{{\rm {BS}},{\rm nspl}}$ as objects of ${\cal {B}}'_{\bold{R}}(\log)$ are described as follows.

Let $X(S_P)$ be the character group of $S_P$, and let $\Delta(P) \subset X(S_P)$ be the set of simple roots (\cite{BS}). This set $\Delta(P)$ is characterized by the following two properties (i) and (ii).

(i) Let $n$ be the rank of $S_P$. Then $\Delta(P)$ is of order $n$ and generates ${\bold{Q}}\otimes X(S_P)$ over ${\bold{Q}}$.

(ii) Let $X(S_P)^+$ be the submonoid of $X(S_P)$ generated by $\Delta(P)$. Lift $S_P$ to a subtorus of $P$. Then $X(S_P)^+$ coincides with the submonoid of $X(S_P)$ generated by the $\chi^{-1}$, where $\chi$ ranges over all elements of $X(S_P)$ which appear in the adjoint action of $S_P$ on ${\rm {Lie}}(P)$.

Define real toric varieties (\ref{2.3ex} (3)) $\bar A_P$ and $\bar B_P$ by
$$\bar A_P:=\Hom(X(S_P)^+, {\bold{R}}^{\mult}_{\geq 0})= {\bold{R}}_{\geq 0}^{\Delta(P)}\supset A_P=\Hom(X(S_P), {\bold{R}}^{\mult}_{>0})= {\bold{R}}_{>0}^{\Delta(P)},$$
$$\bar B_P:={\bold{R}}_{\geq 0} \times \bar A_P\supset B_P= {\bold{R}}_{>0}\times A_P.$$

For a ${\bold{Q}}$-parabolic subgroup $Q$ of $G_{\bold{R}}({\rm gr}^W)$ with $Q\supset P$, there is a canonical injection $\Delta(Q) \to \Delta(P)$, and $Q\mapsto \Delta(Q)\subset \Delta(P)$ is a bijection from the set of all ${\bold{Q}}$-parabolic subgroups $Q$ of $G_{\bold{R}}({\rm gr}^W)$ such that $Q\supset P$ to the set of all subsets of $\Delta(P)$.
This is explained as follows. For such $Q$, we have $Q_u\subset P_u$, the composition $S_Q\to Q/Q_u\to Q/P_u$ is injective, and the image of this composite map is contained in $S_P\subset P/P_u\subset Q/P_u$. Hence $A_Q$ is regarded as a subgroup of $A_P$. There is a unique injection $\Delta(Q)\to \Delta(P)$ such that the composition ${\bold{R}}_{>0}^{\Delta(Q)}\cong A_Q\subset A_P={\bold{R}}_{>0}^{\Delta(P)}$ coincides with the map $f\mapsto g$, where $g(j)=f(j)$ for $j\in \Delta(Q)$ and $g(j)=1$ for $j\in \Delta(P)\smallsetminus \Delta(Q)$.

We have bijections
$$D^{{{\rm{mild}}}}_{{\rm {BS}}}(P) \cong D\times^{A_P} \bar A_P, \quad D_{{\rm {BS}},{\rm nspl}}(P)\cong D_{{\rm nspl}} \times^{B_P} \bar B_P$$
which send the element $(Q, Z)$ of $D^{{{\rm{mild}}}}_{{\rm {BS}}}(P)$ (resp.\ $D_{{\rm {BS}},{\rm nspl}}(P)$) to the class of $(z, h)$ (resp.\ $(z, \tilde h)$), where $z\in Z$ and $h\in \bar A_P={\bold{R}}_{\geq 0}^{\Delta(P)}$ (resp.\ $\tilde h=(0, h)\in \bar B_P={\bold{R}}_{\geq 0}\times {\bold{R}}_{\geq 0}^{\Delta(P)}$) is defined by
$$h(j)=0\;\text{for}\; j\in \Delta(Q)\subset \Delta(P), \quad h(j)=1\;\text{for}\; j\in \Delta(P)\smallsetminus \Delta(Q).$$
The right-hand sides of these bijections are regarded as objects of ${\cal {B}}_{\bold{R}}'(\log)$ (Part I, Section 8) as is explained below, and the left-hand sides are endowed with the structures of objects of ${\cal {B}}'_{\bold{R}}(\log)$ for which these bijections are isomorphisms of ${\cal {B}}'_{\bold{R}}(\log)$.

There is a closed real analytic submanifold $D^{(1,A)}$ (resp.\ $D^{(1,B)}$) of $D$ (resp.\ $D_{{\rm nspl}}$) such that we have an isomorphism $A_P\times D^{(1,A)}\overset{\cong}\to D$ (resp.\ $B_P\times D^{(1,B)}\overset{\cong}\to D_{{\rm nspl}}$), $(a, F)\mapsto a\circ F$, of real analytic manifolds. This induces a bijection $\bar A_P \times D^{(1,A)}\to D\times^{A_P} \bar A_P$ (resp.\ $\bar B_P\times D^{(1,B)} \to D_{{\rm nspl}}\times^{B_P} \bar B_P$), and by this, $D\times^{A_P} \bar A_P$ (resp.\ $D_{{\rm nspl}}\times^{B_P} \bar B_P$) has a structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$. This structure is independent of the choice of $D^{(1,A)}$ (resp.\ $D^{(1,B)}$).
\end{sbpara}

\begin{sbpara}\label{BS3}
The definition of the set $D_{{\rm {BS}}}$ can be rewritten in a style similar to the definitions of the spaces of ${\rm{SL}}(2)$-orbits in Section 2.3.

Let $D_{{\rm {BS}}}({\rm gr}^W)=\prod_{w\in {\bold{Z}}} D_{{\rm {BS}}}({\rm gr}^W_w)$, where $D_{{\rm {BS}}}({\rm gr}^W_w)$ is the space $D_{{\rm {BS}}}$ for the graded quotient ${\rm gr}^W_w$. For $p=(P_w, Z_w)_{w\in {\bold{Z}}}\in D_{{\rm {BS}}}({\rm gr}^W)$, we denote $\prod_{w\in {\bold{Z}}} Z_w\subset D({\rm gr}^W)$ by $Z(p)$. We call $Z(p)$ the {\it torus orbit} of $p$, and we call $\prod_{w\in {\bold{Z}}} P_w\subset G_{\bold{R}}({\rm gr}^W)$ the ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}({\rm gr}^W)$ associated to $p$.

Then $D_{{\rm {BS}}}$ is understood as the set of pairs $(p, Z)$ where $p\in D_{{\rm {BS}}}({\rm gr}^W)$ and $Z$ is a subset of $D$ satisfying the following conditions (i) and (ii).

(i) $Z$ is either (i.A) an $A_P$-orbit in $D$ for the Borel--Serre action, or (i.B) a $B_P$-orbit in $D_{{\rm nspl}}$ for the Borel--Serre action.
Here $P$ is the ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}({\rm gr}^W)$ associated to $p$.

(ii) The image of $Z$ in $D({\rm gr}^W)$ coincides with the torus orbit $Z(p)$ of $p$.
\end{sbpara}

\begin{sbpara}\label{sit2}
In the rest of this Section 2.4, we consider the situations (a)--(c) in \ref{sit} and also the situation

(d) $\frak D= D_{{\rm {BS}}}$, $\frak E=D_{{\rm {BS}}}({\rm gr}^W)$.
\end{sbpara}

\begin{sbpara}\label{objass}
For $x\in \frak D$, we define objects
$$S_x,\; X(S_x)^+, \; T(x),\; \bar T(x), \; Z(x),\; \bar Z(x)$$
associated to $x$.

In the situations (a)--(c), write $x=(p, Z)$ $(p\in \frak E,\ Z\subset D)$. In the situation (d), write $x=(P, Z)$.

In the situations (a)--(c), let $S_x=S_p$ if $x$ is an $A$-orbit, and let $S_x={\bold{G}}_m\times S_p$ if $x$ is a $B$-orbit (\ref{sim5}, \ref{symb1}). In the situation (d), let $S_x=S_P$ if $x$ is an $A_P$-orbit, and let $S_x= {\bold{G}}_m\times S_P$ if $x$ is a $B_P$-orbit (\ref{BS1}, \ref{BS2}).

We define a submonoid $X(S_x)^+$ of the character group $X(S_x)$ of $S_x$ as follows. In the situations (a)--(c), let $X(S_x)^+:=X(S_p)^+$ if $x$ is an $A$-orbit (\ref{sim5}), and let $X(S_x)^+:={\bold{N}}\times X(S_p)^+\subset {\bold{Z}}\times X(S_p)=X(S_x)$ if $x$ is a $B$-orbit. In the situation (d), let $X(S_x)^+:=X(S_P)^+$ if $x$ is an $A_P$-orbit, and let $X(S_x)^+:={\bold{N}}\times X(S_P)^+\subset {\bold{Z}}\times X(S_P)=X(S_x)$ if $x$ is a $B_P$-orbit, where $X(S_P)^+$ is as in \ref{BS2}.

Let $T(x)$ be the connected component of $S_x({\bold{R}})$ containing the unit element. Let
$$\bar T(x) := \Hom(X(S_x)^+, {\bold{R}}^{\mult}_{\geq 0})\supset T(x) = \Hom(X(S_x)^+, {\bold{R}}^{\mult}_{>0}).$$
We regard $\bar T(x)$ as a real toric variety.

Define $Z(x):=Z$. We call $Z(x)$ the torus orbit associated to $x$. $T(x)$ acts on $Z(x)$ and $Z(x)$ is a $T(x)$-torsor. Let $\bar Z(x):= Z(x) \times^{T(x)} \bar T(x)$. Then $\bar Z(x)$ has a unique structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$ such that for any ${{\bold r}}\in Z(x)$, the bijection $\bar T(x)\to \bar Z(x)$ induced from the bijection $T(x)\to Z(x)\;;\;t\mapsto t{{\bold r}}$ becomes an isomorphism in ${\cal {B}}'_{\bold{R}}(\log)$. We call $\bar Z(x)$ the extended torus orbit associated to $x$. In \ref{gest2} below, we will embed $\bar Z(x)$ in $\frak D$ in such a way that $x\in \bar Z(x)$.
\end{sbpara}

\begin{sbpara}\label{logstalk0}
This \ref{logstalk0} is a preparation for \ref{logstalk} below.

Consider the three situations in \ref{sit}. In the situations (a) and (b) (resp.\ situation (c)), we have a global section $\beta_0^{\star}$ (resp.\ $\beta_0$) of $M_{\frak D}/\cO^\times_{\frak D}$ defined as follows.

In the situations (a) and (c) (resp.\ situation (b)), let $\Phi \in \overline{{\cal {W}}}$ (resp.\ $Q=(Q(w))_w \in \prod_w {\cal {W}}({\rm gr}^W_w)$), let $\frak D'= \frak D(\Phi)$ (resp.\ $\frak D'=\frak D(Q)$), let $\alpha$ be a splitting of $\Phi$ (resp.\ $Q$), and let $\beta$ be a distance to $\Phi$-boundary (resp.\ $Q$-boundary). Fix a real analytic closed submanifold ${\cal {L}}^{(1)}$ of ${\cal {L}}\smallsetminus \{0\}$ such that ${\bold{R}}_{>0}\times {\cal {L}}^{(1)}\to {\cal {L}}\smallsetminus \{0\}\;;\; (a,\delta) \mapsto a\circ \delta$ is an isomorphism of real analytic manifolds, and let ${\bold{R}}_{\geq 0}\times {\cal {L}}^{(1)}\overset{\cong}\to \bar {\cal {L}}\smallsetminus \{0\}$ be the induced isomorphism in ${\cal {B}}'_{\bold{R}}(\log)$.
Let $\frak D'_{{\rm nspl}}$ be the open subset of $\frak D'$ defined by the condition $\delta\ne 0$ via the bijection $\nu$ in Proposition \ref{staran1} associated to $(\alpha,\beta)$. Then in the situations (a) and (b) (resp.\ situation (c)), we have the composite morphism $\frak D'_{{\rm nspl}}\to \bar {\cal {L}}\smallsetminus \{0\} \cong {\bold{R}}_{\geq 0} \times {\cal {L}}^{(1)} \to {\bold{R}}_{\geq 0}$, where the first arrow is induced by $\nu$. We denote this composite morphism $\frak D'_{{\rm nspl}} \to {\bold{R}}_{\geq 0}$ by $\beta_0^{\star}$ (resp.\ $\beta_0$).

Then, as is easily seen, this $\beta_0^{\star}$ (resp.\ $\beta_0$) belongs to $M_{\frak D'_{{\rm nspl}}}$, the class of $\beta_0^{\star}$ (resp.\ $\beta_0$) in $M_{\frak D'_{{\rm nspl}}}/\cO^\times_{\frak D'_{{\rm nspl}}}$ is independent of the choices of $\alpha$, $\beta$, and ${\cal {L}}^{(1)}$, this class extends uniquely to a section of $M_{\frak D'}/\cO^\times_{\frak D'}$ which is trivial on the part of $A$-orbits of $\frak D'$, and this local section of $M_{\frak D}/\cO^\times_{\frak D}$ on $\frak D'=\frak D(\Phi)$ (resp.\ $\frak D'=\frak D(Q)$) extends uniquely, when $\Phi$ (resp.\ $Q$) moves, to a global section $\beta_0^{\star}$ (resp.\ $\beta_0$) of $M_{\frak D}/\cO^\times_{\frak D}$ on $\frak D$.
\end{sbpara}

\begin{sbprop}\label{logstalk}
Consider the four situations in \ref{sit2}. For $x\in \frak D$, we have a canonical isomorphism
$$(M_{\frak D}/\cO^\times_{\frak D})_x\cong X(S_x)^+.$$
\end{sbprop}

\begin{pf}
We first consider the situations (a)--(c). Write $x=(p, Z)$. As in \ref{logst1}, we have a canonical isomorphism $(M_{\frak E}/\cO^\times_{\frak E})_p\cong X(S_p)^+$. In the case when $x$ is an $A$-orbit, we have $(M_{\frak E}/\cO^\times_{\frak E})_p \overset{\cong}\to (M_{\frak D}/\cO^\times_{\frak D})_x$. If $x$ is a $B$-orbit, we have ${\bold{N}}\times (M_{\frak E}/\cO^\times_{\frak E})_p \overset{\cong}\to (M_{\frak D}/\cO^\times_{\frak D})_x$, where $1\in {\bold{N}}$ is sent to $\beta^{\star}_0$ in the situations (a) and (b) and to $\beta_0$ in the situation (c).

We next consider the situation (d). Write $x=(P, Z)$.

Assume first that $x$ is an $A_P$-orbit. Consider the composite morphism $S:=D_{{\rm {BS}}}^{{{\rm{mild}}}}(P)\cong \bar A_P\times D^{(1,A)} \to \bar A_P={\bold{R}}_{\geq 0}^{\Delta(P)}$, where the first isomorphism is as in \ref{BS2}. For $j\in \Delta(P)$, let $\beta_j: S\to {\bold{R}}_{\geq 0}$ be the $j$-component of this composite morphism. Then $\beta_j$ is a section of $M_S$ and the class of $\beta_j$ in $M_S/\cO^\times_S$ is independent of the choice of $D^{(1,A)}$ in \ref{BS2}. We have a canonical isomorphism $X(S_x)^+ ={\bold{N}}^{\Delta(P)} \overset{\cong}\to (M_S/\cO^\times_S)_x$ which sends $m\in {\bold{N}}^{\Delta(P)}$ to the class of $\prod_{j\in \Delta(P)} \beta_j^{m(j)}$.

Assume next that $x$ is a $B_P$-orbit. Consider the composite morphism $S:=D_{{\rm {BS}},{\rm nspl}}(P)\cong \bar B_P\times D^{(1,B)} \to \bar B_P={\bold{R}}_{\geq 0}\times {\bold{R}}_{\geq 0}^{\Delta(P)}$, where the first isomorphism is as in \ref{BS2}.
Let $\beta_0^{{\rm {BS}}}: S\to {\bold{R}}_{\geq 0}$ be the first component of this composite morphism, and for $j\in \Delta(P)$, let $\beta_j: S\to {\bold{R}}_{\geq 0}$ be the $j$-component of this composite morphism. Then $\beta_0^{{\rm {BS}}}$ and the $\beta_j$ $(j\in \Delta(P))$ are sections of $M_S$ and their classes in $M_S/\cO^\times_S$ are independent of the choice of $D^{(1,B)}$ in \ref{BS2}. We have an isomorphism $X(S_x)^+ = {\bold{N}} \times {\bold{N}}^{\Delta(P)} \overset{\cong}\to (M_S/\cO^\times_S)_x$ which sends $(m_0, (m_j)_{j\in \Delta(P)})\in {\bold{N}}\times {\bold{N}}^{\Delta(P)}$ to the class of $(\beta_0^{{\rm {BS}}})^{m_0}\cdot \prod_{j\in \Delta(P)} \beta_j^{m_j}$.
\end{pf}

\begin{sbpara}\label{gest2}
Let the situations (a)--(d) be as in \ref{sit2}. Let $x\in \frak D$. The inclusion map $Z(x) \to D$ extends uniquely to a morphism
$$\bar Z(x) \to \frak D$$
of ${\cal {B}}'_{\bold{R}}(\log)$. This morphism is described as follows.

Assume first that we are in one of the situations (a)--(c). Write $x=(p, Z)$ and fix ${{\bold r}}\in Z(p)$. Consider the morphism $Y\times{\rm{spl}}(W)\times\bar L \to \frak D$ in \ref{thm+4} defined for $(p, {{\bold r}}, R, S)$ by fixing $R$ and $S$ as in \ref{thm+2}. Then the morphism $\bar Z(x) \to \frak D$ is the composite morphism $\bar Z(x) \to Y\times{\rm{spl}}(W)\times\bar L \to \frak D$, where the first morphism is as follows. Let $F$ be an element of $Z(x)$ whose image under the embedding $D\to D({\rm gr}^W)\times{\rm{spl}}(W) \times {\cal {L}}$ is $({{\bold r}}, s, \delta)$. Let $t\in \bar A_p$. Then the first morphism sends $(F, t)\in \bar Z(x)=Z(x) \times^{T(x)} \bar T(x)$ to $(t, 0,0,0,0, s, \delta)\in Y\times{\rm{spl}}(W)\times\bar L$, and if $x$ is a $B$-orbit, for $(c,t)\in \bar B_p$ $(c\in {\bold{R}}_{\geq 0})$, the first morphism sends $(F,(c,t))\in \bar Z(x)$ to $(t, 0,0,0,0,s, c\circ \delta)\in Y\times{\rm{spl}}(W)\times\bar L$.

Next assume that we are in the situation (d). Write $x=(P, Z)$. If $x$ is an $A_P$-orbit, this morphism $\bar Z(x)\to \frak D$ is the composition $\bar Z(x) = Z \times^{A_P} \bar A_P\subset D\times^{A_P} \bar A_P \cong D_{{\rm {BS}}}^{{{\rm{mild}}}}(P)$. If $x$ is a $B_P$-orbit, this morphism is the composition $\bar Z(x) = Z \times^{B_P} \bar B_P \subset D_{{\rm nspl}} \times^{B_P} \bar B_P \cong D_{{\rm {BS}},{\rm nspl}}(P)$.

This morphism $\bar Z(x) \to \frak D$ is injective and strict (\ref{gest6}), and it sends $0\in \bar Z(x)$ to $x$. Here $0$ denotes the class of $({{\bold r}}, 0)$, where ${{\bold r}}\in Z(x)$ and $0\in \bar T(x)$ is the homomorphism $(M_{\frak D}/\cO^\times_{\frak D})_x \to {\bold{R}}^{\mult}_{\geq 0}$ which sends every non-trivial element of $(M_{\frak D}/\cO^\times_{\frak D})_x$ to $0$. (This $0\in \bar Z(x)$ is independent of the choice of ${{\bold r}}$.) We will identify $\bar Z(x)$ with its image in $\frak D$, which coincides with the closure of $Z(x)$ in $\frak D$.
\end{sbpara}

\begin{sbpara}\label{gest80}
Consider the situations (a)--(d) as in \ref{sit2}. In \ref{gest8} below, we give descriptions of log modifications of $\frak D$ as sets by using the extended torus orbits $\bar Z(x) \subset \frak D$ associated to $x\in \frak D$ (\ref{gest2}), which we will use in Section 2.5 and Section 2.6.

Let $U$ be an open set of $\frak D$.
Let $L$ and $N$ be as in \ref{rvtoric}, let $\Sigma$ be a finite rational fan in $N_{\bold{R}}$, and let $\Sigma'$ be a rational finite subdivision of $\Sigma$. Let ${\rm {Mor}}(-, U)\to [\Sigma]$ be a morphism of functors (\ref{logmod2}) such that for any $x\in U$, if $\sigma$ denotes the image of $x$ in $\Sigma$ (\ref{logmod2}), the homomorphism $\cS(\sigma) \to (M_U/\cO_U^\times)_x$ is universally saturated.

For $x\in U$ and $\sigma'\in \Sigma'$ whose images in $\Sigma$ coincide, we define a subgroup $T(x, \sigma')$ of $T(x)= \Hom((M_U^{\rm gp}/\cO^\times_U)_x, {\bold{R}}^{\mult}_{>0})$ as follows. Let $\sigma$ be the common image in $\Sigma$. Then the homomorphism $L\to (M_U^{\rm gp}/\cO^\times_U)_x$ factors through $L/\cS(\sigma)^\times$. $T(x,\sigma')$ is the inverse image of $\Hom(L/\cS(\sigma')^\times, {\bold{R}}^{\mult}_{>0})\subset \Hom(L/\cS(\sigma)^\times, {\bold{R}}^{\mult}_{>0})$ in $T(x)$.

Let $U'\to U$ be the log modification which represents the functor ${\rm {Mor}}(-, U) \times_{[\Sigma]} [\Sigma']$ (\ref{logmod2}).
\end{sbpara}

\begin{sblem}\label{gest8}
Let the notation and the assumptions be as in \ref{gest80}. There exists a canonical bijection between $U'$ and the set of all triples $(x, \sigma', Z')$, where $x\in U$, $\sigma'$ is an element of $\Sigma'$ whose image in $\Sigma$ coincides with the image of $x$ in $\Sigma$, and $Z'$ is a $T(x,\sigma')$-orbit in $Z(x)$.
\end{sblem}

\begin{pf}
Let $x\in U$ and let $U''$ be the fiber product of $\bar Z(x) \to U\leftarrow U'$. Then the fiber over $x$ of $U'\to U$ coincides with the fiber over $x$ of $U''\to \bar Z(x)$. Since $U''$ represents the functor ${\rm {Mor}}(-,U)\times_{[\Sigma]} [\Sigma']$, this lemma follows from \ref{gest1}.
\end{pf}

\subsection{Relations with $D_{{\rm{SL}}(2)}$}

We connect the spaces $D^{\star}_{{\rm{SL}}(2)}$ and $D^{II}_{{\rm{SL}}(2)}$ by introducing a new space $D^{\star,+}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits.

\begin{sbpara}\label{sl+def}
We define a log modification (\ref{logmod})
$$D^{\star,+}_{{\rm{SL}}(2)}\to D^{\star}_{{\rm{SL}}(2)}.$$

On $\frak D:=D^{\star}_{{\rm{SL}}(2)}$, there is a unique section $\beta_{\text{tot}}$ of $M_{\frak D}/\cO^\times_{\frak D}$ such that for any $\Phi\in \overline{{\cal {W}}}$, the restriction of $\beta_{\text{tot}}$ to $\frak D(\Phi)$ coincides with the image of the product $\prod_{j\in \Phi} \beta_j$ in $M_{\frak D}/\cO_{\frak D}^\times$, where $\beta=(\beta_j)_{j\in \Phi}$ is a distance to $\Phi$-boundary. Let $\beta_0^{\star}$ be the section of $M_{\frak D}/\cO_{\frak D}^\times$ defined in \ref{logstalk0}.

Consider the homomorphism ${\bold{N}}^2\to M_{\frak D}/\cO_{\frak D}^\times\;;\; (a,b) \mapsto \beta_{\text{tot}}^a(\beta_0^{\star})^b$. Take $L={\bold{Z}}^2$ in \ref{rvtoric}, and let $\Sigma$ be the fan of all faces of the cone ${\bold{R}}^2_{\geq 0}\subset N_{\bold{R}}={\bold{R}}^2$, so that we have a morphism ${\rm {Mor}}(-, \frak D) \to [\Sigma]$. Let $\Sigma'$ be the rational finite subdivision of $\Sigma$ consisting of the cones
$$\sigma_1:= \{(x,y)\in {\bold{R}}^2_{\geq 0}\; |\; x\geq y\}, \quad \sigma_2:=\{(x,y)\in {\bold{R}}^2_{\geq 0}\; |\; x\leq y\}$$
and their faces. Let $D^{\star,+}_{{\rm{SL}}(2)}$ be the log modification of $\frak D$ which represents the fiber product ${\rm {Mor}}(-,\frak D)\times_{[\Sigma]} [\Sigma']$ (\ref{logmod2}).
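% Added remark (a standard toric-geometry observation, included only as an illustrative aside):
% the subdivision $\Sigma'$ is obtained from $\Sigma$ by inserting the diagonal ray
% $\sigma_0=\{(x,x)\;|\;x\in{\bold{R}}_{\geq 0}\}$; for the model real toric variety
% ${\bold{R}}^2_{\geq 0}$, this subdivision corresponds to the blow-up at the corner $(0,0)$,
% which may help in picturing the log modification $D^{\star,+}_{{\rm{SL}}(2)}\to D^{\star}_{{\rm{SL}}(2)}$.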
$D^{\star,+}_{{\rm{SL}}(2)}$ is covered by the open sets $D^{\star,+}_{{\rm{SL}}(2)}(\sigma_j)$ for $j=1,2$ corresponding to the cone $\sigma_j$, which represents ${\rm {Mor}}(-, \frak D)\times_{[\Sigma]} [\text{face}(\sigma_j)]$, where $\text{face}(\sigma_j)$ denotes the fan of all faces of $\sigma_j$. On the open set $U=D^{\star,+}_{{\rm{SL}}(2)}(\sigma_1)$ (resp.\ $U=D^{\star,+}_{{\rm{SL}}(2)}(\sigma_2)$), the pull-back of $\beta_{\text{tot}}/\beta_0^{\star}$ (resp.\ $\beta_0^{\star}/\beta_{\text{tot}}$) in $M^{\rm gp}_U/\cO^\times_U$ belongs to $M_U/\cO^\times_U$.
\end{sbpara}

\begin{sbpara}
Since the restriction of $\beta_0^{\star}$ to $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ is trivial, the canonical morphism $D^{\star,+}_{{\rm{SL}}(2)}\to D^{\star}_{{\rm{SL}}(2)}$ is an isomorphism over $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$, and hence $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ is embedded in $D^{\star,+}_{{\rm{SL}}(2)}$ as an open set. Via this, $D\subset D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ is embedded in $D^{\star,+}_{{\rm{SL}}(2)}$ as an open set.
\end{sbpara}

\begin{sbpara}\label{sl+def2}
We describe $D^{\star,+}_{{\rm{SL}}(2)}$ as a set.

We have
$$\Sigma= \{\tau_{1,2}, \tau_1, \tau_2, \tau_0\}, \quad \Sigma'=\{\sigma_1, \sigma_2, \sigma_0, \tau_1, \tau_2, \tau_0\}$$
where
$$\tau_{1,2}:= {\bold{R}}^2_{\geq 0}, \;\; \tau_1:={\bold{R}}_{\geq 0}\times \{0\}, \;\; \tau_2:= \{0\}\times {\bold{R}}_{\geq 0}, \;\;\tau_0:= \{(0,0)\}, \;\; \sigma_0:= \{(x,x)\;|\; x\in {\bold{R}}_{\geq 0}\}.$$
So $\Sigma$ is the set of all faces of $\tau_{1,2}$, and $\text{face}(\sigma_j)=\{\sigma_j, \tau_j, \sigma_0, \tau_0\}$ for $j=1,2$.

The image of $x=(p, Z)\in D^{\star}_{{\rm{SL}}(2)}$ in $\Sigma$ is $\tau_0$ if and only if $x\in D$, $\tau_1$ if and only if $x\in D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}\smallsetminus D$, $\tau_2$ if and only if $x$ is a $B$-orbit and $p\in D({\rm gr}^W)$, and $\tau_{1,2}$ if and only if $x$ is a $B$-orbit and $p\notin D({\rm gr}^W)$.

We apply \ref{gest8} to describe the log modification $D^{\star,+}_{{\rm{SL}}(2)}$ of $D^{\star}_{{\rm{SL}}(2)}$ as a set. For this, we show that the homomorphism ${\bold{N}}^2\to (M_{\frak D}/\cO_{\frak D}^\times)_x$ $(\frak D:=D^{\star}_{{\rm{SL}}(2)})$ given by $(\beta_{\text{tot}}, \beta_0^{\star})$ in \ref{sl+def} is universally saturated for any $x\in \frak D$. If the image of $x$ in $\Sigma$ is $\tau_0$ or $\tau_1$ (resp.\ $\tau_2$ or $\tau_{1,2}$), this homomorphism has the shape ${\bold{N}}^2\to {\bold{N}}^r\;;\;(a, b) \mapsto (b,\dots, b)$ (resp.\ ${\bold{N}}^2\to {\bold{N}}\times {\bold{N}}^r\;;\; (a,b)\mapsto (a, b,\dots, b)$) for some integer $r\geq 0$, and hence is universally saturated by Proposition \ref{gest5}.

By Lemma \ref{gest8}, we have the following list of points of $D^{\star,+}_{{\rm{SL}}(2)}$.

(1) $(x, \tau_j, Z(x))$ ($x\in D^{\star}_{{\rm{SL}}(2)}$ and the image of $x$ in $\Sigma$ is $\tau_j$). (Here $j=0,1,2$.)

(2) $(x, \sigma_j, Z(x))$ ($x\in D^{\star}_{{\rm{SL}}(2)}$ and the image of $x$ in $\Sigma$ is $\tau_{1,2}$). (Here $j=1,2$.)
(3) $(x, {\sigma}_0, Z')$ ($x=(p, Z)\in D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$, the image of $x$ in $\Sigma$ is $\tau_{1,2}$, and $Z'$ is $\tau_p(A_p)$-orbit in $Z(x)$). Actually, in (3), what Lemma \ref{gest8} directly tells is that a $\tau^{{\rm{st}}ar}_p(T(x,{\sigma}_0))$-orbit $Z'$ in the ${\tilde \tau}_p^{{\rm{st}}ar}(B_p)$-orbit $Z(x)$ appears instead of a $\tau_p(A_p)$-orbit in $Z(x)$. But $\tau^{{\rm{st}}ar}_p(T(x,{\sigma}_0))=\tau_p(A_p)$ inside ${\tilde \tau^{{\rm{st}}ar}_p}(B_p)$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{maps5} We have a map $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}$ defined as follows. (1) $(x, \tau_j, Z)$ ($x=(p, Z)$ with image $\tau_j$ in $\Sigma$ for $j=0,2$) and ($x, {\sigma}_2, Z)$ $(x=(p, Z)$ with image $\tau_{1,2}$ in $\Sigma$) are sent to $(p, Z)\in D_{{\rm{SL}}(2)}$. (2) $(x, \tau_1, Z)$ ($x=(p, Z)$ with image $\tau_1$ in $\Sigma$) and $(x, {\sigma}_1, Z)$ ($x=(p, Z)$ with image $\tau_{1,2}$ in $\Sigma$) are sent to $(p, Z_{{\rm{spl}}})\in D_{{\rm{SL}}(2)}$. Here $Z_{{\rm{spl}}}=\{F_{{\rm{spl}}}\;|\; F\in Z\}$ where $F_{{\rm{spl}}}$ is as in \ref{II,1.2.3}. (3) $(x, {\sigma}_0, Z')$ ($x=(p, Z)$ with image $\tau_{1,2}$ in $\Sigma$ and $Z'$ is a $\tau_p(A_p)$-orbit inside $Z$) is sent to $(p, Z')\in D_{{\rm{SL}}(2)}$. \end{sbpara} \bold egin{sbthm}{\lambda}bel{0thm} (1) The identity map of $D$ extends uniquely to a morphism $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ in ${\cal {B}}'_{\bold{R}}(\log)$. Its underlying map of sets is the map in \ref{maps5}. This map is proper and surjective. (2) Let $U$ be the open set $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl}\cup D$ of $D^{II}_{{\rm{SL}}(2)}$. Then the inverse image of $U$ in $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$ coincides with the open set $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)$, and the induced morphism $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)\to U$ of ${\cal {B}}'_{\bold{R}}(\log)$ is an isomorphism. \end{sbthm} \bold egin{pf} We prove (1). It is sufficient to prove that the map in \ref{maps5} is a morphism $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ of ${\cal {B}}'_{\bold{R}}(\log)$. For an admissible set of weight filtrations $\Phi$ on ${{\gamma}mma}r^W$, let $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}(\Phi){\subset}set D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$ be the inverse image of $D^{{\rm{st}}ar}_ {{\rm{SL}}(2)}(\Phi){\subset}set D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. It is sufficient to prove that the induced map $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}(\Phi)\to D^{II}_{{\rm{SL}}(2)}(\Phi)$ is a morphism in ${\cal {B}}'_{\bold{R}}(\log)$. Let $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),\operatorname{naive}spl}{\subset}set D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$ be the inverse image of the open set $D^{{\rm{st}}ar}_{{\rm{SL}}(2),\operatorname{naive}spl}$ of $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. Then $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_1)$ is the union of the two open sets $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ (which is embedded in $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$) and $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),\operatorname{naive}spl}\cap D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_1)$, and $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)$ is contained in $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),\operatorname{naive}spl}$. Take a splitting $apha$ of $\Phi$ and a distance $\bold eta$ to $\Phi$-boundary. 
First, the induced map $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}(\Phi)\to D^{II}_{{\rm{SL}}(2)}(\Phi)$ is a morphism in ${\cal {B}}'_{\bold{R}}(\log)$ because this map is embedded in a commutative diagram $$\bold egin{matrix} D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}(\Phi)& \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times {\cal {L}}\\ \downarrow && \downarrow \\ D^{II}_{{\rm{SL}}(2)}(\Phi) & \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times \bar {\cal {L}} \end{matrix}$$ where the horizontal arrows are the maps $\operatorname{naive}u$ in Proposition \ref{staran1} associated to $(apha,\bold eta)$ and the right vertical arrow is the morphism $(p, s, {\delta}ta)\mapsto (p, s, \sum_{w\leq -2} (\prod_{j\in \Phi} \bold eta_j(p))^{-w}{\delta}ta_w)$, and because the structure of $D^{II}_{{\rm{SL}}(2)}(\Phi)$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ is induced from that of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}\times {\rm{spl}}(W)\times \bar {\cal {L}}$ in the sense of \ref{embstr}. Next we consider the induced map $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi)\to D^{II}_{{\rm{SL}}(2)}(\Phi)$. Take a closed real analytic subset ${\cal {L}}^{(1)}$ of ${\cal {L}}\smallsetminus \{0\}$ such that ${\bold{R}}_{>0}\times {\cal {L}}^{(1)}\to {\cal {L}}\smallsetminus \{0\}\;;\;(a,{\delta}ta)\mapsto a \circ {\delta}ta$ is an isomorphism, and consider the induced isomorphism ${\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)}\overset{\cong}\to \bar {\cal {L}} \smallsetminus \{0\}$. Let $\bold eta_0^{{\rm{st}}ar}: D^{{\rm{st}}ar}_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi)\to {\bold{R}}_{{{\gamma}mma}eq 0}$ be the composition $D^{{\rm{st}}ar}_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi) \to \bar {\cal {L}}\smallsetminus \{0\} \cong {\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)} \to {\bold{R}}_{{{\gamma}mma}eq 0}$ where the first arrow is induced by the map $\operatorname{naive}u$ in Proposition \ref{staran1} associated to $(apha,\bold eta)$. For $j=1,2$, let $U_j:= D^{{\rm{st}}ar,+}_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi)\cap D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_j)$. Then when we regard $\bold eta_j$ $(j\in \Phi)$ and $\bold eta_0^{{\rm{st}}ar}$ as sections of $M_{U_j}$, then in $M^{{{\gamma}mma}p}_{U_j}$, $(\prod_{j\in \Phi}\bold eta_j)/\bold eta_0^{{\rm{st}}ar}$ belongs to $M_{U_1}$ and $\bold eta_0^{{\rm{st}}ar}/\prod_{j\in \Phi} \bold eta_j$ belongs to $M_{U_2}$. Furthermore, $\bold eta_0^{{\rm{st}}ar}/\prod_{j\in \Phi} \bold eta_j$ on $U_2$ is the pull back of the section $\bold eta_0$ of the log structure of $D_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi)$ which is defined as the composition $D_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi) \to \bar {\cal {L}}\smallsetminus \{0\} \cong {\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)}\to {\bold{R}}_{{{\gamma}mma}eq 0}$ where the first arrow is induced by $\operatorname{naive}u$ of Proposition \ref{staran1} associated to $(apha,\bold eta)$. 
The induced maps $U_j\to D^{II}_{{\rm{SL}}(2)}(\Phi)$ for $j=1,2$ are morphisms because they are embedded in the commutative diagrams $$\bold egin{matrix} U_1 & \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times ({\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)})\times {\bold{R}}_{{{\gamma}mma}eq 0}\\ \downarrow && \downarrow \\ D^{II}_{{\rm{SL}}(2)}(\Phi) & \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times \bar {\cal {L}}, \end{matrix}$$ $$\bold egin{matrix} U_2 & \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times {\cal {L}}^{(1)}\times {\bold{R}}_{{{\gamma}mma}eq 0}\\ \downarrow && \Vert \\ D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl}(\Phi) & \overset{{\subset}set}\to & D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}(\Phi) \times {\rm{spl}}(W) \times {\cal {L}}^{(1)}\times {\bold{R}}_{{{\gamma}mma}eq 0}. \end{matrix}$$ Here in both diagrams, the lower horizontal arrows are induced by $\operatorname{naive}u$ in Proposition \ref{staran1} associated to $(apha,\bold eta)$ and the isomorphism $\bar {\cal {L}} \smallsetminus \{0\} \cong {\cal {L}}^{(1)}\times {\bold{R}}_{{{\gamma}mma}eq 0}$. In the first diagram, the part $U_1\to {\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)}$ in the upper row is the composition $U_1\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),\operatorname{naive}spl}\to \bar {\cal {L}}\smallsetminus \{0\} \cong {\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)}$, the map from $U_1$ to the last ${\bold{R}}_{{{\gamma}mma}eq 0}$ in the upper row is $(\prod_{j\in \Phi} \bold eta_j)/\bold eta_0^{{\rm{st}}ar}$, and the right vertical arrow is $(p, s, t, {\delta}ta, t')\mapsto (p,s, \sum_{w\leq -2} (tt')^{-w}{\delta}ta_w)$. In the second diagram, the part $U_2\to {\cal {L}}^{(1)}$ in the upper row is the composition $U_2\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),\operatorname{naive}spl}\to \bar {\cal {L}}\smallsetminus \{0\} \cong {\bold{R}}_{{{\gamma}mma}eq 0}\times {\cal {L}}^{(1)}\to {\cal {L}}^{(1)}$, the map $U_2\to {\bold{R}}_{{{\gamma}mma}eq 0}$ in the upper row is $\bold eta_0^{{\rm{st}}ar}/\prod_{j\in \Phi} \bold eta_j$, and the right vertical arrow is the identity map. The surjectivity of $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ is easily seen. The map is proper because $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$ and $D^{II}_{{\rm{SL}}(2)}$ are proper over $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W)$. This completes the proof of (1). We prove (2). It is easy to check that the inverse image of $U$ in $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}$ is $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)$, and that the map $D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)\to U$ is bijective. Hence for the proof of (2), it is sufficient to prove that the converse map $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl}\to D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}_2)$ is a morphism in ${\cal {B}}'_{\bold{R}}(\log)$. This is a morphism as is seen from the above last commutative diagram. (In the upper row of this diagram, the structure of the space of $U_2$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ is induced from that of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}\times {\rm{spl}}(W) \times {\cal {L}}^{(1)}\times {\bold{R}}_{{{\gamma}mma}eq 0}$ in the sense of \ref{embstr}.) 
\end{pf} \bold egin{sbpara}{\lambda}bel{lam} In the next Proposition \ref{twoSL2}, we consider when the identity map of $D$ extends to an isomorphism $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\cong D^{II}_{{\rm{SL}}(2)}$ in ${\cal {B}}'_{\bold{R}}(\log)$. Let ${{\lambda}mbda}bda: D_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ be the map which coincides on $D_{{\rm{SL}}(2),\operatorname{naive}spl}\cup D$ with the composition of morphisms $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl}\cup D\cong D^{{\rm{st}}ar,+}_{{\rm{SL}}(2)}({\sigma}ma_2) \to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ in ${\cal {B}}'_{\bold{R}}(\log)$ and which coincides on $D_{{\rm{SL}}(2),{\rm{spl}}}:=\{(p,Z)\in D_{{\rm{SL}}(2)}\; |\; Z{\subset}set D_{{\rm{spl}}}\}$ with the composition of two isomorphisms $D_{{\rm{SL}}(2),{\rm{spl}}}\cong D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W)\cong D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\rm{spl}}}:= \{(p, Z)\in D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\;|\;Z{\subset}set D_{{\rm{spl}}}\}$ in ${\cal {B}}'_{\bold{R}}(\log)$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{twoSL2} The following conditions {{\rm (i)}}--{{\rm (vii)}} are equivalent. {{\rm (i)}} Either $D=D_{{\rm{spl}}}$ or $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)=D({{\gamma}mma}r^W)$. {{\rm (ii)}} The identify map of $D$ extends to an isomorphism $D^{II}_{{\rm{SL}}(2)}\cong D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ in ${\cal {B}}'_{\bold{R}}(\log)$. {{\rm (iii)}} The identity map of $D$ extends to a homeomorphism $D^{II}_{{\rm{SL}}(2)}\cong D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. {{\rm (iv)}} The map ${{\lambda}mbda}bda : D^I_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ (\ref{lam}) is continuous. {{\rm (v)}} The identity map of $D$ extends to a continuous map $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$. {{\rm (vi)}} The map $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}$ is injective. {{\rm (vii)}} The map $D_{{\rm{SL}}(2),\operatorname{naive}spl} \to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ is injective. \end{sbprop} \bold egin{pf} (i) ${\bold{R}}ightarrow$ (ii). If ${\cal {L}}(F)=0$ (1.2.2) for any $F\in D({{\gamma}mma}r^W)$, the isomorphism in \ref{grsd} extends to isomorphisms from $D^{II}_{{\rm{SL}}(2)}$ and $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ onto $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W)$ in ${\cal {B}}'_{\bold{R}}(\log)$. If $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)= D({{\gamma}mma}r^W)$, the isomorphism in \ref{grsd} extends to isomorphisms from $D^{II}_{{\rm{SL}}(2)}$ and $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ onto $\{(F, s, {\delta}ta)\in D({{\gamma}mma}r^W) \times {\rm{spl}}(W) \times \bar {\cal {L}}\;|\; {\delta}ta\in {\bar {\cal {L}}}(F)\}$ in ${\cal {B}}'_{\bold{R}}(\log)$. (ii) ${\bold{R}}ightarrow$ (iii). Clear. (iii) ${\bold{R}}ightarrow $ (iv), (v), and (vi). Clear. (v) ${\bold{R}}ightarrow$ (vii). If (v) is satisfied, the composition $D^{II}_{{\rm{SL}}(2), \operatorname{naive}spl}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ will be the inclusion map. We prove (iv) ${\bold{R}}ightarrow$ (i), (vi) ${\bold{R}}ightarrow$ (i), and (vii) ${\bold{R}}ightarrow$ (i). In the rest of this proof, assume $D\operatorname{naive}eq D_{{\rm{spl}}}$ and $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W) \operatorname{naive}eq D({{\gamma}mma}r^W)$. That is, assume (i) does not hold. Then there is $x= (p, Z)\in D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ with $p$ of rank $1$ such that $Z{\subset}set D_{\operatorname{naive}spl}$. Let $x_{{\rm{spl}}}:= (p, Z_{{\rm{spl}}})\in D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$. 
We have $x\operatorname{naive}eq x_{{\rm{spl}}}$. We prove (iv) ${\bold{R}}ightarrow$ (i). Take ${{\bold r}}\in Z$. Then when $t\in {\bold{R}}_{>0}$ tends to $0$, $\tau^{{\rm{st}}ar}_p(t){{\bold r}}$ converges to $x$ and $\tau^{{\rm{st}}ar}_p(t){{\bold r}}_{{\rm{spl}}}$ converges to $x_{{\rm{spl}}}$ in $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. {\bf Claim.} Let $y= (p, Z_{{\rm{spl}}})\in D_{{\rm{SL}}(2)}$. Then when $t\in {\bold{R}}_{>0}$ converges to $0$, $\tau^{{\rm{st}}ar}_p(t){{\bold r}}$ and $\tau^{{\rm{st}}ar}_p(t){{\bold r}}_{{\rm{spl}}}$ converge to $y$ in $D^I_{{\rm{SL}}(2)}$. We prove Claim. Let $s:={\rm{spl}}_W(p)={\rm{spl}}_W({{\bold r}})$. By Part II, Proposition 3.2.12, it is sufficient to prove that when $t\in {\bold{R}}_{>0}$ tends to $0$, $(s \tau_p(t)s^{-1})^{-1}(s\tau^{{\rm{st}}ar}_p(t)s^{-1}){{\bold r}}$ and $(s\tau_p(t)s^{-1})^{-1}(s\tau^{{\rm{st}}ar}_p(t)s^{-1}){{\bold r}}_{{\rm{spl}}}$ converge to ${{\bold r}}_{{\rm{spl}}}$. The former is equal to $s (t^{-w})_w s^{-1}{{\bold r}}$ and hence converges to ${{\bold r}}_{{\rm{spl}}}$. Here $(t^{-w})_w$ denotes the linear automorphism of ${{\gamma}mma}r^W=\prod_w {{\gamma}mma}r^W_w$ which acts on ${{\gamma}mma}r^W_w$ as the multiplication by $t^{-w}$. The latter is equal to ${{\bold r}}_{{\rm{spl}}}$. This proves Claim. By Claim, if the continuous map $D^I_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ exists, it should send $y$ to $x$ and also to $x_{{\rm{spl}}}\operatorname{naive}eq x$. A contradiction. We prove (vi) ${\bold{R}}ightarrow $ (i). The elements $x$ and $x_{{\rm{spl}}}$ of $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ have the same image $(p, Z_{{\rm{spl}}}) \in D^{II}_{{\rm{SL}}(2)}$. Hence the map $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}$ is not injective. We prove (vii) ${\bold{R}}ightarrow$ (i). Take ${{\bold r}} \in Z$. Take $a\in {\bold{R}}_{>0}\smallsetminus \{1\}$ and let ${{\bold r}}'=a\circ {{\bold r}}$. Then the elements of $D_{{\rm{SL}}(2),\operatorname{naive}spl}$ of the forms $(p, \tau_p({\bold{R}}_{>0}){{\bold r}})$ and $(p, \tau_p({\bold{R}}_{>0}){{\bold r}}')$ for the lifted action (\ref{liftac}) are different but they have the same image $(p, {\bold{R}}_{>0}\circ Z)$ in $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$. Hence the map $D_{{\rm{SL}}(2),\operatorname{naive}spl}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ is not injective. \end{pf} {\subset}section{Relations with $D_{{\rm {BS}}}$} We connect the spaces $D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$ and $D_{{\rm {BS}}}$ by introducing a new space $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits. \bold egin{sbpara} For $Q=(Q(w))_w \in \prod_{w\in {\bold{Z}}} {\cal {W}}({{\gamma}mma}r^W_w)$, let $$G_{\bold{R}}({{\gamma}mma}r^W)_{Q}:= \prod_w G_{\bold{R}}({{\gamma}mma}r^W_w)_{Q(w)}\quad\text{where}$$ $$G_{\bold{R}}({{\gamma}mma}r^W_w)_{Q(w)}:=\{g\in G_{\bold{R}}({{\gamma}mma}r^W_w)\;|\; gW'=W'\;\text{for all}\; W'\in Q(w)\}.$$ Let $G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}$ be the unipotent radical of $G_{\bold{R}}({{\gamma}mma}r^W)_Q$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{SB1} Let $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$. We define a set ${\Cal {P}}(p)$ of ${\bold{Q}}$-parabolic subgroups of $G_{\bold{R}}({{\gamma}mma}r^W)$. Let $X(S_p)$ be the character group of the torus $S_p$ (\ref{sim5}) associated to $p$. 
For $\chi\in X(S_p)$, let $$\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}=\{v\in \fg_{\bold{R}}({{\gamma}mma}r^W)\; |\; {\rm{Ad}}(\tau_p^{{\rm{st}}ar}(t))v= \chi(t)v \;\text{for all}\; t\in S_p\}.$$ Let ${\Cal {P}}(p)$ be the set of all ${\bold{Q}}$-parabolic subgroups $P$ of $G_{\bold{R}}({{\gamma}mma}r^W)$ satisfying the following conditions (i) and (ii). (i) $P\supset G_{\bold{R}}({{\gamma}mma}r^W)_Q$ and $P_u\supset G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}$, where $Q= ({\cal {W}}(p_w))_w$. (ii) There is a subset $I$ of $X(S_p)$ such that ${\rm {Lie}}(P)= \sum_{\chi\in I} \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{2.6.3} We define $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ as a set. $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$ is the set of all triples $(p, P, Z)$, where $p\in D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$, $P\in {\Cal {P}}(p)$, and $Z{\subset}set D$ satisfying the following conditions (i) and (ii). Let $A_{p,P}{\subset}set A_p$ be the inverse image of $A_P{\subset}set P/P_u$ under the composite map $A_p \to G_{\bold{R}}({{\gamma}mma}r^W)_Q/G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}\to P/P_u$. Let $B_{p,P}= {\bold{R}}_{>0}\times A_{p,P}{\subset}set B_p$. (i) $Z$ is either an $\tau^{{\rm{st}}ar}_p(A_{p.P})$-orbit in $D$ or a $\tilde \tau^{{\rm{st}}ar}_p(B_{p,P})$-orbit in $D_{\operatorname{naive}spl}$. (ii) The image of $Z$ in $D({{\gamma}mma}r^W)$ is contained in the torus orbit $Z(p)$. For $w\in {\bold{Z}}$, we denote by $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)^{{\rm {BS}}}$ the set $D^{{\rm{st}}ar, {\rm {BS}}}_{{\rm{SL}}(2)}$ for ${{\gamma}mma}r^W_w$. Let $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}:=\prod_w D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)^{{\rm {BS}}}$. We have an evident map $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{SB2} (1) We have a canonical map $$D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}} \to D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}\;;\; (p,P,Z)\mapsto (p, \tau_p^{{\rm{st}}ar}(A_p)Z).$$ (2) We have a map $$D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}\to D_{{\rm {BS}}}\;;\; (p, P, Z) \mapsto (P, A_P\circ Z).$$ Here $\circ$ denotes the Borel-Serre action with respect to $P$. \end{sbprop} \bold egin{pf} (1) is clear. We prove (2). It is sufficient to prove that, for ${{\bold r}}\in Z$ and $t\in A_{p,P}$, we have $\tau_p^{{\rm{st}}ar}(t){{\bold r}} = (\tau^{{\rm{st}}ar}_p(t) \bmod P_u)\circ {{\bold r}}$. This follows from $\theta_{K_{{{\bold r}}}}(\tau^{{\rm{st}}ar}_p(t))= \tau^{{\rm{st}}ar}_p(t)^{-1}$ (\cite{KU0}, Lemma 3.8) where $\theta_{K_{{{\bold r}}}}$ denotes the Cartan involution $G_{\bold{R}}({{\gamma}mma}r^W)\to G_{\bold{R}}({{\gamma}mma}r^W)$ associated to the maximal compact subgroup $K_{{{\bold r}}}$ of $G_{\bold{R}}({{\gamma}mma}r^W)$. \end{pf} We give $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ a structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$. The following \ref{Pcone}--\ref{Pcone6} are preparations. \bold egin{sblem}{\lambda}bel{Pcone} Let $L$ and $N$ be as in \ref{rvtoric}. Let $R$ be a finite subset of $L$ such that $R^{-1}=R$ and such that the ${\bold{Q}}$-vector space ${\bold{Q}}\otimes L$ is generated by $R$. $(1)$ Let ${\sigma}$ be a rational finitely generated sharp cone in $N_{\bold{R}}$ and let $\cS({\sigma})=\{l\in L\;|\;h(l) {{\gamma}mma}eq 0\;\text{for all}\;h\in {\sigma}\}$ be the corresponding fs submonoid of $L$ such that $\cS({\sigma})^{{{\gamma}mma}p}=L$. 
Then ${\sigma}$ satisfies the following condition $(i)$ if and only if $\cS({\sigma})$ satisfies the following conditions $(ii.1)$ and $(ii.2)$. $(i)$ There exists a subset $R'$ of $R$ such that $R=R'\cup (R')^{-1}$ and such that $${\sigma}ma=\{h\in N_{\bold{R}}\;|\; h(l) {{\gamma}mma}eq 0 \;\text{for all}\; l\in R'\}.$$ $(ii.1)$ $R{\subset}set \cS({\sigma})\cup \cS({\sigma})^{-1}$. $(ii.2)$ For any $l\in \cS({\sigma})$, there is an integer $n{{\gamma}mma}eq 1$ such that $l^n$ belongs to the submonoid of $L$ generated by $\cS({\sigma})\cap R$. $(2)$ The set of all ${\sigma}$ satisfying the condition $(i)$ in (1) is a rational fan whose support is the whole $N_{\bold{R}}$. $(3)$ Assume that we are given a subset $R^+$ of $R$ which generates ${\bold{Q}}\otimes L$ over ${\bold{Q}}$. Let $\operatorname{naive}u:= \{h\in N_{{\bold{R}}}\; |\; h(R^+){\subset}set {\bold{R}}_{{{\gamma}mma}eq 0}\}$. Then ${\sigma}$ as above such that ${\sigma}{\subset}set \operatorname{naive}u$ form a rational finite subdivision of $\operatorname{naive}u$. \end{sblem} \bold egin{pf} The proof of (1) is straightforwards. We prove (2). Let $I$ be the set of all cones ${\sigma}$ satisfying the condition (i) in (1). We first prove that $I$ is a fan. We prove that if ${\sigma}_j\in I$ $(j=1,2)$, ${\sigma}_1\cap {\sigma}_2$ is a face of ${\sigma}_1$. Let $R'_j{\subset}set R$ and assume $R=R'_j\cup (R'_j)^{-1}$, ${\sigma}_j= \{h\in N_{\bold{R}}\;|\; h(l){{\gamma}mma}eq 0 \;\text{for all}\; l\in R'_j\}$. Let $R'=R'_1\cup R'_2$. Then ${\sigma}_1\cap {\sigma}_2=\{h\in N_{\bold{R}}\;|\; h(l){{\gamma}mma}eq 0\; \text{for all}\; l\in R'\}$. Since $R'\smallsetminus R'_1{\subset}set (R'_1)^{-1}$, ${\sigma}_1\cap {\sigma}_2$ is a face of ${\sigma}_1$. We prove that if ${\sigma}\in I$, any face $\tau$ of ${\sigma}$ belongs to $I$. Since $\tau$ is a face of ${\sigma}$, we have $\cS(\tau)=\cS({\sigma})[b^{-1}] = \{ab^{-n}\;|\; a\in \cS({\sigma}), n{{\gamma}mma}eq 0\}$ for some $b\in \cS({\sigma})$. By the condition (ii.2) in (1) for $\cS({\sigma})$, there exists $n{{\gamma}mma}eq 1$, $a_1, \dots, a_r\in\cS({\sigma}) \cap R$ and $m(j){{\gamma}mma}eq 1$ $(1\leq j\leq r)$ such that $b^n = \prod_{j=1}^r a_j^{m(j)}$. We have $\cS(\tau)=\cS({\sigma})[1/\prod_{j=1}^r a_j]$. For the set $R'{\subset}set R$ such that $R=R'\cup (R')^{-1}$ and ${\sigma}= \{h\in N_{\bold{R}}\;|\; h(l){{\gamma}mma}eq 0\;\text{for all}\; l\in R'\}$, we have $\tau=\{h\in N_{{\bold{R}}}\;|\; h(l){{\gamma}mma}eq 0\;\text{for all}\;l\in R'\cup\{a_1^{-1},\dots, a_r^{-1}\}\}$. Hence $\tau \in I$. These prove that $I$ is a fan. We show that $\bigcup_{{\sigma} \in I} {\sigma}=N_{\bold{R}}$. Let $h\in N_{\bold{R}}$. Let $R'=\{l\in R\;|\; h(l){{\gamma}mma}eq 0\}$. Then $R=R'\cup (R')^{-1}$. For ${\sigma}:=\{h'\;|\; h'(l) {{\gamma}mma}eq 0\;\text{for all}\; l\in R'\}\in I$, we have $h\in {\sigma}$. These completes the proof of (2). We prove (3). By (2), we have $\operatorname{naive}u=\bigcup_{{\sigma}\in I} ({\sigma} \cap \operatorname{naive}u)$. It is sufficient to prove that ${\sigma} \cap \operatorname{naive}u \in I$ for any ${\sigma} \in I$. For $R'{\subset}set R$ such that $R=R'\cup (R')^{-1}$ and ${\sigma}=\{h\in N_{\bold{R}}\;|\; h(l){{\gamma}mma}eq 0\;\text{for all}\; l\in R'\}$, we have ${\sigma}\cap \operatorname{naive}u= \{h\in N_{\bold{R}}\;|\; h(l){{\gamma}mma}eq 0 \;\text{for all}\; l\in R' \cup R^+\}\in I$. \end{pf} \bold egin{sbpara}{\lambda}bel{Pcone2} Let $Q=(Q(w))_w\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$. 
Let $L$ be the character group of $\prod_{w\in {\bold{Z}}} {\bold{G}}_m^{Q(w)}$ and let $N=\Hom(L, {\bold{Z}})$. We have the situation of \ref{rvtoric}. As in \ref{rvtoric}, we denote the group law of $L$ multiplicatively, though $L$ is identified with $\prod_w {\bold{Z}}^{Q(w)}$. Let ${\Cal {P}}(Q)$ be the set of all ${\bold{Q}}$-parabolic subgroups $P$ of $G_{\bold{R}}({{\gamma}mma}r^W)$ satisfying the following conditions (i) and (ii). (i) $P\supset G_{\bold{R}}({{\gamma}mma}r^W)_Q$. (ii) Take a splitting $apha=(apha_w)_w$ of $Q$. For $\chi\in L$, let $\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}$ be the part of $\fg_{\bold{R}}({{\gamma}mma}r^W)$ on which the adjoint action of $\prod_w {\bold{G}}_m^{Q(w)}$ via $apha^{{\rm{st}}ar}$ is given by $\chi$. Then there is a subset $I$ of $L$ such that ${\rm {Lie}}(P)= \sum_{\chi\in I} \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}$. Under the condition (i), the condition (ii) is independent of the choice of $apha$. This is because if $apha'$ is another splitting of $Q$, $apha'(t)=gapha(t)g^{-1}$ for some $g\in G_{\bold{R}}({{\gamma}mma}r^W)_Q{\subset}set P$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Pcone2.2} Let the notation be as in \ref{Pcone2}. Taking a splitting $apha$ of $Q$, define a subset $$R(Q)=\{\chi\in L\;|\;\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}\operatorname{naive}eq 0\}$$ where $\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}$ is defined with respect to $apha$. This set is independent of the choice of $apha$ because all splittings of $Q$ are conjugate by elements of $G_{\bold{R}}({{\gamma}mma}r^W)_Q$. Let $L^+=\prod_w {\bold{N}}^{Q(w)}{\subset}set \prod_w {\bold{Z}}^{Q(w)}=L$. We will apply \ref{Pcone} by taking $R(Q)$ and $R(Q)\cap L^+$ as $R$ and $R^+$, respectively. We show that $R^+$ generates the ${\bold{Q}}$-vector space ${\bold{Q}}\otimes L$, as is assumed in \ref{Pcone} (3). Let $w\in {\bold{Z}}$ and take $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)$ such that $Q(w)= {\cal {W}}(p_w)$. Let $n$ be the rank of $p$, take a representative of $p$ and let $N_1,\dots, N_n\in \fg_{\bold{R}}({{\gamma}mma}r^W_w)$ be the monodromy logarithms of the representaive, and identify $Q(w)$ with $\{1,\dots, n\}$ (\ref{simst2}). Then ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p(t))N_j= t_j^{-2}N_j$. Hence $R(Q(w))^+$ generates the ${\bold{Q}}$-vector space ${\bold{Q}}^{Q(w)}$. Hence $R(Q)^+$ generates the ${\bold{Q}}$-vector space ${\bold{Q}}\otimes L=\prod_w {\bold{Q}}^{Q(w)}$. Let ${\Cal {P}}'(Q)$ be the set of all rational finitely generated sharp cones ${\sigma}$ in $N_{\bold{R}}$ satisfying the following conditions $(i)$ and $(ii)$. $(i)$ There is a subset $R'$ of $R(Q)$ such that $R(Q)=R'\cup (R')^{-1}$ and such that ${\sigma}=\{h\in N_{\bold{R}}\;|\; h(\chi){{\gamma}mma}eq 0\;\text{for all}\;\chi\in R'\}$. $(ii)$ ${\sigma} {\subset}set \prod_w {\bold{R}}_{{{\gamma}mma}eq 0}^{Q(w)}$ in $N_{\bold{R}}=\prod_w {\bold{R}}^{Q(w)}$. That is, ${\Cal {P}}'(Q)$ is the set of ${\sigma}$ considered in \ref{Pcone} (3). Hence ${\Cal {P}}'(Q)$ is a rational fan in $N_{\bold{R}}$ and is a rational finite subdivision of the cone $\prod_w {\bold{R}}_{{{\gamma}mma}eq 0}^{Q(w)}{\subset}set N_{\bold{R}}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Pcone2.5} Let the notation be as in \ref{Pcone2} and \ref{Pcone2.2}. We have ${\Cal {P}}(Q) = \prod_w {\Cal {P}}(Q(w))$ where the element $(P_w)_w$ of the left hand side corresponds to the element $\prod_w P_w$ of the right-hand side. We have $R(Q)=\prod_w R(Q(w))$ in $X(\prod_w {\bold{G}}_m^{Q(w)})= \prod_w X({\bold{G}}_m^{Q(w)})$. 
We have ${\Cal {P}}'(Q)= \prod_w {\Cal {P}}'(Q(w))$ where the element $({\sigma}_w)_w$ of the left hand side corresponds to the element $\prod_w {\sigma}_w$ of the right-hand side. \end{sbpara} \bold egin{sbprop}{\lambda}bel{Pcone3} Let the notation be as in \ref{Pcone2} and \ref{Pcone2.2}. For $P\in {\Cal {P}}(Q)$, let $${\sigma}_P=\{h\in N_{\bold{R}}\;|\;h(\chi){{\gamma}mma}eq 0\; \text{for all $\chi\in R(Q)$ such that $\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi^{-1}}{\subset}set {\rm {Lie}}(P)$}\}.$$ Then ${\sigma}_P\in {\Cal {P}}'(Q)$ and we have a bijection $${\Cal {P}}(Q)\to {\Cal {P}}'(Q)\;;\;P\mapsto {\sigma}_P.$$ \end{sbprop} \bold egin{pf} By \ref{Pcone2.5} and by the fact ${\sigma}_P= \prod_w {\sigma}_{P_w}$, we can (and do) assume that we are in the pure situation of weight $w$. We denote $Q(w)$ by $Q$. Take $p\in D_{{\rm{SL}}(2)}$ such that ${\cal {W}}(p)=Q$, and take $\tau_p$ as a splitting $apha$ of $Q$. Let $n=\sharp(Q)$ be the rank of $p$. Let $N_1, \dots, N_n$ be the monodromy logarithms of $p$. We identify $Q$ with $\{1,\dots, n\}$. We prove that ${\sigma}_P\in {\Cal {P}}'(Q)$ for $P\in {\Cal {P}}(Q)$. Let $R'=\{\chi\in L\;|\; {\rm {Lie}}(P)\cap {\rm {Lie}}(G_{\bold{R}})_{\chi^{-1}}\operatorname{naive}eq 0\}$. Since $apha^{{\rm{st}}ar}(\prod_w {\bold{G}}_m^{Q(w)}){\subset}set P$ and since $P$ is parabolic, we have $R(Q)=R'\cup (R')^{-1}$. By the property (ii) of $P$ in \ref{Pcone2}, we have ${\rm {Lie}}(G_{\bold{R}})_{\chi^{-1}}{\subset}set {\rm {Lie}}(P)$ for $\chi\in R'$. Hence ${\sigma}_P= \{h\in N_{\bold{R}}\;|\; h(\chi){{\gamma}mma}eq 0\;\text{for all} \; \chi\in R'\}$. It remains to prove that ${\sigma}_P {\subset}set {\bold{R}}^Q_{{{\gamma}mma}eq 0}$ in $N_{\bold{R}}={\bold{R}}^Q$. Since $N_j\in {\rm {Lie}}(P)$ and ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p(t))(N_j)= t_j^{-2}N_j$ $(1\leq j\leq n)$, for any $\chi\in L^+$, $\chi^2$ is contained in the submonoid of $L$ generated by $R'$. This proves that $h(\chi){{\gamma}mma}eq 0$ for any $h\in {\sigma}_P$ and $\chi\in L^+$. This implies ${\sigma}_P{\subset}set {\bold{R}}^Q_{{{\gamma}mma}eq 0}$. Thus we have a map ${\Cal {P}}(Q)\to {\Cal {P}}'(Q)$. Next we define a map ${\Cal {P}}'(Q) \to {\Cal {P}}(Q)$. Let ${\sigma}\in {\Cal {P}}'(Q)$ and let $\cS({\sigma}){\subset}set L$ be the corresponding fs submonoid of $L$. For $\chi\in L$, let $V[\chi]{\subset}set H_{0,{\bold{R}}}$ be the sum of the $\chi'$-components $(H_{0,{\bold{R}}})_{\chi'}$ of $H_{0,{\bold{R}}}$ for all $\chi'\in L$ such that $\chi(\chi')^{-1}\in \cS({\sigma})$. For $\chi, \chi'\in L$, we have $V[\chi]\supset V[\chi']$ if and only if $\chi(\chi')^{-1}\in \cS({\sigma})$. Let $P$ be the algebraic subgroup of $G_{\bold{R}}$ consisting of all elements which preserve $V[\chi]$ for all $\chi\in L$. We prove $P\in {\Cal {P}}(Q)$. Since $L=\cS({\sigma})\cup \cS({\sigma})^{-1}$ (\ref{Pcone}), we have either $V[\chi]\supset V[\chi']$ or $V[\chi]{\subset}set V[\chi']$. As in \cite{KU0} 2.7, this totally ordered property of the set $\{V[\chi]\;|\;\chi\in L\}$ shows that $P$ is a parabolic subgroup of $G_{\bold{R}}$. We show that $P$ is defined over ${\bold{Q}}$. For $\chi\in L$, let $U[\chi]= \sum_{\chi'} (H_{0,{\bold{R}}})_{\chi'}$ where $\chi'$ ranges over all elements of $L$ such that $\chi(\chi')^{-1}\in L^+$. Then $U[\chi]= \bigcap_{W'\in Q} W'_{m(W')}$ where $m(W')\in {\bold{Z}}$ is the $W'$- component of $\chi\in L={\bold{Z}}^Q$. Since $W'$ are rational, $U[\chi]$ is rational. 
Since $V[\chi]$ is the sum of $U[\chi']$ for all $\chi'$ such that $\chi(\chi')^{-1}\in \cS({\sigma})$, $V[\chi]$ is also rational. Hence $P$ is rational. The properties (i) and (ii) of $P$ in \ref{Pcone2} are checked easily. As is easily seen, the maps ${\Cal {P}}(Q) \to {\Cal {P}}'(Q)$ and ${\Cal {P}}'(Q)\to {\Cal {P}}(Q)$ are the inverses of each other. \end{pf} \bold egin{sbpara}{\lambda}bel{Pcone4} Let the notation be as in Proposition \ref{Pcone3}. Via the bijection in Proposition \ref{Pcone3}, we identify the fan ${\Cal {P}}'(Q)$ with the set ${\Cal {P}}(Q)$ of ${\bold{Q}}$-parabolic subgroups of $G_{\bold{R}}({{\gamma}mma}r^W)$. Let $\Sigma$ be the fan of all faces of the cone $\operatorname{naive}u:=\prod_w {\bold{R}}^{Q(w)}_{{{\gamma}mma}eq 0}{\subset}set N_{\bold{R}}$. By the canonical homomorphism $\cS(\operatorname{naive}u)=L^+=\prod_w {\bold{N}}^{Q(w)}\to M_{\frak E'}/\cO^\times_{\frak E'}$, where $\frak E'=D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)(Q)$ (Proposition \ref{logstalk}), we have a morphism ${\rm {Mor}}(-, D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)(Q)) \to [\Sigma]$. Consider the diagrams $${\rm {Mor}}(-, D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)(Q))\to [\Sigma]\leftarrow [{\Cal {P}}(Q)], \quad D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)\to \Sigma\leftarrow {\Cal {P}}(Q).$$ \end{sbpara} \bold egin{sblem}{\lambda}bel{Pcone5} Let $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)(Q)$. Then ${\Cal {P}}(p){\subset}set {\Cal {P}}(Q)$. For $P\in {\Cal {P}}(Q)$, $P\in {\Cal {P}}(p)$ if and only if the image of $P$ in $\Sigma$ coincides with the image of $p$ in $\Sigma$. \end{sblem} \bold egin{pf} It is clear that ${\Cal {P}}(p){\subset}set {\Cal {P}}(Q)$. To prove the rest, we may assume $Q(w)={\cal {W}}(p_w)$ $(w\in {\bold{Z}})$. It is sufficient to prove that in this case, for $P\in {\Cal {P}}(Q)$, $P_u\supset G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}$ if and only if the image of $P$ under the map ${\Cal {P}}(Q)\to \Sigma$ coincides with the face $\operatorname{naive}u$ of $\operatorname{naive}u$. Let ${\sigma}\in {\Cal {P}}'(Q)$ be the cone in $N_{\bold{R}}$ corresponding to $P$ and let $\cS:=\cS({\sigma})$ be the corresponding fs monoid in $L$. Then the image of ${\sigma}$ in $\Sigma$ is $\operatorname{naive}u$ if and only if $\cS^\times \cap L^+=\{1\}$. By the proof of Proposition \ref{Pcone3}, we have $${\rm {Lie}}(P_u)= \sum_{\chi\in \cS\smallsetminus \cS^\times} \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi^{-1}}, \quad {\rm {Lie}}(G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u})= \sum_{\chi\in L^+\smallsetminus \{1\}} \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi^{-1}}.$$ Hence $P_u\supset G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}$ if $L^+\smallsetminus \{1\}{\subset}set \cS\smallsetminus \cS^{\times}$, i.e., if $L^+\cap \cS^\times =\{1\}$. Let $w\in {\bold{Z}}$ and let $N_1, \dots, N_n\in {\rm {Lie}}(G_{\bold{R}}({{\gamma}mma}r^W_w)_{Q(w),u})$ $(n=\sharp(Q(w)))$ be the monodromy logarithms of $p_w$. If $P_u\supset G_{\bold{R}}({{\gamma}mma}r^W)_{Q,u}$, then $N_j \in {\rm {Lie}}(P_u)$. Since ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p(t))N_j=t_j^{-2}N_j$ $(1\leq j\leq n)$, this proves $L^+\smallsetminus \{1\}{\subset}set \cS\smallsetminus \cS^\times$. \end{pf} \bold egin{sbpara}{\lambda}bel{Pcone6} Let the notation be as in \ref{Pcone4}.
We show that the object of ${\cal {B}}'_{\bold{R}}(\log)$ which represents the fiber product of ${\rm {Mor}}(-,D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}(Q))\to [\Sigma] \leftarrow [{\Cal {P}}(Q)]$ is identified, as a set, with the inverse image $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}(Q)$ of $\frak D':=D_{{\rm{SL}}(2)}^{{\rm{st}}ar,-}(Q){\subset}set \frak D:=D_{{\rm{SL}}(2)}^{{\rm{st}}ar,-}$ in $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ (\ref{SB2}). By Lemma \ref{gest8} and by Lemma \ref{Pcone5}, a point of this fiber product is identified with a triple $(x, P, Z)$ where $x\in \frak D'$, $P\in {\Cal {P}}(p)$, and $Z{\subset}set D$ satisfies the following condition (i). Let $\cS=\cS({\sigma})$ be the fs submonoid of $L$ corresponding to the cone ${\sigma}\in {\Cal {P}}'(Q)$ which corresponds to $P$. Write $x=(p, Z')\in \frak D'$ and define a subgroup $T(x, P)$ of $T(x)= \Hom((M^{{{\gamma}mma}p}_{\frak D}/\cO^\times_{\frak D})_x, {\bold{R}}^{\mult}_{>0})$ as follows. If $x$ is an $A$-orbit, let $T(x,P)= \Hom(L/\cS^\times, {\bold{R}}^{\mult}_{>0}){\subset}set \Hom(L, {\bold{R}}^{\mult}_{>0})=A_p= T(x)$. If $x$ is a $B$-orbit, let $T(x,P)= {\bold{R}}_{>0}\times \Hom(L/\cS^\times, {\bold{R}}^{\mult}_{>0}){\subset}set {\bold{R}}_{>0}\times \Hom(L, {\bold{R}}^{\mult}_{>0})=B_p=T(x)$. (i) $Z$ is a $T(x, P)$-orbit in $Z'$. We prove this by showing the following claim. {\bf Claim.} $T(x, P)= A_{p,P}$ if $x$ is an $A$-orbit and $T(x,P)=B_{p,P}$ if $x$ is a $B$-orbit (\ref{2.6.3}). Let $S_{p,P}= \Hom(L/\cS^\times, {\bold{G}}_m){\subset}set \Hom(L, {\bold{G}}_m)=S_p$. Then $S_{p,P}$ coincides with the part of $S_p$ consisting of all elements whose adjoint action on ${\rm {Lie}}(P/P_u)$ is trivial. That is, $S_{p,P}$ is the inverse image in $S_p$ of the center of $P/P_u$. Since $S_{p,P}$ is ${\bold{Q}}$-split, the image of $S_{p,P}$ in $P/P_u$ is contained in $S_P$. This proves that $A_{p,P}$ coincides with the connected component of $S_{p,P}({\bold{R}})$ containing the unit element. This proves the above Claim. Since $Z'=\tau_p^{{\rm{st}}ar}(A_p)Z$, a triple $(x,P, Z)$ as above corresponds to a point $(p, P, Z)$ of $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(Q)$ (\ref{2.6.3}) in a one-to-one manner. \end{sbpara} \bold egin{sbpara} For $Q\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$, we define the structure of $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}(Q)$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by identifying it as a log modification of $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,-}(Q)$ by \ref{Pcone6}. When $Q$ moves, these structures on $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(Q)$ glue globally to a structure of $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$. For a ${\bold{Q}}$-parabolic subgroup $P$ of $G_{\bold{R}}({{\gamma}mma}r^W)$, let $$D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(P)= \{(p,P', Z)\in D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}\;|\; P'\supset P\}.$$ Then $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(P)$ is an open set of $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$, and when $P$ moves, we have a covering of $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$ by these open sets. \end{sbpara} \bold egin{sbprop}{\lambda}bel{S-B} The diagram $$\bold egin{matrix} D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)} & \to & D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)} \\ \downarrow && \downarrow \\ D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}} &\to &D_{{\rm{SL}}(2)}({{\gamma}mma}r^W) \end{matrix}$$ is cartesian in ${\cal {B}}'_{\bold{R}}(\log)$ and also in the category of topological spaces.
\end{sbprop} \bold egin{pf} This is because $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(Q)$ represents the fiber product of ${\rm {Mor}}(-, D_{{\rm{SL}}(2)}^{{\rm{st}}ar,-}(Q))\to [\Sigma]\leftarrow [{\Cal {P}}(Q)]$ and $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}(Q)$ represents the fiber product of ${\rm {Mor}}(-, D_{{\rm{SL}}(2)}({{\gamma}mma}r^W))\to [\Sigma]\leftarrow [{\Cal {P}}(Q)]$. \end{pf} \bold egin{sbprop} Let $F\in D({{\gamma}mma}r^W)$, $\bar L= \bar {\cal {L}}(F)$. Then $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ is an $\bar L$-bundle over $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}\times {\rm{spl}}(W)$. \end{sbprop} \bold egin{pf} This follows from Proposition \ref{S-B} and the corresponding result for $D^{{\rm{st}}ar,-}_{{\rm{SL}}(2)}$. \end{pf} \bold egin{sbpara} For $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$ and $P\in {\Cal {P}}(p)$, let $S_{p,P}{\subset}set S_p$ be the torus defined in \ref{Pcone6}, let $X(S_{p,P})$ be the character group of $S_{p,P}$, and let $X(S_{p,P})^+=\cS/\cS^\times$, where $\cS:=\cS({\sigma}_P)$ (\ref{Pcone}) with ${\sigma}_P$ the cone corresponding to $P$ (\ref{Pcone3}). Define a real toric variety $\bar A_{p,P}$ by $$\bar A_{p,P}:= \Hom(X(S_{p,P})^+, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})\supset A_{p,P}=\Hom(X(S_{p,P}), {\bold{R}}^{\mult}_{>0}).$$ We have a canonical morphism $$\bar A_{p,P}\to \bar A_p$$ induced from the homomorphism $X(S_p)^+\to X(S_{p,P})^+$ which is induced by the inclusion map $S_{p,P}\to S_p$. \end{sbpara} \bold egin{sblem}{\lambda}bel{lemSB} (1) The homomorphism $X(S_P)\to X(S_{p,P})$ induced by $S_{p,P} \to S_P$ (\ref{Pcone6}) sends $X(S_P)^+$ to $X(S_{p,P})^+$. (2) The map $A_{p, P}\to A_P$ extends uniquely to a morphism $\bar A_{p,P}\to \bar A_P$ in ${\cal {B}}'_{\bold{R}}(\log)$. \end{sblem} \bold egin{pf} We prove (1). As a monoid, $X(S_P)^+$ is generated by $\Delta(P)$ (\ref{BS2}). For $\chi\in \Delta(P)$, $\chi^{-1}$ appears in ${\rm {Lie}}(P)$. Hence the image of $\chi^{-1}$ in $X(S_{p,P})$ appears in ${\rm {Lie}}(P)$. Hence the image of $\chi$ in $X(S_{p,P})$ belongs to $X(S_{p,P})^+$. (2) follows from (1). In fact, the homomorphism $X(S_P)^+\to X(S_{p,P})^+$ in (1) induces the morphism $\Hom(X(S_{p,P})^+, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) \to \Hom(X(S_P)^+, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$, that is, $\bar A_{p,P}\to \bar A_P$. \end{pf} \bold egin{sbpara}{\lambda}bel{lsSB1} In Theorem \ref{lsSB}, we will consider the local structure of $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$, comparing it with the local structure of $D_{{\rm {BS}}}$. Here we make some preparations. We consider the following two situations (bd) and (d). (bd) $\frak D= D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$ and $\frak E=D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}$. (d) $\frak D=D_{{\rm {BS}}}$ and $\frak E = D_{{\rm {BS}}}({{\gamma}mma}r^W)$. Fix $p\in \frak E$ and ${{\bold r}}\in Z(p)$ (\ref{sim5}). In the situation (bd) (resp.\ (d)), fix $P\in {\Cal {P}}(p)$ (resp.\ fix a ${\bold{Q}}$-parabolic subgroup $P$ of $G_{\bold{R}}({{\gamma}mma}r^W)$ such that $p\in \frak E(P)$). Let $R$ be an ${\bold{R}}$-subspace of ${\frak g}_{\bold{R}}({{\gamma}mma}r^W)$ satisfying the following conditions (C1) and (C2). (C1) ${\frak g}_{\bold{R}}({{\gamma}mma}r^W)= {\rm {Lie}}(\tau^{{\rm{st}}ar}(A_{p,P})) \oplus R\oplus {\rm {Lie}}(K_{{{\bold r}}})$ (resp.\ ${\frak g}_{\bold{R}}({{\gamma}mma}r^W)= {\rm {Lie}}((A_P)_{{{\bold r}}}) \oplus R\oplus {\rm {Lie}}(K_{{{\bold r}}})$, where $(A_P)_{{{\bold r}}}$ denotes the Borel-Serre lifting \ref{BS1} of $A_P$ at ${{\bold r}}$).
(C2) $R{\subset}set {\rm {Lie}}(P)$. These conditions on $R$ are similar to those in \ref{thm+2}. Like in \ref{thm+2}, let $S$ be an ${\bold{R}}$-subspace of ${\rm {Lie}}(K_{{{\bold r}}})$ such that ${\rm {Lie}}(K_{{{\bold r}}})={\rm {Lie}}(K'_{{{\bold r}}})\oplus S$. We define an object $Y$ of ${\cal {B}}'_{\bold{R}}(\log)$ as follows. Let $$Y= \bar A_P\times R \times S\;\;\text{in the situation (d)}.$$ In the situation (bd), we define $Y$ as follows. Let $$X= \bar A_{p,P} \times R \times S.$$ Let $Y$ be the subset of $X$ consisting of all elements $(t,f,k)$ satisfying the following conditions (i) and (ii). (i) If $\chi\in X(S_p)$ and $t(\chi_+)=0$, then $f_{\chi}=0$. In other words, if $m(w, j)$ denotes the $(w,j)$-component of $\chi\in X(S_p)=\prod_w {\bold{Z}}^{Q(w)}$, $f_{\chi}=0$ unless $m(w,j)\leq 0$ for any $w\in {\bold{Z}}$ and $j\in J(w)$. Here $\chi_+$, $f_{\chi}$, and $J(w)$ are as in \ref{thm+2}. (ii) $k\in S_J$. Here $S_J$ is as in \ref{thm+2}. Regard $X$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ in the natural way, and regard $Y{\subset}set X$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by \ref{embstr}. Both in the situations (bd) and (d), let $$Y_0=\{(t,f,k)\in Y\;|\; t\in A_{p.P}\}{\subset}set Y.$$ \end{sbpara} \bold egin{sbthm}{\lambda}bel{lsSB} Let the notation be as in \ref{lsSB1}. Consider the situation (bd) $\frak D=D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ and $\frak E=D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}}$ (resp.\ (d) $\frak D= D_{{\rm {BS}}}$ and $\frak E= D_{{\rm {BS}}}({{\gamma}mma}r^W)$). (1) For a sufficiently small open neighborhood $U$ of $(0,0,0)$ in $Y$, there exists a unique open immersion $U\to \frak E$ in ${\cal {B}}'_{\bold{R}}(\log)$ which sends $(t, f, k)\in U\cap Y_0$ to the element $$\exp(f){\tau}^{{\rm{st}}ar}_p(t) \exp(k) {{\bold r}} \quad (\text{resp.}\; t\circ \exp(f)\exp(k){{\bold r}}) $$ of $D({{\gamma}mma}r^W){\subset}set \frak E$. (2) Let $\bar L=\bar {\cal {L}}({{\bold r}})$ and $L={\cal {L}}({{\bold r}})$. Then for a sufficiently small open neighborhood $U$ of $(0,0,0)$ in $Y$, there exists a unique open immersion $U\times {\rm{spl}}(W) \times \bar L\to \frak D$ in ${\cal {B}}'_{\bold{R}}(\log)$ having the following property. It sends $(t, f, k,s,{\delta}ta)\in Y\times {\rm{spl}}(W) \times L$, where $(t,f,k) \in U\cap Y_0$, $s\in {\rm{spl}}(W)$, and ${\delta}ta\in L$, to the element of $D$ (resp.\ to the element $t\circ x$ where $x$ is the element of $D$) whose image in $D({{\gamma}mma}r^W) \times {\rm{spl}}(W) \times {\cal {L}}$ under the isomorphism \ref{grsd} is $$(\exp(f){\tau}^{{\rm{st}}ar}_p(t) \exp(k) {{\bold r}}, s, {\rm{Ad}}(\exp(f)\tau^{{\rm{st}}ar}_p(t)\exp(k)){\delta}ta)$$ $$(\text{resp.}\; (\exp(f)\exp(k) {{\bold r}}, s, {\rm{Ad}}(\exp(f)\exp(k)){\delta}ta)).$$ (3) For a sufficiently small open neighborhood $U$ of $(0,0,0)$ in $Y$, the diagram $$\bold egin{matrix} U\times {\rm{spl}}(W)\times \bar L &\to& \frak D\\ \downarrow &&\downarrow \\ U&\to& \frak E \end{matrix}$$ is cartesian in ${\cal {B}}'_{\bold{R}}(\log)$ and in the category of topological spaces. (4) The image of the map in (1) is contained in $\frak E(Q)\cap \frak E(P)$ with $Q=({\cal {W}}(p_w))_w$ (resp.\ in $\frak E(P)$) and the image of the map in (2) is contained in $\frak D(Q)\cap \frak D(P)$ (resp.\ in $\frak D(P)$). (5) The underlying maps of the morphisms in (1) and (2) are described as in \ref{lsSB2} below. 
\end{sbthm} \bold egin{sbpara}{\lambda}bel{lsSB2} The maps in (1) and (2) in Theorem \ref{lsSB} are induced from the maps $$Y\to \frak E, \quad Y \times {\rm{spl}}(W) \times \bar L\to \frak D,$$ respectively, defined as follows. We consider first the situation (bd) in \ref{lsSB1}. Let $A'$ be the subset of $A_{p,P}=\Hom(X(S_{p,P}), {\bold{R}}^{\mult}_{>0})$ consisting of all elements whose restriction to $t^{-1}({\bold{R}}_{>0}){\subset}set X(S_{p,P})^+$ coincides with the restriction of $t: X(S_{p,P})^+\to {\bold{R}}_{{{\gamma}mma}eq 0}$ where $t$ ranges over $\bar A_{p,P}$. Let $J=(J(w))_w$ for $t$ be as in \ref{lsSB1} and let $p_J\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$ be as in \ref{thm+4} for $J$. Then the first map $Y\to \frak E$ sends $(t,f,k)$ to $$p':=\exp(f) \tau^{{\rm{st}}ar}_p(t')\exp(k)p_J \quad \text{where} \;t'\in A'.$$ The second map $Y \times {\rm{spl}}(W) \times \bar L\to \frak D$ sends $(t,f,k,s,{\delta}ta)$ to the following element $(p', P', Z)$ of $\frak D=D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ (\ref{2.6.3}) where $P'$ and $Z$ are as follows. Let $\bar A_{p,P}\to \bar A_P= {\bold{R}}_{{{\gamma}mma}eq 0}^{\Delta(P)}$ be the morphism in Lemma \ref{lemSB}. Let $I=\{j\in \Delta(P)\;|\; t_j=0\}$ where $t_j$ denotes the $j$-component of the image of $t$ in ${\bold{R}}_{{{\gamma}mma}eq 0}^{\Delta(P)}$. Then $P'$ is the ${\bold{Q}}$-parablolic subgroup of $G_{\bold{R}}({{\gamma}mma}r^W)$ such that $P'\supset P$ which corresponds to the subset $I$ of $\Delta(P)$ (\ref{BS2}). If ${\delta}ta\in L$, $Z$ is the subset of $D$ whose image under the embedding $D\to D({{\gamma}mma}r^W)\times{\rm{spl}}(W) \times {\cal {L}}$ is the set $$\{(\exp(f) \tau_p^{{\rm{st}}ar}(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f) \tau^{{\rm{st}}ar}_p(t') \exp(k)){\delta}ta)\;|\; t'\in A'\}.$$ If ${\delta}ta= 0\circ {\delta}ta^{(1)}\in \bar L\smallsetminus L$ $({\delta}ta^{(1)}\in L\smallsetminus \{0\})$ (\ref{2.3ex} (4)), $Z$ is the subset of $D$ whose image under the embedding $D\to D({{\gamma}mma}r^W)\times{\rm{spl}}(W) \times {\cal {L}}$ is the set $$\{(\exp(f) \tau_p^{{\rm{st}}ar}(t') \exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f) \tau^{{\rm{st}}ar}_p(t') \exp(k))(c\circ {\delta}ta^{(1)})\;|\; t'\in A', c\in {\bold{R}}_{>0}\}.$$ Next consider the situation (d) in \ref{lsSB1}. In this situation, the first map sends $(t,f,k)$ to $t\circ \exp(f)\exp(k){{\bold r}}$. The second map sends $(t,f,k,s, {\delta}ta)$ with ${\delta}ta\in L$ to the element $t\circ x$ and sends $(t,f,k,s, 0\circ {\delta}ta)$ with ${\delta}ta\in L\smallsetminus \{0\}$ to the element $(0,t)\circ x$, where $x$ is the element of $D$ whose image in $D({{\gamma}mma}r^W) \times {\rm{spl}}(W) \times {\cal {L}}$ (\ref{grsd}) is $(\exp(f)\exp(k){{\bold r}}, s, {\rm{Ad}}(\exp(f)\exp(k)){\delta}ta)$. Here we denote by $(t,x) \mapsto t\circ x$ the morphisms $\bar A_P\times D({{\gamma}mma}r^W) \to D_{{\rm {BS}}}({{\gamma}mma}r^W)$, $\bar A_P \times D\to D_{{\rm {BS}}}$, and $\bar B_P\times D_{\operatorname{naive}spl}\to D_{{\rm {BS}}}$, which extends the morphisms $A_P\times D({{\gamma}mma}r^W) \to D_{{\rm {BS}}}({{\gamma}mma}r^W)$, $A_P\times D\to D_{{\rm {BS}}}$, and $B_P\times D\to D_{{\rm {BS}}}$, defined by $(t,x) \mapsto t \circ x$, respectively. \end{sbpara} \bold egin{sbpara}{\lambda}bel{2.6.21} We prove Theorem \ref{lsSB}. The theorem is clear in the situation (d) in \ref{lsSB1}. We consider the situation (bd) in \ref{lsSB1}. We reduce the theorem in this situation to Theorem \ref{ls1}. 
It is easily seen that the validity of the theorem does not depend on the choices of $R$ and $S$. We take any $S$ satisfying the condition in \ref{thm+2}, and hence the condition in \ref{lsSB1}. We choose $R$ in the following way. Let $Q=(Q(w))_w$ where $Q(w)={\cal {W}}({{\gamma}mma}r^W_w)$. Take a splitting $apha$ of $Q$ and let $R(Q)$ be as in \ref{Pcone2.2}. Let ${\sigma}_P\in {\Cal {P}}'(Q)$ be the cone corresponding to $P\in {\Cal {P}}(Q)$ (\ref{Pcone3}) and let $\cS:=\cS({\sigma}_P)$ be the corresponding fs submonoid of $X(S_p)$. Note that $R(Q){\subset}set \cS\cup \cS^{-1}$ (\ref{Pcone}). Choose a subset $I_1$ of $R(Q)\cap \cS\cap \cS^{-1}$ such that $R(Q)\cap \cS\cap \cS^{-1}$ is the disjoint union of $\{1\}$, $I_1$, and $I_1^{-1}$. Let $I_2:=R(Q) \cap \cS^{-1}\smallsetminus R(Q)\cap \cS\cap \cS^{-1}$. Hence $R(Q)$ is the disjoint union of $\{1\}$, $I_1$, $I_1^{-1}$, $I_2$, and $I_2^{-1}$. Choose an ${\bold{R}}$-subspace $C$ of $\fg_{\bold{R}}({{\gamma}mma}r^W)$ such that the subspace $\fg_{\bold{R}}({{\gamma}mma}r^W)_1= \{v\in \fg_{\bold{R}}({{\gamma}mma}r^W)\;|\; {\rm{Ad}}(\tau^{{\rm{st}}ar}_p(t))v=v\;\text{for all}\;t\in A_p\}$ of $\fg_{\bold{R}}({{\gamma}mma}r^W)$ coincides with the direct sum of ${\rm {Lie}}(\tau^{{\rm{st}}ar}_p(A_p))$, $C$, and $\frak g_{\bold{R}}({{\gamma}mma}r^W)_1\cap {\rm {Lie}}(K_{{{\bold r}}})$. Let $$R'= C \oplus (\bigoplus_{\chi\in I_1\cup I_2} \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}).$$ Then $R'{\subset}set {\rm {Lie}}(P)$, and $R'$ satisfies the conditions (C1) and (C2) on $R$ of \ref{thm+2}. (Here we used the fact that the Cartan involution $\theta_{K_{{{\bold r}}}}$ associated to the maximal compact subgroup $K_{{{\bold r}}}$ of $G_{\bold{R}}({{\gamma}mma}r^W)$ sends $\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}$ to $\fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi^{-1}}$ for any $\chi\in X(S_p)$, and ${\rm {Lie}}(K_{{{\bold r}}})$ coincides with $\{v\in \fg_{\bold{R}}({{\gamma}mma}r^W)\;|\; \theta_{K_{{{\bold r}}}}(v)=v\}$.) Take an ${\bold{R}}$-subspace $C'$ of $\fg_{\bold{R}}({{\gamma}mma}r^W)$ such that ${\rm {Lie}}(\tau^{{\rm{st}}ar}_p(A_p))= {\rm {Lie}}(\tau^{{\rm{st}}ar}_p(A_{p,P})) \oplus C'$. We take $R:=C' \oplus R'$ as $R$ of \ref{lsSB1}. Define $X$ and $Y$ of \ref{lsSB1} by using these $R$ and $S$. Denote by $X'$ and $Y'$, respectively, the $X$ and $Y$ of \ref{thm+2} defined by taking $R'$ and $S$ as $R$ and $S$ of \ref{thm+2}. As in \ref{Pcone4}, let $\Sigma$ be the fan of all faces of the cone $\Hom(X(S_p)^+, {\bold{R}}^{\add}_{{{\gamma}mma}eq 0}) = \prod_w {\bold{R}}^{Q(w)}_{{{\gamma}mma}eq 0}$. Let $\Sigma'$ be the fan of all faces of the cone ${\sigma}_P$. Then $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(Q) \cap D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}(P)$ represents the fiber product of ${\rm {Mor}}(-, D^{{\rm{st}}ar}_{{\rm{SL}}(2)}(Q)) \to [\Sigma] \leftarrow [\Sigma']$. On the other hand, the fiber product of ${\rm {Mor}}(-, X') \to [\Sigma]\leftrightarrow [\Sigma']$ is represented by $X'':= \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) \times \frak g_{\bold{R}}({{\gamma}mma}r^W) \times \frak g_{\bold{R}}({{\gamma}mma}r^W) \times \frak g_{\bold{R}}({{\gamma}mma}r^W) \times S$ and the fiber product of ${\rm {Mor}}(-, Y') \to [\Sigma]\leftarrow [\Sigma']$ is represented by the inverse image $Y''$ of $Y'$ in $X''$ under the canonical map $X''\to X'$, where $Y''$ is endowed with the structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$ by using the embedding $Y''\to X''$ (\ref{embstr}). 
We identify $X$ with $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) \times R'\times S$ via the isomorphism $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})\cong \bar A_{p,P} \times C'$. To reduce Theorem \ref{lsSB} to Theorem \ref{ls1}, it is sufficient to prove the following (*). (*) If $(t,f,g,h,k)\in Y''$, then $(t,f,k)\in Y$ in $X=\Hom(\cS, {\bold{R}}_{{{\gamma}mma}eq 0}^{\mult})\times R' \times S$. We have an isomorphism $$Y''\overset{\cong}\to Y\;;\; (t,f,g,h,k) \mapsto (t, f,k)$$ in ${\cal {B}}'_{\bold{R}}(\log)$. Before the proof of (*), we note the following (1) and (2). (1) Let $(t,f,g,h,k)\in X''$ ($t\in \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$, $f,g,h\in \fg_{\bold{R}}({{\gamma}mma}r^W)$, $k\in S$). Then $(t,f,g,h,k)$ belongs to $Y''$ if and only if the conditions (i)--(iv) in \ref{thm+2}, among which (iii) and (iv) are modified as follows, are satisfied. We replace $R$ in (iii) in \ref{thm+2} by $R'$. In (iv) in \ref{thm+2}, we define $J= (J(w))_{w\in {\bold{Z}}}$ where $J(w)= \{j\in Q(w)\;|\; t_{w,j}=0\}$. Here $t_{w,j}\in {\bold{R}}_{{{\gamma}mma}eq 0}$ denotes the $(w,j)$-component of the image of $t$ in $\bar A_p$. Then $k\in S_J$. (2) Let $(t,f,k)\in X$ ($t\in \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$, $f\in R'$, $k\in S$). Then $(t,f,k)$ belongs to $Y$ if and only if the following conditions (2-i) and (2-ii) are satisfied. (2-i) Let $\chi\in X(S_p)$. If $t(\chi_+)=0$, then $f_{\chi}=0$. (2-ii) The same as the form of (iv) in the above (1). Now we prove the assertion (*). Let $(t,f,g,h,k)\in Y''$. We first prove that $(t,f,k)\in Y$. To show this, it is sufficient to prove $f\in R'$. Let $\chi\in R(Q)$. If $t(\chi_-)\operatorname{naive}eq 0$, since $t(\chi_+)g_{\chi}=t(\chi_-)f_{\chi}$ and $g_{\chi}\in R'$, we have $f_{\chi}=t(\chi_-)^{-1}t(\chi_+)g_{\chi}\in R'$. Assume $t(\chi_-)=0$. If $\chi\in \cS$, then $t(\chi_+)=t(\chi_-)t(\chi)=0$. Hence $f_{\chi}=0$. If $\chi\operatorname{naive}otin \cS$, then $\chi\in \cS^{-1}$ and hence $f_{\chi}\in \fg_{\bold{R}}({{\gamma}mma}r^W)_{\chi}{\subset}set R'$. We next prove that $Y'' \to Y$ is an isomorphism. For this, we define a morphism $Y\to X''$ in the opposite direction by $(t,f,k)\mapsto (t,f,g,h,k)$ with $g=\sum_{\chi\in \cS^{-1}} t(\chi^{-1})f_{\chi}$, $h= \sum_{\chi\in \cS^{-1}} t(\chi^{-1})^2f_{\chi}$. We show that the image of this morphism is contained in $Y''$. Let $\chi\in R(Q)$. We prove $t(\chi_+)g_{\chi}= t(\chi_-)f_{\chi}$ and $t(\chi_+)h_{\chi}= t(\chi_-)g_{\chi}$. If $\chi\in \cS^{-1}$, we have $t(\chi_+)g_{\chi}= t(\chi_+)t(\chi^{-1})f_{\chi}= t(\chi_-)f_{\chi}$ and $t(\chi_+)h_{\chi}= t(\chi_+)t(\chi^{-1})^2f_{\chi}= t(\chi_-)g_{\chi}$. If $\chi\operatorname{naive}otin \cS^{-1}$, we have $f_{\chi}=0$ by the definition of $R'$, and hence $g_{\chi}=h_{\chi}=0$ by the definitions of $g$ and $h$. If $t(\chi_+)=0$, then $f_{\chi}=0$ and hence $g_{\chi}=0$. We prove that if $t(\chi_-)=0$, then $g_{\chi}=h_{\chi}=0$. In the case $t(\chi_+)=0$, we have $f_{\chi}=0$ and hence $g_{\chi}=h_{\chi}=0$. In the case $\chi\in \cS$, we have $t(\chi_+)=t(\chi_-)t(\chi)=0$, and we are reduced to the previous case. In the case $t(\chi_+)\operatorname{naive}eq 0$ and $\chi\operatorname{naive}otin \cS$, we have $\chi\in \cS^{-1}$ and $t(\chi_-)=t(\chi_+)t(\chi^{-1})$ and hence $t(\chi^{-1})=0$. Hence $g_{\chi}=t(\chi^{-1})f_{\chi}=0$ and $h_{\chi}=0$ similarly. We prove $g_{\chi}, h_{\chi}+f_{\chi^{-1}}\in R'$. If $\chi\in \cS^{-1}$, $g_{\chi}=t(\chi^{-1})f_{\chi}\in R'$ and $h_{\chi}= t(\chi^{-1})^2f_{\chi}\in R'$ and hence $h_{\chi}+f_{\chi^{-1}}\in R'$.
If $\chi\operatorname{naive}otin \cS^{-1}$, $g_{\chi}=h_{\chi}=0$ and hence $h_{\chi}+f_{\chi^{-1}}=f_{\chi^{-1}}\in R'$. Thus we have a morphism $Y\to Y''$. It is clear that the composition $Y\to Y''\to Y$ is the identity morphism. We prove that the composition $Y''\to Y\to Y''$ is also the identity morphism. Let $(t,f,g,h,k)\in Y''$ and let $(t,f,g', h', k)$ be the image of $(t,f,k)\in Y$ under $Y\to Y''$. We prove $g'_{\chi}=g_{\chi}$ and $h'_{\chi}=h_{\chi}$ for any $\chi\in R(Q)$. Assume first $\chi\in \cS^{-1}$. If $t(\chi_+)\operatorname{naive}eq 0$, then $g_{\chi}= t(\chi_+)^{-1}t(\chi_-)f_{\chi}= t(\chi^{-1})f_{\chi}=g'_{\chi}$, and we have similarly $h_{\chi}=h_{\chi}'$. If $t(\chi_+)=0$, then $t(\chi_-)= t(\chi_+)t(\chi^{-1})=0$, and hence $f_{\chi}=g_{\chi}=h_{\chi}=0$, and we have $g'_{\chi}=0$ and $h'_{\chi}=0$ by $f_{\chi}=0$. Next assume $\chi\operatorname{naive}otin \cS^{-1}$. Then by the definition of $R'$, we have $a_{\chi}=0$ for any $a\in R'$. Since $f_{\chi}, g_{\chi}, h_{\chi}+f_{\chi^{-1}}\in R'$, we have $f_{\chi}=g_{\chi}=h_{\chi}=0$, and we have $g'_{\chi}=h'_{\chi}=0$ by $f_{\chi}=0$. Theorem \ref{lsSB} is proved. \end{sbpara} \bold egin{sbthm}{\lambda}bel{thmBS} (1) The identity map of $D$ extends uniquely to a morphism $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}\to D_{{\rm {BS}}}$ in ${\cal {B}}'_{\bold{R}}(\log)$. It sends $(p, P, Z)\in D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ to $(P, A_P\circ Z)\in D_{{\rm {BS}}}$. (2) The diagram $$\bold egin{matrix} D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)} & \to & D_{{\rm {BS}}}\\ \downarrow &&\downarrow \\ D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{{\rm {BS}}} &\to & D_{{\rm {BS}}}({{\gamma}mma}r^W)\end{matrix}$$ is cartesian in ${\cal {B}}'_{\bold{R}}(\log)$ and also in the category of topological spaces. (3) The inverse image of $D^{{{\rm{mild}}}}_{{\rm {BS}}}$ in $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2)}$ coincides with $D^{{\rm{st}}ar,{\rm {BS}},{{\rm{mild}}}}_{{\rm{SL}}(2)}$. \end{sbthm} \bold egin{pf} Let $(p, P, Z)\in D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{\rm {BS}}}$ and take ${{\bold r}} \in Z$. We compare the situations (bd) and (d) in Theorem \ref{lsSB} by taking the same $p$ and ${{\bold r}}$ for both (bd) and (d), and by taking $R$ and $S$ for these situations as follows. Take $R$ and $S$ for the situation (d). Take this $S$ as $S$ for the situation (bd). Let $C$ be an ${\bold{R}}$-subspace of ${\rm {Lie}}((A_P)_{{{\bold r}}})$ such that ${\rm {Lie}}((A_P)_{{{\bold r}}})= {\rm {Lie}}(\tau^{{\rm{st}}ar}(A_{p,P})) \oplus C$ and take $C\oplus R$ as the $R$ for the situation (bd). Then Theorem \ref{thmBS} (1) and (2) follow from Theorem \ref{lsSB}, Lemma \ref{lemSB}, and the fact $$(\tau^{{\rm{st}}ar}(t) \bmod P_u)\circ \exp(f)\exp(k){{\bold r}}= \exp(f)\tau_p^{{\rm{st}}ar}(t) \exp(k){{\bold r}}.$$ Theorem \ref{thmBS} (3) is clear. \end{pf} {\subset}section{The category ${\cal {B}}'_{\bold{R}}(\log)^+$}{\lambda}bel{ss:+} The aim of this Section \ref{ss:+} is to define a full subcategory ${\cal {B}}'_{\bold{R}}(\log)^+$ of ${\cal {B}}'_{\bold{R}}(\log)$, consisting of nice objects, and prove that the spaces of ${\rm{SL}}(2)$-orbits in this Section 2 belong to ${\cal {B}}'_{\bold{R}}(\log)^+$ (Theorem \ref{slis+}).
We also discuss full subcategories ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ and ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ of ${\cal {B}}'_{\bold{R}}(\log)$ such that $${\cal {B}}'_{\bold{R}}(\log)\supset {\cal {B}}'_{\bold{R}}(\log)^+\supset {\cal {B}}'_{\bold{R}}(\log)^{[+]}\supset {\cal {B}}'_{\bold{R}}(\log)^{[[+]]}.$$
\begin{sbpara}\label{b[[+]]} We first define a full subcategory ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ of ${\cal {B}}'_{\bold{R}}(\log)$.
We define standard objects of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Take $n\geq 0$, a real analytic manifold $A$, and a real analytic closed submanifold $A_J$ of $A$ for each subset $J$ of $\{1,\dots, n\}$ satisfying $A_{\emptyset}=A$, $A_J\subset A_{J'}$ if $J\supset J'$. Define $$Y= \{(t, x)\in {\bold{R}}^n_{\geq 0}\times A\;|\; x\in A_{J(t)}\}$$ where $J(t)= \{j\;|\;1\leq j\leq n, t_j=0\}$. We regard $Y$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by taking ${\bold{R}}^n_{\geq 0} \times A$ as $X$ in \ref{embstr}, where the log structure of $X$ with sign is induced from that of ${\bold{R}}^n_{\geq 0}$ (\ref{2.3ex} (1)).
Let ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ be the full subcategory of ${\cal {B}}'_{\bold{R}}(\log)$ consisting of all objects which are locally isomorphic to open subobjects of $Y$ as above.
Real analytic manifolds with corners belong to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{sbpara}
\begin{sbpara}\label{b[+]} We next define a full subcategory ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ of ${\cal {B}}'_{\bold{R}}(\log)$.
We define standard objects of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$. Take an fs monoid $\cS$, a real analytic manifold $A$, and a real analytic closed submanifold $A_I$ of $A$ for each face $I$ of $\cS$ satisfying $A_{\cS}=A$, $A_I\subset A_{I'}$ if $I\subset I'$. Define $$Y= \{(t, x)\in \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\times A\;|\; x\in A_{I(t)}\}$$ where $I(t)$ is the face $\{a\in \cS\;|\;t(a) \neq 0\}$ of $\cS$. We regard $Y$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by taking $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0}) \times A$ as $X$ in \ref{embstr}, where the log structure of $X$ with sign is induced from that of $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ (\ref{2.3ex} (3)).
Let ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ be the full subcategory of ${\cal {B}}'_{\bold{R}}(\log)$ consisting of all objects which are locally isomorphic to open subobjects of $Y$ as above.
Since a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ is the case $\cS={\bold{N}}^n$ of a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$, we have ${\cal {B}}'_{\bold{R}}(\log)^{[+]}\supset {\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{sbpara}
The following Lemmas \ref{lem++} and \ref{lem++2} are proved easily.
\begin{sblem}\label{lem++} Let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$. Then $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ if and only if for any $s\in S$, $(M_S/\cO^\times_S)_s$ is isomorphic to ${\bold{N}}^r$ for some $r\geq 0$ (which may depend on $s$). \end{sblem}
\begin{sblem}\label{lem++2} Let $S'\to S$ be a log modification in ${\cal {B}}'_{\bold{R}}(\log)$. If $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$, then $S'$ also belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$. \end{sblem}
\begin{sbpara}\label{b+} We define a full subcategory ${\cal {B}}'_{\bold{R}}(\log)^+$ of ${\cal {B}}'_{\bold{R}}(\log)$.
Let ${\cal {B}}'_{\bold{R}}(\log)^+$ be the full subcategory of ${\cal {B}}'_{\bold{R}}(\log)$ consisting of all objects $S$ such that, locally on $S$, there is a log modification $S'\to S$ such that $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$.
We have clearly ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}\subset {\cal {B}}'_{\bold{R}}(\log)^+$. \end{sbpara}
\begin{sblem}\label{leq1} Let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^+$ and assume that $(M^{{\rm{gp}}}_S/\cO_S^\times)_s$ is of rank $\leq 1$ as an abelian group for any $s\in S$. Then $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{sblem}
\begin{pf} This is because, under the assumption on the rank, any log modification $S' \to S$ is an isomorphism. \end{pf}
\begin{sbprop}\label{[+]+} ${\cal {B}}'_{\bold{R}}(\log)^{[+]}\subset {\cal {B}}'_{\bold{R}}(\log)^+$. \end{sbprop}
\begin{pf} Let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$. Locally on $S$, by the resolution of singularities in toric geometry (\cite{O} p.23), there exists a log modification $S'\to S$ such that for any $s\in S'$, $(M_{S'}/\cO^\times_{S'})_s\cong {\bold{N}}^r$ for some $r$. By Lemmas \ref{lem++} and \ref{lem++2}, $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{pf}
\begin{sbprop}\label{lm+} Let $S'\to S$ be a log modification in ${\cal {B}}'_{\bold{R}}(\log)$. Then $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$ if and only if $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. \end{sbprop}
\begin{pf} First assume that $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. We prove that $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. We may assume that $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Locally on $S$, there is a log modification $S''\to S$ which is a composition $S''\to S'\to S$, where the first arrow is a log modification and the second arrow is the given morphism, such that for any $s\in S''$, $(M_{S''}/\cO^\times_{S''})_s \cong {\bold{N}}^r$ for some $r$. By Lemmas \ref{lem++} and \ref{lem++2}, $S''$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Hence $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$.
Next assume that $S'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. We prove that $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. By the assumption, there are an open covering $(U_{\lambda})_{\lambda}$ of $S'$ and a log modification $V_{\lambda}\to U_{\lambda}$ for each $\lambda$ such that $V_{\lambda}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Since $S'\to S$ is proper, locally on $S$, we can take a finite covering $(U_{\lambda})_{\lambda}$. Hence locally on $S$, there is a log modification $S''\to S$ having the following properties (i)--(iii).
(i) $S''\to S$ is a composition $S''\to S'\to S$ where the first arrow is a log modification and the second arrow is the given morphism.
(ii) For each $\lambda$, we have a morphism $U_{\lambda} \times_{S'} S'' \to V_{\lambda}$ over $U_{\lambda}$ which is a log modification.
(iii) For any $s\in S''$, $(M_{S''}/\cO^\times_{S''})_s \cong {\bold{N}}^r$ for some $r\geq 0$.
By Lemmas \ref{lem++} and \ref{lem++2}, $U_{\lambda} \times_{S'} S''$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Since $(U_{\lambda} \times_{S'} S'')_{\lambda}$ is an open covering of $S''$, $S''$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Hence $S$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$.
\end{pf}
\begin{sbprop}\label{cv++3} The category ${\cal {B}}'_{\bold{R}}(\log)^{+}$ (resp.\ ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$, resp.\ ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$) is stable in ${\cal {B}}'_{\bold{R}}(\log)$ under taking finite products. \end{sbprop}
\begin{pf} This is clear for ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ and ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. The part for ${\cal {B}}'_{\bold{R}}(\log)^+$ follows from the part for ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{pf}
\begin{sblem}\label{fiber+} Let $Y\subset \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\times A$ be a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ in \ref{b[+]}, let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$, and let $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ be a morphism in ${\cal {B}}'_{\bold{R}}(\log)$. Then the fiber product of $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0}) \leftarrow Y$ in ${\cal {B}}'_{\bold{R}}(\log)$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{sblem}
\begin{pf} Working locally on $S$, we may assume that $S$ is an open set of the standard object ${\bold{R}}^n_{\geq 0} \times A'$ in \ref{b[[+]]} (here $A'$ plays the role of $A$ in \ref{b[[+]]}), and that we have a commutative diagram of functors $$\begin{matrix} {\rm {Mor}}(-, S)&&\to&& {\rm {Mor}}(-, \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0}))\\ \downarrow &&&& \downarrow\\ {\rm {Mor}}(-, {\bold{R}}^n_{\geq 0}) & \to & [\Sigma'] &\to &[\Sigma] \end{matrix}$$ where $\Sigma$ is the fan of all faces of the cone $\Hom(\cS, {\bold{R}}^{\add}_{\geq 0})$ and $\Sigma'$ is the fan of all faces of the cone ${\bold{R}}^n_{\geq 0}\subset {\bold{R}}^n$. Then the fiber product in question coincides with the space $$\{(t, a, a') \in {\bold{R}}^n_{\geq 0} \times A\times A'\;|\; a\in A_{I(t)}, a'\in A'_{J(t)}\}$$ where $J(t)=\{j\;|\; 1\leq j\leq n, \; t_j=0\}$ and $I(t)$ is the face of $\cS$ which corresponds to the image of $t$ under ${\bold{R}}^n_{\geq 0} \to \Sigma' \to \Sigma$. \end{pf}
\begin{sblem}\label{fiber+2} Let $Y\subset \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\times A$ be a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ in \ref{b[+]}, let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^+$, and let $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ be a strict morphism in ${\cal {B}}'_{\bold{R}}(\log)$. Then the fiber product of $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0}) \leftarrow Y$ in ${\cal {B}}'_{\bold{R}}(\log)$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. \end{sblem}
\begin{pf} Since $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ is strict, working locally on $\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})$ and on $S$, we have a rational finite subdivision $\Sigma'$ of the cone $\Hom(\cS, {\bold{R}}^{\add}_{\geq 0})$ such that the fiber product $S'$ of $S\to \Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\leftarrow |{\operatorname{toric}}|(\Sigma')$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ and such that $\cS({\sigma}')$ for all ${\sigma}'\in \Sigma'$ are isomorphic to ${\bold{N}}^r\times {\bold{Z}}^m$ for some $r, m$. Replacing $S$ by $S'$ and replacing $\cS$ by $\cS({\sigma}')$ $({\sigma}'\in \Sigma')$, we are reduced to Lemma \ref{fiber+}.
\end{pf}
\begin{sbprop}\label{VV+} Let $n\geq 0$, and let $V$ be a finite dimensional ${\bold{R}}$-vector space endowed with an action of ${\bold{G}}_m^n$. Let $Y$ be the subset of ${\bold{R}}^n_{\geq 0}\times V \times V$ consisting of all elements $(t, u, v)$ satisfying the following conditions $(i)$ and $(ii)$ for any $\chi\in X({\bold{G}}_m^n)$. In the following, we write $\chi=\chi_+(\chi_-)^{-1}$ as in \ref{thm+2}.
$(i)$ $t(\chi_+)v_{\chi}= t(\chi_-)u_{\chi}$.
$(ii)$ If $t(\chi_+)=0$, then $u_{\chi}=v_{\chi}=0$.
Endow $Y$ with the structure of an object of ${\cal {B}}'_{\bold{R}}(\log)$ by the embedding $Y\to {\bold{R}}^n_{\geq 0} \times V\times V$ as in \ref{embstr}. Let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^+$, assume that we are given a strict morphism $S\to {\bold{R}}^n_{\geq 0}$, and let $E$ be the fiber product of $S\to {\bold{R}}^n_{\geq 0}\leftarrow Y$ in ${\cal {B}}'_{\bold{R}}(\log)$. Then $E$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. \end{sbprop}
\begin{pf} In \ref{rvtoric}, we take $L=X({\bold{G}}_m^n)$. Let $L^+\subset L$ be the submonoid corresponding to ${\bold{N}}^n$ in the identification $L={\bold{Z}}^n$. Take a finite subset $R$ of $L$ such that $\{\chi\in L\;|\; V_{\chi}\neq 0\}\subset R$, $R=R^{-1}$, and $R^+:=R\cap L^+$ generates $L^+$ as a monoid. Let $\Sigma$ be the fan of all faces of the cone ${\bold{R}}^n_{\geq 0}\subset {\bold{R}}^n=N_{\bold{R}}$, and let $\Sigma'$ be the rational finite subdivision of $\Sigma$ defined in \ref{Pcone} (3) with respect to $R$ and $L^+$.
Let $Y'$, $S'$, $E'$ be the fiber products of $Y\to {\bold{R}}^n_{\geq 0}\leftarrow |{\operatorname{toric}}|(\Sigma')$, $S\to {\bold{R}}^n_{\geq 0}\leftarrow |{\operatorname{toric}}|(\Sigma')$, $E\to {\bold{R}}^n_{\geq 0}\leftarrow |{\operatorname{toric}}|(\Sigma')$, respectively (we identify ${\bold{R}}^n_{\geq 0}$ with $|{\operatorname{toric}}|(\Sigma)$). For ${\sigma}'\in \Sigma'$, let $Y'({\sigma}')$, $S'({\sigma}')$, $E'({\sigma}')$ be the open sets of $Y'$, $S'$, $E'$, respectively, corresponding to ${\sigma}'$. These are the fiber products of $Y\to {\bold{R}}^n_{\geq 0}\leftarrow \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0})$, $S\to {\bold{R}}^n_{\geq 0}\leftarrow \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0})$, $E\to {\bold{R}}^n_{\geq 0}\leftarrow \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0})$, respectively. In particular, $Y'({\sigma}') \subset \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0}) \times V \times V$.
We prove that $Y'({\sigma}')$ is isomorphic to a standard object of the category ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ (\ref{b[+]}). Since $R\subset \cS({\sigma}')\cup \cS({\sigma}')^{-1}$ (\ref{Pcone}), we can take subsets $R_1$ and $R_2$ such that $R$ is the disjoint union of $R_1$ and $R_2$ and such that $R_1\subset \cS({\sigma}')$ and $R_2\subset \cS({\sigma}')^{-1}$.
Consider the map $$Y'({\sigma}') \to \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0})\times V\;;\; (t, u, v) \mapsto (t, \sum_{\chi\in R_1} v_{\chi} + \sum_{\chi\in R_2} u_{\chi}).$$ This induces an isomorphism $$Y'({\sigma}') \overset{\cong}\to \{(t,x)\in \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0}) \times V\;|\; x\in V_{I(t)}\}\leqno{(1)}$$ in ${\cal {B}}'_{\bold{R}}(\log)$, where $I(t)$ denotes the face $\{\chi\in \cS({\sigma}')\;|\; t(\chi)\neq 0\}$ of $\cS({\sigma}')$, and for a face $I$ of $\cS({\sigma}')$, we define $$V_I = \{x\in V\;|\; x_{\chi}=0\;\text{if $\chi\in L$ and $\chi_+\notin I$}\}.$$ The inverse map of (1) is given by $(t, x)\mapsto (t, u, v)$ where $u=\sum_{\chi\in R_1} t(\chi)x_{\chi} + \sum_{\chi\in R_2} x_{\chi}$ and $v= \sum_{\chi\in R_1} x_{\chi} + \sum_{\chi\in R_2} t(\chi^{-1})x_{\chi}$. We omit further details of the proof of this isomorphism (1), for the argument is straightforward and similar to the proof of $Y''\cong Y$ in the proof of Theorem \ref{lsSB} (\ref{2.6.21}). Note that the right-hand side of (1) is a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$ (\ref{b[+]}).
By Lemma \ref{fiber+2}, the fiber product $E'({\sigma}')$ of $S'({\sigma}') \to \Hom(\cS({\sigma}'), {\bold{R}}^{\mult}_{\geq 0}) \leftarrow Y'({\sigma}')$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. Hence $E'$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. By Proposition \ref{lm+}, this proves that $E$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. \end{pf}
\begin{sbprop}\label{SB+} $D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[+]}$. \end{sbprop}
This follows from Theorem \ref{lsSB} for the situation (bd) in \ref{lsSB1} and from \ref{VV+}.
\begin{sbthm}\label{slis+} The spaces $D^I_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$, $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,+}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$, $D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$ and $D_{{\rm{SL}}(2)}({\rm{gr}}^W)$ belong to ${\cal {B}}'_{\bold{R}}(\log)^+$. \end{sbthm}
\begin{sbrem} We think that this Theorem \ref{slis+} is a version, for the spaces of ${\rm{SL}}(2)$-orbits treated in this Section 2, of the following results (1) and (2) on the spaces of Borel--Serre orbits and of nilpotent orbits.
(1) The space $D_{{\rm {BS}}}$ of Borel--Serre orbits is a real analytic manifold with corners. (Part I.)
(2) For a weak rational fan $\Sigma$ in $\fg_{\bold{R}}$ and for a neat subgroup $\Gamma$ of $G_{{\bold{Z}}}$ which is strongly compatible with $\Sigma$, the space $\Gamma \backslash D_{\Sigma}$ is a log manifold (Part III, Theorem 2.5.2).
These results (1) and (2) tell us that $D_{{\rm {BS}}}$ and $\Gamma \backslash D_{\Sigma}$ are beautiful spaces. Theorem \ref{slis+} also says that the spaces of ${\rm{SL}}(2)$-orbits are beautiful spaces. \end{sbrem}
\begin{sbpara} We prove Theorem \ref{slis+}.
Theorem \ref{slis+} for $D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}$ follows from \ref{SB+} and \ref{[+]+}. Theorem \ref{slis+} for $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,+}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$ follows from that for $D_{{\rm{SL}}(2)}^{\star,{\rm {BS}}}$ by \ref{lm+}.
In the pure situation, this implies that $D_{{\rm{SL}}(2)}({\rm{gr}}^W_w)$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$ for any $w$; hence $D_{{\rm{SL}}(2)}({\rm{gr}}^W)$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$, and hence $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$ by \ref{lm+}. Theorem \ref{slis+} for $D^{II}_{{\rm{SL}}(2)}$ follows from that for $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$ by \ref{Lbund} and \ref{cv++3}.
Finally we prove that $D^I_{{\rm{SL}}(2)}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$. We apply Proposition \ref{VV+}. Let $x=(p, Z)\in D_{{\rm{SL}}(2)}$, fix ${{\bold r}}\in Z$, and let $\bar {{\bold r}}={{\bold r}}({\rm{gr}}^W)\in D({\rm{gr}}^W)$. Let $n$ be $\text{rank}(p)$ if $x$ is an $A$-orbit, and let $n=\text{rank}(p)+1$ if $x$ is a $B$-orbit. Let $V={\rm {Lie}}(G_{{\bold{R}},u})$, where $G_{{\bold{R}},u}$ denotes the unipotent radical of $G_{{\bold{R}}}$. We define the action of ${\bold{G}}_m^n$ on $V$ as follows. In the case $x$ is an $A$-orbit (resp.\ a $B$-orbit), lift the homomorphism $\tau_p\; (\text{resp.}\; \tilde \tau_p) : {\bold{G}}_m^n \to G_{\bold{R}}({\rm{gr}}^W)$ (\ref{sim5}) to $\tau_x: {\bold{G}}_m^n\to G_{\bold{R}}$ by using the splitting ${\rm{spl}}_W({{\bold r}})$ of $W$. We consider the adjoint action of ${\bold{G}}_m^n$ on ${\rm {Lie}}(G_{{\bold{R}},u})$ via $\tau_x$. Define $Y\subset {\bold{R}}^n_{\geq 0}\times V \times V$ as in \ref{VV+}. Then by Part II, Theorem 3.4.6, in the case where $x$ is an $A$-orbit (resp.\ a $B$-orbit), there are an open neighborhood $S$ of $y:=(p, \delta_W({{\bold r}}))$ in $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}\times {\cal {L}}(\bar {{\bold r}})$ (resp.\ $y:=(p, 0\circ \delta_W({{\bold r}}))$ in $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}\times (\bar {\cal {L}}(\bar {{\bold r}})\smallsetminus \{0\})$), a strict morphism $S\to {\bold{R}}^n_{\geq 0}$ which sends $y$ to $0=(0,\dots,0)$, an open neighborhood $U$ of $(y, 0,0,0)$ in the fiber product of $S\to {\bold{R}}^n_{\geq 0} \leftarrow Y$ (here $(0,0,0)\in {\bold{R}}^n_{\geq 0}\times V \times V$), and an open immersion $U\to D^I_{{\rm{SL}}(2)}$ which sends $(y,0,0,0)$ to $x$. By Proposition \ref{VV+}, $U$ is an object of ${\cal {B}}'_{\bold{R}}(\log)^+$. This shows that $D^I_{{\rm{SL}}(2)}$ is an object of ${\cal {B}}'_{\bold{R}}(\log)^+$.
Theorem \ref{slis+} is proved. \end{sbpara}
\begin{sblem} Let $n\geq 0$. Then the part of $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$ consisting of points of rank $\leq n$ is open in $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$. \end{sblem}
\begin{pf} This part is the union of the open sets $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}(\Phi)$ where $\Phi$ ranges over all admissible sets of weight filtrations on ${\rm{gr}}^W$ associated to points of rank $\leq n$. \end{pf}
\begin{sbpara} We denote the above part of $D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$ by $(D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim})_{\leq n}$. In the pure situation, this part is written as $D_{{\rm{SL}}(2),\leq n}$. \end{sbpara}
\begin{sbprop} (1) Let $U$ be the inverse image of $(D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim})_{\leq 1}$ in $D^{\star}_{{\rm{SL}}(2)}$ (resp.\ $D^{II}_{{\rm{SL}}(2)}$). Then $U$ is an object of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$.
(2) Let $U$ be the inverse image of $\prod_w D_{{\rm{SL}}(2)}({\rm{gr}}^W_w)_{\leq 1}$ in $D^{\star,-}_{{\rm{SL}}(2)}$.
Then $U$ is an object of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. \end{sbprop}
\begin{pf} We prove (1). By \ref{slis+}, \ref{logst1} (which describes the stalks of $M_S/\cO^\times_S$ for $S= D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim}$) and \ref{leq1}, $(D_{{\rm{SL}}(2)}({\rm{gr}}^W)^{\sim})_{\leq 1}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. Hence $U$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ by \ref{Lbund} and \ref{cv++3}.
We prove (2). Similarly, $D_{{\rm{SL}}(2)}({\rm{gr}}^W_w)_{\leq 1}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$, and hence $\prod_w D_{{\rm{SL}}(2)}({\rm{gr}}^W_w)_{\leq 1}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ by \ref{cv++3}. Hence $U$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ by \ref{Lbund} and \ref{cv++3}. \end{pf}
\section{Valuative Borel--Serre orbits and valuative SL(2)-orbits}\label{s:val}
In this Section \ref{s:val}, we study the spaces $D_{{\rm {BS}},{\mathrm{val}}}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}$, and $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$, and their relations.
\subsection{The associated valuative spaces}\label{ss:valsp}
In this Section \ref{ss:valsp}, (1) for an object $S$ of ${\cal {B}}'_{\bold{R}}(\log)$, we define a locally ringed space $S_{{\mathrm{val}}}$ over ${\bold{R}}$ with a \lq\lq valuative log structure with sign'', and (2) more generally, for a field $K$ endowed with a non-trivial absolute value $|\;\;|\;:\;K\to {\bold{R}}$ and for a locally ringed space $S$ over $K$ endowed with an fs log structure satisfying the conditions in \ref{value}, we construct a topological space $S_{{\mathrm{val}}}$. In (2), $S_{{\mathrm{val}}}$ is merely a topological space and does not carry the additional structures of (1). (1) becomes important in the rest of this Section 3, and (2) will become important in Section 4. (1) is briefly explained in Part II, 3.7.
We call $S_{{\mathrm{val}}}$ {\it the valuative space associated to $S$}.
\begin{sbpara}\label{V(S)} Let $L$ be an abelian group whose group law is written multiplicatively. A submonoid $V$ of $L$ is said to be {\it valuative} if $V\cup V^{-1}=L$. An integral monoid $V$ is said to be {\it valuative} if it is a valuative submonoid of $V^{{\rm{gp}}}$.
For an fs monoid $\cS$, let $V(\cS)$ be the set of all valuative submonoids $V$ of $\cS^{{\rm{gp}}}$ such that $V\supset \cS$ and $V^\times \cap \cS=\cS^\times$. \end{sbpara}
\begin{sbpara}\label{Sgen} Let $K$ be a field endowed with a non-trivial absolute value $|\;\;|: K\to {\bold{R}}$. Let $S$ be a locally ringed space over $K$ satisfying the equivalent conditions in \ref{value} and endowed with an fs log structure.
Let $S_{{\mathrm{val}}}$ be the set of all triples $(s,V,h)$, where $s\in S$, $V\in V((M_S/\cO^\times_S)_s)$ (\ref{V(S)}), and, writing $\tilde V$ for the inverse image of $V$ in $M^{{\rm{gp}}}_{S,s}$, $h$ is a homomorphism $(\tilde V)^\times \to {\bold{R}}^{\mult}_{>0}$ extending $f\mapsto |f(s)|$ on $\cO_{S,s}^\times$. Here ${\bold{R}}^{\mult}_{>0}$ denotes the set ${\bold{R}}_{>0}$ regarded as a multiplicative group. \end{sbpara}
\begin{sbpara} There is a variant, which we denote by $S_{{\mathrm{val}}(K)}$, of $S_{{\mathrm{val}}}$: Let $S_{{\mathrm{val}}(K)}$ be the set of all triples $(s, V, h)$ where $s$ and $V$ are as above but $h$ is a homomorphism $(\tilde V)^\times \to K^\times$ extending $f\mapsto f(s)$ on $\cO_{S,s}^\times$.
In \cite{KU}, Section 3.6, in the case $K={\bold{C}}$, this space $S_{{\mathrm{val}}({\bold{C}})}$ was denoted by $S_{{\mathrm{val}}}$. But in this Part IV, we consider only $S_{{\mathrm{val}}}$ in the sense of \ref{Sgen} except in the proof of \ref{perth1}, and we hope that no confusion occurs in Part IV. In case confusion may occur in the future, we denote $S_{{\mathrm{val}}}$ in \ref{Sgen} by $S_{{\mathrm{val}}(|\;\;|)}$.
We will call $S_{{\mathrm{val}}(K)}$ the valuative space of $K$-points associated to $S$, and $S_{{\mathrm{val}}(|\;\;|)}$ the valuative space of absolute values associated to $S$. \end{sbpara}
\begin{sbpara} In the case $K={\bold{R}}$ and $M_S$ is a log structure with sign (as in the case $S\in {\cal {B}}'_{\bold{R}}(\log)$) (\ref{lsign}), $S_{{\mathrm{val}}}$ is identified with the set of all triples $(s,V,h)$, where $s\in S$, $V$ is an element of $V((M_S/\cO^\times_S)_s)$, and, writing $\tilde V_{>0}$ for the inverse image of $V$ in $M^{{\rm{gp}}}_{S,>0, s}$, $h$ is a homomorphism $(\tilde V_{>0})^\times \to {\bold{R}}^{\mult}_{>0}$ extending $f\mapsto f(s)$ on $\cO_{S,>0,s}^\times$. \end{sbpara}
\begin{sbpara}\label{valtop} Let $S$ be as in \ref{Sgen}. The topology of $S_{{\mathrm{val}}}$ is defined as follows.
Let $(s_0,V_0,h_0)\in S_{{\mathrm{val}}}$. Assume that we are given a chart $\cS\to M_S$ near the point $s_0\in S$. We introduce a fundamental system of neighborhoods of the point $(s_0,V_0,h_0)\in S_{{\mathrm{val}}}$. Let $U$ be a neighborhood of $s_0$ in $S$, $I$ a finite subset of $\cS^{{\rm{gp}}}$ such that, for any $f\in I$, the image ${\bar f}_{s_0}$ of $f$ in $(M_S^{{\rm{gp}}}/\cO^\times_S)_{s_0}$ is contained in $V_0$, and $\varepsilon>0$. Let $B(U,I, \varepsilon)$ be the set of all points $(s,V, h)$ of $S_{{\mathrm{val}}}$ satisfying the following conditions (i)--(iii).
(i) $s\in U$.
(ii) For any $f\in I$, the image ${\bar f}_s$ of $f$ in $(M_S^{{\rm{gp}}}/\cO^\times_S)_s$ belongs to $V$.
(iii) For any $f\in I$, $|h(f) - h_0(f)|<\varepsilon$. Here we define $h(f)$ (resp.\ $h_0(f)$) to be $0$ unless ${\bar f}_s \in V^\times$ (resp.\ ${\bar f}_{s_0}\in V_0^\times$).
Define a topology of $S_{{\mathrm{val}}}$ so that the sets $B(U,I, \varepsilon)$, where $U$, $I$, and $\varepsilon$ vary, form a fundamental system of neighborhoods of the point $(s_0,V_0,h_0)$. This topology is independent of the choice of the chart $\cS$, and hence is well defined globally. \end{sbpara}
We now consider the relation between $S_{{\mathrm{val}}}$ and the projective limit of toric varieties associated with subdivisions of fans. This will be used to prove properties of $S_{{\mathrm{val}}}$, and to endow $S_{{\mathrm{val}}}$, in the case $S\in {\cal {B}}'_{\bold{R}}(\log)$, with a structure of a locally ringed space over ${\bold{R}}$ and a log structure with sign.
\begin{sbpara}\label{l:val0} Let the notation be as in \ref{rvtoric}. Let $V$ be a valuative submonoid of $L$. For a submonoid $\cS$ of $L$, we say that $V$ {\it dominates} $\cS$ if $\cS\subset V$ and $\cS^\times = \cS\cap V^\times$. For a rational finitely generated sharp cone ${\sigma}$ in $N_{\bold{R}}$, we say that $V$ dominates ${\sigma}$ if $V$ dominates $\cS({\sigma}):=\{l \in L\,|\, l({\sigma}) \geq 0\}$. For a rational fan $\Sigma$ in $N_{\bold{R}}$, $V$ dominates some cone in $\Sigma$ if and only if $\cS({\sigma})\subset V$ for some ${\sigma}\in \Sigma$.
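To illustrate the notion of domination in a simple example (which is not needed in what follows), let $L={\bold{Z}}^2$, written additively, with $N_{\bold{R}}={\bold{R}}^2$, let $\Sigma$ be the fan of all faces of the cone ${\bold{R}}^2_{\geq 0}\subset N_{\bold{R}}$, so that $\cS({\bold{R}}^2_{\geq 0})={\bold{N}}^2\subset L$, and let $V=\{(a,b)\in {\bold{Z}}^2\;|\; a>0\}\cup \{(0,b)\;|\; b\geq 0\}$. Then $V$ is a valuative submonoid of $L$ with $V^\times=\{0\}$, and one checks directly from the above definitions that $V$ dominates the cone ${\bold{R}}^2_{\geq 0}$ but does not dominate any proper face of it.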
If $V$ dominates a cone in $\Sigma$, then such a cone is unique and is the smallest cone ${\sigma}\in\Sigma$ such that $\cS({\sigma})\subset V$.
If $\Sigma'$ is a rational finite subdivision of $\Sigma$, $V$ dominates some cone in $\Sigma$ if and only if $V$ dominates some cone in $\Sigma'$. In this case, if $V$ dominates ${\sigma}'\in \Sigma'$, $V$ dominates the smallest cone ${\sigma}\in \Sigma$ such that ${\sigma}'\subset {\sigma}$. \end{sbpara}
\begin{sblem}\label{l:val} Let $\Sigma$ be a finite rational fan in $N_{\bold{R}}$. Then we have a bijection from the set of all valuative submonoids $V$ of $L$ which dominate some cone in $\Sigma$ onto the projective limit $\varprojlim \;\Sigma'$, where $\Sigma'$ ranges over all finite rational subdivisions of $\Sigma$. This bijection sends $V$ to $({\sigma}_{\Sigma'})_{\Sigma'}$, where ${\sigma}_{\Sigma'}$ denotes the cone in $\Sigma'$ dominated by $V$. The inverse map is given by $({\sigma}_{\Sigma'})_{\Sigma'}\mapsto \bigcup_{\Sigma'} \cS({\sigma}_{\Sigma'})$. \end{sblem}
\begin{pf} Straightforward. \end{pf}
\begin{sblem}\label{l:val2} Let $\Sigma$ be a finite rational fan in $N_{\bold{R}}$. Then we have the following bijection from $\varprojlim_{\Sigma'} |{\operatorname{toric}}|(\Sigma')$, where $\Sigma'$ ranges over all finite rational subdivisions of $\Sigma$, to the set of all pairs $(V, h)$ of a valuative submonoid $V$ of $L$ dominating some cone in $\Sigma$ and a homomorphism $h\colon V^\times \to {\bold{R}}_{>0}$. If $(x_{\Sigma'})_{\Sigma'}$ is an element of $\varprojlim_{\Sigma'} |{\operatorname{toric}}|(\Sigma')$ and $({\sigma}_{\Sigma'}, h_{\Sigma'})$ $({\sigma}_{\Sigma'}\in \Sigma', h_{\Sigma'}: \cS({\sigma}_{\Sigma'})\to {\bold{R}}_{\geq 0})$ is the pair corresponding to $x_{\Sigma'}$ (\ref{rvtoric}), then the pair $(V, h)$ corresponding to $(x_{\Sigma'})_{\Sigma'}$ is as follows: $V=\bigcup_{\Sigma'} \cS({\sigma}_{\Sigma'})$, and $h$ is the homomorphism $V^\times\to{\bold{R}}_{>0}$ whose restriction to $\cS({\sigma}_{\Sigma'})^\times$ is $h_{\Sigma'}$ for any $\Sigma'$. \end{sblem}
\begin{pf} This can be shown by using \ref{l:val}. \end{pf}
\begin{sbprop}\label{4.1.9} Let $S$ be as in \ref{Sgen}, assume that we are given a chart $\cS\to M_S$ with $\cS$ an fs monoid, let $L=\cS^{{\rm{gp}}}$, let $N=\Hom(L, {\bold{Z}})$, and let $\Sigma$ be the fan in $N_{{\bold{R}}}$ of all faces of the cone $\Hom(\cS, {\bold{R}}^{\add}_{\geq 0})$. Here ${\bold{R}}^{\add}_{\geq 0}$ denotes ${\bold{R}}_{\geq 0}$ regarded as an additive monoid. Then we have a cartesian diagram of topological spaces $$\begin{matrix} S_{{\mathrm{val}}} &\to & \varprojlim_{\Sigma'} |{\operatorname{toric}}|(\Sigma')\\ \downarrow && \downarrow \\ S &\to& |{\operatorname{toric}}|(\Sigma)=\Hom(\cS, {\bold{R}}^{\mult}_{\geq 0})\end{matrix}$$ where $\Sigma'$ ranges over all finite rational subdivisions of $\Sigma$, and the lower row sends $s\in S$ to the homomorphism $f\mapsto |f(s)|$ $(f\in\cS)$. \end{sbprop}
\begin{pf} For $s\in S$, let $\cS(s)=\cS({\sigma})$ where ${\sigma}$ is the element of $\Sigma$ such that the image of $s$ in $|{\operatorname{toric}}|(\Sigma)$ corresponds to a pair $({\sigma}, h)$ for some $h:\cS({\sigma})^\times \to {\bold{R}}^{\mult}_{>0}$ (\ref{rvtoric}).
Then $\cS(s)^\times$ coincides with the inverse image of $\cO^\times_{S,s}$ under the canonical map $\cS^{{\rm{gp}}}\to M^{{\rm{gp}}}_{S,s}$, and $\cS(s)$ is generated by $\cS$ and $\cS(s)^\times$. We have $\cS({\sigma})/\cS({\sigma})^\times \overset{\cong}\to (M_S/\cO^\times_S)_s$.
By \ref{l:val2}, the fiber product $S \times_{|{\operatorname{toric}}|(\Sigma)} \varprojlim_{\Sigma'} |{\operatorname{toric}}|(\Sigma')$ is identified with the set of all triples $(s, V, h)$ where $s\in S$, $V$ is a valuative submonoid of $\cS^{{\rm{gp}}}$ such that $V\supset \cS$ and $V^\times \cap \cS=\cS(s)^\times$, and $h$ is a homomorphism $V^\times \to {\bold{R}}_{>0}^{\mult}$ whose restriction to $\cS(s)^\times$ coincides with the composition $\cS(s)^\times \to \cO_{S,s}^\times \to {\bold{R}}^{\mult}_{>0}$, where the last map is $f\mapsto |f(s)|$.
By the isomorphism $\cS({\sigma})/\cS({\sigma})^\times \overset{\cong}\to (M_S/\cO^\times_S)_s$, a valuative submonoid $V$ of $\cS^{{\rm{gp}}}$ such that $V\supset \cS$ and $V^\times \cap \cS=\cS(s)^\times$ corresponds bijectively to a valuative submonoid $V'$ of $(M^{{\rm{gp}}}_S/\cO^\times_S)_s$ such that $V'\supset (M_S/\cO^\times_S)_s$ and $(V')^\times \cap (M_S/\cO^\times_S)_s=\{1\}$. Furthermore, if $\tilde V'$ denotes the inverse image of $V'$ in $M^{{\rm{gp}}}_{S,s}$, then $(\tilde V')^\times$ is the pushout of $V^\times \leftarrow \cS(s)^\times \to \cO_{S,s}^\times$. Hence $h$ corresponds to a homomorphism $h': (\tilde V')^\times \to {\bold{R}}^{\mult}_{>0}$ whose restriction to $\cO_{S,s}^\times$ coincides with $f\mapsto |f(s)|$. Hence we have a bijection $(s, V, h) \mapsto (s,V',h')$ from the fiber product to $S_{{\mathrm{val}}}$. In the converse map $(s,V',h')\mapsto (s, V, h)$, $V$ is the inverse image of $V'$ under the canonical map $\cS^{{\rm{gp}}}\to (M_S^{{\rm{gp}}}/\cO^\times_S)_s$ and $h$ is the homomorphism $V^\times\to {\bold{R}}^{\mult}_{>0}$ induced by $h'$.
By using these explicit constructions of the bijection between $S_{{\mathrm{val}}}$ and the fiber product, it is easy to see that this bijection is a homeomorphism. \end{pf}
\begin{sbcor}\label{vproper} For $S$ as in \ref{Sgen}, the map $S_{{\mathrm{val}}}\to S$ is proper. \end{sbcor}
\begin{sblem}\label{valsrt} Let $S$ and $S'$ be as in \ref{Sgen} and assume that we are given a strict morphism $S'\to S$ of locally ringed spaces over ${\bold{R}}$ with log structures (for the word \lq\lq strict'', see \ref{gest6}). Then the canonical map $S'_{{\mathrm{val}}} \to S'\times_S S_{{\mathrm{val}}}$ is a homeomorphism. \end{sblem}
\begin{pf} For any $s'\in S'$ with image $s$ in $S$, the canonical map $(M_S/\cO^\times_S)_s\to (M_{S'}/\cO^\times_{S'})_{s'}$ is an isomorphism by the assumption. From this, we see that the map $S'_{{\mathrm{val}}}\to S'\times_S S_{{\mathrm{val}}}$ is bijective. Since this map is continuous and since both $S'_{{\mathrm{val}}}$ and $S'\times_S S_{{\mathrm{val}}}$ are proper over $S'$ (\ref{vproper}), this map is a homeomorphism. \end{pf}
\begin{sblem}\label{abslog} Let $S$ be as in \ref{Sgen} and let $|S|$ be the topological space $S$ with the sheaf of all ${\bold{R}}$-valued continuous functions. Endow $|S|$ with the log structure $M_{|S|}$ associated to the composition $M_S\to \cO_S\to \cO_{|S|}$, where the second arrow is $f\mapsto |f|$, which we regard as a pre-log structure. Here $|f|$ denotes the function $s\mapsto|f(s)|$.
Then $M_{|S|}$ is an fs log structure, and we have a canonical homeomorphism $|S|_{{\mathrm{val}}}\cong S_{{\mathrm{val}}}$. \end{sblem}
\begin{pf} If $\cS\to M_S$ is a chart with $\cS$ an fs monoid, then the composition $\cS\to M_S\to M_{|S|}$ is also a chart. Hence $M_{|S|}$ is an fs log structure. The canonical map $(M_S/\cO^\times_S)_s\to (M_{|S|}/\cO^\times_{|S|})_s$ is an isomorphism for any $s\in S$, and hence we have a canonical bijection $|S|_{{\mathrm{val}}}\to S_{{\mathrm{val}}}$. It is easy to see that this is a homeomorphism. \end{pf}
\begin{sbpara}\label{3.1.13} Assume now that $S$ is an object of ${\cal {B}}'_{\bold{R}}(\log)$ (\ref{cblog}). We endow $S_{{\mathrm{val}}}$ with a sheaf $\cO_{S_{{\mathrm{val}}}}$ of rings and a log structure $M_{S_{{\mathrm{val}}}}$ with sign as follows.
Locally on $S$, take a positive chart $\cS\to M_{S,>0}$ (\ref{pchar}), let $\Sigma$ be the fan of faces of the cone $\Hom(\cS, {\bold{R}}^{\add}_{\geq 0})$, and for a rational finite subdivision $\Sigma'$ of $\Sigma$, regard $S(\Sigma') := S\times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ by taking the fiber product in ${\cal {B}}'_{\bold{R}}(\log)$. Here we use the fact that the underlying topological space of this fiber product is the same as the fiber product of the underlying topological spaces, by \ref{gest6} (i) and by the fact that $S\to |{\operatorname{toric}}|(\Sigma)$ is strict. We define $\cO_{S_{{\mathrm{val}}}}$ (resp.\ $M_{S_{{\mathrm{val}}}}$) as the inductive limit of $\cO_{S(\Sigma')}$ (resp.\ $M_{S(\Sigma')}$) by using Proposition \ref{4.1.9}.
This sheaf of rings and the log structure with sign are independent of the choice of the chart and hence are defined globally. In fact, if we have two charts $\cS\to M_{S,>0}$ and $\cS'\to M_{S,>0}$, there is a third chart $\cS''\to M_{S,>0}$ with homomorphisms $\cS\to \cS''$ and $\cS'\to \cS''$ of charts. It is easy to see that the sheaf of rings and the log structure with sign given by the chart $\cS$ (resp.\ $\cS'$) are isomorphic to the ones given by the chart $\cS''$, and that the composite isomorphisms between the ones given by the chart $\cS$ and the ones given by the chart $\cS'$ are independent of the choice of the third chart $\cS''$.
We call $\cO_{S_{{\mathrm{val}}}}$ the {\it sheaf of real analytic functions}. \end{sbpara}
\begin{sbpara}\label{lmval} A log modification $S'\to S$ in ${\cal {B}}'_{\bold{R}}(\log)$ induces an isomorphism $$(S')_{{\mathrm{val}}}\overset{\cong}\to S_{{\mathrm{val}}}$$ of locally ringed spaces over ${\bold{R}}$ with log structures with sign. \end{sbpara}
\begin{pf} This is clear. \end{pf}
\begin{sbpara}\label{Vval1} For $S\in {\cal {B}}'_{\bold{R}}(\log)$ and for $x=(s, V, h)\in S_{{\mathrm{val}}}$, $V$ is identified with the inverse image of $(M_{S_{{\mathrm{val}}}}/\cO^\times_{S_{{\mathrm{val}}}})_x$ under the canonical map $(M^{{\rm{gp}}}_S/\cO_S^\times)_s \to (M^{{\rm{gp}}}_{S_{{\mathrm{val}}}}/\cO^\times_{S_{{\mathrm{val}}}})_x$, and $h: (\tilde V_{>0})^\times \to {\bold{R}}^{\mult}_{>0}$ coincides with the composition $(\tilde V_{>0})^\times \to \cO_{S_{{\mathrm{val}}},>0,x}\to {\bold{R}}^{\mult}_{>0}$, where the first arrow is induced from $\tilde V_{>0}\subset M_{S,>0,s}\to M_{S_{{\mathrm{val}}},>0,x}$ and the second arrow is $f\mapsto f(x)$. \end{sbpara}
\begin{sbpara}\label{Vval2} Let $S$ be a locally ringed space.
Then a log structure $M$ on $S$ is said to be {\it valuative} if it is integral and satisfies the following condition: for any local section $f$ of $M^{{\rm{gp}}}$, locally we have either $f\in M$ or $f^{-1}\in M$; that is, every stalk of $M$ is valuative.
By \ref{Vval1}, for $S\in {\cal {B}}'_{\bold{R}}(\log)$, the log structure of $S_{{\mathrm{val}}}$ is valuative. \end{sbpara}
\begin{sbpara}\label{gest9} Let $\cS_1$, $T$, $\bar T$, $Z$, $\bar Z$ and $T(s)\subset T$, $Z(s)\subset Z$ (for $s\in \bar Z$) be as in \ref{gest1}.
We give a description (1) below of the valuative space $\bar Z_{{\mathrm{val}}}$ associated to $\bar Z$, as a set. This will be used in \ref{orbittr}.
For a valuative submonoid $V$ of $\cS_1^{{\rm{gp}}}$, let $T(V):=\Hom(\cS_1^{{\rm{gp}}}/V^{\times}, {\bold{R}}_{>0})\subset T=\Hom(\cS_1, {\bold{R}}^{\mult}_{>0})$. Then we have:
(1) $\bar Z_{{\mathrm{val}}}$ is identified with the set of all triples $(s, V, Z')$ where $s\in \bar Z$, $V$ is a valuative submonoid of $\cS_1^{{\rm{gp}}}$ such that $V\supset \cS_1$ and such that $V^{\times}\cap \cS_1=\text{Ker}(\cS_1\to (M_{\bar Z}/\cO^\times_{\bar Z})_s)$, and $Z'$ is a $T(V)$-orbit in $Z(s)$. (Note that $Z(s)$ is a $T(s)$-torsor and $T(V) \subset T(s)$.)
This is proved as follows. Let $L=\cS_1^{{\rm{gp}}}$ and let $\Sigma$ be the fan of all faces of the cone $\Hom(\cS_1, {\bold{R}}^{\add}_{\geq 0})\subset N_{\bold{R}}=\Hom(L, {\bold{R}})$. Then by \ref{4.1.9}, $\bar Z_{{\mathrm{val}}}$ is the projective limit of the log modifications of $\bar Z$ corresponding to rational finite subdivisions $\Sigma'$ of $\Sigma$. By \ref{l:val}, the projective limit of the sets $\Sigma'$ is identified with the set of valuative submonoids $V$ as above. Hence the above description (1) of $\bar Z_{{\mathrm{val}}}$ follows from the descriptions in \ref{gest1} of the log modifications of $\bar Z$ as sets, by taking the projective limit. \end{sbpara}
\subsection{The category $\cC_{\bold{R}}({\mathrm{val}})^+$}
We define categories $\cC_{\bold{R}}({\mathrm{val}})$ and $\cC_{\bold{R}}({\mathrm{val}})^+\subset \cC_{\bold{R}}({\mathrm{val}})$. In Section 3.3, we will see that the valuative spaces associated to the spaces of SL(2)-orbits and the space of Borel--Serre orbits belong to $\cC_{\bold{R}}({\mathrm{val}})^+$.
\begin{sbpara}\label{satval} Let $\cC_{\bold{R}}({\mathrm{val}})$ be the category of objects of $\cC_{\bold{R}}$ (\ref{2.5.1}) endowed with a valuative log structure with sign (\ref{Vval2}). We have $\cC_{\bold{R}}({{\rm sat}}) \supset \cC_{\bold{R}}({\mathrm{val}})$ (\ref{cblog} (2)). \end{sbpara}
\begin{sbprop}\label{B++C+} Let $S$ be an object of ${\cal {B}}'_{\bold{R}}(\log)^+$. Then $S_{{\mathrm{val}}}$ belongs to $\cC_{\bold{R}}({\mathrm{val}})$. \end{sbprop}
For the proof, we use the following lemma.
\begin{sblem}\label{prosys} Let $(S_{\lambda})_{\lambda}$ be a directed projective system in $\cC_{\bold{R}}$, let $S$ be the projective limit of the topological spaces $S_{\lambda}$, and endow $S$ with the inductive limit of the inverse images of $\cO_{S_{\lambda}}$. Assume that there is an open set $S'$ of $S$ satisfying the following conditions {\rm (i)} and {\rm (ii)}.
{\rm (i)} $S'$ belongs to $\cC_{\bold{R}}$.
{\rm (ii)} For any open set $U$ of $S$, the map $\cO_S(U)\to \cO_S(U\cap S')$ is injective.
Then $S\in \cC_{\bold{R}}$.
\end{sblem}
\begin{pf} Let $\cF$ be the sheaf on $S$ of morphisms to ${\bold{R}}^n$ of locally ringed spaces over ${\bold{R}}$, where ${\bold{R}}^n$ is endowed with the sheaf of all real analytic functions. We have a morphism $a:\cF\to \cO_S^n$ given by $f\mapsto (f^*(t_j))_{1\leq j\leq n}$, where $t_j$ are the standard coordinate functions of ${\bold{R}}^n$. We also have a morphism $b: \cO_S^n \to \cF$, which comes from the fact that, since the $S_{\lambda}$ belong to $\cC_{\bold{R}}$, $\cO_S^n$ is regarded as the inductive limit of the inverse images of the sheaves on $S_{\lambda}$ of morphisms to ${\bold{R}}^n$. As is easily seen, the composition $ab: \cO_S^n\to \cO_S^n$ is the identity morphism.
We prove that $ba: \cF\to \cF$ is the identity morphism. Let $f\in \cF(U)$ with $U$ an open set of $S$. It is easy to see that $f$ and $ba(f)$ induce the same underlying continuous map $U\to {\bold{R}}^n$, which we denote by $g$. It remains to prove that the homomorphisms $g^{-1}(\cO_{{\bold{R}}^n})\to \cO_U$ given by $f$ and $ba(f)$ coincide. Since $\cO_S(V) \to \cO_S(V\cap S')$ is injective for any open set $V$ of $U$, it is sufficient to prove that the restrictions of $f$ and $ba(f)$ to $U\cap S'$ coincide. But $S'$ belongs to $\cC_{\bold{R}}$, and hence $ba$ gives the identity morphism of $\cF|_{S'}$. \end{pf}
\begin{sbpara} We prove Proposition \ref{B++C+}. By \ref{lmval}, it is sufficient to prove this for objects of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. As in \ref{b[[+]]}, let $Y\subset {\bold{R}}^n_{\geq 0}\times A$ be a standard object of ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$. It is sufficient to prove that $Y_{{\mathrm{val}}}$ belongs to $\cC_{\bold{R}}$.
Let $L={\bold{Z}}^n$, let $\Sigma$ be the set of all faces of the cone ${\bold{R}}^n_{\geq 0}\subset \Hom(L, {\bold{R}}^{\add})= {\bold{R}}^n$, and for a rational finite subdivision $\Sigma'$ of $\Sigma$, let $Y(\Sigma')= \{(t, x)\in |{\operatorname{toric}}|(\Sigma') \times A\;|\; x\in A_{J(t)}\}$ where $J(t)= \{j\;|\; 1\leq j\leq n, t_j=0\}$ with $t_j$ the $j$-th component of the image of $t$ in $|{\operatorname{toric}}|(\Sigma) ={\bold{R}}^n_{\geq 0}$. We apply Lemma \ref{prosys} by taking the projective system $(Y(\Sigma'))_{\Sigma'}$ in $\cC_{\bold{R}}$ as $(S_{\lambda})_{\lambda}$ and by taking the open set ${\bold{R}}^n_{>0} \times A$ of $Y_{{\mathrm{val}}}$ as $S'$. Then the projective limit $S$ in \ref{prosys} is $Y_{{\mathrm{val}}}$. The injectivity of $\cO_S(U) \to \cO_S(U\cap S')$ for any open set $U$ of $Y_{{\mathrm{val}}}$ is seen easily. Hence $Y_{{\mathrm{val}}}$ belongs to $\cC_{\bold{R}}$ by Lemma \ref{prosys}. \end{sbpara}
\begin{sbpara}\label{c+} We define a full subcategory $\cC_{\bold{R}}({\mathrm{val}})^+$ of $\cC_{\bold{R}}({\mathrm{val}})$. This is the category of all objects which are locally isomorphic to open subobjects of $S_{{\mathrm{val}}}$ with $S$ an object of ${\cal {B}}'_{\bold{R}}(\log)^+$.
We can replace ${\cal {B}}'_{\bold{R}}(\log)^+$ by ${\cal {B}}'_{\bold{R}}(\log)^{[[+]]}$ in this definition and obtain the same category $\cC_{\bold{R}}({\mathrm{val}})^+$.
Hence $\cC_{\bold{R}}({\mathrm{val}})^+$ is the category of objects which are locally an open subobject of $$Y_{{\mathrm{val}}}= \{(t,x)\in ({\bold{R}}^n_{\geq 0})_{{\mathrm{val}}}\times A\;|\; x\in A_{J(t)}\}.$$ Here $n$, $A$, $(A_J)_J$ and $Y$ are as in \ref{b[[+]]} and $J(t)=\{j\;|\; 1\leq j\leq n, t_j=0\}$, where $t_j$ denotes the $j$-th component of the image of $t$ in ${\bold{R}}^n_{\geq 0}$. \end{sbpara}
\begin{sbprop}\label{cv+2} For any object $S$ of ${\cal {B}}'_{\bold{R}}(\log)$ and for any object $X$ of $\cC_{\bold{R}}({\mathrm{val}})$, the canonical map ${\rm {Mor}}(X, S_{{\mathrm{val}}})\to {\rm {Mor}}(X, S)$ is bijective. Consequently, if $S$ is an object of ${\cal {B}}'_{\bold{R}}(\log)^+$, $S_{{\mathrm{val}}}$ represents the functor $X\mapsto {\rm {Mor}}(X, S)$ from $\cC_{\bold{R}}({\mathrm{val}})$ to (Sets). \end{sbprop}
\begin{pf} It is sufficient to prove that in the situation of \ref{4.1.9}, the canonical map ${\rm {Mor}}(X, S\times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')) \to {\rm {Mor}}(X,S)$ is bijective. Here $S\times_{|{\operatorname{toric}}|(\Sigma)} |{\operatorname{toric}}|(\Sigma')$ denotes the fiber product in ${\cal {B}}'_{\bold{R}}(\log)$. By \ref{fiberpr}, it is the fiber product in $\cC_{\bold{R}}({{\rm sat}})$. Hence it is sufficient to prove that the map ${\rm {Mor}}(X, |{\operatorname{toric}}|(\Sigma')) \to {\rm {Mor}}(X, |{\operatorname{toric}}|(\Sigma))$ is bijective. This last fact is reduced to Proposition \ref{cv+0}. \end{pf}
\begin{sbprop}\label{cv+3} $(1)$ The category $\cC_{\bold{R}}({\mathrm{val}})^+$ has finite products. We will denote the product in $\cC_{\bold{R}}({\mathrm{val}})^+$ by $X\times_{{\mathrm{val}}} Y$.
$(2)$ A finite product in $\cC_{\bold{R}}({\mathrm{val}})^+$ is a finite product in $\cC_{\bold{R}}({\mathrm{val}})$.
$(3)$ The functor ${\cal {B}}'_{\bold{R}}(\log)^+\to \cC_{\bold{R}}({\mathrm{val}})^+\;;\;S\mapsto S_{{\mathrm{val}}}$ preserves finite products.
$(4)$ For objects $S_1,\dots,S_n$ of $\cC_{\bold{R}}({\mathrm{val}})^+$, the product of $S_1,\dots, S_n$ in the category $\cC_{\bold{R}}({{\rm sat}})$ exists. As a topological space, it is the product of the topological spaces $S_j$. We will denote this product in $\cC_{\bold{R}}({{\rm sat}})$ by $S_1\times_{{{\rm sat}}}\dots \times_{{{\rm sat}}} S_n$. \end{sbprop}
\begin{pf} If $Y$ and $Y'$ are objects of ${\cal {B}}'_{\bold{R}}(\log)^+$, then by Proposition \ref{cv++3}, $(Y\times Y')_{{\mathrm{val}}}$ is an object of $\cC_{\bold{R}}({\mathrm{val}})^+$, and for any object $X$ of $\cC_{\bold{R}}({\mathrm{val}})^+$, we have $${\rm {Mor}}(X, (Y\times Y')_{{\mathrm{val}}})= {\rm {Mor}}(X, Y\times Y') = {\rm {Mor}}(X, Y) \times {\rm {Mor}}(X, Y') = {\rm {Mor}}(X, Y_{{\mathrm{val}}}) \times {\rm {Mor}}(X, (Y')_{{\mathrm{val}}}),$$ where the first and the third equalities follow from Proposition \ref{cv+2}, and the second equality follows from Proposition \ref{fiberpr} (2). This proves (1), (2), (3).
We prove (4). Locally on each $S_j$, we have $S_j=(S'_j)_{{\mathrm{val}}}$ for an object $S'_j$ of ${\cal {B}}'_{\bold{R}}(\log)^+$. Locally on each $S'_j$, take a chart $\cS_j\to M_{S'_j}$, let $\Sigma_j$ be the fan of all faces of the cone $\Hom(\cS_j, {\bold{R}}^{\add}_{\geq 0})$, and consider $S= \varprojlim \prod_{j=1}^n S'_j \times_{|{\operatorname{toric}}|(\Sigma_j)} |{\operatorname{toric}}|(\Sigma'_j)$ where $\Sigma'_j$ ranges over all rational finite subdivisions of $\Sigma_j$.
Endow $S$ with the inductive limit of the inverse images of $\cO$ and of the log structures with sign of $\prod_{j=1}^n S'_j \times_{|{\operatorname{toric}}|(\Sigma_j)} |{\operatorname{toric}}|(\Sigma'_j)$. Then $S$ belongs to $\cC_{\bold{R}}({{\rm sat}})$ by Lemma \ref{prosys}, and is the product of the $S_j$ in $\cC_{\bold{R}}({{\rm sat}})$. This locally constructed $S$ glues to a global $S$. \end{pf}
The following lemma will be used in Section 3.4.
\begin{sblem}\label{timesval} Let $n\geq 0$ and let $S_j\to S'_j$ $(1\leq j\leq n)$ be morphisms in $\cC_{\bold{R}}({\mathrm{val}})^+$ having the Kummer property of log structure in the sense (K) below. Let $S$ (resp.\ $S'$) be the product $S_1\times_{{{\rm sat}}}\dots\times_{{{\rm sat}}} S_n$ (resp.\ $S_1'\times_{{{\rm sat}}} \dots\times_{{{\rm sat}}} S_n'$) in the category $\cC_{\bold{R}}({{\rm sat}})$, and let $S_{{\mathrm{val}}}=S_1\times_{{\mathrm{val}}} \dots \times_{{\mathrm{val}}} S_n$ (resp.\ $S'_{{\mathrm{val}}}= S'_1\times_{{\mathrm{val}}} \dots \times_{{\mathrm{val}}} S'_n$) be the product in the category $\cC_{\bold{R}}({\mathrm{val}})^+$ (\ref{cv+3}).
(K) We say that a morphism $X\to Y$ of locally ringed spaces with log structures has the Kummer property of log structure if for any $x\in X$ with image $y$ in $Y$, the homomorphism $(M_Y/\cO^\times_Y)_y \to (M_X/\cO_X^\times)_x$ is injective, and for any $a\in (M_X/\cO_X^\times)_x$, there is $m\geq 1$ such that $a^m$ belongs to the image of $(M_Y/\cO^\times_Y)_y$.
Then the diagram $$\begin{matrix} S_{{\mathrm{val}}}&\to &S'_{{\mathrm{val}}}\\ \downarrow &&\downarrow\\ S &\to &S'\end{matrix}$$ is cartesian in the category of topological spaces. \end{sblem}
\begin{pf} The set $S_{{\mathrm{val}}}$ is identified with the set of all triples $(s,V, h)$ where $s\in S$, $V$ is a valuative submonoid of $(M^{{\rm{gp}}}_S/\cO^\times_S)_s$ such that $V\supset (M_S/\cO_S^\times)_s$ and $V^\times \cap (M_S/\cO_S^\times)_s=\{1\}$, and $h$ is a homomorphism $(\tilde V)^\times \to {\bold{R}}_{>0}$, where $\tilde V$ denotes the inverse image of $V$ in $M^{{\rm{gp}}}_{S,>0,s}$, such that the restriction of $h$ to $\cO_{S,>0,s}^\times$ coincides with $f\mapsto f(s)$. Furthermore, $(M_S/\cO_S^\times)_s \cong \prod_{j=1}^n (M_{S_j}/\cO^\times_{S_j})_{s_j}$, where $s_j$ denotes the image of $s$ in $S_j$. Similar statements hold for $S'$. From these, we see that the diagram is cartesian in the category of sets. Since $S_{{\mathrm{val}}}$ and the fiber product $E$ of $S\to S'\leftarrow S'_{{\mathrm{val}}}$ in the category of topological spaces are proper over $S$, we see that the canonical map $S_{{\mathrm{val}}}\to E$ is a homeomorphism. \end{pf}
\subsection{$D_{{\rm {BS}},{\mathrm{val}}}$, $D^I_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$}\label{4.3}
\begin{sbpara} We define $$D_{{\rm {BS}},{\mathrm{val}}}, \quad D^I_{{\rm{SL}}(2),{\mathrm{val}}}, \quad D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}, \quad D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$$ as the valuative spaces associated to the objects $$D_{{\rm {BS}}}, \quad D^I_{{\rm{SL}}(2)}, \quad D^{II}_{{\rm{SL}}(2)}, \quad D^{\star}_{{\rm{SL}}(2)}$$ of ${\cal {B}}'_{\bold{R}}(\log)$ (\ref{3.1.13}), respectively. By \ref{slis+}, \ref{B++C+} and \ref{c+}, they belong to $\cC_{\bold{R}}({\mathrm{val}})^+$.
We call $D_{{\rm {BS}},{\mathrm{val}}}$ the space of valuative Borel--Serre orbits, and call the other spaces $D^I_{{\rm{SL}}(2),{\mathrm{val}}}$ etc.\ spaces of valuative ${\rm{SL}}(2)$-orbits.
$D^I_{{\rm{SL}}(2),{\mathrm{val}}}$ and $D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}$ are identified as sets, because $D^I_{{\rm{SL}}(2)}$ and $D^{II}_{{\rm{SL}}(2)}$ are identified as sets and the morphism $D^I_{{\rm{SL}}(2)}\to D^{II}_{{\rm{SL}}(2)}$ is strict (\ref{valsrt}). They are denoted simply by $D_{{\rm{SL}}(2),{\mathrm{val}}}$ when we regard them just as sets. \end{sbpara}
\begin{sbpara} Since a log modification induces an isomorphism of the associated valuative spaces (\ref{lmval}), we have $$D^{\star,+}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\cong}\to D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\cong}\to D^{\star,-}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\cong}{\leftarrow} D^{\star,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}.$$ Hence the morphisms $$D^{\star,+}_{{\rm{SL}}(2)} \to D^{II}_{{\rm{SL}}(2)}, \quad D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}\to D_{{\rm {BS}}}$$ (Section 2.5, Section 2.6) induce morphisms $$D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}} \to D^{II}_{{\rm{SL}}(2),{\mathrm{val}}},\quad D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}.$$ \end{sbpara}
\begin{sbpara}\label{orbittr} These valuative spaces are described as sets as follows.
Let the situations (a)--(d) and the notation be as in \ref{sit2} and \ref{objass}. By \ref{gest9}, we have:
As a set, $\frak D_{{\mathrm{val}}}$ is identified with the set of all triples $(x, V, Z)$ where $x\in \frak D$, $V$ is a valuative submonoid of $(M^{{\rm{gp}}}_{\frak D}/\cO^\times_{\frak D})_x=X(S_x)$ such that $X(S_x)^+ \subset V$ and $X(S_x)^+ \cap V^\times =\{1\}$, and $Z$ is a $T(V)$-orbit in the $T(x)$-torsor $Z(x)$. Here $$T(V) := \Hom(X(S_x)/V^\times, {\bold{R}}^{\mult}_{>0})\subset T(x)=\Hom(X(S_x), {\bold{R}}^{\mult}_{>0}).$$ \end{sbpara}
\begin{sbpara}\label{orbitlog} Let the notation be as in \ref{orbittr}. For a point $z=(x, V, Z)\in \frak D_{{\mathrm{val}}}$, the stalk $(M_{\frak D_{{\mathrm{val}}}}/\cO^\times_{\frak D_{{\mathrm{val}}}})_z$ is described as follows. In the situations (a)--(c), $(M_{\frak D_{{\mathrm{val}}}}/\cO^\times_{\frak D_{{\mathrm{val}}}})_z= V/V^\times$. In the situation (d), $(M_{\frak D_{{\mathrm{val}}}}/\cO^\times_{\frak D_{{\mathrm{val}}}})_z= V'/(V')^\times$ where $V'=V\cap (X(S_x)^+)^{{\rm{gp}}}$. \end{sbpara}
\begin{sbpara}\label{another} In \cite{KU0} 2.6 and \cite{KU} 5.1.6, which treated the pure case, we defined the set $D_{{\rm {BS}},{\mathrm{val}}}$ in a different style. Following the style in \cite{KU0} 2.6 and \cite{KU} 5.1.6, we can define $D_{{\rm {BS}},{\mathrm{val}}}$ also as the set of all triples $(T, V, Z)$ where $T$ is an ${\bold{R}}$-split torus in $G_{\bold{R}}({\rm{gr}}^W)$, $V$ is a valuative submonoid of the character group $X(T)$ of $T$, and $Z\subset D$, satisfying the following conditions (i)--(iv).
(i) Let $T_{>0}$ be the connected component of $T({\bold{R}})$ containing the unit element. Then $Z$ is either a $T_{>0}$-orbit for the lifted action \ref{liftac} or an ${\bold{R}}_{>0}\times T$-orbit in $D_{{\rm{nspl}}}$ for the lifted action. Here $t\in {\bold{R}}_{>0}$ acts on ${\rm{gr}}^W$ by the multiplication by $t^w$ on ${\rm{gr}}^W_w$.
(ii) Let ${{\bold r}}\in Z$, let $\bar {{\bold r}}:={{\bold r}}({\rm{gr}}^W)\in D({\rm{gr}}^W)$, let $K_{\bar {{\bold r}}}$ be the maximal compact subgroup of $G_{\bold{R}}({\rm{gr}}^W)$ associated to $\bar {{\bold r}}$, and let $\theta_{K_{\bar {{\bold r}}}}: G_{\bold{R}}({\rm{gr}}^W)\to G_{\bold{R}}({\rm{gr}}^W)$ be the Cartan involution associated to $K_{\bar{{\bold r}}}$. Then $\theta_{K_{\bar {{\bold r}}}}(t)=t^{-1}$ for any $t\in T$.
(iii) $V^\times =\{1\}$.
(iv) Consider the direct sum decomposition ${\rm{gr}}^W=\bigoplus_{\chi\in X(T)} ({\rm{gr}}^W)_{\chi}$ by the action of $T$. Then for any $\chi\in X(T)$, the subspace $\bigoplus_{\chi'\in V^{-1}\chi} ({\rm{gr}}^W)_{\chi'}$ is ${\bold{Q}}$-rational.
The relation with the presentation \ref{orbittr} of $D_{{\rm {BS}},{\mathrm{val}}}$ is as follows. $(P, V, Z)\in D_{{\rm {BS}},{\mathrm{val}}}$ in the presentation in \ref{orbittr} corresponds to $(T, V', Z)$ in the above presentation, where $T\subset S_P$ is the annihilator of $V^{\times}$ in $S_P$ and $V'=V/V^\times \subset X(T)$. The group $T(V)$ in \ref{orbittr} coincides with $T_{>0}$ in the above (i). Conversely, for a triple $(T, V, Z)$ here, the corresponding triple in the presentation of $D_{{\rm {BS}},{\mathrm{val}}}$ in \ref{orbittr} is $(P, V', Z)$ where $P$ is the ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}({\rm{gr}}^W)$ defined as the connected component (as an algebraic group) of the algebraic subgroup of $G_{\bold{R}}({\rm{gr}}^W)$ consisting of all elements which preserve the subspaces $\bigoplus_{\chi'\in V^{-1}\chi} ({\rm{gr}}^W)_{\chi'}$ of ${\rm{gr}}^W$, and $V'$ is the inverse image of $V$ under the homomorphism $X(S_P)\to X(T)$ induced by the canonical homomorphism $T\to S_P$. \end{sbpara}
\begin{sbpara}\label{2slval} We describe the map $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\to D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}$. This map is described as $x=(p,V, Z)\mapsto (p, V', Z')$ using \ref{orbittr} as follows.
(0) On $D$, this map is the identity map.
(1) For an $A$-orbit which does not belong to $D$, $V'=V$ and $Z'=Z_{{\rm{spl}}}$.
(2) Assume that $(p,V,Z)$ is a $B$-orbit, let $n$ be the rank of $p$, and identify $X(S_x)={\bold{Z}}\times X(S_p)$ with ${\bold{Z}}\times {\bold{Z}}^n$. Let $e= (1, -1, \dots, -1)\in {\bold{Z}}\times {\bold{Z}}^n$.
(2.1) Assume $-e\notin V$ (hence $e\in V$). Then $V'= \{a=(a_0, a_1,\dots, a_n)\in {\bold{Z}}^{n+1}\;|\; a - a_0e\in V\}$, and $Z'=Z$.
(2.2) Assume $e, -e\in V$. Then $V'=\{a\in {\bold{Z}}^n\;|\; (0, a)\in V\}$, and $Z'=Z$.
(2.3) Assume $e\notin V$ (hence $-e\in V$). Then $V'=\{a\in {\bold{Z}}^n\;|\; (0, a)\in V\}$, and $Z'=Z_{{\rm{spl}}}$. \end{sbpara}
\subsection{The morphism $\eta^{\star}: D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$}\label{ss:SB}
\begin{sbpara}\label{eta1} The map $\eta^{\star}: D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}} \to D_{{\rm {BS}}, {\mathrm{val}}}$ is described as follows. This description is similar to the pure case in \cite{KU0} Theorem 3.11, \cite{KU} Theorem 5.2.11.
The map $\eta^{{\rm{st}}ar}$ sends $(p, V, Z)\in D_{{\rm{SL}}(2),{\mathrm{val}}}^{{\rm{st}}ar}$ in the presentation of $D_{{\rm{SL}}(2),{\mathrm{val}}}^{{\rm{st}}ar}$ in \ref{orbittr} to $(T, V', Z)\in D_{{\rm {BS}},{\mathrm{val}}}$ in the presentation of $D_{{\rm {BS}},{\mathrm{val}}}$ in \ref{another}, where $T$ and $V'$ are as follows. Let $T'{\subset}set S_p$ be the annihilator of $V^\times{\subset}set X(S_p)$. Then $T$ is the image of $T'$ under $\tau^{{\rm{st}}ar}_p$ in $G_{\bold{R}}({{\gamma}mma}r^W)$. $V'$ is the inverse image of $V/V^\times{\subset}set X(T')$ under the homomorphism $X(T)\to X(T')$ induced by the canonical homomorphism $T'\to T$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{eta2} We also have the following description of $\eta^{{\rm{st}}ar}$ by regarding $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ as $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}$. Let the notation be as in Section 2.6. By \ref{gest9}, an element of $D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}$ is written as $(p, P, V, Z)$ where $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$, $P\in {\Cal {P}}(p)$, $V$ is a valuative submonoid of $X(S_{p,P})$ such that $X(S_{p,P})^+{\subset}set V$ and $X(S_{p,P})^+\cap V^\times =\{1\}$, and $Z$ is either a $\tau^{{\rm{st}}ar}(\Hom(X(S_{p,P})/V^\times, {\bold{R}}_{>0}))$-orbit in $D$ or a $\tilde \tau^{{\rm{st}}ar}({\bold{R}}_{>0}\times \Hom(X(S_{p,P})/V^\times, {\bold{R}}_{>0}))$-orbit in $D_{\operatorname{naive}spl}$ for the lifted action, such that the image of $Z$ in $D({{\gamma}mma}r^W)$ is contained in $Z(p)$. The map $\eta^{{\rm{st}}ar}$ sends $(p, P, V, Z)\in D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}$ to $(P, V', Z)\in D_{{\rm {BS}},{\mathrm{val}}}$ in the presentation of $D_{{\rm {BS}},{\mathrm{val}}}$ as a set in \ref{orbittr}, where $V'{\subset}set X(S_P)$ is the inverse image of $V$ under the homomorphism $X(S_P)\to X(S_{p,P})$ induced by the canonical homomorphism $S_{p,P}\to S_P$. \end{sbpara} \bold egin{sblem}{\lambda}bel{eta3} The morphism $\eta^{{\rm{st}}ar}: D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ has the Kummer property of log structure in the sense of \ref{timesval} (K). \end{sblem} \bold egin{pf} Let $x=(p,P,V,Z)\in D^{{\rm{st}}ar,{\rm {BS}}}_{{\rm{SL}}(2),{\mathrm{val}}}= D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ (\ref{eta2}) and let $y$ be the image of $x$ in $D_{{\rm {BS}},{\mathrm{val}}}$. By \ref{orbitlog}, in the case of $A$-orbit (resp.\ $B$-orbit), $V$ is a valuative submonoid of $X(S_{p,P})$ (resp.\ ${\bold{Z}}\times X(S_{p,P})$) and the stalk of $M/\cO^\times$ of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ at $x$ is identified with $V/V^\times$. On the other hand, the stalk of $M/\cO^\times$ of $D_{{\rm {BS}},{\mathrm{val}}}$ at $y$ is identified with $V'/(V')^\times$ where in the case of $A$-orbit (resp.\ $B$-orbit), $V'$ is the inverse image of $V$ in $(X(S_P)^+)^{{{\gamma}mma}p}$ (resp.\ ${\bold{Z}}\times (X(S_P)^+)^{{{\gamma}mma}p}$) for the canonical map $(X(S_P)^+)^{{{\gamma}mma}p}{\subset}set X(S_P) \to X(S_{p,P})$. Note that $(X(S_P)^+)^{{{\gamma}mma}p}$ is of finite index in $X(S_P)$. Furthermore, since the kernel of $S_{p,P}\to S_P$ is finite, the cokernel of $X(S_P)\to X(S_{p,P})$ is finite. Hence the map $V'/(V')^\times \to V/V^\times$ is injective, and for any element $a$ of $V/V^\times$, there is $m{{\gamma}mma}eq 1$ such that $a^m$ belongs to the image of $V'/(V')^\times$.
\end{pf} \bold egin{sbthm} {\lambda}bel{SL2BS} The map $\eta^{{\rm{st}}ar}:D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ in $\cC_{\bold{R}}({\mathrm{val}})^+$ has the following properties. (1) The map $\eta^{{\rm{st}}ar}:D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ is injective. (2) Let $Q\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$ and define the open set $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}(Q)$ of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ as the inverse image of the open set $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)(Q)$ of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)$. Then the topology of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}(Q)$ coincides with the restriction of the topology of $D_{{\rm {BS}},{\mathrm{val}}}$ through $\eta^{{\rm{st}}ar}$. (3) The diagram $$\bold egin{matrix} D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} &\overset{\eta^{{\rm{st}}ar}}\to& D_{{\rm {BS}},{\mathrm{val}}}\\ \downarrow &&\downarrow\\ \prod_w D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)_{{\mathrm{val}}} &\overset{\eta}\to & \prod_w D_{{\rm {BS}}}({{\gamma}mma}r^W_w)_{{\mathrm{val}}} \end{matrix}$$ is cartesian in the category of topological spaces. \end{sbthm} \bold egin{pf} We prove (3) first. For each $x=(x_w)_w\in \prod_w D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)_{{\mathrm{val}}}$ with image $y=(y_w)_w$ in $\prod_w D_{{\rm {BS}}}({{\gamma}mma}r^W_w)_{{\mathrm{val}}}$, and for each $w\in {\bold{Z}}$, there are an open neighborhood $U_w$ of $x_w$ and an open neighborhood $V_w$ of $y_w$ having the following properties (i) and (ii). (i) The image of $U_w$ in $D_{{\rm {BS}}}({{\gamma}mma}r^W_w)_{{\mathrm{val}}}$ is contained in $V_w$. (ii) Let $U$ be the inverse image of $\prod_w U_w$ in $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ and let $V$ be the inverse image of $\prod_w V_w$ in $D_{{\rm {BS}},{\mathrm{val}}}$. Take any $F\in D({{\gamma}mma}r^W)$ and let $\bar L= \bar {\cal {L}}(F)$. Then we have a commutative diagram $$\bold egin{matrix}U &\cong &\prod_{{\mathrm{val}},w} U_w\times_{{\mathrm{val}}} {\rm{spl}}(W) \times_{{\mathrm{val}}} \bar L\\ \downarrow &&\downarrow\\ V &\cong &\prod_{{\mathrm{val}},w} V_w \times_{{\mathrm{val}}} {\rm{spl}}(W) \times_{{\mathrm{val}}} \bar L \end{matrix}$$ where $\prod_{{\mathrm{val}},w}$ is the product in $\cC_{\bold{R}}({\mathrm{val}})^+$, the upper row is an isomorphism over $\prod_{{\mathrm{val}},w} U_w$, and the lower row is an isomorphism over $\prod_{{\mathrm{val}},w} V_w$. By \ref{timesval} and \ref{eta3}, the following diagram is cartesian in the category of topological spaces. $$\bold egin{matrix} \prod_{{\mathrm{val}},w} U_w \times_{{\mathrm{val}}} {\rm{spl}}(W) \times_{{\mathrm{val}}} \bar L &\to & \prod_{{\mathrm{val}},w} V_w\times_{{\mathrm{val}}} {\rm{spl}}(W) \times_{{\mathrm{val}}} \bar L\\ \downarrow &&\downarrow\\ \prod_w U_w \times {\rm{spl}}(W) \times \bar L &\to & \prod_w V_w \times {\rm{spl}}(W) \times \bar L. \end{matrix}$$ (3) of Theorem \ref{SL2BS} follows from these two cartesian diagrams. Next we prove (1). The injectivity was proved in \cite{KU0} Theorem 3.11 in the pure case. Hence the map $\prod_w D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)_{{\mathrm{val}}} \to \prod_w D_{{\rm {BS}}}({{\gamma}mma}r^W_w)_{{\mathrm{val}}}$ is injective. By (3), this proves the injectivity of $D_{{\rm{SL}}(2),{\mathrm{val}}}^{{\rm{st}}ar}\to D_{{\rm {BS}},{\mathrm{val}}}$. We prove (2). Assume first that we are in the pure situation of weight $w$.
Let ${\cal T}_1$ be the topology of $D_{{\rm{SL}}(2)}$ defined in Part II, and let ${\cal T}_{1,{\mathrm{val}}}$ be the topology of $D_{{\rm{SL}}(2),{\mathrm{val}}}$ defined in this Part IV. Let ${\cal T}_{2,{\mathrm{val}}}$ be the topology of $D_{{\rm{SL}}(2),{\mathrm{val}}}$ which is the weakest topology satisfying the following two conditions (i) and (ii). (i) For any open set $U$ of $D_{{\rm {BS}},{\mathrm{val}}}$, the pull-back of $U$ in $D_{{\rm{SL}}(2),{\mathrm{val}}}$ is open. (ii) For any $Q\in \prod_w {\cal {W}}({{\gamma}mma}r^W_w)$, $D_{{\rm{SL}}(2),{\mathrm{val}}}(Q)$ is open. Let ${\cal T}_2$ be the topology of $D_{{\rm{SL}}(2)}$ as a quotient space of $D_{{\rm{SL}}(2),{\mathrm{val}}}$ which is endowed with the topology ${\cal T}_{2,{\mathrm{val}}}$. Recall that in \cite{KU0} and \cite{KU} which treated the pure case, the topologies of $D_{{\rm{SL}}(2)}$ and $D_{{\rm{SL}}(2),{\mathrm{val}}}$ were defined as ${\cal T}_2$ and ${\cal T}_{2,{\mathrm{val}}}$, respectively (not as in the present series of papers). The study of ${\cal T}_2$ in \cite{KU} Section 10 and the study of ${\cal T}_1$ in Part II, Section 3.4 show that ${\cal T}_1={\cal T}_2$. Since the map $\eta^{{\rm{st}}ar}$ from $D_{{\rm{SL}}(2),{\mathrm{val}}}$ with ${\cal T}_{1,{\mathrm{val}}}$ to $D_{{\rm {BS}},{\mathrm{val}}}$ is continuous as we have seen in Section 2.6, we have that ${\cal T}_{1,{\mathrm{val}}} {{\gamma}mma}eq {\cal T}_{2,{\mathrm{val}}}$. Since the map $D_{{\rm{SL}}(2),{\mathrm{val}}} \to D_{{\rm{SL}}(2)}$ is proper for ${\cal T}_{1,{\mathrm{val}}}$ (\ref{vproper}) and also for ${\cal T}_{2,{\mathrm{val}}}$ (\cite{KU} Theorem 3.14), we have ${\cal T}_{1,{\mathrm{val}}}={\cal T}_{2,{\mathrm{val}}}$. Thus we have proved (2) in the pure case. By (3), we have a cartesian diagram of topological spaces $$\bold egin{matrix} D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} & \to & D_{{\rm {BS}},{\mathrm{val}}}\\ \downarrow && \downarrow\\ (\prod_w D_{{\rm{SL}}(2)}({{\gamma}mma}r^W_w)_{{\mathrm{val}}}) \times {\rm{spl}}(W) &\to& (\prod_w D_{{\rm {BS}}}({{\gamma}mma}r^W_w)_{{\mathrm{val}}})\times {\rm{spl}}(W). \end{matrix}$$ The vertical arrows are proper by \ref{Lbund} and Part I, Cor. 8.5. Hence (2) is reduced to the pure case. \end{pf} \bold egin{sbpara} As in (2) of Theorem \ref{SL2BS}, the topology of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}(Q)$ coincides with the induced topology from $D_{{\rm {BS}},{\mathrm{val}}}$. We give an example in which the topology of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ is not the induced one from $D_{{\rm {BS}},{\mathrm{val}}}$. This example is pure of weight $3$. So $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ is written as $D_{{\rm{SL}}(2),{\mathrm{val}}}$ below. Let $H_{0,{\bold{Z}}}=H'_{0,{\bold{Z}}} \otimes \text{Sym}^2(H'_{0,{\bold{Z}}})$ where $H'_{0,{\bold{Z}}}$ is a free ${\bold{Z}}$-module of rank $2$ with basis $e_1,e_2$. Hence $H_{0,{\bold{Z}}}$ is of rank $6$. The intersection form ${\lambda}ngle -,-\ranglegle$ on $H_{0,{\bold{Z}}}$ is $b\otimes \text{Sym}^2(b)$ where $b$ is the anti-symmetric bilinear form on $H'_{0,{\bold{Z}}}$ characterized by $b(e_1, e_2) =-1$.
We have the following ${\rm{SL}}(2)$-orbit $(\rho, \varphi)$ in two variables: $$\rho(g_1,g_2)= g_1\otimes \text{Sym}^2(g_2), \quad \varphi(z_1,z_2) =F(z_1)\otimes {\rm{Sym}}^2F(z_2)$$ $(g_1,g_2\in {\rm{SL}}(2)$, $z_1,z_2\in {\bold{C}})$ where $F(z)$ is the decreasing filtration on $H'_{0,{\bold{C}}}$ defined by $$F(z)^2=0 {\subset}set F(z)^1= {\bold{C}}\cdot (ze_1+e_2){\subset}set F(z)^0= H'_{0, {\bold{C}}}.$$ The associated homomorphism $\tau^{{\rm{st}}ar}: {\bold{G}}_m^2\to G_{\bold{R}}$ is as follows. $\tau^{{\rm{st}}ar}(t_1, t_2)$ acts on $e_2\otimes e_2^2$ by $t_1t_2^3$, on $e_2\otimes e_1e_2$ by $t_1t_2$, on $e_2\otimes e_1^2$ by $t_1t_2^{-1}$, on $e_1\otimes e_2^2$ by $t_1^{-1}t_2$, on $e_1\otimes e_1e_2$ by $t^{-1}_1t_2^{-1}$, and on $e_1\otimes e_1^2$ by $t_1^{-1}t_2^{-3}$. The associated weight filtrations $W^{(1)}$ and $W^{(2)}$ are as follows. $$W^{(1)}_1=0 {\subset}set W^{(1)}_2= e_1 \otimes \text{Sym}^2H'_{0,{\bold{R}}} =W^{(1)}_3{\subset}set W^{(1)}_4=H_{0,{\bold{R}}}.$$ $$W^{(2)}_{-1}=0{\subset}set W^{(2)}_0={\bold{R}} e_1 \otimes e_1^2=W^{(2)}_1{\subset}set W^{(2)}_2 = W^{(2)}_1+ {\bold{R}} e_1 \otimes e_1e_2 + {\bold{R}} e_2 \otimes e_1^2= W^{(2)}_3$$ $${\subset}set W^{(2)}_4= W^{(2)}_3+ {\bold{R}} e_1\otimes e_2^2 + {\bold{R}} e_2\otimes e_1e_2= W^{(2)}_5{\subset}set W^{(2)}_6= H_{0,{\bold{R}}}.$$ Let $$\Phi= \{W^{(1)}, W^{(2)}\}.$$ We show that $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ is not open for the topology induced from the topology of $D_{{\rm {BS}},{\mathrm{val}}}$. Let $V$ be the valuative submonoid of $X({\bold{G}}_m^2)$ which is, under the identification $X({\bold{G}}_m^2)= {\bold{Z}}^2$, identified with the set of all $(a, b)\in {\bold{Z}}^2$ satisfying either $(a>0)$ or ($a=0$ and $b{{\gamma}mma}eq 0$). Consider the point $x:=(p, V, Z)\in D_{{\rm{SL}}(2),{\mathrm{val}}}$ where $p$ is the class of this ${\rm{SL}}(2)$-orbit and $Z$ is the torus orbit $\{F(iy_1) \otimes \text{Sym}^2 F(iy_2) \;|\; y_1,y_2\in {\bold{R}}_{>0}\}$ of $p$. The map $D_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ sends $x$ to $y:=(T, V, Z)$ in the presentation \ref{another} of $D_{{\rm {BS}},{\mathrm{val}}}$ where $T$ is the image of $\tau^{{\rm{st}}ar}=\tau_p^{{\rm{st}}ar}: {\bold{G}}_m^2\to G_{\bold{R}}$ and we regard $V$ as a submonoid of $X(T)$ via the canonical isomorphism ${\bold{G}}_m^2\cong T$ given by $\tau_p^{{\rm{st}}ar}$. In the presentation of $D_{{\rm {BS}},{\mathrm{val}}}$ in \ref{orbittr}, this point $y$ coincides with $(P, V', Z)$ where $P$ and $V'$ are as follows. $P$ is the ${\bold{Q}}$-parabolic subgroup of $G_{\bold{R}}$ consisting of all elements which preserve the following subspaces $W'_w$ $(w\in {\bold{Z}})$. $$W'_1={\bold{R}} e_1 \otimes e_1^2, \quad W'_2=W'_1+ {\bold{R}} e_1 \otimes e_1e_2, \quad W'_3= W'_2+{\bold{R}} e_1 \otimes e_2^2,$$ $$W'_4= W'_3+{\bold{R}} e_2 \otimes e_1^2, \quad W'_5= W'_4+ {\bold{R}} e_2\otimes e_1e_2.$$ We have $S_P\cong {\bold{G}}_m^3$. The inclusion map $T\to P$ induces a canonical homomorphism $T\to S_P$. $V' {\subset}set X(S_P)$ is the inverse image of $V$ under the canonical homomorphism $X(S_P)\to X(T) \cong X({\bold{G}}_m^2)$. Let $f$ be the element of ${\rm {Lie}}(P_u)$ which sends $e_2\otimes e_1^2$ to $e_1\otimes e_2^2$ and kills $e_1\otimes \text{Sym}^2H'_{0,{\bold{R}}}$ and $e_2\otimes e_1e_2$ and $e_2\otimes e_2^2$. 
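Note that the image of $f$ is contained in $e_1\otimes \text{Sym}^2H'_{0,{\bold{R}}}=W'_3=W^{(1)}_2$, which $f$ kills; hence $f$ strictly lowers the filtration $(W'_w)_w$ (consistently with $f\in {\rm {Lie}}(P_u)$) and $f$ respects $W^{(1)}$. On the other hand, $f(e_2\otimes e_1^2)=e_1\otimes e_2^2$ is not contained in $W^{(2)}_2$, so $f$ does not respect $W^{(2)}$; this is the property of $f$ used below.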
For $c\in {\bold{R}}$, we have the SL(2)-orbit in two variables $(\rho^{(c)}, \varphi^{(c)})$ defined by $$\rho^{(c)}(g_1, g_2)= \exp(cf) \rho(g_1, g_2) \exp(-cf), \quad \varphi^{(c)}(z_1,z_2)= \exp(cf)\varphi(z_1, z_2).$$ The associated weight filtrations of $(\rho^{(c)}, \varphi^{(c)})$ are $\exp(cf)W^{(1)}, \exp(cf)W^{(2)}$. Since $f$ respects $W^{(1)}$ but not $W^{(2)}$, $\{\exp(cf)W^{(1)}, \exp(cf)W^{(2)}\}=\{W^{(1)}, \exp(cf)W^{(2)}\}$ is not contained in $\Phi$ if $c\operatorname{naive}eq 0$. If $c\in {\bold{Q}}$, the filtration $\exp(cf)W^{(2)}$ is rational, and hence $(\rho^{(c)}, \varphi^{(c)})$ determines an element $p^{(c)}$ of $D_{{\rm{SL}}(2)}$. Let $x^{(c)}:=(p^{(c)}, V, Z^{(c)})\in D_{{\rm{SL}}(2), {\mathrm{val}}}$ where $V$ is the same as above and $Z^{(c)}$ is the torus orbit of $p^{(c)}$. Now, when $c\in {\bold{Q}}\smallsetminus \{0\}$ converges to $0$ in ${\bold{R}}$, the image $y^{(c)}$ of $x^{(c)}$ in $D_{{\rm {BS}},{\mathrm{val}}}$ converges to $y$. This is because $P$ acts on $D_{{\rm {BS}},{\mathrm{val}}}(P)$ continuously in the natural way, $y\in D_{{\rm {BS}},{\mathrm{val}}}(P)$, and $y^{(c)}=\exp(cf)y$ for this action of $P$. Since $y^{(c)}\operatorname{naive}otin D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ is not open for the topology induced by the topology of $D_{{\rm {BS}},{\mathrm{val}}}$. The set $\{x^{(c)}\;|\; c\in {\bold{Q}}\}$ is discrete in $D_{{\rm{SL}}(2),{\mathrm{val}}}$, though the image $\{y^{(c)}\;|\; c\in {\bold{Q}}\}$ in $D_{{\rm {BS}},{\mathrm{val}}}$ has the topology of the subspace ${\bold{Q}}$ of ${\bold{R}}$ via the correspondence $y^{(c)}\leftrightarrow c$. Thus the topology of $D_{{\rm{SL}}(2),{\mathrm{val}}}$ is not the induced topology from $D_{{\rm {BS}},{\mathrm{val}}}$. \end{sbpara} {\subset}section{The map $\eta: D_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$}{\lambda}bel{4.6} {\lambda}bel{ss:eta} \bold egin{sbpara}{\lambda}bel{defeta} We define a canonical map $$ \eta: D_{{\rm{SL}}(2),{\mathrm{val}}} \to D_{{\rm {BS}}, {\mathrm{val}}} $$ following the method in the pure case \cite{KU}. But we will see that this map need not be continuous. $\eta$ is the unique map such that for any $x\in D_{{\rm{SL}}(2)}$ and any $\tilde x\in D_{{\rm{SL}}(2),{\mathrm{val}}}$ lying over $x$, the restriction of $\eta$ to the subset $\bar Z(x)_{{\mathrm{val}}}$ of $D_{{\rm{SL}}(2),{\mathrm{val}}}$ (note $\tilde x\in \bar Z(x)_{{\mathrm{val}}}$) is the unique morphism in $\cC_{\bold{R}}({\mathrm{val}})^+$ whose restriction to $Z(x)$ is the inclusion morphism $Z(x) \overset{{\subset}set}\to D {\subset}set D_{{\rm {BS}},{\mathrm{val}}}$. The map $\eta$ coincides with the composition of the two maps $D_{{\rm{SL}}(2),{\mathrm{val}}}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\overset{\eta_{{\mathrm{val}}}^{{\rm{st}}ar}}\longrightarrow D_{{\rm {BS}},{\mathrm{val}}}$ where the first arrow is the following map ${{\lambda}mbda}bda_{{\mathrm{val}}}$. The restriction of ${{\lambda}mbda}bda_{{\mathrm{val}}}$ to $D_{{\rm{SL}}(2),\operatorname{naive}spl,{\mathrm{val}}}$ is the morphism on the associated valuative spaces induced from the morphism ${{\lambda}mbda}bda: D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl} \to D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ in \ref{lam}. 
The restriction of ${{\lambda}mbda}bda_{{\mathrm{val}}}$ to $D_{{\rm{SL}}(2),{\rm{spl}},{\mathrm{val}}}$ is the morphism on the associated valuative spaces induced from the isomorphism $\eta:D^{II}_{{\rm{SL}}(2),{\rm{spl}}} \overset{\cong}\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\rm{spl}}}$ in \ref{lam}. The composition $D_{{\rm{SL}}(2),{\mathrm{val}}} \overset{{{\lambda}mbda}_{{\mathrm{val}}}}\longrightarrow D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm{SL}}(2),{\mathrm{val}}}$ is the identity map. By Theorem \ref{SL2BS} (1), the map $\eta: D_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ is injective. \end{sbpara} \bold egin{sbprop} $(1)$ The restriction of $\eta$ to the open set $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl, {\mathrm{val}}}\cup D$ of $D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}$ is a morphism in $\cC_{\bold{R}}({\mathrm{val}})$. $(2)$ For any $\Phi\in \overline{{\cal {W}}}$, the topology of $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl,{\mathrm{val}}}(\Phi)\cup D$ coincides with the topology induced from the topology of $D_{{\rm {BS}},{\mathrm{val}}}$. \end{sbprop} \bold egin{pf} This restriction of $\eta$ to $D^{II}_{{\rm{SL}}(2),\operatorname{naive}spl,{\mathrm{val}}}\cup D$ is the composition $D^{II}_{{\rm{SL}}(2), \operatorname{naive}spl,{\mathrm{val}}}\cup D\to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}} \overset{\eta^{{\rm{st}}ar}_{{\mathrm{val}}}}\longrightarrow D_{{\rm {BS}},{\mathrm{val}}}$ where the first arrow is the open immersion induced from isomorphism $D^{II}_{{\rm{SL}}(2)}\cup D\cong D_{{\rm{SL}}(2)}^{{\rm{st}}ar,+}({\sigma}_2)$ in \ref{0thm} (2). This proves (1). By this, (2) follows from Theorem \ref{SL2BS} (2). \end{pf} \bold egin{sbprop}{\lambda}bel{twoSL23} The equivalent conditions {\rm (i)}--{\rm (vii)} of $\ref{twoSL2}$ are equivalent to each of the following conditions. {{\rm (viii)}} The identity map of $D$ extends to an isomorphism $D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}\cong D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ in $\cC_{\bold{R}}({\mathrm{val}})$. {{\rm (ix)}} The map ${{\lambda}mbda}bda_{{\mathrm{val}}}: D^I_{{\rm{SL}}(2),{\mathrm{val}}} \to D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}$ (\ref{defeta}) is continuous. {{\rm (x)}} The map $\eta: D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ is continuous. {{\rm (xi)}} The map $\eta: D^I_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm {BS}},{\mathrm{val}}}$ is continuous. {{\rm (xii)}} The map $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2),{\mathrm{val}}}\to D_{{\rm{SL}}(2),{\mathrm{val}}}$ is injective. \end{sbprop} \bold egin{pf} (ii) ${\bold{R}}ightarrow $ (viii). Take the associated valuative spaces. The implications (viii) ${\bold{R}}ightarrow$ (ix) and (viii) ${\bold{R}}ightarrow$ (xii) are clear. The implications (viii) ${\bold{R}}ightarrow$ (x) and (x) ${\bold{R}}ightarrow$ (xi) are easily seen. (xi) ${\bold{R}}ightarrow$ (ix). Use the fact that the topology of $D^{{\rm{st}}ar}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ is the restriction of the topology of $D_{{\rm {BS}},{\mathrm{val}}}$ (\ref{SL2BS} (2)). (ix) ${\bold{R}}ightarrow $ (iv). This is because $D^I_{{\rm{SL}}(2),{\mathrm{val}}}\to D^I_{{\rm{SL}}(2)}$ is proper surjective (\ref{vproper}). (xii) ${\bold{R}}ightarrow $ (i). The proof of (vi) ${\bold{R}}ightarrow$ (i) of Proposition \ref{twoSL2} actually proves this. 
In that proof, assuming (i) does not hold, we used $x= (p, Z)\in D^{{\rm{st}}ar, {{\rm{mild}}}}_{{\rm{SL}}(2)}$ with $p$ of rank $1$ such that $x\operatorname{naive}eq x_{{\rm{spl}}}$. Since $p$ is of rank one, these $x$ and $x_{{\rm{spl}}}$ are regarded canonically as elements of $D_{{\rm{SL}}(2),{\mathrm{val}}}^{{\rm{st}}ar,{{\rm{mild}}}}$ whose images in $D_{{\rm{SL}}(2),{\mathrm{val}}}$ coincide. \end{pf} \section{New spaces $D^{\sharp}_{\Sigma,[:]}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ of nilpotent orbits}{\lambda}bel{s:newval} In this Section \ref{s:newval}, we define and consider the new spaces $D^{\sharp}_{\Sigma, [:]}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ of nilpotent orbits (nilpotent $i$-orbits, to be precise, in our terminology). In Sections \ref{ra1}--\ref{ra5}, for a topological space $S$ endowed with an fs log structure on the sheaf of all ${\bold{R}}$-valued continuous functions, we define topological spaces $S_{[:]}$ (the space of ratios, Section \ref{ra2}) and $S_{[{\mathrm{val}}]}$ (Section \ref{ra5}), and we define proper surjective continuous maps $S_{[:]}\to S$, $S_{{\mathrm{val}}}\to S_{[:]}$, and $S_{[{\mathrm{val}}]}\to S_{[:]}$, where $S_{{\mathrm{val}}}$ is as in Section 3.1. As will be explained in \ref{ra6}, in the case $S= D^{\sharp}_{\Sigma}$, we obtain the new spaces of nilpotent $i$-orbits $D^{\sharp}_{\Sigma, [:]}$ as $S_{[:]}$ and $D^{\sharp}_{\Sigma, [{\mathrm{val}}]}$ as $S_{[{\mathrm{val}}]}$, and $S_{{\mathrm{val}}}$ coincides with $D^{\sharp}_{\Sigma, {\mathrm{val}}}$ which we have already defined in Part III. We construct CKS maps $D^{\sharp}_{\Sigma, [:]}\to D_{{\rm{SL}}(2)}^I$ and $D^{\sharp}_{\Sigma, [{\mathrm{val}}]}\to D_{{\rm{SL}}(2),{\mathrm{val}}}^I$ in \ref{ss:cks1}. We have already constructed the CKS map $D^{\sharp}_{\Sigma, {\mathrm{val}}}\to D_{{\rm{SL}}(2)}^I$ in Part III. {\subset}section{The space of ratios in toric geometry}{\lambda}bel{ra1} \bold egin{sbpara} The space of ratios which we consider appears in the following way. Consider $S=\operatorname{Spec}(k[T_1, T_2])$ with $k$ a field. Regarding $S$ as the toric variety associated to the cone ${\bold{R}}^2_{{{\gamma}mma}eq 0}{\subset}set {\bold{R}}^2$, consider the toric varieties over $k$ associated to rational finite subdivisions of the cone ${\bold{R}}^2_{{{\gamma}mma}eq 0}$ (\ref{rvtoric}), and let $X$ be the projective limit of these toric varieties regarded as topological spaces with Zariski topology. It is the projective limit obtained by blowing-up the origin $s=(0,0)\in S$ first and then continuing blowing-up the intersections of irreducible components of the inverse image of $\operatorname{Spec}(k[T_1, T_2]/(T_1T_2)){\subset}set S$ on the blowing-up. Let $X_0{\subset}set X$ be the inverse image of $s$, and endow $X_0$ with the topology as a subspace of $X$. Then we have the following continuous surjective map from $X_0$ to the interval $ [0, \infty] \supset {\bold{R}}_{>0}$ despite that the Zariski topology and the topology of real numbers are very much different in nature. If $x\in X_0$, the image of $x$ in $[0, \infty]$ is defined as $$ \sup \{a/b\;|\;(a, b)\in {\bold{N}}^2\smallsetminus \{(0,0)\}, T_1^b/T_2^a\in \cO_{X,x}\} $$ $$= \inf \{a/b \; |\; (a, b)\in {\bold{N}}^2\smallsetminus \{(0,0)\},T_2^a/T_1^b\in \cO_{X,x}\}.$$ Here ${\bold{N}}={\bold{Z}}_{{{\gamma}mma}e0}$ and $\cO_X$ is the inductive limit of the inverse images on $X$ of the structural sheaves of the blowing-ups. 
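For instance, let $x\in X_0$ be the point lying over the generic point of the exceptional divisor of the first blowing-up. In the chart of this blowing-up with coordinates $T_1$ and $u=T_2/T_1$, the local ring $\cO_{X,x}$ contains $u$ and $u^{-1}$, and $T_1^b/T_2^a=T_1^{b-a}u^{-a}$ belongs to $\cO_{X,x}$ if and only if $a\leq b$; hence the image of this $x$ in $[0, \infty]$ is $1$.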
The image of $x$ in $[0, \infty]$ is, roughly speaking, something like the ratio $\log(T_1)/\log(T_2)$ at $x$. In the definition below, this $[0, \infty]$ is {\it the space $R({\bold{N}}^2)$ of ratios} of the fs monoid ${\bold{N}}^2= (M_S/\cO_S^\times)_s$ which is generated by the classes of $T_1$ and $T_2$. The above relation with the projective limit of blowing-ups is generalized in \ref{zariski}. \end{sbpara} \bold egin{sbpara} In this Section \ref{ra1}, the notation $\cS$ is used for an fs monoid. We denote the semigroup law of $\cS$ multiplicatively unless we assume and state that $\cS={\bold{N}}^n$. So the neutral element of $\cS$ is denoted by $1$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{ratY} For a sharp fs monoid $\cS$, let $R(\cS)$ be the set of all maps $r:(\cS\times \cS)\smallsetminus \{(1,1)\}\to [0,\infty]$ satisfying the following conditions (i)--(iii). (i) $r(g,f)=r(f,g)^{-1}$. (ii) $r(f, g)r(g,h)=r(f,h)$ if $\{r(f,g), r(g,h)\}\operatorname{naive}eq \{0,\infty\}$. (iii) $r(fg, h)=r(f, h)+r(g,h)$. We endow $R(\cS)$ with the topology of simple convergence. It is a closed subset of the product of copies of the compact set $[0, \infty]$ and hence is compact. \end{sbpara} \bold egin{sbrem} From the condition (i), we have $r(f,f)=1$. (Conversely, $r(f,f)=1$ and (ii) imply (i).) From this and from $r(1,f)+r(f,f)=r(f,f)$ which comes from (iii), we get $$r(1,f)=0, \quad r(f,1)=\infty\quad \text{for any $f\in \cS\smallsetminus \{1\}$}. $$ \end{sbrem} \bold egin{sbpara} For example, we have $R({\bold{N}}^2)\cong [0,\infty]$, where $r\in R({\bold{N}}^2)$ corresponds to $r(q_1,q_2)\in [0, \infty]$ with $q_j$ the standard $j$-th basis of ${\bold{N}}^2$. A description of $R({\bold{N}}^n)$ for general $n$ is given in \ref{RNn}. \end{sbpara} \bold egin{sbpara}{\lambda}bel{RandR'} We have a canonical bijection between $R(\cS)$ and the set $R'(\cS)$ of all equivalence classes of $((\cS^{(j)})_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})$, where $n{{\gamma}mma}eq 0$, $\cS^{(j)}$ is a face of $\cS$ such that $$\cS=\cS^{(0)} \supsetneq \cS^{(1)}\supsetneq \dots \supsetneq \cS^{(n)}=\{1\},$$ and $N_j$ is a homomorphism $\cS^{(j-1)}\to {\bold{R}}^{\add}$ such that $N_j(\cS^{(j)})=0$ and such that $N_j(\cS^{(j-1)}\smallsetminus \cS^{(j)}){\subset}set{\bold{R}}_{>0}$. The equivalence relation is given by multiplying each $N_j$ by an element of ${\bold{R}}_{>0}$ (which may depend on $j$). We define a map $R(\cS)\to R'(\cS)$ as follows. Let $r\in R(\cS)$. We give the corresponding element of $R'(\cS)$. For $f\in \cS\smallsetminus \{1\}$, let $\cS(r, f)=\{g\in \cS\;|\;r(g,f)\operatorname{naive}eq \infty\}$. Then the conditions (i)---(iii) on $r$ in \ref{ratY} show that $\cS(r,f)$ is a face of $\cS$. For $f, g\in \cS$, we have $\cS(r,f){\subset}set \cS(r,g)$ if and only if $r(f,g)\operatorname{naive}eq \infty$, and we have $\cS(r,f)\supset \cS(r,g)$ if and only if $r(f,g)\operatorname{naive}eq 0$. Hence the faces of $\cS$ of the form $\cS(r,f)$ $(f\in \cS\smallsetminus\{1\})$ together with the face $\{1\}$ form a totally ordered set for the inclusion relation. Let $\cS= \cS^{(0)}\supsetneq \cS^{(1)}\supsetneq \dots \supsetneq \cS^{(n)}=\{1\}$ be all the members of this set. Take $q_j\in \cS^{(j-1)}\smallsetminus \cS^{(j)}$ $(1\leq j\leq n)$. We have a homomorphism $N_j: \cS^{(j-1)}\to {\bold{R}}$ defined by $N_j(f)=r(f, q_j)$. This $N_j$ kills $\cS^{(j)}$ and $N_j(\cS^{(j-1)}\smallsetminus \cS^{(j)}){\subset}set {\bold{R}}_{>0}$. 
If we replace $q_j$ by another element $q'_j$, $N_j$ is multiplied by $r(q_j, q'_j)\in {\bold{R}}_{>0}$. Thus we have the map $R(\cS)\to R'(\cS)\;;\; r\mapsto \text{class}((\cS^{(j)})_j,(N_j)_j)$. Next we define the inverse map $R'(\cS)\to R(\cS)$. Let $\text{class}((\cS^{(j)})_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in R'(\cS)$. Let $(f,g) \in (\cS\times \cS) \smallsetminus \{(1,1)\}$. We define $r(f,g)$ as follows. Let $j$ be the largest integer ${{\gamma}mma}eq 0$ such that $f$ belongs to $\cS^{(j)}$ and let $k$ be the corresponding integer for $g$. (1) If $j=k< n$, $r(f,g)= N_{j+1}(f)/N_{j+1}(g)$. (2) If $j>k$, $r(f, g)= \infty$. (3) If $j<k$, $r(f,g)=0$. This gives the map $R'(\cS)\to R(\cS)$. It can be seen easily that the maps $R(\cS)\to R'(\cS)$ and $R'(\cS)\to R(\cS)$ are the inverses of each other. \end{sbpara} \bold egin{sbpara}{\lambda}bel{V(S)2} As in \ref{V(S)}, for a sharp fs monoid $\cS$, let $V(\cS)$ be the set of all valuative submonoids $V$ of $\cS^{{{\gamma}mma}p}$ such that $V\supset \cS$ and $V^\times \cap \cS=\{1\}$. We endow $V(\cS)$ with the following topology. For a finite subset $I$ of $\cS^{{{\gamma}mma}p}$, let $U(I)=\{V\in V(\cS)\;|\;I{\subset}set V\}$. Then these $U(I)$ form a basis of open sets of $V(\cS)$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{valcs} We define a map $$V(\cS)\to R(\cS)\;;\;V\mapsto r_V.$$ For $V\in V(\cS)$, $r_V\in R(\cS)$ is the map $(\cS\times\cS)\smallsetminus\{(1,1)\}\to[0,\infty]$ defined by $$r_V(f,g)= \sup \{a/b\;|\;(a, b)\in {\bold{N}}^2\smallsetminus \{(0,0)\}, f^b/g^a\in V\} $$ $$= \inf \{a/b \; |\; (a, b)\in {\bold{N}}^2\smallsetminus \{(0,0)\}, g^a/f^b\in V\} $$ $((f,g)\in (\cS\times \cS)\smallsetminus \{(1,1)\})$ (\ref{ratY}). \end{sbpara} \bold egin{sbprop}{\lambda}bel{valcs2} The map $V(\cS)\to R(\cS)$ is continuous and surjective. \end{sbprop} \bold egin{pf} We first prove the continuity of $V(\cS)\to R(\cS)$. Let $f, g \in \cS\smallsetminus \{1\}$, and assume $r_V(f,g)> a/b$ where $a,b\in {\bold{N}}$ and $b>0$. We have $f^b/g^a\in V$. (Indeed, if $f^b/g^a$ did not belong to $V$, then $g^a/f^b\in V$ because $V$ is valuative. Taking $(a', b')$ with $a'/b'>a/b$ and $f^{b'}/g^{a'}\in V$, we would have $f^{-m}=(f^{b'}/g^{a'})^a(g^a/f^b)^{a'}\in V$ with $m:=a'b-ab'>0$; since also $f^m$ belongs to $\cS$ and hence to $V$, and $f^m$ is not $1$ because $\cS$ is sharp, this contradicts $V^\times \cap \cS=\{1\}$.) If $V'\in V(\cS)$ and $f^b/g^a\in V'$, we have $r_{V'}(f,g){{\gamma}mma}eq a/b$. This proves the continuity of $V(\cS)\to R(\cS)$ (\ref{V(S)2}, \ref{ratY}). We next prove the surjectivity of $V(\cS)\to R(\cS)$. Let $\text{class}((\cS^{(j)})_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in R'(\cS)$ (\ref{RandR'}). Then the corresponding element of $R(\cS)$ is the image in $R(\cS)$ of the following element $V\in V(\cS)$. For $1\leq j\leq n$, define the ${\bold{Q}}$-vector subspace $Q^{(j)}$ of the ${\bold{Q}}$-vector space $\cS_{{\bold{Q}}}={\bold{Q}}\otimes \cS^{{{\gamma}mma}p}$ by $Q^{(j)}:=\Ker(N_j: \cS^{(j-1)}_{\bold{Q}}\to {\bold{R}})$. Then $Q^{(j)}\supset \cS^{(j)}_{{\bold{Q}}}$. Take an isomorphism of ${\bold{Q}}$-vector spaces ${{\lambda}mbda}bda_j : Q^{(j)}/\cS^{(j)}_{\bold{Q}}\overset{\cong}\to {\bold{Q}}^{d(j)}$ where $d(j):=\dim(Q^{(j)}/ \cS^{(j)}_{{\bold{Q}}})$. Define $V$ by the following. Let $a\in \cS^{{{\gamma}mma}p}$. When there is $j$ such that $1\leq j\leq n$, $a\in \cS^{(j-1)}_{{\bold{Q}}}$ and $a\operatorname{naive}otin Q^{(j)}$, then $a\in V$ if and only if $N_j(a)>0$. When there is $j$ such that $a\in Q^{(j)}$ and $a\operatorname{naive}otin \cS^{(j)}_{\bold{Q}}$, then $a\in V$ if and only if the first non-zero entry of ${{\lambda}mbda}bda_j(a)\in {\bold{Q}}^{d(j)}$ is $>0$.
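For example, let $\cS={\bold{N}}^2$ with standard basis $q_1, q_2$, and consider the class in $R'(\cS)$ of the chain ${\bold{N}}^2\supsetneq \{(0,0)\}$ together with $N_1(q_1)=a$ and $N_1(q_2)=1$ for an irrational number $a>0$. Then the above construction gives $V=\{x=(x_1,x_2)\in {\bold{Z}}^2\;|\;ax_1+x_2>0\}\cup\{(0,0)\}$, and indeed $r_V(q_1,q_2)=\sup\{c/d\;|\;q_1^dq_2^{-c}\in V\}=\sup\{c/d\;|\;c/d<a\}=a=N_1(q_1)/N_1(q_2)$.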
\end{pf} \bold egin{sbpara} Let $k$ be a field, let $S$ be the toric variety $\operatorname{Spec}(k[\cS])$, and let $X$ be the projective limit as a topological space of the toric varieties over $k$ (with Zariski topology) which correspond to finite rational subdivisions of the cone $\Hom(\cS, {\bold{R}}^{\add}_{{{\gamma}mma}eq 0})$ (\ref{rvtoric}). Let $\cO_X$ be the inductive limit of the inverse images on $X$ of the structural sheaves of these toric varieties. Let $X_0{\subset}set X$ be the inverse image of $s\in S=\operatorname{Spec}(k[\cS])$ where $s$ is the $k$-rational point of $S$ at which all non-trivial elements of $\cS$ have value $0$. Endow $X_0$ with the topology as a subspace of $X$. We have a continuous map $X_0\to V(\cS)$ which sends $x\in X_0$ to $\{f\in \cS^{{{\gamma}mma}p}\;|\; f\in \cO_{X,x}\}\in V(\cS)$. The induced map $X_0(k)\to V(\cS)$ is surjective. In fact, for each $V\in V(\cS)$, the inverse image of $V$ in $X_0$ under the map $X_0\to V(\cS)$ is identified with $\operatorname{Spec}(k[V^\times])$. It has a $k$-rational point which sends all elements of $V^\times$ to $1$. Composing with the map in \ref{valcs} as $$X_0(k){\subset}set X_0\to V(\cS)\to R(\cS),$$ and using Proposition \ref{valcs2}, we have \end{sbpara} \bold egin{sbprop}{\lambda}bel{zariski} $(1)$ The map $X_0\to R(\cS)$ is continuous. $(2)$ The induced map $X_0(k)\to R(\cS)$ is surjective. \end{sbprop} \bold egin{sbcor}{\lambda}bel{zarcor} If we regard $R(\cS)$ as a quotient space of $V(\cS)$ or $X_0$, the topology of $R(\cS)$ coincides with the quotient topology. \end{sbcor} This is because $V(\cS)$ and $X_0$ are quasi-compact and $R(\cS)$ is Hausdorff. Thus Zariski topology and the topology of real numbers are well connected here. {\subset}section{The space $S_{[:]}$ of ratios}{\lambda}bel{ra2} \bold egin{sbpara}{\lambda}bel{4.2.1} For a locally ringed space $S$ endowed with an fs log structure, we define the set $S_{[:]}$ as the set of all pairs $(s, r)$ where $s\in S$ and $r \in R((M_S/\cO_S^\times)_s)$. We have the canonical surjection $S_{[:]}\to S\;;\;(s,r)\mapsto s$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{absv} Let $K$ be a field endowed with a non-trivial absolute value $|\:\;|: K\to {\bold{R}}_{{{\gamma}mma}eq 0}$. Let $S$ be a locally ringed space over $K$ satisfying the equivalent conditions in \ref{value}, and assume that we are given an fs log structure on $S$. We define a natural topology of $S_{[:]}$ for which the projection $S_{[:]}\to S$ is a proper continuous map and which induces on each fiber of this projection the topology of $R((M_S/\cO_S^\times)_s)$ defined in \ref{ratY}. \end{sbpara} \bold egin{sbpara}{\lambda}bel{ratS} Let $K$ and $S$ be as in \ref{absv}. To define the topology on $S_{[:]}$, the method is, so to speak, to combine the topology of $S$ and the topologies of $R(\cS)$ (Section \ref{ra1}) for $\cS=(M_S/\cO_S^\times)_s$ $(s\in S)$ by using a chart of the log structure. Assume first that we are given a chart $\cS\to M_S$ of the log structure, where $\cS$ is an fs monoid. Fix $c\in {\bold{R}}_{>0}$. We have a map $$S_{[:]}\to [0, \infty]^{\cS\times \cS}\;;\; (s,r)\mapsto r_c$$ where $r_c: \cS\times \cS \to [0, \infty]$ is defined by the following (1) and (2). Let $f,g\in\cS$. 
(1) If the images of $f$ and $g$ in $M_{S,s}$ belong to $\cO^\times_{S,s}$, then $$r_c(f,g)=\sup(c, - \log(|f(s)|))/\sup(c, -\log(|g(s)|)).$$ (2) Otherwise, $$r_c(f,g) = r({\bar f}_s, {\bar g}_s)$$ where ${\bar f}_s$ (resp.\ ${\bar g}_s$) denotes the image of $f$ (resp.\ $g$) in $(M_S/\cO_S^\times)_s$. \end{sbpara} \bold egin{sblem}{\lambda}bel{5.2.4} $(1)$ The map $$S_{[:]}\to S\times [0, \infty]^{\cS\times \cS}\;;\; (s,r)\mapsto (s, r_c)$$ is injective. $(2)$ The topology on $S_{[:]}$ induced by the embedding in (1) is independent of the choices of the chart and of the constant $c>0$. \end{sblem} \bold egin{pf} (1) follows from the fact that the map $\cS\to (M_S/\cO^\times_S)_s$ is surjective for any $s\in S$. We prove (2). If we have two charts $\cS\to M_S$ and $\cS'\to M_S$, we have locally on $S$ a third chart $\cS''\to M_S$ with homomorphisms of charts $\cS\to \cS''$ and $\cS'\to \cS''$. It is clear that if these third chart and two homomorphisms of charts are given and if the constant $c>0$ is fixed, the topology given by the chart $\cS''\to M_S$ and $c$ is finer than the topology given by $\cS\to M_S$ or $\cS'\to M_S$ and $c$. Hence it is sufficient to prove that if we have a homomorphism $\cS'\to \cS$ from a chart $\cS'\to M_S$ to a chart $\cS\to M_S$, the topology given by the former chart and the constant $c'>0$ is finer than the topology given by the latter and $c>0$. It suffices to prove that for $f,g\in \cS$, the map $(s,r)\mapsto r_c(f,g)$ is continuous for the topology given by $\cS'\to M_S$ and $c'$. {\bf Claim 1.} Let $f,g\in \cS$ and let $s\in S$, and assume that the images of $f$ and $g$ in $(M_S/\cO^\times_S)_s$ coincide. Let $c,c'>0$. Then for some neighborhood $U$ of $s$ in $S$, we have a continuous map $R_{c,c'}(f,g): U\to {\bold{R}}_{>0}$ whose value at $s'\in U$ is $\sup(c, -\log(|f(s')|))/\sup(c', -\log(|g(s')|))$ if the images of $f$ and $g$ in $M_{S,s'}$ belong to $\cO_{S,s'}^\times$, and is $1$ otherwise. This Claim 1 is proved easily. We continue the proof of (2). Let $f,g\in \cS$. Then locally on $S$, we have $f',g'\in \cS'$ and sections $u,v$ of $\cO_S^\times$ such that $f=f'u$ and $g=g'v$ in $M_S$. We have $$r_c(f,g)= r_{c'}(f', g')R_{c,c'}(f,f')(s)R_{c',c}(g', g)(s).$$ This proves the desired continuity of $r_c(f,g)$. \end{pf} \bold egin{sbpara}{\lambda}bel{topcs} By the independence (2) in \ref{5.2.4}, we have a canonical topology of $S_{[:]}$ (globally). \end{sbpara} \bold egin{sbpara}{\lambda}bel{sharp:} Assume that $\cS$ is sharp and that for any $f\in \cS\smallsetminus \{1\}$ and any $s\in S$, we have $|f(s)|<1$. (Note that we have such a chart locally on $S$.) Let $Y=(\cS\times \cS) \smallsetminus \{(1,1)\}$. Then we have a slightly different embedding $$S_{[:]}\to S\times [0, \infty]^Y;\;\;(s,r)\mapsto (s, r_*)$$ where $r_*: Y\to [0, \infty]$ is defined as follows. Let $(f,g)\in Y$. (1) If the images of $f$ and $g$ in $M_{S,s}$ belong to $\cO^\times_{S,s}$, then $$r_*(f,g)= \log(|f(s)|)/\log(|g(s)|).$$ (2) Otherwise, $$r_*(f,g)= r({\bar f}_s, {\bar g}_s)$$ where ${\bar f}_s$ (resp.\ ${\bar g}_s$) denotes the image of $f$ (resp.\ $g$) in $(M_S/\cO_S^\times)_s$. \end{sbpara} \bold egin{sblem}{\lambda}bel{ratL} Let the assumption be as in \ref{sharp:}. $(1)$ The map $S_{[:]}\to S\times [0, \infty]^Y$ is injective. $(2)$ The topology of $S_{[:]}$ induced by this embedding coincides with the topology defined in \ref{topcs}. 
$(3)$ The image of the embedding (1) consists of all pairs $(s, r)\in S\times [0, \infty]^Y$ such that $r$ satisfies the conditions {\rm (i)}--{\rm (iii)} in \ref{ratY} and such that the following conditions {\rm (iv)} and {\rm (v)} are satisfied. Let $(f,g)\in Y$. {\rm (iv)} If the images of $f$ and $g$ in $M_{S,s}$ belong to $\cO_{S,s}^\times$, $r(f,g)= \log(|f(s)|)/\log(|g(s)|)$. {\rm (v)} Otherwise, $r(f,g)$ depends only on the images of $f$ and $g$ in $(M_S/\cO^\times_S)_s$. $(4)$ The image of the embedding in $(1)$ is a closed set of $S\times [0, \infty]^Y$. \end{sblem} \bold egin{pf} (1) and (3) follow from the fact that the map $\cS\to (M_S/\cO^\times_S)_s$ is surjective for any $s\in S$. (4) follows from (3). We prove (2). If $f\in \cS\smallsetminus \{1\}$, by the property $|f(s)|<1$ for any $s\in S$, we see that there is a continuous function $R_{c}(f): S\to {\bold{R}}_{>0}$ whose value at $s\in S$ is $-\sup(c, -\log(|f(s)|))/\log(|f(s)|)$ if the image of $f$ in $M_{S,s}$ belongs to $\cO_{S,s}^\times$ and is $1$ otherwise. For $f,g\in \cS\smallsetminus \{1\}$, we have $$r_c(f,g)= r_*(f,g)R_{c}(f)(s) R_{c}(g)(s)^{-1}.$$ Furthermore, for $f\in \cS$, $r_c(1,f)$ is the value of the continuous function $c/\sup(c, -\log(|f(s)|))$ at $s$, and $r_c(f,1)$ is the value of the continuous function $\sup(c, -\log(|f(s)|))/c$ at $s$, and for $f\in \cS\smallsetminus \{1\}$, we have $r_*(1,f)=0$ and $r_*(f,1)=\infty$. \end{pf} \bold egin{sbprop} The canonical map $S_{[:]}\to S$ is proper. \end{sbprop} \bold egin{pf} Since $[0, \infty]^Y$ is compact, this follows from (4) of Lemma \ref{ratL}. \end{pf} \bold egin{sbpara} For each $s\in S$, the topology of $R((M_S/\cO^\times_S)_s)$ defined in Section \ref{ra1} coincides with the topology of the fiber $R((M_S/\cO^\times_S)_s)$ over $s$ of $S_{[:]}\to S$, as a subspace of $S_{[:]}$. \end{sbpara} \bold egin{sblem} Let $S$ and $S'$ be as in \ref{absv} and assume we are given a strict morphism $S'\to S$ of locally ringed spaces over $K$ with log structures. (For the word \lq\lq strict'', see \ref{gest6}.) Then the canonical map $S'_{[:]}\to S'\times_S S_{[:]}$ is a homeomorphism. \end{sblem} \bold egin{pf} This is proved in the same way as Lemma \ref{valsrt}. \end{pf} \bold egin{sbpara}{\lambda}bel{rat4} We consider $S_{[:]}$ more locally. Assume we are given a chart $\cS\to M_S$. Let $\Phi$ be a set of faces of $\cS$ which is totally ordered for the inclusion relation and which contains $\cS$. Let $S_{[:]}(\Phi)$ be the subset of $S_{[:]}$ consisting of all $(s, r)$ such that the inverse images in $\cS$ of the faces of $(M_S/\cO_S^\times)_s$ associated to $r$ (\ref{RandR'}) belong to $\Phi$. Then the sets $S_{[:]}(\Phi)$, for all such $\Phi$, form an open covering of $S_{[:]}$. \end{sbpara} \bold egin{sbpara} Let the notation be as in \ref{rat4}. Assume further that for any $f\in \cS\smallsetminus \{1\}$, we have $|f(s)|<1$ for any $s\in S$. (Such a chart always exists locally on $S$.) In Proposition \ref{Phistr} below, we give a description of the topological space $S_{[:]}(\Phi)$. Write $\Phi = \{\cS^{(j)} \;|\; 0\leq j \leq n\}$, $\cS= \cS^{(0)} \supsetneq \cS^{(1)}\supsetneq \dots \supsetneq \cS^{(n)}$. For each $1\leq j\leq n$, fix $q_j\in \cS^{(j-1)}\smallsetminus \cS^{(j)}$.
Consider the topological subspace $$P{\subset}set {\bold{R}}_{{{\gamma}mma}eq 0}^n \times \prod_{j=0}^n \Hom(\cS^{(j)}, {\bold{R}}^{\add})$$ (here the Hom space is endowed with the topology of simple convergence) consisting of elements $(t,h)$ $(t=(t_j)_{1\leq j\leq n}, t_j\in {\bold{R}}_{{{\gamma}mma}eq 0}, h=(h_j)_{0\leq j\leq n}, h_j: \cS^{(j)}\to {\bold{R}})$ satisfying the following conditions (i) -- (iii) for $0\leq j<n$. (i) $h_j(q_{j+1})=1$. (ii) $h_j(f) = t_{j+1}h_{j+1}(f)$ for any $f\in \cS^{(j+1)}$. (iii) $h_j(\cS^{(j)}\smallsetminus \cS^{(j+1)}) {\subset}set {\bold{R}}_{>0}$. \end{sbpara} \bold egin{sblem}{\lambda}bel{5lemA} We have a unique continuous map $P\to \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ which sends $(t,h)$ to the following $a\in \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$. Let $j$ be the smallest integer such that $0\leq j\leq n$ and such that $t_k\operatorname{naive}eq 0$ if $j<k\leq n$. Then $$a(f) = \exp(- h_j(f) \prod_{k=j+1}^n t_k^{-1})\in {\bold{R}}_{>0}\quad \text{if} \; f \in \cS^{(j)},$$ $$a(f) = 0 \quad\text{if}\; f \in \cS\smallsetminus \cS^{(j)}.$$ \end{sblem} \bold egin{pf} The problem is the continuity of the map. This is shown as follows. Let $f\in \cS$. It is sufficient to prove that the map $P \to {\bold{R}}_{{{\gamma}mma}eq 0}\;;\;(t,h)\mapsto a(f)$ $(f\in\cS)$ (with notation as above) is continuous. Let $j$ be the largest integer such that $0\leq j\leq n$ and such that $f\in \cS^{(j)}$. Then this map is the composition of the continuous map $P\to {\bold{R}}_{{{\gamma}mma}eq 0}$ which sends $((t_j)_j, (h_j)_j)\in P$ to $\prod_{k=j+1}^n t_k \cdot h_j(f)^{-1}$ (note $h_j(f)>0$) and the continuous map ${\bold{R}}_{{{\gamma}mma}eq 0}\to {\bold{R}}_{{{\gamma}mma}eq 0}$ which sends $t\in {\bold{R}}_{>0}$ to $\exp(-t^{-1})$ and $0$ to $0$. \end{pf} \bold egin{sbprop}{\lambda}bel{Phistr} Let the notation be as above. We have a cartesian diagram of topological spaces $$\bold egin{matrix} S_{[:]}(\Phi) &\to & P \\ \downarrow && \downarrow \\ S & \to& \Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0}) \end{matrix}$$ where the lower horizontal arrow sends $s\in S$ to the map $f\mapsto |f(s)|$ $(f\in\cS)$, the right vertical arrow is as $a\mapsto a(f)$ $(f\in\cS)$ in \ref{5lemA}, the left vertical arrow is the canonical one, and the upper horizontal arrow sends $(s,r)\in S_{[:]}(\Phi)$ $(s\in S$, $r\in R((M_S/\cO_S^\times)_s))$ to $(s, ((t_j)_j, (h_j)_j))$ where $t_j= \log(|q_{j+1}(s)|)/\log(|q_j(s)|)$ (resp.\ $t_j= r(q_{j+1}, q_j)$) if $1\leq j<n$ and if $q_jq_{j+1}$ is invertible (resp.\ not invertible) at $s$, $t_n= -1/\log(|q_n(s)|)$, $h_j(f)=r(f, q_{j+1})$ for $0\leq j<n$, and $h_n(f) = -\log(|f(s)|)$. (Note that if $(s,r)\in S_{[:]}(\Phi)$ and $f\in \cS^{(n)}$, the image of $f$ in $M_{S,s}$ belongs to $\cO_{S,s}^\times$ and hence $|f(s)|\in {\bold{R}}_{>0}$.) \end{sbprop} \bold egin{pf} The converse map is given by $(s, (t, h))\mapsto (s,r)$ where $r$ is as follows. Let $(a, b)\in (M_S/\cO_S^\times)_s\times (M_S/\cO^\times_S)_s\smallsetminus \{(1,1)\}$ and take $f, g\in \cS$ such that the image of $f$ (resp.\ $g$) in $(M_S/\cO_S^\times)_s$ is $a$ (resp.\ $b$). Take the largest $j$ such that $0\leq j\leq n-1$ and $f,g\in \cS^{(j)}$. Then $r(f,g)= h_j(f)/h_j(g)\in [0,\infty]$. (Note that at least one of $f, g$ is outside $\cS^{(j+1)}$ and hence at least one of $h_j(f)$ and $h_j(g)$ is non-zero.) It is easy to see that this is the converse map and continuous. 
\end{pf} \bold egin{sbrem} In $(t,h)\in P$ $(t=(t_j)_{1\leq j\leq n}\in {\bold{R}}^n_{{{\gamma}mma}eq 0})$, $t_j$ for $1\leq j\leq n-1$ is determined by $h$ as $t_j= h_{j-1}(q_{j+1})$. $t_n$ is determined by the image $a$ of $(t,h)$ in $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ as $t_n=- 1/\log(a(q_n))$. These explain the fact that in the above proof of \ref{Phistr}, the converse map $(s, (t,h))\mapsto (s,r)$ is described without using $t$. \end{sbrem} \bold egin{sbpara}{\lambda}bel{4.2.14} Let $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})_{<1}$ be the open set of $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$ consisting of all elements $h$ such that $h(f)<1$ for any $f\in \cS\smallsetminus \{1\}$. Then the images of $S$ and $P$ in $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})$, under the maps in \ref{Phistr}, are contained in $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})_{<1}$. Hence, by \ref{Phistr}, we have \end{sbpara} \bold egin{sbcor}{\lambda}bel{4.2.15} In the case $S=\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})_{<1}$ with the sheaf of all ${\bold{R}}$-valued continuous functions and with the natural log structure, $S_{[:]}(\Phi)$ is identified with $P$. \end{sbcor} \bold egin{sbpara} We give a comment on this space $P$. For $1\leq j\leq n$, we fixed an element $q_j$ of $\cS^{(j-1)}\smallsetminus \cS^{(j)}$ (\ref{Phistr}). Let $m(j)= \dim_{{\bold{Q}}}(\cS^{(j-1)}_{\bold{Q}}/\cS^{(j)}_{\bold{Q}}) -1$ if $1\leq j\leq n$, and let $m(n+1)=\dim_{{\bold{Q}}}(\cS^{(n)}_{\bold{Q}})$. For $1\leq j\leq n+1$, fix elements $q_{j,k}$ $(0\leq k\leq m(j))$ of $(\cS^{(j-1)})^{{{\gamma}mma}p}$ satisfying the following conditions (i)--(iii). (i) $q_{j,0}=q_j$ if $1\leq j\leq n$. (ii) For $1\leq j\leq n$, $(q_{j,k}\bmod \cS^{(j)}_{\bold{Q}})_{0\leq k\leq m(j)}$ is a ${\bold{Q}}$-basis of $\cS^{(j-1)}_{\bold{Q}}/\cS^{(j)}_{\bold{Q}}$. (iii) $(q_{n+1,k})_{1\leq k\leq m(n+1)}$ is a ${\bold{Q}}$-basis of $\cS^{(n)}_{\bold{Q}}$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{qjk} We have an injective open map $$P\overset{{\subset}set}\to {\bold{R}}_{{{\gamma}mma}eq 0}^n\times \prod_{j=1}^{n+1} {\bold{R}}^{m(j)}$$ which sends $(t, h)\in P$ $(t\in {\bold{R}}^n_{{{\gamma}mma}eq 0}$, $h\in \prod_{j=0}^n \Hom(\cS^{(j)}, {\bold{R}}^{\add}))$ to $(t, a)$ where $a=(a_j)_{1\leq j\leq n+1}$, $a_j=(a_{j,k})_{1\leq k\leq m(j)}$ with $$ a_{j,k}= h_{j-1}(q_{j,k})\in {\bold{R}}$$ for $1\leq j\leq n+1$. Here we define $h_{j-1}(q_{j, k})$ by using the unique extension of $h_{j-1}:\cS^{(j-1)}\to {\bold{R}}$ to a homomorphism $(\cS^{(j-1)})^{{{\gamma}mma}p}\to {\bold{R}}^{\add}$. \end{sbprop} The proof is easy. \bold egin{sbpara}{\lambda}bel{Nn:} Consider the case $S=|\Delta|^n$ where $|\Delta|=\{t\in {\bold{R}}\;|\;0\leq t<1\}$ with the sheaf of all ${\bold{R}}$-valued continuous functions and with the fs log structure associated to ${\bold{N}}^n\to \cO_S\;;\; m\mapsto \prod_{j=1}^n q_j^{m(j)}$ where $q_j$ $(1\leq j\leq n)$ are the coordinate functions. Let $\cS$ be the multiplicative monoid generated by $q_j$ $(1\leq j\leq n)$ which is identified with ${\bold{N}}^n$. Then $|\Delta|^n$ is identified with $\Hom(\cS, {\bold{R}}^{\mult}_{{{\gamma}mma}eq 0})_{<1}$ in \ref{4.2.14}. Let $\Phi=\{\cS^{(j)}\; |\; 0\leq j\leq n\}$ where $\cS^{(j)}$ is generated by $q_k$ $(j< k\leq n)$. Then $S_{[:]}$ is covered by the open sets $S_{[:]}(g(\Phi))$ where $g$ ranges over elements of the permutation group $\frak S_n$ acting on $\cS$, and $g$ induces a homeomorphism $S_{[:]}(\Phi)\cong S_{[:]}(g(\Phi))$. 
We describe $S_{[:]}(\Phi)$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{Nnphi} Let the notation be as in \ref{Nn:}. Then we have a commutative diagram $$\bold egin{matrix} S_{[:]}(\Phi)&\cong & {\bold{R}}^n_{{{\gamma}mma}eq 0}\\ \downarrow &&\downarrow\\ S&=& |\Delta|^n \end{matrix}$$ in which the upper horizontal isomorphism sends $(s,r)\in S_{[:]}(\Phi)$ to $(t_1, \dots, t_n)$ where $t_j=r(q_{j+1}, q_j)$ $(1\le j\le n-1)$ and $t_n=-1/\log(q_n(s))$, and the right vertical arrow is $(t_j)_{1\leq j\leq n}\mapsto (q_j)_{1\leq j\leq n}$ where $q_j=\exp(-\prod_{k=j}^n t_k^{-1})$. \end{sbprop} \bold egin{pf} This follows from \ref{4.2.15}. \end{pf} \bold egin{sbcor}{\lambda}bel{RNn} Let the notation be as in \ref{Nn:}. Regarding $R(\cS)$ as the fiber of $S_{[:]}\to S=|\Delta|^n$ over the point $(0,\dots,0)\in S$, define $R(\cS)(\Phi)= R(\cS)\cap S_{[:]}(\Phi)$. Then we have a homeomorphism $$R(\cS)(\Phi)\cong {\bold{R}}^{n-1}_{{{\gamma}mma}eq 0}$$ which sends $r\in R(\cS)(\Phi)$ to $(t_1, \dots, t_{n-1})$ where $t_j= r(q_{j+1}, q_j)$. \end{sbcor} \bold egin{pf} This follows from \ref{Nnphi}. \end{pf} \bold egin{sblem}{\lambda}bel{fsK} Let $S$ and $|S|$, $M_{|S|}$ be as in \ref{abslog}. Then we have a canonical homeomorphism $S_{[:]}\cong |S|_{[:]}$. \end{sblem} \bold egin{pf} As in the proof of \ref{abslog}, we have a canonical isomorphism $(M_S/\cO^\times_S)_s\cong (M_{|S|}/\cO^\times_{|S|})_s$ for each $s\in S$. This gives a canonical bijection between $S_{[:]}$ and $|S|_{[:]}$. By Proposition \ref{Phistr}, they have the same topology. \end{pf} {\subset}section{$S_{[:]}$, $S_{[{\mathrm{val}}]}$, and $S_{{\mathrm{val}}}$}{\lambda}bel{ra5} Let $K$ and $S$ be as in \ref{absv}. We construct a topological space $S_{[{\mathrm{val}}]}$ and proper surjective continuous maps $$S_{{\mathrm{val}}}\to S_{[:]}, \quad S_{[{\mathrm{val}}]}\to S_{[:]}.$$ Here $S_{{\mathrm{val}}}$ is as in Section \ref{ss:valsp}. \bold egin{sbpara} Let $S_{{\mathrm{val}}}\to S_{[:]}$ be the map $(s, V, h)\mapsto (s, r_V)$ where $V\mapsto r_V$ is the map $V(\cS)\to R(\cS)$ for $\cS=(M_S/\cO^\times_S)_s$ (\ref{Sgen}, \ref{valcs}). \end{sbpara} \bold egin{sbprop} The map $S_{{\mathrm{val}}}\to S_{[:]}$ is continuous, and proper and surjective. \end{sbprop} \bold egin{pf} The surjectivity follows from the surjectivity in \ref{valcs2}. Once we prove the continuity, properness follows from the properness of $S_{{\mathrm{val}}}\to S$ and of $S_{[:]}\to S$. We prove the continuity. Working locally on $S$, we may and do assume that we have a chart $\cS\to M_S$ with $\cS$ a sharp fs monoid such that for any $f\in \cS\smallsetminus \{1\}$ and $s\in S$, we have $|f(s)|<1$. Fix $(s_0,V_0,h_0)\in S_{{\mathrm{val}}}$ and let $(s_0, r_0)\in S_{[:]}$ be its image. We show that when $(s, V, h)\in S_{{\mathrm{val}}}$ converges to $(s_0, V_0, h_0)$, its image $(s,r)\in S_{[:]}$ converges to $(s_0, r_0)$. Let $f,g\in \cS\smallsetminus \{1\}$. It is sufficient to prove that $r_*(f,g)\in [0,\infty]$ (\ref{sharp:}) converges to $(r_0)_*(f,g)\in [0,\infty]$. If at least one of $f$ and $g$ is invertible at $s_0$ (that is, if at least one of the images of $f$ and $g$ in $M_{S, s_0}$ belongs to $\cO_{S,s_0}^\times$), then the function $(s,r)\mapsto r_*(f,g)\in [0, \infty]$ on $S_{[:]}$ comes from the continuous function $s\mapsto \log(|f(s)|)/\log(|g(s)|)\in [0,\infty]$ on some neighborhood of $s_0$ in $S$. Hence we may assume that both $f$ and $g$ are not invertible at $s_0$. Assume $(r_0)_*(f,g) >a/b$, $a,b\in {\bold{N}}$, $b>0$. 
It is sufficient to prove that $r_*(f,g) >a/b$ when $(s,V,h)$ is sufficiently near $(s_0, V_0,h_0)$. Let $\varphi=f^b/g^a\in \cS^{{{\gamma}mma}p}$. Since the image $\bar \varphi_{s_0}$ of $\varphi$ in $(M_S/\cO_S^\times)_{s_0}$ belongs to $V_0$, there is a neighborhood $U$ of $(s_0,V_0, h_0)$ in $S_{{\mathrm{val}}}$ such that if $(s,V, h)\in U$, then $\bar \varphi_s\in V$. If $(s,V,h)\in U$ and if at least one of $f$ and $g$ are not invertible at $s$, then $r_*(f,g)=r({\bar f}_s, {\bar g}_s){{\gamma}mma}eq a/b$ because $\bar \varphi_s\in V$. Consider points $(s,V,h)\in U$ such that both $f$ and $g$ are invertible at $s$. On $U$, the function $(s,V, h)\mapsto h(\varphi)$ is continuous. (Here $h(\varphi)$ is defined to be $0$ if $\bar \varphi_s\operatorname{naive}otin V^\times$.) We have $$r_*(f,g)=b^{-1}r_*(f^b,g)= b^{-1}r_*(g^a\varphi, g)= (a/b) + b^{-1}\log(h(\varphi))/\log(|g(s)|).$$ When $(s, V, h)\in U$ converges to $(s_0, V_0, h_0)$, $h(\varphi)$ converges to $h_0(\varphi)\in{\bold{R}}$ and $g(s)$ converges to $0$. If $h_0(\varphi) =0$, then when $(s,V,h)$ converges to $(s_0, V_0, h_0)$, we have $h(\varphi)<1$ and $|g(s)|<1$ and hence $\log(h(\varphi))/\log(|g(s)|)>0$. If $h_0(\varphi)>0$, then when $(s,V,h)$ converges to $(s_0,V_0,h_0)$, $\log(h(\varphi))/\log(|g(s)|)$ converges to $0$. \end{pf} \bold egin{sbpara}{\lambda}bel{rat6} We next discuss $S_{[{\mathrm{val}}]}$. To define it, we use the following {\it new log structure on $S_{[:]}$} which is endowed with the sheaf $\cO_{S_{[:]}}$ of all ${\bold{R}}$-valued continuous functions. (We use the word \lq\lq new log structure'', to distinguish this log structure from the \lq\lq old'' log structure on $S_{[:]}$ which is defined as the inverse image of the log structure of $S$ on the inverse image of $\cO_S$ on $S_{[:]}$.) Assume that we are given a chart $\cS\to M_S$ with $\cS$ a sharp fs monoid such that $|f(s)| <1$ for any $f\in \cS\smallsetminus \{1\}$ and for any $s\in S$. Let $\cS^{(j)}$ $(0\leq j\leq n)$ be faces of $\cS$ such that $\cS=\cS^{(0)}\supsetneq \cS^{(1)}\supsetneq \dots \supsetneq \cS^{(n)}$ and let $\Phi=\{\cS^{(j)}\;|\; 0\leq j\leq n\}$. Take $q_j\in \cS^{(j-1)}\smallsetminus \cS^{(j)}$ for $1\leq j\leq n$. Then we define the new log structure on $S_{[:]}(\Phi)$ as the fs log structure associated to $${\bold{N}}^n \to \cO_{S_{[:]}}\;;\; m\mapsto (\prod_{j=1}^{n-1} r(q_{j+1}, q_j)^{m(j)/2})\cdot (-1/\log(|q_n|))^{m(n)/2}.$$ Then it is easy to see that this log structure glues to an fs log structure on $S_{[:]}$ which is independent of any choices. In the identification $S_{[:]}=|S|_{[:]}$ (\ref{fsK}), the new log structure of $S_{[:]}$ and that of $|S|_{[:]}$ coincide. \end{sbpara} \bold egin{sbrem} It may seem strange to take the square root $(-)^{m(j)/2}$ in the definition of this log structure. But this becomes important in Section 5 to have that the CKS map $D^{\sharp}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}$ respects (and $D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}^{\diamond}$ which appears later (\ref{diathm}) also respects) the log structures. \end{sbrem} \bold egin{sbpara} Let $S_{[{\mathrm{val}}]}$ be the valuative space $(S_{[:]})_{{\mathrm{val}}}$ (Section \ref{ss:valsp}) associated to $S_{[:]}$ endowed with this new log structure. By Section \ref{ss:valsp}, the map $S_{[{\mathrm{val}}]}\to S_{[:]}$ is proper and surjective. 
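For example, if the log structure of $S$ is, locally on $S$, generated by a single element $q$ with $|q(s)|<1$ for all $s\in S$, then every stalk of $M_S/\cO_S^\times$ is of rank at most one, the projection $S_{[:]}\to S$ is bijective, the new log structure on $S_{[:]}$ is generated by $(-1/\log(|q|))^{1/2}$, and the maps $S_{{\mathrm{val}}}\to S_{[:]}$ and $S_{[{\mathrm{val}}]}\to S_{[:]}$ are bijective as well. The three spaces $S_{[:]}$, $S_{{\mathrm{val}}}$, and $S_{[{\mathrm{val}}]}$ already differ in the rank two case; see the example in \ref{rvalex} below.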
\end{sbpara} \bold egin{sblem} Let $S$ (resp.\ $S'$) be a topological space endowed with the sheaf of all ${\bold{R}}$-valued continuous functions and with an fs log structure, and let $S'\to S$ be a strict morphism (\ref{gest6}) of locally ringed spaces over ${\bold{R}}$ with log structures. Then the canonical map $S'_{[{\mathrm{val}}]}\to S'\times_S S_{[{\mathrm{val}}]}$ is a homeomorphism. \end{sblem} \bold egin{pf} This is proved in the same way as \ref{valsrt}. \end{pf} \bold egin{sbprop}{\lambda}bel{v[[v]]} Assume $K={\bold{R}}$. There is a unique homeomorphism $$(|\Delta|^n)_{[{\mathrm{val}}]}\cong ({\bold{R}}^n_{{{\gamma}mma}eq 0})_{{\mathrm{val}}}$$ in which $(q_j)_{1\leq j\leq n}\in (|\Delta|\smallsetminus \{0\})^n{\subset}set (|\Delta|^n)_{[{\mathrm{val}}]}$ corresponds to $(-1/\log(q_j))_{1\leq j\leq n}\in {\bold{R}}^n_{>0}{\subset}set ({\bold{R}}^n_{{{\gamma}mma}eq 0})_{{\mathrm{val}}}$. \end{sbprop} \bold egin{pf} This is deduced from Proposition \ref{Nnphi}. \end{pf} \bold egin{sbpara}{\lambda}bel{rvalex} {\bf Example.} We compare $S_{[:]}$, $S_{{\mathrm{val}}}$, and $S_{[{\mathrm{val}}]}$ in the case $K={\bold{R}}$ and $S={\bold{R}}^2_{{{\gamma}mma}eq 0}$ with the standard log structure. The maps from these spaces to $S$ are homeomorphisms outside $(0,0)\in S$. We describe the fibers over $(0,0)$ explicitly. (1) The fiber of $S_{[:]}\to S$ over $(0,0)\in S$ is canonically homeomorphic to the interval $[0, \infty]$. It consists of points $r(a)$ with $a\in [0, \infty]$. $(q_1,q_2)\in {\bold{R}}^2_{>0}$ converges to $r(a)$ if and only if $q_j\to 0$ and $\log(q_2)/\log(q_1)\to a$. (2) A difference between the surjection $S_{{\mathrm{val}}}\to S_{[:]}$ and the surjection $S_{[{\mathrm{val}}]}\to S_{[:]}$ is that the fiber of the former surjection over $r(a)$ has cardinality $>1$ if and only if $a\in {\bold{Q}}_{>0}$ and the fiber of the latter surjection over $r(a)$ has cardinality $>1$ if and only if $a=0$ or $a=\infty$. (3) The fiber of $S_{{\mathrm{val}}}\to S$ over $(0,0)\in S$ consists of points $p(a)$ $(a\in [0, \infty]\smallsetminus {\bold{Q}}_{>0})$ and $p(a, c)$ $(a\in {\bold{Q}}_{>0}$, $c\in [0, \infty])$. $(q_1,q_2)\in {\bold{R}}^2_{>0}$ converges to $p(a)$ if and only if $q_j\to 0$ and $\log(q_2)/\log(q_1) \to a$. $(q_1,q_2)\in {\bold{R}}^2_{>0}$ converges to $p(a,c)$ if and only if $q_j\to 0$, $\log(q_2)/\log(q_1) \to a$, and $q_1^a/q_2\to c$. Under the map $S_{{\mathrm{val}}}\to S_{[:]}$, $p(a)$ goes to $r(a)\in S_{[:]}$ and $p(a, c)$ goes to $r(a)\in S_{[:]}$. (4) The fiber of $S_{[{\mathrm{val}}]}$ over $(0,0)\in S$ consists of points $s(a)$ $(a\in [0,\infty]\smallsetminus {\bold{Q}}_{>0})$ and $s(a, c)$ $(a\in {\bold{Q}}_{>0}$, $c\in [0, \infty])$. $(q_1,q_2)\in {\bold{R}}^2_{>0}$ converges to $s(a)$ if and only if $q_j\to 0$ and, for $t_j:= -1/\log(q_j)$ (so $t_j\to 0$), $\log(t_2)/\log(t_1)\to a$. $(q_1,q_2)\in {\bold{R}}^2_{>0}$ converges to $s(a,c)$ if and only if $q_j\to 0$ and, for $t_j:= -1/\log(q_j)$ (so $t_j\to 0$), $\log(t_2)/\log(t_1) \to a$, and $t_1^a/t_2$ converges to $c$. Under the map $S_{[{\mathrm{val}}]}\to S_{[:]}$, $s(1,c)$ goes to $r(c)\in S_{[:]}$. $s(a)$ with $a<1$ and $s(a, c)$ with $a<1$ go to $r(0)$ in $S_{[:]}$, and $s(a)$ with $a>1$ and $s(a, c)$ with $a>1$ go to $r(\infty)$ in $S_{[:]}$. (5) Some examples of convergences. (5.1) Fix $c\in {\bold{R}}_{>0}$. 
If $q\in {\bold{R}}_{>0}$ and $q\to 0$, $(cq, q)\in {\bold{R}}^2_{>0}$ converges to $r(1)$ in $S_{[:]}$, to $p(1, c)$ in $S_{{\mathrm{val}}}$, and to $s(1, 1)$ in $S_{[{\mathrm{val}}]}$. Thus the limit in $S_{[:]}$ and the limit in $S_{[{\mathrm{val}}]}$ are independent of $c$, but the limit in $S_{{\mathrm{val}}}$ depends on $c$. (5.2) Fix $a\in {\bold{R}}$ such that $0<a<1$. If $t\in {\bold{R}}_{>0}$ and $t\to 0$, $(\exp(-1/t), \exp(-1/t^a))\in {\bold{R}}_{>0}^2$ converges to $r(0)$ in $S_{[:]}$, to $p(0)$ in $S_{{\mathrm{val}}}$, and to $s(a)$ (resp.\ $s(a,1)$) in $S_{[{\mathrm{val}}]}$ if $a\operatorname{naive}otin {\bold{Q}}$ (resp.\ $a\in {\bold{Q}}$). Thus the limit in $S_{[:]}$ and the limit in $S_{{\mathrm{val}}}$ are independent of $a$, but the limit in $S_{[{\mathrm{val}}]}$ depends on $a$. \end{sbpara} {\subset}section{The spaces $D^{\sharp}_{\Sigma, [:]}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$}{\lambda}bel{ra6} \bold egin{sbpara} Let $\Sigma$ be a weak fan in $\fg_{\bold{Q}}$ (Part III, 2.2.3) and let ${\Gamma}amma$ be a neat subgroup of $G_{{\bold{Z}}}$ which is strongly compatible with $\Sigma$. Then we have a space ${\Gamma}amma \operatorname{\backslash} D_{\Sigma}$ which is endowed with a sheaf of holomorphic functions and an fs log structure. By taking $K={\bold{C}}$ in Section 4.2, we have a topological space $({\Gamma}amma \operatorname{\backslash} D_{\Sigma})_{[:]}$ with a proper surjective map $({\Gamma}amma \operatorname{\backslash} D_{\Sigma})_{[:]}\to {\Gamma}amma \operatorname{\backslash} D_{\Sigma}$. \end{sbpara} \bold egin{sbpara} Let $\Sigma$ be a weak fan in $\fg_{\bold{Q}}$ and let $D^{\sharp}_{\Sigma}$ be the topological space defined in Part III, 2.2.5. We define topological spaces $D^{\sharp}_{\Sigma,[:]}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$, and proper surjective maps $D^{\sharp}_{\Sigma,[:]}\to D^{\sharp}_{\Sigma}$, $D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D^{\sharp}_{\Sigma, [:]}$, and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}\to D^{\sharp}_{\Sigma, [:]}$. Here $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ is the topological space defined in Part III, 3.2. \end{sbpara} \bold egin{sbpara}{\lambda}bel{4.4.3} Let ${\sigma}\in \Sigma$ and consider the open set $D_{{\sigma}}^{\sharp}$ of $D_{\Sigma}^{\sharp}$. There is a neat subgroup ${\Gamma}amma$ of $G_{\bold{Z}}$ which is strongly compatible with the fan $\text{face}({\sigma})$ of all faces of ${\sigma}$. We define the topological space $D^{\sharp}_{{\sigma},[:]}$ as the fiber product of $D^{\sharp}_{{\sigma}}\to {\Gamma}amma \operatorname{\backslash} D_{{\sigma}}\leftarrow ({\Gamma}amma \operatorname{\backslash} D_{{\sigma}})_{[:]}$. This is independent of the choice of ${\Gamma}amma$. Furthermore, the inverse image of the new log structure of $({\Gamma}amma \operatorname{\backslash} D_{{\sigma}})_{[:]}$ on $D^{\sharp}_{{\sigma},[:]}$ (given on the sheaf of all ${\bold{R}}$-valued continuous functions), which we call the {\it new log structure of $D^{\sharp}_{{\sigma},[:]}$}, is independent of the choice of ${\Gamma}amma$. These $D^{\sharp}_{{\sigma},[:]}$ glue to a topological space $D^{\sharp}_{\Sigma,[:]}$ over $D^{\sharp}_{\Sigma}$, and the new log structures of $D^{\sharp}_{{\sigma},[:]}$ glue to an fs log structure on the sheaf of all ${\bold{R}}$-valued functions on $D^{\sharp}_{\Sigma,[:]}$, which we call {\it the new log structure}. We define $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ as the valuative space associated to $D^{\sharp}_{\Sigma,[:]}$ with the new log structure. 
We have canonical proper surjective maps $D^{\sharp}_{\Sigma,[:]}\to D^{\sharp}_{\Sigma}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}\to D^{\sharp}_{\Sigma,[:]}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{trouble} Before we define the canonical map $D^{\sharp}_{\Sigma, {\mathrm{val}}}\to D^{\sharp}_{\Sigma, [:]}$, we remark that, though we have a canonical new log structure on $D^{\sharp}_{\Sigma,[:]}$, we do not have a canonical log structure on $D^{\sharp}_{\Sigma}$. For ${\sigma}\in \Sigma$ and for a neat subgroup ${\Gamma}amma$ of $G_{{\bold{Z}}}$ which is strongly compatible with $\text{face}({\sigma})$, the pull-back of the log structure of ${\Gamma}amma \operatorname{\backslash} D_{{\sigma}}$ on $D^{\sharp}_{{\sigma}}$ depends on the choice of ${\Gamma}amma$. Here we endow $D^{\sharp}_{{\sigma}}$ with the sheaf of all ${\bold{C}}$-valued continuous functions. For example, consider the classical case $H_{0,{\bold{Z}}}={\bold{Z}}^2$ of pure weight $1$ of Hodge type $(1,0)+(0,1)$, in which $D$ is the upper half plane. For the standard choice of ${\sigma}$ and ${\Gamma}amma= \bold egin{pmatrix} 1& {\bold{Z}} \\ 0&1\end{pmatrix}$, ${\Gamma}amma \operatorname{\backslash} D_{{\sigma}}$ is isomorphic to the unit disc and the log structure is generated by the coordinate function $q$. $D^{\sharp}_{{\sigma}}$ is identified with $\{x+iy\;|\; x\in {\bold{R}}, 0< y\leq \infty\}$ and the canonical projection $D^{\sharp}_{{\sigma}}\to {\Gamma}amma\operatorname{\backslash} D_{{\sigma}}$ is identified with $z\mapsto \exp(2\pi i z)$. We have $D^{\sharp}_{{\sigma},[:]}=D^{\sharp}_{{\sigma}}$ and the new log structure on it is generated by $1/y^{1/2}$, or equivalently by $1/(\log|q|)^{1/2}$. Take $n{{\gamma}mma}eq 2$ and replace ${\Gamma}amma$ by ${\Gamma}amma':=\bold egin{pmatrix} 1&n{\bold{Z}}\\0&1\end{pmatrix}$. Then the log structure of ${\Gamma}amma' \operatorname{\backslash} D_{{\sigma}}$ is generated by $q^{1/n}$. Hence the inverse image on $D^{\sharp}_{{\sigma}}$ of the log structure of ${\Gamma}amma \operatorname{\backslash} D_{{\sigma}}$ and that of ${\Gamma}amma'\operatorname{\backslash} D_{{\sigma}}$ do not coincide. This problem does not happen for the new log structure, for $1/(\log|q^{1/n}|)^{1/2}=n^{1/2}/(\log|q|)^{1/2}$ and $1/(\log|q|)^{1/2}$ generate the same log structure. \end{sbpara} \bold egin{sbpara}{\lambda}bel{valD} Endow $D^{\sharp}_{\Sigma}$ with the sheaf of all ${\bold{C}}$-valued continuous functions. For ${\sigma}\in \Sigma$, take a neat subgroup ${\Gamma}amma$ of $G_{{\bold{Z}}}$ which is strongly compatible with ${\sigma}$, and consider the inverse image on $D^{\sharp}_{{\sigma}}$ of the log structure of ${\Gamma}amma \operatorname{\backslash} D_{{\sigma}}$. We show that $D^{\sharp}_{{\sigma},{\mathrm{val}}}$ in Part III is identified with the valuative space $S_{{\mathrm{val}}}$ in Section 3.1 associated to $S:=D^{\sharp}_{{\sigma}}$ with this log structure (with $K={\bold{C}}$). In \ref{rvtoric}, let $N= \{x\in {\sigma}_{\bold{R}}\;|\; \exp(x)\in {\Gamma}amma\;\text{in}\;G_{\bold{R}}\}$, let $L= \Hom(N, {\bold{Z}})$, and regard ${\sigma}$ as a cone in $N_{\bold{R}}:={\bold{R}}\otimes N$. Let $\Sigma$ be the fan of all faces of ${\sigma}$, and denote $|{\operatorname{toric}}|(\Sigma)$ by $|{\operatorname{toric}}|_{{\sigma}}$.
Then we have a commutative diagram $$\bold egin{matrix} D_{{\sigma},{\mathrm{val}}}^{\sharp} &\leftarrow & E^{\sharp}_{{\sigma},{\mathrm{val}}} &\overset{{\subset}set}\to & |{\operatorname{toric}}|_{{\sigma},{\mathrm{val}}}\times \Dc\\ \downarrow && \downarrow && \downarrow \\ D^{\sharp}_{{\sigma}} &\leftarrow & E^{\sharp}_{{\sigma}} & \overset{{\subset}set}\to & |{\operatorname{toric}}|_{{\sigma}}\times \Dc \end{matrix}$$ where the squares are cartesian, $E^{\sharp}_{{\sigma}}$ is a ${\sigma}_{\bold{R}}$-torsor over $D^{\sharp}_{{\sigma}}$ and $E^{\sharp}_{{\sigma},{\mathrm{val}}}$ is a ${\sigma}_{\bold{R}}$-torsor over $D^{\sharp}_{{\sigma},{\mathrm{val}}}$ for certain natural actions of ${\sigma}_{\bold{R}}$ on $E^{\sharp}_{{\sigma}}$ and $E^{\sharp}_{{\sigma},{\mathrm{val}}}$, and the pull-back of the log structure of $|D^{\sharp}_{{\sigma}}|$ (\ref{abslog}) on $E^{\sharp}_{{\sigma}}$ coincides with the pull-back of the canonical log structure of $|{\operatorname{toric}}|_{{\sigma}}$. In the upper row, the space in the middle and the space on the right are the valuative spaces associated to their lower spaces, respectively. Hence the valuative space associated to $D^{\sharp}_{{\sigma}}$ coincides with the quotient $D^{\sharp}_{{\sigma},{\mathrm{val}}}$ of $E^{\sharp}_{{\sigma},{\mathrm{val}}}$ by ${\sigma}_{\bold{R}}$, that is, $D^{\sharp}_{{\sigma},{\mathrm{val}}}$. Here the problem of the dependence of the log structure of $S=D^{\sharp}_{{\sigma}}$ on ${\Gamma}amma$ (\ref{trouble}) does not affect for the following reason. For another choice ${\Gamma}amma'$ of ${\Gamma}amma$ such that ${\Gamma}amma'{\subset}set {\Gamma}amma$, the identity map of $S$ is a morphism from $S$ with the log structure given by ${\Gamma}amma'$ to $S$ with the log structure given by ${\Gamma}amma$, and this morphism has the Kummer property \ref{timesval} of log structure. Hence the associated valuative space is independent of the choice of ${\Gamma}amma$. \end{sbpara} \bold egin{sbpara} By \ref{valD} and by Section \ref{ss:valsp} and Section \ref{ra2}, we have a proper surjective map $D^{\sharp}_{{\sigma},{\mathrm{val}}}\to D^{\sharp}_{{\sigma},[:]}$, and this glues to a proper surjective map $D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D^{\sharp}_{\Sigma,[:]}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{rvalex2} {\bf Example.} We describe the differences of the topologies of $D^{\sharp}_{\Sigma,[:]}$, $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$. Let $N_1, N_2\in \frak g_{{\bold{Q}}}$ and assume $N_1N_2=N_2N_1$ and that $N_1$ and $N_2$ are nilpotent and linearly independent over ${\bold{Q}}$. Let $F\in \Dc$ and assume that $(N_1, N_2, F)$ generates a nilpotent orbit in the sense of Part III, 2.2.2. Let $\Sigma$ be the fan of all faces of the cone ${\bold{R}}_{{{\gamma}mma}eq 0} N_1+{\bold{R}}_{{{\gamma}mma}eq 0}N_2$. When $y_1, y_2\in{\bold{R}}$ tend to $ \infty$, $\exp(iy_1N_1+iy_2N_2)F$ converges in $D^{\sharp}_{\Sigma}$. (1) Fix a constant $a\in {\bold{R}}$. When $y\to \infty$, $\exp(iyN_1+i(y+a)N_2)F$ converges in $D^{\sharp}_{\Sigma,{\mathrm{val}}}$, $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$, $D^{\sharp}_{\Sigma,[:]}$. The limit in $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ is independent of $a$ and hence the limit in $D^{\sharp}_{\Sigma,[:]}$ is independent of $a$, but the limit in $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ depends on $a$. (2) Fix a constant $a\in {\bold{R}}$ such that $0<a<1$. 
Then when $y\to \infty$, $\exp(iyN_1+iy^aN_2)F$ converges in $D^{\sharp}_{\Sigma,{\mathrm{val}}}$, $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$, $D^{\sharp}_{\Sigma,[:]}$. The limit in $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ is independent of $a$ and hence the limit in $D^{\sharp}_{\Sigma,[:]}$ is independent of $a$, but the limit in $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$ depends on $a$. \end{sbpara} {\subset}section{CKS maps to $D_{{\rm{SL}}(2)}$ and $D_{{\rm{SL}}(2),{\mathrm{val}}}$}{\lambda}bel{ss:cks1} \bold egin{sbpara}{\lambda}bel{4.5.1} Recall that in Part III, Theorem 3.3.2, we proved that the identity map of $D$ extends uniquely to a continuous map $$D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D_{{\rm{SL}}(2)}^I.$$ Part III, Section 3.3 is devoted to its proof. The corresponding result in the pure case is \cite{KU} Theorem 5.4.4 whose full proof is given in [ibid] Chapter 6. In this section \ref{ss:cks1}, we prove the following related theorems \ref{valper} and \ref{valper2}. \end{sbpara} \bold egin{sbthm}{\lambda}bel{valper} $(1)$ The identity map of $D$ extends uniquely to continuous maps $$D^{\sharp}_{\Sigma, [:]}\to D^I_{{\rm{SL}}(2)}, \quad D^{\sharp}_{\Sigma,{[\val]}}\to D^I_{{\rm{SL}}(2),{\mathrm{val}}}.$$ These maps respect the log structures on the sheaves of all ${\bold{R}}$-valued continuous functions. Here we use the new log structures on $D^{\sharp}_{\Sigma, [:]}$ in \ref{4.4.3} (cf.\ \ref{rat6}) and the log structure on $D^I_{{\rm{SL}}(2)}$ discussed in Theorem \ref{slis+}. $(2)$ The CKS map $D_{\Sigma,{\mathrm{val}}}^{\sharp}\to D^I_{{\rm{SL}}(2)}$ defined in Part III Theorem 3.3.2 coincides with the composition $D_{\Sigma,{\mathrm{val}}}^{\sharp}\to D^{\sharp}_{\Sigma,[:]}\to D^I_{{\rm{SL}}(2)}$. \end{sbthm} \bold egin{sbpara}{\lambda}bel{Dnilp} Let $D_{\operatorname{naive}ilp}$ be the set of $(N_1, \dots, N_n, F)$, where $n{{\gamma}mma}eq 0$, $N_j\in \fg_{\bold{R}}$ and $F\in \Dc$, which satisfies the following two conditions. (i) $(N_1,\dots, N_n, F)$ generates a nilpotent orbit in the sense of Part III, 2.2.2. (ii) For any $w\in {\bold{Z}}$ and for any $1\leq j\leq n$, let $W^{(j)}$ be the relative monodromy filtration of $y_1N_1+\dots +y_jN_j$ relative to $W$ $(W^{(j)}$ exists and does not depend on the choices of $y_j\in {\bold{R}}_{>0}$ by the condition (i)). Then the filtrations $W^{(j)}({{\gamma}mma}r^W)$ on ${{\gamma}mma}r^W$ $(1\leq j\leq n)$ are ${\bold{Q}}$-rational. \end{sbpara} \bold egin{sbpara}{\lambda}bel{assoc} We review the map $D_{\operatorname{naive}ilp} \to D_{{\rm{SL}}(2)}$ which sends $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}$ to the class of the associated ${\rm{SL}}(2)$-orbit (Part II, 2.4). It is the map which sends $(N_1, \dots, N_n, F)$ to the limit of $\exp(\sum_{j=1}^n iy_jN_j)F$ where $y_j\in {\bold{R}}_{>0}$, $y_j/y_{j+1}\to \infty$ $(1\leq j\leq n$, $y_{n+1}$ denotes $1$) in $D_{{\rm{SL}}(2)}^I$. This map is also characterized as follows. Recall that an element $(p, Z)$ of $D_{{\rm{SL}}(2)}$ is determined by the following (i) and (ii). (i) Whether $(p, Z)$ is an $A$-orbit or a $B$-orbit. (ii) $(\Phi, {{\bold r}})$ where $\Phi$ is the set of weight filtrations on ${{\gamma}mma}r^W$ associated to $p$ (Part II, 2.5.2 (ii)) and ${{\bold r}}$ is any element of $Z$. Let $(p,Z)\in D_{{\rm{SL}}(2)}$ be the image of $(N_1, \dots, N_n, F)$. Then $(p,Z)$ is a $B$-orbit if and only if there is $j$ such that $N_j\operatorname{naive}eq 0$, $N_k=0$ for $1\leq k< j$, and ${{\gamma}mma}r^W(N_j)=0$. 
$\Phi$ is the set of $W^{(j)}({{\gamma}mma}r^W)$ for all $j$ such that ${{\gamma}mma}r^W(N_k) \operatorname{naive}eq 0$ for some $k\leq j$. ${{\bold r}}$ in the above (ii) is given as follows: Since $(N_1,\dots, N_n, F)$ generates a nilpotent orbit, $(W^{(n)}, F)$ is an MHS. Let $(W^{(n)}, \hat F_{(n)})$ be the ${\bold{R}}$-split MHS associated to it. Then $(N_1, \dots, N_{n-1}, \exp(iN_n)\hat F_{(n)})$ generates a nilpotent orbit and hence $(W^{(n-1)}, \exp(iN_n)\hat F_{(n)})$ is an MHS. Let $(W^{(n-1)}, \hat F_{(n-1)})$ be the ${\bold{R}}$-split MHS associated to it. Then $(N_1, \dots, N_{n-2}, \exp(iN_{n-1})\hat F_{(n-1)})$ generates a nilpotent orbit and hence $(W^{(n-2)}, \exp(iN_{n-1})\hat F_{(n-1)})$ is an MHS. $\dots$ In this way, we have ${\bold{R}}$-split MHS $(W^{(j)}, \hat F_{(j)})$ for $1\leq j\leq n$ by a downward induction on $j$. (See Part II 2.4.6). We obtain ${{\bold r}}\in D$ as ${{\bold r}}=\exp(iN_k)\hat F_{(k)}$ if $k$ is the minimal $j$ such that $N_j\operatorname{naive}eq 0$, where in the case $N_j=0$ for all $j$, we define ${{\bold r}} = F$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Dsig:} Assume $({\Gamma}amma, \Sigma)$ is strongly compatible. By \ref{RandR'}, $D^{\sharp}_{\Sigma, [:]}$ is identified with the set of $({\sigma}, Z, (\cS^{(j)})_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})$ where $({\sigma}, Z)\in D^{\sharp}_{\Sigma}$, and if $s$ denotes the image of $({\sigma},Z)$ in $S:={\Gamma}amma \operatorname{\backslash} D_{\Sigma}$, $\cS^{(j)}$ are faces of $(M_S/\cO_S^\times)_s$ such that $(M_S/\cO_S^\times)_s=\cS^{(0)}\supsetneq \cS^{(1)}\supsetneq \dots \supsetneq \cS^{(n)}=\{1\}$ and $N_j$ is a homomorphism $\cS^{(j-1)}\to {\bold{R}}^{\add}$ such that $N_j(\cS^{(j)})=0$ and $N_j(\cS^{(j-1)}\smallsetminus \cS^{(j)}){\subset}set {\bold{R}}_{>0}$. For $s= \text{class}({\sigma}, Z)\in S={\Gamma}amma \operatorname{\backslash} D_{\Sigma}$, $(M_S/\cO_S^\times)_s$ is canonically isomorphic to $\Hom({\Gamma}amma({\sigma}), {\bold{N}})$. Hence ${\sigma}$ is identified with $\Hom((M_S/\cO_S^\times)_s, {\bold{R}}^{\add}_{{{\gamma}mma}eq 0})$ and the face $\cS^{(j)}$ of $(M_S/\cO_S^\times)_s$ in the above corresponds to a face ${\sigma}_j$ of ${\sigma}$ consisting of all homomorphisms $(M_S/\cO_S^\times)_s\to {\bold{R}}^{\add}_{{{\gamma}mma}eq 0}$ which kill $\cS^{(j)}$. Hence $D^{\sharp}_{\Sigma,[:]}$ is identified with the set of $({\sigma}, Z, ({\sigma}_j)_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})$ where $({\sigma},Z)\in D^{\sharp}_{\Sigma}$, ${\sigma}_j$ are faces of ${\sigma}$ such that $0={\sigma}_0{\subset}setneq {\sigma}_1{\subset}setneq \dots {\subset}setneq {\sigma}_n={\sigma}$, and $N_j$ is an element of ${\sigma}_{j,{\bold{R}}}/{\sigma}_{j-1,{\bold{R}}}$ which belongs to the image of an element of the interior of ${\sigma}_j$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Dsig:2} Let $({\sigma}, Z, ({\sigma}_j)_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in D^{\sharp}_{\Sigma,[:]}$ (\ref{Dsig:}) and let $\tilde N_j$ be an element of the interior of ${\sigma}_j$ whose image in ${\sigma}_{j,{\bold{R}}}/{\sigma}_{j-1,{\bold{R}}}$ coincides with $N_j$. Let $F\in Z$. Then $(\tilde N_1, \dots, \tilde N_n, F)$ generates a nilpotent orbit, as is easily seen. \end{sbpara} \bold egin{sbthm}{\lambda}bel{valper2} The map $D^{\sharp}_{\Sigma, [:]} \to D^I_{{\rm{SL}}(2)}$ (\ref{valper}) sends $({\sigma}, Z, ({\sigma}_j)_j, (N_j)_j)$ to the image of $(\tilde N_1, \dots, \tilde N_n, F)\in D_{\operatorname{naive}ilp}$ in $D_{{\rm{SL}}(2)}$ (\ref{assoc}).
Here $F\in Z$ and $\tilde N_j$ is any element of the interior of ${\sigma}_j$ whose image in ${\sigma}_{j,{\bold{R}}}/{\sigma}_{j-1,{\bold{R}}}$ coincides with $N_j$. \end{sbthm} \bold egin{sblem}{\lambda}bel{dzs} Let ${\sigma}{\subset}set \fg_{\bold{R}}$ be a nilpotent cone, let $F\in \Dc$, and assume that $({\sigma}, F)$ generates a nilpotent orbit. Let $N\in {\sigma}_{{\bold{R}}}$ and let $F'=\exp(iN)F$. Let $M({\sigma}, W)$ be the relative monodromy filtration of ${\sigma}$ with respect to $W$. (1) ${\delta}ta(M({\sigma}, W), F')= {\delta}ta(M({\sigma}, W), F)+N$ where the last $N$ denotes the homomorphism ${{\gamma}mma}r^{M({\sigma},W)}\to {{\gamma}mma}r^{M({\sigma}, W)}$ which is the sum of the maps ${{\gamma}mma}r^{M({\sigma},W)}_k\to {{\gamma}mma}r^{M({\sigma}, W)}_{k-2}$ $(k\in {\bold{Z}})$ induced by $N$. (2) $\zeta(M({\sigma}, W), F')= \zeta(M({\sigma}, W), F)$. (3) ${\rm{spl}}_{M({\sigma}, W)}(F')= {\rm{spl}}_{M({\sigma},W)}(F)$. \end{sblem} \bold egin{pf} (1) follows from the definition of ${\delta}ta$. By (1), (2) follows from the facts that ${\delta}ta(M({\sigma}, W), F)$ and $N$ commute, $N$ is of Hodge type $(-1,-1)$ for $F({{\gamma}mma}r^{M({\sigma}), W)})$, and $\zeta_{-1,-1}=0$ in general. (3) follows from (1) and (2). \end{pf} \bold egin{sbpara}{\lambda}bel{Dsig:3} Let $({\sigma}, Z, ({\sigma}_j)_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in D^{\sharp}_{\Sigma,[:]}$, $F\in Z$, $\tilde N_j$ be as in \ref{Dsig:2}. We show that the image of $(\tilde N_1, \dots, \tilde N_n, F)\in D_{\operatorname{naive}ilp}$ in $D_{{\rm{SL}}(2)}$ is independent of the choices of $\tilde N_j$ and the choice of $F\in Z$. We prove that the associated element of $D_{{\rm{SL}}(2)}$ does not depend on the choice of $F\in Z$. If $F'\in Z$, $F'=\exp(iN)F$ for some $N\in {\sigma}_{\bold{R}}$. Hence by \ref{dzs} (3) applied to $({\sigma}, F)$ which generates a nilpotent orbit, $\hat F_{(n)}$ is independent of the choice. We prove that the associated element of $D_{{\rm{SL}}(2)}$ does not depend on the choice of a lifting $\tilde N_j$ of $N_j$. If $\tilde N'_j$ is another lifting of $N_j$, $\tilde N'_j= \tilde N_j + R_j$ for some $R_j\in {\sigma}_{j-1,{\bold{R}}}$. By \ref{dzs} (3) applied to $({\sigma}_{j-1}, \exp(iN_j) \hat F_{(j)})$ which generates a nilpotent orbit, $\hat F_{(j-1)}$ is independent of the choice by downward induction on $j$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Dsig:4} By \ref{Dsig:3}, we have a map $D^{\sharp}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}$ which sends $({\sigma}, Z, ({\sigma}_j)_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in D^{\sharp}_{\Sigma,[:]}$ to the image of $(\tilde N_1, \dots, \tilde N_n, F)\in D_{\operatorname{naive}ilp}$ in $D_{{\rm{SL}}(2)}$. Comparing the definition of this map and the definition of the map $\psi: D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D_{{\rm{SL}}(2)}$ (\ref{4.5.1}) given in Part III, 3.3.1, we see that the composition $D^{\sharp}_{\Sigma,{\mathrm{val}}}\to D^{\sharp}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}$ coincides with $\psi$. \end{sbpara} \bold egin{sbpara} We complete the proofs of \ref{valper} (1) and \ref{valper2}. This map $D^{\sharp}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}$ in \ref{Dsig:4} is continuous because $D^{\sharp}_{\Sigmama,{\mathrm{val}}}\to D^{I}_{{\rm{SL}}(2)}$ is continuous and $D^{\sharp}_{\Sigmama,{\mathrm{val}}}\to D^{\sharp}_{\Sigmama,[:]}$ is proper and surjective. 
\end{sbpara} \bold egin{sbpara}{\lambda}bel{4.5log} Endow $D^{\sharp}_{\Sigmama,[:]}$ with the the new log structure in \ref{rat6} on the sheaf of all ${\bold{R}}$-valued continuous functions. Consider the log structure on $D^I_{{\rm{SL}}(2)}$ in \ref{slis+}. We show that the continuous map $D^{\sharp}_{\Sigmama,[:]}\to D^I_{{\rm{SL}}(2)}$ respects these log structures. We check this on $E^{\sharp}_{{\sigma},[:]}$. On the toric component of $E^{\sharp}_{{\sigma},[:]}$, the log structure is generated by $t_j:=(y_{j+1}/y_j)^{1/2}$ ($1\leq j\leq n$, $y_{n+1}$ denotes $1$). Let $\bold eta$ be a distance to the boundary for $\Phi$, where $\Phi$ is the set of $W({\sigma}ma_j,W)$ and let $\tau: {\bold{G}}_{m,{\bold{R}}}^n\to \prod_w {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ be the homomorphism whose $w$-component is the $\tau$ (\ref{pure2}) of the ${\rm{SL}}(2)$-orbit in $n$ variables associated to $(N_1({{\gamma}mma}r^W_w), \dots, N_n({{\gamma}mma}r^W_w), F({{\gamma}mma}r^W_w))$. Then $\bold eta(\exp(\sum_{j=1}^n iy_jN_j)F) = t u$ where $u:=\bold eta(\tau(t)^{-1}\exp(\sum_{j=1}^n iy_jN_j)F)$ is invertible in the ring of real analytic functions. Hence $D^{\sharp}_{\Sigmama,[:]}\to D^I_{{\rm{SL}}(2)}$ respects the log structures. \end{sbpara} \bold egin{sbpara} By \ref{4.5log}, the map $D^{\sharp}_{\Sigmama,[:]} \to D^I_{{\rm{SL}}(2)}$ induces the continuous map $D^{\sharp}_{\Sigma,{[\val]}}\to D^I_{{\rm{SL}}(2),{\mathrm{val}}}$ of associated valuative spaces. This proves \ref{valper} (2). \end{sbpara} \bold egin{sbpara} Consequently, in the pure case, we have an amplified fundamental diagram $$\bold egin{matrix} &&&&D^{\sharp}_{\Sigmama,{[\val]}}& \overset{\psi}\to & D_{{\rm{SL}}(2),{\mathrm{val}}}&\overset{\eta}\to &D_{{\rm {BS}},{\mathrm{val}}}\\ &&&&&&\downarrow &&\downarrow \\ &&&&\downarrow && D^{{\rm {BS}}}_{{\rm{SL}}(2)} &\to &D_{{\rm {BS}}}\\ &&&&&&\downarrow&&\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma,{\mathrm{val}}}& \leftarrow & D_{\Sigma,{\mathrm{val}}}^{\sharp}&\to &D^{\sharp}_{\Sigma,[:]} & \overset {\psi} \to &D_{{\rm{SL}}(2)}&&\\ \downarrow &&&&\downarrow&&&&\\ {\Gamma}amma \operatorname{\backslash} D_{\Sigma}&&\leftarrow &&D_{\Sigma}^{\sharp}&&&& \end{matrix}$$ which is commutative and in which the maps respect the structures of the spaces. \end{sbpara} \section{Mild nilpotent orbits and the space $D^{\diamond}_{{\rm{SL}}(2)}$ of ${\rm{SL}}(2)$-orbits}{\lambda}bel{s:dia} In this Section 5, we consider the spaces of mild nilpotent orbits, and the space $D^{\diamond}_{{\rm{SL}}(2)}$ which is closely related to mild nilpotent orbits. In Section 5.1, we give main definitions and main results of Section 5. In the rest of Section 5, we give the proofs of the results in Section 5.1. These results in Section 5.1 were obtained in our joint efforts with Spencer Bloch. {\subset}section{Mild nilpotent orbits and the space $D^{\diamond}_{{\rm{SL}}(2)}$} Let ${\cal {L}}= W_{-2}{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)$ be as in \ref{cL(F)}. \bold egin{sbpara}{\lambda}bel{cD} Let $D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}$ be the subset of $D_{\operatorname{naive}ilp}$ (\ref{Dnilp}) consisting of all elements $(N_1,\dots, N_n, F)$ satisfying the following condition. For any $y_j{{\gamma}mma}eq 0$ $(1\leq j\leq n)$, there is a splitting (which may depend on $(y_j)_j$) of $W$ which is compatible with $\sum_{j=1}^n y_jN_j$. \end{sbpara} We have the following \lq\lq{\rm{SL}}(2)-orbit theorem for mild degeneration''. 
\bold egin{sbthm}{\lambda}bel{mnilp1} Let $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}$. (1) If $y_j/y_{j+1}\to \infty$ $(1\leq j\leq n$, $y_{n+1}$ denotes $1$), ${\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ converges in ${\cal {L}}$. Moreover, there are $a_m \in {\cal {L}}$ for $m\in {\bold{N}}^n$ and $\varepsilon\in {\bold{R}}_{>0}$ satisfying the following (i) and (ii). (i) $\sum_{m \in {\bold{N}}^n} \; (\prod_{j=1}^n \;x_j^{m(j)})a_m$ absolutely converges for $x_j\in {\bold{R}}$, $|x_j|<\varepsilon$ ($1\leq j \leq n$). (ii) For $y_j\in {\bold{R}}_{>0}$ ($1\leq j\leq n$) such that $t_j:=(y_{j+1}/y_j)^{1/2}<\varepsilon$ ($1\leq j\leq n$, $y_{n+1}$ denotes $1$), we have $\exp(\sum_{j=1}^n iy_jN_j)F\in D$ and $${\delta}ta_W (\exp (\sum_{j=1}^n iy_jN_j)F) = \sum_{m\in {\bold{N}}^n}\; (\prod_{j=1}^n t_j^{m(j)}) a_m.$$ (2) Let $\tau^{{\rm{st}}ar}: {\bold{G}}^n_{m,{\bold{R}}}\to G({{\gamma}mma}r^W)$ be the homomorphism whose $G_{\bold{R}}({{\gamma}mma}r^W_w)$ component is the $\tau^{{\rm{st}}ar}$ (\ref{pure2}) of the ${\rm{SL}}(2)$-orbit in $n$ variables on ${{\gamma}mma}r^W_w$ associated to $(N_1({{\gamma}mma}r^W_w), \dots, N_n({{\gamma}mma}r^W_w), F({{\gamma}mma}r^W_w))$. Then there are $a_m \in {\cal {L}}$ for $m\in {\bold{N}}^n$ and $\varepsilon\in {\bold{R}}_{>0}$ satisfying the above condition (i) and the modification of the above condition (ii) by replacing ${\delta}ta_W (\exp (\sum_{j=1}^n iy_jN_j)F)$ with ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W (\exp (\sum_{j=1}^n iy_jN_j)F)$ where $t=(t_1,\dots, t_n), \; t_j=(y_{j+1}/y_j)^{1/2}$. (3) If $y_j/y_{j+1}\to \infty$ $(1\leq j\leq n$, $y_{n+1}$ denotes $1$), $\exp(\sum_{j=1}^n iy_jN_j)F$ converges in $D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{{\rm{mild}}}}$. \end{sbthm} In fact, (3) follows from (2) by the definition of the structure of $D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$ as an object of ${\cal {B}}'_{\bold{R}}(\log)$ given in \ref{2.3.11}. \bold egin{sbpara}{\lambda}bel{5.1.3} By \ref{mnilp1}, we have maps $$D_{\operatorname{naive}ilp}^{{{\rm{mild}}}} \to {\cal {L}}, \quad D_{\operatorname{naive}ilp}^{{{\rm{mild}}}} \to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$$ by taking the limit of the convergence in \ref{mnilp1}. \end{sbpara} \bold egin{sbpara}{\lambda}bel{mild} We define the {\it mild part $D_{\Sigmama}^{{{\rm{mild}}}}$ of the set of nilpotent orbits $D_{\Sigmama}$} as the part consisting of the points $({\sigma}ma,Z)$ which satisfy the following condition: (C) For each $N$ in the cone ${\sigma}ma$, there is a splitting of $W$ (which can depend on $N$) which is compatible with $N$. For the other spaces of nilpotent orbits $D_{\Sigmama}^{\sharp}$, $D_{\Sigmama,[:]}^{\sharp}$, $D_{\Sigmama,[{\mathrm{val}}]}^{\sharp}$ etc.\ we define their mild parts $D_{\Sigmama}^{\sharp,{{\rm{mild}}}}$, $D_{\Sigmama,[:]}^{\sharp,{{\rm{mild}}}}$, $D_{\Sigmama,[{\mathrm{val}}]}^{\sharp,{{\rm{mild}}}}$ etc.\ as the inverse images of $D_{\Sigmama}^{{{\rm{mild}}}}$. \end{sbpara} \bold egin{sbpara} In the above definition \ref{mild} of the mildness, the following stronger condition (C${}'$) need not be satisfied. $({\rm C}')$ There is a splitting of $W$ which is compatible with any element $N$ of the cone ${\sigma}ma$.
\end{sbpara} \bold egin{sbpara}{\lambda}bel{ExII} For example, in the case of Example II in Part I and Part II (the case of $0\to H^1(E)(1)\to * \to {\bold{Z}}\to 0$, where $E$ varies over elliptic curves), we had a nilpotent orbit of rank $2$, and that is a mild degeneration in the sense of \ref{mild} (that is, it satisfies (C)) but it does not satisfy $({\rm C}')$. \end{sbpara} \bold egin{sbthm}{\lambda}bel{Lthm} (1) There is a unique continuous map $D^{\sharp,{{\rm{mild}}}}_{\Sigma, [:]}\to {\cal {L}}$ which extends the map $D\to {\cal {L}}\;;\;x\mapsto {\delta}ta_W(x)$. (2) There is a unique continuous map $D^{\sharp,{{\rm{mild}}}}_{\Sigma, [:]}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ which extends the identity map of $D$. (3) The map in (1) (resp.\ (2)) sends $({\sigma}, Z, ({\sigma}_j)_j, (N_j)_j) \in D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]}$ (\ref{Dsig:}) to the image of $(\tilde N_1, \dots, \tilde N_n, F)\in D^{{{\rm{mild}}}}_{\operatorname{naive}ilp}$ in ${\cal {L}}$ (resp.\ $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$) under the map in \ref{5.1.3}. Here $\tilde N_j$ is as in \ref{valper2} and $F$ is any element of $Z$. \end{sbthm} (1) of \ref{Lthm} shows the convergence of Beilinson regulators in a family with mild degeneration. See Section 7.2. \bold egin{sbpara}{\lambda}bel{diamond} We define a space $D^{\diamond}_{{\rm{SL}}(2)}$. Let $D^{\diamond}_{{\rm{SL}}(2)}$ be the subset of $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$ consisting of all elements $(p, Z, {\delta}ta)$ $((p,Z)\in D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ with $p\in D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}$ and $Z{\subset} D$ (\ref{slmap}), ${\delta}ta\in {\cal {L}})$ satisfying the following conditions (i) and (ii). (i) Let $n$ be the rank of $p$ and let ${\bold 0}:=(0,\dots,0) \in {\bold{Z}}^n$. Then ${\delta}ta$ is of ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p)$-weight $\leq \bold 0$. (ii) For any $F\in Z$, ${\delta}ta_W(F)$ coincides with the component of ${\delta}ta$ of ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p)$-weight $\bold 0$. We define the structure of $D^{\diamond}_{{\rm{SL}}(2)}$ as an object of ${\cal {B}}'_{{\bold{R}}}(\log)$ by regarding $D^{\diamond}_{{\rm{SL}}(2)}$ (resp.\ $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$) as $Y$ (resp.\ $X$) in \ref{embstr}. We have the evident morphism $$D^{\diamond}_{{\rm{SL}}(2)}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\;; \; (p,Z,{\delta}ta) \mapsto (p,Z)$$ of ${\cal {B}}'_{\bold{R}}(\log)$. \end{sbpara} \bold egin{sbpara} Via the map $$D\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}\;;\;F\mapsto (F, {\delta}ta_W(F)),$$ we regard $D$ as a subset of $D^{\diamond}_{{\rm{SL}}(2)}$. \end{sbpara} \bold egin{sbthm}{\lambda}bel{diathm} $(1)$ Let $D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times{\cal {L}}$ be the map which sends $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}$ to the limit of $(F_y, {\delta}ta_W(F_y))$ where $y=(y_j)_{1\leq j\leq n} \in {\bold{R}}_{>0}^n$, $F_y:=\exp(\sum_{j=1}^n iy_jN_j)F$, $y_j/y_{j+1}\to \infty$ $(1\leq j\leq n$, $y_{n+1}$ denotes $1$). Then, the image of this map is contained in $D^{\diamond}_{{\rm{SL}}(2)}$. $(2)$ There is a unique continuous map $D^{\sharp,{{\rm{mild}}}}_{\Sigma, [:]}\to D_{{\rm{SL}}(2)}^{\diamond}$ which extends the identity map of $D$. 
$(3)$ There is a unique continuous map $D^{\sharp,{{\rm{mild}}}}_{\Sigma, [{\mathrm{val}}]}\to D_{{\rm{SL}}(2),{\mathrm{val}}}^{\diamond}$ which extends the identity map of $D$. \end{sbthm} \bold egin{sbprop}{\lambda}bel{diatostar} (1) The map $D^{\diamond}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W) \times {\cal {L}}$ induced by $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W)$ is injective, and the image of this map consists of all elements $(p,s,{\delta}ta)$ satisfying the following conditions (i) and (ii). (i) ${\delta}ta$ is of ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p)$-weight $\leq \bold0$. (ii) Let $(\rho_w,\varphi_w)_w$ be an ${\rm{SL}}(2)$-orbit on ${{\gamma}mma}r^W$ which represents $p$. Then the component of ${\delta}ta$ of ${\rm{Ad}}(\tau^{{\rm{st}}ar}_p)$-weight $\bold 0$ is of Hodge type $(\leq -1,\leq -1)$ with respect to $(\varphi_w(i,\dots,i))_w$. (2) If $(p, Z, {\delta}ta)\in D^{\diamond}_{{\rm{SL}}(2)}$ and if $(p,s, {\delta}ta)$ is its image in $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim} \times {\rm{spl}}(W)\times {\cal {L}}$, $Z$ is recovered from $(p,s,{\delta}ta)$ as follows. Under the embedding $D\to D({{\gamma}mma}r^W) \times {\rm{spl}}(W) \times {\cal {L}}$ in \ref{grsd}, the image of $Z{\subset}set D$ coincides with $(Z(p), s, {\delta}ta_{\bold 0}){\subset}set D({{\gamma}mma}r^W)\times {\rm{spl}}(W)\times {\cal {L}}$. Here ${\delta}ta_{\bold 0}$ denotes the component of ${\delta}ta$ of ${\rm{Ad}}(\tau_p^{{\rm{st}}ar})$-weight $\bold 0$. \end{sbprop} \bold egin{sbpara}{\lambda}bel{weakstar} By the {\it weak topology of $D^{\diamond}_{{\rm{SL}}(2)}$}, we mean the topology of $D^{\diamond}_{{\rm{SL}}(2)}$ as a subspace of $D_{{\rm{SL}}(2)}({{\gamma}mma}r^W)^{\sim}\times {\rm{spl}}(W) \times {\cal {L}}$. We denote the topological space $D^{\diamond}_{{\rm{SL}}(2)}$ endowed with the weak topology by $D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}$. This weak topology need not coincide with the topology defined in \ref{diamond}. See \ref{noIIstar}. \end{sbpara} \bold egin{sbrem} (1) Unlike other spaces of ${\rm{SL}}(2)$-orbits $(D^{{\rm{st}}ar}_{{\rm{SL}}(2)}$, $D^I_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$, ...), $D$ is not necessarily dense in $D^{\diamond}_{{\rm{SL}}(2)}$ (even for the weak topology). (2) The authors believe that $D^{\diamond}_{{\rm{SL}}(2)}$ belongs to ${\cal {B}}'_{\bold{R}}(\log)^+$ and that this can be proved by using the methods in Section 2.7, but they have not yet proved it. \end{sbrem} \bold egin{sbpara} The above results show that we have commutative diagrams $$\bold egin{matrix} D^{\sharp,{{\rm{mild}}}}_{\Sigma, [:]} &\to & D_{{\rm{SL}}(2)}^{\diamond} &\to & D_{{\rm{SL}}(2)}^{{\rm{st}}ar,{{\rm{mild}}}} \\ \cap &&&& \downarrow\\ D^{\sharp}_{\Sigma,[:]} &&\to && D^{II}_{{\rm{SL}}(2)} \end{matrix}\quad \quad\quad \bold egin{matrix}D^{\sharp,{{\rm{mild}}}}_{\Sigma, [{\mathrm{val}}]} &\to & D_{{\rm{SL}}(2),{\mathrm{val}}}^{\diamond} &\to & D_{{\rm{SL}}(2),{\mathrm{val}}}^{{\rm{st}}ar,{{\rm{mild}}}}\\ \cap &&&& \downarrow\\ D^{\sharp}_{\Sigma,[{\mathrm{val}}]} &&\to && D^{II}_{{\rm{SL}}(2),{\mathrm{val}}}. \end{matrix}$$ The rest of Section 5 is devoted to the proofs of the above results. \end{sbpara} {\subset}section{Preparations on pure ${\rm{SL}}(2)$-orbits} We review pure ${\rm{SL}}(2)$-orbits in one variable more. 
\bold egin{sbpara} Assume that we are in the pure case of weight $w$, and assume that we are given an ${\rm{SL}}(2)$-orbit $(\rho, \varphi)$ in one variable. Let $$N, N^+\in {\frak g}_{\bold{R}}$$ be as follows. Let $\rho_* : {\frak s}{\frak l}(2, {\bold{R}}) \to {\frak g}_{\bold{R}}$ be the Lie algebra homomorphism induced by $\rho$. Then $N$ (resp.\ $N^+$) is the image of $\bold egin{pmatrix} 0&1\\ 0&0\end{pmatrix}$ (resp.\ $\bold egin{pmatrix} 0&0\\ 1&0\end{pmatrix}$) in ${\frak s}{\frak l}(2,{\bold{R}})$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{pdec} We have a direct sum decomposition $$H_{0,{\bold{R}}}=\bigoplus_{k,r{{\gamma}mma}eq 0}\; H_{0,{\bold{R}}, (k,r)}$$ defined as follows. Let $Z= \Ker(N: H_{0,{\bold{R}}}\to H_{0,{\bold{R}}})$. Then $Z=\bigoplus_{k{{\gamma}mma}eq 0} Z_{(-k)}$ where $Z_{(-k)}$ is the part of $Z$ of $\tau^{{\rm{st}}ar}$-weight $-k$. Let $$H_{0,{\bold{R}}, (k,r)}:=(N^+)^r Z_{(-k)}.$$ In particular, $Z_{(-k)}=H_{0,{\bold{R}},(k,0)}$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{pdec2} We have (1) Elements of $H_{0,{\bold{R}}, (k,r)}$ have $\tau^{{\rm{st}}ar}$-weight $2r-k$. (2) For each $k{{\gamma}mma}eq 0$, $H_{0,{\bold{R}}, (k,\bullet)}:=\bigoplus_r H_{0,{\bold{R}}, (k,r)}$ is stable under the action of ${\rm{SL}}(2,{\bold{R}})$ by $\rho$. As a representation of ${\rm{SL}}(2,{\bold{R}})$, we have a unique isomorphism $$H_{0,{\bold{R}}, (k, \bullet)}\cong \text{Sym}^k(A) \otimes Z_{(-k)}$$ where $A={\bold{R}}^2={\bold{R}} e_1\oplus {\bold{R}} e_2$, on which ${\rm{SL}}(2,{\bold{R}})$ acts via the natural action on $A$ and the trivial action on $Z_{(-k)}$, which sends $v\in Z_{(-k)}$ on the left-hand side to $e_1^k\otimes v\in \text{Sym}^k(A)\otimes Z_{(-k)}$ on the right-hand side. (3) For $e{{\gamma}mma}eq 0$, the kernel of $N^e: H_{0,{\bold{R}}}\to H_{0,{\bold{R}}}$ coincides with the direct sum of $H_{0,{\bold{R}},(k,r)}$ for $k,r{{\gamma}mma}eq 0$ such that $r < e$. (4) The filtration $\varphi(0)$ is the direct sum of its restrictions $\varphi(0)_{(k,r)}$ to $H_{0, {\bold{C}}, (k,r)}$ for all $(k,r)$. $H_{0,{\bold{R}},(k,r)}$ with Hodge filtration $\varphi(0)_{(k,r)}$ on $H_{0,{\bold{C}}, (k,r)}$ is an ${\bold{R}}$-Hodge structure of weight $w+2r-k$. (5) For any $z\in {\bold P}^1({\bold{C}})$, the filtration $\varphi(z)$ on $H_{0,{\bold{C}}}$ is the direct sum of its restrictions $\varphi(z)_{(k,\bullet)}$ to $H_{0,{\bold{C}},(k,\bullet)}$ for $k{{\gamma}mma}eq 0$. The filtration $\varphi(z)_{(k,\bullet)}$ is described as follows. In the isomorphism in (2), it is given by $\varphi(0)_{(k,0)}$ on $Z_{(-k),{\bold{C}}}$ and the filtration on $A_{{\bold{C}}}$ whose $F^0$ is $A_{{\bold{C}}}$, whose $F^2$ is $0$, and whose $F^1$ is ${\bold{C}} \cdot (ze_1+e_2)$ if $z\in {\bold{C}}$, and is ${\bold{C}} e_1$ if $z=\infty$. \end{sbpara} {\subset}section{More preparations on ${\rm{SL}}(2)$-orbits} \bold egin{sbpara}{\lambda}bel{presl2} Assume that we are given an ${\rm{SL}}(2)$-orbit $(\rho_w,\varphi_w)$ on ${{\gamma}mma}r^W_w$ in one variable for each $w\in {\bold{Z}}$. Let $$E=W_0{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)= \bigoplus_{w\leq 0} E_w, \quad E_w=\bigoplus_{a\in {\bold{Z}}}\Hom_{\bold{R}}({{\gamma}mma}r^W_a, {{\gamma}mma}r^W_{a+w}).$$ We apply our preparations in Section 5.2 to the ${\rm{SL}}(2)$-orbit in one variable of pure weight $w$ induced on each $E_w$ by $(\rho_a, \varphi_a)$ and $(\rho_{a+w}, \varphi_{a+w})$ $(a\in {\bold{Z}})$. 
By \ref{pdec}, we have a direct sum decomposition $$E_w= \bigoplus_{k,r{{\gamma}mma}eq 0} \; E_{w,(k,r)}.$$ \end{sbpara} \bold egin{sblem} $E_{w, (k, r)}E_{w',(k',r')}{\subset}set \bigoplus_{k'',r''} E_{w+w', (k'', r'')}$, where $(k'',r'')$ ranges over all elements of ${\bold{N}}\times {\bold{N}}$ such that $r''\leq r+r'$ and $k''-2r''=(k+k')-2(r+r')$. \end{sblem} \bold egin{pf} This follows from (1) and (3) of \ref{pdec2}. \end{pf} \bold egin{sbpara}{\lambda}bel{frakA} Let ${\bold{R}}\{\{t\}\}$ be the ring of power series in $t$ over ${\bold{R}}$ which absolutely converge when $|t|$ is small. We define subrings $\frak A_0$, $\frak A$, $\frak B_0$, $\frak B$ of ${\bold{R}}\{\{t\}\}\otimes_{{\bold{R}}} E$ as follows. $$\frak A_0 =E_{\bullet,(\bullet,0)} {\subset}set \frak A=\sum_{r{{\gamma}mma}eq 0} t^{2r}{\bold{R}}\{\{t^2\}\} \otimes_{{\bold{R}}} E_{\bullet,(\bullet,r)},$$ $$ \frak B_0=\sum_{k{{\gamma}mma}eq 0} t^k E_{\bullet,(k,0)} {\subset}set \frak B=\sum_{k{{\gamma}mma}eq 0} t^k {\bold{R}}\{\{t^2\}\}\otimes_{{\bold{R}}} E_{\bullet,(k,\bullet)}.$$ For $w\leq 0$, define the two-sided ideals of these rings as $$W_w\frak A_0 =W_wE_{\bullet,(\bullet,0)} {\subset}set W_w\frak A=\sum_{r{{\gamma}mma}eq 0} t^{2r}{\bold{R}}\{\{t^2\}\} \otimes_{{\bold{R}}} W_wE_{\bullet,(\bullet,r)},$$ $$ W_w\frak B_0=\sum_{k{{\gamma}mma}eq 0} t^k W_wE_{\bullet,(k,0)} {\subset}set W_w\frak B=\sum_{k{{\gamma}mma}eq 0} t^k {\bold{R}}\{\{t^2\}\}\otimes_{{\bold{R}}} W_wE_{\bullet,(k,\bullet)}.$$ \end{sbpara} \bold egin{sblem}{\lambda}bel{flem} We have, for $w\leq 0$, $${\rm{Ad}}(\tau^{{\rm{st}}ar}(t)) W_w\frak B= W_w\frak A, \quad {\rm{Ad}}(\tau^{{\rm{st}}ar}(t))W_w\frak B_0= W_w\frak A_0.$$ \end{sblem} These are direct consequences from the definitions in \ref{frakA}. We will apply the following \ref{Qalg2} in \ref{pfnm1} (resp. in the proof of \ref{r1eq}) by taking $A= {\bold{C}}\otimes_{\bold{R}} \frak B$ (resp. $A=W_0{\rm {End}}_{{\bold{C}}}(H_{{\bold{C}}})$). \bold egin{sbpara}{\lambda}bel{Qalg1} Let $A$ be a ${\bold{Q}}$-algebra. For a nilpotent ideal $I$ of $A$, we have bijections $$\exp: I \to 1+I, \quad \log : 1 +I \to I, \quad \exp(x)= \sum_{n=0}^{\infty} \frac{x^n}{n!}, \;\;\log(1-x) = \sum_{n=1}^{\infty} \frac{x^n}{n}$$ (these are finite sums, for $x\in I$ are nilpotent) which are the inverse of each other. Let $I^{(r)}$ $(r{{\gamma}mma}eq 1)$ be two-sided ideals of $A$ such that $I^{(1)}\supset I^{(2)}\supset I^{(3)}\supset \dots $, $I^{(r)}I^{(s)}{\subset}set I^{(r+s)}$ for any $r,s{{\gamma}mma}eq 1$, and $I^{(r)}=0$ for $r{{\gamma}mma}g 1$. Let $I=I^{(1)}$. Then $I$ is a nilpotent two-sided ideal. \end{sbpara} \bold egin{sblem}{\lambda}bel{Qalg2} Let the notation be as in \ref{Qalg1}. Let $M_j$ $(1\leq j\leq m)$ be ${\bold{Q}}$-submodules of $I$ such that $$I^{(r)}= \bigoplus_{j=1}^m\; (M_j\cap I^{(r)})$$ for any $r{{\gamma}mma}eq 1$. Then if $x\in I$, there is a unique family $(x_j)_{1\leq j\leq m}$ of elements $x_j$ of $M_j$ such that $$\exp(x)= \exp(x_1)\dots \exp(x_m).$$ \end{sblem} \bold egin{pf} Easy induction on $r$ such that $I^{(r)}=0$. \end{pf} {\subset}section{Proof of Theorem \ref{mnilp1}}{\lambda}bel{ss:cks2} \bold egin{sbpara}{\lambda}bel{Sch} We first prove Theorem \ref{mnilp1} in the case $n=1$. We use the following part of the SL(2)-orbit theorem in one variable of Schmid (\cite{Scw}). Assume that we are in the pure case. 
Then for $y{{\gamma}mma}g 0$, we have $$\exp(iyN)F= \exp(g(y)){\tau}(y^{-1/2}){{\bold r}}\quad \text{with}\;\; {{\bold r}}=\exp(iN)\hat F$$ for some convergent power series $g(y)=\sum_{m=0}^{\infty} y^{-m}a_m$ in $y^{-1}$ with $a_m\in {\rm {End}}_{{\bold{R}}}(H_{0,{\bold{R}}})$ such that $a_0=0$ and such that $N^{m+1}a_m=0$ for any $m$. \end{sbpara} \bold egin{sbprop}{\lambda}bel{5.4.1} Let $(N,F)\in D^{{{\rm{mild}}}}_{\operatorname{naive}ilp}$ (\ref{cD}) with one $N$. Let $(W^{(1)},\hat F)$ be the ${\bold{R}}$-split MHS associated to the MHS $(W^{(1)},F)$ and let ${{\bold r}} := \exp(iN)\hat F$. Then $(W,{{\bold r}})$ is an ${\bold{R}}$-split MHS and the splitting ${\rm{spl}}_W({{\bold r}})$ of $W$ is compatible with $N$. \end{sbprop} \bold egin{pf} This follows from \cite{BP} Lemma 2.2. \end{pf} \bold egin{sbpara}{\lambda}bel{3.6.2} Let $(N, F)\in D^{{{\rm{mild}}}}_{\operatorname{naive}ilp}$ with one $N$. Let ${{\bold r}}=\exp(iN)\hat F$ as in \ref{5.4.1} and let $s={\rm{spl}}_W({{\bold r}}):{{\gamma}mma}r^W \overset{\cong}\to H_{0,{\bold{R}}}$, $s^{(1)}={\rm{spl}}_{W^{(1)}}(F)={\rm{spl}}_{W^{(1)}}(\hat F): {{\gamma}mma}r^{W^{(1)}}\overset{\cong}\to H_{0,{\bold{R}}}$. By \ref{5.4.1}, $N$ is of weight $0$ for $s$. Let $\tau^{{\rm{st}}ar}: {\bold{G}}_m \to {\rm {Aut}}({{\gamma}mma}r^W)$ be the homomorphism associated to the ${\rm{SL}}(2)$-orbit on ${{\gamma}mma}r^W$ in one variable associated to $(N,F)({{\gamma}mma}r^W)$. (In the case $N({{\gamma}mma}r^W)=0$, $\tau^{{\rm{st}}ar}$ is defined to be the trivial homomorphism.) Let $y\in {\bold{R}}_{>0}$ and let $t= y^{-1/2}$. For the proof of the case $n=1$ of \ref{mnilp1} (1) and (2), it is sufficient to prove that ${\delta}ta_W(\exp(iyN)F)$ and ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(iyN)F)$ converge in ${\cal {L}}$ when $y\to \infty$. We prove it. Note that the actions of $\tau(t)$ and $\tau^{{\rm{st}}ar}(t)$ on $D({{\gamma}mma}r^W)$ are the same. \end{sbpara} \bold egin{sbpara}{\lambda}bel{pfnm1} Let the notation be as in \ref{3.6.2}. For $y{{\gamma}mma}g 0$, let $g_w(y)$ for each $w\in {\bold{Z}}$ be as in the above result of Schmid in \ref{Sch} for $(N, F)({{\gamma}mma}r^W_w)$, and let $g(y)=\bigoplus_w g_w(y) \in E= W_0{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)$. By the above result of Schmid, we have (1) $\exp(iyN({{\gamma}mma}r^W))F({{\gamma}mma}r^W) = \exp(g(y)){\tau}^{{\rm{st}}ar}(t) {{\bold r}} ({{\gamma}mma}r^W), \quad g(y), \exp(g(y))\in \frak A$ \operatorname{naive}oindent where $N({{\gamma}mma}r^W)$ is the map ${{\gamma}mma}r^W\to {{\gamma}mma}r^W$ induced by $N$ and $\frak A$ is as in \ref{frakA}. Let $h(y) = {\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}g(y)$. Then (2) $h(y)\in \frak B$ \operatorname{naive}oindent by \ref{flem}. Let ${\delta}ta^{(1)}={\delta}ta_{W^{(1)}}(F)$ and let $\zeta^{(1)}$ be the corresponding $\zeta$ (\ref{II,1.2.3}), so that (3) $F=s^{(1)} \exp(-\zeta^{(1)})\exp(i{\delta}ta^{(1)})(s^{(1)})^{-1}\hat F.$ Write $s^{(1)} \exp(-\zeta^{(1)})\exp(i{\delta}ta^{(1)})(s^{(1)})^{-1}= \exp(apha)\exp(\bold eta)$ where $apha, \bold eta\in W_0{\rm {End}}_{\bold{C}}(H_{0,{\bold{C}}})\cap W_{-2}^{(1)}{\rm {End}}_{\bold{C}}(H_{0,{\bold{C}}})$, $apha$ is of $s$-weight $\leq -1$ and $\bold eta$ is of $s$-weight $0$. By (3), we have (4) $F({{\gamma}mma}r^W) = \exp(\bold eta({{\gamma}mma}r^W))\hat F({{\gamma}mma}r^W)$ \operatorname{naive}oindent where $\bold eta({{\gamma}mma}r^W)$ is the map ${{\gamma}mma}r^W\to {{\gamma}mma}r^W$ induced by $\bold eta$. 
We have (5) $\exp(iyN)\exp(\bold eta)\hat F= s\exp(iyN({{\gamma}mma}r^W))\exp(\bold eta({{\gamma}mma}r^W)) \hat F({{\gamma}mma}r^W)= s\exp(iyN({{\gamma}mma}r^W))F({{\gamma}mma}r^W)= s\exp(g(y))\tau^{{\rm{st}}ar}(t){{\bold r}} ({{\gamma}mma}r^W)$ \operatorname{naive}oindent where the first $=$ follows from $N=sN({{\gamma}mma}r^W)s^{-1}$ (\ref{5.4.1}), $\bold eta=s\bold eta({{\gamma}mma}r^W)s^{-1}$ and $\hat F=s\hat F({{\gamma}mma}r^W)$, the second $=$ follows from (4), and the last $=$ follows from (1). Since $s^{(1)}{\delta}ta^{(1)}(s^{(1)})^{-1}$ and $s^{(1)}\zeta^{(1)} (s^{(1)})^{-1}$ commute with $N$, $apha$ and $\bold eta$ commute with $N$. Hence we have $$s^{-1}apha s \in W_{-1}\frak A_0.$$ We have $$\exp(iyN)F= \exp(iyN)\exp(apha)\exp(\bold eta)\hat F = \exp(apha) \exp(iyN)\exp(\bold eta)\hat F$$ $$=\exp(apha) s \exp(g(y))\tau^{{\rm{st}}ar}(t) {{\bold r}} ({{\gamma}mma}r^W) = s \exp(g(y))\exp(apha(y))\tau^{{\rm{st}}ar}(t){{\bold r}}({{\gamma}mma}r^W)$$ where $apha(y):= {\rm{Ad}}(\exp(g(y)))^{-1}(s^{-1}apha s)$. Here the third $=$ follows from (5). Since $s^{-1}apha s\in W_{-1}\frak A_0$ and $\exp(g(y))\in \frak A$, we have $apha(y) \in W_{-1}\frak A$. Hence $${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1} apha(y) \in W_{-1}\frak B.$$ To apply \ref{Qalg2}, we use the direct sum decomposition $${\bold{C}}\otimes_{\bold{R}} W_{-1}{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)= M'_1\oplus M'_2 \oplus M'_3$$ where $M'_1= W_{-1}{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)$, $M'_2$ is the $-1$-eigen space of the complex conjugation acting on the $(\leq -1, \leq -1)$-Hodge component of ${\bold{C}}\otimes_{\bold{R}} W_{-1}{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W)$ with respect to ${{\bold r}}({{\gamma}mma}r^W)$, and $M'_3= F^0({\bold{C}}\otimes_{\bold{R}} W_{-1}{\rm {End}}_{\bold{R}}({{\gamma}mma}r^W))$ for the Hodge filtration ${{\bold r}}({{\gamma}mma}r^W)$. In \ref{Qalg2}, consider the case $$A= {\bold{C}}\otimes_{\bold{R}} \frak B, \quad I^{(r)}=W_{-r}A, \quad I=I^{(1)},$$ $$M_j = \bigoplus_{k{{\gamma}mma}eq 0} t^k{\bold{R}}\{\{t\}\}\otimes_{\bold{R}} M'_{j,(k, \bullet)} \quad (j=1,2,3).$$ Then the assumption of \ref{Qalg2} is satisfied. By \ref{Qalg2}, we have $$\exp({\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}apha(y)) = \exp(a(y)) \exp(ib(y)) \exp(c(y))$$ where $a(y) \in M_1$, $ib(y)\in M_2$, $c(y) \in M_3$. Then $$\exp(iyN)F = s \exp(g(y)) \tau^{{\rm{st}}ar}(t) \exp(a(y)) \exp(i b(y)) {{\bold r}}({{\gamma}mma}r^W).$$ Hence $${\delta}ta_W(\exp(iyN)F)={\rm{Ad}}(\exp(g(y))) {\rm{Ad}}(\tau^{{\rm{st}}ar}(t)) b(y) \in \frak A,$$ $${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(iyN)F)= {\rm{Ad}}(\exp(h(y))) b(y) \in \frak B.$$ Hence ${\delta}ta_W(\exp(iyN)F)$ and ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(iyN)F)$ converge when $y\to \infty$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{5.4.5} We prove \ref{mnilp1} in general. Let $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}$. Let $\tau$ (resp.\ $\tau^{{\rm{st}}ar}$) $: {\bold{G}}_{m,{\bold{R}}}^n\to \prod_w {\rm {Aut}}_{\bold{R}}({{\gamma}mma}r^W_w)$ be the homomorphism whose $w$-component is the $\tau$ (resp.\ $\tau^{{\rm{st}}ar}$) (\ref{pure2}) of the ${\rm{SL}}(2)$-orbit in $n$-variables on ${{\gamma}mma}r^W_w$ associated to $(N_1({{\gamma}mma}r^W_w), \dots, N_n({{\gamma}mma}r^W_w), F({{\gamma}mma}r^W_w))$. Note that the action of ${\bold{G}}_{m,{\bold{R}}}^n$ on $D({{\gamma}mma}r^W)$ via $\tau$ and that via $\tau^{{\rm{st}}ar}$ are the same.
By SL(2)-orbit theorem in $n$ variables (\cite{KNU1}), $${\rm{Ad}}(\tau(t))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)\quad (t= (t_1,\dots, t_n), \;t_j=(y_{j+1}/y_j)^{1/2}, \;y_{n+1}=1)$$ is a convergent series in $t_1, \dots, t_n$. Hence ${\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ and \operatorname{naive}ewline ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ have the shapes of Laurent series $${\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)=(\prod_{j=1}^n t_j)^{-r} \cdot \sum_{m\in {\bold{N}}^n} (\prod_{1\leq j\leq n} t_j^{m(j)})a_m,$$ $${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)=(\prod_{j=1}^n t_j)^{-s} \cdot \sum_{m\in {\bold{N}}^n} (\prod_{1\leq j\leq n} t_j^{m(j)})b_m$$ for some $r,s\in {\bold{N}}$ and $a_m, b_m\in {\cal {L}}$ where the sums $\sum_{m\in {\bold{N}}^n}$ are convergent series. Now assume $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}$. We prove that we can take $r=s=0$ (that is, these series are actually taylor series). It is sufficient to prove that when we fix $j$ and fix a sufficiently small $t_k>0$ for $k\operatorname{naive}eq j$, then these series become Taylor series in one variable in $t_j$. But in this situation, the first Laurent series becomes ${\delta}ta_W(\exp( iy'N')F')$ with $(N', F')\in D_{\operatorname{naive}ilp}$, where $$ y'= t_j^{-2}, \quad N'=t_j^2\sum_{k=1}^j y_kN_k= \sum_{k=1}^j \;(\prod_{k\leq \ell\leq n, \ell\operatorname{naive}eq j} t_{\ell}^{-2})N_k,$$ $$F'=\exp(\sum_{k=j+1}^n iy_k N_k)F=\exp(i\sum_{k=j+1}^n \;(\prod_{\ell=k}^n t_{\ell}^{-2}) N_k)F.$$ We consider the second Laurent series. Let $\tau^{{\rm{st}}ar}_j: {\bold{G}}_{m,{\bold{R}}}\to G_{\bold{R}}({{\gamma}mma}r^W)$ be the restriction of $\tau^{{\rm{st}}ar}$ to the $j$-th ${\bold{G}}_{m,{\bold{R}}}$. It is sufficient to prove that when $t_k$ for $k\operatorname{naive}eq j$ are fixed, ${\delta}ta(t):={\rm{Ad}}(\tau^{{\rm{st}}ar}_j(t_j))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ is a Taylor series in $t_j$. Let $\tau^{{\rm{st}}ar,\prime}_j: {\bold{G}}_{m,{\bold{R}}}\to G_{\bold{R}}({{\gamma}mma}r^W)$ be the $\tau^{{\rm{st}}ar}$ of the ${\rm{SL}}(2)$-orbit in one variable associated to $(N', F')$ where $N'$ and $F'$ are as above. By the case $n=1$ applied to $(N', F')$, ${\delta}ta'(t):={\rm{Ad}}(\tau_j^{{\rm{st}}ar,\prime}(t_j))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ is a Taylor series in $t_j$. Let $W^{(j)}$ be the relative monodromy filtration $M(N_1+\dots+N_j, W)$. By \cite{KNU1} Proposition 4.2, there is a convergent Taylor series $u=\sum_m (\prod_{k=j+1}^n t_k^{m(k)})u_m$ in $t_{j+1}, \dots, t_n$ with $u_m\in W^{(j)}_{-1}\fg_{\bold{R}}$ such that $u_0=0$ and such that $$\tau_j^{{\rm{st}}ar,\prime}(t_j)= \exp(u)\tau_j^{{\rm{st}}ar}(t_j)\exp(-u).$$ We have $${\delta}ta(t)= {\rm{Ad}}(\exp(v)\exp(-u))^{-1}{\delta}ta'(t)\quad{where}\quad v= {\rm{Ad}}(\tau^{{\rm{st}}ar}_j(t_j))^{-1}u.$$ Since $u_m\in W^{(j)}_{-1}\fg_{\bold{R}}$, $v$ is a Taylor series in $t_j$. Hence ${\delta}ta(t)$ is a Taylor series in $t_j$. \end{sbpara} \bold egin{sbpara}{\lambda}bel{Coef} In the mild ${\rm{SL}}(2)$-orbit theorem Theorem \ref{mnilp1} (1) and (2), the power series depend real analytically on $(N_1, \dots, N_n, F)$ in the following sense. Let $A$ be a real analytic manifold and let $A\to \fg_{\bold{R}}\;;\;apha \mapsto N_{j, apha}$ ($1\leq j \leq n$) and $A\to \Dc\;;\;apha\mapsto F_{apha}$ be real analytic functions. 
Assume that $N_{j,apha}$ are nilpotent and commute with each other, and assume that $(N_{1,apha}, \dots, N_{n,apha},F_{apha})$ generates a nilpotent orbit for any $apha$. Assume further that for each $1\leq j\leq n$, the relative monodromy filtration $M(N_1+\dots+N_j, W)$ is independent of $apha$. Then the $\varepsilon$ in \ref{mnilp1} (1) and (2) can be taken constant locally on $A$, and the coefficients of the power series in \ref{mnilp1} (1) and (2) are real analytic functions on $A$. This follows from the corresponding result \cite{KNU1} Proposition 10.8 (\cite{CKS} Remark 4.65 (ii) in the pure case) for the original ${\rm{SL}}(2)$-orbit theorem and from the above proof in \ref{5.4.5} to reduce the mild ${\rm{SL}}(2)$-orbit theorem to the original one. \end{sbpara} {\subset}section{Proof of Theorem \ref{Lthm}}{\lambda}bel{ss:Lthm} We prove Theorem \ref{Lthm}. We first prove \bold egin{sbprop}{\lambda}bel{5.5.1} Let $({\sigma}, Z, ({\sigma}_j)_{0\leq j\leq n}, (N_j)_{1\leq j\leq n})\in D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]}$ (\ref{Dsig:}). Then for $\tilde N_j$ as in \ref{Dsig:2} and for $F\in Z$, the image of $(\tilde N_1, \dots, \tilde N_n, F)\in D^{{{\rm{mild}}}}_{\operatorname{naive}ilp}$ in $D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$ (\ref{mnilp1}) is independent of the choices of $\tilde N_j$ and $F$. \end{sbprop} \bold egin{pf} For another choice $(\tilde N'_1,\dots, \tilde N_n', F')$ of $(\tilde N_1,\dots, \tilde N_n,F)$, we have $\tilde N_1'=\tilde N_1$, $\tilde N'_j=\tilde N_j+R_{j-1}$ for $2\leq j\leq n$ and $F'= \exp(iR_n)F$ for some $R_j\in {\sigma}_{j,{\bold{R}}}$. We have $$\exp(\sum_{j=1}^n iy_j\tilde N'_j)F'= \exp(\sum_{j=1}^n iy_j(\tilde N_j + (y_{j+1}/y_j)R_j))F$$ ($y_{n+1}$ denotes $1$). The limit of this for $y_j/y_{j+1}\to \infty$ coincides with the limit of that for $R_j=0$ by \ref{Coef}. \end{pf} \bold egin{sbpara}{\lambda}bel{5.4.2} By \ref{5.5.1}, we have a map $D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$. Let $D^{\sharp,{{\rm{mild}}}}_{\Sigma,{\mathrm{val}}}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$ be the composition with $D^{\sharp,{{\rm{mild}}}}_{\Sigma,{\mathrm{val}}}\to D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]}$ (4.4.6). Since the last map is proper surjective (4.4.6), \ref{Lthm} is reduced to \end{sbpara} \bold egin{sbprop}{\lambda}bel{5.4.3} The map $D^{\sharp, {{\rm{mild}}}}_{\Sigma, {\mathrm{val}}}\to D^{{\rm{st}}ar,{{\rm{mild}}}}_{{\rm{SL}}(2)}\times {\cal {L}}$ is continuous. \end{sbprop} Just as Part III, Theorem 3.3.2 was reduced to the case $y^*_{{{\lambda}mbda},t}= y_{{{\lambda}mbda},t}$ of Part III, Proposition 3.3.4 (see \ref{8.3.b}), Proposition \ref{5.4.3} is reduced to $(A_0)$ of the following Proposition \ref{forLthm}. The proof of Proposition \ref{forLthm} given below is similar to the proof of Part III, Proposition 3.3.4. \bold egin{sbprop}{\lambda}bel{forLthm} Let the situation and the assumption be as in Part III, 3.3.3 with $y^*_{{{\lambda}mbda},t}=y_{{{\lambda}mbda},t}$ there. Assume that there is $\varepsilon\in {\bold{R}}_{>0}$ such that for any $(y_s)_{s\in S}\in {\bold{R}}^S$ satisfying the following condition (C), there is a splitting of $W$ (which may depend on $(y_s)_s$) which is compatible with $\sum_{s\in S} y_sN_s$. (C) If $1\leq j\leq n$, $s\in S_j$, and $y_s\operatorname{naive}eq 0$, then $y_ty_s^{-1}<\varepsilon$ for any $t \in S_{{{\gamma}mma}eq j+1}$ and $|y_ty_s^{-1}-a_ta_s^{-1}|<\varepsilon$ for any $t\in S_j$. 
Note that $(N_1, \dots, N_n, F)\in D_{\operatorname{naive}ilp}^{{{\rm{mild}}}}$ by this assumption. Let $\tau, \tau^{{\rm{st}}ar}: {\bold{G}}_{m,{\bold{R}}}^n \to {\rm {Aut}}_{\bold{R}}(H_{0,{\bold{R}}})$ be the homomorphisms given by the ${\rm{SL}}(2)$-orbit in $n$ variables associated to $(N_1, \dots, N_n, F)$. Let $${\delta}ta= \lim {\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F), \quad {\delta}ta'= \lim {\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1}{\delta}ta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$$ where $y_j/y_{j+1}\to \infty$ ($1\leq j\leq n$, $y_{n+1}=1$) and where $t= (t_1,\dots, t_n)$, $t_j= (y_{j+1}/y_j)^{1/2}$. For $1\leq j\leq n+1$, let $e_{{{\lambda}mbda},{{\gamma}mma}eq j}:=\exp(\sum_{s\in S_{{{\gamma}mma}eq j}} iy_{{{\lambda}mbda},s}N_s) \in G_{\bold{C}}$. Then we have the following $(A_j)$ for $0\leq j\leq n$. $(A_j)$ for $1\leq j\leq n$: Let $e{{\gamma}mma}eq 1$. Then if ${{\lambda}mbda}$ is sufficiently large, there are $F^{(j)}_{{{\lambda}mbda}}\in \Dc$ satisfying the following (1)--(4). (1) $y_{{{\lambda}mbda},c_j}^e d(F_{{{\lambda}mbda}}, F^{(j)}_{{{\lambda}mbda}}) \to 0$. (2) $((N_s)_{s\in S_{\leq j}}, e_{{{\lambda}mbda},{{\gamma}mma}eq j+1}F^{(j)}_{{{\lambda}mbda}})$ generates a nilpotent orbit. (3) ${\delta}ta_W(\exp(\sum_{s\in S} iy_{{{\lambda}mbda},s} N_s)F^{(j)}_{{{\lambda}mbda}})$ converges to ${\delta}ta$. (4) ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1} {\delta}ta_W(\exp(\sum_{s\in S} iy_{{{\lambda}mbda},s} N_s)F^{(j)}_{{{\lambda}mbda}})$ with $t=(t_1, \dots, t_n)$, $t_j= (y_{{{\lambda}mbda}, c_{j+1}}/y_{{{\lambda}mbda},c_j})^{1/2}$ converges to ${\delta}ta'$. $(A_0)$: Let $e{{\gamma}mma}eq 1$. Then if ${{\lambda}mbda}$ is sufficiently large, we have the following (3) and (4). (3) ${\delta}ta_W(\exp(\sum_{s\in S} iy_{{{\lambda}mbda},s} N_s)F_{{{\lambda}mbda}})$ converges to ${\delta}ta$. (4) ${\rm{Ad}}(\tau^{{\rm{st}}ar}(t))^{-1} {\delta}ta_W(\exp(\sum_{s\in S} iy_{{{\lambda}mbda},s} N_s)F_{{{\lambda}mbda}})$ with $t$ as in $(A_j)$ (4) above converges to ${\delta}ta'$. \end{sbprop} \bold egin{sbpara} We prove Proposition \ref{forLthm} by downward induction on $j$. For $1\leq j\leq n$, let $\tau_j$ be the restriction of $\tau$ to the $j$-th factor of ${\bold{G}}_{m,{\bold{R}}}$ and let $\tau_{{{\gamma}mma}eq j} = \prod_{k=j}^n \tau_k((y_{{{\lambda}mbda}, c_{k+1}}/y_{{{\lambda}mbda}, c_k})^{1/2})\in G_{\bold{R}}$. $(A_n)$ follows from the condition (5) in Part III 3.3.3 for $j=n$ with $y^*_{{{\lambda}mbda},t}=y_{{{\lambda}mbda},t}$ and from \ref{Coef}. Assume $0\leq j<n$. We prove $(A_j)$ assuming $(A_{j+1})$. Take sufficiently large integers $e, e', e''{{\gamma}mma}eq 0$. Take $F^{(j+1)}_{{{\lambda}mbda}}$ as in $(A_{j+1})$ with $e$ replaced by $e+e'+e''$. In the case $1\leq j<n$ (resp. $j=0$), let $F^{(j)}_{{{\lambda}mbda}}$ be $F_{{{\lambda}mbda}}^*$ in Part III, 3.3.3 (5) with $e$ there replaced by $e+e'+e''$ (resp. let $F^{(0)}_{{{\lambda}mbda}}= F_{{{\lambda}mbda}}$). We have (5) $y_{{{\lambda}mbda}, c_{j+1}}^{e+e'+e''}d(F^{(j)}_{{{\lambda}mbda}}, F^{(j+1)}_{{{\lambda}mbda}})\to 0$. By Part III, Lemma 3.3.6, $\tau_{{{\gamma}mma}eq j+1}^{-1} e_{{{\lambda}mbda},{{\gamma}mma}eq j+1}F^{(j+1)}_{{{\lambda}mbda}}$ converges.
Hence by (5), $\tau_{\geq j+1}^{-1} e_{\lambda,\geq j+1}F^{(j)}_{\lambda}$ converges and we have

(6) $y_{\lambda,c_{j+1}}^{e+e'} d(\tau_{\geq j+1}^{-1}e_{\lambda,\geq j+1}F^{(j)}_{\lambda}, \tau_{\geq j+1}^{-1} e_{\lambda,\geq j+1}F^{(j+1)}_{\lambda})\to 0.$

By the mild ${\rm{SL}}(2)$-orbit theorem \ref{mnilp1} for $((N_s)_{s\in S_{\leq j}}, \tau_{\geq j+1}^{-1}e_{\lambda,\geq j+1}F^{(j)}_{\lambda})$ and $((N_s)_{s\in S_{\leq j}}, \tau_{\geq j+1}^{-1}e_{\lambda,\geq j+1}F^{(j+1)}_{\lambda})$, and by \ref{Coef}, we have:

(7) The four sequences
$a_{\lambda}:=\delta_W(\tau_{\geq j+1}^{-1} \exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j)}_{\lambda})$,
$b_{\lambda}:=\delta_W(\tau_{\geq j+1}^{-1} \exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j+1)}_{\lambda})$,
$a'_{\lambda}:={\rm{Ad}}(\tau^{\star}(t))^{-1}\delta_W(\tau_{\geq j+1}^{-1} \exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j)}_{\lambda})$,
$b'_{\lambda}:={\rm{Ad}}(\tau^{\star}(t))^{-1}\delta_W(\tau_{\geq j+1}^{-1} \exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j+1)}_{\lambda})$
converge in ${\cal {L}}$ and we have
$$y_{\lambda, c_{j+1}}^e(a_{\lambda}-b_{\lambda})\to 0, \quad y_{\lambda, c_{j+1}}^e(a'_{\lambda}-b'_{\lambda})\to 0.$$

By the induction assumption, $\delta_W(\exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j+1)}_{\lambda})$ converges to $\delta$ and ${\rm{Ad}}(\tau^{\star}(t))^{-1}\delta_W(\exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j+1)}_{\lambda})$ converges to $\delta'$. Hence by (7), $\delta_W(\exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j)}_{\lambda})$ converges to $\delta$ and ${\rm{Ad}}(\tau^{\star}(t))^{-1}\delta_W(\exp(\sum_{s\in S} iy_{\lambda,s}N_s)F^{(j)}_{\lambda})$ converges to $\delta'$.
\end{sbpara}

\subsection{Proofs of other results in Section 5.1}\label{ss:cks0}

\begin{sblem}\label{5.6.1}
Let $x=(N_1, \dots, N_n, F)\in D^{{{\rm{mild}}}}_{\mathrm{nilp}}$ and let $p\in D_{{\rm{SL}}(2)}(\gr^W)^{\sim}$ be the image of $x$. Let ${\bold 0}=(0,\dots,0)\in {\bold{Z}}^n$.

(1) Let $\delta$ be the image of $x$ in ${\cal {L}}$. Then $\delta$ is of ${\rm{Ad}}(\tau^{\star}_p)$-weight $\leq {\bold 0}$.

(2) Let $\delta'\in {\cal {L}}$ be the limit of ${\rm{Ad}}(\tau^{\star}_p(t))^{-1}\delta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ $(t=(t_1,\dots, t_n)$, $t_j=(y_{j+1}/y_j)^{1/2}$, $y_{n+1}$ denotes $1$, $t_j\to 0)$. Then $\delta'$ coincides with the component of $\delta$ of ${\rm{Ad}}(\tau^{\star}_p)$-weight $\bold 0$.
\end{sblem}

\begin{pf}
Let $\delta(y):=\delta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ and let $\delta'(y)= {\rm{Ad}}(\tau^{\star}_p(t))^{-1}\delta_W(\exp(\sum_{j=1}^n iy_jN_j)F)$ where $t_j$ is as above. Then $\delta(y)$ and $\delta'(y)$ are convergent series in $t_1,\dots, t_n$.
For $a\in {\bold{Z}}^n$, let $\delta_a$ (resp.\ $\delta'_a$, resp.\ $\delta(y)_a$, resp.\ $\delta'(y)_a$) be the component of $\delta$ (resp.\ $\delta'$, resp.\ $\delta(y)$, resp.\ $\delta'(y)$) of ${\rm{Ad}}(\tau^{\star}_p)$-weight $a$. Then $\delta(y)_a = (\prod_{j=1}^n t_j^{a(j)}) \delta'(y)_a$. Hence $\delta(y)_a$ is divisible by $\prod_{j=1}^n t_j^{\max(a(j), 0)}$. Hence the constant term $\delta_a$ of $\delta(y)_a$ is $0$ unless $a\leq {\bold 0}$.

On the other hand, by the reduction to the case of one $N$, we have
$$\delta'(y)\in \sum_{k\in {\bold{N}}^n} (\prod_{j=1}^n t_j^{k(j)}) {\bold{R}}\{\{t_1,\dots, t_n\}\}\cdot (\bigcap_{j=1}^n W^{(j)}_{k(j)}{\cal {L}}).$$
Hence the constant term of $\delta'(y)$ belongs to $\bigcap_{j=1}^n W^{(j)}_0{\cal {L}}$. That is, $\delta'$ is of ${\rm{Ad}}(\tau^{\star}_p)$-weight $\leq {\bold 0}$. For $a\in {\bold{Z}}^n$ such that $a\leq {\bold 0}$, the constant term $\delta'_a$ of $\delta'(y)_a= \prod_{j=1}^n t_j^{-a(j)}\delta(y)_a$ is $0$ unless $a={\bold 0}$. This argument also shows $\delta_{\bold 0}=\delta'_{\bold 0}$.
\end{pf}

\begin{sbpara}\label{5.6.2}
We prove \ref{diathm} (1). By \ref{5.6.1}, it is sufficient to prove that the element $\delta'\in {\cal {L}}$ in \ref{5.6.1} (2) belongs to ${\cal {L}}({{\bold r}})$ where ${{\bold r}}=(\varphi_w(i,\dots,i))_w$. Since ${{\bold r}}(y):=\tau_p^{\star}(t)^{-1}\exp(\sum_{j=1}^n iy_jN_j)F(\gr^W)$ converges to ${{\bold r}}$, ${\cal {L}}({{\bold r}}(y))$ converges to ${\cal {L}}({{\bold r}})$. Since $\delta'(y)\in {\cal {L}}({{\bold r}}(y))$, its limit $\delta'$ belongs to ${\cal {L}}({{\bold r}})$.
\end{sbpara}

\begin{sbpara}\label{5.6.3}
We prove \ref{diathm} (2). Let $s\in {\rm{spl}}(W)$ be the image of $x$ (\ref{5.6.1}) in ${\rm{spl}}(W)$. Consider the element $(p,Z)$ of $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ (\ref{slmap}) where $Z$ is the subset of $D$ whose image in $D(\gr^W)\times {\rm{spl}}(W)\times {\cal {L}}$ is $(Z(p), s, \delta_{\bold 0})$. Such an element exists uniquely by \ref{5.6.2}.

We show that $F_y:=\exp(\sum_{j=1}^n iy_jN_j)F$ converges to $(p, Z)$ in $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$. Let $\Phi$ be the set $\{W^{(1)}(\gr^W), \dots, W^{(n)}(\gr^W)\}$ of weight filtrations on $\gr^W$ associated to $p$. Fix a distance $\beta: D(\gr^W) \to {\bold{R}}_{\geq 0}^n$ to the $\Phi$-boundary. Let $D_{{\rm{SL}}(2)}^{\star,{{\rm{mild}}}}(\Phi)\to D_{{\rm{SL}}(2)}(\gr^W)^{\sim} \times {\rm{spl}}(W) \times {\cal {L}}$ be the map $\nu_{\alpha,\beta}$ in \ref{staran1} where $\alpha=\tau_p$. Then $\nu_{\alpha,\beta}(p,Z)=(p,s, {\rm{Ad}}(\tau_p(\beta({{\bold r}})))^{-1}\delta_{\bold 0})$. Hence it is sufficient to prove that $\nu_{\alpha,\beta}(F_y)\in D(\gr^W) \times {\rm{spl}}(W) \times {\cal {L}}$ converges to $(p, s, {\rm{Ad}}(\tau^{\star}_p(\beta({{\bold r}})))^{-1}\delta_{\bold 0})$. It is sufficient to prove that ${\rm{Ad}}(\tau^{\star}_p(\beta(F_y(\gr^W))))^{-1} \delta(y)$ converges to ${\rm{Ad}}(\tau_p^{\star}(\beta({{\bold r}})))^{-1}\delta_{\bold 0}$. But this is deduced from the fact that $\beta(F_y(\gr^W))t^{-1}$ converges to $\beta({{\bold r}})$.
\end{sbpara}

\begin{sbpara}\label{5.6.4}
We prove \ref{diathm} (3). By (2), it is sufficient to prove the compatibility of the map $D^{\sharp,{{\rm{mild}}}}_{\Sigma,[:]} \to D^{\diamond}_{{\rm{SL}}(2)}$ with log structures. This is reduced to the pure case treated in \ref{4.5log} because the log structure of $D^{\diamond}_{{\rm{SL}}(2)}$ is the inverse image of that of $D_{{\rm{SL}}(2)}(\gr^W)^{\sim}$.
\end{sbpara}

\begin{sbpara}
Theorem \ref{diatostar} follows from Lemma \ref{5.6.1} and Theorem \ref{diathm} (1).
\end{sbpara}

This completes the proofs of the results in Section 5.1.

\section{Complements}\label{s:NSB}

In Section \ref{ss:+3}, we give properties of the extended period domains. In Section \ref{ss:cks4}, we show that for nilpotent orbits in one variable, we have stronger results (\ref{rk1}) and (\ref{r1eq}) which connect the world of nilpotent orbits with the world of ${\rm{SL}}(2)$-orbits and Borel--Serre orbits. In Section \ref{ss:per}, we consider extended period maps.

\subsection{Global properties of the extended period domains}\label{ss:+3}

\begin{sbthm}\label{1.4.3}
Let $X$ be one of $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,+}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$, $D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}$, $D^{\diamond}_{{\rm{SL}}(2)}$, $D_{{\rm {BS}},{\mathrm{val}}}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}^I$, $D_{{\rm{SL}}(2),{\mathrm{val}}}^{II}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}^{\star}$, $D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{\sharp}_{\Sigma,[:]}$, and $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$. Let $\Gamma$ be a subgroup of $G_{\bold{Z}}$.

{\rm(1)} The action of $\Gamma$ on $X$ is proper, and the quotient space $\Gamma\backslash X$ is Hausdorff.

{\rm (2)} Assume that $\Gamma$ is neat. Let $\gamma\in \Gamma$, $p\in X$, and assume $\gamma p=p$. Then $\gamma=1$.

{\rm (3)} Assume that $\Gamma$ is neat. Then the projection $X\to\Gamma\backslash X$ is a local homeomorphism. Further, for $X= D^{\star}_{{\rm{SL}}(2)}$, $D^{\star,+}_{{\rm{SL}}(2)}$, $D^{\star,-}_{{\rm{SL}}(2)}$, $D^{\star,{\rm {BS}}}_{{\rm{SL}}(2)}$, there is a structure on the quotient such that the projection is a local isomorphism in ${\cal {B}}'_{{\bold{R}}}(\log)$.
\end{sbthm}

Note that the corresponding results for $D_{{\rm {BS}}}$, $D_{{\rm{SL}}(2)}^I$ and $D_{{\rm{SL}}(2)}^{II}$, and $D^{\sharp}_{\Sigma}$ and $D^{\sharp}_{\Sigma,{\mathrm{val}}}$ were already proved in Part I, Theorem 9.1, Part II, Theorem 3.5.17, and Part III, Theorem 4.3.6, respectively.

\begin{pf}
(3) for $X$ follows from (1) and (2) for $X$. Hence it is sufficient to prove (1) and (2). Since we have continuous maps $D_{{\rm{SL}}(2),{\mathrm{val}}}^{\star}\to D_{{\rm {BS}},{\mathrm{val}}} \to D_{{\rm {BS}}}$ and $D_{\Sigma, [{\mathrm{val}}]}\to D_{\Sigma, [:]}\to \Gamma\backslash D_{\Sigma}$ which are compatible with the actions of $\Gamma$, the results for $D_{{\rm{SL}}(2),{\mathrm{val}}}^{\star}$, $D_{{\rm {BS}},{\mathrm{val}}}$, $D_{\Sigma, [{\mathrm{val}}]}$, $D_{\Sigma, [:]}$ follow from the results for $D_{{\rm {BS}}}$ and $\Gamma\backslash D_{\Sigma}$.
Since $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\to D^{\star}_{{\rm{SL}}(2)}$ is proper and surjective, the properness of the action of $\Gamma$ on $D^{\star}_{{\rm{SL}}(2)}$ follows from that for $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$. (2) for $D^{\star}_{{\rm{SL}}(2)}$ follows from the $\bar L$-property (i.e., Theorem \ref{ls1} for the situation (a) in \ref{sit}) and the result for the pure case. Since there are continuous maps $D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}}\to D^{\diamond}_{{\rm{SL}}(2)}\to D^{\star}_{{\rm{SL}}(2)}$ which are compatible with the actions of $\Gamma$, the results for $D^{\diamond}_{{\rm{SL}}(2)}$ and $D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}}$ follow from the result for $D^{\star}_{{\rm{SL}}(2)}$.
\end{pf}

\begin{sbcor}
The spaces in the above theorem are Hausdorff.
\end{sbcor}

This is obtained from the above theorem by taking $\Gamma=\{1\}$.

\begin{sbcor}
Let $X= D^I_{{\rm{SL}}(2)}$, $D^{II}_{{\rm{SL}}(2)}$, $D^{\star}_{{\rm{SL}}(2)}$ or $D^{\diamond}_{{\rm{SL}}(2)}$. Let $\Gamma$ be a neat subgroup of $G_{{\bold{Z}}}$. Then there is a unique structure on $\Gamma\backslash X$ as an object of ${\cal {B}}'_{\bold{R}}(\log)^+$ (\ref{b[+]}) such that the projection $X\to \Gamma\backslash X$ is a morphism in ${\cal {B}}'_{\bold{R}}(\log)^+$ which is locally an isomorphism.
\end{sbcor}

\begin{pf}
This follows from (3) of Theorem \ref{1.4.3} and the corresponding results for $D^I_{{\rm{SL}}(2)}$ and $D^{II}_{{\rm{SL}}(2)}$ in Part II, Theorem 3.5.17.
\end{pf}

\subsection{Results on nilpotent orbits in one variable}\label{ss:cks4}

We prove Theorem \ref{rk1} and Theorem \ref{r1eq} on nilpotent orbits in one variable. In \ref{rk21}--\ref{rk22}, we give a counter-example to the extension of Theorem \ref{rk1} to nilpotent orbits in many variables.

\begin{sbpara}
Let $(D^{\sharp}_{\Sigma,[:]})'\subset D^{\sharp}_{\Sigma,[:]}$ be the union of the two open sets $D_{\Sigma,[:]}^{\sharp,{{\rm{mild}}}}$ (\ref{mild}) and the inverse image of $D_{{\rm{SL}}(2),\mathrm{nspl}}$ under $D^{\sharp}_{\Sigma,[:]}\to D_{{\rm{SL}}(2)}^I$ in \ref{valper} (1). Then $(D^{\sharp}_{\Sigma,[:]})'$ is the union of $D_{\Sigma,[:]}^{\sharp,{{\rm{mild}}}}$ and the set of points $p$ of $D^{\sharp}_{\Sigma,[:]}$ such that if $N_1, \dots, N_n$ (ordered) are the monodromy logarithms associated to $p$, then $(W, N_1)$ does not split.

The morphisms $D_{\Sigma, [:]}^{\sharp,{{\rm{mild}}}}\to D^{\star}_{{\rm{SL}}(2)}$ (\ref{diathm}, \ref{afd2}) and $D_{{\rm{SL}}(2),\mathrm{nspl}}\to D^{\star}_{{\rm{SL}}(2)}$ (\ref{lam}) induce a morphism $(D_{\Sigma,[:]}^{\sharp})' \to D_{{\rm{SL}}(2)}^{\star}$. Let $(D^{\sharp}_{\Sigma,[{\mathrm{val}}]})'$ be the inverse image of $(D^{\sharp}_{\Sigma,[:]})'$ in $D^{\sharp}_{\Sigma,[{\mathrm{val}}]}$. Then we obtain the induced morphism $(D^{\sharp}_{\Sigma,[{\mathrm{val}}]})'\to D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$ and a commutative diagram
$$\begin{matrix} (D^{\sharp}_{\Sigma,[{\mathrm{val}}]})'& \overset{\psi}\to & D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\\ \downarrow &&\downarrow\\ (D^{\sharp}_{\Sigma,[:]})' & \overset {\psi} \to &D^{\star}_{{\rm{SL}}(2)}. \end{matrix}$$
\end{sbpara}

Let $\Xi$ be as in \ref{afd3}.
Since $(D_{\Xi}^{\sharp})'=D^{\sharp}_{\Xi}$, we have

\begin{sbthm}\label{rk1}
The identity map of $D$ extends uniquely to a continuous map
$$D_{\Xi}^{\sharp}\to D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$$
and hence extends uniquely to a continuous map $D^{\sharp}_{\Xi} \to D_{{\rm {BS}},{\mathrm{val}}}$.
\end{sbthm}

\begin{sbrem}\label{r1rem}
(1) The image of $D^{\sharp}_{\Xi}$ in $D_{{\rm{SL}}(2)}$ is contained in $D_{{\rm{SL}}(2),\leq 1}$ for both structures $I$, $II$ of $D_{{\rm{SL}}(2)}$. (We denote by $\leq 1$ the part where the log structure is of rank $\leq 1$.)

(2) However, the image of $D^{\sharp}_{\Xi}$ in $D^{\star}_{{\rm{SL}}(2)}$ is not necessarily contained in $D^{\star}_{{\rm{SL}}(2),\leq 1}$. (This is seen in \ref{c.ex} below.) Hence the morphism in \ref{rk1} cannot be obtained as the composition $D^{\sharp}_{\Xi}\to D^{\star}_{{\rm{SL}}(2),\leq 1}\cong D^{\star}_{{\rm{SL}}(2),\leq 1,{\mathrm{val}}}$ (the first arrow here need not exist). For $p\in D_{\Xi}^{\sharp}$, it can happen that the image of $p$ in $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$ has some information about $p$ which the image of $p$ in $D^{\star}_{{\rm{SL}}(2)}$ does not have (see \ref{info1} and \ref{ex1} below).

(3) The image of $D^{\sharp,{{\rm{mild}}}}_{\Xi}\to D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ (2.1.4) is contained in $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2),\leq 1}$.
\end{sbrem}

\begin{sbthm}\label{r1eq}
Let $p=({\bold{R}}_{\geq 0}N, \exp(i{\bold{R}} N)F)\in D^{\sharp}_{\Xi}$ with $N\ne0$. Let $W'=W^{(1)}$ be the relative monodromy filtration of $N$ with respect to $W$. Let $(W',\hat F)$ be the ${\bold{R}}$-split mixed Hodge structure associated to the mixed Hodge structure $(W',F)$, i.e., $\hat F={\rm{spl}}_{W'}(F)(F(\gr^{W'}))$ $(1.2)$. Then the following conditions are equivalent.

{\rm (i)} $p$ belongs to $D^{\sharp,{{\rm{mild}}}}_{\Xi}$.

{\rm (ii)} $\exp(iyN)F$ converges in $D^{\diamond}_{{\rm{SL}}(2)}$ when $y\to \infty$.

{\rm (iii)} $\delta_W(\exp(iyN)F)$ converges in ${\cal {L}}$ when $y\to \infty$.

{\rm (iv)} The image of $p$ in $D^{\star}_{{\rm{SL}}(2)}$ (\ref{rk1}) belongs to $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$.

{\rm (v)} The image of $p$ in $D_{{\rm {BS}}}$ (\ref{rk1}) belongs to $D^{{{\rm{mild}}}}_{{\rm {BS}}}$.

{\rm (vi)} The image of $p$ in $D_{{\rm{SL}}(2)}$ belongs to $D_{{\rm{SL}}(2),{\rm{spl}}}$ (\ref{lam}).

{\rm (vii)} $\delta_W(\exp(iN)\hat F) =0$.

{\rm (viii)} The splitting ${\rm{spl}}_W(\exp(iN)\hat F)$ of $W$ is compatible with $N$.
\end{sbthm}

\begin{pf}
We have proved (i) $\Rightarrow$ (ii). (ii) $\Rightarrow$ (iii) is clear. We know (ii) $\Rightarrow$ (iv) $\Leftrightarrow$ (v), (v) $\Rightarrow$ (vi) $\Leftrightarrow$ (vii). (viii) $\Rightarrow$ (i) is clear. It is sufficient to prove the implications (iii) $\Rightarrow$ (vii) and (vii) $\Rightarrow$ (viii).

Let $s= {\rm{spl}}_W(\exp(iN)\hat F)$, $\bar N=\gr^W(N)\in \bigoplus_w \Hom(\gr^W_w, \gr^W_w)$, $N_0=s\bar N s^{-1}$.

We prove (vii) $\Rightarrow$ (viii). Assume (vii). Then $\exp(iN)\hat F= s(\exp(i\bar N){\hat F}(\gr^W)) = \exp(iN_0) \hat F$. For the mixed Hodge structure $(W', \exp(iN)\hat F)=(W', \exp(iN_0)\hat F)$, we have $N=\delta_{W'}(\exp(iN)\hat F) = \delta_{W'}(\exp(iN_0)\hat F) = N_0$ and (viii) holds.
For the proof of (iii) $\Rightarrow$ (vii), we first prove the following claim.

{\bf Claim.} $\delta_W(\exp(iN)\hat F)$ is of $W'$-weight $\leq -1$.

Proof of Claim. Let $A= W_0 {\rm {End}}_{{\bold{C}}}(H_{0,{\bold{C}}})$. For $r\geq 1$, let $I^{(r)}$ be the two-sided ideal $W_{-1}A\cap W'_{-r}A$ of $A$, and let $I=I^{(1)}$. Let $M_1= I \cap {\rm {End}}_{\bold{R}}(H_{\bold{R}})$. Let $M_2$ be the part of $iM_1\subset I$ consisting of all elements which belong to the $(\leq -1, \leq -1)$-Hodge component of $A$ with respect to $\exp(iN_0)\hat F$. Let $M_3$ be the part of $I$ consisting of all elements which belong to $F^0A$ with respect to $\exp(iN)\hat F$. Then we have $I^{(r)}= (I^{(r)}\cap M_1)\oplus (I^{(r)}\cap M_2)\oplus (I^{(r)}\cap M_3)$ for any $r\geq 1$.

We have $\exp(iN)\hat F=\exp(x) \exp(iN_0)\hat F$ for some $x\in I$. Hence by \ref{Qalg2}, there are $x_j\in M_j$ $(j=1,2,3)$ such that $\exp(iN)\hat F= \exp(x_1)\exp(x_2)\exp(x_3)\exp(iN_0)\hat F= \exp(x_1)\exp(x_2)\hat F= s' \exp(i\delta) \exp(i\bar N){\hat F}(\gr^W)$ where $s' = \exp(x_1)s\in {\rm{spl}}(W)$ and $i\delta =s^{-1}x_2s$. We have $\delta_W(\exp(iN)\hat F)=\delta\in W'_{-1}{\rm {End}}_{\bold{R}}(\gr^W_{\bold{R}})$.

Now we prove (iii) $\Rightarrow$ (vii). Assume that $\delta_W(\exp(iN)\hat F)\neq 0$. By the Claim, there is $w\leq -1$ such that the component of $\delta_W(\exp(iN)\hat F)$ of $\tau$-weight $w$ is non-zero. Since ${\rm{Ad}}(\tau(\sqrt{y})) \delta_W(\exp(iyN)F)$ converges to $\delta_W(\exp(iN)\hat F)$ when $y\to \infty$, $\delta_W(\exp(iyN)F)$ is ${\rm{Ad}}(\tau(\sqrt{y})^{-1})B(y)$ where $B(y)$ converges to an element whose $w$-part is non-zero. Hence the component of $\delta_W(\exp(iyN)F)$ of $\tau$-weight $w$ is $y^{-w/2}C(y)$, where $C(y)$ converges to a non-zero element; since $w\leq -1$, this diverges, contradicting (iii).
\end{pf}

\begin{sbpara}\label{rk21}
We have constructed CKS maps $D^{\sharp,{{\rm{mild}}}}_{\Sigma, [:]}\to D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}$ (\ref{diathm}) and $D^{\sharp}_{\Xi}\to D^{\star}_{{\rm{SL}}(2)}$ (\ref{rk1}). In the rest of this subsection, we show an example of $\sigma$ of rank $2$ such that there is no continuous map $D^{\sharp}_{\sigma,{\mathrm{val}}}\to D^{\star}_{{\rm{SL}}(2)}$ which extends the identity map of $D$.
\end{sbpara}

\begin{sbpara}
Take an integer $m\geq 1$. (The case $m\geq 3$ will be a crucial example.) Let $H_0$ be of rank $2m+1$ with basis $e_j'$ $(1\leq j\leq m)$, $e_j$ $(1\leq j\leq m)$, and $e$. The weight filtration is as follows. $W_{-m-1}=0$. $W_{-m}$ is generated by $e_j'$ and $e_j$ $(1\leq j\leq m)$. $W_{-1}=W_{-m}$. $W_0$ is the total space.

We have $N_1, N_2$ defined as follows. $N_1e=0$, $N_1e_j=e'_j$, $N_1e'_j=0$. $N_2e=e'_m$, $N_2e_j=e_{j-1}$ and $N_2e'_j=e'_{j-1}$ for $2\leq j\leq m$, $N_2e_1=N_2e'_1=0$. Let $\sigma$ be the cone generated by $N_1$ and $N_2$. Note that $(W, N_1)$ splits, but $(W, N_2)$ does not split.
\end{sbpara}

\begin{sbpara}
For $j=1,2$, let $W^{(j)}$ be the $W$-relative $N_j$-filtration. We give a splitting of $W^{(1)}$, which is compatible with $N_1$, as follows. $e$ is of weight $0$. $e_j$ is of weight $-m+1$, and $e'_j$ is of weight $-m-1$ $(1\le j\le m)$. We give a splitting of $W^{(2)}$, which is compatible with $N_2$, as follows. $e$ is of weight $0$. $e_j$ is of weight $-2(m-j)$, and $e'_j$ is of weight $-2(m-j+1)$ $(1\le j\le m)$.
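For instance, substituting $m=3$ (the smallest case to which Lemma \ref{noCKS} below applies) into these formulas, the two splittings assign the weights
$$W^{(1)}:\ e\mapsto 0,\quad e_1,e_2,e_3\mapsto -2,\quad e'_1,e'_2,e'_3\mapsto -4;$$
$$W^{(2)}:\ e\mapsto 0,\quad (e_3,e_2,e_1)\mapsto (0,-2,-4),\quad (e'_3,e'_2,e'_1)\mapsto (-2,-4,-6).$$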
\end{sbpara}

\begin{sbpara}
Define $\alpha_1,\alpha_2: {\bold{G}}_{m,{\bold{R}}}\to {\rm {Aut}}_{\bold{R}}(H_{0,{\bold{R}}})$ by using the above splittings of $W^{(1)}$ and $W^{(2)}$, respectively. Then $\alpha_1$ and $\alpha_2$ commute. Define $\alpha^{\star}_1, \alpha^{\star}_2: {\bold{G}}_{m,{\bold{R}}}\to {\rm {Aut}}_{\bold{R}}(H_{0,{\bold{R}}})$ by $\alpha^{\star}_j(t) e=e$ and $\alpha_j^{\star}(t) x= t^m \alpha_j(t) x$ for $x\in W_{-m}$. Let
$$t(y)= \alpha_1((y_2/y_1)^{1/2})\alpha_2((1/y_2)^{1/2}), \quad t^{\star}(y)=\alpha^{\star}_1((y_2/y_1)^{1/2})\alpha^{\star}_2((1/y_2)^{1/2}).$$
\end{sbpara}

\begin{sbpara}
We have
$${\rm{Ad}}(t(y))^{-1}(y_1N_1+y_2N_2)=N_1+N_{2,y},$$
where $N_{2,y}$ coincides with $N_2$ on $e_j$ and $e'_j$ $(1\le j\le m)$, but $N_{2,y}e=(y_2/y_1)^{(m+1)/2}e_m'$ and the last element converges to $0$ when $y_1/y_2\to \infty$.
$${\rm{Ad}}(t^{\star}(y))^{-1}(y_1N_1+y_2N_2)=N_1+ N'_{2,y},$$
where $N'_{2,y}$ coincides with $N_2$ on $e_j$ and $e'_j$ $(1\le j\le m)$, but $N'_{2,y}e=u_ye_m'$, where
$$u_y=(y_2/y_1)^{1/2}y_2^{m/2}= y_1^{-1/2}y_2^{(m+1)/2}.$$
\end{sbpara}

\begin{sbpara}
Note that $u_y$ need not converge when $y_2\to \infty$ and $y_1/y_2\to \infty$.
\end{sbpara}

\begin{sbpara}
Let $F$ be as follows. $F^1=0$. $F^0$ is generated by $e$ and $e_m$. $F^{-j}$, for $1\leq j\leq m-1$, is generated by $F^{-j+1}$ and $e_{m-j}$, $e'_{m-j+1}$. $F^{-m}$ is the total space. Then $(N_1, N_2, F)$ generates a nilpotent orbit. Hence $\exp(iy_1N_1+iy_2N_2)F$, as $y_2\to \infty$ and $y_1/y_2\to \infty$, converges in $D^{\sharp}_{\sigma,{\mathrm{val}}}$.
\end{sbpara}

\begin{sblem}\label{noCKS}
Let the notation be as above. If $m\geq 3$, $\exp(iy_1N_1+iy_2N_2)F$, as $y_2\to \infty$ and $y_1/y_2\to \infty$, need not converge in $D^{\star}_{{\rm{SL}}(2)}$.
\end{sblem}

This follows from

\begin{sblem}\label{noCKS2}
Let the notation and the assumption be as above. Let $F_y:=t^{\star}(y)^{-1}\exp(iy_1N_1+iy_2N_2)F$. Then $\delta_W(F_y)$ does not converge in $\bar{\cal {L}}$ when $y_2\to \infty$ and $y_1/y_2\to \infty$.
\end{sblem}

\begin{sbpara}\label{rk22}
We prove Lemma \ref{noCKS2}. Since $F_y= \exp({\rm{Ad}}(t^{\star}(y))^{-1}(iy_1N_1+iy_2N_2)) F$, $F_y$ is described as follows. $F_y^1=0$, $F_y^0$ is generated by $e+\sum_{k=1}^m i^k\cdot k!^{-1}\cdot u_y \cdot e_{m-k+1}'$ and $\exp(iN_1+iN_2)e_m$, $F_y^{-j}$ $(1\leq j\leq m-1)$ is generated by $F_y^{-j+1}$ and $\exp(iN_1+iN_2)e_{m-j}$ and $\exp(iN_1+iN_2)e_{m-j+1}'$, and $F_y^{-m}$ is the total space.

The Hodge type of $\gr^W_{-m}$ with respect to $F_y$ is as follows: the $(j, -m-j)$-Hodge component is one-dimensional if $j=0$ or $j=-m$, two-dimensional if $-1\geq j\geq 1-m$, and zero otherwise. $\delta_W(F_y)$ sends $e$ to the sum of the $(j, -m-j)$-components, for $-1\geq j\geq 1-m$, of $v_y:=\sum_{1\leq k\leq m,\,k:\text{odd}} (-1)^{(k-1)/2}\cdot k!^{-1}\cdot u_y \cdot e'_{m-k+1}$.

{\bf Claim.} $v_y$ does not belong to the $((0,-m)+(-m,0))$-Hodge component of $\gr^W_{-m}$.

By the claim, $v_y$ is $u_y$ times a {\it non-zero} element which is independent of $y_1, y_2$. Hence, when $y_1/y_2, y_2\to \infty$, $v_y$ need not converge in $\bar {\cal {L}}$.

We prove the claim. Assume that $v_y$ belongs to the $((0,-m)+(-m,0))$-Hodge component of $\gr^W_{-m}$.
Then we should have
$$\sum_{1\leq k\leq m, k:\text{odd}} (-1)^{(k-1)/2}\cdot k!^{-1}\cdot e'_{m-k+1} = a\exp(iN_1+iN_2)e_m + b\exp(-iN_1-iN_2)e_m$$
for some $a,b\in {\bold{C}}$. If $V$ denotes the ${\bold{C}}$-vector space generated by $e_j$ $(1\leq j\leq m)$ and $e'_j$ $(1\leq j\leq m-3)$, we should have
$$e'_m - (1/6)e'_{m-2} \equiv a(ie'_m - e'_{m-1} -(i/2)e'_{m-2}) + b(-ie'_m - e'_{m-1}+(i/2) e'_{m-2})\; \bmod V.$$
(To get this, use $(N_1+N_2)^k=kN_1N_2^{k-1}+ N_2^k$ and hence $\exp(iN_1+iN_2) = 1+ \sum_{k=1}^\infty (i^k \cdot (k-1)!^{-1} \cdot N_1N_2^{k-1}+ i^k \cdot k!^{-1}\cdot N_2^k)$.) By comparing the coefficients of $e'_{m-1}$, we have $a+b=0$. Hence
$$e'_m-(1/6)e'_{m-2}\equiv a\cdot 2i\cdot (e'_m - (1/2)e'_{m-2})\;\bmod V.$$
This is impossible.
\end{sbpara}

\begin{sbrem}
We do not know whether or not the identity map of $D$ always extends to a continuous map $D_{\Sigma, [{\mathrm{val}}]}^{\sharp}\to D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$.
\end{sbrem}

\subsection{Extended period maps}\label{ss:per}

The following is a modified version of Part III, 7.5.1 (1).

\begin{sbthm}\label{perth1}
Let $S$ be a connected, log smooth, fs log analytic space, and let $U$ be the open subspace of $S$ consisting of all points of $S$ at which the log structure of $S$ is trivial. Let $H$ be a variation of mixed Hodge structure on $U$ with polarized graded quotients for the weight filtration, and with unipotent local monodromy along $S\smallsetminus U$. Assume that $H$ extends to a log mixed Hodge structure (Part III, \S1.3) on $S$ $($that is, $H$ is admissible along $S\smallsetminus U$ as a variation of mixed Hodge structure$)$.

Fix a base point $u\in U$ and let $\Lambda=(H_0,W, (\langle\;,\;\rangle_w)_w, (h^{p,q})_{p,q})$ be $(H_{{\bold{Z}},u}, W, (\langle\;,\;\rangle_{w,u})_w, (\text{the Hodge numbers of $H$}))$. Let $\Gamma$ be a subgroup of $G_{\bold{Z}}$ which contains the global monodromy group $\text{Image}(\pi_1(U,u)\to G_{\bold{Z}})$ and assume that $\Gamma$ is neat. Let $\varphi: U\to \Gamma\backslash D$ be the associated period map. Let $S^{\log}_{[:]}= S^{\log} \times_S S_{[:]}$ and let $S^{\log}_{[{\mathrm{val}}]}=S^{\log}\times_S S_{[{\mathrm{val}}]}$, and regard $U$ as an open set of these spaces. Then:

(1) The map $\varphi:U\to\Gamma\backslash D$ extends uniquely to continuous maps
$$ S_{[:]}^{\log}\to\Gamma\backslash D_{{\rm{SL}}(2)}^I, \quad S_{[{\mathrm{val}}]}^{\log} \to \Gamma \backslash D^I_{{\rm{SL}}(2),{\mathrm{val}}}. $$

(2) Assume that the complement $S\smallsetminus U$ of $U$ is a smooth divisor on $S$. Then the map $\varphi:U\to\Gamma\backslash D$ extends uniquely to a continuous map
$$ S^{\log} \to \Gamma \backslash D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}} $$
and hence extends uniquely to a continuous map $S^{\log} \to \Gamma \backslash D_{{\rm {BS}},{\mathrm{val}}}$.
\end{sbthm}

\begin{pf}
(1) is a modified version of Part III, 7.5.1 (1), which treated the extended period map $S^{\log}_{{\mathrm{val}}} \to D^I_{{\rm{SL}}(2)}$ where $S^{\log}_{{\mathrm{val}}}$ is the topological space defined in \cite{KU} 3.6.26. This map factors through the quotient space $S_{[:]}^{\log}$ of $S^{\log}_{{\mathrm{val}}}$ as is seen by the arguments in \ref{Dsig:3}.
Since $S^{\log}_{{\mathrm{val}}}\to S^{\log}_{[:]}$ is a proper surjective continuous map, the map $S^{\log}_{[:]}\to \Gamma \backslash D_{{\rm{SL}}(2)}^I$ is continuous. The last map is compatible with log structures as is seen by the arguments in \ref{5.6.4} and hence induces a continuous map $S^{\log}_{[{\mathrm{val}}]}\to \Gamma\backslash D^I_{{\rm{SL}}(2),{\mathrm{val}}}$.

(2) is proved similarly to (1) by using Theorem \ref{rk1}.
\end{pf}

In the rest of this Section 6.3, we consider mild log mixed Hodge structures.

\begin{sbprop}\label{WNQ}
Let $\sigma$ be a rational nilpotent cone (that is, an ${\bold{R}}_{\ge0}$-cone generated by rational elements) in $\fg_{\bold{R}}$. Assume that there is $F\in \Dc$ such that $(\sigma, F)$ generates a nilpotent orbit. If $(W, N)$ splits for any rational element $N$ of $\sigma$, then $(W, N)$ splits for any element $N$ of the cone $\sigma$.
\end{sbprop}

\begin{pf}
We may assume that $N$ is in the interior $\sigma_{>0}$ of $\sigma$. This is because if we denote by $\sigma'$ the face of $\sigma$ such that $N$ belongs to the interior of $\sigma'$, then $(\sigma', \exp(iN')F)$ generates a nilpotent orbit for some $N'\in \sigma_{>0}$ and hence we can replace $\sigma$ by $\sigma'$.

Assume $N\in \sigma_{>0}$. Let $\hat F$ be the ${\bold{R}}$-split MHS associated to the MHS $(M(W, \sigma), F)$. Then $\exp(iN')\hat F\in D$ for any $N'\in \sigma_{>0}$. Hence, as the composition of the continuous map $D\to {\cal {L}}\;;\;x\mapsto \delta_W(x)$ and the continuous map $\sigma_{>0}\to D\;;\; N' \mapsto \exp(iN')\hat F$, the map $\sigma_{>0}\to {\cal {L}}\;;\;N'\mapsto \delta_W(\exp(iN')\hat F)$ is continuous. By the implication (i) $\Rightarrow$ (vii) of \ref{r1eq}, the last map sends all rational elements of $\sigma_{>0}$ to $0$. Hence it sends $N$ also to $0$. By the implication (vii) $\Rightarrow$ (i) of \ref{r1eq}, this shows that $(W, N)$ splits.
\end{pf}

\begin{sbpara}
Let ${\cal {B}}(\log)$ be the category of locally ringed spaces over ${\bold{C}}$ endowed with fs log structures satisfying a certain condition, defined in \cite{KU} (see \cite{KNU2} Part III, \S1.1 for the review). Let $S$ be an object of ${\cal {B}}(\log)$ and let $H$ be an LMH on $S$ with polarized graded quotients for the weight filtration. By \ref{WNQ}, for $s\in S$ and for $t\in S^{\log}$ lying over $s$, the two conditions (i) and (ii) below are equivalent. Let
$$\pi_1^+(s^{\log}):= \Hom((M_S/\cO_S^\times)_s, {\bold{N}}) \subset \pi_1(s^{\log})=\Hom((M_S/\cO^\times_S)_s, {\bold{Z}}),$$
$$\pi_1(s^{\log},{\bold{R}}_{\geq 0}) := \Hom((M_S/\cO^\times_S)_s, {\bold{R}}_{\geq 0}^{\add})\subset {\bold{R}}\otimes \pi_1(s^{\log})= \Hom((M_S/\cO_S^\times)_s, {\bold{R}}^{\add}).$$
Consider the action $\rho$ of $\pi_1(s^{\log})$ on $H_{{\bold{Z}},t}$, and consider the homomorphism
$$\log(\rho): {\bold{R}}\otimes \pi_1(s^{\log})\to {\rm {End}}_{{\bold{R}}}(H_{{\bold{R}}, t})\;;\; a\otimes \gamma \mapsto a\log(\rho(\gamma)).$$
Let $W$ be the weight filtration on $H_{{\bold{R}},t}$.

(i) For any $\gamma\in \pi_1(s^{\log}, {\bold{R}}_{\geq 0})$, $(W, \log(\rho)(\gamma))$ splits.

(ii) For any $\gamma\in \pi_1^+(s^{\log})$, $(W, \log(\rho)(\gamma))$ splits.
We say that $H$ is {\it mild} (we also say that $H$ is {\it of mild degeneration}) if the equivalent conditions (i) and (ii) are satisfied for any $s$ and $t$.
\end{sbpara}

\begin{sblem}\label{pullback}
Let $S$ and $H$ be as above and assume $H$ is mild. Let $S'\to S$ be a morphism in ${\cal {B}}(\log)$. Then the pull back of $H$ to $S'$ is mild.
\end{sblem}

This is clear.

\begin{sbprop}\label{Ccut}
Let $S$ be a log smooth fs log analytic space over ${\bold{C}}$ and let $H$ be a log mixed Hodge structure on $S$ with polarized graded quotients for the weight filtration $W$. Then the following two conditions (i) and (ii) are equivalent (as before, $U$ denotes the open subspace of $S$ where the log structure is trivial).

(i) $H$ is mild.

(ii) For any smooth analytic curve $C$ over ${\bold{C}}$ and any analytic map $f: C\to S$ such that the subset $f^{-1}(S\smallsetminus U)$ of $C$ is finite, the pull back $f^*H$ on $C$ is mild. Here we endow $C$ with the log structure associated to the finite subset $f^{-1}(S\smallsetminus U)$.

If $S$ is an algebraic variety over ${\bold{C}}$, these conditions are equivalent to the modified version of the condition (ii) in which we take only smooth algebraic curves $C$.
\end{sbprop}

\begin{pf}
By \ref{pullback}, we have (i) $\Rightarrow$ (ii). We prove (ii) $\Rightarrow$ (i). Assume (ii). Let $s\in S\smallsetminus U$ and let $t$ be a point of $S^{\log}$ lying over $s$. Let $\gamma\in \pi_1^+(s^{\log})$. We prove that $(W, \log(\rho)(\gamma))$ splits.

Let $\sigma$ be the face of $\pi_1^+(s^{\log})$, regarded as a monoid, such that $\gamma$ belongs to the interior of $\sigma$. Then there are $s'\in S$ and $t'\in S^{\log}$ lying over $s'$ and isomorphisms $\pi_1^+((s')^{\log})\cong \sigma$ and $H_{{\bold{R}}, t'}\cong H_{{\bold{R}},t}$ such that the action of $\pi_1^+(s^{\log})$ on $H_{{\bold{R}},t}$ and the action of $\pi_1^+((s')^{\log})$ on $H_{{\bold{R}},t'}$ are compatible via these isomorphisms. By this we are reduced to the case where $\gamma$ belongs to the interior of $\pi_1^+(s^{\log})$.

Assume $\gamma$ belongs to the interior of $\pi_1^+(s^{\log})$. Then there are a smooth analytic curve $C$ over ${\bold{C}}$, a morphism $f:C \to S$ and $s' \in C$ satisfying the following conditions (1)--(3).

(1) $f(s')=s$.

(2) $f^{-1}(S\smallsetminus U)$ is finite.

(3) The image of $\pi_1^+((s')^{\log})\to \pi_1^+(s^{\log})$ contains $\gamma$.

By the condition (ii), this proves that $(W, \log(\rho)(\gamma))$ splits.

In the case where $S$ is an algebraic variety, the same arguments show that the modified version of (ii) implies (i).
\end{pf}

\begin{sbthm}\label{perth2}
Let the assumptions be as in \ref{perth1}. Assume furthermore that $H$ is mild.
{\rm(1)} The period map $\varphi: U \to \Gamma \backslash D$ extends uniquely to continuous maps
$$S^{\log}_{[:]}\to \Gamma\backslash D^{\diamond}_{{\rm{SL}}(2)},\quad S^{\log}_{[:]}\to \Gamma\backslash D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)},$$
$$ S^{\log}_{[{\mathrm{val}}]}\to \Gamma \backslash D^{\diamond}_{{\rm{SL}}(2),{\mathrm{val}}}, \quad S^{\log}_{[{\mathrm{val}}]}\to \Gamma\backslash D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2),{\mathrm{val}}}, \quad S^{\log}_{[{\mathrm{val}}]}\to \Gamma\backslash D^{{{\rm{mild}}}}_{{\rm {BS}},{\mathrm{val}}}.$$

{\rm(2)} For any point $s\in S$, there exist an open neighborhood $V$ of $s$, a log modification $V'$ of $V$ $(\text{\cite{KU}}\ 3.6.12)$, a commutative subgroup $\Gamma'$ of $\Gamma$, and a fan $\Sigma$ in $\fg_{\bold{Q}}$ which is strongly compatible with $\Gamma'$ such that the period map $\varphi|_{U\cap V}$ lifts to a morphism $U\cap V \to \Gamma'\backslash D$ which extends uniquely to a morphism $V'\to\Gamma'\backslash D^{{{\rm{mild}}}}_\Sigma$ of log manifolds.
$$ \begin{matrix} U & \supset & U\cap V & \subset & V' \\ {{}^\varphi}\downarrow\;\;\; & & \downarrow & & \downarrow\\ \Gamma \backslash D & \leftarrow & \Gamma' \backslash D & \subset & \Gamma' \backslash D^{{{\rm{mild}}}}_\Sigma. \end{matrix} $$

Furthermore, we have:

{\rm(2.1)} Assume $S\smallsetminus U$ is a smooth divisor. Then we can take $V=V'=S$ and $\Gamma'=\Gamma$. That is, we have a commutative diagram
$$ \begin{matrix} U&\subset & S \\ {{}^\varphi}\downarrow\;\;\;&&\downarrow\\ \Gamma\backslash D&\subset&\Gamma\backslash D^{{{\rm{mild}}}}_{\Sigma}.\end{matrix} $$

{\rm(2.2)} Assume $\Gamma$ is commutative. Then we can take $\Gamma'=\Gamma$.

{\rm (2.3)} Assume that $\Gamma$ is commutative and that the following condition {\rm(i)} is satisfied.

{\rm(i)} There is a finite family $(S_j)_{1\leq j\leq n}$ of connected locally closed analytic subspaces of $S$ such that $S=\bigcup_{j=1}^n S_j$ as a set and such that, for each $j$, the inverse image of the sheaf $M_S/\cO^\times_S$ on $S_j$ is locally constant.

Then we can take $\Gamma'=\Gamma$ and $V=S$.
\end{sbthm}

(1) and (2) are modified versions of Part III, Theorem 7.5.1 (1) and (2), respectively. (2) is proved in the same way as Part III, Theorem 7.5.1 (2). We can deduce (1) from (2) by using $D^{\sharp,{{\rm{mild}}}}_{\Sigma}\to D^{\diamond}_{{\rm{SL}}(2)}$ (\ref{diathm}) and the arguments in the above proof of Theorem \ref{perth1} (1).

\section{Relations with asymptotic behaviors of regulators and local height pairings}\label{s:Ex}

In this Section \ref{s:Ex}, we give examples describing the relation of this work to the work \cite{BK} on the asymptotic behaviors of regulators and local height pairings.

\subsection{Example III}\label{III}

This is Example III of Parts I, II. It appeared in Part III as the case $b=2$ of 7.1.3. As explained in Section \ref{ss:reg} below, this example is related to the regulator of $K_2$ of a degenerating elliptic curve.
In this Example III, and also in Example IV in Section 7.3 below, we compare $D_{{\rm {BS}}}$, $D_{{\rm{SL}}(2)}^I$, $D_{{\rm{SL}}(2)}^{II}$, $D^{\star}_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2)}^{\diamond}$, and their associated valuative spaces, by regarding them as topological spaces, that is, we forget the real analytic structures.

\begin{sbpara}
Let $H_0={\bold{Z}}^3$ with basis $e_1,e_2,e_3$. The weight filtration is given by
$$W_{-4}=0\subset W_{-3}={\bold{R}} e_1+{\bold{R}} e_2=W_{-1}\subset W_0=H_{0,{\bold{R}}}.$$
The intersection form on $\gr^W_{-3}$ is the anti-symmetric form characterized by $\langle e_2, e_1\rangle=1$.
\end{sbpara}

\begin{sbpara}
$D(\gr^W)\cong \frak h$, the upper half plane. $D_{{\rm{SL}}(2)}(\gr^W)=D_{{\rm {BS}}}(\gr^W)={\frak h}_{{\rm {BS}}}$.
\end{sbpara}

\begin{sbpara}
We have a homeomorphism
$$D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\overset{\cong}\to D_{{\rm {BS}},{\mathrm{val}}}$$
and this induces a homeomorphism
$$D^{\star}_{{\rm{SL}}(2)}\cong D_{{\rm {BS}}}$$
of quotient spaces.

Let $W'$ be the increasing filtration on $\gr^W$ given by
$$ W'_{-5}=0\subset W'_{-4}={\bold{R}} e_1=W'_{-3} \subset W'_{-2}=\gr^W_{-3} \subset W'_0=\gr^W, $$
and let $\Phi=\{W'\}$. Let $P$ be the parabolic subgroup of $G_{\bold{R}}$ consisting of elements which preserve $W'$. Then $D_{{\rm {BS}}}(P)=D^{\star}_{{\rm{SL}}(2)}(\Phi)$ and it is the inverse image of the open set $\{x+iy\;|\;x\in {\bold{R}},\; y\in (0, \infty]\}$ of ${\frak h}_{{\rm {BS}}}$ under the projection $D_{{\rm {BS}}}=D^{\star}_{{\rm{SL}}(2)}\to D_{{\rm {BS}}}(\gr^W)=D_{{\rm{SL}}(2)}(\gr^W)={\frak h}_{{\rm {BS}}}$.

We have
$$D^I_{{\rm{SL}}(2)}= D^{II}_{{\rm{SL}}(2)}$$
(Part II, 3.6.2). So we denote $D_{{\rm{SL}}(2)}^I$ and $D_{{\rm{SL}}(2)}^{II}$ simply by $D_{{\rm{SL}}(2)}$.
\end{sbpara}

\begin{sbpara}\label{homeos1}
Let
$$V:={\bold{R}} e_1+{\bold{R}} e_2.$$
We have
$${\rm{spl}}(W)\cong V, \quad {\cal {L}}\cong V,$$
where $v\in V$ corresponds in the first isomorphism to the splitting of $W$ given by $e_3+v$, i.e., to $s\in{\rm{spl}}(W)$ such that $s(e_3(\gr^W_0))=e_3+v$, and $v\in V$ corresponds in the second isomorphism to $\delta\in {\cal {L}}$ such that $\delta(e_3(\gr^W_0))=v$. We have ${\cal {L}}(F)={\cal {L}}$ for any $F\in D(\gr^W)$.
\end{sbpara}

\begin{sbpara}\label{IIIdia}
We have homeomorphisms
$$D\cong {\frak h}\times {\cal {L}} \times {\rm{spl}}(W) \cong {\bold{R}}_{>0} \times V \times {\bold{R}}\times V,$$
where the first isomorphism is $F\mapsto (F(\gr^W), \delta_W(F), {\rm{spl}}_W(F))$, and the second isomorphism sends $(x+iy, \delta, s)$ to $(t, \delta, x, s)$, where $x, y\in {\bold{R}}$, $y>0$, $t:=1/\sqrt{y}$, and we identify both ${\cal {L}}$ and ${\rm{spl}}(W)$ with $V$ via the isomorphisms in \ref{homeos1}. We call the composition $D\cong {\bold{R}}_{>0}\times V \times {\bold{R}} \times V$ the {\it standard isomorphism for $D$}.

Let $\bar V=\bar{\cal {L}}$ be as in \ref{2.3ex} (4).
We have a commutative diagram of homeomorphisms
$$\begin{matrix} D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}(\Phi) && \cong && ({\bold{R}}_{\geq 0} \times V \times {\bold{R}}\times V)'\\ \uparrow &&&& \uparrow (1)\\ D^{\diamond}_{{\rm{SL}}(2)}(\Phi)&& \cong && ({\bold{R}}_{\geq 0} \times V \times {\bold{R}}\times V)'\\ \downarrow &&&& \downarrow (2)\\ D^{\star}_{{\rm{SL}}(2)}(\Phi) && \cong && {\bold{R}}_{\geq 0}\times {\bar V}\times {\bold{R}} \times V\\ \uparrow &&&& \uparrow \;\;\;\;\;\\ D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi) && \cong && ({\bold{R}}_{\geq 0}\times {\bar V})_{{\mathrm{val}}} \times {\bold{R}}\times V\\ \downarrow &&&& \downarrow (3)\\ D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi) && \cong && ({\bold{R}}_{\geq 0}\times {\bar V})_{{\mathrm{val}}} \times {\bold{R}}\times V\\ \downarrow &&&& \downarrow\;\;\;\;\;\\ D_{{\rm{SL}}(2)}(\Phi) && \cong && {\bold{R}}_{\geq 0} \times {\bar V} \times {\bold{R}}\times V, \end{matrix}$$
where
$$({\bold{R}}_{\geq 0}\times V \times {\bold{R}}\times V)':=\{(t, \delta, x, s)\in {\bold{R}}_{\geq 0} \times V\times {\bold{R}} \times V\;|\;\delta\in {\bold{R}} e_1\;\text{if}\;t=0\}, $$
and where $({\bold{R}}_{\geq 0}\times {\bar V})_{{\mathrm{val}}}$ is the valuative space of ${\bold{R}}_{\geq 0}\times {\bar V}$ associated to the canonical log structure (see a description below and also \cite{KU}, 0.5.21).

The homeomorphism for $D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}(\Phi)$ is compatible with the standard isomorphism for $D$, but the other homeomorphisms are {\it not} compatible with the standard isomorphism. The homeomorphism for $D^{\diamond}_{{\rm{SL}}(2)}(\Phi)$ (resp.\ $D^{\star}_{{\rm{SL}}(2)}(\Phi)$, resp.\ $D_{{\rm{SL}}(2)}(\Phi)$) sends a point of $D$ corresponding to $(t,c_1e_1+c_2e_2, x, u)\in {\bold{R}}_{>0}\times V \times {\bold{R}} \times V$ under the standard isomorphism to $(t, c_1e_1+t^{-1}c_2e_2, x, u)$ (resp.\ $(t, tc_1e_1+t^{-1}c_2e_2, x, u)$, resp.\ $(t, t^4c_1e_1+t^2c_2e_2, x, u)$). The homeomorphism for $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$) is compatible with the homeomorphism for $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2)}(\Phi)$).

Concerning the vertical arrows on the right-hand side, they are described as follows. The arrows without labels are the canonical projections (\ref{ss:valsp}). The map (1) sends $(t, c_1e_1+c_2e_2,x,u)$ to $(t, c_1e_1+tc_2e_2,x,u)$. The map (2) sends $(t,c_1e_1+c_2e_2, x,u)$ to $(t,tc_1e_1+c_2e_2, x,u)$. The map (3) is explained below.

The valuative space $({\bold{R}}_{\geq 0}\times {\bar V})_{{\mathrm{val}}}$ is described as follows. Over $U= ({\bold{R}}_{>0}\times \bar V)\cup ({\bold{R}}_{\geq 0}\times V)\subset {\bold{R}}_{\geq 0}\times \bar V$, it is $U$.
The inverse image of $\{0\}\times (\bar V\smallsetminus V)$ in $({\bold{R}}_{\ge0}\times {\bar V})_{{\mathrm{val}}}$ consists of the following points:

(a) $p(0, \lambda)$ $(\lambda\in \bar V\smallsetminus V)$,

(b) $p(c, \lambda)$ $(c\in {\bold{R}}_{>0}\smallsetminus {\bold{Q}}_{>0}$, $\lambda\in \bar V\smallsetminus V)$,

(c) $p(c+, \lambda)$ $(c\in {\bold{Q}}_{\geq 0}$, $\lambda \in {\bar V}\smallsetminus V)$,

(d) $p(c-, \lambda)$ $(c\in {\bold{Q}}_{>0}$, $\lambda\in \bar V\smallsetminus V)$,

(e) $p(c, \mu)$ $(c\in {\bold{Q}}_{>0}$, $\mu \in V\smallsetminus \{0\})$.

Write $\lambda = 0\circ\mu$ with $\mu \in V \smallsetminus \{0\}$ (\ref{2.3ex} (4)). Then the above point is the limit of $(t, t^{-c'}\mu)$, where $t>0$, $t\to 0$, and, in the cases of (b) and (e) (resp.\ case (a), resp.\ case (c), resp.\ case (d)), $c'=c$ (resp.\ $c'\to \infty$, resp.\ $c'>c$ and $c' \to c$, resp.\ $c'<c$ and $c'\to c$).

The map (3) sends $(t, \delta, x, u)$ $((t,\delta)\in ({\bold{R}}_{>0}\times {\bar V})\cup ({\bold{R}}_{\geq 0}\times V))$ to $(t, t^3\delta,x,u)$; $(p(0, \lambda), x,u)$ to $(p(0, \lambda), x,u)$; $(p(c, \alpha),x,u)$ ($\alpha \in \bar V$) to $(p(c-3,\alpha),x,u)$ if $c>3$, to $(0, \alpha,x,u)$ if $c=3$, and to $(0,0,x,u)$ if $0<c<3$; $(p(c+, \lambda),x,u)$ to $(p((c-3)+, \lambda),x,u)$ if $c\geq 3$, and to $(0,0,x,u)$ if $0\leq c<3$; $(p(c-, \lambda),x,u)$ to $(p((c-3)-, \lambda), x,u)$ if $c>3$, and to $(0,0,x,u)$ if $0<c\leq 3$.
\end{sbpara}

\begin{sbpara}
We describe, for Example III, (1) that there is no continuous map $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)\to D_{{\rm {BS}}}(P)=D_{{\rm{SL}}(2)}^{\star}(\Phi)$ which extends the identity map of $D$, and (2) how $\eta: D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)\to D_{{\rm {BS}},{\mathrm{val}}}(P)=D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ fails to be continuous.

Fixing $c_1, c_2\in {\bold{R}}$, for $t>0$, let $p(t)$ be the point of $D$ corresponding to $(t, c_1e_1+c_2e_2, 0,0)$ via the homeomorphism for $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ in \ref{IIIdia}. Then, via the homeomorphism for $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, $p(t)$ corresponds to $(t, t^3c_1e_1+t^3c_2e_2, 0,0)$. Hence, when $t\to 0$, $p(t)$ converges in $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$ to the point $p$ corresponding to $(0,0,0,0)$, but it converges in $D_{{\rm{SL}}(2)}^{\star}(\Phi)$ to the point corresponding to $(0, c_1e_1+c_2e_2,0,0)$, which depends on $(c_1,c_2)$. This explains (1). Concerning (2), the image of $p$ under $\eta$ is the point $p'$ of $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ corresponding to $(0,0,0,0)$. If $(c_1, c_2)\neq (0,0)$, $p(t)$ does not converge to $p'$ in $D^{\star}_{{\rm{SL}}(2)}(\Phi)$.
\end{sbpara}

\begin{sbpara}\label{noIIstar}
As is mentioned in \ref{weakstar}, the topology of $D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}$ does not coincide with that of $D^{\diamond}_{{\rm{SL}}(2)}$. Fixing $c\in {\bold{R}}$, for $t>0$, let $p(t)$ be the point of $D$ corresponding to $(t, tce_2, 0,0)$ via the homeomorphism for $D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}(\Phi)$ in \ref{IIIdia}. Then, when $t\to 0$, $p(t)$ converges in $D^{\diamond,\text{weak}}_{{\rm{SL}}(2)}(\Phi)$ to the point corresponding to $(0,0,0,0)$.
On the other hand, $p(t)$ corresponds to $(t, ce_2, 0,0)$ under the homeomorphism for $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ and hence converges in $D^{\star}_{{\rm{SL}}(2)}(\Phi)$, but the limit depends on the choice of $c$.
\end{sbpara}

\begin{sbpara}
The open set $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}(\Phi)$ of $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ is the part consisting of elements corresponding to $(t,\delta,x,u)$ such that $\delta\in V\subset \bar V$. The map $D^{\star,{{\rm{mild}}}}_{{\rm{SL}}(2)}(\Phi)\to D_{{\rm{SL}}(2)}(\Phi)$ corresponds to $(t, \delta, x, u)\mapsto (t, t^3\delta, x, u)$. It does not extend to a continuous map $D^{\star}_{{\rm{SL}}(2)}\to D_{{\rm{SL}}(2)}$. In fact, fixing $v\in V\smallsetminus \{0\}$, let $p(t)$ for $t>0$ be the point of $D$ corresponding to $(t, t^{-3}v, 0,0)$ via the homeomorphism for $D^{\star}_{{\rm{SL}}(2)}(\Phi)$. Then when $t\to 0$, $p(t)$ converges to the point of $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ corresponding to $(0, 0\circ v, 0,0)$ (\ref{2.3ex} (4)). But $p(t)$ converges to the point of $D_{{\rm{SL}}(2)}(\Phi)$ corresponding to $(0,v, 0,0)$, which depends on the choice of $v$.
\end{sbpara}

\begin{sbpara}\label{ex3}
Let $a\in {\bold{Q}}_{\geq 0}$ and define $N_a\in \fg_{\bold{Q}}$ by $N_a(e_3)=ae_2$, $N_a(e_2)=e_1$, and $N_a(e_1)=0$. For $b\in {\bold{R}}$, let $F_b\in \Dc$ be the decreasing filtration defined as follows: $F_b^1=0$, $F_b^0$ is generated by $e_3+ibe_1$, $F_b^{-1}$ is generated by $F_b^0$ and $e_2$, and $F_b^{-2}$ is the total space. Then $(N_a, F_b)$ generates a nilpotent orbit. Let $\sigma_a={\bold{R}}_{\geq 0}N_a$. Then $(\sigma_a, \exp(i\sigma_{a,{\bold{R}}})F_b)\in D^{\sharp}_{\sigma_a}$ is the limit of $\exp(iyN_a)F_b$ for $y\to \infty$. This $(\sigma_a,\exp(i\sigma_{a,{\bold{R}}})F_b)$ belongs to $D^{\sharp,{{\rm{mild}}}}_{\sigma_a}$ (\ref{mild}) if and only if $a=0$.

We consider the image of $\exp(iyN_a)F_b\in D$ in ${\bold{R}}_{>0} \times V \times {\bold{R}} \times V$ under the isomorphism in \ref{IIIdia}. Let $t= 1/\sqrt{y}$. In the standard isomorphism for $D$, the image is $(t, at^{-2}e_2+be_1, 0, -(b/2)t^2e_2)$. (The last component is computed by using the relation of $\delta$ and $\zeta$ (\ref{II,1.2.3}).) In the homeomorphism for $D^{\diamond}_{{\rm{SL}}(2)}$, the image is $(t, at^{-3}e_2+be_1, 0, -(b/2)t^2e_2)$. In the homeomorphism for $D^{\star}_{{\rm{SL}}(2)}$, the image is $(t, at^{-3}e_2+bte_1, 0, -(b/2)t^2e_2)$. In the homeomorphism for $D_{{\rm{SL}}(2)}$, the image is $(t, ae_2+ bt^4e_1, 0, -(b/2)t^2e_2)$.
\end{sbpara}

By taking the limit for $t\to0$, we have:

\begin{sblem}\label{IIIconv}
$(1)$ If $a\neq 0$, the image of $(\sigma_a, \exp(i\sigma_{a,{\bold{R}}})F_b)\in D^{\sharp}_{\sigma_a}$ in $D^{\star}_{{\rm{SL}}(2)}(\Phi)$ (resp.\ $D_{{\rm{SL}}(2)}(\Phi)$, resp.\ $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, resp.\ $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$) has the coordinate $(0,\infty e_2, 0, 0)$ (resp.\ $(0, ae_2, 0, 0)$, resp.\ $(p(3, ae_2), 0,0)$, resp.\ $(0, ae_2, 0,0)$).
$(2)$ If $a=0$, the image of $(\sigma_0, \exp(i\sigma_{0,{\bold{R}}})F_b)\in D^{\sharp,{{\rm{mild}}}}_{\sigma_0}$ in $D^{\diamond}_{{\rm{SL}}(2)}(\Phi)$ (resp.\ $D^{\star}_{{\rm{SL}}(2)}(\Phi)$, resp.\ $D_{{\rm{SL}}(2)}(\Phi)$, resp.\ $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, resp.\ $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$) has the coordinate $(0, be_1, 0, 0)$ (resp.\ $(0, 0, 0, 0)$, resp.\ $(0, 0, 0, 0)$, resp.\ $(0,0, 0,0)$ if $b\neq 0$ and $(0,0,0,0)$ if $b=0$, resp.\ $(0,0,0,0)$).
\end{sblem}

\begin{sbpara}\label{info1}
By Lemma \ref{IIIconv}, we have the following. Consider the image $p$ of $(\sigma_a,\exp(i\sigma_{a,{\bold{R}}})F_b)\in D_{\sigma_a}^{\sharp}$ in one of $D_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$. In the case $a=0$, consider also the image in $D^{\diamond}_{{\rm{SL}}(2)}$.

(1) $p$ remembers $a$ in the cases of $D_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$, but $p$ does not remember $a$ in the other cases. In the case $a\neq 0$, $p$ does not remember $b$ in any of these cases.

(2) Assume $a=0$. Then $p$ remembers $b$ in the case of $D^{\diamond}_{{\rm{SL}}(2)}$, but $p$ does not remember $b$ in any of the other cases.
\end{sbpara}

$$\begin{matrix} D^{\sharp,{{\rm{mild}}}}_{\Xi} &\to & D^{\diamond}_{{\rm{SL}}(2)}\\ \downarrow &&\downarrow\\ D^{\sharp}_{\Xi} &\to& D_{{\rm{SL}}(2)} \end{matrix}$$

\subsection{Degeneration and regulator maps}\label{ss:reg}

\begin{sbpara}\label{regdelta}
Let $X$ be a proper smooth variety over ${\bold{C}}$. Let $n\geq 1$, $r\geq 0$. Then we have the ($r$-th) regulator map (\cite{Be1})
$$\text{reg}_X: K_n(X) \to \bigoplus_{p,q} \; (H^m(X)(r)_{{\bold{C}}, p,q})^{-},$$
where $m=2r-n-1$ and $(p,q)$ ranges over all elements of ${\bold{Z}}^2$ such that $p+q=m-2r$ and $p<0,q<0$, $H^m(X)(r)_{{\bold{C}},p,q}$ is the $(p,q)$-Hodge component of $H^m(X, {\bold{C}})$ with respect to the Hodge structure $H^m(X)(r)$, and $(-)^-$ denotes the minus part for the complex conjugation which fixes the image of $H^m(X, {\bold{Z}}(r))= H^m(X, {\bold{Z}})\otimes (2\pi i)^r{\bold{Z}}$.

This regulator map is understood as $\delta$ (Section 1.2) of a mixed Hodge structure as follows. An element $Z\in K_n(X)$ determines a mixed Hodge structure $H_Z$ with an exact sequence $0\to H^m(X)(r) \to H_Z \to {\bold{Z}}\to 0$. We have
$$\text{reg}_X(Z)= \delta_W(H_Z),$$
where $W$ is the weight filtration of $H_Z$.
\end{sbpara}

\begin{sbpara}
Let $X\to S$, $0\in S$, $n, r, m$ be as in 0.3, and let $Z\in K_n(X\smallsetminus X_0)$. For $t\in S\smallsetminus \{0\}$, let $Z(t)\in K_n(X_t)$ be the pull back of $Z$. Then the regulator $\text{reg}(Z(t))$ is understood as $\delta_W(H_Z(t))$, where $H_Z$ denotes the variation of mixed Hodge structure on $S\smallsetminus \{0\}$ defined by $Z$ which has an exact sequence $0\to H^m(X/S)(r) \to H_Z \to {\bold{Z}} \to 0$ with $H^m(X/S)$ as in 0.3 and whose fiber $H_Z(t)$ at $t$ is the mixed Hodge structure in \ref{regdelta} associated to $Z(t)$. This $H_Z$ is admissible along $X_0$ and extends uniquely to a log mixed Hodge structure on $S$, which we denote by the same letter $H_Z$. Hence the behavior of $t\mapsto \text{reg}(Z(t))$ in the degeneration is explained by the theory of degeneration of mixed Hodge structure as in this paper. For the details of what follows, see \cite{BK}.
\end{sbpara}

\begin{sbprop}\label{KXmild}
Assume $Z$ comes from $K_n(X)$. Then the log mixed Hodge structure $H_Z$ on $S$ is mild.
\end{sbprop}

\begin{pf}
The Clemens--Schmid sequence $H^m(X_0,{\bold{Q}})\to H^m(X/S)_{{\bold{Q}},t}\overset{N}\to H^m(X/S)_{{\bold{Q}},t} \to H_{2d-m}(X_0,{\bold{Q}})$ $(t\in S \smallsetminus\{0\}$ is near to $0$) induces an injection $H^m(X/S)_{{\bold{Q}},t}/NH^m(X/S)_{{\bold{Q}},t} \to H_{2d-m}(X_0,{\bold{Q}})$. Here $N$ is the monodromy logarithm of $H_{Z,{\bold{Q}}}$ at $0\in S$. We have a commutative diagram
$$\begin{matrix} K_n(X\smallsetminus X_0) &\overset{\partial}\to & K_{n-1}'(X_0) \\ \downarrow &&\downarrow \\ H^m(X/S)_{{\bold{Q}},t}/NH^m(X/S)_{{\bold{Q}},t} &\overset{\subset}\to& H_{2d-m}(X_0,{\bold{Q}}).\end{matrix}$$
Here the left vertical arrow sends $Z\in K_n(X\smallsetminus X_0)$ to $Ne$ where $e$ is the lifting of $1\in {\bold{Q}}$ to $H_{Z,{\bold{Q}},t}$ under the exact sequence $0\to H^m(X/S)_{{\bold{Q}},t}\to H_{Z,{\bold{Q}},t}\to {\bold{Q}}\to 0$. $K'_{n-1}$ denotes the $K$-group of coherent sheaves. The right vertical arrow is the topological Chern class map.

By the localization theory of $K$-theory, we have an exact sequence $K_n(X) \to K_n(X\smallsetminus X_0) \overset{\partial}\to K'_{n-1}(X_0)$. Assume $Z\in K_n(X\smallsetminus X_0)$ comes from $K_n(X)$. Then $\partial(Z)=0$ and hence the above diagram shows that the image of $Z$ in $H^m(X/S)_{{\bold{Q}},t}/NH^m(X/S)_{{\bold{Q}},t}$ is zero. This proves that $(W, N)$ splits.
\end{pf}

By \ref{KXmild} and by \ref{diathm}, we have

\begin{sbthm}\label{thm2}
If $Z\in K_n(X\smallsetminus X_0)$ comes from $K_n(X)$, the regulator $\text{reg}_{X_t}Z(t)$ $(t\in S\smallsetminus \{0\})$ converges when $t\to 0$.
\end{sbthm}

\begin{sbrem}
In \cite{BK}, this result \ref{thm2} will be generalized to the situation where $S$ need not be of dimension $\leq 1$. This generalization will be reduced to \ref{thm2} by using \ref{Ccut}.
\end{sbrem}

\begin{sbpara}\label{7.2.2}
Let $X\to S$ and $0\in S$ be as in Section 0.3. Take $H_0= {\bold{Z}} \oplus H^m(X/S)(r)_{{\bold{Z}},t}$. The extended period maps in \S6.3 give a commutative diagram
$$\begin{matrix} S^{\log} \times K_n(X) &\to& \Gamma\backslash D^{\diamond}_{{\rm{SL}}(2)}\\ \downarrow &&\downarrow\\ S^{\log}\times K_n(X\smallsetminus X_0) &\to& \Gamma\backslash D_{{\rm{SL}}(2)} \end{matrix}$$
Here $\Gamma$ is the group of all elements $\gamma$ of ${\rm {Aut}}_{\bold{Z}}(H_0)$ satisfying the following conditions.

(i) $\gamma$ preserves $H^m(X/S)(r)_{{\bold{Z}},t}$.

(ii) $\gamma e \equiv e \bmod H^m(X/S)(r)_{{\bold{Z}},t}$ where $e$ denotes $(1,0)\in H_0$.

(iii) The action of $\gamma$ on $H^m(X/S)_{{\bold{Z}},t}$ is contained in the monodromy group of $H^m(X/S)_{{\bold{Z}}}$.
\end{sbpara}

\begin{sbpara}\label{ell1}
We give an explicit example. Assume that $X\to S$ is a family of elliptic curves which degenerates at $0\in S$, and assume $n=r=2$. Then the period domain and the extended period domains which appear here are those of Example III (Section \ref{III}). Let the notation be as in \ref{III}. We discuss an explicit example of $Z\in K_2(X\smallsetminus X_0)$.
Let $\alpha_j$ and $\beta_k$ be a finite number of torsion sections of $X\smallsetminus X_0$ over $S\smallsetminus \{0\}$, let $m_j, n_k\in {\bold{Z}}$ be such that $\sum_j m_j=\sum_k n_k=0$, and consider the divisors $\alpha=\sum_j m_j(\alpha_j)$, $\beta=\sum_k n_k (\beta_k)$ on $X\smallsetminus X_0$ of degree $0$. Then we have an element $Z_{\alpha,\beta}\in K_2(X\smallsetminus X_0)$ (see \cite{Bl1}, \cite{Sct}). It is essentially the Steinberg symbol $\{f_{\alpha}, f_{\beta}\}$, where $f_{\alpha}$ (resp.\ $f_{\beta}$) is an element of ${\bold{Q}}\otimes {\bold{C}}(X)^\times$ whose divisor is $\alpha$ (resp.\ $\beta$).

When $t$ tends to $0$ in $S$, there are $a,b\in W_{-3}H_{0,{\bold{R}}}$ such that we have
$$\text{reg}_{X_t}(Z_{\alpha,\beta}(t))= ay+b +O(y^{-1}),$$
where $y$ is defined by $q(t)=e^{2\pi i(x+iy)}$ $(x,y\in {\bold{R}})$ with $q(t)$ the $q$-invariant of the elliptic curve $X_t$. We have
$$a\equiv \sum_{j,k} m_jn_k B_3(\{r(\alpha_j)-r(\beta_k)\})e_2 \bmod {\bold{R}} e_1,$$
where $B_3$ is the Bernoulli polynomial of degree $3$; $r(\mu)$ for a torsion section $\mu$ is the element of ${\bold{Q}}/{\bold{Z}}$ such that, as a section of the Tate curve ${\bold{G}}_m/q^{{\bold{Z}}}$, $\mu$ is expressed as $s q^{r(\mu)}\bmod q^{{\bold{Z}}}$ with $s$ a root of $1$; and $\{-\}: {\bold{Q}}/{\bold{Z}}\to [0, 1)\subset {\bold{Q}}$ is the lifting.

Assume now that $r(\alpha_j)=r(\beta_k)=0$ for any $j,k$, that is, these torsion sections $\alpha_j$ and $\beta_k$ are roots of $1$ in the Tate curve ${\bold{G}}_m/q^{{\bold{Z}}}$. Then $Z_{\alpha, \beta}$ comes from $K_2(X)$, $a=0$, and the degeneration is mild. In this case,
$$b= \sum_{j,k} m_jn_k D(\alpha_j/\beta_k)e_1,$$
where we regard $\alpha_j$ and $\beta_k$ as roots of $1$ and $D$ is the real analytic modified dilogarithm function of Bloch--Wigner (\cite{Bl1}). These things will be explained in \cite{BK} by using results in \cite{Sct}, \cite{Bl1}, \cite{GL} and by using the results of this paper.
\end{sbpara}

\subsection{Example IV}\label{IV}

This is Example IV of Part II. As is explained in Part II, 4.4 and also in Section \ref{ss:reBK} below, this example is related to the local height pairing of points of a degenerating elliptic curve.

\begin{sbpara}
Let $H_0={\bold{Z}}^4$ with basis $e_1,e_2,e_3, e_4$. The weight filtration is given by
$$W_{-3}=0\subset W_{-2}={\bold{R}} e_1\subset W_{-1}= \bigoplus_{j=1}^3 {\bold{R}} e_j\subset W_0=H_{0,{\bold{R}}}.$$
The intersection form on $\gr^W_{-1}$ is the anti-symmetric form characterized by $\langle e_3, e_2\rangle=1$.
\end{sbpara}

\begin{sbpara}
We have $D(\gr^W)\cong \frak h$, the upper half plane, and $D_{{\rm{SL}}(2)}(\gr^W)=D_{{\rm {BS}}}(\gr^W)={\frak h}_{{\rm {BS}}}$.
\end{sbpara}

\begin{sbpara}
We have a homeomorphism
$$D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}\overset{\cong}\to D_{{\rm {BS}},{\mathrm{val}}}$$
and this induces a homeomorphism
$$D^{\star}_{{\rm{SL}}(2)}\cong D_{{\rm {BS}}}$$
of quotient spaces.

Let $W'$ be the increasing filtration on $\gr^W$ given by
$$ W'_{-3}=0\subset W'_{-2}=\gr^{W}_{-2}+{\bold{R}} (e_2\bmod W_{-2}) =W'_{-1} \subset W'_0=\gr^W, $$
and let $\Phi=\{W'\}$. Let $P$ be the parabolic subgroup of $G_{\bold{R}}$ consisting of elements which preserve $W'$.
Then, $D_{{\rm {BS}}}(P)=D^{\star}_{{\rm{SL}}(2)}(\Phi)$ and it is the inverse image of the open set $\{x+iy\;|\;x\in {\bold{R}}, y\in (0, \infty]\}$ of ${\frak h}_{{\rm {BS}}}$ under the projection $D_{{\rm {BS}}}=D^{\star}_{{\rm{SL}}(2)}\to D_{{\rm {BS}}}(\mathrm{gr}^W)=D_{{\rm{SL}}(2)}(\mathrm{gr}^W)={\frak h}_{{\rm {BS}}}$. We have $$D^I_{{\rm{SL}}(2)}= D^{II}_{{\rm{SL}}(2)}$$ (Part II, 4.4). We will denote both of them by $D_{{\rm{SL}}(2)}$. We have a canonical homeomorphism $$D^{\diamond}_{{\rm{SL}}(2)}\overset{\cong}\to D^{\star,{\rm{mild}}}_{{\rm{SL}}(2)}.$$ \end{sbpara} \begin{sbpara} We have $${\rm{spl}}(W)\cong {\bold{R}}^5, \quad {\cal {L}}\cong {\bold{R}},$$ where in the first isomorphism $(s_{3,4}, s_{2,4}, s_{1,4}, s_{1,3}, s_{1,2})\in {\bold{R}}^5$ corresponds to the splitting of $W$ which is given by $e_4+\sum_{j=1}^3 s_{j,4}e_j$ and $e_k+s_{1,k}e_1$ $(k=2,3)$, and the second isomorphism is given by $\delta\mapsto r$ $(\delta\in {\cal {L}}$, $r\in {\bold{R}})$, $\delta e_4=re_1$. We have ${\cal {L}}(F)={\cal {L}}$ for any $F\in D(\mathrm{gr}^W)$. \end{sbpara} \begin{sbpara}\label{IVdia} We have homeomorphisms $$D\cong {\frak h}\times {\cal {L}} \times {\rm{spl}}(W) \cong {\bold{R}}_{>0}\times {\bold{R}} \times {\bold{R}}^6,$$ where the left isomorphism is $F\mapsto (F(\mathrm{gr}^W), \delta_W(F), {\rm{spl}}_W(F))$, and the second isomorphism sends $(x+iy, \delta, s)$ to $(1/\sqrt{y}, \delta, x, s)$, where $x, y\in {\bold{R}}$, $y>0$. We call the composite homeomorphism $D\cong {\bold{R}}_{>0}\times {\bold{R}} \times {\bold{R}}^6$ the {\it standard isomorphism for $D$}. We have a commutative diagram of homeomorphisms $$\begin{matrix} D^{\star}_{{\rm{SL}}(2)}(\Phi) && \cong && {\bold{R}}_{\geq 0} \times [-\infty,\infty] \times {\bold{R}}^6\\ \uparrow &&&& \uparrow\;\;\;\;\; \\ D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi) && \cong && ({\bold{R}}_{\geq 0}\times [-\infty, \infty])_{{\mathrm{val}}} \times {\bold{R}}^6\\ \downarrow &&&& \downarrow (1) \\ D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi) && \cong && ({\bold{R}}_{\geq 0} \times [-\infty, \infty])_{{\mathrm{val}}} \times {\bold{R}}^6\\ \downarrow &&&& \downarrow\;\;\;\;\; \\ D_{{\rm{SL}}(2)}(\Phi) && \cong && {\bold{R}}_{\geq 0}\times [-\infty,\infty] \times {\bold{R}}^6, \end{matrix}$$ where the upper two homeomorphisms are compatible with the standard isomorphism for $D$, but via the lower two homeomorphisms, a point of $D$ corresponding to $(t, \delta, u)\in {\bold{R}}_{>0}\times {\bold{R}} \times {\bold{R}}^6$ under the standard isomorphism for $D$ is sent to $(t, t^2\delta, u)$.
Concerning the vertical arrows on the right-hand side, the arrows except (1) are the canonical projections, and the arrow (1) is as follows: the map (1) sends $(t, \delta, u)$ $((t,\delta)\in ({\bold{R}}_{\geq 0}\times {\bold{R}})\cup ({\bold{R}}_{>0}\times [-\infty,\infty]))$ to $(t, t^2\delta, u)$, $(p(c,\pm \infty), u)$ $(c\in {\bold{R}}_{>0}\smallsetminus {\bold{Q}}_{>0})$ to $(p(c-2,\pm \infty), u)$ if $c>2$ and to $(0, 0, u)$ if $c<2$, $(p(c+, \pm \infty),u)$ $(c\in {\bold{Q}}_{\geq 0})$ to $(p((c-2)+,\pm \infty), u)$ if $c\geq 2$ and to $(0,0,u)$ if $c<2$, $(p(c-, \pm\infty), u)$ $(c\in {\bold{Q}}_{>0})$ to $(p((c-2)-, \pm \infty), u)$ if $c>2$ and to $(0,0, u)$ if $c\leq 2$, and $(p(c, \delta),u)$ $(c\in {\bold{Q}}_{>0}$, $\delta \in {\bold{R}}\smallsetminus \{0\})$ to $(p(c-2, \delta),u)$ if $c>2$, to $(0, \delta, u)$ if $c=2$, and to $(0,0,u)$ if $c<2$. Here the notation $p(c, \delta)$ etc.\ is understood as in \ref{IIIdia} by replacing $\bar V\supset V$ by $[-\infty, \infty]\supset {\bold{R}}$. \end{sbpara} \begin{sbpara} Let $a\in {\bold{Q}}_{\geq 0}$ and define $N_a\in \fg_{\bold{Q}}$ by $N_a(e_4)=ae_1$, $N_a(e_3)=e_2$, and $N_a(e_1)=N_a(e_2)=0$. Let ${\sigma}_a$ be the cone generated by $N_a$. For $b\in {\bold{R}}$, let $F_b\in \Dc$ be the decreasing filtration defined as follows: $F_b^1=0$, $F_b^0$ is generated by $e_3$ and $e_4+ibe_1$, and $F_b^{-1}$ is the total space. In $D^{\sharp}_{{\sigma}_a}$, we have the limit $({\sigma}_a, \exp(i{\sigma}_{a,{\bold{R}}})F_b)\in D^{\sharp}_{{\sigma}_a}$ of $\exp(iyN_a)F_b$ for $y\to \infty$. This $({\sigma}_a, \exp(i{\sigma}_{a,{\bold{R}}})F_b)$ belongs to $D^{\sharp,{\rm{mild}}}_{{\sigma}_a}$ if and only if $a=0$. For $y\in {\bold{R}}_{>0}$, via the first homeomorphism in the diagram in \ref{IVdia}, $\exp(iyN_a)F_b\in D$ is sent to $(1/\sqrt{y}, ay+b, {\bold 0})$, and hence the limit $({\sigma}_a, \exp(i{\sigma}_{a,{\bold{R}}})F_b)\in D_{{\sigma}_a}^{\sharp}$ is sent to $(0, \infty, {\bold 0})$ in the case $a\neq 0$, and to $(0, b, {\bold 0})$ in the case $a=0$. Here ${\bold 0}$ denotes $(0,\dots,0)\in {\bold{R}}^6$. On the other hand, for $y\in {\bold{R}}_{>0}$, via the last homeomorphism in the diagram in \ref{IVdia}, $\exp(iyN_a)F_b\in D$ is sent to $(1/\sqrt{y}, a+y^{-1}b, {\bold 0})$, and hence the limit $({\sigma}_a, \exp(i{\sigma}_{a,{\bold{R}}})F_b)\in D_{{\sigma}_a}^{\sharp}$ is sent to $(0, a, {\bold 0})$. By taking the limit for $y\to \infty$, we have: \end{sbpara} \begin{sblem}\label{IVconv} The limit of $\exp(iyN_a)F_b$ for $y\to \infty$ exists also in $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, and in $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$. In the case of $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, the $({\bold{R}}_{\geq 0}\times [-\infty, \infty])_{{\mathrm{val}}}$-component of the limit is $p(2, a)$ if $a\neq 0$ and is $(0, b)$ if $a=0$. In the case of $D_{{\rm{SL}}(2),{\mathrm{val}}}(\Phi)$, the $({\bold{R}}_{\geq 0}\times [-\infty, \infty])_{{\mathrm{val}}}$-component of the limit is $p(0, a)$. \end{sblem} \begin{sbpara}\label{ex1} By Lemma \ref{IVconv}, we have the following. Let $p$ be the image of $({\sigma}_a, \exp(i{\sigma}_{a,{\bold{R}}})F_b)\in D_{{\sigma}_a}^{\sharp}$ in one of $D_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}$, $D^{\star}_{{\rm{SL}}(2)}$, $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$.
We have: (1) $p$ remembers $a$ in the cases of $D_{{\rm{SL}}(2)}$, $D_{{\rm{SL}}(2),{\mathrm{val}}}$ and $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$, whereas $p$ does not remember $a$ in the case of $D^{\star}_{{\rm{SL}}(2)}$. In the case $a\neq 0$, $p$ does not remember $b$ in any of these cases. (2) Assume $a=0$. Then $p$ remembers $b$ in the cases of $D^{\star}_{{\rm{SL}}(2)}$ and $D^{\star}_{{\rm{SL}}(2),{\mathrm{val}}}$, but $p$ does not remember $b$ in the cases of $D_{{\rm{SL}}(2)}$ and $D_{{\rm{SL}}(2),{\mathrm{val}}}$. \end{sbpara} \begin{sbpara}\label{c.ex} The following is mentioned in \ref{r1rem}. Though ${\sigma}_a$ is of rank one, the image $p$ of $D^{\sharp}_{{\sigma}_a}$ in $D^{\star}_{{\rm{SL}}(2)}$ is not contained in $D^{\star}_{{\rm{SL}}(2),\leq 1}$ ($=$ the part of $D^{\star}_{{\rm{SL}}(2)}$ at which the log structure $M$ satisfies $\text{rank}\,(M^{\mathrm{gp}}/\cO^\times)_p \leq 1$) if $a\neq 0$. Indeed, in the case $a\neq 0$, the image of ${\rm class}(N_a, F_b)\in D^{\sharp}_{{\sigma}_a}$ in $D^{\star}_{{\rm{SL}}(2)}$ has the coordinate $(0, \infty, {\bold 0})$, which shows that $(M/\cO^\times)_p \cong {\bold{N}}^2$. \end{sbpara} \subsection{Degeneration and height pairings}\label{ss:reBK} \begin{sbpara}\label{hpdelta} Let $X$ be a proper smooth algebraic variety over ${\bold{C}}$ of dimension $d$, and let $Y$ and $Z$ be algebraic cycles on $X$ of codimension $r$ and $s$, respectively. We assume that $r+s=d+1$, that their supports are disjoint ($|Y|\cap |Z|=\emptyset$), and that both $Y$ and $Z$ are homologically equivalent to $0$. Then we have a height pairing $\langle Y,Z \rangle_X\in {\bold{R}}$ (the local version of the height pairing for a number field, at an Archimedean place). See \cite{Be2}, \cite{Bl2}. This height pairing is understood as $\delta$ (Section 1.2) of a mixed Hodge structure. We have $$\langle Y, Z\rangle_X = \delta_W(H_{Y,Z}),$$ where $H_{Y,Z}$ is the mixed Hodge structure, constructed in \cite{Be2}, \cite{Bl2}, whose weight filtration $W$ has the following properties: $W_0H_{Y,Z}=H_{Y,Z}$, $W_{-3}=0$, $\mathrm{gr}^W_0={\bold{Z}}$, $\mathrm{gr}^W_{-2}={\bold{Z}}(1)$, $\mathrm{gr}^W_{-1}= H^{2r-1}(X)(r)$. The exact sequence $0\to H^{2r-1}(X)(r) \to W_0/W_{-2}\to {\bold{Z}}\to 0$ is given by the class of $Y$, and the exact sequence $0\to {\bold{Z}}(1) \to W_{-1}\to H^{2r-1}(X)(r)\to 0$ is given by the class of $Z$. \end{sbpara} \begin{sbpara} Let $X\to S$ and $0\in S$ be as in Section \ref{relBK}. Let $Y$ and $Z$ be algebraic cycles on $X$ of codimension $r$ and $s$, respectively, such that $r+s=d+1$, where $d$ is the relative dimension of $X\to S$, such that $|Y|\cap |Z|=\emptyset$, and such that both $Y(t)$ and $Z(t)$ are homologically equivalent to $0$ for any $t\in S\smallsetminus \{0\}$. \end{sbpara} \begin{sbpara} Since the height pairing $\langle Y(t), Z(t)\rangle$ is understood as $\delta$ of a mixed Hodge structure (\ref{hpdelta}), its behavior in the degeneration is explained by the theory of degeneration of mixed Hodge structures as in this paper. When $t\to 0$ with $x$ fixed, there are $a,b\in {\bold{R}}$ such that we have $$\langle Y(t), Z(t)\rangle_{X_t} = ay+b+O(y^{-1}),$$ where, taking a local coordinate $q$ on $S$ at $0$ such that $q(0)=0$, we define $y$ by $q=e^{2\pi i (x+iy)}$ $(x,y\in {\bold{R}},\, y>0)$.
This follows from the general theory of degeneration of mixed Hodge structures as studied in Section \ref{IV}. Here, $a=0$ if and only if the degeneration of $H_{Y,Z}$ at $0\in S$ is mild. In \cite{BK}, it is shown that $a$ is the local geometric intersection number of $Y$ and $Z$ over $0\in S$. \end{sbpara} \begin{sbpara}\label{ell1b} We give an explicit example. Assume that $X\to S$ is a family of degenerating elliptic curves, and assume $r=s=1$. Let $Y=\sum_j m_j(\alpha_j)$, $Z=\sum_k n_k(\beta_k)$, where $\alpha_j$ and $\beta_k$ are closures in $X$ of torsion sections of $X\smallsetminus X_0 \to S\smallsetminus \{0\}$, $m_j, n_k\in {\bold{Z}}$, $\sum_j m_j=\sum_k n_k=0$. We assume that the divisors $\alpha_j$ and $\beta_k$ of $X$ do not intersect for any pair $(j,k)$. This is an example discussed at the end of Part II, 4.4. The extended period domains which appear here are those of Example IV (Section \ref{IV}). We have $$a= \sum_{j,k} m_jn_k B_2(\{r(\alpha_j)-r(\beta_k)\}),$$ where $B_2$ is the Bernoulli polynomial of degree $2$. (The notation is as in \ref{ell1}.) This was explained in Part II, Proposition 4.4.8. If $r(\alpha_j)=r(\beta_k)=0$ for any $j,k$, then $a=0$ and the degeneration is mild. In this case, $$b= \sum_{j,k} m_jn_k l(\alpha_j/\beta_k),$$ where we regard $\alpha_j$ and $\beta_k$ as roots of $1$ and $l(t)=\log(|1-t|)$. These things are surprisingly similar to \ref{ell1}. They will be explained further in \cite{BK} by using the results of this paper. \end{sbpara} \setcounter{section}{0} \def\thesection{\Alph{section}} \section{Corrections to \cite{KU}, supplements to Part III}\label{s:co} Corrections of errors in the book \cite{KU} have been posted on the home page of Princeton University Press. In this Section A, we update them. In Section A.1 and Section A.2, we describe important corrections to \cite{KU}. In Section A.3, we give supplements to Part III. The other errors described on the above home page are minor ones. Section \ref{ss:8.1} and Section A.3 are important for \ref{ss:Lthm} in the text. \subsection{Change on \cite{KU} \S6.4}\label{ss:8.1} \begin{sbpara} The following errors (1) and (2) are in \cite{KU} Sections 6.4 and 7.1, respectively. (1) Proposition 3.1.6 is used in 6.4.12 (line 2 from the end), but Proposition 3.1.6 is not strong enough for the arguments in 6.4.12. (2) We cannot have the second convergence in 7.1.2 (3). \end{sbpara} \begin{sbpara} We make the following changes 1, 2, 3 to the book \cite{KU}. Change 1 solves the above problem (1). Changes 2 and 3 solve the above problem (2). Change 1. We replace \cite{KU} 7.1.2 by \ref{8.1.a}--\ref{8.1.b} below. Change 2. We move \cite{KU} Section 7.1, revised as in the above Change 1, to the place just before \cite{KU} Section 6.4. That is, we exchange the order of the Sections 6.4 and 7.1. Change 3. We make the change on \cite{KU} 6.4.12 explained in \ref{8.1.c}. \end{sbpara} \begin{sbpara}\label{8.1.a} We will prove Theorem A (i), that $E_{\sigma}$ is open in $\tilde E_{{\sigma}}$ for the strong topology. Since $\tilde E_{{\sigma},{\mathrm{val}}}\to\tilde E_{\sigma}$ is proper surjective and $E_{{\sigma},{\mathrm{val}}}{\subset}\tilde E_{{\sigma},{\mathrm{val}}}$ is the inverse image of $E_{\sigma}{\subset}\tilde E_{\sigma}$, it is sufficient to prove that $E_{{\sigma},{\mathrm{val}}}$ is open in $\tilde E_{{\sigma},{\mathrm{val}}}$.
Assume $x_{\lambda}=(q_{\lambda}, F'_{\lambda})\in \tilde E_{{\sigma},{\mathrm{val}}}$ converges in $\tilde E_{{\sigma},{\mathrm{val}}}$ to $x=(q, F')\in E_{{\sigma},{\mathrm{val}}}$. We prove that $x_{\lambda}\in E_{{\sigma},{\mathrm{val}}}$ for any sufficiently large ${\lambda}$. \end{sbpara} \begin{sbpara} We fix notation. Let $|\;\;|:{\operatorname{toric}}_{{\sigma},{\mathrm{val}}}\to\operatorname{|toric|}_{{\sigma},{\mathrm{val}}}$ be the canonical projection induced by ${\bold{C}}\to{\bold{R}}$, $z\mapsto|z|$. Let $(A,V,Z)\in D^\sharp_{{\sigma},{\mathrm{val}}}$ be the image of $(|q|, F')\in E^{\sharp}_{{\sigma},{\mathrm{val}}}$ under $E^{\sharp}_{{\sigma},{\mathrm{val}}}\to D^{\sharp}_{{\sigma},{\mathrm{val}}}$ (5.3.7), and take an excellent basis $(N_s)_{s\in S}$ for $(A,V,Z)$ such that $N_s\in{\sigma}(q)$ for any $s$ (6.3.9). Let $S_j$ $(1\le j\le n)$ be as in 6.3.3. Take an ${\bold{R}}$-subspace $B$ of ${\sigma}_{\bold{R}}$ such that ${\sigma}_{\bold{R}}=A_{\bold{R}}\oplus B$. We have a unique injective open continuous map $$ ({\bold{R}}_{\geq 0}^S)_{\mathrm{val}}\times B\to\operatorname{|toric|}_{{\sigma},{\mathrm{val}}} $$ which sends $((e^{-2\pi y_s})_{s\in S},b)$ $(y_s\in{\bold{R}}, b\in B)$ to ${\bold e}((\sum_{s\in S}iy_sN_s)+ib)$ (cf. 3.3.5). Let $U$ be the image of this map. Define the maps $t_s:U\to{\bold{R}}_{\geq 0}$ $(s\in S)$ and $b:U\to B$ by $$ (t,b)=((t_s)_{s\in S},b): U\simeq({\bold{R}}_{\geq 0}^S)_{\mathrm{val}}\times B\to{\bold{R}}_{\geq 0}^S\times B $$ (an abuse of notation $b$). We have $|q|\in U$ and $t(|q|):=(t_s(|q|))_{s\in S}=0$. Since $|q_{\lambda}|\to |q|$, we may assume $|q_{\lambda}|\in U$. Let $$F_{\lambda}=\exp(i b(q_{\lambda}))F'_{\lambda}, \quad F=\exp(ib(q))F'.$$ Then $((N_s)_{s\in S}, F)$ generates a nilpotent orbit. \end{sbpara} \begin{sbpara} We may assume that, for some $m$ $(1\leq m\leq n+1)$, $t_s(q_{\lambda})=0$ for any ${\lambda}$ and $s\in S_{\leq m-1}$ and $t_s(q_{\lambda})\neq 0$ for any ${\lambda}$ and $s\in S_{\geq m}$ (6.3.11), ($S_{\leq 0}$ and $S_{\geq n+1}$ are defined as the empty set). Take $c_j\in S_j$ for each $j$. For $s\in S_{\geq m}$, define $y_{{\lambda},s}\in {\bold{R}}$ by $t_s(q_{\lambda})=e^{-2\pi y_{{\lambda},s}}$. For each $j\in {\bold{Z}}$ such that $m\leq j \leq n$, let $N_j= \sum_{s\in S_j} a_sN_s$ where $a_s\in {\bold{R}}$ is the limit of $y_{{\lambda},s}/y_{{\lambda},c_j}$. Then $(N_1, \dots, N_n,F)$ generates a nilpotent orbit. Let $\tilde \rho: {\bold{G}}_{m,{\bold{R}}}^n\to G_{\bold{R}}$ be the homomorphism of the ${\rm{SL}}(2)$-orbit $(5.2.2)$ associated to $(N_1, \dots, N_n, F)$. For $m \leq j\leq n$, let $$ e_{{\lambda},\geq j}=\exp(\sum_{s\in S_{\geq j}} iy_{{\lambda},s}N_s)\in G_{\bold{C}}, $$ $$ \tau_{{\lambda},j}= \tilde \rho_j\left(\sqrt{y_{{\lambda},c_{j+1}}/y_{{\lambda},c_j}}\right)\in G_{\bold{R}}, \quad \tau_{{\lambda}, \geq j} = \prod_{k= j}^n \tau_{{\lambda},k}\in G_{\bold{R}} $$ $(y_{{\lambda},c_{n+1}}$ denotes $1$). Here $\tilde\rho_j$ is the restriction of $\tilde\rho$ to the $j$-th factor of ${\bold{G}}_{m,{\bold{R}}}^n$. Let $\hat F_{(j)}$ ($1\leq j\leq n$) be as in 6.1.3 associated to $(N_1,\dots, N_n, F)$.
By 3.1.6 applied to $S=E_{{\sigma}}\subset X=\Ec_{{\sigma}}$, we have \end{sbpara} \begin{sblem}\label{lem712} Let the situation and the notation be as above. Let $m\leq j \leq n$ and let $e\geq 0$. Then for any sufficiently large ${\lambda}$, there exist $F^*_{\lambda}\in \Dc$ satisfying the following (i) and (ii). $(i)$ $y_{{\lambda},s}^e d(F_{\lambda}, F^*_{\lambda})\to 0 \;(\forall s\in S_j)$. $(ii)$ $(N_s, F^*_{\lambda})$ {\it satisfies Griffiths transversality for any}\; $s\in S_{\leq j}.$ \noindent Furthermore, in the case $j=n$, there is $F^*_{\lambda}$ as above satisfying the following condition $(ii)^*$ which is stronger than the above condition (ii). $(ii)^*$ $((N_s)_{s\in S}, F^*_{\lambda})$ generates a nilpotent orbit. \end{sblem} \begin{sbprop}\label{prop712} Let the situation and the assumption be as above. Then the following assertions $(A_j)$ $(m-1\leq j\leq n)$, $(B_j)$ $(m\leq j\leq n)$, $(C_j)$ $(m\leq j\leq n)$ are true. $(A_j)$ (resp.\ $(B_j)$, resp.\ $(C_j)$) for $m\leq j\leq n$: Let $e\geq 1$. Then for any sufficiently large ${\lambda}$, there are $F^{(j)}_{\lambda}\in \Dc$ satisfying the following (1)--(3). $(1)$ $y^e_{{\lambda},j}d(F_{\lambda}, F^{(j)}_{\lambda}) \to 0$. $(2)$ $((N_s)_{s\in S_{\leq j}}, e_{{\lambda},\geq j+1}F^{(j)}_{\lambda})$ generates a nilpotent orbit. $(3)$ $\tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F^{(j)}_{\lambda}\to \exp(iN_{j+1})\hat F_{(j+1)}$. (resp.\ $(3)$ $\tau_{{\lambda},\geq j}^{-1}e_{{\lambda},\geq j+1} F^{(j)}_{\lambda}\to \hat F_{(j)}$. resp.\ $(3)$ $\tau_{{\lambda},\geq j}^{-1}e_{{\lambda},\geq j}F^{(j)}_{\lambda}\to \exp(iN_j)\hat F_{(j)}$.) \noindent Here $(A_n)$ is formulated by understanding $N_{n+1}=0$ and $\hat F_{(n+1)}=F$. $(A_{m-1})$: For any sufficiently large ${\lambda}$, we have the following (2) and (3). $(2)$ $((N_s)_{s\in S_{\leq m-1}}, e_{{\lambda},\geq m}F_{\lambda})$ generates a nilpotent orbit. $(3)$ $\tau_{{\lambda},\geq m}^{-1}e_{{\lambda},\geq m} F_{\lambda}\to \exp(iN_m)\hat F_{(m)}$. \end{sbprop} \begin{sbpara} We prove Proposition \ref{prop712} by using the downward induction of the form $(A_j)$ $\Rightarrow$ $(B_j)$ $\Rightarrow$ $(C_j)$ $\Rightarrow$ $(A_{j-1})$. (Here $m\leq j\leq n$.) $(B_j)$ $\Rightarrow$ $(C_j)$ is clear. $(A_j)$ $\Rightarrow$ $(B_j)$ is easy. $(A_n)$ follows from \ref{lem712}. We prove $(C_{j+1})$ $\Rightarrow$ $(A_j)$. By \ref{lem712}, if $m\leq j\leq n$ (resp. $j=m-1$), there are $F^{(j)}_{\lambda}\in \Dc$ satisfying (1) and (resp. $F^{(j)}_{\lambda}:=F_{\lambda}$) satisfies the condition $(2')$ $(N_s, F^{(j)}_{\lambda})$ satisfies Griffiths transversality for any $s\in S_{\le j}$. By $(C_{j+1})$, there are $F^{(j+1)}_{\lambda}\in \Dc$ satisfying $(1'')$ $y^e_{{\lambda},j+1}d(F_{\lambda}, F^{(j+1)}_{\lambda}) \to 0$. $(2'')$ $((N_s)_{s\in S_{\leq j+1}}, e_{{\lambda},\geq j+2}F^{(j+1)}_{\lambda})$ generates a nilpotent orbit.
$(3'')$ $\tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F^{(j+1)}_{\lambda}\to \exp(iN_{j+1})\hat F_{(j+1)}$. By $(1'')$ and $(3'')$, we have (4) $\tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F_{\lambda}\to \exp(iN_{j+1})\hat F_{(j+1)}$. By (4) and by (1), we have (5) $\tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F^{(j)}_{\lambda}\to \exp(iN_{j+1})\hat F_{(j+1)}$. Concerning the left-hand side of (5), by $(2')$, $(N_s, \tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F^{(j)}_{\lambda})$ satisfies Griffiths transversality for any $s\in S_{\leq j}$. On the other hand, concerning the right-hand side of (5), $((N_s)_{s\in S_{\leq j}}, \exp(iN_{j+1})\hat F_{(j+1)})$ generates a nilpotent orbit. Hence (5) and 7.1.1 show that $((N_s)_{s\in S_{\leq j}}, \tau_{{\lambda},\geq j+1}^{-1}e_{{\lambda},\geq j+1} F^{(j)}_{\lambda})$ generates a nilpotent orbit. This proves that $((N_s)_{s\in S_{\leq j}}, e_{{\lambda},\geq j+1} F^{(j)}_{\lambda})$ generates a nilpotent orbit for any sufficiently large ${\lambda}$. Hence for any sufficiently large ${\lambda}$, $(W^{(j)}, e_{{\lambda},\geq j+1} F^{(j)}_{\lambda})$ is a mixed Hodge structure, where $W^{(j)}$ denotes the relative monodromy filtration of $N_1+\dots+N_j$ with respect to $W$. By this and by (5), we have $(A_j)$. \end{sbpara} \begin{sbpara}\label{8.1.b} By $(A_{m-1})$ (2) of Proposition \ref{prop712}, $x_{\lambda}$ belongs to $E_{{\sigma},{\mathrm{val}}}$ if ${\lambda}$ is sufficiently large. This proves that $E_{{\sigma},{\mathrm{val}}}$ is open in $\tilde E_{{\sigma},{\mathrm{val}}}$, and hence proves that $E_{{\sigma}}$ is open in $\tilde E_{{\sigma}}$. \end{sbpara} \begin{sbpara}\label{8.1.c} In \cite{KU} 6.4.12, in the line 2 from the end, we replace the part \lq\lq by Proposition 3.1.6" \noindent with \lq\lq by the case $m=0$ and $x_{\lambda}\in E^{\sharp}_{{\sigma},{\mathrm{val}}}$ of Proposition \ref{prop712}" \noindent of the present paper. \end{sbpara} \begin{sbrem}\label{8.1.d} By these changes, we have the following simplification in \cite{KU}. We can assume $y^*_{{\lambda},t}=y_{{\lambda},t}$ in \cite{KU} Proposition 6.4.1. It is claimed in the original \cite{KU} \S6.4 that the proofs of \cite{KU} Theorem 5.4.3 (ii) and Theorem 5.4.4 are reduced to \cite{KU} Proposition 6.4.1, but actually they are reduced to the case $y^*_{{\lambda},t}=y_{{\lambda},t}$ of \cite{KU} Proposition 6.4.1. \end{sbrem} \subsection{Change on \cite{KU} \S7.2}\label{ss:8.3} \begin{sbpara} Professor J.-P. Serre kindly pointed out that our book should not use \cite{BS} \S10, because there are errors there (cf.\ 10.10 Remark in the version of \cite{BS} contained in \lq\lq Armand Borel, Oeuvres -- Collected Papers, Vol.\ III, Springer-Verlag, 1983"). We used a result of \cite{BS} 10.4 in the proof of Lemma 7.2.12 of our book. In order to correct our argument, we make the following changes. We put the following assumption in Theorem 7.2.2 (i): \lq\lq Assume that ${\sigma}$ is a nilpotent cone associated to a nilpotent orbit." We replace Lemma 7.2.12 and its proof in our book by the following proposition and its proof, which does not use \cite{BS} \S10.
\end{sbpara} \begin{sbprop} Let ${\sigma}$ be a nilpotent cone associated to a nilpotent orbit and let $W({\sigma})$ be the associated weight filtration. Then, by assigning the Borel-Serre splitting, we have a continuous map $E^\sharp_{{\sigma},{\mathrm{val}}}\to{\rm{spl}}(W({\sigma}))$. \end{sbprop} \begin{pf} The composite map $E^\sharp_{{\sigma},{\mathrm{val}}}\to D^\sharp_{{\sigma},{\mathrm{val}}}\overset{\psi}\to D_{{\rm{SL}}(2)}$ is continuous by the definition of the first map and by 6.4.1 for the CKS map $\psi$. Let $N_1,\dots,N_n$ be generators of the cone ${\sigma}$, and let $s$ be a bijection $\{1, \dots, n\}\to\{1, \dots, n\}$. Then the image of the map $E^\sharp_{{\sigma},{\mathrm{val}}}\to D_{{\rm{SL}}(2)}$ is contained in the union $U$ of $D_{{\rm{SL}}(2)}(\{W(N_{s(1)}+\dots+N_{s(j)})\,|\,j=1,\dots,n\})$, where $s$ runs over all bijections $\{1, \dots, n\}\to\{1, \dots, n\}$. Since $N_{s(1)}+\dots+N_{s(n)}= N_1+\dots+N_n$, the filtration $W(N_1+\dots+N_n)=W({\sigma})$ appears for any $s$. By \cite{KNU2} Part II, Proposition 3.2.12, the Borel-Serre splitting gives a continuous map $U\to{\rm{spl}}(W({\sigma}))$. Thus we get our assertion. \end{pf} \begin{sbpara} We replace the third paragraph in 7.2.13 by the following: \lq\lq Since the action of ${\sigma}_{\bold{R}}$ on ${\rm{spl}}(W({\sigma}))$ is proper and $E^\sharp_{{\sigma},{\mathrm{val}}}$ is Hausdorff, the action of ${\sigma}_{\bold{R}}$ on $E^\sharp_{{\sigma},{\mathrm{val}}}$ is proper by applying Lemma 7.2.6 (ii) to the continuous map $E^\sharp_{{\sigma},{\mathrm{val}}}\to{\rm{spl}}(W({\sigma}))$ in Proposition 7.2.12. Hence $\text{Re}(h_{\lambda})$ converges in ${\sigma}_{\bold{R}}$ by Lemma 7.2.7." \end{sbpara} \begin{sbpara} Add the following sentence at the top of the fourth paragraph in 7.2.13: \lq\lq Let $|\;\;| : E_{{\sigma},{\mathrm{val}}} \to E^\sharp_{{\sigma},{\mathrm{val}}}$ be the continuous map $(q,F) \to (|q|,F)$ in 7.1.3." \end{sbpara} \subsection{Supplements to Part III} We add explanations to Part III, Section 3.3. \begin{sbpara} We put the following explanation \ref{8.3.a} just after the statement of Part III, Theorem 3.3.1. \end{sbpara} \begin{sbpara}\label{8.3.a} This Part III, Theorem 3.3.1 is the mixed Hodge version of \cite{KU} Theorem 5.4.3 of the pure case, and is proved in the same way. \end{sbpara} \begin{sbpara} We replace the two lines \lq\lq As in [KU09] 6.4, a key step ..... We only prove this proposition." \noindent just before Part III, 3.3.3 by the following \ref{8.3.b}. \end{sbpara} \begin{sbpara}\label{8.3.b} This Part III, Theorem 3.3.2 is proved in the following way. We can prove the evident mixed Hodge version of Proposition A.1.6 by using the same arguments as in the proof of Proposition A.1.6. Just as \cite{KU} Theorem 5.4.4 was reduced to the case $y^*_{{\lambda},t}= y_{{\lambda},t}$ of \cite{KU} Proposition 6.4.1 by using the case $m=0$ and $x_{\lambda}\in E^{\sharp}_{{\sigma},{\mathrm{val}}}$ of A.1.6 (see \ref{8.1.c}, \ref{8.1.d}), Part III, Theorem 3.3.2 is reduced to the case $y^*_{{\lambda},t}= y_{{\lambda},t}$ of Part III, Proposition 3.3.4 by using the case $m=0$ of this mixed Hodge version of Proposition A.1.6. \end{sbpara} \begin{sbpara}\label{8.3.c} We replace the part \lq\lq proposition implies" \noindent in line 2 of Remark after Part III, Proposition 3.3.4 with \lq\lq proposition and \cite{KU} 6.4.1 of the pure case imply".
\end{sbpara} \begin{thebibliography}{99} \bibitem{Be1} {\sc Beilinson, A.}, \newblock {\em Higher regulators and values of L-functions}, VINITI, 24 (1984), 181--238 (in Russian); English translation: J. Soviet Math. 30 (1985), 2036--2070. \bibitem{Be2} {\sc Beilinson, A.}, \newblock {\em Height pairing between algebraic cycles}, Contemp. Math. {\bf 67} (1987), 1--24. \bibitem{Bl1} {\sc Bloch, S.}, \newblock {\em Higher Regulators, Algebraic K-Theory, and Zeta Functions of Elliptic Curves}, Lect. Notes U.C. Irvine (1977), American Math. Soc. (2000). \bibitem{Bl2} {\sc Bloch, S.}, \newblock {\em Height pairings for algebraic cycles}, J. Pure Appl. Algebra {\bf 34} (1984), 119--145. \bibitem{BK} {\sc Bloch, S., Kato, K.}, \newblock {\em Asymptotic behaviors of heights and regulators in degeneration}, in preparation. \bibitem{BS} {\sc Borel, A., Serre, J.-P.}, \newblock {\em Corners and arithmetic groups}, Comment. Math. Helv. {\bf 48} (1973), 436--491. \bibitem{Bn} {\sc Bourbaki, N.}, \newblock {\em Topologie G\'en\'erale I}, in \'El\'ements de Math\'ematique, Hermann, Paris, Num\'ero d'\'Edition 2179, 1966 (English translation: Hermann and Addison-Wesley, 1966). \bibitem{BP} {\sc Brosnan, P., Pearlstein, G.}, \newblock {\em On the algebraicity of the zero locus of an admissible normal function}, Compositio Math. {\bf 189} (2013), 1913--1962. \bibitem{CKS} {\sc Cattani, E., Kaplan, A., Schmid, W.}, \newblock {\em Degeneration of Hodge structures}, Ann. of Math. {\bf 123} (1986), 457--535. \bibitem{GL} {\sc Goncharov, A.B., Levin, A.M.}, \newblock {\em Zagier's conjecture on $L(E,2)$}, Inventiones Math. {\bf 132} (1998), 393--432. \bibitem{Gr} {\sc Griffiths, P.}, \newblock {\em Periods of integrals on algebraic manifolds. I. Construction and properties of modular varieties}, Amer. J. Math. {\bf 90} (1968), 568--626. \bibitem{KN} {\sc Kato, K., Nakayama, C.}, \newblock {\em Log Betti cohomology, log \'etale cohomology, and log de Rham cohomology of log schemes over ${\bold{C}}$}, Kodai Math.\ J. {\bf 22} (1999), 161--186. \bibitem{KNU1} {\sc Kato, K., Nakayama, C., Usui, S.}, \newblock {\em ${\rm{SL}}(2)$-orbit theorem for degeneration of mixed Hodge structure}, J. Algebraic Geometry {\bf 17} (2008), 401--479. \bibitem{KNU2} {\sc Kato, K., Nakayama, C., Usui, S.}, \newblock {\em Classifying spaces of degenerating mixed Hodge structures}, I, Adv. Stud. Pure Math. {\bf 54} (2009), 187--222; II, Kyoto J. Math. {\bf 51} (2011), 149--261; III, J. Algebraic Geometry {\bf 22} (2013), 671--772. \bibitem{KU0} {\sc Kato, K., Usui, S.}, \newblock {\em Borel--Serre spaces and spaces of SL(2)-orbits}, Advanced Studies in Pure Math. {\bf 36} (2002), 321--382. \bibitem{KU} {\sc Kato, K., Usui, S.}, \newblock {\em Classifying spaces of degenerating polarized Hodge structures}, Ann. Math. Studies {\bf 169}, Princeton Univ. Press (2009). \bibitem{O} {\sc Oda, T.}, \newblock {\em Convex bodies and algebraic geometry}, Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge $\cdot$ Band 15, Springer-Verlag, Berlin and Heidelberg (1988). \bibitem{Pe} {\sc Pearlstein, G.}, \newblock {\em ${\rm{SL}}_2$-orbits and degenerations of mixed Hodge structure}, J. Differential Geom. {\bf 74} (2006), 1--67.
\bibitem{Sct} {\sc Schappacher, N., Scholl, A.J.}, \newblock {\em The boundary of the Eisenstein symbol}, Math. Ann. {\bf 290} (1991), 303--321. \bibitem{Scw} {\sc Schmid, W.}, \newblock {\em Variation of Hodge structure: the singularities of the period mapping}, Invent. Math. {\bf 22} (1973), 211--319. \bibitem{U1} {\sc Usui, S.}, \newblock {\em Variation of mixed Hodge structure arising from family of logarithmic deformations. II. Classifying space}, Duke Math. J. {\bf 51} (1984), 851--875. \bibitem{U2} {\sc Usui, S.}, \newblock {\em A numerical criterion for admissibility of semisimple elements}, Tohoku Math. J. {\bf 45} (1993), 471--484. \end{thebibliography} \noindent {\rm Kazuya KATO \\ Department of Mathematics \\ University of Chicago \\ Chicago, Illinois, 60637, USA} \\ {\tt [email protected]} \noindent {\rm Chikara NAKAYAMA \\ Department of Economics \\ Hitotsubashi University \\ 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan} \\ {\tt [email protected]} \noindent {\rm Sampei USUI \\ Graduate School of Science \\ Osaka University \\ Toyonaka, Osaka, 560-0043, Japan} \\ {\tt [email protected]} \end{document}
\begin{document} \begin{abstract} The rate of recurrence to measurable subsets in a conservative, ergodic infinite-measure preserving system is quantified by generic divergence or convergence of certain sums given by a function $\omega(n)$. In the context of skew products over transformations of a probability space, we relate this notion to the more frequently studied question of the growth rate of ergodic sums (including Lyapunov exponents). We study in particular skew products over an irrational rotation given by bounded variation $\mathbb{Z}$-valued functions: first the generic situation is studied and recurrence quantified, and then certain specific skew products over rotations are shown to violate this generic rate of recurrence. \end{abstract} \title{$\omega$-recurrence in skew products} \section{Introduction} In the study of transformations $T:X \rightarrow X$ which are ergodic with respect to a probability measure $\mu$, the rate of recurrence to a set of positive measure is completely governed by the Birkhoff ergodic theorem: if $A \subset X$ is of measure $\mu(A)=\alpha >0$, then for almost every $x \in X$ we have that the sequence $\{n : T^n(x) \in A\}$ is of \textit{density} $\alpha$ (we write $\#A$ for the number of elements in a set $A$): \begin{equation}\label{eqn - density} \lim_{i \rightarrow \infty} \frac{\# \{n=0,1,\ldots,i-1 : T^n(x) \in A\} }{i} = \alpha.\end{equation} If, however, $\mu(X)=\infty$ and $T$ is conservative and ergodic, then for any set $A$ of measure $0 < \mu(A) < \infty$, for almost every $x$ the sequence $\{n : T^n(x) \in A\}$ is infinite but the limit in \eqref{eqn - density} is zero. To quantify a generic rate of recurrence in this setting, we follow \cite{krengel} and let $\omega: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ be nonincreasing and regularly varying ($\omega(kx) \asymp \omega(x)$ for all $k \in \mathbb{R}^+$). For each $x \in X$ we define \[\zeta_j^k(x) = \sum_{i=j}^{k-1} \chi_A(T^i x) \omega(i), \quad \zeta(x) = \lim_{n \rightarrow \infty} \zeta_0^n (x)\] and through the assumption that $T$ is ergodic and conservative we have that $\zeta(x)$ either converges almost everywhere or diverges almost everywhere, and furthermore this convergence or divergence does not depend on the choice of the set $A$ of positive, finite measure. If $\zeta(x)=\infty$ for almost every $x$, the system is said to be $\omega$-recurrent, while if $\zeta(x)<\infty$ for almost every $x$ the system is called $\omega$-nonrecurrent. In this work we will concern ourselves with the following situation: let $\{Y, \nu\}$ be a probability space and $S: Y \rightarrow Y$ an ergodic measure-preserving transformation. Let $f:Y \rightarrow G$ be a function into a countable discrete group $G$, and denote the identity element of $G$ by $e$. Let $X = Y \times G$ and $\mu$ be the product of $\nu$ and the counting measure on $G$. Define the \textit{skew transformation} \begin{equation}\label{eqn - skew}T(y,g) = (S(y), g\cdot f(y)),\end{equation} and assume that $\{X, \mu, T\}$ is ergodic. 
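The following minimal numerical sketch may help fix ideas. It is an illustration only: the golden-mean rotation, the choice $\omega(i)=1/(i+1)$, and the $\pm 1$ step function (the ``infinite staircase'' function of \eqref{eqn - staircase function} below) are assumptions made purely for concreteness. It iterates the skew transformation \eqref{eqn - skew} with $G=\mathbb{Z}$ and accumulates the partial recurrence sums $\zeta_0^n$ for the set $A = Y \times \{0\}$.

\begin{verbatim}
import math

# Illustrative choices (not prescribed by the text): golden-mean rotation
# and the +-1 step function of the infinite staircase.
ALPHA = (math.sqrt(5) - 1) / 2

def f(y):
    # f : Y -> Z, bounded variation, zero mean
    return 1 if y < 0.5 else -1

def skew_orbit(y, n):
    """Yield T^i(y, 0) for i = 0, ..., n-1, where T(y, g) = (y + alpha mod 1, g + f(y))."""
    g = 0  # identity element of G = Z
    for _ in range(n):
        yield y, g
        g += f(y)
        y = (y + ALPHA) % 1.0

def zeta(y, n, omega=lambda i: 1.0 / (i + 1)):
    """Partial sum zeta_0^n(y, 0) for A = Y x {0}; omega is shifted to avoid i = 0."""
    return sum(omega(i) for i, (_, g) in enumerate(skew_orbit(y, n)) if g == 0)

if __name__ == "__main__":
    for y0 in (0.1, 0.3, 0.7):
        print(y0, [round(zeta(y0, 10 ** k), 3) for k in range(2, 6)])
\end{verbatim}

Whether these partial sums remain bounded or diverge as $n \rightarrow \infty$ is precisely the dichotomy quantified by $\omega$-recurrence above.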
Then the sequence of \emph{ergodic products of $y$} is given by \[e, f(y), f(y)\cdot f(S(y)), f(y) \cdot f(S(y)) \cdot f(S^2(y)), \ldots, \prod_{i=0}^{n-1} f(S^i (y)), \ldots\] We then have the cocycle identity for $n,m \geq 0$ \begin{equation}\label{eqn - cocycle}\prod_{i=0}^{n+m-1}f(S^i y)= \prod_{i=0}^{n-1}f(S^i y) \cdot \prod_{i=0}^{m-1}f(S^{n+i}y).\end{equation} In \S\ref{section - outline process} we will introduce notation and the central result of this work (Theorem \ref{theorem - main abstract theorem}). This machinery will be applied in \S\ref{section - rotation cocycles} to skew products over irrational rotations; let $Y=\mathbb{R}/\mathbb{Z}$ and $S$ be rotation by $\alpha \notin \mathbb{Q}$, and let $f:Y \rightarrow \mathbb{Z}$ be of bounded variation. In the particular case that \begin{equation}\label{eqn - staircase function}f(y) = \chi_{[0,1/2)}(y) - \chi_{[1/2,1)}(y),\end{equation} this skew product is called the \textit{infinite staircase}. We will study in \S\ref{section - rotation cocycles} the question of \textit{generic} $\omega$-recurrence in skew products into $\mathbb{Z}$ over irrational rotations: for almost-every choice of $\alpha$, any such ergodic skew product with fixed $f$ of bounded variation is $1 / n$-recurrent (Theorem \ref{theorem - 1 over n}). We will also apply our results to a particular class of \emph{interval exchange transformations} as another example of how the results of \S\ref{section - outline process} may be applied in general. Furthermore, using techniques developed in \cite{substitutions1}, we will show in \S\ref{section - staircase} that for any $\omega(n) \in o(n^{\epsilon})$ with $\epsilon < -1/2$ there is an uncountable set of $\alpha$ for which the infinite staircase is \emph{not} $\omega$-recurrent (Theorem \ref{theorem - not omega recurrent for power more than half}). Finally, the proof of a technical lemma in the theory of continued fractions (Lemma \ref{lemma - technical lemma}) is given in \S\ref{section - lemmas}. \section{$\omega$-recurrence, ergodic sums and Lyapunov exponents}\label{section - outline process} Let $\{Y, \nu\}$ be a probability space and $S:Y \rightarrow Y$ be an ergodic measure-preserving transformation. Let $f:Y \rightarrow G$ be a function into a countable discrete group $G$ with identity element $e$. Let $\{N_k\}$ be an increasing sequence of positive integers for $k=1,2,\ldots$, and for each $y \in Y$ let \[r_k(y) = \#\left\{g \in G : g=\prod_{i=0}^{n-1}f(S^i y), \quad n =1,2,\ldots,N_k \right\}\] Fixing some $\epsilon_1 \in (0,1]$, let $\rho_k(\epsilon_1)=\rho_k$ be such that \[\nu \left\{ y \in Y : r_k(y) \leq \rho_k \right\} \geq \epsilon_1,\] and then denote \begin{equation}\label{eqn - sets A} A_k = \left\{ y \in Y : r_k(y) \leq \rho_k\right\}.\end{equation} Fixing some choice of $k$, let $y \in A_k$ be arbitrary (but fixed), and let $\epsilon_2 \in (0,1)$. Then a particular $g \in G$ will be called $\epsilon_2$-\textit{crowded} if \[ \sum_{n=1}^{N_k} \chi_g \left( \prod_{i=0}^{n-1}f(S^i y) \right) \geq \epsilon_2 \frac{N_k}{\rho_k}.\] Define $G'_k(y, \epsilon_1, \epsilon_2)=G'_k(y)$ to be the set of all $\epsilon_2$-crowded values of $y$. Then the following is immediate: \begin{lemma}\label{lemma - lots of balls in crowded bins} For each $k$ and any $y \in A_k$, \[ \# \left\{n = 1,2, \ldots, N_k : \prod_{i=0}^{n-1}f(S^i(y)) \in G'_k(y) \right\} \geq (1- \epsilon_2)N_k.\] \end{lemma} Let $\epsilon_3 \in (0,1)$. 
With $k$ and $y \in A_k$ fixed, $m \in \{1,\ldots, N_k\}$ will be called $\epsilon_3$-\textit{predicting} if \[ \# \left\{n = 1,2,\ldots,N_k : \prod_{i=0}^{m+n}f(S^i y) = \prod_{i=0}^m f(S^i y)\right\} \geq \epsilon_2 \epsilon_3 \frac{N_k}{\rho_k}.\] Define $N'_k(y, \epsilon_1,\epsilon_2,\epsilon_3)=N'_k(y)$ to be the set of $\epsilon_3$-predicting times. \begin{lemma}\label{lemma - how many predicting} For each $k$ and any $y \in A_k$, $\#N'_k(y) \geq (1-\epsilon_2) (1-\epsilon_3) N_k - \rho_k$. \begin{proof} Fix $y$ and enumerate $G'_k(y)=\{g_1, g_2, \ldots, g_m\}$, where $m \leq \rho_k$. Let $j_i$ be the number of ergodic products with value $g_i$, so that by Lemma \ref{lemma - lots of balls in crowded bins} we have $j_1 + \ldots +j_m \geq (1-\epsilon_2)N_k$. For each $i$, then, at least $[(1-\epsilon_3)j_i] = (1-\epsilon_3)j_i - \{(1-\epsilon_3)j_i\}$ times are $\epsilon_3$-predicting, where $[x]$ denotes the integer part of $x$ and $\{x\}=x-[x]$. Then the number of $\epsilon_3$-predicting times is at least \[\sum_{i=1}^{m} [(1-\epsilon_3)j_i] \geq (1-\epsilon_3)(1-\epsilon_2)N_k - \sum_{i=1}^m \{(1-\epsilon_3)j_i\} > (1-\epsilon_3)(1-\epsilon_2)N_k-\rho_k. \qedhere\] \end{proof} \end{lemma} Note that from \eqref{eqn - cocycle} it follows that if $y \in Y$, $\tau$, $m \in \mathbb{N}$, \[\left(\prod_{i=0}^{\tau-1}f(S^i y) = \prod_{i=0}^{\tau +m -1}f(S^i y) \right) \quad \Longrightarrow \left( \prod_{i=0}^{m-1}f\left(S^i (S^\tau y)\right) = e\right).\] Finally, define $B_k(\epsilon_1, \epsilon_2, \epsilon_3)=B_k$ by \begin{equation}\label{eqn - define B_k}B_k = \left\{ y \in Y : \sum_{m=1}^{N_k} \chi_{e}\left(\prod_{i=0}^{m-1}f(S^i y)\right) \geq \epsilon_2 \epsilon_3 (N_k/\rho_k)\right\},\end{equation} the set of points which have at least $\epsilon_2 \epsilon_3 N_k/\rho_k$ ergodic products equal to the identity in the first $N_k$ times. \begin{proposition}\label{proposition - the good B_k} If $\rho_k \in o(N_k)$, then \[\liminf_{k \rightarrow \infty}\nu(B_k) \geq \epsilon_1(1 - \epsilon_2)(1-\epsilon_3) > 0.\] \begin{proof} Recall the sets $A_k$ from \eqref{eqn - sets A}, and let $C_i = S^{i}(A_k)$, $\tilde{C}_i = \left( C_i \cap B_k \right)$ for $i=0,1,\ldots,N_k-1$. Combining lemma \ref{lemma - how many predicting}, $\nu(C_0) = \nu(A_k) \geq \epsilon_1$ and the fact that $S$ preserves $\nu$, we obtain \[\sum_{i=0}^{N_k-1}\nu(\tilde{C}_i) \geq \epsilon_1\left((1-\epsilon_2)(1-\epsilon_3)N_k-\rho_k\right),\] so the measure of the union \[ \bigcup_{i=0}^{N_k-1} \tilde{C}_i\] is at least this quantity divided by the number of sets, $N_k$. Finally, we trivially note that this union of the $\tilde{C}_i$ is contained within $B_k$, and the conclusion follows from the assumption that $\rho_k \in o (N_k)$. \end{proof} \end{proposition} \begin{lemma}\label{lemma - new lemma} Suppose that $i_1< i_2 < \ldots < i_m$ and let \[ y \in \bigcap_{j=1}^m B_{i_j},\] with $B_k$ defined as in \eqref{eqn - define B_k}. 
Assume that there is some $\delta>0$ so that for $j=1,2,\ldots,m$ we have \[ \frac{N_{i_j}}{\rho_{i_j}} \geq \delta \sum_{\ell =1}^{j-1} \frac{N_{i_{\ell}}}{\rho_{i_{\ell}}}.\] Then if we set $\delta' = \delta/(1+2\delta)$, \begin{equation}\label{eqn - estimate for main theorem} \zeta_1^{N_{i_m}}(y) \geq \delta' \epsilon_2 \epsilon_3 \sum_{j=1}^{m} \omega(N_{i_j}) \frac{N_{i_j}}{\rho_{i_j}} .\end{equation} \begin{proof} The quantity $\delta'$ satisfies for all $j=1,2,\ldots,m$ \begin{equation}\label{eqn - alternate delta} \frac{N_{i_j}}{\rho_{i_j}} - \delta' \sum_{\ell=1}^j \frac{N_{i_{\ell}}}{\rho_{i_{\ell}}} \geq \delta' \frac{N_{i_j}}{\rho_{i_j}}.\end{equation} For any such $y$, first note that as $y \in B_{i_1}$, there are at least $\epsilon_2 \epsilon_3 (N_{i_1}/\rho_{i_1})$ ergodic products equal to the identity before time $N_{i_1}$, so using the fact that $\omega$ is non-increasing and $\delta'<1$, \[\zeta_1^{N_{i_1}}(y) \geq \delta' \epsilon_2 \epsilon_3 \omega(N_{i_1}) \frac{N_{i_1}}{\rho_{i_1}}.\] Next, as $y \in B_{i_2}$, there are at least $\epsilon_2 \epsilon_3 (N_{i_2}/\rho_{i_2})$ such times before $N_{i_2}$. At least $\epsilon_2 \epsilon_3 (N_{i_1}/\rho_{i_1})$ of these times occur before $N_{i_1}$ (perhaps even all of them do). Again using that $\omega$ is non-increasing, however, we may elect to replace only $\delta' \epsilon_2 \epsilon_3 (N_{i_1}/\rho_{i_1})$ of these times with the smaller value $\omega(N_{i_1})$, and then replace the rest with the even smaller $\omega(N_{i_2})$: \[ \zeta_1^{N_{i_2}}(y) \geq \epsilon_2 \epsilon_3 \left( \omega(N_{i_1}) \frac{\delta' N_{i_1}}{\rho_{i_1}} + \omega(N_{i_2}) \left( \frac{N_{i_2}}{\rho_{i_2}} - \frac{\delta' N_{i_1}}{\rho_{i_1}}\right) \right) \geq \delta' \epsilon_2 \epsilon_3 \sum_{j=1}^2 \omega(N_{i_j}) \frac{N_{i_j}}{\rho_{i_j}},\] where the last inequality follows from \eqref{eqn - alternate delta}. Proceeding in this manner we obtain \eqref{eqn - estimate for main theorem}. \end{proof} \end{lemma} \begin{theorem}\label{theorem - main abstract theorem} Suppose that $\rho_k \in o(N_k)$, and that there is a $\delta>0$ such that the inequality \begin{equation} \label{eqn - strange equation at the heart of it all} \frac{N_k}{\rho_k} \geq \delta \sum_{i=1}^{k-1} \frac{N_i}{\rho_i} \end{equation} holds for sufficiently large $k$, and assume that $\{X, \mu, T\}$ is conservative and ergodic. Then the system is $\omega$-recurrent for all $\omega$ such that \[\sum_{k=1}^{\infty} \omega(N_k) \frac{N_k}{\rho_k} = \infty.\] \begin{proof} Assume without loss of generality that \eqref{eqn - strange equation at the heart of it all} holds for all $k$, and again set $\delta' = \delta/(1+2\delta)$. Recall the sets $B_k$ defined in \eqref{eqn - define B_k}, and suppose that $i_1<i_2< \ldots < i_m$ and $j_1<j_2< \ldots < j_M$ with $j_M \geq i_m$, so that we may find \[ B \subset \bigcap_{\ell =1}^m B_{i_{\ell}}, \quad B' \subset \bigcap_{\ell = 1}^M B_{j_{\ell}}.\] Assume that $\mu(B)=\mu(B')$ and $B \cap B' = \emptyset$. Denote by $t_1 < t_2 < \ldots < t_{\tau}$ the enumeration of $\{i_1, i_2, \ldots, i_m\} \cup \{j_1, j_2, \ldots, j_M\}$ in increasing order.
Then, using the same technique for estimating $\zeta_1^{N_k}$ as in Lemma \ref{lemma - new lemma}, we find that \begin{align*} \int_{B \cup B'} \zeta_1^{N_{j_M}}(x) d \mu &\geq \int_B \zeta_1^{N_{i_m}}(x) d \mu + \int_{B'}\zeta_1^{N_{j_M}}(x) d \mu\\ & \geq \delta' \epsilon_2 \epsilon_3 \mu(B) \left( \sum_{\ell =1}^{m} \omega(N_{i_{\ell}}) \frac{N_{i_{\ell}}}{\rho_{i_{\ell}}} + \sum_{\ell = 1}^M \omega(N_{j_{\ell}})\frac{N_{j_{\ell}}}{\rho_{j_{\ell}}}\right)\\ &\geq \delta' \epsilon_2 \epsilon_3 \mu(B) \sum_{\ell = 1}^{\tau} \omega(N_{t_{\ell}}) \frac{N_{t_{\ell}}}{\rho_{t_{\ell}}}. \end{align*} From Proposition \ref{proposition - the good B_k} we know that $\liminf \nu(B_k)=\beta>0$, so let $D \subset Y$ be an arbitrary set such that $\nu(D)>1-\beta/2$. Then for sufficiently large $k$ we have \[ \nu \left( D \cap B_k \right) > \frac{\beta}{2}.\] Without loss of generality assume that the above inequality holds for all $k$. Assume further that the various $B_i \cap D$ coincide; the above computation shows that the estimate in this case is no larger than in the general case. We then have \[ \int_D \zeta_0^{N_k}(y) d\nu \geq \delta' \epsilon_2 \epsilon_3 \frac{\beta}{2} \sum_{ \ell=1}^k \omega(N_{\ell}) \frac{N_{\ell}}{\rho_{\ell}}.\] It follows from our assumptions, then, that for \emph{any} set $D \subset Y$ of measure $\nu(D) > 1- \beta/2$, we have \[\int_{D \times\{e\}} \zeta(y,g) d\mu = \int_D \zeta(y) d \nu =\infty,\] which implies that $\zeta(y,g)=\infty$ on a set of positive $\mu$-measure. \end{proof} \end{theorem} If $G$ is a discrete metric group, we will say that $G$ is \emph{at most $d$-dimensional} if \[ \#\left\{ g \in G : \|g \| \leq n\right\} \in O(n^d).\] \begin{corollary}\label{corollary - lyapunov} Assume that $\{Y, \nu, S\}$ is ergodic, with $f:Y \rightarrow G$, where $G$ is discrete and at most $d$-dimensional. Denote \begin{equation}\label{eqn - lyapunov} \lambda= \limsup_{n \rightarrow \infty} \frac{\log\left( \esssup_{y \in Y} \| \sum_{i=0}^{n-1}f(S^i y)\| \right)}{\log n},\end{equation} the principal Lyapunov exponent. If $ 1/d > \lambda$ and the skew product is ergodic (recall \eqref{eqn - skew}), then for any $1/d>\lambda'>\lambda$, the skew product is $n^{(d \lambda'-1)}$-recurrent. \begin{proof} Given the dimension $d$ and the principal Lyapunov exponent $\lambda$, we may set $N_k = \gamma^k$ for some $\gamma>1$ and (for sufficiently large $k$) $\rho_k = \gamma^{kd\lambda'}$. We apply Theorem \ref{theorem - main abstract theorem}; the verification of \eqref{eqn - strange equation at the heart of it all} is direct. \end{proof} \end{corollary} \section{$\omega$-recurrence in skew products into $\mathbb{Z}$ over rotations}\label{section - rotation cocycles} We now set $Y=\mathbb{R}/ \mathbb{Z}$, $\nu$ as Lebesgue measure, $S(y) = y + \alpha \mod 1$ for some $\alpha \in (0,1)\setminus \mathbb{Q}$, and take $f:Y \rightarrow \mathbb{Z}$ to be of bounded variation. We use \textit{standard continued fraction} notation, where \[\alpha = [a_1,a_2,\ldots] = \cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{\ddots}}}.\] The $a_i=a_i(\alpha)$ are called \emph{partial quotients} of $\alpha$.
Beginning with $q_0=1$ and $q_1=a_1$, recursively define $q_{n+1}=a_{n+1}q_n+q_{n-1}$, and let \[\textbf{a}_n(\alpha)=\textbf{a}_n = \sum_{i=1}^n a_i.\] The \textit{Denjoy-Koksma inequality} states \begin{equation}\label{eqn - denjoy} \forall n \in \mathbb{N}, \, \forall y \in [0,1), \quad \left| \sum_{i=0}^{q_n -1} f(y+ i \alpha) - q_n \int_0^1 f(t)dt \right| < \textrm{Var}(f).\end{equation} The proof of the following Lemma is postponed until \S\ref{section - lemmas}: \begin{lemma}\label{lemma - technical lemma} For almost every $\alpha$, there is a $\delta > 0$ such that for all $k$ \[ \frac{q_{k}}{\textbf{a}_{k}} \geq \delta \sum_{i=1}^{k-1} \frac{q_i}{\textbf{a}_i} .\] \end{lemma} \begin{theorem}\label{theorem - 1 over n}Let $f$ be a $\mathbb{Z}$-valued function of bounded variation and zero mean such that for almost every $\alpha$ the corresponding skew product over rotation by $\alpha$ is conservative and ergodic. Then for almost every $\alpha$ the corresponding skew product over rotation by $\alpha$ is $(1/n)$-recurrent. \begin{proof} Set $N_k=q_k$, and via \eqref{eqn - denjoy} we let $\rho_k = 2\textrm{Var}(f)\textbf{a}_k$, and $\omega(n)=1/n$. Apply Theorem \ref{theorem - main abstract theorem}, justified by Lemma \ref{lemma - technical lemma}, disregarding the constant $2\textrm{Var}(f)$, and the known fact (see e.g. the final Corollary in \cite{MR1556899}) that for almost-every $\alpha$ we have \[\sum_{i=1}^{\infty} \frac{1}{\textbf{a}_i} = \infty. \qedhere\] \end{proof} \end{theorem} \begin{remark} It follows that for almost-every $\alpha$ and any bounded-variation function $f:\mathbb{R}/\mathbb{Z} \rightarrow \mathbb{Z}$, the principal Lyapunov exponent (recall \eqref{eqn - lyapunov}) is zero; this result is already known and follows via \eqref{eqn - denjoy}: see for example \cite[\S2]{conze}. \end{remark} \begin{remark} The sequence $\textbf{a}_i(\alpha)/(i \log i)$ converges to $1/\log 2$ in measure (\cite[\S 4]{MR1556899}). So for generic $\alpha$ it is possible to strengthen this recurrence to any $\omega$ of the form (for sufficiently large $n$) \[\omega(n) = \frac{1}{n \cdot \log^{(3)} n \cdot \log^{(4)} n \cdots \log^{(j)} n},\] where $\log^{(i)}n$ is the $i$-th iterated logarithm and $j$ is arbitrary; approximating $\textbf{a}_i > i \log i$ for all but a zero-density sequence of $i$ and $q_i > (K-\epsilon)^i$ (where $K$ is the \textit{Khintchine-Levy constant}) for all sufficiently large $i$ and $\epsilon>0$ arbitrary, we see that for some $C > 0$ \[ \omega(q_k)\frac{q_k}{\textbf{a}_k} \geq \frac{C}{k \cdot \log k \cdot \log^{(2)} k \cdots \log^{(j-1)} k}\] along a sequence of $k$ whose complement (in $\mathbb{N}$) is of zero density. \end{remark} A natural generalization of a rotation on $\mathbb{R}/\mathbb{Z}$ is an \emph{interval exchange transformation}, or \emph{IET}, defined on finitely many intervals. We are not particularly concerned with the definition or properties of IETs; an excellent survey on the subject is \cite{MR2219821}. 
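Before specializing to interval exchange transformations, the following small numerical sketch may be helpful; it is purely illustrative, and the choice $\alpha=\sqrt{2}-1$, the expansion depth, and the sampled base point are assumptions made for concreteness. It computes the partial quotients $a_i$, the denominators $q_n$, and the sums $\textbf{a}_n$ entering Theorem \ref{theorem - 1 over n}, notes that for $\omega(n)=1/n$ the terms $\omega(q_n)q_n/\textbf{a}_n$ reduce (up to the constant $2\textrm{Var}(f)$) to $1/\textbf{a}_n$, and checks numerically that the sums appearing in the Denjoy-Koksma inequality \eqref{eqn - denjoy} at times $q_n$ stay bounded for the staircase function \eqref{eqn - staircase function}.

\begin{verbatim}
import math

def contfrac(alpha, depth):
    """Partial quotients a_1, ..., a_depth of alpha in (0, 1)."""
    a, x = [], alpha
    for _ in range(depth):
        x = 1.0 / x
        a.append(int(x))
        x -= a[-1]
    return a

def denominators(a):
    """q_0 = 1, q_1 = a_1, q_{n+1} = a_{n+1} q_n + q_{n-1}."""
    q = [1, a[0]]
    for an in a[1:]:
        q.append(an * q[-1] + q[-2])
    return q

if __name__ == "__main__":
    alpha = math.sqrt(2) - 1          # illustrative: alpha = [2, 2, 2, ...]
    a = contfrac(alpha, 12)
    q = denominators(a)
    bold_a = [sum(a[:n + 1]) for n in range(len(a))]   # a_1 + ... + a_n

    # With omega(n) = 1/n the n-th term omega(q_n) q_n / a_n is just 1 / a_n.
    print("partial quotients:", a)
    print("q_n:", q)
    print("partial sum of 1/a_n:", round(sum(1.0 / an for an in bold_a), 3))

    # Denjoy-Koksma: Birkhoff sums of the staircase f at times q_n stay uniformly bounded.
    f = lambda y: 1 if y < 0.5 else -1
    y0 = 0.123
    birkhoff = [sum(f((y0 + i * alpha) % 1.0) for i in range(qn)) for qn in q[1:9]]
    print("Birkhoff sums at times q_n:", birkhoff)
\end{verbatim}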
For $T$ an IET of \emph{periodic type} (again, we are not concerned with the specific definition here), we have for all $n \in \mathbb{N}$ \cite[Theorem 2.2]{conze-fraczek}: \begin{equation}\label{eqn - periodic type bound} \sup_{x \in [0,1]} \left| \sum_{i=0}^{n-1} f \circ T^i (x) \right| \leq C \cdot \left(\log n\right)^{M+1} \cdot n^{\theta_2/\theta_1} \cdot V(f),\end{equation} where $f: [0,1) \rightarrow \mathbb{R}$, $V(f)$ is the variation of $f$, $0 \leq \theta_2 < \theta_1$ are the two largest Lyapunov exponents, $M$ is an explicit positive integer not larger than the number of intervals exchanged, and $C$ is a constant not dependent on $n$. \begin{lemma}\label{lemma - technical lemma for IET} Let $T$ be an interval exchange transformation of periodic type, ergodic with respect to Lebesgue measure, let $f: S^1 \rightarrow \mathbb{Z}$ be of bounded variation, and define for some fixed $\gamma>1$ \[N_k = \gamma^k, \quad \rho_k = C k^{M+1} \gamma^{k \frac{\theta_2}{\theta_1}},\] where the constant $C$ is $2 \textrm{Var} (f)$ times the constant given by \eqref{eqn - periodic type bound} and does not depend on $k$. Then there is some $\delta>0$ such that \eqref{eqn - strange equation at the heart of it all} holds: \[ \frac{N_k}{\rho_k} \geq \delta \sum_{j=1}^{k-1} \frac{N_j}{\rho_j}.\] \begin{proof} We will show that the $C_k$ are bounded, where \[C_k=\frac{\rho_k}{\gamma^k} \sum_{i=1}^{k-1} \frac{\gamma^i}{\rho_i}.\] Denote $\epsilon = \gamma^{1-\theta_2 / \theta_1}>1 $, so that \begin{align*} C_k &= \frac{k^{M+1}}{\epsilon^k} \sum_{i=1}^{ [k/2] } \frac{\epsilon^i}{i^{M+1}}+ \frac{k^{M+1}}{\epsilon^k}\sum_{i=[k/2]+1}^{k-1} \frac{\epsilon^i}{i^{M+1}}\\ &\leq \frac{k^{M+1}}{\epsilon^k} \sum_{i=1}^{[k/2]} \epsilon^i + \frac{k^{M+1}}{\epsilon^k \cdot [k/2]^{M+1}} \sum_{i=[k/2]+1}^{k-1} \epsilon^i. \end{align*} If we set $n \geq 1 - \log_{\epsilon}(\epsilon -1)$, we may bound \[ \sum_{i=1}^{k} \epsilon^i \leq \epsilon^{k+n},\] so we have (recall $M$ and $n$ are constants) \begin{align*} C_k &\leq \frac{k^{M+1} \epsilon ^{k/2+n}}{\epsilon^k} + \frac{2^{M+1} \epsilon^{k+n-1}}{\epsilon^k}\\ & \leq \frac{k^{M+1} \epsilon^{n+1}}{\epsilon^{k/2}} + 2^{M+1} \epsilon^{n-1}\\ \limsup_{k \rightarrow \infty} C_k &\leq 2^{M+1} \epsilon^n. \qedhere \end{align*} \end{proof} \end{lemma} Then we may obtain a result stronger than that implied solely by Corollary \ref{corollary - lyapunov}: \begin{theorem}\label{theorem - for IETs} Let $T$ be an ergodic IET of periodic type defined on $M$ intervals, with $f:S^1 \rightarrow \mathbb{Z}$ of bounded variation, and assume further that the skew product given by $f$ is ergodic. Then the skew product is $(\log n)^M n^{(\theta_2/\theta_1 -1)}$-recurrent. \begin{proof} The proof is direct in light of Theorem \ref{theorem - main abstract theorem}, Lemma \ref{lemma - technical lemma for IET}, and the divergence of $\sum 1/(n \log n)$. \end{proof} \end{theorem} \begin{remark} It is no more difficult to extend the result of Theorem \ref{theorem - for IETs} to $\omega(n)$ of the form \[\omega(n) = \frac{\left(\log n\right)^M}{n^{\frac{\theta_1-\theta_2}{\theta_1}} \cdot \log^{(2)}n \cdot \log^{(3)} n \cdots \log^{(j)}n},\] where $\log ^{(i)}(n)$ represents the $i$-th iterated logarithm, and $\omega(n)$ is defined for sufficiently large $n$.
\end{remark} \section{Non $\omega$-recurrence in the infinite staircase}\label{section - staircase} We now turn our attention to the problem of \textit{specific} $\alpha$, and the rates of $\omega$-recurrence in the associated infinite staircase (recall \eqref{eqn - staircase function}). We will restrict the form of $\alpha$, the rotation on $\mathbb{R}/\mathbb{Z}$: \begin{equation}\label{eqn - alpha heavy form} \alpha = [2r_1,s_1,2r_2,s_2,\ldots] \quad (r_i, s_i \in \mathbb{Z}^+).\end{equation} A \textit{substitution} is a homomorphism from the free monoid on a finite set (the \textit{alphabet} of the substitution) to itself, and may be extended to also act on infinite sequences on this set. Elements of the free monoid are referred to as \textit{words}, and infinite sequences are frequently referred to as \textit{infinite words}. The \textit{concatenation} of two (finite) words $\omega_1$, $\omega_2$ is their product $\omega_1 \omega_2$, and if $\omega=\omega_1 \omega_2$, then $\omega_1$ is called a \textit{left factor} of $\omega$, and $\omega_2$ is called a \textit{right factor}. If $\omega'$ is a factor of $\omega$ and $\omega' \neq \omega$, then $\omega'$ is called a \textit{proper factor}. The space of infinite words is compact in the product topology (the alphabet is finite), and if finite words are equated with open cylinder sets in this space, one may refer in the natural manner to the limit of a sequence of finite words. In any dynamical system $\{X, \mu, T\}$, we may partition $X$ into a finite number of sets indexed by an alphabet, and the orbit of any point $x \in X$ corresponds to a sequence $\{\omega_0, \omega_1, \ldots\}$ in this alphabet, with $\omega_n$ being given by the symbol corresponding to the partition element containing $T^n(x)$. This sequence is called the \textit{symbolic encoding} of the orbit of $x$ with respect to the given partition. Fix the alphabet $\{A,B,C\}$ and define the substitutions $\sigma_i$ to be the homomorphisms determined by \[ \sigma_i: \left\{ \begin{array}{l} A \rightarrow A \left( A^{r_i} B^{r_i-1} C \right)\left( A^{r_i} B^{r_i-1} C \right)^{s_i-1} \\ B \rightarrow A \left( A^{r_i -1} B^{r_i} C \right)\left( A^{r_i} B^{r_i-1} C \right)^{s_i-1} \\ C \rightarrow A \left( A^{r_i -1} B^{r_i} C \right)\left( A^{r_i} B^{r_i-1} C \right)^{s_i} \end{array} \right. \] For convenience denote $\sigma^{(n)}=\sigma_1 \circ \sigma_2 \circ \cdots \circ \sigma_n$. \begin{proposition}\label{proposition - all the substitution results} The symbolic coding of the orbit of $0$ under rotation by $\alpha$, whose continued fraction expansion is of the form \eqref{eqn - alpha heavy form}, with respect to the partition \[A = \left[0,1/2 \right), \quad B = \left[ 1/2, 1-\alpha \right), \quad C=\left[1-\alpha,1\right), \] is given by the infinite word \[W = \lim_{n \rightarrow \infty} \sigma^{(n)}(A).\] Furthermore, the encoding of \textit{any} point $y \in [0,1)$ may be presented as beginning with the concatenated word $W_1(y) W_2(y)$, where $W_2(y) \in \{\sigma^{(n)}(A),\sigma^{(n)}(B),\sigma^{(n)}(C)\}$, and $W_1(y)$ is a proper right factor (possibly empty) of one of these words. Finally, the word $\sigma^{(n)}(A)$ is of length $q_{2n}$, and both $\sigma^{(n)}(B)$ and $\sigma^{(n)}(C)$ are of length $q_{2n}+q_{2n-1}$. \begin{proof} That the coding of the origin takes the form given is \cite[Thm. 1.1, Prop. 4.3]{substitutions1}. That the orbit of any $y$ may be realized through the concatenation given follows from \cite[Prop. 4.1]{substitutions1}. 
Finally, the lengths of all the words are computed in \cite[Lem. 5.4]{substitutions1}; we let \[ M_n = \left[ \begin{array}{c c} (2r_n-1)s_n+1 & s_n \\ (2r_n-1)s_n + r_n & s_n +1 \end{array}\right],\] and then we have \[ M_n \cdot M_{n-1} \cdots M_1 \left[ \begin{array}{c} 1 \\ 1\end{array}\right] = \left[ \begin{array}{c} \left| \sigma^{(n)}(A) \right|\\ \left|\sigma^{(n)}(B)\right| = \left| \sigma^{(n)}(C) \right| \end{array} \right].\] First note that $\sigma_1(A)$ is of length $q_2=2r_1s_1+1$, and similarly $\sigma_1(B)$ and $\sigma_1(C)$ are of length $q_2+q_1=2r_1(s_1+1)+1$. Assume, then, that $\sigma^{(n-1)}(A)$ is of length $q_{2n-2}$ and $\sigma^{(n-1)}(B)$ and $\sigma^{(n-1)}(C)$ are of length $q_{2n-2}+q_{2n-3}$. Then using the matrix product formula above, we inductively find \begin{align*} |\sigma^{(n)}(A)| &= ((2r_n-1)s_n+1)q_{2n-2} + s_n \left(q_{2n-2}+q_{2n-3}\right)\\ &=2r_ns_nq_{2n-2}-s_nq_{2n-2}+s_nq_{2n-2}+s_nq_{2n-3}+q_{2n-2}\\ &=s_n(2r_nq_{2n-2}+q_{2n-3})+q_{2n-2}\\ &=s_n(q_{2n-1})+q_{2n-2}=q_{2n}. \end{align*} Similarly $|\sigma^{(n)}(B)|=|\sigma^{(n)}(C)|=q_{2n}+q_{2n-1}$. \end{proof} \end{proposition} Given a word $P=p_1p_2\ldots p_n$ of length $n$ in this alphabet, we define \[g(P) = \#\{i \leq n: p_i=A\} - \#\{i\leq n: p_i = B\} - \#\{i \leq n: p_i=C\}.\] If $P$ is the coding of the length $(n-1)$ orbit of some point $y$, then $g(P)$ is the $n$-th ergodic sum of $y$. Further denote for $k \in \mathbb{Z}$ \[g(P,k) = \#\{i \leq n : g(p_1 p_2 \ldots p_i)=k\}.\] \begin{lemma}\label{lemma - bound on hits to levels for orbit of zero} Suppose $\alpha$ is of the form given by \eqref{eqn - alpha heavy form} and the $s_i$ are bounded. Then there is a constant $\tau$ such that for all $n$ we have \[ \max\left\{g(P,k): k \in \mathbb{Z}, P \in \{\sigma^{(n)}(A),\sigma^{(n)}(B),\sigma^{(n)}(C)\} \right\} \leq \tau q_{2n-2}.\] \begin{proof} As the substitutions $\sigma_i$ are homomorphisms, it is direct to show inductively that for our substitutions $\sigma_i$, we have for all $n$ \[g(\sigma^{(n)}(A))=1, \quad g(\sigma^{(n)}(B))=g(\sigma^{(n)}(C))=-1;\] see also \cite[Prop. 5.1]{substitutions1}. Consider, then, the example of \begin{align*} \sigma^{(n)}(A) & = \sigma^{(n-1)}(\sigma_n A)\\ &= \sigma^{(n-1)}(A (A^{r_n}B^{r_n-1}C)^{s_n})\\ &=\sigma^{(n-1)}(A) \left[ \left(\sigma^{(n-1)}(A)\right)^{r_n} \left(\sigma^{(n-1)}(B)\right)^{r_n-1}\left(\sigma^{(n-1)}(C)\right)\right]^{s_n}. \end{align*} Using that our sums are an additive cocycle, \begin{equation}\label{eqn - split}\begin{split}g(\sigma^{(n)}(A),k) = g(\sigma^{(n-1)}(A),k) &+ s_n \left( \sum_{j=1}^{r_n}g(\sigma^{(n-1)}(A),k-j) +\right.\\ & \left. \sum_{j=2}^{r_n+1}g(\sigma^{(n-1)}(B),k-j) + g(\sigma^{(n-1)}(C),k-1)\right).\end{split}\end{equation} Regardless of how large we choose to make $r_n$, we must have \[\sum_{j=1}^{r_n}g(\sigma^{(n-1)}(A),k-j) \leq q_{2n-2},\] as $q_{2n-2}$ is the length of the word $\sigma^{(n-1)}(A)$ (Prop. \ref{proposition - all the substitution results}) and each term in the word is accounted for at most once in the sum. Similarly we have \[\sum_{j=2}^{r_n+1}g(\sigma^{(n-1)}(B),k-j) \leq q_{2n-2}+q_{2n-3},\quad g(\sigma^{(n-1)}(C),k-1) \leq q_{2n-2}+q_{2n-3}. \] As the $s_i$ are bounded, we may find $\tau$, independent of $n$, so that via \eqref{eqn - split} $g(\sigma^{(n)}(A),k) \leq \tau q_{2n-2}$. Similar arguments apply to $\sigma^{(n)}(B)$ and $\sigma^{(n)}(C)$. 
\end{proof} \end{lemma} \begin{corollary}\label{corollary - bound on hits to any level any point} For any $y \in [0,1)$, if we let $R_n(y)$ denote the symbolic encoding of the $q_{2n}$-length orbit of $y$, then there is a constant $\tau$ (independent of both $y$ and $n$) such that \[\max\{g(R_n(y),k) : k \in \mathbb{Z}\} \leq \tau q_{2n-2}.\] \begin{proof} From Proposition \ref{proposition - all the substitution results}, $R_n(y)$ is a left factor of $W_1(y)W_2(y)$, where $W_2(y) \in \{\sigma^{(n)}(A), \sigma^{(n)}(B), \sigma^{(n)}(C)\}$ is of length at least $q_{2n}$ (and $W_1(y)$ is a proper right factor of one of these words). Considering the words $W_1(y)$ and $W_2(y)$ independently, we need only double the constant $\tau$ from Lemma \ref{lemma - bound on hits to levels for orbit of zero}. \end{proof} \end{corollary} Note that if the $s_i$ are bounded, then for some $\tau'$ (independent of $n$) we have \begin{equation}\label{eqn - silly bound on q_n}2r_{n+1}q_{2n}<q_{2n+2}< \tau' r_{n+1}q_{2n}.\end{equation} \begin{theorem}\label{theorem - not omega recurrent for power more than half} Let $\omega(n) \in o(1/n^{\epsilon})$, for some $\epsilon>1/2$, be monotone decreasing and regularly varying, and let $f$ be given by \eqref{eqn - staircase function}. Then there is an uncountable set of $\alpha$ such that the infinite staircase is \textit{not} $\omega$-recurrent. In fact, for $A=Y \times\{0\}$ and any fixed $\delta>0$, there is an uncountable set of $\alpha$ for which \[\zeta(y) \leq \omega(0) + \delta\] for \emph{all} $y \in Y$. \begin{proof} Let $A = Y \times\{0\}$ and $\delta>0$. We have via the cocycle identity \eqref{eqn - cocycle} \[\zeta_0^{q_{2n+2}}(y) \leq \zeta_0^{q_{2n}}(y) + \sum_{l=1}^{2r_{n+1}s_{n+1}+1} \zeta_0^{q_{2n}} \left(y+(l -1)q_{2n}\alpha \right).\] By Corollary \ref{corollary - bound on hits to any level any point}, monotonicity of $\omega$, and \eqref{eqn - silly bound on q_n} we have \begin{equation}\label{eqn - phibound}\zeta_0^{q_{2n+2}}(y) \leq \zeta_0^{q_{2n}}(y) + \sum_{l=1}^{\tau' r_n} \tau q_{2n-2}\omega(l q_{2n}).\end{equation} By the assumption that $\omega(n) \in o(1/n^{\epsilon})$ and \eqref{eqn - silly bound on q_n}, we have that \begin{align*} \sum_{l=1}^{\tau' r_n} \tau q_{2n-2}\omega(l q_{2n}) &\leq \sum_{l=1}^{\tau' r_n} \frac{\tau q_{2n-2}}{l^{\epsilon}(2r_n q_{2n-2})^\epsilon}\\ &< \frac{\tau q_{2n-2}^{1-\epsilon}}{(2r_n)^{\epsilon}} \sum_{l=1}^{\tau' r_n} \frac{1}{l^{\epsilon}}\\ &< \frac{\tau q_{2n-2}^{1-\epsilon}}{(2r_n)^{\epsilon}} \frac{(\tau' r_n)^{1-\epsilon}-\epsilon}{1-\epsilon}, \end{align*} where the final line follows from elementary calculus (the so-called ``integral comparison'' for sums). As $q_{2n-2}$ does not depend on $r_n$, and $\epsilon>1/2$, with $r_1,s_1,\ldots,r_{n-1},s_{n-1}$ fixed, we are free to set $r_n$ large enough so that \[\frac{\tau q_{2n-2}^{1-\epsilon} \left((\tau' r_n)^{1-\epsilon}-\epsilon\right)}{(2r_n)^{\epsilon}(1-\epsilon)}< \frac{\delta}{2^n},\] so letting $n \rightarrow \infty$, via \eqref{eqn - phibound} we have $\zeta(y) \leq \omega(0) + \delta$. As the choice of $r_n$ is not specific (just some lower bound depending on prior partial quotients), the set of such $\alpha$ that we may construct in this manner is uncountable.
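(For a rough indication of the scale involved, with no attempt to optimize constants: if, say, $\omega(n)=n^{-4/5}$, one may take $\epsilon = 3/4$, and the displayed requirement is met once $r_n$ exceeds a constant multiple of $4^{n}\sqrt{q_{2n-2}}/\delta^{2}$.)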
\end{proof} \end{theorem} \section{Proof of Lemma \ref{lemma - technical lemma}}\label{section - lemmas} \begin{proof} Denote for $k \in \mathbb{Z}^+$ \[C_k = \frac{\textbf{a}_k}{q_k} \sum_{i=1}^{k-1} \frac{q_i}{\textbf{a}_i};\] we will show that the $C_k$ are bounded for generic $\alpha$. Then \begin{align*} C_{k+1} &= \frac{\textbf{a}_{k+1}}{q_{k+1}} \sum_{i=1}^{k} \frac{q_i}{\textbf{a}_i}\\ &=\frac{\textbf{a}_{k}+ a_{k+1}}{a_{k+1}q_{k}+q_{k-1}} \left(\frac{q_k}{\textbf{a}_k} + \sum_{i=1}^{k-1} \frac{q_i}{\textbf{a}_i}\right)\\ &< \frac{\textbf{a}_{k}+ a_{k+1}}{a_{k+1}q_{k}} \left(\frac{q_k}{\textbf{a}_k} + \sum_{i=1}^{k-1} \frac{q_i}{\textbf{a}_i}\right) \\ &=\left(\frac{1}{a_{k+1}}+\frac{1}{\textbf{a}_k}\right) + \frac{C_k}{a_{k+1}}+\frac{C_k}{\textbf{a}_k}\\ &=\left(\frac{1}{\textbf{a}_k}+\frac{1}{a_{k+1}}\right)(C_k+1) \end{align*} For \textit{any} $\alpha$ we have $\textbf{a}_k \geq k$, so for sufficiently large $k$, \begin{equation}\label{eqn - contfrac estimate for large quotients}C_k \geq 4, \, a_{k+1} \geq 2 \quad \Longrightarrow \quad C_{k+1} < \frac{2}{3}C_k.\end{equation} If $\alpha$ only has finitely many $a_i=1$, then clearly from \eqref{eqn - contfrac estimate for large quotients} the $C_k$ remain bounded. So let $a_{k+i}=1$ for $i=1,2,\ldots,m$. In this case we have $\textbf{a}_{k+i}=\textbf{a}_k +i$ and $q_{k+i}=\varphi_i q_k + \varphi_{i-1}q_{k-1}$, where $\varphi_i$ is the $i$-th Fibonacci number (beginning with $\varphi_0=\varphi_1=1$). Then for $m,k>0$ \[C_{k+m} = \frac{\textbf{a}_{k+m}}{q_{k+m}} \sum_{i=1}^{k+m-1}\frac{q_i}{\textbf{a}_i} < \frac{\textbf{a}_k+m}{\varphi_m q_k} \left( \sum_{i=1}^{k-1} \frac{q_i}{\textbf{a}_i} + \sum_{i=0}^{m-1} \frac{\varphi_i q_k + \varphi_{i-1} q_{k-1}}{\textbf{a}_k +i}\right).\] As $q_{k-1}<q_k$ and $\varphi_{k-1}+\varphi_k=\varphi_{k+1}$, we have \begin{equation}\label{eqn - intermediary contfrac}C_{k+m} < \frac{\textbf{a}_k + m}{\textbf{a}_k \varphi_m} C_k + \frac{\textbf{a}_k+m}{\textbf{a}_k}\sum_{i=0}^{m-1} \frac{\varphi_{i+1}}{\varphi_m}.\end{equation} On the one hand, if $m=1$ we have \[C_{k+1} \leq \frac{k+1}{k}C_k + 2\frac{k+1}{k},\] which if $a_{k+2} \geq 2$, $C_k \geq 10$, and $k$ is sufficiently large ($k > 44$ suffices), then using \eqref{eqn - contfrac estimate for large quotients}, $C_{k+2} < C_k$. On the other hand, it is direct to verify for $m \geq 2$ both \[ \frac{\textbf{a}_k+m}{\textbf{a}_k \varphi_m} \leq 1, \quad \sum_{i=0}^{m-1} \varphi_i = \varphi_{m+1}-1, \] so that \eqref{eqn - intermediary contfrac} may be replaced with \[C_{k+m} < C_k + \left(1+\frac{m}{k} \right) \frac{\varphi_{m+2}}{\varphi_m} < C_k + 3 \left( 1 + \frac{m}{k} \right).\] Our interest then turns to establishing a reasonable bound on $m/k$ for generic $\alpha$. 
We discard the null set of $\alpha$ for which only finitely many $a_i \neq 1$, so for generic $\alpha$ we may define an infinite sequence of $n_i$, $m_i$ such that \[a_k = 1 \quad \Longleftrightarrow \quad k=n_i+j, \, j\in \{1,2,\ldots,m_i\}.\] It is an elementary fact in the theory of continued fractions that for generic $\alpha$, \[ \lim_{k \rightarrow \infty} \frac{\# \{a_j = 1 : j=1,2,\ldots k \} }{k} = \frac{\log 4 - \log 3}{\log 2} < \frac{1}{2}.\] On the other hand, if there were infinitely many $m_i > n_i$, we would have \[ \limsup_{k \rightarrow \infty} \frac{\# \{a_j = 1 : j=1,2,\ldots k \}}{k} \geq \frac{1}{2}.\] So for generic $\alpha$, we eventually have $m_i \leq n_i$, so for sufficiently large $i$ we have \begin{equation}\label{eqn - almost done} 0 \leq j \leq m_i \quad \Longrightarrow \quad C_{n_i+j} < C_{n_i}+6.\end{equation} We then have $a_{n_i+m_i+1} \geq 2$, so using \eqref{eqn - contfrac estimate for large quotients} \begin{equation}\label{eqn - last one}C_{n_i} \geq 12 \quad \Longrightarrow \quad C_{n_i+m_i+1} < \frac{2}{3} \left(C_{n_i}+6 \right) \leq C_{n_i}.\end{equation} Therefore, combining \eqref{eqn - contfrac estimate for large quotients}, \eqref{eqn - almost done}, and \eqref{eqn - last one}, for generic $\alpha$ the sequence $C_k$ remains bounded. \end{proof} \end{document}
\begin{document} \pagestyle{empty} \draft \twocolumn \wideabs{ \title{New, Efficient and Robust, Fiber-Based Quantum Key Distribution Schemes} \author{W.T. Buttler, J. R. Torgerson and S. K. Lamoreaux} \address{University of California, Los Alamos National Laboratory, Los Alamos, New Mexico 87545} \date{\today} \maketitle \begin{abstract} We present a new fiber-based quantum key distribution (QKD) scheme which can be regarded as a modification of an idea proposed by Inoue, Waks and Yamamoto (IWY) \cite{ref:iwy01}. The scheme described here uses a single phase modulator and two differential delay elements in series at the transmitter that form an interferometer when combined with a third differential delay element at the receiver. The protocol is characterized by a high efficiency, reduced exposure to an attack by an eavesdropper, and higher sensitivity to such an attack when compared to other QKD schemes. For example, the efficiency with which transmitted data contribute to the private key is $3/4$ compared with $1/4$ for BB84 \cite{ref:bb84}. Moreover, an eavesdropper can acquire a maximum of $1/3$ of the key, which leads to an error probability in the private key of $1/3$. This can be compared to $1/2$ and $1/4$ for these same parameters in both BB84 and IWY. The combination of these considerations should lead to increased range and key distribution rate over present fiber-based QKD schemes. \end{abstract} \pacs{PACS Numbers: 03.67.-a, 03.67.Hk, 03.67.Dd, 42.79.Sz} } \narrowtext The quantum key distribution (QKD) scheme recently proposed by Inoue, Waks and Yamamoto \cite{ref:iwy01} represents a new concept that should prove to be a significant advance for fiber-based QKD. Their scheme, which we label IWY, is sketched in Fig.~\ref{fig:iwy01} for the case of a two-delay-element transmitter. Essentially, a single pulse entering the transmitter leaves as a superposition of three pulses, the last two of which have been given a random phase shift of $\Delta \phi \in \{0, \pi\}$ relative to the first in addition to a fixed time delay of $\Delta t$ and $2 \Delta t$ relative to the time associated with the input state: The photon pulse is superposed in time and phase. The receiver effects a time delay of $\Delta t$ and causes the superposed pulses to overlap and hence interfere in the same manner as superposed pulses in the standard Franson interferometer \cite{ref:franson91} used in BB84-like fiber QKD as shown in Fig.~\ref{fig:bb84}. (The transmitter and receiver are typically referred to as Alice ({\bf A}) and Bob ({\bf B}) within the quantum information community.) The advantages of this new scheme are numerous: ({\rm I}) the efficiency with which transmitted data contribute to the private key, which we label $\eta_p$, the ``protocol efficiency,'' is increased to $\eta_p = 1 - 1/N$, where $N$ is the number of temporal superpositions leaving the transmitter; ({\rm II}) the amount of information an eavesdropper ({\bf E}, usually Eve) can acquire on the key is reduced as $N$ increases; ({\rm III}) the maximum error probability caused by eavesdropping, which we name ``disturbance'' and label as $p_d$, increases with $N$ and allows error reconciliation to be performed on data with a higher initial error probability; and ({\rm IV}) {\bf B} does not need a phase-modulator (PM).
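For example, for the two-delay-element transmitter sketched in Fig.~\ref{fig:iwy01}, $N = 3$, so that $\eta_p = 2/3$, compared with the $1/4$ obtained for the fiber BB84 implementation discussed below.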
The implications of these advantages, relating to the points raised above, are that {\bf A} and {\bf B} generate more key and can tolerate and correct for higher error rates because: ({\rm i}) qubits accumulate at a higher rate; ({\rm ii}) the fraction of key leaked during eavesdropping, $\eta_e$, that must be discarded to ensure privacy, is reduced with increasing $N$ ($\eta_e$ relates the maximum amount of information {\bf E} can acquire on the quantum key in a specified attack on the quantum channel); ({\rm iii}) the likelihood of detecting {\bf E}'s attacks on the quantum channel is increased as her eavesdropping creates a larger disturbance within the qubits; and ({\rm iv}) the random choice of $\Delta \phi$ by {\bf B} necessary in other schemes is eliminated in IWY and yields an increase in $\eta_p$ over these other schemes ($\eta_p$ increases more than $300$\% for the case of the typical, or current, implementations of fiber-based BB84, e.g., as shown in Fig.~\ref{fig:bb84}). A further consequence of removing the PM from the receiver is that the typical $3$-$6$\,dB loss in PMs is removed from {\bf B}'s receiver. Thus, the key rate, robustness against errors, and distance over which qubits can be transmitted can be enhanced over currently realized fiber-based QKD systems. The scheme we introduce here, labelled BLT and shown in Fig.~\ref{fig:blt01} for two transmitter delay elements, improves upon IWY in several significant ways while retaining all of IWY's advantages over other schemes. First, BLT is simpler to implement because the series-element construction of BLT can be achieved with simple $50/50$ beamsplitters (BSs), while the parallel construction of IWY requires more specialized BSs to balance the output pulses' intensities. Second, the efficiency ($\eta_p$), potential eavesdropper knowledge ($\eta_e$) and disturbance ($p_d$) all favor BLT over IWY for the same number of delay elements in the transmitter. In BLT, the number $N$ of superposed pulses increases as $N = 2^m$, where $m$ is the number of delay elements at the transmitter; in IWY $N$ increases as $N = m + 1$. For example, the serial scheme of BLT yields $\eta_{p+} \ge 3/4$ while for the parallel scheme of IWY $\eta_{p\parallel} \ge 2/3$ with the lower bound given by $m = 2$ in each case. Moreover, the eavesdropper's potential information $\eta_e$ is less in BLT than in IWY, and the disturbance $p_d$ is greater in BLT than in IWY, regardless of the type of attack employed by {\bf E}. Third, a single PM outside the transmitter delay elements further reduces the complexity and expense of the system and can increase its efficiency. For example, it is well-known that QKD is more secure and more efficient if each superposition of pulses contains exactly one photon. This would most likely be realized with a single-photon source \cite{ref:photon-source} before the delay elements and appropriately modified optics as discussed below. A single PM outside the transmitter delay elements dramatically increases the receiver efficiency in this situation, again due to the typical $3$-$6$\,dB loss of PMs. Consider a fiber BB84 system as shown in Fig.~\ref{fig:bb84} in which {\bf A} sends single, or weak coherent, photon states to {\bf B} \cite{ref:franson91,ref:hughes48,ref:mhhtzg97}. For each input pulse the phase of {\bf A}'s PM is randomly toggled among a choice of four phases: $\phi_A \in \{0, \pi/2, \pi, 3 \pi/2 \}$.
As the pulses arrive at {\bf B}, the PM at that end is toggled randomly between $\phi_B \in \{0, \pi/2 \}$ for each superposition of pulses to select the measurement basis for that superposition. ({\bf B}'s delay element is adjusted so that a phase difference $\phi_A - \phi_B = 0$ always causes a photodetection at the same detector and $\phi_A - \phi_B = \pi$ always causes a photodetection at the other detector.) After the weak pulses are exchanged, {\bf B} reveals the times when photodetections were observed and the measurement bases chosen at those times. {\bf A} notes when their phases satisfied $\phi_A - \phi_B \in \{0, \pi\}$, and tells {\bf B} to discard measurement results which were not collected when this condition was satisfied. What remains is potentially secret key in which $\phi_A - \phi_B \in \{0, \pi \}$ correspond to binary values $\{0, 1\}$. The exchanged qubits will undoubtedly contain errors, which must be eliminated through an appropriate error reconciliation protocol \cite{ref:bbbss92,ref:bs94}. After error reconciliation, privacy amplification \cite{ref:bbcm95} --- which is linked to the error rate --- must be applied to the remaining error-free key to reduce {\bf E}'s information to an acceptable level. If the error rate is too high, all of the remaining key bits may be eliminated during this phase. In addition, the channel must be authenticated \cite{ref:sig-auth}; authentication represents an additional protocol expense. It is worthwhile to note the fraction of key sent by {\bf A} which remains after error reconciliation and privacy amplification for each of the schemes discussed here. We label this fraction $R_k$~\cite{ref:note4} and find \begin{equation} R_k = \eta_p\left(\mu_r - \eta_e \frac{p_o}{p_d}\right), \end{equation} where $p_o$ is the measured error probability per bit in the key (before error reconciliation) and $\mu_r$ is the fraction of {\bf B}'s key material that remains after error reconciliation ($\mu_r$ depends on $p_o$). The fraction of the distributed key that must be discarded through privacy amplification is given by $\eta_e{p_o}/{p_d}$. Although an additional amount of key must be discarded if weak coherent states of light are used, this issue is disregarded in the discussion because $\bar n$ can be made arbitrarily small (in theory at least), and because other states of light can be employed that are not susceptible to this type of attack (e.g., entangled photon sources \cite{ref:E91,ref:kwiat2000}, or other single-photon states \cite{ref:photon-source}). Consider the (typical) fiber-based BB84 implementation described above, and assume that the PM transmission is unity. This implementation has a receiver efficiency $\eta_{p(bb84)} = 1/4$ \cite{ref:ideal-bb84}: $1/2$ from the $50$:$50$ BS at the input of {\bf B}'s delay element and $1/2$ due to the random basis choice that is crucial to BB84 security. Both $\eta_e$ and $p_d$ depend on the type of attack used by {\bf E}. The most powerful, and yet realistic, attack that has been devised for this scheme is for {\bf E} to intercept and resend the photon pulses in the Breidbart Basis \cite{ref:bbbss92}. With this type of attack, $\eta_e \le 0.585$ and $p_d = 1/4$. If weak coherent pulses are used to transmit the key, another fraction $\bar{n}/4$ of the key is also potentially compromised to {\bf E}, where $\bar{n} \ll 1$ is the average number of photons per superposition. The IWY and BLT schemes accomplish key exchange in a similar manner.
That is, superposed photon pulses (single-photon or weak-coherent states) are transmitted from {\bf A} to {\bf B}. {\bf B}'s delay element is adjusted so that a phase difference $\Delta\phi\in\{0,\pi\}$ between adjacent pulses in a superposition always causes a photodetection at a particular detector. After photodetections are observed, {\bf B} tells {\bf A} at what times they were observed relative to an accepted time standard. Because {\bf A} knows what the phases of each superposition were, {\bf A} knows which of {\bf B}'s detectors registered a photodetection. As mentioned previously, $\eta_{p\parallel}=2/3$ while $\eta_{p+}=3/4$ for the implementations of IWY and BLT sketched in Figs.~\ref{fig:iwy01} and \ref{fig:blt01} respectively. The strongest attack that can be employed by {\bf E} is likely an intercept/resend attack. However, unlike BB84, a Breidbart-like attack is not possible in IWY or BLT as the measurements are made in orthogonal bases. For these schemes, the strongest intercept/resend strategy is to build a receiver much like {\bf B}, but to replace the first BS with a fast switch (we do not specify the switch technology, but merely state it is possible). In this manner, adjacent pulses can be made to overlap without loss of signal due to the first or last pulse in the superposition contributing less to the interference than the central pulses. If exactly one photon exists in the superposition, {\bf E} will measure one superposition phase difference ($\Delta\phi$), but have no knowledge of the other remaining phase difference(s). Thus {\bf E}'s knowledge of the key for the superpositions that are intercepted and resent, and hence $\eta_e$, can be calculated: $\eta_{e\parallel} = 1/2$ and $\eta_{e+} = 1/3$ for IWY and BLT respectively. The corresponding disturbances are $p_{d\parallel} = 1/4$ and $p_{d+} = 1/3$. If the effect of multiple photons in a superposition from the use of weak coherent states is included, {\bf E} can know another fraction of the key equal to $\bar n/4$ or $\bar n/6$ for IWY or BLT respectively. An important variation to either the IWY or BLT scheme is to add a BS and an extra delay element at the receiver as shown in Fig.~\ref{fig:blt01v3}. For this system (denoted with $\oplus$), $\eta_{p\oplus} = 5/8$ while $\eta_{e\oplus} = 1/5$ is significantly reduced, and {\bf E}'s disturbance to the final key is increased to as much as $p_{d\oplus} = 2/5$, if the additional BS is equally reflecting and transmitting. With these considerations, $\eta_p \in \{1/4, 2/3, 3/4,5/8\}$ and $\eta_e/{p_d} \in \{2.34, 2, 1, 1/2\}$ for \{BB84, IWY, BLT and BLT$\oplus$\} respectively. The rate at which key is generated is proportional to $R_k$ and can be approximated from Fig.~\ref{fig:key-rate} where $R_k$ is plotted for the case of $\mu_r = 1 + (1 - p_o)\log_2(1 - p_o) + p_o\log_2(p_o)$: $1$ minus the {\it Shannon entropy}. In any case, in a practical QKD system, $R_k$ will be bounded by the values plotted in Fig.~\ref{fig:key-rate} for the four schemes at the same $p_o$. Figure~\ref{fig:key-rate} highlights the relative value of the schemes presented here. For $p_o \lesssim 0.13$, it is clear that BLT has a great advantage over the other three schemes: qubits are collected at a higher rate. For $p_o \gtrsim 0.13$, BLT$\oplus$ retains a significant advantage due to its reduced leakage ($\eta_e = 1/5$) and increased disturbance ($p_d = 2/5$).
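For a concrete illustration of these curves, evaluating $R_k$ at $p_o = 0.05$ (where $\mu_r \approx 0.71$) gives approximately $0.15$, $0.41$, $0.50$ and $0.43$ for BB84, IWY, BLT and BLT$\oplus$ respectively, while near $p_o \approx 0.13$ (where $\mu_r \approx 0.44$) the BLT and BLT$\oplus$ values essentially coincide at $R_k \approx 0.23$, which is the crossover visible in Fig.~\ref{fig:key-rate}.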
These facts do not consider the added complexity of BLT$\oplus$ at {\bf B}, which includes the second delay element and additional detectors. For simplicity, efficiency, and protection against an intercept/resend attack by {\bf E}, BLT offers great advantages over BB84, IWY and BLT$\oplus$. In conclusion, we have introduced two new quantum key distribution schemes for optical fiber systems based on the recently presented idea of Inoue, Waks and Yamamoto \cite{ref:iwy01}. Our new schemes are more efficient and robust against eavesdropping than other schemes proposed to date (BLT$\oplus$ is more efficient than IWY for error rates $\gtrsim 2$-$3$\%). In particular, BLT also has the advantage of being relatively easy and inexpensive to implement. \begin{figure} \caption{The IWY parallel-delay element interferometer.} \label{fig:iwy01} \end{figure} \begin{figure} \caption{The BLT series-delay element interferometer.} \label{fig:blt01} \end{figure} \begin{figure} \caption{Fiber-based BB84-like transmitter and receiver.} \label{fig:bb84} \end{figure} \begin{figure} \caption{A BLT series-delay interferometer variation with increased disturbance and reduced potential eavesdropper information.} \label{fig:blt01v3} \end{figure} \begin{figure} \caption{Fraction of transmitted qubits which contribute to private key when privacy amplification is considered. This plot clearly shows that the series element idea has a great advantage over BB84, IWY, or BLT$\oplus$ for error rates less than about $0.13$. (The values presented here assume true single-photon QKD and do not consider weak coherent attack strategies, or the amount of key lost through error reconciliation.)} \label{fig:key-rate} \end{figure} \end{document}
\begin{document} \title{Uncertainty is complementary to Complementarity} \author{Ole Steuernagel, \\ Department of Physical Sciences, University of Hertfordshire, Hatfield, Herts, AL10 9AB, UK \\} \date{\today} \twocolumn{ \maketitle \begin{abstract} For any ideal two-path interferometer it is shown that the wave-particle duality of quantum mechanics implies Heisenberg's uncertainty relation and vice versa. It is conjectured that complementarity and uncertainty are two aspects of the same general principle. \end{abstract} \pacs{03.65.Bz} \narrowtext Bohr's principle of complementarity, applied to a two-path interferometer, describes the nature of a quantum system as being 'dualistic' in its particle and wave aspects~\cite{Wheeler.buch}. Although Bohr originally intended the principle of complementarity to be of greater generality than the wave-particle duality of quantum mechanics, both concepts are now often treated as equivalent~\cite{Wheeler.buch,Feynman.buch}. In the famous debates between Einstein and Bohr in the late 1920's~\cite{Wheeler.buch}, complementarity was contested, but the argument finally settled in its favor~\cite{Wheeler.buch,Feynman.buch}. The foundation of the concept of complementarity, however, is still debated~\cite{Scully91.nature,Zajonc91,Englert95.nature,Englert96,Knight98.nature,Duerr98,NewScientist98,Stern90,Bhandari92,Tan93,Storey94,Storey95.nature,Wiseman95.nature}. Whereas Bohr invariably used different versions of Heisenberg's uncertainty relation in refuting Einstein's attempts to disprove the concept of complementarity~\cite{Wheeler.buch}, a theoretical proposal from 1991~\cite{Scully91.nature} and an experiment performed last year~\cite{Duerr98} have been interpreted as showing that complementarity can arise without uncertainty~\cite{Knight98.nature,Duerr98,NewScientist98}. The question therefore remains as to whether complementarity always arises from Heisenberg's uncertainty principle~\cite{Bhandari92,Tan93,Storey94,Storey95.nature,Wiseman95.nature} or whether a 'more fundamental mechanism' -- entanglement without uncertainty -- can be at work~\cite{Englert96,Knight98.nature,NewScientist98}. \\ To answer this question, optimal two-path interferometers are analyzed; this is all that is needed since we are only interested in fundamental physical limits, thus neglecting unbalanced, lossy, and otherwise imperfect setups. Moreover, two paths, described by two quantum mechanical modes, capture all the essential physics, including all double slit setups and all systems currently under discussion~\cite{Scully91.nature,Zajonc91,Englert95.nature,Englert96,Knight98.nature,Duerr98,NewScientist98,Stern90,Bhandari92,Tan93,Storey94,Storey95.nature,Wiseman95.nature}. \\ In a two-mode interferometer the two 'paths' do not necessarily refer to spatially separated paths; they can, for instance, be two different spin states in Ramsey interferometry~\cite{Scully91.nature,Duerr98}. Heisenberg's position-momentum uncertainty is therefore not always appropriate for describing the loss of interference fringes when determining the path of a quantum particle~\cite{Scully91.nature,Duerr98,Bhandari92}. This has recently been debated~\cite{Scully91.nature,Englert95.nature,Bhandari92,Tan93,Storey94,Storey95.nature,Wiseman95.nature} but seems to be resolved now~\cite{Duerr98}: complementarity can be enforced without invoking {\em position-momentum uncertainty}.
\\ We will, however, see that the assumption that path and wave measurements ($\hat P$ and $\hat W$) are complementary always leads to the {\em general Heisenberg--Robertson uncertainty relation}~\cite{Wheeler.buch,Robertson29,Maassen88} \begin{eqnarray} \Delta \hat P \cdot \Delta \hat W \geq \frac{1}{2} \; |\langle [\hat P,\hat W] \rangle | \; , \label{heisrob.unc} \end{eqnarray} and that the ensuing uncertainties are sufficient to make inaccessible either the path or the wave aspect of the quantum system: complementarity implies uncertainty and this uncertainty enforces complementarity. \\ In an optimal interferometer the two paths are identified with two quantum mechanical modes and can, without loss of generality, be assigned the two basis states of a formal spin-1/2 system $|\psi_+\rangle =(1,0)$ and $|\psi_-\rangle = (0,1)$. To scan the interference pattern the relative phase $\phi$ between these two basis states is changed by a phase shifter $\hat \Phi \;( = \exp[-i \; \hat \sigma_z \; \phi/2]$ without loss of generality~\cite{Englert96}). When measuring the particle aspect, i.e. experimentally discriminating the paths, see Fig.1., the paths are assigned two different values~$p_\pm$. Using the above representation for $|\psi_\pm\rangle$ we hence find that the path operator $\hat P $ has the general form $\hat P = p_+ |\psi_+\rangle\langle \psi_+ | + p_- |\psi_- \rangle \langle \psi_- | $ which, after suitable rescaling and without loss of generality, leads to our choice $p_\pm =\pm 1$ and the customary form~\cite{Englert96} \begin{eqnarray} \hat P =\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) = \hat \sigma_z \, , \label{poper.unc} \end{eqnarray} where the $\sigma$'s are the Pauli-matrices for spin-1/2. Analogous considerations hold for the operator~$\hat W$ describing the measurement of the wave features of the quantum particle. This is easiest seen from the fact that the determination of the interference pattern also amounts to a path measurement,~$\hat \sigma_z^{after} $, after the final beam-splitter~$\hat B$, see Fig.1. We conclude that, without loss of generality, $\hat W$ has two eigenvalues $w_\pm = \pm 1$ with associated eigenvectors $| \omega_\pm \rangle $. \\ What is the general form of $\hat W$ in terms of the basis vectors~$|\psi_\pm \rangle $ of $\hat P$? To find this connection between path and wave measurement all we need to use is the fact that they are complementary to each other: \\ "We say that two variables are 'complementary' if precise knowledge of one of them implies that all possible outcomes of measuring the other one are equally probable."~\cite{Scully91.nature} \\ With our convention~$p_\pm =\pm 1$ the expectation value of a path-measurement~$\hat P$, after a preceding measurement has projected the particle into a wave-eigenstate~$| \omega_\pm \rangle $, must therefore be zero \begin{eqnarray} \langle \omega_+ | \hat P |\omega_+ \rangle = 0 = \langle \omega_- | \hat P |\omega_- \rangle \, . \label{pzero.unc} \end{eqnarray} The complementary statement is \begin{eqnarray} \langle \psi_+ | \hat W |\psi_+ \rangle = 0 = \langle \psi_- | \hat W |\psi_- \rangle \, . \label{wzero.unc} \end{eqnarray} Both statements are in accord with other quantifications of complementarity suggested in recent years~\cite{Wooters79,Mandel91.opt,Englert96}. 
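Equivalently, condition~(\ref{wzero.unc}) states that $\hat W$ has vanishing diagonal elements in the basis $\{|\psi_\pm\rangle\}$; since the eigenvalues of $\hat W$ are $w_\pm = \pm 1$, its off-diagonal element must be a pure phase, so that $\hat W$ is fixed up to a single angle, the $\phi_0$ appearing below.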
Using the orthonormality condition $\langle \omega_+ | \omega_- \rangle = 0$ we can solve equation~(\ref{pzero.unc}) and find \begin{eqnarray} |\omega_\pm \rangle = \frac{1}{\sqrt{2}} \; (\pm \exp[-i\phi_0/2],\exp[i\phi_0/2]) \, . \label{balstat.unc} \end{eqnarray} The phase angle~$\phi_0$ depends on the details of the interferometric setup. Note that the $|\omega_\pm \rangle$ are balanced, i.e., the particle is equally likely to be found in the upper or lower path of the interferometer. Only such states allow for maximum contrast of the interference pattern~\cite{Mandel91.opt}. Consequently all states $ |\phi \rangle $ we encounter in an optimal interferometer need to have this balanced form \begin{eqnarray} |\phi \rangle = \frac{1}{\sqrt{2}} \; (\exp[-i\phi/2],\exp[i\phi/2]) \, . \label{bala.unc} \end{eqnarray} This also follows from the fact that a pure phase shift operation~$\hat \Phi $ applied to $| \omega_\pm \rangle$ can only generate states of the form~(\ref{bala.unc}). Using~(\ref{balstat.unc}) we find \begin{eqnarray} \hat W & = & w_+ |\omega_+\rangle \langle \omega_+ | + w_- |\omega_- \rangle \langle \omega_- | \nonumber \\ & = & \cos \phi_0 \; \hat \sigma_x + \sin \phi_0 \; \hat \sigma_y \, . \label{woper.unc} \end{eqnarray} This form of $\hat W $ might look unfamiliar but it describes, as we would expect, a path measurement after the last beam splitter $\hat W = \hat B^\dagger \hat \sigma_z \hat B = \hat \sigma_z^{after} $ expressed in terms of the basis inside the interferometer~\cite{vertausch}. Using~(\ref{bala.unc}) and~(\ref{woper.unc}) we find the expectation value of~$ \hat W $ is \begin{eqnarray} \langle \phi | \hat W | \phi \rangle = \cos (\phi-\phi_0) \, , \label{wval.unc} \end{eqnarray} the expected classical interference pattern. This confirms that the above form of $\hat W$ indeed represents the sought-after 'wave-operator'. We have succeeded in deriving $\hat W$ from equation~(\ref{pzero.unc}) -- the principle of complementarity -- alone. \\ Obviously $\hat P$ and $\hat W$ do not commute $([\hat \sigma_z,\hat \sigma_x]=2i\hat \sigma_y$ and $[\hat \sigma_z,\hat \sigma_y]=-2i\hat \sigma_x)$, hence, the uncertainty relation~(\ref{heisrob.unc}) becomes \begin{eqnarray} \Delta \hat P \cdot \Delta \hat W \geq | \langle \cos \phi_0 \; \hat \sigma_y - \sin \phi_0 \; \hat \sigma_x \rangle | \, . \label{heise.unc} \end{eqnarray} For an optimal interferometer, i.e., for a balanced state~(\ref{bala.unc}), this relation assumes the specific form \begin{eqnarray} \Delta_\phi \hat P \cdot \Delta_\phi \hat W \geq | \sin( \phi -\phi_0) | \, . \label{heisespec.unc} \end{eqnarray} In separate calculations one can easily confirm that $\Delta_\phi \hat P = 1$ and $\Delta_\phi \hat W=| \sin(\phi - \phi_0 )|$; a pictorial representation of these uncertainties is given in Fig.2. One might wonder whether Eq.~(\ref{heisespec.unc}) describes a valid Heisenberg uncertainty relationship since it gives a vanishing lower bound for states for which $\phi - \phi_0 $ is an integer multiple of $\pi$. But this is no cause for worry: it reflects the well-known fact that this bound vanishes for eigenstates $(|\phi \rangle = | \omega_\pm \rangle $) of the considered observables~\cite{Maassen88}. For the important case of greatest interferometric sensitivity, when $|\delta \langle \hat W \rangle /\delta \phi | = |\delta \cos ( \phi -\phi_0)/\delta \phi | $ is maximal, the uncertainty is maximal as well.
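The variances quoted above follow directly from $\hat P^2 = \hat W^2 = 1$ together with $\langle \phi | \hat P | \phi \rangle = 0$ and Eq.~(\ref{wval.unc}): $\Delta_\phi \hat P = 1$ and $\Delta_\phi \hat W = \sqrt{1 - \cos^2(\phi - \phi_0)} = |\sin(\phi - \phi_0)|$.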
Fig.~2 shows that this uncertainty relation quantifies 'just what is needed': the variances have the magnitudes required to project the quantum state $|\phi \rangle$ into the eigenstates of the respective measurement. This projection uncertainty completely destroys the complementary information; when measuring the interference pattern all path knowledge is lost~(\ref{pzero.unc}) and vice versa~(\ref{wzero.unc}): uncertainty enforces complementarity. \\ Uncertainty relations~(\ref{heise.unc}) and~(\ref{heisespec.unc}) hold for any two-mode duality experiment. For the sake of generality we have not specified the coupling that entangles particle and detector, but, once the coupling is specified, relation~(\ref{heise.unc}) encourages us to search for other problem-specific uncertainty relations that allow us to get a deeper understanding of the respective mechanism~\cite{Stern90,Luis98} and even to discover new physics~\cite{Bhandari92}. \\ Our analysis also sheds fresh light on recent discussions: in particular we find that the claim that complementarity could be enforced without the unavoidable 'measurement disturbances' characteristic of the principle of uncertainty~\cite{Scully91.nature,Knight98.nature,NewScientist98} is not substantiated. Also, the related idea that complementarity is a 'deeper principle' than uncertainty~\cite{Englert96,Knight98.nature,NewScientist98} is in disagreement with our present findings. \\ The analysis of complementarity and uncertainty presented here shows that for any two-path interferometer uncertainty and complementarity mutually imply each other; this leads me to the general conjecture that uncertainty and complementarity are two aspects of the same principle: Bohr's principle of complementarity is complementary to Heisenberg's uncertainty principle. \acknowledgments I wish to thank John Vaccaro, Howard Wiseman, Jim Collett and Koji Murakawa for lively discussions. \begin{figure} \caption{A cartoon of a general two-mode interferometer: to create a balanced state~(\protect\ref{bala.unc}).} \end{figure} \begin{figure} \caption{A Bloch-sphere representation of the general balanced state $| \phi \rangle $ of Eq.~(\protect\ref{bala.unc}).} \end{figure} \end{document}
\begin{document} \title{Eulerian quasisymmetric functions and cyclic sieving} \author[Sagan]{Bruce Sagan} \address{Department of Mathematics, Michigan State University, East Lansing, MI 48824} \email{[email protected]} \author[Shareshian]{John Shareshian$^1$} \address{Department of Mathematics, Washington University, St. Louis, MO 63130} \thanks{$^{1}$Supported in part by NSF Grants DMS 0604233 and 0902142} \email{[email protected]} \author[Wachs]{Michelle L. Wachs$^2$} \address{Department of Mathematics, University of Miami, Coral Gables, FL 33124} \email{[email protected]} \thanks{$^{2}$Supported in part by NSF Grants DMS 0604562 and 0902323} \date{September 15, 2009} \maketitle \begin{center}{\it Dedicated to Dennis Stanton} \end{center} \begin{abstract} It is shown that a refined version of a q-analogue of the Eulerian numbers together with the action, by conjugation, of the subgroup of the symmetric group $S_n$ generated by the $n$-cycle $(1,2,\dots,n)$ on the set of permutations of fixed cycle type and fixed number of excedances provides an instance of the cyclic sieving phenomenon of Reiner, Stanton and White. The main tool is a class of symmetric functions recently introduced in work of two of the authors. \end{abstract} \section{Introduction} \label{intro} In \cite{sw,ShWa1}, certain quasisymmetric functions, called ``Eulerian quasisymmetric functions,'' are introduced and shown to be in fact symmetric functions. These symmetric functions have been useful in the study of the joint distribution of the permutation statistics major index and excedance number. There are various versions of the Eulerian quasisymmetric functions. They are defined by first associating a fundamental quasisymmetric function with each permutation in the symmetric group $S_n$ and then summing these fundamental quasisymmetric functions over permutations in certain subsets of $S_n$. To obtain the most refined versions, the cycle-type Eulerian quasisymmetric functions $Q_{\lambda,j}$, one sums the fundamental quasisymmetric functions associated with the permutations having exactly $j$ excedances and cycle type $\lambda$. By summing over all the permutations in $S_n$ having $j$ excedances and $k$ fixed points, one obtains the less refined version $Q_{n,j,k}$. The precise definition of the Eulerian quasisymmetric functions and other key terms can be found in Section \ref{defs}. Shareshian and Wachs \cite{sw,ShWa1} derive a formula for the generating function of $Q_{n,j,k}$ which specializes to a $(q,r)$-analog of a classical formula for the exponential generating function of the Eulerian polynomials. The $(q,r)$-analogue of the classical formula is given by \begin{equation} \label{expgeneqfix} \sum_{n \geq 0}A^{{\rm maj},{\rm exc},{\rm fix}}_n(q,t,r)\frac{z^n}{[n]_q!}=\frac{(1-tq)\exp_q(rz)}{\exp_q(ztq)-tq\exp_q(z)}, \end{equation} where $$A^{{\rm maj},{\rm exc},{\rm fix}}_n(q,t,r) := \sum_{\sigma \in S_n} q^{{\rm maj}(\sigma)} t^{{\rm exc}(\sigma)} r^{{\rm fix}(\sigma)},$$ and \[ \exp_q(z):=\sum_{n \geq 0}\frac{z^n}{[n]_q!}. \] The cycle-type Eulerian quasisymmetric functions $Q_{\lambda,j}$ remain somewhat mysterious, and one might expect that better understanding of them will lead to further results on permutation statistics. In this paper, we provide evidence that this expectation is reasonable.
With Theorem \ref{psum}, we prove a conjecture from \cite{ShWa1}, describing the expansion of $Q_{\lambda,j}$, for $\lambda=(n)$, in terms of the power sum basis for the space of symmetric functions. Combining Theorem~\ref{psum} with a technique of D\'esarm\'enien \cite{des}, we are able to evaluate, at all $n^{th}$ roots of unity, the {\it cycle-type $q$-Eulerian numbers} \begin{equation} \label{defqeuler} a_{\lambda,j}(q):=\sum_{\sigma \in S_{\lambda,j}}q^{{\rm maj}(\sigma)-{\rm exc}(\sigma)}, \end{equation} where $S_{\lambda,j}$ is the set of all $\sigma \in S_n$ having exactly $j$ excedances and cycle type $\lambda$. This and an analysis of the excedance statistic on the centralizers $C_{S_n}(\tau)$ of certain permutations $\tau\in S_n$ enable us to establish the relationship between the polynomials $a_{\lambda,j}(q)$ and the cyclic sieving phenomenon of Reiner, Stanton and White \cite{RSW} given in Theorem \ref{main1} below. \begin{notation} For a positive integer $d$, $\omega_d$ will denote throughout this paper an arbitrary complex primitive $d^{th}$ root of $1$. \end{notation} \begin{thm} \label{main1} Let $\gamma_n=(1,2,\ldots, n) \in S_n$ and let $G_n=\langle \gamma_n \rangle \leq S_n$. Then for all partitions $\lambda$ of $n$ and $j \in \{0,1,\dots,n-1\}$, the group $G_n$ acts on $S_{\lambda,j}$ by conjugation and the triple $(G_n,S_{\lambda,j},a_{\lambda,j}(q))$ exhibits the cyclic sieving phenomenon. In other words, if $\tau \in G_n$ has order $d$ then \begin{equation} \label{maineq1} a_{\lambda,j}(\omega_d)=|C_{S_n}(\tau) \cap S_{\lambda,j}|. \end{equation} \end{thm} For $\lambda$ a partition of $n$, let $S_\lambda$ be the set of all $\sigma \in S_n$ of cycle type $\lambda$ and define the {\it cycle type Eulerian polynomial} associated with $\lambda$ as $$A_\lambda^{{\rm maj},{\rm exc}}(q,t):= \sum_{\sigma \in S_\lambda} q^{{\rm maj}(\sigma)} t^{{\rm exc}(\sigma)}.$$ Then (\ref{maineq1}) can be rewritten as \begin{equation} \label{maineq2} A_\lambda^{{\rm maj},{\rm exc}}(\omega_d, t \omega_d^{-1}) = \sum_{\sigma \in C_{S_n}(\tau) \cap S_\lambda} t^{{\rm exc}(\sigma)},\end{equation} which is clearly a refinement of \begin{equation} \label{majexcfixeq} A_n^{{\rm maj},{\rm exc},{\rm fix}}(\omega_d,t\omega_d^{-1},s) = \sum_{\sigma \in C_{S_n}(\tau) } t^{{\rm exc}(\sigma)} s^{{\rm fix}(\sigma)}.\end{equation} We also prove that both sides of (\ref{majexcfixeq}) are, in fact, equal to \begin{equation} \label{bothsideseq} A^{{\rm maj},{\rm exc},{\rm fix}}_{\frac n d}(1,t, \frac{s^d+t[d-1]_t}{[d]_t})\,\, [d]_t^{\frac n d},\end{equation} which by setting $s=1$ yields $$ A_{n}^{{\rm maj},{\rm exc}}(\omega_d,t \omega_d^{-1}) = A_{\frac{n}{d}}(t) \,[d]^{\frac {n} d}_t,$$ for all divisors $d$ of $n$. For the cycle-type Eulerian polynomial $A_{(n+1)}^{{\rm maj},{\rm exc}}(q,t)$, we obtain the similar looking result, \begin{equation} \label{otherform} A_{(n+1)}^{{\rm maj},{\rm exc}}(\omega_d,t \omega_d^{-1}) = t A_{\frac{n}{d}}(t) \,[d]^{\frac {n} d}_t, \end{equation} for all divisors $d$ of $n$. The paper is organized as follows. In Section~\ref{defs}, we review definitions of various terms such as cyclic sieving and Eulerian quasisymmetric functions. We also present some preliminary results on Eulerian quasisymmetric functions from \cite{ShWa1}.
In Section~\ref{symsec} we describe a technique that uses symmetric function theory to evaluate certain polynomials at roots of unity based on work of D\'esarm\'enien \cite{des}. Theorem~\ref{psum} mentioned above is proved in Section~\ref{psumsec} by means of results of \cite{ShWa1} which enable one to express the cycle-type Eulerian quasisymmetric functions in terms of the less refined version of Eulerian quasisymmetric functions. The proof of Theorem~\ref{main1} appears in Section~\ref{eqth1sec}. In Section~\ref{finsec}, we prove that both sides of (\ref{majexcfixeq}) are equal to (\ref{bothsideseq}), that (\ref{otherform}) holds, and that another triple exhibits the cyclic sieving phenomenon, namely $(G_{n}, S_{n,j}, a_{(n+1),j+1}(q))$, where $S_{n,j}$ is the set of all permutations in $S_n$ with $j$ excedances. \section{Definitions, known facts and preliminary results} \label{defs} \subsection{Cyclic Sieving} \label{cycsiv} Let $G$ be a finite cyclic group acting on a set $X$, and let $f(q)$ be a polynomial in $q$ with nonnegative integer coefficients. For $g \in G$, let ${\rm Fix}(g)$ be the set of fixed points of $g$ in $X$. The triple $(G,X,f(q))$ {\it exhibits the cyclic sieving phenomenon} of Reiner, Stanton and White \cite{RSW} if for each $g \in G$ we have \begin{equation} \label{cseq} f(\omega_{|g|})=|{\rm Fix}(g)|, \end{equation} where $|g|$ is the order of $g$. \begin{remark} Since all elements of order $d$ in a cyclic group $G$ generate the same subgroup, they have the same set of fixed points in any action. Thus our formulation of the cyclic sieving phenomenon is equivalent to the definition given in \cite{RSW}. \end{remark} Note that if $(G,X,f(q))$ exhibits the cyclic sieving phenomenon then $f(1)=|X|$, and interesting examples arise where $f(q)$ is the generating function for some natural statistic on $X$, that is, there exists some useful function $s:X \rightarrow {\mathbb N}$ such that \[ f(q)=\sum_{x \in X}q^{s(x)}. \] See \cite{RSW} for many examples, and \cite{RSW2,EF,BRS, BR, Rho, PPR, PS} for more recent work. \subsection{Permutation Statistics} Recall that for a permutation $\sigma \in S_n$ acting from the right on $[n]:=\{1,\ldots,n\}$, the {\it excedance set} of $\sigma$ is \[ {\rm Exc}(\sigma):=\{i \in [n-1]:i\sigma>i\} \] and the {\it descent set} of $\sigma$ is \[ {\rm Des}(\sigma):=\{i \in [n-1]:i\sigma>(i+1)\sigma\}. \] The {\it major index} of $\sigma$ is \[ {\rm maj}(\sigma):=\sum_{i \in {\rm Des}(\sigma)}i, \] and the {\it excedance and descent statistics} of $\sigma$ are, respectively, \[ {\rm exc}(\sigma):=|{\rm Exc}(\sigma)|, \] and \[ {\rm des}(\sigma):=|{\rm Des}(\sigma)|. \] Let ${\rm Fix}(\sigma)$ denote the set of fixed points of $\sigma$, that is $${\rm Fix}(\sigma) := \{i \in [n] : i\sigma = i \}$$ and let $${\rm fix}(\sigma) := |{\rm Fix}(\sigma)|.$$ The excedance and descent statistics are equidistributed, and the $n^{th}$ {\it Eulerian polynomial} $A_n(t)$ can be defined as \[ \sum_{\sigma \in S_n}t^{{\rm exc}(\sigma)}=A_n(t)=\sum_{\sigma \in S_n}t^{{\rm des}(\sigma)}. \] The Eulerian polynomial is also the generating polynomial for the {\it ascent statistic} on $S_n$, ${\rm asc}(\sigma):=|\{i \in [n-1]:i\sigma < (i+1)\sigma\}|$.
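For example, if $\sigma=641532 \in S_6$ (written in one line notation, so that $1\sigma = 6$, $2\sigma = 4$, and so on), then ${\rm Exc}(\sigma)=\{1,2,4\}$ and ${\rm Des}(\sigma)=\{1,2,4,5\}$, so that ${\rm exc}(\sigma)=3$, ${\rm des}(\sigma)=4$, ${\rm maj}(\sigma)=12$ and ${\rm fix}(\sigma)=0$; the same permutation is used as an example in Section~\ref{Eulerian}.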
For permutation statistics $s_1,\ldots,s_k$ and a positive integer $n$, define the polynomial \[ A_n^{ s_1,\ldots,s_k}(t_1,\ldots,t_k):=\sum_{\sigma \in S_n}t_1^{s_1 (\sigma) } t_2^{ s_2 (\sigma)}\cdots t_k^{s_k (\sigma)}. \] Also, set \[ A_0^{ s_1,\ldots,s_k}(t_1,\ldots,t_k):=1. \] \subsection{Partitions and Symmetric Functions} We use standard notation for partitions and symmetric functions. References for basic facts are \cite{Mac,Sta2,Sag}. In particular, ${\sf p}_\lambda$ and ${\sf h}_\lambda$ will denote, respectively, the power sum and complete homogeneous symmetric functions associated to a partition $\lambda$. We use $l(\lambda)$ to denote the number of (nonzero) parts of $\lambda$ and $m_j(\lambda)$ to denote the number of parts of $\lambda$ equal to $j$. We write ${\rm Par}(n)$ for the set of all partitions of $n$. For $\lambda \in {\rm Par}(n)$, define the number $$ z_\lambda:=\prod_{j=1}^{n}j^{m_j(\lambda)}m_j(\lambda)!.$$ We use two standard methods to describe a partition $\lambda \in {\rm Par}(n)$. The first is to write $\lambda=(\lambda_1,\ldots,\lambda_{l(\lambda)})$, listing the (nonzero) parts of $\lambda$ so that $\lambda_i \geq \lambda_{i+1}$ for all $i$. The second is to write $\lambda=1^{m_1(\lambda)}\ldots n^{m_n(\lambda)}$, usually suppressing those symbols $i^{m_i(\lambda)}$ such that $m_i(\lambda)=0$ and writing $i^1$ as simply $i$. In particular, if $n=dk$ then $d^k$ represents the partition with $k$ parts of size $d$ and no other parts. If $\lambda=(\lambda_1,\ldots,\lambda_k) \in {\rm Par}(n)$ and $q \in {\mathbb Q}$ with $q\lambda_i \in {\mathbb P}$ for all $i \in [k]$, we write $q\lambda$ for $(q\lambda_1,\ldots,q\lambda_k) \in {\rm Par}(qn)$. For each $\sigma \in S_n$, let $\lambda(\sigma)$ denote the cycle type of $\sigma$. Given $\lambda \in {\rm Par}(n)$, we write $S_\lambda$ for the set of all $\sigma \in S_n$ having cycle type $\lambda$. As in (\ref{defqeuler}), we write $S_{\lambda,j}$ for the set of those $\sigma \in S_\lambda$ satisfying ${\rm exc}(\sigma)=j$. For symmetric functions $f,g$ with coefficients in ${\mathbb Q}[t]$, $f[g]$ will denote the plethysm of $g$ by $f$. The same notation will be used for plethysm of symmetric power series with no bound on their degree. One such power series is ${\sf H}:=\sum_{n \geq 1}{\sf h}_n$. If we set \begin{eqnarray*} {\sf L} & := & \sum_{d \geq 1} \frac{\mu(d)}{d}\log(1+{\sf p}_d) \\ & = & \sum_{d \geq 1}\frac{\mu(d)}{d}\sum_{i \geq 1}\frac{(-1)^{i-1}}{i}{\sf p}_d^i, \end{eqnarray*} where $\mu$ is the classical M\"obius function, then ${\sf H}$ and ${\sf L}$ are plethystic inverses, that is, \begin{equation} \label{plinv} {\sf L}[{\sf H}[f]]={\sf H}[{\sf L}[f]]=f \end{equation} for all symmetric power series $f$. (This is due to Cadogan, see \cite{Cad} or \cite[Exercise 7.88e]{Sta2}.) Note also that for any power series $h(t,x_1,x_2,\ldots)$ with coefficients in ${\mathbb Q}$ that is symmetric in $x_1,x_2,\ldots$ and any $d \in {\mathbb P}$, we have \begin{equation} \label{pple} {\sf p}_d[h]=h(t^d,x_1^d,x_2^d,\ldots). \end{equation} We shall use without further mention the facts $(f+g)[h]=f[h]+g[h]$ and $(fg)[h]=f[h]g[h]$.
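To illustrate the notation: for $\lambda = (2,2,1) \in {\rm Par}(5)$ we have $m_1(\lambda)=1$, $m_2(\lambda)=2$ and $z_\lambda = (1^1\, 1!)(2^2\, 2!) = 8$, while (\ref{pple}) gives, for instance, ${\sf p}_2[t\,{\sf p}_1] = t^2 {\sf p}_2$.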
\subsection{$q$-Analogues} We use the standard notation for polynomial analogues of positive integers, that is, for a positive integer $n$ and a variable $q$, we define \[ [n]_q:=\sum_{j=0}^{n-1}q^j = \frac{1-q^n}{1-q} \] and $$[n]_q!:= [n]_q[n-1]_q\cdots[1]_q.$$ Also define $$[0]_q! :=1.$$ It is well-known that for any sequence $(k_1,\dots, k_m)$ of nonnegative integers whose sum is $n$, the $q$-multinomial coefficient $$ \left [ \begin{array}{c} n \\ k_1,\dots ,k_{m} \end{array} \right] _q := \frac{[n]_q!} {[k_1]_q! \cdots [k_m]_q! } $$ is always a polynomial in ${\mathbb N}[q]$. The following $q$-analogue of the multinomial version of the Pascal recurrence relation is also well-known (see \cite[(17b)]{Sta1}): \begin{equation} \label{pascal} \left [ \begin{array}{c} n \\ k_1,\dots ,k_m \end{array} \right]_q = \sum_{i=1}^{m} q^{ k_{i+1} +\dots + k_{m}} \left [ \begin{array}{c} n-1 \\ k_1,\dots ,k_i-1, \dots, k_{m} \end{array} \right]_q,\end{equation} where $(k_1,\dots, k_m)$ is a sequence of positive integers whose sum is $n$. We will need the following elementary fact, which also plays a role in the work of Reiner, Stanton and White \cite{RSW}. \begin{prop}[see {\cite[Equation (4.5)] {RSW}}] \label{multinom} Let $(k_1,\dots,k_m)$ be a sequence of nonnegative integers whose sum is $n$. If $d|n$ then $$ \left [ \begin{array}{c} n \\ k_1,\dots ,k_{m} \end{array} \right] _q |_{q=\omega_d} = \begin{cases} \left ( \begin{array}{c} \frac n d \\ \frac {k_1} d,\dots ,\frac {k_{m}} d \end{array} \right) &\mbox{ if } d| k_i \,\, \forall i \in [m] \\ \,\, 0 &\mbox{ otherwise. } \end{cases}$$ \end{prop} \subsection{The Eulerian quasisymmetric functions} \label{Eulerian} Given a permutation $\sigma \in S_n$, we write $\sigma$ in one line notation, \[ \sigma=\sigma_1\ldots\sigma_n, \] where $\sigma_i=i\sigma$. Set \[ [\overline{n}]:=\{\overline{i}:i \in [n]\}, \] and let $w(\sigma)$ be the word in the alphabet ${\mathcal A}:=[n] \cup [\overline{n}]$ obtained from $\sigma$ by replacing $\sigma_i$ with $\overline{\sigma_i}$ whenever $i \in {\rm Exc}(\sigma)$. Order ${\mathcal A}$ by \[ \overline{1}<\ldots<\overline{n}<1<\ldots<n, \] and for any word $w:=w_1\ldots w_n$ from ${\mathcal A}$, set \[ {\rm Des}(w):=\{i \in [n-1]:w_i>w_{i+1}\}. \] Now, for $\sigma \in S_n$, define \[ {\rm Dex}(\sigma):={\rm Des}(w(\sigma)). \] For example, if $\sigma=641532$ then $w(\sigma)=\overline{6}\overline{4}1\overline{5}32$ and ${\rm Dex}(\sigma)=\{1,3,5\}$. Recall now that a {\it quasisymmetric function} is a power series (with rational coefficients) $f$ of bounded degree in variables $x_1,x_2,\ldots$ such that if $j_1<\ldots<j_k$ and $l_1<\ldots<l_k$, then for all $a_1,\ldots,a_k$ the coefficients in $f$ of $\prod_{i=1}^{k}x_{j_i}^{a_i}$ and $\prod_{i=1}^{k}x_{l_i}^{a_i}$ are equal. The usual addition, multiplication and scalar multiplication make the set ${\mathcal Q}$ of quasisymmetric functions a ${\mathbb Q}$-algebra that strictly contains the algebra of symmetric functions. For $n \in {\mathbb P}$ and $S \subseteq [n-1]$, set \[ {\rm Mon}_S:=\{\prod_{i=1}^{n}x_{j_i}:j_i \geq j_{i+1} \mbox{ for all } i \in [n-1] \mbox{ and } j_i>j_{i+1} \mbox{ for all } i \in S\}, \] and define the {\it fundamental quasisymmetric function} associated with $S$ to be \[ F_S:=\sum_{x \in {\rm Mon}_S}x \in {\mathcal Q}.
\] Recall from above that we have defined $S_{\lambda,j}$ to be the set of all permutations of cycle type $\lambda$ with $j$ excedances. The {\it Eulerian quasisymmetric function} associated to the pair $(\lambda,j)$ is \[ Q_{\lambda,j}:=\sum_{\sigma \in S_{\lambda,j}}F_{{\rm Dex}(\sigma)} \in {\mathcal Q}. \] The Eulerian quasisymmetric functions were introduced in \cite{ShWa1} as a tool for studying the $({\rm maj},{\rm exc})$ $q$-analogue of the Eulerian polynomials. The connection between the Eulerian quasisymmetric functions and the $q$-Eulerian numbers is given in the following proposition. The {\em stable principal specialization} $\Omega$ is a homomorphism from the algebra of quasisymmetric functions ${\mathcal Q}$ to the algebra of formal power series ${\mathbb Q}[[q]]$ defined by $\Omega(x_i) = q^{i-1}$. \begin{prop}[{\cite[Equation (2.13)] {ShWa1}}] {\footnote{Equation~2.13 in \cite{ShWa1} has an extra factor of $q^{j}$ because $a_{\lambda,j} (q)$ is defined there to be the ${\rm maj}$ enumerator of $S_{\lambda,j}$ rather than the ${\rm maj}-{\rm exc}$ enumerator.}} \label{stabprop}For all partitions $\lambda$ of $n$ and $j\in\{0,1,\dots, n-1\}$, let $a_{\lambda,j}(q)$ be as in (\ref{defqeuler}). Then $$\Omega Q_{\lambda,j} = a_{\lambda,j} (q) / \prod_{i=1}^n (1-q^i). $$ \end{prop} In \cite{ShWa1}, it is also shown that in fact $Q_{\lambda,j}$ is always a symmetric function. If one knows $Q_{(n),j}$ for all $n,j$, then a fairly compact explicit formula for each $Q_{\lambda,j}$ can be obtained from Corollary 6.1 of \cite{ShWa1}, which says that for any $\lambda \in {\rm Par}(n)$, \begin{equation} \label{ple} \sum_{j=0}^{n-1}Q_{\lambda,j}t^j=\prod_{i=1}^{n}{\sf h}_{m_i(\lambda)}\left[\sum_{l=0}^{i-1}Q_{(i),l}t^l\right]. \end{equation} As noted in \cite{ShWa1}, if we set \[ Q_{n,j}:=\sum_{\lambda \in {\rm Par}(n)}Q_{\lambda,j} \] for $n \ge 1$, and $$Q_{0,0}=Q_{(0),0}=1,$$ equation (\ref{ple}) implies $$ \sum_{n,j \geq 0}Q_{n,j}t^j=\sum_{n \geq 0}{\sf h}_n\left[\sum_{i,j \geq 0}Q_{(i),j}t^j\right]={\sf H}\left[\sum_{i,j \geq 0}Q_{(i),j}t^j\right], $$ which by (\ref{plinv}) is equivalent to \begin{equation} \label{ple2} \sum_{n,j \geq 0}Q_{(n),j}t^j={\sf L}\left[\sum_{i,j \geq 0}Q_{i,j}t^j\right]. \end{equation} Proposition 6.6 of \cite{ShWa1} gives an explicit formula for $Q_{n,j}$ in terms of the power sum symmetric function basis, \begin{equation} \label{qnj} \sum_{j=0}^{n-1}Q_{n,j}t^j=\sum_{\nu \in {\rm Par}(n)}z_\nu^{-1}A_{l(\nu)}(t)\prod_{i=1}^{l(\nu)}[\nu_i]_t{\sf p}_\nu. \end{equation} By combining (\ref{ple}), (\ref{ple2}) and (\ref{qnj}) we obtain a formula for each $Q_{\lambda,j}$, which will be used in Section~\ref{psumsec} to prove a conjecture from \cite{ShWa1} giving the expansion of $Q_{(n),j}$ in the power sum basis. \section{A symmetric function technique} \label{symsec} We describe here a general technique for evaluating polynomials at roots of unity based on a technique of D\'esarm\'enien \cite{des}. This technique provides a key step in our proof of Theorem~\ref{main1}. One can also prove Theorem 1.1 using Springer's theory of regular elements in place of the technique we give here. A description of the relevance of Springer's work to the cyclic sieving phenomenon appears in [RSWh].
Given a homogeneous symmetric function $F$ of degree $n$ and a partition $\nu$ of $n$, let $\chi^F_\nu$ be the coefficient of $z_\nu^{-1} p_\nu$ in the expansion of $F$ in terms of the basis $\{z^{-1}_\nu p_\nu : \nu \in {\rm Par}(n)\}$ for the space of homogeneous symmetric functions of degree $n$. That is, $\chi^F_\nu$ is uniquely determined by $$F = \sum_{\nu\in {\rm Par}(n)} \chi^F_\nu z^{-1}_\nu p_\nu.$$ Although we will not make use of this, we note that if $F$ is the Frobenius characteristic of a class function of $S_n$ then $\chi^F_\nu$ is the value of the class function on permutations of cycle type $\nu$. Recall that $\Omega$ denotes the stable principal specialization defined in Section~\ref{Eulerian}. The following result is implicit in \cite{des}. \begin{prop} \label{thdes} Suppose $f(q) \in {\mathbb Q}[q]$ and there exists a homogeneous symmetric function $F$ of degree $n$ with coefficients in ${\mathbb Q}$ such that $$f(q) =\prod_{i=1}^n (1-q^i)\,\, \Omega F.$$ Then for all $d,k \in {\mathbb P}$ such that $n \in \{dk,dk+1\}$, $$f(\omega_d) = \chi^F_\nu,$$ where $\nu = d^{k}$ or $\nu=1d^{k}$. \end{prop} \begin{proof} By expanding $F$ in the power sum basis for the symmetric functions, we have \begin{eqnarray} \nonumber f(q) &=& \prod_{i=1}^n (1-q^i) \sum_{\mu \in {\rm Par}(n)} \chi^F_\mu z^{-1}_\mu\Omega {\sf p}_\mu \\ \label{desres} &=& \sum_{\mu \in {\rm Par}(n)} \chi^F_\mu z^{-1}_\mu \,\, \frac{\prod_{i=1}^n (1-q^i)} {\prod_{i=1}^{l(\mu)} (1-q^{\mu_i} )}. \end{eqnarray} It is shown in \cite[Proposition 7.2]{des} that for all $\mu \in {\rm Par}(n)$, $$T_\mu(q):= \frac{\prod_{i=1}^n (1-q^i)} {\prod_{i=1}^{l(\mu)} (1-q^{\mu_i} )}$$ is a polynomial in $q$ whose value at $\omega_d$ is given by \begin{equation} \label{deseq} T_\mu(\omega_d) = \begin{cases} z_\mu &\mbox{if } \mu = d^k \mbox{ or } \mu = 1 d^k,\\ 0 &\mbox{otherwise.} \end{cases} \end{equation} We include a proof for the sake of completeness. Since $$T_\mu(q)= \left [ \begin{array}{c} n \\ \mu_1,\dots ,\mu_{l(\mu)} \end{array} \right] _q \,\, \prod_{i=1}^{l(\mu)} \prod_{j=1}^{\mu_i-1}(1-q^j),$$ we see that $T_\mu(q)$ is a polynomial and that if $T_\mu(\omega_d) \ne 0$ then $\mu_i \le d$ for all $i$. Hence, in the case that $n=dk$, it follows from Proposition~\ref{multinom} that $T_\mu(\omega_d) \ne 0$ only if $\mu_i = d$ for all $i$. By Proposition~\ref{multinom}, $$T_{d^k} (\omega_d) = k! \Big(\prod_{j =1}^{d-1}(1-\omega_d^j)\Big)^k = k!d^k.$$ Similarly, in the case that $n=dk+1$, we use (\ref{pascal}) and Proposition~\ref{multinom} to show that $T_\mu(\omega_d)$ equals $k!d^k$ if $\mu = 1 d^k$ and is $0$ otherwise. Hence, in either case, (\ref{deseq}) holds. Now, plugging (\ref{deseq}) into (\ref{desres}) yields the desired result. \end{proof} We will use Proposition~\ref{thdes} to evaluate the cycle-type Eulerian numbers $a_{\lambda,j}(q)$ at all the $m^{\rm th}$ roots of unity, where $m \in \{n-1,n\}$. We see from Proposition~\ref{stabprop} that we already have the required symmetric function, namely $Q_{\lambda,j}$. We thus obtain the first step in our proof of Theorem~\ref{main1}. \begin{prop} \label{evalth} Let $\lambda \in {\rm Par}(n)$ and let $d,k \in {\mathbb P}$.
If $dk = n$ then $$a_{\lambda,j}(\omega_d) = {\rm c}hi^{Q_{\lambda,j}}_{d^k},$$ and if $dk =n-1$ then $$a_{\lambda,j}(\omega_d) = {\rm c}hi^{Q_{\lambda,j}}_{1d^k}.$$ \end{prop} In {\rm c}ite{ShWa1} a formula for the coefficients ${\rm c}hi^{Q_{(n),j}}_{\nu}$ is conjectured. This formula turns out to be just what we need to prove Theorem~\ref{main1}. In the next section we present the conjecture and its proof. \begin{remark} In {\rm c}ite{ShWa1} it is conjectured that $Q_{\lambda,j}$ is the Frobenius characteristic of some representation of $S_n$. By Proposition~\ref{evalth} and Theorem~\ref{main1}, the restriction of the conjectured representation to $G_n$ would necessarily be isomorphic to the permutation representation for the action of $G_n$ on $S_{\lambda,j}$. \end{remark} {\sf s}ection{The expansion of $Q_{(n),j} $} \label{psumsec} In this section we present a key result of our paper (Theorem \ref{psum}), which was conjectured in {\rm c}ite{ShWa1}. For a power series $f(t)={\sf s}um_{j \geq 0}a_jt^j$ and an integer $k$, let $f(t)_k$ be the power series obtained from $f(t)$ by erasing all terms $a_jt^j$ such that $\gcd(j,k) \neq 1$, so \[ f(t)_k:={\sf s}um_{\gcd(j,k)=1}a_jt^j. \] For example, if $f(t)=t+3t^2-5t^3+7t^4$ then $f(t)_2=t-5t^3$. For a partition $\nu=(\nu_1,\ldots,\nu_k)$, set \[ g(\nu):=\gcd(\nu_1,\ldots,\nu_k). \] \begin{thm}[{\rm c}ite{ShWa1}, Conjecture 6.5] \label{psum} For $\nu=(\nu_1,\ldots,\nu_k) \in {\rm Par}(n)$, set \[ G_\nu(t):=\left(tA_{k-1}(t){\rm Par}od_{i=1}^{k}[\nu_i]_t\right)_{g(\nu)}. \] Then \begin{equation} \label{peq} {\sf s}um_{j=0}^{n-1}Q_{(n),j}t^j={\sf s}um_{\nu \in {\rm Par}(n)}z_\nu^{-1} G_\nu(t){\sf p}_\nu. \end{equation} \end{thm} Theorem \ref{psum} can be restated as follows. Since $Q_{(n),j}$ is a homogeneous symmetric function of degree $n$, it can be expanded in the basis $\{z^{-1}_\lambda p_\lambda : \lambda \in {\rm Par}(n)\}$. Thus, the theorem says that the expansion coefficient of $z^{-1}_\nu p_\nu$ is $0$ if $\gcd(j,g(\nu)) \neq 1$, while if $\gcd(j,g(\nu))=1$ then the expansion coefficient equals the coefficient of $t^j$ in $tA_{l(\nu)-1}(t){\rm Par}od_{i=1}^{l(\nu)}[\nu_i]_t$. In order to prove Theorem~\ref{psum} we need two lemmas. As above, we write $\mu$ for the classical M\"obius function on ${\mathbb P}$, and recall that \begin{equation} \label{minv} {\sf s}um_{d|n}\mu(d)=\left\{\begin{array}{ll} 1 & n=1, \\ 0 & \mbox{otherwise}. \end{array} \right. \end{equation} \begin{lemma} \label{resum} For a partition $\nu=(\nu_1,\ldots,\nu_l)$, we have \begin{equation} \label{resumeq} G_\nu(t)={\sf s}um_{d|g(\nu)}\mu(d)d^{l-1}t^dA_{l-1}(t^d){\rm Par}od_{i=1}^{l}\left[{\sf Ch}ac{\nu_i}{d}\right]_{t^d}. \end{equation} \end{lemma} \begin{proof} It is known (and follows, for example, from {\rm c}ite[Theorem 4.5.14]{Sta1}) that for any positive integer $k$ we have \begin{equation} \label{euleq} {\sf Ch}ac{tA_{k-1}(t)}{(1-t)^k}={\sf s}um_{j \geq 1}j^{k-1}t^j. \end{equation} It follows directly from the definition of $f(t)_d$ that for any power series $g,h$ and any $d \in {\mathbb P}$ we have \begin{equation} \label{powid} \left(g(t)h(t^d)\right)_d=g(t)_dh(t^d). 
\end{equation} We see now that \begin{eqnarray*} G_\nu(t) & = & \left(tA_{l-1}(t){\rm Par}od_{i=1}^{l}{\sf Ch}ac{1-t^{\nu_i}}{1-t}\right)_{g(\nu)} \\ & = & \left({\sf Ch}ac{tA_{l-1}(t)}{(1-t)^l}{\rm Par}od_{i=1}^{l}(1-t^{\nu_i})\right)_{g(\nu)} \\ & = & \left({\sf s}um_{j \geq 1}j^{l-1}t^j\right)_{g(\nu)}{\rm Par}od_{i=1}^{l}(1-t^{\nu_i}) \\ & = & {\sf s}um_{j:\gcd(g(\nu),j)=1}j^{l-1}t^j{\rm Par}od_{i=1}^{l}(1-t^{\nu_i}), \end{eqnarray*} the third equality above following from (\ref{euleq}) and (\ref{powid}). Now \begin{eqnarray*} {\sf s}um_{d|g(\nu)}\mu(d){\sf s}um_{a \geq 1}(ad)^{l-1}t^{ad} & = & {\sf s}um_{j \geq 1}j^{l-1}t^j{\sf s}um_{d|\gcd(j,g(\nu))}\mu(d) \\ & = & {\sf s}um_{j:\gcd(j,g(\nu))=1}j^{l-1}t^j, \end{eqnarray*} the second equality following from (\ref{minv}). We see now that \begin{eqnarray*} G_\nu(t) & = & \left({\sf s}um_{d|g(\nu)}\mu(d){\sf s}um_{a \geq 1}(ad)^{l-1}t^{ad}\right){\rm Par}od_{i=1}^{l}(1-t^{\nu_i}) \\ & = & \left({\sf s}um_{d|g(\nu)}\mu(d)d^{l-1}{\sf Ch}ac{t^dA_{l-1}(t^d)}{(1-t^d)^l}\right){\rm Par}od_{i=1}^{l}(1-t^{\nu_i}) \\ & = & {\sf s}um_{d|g(\nu)}\mu(d)d^{l-1}t^dA_{l-1}(t^d){\rm Par}od_{i=1}^{l}{\sf Ch}ac{1-t^{\nu_i}}{1-t^d}, \end{eqnarray*} the second equality following from (\ref{euleq}). \end{proof} \begin{lemma} \label{exforlem} We have \begin{equation} \label{exfor} {\sf s}um_{k \geq 0}{\sf Ch}ac{A_k(t)}{k!}z^k=\exp\left({\sf s}um_{l \geq 1}{\sf Ch}ac{tA_{l-1}(t)}{l!}z^l\right). \end{equation} \end{lemma} \begin{proof} We apply the exponential formula (see {\rm c}ite[Corollary 5.1.6]{Sta2}) to the Eulerian polynomials. For any permutation ${\sf s}igma$ in $S_n$ let ${\sf p}i({\sf s}igma)$ be the partition of the set $[n]$ whose blocks are the supports of the cycles in the cycle decomposition of ${\sf s}igma$. Let $\Pi_n$ be the set of all partitions of the set $[n]$. For any partition ${\sf p}i$ in $\Pi_n$ set $$A_{\sf p}i(t) := {\sf s}um_{{\sf s}criptsize\begin{array}{c} {\sf s}igma \in S_n\\ {\sf p}i({\sf s}igma) = {\sf p}i \end{array}} t^{{\rm exc}({\sf s}igma)}.$$ Then $$A_n(t) = {\sf s}um_{{\sf p}i \in \Pi_n} A_{{\sf p}i}(t),$$ and $$A_{\sf p}i(t) = {\rm Par}od_{i=1}^k A_{\{B_i\}}(t) = {\rm Par}od_{i=1}^k A_{(|B_i|)}(t),$$ where ${\sf p}i = \{B_1,\dots,B_k\}$. It therefore follows from the exponential formula that $$ {\sf s}um_{k\ge 0} {\sf Ch}ac{A_k(t)} {k! } z^k = \exp ( {\sf s}um_{l\ge 1} {\sf Ch}ac{A_{(l)}(t)} {l! } z^l ).$$ To complete the proof we observe that \begin{equation} \label{circeul} A_{(l)}(t) = t A_{l-1}(t).\end{equation} Indeed, for ${\sf s}igma \in S_{(l)}$, write ${\sf s}igma$ in cycle notation $(x_1,x_2,\ldots ,x_l)$ with $x_l = l$. Now let $v({\sf s}igma)=x_1\ldots x_{l-1}$, a permutation in $S_{l-1}$ in one line notation. The excedance set of ${\sf s}igma$ is the union of $\{x_{l-1}\}$ and $\{x_i : i \mbox{ is an ascent of } v({\sf s}igma) \}$. Since $v$ is a bijection from $S_{(l)}$ to $S_{l-1}$, equation (\ref{circeul}) holds. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{psum}] We have \begin{eqnarray} \nonumber {\sf s}um_{n \geq 1}{\sf s}um_{j=0}^{n-1}Q_{(n),j}t^j & = & {\sf L}\left[{\sf s}um_{i \geq 1}{\sf s}um_{j=0}^{i-1}Q_{i,j}t^j\right] \\ \nonumber & = & {\sf L}\left[{\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}A_k(t){\rm Par}od_{h=1}^{k}[\nu_h]_t{\sf p}_{\nu_h}\right] \\ \nonumber & = & {\sf s}um_{d \geq 1}{\sf Ch}ac{\mu(d)}{d}{\sf s}um_{i \geq 1}{\sf Ch}ac{(-1)^{i-1}}{i}{\sf p}_d^i\left[{\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}A_k(t){\rm Par}od_{h=1}^{k}[\nu_h]_t{\sf p}_{\nu_h}\right] \\ \nonumber & = & {\sf s}um_{d \geq 1}{\sf Ch}ac{\mu(d)}{d}{\sf s}um_{i \geq 1}{\sf Ch}ac{(-1)^{i-1}}{i}\left({\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}A_k(t^d){\rm Par}od_{h=1}^{k}[\nu_h]_{t^d}{\sf p}_{d\nu_h}\right)^i \\ \label{pfconj} & = & {\sf s}um_{d \geq 1} {\sf Ch}ac{\mu(d)}{d}\log\left(1+{\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}A_k(t^d){\rm Par}od_{h=1}^{k}[\nu_h]_{t^d}{\sf p}_{d\nu_h}\right), \end{eqnarray} the first equality following from (\ref{ple2}), the second from (\ref{qnj}), the third from the definition of ${\sf L}$ and the fourth from (\ref{pple}). For any $k \in {\mathbb P}$, let ${\mathcal M}_k$ be the set of all sequences $a=(a_1,a_2,\ldots)$ of nonnegative integers such that ${\sf s}um_{i \geq 1}a_i=k$. Then \begin{eqnarray}\nonumber {\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}{\rm Par}od_{i=1}^{k}[\nu_i]_{t^d}{\sf p}_{d\nu_i} & = & {\sf s}um_{\nu:l(\nu)=k}{\sf Ch}ac{1}{{\rm Par}od_{r \geq 1}m_r(\nu)!}{\rm Par}od_{r \geq 1}\left({\sf Ch}ac{[r]_{t^d}{\sf p}_{dr}}{r}\right)^{m_r(\nu)} \\ \nonumber & = & {\sf Ch}ac{1}{k!}{\sf s}um_{a \in {\mathcal M}_k}{{k} {\rm c}hoose {a_1,a_2,\ldots}}{\rm Par}od_{r \geq 1}\left({\sf Ch}ac{[r]_{t^d}{\sf p}_{dr}}{r}\right)^{a_r} \\ \label{mceq} & = & {\sf Ch}ac{1}{k!}\left({\sf s}um_{r \geq 1}{\sf Ch}ac{[r]_{t^d}{\sf p}_{dr}}{r}\right)^k. \end{eqnarray} We see now that \begin{eqnarray*} {\sf s}um_{n \geq 1}{\sf s}um_{j=0}^{n-1}Q_{(n),j}&=& {\sf s}um_{d \geq 1}{\sf Ch}ac{\mu(d)}{d}\log\left(1+{\sf s}um_{k \geq 1}{\sf Ch}ac{A_k(t^d)}{k!}\left({\sf s}um_{r \geq 1}{\sf Ch}ac{[r]_{t^d}{\sf p}_{dr}}{r}\right)^k\right)\\ &=& {\sf s}um_{d \geq 1} {\sf Ch}ac{\mu(d)}{d}{\sf s}um_{k \geq 1} {\sf Ch}ac{t^dA_{k-1}(t^d)}{k!}\left({\sf s}um_{r \geq 1}{\sf Ch}ac{[r]_{t^d}{\sf p}_{dr}}{r}\right)^k \\ & = & {\sf s}um_{d \geq 1}{\sf Ch}ac{\mu(d)}{d}{\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}t^dA_{k-1}(t^d){\rm Par}od_{i=1}^{k}[\nu_i]_{t^d}{\sf p}_{d\nu_i} \\ & = & {\sf s}um_{k \geq 1}{\sf s}um_{d \geq 1}\mu(d)d^{k-1}t^dA_{k-1}(t^d){\sf s}um_{\nu:l(\nu)=k}z_{d\nu}^{-1}{\rm Par}od_{i=1}^{k}[\nu_i]_{t^d}{\sf p}_{d\nu_i} \\ & = & {\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}{\sf p}_\nu{\sf s}um_{d|g(\nu)}\mu(d)d^{k-1}t^dA_{k-1}(t^d){\rm Par}od_{i=1}^{k}\left[{\sf Ch}ac{\nu_i}{d}\right]_{t^d} \\ & = & {\sf s}um_{k \geq 1}{\sf s}um_{\nu:l(\nu)=k}z_\nu^{-1}{\sf p}_\nu G_\nu(t). \end{eqnarray*} Indeed, first equality is obtained by combining (\ref{pfconj}) and (\ref{mceq}), the second equality is obtained from (\ref{exfor}), the third follows from (\ref{mceq}), the fourth and fifth are obtained by straightforward manipulations, and the last follows from (\ref{resumeq}). 
\end{proof} {\sf s}ection{The proof of Theorem \ref{main1}} \label{eqth1sec} {\sf s}ubsection{The expansion coefficients ${\rm c}hi^{Q_{\lambda,j}}_{d^k}$} To compute the expansion coefficients ${\rm c}hi^{Q_{\lambda,j}}_{d^k}$, we will need to obtain results like Theorem \ref{psum} with the partition $(n)$ replaced by an arbitrary partition $\lambda$, but in such results we will only need the coefficients of power sum symmetric functions of the form ${\sf p}_{(d,\ldots,d)}$. We begin with a definition generalizing that of $f(t)_d$. For a power series $f(t)={\sf s}um_ja_jt^j$ and positive integers $b,c$, let $f(t)_{b,c}$ be the power series obtained from $f$ by erasing all terms $a_it^i$ such that $\gcd(i,b) \neq c$, so \[ f(t)_{b,c}:={\sf s}um_{\gcd(i,b)=c}a_it^i. \] For example, if $f(t)=1+2t+3t^2+4t^3+5t^4$ then $f(t)_{6,2}=3t^2+5t^4$. If $c|b$ then for any power series $g,h$, we have \begin{equation} \label{ghbc} \left(g(t)h(t^b)\right)_{b,c}=g(t)_{b,c}h(t^b). \end{equation} We will use the following result. \begin{lemma} \label{ftbclem} Let $k,b,c \in {\mathbb P}$ and assume that $c|b$. Then \begin{equation} \label{ftbc} \left(tA_{k-1}(t)[b]_t^k\right)_{b,c}=c^{k-1}\left(t^cA_{k-1}(t^c)[b/c]_{t^c}^k\right)_{b,c}. \end{equation} \end{lemma} \begin{proof} We have \begin{eqnarray*} \left(tA_{k-1}(t)[b]_t^k\right)_{b,c} & = & \left({\sf Ch}ac{tA_{k-1}(t)}{(1-t)^k}(1-t^b)^k\right)_{b,c} \\ & = & \left({\sf Ch}ac{tA_{k-1}(t)}{(1-t)^k}\right)_{b,c}(1-t^b)^k \\ & = & (1-t^b)^k{\sf s}um_{j:\gcd(j,b)=c}j^{k-1}t^j \\ & = & (1-t^b)^k{\sf s}um_{i:\gcd(i,b/c)=1}(ic)^{k-1}t^{ic} \\ & = & c^{k-1}(1-t^b)^k{\sf s}um_{i:\gcd(i,b/c)=1}i^{k-1}t^{ic} \\ & = & c^{k-1}(1-t^b)^k\left({\sf Ch}ac{t^cA_{k-1}(t^c)}{(1-t^c)^k}\right)_{b,c} \\ & = & c^{k-1}\left(t^cA_{k-1}(t^c){\sf Ch}ac{(1-t^b)^k}{(1-t^c)^k}\right)_{b,c} \\ & = & c^{k-1}\left(t^cA_{k-1}(t^c)[b/c]_{t^c}^{k}\right)_{b,c}. \end{eqnarray*} Indeed, the second and seventh equalities follow from (\ref{ghbc}), the third and sixth follow from (\ref{euleq}), and the rest are straightforward. \end{proof} We begin our computation of ${\rm c}hi^{Q_{\lambda,j}}_{d^k}$ by considering first the case where all parts of $\lambda$ have the same size. For $\lambda, \nu \in {\rm Par}(n)$ and $j \in \{0,1,\dots,n-1\}$, set $$ {\rm c}hi_\nu^{\lambda,j} := {\rm c}hi^{Q_{\lambda,j}}_\nu$$ and \begin{equation} \label{pinu} {\sf p}i_\nu :={{n} {\rm c}hoose {\nu_1,\ldots,\nu_{l(\nu)}}}{\sf Ch}ac{1}{{\rm Par}od_{j=1}^{n}m_j(\nu)!}. \end{equation} \begin{thm} \label{mrkdth} Let $n,r,m,d,k \in {\mathbb P}$ with $n=rm=dk$. Set \[ {\rm Par}(m;d,r):=\{\mu=(\mu_1,\ldots,\mu_{l(\mu)}) \in {\rm Par}(m):\mu_i|d|r\mu_i \mbox{ for all } i \in [l(\mu)]\}. \] Then \begin{equation} \label{mrdk} {\sf s}um_{j=0}^{n-1}{\rm c}hi_{d^k}^{r^m,j}t^j={\sf s}um_{\mu \in {\rm Par}(m;d,r)}{\sf p}i_{{\sf Ch}ac{r}{d}\mu}{\rm Par}od_{i=1}^{l(\mu)}\left(tA_{{\sf Ch}ac{r}{d}\mu_i-1}(t)[d]_t^{{\sf Ch}ac{r}{d}\mu_i}\right)_{d,\mu_i}. \end{equation} \end{thm} \begin{proof} Note that (\ref{ple}) implies that \begin{equation} \label{reple} {\sf s}um_{j=0}^{n-1}Q_{r^m,j}t^j={\sf h}_m\left[{\sf s}um_{j=0}^{r-1}Q_{(r),j}t^j\right]. 
\end{equation} Now \begin{eqnarray*} {\sf h}_m\left[\sum_{j=0}^{r-1}Q_{(r),j}t^j\right] & = & \sum_{\mu \in {\rm Par}(m)}z_\mu^{-1}{\sf p}_\mu\left[\sum_{j=0}^{r-1}Q_{(r),j}t^j\right] \\ & = & \sum_{\mu \in {\rm Par}(m)}z_\mu^{-1}\prod_{i=1}^{l(\mu)}{\sf p}_{\mu_i}\left[\sum_{\nu \in {\rm Par}(r)}z_\nu^{-1}G_\nu(t){\sf p}_\nu\right] \\ & = & \sum_{\mu \in {\rm Par}(m)}z_\mu^{-1}\prod_{i=1}^{l(\mu)}\sum_{\nu \in {\rm Par}(r)}z_\nu^{-1}G_\nu(t^{\mu_i}){\sf p}_{\mu_i\nu}. \end{eqnarray*} Indeed, the first equality follows from the well known expansion (see any of \cite{Mac,Sag,Sta2}) of ${\sf h}_m$ in the power sum basis, the second from Theorem \ref{psum} and the third from (\ref{pple}). On the other hand, it follows from the definition of $\chi^{\lambda,j}_\nu$ that \[ \sum_{j=0}^{n-1}Q_{r^m,j}t^j=\sum_{\nu \in {\rm Par}(n)}z_\nu^{-1}\sum_{j \geq 0}\chi^{r^m,j}_\nu{\sf p}_\nu t^j. \] We now see that equating the coefficients of ${\sf p}_{d^k}$ on both sides of (\ref{reple}) yields \begin{equation} \label{pdk} \sum_{j=0}^{n-1}\chi^{r^m,j}_{d^k}t^j=z_{d^k}\sum_{\mu \in {\rm Par}(m;d,r)}z_\mu^{-1}\prod_{i=1}^{l(\mu)}z^{-1}_{\left(\frac{d}{\mu_i}\right)^{r\mu_i/d}}G_{\left(\frac{d}{\mu_i}\right)^{r\mu_i/d}}(t^{\mu_i}). \end{equation} Now for all $\mu \in {\rm Par}(m;d,r)$, we have \begin{eqnarray} \nonumber z_{d^{k}}z_\mu^{-1}\prod_{i=1}^{l(\mu)}z^{-1}_{\left(\frac{d}{\mu_i}\right)^{r\mu_i/d}} &=& \frac{d^kk!}{\prod_{j=1}^mm_j(\mu)!\prod_{i=1}^{l(\mu)}\mu_i\left(\frac{r\mu_i}{d}\right)!\left(\frac{d}{\mu_i}\right)^{r\mu_i/d}} \\ \nonumber & & \\ \nonumber &=& \frac{k!\prod_{i=1}^{l(\mu)}\mu_i^{(r\mu_i/d)-1}}{\prod_{j=1}^{m}m_j(\mu)!\prod_{i=1}^{l(\mu)}\left(\frac{r\mu_i}{d}\right)!} \\ \nonumber & & \\ \label{zpi} &=& \pi_{\frac{r}{d}\mu}\prod_{i=1}^{l(\mu)}\mu_i^{(r\mu_i/d)-1}, \end{eqnarray} and \begin{eqnarray} \nonumber G_{\left(\frac{d}{\mu_i}\right)^{r\mu_i/d}}(t^{\mu_i}) &=&\left(tA_{\left(r\mu_i/d\right)-1}(t)\left[d/\mu_i\right]_t^{r\mu_i/d}\right)_{d/\mu_i} |_{t=t^{\mu_i}} \\ \label{gdmu} &=& \left(t^{\mu_i}A_{\left(r\mu_i/d\right)-1}(t^{\mu_i})\left[d/\mu_i\right]_{t^{\mu_i}}^{r\mu_i/d}\right)_{d,\mu_i}. \end{eqnarray} We now have \begin{eqnarray*} \sum_{j=0}^{n-1}\chi_{d^k}^{r^m,j} t^j & = & \sum_{\mu \in {\rm Par}(m;d,r)}\pi_{\frac{r}{d}\mu}\prod_{i=1}^{l(\mu)}\mu_i^{(r\mu_i/d)-1}\left(t^{\mu_i}A_{\left(r\mu_i/d\right)-1}(t^{\mu_i})\left[d/\mu_i\right]_{t^{\mu_i}}^{r\mu_i/d}\right)_{d,\mu_i} \\ & = & \sum_{\mu \in {\rm Par}(m;d,r)}\pi_{\frac{r}{d}\mu}\prod_{i=1}^{l(\mu)}\left(tA_{\frac{r}{d}\mu_i-1}(t)[d]_t^{\frac{r}{d}\mu_i}\right)_{d,\mu_i}, \end{eqnarray*} the first equality being obtained by substituting (\ref{zpi}) and (\ref{gdmu}) into (\ref{pdk}), and the second following from Lemma \ref{ftbclem}. \end{proof} We use Theorem \ref{mrkdth} to handle general $\lambda$. \begin{thm} \label{chilamnu} Say $\lambda \in {\rm Par}(n)$ and $n=kd$. \begin{enumerate} \item If there is some $r \in [n]$ such that $d$ does not divide $rm_r(\lambda)$ then $\chi^{\lambda,j}_{d^k}=0$ for all $0 \leq j \leq n-1$.
\item If $d$ divides $rm_r(\lambda)$ for all $r \in [n]$ then \begin{equation*} {\sf s}um_{j=0}^{n-1}{\rm c}hi_{d^k}^{\lambda,j} t^j={{{\sf Ch}ac{n}{d}} {\rm c}hoose {{\sf Ch}ac{1m_1(\lambda)}{d},{\sf Ch}ac{2m_2(\lambda)}{d},\ldots,{\sf Ch}ac{nm_n(\lambda)}{d}}} {\rm Par}od_{r=1}^{n}{\sf s}um_{j=0}^{rm_r(\lambda)-1}{\rm c}hi_{d^{rm_r(\lambda)/d}}^{r^{m_r(\lambda)},j}\,\, t^j. \end{equation*} \end{enumerate} \end{thm} \begin{proof} It follows directly from (\ref{ple}) that \begin{equation} \label{larmr} {\sf s}um_{j=0}^{n-1}Q_{\lambda,j}t^j={\rm Par}od_{r=1}^{n}{\sf s}um_{j=0}^{rm_r(\lambda)-1}Q_{r^{m_r(\lambda)},j}\,\,t^j. \end{equation} Expressing both sides of (\ref{larmr}) in terms of the power sum basis, we get \begin{equation} \label{frlarmr} {\sf s}um_{\mu \in {\rm Par}(n)}z_\mu^{-1}{\sf s}um_{j=0}^{n-1}{\rm c}hi_\mu^{\lambda,j} t^j{\sf p}_\mu={\rm Par}od_{r=1}^{n}{\sf s}um_{\nu \in {\rm Par}(rm_r(\lambda))}z_\nu^{-1}{\sf s}um_{j=0}^{rm_r(\lambda)-1}{\rm c}hi_\nu^{r^{m_r(\lambda)},j}\,\,t^j{\sf p}_\nu. \end{equation} Equating coefficients of ${\sf p}_{d^k}$ in (\ref{frlarmr}) we see that if $d$ does not divide every $rm_r(\lambda)$ then ${\rm c}hi_{d^k}^{\lambda,j}=0$ for all $j$, while if $d$ divides every $rm_r(\lambda)$ then \begin{eqnarray*} {\sf s}um_{j=0}^{n-1}{\rm c}hi_{d^k}^{\lambda,j} t^j & = & z_{d^k}{\rm Par}od_{r=1}^{n}z_{d^{rm_r(\lambda)/d}}^{-1}{\sf s}um_{j=0}^{rm_r(\lambda)-1}{\rm c}hi_{d^{rm_r(\lambda)/d}}^{r^{m_r(\lambda)},j}\,\,t^j \\ & = & {\sf Ch}ac{d^{n/d}(n/d)!}{{\rm Par}od_{r=1}^{n}d^{rm_r(\lambda)/d}(rm_r(\lambda)/d)!}{\rm Par}od_{r=1}^{n}{\sf s}um_{j=0}^{rm_r(\lambda)-1}{\rm c}hi_{d^{rm_r(\lambda)/d}}^{r^{m_r(\lambda)},j}\,\,t^j \\ & = & {{{\sf Ch}ac{n}{d}} {\rm c}hoose {{\sf Ch}ac{1m_1(\lambda)}{d},{\sf Ch}ac{2m_2(\lambda)}{d},\ldots,{\sf Ch}ac{nm_n(\lambda)}{d}}} {\rm Par}od_{r=1}^{n}{\sf s}um_{j=0}^{rm_r(\lambda)-1}{\rm c}hi_{d^{rm_r(\lambda)/d}}^{r^{m_r(\lambda)},j}\,\, t^j. \end{eqnarray*} \end{proof} {\sf s}ubsection{The permutation character $\theta^{\lambda,j}$ of $G_n$} Note that, upon considering cycle notation for elements of $S_n$, it is straightforward to show that if ${\sf s}igma \in {\rm c}jl$ then $\gamma_n^{-1}{\sf s}igma\gamma_n \in {\rm c}jl$. Thus the claim in Theorem \ref{main1} that $G_n$ acts on ${\rm c}jl$ is correct. Let $\theta^{\lambda,j}$ denote the permutation character of the action of $G_n$ on ${\rm c}jl$. Hence, $\theta^{\lambda,j}(\tau) $ is the number of elements of ${\rm c}jl$ centralized by $\tau \in G_n$. For $\nu \in {\rm Par}(n)$, let $\theta^{\lambda,j}_\nu= \theta^{\lambda,j}(\tau)$, where $\tau$ is any permutation of cycle type $\nu$. Since all $\tau \in G_n$ have cycle type of the form $d^k$, where $dk=n$, we need only concern ourselves with $\nu=d^k$. With Theorems \ref{mrkdth} and \ref{chilamnu} in hand, we now produce matching results for the permutation characters $\theta^{\lambda,j}$ of $G_n$. Again we begin with the case where $\lambda=r^m$ for some divisor $r$ of $n$. Before doing so, we derive, in the form most useful for our arguments, some known facts about centralizers in $S_n$ of elements of $G_n$, along with straightforward consequences of these facts. Fix positive integers $n,k,d$ with $n=kd$. Set $\tau=\gamma_n^{-k} \in G_n$. Note that $C_{S_n}(\tau)=C_{S_n}(\gamma_n^k)$. For $i \in [n]$, we have \[ i\tau=\left\{ \begin{array}{ll} i-k & i>k, \\ i-k+n & i \leq k. \end{array} \right. 
\] Now $\tau$ has cycle type $d^k$, and we can write $\tau$ as the product of $k$ $d$-cycles, $\tau=\tau_1\ldots\tau_k$, where $\tau_i$ has support \[ X_i:=\{j \in [n]:j \equiv i \bmod k\}.\] It follows that if ${\sf s}igma \in C_{S_n}(\tau)$ then for each $i \in [k]$ there is some $j \in [k]$ such that $X_i{\sf s}igma=X_j$. Thus we have an action of $C_{S_n}(\tau)$ on $\{X_1,\ldots,X_k\}$, which gives rise to a homomorphism $$\Phi:C_{S_n}(\tau) \rightarrow S_k.$$ Given $\rho \in S_k$, define ${\sf h}at{\rho} \in S_n$ to be the element that, for $r \in [k]$ and $q \in \{0,\ldots,d-1\}$, maps $r+qk$ to $r\rho+qk$. It is straightforward to check that ${\sf h}at{\rho} \in C_{S_n}(\tau)$ and $\Phi({\sf h}at{\rho})=\rho$. Moreover, if we set \[ R:=\{{\sf h}at{\rho}:\rho \in S_k\}, \] then $R \leq C_{S_n}(\tau)$ and the restriction of $\Phi$ to $R$ is an isomorphism. It follows that if we set $K={\rm kernel}(\Phi)$ then $C_{S_n}(\tau)$ is the semidirect product of $K$ and $R$. Now \[ K=\{{\sf s}igma \in C_{S_n}(\tau):X_i{\sf s}igma=X_i \mbox{ for all } i \in [k]\}={\rm Par}od_{i=1}^{k}C_{S_{X_i}}(\tau_i). \] Since every $d$-cycle in $S_d$ generates its own centralizer in $S_d$, we have \[ K={\rm Par}od_{i=1}^{k}\langle \tau_i \rangle=\left\{{\rm Par}od_{i=1}^{k}\tau_i^{e_i}:e_1,\ldots,e_k \in \{0,\ldots,d-1\}\right\}. \] Now, given $\rho \in S_k$ and $e_1,\ldots,e_k \in \{0,\ldots,d-1\}$, set \[ {\sf s}igma:=\tau_1^{e_1}\ldots\tau_k^{e_k}{\sf h}at{\rho} \in C_{S_n}(\tau). \] For $r \in [k]$ and $q \in \{0,\ldots,d-1\}$, we have (with ${\sf s}igma$ acting on the right) \begin{equation} \label{centact} (r+qk){\sf s}igma=\left\{ \begin{array}{ll} r\rho+(q-e_r)k & q \geq e_r, \\ r\rho+(q-e_r)k+n & q<e_r. \end{array} \right. \end{equation} It follows that $r+qk \in {\rm Exc}({\sf s}igma)$ if and only if either $q < e_r$ or $e_r=0$ and $r<r\rho$. We collect in the next lemma some useful consequences of what we have just seen. \begin{lemma} \label{centlem} Let $n=dk$ and let $\tau=\gamma_n^{-k}$. Let ${\sf s}igma \in C_{S_n}(\tau)$. Then there exist unique $\rho \in S_k$ and $e_1,\ldots,e_k \in \{0,\ldots,d-1\}$ such that \[ {\sf s}igma=\tau_1^{e_1}\ldots\tau_k^{e_k}{\sf h}at{\rho}, \] and if we define $E_0$ to be the number of $r \in [k]$ such that $e_r=0$ and $r \in {\rm Exc}(\rho)$, then \begin{equation} \label{centeq} {\rm exc}({\sf s}igma)=dE_0+{\sf s}um_{i=1}^{k}e_i. \end{equation} \end{lemma} Note that the unique $\rho \in S_k$ of Lemma~\ref{centlem} is equal to $\Phi({\sf s}igma)$ defined above. For $\mu \in {\rm Par}(k)$ and any divisor $r$ of $n$, set \[ {\rm C}_\mu:=\{{\sf s}igma \in C_{S_n}(\tau):\Phi({\sf s}igma) \in {\rm c}m\} \] and \[ {\rm C}_{\mu,r}:={\rm C}_\mu {\rm c}ap S_{r^{n/r}}, \] so ${\rm C}_{\mu,r}$ consists of those ${\sf s}igma \in C_{S_n}(\tau)$ such that ${\sf s}igma$ has cycle type $r^{n/r}$ and $\Phi({\sf s}igma)$ has cycle type $\mu$. \begin{lemma} \label{ckr} For any divisor $r$ of $n$, we have \[ {\sf s}um_{{\sf s}igma \in {\rm C}_{(k),r}}t^{{\rm exc}({\sf s}igma)}=\left\{ \begin{array}{ll} \left(tA_{k-1}(t)[d]_t^k\right)_{d,{\sf Ch}ac{n}{r}} & \mbox{if } k|r, \\ 0 & \mbox{otherwise} \end{array} \right. \] \end{lemma} \begin{proof} We begin by showing that \begin{equation} \label{disu} {\rm C}_{(k)}=\biguplus_{k|r|n}{\rm C}_{(k),r}. \end{equation} Certainly the union on the right side of (\ref{disu}) is contained in ${\rm C}_{(k)}$, so we prove that this union contains ${\rm C}_{(k)}$. Let ${\sf s}igma \in {\rm C}_{(k)}$. 
By Lemma \ref{centlem}, we have \[ {\sf s}igma=\tau_1^{e_1}\ldots\tau_k^{e_k}{\sf h}at{\rho} \] for unique $\rho \in S_{(k)}$ and $e_1,\ldots,e_k \in \{0,\ldots,d-1\}$. It follows from (\ref{centact}) that for each $j \in [n]$ we have \begin{equation} \label{sk} j{\sf s}igma^k \equiv j-k{\sf s}um_{i=1}^{k}e_i \bmod n. \end{equation} Moreover, if $j{\sf s}igma^l \equiv j \bmod k$ then $k|l$. Hence each cycle length in the cycle decomposition of ${\sf s}igma$ is a multiple of $k$. We claim that all cycles in the cycle decomposition of ${\sf s}igma$ have length $sk$, where $s$ is the order of $k{\sf s}um_{i=1}^{k}e_i$ in ${\mathbb Z}_n$. Indeed, it follows from (\ref{sk}) that for all $j \in [n]$, $$ j {\sf s}igma^{sk} \equiv j -sk{\sf s}um_{i=1}^k e_i \equiv j \bmod n ,$$ which implies that $j{\sf s}igma^{sk} = j$. Hence the order of ${\sf s}igma$ in $S_n$ divides $sk$. It follows that every cycle length in the cycle decomposition of ${\sf s}igma$ divides $sk$. Now we need only show that $sk$ divides the length of each cycle. Suppose $\alpha$ is a cycle of length $r$ and $j$ is an element in the support of $\alpha$. We have $k|r$ since $k$ divides the length of every cycle. Again using (\ref{sk}) we have, $$j = j{\sf s}igma^r = j ( {\sf s}igma^k)^{r/k} \equiv j - {\sf Ch}ac r k k {\sf s}um_{i=1}^k e_i \mod n ,$$ which implies that $( r/ k) k {\sf s}um_{i=1}^k e_i \equiv 0 \bmod n$. Thus $s$, the order of $k {\sf s}um_{i=1}^k e_i $, divides $r/k$, which implies that $sk$ divides $r$. We have therefore shown that $sk$ divides the length of every cycle, and since we have already shown that every cycle length divides $sk$, we conclude that all cycles in the cycle decomposition of ${\sf s}igma$ have the same length $sk$, that is, ${\sf s}igma \in {\rm C}_{(k),r}$ for some $r$ satisfying $k|r|n$, as claimed in (\ref{disu}). We have also shown that ${\rm C}_{(k), r} = \emptyset$ if $k $ does not divide $r$. Thus the claim of the lemma holds when $k$ does not divide $r$. Next we show that if ${\sf s}igma \in {\rm C}_{(k),r}$ then \begin{equation} \label{gcdckr} \gcd({\rm exc}({\sf s}igma),d)={\sf Ch}ac{n}{r}. \end{equation} As above, write ${\sf s}igma=\tau_1^{e_1}\ldots\tau_k^{e_k}{\sf h}at{\rho}$. Since $d|n$, it follows from (\ref{centeq}) that \begin{equation} \label{gcdgcd} \gcd({\rm exc}({\sf s}igma),d)=\gcd\left({\sf s}um_{i=1}^{k}e_i,d\right). \end{equation} Since $k{\sf s}um_{i=1}^{k}e_i$ has order $r/k$ in $({\mathbb Z}_n,+)$, we have that \[ {\sf Ch}ac{r}{k}={\sf Ch}ac{n}{\gcd\left(n,k{\sf s}um_{i=1}^{k}e_i\right)}={\sf Ch}ac{d}{\gcd\left(d,{\sf s}um_{i=1}^{k}e_i\right)}, \] the first equality following from simple facts about modular arithmetic and the second from the fact that $n=dk$. Now we have \begin{equation} \label{gcdd} \gcd\left(d,{\sf s}um_{i=1}^{k}e_i\right)={\sf Ch}ac{kd}{r}, \end{equation} and combining (\ref{gcdd}) with (\ref{gcdgcd}) gives (\ref{gcdckr}). Combining (\ref{disu}) and (\ref{gcdckr}), we get \begin{equation} \label{dnr} {\sf s}um_{{\sf s}igma \in {\rm C}_{(k),r}}t^{{\rm exc}({\sf s}igma)}=\left({\sf s}um_{{\sf s}igma \in {\rm C}_{(k)}}t^{{\rm exc}({\sf s}igma)}\right)_{d,{\sf Ch}ac{n}{r}} \end{equation} for each divisor $r$ of $n$. For $\rho \in S_k$ and $i \in [k]$, set \[ f_{\rho,i}(t):=\left\{ \begin{array}{ll} t[d]_t & \mbox{if } i \in {\rm Exc}(\rho), \\ {[d]_t} & \mbox{otherwise} \end{array} \right. 
\] Then \[ {\sf s}um_{{\sf s}igma \in \Phi^{-1}(\rho)}t^{{\rm exc}({\sf s}igma)}={\rm Par}od_{i=1}^{k}f_{\rho,i}(t)=t^{{\rm exc}(\rho)}[d]_t^k, \] the first equality following from Lemma \ref{centlem}. It follows now from (\ref{circeul}) that \begin{equation} \label{akexc} {\sf s}um_{{\sf s}igma \in {\rm C}_{(k)}}t^{{\rm exc}({\sf s}igma)}=tA_{k-1}(t)[d]_t^k, \end{equation} and combining (\ref{dnr}) and (\ref{akexc}) yields the lemma. \end{proof} \begin{lemma} \label{excprod} For divisors $k,r$ of $n$ and $\mu \in {\rm Par}(k)$, we have \[ {\sf s}um_{{\sf s}igma \in {\rm C}_{\mu,r}}t^{{\rm exc}({\sf s}igma)}={\sf p}i_\mu{\rm Par}od_{i=1}^{l(\mu)}{\sf s}um_{{\sf s}igma \in {\rm C}_{(\mu_i),r}}t^{{\rm exc}({\sf s}igma)}, \] where ${\sf p}i_\mu$ is defined as in (\ref{pinu}). \end{lemma} \begin{proof} Given ${\sf s}igma \in {\rm C}_{\mu,r}$, we write as usual ${\sf s}igma=\tau_1^{e_1}\ldots\tau_k^{e_k}{\sf h}at{\rho}$. Now $\rho \in S_k$ has cycle type $\mu$, so we can write $\rho=\rho_1\ldots\rho_{l(\mu)}$ as a product of disjoint cycles whose lengths form the partition $\mu$. For $i \in [l(\mu)]$, let $B_i$ be the support of $\rho_i$. We may assume that $|B_i|=\mu_i$ for all $i$. Set \[ \beta({\sf s}igma):=\{B_1,\ldots,B_{l(\mu)}\}, \] so $\beta({\sf s}igma)$ is a partition of $[k]$. For $i \in l(\mu)$, set \[ {\sf s}igma_i:=\left({\rm Par}od_{j \in B_i}\tau_j^{e_j}\right)\widehat{\rho_i} \in S_n. \] The supports of both $\widehat{\rho_i}$ and ${\rm Par}od_{j \in B_i}\tau_j^{e_j}$ are contained in \[ \{j+qk:j \in B_i,0 \leq q \leq {\sf Ch}ac{n}{k}-1\}. \] It follows that $\widehat{\rho_i}$ and $\Pi_{j \in B_h} \tau_j^{e_j}$ commute for all $i \ne h$, so \[ {\sf s}igma={\rm Par}od_{i=1}^{l(\mu)}{\sf s}igma_i. \] Moreover, \begin{equation} \label{ssi} {\rm exc}({\sf s}igma)={\sf s}um_{i=1}^{l(\mu)}{\rm exc}({\sf s}igma_i). \end{equation} For $i \in [l(\mu)]$, define $f_i$ to be the unique order preserving bijection from $B_i$ to $[\mu_i]$, and set \[ \overlineerline{{\sf s}igma}_i:=f_i^{-1}{\sf s}igma_if_i. \] Then, for each $i \in [l(\mu)]$, we have $\overlineerline{{\sf s}igma}_i \in {\rm C}_{(\mu_i),r}$ and \begin{equation} \label{oss} {\rm exc}(\overlineerline{{\sf s}igma}_i)={\rm exc}({\sf s}igma_i). \end{equation} Let $\Pi_\mu$ be the set of partitions of $[k]$ that have $m_j(\mu)$ blocks of size $j$ for each $j$. For each partition $X\in \Pi_\mu$, set \[ {\rm C}_X:=\{{\sf s}igma \in {\rm C}_{\mu,r}:\beta({\sf s}igma)=X\}. \] The map from ${\rm C}_X$ to ${\rm Par}od_{i=1}^{l(\mu)}{\rm C}_{(\mu_i),r}$ sending ${\sf s}igma$ to $(\overlineerline{{\sf s}igma}_1,\ldots,\overlineerline{{\sf s}igma}_{l(\mu)})$ is a bijection. Given (\ref{ssi}) and (\ref{oss}), we see that \begin{eqnarray*} {\sf s}um_{{\sf s}igma \in {\rm C}_{\mu,r} } t^{{\rm exc}({\sf s}igma} &=& {\sf s}um_{X \in \Pi_\mu} {\sf s}um_{{\sf s}igma \in {\rm C}_X}t^{{\rm exc}({\sf s}igma)}\\ &=& |\Pi_\mu| {\rm Par}od_{i=1}^{l(\mu)}{\sf s}um_{\rho \in {\rm C}_{(\mu_i),r}}t^{{\rm exc}(\rho)}. \end{eqnarray*} It is straightforward to see that $|\Pi_\mu| = {\sf p}i_\mu$, so the lemma follows. \end{proof} \begin{thm} \label{mrkdth2} Let $n,r,m,d,k \in {\mathbb P}$ with $n=rm=dk$. As in Theorem \ref{mrkdth}, let \[ {\rm Par}(m;d,r)=\{\mu=(\mu_1,\ldots,\mu_{l(\mu)}) \in {\rm Par}(m):\mu_i|d|r\mu_i \mbox{ for all } i \in [l(\mu)]\}. 
\] Then \begin{equation} \label{mrdk2} {\sf s}um_{j=0}^{n-1}\theta_{d^k}^{r^m,j}t^j={\sf s}um_{\mu \in {\rm Par}(m;d,r)}{\sf p}i_{{\sf Ch}ac{r}{d}\mu}{\rm Par}od_{i=1}^{l(\mu)}\left(tA_{{\sf Ch}ac{r}{d}\mu_i-1}(t)[d]_t^{{\sf Ch}ac{r}{d}\mu_i}\right)_{d,\mu_i}. \end{equation} \end{thm} \begin{proof} We have \begin{eqnarray*} {\sf s}um_{j=0}^{n-1}\theta_{d^k}^{r^m,j}t^j & = & {\sf s}um_{{\sf s}igma \in C_{S_n}(\gamma_n^{k}) {\rm c}ap S_{r^m}}t^{{\rm exc}({\sf s}igma)} \\ & = & {\sf s}um_{\mu \in {\rm Par}(k)}{\sf s}um_{{\sf s}igma \in {\rm C}_{\mu,r}}t^{{\rm exc}({\sf s}igma)} \\ & = & {\sf s}um_{\mu \in {\rm Par}(k)}{\sf p}i_\mu{\rm Par}od_{i=1}^{l(\mu)}{\sf s}um_{{\sf s}igma \in {\rm C}_{(\mu_i),r}}t^{{\rm exc}({\sf s}igma)} \\ & = & {\sf s}um_{\mu \in {\rm Par}(k;r,d)}{\sf p}i_\mu{\rm Par}od_{i=1}^{l(\mu)}\left(tA_{\mu_i-1}(t)[d]_t^{\mu_i}\right)_{d,d\mu_i/r}. \end{eqnarray*} Indeed, the first two equalities follow immediately from the definitions of $\theta^{{r^m},j}$ and ${\rm C}_{\mu,r}$, respectively, while the third follows from Lemma \ref{excprod} and the fourth from Lemma \ref{ckr}. Now for $\mu \in {\rm Par}(k;r,d)$, set $\nu:=\nu(\mu):={\sf Ch}ac{d}{r}\mu$, so $\mu={\sf Ch}ac{r}{d}\nu$. Now ${\sf Ch}ac{d}{r}k=m$ and, since $\mu_i|r|d\mu_i$, we have $\nu_i|d|r\nu_i$ for all $i$. Thus $\nu \in {\rm Par}(m;d,r)$. From this we see that the map $\mu \mapsto \nu(\mu)$ is a bijection from $ {\rm Par}(k;r,d)$ to $ {\rm Par}(m;d,r)$. Thus we have \[ {\sf s}um_{j=0}^{n-1}\theta_{d^k}^{r^m,j}t^j={\sf s}um_{\nu \in {\rm Par}(m;d,r)}{\sf p}i_{{\sf Ch}ac{r}{d}\nu}{\rm Par}od_{i=1}^{l(\nu)}\left(tA_{{\sf Ch}ac{r}{d}\nu_i-1}(t)[d]_t^{{\sf Ch}ac{r}{d}\nu_i}\right)_{d,\nu_i} \] as claimed. \end{proof} \begin{thm} \label{last} Say $\lambda \in {\rm Par}(n)$ and $n=kd$. \begin{enumerate} \item If there is some $r \in [n]$ such that $d$ does not divide $rm_r(\lambda)$ then $\theta^{\lambda,j}_{d^k}=0$ for all $0 \leq j \leq n-1$. \item If $d$ divides $rm_r(\lambda)$ for all $r \in [n]$ then \begin{equation*} {\sf s}um_{j=0}^{n-1}\theta_{d^k}^{\lambda,j}t^j={{{\sf Ch}ac{n}{d}} {\rm c}hoose {{\sf Ch}ac{1m_1(\lambda)}{d},{\sf Ch}ac{2m_2(\lambda)}{d},\ldots,{\sf Ch}ac{nm_n(\lambda)}{d}}} {\rm Par}od_{r=1}^{n}{\sf s}um_{j=0}^{rm_r(\lambda)-1}\theta_{d^{rm_r(\lambda)/d}}^{r^{m_r(\lambda)},j} t^j. \end{equation*} \end{enumerate} \end{thm} \begin{proof} Let ${\sf s}igma \in S_\lambda$, so ${\sf s}igma$ can be written as a product of disjoint cycles in which there appear exactly $m_r(\lambda)$ $r$-cycles for each $r \in [n]$. For each such $r$, let ${\sf s}igma_r$ be the product of all these $m_r(\lambda)$ $r$-cycles, and let $B_r$ be the support of ${\sf s}igma_r$. If ${\sf s}igma \in C_{S_n}(\gamma_n^k)$ then $\gamma_n^k$ commutes with each ${\sf s}igma_r$. It follows that $\gamma_n^k$ stabilizes each $B_r$ setwise. Therefore, for each $r \in [n]$, there is some $Y_r {\sf s}ubseteq [k]$ such that $$B_r=\biguplus_{i\in Y_r} X_i,$$ where $X_i = \{h \in [n] : h\equiv i \mod k\}$. Since $|X_i| = d$ for all $i$, it follows that $rm_r(\lambda)=|B_r|=d |Y_r|$, so (1) holds. For each $r \in [n]$, let $\beta_r \in S_{B_r}$ act as $\gamma_n^k$ does on $B_r$. We have $\gamma_n^k={\rm Par}od_{r=1}^n\beta_r$, and $\beta_r$ commutes with ${\sf s}igma_r$ for all $r$. For each $r \in [n]$, let $f_r$ be the unique order preserving bijection from $B_r$ to $[rm_r(\lambda)]$. Direct calculation shows that for each $r$, we have \begin{equation} \label{ftf} f_r^{-1}\beta_rf_r=\gamma_{rm_r(\lambda)}^{rm_r(\lambda)/d}. 
\end{equation} Also, \begin{equation} \label{excequiv} {\rm exc}({\sf s}igma)={\sf s}um_{r=1}^{n}{\rm exc}({\sf s}igma_r)={\sf s}um_{r=1}^{n}{\rm exc}(f_r^{-1}{\sf s}igma_rf_r). \end{equation} On the other hand suppose we are given an ordered $n$-tuple $(Y_1,\ldots,Y_n)$ of subsets of $[k]$ such that \begin{itemize} \item[(a)] $|Y_r|=rm_r(\lambda)/d$ for each $r \in [n]$, and \item[(b)] $[k]=\biguplus_{r=1}^{n}Y_r$, \end{itemize} and we set $B_r=\uplus_{i\in Y_r} X_i$ for each $r$. Then each $B_r$ is $\gamma_n^k$-invariant, and if we set $\beta_r$ equal to the restriction of $\gamma_n^k$ to $B_r$, we can obtain ${\sf s}igma \in C_{S_n}(\gamma_n^k) {\rm c}ap S_\lambda$ by choosing, for each $r$, any ${\sf s}igma_r \in S_{B_r}$ of shape $r^{m_r(\lambda)}$ commuting with $\beta_r$ and setting ${\sf s}igma={\rm Par}od_{r=1}^{n}{\sf s}igma_r$. The number of $n$-tuples satisfying (a) and (b) is \[ {{{\sf Ch}ac{n}{d}} {\rm c}hoose {{\sf Ch}ac{1m_1(\lambda)}{d},{\sf Ch}ac{2m_2(\lambda)}{d},\ldots,{\sf Ch}ac{nm_n(\lambda)}{d}}}, \] and the theorem now follows from (\ref{ftf}) and (\ref{excequiv}). \end{proof} Comparing Theorem \ref{mrkdth2} with Theorem \ref{mrkdth} and then comparing Theorem \ref{last} with Theorem \ref{chilamnu}, we obtain $${\rm c}hi_{d^k}^{Q_{\lambda,j}} = \theta_{d^k}^{\lambda,j}$$ for all $\lambda \in {\rm Par}( n)$, $j\in \{0,1,\dots,n-1\}$, and $d,k \in {\mathbb P}$ such that $dk=n$. Theorem~\ref{main1} now follows from Proposition~\ref{evalth}. {\sf s}ection{Some additional results} \label{finsec} As mentioned in the Introduction, Theorem~\ref{main1} is a refinement of (\ref{majexcfixeq}). In this section we show that the less refined result can also be obtained as a consequence of {\rm c}ite[Corollary 4.3]{ShWa1}, which states that \begin{equation} \label{formAn} A_n^{{\rm maj},{\rm exc}, {\rm fix}}(q,t,s) = {\sf s}um_{m = 0}^{\lfloor {n \overlineer 2} \rfloor} \!\!\!\!{\sf s}um_{{\sf s}criptsize \begin{array}{c} k_0\ge 0 \\ k_1,\dots, k_m \ge 2 \\ {\sf s}um k_i = n \end{array}} \left[\begin{array}{c} n \\k_0,\dots,k_m\end{array}\right]_q\,\, s^{k_0} {\rm Par}od_{i=1}^m tq[k_i-1]_{tq}.\end{equation} Although the alternative proof does not directly involve the Eulerian quasisymmetric functions, the proof of (\ref{formAn}) given in {\rm c}ite{ShWa1} does. Hence the Eulerian quasisymmetric functions play an indirect role. In this section we also prove the identity (\ref{otherform}) mentioned in the introduction and as a consequence obtain another cyclic sieving result. \begin{thm}\label{3pol} Let $dk=n$. Then the following expressions are all equal. \begin{enumerate} \item[(i)] $A_n^{{\rm maj},{\rm exc},{\rm fix}}(\omega_d,t \omega_d^{-1},s)$\\ \item[(ii)] ${\sf s}um_{{\sf s}igma \in C_{S_n}(\gamma_n^k)}t^{{\rm exc}({\sf s}igma)}s^{{\rm fix}({\sf s}igma)}$\\ \item[(iii)] $A^{{\rm exc},{\rm fix}}_k(t, {\sf Ch}ac{s^d+t[d-1]_t}{[d]_t})[d]_t^k$. \end{enumerate} \end{thm} \begin{proof} ((ii)=(iii)) For $\rho \in S_k$ and $i \in [k]$ set $$f_{\rho,i}(t,s) := \begin{cases} t[d]_t &\mbox{if } i \in {\rm Exc}(\rho) \\ s^d + t[d-1]_t &\mbox{if } i \in {\rm Fix}(\rho) \\ [d]_t &\mbox{otherwise.} \end{cases}$$ It follows from Lemma~\ref{centlem} that \begin{eqnarray*} {\sf s}um_{{\sf s}igma \in \Phi^{-1}(\rho)}t^{{\rm exc}({\sf s}igma)}s^{{\rm fix}({\sf s}igma)}&=& {\rm Par}od_{i=1}^{k}f_{\rho,i}(t,s) \\ &= & t^{{\rm exc}(\rho)}[d]_t^{k-{\rm fix}(\rho)}(s^d+t[d-1]_t)^{{\rm fix}(\rho)}. 
\end{eqnarray*} By summing over all $\rho \in S_k$, we obtain the equality of the expressions in (ii) and (iii). ((i) = (iii)) By setting $q=1$ in (\ref{expgeneqfix}) we obtain \begin{eqnarray} \nonumber {\sf s}um_{k\ge 0} A_k^{{\rm exc},{\rm fix}}(t,{\sf Ch}ac{s^d+t[d-1]_t}{[d]_t}) [d]_t^k {\sf Ch}ac{ z^k}{k!} & =& {\sf Ch}ac{(1-t) e^{(s^d+t[d-1]_t)z}} {e^{t[d]_t z} - t e^{[d]_t z}} \\ &=& \nonumber {\sf Ch}ac{(1-t)e^{s^dz}}{e^{(t[d]_t -t[d-1]_t)z} - t e^{([d]_t -t[d-1]_t)z}} \\ &=& \label{setq=1} {\sf Ch}ac{(1-t)e^{s^dz}}{e^{t^d z}-te^z} \end{eqnarray} It follows from (\ref{formAn}) and Proposition~\ref{multinom} that $$A_{dk}^{{\rm maj},{\rm exc},{\rm fix}}(\omega_d,t\omega_d^{-1},s) = {\sf s}um_{m \ge 0} \!\!\!\!{\sf s}um_{{\sf s}criptsize \begin{array}{c} l_0\ge 0 \\ l_1,\dots, l_m \ge 1 \\ {\sf s}um l_i = k \end{array}} \left(\begin{array}{c} k \\l_0,\dots,l_m\end{array}\right)\,\,s^{d l_0} {\rm Par}od_{i=1}^m t[dl_i-1]_{t}.$$ Hence, by straight-forward manipulation of formal power series we have, \begin{eqnarray*}& &{\sf h}space{-.7in} {\sf s}um_{k\ge 0} A_{dk}^{{\rm maj},{\rm exc},{\rm fix}}(\omega_d,t \omega_d^{-1},s) {\sf Ch}ac{z^k}{k!} \\ & = & {\sf s}um_{k\ge 0} {\sf s}um_{m \ge 0} \!\!\!\!{\sf s}um_{{\sf s}criptsize \begin{array}{c} l_0\ge 0 \\ l_1,\dots, l_m \ge 1 \\ {\sf s}um l_i = k \end{array}} \left(\begin{array}{c} k \\l_0,\dots,l_m\end{array}\right)\,\,s^{d l_0} {\rm Par}od_{i=1}^m t[dl_i-1]_{t} {\sf Ch}ac {z^k} {k!} \\ &=& e^{s^dz} {\sf s}um_{m,k \ge 0} \!\! \!\!\!\!{\sf s}um_{{\sf s}criptsize \begin{array}{c}{\sf s}um l_i =k \\ l_1,\dots, l_m \ge 1\end{array}} \left(\begin{array}{c} k \\l_1,\dots,l_m\end{array}\right)\,\, {\rm Par}od_{i=1}^m t[dl_i-1]_{t} {\sf Ch}ac {z^k} {k!} \\ &=& e^{s^dz} {\sf s}um_{m \ge 0} {\sf s}um_{ l_1,\dots, l_m \ge 1} {\rm Par}od_{i=1}^m t[dl_i-1]_{t} {\sf Ch}ac {z^{l_i} }{l_i!} \\ &=& e^{s^dz} {\sf s}um_{m\ge 0} \left( {\sf s}um_{l\ge 1} t[dl-1]_{t} {\sf Ch}ac {z^{l} }{l!}\right)^m. \end{eqnarray*} Further manipulation yields, \begin{eqnarray*}{\sf s}um_{m\ge 0} \left( {\sf s}um_{l\ge 1} t[dl-1]_{t} {\sf Ch}ac {z^{l} }{l!}\right)^m &=& {\sf Ch}ac{ 1} {1- \left( {\sf s}um_{l\ge 1} t[dl-1]_{t} {\sf Ch}ac {z^{l} }{l!}\right)} \\&=& {\sf Ch}ac{ 1-t }{1- t + {\sf s}um_{l\ge 1} t(t^{dl-1}-1) {\sf Ch}ac {z^{l} }{l!}} \\&=& {\sf Ch}ac{ 1-t}{1- t + e^{t^dz}-1 -t(e^z -1)} \\&=& {\sf Ch}ac{1-t} {e^{t^dz} - t e^z}. \end{eqnarray*} Hence $$ {\sf s}um_{k\ge 0} A_{dk}^{{\rm maj},{\rm exc},{\rm fix}}(\omega_d,t \omega_d^{-1},s) {\sf Ch}ac{z^k}{k!}= {\sf Ch}ac{(1-t)e^{s^dz}} {e^{t^dz} - t e^z}.$$ The result now follows from (\ref{setq=1}). \end{proof} \begin{cor} \label{cor3pol} Let $dk=n$. Then $$A_n^{{\rm maj},{\rm exc}}(\omega_d,t\omega_d^{-1}) = A_k(t)[d]_t^k.$$ \end{cor} A similar result holds for the cycle-type $q$-Eulerian polynomials $A_{(n)}^{{\rm maj},{\rm exc}} (q,t)$. \begin{thm} \label{circor} Let $dk = n-1$. Then $$A_{(n)}^{{\rm maj},{\rm exc}}(\omega_d,t \omega_d^{-1}) =t A_{k}(t)\, [d]_t^k .$$ \end{thm} \begin{proof} We apply Proposition~\ref{evalth} which tells us that for all $j$, the coefficient of $t^j$ in $A_{(n)}^{{\rm maj},{\rm exc}}(\omega_d,t \omega_d^{-1})$ is equal to ${\rm c}hi^{Q_{(n),j}}_{1d^k}$. By Theorem~\ref{psum}, ${\rm c}hi^{Q_{(n),j}}_{1d^k}$ equals the coefficient of $t^{j}$ in $G_{1d^k} (t)= t A_{k}(t) [d]_t^k$. \end{proof} \begin{cor} \label{cycleth} Let $S_{n,j}$ be the set of permutations in $S_n$ with $j$ excedances. 
Then the triple $(G_{n}, S_{n,j}, a_{(n+1),j+1}(q))$ exhibits the cyclic sieving phenomenon for all $j \in \{0,1,\dots,n-1\}$. \end{cor} \begin{proof} That the triple exhibits the cyclic sieving phenomenon is equivalent to the equation $$t\sum_{\sigma \in C_{S_{n}}(g)}t^{{\rm exc}(\sigma)} = A_{(n+1)}^{{\rm maj},{\rm exc}}(\omega_d, t\omega_d^{-1}),$$ for all divisors $d$ of $n$ and $g \in G_n$ of order $d$. This equation is a consequence of Theorems~\ref{3pol} and~\ref{circor}, which respectively say that the left side and the right side of the equation both equal $tA_k(t) [d]_t^{\frac{n}{d}}$. \end{proof} \end{document}
\begin{document} \begin{center} {\large{\bf Statistical-mechanical description of quantum entanglement}}\\ J. K. Korbicz$^{1,2,3}$, F. Hulpke$^1$, A. Osterloh$^{1}$, and M. Lewenstein$^{1,3,4}$\\ $^1$ Institut f\"ur Theoretische Physik, Leibniz Universit\"at Hannover, Appelstr. 2, D-30167 Hannover, Germany\\ $^2$ Dept. d'Estructura i Constituents de la Mat\`eria, Universitat de Barcelona, 647 Diagonal, 08028 Barcelona, Spain\\ $^3$ ICFO--Institut de Ci\`{e}ncies Fot\`{o}niques, Mediterranean Technology Park, 08860 Castelldefels (Barcelona), Spain\\ $^4$ ICREA--Instituci\`{o} Catalana de Recerca i Estudis Avan\c{c}ats, 08010 Barcelona, Spain \end{center} \begin{abstract} We present a description of finite dimensional quantum entanglement, based on a study of the space of all convex decompositions of a given density matrix. On this space we construct a system of real polynomial equations describing separable states. We further study this system using methods of statistical mechanics. As an example, we finally apply our techniques to Werner states of two qubits and obtain a sufficient criterion for separability. \end{abstract} \section{Introduction} \label{statintr} Separability is one of the central issues in quantum information theory (see Horodecki {\it et al.}~\cite{RMP} for a review): in a separable density matrix all correlations are of classical origin, and no real quantum information processing, based as it is on the presence of quantum entanglement of some kind, is possible. The separability problem has been proved to be NP-hard~\cite{Gurvits}, and hence every partial solution constitutes an important achievement. Seminal cornerstones in that direction have been the Peres-Horodecki criterion~\cite{PeresHoro,Pawel} and entanglement witnessing operators~\cite{TerhalWit,OptimWitness}. The first method exploits the fact that positive maps preserve the positivity of all separable density matrices, whereas some entangled density operators are mapped to non-positive operators. The latter approach uses limits for expectation values of suitably chosen witness operators to distinguish between separable and entangled states. A systematic analysis of the so called {\em bound} entangled states has been initiated by means of unextendible product bases (UPB)~\cite{UPBs}, which in turn also paved the way towards a formulation of the separability problem in terms of roots of complex polynomial equations~\cite{PolynEqSeparability}. As far as we know, this route has not been pursued any further, and in particular no direct test of separability via the convex roof extension of a pure state separability criterion has been probed so far. The main obstacles for such an approach have their origin in the complications involved in the minimization procedure over all decompositions of the density matrix under consideration. A proposal in this direction, however, has been presented by Osborne~\cite{Osborne04}. In this work we follow this route, proposing a similar approach for studying the bipartite separability problem in a finite dimensional Hilbert space $\mathcal{H}=\mathcal{H}_A\otimes\mathcal{H}_B\cong {\mathbb{C}}^m \otimes {\mathbb{C}}^n$, encoding the convex roof minimization in a way familiar from statistical mechanics. The paper is organized as follows.
After a formal definition of the separability problem and a short discussion of pure state separability criteria in the next section, we give a geometrical view on the space of $\rho$-ensembles and a formulation of the bipartite separability problem in terms of a set of nonlinear equations in section~\ref{rho-ens}. A mechanical analogy of these equations is drawn in section~\ref{CostFun} in terms of a Hamiltonian or cost function on a restricted ``phase space'' and constitutes the basis for the statistical mechanical approach presented in section~\ref{StatMech}. After presenting a proof-of-principles calculation for two-qubit Werner states in section~\ref{WernerStates} we draw our conclusions and give a short outlook of the presented formalism. \section{The bipartite separability problem} In order to formulate the problem, let us recall the following Definition: \begin{df} A state $\varrho$ of a bipartite system $AB$, described by $\mathcal{H}_A\otimes\mathcal{H}_B$, is called separable if there exists a convex decomposition of $\varrho$ composed entirely of product vectors: \begin{equation} \varrho=\sum_{i=1}^{N}p_i \,|x_i\rangle\langle x_i| \otimes|y_i\rangle\langle y_i|, \quad \ket{x_i}\in\mathcal{H}_A, \ket{y_i}\in\mathcal{H}_B. \mathcal{E}_1d{equation} \mathcal{E}_1d{df} A natural problem arises, known as the separability problem: {\it Given a state $\varrho$, decide if it is separable or not.} This problem has been proven to be NP-hard (Gurvits~\cite{Gurvits}) and (a part of) its difficulty lies in the fact that a convex decomposition of a given mixed state $\varrho$ into pure states: \begin{equation}\label{decomp} \varrho=\sum_{i=1}^{N}p_i \,|\Psi_i\rangle\langle \Psi_i| \mathcal{E}_1d{equation} is highly non-unique (see e.g. Bengtsson and $\dot{\text{Z}}$yczkowski~\cite{Bengtsson}). Thus, the following Definition makes sense: \begin{df}\label{roens} Unordered collection $\{p_i, \ket{\Psi_i}\}$, $i=1\dots N$ of probabilities and vectors satisfying (\ref{decomp}) is called a $\varrho$-ensemble of length $N$. \mathcal{E}_1d{df} In this work we develop the following approach to the separability problem: we propose to search the space of all $\varrho$-ensembles of a given state $\varrho$ for product $\varrho$-ensembles ($\varrho$-ensembles containing only product vectors), by applying one of the existing necessary and sufficient entanglement tests to each member of the ensemble. We want the test which has the simplest functional form---a polynomial. Such a test is provided by the square of generalized concurrence (see e.g. Rungta {\it et al.} ~\cite{Rungta}, Mintert {\it et al.} ~\cite{Mintert}, Hulpke~\cite{Florek}): \begin{lem}\label{warprod} For any vector $\ket\psi\in\mathcal{H}_A\otimes \mathcal{H}_B$ one has that: \begin{equation}\label{test} c^2(\psi):=||\psi||^4 - \rm{tr}_{\mathcal{H}_A}(\rm{tr}_{\mathcal{H}_B} |\psi\rangle\langle \psi|)^2G\times Ge 0 \mathcal{E}_1d{equation} and the equality holds if and only if $\ket\psi$ is product. \mathcal{E}_1d{lem} This leads to a set of real polynomial equations describing separable states. The resulting system is very complicated due to the fourth order of some equations and a large number of variables. Our idea is to study it using methods of classical statistical mechanics. The motivation is that such methods have proven to be very efficient not only within classical mechanics, but also in many other, distantly related areas (for an application to fundamental combinatorial problems see e.g. 
Kubasiak {\it et al.} \cite{Kubasiak} and references therein). Hence, we first develop a mechanical analogy for our system. Then we define a suitable cost function, or ``energy'', introduce a canonical ensemble, and study the resulting partition function. \section{The space of $\varrho$-ensembles and separability} \label{rho-ens} Let us begin with describing the space of all $\varrho$-ensembles of a given state $\varrho$. For convenience we pass from normalized $\varrho$-ensemble vectors $\ket{\Psi_i}$ to subnormalized ones: $\ket{\psi_i}:=\sqrt{p_i}\ket{\Psi_i}$, such that $\varrho=\sum_{i=1}^{N}|\psi_i\rangle\langle \psi_i|$. Let us fix an eigenensemble $\{\ket{e_\alphapha}\}$ of $\varrho$, where all the vectors $\ket{e_\alpha}$ correspond to non-zero eigenvalues $\lambda_\alpha$ of $\varrho$, $\alphapha=1\dots r$, and $r:=\text{rank}(\varrho)$ is the rank of $\varrho$. Then, all $\varrho$-ensembles are characterized by the well known Theorem by Schr\"odinger~\cite{Schrod} (see also~\cite{Hugh,Kirkpatrick}) \begin{thm}\label{HJW} Any $\varrho$-ensemble $\{\ket{\psi_i}\}$ of length $NG\times Ge r$ can be obtained from a subnormalized eigenensemble $\{\ket{e_\alphapha}\}$ such that $\rho=\sum_\alphapha \ket{e_\alphapha}\bra{e_\alphapha}$ through the following linear transformation: \begin{equation}\label{ens} \ket{\psi_i}:=\sum_{\alphapha=1}^r z_{i\alphapha} \ket{e_\alphapha}, \mathcal{E}_1d{equation} where the matrix $z_{i\alphapha}\in \mathbb{C}$ is an $N\times r$ block of a unitary $N\times N$ matrix, and hence satisfies \begin{equation}\label{stiefel} \sum_{i=1}^{N} \ov{z_{i\alphapha}}z_{i\beta}=\text{d}lta_{\alphapha\beta}. \mathcal{E}_1d{equation} \mathcal{E}_1d{thm} Theorem~\ref{HJW} gives us the characterization of all possible $\varrho$-ensembles in terms of $N\times r$ matrices $z$, satisfying the condition (\ref{stiefel}). Geometrically, this condition defines the so called { {\it et al.} m Stiefel manifold } \begin{equation}\label{st} V_{N,r}:=U(N)/U(N-r). \mathcal{E}_1d{equation} It forms a principal fiber bundle over the Grassmann manifold $G_{N,r}$ (the set of $r$-dimensional subspaces of $ {\mathbb{C}} ^N$) with a fiber diffeomorphic to $U(r)$ (we refer to Kobayashi and Nomizu Vol.~1~\cite{Kobayashi1} for the definition and basic properties of fiber bundles and to Spivak Vol. 5~\cite{Spivak} for more information on the Stiefel and Grassmann manifolds). However, note that there is some additional symmetry: from Eq.~(\ref{decomp}) we see that the order of vectors in a $\varrho$-ensemble does not matter, and thus two $N \times r$ matrices $z$, $z'$ satisfying Eq.~(\ref{stiefel}) and differing only by a permutation of their rows define the same $\varrho$-ensemble. To fix this freedom, observe that a $z$-matrix satisfying Eq.~(\ref{stiefel}) has necessarily rank $r$, and hence we may consider only those matrices $z$, for which the first $r$ rows are linearly independent. The set of such $z$'s constitutes a simply connected open subset of $V_{N,r}$ (which is nevertheless dense in $V_{N,r}$) and over such a neighborhood the bundle $V_{N,r}\xrightarrow []{U(r)}G_{N,r}$ is trivial by construction. 
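For orientation (this is a side remark, not needed for the proofs), we note two immediate consequences of Theorem~\ref{HJW} and the constraint (\ref{stiefel}). In the extreme case $N=r$ the matrix $z$ is square with orthonormal columns, hence an element of $U(r)$, recovering the familiar fact that all $\varrho$-ensembles of minimal length are obtained from the eigenensemble by a unitary mixing. In general, a count of real parameters gives
\begin{equation}
\dim_{ {\mathbb{R}} }V_{N,r}=\dim U(N)-\dim U(N-r)=2Nr-r^2
\end{equation}
real degrees of freedom available in the search for a product $\varrho$-ensemble. Over the neighborhood singled out above, the triviality of the bundle makes these degrees of freedom explicit.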
This allows us to formally write down an explicit solution of the constraints (\ref{stiefel}) \begin{equation}\label{Srozw} z=GS\left( \begin{array}{c}{\bf 1}_r \\ {\bf v} \mathcal{E}_1d{array}\right)\cdot U, \mathcal{E}_1d{equation} where $U \in U(r)$, ${\bf 1}_r$ is the $r \times r$ unit matrix, ${\bf v}$ is an arbitrary, complex $(N-r) \times r$ matrix, and $GS$ denotes the Gram-Schmidt orthonormalization~\cite{GS} applied to the columns. There are no more symmetries, since we have defined in Definition~\ref{roens} $\varrho$-ensembles using vectors $\ket{\psi_i}$ rather than more physical projectors $\ket{\psi_i}\langle\psi_i|$, as the latter are harder to work with. In case of $\varrho$-ensembles defined through projectors, there would be an additional symmetry of multiplying each row of $z$ by a (different) phase. Comparing Eq.~(\ref{Srozw}) and Eq.~(\ref{ens}), one sees that an arbitrary $\varrho$-ensemble of length $N$ is obtained from the fixed eigenensemble by i) applying a unitary rotation to $\ket{e_\alphapha}$'s and ii) subsequent increasing of the length of the ensemble along the Grassmannian $G_{N,r}$. So far we have characterized $\varrho$-ensembles of a fixed length $N$. It seems that in the search for product ensemble we would have to consider all possible lengths $NG\times Ge r$. However, from Caratheodory's Theorem (see e.g. Kelly and Weiss~\cite{Car}) we know that a separable state can be decomposed into at most $N=m^2n^2$ linear independent (in $ {\mathbb{R}} ^{m^2n^2-1}$) product states. Hence, it is enough to consider only $\varrho$-ensembles of the length $N=m^2n^2$. (there is a natural inclusion of space of shorter ensembles in the space of longer ones). Let us now examine the entanglement test given by Proposition~\ref{warprod}. First, we quote some well known facts regarding the geometry of pure product states (see e.g. Bengtsson and $\dot{\text{Z}}$yczkowski~\cite{Bengtsson}). Note that the polynomial $c^2(\psi)$, defined in Eq.~(\ref{test}), is in fact a sum of modulus squared of quadratic, complex-analytical polynomials in $\ket\psi$: \begin{equation} \label{c2} c^2(\psi)=\frac{1}{2}\sum_{a,b=1}^{d_1,d_2} \big|\langle {\bf z}eta_a^{AA'}\otimes \omegaidetilde {\bf z}eta_b^{BB'}|\psi^{AB}\otimes\psi^{A'B'}\rangle\big|^2, \mathcal{E}_1d{equation} where $\{\ket{{\bf z}eta_a^{AA'}}\}_{a=1,\dots,d_1}$, $\{\ket{\omegaidetilde {\bf z}eta_b^{BB'}}\}_{b=1,\dots,d_2}$ are orthonormal bases of the skew-symmetric spaces $\mathcal{H}_A\omegaedge\mathcal{H}_{A'}\cong {\mathbb{C}} ^m \omegaedge {\mathbb{C}} ^m$ and $\mathcal{H}_B\omegaedge\mathcal{H}_{B'}\cong {\mathbb{C}} ^n \omegaedge {\mathbb{C}} ^n$, respectively. Thus, $c^2(\psi)=0$, and hence $\ket\psi$ is product, if and only if \begin{equation}\label{segre} \langle {\bf z}eta_a\otimes \omegaidetilde {\bf z}eta_b| \psi\otimes\psi\rangle=0\quad \text{for all} \quad a,b\; . \mathcal{E}_1d{equation} It is worth noticing that this is just the condition for the matrix of components of $\ket\psi$ to have rank one. Geometrically, the system of homogeneous equations (\ref{segre}), or equivalently the single equation $c^2(\psi)=0$, describes the image of the so called { {\it et al.} m Segre embedding} $ {\mathbb{C}} P^m \times {\mathbb{C}} P^n\mathcal{H}ookrightarrow {\mathbb{C}} P^{mn}$ given by $([x],[y])\mapsto [x\otimes y]$. As we can see from Eqs.~(\ref{segre}), this image, i.e. 
the set of product vectors, is a complex-analytical manifold---as an intersection of complex quadrics---in contrast to the Stiefel manifolds $V_{N,r}$, which are real. Since for all $i=1,\dots,N$ the polynomials $c^2(\psi_i)$ are non-negative and equal to zero if and only if $\ket{\psi_i}$ is product, we can sum them up for a given $\varrho$-ensemble, and thus obtain a collective separability test for the whole $\varrho$-ensemble, given by a single polynomial function. Combining this with the parametrization (\ref{ens}) and the constraint (\ref{stiefel}), we obtain the following description of separable states:
\begin{lem}\label{p2}
A state $\varrho$ of rank $r$ on ${\mathbb{C}}^m\otimes{\mathbb{C}}^n$ is separable if and only if the following system of equations possesses a solution
\begin{eqnarray}
& &E_\varrho(z):=\sum_{i=1}^{m^2n^2}c^2(\psi_i) =\sum_{i=1}^{m^2n^2} \sum_{\alpha, \dots ,\nu=1}^r \ov{z_{i\alpha}} \:\ov{z_{i\beta}}E^\varrho_{\alpha\beta\mu\nu}z_{i\mu}z_{i\nu}=0,\label{H}\\
& & \mathcal{C}_{\alpha\beta}(z):= \sum_{i=1}^{m^2n^2} \ov{z_{i\alpha}}z_{i\beta}- \delta_{\alpha\beta}=0,\label{wiaz}
\end{eqnarray}
where
\begin{equation}\label{CostOp}
E^\varrho_{\alpha\beta\mu\nu}:=\frac{1}{4}\langle e_{\alpha}\otimes e_{\beta}| \Pi_m \otimes \Pi_n \,e_{\mu}\otimes e_{\nu}\rangle
\end{equation}
and $\Pi_m,\Pi_n$ are the projectors from ${\mathbb{C}}^m\otimes{\mathbb{C}}^m$, ${\mathbb{C}}^n\otimes{\mathbb{C}}^n$ onto the skew-symmetric subspaces ${\mathbb{C}}^m \wedge {\mathbb{C}}^m$, ${\mathbb{C}}^n \wedge {\mathbb{C}}^n$, respectively.
\end{lem}
We note that the pure-state entanglement measure we use is the square of the generalized concurrence $c(\varrho)$ (cf. Rungta {\it et al.}~\cite{Rungta}, Mintert {\it et al.}~\cite{Mintert}); however, instead of the convex roof construction $c(\varrho):=\text{inf}\sum_i p_ic(\Psi_i)=\text{inf}\sum_i c(\psi_i)$ (where $\ket{\Psi_i}$ are normalized vectors), we analyze
\begin{equation}\label{p^2}
E_\varrho(z)=\sum_i p_i^2c^2(\Psi_i)
\end{equation}
as a ``quantifier'' of entanglement. We remind the reader that no caveats are introduced by this, since we are only interested in the detection of zero entanglement rather than in the full construction of an entanglement monotone. Note that Eqs.~(\ref{H}) and (\ref{wiaz}) are invariant with respect to local unitary transformations, since when $\varrho$ is separable, so is $U_A\otimes U_B\varrho U_A^\dagger\otimes U_B^\dagger$, for arbitrary $U_A\in U(m)$, $U_B\in U(n)$. The latter transformation can be viewed either as a local change of basis (passive view) or as an active rotation (active view). Indeed, from Eq.~(\ref{test}) one immediately sees that $c^2(U_A\otimes U_B \psi)=c^2(\psi)$. Thus, the function $E_\varrho$ and all quantities derived from it are constant on the whole local unitary class of $\varrho$, i.e. on $[\varrho]:=\{U_A\otimes U_B\varrho U_A^\dagger\otimes U_B^\dagger\, ; \, U_A\in U(m), U_B\in U(n)\}$. In what follows, we identify $\varrho$ with its local unitary class $[\varrho]$. Let us briefly compare this with the earlier analysis of Wu {\it et al.} in Ref.~\cite{Wu}, which leads to a different set of polynomial equations. These authors have used a higher-order polynomial test for separability: let $\sigma_A := \text{tr}_{\mathcal{H}_B} |\psi \rangle \langle \psi|$; then $\ket\psi$ is product if and only if $\text{det}(\sigma_A - {\bf 1})=0$.
The relation to Eq.~(\ref{test}) is established by observing that $\text{det}(\sigma_A - {\bf 1})=\sum_{k=0}^{m}(-1)^k c_k(\sigma_A)$, where $c_k$'s form a basis of $U(m)$-invariant polynomials (see e.g. Ref.~\cite{Kobayashi}). Particularly, $2c_2(\sigma_A)=\big( \text{tr}\sigma_A \big)^2-\text{tr}\sigma_A^2$, which is precisely the generalized concurrence squared (cf. Eq.~(\ref{test})). For testing separability, it is sufficient to consider only $c_2$. \section{Mechanical analogy} \label{CostFun} Equations (\ref{H}) and (\ref{wiaz}) form a system of real (after taking real and imaginary parts) polynomial equations. Let us denote by $\mathcal{V}_\varrho$ the set of its solutions for a given $\varrho$. Then the separability problem is equivalent to the question whether $\mathcal{V}_\varrho$ is empty or not. In principle there is a general solution to such problem, provided by the so called { {\it et al.} m Real Nullstellensatz} (see e.g. Bochnak {\it et al.} ~\cite{Bochnak}). It says that $\mathcal{V}_\varrho= {\it et al.} mptyset$ if and only if the ideal generated by the polynomials $E_\varrho$, $\{\text{Re}\mathcal{C}_{\alphapha\beta},\text{Im}\mathcal{C}_{\alphapha\beta}\}$, and by all (real) sum-of-squares (SOS) polynomials\footnote{ Interestingly, SOS polynomials also appear in a solution to the classicality problem of states of a single mechanical system---they are enough to detect a very broad family of states through generalized squeezing conditions (Korbicz {\it et al.} ~\cite{hp17}).} contains the constant $-1$. Equivalently, $\mathcal{V}_\varrho= {\it et al.} mptyset$ if and only if there exist a SOS polynomial $s=\sum_n(w_n)^2$, a real polynomial $t$, and (complex) polynomials $u_{\alpha\beta}$ such that: \begin{equation}\label{Gcertif} -1=s(z)+E_\varrho(z) t(z)+ \sum_{\alpha,\beta}\text{Re}\big[\mathcal{C}_{\alphapha\beta}(z)\,\ov{u_{\alpha\beta}(z)}\big]. \mathcal{E}_1d{equation} However, finding such a certificate is computationally very difficult and inefficient, due to the fact that the degrees of polynomials $s$, $t$, and $u_{\alpha\beta}$ are a priori unbounded (see also Refs.~\cite{Osborne04,Badziag}). Here we develop a different approach based on a statistical analysis of a classical-mechanical analogy. Namely, we treat $z_{i\alphapha}$ as a collection of complex row vectors ${\bf z}_i\in\mathbb{C}^r\cong {\mathbb{R}} ^{2r},\,i=1\dots N$ and treat each row ${\bf z}_i$ as a complex phase-space coordinate of a fictitious particle moving in $r$-dimensional space. Then the whole matrix $z_{i\alphapha}$ becomes a phase-space coordinate of a system of $N$ such particles in their composite phase-space $\Gamma:= {\mathbb{R}} ^{2r}\times\cdots\times {\mathbb{R}} ^{2r}\cong {\mathbb{C}} ^{Nr}$. Now, let $E_{\varrho}({\bf z}_1,\dots,{\bf z}_N)$ and $\mathcal{C}_{\alphapha\beta}({\bf z}_1,\dots,{\bf z}_N)$ be defined by Eqs.~(\ref{H}) and (\ref{wiaz}). We emphasize that $E_\varrho$ depends on the separability class of the analyzed state $\varrho$ through the fixed eigenensemble $\{\ket{e_{\alphapha}}\}$. From the property (\ref{test}) it follows that \begin{equation} E_{\varrho}({\bf z}_1,\dots,{\bf z}_N)G\times Ge 0 \quad \text{for any}\quad ({\bf z}_1,\dots,{\bf z}_N)\in {\mathbb{C}} ^{Nr}. \mathcal{E}_1d{equation} We will think of $E_\varrho$ as a cost function or Hamiltonian (it is extensive in the number of fictitious particles $N$), of our fictitious mechanical system. 
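To make the cost function concrete, the following sketch (Python with \texttt{numpy}; the ensembles are arbitrary illustrative choices, and the overall normalization of $c^2$ may differ from ours by a constant factor) evaluates $(\text{tr}\,\sigma_A)^2-\text{tr}\,\sigma_A^2$ for each ensemble vector, i.e. essentially $c^2(\psi_i)$, and sums the results as in Eq.~(\ref{H}); the sum vanishes for an ensemble of product vectors and is strictly positive otherwise:
\begin{verbatim}
import numpy as np

def c2(psi, m, n):
    # (tr sigma_A)^2 - tr sigma_A^2 for a (subnormalized) vector psi on C^m (x) C^n;
    # vanishes iff psi is a product vector (rank-one matrix of components)
    M = psi.reshape(m, n)
    sigma_A = M @ M.conj().T
    return float(np.real(np.trace(sigma_A) ** 2 - np.trace(sigma_A @ sigma_A)))

def ensemble_cost(psis, m, n):
    # collective separability test for a whole ensemble, cf. Eq. (H)
    return sum(c2(p, m, n) for p in psis)

rng = np.random.default_rng(1)
m = n = 2
product_ensemble = [np.kron(rng.normal(size=m) + 1j * rng.normal(size=m),
                            rng.normal(size=n) + 1j * rng.normal(size=n))
                    for _ in range(4)]
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # a maximally entangled vector

print(ensemble_cost(product_ensemble, m, n))     # ~ 0 (up to rounding)
print(ensemble_cost([bell], m, n))               # 0.5 > 0: not a product ensemble
\end{verbatim}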
Then, we can treat $\mathcal{C}_{\alpha\beta}$ as the primary constraints imposed on the a priori independent phase-space coordinates $({\bf z}_1,\dots,{\bf z}_N)$. We note that even if the mechanical system corresponded to free particles (i.e., if $E_{\varrho}$ were diagonal), the resulting model would nevertheless be interacting, due to the forces of inertia induced by the non-linear constraints. The cornerstones of the mechanical interpretation of the separability problem (\ref{H}), (\ref{wiaz}) can be summarized as follows: the $\varrho$-ensembles of density matrices with a fixed rank $r$ form the Stiefel manifold $V_{N,r}$, which can be viewed as a constraint surface in the phase-space $\Gamma$. Each state $\varrho$ defines the non-negative cost operator $E^\varrho_{\alpha\beta\mu\nu}$ of Eq.~\eqref{CostOp}, which in turn uniquely defines the cost function $E_\varrho$ on $\Gamma$; this function probes the separability of the ensembles. The cost function $E_\varrho$ assumes the value zero on the constraint surface $V_{N,r}$ (which is then its global minimum) if and only if $\varrho$ is separable.
\section{Statistical-mechanical approach}\label{StatMech}
Although in principle one could attempt to solve the constraints explicitly through Eq.~(\ref{Srozw}), the resulting parametrization of the constrained manifold is rather hard to work with, due to the iterative nature of the Gram-Schmidt orthonormalization. We circumvent the complications of an explicit incorporation of the constraints by using a standard method of implicit treatment of constrained systems due to Dirac~\cite{Dirac}. It is based on the introduction of Lagrange multipliers. To this end we define the full Hamiltonian of the system as
\begin{eqnarray}
H_{full}({\bf z}_1\dots {\bf z}_N):=E_{\varrho} +\sum_{\alpha,\beta}\omega_{\alpha\beta}\mathcal{C}_{\alpha\beta},\label{HT}
\end{eqnarray}
where $\omega_{\alpha\beta}$ are the Lagrange multipliers. Note that the constraints collected in the matrix $\mathcal{C}_{\alpha\beta}$ are not all independent: it is in fact a hermitian matrix, and we need to employ one Lagrange multiplier for each independent constraint only. On the other hand, we have considerable freedom in choosing the spurious Lagrange multipliers in the Lagrange matrix $\omega$. We choose $\omega$ to be hermitian. Then, $H_{full}$ is hermitian and has only real eigenvalues. Moreover, in order to take into account all independent constraints, we require that $\text{det}\,\omega\ne0$. The constraints $\mathcal{C}_{\alpha\beta} \equiv 0$ are then realized on average by setting to zero the variation of $H_{full}$ with respect to $\omega_{\alpha\beta}$\,: $\partial H_{full}/\partial \omega_{\alpha\beta}=0$. The number of fictitious particles $N$ will in general be rather large---in dimension $2\otimes 4$, for example, we have $N\geq 64$. Thus, the direct analytical study of our fictitious mechanical system seems rather hopeless, and we proceed further using methods of statistical mechanics and numerical simulations. The most natural framework would be the microcanonical ensemble; however, it is difficult to work with. Hence, we will introduce a canonical ensemble, keeping in mind that this is just a technical tool, so that, for example, the inverse temperature $\beta$ plays only the role of a parameter here, without any physical meaning. We proceed to define the canonical partition function $Z$ for our system.
The most natural definition is perhaps the following \begin{eqnarray} & & Z(\beta;\varrho)=\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \prod_{\alpha\leq\beta}\text{d}lta\big[\mathcal{C}_{\alpha\beta}({\bf z}_1\dots {\bf z}_N)\big]\text{e}^{-\beta E_{\varrho}}\nonumber\nonumber\\ & & =\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \prod_{\alpha\leq\beta}\text{d}lta\bigg[\sum_{i=1}^{N} \ov{z_{i\alphapha}}z_{i\beta}- \text{d}lta_{\alphapha\beta}\bigg]\text{exp} \bigg\{\!\!-\beta\sum_{i=1}^{N} \sum_{\alphapha, \dots ,\nu=1}^r \ov{z_{i\alphapha}} \:\ov{z_{i\beta}}E^\varrho_{\alphapha\beta\mu\nu}z_{i\mu}z_{i\nu}\bigg\},\label{Zinna} \mathcal{E}_1d{eqnarray} where the integration is explicitly restricted to the constraint surface $V_{N,r}$ (cf. Eq.~(\ref{st})) given by $\mathcal{C}_{\alpha\beta} {\it et al.} quiv 0$. The intuition behind such an approach is the following. We can formally introduce constraint ``state density'' function \begin{equation}\label{statedens} \rho( {\it et al.} psilon):=\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \prod_{\alpha\leq\beta}\text{d}lta\big[\mathcal{C}_{\alpha\beta}({\bf z}_1\dots {\bf z}_N)\big] \text{d}lta\big( {\it et al.} psilon-E_\varrho(z)\big). \mathcal{E}_1d{equation} Since $E_\varrho(z)G\times Ge 0$, $\rho( {\it et al.} psilon)$ is non-zero only for $ {\it et al.} psilonG\times Ge 0$. Then: \begin{equation} Z(\beta;\varrho)=\int_0^\infty \text{d} {\it et al.} psilon \rho( {\it et al.} psilon) \text{e}^{-\beta {\it et al.} psilon}. \mathcal{E}_1d{equation} Let us assume that the state in question is entangled. Then $E_\varrho(z)$ is strictly positive, so there exists a constant $a$ such that $E_\varrho(z)G\times Ge a>0$. The average ``energy'' is then separated from zero: \begin{eqnarray}\label{saturation} \langle\langle E_\varrho\rangle\rangle :=\frac{1}{Z(\beta;\varrho)} \int_0^\infty \text{d} {\it et al.} psilon \rho( {\it et al.} psilon)\, {\it et al.} psilon \text{e}^{-\beta {\it et al.} psilon} G\times Ge\frac{1}{Z(\beta;\varrho)} \int_a^\infty \text{d} {\it et al.} psilon \rho( {\it et al.} psilon)a \text{e}^{-\beta {\it et al.} psilon}=a. \mathcal{E}_1d{eqnarray} Now let $\varrho$ be separable. Then, by Proposition \ref{p2} $E_\varrho(z)$ has zeros on the constraint surface $\mathcal{C}_{\alpha\beta}=0$ with each zero corresponding to a separable $\varrho$-ensemble. Since such ensembles are ``rare'', we expect that the state density $\rho( {\it et al.} psilon)\to 0$ with $ {\it et al.} psilon\to 0$. Let us assume for a moment that the leading term in the actual dependence of $\rho( {\it et al.} psilon)$ was given by a power law \begin{equation}\label{ansatz} \rho( {\it et al.} psilon)=A {\it et al.} psilon^\text{d}lta, \quad A,\text{d}lta>0. \mathcal{E}_1d{equation} Then we obtain the well established result $Z(\beta;\varrho)=\frac{A}{\beta^{\text{d}lta+1}}\Gamma(\text{d}lta+1)$ and \begin{equation} \langle\langle E_\varrho\rangle\rangle =\frac{\Gamma(\text{d}lta+2)}{\Gamma(\text{d}lta+1)}\,\frac{1}{\beta}=\frac{\text{d}lta+1}{\beta}\label{1/b}. \mathcal{E}_1d{equation} Thus, we put forward the following conjecture: {\bf Conjecture.} {\it For the "state density" function $\rho( {\it et al.} psilon)$, defined in Eq. (\ref{statedens}), it holds: i) the mean energy (defined in Eq. 
(\ref{saturation})) $\langle\langle E_\varrho\rangle\rangle=a>0$ if and only if $\varrho$ is entangled; ii) the mean energy $\langle\langle E_\varrho\rangle\rangle$ scales as $1/\beta$ if and only if $\varrho$ is separable.} We anticipate that indeed we observe such a behavior in a simple case of $2\otimes 2$ Werner states~\cite{Werner}. Note that in general the exponent $\text{d}lta$ will depend on the state $\text{d}lta=\text{d}lta(\varrho)$. The partition function defined in Eq.~(\ref{Zinna}) is difficult to work with analytically (however one can still investigate it numerically, e.g. using Monte Carlo methods), so we use a different object---the partition function for the full Hamiltonian (\ref{HT}). We first rescale the variables: $z_{i\alphapha}\mapsto z_{i\alphapha}/\sqrt{N}$ and then define: \begin{eqnarray} Z(\beta,\omega;\varrho):=\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \, \text{exp}\Big[-\frac{\beta}{N^2} \Big(E_{\varrho}({\bf z}_1\dots {\bf z}_N) +N\sum_{i}\langle {\bf z}_i|\omega {\bf z}_i\rangle-N^2\text{tr}\omega\Big)\Big], \label{Z_0} \mathcal{E}_1d{eqnarray} where $\langle\,\cdot\,|\,\cdot\,\rangle$ denotes the standard scalar product in $\mathbb{C}^r$. Performing further rescaling: \begin{equation} \beta=N^2\tilde\beta \, , \quad \omega=\frac{N}{\beta}\tilde\omega \,,\label{skala} \mathcal{E}_1d{equation} $Z(\beta,\omega;\varrho)$ becomes (after dropping the tildes): \begin{eqnarray} & &Z(\beta,\omega;\varrho)=\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \,\text{exp}\Big[-{\beta}E_{\varrho}({\bf z}_1\dots {\bf z}_N) -\sum_{i} \langle {\bf z}_i|\omega {\bf z}_i\rangle+N\text{tr}\omega\Big]\nonumber\\ & &=\int \prod_{i,\mu}\text{d} ^2 z_{i\mu} \,\text{exp}\Big[-{\beta}\sum_i \sum_{\alphapha, \dots ,\nu} \ov{z_{i\alphapha}} \:\ov{z_{i\beta}}E^\varrho_{\alphapha\beta\mu\nu}z_{i\mu}z_{i\nu} -\sum_{i}\langle {\bf z}_i|\omega {\bf z}_i\rangle+N\text{tr}\omega \Big].\label{Z_stare} \mathcal{E}_1d{eqnarray} Now we are able to reproduce the (rescaled) constraints (\ref{wiaz}) only on average: \begin{equation} \frac{\partial}{\partial \omega_{\alphapha\beta}}\, \text{log}Z(\beta,\omega;\varrho)= \langle\langle N\text{d}lta_{\alphapha\beta}- \sum_i \ov{z_{i\alphapha}}z_{i\beta}\rangle\rangle \label{constr_stare_stare} \mathcal{E}_1d{equation} where the average $\langle\langle \:\cdot\: \rangle\rangle$ is taken with respect to the probability density defined through Eq.~(\ref{Z_stare}): \begin{eqnarray} P_{\varrho}({\bf z}_1\dots {\bf z}_N;\beta,\omega)&:=&\frac{1}{Z(\beta,\omega;\varrho)} \,\text{exp}\Big[-{\beta}E_{\varrho}({\bf z}_1\dots {\bf z}_N) -\sum_{i}\langle {\bf z}_i|\omega {\bf z}_i\rangle+N\text{tr}\omega\Big]. \mathcal{E}_1d{eqnarray} Thus, requiring that $\partial/\partial \omega_{\alphapha\beta}\, \text{log}Z(\beta,\omega;\varrho)=0$ amounts to: \begin{equation}\label{constr_stare} N\text{d}lta_{\alphapha\beta}= \langle\langle\sum_i \ov{z_{i\alphapha}}z_{i\beta}\rangle\rangle. \mathcal{E}_1d{equation} Following the standard treatment of constrained systems, the equations (\ref{constr_stare}) are treated as conditions imposed on a priori arbitrary (apart form being hermitian and non-singular) matrix of Lagrange multipliers $\omega$. We note that the above approach based on $H_{full}$ is nothing else but a (formal) evaluation of the integral (\ref{Zinna}) through the saddle point method with $N\to \infty$. 
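As a quick numerical sanity check of Eq.~(\ref{1/b}), the following snippet (Python, assuming \texttt{scipy}; the chosen values of $\delta$ and $\beta$ are arbitrary) computes the canonical average of the ``energy'' under the power-law ansatz (\ref{ansatz}) and compares it with $(\delta+1)/\beta$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# under rho(eps) = A eps^delta the constant A drops out of the canonical average,
# which should equal (delta + 1)/beta, cf. Eqs. (ansatz) and (1/b)
for delta, beta in [(1.75, 10.0), (0.5, 100.0)]:
    Z      = quad(lambda e: e**delta * np.exp(-beta * e), 0, np.inf)[0]
    e_mean = quad(lambda e: e**(delta + 1) * np.exp(-beta * e), 0, np.inf)[0] / Z
    print(e_mean, (delta + 1) / beta)    # the two columns agree
\end{verbatim}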
A significant simplification of the partition function (\ref{Z_stare}) comes from the form of our Hamiltonian $E_\varrho$---from Eq.~(\ref{H}) it follows that $E_\varrho({\bf z}_1\dots {\bf z}_N)= \sum_i E_{\!1\varrho}({\bf z}_i)$, where $E_{\!1\varrho}$ is just the function $E_{\varrho}$ with $N=1$. The situation is more subtle with the constraints (\ref{constr_stare}). For the purpose of this work we will assume that the contribution to the sum from each fictitious particle is equal,i.e. $\langle\langle\ov{z_{i\alphapha}} z_{i\beta}\rangle\rangle=\text{d}lta_{\alphapha\beta}$ for every $i$. In general, such ``equipartition'' of course does not have to hold and it is an additional restriction on the Lagrange multipliers. By such an assumption we however achieve a factorization of the partition function: \begin{equation} Z(\beta,\omega;\varrho)=[Z_1(\beta,\omega;\varrho)]^N, \mathcal{E}_1d{equation} where $Z_1$ is a one-particle partition function: \begin{eqnarray} & &Z_1(\beta,\omega;\varrho):=\int \prod_{\mu=1}^{r}\text{d} ^2 z_{\mu} \,\text{exp}\Big[-{\beta}E_{\! 1\varrho}({\bf z}) -\langle {\bf z}|\omega {\bf z}\rangle + \text{tr}\omega \Big]\nonumber\\ & &=\int \prod_{\mu=1}^{r}\text{d} ^2 z_{\mu} \,\text{exp}\Big[-{\beta}\sum_{\alphapha, \dots ,\nu} \ov{z_{\alphapha}} \:\ov{z_{\beta}}E^\varrho_{\alphapha\beta\mu\nu}z_{\mu}z_{\nu} -\langle {\bf z}|\omega {\bf z}\rangle + \text{tr}\omega\Big].\label{Z} \mathcal{E}_1d{eqnarray} From now on we will consider $Z_1$ only. The constraint equations (\ref{constr_stare}) are then replaced by a one-particle version: \begin{equation} \frac{\partial}{\partial \omega_{\alphapha\beta}}\, \text{log}Z_1(\beta,\omega;\varrho)=\text{d}lta_{\alphapha\beta}- \langle\langle \ov{z_{\alphapha}} z_{\beta} \rangle\rangle =0\ , \label{constr} \mathcal{E}_1d{equation} in accordance with our extra assumption made above. The average in Eq. (\ref{constr}) is taken with respect to the probability distribution: \begin{equation} P_{1\varrho}({\bf z};\beta,\omega):=\frac{1}{Z_1(\beta,\omega;\varrho)} \,\text{exp}\Big[-{\beta}E_{\! 1\varrho}({\bf z}) -\langle {\bf z}|\omega {\bf z}\rangle+ \text{tr}\omega\Big] . \label{SP} \mathcal{E}_1d{equation} In particular, Eq.~(\ref{constr}) implies that $\langle\langle |z_{\alphapha}|^2\rangle\rangle =1$. To understand the meaning of Eq.~(\ref{constr}), let us assume that $\omega=\omega_0(\beta;\varrho)$ is its solution. Then Eq.~(\ref{constr}) implies that a family of vectors $\{\ket{\psi({\bf z})}:=\sum_{\alphapha} z_{\alphapha} \ket{e_{\alphapha}}\, ;\, {\bf z}\in \mathbb{C}^r\}$ forms a continuous $\varrho$-ensemble with respect to the probability distribution (\ref{SP}), i.e.: \begin{equation}\label{cont_ens} \int \! \text{d} ^{2r} {\bf z} P_{1\varrho}({\bf z};\beta,\omega_0)\, | \psi({\bf z})\rangle \! \langle \psi({\bf z})| = \varrho \mathcal{E}_1d{equation} irrespectively of $\beta$. Since $E_{\! 1\varrho}({\bf z})=c^2\big(\psi({\bf z})\big)$ (cf. Eqs.~(\ref{test}) and (\ref{H})) is the concurrence squared of each $\ket{\psi({\bf z})}$, the average ``energy'' is just the ensemble average of the concurrence squared: \begin{eqnarray} \langle\langle E_{\! 1\varrho} \rangle\rangle_0(\beta):= \int \text{d}^{2r} {\bf z} \, P_{1\varrho}\big[{\bf z};\beta,\omega_0(\beta;\varrho)\big]\, E_{\! 
1\varrho}({\bf z}) =-\frac{\partial}{\partial \beta}\, \text{log}Z_1\Big|_{\omega=\omega_0(\beta;\varrho)}.\label{avH} \mathcal{E}_1d{eqnarray} Due to the property (\ref{c2}) one can formally simplify the integral (\ref{Z}) using the Hubbard-Stratonovitch trick. Indeed, Eq.~(\ref{Z}) can be rewritten as: \begin{eqnarray} Z_1(\beta,\omega;\varrho)= \int \prod_{\mu=1}^{r}\text{d} ^2 z_{\mu} \text{exp}\bigg\{-{\beta}\sum_{a,b=1}^{d_1,d_1} \bigg|\sum_{\alphapha, \beta}h^{ab}_{\alphapha\beta}(\varrho)z_\alphapha z_\beta\bigg|^2 -\langle {\bf z}|\omega {\bf z}\rangle+\text{tr}\omega\bigg\}, \mathcal{E}_1d{eqnarray} where: \begin{equation}\label{hab} h^{ab}_{\alphapha\beta}(\varrho):=\langle {\bf z}eta_a\otimes \omegaidetilde {\bf z}eta_b|e_\alphapha\otimes e_\beta\rangle \mathcal{E}_1d{equation} and we have rescaled $\beta$ by $1/4$. Next, we use the Hubbard-Stratonovitch substitution: \begin{equation} \text{exp}(-{\beta}|y|^2)=\int \frac{\text{d} ^2 s} {\pi\beta}\,\text{exp}\left(\!\!-\frac{|s|^2}{\beta} +\text{i}\:\ov s y+\text{i}s\ov y\right). \mathcal{E}_1d{equation} to obtain (after a formal interchange of the integrations): \begin{eqnarray} & &Z_1(\beta,\omega;\varrho)=\int\prod_{a,b=1}^{d_1,d_2}\frac{\text{d}^2 s_{ab}}{\pi\beta}\, \text{exp}\left(-\frac{1}{\beta}\sum_{a,b}|s_{ab}|^2+\text{tr}\omega\right) \int \frac{1}{2^r}\prod_{\mu=1}^{r}\text{d} z_{\mu}\text{d} \ov{z_{\mu}}\nonumber\\ & &\times\text{exp}\bigg\{\sum_{\alphapha,\beta}\Big[- \ov{z_{\alphapha}}\omega_{\alphapha\beta}z_\beta +\text{i}\sum_{a,b}\ov{s_{ab}}\,h^{ab}_{\alphapha\beta}(\varrho)z_\alphapha z_\beta +\text{i}\sum_{a,b}s_{ab}\,\ov{h^{ab}_{\alphapha\beta}(\varrho)}\ov{z_\alphapha}\, \ov{z_\beta} \Big]\bigg\}. \mathcal{E}_1d{eqnarray} The above integral is finite if and only if $\omega> 0$ (as we said earlier we assume $\omega$ to be non-singular in order not to loose any of the constraints, hence the strong inequality here). This puts no restriction on the amount of independent parameters in $\omega$ and from now on we will assume this condition to hold. Performing the Gaussian integration in the $2r$ variables ${\bf z},{\bf \ov z}$ finally yields: \begin{eqnarray} Z_1(\beta,\omega;\varrho)=\pi^r \int\prod_{a,b=1}^{d_1,d_2}\frac{\text{d}^2 s_{ab}}{\pi\beta} \text{exp}\Big(-\frac{1}{\beta}\sum_{a,b}|s_{ab}|^2+\text{tr}\omega\Big) \frac{1}{\sqrt{\text{det}M_\varrho({\bf s},\omega)}},\label{poHS} \mathcal{E}_1d{eqnarray} where $2r\times 2r$ matrix $M_\varrho({\bf s},\omega)$ is defined as follows: \begin{equation}\label{macierz} M_\varrho({\bf s},\omega):=\left[\begin{array}{cc} \omega & -2\text{i}\sum_{a,b}s_{ab}\ov{{\bf h}^{ab}(\varrho)}\\ -2\text{i}\sum_{a,b}\ov{s_{ab}}\,{\bf h}^{ab}(\varrho) & \ov\omega \mathcal{E}_1d{array}\right], \mathcal{E}_1d{equation} (we used the fact that $\ov\omega=\omega^T$) and ${\bf h}^{ab}(\varrho)$ denotes the $r\times r$ matrix whose elements are $h^{ab}_{\alphapha\beta}(\varrho)$. \section{Calculation for Werner states}\label{WernerStates} In this Section we apply the developed statistical method to study Werner states of a $2 \otimes 2 $ dimensional system. 
They are defined as follows: \begin{equation} W(p):=(1-p)|\Psi_-\rangle\langle \Psi_-| + \frac{p}{4} {\bf 1}_2 \otimes {\bf 1}_2, \label{WernerS} \mathcal{E}_1d{equation} where: \begin{equation}\label{BellB} \ket{\Psi_\pm}:=\frac{1}{\sqrt{2}}\big(\ket{01}\pm\ket{10}\big),\quad \ket{\Phi_\pm}:=\frac{1}{\sqrt{2}}\big(\ket{00}\pm\ket{11}\big) \mathcal{E}_1d{equation} are the Bell basis states and $\{\ket 0,\ket 1\}$ is the standard basis of $ {\mathbb{C}} ^2$. The states $W(p)$ have positive partial transpose, and hence are separable (Peres and Horodecki {\it et al.} \cite{PeresHoro}), for $pG\times Ge 2/3$. As the fixed eigenensemble $\{\ket{e_{\alphapha}}\}$ of $W(p)$ we take: \begin{eqnarray} & & \ket{e_1}:=\sqrt{1-\frac{3}{4}p}\,\ket{\Psi_-}, \quad \ket{e_2}:=\frac{\sqrt{p}}{2}\,\text{i}\ket{\Psi_+}, \quad \\ & & \ket{e_3}:=\frac{\sqrt{p}}{2}\,\text{i}\ket{\Phi_{-}},\quad \qquad \ \!\ket{e_4}:=\frac{\sqrt{p}}{2}\ket{\Phi_{+}}. \mathcal{E}_1d{eqnarray} We proceed to calculate the one-particle partition function $Z_1\big(\beta,\omega;W(p)\big) {\it et al.} quiv Z_1(\beta,\omega;p)$. In what follows we assume $p>0$, for $p=0$ corresponds to a pure state. According to the general formula (\ref{poHS}), we have to find the matrices ${\bf h}^{ab}\big(W(p)\big)$ and $M_{W(p)}(s,\omega)$, defined in Eqs.~(\ref{hab}) and (\ref{macierz}). Since in the case of $ {\mathbb{C}} ^2\otimes {\mathbb{C}} ^2$ the skew-symmetric subspace $ {\mathbb{C}} ^2\omegaedge {\mathbb{C}} ^2$ is one-dimensional---it is spanned by a single vector $\ket{\bf z}eta=1/\sqrt 2 (\ket{01}-\ket{10})$ in each copy $AA'$ and $BB'$---there is only one matrix ${\bf h}^{ab}\big(W(p)\big) {\it et al.} quiv {\bf h}(p)$ and only one Hubbard-Stratonovich parameter $s_{ab} {\it et al.} quiv s$. Calculation of ${\bf h}(p)$ and $M_{W(p)}(s,\omega) {\it et al.} quiv M_p(s,\omega)$ yields: \begin{eqnarray} & & {\bf h}(p)=\frac{1}{8}\left[\begin{array}{cccc} 4-3p & 0 & 0 & 0\\ 0 & p & 0 & 0\\ 0 & 0 & p & 0\\ 0 & 0 & 0 & p \mathcal{E}_1d{array} \right]\label{hp}\\ & & M_p(s,\omega)=\left[\begin{array}{cc} \omega & -2\text{i}s {\bf h}(p)\\ -2\text{i}\,\ov s \, {\bf h}(p) & \ov\omega \mathcal{E}_1d{array}\right],\label{M} \mathcal{E}_1d{eqnarray} so that: \begin{equation} E_1({\bf z};p)=\frac{1}{64}\big|(4-3p)z_1^2+pz_2^2+pz_3^2+pz_4^2\big|^2, \label{F1} \mathcal{E}_1d{equation} and: \begin{eqnarray} Z_1(\beta,\omega;p)=\int\text{d} ^2z_1\dots \text{d} ^2z_4 \text{exp}\Big[-{\beta}\, \big|(4-3p)z_1^2+pz_2^2+pz_3^2+pz_4^2\big|^2 -\langle {\bf z}|\omega {\bf z}\rangle+\text{tr}\omega\Big]\label{Z1werner} \mathcal{E}_1d{eqnarray} (we have absorbed the factor $1/64$ into the definition of the parameter $\beta$). Next, we calculate $\text{det}M_p(s,\omega)$ for $p \ne 0$. We first perform a transformation: \begin{eqnarray} M_p \mapsto M_p':= \left[\begin{array}{cc} {\bf h}(p)^{-1/2} & 0\\ 0 & {\bf h}(p)^{-1/2} \mathcal{E}_1d{array}\right]\, M_p \left[\begin{array}{cc} {\bf h}(p)^{-1/2} & 0\\ 0 & {\bf h}(p)^{-1/2} \mathcal{E}_1d{array}\right] = \left[\begin{array}{cc} \omega' & -2\text{i}s\\ -2\text{i}\, \ov s & \ov{\omega'}\mathcal{E}_1d{array}\right], \label{M'} \mathcal{E}_1d{eqnarray} where: \begin{equation}\label{w'} \omega': = {\bf h}(p)^{-1/2}\omega {\bf h}(p)^{-1/2}\ . 
\mathcal{E}_1d{equation} Then we multiply Eq.~(\ref{M'}) on the left by $\left[\begin{array}{cc} {\bf 1} & 0\\ 2\text{i}\,\ov s & \omega' \mathcal{E}_1d{array}\right]$ to obtain: \begin{equation} \left[\begin{array}{cc} {\bf 1} & 0\\ 2\text{i}\,\ov s & \omega' \mathcal{E}_1d{array}\right] M_p' = \left[\begin{array}{cc} \omega' & -2\text{i}s\\ 0 & 4|s|^2+\omega'\,\ov{\omega'}\mathcal{E}_1d{array}\right], \mathcal{E}_1d{equation} and after taking the determinants of both sides: \begin{equation} \text{det}M_p(s,\omega) =\text{det}{\bf h}(p)^2\,\text{det} \big(4|s|^2+\omega'\,\ov{\omega'}\big).\label{detM} \mathcal{E}_1d{equation} We then substitute Eq.~(\ref{detM}) into Eq.~(\ref{poHS}) and finally obtain (with $x:=4|s|^2$): \begin{equation} Z_1(\beta,\omega;p)= \frac{\pi^4}{4\beta\,\text{det}{\bf h}(p)} \, \text{e}^{\text{tr}[\omega'{\bf h}(p)]} \int\limits_0^{\infty}\!\!\frac{\text{d} x \,\text{e}^{-\frac{x}{4\beta}}} {\sqrt{\text{det}\big(x+\omega'\,\ov{\omega'}\big)}},\label{Z1HS3} \mathcal{E}_1d{equation} where $\omega'$ is defined through Eq.~(\ref{w'}). The above integral is well defined, since $\text{det}\big(x+\omega'\,\ov{\omega'}\big)= \text{det}\big(x+\sqrt{\omega'}\,\ov{\omega'}\sqrt{\omega'}\big)$ and $\sqrt{\omega'}\,\ov{\omega'}\sqrt{\omega'}$ is strictly positive, as we have assumed that $\text{det}\omega\ne 0$. We can explicitly calculate the derivative $\partial \text{log}Z_1 (\beta,\omega;p) / \partial \omega'$. For a generic $\omega'$ it takes the following form: \begin{eqnarray} & &\frac{\partial \text{log}Z_1 (\beta,\omega;p)}{\partial \omega'}= \sqrt{{\bf h}(p)}\frac{\partial \text{log}Z_1 (\beta,\omega;p)}{\partial \omega} \sqrt{{\bf h}(p)}\nonumber\\ & &=\Bigg[\int\limits_0^{\infty}\!\!\frac{\text{d} y \,\text{e}^{-\frac{y}{4\beta}}} {\sqrt{\text{det}\big(y+\omega'\,\ov{\omega'}\big)}}\Bigg]^{-1} \int\limits_0^{\infty}\!\!\frac{\text{d} x \,\text{e}^{-\frac{x}{4\beta}}} {\sqrt{\text{det}\big(x+\omega'\,\ov{\omega'}\big)}} \Big[{\bf h}(p)-\big(x+\omega'\,\ov{\omega'}\big)^{-1}\omega'\Big]\label{dZ}. \mathcal{E}_1d{eqnarray} The special case of Eqs.~(\ref{Z1HS3}), (\ref{dZ}) for Bell-diagonal states is straightforward---it is enough to replace matrix ${\bf h}(p)$ from Eq.~(\ref{hp}) with the diagonal matrix $4\,\text{diag}(1-p_1-p_2-p_3,p_1,p_2,p_3)$. \begin{figure} \begin{center} \includegraphics[width=0.55\linewidth, angle=270] {degen10.ps} \caption{\label{wykres1}The plot of $\min\limits_{\omega'}||\partial \text{log}Z_1(\beta,\omega;p)/\partial \omega'||_{HS}$ for Werner states as a function of probability $p$ for $\beta=10$.} \mathcal{E}_1d{center} \mathcal{E}_1d{figure} \section{Numerical results} Further studies of the integral (\ref{Z1HS3}) were performed using numerical methods. According to Eq.~(\ref{constr}) one has to search for a saddle point of $\text{log} Z_1(\beta,\omega;p)$ with respect to $\omega$ (or equivalently with respect to $\omega'$; cf. Eq.~(\ref{w'})). The search was performed by flood-minimizing the Hilbert-Schmidt norm of $\partial \text{log}Z_1 (\beta,\omega;p) / \partial \omega'_{\alphapha\beta}$ for a range of parameters $\beta=10,100,\dots$. For simplicity we assumed a specific form of $\omega'$: \begin{equation}\label{choice} \omega'=\left[\begin{array}{cccc} G\times Gamma & 0 & 0 & 0\\ 0 & \lambda & 0 & 0\\ 0 & 0 & \lambda & 0\\ 0 & 0 & 0 & \lambda \mathcal{E}_1d{array} \right] \mathcal{E}_1d{equation} and minimized the derivative (given by formula similar to to Eq. 
(\ref{dZ}), but taking into account the specific symmetry of (\ref{choice})) with respect to the parameters $\Gamma,\lambda >0$. We paid attention that the obtained minima do not lie on the boundary of the region $\omega' >0$ (or, equivalently, $\omega >0$). The specific choice (\ref{choice}) of $\omega'$ was motivated by the form of the cost function (\ref{F1}). We also obtained some numerical evidence that in the generic case the minima of $||\partial \text{log}Z_1 (\beta,\omega;p) / \partial \omega'||_{HS}$ were attained for matrices $\omega'$ very close to (\ref{choice}). The results of the simulations for $\beta=10$ are presented in Fig.~\ref{wykres1} (the results for higher values of $\beta$ did not differ from those for $\beta=10$). We see that for $p\geq 0.89$ the constraints (\ref{constr}) can be satisfied. We shall call the interval where this happens the ``equipartition region''.
\begin{figure}
\begin{center}
\includegraphics[width=0.55\linewidth, angle=270]{betadep_new.ps}
\caption{\label{wykres2} The plot of $\langle\langle E_{1W(p)} \rangle\rangle_0(\beta)$ for $p=0.90$ on a double log scale.}
\end{center}
\end{figure}
Next, the dependence of the average entanglement $\langle\langle E_{1W(p)} \rangle\rangle_0(\beta)$ (cf. Eq.~(\ref{avH})) of the continuous ensemble (\ref{cont_ens}) on $\beta$ within the equipartition region was examined (recall that outside this region the one-particle constraint (\ref{constr}) is no longer satisfied). Fig.~\ref{wykres2} shows a sample plot for $p=0.9$. One sees that the average ``energy'' indeed scales like $1/\beta$, just as predicted by the ansatz (\ref{ansatz}) and Eq.~(\ref{1/b}). The estimated exponent $\delta$ at this value of $p$ is $\delta\approx 1.75$. We have also checked that in the limiting case $\beta \to \infty$ the equipartition region is not altered. Hence, our procedure seems to detect separability of the Werner states (\ref{WernerS}) at least for $p \geq 0.89$, and thus can serve only as a sufficient condition for separability. We did not check the behavior of $\langle\langle E_{1W(p)} \rangle\rangle_0(\beta)$ outside the equipartition region $p<0.89$.
\section{Further questions and concluding remarks}
The statistical-mechanical approach to the separability problem presented here differs from the more traditional techniques in that we studied the space of convex decompositions of a given state, rather than the convex set of all states. The resulting polynomial equations are real, due to the constraint (\ref{wiaz}), and this real structure makes the analysis more complicated than it would be in a complex case. Hence, we applied statistical-mechanical methods to study possible zeros of this system. As an example, we studied $2\otimes 2$ Werner states (\ref{WernerS}). However, the numerical difficulty was quite high already for this simple example, and we have applied several simplifications. Nevertheless, the numerical results suggest that, at least for separable states in a vicinity of the identity, the partition function and the average ``energy'', related to the ensemble entanglement (cf. Eq.~(\ref{p^2})), show some qualitative change in their behavior. There are obviously some important questions left. First of all, we postulated rather than derived the power-law state-density behavior (\ref{1/b}) for separable states. It would be an interesting, albeit difficult, task to try to derive this law analytically, or at least to find some arguments in its favor.
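For concreteness, let us also sketch the quantity that was minimized in the previous Section. The following code (Python with \texttt{numpy} and \texttt{scipy}; the optimizer, the starting points, and the grid in $p$ are our own illustrative choices, not necessarily those used for Fig.~\ref{wykres1}) evaluates the Hilbert--Schmidt norm of the derivative (\ref{dZ}) through the one-dimensional representation (\ref{Z1HS3}) for the diagonal ansatz (\ref{choice}) $\omega'=\text{diag}(\Gamma,\lambda,\lambda,\lambda)$, and minimizes it over $\Gamma,\lambda>0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

beta = 10.0

def h_diag(p):
    # diagonal of the matrix h(p), Eq. (hp)
    return np.array([4.0 - 3.0 * p, p, p, p]) / 8.0

def grad_norm(params, p):
    # Hilbert-Schmidt norm of d log Z_1 / d omega' (Eq. (dZ)) for
    # omega' = diag(Gamma, lam, lam, lam), Eq. (choice)
    Gamma, lam = np.exp(params)                 # enforce Gamma, lam > 0
    diag = np.array([Gamma, lam, lam, lam])
    w = lambda x: np.exp(-x / (4.0 * beta)) / np.sqrt(np.prod(x + diag**2))
    norm = quad(w, 0, np.inf)[0]
    avg = np.array([quad(lambda x, dd=dd: w(x) / (x + dd**2), 0, np.inf)[0]
                    for dd in diag]) / norm     # weighted averages <1/(x+omega'^2)>
    return np.linalg.norm(h_diag(p) - avg * diag)

for p in np.linspace(0.6, 1.0, 9):
    starts = [np.log([0.5, 0.5]), np.log([2.0, 0.2])]
    results = [minimize(grad_norm, x0, args=(p,), method="Nelder-Mead") for x0 in starts]
    best = min(results, key=lambda res: res.fun)
    print(f"p = {p:.2f}   min ||d log Z_1 / d omega'||_HS = {best.fun:.3e}")
\end{verbatim}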
Another issue is that, in passing from the full $N$-particle constraints (\ref{constr_stare}) to the one-particle ones (\ref{constr}), we have tacitly assumed a sort of ``equipartition'' of the constraints, i.e. that the constraints are divided equally among the particles. This does not actually have to be the case. In particular, the shape of the curve in Fig.~\ref{wykres1} tells us that below $p=0.89$ the constraints are not ``equipartitioned''. Thus, in principle one should work with the full $N$-particle partition function (\ref{Z_stare}) and seek regions where the full constraints (\ref{constr_stare}) can be satisfied. Then the scaling of the average ``energy'' with $\beta$ within those regions would allow one to discriminate between separability and entanglement. As a side remark, we note that, quite surprisingly, the value $p=0.89$ appears in the separability criterion of Braunstein {\it et al.}~\cite{Braunstein}, based on an estimation of the size of a ball of separable states around the normalized identity (see also Bengtsson and {\.Z}yczkowski~\cite{Bengtsson} and the references therein). It would be worth analyzing this curious coincidence in order to gain a deeper understanding of the strengths and weak points of the presented approach. Finally, let us mention that in principle one can try to calculate the integral (\ref{Zinna}) numerically using the Monte Carlo method. The points of $V_{N,r}$ can be generated either using Eq.~(\ref{Srozw}) or, what seems more feasible, directly from the definition (\ref{st}). The latter method amounts to generating random unitary matrices from $U(N)$ and discarding $(N-r)$ of their columns (for methods of random generation of unitary ensembles see e.g. Po\'zniak {\it et al.}~\cite{Pozniak}). However, we have not performed such simulations.

We gratefully acknowledge discussions with H.-U. Everts, P. Horodecki, G. Palacios, and R. Wimmer. We would like to thank Deutsche Forschungsgemeinschaft (SFB 407, SPP 1078, GK 282, 436 POL), the European Graduate College 665, EU IP project SCALA, ESF PESC Program QUDEDIS, Spanish MEC Program Consolider Ingenio 2010, and Trup Cualitat Generalitat de Catalunya for the financial support.
\begin{thebibliography}{99}
\bibitem{RMP} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, quant-ph/0702225.
\bibitem{PeresHoro} A. Peres, Phys. Rev. Lett. {\bf 77}, 1413 (1996); M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A {\bf 223}, 1 (1996).
\bibitem{UPBs} C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal, Phys. Rev. Lett. {\bf 82}, 5385 (1999).
\bibitem{PolynEqSeparability} B. Kraus, J. I. Cirac, S. Karnas, and M. Lewenstein, Phys. Rev. A {\bf 61}, 062302 (2000); P. Horodecki, M. Lewenstein, G. Vidal, and J. I. Cirac, Phys. Rev. A {\bf 62}, 032310 (2000).
\bibitem{TerhalWit} B. M. Terhal, Phys. Lett. A {\bf 271}, 319 (2000); Lin. Alg. Appl. {\bf 323}, 61 (2000).
\bibitem{OptimWitness} M. Lewenstein, B. Kraus, J. I. Cirac, and P. Horodecki, Phys. Rev. A {\bf 62}, 052310 (2000).
\bibitem{Pawel} P. Horodecki, Phys. Lett. A {\bf 232}, 333 (1997).
\bibitem{Gurvits} L. Gurvits, J. Comp. Sys. Sci. {\bf 69}, 448 (2004).
\bibitem{Osborne04} T. Osborne, Quant. Inf. Comp. {\bf 7}, 209 (2007).
\bibitem{Bengtsson} I. Bengtsson and K. {\.Z}yczkowski, \emph{Geometry of Quantum States: An Introduction to Quantum Entanglement} (Cambridge University Press, Cambridge, 2006).
\bibitem{Rungta} P. Rungta, V. Bu$\check{\text z}$ek, C. M. Caves, M. Hillery, and G. J.
Milburn, Phys. Rev. A {\bf 64}, 042315 (2001). \bibitem{Mintert} F. Mintert, M. Ku\'s, A. Buchleitner, Phys. Rev. Lett. {\bf 92}, 167902 (2004). \bibitem{Florek} F. Hulpke, PhD Thesis, Universit\"at Hannover 2004. \bibitem{Kubasiak} A. Kubasiak, J. K. Korbicz, J. Zakrzewski, and M. Lewenstein, Europhys. Lett. {\bf 72}, 506 (2005). \bibitem{Schrod} E. Schr\"oedinger, Proc. Camb. Phil. Soc. {\bf 32}, 446 (1936). \bibitem{Hugh} L. P. Hughston, R. Jozsa, W. K. Wootters, Phys. Lett. A {\bf 183}, 14 (1993). \bibitem{Kirkpatrick} K. A. Kirkpatrick, Found. Phys. Lett. {\bf 19}, 95 (2006). \bibitem{Kobayashi1} S. Kobayashi and K. Nomizu, {\it et al.} mph{Foundations of Differential Geometry, Vol.~1} (Interscience Publishers, New York, 1963). \bibitem{Spivak} M. Spivak, {\it et al.} mph{A Comprehensive Introduction to Differential Geometry, Vol.~5} (Publish or Perish, Wilmington, 1979). \bibitem{GS} The Gram-Schmidt orthogonalization creates an orthonormal basis of a span of a set $\{\ket{v_1},\dots,\ket{v_r}\}$ of a linearly independent vectors form some Hilbert space. Define: \begin{equation} \ket{\omegaidetilde{u}_1}:= \ket{v_1},\quad \ket{\omegaidetilde{u}_i}:=\ket{v_i}-\sum_{j=1}^{r-1}\frac{\langle \omegaidetilde{u}_j|v_i\rangle}{||\omegaidetilde{u}_j||^2} \,\ket{\omegaidetilde{u}_j}.\nonumber \mathcal{E}_1d{equation} Then $\{\omegaidetilde u_1,\dots,\omegaidetilde u_r\}$ is a orthogonal system and spans the same space as $\{v_1,\dots,v_r\}$. Passing to the normalized vectors: $\ket{u_i}:=\frac{1}{||\omegaidetilde{u}_i||^2}\,\ket{\omegaidetilde{u}_i}$, we obtain the desired orthonormal system. \bibitem{Car} P. J. Kelly and M. L. Weiss, {\it et al.} mph{Geometry and Convexity} (Wiley, New York, 1979). \bibitem{Wu} S. Wu, X. Chen, and Y. Zhang, Phys. Lett. A {\bf 275}, 244 (2000). \bibitem{Kobayashi} S. Kobayashi and K. Nomizu, {\it et al.} mph{Foundations of Differential Geometry, Vol.~2} (Interscience Publishers, New York, 1963). \bibitem{Bochnak} J. Bochnak, M. Coste, and M.-F. Roy, {\it et al.} mph{Real Algebraic Geometry} (Springer, Berlin, 1998). \bibitem{Badziag} P. Badzi\k ag, P. Horodecki, and R. Horodecki, quant-ph/0504041. \bibitem{hp17} J. K. Korbicz, J. I. Cirac, J. Wehr, and M. Lewenstein, Phys. Rev. Lett. {\bf 94}, 153601 (2005). \bibitem{Dirac} P. A. M. Dirac, {\it et al.} mph{Lectures on Quantum Mechanics} (Yeshiva University, New York, 1964). \bibitem{Werner} R. F. Werner, Phys. Rev. A {\bf 40}, 4277 (1989). \bibitem{Braunstein} S. L. Braunstein, C. M. Caves, R. Jozsa, N. Linden, S. Popescu, and R. Schack, Phys. Rev. Lett {\bf 83}, 1054 (1999). \bibitem{Pozniak} M. Po\'zniak, K. {\.Z}yczkowski, and M. Ku\'s, J. Phys. A {\bf 31}, 1059 (1998). \mathcal{E}_1d{thebibliography} \mathcal{E}_1d{document}
\begin{document}
\title{Getting rid of nonlocality from quantum physics}
\author{Andrei Khrennikov\\
Linnaeus University, International Center for Mathematical Modeling\\
in Physics and Cognitive Sciences, V\"axj\"o, SE-351 95, Sweden}
\maketitle
\abstract{This paper aims to dissociate nonlocality from quantum theory. We demonstrate that the tests on violation of the Bell type inequalities are simply statistical tests of local incompatibility of observables. In fact, these are tests on violation of {\it the Bohr complementarity principle.} Thus, the attempts to couple experimental violations of the Bell type inequalities with ``quantum nonlocality'' are really misleading. These violations are explained in quantum theory as manifestations of incompatibility of observables for a single quantum system, e.g., the spin projections for a single electron or the polarization projections for a single photon. Of course, one can go beyond quantum theory with hidden-variables models (as was suggested by Bell) and then discuss their possible nonlocal features. However, conventional quantum theory is local.}
\section{Introduction}
As is well known, the~original EPR-argument~\cite{EPR} was fundamentally coupled with the Bohr complementarity principle~\cite{BR0,PL1,PL2} (see Section~\ref{COMPP}). Einstein, Podolsky, and~Rosen reasoned against the completeness of quantum mechanics (QM) by showing that the ``elements of reality'' corresponding to two incompatible observables (e.g., position and momentum) can be assigned to the same physical system. However, their argument was purely theoretical (even merely philosophical), and it was impossible to check the EPR-statement experimentally. Bohr pointed to the latter in his reply to Einstein~\cite{BR}; he considered the EPR-argument as metaphysical. Although~nonlocality was mentioned in the EPR-paper, it was considered just a possible alternative to incompleteness of~QM. Nonlocality was emphasized for the first time by Bohm. It was elevated by Bell (who admired Bohmian mechanics) through the argument based on violation of Bell's inequality~\cite{Bell0,Bell1,Bell2}. Our aim is to perform the {\it genuine quantum mechanical analysis of the derivation of the CHSH-inequality} considered as an inequality for correlations of quantum observables---{\it the quantum CHSH-inequality}. Thus, we do not try to go beyond QM. We are interested in the quantum mechanical interpretation of experimental violation of the CHSH-inequality. We show that, in~fact, the~degree of violation is straightforwardly coupled to the degree of incompatibility of observables, i.e., the~norms of commutators. In~particular, in~the scenario with two spatially separated systems, these are tests of local incompatibility, i.e.,~{\it incompatibility of observables on a single subsystem of a compound system} (Theorem 1, Section~\ref{LR}). We remark that the CHSH-inequality~\cite{CHSH} is derived for classical correlations expressed in the framework of hidden variables (established by Bell \cite{Bell0,Bell1,Bell2}) and by using the calculus of classical probabilities (the Kolmogorov probability theory). The~quantum CHSH-inequality is derived for quantum correlations by using the operator~formalism.
We stress that the CHSH combination of correlations, \begin{equation} \label{LC} \langle {\cal B} \rangle =\frac{1}{2} [\langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle- \langle A_2 B_2 \rangle], \end{equation} has three different~interpretations: \begin{itemize} \item Classical (hidden variables) correlations, $\langle {\cal B} \rangle_{\rm{CL}}.$ \item Experimental correlations, $\langle {\cal B} \rangle_{\rm{EXP}}.$ \item Quantum mechanical correlations, $\langle {\cal B} \rangle_{\rm{QM}}.$ \end{itemize} Following Bell, one can compare the classical theoretical quantity $\langle {\cal B} \rangle_{\rm{CL}}$ with its experimental counterpart $\langle {\cal B} \rangle_{\rm{EXP}}.$ The majority of the quantum foundation and information community proceeds in this way. This way leads to operating with the notion of nonlocality and action at a~distance. {\it Is there any reason to couple this mysticism with quantum theory?} It is more natural to start with the quantum theoretical analysis of quantity $\langle {\cal B} \rangle_{\rm{QM}}.$ It is easy to explain under what circumstances it can be bounded by 1 (or exceed 1).\footnote{Thus so-called classical bound has the purely quantum origin.} This quantum mechanical explanation is purely local. Thus, it is really incorrect in quantum theory to speak about its nonlocality or associate with it any kind of action at a~distance. It is well known that (by the complementarity principle) it is impossible to measure jointly two spin coordinates for electron. Therefore, $\langle {\cal B} \rangle_{\rm{QM}}$ can exceed 1. If~somebody does not believe in this prediction of QM, it would be natural to check violation of the principle of complementarity for a single electron (or photon), e.g., to~check violation of the Heisenberg uncertainty relation (in the form of the Robertson inequality). As emphasized above, here I proceed by using solely the formalism of QM, cf. with probabilistic analysis of the incompatibility interpretation of the Bell type {inequalities in} \cite{Accardi}-\cite{BC2} and especially~\cite{KHB2} (the probabilistic version of the present paper). (See also the recent preprint of Griffiths~\cite{Griffiths}, where incompatibility of quantum observables is emphasized; see the recent works of Boughn~\cite{Boughn1,Boughn2}, where the nonlocality viewpoint on quantum theory is critically analyzed and the role of the ontological vs. information interpretations of the wave function in discussions on ``quantum nonlocality'' is~emphasized.) Foundational issues such as the complementarity principle, incompatibility, nonlocality, realism, and~hidden variables, are discussed in more detail in Section~\ref{FOOP}. \section{Measuring the Degree of Incompatibility via the~CHSH-Test} We show that the degree of violation the quantum CHSH-inequality can be considered as a measure of incompatibility in two pairs of quantum observables, $A_1,A_2$ and $B_1, B_2.$ This is the simple consequence of the {\it the Landau identity} \cite{Landau,Landau1} (see Equation \eqref{L2}). In~quantum theory incompatibility is mathematically expressed as noncommutativity. Thus, by~testing incompatibility, we test the degree of noncommutativity, or~in other words, the~``magnitudes" of observables corresponding to commutators, \begin{equation} \label{INC} \hat M_A=i [\hat A_1, \hat A_2], \; \hat M_B=i [\hat B_1, \hat B_2]. \end{equation} We use the hat-symbol to denote~operators. 
The incompatibility-magnitude can be expressed via the maximal value of averages of commutator-operators, i.e.,~by their norms, for~example, \begin{equation} \label{MG} \sup_{\Vert \psi\Vert=1} \vert\langle \psi\vert \hat M_A\vert \psi\rangle\vert = \Vert \hat M_A\Vert. \end{equation} By interpreting quantity $\langle \psi\vert \hat M_A\vert \psi\rangle$ as the theoretical counterpart of experimental average $\langle M_A\rangle_\psi$ of observable $M_A,$ we can measure experimentally the incompatibility-magnitude, i.e.,~norm $\Vert \hat M_A\Vert$ from measurements of commutator-observable $M_A.$ (The main foundational problem is that measurement of such commutator-observables is challenging. Recently some progress was demonstrated on the basis of weak measurements, but~generally we are not able to measure commutator-quantities.) We remark that (from the quantum mechanical viewpoint) the CHSH-test estimates the product of incompatibility-magnitudes for the $A$-observables and $B$-observables, i.e.,~the quantity $\Vert \hat M_A\Vert\Vert \hat M_B\Vert.$ However, by~considering the $B$-observables as axillary and selecting them in a proper way (for example, such that the $B$-commutator is a simple operator), we can use the CHSH-test to get the experimental value for the incompatibility-magnitude $\Vert \hat M_A\Vert.$ \section{Incompatibility as Necessary Condition of Violation of Quantum~CHSH-Inequality} \unskip \subsection{General Case: Without Referring to the Tensor Product~Structure} \label{GC} {Consider} the Bohm--Bell type experiments. Four observables $A_1, A_2, B_1, B_2$ taking values $\pm 1$ are considered. It is assumed that observables in each pair $A_i, B_j, i, j=1,2,$ can be measured jointly, i.e.,~$A$-observables are compatible with $B$-observables. However, the~observables in pairs $A_1, A_2$ and $B_1,B_2$ are incompatible, i.e.,~they cannot be jointly measured. Thus, probability distributions $p_{A_i B_j}$ are well defined theoretically in QM and they can be verified experimentally; probability distributions $p_{A_1 A_2}$ and $p_{B_1 B_2}$ are not defined in QM and, hence, the~question of their experimental verification does not~arise. We consider quantum observables represented by Hermitian operators. In~QM, mathematical compatibility is represented by commutativity of operators, i.e.,~in the Bohm--Bell type experiments \begin{equation} \label{KO1} [\hat A_i, \hat B_j]=0, i,j=1,2, \end{equation} and generally $ [\hat A_1, \hat A_2]\not=0, \; [\hat B_1, \hat B_2]\not=0. $ The quantum theoretical CHSH-correlation function has \mbox{the form:} \begin{equation} \label{LC} \langle {\cal B} \rangle =\frac{1}{2} [\langle \hat A_1 \hat B_1 \rangle + \langle \hat A_1 \hat B_2 \rangle + \langle \hat A_2 \hat B_1 \rangle- \langle \hat A_2 \hat B_2 \rangle]. \end{equation} (Here and everywhere below the index $\rm{QM}$ pointing to the quantum formalism is omitted.) It is compared with the experimental CHSH-correlation~function. In the quantum framework, the~CHSH-correlation function can be expressed with the aid of \mbox{the Bell-operator:} \begin{equation} \label{L1} \hat {\cal B} = \frac{1}{2}[\hat A_1(\hat B_1+ \hat B_2) +\hat A_2(\hat B_1-\hat B_2)] \end{equation} as \begin{equation} \label{L1T} \langle {\cal B} \rangle = \langle \psi\vert \hat {\cal B} \vert \psi\rangle. \end{equation} By straightforward calculation, one can derive at the Landau identity: \begin{equation} \label{L2} \hat{{\cal B}}^2=I - (1/4) [\hat A_1, \hat A_2][\hat B_1,\hat B_2]. 
\end{equation} Thus, if~{\it at least one of the commutators} equals to zero, i.e., \begin{equation} \label{L3} [\hat A_1,\hat A_2]=0, \end{equation} or \begin{equation} \label{L4} [\hat B_1,\hat B_2]=0, \end{equation} then the following inequality holds: \begin{equation} \label{L1n} \vert \langle {\cal B} \rangle \vert \leq 1. \end{equation} To derive this inequality, we used solely the quantum formalism. The~inequality is the consequence of compatibility for at least one pair of observables, $A_1, A_2$ or $B_1, B_2.$ Thus, although~formally Equation~(\ref{L1n}) coincides with the standard CHSH-inequality, it has totally different meaning. \mbox{It is} better to call Equation~(\ref{L1n}) {\it the quantum CHSH inequality.} Thus, {\it compatibility of the $A$-observables or the $B$-observables is sufficient for validity of the quantum CHSH-inequality} (for all quantum states) or in other words {\it conjunction of incompatibilities of the $A$-observables and the $B$-observables is the necessary condition for its violation} (for some quantum state). \subsection{Compound~Systems} \label{COMS} States of a compound quantum system $S=(S_A,S_B)$ are represented in tensor product $H_{AB} =H_A \otimes H_B$ of the state spaces $H_A$ and $H_B$ of subsystems $S_A$ and $S_B.$ Observables are given by operators \begin{equation} \label{KO2} \hat A_i=\hat {\bf A}_i \otimes I, \; \hat B_i = I \otimes \hat {\bf B}_i, \end{equation} where Hermitian operators $\hat {\bf A}_i$ and $\hat {\bf B}_i$ act in $H_A$ and $H_B,$ respectively. They represent observables ${\bf A}_i, {\bf B}_i$ on subsystems $S_A, S_B$ of $S.$ For spatially separated systems, we call them {\it local observables}. This tensor representation automatically implies commutativity of operators $\hat A_i$ with operators $\hat B_j,$ i.e.,~ Equation~(\ref{KO1}) holds. We remark that the mathematical condition of incompatibility is reduced to condition $ [\hat {\bf A}_1, \hat {\bf A}_2]\not=0 \; \mbox{and} \; [\hat {\bf B}_1, \hat {\bf B}_2] \not=0. $ For spatially separated systems, it is natural to call incompatibility of the observables on $S_A$ (on $S_B$) {\it local incompatibility}. Section~\ref{GC} implies that {\it conjunction of local incompatibilities} is the necessary condition for violation of the quantum~CHSH-inequality. We remark that the mathematical formalism of this section is applicable to description of any kind of observables ``respecting'' the tensor product structure of observables. A~physical system $S$ need not be composed of two physical~subsystems. \section{Incompatibility as Sufficient Condition of Violation of Quantum~CHSH-Inequality} \unskip \subsection{General Case: Without Referring to the Tensor Product~Structure} \label{GC1} Assume that $A$-observables as well as $B$-observables are incompatible, i.e.,~corresponding operators do not commute: \begin{equation} \label{L3z} [\hat A_1,\hat A_2]\not=0 \; \mbox{and} \; [\hat B_1,\hat B_2] \not=0, \end{equation} i.e., \begin{equation} \label{L2zT} \hat M_A\not=0 \;\mbox{and} \; \hat M_B\not=0, \end{equation} where $\hat M_A= i[\hat A_1, \hat A_2], \; \hat M_B = i [\hat B_1,\hat B_2].$ It is important to note that $ [\hat M_A, \hat M_B]=0. $ We can write the Landau identity in the form \begin{equation} \label{L2z} \hat{{\cal B}}^2=I + (1/4)\hat M_{AB}, \end{equation} where $\hat M_{AB}= \hat M_A \hat M_B.$ If $M_{AB} =0,$ then, despite the incompatibility condition in Equation~(\ref{L3z}), the~QCHSH-inequality cannot be violated. 
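The Landau identity (\ref{L2}) and the resulting bound are easy to verify numerically. The following sketch (Python with \texttt{numpy}; the concrete spin-$1/2$ settings are an illustrative choice) checks Eq.~(\ref{L2}) and shows that, for maximally incompatible local pairs, the norm of the Bell operator of Eq.~(\ref{L1}) reaches $\sqrt{2}$, so the quantum CHSH-inequality (\ref{L1n}) is violated:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
comm = lambda X, Y: X @ Y - Y @ X

A1, A2 = np.kron(sz, I2), np.kron(sx, I2)          # incompatible A-pair
B1 = np.kron(I2, (sz + sx) / np.sqrt(2))           # incompatible B-pair
B2 = np.kron(I2, (sz - sx) / np.sqrt(2))

Bell = 0.5 * (A1 @ (B1 + B2) + A2 @ (B1 - B2))     # Bell operator, Eq. (L1)

# Landau identity, Eq. (L2): B^2 = I - (1/4)[A1,A2][B1,B2]
print(np.allclose(Bell @ Bell,
                  np.eye(4) - 0.25 * comm(A1, A2) @ comm(B1, B2)))   # True

# norm of the Bell operator: sqrt(2) > 1, i.e. the quantum CHSH-inequality is violated
print(np.abs(np.linalg.eigvalsh(Bell)).max())      # 1.4142...
\end{verbatim}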
We proceed under condition
\begin{equation}
\label{L2zz}
\hat M_{AB}\not=0.
\end{equation}
In our framework, this condition is not very restrictive. We consider the quantum CHSH-inequality as a statistical test of incompatibility. It is natural to estimate the degree of incompatibility in one pair of observables, e.g.,~in the $A$-pair. In~this approach, the~$B$-pair plays the auxiliary role, and we can freely play with its selection. To~obtain the condition in Equation~(\ref{L2zz}), it is sufficient to select the $B$-operators in such a way that the operator $\hat M_{B}$ is invertible. We especially highlight the case of compound systems (see Section~\ref{COMS}). Here, incompatibility of the $A$-observables and the $B$-observables, see Equation~(\ref{L2zT}), automatically implies the condition in Equation~(\ref{L2zz}). Under the condition in Equation~(\ref{L2zz}), there exists a common eigenvector $\psi_{AB}$ such that $M_A \psi_{AB}= \mu_A \psi_{AB}, M_B \psi_{AB}= \mu_B \psi_{AB}$ and both eigenvalues are~nonzero. Suppose that $\mu_A>0$ and $\mu_B >0.$ Then, this $\psi_{AB}$ is an eigenvector of the operator $\hat{\cal B}^2$ with eigenvalue $(1+\mu)>1, \mu=\mu_A \mu_B.$ Hence, $\Vert \hat {\cal B}^2 \Vert \geq (1+ \mu)>1$ and
$$ 1< (1+ \mu) \leq \Vert \hat {\cal B}^2 \Vert = \Vert \hat {\cal B} \Vert^2. $$
Since $\hat {\cal B}$ is Hermitian, we have
$$ \Vert \hat {\cal B}\Vert = \sup_{\Vert \psi \Vert =1} \vert \langle \psi\vert \hat {\cal B} \vert \psi\rangle \vert. $$
Finally, we get that
$$ \sup_{\Vert \psi \Vert =1} \vert \langle \psi\vert \hat {\cal B} \vert \psi\rangle \vert \geq \sqrt{1+ \mu} >1. $$
Thus, there exist pure quantum states such that the QCHSH-inequality is~violated. Now, suppose that $\mu_A >0,$ but $\mu_B < 0.$ To change the sign of $\mu_B,$ it is sufficient to interchange the $B$-observables. Thus, {\it conjunction of incompatibilities of the $A$-observables and the $B$-observables constrained by Equation~(\ref{L2zz}) is sufficient for violation of the quantum CHSH-inequality.}
\subsection{Compound~Systems}
\label{LR}
Here, $H=H_A\otimes H_B$ and $\hat A_j= \hat {\bf A}_j \otimes I, \hat B_j= I \otimes \hat {\bf B}_j,$ where Hermitian operators $\hat {\bf A}_j$ and $\hat {\bf B}_j$ act in $H_A$ and $H_B,$ respectively.
\subsubsection{Incompatibility as Necessary and Sufficient Condition of Violation of the Quantum~CHSH-Inequality}
Here, the~joint incompatibility-condition in Equation~(\ref{L3z}) is equivalent to incompatibility of observables on subsystems:
\begin{equation}
\label{L3za}
\hat {\bf M}_A= i [\hat {\bf A}_1,\hat {\bf A}_2]\not=0 \; \mbox{and} \; \hat {\bf M}_B= i [\hat {\bf B}_1,\hat {\bf B}_2] \not=0.
\end{equation}
We have $\hat M_{AB}= \hat M_A \hat M_B= \hat {\bf M}_A \otimes \hat {\bf M}_B.$ As mentioned above, the constraint $\hat M_{AB}\not=0$ is equivalent to Equation~(\ref{L3za}). Section~\ref{GC1} implies that {\it conjunction of local incompatibilities} is a sufficient condition for violation of the quantum CHSH-inequality. Thus, we obtain:

{\bf Theorem 1} [Local incompatibility criterion of QCHSH-violation] {\it Conjunction of local incompatibilities is the necessary and sufficient condition for violation of the quantum CHSH-inequality.}
\subsubsection{Eigenvectors of the Bell Operator and Its~Square}
\label{EV}
Consider again the eigenvector argument of Section~\ref{GC1}.
The vector $\psi_{AB}$ can be taken in the form $\psi_{AB}= \psi_{A} \otimes \psi_{B},$ where $\psi_{A} \in H_A, \psi_{B} \in H_B,$ and $\hat {\bf M}_A\psi_{A} = \mu_A \psi_{A}, \hat {\bf M}_B \psi_{B} =\mu_B \psi_{B}.$ We assume that $\mu=\mu_A\mu_B >0.$ Then, $$ \langle \psi_{A} \otimes \psi_{B} \vert \hat {\cal B}^2 \vert \psi_{A} \otimes \psi_{B} \rangle > 1. $$ Thus, for the squared CHSH-observable $\hat {\cal B}^2,$ the boundary of one can be violated already for separable states. Here, entanglement plays no role. To be more concrete, let us restrict the consideration to finite dimensional Hilbert spaces. One can find states $\Psi$ and $\Phi$ such that $$ \max_{\Vert \psi \Vert=1} \vert \langle \psi \vert \hat {\cal B}^2 \vert \psi \rangle\vert = \langle \Psi \vert \hat {\cal B}^2 \vert \Psi \rangle, $$ $$ \max_{\Vert \psi \Vert=1} \vert \langle \psi \vert \hat {\cal B} \vert \psi \rangle\vert = \vert \langle \Phi \vert \hat {\cal B} \vert \Phi\rangle\vert. $$ The tricky thing is that generally $\Psi \not=\Phi.$ The equality of the norms, $\Vert \hat {\cal B} \Vert= \sqrt{\Vert \hat {\cal B}^2 \Vert},$ does not imply equality of the $\max$-optimization states. Of course, the $\max$-states for ${\cal B}$ and ${\cal B}^2$ are connected: the former can be represented as linear combinations of the latter (a feature of all operators with degenerate spectrum). (As shown in \cite{Braunstein}, $\max$-states for ${\cal B}$ can be represented even as mixtures of $\max$-states for ${\cal B}^2$.) In the experiments aimed at violating the quantum CHSH-inequality, tremendous effort was put into preparing ensembles of entangled states. The main reason for this is that the direct measurement of the observable represented by operator $\hat {\cal B}^2$ is challenging. In {Appendix \ref{appb}}, we present the abstract analog of the Bell experiments treated as experiments to measure the degree of incompatibility. The tensor product structure is excluded and, in particular, an analog of entangled states related to measurement of an observable and its square is considered. \section{CHSH-Correlation Function as Measure of~Incompatibility} We start with the consideration of observables respecting the tensor product structure on the state space $H=H_A\otimes H_B.$ Consider the eigenbases $(e_{Ak})$ and $(e_{Bk})$ of operators $\hat {\bf M}_A$ and $\hat {\bf M}_B$ (acting in $H_A$ and $H_B,$ respectively) and the corresponding eigenvalues $\mu_{Aj}, \mu_{Bj}.$ Let $\Vert \hat {\bf M}_A\Vert = \max_j \vert \mu_{Aj}\vert= \vert \mu_{Ai_a} \vert, \Vert \hat {\bf M}_B\Vert =\max_j \vert \mu_{Bj}\vert= \vert \mu_{Bi_b} \vert$ and let $\mu_{Ai_a} \mu_{Bi_b} >0.$ Then, $\Vert \hat {\cal B}^2 \Vert= 1+ \frac{1}{4}\mu_{Ai_a}\mu_{Bi_b}.$ Thus, \begin{equation} \label{IV} b= \Vert \hat {\cal B} \Vert = \sqrt{1 + \frac{1}{4} \Vert [\hat {\bf A}_1, \hat {\bf A}_2]\Vert \; \Vert [\hat {\bf B}_1, \hat {\bf B}_2]\Vert}, \end{equation} where $\langle {\cal B}\rangle_\psi$ is given by Equation~(\ref{LC}) and $b$ is the maximal possible value of the CHSH-correlations. If the eigenvalues $\mu_{Ai_a}$ and $\mu_{Bi_b}$ have different signs, then we interchange the $B$-observables. From Equation~(\ref{IV}), we get that $$ \Vert [\hat {\bf A}_1, \hat {\bf A}_2]\Vert \Vert [\hat {\bf B}_1, \hat {\bf B}_2]\Vert= 4 (b^2-1). $$ The norm of the commutator can be considered as a measure of incompatibility. Thus, the CHSH-correlation function makes it possible to check experimentally the product of the degrees of incompatibility of the $A$ and $B$ observables.
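For a concrete instance of Equation~(\ref{IV}), take qubit observables $\hat{\bf A}_1=\hat{\bf B}_1=\sigma_x$ and $\hat{\bf A}_2=\hat{\bf B}_2=\sigma_z$. Then $\Vert [\sigma_x,\sigma_z]\Vert=2$ for both pairs, and Equation~(\ref{IV}) gives $b=\sqrt{2}$, i.e., the Tsirelson bound, while $4(b^2-1)=4$ reproduces the product of the commutator norms. The sketch below checks this numerically; the $1/2$-normalized CHSH operator used for the cross-check is an assumption of the sketch, not something taken from the text.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
opnorm = lambda X: np.linalg.norm(X, 2)

A1, A2, B1, B2 = sx, sz, sx, sz
b = np.sqrt(1 + 0.25 * opnorm(A1 @ A2 - A2 @ A1) * opnorm(B1 @ B2 - B2 @ B1))
print(b, 4 * (b**2 - 1))      # sqrt(2) and 4: Tsirelson bound and commutator product

# Cross-check against the (1/2-normalized) Bell operator built from these observables.
A1f, A2f = np.kron(A1, I2), np.kron(A2, I2)
B1f, B2f = np.kron(I2, B1), np.kron(I2, B2)
Bell = 0.5 * (A1f @ (B1f + B2f) + A2f @ (B1f - B2f))
print(opnorm(Bell))           # sqrt(2) as well
\end{verbatim}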
One may consider this way of measuring incompatibility as too tricky. However, typically, measuring a commutator-observable and then its average is challenging. (By ``measuring the commutator-observable'', we mean measuring the observable represented mathematically by the commutator operator scaled by $i.$) Therefore, even such a tricky approach to this problem as measurement of the CHSH-correlation function deserves attention. Now, consider the $B$-observables as auxiliary. In this way, we are able to determine the degree of incompatibility of the $A$-observables by using some pair of auxiliary observables $B_1, B_2.$ We can select the latter in such a way that their commutator is a ``good observable'', so that it can be easily measured for any state; thus its average and hence the norm can be determined. Then, we can measure the incompatibility of observables $A_1$ and $A_2$ by using the formula: \begin{equation} \label{qqq} \Vert [\hat {\bf A}_1, \hat {\bf A}_2]\Vert = 4 (b^2-1)/ \Vert [\hat {\bf B}_1, \hat {\bf B}_2]\Vert . \end{equation} Why is the use of tensor product states so useful for measuring the degree of incompatibility? By splitting a system into two subsystems, it is easy to guarantee compatibility of the $A$ and $B$ observables, and thus the possibility of defining the CHSH-correlation function, which can be measured in experiment. \section{Foundational~Questions}\label{FOOP} \subsection{Bohr's Complementarity~Principle} \label{COMPP} Often, it is claimed that Bohr's writings, in particular those about the complementarity principle, are very difficult to understand. (For example, in Schilpp's volume~\cite{Schilpp}, p.~674 (see~also Plotnitsky~\cite{PL3}, p.~108), one can find the following statement: ``{\it Thus, Einstein confessed, after~decades of his exchanges with Bohr, that he was `unable to attain ... the sharp formulation ... [of] Bohr's principle of complementarity'''.}) This principle has a complex structure and is composed of a few components. One of the problems is that typically this principle is reduced to just one of its components, namely, the incompatibility-component. Incompatibility has the most striking consequences for quantum theory and experiment. However, when separated from the body of the complementarity principle, incompatibility is difficult to understand. As emphasized in~\cite{KHB2}, the~complementarity principle is in fact the principle of contextuality of the quantities used in the quantum formalism, in~the sense of coupling them to the corresponding experimental contexts. Bohr did not use the notion ``experimental context''. He considered experimental conditions~\cite{BR0}: {\it ``Strictly speaking, the~mathematical formalism of quantum mechanics and electrodynamics merely offers rules of calculation for the deduction of expectations pertaining to observations obtained under well-defined experimental conditions specified by classical physical concepts.''} By using the notion of experimental context as a synonym of Bohr's experimental conditions, we can present the complementarity principle as composed of the following components~\cite{KHB2}. (We remark that one has to be very careful when operating with the notion of contextuality. Nowadays, this notion is widely used in foundational discussions on the Bell type inequalities.
In~such discussions, the~meaning of the notion contextuality does not coincide with Bohr's contextuality, as~taking into account the experimental context to explain the mechanism of generating the values of a quantum observable. From~Bohr's viewpoint, any single quantum observable is contextual. One may say that consideration of Bohr's contextuality in parallel with Bell's contextuality can be misleading. However, we can consider Bell's contextuality simply as a very special case of Bohr's contextuality.) \begin{itemize} \item (B1): There exists the fundamental quantum of action given by the Planck constant $h$: \item (B2): The presence of $h$ prevents approaching internal features of quantum systems. \item (B3): Therefore, it is meaningless (from the viewpoint of physics) to build scientific theories about such features. \item(B4): An output of any observable is composed of contributions from a system under measurement and the measurement device. \item (B5): Therefore, the~complete experimental arrangement (context) has to be taken into account. \item (B6): There is no reason to expect that all experimental contexts can be combined. Therefore, \mbox{there is} no reason to expect that all observables can be measured jointly. Hence, there exist incompatible observables. \end{itemize} (B6) can be called the incompatibility principle; this is a consequence of (B4) and (B5). Typically, the~complementarity principle is identified with (B6). However, such a viewpoint does not match Bohr's understanding of the complementarity principle, as the combination (B1)--(B6). This is the good place to remark that (B6) is very natural. The~existence of incompatible experimental contexts is not surprising. Compatibility of all experimental contexts would be really~surprising. \subsection{``Quantum~Nonlocality''} We briefly discuss the notion of (non)locality. \subsubsection{Relativistic~Invariance} Everywhere in physics, besides~the Bell inequality debates~\cite{Bell0,Bell1,Bell2,CHSH,Braunstein,Hardy,Cereceda,Wolf,Mermin}, locality is identified with the relativistic invariance of theory. Therefore, the~statements on nonlocality of quantum theory can make the impression (and they do!) that there is something wrong with relativistic invariance. However, there is nothing wrong with relativistic invariance. Of~course, QM (in particular, the~Schr\"odinger equation) is not relativistically invariant and attempts to construct relativistically invariant QM (based on the Dirac equation) were not successful. However, {\it QM is an approximation of quantum field theory which is relativistically invariant} (see Bogolubov and Shirkov~\cite{BS} and Haag~\cite{Haag} (especially Chapter 3, ``Algebras of Local Observables and Fields'')). (To complete the picture, we~remark that there is a non-relativistic quantum field theory (see for example, book~\cite{NQFT}).) \subsubsection{Hidden Variables and Action at a~Distance} One can say that nonlocality is a consequence of ``action at the distance'' \cite{Bell0,Bell1,Bell2} (see, e.g., Shimony~\cite{Shimony,Shimony1} and Jaeger~\cite{Jaeger, Jaeger1} for the detailed presentation). This interpretation is based on the invention of hidden variables. However, the~analysis presented in this paper shows clearly that, to~proceed in this framework, {\it one has to start with rejection of the basic principle of QM, the~complementarity principle.} It is not clear why violations of this principle should be sought for compound systems. 
Thus, before~inventing hidden variables, it would be natural to find violations of say the Heisenberg uncertainty principle (in the form of Robertson inequality). Moreover, the~modern attempt to go beyond QM with hidden variables is too straightforward. Already in the 19th century, in~the process of transition from Newtonian mechanics to classical field theory, physicists were confronted with the same problem as in the process of transition from classical physics to QM. It was resolved in the framework of Bild (image) methodology developed by Hertz and Boltzmann~\cite{Hertz1,Boltzmann,Boltzmann1} (see Section~\ref{HertzS} and papers~\cite{DA,Hertz,KHB2}). \subsubsection{Nonlocality = Violation of the Bell Type~Inequalities.} The common comment to my talks is that per definition ``quantum nonlocality'' is a violation of the Bell type inequalities. However, this viewpoint is really misleading. If~one recognizes that such violation is just a signature of incompatibility, then it is strange to speak about nonlocality, instead~of~complementarity. \subsection{Obscuring Incompatibility by Tensor Product Structure of~Observables} As pointed out, we concentrate our analysis on the CHSH-inequality~\cite{CHSH}. In contrast to the previous studies (see, e.g.,~\cite{Braunstein,Hardy,Cereceda,Wolf,Mermin}), we do not emphasize the role of the tensor product structure for the state space and observables. We proceed in the general framework and the tensor product model is just a special case of this framework. The~common emphasis of the tensor product structure obscures the crucial role played by incompatibility of observables. Mathematically the crucial role of incompatibility-noncommutativity for violation of the CHSH-inequality was clarified already by Landau~\cite{Landau,Landau1} (see also~\cite{Braunstein,Hardy,Cereceda,Wolf,Mermin}). However, the~mathematical calculations presented in these works did not lead to reinterpretation of violation of the~CHSH-inequality. I would like to emphasize the crucial role played by works of Landau~\cite{Landau,Landau1}. In~fact, Landau's articles carried the same message as the present paper: the CHSH inequality is an experimental test of the principle of complementarity. (He even used the terminology ``complementary observables'', instead of ``incompatible observables''.) Unfortunately, his excellent mathematical work was not completed by an extended interpretational discussion. Surprisingly, nowadays, his works are practically forgotten. (Of course, qualified people are aware about papers~\cite{Landau,Landau1}. However, generally, the~members of the quantum foundational community practically never refer to these papers. During~\mbox{20 years} of debates on the Bell inequality in V\"axj\"o, I have never heard about them. I got to know about Landau's works from E. Dzhafarov, an~expert in mathematical psychology. It happened say eight years ago and I also ignored the complementarity message of Landau. I was content to enjoy his mathematics.) I see two reasons for~this: \begin{enumerate} \item Landau used the abstract framework of $C^\star$-algebras and, for~many ``real physicists'', this was not so attractive. \item He emphasized the novel way to derive the Tsirelson bound and typically this paper is considered as devoted to this derivation, i.e.,~its crucial component, coupling of Bell's argument to Bohr's principle of complementarity was completely ignored. \end{enumerate} In the present paper, I select the intermediate strategy for representation. 
On the one hand, I do not just follow Landau using the language of $C^\star$-algebras. On the other hand, I also do not want to follow the common path based on the tensor product representation. I proceed in the complex Hilbert space formalism, but generally without referring to the tensor product structure of the operators. The mathematics is really simple. It is based on the interrelation of the spectral properties of the Bell operator ${\cal B}$ and its square ${\cal B}^2.$ (In fact, I have the impression that the essence of the CHSH-test is this interplay between the spectral properties of a Hermitian operator and its square. I try to present this vision in abstract form in Appendix 2.) \subsection{Hertz--Boltzmann Bild-Methodology of~Science} \label{HertzS} It is surprising that not only Bell, but even Einstein, Bohr, and Heisenberg did not know about the works of Hertz and Boltzmann~\cite{Hertz1,Boltzmann,Boltzmann1} on the so-called ``Bild'' (image) methodology for physical theories. According to Hertz and Boltzmann, when speaking about a scientific theory, one has to specify its type: descriptive theory or observational theory. The crucial point is that a descriptive theory need not be straightforwardly coupled with the theory of observations. By extending the Hertz--Boltzmann methodology to the quantum domain, we recognize that QM is an observational theory. Theories with hidden variables are of the descriptive type. The same observational theory can be based on a variety of descriptive theories. Bell-type descriptive theories have a very rigid coupling to QM, the observational theory. One can construct a variety of corresponding descriptive theories which are not constrained by the Bell type inequalities. In this paper, we do not discuss the Bild-methodology of Hertz and Boltzmann~\cite{Hertz1,Boltzmann,Boltzmann1} in much detail (see my recent article~\cite{Hertz}). We only make a remark on the notion of realism. From the Bild-viewpoint, realism in physics, as well as in any other area of science, is reduced solely to experimental facts. In QM, this is exactly Bohr's point of view. Thus, the only realistic components of QM are the outcomes of measurements (Bohr's phenomena). Any physical theory (descriptive as well as observational) is only about human images of natural phenomena. At the same time, these images are created on the basis of the human's interaction with nature. Neither Einstein nor Bohr was aware of the works of Hertz and Boltzmann. (In any event, they never cited these works.) Both Einstein and Bohr identified descriptive and observational theories. In fact, the EPR-paper~\cite{EPR} can be considered as the message that QM is not a descriptive theory. However, at the same time, Einstein--Podolsky--Rosen dreamed of a descriptive theory with a straightforward coupling to observations. According to Hertz and Boltzmann, the latter is generally impossible. In his reply~\cite{BR}, Bohr tried to explain that QM is an observational theory and such things as the EPR elements of reality do not belong to its domain. However, nobody was aware of the Hertz--Boltzmann distinction between descriptive and observational theories. Therefore, their discussion can be compared to a conversation of the blind with the deaf. Finally, we refer to an example of a descriptive theory coupled to QM (treated as an observational theory). This is {\it prequantum classical statistical field theory} (PCSFT), which was developed by the author of this paper and coauthors~\cite{PQ} (see Appendix 3).
\subsection{On the Incompatibility Interpretation of the Bell Type~Inequalities} In this paper, we analyze the CHSH-inequality and conclude that it is a test of the complementarity principle. It seems that this analysis can be extended to other Bell type inequalities. The crucial mathematical step in this analysis is the derivation of the analogs of the Landau identity (see Hardy~\cite{Hardy} for a generalization of the CHSH inequality to an $N$-measurement scheme and Cereceda's paper~\cite{Cereceda}, where the very general case (including Mermin's inequalities) was studied in great detail). In Appendix 1, we show (independently of the results based on the Landau identity) that incompatibility for at least one pair of observables is the necessary condition for violation of any of the Bell type inequalities. \section{Conclusions} The point of Bell's theorem is that a local hidden variables theory cannot reproduce the results of quantum theory. The implication is that only a nonlocal hidden variable theory (like Bohmian mechanics) can reproduce the correlations found in quantum theory (and in the real world).\footnote{For the moment, we follow the conventional approach to the interrelation of subquantum and quantum theories that was established by Bell (see Section \ref{HertzS} and Appendix 3 for the more general picture due to Hertz and Boltzmann).} Here, clearly, ``nonlocality'' refers to hidden variables theories, not to quantum theory. A very common misconception is to (incorrectly) associate the term with quantum theory.\footnote{In particular, from~this viewpoint, the~comments of Aspect~\cite{Aspect1} and Wiseman~\cite{Wiseman} on the crucial experiments~\cite{Hensen,Giustina,Shalm} are really misleading (cf. the comments of Kupczynski~\cite{BC2} and of the author of this paper \cite{ABELL}).} We hope that the argument presented in this paper has convincingly demonstrated that this association is wrong. The deeper message of this paper is that the Bell inequality can be reinterpreted as {\it a condition on the quantum compatibility of local observables.} If local commutators vanish, the correlations are bounded just as they are when hidden variables are assumed to be local. The Bell type inequalities have one interpretation for hidden variables theories (the classical case), and another lesser-known and very interesting one for quantum theory. Consequently, the outputs of experiments testing violation of the Bell type inequalities can also be interpreted in two different ways. The conventional interpretation is that these were classical vs. quantum physics tests. My interpretation is that such experiments were, in fact, tests of local incompatibility of quantum observables. I claim that the latter interpretation does not diminish the foundational value of these breakthrough experiments \cite{Aspect,Weihs,Hensen,Giustina,Shalm}. Complementarity is the basic feature of quantum observables. Tests of this feature are of great foundational importance. At the same time, the conventional interpretation of these tests, local realism\footnote{This is a good place to stress that Bell never used this notion; see the collection of his papers in the book \cite{Bell1}.} vs. quantum theory, is, in fact, not so exciting. What is the point of testing quantum against classical nowadays? The validity of quantum theory has been confirmed by a huge body of experiments and technological applications.
We also point out that the Bell type experiments played the crucial role in development of quantum technology: creation of efficient sources of entangled systems and photo-detectors as well as transmission of quantum systems to long distances with minimal disturbance. It is clear that to get rid of nonlocality from quantum theory is not a simple task. The~present note is just a step towards the common acceptance of the local interpretation of~QM. This paper should not be considered as directed against attempts to go beyond QM, by~introducing ``hidden variables''. However, in~such attempts, one has to take into account the basic principles of QM an especially the complementarity principle (see the recent article of Khrennikov and Alodjants~\cite{KHB3}). One also has to take into account the lessons of 19th century physics in the period of transition from Newtonian mechanics to field theory (section \ref{HertzS}.) I would like to thank Willem De Myunck, David Griffiths, Ehtibar Dzhafarov, and~Marian Kupczynski for stimulating discussions and comments. This work was supported by the research project of the Faculty of Technology, Linnaeus University, ``Modeling of complex hierarchic structures''. \section*{Appendix 1: Incompatibility as Necessary Condition for Violation of Any Bell Type Inequality}\label{appa} Consider a family of quantum observables $D_1,...,D_n$ represented by Hermitian operators $\hat D_1,..., \hat D_n.$ We restrict considerations to observables with discrete values; thus, operators have the purely discrete spectra. Denote by $\hat E_i(x)$ the orthogonal projector corresponding to the eigenvalue $x$ of $\hat D_i.$ Suppose that the observables are pairwise compatible, i.e.,~ $(D_i, D_j)$ can be measured jointly for any quantum state $\rho$ and jpd is well defined \begin{equation} \label{JPD1} p_{D_i D_j}(x,y; \rho)\equiv P(D_i=x, D_j=y; \rho). \end{equation} In QM, compatibility is mathematically represented via commutativity of operators, i.e.,~ $[\hat D_i, \hat D_j]=0,$ and, hence, $[\hat E_i(x), \hat E_j(y)]=0.$ The quantum formalism gives the following formula for jpd (von Neumann~\cite{VNM}): \begin{equation} \label{JPD2} p_{D_i D_j}(x,y; \rho) =\rm{Tr} \; \rho E_i(x) E_j(y)= \rm{Tr}\; \rho E_j(y) E_i(x) . \end{equation} Now, we point to the really surprising feature of quantum measurement theory. If~observables are pairwise compatible, i.e.,~each pair can be jointly measured with corresponding jpds $p_{ij}(x,y; \rho)$ given by Equation~(\ref{JPD2}), then they are also triple-wise compatible, quadruple-wise compatible and so on... Any family of observables, $D_{i_1},..., D_{i_m}$ can be jointly measured and the joint probability distribution is given by the formula: \begin{equation} \label{JPD3} p_{D_{i_1} ... D_{i_m}}(x_1,...,x_m; \rho) =\rm{Tr} \; \rho E_{i_1}(x_1)... E_{i_m}(x_m). \end{equation} On the left-hand side of this formula, one can take any permutation of indexes. This implies: ${\bf 2\implies m}:$ {\it pairwise compatibility $\implies$ multiple compatibility.} This is really astonishing. It is surprising that its specialty (from the general viewpoint of measurement theory) was not discussed in foundational~literature. We turn to Bell's inequalities. Now, we are endowed with ${\bf 2\implies m}$ property of quantum~observables. Consider the most general Bell-type framework. There are $K$ groups of quantum observables: $$ D^{k}=(D_1^{k},..., D_{N_k}^{k}), k=1,..., K. 
$$ Mathematically, they are represented by Hermitian operators: $$ \hat D^{k}=(\hat D_1^{k},...,\hat D_{N_k}^{k}). $$ Suppose that, for different $k,$ the observables are compatible, i.e., in the mathematical framework: $$ [\hat D_i^{n}, \hat D_j^{m}]=0, n \not=m. $$ Thus, the jpds $p_{i_1...i_K}\equiv p_{D_{i_1}^{1}... D_{i_K}^{K}}$ are well defined and, hence, so are the correlations $$ \langle D_{i_1}^{1}\cdots D_{i_K}^{K} \rangle = \rm{Tr} \; \rho \hat D_{i_1}^{1} \cdots \hat D_{i_K}^{K}= \sum x_1 \cdots x_K \; p_{i_1...i_K}(x_1,...,x_K). $$ Consider some Bell-type inequality (e.g., the CHSH inequality or the Mermin inequality), \begin{equation} \label{QTE} \sum_{i_1...i_K} \; t_{i_1...i_K} \;\langle D_{i_1}^{1}... D_{i_K}^{K} \rangle + \mbox{correlations of lower orders} \leq c, \end{equation} where $t_{i_1...i_K}$ are some real constants. This inequality may be violated only if at least one pair of observables, say $(D_{i}^{n}, D_{j}^{n}),$ is incompatible, i.e., in the mathematical formalism \begin{equation} \label{QTE1} [\hat D_{i}^{n}, \hat D_{j}^{n}] \not=0. \end{equation} Otherwise, the jpd exists and the inequality in Equation~(\ref{QTE}) cannot be violated. {\bf Theorem 2} {\it Incompatibility is a necessary condition for violation of any Bell-type inequality.} In the standard nonlocality discussions, it is assumed that there are $K$ systems $S^{k}, k=1,2,...,K,$ and the observables $D^{k}$ are local observables for $S^{k}$. Endowed with this scheme, we analyze the possibility to violate the Bell-type inequality in Equation~(\ref{QTE}). The necessary condition is that Equation~(\ref{QTE1}) holds true. This condition is local. \section{Appendix 2: ``Entanglement'' in the Absence of the Tensor Product Structure}\label{appb} In Section~\ref{EV}, we consider compound systems and discuss the well known fact that the eigenvectors of operator $\hat {\cal B}^2$ giving the $\max$-value of its quadratic form can be selected as separable (non-entangled) states; they need not be eigenvectors of the Bell operator; its eigenvectors are linear combinations of the aforementioned separable states. We want to show that the tensor product structure of states and operators is not crucial for the above consideration. Consider any Hermitian operator $\hat C$ and its square $\hat C^2.$ Let $u$ be an eigenvector of the latter, i.e., $\hat C^2 u = \lambda u, \lambda >0,$ and assume that $u$ is not an eigenvector of the former. Set $v= \hat C u/\sqrt{\lambda},$ i.e., \begin{equation} \label{Q2} \hat C u= \sqrt{\lambda} v, \; \hat C v= \sqrt{\lambda} u. \end{equation} Set \begin{equation} \label{Q21} \psi_{\pm}= u \pm v. \end{equation} Then, $\hat C \psi_{\pm}= \pm \sqrt{\lambda} \psi_{\pm}.$ Thus, $\psi_{\pm}$ are eigenvectors of $\hat C.$ If the quadratic form of $\hat C^2$ approaches its $\max$-value on the eigenstate $u/\Vert u\Vert,$ then the quadratic form of $\hat C$ approaches its $\max$-value on the eigenstate $\phi_{\pm}= \psi_{\pm}/ \Vert \psi_{\pm}\Vert .$ The states $\psi_{\pm}$ can be considered as a {\it generalization of entangled states}, which is to say entangled with respect to the operator $\hat C$ (a minimal numerical illustration of this construction is given below). This consideration can be coupled to measurement theory. Consider some quantum observable $D$ represented by Hermitian operator $\hat D.$ (For simplicity, suppose that $\hat D\geq 0.$) Suppose that this observable is simple theoretically, but complex experimentally. (The spectrum and eigenvectors of operator $\hat D$ can be easily found, but measurement of the observable $D$ is really challenging.)
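As announced above, here is a minimal numerical illustration of the construction in Equations~(\ref{Q2}) and (\ref{Q21}); the operator $\hat C$ and its spectrum are chosen arbitrarily for the purpose of the illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# A Hermitian C whose square has a degenerate eigenvalue: put +c and -c in the spectrum.
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
c = 2.0
C = q @ np.diag([c, -c, 0.7, -1.3]) @ q.conj().T

# u: eigenvector of C^2 (eigenvalue lam = c^2) that is not an eigenvector of C itself.
u = (q[:, 0] + q[:, 1]) / np.sqrt(2)
lam = c**2
print(np.allclose(C @ C @ u, lam * u))            # True
print(np.allclose(C @ u, np.sqrt(lam) * u))       # False: u is not an eigenvector of C

# Construction of Eqs. (Q2)-(Q21): v = C u / sqrt(lam), psi_pm = u +/- v.
v = C @ u / np.sqrt(lam)
for sign in (+1.0, -1.0):
    psi = u + sign * v
    print(np.allclose(C @ psi, sign * np.sqrt(lam) * psi))   # True for both signs
\end{verbatim}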
Consider also the observable $C$ represented by Hermitian operator $\hat C\equiv \sqrt{\hat D}.$ Suppose that this observable is complex theoretically, but~rather simple experimentally. (The structure of spectrum and eigenvectors of operator $\hat C$ is complicated, but~ measurement design for $C$ is straightforward.) We are interested in the following problem: {\it Find experimentally the upper bound for the average $\langle D \rangle_\psi$ of observable $D$ with respect to all possible states.} We stress that we are interested in the experimental verification of a theoretical prediction of~QM. We can easily find state $u$ corresponding to $\max$-eigenvalue $\lambda$ of operator $\hat D.$ Then, one of the $\max$-states of operator $\hat C$ can be found with the aid of ``$C$-entanglement'': \begin{equation} \label{H} \phi_{+}= (u + \lambda^{-1/2} \hat C u)/ \Vert u + \lambda^{-1/2} Cu\Vert. \end{equation} Finally, we prepare an ensemble of systems in quantum state $\phi_+$ and perform $C$-measurement for these~systems. In the Bell-type scenario (for observables respecting the tensor product structure), $\hat C=\hat {\cal B}$ is the Bell operator, $\hat D=\hat {\cal B}^2.$ In fact, the~degree of incompatibility is encoded in the observable corresponding to operator $\hat D.$ However, its straightforward measurement would involve measurement of observables corresponding to commutators. The~latter is challenging. At~the same time, eigenstates of $D$ have the simple tensor product structure (separable states). They can easily be found. Then, eigenstates of the Bell operator can be generated as superpositions of Equation~(\ref{H}). \section*{Appendix 3: Prequantum Classical Statistical Field Theory} The basic variables of PCSFT~\cite{PQ} are classical random fields defined on physical space. A random field can be considered as a function of two variables $\phi= \phi(x; \omega)$: $x$ is the spatial variable \mbox{(with three} real coordinates); $\omega$ is a random parameter. We remark that random fields can be considered as random vectors valued in the complex Hilbert space $H=L_2 ({\bf R}^3)$ of square integrable complex valued~functions. The key point of this theory is that covariance operator $B$ of random field $\phi$ is identified (up to normalization by trace) with the density operator of QM: \begin{equation} \label{LIK} B \to \rho=B/\rm{Tr} B. \end{equation} The covariance operator is an element of the descriptive theory (PCSFT) and the density operator is the element of the observational theory (QM). (For a complex valued random field, its covariance operator $B$ is a Hermitian positive operator with the finite trace. Thus, $B$ has all features of a density operator, besides~normalization $\rm{Tr} \rho=1.$) We remark that here the trace of field's covariance operators equals to average of field's energy: \begin{equation} \label{LIK1} \rm{Tr} B= E \Vert \phi(\omega)\Vert^2, \end{equation} where $E$ is mathematical expectation and $$ \Vert \phi(\omega)\Vert^2 = \int_{{\bf R}^3} \vert \phi(x; \omega)\vert^2\; d x$$ is square of the $L_2$-norm of the field (for the concrete value of the random parameter $\omega).$ Thus, normalization (determining ``descriptive $\to$ observational'' correspondence) is with respect to field's~energy. Physical variables of PCSFT are quadratic forms of fields. 
Each quadratic form on $H=L_2$ is determined by a Hermitian operator, $\hat A: H\to H.$ Hence, PCSFT variables have the form, $$f_A(\phi)(\omega) = \langle \phi(\omega)\vert A\vert \phi(\omega)\rangle,$$ where $\phi(\omega)\equiv \phi(x;\omega) \in L_2$ for each $\omega.$ Quadratic forms are elements of the descriptive theory (PCSFT) and Hermitian operators are elements of the observational~theory. Averages calculated in PCSFT coincide with averages calculated in QM. However, the~range of values of a quadratic form does not coincide with the range of values of the corresponding quantum observable, Hermitian operator (cf. with descriptive theories of the Bell type). \end{document}
\begin{document} \title{ Quantum properties of the three-mode squeezed operator: triply concurrent parametric amplifiers } \author{ Faisal A A El-Orany$^1$, Azeddine Messikh$^2$, Gharib S Mahmoud $^2$, Wahiddin M. R. B. } \affiliation{ Cyberspace Security Laboratory, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia} \affiliation{ International Islamic University Malaysia, P.O. Box 10, 50728 Kuala Lumpur, Malaysia } \date{\today} \begin{abstract} In this paper, we study the quantum properties of the three-mode squeezed operator. This operator is constructed from the optical parametric oscillator based on three concurrent $\chi^{(2)}$ nonlinearities. We give a complete treatment of this operator, including the cases of symmetric and asymmetric nonlinearities. The action of the operator on the number and coherent states is studied in the framework of squeezing, the second-order correlation function, the Cauchy-Schwarz inequality and the single-mode quasiprobability function. The nonclassical effects are remarkable in all these quantities. We show that the nonclassical effects generated by the asymmetric case--for certain values of the system parameters--are greater than those of the symmetric one. This reflects the important role of the asymmetry in the system. Moreover, the system can generate different types of Schr\"{o}dinger-cat states. \end{abstract} \pacs{42.50.Dv,42.50.-p} \maketitle \section{Introduction} Squeezed light fulfils the uncertainty relation and has less noise than coherent light in one of the field quadratures. With the development of quantum information, squeezed states have become a very important tool in providing efficient techniques for the encoding and decoding processes in quantum cryptography \cite{hill}. For instance, the two-mode squeezed states are of great interest in the framework of the continuous-variable protocol (CVP) \cite{Ekert}. In the CVP the quantum key distribution goes as follows. A two-mode squeezed source--such as parametric down conversion--emits two fields: one is distributed to Alice (A) and the other to Bob (B). Alice and Bob randomly choose to measure one of two conjugate field quadrature amplitudes. The correlation between the results of the same quadrature measurements on Alice's and Bob's sides increases with increasing values of the squeezing parameter. Through a public classical channel the users communicate their choices of measurements. They keep only the results for which both of them measured the same quadrature, and hence the key is generated. The generation of multipartite squeezed entangled states is an essential issue in multiparty communication, including quantum teleportation networks \cite{van1}, telecloning \cite{van2}, and controlled dense coding \cite{van3}. The $N$-mode CV entangled states have been generated by combining $N$ single-mode squeezed beams on appropriately coupled beam splitters \cite{van1}. The three-mode CV entangled states have attracted much interest in the literature, e.g., \cite{guo,olsen,fister,van1,hua}. For instance, it has been theoretically shown that tripartite entanglement with different wavelengths can be generated by cascaded nonlinear interaction in an optical parametric oscillator cavity with parametric down conversion and sum-frequency generation \cite{guo}. Also, the three-mode CV states have been generated by three concurrent $\chi^{(2)}$ nonlinearities \cite{olsen}.
This has been experimentally verified by the observation of the triply coincident nonlinearities in periodically poled $KTiOPO_4$ \cite{fister}. More details about this issue will be given in the next section. Furthermore, a comparison between the tripartite entanglement in the three concurrent nonlinearities and in the three independent squeezed states mixed on beam splitters \cite{van1}, in the framework of the van Loock-Furusawa inequalities \cite{Furusawa}, has been performed in \cite{olsen}. It is worth mentioning that the generation of macroscopic and spatially separated three-mode entangled light for a triply coupled $\chi^{(3)}$ Kerr coupler inside a pumped optical cavity has been discussed in \cite{hua}. It has been shown that bright three-mode squeezing and fully inseparable entanglement can be established inside and outside the cavity. Since the early days of quantum optics, squeezing has been connected with the so-called squeezed operator. There have been different forms of this operator in the literature, e.g. \cite{single,two,abdalla,hon,faisal,barry,marce}. For example, degenerate and non-degenerate parametric amplifiers are sources of the single-mode \cite{single} and the two-mode \cite{two} squeezing, respectively. The quantum properties of the three-mode squeezed operator (TMS), which is constructed from two parametric amplifiers and one frequency converter, have been demonstrated in \cite{faisal}. This operator can be represented by the $SU(1,1)$ Lie algebra generators \cite{faisal,barry}. Additionally, it can be generated--under certain conditions--from a bulk nonlinear crystal in which three dynamical modes are injected by three beams. Another possibility for its realization is the nonlinear directional coupler, which is composed of two optical waveguides fabricated from some nonlinear material described by the quadratic susceptibility $\chi^{(2)}$. Finally, the mathematical treatment of a particular type of $n$-mode squeezed operator acting on vacuum states is given in \cite{hon}. In this paper, we treat the three concurrent parametric amplifiers given in \cite{olsen,fister} as a three-mode squeezed operator. We quantitatively investigate the nonclassical effects associated with this operator when it acts on three-mode coherent and number states. For these states we investigate squeezing, the second-order correlation function, the Cauchy-Schwarz inequality and the single-mode quasiprobability functions. This investigation includes the symmetric (equal nonlinearities) and asymmetric (non-equal nonlinearities) cases. In previous studies, only the entanglement of the symmetric case has been discussed \cite{olsen,fister}. The investigation in the current paper is motivated by the importance of the three concurrent parametric amplifiers in quantum information research \cite{olsen,fister,van1}. Additionally, quantifying the nonclassical effects in quantum systems is of fundamental interest in its own right. The paper is organized as follows. In section 2 we construct the operator and write down its Bogoliubov transformations. In sections 3 and 4 we study the quadrature squeezing, and the second-order correlation function together with the Cauchy-Schwarz inequality, respectively. In section 5 we investigate the single-mode quasiprobability functions. The main results are summarized in section 6.
\section{Operator formalism} In this section we present the operator formalism for the optical parametric oscillator based on the three concurrent $\chi^{(2)}$ nonlinearities. We follow the technique given in \cite{olsen,fister} to construct the Hamiltonian of the system. In this regard, we consider three modes injected into a nonlinear crystal, whose susceptibility is $\chi^{(2)}$, to form three output beams at frequencies $\omega_0, \omega_1, \omega_2$. The interactions are selected to couple distinct polarizations. We assume that $x$ is the axis of propagation within the crystal. The mode $\hat{b}_1$ is pumped at frequency and polarization $(\omega_0+\omega_1,y)$ to produce the modes $\hat{a}_1(\omega_0,z)$ and $\hat{a}_2(\omega_1,y)$. The mode $\hat{b}_2$ is pumped at $(\omega_1+\omega_2,y)$ to produce the modes $\hat{a}_2$ and $\hat{a}_3(\omega_2,z)$. Finally, the mode $\hat{b}_3$ is pumped at $(2\omega_1,z)$ to produce the modes $\hat{a}_1$ and $\hat{a}_2$. The scheme for this interaction can be found in \cite{olsen,fister}. The interaction Hamiltonian for this concurrent triple nonlinearity takes the form \cite{olsen,fister}: \begin{eqnarray} \hat{H}_{int}=i\hbar(\chi_1\hat{b}_1\hat{a}_1^\dagger\hat{a}_2^\dagger+ \chi_2\hat{b}_2 \hat{a}_1^\dagger\hat{a}_3^\dagger+ \chi_3\hat{b}_3\hat{a}_2^\dagger\hat{a}_3^\dagger)+{\rm h.c.}, \label{Ham} \end{eqnarray} where $\chi_j, j=1,2,3,$ represent the effective nonlinearities and {\rm h.c.} stands for the Hermitian conjugate. The unitary operator associated with (\ref{Ham}) is: \begin{eqnarray} \hat{U}(t)=\exp \left(-it\frac{\hat{H}_{int}}{\hbar}\right). \label{Ham1} \end{eqnarray} In the undepleted pump approximation we treat the pump modes classically and set $r_j=\chi_j t\langle \hat{b}_j(0)\rangle$ as real parameters. We then obtain the desired squeezed operator: \begin{equation}\label{1} \hat{S}(\underline{r})=\exp[r_1(\hat{a}_1\hat{a}_2-\hat{a}_1^{\dagger}\hat{a}_2^{\dagger})+ r_2(\hat{a}_1\hat{a}_3-\hat{a}_1^{\dagger}\hat{a}_3^{\dagger})+ r_3(\hat{a}_2\hat{a}_3-\hat{a}_2^{\dagger}\hat{a}_3^{\dagger})], \end{equation} where $(\underline{r})=(r_1,r_2,r_3)$. Throughout this paper, the symmetric case means $r_1=r_2=r_3=r$; otherwise we have an asymmetric case. It is evident that three initially disentangled states can become entangled under the action of this operator. This operator provides the following Bogoliubov transformations: \begin{equation}\label{2} \hat{S}^{\dagger}(\underline{r})\hat{a}_j\hat{S}(\underline{r})=f_1^{(j)}\hat{a}_1+ f_2^{(j)}\hat{a}^{\dagger}_1+g_1^{(j)}\hat{a}_2+ g_2^{(j)}\hat{a}^{\dagger}_2+h_1^{(j)}\hat{a}_3+ h_2^{(j)}\hat{a}^{\dagger}_3, \quad j=1,2,3 \end{equation} where $f^{(j)}_{j'},g^{(j)}_{j'},h^{(j)}_{j'}, j'=1,2,$ are functions of the parameters $r_1,r_2,r_3$. The formulae of these functions for the asymmetric case are rather lengthy. Nevertheless, we write down here only the explicit forms for the symmetric case \cite{olsen}: \begin{eqnarray} \begin{array}{lr} f_1^{(1)}=\frac{1}{3}[2\cosh(r)+\cosh(2r)], \quad f_2^{(1)}=\frac{1}{3}[2\sinh(r)-\sinh(2r)], \\ g_1^{(1)}=\frac{1}{3}[-\cosh(r)+\cosh(2r)], \quad g_2^{(1)}=-\frac{1}{3}[\sinh(r)+\sinh(2r)], \\ g_1^{(1)}=h_1^{(1)}= f_1^{(2)}= h_1^{(2)}=f_1^{(3)}=g_1^{(3)}, \\ f_1^{(1)}=g_1^{(2)}= h_1^{(3)},\quad f_2^{(1)}=g_2^{(2)}= h_2^{(3)}, \\ g_2^{(1)}=h_2^{(1)}= f_2^{(2)}= h_2^{(2)}=f_2^{(3)}=g_2^{(3)}. \label{3} \end{array} \end{eqnarray} Relations (\ref{2}) and (\ref{3}) will be used frequently throughout the paper.
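The coefficients in (\ref{3}) can be checked numerically. In the sketch below (an illustrative check under the sign conventions assumed here, not part of the derivation), the action of $\hat{S}^{\dagger}(\underline{r})(\cdot)\hat{S}(\underline{r})$ on the vector of mode operators $(\hat{a}_1,\hat{a}_2,\hat{a}_3,\hat{a}_1^{\dagger},\hat{a}_2^{\dagger},\hat{a}_3^{\dagger})$ is represented by a $6\times 6$ matrix exponential, which is then compared with the closed forms of (\ref{3}) for the symmetric case.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

r = 0.37                                        # arbitrary squeezing strength for the check
R = r * (np.ones((3, 3)) - np.eye(3))           # symmetric case r1 = r2 = r3 = r

# With the conventions assumed here, the row j of exp(-M) gives the coefficients
# of S^dag a_j S in the operator basis (a1, a2, a3, a1^dag, a2^dag, a3^dag).
M = np.block([[np.zeros((3, 3)), R], [R, np.zeros((3, 3))]])
T = expm(-M)

# Closed forms of Eq. (3) for the first mode (symmetric case).
f1 = (2 * np.cosh(r) + np.cosh(2 * r)) / 3
f2 = (2 * np.sinh(r) - np.sinh(2 * r)) / 3
g1 = (-np.cosh(r) + np.cosh(2 * r)) / 3
g2 = -(np.sinh(r) + np.sinh(2 * r)) / 3
print(np.allclose(T[0, :], [f1, g1, g1, f2, g2, g2]))       # True

# Bogoliubov condition: the transformation preserves the commutation relations,
# i.e. T Omega T^T = Omega, where Omega encodes [a_i, a_j^dag] = delta_ij.
Omega = np.block([[np.zeros((3, 3)), np.eye(3)], [-np.eye(3), np.zeros((3, 3))]])
print(np.allclose(T @ Omega @ T.T, Omega))                  # True
\end{verbatim}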
For the symmetric case, the entanglement has already been studied in terms of the van Loock-Furusawa measure \cite{olsen}. It has been shown that the larger the value of $r$, the greater the amount of entanglement in the tripartite system. Moreover, the tripartite CV entangled state created tends towards a GHZ state in the limit of infinite squeezing, but is analogous to a W state for finite squeezing \cite{pati}. In this paper we investigate the entanglement of the asymmetric case from a different point of view. This is based on the fact that the entanglement between different components of the system is a direct consequence of the occurrence of nonclassical effects in their compound quantities, and vice versa. We show that the asymmetric case can provide amounts of nonclassical effects and/or entanglement greater than those of the symmetric case. Thus the asymmetry in the triply concurrent parametric amplifiers is important. The investigation of the operator (\ref{1}) will be carried out through the three-mode squeezed coherent and number states having the forms: \begin{equation}\label{12} |\psi_n\rangle= \hat{S}(\underline{r})|n_1,n_2,n_3\rangle,\quad |\psi_c\rangle=\hat{S}(\underline{r})|\alpha_1,\alpha_2,\alpha_3\rangle. \end{equation} Three-mode squeezed vacuum states can be obtained by simply setting $n_j=0$ or $\alpha_j=0$ in the above expressions. In the following sections we study the quantum properties of the states (\ref{12}) in greater detail. \section{Quadrature Squeezing} Squeezing is an important phenomenon in quantum theory, which reflects the correlations in compound systems very well. Precisely, squeezing can occur in a combination of quantum mechanical systems even if the individual systems are not themselves squeezed. In this regard the nonclassicality of the system is a direct consequence of the entanglement. Squeezed light can be measured by a homodyne detector, in which the signal is superimposed on a strong coherent beam of the local oscillator. Additionally, squeezing has many applications in various areas, e.g., quantum optics, optical communication, quantum information theory, etc. \cite{ [21]}. Thus investigating squeezing for quantum mechanical systems is an essential subject in quantum theory. In this section we demonstrate different types of squeezing for the three-mode squeezed vacuum states (\ref{12}). To do so we define two quadratures $\hat{X}$ and $\hat{Y}$, which denote the real (electric) and imaginary (magnetic) parts of the radiation field, respectively: \begin{equation}\label{4} \begin{array}{lr} \hat{X}=\frac{1}{2}[\hat{a}_1+\hat{a}^{\dagger}_1 +c_1(\hat{a}_2+\hat{a}^{\dagger}_2)+c_2(\hat{a}_3+\hat{a}^{\dagger}_3)],\\ \hat{Y}=\frac{1}{2i}[\hat{a}_1-\hat{a}^{\dagger}_1 +c_1(\hat{a}_2-\hat{a}^{\dagger}_2)+c_2(\hat{a}_3-\hat{a}^{\dagger}_3)], \end{array} \end{equation} where $c_1, c_2$ are $c$-numbers taking the values $0$ or $1$, so as to yield single-mode, two-mode and three-mode squeezing. These two operators, $\hat{X}$ and $\hat{Y}$, satisfy the following commutation relation: \begin{equation} [\hat{X},\hat{Y}]=iC, \end{equation} where $C=(1+c_1^2+c_2^2)/2$.
It is said that the system is able to generate squeezing in the $x$- or $y$-quadrature if \begin{eqnarray} S_x&=&\frac{2\langle(\Delta\hat{X})^2\rangle-C}{C}<0, \nonumber\\ &&{\rm or}\\ S_y&=&\frac{2\langle(\Delta\hat{Y})^2\rangle-C}{C}<0, \nonumber \end{eqnarray} where $\langle(\Delta\hat{X})^2\rangle=\langle\hat{X}^2\rangle- \langle\hat{X}\rangle^2$ is the variance. Maximum squeezing occurs when $S_x=-1$ or $S_y=-1$. For the symmetric case, one can easily deduce the following expressions: \begin{eqnarray} \begin{array}{lr} S_x=\frac{1}{3(1+c_1^2+c_2^2)}\{(1+c_1^2+c_2^2)[2\exp(2r)+\exp(-4r)-3] +2(c_1+c_2+c_1c_2)[\exp(-4r)-\exp(2r)]\}, \\ \\ S_y=\frac{1}{3(1+c_1^2+c_2^2)}\{(1+c_1^2+c_2^2)[2\exp(-2r)+\exp(4r)-3] +2(c_1+c_2+c_1c_2)[\exp(4r)-\exp(-2r)]\}. \label{5} \end{array} \end{eqnarray} For the single-mode case, $c_1=c_2=0$, the expressions (\ref{5}) reduce to: \begin{eqnarray}\label{6} \begin{array}{lr} S_x=\frac{1}{3}[2\exp(2r)+\exp(-4r)-3], \\ S_y=\frac{1}{3}[2\exp(-2r)+\exp(4r)-3]. \end{array} \end{eqnarray} It is evident that the system cannot generate single-mode squeezing. This fact is valid for the asymmetric case, too. For the two-mode case, $c_1=1, c_2=0$, i.e. first-second mode squeezing, we obtain \begin{equation}\label{7} \begin{array}{lr} S_x=\frac{1}{3}[\exp(2r)+2\exp(-4r)-3], \\ S_y=\frac{1}{3}[\exp(-2r)+2\exp(4r)-3]. \end{array} \end{equation} Squeezing can be generated in the $x$-component only, with a maximum value at $r=\ln(2)/3$. Moreover, maximum squeezing, i.e. $S_x=-1$, cannot be reached in this case. This is in contrast with the two-mode squeezed operator \cite{two}, for which $S_x\to-1$ for large $r$. Roughly speaking, the quantum correlations in this system decrease the squeezing that can be observed in any one of the bipartite subsystems. Finally, for the three-mode case, $c_1=c_2=1$, we have \begin{equation}\label{8} S_x=\exp(-4r)-1,\quad S_y=\exp(4r)-1. \end{equation} Squeezing can be generated in the $x$-component only for $r>0$, and it reaches its maximum value for large values of $r$. The origin of the squeezing in (\ref{8}) lies in the strong correlation among the components of the system. Moreover, the amount of squeezing produced is two (four) times greater than that of the two-mode \cite{two} (single-mode \cite{single}) squeezed operator for certain values of $r$. \begin{figure} \caption{ Two-mode (a) and three-mode (b) squeezing against $r_1$ for three-mode squeezed vacuum states. Solid, dashed and dotted curves are given for $(r_2,r_3)=(r_1,r_1), (0.1,0.2)$ and $ (0.4,0.6)$, respectively. } \label{fig1} \end{figure} In Figs. \ref{fig1}(a) and (b) we plot the squeezing parameter $S_x$ against $r_1$ for two- and three-mode squeezing, respectively. We found that squeezing is not remarkable in $S_y$. The solid curve is plotted for the symmetric case. The two-mode squeezing is given for the first-second mode system. We start the discussion with the two-mode case (Fig. \ref{fig1}(a)). From the solid curve, squeezing is gradually generated as $r_1$ increases, attaining its maximum value $S_x=-0.206$ at $r=\ln(2)/3=0.231$; it then decreases smoothly and eventually vanishes, i.e. $S_x\geq 0$, for $r\geq \ln(1+\sqrt{3})/2=0.5025$. For the asymmetric case, squeezing increases gradually up to its maximum $S_x=-1$ over a certain range of $r_1$, then rapidly decreases and vanishes (see the dashed and dotted curves). This can be understood as follows.
When the values of $r_2$ and $r_3$ are relatively small, the main contribution in the system comes from the first parametric amplifier. Thus the system behaves as the conventional two-mode squeezed operator over a certain range of $r_1$. This remark is noticeable when we compare the dotted curve to the dashed one. Generally, when the values of $r_2$ and $r_3$ increase, the degradation of the squeezing increases, too. Furthermore, in the range of $r_1$ for which $S_x=-1$, the entanglement in the bipartition $(1,2)$ is maximal; however, this is not the case for the other bipartitions. This is connected with the fact that quantum entanglement cannot be equally distributed among many different objects in the system. Comparison among the different curves in Fig. 1(a) shows that the asymmetric case can provide amounts of squeezing much greater than those of the symmetric case. Now, we turn our attention to the three-mode squeezing, which is displayed in Fig. \ref{fig1}(b). For the symmetric case, $S_x$ exhibits squeezing for $r> 0$, which monotonically increases, reaching its maximum value for large $r_1$, as we discussed above. This is in good agreement with the fact that the symmetric case exhibits genuine tripartite entanglement for large values of $r_1$ \cite{cont}. For the asymmetric case, the curves initially show squeezing, which reaches its maximum as $r_1$ increases, then gradually decreases and vanishes for large values of $r_1$. The greater the values of $r_2, r_3$, the higher the maximum squeezing in $S_x$ and the shorter the range of $r_1$ over which squeezing occurs (compare the dotted and dashed curves in Fig. 1(b)). This situation is the inverse of that of the two-mode case (compare Fig. \ref{fig1}(a) to (b)). As a trivial remark, for small values of $r_1$ the amounts of squeezing produced by the symmetric case are smaller than those produced by the asymmetric one. We can conclude that for the asymmetric case the entanglement in the tripartite system may be destroyed for large $r_1$, where $S_x>0$. Of course this is sensitive to the values of $r_2, r_3$. Conversely, the amounts of entanglement between the different bipartitions of the system for the asymmetric case can be much greater than those of the symmetric one for particular choices of the parameters. As a final remark, the amounts of squeezing produced by the operator (\ref{1}) are greater than those generated by the TMS \cite{faisal}. \section{Second-order correlation function and Cauchy-Schwarz inequality} In this section we study the second-order correlation function and the Cauchy-Schwarz inequality for the states (\ref{12}). These two quantities are useful for obtaining information on the correlations between the different components of the system. In contrast to the quadrature squeezing, these quantities are not phase dependent and are therefore related to the particle nature of the field. We start with the single-mode second-order correlation function, which for the $j$th mode is defined as: \begin{equation} \label{second1} g_j^{(2)}(0)=\frac{\langle\hat{a}_j^{\dagger2}\hat{a}^2_j\rangle} {\langle{\hat{a}^\dagger_j\hat{a}_j}\rangle^2}-1, \end{equation} where $g_j^{(2)}(0)=0$ for Poissonian statistics (standard case), $g_j^{(2)}(0) < 0$ for sub-Poissonian statistics (nonclassical effects) and $g_j^{(2)}(0) > 0$ for super-Poissonian statistics (classical effects). The second-order correlation function can be measured by a set of two detectors \cite{[24]}, e.g. the standard Hanbury Brown--Twiss coincidence arrangement.
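As a quick illustration of this definition, the sketch below evaluates $g_1^{(2)}(0)$ for the three-mode squeezed vacuum in the symmetric case, using the coefficients of (\ref{3}) together with Wick's theorem for zero-mean Gaussian states; the Gaussian character of $\hat{S}(\underline{r})|0,0,0\rangle$ is an assumption of the sketch rather than a statement taken from the text. The result is always super-Poissonian, which is consistent with the need for nonclassical (number-state) inputs discussed below.
\begin{verbatim}
import numpy as np

def g2_first_mode_vacuum(r):
    # g1^(2)(0) for the symmetric three-mode squeezed vacuum, from Eq. (3) plus Wick.
    f1 = (2 * np.cosh(r) + np.cosh(2 * r)) / 3
    f2 = (2 * np.sinh(r) - np.sinh(2 * r)) / 3
    g1 = (-np.cosh(r) + np.cosh(2 * r)) / 3
    g2 = -(np.sinh(r) + np.sinh(2 * r)) / 3
    h1, h2 = g1, g2                              # symmetry relations in Eq. (3)
    n_mean = f2**2 + g2**2 + h2**2               # <a1^dag a1> on the vacuum
    a_sq = f1 * f2 + g1 * g2 + h1 * h2           # <a1 a1> on the vacuum
    # Wick's theorem for a zero-mean Gaussian state:
    # <a^dag a^dag a a> = |<a a>|^2 + 2 <a^dag a>^2
    numerator = a_sq**2 + 2 * n_mean**2
    return numerator / n_mean**2 - 1             # convention of Eq. (second1)

for r in (0.1, 0.5, 1.0):
    print(r, g2_first_mode_vacuum(r))            # always >= 1, i.e. super-Poissonian
\end{verbatim}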
On the experimental side, sub-Poissonian light has been realized in the resonance fluorescence from a two-level atom driven by a resonant laser field \cite{flour}. We have found that the three-mode squeezed coherent states cannot yield sub-Poissonian statistics. Thus we restrict the study here to $g_1^{(2)}(0)$ of the three-mode squeezed number states. From (\ref{2}) and (\ref{12}), one can obtain the following moments: \begin{figure} \caption{Second-order correlation of the first mode against $r_1$. (a) $(n_1,n_2,n_3)=(1,1,1)$, $(r_2,r_3)=(r_1,r_1)$ solid curve, $(0.1,0.2)$ dashed curve, and $(0.4,0.6)$ dotted curve. (b) $(r_2,r_3)=(r_1,r_1)$, $(n_1,n_2,n_3)=(1,0,0)$ solid curve and $(0,1,0)$ dashed curve.} \label{fig2} \end{figure} \begin{eqnarray} \begin{array}{lr} \langle{\hat{a}_1^\dagger\hat{a}_1}\rangle = n_1 f_1^{(1)2}+(n_1+1) f_2^{(1)2} +n_2 g_1^{(1)2}+(n_2+1) g_2^{(1)2}+ n_3 h_1^{(1)2}+(n_3+1) h_2^{(1)2},\\ \\ \langle{\hat{a}_1^{\dagger 2}\hat{a}_1^2}\rangle = n_1(n_1-1) f_1^{(1)4}+(n_1+1)(n_1+2) f_2^{(1)4} +(2n_1+1)^2 f_1^{(1)2}f_2^{(1)2}\\ \\ +n_2(n_2-1) g_1^{(1)4}+(n_2+1)(n_2+2) g_2^{(1)4} +(2n_2+1)^2 g_1^{(1)2}g_2^{(1)2}\\ \\ +n_3(n_3-1) h_1^{(1)4}+(n_3+1)(n_3+2) h_2^{(1)4} +(2n_3+1)^2 h_1^{(1)2}h_2^{(1)2}\\ \\ + (2n_1+1) f_1^{(1)}f_2^{(1)}[2(2n_2+1) g_1^{(1)}g_2^{(1)}+(2n_3+1) h_1^{(1)}h_2^{(1)}]\\ \\ +(2n_3+1) h_1^{(1)}h_2^{(1)} [2(2n_2+1) g_1^{(1)}g_2^{(1)}+(2n_1+1) f_1^{(1)}f_2^{(1)}]\\ \\ +4[n_1 f_1^{(1)2}+(n_1+1) f_2^{(1)2}][n_2 g_1^{(1)2}+(n_2+1) g_2^{(1)2}+ n_3 h_1^{(1)2}+(n_3+1) h_2^{(1)2}]\\ \\ +4[n_2 g_1^{(1)2}+(n_2+1) g_2^{(1)2}][ n_3 h_1^{(1)2}+(n_3+1) h_2^{(1)2}]. \label{second2} \end{array} \end{eqnarray} By means of (\ref{second1}) and (\ref{second2}), the quantity $g_1^{(2)}(0)$ is depicted in Figs. \ref{fig2} for given values of the parameters. In Fig. \ref{fig2}(a) we present the role of the squeezing parameters $r_j$ in the behavior of $g_1^{(2)}(0)$. From the solid curve (i.e., the symmetric case), one can observe that the maximum sub-Poissonian statistics occur for relatively small values of $r_j$. In this case, the system tends to the Fock state $|1\rangle$, which is a purely nonclassical state. As the value of $r_1$ increases, the nonclassicality monotonically decreases and completely vanishes around $r_1\simeq 0.3$. Comparison among the curves in Fig. 2(a) shows that when the values of $(r_2,r_3)$ increase, the amount of sub-Poissonian statistics inherited by the first mode decreases. This is connected with the nature of the operator (\ref{1}), in which the single mode undergoes an amplification process caused by the various down-conversions involved in the system. Furthermore, particular values of the asymmetry can enlarge the range of nonclassicality (compare the solid curve to the dashed one). Now, we turn our attention to Fig. 2(b), which is given for the symmetric case. From this figure one realizes how one can obtain sub-Poissonian statistics in a particular mode at the output of the operator (\ref{1}). Precisely, this mode should be initially prepared in a nonclassical state. We can analytically prove this fact by substituting $n_1=0, n_2=n_3=n$ into (\ref{second1}) and (\ref{second2}). After minor algebra, we arrive at: \begin{eqnarray} \begin{array}{lr} \langle\hat{a}_1^{\dagger2}\hat{a}^2_1\rangle-\langle{\hat{a}^\dagger_1\hat{a}_1}\rangle^2 =f_2^{(1)4}+2n(n-1)g_1^{(1)4}+2(n+1)(n+2)g_2^{(1)4} \\ +2n(n+1)g_1^{(1)2}g_2^{(1)2}+[f_1^{(1)}f_2^{(1)}+2(2n+1)g_1^{(1)}g_2^{(1)}]^2 \\ +4 f_2^{(1)2}[ng_1^{(1)2}+(n+1)g_2^{(1)2}]\geq 0.
\label{faisal1} \end{array} \end{eqnarray} As a final remark, the comparison between the solid curves in Figs. 2(a) and (b) shows that the nonclassical range of $r_1$ in (b) is greater than that in (a). In other words, to enhance the sub-Poissonian statistics in a certain mode, the other modes have to be prepared in states close to classical ones. \begin{figure} \caption{The parameter $V_{jk}$ for the three-mode squeezed coherent states.} \label{fig3} \end{figure} The violation of the classical inequalities has provided verification of quantum theory. Among these inequalities is the Cauchy-Schwarz inequality \cite{lou}, whose violation provides information on the intermodal correlations in the system. The first observation of this violation was obtained by Clauser, who used an atomic two-photon cascade system \cite{[3]}. More recently, strong violations using four-wave mixing have been reported in \cite{[4],[5]}. In addition, a frequency analysis has been used to infer the violation of this inequality over a limited frequency regime \cite{[11]}. The Cauchy-Schwarz inequality reads $V_{jk}\geq 0$, where $V_{jk}$ has the form: \begin{eqnarray} V_{jk}=\frac{\sqrt{\langle\hat{a}^{\dagger2}_j\hat{a}^2_j\rangle\langle\hat{a}^{\dagger2}_k\hat{a}^2_k\rangle}} {\langle\hat{a}_j^\dagger\hat{a}_j\hat{a}_k^\dagger\hat{a}_k\rangle}-1. \label{secont4} \end{eqnarray} The occurrence of negative values of $V_{jk}$ means that the intermodal correlation is larger than the correlation among the photons in the same mode. This indicates a strong deviation from the classical Cauchy-Schwarz inequality. This is related to the quantum mechanical features, which involve pseudodistributions instead of true ones. In this respect, the Glauber-Sudarshan $P$ function possesses strong quantum properties \cite{[27]}. \begin{figure} \caption{The parameter $V_{jk}$ for the three-mode squeezed number states.} \label{fig4} \end{figure} We have found that the Cauchy-Schwarz inequality can be violated for both the coherent- and number-state cases. The expressions for the different quantities in (\ref{secont4}) are lengthy but straightforward, and hence we do not present them here. We start with the three-mode squeezed coherent states. Information about them is shown in Figs. 3(a), (b) and (c), for given values of the system parameters. The negative values are remarkable in most of the curves, reflecting the deviation from the classical inequality. For the symmetric case the nonclassical correlation is remarkable for $r_1\geq 0.4$; it increases gradually until $r_1\simeq 0.6$, where it attains its maximum value, then smoothly decreases and eventually goes to zero, i.e. $V_{jk}\simeq 0$, for large $r_1$. For the asymmetric case, the deviation from the classical inequality is obvious, and it may be smaller or greater than that in the symmetric case, depending on the competition among the different nonlinearities $r_j$ in the system. In other words, e.g., the correlation between modes $1$ and $2$ is much stronger than that between the others only when $r_1>r_2, r_3$. This can be understood from the structure of the operator (\ref{1}). Comparing this behavior to that of the two-mode squeezing given in the preceding section, one can conclude that a large amount of squeezing does not imply a large violation of the inequality \cite{carmich}. As we mentioned before, entanglement is a direct consequence of the occurrence of nonclassical effects. As a result of this, the behavior of the two-mode squeezing and of $V_{jk}$ may provide a type of contradiction.
Precisely, a bipartite state can appear entangled with respect to, say, the two-mode squeezing criterion while appearing non-entangled with respect to $V_{jk}$. This supports the fact that these two quantities provide only sufficient conditions for entanglement. Considering both of them, we may obtain conditions closer to a necessary and sufficient condition. A study of this controversial issue has already been carried out for the entanglement in a parametric converter \cite{suh}, where different entanglement criteria led to different results. In Fig. 4 we plot the parameter $V_{jk}$ for the Fock-state case. From this figure, the deviation from the classical inequality is quite remarkable. For the symmetric case, the maximum deviation occurs in $V_{1,2}$ for $r=0$; it monotonically decreases as $r$ evolves and vanishes at $r \simeq 1$. Comparison among the different curves in this figure shows that the asymmetry can enlarge the range of $r_1$ over which the deviation from the inequality occurs. Similar behavior has been observed for $V_{2,3}$ and $V_{1,3}$ (we have checked this fact). In conclusion, the violation of the classical inequalities provides explicit evidence of the quantum nature of the intermodal correlation between modes. This is not surprising, as entanglement is a purely quantum mechanical phenomenon that requires a certain degree of nonclassicality either in the initial state or in the process that governs the system. \section{Quasiprobability distribution function}\label{S:sec5} Quasiprobability distribution functions, namely the Husimi ($Q$), Wigner ($W$) and Glauber ($P$) functions, are very important since they can give a global description of the nonclassical effects in quantum systems. These functions can be measured by various means, e.g. photon counting experiments \cite{[34]}, simple experiments similar to those used in cavity QED and ion traps \cite{[35],[36]}, and homodyne tomography \cite{[37]}. For the system under consideration, we focus the attention here on the single-mode case, say, the first mode of the three-mode squeezed number states (\ref{12}). We start with the $s$-parameterized characteristic function $C(\zeta,s)$, which is defined as: \begin{equation} C(\zeta,s)={\rm Tr}[\hat{\rho}\exp(\zeta\hat{a}^{\dag}_1-\zeta^{*}\hat{a}_1+\frac{s}{2}|\zeta|^2)],\label{secf1} \end{equation} where $\hat{\rho}$ is the density matrix of the system under consideration and $s$ is a parameter taking the values $0,1,-1$ corresponding to symmetrically, normally and antinormally ordered characteristic functions, respectively. For the three-mode squeezed number states and with the help of the relations (\ref{2}) one can easily obtain: \begin{eqnarray} \begin{array}{lr} C(\zeta,s)= \exp [-\frac{1}{2}|\upsilon_1|^2-\frac{1}{2}|\upsilon_2|^2-\frac{1}{2}|\upsilon_3|^2+\frac{s}{2}|\zeta|^2]\\ \times {\rm L}_{n_1}( |\upsilon_1|^2) {\rm L}_{n_2}( |\upsilon_2|^2) {\rm L}_{n_3}( |\upsilon_3|^2), \label{secf2} \end{array} \end{eqnarray} where \begin{equation} \upsilon_1=\zeta f_1^{(1)}-\zeta^* f_2^{(1)},\quad \upsilon_2=\zeta g_1^{(1)}-\zeta^* g_2^{(1)},\quad \upsilon_3=\zeta h_1^{(1)}-\zeta^* h_2^{(1)} \label{secf3} \end{equation} and ${\rm L}_k^{\gamma}(.)$ is the associated Laguerre polynomial having the form: \begin{equation}\label{reply1} {\rm L}_k^{\gamma}(x)=\sum\limits_{l=0}^{k} \frac{(\gamma+k)!(-x)^l}{(\gamma+l)!(k-l)!l!}.
\end{equation} The $s$-parameterized quasiprobability distribution functions are defined as \begin{equation} W(z,s)=\pi^{-2}\int d^{2}\zeta C(\zeta,s)\exp(z\zeta^{*}-\zeta z^{*}), \label{secf4} \end{equation} where $z=x+iy$ and $s=0,1,-1$ correspond to the $W$, $P$ and $Q$ functions, respectively. On substituting (\ref{secf2}) into (\ref{secf4}) and applying the method of differentiation under the integral sign, we obtain the following expression: \begin{eqnarray} \begin{array}{lr} W(z,s)= \frac{1}{\pi}\sum\limits_{j',j,k=0}^{\{n_1,n_2,n_3\}}\sum\limits_{l_1,l_2=0}^{\{j',j+k\}}\left( \begin{array}{c} n_1 \\j' \end{array} \right) \left( \begin{array}{c} n_2 \\k \end{array} \right) \left( \begin{array}{c} n_3 \\j \end{array} \right) \left( \begin{array}{c} j' \\l_1 \end{array} \right) \left( \begin{array}{c} j+k \\l_2 \end{array} \right) \frac{(-1)^{j+j'+k}}{j!j'!k!}\\ \times (f_1^{(1)2}+f_2^{(1)2})^{l_1}(g_1^{(1)2}+g_2^{(1)2})^{l_2} (f_1^{(1)}f_2^{(1)})^{j'-l_1} (g_1^{(1)}g_2^{(1)})^{j+k-l_2} \frac{\partial^{l_1+l_2}}{\partial b_1^{l_1+l_2}}|_{b_1=0} \frac{\partial^{j+j'+k-l_1-l_2}}{\partial b_2^{j+j'+k-l_1-l_2}}|_{b_2=0}\\ \times \frac{1}{\sqrt{K}} \exp[-\frac{1}{K}(B|z|^2+(z^2+z^{*2})(\Lambda_2+b_2))], \label{secf5} \end{array} \end{eqnarray} where \begin{eqnarray} \begin{array}{lr} \Lambda_1= f_1^{(1)2}+f_2^{(1)2}+g_1^{(1)2}+g_2^{(1)2} +h_1^{(1)2}+h_2^{(1)2},\\ \\ \Lambda_2=f_1^{(1)}f_2^{(1)}+g_1^{(1)}g_2^{(1)}+h_1^{(1)}h_2^{(1)},\\ \\ B=\frac{1}{2}(\Lambda_1-s)-b_1,\quad K=B^2-(\Lambda_2+b_2)^2 . \label{secf7} \end{array} \end{eqnarray} The correlation between the modes in the system can be recognized in $W(z,s)$ as cross terms, e.g. in $\Lambda_2$. This can give qualitative information about the entanglement in the system. For the three-mode squeezed vacuum states (i.e., $n_1=n_2=n_3=0$) the $W$ function (\ref{secf5}) can be expressed as: \begin{equation} W(x,y,s)=\frac{1}{\pi \sqrt{\vartheta_{+}\vartheta_{-}}} \exp[-\frac{x^2}{\vartheta_{+}}-\frac{y^2}{\vartheta_{-}}], \label{tsecf8} \end{equation} where \begin{eqnarray} \begin{array}{lr} \vartheta_{+}=2\langle(\Delta\hat{X}_1)^2\rangle-\frac{s}{2},\\ \\ =\frac{1}{2}[(f_1^{(1)}+f_2^{(1)})^2+(g_1^{(1)}+g_2^{(1)})^2+ (h_1^{(1)}+h_2^{(1)})^2]-\frac{s}{2},\\ \\ \vartheta_{-}=2\langle(\Delta\hat{Y}_1)^2\rangle-\frac{s}{2},\\ \\ =\frac{1}{2}[(f_1^{(1)}-f_2^{(1)})^2+(g_1^{(1)}-g_2^{(1)})^2+ (h_1^{(1)}-h_2^{(1)})^2]-\frac{s}{2}. \label{tsecf10} \end{array} \end{eqnarray} From (\ref{tsecf8}) and (\ref{tsecf10}) it is evident that the quasidistributions are Gaussians, narrowed in the $y$ direction and expanded in the $x$ direction. Nevertheless, this does not mean that squeezing is available in this mode. Actually, this behavior represents thermal squeezed light, which, in this case, is a super-classical light. Precisely, for $s=0$ the $W$ function exhibits a stretched contour, whose area is broader than that of coherent light. In this regard, the phase distribution and the photon-number distribution associated with the single-mode case exhibit a single-peak structure for all values of $r_j$. This peak is broader than that of the coherent state having the same mean photon number. Actually, this is a quite common property of multimode squeezed operators \cite{single,two,abdalla,hon,faisal,barry,marce}. Thus the single-mode vacuum or coherent states, as outputs from the three concurrent amplifiers described by (\ref{1}), are not nonclassical states. This agrees with the information given in Sections 3 and 4.
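As a simple numerical illustration (not part of the original derivation), the Gaussian form (\ref{tsecf8}) can be evaluated directly once the single-mode coefficients $f_{1,2}^{(1)}$, $g_{1,2}^{(1)}$ and $h_{1,2}^{(1)}$ are known; the following Python sketch takes these coefficients as given inputs and only encodes the variances (\ref{tsecf10}):
\begin{verbatim}
import numpy as np

def squeezed_vacuum_W(x, y, f, g, h, s=0.0):
    # f, g, h are the pairs (f1, f2), (g1, g2), (h1, h2) of first-mode
    # coefficients, assumed to be computed elsewhere from r_1, r_2, r_3.
    (f1, f2), (g1, g2), (h1, h2) = f, g, h
    # Quadrature variances theta_plus and theta_minus of the Gaussian form.
    th_p = 0.5 * ((f1 + f2)**2 + (g1 + g2)**2 + (h1 + h2)**2) - s / 2
    th_m = 0.5 * ((f1 - f2)**2 + (g1 - g2)**2 + (h1 - h2)**2) - s / 2
    # Gaussian quasiprobability distribution of the first mode.
    return np.exp(-x**2 / th_p - y**2 / th_m) / (np.pi * np.sqrt(th_p * th_m))

# Check: with the identity transformation (all r_j = 0), f = (1, 0) and
# g = h = (0, 0), so W(0, 0, 0) = 2/pi, the vacuum value.
print(squeezed_vacuum_W(0.0, 0.0, f=(1.0, 0.0), g=(0.0, 0.0), h=(0.0, 0.0)))
\end{verbatim}
Assuming that the transformation reduces to the identity at $r_1=r_2=r_3=0$, i.e. $f^{(1)}=(1,0)$ and $g^{(1)}=h^{(1)}=(0,0)$, the variances become $\vartheta_{\pm}=(1-s)/2$, in agreement with (\ref{secf14}), and the sketch returns the vacuum Gaussian.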
Now we consider two cases: $(n_1,n_2,n_3)=(0,0,n_3)$ and $(n_1,n_2,n_3)=(n_1,0,0)$. For the first case, we investigate the influence of the nonclassicality in the third mode on the behavior of the first mode. To do so, we substitute $n_1=n_2=0$ in (\ref{secf5}) and after minor algebra we arrive at: \begin{eqnarray} \begin{array}{lr} W(x,y,s)= \frac{(-1)^{n_3}}{\pi\sqrt{(\frac{\Lambda_1-s}{2})^2-\Lambda_2^2}}(\frac{\eta_-}{\vartheta_+})^{n_3} \exp[-\frac{x^2}{\vartheta_-}-\frac{y^2}{\vartheta_+}]\\ \\ \times \sum\limits_{m=0}^{n_3}\left(\frac{\vartheta_+\eta_+}{\vartheta_-\eta_-}\right)^m {\rm L}_m^{-\frac{1}{2}}[(\frac{\eta_++\vartheta_-}{\eta_+\vartheta_-})x^2] {\rm L}_{n_3-m}^{-\frac{1}{2}}[(\frac{\eta_-+\vartheta_+}{\eta_-\vartheta_+})y^2] , \label{secf10} \end{array} \end{eqnarray} where \begin{equation} \eta_{\pm}=(h_1^{(1)}\pm h_2^{(1)})^2-\vartheta_{\mp}. \label{secf11} \end{equation} One can easily check that when $r_1=r_2=r_3=0$ the $W$ function (\ref{secf10}) reduces to that of the vacuum state. The form (\ref{secf10}) includes Laguerre polynomials, which are well known in the literature for providing nonclassical effects in phase space. This indicates that the nonclassical effects can be transferred from one mode to another under the action of the operator (\ref{1}). Of course, the amount of transferred nonclassicality depends on the values of the squeezing parameters $r_j$. \begin{figure} \caption{The Wigner function of the first mode (a) and the evolution of the Wigner function at the phase space origin (b). In (a) we use $(r_1,r_2,r_3,n_1,n_2,n_3)=(1.1,1.1,1.1,0,0,1)$. In (b) we use $(r_1,r_2,n_1,n_2,n_3)=(r_3,r_3,0,0,1)$ solid curve, $(0.6,0.8,0,0,1)$ dashed curve, and $(0.6,0.8,1,0,0)$ dotted curve. } \label{fig5} \end{figure} The second case $(n_1,0,0)$ has the same expression (\ref{secf10}) with the following transformations: \begin{equation} n_3\rightarrow n_1,\quad \eta_{\pm}=(f_1^{(1)}\pm f_2^{(1)})^2- \vartheta_{\mp}. \label{secf13} \end{equation} Now we prove that this $W$ function tends to that of the number state when $r_1=r_2=r_3=0$. In this case, the transformations (\ref{secf13}) tend to: \begin{equation} \eta_{\pm}=\frac{1+s}{2}, \quad \vartheta_{\pm}=\frac{1-s}{2}. \label{secf14} \end{equation} Substituting these variables in the expression (\ref{secf10}) (with $n_3\rightarrow n_1$) we obtain: \begin{eqnarray} \begin{array}{lr} W(x,y,s)= \frac{2(-1)^{n_1}}{\pi} \frac{(1+s)^{n_1}}{(1-s)^{n_1+1}}\exp[-\frac{2(x^2+y^2)}{1-s}] \sum\limits_{m=0}^{n_1} {\rm L}_m^{-\frac{1}{2}}[\frac{4x^2}{1-s^2}] {\rm L}_{n_1-m}^{-\frac{1}{2}}[\frac{4y^2}{1-s^2}]\\ \\ =\frac{2(-1)^{n_1}}{\pi} \frac{(1+s)^{n_1}}{(1-s)^{n_1+1}}\exp[-\frac{2(x^2+y^2)}{1-s}] {\rm L}_{n_1}[\frac{4(x^2+y^2)}{1-s^2}] . \label{secf15} \end{array} \end{eqnarray} In (\ref{secf15}) the transition from the first line to the second one has been made using the identity: \begin{equation} \sum\limits_{n=0}^{m} {\rm L}_n^{\tau_1}(x) {\rm L}_{m-n}^{\tau_2}(y)={\rm L}_m^{\tau_1+\tau_2+1}(x+y). \label{secf16} \end{equation} The expression (\ref{secf15}) is the $s$-parameterized quasiprobability distribution of the number state, e.g. \cite{wolfgang}. \begin{figure} \caption{The Wigner function of the first mode when $(r_1,r_2,r_3,n_1,n_2,n_3)=(0.6,0.8,0.9,1,0,0)$ (a), $(0.6,0.8,0.9,0,0,1)$ (b), $(0.6,0.8,2,0,0,1)$ (c) and $(0.4,0.8,2,0,0,2)$ (d). } \label{fig6} \end{figure} We conclude this part by writing down the form of the $W$ function at the phase space origin, which is a sensitive point for the occurrence of the nonclassical effects.
Moreover, it gives a simple visualization of the behavior of the system. Additionally, this point can be measured by the photon counting method \cite{[34]}. From (\ref{secf10}) we have: \begin{equation} W(0,0,s)= \frac{(-1)^{n_3}}{\pi\sqrt{\vartheta_+\vartheta_-}} (\frac{\eta_-}{\vartheta_+}+\frac{\eta_+}{\vartheta_-})^{n_3}. \label{[28]} \end{equation} It is obvious that the Wigner function exhibits negative values at the phase space origin only when $n_3$ is an odd number. In Figs. 5 and 6 we plot the $W$ functions for the given values of the system parameters. We start the discussion with the symmetric case. In Fig. 5(a) we use $(n_1,n_2,n_3)=(0,0,1)$, meaning that the mode under consideration is in the vacuum state. Thus for $r_j=0$ the $W$ function exhibits the single-peak Gaussian structure centered at the phase space origin. When $r_j\neq 0$ this behavior is completely changed, and one can observe a number of nonclassical features, e.g. negative values, a multi-peak structure and a stretched contour (see Fig. 5(a)). This indicates that the nonclassical effects can be transferred from one mode to the other under the action of the operator (\ref{1}). In Fig. 5(b) we plot the ``evolution'' of the $W$ function given by (\ref{[28]}) against the parameter $r$ ($r_3$) for the symmetric (asymmetric) case. The aim of this figure is to estimate the value of the nonlinearity $r$ ($r_3$) for which the nonclassical effects maximally occur and/or transfer from a certain mode to the other. For the symmetric case, this occurs at $r=1.2$, while the nonclassicality is completely washed out at $r=3$. Now we turn to the asymmetric case, which is plotted in Fig. 6. Fig. 6(a) gives information on the case $(n_1,n_2,n_3)=(1,0,0)$. The $W$ function of the Fock state $|1\rangle$ is well known in the literature for having an inverted peak in phase space with maximally negative values. This is related to the fact that this state provides maximum sub-Poissonian statistics. Under the action of the operator (\ref{1}) these negative values are reduced and a two-peak structure starts to be constructed. This indicates that the system is able to generate particular types of Schr\"{o}dinger-cat states by controlling the system parameters. This is clearly visible in Figs. 6(b)--(d), which correspond to the cases $n_1=n_2=0, n_3=1,2$. For instance, in Fig. 6(b) the $W$ function exhibits a two-peak structure. Moreover, by increasing the value of the parameter $r_3$, the $W$ function exhibits two Gaussian peaks with an inverted negative peak in between, indicating the occurrence of interference in phase space (see Fig. 6(c)). This shape is similar to that of the odd coherent state. Additionally, Fig. 6(d), in which $n_3=2$, provides the well-known shape of the $W$ function of the even coherent state. Generally, the even and the odd Schr\"{o}dinger-cat states have nearly identical classical components (i.e. the positive peaks) and only differ in the sign of their quantum interferences. These are interesting results, which show that by controlling the nonlinearity of the system and preparing a certain mode in the Fock state $|1\rangle$ or $|2\rangle$ one can generate cat states. It is worth mentioning that the Fock state $|n\rangle$ can be prepared with very high efficiency according to recent experiments \cite{[48]}. Similar results have been obtained from the codirectional three-mode Kerr nonlinear coupler \cite{faisalcoupler}.
Furthermore, the construction of a cat state trapped in a cavity, in which several photons survive long enough to be repeatedly measured, has recently been reported in \cite{haroche}. In this technique, atoms crossing the cavity one by one are used to obtain information about the field. Proceeding, we have noted that the $W$ function for the cases $n_2=n_3=0, n_1=1,2$ can exhibit behaviors quite similar to those of Figs. 6(c) and (d) for the same values of $r_j$. Now we turn to the dashed and dotted curves in Fig. 5(b). These curves provide information on the evolution of $W(0,0)$ against $r_3$ for the cases of Fig. 6 with $(n_1,n_2,n_3)=(0,0,1)$ and $(1,0,0)$, respectively. From the dotted curve, i.e. when the mode under consideration is in the Fock state, $W(0,0)$ exhibits its maximum negativity at $r_3=0$, which monotonically decreases in magnitude and completely vanishes at $r_3=4$. This shows how long the nonclassicality inherited in the first mode survives, depending on the intensity of the third amplifier. On the other hand, from the dashed curve, i.e. when the mode under consideration is in the vacuum state, $W(0,0)$ exhibits negative values for $r_3\geq 1$, increases rapidly to reach a maximum around $r_3=2$, then reduces gradually and vanishes for $r_3\geq 5$. This range of negativity is greater than that of the dotted curve. This is in good agreement with the behavior of the second-order correlation function. Finally, comparison among the curves in Fig. 5(b) confirms the fact that, for certain values of the system parameters, the asymmetric case can provide nonclassical effects greater than those of the symmetric one. \section{Conclusion} In this paper we have studied the three-mode squeezed operator, which can be implemented from the triply coincident nonlinearities in periodically poled KTiOPO$_4$. The action of this operator on the three-mode coherent and number states is demonstrated. We have studied the quadrature squeezing, the second-order correlation function, the Cauchy-Schwarz inequality and the quasiprobability distribution functions. The obtained results can be summarized as follows. Generally, the single-mode vacuum or coherent states, as outputs from the three concurrent amplifiers, are not nonclassical states. The system can exhibit two-mode and three-mode squeezing. The amount of two-mode squeezing generated by the asymmetric case is much greater than that of the symmetric case. Three-mode squeezed coherent (number) states cannot (can) exhibit sub-Poissonian statistics. To obtain maximum sub-Poissonian statistics from a particular mode, under the action of the operator (\ref{1}), it must be prepared in a nonclassical state and the other modes in states close to classical ones. We have found that the Cauchy-Schwarz inequality can be violated for both coherent states and number states. The origin of the violation lies in the strong quantum correlation among the different modes. For the Fock-state case, the asymmetry in the system enlarges the range of nonlinearities for which $V_{jk}$ is nonclassical compared to that of the symmetric one. In the framework of the quasiprobability distribution we have shown that the nonclassical effects can be transferred from one mode to another under the action of the operator (\ref{1}). The amount of transferred nonclassicality is sensitive to the values of the squeezing parameters. Interestingly, the system can generate particular types of Schr\"{o}dinger-cat states for certain values of the system parameters.
Generally, we have found that the nonclassical effects generated by the operator (\ref{1}) are greater than those obtained from the TMS operator \cite{faisal}. Finally, the asymmetry among the three concurrent nonlinearities is important for obtaining significant nonclassical effects. \section*{References} \end{document}
\begin{document} \title[Nonlinear Schrödinger problems]{Nonlinear Schrödinger problems: symmetries of some variational solutions} \author[Ch.~Grumiau]{Christopher Grumiau} \address{ Institut de Math{\'e}matique\\ Universit{\'e} de Mons, Le Pentagone\\ 20, Place du Parc, B-7000 Mons, Belgium} \email{[email protected]} \thanks{The author is partially supported by a grant from the National Bank of Belgium and by the program ``Qualitative study of solutions of variational elliptic partial differential equations. Symmetries, bifurcations, singularities, multiplicity and numerics'' of the FNRS, project 2.4.550.10.F of the Fonds de la Recherche Fondamentale Collective. } \begin{abstract} In this paper, we are interested in the nonlinear Schrödinger problem $-\Delta u + Vu = \abs{u}^{p-2}u$ submitted to Dirichlet boundary conditions. We consider $p>2$ and we work on an open bounded domain $\Omega\subseteq\mathbb{R}^N$ ($N\geqslant 2$). The potential $V$ satisfies $\max(V,0)\in L^{N/2}(\Omega)$ and $\min(V,0)\in L^{+\infty}(\Omega)$. Moreover, $-\Delta + V$ is positive definite and has one and only one principal eigenvalue. When $p\simeq 2$, we prove the uniqueness of the solution once we fix the projection on an eigenspace of $-\Delta + V$. This implies partial symmetries (or symmetry breaking) for ground state and least energy nodal solutions. In the literature, the case $V\equiv 0$ has already been studied. Here, we generalize the technique to our case by pointing out and explaining the differences. Finally, as an illustration, we implement the (modified) mountain pass algorithm to work with $V$ negative, piecewise constant or unbounded. This permits us to exhibit direct examples where the solutions break the symmetries of $V$. \end{abstract} \subjclass{primary 35J20; secondary 35A30} \keywords{Nonlinear Schrödinger problems, ground state solutions, least energy nodal solutions, (nodal) Nehari set, mountain pass algorithm} \maketitle \section{Introduction} Let $N\geqslant 2$, $p>2$, $\lambda>0$ and let $\Omega\subseteq\mathbb{R}^N$ be an open bounded domain. We study the nonlinear Schrödinger problem \begin{equation} \label{eqp} -\Delta u(x) + V(x)u(x) = \lambda|u(x)|^{p-2}u(x) \end{equation} submitted to Dirichlet boundary conditions (DBC). We are interested in the symmetry of solutions. When $V$ belongs to $L^{N/2}(\Omega)$, the solutions can be defined as the critical points of the energy functional \begin{equation*} \mathcal{E}_p: H^1_0 (\Omega)\to\mathbb{R} : u\mapsto \frac{1}{2}\int_{\Omega}\abs{\nabla u}^2 + Vu^2 -\frac{\lambda}{p}\int_{\Omega}\abs{u}^p. \end{equation*} Clearly, $0$ is a solution. Concerning other solutions, if we assume that $-\Delta + V$ is positive definite and $V^-:=\min(V,0)\in L^{+\infty}(\Omega)$, then the norm $\norm{u}^2=\int_{\Omega}\abs{\nabla u}^2 + Vu^2$ defined on $H^1_0(\Omega)$ is equivalent to the traditional norm $\norm{u}^2_{H^1_0}= \int_{\Omega}\abs{\nabla u}^2$ (see Proposition~\ref{equiv}). Working in the same way as in~\cite{neuberger}, this directly implies the existence of ground state solutions (g.s.) and least energy nodal solutions (l.e.n.s.), i.e.\ one-signed (resp.\ sign-changing) solutions with minimal energy.
These solutions are characterized as minimizers of $\mathcal{E}_p$ on the Nehari set (resp.\ the nodal Nehari set) \[\mathcal{N}_p:=\left\{u\in H^1_0(\Omega)\setminus\{0\}\bigm\vert \int_{\Omega}\abs{\nabla u}^2 + Vu^2 =\lambda\int_{\Omega}\abs{u}^p\right\}\] (resp.\ $\mathcal{M}_p:=\{u : u^\pm\in \mathcal{N}_p\}$). The Morse index is $1$ (resp.\ $2$). In this paper, we study the structure of these two types of solutions. We verify whether they are odd or even with respect to the hyperplanes leaving $V$ invariant (i.e.\ $V$ respects an orthogonal symmetry with respect to the hyperplane). When this is the case, we say that the solution respects the symmetries of $V$. When $V\equiv 0$, this type of question has already been studied. First, on the square in dimension $2$, we can mention a result of G.~Arioli and H.~Koch (see~\cite{arioli}). They proved the existence of a positive symmetric $\mathcal{C}^{\infty}$-function $w$ such that $-\Delta u = wu^3$ possesses a non-symmetric positive solution (with Morse index $1$). The same kind of result has also been obtained for a solution with Morse index $2$. The proof is partially computer-assisted. Second, in collaboration with D.~Bonheure, V.~Bouchez, C.~Troestler and J.~Van~Schaftingen (see~\cite{bbg,bbgv,gt}), we proved for $p$ close to $2$ that the symmetries are related to the symmetries of eigenfunctions of $-\Delta$. We generalize here the technique to some non-zero potentials $V$ and we perform numerical experiments to illustrate it. In Section~\ref{sec:abst-sym}, denoting by $\lambda_{i}$ (resp.\ $E_i$) the distinct eigenvalues (resp.\ eigenspaces) of $-\Delta+ V$ with DBC in $H^1_0(\Omega)$, we prove the following Theorem~\ref{intro}. For this, we assume that $\lambda_1$ is the unique principal eigenvalue, i.e.\ an eigenvalue whose eigenspace has dimension $1$ and possesses a one-signed eigenfunction. We also require that eigenfunctions in $E_2$ have a nodal line of measure $0$. By using a maximum principle, these assumptions are satisfied at least when $V\in L^{+\infty}(\Omega)$ (see~\cite{gossez}). \begin{Thm} \label{intro} When $V\in L^{N/2}(\Omega)$, $V^-\in L^{+\infty}(\Omega)$ and $-\Delta + V$ is positive definite such that $\lambda_1$ is the unique principal eigenvalue, for $p$ close to $2$, the ground state\ (resp.\ least energy nodal) solutions respect the symmetries of their orthogonal projections in $H^1_0(\Omega)$ on $E_1$ (resp.\ $E_2$). \end{Thm} In particular, when the eigenspace has dimension $1$, the solutions respect the symmetries of $V$. As we assumed that $\lambda_1$ is the unique principal eigenvalue, ground state solutions respect the symmetries of $V$. Depending on the structure of $E_2$, some symmetry breaking may exist for least energy nodal solutions (see Section~\ref{symbr}). In fact, by a traditional bootstrap, a family of ground state (resp.\ least energy nodal) solutions $(u_p)_{p>2}$ converges in the $\mathcal{C}$-norm to functions in $E_1$ (resp.\ $E_2$). So, for l.e.n.s., $u_p$ does not respect the symmetries of $V$ for $p$ small when the projection is not symmetric in $E_2$ (see Section~\ref{exp} for an example). For larger $p$, it depends on the case. In Section~\ref{symbr}, we exhibit rectangles and potentials $V$ (such that the eigenfunctions in $E_2$ are symmetric) where l.e.n.s.\ do not respect the symmetries of $V$ for $p$ large enough. So, the result of Theorem~\ref{intro} cannot be extended to all $p$.
In Section~\ref{exp}, as an illustration, we implement the (modified) mountain pass algorithm (see~\cite{mckenna,zhou1,zhou2}) to study the cases of $V$ negative constant, piecewise constant or singular. We exhibit direct examples where the solutions break the symmetries of $V$. \section{Main results}\label{sec:abst-sym} The proofs are based on the technique developed in~\cite{bbgv}. This is why we just point out and explain the differences and do not give all the details. The first result implies that the traditional Poincaré and Sobolev inequalities are available for $\norm{\cdot}^2:=\int_{\Omega}\abs{\nabla \cdot}^2 + V(\cdot)^2$. \begin{prop} \label{equiv} If $-\Delta +V$ is positive definite, $V^+\in L^{N/2}(\Omega)$ and $V^-\in L^{+\infty}(\Omega)$, the norm $\norm{u}^2:=\int_{\Omega}\abs{\nabla u}^2 + Vu^2$ and the traditional norm $\norm{u}_{H^1_0}^2:=\int_{\Omega}\abs{\nabla u}^2$ are equivalent. \end{prop} \begin{proof} Using the Sobolev inequality on $\int_{\Omega}Vu^2$ and as $V\in L^{N/2}(\Omega)$, there exists $C>0$ such that \begin{equation*} \norm{u}^2 \leqslant \int_{\Omega} \abs{\nabla u}^2 + C \int_{\Omega} \abs{\nabla u}^2. \end{equation*} Using the Poincaré inequality and as $V^-\in L^{+\infty}(\Omega)$, there exist $C>0$ and a real number $K$ such that \begin{equation*} \begin{split} \norm{u}^2 &= \varepsilon \int_{\Omega}\abs{\nabla u}^2 + (1-\varepsilon) \norm{u}^2 + \varepsilon \int_{\Omega} Vu^2\\ & \geqslant \varepsilon\int_{\Omega} \abs{\nabla u}^2 + ((1 -\varepsilon)C + \varepsilon K)\int_{\Omega}u^2\geqslant \varepsilon \int_{\Omega} \abs{\nabla u}^2, \end{split} \end{equation*} where the last inequality is obtained for $\varepsilon$ small enough. \end{proof} Then, the proof of Theorem~\ref{intro} is based on two main results. The first one shows that, for $p\simeq 2$, a priori bounded solutions can be distinguished by their projections on $E_i$. \subsection{Abstract symmetry} \begin{lemma}\label{Lem:uniq} There exists $\varepsilon >0$ such that if $\|a(x)-\lambda_{i}\|_{L^{N/2}}< \varepsilon$ and $u$ solves $-\Delta u + Vu= a(x)u$ with DBC, then $u=0$ or $P_{E_{i}}u \neq 0$. \end{lemma} \begin{proof} As in Lemma~$3.1$ in~\cite{bbgv}; Poincar\'e's and Sobolev's inequalities are adapted using Proposition~\ref{equiv}. \end{proof} Then, we directly obtain our abstract symmetry result as in the proof of Proposition~$3.2$ in~\cite{bbgv}. Let us remark that the result holds for any $i$ and not just for $i=1$ or $2$ as stated in~\cite{bbgv}. We denote by $B(0,M)$ the ball in $H^1_0(\Omega)$ centered at $0$ and of radius $M$. \begin{prop}\label{Prop-intro:uniqueness} Let $M > 0$. For $i\in\mathbb{N}_0$, $\exists\tilde{p} > 2$ such that, for $p \in (2, \tilde{p})$, if $u_p, v_p\in \{ u\in B(0,M): P_{E_i}u\notin B(0, \frac{1}{M})\}$ solve the boundary value problem with DBC $-\Delta u + Vu = \lambda_i \abs{u}^{p-2}u$, then $P_{E_i}u_p = P_{E_i}v_p$ implies $u_p=v_p$. \end{prop} These two results permit us to conclude as in Theorem~$3.6$ in~\cite{bbgv}.
\begin{Thm} \label{thmf} Let $(G_\alpha)_{\alpha \in E}$ with $E=E_i$ be a group acting on $H^{1}_{0}(\Omega)$ such that, for $g\in G_{\alpha}$ and $u\in H^{1}_{0}(\Omega)$, \begin{center} $g(E)=E$,\quad $g(E^{\perp})=E^{\perp}$,\quad $g\alpha=\alpha$\quad and\quad $\mathcal{E}_{p}(gu)=\mathcal{E}_{p}(u).$ \end{center} For any $M>1$, $\exists \tilde{p}>2$ such that, for any family of solutions $(u_p)_{\tilde{p}>p>2}\subseteq \{ u\in B(0,M): P_{E_i}u\notin B(0, \frac{1}{M})\}$ of the boundary value problem with DBC $-\Delta u + Vu = \lambda_i \abs{u}^{p-2}u$, $u_{p}$ belongs to the invariant set of $G_{\alpha_{p}}$, where $\alpha_{p}$ is the orthogonal projection $P_{E}u_{p}$. \end{Thm} Theorem~\ref{thmf} can be used for any bounded family of solutions staying away from $0$. To apply Theorem~\ref{thmf} to a family $(u_p)_{p>2}$ of ground state (resp.\ least energy nodal) solutions of problem~\eqref{eqp}, we study the asymptotic behavior when $p\to 2$. We prove that the expected upper and lower bounds hold if and only if $\lambda = \lambda_1$ (resp.\ $\lambda_2$). In some sense, $\lambda_1$ (resp.\ $\lambda_2$) is the natural rescaling to work with ground state (resp.\ least energy nodal) solutions of problem~\eqref{eqp}. Let us remark that this condition is not a restriction. Indeed, by homogeneity of $\lambda \abs{u}^{p-2}u$ in equation~\eqref{eqp}, the symmetries of ground state (resp.\ least energy nodal) solutions are independent of $\lambda$. \subsection{Asymptotic behavior} Let us denote by $(u_p)_{p>2}$ a family of ground state (resp.\ least energy nodal) solutions of problem~\eqref{eqp}. We consider $\lambda = \lambda_1$ for g.s.\ (resp.\ $\lambda_n$ the first non-principal eigenvalue for l.e.n.s.) and $E=E_1$ (resp.\ $E_n$). \begin{lemma} Concerning the upper bound, $\limsup_{p\to2} \norm{u_p}^2 = \limsup_{p\to 2} \left( \frac{\mathcal{E}_p(u_p)} {1/2 -1/p}\right)\leqslant \norm{u_*}^2$ where $u_*\in E$ minimizes the limit functional $\mathcal{E}_*:E\to \mathbb{R}: u\mapsto \int_{\Omega}u^2- u^2\log u^2.$ \end{lemma} \begin{proof} The proof is inspired by Lemma~$4.1$ in~\cite{bbgv}. First, we define $v_p:= u_* + (p-2)w$ where $w\in H^1_0(\Omega)$ solves the problem $-\Delta w + V w -\lambda_2 w = \lambda_2 u_* \log \abs{u_*}$ with $P_{E}w=0$. Then, we prove that the projection of $v_p$ on $\mathcal{N}_p$ (resp.\ $\mathcal{M}_p$) converges when $p\to 2$. Concerning least energy nodal solutions, in~\cite{bbgv} the result has been stated for $n=2$. Here, let us remark that it works with $E_n$, which is not necessarily $E_2$. We just need to ensure that $v_p$ is sign-changing for $p$ close to $2$. \end{proof} Nevertheless, we need to assume $n=2$ to obtain the lower bound. This is explained in the next lemma. \begin{lemma} Concerning the lower bound, if $n=2$ then $\liminf_{p\to 2}\norm{u_p}>0$. \end{lemma} \begin{proof} The proof is inspired by Lemma $4.4$ in~\cite{bbgv}. Concerning l.e.n.s.\ (the argument is easier for g.s.), let $e_1$ be a first eigenfunction in $E_1$. By considering $s_{p}^-:= \frac{\int_{\Omega}u_{p}^+e_1}{\int_{\Omega}\abs{u_p}e_1} $ and $s_{p}^+:= 1-s_{p}^-$, we show the existence of $t_p>0$ such that $v_p= t_p(s_p^+ u_p^+ + s_p^- u_p^-)$ belongs to $\mathcal{M}_p \cap E_1^{\perp}$. Then, we prove that $v_p$ stays away from zero using Poincaré's and Sobolev's embeddings, which concludes the proof. For this part, we need to require that $\lambda_1$ is the unique principal eigenvalue, i.e.\ $n=2$.
Otherwise, we should prove that $v_p\in (E_1 \oplus\ldots\oplus E_{n-1})^{\perp}$, which cannot be assumed. \end{proof} The two previous results imply Theorem~\ref{result}. \begin{Thm}\label{result} Assume that $-\Delta + V$ is positive definite and possesses one and only one principal eigenvalue ($n=2$), $V^+\in L^{N/2}(\Omega)$ and $V^-\in L^{+\infty}(\Omega)$. If $(u_{p})_{p>2}$ is a family of ground state\ (resp.\ least energy nodal) solutions of equation~\eqref{eqp}, then $\exists C>0$ such that $\norm{u_p}_{H^1_0} \leqslant C \left(\frac{\lambda_i}{\lambda} \right)^\frac{1}{p-2},$ where $i=1$ (resp.\ $2$). If $p_{n}\to 2$ and $\left(\frac{\lambda_{i}}{\lambda}\right)^{\frac{1}{2-p_{n}}}u_{p_{n}} \rightharpoonup u_{*}$ in $H^{1}_{0}(\Omega)$, then $\left(\frac{\lambda_{i}}{\lambda}\right)^{\frac{1}{2-p_{n}}}u_{p_{n}}\to u_{*}$ in $H^{1}_{0}(\Omega)$, $u_{*}$ satisfies $-\Delta u_{*}+ Vu_*=\lambda_{i}u_{*}$ and $\mathcal{E}_{*}(u_{*})=\inf \{\mathcal{E}_{*}(u) : u\in E_{i}\setminus\{0\}, \langle \,{\mathrm d} \mathcal{E}_{*}(u),u\rangle=0\},$ where $\mathcal{E}_{*}:E_{i}\to \mathbb{R} : u\mapsto\frac{\lambda_{i}}{2} \int_{\Omega}u^{2}-u^{2}\log u^{2}.$ \end{Thm} \begin{rem} \begin{enumerate}[(i)] \item By a traditional bootstrap, $\left(\frac{\lambda_{i}}{\lambda}\right)^{\frac{1}{2-p_{n}}}u_{p_{n}} \rightharpoonup u_{*}$ implies $\left(\frac{\lambda_{i}}{\lambda}\right)^{\frac{1}{2-p_{n}}}u_{p_{n}} \to u_*$ in the $\mathcal{C}$-norm (see~\cite{gt}). \item If $\lambda <\lambda_1$ (resp.\ $\lambda_2$), a family of g.s.\ (resp.\ l.e.n.s.) blows up in $H^1_0(\Omega)$. If $\lambda >\lambda_1$ (resp.\ $\lambda_2$), it goes to $0$. So, a family of g.s.\ (resp.\ l.e.n.s.) is bounded and stays away from $0$ if and only if $\lambda =\lambda_1$ (resp.\ $\lambda_2$). \item By homogeneity of $\lambda\abs{u}^{p-2}u$, the study of symmetries for only one value of $\lambda$ is enough to conclude symmetries for any $\lambda >0$. \item By combining Theorem~\ref{thmf} and Theorem~\ref{result}, we obtain that ground state solutions for $p$ close to $2$ respect the symmetries of their projection on $E_1$. As first eigenfunctions are unique up to a constant, they keep the symmetries of $V$ for $p$ close to $2$. \item By combining Theorem~\ref{thmf} and Theorem~\ref{result}, we obtain that least energy nodal solutions for $p$ close to $2$ respect the symmetries of their projection on $E_2$. \end{enumerate} \end{rem} \subsection{Symmetry breaking for least energy nodal solutions}\label{symbr} For $p\simeq 2$, the previous results showed that the structure of least energy nodal solutions is related to the symmetries of $u_*$ verifying $\mathcal{E}_{*}(u_{*})= \inf \{\mathcal{E}_{*}(u) : u\in E_{i}\setminus\{0\}, \langle \,{\mathrm d} \mathcal{E}_{*}(u),u\rangle=0\}.$ In~\cite{bbgv} (see Section~$6$), on the square and for $V\equiv 0$, it is proved that if $u_*$ does not respect the symmetries of the rectangle, i.e.\ $u_*$ is not odd or even with respect to the medians (which is numerically observed), then there exists a symmetry breaking on rectangles sufficiently close to the square and for $p$ sufficiently large. In our case, this property can be stated as follows. \begin{Thm} Let us work on a square. If $V$ is odd or even with respect to a median but $u_*$ does not respect this symmetry, then there exist some rectangles and some $p$ such that the least energy nodal solutions $u_p$ do not respect the symmetries of $V$.
\end{Thm} Moreover, as $u_p$ converges in the $\mathcal{C}$-norm, we are able to directly construct potentials $V$ such that the least energy nodal solutions break the symmetry of $V$. This happens once $u_*$ is not symmetric. In the next section, numerical experiments illustrate this interesting case. \section{Numerical illustrations: non-zero potentials $V$} \label{exp} In this section, we apply the (resp.\ modified) mountain pass algorithm to approach one-signed (resp.\ sign-changing) solutions (see~\cite{mckenna,zhou1,zhou2,cdn}). While it is not certain that the approximate solutions have least energy, all the other solutions that we have found numerically have a larger energy. So, we will assume that the approximations are ground state (resp.\ least energy nodal) solutions. We also give some level curves: $1$ and $2$ for g.s., $\pm 1$ and $\pm 2$ for l.e.n.s. Numerically, we study $p=4$. Let us remark that we always obtain the same kind of symmetry for smaller values of $p$. We work with $p=4$ to illustrate that the result of Theorem~\ref{intro} seems to hold at least on a non-negligible interval. \subsection{Negative constant potential on a square} As a first example, we consider a constant $V$ such that $\lambda_1>0$, i.e.\ $V> -\tilde{\lambda}_1$ where $\tilde{\lambda}_1$ is the first eigenvalue of $-\Delta$. So, the required assumptions on $V$ are clearly satisfied and Theorem~\ref{intro} holds. In particular, concerning symmetries, we obtain \begin{enumerate} \item for $p$ close to $2$, on convex domains, ground state solutions are even with respect to each hyperplane leaving $\Omega$ invariant; \item for $p$ close to $2$, least energy nodal solutions on a rectangle are even with respect to one median and odd with respect to the other; \item for $p$ close to $2$, least energy nodal solutions on radial domains are even with respect to $N-1$ orthogonal directions and odd with respect to the orthogonal one; \item for $p$ close to $2$, least energy nodal solutions on a square are odd with respect to the barycenter. \end{enumerate} Numerically, we consider $-\Delta u - \frac{\pi^2}{4}u = u^3$ defined on the square $\Omega=(-1,1)^2$ in $\mathbb{R}^2$. The first and second eigenvalues of $-\Delta$ are given by $\frac{\pi^2}{2}$ and $\frac{5\pi^2}{4}$. In the following graphs, the one-signed (resp.\ nodal) numerical solutions have the expected symmetries. Ground state solutions respect the symmetries of the square and least energy nodal solutions are odd with respect to the center $0$. Moreover, the nodal line of the sign-changing solutions seems to be a diagonal, as for $V\equiv 0$ (see~\cite{bbgv}).
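To make the procedure concrete, the following sketch (in Python) illustrates, under the assumptions of this subsection ($\Omega=(-1,1)^2$, $V\equiv-\pi^2/4$, $\lambda=1$, $p=4$) and with a standard five-point finite-difference discretization, the two basic ingredients of such schemes: the scaling that projects a function onto the Nehari set $\mathcal{N}_p$ and a damped descent step for $\mathcal{E}_p$ followed by re-projection. It is only an illustrative sketch and not the (modified) mountain pass implementation of~\cite{mckenna,zhou1,zhou2} used to produce the figures below.
\begin{verbatim}
import numpy as np

n, lam, p = 64, 1.0, 4.0           # interior grid points per side, lambda, p
h = 2.0 / (n + 1)                  # mesh size on Omega = (-1, 1)^2
x = np.linspace(-1 + h, 1 - h, n)
X, Y = np.meshgrid(x, x)
V = -np.pi**2 / 4                  # constant potential of this example

def lap(u):
    # Five-point Laplacian with homogeneous Dirichlet boundary conditions.
    w = np.pad(u, 1)
    return (w[:-2, 1:-1] + w[2:, 1:-1] + w[1:-1, :-2] + w[1:-1, 2:]
            - 4.0 * u) / h**2

def norm2(u):
    # Discrete version of int |grad u|^2 + V u^2 (integration by parts).
    return float(np.sum(-u * lap(u) + V * u**2)) * h**2

def nehari(u):
    # Scale u so that t*u lies on the Nehari set:
    # t^(p-2) = ||u||^2 / (lambda * int |u|^p).
    t = (norm2(u) / (lam * float(np.sum(np.abs(u)**p)) * h**2))**(1 / (p - 2))
    return t * u

def step(u, dt=h**2 / 4):
    # One damped step along -E_p'(u), followed by re-projection.
    grad = -lap(u) + V * u - lam * np.abs(u)**(p - 2) * u
    return nehari(u - dt * grad)

u = nehari(np.cos(np.pi * X / 2) * np.cos(np.pi * Y / 2))  # one-signed guess
for _ in range(2000):                                      # crude iteration
    u = step(u)
\end{verbatim}
A natural modification for sign-changing approximations is to project $u^+$ and $u^-$ separately onto $\mathcal{N}_p$, so that the iterate remains in the nodal Nehari set $\mathcal{M}_p$.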
\begin{minipage}[h]{0.6\linewidth} \includegraphics[width=2.8cm, angle=270]{squareVMPA.png} \includegraphics[width=2.8cm, angle = 270]{squareVMMPA.png} \null \begin{center} \begin{tikzpicture} \draw[->] (0,0) --(0.5,0); \draw[->] (0,0)--(0,.5); \node[anchor= north] at (0.5,0) {$x$}; \node[anchor =east] at (0,0.5) {$y$}; \end{tikzpicture} \begin{tikzpicture} \pgfsetxvec{\pgfpoint{0.7cm}{0cm}} \pgfsetyvec{\pgfpoint{0cm}{0.7cm}} % \meshline level-curve data (contours of the numerical solutions on the square) omitted.
\meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \end{tikzpicture} \begin{tikzpicture} \pgfsetxvec{\pgfpoint{1.5cm}{0cm}} \pgfsetyvec{\pgfpoint{0cm}{1.5cm}} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} 
\meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.475}{0.52485}{0.47603}{0.52387} \meshline{black}{0.47198}{0.52776}{0.475}{0.52485} \meshline{black}{0.37967}{0.61966}{0.38914}{0.61231} \meshline{black}{0.51966}{0.47973}{0.5067}{0.49321} \meshline{black}{0.41125}{0.58794}{0.43257}{0.56853} \meshline{black}{0.5067}{0.49321}{0.47603}{0.52387} \meshline{black}{0.41125}{0.58794}{0.38914}{0.61231} \meshline{black}{0.44311}{0.55638}{0.47198}{0.52776} \meshline{black}{0.35429}{0.64499}{0.37967}{0.61966} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0.60753}{0.39089}{0.60241}{0.39761} \meshline{black}{0.60241}{0.39761}{0.58939}{0.41103} \meshline{black}{0.43257}{0.56853}{0.44311}{0.55638} \meshline{black}{0.53854}{0.46154}{0.56359}{0.43539} \meshline{black}{0.51966}{0.47973}{0.53854}{0.46154} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.34833}{0.65148}{0.35429}{0.64499} \meshline{black}{0.63484}{0.3659}{0.60753}{0.39089} \meshline{black}{0.6505}{0.34675}{0.63484}{0.3659} \meshline{black}{0.58939}{0.41103}{0.57036}{0.42961} \meshline{black}{0.56359}{0.43539}{0.57036}{0.42961} \meshline{black}{0.3162}{0.68276}{0.34562}{0.65541} \meshline{black}{0.34562}{0.65541}{0.34833}{0.65148} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0.28448}{0.71417}{0.30206}{0.70111} \meshline{black}{0.66621}{0.33465}{0.6505}{0.34675} \meshline{black}{0.66621}{0.33465}{0.6928}{0.3058} \meshline{black}{0.30206}{0.70111}{0.3162}{0.68276} \meshline{black}{0.70393}{0.29668}{0.7258}{0.27412} \meshline{black}{0.73841}{0.25721}{0.7258}{0.27412} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.25881}{0.74306}{0.25367}{0.74585} \meshline{black}{0.25881}{0.74306}{0.28448}{0.71417} \meshline{black}{0.78565}{0.2132}{0.78322}{0.2171} \meshline{black}{0.69583}{0.30392}{0.6928}{0.3058} \meshline{black}{0.69583}{0.30392}{0.70393}{0.29668} \meshline{black}{0.73841}{0.25721}{0.75622}{0.24384} \meshline{black}{0.78565}{0.2132}{0.79208}{0.20437} \meshline{black}{0.75622}{0.24384}{0.78322}{0.2171} \meshline{black}{0.23682}{0.76171}{0.25367}{0.74585} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.81338}{0.18321}{0.79208}{0.20437} \meshline{black}{0.23682}{0.76171}{0.22233}{0.77718} \meshline{black}{0.83297}{0.17083}{0.8446}{0.15064} \meshline{black}{0.83297}{0.17083}{0.81338}{0.18321} \meshline{black}{0.18896}{0.80833}{0.21677}{0.78675} \meshline{black}{0.21677}{0.78675}{0.22233}{0.77718} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.15417}{0.84327}{0.17439}{0.83241} \meshline{black}{0.87094}{0.12568}{0.8446}{0.15064} \meshline{black}{0.18896}{0.80833}{0.17439}{0.83241} \meshline{black}{0.12937}{0.86928}{0.15417}{0.84327} \meshline{black}{0.87266}{0.12526}{0.87094}{0.12568} \meshline{black}{0.89791}{0.0996}{0.9066}{0.09071} \meshline{black}{0.9066}{0.09071}{0.90632}{0.08913} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0.89791}{0.0996}{0.87266}{0.12526} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.12937}{0.86928}{0.12824}{0.87369} \meshline{black}{0.90632}{0.08913}{0.91136}{0.08039} \meshline{black}{0.09277}{0.9068}{0.12824}{0.87369} \meshline{black}{0.91136}{0.08039}{0.94072}{0.05378} 
\meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0.09277}{0.9068}{0.09318}{0.90938} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.0863}{0.9213}{0.05719}{0.94369} \meshline{black}{0.09318}{0.90938}{0.0863}{0.9213} \meshline{black}{0.9647}{0.03299}{0.94072}{0.05378} \meshline{black}{0.95}{ 0}{0.9647}{0.03299} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.03306}{0.96707}{0.05719}{0.94369} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.05}{1}{0.03306}{0.96707} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} 
\meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.475}{0.52485}{0.47603}{0.52387} \meshline{black}{0.47198}{0.52776}{0.475}{0.52485} \meshline{black}{0.37967}{0.61966}{0.38914}{0.61231} \meshline{black}{0.51966}{0.47973}{0.5067}{0.49321} \meshline{black}{0.41125}{0.58794}{0.43257}{0.56853} \meshline{black}{0.5067}{0.49321}{0.47603}{0.52387} \meshline{black}{0.41125}{0.58794}{0.38914}{0.61231} \meshline{black}{0.44311}{0.55638}{0.47198}{0.52776} \meshline{black}{0.35429}{0.64499}{0.37967}{0.61966} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0.60753}{0.39089}{0.60241}{0.39761} \meshline{black}{0.60241}{0.39761}{0.58939}{0.41103} \meshline{black}{0.43257}{0.56853}{0.44311}{0.55638} \meshline{black}{0.53854}{0.46154}{0.56359}{0.43539} \meshline{black}{0.51966}{0.47973}{0.53854}{0.46154} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.34833}{0.65148}{0.35429}{0.64499} \meshline{black}{0.63484}{0.3659}{0.60753}{0.39089} \meshline{black}{0.6505}{0.34675}{0.63484}{0.3659} \meshline{black}{0.58939}{0.41103}{0.57036}{0.42961} \meshline{black}{0.56359}{0.43539}{0.57036}{0.42961} \meshline{black}{0.3162}{0.68276}{0.34562}{0.65541} \meshline{black}{0.34562}{0.65541}{0.34833}{0.65148} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0.28448}{0.71417}{0.30206}{0.70111} \meshline{black}{0.66621}{0.33465}{0.6505}{0.34675} \meshline{black}{0.66621}{0.33465}{0.6928}{0.3058} \meshline{black}{0.30206}{0.70111}{0.3162}{0.68276} \meshline{black}{0.70393}{0.29668}{0.7258}{0.27412} \meshline{black}{0.73841}{0.25721}{0.7258}{0.27412} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.25881}{0.74306}{0.25367}{0.74585} \meshline{black}{0.25881}{0.74306}{0.28448}{0.71417} 
\meshline{black}{0.78565}{0.2132}{0.78322}{0.2171} \meshline{black}{0.69583}{0.30392}{0.6928}{0.3058} \meshline{black}{0.69583}{0.30392}{0.70393}{0.29668} \meshline{black}{0.73841}{0.25721}{0.75622}{0.24384} \meshline{black}{0.78565}{0.2132}{0.79208}{0.20437} \meshline{black}{0.75622}{0.24384}{0.78322}{0.2171} \meshline{black}{0.23682}{0.76171}{0.25367}{0.74585} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.81338}{0.18321}{0.79208}{0.20437} \meshline{black}{0.23682}{0.76171}{0.22233}{0.77718} \meshline{black}{0.83297}{0.17083}{0.8446}{0.15064} \meshline{black}{0.83297}{0.17083}{0.81338}{0.18321} \meshline{black}{0.18896}{0.80833}{0.21677}{0.78675} \meshline{black}{0.21677}{0.78675}{0.22233}{0.77718} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.15417}{0.84327}{0.17439}{0.83241} \meshline{black}{0.87094}{0.12568}{0.8446}{0.15064} \meshline{black}{0.18896}{0.80833}{0.17439}{0.83241} \meshline{black}{0.12937}{0.86928}{0.15417}{0.84327} \meshline{black}{0.87266}{0.12526}{0.87094}{0.12568} \meshline{black}{0.89791}{0.0996}{0.9066}{0.09071} \meshline{black}{0.9066}{0.09071}{0.90632}{0.08913} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0.89791}{0.0996}{0.87266}{0.12526} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.12937}{0.86928}{0.12824}{0.87369} \meshline{black}{0.90632}{0.08913}{0.91136}{0.08039} \meshline{black}{0.09277}{0.9068}{0.12824}{0.87369} \meshline{black}{0.91136}{0.08039}{0.94072}{0.05378} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0.09277}{0.9068}{0.09318}{0.90938} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.0863}{0.9213}{0.05719}{0.94369} \meshline{black}{0.09318}{0.90938}{0.0863}{0.9213} \meshline{black}{0.9647}{0.03299}{0.94072}{0.05378} \meshline{black}{0.95}{ 0}{0.9647}{0.03299} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.03306}{0.96707}{0.05719}{0.94369} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.05}{1}{0.03306}{0.96707} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} 
\meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.475}{0.52485}{0.47603}{0.52387} \meshline{black}{0.47198}{0.52776}{0.475}{0.52485} \meshline{black}{0.37967}{0.61966}{0.38914}{0.61231} 
\meshline{black}{0.51966}{0.47973}{0.5067}{0.49321} \meshline{black}{0.41125}{0.58794}{0.43257}{0.56853} \meshline{black}{0.5067}{0.49321}{0.47603}{0.52387} \meshline{black}{0.41125}{0.58794}{0.38914}{0.61231} \meshline{black}{0.44311}{0.55638}{0.47198}{0.52776} \meshline{black}{0.35429}{0.64499}{0.37967}{0.61966} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0.60753}{0.39089}{0.60241}{0.39761} \meshline{black}{0.60241}{0.39761}{0.58939}{0.41103} \meshline{black}{0.43257}{0.56853}{0.44311}{0.55638} \meshline{black}{0.53854}{0.46154}{0.56359}{0.43539} \meshline{black}{0.51966}{0.47973}{0.53854}{0.46154} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.34833}{0.65148}{0.35429}{0.64499} \meshline{black}{0.63484}{0.3659}{0.60753}{0.39089} \meshline{black}{0.6505}{0.34675}{0.63484}{0.3659} \meshline{black}{0.58939}{0.41103}{0.57036}{0.42961} \meshline{black}{0.56359}{0.43539}{0.57036}{0.42961} \meshline{black}{0.3162}{0.68276}{0.34562}{0.65541} \meshline{black}{0.34562}{0.65541}{0.34833}{0.65148} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0.28448}{0.71417}{0.30206}{0.70111} \meshline{black}{0.66621}{0.33465}{0.6505}{0.34675} \meshline{black}{0.66621}{0.33465}{0.6928}{0.3058} \meshline{black}{0.30206}{0.70111}{0.3162}{0.68276} \meshline{black}{0.70393}{0.29668}{0.7258}{0.27412} \meshline{black}{0.73841}{0.25721}{0.7258}{0.27412} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.25881}{0.74306}{0.25367}{0.74585} \meshline{black}{0.25881}{0.74306}{0.28448}{0.71417} \meshline{black}{0.78565}{0.2132}{0.78322}{0.2171} \meshline{black}{0.69583}{0.30392}{0.6928}{0.3058} \meshline{black}{0.69583}{0.30392}{0.70393}{0.29668} \meshline{black}{0.73841}{0.25721}{0.75622}{0.24384} \meshline{black}{0.78565}{0.2132}{0.79208}{0.20437} \meshline{black}{0.75622}{0.24384}{0.78322}{0.2171} \meshline{black}{0.23682}{0.76171}{0.25367}{0.74585} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.81338}{0.18321}{0.79208}{0.20437} \meshline{black}{0.23682}{0.76171}{0.22233}{0.77718} \meshline{black}{0.83297}{0.17083}{0.8446}{0.15064} \meshline{black}{0.83297}{0.17083}{0.81338}{0.18321} \meshline{black}{0.18896}{0.80833}{0.21677}{0.78675} \meshline{black}{0.21677}{0.78675}{0.22233}{0.77718} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.15417}{0.84327}{0.17439}{0.83241} \meshline{black}{0.87094}{0.12568}{0.8446}{0.15064} \meshline{black}{0.18896}{0.80833}{0.17439}{0.83241} \meshline{black}{0.12937}{0.86928}{0.15417}{0.84327} \meshline{black}{0.87266}{0.12526}{0.87094}{0.12568} \meshline{black}{0.89791}{0.0996}{0.9066}{0.09071} \meshline{black}{0.9066}{0.09071}{0.90632}{0.08913} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0.89791}{0.0996}{0.87266}{0.12526} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.12937}{0.86928}{0.12824}{0.87369} \meshline{black}{0.90632}{0.08913}{0.91136}{0.08039} \meshline{black}{0.09277}{0.9068}{0.12824}{0.87369} \meshline{black}{0.91136}{0.08039}{0.94072}{0.05378} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0.09277}{0.9068}{0.09318}{0.90938} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.0863}{0.9213}{0.05719}{0.94369} \meshline{black}{0.09318}{0.90938}{0.0863}{0.9213} \meshline{black}{0.9647}{0.03299}{0.94072}{0.05378} \meshline{black}{0.95}{ 0}{0.9647}{0.03299} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.03306}{0.96707}{0.05719}{0.94369} \meshline{black}{1}{0.25}{1}{0.3} 
\meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.05}{1}{0.03306}{0.96707} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} 
\meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.475}{0.52485}{0.47603}{0.52387} \meshline{black}{0.47198}{0.52776}{0.475}{0.52485} \meshline{black}{0.37967}{0.61966}{0.38914}{0.61231} \meshline{black}{0.51966}{0.47973}{0.5067}{0.49321} \meshline{black}{0.41125}{0.58794}{0.43257}{0.56853} \meshline{black}{0.5067}{0.49321}{0.47603}{0.52387} \meshline{black}{0.41125}{0.58794}{0.38914}{0.61231} \meshline{black}{0.44311}{0.55638}{0.47198}{0.52776} \meshline{black}{0.35429}{0.64499}{0.37967}{0.61966} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0.60753}{0.39089}{0.60241}{0.39761} \meshline{black}{0.60241}{0.39761}{0.58939}{0.41103} \meshline{black}{0.43257}{0.56853}{0.44311}{0.55638} \meshline{black}{0.53854}{0.46154}{0.56359}{0.43539} \meshline{black}{0.51966}{0.47973}{0.53854}{0.46154} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.34833}{0.65148}{0.35429}{0.64499} \meshline{black}{0.63484}{0.3659}{0.60753}{0.39089} \meshline{black}{0.6505}{0.34675}{0.63484}{0.3659} \meshline{black}{0.58939}{0.41103}{0.57036}{0.42961} \meshline{black}{0.56359}{0.43539}{0.57036}{0.42961} \meshline{black}{0.3162}{0.68276}{0.34562}{0.65541} \meshline{black}{0.34562}{0.65541}{0.34833}{0.65148} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0.28448}{0.71417}{0.30206}{0.70111} \meshline{black}{0.66621}{0.33465}{0.6505}{0.34675} \meshline{black}{0.66621}{0.33465}{0.6928}{0.3058} \meshline{black}{0.30206}{0.70111}{0.3162}{0.68276} \meshline{black}{0.70393}{0.29668}{0.7258}{0.27412} \meshline{black}{0.73841}{0.25721}{0.7258}{0.27412} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.25881}{0.74306}{0.25367}{0.74585} \meshline{black}{0.25881}{0.74306}{0.28448}{0.71417} \meshline{black}{0.78565}{0.2132}{0.78322}{0.2171} \meshline{black}{0.69583}{0.30392}{0.6928}{0.3058} \meshline{black}{0.69583}{0.30392}{0.70393}{0.29668} \meshline{black}{0.73841}{0.25721}{0.75622}{0.24384} \meshline{black}{0.78565}{0.2132}{0.79208}{0.20437} \meshline{black}{0.75622}{0.24384}{0.78322}{0.2171} \meshline{black}{0.23682}{0.76171}{0.25367}{0.74585} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.81338}{0.18321}{0.79208}{0.20437} 
\meshline{black}{0.23682}{0.76171}{0.22233}{0.77718} \meshline{black}{0.83297}{0.17083}{0.8446}{0.15064} \meshline{black}{0.83297}{0.17083}{0.81338}{0.18321} \meshline{black}{0.18896}{0.80833}{0.21677}{0.78675} \meshline{black}{0.21677}{0.78675}{0.22233}{0.77718} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.15417}{0.84327}{0.17439}{0.83241} \meshline{black}{0.87094}{0.12568}{0.8446}{0.15064} \meshline{black}{0.18896}{0.80833}{0.17439}{0.83241} \meshline{black}{0.12937}{0.86928}{0.15417}{0.84327} \meshline{black}{0.87266}{0.12526}{0.87094}{0.12568} \meshline{black}{0.89791}{0.0996}{0.9066}{0.09071} \meshline{black}{0.9066}{0.09071}{0.90632}{0.08913} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0.89791}{0.0996}{0.87266}{0.12526} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.12937}{0.86928}{0.12824}{0.87369} \meshline{black}{0.90632}{0.08913}{0.91136}{0.08039} \meshline{black}{0.09277}{0.9068}{0.12824}{0.87369} \meshline{black}{0.91136}{0.08039}{0.94072}{0.05378} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0.09277}{0.9068}{0.09318}{0.90938} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.0863}{0.9213}{0.05719}{0.94369} \meshline{black}{0.09318}{0.90938}{0.0863}{0.9213} \meshline{black}{0.9647}{0.03299}{0.94072}{0.05378} \meshline{black}{0.95}{ 0}{0.9647}{0.03299} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.03306}{0.96707}{0.05719}{0.94369} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.05}{1}{0.03306}{0.96707} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} 
\meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.475}{0.52485}{0.47603}{0.52387} \meshline{black}{0.47198}{0.52776}{0.475}{0.52485} \meshline{black}{0.37967}{0.61966}{0.38914}{0.61231} \meshline{black}{0.51966}{0.47973}{0.5067}{0.49321} \meshline{black}{0.41125}{0.58794}{0.43257}{0.56853} \meshline{black}{0.5067}{0.49321}{0.47603}{0.52387} \meshline{black}{0.41125}{0.58794}{0.38914}{0.61231} \meshline{black}{0.44311}{0.55638}{0.47198}{0.52776} \meshline{black}{0.35429}{0.64499}{0.37967}{0.61966} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0.60753}{0.39089}{0.60241}{0.39761} \meshline{black}{0.60241}{0.39761}{0.58939}{0.41103} 
\meshline{black}{0.43257}{0.56853}{0.44311}{0.55638} \meshline{black}{0.53854}{0.46154}{0.56359}{0.43539} \meshline{black}{0.51966}{0.47973}{0.53854}{0.46154} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.34833}{0.65148}{0.35429}{0.64499} \meshline{black}{0.63484}{0.3659}{0.60753}{0.39089} \meshline{black}{0.6505}{0.34675}{0.63484}{0.3659} \meshline{black}{0.58939}{0.41103}{0.57036}{0.42961} \meshline{black}{0.56359}{0.43539}{0.57036}{0.42961} \meshline{black}{0.3162}{0.68276}{0.34562}{0.65541} \meshline{black}{0.34562}{0.65541}{0.34833}{0.65148} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0.28448}{0.71417}{0.30206}{0.70111} \meshline{black}{0.66621}{0.33465}{0.6505}{0.34675} \meshline{black}{0.66621}{0.33465}{0.6928}{0.3058} \meshline{black}{0.30206}{0.70111}{0.3162}{0.68276} \meshline{black}{0.70393}{0.29668}{0.7258}{0.27412} \meshline{black}{0.73841}{0.25721}{0.7258}{0.27412} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.25881}{0.74306}{0.25367}{0.74585} \meshline{black}{0.25881}{0.74306}{0.28448}{0.71417} \meshline{black}{0.78565}{0.2132}{0.78322}{0.2171} \meshline{black}{0.69583}{0.30392}{0.6928}{0.3058} \meshline{black}{0.69583}{0.30392}{0.70393}{0.29668} \meshline{black}{0.73841}{0.25721}{0.75622}{0.24384} \meshline{black}{0.78565}{0.2132}{0.79208}{0.20437} \meshline{black}{0.75622}{0.24384}{0.78322}{0.2171} \meshline{black}{0.23682}{0.76171}{0.25367}{0.74585} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.81338}{0.18321}{0.79208}{0.20437} \meshline{black}{0.23682}{0.76171}{0.22233}{0.77718} \meshline{black}{0.83297}{0.17083}{0.8446}{0.15064} \meshline{black}{0.83297}{0.17083}{0.81338}{0.18321} \meshline{black}{0.18896}{0.80833}{0.21677}{0.78675} \meshline{black}{0.21677}{0.78675}{0.22233}{0.77718} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.15417}{0.84327}{0.17439}{0.83241} \meshline{black}{0.87094}{0.12568}{0.8446}{0.15064} \meshline{black}{0.18896}{0.80833}{0.17439}{0.83241} \meshline{black}{0.12937}{0.86928}{0.15417}{0.84327} \meshline{black}{0.87266}{0.12526}{0.87094}{0.12568} \meshline{black}{0.89791}{0.0996}{0.9066}{0.09071} \meshline{black}{0.9066}{0.09071}{0.90632}{0.08913} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0.89791}{0.0996}{0.87266}{0.12526} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.12937}{0.86928}{0.12824}{0.87369} \meshline{black}{0.90632}{0.08913}{0.91136}{0.08039} \meshline{black}{0.09277}{0.9068}{0.12824}{0.87369} \meshline{black}{0.91136}{0.08039}{0.94072}{0.05378} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0.09277}{0.9068}{0.09318}{0.90938} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.0863}{0.9213}{0.05719}{0.94369} \meshline{black}{0.09318}{0.90938}{0.0863}{0.9213} \meshline{black}{0.9647}{0.03299}{0.94072}{0.05378} \meshline{black}{0.95}{ 0}{0.9647}{0.03299} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.03306}{0.96707}{0.05719}{0.94369} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.05}{1}{0.03306}{0.96707} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} 
\meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{1}{0.15}{1}{0.2} \meshline{black}{1}{0.1}{1}{0.15} \meshline{black}{1}{0.2}{1}{0.25} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{1}{0.25}{1}{0.3} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{1}{0.05}{1}{0.1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1}{0}{1}{0.05} \meshline{black}{1}{0.3}{1}{0.35} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1}{0.35}{1}{0.4} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1}{0.4}{1}{0.45} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1}{0.45}{1}{0.5} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1}{0.5}{1}{0.55} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1}{0.55}{1}{0.6} \meshline{black}{1}{0.6}{1}{0.65} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1}{0.65}{1}{0.7} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1}{0.7}{1}{0.75} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1}{0.75}{1}{0.8} \meshline{black}{1}{0.8}{1}{0.85} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1}{0.85}{1}{0.9} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{0.9}{1}{0.95} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{0.95}{1}{1} 
\meshline{black}{0.46763}{0.57062}{0.47596}{0.56236} \meshline{black}{0.45114}{0.58963}{0.46763}{0.57062} \meshline{black}{0.56228}{0.47532}{0.56364}{0.4739} \meshline{black}{0.55844}{0.47902}{0.56228}{0.47532} \meshline{black}{0.49758}{0.53801}{0.51958}{0.51604} \meshline{black}{0.47596}{0.56236}{0.49758}{0.53801} \meshline{black}{0.41216}{0.6386}{0.43247}{0.61451} \meshline{black}{0.3887}{0.6751}{0.38883}{0.67497} \meshline{black}{0.60815}{0.43461}{0.59739}{0.44511} \meshline{black}{0.43907}{0.6041}{0.45114}{0.58963} \meshline{black}{0.56364}{0.4739}{0.59739}{0.44511} \meshline{black}{0.41216}{0.6386}{0.38883}{0.67497} \meshline{black}{0.55844}{0.47902}{0.52919}{0.50633} \meshline{black}{0.52919}{0.50633}{0.51958}{0.51604} \meshline{black}{0.43247}{0.61451}{0.43907}{0.6041} \meshline{black}{0.38855}{0.67533}{0.3887}{0.6751} \meshline{black}{0.65258}{0.40091}{0.63548}{0.4158} \meshline{black}{0.63548}{0.4158}{0.60815}{0.43461} \meshline{black}{0.36766}{0.71305}{0.38855}{0.67533} \meshline{black}{0.67749}{0.38825}{0.65258}{0.40091} \meshline{black}{0.69498}{0.37511}{0.67749}{0.38825} \meshline{black}{0.36766}{0.71305}{0.35506}{0.74371} \meshline{black}{0.72295}{0.36445}{0.69498}{0.37511} \meshline{black}{0.72295}{0.36445}{0.7344}{0.35742} \meshline{black}{0.35293}{0.75451}{0.35506}{0.74371} \meshline{black}{0.7344}{0.35742}{0.76473}{0.34964} \meshline{black}{0.34649}{0.79676}{0.35293}{0.75451} \meshline{black}{0.76473}{0.34964}{0.76729}{0.349} \meshline{black}{0.78994}{0.34801}{0.81243}{0.34668} \meshline{black}{0.81396}{0.34636}{0.81243}{0.34668} \meshline{black}{0.34704}{0.80024}{0.34649}{0.79676} \meshline{black}{0.78994}{0.34801}{0.76729}{0.349} \meshline{black}{0.35055}{0.83646}{0.34704}{0.80024} \meshline{black}{0.81396}{0.34636}{0.81596}{0.34687} \meshline{black}{0.35055}{0.83646}{0.36184}{0.85888} \meshline{black}{0.81596}{0.34687}{0.86551}{0.36116} \meshline{black}{0.36184}{0.85888}{0.3664}{0.87055} \meshline{black}{0.86551}{0.36116}{0.87634}{0.36849} \meshline{black}{0.40132}{0.9083}{0.38386}{0.88861} \meshline{black}{0.36928}{0.8744}{0.3664}{0.87055} \meshline{black}{0.40584}{0.91166}{0.40132}{0.9083} \meshline{black}{0.38386}{0.88861}{0.36928}{0.8744} \meshline{black}{0.87634}{0.36849}{0.89933}{0.39293} \meshline{black}{0.90913}{0.40128}{0.89933}{0.39293} \meshline{black}{0.43858}{0.9324}{0.40584}{0.91166} \meshline{black}{0.91185}{0.40508}{0.90913}{0.40128} \meshline{black}{0.48011}{0.9473}{0.46611}{0.94014} \meshline{black}{0.46611}{0.94014}{0.43858}{0.9324} \meshline{black}{0.51868}{0.95613}{0.48011}{0.9473} \meshline{black}{0.91185}{0.40508}{0.93248}{0.43845} \meshline{black}{0.52448}{0.95743}{0.51868}{0.95613} \meshline{black}{0.93248}{0.43845}{0.94006}{0.46538} \meshline{black}{0.94006}{0.46538}{0.94752}{0.47985} \meshline{black}{0.5704}{0.96518}{0.52596}{0.95743} \meshline{black}{0.52596}{0.95743}{0.52448}{0.95743} \meshline{black}{0.94752}{0.47985}{0.9566}{0.5201} \meshline{black}{0.58027}{0.96518}{0.6181}{0.96946} \meshline{black}{0.58027}{0.96518}{0.5704}{0.96518} \meshline{black}{0.9566}{0.5201}{0.95747}{0.52402} \meshline{black}{0.63232}{0.96946}{0.6181}{0.96946} \meshline{black}{0.63232}{0.96946}{0.66694}{0.97119} \meshline{black}{0.96512}{0.5694}{0.95747}{0.52501} \meshline{black}{0.95747}{0.52402}{0.95747}{0.52501} \meshline{black}{0.96512}{0.57925}{0.96948}{0.61668} \meshline{black}{0.68292}{0.97119}{0.66694}{0.97119} \meshline{black}{0.96512}{0.5694}{0.96512}{0.57925} \meshline{black}{0.68292}{0.97119}{0.71708}{0.97057} 
\end{tikzpicture}
\null
\end{center}
\end{minipage}
\begin{minipage}[h]{0.4\linewidth}
\begin{itemize}
\item For g.s.: $\max(u)=2.18$, $\mathcal{E}_{4}(u)=2.54$
\item For l.e.n.s.: $\min(u)=-4.61$, $\max(u)=4.61$, $\mathcal{E}_{4}(u)=33.21$
\item Starting function for g.s.:\\ $(x-1)(y-1)(x+1)(y+1)$
\item Starting function for l.e.n.s.:\\ $\sin(\pi(x+1))\sin(2\pi(y+1))$
\end{itemize}
\end{minipage}
\null

\subsection{Piecewise constant potential on a rectangle}

As a second example, $V$ is piecewise constant on $(0,2)\times(0,1)$. In~\cite{gossez}, it is proved that there exists exactly one principal eigenvalue, so our assumptions are satisfied and Theorem~\ref{intro} applies. For $\lambda=1$, $p=4$, and $V(x,y)=V_-:=0$ when $x<1$ (resp.\ $V_+:=10$ otherwise), the following graphs indicate that the approximations are even with respect to only one direction. Ground state solutions are essentially ``located'' in the region $x<1$ (the side minimizing the energy) and respect the symmetries of $V$. Least energy nodal solutions seem to be formed by a g.s.\ on each nodal domain and are even with respect to one direction.
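As a purely illustrative sketch (not part of the numerics reported here), the piecewise constant potential of this example can be encoded and sampled on a grid of the rectangle as follows. The function name, the grid resolution, and the use of NumPy are our own choices; the approximations shown in the graphs are computed with the algorithms described earlier, not with this snippet.
\begin{verbatim}
import numpy as np

# Piecewise constant potential on (0,2) x (0,1):
# V = V_- = 0 for x < 1 and V = V_+ = 10 otherwise (independent of y).
def V(x, y, V_minus=0.0, V_plus=10.0):
    return np.where(x < 1.0, V_minus, V_plus)

# Uniform grid of the rectangle; the resolution is arbitrary.
x, y = np.meshgrid(np.linspace(0.0, 2.0, 81), np.linspace(0.0, 1.0, 41))
print(V(x, y).min(), V(x, y).max())  # 0.0 10.0
print((V(x, y) == 0.0).mean())       # fraction of grid points lying in x < 1
\end{verbatim}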
\\
\begin{minipage}[h]{0.6\linewidth}
\includegraphics[width=2.8cm, angle=270]{rectMPA010.pdf}
\includegraphics[width=2.8cm, angle=270]{rectMMPA010.pdf}
\null
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) -- (0.5,0);
\draw[->] (0,0) -- (0,0.5);
\node[anchor=north] at (0.5,0) {$x$};
\node[anchor=east] at (0,0.5) {$y$};
\end{tikzpicture}
\begin{tikzpicture}
\pgfsetxvec{\pgfpoint{1.5cm}{0cm}}
\pgfsetyvec{\pgfpoint{0cm}{1.5cm}}
\meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{1}{0}{1.05}{0} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1.05}{0}{1.1}{0} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1.1}{0}{1.15}{0} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1.15}{0}{1.2}{0} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1.2}{0}{1.25}{0} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1.25}{0}{1.3}{0} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1.3}{0}{1.35}{0} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1.35}{0}{1.4}{0} \meshline{black}{1.05}{1}{1}{1} \meshline{black}{1.1}{1}{1.05}{1} \meshline{black}{1.4}{0}{1.45}{0} \meshline{black}{1.15}{1}{1.1}{1} \meshline{black}{1.45}{0}{1.5}{0} \meshline{black}{1.2}{1}{1.15}{1} \meshline{black}{1.25}{1}{1.2}{1} \meshline{black}{1.5}{0}{1.55}{0} \meshline{black}{1.3}{1}{1.25}{1} \meshline{black}{1.35}{1}{1.3}{1} \meshline{black}{1.55}{0}{1.6}{0} \meshline{black}{1.4}{1}{1.35}{1} \meshline{black}{1.6}{0}{1.65}{0} \meshline{black}{1.45}{1}{1.4}{1} \meshline{black}{1.5}{1}{1.45}{1} \meshline{black}{1.65}{0}{1.7}{0} \meshline{black}{1.55}{1}{1.5}{1} \meshline{black}{1.7}{0}{1.75}{0} \meshline{black}{1.6}{1}{1.55}{1} \meshline{black}{1.65}{1}{1.6}{1} \meshline{black}{1.75}{0}{1.8}{0} \meshline{black}{1.7}{1}{1.65}{1} \meshline{black}{1.8}{0}{1.85}{0} \meshline{black}{1.75}{1}{1.7}{1} \meshline{black}{1.8}{1}{1.75}{1} \meshline{black}{1.85}{0}{1.9}{0} \meshline{black}{1.85}{1}{1.8}{1} \meshline{black}{1.9}{0}{1.95}{0} \meshline{black}{1.9}{1}{1.85}{1} \meshline{black}{2}{0.15}{2}{0.2} \meshline{black}{2}{0.1}{2}{0.15} \meshline{black}{2}{0.2}{2}{0.25} \meshline{black}{2}{0.05}{2}{0.1} \meshline{black}{1.95}{0}{2}{0} \meshline{black}{2}{0.25}{2}{0.3} \meshline{black}{2}{0.5}{2}{0.55} \meshline{black}{2}{0.45}{2}{0.5} \meshline{black}{2}{0.3}{2}{0.35} \meshline{black}{2}{0.4}{2}{0.45} \meshline{black}{2}{0.55}{2}{0.6} \meshline{black}{2}{0}{2}{0.05} \meshline{black}{2}{0.35}{2}{0.4} \meshline{black}{1.95}{1}{1.9}{1} \meshline{black}{2}{0.6}{2}{0.65} \meshline{black}{2}{0.65}{2}{0.7} \meshline{black}{2}{0.7}{2}{0.75} \meshline{black}{2}{0.75}{2}{0.8} \meshline{black}{2}{0.8}{2}{0.85} \meshline{black}{2}{0.85}{2}{0.9} \meshline{black}{2}{0.9}{2}{0.95} \meshline{black}{2}{1}{1.95}{1} \meshline{black}{2}{0.95}{2}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} 
\meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{1}{0}{1.05}{0} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1.05}{0}{1.1}{0} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{1.1}{0}{1.15}{0} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1.15}{0}{1.2}{0} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1.2}{0}{1.25}{0} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1.25}{0}{1.3}{0} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1.3}{0}{1.35}{0} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1.35}{0}{1.4}{0} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1.05}{1}{1}{1} \meshline{black}{1.4}{0}{1.45}{0} \meshline{black}{1.1}{1}{1.05}{1} \meshline{black}{1.15}{1}{1.1}{1} \meshline{black}{1.45}{0}{1.5}{0} \meshline{black}{1.2}{1}{1.15}{1} \meshline{black}{1.25}{1}{1.2}{1} \meshline{black}{1.5}{0}{1.55}{0} \meshline{black}{1.3}{1}{1.25}{1} \meshline{black}{1.35}{1}{1.3}{1} \meshline{black}{1.55}{0}{1.6}{0} \meshline{black}{1.4}{1}{1.35}{1} \meshline{black}{1.6}{0}{1.65}{0} \meshline{black}{1.45}{1}{1.4}{1} \meshline{black}{1.5}{1}{1.45}{1} \meshline{black}{1.65}{0}{1.7}{0} \meshline{black}{1.55}{1}{1.5}{1} \meshline{black}{1.7}{0}{1.75}{0} \meshline{black}{1.6}{1}{1.55}{1} \meshline{black}{1.65}{1}{1.6}{1} \meshline{black}{1.75}{0}{1.8}{0} \meshline{black}{1.7}{1}{1.65}{1} \meshline{black}{1.8}{0}{1.85}{0} \meshline{black}{1.75}{1}{1.7}{1} \meshline{black}{1.85}{0}{1.9}{0} \meshline{black}{1.8}{1}{1.75}{1} \meshline{black}{1.85}{1}{1.8}{1} \meshline{black}{1.9}{0}{1.95}{0} \meshline{black}{1.9}{1}{1.85}{1} \meshline{black}{2}{0.15}{2}{0.2} \meshline{black}{2}{0.1}{2}{0.15} \meshline{black}{1.95}{0}{2}{0} \meshline{black}{2}{0.2}{2}{0.25} \meshline{black}{2}{0.05}{2}{0.1} \meshline{black}{2}{0.25}{2}{0.3} \meshline{black}{2}{0.5}{2}{0.55} \meshline{black}{1.95}{1}{1.9}{1} \meshline{black}{2}{0.45}{2}{0.5} \meshline{black}{2}{0}{2}{0.05} \meshline{black}{2}{0.3}{2}{0.35} \meshline{black}{2}{0.4}{2}{0.45} \meshline{black}{2}{0.55}{2}{0.6} \meshline{black}{2}{0.35}{2}{0.4} 
\meshline{black}{2}{0.6}{2}{0.65} \meshline{black}{2}{0.65}{2}{0.7} \meshline{black}{2}{0.7}{2}{0.75} \meshline{black}{2}{0.75}{2}{0.8} \meshline{black}{2}{0.8}{2}{0.85} \meshline{black}{2}{1}{1.95}{1} \meshline{black}{2}{0.85}{2}{0.9} \meshline{black}{2}{0.9}{2}{0.95} \meshline{black}{2}{0.95}{2}{1} \end{tikzpicture} \begin{tikzpicture} \pgfsetxvec{\pgfpoint{1.5cm}{0cm}} \pgfsetyvec{\pgfpoint{0cm}{1.5cm}} \meshline{black}{1.09486}{0.4805}{1.09531}{0.47138} \meshline{black}{1.09499}{0.49014}{1.09486}{0.4805} \meshline{black}{1.10908}{0.35077}{1.11378}{0.32958}\meshline{black}{1.10908}{0.35077}{1.10259}{0.3908} \meshline{black}{1.09668}{0.43723}{1.10169}{0.39563} \meshline{black}{1.09531}{0.47138}{1.09668}{0.43723} \meshline{black}{1.09758}{0.56706}{1.0974}{0.56175} \meshline{black}{1.13162}{0.27435}{1.12013}{0.3077} \meshline{black}{1.11378}{0.32958}{1.12013}{0.3077} \meshline{black}{1.09499}{0.49014}{1.09462}{0.52378} \meshline{black}{1.09798}{0.57135}{1.09758}{0.56706} \meshline{black}{1.10259}{0.3908}{1.1019}{0.39398} \meshline{black}{1.09462}{0.52378}{1.0974}{0.56175} \meshline{black}{1.10169}{0.39563}{1.1019}{0.39398} \meshline{black}{1.13597}{0.26499}{1.15381}{0.22915} \meshline{black}{1.13597}{0.26499}{1.13162}{0.27435} \meshline{black}{1.09798}{0.57135}{1.10216}{0.61035} \meshline{black}{1.10758}{0.64115}{1.10216}{0.61035}\meshline{black}{1.15809}{0.22303}{1.15381}{0.22915} \meshline{black}{1.17871}{0.19307}{1.15809}{0.22303} \meshline{black}{1.19088}{0.18122}{1.2067}{0.16455} \meshline{black}{1.19088}{0.18122}{1.17871}{0.19307} \meshline{black}{1.11029}{0.65364}{1.10758}{0.64115} \meshline{black}{1.2067}{0.16455}{1.23071}{0.14864} \meshline{black}{1.12216}{0.69694}{1.12004}{0.68782} \meshline{black}{1.11029}{0.65364}{1.12004}{0.68782} \meshline{black}{1.12317}{0.70062}{1.12216}{0.69694}\meshline{black}{1.24046}{0.14076}{1.23071}{0.14864} \meshline{black}{1.25374}{0.13412}{1.27752}{0.12097}\meshline{black}{1.24046}{0.14076}{1.25374}{0.13412} \meshline{black}{1.12317}{0.70062}{1.13857}{0.74025} \meshline{black}{1.27752}{0.12097}{1.29265}{0.11636} \meshline{black}{1.14347}{0.75195}{1.13857}{0.74025} \meshline{black}{1.29265}{0.11636}{1.31751}{0.1059} \meshline{black}{1.14347}{0.75195}{1.16265}{0.78356} \meshline{black}{1.1691}{0.79405}{1.16265}{0.78356} \meshline{black}{1.31751}{0.1059}{1.34886}{0.09874} \meshline{black}{1.1691}{0.79405}{1.19949}{0.82687} \meshline{black}{1.34886}{0.09874}{1.35989}{0.09531} \meshline{black}{1.19949}{0.82687}{1.19987}{0.8273} \meshline{black}{1.35989}{0.09531}{1.40167}{0.08853} \meshline{black}{1.20068}{0.82786}{1.19987}{0.8273}\meshline{black}{1.40167}{0.08853}{1.40398}{0.08805} \meshline{black}{1.23363}{0.85538}{1.20068}{0.82786} \meshline{black}{1.44942}{0.08323}{1.42333}{0.08611} \meshline{black}{1.40398}{0.08805}{1.42333}{0.08611} \meshline{black}{1.23363}{0.85538}{1.26197}{0.87018} \meshline{black}{1.27188}{0.87568}{1.26197}{0.87018}\meshline{black}{1.44942}{0.08323}{1.45267}{0.08311}\meshline{black}{1.27961}{0.87808}{1.27188}{0.87568}\meshline{black}{1.45267}{0.08311}{1.49662}{0.08113} \meshline{black}{1.27961}{0.87808}{1.31233}{0.89218} \meshline{black}{1.49662}{0.08113}{1.50251}{0.0811} \meshline{black}{1.34167}{0.89903}{1.31233}{0.89218}\meshline{black}{1.34167}{0.89903}{1.35573}{0.90357} \meshline{black}{1.50251}{0.0811}{1.54592}{0.08202} \meshline{black}{1.39833}{0.91065}{1.35573}{0.90357} \meshline{black}{1.54592}{0.08202}{1.55115}{0.08215}\meshline{black}{1.40124}{0.91129}{1.39833}{0.91065} 
\meshline{black}{1.59748}{0.08625}{1.55115}{0.08215} \meshline{black}{1.42269}{0.9135}{1.40124}{0.91129} \meshline{black}{1.44825}{0.91639}{1.42269}{0.9135} \meshline{black}{1.59748}{0.08625}{1.59833}{0.0863}\meshline{black}{1.45166}{0.91652}{1.44825}{0.91639} \meshline{black}{1.60316}{0.08706}{1.64437}{0.09293} \meshline{black}{1.59833}{0.0863}{1.60316}{0.08706} \meshline{black}{1.49691}{0.91859}{1.45166}{0.91652} \meshline{black}{1.65187}{0.09507}{1.64437}{0.09293} \meshline{black}{1.50278}{0.91862}{1.49691}{0.91859} \meshline{black}{1.54738}{0.91763}{1.50278}{0.91862} \meshline{black}{1.65187}{0.09507}{1.68904}{0.10281}\meshline{black}{1.59944}{0.91328}{1.59546}{0.9136} \meshline{black}{1.55202}{0.9175}{1.54738}{0.91763} \meshline{black}{1.59986}{0.9132}{1.59944}{0.91328} \meshline{black}{1.71157}{0.11173}{1.68904}{0.10281} \meshline{black}{1.55202}{0.9175}{1.59546}{0.9136} \meshline{black}{1.64551}{0.90666}{1.59986}{0.9132} \meshline{black}{1.71157}{0.11173}{1.73193}{0.11747}\meshline{black}{1.65524}{0.90387}{1.64551}{0.90666} \meshline{black}{1.77209}{0.13784}{1.75964}{0.13182} \meshline{black}{1.73193}{0.11747}{1.75964}{0.13182}\meshline{black}{1.68974}{0.89669}{1.65524}{0.90387} \meshline{black}{1.77209}{0.13784}{1.78725}{0.14937} \meshline{black}{1.71538}{0.88667}{1.68974}{0.89669} \meshline{black}{1.78725}{0.14937}{1.80802}{0.16237}\meshline{black}{1.73151}{0.88216}{1.71538}{0.88667} \meshline{black}{1.77082}{0.8628}{1.75452}{0.87051} \meshline{black}{1.81837}{0.17274}{1.80802}{0.16237} \meshline{black}{1.75452}{0.87051}{1.73151}{0.88216}\meshline{black}{1.83987}{0.19309}{1.81837}{0.17274} \meshline{black}{1.78727}{0.85019}{1.77082}{0.8628} \meshline{black}{1.86435}{0.23122}{1.86239}{0.22868} \meshline{black}{1.86239}{0.22868}{1.83987}{0.19309} \meshline{black}{1.80745}{0.83756}{1.78727}{0.85019}\meshline{black}{1.86435}{0.23122}{1.86658}{0.23587} \meshline{black}{1.81888}{0.82602}{1.84162}{0.80412} \meshline{black}{1.81888}{0.82602}{1.80745}{0.83756} \meshline{black}{1.88476}{0.27056}{1.86658}{0.23587} \meshline{black}{1.86571}{0.76673}{1.86194}{0.77185} \meshline{black}{1.86194}{0.77185}{1.84162}{0.80412} \meshline{black}{1.89873}{0.31143}{1.89005}{0.28908} \meshline{black}{1.90876}{0.3544}{1.90601}{0.34519} \meshline{black}{1.86571}{0.76673}{1.87038}{0.75677} \meshline{black}{1.88476}{0.27056}{1.89005}{0.28908}\meshline{black}{1.89873}{0.31143}{1.90601}{0.34519}\meshline{black}{1.90876}{0.3544}{1.91535}{0.39793} \meshline{black}{1.90567}{0.65591}{1.90842}{0.64658} \meshline{black}{1.88872}{0.71358}{1.89842}{0.68946} \meshline{black}{1.88414}{0.72953}{1.87038}{0.75677}\meshline{black}{1.90842}{0.64658}{1.91457}{0.60645} \meshline{black}{1.90567}{0.65591}{1.89842}{0.68946}\meshline{black}{1.92192}{0.50421}{1.92048}{0.54649} \meshline{black}{1.92045}{0.45261}{1.92192}{0.49496} \meshline{black}{1.88872}{0.71358}{1.88414}{0.72953}\meshline{black}{1.9156}{0.39961}{1.92024}{0.44586} \meshline{black}{1.92024}{0.55395}{1.9155}{0.60016} \meshline{black}{1.91535}{0.39793}{1.91558}{0.39932}\meshline{black}{1.92192}{0.49496}{1.92192}{0.50421}\meshline{black}{1.92048}{0.54649}{1.92024}{0.55395} \meshline{black}{1.92024}{0.44586}{1.92045}{0.45261} \meshline{black}{1.91543}{0.60129}{1.91457}{0.60645}\meshline{black}{1.91558}{0.39932}{1.9156}{0.39961} \meshline{black}{1.9155}{0.60016}{1.91543}{0.60129} \meshline{black}{1.1695}{0.42686}{1.1676}{0.43731} \meshline{black}{1.16661}{0.45273}{1.1676}{0.43731} \meshline{black}{1.16404}{0.50403}{1.16374}{0.52385} 
\meshline{black}{1.16374}{0.52385}{1.16562}{0.54087} \meshline{black}{1.18416}{0.36597}{1.17564}{0.39404} \meshline{black}{1.17564}{0.39404}{1.1695}{0.42686} \meshline{black}{1.16661}{0.45273}{1.16337}{0.48058} \meshline{black}{1.16337}{0.48058}{1.16404}{0.50403} \meshline{black}{1.20492}{0.31652}{1.18952}{0.3508} \meshline{black}{1.16562}{0.54087}{1.16798}{0.56713} \meshline{black}{1.18952}{0.3508}{1.18416}{0.36597} \meshline{black}{1.177}{0.6076}{1.16798}{0.56713}\meshline{black}{1.21007}{0.30754}{1.22993}{0.27577} \meshline{black}{1.21007}{0.30754}{1.20492}{0.31652} \meshline{black}{1.1918}{0.65371}{1.18148}{0.62103}\meshline{black}{1.177}{0.6076}{1.17767}{0.61042} \meshline{black}{1.17767}{0.61042}{1.18148}{0.62103} \meshline{black}{1.2401}{0.26402}{1.2585}{0.2423} \meshline{black}{1.22993}{0.27577}{1.2401}{0.26402} \meshline{black}{1.1918}{0.65371}{1.1953}{0.66237} \meshline{black}{1.29126}{0.21591}{1.28736}{0.2193} \meshline{black}{1.28736}{0.2193}{1.2585}{0.2423} \meshline{black}{1.1953}{0.66237}{1.21304}{0.69702} \meshline{black}{1.29126}{0.21591}{1.29602}{0.21326} \meshline{black}{1.21911}{0.70763}{1.21304}{0.69702} \meshline{black}{1.29602}{0.21326}{1.32618}{0.19307} \meshline{black}{1.21911}{0.70763}{1.24439}{0.74033} \meshline{black}{1.35793}{0.17991}{1.32618}{0.19307} \meshline{black}{1.24773}{0.74457}{1.24439}{0.74033} \meshline{black}{1.36466}{0.17668}{1.35793}{0.17991} \meshline{black}{1.25772}{0.75341}{1.24773}{0.74457} \meshline{black}{1.38584}{0.17038}{1.4038}{0.16436}\meshline{black}{1.38584}{0.17038}{1.36466}{0.17668} \meshline{black}{1.25772}{0.75341}{1.27984}{0.77548} \meshline{black}{1.4038}{0.16436}{1.40964}{0.16327}\meshline{black}{1.27984}{0.77548}{1.2921}{0.78364} \meshline{black}{1.44345}{0.15613}{1.40964}{0.16327} \meshline{black}{1.3154}{0.80042}{1.2921}{0.78364}\meshline{black}{1.44345}{0.15613}{1.4589}{0.15458} \meshline{black}{1.34155}{0.81221}{1.3154}{0.80042} \meshline{black}{1.35449}{0.81925}{1.34155}{0.81221} \meshline{black}{1.48633}{0.15188}{1.4589}{0.15458} \meshline{black}{1.3788}{0.82696}{1.35449}{0.81925} \meshline{black}{1.48633}{0.15188}{1.50849}{0.15153} \meshline{black}{1.39632}{0.83332}{1.3788}{0.82696}\meshline{black}{1.50849}{0.15153}{1.53373}{0.15225} \meshline{black}{1.39632}{0.83332}{1.40457}{0.83493} \meshline{black}{1.44058}{0.84314}{1.40457}{0.83493} \meshline{black}{1.53373}{0.15225}{1.55634}{0.15382}\meshline{black}{1.46039}{0.84521}{1.44058}{0.84314}\meshline{black}{1.58525}{0.15858}{1.55634}{0.15382} \meshline{black}{1.48784}{0.84773}{1.46039}{0.84521} \meshline{black}{1.58525}{0.15858}{1.6015}{0.16115} \meshline{black}{1.51191}{0.8481}{1.48784}{0.84773} \meshline{black}{1.53818}{0.84693}{1.51191}{0.8481} \meshline{black}{1.64165}{0.17332}{1.6015}{0.16115} \meshline{black}{1.56012}{0.84533}{1.53818}{0.84693} \meshline{black}{1.64165}{0.17332}{1.6438}{0.17386} \meshline{black}{1.59235}{0.8395}{1.56012}{0.84533} \meshline{black}{1.6848}{0.1901}{1.64787}{0.17569} \meshline{black}{1.64787}{0.17569}{1.6438}{0.17386} \meshline{black}{1.60533}{0.83739}{1.59235}{0.8395}\meshline{black}{1.70444}{0.20286}{1.6848}{0.1901} \meshline{black}{1.63741}{0.82727}{1.6477}{0.82446} \meshline{black}{1.63741}{0.82727}{1.60533}{0.83739}\meshline{black}{1.6477}{0.82446}{1.65223}{0.82228} \meshline{black}{1.72239}{0.21263}{1.70444}{0.20286} \meshline{black}{1.75236}{0.23711}{1.73939}{0.22729} \meshline{black}{1.65223}{0.82228}{1.68807}{0.80789} \meshline{black}{1.72239}{0.21263}{1.73939}{0.22729}\meshline{black}{1.68807}{0.80789}{1.72406}{0.78432} 
\meshline{black}{1.7703}{0.25726}{1.75236}{0.23711} \meshline{black}{1.72406}{0.78432}{1.72428}{0.7842} \meshline{black}{1.7703}{0.25726}{1.77751}{0.26457} \meshline{black}{1.72443}{0.78407}{1.75839}{0.75696} \meshline{black}{1.72443}{0.78407}{1.72428}{0.7842}\meshline{black}{1.80287}{0.30033}{1.78006}{0.26785} \meshline{black}{1.77751}{0.26457}{1.78006}{0.26785} \meshline{black}{1.82691}{0.35143}{1.82299}{0.34348} \meshline{black}{1.82691}{0.35143}{1.82746}{0.35282} \meshline{black}{1.77238}{0.74075}{1.78786}{0.72267} \meshline{black}{1.75839}{0.75696}{1.77238}{0.74075} \meshline{black}{1.80379}{0.69748}{1.8148}{0.67837} \meshline{black}{1.78786}{0.72267}{1.80379}{0.69748} \meshline{black}{1.80287}{0.30033}{1.80669}{0.3074}\meshline{black}{1.80669}{0.3074}{1.82299}{0.34348}\meshline{black}{1.8337}{0.37315}{1.82746}{0.35282} \meshline{black}{1.83288}{0.63186}{1.82587}{0.65209} \meshline{black}{1.85158}{0.46083}{1.85329}{0.48353} \meshline{black}{1.82587}{0.65209}{1.8148}{0.67837}\meshline{black}{1.85183}{0.53601}{1.85335}{0.51452} \meshline{black}{1.84284}{0.59424}{1.84931}{0.56375} \meshline{black}{1.8337}{0.37315}{1.84121}{0.39671} \meshline{black}{1.84164}{0.39917}{1.84915}{0.43311} \meshline{black}{1.85329}{0.48353}{1.85335}{0.51452} \meshline{black}{1.84915}{0.43311}{1.85158}{0.46083} \meshline{black}{1.85183}{0.53601}{1.84931}{0.56375} \meshline{black}{1.83957}{0.61242}{1.83288}{0.63186} \meshline{black}{1.84284}{0.59424}{1.83957}{0.61242} \meshline{black}{1.84121}{0.39671}{1.84164}{0.39917} \meshline{black}{0.15977}{0.18933}{0.17244}{0.17652} \meshline{black}{0.22145}{0.13798}{0.19306}{0.15583} \meshline{black}{0.14348}{0.21531}{0.15977}{0.18933} \meshline{black}{0.17244}{0.17652}{0.19306}{0.15583} \meshline{black}{0.23749}{0.12886}{0.26264}{0.11661} \meshline{black}{0.22754}{0.13363}{0.22145}{0.13798}\meshline{black}{0.22754}{0.13363}{0.23749}{0.12886} \meshline{black}{0.13434}{0.22758}{0.14348}{0.21531} \meshline{black}{0.26264}{0.11661}{0.28093}{0.11131} \meshline{black}{0.11502}{0.26806}{0.12889}{0.23938} \meshline{black}{0.13434}{0.22758}{0.12889}{0.23938}\meshline{black}{0.11039}{0.28545}{0.11502}{0.26806} \meshline{black}{0.30311}{0.10248}{0.28093}{0.11131} \meshline{black}{0.11039}{0.28545}{0.10172}{0.30985} \meshline{black}{0.30311}{0.10248}{0.34766}{0.09259} \meshline{black}{0.09505}{0.3433}{0.10172}{0.30985} \meshline{black}{0.34766}{0.09259}{0.34903}{0.09217} \meshline{black}{0.09505}{0.3433}{0.09237}{0.35323} \meshline{black}{0.39792}{0.08409}{0.35435}{0.09134} \meshline{black}{0.34903}{0.09217}{0.35435}{0.09134} \meshline{black}{0.08604}{0.39837}{0.08645}{0.39559} \meshline{black}{0.09237}{0.35323}{0.08645}{0.39559}\meshline{black}{0.39792}{0.08409}{0.40667}{0.08354} \meshline{black}{0.086}{0.39898}{0.08604}{0.39837} \meshline{black}{0.44869}{0.0795}{0.40667}{0.08354} \meshline{black}{0.08182}{0.44564}{0.086}{0.39898} \meshline{black}{0.44869}{0.0795}{0.46046}{0.07912} \meshline{black}{0.08158}{0.4543}{0.08182}{0.44564} \meshline{black}{0.46046}{0.07912}{0.49943}{0.07785} \meshline{black}{0.08032}{0.49695}{0.08158}{0.4543} \meshline{black}{0.49943}{0.07785}{0.51151}{0.0778} \meshline{black}{0.08032}{0.50707}{0.08032}{0.49695} \meshline{black}{0.51151}{0.0778}{0.55077}{0.07886} \meshline{black}{0.08175}{0.55035}{0.08032}{0.50707} \meshline{black}{0.55077}{0.07886}{0.56082}{0.0791} \meshline{black}{0.08189}{0.55556}{0.08175}{0.55035} \meshline{black}{0.56082}{0.0791}{0.6034}{0.08272} \meshline{black}{0.08502}{0.5902}{0.08189}{0.55556} 
\meshline{black}{0.08597}{0.60146}{0.08502}{0.5902} \meshline{black}{0.6034}{0.08272}{0.60856}{0.08299} \meshline{black}{0.08597}{0.60146}{0.08629}{0.60321}\meshline{black}{0.60856}{0.08299}{0.63689}{0.08702} \meshline{black}{0.65464}{0.08945}{0.63689}{0.08702} \meshline{black}{0.09223}{0.64596}{0.08629}{0.60321} \meshline{black}{0.65827}{0.09042}{0.65464}{0.08945}\meshline{black}{0.09506}{0.65688}{0.09223}{0.64596} \meshline{black}{0.65827}{0.09042}{0.69895}{0.09837} \meshline{black}{0.10131}{0.68877}{0.09506}{0.65688} \meshline{black}{0.69895}{0.09837}{0.71735}{0.10487} \meshline{black}{0.10131}{0.68877}{0.10979}{0.71244} \meshline{black}{0.71735}{0.10487}{0.74038}{0.11081} \meshline{black}{0.11426}{0.72938}{0.10979}{0.71244} \meshline{black}{0.77242}{0.12507}{0.74038}{0.11081} \meshline{black}{0.77242}{0.12507}{0.77706}{0.12715} \meshline{black}{0.12953}{0.76238}{0.13175}{0.76745} \meshline{black}{0.12953}{0.76238}{0.11426}{0.72938} \meshline{black}{0.77706}{0.12715}{0.78151}{0.13004} \meshline{black}{0.13175}{0.76745}{0.13336}{0.7698} \meshline{black}{0.78151}{0.13004}{0.80873}{0.1449} \meshline{black}{0.83404}{0.16617}{0.8374}{0.16873} \meshline{black}{0.15529}{0.80608}{0.13336}{0.7698} \meshline{black}{0.84108}{0.17316}{0.8374}{0.16873} \meshline{black}{0.80873}{0.1449}{0.83404}{0.16617} \meshline{black}{0.15529}{0.80608}{0.17371}{0.82477} \meshline{black}{0.86651}{0.20007}{0.84108}{0.17316} \meshline{black}{0.18943}{0.84087}{0.17371}{0.82477} \meshline{black}{0.2263}{0.8656}{0.21787}{0.85912} \meshline{black}{0.89138}{0.24175}{0.87525}{0.215} \meshline{black}{0.86651}{0.20007}{0.87525}{0.215} \meshline{black}{0.21787}{0.85912}{0.18943}{0.84087} \meshline{black}{0.2263}{0.8656}{0.2346}{0.8696}\meshline{black}{0.89138}{0.24175}{0.89899}{0.26121} \meshline{black}{0.89899}{0.26121}{0.91069}{0.29267} \meshline{black}{0.92381}{0.34997}{0.92135}{0.34035} \meshline{black}{0.92452}{0.35352}{0.92381}{0.34997} \meshline{black}{0.26575}{0.88504}{0.2346}{0.8696} \meshline{black}{0.91397}{0.30596}{0.91069}{0.29267} \meshline{black}{0.92135}{0.34035}{0.91397}{0.30596} \meshline{black}{0.28688}{0.89095}{0.30804}{0.89902}\meshline{black}{0.28688}{0.89095}{0.26575}{0.88504} \meshline{black}{0.92452}{0.35352}{0.93053}{0.39358} \meshline{black}{0.9334}{0.42393}{0.93053}{0.39358} \meshline{black}{0.34637}{0.90706}{0.35257}{0.90882} \meshline{black}{0.34637}{0.90706}{0.30804}{0.89902} \meshline{black}{0.93631}{0.48032}{0.9348}{0.45267}\meshline{black}{0.9334}{0.42393}{0.93435}{0.43699} \meshline{black}{0.93631}{0.48032}{0.93606}{0.50556} \meshline{black}{0.3829}{0.91333}{0.35257}{0.90882} \meshline{black}{0.9348}{0.45267}{0.93435}{0.43699} \meshline{black}{0.93614}{0.52361}{0.93606}{0.50556} \meshline{black}{0.39863}{0.91579}{0.3829}{0.91333} \meshline{black}{0.93423}{0.56689}{0.93543}{0.54064} \meshline{black}{0.93614}{0.52361}{0.93543}{0.54064} \meshline{black}{0.44607}{0.92028}{0.40155}{0.91596} \meshline{black}{0.40155}{0.91596}{0.39863}{0.91579} \meshline{black}{0.9305}{0.60155}{0.93423}{0.56689}\meshline{black}{0.49501}{0.92213}{0.4542}{0.92055} \meshline{black}{0.4542}{0.92055}{0.44607}{0.92028} \meshline{black}{0.9305}{0.60155}{0.9299}{0.61018} \meshline{black}{0.92919}{0.61657}{0.92339}{0.65347} \meshline{black}{0.54543}{0.92142}{0.50514}{0.92218} \meshline{black}{0.50514}{0.92218}{0.49501}{0.92213} \meshline{black}{0.9299}{0.61018}{0.92919}{0.61657} \meshline{black}{0.9133}{0.69677}{0.91731}{0.68274} \meshline{black}{0.91731}{0.68274}{0.92339}{0.65347} 
\meshline{black}{0.89844}{0.74009}{0.89928}{0.73829} \meshline{black}{0.59745}{0.91791}{0.55457}{0.92121} \meshline{black}{0.55457}{0.92121}{0.54543}{0.92142} \meshline{black}{0.64895}{0.91148}{0.63397}{0.91341} \meshline{black}{0.65155}{0.91083}{0.64895}{0.91148} \meshline{black}{0.89928}{0.73829}{0.9133}{0.69677} \meshline{black}{0.88417}{0.76799}{0.89844}{0.74009}\meshline{black}{0.60251}{0.91766}{0.59745}{0.91791} \meshline{black}{0.63397}{0.91341}{0.60251}{0.91766} \meshline{black}{0.69399}{0.90287}{0.65155}{0.91083} \meshline{black}{0.70917}{0.89764}{0.69399}{0.90287} \meshline{black}{0.84745}{0.822}{0.84218}{0.82675} \meshline{black}{0.87112}{0.79055}{0.84745}{0.822} \meshline{black}{0.88417}{0.76799}{0.87651}{0.78342} \meshline{black}{0.73694}{0.89062}{0.70917}{0.89764} \meshline{black}{0.77248}{0.87463}{0.73694}{0.89062} \meshline{black}{0.81448}{0.85161}{0.78221}{0.87008} \meshline{black}{0.84218}{0.82675}{0.81448}{0.85161} \meshline{black}{0.87112}{0.79055}{0.87591}{0.78457} \meshline{black}{0.87651}{0.78342}{0.87591}{0.78457} \meshline{black}{0.77689}{0.87317}{0.77248}{0.87463} \meshline{black}{0.77689}{0.87317}{0.78221}{0.87008} \meshline{black}{0.20297}{0.30136}{0.22568}{0.26902} \meshline{black}{0.20226}{0.30272}{0.20297}{0.30136} \meshline{black}{0.29654}{0.20355}{0.2727}{0.22108} \meshline{black}{0.23189}{0.26018}{0.26311}{0.22812} \meshline{black}{0.29654}{0.20355}{0.3321}{0.18595} \meshline{black}{0.22568}{0.26902}{0.23161}{0.26048} \meshline{black}{0.20226}{0.30272}{0.19954}{0.30891} \meshline{black}{0.23189}{0.26018}{0.23161}{0.26048} \meshline{black}{0.26311}{0.22812}{0.2727}{0.22108} \meshline{black}{0.33574}{0.18389}{0.3321}{0.18595} \meshline{black}{0.19954}{0.30891}{0.18107}{0.34785} \meshline{black}{0.3385}{0.18284}{0.37539}{0.16929} \meshline{black}{0.3385}{0.18284}{0.33574}{0.18389}\meshline{black}{0.18107}{0.34785}{0.17816}{0.35556} \meshline{black}{0.39549}{0.16351}{0.37539}{0.16929} \meshline{black}{0.16729}{0.39291}{0.17816}{0.35556} \meshline{black}{0.42479}{0.15798}{0.39549}{0.16351} \meshline{black}{0.16729}{0.39291}{0.16543}{0.40182} \meshline{black}{0.16003}{0.42815}{0.16543}{0.40182} \meshline{black}{0.44646}{0.15426}{0.42479}{0.15798}\meshline{black}{0.44646}{0.15426}{0.4767}{0.15166} \meshline{black}{0.15758}{0.45726}{0.16003}{0.42815} \meshline{black}{0.49692}{0.15073}{0.4767}{0.15166}\meshline{black}{0.15758}{0.45726}{0.15525}{0.47711} \meshline{black}{0.49692}{0.15073}{0.52671}{0.15076} \meshline{black}{0.15546}{0.49442}{0.15525}{0.47711} \meshline{black}{0.15543}{0.52712}{0.15546}{0.49442} \meshline{black}{0.54912}{0.15245}{0.52671}{0.15076} \meshline{black}{0.54912}{0.15245}{0.57448}{0.15459} \meshline{black}{0.15871}{0.55849}{0.15543}{0.52712} \meshline{black}{0.15906}{0.56243}{0.15871}{0.55849} \meshline{black}{0.60473}{0.16048}{0.57448}{0.15459} \meshline{black}{0.15906}{0.56243}{0.16668}{0.60345}\meshline{black}{0.60473}{0.16048}{0.61987}{0.16303} \meshline{black}{0.16668}{0.60345}{0.16746}{0.60806} \meshline{black}{0.61987}{0.16303}{0.65514}{0.17404} \meshline{black}{0.66271}{0.17629}{0.65514}{0.17404} \meshline{black}{0.16926}{0.6141}{0.16746}{0.60806} \meshline{black}{0.16926}{0.6141}{0.18069}{0.65145}\meshline{black}{0.66687}{0.17832}{0.66271}{0.17629} \meshline{black}{0.70335}{0.19362}{0.66687}{0.17832} \meshline{black}{0.18069}{0.65145}{0.18813}{0.67066} \meshline{black}{0.18813}{0.67066}{0.20137}{0.69615} \meshline{black}{0.70335}{0.19362}{0.73528}{0.21414} \meshline{black}{0.7417}{0.21833}{0.73528}{0.21414} 
\meshline{black}{0.20137}{0.69615}{0.21369}{0.71699} \meshline{black}{0.7566}{0.2324}{0.7417}{0.21833} \meshline{black}{0.21369}{0.71699}{0.23135}{0.73936} \meshline{black}{0.77763}{0.25052}{0.7566}{0.2324} \meshline{black}{0.23135}{0.73936}{0.24259}{0.75289} \meshline{black}{0.77763}{0.25052}{0.78374}{0.25807} \meshline{black}{0.78374}{0.25807}{0.80848}{0.29044} \meshline{black}{0.27281}{0.77861}{0.27595}{0.78155} \meshline{black}{0.24259}{0.75289}{0.27281}{0.77861} \meshline{black}{0.27595}{0.78155}{0.27795}{0.78284} \meshline{black}{0.8162}{0.30443}{0.80848}{0.29044} \meshline{black}{0.83253}{0.33795}{0.8162}{0.30443}\meshline{black}{0.84828}{0.38863}{0.8498}{0.39328} \meshline{black}{0.27795}{0.78284}{0.31219}{0.80547} \meshline{black}{0.8501}{0.39469}{0.8498}{0.39328} \meshline{black}{0.83676}{0.34929}{0.83253}{0.33795} \meshline{black}{0.84828}{0.38863}{0.83676}{0.34929} \meshline{black}{0.34645}{0.82005}{0.35225}{0.8229}\meshline{black}{0.31219}{0.80547}{0.34645}{0.82005} \meshline{black}{0.8501}{0.39469}{0.85814}{0.43685} \meshline{black}{0.86033}{0.4636}{0.85814}{0.43685}\meshline{black}{0.35225}{0.8229}{0.36438}{0.82654} \meshline{black}{0.86174}{0.48022}{0.86033}{0.4636} \meshline{black}{0.39461}{0.83625}{0.36438}{0.82654} \meshline{black}{0.86165}{0.52353}{0.86149}{0.49894} \meshline{black}{0.86174}{0.48022}{0.86149}{0.49894} \meshline{black}{0.43959}{0.84499}{0.40704}{0.83848} \meshline{black}{0.40704}{0.83848}{0.39461}{0.83625} \meshline{black}{0.86165}{0.52353}{0.85845}{0.55326} \meshline{black}{0.46199}{0.84713}{0.48721}{0.84915} \meshline{black}{0.46199}{0.84713}{0.43959}{0.84499} \meshline{black}{0.85845}{0.55326}{0.85744}{0.56681} \meshline{black}{0.84934}{0.6101}{0.85589}{0.57599}\meshline{black}{0.51333}{0.84951}{0.53756}{0.84855}\meshline{black}{0.51333}{0.84951}{0.48721}{0.84915} \meshline{black}{0.85589}{0.57599}{0.85744}{0.56681} \meshline{black}{0.84139}{0.63759}{0.8358}{0.65339} \meshline{black}{0.84934}{0.6101}{0.84139}{0.63759} \meshline{black}{0.81552}{0.6967}{0.82045}{0.68806} \meshline{black}{0.53756}{0.84855}{0.56182}{0.84692} \meshline{black}{0.8358}{0.65339}{0.82045}{0.68806} \meshline{black}{0.56182}{0.84692}{0.59122}{0.84224}\meshline{black}{0.7944}{0.72968}{0.78541}{0.74003} \meshline{black}{0.81552}{0.6967}{0.7944}{0.72968} \meshline{black}{0.59122}{0.84224}{0.6077}{0.83977} \meshline{black}{0.64938}{0.82811}{0.6077}{0.83977} \meshline{black}{0.65342}{0.82672}{0.6918}{0.81212} \meshline{black}{0.6918}{0.81212}{0.71758}{0.79665} \meshline{black}{0.73859}{0.78337}{0.76398}{0.76373} \meshline{black}{0.78541}{0.74003}{0.76398}{0.76373} \meshline{black}{0.6508}{0.82778}{0.64938}{0.82811} \meshline{black}{0.6508}{0.82778}{0.65342}{0.82672} \meshline{black}{0.7294}{0.79055}{0.71758}{0.79665} \meshline{black}{0.73859}{0.78337}{0.7294}{0.79055} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0.05}{0}{0} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.05}{0}{0.1} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.45}{0}{0.5} \meshline{black}{0.5}{0}{0.55}{0} 
\meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{1.01788}{0.24213}{1.01854}{0.26375} \meshline{black}{1.01854}{0.26375}{1.01908}{0.27989} \meshline{black}{1.02104}{0.19823}{1.01988}{0.17695}\meshline{black}{0.25}{1}{0.2}{1} \meshline{black}{1.02126}{0.04458}{1.02154}{0.04504} \meshline{black}{1.01913}{0.14373}{1.01988}{0.17695} \meshline{black}{1.01718}{0.35049}{1.01678}{0.333} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{1.01742}{0.36583}{1.01718}{0.35049} \meshline{black}{1}{ 0}{1.02126}{0.04458} \meshline{black}{1.02275}{0.12022}{1.02082}{0.08916}\meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{1.0161}{0.42027}{1.0163}{0.43713} \meshline{black}{1.02082}{0.08916}{1.02154}{0.04504} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{1.01635}{0.45331}{1.0163}{0.43713}\meshline{black}{1.0192}{0.22038}{1.01788}{0.24213} \meshline{black}{1.0192}{0.22038}{1.02104}{0.19823} \meshline{black}{1.01777}{0.30712}{1.01908}{0.27989} \meshline{black}{1.01678}{0.333}{1.01777}{0.30712} \meshline{black}{1.02037}{0.13325}{1.01913}{0.14373} \meshline{black}{1.01742}{0.36583}{1.01658}{0.39382} \meshline{black}{1.02275}{0.12022}{1.02037}{0.13325} \meshline{black}{1}{0}{1.05}{0} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{1.01658}{0.39382}{1.0161}{0.42027} \meshline{black}{1.01611}{0.50729}{1.01615}{0.5237} \meshline{black}{1.01635}{0.45331}{1.01603}{0.48042} \meshline{black}{1.01615}{0.5237}{1.01603}{0.54017} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1.01603}{0.48042}{1.01611}{0.50729} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{1.01603}{0.54017}{1.01628}{0.56698} \meshline{black}{1.05}{0}{1.1}{0} \meshline{black}{1.01676}{0.61026}{1.0169}{0.5954} \meshline{black}{1.0169}{0.5954}{1.01628}{0.56698} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{1.01645}{0.62583}{1.01676}{0.61026} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{1.1}{0}{1.15}{0} \meshline{black}{1.01645}{0.62583}{1.01725}{0.65356} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1.01799}{0.69686}{1.01832}{0.68466} \meshline{black}{1.01725}{0.65356}{1.01832}{0.68466} \meshline{black}{1.01799}{0.69686}{1.01743}{0.71054} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{1.15}{0}{1.2}{0} \meshline{black}{1.01871}{0.74017}{1.01743}{0.71054} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1.02004}{0.77451}{1.0195}{0.78349} \meshline{black}{1.01871}{0.74017}{1.02004}{0.77451} \meshline{black}{1.2}{0}{1.25}{0} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{1.0195}{0.78349}{1.01861}{0.7949} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1.01861}{0.7949}{1.02026}{0.82681} \meshline{black}{1.25}{0}{1.3}{0} \meshline{black}{1.02086}{0.87012}{1.02167}{0.86418}\meshline{black}{1.02167}{0.86418}{1.02026}{0.82681} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{1.02086}{0.87012}{1.01931}{0.88014} \meshline{black}{0.9}{1}{0.85}{1} 
\meshline{black}{1.3}{0}{1.35}{0} \meshline{black}{1.02143}{0.91343}{1.01931}{0.88014} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1.02319}{0.95353}{1.02167}{0.95672} \meshline{black}{1.02143}{0.91343}{1.02319}{0.95353} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1}{1}{1.02167}{0.95672} \meshline{black}{1.35}{0}{1.4}{0} \meshline{black}{1.05}{1}{1}{1} \meshline{black}{1.1}{1}{1.05}{1} \meshline{black}{1.4}{0}{1.45}{0} \meshline{black}{1.15}{1}{1.1}{1} \meshline{black}{1.45}{0}{1.5}{0} \meshline{black}{1.2}{1}{1.15}{1} \meshline{black}{1.25}{1}{1.2}{1} \meshline{black}{1.5}{0}{1.55}{0} \meshline{black}{1.3}{1}{1.25}{1} \meshline{black}{1.35}{1}{1.3}{1} \meshline{black}{1.55}{0}{1.6}{0} \meshline{black}{1.4}{1}{1.35}{1} \meshline{black}{1.6}{0}{1.65}{0} \meshline{black}{1.45}{1}{1.4}{1} \meshline{black}{1.5}{1}{1.45}{1} \meshline{black}{1.65}{0}{1.7}{0} \meshline{black}{1.55}{1}{1.5}{1} \meshline{black}{1.7}{0}{1.75}{0} \meshline{black}{1.6}{1}{1.55}{1} \meshline{black}{1.65}{1}{1.6}{1} \meshline{black}{1.75}{0}{1.8}{0} \meshline{black}{1.7}{1}{1.65}{1} \meshline{black}{1.8}{0}{1.85}{0} \meshline{black}{1.75}{1}{1.7}{1} \meshline{black}{1.8}{1}{1.75}{1} \meshline{black}{1.85}{0}{1.9}{0} \meshline{black}{1.85}{1}{1.8}{1} \meshline{black}{1.9}{0}{1.95}{0} \meshline{black}{1.9}{1}{1.85}{1} \meshline{black}{2}{0.15}{2}{0.2} \meshline{black}{2}{0.1}{2}{0.15} \meshline{black}{2}{0.2}{2}{0.25} \meshline{black}{2}{0.05}{2}{0.1} \meshline{black}{1.95}{0}{2}{0} \meshline{black}{2}{0.25}{2}{0.3} \meshline{black}{2}{0.5}{2}{0.55} \meshline{black}{2}{0.45}{2}{0.5} \meshline{black}{2}{0.3}{2}{0.35} \meshline{black}{2}{0.4}{2}{0.45} \meshline{black}{2}{0.55}{2}{0.6} \meshline{black}{2}{0}{2}{0.05} \meshline{black}{2}{0.35}{2}{0.4} \meshline{black}{1.95}{1}{1.9}{1} \meshline{black}{2}{0.6}{2}{0.65} \meshline{black}{2}{0.65}{2}{0.7} \meshline{black}{2}{0.7}{2}{0.75} \meshline{black}{2}{0.75}{2}{0.8} \meshline{black}{2}{0.8}{2}{0.85} \meshline{black}{2}{0.85}{2}{0.9} \meshline{black}{2}{0.9}{2}{0.95} \meshline{black}{2}{1}{1.95}{1} \meshline{black}{2}{0.95}{2}{1} \meshline{black}{0}{0}{0.05}{0} \meshline{black}{0}{0}{0}{0.05} \meshline{black}{0.05}{0}{0.1}{0} \meshline{black}{0}{0.1}{0}{0.05} \meshline{black}{0.1}{0}{0.15}{0} \meshline{black}{0}{0.15}{0}{0.1} \meshline{black}{0.15}{0}{0.2}{0} \meshline{black}{0}{0.2}{0}{0.15} \meshline{black}{0.2}{0}{0.25}{0} \meshline{black}{0}{0.25}{0}{0.2} \meshline{black}{0.25}{0}{0.3}{0} \meshline{black}{0}{0.3}{0}{0.25} \meshline{black}{0.3}{0}{0.35}{0} \meshline{black}{0}{0.35}{0}{0.3} \meshline{black}{0.35}{0}{0.4}{0} \meshline{black}{0}{0.4}{0}{0.35} \meshline{black}{0.4}{0}{0.45}{0} \meshline{black}{0}{0.45}{0}{0.4} \meshline{black}{0.45}{0}{0.5}{0} \meshline{black}{0}{0.5}{0}{0.45} \meshline{black}{0.5}{0}{0.55}{0} \meshline{black}{0}{0.55}{0}{0.5} \meshline{black}{0.55}{0}{0.6}{0} \meshline{black}{0}{0.6}{0}{0.55} \meshline{black}{0.6}{0}{0.65}{0} \meshline{black}{0}{0.65}{0}{0.6} \meshline{black}{0.65}{0}{0.7}{0} \meshline{black}{0}{0.7}{0}{0.65} \meshline{black}{0.7}{0}{0.75}{0} \meshline{black}{0}{0.75}{0}{0.7} \meshline{black}{0.75}{0}{0.8}{0} \meshline{black}{0}{0.8}{0}{0.75} \meshline{black}{0.8}{0}{0.85}{0} \meshline{black}{0}{0.85}{0}{0.8} \meshline{black}{0.85}{0}{0.9}{0} \meshline{black}{0}{0.9}{0}{0.85} \meshline{black}{0.9}{0}{0.95}{0} \meshline{black}{0}{0.95}{0}{0.9} \meshline{black}{0.95}{0}{1}{0} \meshline{black}{0.2}{1}{0.15}{1} \meshline{black}{0.15}{1}{0.1}{1} \meshline{black}{0}{1}{0}{0.95} \meshline{black}{0.25}{1}{0.2}{1} 
\meshline{black}{0.1}{1}{0.05}{1} \meshline{black}{0.3}{1}{0.25}{1} \meshline{black}{0.05}{1}{0}{1} \meshline{black}{1}{0}{1.05}{0} \meshline{black}{0.35}{1}{0.3}{1} \meshline{black}{0.4}{1}{0.35}{1} \meshline{black}{1.05}{0}{1.1}{0} \meshline{black}{0.45}{1}{0.4}{1} \meshline{black}{0.5}{1}{0.45}{1} \meshline{black}{1.1}{0}{1.15}{0} \meshline{black}{0.55}{1}{0.5}{1} \meshline{black}{0.6}{1}{0.55}{1} \meshline{black}{1.15}{0}{1.2}{0} \meshline{black}{0.65}{1}{0.6}{1} \meshline{black}{0.7}{1}{0.65}{1} \meshline{black}{1.2}{0}{1.25}{0} \meshline{black}{0.75}{1}{0.7}{1} \meshline{black}{0.8}{1}{0.75}{1} \meshline{black}{1.25}{0}{1.3}{0} \meshline{black}{0.85}{1}{0.8}{1} \meshline{black}{0.9}{1}{0.85}{1} \meshline{black}{1.3}{0}{1.35}{0} \meshline{black}{0.95}{1}{0.9}{1} \meshline{black}{1.35}{0}{1.4}{0} \meshline{black}{1}{1}{0.95}{1} \meshline{black}{1.05}{1}{1}{1} \meshline{black}{1.4}{0}{1.45}{0} \meshline{black}{1.1}{1}{1.05}{1} \meshline{black}{1.15}{1}{1.1}{1} \meshline{black}{1.45}{0}{1.5}{0} \meshline{black}{1.2}{1}{1.15}{1} \meshline{black}{1.25}{1}{1.2}{1} \meshline{black}{1.5}{0}{1.55}{0} \meshline{black}{1.3}{1}{1.25}{1} \meshline{black}{1.35}{1}{1.3}{1} \meshline{black}{1.55}{0}{1.6}{0} \meshline{black}{1.4}{1}{1.35}{1} \meshline{black}{1.6}{0}{1.65}{0} \meshline{black}{1.45}{1}{1.4}{1} \meshline{black}{1.5}{1}{1.45}{1} \meshline{black}{1.65}{0}{1.7}{0} \meshline{black}{1.55}{1}{1.5}{1} \meshline{black}{1.7}{0}{1.75}{0} \meshline{black}{1.6}{1}{1.55}{1} \meshline{black}{1.65}{1}{1.6}{1} \meshline{black}{1.75}{0}{1.8}{0} \meshline{black}{1.7}{1}{1.65}{1} \meshline{black}{1.8}{0}{1.85}{0} \meshline{black}{1.75}{1}{1.7}{1} \meshline{black}{1.85}{0}{1.9}{0} \meshline{black}{1.8}{1}{1.75}{1} \meshline{black}{1.85}{1}{1.8}{1} \meshline{black}{1.9}{0}{1.95}{0} \meshline{black}{1.9}{1}{1.85}{1} \meshline{black}{2}{0.15}{2}{0.2} \meshline{black}{2}{0.1}{2}{0.15} \meshline{black}{1.95}{0}{2}{0} \meshline{black}{2}{0.2}{2}{0.25} \meshline{black}{2}{0.05}{2}{0.1} \meshline{black}{2}{0.25}{2}{0.3} \meshline{black}{2}{0.5}{2}{0.55} \meshline{black}{1.95}{1}{1.9}{1} \meshline{black}{2}{0.45}{2}{0.5} \meshline{black}{2}{0}{2}{0.05} \meshline{black}{2}{0.3}{2}{0.35} \meshline{black}{2}{0.4}{2}{0.45} \meshline{black}{2}{0.55}{2}{0.6} \meshline{black}{2}{0.35}{2}{0.4} \meshline{black}{2}{0.6}{2}{0.65} \meshline{black}{2}{0.65}{2}{0.7} \meshline{black}{2}{0.7}{2}{0.75} \meshline{black}{2}{0.75}{2}{0.8} \meshline{black}{2}{0.8}{2}{0.85} \meshline{black}{2}{1}{1.95}{1} \meshline{black}{2}{0.85}{2}{0.9} \meshline{black}{2}{0.9}{2}{0.95} \meshline{black}{2}{0.95}{2}{1} \end{tikzpicture} \null \end{center} \null \end{minipage} \begin{minipage}[h]{0.4\linewidth} \begin{itemize} \item For g.s.: $\max (u)=5.98 $, $\mathcal{E}_4 (u)= 30.98$ \item For l.e.n.s.: $\min(u)= -8.67$, $\max (u)= 6.53$, $\mathcal{E}_4(u)=76.23$ \item Starting function for g.s.: $(x-2)(y-1)xy$ \item Starting function for l.e.n.s.: \\$\sin(\pi(x+1))\sin(2\pi (y+1))$ \end{itemize} \end{minipage} \null If $V_-=0$ and $V_+=35$, we get the same symmetry for g.s.\ but l.e.n.s.\ are not symmetric. The mass is more or less ``located'' in the square defined by $x<1$. So, we obtain a direct symmetry breaking. To minimize the energy, the difference in the potential is so large that it is better to locate the mass in one side of the rectangle. On a square and for $V=0$, it is conjectured that l.e.n.s.\ is odd with respect to a diagonal. It explains the structure of the approximation. 
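Assuming, as the plotted domain suggests, that the rectangle is $(0,2)\times(0,1)$, a short check of the two starting functions listed above may help in reading these approximations. The g.s.\ starting function factors as $(x-2)(y-1)xy=[x(x-2)]\,[y(y-1)]$, so it vanishes on the boundary of the rectangle and is positive in the interior (both factors are negative there). The l.e.n.s.\ starting function changes sign and is antisymmetric with respect to the reflection $x\mapsto 2-x$ across the midline $x=1$, since
\[
\sin\bigl(\pi((2-x)+1)\bigr)\,\sin\bigl(2\pi(y+1)\bigr)
=\sin(\pi x)\,\sin(2\pi y)
=-\sin\bigl(\pi(x+1)\bigr)\,\sin\bigl(2\pi(y+1)\bigr),
\]
and likewise with respect to $y\mapsto 1-y$. The symmetry breaking discussed above thus means that, for $V_+=35$, the computed l.e.n.s.\ does not retain the antisymmetry of its starting guess.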
\begin{minipage}[h]{0.6\linewidth}
\includegraphics[width=2.8cm, angle=270]{rectMPA035.pdf}
\includegraphics[width=2.8cm, angle=270]{rectMMPA035.pdf}
\null
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) --(0.5,0);
\draw[->] (0,0)--(0,.5);
\node[anchor=north] at (0.5,0) {$x$};
\node[anchor=east] at (0,0.5) {$y$};
\end{tikzpicture}
% [TikZ plot built from \meshline contour data on the rectangle (level curves accompanying
%  rectMPA035.pdf); the auto-generated coordinate data is omitted here.]
\begin{tikzpicture}
\pgfsetxvec{\pgfpoint{1.5cm}{0cm}}
\pgfsetyvec{\pgfpoint{0cm}{1.5cm}}
% [Second plot (accompanying rectMMPA035.pdf): the initial portion of its \meshline coordinate
%  data is omitted here; the remaining coordinate data of this picture continues below.]
\end{tikzpicture}
\null
\end{center}
\null
\end{minipage}
\begin{minipage}[h]{0.4\linewidth}
\begin{itemize}
\item For the ground state (g.s.): $\max(u)=6.19$, $\mathcal{E}_4(u)=33.14$
\item For the least energy nodal solution (l.e.n.s.): $\min(u)=-9.8$, $\max(u)=9.7$, $\mathcal{E}_4(u)=181.09$
\item Starting function for the g.s.: $(x-2)(y-1)xy$
\item Starting function for the l.e.n.s.: \\ $\sin(\pi(x+1))\sin(2\pi (y+1))$
\end{itemize}
\end{minipage}
\null

\subsection{A singular potential on a ball}

As a last example, we study singular potentials. First, we take $\lambda=1$, $p=4$, and $V(x,y)=\frac{1}{\sqrt{x^2+y^2}}$ on the ball $B(0,1)$. As expected, the approximations show that the ground state solutions are radial and that the least energy nodal solutions are odd and even with respect to a diagonal. We obtain the same symmetry as for the potential $V=0$ (see~\cite{bbgv}).
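To fix ideas, the following is a minimal sketch (in Python with NumPy, assumed available; it is not the code used to produce the approximations reported here) of how the singular weight $V(x,y)=1/\sqrt{x^2+y^2}$ can be sampled on a grid covering $B(0,1)$, masking the points outside the disk and guarding the singularity at the origin with a small cutoff. The resolution \texttt{n} and the cutoff \texttt{eps} are illustrative choices only.
\begin{verbatim}
# Illustrative sketch: sample the singular weight V(x, y) = 1 / sqrt(x^2 + y^2)
# on a uniform grid covering the unit disk B(0, 1).
# The resolution n and the cutoff eps are arbitrary illustrative choices.
import numpy as np

n = 201                                   # grid points per axis (illustrative)
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y)

r = np.sqrt(X**2 + Y**2)                  # distance to the origin
inside = r <= 1.0                         # grid points lying in B(0, 1)

eps = 1e-8                                # guard against division by zero at r = 0
V = np.where(inside, 1.0 / np.maximum(r, eps), np.nan)

print(np.nanmin(V), np.nanmax(V))         # range of V over the sampled disk
\end{verbatim}
By construction $V$ depends only on $r=\sqrt{x^2+y^2}$, which is consistent with the radial symmetry observed for the ground state solutions.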
\begin{minipage}[h]{0.6\linewidth}
\includegraphics[width=2.8cm, angle=270]{cercleMPAsing.pdf}
\includegraphics[width=2.8cm, angle=270]{cercleMMPAsing.pdf}
\null
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) --(0.5,0);
\draw[->] (0,0)--(0,.5);
\node[anchor= north] at (0.5,0) {$x$};
\node[anchor =east] at (0,0.5) {$y$};
\end{tikzpicture}
\begin{tikzpicture}
\pgfsetxvec{\pgfpoint{0.8cm}{0cm}}
\pgfsetyvec{\pgfpoint{0cm}{0.8cm}}
% [mesh plot data omitted: \meshline level-line segments of the approximated solutions on the unit disk]
\meshline{black}{0.1473}{-0.98549}{0.19509}{-0.98078} \meshline{black}{0.19509}{0.9807}{0.14533}{0.98562} \meshline{black}{0.09902}{-0.99024}{0.1473}{-0.98549} \meshline{black}{0.14533}{0.98562}{0.09607}{0.9905} \meshline{black}{0.04976}{-0.9951}{0.09902}{-0.99024} \meshline{black}{0.09607}{0.9905}{0.04779}{0.99527} \meshline{black}{0}{-1}{0.04976}{-0.9951} \meshline{black}{0.04779}{0.99527}{0}{1} \meshline{black}{-0.05951}{-0.98078}{0}{-1} \meshline{black}{0}{1}{-0.04976}{0.9951} \meshline{black}{-0.1026}{-0.97319}{-0.05951}{-0.98078} \meshline{black}{-0.04976}{0.9951}{-0.09901}{0.99024} \meshline{black}{-0.14672}{-0.96542}{-0.1026}{-0.97319} \meshline{black}{-0.09901}{0.99024}{-0.14726}{0.98549} \meshline{black}{-0.14726}{0.98549}{-0.195}{0.98078} \meshline{black}{-0.19186}{-0.95747}{-0.14672}{-0.96542} \meshline{black}{-0.195}{0.98078}{-0.24283}{0.96622} \meshline{black}{-0.24283}{0.96622}{-0.29013}{0.95183} \meshline{black}{-0.23803}{-0.94934}{-0.19186}{-0.95747} \meshline{black}{-0.29013}{0.95183}{-0.33633}{0.93777} \meshline{black}{-0.28522}{-0.94103}{-0.23803}{-0.94934} \meshline{black}{-0.33633}{0.93777}{-0.382}{0.92387} \meshline{black}{-0.33344}{-0.93254}{-0.28522}{-0.94103} \meshline{black}{-0.382}{0.92387}{-0.4261}{0.90031} \meshline{black}{-0.38268}{-0.92387}{-0.33344}{-0.93254} \meshline{black}{-0.42503}{-0.90123}{-0.38268}{-0.92387} \meshline{black}{-0.4261}{0.90031}{-0.46978}{0.87698} \meshline{black}{-0.46781}{-0.87837}{-0.42503}{-0.90123} \meshline{black}{-0.51147}{-0.85503}{-0.46781}{-0.87837} \meshline{black}{-0.46978}{0.87698}{-0.5126}{0.85411} \meshline{black}{-0.55557}{-0.83146}{-0.51147}{-0.85503} \meshline{black}{-0.59268}{-0.801}{-0.55557}{-0.83146} \meshline{black}{-0.5126}{0.85411}{-0.555}{0.83146} \meshline{black}{-0.63018}{-0.77023}{-0.59268}{-0.801} \meshline{black}{-0.66845}{-0.73882}{-0.63018}{-0.77023} \meshline{black}{-0.7071}{-0.7071}{-0.66845}{-0.73882} \meshline{black}{-0.555}{0.83146}{-0.5937}{0.7998} \meshline{black}{-0.73756}{-0.66999}{-0.7071}{-0.7071} \meshline{black}{-0.76833}{-0.63249}{-0.73756}{-0.66999} \meshline{black}{-0.5937}{0.7998}{-0.63205}{0.76843} \meshline{black}{-0.79974}{-0.59422}{-0.76833}{-0.63249} \meshline{black}{-0.83146}{-0.55557}{-0.79974}{-0.59422} \meshline{black}{-0.63205}{0.76843}{-0.6697}{0.73762} \meshline{black}{-0.8541}{-0.51322}{-0.83146}{-0.55557} \meshline{black}{-0.6697}{0.73762}{-0.707}{0.70711} \meshline{black}{-0.87696}{-0.47044}{-0.8541}{-0.51322} \meshline{black}{-0.9003}{-0.42678}{-0.87696}{-0.47044} \meshline{black}{-0.707}{0.70711}{-0.73873}{0.66846} \meshline{black}{-0.92387}{-0.38268}{-0.9003}{-0.42678} \meshline{black}{-0.73873}{0.66846}{-0.77014}{0.6302} \meshline{black}{-0.93781}{-0.33673}{-0.92387}{-0.38268} \meshline{black}{-0.95189}{-0.29031}{-0.93781}{-0.33673} \meshline{black}{-0.77014}{0.6302}{-0.80093}{0.59269} \meshline{black}{-0.80093}{0.59269}{-0.8314}{0.55557} \meshline{black}{-0.96626}{-0.24294}{-0.95189}{-0.29031} \meshline{black}{-0.8314}{0.55557}{-0.85498}{0.51148} \meshline{black}{-0.98078}{-0.19509}{-0.96626}{-0.24294} \meshline{black}{-0.98549}{-0.1473}{-0.98078}{-0.19509} \meshline{black}{-0.85498}{0.51148}{-0.87833}{0.46782} \meshline{black}{-0.99024}{-0.09902}{-0.98549}{-0.1473} \meshline{black}{-0.87833}{0.46782}{-0.90122}{0.42504} \meshline{black}{-0.9951}{-0.04976}{-0.99024}{-0.09902} \meshline{black}{-0.90122}{0.42504}{-0.92387}{0.38268} \meshline{black}{-0.92387}{0.38268}{-0.93839}{0.33484} \meshline{black}{-0.93839}{0.33484}{-0.95276}{0.28746} 
\meshline{black}{-0.95276}{0.28746}{-0.96684}{0.24104} \meshline{black}{-1}{0}{-0.9951}{-0.04976} \meshline{black}{-0.96684}{0.24104}{-0.98078}{0.19509} \meshline{black}{-0.98078}{0.19509}{-0.98568}{0.14533} \meshline{black}{-0.98568}{0.14533}{-0.99054}{0.09607} \meshline{black}{-0.99529}{0.04779}{-1}{0} \meshline{black}{-0.99054}{0.09607}{-0.99529}{0.04779} \end{tikzpicture} \begin{tikzpicture} \pgfsetxvec{\pgfpoint{0.8cm}{0cm}} \pgfsetyvec{\pgfpoint{0cm}{0.8cm}} \meshline{black}{0.70886}{0.03911}{0.71297}{0.04459} \meshline{black}{0.71297}{0.04459}{0.73241}{0.07526} \meshline{black}{0.76355}{0.14263}{0.7698}{0.1617}\meshline{black}{0.77336}{0.18226}{0.7698}{0.1617} \meshline{black}{0.75648}{0.12196}{0.73833}{0.08438} \meshline{black}{0.76355}{0.14263}{0.75648}{0.12196} \meshline{black}{0.70886}{0.03911}{0.68029}{0.00785} \meshline{black}{0.68029}{0.00785}{0.67589}{0.00351} \meshline{black}{0.73756}{0.08294}{0.73241}{0.07526} \meshline{black}{0.73756}{0.08294}{0.73833}{0.08438} \meshline{black}{0.78147}{0.23056}{0.78333}{0.24868} \meshline{black}{0.77888}{0.20414}{0.77336}{0.18226} \meshline{black}{0.78147}{0.23056}{0.77888}{0.20414} \meshline{black}{0.65475}{-0.01247}{0.67589}{0.00351}\meshline{black}{0.78333}{0.24868}{0.78299}{0.26064} \meshline{black}{0.65475}{-0.01247}{0.6404}{-0.0241} \meshline{black}{0.78299}{0.26064}{0.78414}{0.28867} \meshline{black}{0.6404}{-0.0241}{0.63659}{-0.02634} \meshline{black}{0.63659}{-0.02634}{0.60306}{-0.04572} \meshline{black}{0.78146}{0.31675}{0.78414}{0.28867} \meshline{black}{0.78049}{0.3369}{0.78146}{0.31675} \meshline{black}{0.60306}{-0.04572}{0.57358}{-0.0565} \meshline{black}{0.57358}{-0.0565}{0.5635}{-0.06031} \meshline{black}{0.77171}{0.38114}{0.78049}{0.3369} \meshline{black}{0.55443}{-0.06191}{0.5635}{-0.06031} \meshline{black}{0.771}{0.38629}{0.77171}{0.38114} \meshline{black}{0.55443}{-0.06191}{0.52243}{-0.07047} \meshline{black}{0.7641}{0.41083}{0.771}{0.38629} \meshline{black}{0.7641}{0.41083}{0.75824}{0.43243}\meshline{black}{0.52243}{-0.07047}{0.48829}{-0.07305} \meshline{black}{0.48829}{-0.07305}{0.47927}{-0.07451} \meshline{black}{0.75672}{0.43623}{0.75824}{0.43243} \meshline{black}{0.47927}{-0.07451}{0.44712}{-0.07395} \meshline{black}{0.74186}{0.47727}{0.75672}{0.43623} \meshline{black}{0.43452}{-0.07393}{0.44712}{-0.07395}\meshline{black}{0.74186}{0.47727}{0.73928}{0.48277} \meshline{black}{0.43452}{-0.07393}{0.43166}{-0.07345} \meshline{black}{0.38841}{-0.0693}{0.43166}{-0.07345} \meshline{black}{0.73928}{0.48277}{0.71955}{0.52437} \meshline{black}{0.38082}{-0.06742}{0.38841}{-0.0693}\meshline{black}{0.71955}{0.52437}{0.7191}{0.52514}\meshline{black}{0.38082}{-0.06742}{0.3408}{-0.0599} \meshline{black}{0.7191}{0.52514}{0.71503}{0.53209} \meshline{black}{0.69494}{0.56692}{0.71503}{0.53209} \meshline{black}{0.33348}{-0.05754}{0.3408}{-0.0599}\meshline{black}{0.69494}{0.56692}{0.6934}{0.56961} \meshline{black}{0.33348}{-0.05754}{0.29187}{-0.04602} \meshline{black}{0.66375}{0.61229}{0.67637}{0.59398} \meshline{black}{0.6934}{0.56961}{0.67637}{0.59398} \meshline{black}{0.28875}{-0.04479}{0.29187}{-0.04602} \meshline{black}{0.66375}{0.61229}{0.66231}{0.61408}\meshline{black}{0.27188}{-0.03869}{0.24606}{-0.02977} \meshline{black}{0.27188}{-0.03869}{0.28875}{-0.04479} \meshline{black}{0.66231}{0.61408}{0.65829}{0.61879} \meshline{black}{0.65829}{0.61879}{0.63084}{0.65304} \meshline{black}{0.24606}{-0.02977}{0.24189}{-0.0284} \meshline{black}{0.61714}{0.66755}{0.63084}{0.65304} \meshline{black}{0.24189}{-0.0284}{0.20492}{-0.01297} 
\meshline{black}{0.60087}{0.68426}{0.61714}{0.66755} \meshline{black}{0.20492}{-0.01297}{0.19082}{-0.00708} \meshline{black}{0.60087}{0.68426}{0.57808}{0.70512}\meshline{black}{0.57808}{0.70512}{0.5744}{0.7087} \meshline{black}{0.19082}{-0.00708}{0.16507}{0.00535} \meshline{black}{0.5744}{0.7087}{0.56662}{0.71513} \meshline{black}{0.16507}{0.00535}{0.13825}{0.0187} \meshline{black}{0.54312}{0.73504}{0.56662}{0.71513} \meshline{black}{0.13825}{0.0187}{0.12641}{0.02509} \meshline{black}{0.54312}{0.73504}{0.53716}{0.73973}\meshline{black}{0.0998}{0.04013}{0.08876}{0.04603} \meshline{black}{0.12641}{0.02509}{0.0998}{0.04013} \meshline{black}{0.50529}{0.76308}{0.52213}{0.75048} \meshline{black}{0.53716}{0.73973}{0.52213}{0.75048}\meshline{black}{0.08417}{0.0487}{0.08876}{0.04603} \meshline{black}{0.47895}{0.77961}{0.50529}{0.76308} \meshline{black}{0.47895}{0.77961}{0.47307}{0.78342}\meshline{black}{0.0517}{0.06768}{0.08417}{0.0487} \meshline{black}{0.47307}{0.78342}{0.47133}{0.78442} \meshline{black}{0.0517}{0.06768}{0.02878}{0.08224} \meshline{black}{0.43015}{0.80718}{0.47133}{0.78442} \meshline{black}{0.01552}{0.09035}{0.02878}{0.08224}\meshline{black}{0.43015}{0.80718}{0.42227}{0.81132}\meshline{black}{0.38615}{0.82807}{0.40701}{0.81829} \meshline{black}{0.40701}{0.81829}{0.42227}{0.81132} \meshline{black}{-0.01993}{0.11386}{-0.00658}{0.10516} \meshline{black}{-0.00658}{0.10516}{0.01552}{0.09035} \meshline{black}{0.38615}{0.82807}{0.35631}{0.83992}\meshline{black}{0.35631}{0.83992}{0.35074}{0.84192} \meshline{black}{-0.02808}{0.11993}{-0.01993}{0.11386} \meshline{black}{0.35074}{0.84192}{0.31097}{0.85478} \meshline{black}{0.30873}{0.85537}{0.31097}{0.85478} \meshline{black}{-0.05469}{0.13816}{-0.02808}{0.11993} \meshline{black}{0.30873}{0.85537}{0.2648}{0.86635}\meshline{black}{-0.08683}{0.16319}{-0.05469}{0.13816} \meshline{black}{0.26337}{0.8666}{0.2648}{0.86635} \meshline{black}{-0.08804}{0.16403}{-0.08683}{0.16319}\meshline{black}{0.26337}{0.8666}{0.21766}{0.87481}\meshline{black}{0.21692}{0.87489}{0.21766}{0.87481} \meshline{black}{-0.08955}{0.16524}{-0.1208}{0.19055} \meshline{black}{-0.08804}{0.16403}{-0.08955}{0.16524} \meshline{black}{0.21228}{0.87531}{0.21692}{0.87489} \meshline{black}{-0.14894}{0.21657}{-0.1208}{0.19055} \meshline{black}{0.21228}{0.87531}{0.17192}{0.87969} \meshline{black}{0.16288}{0.88041}{0.12368}{0.88081} \meshline{black}{0.17192}{0.87969}{0.16288}{0.88041} \meshline{black}{-0.15185}{0.21896}{-0.14894}{0.21657}\meshline{black}{0.10404}{0.88079}{0.07138}{0.87792} \meshline{black}{0.12368}{0.88081}{0.10404}{0.88079}\meshline{black}{-0.15185}{0.21896}{-0.15472}{0.22159} \meshline{black}{-0.15472}{0.22159}{-0.18195}{0.24841} \meshline{black}{0.048}{0.87567}{0.07138}{0.87792} \meshline{black}{0.02478}{0.87131}{0.048}{0.87567} \meshline{black}{-0.18195}{0.24841}{-0.20609}{0.2751} \meshline{black}{-0.20609}{0.2751}{-0.2101}{0.28004} \meshline{black}{-0.02696}{0.85867}{-0.0001}{0.86682} \meshline{black}{0.02478}{0.87131}{-0.0001}{0.86682}\meshline{black}{-0.07669}{0.84173}{-0.08651}{0.83801} \meshline{black}{-0.2101}{0.28004}{-0.21786}{0.2906} \meshline{black}{-0.08651}{0.83801}{-0.09947}{0.83123} \meshline{black}{-0.04523}{0.85403}{-0.02696}{0.85867} \meshline{black}{-0.04523}{0.85403}{-0.07669}{0.84173} \meshline{black}{-0.21786}{0.2906}{-0.23665}{0.31354} \meshline{black}{-0.09947}{0.83123}{-0.12423}{0.81976} \meshline{black}{-0.24625}{0.32632}{-0.23665}{0.31354} \meshline{black}{-0.24625}{0.32632}{-0.26089}{0.34974} 
\meshline{black}{-0.12423}{0.81976}{-0.13442}{0.81451} \meshline{black}{-0.13442}{0.81451}{-0.16404}{0.79583} \meshline{black}{-0.27607}{0.37547}{-0.26089}{0.34974}\meshline{black}{-0.27607}{0.37547}{-0.28225}{0.38937} \meshline{black}{-0.16404}{0.79583}{-0.18161}{0.78357} \meshline{black}{-0.20029}{0.76806}{-0.18161}{0.78357} \meshline{black}{-0.29698}{0.42289}{-0.28225}{0.38937}\meshline{black}{-0.29698}{0.42289}{-0.29991}{0.43349} \meshline{black}{-0.23305}{0.73592}{-0.21936}{0.75108} \meshline{black}{-0.20029}{0.76806}{-0.21936}{0.75108} \meshline{black}{-0.29991}{0.43349}{-0.31037}{0.46888} \meshline{black}{-0.31215}{0.48416}{-0.31037}{0.46888} \meshline{black}{-0.26464}{0.6955}{-0.25049}{0.71663} \meshline{black}{-0.23305}{0.73592}{-0.25049}{0.71663} \meshline{black}{-0.31667}{0.51349}{-0.31215}{0.48416} \meshline{black}{-0.31667}{0.51349}{-0.31545}{0.54541} \meshline{black}{-0.3128}{0.57104}{-0.30891}{0.5987} \meshline{black}{-0.30891}{0.5987}{-0.29751}{0.63108} \meshline{black}{-0.27616}{0.67898}{-0.29271}{0.6439} \meshline{black}{-0.26464}{0.6955}{-0.27616}{0.67898} \meshline{black}{-0.31545}{0.54541}{-0.31558}{0.55662} \meshline{black}{-0.31558}{0.55662}{-0.3128}{0.57104} \meshline{black}{-0.29751}{0.63108}{-0.29503}{0.63942} \meshline{black}{-0.29271}{0.6439}{-0.29503}{0.63942} \meshline{black}{0.61867}{0.17817}{0.62684}{0.19171} \meshline{black}{0.6424}{0.22736}{0.62684}{0.19171} \meshline{black}{0.58769}{0.13703}{0.60165}{0.15354}\meshline{black}{0.60165}{0.15354}{0.61867}{0.17817} \meshline{black}{0.65703}{0.27246}{0.64508}{0.23351} \meshline{black}{0.66259}{0.31466}{0.65703}{0.27246} \meshline{black}{0.64445}{0.23151}{0.6424}{0.22736} \meshline{black}{0.55355}{0.10569}{0.56714}{0.11732} \meshline{black}{0.58769}{0.13703}{0.56714}{0.11732} \meshline{black}{0.64508}{0.23351}{0.64445}{0.23151} \meshline{black}{0.51677}{0.08245}{0.51914}{0.08389} \meshline{black}{0.55355}{0.10569}{0.51914}{0.08389} \meshline{black}{0.66265}{0.31499}{0.66259}{0.31466} \meshline{black}{0.66428}{0.35848}{0.66266}{0.31549} \meshline{black}{0.66266}{0.31549}{0.66265}{0.31499} \meshline{black}{0.66171}{0.37943}{0.66428}{0.35848} \meshline{black}{0.51293}{0.08094}{0.51677}{0.08245}\meshline{black}{0.65982}{0.40298}{0.66171}{0.37943} \meshline{black}{0.51293}{0.08094}{0.47824}{0.06457} \meshline{black}{0.65262}{0.43425}{0.65982}{0.40298} \meshline{black}{0.44499}{0.05575}{0.47824}{0.06457} \meshline{black}{0.43741}{0.0536}{0.44499}{0.05575} \meshline{black}{0.64972}{0.44797}{0.65262}{0.43425} \meshline{black}{0.43219}{0.0531}{0.43741}{0.0536} \meshline{black}{0.63825}{0.48131}{0.64972}{0.44797} \meshline{black}{0.43219}{0.0531}{0.39513}{0.04699} \meshline{black}{0.63408}{0.49308}{0.63825}{0.48131} \meshline{black}{0.39513}{0.04699}{0.37092}{0.04721} \meshline{black}{0.63408}{0.49308}{0.62075}{0.52205}\meshline{black}{0.37092}{0.04721}{0.35091}{0.0463} \meshline{black}{0.6124}{0.53839}{0.62075}{0.52205} \meshline{black}{0.35091}{0.0463}{0.31731}{0.04998} \meshline{black}{0.60092}{0.55817}{0.6124}{0.53839} \meshline{black}{0.31731}{0.04998}{0.30501}{0.0509} \meshline{black}{0.58077}{0.58787}{0.60092}{0.55817} \meshline{black}{0.30501}{0.0509}{0.2687}{0.05842} \meshline{black}{0.58077}{0.58787}{0.5773}{0.59272}\meshline{black}{0.25749}{0.06064}{0.2687}{0.05842} \meshline{black}{0.5773}{0.59272}{0.57095}{0.60016}\meshline{black}{0.55091}{0.62546}{0.57095}{0.60016} \meshline{black}{0.22358}{0.07087}{0.25749}{0.06064}\meshline{black}{0.53617}{0.64064}{0.55091}{0.62546} 
\meshline{black}{0.22358}{0.07087}{0.20825}{0.07575} \meshline{black}{0.52191}{0.65546}{0.53617}{0.64064} \meshline{black}{0.20825}{0.07575}{0.18112}{0.0864} \meshline{black}{0.52191}{0.65546}{0.5057}{0.66912}\meshline{black}{0.49016}{0.6831}{0.5057}{0.66912} \meshline{black}{0.15703}{0.09686}{0.18112}{0.0864} \meshline{black}{0.49016}{0.6831}{0.46896}{0.69831} \meshline{black}{0.14092}{0.10453}{0.15703}{0.09686} \meshline{black}{0.45581}{0.70822}{0.46896}{0.69831} \meshline{black}{0.14092}{0.10453}{0.10352}{0.12469} \meshline{black}{0.45581}{0.70822}{0.44644}{0.71391}\meshline{black}{0.44644}{0.71391}{0.41878}{0.731} \meshline{black}{0.10352}{0.12469}{0.10282}{0.12508} \meshline{black}{0.39095}{0.74483}{0.41878}{0.731} \meshline{black}{0.1015}{0.1259}{0.06594}{0.14702} \meshline{black}{0.37856}{0.75077}{0.39095}{0.74483} \meshline{black}{0.10282}{0.12508}{0.1015}{0.1259} \meshline{black}{0.06594}{0.14702}{0.04739}{0.16016} \meshline{black}{0.3381}{0.76647}{0.37856}{0.75077} \meshline{black}{0.33697}{0.76686}{0.3381}{0.76647} \meshline{black}{0.03097}{0.17113}{0.04739}{0.16016} \meshline{black}{0.32469}{0.77034}{0.33697}{0.76686} \meshline{black}{0.0099}{0.18787}{0.03097}{0.17113}\meshline{black}{-0.00222}{0.19726}{0.0099}{0.18787} \meshline{black}{0.32469}{0.77034}{0.2929}{0.77961}\meshline{black}{-0.01258}{0.20705}{-0.00222}{0.19726} \meshline{black}{0.2929}{0.77961}{0.28983}{0.7804} \meshline{black}{0.28983}{0.7804}{0.24507}{0.7885} \meshline{black}{0.20028}{0.79268}{0.23973}{0.78902} \meshline{black}{-0.03376}{0.22523}{-0.01258}{0.20705}\meshline{black}{0.19058}{0.79231}{0.20028}{0.79268} \meshline{black}{0.24507}{0.7885}{0.24417}{0.78866} \meshline{black}{0.24417}{0.78866}{0.23973}{0.78902} \meshline{black}{-0.03376}{0.22523}{-0.05162}{0.24357} \meshline{black}{-0.05162}{0.24357}{-0.06307}{0.25571} \meshline{black}{0.19058}{0.79231}{0.15547}{0.79247} \meshline{black}{0.1216}{0.78769}{0.15547}{0.79247} \meshline{black}{-0.06307}{0.25571}{-0.08054}{0.27785} \meshline{black}{0.1216}{0.78769}{0.10648}{0.78627} \meshline{black}{0.06242}{0.77556}{0.08049}{0.77982} \meshline{black}{-0.09008}{0.2888}{-0.08054}{0.27785}\meshline{black}{0.08049}{0.77982}{0.10648}{0.78627} \meshline{black}{0.04742}{0.76943}{0.06242}{0.77556} \meshline{black}{-0.09008}{0.2888}{-0.09514}{0.29555} \meshline{black}{-0.09514}{0.29555}{-0.11456}{0.32477} \meshline{black}{0.04742}{0.76943}{0.01957}{0.75997} \meshline{black}{-0.02097}{0.73883}{-0.00252}{0.74852} \meshline{black}{0.01957}{0.75997}{-0.00252}{0.74852} \meshline{black}{-0.11456}{0.32477}{-0.12641}{0.34503} \meshline{black}{-0.12641}{0.34503}{-0.13561}{0.3647} \meshline{black}{-0.049}{0.71891}{-0.02097}{0.73883} \meshline{black}{-0.14754}{0.39248}{-0.13561}{0.3647}\meshline{black}{-0.14754}{0.39248}{-0.15238}{0.40962} \meshline{black}{-0.049}{0.71891}{-0.05417}{0.71562}\meshline{black}{-0.08581}{0.68722}{-0.05651}{0.71377} \meshline{black}{-0.15238}{0.40962}{-0.16043}{0.43832} \meshline{black}{-0.16043}{0.43832}{-0.16299}{0.46175} \meshline{black}{-0.05417}{0.71562}{-0.05651}{0.71377}\meshline{black}{-0.14118}{0.60717}{-0.14382}{0.59978} \meshline{black}{-0.14118}{0.60717}{-0.1295}{0.62736} \meshline{black}{-0.11644}{0.64987}{-0.0921}{0.68089} \meshline{black}{-0.08581}{0.68722}{-0.0921}{0.68089}\meshline{black}{-0.16582}{0.4827}{-0.16299}{0.46175}\meshline{black}{-0.16349}{0.52551}{-0.16582}{0.4827} \meshline{black}{-0.15634}{0.5673}{-0.16348}{0.52555} \meshline{black}{-0.14382}{0.59978}{-0.15634}{0.5673} 
\meshline{black}{-0.11995}{0.64521}{-0.1295}{0.62736}\meshline{black}{-0.11644}{0.64987}{-0.11995}{0.64521} \meshline{black}{-0.16349}{0.52552}{-0.16349}{0.52551}\meshline{black}{-0.16348}{0.52555}{-0.16349}{0.52552} \meshline{black}{0.27183}{-0.37802}{0.27412}{-0.38217} \meshline{black}{0.27183}{-0.37802}{0.26348}{-0.36359} \meshline{black}{0.20013}{-0.27031}{0.21142}{-0.28348} \meshline{black}{0.29316}{-0.42924}{0.28937}{-0.41971} \meshline{black}{0.27412}{-0.38217}{0.28937}{-0.41971} \meshline{black}{0.24735}{-0.33375}{0.22656}{-0.30363} \meshline{black}{0.183}{-0.25229}{0.20013}{-0.27031} \meshline{black}{0.22656}{-0.30363}{0.21142}{-0.28348} \meshline{black}{0.30518}{-0.47922}{0.30224}{-0.46781} \meshline{black}{0.26348}{-0.36359}{0.25059}{-0.33949} \meshline{black}{0.30224}{-0.46781}{0.29316}{-0.42924} \meshline{black}{0.24735}{-0.33375}{0.25059}{-0.33949} \meshline{black}{0.17183}{-0.23906}{0.183}{-0.25229} \meshline{black}{0.30643}{-0.49824}{0.30518}{-0.47922} \meshline{black}{0.16401}{-0.23084}{0.14213}{-0.20938} \meshline{black}{0.16401}{-0.23084}{0.17183}{-0.23906} \meshline{black}{0.30643}{-0.49824}{0.30854}{-0.52326}\meshline{black}{0.14213}{-0.20938}{0.11778}{-0.18828} \meshline{black}{0.30854}{-0.52326}{0.30542}{-0.56173} \meshline{black}{0.11084}{-0.18151}{0.11778}{-0.18828}\meshline{black}{0.29558}{-0.60743}{0.29779}{-0.59702} \meshline{black}{0.30542}{-0.56173}{0.30511}{-0.56444} \meshline{black}{0.10366}{-0.17543}{0.07862}{-0.15465} \meshline{black}{0.11084}{-0.18151}{0.10366}{-0.17543} \meshline{black}{0.29534}{-0.60824}{0.29558}{-0.60743} \meshline{black}{0.30511}{-0.56444}{0.29779}{-0.59702} \meshline{black}{0.07862}{-0.15465}{0.05756}{-0.13927} \meshline{black}{0.29534}{-0.60824}{0.29338}{-0.61308} \meshline{black}{0.29338}{-0.61308}{0.27996}{-0.64857}\meshline{black}{0.05756}{-0.13927}{0.04511}{-0.12923} \meshline{black}{0.02715}{-0.11662}{0.01063}{-0.10486} \meshline{black}{0.02715}{-0.11662}{0.04511}{-0.12923}\meshline{black}{0.2771}{-0.65413}{0.27996}{-0.64857} \meshline{black}{0.25975}{-0.68569}{0.2771}{-0.65413} \meshline{black}{0.01063}{-0.10486}{-0.00026}{-0.09788} \meshline{black}{0.25975}{-0.68569}{0.24356}{-0.70733}\meshline{black}{0.24356}{-0.70733}{0.23404}{-0.71991} \meshline{black}{-0.02457}{-0.08126}{-0.00026}{-0.09788} \meshline{black}{-0.05653}{-0.06154}{-0.02457}{-0.08126} \meshline{black}{0.22043}{-0.73313}{0.23404}{-0.71991} \meshline{black}{0.20452}{-0.75097}{0.22043}{-0.73313} \meshline{black}{-0.06081}{-0.0588}{-0.05653}{-0.06154}\meshline{black}{0.1861}{-0.76606}{0.20452}{-0.75097} \meshline{black}{-0.06959}{-0.05373}{-0.09746}{-0.03678} \meshline{black}{-0.06959}{-0.05373}{-0.06081}{-0.0588} \meshline{black}{0.17143}{-0.7784}{0.1861}{-0.76606} \meshline{black}{-0.09746}{-0.03678}{-0.11177}{-0.02863} \meshline{black}{0.17143}{-0.7784}{0.16022}{-0.78518} \meshline{black}{0.13509}{-0.80295}{0.16022}{-0.78518} \meshline{black}{-0.11177}{-0.02863}{-0.13509}{-0.01581} \meshline{black}{0.10229}{-0.81887}{0.13509}{-0.80295} \meshline{black}{-0.13509}{-0.01581}{-0.16567}{0.00001} \meshline{black}{0.10229}{-0.81887}{0.09072}{-0.82553} \meshline{black}{-0.17388}{0.00389}{-0.16567}{0.00001} \meshline{black}{0.09072}{-0.82553}{0.06933}{-0.83352} \meshline{black}{-0.20078}{0.01608}{-0.21383}{0.02234} \meshline{black}{-0.20078}{0.01608}{-0.17388}{0.00389} \meshline{black}{0.06933}{-0.83352}{0.04612}{-0.84253} \meshline{black}{-0.21824}{0.02446}{-0.21383}{0.02234} \meshline{black}{0.03363}{-0.84538}{0.04612}{-0.84253} 
\meshline{black}{0.00126}{-0.85493}{0.03363}{-0.84538} \meshline{black}{-0.25514}{0.03928}{-0.21824}{0.02446}\meshline{black}{-0.0183}{-0.85811}{0.00126}{-0.85493} \meshline{black}{-0.26967}{0.04535}{-0.25514}{0.03928} \meshline{black}{-0.05312}{-0.86272}{-0.05758}{-0.8636}\meshline{black}{-0.05312}{-0.86272}{-0.0183}{-0.85811} \meshline{black}{-0.29817}{0.05435}{-0.26967}{0.04535}\meshline{black}{-0.05758}{-0.8636}{-0.06331}{-0.86404}\meshline{black}{-0.31964}{0.06181}{-0.29817}{0.05435} \meshline{black}{-0.09828}{-0.86677}{-0.06331}{-0.86404} \meshline{black}{-0.34327}{0.06713}{-0.31964}{0.06181}\meshline{black}{-0.11347}{-0.86639}{-0.09828}{-0.86677} \meshline{black}{-0.11347}{-0.86639}{-0.14032}{-0.86735} \meshline{black}{-0.36819}{0.07393}{-0.34327}{0.06713} \meshline{black}{-0.16732}{-0.86576}{-0.14032}{-0.86735} \meshline{black}{-0.36819}{0.07393}{-0.39104}{0.07694} \meshline{black}{-0.18346}{-0.8655}{-0.16732}{-0.86576} \meshline{black}{-0.39104}{0.07694}{-0.41529}{0.08166} \meshline{black}{-0.21802}{-0.86212}{-0.18346}{-0.8655} \meshline{black}{-0.21802}{-0.86212}{-0.22783}{-0.86136}\meshline{black}{-0.44245}{0.08265}{-0.41529}{0.08166} \meshline{black}{-0.26631}{-0.85581}{-0.22783}{-0.86136} \meshline{black}{-0.26631}{-0.85581}{-0.27362}{-0.8547} \meshline{black}{-0.44245}{0.08265}{-0.46083}{0.08471} \meshline{black}{-0.31206}{-0.84693}{-0.27362}{-0.8547} \meshline{black}{-0.49913}{0.08238}{-0.46083}{0.08471}\meshline{black}{-0.31206}{-0.84693}{-0.32112}{-0.84475} \meshline{black}{-0.50461}{0.08252}{-0.49913}{0.08238} \meshline{black}{-0.35479}{-0.83541}{-0.32112}{-0.84475} \meshline{black}{-0.37176}{-0.82948}{-0.35479}{-0.83541} \meshline{black}{-0.50461}{0.08252}{-0.51632}{0.08062} \meshline{black}{-0.54705}{0.07631}{-0.51632}{0.08062}\meshline{black}{-0.3959}{-0.82031}{-0.37176}{-0.82948} \meshline{black}{-0.3959}{-0.82031}{-0.42689}{-0.80625} \meshline{black}{-0.54705}{0.07631}{-0.56666}{0.06967} \meshline{black}{-0.43806}{-0.80072}{-0.42689}{-0.80625} \meshline{black}{-0.43806}{-0.80072}{-0.47203}{-0.78236} \meshline{black}{-0.56666}{0.06967}{-0.58765}{0.06446} \meshline{black}{-0.47487}{-0.78076}{-0.47203}{-0.78236} \meshline{black}{-0.50859}{-0.75959}{-0.47487}{-0.78076} \meshline{black}{-0.61282}{-0.67301}{-0.59911}{-0.68589} \meshline{black}{-0.61429}{-0.67138}{-0.61282}{-0.67301} \meshline{black}{-0.58765}{0.06446}{-0.60685}{0.05564} \meshline{black}{-0.62641}{0.04685}{-0.60685}{0.05564} \meshline{black}{-0.50927}{-0.75914}{-0.54303}{-0.7347} \meshline{black}{-0.54521}{-0.73299}{-0.57871}{-0.7051} \meshline{black}{-0.50927}{-0.75914}{-0.50859}{-0.75959} \meshline{black}{-0.64477}{-0.63834}{-0.64049}{-0.64357} \meshline{black}{-0.54521}{-0.73299}{-0.54303}{-0.7347} \meshline{black}{-0.66732}{-0.61068}{-0.67274}{-0.60289}\meshline{black}{-0.64049}{-0.64357}{-0.61429}{-0.67138} \meshline{black}{-0.59911}{-0.68589}{-0.58096}{-0.70318} \meshline{black}{-0.66732}{-0.61068}{-0.64477}{-0.63834} \meshline{black}{-0.62641}{0.04685}{-0.66198}{0.02295} \meshline{black}{-0.66306}{0.0222}{-0.66198}{0.02295}\meshline{black}{-0.58096}{-0.70318}{-0.57871}{-0.7051} \meshline{black}{-0.69271}{-0.57504}{-0.69926}{-0.56413} \meshline{black}{-0.69271}{-0.57504}{-0.67274}{-0.60289} \meshline{black}{-0.71603}{-0.53751}{-0.72374}{-0.52211} \meshline{black}{-0.66937}{0.01633}{-0.66306}{0.0222}\meshline{black}{-0.71603}{-0.53751}{-0.69926}{-0.56413} \meshline{black}{-0.7381}{-0.49523}{-0.74649}{-0.47452} \meshline{black}{-0.7381}{-0.49523}{-0.72374}{-0.52211} 
\meshline{black}{-0.66937}{0.01633}{-0.69804}{-0.0091} \meshline{black}{-0.75566}{-0.4529}{-0.76206}{-0.43227}\meshline{black}{-0.74649}{-0.47452}{-0.75566}{-0.4529} \meshline{black}{-0.77768}{-0.37968}{-0.77384}{-0.39379} \meshline{black}{-0.77901}{-0.37081}{-0.77768}{-0.37968} \meshline{black}{-0.69804}{-0.0091}{-0.70194}{-0.01353} \meshline{black}{-0.76206}{-0.43227}{-0.76697}{-0.41902} \meshline{black}{-0.73096}{-0.05105}{-0.70194}{-0.01353} \meshline{black}{-0.737}{-0.0625}{-0.75746}{-0.09969} \meshline{black}{-0.76697}{-0.41902}{-0.77384}{-0.39379} \meshline{black}{-0.75746}{-0.09969}{-0.76147}{-0.10828} \meshline{black}{-0.78967}{-0.23805}{-0.78982}{-0.23896} \meshline{black}{-0.78984}{-0.24068}{-0.78982}{-0.23896} \meshline{black}{-0.78634}{-0.33629}{-0.77901}{-0.37081} \meshline{black}{-0.78851}{-0.3046}{-0.78634}{-0.33629} \meshline{black}{-0.77807}{-0.15916}{-0.77563}{-0.15084} \meshline{black}{-0.77563}{-0.15084}{-0.76147}{-0.10828} \meshline{black}{-0.73257}{-0.05356}{-0.73096}{-0.05105} \meshline{black}{-0.73257}{-0.05356}{-0.737}{-0.0625} \meshline{black}{-0.78546}{-0.19334}{-0.78967}{-0.23805} \meshline{black}{-0.79082}{-0.28648}{-0.78984}{-0.24068} \meshline{black}{-0.77807}{-0.15916}{-0.78546}{-0.19334} \meshline{black}{-0.79082}{-0.28648}{-0.78851}{-0.3046} \meshline{black}{0.08003}{-0.27906}{0.09645}{-0.30233} \meshline{black}{0.12365}{-0.35079}{0.10421}{-0.31493} \meshline{black}{0.05724}{-0.25138}{0.05294}{-0.24647} \meshline{black}{0.14177}{-0.39728}{0.12518}{-0.35443} \meshline{black}{0.09645}{-0.30233}{0.10421}{-0.31493} \meshline{black}{0.05724}{-0.25138}{0.08003}{-0.27906}\meshline{black}{0.12365}{-0.35079}{0.12518}{-0.35443}\meshline{black}{0.15235}{-0.44159}{0.14205}{-0.39845} \meshline{black}{0.05294}{-0.24647}{0.04687}{-0.24061} \meshline{black}{0.14177}{-0.39728}{0.14205}{-0.39845}\meshline{black}{0.15613}{-0.47996}{0.15283}{-0.44836} \meshline{black}{0.15388}{-0.51693}{0.15395}{-0.5119} \meshline{black}{0.15283}{-0.44836}{0.15235}{-0.44159} \meshline{black}{0.15349}{-0.51972}{0.15388}{-0.51693} \meshline{black}{0.04687}{-0.24061}{0.02393}{-0.21604} \meshline{black}{0.15395}{-0.5119}{0.15613}{-0.47996} \meshline{black}{-0.00739}{-0.18819}{0.00281}{-0.19727} \meshline{black}{0.00281}{-0.19727}{0.02393}{-0.21604} \meshline{black}{0.15349}{-0.51972}{0.14551}{-0.56073} \meshline{black}{0.1426}{-0.57033}{0.14551}{-0.56073} \meshline{black}{-0.00739}{-0.18819}{-0.01693}{-0.18105} \meshline{black}{0.1426}{-0.57033}{0.12607}{-0.60933}\meshline{black}{-0.04027}{-0.16207}{-0.01693}{-0.18105} \meshline{black}{0.12477}{-0.61202}{0.12607}{-0.60933} \meshline{black}{-0.07528}{-0.13836}{-0.04027}{-0.16207} \meshline{black}{0.1208}{-0.61791}{0.12477}{-0.61202} \meshline{black}{0.1208}{-0.61791}{0.10261}{-0.64794} \meshline{black}{-0.0753}{-0.13834}{-0.07528}{-0.13836}\meshline{black}{-0.07535}{-0.13831}{-0.11137}{-0.11574} \meshline{black}{0.10261}{-0.64794}{0.09218}{-0.66026} \meshline{black}{-0.0753}{-0.13834}{-0.07535}{-0.13831} \meshline{black}{0.07667}{-0.67858}{0.09218}{-0.66026} \meshline{black}{-0.13053}{-0.10537}{-0.11137}{-0.11574} \meshline{black}{0.07667}{-0.67858}{0.05629}{-0.69577} \meshline{black}{-0.14946}{-0.09538}{-0.13053}{-0.10537}\meshline{black}{0.05629}{-0.69577}{0.04719}{-0.70446} \meshline{black}{-0.18341}{-0.07981}{-0.14946}{-0.09538} \meshline{black}{0.03427}{-0.71331}{0.04719}{-0.70446} \meshline{black}{0.01917}{-0.72414}{0.03427}{-0.71331} 
\meshline{black}{-0.18958}{-0.07726}{-0.18341}{-0.07981}\meshline{black}{0.01917}{-0.72414}{0.00427}{-0.73195} \meshline{black}{-0.22099}{-0.06575}{-0.23188}{-0.06154} \meshline{black}{0.00427}{-0.73195}{-0.01046}{-0.74066}\meshline{black}{-0.22099}{-0.06575}{-0.18958}{-0.07726}\meshline{black}{-0.02163}{-0.74567}{-0.01046}{-0.74066} \meshline{black}{-0.23188}{-0.06154}{-0.23424}{-0.06067} \meshline{black}{-0.04692}{-0.75663}{-0.02163}{-0.74567} \meshline{black}{-0.23424}{-0.06067}{-0.27693}{-0.04884} \meshline{black}{-0.07526}{-0.76453}{-0.04692}{-0.75663} \meshline{black}{-0.07526}{-0.76453}{-0.08671}{-0.7685}\meshline{black}{-0.27693}{-0.04884}{-0.28329}{-0.04707} \meshline{black}{-0.08671}{-0.7685}{-0.10108}{-0.77132} \meshline{black}{-0.28329}{-0.04707}{-0.32559}{-0.04016} \meshline{black}{-0.12753}{-0.77672}{-0.10108}{-0.77132} \meshline{black}{-0.32559}{-0.04016}{-0.33044}{-0.03924} \meshline{black}{-0.14344}{-0.77787}{-0.12753}{-0.77672} \meshline{black}{-0.14344}{-0.77787}{-0.16933}{-0.781}\meshline{black}{-0.33044}{-0.03924}{-0.35849}{-0.03786} \meshline{black}{-0.35849}{-0.03786}{-0.37581}{-0.03676} \meshline{black}{-0.20127}{-0.78094}{-0.16933}{-0.781} \meshline{black}{-0.21248}{-0.78123}{-0.20127}{-0.78094} \meshline{black}{-0.37581}{-0.03676}{-0.37929}{-0.03711} \meshline{black}{-0.25301}{-0.77786}{-0.21248}{-0.78123} \meshline{black}{-0.37929}{-0.03711}{-0.41965}{-0.03887} \meshline{black}{-0.25301}{-0.77786}{-0.25724}{-0.77751} \meshline{black}{-0.44164}{-0.04384}{-0.41965}{-0.03887}\meshline{black}{-0.30033}{-0.77007}{-0.25724}{-0.77751} \meshline{black}{-0.30033}{-0.77007}{-0.30404}{-0.76931} \meshline{black}{-0.44164}{-0.04384}{-0.46148}{-0.04691} \meshline{black}{-0.30404}{-0.76931}{-0.34417}{-0.75843} \meshline{black}{-0.3533}{-0.75535}{-0.34417}{-0.75843} \meshline{black}{-0.48514}{-0.0552}{-0.46148}{-0.04691} \meshline{black}{-0.50136}{-0.06065}{-0.48514}{-0.0552}\meshline{black}{-0.38495}{-0.74346}{-0.3533}{-0.75535} \meshline{black}{-0.40466}{-0.7342}{-0.38495}{-0.74346} \meshline{black}{-0.52327}{-0.07272}{-0.50136}{-0.06065}\meshline{black}{-0.45532}{-0.7063}{-0.44923}{-0.7097} \meshline{black}{-0.45532}{-0.7063}{-0.46159}{-0.70175}\meshline{black}{-0.42204}{-0.72585}{-0.40466}{-0.7342} \meshline{black}{-0.52327}{-0.07272}{-0.53916}{-0.08028} \meshline{black}{-0.44923}{-0.7097}{-0.42204}{-0.72585} \meshline{black}{-0.4896}{-0.68258}{-0.46159}{-0.70175} \meshline{black}{-0.60875}{-0.55133}{-0.60623}{-0.55542} \meshline{black}{-0.52629}{-0.65137}{-0.50225}{-0.67259} \meshline{black}{-0.53916}{-0.08028}{-0.54716}{-0.08585} \meshline{black}{-0.54716}{-0.08585}{-0.57463}{-0.10614} \meshline{black}{-0.50225}{-0.67259}{-0.4896}{-0.68258} \meshline{black}{-0.57184}{-0.60314}{-0.58073}{-0.59219} \meshline{black}{-0.60875}{-0.55133}{-0.61432}{-0.54063}\meshline{black}{-0.53317}{-0.64495}{-0.55336}{-0.62473} \meshline{black}{-0.58073}{-0.59219}{-0.60623}{-0.55542} \meshline{black}{-0.53317}{-0.64495}{-0.52629}{-0.65137} \meshline{black}{-0.57184}{-0.60314}{-0.55336}{-0.62473} \meshline{black}{-0.5895}{-0.12056}{-0.57463}{-0.10614} \meshline{black}{-0.60694}{-0.13979}{-0.5895}{-0.12056}\meshline{black}{-0.61432}{-0.54063}{-0.63012}{-0.5128} \meshline{black}{-0.63597}{-0.50085}{-0.64696}{-0.47297} \meshline{black}{-0.65345}{-0.45635}{-0.66187}{-0.42323}\meshline{black}{-0.63597}{-0.50085}{-0.63012}{-0.5128} \meshline{black}{-0.60694}{-0.13979}{-0.61997}{-0.15718} \meshline{black}{-0.63524}{-0.18246}{-0.61997}{-0.15718}\meshline{black}{-0.65688}{-0.23264}{-0.6514}{-0.22009} 
\meshline{black}{-0.65902}{-0.24086}{-0.65688}{-0.23264}\meshline{black}{-0.67247}{-0.31601}{-0.6703}{-0.30172} \meshline{black}{-0.64696}{-0.47297}{-0.65345}{-0.45635} \meshline{black}{-0.67182}{-0.35039}{-0.67247}{-0.31601} \meshline{black}{-0.66567}{-0.40974}{-0.67126}{-0.3655}\meshline{black}{-0.66567}{-0.40974}{-0.66187}{-0.42323} \meshline{black}{-0.64167}{-0.19439}{-0.63524}{-0.18246} \meshline{black}{-0.64167}{-0.19439}{-0.6514}{-0.22009} \meshline{black}{-0.65902}{-0.24086}{-0.66757}{-0.27293} \meshline{black}{-0.66757}{-0.27293}{-0.6703}{-0.30172} \meshline{black}{-0.67182}{-0.35039}{-0.67182}{-0.36179} \meshline{black}{-0.67182}{-0.36179}{-0.67126}{-0.3655} \meshline{black}{1}{0}{0.99529}{-0.04779} \meshline{black}{1}{0}{0.9951}{0.04976} \meshline{black}{0.99054}{-0.09606}{0.99529}{-0.04779} \meshline{black}{0.9951}{0.04976}{0.99025}{0.09903} \meshline{black}{0.98569}{-0.14533}{0.99054}{-0.09606} \meshline{black}{0.99025}{0.09903}{0.98549}{0.1473} \meshline{black}{0.98078}{-0.19509}{0.98569}{-0.14533} \meshline{black}{0.98549}{0.1473}{0.98078}{0.19509} \meshline{black}{0.96685}{-0.24104}{0.98078}{-0.19509} \meshline{black}{0.98078}{0.19509}{0.96627}{0.24294} \meshline{black}{0.95276}{-0.28746}{0.96685}{-0.24104} \meshline{black}{0.96627}{0.24294}{0.9519}{0.29031} \meshline{black}{0.93839}{-0.33483}{0.95276}{-0.28746} \meshline{black}{0.9519}{0.29031}{0.93781}{0.33673} \meshline{black}{0.92388}{-0.38268}{0.93839}{-0.33483} \meshline{black}{0.93781}{0.33673}{0.92387}{0.38268} \meshline{black}{0.90124}{-0.42503}{0.92388}{-0.38268} \meshline{black}{0.92387}{0.38268}{0.9003}{0.42678} \meshline{black}{0.87837}{-0.46781}{0.90124}{-0.42503} \meshline{black}{0.9003}{0.42678}{0.87696}{0.47044} \meshline{black}{0.65571}{-0.40449}{0.63243}{-0.39208} \meshline{black}{0.70562}{-0.43653}{0.68299}{-0.4251} \meshline{black}{0.61492}{-0.38205}{0.63243}{-0.39208} \meshline{black}{0.85504}{-0.51147}{0.87837}{-0.46781} \meshline{black}{0.74017}{-0.45831}{0.70562}{-0.43653} \meshline{black}{0.58723}{-0.3632}{0.57273}{-0.35445} \meshline{black}{0.56668}{-0.35179}{0.57273}{-0.35445} \meshline{black}{0.65571}{-0.40449}{0.66909}{-0.41417} \meshline{black}{0.81119}{-0.50175}{0.80411}{-0.50124} \meshline{black}{0.66909}{-0.41417}{0.68299}{-0.4251} \meshline{black}{0.87696}{0.47044}{0.8541}{0.51322} \meshline{black}{0.85504}{ -0.51147}{0.81119}{-0.50175} \meshline{black}{0.77527}{-0.47986}{0.7407}{-0.45881} \meshline{black}{0.60274}{-0.37342}{0.61492}{-0.38205}\meshline{black}{0.74039}{-0.4585}{0.74017}{-0.45831} \meshline{black}{0.80411}{-0.50124}{0.77527}{-0.47986} \meshline{black}{0.60274}{-0.37342}{0.58723}{-0.3632} \meshline{black}{0.74039}{-0.4585}{0.7407}{-0.45881}\meshline{black}{0.53814}{-0.33272}{0.56668}{-0.35179} \meshline{black}{0.53814}{-0.33272}{0.51447}{-0.31983} \meshline{black}{0.83147}{-0.55557}{0.85504}{-0.51147} \meshline{black}{0.48039}{-0.29649}{0.46581}{-0.288} \meshline{black}{0.46581}{-0.288}{0.45927}{-0.28486} \meshline{black}{0.8541}{0.51322}{0.83146}{0.55557} \meshline{black}{0.51447}{-0.31983}{0.50204}{-0.31047} \meshline{black}{0.50204}{-0.31047}{0.48039}{-0.29649} \meshline{black}{0.42971}{-0.2653}{0.45927}{-0.28486} \meshline{black}{0.80101}{-0.59269}{0.83147}{-0.55557} \meshline{black}{0.40372}{-0.25052}{0.42971}{-0.2653} \meshline{black}{0.83146}{0.55557}{0.79974}{0.59422} \meshline{black}{0.35712}{-0.22032}{0.37633}{-0.23177} \meshline{black}{0.39336}{-0.24287}{0.40372}{-0.25052}\meshline{black}{0.34772}{-0.21544}{0.35712}{-0.22032} 
\meshline{black}{0.37633}{-0.23177}{0.39336}{-0.24287}\meshline{black}{0.77023}{-0.63018}{0.80101}{-0.59269} \meshline{black}{0.79974}{0.59422}{0.76833}{0.63249} \meshline{black}{0.321}{-0.19771}{0.34772}{-0.21544} \meshline{black}{0.29167}{-0.18035}{0.321}{-0.19771} \meshline{black}{0.73883}{-0.66845}{0.77023}{-0.63018} \meshline{black}{0.76833}{0.63249}{0.73756}{0.66999} \meshline{black}{0.28473}{-0.17532}{0.29167}{-0.18035} \meshline{black}{0.24863}{-0.1528}{0.27337}{-0.16789} \meshline{black}{0.27337}{-0.16789}{0.28473}{-0.17532}\meshline{black}{0.70711}{-0.7071}{0.73883}{-0.66845} \meshline{black}{0.24863}{-0.1528}{0.23601}{-0.14579} \meshline{black}{0.73756}{0.66999}{0.7071}{0.7071} \meshline{black}{0.21256}{-0.1303}{0.23601}{-0.14579} \meshline{black}{0.66999}{-0.73756}{0.70711}{-0.7071} \meshline{black}{0.21256}{-0.1303}{0.18042}{-0.11077} \meshline{black}{0.7071}{0.7071}{0.66839}{0.73875} \meshline{black}{0.17637}{-0.10794}{0.18042}{-0.11077}\meshline{black}{0.63249}{-0.76834}{0.66999}{-0.73756} \meshline{black}{0.66839}{0.73875}{0.6301}{0.77006} \meshline{black}{0.16938}{-0.10344}{0.14028}{-0.08545} \meshline{black}{0.16938}{-0.10344}{0.17637}{-0.10794} \meshline{black}{0.59422}{-0.79974}{0.63249}{-0.76834} \meshline{black}{0.14028}{-0.08545}{0.12517}{-0.07652} \meshline{black}{0.6301}{0.77006}{0.59263}{0.8007} \meshline{black}{0.10416}{-0.06295}{0.12517}{-0.07652}\meshline{black}{0.55557}{-0.83146}{0.59422}{-0.79974} \meshline{black}{0.10416}{-0.06295}{0.06958}{-0.04159} \meshline{black}{0.59263}{0.8007}{0.55557}{0.831} \meshline{black}{0.06793}{-0.0405}{0.06958}{-0.04159}\meshline{black}{0.51323}{-0.85407}{0.55557}{-0.83146} \meshline{black}{0.03172}{-0.018}{0.06492}{-0.03862} \meshline{black}{0.06793}{-0.0405}{0.06492}{-0.03862} \meshline{black}{0.55557}{0.831}{0.51152}{0.85465} \meshline{black}{0.47045}{-0.87692}{0.51323}{-0.85407} \meshline{black}{0.03172}{-0.018}{0.01391}{-0.00688} \meshline{black}{0.51152}{0.85465}{0.46788}{0.87807} \meshline{black}{-0.00442}{0.00463}{0.01391}{-0.00688} \meshline{black}{0.42679}{-0.90024}{0.47045}{-0.87692} \meshline{black}{-0.03858}{0.02577}{-0.04073}{0.02709} \meshline{black}{-0.03858}{0.02577}{-0.00442}{0.00463} \meshline{black}{0.46788}{0.87807}{0.42507}{0.90105} \meshline{black}{-0.04073}{0.02709}{-0.04189}{0.02783} \meshline{black}{0.38268}{-0.9238}{0.42679}{-0.90024} \meshline{black}{-0.07705}{0.04958}{-0.04189}{0.02783} \meshline{black}{0.42507}{0.90105}{0.38268}{0.9238} \meshline{black}{-0.07705}{0.04958}{-0.09787}{0.06287} \meshline{black}{0.33673}{-0.93776}{0.38268}{-0.9238} \meshline{black}{0.38268}{0.9238}{0.33484}{0.93831} \meshline{black}{-0.09787}{0.06287}{-0.1133}{0.07215} \meshline{black}{0.2903}{-0.95186}{0.33673}{-0.93776} \meshline{black}{-0.14224}{0.09005}{-0.14957}{0.09472} \meshline{black}{-0.14224}{0.09005}{-0.1133}{0.07215} \meshline{black}{0.33484}{0.93831}{0.28746}{0.95268} \meshline{black}{0.24293}{-0.96625}{0.2903}{-0.95186} \meshline{black}{-0.15381}{0.09762}{-0.14957}{0.09472} \meshline{black}{0.28746}{0.95268}{0.24104}{0.96676} \meshline{black}{-0.18593}{0.1172}{-0.15381}{0.09762}\meshline{black}{0.19509}{-0.98078}{0.24293}{-0.96625} \meshline{black}{0.24104}{0.96676}{0.19509}{0.9807} \meshline{black}{-0.18593}{0.1172}{-0.20992}{0.13291} \meshline{black}{0.1473}{-0.98549}{0.19509}{-0.98078} \meshline{black}{-0.22218}{0.13981}{-0.20992}{0.13291} \meshline{black}{0.19509}{0.9807}{0.14533}{0.98562} \meshline{black}{0.09902}{-0.99024}{0.1473}{-0.98549} \meshline{black}{-0.24603}{0.15436}{-0.25845}{0.16242} 
\meshline{black}{-0.24603}{0.15436}{-0.22218}{0.13981}\meshline{black}{0.14533}{0.98562}{0.09607}{0.9905} \meshline{black}{-0.25845}{0.16242}{-0.26588}{0.16773} \meshline{black}{0.04976}{-0.9951}{0.09902}{-0.99024} \meshline{black}{0.09607}{0.9905}{0.04779}{0.99527} \meshline{black}{-0.26588}{0.16773}{-0.29484}{0.1849} \meshline{black}{0}{-1}{0.04976}{-0.9951} \meshline{black}{0.04779}{0.99527}{0}{1} \meshline{black}{-0.29484}{0.1849}{-0.32194}{0.2028} \meshline{black}{-0.05951}{-0.98078}{0}{-1} \meshline{black}{-0.32194}{0.2028}{-0.33109}{0.2076} \meshline{black}{0}{1}{-0.04976}{0.9951} \meshline{black}{-0.36746}{0.23024}{-0.34982}{0.21878} \meshline{black}{-0.33109}{0.2076}{-0.34982}{0.21878} \meshline{black}{-0.1026}{-0.97319}{-0.05951}{-0.98078} \meshline{black}{-0.04976}{0.9951}{-0.09901}{0.99024} \meshline{black}{-0.37814}{0.23804}{-0.36746}{0.23024} \meshline{black}{-0.14672}{-0.96542}{-0.1026}{-0.97319} \meshline{black}{-0.09901}{0.99024}{-0.14726}{0.98549} \meshline{black}{-0.40397}{0.25281}{-0.37814}{0.23804}\meshline{black}{-0.195}{0.98078}{-0.24283}{0.96622} \meshline{black}{-0.14726}{0.98549}{-0.195}{0.98078} \meshline{black}{-0.19186}{-0.95747}{-0.14672}{-0.96542} \meshline{black}{-0.40397}{0.25281}{-0.43425}{0.27274} \meshline{black}{-0.24283}{0.96622}{-0.29013}{0.95183} \meshline{black}{-0.23803}{-0.94934}{-0.19186}{-0.95747} \meshline{black}{-0.43425}{0.27274}{-0.44031}{0.27569} \meshline{black}{-0.29013}{0.95183}{-0.33633}{0.93777} \meshline{black}{-0.44031}{0.27569}{-0.45353}{0.28336} \meshline{black}{-0.45353}{0.28336}{-0.47692}{0.29839} \meshline{black}{-0.28522}{-0.94103}{-0.23803}{-0.94934} \meshline{black}{-0.33633}{0.93777}{-0.382}{0.92387} \meshline{black}{-0.49087}{0.30873}{-0.47692}{0.29839} \meshline{black}{-0.33344}{-0.93254}{-0.28522}{-0.94103} \meshline{black}{-0.51366}{0.32114}{-0.49087}{0.30873}\meshline{black}{-0.382}{0.92387}{-0.4261}{0.90031} \meshline{black}{-0.38268}{-0.92387}{-0.33344}{-0.93254} \meshline{black}{-0.51366}{0.32114}{-0.54693}{0.34286} \meshline{black}{-0.42503}{-0.90123}{-0.38268}{-0.92387} \meshline{black}{-0.46781}{-0.87837}{-0.42503}{-0.90123} \meshline{black}{-0.54693}{0.34286}{-0.55027}{0.34433} \meshline{black}{-0.4261}{0.90031}{-0.46978}{0.87698} \meshline{black}{-0.51147}{-0.85503}{-0.46781}{-0.87837} \meshline{black}{-0.55839}{0.34885}{-0.55027}{0.34433}\meshline{black}{-0.55839}{0.34885}{-0.58747}{0.36723} \meshline{black}{-0.46978}{0.87698}{-0.5126}{0.85411} \meshline{black}{-0.55557}{-0.83146}{-0.51147}{-0.85503} \meshline{black}{-0.60441}{0.38006}{-0.58747}{0.36723} \meshline{black}{-0.59268}{-0.801}{-0.55557}{-0.83146} \meshline{black}{-0.63018}{-0.77023}{-0.59268}{-0.801} \meshline{black}{-0.5126}{0.85411}{-0.555}{0.83146} \meshline{black}{-0.62486}{0.3904}{-0.60441}{0.38006}\meshline{black}{-0.66845}{-0.73882}{-0.63018}{-0.77023} \meshline{black}{-0.66156}{0.41383}{-0.62486}{0.3904} \meshline{black}{-0.7071}{-0.7071}{-0.66845}{-0.73882} \meshline{black}{-0.66156}{0.41383}{-0.66178}{0.414} \meshline{black}{-0.555}{0.83146}{-0.5937}{0.7998} \meshline{black}{-0.73756}{-0.66999}{-0.7071}{-0.7071} \meshline{black}{-0.76833}{-0.63249}{-0.73756}{-0.66999} \meshline{black}{-0.662}{0.4142}{-0.66178}{0.414} \meshline{black}{-0.5937}{0.7998}{-0.63205}{0.76843} \meshline{black}{-0.79974}{-0.59422}{-0.76833}{-0.63249} \meshline{black}{-0.6987}{0.43703}{-0.662}{0.4142}\meshline{black}{-0.83146}{-0.55557}{-0.79974}{-0.59422} \meshline{black}{-0.63205}{0.76843}{-0.6697}{0.73762} \meshline{black}{-0.8541}{-0.51322}{-0.83146}{-0.55557} 
\meshline{black}{-0.72826}{0.45741}{-0.6987}{0.43703} \meshline{black}{-0.6697}{0.73762}{-0.707}{0.70711} \meshline{black}{-0.87696}{-0.47044}{-0.8541}{-0.51322} \meshline{black}{-0.73443}{0.45977}{-0.72826}{0.45741} \meshline{black}{-0.9003}{-0.42678}{-0.87696}{-0.47044} \meshline{black}{-0.73443}{0.45977}{-0.74655}{0.46495} \meshline{black}{-0.74655}{0.46495}{-0.77037}{0.48226} \meshline{black}{-0.707}{0.70711}{-0.73873}{0.66846} \meshline{black}{-0.92387}{-0.38268}{-0.9003}{-0.42678} \meshline{black}{-0.80164}{0.50414}{-0.77037}{0.48226} \meshline{black}{-0.93781}{-0.33673}{-0.92387}{-0.38268} \meshline{black}{-0.73873}{0.66846}{-0.77014}{0.6302} \meshline{black}{-0.95189}{-0.29031}{-0.93781}{-0.33673} \meshline{black}{-0.77014}{0.6302}{-0.80093}{0.59269} \meshline{black}{-0.80762}{0.50482}{-0.80164}{0.50414} \meshline{black}{-0.80093}{0.59269}{-0.8314}{0.55557} \meshline{black}{-0.96626}{-0.24294}{-0.95189}{-0.29031} \meshline{black}{-0.85498}{0.51148}{-0.80762}{0.50482} \meshline{black}{-0.8314}{0.55557}{-0.85498}{0.51148} \meshline{black}{-0.98078}{-0.19509}{-0.96626}{-0.24294} \meshline{black}{-0.98549}{-0.1473}{-0.98078}{-0.19509} \meshline{black}{-0.85498}{0.51148}{-0.87833}{0.46782} \meshline{black}{-0.99024}{-0.09902}{-0.98549}{-0.1473} \meshline{black}{-0.87833}{0.46782}{-0.90122}{0.42504} \meshline{black}{-0.9951}{-0.04976}{-0.99024}{-0.09902} \meshline{black}{-0.90122}{0.42504}{-0.92387}{0.38268} \meshline{black}{-0.92387}{0.38268}{-0.93839}{0.33484} \meshline{black}{-0.93839}{0.33484}{-0.95276}{0.28746} \meshline{black}{-1}{0}{-0.9951}{-0.04976} \meshline{black}{-0.95276}{0.28746}{-0.96684}{0.24104} \meshline{black}{-0.96684}{0.24104}{-0.98078}{0.19509} \meshline{black}{-0.98078}{0.19509}{-0.98568}{0.14533} \meshline{black}{-0.99054}{0.09607}{-0.99529}{0.04779} \meshline{black}{-0.98568}{0.14533}{-0.99054}{0.09607} \meshline{black}{-0.99529}{0.04779}{-1}{0}\meshline{black}{0.98078}{0.19509}{0.96627}{0.24294} \meshline{black}{0.95276}{-0.28746}{0.96685}{-0.24104} \meshline{black}{0.96627}{0.24294}{0.9519}{0.29031} \meshline{black}{0.93839}{-0.33483}{0.95276}{-0.28746} \meshline{black}{0.9519}{0.29031}{0.93781}{0.33673} \meshline{black}{0.92388}{-0.38268}{0.93839}{-0.33483} \meshline{black}{0.93781}{0.33673}{0.92387}{0.38268} \meshline{black}{0.90124}{-0.42503}{0.92388}{-0.38268} \meshline{black}{0.92387}{0.38268}{0.9003}{0.42678} \meshline{black}{0.87837}{-0.46781}{0.90124}{-0.42503} \meshline{black}{0.9003}{0.42678}{0.87696}{0.47044} \meshline{black}{0.85504}{-0.51147}{0.87837}{-0.46781} \meshline{black}{0.87696}{0.47044}{0.8541}{0.51322} \meshline{black}{0.83147}{-0.55557}{0.85504}{-0.51147} \meshline{black}{0.8541}{0.51322}{0.83146}{0.55557} \meshline{black}{0.80101}{-0.59269}{0.83147}{-0.55557} \meshline{black}{0.77023}{-0.63018}{0.80101}{-0.59269} \meshline{black}{0.83146}{0.55557}{0.79974}{0.59422} \meshline{black}{0.73883}{-0.66845}{0.77023}{-0.63018} \meshline{black}{0.79974}{0.59422}{0.76833}{0.63249} \meshline{black}{0.70711}{-0.7071}{0.73883}{-0.66845} \meshline{black}{0.76833}{0.63249}{0.73756}{0.66999} \meshline{black}{0.66999}{-0.73756}{0.70711}{-0.7071} \meshline{black}{0.73756}{0.66999}{0.7071}{0.7071} \meshline{black}{0.63249}{-0.76834}{0.66999}{-0.73756} \meshline{black}{0.7071}{0.7071}{0.66839}{0.73875} \meshline{black}{0.59422}{-0.79974}{0.63249}{-0.76834} \meshline{black}{0.66839}{0.73875}{0.6301}{0.77006} \meshline{black}{0.55557}{-0.83146}{0.59422}{-0.79974} \meshline{black}{0.6301}{0.77006}{0.59263}{0.8007} 
\meshline{black}{0.51323}{-0.85407}{0.55557}{-0.83146} \meshline{black}{0.59263}{0.8007}{0.55557}{0.831} \meshline{black}{0.47045}{-0.87692}{0.51323}{-0.85407} \meshline{black}{0.55557}{0.831}{0.51152}{0.85465} \meshline{black}{0.42679}{-0.90024}{0.47045}{-0.87692} \meshline{black}{0.51152}{0.85465}{0.46788}{0.87807} \meshline{black}{0.38268}{-0.9238}{0.42679}{-0.90024} \meshline{black}{0.46788}{0.87807}{0.42507}{0.90105} \meshline{black}{0.33673}{-0.93776}{0.38268}{-0.9238} \meshline{black}{0.42507}{0.90105}{0.38268}{0.9238} \meshline{black}{0.38268}{0.9238}{0.33484}{0.93831} \meshline{black}{0.2903}{-0.95186}{0.33673}{-0.93776} \meshline{black}{0.33484}{0.93831}{0.28746}{0.95268} \meshline{black}{0.24293}{-0.96625}{0.2903}{-0.95186} \meshline{black}{0.28746}{0.95268}{0.24104}{0.96676} \meshline{black}{0.19509}{-0.98078}{0.24293}{-0.96625} \meshline{black}{0.24104}{0.96676}{0.19509}{0.9807} \meshline{black}{0.1473}{-0.98549}{0.19509}{-0.98078} \meshline{black}{0.19509}{0.9807}{0.14533}{0.98562} \meshline{black}{0.09902}{-0.99024}{0.1473}{-0.98549} \meshline{black}{0.14533}{0.98562}{0.09607}{0.9905} \meshline{black}{0.04976}{-0.9951}{0.09902}{-0.99024} \meshline{black}{0.09607}{0.9905}{0.04779}{0.99527} \meshline{black}{0.04779}{0.99527}{0}{1} \meshline{black}{0}{-1}{0.04976}{-0.9951} \meshline{black}{0}{1}{-0.04976}{0.9951} \meshline{black}{-0.04779}{-0.99529}{0}{-1} \meshline{black}{-0.04976}{0.9951}{-0.09901}{0.99024} \meshline{black}{-0.09607}{-0.99054}{-0.04779}{-0.99529} \meshline{black}{-0.09901}{0.99024}{-0.14726}{0.98549} \meshline{black}{-0.14726}{0.98549}{-0.195}{0.98078} \meshline{black}{-0.14533}{-0.98568}{-0.09607}{-0.99054} \meshline{black}{-0.195}{0.98078}{-0.24283}{0.96622} \meshline{black}{-0.19509}{-0.98078}{-0.14533}{-0.98568} \meshline{black}{-0.24283}{0.96622}{-0.29013}{0.95183} \meshline{black}{-0.24104}{-0.96684}{-0.19509}{-0.98078} \meshline{black}{-0.29013}{0.95183}{-0.33633}{0.93777} \meshline{black}{-0.28746}{-0.95276}{-0.24104}{-0.96684} \meshline{black}{-0.33633}{0.93777}{-0.382}{0.92387} \meshline{black}{-0.33483}{-0.93839}{-0.28746}{-0.95276} \meshline{black}{-0.38268}{-0.92387}{-0.33483}{-0.93839} \meshline{black}{-0.382}{0.92387}{-0.4261}{0.90031} \meshline{black}{-0.42503}{-0.90123}{-0.38268}{-0.92387} \meshline{black}{-0.46781}{-0.87837}{-0.42503}{-0.90123} \meshline{black}{-0.4261}{0.90031}{-0.46978}{0.87698} \meshline{black}{-0.51147}{-0.85503}{-0.46781}{-0.87837} \meshline{black}{-0.55557}{-0.83146}{-0.51147}{-0.85503} \meshline{black}{-0.59268}{-0.801}{-0.55557}{-0.83146} \meshline{black}{-0.46978}{0.87698}{-0.5126}{0.85411} \meshline{black}{-0.63018}{-0.77023}{-0.59268}{-0.801} \meshline{black}{-0.66845}{-0.73882}{-0.63018}{-0.77023} \meshline{black}{-0.5126}{0.85411}{-0.555}{0.83146} \meshline{black}{-0.7071}{-0.7071}{-0.66845}{-0.73882} \meshline{black}{-0.555}{0.83146}{-0.5937}{0.7998} \meshline{black}{-0.73756}{-0.66999}{-0.7071}{-0.7071} \meshline{black}{-0.76833}{-0.63249}{-0.73756}{-0.66999} \meshline{black}{-0.5937}{0.7998}{-0.63205}{0.76843} \meshline{black}{-0.79974}{-0.59422}{-0.76833}{-0.63249} \meshline{black}{-0.83146}{-0.55557}{-0.79974}{-0.59422} \meshline{black}{-0.63205}{0.76843}{-0.6697}{0.73762} \meshline{black}{-0.8541}{-0.51322}{-0.83146}{-0.55557} \meshline{black}{-0.6697}{0.73762}{-0.707}{0.70711} \meshline{black}{-0.87696}{-0.47044}{-0.8541}{-0.51322} \meshline{black}{-0.707}{0.70711}{-0.73873}{0.66846} \meshline{black}{-0.9003}{-0.42678}{-0.87696}{-0.47044} \meshline{black}{-0.73873}{0.66846}{-0.77014}{0.6302} 
\end{tikzpicture}
\null
\end{center}
\null
\end{minipage}
\begin{minipage}[h]{0.5\linewidth}
\begin{itemize}
\item For g.s.: $\max (u)=4.15$, $\mathcal{E}_4 (u)= 29.9$
\item For l.e.n.s.: $\min(u)= -6.36$, $\max (u)= 6.36$, $\mathcal{E}_4(u)=76.04$
\item Starting function for g.s.: \\ $\cos(\pi (x^2+y^2)^{0.5}/2)$
\item Starting function for l.e.n.s.: \\ $\cos(\pi(x^2+y^2)^{0.5}/2)$ $\cos(2\pi(x^{2}+y^{2})^{0.5})$ $\cos(\pi (x^{2}+y^{2})^{0.5})$
\end{itemize}
\end{minipage}
\null

Second, consider $V(x,y)= \frac{1}{\sqrt{(x-0.5)^2+y^2}}$ on the ball $B(0,1)$. Ground state solutions appear to be even with respect to a direction, but they are not radial. The effect of the singularity can be seen on the level curve $1$. Least energy nodal solutions are only odd with respect to a direction. The mass is slightly attracted toward the side $x<0$ (the side that minimizes the energy).\\
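To make the symmetry statements above concrete, the following is a minimal numerical sketch of how the evenness or oddness of a computed solution with respect to the $x$-axis could be checked, together with an evaluation of the singular potential on $B(0,1)$. It assumes Python with NumPy; the callable \texttt{u}, the sample size, and the cutoff around the singularity are illustrative choices, not taken from the text, and the radial starting function from the list above is used only as a stand-in for an actual computed solution.
\begin{verbatim}
import numpy as np

# Stand-in for a computed solution on the unit disk; here we simply reuse the
# ground-state starting function listed above so the script runs end to end.
def u(x, y):
    return np.cos(np.pi * np.sqrt(x**2 + y**2) / 2)

# Singular potential of the second example: V(x, y) = 1 / sqrt((x-0.5)^2 + y^2).
def V(x, y):
    return 1.0 / np.sqrt((x - 0.5)**2 + y**2)

# Sample points inside the ball B(0, 1), excluding a small neighbourhood of the
# singularity at (0.5, 0) (cutoff 1e-3 is an arbitrary illustrative choice).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(10000, 2))
pts = pts[np.linalg.norm(pts, axis=1) < 1.0]
pts = pts[np.hypot(pts[:, 0] - 0.5, pts[:, 1]) > 1e-3]
x, y = pts[:, 0], pts[:, 1]

# Deviation from evenness / oddness with respect to the x-axis (y -> -y).
even_dev = np.max(np.abs(u(x, y) - u(x, -y)))
odd_dev = np.max(np.abs(u(x, y) + u(x, -y)))
print(f"max |u(x,y) - u(x,-y)| = {even_dev:.2e}  (zero for an even solution)")
print(f"max |u(x,y) + u(x,-y)| = {odd_dev:.2e}  (zero for an odd solution)")
print(f"max of V over the sample: {V(x, y).max():.1f}")
\end{verbatim}
For the radial stand-in, the even deviation vanishes identically, while an odd (nodal) solution would instead make the second quantity vanish; the large sampled values of $V$ reflect the singularity at $(0.5,0)$.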
\begin{minipage}[h]{0.6\linewidth}
\includegraphics[width=2.8cm, angle=270]{cercleMPAsing2.pdf}
\includegraphics[width=2.8cm, angle=270]{cercleMMPAsing2.pdf}
\null
\begin{center}
\begin{tikzpicture}
\draw[->] (0,0) -- (0.5,0);
\draw[->] (0,0) -- (0,.5);
\node[anchor=north] at (0.5,0) {$x$};
\node[anchor=east] at (0,0.5) {$y$};
\end{tikzpicture}
\begin{tikzpicture}
\pgfsetxvec{\pgfpoint{0.8cm}{0cm}}
\pgfsetyvec{\pgfpoint{0cm}{0.8cm}}
% (mesh data for the domain boundary and the plotted level curves omitted)
\fill (0.5,0) circle (1pt);
\draw[->] (0.5,0) -- (1.3, 0.3) node[anchor=west] {$(0.5,0)$};
\end{tikzpicture}
\begin{tikzpicture}
\pgfsetxvec{\pgfpoint{0.8cm}{0cm}}
\pgfsetyvec{\pgfpoint{0cm}{0.8cm}}
% (mesh data for the domain boundary and the plotted level curves omitted)
\meshline{black}{-0.51826}{-0.00999}{-0.52521}{-0.01097} \meshline{black}{-0.33633}{0.93777}{-0.382}{0.92387} \meshline{black}{-0.33344}{-0.93254}{-0.28522}{-0.94103} \meshline{black}{-0.52521}{-0.01097}{-0.56249}{-0.01074} \meshline{black}{-0.56249}{-0.01074}{-0.58297}{-0.01299} \meshline{black}{-0.382}{0.92387}{-0.4261}{0.90031} \meshline{black}{-0.38268}{-0.92387}{-0.33344}{-0.93254} \meshline{black}{-0.58297}{-0.01299}{-0.60674}{-0.01168} \meshline{black}{-0.42503}{-0.90123}{-0.38268}{-0.92387} \meshline{black}{-0.46781}{-0.87837}{-0.42503}{-0.90123} \meshline{black}{-0.4261}{0.90031}{-0.46978}{0.87698} \meshline{black}{-0.63975}{-0.01416}{-0.60674}{-0.01168}\meshline{black}{-0.51147}{-0.85503}{-0.46781}{-0.87837} \meshline{black}{-0.63975}{-0.01416}{-0.65124}{-0.01287} \meshline{black}{-0.46978}{0.87698}{-0.5126}{0.85411} \meshline{black}{-0.55557}{-0.83146}{-0.51147}{-0.85503} \meshline{black}{-0.59268}{-0.801}{-0.55557}{-0.83146} \meshline{black}{-0.69606}{-0.01445}{-0.65124}{-0.01287}\meshline{black}{-0.63018}{-0.77023}{-0.59268}{-0.801} \meshline{black}{-0.5126}{0.85411}{-0.555}{0.83146} \meshline{black}{-0.66845}{-0.73882}{-0.63018}{-0.77023} \meshline{black}{-0.69606}{-0.01445}{-0.6963}{-0.0144} \meshline{black}{-0.7071}{-0.7071}{-0.66845}{-0.73882} \meshline{black}{-0.555}{0.83146}{-0.5937}{0.7998} \meshline{black}{-0.73756}{-0.66999}{-0.7071}{-0.7071} \meshline{black}{-0.6963}{-0.0144}{-0.69749}{-0.01439} \meshline{black}{-0.74215}{-0.01491}{-0.69749}{-0.01439} \meshline{black}{-0.76833}{-0.63249}{-0.73756}{-0.66999} \meshline{black}{-0.5937}{0.7998}{-0.63205}{0.76843} \meshline{black}{-0.75667}{-0.01757}{-0.74215}{-0.01491}\meshline{black}{-0.79974}{-0.59422}{-0.76833}{-0.63249} \meshline{black}{-0.83146}{-0.55557}{-0.79974}{-0.59422} \meshline{black}{-0.75667}{-0.01757}{-0.78775}{-0.01579} \meshline{black}{-0.63205}{0.76843}{-0.6697}{0.73762} \meshline{black}{-0.8541}{-0.51322}{-0.83146}{-0.55557} \meshline{black}{-0.78775}{-0.01579}{-0.81812}{-0.01913} \meshline{black}{-0.6697}{0.73762}{-0.707}{0.70711} \meshline{black}{-0.87696}{-0.47044}{-0.8541}{-0.51322} \meshline{black}{-0.83132}{-0.01689}{-0.81812}{-0.01913} \meshline{black}{-0.9003}{-0.42678}{-0.87696}{-0.47044} \meshline{black}{-0.707}{0.70711}{-0.73873}{0.66846} \meshline{black}{-0.86607}{-0.0176}{-0.83132}{-0.01689} \meshline{black}{-0.92387}{-0.38268}{-0.9003}{-0.42678} \meshline{black}{-0.87168}{-0.01826}{-0.86607}{-0.0176} \meshline{black}{-0.93781}{-0.33673}{-0.92387}{-0.38268} \meshline{black}{-0.73873}{0.66846}{-0.77014}{0.6302} \meshline{black}{-0.87734}{-0.01974}{-0.87168}{-0.01826} \meshline{black}{-0.95189}{-0.29031}{-0.93781}{-0.33673} \meshline{black}{-0.77014}{0.6302}{-0.80093}{0.59269} \meshline{black}{-0.91094}{-0.01911}{-0.87734}{-0.01974} \meshline{black}{-0.80093}{0.59269}{-0.8314}{0.55557} \meshline{black}{-0.96626}{-0.24294}{-0.95189}{-0.29031} \meshline{black}{-0.8314}{0.55557}{-0.85498}{0.51148} \meshline{black}{-0.95109}{-0.02219}{-0.91094}{-0.01911}\meshline{black}{-0.98078}{-0.19509}{-0.96626}{-0.24294} \meshline{black}{-0.98549}{-0.1473}{-0.98078}{-0.19509} \meshline{black}{-0.85498}{0.51148}{-0.87833}{0.46782} \meshline{black}{-0.99024}{-0.09902}{-0.98549}{-0.1473} \meshline{black}{-0.95315}{-0.02106}{-0.95109}{-0.02219} \meshline{black}{-0.87833}{0.46782}{-0.90122}{0.42504} \meshline{black}{-0.9951}{-0.04976}{-0.99024}{-0.09902} \meshline{black}{-0.90122}{0.42504}{-0.92387}{0.38268} \meshline{black}{-0.92387}{0.38268}{-0.93839}{0.33484} 
\meshline{black}{-0.93839}{0.33484}{-0.95276}{0.28746} \meshline{black}{-1}{0}{-0.95315}{-0.02106} \meshline{black}{-1}{0}{-0.9951}{-0.04976} \meshline{black}{-0.95276}{0.28746}{-0.96684}{0.24104} \meshline{black}{-0.96684}{0.24104}{-0.98078}{0.19509} \meshline{black}{-0.98078}{0.19509}{-0.98568}{0.14533} \meshline{black}{-0.99054}{0.09607}{-0.99529}{0.04779} \meshline{black}{-0.98568}{0.14533}{-0.99054}{0.09607} \meshline{black}{-0.99529}{0.04779}{-1}{0} \meshline{black}{0.95276}{-0.28746}{0.96685}{-0.24104} \meshline{black}{0.96627}{0.24294}{0.9519}{0.29031} \meshline{black}{0.93839}{-0.33483}{0.95276}{-0.28746} \meshline{black}{0.9519}{0.29031}{0.93781}{0.33673} \meshline{black}{0.92388}{-0.38268}{0.93839}{-0.33483} \meshline{black}{0.93781}{0.33673}{0.92387}{0.38268} \meshline{black}{0.90124}{-0.42503}{0.92388}{-0.38268} \meshline{black}{0.92387}{0.38268}{0.9003}{0.42678} \meshline{black}{0.87837}{-0.46781}{0.90124}{-0.42503} \meshline{black}{0.9003}{0.42678}{0.87696}{0.47044} \meshline{black}{0.85504}{-0.51147}{0.87837}{-0.46781} \meshline{black}{0.87696}{0.47044}{0.8541}{0.51322} \meshline{black}{0.83147}{-0.55557}{0.85504}{-0.51147} \meshline{black}{0.8541}{0.51322}{0.83146}{0.55557} \meshline{black}{0.80101}{-0.59269}{0.83147}{-0.55557} \meshline{black}{0.77023}{-0.63018}{0.80101}{-0.59269} \meshline{black}{0.83146}{0.55557}{0.79974}{0.59422} \meshline{black}{0.73883}{-0.66845}{0.77023}{-0.63018} \meshline{black}{0.79974}{0.59422}{0.76833}{0.63249} \meshline{black}{0.70711}{-0.7071}{0.73883}{-0.66845} \meshline{black}{0.76833}{0.63249}{0.73756}{0.66999} \meshline{black}{0.66999}{-0.73756}{0.70711}{-0.7071} \meshline{black}{0.73756}{0.66999}{0.7071}{0.7071} \meshline{black}{0.63249}{-0.76834}{0.66999}{-0.73756} \meshline{black}{0.7071}{0.7071}{0.66839}{0.73875} \meshline{black}{0.59422}{-0.79974}{0.63249}{-0.76834} \meshline{black}{0.66839}{0.73875}{0.6301}{0.77006} \meshline{black}{0.55557}{-0.83146}{0.59422}{-0.79974} \meshline{black}{0.6301}{0.77006}{0.59263}{0.8007} \meshline{black}{0.51323}{-0.85407}{0.55557}{-0.83146} \meshline{black}{0.59263}{0.8007}{0.55557}{0.831} \meshline{black}{0.47045}{-0.87692}{0.51323}{-0.85407} \meshline{black}{0.55557}{0.831}{0.51152}{0.85465} \meshline{black}{0.42679}{-0.90024}{0.47045}{-0.87692} \meshline{black}{0.51152}{0.85465}{0.46788}{0.87807} \meshline{black}{0.38268}{-0.9238}{0.42679}{-0.90024} \meshline{black}{0.46788}{0.87807}{0.42507}{0.90105} \meshline{black}{0.33673}{-0.93776}{0.38268}{-0.9238} \meshline{black}{0.42507}{0.90105}{0.38268}{0.9238} \meshline{black}{0.38268}{0.9238}{0.33484}{0.93831} \meshline{black}{0.2903}{-0.95186}{0.33673}{-0.93776} \meshline{black}{0.33484}{0.93831}{0.28746}{0.95268} \meshline{black}{0.24293}{-0.96625}{0.2903}{-0.95186} \meshline{black}{0.28746}{0.95268}{0.24104}{0.96676} \meshline{black}{0.19509}{-0.98078}{0.24293}{-0.96625} \meshline{black}{0.24104}{0.96676}{0.19509}{0.9807} \meshline{black}{0.1473}{-0.98549}{0.19509}{-0.98078} \meshline{black}{0.19509}{0.9807}{0.14533}{0.98562} \meshline{black}{0.09902}{-0.99024}{0.1473}{-0.98549} \meshline{black}{0.14533}{0.98562}{0.09607}{0.9905} \meshline{black}{0.04976}{-0.9951}{0.09902}{-0.99024} \meshline{black}{0.09607}{0.9905}{0.04779}{0.99527} \meshline{black}{0.04779}{0.99527}{0}{1} \meshline{black}{0}{-1}{0.04976}{-0.9951} \meshline{black}{0}{1}{-0.04976}{0.9951} \meshline{black}{-0.04779}{-0.99529}{0}{-1} \meshline{black}{-0.04976}{0.9951}{-0.09901}{0.99024} \meshline{black}{-0.09607}{-0.99054}{-0.04779}{-0.99529} 
% (mesh line data omitted)
\end{tikzpicture} \null \end{center} \null \end{minipage} \begin{minipage}[h]{0.5\linewidth} \begin{itemize} \item For g.s.: $\max (u)=4.41 $, $\mathcal{E}_4 (u)= 18.74$ \item For l.e.n.s.: $\min(u)= -6.25$, $\max (u)= 6.25$, $\mathcal{E}_4(u)=76.23$ \item Starting function for g.s.: \\ $\cos(\pi (x^2+y^2)^{0.5}/2)$ \item Starting function for l.e.n.s.: \\ $\cos(\pi (x^{2}+y^{2})^{0.5}/2)$ $\cos(2\pi (x^{2}+y^{2})^{0.5})$ $\cos(\pi (x^{2}+y^{2})^{0.5})$ \end{itemize} \end{minipage} \null \end{document}
\begin{document} \title{Some examples of composition operators and their approximation numbers on the Hardy space of the bi-disk} \noindent {\bf Abstract.} We give examples of composition operators $C_\Phi$ on $H^2 (\D^2)$ showing that the condition $\|\Phi \|_\infty = 1$ is not sufficient for their approximation numbers $a_n (C_\Phi)$ to satisfy $\lim_{n \to \infty} [a_n (C_\Phi) ]^{1/\sqrt{n}} = 1$, contrary to the $1$-dimensional case. We also give a situation where this implication holds. We make a link with the Monge-Amp\`ere capacity of the image of $\Phi$. \noindent \emph{Key-words}\/: approximation numbers; Bergman space; bidisk; composition operator; Green capacity; Hardy space; Monge-Amp\`ere capacity; weighted composition operator. \noindent \emph{MSC~2010 numbers} -- \emph{Primary}\/: 47B33 -- \emph{Secondary}\/: 30H10 -- 30H20 -- 31B15 -- 32A35 -- 32U20 -- 41A35 -- 46B28 \section{Introduction and notation} \label{sec: intro} \subsection{Introduction} The purpose of this paper is to continue the study of composition operators on the polydisk initiated in \cite{BLQR}, and in particular to examine to what extent one of the main results of \cite{LQR} still holds. Let $H$ be a Hilbert space and $T \colon H\to H$ a bounded operator. Recall that the \emph{approximation numbers} of $T$ are defined as: \begin{displaymath} \qquad \qquad a_n (T) = \inf_{{\rm rank}\, R < n} \| T - R\| \, , \quad n \geq 1 \, , \end{displaymath} and we have: \begin{displaymath} \| T \| = a_1 (T) \geq a_2 (T) \geq \cdots \geq a_n (T) \geq \cdots \end{displaymath} The operator $T$ is compact if and only if $a_n (T) \converge_{n \to \infty} 0$. \par For $d \geq 1$, we define: \begin{displaymath} \raise 4 pt \hbox{$\Bigg\{ $} \begin{array}{rl} \beta_{d}^{-}(T) & \dis = \liminf_{n\to \infty}\big[a_{n^d}(T)\big]^{1/n} \\ \beta_{d}^{+}(T) & \dis = \limsup_{n\to \infty}\big[a_{n^d}(T)\big]^{1/n} \end{array} \end{displaymath} We have: \begin{displaymath} 0\leq \beta_{d}^{-}(T)\leq \beta_{d}^{+}(T) \leq 1 \, , \end{displaymath} and we simply write $\beta_{d}(T)$ in case of equality. It may well happen in general (consider diagonal operators) that $\beta_{d}^{-}(T) = 0$ and $\beta_{d}^{+}(T)=1$.\par When $H = H^2 (\D)$ is the Hardy space on the open unit disk $\D$ of $\C$, and $T = C_\Phi$ is a composition operator, with $\Phi \colon \D \to \D$ a non-constant analytic function, we always have (\cite{LIQUEROD}): \begin{displaymath} \beta_{1}^{-}(C_\Phi) > 0 \, , \end{displaymath} and one of the main results of \cite{LIQUEROD} is the equivalence: \begin{equation} \label{equiv-dim1} \beta_{1}^+ (C_\Phi) < 1 \quad \Longleftrightarrow \quad \Vert \Phi\Vert_\infty < 1 \, . \end{equation} An alternative proof was given in \cite{LQR}, as a consequence of a so-called ``spectral radius formula'', which moreover shows that: \begin{displaymath} \beta_{1}^- (C_\Phi) = \beta_{1}^+ (C_\Phi) \, . \end{displaymath} In \cite{BLQR}, for $d \geq 2$, it is proved that, for a bounded symmetric domain $\Omega \subseteq \C^d$, if $\Phi \colon \Omega \to \Omega$ is analytic, such that $\Phi (\Omega)$ has a non-void interior, and the composition operator $C_\Phi \colon H^2 (\Omega) \to H^2 (\Omega)$ is compact, then: \begin{displaymath} \beta_d^- (C_\Phi) > 0 \, . \end{displaymath} On the other hand, if $\Omega$ is a product of balls, then: \begin{displaymath} \| \Phi \|_\infty < 1 \quad \Longrightarrow \quad \beta_d^+ (C_\Phi) < 1 \, .
\end{displaymath} We do not know whether the converse holds, and the purpose of this paper is to study some examples towards an answer. The paper is organized as follows. Section~\ref{sec: intro} is this short introduction, together with some notation and definitions, used in the sequel, on singular numbers of operators and on Hardy spaces of the polydisk. Section~\ref{sec: weighted} contains preliminary results on weighted composition operators in one variable, which surprisingly play an important role in the study of non-weighted composition operators in two variables. Section~\ref{sec: splitted} studies the case of symbols with ``separated'' variables. Our main one-variable result extends to this case. Section~\ref{sec: glued} studies the ``glued case'' $\Phi (z_1, z_2) = \big( \phi (z_1), \phi (z_1) \big)$ for which even boundedness is an issue. Here, the Bergman space $B^{2}(\D)$ enters the picture. Section~\ref{sec: triangularly} studies the case of ``triangularly separated'' variables. Direct Hilbertian sums of one-variable weighted composition operators appear in this section, which contains our main result: an example of a symbol $\Phi$ satisfying $\Vert \Phi \Vert_\infty = 1$ and yet $\beta_{2}^{+} (C_\Phi) < 1$. The final Section~\ref{sec: capacity} discusses the role of the Monge-Amp\`ere pluricapacity, which is a multivariate extension of the Green capacity in the disk. Even though, as evidenced by our counterexample of Section~\ref{sec: triangularly}, this capacity will not capture all the behavior of the parameter $\beta_{m} (C_\Phi)$, some partial results are obtained, relying on theorems of S.~Nivoche and V.~Zakharyuta. \subsection{Notation} We denote by $\D$ the open unit disk of the complex plane and by $\T$ its boundary, the $1$-dimensional torus. The Hardy space $H^2 (\D^d)$ is the space of holomorphic functions $f \colon \D^d \to \C$ whose boundary values $f^\ast$ on $\T^d$ are square-integrable with respect to the Haar measure $m_d$ of $\T^d$, and normed with: \begin{displaymath} \| f \|_2^2 = \| f \|_{H^2 (\D^d)}^2 = \int_{\T^d} |f^\ast (\xi_1, \ldots, \xi_d) |^2 \, dm_d (\xi_1, \ldots, \xi_d) \, . \end{displaymath} If $f (z_1, \ldots, z_d) = \sum_{\alpha_1, \ldots, \alpha_d \geq 0} a_{\alpha_1, \ldots, \alpha_d} \, z_1^{\alpha_1} \cdots z_d^{\alpha_d}$, then: \begin{displaymath} \| f \|_2^2 = \sum_{\alpha_1, \ldots, \alpha_d \geq 0} | a_{\alpha_1, \ldots, \alpha_d} |^2 \, . \end{displaymath} We say that an analytic map $\Phi \colon \D^d \to \D^d$ is a \emph{symbol} if its associated composition operator $C_\Phi \colon H^2 (\D^d) \to H^2 (\D^d)$, defined by $C_\Phi (f) = f \circ \Phi$, is bounded. We say that $\Phi$ is \emph{truly $d$-dimensional} if $\Phi (\D^d)$ has a non-void interior. We will make use of two kinds of symbols defined on $\D$. The \emph{lens map} $\lambda_\theta \colon \D \to \D$ is defined, for $\theta \in (0, 1)$, by: \begin{equation} \lambda_\theta (z) = \frac{(1 + z)^\theta - (1 - z)^\theta}{(1 + z)^\theta + (1 - z)^\theta} \end{equation} (see \cite{Shapiro-livre}, p.~27, or \cite{LELIQR}, for more information), and corresponds to $u \mapsto u^\theta$ in the right half-plane. The \emph{cusp map} $\chi \colon \D \to \D$ was first defined in \cite{LELIQR-TAMS} and in a slightly different form in \cite{LIQURO-Estimates}; we actually use here the modified form introduced in \cite{LELIQR-capacity}, and then used in \cite{LELIQR-approx-Dirichlet}.
We first define: \begin{displaymath} \chi_0 (z) = \frac{\displaystyle \Big( \frac{z - i}{i z - 1} \Big)^{1/2} - i} {\displaystyle - i \, \Big( \frac{z - i}{i z - 1} \Big)^{1/2} + 1} \, ; \end{displaymath} we note that $\chi_0 (1) = 0$, $\chi_0 (- 1) = 1$, $\chi_0 (i) = - i$, $\chi_0 (- i) = i$, and $\chi_0 (0) = \sqrt{2} - 1$. Then we set: \begin{displaymath} \chi_1 (z) = \log \chi_0 (z), \quad \chi_2 (z) = - \frac{2}{\pi}\, \chi_1 (z) + 1, \quad \chi_3 (z) = \frac{a}{\chi_2 (z)} \, \raise 1pt \hbox{,} \end{displaymath} and finally: \begin{displaymath} \chi (z) = 1 - \chi_3 (z) \, , \end{displaymath} where: \begin{equation} \label{definition de a} a = 1 - \frac{2}{\pi} \log (\sqrt{2} - 1) \in (1, 2) \end{equation} is chosen in order that $\chi (0) = 0$. The image $\Omega$ of the (univalent) cusp map is formed by the intersection of the inside of the disk $D \big(1 - \frac{a}{2} \raise 1pt \hbox{,} \frac{a}{2} \big)$ and the outside of the two disks $D \big(1 + \frac{i a}{2} \raise 1pt \hbox{,} \frac{a}{2} \big)$ and $D \big(1 - \frac{ i a}{2} \raise 1pt \hbox{,} \frac{a}{2} \big) \cdot$ Besides the approximation numbers, we need other singular numbers for an operator $S \colon X \to Y$ between Banach spaces $X$ and $Y$. \par The \emph{Bernstein numbers} $b_n (S)$, $n \geq 1$, are defined by: \begin{equation} \label{Bernstein} b_n (S) = \sup_E \min_{x \in S_E} \| S x\| \, , \end{equation} where the supremum is taken over all $n$-dimensional subspaces of $X$ and $S_E$ is the unit sphere of $E$. The \emph{Gelfand numbers} $c_n (S)$, $n \geq 1$, are defined by: \begin{equation} \label{Gelfand} c_n (S) = \inf \{ \| S_{\mid M} \| \, ; \ {\rm codim}\, M < n\} \, . \end{equation} The \emph{Kolmogorov numbers} $d_n (S)$, $n \geq 1$, are defined by: \begin{equation} \label{Kolmogorov} d_n (S) = \inf _{\dim E < n} \raise -2pt \hbox{$\bigg[$} \sup_{x \in B_X} {\rm dist}\, (S x, E) \raise -2pt \hbox{$\bigg]$} \, . \end{equation} Pietsch showed that all $s$-numbers on Hilbert spaces are equal (see \cite{Pietsch}, \S~2, Corollary, or \cite{Pietsch-livre}, Theorem~11.3.4); hence: \begin{equation} \label{egalite} a_n (S) = b_n (S) = c_n (S) = d_n (S) \, . \end{equation} We denote by $m$ the normalized Lebesgue measure on $\T = \partial \D$. If $\varphi \colon \D \to \D$, $m_\varphi$ is the pull-back measure on $\overline{\D}$ defined by $m_\varphi (E) = m [{\varphi^\ast}^{- 1} (E)]$, where $\varphi^\ast$ stands for the non-tangential boundary values of $\varphi$. The notation $A \lesssim B$ means that $A \leq C \, B$ for some positive constant $C$ and we write $A \approx B$ if we have both $A \lesssim B$ and $B \lesssim A$. \section{Preliminary results on weighted composition operators on $H^2 (\D)$} \label{sec: weighted} We see in this section that the presence of a ``rapidly decaying'' weight allows simpler estimates for the approximation numbers of a corresponding weighted composition operator. A similar, though slightly different, study is made in \cite{Lechner-LQR}. \par Let $\varphi \colon \D\to \D$ be a non-constant analytic self-map in the disk algebra $A(\D)$ such that, for some constant $C > 1$ and for all $z\in \D$: \begin{equation}\label{veritas} \varphi (1) = 1\, ,\quad |1 - \varphi (z)|\leq 1 \, , \quad |1 - \varphi(z)|\leq C \, (1 -|\varphi(z)|) \end{equation} as well as $\varphi(z)\neq 1$ for $z\neq 1$. We can take for example $\varphi=\frac{1+ \lambda_{\theta}}{2}$ where $\lambda_\theta$ is the lens map with parameter $\theta$.
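Let us briefly indicate why this example satisfies \eqref{veritas} (a sketch, relying only on standard properties of the lens map). Since $\lambda_\theta (1) = 1$ and $\lambda_\theta (z) \neq 1$ for $z \neq 1$, we have $\varphi (1) = 1$ and $\varphi (z) \neq 1$ for $z \neq 1$; moreover: \begin{displaymath} |1 - \varphi (z)| = \frac{|1 - \lambda_\theta (z)|}{2} \leq 1 \qquad \text{and} \qquad 1 - |\varphi (z)| \geq \frac{1 - |\lambda_\theta (z)|}{2} \, , \end{displaymath} so the third condition in \eqref{veritas} follows, near the point $1$, from an inequality of the form $|1 - \lambda_\theta (z)| \leq C_\theta \, (1 - |\lambda_\theta (z)|)$, which holds because the lens-shaped domain $\lambda_\theta (\D)$ approaches the point $1$ non-tangentially; away from the point $1$, the quantity $1 - |\varphi (z)|$ stays bounded from below.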
Let $w \in H^\infty$ and let $T$ be the weighted composition operator \begin{displaymath} T = M_{w\circ \varphi} C_\varphi \colon H^2 \to H^2 \, . \end{displaymath} Note that $M_{w\circ \varphi} C_\varphi = C_\varphi M_w$. We first show that: \begin{theorem} \label{fish} Let $T = M_{w\circ \varphi} C_\varphi \colon H^2 \to H^2$ be as above and let $B$ be a Blaschke product with length $< N$. Then, with the implied constant depending only on the number $C$ in \eqref{veritas} (and of $\varphi$): \begin{displaymath} a_{N}(T) \lesssim \sup_{|z - 1|\leq 1,\ z\in \varphi (\D)} |B (z)| \, |w(z)| \, . \end{displaymath} \end{theorem} \begin{proof} The following preliminary observation (see also \cite{LELIQR}, p.~809), in which we denote by $S (\xi, h)=\{z\in \D \, ; \ |z - \xi|\leq h\}$ the Carleson window with center $\xi\in \T$ and size $h$, and by $K_\varphi$ the support of the pull-back measure $m_{\varphi} $, will be useful. \begin{equation}\label{prel} u\in S (\xi,h) \cap K_\varphi \quad \Longrightarrow \quad u\in S (1, Ch) \cap K_\varphi \, . \end{equation} Indeed, if $|u - \xi|\leq h$ and $u \in K_\varphi$, \eqref{veritas} implies: \begin{displaymath} 1 - |u|\leq |u - \xi|\leq h \quad \text{and} \quad |u - 1|\leq C (1-|u|)\leq Ch \, . \end{displaymath} Set $E = B H^2$. This is a subspace of codimension $<N$. If $f = B g \in E$, with $\Vert g \Vert = \Vert f\Vert$ (isometric division by $B$ in $BH^2$), we have $Tf = (w B g) \circ \varphi$, whence: \begin{displaymath} \Vert T(f) \Vert^2 = \int_{\D} |B|^2|w|^2 |g|^2 dm_{\varphi} \, , \end{displaymath} implying $\Vert T(f)\Vert^2\leq \Vert f\Vert^2 \Vert J\Vert^2$ where $J \colon H^2\to L^{2}(\sigma)$ is the natural embedding and where \begin{displaymath} \sigma = |B|^2|w|^2 dm_{\varphi} \, . \end{displaymath} Now, Carleson's embedding theorem for the measure $\sigma$ and \eqref{prel} show that (the implied constants being absolute): \begin{align*} \Vert J\Vert^2 & \lesssim \sup_{\xi\in \T,\ 0<h<1} \frac{1}{h} \int_{S(\xi,h)\cap K_\varphi} |B|^2|w|^2 dm_{\varphi} \\ & \lesssim \sup_{ 0<h<1} \frac{1}{h} \int_{S(1,Ch)\cap K_\varphi} |B|^2|w|^2 dm_{\varphi} \\ & \lesssim\bigg(\sup_{|z-1|\leq 1,\ z\in \overline{\varphi(\D)}} |B(z)|^2 |w (z)|^2 \bigg)\, \bigg(\sup_{ 0<h<1} \frac{1}{h} \int_{S(1,Ch)\cap K_\varphi} dm_{\varphi}\bigg) \\ &\lesssim \sup_{|z - 1|\leq 1,\ z\in \overline{\varphi(\D)}} |B(z)|^2|w(z)|^2 \, , \end{align*} since $m_\varphi$ is a Carleson measure for $H^2$ and where we used that, according to \eqref{veritas}: \begin{displaymath} K_\varphi \subseteq \overline{\varphi(\D)} \subseteq \{z\in \D \, ; \ |z - 1|\leq 1\} \, . \end{displaymath} This ends the proof of Theorem~\ref{fish} with help of the equality of $a_{N}(T)$ with the Gelfand number $c_N (T)$ recalled in \eqref{egalite}. \end{proof} In order to specialize efficiently the general Theorem~\ref{fish}, we recall the following simple Lemma~2.3 of \cite{LELIQR}, where: \begin{equation} \label{distance pseudo} \qquad \qquad \rho (a, b) = \bigg| \frac{a - b}{1 - \bar{a} b} \bigg| \, , \qquad a, b \in \D \, , \end{equation} is the \emph{pseudo-hyperbolic distance}: \begin{lemma} [\cite{LELIQR}] \label{rec} Let $a, b\in \D$ such that $|a - b|\leq L \min (1 - |a|, 1 -|b|)$. Then: \begin{displaymath} \rho (a, b) \leq \frac{L}{\sqrt{L^2 +1}} =: \kappa < 1 \, . 
\end{displaymath} \end{lemma} We can now state: \begin{theorem} \label{special} Assume that $\varphi$ is as in \eqref{veritas} and that the weight $w$ satisfies, for some parameters $0< \theta \leq 1$ and $R > 0$: \begin{displaymath} \qquad |w(z)|\leq \exp\bigg( - \frac{R}{|1 - z|^\theta}\bigg) \, , \quad \forall z\in \D \text{ with } \Re z \geq 0 \, . \end{displaymath} Then, the approximation numbers of $T = M_{w\circ \varphi} C_\varphi$ satisfy: \begin{displaymath} a_{nm+1} (T) \lesssim \max \big[\exp(-a n), \exp(- R \, 2^{m\theta})\big] \, , \end{displaymath} for all integers $n, m\geq 1$, where $a = \log [\sqrt{16C^2 +1} / (4C)] > 0$ and $C$ is as in \eqref{veritas}. \end{theorem} \begin{proof} Let $p_l = 1 - 2^{- l}$, $0 \leq l < m$ and let $B$ be the Blaschke product: \begin{displaymath} B (z) = \prod_{0 \leq l < m} \bigg(\frac{z - p_l}{1 - p_{l} z}\bigg)^n \, . \end{displaymath} Let $z \in K_\varphi \cap \D$ so that $0<|z - 1|\leq 1$. Let $l$ be the non-negative integer such that $2^{- l - 1}< |z - 1| \leq 2^{- l}$. We separate two cases: \noindent{\sl Case 1: $l\geq m$}. Then, \emph{the weight does the job}. Indeed, majorizing $|B (z)|$ by $1$ and using the assumption on $w$, we get: \begin{align*} |B(z)|^{2}|w(z)|^2 &\leq |w(z)|^2 \leq \exp \Big(- \frac{2R}{|1 - z|^{\theta}} \Big) \\ & \leq \exp (- 2 R \, 2^{l\theta}) \leq \exp (- 2 R \, 2^{m\theta}) \, . \end{align*} \noindent {\sl Case 2: $l < m$}. Then, \emph{the Blaschke product does the job}. Indeed, majorize $|w(z)|$ by $1$ and estimate $|B(z)|$ more accurately with the help of Lemma~\ref{rec}; we observe that \begin{displaymath} |z - p_l| \leq |z - 1|+ 1 - p_l \leq 2 \times 2^{- l} = 2 (1 - p_l) \leq 4 C (1-p_l) \end{displaymath} and then, since $z\in K_\varphi$, we can write with $C\geq 1$ as in \eqref{veritas}: \begin{displaymath} 1 -|z|\geq \frac{1}{C} \, |1 - z|\geq \frac{1}{2 C} \, 2^{- l}\geq \frac{1}{4 C} \, |z - p_l| \, , \end{displaymath} so that the assumptions of Lemma~\ref{rec} are verified with $L = 4C$, giving: \begin{displaymath} \rho (z, p_l) \leq \frac{4 C}{\sqrt{16 C^2 +1}} = \exp (- a) < 1 \, . \end{displaymath} Hence, by definition, since $l< m$: \begin{displaymath} |B (z)| \leq [ \rho (z, p_l) ]^n \leq \exp(- a n) \, . \end{displaymath} Putting both cases together, and observing that our Blaschke product has length $n m < n m+1$, we get the result by applying Theorem~\ref{fish} with $N = n m+1$. \end{proof} \subsection{Some remarks} {\bf 1.} Twisting a composition operator by a weight may improve the compactness of this composition operator, or may even make the weighted composition operator compact although the non-weighted one was not (see \cite{GKP} or \cite{Lechner-LQR}). However, this is not possible for all symbols, as seen in the following proposition. \begin{proposition} \label{evid} Let $w \in H^\infty$. If $\varphi$ is inner, or more generally if $|\varphi| = 1$ on a subset of $\T$ of positive measure, then $M_w \, C_\varphi$ is never compact (unless $w \equiv 0$). \end{proposition} \begin{proof} Indeed, suppose that $T = M_w \, C_\varphi$ is compact. Since $(z^n)_n$ converges weakly to $0$ in $H^2$ and since $T (z^n) = w \, \varphi^{n}$, we should have, as $|\varphi|= 1$ on a set $E$ with $m (E) > 0$: \begin{displaymath} \int_{E} |w|^2 \, dm = \int_{E} |w|^2 |\varphi|^{2 n} \, dm \leq \int_{\T} |w|^2 |\varphi|^{2 n} \, dm = \Vert T (z^n) \Vert^2 \converge_{n \to \infty} 0 \, , \end{displaymath} but this would imply that $w$ is null a.e.
on $E$ and hence $w \equiv 0$ (see \cite{Duren}, Theorem~2.2), which was excluded. \end{proof} Note that \'E.~Amar and A.~Lederer proved in \cite{Amar} that $|\varphi| = 1$ on a set of positive measure if and only if $\varphi$ is an exposed point of the unit ball of $H^\infty$; hence the following proposition can be viewed as the (almost) opposite case. \begin{proposition} \label{exposed} Let $\varphi \colon \D \to \D$ be such that $\| \varphi \|_\infty = 1$. Assume that: \begin{displaymath} \int_\T \log (1 - |\varphi|) \, dm > - \infty \end{displaymath} (meaning that $\varphi$ is not an extreme point of the unit ball of $H^\infty$: see \cite{Duren}, Theorem~7.9). Then, if $w$ is an outer function such that $|w| = 1 - |\varphi|$, the weighted composition operator $T = M_w C_\varphi$ is Hilbert-Schmidt. \end{proposition} \begin{proof} We have: \begin{displaymath} \sum_{n = 0}^\infty \| T (z^n) \|^2 = \sum_{n = 0}^\infty \int_\T (1 - |\varphi|)^2 |\varphi|^{2 n} \, dm = \int_\T \frac{1 - |\varphi|}{1 + |\varphi|} \, dm < + \infty \, , \end{displaymath} and $T$ is Hilbert-Schmidt, as claimed. \end{proof} {\bf 2.} In \cite{Lechner-LQR}, Theorem~2.5, it is proved that we always have, for some constants $\delta, \rho > 0$: \begin{equation} \label{mino generale} \qquad a_{n}( M_w C_\varphi) \geq \delta \, \rho^n \, , \quad n = 1, 2, \dots \end{equation} (if $w \not\equiv 0$). We give here an alternative proof, based on a result of Gunatillake (\cite{GUN}), this result holding in a wider context. \begin{theorem} [Gunatillake] \label{gaj} Let $T = M_w C_\varphi$ be a compact weighted composition operator on $H^2$ and assume that $\varphi$ has a fixed point $a\in \D$. Then the spectrum of $T$ is the set: \begin{displaymath} \sigma (T) =\{0, w(a), w(a) \, \varphi'(a), w(a) \, [\varphi '(a)]^2, \ldots, w(a) \, [\varphi '(a)]^n, \ldots \} \, . \end{displaymath} \end{theorem} \begin{proof} [Proof of \eqref{mino generale}] First observe that, in view of Proposition~\ref{evid}, $\varphi$ cannot be an automorphism of $\D$ so that the point $a$ is the Denjoy-Wolff point of $\varphi$ and is attractive. Theorem~\ref{gaj} is interesting only when $w(a) \, \varphi ' (a)\neq 0$. Now, we can give a new proof of Theorem 2.5 of \cite{Lechner-LQR} as follows. Let $a\in \D$ be such that $w(a) \, \varphi ' (a) \neq 0$ ($H(\D)$ is an integral domain and $\varphi ' \not\equiv 0$, $w \not\equiv 0$). Let $b = \varphi (a)$ and $\tau \in {\rm Aut}\, \D$ with $\tau (b) = a$. We set: \begin{displaymath} \psi = \tau \circ \varphi \quad \text{and} \quad S = M_w C_\psi = T C_\tau \, . \end{displaymath} This operator $S$ is compact because $T$ is. Since $\psi (a) = a$ and $\psi ' (a) = \tau '(b) \varphi ' (a) \neq 0$, Theorem~\ref{gaj} says that the non-zero eigenvalues of $S$, arranged in non-increasing order, are the numbers $\lambda_n = w (a) \, [\psi ' (a)]^{n - 1}$, $n\geq 1$. As a consequence of Weyl's inequalities, we know that: \begin{displaymath} a_{1} (S) \, a_{n}(S) \geq |\lambda_{2 n}|^2 \geq \delta \, \rho^{n} \, , \end{displaymath} with: \begin{displaymath} \delta = |w (a)|^{2} > 0 \quad \text{and} \quad \rho =|\psi ' (a)|^{4} > 0 \, . \end{displaymath} To finish, it is enough to observe that $a_{n} (S) \leq a_{n} (T) \, \Vert C_\tau \Vert$ by the ideal property of approximation numbers.
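Spelling out the conclusion (this is only a rewriting of the last two inequalities), we obtain: \begin{displaymath} a_{n} (T) \geq \frac{a_{n} (S)}{\Vert C_\tau \Vert} \geq \frac{\delta \, \rho^{n}}{a_{1} (S) \, \Vert C_\tau \Vert} \, , \end{displaymath} which is \eqref{mino generale}, with modified constants.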
\end{proof} \section{The split case} \label{sec: splitted} \begin{theorem}\label{treprov} Let $\Phi = (\phi, \psi) \colon \D^d \to \D^d$ be a truly $d$-dimensional symbol with $\phi \colon \D\to \D$ depending only on $z_1$ and $\psi \colon\D^{d-1}\to \D^{d - 1}$ only on $z_2,\ldots, z_d$, i.e. $\Phi (z_1, z_2, \ldots, z_d) = \big( \phi (z_1), \psi (z_2, \ldots, z_d)\big)$. Then, whatever the behavior of $\psi$: \begin{displaymath} \Vert \phi \Vert_\infty = 1 \quad \Longrightarrow \quad \beta_{d}(C_\Phi) =1 \, . \end{displaymath} \end{theorem} \begin{proof} The proof is based on the following simple lemma, certainly well-known. \begin{lemma}\label{alegro} Let $S \colon H_1\to H_1$ and $T\colon H_2\to H_2$ be two compact linear operators, where $H_1$ and $H_2$ are Hilbert spaces. Let $S\otimes T$ be their tensor product, acting on the tensor product $H_1\otimes H_2$. Then: \begin{displaymath} a_{m n}(S\otimes T)\geq a_{m}(S) \, a_{n}(T) \end{displaymath} for all positive integers $m, n$. \end{lemma} We postpone the proof of the lemma and show how to conclude. We can assume $C_\Phi$ to be compact, so that $C_{\phi}$ is compact as well. Since $\Vert \phi \Vert_\infty = 1$, we have, thanks to \eqref{equiv-dim1}\,: \begin{displaymath} \qquad a_{m} (C_{\phi}) \geq \e^{- m \, \eps_m} \quad \text{with} \quad \eps_m \converge_{m \to \infty} 0 \, . \end{displaymath} Replacing $\eps_m$ by $\delta_m:= \sup_{p\geq m} \eps_p$, we can assume that $(\eps_m)_m$ is non-increasing. Moreover, \begin{displaymath} m \, \eps_m\to \infty \end{displaymath} since $C_{\phi}$ is compact and hence $a_{m} (C_{\phi}) \converge_{m \to \infty} 0$. We next observe that, due to the separation of variables in the definition of $\phi$ and $\psi$, we can write: \begin{equation}\label{proten} C_\Phi = C_{\phi}\otimes C_\psi \, . \end{equation} Indeed, write $z = (z_1, w)$ with $z_1 \in \D$ and $w \in \D^{d-1}$. If $f \in H^{2}(\D)$ and $g\in H^{2}(\D^{d-1})$, we see that: \begin{align*} C_{\Phi} (f\otimes g) (z) & = (f\otimes g) \big( \phi (z_1), \psi (w) \big) = f \big( \phi (z_1) \big)\, g \big( \psi(w) \big) \\ & = [C_{\phi} f (z_1)] \, [C_{\psi} g (w)] = (C_{\phi} f \otimes C_{\psi}g) (z) \, . \end{align*} Since tensor products $f \otimes g$ generate $H^{2}(\D^d)=H^2 (\D) \otimes H^2 (\D^{d - 1})$, this proves \eqref{proten}. \par Let now $m$ be a large positive integer. Set ($[\, . \, ]$ stands for the integer part): \begin{equation} n_m = [m\eps_m]^{d-1} \quad \text{and} \quad N_m =m \, n_m \, . \end{equation} From what we know in dimension $d - 1$ (see \cite{BLQR}, Theorem~3.1) and from the preceding, we can write (observe that $\psi$ has to be truly $(d-1)$-dimensional since $\Phi$ is truly $d$-dimensional): \begin{displaymath} a_{m} (C_{\phi})\geq \exp(- m\, \eps_m) \quad \text{and}\quad a_{n} (C_{\psi})\geq a \, \exp (- C \, n^{1/(d-1)}) \, , \end{displaymath} for some positive constant $C$, which will be allowed to vary from one formula to another. Lemma~\ref{alegro} implies: \begin{displaymath} a_{N_m} (C_\Phi)\geq a \, \exp [- C \, (m \, \eps_m + {n_m}^{1/(d-1)}) ] \, . \end{displaymath} Since $n_m \lesssim (m \eps_m)^{d - 1}$, we get: \begin{displaymath} a_{N_m} (C_\Phi)\geq a \, \exp(- C \, m \, \eps_m) \, . \end{displaymath} Observe that $N_m = m \, n_m \sim m^d {\eps_m}^{d-1}$ and so ${N_m}^{1/d} \sim m \, {\eps_m}^{1 - 1/ d}$.
As a consequence: \begin{align*} a_{N_m} (C_\Phi) \geq a \, \exp (- C \, m\,\eps_m ) & = a \, \exp \big[- (C \, {\eps_m}^{1 / d})\, (m \, {\eps_m}^{1- 1 /d})\big] \\ & \geq a \, \exp (-\eta_{m}\, {N_m}^{1/d} ) \end{align*} with $\eta_m: =C \,{\eps_m}^{1/d}$. Now, for $N > N_1$, let $m$ be the smallest integer satisfying $N_m \geq N$ (so that $N_{m - 1}< N \leq N_{m}$), and set $\delta_N =\eta_m$. We have $\lim_{N\to \infty} \delta_N =0$. Next, we note that $\lim_{m\to \infty} N_{m} / N_{m -1} = 1$, because $N_m \geq N_{m - 1}$ and: \begin{displaymath} \frac{N_{m}}{N_{m-1}} \leq \frac{m}{m - 1} \, \bigg( \frac{m \, \eps_m + 1}{(m - 1)\, \eps_{m - 1}}\bigg)^{d - 1} \sim \bigg(\frac{\eps_{m}}{\eps_{m-1}}\bigg)^{d-1} \leq 1 \, . \end{displaymath} Finally, if $N$ is an arbitrary integer and $N_{m-1} < N \leq N_m$, we obtain: \begin{displaymath} a_{N} (C_\Phi)\geq a_{N_m} (C_\Phi) \geq a \, \exp (-\eta_m \, {N_m}^{1/d}) \geq a \, \exp (- C \, \delta_N N^{1/d}) \, , \end{displaymath} since we observed that $\lim_{m\to \infty} N_{m} / N_{m-1}=1$. This amounts to saying that $\beta_{d}(C_\Phi)=1$. \end{proof} \begin{proof}[Proof of Lemma~\ref{alegro}.] It is rather formal. Start from the Schmidt decompositions of $S$ and $T$ respectively (recall that in Hilbert spaces, the approximation numbers are equal to the singular numbers): \begin{displaymath} S =\sum_{m=1}^\infty a_{m}(S) \, u_m \odot v_m \, ,\qquad T=\sum_{n=1}^\infty a_{n}(T) \, u'_n \odot v'_n \, , \end{displaymath} where $(u_m)$, $(v_m)$ are two orthonormal sequences of $H_1$, $(u'_n)$, $(v'_n)$ two orthonormal sequences of $H_2$, and $u_m\odot v_m$ and $u'_n \odot v'_n$ denote the rank one operators defined by $(u_m \odot v_m) (x)=\langle x, v_m\rangle \, u_m$, $x \in H_1$, and $(u'_n \odot v'_n) (x) = \langle x, v'_n \rangle \, u'_n$, $x \in H_2$. We clearly have: \begin{displaymath} (u_m \odot v_m) \otimes (u'_n \odot v'_n) = (u_m\otimes u'_n) \odot (v_m\otimes v'_n) \, , \end{displaymath} so that the Schmidt decomposition of $S\otimes T$ is (with SOT-convergence): \begin{displaymath} S\otimes T = \sum_{m, n \geq 1} a_{m}(S) \, a_{n}(T) \, (u_m \otimes u'_n) \odot (v_m \otimes v'_n) \,, \end{displaymath} since the two sequences $(u_m\otimes u'_n)_{m, n}$ and $(v_m\otimes v'_n)_{m, n}$ are orthonormal: for instance, we have by definition: \begin{displaymath} \langle u_{m_1} \otimes u'_{n_1}, u_{m_2}\otimes u'_{n_2}\rangle = \langle u_{m_1}, u_{m_2}\rangle\, \langle u'_{n_1}, u'_{n_2}\rangle. \end{displaymath} This shows that the singular values of $S\otimes T$ are the non-increasing rearrangement of the positive numbers $a_{m}(S) \, a_{n}(T)$ and ends the proof of the lemma: the $mn$ numbers $a_{k}(S)\, a_{l}(T)$, for $1\leq k\leq m$, $1\leq l\leq n$ all satisfy $a_{k}(S)\, a_{l}(T)\geq a_{m}(S) \, a_{n}(T)$, so that $a_{mn}(S\otimes T)\geq a_{m}(S) \, a_{n}(T)$. \end{proof} \section{The glued case} \label{sec: glued} Here we consider symbols of the form: \begin{equation} \label{glued map} \Phi (z_1, z_2) = \big( \phi (z_1), \phi (z_1)\big) \, , \end{equation} where $\phi \colon \D \to \D$ is a non-constant analytic map. Note that such maps $\Phi$ are not truly $2$-dimensional. \subsection{Preliminary} We begin by noting the following fact. Let $B^2 (\D)$ be the Bergman space of all analytic functions $f \colon \D \to \C$ such that: \begin{displaymath} \| f \|_{B^2}^2 := \int_\D |f (z)|^2 \, dA (z) < \infty \, , \end{displaymath} where $dA$ is the normalized area measure on $\D$.
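In terms of Taylor coefficients, since the monomials $z^n$ are orthogonal in $B^2 (\D)$ and $\int_\D |z|^{2 n} \, dA (z) = 1/(n + 1)$, we have, for $f (z) = \sum_{n \geq 0} a_n z^n$ (an elementary identity, recorded here because it is used below): \begin{displaymath} \| f \|_{B^2}^2 = \sum_{n = 0}^\infty \frac{|a_n|^2}{n + 1} \, \cdot \end{displaymath}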
\begin{proposition} \label{B2-H2} Assume that the composition operator $C_\phi$ maps boundedly $B^2 (\D)$ into $H^2 (\D)$. Then $C_\Phi \colon H^2 (\D^2) \to H^2 (\D^2)$, defined by \eqref{glued map}, is bounded. \end{proposition} \begin{proof} If we write $f \in H^2 (\D^2)$ as: \begin{displaymath} f (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k} z_1^j z_2^k \, , \quad \text{with} \quad \sum_{j, k \geq 0} |c_{j, k}|^2 = \| f \|_{H^2}^2 \, , \end{displaymath} we formally (or assuming that $f$ is a polynomial) have: \begin{displaymath} [C_{\Phi} f ] (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k}[\phi (z_1)]^j [\phi (z_1)]^k = \sum_{n = 0}^\infty \bigg(\sum_{j + k =n} c_{j, k} \bigg) [\phi (z_1)]^n \, . \end{displaymath} Hence, if we set $g (z) = \sum_{n = 0}^\infty \big( \sum_{j + k = n} c_{j, k} \big) z^n$, we get: \begin{displaymath} [C_{\Phi} (f ) ] (z_1, z_2) = [C_\phi (g)] (z_1) \, , \end{displaymath} so that, by integrating: \begin{displaymath} \| C_\Phi (f) \|_{H^2 (\D^2)} = \| C_\phi (g) \|_{H^2 (\D)} \, . \end{displaymath} By hypothesis, there is a positive constant $M$ such that: \begin{displaymath} \| C_\phi (g) \|_{H^2 (\D)} \leq M \, \| g \|_{B^2 (\D)} \, . \end{displaymath} But, by the Cauchy-Schwarz inequality: \begin{align*} \| g \|_{B^2 (\D)}^2 & = \sum_{n = 0}^\infty \frac{1}{n + 1}\, \bigg| \sum_{j + k= n} c_{j, k} \bigg|^2 \\ & \leq \sum_{n = 0}^\infty \bigg( \sum_{j + k = n} |c_{j, k}|^2 \bigg) = \sum_{j, k \geq 0} |c_{j, k}|^2 = \| f \|_{H^2 (\D^2)}^2 \, , \end{align*} and we obtain $\| C_\Phi (f) \|_{H^2 (\D^2)} \leq M \, \| f \|_{H^2 (\D^2)}$. \end{proof} \subsection{Lens maps} Let $\lambda_\theta$ be a lens map of parameter $\theta$, $0 < \theta < 1$. We consider $\Phi_\theta \colon \D^2 \to \D^2$ defined by: \begin{equation} \Phi_\theta (z_1, z_2) = \big( \lambda_\theta (z_1), \lambda_\theta (z_1) \big) \, . \end{equation} We have the following result. \begin{theorem} \label{bi-lens} The composition operator $C_{\Phi_\theta} \colon H^2 (\D^2) \to H^2 (\D^2)$ is: \par $1)$ not bounded for $\theta > 1/2$; \par $2)$ bounded, but not compact for $\theta = 1/2$; \par $3)$ compact, and even Hilbert-Schmidt, for $0 < \theta < 1/2$. \end{theorem} \begin{proof} The reproducing kernel of $H^2 (\D^2)$ is, for $(a, b) \in \D^2$: \begin{equation} \qquad \qquad \qquad K_{a, b} (z_1, z_2) = \frac{1}{1 - \bar{a} z_1} \, \frac{1}{1 - \bar{b} z_2} \, \raise 1,5 pt \hbox{,} \qquad (z_1, z_2) \in \D^2 \, , \quad\quad \end{equation} and: \begin{displaymath} \| K_{a, b} \|^2 = \frac{1}{(1 - |a|^2) (1 - |b|^2)} \, \cdot \end{displaymath} $1)$ If $C_{\Phi_\theta}$ were bounded, we should have, for some $M < \infty$: \begin{displaymath} \qquad \qquad \| C_{\Phi_\theta}^{\, \ast} (K_{a, b} ) \|_{H^2} \leq M \, \| K_{a, b} \|_{H^2} \, , \quad \text{for all } a, b \in \D \, . \end{displaymath} Since $C_{\Phi_\theta}^{\, \ast} (K_{a, b} ) = K_{\Phi_\theta (a, b)} = K_{\lambda_\theta (a), \lambda_\theta (a)}$, we would get, with $b = 0$: \begin{displaymath} \bigg(\frac{1}{1 - |\lambda_\theta (a)|^2}\bigg)^2 \leq M^2 \, \frac{1}{1 - |a|^2} \, ; \end{displaymath} but this is not possible for $\theta > 1/2$, since $1 - |\lambda_\theta (a)|^2 \approx 1 - |\lambda_\theta (a)| \sim (1 - a)^\theta$ when $a$ goes to $1$, with $0 < a < 1$. \par For $2)$ and $3)$, let us consider the pull-back measure $m_\theta$ of the normalized Lebesgue measure on $\T = \partial \D$ by $\lambda_\theta$. 
It is easy to see that: \begin{equation} \sup_{\xi \in \T} m_\theta [ D (\xi, h) \cap \D] = m_\theta [D (1, h) \cap \D] \approx h^{1 /\theta} \, . \end{equation} In particular, for $\theta \leq 1/2$, $m_\theta$ is a $2$-Carleson measure, and hence (see \cite{LELIQR-TAMS}, Theorem~2.1, for example) the canonical injection $j \colon B^2 (\D) \to L^2 (m_\theta)$ is bounded, meaning that, for some positive constant $M < \infty$: \begin{displaymath} \int_\D |f (z)|^2 \, d m_\theta (z) \leq M^2 \| f \|_{B^2}^2 \, . \end{displaymath} Since \begin{displaymath} \int_\D |f (z)|^2 \, dm_\theta (z) = \int_\T | f [\lambda_\theta (u)]|^2 \, dm (u) = \| C_{\lambda_\theta} (f) \|_{H^2}^2 \, , \end{displaymath} we get that $C_{\lambda_\theta}$ maps boundedly $B^2 (\D)$ into $H^2 (\D)$.\par It follows from Proposition~\ref{B2-H2} that $C_{\Phi_\theta} \colon H^2 (\D^2) \to H^2 (\D^2)$ is bounded. \par However, $C_{\Phi_{1/2}}$ is not compact since $C_{\Phi_{1/2}}^{\, \ast} (K_{a, b}) / \| K_{a, b} \|$ does not converge to $0$ as $a , b \to 1$, by the calculations made in $1)$.\par For $3)$, let $e_{j, k} (z_1, z_2) = z_1^j z_2^k$, $j, k \geq 0$, be the canonical orthonormal basis of $H^2 (\D^2)$; we have $[C_{\Phi_\theta} (e_{j, k})] (z_1, z_2) = [\lambda_\theta (z_1)]^{j + k}$. Hence: \begin{displaymath} \sum_{j, k \geq 0} \| C_{\Phi_\theta} (e_{j, k}) \|_{H^2 (\D^2)}^2 \leq \sum_{n = 0}^\infty (2 n + 1) \int_\T |\lambda_\theta|^{2 n} \, dm \leq \int_\T \frac{2}{(1 - |\lambda_\theta|^2 )^2} \, dm \, . \end{displaymath} Since, by Lemma~\ref{estim-lens} below, $1 - |\lambda_\theta (\e^{it })|^2 \gtrsim |1 - \e^{i t}|^\theta \gtrsim |t|^\theta$ for $| t | \leq \pi/2$, we get: \begin{displaymath} \sum_{j, k \geq 0} \| C_{\Phi_\theta} (e_{j, k}) \|_{H^2 (\D^2)}^2 \lesssim \int_0^{\pi / 2} \frac{dt}{t^{2 \theta}} < \infty, \end{displaymath} since $\theta < 1/2$ (the part of the integral corresponding to $\pi/2 \leq |t| \leq \pi$ is handled in the same way, using the symmetry $\lambda_\theta (- z) = - \lambda_\theta (z)$). Therefore $C_{\Phi_\theta}$ is Hilbert-Schmidt for $\theta < 1/2$. \end{proof} For the sake of completeness, we recall the following elementary fact (see \cite{Shapiro-livre}, p.~28, or also \cite{LELIQR}, Lemma~2.5). \begin{lemma} \label{estim-lens} With $\delta = \cos (\theta \pi / 2)$, we have, for $| z | \leq 1$ and $\Re z \geq 0$: \begin{displaymath} 1 - |\lambda_\theta (z)|^2 \geq \frac{\delta}{2}\, |1 - z|^\theta \, . \end{displaymath} \end{lemma} \begin{proof} We can write: \begin{displaymath} \lambda_\theta (z)=\frac{1 - w}{1 + w} \quad \text{with} \quad w=\bigg(\frac{1 - z}{1 + z}\bigg)^\theta \quad \text{and } |w|\leq 1 \, . \end{displaymath} Then: \begin{displaymath} \Re w \geq \delta \, |w|\geq \frac{\delta}{2}|1 - z|^\theta \, . \end{displaymath} Hence: \begin{displaymath} 1- |\lambda_\theta (z)|^2 = \frac{4 \, \Re w}{|1 + w|^2} \geq \delta \, |w|\geq \frac{\delta}{2} \, |1 - z|^\theta \, , \end{displaymath} as announced. \end{proof} We now improve the result $3)$ of Theorem~\ref{bi-lens} by estimating the approximation numbers of $C_{\Phi_\theta}$ and get that $C_{\Phi_\theta}$ is in all Schatten classes of $H^2 (\D^2)$ when $\theta < 1/2$. \begin{theorem} For $0 < \theta < 1/2$, there exists $b = b_\theta > 0$ such that: \begin{equation} a_n (C_{\Phi_\theta}) \lesssim \e^{- b \sqrt{n}} \, . \end{equation} In particular $\beta_2^+ (C_{\Phi_\theta}) \leq \e^{- b} < 1$, though $\| \Phi_\theta \|_\infty = 1$, and even $\Phi_\theta (\T^2) \cap \T^2 \neq \emptyset$.
\end{theorem} \begin{proof} Proposition~\ref{B2-H2} (and its proof) can be rephrased in the following way: if $C_\phi$ maps boundedly $B^2 (\D)$ into $H^2 (\D)$, then we have the following factorization: \begin{equation} C_\Phi \colon H^2 (\D^2) \mathop{\longrightarrow}^J B^2 (\D) \mathop{\hbox to 20 pt {\rightarrowfill}}^{C_\phi} H^2 (\D) \mathop{\longrightarrow}^I H^2 (\D^2) \, , \end{equation} where $I \colon H^2 (\D) \to H^2 (\D^2)$ is the canonical injection given by $(If) (z_1, z_2) = f (z_1)$ for $f \in H^2 (\D)$, and $J \colon H^2 (\D^2) \to B^2 (\D)$ is the contractive map defined by: \begin{displaymath} (J f) (z) = \sum_{n = 0}^\infty \bigg( \sum_{j + k = n} c_{j, k} \bigg) z^n \, , \end{displaymath} for $f \in H^2 (\D^2)$ with $f (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k} z_1^j z_2^k$. \par In the proof of Theorem~\ref{bi-lens}, we have seen that, for $0 < \theta \leq 1/2$, the composition operator $C_{\lambda_\theta}$ is bounded from $B^2 (\D)$ into $H^2 (\D)$; hence we get the factorization: \begin{displaymath} C_{\Phi_\theta} \colon H^2 (\D^2) \mathop{\longrightarrow}^J B^2 (\D) \mathop{\hbox to 25 pt {\rightarrowfill}}^{C_{\lambda_\theta}} H^2 (\D) \mathop{\longrightarrow}^I H^2 (\D^2) \, . \end{displaymath} Now, the lens maps have a semi-group property: \begin{equation} \lambda_{\theta_1 \theta_2} = \lambda_{\theta_1} \circ \lambda_{\theta_2} \, , \end{equation} giving $C_{\lambda_{\theta_1 \theta_2}} = C_{\lambda_{\theta_1}} \circ C_{\lambda_{\theta_2}}$. \par For $0 < \theta < 1/2$, we can therefore write $C_{\lambda_\theta} = C_{\lambda_{2 \theta}} \circ C_{\lambda_{1/2}}$ (note that $2 \theta < 1$, so $C_{\lambda_{2 \theta}} \colon H^2 (\D) \to H^2 (\D)$ is bounded), and we get: \begin{displaymath} C_{\Phi_\theta} = I \, C_{\lambda_{2 \theta}} C_{\lambda_{1/2}} \, J \, . \end{displaymath} Consequently: \begin{displaymath} a_n (C_{\Phi_\theta}) \leq \| I \| \, \|J \| \, \| C_{\lambda_{1/2}} \|_{B^2 \to H^2} \, a_n (C_{\lambda_{2 \theta}}) \, . \end{displaymath} Now, we know (\cite{LELIQR}, Theorem~2.1) that $a_n (C_{\lambda_{2 \theta}}) \lesssim \e^{- b \sqrt{n}}$, so we get that $a_n (C_{\Phi_\theta}) \lesssim \e^{- b \sqrt{n}}$. \end{proof} \noindent{\bf Remark.} In \cite{BLQR}, we saw that for a truly $2$-dimensional symbol $\Phi$, we have $\beta_2^- (C_\Phi) > 0$. Here the symbol $\Phi_\theta$ is not truly $2$-dimensional, but we nevertheless have $\beta_2^- (C_{\Phi_\theta}) > 0$. In fact, let $E = \{ f \in H^2 (\D^2) \, ; \ \frac{\partial f}{\partial z_2} \equiv 0\}$; $E$ is isometrically isomorphic to $H^2 (\D)$ and the restriction of $C_{\Phi_\theta}$ to $E$ behaves as the $1$-dimensional composition operator $C_{\lambda_\theta} \colon H^2 (\D) \to H^2 (\D)$; hence (\cite{LIQUEROD}, Proposition~6.3): \begin{displaymath} \e^{- b_0 \sqrt{n}} \lesssim a_n (C_{\lambda_\theta}) = a_n ({C_{\Phi_\theta}}_{\mid E}) \leq a_n (C_{\Phi_\theta}) \, , \end{displaymath} and $\beta_2^- (C_{\Phi_\theta}) \geq \e^{- b_0} > 0$. \section{Triangularly separated variables} \label{sec: triangularly} In this section, we consider symbols of the form: \begin{equation} \label{triangular-symbol} \Phi (z_1, z_2) = \big( \phi (z_1), \psi (z_1) \, z_2\big) \, , \end{equation} where $\phi , \psi\colon \D \to \D$ are non-constant analytic maps. \par Such maps $\Phi$ are truly $2$-dimensional.
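One can check this last assertion directly (a short verification, not needed in what follows). Since $\phi (z_1)$ does not depend on $z_2$, the Jacobian determinant of $\Phi$ is \begin{displaymath} \det \left( \begin{array}{cc} \phi ' (z_1) & 0 \\ \psi ' (z_1) \, z_2 & \psi (z_1) \end{array} \right) = \phi ' (z_1) \, \psi (z_1) \, , \end{displaymath} which is not identically zero because $\phi$ and $\psi$ are non-constant; near any point where it does not vanish, $\Phi$ is a local biholomorphism, so that $\Phi (\D^2)$ indeed has non-void interior.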
\par More generally, if $h \in H^\infty$, with $h (0) = 0$ and $\| h \|_\infty \leq 1$, has its powers $h^k$, $k \geq 0$, orthogonal in $H^2$ (for convenience, we shall say that $h$ is a \emph{Rudin function}), we can consider: \begin{equation} \label{triangular-symbol-with-h} \Phi (z_1, z_2) = \big( \phi (z_1), \psi (z_1) \, h (z_2) \big) \, . \end{equation} For such $h$ we can take for example an inner function vanishing at the origin, but there are other such functions, as shown by C.~Bishop: \noindent{\bf Theorem} (Bishop \cite{Bishop}). {\it The function $h$ is a Rudin function if and only if the pull-back measure $\mu = \mu_h$ is radial and Jensen, i.e.\ for every Borel set $E$: \begin{displaymath} \qquad \quad \mu (\e^{i\theta} E)=\mu (E) \quad \text{and} \quad \int_{\overline{\D}} \log (1/|z|) \, d\mu (z) < \infty \, . \end{displaymath} Conversely, for every probability measure $\mu$ supported by $\overline{\D}$, which is radial and Jensen, there exists $h$ in the unit ball of $H^\infty$, with $h (0) = 0$, such that $\mu = \mu_h$.} If we take for $\mu$ the Lebesgue measure of $\T$, we get an inner function. But, as remarked in \cite{Bishop}, we can take for $\mu$ the Lebesgue measure on the union $\T \cup (1/2)\T$, normalized in order that $\mu (\T) = \mu \big( (1/2) \T \big) = 1/2$. Then the corresponding $h$ is not inner since $|h| = 1/2$ on a subset of $\T$ of positive measure. He also showed that $h (z) / z$ may be a non-constant outer function. Also, P.~Bourdon (\cite{Bourdon}) showed that the powers of $h$ are orthogonal if and only if its Nevanlinna counting function is almost everywhere constant on each circle centered at the origin. \subsection{General facts} \label{general facts} We first observe that if $f \in H^2 (\D^2)$ and: \begin{displaymath} f (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k} \, z_1^j z_2^k \, , \end{displaymath} then we can write: \begin{displaymath} f (z_1, z_2) = \sum_{k \geq 0} f_k (z_1) \, z_2^k \end{displaymath} with: \begin{displaymath} f_k (z_1) = \sum_{j \geq 0} c_{j, k} \, z_1^j \, , \end{displaymath} and: \begin{displaymath} \| f \|_{H^2 (\D^2)}^2 = \sum_{j, k \geq 0} |c_{j, k}|^2 = \sum_{k \geq 0} \| f_k \|_{H^2 (\D)}^2 \, . \end{displaymath} That means that we have an isometric isomorphism: \begin{displaymath} J \colon H^2 (\D^2) \longrightarrow \bigoplus_{k= 0}^\infty H^2 (\D) \, , \end{displaymath} defined by $J f = (f_k)_{k \geq 0}$.\par Now, for symbols $\Phi$ as in \eqref{triangular-symbol}, we have: \begin{displaymath} (C_\Phi f) (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k} \, [\phi (z_1)]^j [\psi (z_1)]^k z_2^k \, , \end{displaymath} so that $J \, C_\Phi \, J^{- 1}$ appears as the operator $\bigoplus_k M_{\psi^k} C_\phi$ on $\bigoplus_k H^2 (\D)$, where $M_{\psi^k}$ is the multiplication operator by $\psi^k$: \begin{displaymath} [(M_{\psi^k} C_\phi ) f_k ] (z_1) = [\psi (z_1)]^k \, [(f_k \circ \phi) (z_1)] \, . \end{displaymath} When $\Phi$ is as in \eqref{triangular-symbol-with-h}, we have: \begin{displaymath} (C_\Phi f) (z_1, z_2) = \sum_{j, k \geq 0} c_{j, k} \, [\phi (z_1)]^j [\psi (z_1)]^k [h (z_2)]^k \, , \end{displaymath} with, since the powers $h^k$ are orthogonal in $H^2 (\D)$ and $\| h^k \|_{H^2} \leq 1$: \begin{displaymath} \| C_\Phi f\|^2 \leq \sum_{k = 0}^\infty \| T_k f_k\|^2 \end{displaymath} and: \begin{displaymath} T_k = M_{\psi^k} C_\phi \, ; \end{displaymath} hence $J \, C_\Phi \, J^{- 1}$ appears as pointwise dominated by the operator $T = \oplus_k T_k$ on $\bigoplus_k H^2 (\D)$.
This implies a factorization $C_\Phi = A T$ with $\| A \| \leq 1$, so that $a_n (C_\Phi) \leq a_n (T)$ for all $n \geq 1$. We recall the following elementary fact. \begin{lemma} Let $(H_k)_{k \geq 0}$ be a sequence of Hilbert spaces and $T_k \colon H_k \to H_k$ be bounded operators. Let $H = \bigoplus_{k = 0}^\infty H_k$ and $T \colon H \to H$ defined by $T x = (T_k x_k)_k$. Then: \par $1)$ $T$ is bounded on $H$ if and only if $\sup_k \| T_k\| < \infty$; \par $2)$ $T$ is compact on $H$ if and only if each $T_k$ is compact and $\| T_k\| \converge_{k \to \infty} 0$. \end{lemma} Going back to the symbols of the form \eqref{triangular-symbol}, we have $\| M_{\psi^k} \| \leq \| \psi^k\|_\infty \leq 1$, since $\| \psi\|_\infty \leq 1$; hence $\| M_{\psi^k} C_\phi \| \leq \| C_\phi \|$ and the operator $(M_{\psi^k} C_\phi)_k$ is bounded on $\bigoplus_k H^2 (\D)$. Therefore $C_\Phi$ is bounded on $H^2 (\D^2)$. For approximation numbers, we have the following two facts. \begin{lemma} \label{lemme utile} Let $T_k \colon H_k \to H_k$ be bounded linear operators between Hilbert spaces $H_k$, $k \geq 0$. Let $H = \bigoplus_k H_k$ and $T = (T_k)_k \colon H \to H$, assumed to be compact. Then, for every $n_1, \ldots, n_K \geq 1$, and $0 \leq m_1 < \cdots < m_K$, $K \geq 1$, we have: \begin{equation} a_N (T) \geq \inf_{1 \leq k \leq K} a_{n_k} (T_{m_k}) \, , \end{equation} where $N = n_1 + \cdots + n_K$. \end{lemma} \begin{proof} We use the Bernstein numbers $b_n$ (see \eqref{Bernstein}), which are equal to the approximation numbers (see \eqref{egalite}). For $k = 1, \ldots, K$, there is an $n_k$-dimensional subspace $E_k$ of $H_{m_k}$ such that: \begin{displaymath} \qquad b_{n_k} (T_{m_k}) \leq \| T_{m_k} x \| \, , \quad \text{for all } x\in S_{E_k} \, . \end{displaymath} Then $E = \bigoplus_{k =1}^K E_k$ is an $N$-dimensional subspace of $H$ and for every $x = (x_1, x_2, \ldots) \in E$, we have: \begin{align*} \| T x \|^2 & = \sum_{k \leq K} \| T_{m_k} x_{m_k}\|^2 \geq \sum_{k \leq K} [b_{n_k} (T_{m_k})]^2 \, \|x_{m_k}\|^2 \\ & \geq \inf_{k \leq K} [ b_{n_k} (T_{m_k}) ]^2 \sum_{k \leq K} \| x_{m_k}\|^2 = \inf_{k \leq K} [b_{n_k} (T_{m_k}) ]^2 \| x\|^2 \, ; \end{align*} hence $b_N (T) \geq \inf_{k \leq K} b_{n_k} (T_{m_k}) $, and we get the announced result. \end{proof} \begin{lemma} \label{majo} Let $T = \bigoplus_{k=0}^\infty T_k$ acting on a Hilbertian sum $H = \bigoplus_{k = 0}^\infty H_k$. Let $n_0, \ldots, n_K$ be positive integers and $N = n_0 + \cdots + n_K - K$. Then, the approximation numbers of $T$ satisfy: \begin{equation}\label{maxsup} a_{N}(T) \leq \max \raise -1 pt \hbox{$\Big($} \max_{0\leq k\leq K} a_{n_k}(T_k), \sup_{k > K} \Vert T_k \Vert \raise -1 pt \hbox{$\Big)$} \, . \end{equation} \end{lemma} \begin{proof} Denote by $S$ the right-hand side of \eqref{maxsup}. Let $R_k$, $0\leq k\leq K$ be operators on $H_k$ of respective rank $< n_k$ such that $\Vert T_k - R_k\Vert = a_{n_k}(T_k)$ and let $R =\bigoplus_{k = 0}^{K} R_k$. Then $R$ is an operator of rank $\leq n_0 + \cdots + n_K - K - 1 < N$. If $f = \sum_{k = 0}^\infty f_k \in H$, we see that: \begin{align*} \Vert T f - R f \Vert^{2} & =\sum_{k = 0}^K \Vert T_{k} f_k - R_{k}f_k \Vert^{2} + \sum_{k > K} \Vert T_{k} f_k \Vert^{2} \\ & \leq \sum_{k = 0}^{K} a_{n_k} (T_k)^{2} \Vert f_k \Vert^2 + \sum_{k > K} \Vert T_{k} f_k \Vert^{2} \leq S^2 \sum_{k = 0}^\infty \Vert f_k \Vert^2 = S^2 \Vert f \Vert^2 \, , \end{align*} hence the result. \end{proof} We give now two corollaries of Lemma~\ref{majo}. 
\noindent{\bf Example 1.} We first use lens maps. We get: \begin{theorem} \label{majo lens} Let $\lambda_\theta$ be the lens map of parameter $\theta$, let $\psi \colon \D \to \D$ be such that $\|\psi \|_\infty := c < 1$, and let $h$ be a Rudin function. We consider: \begin{displaymath} \Phi (z_1, z_2) = \big( \lambda_\theta (z_1), \psi (z_1) \, h (z_2) \big) \, . \end{displaymath} Then, for some positive constant $\beta$, we have, for all $N \geq 1$: \begin{equation} a_N (C_\Phi) \lesssim \e^{- \beta \, N^{1/3}} \, . \end{equation} \end{theorem} \begin{proof} Let $T_k = M_{\psi^k} C_{\lambda_\theta}$. We have $\| T_k \| \leq c^k$, so $\sup_{k > K} \| T_k\| \leq c^K$. On the other hand, we have $a_n (T_k) \leq c^k \, a_n (C_{\lambda_\theta}) \leq a_n (C_{\lambda_\theta}) \lesssim \e^{- \beta_\theta \sqrt{n}}$ (\cite{LELIQR}, Theorem~2.1). Taking $n_0 = n_1 = \cdots = n_K = K^2$ in Lemma~\ref{majo}, we get: \begin{displaymath} \max_{0 \leq k \leq K} a_{n_k} (T_k) \lesssim \e^{ - \beta_\theta K} \, . \end{displaymath} Since $n_0 + \cdots + n_K - K \approx K^3$, we obtain $a_{K^3} (C_\Phi) \lesssim \e^{ - \beta K}$, which gives the claimed result, by taking $\beta = \min \big (\beta_\theta, \log (1/ c) \big)$. \end{proof} \noindent{\bf Example 2.} We consider the cusp map $\chi$. We have: \begin{theorem} \label{majo cusp} Let $\chi$ be the cusp map, $h$ a Rudin function, and $\psi$ in the unit ball of $H^\infty$, with $\| \psi \|_\infty := c < 1$. Let: \begin{displaymath} \Phi (z_1, z_2) = \big( \chi (z_1), \psi (z_1) \, h (z_2) \big) \, . \end{displaymath} Then, for some positive constant $\beta$, we have, for all $N \geq 1$: \begin{displaymath} a_N (C_\Phi) \lesssim \e^{ - \beta \sqrt{N} / \sqrt{\log N}} \, . \end{displaymath} \end{theorem} \begin{proof} Let $T_k = M_{\psi^k} C_\chi$. As above, we have $\sup_{k > K} \| T_k\| \leq c^K$. For the cusp map, we have $a_n (C_\chi) \lesssim \e^{ - \alpha n / \log n}$ (\cite{LIQURO-Estimates}, Theorem~4.3); hence $a_n (T_k) \lesssim \e^{ - \alpha n / \log n}$. We take $n_0 = n_1 = \cdots = n_K = K \, [\log K]$ (where $[\log K]$ is the integer part of $\log K$). Since $n_0 + \cdots + n_K \approx K^2 [\log K]$, we get, for another $\alpha > 0$: \begin{displaymath} a_{K^2 [\log K]} (C_\Phi) \lesssim \e^{- \alpha K} \, , \end{displaymath} which reads: $a_N (C_\Phi) \lesssim \e^{ - \beta \sqrt{N / \log N}}$, as claimed. \end{proof} \subsection{Lower bounds} In this subsection, we give lower bounds for approximation numbers of composition operators on $H^2$ of the bidisk, attached to a symbol $\Phi$ of the previous form $\Phi (z_1, z_2) = \big( \phi (z_1), \psi (z_1) \, h (z_2) \big)$ where $h$ is a Rudin function. The sharpness of those estimates will be discussed in the next subsection. We first need some lemmas in dimension one. \begin{lemma} \label{mino capa} Let $u, v \colon \D \to \D$ be two non-constant analytic self-maps and $T = M_v C_u \colon H^2 (\D) \to H^2 (\D)$ be the associated weighted composition operator. For $0 < r < 1$, we set $A = u (r \, \overline{\D})$ and $\Gamma = \exp \big( - 1 / \capa (A) \big)$. Then, for $0 < \delta \leq \inf_{|z| = r} |v (z)|$, we have: \begin{equation} \label{mino1} a_n (T) \gtrsim \sqrt{1 - r}\, \delta \, \Gamma^n \, . \end{equation} \end{lemma} In this lemma, $\capa (A)$ denotes the Green capacity of the compact subset $A \subseteq \D$ (see \cite{LQR}, \S~2.3 for the definition). For the proof, we need the following result (\cite{WID}, Theorem~7, p.~353).
\begin{theorem} [Widom] \label{widom} Let $A$ be a compact subset of $\mathbb{D}$ and $\mathcal{C} (A)$ be the space of continuous functions on $A$ with its natural norm. Set: \begin{displaymath} \tilde{d}_{n} (A) = \inf_{E} \raise - 2pt \hbox{$\bigg[$} \sup_{f \in B_{H^\infty}} {\rm dist} \, (f, E) \raise - 2pt \hbox{$\bigg]$} \, , \end{displaymath} where $E$ runs over all $(n - 1)$-dimensional subspaces of $\mathcal{C} (A)$ and ${\rm dist} \, (f, E) = \inf_{h \in E} \Vert f - h \Vert_{\mathcal{C}(A)}$. Then \begin{equation} \tilde{d}_{n} (A) \geq \alpha \, \e^{- n/\capa (A)} \end{equation} for some positive constant $\alpha$. \end{theorem} \begin{proof} [Proof of Lemma~\ref{mino capa}] We apply Theorem~\ref{widom} to the compact set $A = u (r \, \overline{\D})$. Let $E$ be an $(n - 1)$-dimensional subspace of $H^2 = H^2 (\D)$; it can be viewed as a subspace of ${\mathcal C} (A)$, so, by Theorem~\ref{widom}, there exists $f \in H^\infty \subseteq H^2$ with $\| f \|_2 \leq \| f \|_\infty \leq 1$ such that: \begin{displaymath} \qquad \| f - h \|_{{\mathcal C} (A)} \geq \alpha \, \Gamma^n \, , \quad \forall h \in E \, . \end{displaymath} Then: \begin{displaymath} \| v \, ( f \circ u - h \circ u)\|_{{\mathcal C} (r \T)} \geq \delta \, \| (f - h) \circ u \|_ {{\mathcal C} (r \T)} = \delta \, \| f - h\|_{{\mathcal C}(A)} \geq \alpha \, \delta \, \Gamma^n \,. \end{displaymath} But: \begin{displaymath} \| v\, ( f \circ u - h \circ u)\|_{{\mathcal C} (r \T)} \leq \frac{1}{\sqrt{1 - r^2}} \, \| v \, ( f \circ u - h \circ u)\|_{H^2} \, ; \end{displaymath} Hence: \begin{displaymath} \| T f - T h\|_{H^2} \geq \alpha\, \sqrt{1 - r^2} \, \delta \, \Gamma^n \geq \alpha\, \sqrt{1 - r} \, \delta \, \Gamma^n\, . \end{displaymath} Since $h$ is an arbitrary function of $E$, we get ($B_{H^2}$ being the unit ball of $H^2$): \begin{displaymath} \inf_{\dim E < n} \raise -2pt \hbox{$\bigg[$} \sup_{f \in B_{H^2}} {\rm dist}\, \big(T f, T (E) \big) \raise -2pt \hbox{$\bigg]$} \geq \alpha\, \sqrt{1 - r} \, \delta \, \Gamma^n \, . \end{displaymath} But the left-hand side is equal to the Kolmogorov number $d_n (T)$ of $T$ (see \cite{LQR}, Lemma~3.12), and, as recalled in \eqref{egalite}, in Hilbert spaces, the Kolmogorov numbers are equal to the approximation numbers; hence we obtain: \begin{equation} \qquad a_n (T) \geq \alpha\, \sqrt{1 - r} \, \delta \, \Gamma^n \, , \quad n = 1, 2, \ldots \, , \end{equation} as announced. \end{proof} The next lemma shows that some Blaschke products are far away from $0$ on some circles centered at $0$. We consider a \emph{strongly interpolating sequence} $(z_j)_{ j\geq 1}$ of $\D$ in the sense that, if $\varepsilon_j := 1 - |z_j|$, then: \begin{equation} \label{regu} \varepsilon_{j+1}\leq \sigma \,\varepsilon_j \end{equation} and so $\varepsilon_j \leq \sigma^{j - 1} \varepsilon_1$, where $0 < \sigma < 1$ is fixed. Equivalently, the sequence $(|z_j|)_{j \geq 1}$ is interpolating. We consider the corresponding interpolating Blaschke product: \begin{equation} \label{Blaschke} B (z) = \prod_{j = 1}^\infty \frac{|z_j|}{z_j} \frac{z_{j} - z}{1 - z_{j} z} \, \cdot \end{equation} The following lemma is probably well-known, but we could find no satisfactory reference (see yet \cite{HAY} for related estimates) and provide a simple proof. \begin{lemma} \label{Lemma Blaschke} Let $(z_j)_{j\geq 1}$ be a strongly interpolating sequence as in \eqref{regu} and $B$ the associated Blaschke product \eqref{Blaschke}. 
\par Then there exists a sequence $r_l := 1 - \rho_l$ such that: \begin{equation} \label{encadrement rho} C_1 \, \sigma^l\leq \rho_l \leq C_2 \, \sigma^l \, , \end{equation} where $C_1$, $C_2$ are positive constants, and for which: \begin{equation} \label{mino Blaschke} |z| = r_l \quad \Longrightarrow \quad |B (z)|\geq \delta \, , \end{equation} where $\delta > 0$ does not depend on $l$. \end{lemma} \begin{proof} Let us denote by $p_l$, $1\leq p_l \leq l$, the biggest integer such that $\varepsilon_{p_l} \geq \sigma^{l - 1}\varepsilon_1$. We separate two cases. \noindent{\bf Case 1:} $\varepsilon_{p_l}\geq 2 \, \sigma^{l - 1}\varepsilon_1$. Then, we choose $\rho_l = \alpha \, \sigma^{l - 1}\varepsilon_1$ with $\alpha$ fixed, $1< \alpha < 2$. Since $\rho (\xi, \zeta)\geq \rho(|\xi|,|\zeta|)$ for all $\xi, \zeta \in \D$ (recall that $\rho$ is the pseudo-hyperbolic distance on $\D$), we have the following lower bound for $|z|= r_l$: \begin{displaymath} |B (z)| =\prod_{j = 1}^\infty \rho (z, z_j) \geq \prod_{j = 1}^\infty \rho (r_l, |z_j|) = \prod_{j \leq p_l} \rho (r_l, |z_j|) \times \prod_{j > p_l} \rho (r_l, |z_j|) := P_1 \times P_2 \, , \end{displaymath} and we estimate $P_1$ and $P_2$ separately. We first observe that $\dis \frac{\rho_l}{\varepsilon_{p_l}}\leq \frac{\alpha \, \sigma^{l - 1}\varepsilon_1}{2 \, \sigma^{l - 1}\varepsilon_1}\leq \frac{\alpha}{2}\,$, and then: \begin{displaymath} \frac{\rho_l}{\varepsilon_{j}} = \frac{\rho_l}{\varepsilon_{p_l}} \frac{\varepsilon_{p_l}}{\varepsilon_{j}}\leq \frac{\alpha}{2} \, \sigma^{p_l - j}. \end{displaymath} The inequality $\rho (1 - u, 1 - v) \geq \frac{|u - v|}{(u + v)}$ for $0 < u, v \leq 1$ now gives us: \begin{equation} \label{un} \quad \rho(r_l, |z_j|) \geq\frac{\varepsilon_j - \rho_{l}}{\varepsilon_j + \rho_{l}} =\frac{1 - \rho_l/\varepsilon_j}{1 + \rho_{l}/\varepsilon_j} \geq\frac{1 - (\alpha/2) \, \sigma^{p_l - j}}{1 + (\alpha/2) \, \sigma^{p_{l}-j}} \, \raise 1 pt \hbox{,} \quad \text{for } j\leq p_l \, , \end{equation} and: \begin{equation} \label{uun} P_1 \geq \prod_{k = 0}^\infty \bigg( \frac{1 - (\alpha/2) \, \sigma^{k}}{1 + (\alpha/2) \, \sigma^{k}} \bigg) \, \cdot \end{equation} Similarly: \begin{displaymath} \frac{\varepsilon_{p_{l} + 1}}{\rho_{l}}\leq \frac{\sigma^{l - 1}\varepsilon_1}{\alpha \, \sigma^{l - 1} \varepsilon_1} \leq \frac{1}{\alpha} \end{displaymath} and: \begin{displaymath} \qquad \quad \frac{\varepsilon_{j}}{\rho_{l}}\leq \frac{1}{\alpha} \, \sigma^{j - p_{l} - 1} \quad \text{for } j > p_l \, ; \end{displaymath} so that: \begin{equation} \label{deux} \quad \rho (r_l, |z_j|) \geq \frac{\rho_l - \varepsilon_j}{\rho_l + \varepsilon_j} = \frac{1 - \varepsilon_j/\rho_l}{1+\varepsilon_j/\rho_l} \geq \frac{1 - \alpha^{- 1} \sigma^{j - p_{l} - 1}}{1 + \alpha^{-1} \sigma^{j - p_{l} - 1}} \, \raise 1 pt \hbox{,} \quad \text{for } j > p_l \, , \end{equation} and \begin{equation} \label{deeux} P_2 \geq \prod_{k = 0}^\infty \bigg( \frac{1 - \alpha^{- 1} \sigma^{k}}{1 + \alpha^{-1} \sigma^{k}} \bigg) \, \cdot \end{equation} Finally, the condition of lower and upper bound for $\rho_l$ is fulfilled by construction. \goodbreak \noindent{\bf Case 2:} $\varepsilon_{p_l} \leq 2 \, \sigma^{l - 1}\varepsilon_1$. Then, we choose $\rho_l = a \, \varepsilon_{p_l}$ with $\sigma < a < 1$ fixed. 
Computations exactly similar to those of Case~1 give us: \begin{equation} \label{trois} |B (z)| \geq \prod_{k = 0}^\infty \bigg(\frac{1 - a \, \sigma^{k}}{1 + a \, \sigma^{k}}\bigg) \times \prod_{k = 1}^\infty \bigg( \frac{1 - a^{-1} \sigma^{k}}{1 + a^{- 1} \sigma^{k}}\bigg) =: \delta > 0 \, , \quad \text{for } |z| = r_l \, . \end{equation} Moreover, in this case: \begin{displaymath} a \, \sigma^{l - 1}\varepsilon_1 \leq \rho_l \leq 2 \, a \, \sigma^{l - 1}\varepsilon_1 \, , \end{displaymath} and the proof is complete. \end{proof} Now, we have the following estimate. \goodbreak \begin{theorem} \label{Theo general minoration} Let $\phi, \psi \colon \D \to \D$ be two non-constant analytic self-maps and $\Phi (z_1, z_2) = \big( \phi (z_1), \psi (z_1) \, h (z_2)\big)$, where $h$ is inner. Let $(r_l)_{l \geq 1}$ be an increasing sequence of positive numbers with limit $1$ such that: \begin{displaymath} \inf_{|z| = r_l} |\psi (z) | \geq \delta_l > 0 \, , \end{displaymath} with $\delta_l \leq \e^{- 1 / \capa (A_l)}$, where $A_l = \phi \big( r_l \overline{\D} \big)$. \par Then the approximation numbers $a_N (C_\Phi)$, $N \geq 1$, of the composition operator $C_\Phi \colon H^2 (\D^2) \to H^2 (\D^2)$ satisfy: \begin{equation} \label{estim inf} a_N (C_\Phi) \gtrsim \sup_{l \geq 1} \Big[ \sqrt{1 - r_l} \, \exp \big( - 8 \, \sqrt{N} \, \sqrt{\log ( 1 / \delta_l)} \, \sqrt{\log (1 / \Gamma_l)} \, \big) \Big] \, , \end{equation} where: \begin{equation} \Gamma_l = \e^{- 1 / \capa (A_l)} \, . \end{equation} \end{theorem} \begin{proof} Since $h$ is inner, the sequence $(h^k)_{k \geq 0}$ is orthonormal in $H^2$ and hence $a_n (C_\Phi) = a_n (T)$ for all $n \geq 1$, where $T = \bigoplus_{k = 0}^\infty T_k$ and $T_k = M_{\psi^k} C_\phi$. Then Lemma~\ref{mino capa} gives: \begin{equation} a_n (T_k) \gtrsim \sqrt{1 - r_l} \, \delta_l^k \Gamma_l^n \end{equation} for all $n \geq 1$ and all $k \geq 0$. Let now: \begin{equation} p_l = \bigg[ \frac{\log (1 /\delta_l )}{\log (1/\Gamma_l)} \bigg] \,, \end{equation} where $[\, . \,]$ stands for the integer part, and: \begin{equation} \hskip 90 pt n_k = p_l k \, , \hskip 45 pt \text{for} \quad k = 1, \ldots, K \,. \qquad \end{equation} By Lemma~\ref{lemme utile}, applied with $m_k = k$ (i.e. to $H_1, \ldots, H_K$), we have, if $N = n_1 + \cdots + n_K$: \begin{displaymath} a_N (T) \geq \inf_{1 \leq k \leq K} \alpha\, \sqrt{1 - r_l} \, \delta_l^k \, \Gamma_l^{n_k} = \alpha \, \sqrt{1 - r_l} \, \delta_l^K \, \Gamma_l^{n_K} \, . \end{displaymath} But, since $p_l \leq \log (1 / \delta_l) / \log (1 / \Gamma_l)$: \begin{align*} \delta_l^K \, \Gamma_l^{n_K} & = \exp\big[ - \big( K \log (1 / \delta_l) + p_l K \log(1 / \Gamma_l) \big) \big] \geq \exp [ - 2 K \log (1 / \delta_l) ] \, . \end{align*} Since: \begin{displaymath} N = p_l \frac{K (K +1)}{2} \geq p_l \frac{K^2}{4} \geq \frac{K^2}{16} \, \frac{\log (1 / \delta_l)}{\log (1 / \Gamma_l)} \, \raise 1,5 pt \hbox{,} \end{displaymath} we get: \begin{displaymath} \delta_l^K \, \Gamma_l^{n_K} \geq \exp \big[ - 8 \, \sqrt{N} \, \sqrt{\log (1 / \delta_l)} \sqrt{ \log (1 / \Gamma_l)} \big] \, , \end{displaymath} and the result ensues.
\end{proof} \noindent{\bf Example 1.} We take $\phi = \lambda_\theta$, a lens map, and $\psi = B$, a Blaschke product associated to a strongly interpolating sequence, as defined in \eqref{Blaschke}; then we get: \begin{theorem} \label{Prop mino lens} Let $\Phi \colon \D^2 \to \D^2$ be defined by: \begin{displaymath} \Phi (z_1, z_2) = \big(\lambda_\theta (z_1) , c\, B (z_1)\, h (z_2) \big) \, , \end{displaymath} where $B$ is a Blaschke product as in \eqref{Blaschke}, $0 < c < 1$, and $h$ is an arbitrary inner function. Then, for some positive constant $b$, we have, for all $N \geq 1$: \begin{equation} \qquad a_N (C_\Phi) \gtrsim \exp ( - b \, N^{1/3}) = \exp ( - b \, \sqrt{N} / N^{1/6} ) \, . \end{equation} In particular $\beta_2 (C_\Phi) = \beta_2^\pm (C_\Phi) = 1$. \end{theorem} \noindent{\bf Remark.} We saw in Theorem~\ref{majo lens} that this is the exact size, since we have: $a_N (C_\Phi) \lesssim \e^{- \beta \, N^{1/3}}$. \begin{proof} By Lemma~\ref{Lemma Blaschke}, there is a sequence of numbers $r_l$, with $1 - r_l \approx \sigma^l$, such that $|B (z) | \geq \delta$ for $|z| = r_l$, where $\delta$ is a positive constant (depending on $\sigma$). Since $\lambda_\theta (0) = 0$, we have: \begin{displaymath} {\rm diam}_\rho (A_l) \geq \lambda_\theta (r_l) \gtrsim 1 - (1 - r_l)^\theta \, ; \end{displaymath} hence, by \cite{LQR}, Theorem~3.13, we have: \begin{displaymath} \capa (A_l) \gtrsim \log \frac{1}{1 - r_l} \gtrsim l \, , \end{displaymath} or, equivalently: $\Gamma_l \geq \e^{ - b / l}$, for some $b > 0$. Then \eqref{estim inf} gives, for all $l \geq 1$ (with another $b$): \begin{displaymath} a_N (C_\Phi) \gtrsim \exp \bigg[ - b \bigg( l + \frac{\sqrt{N} }{\sqrt l} \bigg) \bigg] \, . \end{displaymath} Taking $l = N^{1/3}$, we get the result. \end{proof} \noindent {\bf Example 2.} By taking the cusp map instead of a lens map, we obtain a better result, close to the extremal one. \begin{theorem} Let $\Phi (z_1, z_2) = \big( \chi (z_1), c\, B (z_1) \, h (z_2) \big)$, where $\chi$ is the cusp map, $B$ a Blaschke product as in \eqref{Blaschke}, $0 < c < 1$, and $h$ an arbitrary inner function. Then, for some positive constant $b$ and all $N \geq 1$: \begin{displaymath} a_N (C_\Phi) \gtrsim \e^{ - b \, \sqrt{N} / \sqrt{\log N }} \, . \end{displaymath} In particular $\beta_2 (C_\Phi) = 1$. \end{theorem} \noindent{\bf Remark.} We saw in Theorem~\ref{majo cusp} that this is the exact size, since we have: $a_N (C_\Phi) \lesssim \e^{ - \beta \sqrt{N / \log N}}$. \begin{proof} The proof is the same as that of Theorem~\ref{Prop mino lens}, except that, for the cusp map, we have (note that $\chi (0) = 0$): \begin{displaymath} {\rm diam}_\rho (A_l) \geq \chi (r_l) \, . \end{displaymath} But when $r$ goes to $1$: \begin{displaymath} 1 - \chi (r) \sim \frac{\pi \, (\sqrt{2} - 1)}{2} \, \frac{1}{\log \big( 1 / (1 - r) \big)} \end{displaymath} (see \cite{LIQURO-Estimates}, Lemma~4.2). Hence, by \cite{LQR}, Theorem~3.13, again, we have: \begin{displaymath} \capa (A_l) \gtrsim \log \Big( \log \big( 1 / (1 - r_l) \big) \Big) \, , \end{displaymath} so $\Gamma_l \geq \e^{ - b / \log l}$. Then, \eqref{estim inf} gives (with another $b$): \begin{displaymath} a_N (C_\Phi) \gtrsim \exp \bigg[ - b \bigg( l + \frac{\sqrt{N}}{\sqrt{\log l}} \bigg) \bigg] \, . \end{displaymath} Taking $l = \sqrt{N / \log N}$, we get the announced result.
\end{proof} \subsection{Upper bounds} All previous results point in the direction that, if $\Vert \Phi \Vert_\infty = 1$, then however small $a_{n}(C_\Phi)$ is, it will always be larger than $\alpha \, \e^{- \beta \varepsilon_n \sqrt n}$ with $\varepsilon_n \to 0^{+}$, as this is the case in dimension one (with $n$ instead of $\sqrt n$). But Theorem~\ref{chobou} to follow shows that we cannot hope, in full generality, to get the same result in dimension $d \geq 2$, and that other phenomena await to be understood. Here is our main result. It shows that, even for a truly $2$-dimensional symbol $\Phi$, we can have $\Vert\Phi \Vert_\infty = 1$ and nevertheless $\beta_{2}^{+} (C_\Phi) < 1$, in contrast to the $1$-dimensional case where \eqref{equiv-dim1} holds. \begin{theorem} \label{chobou} There exist a map $\Phi \colon \D^2\to \D^2$ such that: \par $1)$ the composition operator $C_\Phi \colon H^{2}(\D^2)\to H^{2}(\D^2)$ is bounded and compact; \par $2)$ we have $\Vert\Phi \Vert_\infty = 1$ and $\Phi$ is truly $2$-dimensional, so that $\beta_{2}^{-}(C_\Phi) > 0$; \par $3)$ the singular numbers satisfy $a_{n}(C_\Phi) \leq \alpha \, \e^{-\beta \,\sqrt{n}}$ for some positive constants $\alpha, \beta$; in particular $\beta_{2}^{+} (C_\Phi) < 1$. \end{theorem} \begin{proof} Let $0 < \theta < 1$ be fixed, and $\lambda_\theta$ be the corresponding lens map. We set: \begin{displaymath} \left\{ \begin{array} {lcll} & \phi & = & \dis \frac{1 + \lambda_\theta}{2} \\ & w (z) & = & \dis \exp \bigg[ - \bigg( \frac{1+ z}{1 - z}\bigg)^\theta\, \bigg] \\ & \psi & = & w\circ \phi \, . \end{array} \right. \end{displaymath} Note that $\|\phi \|_\infty = 1$. Setting $\delta = \cos (\theta\pi/2) > 0$, we have for $z\in \D$: \begin{equation} \label{choi0} |1 - \phi (z)|= \frac{1}{2} \, |1 - \lambda_{\theta}(z)| = \bigg|\frac{(1 - z)^\theta}{(1 - z)^\theta + (1 + z)^\theta}\bigg| \leq \frac{|1 - z|^\theta}{\delta} \, \cdot \end{equation} Indeed, the argument $\alpha$ of $(1 \pm z)^\theta$ satisfies $|\alpha| \leq \theta \pi/2$ for $z\in \D$, and we get: \begin{displaymath} |(1 - z)^\theta + (1 + z)^\theta|\geq \Re [(1 - z)^\theta + (1+ z)^\theta] \geq \delta(|1 + z|^\theta + |1 - z|^\theta) \geq \delta \, . \end{displaymath} We also see that $\phi (\D)$ touches the boundary $\partial \D$ only at $1$ in a non-tangential way, meaning that for some constant $C>1$: \begin{displaymath} \qquad \quad 1 - |\phi (z)| \geq \frac{1}{C} \, |1 - \phi (z)| \, , \quad \forall z \in \D \, . \end{displaymath} Now, we have the following two inequalities: \begin{align} & \Re z \geq 0 \quad \Longrightarrow \quad |w (z)|\leq \exp \bigg( - \frac{\delta}{|1 - z|^\theta} \bigg) \label{choi1} \\ & z \in \D \quad \Longrightarrow \quad |\psi (z)| \leq \exp\bigg( - \frac{\delta^{2}}{|1 - z|^{\theta^2}}\bigg) \, . \label{choi2} \end{align} Indeed, with $S (z) = \big( \frac{1+ z}{1 - z}\big)^\theta$, we have $\Re\, S (z) \geq \delta \, |S (z)|\geq \delta |1 - z|^{- \theta}$ when $\Re z\geq 0$, giving \eqref{choi1}, and \eqref{choi0} and \eqref{choi1} imply, since $\Re \phi (z)\geq 0$: \begin{displaymath} |\psi (z)| = |w \big( \phi (z) \big)|\leq \exp \bigg( - \frac{\delta}{|1 - \phi (z)|^\theta} \bigg) \leq \exp \bigg( - \frac{\delta^{2}}{|1 - z|^{\theta^2}}\bigg) \, \cdot \end{displaymath} We now set: \begin{equation} \Phi (z_1, z_2) = \big(\phi (z_1), \psi (z_1) \, h (z_2) \big) \, , \end{equation} with $h$ a Rudin function. 
Observe that $\phi \in A(\D)$ and $\psi = w\circ \phi \in A(\D)$ as well ($w \in A(\D)$ with $w (1) = 0$; this is due to the presence of the parameter $\theta < 1$). Hence, if we take for $h$ a finite Blaschke product, the two components of $\Phi$ are in the bidisk algebra $A (\D^2)$.\par We have $\| \psi \|_\infty := \rho < 1$. In fact, for $\Re u \geq 0$, we have: \begin{displaymath} \bigg| \frac{1 + u}{1 - u} \bigg|^\theta \geq 2^{- \theta} | 1 + u|^\theta \geq 2^{- \theta} (1 + \Re u)^\theta \geq 2^{- \theta} \, , \end{displaymath} hence: \begin{displaymath} \Re \bigg[ \bigg( \frac{1 + u}{1 - u} \bigg)^\theta \, \bigg] \geq \bigg( \cos \frac{\theta \pi}{2} \bigg) \, \bigg| \frac{1 + u}{1 - u} \bigg|^\theta \geq \bigg( \cos \frac{\theta \pi}{2} \bigg) \, 2^{- \theta} = \delta \, 2^{- \theta} \, , \end{displaymath} and $\| w \circ \phi \|_\infty \leq \e^{- 2^{- \theta} \delta}$.\par Now, $1)$ follows from the orthogonal model presented in Section~\ref{general facts}, because $\| \psi \|_\infty < 1$. \par The assertion $2)$ follows from \cite{BLQR}, Theorem~3.1, since $\| \phi \|_\infty = 1$.\par We now prove $3)$. As observed, $C_\Phi$ can be viewed as a direct sum $T = \bigoplus_{k = 0}^\infty T_k$ acting on a Hilbertian sum $H = \bigoplus_{k = 0}^\infty H_k$, where $T_k$ acts on a copy $H_k$ of $H^{2}(\D)$ with: \begin{displaymath} T_k = M_{\psi^{k}} C_\phi \, . \end{displaymath} We fix the positive integer $n$. The rest of the proof will consist of three lemmas. \begin{lemma} \label{rest1} We have $\Vert T_k\Vert \leq 2 \, \rho^{k} \leq 2\, \rho^{n}$ for $k > n$. \end{lemma} \begin{proof} Indeed, since $\phi (0) = 1/2$, we know that $\Vert C_{\phi}\Vert \leq \sqrt{\frac{1 + \phi(0)}{1 - \phi(0)}}=\sqrt{3}\leq 2$, so that $\Vert T_k\Vert\leq \Vert \psi^k\Vert_\infty \Vert C_\phi\Vert\leq \rho^{k}\times 2$. \end{proof} \begin{lemma} \label{rest2} Set $b = a /\delta^2 $ where $a > 0$ is given by $\e^{- a}=4C /\sqrt{16 C^2+1}$ and $C$ is as in \eqref{veritas}. Let $m_k$ be the smallest integer such that $k \, \delta^{2} 2^{m_{k} \theta^2} \geq a n$; namely: \begin{equation} \label{smal} m_k =\bigg[\frac{\log(b\,n/k)}{\theta^{2}\log 2} \bigg] + 1 \, , \end{equation} where $[\, . \, ]$ stands for the integer part. Then, with $a'=\min(\log 2,a)$: \begin{displaymath} a_{n m_k + 1} (T_k) \lesssim \e^{- a'n} \, . \end{displaymath} \end{lemma} \begin{proof} This follows from Theorem~\ref{special} applied with $w = \psi^{k}$, $R = k \, \delta^2$ and $\theta$ changed into $\theta^2$. This is possible thanks to \eqref{choi2} and to Lemma~\ref{rest1}. Moreover we have adjusted $m_k$ so as to make the two terms in Theorem~\ref{special} of the same order. \end{proof} \begin{lemma} \label{rest3} The dimension $d :=\sum_{k = 0}^n n \, m_{k}$ satisfies, for some positive constant $\alpha$: \begin{displaymath} d \leq \alpha \, n^2 \, . \end{displaymath} \end{lemma} \begin{proof} Indeed, it is well-known that: \begin{displaymath} \sum_{k=1}^{n}\log k = n\log n - n + {\rm O}\, (\log n) \, , \end{displaymath} and, in view of \eqref{smal}, we have $m_k \leq \alpha'_\theta \log (b \, n /k) \leq \alpha''_\theta (\log n - \log k)$; hence: \begin{displaymath} \sum_{k = 1}^n m_k \leq \alpha''_\theta \big[ n \log n - \big(n \log n - n + {\rm O}\, (\log n) \big) \big] = \alpha''_\theta \, n + {\rm O}\, (\log n) \, , \end{displaymath} and we get $d \leq \alpha''_\theta \, n^2 + {\rm O}\, (n \log n) \leq \alpha_\theta \, n^2$. Alternatively, we could have used a Riemann sum for the function $\log (1/x)$ on $(0, 1]$.
\end{proof} Finally, putting things together and using as well Proposition~\ref{majo} with $K = n$ and $n_k = n m_k + 1$ so that $(\sum_{k = 0}^n n_k) - n = (\sum_{k = 0}^n n \, m_k) + 1 = d + 1$, we get ignoring once more multiplicative constants: \begin{displaymath} a_{n^2} (T) \lesssim a_{d} (T) \leq \alpha \, \e^{- \beta n} \end{displaymath} with positive constants $\alpha$, $\beta$. This ends the proof of Theorem~\ref{chobou}. \end{proof} \section{Monge-Amp\`ere capacity and applications} \label{sec: capacity} \subsection{Definition} Let $K$ be a compact subset of $\D^m$ (in this section, for notational reasons, we denote the dimension by $m$ instead of $d$). The Monge-Amp\`ere capacity of $K$ has been defined by Bedford and Taylor (\cite{BT}; see also \cite{KOSK}, \S~5 or \cite{Klimek}, Chapter~1) as: \begin{displaymath} \capam (K) = \sup \bigg\{ \int_K (dd^c u)^m \, ; \ u \in PSH \text{ and } 0 \leq u \leq 1 \bigg\} \, , \end{displaymath} where $PSH$ is the set of plurisubharmonic functions on $\D^m$, $dd^c = 2 i \partial \bar\partial$, and $(dd^c)^m = dd^c \wedge \cdots \wedge dd^c$ ($m$ times). When $u \in PSH \cap\, {\cal C}^2 (\D^m)$, we have: \begin{displaymath} (dd^c u)^m = 4^m m! \det \bigg( \frac{\partial^2 u}{\partial z_j \partial \bar{z}_k} \bigg)\, dV (z) \, , \end{displaymath} where $dV (z) = (i/2)^m dz_1 \wedge d\bar{z}_1 \wedge \cdots \wedge dz_m \wedge d\bar{z}_m$ is the usual volume in $\C^m$. A more convenient formula (because $\D^m$ is bounded and hyperconvex: see \cite{Klimek}, p.~80, for the definition) is: \begin{displaymath} \capam (K) = \int_K (dd^c u_K^\ast)^m \, , \end{displaymath} where $u_K^\ast$ is called the \emph{extremal function of $K$} and is the upper semi-continuous regularization of: \begin{displaymath} u_K = \sup\{ v \in PSH \, ; \ v \leq 0 \text{ and } v \leq - 1 \text{ on } K\} \, , \end{displaymath} but we will not need that. \par As in \cite{ZAK}, we set: \begin{equation} \tau_m (K) = \frac{1}{(2 \pi)^m} \, \capam (K) \ . \end{equation} For $m = 1$, $\tau (K) := \tau_1 (K)$ is equal to the Green capacity $\capa (K)$ of $K$ with respect to $\D$, with the definition used in \cite{LQR} (see \cite{KOSK}, Theorem~8.1, where a factor $2 \pi$ is introduced). We further set: \begin{equation} \label{set} \Gamma_{m} (K) = \exp \bigg[ - \bigg( \frac{m!}{\tau_m (K)}\bigg)^{1/m}\bigg] \, \cdot \end{equation} We proved in \cite{LQR} that, for $m = 1$, and $\varphi \colon \D \to r\D$, with $0 < r <1$, we have: \begin{equation} \label{spectral} \beta_1 (C_\varphi) = \Gamma_1 \big( \overline{\varphi (\D)} \big) \, . \end{equation} The goal of this section is to see that Theorem~\ref{chobou} shows that this no longer holds for $m = 2$. \subsection{A seminal example} In one variable, our initial motivation had been the simple-minded example $\varphi (z) = r z$, $0 < r < 1$, for which $C_{\varphi}(z^n) = r^n z^n$, implying $a_{n}(C_\varphi) = r^{n - 1}$ and $\beta_1 (C_\varphi) = r$. If $K = \overline{\varphi (\D)} =\overline{D} (0, r)$, we have $\capa (K) = \frac{1}{\log 1/r}$ and $\Gamma_1 (K) = r$, so that $\beta_1 (C_\varphi) = \Gamma_1 (K)$. Let us examine the multivariate example (where $0< r_j < 1$): \begin{displaymath} \Phi (z_1, z_2, \ldots, z_m) = (r_1 z_1, r_2 z_2, \ldots, r_m z_m). 
\end{displaymath} If $K = \overline{\Phi (\D^m)}$, we have $K = \prod_{k = 1}^m \overline{D} (0, r_k)$, and hence (\cite{Blocki}, Theorem~3): \begin{equation} \label{thus} \tau_m (K) =\prod_{k = 1}^m \frac{1}{\log (1/r_k)} \, \cdot \end{equation} On the other hand, $C_{\Phi} (z_{1}^{n _1} z_{2}^{n_2}\cdots z_{m}^{n_m}) = r_{1}^{n_1} r_{2}^{n_2}\cdots r_{m}^{n_m} \, z_{1}^{n_1} z_{2}^{n_2}\cdots z_{m}^{n_m}$ so that the sequence $(a_n)_n$ of approximation numbers of $C_\Phi$ is the non-increasing rearrangement of the numbers $r_{1}^{n_1} r_{2}^{n_2}\cdots r_{m}^{n_m}$. It is convenient to state the following simple lemma. \begin{lemma} Let $\lambda_1, \ldots, \lambda_m$ be positive numbers. Let $N_A$ be the number of $m$-tuples of non-negative integers $(n_1, \ldots, n_m)$ such that $\sum_{k = 1}^m \lambda_k n_k\leq A$. Then, as $A\to \infty$: \begin{displaymath} N_A \sim \frac{A^m}{(\lambda_1\cdots \lambda_m)\, m!} \, \cdot \end{displaymath} \end{lemma} Indeed, just apply Karamata's tauberian theorem (see \cite{KOR} p.~30) to the generalized Dirichlet series: \begin{displaymath} S (\varepsilon) := \prod_{k=1}^m \frac{1}{1 - \e^{- \lambda_k \varepsilon}} \quad = \sum_{n_1, \ldots, n_m \geq 0} \e^{ - (\sum_{k = 1}^m \lambda_{k} n_k) \, \varepsilon} \, ; \end{displaymath} we have $S (\varepsilon) \sim \frac{\varepsilon^{- m}}{(\lambda_1 \cdots \lambda_m)}$ as $\varepsilon \to 0^{+}$. Let now $N$ be a positive integer and $\varepsilon = a_N$. Setting $\lambda_k = \log (1/r_k)$ and $A = \log (1/\varepsilon)$, we see that $N$ is the number of $m$-tuples $(n_1,\ldots, n_m)$ of non-negative integers such that $r_{1}^{n_1} r_{2}^{n_2}\cdots r_{m}^{n_m}\geq \varepsilon$, i.e. such that $\sum_{k = 1}^m \lambda_k n_k \leq A$. This number $N$ is hence nothing but the number $N_A$ of the previous lemma, so that: \begin{displaymath} N \sim \frac{A^m}{(\lambda_1 \cdots \lambda_m) \, m!} \, \cdot \end{displaymath} Inverting this formula, we get: \begin{displaymath} a_N (C_\Phi) = \exp \big[ - (1 + {\rm o}\, (1)) \, (m! (\lambda_1 \lambda_2\cdots \lambda_m)\, N)^{1/m}\big] \end{displaymath} and: \begin{displaymath} \beta_{m} (C_\Phi) = \exp\big[ - (m! \lambda_1 \lambda_2\cdots \lambda_m)^{1/m} \big] = \Gamma_{m} (K) \, , \end{displaymath} in view of \eqref{set} and \eqref{thus}. On the view of the simple-minded previous example, the extension of the spectral radius formula \eqref{spectral} to the multivariate case holds, and it is tempting to conjecture that this is a general phenomenon as in dimension one, all the more as the following extension of Widom's theorem was proved by Zakharyuta, based on the solution by S.~Nivoche of Zakharyuta's conjecture (\cite{Nivoche}); see also \cite{ZAK}, Theorem~5.4. A compact subset $K$ of $\D^m$ is said to be \emph{regular} if its extremal function $u_K^\ast$ is continuous on $\D^m$. \begin{theorem} [\cite{ZAK}, Theorem~5.6] \label{Zakha-exact} Let $K$ be a regular compact subset of $\D^m$ and $J \colon H^\infty (\D^m) \to {\cal C} (K)$ the canonical injection; then the Kolmogorov numbers $d_n (J)$ satisfy: \begin{equation} \lim_{n \to \infty} \big[d_n (J) \big]^{1/n^{1 /m}} = \exp \bigg[ - \bigg(\frac{m!}{\tau_m (K)} \bigg)^{1/m} \bigg] \, \cdot \end{equation} \end{theorem} Note that the right side is nothing but $\Gamma_m (K)$. We will see consequences of this result in a forthcoming paper (\cite{LQR-Pluricap}). \subsection{Upper bound} For the upper bound, the situation behaves better, as stated in the following theorem. 
\begin{theorem} [\cite{ZAK}, Proposition~6.1] \label{Zakha-upper} Let $K$ be a compact subset of $\D^m$ with non-void interior. Then: \begin{equation} \limsup_{n \to \infty} \big[ d_n (J) \big]^{1/n^{1/m}} \leq \exp \bigg[ - \bigg( \frac{m!}{\tau_m (K)} \bigg)^{1/m} \bigg] \, . \end{equation} \end{theorem} Note that $(K, \D^m)$ is a condenser since $K$ has non-void interior. We deduce the following upper bound. \begin{theorem} \label{extension} Let $\Phi$ be an analytic self-map of $\D^m$ with $\Vert \Phi \Vert_\infty = \rho < 1$, thus inducing a compact composition operator on $H^{2}(\D^m)$. Then we have: \begin{displaymath} \beta_{m}^{+} (C_\Phi) \leq \Gamma_{m} \big( \overline{\Phi (\D^m)} \big) \, . \end{displaymath} \end{theorem} \begin{proof} This proof provides in particular a simplification of that given in \cite{LQR} in dimension $m = 1$. Changing $n$ into $n^m$, Theorem~\ref{Zakha-upper} means that for every $\varepsilon > 0$, there exists an $(n^{m} - 1)$-dimensional subspace $V$ of $\mathcal{C}(K)$ such that, for any $g \in H^{\infty} (\D^m)$, there exists $h \in V$ such that: \begin{equation} \label{demi} \Vert g - h\Vert_{\mathcal{C}(K)} \leq C_\eps (1 + \varepsilon)^n \big[ \Gamma_{m}(K) \big]^n \Vert g \Vert_\infty \, . \end{equation} Let $l$ be an integer to be adjusted later, and $f (z) = \sum_{\alpha} b_\alpha z^\alpha \in B_{H^2}$, as well as $g (z) = \sum_{|\alpha|\leq l} b_\alpha z^\alpha$. We first note that (with $M_m$ depending only on $m$ and $\rho$, and since the number of $\alpha$'s such that $|\alpha|\leq p$ is ${\rm O}\, (p^m)$): \begin{displaymath} \sum_{|\alpha|> l} \rho^{2|\alpha|} \leq M_m \sum_{p > l} p^{m} \, \rho^{2 p} \leq M_m l^{m} \, \frac{\rho^{2 l}}{(1 - \rho^2)^{m + 1}} \, \cdot \end{displaymath} We next observe that, by the Cauchy-Schwarz and Parseval inequalities: \begin{equation}\label{une} \Vert g \Vert_\infty \leq M_m \, l^{m/2} \, , \end{equation} and \begin{equation} \label{une-et-demi} \qquad |f (z) - g (z)| \leq M_m \, l^{m/2}\frac{|z|_{\infty}^{l} \ \quad}{(1 - |z|_{\infty}^2)^{(m + 1)/2}} \, \raise 1 pt \hbox{,} \qquad \forall z \in \D^m \, . \end{equation} where $|z|_\infty :=\max_{j \leq m} |z_j|$ if $z = (z_1, \ldots, z_m)$. The subspace $F$ formed by functions $v \circ \Phi$, for $v\in V$, can be viewed as a subspace of $L^{\infty}(\T^m) \subseteq L^{2}(\T^m)$ with respect to the Haar measure of $\T^m$, the distinguished boundary of $\D^m$ (indeed, we can write $(v \circ \Phi)^\ast = v \circ \Phi^\ast$, where $\Phi^\ast$ denotes the almost everywhere existing radial limits of $\Phi (r z)$, which belong to $K$). Let finally $E = P (F) \subseteq H^2 (\D^m)$ where $P \colon L^{2}(\T^m)\to H^2 (\T^m) = H^2 (\D^m)$ is the orthogonal projection. This is a subspace of $H^2$ with dimension $< n^m$. Set temporarily $\eta = C_\eps (1 + \varepsilon)^n \big[ \Gamma_{m}(K) \big]^n$. It follows from \eqref{demi} and \eqref{une} that, for some $h \in V$: \begin{displaymath} \Vert g - h \Vert_{\mathcal{C}(K)} \leq \eta \, \Vert g \Vert_\infty \leq \eta \, M_m \, l^{m/2} \end{displaymath} and hence: \begin{displaymath} \Vert g \circ \Phi - h \circ \Phi \Vert_{2} \leq \Vert g \circ \Phi - h \circ \Phi \Vert_{\infty} \leq \eta \, M_m \, l^{m/2} \, , \end{displaymath} implying by orthogonal projection: \begin{displaymath} {\rm dist}\, (C_\Phi g, E) \leq \Vert g \circ \Phi - P (h \circ \Phi) \Vert_{2} \leq \eta \, M_m \, l^{m/2} \, . 
\end{displaymath} Now, since $C_\Phi f (z) - C_\Phi g (z) = f \big( \Phi (z) \big) - g \big( \Phi (z) \big)$, \eqref{une-et-demi} gives: \begin{displaymath} \Vert C_\Phi f - C_\Phi g \Vert_2 \leq \Vert C_\Phi f - C_\Phi g \Vert_\infty \leq M_m \, l^{m/2} \, \frac{\rho^l}{(1 - \rho^2)^{(m + 1)/2}} \end{displaymath} and hence: \begin{displaymath} {\rm dist}\, (C_\Phi f, E) \leq M_m \, l^{m/2} \,\bigg( \frac{\rho^l}{(1 - \rho^2)^{(m + 1)/2}} + C_\eps (1 + \eps)^n \big[ \Gamma_{m} (K) \big]^n \bigg) \, . \end{displaymath} It ensues, since $a_N (C_\Phi) = d_N (C_\Phi)$, that: \begin{displaymath} \big[ a_{n^m} (C_\Phi) \big]^{1/n} \leq (M_m \, l^{m/2})^{1/n} \, \bigg[ \frac{\rho^{l/n}}{(1 - \rho^2)^{(m + 1)/2n}} + C_\eps^{1/n} (1 + \eps) \, \Gamma_{m}(K) \bigg] \, . \end{displaymath} Taking now for $l$ the integer part of $n \log n$, and passing to the upper limit as $n \to \infty$, we obtain (since $l / n \to \infty$ and $(\log l)/n\to 0$): \begin{displaymath} \beta_{m}^{+} (C_\Phi) \leq (1 + \eps) \,\Gamma_{m} (K) \, , \end{displaymath} and Theorem~\ref{extension} follows. \end{proof} \noindent{\bf Acknowledgements:} The two first-named authors would like to thank the colleagues of the University of Sevilla for their kind hospitality, which allowed a pleasant and useful stay, during which this collaboration was initiated. They also thank E.~Fricain, S.~Nivoche, J.~Ortega-Cerd\`a, and A.~Zeriahi for useful discussions and informations.\par \noindent The third-named author is partially supported by the project MTM2015-63699-P (Spanish MINECO and FEDER funds). {\footnotesize Daniel Li \\ Univ. Artois, Laboratoire de Math\'ematiques de Lens (LML) EA~2462, \& F\'ed\'eration CNRS Nord-Pas-de-Calais FR~2956, Facult\'e Jean Perrin, Rue Jean Souvraz, S.P.\kern 1mm 18 F-62\kern 1mm 300 LENS, FRANCE \\ [email protected] Herv\'e Queff\'elec \\ Univ. Lille Nord de France, USTL, Laboratoire Paul Painlev\'e U.M.R. CNRS 8524 \& F\'ed\'eration CNRS Nord-Pas-de-Calais FR~2956 F-59\kern 1mm 655 VILLENEUVE D'ASCQ Cedex, FRANCE \\ [email protected] Luis Rodr{\'\i}guez-Piazza \\ Universidad de Sevilla, Facultad de Matem\'aticas, Departamento de An\'alisis Matem\'atico \& IMUS, Apartado de Correos 1160 41\kern 1mm 080 SEVILLA, SPAIN \\ [email protected] } \end{document}
\begin{document} \title{Quantum Random Access Code in Noisy Channels} \author{Rafael A. da Silva} \email[Correspondence email address: ]{[email protected]} \affiliation{Itaú Quantum Technologies} \affiliation{Universidade Federal do ABC, CCNH, Santo André, SP, Brazil} \author{Breno Marques} \email[Correspondence email address: ]{[email protected]} \affiliation{Universidade Federal do ABC, CCNH, Santo André, SP, Brazil} \date{\today} \begin{abstract} Random access code (RAC) is a communication protocol that is particularly useful when the communication between parties is restricted. In this work we build upon works that have previously proven the quantum random access code (QRAC), in the absence of noise, to be more advantageous than the classical random access code (CRAC), and we investigate the effects of noisy channels on the QRAC performance and how the losses can be mitigated by using the see-saw method, optimized by semi-definite programming, when the noisy channel is known. \end{abstract} \keywords{Quantum communication, noisy channel, quantum random access code.} \maketitle \section{Introduction} In quantum communication one uses quantum resources such as superposition and entanglement to enhance information transmission beyond classical limitations \cite{Gal}. An example of this is the quantum random access code (QRAC), first introduced by S. J. Wiesner in 1983 \cite{Wiesner2} and rediscovered by A. Ambainis \textit{et al.} \cite{Andris}, in which the use of quantum strategies for encoding and decoding Alice's messages improves Bob's probability of correctly accessing the information he is interested in when compared to classical strategies. In general, a QRAC involves a party, Alice, who has an $n$-dit long string which she must encode in $m<n$ qudits and send to Bob, who is only interested in a subset of the string. He must be able to retrieve this information with average probability of success $P_d>P_{chance}$, where $P_{chance}$ represents his average probability of success if he randomly guessed the information. This family of QRACs can be symbolically represented as $n^{(d)} \xrightarrow{P_d} m$. Armin Tavakoli \textit{et al.} showed in \cite{Breno} that, in the noiseless regime, the $2^{(d)} \xrightarrow{P_d} 1$ QRAC outperforms its classical counterpart (the CRAC) for any dimension $d$ in terms of average probability of success.\\ \indent However, in the implementation of quantum communication protocols, in both experimental and application contexts, it is unlikely that one will find a system that operates so close to ideality that one may completely neglect the influence of quantum noise on said system and, consequently, on the protocol itself. Therefore, in those contexts it is fundamental to account for sources of noise in order to determine possible limitations on the implementation of the protocol, or even on its viability. Previous works have addressed the issue of quantum communication through noisy channels for protocols such as QKD \cite{QKD}, quantum steganography \cite{steno}, and quantum teleportation \cite{alejandro, fortes}. However, the behaviour of QRACs under noisy quantum channels has not been addressed, especially not for the $2^{(d)} \xrightarrow{P_d} 1$ QRAC. In this work, we investigate, \textit{via} simulations, how Bob's average probability of success, $P_d$, evolves in time, for a given dimension $d$, when the communication happens through one of the following Markovian channels: the dit flip, d-phase flip, dephasing, depolarizing, and amplitude damping channels.
The simulations show that the action of these channels can reduce the efficiency of the QRAC to the point that its classical counterpart performs better. We then attempt to mitigate this loss in performance by optimizing the protocol using semi-definite programming (SDP), a sub-field of convex optimization that has been extensively applied for a wide range of purposes in quantum information \cite{SDPCA, SDPSW,SDPRA, SDPFE}. It is especially well suited for our problem because our figure of merit, the quantum average probability of success, depends linearly on both the encoding states (density matrices) and the decoding measurement operators, which are likewise positive semi-definite and are constrained to have trace equal to one. Therefore, the task of maximizing this probability of success can be cast as an SDP problem. In section (\ref{first}) we review the classical and quantum $2^{(d)} \xrightarrow{P_d} 1$ random access code protocols. In section (\ref{second}) we present a basic overview of the theory of open quantum systems and how it relates to the concept of noisy quantum channels; we also introduce the five noisy quantum channels studied in this work: the dit-flip, d-phase-flip, depolarizing, dephasing, and amplitude-damping channels. In section (\ref{third}) we apply the concepts laid out in the previous sections to study the behaviour and performance of the quantum RAC in those noisy channels, and how the noise can be mitigated by changing the protocol strategy. \section{Review of Random Access Code}\label{first} In a RAC, Bob aims to access an arbitrary subset of information held by Alice, using a restricted communication channel. The probability of success can be optimized through the choice of communication strategy \cite{Breno}. \subsection{Classical Random Access Code} In the classical version of the $2^{(d)} \xrightarrow{P_d} 1$ RAC, Alice encodes the string $x=x_0x_1$ in a $d$-level classical state (see figure \ref{fig:classico}). In the best classical strategy \cite{Breno}, Alice always sends the first input $x_0$. If Bob has to guess the first (second) input, the probability of success will be $1$ ($1/d$). The average probability of success will be given by: \begin{equation}\label{eq1} P^C=\frac{1}{2}\left(1 + \frac{1}{d}\right). \end{equation} \begin{figure} \caption{Schematics of a classical RAC protocol for $2^{(d)} \label{fig:classico} \end{figure} \subsection{Quantum Random Access Code} In the quantum version of the $2^{(d)} \xrightarrow{P_d} 1$ RAC, however, Alice encodes $x=x_0x_1$ in a single $d$-level quantum state (see figure \ref{fig:quantico}) \cite{Breno}. This state can be constructed in terms of two mutually unbiased bases (MUB). In the present work we choose the computational basis $B_e=\{\ket{e_l}\}_{l=0}^{d-1}$ ($\ket{e_l}=\ket{l}$) and the Fourier basis $B_f=\{\ket{f_l}\}_{l=0}^{d-1}$ ($\ket{f_l}=(1/\sqrt{d})\sum_{n=0}^{d-1}\omega^{ln}\ket{n}$, where $\omega=\exp{2\pi i/d}$) for constructing the encoding states \begin{equation}\label{eq2} \ket{\psi_{x_0x_1}}=\frac{1}{\sqrt{2+(2/\sqrt{d})}}\left(\ket{e_{x_0}} + \ket{f_{x_1}}\right).
\end{equation} Whenever Bob is interested in $x_0$ ($x_1$), he performs a measurement in the basis $B_e$ ($B_f$), and the average probability of success for this strategy is given by: \begin{equation}\label{eq3} \begin{split} \MoveEqLeft P^Q=\frac{1}{2d^2}\sum_{x_0=0}^{d-1}\sum_{x_1=0}^{d-1}\Tr{\rho_{x_0x_1}(M_{x_0}+M_{x_1})} \\ &=\frac{1}{2}\left(1 + \frac{1}{\sqrt{d}}\right), \end{split} \end{equation} where $\rho_{x_0x_1}=\op{\psi_{x_0x_1}}$, $M_{x_0}=\op{e_{x_0}}$ and $M_{x_1}=\op{f_{x_1}}$. It is easy to show that the probabilities of all outcomes are the same (equal to $P^Q$) regardless of Alice's string $x=x_0x_1$ and of which input Bob tries to recover. This happens because the magnitudes of the projections of $\ket{\psi_{x_0x_1}}$ onto the $B_e$ and $B_f$ bases are all equal (see figure \ref{fig:state_distribution}, where this is illustrated for $d=2$ using the Bloch sphere). As we will see in sections \ref{second} and \ref{third}, this symmetry can be broken when the communication occurs through noisy channels. \begin{figure} \caption{Schematics of a quantum RAC protocol through a noiseless communication channel for $2^{(d)} \label{fig:quantico} \end{figure} Comparing equations (\ref{eq1}) and (\ref{eq3}), it is straightforward to see that, for a noiseless communication channel, the probability of success is always greater in the quantum case than in the classical case, \textit{i.e.}, \begin{equation}\label{pqpc} P^Q/P^C>1 \quad\forall d. \end{equation} From now on, this ratio will be the figure of merit when assessing the performance of the QRAC. \begin{figure} \caption{We present here an example for $d=2$ represented on the Bloch sphere. Notice that the projections on the $z$-axis (computational basis) and on the $x$-axis (Fourier basis) have the same modulus.} \label{fig:state_distribution} \end{figure} \section{Noisy quantum channels}\label{second} In non-ideal conditions, quantum noise is a limiting, or even prohibitive, factor for the enhancement offered by quantum communication protocols. Understanding how a quantum noise channel can affect the protocol is essential for predicting noise-related efficiency losses, and it might make it possible to find strategies for mitigating said effects without increasing the protocol complexity, e.g., by using quantum error correction protocols. The resulting dynamics can be understood in the light of the theory of quantum channels \cite{Petruccione,crispin}, in which the noise is described as an interaction between the encoding quantum system and the environment. This theory establishes that quantum channels, in general, are completely positive and trace-preserving (CPTP) maps \cite{Capacity}, which can be written in the Kraus representation as \begin{equation} \mathcal{N}(\rho)=\sum_{\nu}K_{\nu}\rho K_{\nu}^{\dagger}, \end{equation} where $K_{\nu}$ are the so-called Kraus operators, which satisfy $\sum_{\nu}K_{\nu}^{\dagger}K_{\nu}=I$. The quantum channels that will be considered in this work are presented in the next subsections and, as an example, the possible accessible states after the noise channel will be shown in a Bloch sphere representation for $d=2$. \subsection{Dit-flip Channel} This channel is the $d$-dimensional generalization of the bit-flip channel; its action flips a qudit $\ket{\mu}$ (figure~\ref{fig:dit}), with equal probability, to one of the states $\ket{\mu\oplus 1}$, $\ket{\mu\oplus 2}$, ..., $\ket{\mu\oplus d-1}$.
Since the dit flip channel belongs to the family of discrete Weyl channels (DWC) \cite{alejandro, Rehman}, it is possible to use the Weyl operators \begin{equation} W_{\nu\mu}=\sum_{k=0}^{d-1}\omega^{k\nu}\op{k}{k\oplus \mu},\quad \omega=\exp{2\pi i/d}, \end{equation} to write its set of Kraus operators as \begin{equation} K_{\nu}= \begin{cases} \sqrt{1-p}W_{0\nu}, &\nu=0 \\ \sqrt{\frac{p}{d-1}}W_{0\nu}, &1\leq \nu \leq d-1. \end{cases} \end{equation} \begin{figure} \caption{Representation on the Bloch sphere of the dit-flip channel for the qubit. This map conserves the probabilities of measurements in the Fourier basis but decreases the probabilities for measurements in the computational basis.} \label{fig:dit} \end{figure} \subsection{D-phase-flip Channel} The d-phase-flip channel is a generalization of the qubit phase-flip channel (figure~\ref{fig:phase}). It acts on a qudit $\ket{\mu}$ by flipping its phase, taking it, with equal probability, to one of the states $\omega^{\mu}\ket{\mu}$, $\omega^{2\mu}\ket{\mu}$, ..., $\omega^{(d-1)\mu}\ket{\mu}$, with $\omega=\exp{2\pi i/d}$. The Kraus operators for this channel are given as \begin{equation} K_{\mu}= \begin{cases} \sqrt{1-p}W_{\mu 0}, &\mu=0 \\ \sqrt{\frac{p}{d-1}}W_{\mu 0}, &1\leq \mu \leq d-1. \end{cases} \end{equation} \begin{figure} \caption{Representation on the Bloch sphere of the phase flip channel for the qubit. This map conserves the probabilities of measurements in the computational basis but decreases the probabilities for measurements in the Fourier basis.} \label{fig:phase} \end{figure} \subsection{Depolarizing Channel} The depolarizing channel is important in experimental contexts, where it is used to analyse experimental setups in which the quantum state may be lost or when working with non-ideal detectors. For qudits ($d$-level quantum systems), this channel can be described as follows: $\rho_S$ has probability $p$ of being replaced with the completely mixed state, $I/d$; otherwise it remains unchanged. The corresponding map is \begin{equation} \rho_S(t)=\frac{pI}{d}+(1-p)\rho_S(0), \end{equation} where $p=1-e^{-\Gamma t}$, and $\Gamma$ is the system-environment coupling constant. One interesting aspect of this channel is its symmetry (which can be visualized in figure~\ref{fig:depo} for $d=2$). As a consequence, the probabilities of success for measurements in either $B_e$ or $B_f$ decrease in time at the same rate. \begin{figure} \caption{Representation on the Bloch sphere of the depolarizing channel for a qubit. At $t=t_0$ the states, which are pure, lie on the surface of the sphere (represented by the dotted lines). As time passes and the states become mixed under the influence of noise, they lie inside the old Bloch sphere, as if it had shrunk. After a long enough time, any initial state will have evolved to the completely mixed state, represented by a single point at the center of the old Bloch sphere.} \label{fig:depo} \end{figure} \subsection{Dephasing Channel} The dephasing channel describes a decoherence process in which quantum information is lost without loss of energy. The evolution of a qudit under this channel can be described by \begin{equation}\label{masterdeph} \dot{\rho}_S = \Gamma\left[2a^{\dagger}a\rho_S a^{\dagger}a-\acomm{(a^{\dagger}a)^2}{\rho_S}\right], \end{equation} where $\Gamma$ is the dephasing system-environment coupling constant, and $a$ and $a^{\dagger}$ are the annihilation and creation operators, respectively.
By solving (\ref{masterdeph}), one finds that the elements of the initial density matrix $\rho_S(0)$ evolve as \begin{equation}\label{deph_master} \mel{n}{\rho_S(t)}{m} =(1-p)^{(n-m)^2}\mel{n}{\rho_S(0)}{m}, \end{equation} where $(1-p)=e^{-\Gamma t}$. The evolution of a qudit under this channel happens in such a manner that the probabilities for measurements in the basis $B_e$ remain constant in time, while those for measurements in the basis $B_f$ decrease (see figure (\ref{fig:deph}) for $d=2$). \begin{figure} \caption{Representation on the Bloch sphere of the dephasing channel for the qubit. This map shrinks the Bloch sphere in both the $x$ and $y$ directions while the poles remain unchanged. Again, this means that the probabilities of measurements in the computational basis are constant in time, but vary for measurements in the Fourier basis.} \label{fig:deph} \end{figure} \subsection{Amplitude Damping} The amplitude damping channel models loss of energy of the system to its environment and describes, for instance, the phenomenon of spontaneous emission \cite{Davi}. In this case, it drives the system to its ground state $\ket{0}$ (see figure (\ref{fig:amp}) for $d=2$) and, as a consequence, the probabilities for measurements in both bases $B_e$ and $B_f$ will decrease in time. This dynamics can be expressed \textit{via} the Kraus operators \begin{equation} K_{\nu}= \begin{cases} \op{\nu}+\sqrt{1-p}\sum_{k=1}^{d-1}\op{k},\quad \text{for} &\nu=0 \\ \sqrt{p}\op{0}{\nu},\quad \text{for}\quad 1\leq \nu \leq d-1. \end{cases} \end{equation} \begin{figure} \caption{Representation on the Bloch sphere of the amplitude-damping channel for a qubit. This map shrinks the Bloch sphere in the two directions of the equatorial plane, and also moves the center of the resultant ellipsoid towards the north pole, \textit{i.e.} \label{fig:amp} \end{figure} \section{The QRAC under noisy channels}\label{third} In the noiseless regime, the $2^{(d)} \xrightarrow{P_d} 1$ QRAC always outperforms its classical counterpart (section \ref{first}). As expected, we will show that this is not the case when the communication is performed over noisy quantum channels. We investigate the influence of quantum noise by considering a three-step communication process composed of encoding (state preparation), transmission, and decoding (measurement). We consider noise disturbance only in the transmission step. \subsection{The non-optimized scenario} Suppose that Alice prepares her encoding state $\rho_{x_0x_1}$ and the quantum system is sent through a noisy channel. Bob performs a measurement in either basis $B_e$ or $B_f$ as before (see figure (\ref{fig:quantico_noise}) for an illustration). Thus, his average probability of success is given by \begin{equation} P^Q(t)=\frac{1}{2d^2}\sum_{x_0=0}^{d-1}\sum_{x_1=0}^{d-1}\Tr{\rho_{x_0x_1}(t)(M_{x_0}+M_{x_1})}, \label{pqt} \end{equation} where $\rho_{x_0x_1}(t)$ is the quantum state received by Bob. $P^Q(t)$ can be used to evaluate how the QRAC performance changes as a function of time in comparison to the CRAC, which is quantified by the ratio $P^Q(t)/P^C$. An important parameter is the time when the QRAC loses its advantage over the CRAC, i.e., when $P^Q(t)/P^C=1$; it is shown in table~\ref{tab:1} for different $2^{(d)} \xrightarrow{P_d} 1$ QRAC scenarios and noise channels. The rate at which this happens depends strongly on the dimension and on the noise channel.
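For concreteness, the short Python sketch below (using only NumPy; all function and variable names are ours) evaluates equation (\ref{pqt}) for the depolarizing channel with the fixed encoding and decoding of section \ref{first}, and locates the value of $\Gamma t$ at which $P^Q(t)/P^C=1$. We include a relative phase $\omega^{-x_0x_1}$ on the Fourier component of the encoding state, a convention that equation (\ref{eq2}) leaves implicit, so that every state is normalized and equation (\ref{eq3}) holds for every input pair; with this convention the sketch should reproduce the depolarizing row of table~\ref{tab:1}.
\begin{verbatim}
import numpy as np

def encoding_and_decoding(d):
    # Encoding states of Eq. (2), with a relative phase omega^(-x0*x1)
    # on the Fourier component (our convention, see the text above),
    # together with the projective measurements M_{x0} and M_{x1}.
    omega = np.exp(2j * np.pi / d)
    norm = 1.0 / np.sqrt(2 + 2 / np.sqrt(d))
    meas0, meas1, states = [], [], {}
    for l in range(d):
        e = np.zeros(d, dtype=complex)
        e[l] = 1.0
        f = np.array([omega ** (l * n) for n in range(d)]) / np.sqrt(d)
        meas0.append(np.outer(e, e.conj()))   # |e_l><e_l|
        meas1.append(np.outer(f, f.conj()))   # |f_l><f_l|
    for x0 in range(d):
        for x1 in range(d):
            e = np.zeros(d, dtype=complex)
            e[x0] = 1.0
            f = np.array([omega ** (x1 * n) for n in range(d)]) / np.sqrt(d)
            states[(x0, x1)] = norm * (e + omega ** (-x0 * x1) * f)
    return states, meas0, meas1

def pq_depolarizing(d, p, states, meas0, meas1):
    # Eq. (pqt) with rho(t) = p I/d + (1 - p) rho(0), the depolarizing map.
    total = 0.0
    for (x0, x1), psi in states.items():
        rho = p * np.eye(d) / d + (1 - p) * np.outer(psi, psi.conj())
        total += np.real(np.trace(rho @ (meas0[x0] + meas1[x1])))
    return total / (2 * d ** 2)

if __name__ == "__main__":
    for d in range(2, 8):
        states, meas0, meas1 = encoding_and_decoding(d)
        pc = 0.5 * (1 + 1 / d)
        for gt in np.arange(0.0, 2.0, 1e-4):   # scan over Gamma * t
            p = 1 - np.exp(-gt)
            if pq_depolarizing(d, p, states, meas0, meas1) <= pc:
                print(f"d = {d}: P^Q(t) = P^C at Gamma*t ~ {gt:.2f}")
                break
\end{verbatim}
Replacing the depolarizing map by the Kraus maps of the previous section, applied to $\rho_{x_0x_1}$ before the trace, should in the same way recover the remaining rows of the table.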
Although the performance presented above is seemingly inevitable, we must entertain the possibility that our strategy of encoding and decoding using the computational and Fourier bases, albeit optimal for the noiseless case, may not be adequate when using a noisy channel. A better strategy might be found when a flexible encoding and decoding are fine-tuned for each of the noisy channels to minimize the performance losses. In the following subsection we discuss how we applied this strategy. \begin{figure} \caption{Schematics of a quantum RAC protocol through a noisy channel.} \label{fig:quantico_noise} \end{figure} \begin{table}[h!] \caption{Value of $\Gamma t$ at which $P^Q(t)/P^C=1$ for different quantum system dimensions $d$ and noise channels.} \centering \begin{tabular}{| c | c | c | c | c | c | c |} \hline\hline Channel & $d=2$ & $d=3$ & $d=4$ & $d=5$ & $d=6$ & $d=7$ \\ [0.5ex] \hline Dit flip & 0.35 & 0.44 & 0.47 & 0.48 & 0.47 & 0.46\\ D-phase flip & 0.34 & 0.45 & 0.46 & 0.48 & 0.46 & 0.46 \\ Dephasing & 0.88 & 0.48 & 0.28 & 0.18 & 0.12 & 0.09 \\ Amplitude damping & 0.47 & 0.32 & 0.24 & 0.18 & 0.15 & 0.12 \\ Depolarizing & 0.35 & 0.31 & 0.29 & 0.27 & 0.25 & 0.24 \\ \hline \end{tabular} \label{tab:1} \end{table} \subsection{The optimized case} The best strategy for a QRAC can be found when we take into account that $P^Q(t)$ does not depend only on the type of noise, but also on whether or not the encoding states and decoding measurements are well adjusted to mitigate the effects of the noise channel. Therefore, choosing a good encoding requires anticipating the transformations that the channel may realize on the input states, and, whenever possible, optimizing for the states that are least affected by them. Likewise, the decoding measurements should also reflect these channel-induced transformations. This new paradigm is illustrated in figure (\ref{fig:quantico_noise_opt}). \begin{figure} \caption{Schematics of the optimized noisy QRAC protocol.} \label{fig:quantico_noise_opt} \end{figure} The optimal encoding and decoding are determined by maximizing $P^Q(t)$. We formulate this optimization problem in terms of two interdependent SDP sub-problems: \begin{equation}\label{sdp1} \begin{cases} \mathbf{max}\quad & P^Q(t) \\ \mathbf{s.t.:}\quad & \Tr{\rho'_{x_0x_1}(t)}=1,\\ & \rho'_{x_0x_1}(t)\succcurlyeq 0, \end{cases} \end{equation} and \begin{equation}\label{sdp2} \begin{cases} \mathbf{max}\quad & P^Q(t) \\ \mathbf{s.t.:}\quad & \Tr{M'_{x_0}(t)}=1,\\ & \Tr{M'_{x_1}(t)}=1,\\ & M'_{x_0}(t)\succcurlyeq 0,\\ & M'_{x_1}(t)\succcurlyeq 0, \end{cases} \end{equation} where $P^Q(t)$ is given by equation~\ref{pqt}, with the primed states and measurements in place of the original ones. The optimization was conducted using the see-saw method \cite{seesaw}. In order to find the first iteration for the states $\rho'_{x_0x_1}(t)$, we run the sub-problem described in equation~\ref{sdp1} with the decoding measurements fixed at $M'_{x_0}(t)=\op{e_{x_0}}$ and $M'_{x_1}(t)=\op{f_{x_1}}$. Next, we fix the encoding states $\rho'_{x_0x_1}(t)$ found in the previous iteration and optimize the decoding measurements $M'_{x_0}(t)$ and $M'_{x_1}(t)$ (equation~\ref{sdp2}). This process is repeated until the value of $P^Q(t)$ ceases to improve. We use Python 3.9 with the libraries QuTiP for quantum-mechanical operations and PICOS for the SDP optimization, with the cvxopt solver.
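The sketch below illustrates one possible implementation of this see-saw loop. It is an illustration rather than the authors' code: it uses the cvxpy library instead of the PICOS/cvxopt stack mentioned above, the helper names are ours, and, as an example, the noise is taken to be the d-phase-flip channel written in Kraus form; the constraints mirror equations (\ref{sdp1}) and (\ref{sdp2}).
\begin{verbatim}
import numpy as np
import cvxpy as cp

def apply_channel(rho, kraus):
    # Channel in Kraus form, rho -> sum_k K rho K^dagger; works both for
    # numpy matrices and for cvxpy expressions.
    return sum(K @ rho @ K.conj().T for K in kraus)

def mub_projectors(d):
    # Noiseless decoding projectors onto the computational and Fourier bases.
    omega = np.exp(2j * np.pi / d)
    meas0, meas1 = [], []
    for l in range(d):
        e = np.zeros(d, dtype=complex)
        e[l] = 1.0
        f = np.array([omega ** (l * n) for n in range(d)]) / np.sqrt(d)
        meas0.append(np.outer(e, e.conj()))
        meas1.append(np.outer(f, f.conj()))
    return meas0, meas1

def phase_flip_kraus(d, p):
    # Kraus operators of the d-phase-flip channel (diagonal Weyl operators).
    omega = np.exp(2j * np.pi / d)
    W = [np.diag([omega ** (k * mu) for k in range(d)]) for mu in range(d)]
    kraus = [np.sqrt(1 - p) * W[0]]
    kraus += [np.sqrt(p / (d - 1)) * W[mu] for mu in range(1, d)]
    return kraus

def optimize_states(kraus, meas0, meas1, d):
    # Sub-problem (sdp1): measurements fixed, encoding states optimized.
    rho = {(x0, x1): cp.Variable((d, d), hermitian=True)
           for x0 in range(d) for x1 in range(d)}
    cons = []
    for r in rho.values():
        cons += [r >> 0, cp.trace(r) == 1]
    pq = sum(cp.real(cp.trace(apply_channel(rho[(x0, x1)], kraus)
                              @ (meas0[x0] + meas1[x1])))
             for x0 in range(d) for x1 in range(d)) / (2 * d ** 2)
    cp.Problem(cp.Maximize(pq), cons).solve()
    return {key: var.value for key, var in rho.items()}

def optimize_measurements(kraus, states, d):
    # Sub-problem (sdp2): states fixed, decoding measurements optimized.
    m0 = [cp.Variable((d, d), hermitian=True) for _ in range(d)]
    m1 = [cp.Variable((d, d), hermitian=True) for _ in range(d)]
    cons = []
    for M in m0 + m1:
        cons += [M >> 0, cp.trace(M) == 1]
    pq = sum(cp.real(cp.trace((m0[x0] + m1[x1])
                              @ apply_channel(states[(x0, x1)], kraus)))
             for x0 in range(d) for x1 in range(d)) / (2 * d ** 2)
    prob = cp.Problem(cp.Maximize(pq), cons)
    prob.solve()
    return [M.value for M in m0], [M.value for M in m1], prob.value

if __name__ == "__main__":
    d, p = 2, 0.6                        # dimension and noise strength
    kraus = phase_flip_kraus(d, p)
    meas0, meas1 = mub_projectors(d)     # start from the noiseless decoding
    best = -1.0
    for _ in range(20):                  # see-saw iterations
        states = optimize_states(kraus, meas0, meas1, d)
        meas0, meas1, pq = optimize_measurements(kraus, states, d)
        if pq - best < 1e-7:
            break
        best = pq
    print(f"optimized P^Q = {best:.4f}  (classical P^C = {0.5*(1 + 1/d):.4f})")
\end{verbatim}
Since each sub-problem contains the previous iterate as a feasible point, $P^Q(t)$ is non-decreasing along the iterations, which is why the simple stopping rule above is enough.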
In the graph \textbf{a} of figure (\ref{fig:graph0}) we show the value of $p$, where $p=1-e^{-\Gamma t}$, at which the threshold $P^Q=P^C$ is reached, as a function of the dimension $d$, for the non-optimized QRAC, and in the graph \textbf{b} we show the same for the optimized QRAC. A comparison of the graphs \textbf{a} and \textbf{b} shows that the optimization was able to improve the performance of the QRAC for the dephasing channel, the dit flip channel, and the phase flip channel, for which we found that $P^Q\geq P^C$ for any time $t$ and dimension $d$. However, no improvement was found for the depolarizing and amplitude damping channels. In the graph \textbf{c} we show the ratio between the optimized and non-optimized values of $p$ (for $P^Q/P^C=1$) as a function of the dimension, which represents a measure of the improvement achieved by the optimization for each channel. We can see, for instance, that this improvement was overall greater for the dephasing channel but was also considerable for the dit flip and phase flip channels. \begin{figure} \caption{Graph \textbf{a} \label{fig:graph0} \end{figure} In the graphs \textbf{a} and \textbf{b} of figures (\ref{fig:graph1}), (\ref{fig:graph2}), and (\ref{fig:graph3}) the continuous lines represent the evolution of the non-optimized $P^Q/P^C$ ratio as a function of $\Gamma t$ while the dashed lines represent the optimized case for the dit flip, phase flip, and dephasing channels, respectively. A comparison of the dashed and continuous lines shows that the optimization substantially improves the values of $P^Q/P^C$ overall. It provides an advantage even for lower values of $\Gamma t$, and, for greater values, it guarantees that the performance of the QRAC is greater than or equal to that of the CRAC. In particular, for the dit flip and phase flip channels, it is worth noting that, for $d=2$ and $t\to \infty$ ($p=1$), the value of $P^Q/P^C$ for the optimal QRAC approaches the value it had for $t=0$ (or the noiseless QRAC), something that does not happen for the non-optimal QRAC. This happens because in this limit the dit and the phase are completely flipped, and the optimal decoding measurements are the ones that take this fact into account. Therefore, keeping the measurements fixed in this situation is detrimental to the performance of the protocol. Moreover, the optimization makes the QRAC more robust to high-dimensional dephasing noise, whose effect tends to be greater the higher the dimension. In the graphs \textbf{c} of figures (\ref{fig:graph1}), (\ref{fig:graph2}), and (\ref{fig:graph3}) we display how many times the value of the optimal $P^Q/P^C$ ratio is greater than the non-optimal one as a function of $\Gamma t$, which represents the gain from optimization; this gain increases with $\Gamma t$. For the dephasing channel, it also increases with the dimension, but for the dit flip and phase flip channels, it varies less with $d$ and reaches its highest value for $d=2$.\\ \indent As we discussed in section (\ref{first}), the optimal QRAC strategy is based on MUB: the information is encoded in the superposition of states coming from two MUB, and it is decoded by measuring in either one of those bases, depending on which letter Bob is interested in. However, when optimizing the strategy in the presence of a noisy quantum channel, the resulting encoding and decoding are not based exclusively on MUB anymore. Therefore, indiscriminately relying on MUB in such a scenario can lead to sub-optimal encoding and decoding strategies.
Among the maps that we considered, for the dit flip and phase flip channels the optimal decoding measurements are in MUB for all times $t$; however, the encoding states are not necessarily two-state superpositions of vectors from different MUB. For the dephasing channel, the encoding and the decoding change, but the computational basis is preserved as one of the measurement bases. In this work we focus on QRAC protocols, but this loss of symmetry might also occur for other prepare-and-measure protocols. \\ \indent Given the success of the optimization for the dit flip, phase flip and dephasing channels, a natural question is why it failed for the depolarizing and amplitude damping channels. First, it should be emphasised that our approach only works when better encoding states and better decoding measurements exist. The depolarizing channel transforms the qudit symmetrically (this can intuitively be seen in figure (\ref{fig:depo}) for $d=2$), and, as a consequence, the probability of success in any pair of encoding and decoding bases will decrease equally with time for any encoding states. Therefore, the encoding and decoding using $B_e$ and $B_f$ already lead to the best $P^Q(t)$ one can possibly achieve for this channel. In the amplitude damping channel, the qudit is driven to the fundamental state, which makes the probability of decoding some values of $x_0$ or $x_1$ increasingly small, and even null depending on the basis, as time passes. This happens, for instance, when we use the computational basis for encoding and decoding $x_0$. In this case, when $t\to \infty$ no information can be sent through this channel, independently of the encoding and decoding strategy. Despite this, the fact that we fail to optimize the QRAC under the amplitude damping channel for finite $t$ using the approach presented in this work does not necessarily imply that it cannot be accomplished. Therefore, this matter should be the subject of further investigation.\\ \section{Conclusion} In this work we reviewed the concept of the random access code in both its classical and quantum versions using qudits, together with the generalized quantum maps that describe well-known noisy channels in any discrete dimension. We built upon this by incorporating the theory of open quantum systems to understand how a noisy channel affects the performance of the QRAC, and we showed how the see-saw method can be used to optimize the protocol even in the presence of losses. The method presented here can be useful for many other quantum communication protocols to improve their effectiveness without the need for extra resources. This work opens new possibilities to investigate other methods to improve quantum communication and quantum computation protocols whose errors have been characterized. In this work we focused on a single-qudit encoding, but we expect that the method can also be applied to more than one quantum system, or to quantum information protocols other than prepare-and-measure ones. \section*{Acknowledgments} R.A.S. acknowledges support from the Brazilian agencies CAPES and CNPq. B.M. acknowledges partial support from the Brazilian National Institute of Science and Technology of Quantum Information (CNPq-INCT-IQ 465469/2014-0), CAPES/PrInt Process No. 88881.310346/2018-01 and CNPq (Grant No. 305165/2021-6). \begin{figure} \caption{Graph \textbf{a} \label{fig:graph1} \end{figure} \begin{figure} \caption{Graph \textbf{a} \label{fig:graph2} \end{figure} \begin{figure} \caption{Graph \textbf{a} \label{fig:graph3} \end{figure} \end{document}
\begin{document} \title{ Microscopic and phenomenological models of driven systems in structured reservoirs } \author{Gian Luca Giorgi} \affiliation{Institut UTINAM - UMR 6213, CNRS, Universit\'{e} Bourgogne Franche-Comt\'{e}, Observatoire des Sciences de l'Univers THETA, 41 bis avenue de l'Observatoire, F-25010 Besan\c{c}on, France} \affiliation{IFISC (UIB-CSIC), Instituto de F\'isica Interdisciplinar y Sistemas Complejos, UIB Campus, 07122, Palma de Mallorca, Spain} \author{Astghik Saharyan} \affiliation{Laboratoire Interdisciplinaire Carnot de Bourgogne, CNRS UMR 6303, Universit\'{e} Bourgogne Franche-Comt\'{e}, BP 47870, F-21078 Dijon, France} \author{St\'ephane Gu\'erin} \affiliation{Laboratoire Interdisciplinaire Carnot de Bourgogne, CNRS UMR 6303, Universit\'{e} Bourgogne Franche-Comt\'{e}, BP 47870, F-21078 Dijon, France} \author{Dominique Sugny} \affiliation{Laboratoire Interdisciplinaire Carnot de Bourgogne, CNRS UMR 6303, Universit\'{e} Bourgogne Franche-Comt\'{e}, BP 47870, F-21078 Dijon, France} \author{Bruno Bellomo} \email{[email protected]} \affiliation{Institut UTINAM - UMR 6213, CNRS, Universit\'{e} Bourgogne Franche-Comt\'{e}, Observatoire des Sciences de l'Univers THETA, 41 bis avenue de l'Observatoire, F-25010 Besan\c{c}on, France} \begin{abstract} We study the paradigmatic model of a qubit interacting with a structured environment and driven by an external field by means of a microscopic and a phenomenological model. The validity of the so-called fixed-dissipator (FD) assumption, where the dissipation is taken as the one of the undriven qubit is discussed. In the limit of a flat spectrum, the FD model and the microscopic one remarkably practically coincide. For a structured reservoir, we show in the secular limit that steady states can be different from those determined from the FD model, opening the possibility for exploiting reservoir engineering. We explore it as a function of the control field parameters, of the characteristics of the spectral density and of the environment temperature. The observed widening of the family of target states by reservoir engineering suggests new possibilities in quantum control protocols. \end{abstract} \maketitle \section{Introduction} Generally speaking, no quantum system can be considered as completely isolated from its environment, which is at the origin of dissipation and decoherence~\cite{breuer,gardiner}. These dissipative processes could negatively influence control protocols which aim at bringing a quantum system towards a desired target state, such as the ones considered in quantum control~\cite{dalessandro,carlini,sugny,lapertprl,shapiro,glaser,koch} and in remote state preparation~\cite{rsp,rsp2}. Dissipative dynamics can be strongly modified by using, for instance, dynamical decoupling strategies~\cite{viola1,viola2,addis} or tuned into a useful tool, e.g. by properly engineering the characteristics of the environment, to generate specific states~\cite{Briegel2006, Polzik2011, Bellomo2013, Bellomo2015}. The study of open quantum systems usually involves approximations~\cite{breuer}. Master equations are often derived in the weak-coupling regime between the system and the bath (Born approximation) and for memory-less dynamics with time-independent dissipation rates (Markovian approximation). Other common assumptions concern the absence of initial correlation between the system and its environment and the secular approximation. 
A standard way to obtain a master equation is based on a microscopic approach which takes into account the full Hamiltonian of the system and the environment, including their mutual coupling, and by performing (some of) the approximations described above. The system dynamics are completely positive as long as the master equation is in the Lindblad form~\cite{lindblad,Gorini}. An alternative route to take into account environmental effects relies on a phenomenological description of standard dissipative and dephasing mechanisms where Lindblad superoperators are designed to reproduce the desired process and the dissipator is built ``by hands", for instance by inferring the decay rates from experimental data. This approach may lead to a drastic simplification of the dynamics~\cite{breuer}. However, there are scenarios for which the phenomenological technique must be taken with a pinch of salt, as it may not be able to capture all the relevant aspects of the dynamics. In particular, this problem becomes crucial when an external control field is exerted to the system, or when different systems are coupled to each other while dissipating locally~\cite{adesso,naseem,marco}. In the context of phenomenological modelling of open quantum systems subject to external control fields, a standard assumption is the so-called fixed-dissipator (FD) assumption~\cite{Lacour,sauer,lapert,mukherjee,sauer2}. This is based on the hypothesis that the dissipative part of the master equation is not changed by the control term. Comparisons between phenomenological and microscopic master equations have been realized, also considering, in the case of bipartite systems, the effects due to a strong coupling between the internal parts~\cite{Scala2007,Rivas2010,Blais2011,Werlang2014,Bylicka2018}. Problematic consequences of phenomenologically derived master equations in quantum thermodynamics have been recently discussed~\cite{Levy2014, DeChiara2018, naseem} and, as shown in Ref.~\cite{sabrina}, the FD assumption can be at the origin of non physical trajectories in the non-Markovian limit. A generalized approach trying to unify phenomenological and microscopic approaches has been recently proposed \cite{Shavit}. The scope of this paper is to study the case of a driven qubit interacting with a structured environment by means of a microscopic model and to analyze the consequences of the FD assumption. This includes the possibility of using reservoir engineering as a tool for quantum control. For that purpose we mainly study the dynamics on asymptotic time scales and compare the steady states reachable with a microscopic master equation (MME) with the ones given by a master equation based on a fixed dissipator (FDME). We show that manipulating the environment through reservoir engineering, which is possible when the environment spectrum is not flat, allows one to obtain a collection of stationary states that can be very different from the ones given by the FDME. The paper is organized as follows. In Sec.~\ref{sec2}, we introduce the model of a qubit driven by a monochromatic laser and interacting with a bosonic environment. In Sec.~\ref{sec3}, the MME for such a system is explicitly derived. Some technical details are reported in the Appendix\ref{appe}. In Sec.~\ref{secfd}, we review the FD assumption, while in Sec.~\ref{sec4}, we present the main results of this study in the case of structured environments, both at zero and non-zero temperatures. Discussion and prospective views are presented in Sec.~\ref{sec5}. 
\section{The model system} \label{sec2} For the sake of simplicity, we tackle the problem of comparing microscopic and phenomenological models of driven systems in structured environments by revisiting a simple quantum system made of a qubit of frequency $\omega_0$ (whose free Hamiltonian is aligned along the $z$-axis) driven by a monochromatic control laser field whose frequency is $\omega_L$ and whose initial phase is $\varphi$~\cite{haikka, tanas}. We define the detuning as $\Delta=\omega_0-\omega_L$ and we refer to the Rabi frequency $\Omega$, related to the intensity of the laser field, as the driving amplitude. We assume henceforth that $\omega_0$ and $\omega_L$ are much larger than $\Delta$ and $\Omega$. The starting Hamiltonian is given by: \begin{equation} \bar{H}_S=\frac{\hbar\omega_0}{2}\sigma_z+\hbar\Omega\cos(\omega_L t+\varphi)\sigma_x, \end{equation} where $\sigma_z$ and $\sigma_x$ are Pauli matrices. Under the above condition on the parameters we may apply the rotating wave approximation to $\bar{H}_S$. We also move to a frame rotating at frequency $\omega_L$, by means of the unitary operator $U_L=\exp\left[ -i (\omega_L t+\varphi) \sigma_z /2 \right]$ (also absorbing the time-independent phase factor $\varphi$), obtaining: \begin{equation}\label{eqh} H_S=\frac{\hbar\Delta}{2}\sigma_z+\frac{\hbar\Omega}{2}\sigma_x. \end{equation} The interaction between the system and the environment, which is assumed not to depend on the control field (see, for instance, Ref.~\cite{weiss}), reads as follows: \begin{equation} \bar{H}_I=\sum_k \hbar\left( g_k a_k+ g_k^* a_k^\dag\right)\sigma_x, \end{equation} where $a_k$ $\prt{a_k^\dag}$ are the annihilation (creation) operators of the bosonic bath and $g_k$ are the coupling constants. In the above rotating frame, $\bar{H}_I$ is transformed into: \begin{equation}\label{hi} H_I=\sum_k \hbar \prt{g_k a_k+ g_k^* a_k^\dag}\left[ e^{i (\omega_L t+\varphi)}\sigma_+ +e^{-i (\omega_L t+\varphi)}\sigma_-\right]. \end{equation} The free Hamiltonian of the environment has the form $H_E=\sum_k \hbar \omega_k a_k^\dag a_k$. $H_S$ can be diagonalized as $H_S=\frac{\hbar \nu}{2} (|\phi_+\rangle\langle \phi_+| -|\phi_-\rangle\langle \phi_-|)$, with $\nu=\sqrt{\Delta^2+\Omega^2}$. Its eigenstates are \begin{eqnarray} |\phi_+\rangle=C |e\rangle+S |g\rangle,\nonumber\\ |\phi_-\rangle=C |g\rangle- S |e\rangle, \label{fi} \end{eqnarray} where $|g\rangle$ and $|e\rangle$ are, respectively, the ground and the excited state of the qubit free Hamiltonian $(\hbar \omega_0/2)\sigma_z$, $C=\cos(\theta/2) $, $S=\sin(\theta/2)$ and \begin{equation}\label{theta} \theta=2 \arctan[(\nu-\Delta)/\Omega ]. \end{equation} For example, for a given $\Delta>0$, $\theta$ goes from 0 to $\pi/2$ when $\Omega$ goes from 0 to infinity. \section{Microscopic master equation}\label{sec3} To derive a microscopic master equation, the qubit driven by the field is treated first. The resulting dressed qubit is next coupled to the environment by expressing $H_I$ in terms of the eigenoperators of $H_S$, and the standard Born and Markov approximations are applied (see also Refs.~\cite{tanas,haikka}).
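Before carrying out this expansion, it may help to verify numerically, under the stated conventions (here with $\hbar=1$ and illustrative values of $\Delta$ and $\Omega$), that the states of Eq.~\eqref{fi} with the mixing angle of Eq.~\eqref{theta} indeed diagonalize $H_S$; the following short sketch is only such a consistency check.
\begin{verbatim}
import numpy as np

Delta, Omega = 1.0, 1.7                       # detuning and Rabi frequency (hbar = 1)
nu = np.sqrt(Delta**2 + Omega**2)
theta = 2 * np.arctan((nu - Delta) / Omega)   # Eq. (theta)
C, S = np.cos(theta / 2), np.sin(theta / 2)

e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_p = C * e + S * g                         # |phi_+>
phi_m = C * g - S * e                         # |phi_->

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H_S = 0.5 * Delta * sz + 0.5 * Omega * sx     # Eq. (eqh)

print(np.allclose(H_S @ phi_p,  0.5 * nu * phi_p))   # True
print(np.allclose(H_S @ phi_m, -0.5 * nu * phi_m))   # True
\end{verbatim}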
Defining $\tilde{\sigma}_z=|\phi_+\rangle\langle \phi_+|-|\phi_-\rangle\langle \phi_-|$ and $\tilde{\sigma}_\pm=|\phi_\pm\rangle\langle \phi_\mp|$, we can express the operators entering $H_I$ in terms of eigenoperators of $H_S$ as: \begin{eqnarray}\label{sx} \sigma_\pm&=&C^2 \tilde{\sigma}_\pm-S^2\tilde{\sigma}_\mp+SC\tilde{\sigma}_z, \nonumber\\ \sigma_z&=&\cos\theta \tilde{\sigma}_z-\sin\theta\tilde{\sigma}_x. \end{eqnarray} The detailed derivation of the MME is presented in Appendix~\ref{appe}. Its final form in the Schr\"odinger picture is: \begin{equation} \label{metot} \dot \rho=- \frac{i}{\hbar}[H_S+ H_{LS},\rho]+{\cal D}^{\rm sec}(\rho)+{\cal D}^{\rm nsec}(\rho), \end{equation} where $H_{LS}$ is the Lamb shift Hamiltonian, whose role is discussed in Appendix~\ref{appe}, while ${\cal D}^{\rm sec}(\rho)$ and ${\cal D}^{\rm nsec}(\rho)$ are, respectively, the secular and the non-secular parts of the dissipator, the latter featuring terms oscillating at frequencies $\nu$ and $2\nu$. As regards ${\cal D}^{\rm sec}(\rho)$, it is given by: \begin{equation} {\cal D}^{\rm sec}(\rho)=\gamma_-^{\theta} {\cal L}\prtq{\tilde{\sigma}_+}(\rho)+\gamma_+^{\theta} {\cal L}\prtq{\tilde{\sigma}_-}(\rho)+\gamma_z^{\theta} {\cal L}\prtq{\tilde{\sigma}_z}(\rho), \end{equation} where the Lindblad superoperator is $ {\cal L}\prtq{\hat X}(\rho)=\hat X \rho \hat X ^\dagger -\prtg{\rho ,\hat X^\dagger \hat X }/2$, with \begin{eqnarray}\label{gammas} \gamma_{-}^{\theta}&=&2 \pi \left\{ C^4 J(\omega_L+\nu) n(\omega_L+\nu) + S^4 J(\omega_L-\nu) \nonumber \right. \\ & \times & \left. [1+n(\omega_L-\nu)] \right\},\nonumber\\ \gamma_{+}^{\theta}&=& 2 \pi \left\{ C^4 J(\omega_L+\nu)[1+n(\omega_L+\nu)]+S ^4 J(\omega_L-\nu) \nonumber \right. \\ &\times&n(\omega_L- \nu)\left.\right\} \nonumber, \\ \gamma_{z}^{\theta}&=& 2 \pi \left\{ S^2 C^2J(\omega_L)[1+2 n(\omega_L)]\right\}, \end{eqnarray} where $J(\omega)$ is the spectral density of the environment, $n(\omega)=1/\prtq{\mathrm{e}^{\hbar\omega/(k_B T)}-1}$ is the average number of photons in the bath at frequency $\omega$, $k_B$ being the Boltzmann constant. The above coefficients can be rewritten as \begin{eqnarray}\label{gammas2} \gamma_{-}^{\theta}&=& C^4 \gamma_+ n_+ + S^4 \gamma_-(1+n_-),\nonumber\\ \gamma_{+}^{\theta}&=& C^4 \gamma_+(1+n_+)+S ^4 \gamma_- n_- \nonumber, \\ \gamma_{z}^{\theta}&=& S^2 C^2\gamma_0(1+2 n_0), \end{eqnarray} where $\gamma_{p}= 2 \pi J(\omega_L+ p\nu)$ and $n_{p}= n(\omega_L+ p \nu)$, with $p\in\{+1, -1, 0\}$ (for any parameter $l$ depending on $p$ we use the shorthand notation $l_p=\{l_+, l_-, l_0\}$). The operator ${\cal D}^{\rm nsec}(\rho) $ and its coefficients are reported in Appendix~\ref{appe}. In Sec.~\ref{SecT0} we give some comments about when their effect cannot be neglected. A detailed analysis of the limits of validity of the secular approximation in our system can be found in Ref.~\cite{Shen}. \section{The reference case: the fixed dissipator}\label{secfd} In Sec.~\ref{sec3}, we have seen that, in the microscopic approach, the dissipator depends on the control field acting on the qubit. The FD approach consists in neglecting this dependence and in assuming that the dissipative part of the master equation is equal to the one in the absence of the control field, i.e. the qubit coupled to the environment is treated first, and this single entity is next coupled to the laser. The application of this procedure is well known in quantum control protocols (see e.g.
Refs.~\cite{Lacour,sauer,lapert,mukherjee,sauer2}). Recently, this approach has been used to determine the control Hamiltonian that counteracts a given dissipation~\cite{sauer,sauer2}. In this context, we consider a density matrix evolving according to a general (Lindblad) master equation: \begin{equation}\label{MEFD} \dot{\rho}=-\frac{i}{\hbar}[H,\rho]+{\cal D}^{\rm fd}(\rho). \end{equation} The set of stationary solutions $\dot{\rho}^{\rm fd}=0$, which are compatible with the fixed dissipator ${\cal D }^{\rm fd}(\rho^{\rm fd})$ can be computed by disregarding the coherent part. Since the coherent part of the master equation cannot change the spectrum and then the purity of the state, the same must also be true for the dissipator~\cite{sauer,lapert,sauer2}. Then, the collection of stationary states $\rho^{\rm fd}$ must obey the relation: \begin{equation}\label{fdeq} \forall n \in \{2,\dots,d\}: \; \;{\rm Tr}\prtg{\prt{\rho^{\rm fd}}^{n-1}{\cal D }^{\rm fd}\prt{\rho^{\rm fd}}}=0, \end{equation} where $d$ is the dimension of the Hilbert space. Thus, we have defined a (fixed) dissipator and a family of Hamiltonians. For any steady state, we can find the Hamiltonian $H$ such that $\dot{\rho}^{\rm fd}=0$. Writing the steady state as $\rho^{\rm fd}=\sum_{\alpha=1}^d \lambda_\alpha |\alpha\rangle\langle \alpha|$, it follows that~\cite{sauer,sauer2}: \begin{equation} H=\sum_{\alpha,\beta:\lambda_\alpha\neq \lambda_\beta}\frac{i \langle\alpha| D^{\rm fd}\prt{\rho^{\rm fd}}|\beta\rangle}{\lambda_\alpha-\lambda_\beta}|\alpha\rangle\langle\beta|. \end{equation} \begin{figure} \caption{Bloch sphere (gray) and steady-state ellipsoid (red). The ellipsoid has been drawn by taking $T=0$.} \label{Figsp} \end{figure} For the case of the qubit introduced in Sec.~\ref{sec2}, we only need to satisfy ${\rm Tr}\prtg{\rho^{\rm fd}{\cal D }^{\rm fd}\prt{\rho^{\rm fd}}}=0$. In this model, at a given bath temperature $T$, the FD is equal to the dissipator one would obtain in the absence of the control field ($\Omega\rightarrow 0$). This can be obtained from the microscopic dissipator of Eq.~(\ref{metot}) taking $\theta=0$ and $\nu=\Delta$. In this limit, the decay rates of Eq.~\eqref{gammas2} tend to: $\gamma_{-}^{0}=\gamma_{\rm fd} n_{\rm fd}$, $\gamma_{+}^{0}=\gamma_{\rm fd}(1+n_{\rm fd})$, and $\gamma_{z}^0=0$, where $\gamma_{\rm fd}=2 \pi J(\omega_0)$ and $n_{\rm fd}= n(\omega_0)$. The fixed dissipator is then of the form: \begin{equation}\label{FD} D^{\rm fd}(\rho)=\gamma_{\rm fd} n_{\rm fd} {\cal L}[\sigma_+](\rho)+\gamma_{\rm fd} (1+n_{\rm fd}) {\cal L}[\sigma_-](\rho). \end{equation} It follows that in the FD approach the steady state depends on $\gamma_{\rm fd}$ and it can be expressed, using $H=H_S$ in Eq.~\eqref{MEFD}, as (restoring the dependence on $\varphi$): \begin{eqnarray}\label{ssfd} \rho_{ee}^{\rm fd}&=& \frac{n_{\rm fd}}{1+2n_{\rm fd}}+ \frac{\Omega^2/(1+2 n_{\rm fd})}{\gamma_{\rm fd}^2(1+2 n_{\rm fd})^2+4 \Delta ^2+2 \Omega^2},\nonumber\\ \rho_{eg}^{\rm fd}&=& -\Omega \frac{2 \Delta/(1+2 n_{\rm fd})+i \gamma_{\rm fd} }{\gamma_{\rm fd}^2(1+2 n_{\rm fd})^2+4 \Delta ^2+2 \Omega^2}e^{-i \varphi}. \label{regf} \end{eqnarray} The FD steady solutions by varying the control field parameters, $\Omega$, $\Delta$ and $\varphi$, are represented in Fig.~\ref{Figsp} (for $T=0$) where they are shown to lie on the surface of an ellipsoid inside the Bloch sphere~\cite{recht,sauer,lapert,sauer2}. This ellipsoid is a standard geometric structure in nuclear magnetic resonance~\cite{lapert,levitt,ernst}. 
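As a quick consistency check of Eq.~\eqref{ssfd} (not part of the original derivation), one can build the FDME of Eqs.~\eqref{MEFD} and \eqref{FD} with $H=H_S$ in QuTiP and compare its numerical steady state with the closed-form expressions. The sketch below assumes QuTiP's convention that the first basis state is the $\sigma_z=+1$ (excited) state, sets $\hbar=1$ and $\varphi=0$, and uses arbitrary illustrative parameters.
\begin{verbatim}
import numpy as np
from qutip import sigmax, sigmaz, sigmap, sigmam, steadystate

Delta, Omega, phi = 1.0, 2.0, 0.0      # control field parameters
gamma_fd, n_fd = 0.05, 0.2             # FD rate and thermal occupation

H = 0.5 * Delta * sigmaz() + 0.5 * Omega * sigmax()
c_ops = [np.sqrt(gamma_fd * (1 + n_fd)) * sigmam(),   # decay,      L[sigma_-]
         np.sqrt(gamma_fd * n_fd) * sigmap()]         # absorption, L[sigma_+]
rho = steadystate(H, c_ops).full()     # index 0 = |e>, index 1 = |g>

D = gamma_fd**2 * (1 + 2*n_fd)**2 + 4*Delta**2 + 2*Omega**2
rho_ee = n_fd/(1 + 2*n_fd) + (Omega**2/(1 + 2*n_fd))/D
rho_eg = -Omega*(2*Delta/(1 + 2*n_fd) + 1j*gamma_fd)/D*np.exp(-1j*phi)

print("numerical steady state:", rho[0, 0].real, rho[0, 1])
print("Eq. (ssfd):            ", rho_ee, rho_eg)   # the two agree
\end{verbatim}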
For $T\neq 0$, the steady states lie on a smaller ellipsoid inside the one depicted in Fig.~\ref{Figsp}. \section{Reservoir engineering through microscopic master equation with structured environment}\label{sec4} We present in this section the control of steady states by using the MME of Sec.~\ref{sec3}. In particular, we compare the steady state solutions of the FDME with the ones provided by the MME to discuss how the control of the system is modified when the environment is used as a tool to suitably tailor the asymptotic states. We also compare some specific dynamics to highlight our results. In the case of a flat spectrum, one has $\gamma_+ = \gamma_-= \gamma_0= \gamma_{\rm fd}$, and one can show the remarkable property that, under the approximation $n_+ \approx n_- \approx n_0 \approx n_{\rm fd}$, the MME coincides exactly with the FDME~\cite{tanas} (see Appendix~\ref{appe} for a complete derivation): the steady states of the MME and of the FDME are thus the same for any $T$. In particular, in the secular limit, in the $z$-basis of the frame rotating at the laser frequency (after restoring the phase $\varphi$), the MME steady solutions are equal to the ones of Eq.~\eqref{ssfd} after discarding the terms containing $\gamma_{\rm fd}$, which are indeed negligible in this limit. One can show that the geometric form of the steady state solutions obtained by varying the control field parameters, $\Omega$, $\Delta$ and $\varphi$, corresponds to the very same ellipsoid of Fig.~\ref{Figsp}. When non-secular terms are added, the microscopic steady states coincide with the ones obtained with the FD, given in Eq.~\eqref{ssfd}. We consider below the case of structured environments in which relevant differences can instead occur. We focus in particular on the MME in the secular regime, noting that this regime is typically encountered in several contexts such as in quantum optics setups~\cite{breuer}. The steady state $\rho^{\rm sec}$, which satisfies both $\prtq{\rho^{\rm sec},H_S+H_{LS}}=0$ and ${\cal D}^{\rm sec}\prt{\rho^{\rm sec}}=0$, is \begin{equation}\label{rss} \rho^{\rm sec}=\frac{\gamma^{\theta}_-}{\gamma^{\theta}_+ +\gamma^{\theta}_-}|\phi_+\rangle \langle \phi_+| +\frac{\gamma^{\theta}_+}{\gamma^{\theta}_++\gamma^{\theta}_-}|\phi_-\rangle \langle \phi_-|, \end{equation} where the superscript ${\rm ``sec"}$ refers to the secular master equation. The collection of steady states that are obtained as a function of the control parameter $\theta$ and of the phase $\varphi$ (once it is restored) describes a surface in the Bloch vector representation which is invariant under a rotation around the $z$-axis. We consider structured environments characterized by a spectral density varying notably around $\omega_L$ on the scale of the dressed frequency $\nu$. In this scenario, even in the limit where the secular approximation holds, the microscopic approach provides a family of target steady states that may not be close to the ones obtained with the FDME. While in the FD case there is only one value of the spectral density that matters, the two additional sidebands $\omega_L\pm \nu$ must be considered according to the microscopic derivation (see Eq.~\eqref{gammas}).
When $n_+ \approx n_- \approx n_0 \approx n_{\rm fd}$, Eq.~\eqref{rss} takes the form (after restoring the phase $\varphi$) \begin{eqnarray}\label{ree2} \rho_{ee}^{\rm sec}&\approx& \frac{n_{\rm fd}}{1+2 n_{\rm fd}}+ \frac{S^2C^2}{1+2 n_{\rm fd}}\frac{S^2 \gamma_-+C^2 \gamma_+}{S^4 \gamma_-+C^4 \gamma_+}, \nonumber \\ \rho_{eg}^{\rm sec}& \approx& \frac{S C}{1+2 n_{\rm fd}}\frac{S^4 \gamma_--C^4 \gamma_+}{S^4 \gamma_-+C^4 \gamma_+} e^{-i \varphi}. \end{eqnarray} Exploiting the dependence on the two frequencies $\omega_L \pm \nu$ opens the possibility of profiting from reservoir engineering. It indeed allows one to deform the ellipsoid of Fig.~\ref{Figsp}, thus modifying the family of target states. For instance, one of the possible consequences is that the equator of the ellipsoid can be broadened, allowing one to get higher values for the coherence, as it is always possible to reduce the weight of the smaller term in Eq.~\eqref{rss} and thus to obtain purer states. \subsection{The case of zero temperature}\label{SecT0} \begin{figure} \caption{Families of steady states (components $x$ and $z$ of the Bloch vector, $r_x$ and $r_z$) determined from the FDME and from the secular MME by varying the control field parameters $\Omega$, assuming values $\ge 0$, and $\varphi$, being equal to $0$ or $\pi$, for a positive fixed value of $\Delta$. Panel (a): the FDME case is represented by the blue solid line (the dependence on $\gamma_{\rm fd} \label{figeng} \end{figure} We start the analysis with the zero-temperature case. The scenarios where $J(\omega_L+\nu)\gtrless J(\omega_L-\nu)$ are compared with the flat spectral density case in Fig.~\ref{figeng}, where the components $x$ and $z$ of the Bloch vector of the steady states, $r_x=2\, \mathrm{Re}[\rho_{eg}]$ and $r_z=2\rho_{ee}-1$, are plotted. In Fig.~\ref{figeng}(a), we consider fixed values for the ratio $x=\gamma_-/\gamma_+=J(\omega_L-\nu)/J(\omega_L+\nu)$ for all values of the dressed energy. This analysis permits one to visualize, for any value of the control parameter $\theta$ of Eq.~\eqref{theta} (or, equivalently, of $\Omega/\Delta$), how much the steady states are expected to differ in the two approaches for a given $x$. In the two panels, all the parts of the different lines are obtained by considering a fixed positive $\Delta$ and using as control parameters $\Omega$, which assumes values $\ge 0$, and $\varphi$, which is equal to $0$ or $\pi$. In particular, with respect to the FDME, purer states can be obtained (the coherence may become closer to the maximum allowed value of $1/2$) and even population inversion can be reached. We observe that the FDME is considered in the case when its dependence on $\gamma_{\rm fd}$ is negligible, and so coincides with the microscopic secular solution in the limit of a flat spectrum ($x=1$). As an example, let us consider the case where the target state reached using the FD dynamics at zero temperature is one of the maximally allowed coherent states $\rho_{\rm mc}^{\rm fd}$ of the ellipsoid in the $z$ basis, that is, a point that lies on its equator \cite{sauer,sauer2}. This class of states is obtained using $\Omega=\pm\sqrt{2} \Delta$ and, written in the Bloch form, is $\vec{r}_{\rm mc}= \{\mp \cos\varphi/\sqrt{2},\; \mp \sin \varphi/\sqrt{2},\; -1/2 \}$. We focus on the case $\Omega=\sqrt{2} \Delta$ and $ \varphi=\pi$, obtaining $\vec{r}_{\rm mc}= \{1/\sqrt{2},\; 0,\; -1/2 \}$.
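A minimal numerical sketch (assuming $T=0$, i.e. $n_{\rm fd}=0$, and a fixed ratio $x$) that evaluates Eq.~\eqref{ree2} may help to make the deformation of the steady-state family concrete; with $\Omega=\sqrt{2}\Delta$ and $\varphi=\pi$ it reproduces the FD point above as well as the two structured-reservoir points quoted in the following paragraph.
\begin{verbatim}
import numpy as np

def bloch_sec(Omega, Delta, x, phi=np.pi, n_fd=0.0):
    # Bloch components (r_x, r_z) of the secular-MME steady state, Eq. (ree2),
    # with x = gamma_- / gamma_+ (x = 1 recovers the flat-spectrum/FD family).
    nu = np.sqrt(Delta**2 + Omega**2)
    theta = 2 * np.arctan((nu - Delta) / Omega)          # Eq. (theta)
    C, S = np.cos(theta / 2), np.sin(theta / 2)
    den = S**4 * x + C**4
    rho_ee = n_fd/(1 + 2*n_fd) + (S**2 * C**2/(1 + 2*n_fd)) * (S**2*x + C**2)/den
    rho_eg = (S*C/(1 + 2*n_fd)) * (S**4*x - C**4)/den * np.exp(-1j*phi)
    return 2*np.real(rho_eg), 2*rho_ee - 1

for x in (1.0, 0.1, 10.0):
    print(x, bloch_sec(np.sqrt(2), 1.0, x))
# x = 1   -> ( 0.707, -0.500)
# x = 0.1 -> ( 0.805, -0.569)
# x = 10  -> ( 0.134, -0.095)
\end{verbatim}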
On the other hand, in the presence of structured reservoir, taking $\Omega=\sqrt{2} \Delta$ and $ \varphi=\pi$, we would end up in $\vec{r}\simeq\{ 0.805, 0, -0.569 \}$ using $x=0.1$ or in $\vec{r}\simeq\{ 0.134, 0 , -0.095 \}$ using $x=10$. The three states, reached with the same control field, are visualized with points in Fig.~\ref{figeng}(a). The distances between these points clearly point out how much could be the error of using the FDME to predict the steady state in a given control protocol. In order to treat a specific physical scenario where $x$ varies when the control field parameters are changed, we consider the case in which the spectral density has the Lorentzian profile \begin{equation}\label{lor} J_{\rm Lor} (\omega)=\frac{\gamma_l }{2\pi}\frac{\lambda^2}{(\omega_c-\omega)^2+\lambda^2}, \end{equation} where the parameter $\lambda$ defines the width of the curve and $\omega_c$ its center. We note that to satisfy the Markovian approximation used for the derivation of the MME, $\lambda$ must be much greater than $\gamma_l$. The flat spectral density case is recovered in the limit $\lambda\to\infty$. In this case, one can expect that only in some parts of the parameter space the deformation is relevant. On the tails of the curve, we fall for instance in something similar to the flat spectral density case, which gives the same results of the FDME. The differences in the case of a Lorentzian spectral density are depicted in Fig.~\ref{figeng}(b), where we have assumed that the dependence on $\gamma_{\rm fd}$ of the FDME steady solutions is negligible and calculated the steady states by varying $\Omega/\Delta$ in the case of a fixed Lorentzian, with $\lambda=\Delta$ (we have fixed a positive value for $\Delta$ and varied $\Omega$, assuming values $\ge 0$, and $\varphi$, being equal to $0$ or $\pi$). We have numerically compared the secular MME curve in Fig.~\ref{figeng}(b) with the one obtained adding the non-secular terms at zero temperature and for values of $\gamma_{\rm fd}$ much smaller than $\Delta$. In general, the non-secular curve is very close to the MME curve except when one approaches the origin of the axis, for values of $\Omega$ much larger than $\Delta$. For instance, for $\gamma_{\rm fd}/\Delta = 0.001$ we observe differences for $\Omega/\Delta$ greater than 100. When this ratio overcomes a given value, we observe steady values of $r_x$ different from zero, $r_z$ becoming positive but very close to zero, and at a certain point the non-secular MME starts to predict non-physical steady states. The occurrence of differences between secular and non-secular master equations has been discussed in Ref.~\cite{Shen}. In order to show how different steady states for the same values of the control field parameters are dynamically obtained, we report in Fig.~\ref{dynamics} the time evolution of $r_x$ and $r_z$ for the same cases of the points highlighted in Fig.~\ref{figeng}(a), obtained with $\Omega=\sqrt{2}\Delta$ and $\varphi=\pi$. In particular, we choose $\gamma_-=0.2 \gamma_0 $ and $\gamma_+=2 \gamma_0 $ for the case $x=0.1$, $\gamma_-=\gamma_+= \gamma_0 $ for the fixed dissipator case ($x=1$), and $\gamma_-=2 \gamma_0 $ and $\gamma_+=0.2 \gamma_0 $ for the case $x=10$. As before, the FDME is considered in the case when its dependence on $\gamma_{\rm fd}$ is negligible (we assume $\gamma_{\rm fd}\approx \gamma_{0}=\Delta/1000 $). The qubit is initially in the ground state $\ket{g}$. 
The values of the rates $\gamma_p$ are chosen without referring to a specific spectral density. \begin{figure} \caption{The Bloch vector components $r_x$ and $r_z$, as a function of time (in units of $\gamma_0^{-1} \label{dynamics} \end{figure} We have then shown that, in general, using the FDME can cause a lack of accuracy in determining the steady state, which would be detrimental in a quantum control protocol. This effect can be highlighted by considering the distance between the stationary state induced by a structured spectral density, as predicted by the MME, and the one given by the FDME as a function of the control field parameter $\Omega/\Delta$. In Fig.~\ref{engfid}, we use the fidelity as a measure of such a distance. For two arbitrary states $\rho$ and $\sigma$ it is defined as $\mathrm{Tr} \prtg{\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}}}^2 $. It is important to stress that a fidelity of the order of $3/4$ is already an indication of a dramatic difference between two states. For instance, the fidelity between a two-qubit Bell state and the state obtained from it by removing the coherence is $1/\sqrt{2}$. In Fig.~\ref{engfid}, an important discrepancy may be observed for $\Omega/\Delta \gtrsim 1$. In particular, for a given value of $\Omega/\Delta$, smaller values of fidelity are obtained when $x$ moves away from 1. The behavior for $\Omega/\Delta < 1$ is instead reminiscent of the fact that for small angles $\theta$ the microscopic dissipator tends to the FD one, as shown before Eq.~\eqref{FD}. \begin{figure} \caption{Fidelity between the FDME steady states (their dependence on $\gamma_{\rm fd} \label{engfid} \end{figure} One may raise doubts about the freedom in the choice of the spectral density. In particular, the Markovian approximation could break down for some frequency region. Here, we want to remark that the results of this section hold for values of the system-bath coupling well within the weak-coupling limit, such that the Markovian character of the dynamics is guaranteed. In any case, even for an intermediate coupling constant, we are interested in the stationary regime, which takes place long after all the possible non-Markovian effects have been washed out. \subsection{The case of non-zero temperature} According to what has been said so far, a structured spectral density allows for a broader family of target states but, at the same time, would typically give solutions that are distinct from the ellipsoid predicted by the FDME. We now show that zero-temperature FDME steady states can be recovered in the case of a structured environment by exploiting tailored thermal effects. To this aim, we consider the FDME steady states of Eq.~\eqref{ssfd} in the limit when the terms depending on $\gamma_{\rm fd}$ are negligible (with, as always, $n_+ \approx n_- \approx n_0 \approx n_{\rm fd}$): \begin{eqnarray}\label{ree} \rho_{ee}^{\rm sec} &\approx& \frac{n_{\rm fd}}{1+2 n_{\rm fd}}+ \frac{\Omega^2/(1+2 n_{\rm fd})}{4 \Delta^2+2 \Omega^2} , \nonumber \\ \rho_{eg}^{\rm sec}&\approx & -\frac{ \Omega \Delta/(1+2 n_{\rm fd} ) }{2 \Delta^2+ \Omega^2} e^{-i \varphi}. \end{eqnarray} We indicate them with the superscript ${\rm ``sec"}$ since they coincide with the steady states of the secular MME (see Eq.~\eqref{ree2}) in the limit of a flat spectrum ($x=1$).
We compare them at zero temperature with the general case of Eq.~\eqref{ree2} that depends both on $n_{\rm fd}$ and on the ratio $x=\gamma_-/\gamma_+$, and look, for any given $x$, for the existence of solutions of \begin{equation} \begin{cases} \rho_{ee}^{\rm sec}(x=1,n_{\rm fd}=0)=\rho_{ee}^{\rm sec}( x,n_{\rm fd})\\ \rho_{eg}^{\rm sec}(x=1,n_{\rm fd}=0)=\rho_{eg}^{\rm sec}(x,n_{\rm fd}) \end{cases}. \end{equation} The solution of both equations is given by \begin{equation} n_{\rm fd}(x)=\frac{C^4 S^4(1-x)}{(C^2 -S^2)(C^4+x S^4)}. \end{equation} Solutions corresponding to physical values of $n_{\rm fd}$ (that is $n_{\rm fd}\ge 0$) only appear for $0\le x\le 1$, which is easy to understand looking at Fig.~\ref{figeng}(a). In fact, thermal effects are expected to reduce the coherence of any state, making it impossible to move from the black line ($x=10$) to the blue one ($x=1$). The behavior of $n_{\rm fd}(x=0.1)$ is plotted in Fig.~\ref{FigT} as a function of the control field parameter $\Omega/\Delta$. The needed thermal correction is very small as long as $\Omega \lesssim \Delta$, as the same argument used to explain the behavior observed in Fig.~\ref{engfid} holds. \begin{figure} \caption{Thermal factor $n_{\rm fd} \label{FigT} \end{figure} \section{Discussion and conclusions}\label{sec5} Master equations are a powerful tool to analyze the dissipative dynamics of quantum systems. They are usually obtained by making a series of assumptions that need to be fulfilled and to be verified in realistic setups, as, in general, exact solutions are not available. They are often introduced based on phenomenological assumptions. Here, we have derived a microscopic master equation for a driven qubit and compared it with the fixed dissipator model, which is widely used especially in the quantum control community, as it allows to explore the behavior of entire families of control Hamiltonians in a simple way. We have found that, in the weak-coupling regime, the steady states of the two approaches can be very different in the case of a structured environment, while they are practically identical for a flat spectrum. In conclusion, considering the simplest case of a driven qubit, we have assessed the limit of validity of the phenomenological approach for the specific task on asymptotic time scales. We have explored the possibility of implementing reservoir engineering techniques to widen the family of target states, which are correctly predicted by using microscopic master equations. Quantum control protocols most often use time dependent fields, implying time dependent Rabi frequency, detuning, and phase as control parameters. For slowly varying parameters, one expects that the FDME and MME still coincide for a flat environment spectrum, and that the difference between them still persists for structured environments. The expected rich variety of target states resulting from structured environments could then be exploited using microscopic models in quantum control and reservoir engineering schemes. \begin{acknowledgments} This work was mainly supported by the French ``Investissements d'Avenir'' program, project ISITE-BFC (contract ANR-15-IDEX-03). G.L.G. acknowledges financial support from the CAIB postdoctoral program. A.S. and S.G. acknowledge additional support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 765075 (LIMQUET). G.L.G. and B.B. thank Hans-Rudolf Jauslin, Axel Kuhn, and David Viennot for useful discussions. 
B.B. thanks Mauro Paternostro for helpful comments.. \end{acknowledgments} \appendix* \section{Master equation}\label{appe} In this appendix, we derive the microscopic master equation of the driven qubit, presenting its various parts. Using Eq.~\eqref{sx}, in the interaction picture with respect to $H_S + H_E$, the interaction Hamiltonian of Eq.~\eqref{hi} reads \begin{equation}\label{me} \tilde H_I(t)=B(t)\left(f_+^t\tilde{\sigma}_++f_-^t\tilde{\sigma}_-+f_z^t\tilde{\sigma}_z\right), \end{equation} with $B(t)=\sum_k \hbar\prt{g_k a_k e^{-i \omega_k t}+ g_k^* a_k^\dag e^{i \omega_k t}}$ and where \begin{eqnarray}\label{fs} f_+^t&=&e^{i \nu t}\prtq{C^2 e^{i(\omega_L t+\varphi)}-S^2 e^{-i(\omega_L t+\varphi)}},\nonumber\\ f_-^t&=&e^{-i \nu t}\prtq{C^2 e^{-i(\omega_L t+\varphi)}-S^2 e^{i(\omega_L t+\varphi)}},\nonumber\\ f_z^t&=& S C\prtq{e^{i(\omega_L t+\varphi)}+e^{-i(\omega_L t+\varphi)}}. \end{eqnarray} The operators entering the master equation are multiplied by $f_i^t f_j^{t-s} $, with $i,j=+,-,z$. Thus, there will be secular terms for $\{i, j\}$ such that $f_i^t= f_j^{t*} $ and non-secular terms in all other cases. In general, the products $f_i^t f_j^{t-s} $ may have parts oscillating at the laser frequency $e^{\pm 2 i \omega_L t}$. For instance, \begin{eqnarray}\label{fp} f_+^t f_+^{t-s}&=& {\Big \{ } C^4 e^{i[\omega_L(2t-s)+2\varphi]}+S^4 e^{-i[\omega_L(2t-s)+2\varphi]} \nonumber \\ &-& 2 C^2 S^2 \cos\omega_L s {\Big \}} e^{i\nu(2 t-s)}. \end{eqnarray} On the basis of the condition assumed in Sec.~\ref{sec2}, $\omega_L\gg \Delta,\Omega$, we note that the first two fast-oscillating terms in Eq.~\eqref{fp} can be neglected. Neglecting this kind of terms is completely equivalent to obtain the master equation writing the interaction Hamiltonian in rotating wave approximation: \begin{equation}\label{hi2} H_I=\sum_k \hbar \prtq{ g_k a_k e^{i (\omega_L t+\varphi)}\sigma_+ + g_k^* a_k^\dag e^{-i (\omega_L t+\varphi)}\sigma_-}. \end{equation} In this limit, the products linked to the secular terms \begin{eqnarray}\label{fs2} f_-^t f_+^{t-s}&\approx & e^{-i\nu s} \prtq{C^4 e^{-i \omega_L s}+ S^4 e^{i \omega_L s}}, \nonumber \\ f_+^t f_-^{t-s}&\approx & e^{i\nu s} \prtq{ C^4 e^{i \omega_L s}+ S^4 e^{-i \omega_L s}},\nonumber \\ f_z^t f_z^{t-s}&\approx & 2 S^2 C^2\cos \omega_L s, \end{eqnarray} determine the coefficients of Eq.~\eqref{gammas}. Non-secular terms are determined by the products \begin{eqnarray} f^t_+f^{t-s}_+&\approx&-2 C^2 S^2 e^{i\nu(2t- s)} \cos\omega_L s, \nonumber\\ f^t_+f^{t-s}_z&\approx&C S\prt{C^2 e^{i\omega_L s}-S^2 e^{-i\omega_L s}}e^{i \nu t}, \nonumber \\ f^t_zf^{t-s}_+&\approx& e^{i\nu(t- s)} CS \prt{C^2 e^{-i\omega_L s}-S^2 e^{i\omega_L s}}, \end{eqnarray} together with $ f^t_-f^{t-s}_-=\prt{ f^t_+f^{t-s}_+}^*$, $f^t_-f^{t-s}_z=\prt{f^t_+f^{t-s}_z}^*$, and $f^t_zf^{t-s}_-=\prt{f^t_z f^{t-s}_+}^*$. The factors $e^{\pm i \nu t}$ and $e^{\pm 2 i \nu t}$ disappear once one moves back to the Schr\"odinger picture. We indicate with $\overline{f^t_i f^{t-s}_j}$ the products $f^t_i f^{t-s}_j$ after discarding the factors $e^{\pm i \nu t}$ and $e^{\pm 2 i \nu t}$ and, taking the continuum limit, we introduce the spectral density $J(\omega)=\sum_k |g_k|^2\delta(\omega-\omega_k)$, such that the trace over the bath's degrees of freedom is transformed into an integral over all the frequencies. The Born-Markov master equation, assuming a factorized initial condition for the system and its bath, is then given by \cite{breuer, gardiner} \begin{eqnarray}\label{ME2} && \!\!\!\!\! 
\dot \rho=-\frac{i}{\hbar}[H_S, \rho] +\frac{1}{\hbar^2}\sum_{i,j=+,-,z} \int_0^{\infty} ds \nonumber\\ && \!\!\!\!\! \left[\overline{f^{t*}_i f^{t-s}_j} \langle B(t)B(t-s)\rangle \prt{\tilde{\sigma}_j \rho\tilde{\sigma}_i^\dag - \tilde{\sigma}_i^\dag\tilde{\sigma}_j \rho}+\rm h.c. \right],\qquad \end{eqnarray} where h.c. denotes Hermitian conjugation and the bath correlation functions, taking a thermal equilibrium state $\rho_B$ at temperature $T$, are given by \begin{eqnarray} \label{funccorr} &&{\rm Tr}_B\{ B(t)B(t-s)\rho_B \}=\hbar^2 \int_0^{\infty} d\omega J(\omega)\nonumber \\ &&\quad \times \prtg{[1+ n(\omega)] e^{-i \omega s } +n(\omega) e^{i \omega s }}. \end{eqnarray} The explicit development of Eq.~\eqref{ME2} leads to Eq.~\eqref{metot}. In particular, in order to calculate the coefficients of the master equation one makes use of the identity \begin{equation}\label{expintegral} \int_0^\infty e^{\pm i \varepsilon s}d s=\pi \delta(\varepsilon)\pm i \mathcal{P}\frac{1}{\varepsilon}, \end{equation} where $\delta(\varepsilon)$ is the Dirac delta function and $\mathcal{P}$ denotes the Cauchy principal value. The Lamb shift Hamiltonian of Eq.~\eqref{metot} is given by \begin{equation}\label{Lamb-shift Hamiltonian} H_{LS}= \hbar \prt{s_+ \tilde{\sigma}_+\tilde{\sigma}_-+s_- \tilde{\sigma}_-\tilde{\sigma}_++ s_z \tilde{\sigma}_z^2}, \end{equation} where \begin{eqnarray} s_+ &=& \mathcal{P} \int_0^{\infty} d\omega J (\omega) \left\{ \frac{C^4 \prtq{1+n(\omega)}}{(\omega_L+\nu)-\omega} - \frac{S^4 n(\omega)}{(\omega_L-\nu)-\omega} \right\},\nonumber \\ s_- &=& \mathcal{P} \int_0^{\infty} d\omega J (\omega) \left\{\frac{S^4 \prtq{1+n(\omega)}}{ (\omega_L-\nu)-\omega}-\frac{C^4 n(\omega)}{(\omega_L+\nu)-\omega}\right\},\nonumber \\ s_z &=&\mathcal{P} \int_0^{\infty} d\omega J (\omega) S^2C^2\frac{1}{\omega_L- \omega} . \end{eqnarray} In the secular limit, it holds $[H_S,H_{LS}]=0$. 
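The principal-value integrals above can also be evaluated numerically for a structured spectral density. As a small illustration (an assumption-laden sketch using the Lorentzian profile of Eq.~\eqref{lor}, illustrative parameters, and SciPy's Cauchy-weight quadrature with a truncated integration range), the coefficient $s_z$ can be computed as follows.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Lorentzian spectral density, Eq. (lor)
gamma_l, lam, omega_c = 0.01, 1.0, 10.0
J = lambda w: (gamma_l / (2 * np.pi)) * lam**2 / ((omega_c - w)**2 + lam**2)

Delta, Omega, omega_L = 1.0, 1.5, 10.0
nu = np.sqrt(Delta**2 + Omega**2)
theta = 2 * np.arctan((nu - Delta) / Omega)
S2, C2 = np.sin(theta / 2)**2, np.cos(theta / 2)**2

def pv(f, pole, upper=1e3):
    # Cauchy principal value of int_0^upper f(w) / (w - pole) dw
    return quad(f, 0.0, upper, weight='cauchy', wvar=pole)[0]

# s_z = P int_0^inf dw J(w) S^2 C^2 / (omega_L - w), cf. the expression above
s_z = -S2 * C2 * pv(J, omega_L)
print("Lamb-shift coefficient s_z:", s_z)
\end{verbatim}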
As for the non-secular part ${\cal D}^{\rm nsec}(\rho)$ we have \begin{eqnarray}\label{Non secular part} {\cal D}^{\rm nsec}(\rho)&=&\prt{\gamma_{++}^\theta{+is_{++}}}\tilde{\sigma}_+\rho \tilde{\sigma}_+ \nonumber\\ &+& \prt{\gamma_{+z}^\theta{+is_{+z}} }\prt{ \tilde{\sigma}_+ \tilde{\sigma}_z \rho-\tilde{\sigma}_z \rho\tilde{\sigma}_+}\nonumber\\ &+& \prt{\gamma_{-z}^\theta{+is_{-z}}} \prt{ \tilde{\sigma}_- \tilde{\sigma}_z \rho-\tilde{\sigma}_z \rho\tilde{\sigma}_-}\nonumber\\ &+& \prt{\gamma_{z+}^\theta {+is_{z+}}}\prt{ \tilde{\sigma}_z \tilde{\sigma}_+ \rho-\tilde{\sigma}_+ \rho\tilde{\sigma}_z}\nonumber\\ &+& \prt{\gamma_{z-}^\theta{+is_{z-}}} \prt{ \tilde{\sigma}_z \tilde{\sigma}_- \rho-\tilde{\sigma}_- \rho\tilde{\sigma}_z}\nonumber\\ &+&{\rm h.c.}, \end{eqnarray} where the various coefficients $\gamma_{ij}^\theta$ and $s_{ij}$ can be computed by explicitly developing Eq.~\eqref{ME2}: \begin{eqnarray}\label{nonsgammas} \gamma_{++}^\theta&=& -\frac{1}{2} C^2 S^2 \prtq{\gamma_-(1+2n_-) + \gamma_+(1+2n_+)}, \nonumber\\ \gamma_{z+}^\theta&=& -\frac{1}{2} C S\left[\gamma_+ n_+ C^2 -\gamma_-(1+n_-)S^2 \right],\nonumber\\ \gamma_{z-}^\theta&=& -\frac{1}{2} C S\left[ \gamma_+(1+n_+)C^2 -\gamma_-n_-S^2 \right],\nonumber\\ \gamma_{+z}^\theta&=&-\frac{1}{2} \gamma_0C S\left[(1+n_0)C^2 -n_0 S^2 \right],\nonumber\\ \gamma_{-z}^\theta&=&-\frac{1}{2} \gamma_0 C S\left[n_0C^2 -(1+n_0) S^2 \right], \end{eqnarray} and \begin{eqnarray}\label{nsprincipal parts} s_{++}&=& -\mathcal{P}\int_0^{\infty} d\omega J (\omega) C^2 S^2\nonumber\\ &&\times\left[ \frac{1+2n(\omega)}{(\omega_L-\nu)-\omega} - \frac{ 1+2n(\omega)}{(\omega_L+\nu)-\omega} \right], \nonumber \\ s_{z+}&=& \mathcal{P}\int_0^{\infty} d\omega J (\omega) C S\nonumber\\ &&\times\left\{ \frac{S^2[1+n(\omega)]}{(\omega_L-\nu)-\omega} + \frac{C^2n(\omega)}{(\omega_L+\nu)-\omega} \right\},\nonumber\\ s_{z-}&=& -\mathcal{P}\int_0^{\infty} d\omega J (\omega) C S\nonumber\\ &&\times\left\{ \frac{S^2n(\omega)}{(\omega_L-\nu)-\omega} + \frac{C^2[1+n(\omega)]}{(\omega_L+\nu)-\omega} \right\},\nonumber\\ s_{+z}&=&-\mathcal{P}\int_0^{\infty} d\omega J (\omega) C S\nonumber\\ &&\times\left\{ \frac{C^2[1+n(\omega)]}{\omega_L-\omega} + \frac{S^2n(\omega)}{\omega_L-\omega} \right\},\nonumber\\ s_{-z}&=&\mathcal{P}\int_0^{\infty} d\omega J (\omega) C S\nonumber\\ && \times \left\{ \frac{C^2n(\omega)}{\omega_L-\omega} + \frac{S^2[1+n(\omega)]}{\omega_L-\omega} \right\}. \end{eqnarray} For each pair of $i,j$ in Eq.~\eqref{ME2} the part of the integrals involving the delta function, gives us the decay rates of Eq.~\eqref{gammas} when $i=j$ and the ones of Eq.~\eqref{nonsgammas} when $i\neq j$, for any spectral density. The principal part in Eq.~\eqref{expintegral} leads to the Lamb shift Hamiltonian of Eq.~\eqref{Lamb-shift Hamiltonian} and the terms in Eq.~\eqref{nsprincipal parts}. It can be shown (see for instance Ref.~\cite{tanas}) that, in the case of a flat spectral density, all of these principal parts vanish. This can be obtained by firstly performing the integrals by using a Lorentzian spectral density, and by then taking the width of this Lorentzian to infinity. In the case of a non-flat spectrum, we treat these terms taking again the Lorentzian spectral density. Since in the secular MME these terms lead to the Lamb shift Hamiltonian, which is nothing but energy shift, their effect is not relevant for the steady states, while in the case of the non-secular MME their contribution, in general, can not be neglected. 
Finally, keeping the terms~\eqref{Non secular part} in Eq.~\eqref{metot}, it is possible to show that in the flat-spectrum limit, under the approximation $n_+ \approx n_- \approx n_0 \approx n_{\rm fd}$, the non-secular MME gives exactly the same result as the FDME, i.e., using $H=H_S$, Eq.~\eqref{metot} becomes Eq.~\eqref{MEFD} for any $\gamma_{\rm fd}$. \end{document}
\begin{document} \begin{abstract} We formulate a class of minimal tori in $\mathbb S^3$ in terms of classical mechanics, reveal a curious property of the Clifford torus, and note that the question of periodicity can be made more explicit in a simple way. \end{abstract} \title{Classical Mechanics of Minimal Tori in $\mathbb S^3$} \section{Introduction} The Clifford torus $\mathbb S^1(1/\sqrt{2})\times\mathbb S^1(1/\sqrt{2})$ is geometrically the simplest torus. It is a flat square torus. When embedded in the three-dimensional unit sphere $\mathbb S^3$, it becomes a minimal surface. It divides $\mathbb S^3$ into two congruent solid tori. Moreover, it is doubly ruled, i.e. it can be foliated by two orthogonal families of great circles. The Clifford torus is the unique algebraic minimal surface of degree 2 and is characterized even locally as the only (non-totally-geodesic) minimal surface of constant curvature in $\mathbb S^3$. A long-standing conjecture by Lawson \cite{L2} that the Clifford torus is the only embedded minimal torus in $\mathbb S^3$ was recently resolved affirmatively by S. Brendle \cite{B1}. Also, more recently the Clifford torus has been used as a building block for constructing infinitely many compact embedded minimal surfaces in $\mathbb S^3$ \cite{KY}, \cite{CS}. For a good recent overview concerning minimal surfaces in $\mathbb S^3$ see \cite{B2} (and for constant mean curvature tori see \cite{AL} and references therein). In this note, we will report on two previously unnoticed aspects of minimal tori in $\mathbb S^3$: as deformations of the Clifford torus, and a relation to classical mechanics. \section{Hamiltonian description of surfaces} Originally starting out to find minimal surfaces of higher genus via stationary points of the functional \begin{eqnarray} S[w]:=\int\sqrt{\eta^{KL}\partial_{K}w\partial_{L}w}\sqrt{\eta}d^{D}u \end{eqnarray} i.e. solutions of (cf., e.g., \cite{BH}) \begin{eqnarray} (\eta^{IJ}\eta^{KL}-\eta^{IK}\eta^{JL}) \partial_{I}w\partial_{J}w (\partial^{2}_{KL}w-\Gamma^{M}_{KL}\partial_{M}w) =0, \end{eqnarray} we found a class of minimal tori of the form \begin{eqnarray} \vec{x}(\varphi^{1},\varphi^2)=\left(\begin{array}{cc} \cos\theta(\varphi^{1},\varphi^2)\cos(\varphi^{1}) \\ \cos\theta(\varphi^{1},\varphi^2)\sin(\varphi^{1}) \\ \sin\theta(\varphi^{1},\varphi^2)\cos(\varphi^{2}) \\ \sin\theta(\varphi^{1},\varphi^2)\sin(\varphi^{2}) \end{array}\right) \in \mathbb S^3 \subset \mathbb{R}^{4}, \end{eqnarray} where $\theta$ (when depending on $\varphi^{1}$ and $\varphi^{2}$ only via the combination $k\varphi^{1}+l\varphi^{2}=:t$) can be determined as the solution of a classical mechanics problem, \begin{eqnarray} \dot{\theta}^{2}+\frac{c^{2}s^{2}}{k^{2}s^{2}+l^{2}c^{2}}\left(1-\frac{c^{2}s^{2}}{E^{2}}\right)=0, \end{eqnarray} i.e. a zero-energy solution for a point particle (having ``position'' $\theta$ at ``time'' $t$) moving in a potential $V_E(\theta)$.
($c:=\cos\theta$, $s:=\sin\theta$) In order to see that solutions of (4) give minimal surfaces of the form (3), let us calculate the first and second fundamental forms corresponding to (3): \begin{eqnarray} (g_{ab})&=& \left(\begin{array}{cc} c^{2}+(\partial_{1}{\theta})^{2} & \partial_{1}{\theta}\partial_{2}{\theta} \\ \partial_{1}{\theta}\partial_{2}{\theta} & s^{2}+(\partial_{2}{\theta})^{2} \\ \end{array}\right)\\ (h_{ab})&=& \frac{1}{|\overrightarrow{m}|}\left(\begin{array}{cc} s^{2}c^{2}+sc\theta_{11}+2s^{2}\theta_{1}^{2} & sc\theta_{12}+(s^{2}-c^{2})\theta_{1}\theta_{2} \\ sc\theta_{12}+(s^{2}-c^{2})\theta_{1}\theta_{2} & -s^{2}c^{2}+sc\theta_{22}-2c^{2}\theta^{2}_{2} \\ \end{array}\right) \end{eqnarray} with \begin{eqnarray} \overrightarrow{m}:=sc\left( \begin{array}{c} \begin{array}{c} -sc_{1} \\ -ss_{1} \end{array} \\ \begin{array}{c} cc_{2} \\ cs_{2} \end{array} \\ \end{array} \right) -\left(\begin{array}{c} -ss_{1}\theta_{1} \\ sc_{1}\theta_{1} \\ -cs_{2}\theta_{2} \\ cc_{2}\theta_{2} \\ \end{array}\right),\,\,\,c_i:=\cos\varphi^i,\,s_i:=\sin\varphi^i \end{eqnarray} being orthogonal to $\partial_{1}\vec{x}$, $\partial_{2}\vec{x}$, and $\vec{x}$; hence one finds that (3) has zero mean curvature in $\mathbb S^3$ if and only if \begin{eqnarray} \begin{array}{c} \left(s^{2}+\theta_{2}^{2} \right)\left(s^{2}c^{2}+sc\theta_{11}+2s^{2}\theta_{1}^{2} \right) + \left(c^{2}+\theta_{1}^{2} \right)\left(-s^{2}c^{2}+sc\theta_{22}-2c^{2}\theta_{2}^{2} \right) \\ \quad- 2\theta_{1}\theta_{2}\left( sc\theta_{12}+(s^{2}-c^{2})\theta_{1}\theta_{2}\right)=0, \end{array} \end{eqnarray} where $\theta_{a}:=\partial_{a}\theta$. Note that, equivalently, one could have obtained (8) by varying \begin{eqnarray} S[\theta]:= \int\sqrt{g}d\varphi^{1}d\varphi^{2}=\int\sqrt{c^{2}s^{2}+s^{2}\theta_{1}^{2}+c^{2}\theta^{2}_{2}}d\varphi^{1}d\varphi^{2} . \end{eqnarray} For $\theta =\theta(k\varphi^{1}+l\varphi^{2})=\theta(t)$ one gets, from the Lagrangian (the `$-$' sign put in for later convenience) \begin{eqnarray} L= -\sqrt{c^{2}s^{2}+(k^{2}s^{2}+l^{2}c^{2})\dot{\theta}^{2}} \end{eqnarray} as well as directly from (8), the second order equation \begin{eqnarray} \begin{array}{c} sc\ddot{\theta}(k^{2}s^{2}+l^{2}c^{2})+ \dot{\theta}^{2}\left[(l^{2}-k^{2})s^{2}c^{2}+2s^{4}k^{2}-2c^{4}l^{2}\right] \\ \quad+s^{2}c^{2}(s^{2}-c^{2})=0 , \end{array} \end{eqnarray} which can also be shown to follow from (4) by simply differentiating. A possible way to systematically derive (4) from (11) is provided by a standard Legendre transformation, obtaining from (10) a Hamiltonian \begin{equation} H:=\frac{\partial L}{\partial \dot{\theta}}\dot{\theta}-L=\cdots=\frac{c^{2}s^{2}}{\sqrt{c^{2}s^{2}+(k^{2}s^{2}+l^{2}c^{2})\dot{\theta}^{2}}}=E. \end{equation} Expressing $H$ in terms of $\theta$ and the canonical momentum \begin{eqnarray} \pi:=\frac{\partial L}{\partial \dot{\theta}}=-\frac{(k^{2}s^{2}+l^{2}c^{2})\dot{\theta}}{\sqrt{c^{2}s^{2}+(k^{2}s^{2}+l^{2}c^{2})\dot{\theta}^{2}}} \end{eqnarray} one obtains (replacing $|cs|$ by $cs$, justifiable via $H = const$) \begin{eqnarray} H= \frac{1}{2}\sin(2\theta)\sqrt{1-\frac{\pi^{2}}{ k^{2}s^2+l^2c^2}}=H[\theta,\pi] ; \end{eqnarray} one can check that the first-order equations \begin{equation} \dot{\theta}=\frac{\partial H}{\partial\pi}= -\frac{1}{2}\sin 2\theta \frac{\frac{\pi}{ k^{2}s^2+l^2c^2}}{\sqrt{1-\frac{\pi^{2}}{ k^{2}s^2+l^2c^2}}}\,;\quad \dot{\pi} = -\frac{\partial H}{\partial \theta} \end{equation} reproduce (11).
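Since (11) descends from the Lagrangian (10), any of its solutions conserves the Hamiltonian (12), and therefore automatically satisfies (4) with $E$ set to the conserved value. A minimal numerical check of this statement (illustrative parameters $k=2$, $l=3$ and an arbitrary initial condition; not part of the construction itself) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k, l = 2.0, 3.0
M = lambda th: k**2 * np.sin(th)**2 + l**2 * np.cos(th)**2

def eom(t, y):
    # second-order equation (11) rewritten as a first-order system
    th, v = y
    s, c = np.sin(th), np.cos(th)
    acc = -(v**2 * ((l**2 - k**2) * s**2 * c**2 + 2 * k**2 * s**4 - 2 * l**2 * c**4)
            + s**2 * c**2 * (s**2 - c**2)) / (s * c * M(th))
    return [v, acc]

sol = solve_ivp(eom, (0.0, 20.0), [0.7, 0.1], rtol=1e-10, atol=1e-12)
th, v = sol.y
s, c = np.sin(th), np.cos(th)
H = c**2 * s**2 / np.sqrt(c**2 * s**2 + M(th) * v**2)          # Eq. (12)
print("E conserved:", H.max() - H.min() < 1e-6)
# Eq. (4) with E = H:  theta_dot^2 = (c^2 s^2 / M)(c^2 s^2 / E^2 - 1)
print(np.allclose(v**2, (c**2 * s**2 / M(th)) * (c**2 * s**2 / H**2 - 1), atol=1e-6))
\end{verbatim}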
\section{$k=l$: Clifford torus in disguise} Whereas the axially symmetric case $k=0$ (or $l=0$) is, apart from our mechanical interpretation, well known (S.~Brendle \cite{B2} quotes R.~Kusner when discussing an elliptic integral solution of the form (3) for $l=1$, $k=0$), the $k=l$ case can actually be integrated in terms of elementary functions, leading to the curious fact that the Clifford torus can be viewed as a ``non-trivial'' graph over itself (in infinitely many different ways). Separating variables, and letting $$\alpha(\varphi):=2\theta(k\varphi),\qquad \varphi:= \varphi^{1}+\varphi^{2},$$ $a:= \frac{1}{2E}=\cosh\gamma >1$, one deduces from (4) that \begin{equation} \frac{t-t_{0}}{k}=\varphi-\varphi_{0}=\pm \int\frac{d\alpha}{\sin\alpha\sqrt{a^{2}\sin^{2}\alpha-1}} \end{equation} \begin{equation*} =\mp\arctan\frac{\cos\alpha}{\sqrt{a^{2}\sin^{2}\alpha-1}}\,, \end{equation*} i.e. that $\alpha(\varphi)$ is given via \begin{eqnarray} \frac{\mp\cos\alpha}{\sqrt{a^{2}\sin^{2}\alpha-1}}=\tan(\varphi-\varphi_{0})=:T \end{eqnarray} as a solution of \begin{eqnarray} (\alpha')^{2}=\sin^2\alpha \left(a^{2}\sin^{2}\alpha-1\right). \end{eqnarray} Using this, respectively \begin{equation} \sin\alpha=\sqrt{\frac{1+T^{2}}{1+a^{2}T^{2}}}=\frac{1}{\sqrt{\cos^{2}(\varphi-\varphi_{0})+\cosh^{2}\gamma\sin^{2}(\varphi-\varphi_{0})}}, \end{equation} \begin{equation*} \cos\alpha=\frac{-\sin(\varphi-\varphi_{0})\sinh\gamma}{\sqrt{\cos^{2}(\varphi-\varphi_{0})+\cosh^{2}\gamma\sin^{2}(\varphi-\varphi_{0})}}\,, \end{equation*} one can check that $\alpha(\varphi)$, given as in (19), does satisfy the second order equation (cf. (11)) that is equivalent to the vanishing of the mean curvature, \begin{eqnarray} (\sin \alpha)\, \alpha''-2\alpha'^{2}\cos\alpha=(\cos\alpha)(\sin\alpha)^{2}, \end{eqnarray} whereby it is useful to note that $$ \alpha'\cot\alpha= -\frac{T(a^{2}-1)}{1+a^{2}T^{2}}, $$ \begin{eqnarray} \alpha''=\frac{-\sin(\varphi-\varphi^{0})\sinh\gamma\left(\cosh^{2}\gamma+\sinh^{2} \gamma\cos^{2}(\varphi-\varphi^{0})\right)}{\left(\cos^{2}(\varphi-\varphi^{0})+\cosh^{2} \gamma\sin^{2}(\varphi-\varphi^{0})\right)^{2}}. \end{eqnarray} The derived solution(s) read(s) \begin{eqnarray} \vec{x}\left(\varphi^{1},\varphi^{2}\right)=\frac{1}{\sqrt{2}}\left( \begin{array}{c} \sqrt{1-\frac{e \sin(\varphi-\varphi^{0})}{\sqrt{1+e^{2}\sin^{2}(\varphi-\varphi^{0})}}}\begin{array}{c} \cos\varphi^{1} \\ \sin\varphi^{1} \end{array}\\ \sqrt{1+\frac{e \sin(\varphi-\varphi^{0})}{\sqrt{1+e^{2}\sin^{2}(\varphi-\varphi^{0})}}}\begin{array}{c} \cos\varphi^{2} \\ \sin\varphi^{2} \end{array} \\ \end{array} \right) \end{eqnarray} with $e:= \sinh\gamma \in \mathbb{R}$ an arbitrary constant. One easily sees that, for each value of $e \in \mathbb{R}$, the solution (22) defines an embedded minimal torus in $\mathbb S^{3}$, which at first sight is rather puzzling, as every embedded minimal torus in $\mathbb S^3$ must be congruent to the Clifford torus \cite{B1}. Let us give three direct proofs that the concrete surfaces (22) are indeed Clifford tori. Firstly, for constant $\varphi$ both $x_{3}$ and $x_{4}$ can be expressed as linear combinations of $x_{1}$ and $x_{2}$, each such relation defining a hyper-plane in $\mathbb{R}^{4}$. Since the intersection of $\mathbb S^3$ with two hyper-planes containing the origin of $\mathbb R^4$ gives a great circle, the minimal torus is ruled.
It is known \cite{L1} that the Clifford torus is the only embedded surface among all infinitely many ruled minimal surfaces in $\mathbb S^3$. Hence (22) describes a Clifford torus. Secondly, calculating the determinants of (6) and (5) for $\theta = \theta\left(k\varphi^{1}+l\varphi^{2}\right)$, one finds \begin{equation} h=\frac{-\dot{\theta}^{4}-2s^{2}c^{2}\dot{\theta}^{2}-s^{4}c^{4}}{\dot{\theta}^{2}+s^{2}c^{2}}=-\left(\dot{\theta}^{2}+s^{2}c^{2}\right) \end{equation} \begin{equation*} g=\dot{\theta}^{2}+s^{2}c^{2},\qquad \frac{h}{g}=-1 ,\qquad\qquad\qquad\,\,\, \end{equation*} hence for the intrinsic Gaussian curvature \begin{eqnarray} \begin{array}{c} R= (Tr W)^{2}-\left(Tr W^{2}\right)+(Tr \tilde{W})^{2}-\left(Tr \tilde{W}^{2}\right) \\ =0 - 2 + 4 -2=0\qquad\qquad\qquad\qquad\,\,\,\,\,\,\,\,\,\,\, \end{array} \end{eqnarray} $$\left(\tilde{h}_{ab}:= \vec{x}\cdot\partial _{ab}^{2}\vec{x}=-\partial _{a}\vec{x}\cdot\partial _{b}\vec{x}=-g_{ab}\Rightarrow \tilde{W}^{a}_{b}:=g^{ac}\tilde{h}_{cb}=-\delta^{a}_{b}\right),$$ proving that the minimal tori (22) are flat (hence the Clifford torus). Thirdly, one can construct an explicit isometry to \begin{eqnarray} \vec{\utilde{x}}(\tilde{\varphi}^{1},\tilde{\varphi}^2)=\frac{1}{\sqrt{2}}\left(\begin{array}{c} \cos(\tilde{\varphi}^{1}) \\ \sin(\tilde{\varphi}^{1}) \\ \cos(\tilde{\varphi}^{2}) \\ \sin(\tilde{\varphi}^{2}) \end{array}\right) \end{eqnarray} by finding a re-parametrization $\varphi^{1}\varphi^{2}\rightarrow\tilde{\varphi}^{1} \tilde{\varphi}^2$ such that $(k=l=1, \varphi^{0}=0)$ \begin{equation} 2\left(g_{ab} \right)= \mathds{1} + \left( \begin{array}{cc} \cos\alpha & 0 \\ 0 & -\cos\alpha \\ \end{array} \right) +\frac{1}{2}\sin^{2}\alpha\left(a^{2}\sin^{2}-1\right)\left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \\ \end{array} \right) \end{equation} \begin{equation*} \qquad\qquad\quad\,\,\,\, = \mathds{1} -\frac{e\sin\varphi}{\sqrt{1+e^{2}\sin^{2}\varphi}}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right) + \frac{1}{2}\frac{e^{2}\sin^{2}\varphi}{(1+e^{2}\sin^{2}\varphi)^{2}} \left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \\ \end{array} \right) \end{equation*} \begin{equation*} \quad= J^{T}(2\tilde{g}_{..})J=J^{T}J;\qquad J^{\tilde{a}}_{a}=\frac{\partial \tilde{\varphi}^{\tilde{a}}}{\partial\varphi^{a}}.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \end{equation*} The Ansatz \begin{eqnarray} \tilde{\varphi}^{1}=\varphi^{1}+\int^{\varphi}u,\qquad \tilde{\varphi}^{2}=\varphi^{2}+\int^{\varphi}v \end{eqnarray} gives \begin{eqnarray} \begin{array}{c} J=\left( \begin{array}{cc} 1+u & u \\ v & 1+v \\ \end{array} \right), \qquad \qquad\qquad\qquad\qquad\,\,\\ J^{T}J= \mathds{1} + \left( \begin{array}{cc} 2u & u+v \\ u+v & 2v \\ \end{array} \right)+ (u^{2}+v^{2}) \left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \\ \end{array} \right) , \end{array} \end{eqnarray} so that (26) will be satisfied when choosing \begin{equation} u=\frac{1}{2}\cos\alpha(\varphi)+ w(\varphi) \,\,\,\,\, \end{equation} \begin{equation*} v=-\frac{1}{2}\cos\alpha(\varphi)+ w(\varphi) \end{equation*} \begin{equation*} u^{2}+v^{2}+2w = w^{2} + 2w + \frac{1}{4}(\cos\alpha)^{2}=\frac{1}{2}\frac{c^{2}\sin^{2}\varphi}{(1+c^{2}\sin^{2}\varphi)^{2}}\qquad \end{equation*} \begin{equation*} w(\varphi)=-1+\frac{\sqrt{1+\frac{3}{4}e^{4}s^{4}+\frac{9}{4}e^{2}s^{2}}}{\left(1+e^{2}s^{2}\right)} (s=\sin\varphi). 
\end{equation*} \section{$k\neq l$} Analogously, one can explicitly construct isothermal coordinates also for arbitrary $k$ and $l$ via ($s=\sin\theta$, $c=\cos\theta$) \begin{eqnarray} \begin{array}{c} \left(g_{ab}\right)=\left( \begin{array}{cc} \cos^{2}\theta & 0 \\ 0 & \sin^{2}\theta \\ \end{array} \right)+\left( \begin{array}{cc} k^{2} & kl \\ kl & l^{2} \\ \end{array} \right)\left(\frac{c^{2}s^{2}}{E^{2}}-1\right)\left(\frac{c^{2}s^{2}}{k^{2}s^{2}+l^{2}c^{2}}\right) \\ = \rho^{2}J^{T}J, \qquad J=\left( \begin{array}{cc} 1+ku & lu \\ k v & 1+l v \\ \end{array} \right) .\qquad\qquad\qquad \end{array} \end{eqnarray} With $$Y(\theta):=\left(\frac{c^{2}s^{2}}{E^{2}}-1\right)\left(\frac{c^{2}s^{2}}{k^{2}s^{2}+l^{2}c^{2}}\right) = -V_E(\theta)$$ one gets \begin{eqnarray} \begin{array}{c} c^{2}+k^{2}Y=\left(1 + k^{2}(u^{2}+v^{2})+2ku\right)\rho^{2} \\ s^{2}+l^{2}Y=\left(1 + l^{2}(u^{2}+v^{2})+2lv\right)\rho^{2}\,\, \\ \qquad\, klY=\left(lu + k v + kl(u^{2}+v^{2})\right)\rho^{2} . \end{array} \end{eqnarray} Subtracting $2kl$ times the third equation of (31) from $l^{2}(c^{2}+k^{2}Y) + k^{2}(s^{2}+l^{2}Y)$ and dividing by $k^{2} + l^{2}$, one obtains \begin{eqnarray} \rho^{2}=\frac{k^{2}s^{2}+l^{2}c^{2}}{k^{2} + l^{2}}, \end{eqnarray} while $l^{2}(c^{2}+k^{2}Y) - k^{2}(s^{2}+l^{2}Y)$ yields an inhomogeneous \emph{linear} relation between $u$ and $v$, which (when substituted into any of the three equations in (31)) gives a simple quadratic equation for $u$ (or $v$).\\ In order to study the periodicity properties (for general $k$ and $l$), it is convenient to write the solution again in the form (22), but with $\varphi$ replaced by $\tilde{\varphi}$, defined by absorbing the factor $k^{2}s^{2}+l^{2}c^{2}$, i.e. \begin{eqnarray} \frac{\partial\varphi}{\partial\tilde{\varphi}}=\sqrt{k^{2}\sin^{2}\tilde{\theta} + l^{2}\cos^{2}\tilde{\theta}}\qquad\qquad\qquad\qquad\, \end{eqnarray} $$=\sqrt{\frac{k^{2} + l^{2}}{2}}\sqrt{1+\frac{(k^{2} - l^{2})}{k^{2} + l^{2}}\frac{e\sin\tilde{\varphi}}{\sqrt{1+e^{2}\sin^{2}\tilde{\varphi}}}}.$$ In the case discussed by Brendle ($k=0$, $l=1$) one would get \begin{eqnarray} \varphi^{2}=\frac{1}{\sqrt{2}}\int^{\tilde{\varphi}}\sqrt{1 - \frac{e\sin u}{\sqrt{1 + e^{2}\sin^{2}u}}}\; du, \end{eqnarray} which is a somewhat simpler elliptic integral than the formula for the period in \cite{B2}. For $e= 0$ one has \begin{eqnarray} \delta \varphi^{2}= \frac{1}{\sqrt{2}} \int_{\tilde{\varphi}}^{\tilde{\varphi}+2\pi} du = \sqrt{2}\,\pi =: \delta_0\varphi^{2}, \end{eqnarray} while for $e\rightarrow +\infty$, \begin{eqnarray*} \delta \varphi^{2}\rightarrow \frac{1}{\sqrt{2}} \int_{\pi}^{2\pi}\sqrt{2}\; du= \pi , \end{eqnarray*} so that, by continuity in $e$, it is clear that there will be infinitely many values of $e$ for which $\delta \varphi^{2}$ is a rational multiple of $\pi$, $(p/q)\,\pi$. Alternatively, with $\sin^2\theta(\varphi^{2}) = r^{2}(\varphi^{2})= v(\varphi^{2})$ one has \begin{equation} \varphi^{2} - \varphi_{0}^{2}= \frac{1}{2}\frac{1}{\sqrt{1- v_{-}}}\,\Pi\left(\arcsin\sqrt{\frac{1 - v}{1 - v_{+}}}\,,\;1-v_{+}\,,\;\sqrt{\frac{1- v_{+}}{1- v_{-}}}\right), \end{equation} \begin{equation*} 1 > v \geq v_{+} > v_{-} >0, \qquad v_{\pm} = \frac{1}{2}\pm\sqrt{\frac{1}{4} - E^{2}}, \end{equation*} an elliptic integral of the third kind. \section*{Acknowledgment} This work was supported by the Swedish Research Council, Postech SRC-GAIA, and a Sogang University Research Grant (2012). \end{document}
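For the reader who wants to see the last claim concretely, here is a small numerical sketch (added for illustration only; it assumes a standard Python environment with numpy and scipy, and is not part of the original argument) that evaluates the increment of (34) over one full period of the new variable and locates a few values of $e$ for which $\delta\varphi^{2}$ is a rational multiple of $\pi$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def delta_phi2(e):
    # increment of eq. (34) over one full 2*pi period of the integration variable
    f = lambda u: np.sqrt(1.0 - e*np.sin(u)/np.sqrt(1.0 + (e*np.sin(u))**2))
    val, _ = quad(f, 0.0, 2.0*np.pi, limit=200)
    return val/np.sqrt(2.0)

# the two limits discussed above: sqrt(2)*pi at e = 0, and -> pi as e -> infinity
print(delta_phi2(0.0)/np.pi, delta_phi2(1e3)/np.pi)

# delta_phi2 varies continuously between sqrt(2)*pi and pi, so every rational
# multiple (p/q)*pi in between is attained for some e; locate a few such e's.
for p, q in [(7, 5), (4, 3), (5, 4), (6, 5), (11, 10)]:
    target = (p/q)*np.pi
    e_star = brentq(lambda e: delta_phi2(e) - target, 1e-6, 1e3)
    print(f"delta phi^2 = {p}/{q} * pi   at   e ~= {e_star:.6f}")
\end{verbatim}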
\begin{document} \subjclass[2010]{05C50} \keywords{Laplacian spectrum, trees, multiplicities of eigenvalues} \begin{abstract} In this paper, we study the multiplicity of the Laplacian eigenvalues of trees. It is known that for trees, integer Laplacian eigenvalues larger than $1$ are simple and also the multiplicity of Laplacian eigenvalue $1$ has been well studied before. Here we consider the multiplicities of the other (non-integral) Laplacian eigenvalues. We give an upper bound and determine the trees of order $n$ that have a multiplicity that is close to the upper bound $\frac{n-3}{2}$, and emphasize the particular role of the algebraic connectivity. \end{abstract} \title{Trees with a large Laplacian~\linebreak eigenvalue multiplicity} \section{Introduction} In this paper, we study the multiplicity of the Laplacian eigenvalues of trees. In particular, we are interested in upper bounds for these multiplicities, and trees with multiplicities that are close to these upper bounds. Let $G$ be a graph with Laplacian eigenvalues $0=\mu_1\leq \cdots \leq \mu_n$. Fiedler \cite{fiedler} showed that $m_G(\mu_1)=1$ if and only if $G$ is connected, where $m_G(\mu)$ denotes the multiplicity of Laplacian eigenvalue $\mu$. Grone, Merris, and Sunder \cite[Prop.~2.2]{kamel} proved that $m_G(\mu_n)=1$ if $G$ is a connected bipartite graph (in particular, if it is a tree). On the other hand, for the complete graph $K_n$, the multiplicity of $\mu_n$ is large and indeed is $n-1$. How large can the multiplicities be when we restrict to trees? It is shown by Grone et al.~\cite[Thm.~2.1]{kamel} that if an integer $\mu \geq 2$ is a Laplacian eigenvalue of a tree $T$ of order $n$, then $m_T(\mu)=1$ and $\mu \mid n$. Interestingly, it is different for the Laplacian eigenvalue $1$, because $m_G(1)\geq p(G)-q(G)$ for all graphs $G$, where $p(G)$ and $q(G)$ are the number of pendant vertices and quasipendant vertices, respectively, see \cite[p.~263]{far}. For example, it follows that if $G$ is a star ($p(G)=n-1,q(G)=1$), then $m_G(1)=n-2$. Guo, Feng, and Zhang \cite{feng} showed that if $T$ is a tree of order $n$, then $m_T(1) \in S=\{0,\, 1,\, \ldots,\,n-5, n-4,\, n-2\}$ and for every integer $n$ and every $k\in S$ there exists a tree $T$ of order $n$ with $m_T(1)=k$. Also Barik, Lal, and Pati \cite{barik} studied the multiplicity of Laplacian eigenvalue $1$ in trees. In this paper we consider the multiplicities of the other (non-integral) Laplacian eigenvalues. Our paper is further organized as follows. In Section \ref{secNotation}, we introduce some notation and definitions. We then collect some relevant results from the literature in Section \ref{secSurvey}. In Section \ref{secExamples}, we introduce the family of trees that we will call spiders, as they will play a key role in the main part of the paper, Section \ref{secMAIN}. In this main part we will give an upper bound $\frac{n-3}{2}$ on the multiplicities of a Laplacian eigenvalue of a tree of order $n$, in particular for the algebraic connectivity, and characterize the trees that are close to this upper bound. One main result (Theorem \ref{km2}) --- and starting point of several refinements --- is that for a tree $T$ of order $n\geq 6$, the multiplicity of a Laplacian eigenvalue $\mu \neq 1$ is at most $\frac{n-3}{2}$, and we characterize the trees that attain this bound. 
We will also characterize the trees that have a multiplicity that is close to the upper bound (in Theorems \ref{n/2-2} and \ref{n-1/2-2}), and show that there are finitely many trees besides a specific family of spiders for which the algebraic connectivity has a multiplicity within a fixed range from the upper bound (in Theorem \ref{thm:n/2-k}). \section{Preliminaries} \subsection{Notation}\label{secNotation} Let $G = (V(G),E(G))$ be a graph, where $V (G) = \{v_1, \ldots , v_n\}$ is the vertex set and $E(G)$ is the edge set. Throughout this paper all graphs are simple, that is, without loops and multiple edges. Let $d_G(v_i)$ be the degree of $v_i$. We denote the non-increasing vertex degree sequence of $G$ by $(d_1, \ldots , d_n)$. The maximum and minimum degree of $G$ are denoted by $\Delta(G)=d_1$ and $\delta(G)=d_n$, respectively. The {\em adjacency matrix} of $G$, denoted by $A(G)$, is an $n\times n$ matrix whose $(i, j)$-entry is $1$ if $v_i$ and $v_j$ are adjacent and $0$ otherwise. We call $L(G) = D(G) - A(G)$ the {\it Laplacian matrix} of $G$, where $D(G)$ is the $n \times n$ diagonal matrix with $d_{ii}=d(v_i)$. The eigenvalues of $L(G)$ are called the {\it (Laplacian) eigenvalues}\footnote{Whenever we write about eigenvalues of a graph, we mean Laplacian eigenvalues (unless explicitly stated otherwise).} of $G$, and we denote these in increasing order by $0=\mu_1(G) \leq \cdots \leq \mu_n(G)$. The multiset of eigenvalues of $L(G)$ is called the (Laplacian) spectrum of $G$. Fiedler \cite{fiedler} called $\mu_2$ the {\it algebraic connectivity} of $G$. The multiplicity of a Laplacian eigenvalue $\mu$ in a graph $G$ is denoted by $m_G(\mu)$; the number of Laplacian eigenvalues of $G$ in an interval $I$ is denoted by $m_GI$. We say that two (or more) Laplacian eigenvalues are conjugates (of each other) if they are roots of the same irreducible factor of the characteristic polynomial of the Laplacian (over the rationals). Conjugate eigenvalues have the same multiplicity. A {\em star} (graph) is a complete bipartite graph $K_{1,n}$, for some positive integer $n$. We let $P_n$ be the path of order $n$. The {\em diameter} of $G$ is denoted by $\diam(G)$. A vertex of degree one is called a {\em pendant vertex} and a vertex that is adjacent to at least one pendant vertex is called a {\em quasipendant vertex}. The number of pendant and quasipendant vertices of a graph $G$ are denoted by $p(G)$ and $q(G)$, respectively. In all of the above notation, we remove the additional $G$ or $T$ if there is no ambiguity; for example $m(\mu)$ instead of $m_T(\mu)$, or $V$ instead of $V(G)$. \subsection{A collection of elementary results}\label{secSurvey} In this section we collect and extend some relevant basic lemmas from the literature. We start with some general ones. For other basic results, we refer to the books by Brouwer and Haemers \cite{BH}, Cvetkovi\'{c}, Doob, and Sachs \cite{CDS}, and Cvetkovi\'{c}, Rowlinson, and Simi\'{c} \cite{cvet}. \begin{lem}\label{cve} {\em \cite[Prop.~7.5.6]{cvet}.} If $G$ is a connected graph with $r$ distinct Laplacian eigenvalues, then $\diam(G)\leq r-1$. \end{lem} \begin{lem}\label{haem} {\em \cite[Thm.~1]{haemer}}. Let $G$ be a graph of order $n$, with vertex degrees $d_1\geq \cdots \geq d_n$ and Laplacian eigenvalues $\mu_1 \leq \cdots \leq \mu_n$. If $G$ is not $K_m \cup (n-m)K_1$, then $\mu_{n-m+1}\geq d_m - m+2$ for $1\leq m \leq n$. 
In particular, if $G$ has at least one edge, then $\mu_n\geq d_1+1$ and if $G$ has at least two edges, then $\mu_{n-1} \geq d_2$. \end{lem} We recall that a sequence $\mu_1 \geq \cdots \geq \mu_m$ interlaces another sequence $\lambda_1 \geq \cdots \geq \lambda_n$ with $m < n$ whenever $\lambda_i \geq \mu_i \geq \lambda_{n-m+i}$, for $i = 1, \ldots , m$. It is well known \cite[Thm.~1]{hwa} that the eigenvalues of a principal submatrix of a Hermitian matrix $M$ interlace the eigenvalues of $M$. For the Laplacian eigenvalues, we moreover have the following two specific results. \begin{lem}\label{kame3} {\em \cite[Thm.~4.1]{kamel}.} Let $\tilde{G}$ be a graph of order $n$ and let $G$ be a (spanning) subgraph of $\tilde{G}$ obtained by removing just one of its edges. Then the $n-1$ largest Laplacian eigenvalues of $G$ interlace the Laplacian eigenvalues of $\tilde{G}$. \end{lem} A consequence of this is the following. \begin{lem}\label{kame22} {\em \cite[Cor.~4.2]{kamel}.} Let $v$ be a pendant vertex of $\tilde{G}$ and let $G=\tilde{G}\setminus v$. Then the Laplacian eigenvalues of $G$ interlace the Laplacian eigenvalues of $\tilde{G}$. \end{lem} The following results concern multiplicities. \begin{lem}\label{kame} \cite[p.~263]{far}. Let $G$ be a graph with $p$ pendant vertices and $q$ quasipendant vertices. Then $m_G(1)\geq p-q$. \end{lem} This result follows from the observation that for every pair $(v_i,v_j)$ of pendant vertices that is adjacent to the same quasipendant vertex, the difference $e_i-e_j$ of the corresponding characteristic vectors is an eigenvector of $L(G)$ for eigenvalue $1$. This gives $p-q$ linearly independent eigenvectors. Finally, we have some specific results for trees. \begin{lem}\label{Tm2p} {\em \cite[Thm.~2.3]{kamel}.} Let $T$ be a tree with $p$ pendant vertices. If $\mu$ is a Laplacian eigenvalue of $T$, then $m_T(\mu) \leq p-1$. \end{lem} The following is a clear generalization to non-integral eigenvalues of a result by Grone, Merris, and Sunder \cite[Thm.~2.15]{kamel}. It uses the Matrix-Tree theorem, which states that any cofactor of the Laplacian matrix of $G$ equals the number of spanning trees of $G$. In case of trees, this is equivalent to the fact that the product of all non-zero Laplacian eigenvalues equals the number of vertices; we shall use this also later. \begin{lem}\label{van} Let $\mu$ be a Laplacian eigenvalue of a tree $T$ with $m_T(\mu)>1$. Then the product of $\mu$ and its conjugate eigenvalues equals $1$. In particular, if $\mu$ is an integer, then $\mu=1$. \end{lem} \begin{proof} Let $v$ be a pendant vertex. Then the Laplacian matrix of $T$ has the form $$L(T)=\begin{bmatrix} B& *\\ *& 1 \end{bmatrix},$$ where $B$ is the principal submatrix corresponding to $T\setminus v$. Since $m_T(\mu)>1$, there is an eigenvector $x$ of $L(T)$ for $\mu$ such that its last component is $0$. If $x'$ is the vector obtained from $x$ by deleting the last component, then it is not hard to see that $Bx'=\mu x'$. So $\mu$ is an eigenvalue of $B$ as well (and so are its conjugates). By the Matrix-Tree Theorem however, we have that $\det(B)=1$, and the result follows. \end{proof} To conclude this section, we mention a bound for the multiplicity of the algebraic connectivity of a tree. It follows easily from a result of Grone and Merris \cite{merris}. \begin{prop}\label{delta} Let $T$ be a tree with $\Delta \geq 2$. Then $m_T(\mu_2)\leq \Delta -1$. \end{prop} \begin{proof} Let $m=m_T(\mu_2)$, and suppose that $m\geq \Delta$. 
Because the multiplicity is at least $2$, there is an eigenvector that has value $0$ corresponding to one of the vertices, and so $T$ is a so-called type I tree (see \cite{merris}). By \cite[Thm.~2]{merris}, the so-called characteristic vertex of $T$ has degree at least $m+1$, but this is at least $\Delta+1$, which is a contradiction. \end{proof} \section{Spiders and their spectra}\label{secExamples} In this section we define two families of trees that are most relevant to our results. We start with the main one, i.e., the family of trees that have large multiplicities for some non-integral Laplacian eigenvalues. The spider $T(s,k)$, with $1\leq k \leq s$, is defined as in Figure \ref{12star}. It is obtained from the star $K_{1,s}$ by extending $k$ of its rays (legs) by an extra edge, and has $n=s+k+1$ vertices. We say that the spider has $k$ legs. \begin{figure} \caption{The spider $T(s,k)$} \label{12star} \end{figure} \begin{prop}\label{spectrum1} The Laplacian spectrum of $T(k,k)$ is $$\{0^{[1]},\, \theta^{[k-1]},\, \lambda_1^{[1]},\, \thetac^{[k-1]},\, \lambda_2^{[1]}\},$$ where $(\theta, \thetac)=(\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2})$ and $\lambda_1$, $\lambda_2$ are the roots of $x^2-(k+3)x+2k+1$. \end{prop} \begin{proof} In Figure \ref{12star}, an eigenvector for eigenvalue $\theta=\frac{3-\sqrt{5}}{2}$ is shown\footnote{Whenever we use $\theta$ and $\thetac$ in this paper, we mean the specific values $\frac{3-\sqrt{5}}{2}$ and $\frac{3+\sqrt{5}}{2}$.}. It is clear that there are $k-1$ linearly independent such eigenvectors corresponding to the eigenvalue $\theta$. Similarly, we find $k-1$ linearly independent eigenvectors corresponding to eigenvalue $\thetac=\frac{3+\sqrt{5}}{2}$. Because $\mu_1=0$, there are only two other eigenvalues $\lambda_1 \leq \lambda_2$. Because the product of all non-zero eigenvalues equals $n=2k+1$, it follows that $\lambda_1 \lambda_2 =2k+1$. Noting the trace of $L$ (which equals twice the number of edges) it follows that $\lambda_1 + \lambda_2 =k+3$, and the result follows. \end{proof} Note that if we let $\lambda_2 >\lambda_1$, then $\lambda_1>1$ and $\lambda_2>\thetac$, so for $k \geq 2$ the algebraic connectivity $\mu_2$ equals $\frac{3-\sqrt{5}}{2}$ and $\mu_{n-1}=\frac{3+\sqrt{5}}{2}$. Proposition \ref{spectrum1} thus shows that the upper bound of Proposition \ref{delta} is sharp. \begin{prop}\label{spectrum2} The Laplacian spectrum of $T(s,k)$ with $k<s$ is $$\{ 0^{[1]},\, \theta^{[k-1]},\, \lambda_1^{[1]},\, 1^{[s-k-1]},\, \lambda_2^{[1]},\, \thetac^{[k-1]},\, \lambda_3^{[1]}\},$$ where $(\theta, \thetac)=(\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2})$ and $\lambda_1$, $\lambda_2$, $\lambda_3$ are the roots of $x^3-(s+4)x^2+(3s+4)x-(s+k+1)$. \end{prop} \begin{proof} As before, we have eigenvalues $\theta$ and $\thetac$ with multiplicities at least $k-1$, and eigenvalue $0$. In addition, we have eigenvalue $1$ with multiplicity at least $s-k-1$ by Lemma \ref{kame}. Thus, three eigenvalues $\lambda_1, \lambda_2,\lambda_3$ are left. Again, from the product of all non-zero eigenvalues being equal to $n$, it follows that $\lambda_1 \lambda_2 \lambda_3 = n= s+k+1$. Moreover, $\lambda_1+\lambda_2 +\lambda_3= 2n-2-(s-k-1)-3(k-1)=s+4$. A quadratic equation easily follows from the trace of $L^2$: $$ \sum_{i=1}^n \mu_i^2= \sum_{i=1}^n d_i^2 + 2m$$ for every graph $G$ of order $n$ with $m$ edges, and degrees $d_i$. 
For $T(s,k)$, we obtain that $\lambda_1^2+\lambda_2^2 +\lambda_3^2= s^2+2s+8$, and from this we conclude that $\lambda_1 \lambda_2 + \lambda_1 \lambda_3 +\lambda_2 \lambda_3 = \frac{1}{2}(s+4)^2-\frac{1}{2}(s^2+2s+8) = 3s+4$. From all of this, the spectrum follows. \end{proof} Note that for a fixed integer $s$, one can use induction to show that $\nobreak \mu_2(T(s,s-i))\geq \theta$ and $\mu_{n-1}(T(s,s-i))\leq \thetac$, for $i=0, \ldots , s-1$, starting from Proposition \ref{spectrum1} and by inductively applying Lemma \ref{kame22}. On the other hand, for fixed $k\geq 2$, it also follows by induction that $\mu_{n-1}(T(k+i,k))\geq \thetac$ for all $i$. Consequently, we have the following. \begin{lem}\label{lemmatsk35} Let $T=T(s,k)$, with $s \geq k \geq 2$. Then $\mu_2(T)= \theta=\frac{3-\sqrt{5}}{2}$ and $\mu_{n-1}(T)= \thetac=\frac{3+\sqrt{5}}{2}$. \end{lem} It also follows from the above that $\mu_2(T(s,1))>\theta$ for $s\geq 1$. Below we will show that the spider graphs $T=T(s,k)$, with $s \geq k \geq 2$ are extremal regarding the eigenvalues $\theta$ and $\thetac$. Observe first however that if $k=s-1$ then $m_T(1)=0$ and $\lambda_2=2$. So in this case, $\lambda_1$ and $\lambda_3$ are the roots of $x^2-(s+2)x+s$ and the spectrum of $T(s,s-1)$ is $$\{0^{[1]},\, \theta^{[s-2]},\, \lambda_1^{[1]},\, 2^{[1]},\, \thetac^{[s-2]},\, \lambda_3^{[1]}\}.$$ \begin{figure} \caption{$H(s,r,t)$} \label{Htree} \end{figure} A second family of trees that we will use is shown in Figure \ref{Htree}. Such a tree $H(s, r, t)$ is obtained by attaching $s$ pendant vertices to one end point of the path on $r$ vertices, and $t$ pendant vertices to its other end point, for some positive integers $s,r,t$. We note that for $r=1$, we obtain a star (a tree with diameter $2$). The Laplacian spectrum of this graph is $\{0^{[1]},\, 1^{[n-2]},\, n^{[1]}\}$. Also, every tree of order $n$ with diameter $3$ is a graph $H(s,2,t)$ for some integers $s, t$ such that $s+t=n-2$. The spectrum of this graph can easily be obtained in a similar way as in Proposition \ref{spectrum2}. \begin{lem}\label{dou} \cite[Prop.~1]{Grone}. Let $T=H(s,2,t)$ be of order $n=s+t+2$. Then the characteristic polynomial of $L(T)$ is $x(x-1)^{n-4}[x^3-(n+2)x^2+(2n+st+1)x-n]$. \end{lem} In order to prove some of the results in the next section, we need the following two half-known characterizations of trees with extremal Laplacian eigenvalues $\frac{3 \pm \sqrt{5} }{2}$. The first is a result by Li, Guo, and Shiu \cite{li}. For completeness, we provide a shorter proof of this result. \begin{figure} \caption{The fork $F$} \label{obt} \end{figure} \begin{prop}\label{kho} {\em \cite[Thm.~3.4]{li}.} Let $G$ be a connected graph of order $n$. Then $\mu_{n-1}(G)=\frac{3+\sqrt{5}}{2}$ if and only if $G\cong T(s,k)$ for some positive integers $s,k$ with $k\geq 2$. \end{prop} \begin{proof} One side is clear by Lemma \ref{lemmatsk35}. To show the other side, suppose that $\mu_{n-1}(G)=\frac{3+\sqrt{5}}{2}=\thetac$. If $\Delta(G)=2$, then $G$ is a path or a cycle. Since $\mu_{5}(P_6)=3>\thetac$, it follows from interlacing (Lemmas \ref{kame3} and \ref{kame22}) that the order of $G$ is less than $6$, and then it can be checked that $G\cong P_5 = T(2,2)$. Next, assume that $u$ is a vertex of $G$ with degree $\Delta(G)\geq 3$. If there exists a vertex $v$ at distance $2$ from $u$ with degree at least $2$, then the graph $G$ can be obtained from the fork $F$ in Figure \ref{obt} by adding some pendant vertices and some edges. 
But $\mu_5(F) =3>\thetac$ and so again by interlacing, we have that $\mu_{n-1}(G)\geq \mu_5(F) > \thetac$, which is a contradiction. Thus, every vertex of degree at least $2$ is adjacent to $u$. By Lemma \ref{haem}, we have $d_2(G)\leq \mu_{n-1}$, hence $d_2(G) \leq 2$. So $G\cong T(s,k)$ for some integers $s,k$. By Proposition \ref{spectrum2}, it is necessary to have $k\geq 2$. \end{proof} We note that the proof actually shows that if $\mu_{n-1}(G) \leq \frac{3+\sqrt{5}}{2}$, then $G$ is a star or a spider. For more details and the full classification of graphs with $\mu_{n-1}(G) \leq 3$, we refer to Li, Guo, and Shiu \cite{li}. The following similar result is a strengthening of a result by Zhang \cite[Thm.~2.12]{zhang}. \begin{prop}\label{kho2} Let $T$ be a tree of order $n$. If $\mu_2(T) \geq \frac{3-\sqrt{5}}{2}$, then $T$ is a star, a spider, $H(2,2,2)$, or $H(3,2,2)$, and equality holds if and only if $T\cong T(s,k)$ for some positive integers $s,k$, with $k\geq 2$. \end{prop} \begin{proof} Suppose that $\mu_2(T) \geq \frac{3-\sqrt{5}}{2}=\theta$. By interlacing (Lemma \ref{kame22}), the tree $T$ cannot have $P_6$ as a subgraph since $\mu_2(P_6)<\theta$, so $\diam(T)\leq 4$. If $T$ is not a star, then it must have diameter $3$ or $4$. If $\diam(T)=3$, then $T$ is $H(s,2,t)$ for some positive integers $s,t$. Using Lemma \ref{dou}, we find however that $\mu_2(H(3,2,3)) < \theta $ and $\mu_2(H(4,2,2)) < \theta$, so it follows again by interlacing that in this case $T$ can only be one of the trees $H(2,2,2)$, $H(3,2,2)$, and $T(s,1)$, for some positive integer $s$. Indeed, $\mu_2(H(2,2,2)) > \theta $, $\mu_2(H(3,2,2)) > \theta$, and $\mu_2(T(s,1))> \theta$. Finally, suppose that $\diam(T)=4$. For the fork $F$ (see Figure \ref{obt}), we have $\mu_2(F)<\theta$. So $T$ cannot have $F$ as a subgraph, and thus $T$ is a spider. The case of equality now follows from Lemma \ref{lemmatsk35} and the fact that a star, $H(2,2,2)$, $H(3,2,2)$, and $T(s,1)$ all have $\mu_2>\theta$. \end{proof} \section{Trees with a large multiplicity}\label{secMAIN} We are now ready for our main results, that is, to bound the multiplicities of non-integer Laplacian eigenvalues of trees, and to characterize the trees with large Laplacian eigenvalue multiplicities. \begin{thm}\label{km2} Let $T$ be a tree of order $n\geq 6$ with a Laplacian eigenvalue $\mu$. If $\mu\neq 1$, then $m_T(\mu)\leq \frac{n-3}{2}$, and equality holds if and only if $T\cong T(\frac{n-1}{2},\frac{n-1}{2})$ and $\mu =\frac{3 \pm \sqrt{5}}{2}$. In particular, if equality holds, then $\mu$ is the algebraic connectivity of $T$ or its conjugate. \end{thm} \begin{proof} Suppose that $m_T(\mu) > \frac{n-3}{2}$ and $\mu \neq 1$. Then $m_T(\mu)>1$. So by Lemma \ref{van}, the eigenvalue $\mu$ is not an integer, so it has at least one conjugate eigenvalue $\overline{\mu}$ with $m_T(\overline{\mu})=m_T(\mu)$. So $m_T(\mu)\leq \frac{n-1}{2}$. If $n$ is odd, then $m_T(\mu) =\frac{n-1}{2}$, so $T$ has exactly $3$ distinct eigenvalues. This implies that the diameter of $T$ is at most $2$ (by Lemma \ref{cve}), so $T$ is a star, with spectrum $\{ 0^{[1]},\, 1^{[n-2]},\, n^{[1]}\}$, which is a contradiction. So $n$ is even and $m_T(\mu)=\frac{n-2}{2}$. Hence $T$ has spectrum $$ \{ 0^{[1]},\, \mu^{[\frac{n-2}{2}]},\, \overline{\mu}^{[\frac{n-2}{2}]},\, \lambda^{[1]}\}$$ for some eigenvalue $\lambda$. Now, by Lemma \ref{cve}, $T$ has diameter at most $3$, and it is not a star. So $T$ has $p=n-2$ pendant vertices and $q=2$ quasipendant vertices. 
Thus, $m_T(1)\geq p-q=n-4 \geq 2$, by Lemma \ref{kame}, which is a contradiction, so $m_T(\mu) \leq \frac{n-3}{2}$. Now, suppose that $m_T(\mu)= \frac{n-3}{2}$. Again, by Lemma \ref{van}, the eigenvalue $\mu$ is not an integer. If $n=7$, then possibly $\mu$ can have two conjugate eigenvalues. However, this would imply that $T$ has four distinct eigenvalues, which would give a contradiction in the same way as in the above case of $n$ even. So $n \geq 7$ and $\mu$ has exactly one conjugate eigenvalue $\overline{\mu}$. By Lemma \ref{van}, we have $\mu\overline{\mu}=1$. So the spectrum of $T$ is $$\{0^{[1]},\, \mu^{[\frac{n-3}{2}]},\, \overline{\mu}^{[\frac{n-3}{2}]},\, \lambda_1^{[1]},\, \lambda_2^{[1]}\},$$ for some eigenvalues $\lambda_1$ and $\lambda_2$. Since the product of all non-zero eigenvalues of a tree equals $n$, we find that $\lambda_1\lambda_2=n$. Because the only tree on $n$ vertices with an eigenvalue $n$ is the star (which easily follows by observing that the complement of such a tree should be disconnected), it follows that $m_T(1)=0$, and hence by Lemma \ref{kame}, we deduce that $p=q$. Moreover, by Lemma \ref{cve} the diameter of $T$ is at most $4$. Because $n$ is odd, it now easily follows that $T$ is isomorphic to $T(\frac{n-1}{2},\frac{n-1}{2})$. As observed in Proposition \ref{spectrum1}, this graph indeed has an eigenvalue (in fact, the two conjugates $\frac{3\pm \sqrt{5}}2)$ with multiplicity $\frac{n-3}{2}$. \end{proof} Note that if $T$ is a tree but not a star, then $\diam(T) \geq 3$. Because $\mu_2(P_4)<1$, it then follows by interlacing that $\mu_2(T)<1$, so we have the following. \begin{cor}\label{m2n2} If $T$ is a tree of order $n$ but not a star, then $m_T(\mu_2)\leq \frac{n-3}{2}$. \end{cor} The multiplicity of $\mu_2$ in a star $K_{1,n}$ equals $n-1$ and by Corollary \ref{m2n2}, $m_T(\mu_2)\leq \lfloor \frac{n-1}{2} \rfloor -1$ for every other tree $T$. So there is a huge gap between these multiplicities $n-1$ and $ \lfloor \frac{n-1}{2} \rfloor -1$. Lemma \ref{lemmatsk35} and Proposition \ref{spectrum2} show that there are no other gaps. In fact, for any $i=1,\dots, \lfloor \frac{n-1}{2} \rfloor -1$, the tree $T=T(n-i-2,i+1)$ has $m_T(\mu_2)=i$. In Theorem \ref{thm:n/2-k}, we will show that for every positive integer $j$, all except finitely many trees $T$ with $m_T(\mu_2)=\lfloor \frac{n}{2} \rfloor -j$ are spiders. But first, we determine the typical values of the Laplacian eigenvalues with large multiplicity. \begin{lem}\label{lem:n/2-k} Let $j$ be an integer with $j \geq 2$. If $T$ is a tree of order $n> 10j$ and $m_T(\mu)= \lfloor \frac{n}{2} \rfloor -j$ for a Laplacian eigenvalue $\mu \neq 1$, then $\mu =\frac{3 \pm \sqrt{5}}{2}$. \end{lem} \begin{proof} Suppose that $n\geq 10j+1$ and $m_T(\mu)=\lfloor \frac{n}{2} \rfloor -j$. Because $\mu$ cannot be integer, there exists another (conjugate) eigenvalue $\overline{\mu}$ with $m_T(\overline{\mu})=\lfloor \frac{n}{2}\rfloor -j$. Since $n\geq 10j+1$ there can be no other (conjugate) eigenvalue with such a large multiplicity. Thus, $\mu$ and $\overline{\mu}$ are the roots of a quadratic factor of the Laplacian polynomial, so $\mu + \overline{\mu}$ equals a positive integer $t$, say. Because $\mu\overline{\mu}=1$ by Lemma \ref{van}, it follows that $t \geq 3$. Since $\mu_1+\cdots +\mu_n =2n-2$, we have that $(\lfloor \frac{n}{2} \rfloor -j)t < 2n-2.$ First suppose that $t\geq 5$. If $n$ is even, then it follows that $n<\frac{2jt-4}{t-4} \leq 10j-4$, which is a contradiction. 
Similarly, if $n$ is odd, then $n<\frac{(2j+1)t-4}{t-4} \leq 10j+1$; again a contradiction. Next, suppose that $t=4$. Note that by Proposition \ref{delta} and Lemma \ref{haem}, we have $\Delta \geq \lfloor \frac{n}{2} \rfloor -j +1$ and $\mu_n \geq \Delta +1$, and hence $\mu_n \geq \lfloor \frac{n}{2}\rfloor -j+2$. Note also that $\mu_n$ is a simple eigenvalue by \cite[Prop.~2.2]{kamel}, so it is not equal to $\mu$ or $\overline{\mu}$. If $n$ is even, then $$ \sum_{\mu_i \notin \{ \mu, \overline{\mu} \}} \mu_i = 4j-2 > \frac{n}{2} -j+2$$ and so $n<10j-8$, which is a contradiction. Similarly, if $n$ is odd, then $$ \sum_{\mu_i \notin \{ \mu, \overline{\mu} \}} \mu_i = 4j > \frac{n-1}{2} -j+2$$ and so $n<10j-3$; again a contradiction. Thus, $t= 3$ and hence $\mu ,\overline{\mu} = \frac{3 \pm \sqrt{5}}{2}$. \end{proof} \begin{thm}\label{thm:n/2-k} Let $j$ be an integer with $j\geq 2$ and let $T$ be a tree of order $n> 10j$. Then $m_T(\mu_2)= \lfloor \frac{n}{2} \rfloor -j$ if and only if $T \cong T(\lceil \frac{n}{2} \rceil +j-2, \lfloor \frac{n}{2} \rfloor -j+1)$. \end{thm} \begin{proof} If $m_T(\mu_2)= \lfloor \frac{n}{2} \rfloor -j$, then $T$ is not a star, so $\mu_2 \neq 1$, and hence by Lemma \ref{lem:n/2-k}, we have $\mu_2= \frac{3-\sqrt{5}}{2}$. Now, by Proposition \ref{kho2}, it is clear that $T\cong T(s,k)$ for some suitable $s$ and $k$. These integers are now determined by the multiplicity of $\mu_2$ (see Propositions \ref{spectrum1} and \ref{spectrum2}), which finishes the proof. \end{proof} Observe the contrast between Theorems \ref{km2} and \ref{thm:n/2-k} in the sense that in the latter we restrict to eigenvalue $\mu_2$. Indeed, the following examples show that this restriction is necessary, at least for $j \geq 3$. \begin{figure} \caption{The tree $T^{*} \label{teta} \end{figure} Let $s \geq k \geq 2$ and let $T^{*}=T^{*}(s,k)$ be the tree obtained from $T(s,k)$ by adding one pendant vertex to one of the $k$ legs. See Figure \ref{teta} for the case $k=2$. It is clear from Proposition \ref{kho2} that $\mu_2(T^{*})< \theta$, and it follows from interlacing (Lemma \ref{kame22}) that $m_{T^{*}}(\theta)$ equals $k-2$ or $k-1$. We claim that this multiplicity equals $k-2$. In order to show this, we first consider the case $k=2$ and show that $\theta$ is not an eigenvalue of $T^{*}(s,2)$ for any $s \geq 2$. Indeed, suppose that $\theta$ is an eigenvalue of $T^{*}(s,2)$ with eigenvector $x$. By normalizing the entry $x_1=1$ of the top right vertex in Figure \ref{teta}, and applying the equation $Lx=\theta x$, we recursively find the entries of $x$ as given in the figure (we omit the technical details; note also that it easily follows that $x_1$ cannot be zero). But then finally for the bottom two right vertices (where we obtained $x_{n-1}=x_n=-1$) we should have the equation $x_n-x_{n-1}=\theta x_{n}$, which gives a contradiction. Next, consider the general case $T^{*}(s,k)$, and suppose that it has eigenvalue $\theta$ with multiplicity $k-1$, for $k\geq 3$. By interlacing it then follows that $T^{*}(s,k-1)$ has eigenvalue $\theta$ with multiplicity at least $k-2$, and by repeating this, we find that $T^{*}(s,2)$ has eigenvalue $\theta$ with multiplicity at least $1$, which is a contradiction, and hence confirms our claim. For even $n$, we can now take $s=\frac n2+j-4$ for $j \geq 3$, to obtain $m_{T^{*}}(\theta)=\frac n2-j$. For odd $n$, we take $s=\frac {n-1}2+j-3$ for $j \geq 3$, to obtain $m_{T^{*}}(\theta)=\frac {n-1}2-j$. 
Thus, for every $j \geq 3$ there are infinitely many trees with such multiplicities. In the given examples, we cannot take $j=2$. Indeed, for this case we can prove something stronger than in Theorem \ref{thm:n/2-k}, and obtain a result that is similar to Theorem \ref{km2}. Here we will show that there are unique trees of order $n$ with multiplicities $\frac{n}{2}-2$ and $\frac{n-1}{2}-2$ for an eigenvalue $\mu\neq 1$. \begin{thm}\label{n/2-2} Let $T$ be a tree of order $n\geq 8$ and $\mu \neq 1$ be a Laplacian eigenvalue of $T$. Then $m_T(\mu)= \frac{n}{2} -2$ if and only if $T\cong T(\frac{n}{2}, \frac{n}{2} -1)$ and $\mu=\frac{3\pm \sqrt{5}}{2}$. In particular, in this case $\mu$ is the algebraic connectivity of $T$ or its conjugate. \end{thm} \begin{proof} One side of the equivalence is clear from Proposition \ref{spectrum2}. To show the other side, we will first show by induction on $n$ (with $n$ even) the claim that if $m_T(\mu)= \frac{n}{2} -2$, then $\mu=\frac{3\pm \sqrt{5}}{2}$ and that $\mu_2(T)=\theta=\frac{3- \sqrt{5}}{2}$. We checked that this is true for $n=8$ by enumerating all trees of this order with the Sage computer package. For $n>8$, consider a tree $T$ with $m_T(\mu)= \frac{n}{2} -2$. Consider one of its extremal quasipendant vertices $v$ (in the sense that $v$ becomes pendant in the tree that is obtained from $T$ by removing all its pendant vertices). Now, remove the edge that connects $v$ to its (unique) non-pendant neighbor. The remaining graph is a disjoint union of a star (with center $v$) and a tree $T'$ on $t$ vertices, say. By interlacing (Lemma \ref{kame3}), it follows that $m_{T'}(\mu) \geq \frac n2-3$. But $m_{T'}(\mu) \leq \frac {t-3}2$ by Theorem \ref{km2}, so it follows that $t \geq n-3$. Because $n-t \geq 2$, we have two cases. If $t=n-3$, then $m_{T'}(\mu) = \frac {t-3}2$, and it follows from Theorem \ref{km2} that indeed $\mu=\frac{3\pm \sqrt{5}}{2}$ and $\mu_2(T')=\theta$, which implies by interlacing that $\mu_2(T)=\theta$. If $t=n-2$, then $m_{T'}(\mu) = \frac {t}2-2$, so by induction $\mu=\frac{3\pm \sqrt{5}}{2}$ and $\mu_2(T')=\theta$, but the latter (and interlacing) again implies that $\mu_2(T)=\theta$, which finishes the proof of our claim. Just like in the proof of Theorem \ref{thm:n/2-k}, we can now apply Proposition \ref{kho2} to finish the proof. \end{proof} We omit the proof of the following similar result, as the proof is very similar, although we also have to use Theorem \ref{n/2-2} now. \begin{thm}\label{n-1/2-2} Let $T$ be a tree of order $n\geq 9$ and $\mu \neq 1$ be a Laplacian eigenvalue of $T$. Then $m_T(\mu)= \frac{n-1}{2} -2$ if and only if $T\cong T(\frac{n-1}{2}+1, \frac{n-1}{2} -1)$ and $\mu=\frac{3\pm \sqrt{5}}{2}$. In particular, in this case $\mu$ is the algebraic connectivity of $T$ or its conjugate. \end{thm} We note that one could try to extend this result further, and indeed, the induction steps work over and over again, so induction would give a theorem about multiplicity $\lfloor \frac{n}{2} \rfloor -j$ for each $j \geq 2$, if only the bases of the induction steps would be true. Of course, this is where things go wrong. For example, we cannot prove a similar result for multiplicity $\frac{n}{2} -3$ and $n \geq 10$ because we have counterexamples such as $T^{*}(4,4)$ on $10$ vertices. Indeed, there are five such counterexamples, and we depict them in Figure \ref{5examples2}. 
Besides $T^{*}(4,4)$ (Fig.~\ref{5examples2}a), there are (exactly) two other trees of order $10$ that have eigenvalue $\theta$ with multiplicity $2$ and $\mu_2(T)<\theta$ (Fig.~\ref{5examples2}b and c). Moreover, there is one tree of order $10$ with eigenvalues $2 \pm \sqrt{3}$ with multiplicity $2$ (Fig.~\ref{5examples2}d), and one tree of order $10$ that has the roots of $x^3-5x^2+6x-1$ as eigenvalues with multiplicity $2$ (Fig.~\ref{5examples2}e). The variety of these examples shows that further classification of trees with an eigenvalue other than $1$ with multiplicity $\lfloor \frac{n}{2} \rfloor -j$ for $j \geq 3$ (on top of Lemma \ref{lem:n/2-k} and Theorem \ref{thm:n/2-k}) seems unfeasible. \begin{figure} \caption{Five counterexamples of order $10$} \label{5examples2} \end{figure} \noindent {\bf Acknowledgement}. The research of the first author was partly funded by Iran National Science Foundation (INSF) under the contract No. 96004167. Also, part of the work in this paper was done while the third author was visiting Tilburg University. \end{document}
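As an illustrative numerical cross-check (a sketch added here, not part of the paper's arguments; it assumes Python with numpy and networkx), one can construct the spiders $T(s,k)$ directly and read off the multiplicity of $\theta=\frac{3-\sqrt{5}}{2}$ predicted by Propositions \ref{spectrum1} and \ref{spectrum2}:
\begin{verbatim}
import numpy as np
import networkx as nx

def spider(s, k):
    """Star K_{1,s} (center 0, leaves 1..s) with k of its rays extended by one edge."""
    G = nx.star_graph(s)              # vertices 0..s, center 0
    for i in range(1, k + 1):
        G.add_edge(i, s + i)          # extend ray i by a pendant vertex
    return G

def multiplicity(G, mu, tol=1e-8):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    eig = np.linalg.eigvalsh(L)
    return int(np.sum(np.abs(eig - mu) < tol))

theta = (3 - np.sqrt(5)) / 2
for s, k in [(5, 5), (7, 3), (9, 9)]:
    T = spider(s, k)
    n = T.number_of_nodes()           # n = s + k + 1
    print(s, k, n, multiplicity(T, theta))   # expected multiplicity: k - 1
\end{verbatim}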
\begin{document} \subjclass[2010]{35K20, 35K61, 35K92\vspace{1mm}} \keywords{Regularity, parabolic equations, $(p,q)$-growth\vspace{1mm}} \thanks{{\it Acknowledgements.}\ This work is supported by the Engineering and Physical Sciences Research Council (EPSRC): CDT Grant Ref. EP/L015811/1.\vspace{1mm}} \maketitle \begin{abstract} We provide quantitative gradient bounds for solutions to certain parabolic equations with unbalanced polynomial growth and non-smooth coefficients. \end{abstract}\vspace{3mm} {\small \tableofcontents} \setlinespacing{1.08}
\section{Introduction} We focus on the Cauchy-Dirichlet problem \begin{flalign}\label{pdd} \begin{cases} \ \partial_{t}u-\diver \ a(x,t,Du)=0\quad &\mbox{in} \ \ \Omega_{T}\\ \ u=f\quad &\mbox{on} \ \ \partial_{par}\Omega_{T}, \end{cases} \end{flalign} with initial-boundary datum $f\colon \mathbb{R}^{n+1}\to \mathbb{R}$ as in \eqref{ggg} below and nonlinear diffusive tensor $a(\cdot)$ featuring $(p,q)$-growth conditions as displayed in \eqref{ref}. The main novelties here are twofold: the map $x\mapsto a(x,t,z)$ is only Sobolev-differentiable, in the sense that \begin{flalign*} \snr{\partial_{x}a(x,t,z)}\le \gamma(x,t)\left[(\mu^{2}+\snr{z}^{2})^{\frac{p-1}{2}}+(\mu^{2}+\snr{z}^{2})^{\frac{q-1}{2}}\right], \end{flalign*} where $\gamma$ possesses a suitably high degree of integrability, cf. \eqref{gamma}. Moreover, we can treat in a single shot both the degenerate case $p\ge 2$ and the singular one $p<2$, allowing also for the case $\mu=0$. Precisely, we prove the following.
\begin{theorem}\label{t1} If assumptions \eqref{regg}-\eqref{ggg} are satisfied, the Cauchy-Dirichlet problem \eqref{pdd} admits a solution $u\in L^{p}(0,T;W^{1,p}(\Omega))$ such that \begin{flalign}\label{t1.1} Du\in L^{\infty}_{\operatorname{loc}}(\Omega_{T},\mathbb{R}^{n}),\qquad V_{\mu,p}(Du)\in L^{2}_{\operatorname{loc}}(0,T;W^{1,2}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n})) \end{flalign} and \begin{flalign}\label{t1.4} u\in W^{\iota,2}_{\operatorname{loc}}(0,T;L^{2}_{\operatorname{loc}}(\Omega))\quad \mbox{for all} \ \ \iota\in \left(0,\frac{1}{2}\right). \end{flalign} In particular, if $Q_{\varrho}\Subset \Omega_{T}$ is any parabolic cylinder, there holds that \begin{flalign}\label{t1.5} \nr{H(Du)}_{L^{\infty}(Q_{\varrho/2})}\le \frac{c}{\varrho^{\beta_{1}}}\left[1+\left(\mathop{\int\hskip -1.05em -\,\!\!\!}\nolimits_{Q_{\varrho}}H(Du)^{\frac{p}{2}} \ \,{\rm d}y\right)^{\beta_{2}}\right], \end{flalign} with $c\equiv c(\texttt{data})$ and $\beta_{1},\beta_{2}\equiv \beta_{1},\beta_{2}(n,p,q,d)$. \end{theorem} We refer to Sections \ref{not}-\ref{ma} for a detailed description of the various quantities involved in the previous statement.
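For orientation (a purely illustrative instance, not needed for the proofs), a concrete reading of the admissible range in \eqref{pq}-\eqref{gamma} is the following: take $n=2$, $p=2$ and $d=8$; then $d>\max\{p/2,1\}(n+2)=4$, the lower bound $p>\frac{2nd}{(n+2)(d-2)}=\frac{4}{3}$ holds, and \eqref{pq} allows any
\begin{equation*}
q<p+2\left(\frac{1}{n+2}-\frac{p}{2d}\right)=2+2\left(\frac{1}{4}-\frac{1}{8}\right)=\frac{9}{4},
\end{equation*}
while letting $d\to\infty$ in the same formula relaxes the restriction to $q<p+\frac{2}{n+2}=\frac{5}{2}$.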
Our analysis includes equations with double phase structure, such as \begin{equation}gin{flalign*} &\partial_{t}u-\diver\left(\snr{Du}^{p-2}Du+b(x,t)\snr{Du}^{q-2}Du\right)=0\quad \mbox{in} \ \ \Omega_{T}\\ &b\in L^{\infty}(\Omega_{T})\ \ \mbox{with}\ \ \partial_{x}b\in L^{d}(\Omega_{T}); \mathbb Nd{flalign*} or equations with variable exponent: \begin{equation}gin{flalign*} &\partial_{t}u-\diver\left(\snr{Du}^{p(x,t)-2}Du\right)=0\quad \mbox{in} \ \ \Omega_{T}\\ &p\in L^{\infty}(\Omega_{T})\ \ \mbox{with}\ \ \partial_{x}p\in L^{d}(\Omega_{T}); \mathbb Nd{flalign*} and also anisotropic equations like \begin{equation}gin{flalign*} &\partial_{t}u-\left[\diver(\snr{Du}^{p-2}Du)+\sum_{i=1}^{n}\partial_{x_{i}}\left((\mu^{2}+\snr{\partial_{x_{i}}u}^{2})^{\Phirac{p_{i}-2}{2}}\partial_{x_{i}}u\right)\right]\\ &1<p\le p_{i}<\infty \ \ \mbox{for all} \ \ i\in \{1,\cdots,n\}, \mathbb Nd{flalign*} where $(p,q)$, $(\inf_{\Omega_{T}}p,\sup_{\Omega_{T}}p)$, $\left(p,\max_{i\in \{1,\cdots,n\}}p_{i}\right)$ satisfy \eqref{pq} and $d$ is described by $\eqref{gamma}_{2}$. To the best of our knowledge, the result stated in Theorem \ref{t1} is new already in the standard growth case $p=q$. This fact poses additional difficulties due to the lack of informations on the regularity of solutions to \eqref{pdd} when $a(\cdot)$ has balanced polynomial growth. To overcome this issue, we proceed in two steps: first, we prove an higher integrability result for solutions to a regularized version of problem \eqref{pdd}. Then we use it to construct a sequence of maps satisfying suitable uniform estimates and converging to a solution of \eqref{pdd}. For the sake of simplicity, Theorem \ref{t1} is proved in the scalar case, but all our arguments can be adapted in a straightforward way to the vectorial setting as well, provided that $a(\cdot)$ has radial structure. Let us put our result in the context of the available literature. The systematic study of problem \begin{equation}gin{flalign}\label{elc} \begin{equation}gin{cases} \ -\diver \ a(x,Du)=0\quad &\mbox{in} \ \ \Omega\\ \ u=f\quad &\mbox{on} \ \ \partial\Omega \mathbb Nd{cases} \mathbb Nd{flalign} i.e., the elliptic counterpart of \eqref{pdd} started in \cite{ma2,ma3,ma1} and, subsequently, has undergone an intensive development over the last years, see for instance \cite{bacomi,bemi,besc,besc1,bobr,dm1,dm,deoh,dima,sharp,haok,lieb} and references therein. As suggested by the counterexamples contained in \cite{sharp,ma2}, already in the elliptic setting the regularity of solution to \eqref{elc} is strongly linked to the closeness of the exponents $(p,q)$ ruling the growth of the vector field $a(\cdot)$. Precisely, it turns out that \begin{equation}gin{flalign}\label{pq.2} 1\le\Phirac{q}{p}<1+\mathcal{M}(\texttt{problem's data}), \mathbb Nd{flalign} where $\mathcal{M}(\cdot)$ is in general a bounded function connecting the various informations given \emph{a priori} about solutions. In this respect, we refer to \cite{bacomi} for an idea on the subtle yet quantifiable interplay between the regularity of solutions and the main parameters of the problem and to \cite{bemi,dm1,dm}, where is shown that, as long as $p$ and $q$ stay close to each other, problems with $(p,q)$-growth can be interpreted as perturbations of problems having standard $p$-growth. In the parabolic setting, the regularity for solutions of \eqref{pdd} is very well understood when $a(\cdot)$ is modelled upon the parabolic $p$-laplacian, see e.g. 
\cite{d,dumi1,dumi,km2,km3} for an overview of the state of the art on this matter and \cite{ba,ba1}, where more general structures are analyzed. Finally, the question of existence of regular solutions of \eqref{pdd} when the nonlinear tensor $a(\cdot)$ has unbalanced polynomial growth was treated in \cite{bdm,bdm1,si,s}. The theory exposed in these papers confirms that, as in the elliptic case, a restriction like \eqref{pq.2} on the ratio $q/p$ suffices to prove existence of regular solutions to \eqref{pdd}. Actually, the function $\mathcal{M}(\cdot)$ is worsen for parabolic equations than for elliptic ones, due to the so-called phenomenon of caloric deficit, originated from the difference of scaling in space and time, see e.g. \cite{bdm,s}, in which $\mathcal{M}(\cdot)$ is quantified as a function of $n$ and $p$. In our case, $\mathcal{M}(\cdot)$ has to take into account also the integrability exponent of $\gamma$, therefore it depends on $n,p,d$ and, reversing the process of caloric deficit, it renders precisely the bound for elliptic equations with Sobolev-differentiable coefficients appearing in \cite{dm1,dm,elemarmas}. \subsubsection*{Organization of the paper} This paper is organized as follows: in Section \ref{pre} contains our notation, the list of the assumptions which will rule problem \eqref{pdd} and several by now classical tools in the framework of regularity theory for elliptic and parabolic PDE. Sections \ref{high}-\ref{mose} are devoted to the proof of Proposition \ref{p1} and Theorem \ref{t1} respectively. \section{Preliminaries}\label{pre} In this section we display the notation adopted throughout the paper and list some well-known result which will be helpful in the various proofs presented. \subsection{Notation}\label{not} In this paper, $\Omega_{T}:=\Omega\times (0,T)$ is a space-time cylinder over an open, bounded domain $\Omega\subset \mathbb{R}^{n}$, $n\ge 2$ with $C^{1}$-boundary. If $\tilde{\Omega}\subseteq \Omega$ and $t_{0}\in [0,T]$, by $\Omega_{t_{0}}$ we mean the subcylinder $\Omega\times (0,t_{0})\subseteq \Omega_{T}$. Clearly, when $t_{0}=0$, $\Omega_{0}\equiv \Omega$. We denote by $B_{\varrho}(x_{0}):=\left\{x\in \mathbb{R}^{n}\colon \snr{x-x_{0}}<\varrho\right\}$ the $n$-dimensional open ball centered at $x_{0}\in \mathbb{R}^{n}$ with radius $\varrho>0$. When working in the parabolic setting it is convenient to consider parabolic cylinders \begin{equation}gin{flalign*} Q_{\varrho}(y_{0}):=B_{\varrho}(x_{0})\times (t_{0}-\varrho^{2},t_{0})\quad \mbox{where} \ \ y_{0}:=(x_{0},t_{0})\in \mathbb{R}^{n+1}, \mathbb Nd{flalign*} i.e., balls in the parabolic metric. With "$y$" we shall always denote the couple $(x,t)\in \Omega_{T}$. Very often, when not otherwise stated, different balls (or cylinders) in the same context will share the same center. Given any differentiable map $G: \Omega\times \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}$, with $\partial_{z}G(x,t,z)$ we mean the derivative of $G(\cdot)$ with respect to the $z$ variable, by $\partial_{t}G(x,t,z)$ the derivative in the time variable $t$ and by $\partial_{x}G(x,t,z)$ the derivative of $G$ with respect to the space variable $x$. We name "$c$" a general constant larger than one. Different occurrences from line to line will be still denoted by $c$, while special occurrences will be denoted by $c_1, c_2, \tilde c$ and so on. Relevant dependencies on parameters will be emphasized using parentheses, i.e., $c_{1}\equiv c_1(n,p)$ means that $c_1$ depends on $n,p$. 
For the sake of clarity, we shall adopt the shorthand notation \begin{equation}gin{flalign*} \texttt{data}:=\left(n,\nu,L,p,q,d,\nr{\gamma}_{L^{d}(\Omega_{T})}\right). \mathbb Nd{flalign*} In most of the inequalities appearing in the proof of our results we will use the symbols "$\lesssim$" or "$G_{T}rsim$", meaning that the inequalities hold up to constants depending from some (or all) the parameters collected in $\texttt{data}$. We refer to Section \ref{ma} for more details on the quantities appearing in the expansion of $\texttt{data}$. \subsection{Main assumptions}\label{ma} When dealing with the Cauchy-Dirichlet problem \eqref{pdd}, we assume that the nonlinear tensor $a\colon \Omega_{T}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$ satisfies: \begin{equation}gin{flalign}\label{regg} \begin{equation}gin{cases} \ t\mapsto a(x,t,z)\quad &\mbox{measurable for all} \ \ x\in \Omega, z\in \mathbb{R}^{n}\\ \ x\mapsto a(x,t,z)\quad &\mbox{differentiable for all} \ \ t\in (0,T),z\in \mathbb{R}^{n}\\ \ z\mapsto a(x,t,z)\in C(\mathbb{R}^{n},\mathbb{R}^{n})\cap C^{1}(\mathbb{R}^{n}\setminus \{0\},\mathbb{R}^{n})\quad &\mbox{for all} \ \ (x,t)\in \Omega_{T} \mathbb Nd{cases} \mathbb Nd{flalign} and \begin{equation}gin{flalign}\label{ref} \begin{equation}gin{cases} \ \snr{a(x,t,z)}+(\mu^{2}+\snr{z}^{2})^{\Phirac{1}{2}}\snr{\partial_{z}a(x,t,z)}\le L\left[(\mu^{2}+\snr{z}^{2})^{\Phirac{p-1}{2}}+(\mu^{2}+\snr{z}^{2})^{\Phirac{q-1}{2}}\right]\\ \ \left[\partial_{z}a(x,t,z)\xi\cdot \xi\right]\ge \nu(\mu^{2}+\snr{z}^{2})^{\Phirac{p-2}{2}}\snr{\xi}^{2}\\ \ \snr{\partial_{x}a(x,t,z)}\le \gamma(x,t) \left[(\mu^{2}+\snr{z}^{2})^{\Phirac{p-1}{2}}+(\mu^{2}+\snr{z}^{2})^{\Phirac{q-1}{2}}\right], \mathbb Nd{cases} \mathbb Nd{flalign} which holds for all $(x,t)\in \Omega_{T}$ and $z,\xi\in \mathbb{R}^{n}$. In \eqref{ref}, $\mu\in [0,1]$ is any number, exponents $(p,q)$ are so that \begin{equation}gin{flalign}\label{pq} q<p+2\left(\Phirac{1}{n+2}-\Phirac{p}{2d}\right)\qquad\mbox{with}\qquad p> \Phirac{2nd}{(n+2)(d-2)} \mathbb Nd{flalign} and \begin{equation}gin{flalign}\label{gamma} \gamma\in L^{d}(\Omega_{T}) \ \ \mbox{for some} \ \ d>\max\left\{\Phirac{p}{2},1\right\}(n+2). \mathbb Nd{flalign} Finally, the function $f\colon \mathbb{R}^{n}\times \mathbb{R}\to \mathbb{R}$ satisfies \begin{equation}gin{flalign}\label{ggg} f\in C_{\operatorname{loc}}(\mathbb{R};L^{2}_{\operatorname{loc}}(\mathbb{R}^{n}))\cap L^{r}_{\operatorname{loc}}(\mathbb{R};W^{1,r}_{\operatorname{loc}}(\mathbb{R}^{n})),\quad \partial_{t}f\in L^{p'}_{\operatorname{loc}}(\mathbb{R};W^{-1,p}_{\operatorname{loc}}(\mathbb{R}^{n})), \mathbb Nd{flalign} where $r:=p'(q-1)$. In this setting, we define a weak solution to \eqref{pdd} as follows. \begin{equation}gin{definition}\label{d.1} A function $u\in f+L^{p}(0,T;W^{1,p}_{0}(\Omega))$ is a weak solution of problem \eqref{pdd} if and only if the identity \begin{equation}gin{flalign}\label{wfp} \int_{\Omega_{T}}\left[u\partial_{t}\varphi-a(x,t,Du)\cdot D\varphi\right] \ \,{\rm d}y=0 \mathbb Nd{flalign} holds true for all $\varphi\in C^{\infty}_{c}(\Omega_{T})$ and, in addition, $u(\cdot,0)=f(\cdot,0)$ in the $L^{2}$-sense, i.e.: \begin{equation}gin{flalign}\label{bf} \lim_{\partialta\to 0}\Phirac{1}{\partialta}\int_{0}^{\partialta}\int_{\Omega}\snr{u(x,s)-f(x,0)}^{2} \ \,{\rm d}x\,{\rm d}s=0. 
\mathbb Nd{flalign} \mathbb Nd{definition} \begin{equation}gin{remark} \emph{Let us compare the bound in \eqref{pq} with the one in force in the elliptic setting, i.e.: \begin{equation}gin{flalign}\label{epq} q<p+p\left(\Phirac{1}{n}-\Phirac{1}{d}\right), \mathbb Nd{flalign} see \cite{dm1,dm,elemarmas}. The restriction imposed in \eqref{pq} looks the right one: in fact, due to the different scaling in time, in \eqref{epq} $n$ must be replaced by $n+2$. Moreover, the usual parabolic deficit coming from the growth of the diffusive part of the equation affects also $d$:} \begin{equation}gin{flalign*} q<p+p\left(\Phirac{1}{n+2}-\left(d\cdot \Phirac{2}{p}\right)^{-1}\right)\cdot \Phirac{2}{p}. \mathbb Nd{flalign*} \emph{If we let $d\to \infty$ in \eqref{pq} and reverse the transformation prescribed by the caloric deficit phenomenon, we obtain} \begin{equation}gin{flalign*} q<p+\Phirac{p}{n}, \mathbb Nd{flalign*} \emph{which is the same appearing in \cite{sharp} when the space-depending coefficient is Lipschitz-continuous.} \mathbb Nd{remark} \subsection{Auxiliary results} In this section we collect some well-known facts that will have an important role throughout the paper. \subsubsection*{On Sobolev functions} Let $w\in L^{1}(\Omega_{T},\mathbb{R}^{k})$, $k\ge 1$ be any function. If $h \in \mathbb{R}^{n}$ is a vector, we denote by $a_{j}u_{h}\colon L^{1}(\Omega_{T},\mathbb{R}^{k})\to L^{1}(\Omega_{\snr{h}}\times (0,T),\mathbb{R}^{k})$ the standard finite difference operator in space, pointwise defined as \begin{equation}gin{flalign*} a_{j}u_{h}w(x):=w(x+h,t)-w(x,t) \quad \mbox{for a.e.} \ (x,t)\in \Omega_{\snr{h}}\times(0,T), \mathbb Nd{flalign*} where $\Omega_{|h|}:=\{x \in \Omega \, : \, \,{\rm dist}(x, \partial \Omega) > |h|\}$ and by $\mathrm{d}i\colon L^1(\Omega_{T},\mathbb{\mathbb R}^{k}) \to L^{1}(\Omega_{|h|}\times (0,T),\mathbb{R}^{k})$ the spacial difference quotient operator, i.e.: \begin{equation}gin{flalign*} \mathrm{d}i w(x,t):=\Phirac{w(x+h,t)-w(x,t)}{\snr{h}}=\snr{h}^{-1}(a_{j}u_{h}w(x,t)). \mathbb Nd{flalign*} Moreover, if $\tilde{h}\in \mathbb{R}$ is a number so that $\snr{h}<T$, we also recall the definition of finite difference operator in time $\tilde{a_{j}u}_{\tilde{h}}\colon L^{1}(\Omega_{T})\to L^{1}(\Omega\times (\snr{\tilde{h}},T-\snr{\tilde{h}}))$: \begin{equation}gin{flalign*} \tilde{a_{j}u}_{\tilde{h}}w(x,t):=w(x,t+h)-w(x,t). \mathbb Nd{flalign*} An important property of translation operators is their continuity in Lebesgue spaces. \begin{equation}gin{lemma}\label{transc} Let $\varphi\in C^{\infty}_{c}(\Omega)$ be any map, $h\in \mathbb{R}^{n}$ so that $\snr{h}\in \left(0,\Phirac{\,{\rm dist}(\,{\rm supp}(\varphi),\partial \Omega)}{4}\right)$ and $w\in L^{s}_{\operatorname{loc}}(\Omega_{T},\mathbb{R}^{k})$ with $s\in [1,\infty)$ and $k\in \mathbb{N}$. Then \begin{equation}gin{flalign*} \nr{(w(\ \cdot\ +h,t)-w(\cdot,t))\varphi}_{L^{s}(\Omega)}\to_{\snr{h}\to 0}0. \mathbb Nd{flalign*} \mathbb Nd{lemma} It is also useful to recall a basic property of difference quotient. \begin{equation}gin{lemma}\label{diffquo} Let $w\in L^{1}_{\operatorname{loc}}(\Omega_{T})$ be any function. 
\subsection{Auxiliary results} In this section we collect some well-known facts that will play an important role throughout the paper. \subsubsection*{On Sobolev functions} Let $w\in L^{1}(\Omega_{T},\mathbb{R}^{k})$, $k\ge 1$, be any function. If $h \in \mathbb{R}^{n}$ is a vector, we denote by $a_{j}u_{h}\colon L^{1}(\Omega_{T},\mathbb{R}^{k})\to L^{1}(\Omega_{\snr{h}}\times (0,T),\mathbb{R}^{k})$ the standard finite difference operator in space, pointwise defined as \begin{flalign*} a_{j}u_{h}w(x,t):=w(x+h,t)-w(x,t) \quad \mbox{for a.e.} \ (x,t)\in \Omega_{\snr{h}}\times(0,T), \end{flalign*} where $\Omega_{|h|}:=\{x \in \Omega \, : \, \,{\rm dist}(x, \partial \Omega) > |h|\}$, and by $\mathrm{d}i\colon L^1(\Omega_{T},\mathbb{R}^{k}) \to L^{1}(\Omega_{|h|}\times (0,T),\mathbb{R}^{k})$ the spatial difference quotient operator, i.e.: \begin{flalign*} \mathrm{d}i w(x,t):=\frac{w(x+h,t)-w(x,t)}{\snr{h}}=\snr{h}^{-1}(a_{j}u_{h}w(x,t)). \end{flalign*} Moreover, if $\tilde{h}\in \mathbb{R}$ is a number so that $\snr{\tilde{h}}<T$, we also recall the definition of the finite difference operator in time $\tilde{a_{j}u}_{\tilde{h}}\colon L^{1}(\Omega_{T})\to L^{1}(\Omega\times (\snr{\tilde{h}},T-\snr{\tilde{h}}))$: \begin{flalign*} \tilde{a_{j}u}_{\tilde{h}}w(x,t):=w(x,t+\tilde{h})-w(x,t). \end{flalign*} An important property of translation operators is their continuity in Lebesgue spaces. \begin{lemma}\label{transc} Let $\varphi\in C^{\infty}_{c}(\Omega)$ be any map, $h\in \mathbb{R}^{n}$ so that $\snr{h}\in \left(0,\frac{\,{\rm dist}(\,{\rm supp}(\varphi),\partial \Omega)}{4}\right)$ and $w\in L^{s}_{\operatorname{loc}}(\Omega_{T},\mathbb{R}^{k})$ with $s\in [1,\infty)$ and $k\in \mathbb{N}$. Then \begin{flalign*} \nr{(w(\ \cdot\ +h,t)-w(\cdot,t))\varphi}_{L^{s}(\Omega)}\to_{\snr{h}\to 0}0. \end{flalign*} \end{lemma} It is also useful to recall a basic property of difference quotients. \begin{lemma}\label{diffquo} Let $w\in L^{1}_{\operatorname{loc}}(\Omega_{T})$ be any function. There holds that \begin{itemize} \item if $w\in L^{s}_{\operatorname{loc}}(0,T;W^{1,s}_{\operatorname{loc}}(\Omega,\mathbb{R}^{k}))$, $s\in [1,\infty)$ and $\tilde{\Omega}\Subset \Omega$ is any open set, then \begin{flalign*} \nr{\mathrm{d}i w(\cdot,t)-Dw(\cdot,t)}_{L^{s}(\tilde{\Omega})}\to_{\snr{h}\to 0}0; \end{flalign*} \item if in addition $s>1$ and $\tilde{\Omega}\Subset \Omega$ is any open set so that $$\sup_{\snr{h}>0}\int_{0}^{T}\int_{\tilde{\Omega}}\snr{\mathrm{d}i w(x,t)}^{s} \ \,{\rm d}x\,{\rm d}t<\infty,$$ then $Dw\in L^{s}(\ti{\Omega}\times (0,T))$ and $\nr{\mathrm{d}i w(\cdot,t)-Dw(\cdot,t)}_{L^{s}(\tilde{\Omega})}\to_{\snr{h}\to 0}0$. \end{itemize} \end{lemma} When dealing with parabolic PDEs, solutions in general possess only a modest degree of regularity in the time variable and, in particular, time derivatives exist only in the distributional sense. For this reason, we recall the definition and main properties of Steklov averages, see e.g. \cite[Chapter 1]{d}. \begin{definition} Let $w\in L^{1}(\Omega_{T},\mathbb{R}^{k})$, $k\in \mathbb{N}$, be any function. For $\delta\in (0,T)$, the Steklov averages of $w$ are defined as \begin{flalign*} w_{\delta}:=\begin{cases} \ \frac{1}{\delta}\int_{t}^{t+\delta}w(x,s) \ \,{\rm d}s\quad &t\in (0,T-\delta]\\ \ 0\quad &t>T-\delta \end{cases}\quad \mbox{and}\quad w_{\bar{\delta}}:=\begin{cases} \ \frac{1}{\delta}\int_{t-\delta}^{t}w(x,s) \ \,{\rm d}s\quad &t\in (\delta,T]\\ \ 0\quad &t<\delta. \end{cases} \end{flalign*} \end{definition} \begin{lemma} If $w\in L^{s}_{\operatorname{loc}}(\Omega_{T})$, then $w_{\delta}\to_{\delta\to 0}w$ in $L^{s}_{\operatorname{loc}}(\Omega_{T-\varepsilon})$ for all $\varepsilon\in (0,T)$. If $w\in C(0,T; L^{s}(\Omega))$, then, as $\delta\to 0$, $w_{\delta}(\cdot,t)$ converges to $w(\cdot,t)$ in $L^{s}(\Omega)$ for all $t\in (0,T-\varepsilon)$ and all $\varepsilon\in (0,T)$. A similar statement holds for $w_{\bar{\delta}}$ as well. \end{lemma} We also record the definition of fractional Sobolev spaces. \begin{definition} A function $w\in L^{s}(\Omega_{T},\mathbb{R}^{k})$ belongs to the fractional Sobolev space $W^{\alpha,\theta;s}(\Omega_{T},\mathbb{R}^{k})$, $\alpha,\theta\in (0,1)$, $k\in \mathbb{N}$, provided that \begin{flalign*} \int_{0}^{T}\int_{\Omega}\int_{\Omega}\frac{\snr{w(x_{1},t)-w(x_{2},t)}^{s}}{\snr{x_{1}-x_{2}}^{n+s\alpha}} \ \,{\rm d}x_{1}\,{\rm d}x_{2}\,{\rm d}t+\int_{0}^{T}\int_{0}^{T}\int_{\Omega}\frac{\snr{w(x,t_{1})-w(x,t_{2})}^{s}}{\snr{t_{1}-t_{2}}^{1+s\theta}} \ \,{\rm d}x\,{\rm d}t_{1}\,{\rm d}t_{2}<\infty. \end{flalign*} The local variant of $W^{\alpha,\theta;s}(\Omega_{T},\mathbb{R}^{k})$ can be defined in the usual way. \end{definition} The usual relation between Nikolskii spaces and fractional Sobolev spaces holds in the parabolic setting as well. \begin{proposition}\label{fracsob} Let $w\in L^{s}(\Omega_{T},\mathbb{R}^{k})$, $(t_{1},t_{2})\Subset (0,T)$, $\tilde{\Omega}\Subset \Omega$ be an open set, $h\in \mathbb{R}^{n}$ be any vector with $\snr{h}<\frac{\,{\rm dist}(\tilde{\Omega},\partial \Omega)}{4}$ and $\tilde{h}\in \mathbb{R}$ be a number so that $\snr{\ti{h}}<\frac{\min\{t_{1},T-t_{2}\}}{4}$.
Assume that \begin{flalign*} \int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\snr{w(x,t+\tilde{h})-w(x,t)}^{s} \ \,{\rm d}x\,{\rm d}t\le c'\snr{\tilde{h}}^{s\theta}\quad \mbox{for some} \ \ \theta \in (0,1), \end{flalign*} where $c'$ is a positive, absolute constant. Then there exists a constant $\tilde{c}\equiv \tilde{c}(n,s,c',\iota,t_{1},T-t_{2})>0$ such that \begin{flalign*} \int_{t_{1}}^{t_{2}}\int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\frac{\snr{w(x,l_{1})-w(x,l_{2})}^{s}}{\snr{l_{1}-l_{2}}^{1+s\iota}} \ \,{\rm d}x\,{\rm d}l_{1}\,{\rm d}l_{2}\le \tilde{c}<\infty\quad \mbox{for all} \ \ \iota \in (0,\theta). \end{flalign*} Suppose that \begin{flalign*} \int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\snr{w(x+h,t)-w(x,t)}^{s} \ \,{\rm d}x\,{\rm d}t \le c'\snr{h}^{s\alpha}\quad \mbox{for some} \ \ \alpha\in (0,1), \end{flalign*} with $c'$ a positive, absolute constant. Then, \begin{flalign*} \int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\int_{\tilde{\Omega}}\frac{\snr{w(x_{1},t)-w(x_{2},t)}^{s}}{\snr{x_{1}-x_{2}}^{n+s\gamma}} \ \,{\rm d}x_{1}\,{\rm d}x_{2}\,{\rm d}t\le \tilde{c}<\infty\quad \mbox{for all} \ \ \gamma\in (0,\alpha), \end{flalign*} with $\tilde{c}\equiv \tilde{c}(n,s,c',\gamma,\,{\rm dist}(\ti{\Omega},\partial \Omega))$. \end{proposition} We refer to \cite{akm,pala,dumi1,lsu} for more details on this matter. We close this part with a fundamental compactness criterion in parabolic Sobolev spaces, whose proof can be found in \cite{sim}. \begin{lemma}\label{al} Let $X\subset B\subset Y$ be three Banach spaces such that the immersion $X\hookrightarrow B$ is compact, and let $1\le a_{1}\le a_{2}\le \infty$ be numbers satisfying the balance condition $a_{1}>a_{2}/(1+\sigma a_{2})$ for some $\sigma\in (0,1)$. If the set $\mathcal{J}$ is bounded in $L^{a_{2}}(0,T;X)\cap W^{\sigma,a_{1}}(0,T;Y)$, then $\mathcal{J}$ is compact in $L^{a_{2}}(0,T;B)$ and, when $a_{2}=\infty$, in $C(0,T;B)$. \end{lemma} \subsubsection*{Tools for $p$-laplacean type problems} For a constant $\tilde{c}\in [0,1]$ and $z\in \mathbb{R}^{n}$ we introduce the auxiliary vector field \begin{flalign*} V_{\tilde{c},s}(z):=(\tilde{c}^{2}+\snr{z}^{2})^{\frac{s-2}{4}}z\qquad s\in \{p,q\}, \end{flalign*} which turns out to be very convenient in handling the monotonicity properties of certain operators. \begin{lemma}\label{l1} For any given $z_{1},z_{2}\in \mathbb{R}^{n}$, $z_{1}\not=z_{2}$, there holds that \begin{flalign*} \snr{V_{\tilde{c},s}(z_{1})-V_{\tilde{c},s}(z_{2})}^{2}\sim (\tilde{c}^{2}+\snr{z_{1}}^{2}+\snr{z_{2}}^{2})^{\frac{s-2}{2}}\snr{z_{1}-z_{2}}^{2}, \end{flalign*} where the constants implicit in "$\sim$" depend only on $(n,s)$. \end{lemma} Another useful result is the following. \begin{lemma}\label{l6} Let $s>-1$, $\tilde{c}\in [0,1]$ and $z_{1},z_{2}\in \mathbb{R}^{n}$ be so that $\tilde{c}+\snr{z_{1}}+\snr{z_{2}}>0$. Then \begin{flalign*} \int_{0}^{1}\left[\tilde{c}^{2}+\snr{z_{1}+\lambda(z_{2}-z_{1})}^{2}\right]^{\frac{s}{2}} \ \d\lambda\sim (\tilde{c}^{2}+\snr{z_{1}}^{2}+\snr{z_{2}}^{2})^{\frac{s}{2}}, \end{flalign*} with the constants implicit in "$\sim$" depending only on $s$. \end{lemma} Finally, we recall the classical iteration lemma. \begin{lemma}\label{l0} Let $\mathcal{Z}\colon [\varrho,R)\to [0,\infty)$ be a function which is bounded on every interval $[\varrho, R_*]$ with $R_*<R$.
Let $\varepsilon\in (0,1)$ and $a_1,a_2,\gamma_{1},\gamma_{2}\ge 0$ be numbers. If \begin{flalign*} \mathcal{Z}(a_{j}u_1)\le \varepsilon \mathcal{Z}(a_{j}u_2)+ \frac{a_1}{(a_{j}u_2-a_{j}u_1)^{\gamma_{1}}}+\frac{a_2}{(a_{j}u_2-a_{j}u_1)^{\gamma_{2}}}\ \ \mbox{for all} \ \varrho\le a_{j}u_1<a_{j}u_2< R\;, \end{flalign*} then \begin{flalign*} \mathcal{Z}(\varrho)\le c\left[\frac{a_1}{(R-\varrho)^{\gamma_{1}}}+\frac{a_2}{(R-\varrho)^{\gamma_{2}}}\right]\;, \end{flalign*} holds with $c\equiv c(\varepsilon,\gamma_{1},\gamma_{2})$. \end{lemma} \section{Higher Sobolev regularity for non-degenerate systems}\label{high} In this section we prove the existence of a suitably regular weak solution to the Cauchy-Dirichlet problem \begin{flalign}\label{pdd+} \begin{cases} \ \partial_{t}v-\diver\ \tilde{a}(x,t,Dv)=0\quad &\mbox{in} \ \ \Omega_{T}\\ \ v=f\quad &\mbox{on} \ \ \partial_{par}\Omega_{T}, \end{cases} \end{flalign} where $f$ is as in \eqref{ggg} and the diffusive tensor $\tilde{a}\colon \Omega_{T}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$ satisfies \begin{flalign}\label{regg+} \begin{cases} \ t\mapsto \tilde{a}(x,t,z)\quad &\mbox{measurable for all} \ \ x\in \Omega, z\in \mathbb{R}^{n}\\ \ x\mapsto \tilde{a}(x,t,z)\quad &\mbox{differentiable for all} \ \ t\in (0,T),z\in \mathbb{R}^{n}\\ \ z\mapsto \tilde{a}(x,t,z)\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\quad &\mbox{for all} \ \ (x,t)\in \Omega_{T} \end{cases} \end{flalign} and \begin{flalign}\label{ref+} \begin{cases} \ \snr{\tilde{a}(x,t,z)}+(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{1}{2}}\snr{\partial_{z}\ti{a}(x,t,z)}\le L\left[(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{p-1}{2}}+(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{q-1}{2}}\right]\\ \ \left[\partial_{z}\ti{a}(x,t,z)\xi\cdot \xi\right]\ge \nu(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{p-2}{2}}\snr{\xi}^{2}\\ \ \snr{\partial_{x}\ti{a}(x,t,z)}\le \gamma(x,t) \left[(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{p-1}{2}}+(\ti{\mu}^{2}+\snr{z}^{2})^{\frac{q-1}{2}}\right], \end{cases} \end{flalign} for all $(x,t)\in \Omega_{T}$ and $z,\xi\in \mathbb{R}^{n}$. In \eqref{ref+}, $(p,q)$ are linked by the relation in \eqref{pq}, $\gamma$ is as in \eqref{gamma} and \begin{flalign}\label{extra} \ti{\mu}>0. \end{flalign} Our first result is the following. \begin{proposition}\label{p1} Let $f\colon \mathbb{R}^{n}\times \mathbb{R}\to \mathbb{R}$ be as in \eqref{ggg} and let $\ti{a}\colon \Omega_{T}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$ be a Carath\'eodory vector field satisfying \eqref{regg+}, \eqref{ref+}, \eqref{pq}, \eqref{gamma} and \eqref{extra}. Then there exists a weak solution $v\in L^{p}(0,T;W^{1,p}(\Omega))$ of the Cauchy-Dirichlet problem \eqref{pdd+} such that \begin{flalign}\label{sss} v\in L^{s}_{\operatorname{loc}}(0,T;W^{1,s}_{\operatorname{loc}}(\Omega))\quad \mbox{for all} \ \ s\in \left[1,p+\frac{4}{\tilde{n}}\right], \end{flalign} satisfying \begin{flalign}\label{cv7} \partial_{t}v\in L^{l}_{\operatorname{loc}}(\Omega_{T})\quad \mbox{for some} \ \ l\equiv l(n,p,q,d)\in \left(1,\min\{2,p\}\right) \end{flalign} and \begin{flalign}\label{difff} Dv\in L^{\infty}_{\operatorname{loc}}(0,T;L^{2}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{with}\quad V_{p}(Dv)\in L^{2}_{\operatorname{loc}}(0,T;W^{1,2}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n})).
\end{flalign} \end{proposition} For the sake of simplicity, we shall split the proof of Proposition \ref{p1} into eight steps. \subsubsection*{Step 1: Approximating Cauchy-Dirichlet problems} For ease of notation, we define the numbers \begin{flalign}\label{r} m:=\frac{d}{d-2}>1,\qquad \tilde{q}:=\max\left\{q-\frac{p}{2},1\right\}, \end{flalign} and, for $j\in \mathbb{N}$, consider a usual family of non-negative mollifiers $\{\psi_{j}\}$ of $\mathbb{R}^{n+1}$. We then regularize $f$ via convolution against $\{\psi_{j}\}$, thus obtaining the sequence $\{f_{j}\}:=\{f*\psi_{j}\}$, set \begin{flalign}\label{ej} \varepsilon_{j}:=\left(1+j+\nr{f_{j}}_{L^{2m\tilde{q}}(\Omega_{T})}^{2m\tilde{q}}\right)^{-1},\qquad \ti{H}(z):=(\ti{\mu}^{2}+\snr{z}^{2}), \end{flalign} correct the nonstandard growth of the diffusive tensor $\ti{a}(\cdot)$ as follows: \begin{flalign}\label{defa} \ti{a}_{j}(x,t,z):=\ti{a}(x,t,z)+\varepsilon_{j}\ti{H}(z)^{\frac{2m\tilde{q}-2}{2}}z, \end{flalign} and consider solutions $v_{j}\in L^{2m\ti{q}}(0,T;W^{1,2m\ti{q}}(\Omega))$ of the following Cauchy-Dirichlet problem: \begin{flalign}\label{pd} \begin{cases} \ \partial_{t} v_{j}-\diver \ \ti{a}_{j}(x,t,Dv_{j})=0\quad &\mbox{in} \ \ \Omega_{T}\\ \ v_{j}=f_{j}\quad &\mbox{on} \ \ \partial_{par}\Omega_{T}. \end{cases} \end{flalign} By \eqref{extra}, \eqref{ref} and the definition in \eqref{defa}, it can easily be seen that \eqref{regg+} holds and that \begin{flalign}\label{refreg} \begin{cases} \ \snr{\ti{a}_{j}(x,t,z)}+\ti{H}(z)^{\frac{1}{2}}\snr{\partial_{z}\ti{a}_{j}(x,t,z)}\le c\left[\ti{H}(z)^{\frac{p-1}{2}}+\ti{H}(z)^{\frac{q-1}{2}}\right]+c\varepsilon_{j}\ti{H}(z)^{\frac{2m\tilde{q}-1}{2}}\\ \ \partial_{z}\ti{a}_{j}(x,t,z)\xi\cdot \xi\ge c\left[\ti{H}(z)^{\frac{p-2}{2}}+\varepsilon_{j}\ti{H}(z)^{\frac{2m\tilde{q}-2}{2}}\right]\snr{\xi}^{2}\\ \ \snr{\partial_{x}\ti{a}_{j}(x,t,z)}\le c\gamma(x,t)\left[\ti{H}(z)^{\frac{p-1}{2}}+\ti{H}(z)^{\frac{q-1}{2}}\right], \end{cases} \end{flalign} for all $(x,t)\in \Omega_{T}$, $z,\xi\in \mathbb{R}^{n}$, with $\gamma$ as in \eqref{gamma} and $c\equiv c(n,\nu,L,p,q,d)$. We recall that the weak formulation associated to problem \eqref{pd} reads as \begin{flalign}\label{wfj} \int_{\Omega_{T}}\left[v_{j}\partial_{t}\varphi -\ti{a}_{j}(x,t,Dv_{j})\cdot D\varphi\right] \ \,{\rm d}y=0\quad \mbox{for all} \ \ \varphi\in C^{\infty}_{c}(\Omega_{T}), \end{flalign} and the attainment of the boundary datum $f_{j}$ must be understood in the $L^{2}$-sense as in Definition \ref{d.1}. \subsubsection*{Step 2: Uniform energy bounds} Our main goal is to prove that the sequence $\{v_{j}\}$ is bounded, uniformly with respect to $j\in \mathbb{N}$, in the space-time $L^{p}$-norm. Since this is quite a routine procedure, we will just sketch it and refer the reader to \cite{bdm,si} for more details. Modulo using Steklov averages, we can test \eqref{wfj} against the difference $v_{j}-f_{j}$ to get \begin{flalign}\label{24} \int_{\Omega}&\snr{v_{j}(x,t)-f_{j}(x,t)}^{2} \ \,{\rm d}x\nonumber \\ &+\int_{0}^{t}\int_{\Omega}\ti{a}_{j}(x,s,Dv_{j})\cdot(Dv_{j}-Df_{j}) \ \,{\rm d}x \,{\rm d}s \nonumber \\ =&-\int_{0}^{t}\langle v_{j}-f_{j},\partial_{t}f_{j}\rangle_{W^{1,p}_{0}(\Omega)} \ \,{\rm d}s\quad \mbox{for a.e.} \ \ t\in (0,T).
\end{flalign} By $\eqref{refreg}_{2}$, H\"older and Young inequalities, if $p\ge 2$ a straightforward computation renders that \begin{flalign*} \int_{0}^{t}\int_{\Omega}&\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s +\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{2m\tilde{q}} \ \,{\rm d}x\,{\rm d}s\nonumber \\ \lesssim &\int_{0}^{t}\int_{\Omega}\left[\ti{a}_{j}(x,s,Dv_{j})-\ti{a}_{j}(x,s,Df_{j})\right]\cdot(Dv_{j}-Df_{j}) \ \,{\rm d}x\,{\rm d}s\nonumber \\ &+\int_{0}^{t}\int_{\Omega}\left[\snr{Df_{j}}^{p}+\varepsilon_{j}\snr{Df_{j}}^{2m\ti{q}}\right] \ \,{\rm d}x\,{\rm d}s, \end{flalign*} while if $1<p<2$ there holds that \begin{flalign*} \int_{0}^{t}\int_{\Omega}&\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s+\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{2m\ti{q}} \ \,{\rm d}x\,{\rm d}s\nonumber \\ \lesssim &\frac{1}{\sigma}\int_{0}^{t}\int_{\Omega}\left[\ti{a}_{j}(x,s,Dv_{j})-\ti{a}_{j}(x,s,Df_{j})\right]\cdot(Dv_{j}-Df_{j}) \ \,{\rm d}x\,{\rm d}s\nonumber \\ &+\sigma \int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{p} \ \,{\rm d}y+\int_{0}^{t}\int_{\Omega}\left[\snr{Df_{j}}^{p}+\varepsilon_{j}\snr{Df_{j}}^{2m\ti{q}}\right] \ \,{\rm d}x\,{\rm d}s. \end{flalign*} Moreover, using $\eqref{refreg}_{1}$, H\"older and Young inequalities we have \begin{flalign*} \int_{0}^{t}\int_{\Omega}&\ti{a}_{j}(x,s,Df_{j})\cdot(Dv_{j}-Df_{j}) \ \,{\rm d}x\,{\rm d}s\lesssim \sigma \int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s+\sigma\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{2m\ti{q}} \ \,{\rm d}x\,{\rm d}s \nonumber\\ &+\frac{1}{\sigma}\int_{0}^{t}\int_{\Omega}\left[1+\snr{Df_{j}}^{r}\right] \ \,{\rm d}x\,{\rm d}s+\frac{\varepsilon_{j}}{\sigma}\int_{0}^{t}\int_{\Omega}\ti{H}(Df_{j})^{m\tilde{q}} \ \,{\rm d}x\,{\rm d}s. \end{flalign*} Here, we also used that $q\ge p\Rightarrow r\ge q$ and, of course, that $2m\ti{q}> 2$. Finally, by H\"older, Sobolev-Poincar\'e and Young inequalities, \begin{flalign*} \left| \ \int_{0}^{t}\langle v_{j}-f_{j},\partial_{t}f_{j}\rangle_{W^{1,p}_{0}(\Omega)} \ \,{\rm d}s \ \right|\lesssim \sigma\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s+\frac{1}{\sigma}\nr{\partial_{t}f_{j}}^{p'}_{L^{p'}(0,t;W^{-1,p'}(\Omega))}. \end{flalign*} Inserting the content of all the previous displays in \eqref{24}, recalling \eqref{ej}, \eqref{ggg} and well-known convolution properties, and choosing $\sigma>0$ small enough, we obtain \begin{flalign}\label{unibd1} \int_{0}^{t}\int_{\Omega}&\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s+\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{2m\ti{q}} \ \,{\rm d}x\,{\rm d}s+\int_{\Omega}\snr{v_{j}(x,t)-f_{j}(x,t)}^{2} \ \,{\rm d}x\nonumber \\ \lesssim& \left[\int_{0}^{t}\int_{\Omega}\left[1+\snr{Df_{j}}^{r}\right] \ \,{\rm d}x\,{\rm d}s+\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\ti{H}(Df_{j})^{m\ti{q}} \ \,{\rm d}x\,{\rm d}s+\nr{\partial_{t}f_{j}}^{p'}_{L^{p'}(0,t;W^{-1,p'}(\Omega))}\right]\nonumber \\ \lesssim &\left[\int_{0}^{t}\int_{\Omega}\left[1+\snr{Df}^{r}\right] \ \,{\rm d}x\,{\rm d}s+\nr{\partial_{t}f}^{p'}_{L^{p'}(0,t;W^{-1,p'}(\Omega))}+1\right]\nonumber\\ \lesssim &\left[\nr{Df}_{L^{r}(\Omega_{T})}^{r}+\nr{\partial_{t}f}^{p'}_{L^{p'}(0,T;W^{-1,p'}(\Omega))}+1\right].
\mathbb Nd{flalign} As stated at the end of Section \ref{ma}, none of the constants implicit in "$\lesssim$" depends on $t\in (0,T)$, therefore we can send $t\to T$ on the right-hand side of \eqref{unibd1} to get \begin{equation}gin{flalign}\label{unibd} &\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{p} \ \,{\rm d}x\,{\rm d}s+\varepsilon_{j}\int_{0}^{t}\int_{\Omega}\snr{Dv_{j}}^{2m\ti{q}} \ \,{\rm d}x\,{\rm d}s+\int_{\Omega}\snr{v_{j}(x,t)-f_{j}(x,t)}^{2} \ \,{\rm d}x\nonumber \\ &\qquad \lesssim \left[\nr{Df}_{L^{r}(\Omega_{T})}^{r}+\nr{\partial_{t}f}^{p'}_{L^{p'}(0,T;W^{-1,p'}(\Omega))}+1\right]=:\mathcal{C}_{f}. \mathbb Nd{flalign} \subsubsection*{Step 3: Caccioppoli inequality} Let $h\in \mathbb{R}^{n}\setminus \{0\}$ any vector satisfying $\snr{h}\in (0,1)$, $B_{\varrho}\subset \Omega$ a ball with radius $0<\varrho\le 1$ and so that $B_{2\varrho}\Subset \Omega$, $g\in W^{1,\infty}(\mathbb{R})$ a non-negative function with bounded, piecewise continuous, non-negative first derivative and $\chi\in W^{1,\infty}([0,T])$ with $\chi(0)=0$, $\varphi\in C^{\infty}(B_{\varrho},[0,1])$ two cut-off functions. By the approximation procedure developed e.g. in \cite[Section 3]{bdm} or \cite[Section 3.1]{s}, we can test \eqref{wfj} against a suitably regularized version of the comparison map $\varphi^{2}\chi\mathrm{d}i u_{j}g(\snr{\mathrm{d}i u_{j}}^{2})$ and manipulate it to obtain, for a.e. $a_{j}u\in (0,\min\{T,1\})$, \begin{equation}gin{flalign}\label{0} \Phirac{1}{2}\int_{B_{\varrho}}&\varphi^{2}\chi\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \,{\rm d}x+\sum_{k=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathrm{d}ia_{j}a_{j}^{k}(x,t,Dv_{j})D_{k}\left[\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right] \ \,{\rm d}y\nonumber \\ =&-2\sum_{k=1}^{n}\int_{Q_{a_{j}u}}\varphi\chi\left(\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right)\mathrm{d}ia_{j}a_{j}^{k}(x,t,Dv_{j}) D_{k}\varphi \ \,{\rm d}y\nonumber \\ &-\Phirac{1}{2}\int_{Q_{a_{j}u}}\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s)\ \,{\rm d}s\right)\varphi^{2}\partial_{t}\chi \ \,{\rm d}y, \mathbb Nd{flalign} where we abbreviated $Q_{a_{j}u}:=B_{\varrho}\times (0,a_{j}u)$. We also reduce further the size of $\snr{h}$: we ask that \begin{equation}gin{flalign}\label{hhh} \snr{h}\in \left(0,\Phirac{\,{\rm dist}(\,{\rm supp}(\varphi),\partial B_{\varrho})}{10000}\right). \mathbb Nd{flalign} Using the mean value theorem, we rearrange $\mathrm{d}ia_{j}(x,t,Du_{j})$ in a more convenient way: \begin{equation}gin{flalign*} \mathrm{d}ia_{j}a_{j}^{k}(x,t,Dv_{j})=&\snr{h}^{-1}\left[\tilde{a}^{k}(x+h,t,Dv_{j}(x+h))-\tilde{a}^{k}(x,t,Dv_{j}(x+h))\right]\nonumber \\ &+\snr{h}^{-1}\varepsilon_{j}\left[\ti{H}(Dv_{j}(x+h))^{\Phirac{2m\tilde{q}-2}{2}}D_{k}v_{j}(x+h)-\ti{H}(Dv_{j}(x))^{\Phirac{2m\tilde{q}-2}{2}}D_{k}v_{j}(x)\right]\nonumber \\ &+\snr{h}^{-1}\left[\tilde{a}^{k}(x,t,Dv_{j}(x+h))-\tilde{a}^{k}(x,t,Dv_{j}(x))\right]\nonumber \\ =&\snr{h}^{-1}\sum_{l=1}^{n}\left[\int_{0}^{1}\partial_{x_{l}}\tilde{a}^{k}(x+\lambda h,t,Dv_{j}(x+h))h^{l} \ \d \lambda\right]\nonumber \\ &+\sum_{l=1}^{n}\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}(x))\d\lambda\right]\mathrm{d}i D_{l}v_{j}. 
\mathbb Nd{flalign*} Plugging this expansion in \eqref{0} we eventually get \begin{equation}gin{flalign}\label{1} \Phirac{1}{2}\int_{B_{\varrho}}&\varphi^{2}\chi\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\ &+\snr{h}^{-1}\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{x_{l}}\ti{a}^{k}(x+\lambda h,t,Dv_{j}(x+he_{i}))h^{l} \ \d\lambda\right]D_{k}\left[\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right] \ \,{\rm d}y\nonumber \\ &+\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}(x)) \ \d \lambda\right]\mathrm{d}i D_{l}v_{j}D_{k}\left[\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right] \ \,{\rm d}y\nonumber \\ =&-2\snr{h}^{-1}\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi\chi\left(\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right)\left[\int_{0}^{1}\partial_{x_{l}}\ti{a}^{k}(x+\lambda h,t,Dv_{j}(x+he_{i}))h^{l} \ \d\lambda\right]D_{k}\varphi \ \,{\rm d}y\nonumber \\ &-2\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi\chi\left(\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right)\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}v_{j}) \ \d\lambda\right]\mathrm{d}i D_{l}v_{j}D_{k}\varphi \ \,{\rm d}y\nonumber \\ &+\Phirac{1}{2}\int_{Q_{a_{j}u}}\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s) \ \,{\rm d}s\right)\varphi^{2}\partial_{t}\chi \ \,{\rm d}y. \mathbb Nd{flalign} For reasons that will be clear in a few lines, we introduce the shorthands \begin{equation}gin{flalign*} \mathcal{D}(h):=\left(\ti{\mu}^{2}+\snr{Dv_{j}(x+h)}^{2}+\snr{Dv_{j}(x)}^{2}\right)\quad \mbox{and}\quad \mathcal{G}(h):=\left(g(\snr{\mathrm{d}i v_{j}}^{2})+\snr{\mathrm{d}i v_{j}}^{2}g'(\snr{\mathrm{d}i v_{j}}^{2})\right), \mathbb Nd{flalign*} and notice that, by \eqref{extra}, $\mathcal{D}(h)>\ti{\mu}^{2}>0$. Now we start estimating all the terms appearing in \eqref{1}. For the sake of clarity, we split \begin{equation}gin{flalign*} &\mbox{(I)}:=\snr{h}^{-1}\sum_{k=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{x_{l}}\ti{a}^{k}(x+\lambda h,t,Dv_{j}(x+he_{i}))h^{l} \ \d\lambda\right]D_{k}\left[\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right] \ \,{\rm d}y\nonumber \\ &\ \ =\snr{h}^{-1}\sum_{k=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{x_{l}}\ti{a}^{k}(x+\lambda h,t,Dv_{j}(x+he_{i}))h^{l} \ \d\lambda\right]\mathrm{d}i D_{k}v_{j}g(\snr{\mathrm{d}i v_{j}}^{2}) \ \,{\rm d}y\nonumber \\ & \ \ +2\snr{h}^{-1}\sum_{k=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{x_{l}}\ti{a}^{k}(x+\lambda h,t,Dv_{j}(x+he_{i}))h^{l} \ \d\lambda\right]\snr{\mathrm{d}i v_{j} }^{2}g'(\snr{\mathrm{d}i v_{j}}^{2})\mathrm{d}i D_{k}v_{j} \ \,{\rm d}y\nonumber \\ & \ \ =:\mbox{(I)}_{1}+\mbox{(I)}_{2}. 
\mathbb Nd{flalign*} With \eqref{refreg}$_{3}$, \eqref{gamma}, H\"older and Young inequalities we bound \begin{equation}gin{flalign*} \snr{\mbox{(I)}_{1}}&+\snr{\mbox{(I)}_{2}}\le c\int_{Q_{a_{j}u}}\varphi^{2}\chi\left(\int_{0}^{1}\gamma(x+\lambda h,t) \ \d\lambda\right)\left[\mathcal{D}(h)^{\Phirac{p-1}{2}}+\mathcal{D}(h)^{\Phirac{q-1}{2}}\right]\mathcal{G}(h)\snr{\mathrm{d}i Dv_{j}} \ \,{\rm d}y\nonumber \\ \le&\sigma \int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{\mathrm{d}i Dv_{j}}^{2} \ \,{\rm d}y\nonumber \\ &+\Phirac{c}{\sigma}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left(\int_{0}^{1}\gamma(x+\lambda h,t) \ \d\lambda\right)^{2}\left[\mathcal{D}(h)^{\Phirac{p}{2}}+\mathcal{D}(h)^{\Phirac{2q-p}{2}}\right]\mathcal{G}(h) \ \,{\rm d}y\nonumber \\ \le&\sigma \int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{\mathrm{d}i Dv_{j}}^{2} \ \,{\rm d}y\nonumber \\ &+\Phirac{c}{\sigma}\int_{0}^{a_{j}u}\nr{\gamma(\cdot,t)}^{2}_{L^{d}(B_{2\varrho})}\left(\int_{B_{\varrho}}\varphi^{2m}\chi^{m}\left[\mathcal{D}(h)^{\Phirac{pm}{2}}+\mathcal{D}(h)^{\Phirac{(2q-p)m}{2}}\right]\mathcal{G}(h)^{m} \ \,{\rm d}x \right)^{\Phirac{1}{m}} \ \,{\rm d}t\nonumber \\ \le &\sigma \int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{\mathrm{d}i Dv_{j}}^{2} \ \,{\rm d}y+\Phirac{c}{\sigma}\left(\int_{Q_{a_{j}u}}\varphi^{2m}\chi^{m}\left[1+\mathcal{D}(h)^{m\left(q-\Phirac{p}{2}\right)}\right]\mathcal{G}(h)^{m} \ \,{\rm d}y\right)^{\Phirac{1}{m}}, \mathbb Nd{flalign*} for $c\equiv c(\texttt{data})$. Moreover, by \eqref{refreg}$_{2}$, Lemmas \ref{l1} and \ref{l6}, we obtain \begin{equation}gin{flalign*} &\mbox{(II)}:=\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}(x)) \ \d \lambda\right]\mathrm{d}i D_{l}v_{j}D_{k}\left[\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right] \ \,{\rm d}y\nonumber \\ &\ \ =\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}(x)) \ \d \lambda\right]\mathrm{d}i D_{l}v_{j}\mathrm{d}i D_{k} v_{j} g(\snr{\mathrm{d}i v_{j}}^{2}) \ \,{\rm d}y\nonumber \\ &\ \ +2\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi^{2}\chi\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}(x)) \ \d \lambda\right]\mathrm{d}i D_{l}v_{j}\snr{\mathrm{d}i v_{j}}^{2}g'(\snr{\mathrm{d}i v_{j}}^{2})\mathrm{d}i D_{k}v_{j} \ \,{\rm d}y\nonumber \\ & \ \ \ge c\snr{h}^{-2}\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{a_{j}u_{h} Dv_{j}}^{2}\mathcal{G}(h) \ \,{\rm d}y+c\snr{h}^{-2}\varepsilon_{j} \int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{D}(h)^{\Phirac{2m\tilde{q}-2}{2}}\snr{a_{j}u_{h} Dv_{j}}^{2}\mathcal{G}(h) \ \,{\rm d}y\nonumber \\ & \ \ \ge c\int_{Q_{a_{j}u}}\varphi^{2}\chi \mathcal{G}(h)\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y+c\varepsilon_{j} \int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\snr{\mathrm{d}i V_{\ti{\mu},2m\tilde{q}}(Dv_{j})}^{2} \ \,{\rm d}y, \mathbb Nd{flalign*} with $c\equiv c(n,\nu,p,q,d)$. 
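Let us point out the elementary exponent bookkeeping behind the estimate of $\mbox{(I)}_{1}+\mbox{(I)}_{2}$: by Minkowski's integral inequality and \eqref{gamma}, the factor $\left(\int_{0}^{1}\gamma(x+\lambda h,t) \ \d\lambda\right)^{2}$ has $L^{\frac{d}{2}}$-norm in the space variable controlled by $\nr{\gamma(\cdot,t)}^{2}_{L^{d}(B_{2\varrho})}$, and the exponent $m=\frac{d}{d-2}$ fixed in \eqref{r} is precisely the conjugate of $\frac{d}{2}$, since $\frac{2}{d}+\frac{1}{m}=\frac{2}{d}+\frac{d-2}{d}=1$; this is what allows the application of H\"older inequality on each time slice in the previous display.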
With \eqref{refreg}$_{1,3}$, H\"{o}lder and Young inequalities we finally obtain \begin{equation}gin{flalign*} &\snr{\mbox{(III)}}+\snr{\mbox{(IV)}}:=\nonumber \\ &\quad -2\snr{h}^{-1}\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi\chi\left(\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right)\left[\int_{0}^{1}\partial_{x_{l}}a_{j}a^{k}(x+h\lambda,t,Dv_{j}(x+j))h^{l} \ \d\lambda\right]D_{k}\varphi \ \,{\rm d}y\nonumber \\ &\quad-2\sum_{k,l=1}^{n}\int_{Q_{a_{j}u}}\varphi\chi\left(\mathrm{d}i v_{j}g(\snr{\mathrm{d}i v_{j}}^{2})\right)\left[\int_{0}^{1}\partial_{z_{l}}a_{j}a_{j}^{k}(x,t,Dv_{j}(x)+\lambda a_{j}u_{h}Dv_{j}) \ \d\lambda\right]\mathrm{d}i D_{l}v_{j}D_{k}\varphi \ \,{\rm d}y\nonumber \\ &\quad\le \sigma\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{\mathrm{d}i Dv_{j}}^{2} \ \,{\rm d}y+\sigma \varepsilon_{j}\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\mathcal{D}(h)^{\Phirac{2m\tilde{q}-2}{2}}\snr{\mathrm{d}i Dv_{j}}^{2}\ \,{\rm d}y\nonumber \\ &\quad+\Phirac{c\varepsilon_{j}}{\sigma}\int_{Q_{a_{j}u}}\chi\snr{D\varphi}^{2}g(\snr{\mathrm{d}i v_{j}}^{2})\snr{\mathrm{d}i v_{j}}^{2}\mathcal{D}(h)^{\Phirac{2m\tilde{q}-2}{2}} \ \,{\rm d}y\nonumber\\ &\quad +\Phirac{c}{\sigma}\int_{Q_{a_{j}u}}\chi\snr{D\varphi}^{2}\snr{\mathrm{d}i v_{j}}^{2}g(\snr{\mathrm{d}i v_{j}}^{2})\left[\mathcal{D}(h)^{\Phirac{p-2}{2}}+\mathcal{D}(h)^{q-\Phirac{p}{2}-1}\right] \ \,{\rm d}y\nonumber \\ &\quad+c\nr{\gamma}^{2}_{L^{d}(\Omega_{T})}\left(\int_{Q_{a_{j}u}}\chi^{m}\varphi^{2m}g(\snr{\mathrm{d}i v_{j}}^{2})^{m}\left[\mathcal{D}(h)^{\Phirac{pm}{2}}+\mathcal{D}(h)^{m\left(q-\Phirac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\Phirac{1}{m}}, \mathbb Nd{flalign*} where $c\equiv c(\texttt{data})$. Merging the content of all the previous displays and choosing $\sigma>0$ sufficiently small, we end up with \begin{equation}gin{flalign}\label{2} \Phirac{1}{2}\int_{B_{\varrho}}&\varphi^{2}\chi\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\ &+\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y+\varepsilon_{j}\int_{Q_{a_{j}u}}\varphi^{2}\chi\mathcal{G}(h)\snr{\mathrm{d}i V_{\ti{\mu},2m\tilde{q}}(Dv_{j})}^{2} \ \,{\rm d}y\nonumber \\ \le &c\left(\int_{Q_{a_{j}u}}\chi^{m}\varphi^{2m}\mathcal{G}(h)^{m}\left[1+\mathcal{D}(h)^{m\left(q-\Phirac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\Phirac{1}{m}}\nonumber \\ &+c\int_{Q_{a_{j}u}}\chi\snr{D\varphi}^{2}\snr{\mathrm{d}i v_{j}}^{2}g(\snr{\mathrm{d}i v_{j}}^{2})\left[\mathcal{D}(h)^{\Phirac{p-2}{2}}+\mathcal{D}(h)^{q-\Phirac{p}{2}-1}\right] \ \,{\rm d}y\nonumber \\ &+c\varepsilon_{j}\int_{Q_{a_{j}u}}\chi\snr{\mathrm{d}i v_{j}}^{2}g(\snr{\mathrm{d}i v_{j}}^{2})\mathcal{D}(h)^{\Phirac{2m\tilde{q}-2}{2}}\snr{D\varphi}^{2} \ \,{\rm d}y\nonumber \\ &+c\int_{Q_{a_{j}u}}\left(\int_{0}^{\snr{\mathrm{d}i v_{j}}^{2}}g(s) \ \,{\rm d}s\right)\varphi^{2}\partial_{t}\chi \ \,{\rm d}y=:\mathcal{I}(h), \mathbb Nd{flalign} with $c\equiv c(\texttt{data})$. In \eqref{2}, we also used that $m>1$ and that, being $p\le q$ we have that $\Phirac{p}{2}\le \Phirac{q}{2}\le q-\Phirac{p}{2}$. For $z\in \mathbb{R}^{n}$, set $\hat{\mathcal{G}}(z):=\left(g(\snr{z}^{2})+\snr{z}^{2}g'(\snr{z}^{2})\right)$. Now we recall \eqref{hhh} and that $g(\cdot)$ is bounded with bounded, piecewise continuous, non-negative first derivative. 
Keeping \eqref{extra} in mind, it is then easy to see that by Lemmas \ref{transc}-\ref{diffquo}, we can use Fatou Lemma on the left-hand side of \eqref{2} and a well-known variant of the dominated convergence theorem on the right-hand side of \eqref{2} to end up with \begin{equation}gin{flalign}\label{5} &\Phirac{1}{2}\int_{B_{\varrho}}\varphi^{2}\chi\left(\int_{0}^{\snr{D v_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\ &\quad+\int_{Q_{a_{j}u}}\varphi^{2}\chi\hat{\mathcal{G}}(Dv_{j})\snr{D V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y+\varepsilon_{j}\int_{Q_{a_{j}u}}\varphi^{2}\chi\hat{\mathcal{G}}(Dv_{j})\snr{D V_{\ti{\mu},2m\tilde{q}}(Dv_{j})}^{2} \ \,{\rm d}y\nonumber \\ &\quad\le c\left(\int_{Q_{a_{j}u}}\chi^{m}\left(\snr{D\varphi}^{2m}+\varphi^{2m}\right)\hat{\mathcal{G}}(Dv_{j})^{m}\left[1+\ti{H}(Dv_{j})^{m\left(q-\Phirac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\Phirac{1}{m}}\nonumber \\ &\quad+c\varepsilon_{j}\int_{Q_{a_{j}u}}\chi\snr{D\varphi}^{2} g(\snr{Dv_{j}}^{2})\ti{H}(Dv_{j})^{m\tilde{q}} \ \,{\rm d}y\nonumber \\ &\quad+c\int_{Q_{a_{j}u}}\left(\int_{0}^{\snr{Dv_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \varphi^{2}\partial_{t}\chi \ \,{\rm d}y, \mathbb Nd{flalign} with $c\equiv c(\texttt{data})$. \subsubsection*{Step 4: Higher weak differentiability and interpolation} Our starting point is inequality \eqref{5} with the choice $g\equiv 1$, $\Phirac{\varrho}{2}\le a_{j}u_{1}<a_{j}u_{2}\le \varrho$, $\varphi\in C^{\infty}_{c}(B_{\varrho})$ so that \begin{equation}gin{flalign*} \mathds{1}_{B_{a_{j}u_{1}}}\le \varphi\le \mathds{1}_{B_{a_{j}u_{2}}}\quad \mbox{and}\quad \snr{D\varphi}\le \Phirac{4}{a_{j}u_{2}-a_{j}u_{1}} \mathbb Nd{flalign*} and $\chi \in W^{1,\infty}(\mathbb{R},[0,1])$ with \begin{equation}gin{flalign*} \chi(t_{0}-a_{j}u_{2}^{2})=0,\quad \chi\equiv 1 \ \ \mbox{on} \ \ (t_{0}-a_{j}u_{1}^{2},t_{0}), \quad 0\le \partial_{t}\chi\le\Phirac{4}{(a_{j}u_{2}-a_{j}u_{1})^{2}}. \mathbb Nd{flalign*} Combining \eqref{5} with \eqref{unibd} we obtain \begin{equation}gin{flalign}\label{6} \sup_{t_{0}-a_{j}u_{2}^{2}<t<t_{0}}\int_{B_{\varrho}}&\varphi^{2}\chi\snr{Dv_{j}(x,t)}^{2} \ \,{\rm d}x+\int_{Q_{\varrho}}\varphi^{2}\chi\snr{DV_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}x\nonumber \\ &+\varepsilon_{j}\int_{Q_{\varrho}}\varphi^{2}\chi\snr{V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\nonumber \\ \le &\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{2}}\left(\int_{Q_{a_{j}u_{2}}}\left[1+\ti{H}(Dv_{j})^{m\ti{q}} \ \,{\rm d}y\right]\right)^{\Phirac{1}{m}}+\Phirac{c\mathcal{C}_{f}}{(a_{j}u_{2}-a_{j}u_{1})^{2}}, \mathbb Nd{flalign} with $c\equiv c(\texttt{data})$. Now we set \begin{equation}gin{flalign}\label{tn} \tilde{n}:=\begin{equation}gin{cases} \ n\quad &\mbox{if} \ \ n>2\\ \ \mbox{any number in} \ \ \left(2,\min\left\{2\left(\Phirac{d}{d(q-p)+p}-1\right),\Phirac{2(d-2)}{d(q-p)+p}\right\}\right)\quad &\mbox{if} \ \ n=2 \ \ \mbox{and} \ \ \tilde{q}=q-\Phirac{p}{2}\\ \ \mbox{any number in} \ \ \left(2,\Phirac{2p(d-2)}{2d-pd+2p}\right)\quad &\mbox{if} \ \ n=2 \ \ \mbox{and} \ \ \tilde{q}=1 \mathbb Nd{cases} \mathbb Nd{flalign} and notice that, if $p\ge 2$ \begin{equation}gin{flalign*} \ti{H}(z)^{\Phirac{p}{2}}\ge\snr{V_{\ti{\mu},p}(z)}^{2}\ge \snr{z}^{p}\quad \mbox{for all} \ \ z\in \mathbb{R}^{n}, \mathbb Nd{flalign*} or, if $1<p<2$, \begin{equation}gin{flalign*} \ti{H}(z)^{\Phirac{p}{2}}\ge \snr{V_{\ti{\mu},p}(z)}^{2}\ge 2^{\Phirac{p-2}{2}}\snr{z}^{p}\quad \mbox{for all} \ \ z\in \mathbb{R}^{n} \ \ \mbox{with} \ \ \snr{z}\ge \ti{\mu}. 
\mathbb Nd{flalign*} On a fixed time slice we use H\"older inequality and \eqref{6} to bound \begin{equation}gin{flalign*} \int_{B_{\varrho}}&\varphi^{2\left(1+\Phirac{2}{\tilde{n}}\right)}\snr{Dv_{j}}^{p+\Phirac{4}{\tilde{n}}} \ \,{\rm d}x\le \left(\int_{B_{\varrho}}\varphi^{\Phirac{2\tilde{n}}{\tilde{n}-2}}\snr{Dv_{j}}^{\Phirac{\tilde{n}p}{\tilde{n}-2}} \ \,{\rm d}x\right)^{\Phirac{\tilde{n}-2}{\tilde{n}}}\left(\int_{B_{\varrho}}\varphi^{2}\snr{Dv_{j}}^{2} \ \,{\rm d}x\right)^{\Phirac{2}{\tilde{n}}}\nonumber \\ \le &c\left[\left(\int_{B_{\varrho}}\varphi^{\Phirac{2\tilde{n}}{\tilde{n}-2}} \ \,{\rm d}x\right)^{\Phirac{\tilde{n}-2}{\tilde{n}}}+\left(\int_{B_{\varrho}}\varphi^{\Phirac{2\tilde{n}}{\tilde{n}-2}}\snr{V_{\ti{\mu},p}(Dv_{j})}^{\Phirac{2\tilde{n}}{\tilde{n}-2}} \ \,{\rm d}x\right)^{\Phirac{\tilde{n}-2}{\tilde{n}}}\right]\left(\int_{B_{\varrho}}\varphi^{2}\snr{Dv_{j}}^{2} \ \,{\rm d}x\right)^{\Phirac{2}{\tilde{n}}}\nonumber \\ \le &c\left[\int_{B_{\varrho}}\snr{D\varphi}^{2} \ \,{\rm d}x+\int_{B_{\varrho}}\snr{D(\varphi V_{\ti{\mu},p}(Dv_{j}))}^{2} \ \,{\rm d}x\right]\left(\int_{B_{\varrho}}\varphi^{2}\snr{Dv_{j}}^{2} \ \,{\rm d}x\right)^{\Phirac{2}{\tilde{n}}}\nonumber \\ \le &c\left[\int_{B_{\varrho}}\snr{D\varphi}^{2} \ \,{\rm d}x+\int_{B_{\varrho}}\varphi^{2}\snr{DV_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}x+\int_{B_{\varrho}}\snr{V_{\ti{\mu},p}(Dv_{j})}^{2}\snr{D\varphi}^{2} \ \,{\rm d}x\right]\left(\int_{B_{\varrho}}\varphi^{2}\snr{Dv_{j}}^{2} \ \,{\rm d}x\right)^{\Phirac{2}{\tilde{n}}}. \mathbb Nd{flalign*} We multiply both sides of the inequality in the previous display by $\chi^{1+\Phirac{2}{\tilde{n}}}$, integrate in time for $t\in (t_{0}-a_{j}u_{2}^{2},t_{0})$, take the supremum in the time variable of the last integral on the right-hand side, use \eqref{6} and eventually get \begin{equation}gin{flalign}\label{7} \int_{Q_{a_{j}u_{1}}}&\snr{Dv_{j}}^{p+\Phirac{4}{\tilde{n}}} \ \,{\rm d}y\le \Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{2\left(1+\Phirac{2}{\tilde{n}}\right)}}\left(\int_{Q_{a_{j}u_{2}}}\left[1+\ti{H}(Dv_{j})^{m\ti{q}}\right] \ \,{\rm d}y\right)^{\Phirac{1}{m}\left(1+\Phirac{2}{\tilde{n}}\right)}+\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{2\left(1+\Phirac{2}{\tilde{n}}\right)}}\nonumber\\ \le &\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{2\left(1+\Phirac{2}{\tilde{n}}\right)}}\left(\int_{Q_{a_{j}u_{2}}}\snr{Dv_{j}}^{2m\ti{q}} \ \,{\rm d}y\right)^{\Phirac{1}{m}\left(1+\Phirac{2}{\tilde{n}}\right)}+\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{2\left(1+\Phirac{2}{\tilde{n}}\right)}}, \mathbb Nd{flalign} where $c\equiv c(\texttt{data},\mathcal{C}_{f})$. We can rearrange \eqref{7} in the following way: \begin{equation}gin{flalign}\label{8} \nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{a_{j}u_{1}}\times (t_{0}-a_{j}u_{1}^{2},t_{0}))}\le \Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{\Phirac{2(\ti{n}+2)}{\ti{n}p+4}}}\nr{Dv_{j}}_{L^{2m\ti{q}}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}^{\Phirac{2\ti{q}(\ti{n}+2)}{\ti{n}p+4}}+ \Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{\Phirac{2(\ti{n}+2)}{\ti{n}p+4}}}. 
\mathbb Nd{flalign} Notice that, by \eqref{gamma} and \eqref{pq}, there holds that \begin{equation}gin{flalign}\label{pq.1} p\le q<2m\ti{q}<p+\Phirac{4}{\ti{n}}, \mathbb Nd{flalign} so we can apply the interpolation inequality \begin{equation}gin{flalign}\label{inter} \nr{Dv_{j}}_{L^{2m\ti{q}}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}\le \nr{Dv_{j}}_{L^{p}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}^{1-\theta}\nr{Dv_{j}}_{L^{p+\Phirac{4}{\ti{n}}}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}^{\theta}, \mathbb Nd{flalign} where $\theta\in (0,1)$ solves \begin{equation}gin{flalign*} \Phirac{1}{2m\ti{q}}=\Phirac{1-\theta}{p}+\Phirac{\ti{n}\theta}{\ti{n}p+4}\ \ \Rightarrow \ \ \theta=\Phirac{(\ti{n}p+4)(2m\ti{q}-p)}{8m\ti{q}}. \mathbb Nd{flalign*} Plugging \eqref{inter} into \eqref{8} we get \begin{equation}gin{flalign}\label{9} \nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{a_{j}u_{1}}\times (t_{0}-a_{j}u_{1}^{2},t_{0}))}\le &\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{\Phirac{2(\ti{n}+2)}{\ti{n}p+4}}}\nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{a_{j}u_{2}}\times(t_{0}-a_{j}u_{2}^{2},t_{0}))}^{\Phirac{2\theta\ti{q}(\ti{n}+2)}{\ti{n}p+4}}\nr{Dv_{j}}_{L^{p}(B_{a_{j}u_{2}}\times(t_{0}-a_{j}u_{2}^{2},t_{0}))}^{\Phirac{2(1-\theta)\ti{q}(\ti{n}+2)}{\ti{n}p+4}}\nonumber \\ &+\Phirac{c}{(a_{j}u_{2}-a_{j}u_{1})^{\Phirac{2(\ti{n}+2)}{\ti{n}p+4}}}, \mathbb Nd{flalign} with $c\equiv c(\texttt{data},\mathcal{C}_{f})$. By \eqref{pq} and \eqref{tn} there holds that \begin{equation}gin{flalign*} \Phirac{2\theta\ti{q}(\ti{n}+2)}{\ti{n}p+4}<1, \mathbb Nd{flalign*} so we can apply Young inequality with conjugate exponents \begin{equation}gin{flalign}\label{conex} \left(\Phirac{4m}{(2m\ti{q}-p)(\ti{n}+2)},\Phirac{4m}{4m-(2m\ti{q}-p)(\ti{n}+2)}\right) \mathbb Nd{flalign} to get \begin{equation}gin{flalign}\label{lip4} \nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{a_{j}u_{1}}\times (t_{0}-a_{j}u_{1}^{2},t_{0}))}\le &\Phirac{1}{2}\nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}\nonumber \\ &+\Phirac{c(\texttt{data},\mathcal{C}_{f})}{(a_{j}u_{2}-a_{j}u_{1})^{\hat{\theta}}}\left[1+\nr{Dv_{j}}_{L^{p}(B_{a_{j}u_{2}}\times (t_{0}-a_{j}u_{2}^{2},t_{0}))}^{\begin{equation}ta}\right], \mathbb Nd{flalign} where we set $\hat{\theta}:=\Phirac{8m(\ti{n}+2)}{(\ti{n}p+4)[4m-(2m\ti{q}-p)(\ti{n}+2)]}$ and $\begin{equation}ta:=\Phirac{8m(1-\theta)\ti{q}(\ti{n}+2)}{[4m-(2m\ti{q}-p)(\ti{n}+2)](\ti{n}p+4)}$. Now we are in position to apply Lemma \ref{l0} and \eqref{unibd} to the inequality in the previous display and conclude with \begin{equation}gin{flalign} &\nr{Dv_{j}}_{L^{p+\Phirac{4}{\tilde{n}}}(B_{\varrho/2}\times (t_{0}-(\varrho/2)^{2},t_{0}))}\le \Phirac{c}{\varrho^{\hat{\theta}}}\left[1+\nr{Dv_{j}}_{L^{p}(B_{\varrho}\times (t_{0}-\varrho^{2},t_{0}))}^{\begin{equation}ta}\right]\nonumber \\ &\qquad\le \Phirac{c}{\varrho^{\hat{\theta}}}\left[\nr{Df}_{L^{r}(\Omega_{T})}^{r\begin{equation}ta}+\nr{\partial_{t}f}^{\begin{equation}ta p'}_{L^{p'}(0,T;W^{-1,p'}(\Omega))}+1\right]\label{10} \mathbb Nd{flalign} for $c\equiv c(\texttt{data})$. Finally, H\"older inequality and \eqref{10} in particular imply that \begin{equation}gin{flalign}\label{11} \nr{Dv_{j}}_{L^{s}(B_{\varrho/2}\times (t_{0}-(\varrho/2)^{2},t_{0}))}\le \Phirac{c(\texttt{data},\mathcal{C}_{f},s)}{\varrho^{\hat{\theta}}}\quad \mbox{for all} \ \ s\in \left[1,p+\Phirac{4}{\ti{n}}\right], \mathbb Nd{flalign} thus \eqref{pq} and \eqref{tn} render that $s=q$ and $s=2m\ti{q}$ are both admissible choices. 
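For completeness, we record why the smallness of the exponent used for Young inequality is precisely assumption \eqref{pq} at work: by the very definition of $\theta$ we have $\frac{2\theta\ti{q}(\ti{n}+2)}{\ti{n}p+4}=\frac{(\ti{n}+2)(2m\ti{q}-p)}{4m}$, and, at least in the model case $\ti{n}=n$ and $\ti{q}=q-\frac{p}{2}$, the requirement $\frac{(n+2)(2m\ti{q}-p)}{4m}<1$ rearranges, recalling $m=\frac{d}{d-2}$, into $q<p+2\left(\frac{1}{n+2}-\frac{p}{2d}\right)$, which is exactly \eqref{pq}.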
In the previous two displays, we also expanded the expression of $\mathcal{C}_{f}$. \subsubsection*{Step 5: Fractional differentiability in space} Let $t_{0}\in (0,T)$ be any number and $\varphi\in C^{\infty}_{c}(B_{\varrho})$ and $\chi\in W^{1,\infty}(\mathbb{R},[0,1])$ be two cut-off functions satisfying \begin{equation}gin{flalign}\label{phi} \mathds{1}_{B_{\varrho/4}}\le \varphi\le \mathds{1}_{B_{\varrho/2}}\quad \mbox{and}\quad \snr{D\varphi}\le \Phirac{4}{\varrho} \mathbb Nd{flalign} and \begin{equation}gin{flalign}\label{chi} \chi(t_{0}-\varrho^{2}/4)=0,\quad \chi=1 \ \ \mbox{on} \ \ (t_{0}-\varrho^{2}/16,t_{0}),\quad 0\le \partial_{t}\chi\le \Phirac{4}{\varrho^{2}} \mathbb Nd{flalign} respectively. If $p\ge 2$, by Lemma \ref{l1} we have \begin{equation}gin{flalign}\label{fr5} \int_{Q_{\varrho/2}}&\varphi^{2}\chi\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\sim \snr{h}^{-2}\int_{Q_{\varrho/2}}\varphi^{2}\chi\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{a_{j}u_{h}Dv_{j}}^{2} \ \,{\rm d}y\nonumber \\ G_{T}rsim& \snr{h}^{-2}\int_{Q_{\varrho/2}}\varphi^{2}\chi\snr{a_{j}u_{h}Dv_{j}}^{p} \ \,{\rm d}y, \mathbb Nd{flalign} while, for $1<p<2$ we have that \begin{equation}gin{flalign}\label{fr6} \snr{h}^{-p}\int_{Q_{\varrho/2}}&\varphi^{2}\chi\snr{a_{j}u_{j}Dv_{j}}^{p} \ \,{\rm d}y\le \left( \snr{h}^{-2}\int_{Q_{\varrho/2}}\varphi^{2}\chi\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{a_{j}u_{h}Dv_{j}}^{2} \ \,{\rm d}x\right)^{\Phirac{p}{2}}\left(\int_{Q_{\varrho/2}}\varphi^{2}\chi\mathcal{D}(j)^{\Phirac{p}{2}} \ \,{\rm d}y\right)^{\Phirac{2-p}{2}}\nonumber \\ \lesssim &\left(\snr{h}^{-2}\int_{Q_{\varrho/2}}\varphi^{2}\chi\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{p}{2}}\left(\int_{Q_{\varrho/2}}\varphi^{2}\chi\mathcal{D}(h)^{\Phirac{p}{2}} \ \,{\rm d}y\right)^{\Phirac{2-p}{2}}. \mathbb Nd{flalign} Therefore, if $p\ge 2$, by \eqref{fr5}, \eqref{2} with $g\equiv 1$, $\varphi$ and $\chi$ as in \eqref{phi}-\eqref{chi}, \eqref{6} and \eqref{11} we obtain \begin{equation}gin{flalign}\label{fr7} &\left(\limsup_{\snr{h}\to 0}\int_{Q_{\varrho/4}}\left| \ \Phirac{a_{j}u_{h}Dv_{j}}{\snr{h}^{\Phirac{2}{p}}} \ \right|^{p} \ \,{\rm d}y\right)\lesssim \limsup_{\snr{h}\to 0}\int_{Q_{\varrho/2}}\varphi^{2}\chi\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\nonumber \\ &\qquad \lesssim \limsup_{\snr{h}\to 0}\mathcal{I}(h)\lesssim \varrho^{-2}\left[1+\left(\int_{Q_{\varrho/2}}\ti{H}(Dv_{j})^{m\ti q} \ \,{\rm d}y\right)^{\Phirac{1}{m}}\right]\lesssim \varrho^{-\tilde{\theta}}, \mathbb Nd{flalign} while, for $1<p<2$ we have, using also \eqref{unibd} \begin{equation}gin{flalign}\label{fr8} &\limsup_{\snr{h}\to 0}\left(\int_{Q_{\varrho/4}}\left| \ \Phirac{a_{j}u_{h}Dv_{j}}{\snr{h}} \ \right|^{p}\right)\lesssim \left(\limsup_{\snr{h}\to 0}\mathcal{I}(h)\right)^{\Phirac{p}{2}}\mathcal{C}_{f}^{\Phirac{2-p}{2}}\nonumber \\ &\qquad \lesssim\varrho^{-p}\left[1+\left(\int_{Q_{\varrho/2}}\ti{H}(Dv_{j})^{m\ti{q}} \ \,{\rm d}y\right)^{\Phirac{1}{m}}\right]^{\Phirac{p}{2}}\lesssim \varrho^{-\tilde{\theta}}, \mathbb Nd{flalign} In both, \eqref{fr7}-\eqref{fr8}, $\tilde{\theta}\equiv \tilde{\theta}(n,p,q,d)$ and the constants implicit in "$\lesssim$" depend on $(\texttt{data},\mathcal{C}_{f})$. 
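Let us stress the meaning of the last two estimates: when $p\ge 2$, \eqref{fr7} is a Nikolskii-type estimate of order $\frac{2}{p}$ for $Dv_{j}$ in the space variable, while for $1<p<2$, \eqref{fr8} gives an analogous estimate of order $1$; this is the source of the threshold $\min\left\{1,\frac{2}{p}\right\}$ appearing in \eqref{fr9} below.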
Combining \eqref{fr7}-\eqref{fr8}, Proposition \ref{fracsob} and a standard covering argument, we can conclude that \begin{equation}gin{flalign}\label{fr9} Dv_{j}\in L^{p}_{\operatorname{loc}}(0,T;W^{\varsigma,p}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{for all} \ \ \varsigma\in \left(0,\min\left\{1,\Phirac{2}{p}\right\}\right). \mathbb Nd{flalign} \subsubsection*{Step 6: Fractional differentiability in time} We aim to prove that \begin{equation}gin{flalign}\label{fr1} a_{j}a_{j}(\cdot,\cdot,Dv_{j})\in L^{l}_{\operatorname{loc}}(0,T;W^{1,l}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{for some} \ \ l\equiv l(n,p,q,d)\in (1,\min\{2,p\}). \mathbb Nd{flalign} The forthcoming argument appears for instance in \cite{dumi} for the $p$-laplacean case with $p\ge 2$. Before going on, let us record some computations which will be helpful in a few lines. By the definition given in \eqref{r} it is clear that \begin{equation}gin{flalign}\label{fr3.1} \max\left\{\Phirac{p}{2},q-\Phirac{p}{2},m\ti{q}\right\}=m\ti{q}. \mathbb Nd{flalign} Moreover, by \eqref{pq.1} we also have that there exists $l\in (1,2)$ so that \begin{equation}gin{flalign}\label{fr2.1} \max\left\{s_{1},s_{2}\right\}<p+\Phirac{4}{\ti{n}}, \mathbb Nd{flalign} where we set \begin{equation}gin{flalign*} s_{1}:=\Phirac{2l(m\ti{q}-1)}{2-l}\qquad\mbox{and}\qquad s_{2}:=\Phirac{dl(q-1)}{(d-l)}. \mathbb Nd{flalign*} For $\varphi,\chi$ as in \eqref{phi}-\eqref{chi} and $h$ as in \eqref{hhh}, we expand \begin{equation}gin{flalign*} \int_{Q_{\varrho/2}}&\left[\varphi^{2}\chi\snr{a_{j}u_{h} a_{j}a_{j}(\cdot,t,Dv_{j})}\right]^{l} \ \,{\rm d}y\lesssim \int_{Q_{\varrho/2}}\left[\varphi^{2}\chi\snr{a_{j}a_{j}(x+h,t,Dv_{j}(x+h))-a_{j}a_{j}(x,t,Dv_{j}(x+h))}\right]^{l} \ \,{\rm d}y\nonumber \\ &+\int_{Q_{\varrho/2}}\left[\varphi^{2}\chi\snr{a_{j}a_{j}(x,t,Dv_{j}(x+h))-a_{j}a_{j}(x,t,Dv_{j}(x))}\right]^{l} \ \,{\rm d}y=:\mbox{(I)}+\mbox{(II)} \mathbb Nd{flalign*} and estimate, via $\eqref{refreg}_{3}$, \eqref{fr2.1} and H\"older inequality, \begin{equation}gin{flalign*} \mbox{(I)}\lesssim& \snr{h}^{l}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\left(\int_{0}^{1}\gamma(x+h\lambda,t) \ \d\lambda\right)^{l}\left[1+\mathcal{D}(h)^{\Phirac{l(q-1)}{2}}\right] \ \,{\rm d}y\nonumber \\ \lesssim&\snr{h}^{l}\nr{\gamma}_{L^{d}(\Omega_{T})}^{l}\left(\int_{Q_{\varrho/2}}\left[1+\mathcal{D}(h)^{\Phirac{s_{2}}{2}}\right] \ \,{\rm d}y\right)^{\Phirac{l(q-1)}{s_{2}}}. \mathbb Nd{flalign*} Concerning term $\mbox{(II)}$ we distinguish three cases: $q\ge p\ge 2$, $q\ge 2>p$ and $2>q\ge p$. 
If $q\ge p\ge 2$, via $\eqref{refreg}_{1,3}$, \eqref{fr3.1}, \eqref{fr2.1}, H\"older inequality, Lemmas \ref{l1} and \ref{l6} we get \begin{equation}gin{flalign*} \mbox{(II)}\lesssim&\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\left[\mathcal{D}(h)^{\Phirac{p-2}{2}}+\mathcal{D}(h)^{\Phirac{q-2}{2}}\right]^{l}\snr{a_{j}u_{j}Dv_{j}}^{l} \ \,{\rm d}y\nonumber \\ &+\varepsilon_{j}\int_{Q_{\varrho/2}}\left[\mathcal{D}(h)^{\Phirac{2m\ti{q}-2}{2}}\snr{a_{j}u_{h}Dv_{j}}\right]^{l} \ \,{\rm d}y\nonumber \\ \lesssim&\snr{h}^{l}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(p-2)}{2-l}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\nonumber \\ &+\snr{h}^{l}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(2q-p-2)}{2(2-l)}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\nonumber \\ &+\snr{h}^{l}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(m\ti{q}-1)}{2-l}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}. \mathbb Nd{flalign*} For $q\ge 2>p$, recalling \eqref{extra} we obtain \begin{equation}gin{flalign*} \mbox{(II)}\lesssim& \snr{h}^{l}\mu^{p-2}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i Dv_{j}}^{p} \ \,{\rm d}y\right)^{\Phirac{l}{p}}\nonumber \\ &+\snr{h}^{l}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(2q-p-2)}{2(2-l)}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\nonumber \\ &+\snr{h}^{l}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(m\ti{q}-1)}{2-l}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}. 
\mathbb Nd{flalign*} Finally, when $2>q\ge p$ we use \eqref{extra} to conclude that \begin{equation}gin{flalign*} \mbox{(II)}\lesssim&\snr{h}^{l}\mu^{p-2}\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i Dv_{j}}^{p} \ \,{\rm d}y\right)^{\Phirac{l}{p}}\nonumber \\ &+\int_{Q_{\varrho/2}\cap\{\mathcal{D}(h)\le 1\}}\varphi^{2l}\chi^{l}\left[\mathcal{D}(h)^{\Phirac{q-p}{2}}\mathcal{D}(h)^{\Phirac{p-2}{2}}\snr{a_{j}u_{h}Dv_{j}}\right]^{l} \ \,{\rm d}y\nonumber \\ &+\int_{Q_{\varrho/2}\cap\{\mathcal{D}(h)>1\}}\left[\mathcal{D}(h)^{\Phirac{q-2}{2}}\snr{a_{j}u_{h}Dv_{j}}\right]^{l} \ \,{\rm d}y\nonumber \\ &+\snr{h}^{l}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(m\ti{q}-1)}{2-l}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\nonumber \\ \lesssim&\snr{h}^{l}(\mu^{p-2}+1)\left(\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{\mathrm{d}i Dv_{j}}^{p} \ \,{\rm d}y\right)^{\Phirac{l}{p}}\nonumber \\ &+\snr{h}^{l}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\mathcal{D}(h)^{\Phirac{l(m\ti{q}-1)}{2-l}} \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\varphi^{2l}\chi^{l}\snr{V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}. \mathbb Nd{flalign*} Merging the content of all the previous displays and using Lemma \ref{diffquo}, \eqref{6} with $a_{j}u_{1},a_{j}u_{2}$ replaced by $\Phirac{\varrho}{4},\Phirac{\varrho}{2}$ respectively and \eqref{11}, we obtain \begin{equation}gin{flalign*} \limsup_{\snr{h}\to 0}&\int_{Q_{\varrho/4}}\snr{\mathrm{d}i a_{j}a_{j}(x,t,Dv_{j})}^{l}\ \,{\rm d}y\lesssim \nr{\gamma}_{L^{d}(\Omega_{T})}^{l}\left(\int_{Q_{\varrho/2}}1+\snr{Dv_{j}}^{s_{2}} \ \,{\rm d}y\right)^{\Phirac{l(q-1)}{s_{2}}}\nonumber \\ &+(\ti{\mu}^{p-2}+1)\left(\limsup_{\snr{h}\to 0}\int_{Q_{\varrho/2}}\snr{\mathrm{d}i Dv_{j}}^{p} \ \,{\rm d}y\right)^{\Phirac{l}{p}}\nonumber \\ &+\left(\int_{Q_{\varrho/2}}\left[1+\snr{Dv_{j}}^{s_{1}}\right] \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\limsup_{\snr{h}\to 0}\int_{Q_{\varrho/2}}\snr{\mathrm{d}i V_{\ti{\mu},p}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\nonumber \\ &+\left(\varepsilon_{j}\int_{Q_{\varrho/2}}\left[1+\snr{Dv_{j}}^{s_{1}}\right] \ \,{\rm d}y\right)^{\Phirac{2-l}{2}}\left(\limsup_{\snr{h}\to 0}\varepsilon_{j}\int_{Q_{\varrho/2}}\snr{\mathrm{d}i V_{\ti{\mu},2m\ti{q}}(Dv_{j})}^{2} \ \,{\rm d}y\right)^{\Phirac{l}{2}}\lesssim \varrho^{-\tilde{\theta}}. \mathbb Nd{flalign*} Finally, applying Fatou's lemma and Lemma \ref{diffquo} on the left-hand side of the chain of inequalities displayed above we obtain that \begin{equation}gin{flalign}\label{fr9.1} \int_{Q_{\varrho/4}}\snr{Da_{j}a_{j}(x,t,Dv_{j})}^{l} \ \,{\rm d}y\le c\varrho^{-\tilde{\theta}}, \mathbb Nd{flalign} with $c\equiv c(\texttt{data},\mathcal{C}_{f},\ti{\mu})$ and $\tilde{\theta}\equiv \tilde{\theta}(n,p,q,d)$. With \eqref{fr9.1} and a standard covering argument we deduce \eqref{fr1}. 
Now, whenever we consider a subset of type $\tilde{\Omega}\times(t_{1},t_{2})\Subset \Omega_{T}$ with $\tilde{\Omega}\Subset \Omega$ open, from \eqref{fr9.1} and \eqref{fr1} and a covering argument we have that \begin{flalign}\label{fr11} \nr{\diver \ \ti{a}_{j}(\cdot,\cdot,Dv_{j})}_{L^{l}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c\nr{D \ti{a}_{j}(\cdot,\cdot,Dv_{j})}_{L^{l}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c, \end{flalign} for $c\equiv c(\texttt{data},\mathcal{C}_{f},\ti{\mu},t_{1},T-t_{2},\,{\rm dist}(\tilde{\Omega},\partial \Omega))$. Finally, integrating by parts in \eqref{wfj} and using \eqref{fr11} we obtain that \begin{flalign}\label{fr12} \partial_{t}v_{j}\in L^{l}_{\operatorname{loc}}(\Omega_{T})\qquad \mbox{with} \ \ l\equiv l(n,p,q,d)\in (1,\min\{p,2\}). \end{flalign} \subsubsection*{Step 7: Convergence} A standard covering argument combined with Proposition \ref{fracsob}, \eqref{11}, \eqref{fr7}-\eqref{fr9} and \eqref{fr11}-\eqref{fr12} respectively then implies that, if $\tilde{\Omega}\Subset \Omega$ is any open subset and $(t_{1},t_{2})\Subset (0,T)$ is an interval, then \begin{flalign} &\nr{Dv_{j}}_{L^{s}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c\quad \mbox{for all} \ \ s\in\left[1,p+\frac{4}{\ti{n}}\right];\label{13.1}\\ &\nr{v_{j}}_{L^{p}(t_{1},t_{2};W^{1+\varsigma,p}(\tilde{\Omega}))}\le c\quad \mbox{for all} \ \ \varsigma\in\left(0,\min\left\{1,\frac{2}{p}\right\}\right);\label{13}\\ &\nr{v_{j}}_{W^{\iota,l}(t_{1},t_{2};L^{l}(\tilde{\Omega}))}\le c\quad \mbox{for all} \ \ \iota\in (0,1),\label{13.2} \end{flalign} with $c\equiv c(\texttt{data},s,\varsigma,\iota,\mathcal{C}_{f},t_{1},T-t_{2},\,{\rm dist}(\tilde{\Omega},\partial \Omega))$. Estimates \eqref{13} and \eqref{13.2} render that \begin{flalign*} \{v_{j}\} \ \mbox{is uniformly bounded in} \ W^{\iota,l}_{\operatorname{loc}}(0,T;L^{l}_{\operatorname{loc}}(\Omega))\cap L^{p}_{\operatorname{loc}}(0,T;W^{1+\varsigma,p}_{\operatorname{loc}}(\Omega)), \end{flalign*} therefore we can first choose $\iota\in \left(\frac{p-l}{lp},1\right)$ so that $l>\frac{p}{1+\iota p}$ and then apply Lemma \ref{al} with $a_{1}=l$, $a_{2}=p$, $\sigma=\iota$, $X=W^{1+\varsigma,p}_{\operatorname{loc}}(\Omega)$, $B=W^{1,l}_{\operatorname{loc}}(\Omega)$ and $Y=L^{l}_{\operatorname{loc}}(\Omega)$ to conclude that \begin{flalign}\label{cv2} \mbox{there exists a subsequence}\ \{v_{j}\} \ \mbox{strongly converging to} \ v \ \mbox{in} \ L^{l}_{\operatorname{loc}}(0,T;W^{1,l}_{\operatorname{loc}}(\Omega)), \end{flalign} where we also used that $l<p$. Using \eqref{13.1} we also see that, again up to subsequences, \begin{flalign}\label{cv3} Dv_{j}\rightharpoonup Dv\quad \mbox{in} \ \ L^{s}_{\operatorname{loc}}(\Omega_{T},\mathbb{R}^{n}) \quad \mbox{for all} \ \ s\in\left[1,p+\frac{4}{\ti{n}}\right], \end{flalign} which assures that \begin{flalign}\label{cv5} \nr{Dv}_{L^{s}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c(\texttt{data},s,\mathcal{C}_{f},t_{1},T-t_{2},\,{\rm dist}(\tilde{\Omega},\partial \Omega))\quad \mbox{and}\quad \left.v\right|_{\partial_{par}\Omega_{T}}=\left.f\right|_{\partial_{par}\Omega_{T}}.
\mathbb Nd{flalign} By \eqref{pq.1}, \eqref{cv2}, \eqref{cv3}, \eqref{cv5} and the interpolation inequality \begin{equation}gin{flalign*} \nr{Dv_{j}-Dv}_{L^{s}(\tilde{\Omega}\times (t_{1},t_{2}))}\le& \nr{Dv_{j}-Dv}_{L^{l}(\tilde{\Omega}\times (t_{1},t_{2})))}^{\theta}\nr{Dv_{j}-Dv}_{L^{p+\Phirac{4}{\tilde{n}}}(\tilde{\Omega}\times (t_{1},t_{2})))}^{1-\theta}\nonumber \\ \le &c\nr{Dv_{j}-Dv}_{L^{l}(\tilde{\Omega}\times (t_{1},t_{2})))}^{\theta} \mathbb Nd{flalign*} with $c\equiv c(\texttt{data},s,\mathcal{C}_{f},t_{1},T-t_{2},\,{\rm dist}(\tilde{\Omega},\partial \Omega))$ and \begin{equation}gin{flalign*} \Phirac{1}{s}=\Phirac{\tilde{n}\theta}{\tilde{n}p+4}+\Phirac{1-\theta}{l} \ \Longrightarrow \ \theta=\Phirac{(\tilde{n}p+4)(s-l)}{s(\tilde{n}p+4-\tilde{n}l)}, \mathbb Nd{flalign*} we can conclude that \begin{equation}gin{flalign}\label{cv4} Dv_{j}\to Dv\quad \mbox{in} \ \ L^{s}_{\operatorname{loc}}(0,T;L^{s}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{for all} \ \ s\in \left[1,p+\Phirac{4}{\tilde{n}}\right). \mathbb Nd{flalign} Once \eqref{cv4} is available, we can look back at \eqref{2} with $g\equiv 1$, send first $j\to \infty$ and then $\snr{h}\to 0$ and rearrange the right-hand side with the help of \eqref{10} to obtain \eqref{difff}. Moreover, using \eqref{cv4}, $\eqref{refreg}_{1}$ and the dominated convergence theorem, we can pass to the limit in \eqref{wfj} to conclude that $v$ satisfies \begin{equation}gin{flalign}\label{cv6} \int_{\Omega_{T}}\left[v\partial_{t}\varphi-a(x,t,Dv)\cdot D\varphi\right] \ \,{\rm d}y=0\quad \mbox{for all} \ \ \varphi\in C^{\infty}_{c}(\Omega_{T}). \mathbb Nd{flalign} Once \eqref{sss}, \eqref{cv6} and $\eqref{difff}$ are available, we can repeat the same computations leading to \eqref{fr1}-\eqref{fr12} with $\ti{a}(\cdot), v$ replacing $a_{j}a_{j}(\cdot), v_{j}$ to obtain \eqref{cv7}. \subsubsection*{Step 8: The initial boundary condition} With \eqref{cv6}, the energy estimate \eqref{unibd} and the continuity of $f$ in time prescribed by $\eqref{ggg}_{1}$, we can proceed exactly as in \cite[Section 6.5]{bdm} to verify the requirements of Definition \ref{d.1} (formulated for $v$ and $\ti{a}(\cdot)$ of course). \section{Gradient bounds}\label{mose} This section is divided into two parts: in the first one we construct a sequence of maps satisfying suitable uniform estimates and in the second we prove that such a sequence converges to a weak solution of problem \eqref{pdd}. \subsection{Uniform $L^{\infty}$-estimates}\label{inf} We consider again Cauchy-Dirichlet problem \eqref{pdd} with $a(\cdot)$ described by \eqref{regg}-\eqref{pq} and $f$ as in \eqref{ggg}. To construct a suitable family of approximating problems, this time we only regularize the vector field $a(\cdot)$ in the gradient variable by convolution against a sequence $\{\phi_{j}\}$ of mollifiers of $\mathbb{R}^{n}$ with the following features: \begin{equation}gin{flalign*} \phi\in C^{\infty}_{c}(B_{1}), \quad \nr{\phi}_{L^{1}(\mathbb{R}^{n})}=1,\quad \phi_{j}(x):=j^{n}\phi(jx),\quad B_{3/4}\subset \,{\rm supp} (\phi). 
\end{flalign*}
This leads to the definition of the approximating vector field
\begin{flalign}\label{lip3}
a_{j}(x,t,z):=\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{B_{1}}a(x,t,z+j^{-1}z')\phi(z') \ \,{\rm d}z',
\end{flalign}
satisfying the structural conditions
\begin{flalign}\label{reggej}
\begin{cases}
\ t\mapsto a_{j}(x,t,z)\quad &\mbox{measurable for all} \ \ x\in \Omega, z\in \mathbb{R}^{n}\\
\ x\mapsto a_{j}(x,t,z)\quad &\mbox{differentiable for all} \ \ t\in (0,T),z\in \mathbb{R}^{n}\\
\ z\mapsto a_{j}(x,t,z)\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\quad &\mbox{for all} \ \ (x,t)\in \Omega_{T}
\end{cases}
\end{flalign}
and
\begin{flalign}\label{refregj}
\begin{cases}
\ \snr{a_{j}(x,t,z)}+H_{j}(z)^{\frac{1}{2}}\snr{\partial_{z}a_{j}(x,t,z)}\le c\left[H_{j}(z)^{\frac{p-1}{2}}+H_{j}(z)^{\frac{q-1}{2}}\right]\\
\ \partial_{z}a_{j}(x,t,z)\xi\cdot \xi\ge cH_{j}(z)^{\frac{p-2}{2}}\snr{\xi}^{2}\\
\ \snr{\partial_{x}a_{j}(x,t,z)}\le c\gamma(x,t)\left[H_{j}(z)^{\frac{p-1}{2}}+H_{j}(z)^{\frac{q-1}{2}}\right],
\end{cases}
\end{flalign}
for all $(x,t)\in \Omega_{T}$, $z,\xi\in \mathbb{R}^{n}$, $\gamma$ as in \eqref{gamma}, with $c\equiv c(n,\nu,L,p,q)$, see \cite[Section 4.5]{dm} for more details on this matter. In \eqref{refregj},
\begin{flalign*}
\mu_{j}:=\mu+j^{-1}>0\quad \mbox{and}\quad H_{j}(z):=(\mu_{j}^{2}+\snr{z}^{2}).
\end{flalign*}
We then define problem
\begin{flalign}\label{pdjj}
\begin{cases}
\ \partial_{t}v-\diver \ a_{j}(x,t,Dv)=0\quad &\mbox{in} \ \ \Omega_{T}\\
\ v=f\quad &\mbox{on} \ \ \partial_{par}\Omega_{T},
\end{cases}
\end{flalign}
with $f$ as in \eqref{ggg}. By \eqref{reggej}-\eqref{refregj}, we see that the assumptions of Proposition \ref{p1} are satisfied, thus problem \eqref{pdjj} admits a solution $u_{j}\in L^{p}(0,T;W^{1,p}(\Omega))$ in the sense of Definition \ref{d.1}, satisfying \eqref{sss}, \eqref{cv7} and \eqref{difff}. In particular, \eqref{sss} allows us to test \eqref{wfp} against test functions defined as products of $u_{j}$ with suitable cut-off functions, therefore, for such a solution, we can repeat almost the same computations leading to \eqref{5} (with $\varepsilon_{j}\equiv 0$, of course), obtaining
\begin{flalign}\label{lip0}
&\frac{1}{2}\int_{B_{\varrho}}\varphi^{2}\chi\left(\int_{0}^{\snr{D u_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\
&\quad+\int_{Q_{\varrho}}\varphi^{2}\chi\hat{\mathcal{G}}(Du_{j})\snr{D V_{\mu_{j},p}(Du_{j})}^{2} \ \,{\rm d}y\nonumber \\
&\quad\le c\left(\int_{Q_{\varrho}}\chi^{m}\left(\snr{D\varphi}^{2m}+\varphi^{2m}\right)\hat{\mathcal{G}}(Du_{j})^{m}\left[1+H_{j}(Du_{j})^{m\left(q-\frac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\frac{1}{m}}\nonumber \\
&\quad+c\int_{Q_{\varrho}}\left(\int_{0}^{\snr{Du_{j}}^{2}}g(s) \ \,{\rm d}s\right) \ \varphi^{2}\partial_{t}\chi \ \,{\rm d}y,
\end{flalign}
with $c\equiv c(\texttt{data})$, $g\in W^{1,\infty}(\mathbb{R})$ non-negative with bounded, non-negative, piecewise continuous first derivative, $\varphi\in C^{\infty}_{c}(B_{\varrho},[0,1])$ and $\chi\in W^{1,\infty}([0,T])$. The quantity $\hat{\mathcal{G}}(Du_{j})$ is defined as in \emph{Step 3} of the proof of Proposition \ref{p1}, clearly with $u_{j}$ replacing $v_{j}$.
For $i\in \mathbb{N}$, we inductively define radii $\varrho_{i}:=\tau_{1}+(\tau_{2}-\tau_{1})2^{-i+1}$ with $\frac{\varrho}{2}\le \tau_{1}<\tau_{2}\le \varrho$, select cut-off functions $\varphi_{i}\in C^{1}_{c}(B_{\varrho})$ so that
\begin{flalign*}
&\mathds{1}_{B_{\varrho_{i+1}}}\le \varphi_{i}\le \mathds{1}_{B_{\varrho_{i}}}\quad \mbox{and}\quad \snr{D\varphi_{i}}\le \frac{4}{\varrho_{i}-\varrho_{i+1}}=\frac{2^{i+2}}{(\tau_{2}-\tau_{1})}
\end{flalign*}
and $\chi_{i}\in W^{1,\infty}_{0}((t_{0}-\varrho^{2},t_{0}),[0,1])$ satisfying
\begin{flalign*}
&\chi_{i}(t_{0}-\varrho_{i}^{2})=0,\quad \chi_{i}\equiv 1 \ \ \mbox{on} \ \ (t_{0}-\varrho_{i+1}^{2},t_{0}),\quad \snr{\partial_{t}\chi_{i}}\le \frac{4}{(\varrho_{i}-\varrho_{i+1})^{2}}\le \frac{2^{2(i+1)}}{(\tau_{2}-\tau_{1})^{2}}
\end{flalign*}
and numbers
\begin{flalign}\label{ki}
\kappa_{1}\equiv 0,\qquad \kappa_{i}:=\frac{\Gamma}{m}+\omega \kappa_{i-1} \ \ \mbox{for} \ \ i\ge 2,\qquad \alpha_{i}:=m\ti{q}+m\kappa_{i},
\end{flalign}
where we set
\begin{flalign}
\omega:=\frac{1}{m}\left[1+\frac{2}{\ti{n}}\right]\stackrel{\eqref{gamma}}{>}1\quad \mbox{and}\quad \Gamma:=\frac{p}{2}+\frac{2}{\tilde{n}}-m\ti{q}\stackrel{\eqref{pq}}{>}0.\label{s1}
\end{flalign}
In \eqref{lip0} we take $\varphi\equiv \varphi_{i}$, $\chi\equiv \chi_{i}$ and, for $M>0$, set
\begin{flalign*}
g(s)\equiv g_{i,M}(s):=\begin{cases}
\ (\mu_{j}^{2}+s)^{\kappa_{i}}&\quad \mbox{if} \ \ s\le M\\
\ (\mu_{j}^{2}+M)^{\kappa_{i}}&\quad \mbox{if} \ \ s> M,
\end{cases}
\end{flalign*}
which is admissible by construction in \eqref{lip0}. Clearly,
\begin{flalign}\label{lip6}
g_{i,M}(s)\le (\mu_{j}^{2}+s)^{\kappa_{i}}\quad \mbox{for all} \ \ s\in [0,\infty).
\end{flalign}
All in all, \eqref{lip0} becomes
\begin{flalign}\label{lip1}
&\frac{1}{2}\int_{B_{\varrho}}\varphi_{i}^{2}\chi_{i}\left(\int_{0}^{\snr{D u_{j}}^{2}}g_{i,M}(s) \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\
&\quad+\int_{Q_{\varrho}}\varphi_{i}^{2}\chi_{i}\hat{\mathcal{G}}_{i,M}(Du_{j})\snr{D V_{\mu_{j},p}(Du_{j})}^{2} \ \,{\rm d}y\nonumber \\
&\quad\le c\left(\int_{Q_{\varrho}}\chi_{i}^{m}\left(\snr{D\varphi_{i}}^{2m}+\varphi_{i}^{2m}\right)\hat{\mathcal{G}}_{i,M}(Du_{j})^{m}\left[1+H_{j}(Du_{j})^{m\left(q-\frac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\frac{1}{m}}\nonumber \\
&\quad+c\int_{Q_{\varrho}}\left(\int_{0}^{\snr{Du_{j}}^{2}}g_{i,M}(s) \ \,{\rm d}s\right) \ \varphi_{i}^{2}\partial_{t}\chi_{i} \ \,{\rm d}y,
\end{flalign}
where we defined $\hat{\mathcal{G}}_{i,M}$ in the obvious way: $\hat{\mathcal{G}}_{i,M}(z):=\left(g_{i,M}(\snr{z}^{2})+\snr{z}^{2}g_{i,M}'(\snr{z}^{2})\right)$. As we only know that $\{u_{j}\}$ satisfies \eqref{sss}-\eqref{difff}, we have to proceed inductively. We shall prove that
\begin{flalign}\label{lip2}
H_{j}(Du_{j})^{\alpha_{i}}\in L^{1}(Q_{\varrho_{i}})\Rightarrow H_{j}(Du_{j})^{\alpha_{i+1}}\in L^{1}(Q_{\varrho_{i+1}})\quad \mbox{for all} \ \ i\in \mathbb{N}.
\end{flalign}
\subsubsection*{Basic step}
Let us verify \eqref{lip2} for $i=1$.
In this case, we immediately see that $\hat{\mathcal{G}}_{1,M}(Du_{j})\equiv 1$ and notice that, since the approximating sequence $\{u_{j}\}$ we choose satisfies \eqref{sss}-\eqref{difff}, all the computations made in \emph{Step 3} of Section \ref{high} are legal without further corrections to the growth of the vector field defined in \eqref{lip3}. Moreover, a quick inspection of estimates \eqref{5}-\eqref{6} shows that the dependence of the constants on $\mathcal{C}_{f}$ is due only to the presence of the term multiplying $\varepsilon_{j}$, which, in the present case, is zero. Hence, \eqref{lip1} becomes \eqref{6} with $\varepsilon_{j}\equiv 0$, $\varphi\equiv \varphi_{1}$ and $\chi\equiv \chi_{1}$. Since $\alpha_{1}=m\ti{q}$ and $\alpha_{2}=\frac{p}{2}+\frac{2}{\ti{n}}$, we can easily deduce from \eqref{7} (with $\tau_{1}=\varrho_{2}$, $\tau_{2}=\varrho_{1}$ and no dependence of the constants on $\mathcal{C}_{f}$) that $H_{j}(Du_{j})^{\alpha_{2}}\in L^{1}(Q_{\varrho_{2}})$.
\subsubsection*{Induction step}
We assume now that
\begin{flalign}\label{lip5}
H_{j}(Du_{j})^{\alpha_{i}}\in L^{1}(Q_{\varrho_{i}})
\end{flalign}
and expand into \eqref{lip1} the expression of $\hat{\mathcal{G}}_{i,M}(Du_{j})$ to get, after a few standard manipulations:
\begin{flalign*}
\frac{1}{2}\int_{B_{\varrho_{i}}}&\varphi_{i}\chi_{i}\left(\int_{0}^{\min\{\snr{Du_{j}}^{2},M\}}(\mu_{j}^{2}+s)^{\kappa_{i}} \ \,{\rm d}s\right) \ \,{\rm d}x\nonumber \\
&+\int_{Q_{\varrho_{i}}\cap\{\snr{Du_{j}}^{2} \le M\}}\varphi_{i}^{2}\chi_{i}(\mu_{j}^{2}+\snr{Du_{j}}^{2})^{\kappa_{i}}\snr{DV_{\mu_{j},p}(Du_{j})}^{2} \ \,{\rm d}y\nonumber \\
\le &c(1+\kappa_{i})\left(\int_{Q_{\varrho_{i}}}\chi_{i}^{m}\left(\snr{D\varphi_{i}}^{2m}+\varphi_{i}^{2m}\right)\left[1+H_{j}(Du_{j})^{m\left(\kappa_{i}+q-\frac{p}{2}\right)}\right] \ \,{\rm d}y\right)^{\frac{1}{m}}\nonumber \\
&+\frac{c}{1+\kappa_{i}}\int_{Q_{\varrho_{i}}}\varphi_{i}^{2}\partial_{t}\chi_{i}H_{j}(Du_{j})^{1+\kappa_{i}} \ \,{\rm d}y\nonumber \\
\le &c(1+\kappa_{i})\left(\int_{Q_{\varrho_{i}}}\left[\chi_{i}^{m}\left(\snr{D\varphi_{i}}^{2m}+\varphi_{i}^{2m}\right)+\varphi_{i}^{2m}\snr{\partial_{t}\chi_{i}}^{m}\right]\left[1+H_{j}(Du_{j})^{m(\kappa_{i}+\ti{q})}\right] \ \,{\rm d}y\right)^{\frac{1}{m}},
\end{flalign*}
for $c\equiv c(\texttt{data})$. For the inequality in the previous display we used in particular \eqref{lip6} and the definition of $\varphi_{i},\chi_{i}$. Now we can send $M\to \infty$ in the previous display and apply Fatou's Lemma on the left-hand side, the dominated convergence theorem, $\eqref{ki}_{3}$ and \eqref{lip5} on the right-hand side to conclude with
\begin{flalign}\label{lip7}
\frac{1}{2}\int_{B_{\varrho_{i}}}&\varphi_{i}\chi_{i}H_{j}(Du_{j})^{1+\kappa_{i}} \ \,{\rm d}x+(1+\kappa_{i})\int_{Q_{\varrho_{i}}}\varphi_{i}^{2}\chi_{i}H_{j}(Du_{j})^{\kappa_{i}}\snr{DV_{\mu_{j},p}(Du_{j})}^{2} \ \,{\rm d}y\nonumber \\
\le &c(1+\kappa_{i})^{2}\left(\int_{Q_{\varrho_{i}}}\left[\chi_{i}^{m}\left(\snr{D\varphi_{i}}^{2m}+\varphi_{i}^{2m}\right)+\varphi_{i}^{2m}\snr{\partial_{t}\chi_{i}}^{m}\right]\left[1+H_{j}(Du_{j})^{\alpha_{i}}\right] \ \,{\rm d}y\right)^{\frac{1}{m}},
\end{flalign}
where $c\equiv c(\texttt{data})$.
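For the reader's convenience, we record two elementary pointwise bounds, which follow directly from the definition of $g_{i,M}$ and from \eqref{lip6}, and which account for the factors $(1+\kappa_{i})$ and $(1+\kappa_{i})^{-1}$ appearing in the last display:
\begin{flalign*}
\hat{\mathcal{G}}_{i,M}(z)=g_{i,M}(\snr{z}^{2})+\snr{z}^{2}g_{i,M}'(\snr{z}^{2})\le (1+\kappa_{i})H_{j}(z)^{\kappa_{i}}\quad \mbox{and}\quad \int_{0}^{\snr{z}^{2}}g_{i,M}(s) \ \,{\rm d}s\le \frac{H_{j}(z)^{1+\kappa_{i}}}{1+\kappa_{i}},
\end{flalign*}
valid for all $z\in \mathbb{R}^{n}$.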
Next, with \eqref{difff} at hand, we compute
\begin{flalign*}
\snr{DH_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{4}}}^{2}=\left(\frac{p+2\kappa_{i}}{p}\right)^{2}H_{j}(Du_{j})^{\kappa_{i}}\snr{DH_{j}(Du_{j})^{\frac{p}{4}}}^{2}
\end{flalign*}
and
\begin{flalign*}
\snr{DV_{\mu_{j},p}(Du_{j})}^{2}=&\left(\frac{p-2}{2}\right)^{2}H_{j}(Du_{j})^{\frac{p-6}{2}}\snr{Du_{j}\cdot D^{2}u_{j}}^{2}\snr{Du_{j}}^{2}\nonumber \\
&+H_{j}(Du_{j})^{\frac{p-2}{2}}\snr{D^{2}u_{j}}^{2}+(p-2)H_{j}(Du_{j})^{\frac{p-4}{2}}\snr{Du_{j}\cdot D^{2}u_{j}}^{2}\nonumber \\
\ge &\min\{1,(p-1)\}H_{j}(Du_{j})^{\frac{p-2}{2}}\snr{D^{2}u_{j}}^{2},
\end{flalign*}
so, keeping in mind that
\begin{flalign*}
\snr{DH_{j}(Du_{j})^{\frac{p}{4}}}^{2}\le \left(\frac{p}{2}\right)^{2}H_{j}(Du_{j})^{\frac{p-2}{2}}\snr{D^{2}u_{j}}^{2}
\end{flalign*}
we end up with
\begin{flalign}\label{19}
\snr{DH_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{4}}}^{2}\le \frac{(p+2\kappa_{i})^{2}}{4\min\{p-1,1\}}H_{j}(Du_{j})^{\kappa_{i}}\snr{DV_{\mu_{j},p}(Du_{j})}^{2}.
\end{flalign}
Plugging \eqref{19} into \eqref{lip7} we obtain, after a routine calculation,
\begin{flalign}\label{lip8}
&\sup_{t_{0}-\varrho_{i}^{2}<t<t_{0}}\int_{B_{\varrho_{i}}}\varphi_{i}^{2}\chi_{i}H_{j}(Du_{j})^{1+\kappa_{i}} \ \,{\rm d}x+\int_{Q_{\varrho_{i}}}\chi_{i}\snr{D(\varphi_{i}[H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{4}}+1])}^{2} \ \,{\rm d}y\nonumber \\
&\qquad\le \sup_{t_{0}-\varrho_{i}^{2}<t<t_{0}}\int_{B_{\varrho_{i}}}\varphi_{i}\chi_{i}H_{j}(Du_{j})^{1+\kappa_{i}} \ \,{\rm d}x\nonumber \\
&\qquad +c\int_{Q_{\varrho_{i}}}\chi_{i}\left[\varphi_{i}^{2}\snr{DH_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{4}}}^{2}+\snr{D\varphi_{i}}^{2}\left(H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{2}}+1\right)\right] \ \,{\rm d}y\nonumber \\
&\qquad\le c(1+\kappa_{i})^{4}\left(\int_{Q_{\varrho_{i}}}\left[\chi_{i}^{m}\left(\snr{D\varphi_{i}}^{2m}+\varphi_{i}^{2m}\right)+\varphi_{i}^{2m}\snr{\partial_{t}\chi_{i}}^{m}\right]\left[1+H_{j}(Du_{j})^{\alpha_{i}}\right] \ \,{\rm d}y\right)^{\frac{1}{m}},
\end{flalign}
with $c\equiv c(\texttt{data})$. For $\ti{n}$ as in \eqref{tn}, we define $\ti{\sigma}_{i}:=2(1+\kappa_{i})\ti{n}^{-1}$. On a fixed time slice, we apply in sequence H\"older and Sobolev-Poincar\'e inequalities to get
\begin{flalign}\label{lip9}
\int_{B_{\varrho_{i}}}&\varphi_{i}^{2\left(1+\frac{2}{\tilde{n}}\right)}H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{2}+\ti{\sigma}_{i}} \ \,{\rm d}x\nonumber \\
\le& \left(\int_{B_{\varrho_{i}}}\left[\varphi_{i}^{2}(H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{2}}+1)\right]^{\frac{\tilde{n}}{\tilde{n}-2}} \ \,{\rm d}x\right)^{\frac{\tilde{n}-2}{\tilde{n}}}\left(\int_{B_{\varrho_{i}}}\varphi_{i}^{2}H_{j}(Du_{j})^{\ti{\sigma}_{i}\frac{\tilde{n}}{2}} \ \,{\rm d}x\right)^{\frac{2}{\tilde{n}}}\nonumber \\
\le &c\left(\int_{B_{\varrho_{i}}}\snr{D[\varphi_{i}(H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{4}}+1)]}^{2} \ \,{\rm d}x\right)\left(\int_{B_{\varrho_{i}}}\varphi_{i}^{2}H_{j}(Du_{j})^{\ti{\sigma}_{i}\frac{\tilde{n}}{2}} \ \,{\rm d}x\right)^{\frac{2}{\tilde{n}}},
\end{flalign}
for $c\equiv c(n,p,q,d)$.
Now we multiply both sides of \eqref{lip9} by $\chi_{i}^{\frac{\tilde{n}+2}{\tilde{n}}}$, integrate with respect to $t\in (t_{0}-\varrho_{i}^{2},t_{0})$, take the supremum over $t\in (t_{0}-\varrho_{i}^{2},t_{0})$ on the right-hand side, use \eqref{lip8} and eventually obtain
\begin{flalign}\label{lip10}
\int_{Q_{\varrho_{i}}}&(\varphi_{i}^{2}\chi_{i})^{1+\frac{2}{\tilde{n}}}\left[1+H_{j}(Du_{j})^{\frac{p+2\kappa_{i}}{2}+\ti{\sigma}_{i}}\right] \ \,{\rm d}y\nonumber \\
\le &c(1+\kappa_{i})^{4\left(1+\frac{2}{\tilde{n}}\right)}\left(\int_{Q_{\varrho_{i}}}\left[\chi_{i}^{m}\left(\varphi_{i}^{2m}+\snr{D\varphi_{i}}^{2m}\right)+\varphi_{i}^{2m}\snr{\partial_{t}\chi_{i}}^{m}\right]\left[1+H_{j}(Du_{j})^{\alpha_{i}}\right] \ \,{\rm d}y\right)^{\frac{1}{m}\left(1+\frac{2}{\tilde{n}}\right)},
\end{flalign}
where $c\equiv c(\texttt{data})$. In the light of \eqref{ki}-\eqref{s1} we have
\begin{flalign}\label{lip11}
\frac{p}{2}+\kappa_{i}+\ti{\sigma}_{i}=&\frac{p}{2}+\frac{2}{\ti{n}}+m\omega\kappa_{i}=\left(\frac{p}{2}+\frac{2}{\ti{n}}-m\ti{q}\right)+m\left(\ti{q}+\omega\kappa_{i}\right)\nonumber \\
=&m\left(\frac{\Gamma}{m}+\ti{q}+\omega\kappa_{i}\right)=m\left(\ti{q}+\kappa_{i+1}\right)=\alpha_{i+1},
\end{flalign}
so, recalling also the definition of $\chi_{i},\varphi_{i}$, \eqref{lip10} becomes
\begin{flalign*}
\int_{Q_{\varrho_{i+1}}}H_{j}(Du_{j})^{\alpha_{i+1}} \ \,{\rm d}y\le \frac{c(\texttt{data},i)}{(\varrho_{i}-\varrho_{i+1})^{2}}\left(\int_{Q_{\varrho_{i}}}\left[1+H_{j}(Du_{j})^{\alpha_{i}}\right] \ \,{\rm d}y\right)^{\frac{1}{m}\left(1+\frac{2}{\tilde{n}}\right)}\stackrel{\eqref{lip5}}{<}\infty
\end{flalign*}
and \eqref{lip5} is proved for all $i\in \mathbb{N}$.\\\\
Now that we know that the quantity appearing on the right-hand side of \eqref{lip10} is finite for all $i\in \mathbb{N}$, we define
\begin{flalign*}
A_{i}:=\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho_{i}}}\left[1+H_{j}(Du_{j})^{\alpha_{i}}\right] \ \,{\rm d}y\right)^{\frac{1}{\alpha_{i}}}.
\end{flalign*}
From the definitions in \eqref{ki}, it is easy to see that whenever $i\ge 2$
\begin{flalign*}
\kappa_{i}=\frac{\Gamma}{m}\sum_{l=0}^{i-2}\omega^{l}\quad \mbox{and}\quad \alpha_{i}=m\ti{q}+\Gamma\sum_{l=0}^{i-2}\omega^{l},
\end{flalign*}
so $\eqref{s1}_{2}$ yields that $\alpha_{i}\to \infty$. In these terms, \eqref{lip10} can be rearranged as
\begin{flalign}\label{lip12}
A_{i+1}\le \left[\frac{c2^{4i}(1+\kappa_{i})^{2}}{(\tau_{2}-\tau_{1})^{2}}\right]^{\frac{2m\omega}{\alpha_{i+1}}}A_{i}^{\frac{\omega\alpha_{i}}{\alpha_{i+1}}},
\end{flalign}
for $c\equiv c(\texttt{data})$. Iterating \eqref{lip12} we obtain
\begin{flalign}\label{lip13}
A_{i+1}\le \left(\frac{c}{\tau_{2}-\tau_{1}}\right)^{\frac{4m}{\alpha_{i+1}}\sum_{l=1}^{i}\omega^{l}}\prod_{l=0}^{i-1}\left[2^{4(i-l)}(1+\kappa_{i-l})^{2}\right]^{\frac{2m\omega^{l}}{\alpha_{i+1}}}A_{1}^{\frac{\omega^{i}\alpha_{1}}{\alpha_{i+1}}}.
\end{flalign}
Let us study the asymptotics of the various constants appearing in \eqref{lip13}.
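To this end, a short preparatory computation: summing the geometric series coming from \eqref{ki}, for every $i\ge 2$ we may write
\begin{flalign*}
\kappa_{i}=\frac{\Gamma}{m}\cdot\frac{\omega^{i-1}-1}{\omega-1}\quad \mbox{and}\quad \alpha_{i+1}=m\ti{q}+\Gamma\cdot\frac{\omega^{i}-1}{\omega-1},
\end{flalign*}
so that, being $\omega>1$, $\alpha_{i+1}$ behaves like $\Gamma\omega^{i}(\omega-1)^{-1}$ as $i\to \infty$; this is all that is needed in the limits below.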
We have:
\begin{flalign*}
\lim_{i\to \infty}\frac{4m}{\alpha_{i+1}}\sum_{l=1}^{i}\omega^{l}=\frac{4m\omega}{\Gamma}, \qquad \lim_{i\to \infty}\frac{\omega^{i}\alpha_{1}}{\alpha_{i+1}}=\frac{m\ti{q}(\omega-1)}{\Gamma}
\end{flalign*}
and
\begin{flalign*}
\lim_{i\to \infty}\prod_{l=0}^{i-1}&\left[2^{4(i-l)}(1+\kappa_{i-l})^{2}\right]^{\frac{2m\omega^{l}}{\alpha_{i+1}}}\nonumber \\
\le& \exp\left\{\frac{4m(\omega-1)}{\Gamma}\log\left(4\max\left\{2,\frac{\Gamma}{m(\omega-1)}\right\}\right)\left[\sum_{l=1}^{\left[\frac{1}{\log\omega}\right]+1}\omega^{-l}l+\frac{1+e^{-1}}{\log\omega}\right]\right\},
\end{flalign*}
where we also used that
\begin{flalign*}
\sum_{l=1}^{\infty}\omega^{-l}l\le \sum_{l=1}^{\left[\frac{1}{\log\omega}\right]+1}\omega^{-l}l+\frac{1+e^{-1}}{\log\omega}.
\end{flalign*}
As
\begin{flalign}\label{lip14}
&\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho_{i+1}}}H_{j}(Du_{j})^{\alpha_{i+1}} \ \,{\rm d}y\right)^{\frac{1}{\alpha_{i+1}}}\le A_{i+1}\nonumber \\
&\qquad \le\left(\frac{c}{\tau_{2}-\tau_{1}}\right)^{\frac{4m}{\alpha_{i+1}}\sum_{l=1}^{i}\omega^{l}}\prod_{l=0}^{i-1}\left[2^{4(i-l)}(1+\kappa_{i-l})^{2}\right]^{\frac{2m\omega^{l}}{\alpha_{i+1}}}A_{1}^{\frac{\omega^{i}\alpha_{1}}{\alpha_{i+1}}},
\end{flalign}
we can pass to the limit in \eqref{lip14} to conclude that
\begin{flalign}\label{lip16}
&\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\tau_{1}})}\le \frac{c}{(\tau_{2}-\tau_{1})^{\theta'}}\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\tau_{2}}}\left[1+H_{j}(Du_{j})^{m\ti{q}}\right] \ \,{\rm d}y\right)^{\frac{(\omega-1)}{\Gamma}}\nonumber \\
&\quad\le \frac{c}{(\tau_{2}-\tau_{1})^{\theta}}\left[1+\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\tau_{2}})}^{\left(m\ti{q}-\frac{p}{2}\right)\frac{(\omega-1)}{\Gamma}}\right]\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho}}\left[1+H_{j}(Du_{j})^{\frac{p}{2}}\right] \ \,{\rm d}y\right)^{\frac{(\omega-1)}{\Gamma}},
\end{flalign}
with $c\equiv c(\texttt{data})$, $\theta'\equiv \theta'(n,p,q,d)$ and $\theta:=\theta'+(n+2)(\omega-1)\Gamma^{-1}$. Recalling the definition given in \eqref{r} and the restriction imposed in \eqref{pq}, it is easy to see that
\begin{flalign}\label{lip15}
\Gamma^{-1}\left(m\ti{q}-\frac{p}{2}\right)(\omega-1)<1.
\end{flalign}
In fact, verifying \eqref{lip15} is equivalent to checking the validity of the following inequality
\begin{flalign*}
\ti{q}<\frac{p}{2m}+\frac{2}{\omega\ti{n}m},
\end{flalign*}
which is satisfied by means of \eqref{pq} and \eqref{tn}.
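For completeness, the computation behind this equivalence is elementary: recalling \eqref{s1} and that $\Gamma>0$,
\begin{flalign*}
\Gamma^{-1}\left(m\ti{q}-\frac{p}{2}\right)(\omega-1)<1 \ &\Longleftrightarrow \ \left(m\ti{q}-\frac{p}{2}\right)(\omega-1)<\frac{p}{2}+\frac{2}{\ti{n}}-m\ti{q}\\
&\Longleftrightarrow \ \left(m\ti{q}-\frac{p}{2}\right)\omega<\frac{2}{\ti{n}} \ \Longleftrightarrow \ \ti{q}<\frac{p}{2m}+\frac{2}{\omega\ti{n}m}.
\end{flalign*}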
So we can apply Young inequality with conjugate exponents $(b_{1},b_{2}):=\left(\frac{2\Gamma}{(2m\ti{q}-p)(\omega-1)},\frac{2\Gamma}{2\Gamma-(2m\ti{q}-p)(\omega-1)}\right)$ in \eqref{lip16} to end up with
\begin{flalign}\label{lip16.1}
&\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\tau_{1}})}\le \frac{1}{2}\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\tau_{2}})}\nonumber \\
&\quad+\frac{c}{(\tau_{2}-\tau_{1})^{\theta}}\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho}}\left[1+H_{j}(Du_{j})^{\frac{p}{2}}\right] \ \,{\rm d}y\right)^{\frac{(\omega-1)}{\Gamma}} +\frac{c}{(\tau_{2}-\tau_{1})^{\theta b_{2}}}\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho}}\left[1+H_{j}(Du_{j})^{\frac{p}{2}}\right] \ \,{\rm d}y\right)^{\frac{(\omega-1)b_{2}}{\Gamma}}\nonumber \\
&\quad \le \frac{1}{2}\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\tau_{2}})}+\frac{c}{(\tau_{2}-\tau_{1})^{\theta b_{2}}}\left[1+\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho}}H_{j}(Du_{j})^{\frac{p}{2}} \ \,{\rm d}y\right)^{\frac{(\omega-1)b_{2}}{\Gamma}}\right],
\end{flalign}
with $c\equiv c(\texttt{data})$. Now we apply Lemma \ref{l0} to \eqref{lip16.1} to conclude that
\begin{flalign}\label{lip17}
\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\varrho/2})}\le \frac{c}{\varrho^{\beta_{1}}}\left[1+\left(\mathop{\int\hskip -1.05em -\, \!\!\!}\nolimits_{Q_{\varrho}}H_{j}(Du_{j})^{\frac{p}{2}} \ \,{\rm d}y\right)^{\beta_{2}}\right],
\end{flalign}
for $c\equiv c(\texttt{data})$, $\beta_{1}:=\theta b_{2}$ and $\beta_{2}:=\frac{(\omega-1)b_{2}}{\Gamma}$.
\subsection{Proof of Theorem \ref{t1}}
Let $\{u_{j}\}$ be the sequence built in Section \ref{inf}. Since, for each $j\in \mathbb{N}$, $u_{j}$ solves problem \eqref{pdjj}, which is driven by the nonlinear tensor $a_{j}(\cdot)$ defined in \eqref{lip3} (thus satisfying in particular \eqref{refregj}) and has boundary datum $f$ described by \eqref{ggg}, we deduce that the uniform energy bound \eqref{unibd} holds true. Hence, combining \eqref{unibd} with \eqref{lip17} we obtain that
\begin{flalign*}
\nr{H_{j}(Du_{j})}_{L^{\infty}(Q_{\varrho/2})}\le\frac{c}{\varrho^{\beta}}\left[\nr{Df}_{L^{r}(\Omega_{T})}^{r}+\nr{\partial_{t}f}^{p'}_{L^{p'}(0,T;W^{-1,p'}(\Omega))}+1\right],
\end{flalign*}
with $\beta:=\beta_{1}+(n+2)\beta_{2}$ and $c\equiv c(\texttt{data})$. Whenever $(t_{1},t_{2})\Subset (0,T)$ and $\tilde{\Omega}\Subset \Omega$ is open, a standard covering argument and the content of the above display render that
\begin{flalign}\label{lip18}
\nr{Du_{j}}_{L^{\infty}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c(\texttt{data},\mathcal{C}_{f},\,{\rm dist}(\tilde{\Omega},\partial \Omega),t_{1},T-t_{2}).
\end{flalign}
Estimates \eqref{unibd} and \eqref{lip18} in turn imply that there exists a function $u\in L^{p}(0,T;W^{1,p}(\Omega))$ with gradient $Du\in L^{\infty}_{\operatorname{loc}}(0,T;L^{\infty}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))$ so that
\begin{flalign}\label{lip19}
\begin{cases}
\ u_{j}\rightharpoonup u \quad &\mbox{in} \ \ L^{p}(0,T;W^{1,p}(\Omega))\\
\ Du_{j}\rightharpoonup^{*} Du\quad &\mbox{in} \ \ L^{\infty}_{\operatorname{loc}}(0,T;L^{\infty}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\\
\ u=f\quad &\mbox{on} \ \ \partial_{par}\Omega_{T}.
\end{cases}
\end{flalign}
In particular, by \eqref{lip18}, $\eqref{lip19}_{2}$ and weak$^{*}$-lower semicontinuity we have
\begin{flalign}\label{lip27}
\nr{Du}_{L^{\infty}(\tilde{\Omega}\times (t_{1},t_{2}))}\le c(\texttt{data},\mathcal{C}_{f},\,{\rm dist}(\tilde{\Omega},\partial \Omega),t_{1},T-t_{2}).
\end{flalign}
Such information is not sufficient to pass to the limit as $j\to \infty$ in \eqref{wfj}, therefore we shall prove that $u_{j}$ admits some fractional derivative in space and in time which is controllable uniformly with respect to $j\in \mathbb{N}$. Concerning the fractional derivative in space, we can use \emph{verbatim} the same argument leading to \eqref{fr7}-\eqref{fr9} to deduce that
\begin{flalign*}
Du_{j}\in L^{p}_{\operatorname{loc}}(0,T;W^{\varsigma,p}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{for all} \ \ \varsigma\in \left(0,\min\left\{1,\frac{2}{p}\right\}\right)
\end{flalign*}
with
\begin{flalign}\label{lip24}
\nr{u_{j}}_{L^{p}(t_{1},t_{2};W^{1+\varsigma,p}(\tilde{\Omega}))}\le c(\texttt{data},\varsigma,\mathcal{C}_{f},t_{1},T-t_{2},\,{\rm dist}(\tilde{\Omega},\partial \Omega)).
\end{flalign}
On the other hand, we cannot borrow the corresponding estimates for the fractional derivative in time of the $u_{j}$'s developed in \emph{Step 6} of Section \ref{high}: the constant appearing on the right-hand side of \eqref{fr9.1} depends on $\ti{\mu}^{-1}$ and, since now $\ti{\mu}\equiv \mu_{j}$, it may blow up in the limit as $j\to\infty$ if $\mu=0$. Therefore we shall follow a different path, see \cite[Section 9]{dumi1} for the case $q=p=2$. Let $0<t_{1}<\hat{t}_{1}<\hat{t}_{2}<t_{2}<T$ and $\tilde{h}>0$ be so that $0<\tilde{h}<\frac{\min\{\hat{t}_{1}-t_{1},t_{2}-\hat{t}_{2},1\}}{1000}$. Using the forward Steklov average to reformulate \eqref{wfj} we obtain, for a.e. $t\in (t_{1},t_{2})$,
\begin{flalign}\label{lip20}
\int_{\Omega}\left[\partial_{t}[u_{j}]_{\ti{h}}\varphi+[a_{j}(x,t,Du_{j})]_{\ti{h}}\cdot D\varphi\right] \ \,{\rm d}x=0\quad \mbox{for all} \ \ \varphi\in C^{\infty}_{c}(\Omega).
\end{flalign}
Since $\partial_{t}[u_{j}]_{\ti{h}}=\ti{h}^{-1}\ti{\tau}_{\ti{h}}u_{j}$, we can rearrange \eqref{lip20} as
\begin{flalign*}
\int_{\Omega}\left[\frac{\ti{\tau}_{\ti{h}}u_{j}}{\ti{h}}\varphi+[a_{j}(x,t,Du_{j})]_{\ti{h}}\cdot D\varphi\right] \ \,{\rm d}x=0.
\end{flalign*}
Modulo regularization, by \eqref{sss}, in the above display we can pick $\varphi:=\eta^{2}\ti{\tau}_{\ti{h}}u_{j}$ with
\begin{flalign*}
\eta\in C^{\infty}_{c}(\tilde{\Omega})\quad \mbox{so that}\quad \nr{D\eta}_{L^{\infty}(\tilde{\Omega})}\le \frac{4}{\,{\rm dist}(\tilde{\Omega},\partial \Omega)},
\end{flalign*}
and integrate over the interval $(\hat{t}_{1},\hat{t}_{2})$ to get
\begin{flalign}\label{lip21}
\ti{h}^{-1}\int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\Omega}&\snr{\ti{\tau}_{\ti{h}}u_{j}}^{2}\eta^{2} \ \,{\rm d}x\,{\rm d}s=-\int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\Omega}[a_{j}(x,t,Du_{j})]_{\ti{h}}\cdot\left[\eta^{2}\ti{\tau}_{\ti{h}}Du_{j}+2\ti{\tau}_{\ti{h}}u_{j}\eta D\eta \right] \ \,{\rm d}x\,{\rm d}s.
\end{flalign}
Recall that, for any function $w\in L^{1}(\ti{\Omega}\times (t_{1},t_{2}))$ there holds that
\begin{flalign*}
\int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\ti{\Omega}}\snr{w_{\ti{h}}} \ \,{\rm d}x\,{\rm d}s\le \int_{\hat{t}_{1}-\ti{h}}^{\hat{t}_{2}+\ti{h}}\int_{\tilde{\Omega}}\snr{w} \ \,{\rm d}x\,{\rm d}s\le \int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\snr{w} \ \,{\rm d}x\,{\rm d}s,
\end{flalign*}
therefore, by $\eqref{refregj}_{1}$, \eqref{lip18}, H\"older and Young inequalities we estimate
\begin{flalign}\label{lip22}
&\left| \ \int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\Omega}[a_{j}(x,t,Du_{j})]_{\ti{h}}\cdot\left[\eta^{2}\ti{\tau}_{\ti{h}}Du_{j}+2\ti{\tau}_{\ti{h}}u_{j}\eta D\eta \right] \ \,{\rm d}x\,{\rm d}s \ \right|\nonumber \\
&\quad \le 2\left(\sup_{\tilde{\Omega}\times (t_{1},t_{2})}\snr{Du_{j}}\right)\left(\int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\snr{a_{j}(x,t,Du_{j})} \ \,{\rm d}x\,{\rm d}s\right)\nonumber \\
&\quad+\frac{\ti{h}^{-1}}{2} \int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\tilde{\Omega}}\eta^{2}\snr{\ti{\tau}_{\ti{h}}u_{j}}^{2} \ \,{\rm d}x \,{\rm d}s+\ti{h}\nr{D\eta}_{L^{\infty}(\tilde{\Omega})}^{2}\int_{t_{1}}^{t_{2}}\int_{\tilde{\Omega}}\snr{a_{j}(x,t,Du_{j})}^{2} \ \,{\rm d}x \,{\rm d}s\nonumber \\
&\quad \le \frac{\ti{h}^{-1}}{2} \int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\tilde{\Omega}}\eta^{2}\snr{\ti{\tau}_{\ti{h}}u_{j}}^{2} \ \,{\rm d}x\,{\rm d}s+c,
\end{flalign}
with $c\equiv c(\texttt{data},\mathcal{C}_{f},\,{\rm dist}(\tilde{\Omega},\partial \Omega),t_{1},T-t_{2})$. Merging \eqref{lip21} and \eqref{lip22} we end up with
\begin{flalign*}
\limsup_{\ti{h}\to 0}\left(\ti{h}^{-1}\int_{\hat{t}_{1}}^{\hat{t}_{2}}\int_{\tilde{\Omega}}\snr{\ti{\tau}_{\ti{h}}u_{j}}^{2}\ \,{\rm d}x\,{\rm d}s\right)\le c(\texttt{data},\mathcal{C}_{f},\,{\rm dist}(\tilde{\Omega},\partial \Omega),t_{1},T-t_{2}),
\end{flalign*}
which, since $\hat{t}_{1},t_{1},\hat{t}_{2},t_{2}$ are arbitrary and since we can repeat exactly the same procedure for the backward Steklov average of $u_{j}$, gives
\begin{flalign*}
u_{j}\in W^{\iota,2}_{\operatorname{loc}}(0,T;L^{2}_{\operatorname{loc}}(\Omega))\quad \mbox{for all} \ \ \iota\in \left(0,\frac{1}{2}\right)
\end{flalign*}
and
\begin{flalign}\label{lip23}
\nr{u_{j}}_{W^{\iota,2}(t_{1},t_{2};L^{2}(\tilde{\Omega}))}\le c(\texttt{data},\iota,\mathcal{C}_{f},\,{\rm dist}(\tilde{\Omega},\partial \Omega),t_{1},T-t_{2}).
\end{flalign}
From \eqref{lip24} and \eqref{lip23} we deduce that
\begin{flalign*}
\{u_{j}\} \ \mbox{is bounded uniformly w.r.t.
$j\in \mathbb{N}$ in} \ W^{\iota,2}_{\operatorname{loc}}(0,T;L^{2}_{\operatorname{loc}}(\Omega))\cap L^{p}_{\operatorname{loc}}(0,T;W^{1+\varsigma,p}_{\operatorname{loc}}(\Omega))
\end{flalign*}
for all $\iota\in \left(0,\frac{1}{2}\right)$, $\varsigma\in \left(0,\min\left\{1,\frac{2}{p}\right\}\right)$, thus we can apply Lemma \ref{al} with $a_{1}= p$, $a_{2}=2$, $\sigma=\iota$, $X=W^{1+\varsigma,p}_{\operatorname{loc}}(\Omega)$, $B=W^{1,\min\{2,p\}}_{\operatorname{loc}}(\Omega)$, $Y=L^{2}_{\operatorname{loc}}(\Omega)$, to obtain a (non-relabelled) subsequence $\{u_{j}\}$ so that
\begin{flalign}\label{lip25}
u_{j}\to u\quad \mbox{in} \ \ L^{\min\{p,2\}}_{\operatorname{loc}}(0,T;W^{1,\min\{p,2\}}_{\operatorname{loc}}(\Omega)).
\end{flalign}
Combining $\eqref{lip19}_{2}$, \eqref{lip25} and \eqref{lip27} we get
\begin{flalign}\label{lip26}
Du_{j}\to Du\quad \mbox{in} \ \ L^{s}_{\operatorname{loc}}(0,T;L^{s}_{\operatorname{loc}}(\Omega,\mathbb{R}^{n}))\quad \mbox{for all} \ \ s\in (1,\infty),
\end{flalign}
therefore we can pass to the limit in \eqref{wfj} to deduce that $u$ satisfies \eqref{cv6}. Moreover, repeating \emph{Step 8} of Section \ref{high} we finally see that Definition \ref{d.1} is satisfied, therefore $u$ is a solution of problem \eqref{pdd} and, recalling also \eqref{lip27} we obtain $\eqref{t1.1}_{1}$. Once $\eqref{t1.1}_{1}$ is available, we can repeat the same procedure leading to \eqref{lip23} (with $a(\cdot),u$ replacing $a_{j}(\cdot),u_{j}$) to obtain $\eqref{t1.4}$. Furthermore, by \eqref{lip26}, \eqref{lip27} and \eqref{unibd} we can pass to the limit for $j\to \infty$ in \eqref{lip0} with $g\equiv 1$ and, after a standard covering argument, get $\eqref{t1.1}_{2}$. Finally, combining $\eqref{lip19}_{2}$ and \eqref{lip26} with \eqref{lip17} we obtain \eqref{t1.5}. The proof of Theorem \ref{t1} is complete.
\begin{thebibliography}{}
\bibitem{akm} B. Avelin, T. Kuusi, G. Mingione: Nonlinear Calder\'on-Zygmund theory in the limiting case. \emph{Arch. Ration. Mech. Anal.} 227, nr. 2, 663-714, (2018).
\bibitem{ba} P. Baroni, Lorentz estimates for degenerate and singular evolutionary systems. \emph{J. Differential Equations} 255, 2927-2951, (2013).
\bibitem{ba1} P. Baroni, Riesz potential estimates for a general class of quasilinear equations. \emph{Calc. Var. \& PDE} 53(3-4),12, pp. 803-846, (2015).
\bibitem{bacomi} P. Baroni, M. Colombo, G. Mingione, Regularity for general functionals with double phase. \emph{Calc. Var. \& PDE} 57:62, (2018).
\bibitem{bemi} L. Beck, G. Mingione, Lipschitz bounds and non-uniform ellipticity. \emph{Comm. Pure Appl. Math.}, (2020). \url{https://doi.org/10.1002/cpa.21880}
\bibitem{besc} P. Bella, M. Schaffner, Local boundedness and Harnack inequality for solutions of linear non-uniformly elliptic equations. \emph{Comm. Pure Appl. Math.}, to appear.
\bibitem{besc1} P. Bella, M. Schaffner, On the regularity of scalar integral functionals with $(p,q)$-growth. \emph{Anal. PDE}, to appear.
\bibitem{bdm} V. B\"ogelein, F. Duzaar, P. Marcellini, Parabolic Equations with $p,q$-growth. \emph{J. Math. Pures Appl.} 100, 535-563, (2013).
\bibitem{bdm1} V. B\"ogelein, F. Duzaar, P. Marcellini, Parabolic Systems with $p,q$-Growth: A Variational Approach. \emph{Arch. Rational Mech. Anal.} 210, 219-267, (2013).
\bibitem{bobr} P. Bousquet, L. Brasco, $C^{1}$-regularity of orthotropic $p$-harmonic functions in the plane. \emph{Anal. PDE} 11, nr. 4, 813-854, (2018).
\bibitem{dm1} C. De Filippis, G.
Mingione, Lipschitz bounds and non-autonomous functionals. \emph{Preprint} (2020). \bibitem{dm} C. De Filippis, G. Mingione, On the Regularity of Minima of Non-autonomous Functionals. \emph{J. Geometric Analysis}, (2019). \url{https://doi.org/10.1007/s12220-019-00225-z} \bibitem{deoh} C. De Filippis, J. Oh, Regularity for multi-phase variational problems. \emph{J. Diff. Equ.} 267, 1631-1670, (2019). \bibitem{d} E. DiBenedetto, Degenerate Parabolic Equations. \emph{Universitext, Springer-Verlag New York}, (1993). \bibitem{dima} T. Di Marco, P. Marcellini, A-priori gradient bound for elliptic systems under either slow or fast growth conditions. \emph{Preprint} (2019). \url{https://arxiv.org/pdf/1910.04158.pdf} \bibitem{pala} E. Di Nezza, G. Palatucci, E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces. \emph{Bulletin des Sciences Math\'ematiques} 136, 5, 521-573, (2012). \bibitem{dumi1} F. Duzaar, G. Mingione, Second order parabolic systems, optimal regularity, and singular sets of solutions. \emph{Ann. I. H. Poincar\'e - AN} 22, 705-751, (2005). \bibitem{dumi} F. Duzaar, G. Mingione, K. Steffen, Parabolic Systems with Polynomial Growth and Regularity. \emph{Memoirs AMS} 1005, 214, (2011). \bibitem {sharp} L. Esposito, F. Leonetti, G. Mingione, Sharp regularity for functionals with $(p,q)$ growth. \emph{J. Diff. Equ.} 204, 5-55, (2004). \bibitem{haok} P. H\"ast\"o, J. Ok, Maximal regularity for local minimizers of non-autonomous functionals. \emph{Preprint}, (2019). \url{https://arxiv.org/pdf/1902.00261.pdf} \bibitem{hisc} J. Hirsch, M. Sch\"affner, Growth conditions and regularity, an optimal local boundedness result. \emph{Preprint}, (2019). \url{https://arxiv.org/pdf/1911.12822.pdf} \bibitem{km2} T. Kuusi, G. Mingione, The Wolff gradient bound for degenerate parabolic equations. \emph{J. Eur. Math. Soc. (JEMS)} 16, nr. 4, 835-892, (2014). \bibitem{km3} T. Kuusi, G. Mingione, Gradient regularity for nonlinear parabolic equations. \emph{Ann. Sc. Norm. Super. Pisa Cl. Sci.} 5, 12, nr. 4, 755-822, (2013). \bibitem{lsu} O. A. Ladyzhenskaya, V. A. Solonnikov, N. N. Ural’tseva, Linear and quasi-linear equations of parabolic type. \emph{Transl. Math. Monographs AMS}, 23, 1968. \bibitem{lieb} G. M. Lieberman, The natural generalization of the natural condition of Ladyzhenskaya and Ural'tseva for elliptic equation. Comm. PDE 16, 311-361, (1991). \bibitem{elemarmas}P. Marcellini, A variational approach to parabolic equations under general and $p,q$-growth conditions. \emph{Nonlinear Analysis}, (2019). \url{https://doi.org/10.1016/j.na.2019.02.010} \bibitem{ma2} P. Marcellini, On the definition and the lower semicontinuity of certain quasiconvex integrals. \emph{Annales de l'I.H.P. Analyse non lin\'eaire} 3, nr. 5, 391-409, (1986). \bibitem{ma3} P. Marcellini, Regularity and Existence of Solutions of Elliptic Equations with $p,q$-Growth Conditions. \emph{J. Differential Equations} 90, 1-30, (1991). \bibitem{ma1} P. Marcellini, Regularity of minimizers of integrals of the calculus of variations with non standard growth conditions. \emph{Arch. Rat. Mech. Anal.} 105, 267-284, (1989). \bibitem{sim} J. Simon, Compact sets in the space $L^{p}(0,T;B)$. \emph{Ann. Math. Pura Appl.} 146, (4), 65-96, 1987. \bibitem{si} T. Singer, Existence of weak solutions of parabolic systems with $p,q$-growth. \emph{Manuscripta Math.} 151, 87-112, (2016). \bibitem{s} T. Singer, Parabolic Equations with $p,q$-growth: the subquadratic case. \emph{Quart. J. Math.} 66, 707-742, (2015). 
\end{thebibliography}
\end{document}
\begin{document} \title{New Results for the Descartes-Frenicle-Sorli \\ Conjecture on Odd Perfect Numbers} \author{Jose Arnaldo B. Dris\\ Far Eastern University\\ Manila, Philippines\\ [email protected]\\[2pt]} \maketitle \begin{abstract} If $N={q^k}{n^2}$ is an odd perfect number given in Eulerian form, then the Descartes-Frenicle-Sorli conjecture predicts that $k=1$. Brown \cite{Brown} has recently announced a proof for the inequality \\$q < n$, and a partial proof that $q^k < n$ holds under many cases. In this article, we give a strategy for strengthening Brown's result to $q^2 < n$. {\bf AMS Subject Classification: Primary 11A05; Secondary 11J25, 11J99} {\bf Key Words and Phrases: odd perfect number, Sorli's conjecture, Euler prime} \end{abstract} \section{Introduction}\label{Section1} If $N$ is a positive integer, then we write $\sigma(N)$ for the sum of the divisors of $N$. A number $N$ is \emph{perfect} if $\sigma(N)=2N$. It is currently unknown whether there are infinitely many even perfect numbers, or whether any odd perfect numbers (OPNs) exist. Ochem and Rao recently proved \cite{OchemRao} that, if $N$ is an odd perfect number, then $N > {10}^{1500}$ and that the largest component (i.e., divisor $p^a$ with $p$ prime) of $N$ is bigger than ${10}^{62}$. This improves on previous results by Brent, Cohen and te Riele \cite{BrentCohenteRiele} in 1991 ($N > {10}^{300}$) and Cohen \cite{Cohen} in 1987 (largest component $p^a > {10}^{20}$). An odd perfect number $N = {q^k}{n^2}$ is said to be given in Eulerian form if $q$ is prime with $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q, n) = 1$. (The number $q$ is called the \emph{Euler prime}, while the component $q^k$ is referred to as the \emph{Euler factor}. Note that, since $q$ is prime and $q \equiv 1 \pmod 4$, then $q \geq 5$.) We denote the abundancy index $I$ of the positive integer $x$ as $$I(x) = \frac{\sigma(x)}{x}.$$ In his Ph.~D.~thesis, Sorli \cite{Sorli} conjectured that $k=1$, after testing large numbers with $8$ distinct prime factors for perfection. (More recently, Beasley \cite{Beasley} points out that Descartes was the first to conjecture $k=1$ ``in a letter to Mersenne in 1638, with Frenicle's subsequent observation occurring in 1657".) In the M.~Sc.~thesis \cite{Dris3}, it was conjectured that the components $q^k$ and $n$ are related by the inequality $q^k < n$. This conjecture was made on the basis of the result $I(q^k)<I(n)$. Recently, Brown \cite{Brown} announced a proof for the inequality $q < n$, and a partial proof that $q^k < n$ holds under many cases. \section{Conditions Sufficient for Sorli's Conjecture}\label{Section2} Some sufficient conditions for Sorli's conjecture were given in \cite{Dris}. We reproduce these conditions here. \begin{lemma}\label{Lemma1} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $n<q$, then $k=1$. \end{lemma} \begin{remark}\label{Remark1} The proof of Lemma \ref{Lemma1} follows from the inequality $q^k < n^2$ and the congruence $k \equiv 1 \pmod 4$ (see \cite{Dris}). (Note the related inequality $$I(q^k)<I(n^2)$$ for the abundancy indices of the components $q^k$ and $n^2$.) \end{remark} \begin{lemma}\label{Lemma2} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $$\sigma(n) \le \sigma(q),$$ then $k=1$. \end{lemma} \begin{lemma}\label{Lemma3} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $$\frac{\sigma(n)}{q} < \frac{\sigma(q)}{n},$$ then $k=1$. 
\end{lemma} \begin{remark}\label{Remark2} Notice that, if $$\frac{\sigma(n)}{q} < \frac{\sigma(q)}{n},$$ then it follows that $$\frac{\sigma(n)}{q^k} = \frac{\sigma(n)}{q} < \frac{\sigma(q)}{n} = \frac{\sigma(q^k)}{n}.$$ Consequently, by the contrapositive, if $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k},$$ then $$\frac{\sigma(q)}{n} \leq \frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k} \leq \frac{\sigma(n)}{q}.$$ \end{remark} \begin{remark}\label{Remark3} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Suppose that $$\frac{\sigma(q)}{n} = \frac{\sigma(n)}{q}.$$ Then we know that: $${q}{\sigma(q)} = {n}{\sigma(n)}.$$ Since $\gcd(q, n) = 1$, then $q \mid \sigma(n)$ and $n \mid \sigma(q)$. Therefore, it follows that $\displaystyle\frac{\sigma(q)}{n}$ and $\displaystyle\frac{\sigma(n)}{q}$ are equal positive integers. This is a contradiction, as: $$1 < I(q) = \frac{\sigma(q)}{q} = 1 + \frac{1}{q} \leq \frac{6}{5} < \sqrt{\frac{5}{3}} < I(n) < I(q)I(n) = I(qn) < 2$$ which implies that: $$1 < \sqrt{\frac{5}{3}} < I(n) < I(q)I(n) = I(qn) = \left[\frac{\sigma(q)}{q}\right]\left[\frac{\sigma(n)}{n}\right] = \left[\frac{\sigma(q)}{n}\right]\left[\frac{\sigma(n)}{q}\right] < 2$$ Consequently, $$\frac{\sigma(q)}{n} \neq \frac{\sigma(n)}{q}.$$ Similarly, we can prove that $$\frac{\sigma(q^k)}{n} \neq \frac{\sigma(n)}{q^k}.$$ \end{remark} \begin{lemma}\label{Lemma4} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then $n<q$ if and only if $N<{q^3}$. \end{lemma} \begin{proof} Suppose that $N = {q^k}{n^2}$ is an odd perfect number given in Eulerian form. If $n<q$, then assuming to the contrary that ${q^3}<N$, we get that $${q^3}<N=q{n^2}<q\cdot{q^2}=q^3$$ since $n<q$ implies $k=1$, by Lemma \ref{Lemma1}. For the other direction, if $N<{q^3}$, then ${q^k}{n^2}<{q^3}$, so that we have $${n^2}<{q^{3-k}}\leq{q^2}$$ since $k \equiv 1 \pmod 4$ implies that $k \geq 1$. Consequently, $n < q$, and we are done. \end{proof} \begin{corollary}\label{CorollaryToLemma4} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then $n<q^{5/2}$ if and only if $N<{q^6}$. \end{corollary} \begin{proof} First we show that $n<q^{5/2}$ implies $k=1$. To this end, assuming $n<q^{5/2}$, since $q^k < n^2$ (see \cite{Dris}), we then have that: $$q \leq q^k < n^2 < q^5.$$ The last chain of inequalities implies that $$1 \leq k < 5.$$ This inequality, together with the condition $k \equiv 1 \pmod 4$, implies that $k=1$. We now prove the claim in Corollary \ref{CorollaryToLemma4}. If $n<q^{5/2}$, then assuming to the contrary that ${q^6}<N$, we get that $${q^6}<N=q{n^2}<q\cdot{q^5}={q^6}.$$ This is a contradiction. For the other direction, if $N<{q^6}$, then ${q^k}{n^2}<{q^6}$, so that we have $${n^2}<{q^{6-k}}\leq{q^5}$$ since $k \equiv 1 \pmod 4$ implies that $k \geq 1$. Consequently, $n < q^{5/2}$, and we are done. \end{proof} \begin{remark}\label{Remark4} A recent result by Acquaah and Konyagin \cite{AcquaahKonyagin} \emph{almost} disproves $n<q$. They obtained the estimate $y < (3N)^{1/3}$ for all the prime factors $y$ of an odd perfect number $N$. 
In particular, if $N={q^k}{n^2}$ is an odd perfect number given in Eulerian form, then letting $y=q$ and assuming $k=1$ gives: $$q < (3N)^{1/3} = (3q{n^2})^{1/3} \Longrightarrow q^3 < 3q{n^2} \Longrightarrow q < n\sqrt{3}.$$ Since the contrapositive of the implication $n<q \Longrightarrow k=1$ is $k > 1 \Longrightarrow q < n$, it follows that the inequality $$q<n\sqrt{3}$$ holds unconditionally, regardless of the status of Sorli's conjecture. More recently, Brown \cite{Brown} claims a proof for the inequality $q < n$, and a partial proof that $q^k < n$ holds under many cases. \end{remark} We now give a condition that is weaker than $n < q$, which also implies $k=1$. \begin{lemma}\label{WeakConditionImpliesSorli} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then $$n < {\left(\frac{3}{2}{q^5}\right)}^{1/2}$$ implies $k=1$. \end{lemma} \begin{proof} Suppose that $N = {q^k}{n^2}$ is an odd perfect number given in Eulerian form. Let $$n < {\left(\frac{3}{2}{q^5}\right)}^{1/2}$$ and assume to the contrary that $k \neq 1$. Since $k \equiv 1 \pmod 4$, this means that $k \geq 5$. Additionally, from \cite{Dris}, we have that $$q^k < \sigma(q^k) \leq \frac{2}{3}{n^2}.$$ Consequently, we have the following chain of inequalities: $$q^5 \leq q^k < \frac{2}{3}{\left({\left(\frac{3}{2}{q^5}\right)}^{1/2}\right)}^2 < q^5.$$ This is a contradiction. \end{proof} We also have the following corollary to Lemma \ref{WeakConditionImpliesSorli}, and this uses a result from \cite{BroughanDelbourgoZhou}. \begin{corollary}\label{CorollarytoWeakConditionImpliesSorli} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then $$n < {\left(\frac{315}{2}{q^5}\right)}^{1/2}$$ implies $k=1$. \end{corollary} \begin{proof} The proof is very similar to that of Lemma \ref{WeakConditionImpliesSorli}, except that it uses the improved bound $$\sigma(q^k) \leq \frac{2}{315}{n^2}$$ (see \cite{BroughanDelbourgoZhou}) instead of $$\sigma(q^k) \leq \frac{2}{3}{n^2}$$ (see \cite{Dris}). \end{proof} \begin{remark}\label{ConditionsEquivalentToLemma10AndCorollary11} Similar to the proofs of Lemma \ref{Lemma4} and Corollary \ref{CorollaryToLemma4}, we can show that the following biconditionals are true: $$n < {\left(\frac{3}{2}{q^5}\right)}^{1/2} \Longleftrightarrow N < \frac{3}{2}{q^6}$$ $$n < {\left(\frac{315}{2}{q^5}\right)}^{1/2} \Longleftrightarrow N < \frac{315}{2}{q^6}$$ \end{remark} \begin{remark}\label{ChenChenImprovingOnBroughanDelbourgoZhou} Chen and Chen \cite{ChenChen} has a relatively recent paper which further improves on Broughan et.~al.'s results (see \cite{BroughanDelbourgoZhou}). They also pose a related open problem. \end{remark} \section{New Results Related to Sorli's Conjecture}\label{Section3} First, we reproduce the following lemma from \cite{Dris}, as we will be using these results later. \begin{lemma}\label{Lemma5} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. The following series of inequalities hold: \begin{itemize} \item{If $k = 1$, then $1 < I(q^k) = I(q) \leq \frac{6}{5} < \sqrt{\frac{5}{3}} < I(n) < 2$.} \item{If $k \geq 1$, then $1 < I(q^k) < \frac{5}{4} < \sqrt{\frac{8}{5}} < I(n) < 2$.} \end{itemize} \end{lemma} We have the following (slightly) stronger inequality from \cite{Dris}. \begin{lemma}\label{Lemma6} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then ${\left(I(q^k)\right)}^2 < I(n^2)$. \end{lemma} \begin{proof} The proof follows from the inequality $I(q^k) < \sqrt[3]{2}$ and the equation $2 = I(q^k)I(n^2)$. 
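For completeness, the deduction can be spelled out: since $I(q^k) > 0$, cubing the first inequality and using the second gives
$${\left(I(q^k)\right)}^3 < 2 = I(q^k)I(n^2) \Longrightarrow {\left(I(q^k)\right)}^2 < I(n^2).$$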
\end{proof} \begin{remark}\label{Remark5} Another proof of Lemma \ref{Lemma6} is as follows: $$I(q^k) < \frac{5}{4} \Longrightarrow {\left(I(q^k)\right)}^2 < \frac{25}{16} = 1.5625 < 1.6 = \frac{8}{5} < I(n^2).$$ In fact, if $${\left(I(q^k)\right)}^y < {\left(\frac{5}{4}\right)}^y \leq \frac{8}{5} < I(n^2)$$ then $$y \leq \frac{3\log{2} - \log{5}}{\log{5} - 2\log{2}}.$$ Thus, if we let $$z = \frac{3\log{2} - \log{5}}{\log{5} - 2\log{2}} \approx 2.1062837195,$$ then $${\left(I(q^k)\right)}^z \leq \frac{8}{5} < I(n^2).$$ \end{remark} Next, we derive a lower bound for $I(q^k)+I(n)$. \begin{lemma}\label{Lemma7} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. The following inequality holds: $$I(q^k) + I(n) \geq I(q) + I(n) > 1 + \sqrt{2}.$$ \end{lemma} \begin{proof} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then we have the following: $$I(q^k) + I(n) \geq I(q) + I(n) \geq 1 + \frac{1}{q} + \sqrt{\frac{2(q-1)}{q}}.$$ But $$f(q) = 1 + \frac{1}{q} + \sqrt{\frac{2(q-1)}{q}}$$ is a decreasing function of $q$. Consequently, $$f(q) > \lim_{q \rightarrow \infty}{\left(1 + \frac{1}{q} + \sqrt{\frac{2(q-1)}{q}}\right)} = 1 + \sqrt{2}.$$ \end{proof} \begin{remark}\label{Remark6} The following result was communicated to the author (via e-mail, by Pascal Ochem) in April of 2013: If $N={q^k}{n^2}$ is an odd perfect number given in Eulerian form, then $$I(n)>{\left(\frac{8}{5}\right)}^{\frac{\ln(4/3)}{\ln(13/9)}} \approx 1.44440557.$$ (Note that ${\left(\frac{8}{5}\right)}^{\frac{\ln(4/3)}{\ln(13/9)}} > \sqrt{2}$.) \end{remark} Further to Remark \ref{Remark6} and Lemma \ref{Lemma6}, we have the following related result. \begin{lemma}\label{CorollaryToLemma6} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then ${\left(I(q)\right)}^2 < I(n)$. \end{lemma} \begin{proof} By Lemma \ref{Lemma5}, $$I(q) \leq \frac{6}{5} \Longrightarrow {\left(I(q)\right)}^2 \leq \frac{36}{25} = 1.44.$$ The conclusion follows from the result $I(n)>1.44440557$ in Remark \ref{Remark6}. In fact, if $$\left(I(q)\right)^u < \left(\frac{6}{5}\right)^u \leq {\left(\frac{8}{5}\right)}^{\frac{\ln(4/3)}{\ln(13/9)}}$$ then $$u \leq -\frac{\left(2\log(2) - \log(3)\right)\left(3\log(2) - \log(5)\right)}{\left(\log(2) + \log(3) - \log(5)\right)\left(2\log(3) - \log(13)\right)}.$$ Thus, if we let $$v = -\frac{\left(2\log(2) - \log(3)\right)\left(3\log(2) - \log(5)\right)}{\left(\log(2) + \log(3) - \log(5)\right)\left(2\log(3) - \log(13)\right)} \approx 2.0168$$ then $$\left(I(q)\right)^v \leq {\left(\frac{8}{5}\right)}^{\frac{\ln(4/3)}{\ln(13/9)}} < I(n).$$ \end{proof} \begin{remark}\label{Remark7} As pointed out by Ochem to the author (via the same e-mail mentioned in Remark \ref{Remark6}), a case-by-case analysis yields a sharper lower bound for $I(q^k)+I(n)$: \begin{itemize} \item{If $q = 5$ then $I(q^k)+I(n) \geq I(q)+I(n) \geq (6/5) + {(8/5)}^{\ln(4/3)/\ln(13/9)} \approx 2.6444055$.} \item{If $q \geq 13$ then $I(q^k)+I(n) \geq I(q)+I(n) \geq (14/13) + {(24/13)}^{\ln(4/3)/\ln(13/9)} \approx 2.6924318$.} \end{itemize} Therefore, we have the lower bound $$I(q^k)+I(n) \geq I(q)+I(n) \geq \frac{6}{5} + {\left(\frac{8}{5}\right)}^{\frac{\ln(4/3)}{\ln(13/9)}} \approx 2.6444055.$$ \end{remark} We now state and prove the following theorem, which provides conditions equivalent to the conjecture mentioned in the introduction. 
\begin{theorem}\label{Theorem1} If $N = {q^k}{n^2}$ is an odd perfect number given in Eulerian form, then the following biconditional is true: $$q^k < n \Longleftrightarrow \sigma(q^k) < \sigma(n).$$ \end{theorem} In preparation for the proof of Theorem \ref{Theorem1}, we derive the following results. \begin{lemma}\label{Lemma8} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $$I(q^k) + I(n) < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k},$$ then $$q^k < n \Longleftrightarrow \sigma(q^k) < \sigma(n).$$ \end{lemma} \begin{proof} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Assume that $$I(q^k) + I(n) < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k}.$$ It follows that $$I(q^k) + I(n) < \left(\frac{q^k}{n}\right)I(q^k) + \left(\frac{n}{q^k}\right)I(n).$$ Consequently, $${{q^k}n}\left(I(q^k) + I(n)\right) < {q^{2k}}I(q^k) + {n^2}I(n).$$ Thus, $$n\left[q^k - n\right]I(n) < {q^k}\left[q^k - n\right]I(q^k).$$ If $q^k < n$, then $q^k - n < 0$. Hence, $$q^k < n \Longrightarrow {q^k}I(q^k) < nI(n) \Longrightarrow \sigma(q^k) < \sigma(n).$$ If $n < q^k$, then $0 < q^k - n$. Hence, $$n < q^k \Longrightarrow nI(n) < {q^k}I(q^k) \Longrightarrow \sigma(n) < \sigma(q^k).$$ Consequently, we have $$q^k < n \Longleftrightarrow \sigma(q^k) < \sigma(n),$$ as desired. \end{proof} \begin{lemma}\label{Lemma9} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $$\frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k} < I(q^k) + I(n),$$ then $$q^k < n \Longleftrightarrow \sigma(n) < \sigma(q^k).$$ \end{lemma} \begin{proof} The proof of Lemma \ref{Lemma9} is very similar to the proof of Lemma \ref{Lemma8}. \end{proof} Now, assume that $$\frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k} < I(q^k) + I(n).$$ Consider the conclusion of the implication in Lemma \ref{Lemma9} in light of the result $I(q^k) < I(n)$: $$q^k < n \Longleftrightarrow \sigma(n) < \sigma(q^k).$$ If $q^k < n$, then since $I(q^k) < I(n)$ implies that $$\frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n},$$ we have $$\frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n} < 1,$$ which further implies that $\sigma(q^k) < \sigma(n)$. This contradicts Lemma \ref{Lemma9}. Similarly, if $\sigma(n) < \sigma(q^k)$, then $$1 < \frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n},$$ from which it follows that $n < q^k$. Again, this contradicts Lemma \ref{Lemma9}. Hence, we know that $$n < q^k < \sigma(q^k) < \sigma(n)$$ must hold, under the given assumption. Assuming Brown's proof for $q^k < n$ is completed, this case is ruled out. Consequently, the inequality $$\frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k} < I(q^k) + I(n)$$ cannot be true. Therefore, the reverse inequality $$I(q^k) + I(n) \leq \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k}$$ must be true. It remains to consider the case when $$I(q^k) + I(n) = \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k}.$$ Notice that this is true if and only if $$\sigma(q^k) = \sigma(n),$$ (because $q^k \neq n$). Thus, since $I(q^k) < I(n)$, this implies that $n < q^k$. Again, assuming Brown's proof for $q^k < n$ is completed, this case is ruled out. In other words (by Lemma \ref{Lemma8}), we have Theorem \ref{Theorem1} (and the corollary that follows). 
\begin{corollary}\label{Corollary1} If $N = {q^k}{n^2}$ is an odd perfect number given in Eulerian form, then the following biconditional is true: $$q^k < n \Longleftrightarrow \frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k}.$$ \end{corollary} We now give another condition that is equivalent to the author's conjecture (mentioned in the introduction). \begin{theorem}\label{Theorem2} If $N = {q^k}{n^2}$ is an odd perfect number given in Eulerian form, then the following biconditional is true: $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k} \Longleftrightarrow \frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}.$$ \end{theorem} \begin{proof} Let $N$ be an odd perfect number given in Eulerian form. Then $N = {q^k}{n^2}$ where $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q, n) = 1$. First, we show that $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k}$$ implies $$\frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}.$$ Since $I(q^k) < I(n)$, we have that $$\frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n}.$$ On the other hand, the inequality $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k}$$ gives us that $$\frac{\sigma(q^k)}{\sigma(n)} < \frac{n}{q^k}.$$ This in turn implies that $$\frac{q^k}{n} < \frac{\sigma(n)}{\sigma(q^k)}.$$ Putting these inequalities together, we have the series $$\frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n} < \frac{\sigma(n)}{\sigma(q^k)}.$$ Now consider the product $$\left(\frac{\sigma(q^k)}{\sigma(n)} - \frac{q^k}{n}\right)\left(\frac{\sigma(n)}{\sigma(q^k)} - \frac{q^k}{n}\right).$$ This product is negative. Consequently we have $$\left(\frac{\sigma(q^k)}{\sigma(n)}\right)\left(\frac{\sigma(n)}{\sigma(q^k)}\right) - \left(\frac{q^k}{n}\right)\left(\frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}\right) + {\left(\frac{q^k}{n}\right)}^2 < 0,$$ from which it follows that $$1 + {\left(\frac{q^k}{n}\right)}^2 < \left(\frac{q^k}{n}\right)\left(\frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}\right).$$ Therefore, we obtain $$\frac{n}{q^k} + \frac{q^k}{n} < \frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}$$ as desired. Next, assume that $$\frac{\sigma(n)}{q^k} < \frac{\sigma(q^k)}{n}.$$ Since $I(q^k) < I(n)$, we obtain $$\frac{n}{q^k} < \frac{\sigma(q^k)}{\sigma(n)} < \frac{q^k}{n}.$$ Now consider the product $$\left(\frac{n}{q^k} - \frac{\sigma(q^k)}{\sigma(n)}\right)\left(\frac{q^k}{n} - \frac{\sigma(q^k)}{\sigma(n)}\right).$$ This product is negative. Therefore, we obtain $$\left(\frac{n}{q^k}\right)\left(\frac{q^k}{n}\right) - \left(\frac{\sigma(q^k)}{\sigma(n)}\right)\left(\frac{n}{q^k} + \frac{q^k}{n}\right) + {\left(\frac{\sigma(q^k)}{\sigma(n)}\right)}^2 < 0,$$ from which we get $$1 + {\left(\frac{\sigma(q^k)}{\sigma(n)}\right)}^2 < \left(\frac{\sigma(q^k)}{\sigma(n)}\right)\left(\frac{n}{q^k} + \frac{q^k}{n}\right).$$ Consequently, we have $$\frac{\sigma(n)}{\sigma(q^k)} + \frac{\sigma(q^k)}{\sigma(n)} < \frac{n}{q^k} + \frac{q^k}{n}.$$ Together with the result in the previous paragraph, this shows that $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k}$$ is equivalent to $$\frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)}.$$ \end{proof} \begin{remark}\label{Remark8} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. 
\\ Note that, in general, it is true that $$\frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)} < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k},$$ and $$\frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k}.$$ Therefore, $$\frac{\sigma(q^k)}{n} < \frac{\sigma(n)}{q^k}$$ is equivalent to $$\frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)} < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k},$$ while $$\frac{\sigma(n)}{q^k} < \frac{\sigma(q^k)}{n}$$ is equivalent to $$\frac{\sigma(q^k)}{\sigma(n)} + \frac{\sigma(n)}{\sigma(q^k)} < \frac{q^k}{n} + \frac{n}{q^k} < \frac{\sigma(q^k)}{n} + \frac{\sigma(n)}{q^k}.$$ \end{remark} At this point, we dispose of the following lemma: \begin{lemma}\label{Lemma10} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then at least one of the following sets of inequalities is true: \begin{itemize} { \item{$\bf{A}: q^k < \sigma(q^k) < n < \sigma(n)$} \item{$\bf{B}: q^k < n < \sigma(q^k) < \sigma(n)$} \item{$\bf{C}: n < q^k < \sigma(n) < \sigma(q^k)$} \item{$\bf{D}: n < \sigma(n) \leq q^k < \sigma(q^k)$} } \end{itemize} \end{lemma} Lemma \ref{Lemma10} is proved by listing all possible permutations of the set $$\left\{q^k, n, \sigma(q^k), \sigma(n)\right\}$$ and then using Theorem \ref{Theorem1}. Note that Brown's result that $q^k < n$, when completed, would rule out cases $\bf{C}$ and $\bf{D}$ in Lemma \ref{Lemma10}. Also, notice that by assuming $k=1$, case $\bf{B}$ is also ruled out. Consequently, we have the following theorem. \begin{theorem}\label{Theorem3} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $k=1$, then $\sigma(q^k)<n$. \end{theorem} As a corollary, by the contrapositive to Theorem \ref{Theorem3}, we have: \begin{corollary}\label{Corollary2} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. If $n<\sigma(q^k)$, then $k>1$. \end{corollary} \begin{remark}\label{Remark9} If one could show the biconditional $$n<q^{k+1} \iff n<\sigma(q^k),$$ then one would be able to show that $$n<q^{k+1} \implies k > 1.$$ By the contrapositive, one would then have $$k=1 \implies q^{k+1} < n \implies q^2 < n.$$ However, we know that $$n<q^2 \implies k=1.$$ Consequently, $$n<q^2 \implies k=1 \implies q^2 < n$$ which proves that $q^2 < n$, strengthening Brown's result. \end{remark} \section{Final Analysis of the New Results}\label{Section4} The new results presented in this article seem to imply the following conjecture (see \cite{Dris2}). \begin{conjecture}\label{Conjecture1} Let $N = {q^k}{n^2}$ be an odd perfect number given in Eulerian form. Then the Descartes-Frenicle-Sorli conjecture is false. (That is, $k > 1$ must hold.) \end{conjecture} \begin{remark}\label{Remark10} Notice how all of the implications in the Lemmas \ref{Lemma1}, \ref{Lemma2} and \ref{Lemma3} in Section \ref{Section2} become vacuously true, given Brown's result that \\$q < n$. Also, notice that, in Section \ref{Section3}, we could specialize Theorem \ref{Theorem1} (and its consequences) to the case $k=1$ and still get the same results, as follows: $$q < n \iff \sigma(q) < \sigma(n) \iff \frac{\sigma(q)}{n} < \frac{\sigma(n)}{q}.$$ \end{remark} \section{Conclusion}\label{Section5} An improvement to the currently known upper bound of $I(n) < 2$ will be considered a major breakthrough. 
In the sequel (\url{http://arxiv.org/abs/1303.2329}), a viable approach towards improving the inequality $I(n) < 2$ will be presented, which may necessitate the use of ideas from the paper \cite{Ward}. \section{Acknowledgments} The author sincerely thanks the anonymous referee(s) who have made several corrections and suggestions, which helped in improving the style of the paper. The author would like to thank Pascal Ochem for communicating the sharper lower bound for $I(n)$. The author also wishes to thank Carl Pomerance for pointing out the relevance of the paper \cite{AcquaahKonyagin}. The author also expresses his gratitude to Peter Acquaah for helpful e-mail exchanges on the topic. Lastly, the author expresses his gratitude to an anonymous reader ``Pascal'' who pointed out some ``errors'' in an earlier version of \cite{Dris} (see \cite{Dris2}), thus encouraging him to come up with this sequel. \end{document}
\begin{document} \title{A position and a time for the photon} \author{J. Le\'on}\email{[email protected]} \affiliation{Instituto de Matem\'aticas y F\'{\i}sica Fundamental, CSIC\\ Serrano 113-bis, 28006 MADRID, Spain} \date{\today} \begin{abstract} This paper gives a constructive answer to the question of whether, and to what extent, photon states can contain the readings of rulers and clocks. The paper first shows explicitly that, along with the momentum representation, there is room in the one photon Hilbert space for an alternative position representation. This is made possible by the existence of a self-adjoint position operator, with components in involution, conjugate to the momentum operator~\cite{hawtona}. Position and momenta are shown to satisfy the Heisenberg-Weyl quantization rules in the helicity basis, which is analyzed anew from this point of view. The paper then turns to the photon's time of arrival. By picking an appropriate photon Hamiltonian -- using Maxwell equations as the photon Schr\"{o}dinger equation -- a conjugate time of arrival operator is built. Its interpretation, including the probability densities for the instant of arrival (at arbitrary points of 3-D space) of photon states with different helicities coming from arbitrary places, is discussed. \end{abstract} \pacs{03.65.-w, 14.70.Bh, 42.50.Dv} \maketitle \section*{Introduction} Since the advent of Special Relativity, light propagation has played a very special role among the variety of natural phenomena studied by Physics. In conjunction with rods and clocks, light signals were a main component in the construction of the then-new theory. The universality of $c$, the speed of light, set a new standard commensurate with the breadth of relativistic physics. While light was understood at the time as a manifestation of electromagnetic waves, the interaction of matter and radiation demanded a quantum leap: the photon. This is the place to recall the mismatch between the prominent role played in relativity by the arrival times of light signals and the position of their group centers and wave fronts, and the lack of the corresponding operators in quantum physics~\cite{pauli,pryce,newton}. The reason for this deficiency of the quantum theory~\cite{wightman} can be traced back to the fact that the photon does not have a complete set of helicity states; the state with helicity $\lambda=0$ is missing. This paper aims to give a constructive answer to the long-debated question of whether, and to what extent, the photon state can contain the readings of rulers and clocks, i.e. the where's and when's of the photon. In view of the recent experimental advances, an appropriate formalism is necessary for a meaningful discussion of the propagation and arrival of light signals, especially if these arrive photon by photon. The literature on this question is abundant and only a part of it is in the bibliography at the end of this paper. The reader is referred to~\cite{scully} for a concise and clear introduction to this topic. Electromagnetic waves propagate at the speed of light in vacuum. This basic tenet of relativity has recently been challenged on both the experimental and the theoretical fronts. It is remarkable that the theoretical part of this question was settled a long time ago by Sommerfeld~\cite{sommerfeld} and Brillouin~\cite{brillouin}, who demonstrated that the electromagnetic wave fronts always move with speed $c$.
To my knowledge, the cases where the group velocity in vacuum was found larger than $c$ were always produced by an incorrect identification of the signal center. Superluminal propagation in vacuum always turned out to be a geometric artifact of the wave form and not the group speed of the wave packet. The main point to highlight here is the absence of foundational problems associated with the definitions of the group position and time of arrival of classical e.m. waves. It may be difficult to settle these questions in a given situation, but only from the technical point of view, not as a matter of principle. Electromagnetic waves are composed of elementary quanta: photons. Like the rest of the elementary particles, these are mathematically identified as elementary systems under the Poincare group~\cite{wigner,bargmann}. Namely, all the possible sets of values that the dynamical properties of the system can attain are connected by transformations of the group to an arbitrary, fixed, set of values, taken as standard. The different classes of elementary systems are identified by the values taken by the Casimir operators of the group. Photons are characterized by lightlike momenta $p^2=(p^0)^2-\bm{p}^2=0$ and a helicity value that is either $\lambda=+1$ or $\lambda=-1$; there are thus two irreducible representations (1,0) and (0,1) (in the standard notation $(j,j')$), which are combined into an irreducible entity by parity (1,0)$\stackrel{\cal P}{\rightarrow}$(0,1). Finally, photons transform under the representation $(1,0)+(0,1)$ of the complete Poincare group. The properties of photons are thus given by the eigenvalues of the subsets of commuting generators of the Poincare group in that representation. There are two widely used alternative subsets: a) $\{\bm{p}, \,\lambda\}$, which describe plane waves, suited for the description of initial and final scattering states, free-flying photons, etc., and b) $\{p^0,j,j_z,\mathrm{parity}\}$, which describe spherical photon waves that simplify the analysis of radiation and, in general, of those systems where there is a singled-out space point. The rapid pace and depth of the current advances in the manipulation of photon states are demanding parallel progress on the theoretical side. In some cases it is necessary to describe accurately the behavior of photons that are in the near region. In others, one needs their space-time distributions in regions close to the microscopic realm, for states containing just one photon or a few of them, possibly entangled states that spread over these regions, etc. Current experimental set-ups include a variety of very sensitive single-photon devices. Moreover, many of the most exciting results have been obtained experimenting with these highly nonclassical light states. For its part, the theoretical framework should be able to address simple questions like the time of arrival of a photon at the place where some device is located, the relation between the orbital angular momentum of the photon and its position at some time, etc. It is necessary at least to give a meaning to these questions, something whose very possibility has been the subject of debate~\cite{pauli,landaua,landaub,galindo,amrein,rosewarne} since the early days of Quantum Mechanics. I shall avoid here repeating the well-known pros and cons for the wave function of the photon~\cite{cooka,cookb,inagaki,iwoa,iwob,sipe} and whether it represents the probability amplitude for finding the photon in a given region of space (technically a Borel set $\Delta \subset \bm{R}^3$).
Instead, I shall trace the problem back by an alternative route to non-relativistic quantum mechanics (NRQM) where the vanishing photon mass ($M=0$) prevents the definition of a position operator $\hat{\bm{q}}$ for the photon. Let me first give some support to the above assertion. The dynamical properties of an elementary object, i.e. a free particle, in non-relativistic quantum theory can be traced back to its behavior under the Galileo group~\cite{jordan}. In particular the position operator $\hat{\bm{q}}$ is related to the Galileo boost operator $\hat{\bm{G}}$ by $\hat{G}_i=M \hat{q}_i$, where $M$ is the mass of the particle. Of course, it can be defined directly by $\hat{q}_i |\bm{x}\rangle = x_i|\bm{x}\rangle$, so that its components are commuting, its eigenvalues real numbers, and its eigenfunctions Dirac delta functions. On the other hand, this definition has to be compatible with the fact that the momentum operator $\hat{\bm{p}}$ generates translations in configuration space, $[\hat{q}_i,\hat{p}_j]=i \hbar\, \delta_{ij}$. Accordingly, in the momentum representation, more appropriate for a discussion of the Galileo group generators, the position operator acts as a derivative on the momenta $\hat{q}_i= i\hbar \partial/\partial p_i$; if $\hat{p}_i$ is the basic, multiplicative operator of the representation, then $\hat{q}_i$ is derived from it. In other words, particle position emerges from the Galilean properties of the particle, specifically from its behavior under space translations. As a consequence of the above, one may wonder: if in NRQM a massless system cannot have a properly defined position operator, why expect the opposite in RQM? The contenders in the debate about the photon wave function may or may not have been aware of the lack of a non-relativistic $\hat{\bm{q}}$ for the photon. In any case, they were undoubtedly influenced by its consequences, and led to give up the search for a relativistic $\hat{\bm{q}}$ for $M=0$. The fact that the photon little group is $E(2)$ instead of $SU(2)$ only added to the confusion. In some studies of the position operator for relativistic systems, photon transversality was identified as the obstacle to the existence of a photon position operator~\cite{newton,wightman}. Other analyses~\cite{pryce,iwob} pointed to the lack of commutativity of its components (or better, of the Lorentz boost components) as the reason for the lack of an appropriate $\hat{\bm{q}}$. Finally, M. Hawton very recently found~\cite{hawtona,hawtonb,hawtonc} a self-adjoint position operator with the required commutation rules, which operates in the photon Hilbert space as a physically acceptable position should. This opens up the possibility of analyzing the question of photon localizability within a new and more powerful setting: A good position operator should come endowed with a probabilistic interpretation, which is the tool suited to the analysis of localizability~\cite{wightman,rosewarne,galindo,amrein}. On the other hand, the most recent theoretical analyses~\cite{adlard,iwoc} indicate that individual photons can be localized in regions much smaller than previously thought, with entanglement (instead of wavelength) as the physical ruler~\cite{chan} in the experiment~\cite{kurtsiefer}. This increases the value of the Hawton construct and the urgency of exploring its consequences. In this paper I shall analyze the properties of the photon states in Minkowski space-time. On the way, we shall devise several tools necessary to this end.
The photon Hilbert space will be introduced in Sect. II, where we also present two alternative conjugate representations and show the probabilistic interpretation that they can be given. Sect. III is devoted to the explicit construction of the Hawton position operator in these representations. We review the properties of the operator in the helicity basis and conclude that it qualifies as a good position operator. Finally, we complete the standard picture of one-photon states with additional position-dependent information. In turn, Sect. IV is devoted to the time-dependent information. We first consider Maxwell equations as the photon Schr\"odinger equation and solve the time evolution associated with it. Then, we set up the formalism for analyzing the photon time of arrival at an arbitrary point of space. It will be necessary to split the Hilbert space into two subspaces, putting the eventually detected states in one of them. In the latter we build the sought-after time operator, obtain its eigenfunctions and give the positive operator valued measure that permits a probabilistic analysis of the times of arrival of a photon in a given state at an arbitrary space position. The paper ends in Sect. V with a summary of the results, some considerations on their meaning, and indications for the use of the formalism in the analysis of some current experiments. \section*{One photon Hilbert space} We shall work in the Coulomb gauge, i.e. within the Hilbert space $\cal{H}$ of one-particle transverse states $\tilde{A}_i(\bm{p}),\; i=1,2,3$ defined on the forward light cone $(p^0=|\bm{p}|)$ (note the symbol $\,\tilde{}\,$; it is a label for transversality). The scalar product in $\cal{H}$ is defined as \begin{equation} (A,A') = \int d\sigma(\bm{p}) \tilde{A}^*_i (\bm{p}) \tilde{A'}_i (\bm{p}) \label{p2} \end{equation} where $d\sigma(\bm{p}) = d^3p/(2 |\bm{p}|)$ is the measure on the light cone. An arbitrary photon state $\bm{A}\in\cal{H}$ can be written as \begin{equation} \tilde{A}_i (\bm{x}) = (2 \pi)^{-3/2} \int \frac{d^3 p}{\sqrt{2 |\bm{p}|}} \; e^{i \bm{p}\bm{x}}\; \tilde{A}_i (\bm{p}) \label{p1} \end{equation} Note that $\tilde{A}_i (\bm{x})$ is solenoidal so that, even though $(A,A') = \int d^3 x \tilde{A}^*_i (\bm{x}) \tilde{A'}_i (\bm{x})$, $\bm{x}$ does not qualify as a good position (loosely speaking: $x_i \tilde{A}(\bm{x})$ is not solenoidal, so $x_i$ takes $\tilde{A}$ out of the Hilbert space). Due to Poincare invariance (which we are taking fully into account, in spite of the non-covariant notation), the helicity $W$ is a good quantum number. In fact, $\hat{\bm{p}}$ and $\hat{W}=\bm{S} \widehat{(\bm{p}/|\bm{p}|)}$ -- where $S^a_{ij}=-i\epsilon_{aij}$ are the spin-1 matrices -- form a complete set of commuting operators on the mass shell. In the following, I shall work in momentum space where the momentum operator $\hat{\bm{p}}$ is simply represented by the vector variable $\bm{p}=(p_1,p_2,p_3)$.
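Incidentally, the failure of $x_i$ as a multiplication operator within $\cal{H}$, mentioned above, is immediate to verify: for the vector field with components $x_i \tilde{A}_j(\bm{x})$ (fixed $i$, sum over the repeated index $j$), the solenoidal condition fails because $$\frac{\partial}{\partial x_j}\left( x_i\, \tilde{A}_j(\bm{x})\right) = \delta_{ij}\,\tilde{A}_j(\bm{x}) + x_i\, \frac{\partial \tilde{A}_j(\bm{x})}{\partial x_j} = \tilde{A}_i(\bm{x}),$$ which does not vanish in general, so multiplication by $x_i$ does not preserve transversality.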
An arbitrary operator $\hat{G}$ function of the momentum, that commutes with the helicity $[ \hat{G},\hat{W}]=0$, shall have eigenfunctions $\tilde{V}^i_{G,\lambda}(\bm{p})$ satisfying \begin{eqnarray} (\hat{p}^a \,\tilde{V}_{G,\lambda})^i (\bm{p}) = p^a \, \tilde{V}^i_{G,\lambda}(\bm{p}), \; & (\hat{W} \,\tilde{V}_{G,\lambda})^i (\bm{p}) = \lambda \, \tilde{V}^i_{G,\lambda}(\bm{p}),\; & (\hat{G} \;\; \tilde{V}_{G,\lambda})^i (\bm{p})= G\, \tilde{V}^i_{G,\lambda}(\bm{p})\; \label{p3} \end{eqnarray} The second of these equations implies that $\tilde{V}^i_{G,\lambda}(\bm{p})\propto \epsilon^i (\bm{p}, \lambda)$, where $\bm{\epsilon} (\bm{p}, +1)$ ($\bm{\epsilon} (\bm{p}, -1)$) are the right (left) polarization vectors for momentum $\bm{ p}$. There is some arbitrariness in their definition, the convention of Ref.~\cite{weinberga} adopted here is $\epsilon_i (\bm{p}, \lambda)=D_{ij}[\Lambda_{\bm{p}\leftarrow \bm{k}}]\, \epsilon_j (\lambda)$, for $\bm{p}=\Lambda \, \bm{k}$ and $\bm{k}=(0,0,|\Lambda^{-1}\bm{p}|)$, with $\bm{\epsilon}(\lambda)$ such that $S_3 \,\bm{\epsilon}(\lambda)=\lambda \bm{\epsilon}(\lambda)$. In spherical coordinates where $\bm{p}= (k,\theta,\varphi)$ and $\bm{k}=(0,0,k)$ -- i.e. taking $\Lambda$ as a pure rotation for simplicity -- the polarization vectors are given as linear combinations of the unitary vectors $\bm{e}(\bm{p},\sigma),\; \sigma=\theta,\varphi,k$ (or $\sigma=1,2,3$): $$\bm{e}(\bm{p},k)=(\partial \bm{ p}/ \partial k),\, \bm{e}(\bm{p},\theta)=(1/k)\, (\partial \bm{p}/ \partial \theta),\, \bm{e}(\bm{p},\varphi)= (1/k\sin\theta)\, (\partial \bm{p}/\partial \varphi)$$ in the standard form: \begin{equation} \epsilon^i (\bm{p}, \lambda)= - \frac{\lambda}{\sqrt{2}}\{e^i(\bm{p},\theta)+i \lambda \, e^i(\bm{p},\varphi)\}\;\;\,\, \lambda=\pm 1 \label{p4} \end{equation} The representations of the Poincare group for the photon states are the (1,0) and (0,1), that correspond to $\lambda=-1$ (left polarization) and $\lambda=1$ (right polarization) respectively. They are combined in the direct sum $(1,0) \oplus (0,1)$ that becomes irreducible when parity is included in the group. No other states exist (see for instance~\cite{weinberga} page 69) because the little group is not semi-simple. In particular, the polarizations can not be taken as four-vectors. Even if we form the object $\epsilon^\mu (p, \lambda)=(\epsilon^0 (p, \lambda),\bm{\epsilon} (p, \lambda))$, it not only has to satisfy in every reference frame $ p_\mu \epsilon^\mu (p, \lambda)=0$, but also $\epsilon^0 (p, \lambda)=0$, or any other condition chosen to fix the gauge. In short: in spite of its four-dimensional appearance, the polarizations do not transform as four-vectors under Lorentz transformations, but as a sort of connections~\cite{weinbergb,weinbergc}: \begin{equation} \Lambda_\nu^\mu \epsilon^\nu (\Lambda\, p,\lambda)=\exp\{i \lambda \Theta(p,\Lambda)\} \epsilon^\mu (p,\lambda) + p^\mu \Omega(p,\lambda;\Lambda)\label{p4a} \end{equation} where $\Theta(p,\Lambda)$ is the Wigner rotation angle, and $\Omega(p,\lambda;\Lambda)$ is a gauge transformation fixed by the gauge condition ($\epsilon^0=0)$. Of course, the photons can be given a covariant (tensor) representation by means of the gauge invariant antisymmetric tensor $F_{\alpha \beta}(p,\lambda) \propto i \epsilon_{\alpha \beta \mu \nu} (p^\mu \epsilon^\nu(p,\lambda)-p^\nu \epsilon^\mu(p,\lambda))$, that gives rise to the electric and magnetic fields. 
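Before proceeding, it may help to note that the conventions above are easy to check numerically. The following sketch (an illustration only; it assumes NumPy and uses the explicit spherical-frame expressions written above) verifies that the polarization vectors of Eq. (\ref{p4}) are transverse, normalized, and eigenvectors of the helicity $\bm{S}\cdot\hat{\bm{p}}$ with eigenvalue $\lambda$:
\begin{verbatim}
import numpy as np

# Spin-1 matrices (S_a)_{ij} = -i eps_{aij}
eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0
S = -1j * eps3

def frame(p):
    """Unit vectors e(p,k), e(p,theta), e(p,phi) of the spherical frame."""
    k = np.linalg.norm(p)
    th, ph = np.arccos(p[2] / k), np.arctan2(p[1], p[0])
    e_k = p / k
    e_th = np.array([np.cos(th) * np.cos(ph), np.cos(th) * np.sin(ph), -np.sin(th)])
    e_ph = np.array([-np.sin(ph), np.cos(ph), 0.0])
    return e_k, e_th, e_ph

def polarization(p, lam):
    """epsilon(p,lam) = -(lam/sqrt(2)) (e_theta + i lam e_phi), lam = +-1."""
    _, e_th, e_ph = frame(p)
    return -(lam / np.sqrt(2)) * (e_th + 1j * lam * e_ph)

p = np.array([1.0, -2.0, 0.5])
W = np.tensordot(S, p / np.linalg.norm(p), axes=(0, 0))  # helicity matrix S . p/|p|
for lam in (+1, -1):
    e = polarization(p, lam)
    assert np.allclose(W @ e, lam * e)           # helicity eigenvector, eigenvalue lam
    assert abs(np.dot(p, e)) < 1e-12             # transversality, p . epsilon = 0
    assert np.isclose(np.vdot(e, e).real, 1.0)   # normalization
\end{verbatim}
A similar check confirms the transverse completeness relation $\sum_\lambda \epsilon^{*i} (\bm{p},\lambda)\, \epsilon^j (\bm{p},\lambda)=\delta^{ij}-p^i p^j/|\bm{p}|^2$ used below.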
With these caveats in mind, I shall continue to use the three dimensional notation to work in the Coulomb gauge in the following, turning to~\cite{weinbergb,weinbergc} for questions of Poincare covariance or gauge invariance. By construction, \begin{equation} (\bm{S}\cdot \bm{e}(\bm{p},k))_{ij} =(S_k)_{ij},\;\;\; \mbox{and}\;\;\; (S_k)_{ij} \epsilon_j (\bm{p}, \lambda)= \lambda \epsilon_i (\bm{p}, \lambda),\;\; \lambda=\pm 1 \label{p5} \end{equation} We can now write $\tilde{V}^i_{G,\lambda}(\bm{p}) =g_G (\bm{p})\, \epsilon^i (\bm{p}, \lambda)$, where $g_G (\bm{p})$ is a $G$ dependent function of $\bm{p}$ to be determined in each case. By these definitions, these functions span the transverse subspace orthogonal to $\bm{p}$, and are eigenfunctions of the helicity with eigenvalue $\lambda$. \subsection*{Momentum and position representations} In the trivial case of the momentum operator $\hat{\bm{G}}=\bm{p}$ where $\tilde{V}^i_{\bm{p},\lambda}(\bm{p'}) =g_{\bm{p}} (\bm{p'})\, \epsilon^i (\bm{p'}, \lambda) $, it is straightforward to obtain $g_{\bm{p}}$. We want $(V_{\bm{p}_1,\lambda_1},V_{\bm{p}_2,\lambda_2})=\delta_{\lambda_1 \lambda_2} \delta^{(3)}(\bm{ p}_1-\bm{ p}_2)$, but we only have \begin{equation} (V_{\bm{p}_1,\lambda_1},V_{\bm{p}_2,\lambda_2}) =\int d\sigma(\bm{p}){\tilde{V}}^{*i}_{\bm{p}_1,\lambda_1} (\bm{p}) \tilde{V}^i_{\bm{p}_2,\lambda_2} (\bm{p})=\delta_{\lambda_1 \lambda_2} \int d\sigma(\bm{p})g^*_{\bm{p}_1} (\bm{p})g_{\bm{p}_2} (\bm{p})\label{p6} \end{equation} where we used that $\bm{\epsilon}^* (\bm{p}, \lambda) \bm{\epsilon} (\bm{p}, \lambda')=\delta_{\lambda \lambda'}$. This requires the value $g_{\bm{p}}(\bm{p}')=\sqrt{2 |\bm{p}'|} \delta^{(3)}(\bm{ p}'-\bm{ p})$ for $g_{\bm{p}}$, so that, apart from phases, the eigenfunctions have to be \begin{eqnarray} \tilde{V}^i_{\bm{p},\lambda}(\bm{p}') =\sqrt{2 |\bm{p}'|}\, \epsilon^i (\bm{p}', \lambda)\, \delta^{(3)}(\bm{ p}'-\bm{ p}),\;\;\; & \mbox{and}\;\;\; & \tilde{V}^i_{\bm{p},\lambda}(x) =\frac{1}{(2\pi)^{3/2}} \, \epsilon^i (\bm{p}, \lambda)\, e^{i \bm{p}\bm{x}}\label{p8} \end{eqnarray} Note that the second equation in (\ref{p8}) comes from the first one by straight application of Eq (\ref{p1}). The next step is to assume the existence of a position, namely a vector operator $\hat{\bm{G}}=\hat{\bm{q}}=(\hat{q_1},\hat{q_2},\hat{q_3})$, with simultaneous eigenfunctions in $\cal{H}$ that can be given as $\tilde{V}^i_{\bm{q},\lambda}(\bm{p}) =g_{\bm{q}} (\bm{p})\, \epsilon^i (\bm{p}, \lambda) $. For this to be possible it is necessary that: \begin{itemize} \item[i.] Position and helicity commute, that is $[\hat{q}^i,\hat{W}]=0$. \item[ii.] The three components are in involution, that is $[\hat{q}^i,\hat{q}^j]=0$. \end{itemize} To these conditions we have to add that positions and momenta be canonically conjugate operators: $[\hat{q}^i,\hat{p}^j]=i \delta^{ij}$. Finally, it will be necessary to check whether $\hat{\bm{q}}$ gives rise to a probabilistic interpretation or not. We shall return to the explicit construction of the position operator later on but, for the time being, we assume its eigenfunctions exist in the transverse Hilbert space of the photon and form an orthogonal and complete set. We first explore the implications of the orthogonality. 
A direct computation using $\tilde{V}^i_{\bm{q},\lambda}(\bm{p}) =g_{\bm{q}} (\bm{p})\, \epsilon^i(\bm{p}, \lambda) $gives \begin{equation} (V_{\bm{q}_1,\lambda_1},V_{\bm{q}_2,\lambda_2})=\delta_{\lambda_1 \lambda_2} \int d\sigma(\bm{p}) g^*_{\bm{q}_1} (\bm{p}) g_{\bm{q}_2} (\bm{p}) \label{p9} \end{equation} but, as in the case of the momentum operator, we would like to have $(V_{\bm{q}_1,\lambda_1},V_{\bm{q}_2,\lambda_2}) = \delta_{\lambda_1 \lambda_2} \delta^{(3)}(\bm{ q}_1-\bm{ q}_2)$. This requires $g_{\bm{q}}(\bm{p})= (2 \pi)^{-3/2} \,\sqrt{2 |\bm{p}|}\,\, e^{-i \bm{p}\bm{q}}$. We can now give the position eigenfunctions in both, momentum and coordinate representations: \begin{eqnarray} \tilde{V}^i_{\bm{q},\lambda}(\bm{p}) = (2 \pi)^{-3/2}\, \sqrt{2 |\bm{p}|}\, \epsilon^i (\bm{p}, \lambda)\, e^{-i \bm{p}\bm{q}} \; &\mbox{and} \; &\tilde{V}^i_{\bm{q},\lambda}(\bm{x})=(2\pi)^{-3} \int d^3 p\, \epsilon^i (\bm{p}, \lambda)\, e^{i \bm{p}(\bm{x}-\bm{q})}\label{p11} \end{eqnarray} It is clear from the last equation that $\bm{x}$ does not correspond to the position eigenvalue $\bm{q}$. This mismatch is due to the presence in the integrand of the {\em rhs} of momentum dependent polarization vectors that link momentum with spin.This equation summarizes much of the troublesome nature of the coordinate space wave function of the photon. The above eigenfunctions form complete sets. By straightforward calculation using $\sum_\lambda \epsilon^{*i} (\bm{p},\lambda) \epsilon^j (\bm{p},\lambda)=\delta^{ij}-p^i p^j/|\bm{p}|^2 = \delta^{ij}_\bot (\bm{p})$, we get the decompositions of the identity \begin{eqnarray} \prod (\bm{p}_1,\bm{p}_2) =&\sum_\lambda \int d\sigma(\bm{p}) \tilde{V}^i_{\bm{p},\lambda} (\bm{p}_1) \tilde{V}^{j*}_{\bm{p},\lambda} (\bm{p}_2)&= \delta^{ij}_\bot (\bm{p}_1) \delta^{(3)}(\bm{ p}_1-\bm{ p}_2) \label{p11a}\\ \bigwedge (\bm{p}_1,\bm{p}_2) =& \sum_\lambda \int d^3 q \tilde{V}^i_{\bm{q},\lambda} (\bm{p}_1) \tilde{V}^{j*}_{\bm{q},\lambda} (\bm{p}_2)&= 2 |\bm{p}_1 | \delta^{ij}_\bot (\bm{p}_1) \delta^{(3)}(\bm{ p}_1-\bm{ p}_2) \label{p11b} \end{eqnarray} We have arrived at two alternative complete sets of commuting operators ({\sl momentum, helicity}) and ({\sl position, helicity}), whose simultaneous eigenfunctions form alternative bases of the Hilbert space $\cal{H}$ of the massless $(1,0) \oplus (0,1)$ representation of the Poincare group. We recall again the difference between position and coordinate, and refer the reader back to Eq.(\ref{p11}) for a check. Given an arbitrary state $\Psi$ in $\cal{H}$, we could employ Dirac bracket notation to denote its components in both representations \begin{eqnarray} \langle \bm{p} \lambda|\Psi\rangle=&\int d\sigma(\bm{p}')\tilde{V}^{i*}_{\bm{p},\lambda} (\bm{p}') \tilde{\Psi}^i (\bm{p}')=& \frac{1}{\sqrt{2\,|\bm{p}|}}\epsilon^{i*} (\bm{p},\lambda) \tilde{\Psi}^i (\bm{p}) \label{p11m}\\ \langle \bm{q} \lambda|\Psi\rangle=&\int d\sigma(\bm{p}')\tilde{V}^{i*}_{\bm{q},\lambda} (\bm{p}') \tilde{\Psi}^i (\bm{p}')=& (2\pi)^{-3/2} \int d^3 p \,\,e^{i \bm{p}\bm{q}}\,\, \langle \bm{p} \lambda|\Psi \rangle \label{p11n} \end{eqnarray} The relation (\ref{p11n}) is the standard one between position and momentum representations in quantum mechanics. 
In fact, \begin{equation} \langle \bm{q} \lambda | \bm{p} \lambda' \rangle= (2 \pi)^{-3/2}\,\,\delta_{\lambda \lambda'} \exp(i \bm{p} \bm{q}),\; \langle \bm{p} \lambda | \bm{p}' \lambda' \rangle=\delta_{\lambda \lambda'} \delta^{(3)}(\bm{ p}-\bm{ p'}),\; \langle \bm{q} \lambda | \bm{q}' \lambda' \rangle=\delta_{\lambda \lambda'} \delta^{(3)}(\bm{ q}-\bm{ q'}) \label{p11e} \end{equation} Note also the scalar product in both representations: \begin{equation} \langle \psi | \psi \rangle=\sum_\lambda \int d^3 p\, |\langle \bm{p} \lambda|\Psi \rangle|^2 = \sum_\lambda \int d^3 q\, |\langle \bm{q} \lambda|\Psi \rangle|^2 \label{p11z} \end{equation} The absence of polarization vectors in the integrand in the {\sl rhs} of (\ref{p11n}) eliminates the obstructions~\cite{wightman,galindo,amrein} that prevent one from considering \begin{equation} P_\psi (\Delta,\lambda)= \int_\Delta d^3 q |\langle \bm{q} \lambda|\Psi \rangle|^2 \label{p11y} \end{equation} as the probability for finding a photon of helicity $\lambda$ in the Borel set $\Delta \subset \mathbf{R}^3$. In particular, $P_\psi (\Delta)=0$ is possible. The reason is that, in the absence of polarization vectors in (\ref{p11n}), the Fourier transform of $\langle \bm{q} \lambda|\Psi \rangle$ can be an entire function of $\bm{p}$. Then, according to the Plancherel-Polya theorem~\cite{plancherel}, $P_\psi (\Delta)$ may vanish on Borel sets $\Delta \subset \mathbf{R}^3$. Physically this is necessary in order to cope with the absence of photons in $\Delta$. It is the product of $\tilde{\Psi}^i (\bm{p})$ and ${\epsilon^i (\bm{p},\lambda)}/{\sqrt{2 |\bm{p}|}}$ in (\ref{p11m}) that has to be entire. This can be so even if the presence of $\sqrt{|\bm{p}|}$ explicitly and in the $\epsilon$'s implies that the individual $\tilde{\Psi}^i (\bm{p})$ cannot be entire functions and, therefore, cannot be given a probabilistic interpretation. In any case, it is worth recalling that the expressions in (\ref{p11m}) are for fixed helicity. Hence photon states with fixed helicity could be localizable in spite of~\cite{galindo} and of theorems 1 and 2 in~\cite{amrein}. This is a first consequence of the existence of $\bm{q}$, which up to now we have only hypothesized. The expansion of arbitrary states in $\cal{H}$ in terms of momentum and helicity eigenstates can be given by inverting (\ref{p11m}) (recall that $\cal{H}$ is the space of {\sl transverse} states) \begin{equation} \tilde{\Psi}^i (\bm{p})=\sqrt{2\, |\bm{p}|} \sum_\lambda \epsilon^i (\bm{p},\lambda) \langle \bm{p} \lambda|\Psi \rangle \label{p11u} \end{equation} a relation that is customarily used in conjunction with Eq. (\ref{p1}) to write \begin{equation} \tilde{\Psi}^i (\bm{x})=(2\pi)^{-3/2} \sum_{\lambda} \int \,\,d^3p\,\, e^{i \bm{p}\bm{x}} \, \epsilon^i (\bm{p}, \lambda)\,\, \langle \bm{p} \lambda|\Psi\rangle \label{p11f} \end{equation} This is the standard coordinate representation used to describe one-photon states and -- promoting $ \langle \bm{p} \lambda|\Psi\rangle $ to the ranks of annihilation operators -- the photon field. The comparison of (\ref{p11n}) and (\ref{p11f}) indicates clearly that it is the presence of the $p$-dependent polarization vector in the integrands of (\ref{p11}) and (\ref{p11f}) that forestalls the interpretation of $\bm{x}$ as a true position. The decompositions Eqs. (\ref{p11u}) and (\ref{p11f}) clearly show that the {$\tilde{\Psi}(\bm{p})$} are transverse (i.e. $\sum_i p_i\tilde{\Psi}_i(\bm{p})=0$) and the $\tilde{\Psi} (\bm{x})$ solenoidal (i.e.
$\partial \tilde{\Psi}_i(\bm{x})/\partial x_i=0$). The interested reader is referred to~\cite{iwob} for the use of (\ref{p11f}) as a photon wave function and to Landau and Peierls~\cite{landaub} to explore the meaning of the non-local relations between $\tilde{\Psi}(\bm{x})$ and $\langle \bm{q} \lambda|\Psi\rangle$. It is straightforward to show that they are \begin{equation} \tilde{\psi}^i(\bm{x})=\sum_\lambda \int d^3 q \, \tilde{V}^i_{\bm{q} \lambda} (\bm{x}) \langle \bm{q} \lambda |\psi \rangle,\;\; \mbox{and}\; \; \langle \bm{q} \lambda |\psi \rangle= \int d^3 x \, \tilde{V}^{i*}_{\bm{q} \lambda} (\bm{x}) \tilde{\psi}^i(\bm{x}) \label{p11g} \end{equation} \section*{Position operator} In the previous section we have seen that there is room in the Hilbert space of one-photon states for an ordinary position operator. In fact, \begin{equation} \langle \bm{q}' \lambda|\hat{q_i}|\psi \rangle= q'_i \langle \bm{q}'\lambda|\psi \rangle ,\;\; \mbox{and}\; \; \langle \bm{p} \lambda |\hat{q_i}|\psi \rangle=i \hbar(\frac{\partial}{\partial p_i}) \langle \bm{p} \lambda|\psi \rangle \label{p11h} \end{equation} Here we will analyze the features of this operator as seen in the space of transverse states. Following Hawton~\cite{hawtona}, we first identify its structure by letting it operate on the eigenfunctions $V_{\bm{q},\lambda}$. Then we shall show that it qualifies as an appropriate position operator. In short, our first task is to find the operator $\hat{q}^a,\, a=1,2,3$ whose eigenfunctions are the $\bm{V}$'s given in Eq. (\ref{p11}): \begin{equation} \left\{\hat{q}^a \, \tilde{\bm{V}}_{\bm{q},\lambda}\right\}_i(\bm{p})= q^a \, \left\{\tilde{\bm{V}}_{\bm{q},\lambda}\right\}_i (\bm{p}) \label{p12} \end{equation} Using the explicit form (\ref{p11}) of $\tilde{\bm{V}}_{\bm{q},\lambda}$ above, Hawton obtained after some algebra~\cite{hawtona} \begin{eqnarray} \left\{\hat{q}^a \, \tilde{\bm{V}}_{\bm{q},\lambda}\right\}_i (\bm{p})= ( i \delta_{ij} \nabla^a + (Q^a)_{ij} ) \, \left\{\tilde{\bm{V}}_{\bm{q},\lambda}\right\}_j (\bm{p})&\Rightarrow & (\hat{q}^a)_{ij}=i \delta_{ij} \nabla^a + (Q^a)_{ij}\label{p13} \end{eqnarray} where \begin{equation} \nabla^a =\sqrt{2 |\bm{p}|} \frac{\partial}{\partial p^a}\frac{1}{\sqrt{2 |\bm{p}|}},\;\; \mbox{and}\;\; (Q^a)_{ij} =i \sum_{\sigma=\theta,\varphi,k} e_i (\bm{p}, \sigma) \left\{\nabla^a e_j (\bm{p}, \sigma) \right\} \label{p14} \end{equation} As is well known, the operator ordering implied by $\nabla^a$ is necessary to make it self-adjoint with respect to the measure $d\sigma(\bm{p})$. Notice that the resulting operator (\ref{p13}) is independent of the helicity quantum number, being a matrix in the coordinate indices. By computing the derivatives of the basis vectors that appear in the definition of $Q^a$, one obtains the explicit expression of the operator \begin{equation} (\hat{q}^a)_{ij}=i \delta_{ij} \nabla^a +\frac{1}{|\bm{p}|} \left[\bm{e}(\bm{p},k)\wedge \bm{S}_{ij}\right]^a - \frac{\cot \theta}{|\bm{p}|} e^a(\bm{p},\varphi) W_{ij}\label{p15a} \end{equation} due to Hawton~\cite{hawtona,hawtonb,hawtonc}, who correctly identified the first two terms as the Pryce position operator~\cite{pryce}, and the last one as a compensating term~\cite{iwoc} for the topological photon phase~\cite{chiao,tomita}. She also realized that, due to the last term on the {\sl rhs} of (\ref{p15a}), $\hat{\bm{q}}$ appeared as a set of three components in involution, a property that previous position operators lacked.
A compact expression that summarizes much of the above, shows explicitly the relation between this position operator and the spinless one $i \nabla^a$, and gives a rationale for it, is \begin{equation} (\hat{q}^a)_{ij} =i \sum_{\sigma=\theta,\varphi,k} e_i (\bm{p}, \sigma) \nabla^a e_j (\bm{p}, \sigma) \label{p15} \end{equation} The sum in (\ref{p15}) includes the longitudinal polarization $\bm{e}(\bm{p}, k)$ but, in spite of it, $\hat{\bm{q}}$ operates within the transverse subspace $\cal{H}$ overcoming some queries put forward by Wightman~\cite{wightman}. Using (\ref{p15}) on an arbitrary function $\bm{\Psi} \in \cal{H}$, expanded according to (\ref{p11u}), we obtain \begin{equation} \left\{ \hat{q}^a \tilde{\bm{\Psi}} \right\}_i (\bm{p}) =\sum_\lambda \int d^3 p' \tilde{V}^i_{\bm{p}',\lambda}(\bm{p}) i \frac{\partial}{\partial p'^a} \langle \bm{p}',\lambda|\bm{\Psi}\rangle= \sqrt{2 \,|\bm{p}|} \sum_\lambda \epsilon_i (\bm{p},\lambda) i \frac{\partial}{\partial p^a} \langle \bm{p},\lambda|\bm{\Psi}\rangle \label{p16} \end{equation} which shows explicitly the transversality of $\hat{\bm{q}}\cdot \tilde{\bm{\Psi}}$. We also recall here that $[\hat{q}^a,\hat{W}]=0$ as can be seen by explicit computation using (\ref{p13}) and the definition of the helicity. Putting all together, it is possible to use the familiar notations: \begin{equation} \hat{q}^a =\sum_\lambda \int d^3 p\,\, |\bm{p} \lambda \rangle i \frac{\partial}{\partial p^a} \langle \bm{p}, \lambda|, \;\;\; \mbox{and}\;\;\; \hat{q}^a =\sum_\lambda \int d^3 q\,\, |\bm{q} \lambda\rangle q^a \langle \bm{q}, \lambda| \label{p17} \end{equation} for the position operator in the helicity representation $\hat{q}^a$. Finally, the wave functions can be interpreted as in the non relativistic case: given a photon in the state $\Psi$, $P_\Psi (\bm{p},\lambda)=|\langle \bm{p} \lambda |\Psi \rangle|^2$ gives the probability of finding it with helicity $\lambda$ and momentum $\bm{p}$, and $P_\Psi (\bm{q},\lambda)=|\langle \bm{q} \lambda |\Psi \rangle|^2$ gives the probability of finding it with helicity $\lambda$ in the position $\bm{q}$. Any arbitrary operator $\hat{\cal{O}}$ defined in the Hilbert space $\cal{H}$ can be represented in the two different bases introduced above. 
Using the scalar product (\ref{p2}) and the relation (\ref{p11u}) we get: \begin{eqnarray} \langle \bm{\Phi}, \hat{\cal{O}}\; \bm{\Psi} \rangle &=& (2 \pi)^{-3/2} \int d\sigma(\bm{p})\, \sum_{ij}\, \tilde{\Phi}^*_i (\bm{p}) \hat{\cal{O}}_{ij} \tilde{\Psi}_j (\bm{p}) \\ &=& (2 \pi)^{-3/2} \int d^3 p \sum_{\lambda\,\lambda'} \langle \bm{\Phi} |\bm{p}\, \lambda \rangle\, \hat{\cal{O}}_{\lambda\,\lambda'} \langle \bm{p}\, \lambda | \bm{\Psi} \rangle \label{p18} \end{eqnarray} so that there is a well defined relation between the operator's expressions in both bases: \begin{equation} \hat{\cal{O}}_{\lambda\,\lambda'}(\bm{p}, \partial/\partial \bm{p}) = \frac{1}{\sqrt{2 |\bm{p}|}}\sum_{ij} \epsilon^*_i (\bm{p},\lambda) \hat{\cal{O}}_{ij}(\bm{p}, \partial/\partial \bm{p}) \epsilon_j (\bm{p},\lambda') \sqrt{2 |\bm{p}|} \label{p19} \end{equation} Applying this to the canonically conjugate momenta $\tilde{p}^a_{ij}=\delta_{ij} p^a$ and position (\ref{p15}) operators, we get them in the helicity basis as \begin{equation} \hat{p}^a_{\lambda \lambda'}=\delta_{\lambda \lambda'} p^a;\,\,\, \mbox{and}\,\,\, \hat{q}^a_{\lambda \lambda'}=\delta_{\lambda \lambda'} i \frac{\partial}{\partial p^a} \label{p20} \end{equation} We have thus recovered the old Heisenberg-Weyl quantization rules as anticipated in (\ref{p11h}). They are valid in the helicity basis and only in it because, due to helicity conservation, additional terms appear in other frames to compensate for the effects of $\partial/\partial \bm{p}$ on the momentum dependent polarizations. As said above, the meaning of the derivative in the helicity basis is that of a covariant derivative. In fact, it is related to the standard covariant derivative introduced in ref~\cite{iwoc} to account for the topological phase~\cite{chiao,tomita} of the photon. \begin{equation} \widehat{(i\nabla^a)}_{\lambda \lambda'}= \delta_{\lambda \lambda'} \left[ i \frac{\partial}{\partial p^a} + \lambda D^a (\bm{p})\right] \label{p21} \end{equation} where $D^a (\bm{p})=\cot \theta e^a(\bm{p},\varphi)/|\bm{p}|$. Notice, by the way, that the helicity operator $\hat{W}_{ij}$ transforms to the helicity quantum number $\lambda$ in the transformation from the spin basis onto the helicity basis. The spin matrix also undergoes a similar transformation \begin{equation} \hat{S^a}_{\lambda \lambda'} =\delta_{\lambda \lambda'} \lambda e^a(\bm{p},k) \label{p22} \end{equation} The transformation to the helicity basis analyzed above $\hat{{\cal O}}_{ij} \rightarrow \hat{{\cal O}}_{\lambda \lambda'}$ should not be mistaken with the similarity transformation to the photon frame, namely: \begin{equation} \hat{{\cal O}}^a_{\sigma \sigma'}=\sum_{ij} e_i(\bm{p},\sigma) \hat{{\cal O}}^a_{ij} e_j(\bm{p},\sigma'), \,\,\, \mbox{with}\,\,\, \{ \sigma,\sigma'\}=\{k,\theta,\varphi\}\label{p24} \end{equation} This is simply the rotation from the fixed cartesian axes to the axes lying along the unit vectors $\bm{e}(\bm{p},\sigma)$. A most striking case occurs for the spin matrix that, by (\ref{p24}), becomes $\hat{S}^a_{\sigma \sigma'}=i \sum_{\sigma''} \epsilon_{\sigma \sigma' \sigma''} e^a(\bm{p},\sigma'')$. Notice the difference with (\ref{p22}): \begin{equation} \left[\hat{S}^a,\hat{S}^b\right]_{\sigma \sigma'}= i \sum_c \epsilon_{a b c} \hat{S}^c_{\sigma \sigma'}\,\,\, \mbox{while} \,\,\, \left[\hat{S}^a,\hat{S}^b\right]_{\lambda \lambda'}=0 \label{p25} \end{equation} The vanishing of the second commutator is of no surprise as there are no remains of the spin matrices in the helicity representation. 
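As a further purely numerical illustration (again assuming NumPy; for multiplicative operators the $\sqrt{2|\bm{p}|}$ factors in (\ref{p19}) cancel), one can verify Eq. (\ref{p22}) by sandwiching the spin matrices between the polarization vectors at a generic momentum:
\begin{verbatim}
import numpy as np

eps3 = np.zeros((3, 3, 3))                       # Levi-Civita symbol
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0
S = -1j * eps3                                   # (S_a)_{ij} = -i eps_{aij}

p = np.array([0.3, 1.1, -0.7])
k = np.linalg.norm(p)
th, ph = np.arccos(p[2] / k), np.arctan2(p[1], p[0])
e_k = p / k
e_th = np.array([np.cos(th) * np.cos(ph), np.cos(th) * np.sin(ph), -np.sin(th)])
e_ph = np.array([-np.sin(ph), np.cos(ph), 0.0])
pol = {lam: -(lam / np.sqrt(2)) * (e_th + 1j * lam * e_ph) for lam in (+1, -1)}

# spin matrix in the helicity basis: delta_{lam lam'} * lam * e^a(p,k)
for a in range(3):
    for lam in (+1, -1):
        for lamp in (+1, -1):
            elem = np.vdot(pol[lam], S[a] @ pol[lamp])
            expected = (lam if lam == lamp else 0.0) * e_k[a]
            assert np.isclose(elem, expected)
\end{verbatim}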
The dimension of the spin space is 3, while the helicity basis is a sum of two one-dimensional representations. Even after putting together the two parity related representations (1,0) and (0,1) to have both helicities, the would-be helicity 0 eigenstate is outside the representation space. The relation of these facts with gauge invariance was discussed~\cite{weinbergb,weinbergc} a long time ago. From the geometric point of view, the transformation $\hat{{\cal O}}_{ij} \rightarrow \hat{{\cal O}}_{\lambda \lambda'}$ is a projection from $\mathbf{R}^3$ onto the transverse subspace spanned by $\bm{e}(\bm{p},\theta)$ and $\bm{e}(\bm{p},\varphi)$, along with the appropriate label rearrangements. As a result, the operator products must be handled with caution: in general their transform shall not coincide with the product of the operators' transforms. We saw this in Eq. (\ref{p25}) for the spin matrices. The same occurs for the angular momentum and the boost operators and, in general, for all the operators that pull the states out of the transverse space. This sometimes led to define the helicity representation through a non singular transformation applied to the spin representation: \begin{equation} \hat{\cal{O}}'_{ij}(\bm{p}, \partial/\partial \bm{p}) = \frac{1}{\sqrt{2 |\bm{p}|}}\sum_{rs} R^{-1}_{ir}(\bm{p}) \hat{\cal{O}}_{rs}(\bm{p}, \partial/\partial \bm{p}) R_{sj}(\bm{p}) \sqrt{2 |\bm{p}|} \label{p26} \end{equation} where $R(\bm{p})$ can be any of the rotations from the standard momentum $(0,0,|\bm{p}|)$ to $\bm{p}$. Denoting by $\bm{e}(\sigma), \sigma=1,2,3$ three unitary vectors along the fixed axis \begin{equation} e_i(\bm{p},\sigma)=R_{ij}(\bm{p}) e_j(\sigma) \label{p27} \end{equation} Notice that the intrinsic arbitrariness in $R$, due to the invariance of the standard momentum under rotations around it, is removed by the choice of fixed vectors $e_i(\sigma) $ in (\ref{p27}). Note also that the helicity representation operators are obtained by projecting (\ref{p26}) on the fixed helicity basis: \begin{equation} \hat{\cal{O}}_{\lambda \lambda'} = \,\,\sum_{i j}\,\, \epsilon^*_i(\lambda) \,\,\hat{\cal{O}}'_{ij} \,\, \epsilon_j(\lambda') \label{p27a} \end{equation} where, according to (\ref{p4}) and (\ref{p27}), $\epsilon_i(\lambda)=\frac{\lambda}{\sqrt{2}} \{\epsilon_i(1)+i\lambda \epsilon_i(2)\}$. Needless to say that (\ref{p27a}) is invertible only within the subspace orthogonal to the standard momentum. The application of the above results to the specific case of the electromagnetic field completes the standard picture of the one photon state with additional, position dependent, information. A photon in a state $\bm{A}$ can be given as \begin{equation} A^i (\bm{x})=(2\pi)^{-3/2} \sum_{\lambda} \int \,\,\frac{d^3 p}{\sqrt{2\, |\bm{p}|}}\,\, e^{i \bm{p}\bm{x}} \, \epsilon^i (\bm{p}, \lambda)\,\, \langle \bm{p} \lambda|\,\,\bm{A}\,\,\rangle \label{p27aa} \end{equation} or in the alternative form \begin{equation} A^i(\bm{x}) =\sum_\lambda \int dq V^i_{\bm{q},\lambda}(\bm{x}) \,\, \langle \bm{q}\,\, \lambda|\,\,\bm{A}\, \,\rangle \label{p27e} \end{equation} this reinforces the interpretation of $\hat{\bm{q}}$ as a position operator and of $\bm{V}_{\bm{q} \lambda}(x)$ (given explicitly in (\ref{p11}) as the configuration space amplitude of the photon state localized at $\bm{q}$ with helicity $\lambda$. Note that we could add zero components to the polarization vectors to form a fourvector-like object $A_\mu (\bm{x})$. 
However, as shown in (\ref{p4a}) this object does not transform as a fourvector, but inhomogeneously as a connection: \begin{equation} {\cal U}[\Lambda] A_\mu (x,\lambda) {\cal U}[\Lambda^{-1}]= \Lambda_\nu^\mu\,\, (2 \pi)^{-3/2} \int \frac{d^3 p}{\sqrt{2 p}} \,\,e^{-i p \Lambda x} \,\,\{\epsilon_\nu (p,\lambda)- p_\nu\,\, \Omega(p,\lambda;\Lambda)\}\,\,\, \langle p \,\lambda |\,\,A \rangle \label{p27b} \end{equation} A tensor-like covariant object for the photon can be defined as \begin{equation} F_{\mu \nu}(\bm{x})=(2\pi)^{-3/2} \sum_{\lambda} \int \,\,\frac{d^3 p}{\sqrt{2\,|\bm{p}|}}\,\, e^{i \bm{p}\bm{x}} \, i\,\,\{p_\mu\, \epsilon_\nu (\bm{p}, \lambda)- p_\nu\, \epsilon_\mu (\bm{p}, \lambda)\}\,\, \langle \bm{p} \lambda|\,\,A\rangle \label{p27c} \end{equation} This contains the same information that $A_\mu$, but transforms as an antisymmetric tensor; its components constitute the electric and magnetic fields. In the next section we shall use (\ref{p27c}) to construct and solve the photon Maxwell equations in vacuum. \section*{Time operator} The Galileo boost operator in the NR representations for elementary systems has only three components $\hat{G}_i, i=1,2,3$. Each of them is associated to a component of the position operator. There is no room in the representation space for an additional boost associated to the time. In fact, time is invariant under Galileo boosts. Most likely, this is hidden among the reasons~\cite{pauli,mandelstamm,fock,paul,aharonov,kijowski, buscha,buschb,hilgevoorda,hilgevoordb} for the difficulty of finding a time operator that behave just like the other position operators, something tempting in the study of individual particles. As explained in~\cite{hilgevoordb}, this is a misleading approach to the role of time in quantum mechanics and its use led to several dead ends in the past. A question to be taken into account is that time translations are given in terms the Hamiltonian $H$ which is not independent, but is fixed as a function of the other operators in the representation by the mass shell condition. The same is to be expected for a time operator, instead of the addition of a new independent component to the position, a given function of the basic operators, whose explicit form depends on the very properties of the representation. Leaving aside the far-reaching but still out of reach question of the status of Newtonian time in quantum mechanics, and the search of its corresponding operator -- if any -- it is possible to identify~\cite{grot,reisenberger} time-like properties of quantum systems. The simplest of these appears for free elementary particles in the form of the time of arrival at a fixed position $\bm{X}$, a deceptively simple property whose analysis in quantum mechanics is very delicate and full of traps. Classically it is a derived quantity, a function of the initial position and momentum whose values are either ``never" (when $\bm{X}$ is out of the particle trajectory), or a real number that solves the equations of motion for $t$ as a function of $\bm{X}$. Notice that some kind of integrability is implicit for this classical notion to work properly~\cite{leonc}. Quantization requires the splitting of the Hilbert space into two orthogonal subspaces, one that contains the never detected states, and other that contains the eventually detected ones. The time of arrival acts within this last subspace~\cite{grot,leona}, where it is represented by an operator which is not self-adjoint but only maximally symmetric. 
Its eigenstates are not orthogonal, so that, instead of a projector valued measure, the statistics of the times of arrival at a detector can only be analyzed in terms of positive operators~\cite{giannitrapani}. Below, I shall show how this works for the photon. First, we shall study the time dependence hidden in the bracket $\langle \bm{p} \lambda|\,\,A(t)\rangle$. Taking into account that $p^2=0$, and that $ p\, \epsilon(\bm{p},\lambda)=0$, we get from (\ref{p27c}) the equations $\partial^\mu F_{\mu \nu}=0$. These cover the full set of Maxwell equations due to the duality relation among null fields in vacuum~\cite{weinbergc}: \begin{equation} F^{\mu \nu}(x,\lambda)=i\,\,\frac{\lambda}{2}\,\, \epsilon^{\mu \nu \rho \sigma} \,\, F_{\rho \sigma} (x,\lambda)\Rightarrow \left\{ \begin{array}{rcl} F_{ij} & = & \epsilon_{ijk} (E_k+i\lambda\,B_k)\\&&\\ F_{0k}&=& -i \lambda (E_k+i\lambda\,B_k) \end{array} \right. \label{p27d} \end{equation} We now have two equations $\partial^i F_{i0}=0$ and $\partial^0 F_{0j}+\partial^i F_{ij}=0$. Calling $\bm{F}(\bm{x}, \lambda)=\bm{E}(\bm{x}, \lambda)+i \lambda \bm{B}(\bm{x}, \lambda)$, the first equation reduces to the divergenceless condition $\bm{\nabla}\cdot\bm{F}(\bm{x}, \lambda)=0$ and the second to the Schr\"odinger equation~\cite{pryce,cooka,inagaki,iwob,sipe}: \begin{equation} i \frac{\partial \bm{F}(\bm{x}, \lambda)}{\partial t}= \lambda c \bm{\nabla}\wedge \bm{F}(\bm{x}, \lambda)\label{q10} \end{equation} Recalling the expression for the spin matrix $(S^j)_{ik}=i \epsilon_{ijk}$, this equation can be given a more transparent form: \begin{equation} i\frac{\partial F_i(\bm{x}, \lambda)}{\partial t} = \lambda\, c\, (\bm{S} \bm{p})_{ij}F_j(\bm{x}, \lambda)\label{q11} \end{equation} where $\bm{p}=-i \bm{\nabla}_{\bm{x}}$. Using now (\ref{p27c}) in the definition of $\bm{F}$ we obtain \begin{equation} \bm{F}(\bm{x}, \lambda)=(2\pi)^{-3/2} \int d^3 p \sqrt{2 |\bm{p}|}\, i\,e^{i \bm{p}\bm{x}} \bm{\epsilon}(\bm{p},\lambda)\,\, \langle \bm{p} \lambda|A(t)\rangle \label{q12} \end{equation} therefore, $(\bm{S}\bm{p})_{ij}F_j=\lambda |\bm{p}| F_i$ and hence, \begin{equation} i\frac{\partial \bm{F}(\bm{x}, \lambda)}{\partial t}=c\,\, |\bm{p}|\,\,\bm{F}(\bm{x}, \lambda) \label{q13} \end{equation} Therefore, the photon Hamiltonian is $H= c|\bm{p}| \delta_{\lambda \lambda'}$ in the helicity basis. Due to the simple expression of the photon Hamiltonian, the position operator in the helicity basis evolves quite simply: \begin{equation} \frac{d\hat{q}^a}{d t}=i [\hat{q}^a,c \sqrt{\hat{\bm{p}}^2}]=c\,\, \frac{\hat{p}^a}{|\hat{\bm{p}}|},\,\,\, \,\,\,\frac{d\hat{p}^a}{d t}=0\,\,\,\,\, \Rightarrow \,\,\, \hat{q}^a (t)=\hat{q}^a + c\,\, \frac{\hat{p}^a}{|\hat{\bm{p}}|}\, t,\,\,\, \hat{p}^a(t)=\hat{p}^a \label{p28} \end{equation} In the first of these equations we displayed explicitly the Hamiltonian obtained from the mass shell condition $H(p)=c\sqrt{\bm{p}^2}$, but then we continue to denote it by $|\bm{p}|$ as before. $\hat{q}^a$ and $\hat{p}^a$ are the position and momentum operators of the photon at $t=0$, while $\hat{q}^a(t)$ is the position operator at a time $t$ (we are working in the Heisenberg picture). All this is trivial and prompts the seemingly innocent question: When does the photon arrive at a position $\bm{x}$? This is the alternative to the standard quantum mechanical question: Where is the photon at time $t$? Much in the same way that the question ``where?''
brings a position operator whose probability distribution is used to ascertain {\sl at what place?}, the question ``when?'' demands the introduction of a time operator, with the meaning of time of arrival (i.e. {\sl at what instant?}), to close the logical loop. Several obstructions prevent the existence of such an operator in the standard formulation of quantum mechanics. The earliest one, which turned out to be the strongest, was formulated in the early days of quantum theory by Pauli~\cite{pauli}: The simultaneous existence of unitary representations for both the energy and a conjugate time operator is incompatible with a bounded Hamiltonian, opening up at least a sort of infrared catastrophe (see however~\cite{hilgevoordb}). This led to renouncing the self-adjointness of the time operator, keeping only its maximal symmetry (i.e. ${T^*}^\top =T$). Under these conditions, time eigenstates are no longer orthogonal (unless some steering regularization is used~\cite{grot}) and, instead of the traditional projector valued measure, it is necessary to turn to a positive operator valued measure~\cite{giannitrapani} to interpret the formalism. The interested reader is referred to the review~\cite{muga} for an account of these and related issues, and references to the original literature. Generally, the search for a time of arrival operator has been undertaken in the case of one space dimension. The present author studied the case of the free particle in three space dimensions~\cite{leona} concluding that the detected states are confined within a subspace of the whole Hilbert space. Outside it is the realm of no detection, that is, of those states for which the time of arrival is ``never''. In classical terms, they miss the detector, whose efficiency -- a different question -- is assumed to be $1$. On physical grounds, detection requires a constraint: that the particle momentum is parallel to the line joining ${\bm{q}}$ with the arrival (detector) position $\bm{z}$. This vector constraint and the free particle Hamiltonian form a first class system. The use of this formulation made it possible to obtain in~\cite{leona} a very simple solution for the time of arrival in three-dimensional space, basically an extension of the 1-D results. A time operator for photons requires a 3-D setting. Not only because the transverse-vector character of the electromagnetic field strips the 1-D approach of credibility. Also from the practical side: we shall analyze later on the arrival of photons through inhomogeneous media, where direction changes will in general take place, whose description needs more than the mere distance covered. Therefore, we shall examine the first class constrained system that evolves in the detected subspace as the first step towards the time of arrival of photons. The task can be formulated in very simple terms: If $\bm{z}$ is the detector position, at what time $t$ is $\hat{q}^a (t)=z^a$? In other words: How to invert the equation (\ref{p28}), namely $z^a=\hat{q}^a + c\,\, ({\hat{p}^a}/{|\hat{\bm{p}}|})\,\, t$, to get $t$? Several comments are in order here. First, we are promoting $t$ to the category of a q-number, while demoting $\hat{\bm{q}}(t)$ to a given, external, parameter. This is the very task to accomplish to define a time operator. Second, the evolution equations that we have to invert to obtain the time of arrival are the set (\ref{p28}) of three equations, one per component, depending on a single parameter $t$.
To be compatible, they have to satisfy the constraint: \begin{equation} {\mathbf L}_a (\bm{z}) = \epsilon_{a b c}(\hat{q}_b-z_b)\,\,\hat{p}_c = 0\label{p29} \end{equation} To quantize this constrained system we borrow from the method of Dirac~\cite{dirac}. Classically, the constraint guarantees that the orbital angular momentum of the particle is $\bm{z}\wedge\bm{p}$, so that $\bm{z}$ is a point of its trajectory. The total Hamiltonian formed by adding the constraints to the original one is: \begin{equation} H_\top(\bm{z})=c \sqrt{\bm{p}^2}\,\,+\,\,\mu_a {\mathbf L}_a (\bm{z}) \label{p30} \end{equation} where $q^a$ and $p^a$ are the dynamical variables to become operators after quantization, the $\mu$'s are Lagrange multipliers, and $\bm{z}$ is an external vector parameter corresponding to the detection position. The system is first class: \begin{equation} \{{\mathbf L}_a (\bm{z}),\,\,{\mathbf L}_b (\bm{z})\}=\epsilon_{a b c}\,\, {\mathbf L}_c (\bm{z}),\;\;\; \{{\mathbf L}_a (\bm{z}),\,\,H_\top(\bm{z})\}=\epsilon_{a b c}\,\, \mu_b\,\,\, {\mathbf L}_c (\bm{z}) \label{p31} \end{equation} where $\{,\}$ indicate Poisson brackets as this is still classical dynamics. The evolution of the constraints does not produce additional (secondary) constraints. Hence, the Hamiltonian (\ref{p30}) is enough to account for the evolution of the constrained system. When quantizing the system, the vanishing of the constraint translates into a kind of subsidiary condition: \begin{equation} \left(\hat{{\mathbf L}}_a (\bm{z}) \tilde{\Phi}\right)_i (\bm{p})=0 \label{p32} \end{equation} The set of vectors of the Hilbert space $\tilde{\Phi}_i (\bm{p})$ that satisfy the above equation form the subspace ${\cal H}_{\bm{z}}$ of the states that could, eventually, be detected at $\bm{z}$. On the other hand, using (\ref{p16}), Eq. (\ref{p32}) can be written as: \begin{equation} \epsilon_{a b c} \sqrt{2 |\bm{p}|} \sum_\lambda \epsilon_i (\bm{p},\lambda)\,\, \{i \frac{\partial}{\partial p_b} - z_b \}\,\, p_c \,\,\langle \bm{p} \lambda |\Phi \rangle=0 \label{p33} \end{equation} whose solution is \begin{equation} \langle \bm{p} \lambda |\Phi_{\bm{z}} \rangle = e^{-i \bm{p} \bm{z}}\,\,\, \Phi(|\bm{p}|,\lambda,\bm{z}) \label{p34} \end{equation} Note that the functions $\Phi$ may depend on the modulus of the momentum (but not on its direction), on the helicity and, possibly, on the detection point $\bm{z}$. However, this last dependence has to be switched off to maintain translational invariance. We now proceed to invert Eq. (\ref{p28}). First of all we write down the action of the position operator on the detected subspace: \begin{equation} \hat{q}^a \,\,\,\langle \bm{p} \lambda |\Phi_{\bm{z}} \rangle = \,\, e^{-i \bm{p} \bm{z}}\,\,\,\{z^a+i\frac{p^a}{|\bm{p}|} \,\frac{\partial}{\partial |\bm{p}|}\}\,\,\, \Phi(|\bm{p}|,\lambda) \label{p35} \end{equation} Hence, with $z^a$ a parameter and $\hat{t}(\bm{z})$ an operator, Eq. (\ref{p28}) reads: \begin{equation} z^a=\hat{q}^a + c\,\, \frac{p^a}{|\bm{p}|}\,\,\, \hat{t}(\bm{z}),\,\,\, \Rightarrow\,\,\, \,\,e^{-i \bm{p} \bm{z}}\,\, i\,\,\frac{p^a}{|\bm{p}|} \,\frac{\partial}{\partial |\bm{p}|}\,\, \Phi(|\bm{p}| ,\lambda)+c\,\,\frac{p^a}{|\bm{p}|}\,\,\, \hat{t}(\bm{z})\,\,\,\, e^{-i \bm{p} \bm{z}}\,\,\Phi(|\bm{p}|,\lambda)=0 \label{p36} \end{equation} This equation has to be valid whatever the function $\Phi$ chosen. 
This serves to define $\hat{t}(\bm{z})$: It is precisely the operator that transforms (\ref{p36}) into an identity in ${\cal H}_{\bm{z}}$, that is: \begin{equation} \hat{t}(\bm{z}) \approx -i\,\, e^{-i \bm{p} \bm{z}}\,\,\,\frac{\partial}{\partial |\bm{p}|}\,\,\,\,e^{i \bm{p} \bm{z}},\,\,\,\,\hat{t}(\bm{z}) = -i\,\, e^{-i \bm{p} \bm{z}}\,\frac{1}{|\bm{p}|}\,\,\frac{\partial}{\partial |\bm{p}|}\,\,|\bm{p}| \,\,\,e^{i \bm{p} \bm{z}} \label{p37} \end{equation} The symbol $\approx$ on the left indicates equality up to operator ordering, an ambiguity that we fix by the condition that $\hat{t}$ be maximally symmetric under integration by parts with the measure $d^3p$ of the $|\bm{p} \lambda \rangle$ basis. This produces the operator defined on the right-hand side, which we shall use as the time of arrival operator in what follows. It depends parametrically on $\bm{z}$ in much the same way as the operators depend parametrically on $t$. Incidentally, this reminds us that we are working in the Heisenberg picture at $t=0$. It is straightforward to show that, when some time $t_0$ has elapsed, the time operator shifts to $\hat{t}(\bm{z},t_0)=\hat{t}(\bm{z})-t_0$, and that the arrival occurs at a time $t_{\bm{z}}$ such that $\hat{t}(\bm{z},t_{\bm{z}})=0$. The eigenfunctions $\langle \bm{p} \lambda| t \bm{z}\rangle$ of $\hat{t}(\bm{z})$ obtained by solving the eigenvalue equation $\hat{t}(\bm{z})\,\,\langle \bm{p} \lambda| t \bm{z}\rangle= t \langle \bm{p} \lambda| t \bm{z}\rangle$ are proportional to $\exp\{i(H t- \bm{p}\bm{z})\}/|\bm{p}|$. Due to the fact that $\lim_{|\bm{p}|\rightarrow 0} (|\bm{p}| \,\,\langle\bm{p} \lambda | t \lambda \rangle) \neq 0$, with $|\bm{p}|=0$ being the lower bound of the Hamiltonian, the operator (\ref{p37}) cannot be self-adjoint. This is similar to the case of the radial momentum $p_r=-i r^{-1} (\partial/\partial r) r$ in three space dimensions (but notice that this does not prevent us from using $H=p_r^2/2m \,\,+\,\, l(l+1)/(2m r^2)$ as Hamiltonian; what is done in this case is to restrict the behavior of the wave function at the boundary $r=0$). The time operator commutes with the helicity so, taking into account that $\langle \bm{p} \lambda | \bm{q} \lambda' \rangle= \delta_{\lambda \lambda'} (2 \pi)^{-3/2} \exp{(-i \bm{p}\bm{q})}$, we could write \begin{equation} \langle\bm{p}\,\,\lambda'|t\,\,\lambda; \bm{z} \rangle =\langle \bm{p}\,\,\lambda'|\,\,\,\frac{1}{H} e^{i H t}\,\,\,|\bm{z}\,\,\, \lambda \rangle\,\,\, \Rightarrow \,\,\, |t \,\,\lambda; \,\,\bm{z}\rangle = \frac{1}{H} e^{i H t}\,\,\,|\bm{z} \,\,\lambda \rangle \label{p38} \end{equation} This notation is highly symbolic mainly due to the fact that $\bm{z}$ is just an external parameter belonging to the experimental set-up, the observer's will, etc., so that there is nothing like $\bm{z}$ among the properties of the particle. However, the above expression is correct if one considers that $|\bm{z} \,\,\lambda \rangle$ is the eigenket of the particle's position operator with eigenvalue $\bm{z}$. Writing the detected states (\ref{p34}) in the same form may throw some light on the meaning of the notation: \begin{equation} \langle \bm{p}\,\,\lambda \,\,|\Phi_{\bm{z}}\rangle =\langle \bm{p}\,\,\lambda \,\,|\,\Phi(H,\,\,\lambda)|\bm{z} \,\,\,\lambda \,\,\rangle \label{p39} \end{equation} This is the effect of the subsidiary condition (\ref{p32}): it projects on the detector position $\bm{z}$, keeping only that part of the state that is in $s$-wave relative to $\bm{z}$ (hence the $\Phi$ dependence on $|\bm{p}|$ alone).
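In fact, the eigenvalue equation quoted above can be checked directly: acting with the ordered operator in (\ref{p37}) on $\phi_t(\bm{p})=e^{i(|\bm{p}| t - \bm{p}\bm{z})}/|\bm{p}|$ gives $$\hat{t}(\bm{z})\,\phi_t(\bm{p}) = -i\,\, e^{-i \bm{p}\bm{z}}\,\frac{1}{|\bm{p}|}\,\frac{\partial}{\partial |\bm{p}|}\left( e^{i |\bm{p}| t}\right) = -i\,(it)\,\frac{e^{i(|\bm{p}| t - \bm{p}\bm{z})}}{|\bm{p}|} = t\,\phi_t(\bm{p}),$$ since the $|\bm{p}|\, e^{i \bm{p}\bm{z}}$ factors of (\ref{p37}) cancel the $1/|\bm{p}|$ and the plane-wave phase of $\phi_t$ before the derivative acts, confirming that the eigenfunctions are proportional to $\exp\{i(H t- \bm{p}\bm{z})\}/|\bm{p}|$ with eigenvalue $t$.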
The lack of completeness this produces on the time operator, and the associated interpretation, were discussed in some detail in ref.~\cite{leona} for the relativistic massive spinless particle. I shall repeat here the two main results tailored to the massless, helicity $\pm 1$ photon case:\\ 1. The time eigenstates are not orthogonal: \begin{eqnarray} \sum_{\lambda'} \int d^3 p && \langle t\,\,\lambda;\,\,\bm{z}|\,\,\bm{p}\,\,\lambda' \rangle \langle\,\,\bm{p}\,\,\lambda'|\,\,t'\,\,\lambda'';\,\,\bm{z}\rangle \nonumber\\ =&& \delta_{\lambda\,\,\lambda''}\,\,\, \frac{1}{2\,\pi^2}\,\,\, \int_0^\infty d|\bm{p}| \,\,\,e^{i|\bm{p}|(t'-t)}= \delta_{\lambda\,\,\lambda''}\,\,\, \frac{1}{2\,\pi^2}\,\,\,\frac{i}{t'-t+i\,\,\epsilon} \label{p40} \end{eqnarray} 2. The basis is complete only within ${\cal H}_{\bm{z}}$. In other words, the projection over states orthogonal to the detector position is excluded from the decomposition of the identity in terms of time eigenstates: \begin{eqnarray} \sum_{\lambda'} \int_{-\infty}^{+\infty} dt\,\,\, \langle \bm{p}\,\,\lambda|\,\,t\,\,\lambda';\,\,\bm{z} \rangle\,\,\langle\,\,t\,\,\lambda';\,\,\bm{z}|\,\,\bm{p}''\,\,\lambda''\rangle &=& \delta_{\lambda\,\,\lambda''}\,\, \frac{2 \pi}{|\bm{p}|^2}\,\,\delta(|\bm{p}|-|\bm{p}''|)\,\,\langle \bm{p}\,\,\lambda|\bm{z}\,\,\lambda\,\,\rangle\,\, \langle\,\bm{z}\,\,\lambda\,\,|\bm{p}''\,\,\,\lambda''\, \rangle\,\,\, \nonumber \\ \Rightarrow\,\,\,\sum_{\lambda'} \int_{-\infty}^{+\infty} dt\,\,\, \langle \bm{p}\,\,\lambda|\,\,t\,\,\lambda';\,\,\bm{z} \rangle\,\,\langle\,\,t\,\,\lambda';\,\,\bm{z}|\,\,\Phi_{\bm{z}}\rangle &=& \langle \bm{p}\,\,\lambda|\,\,\Phi_{\bm{z}}\rangle \,\,\,\,\, \forall \,\,\Phi_{\bm{z}} \,\in\,{\cal H}_{\bm{z}} \label{p41} \end{eqnarray} We recall that, due to the non-orthogonality of the states, the spectral decomposition of the operator \begin{equation} \hat{t}(\bm{z}) =\sum_\lambda \,\,\,\int dt\,\,\,t\,\,\, |\,\,t\,\,\lambda;\,\,\bm{z} \rangle\,\,\langle\,\,t\,\,\lambda;\,\,\bm{z}| \label{p42} \end{equation} defines only a positive operator valued measure (not a projector valued one), which can be used to give the probability that the arrival of the state $\psi$ at the detector occurs at the time $t$ as $P_\psi(t;\bm{z})= \sum_\lambda\,\,|\langle\,\,t\,\,\lambda;\,\,\bm{z}|\,\,\psi \,\,\rangle|^2$. This is really a probability density, something not reflected in the notation for the sake of simplicity. Note also that, due to the lack of completeness, the probability of eventually arriving at $\bm{z}$ (at any time) is $P_\psi (\bm{z})= \int dt\,\,P_\psi(t;\bm{z})\leq 1$, and may be zero for states $\psi \notin {\cal H}_{\bm{z}}$. Finally, the mean value of the time of arrival at $\bm{z}$ of a particle in the state $\psi$ is $t_\psi (\bm{z})=\langle\,\, \psi\,\,|\hat {t}(\bm{z})|\,\,\psi \,\,\rangle/P_\psi (\bm{z})$; this excludes counterfactuals (the case $P_\psi (\bm{z})=0$), as it should. \section*{Conclusions} Photons are eventually detected through their interaction with matter. Gauge invariance singles out the minimal coupling of the potentials to the currents $J_\mu (\bm{x},t)$ of additively conserved quantum numbers (the electric charge in this case). The coupling to Pauli-like currents (e.g. $\partial^{\nu}\bar{\psi}\sigma_{\mu\nu} \psi$) would never produce the finite amplitudes for absorbing or emitting soft photons observed experimentally. These facts and their implications have been thoroughly analyzed by S. Weinberg~\cite{weinbergc} and other authors.
Of course, by means of a canonical transformation~\cite{power,woolley} the minimal coupling interaction can be cast into a multipolar form. We will take this structure of the interactions into account when discussing the detection of photons. Assume for simplicity the case of broad-band photodetectors, for which the counting rates are given by the energy density of the field $\langle \psi |\hat{\bm{E}}^{\dagger}(\bm{x},t) \cdot \hat{\bm{E}}(\bm{x},t)| \psi \rangle$. Then, the probability that the state $\psi$ be localized (in the plain sense of being detected by a detector) in a neighborhood $\Delta$ around $(\bm{x},t)$ is given in terms of the fraction of the total energy that is within $\Delta$. Absolute localization of quantum mechanical systems~\cite{wightman} -- that is, the condition that the probability of finding the system outside some finite volume vanish -- is such a strong condition that it violates causality~\cite{hegerfeldta,hegerfeldtb}. In other words, any free particle initially confined in a finite volume either remains in it forever or immediately spreads to infinity. This result applies to free relativistic and non-relativistic particles, to complex systems, in the presence of interactions, etc. Surprisingly, the only requirement for this is that the system Hamiltonian be bounded from below. This prompts the question of what is the maximal degree of localization to be expected for a particle. An important clue~\cite{hegerfeldtc} is that -- even if the localization outside the finite volume is not absolute, but exponentially bounded tails are permitted -- the probability spreads out to infinity faster than with any finite propagation speed. The limits to photon localization have been strengthened recently~\cite{iwoc} based on simple physical requirements to be satisfied by one-photon states. Basically, these are: a) that $\langle \bm{p} \lambda | \psi \rangle$ can be given a probabilistic interpretation (something that we discussed below (\ref{p11z})), and b) that the Hamiltonian be bounded from below. Then, the Paley-Wiener Theorem VIII~\cite{paley} says that the fall-off of the photon wavefunction $\langle \bm{q} \lambda | \psi \rangle$ as $|\bm{q}| \rightarrow \infty$ is slower than $\exp (-a |\bm{q}|^r)$, where $a,r$ are positive constants and $r<1$. As the physical requirements noted above apply to all types of particles, the same holds for the limit of almost exponential localization: it applies to all kinds of particles. This puts photons on the same footing as the other particles as far as localization is concerned. A recent analysis of spontaneous emission from excited atoms~\cite{chan} has shown the possibility of producing entangled atom-photon states where the photon wave packets have Gaussian tails. This explicit breaking of the barrier of exponential localization is a product of the entangled final state. These results will likely find application in the field of quantum information, and will promote new developments in quantum optics. Some of the necessary tools, such as suitable position~\cite{hawtona} and time-of-arrival operators and their associated probabilistic interpretations, are provided in this paper. We are completing a detailed analysis of the application of these tools to the tunneling through photonic band gaps, to HOM~\cite{hong} interferometry and entanglement~\cite{chan,can}, and to the superluminal propagation detected in several experiments~\cite{berkeley1,vienna,berkeley2}. \begin{acknowledgments} I want to thank J. Julve, F.J. de Urr\'{\i}es, and L.
Lamata for useful discussions and valuable comments on the paper. This work was partially supported by the Spanish Ministry of Science and Technology under the project BMF 2002-00834. \end{acknowledgments} \end{document}
\begin{document} \date{\today} \title{Explicit error bounds for randomized Smolyak algorithms and an application to infinite-dimensional integration} \begin{abstract} Smolyak's method, also known as hyperbolic cross approximation or sparse grid method, is a powerful tool to tackle multivariate tensor product problems solely with the help of efficient algorithms for the corresponding univariate problem. In this paper we study the randomized setting, i.e., we randomize Smolyak's method. We provide upper and lower error bounds for randomized Smolyak algorithms with explicitly given dependence on the number of variables and the number of information evaluations used. The error criteria we consider are the worst-case root mean square error (the typical error criterion for randomized algorithms, often referred to as ``randomized error'') and the root mean square worst-case error (often referred to as ``worst-case error''). Randomized Smolyak algorithms can be used as building blocks for efficient methods such as multilevel algorithms, multivariate decomposition methods or dimension-wise quadrature methods to tackle successfully high-dimensional or even infinite-dimensional problems. As an example, we provide a very general and sharp result on the convergence rate of $N$-th minimal errors of infinite-dimensional integration on weighted reproducing kernel Hilbert spaces. Moreover, we are able to characterize the spaces for which randomized algorithms for infinite-dimensional integration are superior to deterministic ones. We illustrate our findings for the special instance of weighted Korobov spaces. We indicate how these results can be extended, e.g., to spaces of functions whose smooth dependence on successive variables increases (``spaces of increasing smoothness'') and to the problem of $L^2$-approximation (function recovery). \end{abstract} \section{Introduction}\label{SECT1} Smolyak's method or algorithm, also known as sparse grid method, hyperbolic cross approximation, Boolean method, combination technique or discrete blending method, was outlined by Smolyak in \cite{Smo63}. It is a general method to treat multivariate tensor product problems. Its major advantage is the following: to tackle a multivariate tensor product problem at hand one only has to understand the corresponding univariate problem. More precisely, Smolyak's algorithm uses algorithms for the corresponding univariate problem as building blocks, and it is fully determined by the choice of those algorithms. If those algorithms for the univariate problem are optimal, then typically Smolyak's algorithm for the multivariate problem is almost optimal, i.e., its convergence rate is optimal up to logarithmic factors. Today Smolyak's method is widely used in scientific computing and there exists a huge number of scientific articles dealing with applications and modifications of it. A partial list of papers (which is, of course, very far from being complete) on \emph{deterministic Smolyak algorithms} may contain, e.g., the articles \cite{WW95, WW99} for general approximation problems, \cite{Gen74, Del90, Tem92, BD93, FH96, NR96, GG98, GG03, Pet03, GLSS07, HHPS18} for numerical integration, \cite{Gor71, Del82, Tem87, Tem93, SU07, Ull08, DG16} for function recovery, and \cite{Per86, Zen91, NR96a, Yse05, Yse06, Gar07, GH09} for other applications. Additional references and further information may be found in the survey articles \cite{BG04, Gri06}, the book chapters \cite[Chapter~15]{NW10}, \cite[Chapter~4]{Tem18}, and the books \cite{DS89, DTU18}.
On \emph{randomized Smolyak algorithms} much less is known. Actually, we are only aware of two articles that deal with randomized versions of Smolyak's method, namely \cite{DLP07} and \cite{HM11}. In \cite{DLP07} Dick et al. investigate a specific instance of the randomized Smolyak method and use it as a tool to show that higher order nets may be used to construct integration algorithms achieving almost optimal order of convergence (up to logarithmic factors) of the worst case error in certain Sobolev spaces. In \cite{HM11} Heinrich and Milla employ the randomized Smolyak method as a building block of an algorithm to compute antiderivatives of functions from $L^p([0,1]^d),$ allowing for fast computation of antiderivative values for any point in $[0,1]^d.$ Note that in both cases the randomized Smolyak method is applied as an ad hoc device and none of the papers gives a systematic treatment of its properties. With this paper we want to start a systematic treatment of randomized Smolyak algorithms. Similar to the paper \cite{WW95}, where deterministic Smolyak methods were studied, we discuss the randomized Smolyak method for general linear approximation problems on tensor products of Hilbert spaces. Examples of such approximation problems are numerical integration or $L^2$-approximation, i.e., function recovery. The error criteria for randomized algorithms or, more generally, randomized operators that we consider are extensions of the worst-case error for deterministic algorithms. The first error criterion is the worst-case root mean square error, often referred to as ``randomized error''. This error criterion is typically used to assess the quality of randomized algorithms. The second error criterion is the root mean square worst-case error, often referred to as ``worst-case error''. This quantity is commonly used to prove the existence of a good \emph{deterministic} algorithm with the help of the ``pigeonhole principle'': it arises as an average of the usual deterministic worst case error over a set of deterministic algorithms $\mathcal{A}$ endowed with a probability measure $\mu$. If the average is small, there exists at least one algorithm in $\mathcal{A}$ with small worst-case error, see, e.g., \cite{DLP07} or \cite{SW98}. Notice that the pair $(\mathcal{A}, \mu)$ can be canonically identified with a randomized algorithm. We derive upper error bounds for both error criteria for randomized Smolyak algorithms with explicitly given dependence on the number of variables and the number of information evaluations used. The former number is the underlying dimension of the problem; the latter is typically proportional to the cost of the algorithm. The upper error bounds show that the randomized Smolyak method can be efficiently used at least in moderately high dimension. We complement this result by providing lower error bounds for randomized Smolyak algorithms that nearly match our upper bounds. As in the deterministic case, our upper and lower error bounds contain logarithmic factors whose powers depend linearly on the underlying dimension $d$, indicating that the direct use of the randomized Smolyak method in very high dimension may be prohibitive.
Nevertheless, our upper error bounds show that randomized Smolyak algorithms make perfect building blocks for more sophisticated algorithms such as multilevel algorithms (see, e.g., \cite{Hei98, HS99, Gil08a, GHM09, GiW09, HMNR10, NHMR11, Gne12, BG12, DG12, KSS15}), multivariate decomposition methods (see, e.g., \cite{KSWW10a, PW11, Was12, Gne13, DG12, DG13}) or dimension-wise quadrature methods (see \cite{GH10}). We demonstrate this in the case of the infinite-dimensional integration problem on weighted tensor products of reproducing kernel Hilbert spaces with general kernels. We provide the exact polynomial convergence rate of $N$-th minimal errors---the corresponding upper error bound is established by multivariate decomposition methods based on randomized Smolyak algorithms. The paper is organized as follows: In Section \ref{SECT2} we provide the general multivariate problem formulation and illustrate it with two examples. In Section \ref{SECT3} we introduce the randomized multivariate Smolyak method building on randomized univariate algorithms. Our assumptions about the univariate randomized algorithms resemble the ones made in \cite{WW95} in the deterministic case. In Remark~\ref{algL2Rep} we observe that we may identify our randomized linear approximation problem of interest with a corresponding deterministic $L^2$-approximation problem. In Section \ref{SECT4} we follow the course of \cite{WW95} and establish first error bounds in terms of the underlying dimension of the problem and the level of the considered Smolyak algorithm, see Theorem~\ref{BasicLemma} and Corollary~\ref{levelBoundGeneralConditions}. For the randomized error criterion Remark~\ref{algL2Rep} helps us to boil the error analysis of the randomized Smolyak method down to the error analysis of the deterministic Smolyak method provided in \cite{WW95}. For the worst case error criterion Remark~\ref{algL2Rep} is of no help and therefore we state the details of the analysis. Up to this point we consider general randomized operators to approximate the solution we are seeking. In Section \ref{SECT5} we focus on randomized algorithms and the information evaluations they use. In Theorem~\ref{Theo_UB_PW} we present upper error bounds for the randomized Smolyak method where the dependence on the underlying dimension of the problem and on the number of information evaluations is revealed. In Corollary \ref{Cor_Lower_Bound} we provide lower error bounds for the randomized Smolyak method. In Section \ref{INF_DIM_INT} we apply our upper error bounds for randomized Smolyak algorithms to the infinite-dimensional integration problem. After introducing the setting, we provide the exact polynomial convergence rate of $N$-th minimal errors in Theorem~\ref{Theo_UB_PW}. In Corollary~\ref{Power} we compare the power of randomized algorithms and deterministic algorithms for infinite-dimensional integration, and in Corollary~\ref{Korobov} we illustrate the result of Theorem~\ref{Theo_UB_PW} for weighted Korobov spaces. In Remark~\ref{Baustelle} and Remark~\ref{Generalizations} we discuss previous contributions to the considered infinite-dimensional integration problem and generalizations to other settings such as to function spaces with increasing smoothness or to the $L^2$-approximation problem. In the appendix we provide for the convenience of the reader a self-contained proof of a folklore result on the convergence rates of randomized algorithms on Korobov spaces. \section{Formulation of the Problem}\label{SECT2} Let $d\in \mathbb{N}$.
For $n=1,\ldots,d,$ let $F^{(n)}$ be a separable Hilbert space of real valued functions, $G^{(n)}$ be a separable Hilbert space, and $S^{(n)}:F^{(n)} \to G^{(n)}$ be a continuous linear operator. We consider now the tensor product spaces $F_d$ and $G_d$ given by $$F_d := F^{(1)} \otimes \cdots \otimes F^{(d)},$$ $$G_d := G^{(1)} \otimes \cdots \otimes G^{(d)},$$ and the tensor product operator $S_d$ (also called \emph{solution operator}) given by $$ S_d := S^{(1)} \otimes \cdots \otimes S^{(d)}.$$ We frequently use results concerning tensor products of Hilbert spaces and tensor product operators without giving explicit reference; for details on this subject see, e.g., \cite{Wei80}. We denote the norms in $F^{(n)}$ and $F_d$ by $\|\cdot \|_{F^{(n)}}$ and $\|\cdot \|_{F_d}$, respectively, and the norms in $G^{(n)}$ and $G_d$ simply by $\|\cdot\|$. Furthermore, $L(F_d, G_d)$ denotes the space of all bounded linear operators between $F_d$ and $G_d.$ $S_d(f)$ may be approximated by randomized linear algorithms or, more generally, by randomized linear operators. We define a \emph{randomized linear operator} $A$ to be a mapping $$A: \Omega \rightarrow L(F_d, G_d)$$ such that $Af: \Omega \rightarrow G_d$ is a random variable for each $f \in F_d$; here $(\Omega, \Sigma, \Prob)$ is some probability space and $G_d$ is endowed with its Borel $\sigma$-field. We put $$\mathcal{O}^{\ran} := \mathcal{O}^{\ran, \lin}(\Omega, F_d, G_d) := \{A:\Omega \rightarrow L(F_d, G_d) \, | \, A \text{ is a randomized linear operator}\}. $$ Obviously one may interpret deterministic bounded linear operators as randomized linear operators with trivial dependence on $\Omega$. Accordingly, we put $$\mathcal{O}^{\deter} := \mathcal{O}^{\deter, \lin}(F_d, G_d) := L(F_d, G_d) \subset \mathcal{O}^{\ran, \lin}(\Omega, F_d, G_d),$$ where the inclusion is based on the identification of $A\in L(F_d, G_d)$ with the constant mapping $\Omega \ni \omega \mapsto A$. A \emph{(randomized) linear approximation problem} is given by a quadruple $\{S_d, F_d, G_d, \mathcal{O}(\Omega) \},$ where $\mathcal{O}(\Omega) \subseteq \mathcal{O}^{\ran,\lin}(\Omega, F_d, G_d)$ denotes the class of admissible randomized linear operators. We are mainly interested in results for randomized linear algorithms, which constitute a subclass of $\mathcal{O}^{\ran}$ and will be introduced in Section \ref{SECT5}. Consider a randomized linear operator $A$ meant to approximate $S_d$. The \emph{randomized error} of the operator is given by \begin{equation} \label{ran_err} e^{\rm{r}}(A) := e^{\rm{r}}(S_d,A):= \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\mathbb{E} \bigg[ \lVert (S_d-A)f \rVert^2\bigg]^{\frac{1}{2}}, \end{equation} and the \emph{(root mean square) worst case error} is \begin{equation} \label{wor_err} e^{\rm{w}}(A) := e^{\rm{w}}(S_d,A):= \mathbb{E} \bigg[ \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\lVert (S_d-A)f \rVert^2\bigg]^{\frac{1}{2}}. \end{equation} Clearly we have $0\le e^{\rm{r}}(S_d,A) \le e^{\rm{w}}(S_d,A)$. Notice that for a \emph{deterministic} linear operator $A$ both errors coincide with the \emph{deterministic worst case error} $$e^{\rm d}(A) := e^{\rm d}(S_d, A) := \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\lVert (S_d-A)f \rVert, $$ i.e., $e^{\rm d}(S_d, A) = e^{\rm r}(S_d, A) = e^{\rm w}(S_d, A)$.
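The inequality $e^{\rm{r}}(S_d,A)\le e^{\rm{w}}(S_d,A)$ follows from bounding the integrand pointwise: for every $f$ with $\lVert f\rVert_{F_d}\le 1$ and every $\omega\in\Omega$ we have $\lVert (S_d-A(\omega))f\rVert^2 \le \sup_{\lVert g\rVert_{F_d}\le 1}\lVert (S_d-A(\omega))g\rVert^2$, and taking expectations and then the supremum over $f$ gives \begin{equation*} e^{\rm{r}}(S_d,A)^2 = \sup_{\lVert f \rVert_{F_d}\le 1}\mathbb{E}\Big[\lVert (S_d-A)f\rVert^2\Big] \le \mathbb{E}\Big[\sup_{\lVert g \rVert_{F_d}\le 1}\lVert (S_d-A)g\rVert^2\Big] = e^{\rm{w}}(S_d,A)^2 .\end{equation*}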
We finish this section by giving two examples of typical tensor product problems that fit into the framework given above. \begin{example}\label{ProblemExample} For $n=1,\ldots, d$ let $D^{(n)}\neq \emptyset$ be an arbitrary domain and let $\rho^{(n)}$ be a positive measure on $D^{(n)}$. Denote by $E$ the Cartesian product $D^{(1)} \times \cdots \times D^{(d)}$ and by $\mu$ the product measure $\otimes_{n=1}^d \rho^{(n)}$ on $E$. \begin{itemize} \item[{\rm (}i{\rm)}] By choosing $F^{(n)} \subset L^2(D^{(n)},\rho^{(n)})$, $G^{(n)} := \mathbb{R}$, and $S^{(n)}$ to be the integration functional $S^{(n)}(f) = \int_{D^{(n)}} f \,{\rm d}\rho^{(n)}$, we obtain $F_d \subset L^2(E, \mu)$, $G_d = \mathbb{R}$, and $S_d$ is the integration functional on $F_d$ given by $$S_d(f) = \int_{E} f \,{\rm d}\mu, \hspace{3ex} f\in F_d.$$ The \emph{integration problem} is now to compute or approximate for given $f\in F_d$ the integral $S_d(f)$. \item[{\rm (}ii{\rm)}] By choosing $F^{(n)} \subset G^{(n)} := L^2(D^{(n)},\rho^{(n)})$ and $S^{(n)}$ to be the embedding operator from $F^{(n)}$ into $G^{(n)}$, we obtain $F_d \subset G_d = L^2(E, \mu)$ and $S_d$ is the embedding operator from $F_d$ into $G_d$ given by $$S_d(f) = f, \hspace{3ex} f\in F_d.$$ The \emph{$L^2$-approximation problem} is now to reconstruct a given function $f\in F_d$, i.e., to compute or approximate $S_d(f)$; the reconstruction error is measured in the $L^2$-norm. \end{itemize} Note that in both problem formulations above the phrase ``a given function $f$'' does not necessarily mean that the whole function is known. Usually only partial information about the function is available (like a finite number of values of the function or of its derivatives, or a finite number of Fourier coefficients). We discuss this point in more detail in Section \ref{ALG5.1}. \end{example} \section{Smolyak Method for Tensor Product Problems}\label{SECT3} From now on we are interested in randomizing the Smolyak method, which is to be defined in this section. Assume that for every $n = 1,2,\ldots, d,$ we have a sequence of randomized linear operators $(U_l^{(n)})_{ l \in \mathbb{N}},$ which approximate the solution operator $S^{(n)}$, such that for every $f \in F^{(n)}$ the image $U_l^{(n)}f$ is a random variable on a probability space $(\Omega^{(n)}, \Sigma^{(n)}, \Prob^{(n)}).$ We shall refer to the individual $U^{(n)}_l$ as \emph{building blocks}. Put $\Omega := \Omega^{(1)} \times \ldots \times \Omega^{(d)}, \Sigma := \bigotimes_{n = 1}^{d} \Sigma^{(n)}, \Prob := \bigotimes_{n = 1}^{d} \Prob^{(n)}$. We denote \begin{align*} & \Delta_0^{(n)} := U_0^{(n)} := 0, \quad \Delta_l^{(n)} := U_l^{(n)} - U_{l-1}^{(n)}, \quad l \in \mathbb{N}, \end{align*} and \begin{align*} & Q(L,d) := \left\{\bl \in \mathbb{N}^d \, | \, |\bl| \leq L\right\}. \end{align*} Note that if $L \geq d$, then $|Q(L,d)| = \binom{L}{d}$. For $f \in F_d $ the \emph{randomized Smolyak method of level $L$} approximating the tensor product problem $\{S_d,F_d,G_d, \mathcal{O}(\Omega)\}$ is given by \begin{equation} \label{basic_form} A(L,d)f:\Omega \rightarrow G_d, \quad \omega \mapsto \left(\sum\limits_{\bl \in Q(L,d) } \bigotimes_{n = 1}^{d}\Delta^{(n)}_{l_n}(\omega_n)\right)f.
\end{equation} We would like to stress that, due to the definition of the probability space $(\Omega, \Sigma, \Prob)$, for given $f_n \in F^{(n)}, n = 1,2, \ldots, d,$ the families $((U^{(n)}_lf_n)_{l \in \mathbb{N}}), n = 1,2, \ldots, d,$ are mutually independent. Note that for $L < d$ the Smolyak method is the zero operator. Therefore, we will always assume (without stating it explicitly every time) that $L \geq d.$ It can be verified that the following representation holds \begin{equation}\label{alg_bb_rep} A(L,d) = \sum_{L-d+1 \leq |\bl| \leq L} (-1)^{L-|\bl|} \binom{d-1}{L-|\bl|} \bigotimes_{n = 1}^d U^{(n)}_{l_n}, \end{equation} cf. \cite[Lemma 1]{WW95}. When investigating the randomized error we require that for every $l \in \mathbb{N}$ and $n=1,\ldots,d$ \begin{equation}\label{Ran1} U_l^{(n)}f \in L^{2}(\Omega^{(n)}, G^{(n)}) \quad \text{ for all } f \in F^{(n)}. \end{equation} In the worst case error analysis we require for every $l \in \mathbb{N}$ and $n=1,\ldots,d$ that \begin{equation}\label{Wce1} \mu_{l,n}(\omega_n) := \sup\limits_{\substack{\lVert f \rVert_{F^{(n)}} \leq 1}} \lVert (U_l^{(n)}f)(\omega_n) \rVert < \infty \quad \text{ for all } \omega_n \in \Omega^{(n)} \end{equation} and that $\mu_{l,n}: \Omega^{(n)} \rightarrow [0, \infty)$ is measurable with \begin{equation}\label{Wce2} \lVert \mu_{l,n} \rVert_{L^{2}(\Omega^{(n)}, \mathbb{R})} < \infty. \end{equation} Let $\rm{x} \in \{\rm{r}, \rm{w}\}.$ When considering the error $e^{\rm{x}}(S_d,A(L,d)),$ we assume that there exist constants $B,C,E > 0$ and $D \in (0,1)$ such that for every $n = 1,2, \ldots, d,$ and every $l \in \mathbb{N}$ \begin{equation} \label{IntNorm} \lVert S^{(n)} \rVert_{\rm{op}} \leq B, \end{equation} \begin{equation} \label{ApproxNorm} e^{\rm{x}}(S^{(n)}, U_l^{(n)}) \leq CD^l, \end{equation} and additionally in the randomized setting \begin{equation} \label{DiffNorm} \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E} \bigg[\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E D^l, \end{equation} and in the worst case setting \begin{equation} \label{DiffNormWce} \mathbb{E} \bigg[ \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E D^l. \end{equation} Note that (\ref{ApproxNorm}) implies the conditions (\ref{DiffNorm}) and (\ref{DiffNormWce}) with the constant $E := C(1+D^{-1})$ for all $l \geq 2.$ Still, (\ref{DiffNorm}) and (\ref{DiffNormWce}) may hold for some smaller $E.$ \begin{remark}\label{algL2Rep} For our randomized error analysis it is convenient to identify a randomized linear operator $A: \Omega \rightarrow L(F,G),$ with $F,G$ separable Hilbert spaces, with the mapping (\ref{algo_form}), which we again denote by $A$: \begin{equation}\label{algo_form} A : F \to L^2(\Omega, G), \quad f\mapsto \big( \omega \mapsto Af(\omega) \big). \end{equation} We now show that this identification makes sense for all the operators we are considering.
We start with the building blocks $U^{(n)}_l.$ From (\ref{DiffNorm}) we obtain \begin{equation}\label{Ran2} \sup\limits_{\substack{\lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E} \left[ \lVert U_l^{(n)}f \rVert^2 \right]^{1/2} \leq \frac{ED}{1-D}, \end{equation} implying $(U^{(n)}_lf)(\cdot) \in L^2(\Omega^{(n)}, G^{(n)})$ for all $f \in F^{(n)}.$ The building blocks $U_l^{(n)}$ are obviously linear as mappings $F^{(n)} \rightarrow L^2(\Omega^{(n)}, G^{(n)})$ and, due to (\ref{Ran2}), also bounded, i.e., continuous. Now, since for arbitrary sample spaces $ \Omega_1, \Omega_2$ and separable Hilbert spaces $H_1, H_2$ it holds that $$L^{2}(\Omega_1, H_1) \otimes L^{2}(\Omega_2, H_2) \cong L^{2}(\Omega_1 \times \Omega_2, H_1 \otimes H_2),$$ we have that $(\bigotimes_{n = 1}^{d} U^{(n)}_{l_n})(f)(\cdot) $ lies in $L^{2}(\Omega,G_d)$ for $f \in F_d.$ Clearly, the tensor product operator $\bigotimes_{n = 1}^{d} U^{(n)}_{l_n}$ is a bounded linear mapping $F_d\to L^2(\Omega, G_d)$. Since due to \eqref{alg_bb_rep} the Smolyak method $A(L,d)$ may be represented as a finite sum of such tensor product operators, it is also a bounded linear map $F_d\to L^{2}(\Omega,G_d).$ If we formally consider $S_d$ as an operator $F_d\to L^2(\Omega, G_d)$, $f\mapsto \big( \omega \mapsto S_df \big)$ (i.e., an operator that maps into the constant $L^2$-functions), then $S_d$ is still linear and continuous with operator norm \begin{equation*} \|S_d\|_{\op} = \sup_{\|f\|_{F_d}\le 1} \| S_df\|_{L^2(\Omega, G_d)} = \sup_{\|f\|_{F_d}\le 1} \mathbb{E} \left[ \| S_df\|^2 \right]^{1/2}, \end{equation*} and the usual randomized error can be written as \begin{equation}\label{error_op_norm} e^{\rm{r}}(S_d, A) = \sup_{\|f\|_{F_d}\le 1} \|(S_d-A)f\|_{L^2(\Omega, G_d)} = \|S_d- A\|_{\op}. \end{equation} The worst case error unfortunately does not allow for a representation as an operator norm similar to \eqref{error_op_norm}. Note that the above identification turns a randomized approximation problem $$S:F\rightarrow G, \quad A: \Omega \rightarrow L(F,G) $$ into a deterministic $L^2$-approximation problem $$S: F \rightarrow L^2(\Omega, G), \quad A:F \rightarrow L^2(\Omega, G).$$ \end{remark} \section{Error Analysis in Terms of the Level}\label{SECT4} We now perform the error analysis of the approximation of $S_d$ by the Smolyak method $A(L,d)$ in terms of the level $L,$ which may be done under the rather general assumptions of Sections \ref{SECT2} and \ref{SECT3}. \begin{theorem}\label{BasicLemma} For $L,d \in \mathbb{N}, L\geq d,$ let $A(L,d)$ be a randomized Smolyak method as described in Section \ref{SECT3}. Let $\rm{x} \in \{\rm{w}, \rm{r}\}$. Assume \eqref{IntNorm}, \eqref{ApproxNorm} and, depending on the setting, for $\rm{x} = \rm{r}$ additionally assume (\ref{Ran1}), (\ref{DiffNorm}), and for $\rm{x} = \rm{w}$ additionally assume (\ref{Wce1}), (\ref{Wce2}) and \eqref{DiffNormWce}.
Then we have \begin{equation}\label{levelBound} e^{\rm{x}}(S_d,A(L,d)) \leq CB^{d-1}D^{L-d+1} \sum\limits_{j = 0}^{d-1} \bigg( \frac{ED}{B} \bigg)^j \binom{L-d+j}{j} \leq CH^{d-1}\binom{L}{d-1}D^L, \end{equation} where $H = \max\{\frac{B}{D}, E\}.$ \end{theorem} \begin{proof} The second inequality in (\ref{levelBound}) follows easily by using $\sum_{j = 0}^{d-1} \binom{L-d+j}{j} = \binom{L}{d-1}$ and estimating $(\frac{ED}{B})^j \leq \max \{1, (\frac{ED}{B})^{d-1}\},$ so all that remains is to prove the first inequality. First we focus on the worst case error bound. Note that for a fixed $\omega \in \Omega$ $$\sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}} \lVert (S_d - A(L,d)(\omega))f \rVert^2 = \left(\sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}} \lVert (S_d - A(L,d)(\omega))f \rVert\right)^2 = \lVert S_d - A(L,d)(\omega) \rVert^2_{{\rm op}}.$$ Now we may proceed similarly as in the proof of Lemma $2$ from \cite{WW95}, by induction on $d,L$ for $d \in \mathbb{N}$ and $L \in \{d,d+1,\ldots\}$. For $d=1$ and any $L \in \mathbb{N}_{\geq d}$ we have $S_d = S^{(1)}$ and $A(L,1) = U^{(1)}_L,$ so the statement is just the condition (\ref{ApproxNorm}). Suppose we have already proved the claim for $L,d$ and want to prove it for $L+1,d+1.$ Using $$A(L+1,d+1) = \sum_{\bl \in Q(L,d)} \bigotimes_{n = 1}^{d} \Delta^{(n)}_{l_n} \otimes U^{(d+1)}_{L+1-|\bl|}$$ and Minkowski's inequality we get \begin{align*} &\ e^{\rm{w}}(S_{d+1}, A(L+1, d+1)) = \mathbb{E} \bigg[\lVert S_{d+1} - A(L+1, d+1) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ &= \mathbb{E}\bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) + (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ &\leq \mathbb{E}\bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} + \mathbb{E}\bigg[\lVert (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}}. \end{align*} We use Minkowski's inequality, properties of tensor product operator norms, the fact that the component algorithms $U^{(n)}_l, l \in \mathbb{N},$ are randomized independently for different $n \in \{1,\ldots, d\}$, (\ref{ApproxNorm}) and (\ref{DiffNormWce}) to obtain \begin{align*} \begin{split} & \mathbb{E} \bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ &\leq \sum_{\bl \in Q(L,d)} \mathbb{E} \bigg[ \lVert \bigotimes_{n = 1}^{d} \Delta^{(n)}_{l_n}\rVert^{2}_{{\rm op}} \lVert S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|} \rVert^{2}_{{\rm op}} \bigg]^{\frac{1}{2}} \\ &= \sum_{\bl \in Q(L,d)} \bigg(\prod_{n = 1}^d \mathbb{E} \bigg[ \lVert \Delta_{l_n}^{(n)}\rVert_{{\rm op}}^2 \bigg]^{\frac{1}{2}} \bigg) \mathbb{E}\bigg[ \lVert S^{(d+1)} - U_{L+1-|\bl|}^{(d+1)} \rVert_{{\rm op}}^2\bigg]^{\frac{1}{2}} \\ &\leq \sum_{\bl \in Q(L,d)} CE^dD^{L+1} = \binom{L}{d}CE^d D^{L + 1}.
\end{split} \end{align*} Furthermore, using (\ref{IntNorm}), \begin{align*} & \mathbb{E}\bigg[\lVert (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ & = \mathbb{E} \bigg[\lVert (S_d - A(L,d))\rVert^2_{{\rm op}} \lVert S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ & = \lVert S^{(d+1)} \rVert_{{\rm op}} \mathbb{E} \bigg[\lVert (S_d - A(L,d))\rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\ & \leq Be^{{\rm w}}(S_d,A(L,d)). \end{align*} Therefore we have \begin{align*} & e^{\rm{w}}(S_{d+1}, A(L+1,d+1)) \leq \binom{L}{d} E^dCD^{L+1} + Be^{{\rm w}}(S_d,A(L,d)), \end{align*} and using the induction hypothesis finishes the proof for the worst case error. Now consider the randomized error. By calculations similar to those in the first part of the proof one can show that the claim holds true for the randomized error for elementary tensors. Then, however, one encounters problems when trying to lift it to the whole Hilbert space. The difficulty lies in the fact that the randomized error is not an operator norm of some tensor product operator, which would have enabled us to write it as a product of norms of the corresponding univariate operators and which proved useful in bounding the worst case error. To get around this we need a different approach. The idea is to interpret a randomized problem as a deterministic $L^2$-approximation problem. As already explained in Remark \ref{algL2Rep}, we may identify $(S_d - A(L,d)): \Omega \rightarrow L(F_d,G_d)$ with an operator $F_d \rightarrow L^{2}(\Omega, G_d)$, again denoted by $(S_d - A(L,d)).$ Then, however, $e^{\rm{r}}(S_d, A) = \|S_d- A\|_{\op}$ and we may proceed exactly as in Lemma 2 from \cite{WW95}, which finishes the proof. \end{proof} We may generalize the result of Theorem \ref{BasicLemma} by allowing for more flexibility in the convergence rates in (\ref{ApproxNorm}), (\ref{DiffNorm}) and (\ref{DiffNormWce}). This can be used to capture additional logarithmic factors in the error bounds for the building block algorithms. It turns out to be particularly useful when investigating the error bounds for Smolyak methods whose building blocks are, e.g., multivariate quadratures or approximation algorithms, as is the case in \cite{DLP07}. Suppose, namely, that there exist a constant $D \in (0,1)$ and non-decreasing sequences of positive numbers $(C_l)_{l \in \mathbb{N}}, (E_l)_{l \in \mathbb{N}},$ such that for every $l \in \mathbb{N}$ \begin{equation} \label{ApproxNormSeq} e^{\rm{x}}(S^{(n)}, U_l^{(n)}) \leq C_lD^l, \quad \rm{x} \in \{\rm{r}, \rm{w}\}. \end{equation} Moreover, in case of the randomized error \begin{equation} \label{DiffNormSeq} \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E} \bigg[\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E_l D^l , \end{equation} and in case of the worst case error \begin{equation} \label{DiffNormWceSeq} \mathbb{E} \bigg[ \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E_l D^l. \end{equation} It is now easy to prove Corollary \ref{levelBoundGeneralConditions} along the lines of the proof of Theorem \ref{BasicLemma}.
\begin{corollary}\label{levelBoundGeneralConditions} For $L,d \in \mathbb{N}, L\geq d,$ let $A(L,d)$ be a randomized Smolyak method as described in Section \ref{SECT3}. Let $\rm{x} \in \{\rm{w}, \rm{r}\}$. Assume \eqref{IntNorm}, \eqref{ApproxNormSeq} and, depending on the setting, for $\rm{x} = \rm{r}$ assume (\ref{Ran1}), (\ref{DiffNormSeq}), and for $\rm{x} = \rm{w}$ assume (\ref{Wce1}), (\ref{Wce2}) and \eqref{DiffNormWceSeq}. Then we have \begin{equation} e^{\rm{x}}(S_d,A(L,d)) \leq C_LB^{d-1}D^{L-d+1} \sum\limits_{j = 0}^{d-1} \bigg( \frac{E_{L-1}D}{B} \bigg)^j \binom{L-d+j}{j} \leq C_LH_L^{d-1}\binom{L}{d-1}D^L, \end{equation} where $H_L = \max\{\frac{B}{D}, E_{L-1}\}.$ \end{corollary} \begin{remark} Note that by applying Corollary \ref{levelBoundGeneralConditions} to the error bounds for the uni- or multivariate building block algorithms from \cite{DLP07} we may reproduce the error bounds obtained in that paper for the final (higher-dimensional) Smolyak method. \end{remark}
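As a toy illustration of the combination formula \eqref{alg_bb_rep}, the following Python sketch assembles $A(L,d)$ for the integration problem of Example \ref{ProblemExample}(i) with $D^{(n)}=[0,1]$ and the Lebesgue measure; purely for illustration, the univariate building blocks $U^{(n)}_l$ are taken to be randomly shifted equal-weight rules with $2^l$ nodes (one independent shift per coordinate and level), and all function names are ours rather than part of the analysis above.
\begin{verbatim}
import itertools, math, random

def u_level(l, rng):
    # Univariate building block U_l: an equal-weight rule with m = 2^l nodes
    # on [0,1], randomized by a uniform shift.
    m = 2 ** l
    shift = rng.random()
    nodes = [((k + shift) / m) % 1.0 for k in range(m)]
    weights = [1.0 / m] * m
    return nodes, weights

def smolyak_integrate(f, L, d, seed=0):
    # Randomized Smolyak quadrature A(L, d) for f : [0,1]^d -> R, assembled
    # via the combination formula: sum over L-d+1 <= |l| <= L of
    # (-1)^(L-|l|) * binom(d-1, L-|l|) * (tensor product of U_{l_n}).
    rng = random.Random(seed)
    rules = {}
    def rule(n, l):
        # one realization of U_l^{(n)} per coordinate n and level l
        if (n, l) not in rules:
            rules[(n, l)] = u_level(l, rng)
        return rules[(n, l)]
    total = 0.0
    for bl in itertools.product(range(1, L + 1), repeat=d):
        s = sum(bl)
        if s < L - d + 1 or s > L:
            continue
        coeff = (-1) ** (L - s) * math.comb(d - 1, L - s)
        per_dim = [rule(n, bl[n]) for n in range(d)]
        for idx in itertools.product(*(range(len(nodes)) for nodes, _ in per_dim)):
            x = [per_dim[n][0][idx[n]] for n in range(d)]
            w = math.prod(per_dim[n][1][idx[n]] for n in range(d))
            total += coeff * w * f(x)
    return total

# toy check: the exact integral of f over [0,1]^3 equals 1
f = lambda x: math.prod(1.0 + 0.1 * math.sin(2.0 * math.pi * t) for t in x)
print(smolyak_integrate(f, L=8, d=3))
\end{verbatim}
For this separable test integrand every univariate rule integrates its factor exactly, so the combination coefficients effectively sum to one and the printed value equals $1$ up to rounding; for less benign integrands the sketch simply reproduces, term by term, the operator defined in \eqref{alg_bb_rep}.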
Today Smolyak's method is widely used in scientific computing and there exists a huge number of scientific articles dealing with applications and modifications of it. A partial list of papers (which is, of course, very far from being complete) on {\varepsilon}mph{deterministic Smolyak algorithms} may contain, e.g., the articles \cite{WW95, WW99} for general approximation problems, \cite{Gen74, Del90, Tem92, BD93, FH96, NR96, GG98, GG03, Pet03, GLSS07, HHPS18} for numerical integration, \cite{Gor71, Del82, Tem87, Tem93, SU07, Ull08, DG16} for function recovery, and \cite{Per86, Zen91, NR96a, Yse05, Yse06, Gar07, GH09} for other applications. Additional references and further information may be found in the survey articles \cite{BG04, Gri06}, the book chapter \cite[Chapter~15]{NW10}, and the books \cite{DS89, DTU18}. On {\varepsilon}mph{randomized Smolyak algorithms} much less is known. Actually, we are only aware of two articles that deal with randomized versions of Smolyak's method, namely \cite{DLP07} and \cite{HM11}. In \cite{DLP07} Dick et al. investigate a specific instance of the randomized Smolyak method and use it as a tool to show that higher order nets may be used to construct integration algorithms achieving almost optimal order of convergence (up to logarithmic factors) of the worst case error in certain Sobolev spaces. In \cite{HM11} Heinrich et al. employ the randomized Smolyak method as a building block of an algorithm to compute antiderivatives of functions from $L^p([0,1]^d),$ allowing for fast computation of antiderivative values for any point in $[0,1]^d.$ Note that in both cases the randomized Smolyak method is applied as an ad hoc device and none of the papers gives a systematic treatment of its properties. We provide upper and lower error bounds for randomized Smolyak algorithms with explicitly given dependence on the number of variables and the number of information evaluations used. The error criteria for randomized algorithms (or, more generally, operators) that we consider are extensions of the worst-case error for deterministic algorithms. The first error criterion is the worst-case root mean square error, which is the typical error criterion for randomized algorithms, often referred to as ``randomized error''. and the root mean square worst-case error (often referred to as ``worst-case error''). \section{Formulation of the Problem}\label{SECT2} Let $d\in \mathbb{N}$. For $n=1,\ldots,d,$ let $F^{(n)}$ be a separable Hilbert space of real valued functions, $G^{(n)}$ be a separable Hilbert space, and $S^{(n)}:F^{(n)}} \to G^{(n)}$ be a continuous linear operator. We consider now the tensor product spaces $F_d$ and $G_d$ given by $$F_d := F^{(1)} \otimes \cdots \otimes F^{(d)},$$ $$G_d := G^{(1)} \otimes \cdots \otimes G^{(d)},$$ and the tensor product operator $S_d$ (also called {\varepsilon}mph{solution operator}) given by $$ S_d := S^{(1)} \otimes \cdots \otimes S^{(d)}.$$ We frequently use results concerning tensor products of Hilbert spaces and tensor product operators without giving explicit reference, for details on this subject see, e.g., \cite{Wei80}. We denote the norms in $F^{(n)}$ and $F_d$ by $\|\cdot \|_{F^{(n)}}$ and $\|\cdot \|_{F_d}$ respectively, and the norms in $G^{(n)}$ and $G_d$ simply by $\|\cdot\|$. Furthermore, $L(F_d, G_d)$ denotes the space of all bounded linear operators between $F_d$ and $G_d.$ $S_d(f)$ may be approximated by randomized linear algorithms or, more generally, by randomized linear operators. 
We define a {\varepsilon}mph{randomized linear operator} $A$ to be a mapping $$A: \Omega \mathrm{i}ghtarrow L(F_d, G_d)$$ such that $Af: \Omega \mathrm{i}ghtarrow G_d$ is a random variable for each $f \in F_d$; here $(\Omega, \Sigma, \Prob)$ is some probability space and $G_d$ is endowed with its Borel $\sigma-$field. We put $$\mathcal{O}^{\ran} := \mathcal{O}^{\ran, \lin}(\Omega, F_d, G_d) := \{A:\Omega \mathrm{i}ghtarrow L(F_d, G_d) \, | \, A \text{ is a randomized linear operator}\}. $$ Obviously one may interpret deterministic bounded linear operators as randomized linear operators with trivial dependance on $\Omega$. Accordingly, we put $$\mathcal{O}^{\deter} := \mathcal{O}^{\deter, \lin}(F_d, G_d) := L(F_d, G_d) \subset \mathcal{O}^{\ran, \lin}(\Omega, F_d, G_d),$$ where the inclusion is based on the identification of $A\in L(F_d, G_d)$ with the constant mapping $\Omega \ni \omega \mapsto A$. A {\varepsilon}mph{(randomized) linear approximation problem } is given by a quadruple $\{S_d, F_d, G_d, \mathcal{O}(\Omega) \},$ where $\mathcal{O}(\Omega) \subseteq \mathcal{O}^{\ran,\lin}(\Omega, F_d, G_d)$ denotes the class of admissible randomized linear operators. We are mainly interested in results for randomized linear algorithms, which constitute a subclass of $\mathcal{O}^{\ran}$ and will be introduced in Section \ref{SECT5}. Consider a randomized linear operator $A$ meant to approximate $S_d$. The {\varepsilon}mph{randomized error} of the operator is given by {\boldsymbol{\eta}}gin{equation} \label{ran_err} e^{\rm{r}}(A) := e^{\rm{r}}(S_d,A):= \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\mathbb{E}xpec \bigg[ \lVert (S_d-A)f \rVert^2\bigg]^{\frac{1}{2}}, {\varepsilon}nd{equation} and the{\varepsilon}mph{ (root mean square) worst case error} is {\boldsymbol{\eta}}gin{equation} \label{wor_err} e^{\rm{w}}(A) := e^{\rm{w}}(S_d,A):= \mathbb{E}xpec \bigg[ \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\lVert (S_d-A)f \rVert^2\bigg]^{\frac{1}{2}}. {\varepsilon}nd{equation} Clearly we have $0\le e^{\rm{r}}(S_d,A) \le e^{\rm{w}}(S_d,A)$. Notice that for a {\varepsilon}mph{deterministic} linear Operator $A$ both errors coincide with the {\varepsilon}mph{deterministic worst case error} $$e^{\rm d}(A) := e^{\rm d}(S_d, A) := \sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}}\lVert (S_d-A)f \rVert, $$ i.e., $e^{\rm d}(S_d, A) = e^{\rm r}(S_d, A) = e^{\rm w}(S_d, A)$. We finish this section by giving two examples of typical tensor product problems that fit into the framework given above. {\boldsymbol{\eta}}gin{example}\label{ProblemExample} For $n=1,\ldots, d$ let $D^{(n)}\neq {\varepsilon}mptyset$ be an arbitrary domain and let $\rho^{(n)}$ be a positive measure on $D^{(n)}$. Denote by $E$ the Cartesian product $D^{(1)} \times \cdots \times D^{(d)}$ and by $\mu$ the product measure $\otimes_{n=1}^d \rho$ on $E$. {\boldsymbol{\eta}}gin{itemize} \item[{\rm (}i{\rm)}] By choosing $F^{(n)} \subset L^2(D^{(n)},\rho^{(n)})$, $G^{(n)} := \mathbb{R}$, and $S^{(n)}$ to be the integration functional $S^{(n)}(f) = \int_{D^{(n)}} f \,{\rm d}\rho^{(n)}$, we obtain $F_d \subset L^2(E, \mu)$, $G_d = \mathbb{R}$, and $S_d$ is the integration functional on $F_d$ given by $$S_d(f) = \int_{E} f \,{\rm d}\mu, \hspace{3ex} f\in F_d.$$ The {\varepsilon}mph{integration problem} is now to compute or approximate for given $f\in F_d$ the integral $S_d(f)$. 
\item[{\rm (}i{\rm)}] By choosing $F^{(n)} \subset G^{(n)} := L^2(D^{(n}),\rho^{(n)})$ and $S^{(n)}$ to be the embedding operator from $F^{(n)$ into $G^{(n)}$, we obtain $F_d \subset G_d = L^2(E, \mu)$ and $S_d$ is the embedding operator from $F_d$ into $G_d$ given by $$S_d(f) = f, \hspace{3ex} f\in F_d.$$ The {\varepsilon}mph{$L^2$-approximation problem} is now to reconstruct a given function $f\in F_d$, i.e., to compute or approximate $S_d(f)$; the reconstruction error is measured in the $L^2$-norm. {\varepsilon}nd{itemize} Note that in both problem formulations above the phrase ``a given function $f$'' does not necessarilly mean that the whole function is known. Usually there is only partial information about the function available (like a finite number of values of the function or of its derivatives or a finite number of Fourier coefficients) available. We discuss this point in more detail in Section \ref{ALG5.1}. {\varepsilon}nd{example} \section{Smolyak Method for Tensor Product Problems}\label{SECT3} From now on we are interested in randomizing the Smolyak method which is to be defined in this section. Assume that for every $n = 1,2,\ldots, d,$ we have a sequence of randomized linear operators $(U_l^{(n)})_{ l \in \mathbb{N}},$ which approximate the solution operator $S^{(n)}$ such that for every $f \in F^{(n)}$ it holds: $U_l^{(n)}f$ is a random variable on a probability space $(\Omega^{(n)}, \Sigma^{(n)}, \Prob^{(n)}).$ We shall refer to separate $U^{(n)}_l$ as to {\varepsilon}mph{building blocks}. Put $\Omega := \Omega^{(1)} \times \ldots \times \Omega^{(d)}, \Sigma := \bigotimes_{n = 1}^{d} \Sigma^{(n)}, \Prob := \bigotimes_{n = 1}^{d} \Prob^{(n)}$. We denote {\boldsymbol{\eta}}gin{align*} & \Delta_0^{(n)} := U_0^{(n)} := 0, \quad \Delta_l^{(n)} := U_l^{(n)} - U_{l-1}^{(n)}, \quad l \in \mathbb{N}, {\varepsilon}nd{align*} and {\boldsymbol{\eta}}gin{align*} & Q(L,d) := \left\{\bl \in \mathbb{N}^d \, | \, |\bl| \leq L\mathrm{i}ght\}. {\varepsilon}nd{align*} Note that if $L \geq d$, then $|Q(L,d)| = \binom{L}{d}$. For $f \in F_d $ the {\varepsilon}mph{randomized Smolyak method of level L} approximating the tensor product problem $\{S_d,F_d,G_d, \mathcal{A}(\Omega)\}$ is given by {\boldsymbol{\eta}}gin{equation} \label{basic_form} A(L,d)f:\Omega \mathrm{i}ghtarrow G_d, \quad \omega \mapsto \left(\sum\limits_{\bl \in Q(L,d) } \bigotimes_{n = 1}^{d}\Delta^{(n)}_{l_n}(\omega_n)\mathrm{i}ght)f. {\varepsilon}nd{equation} We would like to stress that due to the definition of the probability space $(\Omega, \Sigma, \Prob)$ for given $f_n \in F^{(n)}, n = 1,2, \ldots, d,$ the families $((U^{(n)}_lf_n)_{l \in \mathbb{N}}), n = 1,2, \ldots, d,$ are mutually independent. Note that for $L < d$ the Smolyak method is the zero operator. Therefore, we will always assume (without stating it explicitly every time) that $L \geq d.$ It can be verified that the following representation holds {\boldsymbol{\eta}}gin{equation}\label{alg_bb_rep} A(L,d) = \sum_{L-d+1 \leq |\bl| \leq L} (-1)^{L-|\bl|} \binom{d-1}{L-|\bl|} \bigotimes_{n = 1}^d U^{(n)}_{l_n}, {\varepsilon}nd{equation} cf. \cite[Lemma 1]{WW95}. When investigating the randomized error we need that for every $l \in \mathbb{N}$ and $n=1,\ldots,d$ {\boldsymbol{\eta}}gin{equation}\label{Ran1} U_l^{(n)}f \in L^{2}(\Omega^{(n)}, G^{(n)}) \quad \text{ for all } f \in F^{(n)}. 
{\varepsilon}nd{equation} In the worst case error analysis we require for every $l \in \mathbb{N}$ and $n=1,\ldots,d$ {\boldsymbol{\eta}}gin{equation}\label{Wce1} \mu_{l,n}(\omega) := \sup\limits_{\substack{\lVert f \rVert_{F^{(n)}} \leq 1}} \lVert (U_l^{(n)}f)(\omega_n) \rVert < \infty \quad \text{ for all } \omega_n \in \Omega^{(n)} {\varepsilon}nd{equation} and that $\mu_{l,n}: \Omega \mathrm{i}ghtarrow [0, \infty)$ is measurable with {\boldsymbol{\eta}}gin{equation}\label{Wce2} \lVert \mu_{l,n} \rVert_{L^{2}(\Omega^{(n)}, \mathbb{R})} < \infty. {\varepsilon}nd{equation} Let $\rm{x} \in \{\rm{r}, \rm{w}\}.$ When considering the error $e^{\rm{x}}(S_d,A(L,d)),$ we assume that there exist constants $B,C,E > 0$ and $D \in (0,1)$ such that for every $n = 1,2, \ldots, d,$ and every $l \in \mathbb{N}$ {\boldsymbol{\eta}}gin{equation} \label{IntNorm} \lVert S^{(n)} \rVert_{\rm{op}} \leq B, {\varepsilon}nd{equation} {\boldsymbol{\eta}}gin{equation} \label{ApproxNorm} e^{\rm{x}}(S^{(n)}, U_l^{(n)}) \leq CD^l, {\varepsilon}nd{equation} and additionally in the randomized setting {\boldsymbol{\eta}}gin{equation} \label{DiffNorm} \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E}xpec \bigg[\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E D^l, {\varepsilon}nd{equation} and in the worst case setting {\boldsymbol{\eta}}gin{equation} \label{DiffNormWce} \mathbb{E}xpec \bigg[ \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E D^l. {\varepsilon}nd{equation} Note that (\ref{ApproxNorm}) implies the conditions (\ref{DiffNorm}) and (\ref{DiffNormWce}) with a constant $E := C(1+D^{-1})$ for all $l \geq 2.$ Still (\ref{DiffNorm}) and (\ref{DiffNormWce}) may even hold for some smaller $E.$ {\boldsymbol{\eta}}gin{remark}\label{algL2Rep} For our randomized error analysis it would be convenient to identify a randomized linear operator $A: \Omega \mathrm{i}ghtarrow L(F,G),$ $F,G$ separable Hilbert spaces, with the mapping (\ref{algo_form}) which we again denote by $A:$ {\boldsymbol{\eta}}gin{equation}\label{algo_form} A : F \to L^2(\Omega, G), \quad f\mapsto \big( \omega \mapsto Af(\omega) \big). {\varepsilon}nd{equation} We now show that this identification makes sense for all the operators we are considering. We start with the building blocks $U^{(n)}_l.$ From (\ref{DiffNorm}) we obtain {\boldsymbol{\eta}}gin{equation}\label{Ran2} \sup\limits_{\substack{\lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E}xpec \left[ \lVert U_l^{(n)}f \rVert^2 \mathrm{i}ght]^{1/2} \leq \frac{ED}{1-D}, {\varepsilon}nd{equation} implying $(U^{(n)}_lf)(\cdot) \in L^2(\Omega^{(n)}, G^{(n)})$ for all $f \in F^{(n)}.$ The building blocks $U_l^{(n)}$ are obviously linear as mappings $F^{(n)} \mathrm{i}ghtarrow L^2(\Omega^{(n)}, G^{(n)})$ and, due to (\ref{Ran2}), also bounded, i.e. continuous. Now, since for arbitrary sample spaces $ \Omega_1, \Omega_2$ and separable Hilbert spaces $H_1, H_2$ it holds $$L^{2}(\Omega_1, H_1) \otimes L^{2}(\Omega_2, H_2) \cong L^{2}(\Omega_1 \times \Omega_2, H_1 \otimes H_2),$$ we have that $(\bigotimes_{n = 1}^{d} U^{(n)}_{l_n})(f)(\cdot) $ lies in $L^{2}(\Omega,G_d)$ for $f \in F_d.$ Clearly, the tensor product operator $\bigotimes_{n = 1}^{d} U^{(n)}_{l_n}$ is a bounded linear mapping $F\to L^2(\Omega, G_d)$. 
Since due to {\varepsilon}qref{alg_bb_rep} the Smolyak method $A(L,d)$ may be represented as a finite sum of such tensor product operators, it is also a bounded linear map $F\to L^{2}(\Omega,G_d).$ If we formally consider $S_d$ as an operator $F_d\to L^2(\Omega, G_d)$, $f\mapsto \big( \omega \mapsto S_df \big)$ (i.e., an operator that maps into the constant $L^2$-functions), then $S_d$ is still linear and continuous with operator norm {\boldsymbol{\eta}}gin{equation*} \|S_d\|_{\op} = \sup_{\|f\|_{F_d}\le 1} \| S_df\|_{L^2(\Omega, G)} = \sup_{\|f\|_{F_d}\le 1} \mathbb{E}xpec \left[ \| S_df(\omega)\|^2 \mathrm{i}ght]^{1/2}, {\varepsilon}nd{equation*} and the usual randomized error can be written as {\boldsymbol{\eta}}gin{equation}\label{error_op_norm} e^{\rm{r}}(S_d, A) = \sup_{\|f\|_{F_d}\le 1} \|(S_d-A)f\|_{L^2(\Omega, G_d)} = \|S_d- A\|_{\op}. {\varepsilon}nd{equation} The worst case error unfortunately does not allow for a representation as operator norm similar to {\varepsilon}qref{error_op_norm}. Note that the above identification turns a randomized approximation problem $$S:F\mathrm{i}ghtarrow G, \quad A: \Omega \mathrm{i}ghtarrow L(F,G) $$ into a deterministic $L^2$-approximation problem $$S: F \mathrm{i}ghtarrow L^2(\Omega, G), \quad A:F \mathrm{i}ghtarrow L^2(\Omega, G).$$ {\varepsilon}nd{remark} \section{Error Analysis in Terms of the Level}\label{SECT4} We now perform the error analysis of the approximation of $S_d$ by the Smolyak method $A(L,d)$ in terms of the level $L,$ which may be done under the rather general assumptions of Sections $2$ and $3.$ {\boldsymbol{\eta}}gin{theorem}\label{BasicLemma} For $L,d \in \mathbb{N}, L\geq d$ let $A(L,d)$ be a randomized Smolyak method as described in Section \ref{SECT3}. Let $\rm{x} \in \{\rm{w}, \rm{r}\}$. Assume {\varepsilon}qref{IntNorm}, {\varepsilon}qref{ApproxNorm} and, dependently on the setting, for $\rm{x} = \rm{r}$ additionally assume (\ref{Ran1}),(\ref{DiffNorm}) and for $x = \rm{w}$ additionally assume (\ref{Wce1}), (\ref{Wce2}) and {\varepsilon}qref{DiffNormWce}. Then we have {\boldsymbol{\eta}}gin{equation}\label{levelBound} e^{\rm{x}}(S_d,A(L,d)) \leq CB^{d-1}D^{L-d+1} \sum\limits_{j = 0}^{d-1} \bigg( \frac{ED}{B} \bigg)^j \binom{L-d+j}{j} \leq CH^{d-1}\binom{L}{d-1}D^L, {\varepsilon}nd{equation} where $H = \max\{\frac{B}{D}, E\}.$ {\varepsilon}nd{theorem} {\boldsymbol{\eta}}gin{proof} The second inequality in (\ref{levelBound}) follows easily by using $\sum_{j = 0}^{d-1} \binom{L-d+j}{j} = \binom{L}{d-1}$ and estimating $(\frac{ED}{B})^j \leq \max \{1, (\frac{ED}{B})^{d-1}\},$ so all there remains to be done is proving the first inequality. Firstly we shall focus on the worst case error bound. Note that for a fixed $\omega \in \Omega$ $$\sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}} \lVert (S_d - A(L,d)(\omega))f \rVert^2 = \left(\sup\limits_{\substack{\lVert f \rVert_{F_d} \leq 1}} \lVert (S_d - A(L,d)(\omega))f \rVert\mathrm{i}ght)^2 = \lVert S_d - A(L,d)(\omega) \rVert^2_{{\rm op}}$$ Now we may proceed similarly as in the proof of Lemma $2$ from \cite{WW95}, by induction on $d,L$ for $d \in \mathbb{N}$ and $L \in \{d,d+1,\ldots\}$ . For $d=1$ and any $L \in \mathbb{N}_{\geq d}$ we have $S_d = S^{(1)}$ and $A(L,1) = U^{(1)}_L,$ so the statement is just the condition (\ref{ApproxNorm}). 
Suppose we have already proved the claim for $L,d$ and want to prove it for $L+1,d+1.$ Using
$$A(L+1,d+1) = \sum_{\bl \in Q(L,d)} \bigotimes_{n = 1}^{d} \Delta^{(n)}_{l_n} \otimes U^{(d+1)}_{L+1-|\bl|}$$
and Minkowski's inequality we get
\begin{align*}
& e^{\rm{w}}(S_{d+1}, A(L+1, d+1)) = \mathbb{E} \bigg[\lVert S_{d+1} - A(L+1, d+1) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
&= \mathbb{E}\bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) + (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
&\leq \mathbb{E}\bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} + \mathbb{E}\bigg[\lVert (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}}.
\end{align*}
We use Minkowski's inequality, properties of tensor product operator norms, the fact that the component algorithms $U^{(n)}_l, l \in \mathbb{N},$ are randomized independently for different $n \in \{1,\ldots, d\}$, (\ref{ApproxNorm}) and (\ref{DiffNormWce}) to obtain
\begin{align*}
\begin{split}
& \mathbb{E} \bigg[\lVert \sum_{\bl \in Q(L,d)} (\bigotimes_{n = 1}^d \Delta_{l_n}^{(n)}) \otimes (S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|}) \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
&\leq \sum_{\bl \in Q(L,d)} \mathbb{E} \bigg[ \lVert \bigotimes_{n = 1}^{d} \Delta^{(n)}_{l_n}\rVert^{2}_{{\rm op}} \lVert S^{(d+1)} - U^{(d+1)}_{L+1-|\bl|} \rVert^{2}_{{\rm op}} \bigg]^{\frac{1}{2}} \\
&= \sum_{\bl \in Q(L,d)} \bigg(\prod_{n = 1}^d \mathbb{E} \bigg[ \lVert \Delta_{l_n}^{(n)}\rVert_{{\rm op}}^2 \bigg]^{\frac{1}{2}} \bigg) \mathbb{E}\bigg[ \lVert S^{(d+1)} - U_{L+1-|\bl|}^{(d+1)} \rVert_{{\rm op}}^2\bigg]^{\frac{1}{2}} \\
&\leq \sum_{\bl \in Q(L,d)} CE^dD^{L+1} = \binom{L}{d}CE^d D^{L + 1}.
\end{split}
\end{align*}
Furthermore, using (\ref{IntNorm}),
\begin{align*}
& \mathbb{E}\bigg[\lVert (S_d - A(L,d))\otimes S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
& = \mathbb{E} \bigg[\lVert (S_d - A(L,d))\rVert^2_{{\rm op}} \lVert S^{(d+1)} \rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
& = \lVert S^{(d+1)} \rVert_{{\rm op}} \mathbb{E} \bigg[\lVert (S_d - A(L,d))\rVert^2_{{\rm op}} \bigg]^{\frac{1}{2}} \\
& \leq Be^{{\rm w}}(S_d,A(L,d)).
\end{align*}
Therefore we have
\begin{align*}
& e^{\rm{w}}(S_{d+1}, A(L+1,d+1)) \leq \binom{L}{d} E^dCD^{L+1} + Be^{{\rm w}}(S_d,A(L,d)),
\end{align*}
and using the induction hypothesis finishes the proof for the worst case error. Now consider the randomized error. By similar calculations as in the first part of the proof one could show that the claim holds true for the randomized error for elementary tensors. Then, however, one encounters problems when trying to lift it to the whole Hilbert space. The difficulty lies in the fact that the randomized error is not an operator norm of some tensor product operator, which would have enabled us to write it as a product of norms of the corresponding univariate operators and which has proved to be useful in bounding the worst case error. To get around this we need a different approach. The idea is to interpret a randomized problem as a deterministic $L^2$-approximation problem.
As already explained in Remark \ref{algL2Rep}, we may identify $(S_d - A(L,d)): \Omega \rightarrow L(F_d,G_d)$ with an operator $F_d \rightarrow L^{2}(\Omega, G_d),$ again denoted by $(S_d - A(L,d)).$ Then, however, $e^{\rm{r}}(S_d, A(L,d)) = \|S_d- A(L,d)\|_{\op}$ and we may proceed exactly as in Lemma 2 from \cite{WW95}, which finishes the proof.
\end{proof}
We may generalize the result of Theorem \ref{BasicLemma} by allowing for more flexibility in the convergence rates in (\ref{ApproxNorm}), (\ref{DiffNorm}) and (\ref{DiffNormWce}). This can be used to capture additional logarithmic factors in the error bounds for the building block algorithms. It turns out to be particularly useful when investigating the error bounds for Smolyak methods whose building blocks are, e.g., multivariate quadratures or approximation algorithms, as is the case in \cite{DLP07}. Suppose, namely, that there exist a constant $D \in (0,1)$ and non-decreasing sequences of positive numbers $(C_l)_l, (E_l)_l, l \in \mathbb{N},$ such that for every $l \in \mathbb{N}$
\begin{equation} \label{ApproxNormSeq2}
e^{\rm{x}}(S^{(n)}, U_l^{(n)}) \leq C_lD^l, \quad \rm{x} \in \{\rm{r}, \rm{w}\}.
\end{equation}
Moreover, in the case of the randomized error
\begin{equation} \label{DiffNormSeq2}
\sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\mathbb{E} \bigg[\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E_l D^l ,
\end{equation}
and in the case of the worst case error
\begin{equation} \label{DiffNormWceSeq2}
\mathbb{E} \bigg[ \sup\limits_{\substack{ \lVert f \rVert_{F^{(n)}} \leq 1}}\lVert \underbrace{ U_l^{(n)}f - U_{l - 1}^{(n)}f}_{= \Delta^{(n)}_lf} \rVert^2\bigg]^{\frac{1}{2}} \leq E_l D^l.
\end{equation}
It is now easy to prove Corollary \ref{levelBoundGeneralConditions2} along the lines of the proof of Theorem \ref{BasicLemma}.
\begin{corollary}\label{levelBoundGeneralConditions2}
For $L,d \in \mathbb{N}, L\geq d$ let $A(L,d)$ be a randomized Smolyak method as described in Section \ref{SECT3}. Let $\rm{x} \in \{\rm{w}, \rm{r}\}$. Assume \eqref{IntNorm}, \eqref{ApproxNormSeq2} and, depending on the setting, for $\rm{x} = \rm{r}$ assume (\ref{Ran1}), (\ref{DiffNormSeq2}), and for $\rm{x} = \rm{w}$ assume (\ref{Wce1}), (\ref{Wce2}) and \eqref{DiffNormWceSeq2}. Then we have
\begin{equation}
e^{\rm{x}}(S_d,A(L,d)) \leq C_LB^{d-1}D^{L-d+1} \sum\limits_{j = 0}^{d-1} \bigg( \frac{E_{L-1}D}{B} \bigg)^j \binom{L-d+j}{j} \leq C_LH_L^{d-1}\binom{L}{d-1}D^L,
\end{equation}
where $H_L = \max\{\frac{B}{D}, E_{L-1}\}.$
\end{corollary}
\begin{remark}
Note that applying Corollary \ref{levelBoundGeneralConditions2} to the error bounds for the uni- or multivariate building block algorithms from \cite{DLP07}, we may reproduce the error bounds obtained in \cite{DLP07} for the final (higher-dimensional) Smolyak method.
\end{remark}
\section{Error Analysis in Terms of Information}\label{SECT5}
\subsection{Algorithms}\label{ALG5.1}
Consider a linear approximation problem given by $\{S,F,G, \mathcal{O}(\Omega)\}.$ The aim of this section is to specify those linear operators that we want to call algorithms and to explain the typical information-based complexity framework for investigating the error of an algorithm in terms of the cardinality of information; for further reference see, e.g., \cite{TWW88}. To this end we shall specify a class of bounded linear functionals on $F$ called \emph{admissible information functionals} and denoted by $\Lambda$, which will become one more parameter of the approximation problem. Given a constant $\tau \in \mathbb{N}_0$ and, if $\tau > 0$, a collection of $\lambda_i \in \Lambda, i = 1,\ldots, \tau,$ the \emph{information operator} $\mathcal{N}: F \rightarrow \mathbb{R}^{\max\{\tau,1\}}$ applied to $f \in F$ is determined via
\[ \mathcal{N}(f) = \left\{ \begin{array}{ll} 0 & \text{ if } \tau = 0, \\ (\lambda_1(f),\ldots, \lambda_{\tau}(f)) & \text{else.} \\ \end{array} \right. \]
Note that we are considering only non-adaptive information, meaning that the information functionals used do not depend on $f \in F.$ A deterministic linear operator $A \in \mathcal{O}^{\deter, \lin}(F,G)$ is called a \emph{deterministic linear algorithm} if it admits a representation
\begin{align*}
A = \phi \circ \mathcal{N},
\end{align*}
where $\mathcal{N}$ is an information operator and $\phi: \mathbb{R}^{\max\{{\tau},1\}} \rightarrow G$ is an arbitrary mapping. We denote the number of information functionals used by the deterministic algorithm $A$ for any input $f \in F$ by $\cardet(A,F),$ i.e.,
$$\cardet(A,F) := \tau.$$
We denote the class of deterministic linear algorithms with admissible information functionals $\Lambda$ by $\mathcal{A}^{\deter,\lin}(F,G, \Lambda).$ Let $(V_l)_{l \in \mathbb{N}}$ be an arbitrary sequence of algorithms and let $(\lambda_{l,i})_{i \in [m_l]}$ be the information functionals used by $V_l.$ We say that the sequence $(V_l)_l$ \emph{uses nested information} if for every $a < b$
$$\{\lambda_{a,i} \,| \, i \in [m_a] \} \subseteq \{\lambda_{b,i} \,| \, i \in [m_b] \}.$$
A \emph{randomized linear algorithm} $A \in \mathcal{O}^{\ran, \lin}(\Omega, F,G)$ is a mapping
$$A:\Omega \rightarrow \mathcal{A}^{\deter,\lin}(F,G,\Lambda)$$
such that $\omega \mapsto \cardet(A(\omega), F)$ is a random variable. We denote the class of randomized linear algorithms with admissible information functionals $\Lambda$ by $\mathcal{A}^{\ran,\lin}(\Omega,F,G,\Lambda) =: \mathcal{A}(\Omega, \Lambda).$ For a randomized linear algorithm $A$ we may finally define
$$\carran(A,F) := \mathbb{E} \left[ \cardet(A,F) \right].$$
We say that the information used by a sequence of randomized linear algorithms is \emph{nested} if it is nested for each $\omega \in \Omega.$ Note that the information used by $(A(L,d))_{L \geq d}$ is nested. Now we would like to make some reasonable assumptions on the cost of the building blocks of the Smolyak method. Consider a randomized Smolyak method as described in Section \ref{SECT3} with building blocks being randomized algorithms; a concrete univariate building block of this kind is sketched below.
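For illustration only, the following sketch shows one simple randomized univariate building block in the above sense: for each $\omega$ it is a deterministic linear algorithm $\phi\circ\mathcal{N}$ whose information operator evaluates the integrand at $m_l = 2^l-1$ randomly shifted, equally spaced knots, so that $m_0 = 0$ and $m_l - m_{l-1} = 2^{l-1}$, i.e. the cost assumption \eqref{m_est_prot} below holds with $K = 2$ and $K_{\rm{low}} = K_{\rm{up}} = 1$. This is a deliberately minimal stand-in for the scrambled-net quadratures of the example that follows; the code is written in Python, and all function names are ours.
\begin{verbatim}
import numpy as np

def info_operator(f, knots):
    """Non-adaptive information N(f) = (f(t_1), ..., f(t_m))."""
    return np.array([f(t) for t in knots])

def randomized_building_block(l, rng):
    """Return a realization U_l(omega): an equal-weight quadrature on a
    randomly shifted equispaced grid with m_l = 2**l - 1 knots."""
    m_l = 2**l - 1
    if m_l == 0:
        return lambda f: 0.0                      # U_0 := 0
    shift = rng.random()                          # omega ~ U[0, 1)
    knots = ((np.arange(m_l) + 0.5) / m_l + shift) % 1.0
    def U_l(f):
        y = info_operator(f, knots)               # the only costed step
        return y.mean()                           # phi: average the data
    return U_l

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)                           # test integrand on [0, 1]
exact = np.e - 1.0
for l in range(1, 6):
    U_l = randomized_building_block(l, rng)
    print(l, 2**l - 1, abs(U_l(f) - exact))
\end{verbatim}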
Let
$$m_{l,n} := \carran(U^{(n)}_{l}, F^{(n)}).$$
Notice that $m_{0,n} = 0.$ For $d \in \mathbb{N}, L = d,d+1,\ldots$ put
$$N := N(L,d) := \carran(A(L,d), F_d).$$
Let us assume that for every $n \in \{1,\ldots,d\}$ the sequence $(m_{l,n})_{l \in \mathbb{N}}$ is non-decreasing and that there exist constants $1\leq K_{\rm{low}} \leq K_{\rm{up}}, 1< K$ such that for every $n = 1,\ldots,d$ and $l \in \mathbb{N}$ it holds
\begin{equation} \label{m_est_prot}
K_{\rm{low}} K^{l-1}(K - 1) \leq m_{l,n} - m_{l-1,n} \leq K_{\rm{up}} K^{l-1}(K - 1).
\end{equation}
Note that this implies
\begin{equation}\label{m_est}
K_{\rm{low}} (K^l - 1) \leq m_{l,n} \leq K_{\rm{up}}(K^l - 1), \quad l \in \mathbb{N}.
\end{equation}
\begin{example}
Consider the integration problem from Example \ref{ProblemExample}. Let $s \in \mathbb{N}.$ For $n = 1, \ldots, d$ let $ D^{(n)} = [0,1]^s,$ let $\rho^{(n)}$ be the Lebesgue measure on $[0,1]^s$, and let $F^{(n)}$ be some reproducing kernel Hilbert space of functions defined on $[0,1]^s$ (e.g., a Sobolev space with sufficiently high smoothness parameter). Choose a prime number $b \geq s$ and for $n = 1, \ldots, d,$ and $l \in \mathbb{N}$ let $\mathcal{P}_l^{(n)}$ be a scrambled $(0,l,s)$-net in base $b$ as introduced in \cite{Owe95}. Now
$$U^{(n)}_l : F^{(n)} \rightarrow \mathbb{R}, \quad f \mapsto \frac{1}{b^l} \sum_{x \in \mathcal{P}_l^{(n)}} f(x)$$
is a randomized algorithm. Moreover, if we randomize $(U^{(n)}_l)_{n,l}$ in such a way that the families $(U^{(n)}_l)_l$ are independent, then we may use them as building blocks of the Smolyak method and all the results of this paper apply, cf. also \cite{DLP07}.
\end{example}
\subsection{Upper Error Bounds}
Throughout the whole section we require that the assumptions of Theorem \ref{BasicLemma} hold. Let us define $\alpha := \frac{\log(\frac{1}{D})}{\log(K)},$ where $K$ is as in (\ref{m_est_prot}) and $D$ is as in (\ref{ApproxNorm}). We define the \emph{polynomial convergence rate of the algorithms} $U^{(n)}_l, l \in \mathbb{N},$ by
\begin{equation}\label{poly_rate_conv}
\mu^{(n)}_{\rm{x}} := \sup\{\delta \geq 0 \, | \, \sup_{l \in \mathbb{N}} e^{\rm{x}}(S^{(n)}, U^{(n)}_l) m^{\delta}_{l,n} < \infty\},
\end{equation}
where $\rm{x} \in \{\rm{r}, \rm{w}\}.$ It is straightforward to verify that $\alpha \leq \mu^{(n)}_{\rm{x}}$ for every $n.$ Indeed, we have
\begin{equation}\label{mu_u_x}
e^{\rm{x}}(S^{(n)}, U^{(n)}_l) \leq \frac{CK_{\rm{up}}^{\alpha}}{m_{l,n}^{\alpha}},
\end{equation}
because of
\begin{equation}\label{OneDimCardBound}
\frac{C K_{\rm{up}}^{\alpha} }{m_{l,n}^{\alpha}} \geq \frac{C K_{\rm{up}}^{\alpha} }{K_{\rm{up}}^{\alpha} (K^l - 1)^{\alpha}} \geq \frac{C}{K^{l \alpha}} = CD^l.
\end{equation}
Hence for each $n \in \{1, \ldots, d\}$ the quantity $\alpha$ is a lower bound on the polynomial order of convergence $\mu^{(n)}_{\rm{x}}$ of the algorithms $U^{(n)}_l, l \in \mathbb{N},$ and it can be chosen arbitrarily close to $\mu^{(n)}_{\rm{x}}$ if the constants $C$ and $D$ in (\ref{ApproxNorm}) are chosen appropriately. The aim of this section is to develop upper bounds on the error of the $d$-variate Smolyak method in terms of $N,d$ and $\alpha.$ More concretely, we prove the following theorem.
\begin{theorem}\label{CardInfUpBound}
Let $\rm{x} \in \{\rm{r}, \rm{w}\}.$ Let $K_{\rm{low}},K_{\rm{up}},K, \alpha$ be as above, and let the assumptions of Theorem \ref{BasicLemma} hold.
Then there exist constants $C_0, C_1$ such that for all $d \in \mathbb{N}$ and all $L \geq d$ it holds
\begin{equation}\label{pw_req_est_dim1}
e^{\rm{x}}(A(L,1)) \leq C_0C_1 N^{-\alpha}
\end{equation}
and
\begin{equation}\label{pw_req_est}
e^{\rm{x}}(A(L,d)) \leq C_0 C_1^d \left(1 + \frac{ \log(N)}{d-1}\right)^{(d-1)(\alpha + 1)}N^{-\alpha}, \,\, d\geq 2,
\end{equation}
where $N = N(L,d)$ is the cardinality of information used by the algorithm $A(L,d).$
\end{theorem}
To prove Theorem \ref{CardInfUpBound} we need a lemma bounding $N(L,d)$ in terms of $K_{\rm{low}},K_{\rm{up}},K, d$ and $L.$
\begin{lemma}\label{NBound}
Let $K_{\rm{low}},K_{\rm{up}},K$ be as above. Put
$$N_l^{\rm{nest}}:= N_l^{\rm{nest}}(L,d) = K_{\rm{low}}^d \bigg(\frac{K - 1}{K}\bigg)^{d} K^L \binom{L-1}{d-1}, $$
$$N_u :=N_u(L,d) = K_{\rm{up}}^d \frac{K}{K - 1} K^L \binom{L-1}{d-1},$$
$$N_u^{\rm{nest}} := N_u^{\rm{nest}}(L,d) := K_{\rm{up}}^d \bigg(\frac{K - 1}{K}\bigg)^{d} K^L \binom{L-1}{d-1}.$$
For every $d \in \mathbb{N}$ and $L \geq d$ it holds
$$N_l^{\rm{nest}}(L,d) \leq N(L,d) \leq N_u(L,d).$$
Moreover, if the building blocks of the Smolyak method use nested information then
$$ N_l^{\rm{nest}}(L,d) \leq N(L,d) \leq N_u^{\rm{nest}}(L,d).$$
\end{lemma}
\begin{proof}
We have
\begin{align*}
N(L,d)& = \mathbb{E} \left[\cardet\left( \sum_{L-d+1 \leq |\bl|\leq L} (-1)^{L - |\bl|} \binom{d-1}{L - |\bl|} \bigotimes_{n = 1}^d U^{(n)}_{l_n}(\omega)\right) \right] \\
& \leq \sum_{L-d+1 \leq |\bl|\leq L} \mathbb{E} \left[ \cardet\left(\bigotimes_{n = 1}^d U^{(n)}_{l_n}(\omega) \right) \right]\\
& = \sum_{L-d+1 \leq |\bl|\leq L} \prod_{n = 1}^{d} \mathbb{E} \left[ \cardet\left(U^{(n)}_{l_n}(\omega)\right) \right] \\
& = \sum_{L-d+1 \leq |\bl|\leq L} \prod_{n = 1}^{d} m_{l_n,n} \leq K_{\rm{up}}^d \sum_{L-d+1 \leq |\bl|\leq L} K^{|\bl|}.
\end{align*}
Now following the steps of \cite[Lemma 7]{WW95} we obtain
\begin{align*}
N(L,d) & \leq K_{\rm{up}}^d \sum_{|\bl| = L - d +1}^{L} K^{|\bl|} \leq K_{\rm{up}}^d \sum_{\nu = L - d +1}^{L} K^{\nu} \binom{\nu - 1}{d-1}\\
& \leq K_{\rm{up}}^d \binom{L-1}{d-1} (K^{L+1} - K^{ L-d+1 })(K-1)^{-1}\\
& \leq K_{\rm{up}}^d \frac{K}{K - 1} K^{L} \binom{L-1}{d-1} = N_u.
\end{align*}
Now we provide a lower bound on $N(L,d).$ Note that given the cardinality of information used by the building blocks, the cardinality of information used by the Smolyak method is minimal when the information used by the building blocks is nested for every coordinate. In this case the information used by the Smolyak method is exactly the information used by $\sum_{|\bl| = L} \bigotimes_{n = 1}^{d} U^{(n)}_{l_n}.$ Let us fix $\bl \in \mathbb{N}^d, |\bl| = L.$ The expected value of the cardinality of information used by $\bigotimes_{n = 1}^{d}U^{(n)}_{l_n}$ and at the same time not used by any other $\bigotimes_{n = 1}^{d}U^{(n)}_{ v_n}$ with $|{\bf v}| = L$ is
\begin{equation}
\prod_{n = 1}^{d} (m_{l_n,n} - m_{l_n - 1, n}) \geq K_{\rm{low}}^d K^{L-d}(K-1)^d.
\end{equation}
We obtain
\begin{align*}
N(L,d) & \geq \sum_{|\bl| = L} K_{\rm{low}}^d K^{L-d}(K-1)^d \\
& = K_{\rm{low}}^d \left( \frac{K - 1}{K} \right)^d K^L \binom{L-1}{d-1} =N_l^{\rm{nest}}.
\end{align*}
The upper bound in the case when the building blocks use nested information follows in exactly the same manner on noting that
\begin{equation}
K_{\rm{up}}^d K^{L-d}(K-1)^d \geq \prod_{n = 1}^{d} (m_{l_n,n} - m_{l_n - 1, n}).
\end{equation}
\end{proof}
\begin{proof}[Proof of Theorem \ref{CardInfUpBound}]
Note that $N(L,1) = m_{L,1},$ so we have already shown the statement for $d = 1$ in (\ref{OneDimCardBound}). It remains to consider the case $d > 1.$ Consider the function
$$f: [1, \infty) \rightarrow \mathbb{R}, \quad x \mapsto \left(1 + \frac{\log(x)}{d-1}\right)^{(d-1)(\alpha + 1)}x^{-\alpha}.$$
We will show that there exist constants $\widetilde{C}_{u,0}, \widetilde{C}_{u,1}, \widetilde{C}_{l,0}, \widetilde{C}_{l,1}$ such that for $N_u$ and $N_l := N_l^{\rm{nest}}$ from Lemma \ref{NBound} it holds
\begin{equation}\label{NuBound}
e^{\rm{x}}(A(L,d)) \leq \widetilde{C}_{u,0}\widetilde{C}_{u,1}^d f(N_u)
\end{equation}
and
\begin{equation}\label{NlBound}
e^{\rm{x}}(A(L,d)) \leq \widetilde{C}_{l,0}\widetilde{C}_{l,1}^d f(N_l).
\end{equation}
Since $N_l \leq N(L,d) \leq N_u,$ the unimodality of $f$ combined with the fact that its extremum is a maximum yields $f(N(L,d)) \geq \min \{f(N_u), f(N_l) \},$ finishing the proof. First we prove (\ref{NuBound}). Calling upon Theorem \ref{BasicLemma} and using $L \leq \frac{\log(N_u)}{ \log(K)}$ we get
\begin{align*}
e^{\rm{x}}(S_d,A(L,d)) & \leq CH^{d-1} \binom{L}{d-1}D^L \\
& =CH^{d-1} \binom{L}{d-1}K^{-L\alpha} \\
& = CH^{d-1}\binom{L}{d-1}\bigg(K_{\rm{up}}^d \frac{K}{K - 1} \bigg)^{\alpha} \binom{L-1}{d-1}^{\alpha}N_u^{-\alpha} \\
& \leq \frac{C}{H}(HK_{\rm{up}}^{\alpha})^d \left(\frac{K}{K - 1}\right)^{\alpha}\binom{L}{d-1}^{\alpha+1}N_u^{- \alpha} \\
& \leq \frac{C}{H}(HK_{\rm{up}}^{\alpha})^d \left(\frac{K}{K - 1}\right)^{\alpha} \frac{\log(N_u)^{(d-1)(\alpha+1)}}{( \log(K))^{(d-1)(\alpha+1)} ((d-1)!)^{\alpha+1}} N_u^{- \alpha} \\
& = \frac{C}{((d-1)!)^{\alpha+1}H}\left(\frac{K \log(K)}{K - 1}\right)^{\alpha}\log(K) \left( \frac{H K_{\rm{up}}^{\alpha}}{( \log(K))^{\alpha+1}} \right)^d \frac{\log(N_u)^{(d-1)(\alpha+1)}}{N_u^{\alpha}} \\
& = ((d-1)!)^{-\alpha - 1} C_{u,0} C_{u,1}^d \frac{\log(N_u)^{(d-1)(\alpha+1)}}{N_u^{\alpha}},
\end{align*}
with constants $C_{u,0}, C_{u,1}$ depending neither on $d$ nor on $N(L,d).$ By Stirling's formula we conclude
\begin{align*}
e^{\rm{x}}(S_d,A(L,d)) & \leq \left(\frac{e^d}{(2\pi)^{\frac{1}{2}}(d-1)^{\frac{1}{2}} (d-1)^{(d-1)} }\right)^{\alpha + 1} C_{u,0} C_{u,1}^d \frac{\log(N_u)^{(d-1)(\alpha+1)}}{N_u^{\alpha}} \\
& \leq \widetilde{C}_{u,0} \widetilde{C}_{u,1}^d \left(\frac{\log(N_u)}{d-1}\right)^{(d-1)(\alpha + 1)}N_u^{-\alpha}.
\end{align*}
Now we prove (\ref{NlBound}).
To this end it suffices to prove that there exist constants $\hat{C}_{0}, \hat{C}_1$ independent of $d$ and $N$ such that
\begin{equation} \label{ulInequality}
\left(\frac{\log(N_u)}{d-1}\right)^{(d-1)(\alpha + 1)}N_u^{-\alpha} \leq \hat{C}_0 \hat{C}_1^d \left( 1 + \frac{\log(N_l)}{d-1}\right)^{(d-1)(\alpha + 1)}N_l^{-\alpha},
\end{equation}
i.e.,
$$\left(\frac{N_l}{N_u}\right)^{\alpha} \left(\frac{\log(N_u)}{(d-1) + \log(N_l)} \right)^{(d-1)(\alpha + 1)} \leq \hat{C}_0 \hat{C}_1^d .$$
Note that
$$\frac{N_u}{N_l} = \left(\frac{K}{K-1} \right)^{d+1} \left(\frac{K_{\rm{up}}}{K_{\rm{low}}}\right)^d,$$
so, putting $\hat{K} = \frac{K}{K-1}\frac{K_{\rm{up}}}{K_{\rm{low}}},$ we have
\begin{align*}
\frac{\log(N_u)}{(d-1) + \log(N_l)} \leq & \frac{\log \left( \frac{K}{K-1} \right) + d\log(\hat{K}) + \log(N_l)}{(d-1) + \log(N_l)} \\
& \leq \log\left( \frac{K}{K-1}\right) + \frac{d}{d-1}\log(\hat{K}) + 1 \leq \log\left( \frac{K}{K-1}\right) + 2\log(\hat{K}) + 1 .
\end{align*}
Since obviously $\left(\frac{N_l}{N_u}\right)^{\alpha} \leq 1,$ this shows (\ref{ulInequality}) and finishes the proof of the theorem.
\end{proof}
\begin{comment}
We would like to show similar bounds with $N_l$ in place of $N_u.$ Firstly we show that there exists a constant $\eta_{K, \epsilon}$ depending only on $K$ and $\epsilon$ for which it holds
$$L \leq \eta_{K, \epsilon} \log(N_l).$$
Indeed, by the definition of $N_l$ and since $L > (1+\epsilon)d$ we get
\begin{align*}
L &= \log_K(N_l) - d\log_K((K-1)K^{-1}K_{\rm{low}}) - \log_K\left(\binom{L-1}{d-1}\right) \leq \\
&\log_K(N_l) - d\log_K(K^{-1}) = \\
&\log_K(N_l) + d < \log_K(N_l) + (1 + \epsilon)^{-1}L,
\end{align*}
and so we may take $\eta_{K,\epsilon} = (1+\epsilon)(\epsilon \log(K))^{-1}.$ Now, repeating the calculations in exactly the same fashion as we did for $N_u$ we see that for the constants $C_{l,0} = \frac{C}{H((d-1)!)^{1+\alpha}}, C_{l,1}= H\left(\frac{(K-1)K_{\rm{low}}}{K} \right)\eta_K^{1+\alpha}$ it holds
$$e^{\rm{x}}(S_d, A(L,d)) \leq C_{l,0} C_{l,1}^d \frac{\log(N_l)^{(d-1)(1+ \alpha)}}{N_l^{\alpha}}. $$
\end{comment}
\subsection{Lower Error Bounds}
In this subsection we make the following additional assumptions. The first assumption states that there exists a sequence of instances of the problem $\{S_d,F_d,G_d, \mathcal{A}(\Omega, \Lambda)\}$ that is genuinely univariate, i.e., there exists a sequence $(f_l)_{l \in \mathbb{N}}$ in $F_d$, $f_l= g_{1,l} \otimes g_{2,l} \otimes \cdots \otimes g_{d,l},$ such that $\lVert f_l \rVert_{F_d} = 1,$ for which
\begin{equation}\label{v1}
\lVert(S^{(2)} \otimes \cdots \otimes S^{(d)})(g_{2,l} \otimes \cdots \otimes g_{d,l})\rVert =: \theta_d > 0,
\end{equation}
and the $U_l^{(n)}, l\geq 1,$ are exact on $g_{n,l}$ for $n>1.$ Secondly, we assume that there exist constants $\widetilde{C}>0, \widetilde{D}\in(0,1)$ such that for every $l \in \mathbb{N}$
\begin{equation}\label{v2}
\mathbb{E}\bigg[ \lVert S^{(1)}g_{1,l} - U_l^{(1)}g_{1,l} \rVert^2\bigg]^{\frac{1}{2}} \geq \widetilde{C} \widetilde{D}^l.
\end{equation}
Let us put
$$\beta := \frac{\log(\frac{1}{\widetilde{D}})}{\log(K)},$$
with $\widetilde{D}$ as in (\ref{v2}) and $K$ as in (\ref{m_est_prot}). Using (\ref{v2}) and (\ref{m_est}) one easily sees that
\begin{equation}\label{UnivLowBoundR}
\mathbb{E} \left[\lVert (S^{(1)} - U^{(1)}_l)g_{1,l} \rVert^{2} \right]^{\frac{1}{2}} \geq \widetilde{C} \left( K_{\rm{low}}(1-K^{-1}) \right)^{\beta} m_{l,1}^{-\beta},
\end{equation}
meaning that we have $\beta \geq \mu^{(1)}_{{\rm{x}}},$ where $ \mu^{(1)}_{{\rm{x}}}$ is as in (\ref{poly_rate_conv}). Moreover, by choosing $(g_{1,l})_{l \in \mathbb{N}}$ appropriately, $\beta$ can be made arbitrarily close to $\mu^{(1)}_{{\rm{x}}}.$
\begin{example}
The assumptions made in this subsection are quite naturally met for many important problems. Consider for instance an integration problem as described in Example \ref{ProblemExample}, where $F^{(n)}, n = 2, \ldots, d,$ may be any spaces containing constant functions. Then, for an appropriate $(g_{1,l})_{l \in \mathbb{N}}$ (chosen so that the integration error does not converge too fast to $0$) we have that
$$f_l := g_{1,l} \otimes \mathbbm{1}_{D} \otimes \cdots \otimes \mathbbm{1}_{D}$$
satisfies our assumptions for any randomized quadrature with weights adding up to $1.$
\end{example}
\begin{lemma}\label{LowBoundLem}
Let ${\rm{x}} \in \{\rm{w}, \rm{r}\},$ and let (\ref{v1}) and (\ref{v2}) hold. Then there exists a constant $\hat{c}_d$ such that
$$e^{{\rm{x}}}(A(L,d)) \geq \hat{c}_d m_{L,1}^{-\beta}$$
for all $L \geq d.$ If additionally (\ref{v1}) and (\ref{v2}) are satisfied for all $d \in \mathbb{N}$ with the same constants $\widetilde{C}$ and $\widetilde{D},$ and
$$\Theta := \inf_{L \geq d,\, d \in \mathbb{N}} \left( \frac{m_{L,1}}{m_{L-d+1,1}} \right)^{2\beta} \theta^2_d $$
is strictly positive, then we may choose the constants $(\hat{c}_d)_{d \in \mathbb{N}}$ in such a way that they are all equal.
\end{lemma}
\begin{proof}
Choosing $f_l$ satisfying (\ref{v1}), due to the exactness assumption we obtain
\begin{align*}
\begin{split}
A(L,d)f_l & = \sum_{\bl \in Q(L,d)} \bigotimes_{n = 1}^{d} \Delta^{(n)}_{l_n} g_{n,l} \\
& = \sum_{t = 1}^{L-d+1} \Delta_t^{(1)}g_{1,l}\otimes \Delta_1^{(2)}g_{2,l} \otimes \cdots \otimes \Delta_1^{(d)}g_{d,l} \\
& = U^{(1)}_{L-d+1}g_{1,l} \otimes S^{(2)}g_{2,l} \otimes \cdots \otimes S^{(d)}g_{d,l}.
\end{split}
\end{align*}
Let us put $T := \bigotimes_{n = 2}^d S^{(n)}, h = \bigotimes_{n = 2}^d g_{n,l}.$ Due to (\ref{UnivLowBoundR}) we have
\begin{align*}
e^{{\rm{x}}}(A(L,d))^2 & \geq \mathbb{E} \lVert (S_d - A(L,d))f_l \rVert^2 = \mathbb{E} \lVert (S^{(1)} \otimes T - U^{(1)}_{L -d+1} \otimes T)f_l \rVert^2 \\
& = \mathbb{E} \lVert (S^{(1)} - U^{(1)}_{L-d+1})g_{1,l} \otimes Th \rVert^2 = \lVert Th \rVert^2 \mathbb{E} \lVert (S^{(1)} - U^{(1)}_{L-d+1})g_{1,l} \rVert^2 \\
& = \theta^2_d\mathbb{E} \lVert (S^{(1)} - U^{(1)}_{L-d+1})g_{1,l} \rVert^2 \geq \widetilde{C}^2\theta^2_d \left( K_{\rm{low}}(1-K^{-1}) \right)^{2\beta} \left( \frac{m_{L,1}}{m_{L-d+1,1}} \right)^{2\beta} m_{L,1}^{-2\beta}.
\end{align*}
\end{proof}
\begin{remark}
In particular, the constants $(\hat{c}_d)_{d \in \mathbb{N}}$ may be chosen all equal, e.g., when (\ref{v1}) and (\ref{v2}) are satisfied for all $d \in \mathbb{N}$ with the same constants $\widetilde{C},\widetilde{D}$ and $\theta := \inf_{d \in \mathbb{N}} \theta_d > 0.$
\end{remark}
\begin{lemma}\label{NBoundInM}
Let there exist constants $1\leq K_{\rm{low}} \leq K_{\rm{up}}, 1< K$ such that for all $n = 1,\ldots, d$ and $l \in \mathbb{N}$ (\ref{m_est_prot}) is satisfied. Then there exists a constant $\widetilde{c}_d$ such that
$$\widetilde{c}_d m_{L,1} (\log(m_{L,1}))^{d-1} \leq N(L,d)$$
for all $L \geq d.$ Moreover, if $\xi := \xi(d) := (K^d-1)^{\frac{1}{d}} > 1,$ then there exists a constant $\widetilde{C}_d$ such that
$$N(L,d) \leq \widetilde{C}_d m_{L,1} (\log(m_{L,1}))^{(d-1)}$$
for all $L\geq d.$
\end{lemma}
\begin{proof}
First we prove the upper bound. On the one hand, due to (\ref{m_est}) it holds
\begin{align*}
m_{L,1} \log(m_{L,1})^{d-1} & \geq K_{\rm{low}} \frac{K^L-1}{K^L} \log(K_{\rm{low}}(K^L - 1))^{d-1} K^L \\
& \geq K_{\rm{low}} \frac{K^d - 1}{K^d} \log\left(\left[(K_{\rm{low}}(K^L-1 ))^{\frac{1}{L}}\right]^L\right)^{d-1} K^L \\
& \geq K_{\rm{low}} \frac{K^d - 1}{K^d} \log(\xi)^{d-1} L^{d-1}K^L,
\end{align*}
where we used that the function $[0, \infty) \ni x \mapsto (K^x - 1)^{\frac{1}{x}} $ is increasing. On the other hand, according to Lemma \ref{NBound},
$$N(L,d) \leq K_{\rm{up}}^d \frac{K}{K-1} \binom{L-1}{d-1}K^L \leq K_{\rm{up}}^d \frac{K}{K-1} \frac{1}{(d-1)!} L^{d-1}K^L.$$
It follows that the constant
$$\widetilde{C}_d = K_{\rm{up}}^d \frac{K}{K-1} \frac{1}{(d-1)!} \left[ K_{\rm{low}} \frac{K^d - 1}{K^d} \log(\xi)^{d-1} \right]^{-1} $$
does the job. Now we prove the lower bound. On the one hand, due to (\ref{m_est}) we have
\begin{align*}
m_{L,1} (\log(m_{L,1}))^{d-1} \leq K_{\rm{up}}(K^L - 1) \left(\log(K_{\rm{up}}(K^L-1))\right)^{d-1} \leq K_{\rm{up}} \left(\log(K_{\rm{up}}^{\frac{1}{L}}K)\right)^{d-1} K^L L^{d-1}.
\end{align*}
Noticing that $f: [d, \infty) \rightarrow \mathbb{R}, x \mapsto \frac{(x-1)\cdots(x-d+1)}{x^{d-1}},$ is increasing we obtain
\begin{align*}
& \binom{L-1}{d-1} = \frac{(L-1) \cdots (L-d+1)}{L^{d-1}} \frac{L^{d-1}}{(d-1)!} \geq \frac{(d-1)!}{(d)^{d-1}}\frac{L^{d-1}}{(d-1)!} \\
& = \frac{1}{(d)^{d-1}} L^{d-1}.
\end{align*}
On the other hand, we have due to Lemma \ref{NBound}
\begin{align*}
N(L,d) & \geq \left( \frac{(K-1)K_{\rm{low}}}{K} \right)^d K^L \binom{L-1}{d-1} \\
& \geq \left( \frac{(K-1)K_{\rm{low}}}{K} \right)^d \frac{1}{(d)^{d-1}} K^L L^{d-1}.
\end{align*}
It follows that the constant
$$\widetilde{c}_d = \left(\frac{(K-1)K_{\rm{low}}}{K} \right)^d \frac{1}{(d)^{d-1}}\left(K_{\rm{up}} \left(\log(K_{\rm{up}}^{\frac{1}{d}}K)\right)^{d-1}\right)^{-1}$$
satisfies
$$\widetilde{c}_d m_{L,1} (\log(m_{L,1}))^{(d-1)} \leq N(L,d).$$
\end{proof}
\begin{remark}
Note that the constants $\widetilde{C}_d$ and $\widetilde{c}_d$ from Lemma \ref{NBoundInM} decay superexponentially fast in $d.$
\end{remark}
\begin{corollary}\label{Cor_Lower_Bound}
Let ${\rm{x}} \in \{\rm{r}, \rm{w} \}.$ Let (\ref{v1}) and (\ref{v2}) hold.
Furthermore, let there exist constants $1\leq K_{\rm{low}} \leq K_{\rm{up}}, 1< K$ such that for all $n = 1,\ldots, d$ and $l \in \mathbb{N}$ (\ref{m_est_prot}) is satisfied. Moreover, assume that $m_{L,1} \geq 16$. Then there exists a constant $c_d$ such that given $\delta \in (0,1)$ there exists $N(\delta)$ such that for every $N \geq N(\delta)$
$$e^{{\rm{x}}}(A(L,d)) \geq c_d \frac{(\log(N))^{(d-1 - \delta) \beta}}{N^{\beta}}$$
with $\beta = \frac{\log(\frac{1}{\widetilde{D}})}{\log(K)}$ and $N = N(L,d).$
\end{corollary}
\begin{proof}
Let $\widetilde{c}$ be such that for every $L \in \mathbb{N}$
$$\widetilde{c} m_{L,1} (\log(m_{L,1}))^{d-1} \leq N(L,d).$$
The existence of $\widetilde{c}$ is guaranteed by Lemma \ref{NBoundInM}. We put $\widetilde{c}_0 := \min\{\widetilde{c}, 1\}.$ We would like to express the bound from Lemma \ref{LowBoundLem} in terms of the cardinality $N := N(L,d)$. To this end we want to find a function $g:\mathbb{R} \rightarrow \mathbb{R}$ of the form $g(x) = \frac{x^{\beta}}{(\log(x))^{\eta}}$ such that for large $m$
\begin{equation}\label{TranslFun}
g(m (\log(m))^{d-1}) \geq m^{\beta},
\end{equation}
implying
$$e^{{\rm{x}}}(A(L,d)) \geq \frac{\hat{c}_d}{m^{\beta}_{L,1}} \geq \frac{\hat{c}_d}{g( m_{L,1} (\log(m_{L,1}))^{d-1})}.$$
We rewrite (\ref{TranslFun}) as
$$\frac{m^{\beta} (\log(m))^{(d-1)\beta}}{(\log(m(\log(m))^{d-1}))^{\eta}} \geq m^{\beta}.$$
Hence (\ref{TranslFun}) holds if
$$\eta \leq \frac{ (d-1) \beta \log(\log(m))}{\log\left(\log(m) + (d-1) \log(\log(m))\right)},$$
and the expression on the right hand side converges from below to $(d-1)\beta$ as $m$ goes to $\infty.$ To obtain
$$\frac{\hat{c}_d}{g(m_{L,1} \log(m_{L,1})^{d-1})} \geq \frac{\hat{c}_d}{g(\widetilde{c}_0^{-1} N)}$$
it is sufficient to check that $g$ is increasing on the interval $[m_{L,1} \log(m_{L,1})^{d-1}, \infty).$ Simple calculations reveal that $g$ is increasing on $[e^{\frac{\eta}{\beta}}, \infty) \supset [e^{d-1}, \infty).$ The final step is to notice that
$$e^{{\rm{x}}}(A(L,d)) \geq \frac{\hat{c}_d}{g(\widetilde{c}_0^{-1} N)} = \hat{c}_d \frac{\log(\widetilde{c}_0^{-1} N)^{\eta}}{(\widetilde{c}_0^{-1} N)^{\beta}} \geq \hat{c}_d\widetilde{c}_0^{\beta} \frac{\log(N)^{\eta}}{N^{\beta}}.$$
Putting $c_d := \hat{c}_d \widetilde{c}_0^{\beta}$ finishes the proof.
\end{proof}
\section{Application to Infinite-Dimensional Integration}\label{INF_DIM_INT}
In Theorem \ref{Theo_UB_PW} we provide a sharp result on randomized infinite-dimensional integration on weighted reproducing kernel Hilbert spaces that parallels the sharp result on deterministic infinite-dimensional integration stated in \cite[Theorem~5.1]{GHHR17}. Results from \cite{Gne13} and from \cite{PW11} in combination with Theorem~\ref{CardInfUpBound} rigorously establish the sharp randomized result in the special case where the weighted reproducing kernel Hilbert space is based on an anchored univariate kernel. With the help of the embedding tools provided in \cite{GHHR17} this result will be extended to general weighted reproducing kernel Hilbert spaces.
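Before we introduce the function-space setting of \cite{GHHR17}, the following small sketch (in Python) makes the information count behind Lemma \ref{NBound} concrete: it enumerates the multi-indices $\bl$ with $L-d+1 \leq |\bl| \leq L$, accumulates the crude bound $\sum_{\bl} \prod_{n} m_{l_n,n}$ on $N(L,d)$ used in the proof of that lemma, and compares it with $N_u(L,d)$. The choice $m_{l,n} = 2^l-1$ (i.e. $K=2$ and $K_{\rm{low}} = K_{\rm{up}} = 1$) as well as all function names are illustrative choices of ours, not prescribed by the analysis.
\begin{verbatim}
from itertools import product
from math import comb

def m(l):
    """Expected cardinality of information of the univariate building block
    U_l; illustrative choice m_l = 2**l - 1 (K = 2, K_low = K_up = 1)."""
    return 2**l - 1

def crude_cardinality_bound(L, d):
    """Sum over multi-indices l in N^d with L-d+1 <= |l| <= L of prod_n m(l_n);
    this upper-bounds N(L, d), cf. the proof of Lemma NBound."""
    total = 0
    for l in product(range(1, L + 1), repeat=d):
        if L - d + 1 <= sum(l) <= L:
            p = 1
            for ln in l:
                p *= m(ln)
            total += p
    return total

def N_u(L, d, K=2, K_up=1):
    """N_u(L, d) = K_up^d * K/(K-1) * K^L * binom(L-1, d-1)."""
    return K_up**d * K / (K - 1) * K**L * comb(L - 1, d - 1)

for d in (2, 3):
    for L in range(d, d + 4):
        print(d, L, crude_cardinality_bound(L, d), N_u(L, d))
\end{verbatim}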
Before we can state and prove Theorem \ref{Theo_UB_PW} we first have to introduce the setting, cf. \cite{GHHR17}. For basic results about reproducing kernels $K$ and the corresponding Hilbert spaces $H(K)$ we refer to \cite{Aro50}. We denote the norm on $H(K)$ by $\|\cdot \|_K$ and the space of constant functions (on a given domain) by $H(1)$; here $1$ denotes the constant kernel that only takes the function value one.
\subsection{Assumptions}\label{subsec:assumptions}
Henceforth we assume that
\begin{enumerate}[label=(A\arabic*)]
\item\label{a1} $H$ is a vector space of real-valued functions on a domain $D \neq \emptyset$ with $H(1) \subsetneq H$
\end{enumerate}
and
\begin{enumerate}[label=(A\arabic*)]
\setcounter{enumi}{1}
\item\label{a2} $\|\cdot\|_1$ and $\|\cdot\|_2$ are seminorms on $H$, induced by symmetric bilinear forms $\langle\cdot,\cdot\rangle_1$ and $\langle\cdot,\cdot\rangle_2$, such that $\|1\|_1=1$ and $\|1\|_2=0$.
\end{enumerate}
Let
\begin{equation}\label{eq3}
\|f\|_H := \left(\|f\|_1^2 + \|f\|_2^2\right)^{1/2} \hspace{3ex}\text{for $f \in H$.}
\end{equation}
Furthermore, we assume that
\begin{enumerate}[label=(A\arabic*)]
\setcounter{enumi}{2}
\item\label{a3} $\|\cdot\|_H$ is a norm on $H$ that turns this space into a reproducing kernel Hilbert space, and there exists a constant $c \geq 1$ such that
\begin{equation}\label{eq2}
\|f\|_H \leq c\left(\left|\langle f,1\rangle_1\right|+\|f\|_2\right) \hspace{3ex}\text{for all $f \in H$.}
\end{equation}
\end{enumerate}
Condition \eqref{eq2} is equivalent to the fact that $\|\cdot\|_H$ and $\left|\langle\cdot,1\rangle_1\right|+\|\cdot\|_2$ are equivalent norms on $H$. Let us restate Lemma~2.1 from \cite{GHHR17}:
\begin{lemma}\label{lem10}
For each $\gamma >0$ there exists a uniquely determined reproducing kernel $k_{\gamma}$ on $D\times D$ such that $H(1+k_{\gamma}) = H$ as vector spaces and
\begin{equation*}
\|f\|^2_{1+k_{\gamma}} = \|f\|_1^2 + \frac{1}{\gamma} \|f\|^2_2.
\end{equation*}
Moreover, the norms $\|\cdot \|_H$ and $\|\cdot\|_{1+k_{\gamma}}$ are equivalent and $H(1) \cap H(k_{\gamma})=\{0\}$.
\end{lemma}
Note that for the special value $\gamma =1$ we have $\|\cdot\|_{1+k_1} = \|\cdot\|_H$. The next example illustrates the assumptions and the statement of Lemma \ref{lem10}; for more information and a slight generalization see \cite[Example~2.3]{GHHR17}.
\begin{comment}
\begin{exmp}\label{exa1}
Fix $r \in \mathbb{N}$ and consider the Sobolev space
\begin{align*}
W^{r,2}[0,1] =\{f\in L^2[0,1] \mid f^{(\nu)}\in L^2[0,1], 1\leq\nu\leq r\},
\end{align*}
where $f^{(\nu)}$ denotes the $\nu$th distributional derivative of $f$.
Moreover, consider three pairs of seminorms on this space, given by
\begin{align*}
\|f\|_{1,\text{S}}^2: &=\int^1_0 \left| f(y) \right|^2 \,{\rm d}y, \hspace{3ex} \|f\|^2_{2,\text{S}}: =\sum_{\nu=1}^r \int^1_0 |f^{(\nu)}(y)|^2 \,{\rm d}y\\
\intertext{and, for fixed $a\in [0,1]$,}
\|f\|_{1,\pitchfork}^2: &=|f(a)|^2, \hspace{3ex} \|f\|^2_{2,\pitchfork}: =\sum_{\nu=1}^{r-1} |f^{(\nu)}(a)|^2 + \int^1_0 |f^{(r)}(y)|^2 \,{\rm d} y \\
\intertext{as well as}
\|f\|_{1,\text{A}}^2: &=\left| \int^1_0 f(y) \,{\rm d}y \right|^2, \hspace{3ex} \|f\|^2_{2,\text{A}}: =\sum_{\nu=1}^{r-1} \left|\int_0^1 f^{(\nu)}(y) \, {\rm d} y \right|^2 + \int^1_0 |f^{(r)}(y)|^2 \,{\rm d}y
\end{align*}
for $f\in W^{r,2}[0,1]$. For $\ast\in\{\text{S},\pitchfork,\text{A}\}$ let
\[ \|f\|_{H,\ast} = \bigl(\|f\|_{1,\ast}^2 + \|f\|_{2,\ast}^2\bigr)^{1/2}. \]
This gives us three equivalent norms, each of which turns $H$ into a reproducing kernel Hilbert space, see, e.g., \cite[Sec.~A.2]{NW08}. The assumptions \ref{a1}, \ref{a2}, and \ref{a3} are satisfied for $H=W^{r,2}[0,1]$ and each of these pairs of seminorms, see \cite[Example~2.1]{GHHR17}. In all three cases we denote the reproducing kernels from Lemma~\ref{lem10} by $k_{\gamma,\ast}$. For $\ast \in \{\pitchfork,\text{A}\}$ the seminorm $\|\cdot\|_{1,\ast}$ is induced by a bounded linear functional, implying $k_{\gamma,\ast} = \gamma \cdot k_{1,\ast}$, see \cite[Rem.~2.2]{GHHR17}. The norm on $H(1+k_{\gamma,\pitchfork})$ corresponds to the \emph{anchored decomposition} of $f$, see, e.g., \cite{KSWW10b}, while the norm on $H(1+k_{\gamma,\text{A}})$ corresponds to the \emph{ANOVA decomposition} of $f$ with \emph{anchor $a$}, see, e.g., \cite{DG13,KSWW10b}. For $\ast = \text{S}$ and $\gamma=1$ we obtain the standard norm on the Sobolev space $W^{r,2}[0,1]$. The seminorm $\|\cdot\|_{1,\text{S}}$ is not induced by a bounded linear functional, implying that we do not have $k_{\gamma,\text{S}}=\gamma\cdot k_{1,\text{S}}$ for any $\gamma\neq 1$, see \cite[Rem.~2.3]{GHHR17}.
\end{exmp}
\end{comment}
\begin{exmp}\label{e33}
Let $D:= [0,1)$ and $r>1/2$. The \emph{periodic Sobolev space} $K_r = K_r([0,1))$ (also known as \emph{Korobov space}) is the Hilbert space of all $f\in L^2([0,1])$ with finite norm
\[ \|f\|_{r}^2 := |\hat{f}(0)|^2 + \sum_{h\in\mathbb{Z} \setminus \{0\}} | \hat{f}(h)|^2 |h|^{2r}, \]
where $\hat{f}(h) =\int_0^{1} f(t)e^{- 2\pi i h t}\,\mathrm dt$ is the $h$-th Fourier coefficient of $f$. The functions in $K_r$ are continuous and periodic. It is easily checked that the reproducing kernel of $K_r$ is given by
\begin{equation}\label{kernel_korobov}
1+k_1(x,y) = 1 + \sum_{h\in\mathbb{Z} \setminus \{0\}} |h|^{-2r} e^{2\pi i h (x-y)}, \hspace{3ex}x,y \in [0,1).
\end{equation}
Consider the pair of seminorms on $K_r$ given by
\begin{equation*}
\|f\|_{1} = | \hat{f}(0)| \hspace{3ex}\text{and}\hspace{3ex} \|f\|_{2}^2 = \sum_{h\neq 0} | \hat{f}(h)|^2 \, |h|^{2r}.
\end{equation*}
The assumptions \ref{a1}, \ref{a2}, and \ref{a3} are easily verified. For $\gamma >0$ we have $k_{\gamma} = \gamma \cdot k_{1}$.
\end{exmp}
Further examples of spaces that satisfy the assumptions \ref{a1}, \ref{a2}, and \ref{a3} are, for instance, the \emph{(non-periodic) Sobolev spaces} $W^{r,2}([0,1])$ of smoothness $r\in\mathbb{N}$ endowed with either the standard norm, the anchored norm or the ANOVA norm, see \cite[Example~2.1]{GHHR17}. We now want to study weighted tensor product Hilbert spaces of multivariate functions, which implies that we have to consider \emph{product weights} as introduced in \cite{SW98}. More precisely, we consider a sequence ${\boldsymbol{\gamma}}=\left(\gamma_j\right)_{j\in \mathbb{N}}$ of positive weights that satisfies
\begin{equation}\label{summable}
\sum_{j=1}^\infty \gamma_j < \infty.
\end{equation}
The decay of the weights is quantified by
\begin{equation*}
\decay({\boldsymbol{\gamma}}) := \sup \Bigl( \Bigl\{ p > 0 \, \Big|\, \sum^\infty_{j=1} \gamma_j^{1/p} < \infty \Bigr\} \cup \{0\} \Bigr);
\end{equation*}
due to \eqref{summable} we have $\decay({\boldsymbol{\gamma}}) \ge 1$. For each weight $\gamma_j$ let $k_{\gamma_j}$ be the kernel from Lemma \ref{lem10}. With the help of the weights we can define spaces of functions of finitely many variables. For $d\in\mathbb{N}$ we define the reproducing kernel $K^{{\boldsymbol{\gamma}}}_d$ on $D^d\times D^d$ by
\begin{align}\label{eq100}
K^{{\boldsymbol{\gamma}}}_d(\bx,{\mathbf y}) := \prod_{j= 1}^d (1+k_{\gamma_j}(x_j,y_j)), \hspace{3ex} \bx, {\mathbf y}\in D^{d}.
\end{align}
The reproducing kernel Hilbert space $H(K^{{\boldsymbol{\gamma}}}_d)$ is the (Hilbert space) tensor product of the spaces $H(1+k_{\gamma_j})$. Now we want to define a space of functions of infinitely many variables. The natural domain for the counterpart of \eqref{eq100} for infinitely many variables is given by
\begin{align}\label{eq101}
{\mathfrak X}^{{\boldsymbol{\gamma}}} := \Bigl\{{\bf x} \in D^{\mathbb{N}} \,\Big|\, \prod_{j=1}^\infty (1+ k_{\gamma_j}(x_j, x_j) )<\infty \Bigr\}.
\end{align}
Let $a,a_1,\dots,a_n\in D$ be arbitrary. Due to \cite[Lemma~2.2]{GHHR17} we have $(a_1,\dots,a_n,a,a,\dots)\in {\mathfrak X}^{{\boldsymbol{\gamma}}}$, and in particular ${\mathfrak X}^{{\boldsymbol{\gamma}}} \neq\emptyset$. We define the reproducing kernel $K^{{\boldsymbol{\gamma}}}_\infty$ on ${\mathfrak X}^{{\boldsymbol{\gamma}}}\times {\mathfrak X}^{{\boldsymbol{\gamma}}}$ by
\begin{align}\label{eq102}
K^{{\boldsymbol{\gamma}}}_\infty({\bf x},{\bf y}) := \prod_{j=1}^\infty (1+k_{\gamma_j}(x_j, y_j)), \hspace{3ex}\bx,{\mathbf y} \in {\mathfrak X}^{{\boldsymbol{\gamma}}}.
\end{align}
For a function $f\colon D^d \to \mathbb{R}$ we define $\psi_d f\colon {\mathfrak X}^{\boldsymbol{\gamma}} \to \mathbb{R}$ by
\begin{align}\label{g8}
\left( \psi_d f \right) (\bx) =f(x_1,\dots,x_d) \hspace{3ex}\text{for $\bx\in{\mathfrak X}^{\boldsymbol{\gamma}}$.}
\end{align}
Due to \cite[Lemma~2.3]{GHHR17}, $\psi_d$ is a linear isometry from $H(K_d^{{\boldsymbol{\gamma}}})$ into $H(K^{{\boldsymbol{\gamma}}}_\infty)$, and
\begin{equation}\label{dense_subspace}
\bigcup_{d\in \mathbb{N}}\psi_d (H(K_d^{{\boldsymbol{\gamma}}})) \hspace{2ex} \text{is a dense subspace of $H(K^{{\boldsymbol{\gamma}}}_\infty)$.}
\end{equation}
\subsection{The Integration Problem}
To obtain a well-defined integration problem we assume that $\rho$ is a probability measure on $D$ such that
\begin{align*}
H \subseteq L^1(D,\rho).
\end{align*}
Let $\rho^d$ and $\rho^\mathbb{N}$ denote the corresponding product measures on $D^d$ and $D^\mathbb{N}$, respectively. Due to \cite[Lemma~3.1]{GHHR17} we have for all $d\in\mathbb{N}$ that
\begin{align*}
H(K_d^{\boldsymbol{\gamma}})\subseteq L^1(D^d,\rho^{d}),
\end{align*}
and the respective embeddings $J_d$ from $H(K_d^{\boldsymbol{\gamma}})$ into $L^1(D^d,\rho^{d})$ are continuous with
\begin{equation}\label{uniformly_bounded}
\sup_{d\in\mathbb{N}}\|J_d\|_{\rm op} <\infty.
\end{equation}
Define the linear functional $I_d\colon H(K_d^{\boldsymbol{\gamma}})\to \mathbb{R}$ by
\begin{align*}
I_d(f) =\int_{D^d}f\,{\rm d}\rho^{d}, \hspace{3ex}\text{$f\in H(K_d^{\boldsymbol{\gamma}})$.}
\end{align*}
Note that $\|I_d\|_{\rm op} \geq 1$, since $I_d(1) = 1$ and $\|1\|_{K^{\boldsymbol{\gamma}}_d}=1$. Furthermore, $\|I_d\|_{\rm op} \leq \|J_d\|_{\rm op}$, and therefore \eqref{uniformly_bounded} implies
\begin{align}\label{eq104}
1 \leq \sup_{d\in\mathbb{N}}\|I_d\|_{\rm op} <\infty.
\end{align}
This yields the existence of a uniquely determined bounded linear functional
\begin{equation}\label{integration_functional}
I_\infty \colon H(K^{\boldsymbol{\gamma}}_\infty)\to\mathbb{R} \hspace{2ex}\text{such that}\hspace{2ex} I_\infty (\psi_d f) =I_d(f) \hspace{2ex}\text{for all $f\in H(K^{\boldsymbol{\gamma}}_d)$,\,$d\in\mathbb{N}$,}
\end{equation}
cf. \cite[Lemma~3.2]{GHHR17}. Note that every $f\in H(K^{\boldsymbol{\gamma}}_\infty)$ is measurable with respect to the trace of the product $\sigma$-algebra on $D^\mathbb{N}$. (This follows from \eqref{dense_subspace}, \eqref{uniformly_bounded}, and the fact that the pointwise limit of measurable functions is again measurable.) If ${\mathfrak X}^{\boldsymbol{\gamma}}$ is measurable, $\rho^\mathbb{N}({\mathfrak X}^{\boldsymbol{\gamma}})=1$, and $H(K^{\boldsymbol{\gamma}}_\infty)\subseteq L^1({\mathfrak X}^{\boldsymbol{\gamma}}, \rho^\mathbb{N})$, then the bounded linear functional \eqref{integration_functional} is given by
\begin{align*}
I_\infty (f) =\int_{{\mathfrak X}^{\boldsymbol{\gamma}}} f\,\mathrm d\rho^\mathbb{N} \hspace{3ex}\text{for all $f\in H(K^{\boldsymbol{\gamma}}_\infty)$.}
\end{align*}
For sufficient conditions under which these assumptions are fulfilled we refer to \cite{GMR12}.
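To make the weighted tensor product construction \eqref{eq100} concrete, the following sketch evaluates $K^{{\boldsymbol{\gamma}}}_d(\bx,{\mathbf y}) = \prod_{j=1}^d (1+\gamma_j k_1(x_j,y_j))$ for the Korobov kernel of Example \ref{e33}, where $k_{\gamma_j} = \gamma_j k_1$. The univariate kernel is approximated by truncating its Fourier series; the truncation parameter and the product weights $\gamma_j = j^{-3}$ (which are summable, cf. \eqref{summable}) are illustrative choices of ours and are not prescribed by the analysis.
\begin{verbatim}
import numpy as np

def korobov_k1(x, y, r=2, H=2000):
    """Truncated univariate Korobov kernel
    k_1(x, y) ~ sum_{0 < |h| <= H} |h|^{-2r} exp(2 pi i h (x - y))
              = 2 * sum_{h=1}^{H} cos(2 pi h (x - y)) / h^{2r}."""
    h = np.arange(1, H + 1)
    return 2.0 * np.sum(np.cos(2.0 * np.pi * h * (x - y)) / h**(2 * r))

def weighted_product_kernel(x, y, gamma, r=2):
    """K_d^gamma(x, y) = prod_j (1 + gamma_j * k_1(x_j, y_j)),
    cf. (eq100) with k_{gamma_j} = gamma_j * k_1."""
    val = 1.0
    for xj, yj, gj in zip(x, y, gamma):
        val *= 1.0 + gj * korobov_k1(xj, yj, r=r)
    return val

d = 4
gamma = [j**-3.0 for j in range(1, d + 1)]   # illustrative product weights
x = [0.1, 0.4, 0.7, 0.9]
y = [0.2, 0.4, 0.8, 0.5]
print(weighted_product_kernel(x, y, gamma))
\end{verbatim}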
We consider the integration problem on $H(K^{\boldsymbol{\gamma}}_\infty)$, which consists in approximating the functional $I_\infty$ by randomized algorithms that use function evaluations (i.e., \emph{standard information}) as admissible information.
\subsection{The Unrestricted Subspace Sampling Model}
We use the cost model introduced in \cite{KSWW10a}, which we refer to as the \emph{unrestricted subspace sampling model}. It only accounts for the cost of function evaluations. To define the cost of a function evaluation, we fix an anchor $a \in D$ and a non-decreasing function
$$\$: \mathbb{N}_0 \to [1,\infty].$$
Put
\begin{equation*}
\mathcal{U}:= \{u\subset \mathbb{N}\,|\, |u| <\infty\}.
\end{equation*}
For each $u\in\mathcal{U}$ put
\begin{equation*}
\mathcal{T}_{u} := \{ {\bf t} \in D^\mathbb{N} \,|\, t_j = a \hspace{1ex}\text{for all} \hspace{1ex} j\in\mathbb{N}\setminus u\}.
\end{equation*}
To simplify the representation, we confine ourselves to non-adaptive randomized linear algorithms of the form
\begin{equation}\label{Qn}
Q(f) = \sum^n_{i=1} w_i f({\boldsymbol{t}}^{(i)}),
\end{equation}
where the number $n \in \mathbb{N}$ of knots is fixed and the knots ${\boldsymbol{t}}^{(i)}$ as well as the coefficients $w_i\in \mathbb{R}$ are random variables with values in some $\mathcal{T}_{v_i}$, $v_i\in \mathcal{U}$, and in $\mathbb{R}$, respectively. (We discuss a larger class of algorithms in Remark~\ref{Baustelle}.) The cost of $Q$ is given by
\begin{align}\label{cost_function}
\cost(Q) =\sum_{i=1}^n \inf\{ \$(|u|) \mid \text{$u \in \mathcal{U}$ such that ${\boldsymbol{t}}^{(i)}(\omega) \in \mathcal{T}_u$ for all $\omega \in \Omega$} \}.
\end{align}
In the definition of the cost function an inclusion property has to hold for all $\omega \in \Omega$. Often this worst case point of view is replaced by an average case one (cf., e.g., \cite{MGR09} or \cite{DG13, Gne13, PW11}). We stress that such a replacement would not affect the cost of the algorithms that we employ to establish our upper bounds for the $N$-th minimal errors; for lower bounds cf. Remark~\ref{Baustelle}(iii). Let $\rm{x} \in \{\rm{d}, \rm{r}, \rm{w}\}$. For $N\ge 0$ let us define the \emph{$N$-th minimal error on} $H(K^{\boldsymbol{\gamma}}_\infty)$ by
\begin{equation*}
e^{\rm x}(N,K^{\boldsymbol{\gamma}}_\infty) := \inf\{ e^{\rm x}(I_\infty, Q) \,|\, Q\hspace{1ex}\text{as in \eqref{Qn}} \hspace{1ex}\text{and}\hspace{1ex}\cost(Q) \le N\},
\end{equation*}
where in the case $\rm{x} = \rm{d}$ the algorithms have to be deterministic, while in the case $\rm{x} \in \{\rm{r}, \rm{w}\}$ they are allowed to be randomized. The \emph{(polynomial) convergence order of the $N$-th minimal errors of infinite-dimensional integration} is given by
\begin{align}\label{lambdacost}
\lambda^{{\rm x}}(K^{\boldsymbol{\gamma}}_\infty) :=\sup \Bigl\{ \alpha \ge 0 \mid \sup_{N\in \mathbb{N}} e^{\rm x}(N,K^{\boldsymbol{\gamma}}_\infty) \cdot N^\alpha < \infty \Bigr\}.
\end{align}
In analogy to our definitions for infinite-dimensional integration, we consider for univariate integration on $H(1+k_1)$ also linear randomized algorithms $Q$ of the form \eqref{Qn}, except that this time the knots ${\boldsymbol{t}}^{(i)}$ are, of course, random variables with values in $D$.
The cost of such an algorithm is simply the number $n$ of function evaluations, and the \emph{$N$-th minimal errors on} $H(1+k_1)$ are given by
\begin{equation*}
e^{\rm x}(N,1+k_1) := \inf\{ e^{\rm x}(I_1, Q) \,|\, Q\hspace{1ex}\text{as in \eqref{Qn}} \hspace{1ex}\text{and}\hspace{1ex}n \le N\}.
\end{equation*}
The \emph{(polynomial) convergence order of the $N$-th minimal errors of univariate integration} is given by
\begin{align*}
\lambda^{{\rm x}}(1+k_1) :=\sup \Bigl\{ \alpha \ge 0 \mid \sup_{N\in \mathbb{N}} e^{\rm x}(N, 1+k_1) \cdot N^\alpha < \infty \Bigr\}.
\end{align*}
\begin{remark}\label{Rem_Rel_Min_Err}
Let $K\in \{K^{{\boldsymbol{\gamma}}}_\infty, 1 + k_1\}$ and, accordingly, $I\in \{I_\infty, I_1\}$. Obviously,
\begin{equation*}
e^{\rm r}(N, K) \le e^{\rm w}(N, K) \le e^{\rm d}(N, K), \hspace{3ex}\text{and thus}\hspace{3ex} \lambda^{\rm r}(K) \ge \lambda^{\rm w}(K) \ge \lambda^{\rm d}(K).
\end{equation*}
Furthermore, it is easy to see that $e^{\rm w}(N,K) \ge e^{\rm d}(N,K)$ holds: If $Q$ is an arbitrary randomized algorithm of the form \eqref{Qn} with $\cost(Q) \le N$, then for every $\omega \in \Omega$ the cost of the deterministic algorithm $Q(\omega)$ is at most $N$, implying
$$ \sup_{\|f\|_K \le 1} |(I-Q(\omega))f| \ge e^{\rm d}(N,K), $$
which in turn leads to $e^{\rm w}(I,Q) \ge e^{\rm d}(N,K)$. Hence we obtain
\begin{equation}\label{d=w}
e^{\rm w}(N,K) = e^{\rm d}(N,K) \hspace{3ex}\text{and}\hspace{3ex} \lambda^{\rm w}(K) = \lambda^{\rm d}(K).
\end{equation}
\end{remark}
\subsection{A Sharp Result on Infinite-Dimensional Integration}
The next theorem determines the exact polynomial convergence rate of the $N$-th minimal errors of infinite-dimensional integration on weighted reproducing kernel Hilbert spaces.
\begin{theorem}\label{Theo_UB_PW}
Let $\rm{x} \in \{\rm{r}, \rm{w}\}$. If the cost function $\$$ satisfies $\$(\nu) = \Omega(\nu)$ and $\$(\nu) = O(e^{\sigma \nu})$ for some $\sigma\in (0,\infty)$, then we have
\begin{equation}\label{sharp_bound}
\lambda^{{\rm x}}(K^{{\boldsymbol{\gamma}}}_\infty) = \min \left\{ \lambda^{{\rm x}}(1+k_1),\, \frac{{\rm decay}({\boldsymbol{\gamma}}) - 1}{2} \right\}.
\end{equation}
\end{theorem}
Notice that the theorem implies that in the randomized setting infinite-dimensional integration on weighted reproducing kernel Hilbert spaces is (essentially) not harder than the corresponding univariate integration problem (as far as the polynomial convergence rate is concerned) as long as the weights decay fast enough, i.e., as long as
$${\rm decay}({\boldsymbol{\gamma}}) \ge 2 \lambda^{{\rm x}}(1+k_1) +1.$$
\begin{proof}
Let us first consider the case ${\rm x} = {\rm r}$. In the special case where the reproducing kernel $k_1$ is \emph{anchored in $a$} (i.e., $k_1(a,a)=0$) and satisfies $\gamma k_1 = k_\gamma$ for all $\gamma >0$ (cf. Lemma~\ref{lem10}), the statement of the theorem follows from \cite{Gne13} and from \cite{PW11} in combination with Theorem~\ref{CardInfUpBound}, as we will explain below in detail.
For a general reproducing kernel $k_1$ we need to find a suitably associated reproducing kernel $k_a$ anchored in $a$ and satisfying $\gamma k_a = (k_a)_\gamma$ for all $\gamma >0$ in order to employ the embedding machinery from \cite{GHHR17} and obtain the desired result \eqref{sharp_bound}. To this end we consider the bounded linear functional $\xi: H\to \mathbb{R}$, $f \mapsto f(a)$, where $a\in D$ is our fixed anchor. We define a new pair of seminorms on $H$ by
\begin{equation*}
\|f\|_{1,a} := |\xi(f)| \hspace{3ex}\text{and}\hspace{3ex} \|f\|_{2,a} := \|f-\xi(f)\|_{H}.
\end{equation*}
Notice that $\|\cdot \|_{1,a}$ is induced by the symmetric bilinear form $\langle f,g \rangle_{1,a} := \xi(f) \cdot \xi(g)$. This new pair of seminorms obviously satisfies assumption \ref{a2}, and the norms $\|\cdot \|_{H} = ( \|\cdot\|^2_1 + \|\cdot\|^2_2)^{1/2}$ and $\|\cdot \|_{H,a} := ( \|\cdot\|^2_{1,a} + \|\cdot\|^2_{2,a})^{1/2}$ are equivalent norms on $H$. Hence $\|\cdot\|_{H,a}$ turns $H$ into a reproducing kernel Hilbert space, and satisfies \eqref{eq2} with $c=1$ since
\begin{equation*}
\|f\|_{H,a} \le \| f \|_{1,a} + \|f\|_{2,a} = | \langle f, 1 \rangle_{1,a}| + \|f\|_{2,a} \hspace{3ex}\text{for all $f\in H$.}
\end{equation*}
Thus the new pair of seminorms also satisfies \ref{a3}. Furthermore, if $k_a$ is the reproducing kernel on $D\times D$ such that
\begin{equation*}
H(k_a) = \{f\in H\,|\, f(a)=0\}
\end{equation*}
and
\begin{equation*}
\|f\|_{k_a} = \|f\|_{1+ k_a} = \|f\|_{H,a} \hspace{3ex}\text{for all $f\in H(k_a)$,}
\end{equation*}
then $k_a$ is anchored in $a$ and moreover we have $H(1+k_a) = H$ as vector spaces, $H(1) \cap H(k_a) = \{0\}$, and
\begin{equation*}
\|f\|^2_{1+\gamma k_a} = \|f\|^2_{1,a} + \frac{1}{\gamma} \|f\|^2_{2,a}
\end{equation*}
for all $\gamma >0$, $f\in H$, implying $(k_a)_\gamma = \gamma k_a$, see \cite[Rem.~2.2]{GHHR17}. Since $ \|\cdot\|_H = \|\cdot\|_{1+k_1}$ and $\|\cdot \|_{H,a} = \|\cdot \|_{1+k_a}$ are equivalent norms on $H(1+k_1) = H = H(1+k_a)$, we obtain $\lambda^{{\rm r}}(1+k_1) = \lambda^{{\rm r}}(1+k_{a})$. Due to \cite[Thm.~2.3]{GHHR17} we have
\begin{align*}
{\mathfrak X}^{{\boldsymbol{\gamma}},a} := \Bigl\{{\bf x} \in D^{\mathbb{N}} \,\Big|\, \prod_{j=1}^\infty (1+ \gamma_j k_{a}(x_j, x_j) )<\infty \Bigr\} = {\mathfrak X}^{{\boldsymbol{\gamma}}}.
\end{align*}
In analogy to \eqref{eq102} we define $K^{{\boldsymbol{\gamma}},a}_\infty: {\mathfrak X}^{{\boldsymbol{\gamma}}} \times {\mathfrak X}^{{\boldsymbol{\gamma}}} \to \mathbb{R}$ by
\begin{equation*}
K^{{\boldsymbol{\gamma}},a}_\infty({\bf x},{\bf y}) := \prod_{j=1}^\infty (1+\gamma_j k_{a}(x_j, y_j)) \hspace{3ex}\text{for $\bx,{\mathbf y} \in {\mathfrak X}^{{\boldsymbol{\gamma}}}$.}
\end{equation*}
Now we consider the integration problem in $H(K^{{\boldsymbol{\gamma}},a}_\infty)$ and may use \cite[Subsect.~3.2.1]{Gne13} and \cite[Cor.~1]{PW11} in combination with Theorem \ref{CardInfUpBound}. Indeed, due to Theorem \ref{CardInfUpBound} we may choose linear randomized algorithms with convergence rates $\alpha$ arbitrarily close to $\lambda^{{\rm r}}(1+k_{1}) = \lambda^{{\rm r}}(1+k_{a})$ and obtain, via the randomized Smolyak method, algorithms that satisfy \eqref{pw_req_est} for ${\rm x} = {\rm r}$ (and consequently also \cite[Eqn.~(10)]{PW11}).
Now \cite[Cor.~1]{PW11} ensures that
\begin{equation*}
\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}, a}_\infty) \ge \min \left\{ \lambda^{{\rm r}}(1+k_1),\, \frac{{\rm decay}({\boldsymbol{\gamma}}) - 1}{2} \right\}.
\end{equation*}
Furthermore, we have due to \cite[Eqn.~(21)]{Gne13}
\begin{equation*}
\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}},a}_\infty) \le \min \left\{ \lambda^{{\rm r}}(1+k_1),\, \frac{{\rm decay}({\boldsymbol{\gamma}}) - 1}{2} \right\}.
\end{equation*}
Due to \cite[Cor.~5.1]{GHHR17} these estimates also hold for $H(K^{{\boldsymbol{\gamma}}}_\infty)$. Let us now consider the case ${\rm x} = {\rm w}$. Due to \eqref{d=w}, identity \eqref{sharp_bound} follows directly from the deterministic result \cite[Theorem~5.1]{GHHR17}.
\end{proof}
We now provide two corollaries and add some remarks. Theorem~\ref{Theo_UB_PW}, which deals with randomized algorithms, and the corresponding deterministic theorem \cite[Theorem~5.1]{GHHR17} allow us to immediately compare the power of deterministic and randomized algorithms.
\begin{corollary}\label{Power}
Let the assumptions of Theorem~\ref{Theo_UB_PW} hold. For infinite-dimensional integration on $H(K^{{\boldsymbol{\gamma}}}_\infty)$ randomized algorithms are superior to deterministic algorithms, i.e., $\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty) > \lambda^{{\rm d}}(K^{{\boldsymbol{\gamma}}}_\infty)$, if and only if
\begin{equation*}
\lambda^{{\rm r}}(1+k_1) > \lambda^{{\rm d}}(1+k_1) \hspace{3ex}\text{and}\hspace{3ex} \decay({\boldsymbol{\gamma}}) > 1 + 2 \lambda^{{\rm d}}(1+k_1)
\end{equation*}
are satisfied.
\end{corollary}
The next corollary on infinite-dimensional integration on weighted Korobov spaces in the randomized setting parallels \cite[Theorem~5.5]{GHHR17}, which discusses the deterministic setting.
\begin{corollary}\label{Korobov}
Let $r>1/2$, and let the univariate reproducing kernel $k_1$ be as in \eqref{kernel_korobov}. Then the weighted Korobov space $H(K^{\boldsymbol{\gamma}}_\infty)$ is an infinite tensor product of the periodic Korobov space $H(1+k_1) = K_r([0,1))$ of smoothness $r$, see Example \ref{e33}. If the cost function $\$$ satisfies $\$(\nu) = \Omega(\nu)$ and $\$(\nu) = O(e^{\sigma \nu})$ for some $\sigma\in (0,\infty)$, then we have
\begin{equation*}
\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty) = \min \left\{ r + \frac{1}{2},\, \frac{{\rm decay}({\boldsymbol{\gamma}}) - 1}{2} \right\} \hspace{3ex}\text{and}\hspace{3ex} \lambda^{{\rm w}}(K^{{\boldsymbol{\gamma}}}_\infty) = \min \left\{ r,\, \frac{{\rm decay}({\boldsymbol{\gamma}}) - 1}{2} \right\}.
\end{equation*}
\end{corollary}
\begin{proof}
Since $\lambda^{{\rm r}}(1+k_1) = r+1/2$ and $\lambda^{{\rm w}}(1+k_1) = r$ (see the Appendix and Remark~\ref{Rem_Rel_Min_Err}), Theorem \ref{Theo_UB_PW} immediately yields the result for $\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty)$ and $\lambda^{{\rm w}}(K^{{\boldsymbol{\gamma}}}_\infty)$. Notice that the result for $\lambda^{{\rm w}}(K^{{\boldsymbol{\gamma}}}_\infty)$ can also be derived from Remark~\ref{Rem_Rel_Min_Err} and \cite[Theorem~5.5]{GHHR17}.
\end{proof}
\begin{remark}\label{Baustelle}
Let us come back to Theorem~\ref{Theo_UB_PW}.
\nopagebreak
\begin{itemize}
\item[(i)] Algorithms that achieve convergence rates arbitrarily close to $\lambda^{\rm x}(K^{{\boldsymbol{\gamma}}}_\infty)$ are, e.g., \emph{multivariate decomposition methods (MDMs)} that were introduced in \cite{KSWW10a} (in the deterministic setting) and developed further in \cite{PW11} (in the deterministic and in the randomized setting); originally, these algorithms were called \emph{changing dimension algorithms}, cf., e.g., \cite{DG12, DG13, Gne13, KSWW10a, PW11}. MDMs exploit that the anchored function decomposition of an integrand can be efficiently computed; a method for multivariate integration based on the same idea is the \emph{dimension-wise integration method} proposed in \cite{GH10}. To achieve (nearly) optimal convergence rates, the MDMs may employ as building blocks Smolyak algorithms for multivariate integration that rely on (nearly) optimal algorithms for univariate integration on $H(1+k_1)$, cf. \cite[Section~3.3]{PW11} and the proof of Theorem~\ref{Theo_UB_PW}.
\item[(ii)] In the special case where ${\rm x} = {\rm r}$ and where $k_1$ is an \emph{ANOVA-kernel} (i.e., $k_1$ satisfies $\int_D k_1(y,x) \,{\rm d}x = 0$ for every $y\in D$) a version of Theorem~\ref{Theo_UB_PW} was already proved in \cite[Theorem~4.3]{DG13}. It was the first result that rigorously showed that MDMs can achieve the optimal order of convergence also on spaces with norms that are \emph{not} induced by an underlying anchored function space decomposition. It was not derived with the help of function space embeddings, but by an elaborate direct analysis. Apart from addressing only the ANOVA setting, a further drawback of \cite[Theorem~4.3]{DG13} is that its assumptions are slightly stronger than the ones made in Theorem~\ref{Theo_UB_PW}: it is not sufficient to know the convergence rate of the $N$-th minimal errors of the univariate integration problem, but additionally one has to verify the existence of unbiased randomized algorithms for multivariate integration that satisfy certain variance bounds, see \cite[Assumption~4.1]{DG13}. Nevertheless, in many important cases it is well known that such variance bounds hold. Furthermore, one should mention that the analysis in \cite{DG13} is not restricted to product weights as in this section, but is done for general weights. Note that the kernel $k_1$ of the Korobov space $K_r([0,1))$ from Example~\ref{e33} and Corollary~\ref{Korobov} is actually an ANOVA kernel. Hence the identity for $\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty)$ in Corollary~\ref{Korobov} may also be derived by employing \cite[Theorem~4.3]{DG13} after verifying the existence of unbiased algorithms for multivariate integration that satisfy \cite[Assumption~4.1]{DG13}.
\item[(iii)] The upper bound for $\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty)$ in \eqref{sharp_bound} relies on the corresponding bound \cite[Eqn.~(21)]{Gne13} for the case where the univariate reproducing kernel $k_1$ is anchored in $a$. Although the definition of the cost function in \cite{Gne13} takes the average-case rather than the worst-case point of view and therefore differs from \eqref{cost_function}, both definitions lead to the same cost for the admissible class of algorithms $\mathcal{A}^{\rm res}$ considered in the unrestricted subspace sampling model in \cite{Gne13}.
The class $\mathcal{A}^{\rm res}$ contains not only algorithms of the form \eqref{Qn}, but also adaptive and non-linear algorithms. In the proof of Theorem~\ref{Theo_UB_PW} we employ the function space embeddings from \cite{GHHR17}, which allow us to transfer results for linear algorithms from the case of anchored kernels to the general case. Hence we can conclude that the upper bound for $\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty)$ in \eqref{sharp_bound} also holds if we admit adaptive linear algorithms of the form \eqref{Qn} for infinite-dimensional as well as for univariate integration, but we do not know whether this is still the case if we admit non-linear algorithms.
\end{itemize}
\end{remark}
We finish this section with some remarks on extensions of our results on infinite-dimensional integration to other settings.
\begin{remark}\label{Generalizations}
To obtain computational tractability of problems depending on a high or infinite number of variables, it is usually essential to be able to arrange the variables in such a way that their impact decays sufficiently fast. One approach to model the decreasing impact of successive variables is to use weighted function spaces, like the ones we defined and studied in this section, to moderate the influence of groups of variables. This approach goes back to the seminal paper \cite{SW98}. Another approach is the concept of \emph{increasing smoothness} with respect to properly ordered variables, see, e.g., \cite{DG16, GHHRW18, HHPS18, IKPW16, KPW14a, PW10, Sie14}. The precise definition of Hilbert spaces of functions depending on infinitely many variables of increasing smoothness can be found in \cite[Section~3]{GHHRW18}. Now \cite[Theorem~3.19]{GHHRW18} shows how to relate these spaces to suitable weighted Hilbert spaces via mutual embeddings, making it therefore easy to transfer our results in the randomized setting, Theorem \ref{Theo_UB_PW} and Corollary \ref{Korobov}, from weighted spaces to spaces with increasing smoothness, cf. \cite[Theorem~4.5 and Corollary~4.7]{GHHRW18} for the corresponding transference results in the deterministic setting.
Instead of applying our result Theorem \ref{CardInfUpBound} to the infinite-dimensional integration problem, we may also use it to tackle the \emph{infinite-dimensional $L_2$-approximation problem}. Indeed, a sharp result for the latter problem was obtained in \cite[Corollary~9]{Was12} in the deterministic setting for weighted anchored reproducing kernel Hilbert spaces with the help of multivariate decomposition methods based on Smolyak algorithms (cf. \cite[Theorem~7]{WW11b}). The analysis relies on explicit cost bounds for deterministic Smolyak algorithms from \cite{WW95}. In \cite[Theorem~4.5]{GHHRW18} the result is extended to weighted (not necessarily anchored) reproducing kernel Hilbert spaces (relying on the embedding tools from \cite{GHHR17}) and to spaces of increasing smoothness. Now one may use Theorem \ref{CardInfUpBound} to establish a result corresponding to \cite[Corollary~9]{Was12} for weighted anchored spaces in the randomized setting and may generalize it to non-anchored weighted spaces and to spaces of increasing smoothness via the embedding results established in \cite{GHHR17, GHHRW18}. Working out all the details of these generalizations is beyond the scope of the present paper.
\end{remark}
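To illustrate the rates in Corollary~\ref{Korobov} in a concrete case (purely as an illustration, and assuming the usual convention ${\rm decay}({\boldsymbol{\gamma}}) = \sup\{p>0 \,:\, \sum_{j\ge 1}\gamma_j^{1/p} < \infty\}$ for the decay of product weights), take $\gamma_j = j^{-\beta}$ with $\beta>1$. Then ${\rm decay}({\boldsymbol{\gamma}}) = \beta$, and Corollary~\ref{Korobov} reads
\begin{equation*}
\lambda^{{\rm r}}(K^{{\boldsymbol{\gamma}}}_\infty) = \min \Bigl\{ r + \tfrac{1}{2},\, \tfrac{\beta - 1}{2} \Bigr\}
\hspace{3ex}\text{and}\hspace{3ex}
\lambda^{{\rm w}}(K^{{\boldsymbol{\gamma}}}_\infty) = \min \Bigl\{ r,\, \tfrac{\beta - 1}{2} \Bigr\},
\end{equation*}
so that randomization improves the exponent precisely when $\beta > 1 + 2r$, and the full univariate gain of $1/2$ is realized once $\beta \ge 2 + 2r$.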
\section{Appendix}\label{app}
\subsection{Randomized Integration Error in Korobov Spaces}
For $r > \frac{1}{2}$ we denote by $K_{r} := K_{r}([0,1))$ the space of Korobov functions on the one-dimensional torus with smoothness parameter $r$. The space is equipped with the norm
$$\lVert f \rVert^2_r := |\hat{f}(0)|^2 + \sum_{h \in \mathbb{Z} \setminus \{0\}} |\hat{f}(h)|^2 |h|^{2r},$$
see Example \ref{e33}. It is a folklore result that the polynomial convergence order $\lambda^{\rm{r}}(1+k_1)$ of randomized quadratures on $K_r$ is equal to $r + \frac{1}{2}$. Since we have not found in the literature a complete proof handling all cases $r > \frac{1}{2}$, we decided to provide a proof sketch in this appendix. Similar reasoning for (non-periodic) Sobolev spaces with integer parameter $r$ may be found, e.g., in \cite[Chapter~2.2]{Nov88}.
For the lower error bound we need the following lemma.
\begin{lemma}\label{LowerBoundLemma}
Let $r > \frac{1}{2}$. There exists a sequence of functions $(f_n)_{n \in \mathbb{N}}$ in $K_r$ and a constant $C > 0$ such that for every $n \in \mathbb{N}$
\begin{enumerate}
\item $\supp(f_n) \subset (0, n^{-1})$,
\item $\int_{[0,1)} f_n \, dx = n^{-r-1},$
\item $\lVert f_n \rVert^2_r \leq Cn^{-1}.$ \label{NormCondition}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $f$ be a positive, infinitely differentiable function with $\supp(f) \subset (0,1)$ and integral equal to $1$. Define
$$f_n(x) := n^{-r} f(nx), \quad n \in \mathbb{N}.$$
The sequence $(f_n)_{n \in \mathbb{N}}$ is the sequence we are looking for. Since all other properties are obvious, it is enough to check that condition (\ref{NormCondition}) holds for every $f_n$, $n \in \mathbb{N}$. Fix $n$. Since $f$ (as a bump function) is in the Schwartz space $\mathcal{S}(\mathbb{R})$, the same holds for its Fourier transform $\hat{f}$, and as a result there exists $\tilde{C}>0$ such that for every $x \in \mathbb{R}$
$$|\hat{f}(x)|^2 \leq \left( \frac{\tilde{C}}{(1+|x|)^{r+1}} \right)^2.$$
Since $|\hat{f_n}(0)|^2 \leq n^{-3}$ we may neglect it. Simple calculations reveal
\begin{align*}
& \sum_{h \in \mathbb{Z} \setminus \{0\}} |\hat{f_n}(h)|^2 |h|^{2r} = \frac{1}{n} \sum_{h \in \mathbb{Z} \setminus \{0\}} \frac{1}{n} |\hat{f}(\frac{h}{n})|^2 \left(\frac{|h|}{n}\right)^{2r} \\
& \leq \frac{1}{n} \sum_{h \in \mathbb{Z} \setminus \{0\}} \frac{1}{n} \left( \frac{\tilde{C}}{(1+\frac{|h|}{n})^{r+1}} \right)^2 \left(\frac{|h|}{n}\right)^{2r} =: \frac{1}{n} \rho_n.
\end{align*}
Due to monotonicity of $\mathbb{N} \ni h \mapsto \frac{1}{n} \left( \frac{\tilde{C}}{(1 + \frac{h}{n})^{r+1}} \right)^2\left( \frac{h}{n} \right)^{2r}$ we obtain
\begin{align*}
& \rho_n \leq 2 \frac{1}{n} \left( \frac{\tilde{C}}{(1 +\frac{1}{n})^{r+1}} \right)^2\left( \frac{1}{n} \right)^{2r} + 2 \int_{0}^{\infty} \frac{1}{n} \left( \frac{\tilde{C}}{(1 + \frac{t}{n})^{r+1}} \right)^2\left( \frac{t}{n} \right)^{2r} \, dt \\
& \leq 2\tilde{C}^2\left( 1 + \int_{0}^{\infty} \frac{s^{2r}}{(1 + s)^{2r + 2}} \, ds \right),
\end{align*}
which finishes the proof.
\end{proof}
For the upper bound we need a lemma on the $L^2$-approximation of functions from Korobov spaces by trigonometric polynomials; before stating it, we sketch (purely as an illustration) the quadrature that this approximation leads to.
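The sketch below is an illustration only and is not used in any proof; the code and all names in it are ours. It computes the discrete Fourier coefficients of $f$ at $2N$ equispaced nodes, takes $\widehat{\alpha}_0$ as the exact integral of the interpolating trigonometric polynomial $q_N$ (both objects are defined precisely in the lemma below), and adds a plain Monte Carlo estimate of $\int_0^1 (f-q_N)\,dx$ based on $N$ uniform points.
\begin{verbatim}
import numpy as np

def randomized_korobov_quadrature(f, N, rng=None):
    # Illustrative sketch only: control-variate quadrature for a
    # 1-periodic, vectorized function f on [0,1).
    rng = np.random.default_rng() if rng is None else rng
    nodes = np.arange(2 * N) / (2 * N)
    # Discrete Fourier coefficients alpha_k = (1/2N) sum_j f(j/2N) e^{-pi i k j/N}.
    alpha = np.fft.fft(f(nodes)) / (2 * N)
    alpha_0 = alpha[0].real          # = exact integral of q_N over [0,1)

    def q_N(x):
        # q_N(x) = sum_{k=1-N}^{N} alpha_k e^{2 pi i k x}; negative k alias
        # to the FFT indices k + 2N.
        ks = np.concatenate([np.arange(0, N + 1), np.arange(1 - N, 0)])
        return np.real(np.exp(2j * np.pi * np.outer(x, ks)) @ alpha[ks % (2 * N)])

    X = rng.random(N)                # N independent uniform points in [0,1)
    return alpha_0 + np.mean(f(X) - q_N(X))   # unbiased for the integral of f

# Example with a hypothetical test function: the integral of exp(cos(2 pi x))
# over [0,1) equals the modified Bessel value I_0(1) = 1.2660...
# approx = randomized_korobov_quadrature(lambda x: np.exp(np.cos(2*np.pi*x)), N=64)
\end{verbatim}
The control variate $q_N$ removes the smooth part of $f$, so the Monte Carlo correction only has to handle a residual whose $L^2$-norm is of order $N^{-r}$; this is precisely the mechanism quantified by the following lemma and exploited in the theorem at the end of this subsection.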
{\boldsymbol{\eta}}gin{lemma}\label{UpperBoundLemma} Let $r > \frac{1}{2}, f \in K_r, N \in \mathbb{N}$, and let $q_N$ be the trigonometric polynomial defined by {\boldsymbol{\eta}}gin{equation*} q_N(x) := \sum_{k=1-N}^N \widehat{\alpha}_k e^{2\pi i kx}, \hspace{3ex} x\in [0,1), {\varepsilon}nd{equation*} where {\boldsymbol{\eta}}gin{equation*} \widehat{\alpha}_k := \frac{1}{2N} \sum^{2N-1}_{j=0} f \left( \tfrac{j}{2N} \mathrm{i}ght) e^{-\pi i kj/N} \hspace{3ex}\text{for $k=1-N, 2-N, \ldots, 0 ,1,\ldots,N$,} {\varepsilon}nd{equation*} are the $2N$ discrete Fourier coefficients of $f$. Then we have {\boldsymbol{\eta}}gin{equation*} f \left( \tfrac{j}{2N} \mathrm{i}ght) = q_N \left( \tfrac{j}{2N} \mathrm{i}ght) \hspace{3ex}\text{for $j=0,1,\ldots,2N-1$,} {\varepsilon}nd{equation*} and {\boldsymbol{\eta}}gin{equation*} \lVert f - q_N \rVert^2_{L^2([0,1))} \leq (1+C_r) N^{-2r} \lVert f \rVert^2_r, \hspace{3ex}\text{where $C_r := 2\sum_{{\varepsilon}ll=1}^\infty (2{\varepsilon}ll -1)^{-2r}$.} {\varepsilon}nd{equation*} The discrete Fourier coefficients $\widehat{\alpha}_k$, $k=1-N, 2-N, \ldots, 0 ,1,\ldots,N$, can be computed via the fast Fourier transform at cost $O(N \ln(N))$. {\varepsilon}nd{lemma} {\boldsymbol{\eta}}gin{proof} Proofs of the statements of the lemma can be found in many standard texts on numerical analysis. We follow the course of \cite[Sections~52 and 53]{Han02}. Since $r > \frac{1}{2}$ we may write $f$ as a uniformly convergent Fourier series $$f(x) = \sum_{h \in \mathbb{Z}} \hat{f}(h)e^{2 \pi i h x}.$$ It is well-known (and not difficult to calculate) that if $q_N$ is a trigonometric polynomial of degree $N$ interpolating $f$ in the nodes $\frac{j}{2N}, j=0,1,\ldots, 2N-1$ then it is given by $$q_N(x) = \sum_{j = 1-N}^{N} \left( \sum_{{\varepsilon}ll \in \mathbb{Z}} \hat{f}(j + 2{\varepsilon}ll N) \mathrm{i}ght) e^{2 \pi i j x},$$ see for example Lemma $52.5$ in \cite{Han02}. It holds {\boldsymbol{\eta}}gin{equation*} {\boldsymbol{\eta}}gin{split} \|f - q_N \|^2_{L^2([0,1))} &= \sum_{j = 1-N}^{N} \left| \widehat{\alpha}_j - \hat{f}(j) \mathrm{i}ght|^2 + \sum_{j = - \infty}^{-N} \left| \hat{f}(j) \mathrm{i}ght|^2 + \sum_{j = N+1}^{\infty} \left| \hat{f}(j) \mathrm{i}ght|^2 \\ &\le \sum_{j = 1-N}^{N} \Bigg| \sum_{|{\varepsilon}ll |\geq 1} \hat{f}(j + 2{\varepsilon}ll N) \Bigg|^2 + \sum_{|j| \ge N} \left| \hat{f}(j) \mathrm{i}ght|^2. {\varepsilon}nd{split} {\varepsilon}nd{equation*} The last sum may be bounded in an obvious way by $N^{-2r}\lVert f \rVert^2_r$ and for the double sum we may use the Cauchy Schwarz inequality to obtain {\boldsymbol{\eta}}gin{equation*} {\boldsymbol{\eta}}gin{split} \sum_{j = 1-N}^N \Bigg| \sum_{|{\varepsilon}ll | \geq 1} \hat{f}(j + 2{\varepsilon}ll N) \Bigg|^2 &= \sum_{j = 1-N}^N \Bigg| \sum_{|{\varepsilon}ll | \geq 1} \frac{1}{(j+2{\varepsilon}ll N)^r} \hat{f}(j+2{\varepsilon}ll N) (j+2{\varepsilon}ll N)^r \Bigg|^2 \\ &\leq \sum_{j = 1-N}^N \left( \sum_{|{\varepsilon}ll | \geq 1} (j + 2{\varepsilon}ll N)^{-2r} \mathrm{i}ght) \left( \sum_{|{\varepsilon}ll | \geq 1} \left| \hat{f}(j + 2{\varepsilon}ll N) \mathrm{i}ght|^2 (j+2{\varepsilon}ll N)^{2r} \mathrm{i}ght) \\ &\leq \left( \sum_{|{\varepsilon}ll | \geq 1} (2{\varepsilon}ll N - N)^{-2r} \mathrm{i}ght) \left( \sum_{k \in \mathbb{Z}} \left| \hat{f}(k) \mathrm{i}ght|^2 k^{2r} \mathrm{i}ght)\\ &\leq N^{-2r} \left( 2\sum_{{\varepsilon}ll=1}^\infty (2{\varepsilon}ll -1)^{-2r} \mathrm{i}ght) \|f\|^2_r. 
{\varepsilon}nd{split} {\varepsilon}nd{equation*} The cost analysis of the fast Fourier transform is well known and can, e.g., be found in \cite[Section~53]{Han02}. {\varepsilon}nd{proof} {\boldsymbol{\eta}}gin{theorem} Let $r > \frac{1}{2}.$ It holds $$\lambda^{\rm{r}}(1+k_1) = r + \frac{1}{2}.$$ {\varepsilon}nd{theorem} {\boldsymbol{\eta}}gin{proof} The upper bound on $\lambda^{\rm{r}}(1+k_1)$ is settled immediately by Lemma \ref{LowerBoundLemma} in conjunction with Corollary $7.35$ from \cite{MGNR12}. Indeed, for $N\in \mathbb{N}$ choose $n:= 6N$ and $$g_k(x) = f_{n} \big( x - \tfrac{k-1}{n} \big), \hspace{3ex} k=1,\ldots, n.$$ Then the assumptions of Corollary $7.35$ from \cite{MGNR12} are satisfied for ${\varepsilon}psilon = n^{-r-1}$ implying $e^{\rm r}(N, 1+k_1) \ge \tfrac{1}{6^{r+1}\sqrt{2}}N^{-r-1/2}$, and thus establishing the upper bound. To get the lower bound on $\lambda^{\rm{r}}(1+k_1)$ consider the following algorithm. Let $N \in \mathbb{N}.$ We first interpolate $f \in K_r$ with the trigonometric polynomial $q_N$ from Lemma~\ref{UpperBoundLemma}. Now the integral over $q_N$ is simply the discrete Fourier coefficient $\widehat{\alpha}_0$. To approximate the integral of $g:=(f - q_N)$ we apply a simple Monte Carlo quadrature. To this end let $(X_j)_{j = 0}^{N-1}$ be independent random variables such that $X_j$ is distributed uniformly on $[0,1)$. We put $$Qg = \frac{1}{N}\sum_{j = 0}^{N-1} g(X_{j}).$$ Now, since $Qg$ is unbiased and $(X_j)_{j = 0}^{N-1}$ are independent, applying Lemma \ref{UpperBoundLemma} we get {\boldsymbol{\eta}}gin{equation} \mathbb{E}xpec \left[ (Qg - \int_{[0,1)} g \, dx)^2 \mathrm{i}ght]^{\frac{1}{2}} \leq \frac{1}{N^{\frac{1}{2}}} \lVert g \rVert_{L^2([0,1))} \leq \frac{\sqrt{1+C_r}}{N^{\frac{1}{2}} N^{r}} \lVert f \rVert_r. {\varepsilon}nd{equation} Since cost of the algorithm is of order $O(N\ln(N))$, the claim follows. {\varepsilon}nd{proof} \subsection*{Acknowledgment} The authors thank Stefan Heinrich for pointing out the reference \cite{HM11}. Part of the work was done while the authors were visiting the Mathematical Research and Conference Center Bedlewo in Autumn 2016 and the Erwin Schr\"odinger Institute for Mathematics and Physics (ESI) in Vienna in Autumn 2017. Both authors acknowledge support by the Polish Academy of Sciences; Marcin Wnuk additionally acknowledges support by the German Academic Exchange Service (DAAD). {\boldsymbol{\eta}}gin{thebibliography}{9} \bibitem{Aro50} {\sc N.~Aronszajn}, {{\varepsilon}m Theory of reproducing kernels}, Trans. Amer. Math. Soc., 68 (1950), pp.~337--404. \bibitem{BG12} {\sc J.~Baldeaux and M.~Gnewuch}, {{\varepsilon}m Optimal randomized multilevel algorithms for infinite-dimensional integration on function spaces with {ANOVA}-type decomposition}, SIAM J.~Numer. Anal., 52 (2014), pp.~1128--1155. \bibitem{BD93} {\sc G.~Baszenski and F.~J. Delvos}, {{\varepsilon}m Multivariate {B}oolean midpoint rules}, in Numerical Integration IV, H.~Brass and H.~H{\"a}mmerlin, eds., Basel, 1993, Birkh{\"a}user, pp.~1--11. \bibitem{BG04} {\sc H.~J. Bungartz and M.~Griebel}, {{\varepsilon}m Sparse grids}, Acta Numerica, 13 (2004), pp.~147--269. \bibitem{Del82} {\sc F.-J. Delvos}, {{\varepsilon}m $d$-variate {B}oolean interpolation}, J. Approx. Theory, 34 (1982), pp.~99--114. \bibitem{Del90} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m {B}oolean methods for double integration}, Math. Comp., 55 (1990), pp.~683--692. \bibitem{DS89} {\sc F.-J. 
Delvos and W.~Schempp}, {{\varepsilon}m Boolean Methods in Interpolation and Approximation}, vol.~230 of Pitman Research Notes in Mathematics, Longman, Essex, 1989. \bibitem{DG12} {\sc J.~Dick and M.~Gnewuch}, {{\varepsilon}m Infinite-dimensional integration in weighted {H}ilbert spaces: anchored decompositions, optimal deterministic algorithms, and higher order convergence}, Found. Comput. Math., 14 (2014), pp.~1027--1077. \bibitem{DG13} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Optimal randomized changing dimension algorithms for infinite-dimensional integration on function spaces with {ANOVA}-type decomposition}, J. Approx. Theory, 184 (2014), pp.~111--145. \bibitem{DLP07} {\sc J.~Dick, G.~Leobacher, and F.~Pillichshammer}, {{\varepsilon}m Randomized {S}molyak algorithms based on digital sequences for multivariate integration}, IMA J. Numer. Analysis, 27 (2007), pp.~655--674. \bibitem{DG16} {\sc D.~D{\~u}ng and M.~Griebel}, {{\varepsilon}m Hyperbolic cross approximation in infinite dimensions}, J. Complexity, 33 (2016), pp.~55--88. \bibitem{DTU18} {\sc D.~D{\~u}ng, V.~Temlyakov, and T.~Ullrich}, {{\varepsilon}m Hyperbolic Cross Approximation}, Birkh{\"a}user, Basel, 2018. \bibitem{FH96} {\sc K.~Frank and S.~Heinrich}, {{\varepsilon}m Computing discrepancies of {S}molyak quadrature rules}, J. Complexity, 12 (1996), pp.~287--314. \bibitem{Gar07} {\sc J.~Garcke}, {{\varepsilon}m A dimension adaptive sparse grid combination technique for machine learning}, ANZIAM Journal, (2007), pp.~C725--C740. \bibitem{GH09} {\sc J.~Garcke and M.~Hegland}, {{\varepsilon}m Fitting multidimensional data using gradient penalties and the sparse grid combination technique}, Computing, (2009), pp.~1--25. \bibitem{Gen74} {\sc A.~C. Genz}, {{\varepsilon}m Some extrapolation methods for the numerical calculation of multidimensional integrals}, in Software for Numerical Mathematics, D.~J. Evans, ed., Academic Press, New York, 1974, pp.~159--172. \bibitem{GG98} {\sc T.~Gerstner and M.~Griebel}, {{\varepsilon}m Numerical integration using sparse grids}, Numer. Algorithms, (1998), pp.~209--232. \bibitem{GG03} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Dimension-adaptive tensor-product quadrature}, Computing, (2003), pp.~65--87. \bibitem{Gil08a} {\sc M.~B. Giles}, {{\varepsilon}m Multilevel {M}onte {C}arlo path simulation}, Oper. Res., 56 (2008), pp.~607--617. \bibitem{GHM09} {\sc M.~B. Giles, D.~J. Higham, and X.~Mao}, {{\varepsilon}m Analysing multi-level {M}onte {C}arlo for options with non-globally {L}ipschitz payoff}, Finance Stoch., 13 (2009), pp.~403--413. \bibitem{GiW09} {\sc M.~B. Giles and B.~J. Waterhouse}, {{\varepsilon}m Multilevel quasi-{M}onte {C}arlo path simulation}, in Advanced financial modelling, vol.~8 of Radon Ser. Comput. Appl. Math., Walter de Gruyter, Berlin, 2009, pp.~165--181. \bibitem{Gne12} {\sc M.~Gnewuch}, {{\varepsilon}m Infinite-dimensional integration on weighted {H}ilbert spaces}, Math. Comp., 81 (2012), pp.~2175--2205. \bibitem{Gne13} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Lower error bounds for randomized multilevel and changing dimension algorithms}, in Monte Carlo and Quasi-Monte Carlo Methods 2013, J.~Dick, F.~Y. Kuo, G.~W. Peters, and I.~H. Sloan, eds., Springer, Heidelberg, 2013, pp.~399--415. 
\bibitem{GHHR17} {\sc M.~Gnewuch, M.~Hefter, A.~Hinrichs, and K.~Ritter}, {{\varepsilon}m Embeddings of weighted {H}ilbert spaces and applications to multivariate and infinite-dimensional integration}, Journal of Approximation Theory, 222 (2017), pp.~8--39. \bibitem{GHHRW18} {\sc M.~Gnewuch, M.~Hefter, A.~Hinrichs, K.~Ritter, and G.~W. Wasilkowski}, {{\varepsilon}m Embeddings for infinite-dimensional integration and {$L_2$}-approximation with increasing smoothness}. \newblock Preprint 2018 (Math. ArXiv: arxiv.org/abs/1809.07103). \bibitem{GLSS07} {\sc M.~Gnewuch, R.~Lindloh, R.~Schneider, and A.~Srivastav}, {{\varepsilon}m Cubature formulas for function spaces with moderate smoothness}, J.~Complexity, 23 (2007), pp.~828--850. \bibitem{GMR12} {\sc M.~Gnewuch, S.~Mayer, and K.~Ritter}, {{\varepsilon}m On weighted {H}ilbert spaces and integration of functions of infinitely many variables}, J.~Complexity, 30 (2014), pp.~29--47. \bibitem{Gor71} {\sc W.~J. Gordon}, {{\varepsilon}m Blending function methods of bivariate and multivariate interpolation and approximation}, SIAM J. Numer. Anal., 8 (1971), pp.~158--177. \bibitem{Gri06} {\sc M.~Griebel}, {{\varepsilon}m Sparse grids and related approximation schemes for higher order problems}, in Foundations of Computational Mathematics, Santander 2005, L.~M. Pardo, A.~Pinkus, E.~S{\"u}li, and M.~J. Todd, eds., Cambridge, 2006, Cambridge University Press, pp.~106--161. \bibitem{GH10} {\sc M.~Griebel and M.~Holtz}, {{\varepsilon}m Dimension-wise integration of high-dimensional functions with applications to finance}, J.~Complexity, 26 (2010), pp.~455--489. \bibitem{HHPS18} {\sc A.-L. Haji-Ali, H.~Harbrecht, M.~D. Peters, and M.~Siebenmorgen}, {{\varepsilon}m Novel results for the anisotropic sparse grid quadrature}, J. Complexity, 47 (2018), pp.~62--85. \bibitem{Han02} {\sc M.~Hanke-Burgeois}, {{\varepsilon}m Grundlagen der Numerischer Mathematik und des Wissenschaftlichen Rechnens}, Teubner, 2002. \bibitem{Hei98} {\sc S.~Heinrich}, {{\varepsilon}m {M}onte {C}arlo complexity of global solution of integral equations}, J. Complexity, 14 (1998), pp.~151--175. \bibitem{HM11} {\sc S.~Heinrich and B.~Milla}, {{\varepsilon}m The randomized complexity of indefinite integration}, J. Complexity, 27 (2011), pp.~352--382. \bibitem{HS99} {\sc S.~Heinrich and E.~Sindambiwe}, {{\varepsilon}m {M}onte {C}arlo complexity of parametric integration}, J. Complexity, 15 (1999), pp.~317--341. \bibitem{HMNR10} {\sc F.~J. Hickernell, T.~M{\"u}ller-Gronbach, B.~Niu, and K.~Ritter}, {{\varepsilon}m Multi-level {M}onte {C}arlo algorithms for infinite-dimensional integration on $\mathbb{R}^{\mathbb{N}}$}, J. Complexity, 26 (2010), pp.~229--254. \bibitem{IKPW16} {\sc C.~Irrgeher, P.~Kritzer, F.~Pillichshammer, and H.~Wo\'zniakowski}, {{\varepsilon}m Tractability of multivariate approximimation defined over {H}ilbert spaces with exponential weights}, J. Approx. Theory, 207 (2016), pp.~301--338. \bibitem{KPW14a} {\sc P.~Kritzer, F.~Pillichshammer, and H.~Wo\'zniakowski}, {{\varepsilon}m Tractability of multivariate analytic problems}, Radon Ser.\ Comput.\ Appl.\ Math., 15 (2014), pp.~147--170. \bibitem{KSS15} {\sc F.~Y. Kuo, C.~Schwab, and I.~H. Sloan}, {{\varepsilon}m Multi-level quasi-{M}onte {C}arlo finite element methods for a class of elliptic partial differential equations with random coefficients}, Found. Comput. Math., 15 (2015.), pp.~411--449. \bibitem{KSWW10a} {\sc F.~Y. Kuo, I.~H. Sloan, G.~W. 
Wasilkowski, and H.~Wo\'zniakowski}, {{\varepsilon}m Liberating the dimension}, J. Complexity, 26 (2010), pp.~422--454. \bibitem{MGNR12} {\sc T.~M{\"u}ller-Gronbach, E.~Novak, and K.~Ritter}, {{\varepsilon}m Monte {C}arlo-{A}lgorithmen}, Springer Lehrbuch, Springer, 2012. \bibitem{MGR09} {\sc T.~M{\"u}ller-Gronbach and K.~Ritter}, {{\varepsilon}m Variable subspace sampling and multi-level algorithms}, in Monte Carlo and Quasi-Monte Carlo Methods 2008, P.~L'Ecuyer and A.~B. Owen, eds., Springer, 2009. \bibitem{NHMR11} {\sc B.~Niu, F.~J. Hickernell, T.~M{\"u}ller-Gronbach, and K.~Ritter}, {{\varepsilon}m Deterministic multi-level algorithms for infinite-dimensional integration on $\mathbb{R}^{\mathbb{N}}$}, J. Complexity, 27 (2011), pp.~331--351. \bibitem{Nov88} {\sc E.~Novak}, {{\varepsilon}m Deterministic and Stochastic Error Bounds in Numerical Analysis}, vol.~1349 of Lect. Notes in Math., Springer-Verlag, Berlin, 1988. \bibitem{NR96a} {\sc E.~Novak and K.~Ritter}, {{\varepsilon}m Global optimization using hyperbolic cross points}, in State of the Art in Global Optimization, C.~A. Floudas and P.~M. Pardalos, eds., Dordrecht, 1996, Kluwer, pp.~19--33. \bibitem{NR96} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m High dimensional integration of smooth functions over cubes}, Numer. Math., 75 (1996), pp.~79--97. \bibitem{NW10} {\sc E.~Novak and H.~Wo\'zniakowski}, {{\varepsilon}m Tractability of {M}ultivariate {P}roblems. Vol. 2: {S}tandard {I}nformation for {F}unctionals}, EMS Tracts in Mathematics, European Mathematical Society (EMS), Z\"urich, 2010. \bibitem{Owe95} {\sc A.~B. Owen}, {{\varepsilon}m Randomly permuted $(t,m,s)$-nets and $(t,s)$-sequences}, in Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, H.~Niederreiter and P.~J.-S. Shiue, eds., New York, 1995, Springer, pp.~299--317. \bibitem{PW10} {\sc A.~Papageorgiou and H.~Wo\'zniakowski}, {{\varepsilon}m Tractability through increasing smoothness}, J. Complexity, 26 (2010), pp.~409--421. \bibitem{Per86} {\sc S.~V. Pereverzev}, {{\varepsilon}m On optimization of approximate methods of solving integral equations}, Sov. Math. Dokl., 33 (1986), pp.~347--351. \bibitem{Pet03} {\sc K.~Petras}, {{\varepsilon}m Smolyak cubature of given polynomial degree with few nodes for increasing dimension}, Numer. Math., 93 (2003), pp.~729--753. \bibitem{PW11} {\sc L.~Plaskota and G.~W. Wasilkowski}, {{\varepsilon}m Tractability of infinite-dimensional integration in the worst case and randomized settings}, J. Complexity, 27 (2011), pp.~505--518. \bibitem{SU07} {\sc W.~Sickel and T.~Ullrich}, {{\varepsilon}m Smolyak's algorithm, sampling on sparse grids and function spaces of dominated mixed smoothness}, East J. Approx., 13 (2007), pp.~387--425. \bibitem{Sie14} {\sc P.~Siedlecki}, {{\varepsilon}m Uniform weak tractability of multivariate problems with increasing smoothness}, J. Complexity, 30 (2014), pp.~716--734. \bibitem{SW98} {\sc I.~H. Sloan and H.~Wo\'{z}niakowski}, {{\varepsilon}m When are quasi-{M}onte {C}arlo algorithms efficient for high dimensional integrals?}, J. Complexity, 14 (1998), pp.~1--33. \bibitem{Smo63} {\sc S.~A. Smolyak}, {{\varepsilon}m Quadrature and interpolation formulas for tensor products of certain classes of functions}, Dokl. Akad. Nauk. SSSR 4, 4 (1963), pp.~240--243. \bibitem{Tem87} {\sc V.~N. Temlyakov}, {{\varepsilon}m Approximate recovery of periodic functions of several variables}, Math. USSR Sbornik, 56 (1987), pp.~249--261. 
\bibitem{Tem92} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m On a way of obtaining lower estimates for the errors of quadrature formulas}, Math. USSR Sbornik, 71 (1992), pp.~247--257. \bibitem{Tem93} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m On approximate recovery of functions with bounded mixed derivative}, J. Complexity, 9 (1993), pp.~41--59. \bibitem{Tem18} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Multivariate Approximation}, Cambridge University Press, Cambrigde, 2018. \bibitem{TWW88} {\sc J.~F. Traub, G.~W. Wasilkowski, and H.~Wo\'zniakowski}, {{\varepsilon}m Information-Based Complexity}, Academic Press, New York, 1988. \bibitem{Ull08} {\sc T.~Ullrich}, {{\varepsilon}m Smolyak's algorithm, sampling on sparse grids and {S}obolev spaces of dominated mixed smoothness}, East J. Approx., 14 (2008), pp.~1--38. \bibitem{Was12} {\sc G.~W. Wasilkowski}, {{\varepsilon}m Liberating the dimension for ${L}_2$-approximation}, J. Complexity, 28 (2012), pp.~304--319. \bibitem{WW95} {\sc G.~W. Wasilkowski and H.~Wo\'zniakowski}, {{\varepsilon}m Explicit cost bounds for algorithms for multivariate tensor product problems}, J. Complexity, 11 (1995), pp.~1--56. \bibitem{WW99} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Weighted tensor product algorithms for linear multivariate problems}, J. Complexity, 15 (1999), pp.~402--447. \bibitem{WW11b} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Liberating the dimension for function approximation: Standard information}, J. Complexity, 27 (2011), pp.~417--440. \bibitem{Wei80} {\sc J.~Weidmann}, {{\varepsilon}m Linear Operators in Hilbert Spaces}, Springer, Berlin, Heidelberg, New York, 1980. \bibitem{Yse05} {\sc H.~Yserentant}, {{\varepsilon}m Sparse grids spaces for the numerical solution of the electronic {S}chr\"odinger equation}, Numer. Math., 101 (2005), pp.~381--389. \bibitem{Yse06} \leavevmode\vrule height 2pt depth -1.6pt width 23pt, {{\varepsilon}m Sparse grids, adaptivity, and symmetry}, Computing, 78 (2006), pp.~195--209. \bibitem{Zen91} {\sc C.~Zenger}, {{\varepsilon}m Sparse grids}, in Parallel Algorithms for Partial Differential Equations, W.~Hackbusch, ed., Braunschweig, 1991, Vieweg, pp.~241--251. {\varepsilon}nd{thebibliography} {\varepsilon}nd{document}
\begin{document}
\title{Zeros of the Hurwitz zeta function in the interval $(0,1)$\thanks{Research supported by Swiss National Science Foundation Grant no.~107887. Support has also been given over time by Scuola Normale Superiore of Pisa, University of Z\"urich and IBM Z\"urich Research Lab.}}
\begin{abstract}
We first prove an inequality for the Hurwitz zeta function $\zeta(\sigma,w)$ in the case $\sigma>0$. As a corollary we derive that it has no zeros and is actually negative for $\sigma\in(0,1)$ and $1-\sigma\leq w$ and, as a particular instance, the known result that the classical zeta function has no zeros in $(0,1)$.
\end{abstract}
\section{Statement and proof of the results}
The Hurwitz zeta function is classically defined for $\Re(s)>1$ as
$$ \zeta(s,w)\doteq\sum_{n=0}^{\infty} (n+w)^{-s}, $$
\noindent with $w$ being a positive real number, and it can be continued analytically to the whole $s$-plane, except for a pole at $1$ (see e.g.\ \cite[Proposition 9.6.6]{co07}). Sometimes the definition is extended by letting $w$ be a complex number, while in other situations $w$ is only restricted to be a real number in $(0,1]$. Notice in fact the following relation:
$$ \zeta(s,w)=\zeta(s,w+1)+w^{-s}, $$
which follows by splitting off the $n=0$ term and re-indexing the remaining sum over the terms $(n+1+w)$. The following theorem is meant to add a new result to what is already known about its zeros (see e.g.\ \cite{sp76}). As usual in the literature, $\sigma$ will denote the real part of $s$.
\begin{thm}
Suppose that $\sigma>0$ and $\sigma\neq 1$. Then $\zeta(\sigma,w)<\frac{1-\sigma-w}{(1-\sigma)w^{\sigma}}$.
\end{thm}
\begin{proof}
To make it clear how the steps in the proof lead towards the claim and where they leave space for possible improvements, we use here the following approach: we start by considering $s$ a generic complex number and only later add conditions on the parameters. As a first step we derive a representation for the Hurwitz zeta function through the Euler-Maclaurin summation formula, in analogy with the one derived for the classical Riemann zeta function (see e.g.\ \cite[chapter 6]{ed01}). Namely, the following Euler-Maclaurin summation formula
$$ \sum_{n=N}^M f(n)=\int_N^M f(x)dx + \frac{1}{2}[f(M)+f(N)]+ \frac{B_2}{2}f'(x)\big|_N^M + \frac{B_4}{4!}f^{(3)}(x)\big|_N^M +\ldots, $$
\noindent where $B_2, B_4,\ldots$ are Bernoulli numbers, is applied to $f(n)=(n+w)^{-s}$ letting $M$ tend to $\infty$, to obtain, for $\Re(s)>1$,
$$ \zeta(s,w)=\sum_{n=0}^{N-1}(n+w)^{-s}-\frac{(N+w)^{1-s}}{1-s}+O((N+w)^{-\sigma}). $$
This formula is in fact true for all $s\in\mathbb{C}\backslash\{1\}$, as explained for the analogous case of the Riemann zeta function (\cite[section 6.4]{ed01}) by considering an expression for the remainder in integral form and its corresponding half-plane of convergence (see also \cite[Proposition 9.6.7]{co07}). The formula above implies that, if we restrict to $\Re(s)>0$, $s\neq 1$,
$$ \zeta(s,w)=\lim_{N\to\infty}\left(\sum_{n=0}^{N}(n+w)^{-s}-\frac{(N+w)^{1-s}}{1-s}\right). $$
We notice that
$$\sum_{n=0}^{N}(n+w)^{-\sigma}=w^{-\sigma}+\int_0^N (x+w)^{-\sigma}dx+\lambda_N,$$
\noindent where
$$\lambda_N=\sum_{n=1}^N\{(n+w)^{-\sigma}-\int_{n-1}^n(x+w)^{-\sigma}dx\}$$
\noindent is such that $0>\lambda_1$ and $\lambda_N>\lambda_{N+1}$ for every $N$.
After computing $\int_0^N (x+w)^{-\sigma}dx= \frac{(N+w)^{1-\sigma}}{1-\sigma}-\frac{w^{1-\sigma}}{1-\sigma}$, we get $$\sum_{n=0}^{N}(n+w)^{-\sigma}-\frac{(N+w)^{1-\sigma}}{1-\sigma}=\frac{1-\sigma-w}{(1-\sigma)w^{\sigma}}+\lambda_N,$$ \noindent so that $$ \lim_{N\to\infty}\left(\sum_{n=0}^{N}(n+w)^{-\sigma}-\frac{(N+w)^{1-\sigma}}{1-\sigma}\right)=\zeta(\sigma,w)=\frac{1-\sigma-w}{(1-\sigma)w^{\sigma}}+\lim_{N\to\infty} \lambda_N, $$ \noindent and therefore $$ \zeta(\sigma,w)<\frac{1-\sigma-w}{(1-\sigma)w^{\sigma}}. $$ \end{proof} \begin{cor} $\zeta(\sigma,w)$ is negative and in particular nonzero when $\sigma\in(0,1)$ and $1-\sigma\leq w$. \end{cor} A particular case is the following well known result (see e.g. \cite[chapter 13]{ap76}): \begin{cor}\label{cor} The Riemann zeta function $\zeta(s)$ has no zeros in $(0,1)$. \end{cor} \begin{proof} In fact $\zeta(s)=\zeta(s,1)$. \end{proof} \begin{rmk} Notice that when $\sigma>1$ the upper bound is always positive, as it should be, given the definition of $\zeta(\sigma,w)$ for $\Re(s)>1$. \end{rmk} \begin{rmk} It is well known that the Dirichlet $L$-function, defined for $\Re(s)>1$ as $L(\chi,s)\doteq\sum_{n=1}^{\infty}\frac{\chi(n)}{n^s}$, where $\chi$ is a Dirichlet character (mod $q$), can be represented through a sum of Hurwitz zeta functions (see e.g \cite{co07} or \cite{ap76}) in the following way: $$ L(\chi,s)=q^{-s}\sum_{a=1}^q \chi(a)\zeta(s,a/q). $$ The Extended Riemann Hypothesis conjectures that the Dirichlet $L$-function $L(\chi,s)$, for a primitive character $\chi$, has no zeros with real part different from $1/2$ in the critical strip; and its strong version says that $L(\chi,1/2)$ is always nonzero too: see also \cite[section 10.5.7]{co07}). We leave then open for investigation to see whether our main result could help to find new zero-free regions for this class of functions, for example to show that the Dirichlet $L$-functions are also nonzero on the real axis between $0$ and $1$, which would thus establish a weaker version of the Extended Riemann Hypothesis, though stronger in including the point $1/2$. A very simple instance is actually at hand: \begin{cor} Let $\chi_2$ be the unique character modulo $2$. Then $L(\chi_2,s)$ has no zeros in $(0,1)$. \end{cor} \begin{proof} From the above formula we get that $L(\chi_2,s)=2^{-s}\zeta(s,1/2)$ and we know from our theorem that $\zeta(s,1/2)$ is nonzero in $[1/2,1)$, so that $L(\chi_2,s)$ is also nonzero in $[1/2,1)$. Now, we know by the functional equation for Dirichlet $L$-functions (see e.g. \cite[Theorem 10.2.14]{co07}) that if $s$ is not a zero, then $1-s$ is not a zero either. Therefore $L(\chi_2,s)$ is nonzero throughout $(0,1)$. \end{proof} An alternative proof of this result can be derived using Corollary \ref{cor} and the fact that $\zeta(s,1/2)=\zeta(s)\cdot (2^s-1)$ (see e.g. \cite[Proposition 9.6.2]{co07}). In fact this case belongs to a more general scenario: when $\chi$ is a principal character $\chi_0$ modulo $q$, then: $$ L(\chi_0,s)=\zeta(s)\prod_{p|q}\left(1-\frac{1}{p^s}\right) $$ (see e.g. \cite{co07} or \cite{sc10}). \end{rmk} \end{document} \begin{rmk} The same proof can be adapted for the classical zeta function, where the result is well known (see e.g. \cite[chapter 13]{ap76}). In fact we also have $$ \zeta(s)=\sum_{n=1}^{N-1}n^{-s}+\frac{N^{1-s}}{s-1}+\frac{1}{2}N^{-s}+O(N^{-\sigma-1}). $$ What essentially changes in this case is only that we save one step, since the integraton gives directly $$ \frac{N^{1-\sigma}}{1-\sigma}-\frac{\sigma}{1-\sigma}. 
$$ \end{rmk}
It is well known that the Riemann zeta function $\zeta(s)$ is zero at $s=-2,-4,-6,\ldots$, which are called the trivial zeros. The Riemann Hypothesis conjectures that all non-trivial zeros have real part equal to $1/2$. This conjecture inspired many mathematicians, so that a vast literature is nowadays available, and it has also been computationally verified for the first $10^{13}$ non-trivial zeros, ordered by increasing positive imaginary part (see e.g.\ \cite{go04}). Several authors also came up with sufficient and/or necessary conditions: for many of these, and at the same time for a more general overview of the problem, we refer the reader for example to \cite{bo08}, \cite{bo06}, \cite{co03} and the references therein. This work is mainly devoted to presenting new sufficient conditions and equivalent criteria. An overview of the paper is the following. In this section we first recall some properties of the zeta function and then give a short introduction to the main generalizations of $\zeta$ and the corresponding generalized conjectures. In some cases we also add some remarks about the given statements, in particular about Hadamard product expansions and about zeros in $(0,1)$. In this last case we actually come up with a result with its own independent relevance: the nonexistence of roots in $(0,1)$ for the Hurwitz zeta function. In section $2$ we will state and prove sufficient conditions for the Riemann Hypothesis, while in section $3$ we deal with equivalent criteria. Analogues of these statements for generalized versions of the Riemann Hypothesis are in some cases considered in the remarks. Throughout the paper we will make use of basic properties of the Riemann zeta function: we summarize them below so that a non-specialist can also follow the argument. We will refer to them later, when needed, by numbers in round brackets. Some facts about the $\Gamma$ function are also added.
\begin{enum1}
\item It is known (see e.g.\ \cite[chapter 13]{ap76}) that there are no zeros of $\zeta$ on the real axis between $0$ and $1$ (see also the remark below), nor on the half-planes $x\leq 0$ and $x\geq 1$. So what is still to be proved is that the only zeros in the critical strip, i.e.\ in the strip $0<\Re(s)<1$, and outside of the real axis, are on the line $x=\frac{1}{2}$. The critical strip $0<\Re(s)<1$ will then be the only region of interest for the sequel, for example as far as analytical properties or convergence of sequences of functions are concerned.
\item The function $\Lambda(s)=\pi^{-\frac{s}{2}}\Gamma(\frac{s}{2})\zeta(s)$ is analytic in the whole complex plane apart from poles at $0$ and $1$. Furthermore $\Lambda(s)=\Lambda(1-s)$ (see e.g.\ \cite{la06}).
\item In the critical strip zeros of $\zeta$ are the same as zeros of $\Lambda$ ($\Gamma$ has no zeros). It follows from $(2)$ that if $s$ is a zero of $\zeta$, then $1-s$ is also a zero.
\item Since $\zeta$ is analytic in the critical strip, its zeros are isolated.
\item A representation for $\zeta$ in the strip is the following:
$$ \zeta(s)=(1-2^{1-s})^{-1}\cdot \phi(s), $$
where $\phi(s)=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}$ (see e.g.\ \cite[section 10.2]{na00}). Again zeros of $\zeta$ in the strip are the same as those of $\phi$.
\item $\overline{\zeta(s)}=\zeta(\bar{s})$.
\footnote{It follows easily, for example, by the representation in $(5)$ ; or one can see it from the Schwarz reflection principle in complex analysis (for example \cite{ca95}), since $\zeta$ is real on the real axis.} \item From $(6)$ and $(2)$ it follows that zeros are symmetric both with respect to the critical line $x=\frac{1}{2}$ and with respect to the real axis. \item By Euler-Maclaurin summation formula (see e.g \cite[section 6.4]{ed01}), if $s=\sigma+it$ belongs to $\mathbb{C}\backslash\{1\}$ the following hold: $$ \zeta(s)=\sum_{n=1}^{N-1}n^{-s}+\frac{N^{1-s}}{s-1}+\frac{1}{2}N^{-s}+O(N^{-\sigma-1}); $$ $$ \zeta(s)=\sum_{n=1}^{N-1}n^{-s}+\frac{N^{1-s}}{s-1}+\frac{1}{2}N^{-s}+\frac{B_2}{2}sN^{-s-1}+O(N^{-\sigma-3}), $$ where $B_2$ is the second Bernoulli number. Similar equalities hold for higher approximations. Moreover, by the upper bound on the rest as expressed in \cite{ed01}, the constant implicit in the $O$-notation can be chosen to be the same for all $s$ in any compact subset of $\mathbb{C}\backslash\{1\}$. Actually, if we consider a region $\sigma\geq \sigma_0>0$, we could even say that $$ \zeta(s)=\sum_{n=1}^{N}n^{-s}+\frac{N^{1-s}}{s-1}+O(N^{-\sigma}) $$ holds uniformly, provided that $N>C|t|/2\pi$, where $C$ is a given constant greater than $1$ (see \cite[Theorem 4.11]{ti86}). \item The zeta function has infinitely many roots on the critical line (Hardy's theorem: see e.g. \cite[chapter 11]{ed01}). \item The function $\xi(s)\doteq\frac{s(s-1)}{2}\Lambda(s)$ is an entire function of finite order and admits the following infinite Hadamard-product expansion which ranges through all the nontrivial roots $\rho_n$ of $\zeta$: $$ \xi(s)=e^{A+Bs}\prod_{n=1}^{\infty}\left(1-\frac{s}{\rho_n}\right)e^{s/\rho_n}, $$ the product being absolutely and uniformly convergent on compact subsets (see e.g. \cite{da00}, \cite{ka92}, \cite{ba97}). \item $\overline{\Gamma(s)}=\Gamma(\bar{s})$ \footnote{It follows for example from the definition of $\Gamma$ as a limit or again by the Schwarz reflection principle.} \item If $0<\sigma<1/2$, then $\left|\frac{\Gamma(\frac{1-s}{2})}{\Gamma(\frac{s}{2})}\right|\leq|(1+s)/2|^{1/2-\sigma}$ and $\left|\frac{\Gamma(\frac{2-s}{2})}{\Gamma(\frac{s+1}{2})}\right|\leq|(1+s)/2|^{1/2-\sigma}$. \footnote{ We report here Lemma $1$ and $2$ in \cite{ra59}, which are a consequence of the Phragm\'en-Lindel\"of theorem in complex analysis. They imply ($12$), but their general statement could also be important in view of possible further generalizations to the case of general $L$-functions of degree bigger than $1$ (see e.g. \cite{ka00},\cite{ge04}). \begin{lemma} For $q\geq 0$, $-1/2\leq\sigma\leq 1/2$ we have $$ \left|\frac{\Gamma(\frac{q+1-s}{2})}{\Gamma(\frac{q+s}{2})}\right|\leq|(q+1+s)/2|^{1/2-\sigma}, $$ \end{lemma} \begin{lemma} For $q\geq 1/2$, $-1/2\leq\sigma\leq 1/2$ we have $$ \left|\frac{\Gamma(\frac{q+1-s}{2})}{\Gamma(\frac{q+s}{2})}\right|\leq|(q+s)/2|^{1/2-\sigma}, $$ \end{lemma} } \end{enum1} We now give a brief introduction to the most important generalization of the zeta function and of the Riemann Hypothesis, though plenty of them have been object of study. As far as this paper is concerned, we will in fact restrict ourselves mainly to Dirichlet $L$-functions with the correspondent Extended Riemann Hypothesis, and only rarely comments will be given for other scenarios, such as the case of the Dedekind zeta function and its Generalized Riemann Hypothesis. 
The Extended Riemann Hypothesis conjectures that the Dirichlet $L$-function $L(s,\chi)\doteq\sum_{n=1}^{\infty}\frac{\chi(n)}{n^s}$, for a primitive character $\chi$ modulo $q$, has no zeros with real part different from $1/2$ in the critical strip (actually the strong version says that $L(1/2,\chi)$ is always nonzero too; see also \cite[section 10.5.7]{co07}). We recall that there exists a functional equation for Dirichlet $L$-functions (see e.g.\ \cite[Theorem 10.2.14]{co07} and \cite{co07a} for the definitions of the symbols used), very similar to the one for $\Lambda(s)$ and similarly involving the gamma function: let $e=0$ (resp. $1$) if $\chi(-1)=1$ (resp. $=-1$), and
$$ \Lambda(\chi,s)\doteq(q/\pi)^{(s+e)/2}\Gamma((s+e)/2)L(\chi,s); $$
then
$$ \Lambda(\chi,s)=W(\chi)\Lambda(\bar{\chi},1-s) $$
where $W(\chi)$, the root number, is a complex number with absolute value equal to $1$. We do not need to multiply by $s(s-1)$ to obtain an entire function, since $\Lambda(\chi,s)$ is already entire and has a Hadamard product expansion (see e.g.\ \cite{da00} or \cite{ka92}). And there is also an analogue of Hardy's theorem about the number of roots on the critical line (\cite{bo95a}, \cite{mu08}, \cite{zu78}, \cite{zu78a}). Noteworthy is the following representation through Hurwitz zeta functions (see e.g.\ \cite{co07} or \cite{ap76}): if $\chi$ is a character (mod $q$), then
$$ L(s,\chi)=q^{-s}\sum_{a=1}^q \chi(a)\zeta(s,a/q). $$
We recall that the Hurwitz zeta function is defined for $\Re(s)>1$ as
$$ \zeta(s,w)\doteq\sum_{n=0}^{\infty} (n+w)^{-s}, $$
with $w$ being a positive real number, and it can be continued analytically to the whole $s$-plane. For the case of the Dedekind zeta function we would simply like to remark that, since it can be written, whenever the field involved is an abelian extension of $\mathbb{Q}$, as a product of $L$-functions (see e.g.\ \cite[Theorem 10.5.25]{co07}), proving the Extended Riemann Hypothesis provides a proof of the Generalized Riemann Hypothesis for this kind of field too. (Notice that proving the Extended Riemann Hypothesis only for the group of real characters would prove on its own the Generalized Riemann Hypothesis for the field associated to this group: see \cite[Theorem 4.3]{wa82}.)
\begin{rmk}
We give here a new proof of the fact that $\zeta$ has no zeros in $(0,1)$. As we will say again below, more relevant is the fact that this proof can be adapted to the case of the Hurwitz zeta function. The proof in the case of $\zeta$ is the following. We have already seen that, from $(8)$,
$$ \zeta(s)=\lim_{N\to\infty}\left(\sum_{n=1}^{N}n^{-s}-\frac{N^{1-s}}{1-s}\right). $$
Now, by the triangle inequality (actually in the case of real $s$, which is what will interest us now, we have equality),
$$ \left|\sum_{n=1}^{N}n^{-s}\right|\leq\sum_{n=1}^{N}n^{-\sigma}< 1+\int_1^N x^{-\sigma}dx=\frac{N^{1-\sigma}}{1-\sigma}-\frac{\sigma}{1-\sigma}. $$
But on the real axis $\left|\frac{N^{1-s}}{1-s}\right|$ is exactly $\frac{N^{1-\sigma}}{1-\sigma}$. Moreover, in $(0,1)$, $\frac{\sigma}{1-\sigma}$ is positive, so that, for every $N$, $\sum_{n=1}^{N}n^{-\sigma}$ is less than $\frac{N^{1-\sigma}}{1-\sigma}$ by more than a fixed positive quantity. Then in the limit the difference is nonzero, and in fact negative. Now, for the Hurwitz zeta function, which admits a similar Euler-Maclaurin expansion, the argument is the same.
In fact, by the Euler-Maclaurin formula, we can write\footnote{In this case too, similarly to what was shown in $(8)$, it is possible to have uniformity by imposing some additional conditions: namely, as in \cite[section III.2, Theorem 1]{ka92}, if $0<\sigma_0\leq\sigma\leq 2$ and $\pi N\geq |t|\geq 2\pi$, then
$$ \zeta(s,w)=\sum_{n=0}^{N}(n+w)^{-s}+\frac{N^{1-s}}{s-1}+O(N^{-\sigma_0}), $$
where the implicit constant depends only on $\sigma_0$. }
$$ \zeta(s,w)=\sum_{n=1}^{N-1}(n+w)^{-s}+\frac{(N+w)^{1-s}}{s-1}+\frac{1}{2}(N+w)^{-s}+O((N+w)^{-\sigma-1}). $$
Following the same lines as the proof given for $\zeta$, the following is also proved:
\begin{thm}
$\zeta(s,w)$ has no zeros for $s\in(0,1)$.
\end{thm}
\begin{proof}
What essentially changes from the case of $\zeta$ is only that the equality above is even replaced by an inequality, as
$$ 1+\int_1^N (x+w)^{-\sigma}dx= 1+ \frac{(N+w)^{1-\sigma}}{1-\sigma}-\frac{(1+w)^{1-\sigma}}{1-\sigma}<\frac{(N+w)^{1-\sigma}}{1-\sigma}-\frac{\sigma}{1-\sigma}. $$
\end{proof}
This appears to be a new result with its own relevance (for other statements on zeros of Hurwitz zeta functions see \cite{sp76}). Whether this could also help to show that the Dirichlet $L$-functions, which we showed above to be representable through a sum of Hurwitz zeta functions, are also nonzero on the real axis between $0$ and $1$ (a weaker version of the Extended Riemann Hypothesis, though stronger in including the point $1/2$) is left open to investigation.
\end{rmk}
\begin{rmk}
In this section we make some remarks about product expansions of $\zeta$ and its generalizations. First of all, there are expressions which are simpler than that shown at $(10)$: as shown in \cite{ed01},
$$ \xi(s)=c_1\prod\left(1-\frac{s}{\rho_n}\right), $$
or
$$ \xi(s)=c_2\prod\left(1-\frac{s-\frac{1}{2}}{\rho_n-\frac{1}{2}}\right), $$
where $c_1,c_2$ are some constants and the product is understood to be taken over all roots $\rho_n$ of $\xi$ (which are the nontrivial roots of $\zeta$) in an order which pairs each root $\rho_n$ with the corresponding one $1-\rho_n$. Considering $s=\frac{1}{2}+iv$ and substituting $\phi_n=i(\frac{1}{2}-\rho_n)$, we can also write
$$ \Xi(v)\doteq\xi(\frac{1}{2}+iv)=c_2\prod\left(1-\frac{v}{\phi_n}\right), $$
or, which is more common in the literature and has the advantage of forgetting about the order, pairing as said with $1-\rho_n$, which corresponds to $-\phi_n$, we can write the following product representation over all roots with positive imaginary part, which correspond to $\phi_n$ with positive real part:
$$ \Xi(v)=c_2\prod\left(1-\frac{v^2}{\phi_n^2}\right). $$
We notice that $\Xi(v)$ is an even function, which is, by the way, a direct consequence of the functional equation $\xi(s)=\xi(1-s)$. What we would particularly like to remark is the following: we can start the other way around, writing the Hadamard expansion for $\xi(\frac{1}{2}+iv)$; then, looking at the roots $\phi_n$ and $-\phi_n$, we see that by pairing the corresponding terms the exponentials cancel with each other, and finally we can obtain an expression in $s$ by substituting back $v=i(\frac{1}{2}-s)$; we will see this again later on. Let us now deal with product expansions of $\Lambda(\chi,s)$ and similar expressions for other $L$-functions.
We know in this case that we cannot easily drop the exponentials inside the product (see also \cite{la06b}: one would need some restrictions on the locations of the zeros), because, unless $\chi$ is a real character, the roots of $L(\chi,s)$ are only symmetric with respect to the critical line. However, when one is considering the distribution of zeros, it might be useful to consider the following. First notice that for the set of real characters the zeros are again symmetric with respect to the real axis and $W(\chi)=1$. For the general case consider the function $G(\chi,s)\doteq\Lambda(\chi,s)\Lambda(\bar{\chi},s)$ and $\Psi(\chi,v)\doteq G(\chi,1/2+iv)$. Clearly, if $s$ is not a zero of $G$, nor is it for $\Lambda(\chi,s)$ and $\Lambda(\bar{\chi},s)$, and if $G$ has a finite number of roots in a region, then this holds for the $L$-functions too; and zeros of $G$ are now also symmetric with respect to the real axis. In fact, because of the functional equation for $L(\chi,s)$, and the fact that $\overline{W(\chi)}=W(\bar{\chi})$, we have $G(\chi,s)=G(\chi,1-s)$. Moreover $G(\chi,s)$ is real for $s$ real (or $\Psi(\chi,v)$ is real for imaginary $v$):
$$ \overline{\Lambda(\chi,s)\Lambda(\bar{\chi},s)}=\Lambda(\bar{\chi},s)\Lambda(\chi,s)=\Lambda(\chi,s)\Lambda(\bar{\chi},s). $$
We are now in a position to obtain a modified Hadamard product like that of $\zeta$, as explained in \cite[Theorem 4]{la06b}. That $B=0$ can also be seen from the evenness of $\Psi$, which again follows from the functional equation for $G$. But in fact we can do something nicer: we do not need groups of four roots to obtain a modified product, provided that we perform a change of variable as above. We would consider a function
$$ \Xi(v,\chi)=e^{A+Bv}\prod_{n=1}^{\infty}\left(1-\frac{v}{\phi_n}\right)e^{v/\phi_n}, $$
where again zeros $\rho_n$ of $\Lambda(\chi,s)$ are written as $\frac{1}{2}+i\phi_n$, with $\phi_n$ zeros of $\Xi(v,\chi)$. Since the product with the exponentials already converges absolutely and the order of the factors is immaterial, we can pair, as above, $\phi_n$ and $-\phi_n$, and the exponentials drop. The same scenario holds for automorphic $L$-functions (see \cite[section 2, in particular Theorem 2.1]{la07}) and $L$-functions associated to Hecke characters (see e.g.\ \cite{la94} and \cite{fr06}), and in particular the Dedekind zeta function, which corresponds to the trivial character. In any case the Dedekind zeta function is real on the real axis, as the norms of ideals are positive integers: then, because of the Schwarz reflection principle for analytic functions, its roots are symmetric with respect to the real axis.
\end{rmk}
\section{Sufficient conditions for the Riemann Hypothesis}
Before stating our first sufficient condition, we recall a result of Selberg (\cite{se42}, \cite[section 2.1.11]{ka95}) which says that
$$ \lim_{T\to\infty} \frac{1}{T}\mu\left\{t\in(0,T] : \zeta(1/2+ik)\neq 0, k\in(t,t+\Phi(t)/\log t) \right\}=0, $$
where $\Phi(t)$ is a positive function which tends to $\infty$ as $t\to\infty$. This implies, by taking for example $\Phi(t)=\log\log t$, that, for any fixed $\eta$,
$$ \lim_{T\to\infty} \frac{1}{T}\mu\left\{t\in(0,T] : \zeta(1/2+ik)\neq 0, k\in(t,t+\eta) \right\}=0, $$
or
$$ \lim_{T\to\infty} \frac{1}{T}\mu\left\{t\in(0,T] : \exists k\in(t,t+\eta) \mid \zeta(1/2+ik)=0 \right\}=1.
$$ Here is our first statement: \begin{thm} Suppose that for each $\epsilon,\eta>0$ there exists a $\lambda$ such that $$ \lim_{T\to\infty} \mu\left\{t\in(0,T]: \exists k\in(t,t+\eta) \mid \zeta(1/2+ik)=0 \cap(|s-1/2-ik|<\lambda \Rightarrow |\zeta(s)|<\epsilon) \right\}=T $$ Then the Riemann Hypothesis is true. \end{thm} \begin{rmk} As for how much our condition, as stated above in the theorem, is likely to be true is though not so easy to say: if on one side it has been shown (\cite{go84a}) that, assuming the Riemann Hypothesis, the average of $|\zeta'(\rho)|^2$ over the roots up to height $T$ is asymptotic to $\frac{1}{12}(\log T)^3$, thus increasing with $T$, on the other one it's also known that infinitely many times it assumes arbitrarily small values (\cite[Theorem 3]{ng08}). Anyway from the proof below it will become evident where it is possible to modify or relax the condition and obtain other, eventually more significant, implications: Theorems $2.2$ and Corollary $2.3$ will be two examples. \end{rmk} \begin{proof} In order to prove the above statement we use Voronin's universality theorem. \footnote{ Voronin's universality theorem says the following (see e.g. \cite{st07}): \begin{thm} Suppose that $K$ is a compact subset of the strip $\frac{1}{2}<\sigma<1$ with connected complement and let $g(s)$ be a non-vanishing continuous function on $K$ which is analytic in the interior of $K$. Then for any $\epsilon>0$ $$ \liminf_{T\to\infty} \frac{1}{T} \mu\left\{\tau\in[0,T] : \max_{s\in K} |\zeta(s+i\tau)-g(s)|<\epsilon \right\}>0. $$ ($\mu$ stands for Lebesgue measure) \end{thm} } We remark (see \cite[section 8.1]{st07}) that, if Voronin's theorem were true even if $g(s)$ is allowed to vanish, then the Riemann Hypothesis would be false. Though, as shown in the reference cited, this cannot happen, i.e a function having zeros cannot be approximated uniformly by $\zeta$, which in fact hints at the relation between universality and distribution of zeros. \footnote{Actually, as it is stated in \cite{st07}, the property above about the impossibility of that kind of approximation pertains to universal $L$-functions in general. The given argument though seems to need essentially a density estimate of the type $N(1/2+\epsilon, T)=o(T)$ for any $\epsilon>0$, where $N(\sigma, T)$ stands for the number of zeros in the upper critical strip up to height $T$ and with real part bigger or equal than $\sigma$. This is known to hold in many cases, but still not in general, as far as I know (\cite{mo71b}, \cite{se92a}, \cite{ka95}, \cite{hi76}, \cite{st07}, \cite{he77a}, \cite{re80}, \cite{pe82a}). In particular from $N(\sigma, T)= O(T^{1-\alpha(\sigma-1/2)\log T})$, which holds for zeta and Dirichlet L-functions as well as for L-functions of quadratic fields and Dirichlet series connected to some cusp forms (\cite{se92a}), one can deduce the density estimate above.} In this regard we notice that with another notion of universality, slightly different from that expressed by Voronin's theorem, it would be possible to find functions which satisfy the main properties of $\zeta$, namely the functional equation in $(2)$, the property of being analytic except for a pole in $1$ and that of being real on the real axis, but do not satisfy the Riemann Hypothesis (see \cite{ni09}). We are going then to proceed by contradiction, assuming that the Riemann Hypothesis is false: suppose that $s_0$ is a zero in the strip $\frac{1}{2}<\sigma<1$ and let $K$ be any compact in this strip containing $s_0$. 
Let $s_1=1/2+it_1$ be a fixed root of $\zeta$ on the critical line . For any given $\epsilon>0$, by continuity, there exists $\eta>0$ such that $|\Re{(s_1-s)}|+|\Im{(s_1-s)}|<2\eta$ implies $|\zeta(s)|<\epsilon/4$. Since the derivative of $\zeta$ is strongly (i.e: the function to be approximated need not be non-vanishing) universal in Voronin's sense (see \cite[section 1.3 and 1.6]{st07}), for any $\delta>0$ \[ \liminf_{T\to\infty} \frac{1}{T} \mu\left\{\tau\in[0,T] : \max_{s\in K_2} |\zeta'(s+i\tau)-\zeta'(s)|\leq \delta \right\}>0, \] for any compact $K_2$ in the right open half of the strip. Now, we know from our hypothesis that there exists a $\lambda$ (which we can take less than $\eta$) such that the condition is fulfilled for the pair $\epsilon/4, \eta$. We will take $K_2$ as a compact containing all lines from $K$ to the segment $[s_1+\lambda,s_1+\lambda+i\eta]$. Let also $\delta$ be equal to $\frac{\epsilon}{2(\max_{s\in K}|s-s_1-\lambda|+\eta)}$ and consider $|\zeta(s+i\tau)-\zeta(s)|$ for each of those $\tau$ such that $\max_{s\in K_2} |\zeta'(s+i\tau)-\zeta'(s)|\leq \delta$. By the triangular inequality we have, for any $0\leq\alpha\leq 1$ and $s\in K$, $$ |\zeta(s+i\tau)-\zeta(s)|\leq\left|\int_{s_1+\lambda+i\alpha\eta}^s(\zeta'(s+i\tau)-\zeta'(s))ds\right|+|\zeta(s_1+\lambda+i\alpha\eta+i\tau)|+|\zeta(s_1+\lambda+i\alpha\eta)| $$ where the path of integration is chosen to be a straight line connecting the two points, so that the first part on the right can be bounded by $|s-(s_1+\lambda+i\alpha\eta)|\cdot\delta$, which is less or equal than $\epsilon/2$. Now, if $|\zeta(s_1+\lambda+i\tau)|\leq\epsilon/4$, then the second part on the right with $\alpha=0$ is less or equal than $\epsilon/2$ too. Otherwise we know, by our hypothesis, that for almost all $\tau$, there exists $0\leq\alpha\leq 1$ such that $\zeta(s_1+i\alpha\eta+i\tau)=0$ and $|\zeta(s_1+\lambda+i\alpha\eta+i\tau)|\leq \epsilon/4$, so that again the second part is less or equal than $\epsilon/2$. Thus in general we can conclude that $$ \liminf_{T\to\infty} \frac{1}{T} \mu\left\{\tau\in[0,T] : \max_{s\in K} |\zeta(s+i\tau)-\zeta(s)|\leq \epsilon \right\}>0. $$ Though this cannot happen, as we remarked above, since $\zeta$ is supposed to vanish on $K$, and we obtain then a contradiction. \end{proof} \begin{rmk} Alternatively, instead of proving it by contradiction, we could have used directly Bagchi's theorem. \footnote{ Bagchi proved the following (see e.g. \cite[section 8.2]{st07}) \begin{thm} RH is true if and only if for any compact subset $K$ of the strip $\frac{1}{2}<\sigma<1$ with connected complement and for any $\epsilon>0$ $$ \liminf_{T\to\infty} \frac{1}{T} \mu\left\{\tau\in[0,T] : \max_{s\in K} |\zeta(s+i\tau)-\zeta(s)|<\epsilon \right\}>0. $$ \end{thm} } \end{rmk} \begin{rmk} By looking at the given proof one may think about varying it by considering the possibility that zeta takes a particular value on a line $\sigma=\sigma_0$, with $1/2<\sigma_0<1$, as often as it happens for zeros on the critical line. Though it has been shown (\cite{bo32}, \cite[Theorem 1.5]{st07}) that, for any complex number $c\neq 0$, the number of roots of $\zeta(s)=c$ up to height $T$ in the strip $1/2<\sigma_1<\sigma_2<1$ is only asymptotic to $KT$ for a finite positive constant $K$. 
\end{rmk} We can also modify our requirement in the following way: \begin{thm} Suppose that $\zeta'$ is strongly universal for any compact in $\{\frac{1}{2}<\sigma<1\}\cup\{[s_*,s_*+i\eta]\}$, where $s_*$ is a zero on the critical line and $\eta$ is a positive real number. Then the Riemann Hypothesis is true. \end{thm} \begin{proof} We see, from the proof given above, that, if $\zeta'$ were strongly universal in $\{\frac{1}{2}<\sigma<1\}\cup\{[s_*,s_*+i\eta]\}$, this would allow us to take $K_2$ as a compact including part of the critical line (and so forget about $\lambda$) and the proof would follow more easily applying directly Selberg's theorem without our additional condition. \end{proof} \begin{cor} Suppose that $\zeta'$ is strongly universal in $\frac{1}{2}\leq\sigma<1$. Then the Riemann Hypothesis is true. \end{cor} \begin{rmk} If one considers to generalize this type of results to other scenarios, then one needs to know which properties still hold in which situations. We give here some guidelines of references. For the case of Dirichlet $L$-series, we have universality for the $L$-function and its derivative (\cite{ba82a}, \cite{st07}) and a good density estimate, as already mentioned before. Moreover a result in the direction of Selberg's holds too (\cite[section 4]{se92a}): if $[0,T]$ is divided into intervals of length $\eta$, all except $o(T/\eta)$ of them will contain $c\eta\log T+O((\eta\log T)^{\frac{3}{4}})$ zeros, where $c$ is a constant (depending more generally on the degree of the $L$-function). For further generalizations we refer to \cite{st07} and in particular theorem $6.12$, together with the table of degrees in \cite{fr06}, to see which classes of functions admit analogues of universality theorems. For connections of universality theorems with correspondent Riemann hypotheses we refer to \cite[section 8.3]{st07}. For universality of derivatives, beside \cite[section 1.3 and 1.6]{st07}, we refer to \cite{la85c}, \cite{la05}. \end{rmk} We will find now another sufficient condition, by a completely different approach. We first define: $$ F(s)\doteq \frac{\zeta(s)}{\zeta(1-s)}. $$ Notice that $F(s)$ can be analytically continued in the zeros of $\zeta$, by defining it there to be $F(s)=\frac{\pi^{s-\frac{1}{2}}\Gamma(\frac{1-s}{2})}{\Gamma(\frac{s}{2})}$ from $(2)$. This actually not only shows that zeros of the denominator are just removable singularities, but also that, if $s$ is a zero, then $s$ and $1-s$ have the same multiplicity. \footnote{Just suppose that is not the case, then $s$ has higher multiplicity, in order for $F(s)$ to remain bounded in a neighboorhood of $s$ (and thus to be extendable: see \cite[section III.4.4]{ca95}); but in this case $\frac{\zeta(1-s)}{\zeta(s)}$ would not be extendable, by the same reason. 
Or just notice that, if $s$ is a zero with a higher multiplicity than $1-s$, then $F(s)=0$, while from the explicit expression above we see that $F(s)$ is never zero in the critical strip.} Consider now $H_N(s)\doteq\sum_{n=1}^{N}n^{-s}+\frac{N^{1-s}}{s-1}$, which can also be seen as a truncated sum: if we define (as in \cite{pu08}): $$ h_n(s)= \begin{cases} \frac{1}{n^s}-\frac{n^{1-s}-(n-1)^{1-s}}{1-s}\ \ \text{if}\ \ n\geq 2\\ \frac{1}{s-1}\ \ \text{if}\ \ n=1\\ 1\ \ \text{if}\ \ n=0 \end{cases} $$ then $H_N(s)$ is just $\sum_{n=0}^N h_n(s)$, the $N$-th partial sum of the series $\sum_{n=0}^{\infty} h_n(s)$, which is equal to $\zeta(s)$ because of the summation formula in $(8)$): $$ \zeta(s)=\sum_{n=0}^{N} h_n(s)-\frac{1}{2}N^{-s}+O(N^{-\sigma-1}) $$ so that $$ \zeta(s)=\sum_{n=0}^{\infty} h_n(s) $$ and, if $\zeta(s)=0$, $$ \sum_{n=0}^{N} h_n(s)=\frac{1}{2}N^{-s}+O(N^{-\sigma-1}). $$ Suppose now $s_*=\sigma_* +it_*$ is in the critical strip. \begin{prop} Whenever $\zeta(s_*)\neq 0$ (and so $\zeta(1-s_*)\neq 0$), then $$\lim_{s \to s_*} \frac{\zeta(s)}{\zeta(1-s)}=\lim_{N \to \infty} \frac{H_N(s_*)}{H_N(1-s_*)}. $$ \end{prop} \begin{proof} Since $\lim_{N \to \infty} H_N(1-s_*)=\zeta(1-s_*)\neq 0$, we have: $$ \lim_{N \to \infty} \frac{H_N(s_*)}{H_N(1-s_*)}=\frac{\lim_{N \to \infty} H_N(s_*)}{\lim_{N \to \infty} H_N(1-s_*)}= \frac{\zeta(s_*)}{\zeta(1-s_*)}=F(s_*). $$ \end{proof} Now, for $a(s),b(s)$ functions with real positive values, let $f_{a(s)}(N,s)\doteq\lceil a(s)^{-1/2\sigma} N^{1-\sigma} \rceil$ (resp. with $b$), with, as usual, $s=\sigma +it$, and let $H^{a,b}(s)\doteq\lim_{N \to \infty} \left|\frac{H_{f_{a(s)}(N,s)}(s)}{H_{f_{b(1-s)}(N,1-s)}(1-s)}\right|$. Notice that in proving the previous proposition we didn't use the fact that we had $N$-th partial sums in both the numerator and the denominator. Then along the same lines we can also prove: \begin{prop} If $\zeta(s_*)\neq 0$, then $$|F(s_*)|=\lim_{s \to s_*} \left|\frac{\zeta(s)}{\zeta(1-s)}\right|=\lim_{N \to \infty} \left|\frac{H_{f_{a(s_*)}(N,s_*)}(s_*)}{H_{f_{b(1-s_*)}(N,1-s_*)}(1-s_*)}\right|=H^{a,b}(s_*). $$ \end{prop} Let's consider now $H^{a,b}(s_*)$ when $s_*$ is zero of $\zeta$: we claim that \begin{lemma} For any zero $s_*$ of $\zeta$, if $a(s),b(s)$ are constants, $A$ and $B$, we have $H^{a,b}(s_*)=|\sqrt{A}/\sqrt{B}|$. \end{lemma} \begin{proof} Write $\lceil A^{-1/2\sigma} N^{1-\sigma} \rceil$ first as $A^{-1/2\sigma} N^{1-\sigma}+\epsilon$ with $0\leq\epsilon <1$, and then as $(A^{-1/2\sigma} N^{1-\sigma})\cdot (1+\frac{\epsilon}{A^{-1/2\sigma} N^{1-\sigma}})$. After a similar decomposition has been done for $f_B(N,1-s)$, since we're supposing that $\zeta(s_*)=0$ (so that $\sum_{n=0}^{N} h_n(s_*)=\frac{1}{2}N^{-s_*}+O(N^{-\sigma_*-1})$ as shown above using the summation formula), we can write $$ H^{a,b}(s_*)=\lim_{N\to\infty}\left|\frac{\frac{1}{2}(A^{-1/2\sigma_*} N^{1-\sigma_*})^{-s_*}(1+\frac{\epsilon_1}{A^{-1/2\sigma_*} N^{1-\sigma_*}})^{-s_*}+O(f_A(N,s_*)^{-\sigma_*-1})}{\frac{1}{2}(B^{-1/2(1-\sigma_*)}N^{\sigma_*})^{-1+s_*}(1+\frac{\epsilon_2}{ B^{-1/2(1-\sigma_*)}N^{\sigma_*}})^{-1+s_*}+O(f_B(N,1-s_*)^{-2+\sigma_*})}\right|. 
$$ By the binomial expansion of $(1+x)^z$, $$ (1+\frac{\epsilon_1}{A^{-1/2\sigma_*} N^{1-\sigma_*}})^{-s_*}=1+O(\frac{1}{N^{1-\sigma_*}}) $$ and $$ (1+\frac{\epsilon_2}{B^{-1/2(1-\sigma_*)}N^{\sigma_*}})^{-1+s_*}=1+O(\frac{1}{N^{\sigma_*}}), $$ so that we finally get \[ \lim_{N\to\infty}\left|\frac{\frac{1}{2}A^{s_*/2\sigma_*} N^{-s_*(1-\sigma_*)}+O((N^{1-\sigma_*})^{-\sigma_*-1})}{\frac{1}{2}B^{(1-s_*)/2(1-\sigma_*)} N^{(-1+s_*)\sigma_*}+O((N^{\sigma_*})^{-2+\sigma_*})}\right|= \] \[ =\lim_{N\to\infty}\left|\frac{\frac{1}{2}A^{s_*/2\sigma_*} N^{-s_*(1-\sigma_*)}}{\frac{1}{2}B^{(1-s_*)/2(1-\sigma_*)} N^{(-1+s_*)\sigma_*}}\right|=\lim_{N\to\infty}\left|\frac{A^{\sigma_*/2\sigma_*} N^{-\sigma_*(1-\sigma_*)}}{B^{(1-\sigma_*)/2(1-\sigma_*)} N^{(-1+\sigma_*)\sigma_*}}\right|=|\sqrt{A}/\sqrt{B}| \] \end{proof} Consider now the case $a(s)=b(s)=|F(s)|$. Using the fact that $|F(s)|=1/|F(1-s)|$, one sees in a similar manner that $$ H^{|F(s)|,|F(s)|}(s)=|\sqrt{|F(s)|}/\sqrt{|F(1-s)|}|=|F(s)| $$ if $s$ is a zero. If it is not a zero, then the statement is of course true, so that we have built a continuous function. \begin{rmk} Notice that to build a similar continuous function in the case of Dirichlet L-series by using their representation as a sum of Hurwitz zeta functions, one needs the higher approximation in $(8)$, since the relations $\sum_{a=1}^q \chi(a)=\sum_{a=1}^q \bar{\chi}(a)=0$ hold (as $\chi$ is non-principal). \end{rmk} We look at this point at the functions which are the argument of $\lim$ in the definition of $H^{|F(s)|,|F(s)|}(s)$, and call them $|F(s)|_N$. We first notice that for $\sigma=1/2$, they are always equal to $1$ and equal to the limit $|F(s)|=1$. In the right open half instead we have $|F(s)|<1$ as soon as $t$ is large enough (see Theorem $3.7$ in the next section; and $\left|\frac{\zeta(\sigma+it)}{\zeta(1-\sigma-it)}\right|$ is actually monotonically decreasing in $\sigma$ for fixed $t>2\pi+1$, as proven in \cite[Theorem 1]{sa03}). Moreover it is not hard to see that, for each $s$ in this region, the sequence $\{|F(s)|_N\}_N$ admits a strictly monotone subsequence: for that it suffices to check that $\{|F(s)|_N\}_N$ does not get constant from some point, which can be seen for example by the fact that, for any two constants $k_1$ and $k_2$, $\lceil k_1 N^{1-\sigma} \rceil$ and $\lceil k_2 N^{\sigma} \rceil$ are increasing by different rates for $N$ large enough. Now, if for an $s$ in the half strip $|F(s)|_N<|F(s)|_M$ for some $N,M$, by continuity this holds true also in a neighboorhood. In the theorem below we will essentially assume that the intersection of all this kind of neighboorhoods relative to some subsequence does not reduce to a point. Before stating the theorem, we notice that we can also consider slightly different functions converging to $|F(s)|$: for example in the numerator of $|F(s)|_N$, we could take $m_{|F(s)|}(N,s)\doteq 1+\lceil |F(s)|^{-1/2\sigma} N^{1-\sigma} \rceil$ instead of $f_{|F(s)|}(N,s)$ and call the new approximants as $|F_m(s)|_N$. \begin{thm} Suppose that for each $s$ in the right (or left) open half strip we can find a neighboorhood $I_s$ and a subsequence $N_k(s)$ such that $\{|F(s)|_{N_k}\}_{N_k}$ and $\{|F_m(s)|_{N_k}\}_{N_k}$ converge monotonically (either increasing or decreasing for the whole $I_s$). Then the Riemann Hypothesis is true. 
\end{thm} \begin{proof} Thanks to the hypothesis of monotonicity, and since the approximants converge by construction to a continuous function, by Dini's theorem,\footnote{ Dini's theorem, as stated in \cite[Theorem 7.13]{ru76}, is the following: \begin{thm} Suppose that $K$ is compact, and that $\{f_n\}$ is a sequence of continuous functions on $K$, that they converge pointwise to a continuous function $f$ on $K$, and that $f_n(x)\geq f_{n+1}(x)$ (or $\leq$) for all $x\in K$ and all $n$. Then $f_n\to f$ uniformly on $K$. \end{thm} } the convergence in a compact set $D$ inside $I_s$ would be uniform. This implies that, for $N$ bigger than some $N_1$, $|F(s)|_N$ is uniformly bounded in $D$: then, for those $N$, possible zeros of the denominator $H_{f_{|F(1-s)|}(N,1-s)}(1-s)$ would have to be zeros of the numerator $H_{f_{|F(s)|}(N,s)}(s)$ too. And for $N$ bigger than some $N_2$ zeros of $H_{f_{|F(1-s)|}(N,1-s)}(1-s)$ would also be zeros of $H_{m_{|F(s)|}(N,s)}(s)$. But $H_N$ and $H_{N+1}$ cannot both be zero, at least for $N$ large, which is what interests us now: the difference between the two is $h_N(s)$, which is $0$ exactly when $1-s=n-(n-1)^{1-s}n^s=n-n(1-\frac{1}{n})^{1-s}$; however, $1-s=n-n(1-\frac{1-s}{n})$ holds identically, and this is only an approximation of the binomial expansion of $n-n(1-\frac{1}{n})^{1-s}$. (In fact in the critical strip this is also true for small values, as stated in \cite{pu04}.) In conclusion, for all $N$ large enough, $H_N$ would have to be nonzero throughout $D$, which, by Hurwitz's theorem,\footnote{ Hurwitz's theorem, as stated in \cite[Theorem 5.13]{st07}, is the following: \begin{thm} Let $G$ be a region and $\{f_n\}$ be a sequence of functions analytic on $G$ which converges uniformly on $G$ to some function $f$. Suppose that $f(s)$ is not identically zero; then an interior point $s_0$ of $G$ is a zero of $f(s)$ if and only if there exists a sequence $\{s_n\}$ in $G$ which tends to $s_0$ as $n\to \infty$, and $f_n(s_n)=0$ for all sufficiently large $n$. \end{thm} } is sufficient to prove that $\zeta$ has no zeros in $D$. \end{proof} \section{Equivalent criteria for the Riemann Hypothesis} In the first part of this section we concentrate on results that involve zeros of partial sums. One of the most famous theorems in this regard is a classical result by Tur\'an, who proved the following (see e.g. \cite[section 8.14]{ap90} or \cite[section 8.3]{bo08}): \begin{thm} Let $\zeta_N(s)\doteq\sum_{n=1}^N \frac{1}{n^s}$. If there exists an $N_0$ such that $\zeta_N(s)\neq 0$ for all $N\geq N_0$ and all $\sigma>1$, then the Riemann Hypothesis is true. \end{thm} We remark that this is only a one-sided implication, which has also proved to be of no use for a proof of the Riemann Hypothesis, since Montgomery proved the existence of roots of these partial sums in certain half planes to the right of the critical strip, as cited in the references above. In this regard, and in general concerning zeros of partial sums, we also recommend \cite{bo07}, \cite{go07b}, \cite{go08a}. Our first result in this direction is the following: \begin{thm} The Riemann Hypothesis is true if and only if for any compact disc $K$ in the right open half of the critical strip there exists an $N_0$ such that, for infinitely many $N>N_0$, $H_N(s)$ is non-vanishing for all $s$ in $K$. \end{thm} \begin{proof} If the Riemann Hypothesis is true, then there are no zeros in the right open half. In any compact $K$, then, $|\zeta|$ has a minimum $m>0$.
Now, by $(8)$, we know that $$ |H_N(s)-\zeta(s)|=|\frac{1}{2}N^{-s}+O(N^{-\sigma-1})| $$ (keeping in mind the usual remark about the $O$ notation). By taking $N$ big enough, this can be made smaller than $m/2$ for all $s$ in $K$, so that, by the triangular inequality, $|H_N(s)|=|H_N(s)-\zeta(s)+\zeta(s)|\geq |\zeta(s)|-|H_N(s)-\zeta(s)|\geq m/2>0$. In the other direction, to prove that the Riemann Hypothesis is true, we can use Hurwitz's theorem (see footnote in the previous section). Since our hypothesis implies that for any compact disc $K$ there cannot exist a sequence $s_N$ in $K$ which is a zero for $H_N$ for all sufficiently large $N$, Hurwitz's theorem tells us that there is no zero of $\zeta$ in $K$. But any $s$ in the right open half is center of a compact disc in the interior of this strip. \end{proof} We can derive a similar statement using the approximants $\phi_N(s)\doteq\sum_{n=1}^N \frac{(-1)^{n-1}}{n^s}$. We have \begin{thm} The Riemann Hypothesis is true if and only if for any compact disc $K$ in the right open half of the critical strip there exists an $N_0$ such that, for infinitely many $N>N_0$, $\phi_N(s)$ is non-vanishing for all $s$ in $K$. \end{thm} \begin{proof} We can follow the same argument as above, provided that we consider $(1-2^{1-s})^{-1}\cdot \phi_N(s)$. Notice that in this case we know that $\phi_N(s)$ is converging uniformly in any compact of the strip, since any Dirichlet series converges uniformly on every compact subset interior to its half-plane of convergence (see e.g. \cite[chapter 11]{ap76}). \end{proof} Generalizing a bit these results, for example for the case of $H_N(s)$: \begin{thm} The Riemann Hypothesis is true if and only if for any compact disc $K$ in the right open half of the critical strip there exists an analytic function $f(N,s)$ which tends to zero in $K$ as $N\to\infty$, $\alpha_1,\ldots,\alpha_{d}\in\mathbb{C}$, $L_1,\ldots,L_d,N_0\in\mathbb{N}$ such that, for infinitely many $N>N_0$, $f(N,s)+\alpha_1 H_N(s)+\alpha_2 H_{N+L_1}+\cdots+(1-\sum_{i=1}^d \alpha_i) H_{N+L_d}$ is non-vanishing for all $s$ in $K$. \end{thm} \begin{proof} If the Riemann Hypothesis is true, then for any $K$, as we saw in the previous equivalence, there exists an $N_0$ such that, for infinitely many $N>N_0$, $H_N(s)$ is non-vanishing for all $s$ in $K$. This corresponds to taking, $f(N,s)=0$ for all $s$ in $K$, $\alpha_1=1$ and all other $\alpha$ equal to $0$. In the other direction, the proof follows the one given above for the less general case, since $f(s)+\alpha_1 H_N(s)+\alpha_2 H_{N+L_1}+\cdots+(1-\sum_{i=1}^d \alpha_i) H_{N+L_d}$ is also converging to $\zeta(s)$ uniformly on $K$. \end{proof} For example it would be sufficient to prove that the arithmetic mean of $H_N$ and $H_{N-1}$ is always nonzero for infinitely many $N>N_0$ and for all $s$ in any given $K$. This might be easier, though being essentially equivalent; we already remarked that if one of the two is zero, the other is not, which might be useful. In the sequel other equivalent reformulations, though of a different nature, are to be found. Let $s_0=\sigma_0+it_0$ be a zero of $\zeta$ in the critical strip. $F(s_0)$, where $F$ is defined as in the previous section, is then the nonzero value $F(s_0)=\frac{\pi^{s_0-\frac{1}{2}}\Gamma(\frac{1-s_0}{2})}{\Gamma(\frac{s_0}{2})}$. \begin{thm} $|F(s_0)|=1$ if and only if the Riemann Hypothesis is true. 
\end{thm} \begin{proof} In fact $|F(s_0)|=1$ happens when $\pi^{\sigma-1/2}=|\pi^{s-1/2}|=\left|\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})}\right|$. When $\sigma=1/2$, the right-hand side is equal to $1$ by ($11$), so for any $t$ the two sides are equal. Otherwise from ($12$) we get that, for $\sigma<1/2$, $\left|\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})}\right|\geq |(1+s)/2|^{\sigma-1/2}$. This means that, for $\sigma<1/2$ and $\sqrt{(1+\sigma)^2+t^2}> 2\pi$, $\left|\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})}\right|> \pi^{\sigma-1/2}$, so that the right-hand and left-hand sides will never be equal in this case. For $\sigma>1/2$, let $z=1-s$ so that $\Re{z}<1/2$: then $\left|\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})}\right|=\left|\frac{\Gamma(\frac{1-z}{2})}{\Gamma(\frac{z}{2})}\right|\leq|(1+z)/2|^{1/2-\Re(z)}=|(2-s)/2|^{\sigma-1/2}$. And this means that, for $\sigma>1/2$ and $\sqrt{(2-\sigma)^2+t^2}< 2\pi$, we have $\left|\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})}\right|< \pi^{\sigma-1/2}$, and again the right-hand and left-hand sides will never be equal in this case either. But notice that the region $\{1/2<\sigma<1\}\cap\{\sqrt{(2-\sigma)^2+t^2}< 2\pi\}$ is the mirror image, about the critical line, of the region $\{0<\sigma<1/2\}\cap\{\sqrt{(1+\sigma)^2+t^2}< 2\pi\}$. And, because of ($3$), if in a region there are no zeros, there are no zeros in the symmetric region either. It follows that there are no zeros in $\{0<\sigma<1/2\}$, apart possibly from the line $\sqrt{(1+\sigma)^2+t^2}=2\pi$, which we can, however, exclude since we know that the first zero in the upper half-plane has imaginary part larger than $14$. This implies, by ($3$), that there are no zeros in the whole critical strip except on the critical line. \end{proof} \begin{rmk} For an analogous result in the case of Dirichlet L-functions and the Extended Riemann Hypothesis we would consider $\left|\frac{L(\bar{\chi},s_0)}{L(\chi,1-s_0)}\right|=1$. There would not be any modifications besides the fact that we may need the second inequality in $(12)$ in case $\chi(-1)=-1$. The point is that we are not able to exclude the curved line, since in this case we are not aware of general theoretical or computational results on zero-free regions within a certain height in the critical strip. \end{rmk} In fact if we consider a generic $s$ instead of a zero in the theorem above, we can prove the following: \begin{thm} $|F(s)|=1$ in the critical strip if and only if $s$ satisfies the following: $\sigma=1/2$ or $\{0<\sigma<1/2\}\cap\{\sqrt{(1+\sigma)^2+t^2}= 2\pi\}$ or $\{1/2<\sigma<1\}\cap\{\sqrt{(2-\sigma)^2+t^2}= 2\pi\}$. \end{thm} \begin{proof} The only difference with the proof above is that, thanks to $(6)$ and the fact that $F(s)=1/F(1-s)$, if $|F(s)|\neq 1$ in a region as before, neither will it be in the symmetric one. That it is in fact $1$ on the curved lines follows by continuity, since one can easily compute that, in $\{0<\sigma<1/2\}\cap\{\sqrt{(1+\sigma)^2+t^2}> 2\pi\}$, $|F(s)|> 1$ while it is smaller than $1$ below the line (take for example the part of the real axis inside the strip, where $\zeta$ is negative and decreasing: see \cite[notes to chapter 1]{iv03}). \end{proof} Notice that this result improves the known asymptotic estimates or almost equivalences (see for example \cite[Lemma 6.1]{go07b}, \cite{sa03} and \cite[Theorem 4.3]{br04} in addition to the known asymptotic estimate by Stirling's approximation), which on the other hand would have been sufficient to prove Theorem $3.5$.
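As a side remark, we note that the case $\sigma=1/2$ can also be checked directly, using only the reflection property $\overline{\Gamma(z)}=\Gamma(\bar{z})$ and not the inequalities $(11)$--$(12)$ (we record this elementary verification only for the reader's convenience): if $s=1/2+it$, then $\frac{1-s}{2}=\overline{\frac{s}{2}}$ and $|\pi^{s-1/2}|=\pi^{0}=1$, so that $$ |F(s)|=|\pi^{s-1/2}|\cdot\frac{\left|\Gamma\left(\frac{1-s}{2}\right)\right|}{\left|\Gamma\left(\frac{s}{2}\right)\right|}=\frac{\left|\Gamma\left(\overline{\tfrac{s}{2}}\right)\right|}{\left|\Gamma\left(\tfrac{s}{2}\right)\right|}=1, $$ in accordance with the first alternative of Theorem $3.7$.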
We remark that theorems $3.5$ and $3.7$ are not specific to $\zeta$ per se: it is not hard, for example, to build from $\zeta$ other analytic functions which satisfy the functional equation and the property of being real on the real axis (and which might also have zeros not on the critical line), by multiplying it with an expression of the type $f(z)f(1-z)\overline{f(\bar{z})}\overline{f(1-\bar{z})}$. As \cite{sa03} and \cite{sa04} say, the Riemann Hypothesis would follow if we could improve these results and obtain a strict inequality ($|\zeta(\sigma+it)|>|\zeta(1-\sigma+it)|$ for $0<\sigma<1/2$ and $t>2\pi+1$), since roots have to be symmetric with respect to the critical line. Let us see now how theorem $3.5$ can lead to other formulations of the Riemann Hypothesis. Let us suppose that the multiplicity of $s_0$ is $m$ (as well as that of $1-s_0$, as we already remarked in the previous section). Since $\zeta(s)$ and $\zeta(1-s)$ are analytic at $s_0$, the following power series expansions hold for $s$ in a neighbourhood of $s_0$: $$ \zeta(s)=\sum_{n=0}^{\infty}\frac{\zeta^{(n)}(s_0)}{n!}(s-s_0)^n, $$ $$ \zeta(1-s)=\sum_{n=0}^{\infty}\frac{(-1)^n\zeta^{(n)}(1-s_0)}{n!}(s-s_0)^n. $$ Therefore $$ |F(s_0)|=\left|\lim_{s \to s_0} \frac{\zeta(s)}{\zeta(1-s)}\right|= \left|\frac{\zeta^{(m)}(s_0)}{\zeta^{(m)}(1-s_0)}\right|. $$ By $(6)$, $\overline{\zeta(s)}=\zeta(\bar{s})$ for any $s$, so that $|\zeta^{(m)}(s)|=|\zeta^{(m)}(\bar{s})|$. We then also have $$ |F(s_0)|= \left|\frac{\zeta^{(m)}(s_0)}{\zeta^{(m)}(1-\bar{s_0})}\right|. $$ Thus we can write the following propositions: \begin{prop} The Riemann Hypothesis is true if and only if $\left|\frac{\zeta^{(m)}(s_0)}{\zeta^{(m)}(1-\bar{s_0})}\right|=1$. \end{prop} and \begin{prop} The Riemann Hypothesis is true if and only if $\left|\frac{\zeta^{(m)}(s_0)}{\zeta^{(m)}(1-s_0)}\right|=1$. \end{prop} We can write explicit expansions for these derivatives. For example we can look at $F(s)$ with the representation of $\zeta$ as in $(5)$. Then $F(s)=\frac{\phi(s)}{\phi(1-s)}\cdot\frac{(1-2^{s})}{(1-2^{1-s})}$, where $F(s_0)$, as above, is clearly meant to be its analytic continuation at $s_0$. Since there is only one possible analytic continuation and since the second fraction is a nonzero constant at $s_0$, then $\frac{\phi(s_0)}{\phi(1-s_0)}\doteq \lim_{s \to s_0} \frac{\phi(s)}{\phi(1-s)}$ should also be some nonzero constant. And, similarly as above, $$ \left|\lim_{s \to s_0} \frac{\phi(s)}{\phi(1-s)}\right|= \left|\frac{\phi^{(m)}(s_0)}{\phi^{(m)}(1-s_0)}\right|. $$ By the analytic properties of Dirichlet series (see \cite[section 11.7]{ap76}), we can write (differentiating term by term) for $\Re(s)>0$: $$ \phi^{(m)}(s)=\sum_{n=1}^{\infty}\frac{(-1)^{n+m-1}(\log n)^m}{n^s}. $$ So we would like to evaluate: $$ \lim_{N\to\infty} \left| \frac{\sum_{n=1}^{N}(-1)^{n+m-1}(\log n)^m n^{-s_0}}{\sum_{n=1}^{N}(-1)^{n+m-1}(\log n)^m n^{-1+s_0}} \right| $$ (the limit of the denominator is nonzero, so that the limit of the quotient is the quotient of the limits). We can then rephrase the previous proposition in the following way: \begin{prop} Suppose that $$ \sum_{n=1}^{\infty}(-1)^{n-1} n^{-s_0}=\sum_{n=1}^{\infty}(-1)^{n-1} n^{-1+s_0}=0 $$ with $s_0$ a zero of order $m$. Then $$ |\sum_{n=1}^{\infty}(-1)^{n+m-1}(\log n)^m n^{-s_0}|\cdot\left|\frac{(1-2^{s_0})}{(1-2^{1-s_0})}\right|=|\sum_{n=1}^{\infty}(-1)^{n+m-1}(\log n)^m n^{-1+s_0}| $$ holds if and only if the Riemann Hypothesis is true.
\end{prop} Clearly a similar argument would hold if we consider $1-\bar{s_0}$ instead. On the other hand it would be sufficient, to prove the Riemann Hypothesis, to show that $|F(s_0)|$ or $$ \lim_{N\to\infty} \left| \frac{\sum_{n=1}^{N}(-1)^{n+m-1}(\log n)^m n^{-s_0}}{\sum_{n=1}^{N}(-1)^{n+m-1}(\log n)^m n^{-1+s_0}} \right| $$ is going to be $0$ or $\infty$ (thus not a nonzero constant) if $s_0$ is not on the critical line. To investigate whether $|F(s_0)|$ is equal to $1$, we can consider also the function we built in the previous section through the approximants $H_N$. For example showing that $H^{|F(s_0)|,|F(1-s_0)|}(s)$, which is continuous in $s_0$, is also continuous at least in $1-s_0$, would prove that $|F(s_0)|=1$, since we would have $|F(1-s_0)|=|F(s_0)|$, because of continuity and lemma $2.12$, but on the other hand $|F(s_0)|=1/|F(1-s_0)|$. Or showing that there exists a constant $k$ such that $H^{|F(s)|,|F(s)|}(s)=H^{k,k}(s)$ for all $s$ in the strip would also imply $|F(s_0)|=1$. (Clearly this holds locally for a neighboorhood: being $s$ in a sufficiently small neighboorhood of $s_*$ ensures us that, if $s_*$ is a zero, then by $(4)$, $\zeta(s)$ will not be zero, and, if $s_*$ is not a zero, then by continuity $\zeta(s)$ will not be zero either. So we are in the same situation as in the previous propositions $2.10$ and $2.11$, where what essentially counts is that the $H$-subscript $N$ or $f(N,s)$ tends to infinity with $N$.) In other words in this case Theorem $3.5$ can be reformulated as \begin{prop} The Riemann Hypothesis is true if and only if $H^{1,1}(s)$ is continuous throughout the strip. \end{prop} Notice that we can also state the following, which though is essentially weaker, as the continuity in this case appears to be much less at hand. \begin{prop} The Riemann Hypothesis is true if and only if $\lim_N \frac{H_N(s)}{H_N(1-s)}$ is continuous. \end{prop} \begin{proof} If the Riemann Hypothesis is true, then there are only zeros on the critical line, where $\lim_N \frac{H_N(s)}{H_N(\bar{s})}=1$ since, for each $N$, $H_N(s)$ is the conjugate of $H_N(\bar{s})$ and so it has the same modulus. On the other side, we show that, if the Riemann Hypothesis were not true, then the limit would not be continuous. This is because in a zero out of the critical line, if we compute similarly as done in lemma $2.12$, the limit would now be $0$ or $\infty$, while we know that $F(s)$ is always nonzero. \end{proof} \section{Acknowledgements} I would like to thank everybody who has somehow helped me, even if I won't mention him personally, from the colleagues with whom I had fruitful discussions to the people who read part of or the whole manuscript, those who introduced me to the subject and everyone who has encouraged or supported me. Many thanks, among the others, to: Joachim Rosenthal, Jens Zumbr\"agel, Felix Fontein, G\'erard Maze, Alessio Martini, Emanuele Spadaro, Ivan Contreras, Elisa Gorla, Patrick Sol\'e, Ashkan Nikeghbali, Christopher Hughes, Michele Elia, Pietro Peterlongo. \end{document}
\begin{document} \date{} \author{Halise Melis Ak\c{c}in and Ali Erdo\u{g}an } \title{Some Results on Betti Series of Universal Modules of Differential Operators} \maketitle \begin{abstract} In this article, we discuss the rationality of the Betti series of $\Omega _{n}(R_{m})$, where $\Omega _{n}(R_{m})$ denotes the universal module of $n$th order derivations of $R_{m}$. We prove that if $R$ is the coordinate ring of an irreducible affine curve represented by $\frac{k[x_{1},x_{2},...,x_{s}] }{(f)}$ and if it has at most one singular point, then the Betti series of $ \Omega _{n}(R_{m})$ is rational, where $m$ is a maximal ideal of $R$. \end{abstract} \let\thefootnote\relax\footnote{\emph{Key Words: universal differential operator modules, minimal resolution } \emph{Mathematics subject classification 2010: 13N05}} \section{Introduction and Preliminaries} The following notations will be fixed throughout the paper: ``ring'' means a commutative ring with identity, and $R$ is a commutative $k$-algebra where $k$ is an algebraically closed field of characteristic zero. \newline An $n$th order $k$-derivation $D$ of $R$ into an $R$-module $F$ is an element of $Hom_{k}(R,F)$ such that for any $n+1$ elements $r_0, r_1,\ldots, r_{n}$ of $R$, the following identity holds: \begin{center} $D(r_0r_1\ldots r_{n})=\sum\limits_{i=1}^{n}(-1)^{i-1} \sum\limits_{j_1<j_2<\ldots<j_{i}}r_{j_1}\ldots r_{j_{i}}D(r_0\ldots \hat{r}_{j_1}\ldots \hat{r}_{j_{i}}\ldots r_{n})$ \end{center} where a hat over an $r_{j}$ means that it is omitted. It can be easily seen that a first order derivation is an ordinary derivation of $R$ into an $R$-module $F$. In \cite{Nak2}, a universal object for $n$th order derivations is constructed in the following way: Consider the exact sequence \begin{equation*} 0\rightarrow I\rightarrow R\otimes _{k}R\text{ }\overset{\varphi }{ \rightarrow }\text{ }R\rightarrow 0 \end{equation*} where $\varphi $ is defined as $\varphi (\sum\limits_{i=1}^{n}a_{i}\otimes b_{i})=\sum\limits_{i=1}^{n}a_{i}b_{i}$ for $a_{i},b_{i}\in R$ and $I$ is the kernel of $\varphi$. It is known that $\ker \varphi$ is generated by the set \begin{center} \{$1\otimes r-r\otimes 1:r\in R$\} \end{center} as an $R$-module. Then the mapping $d_{n}$ from $R$ into $I/I^{n+1}$ given by \begin{equation*} d_{n}(r)=1\otimes r-r\otimes 1+I^{n+1} \text{ and } d_{n}(1)=0 \end{equation*} is called the universal derivation of order $n$; that is, any $n$th order derivation $D$ from $R$ into $F$ can be factored through $I/I^{n+1}$, where $F$ is an $R$-module. Here, the $R$-module $I/I^{n+1}$ is called the universal module of $n$th order derivations and is denoted by $\Omega _{n}(R)$. Note that if $R$ is a finitely generated $k$-algebra, then $\Omega _{n}(R)$ is a finitely generated $R$-module. It is proved in \cite[Prop. 2]{Nak2} that if $R=k[x_{1},...,x_{s}]$ is a polynomial algebra over $k$ with $s$ variables, then $\Omega _{n}(R)$ is a free $R$-module of rank $\binom{n+s}{s} -1$ with basis \begin{equation*} \left\{ d_{n}(x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}...x_{s}^{\alpha _{s}}): \text{ }1\leq \alpha _{1}+\alpha _{2}+...+\alpha _{s}\leq n\right\} \end{equation*} and in \cite[Theo. 9]{Nak2} that \begin{center} $\Omega _{n}(R)\otimes _{R}R_{S}\cong \Omega _{n}(R_{S})$ \end{center} where $S$ is a multiplicatively closed subset of $R$.
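As a small illustration of this rank formula (a direct instance of the result just cited, recorded only for the reader's convenience), take $s=2$ and $n=2$, so that $R=k[x_{1},x_{2}]$: then $\Omega _{2}(R)$ is a free $R$-module of rank $\binom{2+2}{2}-1=5$, with basis \begin{center} $\left\{ d_{2}(x_{1}),\ d_{2}(x_{2}),\ d_{2}(x_{1}^{2}),\ d_{2}(x_{1}x_{2}),\ d_{2}(x_{2}^{2})\right\}$. \end{center}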
A free resolution of $\Omega _{n}(R)$, where $R$ is a local $k$-algebra with maximal ideal $m$, \begin{equation*} \ldots \rightarrow F_{2}\text{ }\overset{\partial _{2}}{\rightarrow }\text{ } F_{1}\overset{\partial _{1}}{\rightarrow }F_{0}\text{ }\overset{\varepsilon } {\rightarrow }\text{ }\Omega _{n}(R)\rightarrow 0 \end{equation*} is called a minimal resolution if the following conditions are satisfied: the $F_{i}$'s are free $R$-modules of finite rank for all $i$, and $\partial _{n}(F_{n})\subseteq mF_{n-1}$ for all $n\geq 1$ (see \cite{Ku} for the definition). Let $(R,m)$ be a local ring. The Betti series of $\Omega_{n}(R)$ is defined to be the series \begin{equation*} B(\Omega _{n}(R),t)=\underset{i\geq 0}{\sum }\dim _{R/m}Ext^{i}(\Omega _{n}(R),\frac{R}{m})t^{i}\text{ for all }n\geq 1. \end{equation*} \begin{lemma} Let $R$ be a local ring with maximal ideal $m$ and $M$ be a finitely generated $R$-module. Suppose that \begin{equation*} 0\rightarrow F_{1}\overset{\partial }{\rightarrow }F_{0}\rightarrow M\rightarrow 0 \end{equation*} is a minimal resolution of $M$. Then $Ext^{1}(M,R/m)$ is not zero. \end{lemma} Dealing with the $n$th order case presents extra difficulties. Let $R=k[x,y,z]$ be a $k$-algebra with $z^2=x^3$ and $y^2=xz$. It is shown in \cite[ex. 3.1.6 and 3.4.7]{Er3} that $pd(\Omega_1(R))\leq1$, but $ pd(\Omega_2(R))$ is not finite. In \cite{Er4}, the following proposition is proved: \begin{proposition} Let $R=k[x_{1}, \ldots, x_{s}]$ and $S=k[y_{1}, \ldots, y_{t}]$ be polynomial algebras and let $I$ be an ideal of $R$ generated by elements $ \{f_{1}, \ldots, f_{m}\}$. Assume that $R/I=k[x_{1}, \ldots, x_{s}]/ (f_{1}, \ldots, f_{m})$ is an affine $k$-algebra of dimension $s-m$ and $ pd(\Omega_{2}(R/I))\leq 1$. Then $pd(\Omega_{2}(R/I\otimes_{k}S))\leq 1$. \end{proposition} But, unfortunately, this result is not true even for $n=3$. So, two natural questions arise from these results. Can we generalize the dimension of the ring $R$? Can we generalize the dimension of the universal module $\Omega_{n}$? In \cite{Er1} and \cite{Mel1}, the following question is studied: \begin{center} When is the Betti series of a universal module of second order derivations rational? \end{center} Our goal is to establish an analogous result for $n$th order universal differential operator modules. \section{Main Results} \begin{proposition} Let $k[x_1,x_2,\ldots, x_{s}]$ be a polynomial algebra and $m$ be a maximal ideal of $k[x_1,x_2,\ldots, x_{s}]$ containing an irreducible element $f$. If the elements $d_{n}(x_1^{\alpha_1}x_2^{\alpha_2} \ldots x_{s}^{\alpha_{s}}f)$ belong to $m\Omega_{n}(k[x_1,x_2,\ldots, x_{s}]) $ whenever $0\leq \alpha_1+ \alpha_2 + \ldots + \alpha_{s} \leq n-1$, then $ \Omega_{n}(\frac{k[x_1,x_2,\ldots, x_{s}]}{(f)})_{\overset{-}{m}}$ admits a minimal resolution of $(\frac{k[x_1,x_2,\ldots, x_{s}]}{(f)})_{\overset{-}{m}}$-modules, where $\overset{-}{m}=m/(f)$ is a maximal ideal of $\frac{ k[x_1,x_2,\ldots, x_{s}]}{(f)}$. \end{proposition} \begin{proof} Let $R=S/I=\frac{k[x_1,x_2,\ldots, x_{s}]}{(f)} $ and $\overset{-}{m}$ be a maximal ideal of $R$. Then by \cite[Theo. 14 pg.
24]{Nak2} we have the following short exact sequence of $R$-modules \begin{equation} \label{seq} \xymatrix{0 \ar[r] & \frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)} \ar[r] & \frac{\Omega_{n}(S)}{f\Omega_{n}(S)} \ar[r]^{\alpha} & \Omega_{n}(R) \ar[r] & 0 } \end{equation} where $N$ is the submodule of $\Omega_{n}(S)$ generated by the set \begin{center} $\{d_{n}(g): g \in fk[x_1,x_2,\ldots, x_{s}]\}$. \end{center} By localizing (\ref{seq}) at $\overset{-}{m}$, we get the following exact sequence of $R_{\overset{-}{m}}$-modules: \begin{equation} \label{seq1} \xymatrix{0 \ar[r] & (\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}} \ar[r] & \frac{\Omega_{n}(S)}{f\Omega_{n}(S)}_{\overset{-}{m}} \ar[r]^{\alpha_{\overset{-}{m}}} & \Omega_{n}(R)_{\overset{-}{m}} \ar[r] & 0 }. \end{equation} \textbf{Step 1.} The module generated by the set $\{d_{n}(g): g \in f k[x_1,x_2,\ldots, x_{s}]\}$ is a submodule of $m\Omega_{n}(k[x_1,x_2,\ldots, x_{s}])$. \textit{Proof of Step 1}. Since $d_{n}$ is $k$-linear, it suffices to show \begin{center} $d_{n}(x_1^{\alpha_1}x_2^{\alpha_2} \ldots x_{s}^{\alpha_{s}}f)\in m\Omega_{n}(k[x_1,x_2,\ldots, x_{s}])$. \end{center} By using the properties of $d_{n}$, we get $d_{n}(x_1^{\alpha_1}x_2^{\alpha_2} \ldots x_{s}^{\alpha_{s}}f)=\underset{ \gamma}{\Sigma}a_{\gamma}(x_1,x_2,\ldots, x_{s})d_{n}(x_1^{\gamma_1}x_2^{\gamma_2} \ldots x_{s}^{\gamma_{s}}f)$ $+f\left(\underset{\beta}{\Sigma}a^{\prime }_{\beta}(x_1,x_2,\ldots, x_{s})d_{n}(x_1^{\beta_1}x_2^{\beta_2} \ldots x_{s}^{\beta_{s}})\right)$ where $a_{\gamma}(x_1,x_2,\ldots, x_{s}), a^{\prime }_{\beta}(x_1,x_2,\ldots, x_{s})\in k[x_1,x_2,\ldots, x_{s}] $, $0\leq \gamma_1+\gamma_2+\ldots +\gamma_{s}\leq n-1$, $0< \beta_1+ \beta_2+\ldots + \beta_{s}\leq n $. By assumption, we know that \begin{center} $d_{n}(x_1^{\gamma_1}x_2^{\gamma_2} \ldots x_{s}^{\gamma_{s}}f) \in m\Omega_{n}(k[x_1,x_2,\ldots, x_{s}])$ \end{center} whenever $0\leq \gamma_1+ \gamma_2 + \ldots + \gamma_{s} \leq n-1$; since also $f\in m$, the result follows. \textbf{Step 2.} $(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m} }\subseteq \overset{-}{m}(\frac{\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{ m}}$. \textit{Proof of Step 2}. By Step 1, we know $N\subseteq m\Omega_{n}(S)$, and the rest is clear. \textbf{Step 3.} $(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}}$ is generated by $\binom{n+s-1}{s}$ elements. \textit{Proof of Step 3}. It is known that $(\frac{N+f\Omega_{n}(S)}{ f\Omega_{n}(S)})$ is generated by the set \begin{center} $\{d_{n}(x_1^{\alpha_1}x_2^{\alpha_2} \ldots x_{s}^{\alpha_{s}}f)+ f\Omega_{n}(S): 0\leq \alpha_1+ \alpha_2+ \ldots + \alpha_{s}\leq n-1\}$, \end{center} and this set has $\binom{n+s-1}{s}$ elements. \textbf{Step 4.} $(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}}$ is a free $R_{\overset{-}{m}}$-module. \textit{Proof of Step 4}. The Krull dimension of $R_{\overset{-}{m}}$ is $s-1$; let $K$ be the field of fractions of $R_{\overset{-}{m}}$. Then, by tensoring the exact sequence in (\ref{seq1}) with $K$, we get \begin{equation} \label{seq2} \xymatrix{0 \ar[r] & K \otimes_{R_{\overset{-}{m}}}(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{ \overset{-}{m}} \ar[r] & K \otimes_{R_{\overset{-}{m}}} (\frac{\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}} \ar[r]^{\alpha_{\overset{-}{m}}} & K \otimes_{R_{\overset{-}{m}}} \Omega_{n}(R)_{\overset{-}{m}} \ar[r] & 0 }.
\end{equation} We know that $(\frac{\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}}$ is a free $R_{\overset{-}{m}}$-module of rank $\binom{n+s}{s}-1$. By using the isomorphism \begin{center} $K\otimes_{R_{\overset{-}{m}}}\Omega_{n}({R_{\overset{-}{m}}})\cong \Omega_{n}(K)$, \end{center} we have \begin{center} $\dim K \otimes_{R_{\overset{-}{m}}}\left(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)}\right)_{\overset{-}{m}}= \dim K \otimes_{R_{\overset{-}{m}}} \left(\frac{\Omega_{n}(S)}{f\Omega_{n}(S)}\right)_{\overset{-}{m}}-\dim \Omega_{n}(K) =\binom{n+s}{s}-\binom{n+s-1}{s-1}=\binom{n+s-1}{s}$. \end{center} Since the rank of $(\frac{N+f\Omega_{n}(S)}{f\Omega_{n}(S)})_{\overset{-}{m}} $ is equal to its number of minimal generators, it is a free $R_{\overset{-}{m}}$-module. Therefore, the short exact sequence given in (\ref{seq1}) is a minimal resolution for $\Omega_{n}({R_{\overset{-}{m}}})$. \end{proof} If $R$ is a finitely generated regular $k$-algebra and $m$ is a maximal ideal of $R$, then $\Omega_{n}({R_{m}})$ is a free $R_{m}$-module, and it is then clear that $B(\Omega_{n}(R_{m}),t)$ is rational. \begin{theorem} Let $k[x_1,x_2,\ldots, x_{s}]$ be a polynomial algebra and $m$ be a maximal ideal of $k[x_1,x_2,\ldots, x_{s}]$ containing an irreducible element $f$. Let $d_{n}(x_1^{\alpha_1}x_2^{\alpha_2} \ldots x_{s}^{\alpha_{s}}f)\in m\Omega_{n}(k[x_1,x_2,\ldots, x_{s}])$ for $0\leq \alpha_1+ \alpha_2 + \ldots + \alpha_{s} \leq n-1$. Assume that $ R=\frac{k[x_1,x_2,\ldots, x_{s}]}{(f)}$ is not a regular ring at $\overset{-}{m}=m/(f)$. Then $B(\Omega_{n}(R_{\overset{-}{m}}),t)$ is a rational function. \end{theorem} \begin{proof} By the previous proposition, the exact sequence of $R_{\overset{-}{m}}$-modules in (\ref{seq1}) is a minimal resolution of $\Omega_{n}({R_{\overset{-}{m}}})$, and the result follows. \end{proof} \textbf{Question:} It would be interesting to know whether the Betti series is rational for the algebra $R=k[x,y,z]$ with $z^2=x^3$ and $y^2=xz$. \textbf{Acknowledgement} This work is a part of the first author's PhD thesis. The first author is grateful to TUBITAK for their financial support during her visit to the University of Sheffield. Ali Erdogan \\ Hacettepe University \\ Department of Mathematics \\ Ankara,Turkey \\ e-mail:[email protected] Halise Melis Akcin \\ Hacettepe University \\ Department of Mathematics \\ Ankara,Turkey \\ e-mail:[email protected] \end{document}
\begin{document} \title{A weak solution to a one-Laplace system perturbed by the $p$-Laplacian is continuously differentiable} \begin{abstract} In this paper we aim to show continuous differentiability of weak solutions to a one-Laplace system perturbed by the $p$-Laplacian with $1<p<\infty$. The main difficulty with this system is that uniform ellipticity breaks down near a facet, the place where a gradient vanishes. We would like to prove that derivatives of weak solutions are continuous even across the facets. This is possible by estimating H\"{o}lder continuity of the Jacobian matrix multiplied by its modulus truncated near zero. To show this estimate, we consider an approximating system, and use standard methods including De Giorgi's truncation and freezing coefficient arguments. \end{abstract} \bigbreak \textbf{Mathematics Subject Classification (2020)} 35B65, 35J47, 35J92 \bigbreak \textbf{Keywords} $C^{1}$-regularity, De Giorgi's truncation, freezing coefficient method \section{Introduction}\label{Section: Introduction} In this paper, we consider a very singular elliptic system given by \[-b\mathop{\mathrm{div}} \left(\lvert Du\rvert^{-1}\nabla u^{i}\right)-\mathop{\mathrm{div}}\left(\lvert Du\rvert^{p-2}\nabla u^{i} \right)=f^{i}\quad \textrm{for }i\in\{\,1,\,\dots\,,\,N\,\}\] in a fixed domain $\Omega\subset {\mathbb R}^{n}$ with a Lipschitz boundary. Here \(b\in(0,\,\infty),\,p\in(1,\,\infty)\) are fixed constants, and the dimensions \(n,\,N\) satisfy \(n\ge 2,\,N\ge 1\). For each $i\in\{\,1,\,\dots\,,\,N\,\}$, the scalar functions \(u^{i}=u^{i}(x_{1},\,\dots\,,\,x_{n})\) and \(f^{i}=f^{i}(x_{1},\,\dots\,,\,x_{n})\) are respectively unknown and given in \(\Omega\). The vector $\nabla u^{i}=(\partial_{x_{1}}u^{i},\,\dots\,,\,\partial_{x_{n}}u^{i})$ denotes the gradient of the scalar function $u^{i}$ with $\partial_{x_{\alpha}}u^{i}=\partial u^{i}/\partial x_{\alpha}$ for $i\in\{\,1,\,\dots\,,\,N\,\},\,\alpha\in\{\,1,\,\dots\,,\,n\,\}$. The matrix $Du=(\partial_{x_{\alpha}}u^{i})_{i,\,\alpha}=(\nabla u^{i})_{i}$ denotes the $N\times n$ Jacobian matrix of the mapping $u=(u^{1},\,\dots\,,\,u^{N})$. Also, the divergence operator $\mathop{\mathrm{div}} X=\sum_{j=1}^{n}\partial_{x_{j}}X^{j}$ is defined for an ${\mathbb R}^{n}$-valued vector field $X=(X^{1},\,\dots\,,\,X^{n})$ with $X^{j}=X^{j}(x_{1},\,\dots\,,\,x_{n})$ for $j\in\{\,1,\,\dots\,,\,n\,\}$. This elliptic system is denoted by \begin{equation}\label{Eq (Section 1) 1+p-Laplace} L_{b,\,p}u\coloneqq -b\mathop{\mathrm{div}} \mleft(\lvert Du\rvert^{-1}Du\mright)-\mathop{\mathrm{div}}\mleft(\lvert Du\rvert^{p-2}Du\mright)=f\quad\textrm{in }\Omega \end{equation} with $f\coloneqq (f^{1},\,\dots\,,\,f^{N})$, or more simply by $-b\Delta_{1}u-\Delta_{p}u=f$. Here the divergence operators \(\Delta_{1}\) and \(\Delta_{p}\), often called the one-Laplacian and the $p$-Laplacian respectively, are given by \[\Delta_{1}u\coloneqq \mathrm{div}\,\mleft(\lvert Du\rvert^{-1}Du\mright),\quad\Delta_{p}u\coloneqq \mathrm{div}\,\mleft(\lvert Du\rvert^{p-2}Du\mright).\] In the case $b=0$, the system (\ref{Eq (Section 1) 1+p-Laplace}) becomes the so-called $p$-Poisson system. For this problem, H\"{o}lder regularity of $Du$ is well-established in both the scalar and system cases \cite{MR709038}, \cite{MR672713}, \cite{MR721568}, \cite{MR727034}, \cite{MR474389}, \cite{MR0244628} (see also \cite{MR1230384}, \cite{MR743967}, \cite{MR814022}, \cite{MR783531} for parabolic $p$-Laplace problems).
In the limiting case $p=1$, H\"{o}lder continuity of $Du$ generally fails even if $f$ is sufficiently smooth. In fact, even in the simplest case $n=N=1$, any absolutely continuous non-decreasing function of one variable $u=u(x_{1})$ satisfies $-\Delta_{1}u=0$. In particular, even for the one-Laplace equation, it seems impossible in general to show continuous differentiability ($C^{1}$-regularity) of weak solutions. This is essentially because the ellipticity of the one-Laplacian degenerates in the direction of $Du$, unlike that of the $p$-Laplacian. Also, in the multi-dimensional case $n\ge 2$, the diffusivity of $\Delta_{1}u$ is non-degenerate in directions that are orthogonal to $Du$. It should be mentioned that this ellipticity becomes singular on a facet $\{Du=0\}$, the place where a gradient vanishes. Since the one-Laplacian $\Delta_{1}$ contains anisotropic diffusivity, consisting of degenerate and singular ellipticity, this operator seems analytically difficult to handle within existing elliptic regularity theory. Therefore, it is a non-trivial problem whether a solution to (\ref{Eq (Section 1) 1+p-Laplace}), which is a one-Laplace system perturbed by the $p$-Laplacian, is in $C^{1}$. In the special case where a solution is both scalar-valued and convex, Giga and the author answered this problem affirmatively in a paper \cite{giga2021continuity}. Most of the arguments therein, based on convex analysis and a strong maximum principle, are rather elementary, although the strategy may not work for the system case. After that work was completed, the author has found it possible to show $C^{1}$-regularity of weak solutions without convexity of solutions. Instead, we would like to use standard methods in elliptic regularity theory, including a freezing coefficient argument and De Giorgi's truncation. This approach is valid even for the system problem, and the main purpose of this paper is to establish $C^{1}$-regularity results in the vectorial case $N\ge 2$. It is worth mentioning that for the scalar case $N=1$, $C^{1}$-regularity results have been established in the author's recent work \cite{T-scalar}, where a generalization of the operator $\Delta_{1}$ is also discussed. Although the basic strategy in this paper is the same as in \cite{T-scalar}, some computations in this paper are rather simpler, since the diffusion operator is assumed to have a symmetric structure, often called the Uhlenbeck structure. It should be recalled that this structure is often required when one considers everywhere regularity for vector-valued problems. For this reason, we do not try to generalize $\Delta_{1}$ in this system problem. More generally, in this paper we would like to discuss an elliptic system \begin{equation}\label{Eq (Section 1) main system} {\mathcal L}u\coloneqq -b\Delta_{1}u-{\mathcal L}_{p}u=f\quad\textrm{in }\Omega, \end{equation} where ${\mathcal L}_{p}$ generalizes $\Delta_{p}$. The detailed conditions on ${\mathcal L}_{p}$ are described later in Section \ref{Subsect: Generalized results in system}. \subsection{Our strategy} We would like to briefly describe our strategy in this paper. It should be mentioned that even for the system problem, the strategy itself is the same as in the scalar case \cite{T-scalar}. For simplicity, we consider the model case (\ref{Eq (Section 1) main system}). Our result is \begin{theorem}\label{Theorem: A weak solution is continuously differentiable} Let $p\in(1,\,\infty),\,q\in(n,\,\infty\rbrack$, and $f\in L^{q}(\Omega;\,{\mathbb R}^{N})$.
Assume that $u$ is a weak solution to the system (\ref{Eq (Section 1) 1+p-Laplace}). Then, $u$ is continuously differentiable. \end{theorem} The main difficulty on showing $C^{1}$-regularity is that the system (\ref{Eq (Section 1) 1+p-Laplace}) becomes non-uniformly elliptic near a facet. To explain this, we compute the Hessian matrix $D_{\xi}^{2}E(\xi_{0})$ for $\xi_{0}\in{\mathbb R}^{Nn}\setminus\{ 0\}$, where $E(\xi)\coloneqq b\lvert \xi\rvert+\lvert\xi\rvert^{p}/p\,(\xi\in{\mathbb R}^{Nn})$ denotes the energy density. As a result, this $Nn\times Nn$ real symmetric matrix satisfies \begin{align*} (\textrm{ellipticity ratio of $D_{\xi}^{2}E(\xi_{0})$})&\coloneqq \frac{(\textrm{the largest eigenvalue of $D_{\xi}^{2}E(\xi_{0})$})}{(\textrm{the lowest eigenvalue of $D_{\xi}^{2}E(\xi_{0})$})}\\&\le \frac{1+b\delta^{1-p}}{p-1}\eqqcolon {\mathcal R}(\delta) \end{align*} for all $\xi_{0}\in{\mathbb R}^{Nn}$ with $\lvert \xi_{0}\rvert>\delta>0$ with $\delta$ sufficiently close to $0$. It should be mentioned that the bound ${\mathcal R}(\delta)$ will blow up as $\delta$ tends to $0$. In other words, the system (\ref{Eq (Section 1) 1+p-Laplace}) becomes non-uniformly elliptic as $Du$ vanishes. In particular, it will be difficult to show H\"{o}lder continuity of derivatives across a facet of a solution to (\ref{Eq (Section 1) 1+p-Laplace}). We would like to emphasize that our problem is substantially different from either a non-standard growth problem or a $(p,\,q)$-growth problem, where ellipticity ratios will become unbounded as a gradient blows up \cite{MR2291779}, \cite{MR4258810}. To see our computation above, however, it will be possible that a mapping ${\mathcal G}_{\delta}(Du)$ with \begin{equation}\label{Eq (Section 1) Truncated vector field} {\mathcal G}_{\delta}(\xi)\coloneqq \mleft(\lvert \xi\rvert-\delta\mright)_{+}\frac{\xi}{\lvert \xi\rvert}\quad \textrm{for }\xi\in{\mathbb R}^{Nn},\,\delta\in(0,\,1) \end{equation} is $\alpha$-H\"{o}lder continuous for some constant $\alpha\in(0,\,1)$, which may depend on $\delta$. This observation is expectable because the mapping ${\mathcal G}_{\delta}(Du)$ is supported in a place $\{\lvert Du\rvert>\delta\}$, where the problem (\ref{Eq (Section 1) 1+p-Laplace}) becomes uniformly elliptic. Although this $\alpha=\alpha(\delta)$ might degenerate as we let $\delta\to 0$, we are able to conclude that $Du$ is also continuous. In fact, by the definition of ${\mathcal G}_{\delta}$, it is easy to check that the mapping ${\mathcal G}_{\delta}(Du)$ uniformly converges to ${\mathcal G}_{0}(Du)=Du$ as $\delta\to 0$. Thus, to prove $C^{1}$-regularity of solutions, it suffices to prove continuity of the mapping ${\mathcal G}_{\delta}(Du)$ for each fixed $\delta\in(0,\,1)$. When we show H\"{o}lder continuity of ${\mathcal G}_{\delta}(Du)$, one of the main difficulties is that the system (\ref{Eq (Section 1) 1+p-Laplace}) becomes very singular near facets of solutions. In particular, it seems impossible to justify regularity on second order Sobolev derivatives of solutions across the facets, based on difference quotient methods. Therefore, we will have to relax the very singular operator $L_{b,\,p}$ by regularized operators that are non-degenerate and uniformly elliptic, so that higher regularity on Sobolev derivatives are guaranteed. 
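Before describing the approximate systems, it may be useful to record the elementary computation behind the ellipticity ratio bound stated above; this is only a sketch for the reader's convenience, and the notation $\mathrm{id}$ for the identity matrix on ${\mathbb R}^{Nn}$ and $\xi_{0}\otimes\xi_{0}$ for the corresponding rank-one matrix is introduced only for this remark. A direct differentiation of $E(\xi)=b\lvert\xi\rvert+\lvert\xi\rvert^{p}/p$ gives, for $\xi_{0}\neq 0$, \[D_{\xi}^{2}E(\xi_{0})=\mleft(\frac{b}{\lvert \xi_{0}\rvert}+\lvert \xi_{0}\rvert^{p-2}\mright)\mathrm{id}+\mleft((p-2)\lvert \xi_{0}\rvert^{p-4}-\frac{b}{\lvert \xi_{0}\rvert^{3}}\mright)\xi_{0}\otimes\xi_{0},\] whose eigenvalues are $(p-1)\lvert\xi_{0}\rvert^{p-2}$ in the direction of $\xi_{0}$ and $b\lvert\xi_{0}\rvert^{-1}+\lvert\xi_{0}\rvert^{p-2}$ in the directions orthogonal to $\xi_{0}$. Whichever of the two is the larger, their ratio is bounded by ${\mathcal R}(\delta)$ once $\lvert\xi_{0}\rvert>\delta$ and $\delta$ is sufficiently close to $0$, which is exactly the bound used above.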
For the model case (\ref{Eq (Section 1) 1+p-Laplace}), the approximation problem is given by \begin{equation}\label{Eq (Section 1) Approximating problem special case} L_{b,\,p}^{\varepsilon}u_{\varepsilon}\coloneqq -\mathop{\mathrm{div}}\mleft(\frac{Du_{\varepsilon}}{\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}}\mright)-\mathop{\mathrm{div}}\mleft(\mleft(\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}\mright)^{p/2-1}Du_{\varepsilon}\mright)=f_{\varepsilon}\quad \textrm{for each }\varepsilon\in(0,\,1), \end{equation} where $f_{\varepsilon}$ convergences to $f$ in a weak sense. The relaxed operator $L_{b,\,p}^{\varepsilon}$ naturally appears when one approximates the density $E$ by \[E_{\varepsilon}(z)\coloneqq b\sqrt{\varepsilon^{2}+\lvert z\rvert^{2}}+\frac{1}{p}\mleft(\varepsilon^{2}+\lvert z\rvert^{2}\mright)^{p/2}\quad \textrm{for }\varepsilon\in(0,\,1),\,z\in{\mathbb R}^{Nn}.\] Then, the system (\ref{Eq (Section 1) Approximating problem special case}) is uniformly elliptic, in the sense that \begin{align*} (\textrm{ellipticity ratio of $D_{\xi}^{2}E_{\varepsilon}(\xi_{0})$})&\le C\mleft(1+\mleft(\varepsilon^{2}+\lvert \xi_{0}\rvert^{2}\mright)^{(1-p)/2}\mright)\le C\mleft(1+\varepsilon^{1-p}\mright)\quad \textrm{for all }\xi_{0}\in{\mathbb R}^{Nn}, \end{align*} where the positive constant $C$ depends only on $b$, and $p$. This ellipticity ratio appears to be dominated by $\sqrt{\varepsilon^{2}+\lvert\xi_{0}\rvert^{2}}$, rather than by $\lvert\xi_{0}\rvert$. In particular, to measure ellipticity ratios for (\ref{Eq (Section 1) Approximating problem special case}), it is natural to adapt $V_{\varepsilon}\coloneqq\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}$ as another modulus. For this reason, we have to consider another mapping ${\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})$, where \begin{equation}\label{Eq (Section 1) relaxed truncation} {\mathcal G}_{\delta,\,\varepsilon}(\xi)\coloneqq \left(\sqrt{\varepsilon^{2}+\lvert \xi\rvert^{2}}-\delta\right)_{+}\frac{\xi}{\lvert \xi\rvert}\quad \textrm{for }\xi\in{\mathbb R}^{Nn}\quad \textrm{with }0<\varepsilon<\delta. \end{equation} Then, our problem is reduced to a priori H\"{o}lder estimates of ${\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})$, where $u_{\varepsilon}$ solves (\ref{Eq (Section 1) Approximating problem special case}) with $0<\varepsilon<\delta/4$ and $0<\delta<1$. To obtain these a priori estimates, we appeal to a freezing coefficient argument and De Giorgi's truncation. Roughly speaking, the former method can be applied when $Du_{\varepsilon}$ does not vanish in a suitable sense, and otherwise the latter is fully used. To judge whether $Du_{\varepsilon}$ degenerates or not, we will measure superlevel sets of $V_{\varepsilon}$. It should be mentioned that distinguishing based on sizes of superlevel sets is found in existing works on $C^{1,\,\alpha}$-regularity for $p$-Laplace problems both in elliptic and parabolic cases (see e.g., \cite[Chapter IX]{MR1230384}, \cite{MR2635642}). Our method works in the system case, as long as the energy density $E$ is spherically symmetric. Here we recall a well-known fact that when one considers everywhere regularity of a vector-valued solution, a diffusion operator is often required to have a symmetric structure, called the Uhlenbeck structure. For this reason, generalization of the one-Laplace operator $\Delta_{1}$ is not discussed in this paper (see also \cite[\S 2.4]{T-scalar} for detailed explanations). 
It should be emphasized that this restriction is not necessary in the scalar case. In fact, a generalization of $\Delta_{1}$ is given in the author's recent work \cite{T-scalar}, which focuses only on the scalar case, and deals with more general approximation arguments based on the convolution of standard mollifiers. \subsection{Mathematical models and comparisons to related works on very degenerate problems}\label{Subsect: Models and comparisons} In Section \ref{Subsect: Models and comparisons}, we briefly describe mathematical models. The system (\ref{Eq (Section 1) 1+p-Laplace}) is derived from a minimization problem for the functional \[{\mathcal F}(u)\coloneqq \int_{\Omega}E(Du)\,{\mathrm d}x-\int_{\Omega}\langle f\mid u\rangle\,{\mathrm{d}}x\quad \textrm{with}\quad E(\xi)\coloneqq b\lvert \xi\rvert+\frac{1}{p}\lvert \xi\rvert^{p}\quad \textrm{for }\xi\in{\mathbb R}^{Nn}\] under a suitable boundary condition. Here $\langle\,\cdot\mid\cdot\,\rangle$ denotes the standard inner product in ${\mathbb R}^{N}$. The density $E$ often appears in mathematical modeling of materials, including the motion of Bingham fluids and the growth of crystal surfaces. In a paper \cite{spohn1993surface}, Spohn gave a mathematical model of crystal surface growth under roughening temperatures. From a thermodynamic viewpoint (see \cite{MR3289366} and the references therein), the evolution of a scalar-valued function $h=h(x,\,t)$, denoting the height of a crystal in a two-dimensional domain $\Omega$, is modeled as \[\partial_{t} h+\Delta \mu=0\quad \textrm{with}\quad\Delta=\Delta_{2}.\] Here $\mu$ is a scalar-valued function denoting a chemical potential, and is considered to satisfy the Euler--Lagrange equation \[\mu=-\frac{\delta \Phi}{\delta h}\quad\textrm{with}\quad \Phi(h)=\beta_{1}\int_{\Omega}\lvert\nabla h\rvert\,dx+\beta_{3} \int_{\Omega}\lvert\nabla h\rvert^{3}\,dx,\quad \beta_{1},\,\beta_{3}>0.\] The energy functional $\Phi$ is called a crystal surface energy, whose density is essentially the same as \(E\) with \(p=3\). Finally, the resulting evolution equation for \(h\) is given by \[k\partial_{t}h=\Delta L_{k\beta_{1},\,3}h\quad\textrm{with}\quad k=\frac{1}{3\beta_{3}}.\] When $h$ is stationary, it must satisfy the equation $L_{k\beta_{1},\,3}h=f$, where $f=-k\mu$ is harmonic and therefore smooth. Our Theorem \ref{Theorem: A weak solution is continuously differentiable} implies that the gradient $\nabla h$ is continuous. When $p=2$, the density $E$ also appears when modeling the motion of a material called a Bingham fluid. Mathematical formulations concerning Bingham fluids are found in \cite[Chapter VI]{MR0521262}. In particular, when the motion of the fluid is sufficiently slow, the stationary flow model results in an elliptic system \[\frac{\delta \Psi}{\delta V}\equiv \mu_{2}(L_{\mu_{1}/\mu_{2},\,2}V)=-\nabla \pi\quad \textrm{with }\Psi(V)\coloneqq \mu_{1}\int_{U}\lvert DV\rvert\,{\mathrm{d}}x+\frac{\mu_{2}}{2}\int_{U}\lvert DV\rvert^{2}\,{\mathrm{d}}x.\] Here the unknown ${\mathbb R}^{3}$-valued vector field $V=V(x)$ denotes the velocity of the Bingham fluid in a domain $U\subset{\mathbb R}^{3}$ and satisfies the incompressibility condition $\mathop{\mathrm{div}} V=0$. The other unknown scalar-valued function $\pi=\pi(x)$ denotes the pressure. Bingham fluids combine two different aspects, elasticity and viscosity, and corresponding to these properties, the positive constants $\mu_{1}$ and $\mu_{2}$ respectively appear in the energy functional $\Psi$.
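For completeness, we note that the identification $\frac{\delta\Psi}{\delta V}=\mu_{2}\,L_{\mu_{1}/\mu_{2},\,2}V$ used above follows from a purely formal computation of the first variation (a sketch only, assuming that $V$ is smooth enough and that $DV$ does not vanish, so that all the terms below are well-defined): \[\frac{\delta \Psi}{\delta V}=-\mu_{1}\mathop{\mathrm{div}}\mleft(\frac{DV}{\lvert DV\rvert}\mright)-\mu_{2}\mathop{\mathrm{div}}\mleft(DV\mright)=\mu_{2}\mleft(-\frac{\mu_{1}}{\mu_{2}}\Delta_{1}V-\Delta_{2}V\mright)=\mu_{2}\,L_{\mu_{1}/\mu_{2},\,2}V.\]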
From Theorem \ref{Theorem: A weak solution is continuously differentiable}, we conclude that if the pressure function $\pi$ satisfies $\nabla\pi\in L^{q}(U;\,{\mathbb R}^{3})$ for some $q>3$, then the velocity field $V$ is continuously differentiable. Also, when one considers a stationary laminar Bingham flow in a cylindrical pipe $U=\Omega\times {\mathbb R}\subset{\mathbb R}^{3}$, a scalar problem appears. To be precise, we let $V$ be of the form $V=(0,\,0,\,u(x_{1},\,x_{2}))$, where $u$ is an unknown scalar function. Clearly this flow $V$ is incompressible. Under this setting, we have a resulting elliptic equation \[L_{\mu_{1}/\mu_{2},\,2}u=f\quad \textrm{in }\Omega\subset{\mathbb R}^{2} \quad \textrm{with}\quad f=-\frac{\pi^{\prime}}{\mu_{2}},\] where the pressure function $\pi$ depends only on $x_{3}$, and its derivative $\pi^{\prime}=-\mu_{2} f$ must be constant. Also in this laminar model, from our main result, it follows that the component $u$ is continuously differentiable. The density function $E$ also appears in mathematical modeling of congested traffic dynamics \cite{MR2407018}. There this model results in a minimizing problem \begin{equation}\label{Eq (Section 1) An optimal transport model} \sigma_{\mathrm{opt}}\in \mathop{\mathrm{arg~min}}\mleft\{\int_{\Omega}E(\sigma)\,{\mathrm d}x\mathrel{}\middle|\mathrel{} \left.\begin{array}{c} \sigma\in L^{p}(\Omega;\,{\mathbb R}^{n}),\\ -\mathop{\mathrm{div}} \sigma=f\textrm{ in }\Omega,\,\sigma\cdot\nu=0\textrm{ on }\partial\Omega \end{array} \right. \mright\}. \end{equation} In a paper \cite{MR2651987}, it is shown that the optimal traffic flow $\sigma_{\mathrm{opt}}$ of the problem (\ref{Eq (Section 1) An optimal transport model}) is uniquely given by $\nabla E^{\ast}(\nabla v)$, where $v$ solves an elliptic equation \begin{equation}\label{Eq (Section 1) Very degenerate eq} -\mathop{\mathrm{div}}(\nabla E^{\ast}(\nabla v))=f\in L^{q}(\Omega)\quad \textrm{in }\Omega \end{equation} under a Neumann boundary condition. Here \[E^{\ast}(z)=\frac{1}{p^{\prime}}\mleft(\lvert z\rvert-b\mright)_{+}^{p^{\prime}}\] is the Legendre transform of $E$, and $p^{\prime}\coloneqq p/(p-1)$ denotes the H\"{o}lder conjugate of $p\in(1,\,\infty)$. The problem whether the flow $\sigma_{\mathrm{opt}}=\nabla E^{\ast}(\nabla v)$ is continuous has been answered affirmatively under the assumption $q\in(n,\,\infty\rbrack$. There it is an interesting question whether vector filed ${\mathcal G}_{b+\delta}(\nabla v)$ with $\delta>0$ fixed, defined similarly to (\ref{Eq (Section 1) Truncated vector field}), is continuous. We should note that the equation (\ref{Eq (Section 1) Very degenerate eq}) is also non-uniformly elliptic around the set $\{\lvert \nabla v\rvert\le b\}$, in the sense that the ellipticity ratio of $\nabla^{2}E^{\ast}(z_{0})$ will blow up as $\lvert z_{0}\rvert\to b+0$. However, continuity of truncated vector fields ${\mathcal G}_{b+\delta}(\nabla u)$ with $\delta>0$ is expectable, since there holds \begin{align*} (\textrm{ellipticity ratio of $\nabla^{2}E^{\ast}(z_{0})$})&=\frac{(\textrm{the largest eigenvalue of $\nabla^{2}E^{\ast}(z_{0})$})}{(\textrm{the lowest eigenvalue of $\nabla^{2}E^{\ast}(z_{0})$})}\\&\le (p-1)\left(1+(\delta-b)^{-1}\right) \end{align*} for all $z_{0}\in{\mathbb R}^{n}$ satisfying $\lvert z_{0}\rvert\ge b+\delta$ when $\delta$ is sufficiently close to $0$. This estimate suggests that for each fixed $\delta>0$, the truncated vector field ${\mathcal G}_{b+\delta}(\nabla v)$ should be H\"{o}lder continuous. 
It should be noted that it is possible to show ${\mathcal G}_{b+\delta}(\nabla v)$ uniformly converges to ${\mathcal G}_{b}(\nabla v)$ as $\delta\to 0$, and thus ${\mathcal G}_{b}(\nabla v)$ will be also continuous. When $v$ is scalar-valued, continuity of ${\mathcal G}_{b+\delta}(\nabla v)$ with $\delta>0$ was first shown by Santambrogio--Vespri \cite{MR2728558} in 2010 for the special case $n=2$ with $b=1$. The proof therein is based on oscillation estimates on the Dirichlet energy, which works under the condition $n=2$ only. Later in 2014, Colombo--Figalli \cite{MR3133426} established a more general proof that works for any dimension $n\ge 2$ and any density function $E^{\ast}$, as long as the zero-levelset of $E^{\ast}$ is sufficiently large enough to define a real-valued Minkowski gauge. This Minkowski gauge becomes a basic modulus for judging uniform ellipticity of equations they treated. Here we would like to note that their strategy will not work for our problem (\ref{Eq (Section 1) 1+p-Laplace}), since the density function $E^{\ast}$ seems structurally different from $E$. In fact, in our problem, the zero-levelset of $E$ is only a singleton, and therefore it seems impossible to adapt the Minkowski gauge as a real-valued modulus. The recent work by B\"{o}gelein--Duzaar--Giova--Passarelli di Napoli \cite{BDGPdN} is motivated by extending these regularity results to the vectorial case. There they considered an approximation problem of the form \begin{equation}\label{Eq (Section 1) Relaxed very degenerate equation} -\varepsilon\Delta v_{\varepsilon}-\mathop{\mathrm{div}}\,(D_{\xi}E^{\ast}(Dv_{\varepsilon}))=f\quad \textrm{in }\Omega \end{equation} with $b=1$. The paper \cite{BDGPdN} provides a priori H\"{o}lder continuity of ${\mathcal G}_{1+2\delta}(Dv_{\varepsilon})$ for each fixed $\delta\in(0,\,1)$, whose estimate is independent of an approximation parameter $\varepsilon\in(0,\,1)$. It will be worth noting that the modulus $\lvert Dv_{\varepsilon}\rvert$, which measures ellipticity ratio of (\ref{Eq (Section 1) Relaxed very degenerate equation}), has a symmetric structure. This fact appears to be fit to prove everywhere H\"{o}lder continuity of ${\mathcal G}_{1+2\delta}(Dv_{\varepsilon})$ for the system case. Although our proofs on a priori H\"{o}lder estimates are inspired by \cite[\S 4--7]{BDGPdN}, there are mainly three different points between our proofs and theirs. The first is how to approximate systems. To be precise, our regularized problem (\ref{Eq (Section 1) Approximating problem special case}) is based on relaxing the principal part $L_{b,\,p}u$ itself. For this reason, we should treat a different mapping ${\mathcal G}_{2\delta,\,\varepsilon}$, instead of ${\mathcal G}_{2\delta}$. Compared with ours, for their approximation problems (\ref{Eq (Section 1) Relaxed very degenerate equation}), the principal part itself is not changed at all, and thus it seems not necessary to introduce another mapping than ${\mathcal G}_{2\delta}$. The second lies in structural differences between the densities $E$ and $E^{\ast}$. In particular, when showing a Campanato-type energy growth estimate, most of our computations given in Section \ref{Section: Campanato estimates} will differ from theirs given in \cite[\S 4--5]{BDGPdN}. 
The third is that we have to control an approximation parameter $\varepsilon$ by a truncation parameter $\delta$, and in fact, we will restrict $\varepsilon\in(0,\,\delta/4)$ when showing a priori H\"{o}lder continuity of ${\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})$. This restriction will be necessary because we have to deal with two different symmetric moduli $\lvert Du_{\varepsilon}\rvert$ and $V_{\varepsilon}=\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}$. \subsection{Our generalized results and outlines of the paper}\label{Subsect: Generalized results in system} Throughout this paper, we let $p\in(1,\,\infty)$, and the $p$-Laplace-type operator ${\mathcal L}_{p}$ is assumed to be of the form \[{\mathcal L}_{p}u=\mathop{\mathrm{div}}\mleft(g_{p}^{\prime}(\lvert Du\rvert^{2})Du\mright).\] Here $g_{p}\in C(\lbrack0,\,\infty))\cap C_{{\mathrm{loc}}}^{2,\,\beta_{0}}((0,\,\infty))$ is a non-negative function with $\beta_{0}\in(0,\,1\rbrack$. Moreover, there exist constants $0<\gamma\le \Gamma<\infty$ such that there hold \begin{equation}\label{Eq (Section 1) Growth g-p-prime} \lvert g_{p}^{\prime}(\sigma)\rvert\le \Gamma \sigma^{p/2-1}\quad \textrm{for all }\sigma\in(0,\,\infty), \end{equation} \begin{equation}\label{Eq (Section 1) Growth g-p-pprime} \lvert g_{p}^{\prime\prime}(\sigma)\rvert\le \Gamma \sigma^{p/2-2}\quad \textrm{for all }\sigma\in(0,\,\infty), \end{equation} \begin{equation}\label{Eq (Section 1) Ellipticity g_p} \gamma (\sigma+\tau)^{p/2-1}\le g_{p}^{\prime}(\sigma+\tau)+2\sigma\min\mleft\{\,g_{p}^{\prime\prime}(\sigma+\tau),\,0\,\mright\} \quad \textrm{for all }\sigma\in\lbrack 0,\,\infty),\,\tau\in(0,\,1). \end{equation} For $\beta_{0}$-H\"{o}lder continuity of $g_{p}^{\prime\prime}$, we assume that \begin{equation}\label{Eq (Section 1) Growth g-p-ppprime} \mleft\lvert g_{p}^{\prime\prime}(\sigma_{1})- g_{p}^{\prime\prime}(\sigma_{2})\mright\rvert\le \Gamma\mu^{p-4-2\beta_{0}}\lvert \sigma_{1}-\sigma_{2}\rvert^{\beta_{0}} \end{equation} for all $\sigma_{1},\,\sigma_{2}\in\lbrack\mu^{2}/4,\,7\mu^{2}\rbrack$ with $\mu\in(0,\,\infty)$. A typical example is \begin{equation}\label{Eq (Section 1) gp example} g_{p}(\sigma)\coloneqq \frac{2\sigma^{p/2}}{p}\quad \textrm{for }\sigma\in\lbrack 0,\,\infty). \end{equation} In fact, this $g_{p}$ satisfies (\ref{Eq (Section 1) Growth g-p-prime})--(\ref{Eq (Section 1) Ellipticity g_p}) with \(\gamma\coloneqq \min\{\,1,\,p-1\,\},\quad\Gamma\coloneqq \max\mleft\{\,1,\,\lvert p-2\rvert/2\,\mright\}\). Moreover, we have \[\lvert g_{p}^{\prime\prime\prime}(\sigma)\rvert\le {\tilde\Gamma} \sigma^{p/2-3}\quad \textrm{for all }\sigma\in(0,\,\infty)\] with ${\tilde\Gamma}\coloneqq \lvert (p-2)(p-4)\rvert/4$. From this estimate, it is easy to find a constant $\Gamma=\Gamma(p)\in(1,\,\infty)$ such that (\ref{Eq (Section 1) Growth g-p-ppprime}) holds with $\beta_{0}=1$. In this case, the operator ${\mathcal L}_{p}$ becomes $\Delta_{p}$. Thus, the system (\ref{Eq (Section 1) main system}) generalizes (\ref{Eq (Section 1) 1+p-Laplace}). Our main result is the following Theorem \ref{Theorem: C1-regularity}, which clearly yields Theorem \ref{Theorem: A weak solution is continuously differentiable}. \begin{theorem}\label{Theorem: C1-regularity} Let $p\in(1,\,\infty),\,q\in(n,\,\infty\rbrack$, and $f\in L^{q}(\Omega;\,{\mathbb R}^{N})$. 
Assume that $u$ is a weak solution to (\ref{Eq (Section 1) main system}) in a Lipschitz domain $\Omega\subset{\mathbb R}^{n}$, where $g_{p}$ satisfies (\ref{Eq (Section 1) Growth g-p-prime})--(\ref{Eq (Section 1) Growth g-p-ppprime}). Then, for each fixed $\delta\in(0,\,1)$ and $x_{\ast}\in\Omega$, there exists an open ball $B_{\rho_{0}}(x_{\ast})\Subset\Omega$ such that ${\mathcal G}_{2\delta}(Du)\in C^{\alpha}(B_{\rho_{0}/2}(x_{\ast});\,{\mathbb R}^{Nn})$. Here the exponent $\alpha\in(0,\,1)$ and the radius $\rho_{0}\in(0,\,1)$ depend at most on $b$, $n$, $N$, $p$, $q$, $\beta_{0}$, $\gamma$, $\Gamma$, $\lVert f\rVert_{L^{q}(\Omega)}$, $\lVert Du\rVert_{L^{p}(\Omega)}$, $d_{\ast}\coloneqq\mathop{\mathrm{dist}} (x_{\ast},\,\partial\Omega)$, and $\delta$. Moreover, we have \begin{equation}\label{Eq (Section 1) local bounds of Jacobian multiplied with G-delta} \mleft\lvert {\mathcal G}_{2\delta}(Du(x))\mright\rvert\le \mu_{0}\quad \textrm{for all }x\in B_{\rho_{0}}(x_{\ast}), \end{equation} \begin{equation}\label{Eq (Section 1) local continuity of Jacobian multiplied with G-delta} \mleft\lvert {\mathcal G}_{2\delta}(Du(x_{1}))-{\mathcal G}_{2\delta}(Du(x_{2}))\mright\rvert\le \frac{2^{n/2+2\alpha+2}\mu_{0}}{\rho_{0}^{\alpha}}\lvert x_{1}-x_{2}\rvert^{\alpha}\quad \textrm{for all }x_{1},\,x_{2}\in B_{\rho_{0}/2}(x_{\ast}), \end{equation} where the constant $\mu_{0}\in(0,\,\infty)$ depends at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$, $\lVert f\rVert_{L^{q}(\Omega)}$, $\lVert Du\rVert_{L^{p}(\Omega)}$, and $d_{\ast}$. In particular, the Jacobian matrix $Du$ is continuous in \(\Omega\). \end{theorem} This paper is organized as follows. Section \ref{Section: Approximation} provides approximation problems for the system (\ref{Eq (Section 1) main system}), which is based on the relaxation of energy densities. After fixing some notations in Section \ref{Subsect: Notation}, we give a variety of quantitative estimates related to the relaxed densities in Section \ref{Subsect: Basic estimates}. Next in Section \ref{Subsect: Convergence}, we will justify that solutions of regularized problem, denoted by $u_{\varepsilon}$, converges to the original problem (\ref{Eq (Section 1) main system}). Our main theorems are proved in Section \ref{Subsect: A priori Hoelder}, which presents a priori H\"{o}lder estimates on Jacobian matrices $Du_{\varepsilon}$ that are suitably truncated near facets. There we state three basic a priori estimates on approximated solutions, consisting of local Lipschitz bounds (Proposition \ref{Prop: Lipschitz bounds}), a De Giorgi-type oscillation lemma (Proposition \ref{Prop: De Giorgi's truncation}) and Campanato-type growth estimates (Proposition \ref{Prop: Schauder estimate}). The proofs of Propositions \ref{Prop: Lipschitz bounds}--\ref{Prop: Schauder estimate} are given later in the remaining Sections \ref{Section: Weak formulations}--\ref{Section: Appendix}. From these estimates, we will deduce local a priori H\"{o}lder estimates of ${\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})$, which are independent of an approximation parameter $\varepsilon\in(0,\,\delta/4)$ (Theorem \ref{Theorem: A priori Hoelder estimate}). From Proposition \ref{Prop: Convergence on weak system} and Theorem \ref{Theorem: A priori Hoelder estimate}, we finally give the proof of Theorem \ref{Theorem: C1-regularity}. Sections \ref{Section: Weak formulations}--\ref{Section: Appendix} are devoted to prove Propositions \ref{Prop: Lipschitz bounds}--\ref{Prop: Schauder estimate}. 
Among them, most of the proofs of Propositions \ref{Prop: Lipschitz bounds}--\ref{Prop: De Giorgi's truncation} are rather easier. For the reader's convenience, we would like to provide a brief proof of Proposition \ref{Prop: Lipschitz bounds} in the appendix (Section \ref{Section: Appendix}). For Proposition \ref{Prop: De Giorgi's truncation}, we briefly describe sketches of the proof in Section \ref{Subsect: De Giorgi's truncation} after showing a basic weak formulation in Section \ref{Subsect: Basic weak formulations}. We omit some standard arguments in Section \ref{Subsect: De Giorgi's truncation} concerning De Giorgi's levelset lemmata, since they are already found in \cite[\S 7]{BDGPdN} (see also \cite[\S 4.2]{T-scalar}). Section \ref{Section: Campanato estimates} is focused on the proof of Proposition \ref{Prop: Schauder estimate}. In Section \ref{Subsect: Energy estimates}, we obtain a variety of energy estimates from the weak formulation deduced in Section \ref{Subsect: Basic weak formulations}. Section \ref{Subsect: freezing coefficient method} establishes a freezing coefficient argument under a good condition telling that a derivative does not vanish. After providing two basic lemmata on our shrinking arguments in Section \ref{Subsect: Shrinking lemmata}, we give the proof of Proposition \ref{Prop: Schauder estimate} in Section \ref{Subsect: Proof of Campanato-decay}. Finally, we would like to mention again that $C^{1}$-regularity for the scalar case is treated in the author's recent work \cite{T-scalar}. There, approximation of the density function is discussed, which is based on the convolution of Friedrichs' mollifier and makes it successful to generalize one-Laplace operator. It should be emphasized that this generalization may not work in the system case, when it comes to everywhere continuous differentiability. This is essentially because the general singular operator discussed therein will lack the Uhlenbeck structure, except the one-Laplacian (see \cite[\S 2.4]{T-scalar} for further explanations). Also, compared with \cite{T-scalar}, some of the computations in this paper becomes rather simpler, since we consider a special case where the energy density is spherically symmetric. Some estimates in Section \ref{Subsect: Basic estimates}--\ref{Subsect: Convergence} are used without proofs and some proofs in Section \ref{Subsect: De Giorgi's truncation} are omitted. The full computations and proofs of them are given in \cite[\S 2.1 \& 4.2]{T-scalar}. \section{Approximation schemes}\label{Section: Approximation} The aim of Section \ref{Section: Approximation} is to give approximation schemes for the problem (\ref{Eq (Section 1) main system}). The basic idea is that the energy density \begin{equation} E(\xi)=\frac{1}{2}g(\lvert \xi\rvert^{2})\quad (\xi\in{\mathbb R}^{Nn})\quad \textrm{with}\quad g(\sigma)\coloneqq 2b\sqrt{\sigma}+g_{p}(\sigma)\quad(0\le \sigma<\infty) \end{equation} is to be regularized by \begin{equation}\label{Eq (Section 2) Relaxed Energy Density}E_{\varepsilon}(\xi)=\frac{1}{2}g_{\varepsilon}(\lvert \xi\rvert^{2})\,(\xi\in{\mathbb R}^{Nn})\quad \textrm{with}\quad g_{\varepsilon}(\sigma)\coloneqq 2b\sqrt{\varepsilon^{2}+\sigma}+g_{p}(\varepsilon^{2}+\sigma)\quad (0\le \sigma<\infty) \end{equation} for $\varepsilon\in(0,\,1)$ denoting the approximation parameter. 
Then, the relaxed operator ${\mathcal L}^{\varepsilon}$ will be given by \[{\mathcal L}^{\varepsilon}u_{\varepsilon}\coloneqq-\mathop{\mathrm{div}}\mleft(g_{\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})Du_{\varepsilon} \mright)\equiv-b\mathop{\mathrm{div}}\mleft(\frac{Du_{\varepsilon}}{\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}}\mright)-\mathop{\mathrm{div}}\mleft(g_{p,\,\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})Du_{\varepsilon} \mright),\] similarly to $L_{b,\,p}$ given in (\ref{Eq (Section 1) Approximating problem special case}). In Section \ref{Section: Approximation}, we will show that this approximation scheme works and deduce some basic estimates on relaxed mapping. \subsection{Notations}\label{Subsect: Notation} We first fix some notations throughout the paper. We denote ${\mathbb Z}_{\ge 0}\coloneqq \{\,0,\,1,\,2,\,\dots\,\,\}$ by the set of all non-negative integers, and ${\mathbb N}\coloneqq {\mathbb Z}_{\ge 0}\setminus\{ 0\}$ by the set of all natural numbers. For given $k\in{\mathbb N}$, we denote $\langle\,\cdot\mid\cdot\,\rangle_{k}$ by the standard inner product over the Euclidean space ${\mathbb R}^{k}$. That is, for $k$-dimensional vectors $\xi=(\xi_{1},\,\dots\,,\,\xi_{k}),\,\eta=(\eta_{1},\,\dots\,,\,\eta_{k})\in{\mathbb R}^{k}$, we define \[\langle \xi\mid\eta\rangle_{k}\coloneqq \sum_{j=1}^{k}\xi_{j}\eta_{j}\in{\mathbb R}^{k}.\] We denote $\lvert\,\cdot\,\rvert_{k}$ by the Euclidean norm, \[\textrm{i.e., } \lvert \xi\rvert_{k}\coloneqq \sqrt{\langle\xi\mid\xi\rangle}\in\lbrack0,\,\infty)\quad \textrm{for }\xi\in{\mathbb R}^{k}.\] We also define a tensor product $\xi\otimes \eta$ as a $k\times k$ real matrix, \[\textrm{i.e., }\xi\otimes\eta\coloneqq (\xi_{j}\eta_{k})_{j,\,k}=\begin{pmatrix}\xi_{1}\eta_{1} & \dots &\xi_{1}\eta_{k}\\ \vdots & \ddots & \vdots\\ \xi_{k}\eta_{1}&\dots &\xi_{k}\eta_{k}\end{pmatrix}.\] We denote $\mathrm{id}_{k}$ by the $k\times k$ identity matrix. For a $k\times k$ real matrix $A$, we define the operator norm \[\lVert A\rVert_{k}\coloneqq \sup\mleft\{\lvert Ax\rvert_{k}\mathrel{}\middle| \mathrel{}x\in{\mathbb R}^{k},\,\lvert x\rvert_{k}\le 1\mright\}.\] We also introduce the positive semi-definite ordering on all real symmetric matrices. In other words, for $k\times k$ real symmetric matrices $A$, $B$, we write $A\leqslant B$ or $B\geqslant A$ when the difference $B-A$ is positive semi-definite. For notational simplicity, we often omit the script $k$ denoting the dimension. In particular, we simply denote the norms by $\lvert\,\cdot\,\rvert$ or $\lVert\,\cdot\,\rVert$, and the inner product by $\langle\,\cdot\mid\cdot\,\rangle$. We often regard an $N\times n$ real matrix $\xi=(\xi_{\alpha}^{i})_{1\le \alpha\le n,\,1\le i\le N}$ as an $Nn$-dimensional vector by ordering $\xi=(\xi_{1}^{1},\,\dots\,,\,\xi_{1}^{N},\,\dots\,,\,\xi_{n}^{1},\,\dots\,,\,\xi_{n}^{N})\in{\mathbb R}^{Nn}$. Then, similarly to the above, for given $N\times n$ matrices $\xi=(\xi_{\alpha}^{i}),\,\eta=(\eta_{\alpha}^{i})$, we are able to define the inner product $\langle \xi\mid\eta\rangle_{Nn}\in{\mathbb R}$, and a tensor product $\xi\otimes\eta$ as an $(Nn)\times (Nn)$ real-valued matrix. We also note that under our setting, the norm $\lvert \xi\rvert$ of a $N\times n$ real matrix $\xi$ is identified with the Frobenius norm of $\xi$. 
For a scalar-valued function $u=u(x_{1},\,\dots\,,\,x_{n})$, the gradient of $u$ is denoted by \(\nabla u\coloneqq (\partial_{x_{\alpha}}u)_{\alpha}\) and is often regarded as an $n$-dimensional row vector. For an ${\mathbb R}^{k}$-valued function $u=(u^{1},\,\dots\,,\,u^{k})$ with $u^{i}=u^{i}(x_{1},\,\dots\,,\,x_{n})$ for $i\in\{\,1,\,\dots\,,\,k\,\}$, the Jacobian matrix of $u$ is denoted by \(Du\coloneqq \begin{pmatrix} D_{1} u & \cdots & D_{n}u\end{pmatrix}\equiv (\partial_{x_{\alpha}}u^{i})_{\alpha,\,i}\) with $D_{\alpha}u=(\partial_{x_{\alpha}}u^{i})_{i}$ for each $\alpha\in\{\,1,\,\dots\,,\,n\,\}$. These $Du$ and $D_{\alpha}u$ are often considered as an $k\times n$-matrix and a $k$-dimentional column vector respectively. For given numbers $s\in\lbrack 1,\,\infty\rbrack$, $k\in{\mathbb N}$, $d\in{\mathbb N}$ and a fixed domain $U\subset {\mathbb R}^{n}$, we denote $L^{s}(U;\,{\mathbb R}^{d})$ and $W^{k,\,s}(U;\,{\mathbb R}^{d})$ respectively by the Lebesgue space and the Sobolev space. To shorten the notations, we often write $L^{s}(U)\coloneqq L^{s}(U;\,{\mathbb R})$ and $W^{k,\,s}(U)\coloneqq W^{k,\,s}(U;\,{\mathbb R})$. Throughout this paper, we define \[g_{1}(\sigma)\coloneqq 2b\sqrt{\sigma}\quad \textrm{for }\sigma\in\lbrack0,\,\infty).\] Clearly, $g_{1}$ is in $C(\lbrack0,\,\infty))\cap C^{3}((0,\,\infty))$ and satisfies \begin{equation}\label{Eq (Section 2) Growth g-1-prime} \mleft\lvert g_{1}^{\prime}(\sigma)\mright\rvert\le b\sigma^{-1/2}\quad \textrm{for all }\sigma\in(0,\,\infty), \end{equation} \begin{equation}\label{Eq (Section 2) Growth g-1-pprime} \mleft\lvert g_{1}^{\prime\prime}(\sigma)\mright\rvert\le \frac{b}{2}\sigma^{-3/2}\quad \textrm{for all }\sigma\in(0,\,\infty), \end{equation}\begin{equation}\label{Eq (Section 2) Degenerate ellipticity of g-1} 0\le g_{1}^{\prime}(\sigma+\tau)+2\sigma\min\mleft\{\,g_{1}^{\prime\prime}(\sigma+\tau),\,0\,\mright\}\quad \textrm{for all }\sigma\in(0,\,\infty),\,\tau\in(0,\,1). \end{equation} Also, by the growth estimate \[\mleft\lvert g_{1}^{\prime\prime\prime}(\sigma)\mright\rvert\le \frac{3b}{4}\sigma^{-5/2}\quad \textrm{for all }\sigma\in(0,\,\infty),\] it is easy to check that there holds \begin{equation}\label{Eq (Section 2) Growth g-1-ppprime} \mleft\lvert g_{1}^{\prime\prime}(\sigma_{1})-g_{1}^{\prime\prime}(\sigma_{2})\mright\rvert\le 24\cdot b\mu^{-5}\lvert \sigma_{1}-\sigma_{2}\rvert \end{equation} for all $\sigma_{1},\,\sigma_{2}\in\lbrack\mu^{2}/4,\,7\mu^{2}\rbrack$ with $\mu\in(0,\,\infty)$. It should be mentioned that unlike the assumption (\ref{Eq (Section 1) Ellipticity g_p}), the inequality (\ref{Eq (Section 2) Degenerate ellipticity of g-1}) gives no quantitative monotonicity estimates for the one-Laplace operator $\Delta_{1}$. We consider the relaxed function $g_{\varepsilon}$ given by (\ref{Eq (Section 2) Relaxed Energy Density}). This function can be decomposed by $g_{\varepsilon}=g_{1,\,\varepsilon}+g_{p,\,\varepsilon}$ with \[g_{1,\,\varepsilon}(\sigma)\coloneqq 2b\sqrt{\varepsilon^{2}+\sigma},\quad g_{p,\,\varepsilon}(\sigma)\coloneqq g_{p}(\varepsilon^{2}+\sigma)\] for $\sigma\in\lbrack 0,\,\infty)$. 
Corresponding to them, for each $s\in\{\,1,\,p\,\}$, we define \[A_{s,\,\varepsilon}(\xi)\coloneqq g_{s,\,\varepsilon}^{\prime}(\lvert \xi\rvert^{2})\xi\quad \textrm{for }\xi\in{\mathbb R}^{Nn}\] as an $N\times n$ matrix, and \[{\mathcal B}_{s,\,\varepsilon}(\xi)\coloneqq g_{\varepsilon}^{\prime}(\lvert \xi\rvert^{2}){\mathrm{id}_{Nn}}+2g_{s,\,\varepsilon}^{\prime\prime}(\lvert \xi\rvert^{2})\xi\otimes\xi\quad \textrm{for }\xi\in{\mathbb R}^{Nn}\] as an $Nn\times Nn$ matrix. Then, the summations \begin{equation}\label{Eq (Section 2) Def of A-epsilon} A_{\varepsilon}(\xi)\coloneqq A_{1,\,\varepsilon}(\xi)+A_{p,\,\varepsilon}(\xi)\quad \textrm{for } \xi\in{\mathbb R}^{Nn} \end{equation} and \begin{equation}\label{Eq (Section 2) Def of B-epsilon} {\mathcal B}_{\,\varepsilon}(\xi)\coloneqq {\mathcal B}_{1,\,\varepsilon}(\xi)+{\mathcal B}_{p,\,\varepsilon}(\xi) \quad \textrm{for }\xi\in{\mathbb R}^{Nn} \end{equation} respectively denote the Jacobian matrices and the Hessian matrices of $E_{\varepsilon}$ defined by (\ref{Eq (Section 2) Relaxed Energy Density}). By direct calculations, it is easy to check that $A_{p,\,\varepsilon}(\xi)$ converges to $A_{p}(\xi)$ for each $\xi\in{\mathbb R}^{Nn}$, where \begin{equation}\label{Eq (Section 2) Def of A-p} A_{p}(\xi)\coloneqq \mleft\{\begin{array}{cc} g_{p}^{\prime}(\lvert\xi\rvert^{2})\xi& (\xi\neq 0),\\ 0 & (\xi=0). \end{array} \mright. \end{equation} Finally, we introduce a subdifferential set for the Euclid norm. We denote $\partial\lvert\,\cdot\,\rvert(\xi_{0})\subset{\mathbb R}^{Nn}$ by the subdifferential of the absolute value function $\lvert\,\cdot\,\rvert_{Nn}$ at $\xi_{0}\in{\mathbb R}^{Nn}$. In other words, $\partial\lvert\,\cdot\,\rvert(\xi_{0})$ is the set of all vectors $\zeta\in{\mathbb R}^{Nn}$ that satisfies a subgradient inequality \[\lvert \xi\rvert_{Nn}\ge \lvert\xi_{0}\rvert_{Nn}+\langle \zeta \mid \xi-\xi_{0}\rangle_{Nn}\quad \textrm{for all }\xi\in{\mathbb R}^{Nn}.\] This set is explicitly given by \begin{equation}\label{Eq: sub} \partial\lvert\,\cdot\,\rvert(\xi_{0})=\mleft\{\begin{array}{cc} \mleft\{\zeta\in{\mathbb R}^{Nn}\mathrel{}\middle|\mathrel{}\lvert \zeta\rvert\le 1\mright\} & (\xi_{0}=0),\\ \{\xi_{0}/\lvert \xi_{0}\rvert\} & (\xi_{0}\neq 0), \end{array}\mright.\end{equation} and this formulation is used when one gives the definitions of weak solutions. \subsection{Quantitative estimates on relaxed mappings}\label{Subsect: Basic estimates} In Section \ref{Subsect: Basic estimates}, we would like to introduce quantitative estimates related to the mappings ${\mathcal G}_{2\delta,\,\varepsilon}$ and $g_{\varepsilon}$. When deducing these estimates, we often use the assumption that the density function $E$ is spherically symmetric. As a related item, we refer the reader to \cite[\S 2.1]{T-scalar}, which deals with the scalar case without symmetry of $E$, and provides full computations of some estimates omitted in Section \ref{Subsect: Basic estimates}. In this paper, we often assume that \begin{equation}\label{Eq (Section 2) delta-epsilon} 0<\delta<1,\quad \textrm{and}\quad 0<\varepsilon<\frac{\delta}{4}. \end{equation} It should be mentioned that the mapping ${\mathcal G}_{\delta,\,\varepsilon}\colon{\mathbb R}^{Nn}\rightarrow {\mathbb R}^{Nn}$ given by (\ref{Eq (Section 1) relaxed truncation}) makes sense as long as $0<\varepsilon<\delta$ holds. 
In particular, under the setting (\ref{Eq (Section 2) delta-epsilon}), the mappings ${\mathcal G}_{\delta,\,\varepsilon},\,{\mathcal G}_{2\delta,\,\varepsilon}$ are well-defined. Moreover, Lipschitz continuity of ${\mathcal G}_{2\delta,\,\varepsilon}$ follows from (\ref{Eq (Section 2) delta-epsilon}). \begin{lemma}\label{Lemma: G-2delta-epsilon} Let $\delta,\,\varepsilon$ satisfy (\ref{Eq (Section 2) delta-epsilon}). Then the mapping ${\mathcal G}_{2\delta,\,\varepsilon}$ satisfies \begin{equation}\label{Eq (Section 2) Lipschitz bounds of the mapping G-2delta-epsilon} \left\lvert{\mathcal G}_{2\delta,\,\varepsilon}(\xi_{1})-{\mathcal G}_{2\delta,\,\varepsilon}(\xi_{2})\right\rvert\le c_{\dagger}\lvert \xi_{1}-\xi_{2}\rvert\quad \textrm{for all }z_{1},\,z_{2}\in{\mathbb R}^{Nn} \end{equation}with $c_{\dagger}\coloneqq 1+32/(3\sqrt{7})$. \end{lemma} In the limiting case $\varepsilon=0$, Lipschitz continuity like (\ref{Eq (Section 2) Lipschitz bounds of the mapping G-2delta-epsilon}) is found in \cite[Lemma 2.3]{BDGPdN}. Modifying the arguments therein, we can easily prove Lemma \ref{Lemma: G-2delta-epsilon}, the full proof of which is given in \cite[Lemma 2.4]{T-scalar}. Next, we consider the mappings $A_{\varepsilon}=A_{1,\,\varepsilon}+A_{p,\,\varepsilon}$ and ${\mathcal B}_{\varepsilon}={\mathcal B}_{1,\,\varepsilon}+{\mathcal B}_{p,\,\varepsilon}$ defined by (\ref{Eq (Section 2) Def of A-epsilon})--(\ref{Eq (Section 2) Def of B-epsilon}), and describe some basic results including monotonicity and growth estimates. For each $s\in \{\,1,\,p\,\}$, the eigenvalues of ${\mathcal B}_{s,\,\varepsilon}(\xi)$ are given by either \[\lambda_{1}(\xi)\coloneqq g_{s}^{\prime}(\varepsilon^{2}+\lvert\xi\rvert^{2})\quad\textrm{or}\quad \lambda_{2}(\xi)\coloneqq g_{s}^{\prime}(\varepsilon^{2}+\lvert\xi\rvert^{2})+2\lvert\xi\rvert^{2}g_{s}^{\prime\prime}(\varepsilon^{2}+\lvert\xi\rvert^{2}).\] Combining this with (\ref{Eq (Section 1) Growth g-p-prime})--(\ref{Eq (Section 1) Ellipticity g_p}) and (\ref{Eq (Section 2) Growth g-1-prime})--(\ref{Eq (Section 2) Degenerate ellipticity of g-1}), the mappings ${\mathcal B}_{p,\,\varepsilon}$ and ${\mathcal B}_{1,\,\varepsilon}$ respectively satisfy \begin{equation}\label{Eq (Section 2) Estimates on B-p-epsilon} \gamma(\varepsilon^{2}+\lvert\xi\rvert^{2})^{p/2-1}\mathrm{id}_{Nn}\leqslant {\mathcal B}_{p,\,\varepsilon}(\xi) \leqslant 3\Gamma(\varepsilon^{2}+\lvert\xi\rvert^{2})^{p/2-1} \mathrm{id}_{Nn}\quad \textrm{for all }\xi\in{\mathbb R}^{Nn}, \end{equation} and \begin{equation}\label{Eq (Section 2) Estimates on B-1-epsilon} O\leqslant {\mathcal B}_{1,\,\varepsilon}(\xi) \leqslant b(\varepsilon^{2}+\lvert\xi\rvert^{2})^{-1/2} \mathrm{id}_{Nn}\quad \textrm{for all }\xi\in{\mathbb R}^{Nn}, \end{equation} where $O$ denotes the zero matrix. In particular, by elementary computations as in \cite[Lemma 3]{MR4201656}, it is easy to get \begin{equation}\label{Eq (Section 2) Monotonicity estimate for A-p-epsilon} \mleft\langle A_{\varepsilon}(\xi_{1})-A_{\varepsilon}(\xi_{0})\mathrel{}\middle|\mathrel{} \xi_{1}-\xi_{0}\mright\rangle\ge \mleft\{\begin{array}{cc} c(p)\gamma\mleft(\varepsilon^{2}+\lvert\xi_{0}\rvert^{2}+\lvert\xi_{1}\rvert^{2}\mright)^{(p-1)/2}\lvert\xi_{1}-\xi_{0}\rvert^{2} & (1<p<2),\\ c(p)\gamma\lvert\xi_{1}-\xi_{0}\rvert^{p} & (2\le p<\infty), \end{array}\mright. 
\end{equation} and \begin{equation}\label{Eq (Section 2) Growth estimate for A-p-epsilon} \mleft\lvert A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0}) \mright\rvert\le \mleft\{\begin{array}{cc} C(p)\Gamma\lvert\xi_{1}-\xi_{0}\rvert^{p-1} & (1<p<2),\\ C(p)\Gamma\mleft(\varepsilon^{p-2}+\lvert\xi_{0}\rvert^{p-2}+\lvert\xi_{1}\rvert^{p-2}\mright)\lvert\xi_{1}-\xi_{0}\rvert & (2\le p<\infty), \end{array} \mright. \end{equation} for all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$. We often consider a special case where a variable $\xi\in{\mathbb R}^{Nn}$ may not vanish. On this setting, it is possible to deduce continuity or monotonicity estimates other than (\ref{Eq (Section 2) Monotonicity estimate for A-p-epsilon})--(\ref{Eq (Section 2) Growth estimate for A-p-epsilon}). In fact, following elementary computations given in \cite[Lemmata 2.2--2.3]{T-scalar}, from (\ref{Eq (Section 2) Estimates on B-p-epsilon}), we are able to obtain a growth estimate \begin{equation}\label{Eq (Section 2) Growth estimate for A-s-epsilon with 1<s<2} \mleft\lvert A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0}) \mright\rvert\le C(p)\Gamma\min\mleft\{\,\lvert\xi_{0}\rvert^{p-2},\,\lvert\xi_{1}\rvert^{p-2}\mright\}\lvert\xi_{1}-\xi_{0}\rvert \end{equation} for all $(\xi_{0},\,\xi_{1})\in({\mathbb R}^{Nn}\times{\mathbb R}^{Nn})\setminus\{(0,\,0)\}$ provided $1<p<2$, and a monotonicity estimate \begin{equation}\label{Eq (Section 2) Growth estimate for A-s-epsilon with s>2} \mleft\langle A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0})\mathrel{}\middle|\mathrel{}\xi_{1}-\xi_{0} \mright\rangle\ge C(p)\gamma\max\mleft\{\,\lvert \xi_{0}\rvert^{p-2},\,\lvert\xi_{1}\rvert^{p-2}\,\mright\}\lvert \xi_{1}-\xi_{0}\rvert^{2} \end{equation} for all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$ provided $2\le p<\infty$. It is worth mentioning that even for $A_{1,\,\varepsilon}$, there holds a growth estimate \begin{equation}\label{Eq (Section 2) Growth estimate for A-s-epsilon with s=1} \mleft\lvert A_{1,\,\varepsilon}(\xi_{1})-A_{1,\,\varepsilon}(\xi_{0}) \mright\rvert\le 2b\min\mleft\{\,\lvert \xi_{0}\rvert^{-1},\,\lvert \xi_{1}\rvert^{-1} \,\mright\}\lvert \xi_{1}-\xi_{0}\rvert \end{equation} for all $(\xi_{0},\,\xi_{1})\in({\mathbb R}^{Nn}\times {\mathbb R}^{Nn})\setminus\{(0,\,0)\}$ (see also \cite[Lemma 2.1]{T-scalar}). In fact, without loss of generality, we may let $\lvert \xi_{1}\rvert\ge \lvert \xi_{0}\rvert$. By the triangle inequality, we compute \[\mleft\lvert \frac{\xi_{1}}{\sqrt{\varepsilon^{2}+\lvert \xi_{1}\rvert^{2}}} -\frac{\xi_{0}}{\sqrt{\varepsilon^{2}+\lvert \xi_{0}\rvert^{2}}} \mright\rvert= \mleft\lvert \frac{\xi_{1}-\xi_{0}}{\sqrt{\varepsilon^{2}+\lvert \xi_{1}\rvert^{2}}}-\frac{\sqrt{\varepsilon^{2}+\lvert \xi_{1}\rvert^{2}}-\sqrt{\varepsilon^{2}+\lvert \xi_{0}\rvert^{2}}}{\sqrt{\varepsilon^{2}+\lvert \xi_{1}\rvert^{2}}\cdot \sqrt{\varepsilon^{2}+\lvert \xi_{0}}\rvert^{2}}\xi_{0} \mright\rvert\le 2\frac{\lvert \xi_{1}-\xi_{0}\rvert}{\sqrt{\varepsilon^{2}+\lvert\xi_{1}\rvert^{2}}},\] from which (\ref{Eq (Section 2) Growth estimate for A-s-epsilon with s=1}) follows. Here we have used one-Lipschitz continuity of the smooth function $\sqrt{\varepsilon^{2}+t^{2}}\,(t\in{\mathbb R})$. 
For monotonicity of $A_{1,\,\varepsilon}$, from (\ref{Eq (Section 2) Estimates on B-1-epsilon}) we obtain \begin{equation}\label{Eq (Section 2) Monotonicity of A-1-epsilon} \mleft\langle A_{1,\,\varepsilon}(\xi_{1})-A_{1,\,\varepsilon}(\xi_{0})\mathrel{}\middle|\mathrel{}\xi_{1}-\xi_{0} \mright\rangle\ge 0 \end{equation} for all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$. When it comes to a monotonicity estimate that is independent of $\varepsilon$, better estimates than (\ref{Eq (Section 2) Monotonicity of A-1-epsilon}) seem to be no longer expectable, since ellipticity of $\Delta_{1}u$ degenerates in the direction of $Du$. In Lemma \ref{Lemma: Error estimates} below, we briefly deduce good estimates that will be used in Section \ref{Subsect: freezing coefficient method}. \begin{lemma}\label{Lemma: Error estimates} Let positive constants $\delta$ and $\varepsilon$ satisfy (\ref{Eq (Section 2) delta-epsilon}). Under the assumptions (\ref{Eq (Section 1) Growth g-p-prime})--(\ref{Eq (Section 1) Growth g-p-ppprime}), the mappings $A_{\varepsilon}$ and ${\mathcal B}_{\varepsilon}$ defined by (\ref{Eq (Section 2) Def of A-epsilon})--(\ref{Eq (Section 2) Def of B-epsilon}) satisfy the following: \begin{enumerate} \item \label{Item 1/2 (Section 2) Monotonicity and Growth estimates} Let $M\in(\delta,\,\infty)$ be a fixed constant. Then there exists constants $C_{1},\,C_{2}\in(0,\,\infty)$, depending at most on $b$, $p$, $\gamma$, $\Gamma$, $\delta$ and $M$, such that we have \begin{equation}\label{Eq (Section 2) Monotonicity outside} \mleft\langle A_{\varepsilon}(\xi_{1})-A_{\varepsilon}(\xi_{0})\mathrel{}\middle|\mathrel{}\xi_{1}-\xi_{0}\mright\rangle\ge C_{1}\lvert \xi_{1}-\xi_{0}\rvert^{2}, \end{equation} and \begin{equation}\label{Eq (Section 2) Growth outside} \lvert A_{\varepsilon}(\xi_{1})-A_{\varepsilon}(\xi_{0})\rvert\le C_{2}\lvert \xi_{1}-\xi_{0}\rvert, \end{equation} for all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$ satisfying \[\delta\le\lvert \xi_{0}\rvert\le M,\quad \textrm{and}\quad \lvert \xi_{1}\rvert\le M.\] \item \label{Item 2/2 (Section 2) Hessian Errors} For all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$ enjoying \begin{equation}\label{Eq (Section 2) Variable conditions in error estimates} \delta+\frac{\mu}{4}\le\lvert\xi_{0}\rvert\le \delta+\mu\quad \textrm{and}\quad \lvert\xi_{1}\rvert\le\delta+\mu\quad \textrm{with}\quad \delta<\mu<\infty, \end{equation} we have \begin{equation}\label{Eq (Section 2) Hessian errors} \mleft\lvert {\mathcal B}_{\varepsilon}(\xi_{0})(\xi_{1}-\xi_{0})-\mleft(A_{\varepsilon}(\xi_{1})-A_{\varepsilon}(\xi_{0})\mright)\mright\rvert\le C(b,\,p,\,\beta_{0},\,\Gamma,\,\delta)\mu^{p-2-\beta_{0}}\lvert\xi_{1}-\xi_{0}\rvert^{1+\beta_{0}}. \end{equation} \end{enumerate} \end{lemma} Estimates (\ref{Eq (Section 2) Monotonicity outside})--(\ref{Eq (Section 2) Growth outside}) are easy to deduce from (\ref{Eq (Section 2) Monotonicity estimate for A-p-epsilon})--(\ref{Eq (Section 2) Monotonicity of A-1-epsilon}) and $\varepsilon<\delta<M$. We would like to show (\ref{Eq (Section 2) Hessian errors}), which plays an important role in our proof of regularity estimates. Although an estimate like (\ref{Eq (Section 2) Hessian errors}) is shown in the author's recent work \cite[Lemmata 2.2 \& 2.6]{T-scalar}, most of our computations herein become rather direct and simple, since the density $E_{\varepsilon}$ is assumed to be spherically symmetric. 
\begin{proof} Let positive constants $\delta,\,\varepsilon$ and vectors $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$ satisfy respectively (\ref{Eq (Section 2) delta-epsilon}) and (\ref{Eq (Section 2) Variable conditions in error estimates}). We set $\xi_{t}\coloneqq \xi_{0}+t(\xi_{1}-\xi_{0})\in{\mathbb R}^{k}$ for $t\in\lbrack0,\,1\rbrack$. To show (\ref{Eq (Section 2) Hessian errors}), we claim that \begin{equation}\label{Eq (Section 2) Hessian error p} \mleft\lvert {\mathcal B}_{p,\,\varepsilon}(\xi_{0})(\xi_{1}-\xi_{0})-\mleft(A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0})\mright)\mright\rvert\le C(p,\,\beta_{0})\Gamma\mu^{p-2-\beta_{0}}\lvert\xi_{1}-\xi_{0}\rvert^{1+\beta_{0}}, \end{equation} and \begin{equation}\label{Eq (Section 2) Hessian error 1} \mleft\lvert {\mathcal B}_{1,\,\varepsilon}(\xi_{0})(\xi_{1}-\xi_{0})-\mleft(A_{1,\,\varepsilon}(\xi_{1})-A_{1,\,\varepsilon}(\xi_{0})\mright)\mright\rvert\le Cb\mu^{-2}\lvert\xi_{1}-\xi_{0}\rvert^{2}. \end{equation} The desired estimate (\ref{Eq (Section 2) Hessian errors}) immediately follows from (\ref{Eq (Section 2) Hessian error p})--(\ref{Eq (Section 2) Hessian error 1}). Here it should be noted that the inequality $\mu^{-2}\lvert \xi_{1}-\xi_{0}\rvert^{2}\le 4^{1-\beta_{0}}\delta^{1-p}\mu^{p-2-\beta_{0}}\lvert \xi_{1}-\xi_{0}\rvert^{1+\beta_{0}}$ holds by $0<\delta<\mu<\infty$ and $\lvert \xi_{1}-\xi_{0}\rvert\le 4\mu$. We would like to give the proof of (\ref{Eq (Section 2) Hessian error p}). We first consider the case $\lvert \xi_{1}-\xi_{0}\rvert\le \mu/2$. Then, by the triangle inequality, it is easy to check that \[\frac{\mu}{2}\le\lvert\xi_{0}\rvert-t\lvert \xi_{1}-\xi_{0}\rvert\le\lvert \xi_{t}\rvert\le\lvert\xi_{0}\rvert+t\lvert\xi_{1}-\xi_{0}\rvert\le \frac{5\mu}{2}\] for all $t\in\lbrack0,\,1\rbrack$. In particular, we have \[\frac{\mu^{2}}{4}\le a_{t}\coloneqq \min\mleft\{\,\varepsilon^{2}+\lvert\xi_{t}\rvert^{2},\,\varepsilon^{2}+\lvert\xi_{0}\rvert^{2}\,\mright\}\le \max\mleft\{\,\varepsilon^{2}+\lvert\xi_{t}\rvert^{2},\,\varepsilon^{2}+\lvert\xi_{0}\rvert^{2}\,\mright\}\eqqcolon b_{t}\le \frac{13\mu^{2}}{2},\] where we have used $\varepsilon<\delta<\mu$ to get the last inequality. Also, it is easy to compute \[\lVert \xi_{t}\otimes\xi_{t}-\xi_{0}\otimes \xi_{0}\rVert\le \mleft(2t\lvert\xi_{0}\rvert+t^{2}\lvert\xi_{1}-\xi_{0}\rvert\mright)\lvert\xi_{1}-\xi_{0}\rvert\le\frac{9\mu}{2}\lvert\xi_{1}-\xi_{0}\rvert,\] and \[b_{t}-a_{t}=\mleft\lvert \lvert\xi_{t}\rvert^{2}-\lvert\xi_{0}\rvert^{2} \mright\rvert=\mleft\lvert 2t\langle \xi_{0}\mid \xi_{1}-\xi_{0}\rangle+t^{2}\lvert \xi_{1}-\xi_{0}\rvert^{2}\mright\rvert \le \frac{9\mu}{2}\lvert \xi_{1}-\xi_{0}\rvert\] for all $t\in\lbrack0,\,1\rbrack$, and $\lVert \xi_{0}\otimes\xi_{0}\rVert\le\lvert\xi_{0}\rvert^{2}\le (2\mu)^{2}$. 
Combining them with (\ref{Eq (Section 1) Growth g-p-pprime}), and (\ref{Eq (Section 1) Growth g-p-ppprime}), we are able to check that the operator norm of \begin{align*} {\mathcal B}_{p,\,\varepsilon}(\xi_{t})-{\mathcal B}_{p,\,\varepsilon}(\xi_{0})&=2g_{p}^{\prime\prime}(\varepsilon^{2}+\lvert \xi_{t}\rvert^{2})\mleft(\xi_{t}\otimes \xi_{t}-\xi_{0}\otimes \xi_{0} \mright)\\ &\quad+2\mleft[g_{p}^{\prime\prime}(\varepsilon^{2}+\lvert\xi_{t}\rvert ^{2})-g_{p}^{\prime\prime}(\varepsilon^{2}+\lvert\xi_{0}\rvert ^{2}) \mright](\xi_{0}\otimes \xi_{0})\\&\quad +\mleft[g_{p}^{\prime}(\varepsilon^{2}+\lvert\xi_{t}\rvert^{2})- g_{p}^{\prime}(\varepsilon^{2}+\lvert\xi_{0}\rvert^{2})\mright]\mathrm{id}_{Nn} \end{align*} is bounded by $C\Gamma\mu^{p-2-\beta_{0}}\lvert\xi_{1}-\xi_{0}\rvert^{\beta_{0}}$ for some constant $C=C(p,\,\beta_{0})\in(0,\,\infty)$. As a result, we obtain \begin{align*} \mleft\lvert {\mathcal B}_{p,\,\varepsilon}(\xi_{0})(\xi_{1}-\xi_{0})-\mleft(A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0})\mright)\mright\rvert&=\mleft\lvert \int_{0}^{1}\mleft({\mathcal B}_{p,\,\varepsilon}(\xi_{0})-{\mathcal B}_{p,\,\varepsilon}(\xi_{t})\mright)\cdot(\xi_{1}-\xi_{0})\,{\mathrm{d}}t \mright\rvert\\&\le \lvert\xi_{1}-\xi_{0}\rvert\int_{0}^{1}\mleft\lVert {\mathcal B}_{p,\,\varepsilon}(\xi_{t})-{\mathcal B}_{p,\,\varepsilon}(\xi_{0}) \mright\rVert\,{\mathrm{d}}t\\&\le C(p,\,\beta_{0})\Gamma\mu^{p-2-\beta_{0}}\lvert\xi_{1}-\xi_{0}\rvert^{1+\beta_{0}}. \end{align*} In the remaining case $\lvert \xi_{1}-\xi_{0}\rvert>\mu/2$, it is easy to compute \begin{align*} &\mleft\lvert {\mathcal B}_{p,\,\varepsilon}(\xi_{0})(\xi_{1}-\xi_{0})-\mleft(A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0})\mright)\mright\rvert\\&\le\mleft(\mleft\lVert {\mathcal B}_{p,\,\varepsilon}(\xi_{0})\mright\rVert\lvert \xi_{1}-\xi_{0}\rvert+\mleft\lvert \mleft(A_{p,\,\varepsilon}(\xi_{1})-A_{p,\,\varepsilon}(\xi_{0})\mright)\mright\rvert\mright)\cdot\mleft(\frac{2\lvert \xi_{1}-\xi_{0}\rvert}{\mu}\mright)^{\beta_{0}}\\&\le C(p,\,\beta_{0})\Gamma\mu^{p-2-\beta_{0}}\lvert \xi_{1}-\xi_{0}\rvert^{1+\beta_{0}} \end{align*} by (\ref{Eq (Section 2) Estimates on B-p-epsilon}), (\ref{Eq (Section 2) Growth estimate for A-p-epsilon})--(\ref{Eq (Section 2) Growth estimate for A-s-epsilon with 1<s<2}), (\ref{Eq (Section 2) Variable conditions in error estimates}) and $\varepsilon<\delta<\mu$. This completes the proof of (\ref{Eq (Section 2) Hessian error p}). By similar computations, we are able to conclude (\ref{Eq (Section 2) Hessian error 1}) from (\ref{Eq (Section 2) Growth g-1-pprime}), (\ref{Eq (Section 2) Growth g-1-ppprime}), (\ref{Eq (Section 2) Estimates on B-1-epsilon}) and (\ref{Eq (Section 2) Growth estimate for A-s-epsilon with s=1}). \end{proof} We conclude this section by mentioning that similar estimates hold for another mapping $G_{p,\,\varepsilon}\colon{\mathbb R}^{Nn}\rightarrow {\mathbb R}^{Nn}$, defined by \begin{equation}\label{Eq (Section 2) G-p-epsilon} G_{p,\,\varepsilon}(\xi)\coloneqq h_{p}^{\prime}(\varepsilon^{2}+\lvert \xi\rvert^{2})\xi\quad \textrm{for }\xi\in{\mathbb R}^{Nn} \end{equation} with \begin{equation}\label{Eq (Section 2) hp} h_{p}(\sigma)\coloneqq \frac{2\sigma^{(p+1)/2}}{p+1}\quad \textrm{for }\sigma\in\lbrack0,\,\infty). \end{equation} It should be mentioned that this $h_{p}$ is the same as $g_{p+1}$ with $g_{p}$ given by (\ref{Eq (Section 1) gp example}). 
Hence, similarly to (\ref{Eq (Section 2) Monotonicity estimate for A-p-epsilon}), we have \[\langle G_{p,\,\varepsilon}(\xi_{1})-G_{p,\,\varepsilon}(\xi_{0})\mid \xi_{1}-\xi_{0}\rangle\ge c\lvert \xi_{1}-\xi_{0}\rvert^{p+1}\quad \textrm{for all } \xi_{0},\,\xi_{1}\in {\mathbb R}^{Nn}\] with $c=c(p)\in(0,\,\infty)$. Moreover, since the mapping $G_{p,\,\varepsilon}$ is bijective and enjoys $G_{p,\,\varepsilon}(0)=0$ by the definition, the inverse mapping $G_{p,\,\varepsilon}^{-1}$ satisfies \begin{equation}\label{Eq (Section 2) Estimate on inverse mapping} \mleft\lvert G_{p,\,\varepsilon}^{-1}(\xi)\mright\rvert\le C(p)\lvert \xi\rvert^{1/p}\quad \textrm{for all }\xi\in{\mathbb R}^{Nn} \end{equation} with $C=c^{-1}\in(0,\,\infty)$. Also, similarly to (\ref{Eq (Section 2) Growth estimate for A-s-epsilon with s>2}), it is possible to get \begin{equation}\label{Eq (Section 2) G-p-epsilon local ellipticity} \mleft\lvert G_{p,\,\varepsilon}(\xi_{1})-G_{p,\,\varepsilon}(\xi_{0})\mright\rvert\ge C(p)\max\mleft\{\,\lvert\xi_{1}\rvert^{p-1},\,\lvert \xi_{2}\rvert^{p-1} \,\mright\}\lvert \xi_{1}-\xi_{0}\rvert \end{equation} for all $\xi_{0},\,\xi_{1}\in{\mathbb R}^{Nn}$. The estimates (\ref{Eq (Section 2) Estimate on inverse mapping})--(\ref{Eq (Section 2) G-p-epsilon local ellipticity}) will be used in Section \ref{Subsect: Energy estimates}. \subsection{Justifications of convergence of solutions}\label{Subsect: Convergence} The aim of Section \ref{Subsect: Convergence} is to give an approximating system for (\ref{Eq (Section 1) main system}), and justify convergence of weak solutions. We only deal with the special case where the energy density $E$ is spherically symmetric. In the scalar case, more general approximation problems are discussed in \cite[\S 2.4--2.5]{T-scalar}, including variational inequality problems and generalization of the total variation energy. Only in this section, we assume that the exponent $q$ satisfies \begin{equation}\label{Eq (Section 2) exponent q} \mleft\{\begin{array}{rc} \displaystyle\frac{np}{np-n+p}<q\le \infty& (1<p<n),\\ 1<q\le\infty & (p=n),\\ 1\le q\le \infty & (n<p<\infty),\end{array} \mright. \end{equation} so that we can use the compact embedding \(W^{1,\,p}(\Omega;\,{\mathbb R}^{N})\hookrightarrow L^{q^{\prime}}(\Omega;\,{\mathbb R}^{N})\) (see e.g., \cite[Chapters 4 \& 6]{MR2424078}). Under this setting, we give the definitions of a weak solution to the system (\ref{Eq (Section 1) main system}), and of the Dirichlet boundary value problem \begin{equation}\label{Eq (Section 2) Dirichlet boundary problem} \mleft\{\begin{array}{ccccc} {\mathcal L}u& = & f & \textrm{in} &\Omega,\\ u & = & u_{\star} & \textrm{on} & \partial\Omega. \end{array} \mright. \end{equation} \begin{definition} Let the functions $u_{\star}\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$, $f\in L^{q}(\Omega;\,{\mathbb R}^{N})$ be given with $p\in(1,\,\infty)$ and $q\in\lbrack 1,\,\infty\rbrack$ satisfying (\ref{Eq (Section 2) exponent q}). A function $u\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ is said to be a \textit{weak} solution to (\ref{Eq (Section 1) main system}) in $\Omega$ when there exists $Z\in L^{\infty}(\Omega;\,{\mathbb R}^{Nn})$ such that there hold \begin{equation}\label{Eq (Section 2) Z is subgradient} Z(x)\in\partial \lvert\,\cdot\,\rvert(Du(x))\quad \textrm{for a.e. 
}x\in\Omega, \end{equation} and \begin{equation}\label{Eq (Section 2) Weak formulation of very singular problems} \int_{\Omega}\langle Z\mid D\phi\rangle\,{\mathrm{d}}x+\int_{\Omega}\mleft\langle A_{p}(Du)\mathrel{}\middle|\mathrel{}D\phi\mright\rangle\,{\mathrm{d}}x=\int_{\Omega}\langle f\mid \phi \rangle\,{\mathrm{d}}x\quad \textrm{for all }\phi\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N}). \end{equation} Here $A_{p}\in C({\mathbb R}^{n};\,{\mathbb R}^{Nn})$ and $\partial\lvert\,\cdot\,\rvert(\xi)\subset{\mathbb R}^{Nn}\,(\xi\in{\mathbb R}^{Nn})$ are given by (\ref{Eq (Section 2) Def of A-p})--(\ref{Eq: sub}). When a function $u\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ is a weak solution to (\ref{Eq (Section 1) main system}) in $\Omega$, $u$ is called a \textit{weak} solution of the Dirichlet problem (\ref{Eq (Section 2) Dirichlet boundary problem}). \end{definition} It should be recalled that the problem (\ref{Eq (Section 1) main system}) is derived from a minimizing problem of the energy functional \begin{equation}\label{Eq (Section 2) energy functional: original} {\mathcal F}_{0}(v)\coloneqq \int_{\Omega}\mleft(E(Dv)-\langle f\mid v\rangle\mright)\,{\mathrm{d}}x\quad \textrm{for }v\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N}) \end{equation} under a suitable boundary condition. We approximate this functional by \begin{equation}\label{Eq (Section 2) energy functional: relaxed} {\mathcal F}_{\varepsilon}(v)\coloneqq \int_{\Omega}\mleft(E_{\varepsilon}(Dv)-\langle f_{\varepsilon}\mid v\rangle\mright)\,{\mathrm{d}}x\quad \textrm{for }v\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N}),\,\varepsilon\in(0,\,1). \end{equation} Here the net $\{f_{\varepsilon}\}_{0<\varepsilon<1}\subset L^{q}(\Omega;\,{\mathbb R}^{N})$ satisfies \begin{equation}\label{Eq (Section 2) Weak convergence of f} f_{\varepsilon}\rightharpoonup f\quad \textrm{in }\sigma\mleft(L^{q}(\Omega;\,{\mathbb R}^{N}),\,L^{q^{\prime}}(\Omega;\,{\mathbb R}^{N})\mright). \end{equation} In other words, we only let $f_{\varepsilon}\in L^{q}(\Omega;\,{\mathbb R}^{N})$ weakly converge to $f$ when $q$ is finite, and otherwise weak$^{\ast}$ convergence is assumed. In particular, we may let $\lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}$ be uniformly bounded with respect to $\varepsilon$. In Proposition \ref{Prop: Convergence on weak system}, we justify convergence of minimizers of relaxed energy functionals. \begin{proposition}[A convergence result on approximation problems]\label{Prop: Convergence on weak system} Let $p\in(1,\,\infty),\,q\in\lbrack 1,\,\infty\rbrack$ satisfy (\ref{Eq (Section 2) exponent q}), and consider functionals ${\mathcal F}_{0}$, ${\mathcal F}_{\varepsilon}\,(0<\varepsilon<1)$ given by (\ref{Eq (Section 2) energy functional: original})--(\ref{Eq (Section 2) energy functional: relaxed}), where $\{f_{\varepsilon}\}_{0<\varepsilon<1}\subset L^{q}(\Omega;\,{\mathbb R}^{N})$ satisfies (\ref{Eq (Section 2) Weak convergence of f}). For a given function $u_{\star}\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$, we define \begin{equation}\label{Eq (Section 2) minimizer of approximated functionals} u_{\varepsilon}\coloneqq \mathop{\mathrm{arg~min}}\mleft\{v\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})\mathrel{}\middle|\mathrel{} {\mathcal F}_{\varepsilon}(v)\mright\}\quad \textrm{for each }\varepsilon\in(0,\,1). \end{equation} Then, there exists a unique function $u\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ such that $u_{\varepsilon}\rightarrow u$ in $W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ up to a subsequence. 
Moreover, the limit function $u$ is a weak solution of the Dirichlet boundary value problem (\ref{Eq (Section 2) Dirichlet boundary problem}), and there holds \begin{equation}\label{Eq (Section 2) minimizer of original functionals} u=\mathop{\mathrm{arg~min}}\mleft\{v\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})\mathrel{}\middle|\mathrel{} {\mathcal F}_{0}(v)\mright\}. \end{equation} \end{proposition} Before showing Proposition \ref{Prop: Convergence on weak system}, we note that (\ref{Eq (Section 2) minimizer of approximated functionals}) is characterized by \begin{equation}\label{Eq (Section 2) Regularized weak formulation} \int_{\Omega}\mleft\langle A_{\varepsilon}(Du_{\varepsilon})\mid D\phi\mright\rangle\,{\mathrm{d}}x=\int_{\Omega}\langle f_{\varepsilon}\mid \phi\rangle\,{\mathrm{d}}x\quad \textrm{for all }\phi\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N}). \end{equation} In other words, for a function $u_{\varepsilon}\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$, $u_{\varepsilon}$ verifies (\ref{Eq (Section 2) minimizer of approximated functionals}) if and only if $u_{\varepsilon}$ satisfies (\ref{Eq (Section 2) Regularized weak formulation}). In the limiting case $\varepsilon=0$, however, it is a non-trivial question whether a function $u_{0}$ that verifies (\ref{Eq (Section 2) minimizer of original functionals}) is a weak solution of (\ref{Eq (Section 2) Dirichlet boundary problem}), since the total variation energy is neither G\^{a}teaux differentiable nor Fr\'{e}chet differentiable. This is substantially due to non-smoothness of the absolute value function at the origin. To overcome this problem, we appeal to an approximation method, and construct a vector field $Z$ that satisfies (\ref{Eq (Section 2) Z is subgradient})--(\ref{Eq (Section 2) Weak formulation of very singular problems}). \begin{proof} We first mention that the right hand sides of (\ref{Eq (Section 2) minimizer of approximated functionals})--(\ref{Eq (Section 2) minimizer of original functionals}) are well-defined. In fact, to construct minimizers by direct methods (\cite[Chapter 8]{MR1625845}, \cite[Chapter 4]{MR1962933}), it suffices to prove boundedness and coerciveness of the functional ${\mathcal F}_{\varepsilon}$ over the space $u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. To check boundedness, we let $g_{p}(0)=0$ without loss of generality. Then the growth estimate $g_{p}(\sigma)\le C(p,\,\Gamma)\sigma^{p/2}$ easily follows from (\ref{Eq (Section 1) Growth g-p-prime}). In particular, by the continuous embedding $L^{q}(\Omega;\,{\mathbb R}^{N})\hookrightarrow W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$, we have \begin{equation}\label{Eq (Section 2) Growth of F-epsilon} {\mathcal F}_{\varepsilon}(v)\le K_{0}\mleft( 1+\lVert Dv\rVert_{L^{p}(\Omega)}^{p}+\lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}\lVert v\rVert_{W^{1,\,p}(\Omega)} \mright) \end{equation} for all $\varepsilon\in\lbrack 0,\,1)$ and $v\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. For a coercive estimate, we have \begin{equation}\label{Eq (Section 2) Coercive of F-epsilon} {\mathcal F}_{\varepsilon}(v)\ge K_{1}\lVert Dv\rVert_{L^{p}(\Omega)}-K_{2}\mleft( 1+\lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}\lVert u_{\star}\rVert_{W^{1,\,p}(\Omega)}+\lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}^{p^{\prime}}\mright) \end{equation} for all $\varepsilon\in\lbrack 0,\,1)$ and $v\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. 
Here the constants $K_{1}\in\lbrack 0,\,1)$ and $K_{2}\in(1,\,\infty)$ depend at most on $n$, $p$, $q$, $\gamma$ and $\Omega$, but are independent of $\varepsilon\in\lbrack 0,\,1)$. It is possible to deduce (\ref{Eq (Section 2) Coercive of F-epsilon}) by applying the continuous embedding $W^{1,\,p}(\Omega;\,{\mathbb R}^{N})\hookrightarrow L^{q^{\prime}}(\Omega;\,{\mathbb R}^{N})$ and the Poincar\'{e} inequality \begin{equation}\label{Eq (Section 2) Poincare ineq} \lVert v-u_{\star}\rVert_{W^{1,\,p}(\Omega)}\le C(n,\,p,\,\Omega)\lVert Dv-Du_{\star}\rVert_{L^{p}(\Omega)}\quad \textrm{for all }v\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N}). \end{equation} The detailed computations to show (\ref{Eq (Section 2) Coercive of F-epsilon}) are substantially similar to \cite[Propostion 2.11]{T-scalar} (see also \cite[\S 3]{MR4201656}). Uniqueness of minimizers are guaranteed by strict convexity of $E_{\varepsilon}$ and (\ref{Eq (Section 2) Poincare ineq}). Therefore, we are able to define $u$ and $u_{\varepsilon}$ satisfying (\ref{Eq (Section 2) minimizer of approximated functionals}) and (\ref{Eq (Section 2) minimizer of original functionals}) respectively. We also mention that $\{Du_{\varepsilon}\}_{0<\varepsilon<1}\subset L^{p}(\Omega;\,{\mathbb R}^{Nn})$ is bounded, and so $\{u_{\varepsilon}\}_{0<\varepsilon<1}\subset W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ is by (\ref{Eq (Section 2) Poincare ineq}). This is easy to deduce by applying (\ref{Eq (Section 2) Growth of F-epsilon})--(\ref{Eq (Section 2) Coercive of F-epsilon}) to the inequality ${\mathcal F}_{\varepsilon}(v_{\varepsilon})\le {\mathcal F}_{\varepsilon}(u_{\star})$ following from (\ref{Eq (Section 2) minimizer of approximated functionals}). Hence, by the weak compactness theorem, we may choose a sequence $\{\varepsilon_{j}\}_{j=1}^{\infty}\subset(0,\,1)$ such that $\varepsilon_{j}\to 0$ and \begin{equation}\label{Eq (Section 2) Weak limit u0} u_{\varepsilon_{j}}\rightharpoonup u_{0}\quad \textrm{in }W^{1,\,p}(\Omega;\,{\mathbb R}^{N}) \end{equation} for some function $u_{0}\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. Moreover, the condition (\ref{Eq (Section 2) exponent q}) enables us to apply the compact embedding, so that we may let \begin{equation}\label{Eq (Section 2) Strong convergence from compact embedding} u_{\varepsilon_{j}}\rightarrow u_{0}\quad \textrm{in }L^{q^{\prime}}(\Omega;\,{\mathbb R}^{N}) \end{equation} by taking a subsequence if necessary. We claim that \begin{equation}\label{Eq (Section 2) Monotonicity term convergence} I(\varepsilon_{j})\coloneqq \int_{\Omega}\langle A_{\varepsilon_{j}}(Du_{\varepsilon_{j}})-A_{\varepsilon_{j}}(Du_{0})\mid Du_{\varepsilon_{j}}-Du_{0}\rangle\,{\mathrm{d}}x\to 0\quad\textrm{as }j\to\infty. \end{equation} Before showing (\ref{Eq (Section 2) Monotonicity term convergence}), we mention that this result yields \begin{equation}\label{Eq (Section 2) strong convergence} Du_{\varepsilon_{j}}\to Du_{0}\quad \textrm{in }L^{p}(\Omega;\,{\mathbb R}^{Nn}). 
\end{equation} In fact, when $1<p<2$, we apply (\ref{Eq (Section 2) Monotonicity estimate for A-p-epsilon}), (\ref{Eq (Section 2) Monotonicity of A-1-epsilon}), and H\"{o}lder's inequality to obtain \begin{align*} \lVert Du_{\varepsilon_{j}}-Du_{0}\rVert_{L^{p}(\Omega)}^{p}&\le \mleft(\int_{\Omega}\mleft(\varepsilon_{j}^{2}+\lvert Du_{\varepsilon_{j}}\rvert^{2}+\lvert Du_{0}\rvert^{2}\mright)^{p/2-1}\lvert Du_{\varepsilon_{j}}-Du_{0}\rvert^{2}\,{\mathrm{d}}x\mright)^{p/2} \\& \quad \cdot\underbrace{\sup_{j}\mleft(\int_{\Omega}\mleft(\varepsilon_{j}^{2}+\lvert Du_{\varepsilon_{j}}\rvert^{2}+\lvert Du_{0}\rvert^{2}\mright)^{p/2}\,{\mathrm{d}}x \mright)^{1-p/2}}_{\eqqcolon C}\\&\le C\lambda^{-p/2}\cdot I(\varepsilon_{j})^{p/2}\to 0\quad \textrm{as }j\to\infty. \end{align*} Here it is noted that $C$ is finite by (\ref{Eq (Section 2) Weak limit u0}). In the remaining case $2\le p<\infty$, we simply use (\ref{Eq (Section 2) Monotonicity estimate for A-p-epsilon}) and (\ref{Eq (Section 2) Monotonicity of A-1-epsilon}) to get \[\lVert Du_{\varepsilon_{j}}-Du_{0}\rVert_{L^{p}(\Omega)}^{p}\le \frac{I(\varepsilon_{j})}{\lambda C(p)}\to 0\quad \textrm{as }j\to\infty.\] For the proof of (\ref{Eq (Section 2) Monotonicity term convergence}), we decompose $I=I_{1}-I_{2}$ with \[I_{1}(\varepsilon_{j})\coloneqq\int_{\Omega}\langle A_{\varepsilon_{j}}(Du_{\varepsilon_{j}})\mid Du_{\varepsilon_{j}}-Du_{0}\rangle\,{\mathrm{d}}x,\quad I_{2}(\varepsilon_{j})\coloneqq\int_{\Omega}\langle A_{\varepsilon_{j}}(Du)\mid Du_{\varepsilon_{j}}-Du_{0}\rangle\,{\mathrm{d}}x.\] For $I_{1}$, we test $\phi\coloneqq u_{\varepsilon_{j}}-u_{0}\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ into (\ref{Eq (Section 2) Regularized weak formulation}). Then, we obtain \[I_{1}(\varepsilon_{j})=\int_{\Omega}\langle f_{\varepsilon_{j}}\mid u_{\varepsilon_{j}}-u_{0}\rangle\,{\mathrm d}x\to 0\quad \textrm{as }j\to\infty\] by (\ref{Eq (Section 2) Weak convergence of f}) and (\ref{Eq (Section 2) Strong convergence from compact embedding}). For $I_{2}$, we note that for every $\xi\in{\mathbb R}^{Nn}$, $A_{\varepsilon}(\xi)$ converges to \[A_{0}(\xi)\coloneqq \mleft\{\begin{array}{cc} g^{\prime}(\lvert \xi\rvert^{2})\xi & (\xi\neq 0),\\ 0 & (\xi=0), \end{array}\quad \textrm{as }\varepsilon\to 0, \mright.\] since $A_{\varepsilon}(\xi)=b\mleft(\varepsilon^{2}+\lvert \xi\rvert^{2}\mright)^{-1/2}\xi+g_{p}^{\prime}(\varepsilon^{2}+\lvert \xi\rvert^{2})\xi$ satisfies $A_{\varepsilon}(0)=0$. Also, for every $\varepsilon\in(0,\,1)$, it is easy to check that \[\lvert A_{\varepsilon}(Du_{0})\rvert\le b+\Gamma\mleft(\varepsilon^{2}+\lvert Du_{0}\rvert^{2} \mright)^{\frac{p-1}{2}}\le C(b,\,p,\,\Lambda)(1+\lvert Du_{0}\rvert^{p-1})\quad \textrm{a.e. in }\Omega.\] Hence, by applying Lebesgue's dominated convergence theorem, we have $A_{\varepsilon}(Du_{0})\rightarrow A_{0}(Du_{0})$ in $L^{p^{\prime}}(\Omega;\,{\mathbb R}^{Nn})$. It should be recalled that the weak convergence $Du_{\varepsilon}\rightharpoonup Du_{0}$ in $L^{p}(\Omega;\,{\mathbb R}^{Nn})$ is already known by (\ref{Eq (Section 2) Weak limit u0}). These convergence results imply $I_{2}(\varepsilon_{j})\to 0$ as $j\to\infty$. Thus, the claim (\ref{Eq (Section 2) Monotonicity term convergence}) is verified. We would like to prove that the limit $u_{0}$ is a weak solution of the problem (\ref{Eq (Section 2) Dirichlet boundary problem}). First, by (\ref{Eq (Section 2) strong convergence}) and \cite[Theorem 4.9]{MR2759829}, we may let \begin{equation}\label{Eq (Section 2) a.e. 
convergence of Du} Du_{\varepsilon_{j}}\rightarrow Du_{0}\quad \textrm{a.e. in }\Omega, \end{equation} and \begin{equation}\label{Eq (Section 2) existence of dominant function} \lvert Du_{\varepsilon_{j}}\rvert\le v\quad \textrm{a.e. in }\Omega, \end{equation} by taking a subsequence if necessary. Here a non-negative function $v\in L^{p}(\Omega)$ is independent of the subscript $j\in{\mathbb N}$. Then, \begin{equation}\label{Eq (Section 2) A-p-epsilon strong convergence} A_{p,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})\rightarrow A_{p}(Du_{0})\quad \textrm{in }L^{p^{\prime}}(\Omega;\,{\mathbb R}^{Nn}) \end{equation} follows from (\ref{Eq (Section 2) a.e. convergence of Du})--(\ref{Eq (Section 2) existence of dominant function}). To verify this, we should note that by (\ref{Eq (Section 2) Growth estimate for A-p-epsilon}), the mapping $A_{p,\,\varepsilon}$ locally uniformly converges to $A_{p}$ in ${\mathbb R}^{Nn}$. Combining with (\ref{Eq (Section 2) a.e. convergence of Du}), we can check $A_{p,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})\rightarrow A_{p}(Du_{0})$ a.e. in $\Omega$. Also, by (\ref{Eq (Section 1) Growth g-p-prime}) and (\ref{Eq (Section 2) existence of dominant function}), we get \[\lvert A_{p,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})\rvert\le \Gamma\mleft(\varepsilon_{j}^{2}+\lvert Du_{\varepsilon_{j}}\rvert^{2}\mright)^{p/2-1}\lvert Du_{\varepsilon_{j}}\rvert\le \Gamma\mleft(1+v^{2}\mright)^{\frac{p-1}{2}}\in L^{p^{\prime}}(\Omega)\] a.e. in $\Omega$. Thus, (\ref{Eq (Section 2) A-p-epsilon strong convergence}) can be deduced by Lebesgue's dominated convergence theorem. Secondly, since the mapping $Z_{j}\coloneqq A_{1,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})$ satisfies $\lVert Z_{j}\rVert_{L^{\infty}(\Omega)}\le 1$, up to a subsequence, we may let \begin{equation}\label{Eq (Section 2) weak-star limit Z} Z_{j}\overset{\ast}{\rightharpoonup} Z\quad \textrm{in }L^{\infty}(\Omega;\,{\mathbb R}^{Nn}) \end{equation} for some $Z\in L^{\infty}(\Omega;\,{\mathbb R}^{Nn})$ \cite[Corollary 3.30]{MR2759829}. This limit clearly satisfies $\lVert Z\rVert_{L^{\infty}(\Omega)}\le 1$. Therefore, to check (\ref{Eq (Section 2) Z is subgradient}), it suffices to prove \[Z=\frac{Du_{0}}{\lvert Du_{0}\rvert}\quad \textrm{a.e. in }D\coloneqq \{x\in\Omega\mid Du_{0}(x)\neq 0\}.\] This claim is easy to deduce by (\ref{Eq (Section 2) a.e. convergence of Du}). In fact, (\ref{Eq (Section 2) a.e. convergence of Du}) yields $Z_{j}\to Du_{0}/\lvert Du_{0}\rvert$ a.e. in $D$, and hence we are able to conclude $Z_{j}\overset{\ast}{\rightharpoonup}Du_{0}/\lvert Du_{0}\rvert$ in $L^{\infty}(D;\,{\mathbb R}^{Nn})$ by Lebesgue's dominated convergence theorem; by uniqueness of weak-$\ast$ limits, this yields $Z=Du_{0}/\lvert Du_{0}\rvert$ a.e. in $D$. Finally, using (\ref{Eq (Section 2) Weak convergence of f}) and (\ref{Eq (Section 2) A-p-epsilon strong convergence})--(\ref{Eq (Section 2) weak-star limit Z}), we are able to deduce (\ref{Eq (Section 2) Weak formulation of very singular problems}) by letting $\varepsilon=\varepsilon_{j}$ and $j\to\infty$ in the weak formulation (\ref{Eq (Section 2) Regularized weak formulation}). We mention that $u_{0}$, a weak solution of (\ref{Eq (Section 2) Dirichlet boundary problem}), coincides with $u$ satisfying (\ref{Eq (Section 2) minimizer of original functionals}). In fact, for arbitrary $\phi\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$, we have \[\lvert D(u_{0}+\phi)\rvert\ge \lvert Du_{0}\rvert+\langle Z\mid D\phi\rangle\quad \textrm{a.e. in }\Omega,\] since $Z$ satisfies (\ref{Eq (Section 2) Z is subgradient}).
Similarly, we can easily get \[g_{p}(\lvert D(u_{0}+\phi)\rvert^{2})\ge g_{p}(\lvert Du_{0}\rvert^{2})+\mleft\langle A_{p}(Du_{0})\mathrel{}\middle|\mathrel{}D\phi\mright\rangle\quad \textrm{a.e. in }\Omega.\] By testing $\phi\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ into (\ref{Eq (Section 2) Weak formulation of very singular problems}), we can easily notice ${\mathcal F}_{0}(u_{0})\le {\mathcal F}_{0}(u_{0}+\phi)$ for all $\phi\in W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. In other words, the limit function $u_{0}\in u_{\star}+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ satisfies (\ref{Eq (Section 2) minimizer of original functionals}). We recall that a function verifying (\ref{Eq (Section 2) minimizer of original functionals}) uniquely exists, and thus $u=u_{0}$. This completes the proof. \end{proof} \subsection{H\"{o}lder continuity estimates}\label{Subsect: A priori Hoelder} In Section \ref{Subsect: A priori Hoelder}, we would like to prove our main theorem (Theorem \ref{Theorem: C1-regularity}) by an approximation argument. In Proposition \ref{Prop: Convergence on weak system}, we have already justified that a weak solution to (\ref{Eq (Section 1) main system}) can be approximated by a weak solution to \begin{equation}\label{Eq (Section 2) Approximated system} -\mathop{\mathrm{div}}\mleft(g_{\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})Du_{\varepsilon}\mright)=f_{\varepsilon}, \end{equation} under a suitable Dirichlet boundary value condition. We should note that the function $u_{\varepsilon}$ defined by (\ref{Eq (Section 2) minimizer of approximated functionals}) solves (\ref{Eq (Section 2) Approximated system}) in the distributional sense, under a boundary condition $u_{\varepsilon}=u_{\star}$ on $\partial\Omega$. Since we have already justified convergence for approximated solutions $u_{\varepsilon}$ in Proposition \ref{Prop: Convergence on weak system}, it suffices to obtain a priori regularity estimates on weak solutions to a regularized system (\ref{Eq (Section 2) Approximated system}). The key estimates are given by Theorem \ref{Theorem: A priori Hoelder estimate}, where the continuity estimates are independent of the approximation parameter $\varepsilon$, so that the Arzel\`{a}--Ascoli theorem can be applied. \begin{theorem}[A priori H\"{o}lder estimates on truncated Jacobian matrices]\label{Theorem: A priori Hoelder estimate} Let positive numbers $\delta,\,\varepsilon$ satisfy (\ref{Eq (Section 2) delta-epsilon}), and $u_{\varepsilon}$ be a weak solution to a regularized system (\ref{Eq (Section 2) Approximated system}) in \(\Omega\) with \begin{equation}\label{Eq (Section 2) control of Du} \lVert Du_{\varepsilon}\rVert_{L^{p}(\Omega)}\le L, \end{equation} and \begin{equation}\label{Eq (Section 2) control of f} \lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}\le F \end{equation} for some constants $F,\,L\in(0,\,\infty)$. Then, for each fixed $x_{\ast}\in\Omega$, there exist a sufficiently small open ball $B_{\rho_{0}}(x_{\ast})\Subset\Omega$ and an exponent $\alpha\in(0,\,1)$, depending at most on $b,\,n,\,N,\,p,\,q,\,\gamma,\,\Gamma,\, F,\,L,\,d_{\ast}=\mathop{\mathrm{dist}}(x_{\ast},\,\partial\Omega)$, and $\delta$, such that ${\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})$ is in $C^{\alpha}(B_{\rho_{0}/2}(x_{\ast});\,{\mathbb R}^{Nn})$.
Moreover, there exists a constant $\mu_{0}\in(0,\,\infty)$, depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $L$, and $d_{\ast}$, such that \begin{equation}\label{Eq (Section 2) Bound of G-2delta-epsilon} \sup_{B_{\rho_{0}}(x_{\ast})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu_{0}, \end{equation} and \begin{equation}\label{Eq (Section 2) Holder continuity of G-2delta-epsilon} \mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon}(x_{1}))-{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon}(x_{2}))\mright\rvert\le \frac{2^{n/2+2\alpha+2}\mu_{0}}{\rho_{0}^{\alpha}}\lvert x_{1}-x_{2}\rvert^{\alpha} \end{equation} for all $x_{1},\,x_{2}\in B_{\rho_{0}/2}(x_{\ast})$. \end{theorem} To prove Theorem \ref{Theorem: A priori Hoelder estimate}, we use three basic propositions, whose proofs are given later in Sections \ref{Section: Weak formulations}--\ref{Section: Appendix}. Here it is noted that when stating Propositions \ref{Prop: Lipschitz bounds}--\ref{Prop: Schauder estimate}, we have to use another modulus \(V_{\varepsilon}\coloneqq \sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}\), instead of $\lvert Du_{\varepsilon}\rvert$, since we have relaxed the principal divergence operator ${\mathcal L}$ itself. Along with this, we have already introduced another truncation mapping ${\mathcal G}_{2\delta,\,\varepsilon}$, instead of ${\mathcal G}_{2\delta}$. The first proposition states local a priori Lipschitz bounds for regularized solutions. \begin{proposition}[Local Lipschitz bounds]\label{Prop: Lipschitz bounds} Let $\varepsilon\in(0,\,1)$ and $u_{\varepsilon}$ be a weak solution to a regularized system (\ref{Eq (Section 2) Approximated system}) in \(\Omega\). Fix an open ball $B_{\rho}(x_{0})\Subset \Omega$ with $\rho\in(0,\,1\rbrack$. Then, there exists a constant $C\in(0,\,\infty)$ depending at most on $b$, $n$, $p$, $q$, $\gamma$, and $\Gamma$, such that \[\mathop{\mathrm{ess~sup}}_{B_{\theta\rho}(x_{0})}\,V_{\varepsilon}\le C\mleft[1+\lVert f_{\varepsilon}\rVert_{L^{q}(B_{\rho}(x_{0}))}^{1/(p-1)}+\frac{\lVert V_{\varepsilon}\rVert_{L^{p}(B_{\rho}(x_{0}))}}{[(1-\theta)\rho]^{d}}\mright]\] for all $\theta\in(0,\,1)$. Here the exponent $d\in\lbrack n/p,\,\infty)$ depends at most on $n$, $p$, and $q$. \end{proposition} These estimates can be deduced by carefully choosing test functions whose supports are separated from facets of approximated solutions. Also, it should be emphasized that the local Lipschitz bounds in Proposition \ref{Prop: Lipschitz bounds} are to be expected, since the density $E_{\varepsilon}$ has a $p$-Laplace-type structure when it is sufficiently far from the origin. Lipschitz estimates from the viewpoint of the asymptotic behaviour of density functions can be found in the existing literature \cite{MR4078712} (see also \cite{MR852362} as a classical work). For the reader's convenience, we would like to provide the proof of Proposition \ref{Prop: Lipschitz bounds} in the appendix (Section \ref{Section: Appendix}). Hereinafter we assume local uniform boundedness of $V_{\varepsilon}$, which is guaranteed by Proposition \ref{Prop: Lipschitz bounds}. In particular, the scalar function $\lvert {\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\rvert$ is uniformly bounded in each fixed subdomain of $\Omega$.
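To illustrate how Proposition \ref{Prop: Lipschitz bounds} is applied (the specific choice $\theta=1/2$ below is ours and is made only for concreteness), let $B_{\rho}(x_{0})\Subset\Omega$ with $\rho\in(0,\,1\rbrack$ and assume (\ref{Eq (Section 2) control of Du})--(\ref{Eq (Section 2) control of f}). Since $\lVert V_{\varepsilon}\rVert_{L^{p}(B_{\rho}(x_{0}))}\le \varepsilon\lvert B_{\rho}(x_{0})\rvert^{1/p}+\lVert Du_{\varepsilon}\rVert_{L^{p}(\Omega)}\le \lvert B_{\rho}(x_{0})\rvert^{1/p}+L$ and $\lVert f_{\varepsilon}\rVert_{L^{q}(B_{\rho}(x_{0}))}\le F$, the choice $\theta=1/2$ gives \[\mathop{\mathrm{ess~sup}}_{B_{\rho/2}(x_{0})}\,V_{\varepsilon}\le C\mleft[1+F^{1/(p-1)}+\frac{2^{d}}{\rho^{d}}\mleft(\lvert B_{\rho}(x_{0})\rvert^{1/p}+L\mright)\mright],\] with a right-hand side that is independent of $\varepsilon\in(0,\,1)$. It is this uniform-in-$\varepsilon$ bound that we invoke below.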
For an open ball $B_{\rho}(x_{0})\Subset \Omega$ and positive numbers $\mu\in(0,\,\infty)$, $\nu\in(0,\,1)$, we introduce a superlevel set \[S_{\rho,\,\mu,\,\nu}(x_{0})\coloneqq \{x\in B_{\rho}(x_{0})\mid V_{\varepsilon}-\delta>(1-\nu)\mu\}.\] The second and third propositions (Propositions \ref{Prop: De Giorgi's truncation}--\ref{Prop: Schauder estimate}) are useful for estimating oscillation of ${\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})$. Our analysis herein depends on whether $V_{\varepsilon}$ vanishes near a point $x_{0}$ or not, which is judged by measuring the size of the superlevel set $S_{\rho,\,\mu,\,\nu}(x_{0})$. Before stating Propositions \ref{Prop: De Giorgi's truncation}--\ref{Prop: Schauder estimate}, we introduce a constant $\beta\in(0,\,1)$ by \[\beta\coloneqq\mleft\{\begin{array}{cc} 1-\displaystyle\frac{n}{q} & (n<q<\infty),\\ {\hat\beta}_{0} & (q=\infty), \end{array} \mright.\] where ${\hat\beta}_{0}\in(0,\,1)$ is an arbitrary number. The exponent $\beta$ appears when one considers regularity of solutions to the Poisson system $-\Delta v=f$ with $f$ in $L^{q}$. In fact, from the classical Schauder theory, it is well-known that this weak solution $v$ admits local $C^{1,\,\beta}$-regularity. Propositions \ref{Prop: De Giorgi's truncation}--\ref{Prop: Schauder estimate} below will be shown in Sections \ref{Section: Weak formulations}--\ref{Section: Campanato estimates}. \begin{proposition}[An oscillation lemma]\label{Prop: De Giorgi's truncation} Let $u_{\varepsilon}$ be a weak solution to a regularized system (\ref{Eq (Section 2) Approximated system}) in $\Omega$. Assume that positive numbers $\delta,\,\varepsilon,\,\mu,\,F,\,M$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}) and \begin{equation}\label{Eq (Section 2) G-delta-epsilon bound} \mathop{\mathrm{ess~sup}}_{B_{\rho}(x_{0})}\,\mleft\lvert {\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu<\mu+\delta\le M \end{equation} for some open ball $B_{\rho}(x_{0})\Subset \Omega$ with $\rho\in(0,\,1\rbrack$. Assume that there holds \begin{equation}\label{Eq (Section 3) Measure condition 1} \lvert S_{\rho/2,\,\mu,\,\nu}(x_{0}) \rvert\le (1-\nu)\lvert B_{\rho/2}(x_{0})\rvert \end{equation} for some constant $\nu\in(0,\,1)$. Then, we have either \begin{equation} \mu^{2}<C_{\star}\rho^{\beta}, \end{equation} or \begin{equation} \mathop{\mathrm{ess~sup}}_{B_{\rho/4}(x_{0})}\,\mleft\lvert {\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \kappa\mu. \end{equation} Here the constants $C_{\star}\in(0,\,\infty)$ and $\kappa\in(2^{-\beta},\,1)$ depend at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, $\delta$ and $\nu$. \end{proposition} \begin{proposition}[Campanato-type growth estimates]\label{Prop: Schauder estimate} Let $u_{\varepsilon}$ be a weak solution to a regularized system (\ref{Eq (Section 2) Approximated system}) in $\Omega$. Assume that positive numbers $\delta$, $\varepsilon$, $\mu$, $F$, $M$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}), \begin{equation}\label{Eq (Section 2) esssup V-epsilon} \mathop{\mathrm{ess~sup}}_{B_{\rho}(x_{0})}\,V_{\varepsilon}\le \mu+\delta\le M \end{equation} for some open ball $B_{\rho}(x_{0})\Subset \Omega$, and \begin{equation}\label{Eq (Section 2) delta<mu} 0<\delta<\mu. 
\end{equation} Then, there exist numbers $\nu\in(0,\,1/4),\,\rho_{\star}\in(0,\,1)$, depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$, such that the following statement holds true. If there hold $0<\rho<\rho_{\star}$ and \begin{equation}\label{Eq (Section 2) Measure condition 2} \lvert S_{\rho,\,\mu,\,\nu}(x_{0})\rvert>(1-\nu)\lvert B_{\rho}(x_{0})\rvert, \end{equation} then the limit \begin{equation} \Gamma_{2\delta,\,\varepsilon}(x_{0})\coloneqq \lim_{r\to 0}({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon}))_{x_{0},\,r}\in{\mathbb R}^{Nn} \end{equation} exists. Moreover, there hold \begin{equation}\label{Eq (Section 2) Bound of Gamma-2delta-epsilon} \lvert \Gamma_{2\delta,\,\varepsilon}(x_{0})\rvert\le \mu, \end{equation} and \begin{equation}\label{Eq (Section 2) Campanato-type growth from Schauder} \fint_{B_{r}(x_{0})}\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert^{2}\,{\mathrm{d}}x\le \mleft(\frac{r}{\rho} \mright)^{2\beta} \mu^{2}\quad \textrm{for all }r\in(0,\,\rho\rbrack. \end{equation} \end{proposition} \begin{Remark}\label{Rmk: Equivalence on boundedness}\upshape Let $\delta$ and $\varepsilon$ satisfy $0<\varepsilon<\delta$. Then, for $\xi\in{\mathbb R}^{Nn}$, $\mleft\lvert {\mathcal G}_{\delta,\,\varepsilon}(\xi)\mright\rvert\le \mu$ holds if and only if $\xi$ satisfies $\sqrt{\varepsilon^{2}+\lvert\xi\rvert^{2}}\le\mu+\delta$. In particular, the conditions (\ref{Eq (Section 2) G-delta-epsilon bound}) and (\ref{Eq (Section 2) esssup V-epsilon}) are equivalent. Also, it should be noted that the mapping ${\mathcal G}_{2\delta,\,\varepsilon}$ satisfies \begin{equation}\label{Eq (Section 2) Control of G-2delta-epsilon by G-delta-epsilon} \mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(\xi)\mright\rvert\le \mleft(\mleft\lvert{\mathcal G}_{\delta,\,\varepsilon}(\xi)\mright\rvert -\delta\mright)_{+}\le \mleft\lvert {\mathcal G}_{\delta,\,\varepsilon}(\xi)\mright\rvert\quad \textrm{for all }\xi\in{\mathbb R}^{Nn}, \end{equation} which is used in the proof of Theorem \ref{Theorem: A priori Hoelder estimate}. \end{Remark} By applying Propositions \ref{Prop: Lipschitz bounds}--\ref{Prop: Schauder estimate}, we would like to prove Theorem \ref{Theorem: A priori Hoelder estimate}. \begin{proof} For each fixed \(x_{\ast}\in\Omega\), we first choose \[R\coloneqq \min\mleft\{\,\frac{1}{2},\,\frac{1}{3}\mathop{\mathrm{dist}}\,(x_{\ast},\,\partial\Omega)\mright\}>0,\] so that \(B_{2R}(x_{\ast})\Subset \Omega\) holds. By Proposition \ref{Prop: Lipschitz bounds}, we may take a finite constant \(\mu_{0}\in(0,\,\infty)\), depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, and $R$, such that \begin{equation}\label{Eq (Section 2) Lipschitz bound} \mathop{\mathrm{ess~sup}}_{B_{R}(x_{\ast})}\,V_{\varepsilon}\le \mu_{0}. \end{equation} We set \(M\coloneqq 1+\mu_{0}\), so that \(\mu_{0}+\delta\le M\) clearly holds. We choose and fix the numbers \(\nu\in(0,\,1/4),\,\rho_{\star}\in(0,\,1)\) as in Proposition \ref{Prop: Schauder estimate}, which depend at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$. Corresponding to this \(\nu\), we choose finite constants \(\kappa\in(2^{-\beta},\,1)\), \(C_{\star}\in\lbrack1,\,\infty)\) as in Proposition \ref{Prop: De Giorgi's truncation}. We define the desired H\"{o}lder exponent \(\alpha\in(0,\,\beta/2)\) by \(\alpha\coloneqq -\log\kappa/\log 4\), so that the identity \(4^{-\alpha}=\kappa\) holds. 
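For completeness, we record why this exponent indeed lies in $(0,\,\beta/2)$: the two-sided bound $2^{-\beta}<\kappa<1$ yields $0<-\log\kappa<\beta\log 2$, and therefore \[\alpha=\frac{-\log\kappa}{\log 4}\in\mleft(0,\,\frac{\beta\log 2}{\log 4}\mright)=\mleft(0,\,\frac{\beta}{2}\mright).\]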
We also put the radius \(\rho_{0}\) such that \begin{equation}\label{Eq (Section 2) determination of radius} 0<\rho_{0}\le \min\mleft\{\,\frac{R}{2},\,\rho_{\star}\,\mright\}<1\quad\textrm{and}\quad C_{\star}\rho_{0}^{\beta}\le \kappa^{2}\mu_{0}^{2}, \end{equation} which depends at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $L$, $M$, and $\delta$. We define non-negative decreasing sequences \(\{\rho_{k}\}_{k=1}^{\infty},\,\{\mu_{k}\}_{k=1}^{\infty}\) by setting \(\rho_{k}\coloneqq 4^{-k}\rho_{0},\,\mu_{k}\coloneqq \kappa^{k}\mu_{0}\) for \(k\in{\mathbb N}\). By \(2^{-\beta}<\kappa=4^{-\alpha}<1\) and (\ref{Eq (Section 2) determination of radius}), it is easy to check that \begin{equation}\label{Eq (Section 2) An estimate for induction} \sqrt{ C_{\star}\rho_{k}^{\beta}}\le 2^{-\beta k}\kappa\mu_{0}\le\kappa^{k+1}\mu_{0}= \mu_{k+1}, \end{equation} and \begin{equation}\label{Eq (Section 2) Estimate on radius ratio} \mu_{k}=4^{-\alpha k}\mu_{0}=\mleft(\frac{\rho_{k}}{\rho_{0}}\mright)^{\alpha}\mu_{0} \end{equation} for every \(k\in{\mathbb Z}_{\ge 0}\). We claim that for every \(x_{0}\in B_{\rho_{0}}(x_{\ast})\), the limit \[\Gamma_{2\delta,\,\varepsilon}(x_{0})\coloneqq\lim_{r\to 0}\mleft({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright)_{x_{0},\,r}\in{\mathbb R}^{Nn}\] exists, and this limit satisfies \begin{equation}\label{Eq (Section 2) Campanato-type alpha-growth estimate} \fint_{B_{r}(x_{0})}\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0}) \mright\rvert^{2}\,{\mathrm d}x\le 4^{2\alpha+1}\mleft(\frac{r}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}\quad \textrm{for all }r\in(0,\,\rho_{0}\rbrack. \end{equation} In the proof of (\ref{Eq (Section 2) Campanato-type alpha-growth estimate}), we introduce a set \[{\mathcal N}\coloneqq \mleft\{k\in{\mathbb Z}_{\ge 0}\mathrel{}\middle|\mathrel{} \lvert S_{\rho_{k}/2,\,\mu_{k},\,\nu}(x_{0})\rvert>(1-\nu) \lvert B_{\rho_{k}/2}(x_{0})\rvert\mright\},\] and define \(k_{\star}\in{\mathbb Z}_{\ge 0}\) to be the minimum of ${\mathcal N}$ when it is non-empty. We consider the three possible cases: \({\mathcal N}\neq \emptyset\) and \(\mu_{k_{\star}}> \delta\); \({\mathcal N}\neq \emptyset\) and \(\mu_{k_{\star}}\le \delta\); and \({\mathcal N}=\emptyset\). Before dealing with each case, we mention that if \({\mathcal N}\neq \emptyset\), then it is possible to apply Proposition \ref{Prop: De Giorgi's truncation} with \((\rho,\,\mu)=(\rho_{k},\,\mu_{k})\) for \(k\in\{\,0,\,1,\,\dots\,,\,k_{\star}-1\,\}\), by the definition of $k_{\star}$. With (\ref{Eq (Section 2) An estimate for induction}) in mind, we are able to obtain \begin{equation}\label{Eq (Section 2) Oscillation growth from De Giorgi} \mathop{\mathrm{ess~sup}}_{B_{\rho_{k}}(x_{0})}\,\mleft\lvert{\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu_{k}\quad \textrm{for every }k\in\{\,0,\,1,\,\dots\,,\,k_{\star}\,\}.\end{equation} In the first case, the conditions \(k_{\star}\in {\mathcal N}\) and \(\mu_{k_{\star}}>\delta\) enable us to apply Proposition \ref{Prop: Schauder estimate} for the open ball \(B_{\rho_{k_{\star}}/2}(x_{0})\) with \(\mu=\mu_{k_{\star}}\). In particular, the limit \(\Gamma_{2\delta,\,\varepsilon}(x_{0})\) exists and this limit satisfies \begin{equation}\label{Eq (Section 2) Bound of mu-k in main theorem} \mleft\lvert \Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert\le \mu_{k_{\star}},
\end{equation} and \begin{equation}\label{Eq (Section 2) Campanato-type beta-growth in main theorem} \fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert^{2}\,{\mathrm d}x\le \mleft(\frac{2r}{\rho_{k_{\star}}}\mright)^{2\beta}\mu_{k_{\star}}^{2}\quad \textrm{for all }r\in\mleft( 0,\,\frac{\rho_{k_{\star}}}{2}\mright]. \end{equation} When \(0<r\le \rho_{k_{\star}}/2\), we use (\ref{Eq (Section 2) Estimate on radius ratio}), (\ref{Eq (Section 2) Campanato-type beta-growth in main theorem}), and \(\alpha<\beta\) to get \[\fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert^{2}\,{\mathrm d}x\le \mleft(\frac{2r}{\rho_{k_{\star}}}\mright)^{2\alpha}\mleft(\frac{\rho_{k_{\star}}}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}= 4^{\alpha}\mleft(\frac{r}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}.\] When \(\rho_{k_{\star}}/2<r\le \rho_{0}\), there corresponds a unique integer \(k\in\{\,0,\,\dots\,,\,k_{\star}\,\}\) such that \(\rho_{k+1}<r\le \rho_{k}\). By (\ref{Eq (Section 2) Oscillation growth from De Giorgi})--(\ref{Eq (Section 2) Bound of mu-k in main theorem}), we compute \begin{align*} \fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert^{2}\,{\mathrm d}x&\le 2\mleft(\mathop{\mathrm{ess~sup}}_{B_{\rho_{k}}(x_{0})}\,\mleft\lvert {\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert^{2}+\mleft\lvert\Gamma_{2\delta,\,\varepsilon}(x_{0}) \mright\rvert^{2}\mright)\\&\le 4\mu_{k}^{2}\le 4 \mleft(\frac{\rho_{k}}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}\le 4 \mleft(\frac{4r}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}. \end{align*} In the second case, we recall (\ref{Eq (Section 2) Control of G-2delta-epsilon by G-delta-epsilon}) in Remark \ref{Rmk: Equivalence on boundedness} to notice \({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})=0\) a.e. in \(B_{\rho_{k_{\star}}}(x_{0})\). Combining with (\ref{Eq (Section 2) Oscillation growth from De Giorgi}), we can easily check \begin{equation}\label{Eq (Section 2) G-2delta-epsilon decay in digital} \mathop{\mathrm{ess~sup}}_{B_{\rho_{k}}(x_{0})}\,\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu_{k}\quad \textrm{for every }k\in{\mathbb Z}_{\ge 0}, \end{equation} which clearly yields \(\Gamma_{2\delta,\,\varepsilon}(x_{0})=0\). For every \(r\in (0,\,\rho_{0}\rbrack\), there corresponds a unique integer \(k\in{\mathbb Z}_{\ge 0}\) such that \(\rho_{k+1}<r\le \rho_{k}\). By (\ref{Eq (Section 2) G-2delta-epsilon decay in digital}) and \(\kappa=4^{-\alpha}\), we have \begin{align*} \fint_{B_{r}(x_{0})}\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert^{2}\,{\mathrm d}x&=\fint_{B_{r}(x_{0})}\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert^{2}\,{\mathrm d}x\\&\le \mathop{\mathrm{ess~sup}}_{B_{\rho_{k}}(x_{0})}\,\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert^{2}\le \mu_{k}^{2}=4^{-2\alpha k}\mu_{0}^{2}\\&\le \mleft[4\cdot\mleft(\frac{r}{\rho_{0}}\mright)\mright]^{2\alpha}\mu_{0}^{2}=16^{\alpha}\mleft(\frac{r}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}.
\end{align*} In the remaining case \({\mathcal N}=\emptyset\), it is clear that there holds \(\lvert S_{\rho_{k}/2,\,\mu_{k},\,\nu}(x_{0})\rvert \le (1-\nu)\lvert B_{\rho_{k}/2}(x_{0})\rvert\) for every \(k\in{\mathbb Z}_{\ge 0}\). Applying (\ref{Eq (Section 2) An estimate for induction}) and Proposition \ref{Prop: De Giorgi's truncation} with \((\rho,\,\mu)=(\rho_{k},\,\mu_{k})\,(k\in{\mathbb Z}_{\ge 0})\) repeatedly, we can easily check that \[\mathop{\mathrm{ess~sup}}_{B_{\rho_{k}}(x_{0})}\,\mleft\lvert{\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu_{k}\quad \textrm{for every }k\in{\mathbb Z}_{\ge 0}.\] In particular, (\ref{Eq (Section 2) G-2delta-epsilon decay in digital}) clearly follows from this and (\ref{Eq (Section 2) Control of G-2delta-epsilon by G-delta-epsilon}). Thus, the proof of (\ref{Eq (Section 2) Campanato-type alpha-growth estimate}) can be accomplished, similarly to the second case. In all of the cases above, we conclude that $\Gamma_{2\delta,\,\varepsilon}(x_{0})$ exists and satisfies (\ref{Eq (Section 2) Campanato-type alpha-growth estimate}), as well as \begin{equation}\label{Eq (Section 2) Boundedness on Gamma-2delta-epsilon} \mleft\lvert \Gamma_{2\delta,\,\varepsilon}(x_{0})\mright\rvert\le \mu_{0}\quad \textrm{for all }x_{0}\in B_{\rho_{0}}(x_{\ast}), \end{equation} which immediately follows from (\ref{Eq (Section 2) Lipschitz bound}). From (\ref{Eq (Section 2) Campanato-type alpha-growth estimate}) and (\ref{Eq (Section 2) Boundedness on Gamma-2delta-epsilon}), we would like to show \begin{equation}\label{Eq (Section 2) Hoelder estimate on Gamma-2delta-epsilon} \mleft\lvert \Gamma_{2\delta,\,\varepsilon}(x_{1})-\Gamma_{2\delta,\,\varepsilon}(x_{2})\mright\rvert\le \mleft(\frac{2^{2\alpha+2+n/2}}{\rho_{0}^{\alpha}}\mu_{0}\mright)\lvert x_{1}-x_{2}\rvert^{\alpha} \end{equation} for all \(x_{1},\,x_{2}\in B_{\rho_{0}/2}(x_{\ast})\). We prove (\ref{Eq (Section 2) Hoelder estimate on Gamma-2delta-epsilon}) by distinguishing two cases. When \(r\coloneqq \lvert x_{1}-x_{2}\rvert\le \rho_{0}/2\), we set a point \(x_{3}\coloneqq (x_{1}+x_{2})/2\in B_{\rho_{0}}(x_{\ast})\).
By \(B_{r/2}(x_{3})\subset B_{r}(x_{j})\subset B_{\rho_{0}/2}(x_{j})\subset B_{\rho_{0}}(x_{\ast})\) for each \(j\in\{\,1,\,2\,\}\), we use (\ref{Eq (Section 2) Campanato-type alpha-growth estimate}) to obtain \begin{align*} &\mleft\lvert\Gamma_{2\delta,\,\varepsilon}(x_{1})-\Gamma_{2\delta,\,\varepsilon}(x_{2}) \mright\rvert^{2}\\&=\fint_{B_{r/2}(x_{3})}\mleft\lvert\Gamma_{2\delta,\,\varepsilon}(x_{1})-\Gamma_{2\delta,\,\varepsilon}(x_{2}) \mright\rvert^{2}\,{\mathrm d}x\\&\le 2\mleft(\fint_{B_{r/2}(x_{3})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{1})\mright\rvert^{2}\,{\mathrm d}x +\fint_{B_{r/2}(x_{3})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{2})\mright\rvert^{2}\,{\mathrm d}x\mright) \\& \le 2^{n+1}\mleft(\fint_{B_{r}(x_{1})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{1})\mright\rvert^{2}\,{\mathrm d}x +\fint_{B_{r}(x_{2})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{2\delta,\,\varepsilon}(x_{2})\mright\rvert^{2}\,{\mathrm d}x\mright)\\&\le 2^{n+4}\cdot 16^{\alpha}\mleft(\frac{r}{\rho_{0}}\mright)^{2\alpha}\mu_{0}^{2}=\mleft(\frac{2^{2\alpha+2+n/2}}{\rho_{0}^{\alpha}}\mu_{0}\mright)^{2}\lvert x_{1}-x_{2}\rvert^{2\alpha}, \end{align*} which yields (\ref{Eq (Section 2) Hoelder estimate on Gamma-2delta-epsilon}). In the remaining case \(\lvert x_{1}-x_{2}\rvert>\rho_{0}/2\), we simply use (\ref{Eq (Section 2) Boundedness on Gamma-2delta-epsilon}) to get \[\mleft\lvert\Gamma_{2\delta,\,\varepsilon}(x_{1})-\Gamma_{2\delta,\,\varepsilon}(x_{2})\mright\rvert\le 2\mu_{0}\le 2\cdot \frac{2^{\alpha}\lvert x_{1}-x_{2}\rvert^{\alpha}}{\rho_{0}^{\alpha}}\mu_{0},\] which completes the proof of (\ref{Eq (Section 2) Hoelder estimate on Gamma-2delta-epsilon}). Finally, since the mapping \(\Gamma_{2\delta,\,\varepsilon}\) is a Lebesgue representative of \({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\in L^{p}(\Omega;\,{\mathbb R}^{Nn})\), the claims (\ref{Eq (Section 2) Bound of G-2delta-epsilon})--(\ref{Eq (Section 2) Holder continuity of G-2delta-epsilon}) immediately follow from (\ref{Eq (Section 2) Boundedness on Gamma-2delta-epsilon})--(\ref{Eq (Section 2) Hoelder estimate on Gamma-2delta-epsilon}). \end{proof} We would like to conclude Section \ref{Section: Approximation} by giving the proof of our main Theorem \ref{Theorem: C1-regularity}. \begin{proof} Let $u\in W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$ be a weak solution to (\ref{Eq (Section 1) main system}). We choose and fix $\{f_{\varepsilon}\}_{0<\varepsilon<1}\subset L^{q}(\Omega;\,{\mathbb R}^{N})$ enjoying (\ref{Eq (Section 2) Weak convergence of f}), and for each fixed $\varepsilon\in(0,\,1)$, we define the functional ${\mathcal F}_{\varepsilon}$ by (\ref{Eq (Section 2) energy functional: relaxed}). We set a function \[u_{\varepsilon}\coloneqq \mathop{\mathrm{arg~min}}\mleft\{{\mathcal F}_{\varepsilon}(v)\mathrel{}\middle|\mathrel{}v\in u+W_{0}^{1,\,p}(\Omega;\,{\mathbb R}^{N})\mright\}\quad \textrm{for every }\varepsilon\in(0,\,1),\] which is the unique solution of the Dirichlet problem \[\mleft\{\begin{array}{ccccc} {\mathcal L}^{\varepsilon}u_{\varepsilon}& = & f_{\varepsilon} & \textrm{in} &\Omega,\\ u_{\varepsilon} & = & u & \textrm{on} & \partial\Omega.
\end{array} \mright.\] By Proposition \ref{Prop: Convergence on weak system}, there exists a decreasing sequence $\{\varepsilon_{j}\}_{j=1}^{\infty}\subset(0,\,1)$ such that $\varepsilon_{j}\to 0$ and $u_{\varepsilon_{j}}\to u$ in $W^{1,\,p}(\Omega;\,{\mathbb R}^{N})$. In particular, we may let (\ref{Eq (Section 2) a.e. convergence of Du}) hold by taking a subsequence if necessary. Also, we may choose finite constants $L$ and $F$, independent of $\varepsilon$, such that $\lVert Du_{\varepsilon}\rVert_{L^{p}(\Omega)}\le L$ and $\lVert f_{\varepsilon}\rVert_{L^{q}(\Omega)}\le F$ hold for all $\varepsilon\in(0,\,1)$, so that (\ref{Eq (Section 2) control of Du})--(\ref{Eq (Section 2) control of f}) are satisfied. Fix $x_{\ast}\in\Omega$ and $\delta\in(0,\,1)$ arbitrarily. Then Theorem \ref{Theorem: A priori Hoelder estimate} enables us to apply the Arzel\`{a}--Ascoli theorem to the net $\{{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\}_{0<\varepsilon<\delta/4}\subset C^{\alpha}(B_{\rho_{0}/2}(x_{\ast});\,{\mathbb R}^{Nn})$. In particular, by taking a subsequence if necessary, we are able to conclude that ${\mathcal G}_{2\delta,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})\in C^{\alpha}(B_{\rho_{0}/2}(x_{\ast});\,{\mathbb R}^{Nn})$ uniformly converges to a continuous mapping $v_{2\delta}\in C^{\alpha}(B_{\rho_{0}/2}(x_{\ast});\,{\mathbb R}^{Nn})$. On the other hand, by (\ref{Eq (Section 2) a.e. convergence of Du}) we already know that ${\mathcal G}_{2\delta,\,\varepsilon_{j}}(Du_{\varepsilon_{j}})\rightarrow {\mathcal G}_{2\delta}(Du)$ a.e. in $\Omega$. Hence, it follows that $v_{2\delta}={\mathcal G}_{2\delta}(Du)$ a.e. in $B_{\rho_{0}/2}(x_{\ast})$. Moreover, from (\ref{Eq (Section 2) Bound of G-2delta-epsilon})--(\ref{Eq (Section 2) Holder continuity of G-2delta-epsilon}), we conclude that ${\mathcal G}_{2\delta}(Du)$ satisfies (\ref{Eq (Section 1) local bounds of Jacobian multiplied with G-delta})--(\ref{Eq (Section 1) local continuity of Jacobian multiplied with G-delta}). We have already proved ${\mathcal G}_{\delta}(Du)\in C^{0}(\Omega;\,{\mathbb R}^{Nn})$ for each fixed $\delta\in(0,\,1)$, from which we now deduce the continuity of $Du$. By the definition of ${\mathcal G}_{\delta}$, we can easily check that \[\sup_{\Omega}\,\mleft\lvert{\mathcal G}_{\delta_{1}}(Du)-{\mathcal G}_{\delta_{2}}(Du)\mright\rvert\le \lvert \delta_{1}-\delta_{2}\rvert\quad \textrm{for all }\delta_{1},\,\delta_{2}\in(0,\,1).\] In particular, the net $\{{\mathcal G}_{\delta}(Du)\}_{0<\delta<1}\subset C^{0}(\Omega;\,{\mathbb R}^{Nn})$ uniformly converges, as $\delta\to 0$, to a continuous mapping $v_{0}\in C^{0}(\Omega;\,{\mathbb R}^{Nn})$. On the other hand, it is clear that ${\mathcal G}_{\delta}(Du)\rightarrow Du$ a.e. in $\Omega$. Thus, we conclude that $v_{0}=Du$ a.e. in $\Omega$, and this completes the proof. \end{proof} \section{Weak formulations}\label{Section: Weak formulations} \subsection{A basic weak formulation}\label{Subsect: Basic weak formulations} We note that for each fixed $\varepsilon\in(0,\,1)$, the regularized system (\ref{Eq (Section 2) Approximated system}) is uniformly elliptic.
To be precise, by direct computations, we can easily check that the relaxed density $E_{\varepsilon}(\xi)=g_{\varepsilon}(\lvert \xi\rvert^{2})$ admits a constant $C_{\varepsilon}\in(\gamma,\,\infty)$, depending on $\varepsilon$, such that \begin{equation}\label{Eq (Section 3) Uniform ellipticity on approximated density} \gamma\mleft(\varepsilon^{2}+\lvert \xi\rvert^{2}\mright)^{p/2-1}{\mathrm{id}_{Nn}}\leqslant {\mathcal B}_{\varepsilon}(\xi)=D^{2}E_{\varepsilon}(\xi) \leqslant C_{\varepsilon}\mleft(\varepsilon^{2}+\lvert \xi\rvert^{2}\mright)^{p/2-1}{\mathrm{id}_{Nn}}\quad \textrm{for all }\xi\in{\mathbb R}^{Nn}. \end{equation} Hence, it is not restrictive to assume $u_{\varepsilon}\in W_{\mathrm{loc}}^{1,\,\infty}(\Omega;\,{\mathbb R}^{N})\cap W_{\mathrm{loc}}^{2,\,2}(\Omega;\,{\mathbb R}^{N})$ (see also Remark \ref{Eq Higher W-2-2 and W-1-infty regularity} in Section \ref{Section: Appendix}). In particular, for each fixed $B_{\rho}(x_{0})\Subset\Omega$, we are able to deduce \begin{equation}\label{Eq (Section 3) Weak formulation local} \int_{B_{\rho}(x_{0})}\langle A_{\varepsilon}(Du_{\varepsilon})\mid D\phi\rangle\,{\mathrm{d}}x=\int_{B_{\rho}(x_{0})}\langle f_{\varepsilon}\mid \phi\rangle\,{\mathrm{d}}x \end{equation} for all $\phi\in W_{0}^{1,\,1}(B_{\rho}(x_{0});\,{\mathbb R}^{N})$. Also, differentiating (\ref{Eq (Section 2) Approximated system}) with respect to $x_{\alpha}$ and integrating by parts, we can deduce a weak formulation \begin{equation}\label{Eq (Section 3) Weak formulation differentiated} \int_{B_{\rho}(x_{0})}\mleft\langle D_{\alpha}\mleft(A_{\varepsilon}\mleft(Du_{\varepsilon}\mright) \mright)\mathrel{} \middle|\mathrel{} D\phi\mright\rangle\,{\mathrm{d}}x=-\int_{B_{\rho}(x_{0})}\langle f_{\varepsilon}\mid D_{\alpha}\phi\rangle\,{\mathrm{d}}x \end{equation} for all $\phi\in W_{0}^{1,\,2}(B_{\rho}(x_{0});\,{\mathbb R}^{N})$, $\alpha\in\{\,1,\,\dots\,,\,n\,\}$. Following \cite[Lemma 4.1]{BDGPdN} (see also \cite[Lemma 3.5]{T-scalar}), we would like to give a basic weak formulation that will be used throughout our computations. \begin{lemma}\label{Lemma: Weak formulation for V-epsilon} Let $u_{\varepsilon}$ be a weak solution to the regularized system (\ref{Eq (Section 2) Approximated system}) with $0<\varepsilon<1$. Assume that a non-decreasing and non-negative function $\psi\in W_{\mathrm{loc}}^{1,\,\infty}(\lbrack0,\,\infty))$ is differentiable except at finitely many points.
For any non-negative function $\zeta\in C_{c}^{1}(B_{\rho}(x_{0}))$, we set \[\mleft\{\begin{array}{ccl} J_{1} & \coloneqq & \displaystyle\int_{B_{\rho}(x_{0})}\mleft\langle {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{} \middle|\mathrel{} \nabla\zeta\mright\rangle\psi(V_{\varepsilon})V_{\varepsilon}\,{\mathrm{d}}x,\\ J_{2} &\coloneqq & \displaystyle\int_{B_{\rho}(x_{0})}\mleft\langle {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{} \middle|\mathrel{} \nabla V_{\varepsilon}\mright\rangle\zeta \psi^{\prime}(V_{\varepsilon}) V_{\varepsilon}\,{\mathrm{d}}x,\\ J_{3} & \coloneqq &\displaystyle\int_{B_{\rho}(x_{0})}V_{\varepsilon}^{p-2}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}\zeta\psi(V_{\varepsilon})\,{\mathrm{d}}x,\\ J_{4}& \coloneqq & \displaystyle\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2}\psi(V_{\varepsilon})V_{\varepsilon}^{2-p}\zeta\,{\mathrm{d}}x,\\ J_{5}&\coloneqq &\displaystyle\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2} \psi^{\prime}(V_{\varepsilon})V_{\varepsilon}^{3-p}\zeta\,{\mathrm{d}}x,\\ J_{6}&\coloneqq & \displaystyle\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert \lvert\nabla\zeta\rvert\psi(V_{\varepsilon}) V_{\varepsilon}\,{\mathrm{d}}x, \end{array} \mright.\] with \[{\mathcal C}_{\varepsilon}(Du_{\varepsilon})\coloneqq g_{\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2}){\mathrm{id}_{n}}+2g_{\varepsilon}^{\prime\prime}(\lvert Du_{\varepsilon}\rvert^{2})\sum_{i=1}^{N}\nabla u_{\varepsilon}^{i}\otimes\nabla u_{\varepsilon}^{i}.\] Then we have \begin{equation}\label{Eq (Section 3) Resulting weak formulation} 2J_{1}+J_{2}+\gamma J_{3}\le \frac{1}{\gamma}(nJ_{4}+J_{5})+2J_{6}. \end{equation} \end{lemma} Before the proof, we mention that the matrix-valued functions ${\mathcal C}_{\varepsilon}(Du_{\varepsilon})$ satisfy \begin{equation}\label{Eq (Section 3) Ellipticity of coefficients} \gamma V_{\varepsilon}^{p-2} {\mathrm{id}_{n}} \leqslant {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\leqslant \mleft(bV_{\varepsilon}^{-1}+3\Gamma V_{\varepsilon}^{p-2}\mright){\mathrm{id}_{n}}\quad \textrm{a.e. in }\Omega, \end{equation} which follows from (\ref{Eq (Section 1) Growth g-p-pprime})--(\ref{Eq (Section 1) Ellipticity g_p}) and (\ref{Eq (Section 2) Growth g-1-pprime})--(\ref{Eq (Section 2) Degenerate ellipticity of g-1}). \begin{proof} For notational simplicity we write $B\coloneqq B_{\rho}(x_{0})$. For a fixed index \(\alpha\in\{\,1,\,\dots\,,\,n\,\}\), we test $\phi\coloneqq \zeta\psi(V_{\varepsilon})D_{{\alpha}}u_{\varepsilon}\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N})$ into (\ref{Eq (Section 3) Weak formulation differentiated}). 
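Here $\phi$ is admissible thanks to the local regularity $u_{\varepsilon}\in W_{\mathrm{loc}}^{1,\,\infty}(\Omega;\,{\mathbb R}^{N})\cap W_{\mathrm{loc}}^{2,\,2}(\Omega;\,{\mathbb R}^{N})$ recalled in Section \ref{Subsect: Basic weak formulations}. For the reader's convenience, we also record the elementary product and chain rules behind the computation below: \[D\phi=D_{\alpha}u_{\varepsilon}\otimes\mleft(\psi(V_{\varepsilon})\nabla\zeta+\zeta\psi^{\prime}(V_{\varepsilon})\nabla V_{\varepsilon}\mright)+\zeta\psi(V_{\varepsilon})\,D\mleft(D_{\alpha}u_{\varepsilon}\mright)\quad \textrm{a.e. in }B,\] and inserting this expression into (\ref{Eq (Section 3) Weak formulation differentiated}) produces the terms collected below.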
Summing over $\alpha\in\{\,1,\,\dots\,,\,n\,\}$, we have \begin{align*} &J_{0}+J_{1}+J_{2}\\&=-\int_{B}\psi(V_{\varepsilon})\langle f_{\varepsilon}\mid Du_{\varepsilon}\nabla\zeta \rangle\,{\mathrm{d}}x-\int_{B}\zeta\psi^{\prime}(V_{\varepsilon})\langle f_{\varepsilon}\mid Du_{\varepsilon}\nabla V_{\varepsilon}\rangle\,{\mathrm{d}}x-\int_{B}\zeta\psi(V_{\varepsilon}) \sum_{i=1}^{N}\sum_{\alpha=1}^{n}f_{\varepsilon}^{i}\partial_{x_{\alpha}}^{2}u_{\varepsilon}^{i}\,{\mathrm{d}}x \\&\eqqcolon -(J_{7}+J_{8}+J_{9}) \end{align*} with \[J_{0}\coloneqq \int_{B}\mleft[g_{\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})\lvert D^{2}u_{\varepsilon}\rvert^{2}+\frac{1}{2}g_{\varepsilon}^{\prime\prime}(\lvert Du_{\varepsilon}\rvert^{2})\mleft\lvert \nabla \lvert Du_{\varepsilon}\rvert^{2}\mright\rvert^{2}\mright]\zeta\psi(V_{\varepsilon})\,{\mathrm{d}}x.\] To estimate $J_{0}$, we recall $\mleft\lvert \nabla \lvert Du_{\varepsilon}\rvert^{2}\mright\rvert^{2}\le 4\lvert D^{2}u_{\varepsilon}\rvert^{2}\lvert Du_{\varepsilon}\rvert^{2}$, which follows from the Cauchy--Schwarz inequality. By this and (\ref{Eq (Section 1) Ellipticity g_p}), we are able to deduce \[g_{p,\,\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})\lvert D^{2}u_{\varepsilon}\rvert^{2}+\frac{1}{2}g_{p,\,\varepsilon}^{\prime\prime}(\lvert Du_{\varepsilon}\rvert^{2})\mleft\lvert \nabla \lvert Du_{\varepsilon}\rvert^{2}\mright\rvert^{2}\ge \gamma V_{\varepsilon}^{p-2}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}\] a.e. in $\Omega$. Similarly, we use (\ref{Eq (Section 2) Degenerate ellipticity of g-1}) to get \[g_{1,\,\varepsilon}^{\prime}(\lvert Du_{\varepsilon}\rvert^{2})\lvert D^{2}u_{\varepsilon}\rvert^{2}+\frac{1}{2}g_{1,\,\varepsilon}^{\prime\prime}(\lvert Du_{\varepsilon}\rvert^{2})\mleft\lvert \nabla \lvert Du_{\varepsilon}\rvert^{2}\mright\rvert^{2}\ge 0\] a.e. in $\Omega$. These inequalities imply $J_{0}\ge \gamma J_{3}$. Clearly, $\lvert J_{7}\rvert\le J_{6}$ holds. To compute $J_{8},\,J_{9}$, we recall (\ref{Eq (Section 3) Ellipticity of coefficients}). Then by Young's inequality we can compute \[\lvert J_{8}\rvert\le \frac{\gamma}{2}\int_{B}V_{\varepsilon}^{p-1}\lvert\nabla V_{\varepsilon}\rvert^{2}\zeta\psi^{\prime}(V_{\varepsilon})\,{\mathrm{d}}x+\frac{1}{2\gamma}\int_{B}\lvert f_{\varepsilon}\rvert^{2} V_{\varepsilon}^{3-p}\zeta\psi^{\prime}(V_{\varepsilon})\,{\mathrm{d}}x\le \frac{1}{2}J_{2}+\frac{1}{2\gamma}J_{5},\] and \begin{align*} \lvert J_{9}\rvert&\le \sqrt{n}\int_{B}\lvert f_{\varepsilon}\rvert\zeta\psi(V_{\varepsilon})\mleft\lvert D^{2}u_{\varepsilon} \mright\rvert\,{\mathrm{d}}x\\&\le \frac{\gamma}{2}\int_{B}V_{\varepsilon}^{p-2}\mleft\lvert D^{2}u_{\varepsilon} \mright\rvert^{2}\zeta\psi(V_{\varepsilon})\,{\mathrm{d}}x+\frac{n}{2\gamma}\int_{B}\lvert f_{\varepsilon}\rvert^{2}\psi(V_{\varepsilon})V_{\varepsilon}^{2-p}\zeta\,{\mathrm{d}}x\\&=\frac{\gamma}{2}J_{3}+\frac{n}{2\gamma}J_{4}. \end{align*} As a result, we get \[J_{1}+J_{2}+\gamma J_{3}\le \frac{1}{2}J_{2}+\frac{\gamma}{2}J_{3}+\frac{n}{2\gamma}J_{4}+\frac{1}{2\gamma}J_{5}+J_{6},\] from which (\ref{Eq (Section 3) Resulting weak formulation}) immediately follows. \end{proof} \subsection{De Giorgi's truncation and Caccioppoli-type estimates}\label{Subsect: De Giorgi's truncation} In the resulting weak formulation (\ref{Eq (Section 3) Resulting weak formulation}), we may discard the two non-negative integrals $J_{2},\,J_{3}$.
Then, (\ref{Eq (Section 3) Resulting weak formulation}) implies that the scalar function $V_{\varepsilon}$ is a subsolution to an elliptic problem. By appealing to a convex composition of $V_{\varepsilon}$, we would like to prove that the scalar function $U_{\delta,\,\varepsilon}$ defined by \begin{equation}\label{Eq (Section 3) U-delta-epsilon} U_{\delta,\,\varepsilon}\coloneqq (V_{\varepsilon}-\delta)_{+}^{2}\in L_{\mathrm{loc}}^{\infty}(\Omega)\cap W_{\mathrm{loc}}^{1,\,2}(\Omega) \end{equation} is also a subsolution to a certain \textit{uniformly} elliptic problem. This is possible since this function is supported in the set $\{\,V_{\varepsilon}>\delta\,\}$, where the system (\ref{Eq (Section 2) Approximated system}) becomes uniformly elliptic. Thus, as in Lemma \ref{Lemma: Subsolution} below, we are able to obtain Caccioppoli-type estimates for $U_{\delta,\,\varepsilon}$. \begin{lemma}\label{Lemma: Subsolution} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}) with $0<\varepsilon<1$. Assume that positive numbers $\delta$, $\mu$, $M$, and an open ball $B_{\rho}(x_{0})$ satisfy (\ref{Eq (Section 2) G-delta-epsilon bound}). Then, the scalar function $U_{\delta,\,\varepsilon}$ defined by (\ref{Eq (Section 3) U-delta-epsilon}) satisfies \begin{equation}\label{Eq (Section 3) U-delta-epsilon is subsol} \int_{B_{\rho}(x_{0})}\mleft\langle {\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})\nabla U_{\delta,\,\varepsilon}\mathrel{}\middle|\mathrel{}\nabla\zeta \mright\rangle\,{\mathrm{d}}x\le C_{0}\mleft[\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2}\zeta\,{\mathrm{d}}x+\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert\lvert\nabla \zeta\rvert\,{\mathrm{d}}x\mright] \end{equation} for every non-negative function $\zeta\in C_{c}^{1}(B_{\rho}(x_{0}))$, where ${\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})$ is an $n\times n$ matrix-valued function satisfying \begin{equation}\label{Eq (Section 3) Uniform ellipticity of A-delta-epsilon} \gamma_{\ast}{\mathrm{id}_{n}} \leqslant {\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})\leqslant \Gamma_{\ast}{\mathrm{id}_{n}}\quad \textrm{a.e. in }B_{\rho}(x_{0}). \end{equation} The constants $C_{0}\in(0,\,\infty)$ and $0<\gamma_{\ast}\le \Gamma_{\ast}<\infty$ depend at most on $b$, $n$, $p$, $\gamma$, $\Gamma$, $M$, and $\delta$. In particular, we have \begin{equation}\label{Eq (Section 3) Caccioppoli estimate} \int_{B_{\rho}(x_{0})}\lvert \nabla[\eta (U_{\delta,\,\varepsilon}-k)_{+}]\rvert^{2}\,{\mathrm{d}}x\le C\mleft[\int_{B_{\rho}(x_{0})}\lvert \nabla \eta\rvert^{2}(U_{\delta,\,\varepsilon}-k)_{+}^{2}\,{\mathrm{d}}x+\mu^{2}\int_{A_{k,\,\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2}\eta^{2}\,{\mathrm{d}}x \mright] \end{equation} for all $k\in(0,\,\infty)$ and for any non-negative function $\eta\in C_{c}^{1}(B_{\rho}(x_{0}))$. Here $A_{k,\,\rho}(x_{0})\coloneqq \{ x\in B_{\rho}(x_{0})\mid U_{\delta,\,\varepsilon}(x)>k\}$, and the constant $C\in(0,\,\infty)$ depends on $\gamma_{\ast}$, $\Gamma_{\ast}$, and $C_{0}$. \end{lemma} \begin{proof} We choose $\psi(t)\coloneqq (t-\delta)_{+}$ for $t\in\lbrack 0,\,\infty)$, so that $\psi(V_{\varepsilon})=\lvert{\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\rvert$ holds. From (\ref{Eq (Section 3) Resulting weak formulation}), we will deduce (\ref{Eq (Section 3) U-delta-epsilon is subsol}).
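We record that this choice of $\psi$ is admissible in Lemma \ref{Lemma: Weak formulation for V-epsilon}: it is non-negative, non-decreasing, Lipschitz continuous, and differentiable except at the single point $t=\delta$; explicitly, \[\psi^{\prime}(t)=\mleft\{\begin{array}{cc} 1 & (t>\delta),\\ 0 & (0\le t<\delta), \end{array}\mright.\qquad\textrm{and}\qquad \psi(V_{\varepsilon})^{2}=(V_{\varepsilon}-\delta)_{+}^{2}=U_{\delta,\,\varepsilon}.\]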
To compute $J_{1}$, we note that $\psi(V_{\varepsilon})$ vanishes when $V_{\varepsilon}\le \delta$, and that the identity $\nabla U_{\delta,\,\varepsilon}=2\psi(V_{\varepsilon})\nabla V_{\varepsilon}$ holds. From these we obtain \[J_{1}=\int_{B_{\rho}(x_{0})}\mleft\langle V_{\varepsilon} {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\psi(V_{\varepsilon})\nabla V_{\varepsilon}\mathrel{} \middle|\mathrel{} \nabla\zeta\mright\rangle\,{\mathrm{d}}x=\frac{1}{2}\int_{B_{\rho}(x_{0})}\mleft\langle {\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})\nabla U_{\delta,\,\varepsilon}\mathrel{}\middle|\mathrel{}\nabla\zeta \mright\rangle\,{\mathrm{d}}x\] with \[{\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})\coloneqq\mleft\{\begin{array}{cc} V_{\varepsilon}{\mathcal C}_{\varepsilon}(Du_{\varepsilon}) & (\textrm{if }V_{\varepsilon}>\delta),\\ \mathrm{id}_{n} & (\textrm{otherwise}). \end{array} \mright.\] By (\ref{Eq (Section 3) Ellipticity of coefficients}), the coefficient matrix ${\mathcal A}_{\delta,\,\varepsilon}(Du_{\varepsilon})$ satisfies (\ref{Eq (Section 3) Uniform ellipticity of A-delta-epsilon}) with \[\gamma_{\ast}=\min\mleft\{\,1,\,\gamma\delta^{p-1}\mright\},\quad\Gamma_{\ast}=\max\mleft\{\,1,\,b+3\Gamma M^{p-1}\,\mright\}.\] By discarding the non-negative integrals $J_{2}$, $J_{3}$, we compute \begin{align*} J_{1}&\le \frac{1}{2\gamma}\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2}\frac{n\psi(V_{\varepsilon})V_{\varepsilon}+\psi^{\prime}(V_{\varepsilon})V_{\varepsilon}^{2}}{V_{\varepsilon}^{p-1}}\zeta\,{\mathrm{d}}x+ \int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert \lvert\nabla\zeta\rvert\psi(V_{\varepsilon}) V_{\varepsilon}\,{\mathrm{d}}x\\&\le \frac{1}{2\gamma}\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert^{2}\frac{n\mu M+M^{2}}{\delta^{p-1}}\zeta\,{\mathrm{d}}x+\mu M\int_{B_{\rho}(x_{0})}\lvert f_{\varepsilon}\rvert\lvert \nabla\zeta \rvert\,{\mathrm{d}}x, \end{align*} from which (\ref{Eq (Section 3) U-delta-epsilon is subsol}) immediately follows. The estimate (\ref{Eq (Section 3) Caccioppoli estimate}) is easy to deduce by testing $\zeta\coloneqq \eta^{2}(U_{\delta,\,\varepsilon}-k)_{+}\in W_{0}^{1,\,2}(B_{\rho}(x_{0}))$ in (\ref{Eq (Section 3) U-delta-epsilon is subsol}), and performing standard absorption arguments (see \cite[Lemma 3.15]{T-scalar} for detailed computations). We should mention that this test function $\zeta$ is admissible by approximation, since it is compactly supported in $B_{\rho}(x_{0})$. \end{proof} The Caccioppoli-type inequality (\ref{Eq (Section 3) Caccioppoli estimate}) implies that the scalar function $U_{\delta,\,\varepsilon}$ is in a certain De Giorgi class. From this fact, we can conclude two oscillation lemmata for the scalar function $U_{\delta,\,\varepsilon}=\lvert {\mathcal G}_{\delta,\,\varepsilon}(Du_{\varepsilon})\rvert^{2}$ (Lemmata \ref{Lemma: Oscillation lemma for U-epsilon-delta 1}--\ref{Lemma: Oscillation lemma for U-epsilon-delta 2}). \begin{lemma}\label{Lemma: Oscillation lemma for U-epsilon-delta 1} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}). Assume that positive numbers $\delta$, $\mu$, $F$, $M$, and an open ball $B_{\rho}(x_{0})$ satisfy (\ref{Eq (Section 2) control of f}) and (\ref{Eq (Section 2) G-delta-epsilon bound}).
Then there exists a number ${\hat \nu}\in(0,\,1)$, depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$, such that if there holds \begin{equation}\label{Eq (Section 3) measure condition on De Giorgi lemma} \frac{\lvert \{x\in B_{\rho/2}(x_{0})\mid U_{\delta,\,\varepsilon}(x)>(1-\theta)\mu^{2}\}\rvert}{\lvert B_{\rho/2}(x_{0})\rvert}\le {\hat \nu} \end{equation} for some $\theta\in(0,\,1)$, then we have either \[\mu^{2}<\frac{\rho^{\beta}}{\theta}\quad \textrm{or}\quad \mathop{\mathrm{ess~sup}}_{B_{\rho/4}(x_{0})}\,U_{\delta,\,\varepsilon}\le \mleft(1-\frac{\theta}{2}\mright)\mu^{2}.\] \end{lemma} \begin{lemma}\label{Lemma: Oscillation lemma for U-epsilon-delta 2} Under the assumptions of Proposition \ref{Prop: De Giorgi's truncation}, for every $i_{\star}\in {\mathbb N}$, we have either \[\mu^{2}<\frac{2^{i_{\star}}\rho^{\beta}}{\nu}\quad \textrm{or}\quad \frac{\mleft\lvert \mleft\{ x\in B_{\rho/2}(x_{0})\mathrel{}\middle|\mathrel{}U_{\delta,\,\varepsilon}(x)> \mleft(1-2^{-i_{\star}}\nu\mright)\mu^{2}\mright\} \mright\rvert}{\lvert B_{\rho/2}(x_{0})\rvert}<\frac{C_{\dagger}}{\nu\sqrt{i_{\star}}}.\] Here the constant $C_{\dagger}\in(0,\,\infty)$ depends at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$. \end{lemma} We omit the proofs of Lemmata \ref{Lemma: Oscillation lemma for U-epsilon-delta 1}--\ref{Lemma: Oscillation lemma for U-epsilon-delta 2}, because they can be deduced by routine calculations given in \cite[Chapter 10, \S 4--5]{MR2566733}. For detailed computations, see \cite[\S 7]{BDGPdN} or \cite[\S 4]{T-scalar}. By Lemmata \ref{Lemma: Oscillation lemma for U-epsilon-delta 1}--\ref{Lemma: Oscillation lemma for U-epsilon-delta 2}, we are able to find the desired constants $C_{\star}\in(1,\,\infty)$ and $\kappa\in(2^{-\beta},\,1)$ in Proposition \ref{Prop: De Giorgi's truncation}. To be precise, we choose a sufficiently large number $i_{\star}=i_{\star}(C_{\dagger},\,\nu,\,{\hat\nu})\in{\mathbb N}$ such that \[\frac{C_{\dagger}}{\nu\sqrt{i_{\star}}}\le {\hat\nu}\quad \textrm{and}\quad 0<2^{-(i_{\star}+1)}<1-2^{-2\beta}.\] Then, by applying Lemmata \ref{Lemma: Oscillation lemma for U-epsilon-delta 1}--\ref{Lemma: Oscillation lemma for U-epsilon-delta 2} with $\theta=2^{-i_{\star}}\nu$, we can easily check that the constants $C_{\star}\coloneqq \theta^{-1}=2^{i_{\star}}\nu^{-1}\in(1,\,\infty)$, $\kappa\coloneqq \sqrt{1-2^{-(i_{\star}+1)}}\in(2^{-\beta},\,1)$ satisfy Proposition \ref{Prop: De Giorgi's truncation} (see also \cite[Proposition 3.5]{BDGPdN}, \cite[Proposition 3.3]{T-scalar}). \section{Campanato-type decay estimates}\label{Section: Campanato estimates} Section \ref{Section: Campanato estimates} provides the proof of Proposition \ref{Prop: Schauder estimate}. The basic idea is that the measure assumption (\ref{Eq (Section 2) Measure condition 2}) implies that the scalar function $V_{\varepsilon}$ will not degenerate near the point $x_{0}$ if the ratio $\nu$ is sufficiently close to $0$. With this in mind, we appeal to freezing coefficient arguments and shrinking methods to obtain Campanato-type integral growth estimates (\ref{Eq (Section 2) Campanato-type growth from Schauder}). To prove this, however, we also have to check that an average $(Du_{\varepsilon})_{x_{0},\,r}$ never degenerates even when the radius $r$ tends to $0$. To overcome this problem, we will provide a variety of energy estimates from the assumptions (\ref{Eq (Section 2) delta<mu}) and (\ref{Eq (Section 2) Measure condition 2}). 
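Spelled out with the definition of the superlevel set $S_{\rho,\,\mu,\,\nu}(x_{0})$ in Section \ref{Subsect: A priori Hoelder}, the measure assumption (\ref{Eq (Section 2) Measure condition 2}) simply states that \[\mleft\lvert\mleft\{x\in B_{\rho}(x_{0})\mathrel{}\middle|\mathrel{}V_{\varepsilon}(x)>\delta+(1-\nu)\mu\mright\}\mright\rvert>(1-\nu)\lvert B_{\rho}(x_{0})\rvert,\] that is, $V_{\varepsilon}$ exceeds the threshold $\delta+(1-\nu)\mu$ outside a set of measure less than $\nu\lvert B_{\rho}(x_{0})\rvert$.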
Most of our computations concerning energy estimates differ from \cite[\S 4--5]{BDGPdN}, since the density function we discuss is structurally different from theirs. In Section \ref{Section: Campanato estimates}, we consider an $L^{2}$-mean oscillation given by \[\Phi(x_{0},\,r)\coloneqq \fint_{B_{r}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,r}\mright\rvert^{2}\,{\mathrm{d}}x\quad \textrm{for }r\in(0,\,\rho\rbrack\] in an open ball $B_{\rho}(x_{0})\Subset\Omega$. To estimate $\Phi$, we often use the well-known fact that there holds \begin{equation}\label{Eq (Section 4) minimizing property on L2-average} \fint_{B_{r}(x_{0})}\mleft\lvert v-(v)_{x_{0},\,r}\mright\rvert^{2}\,{\mathrm{d}}x=\min_{\xi\in{\mathbb R}^{Nn}}\fint_{B_{r}(x_{0})}\lvert v-\xi\rvert^{2}\,{\mathrm{d}}x \end{equation} for every $v\in L^{2}(B_{r}(x_{0});\,{\mathbb R}^{Nn})$. Also, we recall the Poincar\'{e}--Sobolev inequality \cite[Chapter IX, Theorem 10.1]{MR1897317}: \begin{equation}\label{Eq: Poincare--Sobolev inequality} \fint_{B_{r}(x_{0})}\lvert v\rvert^{2}\,{\mathrm{d}}x\le C(n)r^{2}\mleft(\fint_{B_{r}(x_{0})}\lvert Dv\rvert^{\frac{2n}{n+2}}\,{\mathrm{d}}x\mright)^{\frac{n+2}{n}} \end{equation} for all $v\in W^{1,\,\frac{2n}{n+2}}(B_{r}(x_{0});\,{\mathbb R}^{Nn})$ satisfying \[\fint_{B_{r}(x_{0})}v\,{\mathrm{d}}x=0,\] which is used in Sections \ref{Subsect: Energy estimates}--\ref{Subsect: freezing coefficient method}. \subsection{Energy estimates}\label{Subsect: Energy estimates} First in Section \ref{Subsect: Energy estimates}, we go back to the weak formulation (\ref{Eq (Section 3) Resulting weak formulation}), and deduce some energy estimates under suitable assumptions as in Proposition \ref{Prop: Schauder estimate}. Here we would like to show local $L^{2}$-bounds for the Jacobian matrices of $G_{p,\,\varepsilon}(Du_{\varepsilon})=V_{\varepsilon}^{p-1}Du_{\varepsilon}$, where the mapping $G_{p,\,\varepsilon}$ is defined by (\ref{Eq (Section 2) G-p-epsilon}). \begin{lemma}\label{Lemma: Energy estimates} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}) in \(\Omega\). Assume that positive numbers $\delta$, $\mu$, $F$, $M$, and an open ball $B_{\rho}(x_{0})\Subset \Omega$ satisfy (\ref{Eq (Section 2) control of f}), (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) and $0<\rho<1$. Then, for all $\nu\in(0,\,1/4\rbrack$ and $\tau\in(0,\,1)$, the following estimates (\ref{Eq (Section 4) Energy estimate from weak form 1})--(\ref{Eq (Section 4) Energy estimate from weak form 2}) hold: \begin{equation}\label{Eq (Section 4) Energy estimate from weak form 1} \fint_{B_{\tau\rho}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{2}\,{\mathrm{d}}x\le \frac{C}{\tau^{n}}\mleft[\frac{1}{(1-\tau)^{2}\rho^{2}}+F^{2}\rho^{-\frac{2n}{q}}\mright]\mu^{2p}. \end{equation} \begin{equation}\label{Eq (Section 4) Energy estimate from weak form 2} \frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert}\int_{S_{\tau\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{2}\,{\mathrm{d}}x\le \frac{C}{\tau^{n}}\mleft[\frac{\nu}{(1-\tau)^{2}\rho^{2}}+\frac{F^{2}\rho^{-\frac{2n}{q}}}{\nu}\mright]\mu^{2p}. \end{equation} Here the constants $C\in(0,\,\infty)$ given in (\ref{Eq (Section 4) Energy estimate from weak form 1})--(\ref{Eq (Section 4) Energy estimate from weak form 2}) depend at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$, and $\delta$.
\end{lemma} Following computations given in \cite[Lemma 3.9]{T-scalar} for the scalar problems, we provide the proof of Lemma \ref{Lemma: Energy estimates}. \begin{proof} We first compute \begin{align*} \mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{2}&\le \mleft\lvert h_{p}^{\prime}(V_{\varepsilon}^{2})D^{2}u_{\varepsilon}+2h_{p}^{\prime\prime}(V_{\varepsilon}^{2})Du_{\varepsilon}\otimes D^{2}u_{\varepsilon}Du_{\varepsilon} \mright\rvert^{2}\\&\le 2V_{\varepsilon}^{2(p-1)}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}+2(p-1)^{2}V_{\varepsilon}^{2(p-3)}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}\mleft\lvert Du_{\varepsilon}\mright\rvert^{4}\\&\le c_{p}V_{\varepsilon}^{2(p-1)}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}\quad \textrm{a.e. in }\Omega, \end{align*} where $c_{p}\coloneqq 2(p^{2}-2p+2)>0$. With this in mind, we apply Lemma \ref{Lemma: Weak formulation for V-epsilon} with $\psi(t)\coloneqq t^{p}{\tilde \psi}(t)$ for $t\in\lbrack0,\,\infty)$. Here the function ${\tilde\psi}$ will later be chosen as either \begin{equation}\label{Eq (Section 4) Energy estimate choice 1} {\tilde\psi}(t)\equiv 1, \end{equation} or \begin{equation}\label{Eq (Section 4) Energy estimate choice 2} {\tilde\psi}(t)\coloneqq (t-\delta-k)_{+}^{2} \end{equation} for some constant $k>0$. We choose a cutoff function \(\eta\in C_{c}^{1}(B_{\rho}(x_{0}))\) such that \[\eta\equiv 1\quad\textrm{on $B_{\tau \rho}(x_{0})$}\quad\textrm{and}\quad\lvert\nabla\eta\rvert\le \frac{2}{(1-\tau)\rho}\quad\textrm{in $B_{\rho}(x_{0})$},\] and set \(\zeta\coloneqq \eta^{2}\); as before, we abbreviate $B\coloneqq B_{\rho}(x_{0})$. Then, the weak formulation (\ref{Eq (Section 3) Resulting weak formulation}) yields \begin{align*} &\gamma{\mathbf L}_{1}+{\mathbf L}_{2}\\ & \coloneqq \gamma\int_{B}V_{\varepsilon}^{2(p-1)}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2} {\tilde\psi}(V_{\varepsilon})\eta^{2}\,{\mathrm d}x\\&\quad + \int_{B}\mleft\langle{\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{}\middle|\mathrel{}\nabla V_{\varepsilon}\mright\rangle \psi^{\prime}(V_{\varepsilon})V_{\varepsilon}\eta^{2}\,{\mathrm d}x \\&\le 4\int_{B}\mleft\lvert \mleft\langle{\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{}\middle|\mathrel{}\nabla\eta\mright\rangle\mright\rvert \psi(V_{\varepsilon})V_{\varepsilon}\eta\,{\mathrm d}x\\&\quad+ \frac{1}{\gamma}\int_{B}\lvert f_{\varepsilon}\rvert^{2}V_{\varepsilon}^{2-p}\mleft(n{\psi}(V_{\varepsilon})+V_{\varepsilon}\psi^{\prime}(V_{\varepsilon})\mright)\eta^{2}\,{\mathrm d}x \\& \quad\quad+4\int_{B}\lvert f_{\varepsilon}\rvert\lvert \nabla \eta\rvert V_{\varepsilon}\psi(V_{\varepsilon})\eta\,{\mathrm d}x \\&\eqqcolon {\mathbf R}_{1}+{\mathbf R}_{2}+{\mathbf R}_{3}. \end{align*} For \({\mathbf R}_{1}\), we use the Cauchy--Schwarz inequality \[\mleft\lvert \mleft\langle {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{}\middle|\mathrel{}\nabla\eta\mright\rangle\mright\rvert\le \sqrt{\mleft\langle{\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla V_{\varepsilon}\mathrel{}\middle|\mathrel{}\nabla V_{\varepsilon}\mright\rangle}\,\cdot\,\sqrt{\mleft\langle{\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla \eta\mathrel{}\middle|\mathrel{}\nabla\eta\mright\rangle},\] which is possible by (\ref{Eq (Section 3) Ellipticity of coefficients}).
Hence, by Young's inequality, we have \begin{align*} {\mathbf R}_{1}&\le {\mathbf L}_{2}+4\int_{B}\mleft\langle {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\nabla\eta\mathrel{}\middle|\mathrel{}\nabla\eta\mright\rangle \frac{V_{\varepsilon}\psi(V_{\varepsilon})^{2}}{\psi^{\prime}(V_{\varepsilon})}\,\mathrm{d}x. \end{align*} For \({\mathbf R}_{3}\), we apply Young's inequality to get \[{\mathbf R}_{3}\le 2\int_{B}\lvert f_{\varepsilon}\rvert^{2}\psi^{\prime}(V_{\varepsilon}) V_{\varepsilon}^{3-p}\eta^{2}\,{\mathrm d}x+2 \int_{B}\lvert \nabla\eta\rvert^{2}\frac{V_{\varepsilon}\psi(V_{\varepsilon})^{2}}{\psi^{\prime}(V_{\varepsilon})}V_{\varepsilon}^{p-2}\,{\mathrm d}x.\] Therefore, by (\ref{Eq (Section 3) Ellipticity of coefficients}), we obtain \begin{align*} {\mathbf L}_{1}&\le\frac{1}{\gamma}\int_{B}\lvert\nabla\eta\rvert^{2} \mleft[2V_{\varepsilon}^{p-2}+4\mleft(bV_{\varepsilon}^{-1}+3\Gamma V_{\varepsilon}^{p-2}\mright)\mright]\frac{V_{\varepsilon}\psi(V_{\varepsilon})^{2}}{\psi^{\prime}(V_{\varepsilon})}\,{\mathrm d}x\\&\quad +\frac{1}{\gamma}\mleft(2+\frac{1}{\gamma}\mright)\int_{B}\lvert f_{\varepsilon}\rvert^{2}\eta^{2}\frac{n\psi(V_{\varepsilon})+V_{\varepsilon}\psi^{\prime}(V_{\varepsilon})}{V_{\varepsilon}^{p-2}}\,{\mathrm d}x. \end{align*} Here we note that the assumptions (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) yield \(V_{\varepsilon}\le \delta+\mu\le 2\mu\) a.e. in \(B\), and \(\mu^{l}=\mu^{l-2p}\cdot\mu^{2p}\le\delta^{l-2p}\mu^{2p}\) for every \(l\in(0,\,2p)\). Hence it follows that \begin{align*} \mleft[2V_{\varepsilon}^{p-2}+4\mleft(bV_{\varepsilon}^{-1}+3\Gamma V_{\varepsilon}^{p-2}\mright)\mright]\frac{V_{\varepsilon}\psi(V_{\varepsilon})^{2}}{\psi^{\prime}(V_{\varepsilon})}&\le C(b,\,\Gamma) \mleft[V_{\varepsilon}^{p-2}+V_{\varepsilon}^{-1}\mright]\frac{V_{\varepsilon}^{p+2}{\tilde\psi}(V_{\varepsilon})^{2}}{p{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})}\\&\le C(b,\,\Gamma)\frac{\mleft(1+\delta^{1-p}\mright)\mu^{2p}{\tilde\psi}(V_{\varepsilon})^{2}}{p{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})} \end{align*} a.e. in $B$, and \begin{align*} \frac{n\psi(V_{\varepsilon})+V_{\varepsilon}\psi^{\prime}(V_{\varepsilon})}{V_{\varepsilon}^{p-2}}&=n(p+1)V_{\varepsilon}^{2}{\tilde\psi}(V_{\varepsilon})+V_{\varepsilon}^{3}{\tilde\psi}^{\prime}(V_{\varepsilon})\\&\le 4n(p+1)\delta^{2(1-p)}\mu^{2p}\mleft[{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})\mright] \end{align*} a.e. in $B$. As a result, we obtain \begin{align} &\int_{B}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(\nabla u_{\varepsilon})\mright]\mright\rvert^{2}\eta^{2}{\tilde \psi}(V_{\varepsilon})\,{\mathrm d}x \le c_{p}{\mathbf L}_{1}\nonumber\\&\le C\mu^{2p}\mleft[\int_{B}\lvert\nabla\eta\rvert^{2}\frac{{\tilde\psi}(V_{\varepsilon})^{2}}{p{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})}\,{\mathrm d}x+\int_{B}\lvert f_{\varepsilon}\rvert^{2}\eta^{2} \mleft[{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})\mright]\,{\mathrm d}x\,\mright]\label{Eq (Section 4) Energy estimate mid fin} \end{align} with \(C=C(b,\,n,\,p,\,\gamma,\,\Gamma,\,\delta)\in(0,\,\infty)\). 
We will deduce (\ref{Eq (Section 4) Energy estimate from weak form 1})--(\ref{Eq (Section 4) Energy estimate from weak form 2}) from (\ref{Eq (Section 4) Energy estimate mid fin}) by choosing ${\tilde \psi}$ as (\ref{Eq (Section 4) Energy estimate choice 1}) or (\ref{Eq (Section 4) Energy estimate choice 2}). Let ${\tilde \psi}$ satisfy (\ref{Eq (Section 4) Energy estimate choice 1}). Then by (\ref{Eq (Section 4) Energy estimate mid fin}) and H\"{o}lder's inequality, we have \begin{align*} &\int_{B_{\tau\rho}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(\nabla u_{\varepsilon})\mright]\mright\rvert^{2}\,{\mathrm d}x\\&\le C(b,\,n,\,p,\,\gamma,\,\Gamma,\,\delta)\mu^{2p}\mleft[\frac{1}{p}\int_{B}\lvert \nabla\eta\rvert^{2}\,{\mathrm d}x+\int_{B}\lvert f_{\varepsilon}\rvert^{2}\eta^{2}{\mathrm d}x\mright]\\&\le C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma,\,\delta)\mu^{2p}\mleft[\frac{1}{(1-\tau)^{2}\rho^{2}}+F^{2}\rho^{-\frac{2n}{q}}\mright]\lvert B_{\rho}(x_{0})\rvert. \end{align*} Next, we let \({\tilde\psi}\) satisfy (\ref{Eq (Section 4) Energy estimate choice 2}) with \[k\coloneqq (1-2\nu)\mu\ge\frac{\mu}{2}>0.\] Then, from (\ref{Eq (Section 2) Lipschitz bound}), it follows that \((V_{\varepsilon}-\delta-k)_{+}\le \mu-k=2\nu\mu\) a.e. in \(B\). Hence, it is easy to check that \[\mleft\{ \begin{array}{ccccccc} \displaystyle\frac{{\tilde\psi}(V_{\varepsilon})^{2}}{p{\tilde \psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})}&=&\displaystyle\frac{(V_{\varepsilon}-\delta-k)_{+}^{3}}{p(V_{\varepsilon}-\delta-k)_{+}+2V_{\varepsilon}}&\le& \displaystyle\frac{(2\nu\mu)^{3}}{0+2(\delta+k)}&\le& 8\nu^{3}\mu^{2},\\ {\tilde\psi}(V_{\varepsilon})+V_{\varepsilon}{\tilde\psi}^{\prime}(V_{\varepsilon})&\le& 3V_{\varepsilon}(V_{\varepsilon}-\delta-k)_{+}&\le& 3\cdot 2\mu\cdot 2\nu\mu&=&12\nu\mu^{2}, \end{array} \mright.\] a.e. in $B$. Over \(S_{\tau\rho,\,\mu,\,\nu}(x_{0})\), we have \(V_{\varepsilon}-\delta-k>(1-\nu)\mu-k=\nu\mu>0\), and hence ${\tilde\psi}(V_{\varepsilon})\ge (\nu\mu)^{2}$. Combining these results with (\ref{Eq (Section 4) Energy estimate mid fin}), we are able to compute \begin{align*} &\nu^{2}\mu^{2}\int_{S_{\tau\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(\nabla u_{\varepsilon})\mright]\mright\rvert^{2}\,{\mathrm d}x \\&\le \int_{B}\eta^{2}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(\nabla u_{\varepsilon})\mright]\mright\rvert^{2}{\tilde\psi}(V_{\varepsilon})\,{\mathrm d}x\\&\le C\mu^{2p}\mleft[8\nu^{3}\mu^{2}\int_{B}\lvert\nabla\eta\rvert^{2}\,{\mathrm d}x+12\nu\mu^{2} \int_{B}\lvert f_{\varepsilon}\rvert^{2}\eta^{2}\,{\mathrm d}x\mright]\\&\le C\nu^{2}\mu^{2p+2}\mleft[\frac{\nu}{(1-\tau)^{2}\rho^{2}}+\frac{F^{2}\rho^{-\frac{2n}{q}}}{\nu} \mright]\lvert B_{\rho}(x_{0})\rvert \end{align*} for some constant \(C=C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma,\,\delta)\in(0,\,\infty)\). Recalling \(\lvert B_{\tau\rho}(x_{0})\rvert=\tau^{n}\lvert B_{\rho}(x_{0})\rvert\), we conclude (\ref{Eq (Section 4) Energy estimate from weak form 1})--(\ref{Eq (Section 4) Energy estimate from weak form 2}). \end{proof} \begin{lemma}\label{Lemma: Energy estimates 2} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}) in \(\Omega\). Assume that positive numbers $\delta$, $\varepsilon$, $\mu$, $F$, $M$, and an open ball $B_{\rho}(x_{0})\Subset \Omega$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}), (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) and $0<\rho<1$. 
If (\ref{Eq (Section 2) Measure condition 2}) is satisfied for some $\nu\in(0,\,1/4)$, then for all $\tau\in(0,\,1)$, we have \begin{equation}\label{Eq (Section 4) Energy estimates from weak form 3} \Phi(x_{0},\,\tau\rho)\le \frac{C_{\dagger}\mu^{2}}{\tau^{n}}\mleft[\frac{\nu^{2/n}}{(1-\tau)^{2}}+\frac{F^{2}}{\nu}\rho^{2\beta}\mright], \end{equation} where the constant $C_{\dagger}\in(0,\,\infty)$ depends at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$, and $\delta$. \end{lemma} For completeness, we provide the proof of Lemma \ref{Lemma: Energy estimates 2}, which can be accomplished similarly to \cite[Lemma 3.10]{T-scalar} for the scalar case. \begin{proof} To prove (\ref{Eq (Section 4) Energy estimates from weak form 3}), we consider integrals ${\mathbf I},\,{\mathbf{II}}$ given by \[{\mathbf I}\coloneqq \frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert}\int_{B_{\tau\rho}(x_{0})\setminus S_{\rho,\,\mu,\,\nu}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm{d}}x,\] and \[{\mathbf{II}}\coloneqq \frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert}\int_{B_{\tau\rho}(x_{0})\cap S_{\rho,\,\mu,\,\nu}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm{d}}x,\] where $\xi\coloneqq G_{p,\,\varepsilon}^{-1}\mleft((G_{p,\,\varepsilon}(Du_{\varepsilon}))_{x_{0},\,\tau\rho}\mright)\in{\mathbb R}^{Nn}$. For ${\mathbf I}$, we use (\ref{Eq (Section 2) Measure condition 2}) to get \[{\mathbf I}\le \frac{\lvert B_{\rho}(x_{0})\setminus S_{\rho,\,\mu,\,\nu}(x_{0}) \rvert}{\tau^{n}\lvert B_{\rho}(x_{0})\rvert}\mathop{\mathrm{ess~sup}}_{B_{\rho}(x_{0})}\,\mleft(V_{\varepsilon}+\lvert \xi\rvert\mright)^{2}\le \frac{C(p)\nu\mu^{2}}{\tau^{n}}.\] The last inequality is easy to check by (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}). In fact, we have \[\mleft\lvert G_{p,\,\varepsilon}(Du_{\varepsilon})\mright\rvert=V_{\varepsilon}^{p-1}\lvert Du_{\varepsilon}\rvert\le V_{\varepsilon}^{p}\le (\delta+\mu)^{p}\le (2\mu)^{p}\quad \textrm{a.e. in }B_{\rho}(x_{0}).\] Then, recalling (\ref{Eq (Section 2) Estimate on inverse mapping}), we obtain \[\lvert \xi\rvert\le C(p)\mleft\lvert G_{p,\,\varepsilon}(\xi)\mright\rvert^{1/p}=C(p)\mleft\lvert(G_{p,\,\varepsilon}(Du_{\varepsilon}))_{x_{0},\,\tau\rho}\mright\rvert^{1/p}\le C(p)\mathop{\mathrm{ess~sup}}_{B_{\rho}(x_{0})}\,\mleft\lvert G_{p,\,\varepsilon}(Du_{\varepsilon})\mright\rvert^{1/p}\le C(p)\mu,\] and hence it is possible to estimate ${\mathbf I}$ as above. Before computing $\mathbf{II}$, we recall (\ref{Eq (Section 2) delta-epsilon}) and the definition of $S_{\rho,\,\mu,\,\nu}(x_{0})$. Then, we are able to compute \begin{equation}\label{Eq (Section 4) Non-degenerate-gradient over superlevel set} \frac{\delta}{4}+\lvert Du_{\varepsilon}\rvert\ge \varepsilon+\lvert Du_{\varepsilon}\rvert\ge V_{\varepsilon}\ge \delta+(1-\nu)\mu\quad \textrm{a.e. in }S_{\rho,\,\mu,\,\nu}(x_{0}). \end{equation} In particular, there holds \[\lvert Du_{\varepsilon}\rvert\ge \frac{3}{4}\mu\quad \textrm{a.e. in }S_{\rho,\,\mu,\,\nu}(x_{0})\] by $0<\nu<1/4$. 
With this in mind, we apply (\ref{Eq (Section 2) G-p-epsilon local ellipticity}) to obtain \begin{align*} \mathbf{II}&\le \frac{C(p)}{\mu^{2(p-1)}\lvert B_{\tau\rho}(x_{0})\rvert}\int_{B_{\tau\rho}(x_{0})\cap S_{\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert G_{p,\,\varepsilon}(Du_{\varepsilon})-G_{p,\,\varepsilon}(\xi)\mright\rvert^{2}\,{\mathrm d}x\\&\le \frac{C(p)}{\mu^{2(p-1)}}(\tau\rho)^{2}\mleft(\fint_{B_{\tau\rho}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{\frac{2n}{n+2}} \,{\mathrm d}x\mright)^{\frac{n+2}{n}}\eqqcolon \frac{C(p)}{\mu^{2(p-1)}}(\tau\rho)^{2}\cdot \mathbf{III}^{1+2/n}. \end{align*} Here we have applied (\ref{Eq: Poincare--Sobolev inequality}) to the function $G_{p,\,\varepsilon}(Du_{\varepsilon})-G_{p,\,\varepsilon}(\xi)$. The integral $\mathbf{III}$ can be decomposed by ${\mathbf{III}}={\mathbf{III}}_{1}+{\mathbf{III}}_{2}$ with \[{\mathbf{III}}_{1}\coloneqq \frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert}\int_{B_{\tau\rho}(x_{0})\setminus S_{\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{\frac{2n}{n+2}}\,{\mathrm d}x,\] and \[{\mathbf{III}}_{2}\coloneqq \frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert}\int_{S_{\tau\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{\frac{2n}{n+2}}\,{\mathrm d}x.\] To control these integrals, we apply Lemma \ref{Lemma: Energy estimates}. For ${\mathbf{III}}_{1}$, we use H\"{o}lder's inequality and (\ref{Eq (Section 4) Energy estimate from weak form 1}) to obtain\begin{align*} \mathbf{III}_{1}^{1+2/n}&\le \mleft[\frac{\lvert B_{\rho}(x_{0})\setminus S_{\rho,\,\mu,\,\nu}(x_{0})\rvert }{\lvert B_{\rho}(x_{0})\rvert }\mright]^{2/n} \fint_{B_{\tau\rho}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{2}\,{\mathrm d}x\\&\le C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma,\,\delta)\frac{\nu^{2/n}\mu^{2p}}{\tau^{n}}\mleft[\frac{1}{(1-\tau)^{2}\rho^{2}}+F^{2}\rho^{-\frac{2n}{q}}\mright]. \end{align*} where we should note $\lvert B_{\rho}(x_{0})\setminus S_{\rho,\,\mu,\,\nu}(x_{0})\rvert\le \nu\lvert B_{\rho}(x_{0})\rvert$ by (\ref{Eq (Section 2) Measure condition 2}). Similarly for ${\mathbf{III}}_{2}$, we can compute \begin{align*} \mathbf{III}_{2}^{1+2/n}&\le \mleft[\frac{\lvert S_{\tau\rho,\,\mu,\,\nu}(x_{0})\rvert}{\lvert B_{\tau\rho}(x_{0})\rvert}\mright]^{2/n}\frac{1}{\lvert B_{\tau\rho}(x_{0})\rvert} \int_{S_{\tau\rho,\,\mu,\,\nu}(x_{0})}\mleft\lvert D\mleft[G_{p,\,\varepsilon}(Du_{\varepsilon})\mright] \mright\rvert^{2}\,{\mathrm d}x\\&\le C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma,\,\delta)\frac{\mu^{2p}}{\tau^{n}}\mleft[\frac{\nu}{(1-\tau)^{2}\rho^{2}}+\frac{F^{2}\rho^{-\frac{2n}{q}}}{\nu}\mright]. \end{align*} by H\"{o}lder's inequality and (\ref{Eq (Section 4) Energy estimate from weak form 2}). Finally, we use (\ref{Eq (Section 4) minimizing property on L2-average}) to obtain \begin{align*} \Phi(x_{0},\,\tau\rho)&\le \fint_{B_{\tau\rho}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm{d}}x={\mathbf I}+{\mathbf{II}}\\&\le \frac{C(p)\nu\mu^{2}}{\tau^{n}}+\frac{C(n,\,p)}{\mu^{2p-2}}(\tau\rho)^{2}\mleft({\mathbf{III}}_{1}^{1+2/n}+{\mathbf{III}}_{2}^{1+2/n} \mright)\\&\le \frac{C\mu^{2}}{\tau^{n}}\mleft[\nu+\frac{\nu+\nu^{2/n}}{(1-\tau)^{2}}\tau^{2}+\frac{1+\nu^{1+2/n}}{\nu}\tau^{2}F^{2}\rho^{2\beta}\mright] \end{align*} with $C\in(0,\,\infty)$ depending at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$, and $\delta$. 
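For clarity, we record the elementary inequalities behind the final absorption: since $n\ge 2$ and $0<\nu<1$, there hold $\nu\le \nu^{2/n}$ and $\nu^{1+2/n}\le 1$, while $\tau^{2}\le 1\le (1-\tau)^{-2}$. Hence every term inside the bracket above is controlled by $\frac{\nu^{2/n}}{(1-\tau)^{2}}+\frac{F^{2}\rho^{2\beta}}{\nu}$ up to an absolute factor.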
By $0<\tau<1$, $0<\nu<1/4$ and $n\ge 2$, we are able to find a constant $C_{\dagger}=C_{\dagger}(b,\,n,\,p,\,q,\,\gamma,\,\Gamma,\,\delta)\in(0,\,\infty)$ such that (\ref{Eq (Section 4) Energy estimates from weak form 3}) holds. \end{proof} \subsection{Higher integrability estimates and freezing coefficient arguments}\label{Subsect: freezing coefficient method} In Section \ref{Subsect: freezing coefficient method}, we appeal to freezing coefficient methods when an average $(Du_{\varepsilon})_{x_{0},\,\rho}$ does not vanish. We introduce a harmonic mapping $v_{\varepsilon}$ near $x_{0}$, and obtain an error estimate for a comparison function $u_{\varepsilon}-v_{\varepsilon}$. When computing errors from a comparison function, we have to deduce higher integrability estimates on $\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\rho}\rvert$, which can be justified by applying the so-called Gehring lemma. \begin{lemma}[Gehring's Lemma]\label{Lemma: Gehring lemma} Let \(B=B_{R}(x_{0})\subset{\mathbb R}^{n}\) be an open ball, and let non-negative functions \(g,\,h\) satisfy \(g\in L^{s}(B),\,h\in L^{{\tilde s}}(B)\) with \(1<s<{\tilde s}\le \infty\). Suppose that there holds \[\fint_{B_{r}(z_{0})}g^{s}{\mathrm d}x\le {\hat C}\mleft[\mleft(\fint_{B_{2r}(z_{0})}g\,{\mathrm d}x \mright)^{s}+\fint_{B_{2r}(z_{0})}h^{s}\,{\mathrm d}x\mright]\] for all \(B_{2r}(z_{0})\subset B\). Here \({\hat C}\in(0,\,\infty)\) is a constant independent of \(z_{0}\in B\) and \(r>0\). Then there exists a sufficiently small positive number \(\varsigma=\varsigma({\hat C},\,s,\,{\tilde s},\,n)\) such that \(g\in L_{\mathrm{loc}}^{\sigma_{0}}(B)\) with \(\sigma_{0}\coloneqq s(1+\varsigma)\in(s,\,{\tilde s})\). Moreover, for each \(\sigma\in(s,\,\sigma_{0})\), we have \[\mleft(\fint_{B_{R/2}(x_{0})}g^{\sigma}{\mathrm d}x\mright)^{1/\sigma}\le C\mleft[\mleft(\fint_{B_{R}(x_{0})}g^{s}\,{\mathrm d}x\mright)^{1/s}+\mleft(\fint_{B_{R}(x_{0})}h^{\sigma}\,{\mathrm d}x\mright)^{1/\sigma}\mright],\] where the constant $C\in(0,\,\infty)$ depends at most on $\sigma,\,n,\,s,\,{\tilde s}$ and ${\hat C}$. \end{lemma} The proof of Gehring's lemma is found in \cite[Theorem 3.3]{MR2173373}, which is based on ball decompositions \cite[Lemma 3.1]{MR2173373} and generally works for a metric space with a doubling measure (see also \cite[\S 6.4]{MR1962933}). By applying Lemma \ref{Lemma: Gehring lemma}, we prove Lemma \ref{Lemma: Higher integrability}. \begin{lemma}[Higher integrability lemma]\label{Lemma: Higher integrability} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}). Assume that positive numbers $\delta,\,\varepsilon,\,\mu,\,M,\,F$ and an open ball $B_{\rho}(x_{0})\Subset\Omega$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}) and (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}). Fix a matrix $\xi\in{\mathbb R}^{Nn}$ with \begin{equation}\label{Eq (Section 4) xi} \delta+\frac{\mu}{4}\le \lvert\xi\rvert\le \delta+\mu. 
\end{equation} Then, there exists a constant \(\vartheta=\vartheta(b,\,n,\,N,\,p,\,q,\,\gamma,\,\Gamma,\,M,\,\delta)\) such that \begin{equation}\label{Eq (Section 4) integrability up} 0<\vartheta\le \min\mleft\{\,\frac{1}{2},\,\beta,\,\beta_{0}\,\mright\}<1, \end{equation} and \begin{equation}\label{Eq (Section 4) Higher integrability result} \fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2(1+\vartheta)}\,{\mathrm d}x\le C\mleft[\mleft(\fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm d}x \mright)^{1+\vartheta}+F^{2(1+\vartheta)}\rho^{2\beta(1+\vartheta)}\mright]. \end{equation} Here the constant \(C\in(0,\,\infty)\) depends at most on \(b,\,n,\,N,\,p,\,q,\,\gamma,\,\Gamma,\,M,\,\delta\). \end{lemma} \begin{proof} It suffices to prove \begin{equation}\label{Eq (Section 4) Claim for Gehring's lemma} \fint_{B_{r/2}(z_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm d}x\le {\hat C}\mleft[\mleft(\fint_{B_{r}(z_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{\frac{2n}{n+2}}\,{\mathrm d}x\mright)^{\frac{n+2}{n}}+\fint_{B_{r}(z_{0})}\lvert \rho f_{\varepsilon}\rvert^{2}\,\mathrm{d}x\mright] \end{equation} for any \(B_{r}(z_{0})\subset B\coloneqq B_{\rho}(x_{0})\), where \({\hat C}={\hat C}(b,\,n,\,p,\,\gamma,\,\Gamma,\,M,\,\delta)\in(0,\,\infty)\) is a constant. In fact, this result enables us to apply Lemma \ref{Lemma: Gehring lemma} with \((s,\,{\tilde s})\coloneqq (1+2/n,\,q(n+2)/2n)\), \(g\coloneqq \lvert Du_{\varepsilon}-\xi\rvert^{\frac{2n}{n+2}}\in L^{s}(B)\), and \(h\coloneqq \lvert \rho f_{\varepsilon}\rvert^{\frac{2n}{n+2}}\in L^{\tilde s}(B)\); as a result, we are able to find a small exponent \(\vartheta=\vartheta(n,\,q,\,{\hat C})>0\) and a constant \(C=C(n,\,q,\,\vartheta,\,{\hat C})\in(0,\,\infty)\) such that there hold (\ref{Eq (Section 4) integrability up}) and \[\fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2(1+\vartheta)}\,{\mathrm d}x\le C\mleft[\mleft(\fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}-\xi\rvert^{2}\,{\mathrm d}x \mright)^{1+\vartheta}+\fint_{B_{\rho}(x_{0})}\lvert \rho f_{\varepsilon}\rvert^{2(1+\vartheta)}\,{\mathrm d}x \mright].\] We note that $\vartheta\le \beta$ from (\ref{Eq (Section 4) integrability up}) yields $2(1+\vartheta)\le q$, and therefore $\lvert\rho f_{\varepsilon}\rvert^{2(1+\vartheta)}$ is integrable in $B_{\rho}(x_{0})$. Moreover, by H\"{o}lder's inequality, we have \[\fint_{B_{\rho}(x_{0})}\lvert\rho f_{\varepsilon}\rvert^{2(1+\vartheta)}\,{\mathrm d}x\le C(n,\,\vartheta)F^{2(1+\vartheta)}\rho^{2\beta(1+\vartheta)},\] from which (\ref{Eq (Section 4) Higher integrability result}) easily follows. To prove (\ref{Eq (Section 4) Claim for Gehring's lemma}), for each fixed ball $B_{r}(z_{0})\subset B$, we set a function $w_{\varepsilon}\in W^{1,\,\infty}(B_{r}(z_{0});\,{\mathbb R}^{N})$ by \[w_{\varepsilon}(x)\coloneqq u_{\varepsilon}(x)-(u_{\varepsilon})_{z_{0},\,r}-\xi\cdot (x-z_{0})\quad \textrm{for }x\in B_{r}(z_{0}).\] Clearly $Dw_{\varepsilon}=Du_{\varepsilon}-\xi$ holds. We choose a cutoff function $\eta\in C_{c}^{1}(B_{r}(z_{0}))$ satisfying \[\eta\equiv 1\quad \textrm{on }B_{r/2}(z_{0}),\quad\textrm{and}\quad 0\le\eta\le 1,\quad \lvert\nabla\eta\rvert\le \frac{4}{r}\quad \textrm{in }B_{r}(z_{0}),\] and test $\phi\coloneqq \eta^{2}w_{\varepsilon}\in W_{0}^{1,\,1}(B_{\rho}(x_{0});\,{\mathbb R}^{N})$ into (\ref{Eq (Section 3) Weak formulation local}). 
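Concerning the identity displayed next, we note that $D\phi=\eta^{2}Dw_{\varepsilon}+2\eta\,w_{\varepsilon}\otimes\nabla\eta$ with $Dw_{\varepsilon}=Du_{\varepsilon}-\xi$, and that \[\int_{B_{r}(z_{0})}\langle A_{\varepsilon}(\xi)\mid D\phi\rangle\,{\mathrm{d}}x=0,\] since $A_{\varepsilon}(\xi)$ is a constant matrix and $\phi$ vanishes on $\partial B_{r}(z_{0})$.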
Then we have \begin{align*} 0&=\int_{B_{r}(z_{0})}\langle A_{\varepsilon}(Du_{\varepsilon})-A_{\varepsilon}(\xi)\mid D\phi\rangle\,{\mathrm{d}}x-\int_{B_{r}(z_{0})}\langle f_{\varepsilon}\mid \phi\rangle\,{\mathrm{d}}x\\&=\int_{B_{r}(z_{0})}\eta^{2}\langle A_{\varepsilon}(Du_{\varepsilon})-A_{\varepsilon}(\xi)\mid Du_{\varepsilon}-\xi\rangle\,{\mathrm{d}}x\\&\quad +2\int_{B_{r}(z_{0})}\eta\langle A_{\varepsilon}(Du_{\varepsilon})-A_{\varepsilon}(\xi)\mid w_{\varepsilon}\otimes\nabla\eta\rangle\,{\mathrm{d}}x-\int_{B_{r}(z_{0})}\eta^{2}\langle f_{\varepsilon}\mid w_{\varepsilon}\rangle\,{\mathrm{d}}x\\&\eqqcolon {\mathbf J}_{1}+{\mathbf J}_{2}+{\mathbf J}_{3}. \end{align*} Here it should be mentioned that, since $\delta\le \lvert\xi\rvert\le M$ clearly holds and $\lvert Du_{\varepsilon}\rvert\le M$ a.e. in $B$, we are able to apply (\ref{Eq (Section 2) Monotonicity outside}) and (\ref{Eq (Section 2) Growth outside}) in Lemma \ref{Lemma: Error estimates} to ${\mathbf J}_{1}$ and ${\mathbf J}_{2}$ respectively. Then, by applying Young's inequality to ${\mathbf J}_{2}$ and ${\mathbf J}_{3}$, and making a standard absorbing argument (see \cite[Lemma 3.7]{T-scalar} for detailed computations), we are able to obtain \begin{align*} \fint_{B_{r/2}(z_{0})}\mleft\lvert Dw_{\varepsilon}\mright\rvert^{2}\,{\mathrm d}x&\le 2^{n}\fint_{B_{r}(z_{0})}\mleft\lvert D(\eta w_{\varepsilon})\mright\rvert^{2}\,{\mathrm d}x\\&\le C\mleft[\fint_{B_{r}(z_{0})}\mleft(\lvert\nabla \eta\rvert^{2}+\frac{\eta^{2}}{r^{2}}\mright) \lvert w_{\varepsilon}\rvert^{2}\,{\mathrm d}x+r^{2}\fint_{B_{r}(z_{0})}\eta^{2}\lvert f_{\varepsilon}\rvert^{2}\,{\mathrm d}x \mright]\\&\le C\mleft[r^{-2}\fint_{B_{r}(z_{0})} \lvert w_{\varepsilon}\rvert^{2}\,{\mathrm d}x+\fint_{B_{r}(z_{0})}\lvert \rho f_{\varepsilon}\rvert^{2}\,{\mathrm d}x\mright]\\&\le {\hat C}^{2}\mleft[\mleft(\fint_{B_{r}(z_{0})}\lvert Dw_{\varepsilon}\rvert^{\frac{2n}{n+2}}\,{\mathrm{d}}x\mright)^{\frac{n+2}{2n}}+\mleft(\fint_{B_{r}(z_{0})}\lvert \rho f_{\varepsilon}\rvert^{2}\,{\mathrm d}x\mright)^{1/2}\mright]^{2} \end{align*} for some constant \({\hat C}\in(0,\,\infty)\) depending on $n$, $C_{1}$, and $C_{2}$. Here we have applied (\ref{Eq: Poincare--Sobolev inequality}) to the function $w_{\varepsilon}$ to obtain the last inequality. Recalling \(Dw_{\varepsilon}=Du_{\varepsilon}-\xi\), we finally conclude that (\ref{Eq (Section 4) Claim for Gehring's lemma}) holds for any open ball \(B_{r}(z_{0})\subset B\). \end{proof} From Lemma \ref{Lemma: Higher integrability}, we would like to deduce a comparison estimate in Lemma \ref{Lemma: Perturbation result}. \begin{lemma}\label{Lemma: Perturbation result} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}). Assume that positive numbers $\delta$, $\varepsilon$, $\mu$, $M$, $F$, and an open ball $B_{\rho}(x_{0})\Subset\Omega$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}), (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}), and \begin{equation}\label{Eq (Section 4) average assumption} \delta+\frac{\mu}{4}\le \mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\le \delta+\mu. 
\end{equation} Consider the Dirichlet boundary value problem \begin{equation}\label{Eq (Section 4) Dirichlet boundary problem harmonic system} \mleft\{\begin{array}{rclcc} -\mathop{\mathrm{div}}\mleft({\mathcal B}_{\varepsilon}\mleft((Du_{\varepsilon})_{x_{0},\,\rho}\mright)Dv_{\varepsilon}\mright) & = & 0 &\textrm{in} &B_{\rho/2}(x_{0}),\\ v_{\varepsilon} & = & u_{\varepsilon} & \textrm{on} & \partial B_{\rho/2}(x_{0}). \end{array} \mright. \end{equation} Then, there exists a unique function $v_{\varepsilon}\in u_{\varepsilon}+W_{0}^{1,\,2}(B_{\rho/2}(x_{0});\,{\mathbb R}^{N})$ that solves (\ref{Eq (Section 4) Dirichlet boundary problem harmonic system}). Moreover, we have \begin{equation}\label{Eq (Section 4) Comparison estimate} \fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm{d}}x\le C\mleft\{ \mleft[\frac{\Phi(x_{0},\,\rho)}{\mu^{2}}\mright]^{\vartheta}\Phi(x_{0},\,\rho)+\mleft(F^{2}+F^{2(1+\vartheta)} \mright)\rho^{2\beta} \mright\}, \end{equation} and \begin{equation}\label{Eq (Section 4) Decay of harmonic mappings} \fint_{B_{\tau\rho}(x_{0})}\mleft\lvert Dv_{\varepsilon}-(Dv_{\varepsilon})_{x_{0},\,\tau\rho}\mright\rvert^{2}\,{\mathrm{d}}x\le C\tau^{2}\fint_{B_{\rho/2}(x_{0})}\mleft\lvert Dv_{\varepsilon}-(Dv_{\varepsilon})_{x_{0},\,\rho}\mright\rvert^{2}\,{\mathrm{d}}x \end{equation} for all $\tau\in(0,\,1/2\rbrack$. Here the exponent $\vartheta$ is given by Lemma \ref{Lemma: Higher integrability}, and the constants $C\in(0,\,\infty)$ in (\ref{Eq (Section 4) Comparison estimate})--(\ref{Eq (Section 4) Decay of harmonic mappings}) depend at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $M$, and $\delta$. \end{lemma} Before showing Lemma \ref{Lemma: Perturbation result}, we mention that our analysis of perturbation arguments is based on the assumption (\ref{Eq (Section 4) average assumption}). It is easy to estimate $\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert$ from above. In fact, by (\ref{Eq (Section 2) esssup V-epsilon}) we have \[\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\le \fint_{B_{\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}\mright\rvert\,{\mathrm{d}}x\le \fint_{B_{\rho}(x_{0})}V_{\varepsilon}\,{\mathrm{d}}x\le \delta+\mu.\] To estimate the value $\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert$ from below, however, we have to make careful computations, which are based on the measure assumption (\ref{Eq (Section 2) Measure condition 2}) and energy estimates as in Section \ref{Subsect: Energy estimates}. The condition $\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\ge \delta+\mu/4$ is to be justified later in Sections \ref{Subsect: Shrinking lemmata}--\ref{Subsect: Proof of Campanato-decay}. \begin{proof} We set $\xi\coloneqq (Du_{\varepsilon})_{x_{0},\,\rho}\in{\mathbb R}^{Nn}$. By (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) delta<mu}) and (\ref{Eq (Section 4) average assumption}), it is easy to check \(\delta/4\le \mu/4\le \sqrt{\varepsilon^{2}+\lvert \xi\rvert^{2}}\le \delta+(\delta+\mu)\le \delta+M\). Hence, the matrix ${\mathcal B}_{\varepsilon}(\xi)$ admits a constant $m\in(0,\,1)$, depending at most on $b$, $p$, $\gamma$, $\Gamma$, $M$, and $\delta$, such that there holds \(m{\mathrm{id}}_{Nn}\leqslant {\mathcal B}_{\varepsilon}(\xi)\leqslant m^{-1}{\mathrm{id}}_{Nn}\). 
In particular, the matrix ${\mathcal B}_{\varepsilon}(\xi)$ satisfies the Legendre condition, and hence unique existence of the Dirichlet problem (\ref{Eq (Section 4) Dirichlet boundary problem harmonic system}) follows from \cite[Theorem 3.39]{MR3099262}. Moreover, since the coefficient matrix ${\mathcal B}_{\varepsilon}(\xi)$ is constant and satisfies the Legendre--Hadamard condition, it is easy to find a constant $C=C(n,\,N,\,m)\in(0,\,\infty)$ such that (\ref{Eq (Section 4) Decay of harmonic mappings}) holds (see e.g., \cite[Lemma 2.17]{MR3887613}, \cite[Proposition 5.8]{MR3099262}). To prove (\ref{Eq (Section 4) Comparison estimate}), we first check $l_{0}\mu^{p-2}{\mathrm{id}}_{Nn} \leqslant{\mathcal B}_{\varepsilon}(\xi)$ for some constant $l_{0}=l_{0}(p)\in(0,\,1)$. This can be easily deduced by \(\mu/4\le \sqrt{\varepsilon^{2}+\lvert \xi\rvert^{2}}\le \delta+(\delta+\mu)\le 5\mu\). Since $v_{\varepsilon}$ satisfies a weak formulation \[\int_{B}\mleft\langle {\mathcal B}_{\varepsilon}(\xi)Dv_{\varepsilon}\mathrel{}\middle|\mathrel{} D\phi \mright\rangle\,{\mathrm{d}}x=0\quad \textrm{for all }\phi\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N}),\] where we write $B\coloneqq B_{\rho/2}(x_{0})$ for notational simplicity, combining with (\ref{Eq (Section 3) Weak formulation local}), we have \begin{align*} &\int_{B}\mleft\langle {\mathcal B}_{\varepsilon}(\xi)(Du_{\varepsilon}-Dv_{\varepsilon})\mathrel{}\middle|\mathrel{} D\phi \mright\rangle\,{\mathrm{d}}x\\&=\int_{B}\mleft\langle {\mathcal B}_{\varepsilon}(\xi)(Du_{\varepsilon}-\xi)-(A_{\varepsilon}(Du_{\varepsilon})-A_{\varepsilon}(\xi)) \mathrel{}\middle|\mathrel{}D\phi \mright\rangle\,{\mathrm{d}}x+\int_{B}\langle f_{\varepsilon}\mid \phi\rangle\,{\mathrm{d}}x \end{align*} for all $\phi\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N})$. The assumptions (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) and (\ref{Eq (Section 4) average assumption}) enable us to use (\ref{Eq (Section 2) Hessian errors}) in Lemma \ref{Lemma: Error estimates}. As a result, we are able to find a constant $C\in(0,\,\infty)$, depending at most on $b$, $p$, $\beta_{0}$, $\gamma$, $\Gamma$, $M$, and $\delta$, such that \[\int_{B}\mleft\langle {\mathcal B}_{\varepsilon}(\xi)(Du_{\varepsilon}-Dv_{\varepsilon})\mathrel{}\middle|\mathrel{} D\phi \mright\rangle\,{\mathrm{d}}x\le C\mu^{p-2-\beta_{0}}\int_{B}\lvert Du_{\varepsilon}-\xi\rvert^{1+\beta_{0}}\lvert D\phi\rvert\,{\mathrm{d}}x+\int_{B}\lvert f_{\varepsilon}\rvert\lvert \phi\rvert\,{{\mathrm d}}x\] for all $\phi\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N})$. Testing $\phi\coloneqq u_{\varepsilon}-v_{\varepsilon}\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N})$ into this weak formulation and using the Cauchy--Schwarz inequality, we compute \begin{align*} &l_{0}\gamma\mu^{p-2}\int_{B}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm{d}}x\\&\le C\mu^{p-2-\beta_{0}}\mleft(\int_{B}\lvert Du_{\varepsilon}-\xi\rvert^{2(1+\beta_{0})}\,{\mathrm{d}}x \mright)^{1/2}\mleft(\int_{B}\lvert Du_{\varepsilon}-Dv_{\varepsilon} \rvert^{2}\,{\mathrm{d}}x \mright)^{1/2}\\&\quad +C(n,\,q)F\rho^{\beta+\frac{n}{2}}\mleft(\int_{B}\lvert Du_{\varepsilon}-Dv_{\varepsilon} \rvert^{2}\,{\mathrm{d}}x \mright)^{1/2}. \end{align*} Here we have also applied the Poincar\'{e} inequality to the function $u_{\varepsilon}-v_{\varepsilon}\in W_{0}^{1,\,2}(B;\,{\mathbb R}^{N})$. 
Thus, we obtain \[\fint_{B}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm{d}}x\le C\mleft[\frac{1}{\mu^{2\beta_{0}}}\fint_{B}\lvert Du_{\varepsilon}-\xi\rvert^{2(1+\beta_{0})}\,{\mathrm{d}}x+\mu^{2(2-p)}F^{2}\rho^{2\beta} \mright]\] for some constant $C\in(0,\,\infty)$ depending at most on $b$, $n$, $N$, $p$, $q$, $\beta_{0}$, $\gamma$, $\Gamma$, $M$, and $\delta$. Since $\xi$ clearly satisfies (\ref{Eq (Section 4) xi}), we are able to apply Lemma \ref{Lemma: Higher integrability}. Also, by (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) and (\ref{Eq (Section 4) integrability up}), it is easy to check that $2(1+\vartheta)\le 2(1+\beta_{0})$ and \[\lvert Du_{\varepsilon}-\xi\rvert\le 2(\delta+\mu)\le 4\mu\quad \textrm{a.e. in }B_{\rho}(x_{0}).\] With these results in mind, we use H\"{o}lder's inequality and (\ref{Eq (Section 4) Higher integrability result}) to obtain \begin{align*} \fint_{B}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm{d}}x&\le C\mleft[\frac{c(n,\,\vartheta)}{\mu^{2\vartheta}}\fint_{B}\lvert Du_{\varepsilon}-\xi\rvert^{2(1+\vartheta)}\,{\mathrm{d}}x+\mu^{2(2-p)}F^{2}\rho^{2\beta} \mright]\\&\le C\mleft[\frac{\Phi(x_{0},\,\rho)^{1+\vartheta}}{\mu^{2\vartheta}}+\frac{F^{2(1+\vartheta)}\rho^{2\beta(1+\vartheta)}}{\mu^{2\vartheta}}+\mu^{2(2-p)}F^{2}\rho^{2\beta} \mright] \end{align*} for some $C\in(0,\,\infty)$. Here we note that $0<\rho\le 1$ and $\delta<\mu<M$ hold by (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}), and therefore (\ref{Eq (Section 4) Comparison estimate}) is verified. \end{proof} \subsection{Key lemmata in shrinking methods}\label{Subsect: Shrinking lemmata} In Section \ref{Subsect: Shrinking lemmata}, we provide two lemmata on shrinking methods. To prove these, we use results from Sections \ref{Subsect: Energy estimates}--\ref{Subsect: freezing coefficient method}. The first lemma (Lemma \ref{Lemma: Shrinking lemma 1}) states that an average $(Du_{\varepsilon})_{x_{0},\,\rho}\in{\mathbb R}^{Nn}$ does not vanish under suitable settings. This result ensures that our freezing coefficient argument given in Section \ref{Subsect: freezing coefficient method} will work. \begin{lemma}\label{Lemma: Shrinking lemma 1} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}) in $\Omega$. Assume that positive numbers $\delta$, $\varepsilon$, $\mu$, $F$, $M$, and an open ball $B_{\rho}(x_{0})\Subset\Omega$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}), and (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}). Then, for each fixed $\theta\in(0,\,1/16)$, there exist numbers $\nu\in(0,\,1/4),\,{\hat\rho}\in(0,\,1)$, depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, $\delta$, and $\theta$, such that the following statement is valid. If both $0<\rho<{\hat\rho}$ and (\ref{Eq (Section 2) Measure condition 2}) hold, then we have \begin{equation}\label{Eq (Section 4) result 1 average integral non-vanishing} \lvert (Du_{\varepsilon})_{x_{0},\,\rho}\rvert\ge \delta+\frac{\mu}{2}, \end{equation} and \begin{equation}\label{Eq (Section 4) result 2 oscillation control} \Phi(x_{0},\,\rho)\le \theta\mu^{2}. 
\end{equation} \end{lemma} Although the proof of Lemma \ref{Lemma: Shrinking lemma 1} is inspired by \cite[Lemma 5.5]{BDGPdN}, it should be emphasized that on our regularized problem (\ref{Eq (Section 2) Approximated system}), we have to deal with two different moduli $\lvert Du_{\varepsilon}\rvert$ and $V_{\varepsilon}=\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}$, which is substantially different from \cite{BDGPdN}. Therefore, as mentioned in Section \ref{Subsect: Models and comparisons}, in the proof of Lemma \ref{Lemma: Shrinking lemma 1}, we have to carefully utilize (\ref{Eq (Section 2) delta-epsilon}), so that $\varepsilon$ can be suitably dominated by $\delta$. Also, it should be mentioned that our proof is substantially the same as \cite[Lemma 3.12]{T-scalar}, although there are some differences in the ranges of $\varepsilon$ or $\theta$. \begin{proof} We will later choose constants $\tau\in(0,\,1),\,\nu\in(0,\,1/4),\,{\hat\rho}\in(0,\,1)$. By (\ref{Eq (Section 4) minimizing property on L2-average}), we have \[\Phi(x_{0},\,\rho)\le\fint_{B_{\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\tau\rho}\mright\rvert^{2}\,{\mathrm d}x=\mathbf{J}_{1}+\mathbf{J}_{2}\] with \[\mleft\{\begin{array}{rcl} \mathbf{J}_{1}&\coloneqq&\displaystyle\frac{1}{\lvert B_{\rho}(x_{0})\rvert}\displaystyle\int_{B_{\tau\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\tau\rho}\mright\rvert^{2}\,{\mathrm d}x, \\ \mathbf{J}_{2}&\coloneqq & \displaystyle\frac{1}{\lvert B_{\rho}(x_{0})\rvert}\displaystyle\int_{B_{\rho}(x_{0})\setminus B_{\tau\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\tau\rho}\mright\rvert^{2}\,{\mathrm d}x. \end{array} \mright.\] For \(\mathbf{J}_{1}\), we apply Lemma \ref{Lemma: Energy estimates 2} to obtain \[\mathbf{J}_{1}=\tau^{n}\Phi(x_{0},\,\tau\rho)\le C_{\dagger}\mu^{2}\mleft[\frac{\nu^{2/n}}{(1-\tau)^{2}}+\frac{F^{2}}{\nu} \rho^{2\beta}\mright]\] with \(C_{\dagger}=C_{\dagger}(b,\,n,\,N,\,p,\,\gamma,\,\Gamma,\,M,\,\delta)\in(0,\,\infty)\). For \(\mathbf{J}_{2}\), we use (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}) to get \(\lvert Du_{\varepsilon}\rvert\le V_{\varepsilon}\le \delta+\mu\le 2\mu\) a.e. in \(B_{\rho}(x_{0})\), and hence \(\lvert(Du_{\varepsilon})_{x_{0},\,\tau\rho}\rvert\le 2\mu\). These inequalities yield \(\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\tau\rho}\rvert\le 4\mu\) a.e. in \(B_{\rho}(x_{0})\), and therefore \[\mathbf{J}_{2}\le 16\mu^{2}\cdot\frac{\lvert B_{\rho}(x_{0})\setminus B_{\tau\rho}(x_{0})\rvert}{\lvert B_{\rho}(x_{0})\rvert}=16\mu^{2}(1-\tau^{n})\le 16n\mu^{2}(1-\tau),\] where we have used \(1-\tau^{n}=(1+\tau+\cdots+\tau^{n-1})(1-\tau)\le n(1-\tau)\). Hence, we obtain \[\Phi(x_{0},\,\rho)\le C_{\dagger}\mu^{2}\mleft[\frac{\nu^{2/n}}{(1-\tau)^{2}}+\frac{F^{2}}{\nu} {\hat\rho}^{2\beta}\mright]+16n(1-\tau)\mu^{2}.\] We first fix \[\tau\coloneqq 1-\frac{\theta}{48n}\in(0,\,1),\quad\textrm{so that there holds}\quad 16n(1-\tau)=\frac{\theta}{3}.\] Next we choose \(\nu\in(0,\,1/4)\) sufficiently small that it satisfies \[\nu\le\min\mleft\{\,\mleft(\frac{\theta(1-\tau)^{2}}{3C_{\dagger}}\mright)^{n/2},\,\frac{1-4\sqrt{\theta}}{11} \,\mright\},\] so that we have \[\frac{C_{\dagger}\nu^{2/n}}{(1-\tau)^{2}}\le \frac{\theta}{3},\quad \textrm{and}\quad \sqrt{\theta}\le \frac{1-11\nu}{4}.\] Corresponding to this \(\nu\), we choose and fix sufficiently small \({\hat\rho}\in(0,\,1)\) satisfying \[{\hat\rho}^{2\beta}\le \frac{\nu\theta}{3C_{\dagger}(1+F^{2})},\] which yields \(C_{\dagger}F^{2}{\hat\rho}^{2\beta}/\nu\le \theta/3\). 
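Collecting the three contributions above, these choices give \[\Phi(x_{0},\,\rho)\le \mleft(\frac{\theta}{3}+\frac{\theta}{3}+\frac{\theta}{3}\mright)\mu^{2}=\theta\mu^{2}.\]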
That is, our settings of \(\tau,\,\nu,\,{\hat\rho}\) yield (\ref{Eq (Section 4) result 2 oscillation control}). To prove (\ref{Eq (Section 4) result 1 average integral non-vanishing}), we recall (\ref{Eq (Section 4) Non-degenerate-gradient over superlevel set}) and use (\ref{Eq (Section 2) Measure condition 2}). Then we obtain \begin{align*} \fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}\rvert\,{\mathrm d}x&\ge \frac{\lvert S_{\rho,\,\mu,\,\nu}(x_{0})\rvert}{\lvert B_{\rho}(x_{0})\rvert}\cdot \mathop{\mathrm{ess~inf}}\limits_{S_{\rho,\,\mu,\,\nu}(x_{0})}\,\lvert Du_{\varepsilon}\rvert\\&\ge (1-\nu)\cdot\mleft[(1-\nu)\mu+\frac{3}{4}\delta\mright]>0. \end{align*} On the other hand, by the triangle inequality, the Cauchy--Schwarz inequality and (\ref{Eq (Section 4) result 2 oscillation control}), it is easy to get \begin{align*} \mleft\lvert\fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}\rvert\,{\mathrm d}x -\mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\mright\rvert&=\mleft\lvert \fint_{B_{\rho}(x_{0})}\mleft[\lvert Du_{\varepsilon}\rvert-\mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\mright]\,{\mathrm d}x\mright\rvert\\&\le \fint_{B_{\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\,{\mathrm d}x\\&\le \sqrt{\Phi(x_{0},\,\rho)}\le \sqrt{\theta}\mu. \end{align*} Again by the triangle inequality, we obtain \begin{align*} \mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert&\ge \mleft\lvert\fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}\rvert\,{\mathrm d}x \mright\rvert-\mleft\lvert\fint_{B_{\rho}(x_{0})}\lvert Du_{\varepsilon}\rvert\,{\mathrm d}x-\mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho}\mright\rvert\mright\rvert\\&\ge \mleft((1-\nu)^{2}-\sqrt{\theta}\mright)\mu+\frac{3}{4}(1-\nu)\delta. \end{align*} By (\ref{Eq (Section 2) delta<mu}) and our choice of \(\nu\), we can check that \begin{align*} \mleft((1-\nu)^{2}-\sqrt{\theta}\mright)\mu+\frac{3}{4}(1-\nu)\delta-\mleft(\delta+\frac{\mu}{2}\mright)&=\mleft(\frac{1}{2}-2\nu+\nu^{2}-\sqrt{\theta}\mright)\mu-\mleft(\frac{1}{4}+\frac{3}{4}\nu\mright)\delta\\&\ge \mleft(\frac{1-11\nu}{4}-\sqrt{\theta} \mright)\mu\ge 0, \end{align*} which completes the proof of (\ref{Eq (Section 4) result 1 average integral non-vanishing}). \end{proof} The second lemma (Lemma \ref{Lemma: Shrinking lemma 2}) is a consequence of the perturbation arguments given in Section \ref{Subsect: freezing coefficient method}. \begin{lemma}\label{Lemma: Shrinking lemma 2} Let $u_{\varepsilon}$ be a weak solution to (\ref{Eq (Section 2) Approximated system}) in $\Omega$. Assume that positive numbers $\delta$, $\varepsilon$, $\mu$, $F$, $M$, and an open ball $B_{\rho}(x_{0})\Subset\Omega$ satisfy (\ref{Eq (Section 2) delta-epsilon}), (\ref{Eq (Section 2) control of f}), (\ref{Eq (Section 2) esssup V-epsilon})--(\ref{Eq (Section 2) delta<mu}), and $0<\rho<1$. Let $\vartheta$ be the constant in Lemma \ref{Lemma: Higher integrability}. If (\ref{Eq (Section 4) average assumption}) and \begin{equation}\label{Eq (Section 4) energy decay setting} \Phi(x_{0},\,\rho)\le \tau^{\frac{n+2}{\vartheta}}\mu^{2} \end{equation} hold for some $\tau\in(0,\,1/2)$, then we have \begin{equation}\label{Eq (Section 4) energy decay result} \Phi(x_{0},\,\tau\rho)\le C_{\ast}\mleft[\tau^{2}\Phi(x_{0},\,\rho)+\frac{\rho^{2\beta}}{\tau^{n}}\mu^{2}\mright]. \end{equation} Here the constant $C_{\ast}$ depends at most on $b$, $n$, $N$, $p$, $q$, $\beta_{0}$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$. 
\end{lemma} \begin{proof} Let \(v_{\varepsilon}\in u_{\varepsilon}+W_{0}^{1,\,2}(B_{\rho/2}(x_{0});\,{\mathbb R}^{N})\) be the unique solution of (\ref{Eq (Section 4) Dirichlet boundary problem harmonic system}). We use (\ref{Eq (Section 4) minimizing property on L2-average}) to get \begin{align*} \Phi(x_{0},\,\tau\rho)&\le\fint_{B_{\tau\rho}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Dv_{\varepsilon})_{x_{0},\,\tau\rho}\mright\rvert^{2}\,{\mathrm d}x \\&\le \frac{2}{(2\tau)^{n}}\fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm d}x +2\fint_{B_{\tau\rho}(x_{0})}\mleft\lvert Dv_{\varepsilon}-(Dv_{\varepsilon})_{x_{0},\,\tau\rho} \mright\rvert^{2}\,{\mathrm d}x, \end{align*} where we have used \(\lvert B_{\rho/2}(x_{0})\rvert=(2\tau)^{-n}\cdot \lvert B_{\tau\rho}(x_{0})\rvert\). For the second average integral, (\ref{Eq (Section 4) minimizing property on L2-average}) and (\ref{Eq (Section 4) Decay of harmonic mappings}) yield \[\fint_{B_{\tau\rho}(x_{0})}\mleft\lvert Dv_{\varepsilon}-(Dv_{\varepsilon})_{x_{0},\,\tau\rho} \mright\rvert^{2}\,{\mathrm d}x\le C\mleft[\tau^{2}\fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm d}x+\tau^{2}\Phi(x_{0},\,\rho)\mright] \] with \(C=C(n,\,N,\,p,\,q,\,\gamma,\,\Gamma,\,M,\,\delta)\in(0,\,\infty)\). By (\ref{Eq (Section 2) delta<mu}), (\ref{Eq (Section 4) Comparison estimate}) and (\ref{Eq (Section 4) energy decay setting}), we are able to compute \begin{align*} \Phi(x_{0},\,\tau\rho)&\le C\mleft[\frac{1}{\tau^{n}}\fint_{B_{\rho/2}(x_{0})}\lvert Du_{\varepsilon}-Dv_{\varepsilon}\rvert^{2}\,{\mathrm d}x+\tau^{2}\Phi(x_{0},\,\rho)\mright]\\&\le C\mleft\{ \mleft[\frac{\Phi(x_{0},\,\rho)}{\mu^{2}}\mright]^{\vartheta}\cdot\frac{\Phi(x_{0},\,\rho)}{\tau^{n}}+\frac{F^{2}+F^{2(1+\vartheta)}}{\tau^{n}}\rho^{2\beta}+\tau^{2}\Phi(x_{0},\,\rho)\mright\}\\&\le C\mleft[\tau^{2}\Phi(x_{0},\,\rho)+\mleft(F^{2}+F^{2(1+\vartheta)}\mright)\cdot\frac{\rho^{2\beta}}{\tau^{n}}\cdot\mleft(\frac{\mu}{\delta}\mright)^{2}\mright]\\&\le C_{\ast}(b,\,n,\,N,\,p,\,q,\,\beta_{0},\,\gamma,\,\Gamma,\,F,\,M,\,\delta)\mleft[\tau^{2}\Phi(x_{0},\,\rho)+\frac{\rho^{2\beta}}{\tau^{n}}\mu^{2}\mright], \end{align*} which completes the proof. \end{proof} \subsection{Proof of Proposition \ref{Prop: Schauder estimate}}\label{Subsect: Proof of Campanato-decay} We would like to prove Proposition \ref{Prop: Schauder estimate} by shrinking arguments. A key point of the proof, which is inspired by \cite[Proposition 3.4]{BDGPdN}, is to justify that an average $(Du_{\varepsilon})_{x_{0},\,r}\in{\mathbb R}^{Nn}$ never vanishes even when $r$ tends to $0$, so that Lemma \ref{Lemma: Shrinking lemma 2} can be applied. To verify this, we make careful computations, found in the proof of Lemma \ref{Lemma: Shrinking lemma 1}. \begin{proof} We set a constant $\vartheta$ as in Lemma \ref{Lemma: Higher integrability}. We will determine a sufficiently small constant $\tau\in(0,\,1/2)$, and corresponding to this \(\tau\), we will put the desired constants \(\rho_{\star}\in(0,\,1)\) and \(\nu\in(0,\,1/4)\). 
We first assume that \begin{equation}\label{Eq (Section 4) Determination of tau 1} 0<\tau<\max\mleft\{\,\tau^{\beta},\,\tau^{1-\beta}\,\mright\}<\frac{1}{16},\quad \textrm{and therefore}\quad \theta\coloneqq \tau^{\frac{n+2}{\vartheta}}\in\mleft(0,\,\frac{1}{16}\mright) \end{equation} Throughout the proof, we let \(\nu\in(0,\,1/6)\) and \({\hat\rho}\in(0,\,1)\) be sufficiently small constants satisfying Lemma \ref{Lemma: Shrinking lemma 1} with \(\theta\) defined by (\ref{Eq (Section 4) Determination of tau 1}). We also assume that \(\rho_{\star}\) is so small that there holds \begin{equation}\label{Eq (Section 4) Determination of rho-star 1} 0<\rho_{\star}\le {\hat\rho}<1. \end{equation} Assume that the open ball \(B_{\rho}(x_{0})\) satisfies \(0<\rho<\rho_{\star}\), and (\ref{Eq (Section 2) Measure condition 2}) holds for the constant \(\nu\in(0,\,1/6)\). We set a non-negative decreasing sequence \(\{\rho_{k}\}_{k=0}^{\infty}\) by \(\rho_{k}\coloneqq \tau^{k}\rho\). We will choose suitable \(\tau\) and \(\rho_{\ast}\) such that there hold \begin{equation}\label{Eq (Section 4) Induction claim 2} \mleft\lvert (Du_{\varepsilon})_{x_{0},\,{\rho_{k}}} \mright\rvert\ge \delta+\mleft[\frac{1}{2}-\frac{1}{8}\sum_{j=0}^{k-1}2^{-j}\mright]\mu \ge \delta+\frac{\mu}{4} \end{equation} and \begin{equation}\label{Eq (Section 4) Induction claim 1} \Phi(x_{0},\,\rho_{k})\le \tau^{2\beta k}\tau^{\frac{n+2}{\vartheta}}\mu^{2}, \end{equation} for all \(k\in{\mathbb Z}_{\ge 0}\), which will be proved by mathematical induction. For \(k=0,\,1\), we apply Lemma \ref{Lemma: Shrinking lemma 1} to deduce (\ref{Eq (Section 4) result 1 average integral non-vanishing})--(\ref{Eq (Section 4) result 2 oscillation control}) with \(\theta=\tau^{\frac{n+2}{\vartheta}}\). In particular, we have \begin{equation}\label{Eq (Section 4) Phi estimate for the first step} \Phi(x_{0},\,\rho)\le \tau^{\frac{n+2}{\vartheta}}\mu^{2}, \end{equation} and hence (\ref{Eq (Section 4) Induction claim 1}) is obvious when $k=0$. From (\ref{Eq (Section 4) result 1 average integral non-vanishing}), we have already known that (\ref{Eq (Section 4) Induction claim 2}) holds for \(k=0\). Also, (\ref{Eq (Section 4) result 1 average integral non-vanishing}) and (\ref{Eq (Section 4) Phi estimate for the first step}) enable us to apply Lemma \ref{Lemma: Shrinking lemma 2} to obtain \begin{align*} \Phi(x_{0},\,\rho_{1})&\le C_{\ast}\mleft[\tau^{2}\Phi(x_{0},\,\rho)+\frac{\rho^{2\beta}}{\tau^{n}}\mu^{2}\mright]\\&\le C_{\ast}\tau^{2(1-\beta)}\cdot\tau^{2\beta}\tau^{\frac{n+2}{\vartheta}}\mu^{2}+\frac{C_{\ast}\rho_{\star}^{2\beta}}{\tau^{n}}\mu^{2}, \end{align*} where \(C_{\ast}\in(0,\,\infty)\) is a constant as in Lemma \ref{Lemma: Shrinking lemma 2}, depending at most on $b$, $n$, $N$, $p$, $q$, $\beta_{0}$, $\gamma$, $\Gamma$, $F$, $M$, and $\delta$. Now we assume that \(\tau\) and \(\rho_{\star}\) satisfy \begin{equation}\label{Eq (Section 4) Determination of tau 2} C_{\ast}\tau^{2(1-\beta)}\le \frac{1}{2}, \end{equation} and \begin{equation}\label{Eq (Section 4) Determination of rho-star 2} C_{\ast}\rho_{\star}^{2\beta}\le \frac{1}{2}\tau^{n+2\beta+\frac{n+2}{\vartheta}}, \end{equation} so that (\ref{Eq (Section 4) Induction claim 1}) holds for \(k=1\). 
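Indeed, by (\ref{Eq (Section 4) Determination of tau 2}) the first term on the right-hand side above is at most $\frac{1}{2}\tau^{2\beta}\tau^{\frac{n+2}{\vartheta}}\mu^{2}$, and by (\ref{Eq (Section 4) Determination of rho-star 2}) the second term is also at most $\frac{1}{2}\tau^{2\beta}\tau^{\frac{n+2}{\vartheta}}\mu^{2}$; adding these two bounds yields (\ref{Eq (Section 4) Induction claim 1}) for $k=1$.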
In particular, by (\ref{Eq (Section 4) integrability up}), (\ref{Eq (Section 4) Determination of tau 1}), (\ref{Eq (Section 4) Phi estimate for the first step}) and the Cauchy--Schwarz inequality, we obtain \begin{align*} \mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho_{1}}-(Du_{\varepsilon})_{x_{0},\,\rho_{0}} \mright\rvert&\le \fint_{B_{\rho_{1}}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\rho_{0}}\mright\rvert\,{\mathrm d}x\nonumber\\&\le \mleft(\fint_{B_{\rho_{1}}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\rho_{0}}\mright\rvert^{2}\,{\mathrm d}x\mright)^{1/2}=\tau^{-\frac{n}{2}}\Phi(x_{0},\,\rho)^{1/2}\nonumber\\&\le \tau^{\frac{n+2}{2\vartheta}-\frac{n}{2}}\mu\le \tau\mu\le \frac{1}{8}\mu. \end{align*} Combining this result with (\ref{Eq (Section 4) result 1 average integral non-vanishing}), we use the triangle inequality to get \[\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho_{1}}\mright\rvert\ge \mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho_{0}}\mright\rvert-\mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho_{1}}-(Du_{\varepsilon})_{x_{0},\,\rho_{0}} \mright\rvert\ge \mleft(\delta+\frac{\mu}{2}\mright)-\frac{\mu}{8},\] which means that (\ref{Eq (Section 4) Induction claim 2}) holds true for \(k=1\). Next, we assume that the claims (\ref{Eq (Section 4) Induction claim 2})--(\ref{Eq (Section 4) Induction claim 1}) are valid for an integer \(k\ge 1\). Then \(\Phi(x_{0},\,\rho_{k})\le \tau^{\frac{n+2}{\vartheta}}\mu^{2}\) clearly holds. Combining this result with the induction hypothesis (\ref{Eq (Section 4) Induction claim 2}), we have clarified that the solution \(u_{\varepsilon}\) satisfies assumptions of Lemma \ref{Lemma: Shrinking lemma 2} in a smaller ball \(B_{\rho_{k}}(x_{0})\subset B_{\rho}(x_{0})\). By Lemma \ref{Lemma: Shrinking lemma 2}, (\ref{Eq (Section 4) Determination of tau 2}), and the induction hypothesis (\ref{Eq (Section 4) Induction claim 1}), we compute \begin{align*} \Phi(x_{0},\,\rho_{k+1})&\le C_{\ast}\mleft[\tau^{2}\Phi(x_{0},\,\rho_{k})+\frac{\rho_{k}^{2\beta}}{\tau^{n}}\mu^{2}\mright]\\&\le C_{\ast}\tau^{2(1-\beta)}\cdot \tau^{2\beta (k+1)}\tau^{\frac{n+2}{\vartheta}}\mu^{2}+\frac{C_{\ast}\rho_{\star}^{2\beta}}{\tau^{n}}\cdot\tau^{2\beta k}\mu^{2}\\&\le \tau^{2\beta(k+1)}\cdot \tau^{\frac{n+2}{\vartheta}}\mu^{2}, \end{align*} which means that (\ref{Eq (Section 4) Induction claim 1}) holds true for \(k+1\). Also, by the Cauchy--Schwarz inequality and the induction hypothesis (\ref{Eq (Section 4) Induction claim 1}), we have \begin{align*} \mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho_{k+1}}- (Du_{\varepsilon})_{x_{0},\,\rho_{k}}\mright\rvert&\le \fint_{B_{\rho_{k+1}}(x_{0})}\mleft\lvert Du_{\varepsilon}- (Du_{\varepsilon})_{x_{0},\,\rho_{k}}\mright\rvert\,{\mathrm d}x \\&\le \mleft(\fint_{B_{\rho_{k+1}}(x_{0})}\mleft\lvert Du_{\varepsilon}-(Du_{\varepsilon})_{x_{0},\,\rho_{k}}\mright\rvert^{2}\,{\mathrm d}x\mright)^{1/2}=\tau^{-n/2}\Phi(x_{0},\,\rho_{k})^{1/2}\\&\le \tau^{\beta k}\tau^{\frac{n+2}{\vartheta}-\frac{n}{2}}\mu\le 2^{-k}\cdot \frac{1}{8} \mu. \end{align*} Here we have also used (\ref{Eq (Section 4) Determination of tau 1}). 
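More precisely, (\ref{Eq (Section 4) Determination of tau 1}) gives $\tau\le \tau^{\beta}\le 1/16$, and (\ref{Eq (Section 4) integrability up}) yields $\frac{n+2}{\vartheta}-\frac{n}{2}\ge \frac{n}{2}+2>1$; hence $\tau^{\beta k}\le 2^{-k}$ and $\tau^{\frac{n+2}{\vartheta}-\frac{n}{2}}\le \tau\le \frac{1}{8}$.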
Therefore, by the induction hypothesis (\ref{Eq (Section 4) Induction claim 2}) and the triangle inequality, we get \begin{align*} \mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho_{k+1}}\mright\rvert&\ge\mleft\lvert(Du_{\varepsilon})_{x_{0},\,\rho_{k}}\mright\rvert-\mleft\lvert (Du_{\varepsilon})_{x_{0},\,\rho_{k+1}}-(Du_{\varepsilon})_{x_{0},\,\rho_{k}}\mright\rvert\\& \ge \delta+ \mleft[\frac{1}{2}-\frac{1}{8}\sum_{j=0}^{k-1}2^{-j}\mright]\mu-\frac{1}{8}\cdot 2^{-k}\mu, \end{align*} which implies that (\ref{Eq (Section 4) Induction claim 2}) is valid for \(k+1\). This completes the proof of (\ref{Eq (Section 4) Induction claim 2})--(\ref{Eq (Section 4) Induction claim 1}). We define \[\Psi_{2\delta,\,\varepsilon}(x_{0},\,r)\coloneqq \fint_{B_{r}(x_{0})}\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\mleft({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright)_{x_{0},\,r}\mright\rvert^{2}\,{\mathrm d}x\] for \(r\in(0,\,\rho\rbrack\), and we set a sequence of vectors \(\{\Gamma_{k}\}_{k=0}^{\infty}\subset {\mathbb R}^{Nn}\) by \[\Gamma_{k}\coloneqq\mleft({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright)_{x_{0},\,\rho_{k}}\quad \textrm{for }k\in{\mathbb Z}_{\ge 0}.\] Let \(c_{\dagger}>0\) be the constant satisfying (\ref{Eq (Section 2) Lipschitz bounds of the mapping G-2delta-epsilon}). For each \(k\in{\mathbb Z}_{\ge 0}\), we apply (\ref{Eq (Section 4) minimizing property on L2-average}), (\ref{Eq (Section 4) integrability up}) and (\ref{Eq (Section 4) Induction claim 1})--(\ref{Eq (Section 4) Phi estimate for the first step}) to get \begin{align*} \Psi_{2\delta,\,\varepsilon}(x_{0},\,\rho_{k})&\le \fint_{B_{\rho_{k}}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-{\mathcal G}_{2\delta,\,\varepsilon}\mleft((Du_{\varepsilon})_{x_{0},\,\rho_{k}} \mright)\mright\rvert^{2}\,{\mathrm d}x\\&\le c_{\dagger}^{2}\cdot \Phi(x_{0},\,\rho_{k})\\&\le c_{\dagger}^{2}\tau^{2\beta k}\tau^{\frac{n+2}{\vartheta}}\mu^{2}\le c_{\dagger}^{2}\tau^{2\beta k+2(n+2)}\mu^{2}. \end{align*} Here we let \(\tau\) satisfy \begin{equation}\label{Eq (Section 4) Determination of tau 3} \tau\le \frac{1}{\sqrt{2}c_{\dagger}} \end{equation} to get \begin{equation}\label{Eq (Section 4) Campanato-growth estimate for G-delta-epsilon} \Psi_{2\delta,\,\varepsilon}\mleft(x_{0},\,\rho_{k}\mright)\le \tau^{2n+2}\tau^{2\beta k}\mu^{2}\quad \textrm{for all }k\in{\mathbb Z}_{\ge 0}. \end{equation} By (\ref{Eq (Section 4) Campanato-growth estimate for G-delta-epsilon}) and the Cauchy--Schwarz inequality, we have \begin{align*} \lvert \Gamma_{k+1}-\Gamma_{k}\rvert&\le \fint_{B_{k}}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{k}\mright\rvert\,{\mathrm d}x\\&\le \mleft(\fint_{B_{k}}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{k}\mright\rvert^{2}\,{\mathrm d}x\mright)^{1/2}=\tau^{-n/2}\Psi_{2\delta,\,\varepsilon}(x_{0},\,\rho_{k})^{1/2}\\&\le \tau^{n/2+1}\tau^{\beta k}\mu \end{align*} for all \(k\in{\mathbb Z}_{\ge 0}\). In particular, for all \(k,\,l\in{\mathbb Z}_{\ge 0}\) with \(k<l\), we have \begin{align*} \lvert \Gamma_{l}-\Gamma_{k}\rvert&\le \sum_{i=k}^{l-1}\lvert \Gamma_{i+1}-\Gamma_{i}\rvert\le \sum_{i=k}^{l-1}\tau^{n/2+1}\tau^{\beta i}\mu\\&\le \tau^{n/2+1}\mu\sum_{i=k}^{\infty}\tau^{\beta i}=\tau^{n/2+1}\frac{\tau^{\beta k}}{1-\tau^{\beta}}\mu. \end{align*} The setting (\ref{Eq (Section 4) Determination of tau 1}) clearly yields $\tau^{\beta}\le 1/2$. 
Hence, we have \begin{equation}\label{Eq (Section 4) Cauchy-estimate} \lvert \Gamma_{k}-\Gamma_{l}\rvert\le 2\tau^{n/2+1}\tau^{\beta k}\mu\quad \textrm{for all $k,l\in{\mathbb Z}_{\ge 0}$ with }k<l, \end{equation} which implies that \(\{\Gamma_{k}\}_{k=0}^{\infty}\) is a Cauchy sequence in \({\mathbb R}^{Nn}\). Therefore the limit \[\Gamma_{\infty}\coloneqq\lim_{k\to\infty}\Gamma_{k}\in{\mathbb R}^{Nn}\] exists. Moreover, by letting \(l\to\infty\) in (\ref{Eq (Section 4) Cauchy-estimate}), we have \[\lvert \Gamma_{k}-\Gamma_{\infty}\rvert\le 2\tau^{n/2+1}\tau^{\beta k}\mu\quad \textrm{for every }k\in{\mathbb Z}_{\ge 0}.\] Combining this result with (\ref{Eq (Section 4) Campanato-growth estimate for G-delta-epsilon}), we obtain \begin{align}\label{Eq (Section 4) Digital Campanato-Growth estimate in Perturbation} \fint_{B_{\rho_{k}}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{\infty} \mright\rvert^{2}\,{\mathrm d}x&\le 2\fint_{B_{\rho_{k}}(x_{0})}\mleft[\mleft\lvert {\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{k} \mright\rvert^{2}+\lvert\Gamma_{k}-\Gamma_{\infty} \rvert^{2} \mright] \,{\mathrm d}x\nonumber\\&\le 2\mleft(\tau^{2n+2}\tau^{2\beta k}\mu^{2}+4\tau^{n+2}\tau^{2\beta k}\mu^{2}\mright)\nonumber\\&\le 10\tau^{2(1-\beta)}\cdot \tau^{n+2\beta (k+1)}\mu^{2}\nonumber\\&\le \tau^{n+2\beta (k+1)}\mu^{2}. \end{align} Here we have used $\tau^{2(1-\beta)}\le 1/10$, which immediately follows from (\ref{Eq (Section 4) Determination of tau 1}). For each \(r\in(0,\,\rho\rbrack\), there corresponds a unique \(k\in{\mathbb Z}_{\ge 0}\) such that \(\rho_{k+1}<r\le \rho_{k}\). Then by (\ref{Eq (Section 4) Digital Campanato-Growth estimate in Perturbation}), we have \begin{align}\label{Eq (Section 4) Continuous Campanato-Growth estimate in Perturbation} \fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{\infty}\mright\rvert^{2}\,{\mathrm d}x&\le \tau^{-n}\fint_{B_{\rho_{k}}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{\infty}\mright\rvert^{2}\,{\mathrm d}x \nonumber\\&\le \tau^{2\beta (k+1)}\mu^{2}\le \mleft(\frac{r}{\rho}\mright)^{2\beta}\mu^{2} \end{align} for all $r\in(0,\,\rho\rbrack$. By the Cauchy--Schwarz inequality, we also have \begin{align*} \mleft\lvert\mleft({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright)_{x_{0},\,r}-\Gamma_{\infty}\mright\rvert&\le \fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{\infty}\mright\rvert^{2}\,{\mathrm d}x\\&\le\mleft(\fint_{B_{r}(x_{0})}\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})-\Gamma_{\infty}\mright\rvert^{2}\,{\mathrm d}x \mright)^{1/2}\le \mleft(\frac{r}{\rho}\mright)^{\beta}\mu. \end{align*} for all $r\in(0,\,\rho\rbrack$. This result yields \[\Gamma_{2\delta,\,\varepsilon}(x_{0})\coloneqq\lim_{r\to 0}\mleft({\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright)_{x_{0},\,r}=\Gamma_{\infty},\] and hence the desired estimate (\ref{Eq (Section 2) Campanato-type growth from Schauder}) clearly follows from (\ref{Eq (Section 4) Continuous Campanato-Growth estimate in Perturbation}). It is noted that (\ref{Eq (Section 2) esssup V-epsilon}) and (\ref{Eq (Section 2) Control of G-2delta-epsilon by G-delta-epsilon}) imply \(\mleft\lvert{\mathcal G}_{2\delta,\,\varepsilon}(Du_{\varepsilon})\mright\rvert\le \mu\) a.e. in \(B_{\rho}(x_{0})\), and therefore (\ref{Eq (Section 2) Bound of Gamma-2delta-epsilon}) is obvious. 
Finally, we mention that we may choose a sufficiently small constant \(\tau=\tau(C_{\ast},\,c_{\dagger},\,\beta)\in(0,\,1/2)\) verifying (\ref{Eq (Section 4) Determination of tau 1}), (\ref{Eq (Section 4) Determination of tau 2}), and (\ref{Eq (Section 4) Determination of tau 3}). Corresponding to this \(\tau\), we take sufficiently small numbers \(\nu\in(0,\,1/6),\,{\hat\rho}\in(0,\,1)\) as in Lemma \ref{Lemma: Shrinking lemma 1}, depending at most on $b$, $n$, $N$, $p$, $q$, $\gamma$, $\Gamma$, $F$, $M$, $\delta$, and $\theta=\tau^{\frac{n+2}{\vartheta}}$. Then, we are able to determine a sufficiently small radius \(\rho_{\star}=\rho_{\star}(C_{\ast},\,\beta,\,{\hat\rho})\in(0,\,1)\) verifying (\ref{Eq (Section 4) Determination of rho-star 1}) and (\ref{Eq (Section 4) Determination of rho-star 2}), and this completes the proof. \end{proof} \section{Appendix: Local Lipschitz bounds}\label{Section: Appendix} In Section \ref{Section: Appendix}, we would like to provide the proof of Proposition \ref{Prop: Lipschitz bounds} for the reader's convenience. Before showing Proposition \ref{Prop: Lipschitz bounds}, we recall two basic lemmata (see \cite[Lemma 4.3]{MR2777537} and \cite[Chapter 2, Lemma 4.7]{MR0244627} for the proofs). \begin{lemma}\label{Lemma: Absorbing lemma} Assume that a non-negative bounded function $H\colon \lbrack 0,\,1\rbrack\rightarrow \lbrack 0,\,\infty)$ admits constants $\theta\in\lbrack 0,\,1)$ and $\alpha,\,A,\,B\in(0,\,\infty)$ such that \[H(t)\le \theta H(s)+\frac{A}{(s-t)^{\alpha}}+B\] holds whenever $0\le t<s\le 1$. Then we have \[H(t)\le C(\alpha,\,\theta)\mleft[\frac{A}{(s-t)^{\alpha}}+B \mright]\] for any $0\le t<s\le 1$. \end{lemma} \begin{lemma}\label{Lemma: Geometric convergence lemma} Assume that a sequence $\{a_{m}\}_{m=0}^{\infty}\subset (0,\,\infty)$ satisfies \[a_{m+1}\le CB^{m}a_{m}^{1+\varsigma}\] for all $m\in{\mathbb Z}_{\ge 0}$. Here $C,\,\varsigma\in(0,\,\infty)$ and $B\in(1,\,\infty)$ are constants. If $a_{0}$ satisfies \[a_{0}\le C^{-1/\varsigma}B^{-1/\varsigma^{2}},\] then $a_{m}\to 0$ as $m\to\infty$. \end{lemma} The proof of Proposition \ref{Prop: Lipschitz bounds} is based on De Giorgi's truncation. More sophisticated computations than ours concerning local Lipschitz bounds by De Giorgi's truncation are given in \cite[Theorem 1.13]{MR4078712}, where external force terms are assumed to be less regular than $L^{q}\,(n<q\le\infty)$. Compared with \cite{MR4078712}, our proof of Proposition \ref{Prop: Lipschitz bounds} is rather elementary, since we only deal with the case where the external force term $f_{\varepsilon}$ is in a Lebesgue space $L^{q}$ with $q\in(n,\,\infty\rbrack$. To control $f_{\varepsilon}$ by the $L^{q}$-norm, we appeal to standard absorbing arguments as in \cite[Theorem 4.1, Method 1]{MR2777537} (see also \cite[\S 4.3]{MR4201656} for scalar problems). \begin{proof} For notational simplicity, we write $B_{r}\coloneqq B_{r}(x_{0})$ for $r\in(0,\,\rho\rbrack$, and $B\coloneqq B_{\rho}(x_{0})$. We fix a constant $k\coloneqq 1+\lVert f_{\varepsilon}\rVert_{L^{q}(B)}^{1/(p-1)}\ge 1$, and define superlevel sets \[A(l,\,r)\coloneqq \mleft\{x\in B_{r}\mathrel{} \middle|\mathrel{} \mleft(V_{\varepsilon}(x)-k\mright)_{+}^{p}>l\mright\}\] for $l\in(0,\,\infty)$ and $r\in(0,\,\rho\rbrack$. 
Then, by (\ref{Eq (Section 3) Ellipticity of coefficients}) and $k\ge 1$, there holds \begin{equation}\label{Eq (Appendix) Uniform ellipticity on Lipschitz bounds} \gamma V_{\varepsilon}^{p-2}{\mathrm{id}}_{n}\leqslant {\mathcal C}_{\varepsilon}(Du_{\varepsilon})\leqslant {\hat\Gamma}V_{\varepsilon}^{p-2}{\mathrm{id}}_{n}\quad \textrm{a.e. in }A(0,\,\rho) \end{equation} with ${\hat\Gamma}\coloneqq b+3\Gamma$. We first claim that there holds \begin{equation}\label{Eq (Appendix) First claim on Lipschitz bounds} \int_{B}\mleft\lvert \nabla(\eta W_{l})\mright\rvert^{2}\,{\mathrm{d}}x\le C(b,\,n,\,p,\,\gamma,\,\Gamma)\mleft[\int_{B}\lvert\nabla\eta\rvert^{2}W_{l}^{2}\,{\mathrm{d}}x+\int_{A(l,\,\rho)}f_{k}\mleft(W_{l}^{2}+lW_{l}+l^{2}\mright)\,{\mathrm{d}}x\mright] \end{equation} for all $l\in\mleft(k^{p},\,\infty\mright)$ and for any non-negative function $\eta\in C_{c}^{1}(B)$. Here the non-negative functions $W_{l}$ and $f_{k}$ are respectively given by \(W_{l}\coloneqq\mleft(\mleft(V_{\varepsilon}-k\mright)_{+}^{p}-l\mright)_{+}\), and \[f_{k}\coloneqq \frac{\lvert f_{\varepsilon}\rvert^{2}}{V_{\varepsilon}^{2(p-1)}}\chi_{A(0,\,\rho)}\in L^{q/2}(B),\quad \textrm{so that}\quad f_{k}V_{\varepsilon}^{2(p-1)}=\lvert f_{\varepsilon}\rvert^{2}\chi_{A(0,\,\rho)}\] holds a.e. in $B$. To prove (\ref{Eq (Appendix) First claim on Lipschitz bounds}), we apply Lemma \ref{Lemma: Weak formulation for V-epsilon} with $\psi(\sigma)\coloneqq \mleft((\sigma-k)_{+}^{p}-l\mright)_{+}$ and $\zeta\coloneqq\eta^{2}$, so that $W_{l}=\psi(V_{\varepsilon})$ holds. Under this setting, we discard a non-negative term $J_{3}$, and carefully compute the other integrals. To compute $J_{1}$, we may use (\ref{Eq (Appendix) Uniform ellipticity on Lipschitz bounds}), since $W_{l}$ vanishes outside the set $A(l,\,\rho)$. When estimating $J_{4},\,J_{5},\,J_{6}$, we use \[\frac{V_{\varepsilon}}{V_{\varepsilon}-k}=1+\frac{k}{V_{\varepsilon}-k}\le 1+\frac{k}{l^{1/p}}\le 2\quad \textrm{a.e. in }A(l,\,\rho),\] and \[V_{\varepsilon}^{p}\le 2^{p-1}\mleft((V_{\varepsilon}-k)^{p}+k^{p}\mright)\le 2^{p}\mleft(W_{l}+l\mright)\quad \textrm{a.e. in }A(l,\,\rho),\] both of which follow easily from $l>k^{p}$. Here we also note that the identity \(W_{l}+l=(V_{\varepsilon}-k)^{p}\) holds a.e. in $A(l,\,\rho)$. Combining these, we can compute \begin{align*} &p\gamma \int_{B}\eta^{2}\mleft\lvert \nabla V_{\varepsilon}\mright\rvert^{2}V_{\varepsilon}^{p-1}\mleft(V_{\varepsilon}-k\mright)^{p-1}\chi_{A(l,\,\rho)}\,{\mathrm{d}}x\\ &\le J_{2}\le 2\lvert J_{1}\rvert+\frac{1}{\gamma}(n\lvert J_{4}\rvert+\lvert J_{5}\rvert)+2\lvert J_{6}\rvert\\&\le 4{\hat\Gamma}\int_{B}\eta W_{l}V_{\varepsilon}^{p-1}\lvert\nabla V_{\varepsilon}\rvert\lvert\nabla\eta\rvert\,{\mathrm{d}}x\\&\quad+\frac{1}{\gamma}\mleft[n\int_{A(l,\,\rho)}f_{k}W_{l}V_{\varepsilon}^{p}\eta^{2}\,{\mathrm{d}}x+p\int_{A(l,\,\rho)}f_{k}\mleft(V_{\varepsilon}-k\mright)^{p}V_{\varepsilon}^{p}\frac{V_{\varepsilon}}{V_{\varepsilon}-k}\,{\mathrm{d}}x\mright]\\&\quad\quad+4\int_{A(l,\,\rho)}\lvert f_{\varepsilon}\rvert\lvert\nabla\eta\rvert W_{l}V_{\varepsilon}\eta\,{\mathrm{d}}x\\&\le \frac{p\gamma}{2}\int_{B}\eta^{2}\mleft\lvert \nabla V_{\varepsilon}\mright\rvert^{2}V_{\varepsilon}^{p-1}\mleft(V_{\varepsilon}-k\mright)^{p-1}\chi_{A(l,\,\rho)}\,{\mathrm{d}}x\\&\quad +C(b,\,n,\,p,\,\gamma,\,\Gamma)\mleft[\int_{B}W_{l}^{2}\lvert\nabla \eta\rvert^{2}\,{\mathrm{d}}x+\int_{A(l,\,\rho)}f_{k}\mleft(W_{l}^{2}+lW_{l}+l^{2}\mright)\,{\mathrm{d}}x\mright] \end{align*} by Young's inequality.
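For the reader's convenience, we indicate one admissible way to carry out the Young's inequality step for the first term on the right-hand side; this is only a sketch, using the bound $V_{\varepsilon}/(V_{\varepsilon}-k)\le 2$ recalled above and the fact that $W_{l}$ vanishes outside $A(l,\,\rho)$: \[4{\hat\Gamma}\,\eta W_{l}V_{\varepsilon}^{p-1}\lvert\nabla V_{\varepsilon}\rvert\lvert\nabla\eta\rvert\le \frac{p\gamma}{4}\,\eta^{2}\lvert\nabla V_{\varepsilon}\rvert^{2}V_{\varepsilon}^{p-1}\mleft(V_{\varepsilon}-k\mright)^{p-1}\chi_{A(l,\,\rho)}+\frac{16{\hat\Gamma}^{2}}{p\gamma}\mleft(\frac{V_{\varepsilon}}{V_{\varepsilon}-k}\mright)^{p-1}W_{l}^{2}\lvert\nabla\eta\rvert^{2},\] where the last factor in brackets is bounded by $2^{p-1}$ a.e. in $A(l,\,\rho)$; the remaining terms on the right-hand side are treated similarly.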
By an absorbing argument, we obtain \begin{align*} &\int_{B}\eta^{2}\mleft\lvert\nabla W_{l}\mright\rvert^{2}\,{\mathrm{d}}x=p^{2}\int_{B}\eta^{2}\lvert \nabla V_{\varepsilon}\rvert^{2}\mleft(V_{\varepsilon}-k\mright)^{2(p-1)}\chi_{A_{l}}\,{\mathrm{d}}x \\&\le p^{2}\int_{B}\eta^{2}\mleft\lvert \nabla V_{\varepsilon}\mright\rvert^{2}V_{\varepsilon}^{p-1}\mleft(V_{\varepsilon}-k\mright)^{p-1}\chi_{A_{l}}\,{\mathrm{d}}x\\&\le C(b,\,n,\,p,\,\gamma,\,\Gamma)\mleft[\int_{B}W_{l}^{2}\lvert\nabla \eta\rvert^{2}\,{\mathrm{d}}x+\int_{A(l,\,\rho)}f_{k}\mleft(W_{l}^{2}+lW_{l}+l^{2}\mright)\,{\mathrm{d}}x\mright] \end{align*} from which (\ref{Eq (Appendix) First claim on Lipschitz bounds}) immediately follows. Next, we would like to find a constant $C_{\clubsuit}=C_{\clubsuit}(b,\,n,\,N,\,p,\,q,\,\gamma,\,\Gamma)\in (0,\,\infty)$ and an exponent $\varsigma=\varsigma(n,\,q)\in(0,\,2/n\rbrack$ such that \begin{equation}\label{Eq (Appendix) 1.5-th claim on Lipschitz bounds} \int_{A({\hat l},\,{\hat r})}W_{{\hat l}}^{2}\,{\mathrm{d}}x\le C\mleft[\frac{1}{(r-{\hat r})^{2}}+\frac{{\hat l}^{2}}{({\hat l}-l)^{2}} \mright]\frac{1}{({\hat l}-l)^{2\varsigma}}\mleft(\int_{A(l,\,r)}W_{l}^{2}{\mathrm{d}}x \mright)^{1+\varsigma} \end{equation} holds for arbitrary numbers $l$, ${\hat l}$, $r$, ${\hat r}$, $\hat\rho$ enjoying $0<{\hat r}<r\le{\hat\rho} \le\rho$ and $0<l_{0}\coloneqq k^{p}+C_{\clubsuit}\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}\le l<{\hat l}<\infty$. As a preliminary for proving (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds}), we would like to verify that \begin{equation}\label{Eq (Appendix) Second claim on Lipschitz bounds} \int_{A(l,\,r)}(\eta W_{l})^{2}\,{\mathrm{d}}x\le C\mleft[\lvert A(l,\,r)\rvert^{\varsigma}\int_{A(l,\,r)}W_{l}^{2}\lvert \nabla \eta\rvert^{2}{\mathrm{d}}x+l^{2}\lvert A(l,\,r)\rvert^{1+\varsigma} \mright] \end{equation} holds for all $r\in(0,\,{\hat \rho}\rbrack$, $l\in\lbrack l_{0},\,\infty)$, and for any non-negative function $\eta\in C_{c}^{1}(B_{r})$ with $0\le\eta\le 1$. Here the constants $C\in(0,\,\infty)$ in (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds})--(\ref{Eq (Appendix) Second claim on Lipschitz bounds}) depend at most on $b$, $n$, $p$, $q$, $\gamma$ and $\Gamma$. To deduce (\ref{Eq (Appendix) Second claim on Lipschitz bounds}), we should mention that $\lVert f_{k}\rVert_{L^{q/2}(B)}\le 1$ is clear by the definitions of $k$ and $f_{k}$. For simplicity, we consider the case $n\ge 3$, where we can use the Sobolev embedding $W_{0}^{1,\,2}(B_{r})\hookrightarrow L^{2^{\ast}}(B_{r})$ with $2^{\ast}=2n/(n-2)$. Combining with H\"{o}lder's inequality and Young's inequality, by (\ref{Eq (Appendix) First claim on Lipschitz bounds}) we get \begin{align*} \int_{A(l,\,r)}\eta^{2}f_{k}\mleft(W_{l}^{2}+lW_{l}+l^{2}\mright)\,{\mathrm{d}}x& \le \mleft(\sigma+C(n)\lvert A(l,\,r)\rvert^{2/n-2/q}\mright)\int_{A(l,\,r)}\mleft\lvert\nabla (\eta W_{l})\mright\rvert^{2}\,{\mathrm{d}}x\\&\quad+l^{2}\mleft(\frac{C(n)}{\sigma}\lvert A(l,\,r) \rvert^{1+2/n-4/q}+\lvert A(l,\,r)\rvert^{1-2/q}\mright) \end{align*} for any $\sigma>0$. 
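As a sample of the computation behind the last display, the zeroth-order term is handled, for instance, by H\"{o}lder's inequality with exponents $q/2$ and $q/(q-2)$ together with $\lVert f_{k}\rVert_{L^{q/2}(B)}\le 1$: \[\int_{A(l,\,r)}\eta^{2}f_{k}\,l^{2}\,{\mathrm{d}}x\le l^{2}\lVert f_{k}\rVert_{L^{q/2}(B)}\lvert A(l,\,r)\rvert^{1-2/q}\le l^{2}\lvert A(l,\,r)\rvert^{1-2/q},\] which is the last term in the display; the terms involving $W_{l}^{2}$ and $lW_{l}$ are estimated analogously, combining H\"{o}lder's inequality with the Sobolev embedding and Young's inequality, which produces the factor $\sigma+C(n)\lvert A(l,\,r)\rvert^{2/n-2/q}$ in front of the gradient term.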
By the definition of $A(l,\,r)$ and the Cauchy--Schwarz inequality, we can easily check \[\lvert A(l,\,r)\rvert\le \frac{1}{l-k^{p}}\int_{A(l,\,r)}W_{k^{p}}\,{\mathrm{d}}x\le \frac{\lvert A(l,\,r)\rvert^{1/2}}{l-k^{p}}\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}.\] Hence, it follows that \[\lvert A(l,\,r)\rvert\le \mleft(\frac{\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}}{l-k^{p}}\mright)^{2}.\] Thus, we can choose and fix sufficiently small $\sigma\in(0,\,1)$ and suitably large $C_{\clubsuit}\in(0,\,\infty)$, both of which depend at most on $b$, $n$, $p$, $q$, $\gamma$ and $\Gamma$, so that an absorbing argument can be made. Moreover, by our choice of $C_{\clubsuit}$, we may assume that $\lvert A(l,\,r)\rvert\le 1$. As a result, we are able to conclude (\ref{Eq (Appendix) Second claim on Lipschitz bounds}) with $\varsigma\coloneqq 2/n-2/q\in(0,\,2/n)$ when $n\ge 3$. In the remaining case $n=2$, we fix a sufficiently large constant $\kappa\in(2,\,\infty)$, and use the Sobolev embedding $W_{0}^{1,\,2}(B_{r})\hookrightarrow L^{\kappa}(B_{r})$. By similar computations, we can find a constant $C_{\clubsuit}$, depending at most on $b$, $n$, $p$, $q$, $\gamma$, $\Gamma$ and $\kappa$, such that (\ref{Eq (Appendix) Second claim on Lipschitz bounds}) holds for some exponent $\varsigma=\varsigma(n,\,q,\,\kappa)\in(0,\,1-2/q)$. Now we would like to prove (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds}). Let $l$, ${\hat l}$ and $r$, ${\hat r}$, ${\hat\rho}$ satisfy respectively $l_{0}<l<{\hat l}<\infty$ and $0<{\hat r}<r\le{\hat\rho}\le\rho$. Corresponding to the radii $r,\,{\hat r}$, we fix a non-negative function $\eta\in C_{c}^{1}(B_{r})$ such that \[\eta\equiv 1\quad \textrm{on }B_{{\hat r}}\quad \textrm{and}\quad \lvert\nabla\eta\rvert\le \frac{2}{r-{\hat r}}\quad \textrm{in }B_{r}.\] Since $W_{l}\ge {\hat l}-l$ holds a.e. in $A({\hat l},\,\rho)$, it is easy to check that \[\lvert A({\hat l},\,\rho)\rvert\le \frac{1}{({\hat l}-l)^{2}}\int_{A(l,\,\rho)}W_{l}^{2} \,{\mathrm{d}}x.\] Also, the inclusion $A({\hat l},\,{\hat r})\subset A(l,\,r)$ and the inequality $W_{{\hat l}}\le W_{l}$ yield \[\int_{A({\hat l},\,{\hat r})}W_{\hat l}^{2}\,{\mathrm{d}}x\le\int_{A(l,\,r)}W_{l}^{2}\,{\mathrm{d}}x.\] From these inequalities and (\ref{Eq (Appendix) Second claim on Lipschitz bounds}), we can easily conclude (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds}). From (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds}), we would like to complete the proof of Proposition \ref{Prop: Lipschitz bounds}. We fix arbitrary $\theta\in(0,\,1)$, ${\hat\rho}\in(0,\,\rho\rbrack$. For each $m\in{\mathbb Z}_{\ge 0}$, we set \[l_{m}\coloneqq l_{0}+L_{0}(1-2^{-m}),\quad \rho_{m}\coloneqq \mleft[\theta+2^{-m}(1-\theta)\mright]{\hat\rho},\quad a_{m}\coloneqq \lVert W_{l_{m}}\rVert_{L^{2}(B_{\rho_{m}})},\] where the constant $L_{0}\in(0,\,\infty)$ is to be chosen later. Then, by (\ref{Eq (Appendix) 1.5-th claim on Lipschitz bounds}), we get \[a_{m+1}\le C\mleft[\frac{2^{m+1}}{(1-\theta){\hat\rho}}+2^{m+1}\mright]\frac{2^{\varsigma(m+1)}}{L_{0}^{\varsigma}}a_{m}^{1+\varsigma}\le \frac{{\tilde C}}{(1-\theta){\hat\rho}}L_{0}^{-\varsigma}2^{(1+\varsigma)m}a_{m}^{1+\varsigma}\] for every $m\in{\mathbb Z}_{\ge 0}$, where ${\tilde C}\in(0,\,\infty)$ depends at most on $b$, $n$, $p$, $q$, $\gamma$ and $\Gamma$.
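Before fixing $L_{0}$, we record, as a minimal sketch, why Lemma \ref{Lemma: Geometric convergence lemma} yields the desired decay: under the smallness assumption $a_{0}\le C^{-1/\varsigma}B^{-1/\varsigma^{2}}$, one verifies by induction that $a_{m}\le B^{-m/\varsigma}a_{0}$ for every $m\in{\mathbb Z}_{\ge 0}$, since \[a_{m+1}\le CB^{m}a_{m}^{1+\varsigma}\le CB^{m}B^{-m(1+\varsigma)/\varsigma}a_{0}^{\varsigma}\cdot a_{0}\le CB^{-m/\varsigma}\cdot C^{-1}B^{-1/\varsigma}\cdot a_{0}=B^{-(m+1)/\varsigma}a_{0},\] and hence $a_{m}\to 0$ as $m\to\infty$ because $B>1$.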
We set $L_{0}$ by \[L_{0}\coloneqq C_{\diamondsuit}\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}\quad \textrm{with}\quad C_{\diamondsuit}\coloneqq \mleft[\frac{{\tilde C}}{(1-\theta){\hat\rho}} \mright]^{1/\varsigma}2^{\frac{1+\varsigma}{\varsigma^{2}}},\] so that we obtain \[a_{0}=\lVert W_{l_{0}}\rVert_{L^{2}(B_{\hat\rho})}\le \lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}\le \mleft[\frac{{\tilde C}L_{0}^{-\varsigma}}{(1-\theta){\hat\rho}} \mright]^{-1/\varsigma}\mleft[2^{1+\varsigma}\mright]^{-1/\varsigma^{2}}.\] By Lemma \ref{Lemma: Geometric convergence lemma}, we have $a_{m}\to 0$ as $m\to \infty$. In particular, it follows that \[\int_{A(l_{0}+L_{0},\,\theta{\hat\rho})}W_{l_{0}+L_{0}}^{2}\,{\mathrm{d}}x=0,\] which implies \(\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\theta \hat\rho})}\le\mleft(C_{\clubsuit}+C_{\diamondsuit}\mright)\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}\). As a consequence, we obtain \[\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\theta\hat\rho})}\le C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma)\frac{\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}}{\mleft[(1-\theta)\hat\rho\mright]^{pd/2}} \quad \textrm{for all }\theta\in(0,\,1),\,\hat\rho\in(0,\,\rho\rbrack\] with $d\coloneqq 2/(p\varsigma)\in \lbrack n/p,\,\infty)$. By $\lVert W_{k^{p}}\rVert_{L^{2}(B_{\hat\rho})}\le \lVert W_{k^{p}}\rVert_{L^{1}(B)}^{1/2}\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\hat\rho})}^{1/2}$ and Young's inequality, we get \[\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\theta\hat\rho})}\le \frac{1}{2}\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\hat\rho})}+C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma)\frac{\lVert W_{k^{p}}\rVert_{L^{1}(B)}}{\mleft[(1-\theta)\hat\rho\mright]^{pd}}\] for all $\theta\in(0,\,1)$, ${\hat\rho}\in(0,\,\rho\rbrack$. By applying Lemma \ref{Lemma: Absorbing lemma} with $H(s)\coloneqq \lVert W_{k^{p}}\rVert_{L^{\infty}(B_{s\rho})}$, we are able to deduce \[\lVert W_{k^{p}}\rVert_{L^{\infty}(B_{\theta\rho})}\le C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma)\frac{\lVert W_{k^{p}}\rVert_{L^{1}(B)}}{\mleft[(1-\theta)\rho\mright]^{pd}}\quad \textrm{for all }\theta\in(0,\,1).\] By the definition of $W_{k^{p}}$, we get \[\mathop{\mathrm{ess~sup}}_{B_{\theta\rho}}\,V_{\varepsilon}\le k+\mathop{\mathrm{ess~sup}}_{B_{\theta\rho}}\,\mleft(W_{k^{p}}+k^{p}\mright)^{1/p}\le 2k+C(b,\,n,\,p,\,q,\,\gamma,\,\Gamma)\frac{\lVert W_{k^{p}}\rVert_{L^{1}(B)}^{1/p}}{[(1-\theta)\rho]^{d}}.\] By $k=1+\lVert f_{\varepsilon}\rVert_{L^{q}(B)}^{1/(p-1)}$ and $\lVert W_{k^{p}}\rVert_{L^{1}(B)}\le \lVert V_{\varepsilon}\rVert_{L^{p}(B)}^{p}$, we complete the proof. \end{proof} \begin{Remark}[Higher regularity of regularized solutions]\label{Eq Higher W-2-2 and W-1-infty regularity}\upshape In this paper, we have often used $u_{\varepsilon}\in W_{\mathrm{loc}}^{1,\,\infty}(\Omega;\,{\mathbb R}^{N})\cap W_{\mathrm{loc}}^{2,\,2}(\Omega;\,{\mathbb R}^{N})$. Following \cite{MR709038}, \cite{MR1634641} and \cite[Chapter 8]{MR1962933}, we briefly describe how to improve this regularity (see also \cite[Chapters 4--5]{MR0244627}). There, it is noted that it is not restrictive to assume $f_{\varepsilon}\in L^{\infty}(\Omega;\,{\mathbb R}^{N})$, or even $f_{\varepsilon}\in C^{\infty}(\Omega;\,{\mathbb R}^{N})$, since our approximation arguments work as long as (\ref{Eq (Section 2) Weak convergence of f}) holds.
First, appealing to a standard difference quotient method as in \cite[Theorem 2]{MR1634641} or \cite[\S 8.2]{MR1962933}, we are able to get \begin{equation}\label{Eq (Appendix) improved regularity from difference quotient} \int_{\omega} V_{\varepsilon}^{p-2}\mleft\lvert D^{2}u_{\varepsilon}\mright\rvert^{2}\,{\mathrm{d}}x\le C\mleft(\varepsilon,\,\omega,\,\lVert f_{\varepsilon}\rVert_{L^{\infty}(\Omega)}\mright)<\infty\quad \textrm{for every }\omega\Subset \Omega, \end{equation} which is possible by (\ref{Eq (Section 3) Uniform ellipticity on approximated density}). Secondly, we appeal to the Uhlenbeck structure to prove $Du_{\varepsilon}\in L_{\mathrm{loc}}^{\infty}(\Omega;\,{\mathbb R}^{Nn})$. We test \[\phi\coloneqq \eta^{2}V_{\varepsilon}^{l}D_{\alpha}u_{\varepsilon}\quad \textrm{with}\quad\eta\in C_{c}^{1}(B_{\rho}(x_{0})),\quad 0\le l<\infty\] into the weak formulation (\ref{Eq (Section 3) Weak formulation differentiated}) for each $\alpha\in\{\,1,\,\dots\,,\,n\,\}$. We should note that when $l=0$ this test function is admissible by (\ref{Eq (Appendix) improved regularity from difference quotient}), and all of the computations in Lemma \ref{Lemma: Weak formulation for V-epsilon} make sense. By (\ref{Eq (Section 3) Resulting weak formulation}) and standard computations given in \cite[\S 3]{MR709038}, \cite[\S 8.3]{MR1962933}, we can improve the local integrability of $V_{\varepsilon}$, and in particular we may test the same $\phi$ with larger $l>0$. By Moser's iteration arguments, we are able to conclude $V_{\varepsilon}\in L_{\mathrm{loc}}^{\infty}(\Omega)$, and hence $Du_{\varepsilon}\in L_{\mathrm{loc}}^{\infty}(\Omega;\,{\mathbb R}^{Nn})$ follows. Finally, $u_{\varepsilon}\in W_{\mathrm{loc}}^{2,\,2}(\Omega;\,{\mathbb R}^{N})$ is clear by (\ref{Eq (Appendix) improved regularity from difference quotient}) and $Du_{\varepsilon}\in L_{\mathrm{loc}}^{\infty}(\Omega;\,{\mathbb R}^{Nn})$. The computations above depend substantially on the approximation parameter $\varepsilon\in(0,\,1)$. When it comes to local uniform $L^{\infty}$-bounds of $Du_{\varepsilon}$, the strategy based on Moser's iteration may not work, since the test function $\phi$ may intersect with the facet of $u_{\varepsilon}$. In contrast, for the scalar problem, local uniform $L^{\infty}$-bounds of the gradients $\nabla u_{\varepsilon}=(\partial_{x_{1}}u_{\varepsilon},\,\dots\,,\,\partial_{x_{n}}u_{\varepsilon})$ of weak solutions to (\ref{Eq (Section 2) Approximated system}) are successfully deduced by Moser's iteration \cite[\S 4.2]{MR4201656}. The strategy therein is to choose test functions far from facets, by truncating the term $\partial_{x_{\alpha}}u_{\varepsilon}$ on the set $\{\lvert \partial_{x_{\alpha}}u_{\varepsilon}\rvert\le k\}$ for some $k\ge 1$. This modification forces us to adopt another modulus that is different from $V_{\varepsilon}=\sqrt{\varepsilon^{2}+\lvert Du_{\varepsilon}\rvert^{2}}$ and that may lack spherical symmetry. For this reason, it appears that the proof of a priori Lipschitz bounds based on Moser's iteration works only in the scalar problem. In the proof of Proposition \ref{Prop: Lipschitz bounds}, we have appealed to De Giorgi's truncation, since we do not have to choose another non-symmetric modulus. \end{Remark} \end{document}
\begin{document} \title[Inverse mean curvature flow inside a cone in warped products]{Inverse mean curvature flow inside a cone in warped products} \author{Li Chen, Jing Mao$^{\ast}$, Ni Xiang and Chi Xu} \address{Faculty of Mathematics and Statistics, Key Laboratory of Applied Mathematics of Hubei Province, Hubei University, Wuhan 430062, China.} \email{[email protected], [email protected], [email protected]} \thanks{$\ast$ Corresponding author} \date{} \begin{abstract} Given a convex cone in the \emph{prescribed} warped product, we consider hypersurfaces with boundary which are star-shaped with respect to the center of the cone and which meet the cone perpendicularly. If those hypersurfaces inside the cone evolve along the inverse mean curvature flow, then, by using the convexity of the cone, we can prove that this evolution exists for all the time and the evolving hypersurfaces converge smoothly to a piece of a round sphere as time tends to infinity. \end{abstract} \maketitle {\it \small{{\bf Keywords}: Inverse mean curvature flow, cone, warped products.} {{\bf MSC}: Primary 53C44, Secondary 53C42, 35B45, 35K93.} } \section{Introduction} Unlike the mean curvature flow, which is a shrinking flow, the inverse mean curvature flow (IMCF for short), under which a submanifold evolves along its outward normal direction with a speed equal to the reciprocal of the mean curvature, is in general an expanding flow. A classical result for IMCF is due to Gerhardt \cite{cg}, who proved that if a closed, smooth, star-shaped hypersurface with strictly positive mean curvature evolves along the IMCF, then the flow exists for all the time and, after rescaling, the evolving hypersurfaces converge to a round sphere as time tends to infinity (see also \cite{ju}). The star-shaped assumption is important in the study of IMCF, since it allows us to equivalently transform the evolution equation of IMCF, which, in local coordinates of the initial submanifold, corresponds to a system of second-order parabolic partial differential equations (PDEs for short) with some specified initial conditions, into a scalar second-order parabolic PDE which seems relatively easy to deal with. In general, singularities may occur in finite time for non-star-shaped submanifolds evolving under the IMCF. By defining a notion of weak solutions to IMCF, Huisken and Ilmanen \cite{hi1,hi2} proved the Riemannian Penrose inequality by using the IMCF approach. One of the reasons why people pay attention to the study of IMCF is that one can use it to derive interesting geometric inequalities, as Huisken and Ilmanen did. In fact, there already exist some results of this kind. For instance, we know that for a closed convex surface $\mathcal {M}$ in the $3$-dimensional Euclidean space $\mathbb{R}^3$, the classical Minkowski inequality is \begin{eqnarray*} \int\limits_{\mathcal{M}}Hd\mu\geqslant\sqrt{16\pi|\mathcal{M}|}, \end{eqnarray*} where $H$ is the mean curvature, $|\mathcal{M}|$ is the area of $\mathcal{M}$, and $d\mu$ is the volume density of $\mathcal{M}$. This result can be generalized to the high dimensional case.
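Before turning to the higher-dimensional version, we note, as an elementary consistency check (under the convention that $H$ denotes the sum of the principal curvatures), that equality holds in the above inequality when $\mathcal{M}$ is a round sphere of radius $R$ in $\mathbb{R}^{3}$, since \begin{eqnarray*} \int\limits_{\mathcal{M}}Hd\mu=\frac{2}{R}\cdot 4\pi R^{2}=8\pi R=\sqrt{16\pi\cdot 4\pi R^{2}}=\sqrt{16\pi|\mathcal{M}|}. \end{eqnarray*}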
In fact, for a convex hypersurface $\mathcal {M}$ in $\mathbb{R}^n$, one has \begin{eqnarray*} \int\limits_{\mathcal{M}}Hd\mu\geqslant(n-1)|\mathbb{S}^{n-1}|^{\frac{1}{n-1}}|\mathcal{M}|^{\frac{n-2}{n-1}}, \end{eqnarray*} where $|\mathbb{S}^{n-1}|$ denotes the area of the unit sphere $\mathbb{S}^{n-1}$ in $\mathbb{R}^n$. By using the method of IMCF, the above Minkowski inequality has been proven to be valid also for mean convex and star-shaped hypersurfaces in $\mathbb{R}^n$ (cf. \cite{gl,gmtz}). Also using the method of IMCF, Brendle, Hung and Wang \cite{bhw} proved a sharp Minkowski inequality for mean convex and star-shaped hypersurfaces in the $n$-dimensional ($n\geqslant3$) anti-de Sitter-Schwarzschild manifold, which generalized the related conclusions in the Euclidean space mentioned above. The first and second authors have been working on IMCF for several years and have also obtained some interesting results. For instance, Chen and Mao \cite{cm} considered the evolution of a smooth, star-shaped and $F$-admissible ($F$ is a 1-homogeneous function of principal curvatures satisfying some suitable conditions) embedded closed hypersurface in the $n$-dimensional ($n\geqslant3$) anti-de Sitter-Schwarzschild manifold along its outward normal direction with a speed equal to $1/F$ (clearly, this evolution process is a natural generalization of IMCF, and we call it the \emph{inverse curvature flow}, ICF for short), and they proved that this ICF exists for all the time and, after rescaling, the evolving hypersurfaces converge to a sphere as time tends to infinity. This interesting conclusion has been improved by Chen, Mao and Zhou \cite{cmz} to the situation where the ambient space is a warped product $I\times_{\lambda(r)}N^{n}$ with $I$ an unbounded interval of $\mathbb{R}$ (i.e., the set of real numbers) and $N^{n}$ a Riemannian manifold of nonnegative Ricci curvature. Suppose $N$ and $B$ are semi-Riemannian manifolds with metrics $g_{N}$ and $g_{B}$, and let $f>0$ be a smooth function on $N$. The \emph{warped product} $N\times_{f}B$ is the product manifold furnished with the metric tensor $g=\pi^{\ast}(g_{N})+(f\circ\pi)^{2}\sigma^{\ast}(g_{B})$, where $\pi$ and $\sigma$ are the projections of $N\times{B}$ onto $N$ and $B$, respectively. $N$ and $B$ are called the \emph{base} and the \emph{fiber} of $N\times_{f}B$, respectively. Clearly, $I\times_{\lambda(r)}N^{n}$ mentioned above is a special warped product with base $I\subset\mathbb{R}$ and fiber $N^{n}$. Comparing with general Riemannian manifolds, warped products have some interesting and useful properties (see, for instance, \cite[Appendix A]{m2} or \cite[Appendix A]{mdw}). We call the special warped products $[0,\ell)\times_{f(x)}\mathbb{S}^{n-1}$, with $f(0)=0$, $f'(0)=1$, $f|_{(0,\ell)}>0$, $0<\ell\leqslant\infty$, $n$-dimensional ($n\geqslant2$) \emph{spherically symmetric manifolds} with the base point $p:=\{0\}\times_{f(0)}\mathbb{S}^{n-1}$ (also known as \emph{generalized space forms}).
Especially, if $\ell=\infty$ and $f(x)=x$, then $[0,\ell)\times_{f(x)}\mathbb{S}^{n-1}\equiv\mathbb{R}^n$; if $\ell=\infty$ and $f(x)=\frac{\sinh(\sqrt{k}x)}{\sqrt{k}}$, then $[0,\ell)\times_{f(x)}\mathbb{S}^{n-1}\equiv\mathbb{H}^n(-k)$, i.e., the $n$-dimensional hyperbolic space with constant sectional curvature $-k<0$; if $\ell=\frac{\pi}{\sqrt{k}}$ and $f(x)=\frac{\sin(\sqrt{k}x)}{\sqrt{k}}$, then, after endowing a one-point compactification topology, the closure of $[0,\ell)\times_{f(x)}\mathbb{S}^{n-1}$ equals $\mathbb{S}^{n}(k)$, i.e., the $n$-dimensional sphere with constant sectional curvature $k>0$. Spherically symmetric manifolds are very nice model spaces which can be used to successfully improve some classical results in Riemannian geometry (for example, Cheng's eigenvalue comparison theorem, Bishop's volume comparison theorem, and so on). For more details on this topic, we refer readers to, for instance, \cite{fmi,m2,m1,m3,mdw}. Marquardt \cite{Ma2} successfully proved that if an $n$-dimensional ($n\geqslant2$) compact $C^{2,\alpha}$-hypersurface with boundary, which meets a given cone in $\mathbb{R}^{n+1}$ perpendicularly and is star-shaped with respect to the center of the cone, evolves along the IMCF, then the flow exists for all the time and, after rescaling, the evolving hypersurfaces converge to a piece of the round sphere as time tends to infinity. Based on our experience in \cite{cm,cmz}, we would like to know: ``\emph{if we replace the ambient space $\mathbb{R}^{n+1}$ in \cite{Ma2} by a warped product $I\times_{\lambda(r)}N^{n}$ with $I\subseteq\mathbb{R}$ an unbounded interval, does the IMCF exist for all the time or not? What about the convergence if we have the long-time existence?}'' The purpose of this paper is to try to answer this question. Assume that, as before, $I$ is an unbounded interval of $\mathbb{R}$ and $N^{n}$ is a Riemannian manifold with metric $g_{N}$. Naturally, $I\times_{\lambda(r)}N^{n}$ is a warped product with the warping function $\lambda(r)$ defined on $I$ and the metric given as follows: \begin{eqnarray*} g=dr\otimes dr +\lambda^{2}(r)g_{N}. \end{eqnarray*} Let $M^{n}\subset N^{n}$ be a portion of $N^{n}$ such that $\Sigma^{n}:=\{(y(x),x)\in I\times_{\lambda(r)}N^{n}\,|\,y(x)>0,\,x\in\partial M^{n}\}$ is the boundary of a smooth convex cone. We can prove the following conclusion. \begin{theorem}\label{main1.1} Let $I\times_{\lambda(r)}N^{n}$ be an $(n+1)$-dimensional ($n \geqslant 2$) warped product with the warping function $\lambda(r)$ satisfying $\lambda(r)>0$, $0<\lambda'(r)\leqslant C$ and $0\leqslant\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$ for some positive constants $\alpha$, $C$ on $I^{\circ}$ (if $I$ has an endpoint $a$, then $I^{\circ}=I\backslash\{a\}$; if $I$ does not have an endpoint, then $I^{\circ}=I$), where $I$ denotes an unbounded interval of $\mathbb{R}$ and $N^n$ is an $n$-dimensional Riemannian manifold with nonnegative Ricci curvature. Let $\Sigma^n \subset I\times_{\lambda(r)}N^{n}$ be the boundary of a smooth, convex cone that is centered at some interior point of $M^n$ and has the outward unit normal vector $\mu$.
Let $F_0:M^n \rightarrow I\times_{\lambda(r)}N^{n}$ be such that $M^n_0:= F_0(M^n)$ is a compact $C^{2,\alpha}$-hypersurface which is star-shaped with respect to the center of the cone and has a strictly positive principal curvature. Assume furthermore that $M^n_0$ meets $\Sigma^n$ orthogonally, that is, \begin{eqnarray*} F_0(\partial {M^n}) \subset {\Sigma}^n,\quad \qquad{\langle\mu \circ F_0,\vec{\nu}_{0} \circ F_0\rangle|}_{\partial {M^n}}=0, \end{eqnarray*} where $\vec{\nu}_{0}$ is the outward unit normal to $M^n_0$. Then there exists a unique embedding \begin{eqnarray*} F \in C^{2+\alpha,\frac{1+\alpha}{2}}\left(M^n\times [0,\infty),I\times_{\lambda(r)}N^{n}\right) \cap C^{\infty}\left(M^n\times (0,\infty),I\times_{\lambda(r)}N^{n}\right) \end{eqnarray*} with $F(\partial {M^n},t) \subset \Sigma^n$ for $t\geqslant0$, satisfying the following system \begin{eqnarray*} (\sharp) \left\{ \begin{array}{lll} \frac{\partial F}{\partial t}=\frac{\nu}{H}\circ F \qquad &in~ M^{n}\times(0,\infty)\\ \langle\mu\circ F,\vec{\nu}\circ F\rangle=0 \qquad &on~\partial M^{n}\times(0,\infty)\\ F(\cdot,0)=F_{0} \qquad &on~M^{n} \end{array} \right. \end{eqnarray*} where $\vec{\nu}$ is the unit normal vector to $M^n_t:=F(M^n,t)$ pointing away from the center of the cone and $H$ is the scalar mean curvature of $M^n_t$. Moreover, after an area-preserving rescaling, the rescaled solution $\widetilde{F}(\cdot,t)$ converges smoothly to an embedding $F_{\infty}$, mapping $M^n$ into a piece of a geodesic sphere. \end{theorem} \begin{remark} \rm{ Clearly, if $\lambda(r)=r$, $N^n=\mathbb{S}^n$ (i.e., the $n$-dimensional Euclidean unit sphere) and $I=[0,\infty)$, then $I^{\circ}=(0,\infty)$ and $I\times_{\lambda(r)}N^{n}\equiv\mathbb{R}^{n+1}$, and Theorem \ref{main1.1} here degenerates into \cite[Theorem 1]{Ma1}. That is, our Theorem \ref{main1.1} covers \cite[Theorem 1]{Ma1} as a special case. } \end{remark} This paper is organized as follows. The geometry of star-shaped hypersurfaces in the warped product $I\times_{\lambda(r)}N^{n}$ will be discussed in Section 2, and we will use the fact that star-shaped hypersurfaces $M^{n}_{t}$ can be written as graphs over $M^{n}\subset N^{n}$ to transform the first evolution equation of the system $(\sharp)$ into a scalar second-order parabolic PDE, which leads to the short-time existence of the IMCF. $C^0$ and gradient estimates will be derived in Sections 3 and 4, respectively. Higher regularity and convergence of the solution of $(\sharp)$ will be shown in the last section. \section{Preliminary Facts} In this section, we collect some basic facts so that our conclusions can be explained clearly and understood well. We want to describe the hypersurface $M^{n}_{t}$ at time $t$ as a graph over $M^{n}\subset N^n\subset I\times_{\lambda(r)}N^{n}$, and so we make the ansatz \begin{eqnarray*} \widetilde{F}:M^n \times [0,T) \rightarrow I\times_{\lambda(r)}N^{n}:(x,t) \mapsto \left(u(x,t),x\right) \end{eqnarray*} for some function $u: M^n \times [0,T) \rightarrow I\subseteq\mathbb{R}$.
Since the initial $C^{2,\alpha}$-hypersurface is star-shaped, there exists a scalar function $u_0\in C^{2,\alpha} (M^n_{0})$ such that $F_0: M^n \rightarrow I\times_{\lambda(r)}N^{n}$ has the form $x \mapsto (u_0(x),x)$. Set $\widetilde{M}^{n}_{t}:=\widetilde{F}(M^n,t)$. Define $p:=\widetilde{F}(x,t)$ and assume that a point on $M^n$ is described by local coordinates $\xi^{1},\ldots,\xi^{n}$, that is, $x=x(\xi^{1},\ldots,\xi^{n})$. Let $\partial_i$ be the corresponding coordinate vector fields on $M^{n}\subset N^n$ and $\sigma_{ij}=g_{N}(\partial_i,\partial_j)$ be the metric on $M^{n}\subset N^n$. Let $u_{i}=D_{i}u$, $u_{ij}=D_{j}D_{i}u$, and $u_{ijk}=D_{k}D_{j}D_{i}u$ denote the covariant derivatives of $u$ with respect to the metric $g_{N}$, and let $\nabla$ be the Levi-Civita connection of $\widetilde{M}^{n}_{t}$ with respect to the metric $\widetilde{g}$ induced from the metric $g$ of the warped product $I\times_{\lambda(r)}N^{n}$. The tangent vectors on $\widetilde{M}^{n}_{t}$ are \begin{eqnarray*} \vec{e}_{i}=\partial_{i}+D_{i}u\partial_{r}, \end{eqnarray*} and the corresponding outward unit normal vector is given by \begin{eqnarray*} \vec{\nu}=\frac{1}{v}\left(\partial_r-\frac{1}{\lambda^2}\nabla^j u\partial_j\right), \end{eqnarray*} where $\nabla^{j}u=\sigma^{ij}\nabla_{i}u$, and $v:=\sqrt{1+\lambda^{-2}|\nabla u|^2}$ with $\nabla u$ the gradient of $u$. Clearly, we know that the induced metric $\widetilde{g}$ on $\widetilde{M}^n_t$ has the form \begin{equation}\label{2.1} g_{ij}=\lambda^2\sigma_{ij}+\nabla _{i}u\cdot\nabla _{j}u \end{equation} and its inverse is given by \begin{equation}\label{2.2} g^{ij}=\frac{1}{\lambda^2}\left(\sigma^{ij}-\frac{\nabla^i u\nabla^j u}{1+|\nabla u|^2}\right)=\frac{1}{\lambda^2}\left(\sigma^{ij}-\frac{\nabla^i u \nabla^j u}{v^{2}}\right). \end{equation} Let $h_{ij}dx_i \otimes dx_j$ be the second fundamental form of $\widetilde{M}^n_t$; then we have \begin{eqnarray*} h_{ij}=\langle\nabla_{\vec{e}_i} \vec{e}_j,\vec{\nu}\rangle=-\frac{1}{v}\left(u_{i,j}-\lambda \lambda^{\prime}\sigma_{ij}-2\frac{\lambda^{\prime}}{\lambda}u_iu_j\right), \end{eqnarray*} where $u_{i,j}$ is the covariant derivative of $u$. Define a new function $\varphi(x,t)=\int_{c}^{u(x,t)}\frac{1}{\lambda(s)}ds$, where $x=x(\xi^{1},\ldots,\xi^{n})$; then the second fundamental form can be rewritten as \begin{eqnarray*} h_{ij}=\frac{\lambda}{v}\left(\lambda^{\prime}(\sigma_{ij}+\varphi_i \varphi_j)-\varphi_{i,j}\right) \end{eqnarray*} and \begin{eqnarray*} h^i_j=g^{ik}h_{jk}=\frac{\lambda^{\prime}}{\lambda v}\delta^i_j-\frac{1}{\lambda v}\widetilde{\sigma}^{ik}\varphi_{k,j} \qquad\qquad \mathrm{with}~~ \widetilde{\sigma}^{ij}=\sigma^{ij}-\frac{\varphi_i \varphi_j}{v^2}.
\end{eqnarray*} Naturally, the scalar mean curvature is given by \begin{eqnarray*} H=\sum_{i=1}^{n}h^i_i=\frac{n\lambda^{\prime}}{\lambda v}-\frac{1}{\lambda v}\sum_{i=1}^{n}\left(\sum_{k=1}^{n}\widetilde{\sigma}^{ik}\varphi_{k,i}\right). \end{eqnarray*} Based on the above facts and \cite{Ma2}, we can get the following existence and uniqueness for the IMCF $(\sharp)$. \begin{lemma} \label{lemma2.1} Let $F_0$, $I$, $\lambda$ be as in Theorem \ref{main1.1}. Then there exist some $T>0$, a unique solution $u \in C^{2+\alpha,\frac{1+\alpha}{2}}(M^n\times [0,\infty),I) \cap C^{\infty}(M^n \times (0,\infty), I)$, where $\varphi(x,t)=\int_{c}^{u(x,t)}\frac{1}{\lambda(s)}ds$, of the following system \begin{eqnarray*} (\widetilde{\sharp})\left\{ \begin{array}{lll} \frac{\partial \varphi}{\partial t}=\frac{v}{\lambda H}=\frac{v^2}{n\lambda^{\prime}-\widetilde{\sigma}^{ij}\varphi_{i,j}} \qquad \qquad & in ~M^{n}\times(0,T)\\ \nabla_\mu \varphi =0 \qquad\qquad & on ~\partial M^{n}\times(0,T)\\ \varphi(\cdot,0)=\varphi_{0}:=\int_{c}^{u(x,0)}\frac{1}{\lambda(s)}ds \qquad\qquad & on ~ M^{n}, \end{array} \right. \end{eqnarray*} and a unique map $\psi: M^{n}\times[0,T]\rightarrow M^{n}$, which has to be bijective for fixed $t$ and has to satisfy $\psi(\partial M^{n},t)=\partial M^{n}$, such that the map $F$ defined by \begin{eqnarray*} F:M^{n}\times[0,T)\rightarrow I\times_{\lambda(r)}N^{n}: (x,t)\mapsto \widetilde{F}(\psi(x,t),t) \end{eqnarray*} has the same regularity as stated in Theorem \ref{main1.1} and is the unique solution to $(\sharp)$. \end{lemma} \begin{remark} \rm{ As pointed out in \cite{Ma2}, for immersed hypersurfaces in a Riemannian manifold and for arbitrary smooth supporting hypersurfaces $\Sigma^n$, one can also get the short time existence of $(\sharp)$. Naturally, for immersed hypersurfaces in a warped product $I\times_{\lambda(r)}N^{n}$, we definitely have the short time existence result.} \end{remark} Let $T^{\ast}$ be the maximal time such that there exists some $u\in C^{2,1}(M^{n},[0,T^{\ast}))\cap C^{\infty}(M^{n},(0,T^{\ast}))$ which solves $(\widetilde{\sharp})$. In the sequel, we will prove a priori estimates for those admissible solutions on $[0,T]$ where $T<T^{\ast}$. \section{$C^0$ estimate} In this section, we will use the evolution equation of $\varphi$ to get some estimates for $\lambda$ and $\dot \varphi$. \begin{lemma}\label{lemma3.1} If $\varphi$ satisfies $(\widetilde{\sharp})$, and the warping function $\lambda(r)$ satisfies $\lambda(r)>0$, $\lambda'(r)>0$, $\lambda''(r)\geqslant0$ on $I^{\circ}$, then we have \begin{equation*} \lambda(\inf u(\cdot,0))\leqslant\lambda(u(x,t))e^{-\frac{t}{n}}\leqslant\lambda(\sup u(\cdot,0)), \qquad\quad \forall~ x=(\xi^{1},\ldots,\xi^{n})\in M^{n}\subset N^{n}, t\in[0,T].
\end{equation*} \end{lemma} \begin{proof} If $\varphi$ could reach its maximum at the boundary, by Hopf's Lemma, it follows that the derivative of $\varphi$ along the outward unit normal vector must be strictly greater than $0$, which contradicts the boundary condition $\nabla_\mu \varphi=0$. Therefore, $\varphi$ must attain its maximum at interior points. The same situation happens when $\varphi$ reaches its minimum. By the chain rule, it is easy to get $\dot{\lambda}=\lambda'\dot{u}=\lambda'\lambda\dot{\varphi}$. Hence, the evolution equation of $\lambda$ is the following \begin{eqnarray} \label{3.1} \frac{1}{\lambda'\lambda}\dot{\lambda}=\frac{v^2}{n\lambda'-\widetilde{\sigma}^{ij}\varphi_{i,j}}. \end{eqnarray} By the facts \begin{eqnarray*} \lambda_i=\lambda'u_{i}=\lambda'\lambda\varphi_i \end{eqnarray*} and \begin{eqnarray*} \lambda_{i,j}=\lambda''\lambda^{2}\varphi_{i}\varphi_{j}+(\lambda')^2\lambda\varphi_{i}\varphi_{j}+\lambda'\lambda\varphi_{i,j}, \end{eqnarray*} we know that when $\varphi$ attains its maximum or minimum, the same situation happens to $\lambda$. When $\lambda$ attains its maximum, the Hessian of $\varphi$ is negative semi-definite, which means $\widetilde{\sigma}^{ij}\varphi_{i,j}\leqslant0$. Therefore, when $\lambda$ attains its maximum, we have \begin{eqnarray*} \frac{1}{\lambda'(\sup u(\cdot,0))\lambda}\dot{\lambda}\leqslant\frac{1}{n\lambda'(\sup u(\cdot,0))}, \end{eqnarray*} which implies \begin{eqnarray*} \frac{1}{\lambda}\dot{\lambda}\leqslant\frac{1}{n}. \end{eqnarray*} Integrating both sides of the above inequality, we can get \begin{equation}\label{3.2} \lambda\leqslant\lambda(\sup u(\cdot,0))e^{\frac{t}{n}}. \end{equation} When $\lambda$ attains its minimum, the Hessian of $\varphi$ is positive semi-definite, which means $\widetilde{\sigma}^{ij}\varphi_{i,j}\geqslant0$. By a similar argument, we can obtain \begin{equation}\label{3.3} \lambda(\inf u(\cdot,0))e^{\frac{t}{n}}\leqslant\lambda. \end{equation} Combining (\ref{3.2}) and (\ref{3.3}) yields the conclusion of Lemma \ref{lemma3.1} directly. \end{proof} \begin{remark} \rm{ Clearly, Lemma \ref{lemma3.1} tells us that in the evolving process of the IMCF $(\sharp)$, the rescaled warping function $\lambda(u(x,t))e^{-\frac{t}{n}}$ can be controlled from both below and above, which implies that one might expect some \emph{good} convergence for the evolving hypersurfaces after rescaling. } \end{remark} \begin{lemma}\label{lemma3.2} If $\varphi$ satisfies $(\widetilde{\sharp})$ and $\lambda$ satisfies $\lambda(r)>0$, $\lambda'(r)>0$, $0\leqslant\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$ for some positive constants $\alpha$, $C$ on $I^{\circ}$, then there exist two positive constants $C_1$ and $C_2$ such that \begin{eqnarray*} C_1\leqslant\dot{\varphi}\leqslant C_{2}.
\end{eqnarray*} \end{lemma} \begin{proof} Set $Q(\nabla\varphi,\nabla^2\varphi,\lambda'):=\frac{v^2}{n\lambda'-\widetilde{\sigma}^{ij}\varphi_{i,j}}$. Differentiating both sides of the first evolution equation of $(\widetilde{\sharp})$, it is easy to get that $\dot{\varphi}$ satisfies \begin{eqnarray} \label{3.4} \left\{ \begin{array}{lll} \frac{\partial \dot{\varphi}}{\partial t}=Q^{ij}\nabla_{ij}\dot{\varphi}+Q^{k}\nabla_k\dot{\varphi}-\frac{nv^2\lambda''\lambda\dot{\varphi}}{(n\lambda' - \widetilde{\sigma}^{ij}\varphi_{i,j})^2} \qquad \qquad & in ~M^{n}\times(0,T)\\ \nabla_ \mu \dot{\varphi}=0 \qquad\qquad & on ~\partial M^{n}\times(0,T)\\ \dot{\varphi}(\cdot,0)=\dot{\varphi}_0\qquad\qquad & on ~ M^{n}. \end{array} \right. \end{eqnarray} Similar to the argument in the proof of Lemma \ref{lemma3.1}, it follows that $\dot{\varphi}$ must reach its maximum and its minimum at interior points, by applying Hopf's Lemma to (\ref{3.4}). Therefore, at the point where $\dot{\varphi}$ attains its maximum, we have \begin{eqnarray} \label{3.5} \dot{\varphi}_i=0, \qquad \dot{\varphi}_{i,j}\leqslant0. \end{eqnarray} Conversely, at the point where $\dot{\varphi}$ attains its minimum, we have \begin{eqnarray} \label{3.6} \dot{\varphi}_i=0, \qquad \dot{\varphi}_{i,j}\geqslant0. \end{eqnarray} Set \begin{eqnarray*} \dot{\varphi}_{max}:=\sup\limits_{M^n}\dot{\varphi}(\cdot,t), \qquad \dot{\varphi}_{min}:=\inf\limits_{M^n}\dot{\varphi}(\cdot,t). \end{eqnarray*} Combining (\ref{3.4}), (\ref{3.5}) and the assumptions on the warping function $\lambda$, we can obtain the evolution equation of $\dot{\varphi}_{max}$ as follows \begin{eqnarray*} \frac{\partial \dot{\varphi}_{max}}{\partial t}=Q^{ij}\nabla_{ij}\dot{\varphi}_{max}-\frac{n\lambda''\dot{\varphi}_{max}}{\lambda H^2} \leqslant -\frac{n\lambda''\lambda \dot{\varphi}^3}{v^2}\leqslant 0, \end{eqnarray*} which implies \begin{eqnarray} \label{3.7} \dot{\varphi}\leqslant \dot{\varphi}_{max}\leqslant \sup\limits_{M^n}\dot{\varphi}(\cdot,0):=C_{2}. \end{eqnarray} On the other hand, since $0<\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$, we have \begin{eqnarray*} 0<\lambda''(r)\lambda(r)\leqslant Ce^{-\frac{\alpha}{n}t} \end{eqnarray*} by applying Lemma \ref{lemma3.1}. \emph{In general, the constant $C$ in the above inequality should be different from the one in the assumption of Lemma \ref{lemma3.2}. However, for convenience, we use the same symbol}.
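For clarity, we spell out, as a short remark, how this decay follows from the assumption and Lemma \ref{lemma3.1} (which also explains why the constant changes): since $\lambda^{1+\alpha}\lambda''\leqslant C$ and, by Lemma \ref{lemma3.1}, $\lambda\geqslant\lambda(\inf u(\cdot,0))e^{\frac{t}{n}}$, we have \begin{eqnarray*} \lambda''\lambda=\lambda^{1+\alpha}\lambda''\cdot\lambda^{-\alpha}\leqslant C\lambda^{-\alpha}\leqslant C\left(\lambda\left(\inf_{M^n}u(\cdot,0)\right)\right)^{-\alpha}e^{-\frac{\alpha}{n}t}, \end{eqnarray*} and the constant on the right-hand side is the one (still denoted by $C$) appearing in the previous display.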
Then, combining (\ref{3.4}), (\ref{3.6}) and the assumptions on $\lambda$, we have \begin{eqnarray*} \frac{\partial \dot{\varphi}_{min}}{\partial t}=Q^{ij}\nabla_{ij}\dot{\varphi}_{min}-\frac{n\lambda''\lambda \dot{\varphi}^3_{min}}{v^2}\geqslant -\frac{n\lambda''\lambda \dot{\varphi}^3_{min}}{v^2}\geqslant -Ce^{-\frac{\alpha}{n}t}\dot{\varphi} \end{eqnarray*} by applying Lemma \ref{lemma3.1}. The maximum principle tells us that $\dot{\varphi}$ is bounded from below by the solution of the ordinary differential equation (we write ODE for short) \begin{equation} \label{3.8} \frac{\partial f}{\partial t}=-Ce^{-\frac{\alpha}{n}t}f, \end{equation} with $f(0):=\inf\limits_{M^{n}\subset N^n} \dot{\varphi}(\cdot,0)$. By a straightforward calculation, we get the solution of the ODE (\ref{3.8}) as follows \begin{eqnarray*} f=f(0)e^{\frac{Cn}{\alpha}(e^{-\frac{\alpha}{n}t}-1)}\geqslant C_{1}:=f(0)e^{-\frac{Cn}{\alpha}}>0. \end{eqnarray*} Therefore, we have \begin{eqnarray*} \dot{\varphi}\geqslant \dot{\varphi}_{min}(t) \geqslant f(t) \geqslant C_1. \end{eqnarray*} Together with (\ref{3.7}), the conclusion of Lemma \ref{lemma3.2} follows. \end{proof} \begin{remark} \rm{By the $C^0$-estimate, we know that the IMCF preserves the convexity during the evolving process, which implies $H>0$ for all $t\in[0,T]$. This fact has been shown in \cite[Theorem 3.5]{Zh}.} \end{remark} \section{Gradient Estimate} In this section, the gradient estimate will be shown. \begin{lemma}\label{lemma4.1} If $\varphi$ satisfies $(\widetilde{\sharp})$, $\lambda$ satisfies $\lambda(r)>0$, $\lambda'(r)>0$, $0\leqslant\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$ for some positive constants $\alpha$, $C$ on $I^{\circ}$, and the Ricci curvature of $N^n$ is nonnegative, then we have \begin{eqnarray*} |\nabla\varphi|\leqslant C_3, \end{eqnarray*} where $C_3$ is a nonnegative constant depending on $\sup\limits_{M^{n}}\varphi(\cdot,0)$, i.e., the supremum of $\varphi(x,t)$ at the initial time $t=0$. \end{lemma} \begin{proof} Set $\psi=\frac{|\nabla\varphi|^2}{2}$. Then, differentiating $\psi$ with respect to $t$ and using $(\widetilde{\sharp})$, we have \begin{eqnarray*} \frac{\partial \psi}{\partial t}=\frac{\partial}{\partial t}\nabla_m\varphi\nabla^m\varphi=\nabla_m \dot{\varphi}\nabla^m\varphi=\nabla_m Q\nabla^m\varphi, \end{eqnarray*} which is equivalent to \begin{eqnarray*} \frac{\partial \psi}{\partial t}=\left(Q^{ij}\nabla_{ijm}\varphi+Q^k\nabla_{mk}\varphi-\frac{nv^2\lambda^{\prime \prime}\lambda\nabla_m\varphi}{(n\lambda^{\prime}-\tilde{\sigma}^{ij}\varphi_{i,j})^2}\right)\nabla^m\varphi.
\end{eqnarray*} By straightforward calculation, we have \begin{eqnarray} \label{4.1} \frac{\partial \psi}{\partial t}=Q^{ij}\nabla_{ijm}\varphi\nabla^m\varphi+Q^k\nabla_{km}\nabla^m\varphi-\frac{2n\lambda^{\prime \prime}\psi}{\lambda H^2} \end{eqnarray} and \begin{eqnarray} \label{4.2} \nabla_{ij}\psi=\nabla_j(\nabla_{mj}\varphi\nabla^m\varphi)&=&\nabla_{mij}\varphi\nabla^m\varphi+\nabla_{mi}\varphi\nabla^m_j\varphi\nonumber\\ &=&(\nabla_{ijm}\varphi+R^l_{imj})\nabla^m\varphi+\nabla_{mi}\nabla^m_j\varphi. \end{eqnarray} On the other hand, applying the Ricci identity, we can obtain \begin{eqnarray} \label{4.3} \nabla_{ijm}\varphi\nabla^m\varphi=\nabla_{ij}\psi-R^l_{imj}\nabla_l\varphi\nabla^m\varphi-\nabla_{mi}\varphi\nabla^m_j\varphi. \end{eqnarray} Combining (\ref{4.1}), (\ref{4.2}) and (\ref{4.3}) yields \begin{eqnarray} \label{4.4} \frac{\partial \psi}{\partial t}=Q^{ij}\nabla_{ij}\psi+\left(Q^k-\frac{\nabla^k\psi}{\lambda^2v^2H^2}\right)\nabla_k \psi-\frac{1}{\lambda^2H^2}\sigma^{ij}R_{limj}\nabla^l\varphi\nabla^m\varphi+ \nonumber\\ \frac{R_{limj}\nabla^i\varphi\nabla^j\varphi\nabla^l\varphi\nabla^m\varphi}{\lambda^2H^2v^2}-\frac{|\nabla^2\varphi|^2}{\lambda^2H^2}-\frac{2n\lambda''\psi}{\lambda H^2}. \qquad\qquad \end{eqnarray} Since the curvature tensor is antisymmetric with respect to the indices $i$ and $j$, we have $R_{limj}\nabla^l\varphi\nabla^i\varphi\nabla^m\varphi\nabla^j\varphi=0$. Besides, the nonnegativity of the Ricci curvature yields \begin{eqnarray*} \sigma^{ij}R_{limj}\nabla^l\varphi\nabla^m\varphi=Ric(\nabla\varphi,\nabla\varphi)\geqslant0. \end{eqnarray*} By Lemmas \ref{lemma3.1}, \ref{lemma3.2} and the fact $\dot\varphi=\frac{v}{\lambda H}$, we have \begin{eqnarray*} \lambda H^2=\lambda\left(\frac{v}{\lambda\dot\varphi}\right)^2&=&\frac{\lambda^2+|\nabla r|^2}{\lambda\dot\varphi^2}\\ &\leqslant&\frac{\lambda^2+1}{\lambda\dot\varphi^2}\\ &\leqslant&\frac{\lambda^2+1}{\lambda C_{1}^2}, \end{eqnarray*} which implies \begin{eqnarray*} -\frac{2n\lambda''}{\lambda H^2}&\geqslant&- \frac{2nC_{1}^2\lambda\lambda''}{\lambda^2+1}\geqslant -2nC_{1}^2\lambda''\lambda^{-1}\geqslant-2nC_{1}^2C\lambda^{-(2+\alpha)}\\ &\geqslant&-2nC_{1}^2C \left(\lambda\left(\inf_{M^n} u(\cdot,0)\right)\right)^{-(2+\alpha)}e^{-\frac{(2+\alpha)t}{n}}\\ &=&-C_{4}e^{-\frac{(2+\alpha)t}{n}}, \end{eqnarray*} where $C_4:=2nC_{1}^2C \left(\lambda\left(\inf\limits_{M^n} u(\cdot,0)\right)\right)^{-(2+\alpha)}$.
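Among the three facts just established, the vanishing of the quartic curvature term can be seen in one line (assuming the standard antisymmetry of the curvature tensor in its first two indices): since the product $\nabla^{l}\varphi\nabla^{i}\varphi\nabla^{m}\varphi\nabla^{j}\varphi$ is symmetric in $l$ and $i$, \begin{eqnarray*} R_{limj}\nabla^{l}\varphi\nabla^{i}\varphi\nabla^{m}\varphi\nabla^{j}\varphi=-R_{ilmj}\nabla^{l}\varphi\nabla^{i}\varphi\nabla^{m}\varphi\nabla^{j}\varphi=-R_{limj}\nabla^{l}\varphi\nabla^{i}\varphi\nabla^{m}\varphi\nabla^{j}\varphi, \end{eqnarray*} so this contraction equals its own negative and hence vanishes.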
Putting the above three facts into (\ref{4.4}) results in \begin{eqnarray} \label{4.5} \frac{\partial \psi}{\partial t}\leqslant Q^{ij}\nabla_{ij}\psi+\left(Q^k-\frac{\nabla^k\psi}{\lambda^2v^2H^2}\right)\nabla_k \psi-C_4e^{-\frac{(2+\alpha)t}{n}}\psi. \end{eqnarray} Choose an orthonormal frame $\{e_{1},\cdots,e_{n}\}$ at $x\in\partial M^n$ such that $e_{1},\cdots,e_{n-1}\in T_{x}\partial M^n$ and $e_{n}=\mu$; then we can obtain \begin{eqnarray*} \nabla_ \mu \psi&=&\nabla_{e_n}\psi=\sum_{i=1}^{n}\nabla_{e_{i},e_{n}}\varphi\nabla_{e_{i}}\varphi=\sum_{i=1}^{n-1}(\nabla_{e_i}\nabla_{e_n}\varphi-(\nabla_{e_i}e_n)\varphi)\nabla_{e_i}\varphi\\ &=&-\sum_{i=1}^{n-1}\langle\nabla_{e_i}e_n,e_j\rangle\nabla_{e_j}\varphi\nabla_{e_i}\varphi\\ &=&-\sum_{i=1}^{n-1}h_{ij}^{\partial{M}^n}\nabla_{e_i}\varphi\nabla_{e_j}\varphi\leqslant0, \end{eqnarray*} where $h_{ij}^{\partial{M}^n}$ is the second fundamental form of $\partial M^n$. The last inequality holds because of the convexity of the cone $\Sigma^n$. Therefore, together with (\ref{4.5}), we know that $\psi$ satisfies \begin{eqnarray*} \left\{ \begin{array}{lll} \frac{\partial \psi}{\partial t}\leqslant Q^{ij}\nabla_{ij}\psi+\left(Q^k-\frac{\nabla^k\psi}{\lambda^2v^2H^2}\right)\nabla_k \psi-C_4e^{-\frac{(2+\alpha)t}{n}}\psi \qquad \qquad & in ~M^{n}\times(0,T)\\ \nabla_\mu \psi \leqslant 0 \qquad\qquad & on ~\partial M^{n}\times(0,T)\\ \psi(\cdot,0)=\frac{|\nabla\varphi(\cdot,0)|^2}{2}=\frac{|\nabla\varphi_{0}|^2}{2} \qquad\qquad & on ~ M^{n}. \end{array} \right. \end{eqnarray*} Then, using the maximum principle, we have $\psi\leqslant\sup\limits_{M^n}\frac{|\nabla\varphi_{0}|^2}{2}$, which, together with Lemma \ref{lemma3.1}, yields the desired gradient estimate. \end{proof} \section{Higher regularity and Convergence} In this section, the convergence and the higher regularity of the IMCF $(\sharp)$ will be discussed after the area-preserving rescaling. We consider the rescaling $ \widehat F=\eta (t)F(x,t)$, where $F$ is the parameterization of the graph $M^{n}_{t}$ and $\eta(t)$ is a smooth function of $t$ satisfying \begin{eqnarray} \label{5.1} \int_{\widehat{M}^{n}_{t}}d\widehat\mu =|M_0|, \end{eqnarray} where $\widehat{M}^{n}_{t}$ is the rescaled hypersurface, $d\widehat{\mu}$ is the volume element of $\widehat{M}^{n}_{t}$, and $|M_0|$ denotes the area of the initial hypersurface $M_0$.
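As a guiding example for the choice of rescaling (this is only a heuristic, carried out in the Euclidean model case), consider round spheres in $\mathbb{R}^{n+1}$ evolving by the IMCF: a sphere of radius $R(t)$ has $H=n/R(t)$, so \begin{eqnarray*} \frac{dR}{dt}=\frac{1}{H}=\frac{R}{n},\qquad\qquad R(t)=R(0)e^{\frac{t}{n}}, \end{eqnarray*} which suggests rescaling by the factor $e^{-\frac{t}{n}}$; this is consistent with the function $\eta(t)$ determined below.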
Recall that the induced metric and the second fundamental form of $M^{n}_{t}$ are given by \begin{eqnarray*} g_{ij}=\left\langle\frac{\partial F}{\partial x^{i}},\frac{\partial F}{\partial x^{j}}\right\rangle, \qquad\qquad h_{ij}=\left\langle\vec{\nu},\frac{\partial^2 F}{\partial x^i \partial x^j}\right\rangle, \end{eqnarray*} so the corresponding induced metric and the second fundamental form of $\widehat{M}^{n}_{t}$ should be \begin{eqnarray*} \widehat{g}_{ij}=\eta^{2}g_{ij}, \qquad \widehat{h}_{ij}=\eta^{2}h_{ij}, \qquad \widehat{g}^{ij}=\eta^{-2} g^{ij}. \end{eqnarray*} Differentiating both sides of (\ref{5.1}), we can get \begin{eqnarray*} \frac{d}{dt}\int _{\widehat{M}^{n}_{t}}d\widehat\mu=\int _{M^n_t}\frac{\widehat{g}^{ij}\frac{\partial}{\partial t}\widehat{g}_{ij}}{2}d\mu=\int _{M^n_t}\left(\eta\eta'g_{ij}+\frac{1}{H}h_{ij}\eta^2\right)g^{ij}\eta^{-2}d\mu=0, \end{eqnarray*} which implies that \begin{eqnarray*} \int _{M^n_t} \left(n\eta^{-1}\eta' +1 \right)d\mu =0. \end{eqnarray*} Therefore, we have \begin{eqnarray*} n\eta^{-1}\eta'+1=0, \end{eqnarray*} and then solving the above ODE, together with $\eta(0)=1$, yields $\eta(t)=e^{-\frac{1}{n}t}$. In order to get the long time existence for the IMCF $(\sharp)$, we consider the rescaling $\widehat{u}=ue^{-\frac{1}{n}t}$ for the graph function $u(x,t)$ of the evolving hypersurface $M^n_t$ in the base part of the warped product $I\times_{\lambda(r)}N^{n}$. Through this process, we can obtain the following result. \begin{lemma} \label{lemma5.1} Let $u$ be an admissible solution of $(\widetilde{\sharp})$ and let $\Sigma ^{n}$ be a smooth, convex cone. If $\lambda$ satisfies $\lambda(r)>0$, $0<\lambda'(r)\leqslant C$, $0\leqslant\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$ for some positive constants $\alpha$, $C$ on $I^{\circ}$, and the Ricci curvature of $N^n$ is nonnegative, then there exist some $\beta >0 $ and some $D>0$ such that \begin{eqnarray*} [\nabla \widehat{u}]_{\beta}+[\partial{\widehat u}/\partial t]_{\beta}+[\widehat H]_{\beta} \leqslant D \left(\parallel u_0\parallel_{C^{2+\alpha}(M^{n})}, n, \beta, M^{n}\right), \end{eqnarray*} where $[f]_{\beta}:=[f]_{x,\beta}+[f]_{t,\beta/2}$ is the sum of the H\"older coefficients of $f$ with respect to $x$ and $t$ in the domain $M^{n} \times [0,T]$. \end{lemma} \begin{proof} First, we derive a priori estimates for $|\nabla \varphi|$ and $|\partial \varphi/\partial t|$. This is because a priori estimates for $|\nabla \varphi|$ and $|\partial \varphi/\partial t|$ imply a bound for $[\widehat u]_{x,\beta}$ and $[\widehat u]_{t,\beta/2}$, which, together with \cite[Chapter 2, Lemma 3.1]{La1}, can give the bound for $[\nabla \widehat u]_{t,\beta/2}$ provided a bound for $[\nabla\widehat{u}]_{x,\beta}$ is obtained.
After rescaling, $\nabla\widehat u$ and $\partial \widehat u /\partial t$ can be written as \begin{eqnarray*} \nabla \widehat u=\nabla u \cdot e^{-t/n}, \qquad \dot {\widehat u}=\dot ue^{-\frac{1}{n}t}-\frac{1}{n}\widehat{u}, \end{eqnarray*} which implies $\nabla \widehat u=\lambda \nabla \varphi \cdot e^{-\frac{t}{n}}$. Then it is sufficient to bound $[\nabla \varphi]_{x,\beta}$ if one wants to bound $[\nabla\widehat{u}]_{x,\beta}$. In order to get this bound, we fix $x$ and rewrite the first evolution equation of $(\widetilde{\sharp})$ as follows \begin{eqnarray} \label{5.2} \mathrm{div}_{\sigma}\left(\frac{\nabla \varphi}{\sqrt{1+|\nabla \varphi|^2}}\right)=\frac{n\lambda'}{\sqrt{1+|\nabla \varphi|^2}}-\frac{\sqrt{1+|\nabla \varphi|^2}}{\dot \varphi}, \end{eqnarray} which is an elliptic PDE endowed with the Neumann boundary condition $\nabla_\mu \varphi =0$. Clearly, the RHS of (\ref{5.2}) is bounded since $\dot\varphi$, $\nabla\varphi$ are bounded (see Lemmas \ref{lemma3.2} and \ref{lemma4.1} respectively) and $0<\lambda'\leqslant C$. Besides, the RHS of (\ref{5.2}) is also a measurable function in $x$. Therefore, by similar calculations to those in \cite{La2}, Chapter 4, $\S6$ (interior estimate) and Chapter 10, $\S2$ (boundary estimate), we can get a Morrey estimate, which yields the estimate for $[\nabla \varphi]_{x,\beta}$. Note that $\dot {\widehat u}=\lambda \dot \varphi e^{-t/n}-\frac{1}{n}\widehat{u}$. So, we need to bound $\dot \varphi$ if we want to get a bound for $[\partial \widehat u /\partial t]_{\beta}$. By the first equation of (\ref{3.4}), it is not difficult to get the evolution equation for $\dot\varphi$ with respect to the induced rescaled metric as follows \begin{eqnarray*} \frac{\partial{\dot \varphi}}{\partial t}=\mathrm{div}_{\widehat g}\left(\frac{\nabla \dot \varphi}{{\widehat H}^2}\right)-2\dot \varphi^{-1}\frac{|\nabla \dot \varphi|^2_{\widehat g}}{{\widehat H}^2}-n\frac{\lambda\lambda''{\dot \varphi}^3}{v^2}. \end{eqnarray*} Note that the Neumann condition $\nabla _{\mu}\dot \varphi =0$ implies that the interior and boundary estimates are basically the same. Then, together with the strict positivity of $\dot \varphi$, we can define the test function $\chi=\xi^2\dot\varphi$, where the support of $\xi$ has to be chosen away from the boundary for the interior estimate.
Integration by parts and Young's inequality results in
\begin{eqnarray*}
&&\frac{1}{2}\parallel \dot \varphi \xi \parallel^{2}_{2,M_{t}^n}\Big|_{t_0}^{t_1}+\frac{1}{\max{\widehat H}^2}\int ^{t_1}_{t_0}\int _{M^n_t}{\xi}^2|\nabla \dot \varphi|^2d\mu_t dt\leqslant \\
&& \qquad \qquad \qquad \int ^{t_1}_{t_0}\int _{M^n_t}\left[\dot {\varphi}^2\xi\left|\frac{\partial \xi}{\partial t}\right|+\frac{{\xi}^2|\nabla \dot \varphi|^2}{2\max{\widehat H}^2}+\frac{2\max{\widehat H}^2\dot{\varphi}^2|\nabla \xi|^2}{\min{\widehat H}^4}\right],
\end{eqnarray*}
which implies
\begin{eqnarray*}
&&\frac{1}{2}\parallel \dot \varphi \xi \parallel^{2}_{2,M_{t}^n}\Big|_{t_0}^{t_1}+\frac{1}{2\max{\widehat H}^2}\int ^{t}_{t_0}\int _{M^n_t}{\xi}^2|\nabla \dot \varphi|^2d\mu_t dt\leqslant \\
&& \qquad \qquad \qquad \left(1+\frac{2\max{\widehat H}^2}{\min{\widehat H}^4}\right)\int ^{t_1}_{t_0}\int _{M^n_t}\dot {\varphi}^2\left[\xi\left|\frac{\partial \xi}{\partial t}\right|+|\nabla \xi|^2\right],
\end{eqnarray*}
where, as before, $\max(\cdot)$ and $\min(\cdot)$ denote the supremum and the infimum of a prescribed quantity over $M^{n}\subset N^n$ respectively. Then, similar to \cite{La1}, Chapter 5, $\S1$ (interior estimate) and $\S7$ (boundary estimate), the boundedness for $[\dot \varphi]_{\beta}$ can be obtained, and moreover, all local interior and boundary estimates are independent of $T$. The estimate of $\widehat H$ follows from the estimates for $\lambda$, $\nabla \varphi$, $\dot \varphi$ and the identity $\dot\varphi\lambda e^{-t/n}\widehat{H}=v=\sqrt{1+|\nabla \varphi|^2}$.
\end{proof}
Applying Lemma \ref{lemma5.1}, we can get the following higher-order estimates.
\begin{lemma} \label{lemma5.2}
Let $u$ be an admissible solution of $(\widetilde{\sharp})$ and let $\Sigma ^{n}$ be a smooth, convex cone. If $\lambda$ satisfies $\lambda(r)>0$, $0<\lambda'(r)\leqslant C$, $0\leqslant\lambda^{1+\alpha}(r)\lambda''(r)\leqslant C$ for some positive constants $\alpha$, $C$ on $I^{\circ}$, and the Ricci curvature of $N^n$ is nonnegative, then for every $t_0 \in (0,T)$, there exists some $\beta>0$ such that
\begin{eqnarray*}
\parallel \widehat u \parallel _{C^{2+\beta,1+\beta/2}(M^{n}\times[0,T])}\leqslant D\left(\parallel u_0 \parallel_{C^{2+\alpha}(M^n)},n,\beta,M^n\right)
\end{eqnarray*}
and
\begin{eqnarray*}
\parallel \widehat u \parallel _{C^{2k+\beta,k+\beta/2}(M^{n}\times[t_0,T])}\leqslant D\left(\parallel u(\cdot,t_0) \parallel_{C^{2+\alpha}(M^n)},n,\beta,M^n\right).
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since $\varphi(x,t)=\int_{c}^{u(x,t)}\frac{1}{\lambda(s)}ds$, we know that the bound of $\varphi$ leads to the bound of $u$. So, we try to estimate $\varphi$.
Rewrite $(\widetilde\sharp)$ as follows
\begin{eqnarray} \label{5.3}
\frac{\partial \varphi}{\partial t}=\frac{1}{\widehat H^2}\Delta _{\widehat g}\varphi +\left(\frac{2\sqrt{1+|\nabla \varphi|^2}}{\widehat \lambda \widehat H}-\frac{n \lambda'}{{\widehat \lambda}^2 {\widehat H}^2}\right).
\end{eqnarray}
By Lemma \ref{lemma5.1}, we know that (\ref{5.3}) is a uniformly parabolic PDE with H\"{o}lder continuous coefficients. Therefore, by \cite[Chapter IV, Theorem 5.3]{La1}, which shows $\varphi$ is $C^{2+\beta,1+\beta/2}$, and the linear theory in \cite[Chapter 4]{Li}, which implies the second-order bound, we have
\begin{eqnarray} \label{5.4}
\parallel \widehat u \parallel _{C^{2+\beta,1+\beta/2}}\leqslant D\left(\parallel u_0 \parallel_{C^{2+\alpha}(M^n)},n,\beta,M^n\right).
\end{eqnarray}
Differentiating both sides of (\ref{5.3}) with respect to $t$ and $\xi_{i}$, $1\leqslant i \leqslant n$, respectively, one can easily get evolution equations for $\dot\varphi$ and $\varphi_i$ respectively, which, using the estimate (\ref{5.4}), can be treated as uniformly parabolic PDEs on the time interval $[t_0,T]$. At the initial time $t_0$, all compatibility conditions are satisfied and the initial function $\varphi(\cdot,t_0)$ is smooth, which implies a $C^{3+\beta,(3+\beta)/2}$ estimate for $\varphi_i$ and a $C^{1+\beta,1+\beta/2}$ estimate for $\dot\varphi$. So, we have the $C^{4+\beta,2+\beta/2}$ estimate for $\widehat{u}$. From \cite[Chapter 4, Theorem 4.3, Exercise 4.5]{Li} and the above argument, it is not difficult to see that the constants are independent of $T$. Higher regularity can be proven by induction over $k$.
\end{proof}
Then the long-time existence and the convergence can be discussed.
\begin{lemma} \label{lemma5.3}
Let $\Sigma ^{n}$ be a smooth, convex cone. Let $u$ be an admissible solution of $(\widetilde{\sharp})$ and let $T^\ast$ be the maximal existence time. Then $T^{\ast}=\infty$, and the rescaled solution $\widetilde{F}(\cdot,t)=F(\cdot,t)e^{-\frac{1}{n} t}$ ($F(\cdot,t)$ is the embedding map mentioned in Theorem \ref{main1.1}) converges smoothly to an embedding $F_{\infty}$, mapping $M^n$ into a piece of a geodesic sphere.
\end{lemma}
\begin{proof}
Suppose, for contradiction, that $T^{\ast}<\infty$. By Lemma \ref{lemma5.2}, we know that the H\"{o}lder norms of $u=\widehat{u}e^{t/n}$ cannot blow up as $t$ tends to the maximal time $T^{\ast}<\infty$, which implies that $u$ can be extended to a solution of $(\widetilde\sharp)$ on $[0,T^{\ast}]$. The short time existence result (see Lemma \ref{lemma2.1}) and the higher-order estimates (see Lemma \ref{lemma5.2}) imply the existence of a solution beyond $[0,T^{\ast}]$ which is smooth away from $t=0$. This is a contradiction. Therefore, we have $T^{\ast}=\infty$.
By Lemmas \ref{lemma3.1} and \ref{lemma4.1}, we can get the following estimate
\begin{eqnarray*}
|Du|\leqslant Ke^{-\gamma t},
\end{eqnarray*}
where $K$ is a positive constant depending only on $C_3$ and $\lambda(\inf u(\cdot,0))$, and $\gamma$ is a constant depending only on $\alpha$ and the dimension $n$. By the Arzel\`a-Ascoli theorem, we know that every subsequence of $\widehat u$ converges to a constant function $\omega_{\infty}$ in $C^{1}(M^n)$. Assume that $\widehat u$ converges to the constant $\omega_{\infty}$ in $C^{k}(M^n)$. Since $\widehat u$ is uniformly bounded in $C^{k+\beta +1}$ and $\widehat u$ is H\"older continuous, by the Arzel\`a-Ascoli theorem we know that there exists a subsequence convergent to $\omega_{\infty}$ in $C^{k+1}(M^n)$. Then we can get the conclusion that every subsequence must converge, and the limit has to be $\omega_{\infty}$. Therefore, $\widehat u$ converges to $\omega_{\infty}$ in $C^{k+1}(M^n)$. The $C^{\infty}$ convergence follows by induction.

Finally, we accurately describe the asymptotic behavior of the rescaled hypersurfaces as time tends to infinity. Recall that $h^i_j=\frac{1}{\lambda v}(\lambda'\delta ^i_j-\tilde \sigma^{ik}\varphi _{kj})$. So, by the assumption $0<\lambda'\leqslant C$ and Lemmas \ref{lemma3.1} and \ref{lemma5.2}, it follows that
\begin{eqnarray*}
\left|\widehat h ^i_j-\delta^i_j\right|&\leqslant& e^{-\frac{1}{n}t}\left(\frac{\lambda'}{\lambda v}-1\right)\delta^i_j+\frac{1}{\lambda v}e^{\frac{1}{n}t}\widetilde \sigma ^{ik}\varphi _{kj}\\
&\leqslant& e^{-(\frac{n+1}{n})t}\left(\frac{\lambda'}{v\lambda e^{-\frac{1}{n}t}}\delta^i_j+\frac{1}{v\lambda e^{-\frac{1}{n}t}}\widetilde \sigma ^{ik}\varphi _{kj}\right)\\
&\leqslant& C_{5}e^{-\delta t}
\end{eqnarray*}
for some positive constant $C_{5}$ depending on $C$, $D$, $C_3$, $\lambda(\inf u(\cdot,0))$, and some constant $\delta$ depending on $\alpha$ and $n$. So, from the above argument, we know that after rescaling, the evolving hypersurfaces converge smoothly to a piece of a geodesic sphere as time tends to infinity.
\end{proof}
Theorem \ref{main1.1} follows naturally from Lemmas \ref{lemma2.1} and \ref{lemma5.3}.
$\\$\textbf{Acknowledgments}. This research was supported in part by the National Natural Science Foundation of China (Grant Nos. 11201131, 11401131 and 11101132) and Hubei Key Laboratory of Applied Mathematics (Hubei University).
\vspace{1cm}
\begin{thebibliography}{50}
\setlength{\itemsep}{-0pt} \small
\bibitem{bhw} S. Brendle, P.-K. Hung and M.-T. Wang, \emph{A Minkowski inequality for hypersurfaces in the anti-de Sitter-Schwarzschild manifold}, Commun. Pure Appl. Math. {\bf 69} (2016) 124--144.
\bibitem{cm} L. Chen and J. Mao, \emph{Non-parametric inverse curvature flows in the AdS-Schwarzschild manifold}, The Journal of Geometric Analysis, DOI:10.1007/s12220-017-9848-6.
\bibitem{cmz} L. Chen, J. Mao and H.-Y. Zhou, \emph{Inverse curvature flows in warped product manifolds}, preprint.
\bibitem{fmi} P. Freitas, J. Mao and I. Salavessa, \emph{Spherical symmetrization and the first eigenvalue of geodesic disks on manifolds}, Calc. Var.
Partial Differential Equations {\bf 51} (2014) 701--724.
\bibitem{cg} C. Gerhardt, \emph{Flow of nonconvex hypersurfaces into spheres}, J. Differ. Geom. {\bf 32} (1990) 299--314.
\bibitem{gl} P.-F. Guan and J.-F. Li, \emph{The quermassintegral inequalities for k-convex starshaped domains}, Adv. Math. {\bf 221} (2009) 1725--1732.
\bibitem{gmtz} P.-F. Guan, X.-N. Ma, N. Trudinger and X.-H. Zhu, \emph{A form of Alexandrov-Fenchel inequality}, Pure Appl. Math. Q. {\bf 6} (2010) 999--1012.
\bibitem{hi1} G. Huisken and T. Ilmanen, \emph{The inverse mean curvature flow and the Riemannian Penrose inequality}, J. Differ. Geom. {\bf 59} (2001) 353--437.
\bibitem{hi2} G. Huisken and T. Ilmanen, \emph{Higher regularity of the inverse mean curvature flow}, J. Differ. Geom. {\bf 80} (2008) 433--451.
\bibitem{La1} O.-A. Ladyzenskaja, V.-A. Solonnikov and N.-N. Ural'ceva, \emph{Linear and Quasilinear Equations of Parabolic Type}, AMS, New York, 1968.
\bibitem{La2} O.-A. Ladyzenskaja and N.-N. Ural'ceva, \emph{Linear and Quasilinear Elliptic Equations}, Academic Press, New York, 1968.
\bibitem{Li} G.-M. Lieberman, \emph{Second Order Parabolic Differential Equations}, World Scientific, Singapore, 1996.
\bibitem{m2} J. Mao, \emph{Eigenvalue estimation and some results on finite topological type}, Ph.D. thesis, IST-UTL, 2013.
\bibitem{m1} J. Mao, \emph{Eigenvalue inequalities for the $p$-Laplacian on a Riemannian manifold and estimates for the heat kernel}, J. Math. Pures Appl. {\bf 101}(3) (2014) 372--393.
\bibitem{m3} J. Mao, \emph{Volume comparison theorems for manifolds with radial curvature bounded}, Czech. Math. J. {\bf 66}(1) (2016) 71--86.
\bibitem{mdw} J. Mao, F. Du and C.-X. Wu, \emph{Eigenvalue Problems on Manifolds}, Scientific Press, Beijing, 2017.
\bibitem{Ma2} T. Marquardt, \emph{Inverse mean curvature flow for hypersurfaces with boundary}, Ph.D. thesis, FU-Berlin, 2012.
\bibitem{Ma1} T. Marquardt, \emph{Inverse mean curvature flow for star-shaped hypersurfaces evolving in a cone}, J. Geom. Anal. {\bf 23} (2013) 1303--1313.
\bibitem{ju} J. Urbas, \emph{On the expansion of starshaped hypersurfaces by symmetric functions of their principal curvatures}, Math. Z. {\bf 205} (1990) 355--372.
\bibitem{Zh} H.-Y. Zhou, \emph{Inverse mean curvature flow in warped product manifolds}, available online at arXiv:1609.09665v3.
\end{thebibliography}
\end{document}
\begin{document}
\title{\bf Uniqueness of singular solution of semilinear elliptic equation\thanks{Supported in part by the National Science Funds of China (10901047)}}
\begin{center} \begin{minipage}{130mm} {\small {\bf Abstract} \ \ \ In this paper, we study the asymptotic behavior of solutions near $0$ for a class of elliptic problems. The uniqueness of the singular solution is established.\vskip0.1in {\it Keywords:}\ Nonhomogeneous semilinear elliptic equation; Positive solutions; Asymptotic behavior; Singular solutions \ \ } \end{minipage} \end{center} \baselineskip 0.2in \vskip 0.2in {\bf 1. Introduction} \setcounter{section}{1} \setcounter{equation}{0}\vskip 0.1in
In this paper, we study the elliptic equation
\begin{equation}
\Delta u+K(|x|)u^{p}+\mu f(|x|)=0,
\end{equation}
where $n\geq 3, \Delta=\Sigma_{1}^{n}\frac{\partial^{2}}{\partial x_{i}^{2}}$ is the Laplace operator, $p>1$ is a constant, $\mu\geq0$ is a parameter, and $f$ and $K$ are given locally H\"{o}lder continuous functions in $R^{n}\setminus\{0\}$, so that the solutions of (1.1) are classical on $0<|x|<\infty$. However, at $x=0$, when $K$ is ``bad'', usually we cannot expect the solutions to be differentiable, or even continuous, owing to the singularity of $K$ at $x=0$. Let $u$ be a solution of (1.1). The singular point $x=0$ of $u$ is called a removable singular point if $u(0)\equiv\lim_{x\to0}u(x)$ exists; otherwise $x=0$ is called a nonremovable singular point. It is shown by Ni and Yotsutani [13] that when $x=0$ is a removable singular point of a solution of (1.1), the existence of the derivatives of the solution depends on the ``blow up'' rate of $K$ at $x=0$ [13, Proposition 4.4]. Let $u\in C^{2}(R^{n}\setminus 0)$ be a solution of (1.1). If $x=0$ is a removable singular point of $u$, then $u$ is said to be a regular solution of (1.1); otherwise $u$ is said to be a singular solution. The purpose of this paper is to study the asymptotic behavior of singular positive solutions and to obtain the uniqueness of singular positive solutions of (1.1), which has diverse physical and geometrical backgrounds. In particular, in the case $K=1$ and $p=2$, (1.1) arises naturally in establishing occupation time limit theorems for super-Brownian motions, which requires analyzing cumulant generating functions satisfying some integral equations equivalent to the parabolic counterparts of (1.1). There are many works devoted to the study of the existence, monotonicity and asymptotic expansion at infinity of positive solutions of the equation (1.1). We refer the interested readers to [2,3,4,6,7,9-12] and the references therein. In this paper, we consider positive radial solutions of (1.1) with radial functions $K,f$. The radial version of (1.1) is of the form $$u''+\frac{n-1}{r}u'+K(r)u^{p}+\mu f=0.$$ For the same reasons, the regular solutions, which have finite limits at $r=0$, are particularly interesting, which leads us to consider the initial value problem $$ \left\{ \begin{array}{ll}u''+\frac{n-1}{r}u'+K(r)u^{p}+\mu f=0,\\ u(0)=\alpha. \end{array} \right. \eqno (1.2) $$ We use $u_{\alpha}=u(r,\alpha)$ to denote the unique solution of (1.2). \vskip0.1in First, we introduce the following notations, which will be used throughout this paper: \[ m\equiv \frac{l+2}{p-1},\ \ \ \ \ L\equiv \lbrack m(n-2-m)]^{\frac{1}{p-1}}, \] \begin{eqnarray*} p_{c}=\left\{ \begin{array}{ll} \frac{(n-2)^{2}-2(l+2)(n+l)+2(l+2)\sqrt{(n+l)^{2}-(n-2)^{2}}}{(n-2)(n-10-4l)}, & n>10+4l, \\ \infty,& 3\leq n\leq 10+4l. \end{array} \right.
\end{eqnarray*} $$ \lambda _{2}=\lambda _{2}(n,p,l)=\frac{(n-2-2m)+\sqrt{ (n-2-2m)^{2}-4(l+2)(n-2-m)}}{2}. $$ The hypotheses of $K(|x|)$ are often divided into two cases: the fast decay case and the slow decay case. In this paper, we will focus on the slow decay case, i.e. $K(r)\geq cr^{l}$, for some $l>-2$ and $r$ large. First, let us introduce a collection of hypotheses on $K(|x|)$ and $f$: $(K.1)$\ \ $K(|x|)=k_{\infty }|x|^{l}+O(|x|^{-d_{1}})$ at $|x|\rightarrow \infty $ for some constants $k_{\infty }>0$ and $d_{1}>n-\lambda _{2}-m(p+1)$. $(K.2)$ \ \ $\lim_{x\to0}|x|^{-l}K(|x|)=k_{0}>0.$ $(K.3)$\ \ $K(r)$ is locally Lipschitz continuous and $\frac{d}{dr} (r^{-l}K(r))\leq0$ for a.e. $r>0$. $(f.1)$\ \ $\lim_{|x|\to 0} |x|^{d}f(|x|)=b\geq0$, where $0<d\leq m+2$. $(f.2)$\ \ $f(x)=O(|x|^{-q})$ near $\infty $ for some $q>n-m-\lambda _{2}$. $(f'.2)$ $\int_{0}(r^{d}f)_{r}^{+}dr<\infty.$ $(K'.2)$ $\int_{0}(r^{-l}K(r))_{r}^{+}dr< \infty.$ $(f'.3)$ $\int_{0}(r^{d}f)_{r}^{-}dr<\infty.$ $(K'.3)$ $\int_{0}(r^{-l}K(r))_{r}^{-}dr< \infty,$ where $k^{\pm}=\max\{\pm k,0\}, r=|x|$. \vskip0.1in Our main result is as follows:\vskip0.1in {\bf Theorem 1.} Suppose that $K(r)$ satisfies $(K.1)-(K.3),(K'.2)$ $f$ satisfies $(f.1)$ and $(f.2),(f'.2)$, $p>p_{c},0<d<2$. Then (1.2) has one and only one singular solution $U(r)$, Furthermore, for any regular solution $u(r)$, the following holds $$u(r)<U(r)\leq\frac{L}{(r^{-l}K(r))^{\frac{1}{p-1}}r^{m}}.$$\vskip0.1in This paper is organized as follows. In Section 2, the asymptotic behavior of singular positive solution of (1.2) near 0 is studied. Finally, the uniqueness result about the singular positive solution is established.\vskip0.1in { \bf 2 Asymptotic behavior of singular solution near 0}\setcounter{section}{2} \setcounter{equation}{0}\vskip0.1in In this section, we obtain the asymptotic behavior of the positive radial solution of (1.2) near 0. First, we obtain the prior estimates of the positive solution of (1.2) near 0. { \bf Lemma 2.1.} Let $p>1$ and $u$ be a positive solution of (1.2) in $B_{r}\setminus\{0\}$. Then, there exists a positive constant $R$ such that for $0<r<R$, $u(r)\leq C r^{-\frac{2+l}{p-1}}$ for some constant $C$.\vskip0.1in {\bf Proof.} From (1.2), we have $$(r^{n-1}u'(r))'+r^{n-1}K(r)u^{p}\leq0,$$ then, there exists a small positive constant $r_{k}<r$ such that \begin{eqnarray*} r^{n-1}u'(r)&=&r^{n-1}_{k}u'(r_{k})-\int_{r_{k}}^{r}r^{n-1}K(r)u^{p}dr\\ &\leq&-\int_{r_{k}}^{r}r^{n-1}K(r)u^{p}dr, \end{eqnarray*} since $u'(r_{k})<0$ near 0. From that, $$\frac{u'(r)}{u^{p}(r)}\leq-\frac{C}{r^{n-1}}\int_{r_{k}}^{r}s^{n-1}K(s)ds\leq Cr^{l+1} \eqno(2.1).$$ Integrating (2.1) over $(0,r)$, we have $u(r)\leq Cr^{-\frac{2+l}{p-1}}$, and the proof is completed.\vskip0.2in Now, we verify the following asymptotic behavior of $u$ near 0 by using the Li's energy method in [9]\vskip0.2in { \bf Theorem 2.2.} Let $u$ be a positive solution of (1.2) near 0. Assume that $u(r)=O(r^{-m})$ at 0 and $ K, f$ satisfy (i) $(f.1), (K.1)$ and $(f'.2), (K'.2)$ if $p>\frac{n+2+2l}{n-2}$ or (ii) $(f.1),(K.1)$ and $(f'.3), (K'.3)$ if $\frac{n+l}{n-2}<p<\frac{n+2+2l}{n-2}$ with $d=m+2, b\geq0$. 
Then, $b\leq\max_{z\in R^{+}}\{L^{p-1}z-k_{0}z^{p}\}$ and $\lim_{r\to0}r^{m}u(r)=z_{1}\ \mbox{or}\ z_{2}$, where $z_{1}$ and $z_{2}$, $z_{1}\leq z_{2}$ are two roots of the equation $k_{0}z^{p}-L^{p-1}z+b=0$.\vskip0.1in {\bf Proof.} Denote $v(t):=r^{m}u, t=\log r,$ then $$v''(t)+av'(t)-L^{p-1}v+k(t)v^{p}+g(t)=0, \eqno(2.2)$$ where $a=n-2-2m, k(t)=e^{-lt}K(e^{t})$ and $g(t):=e^{(m+2)t}f(e^{t})$. Suppose that $$0\leq \alpha=\lim\inf_{t\to-\infty}v(t)<\lim\sup_{t\to-\infty}v(t)=\beta<\infty.$$ Then, there exist two sequences $\{\eta_{i}\}$ and $\{\xi_{i}\}$ going to $-\infty$ as $i\to-\infty$ such that $\{\eta_{i}\}$ and $\{\xi_{i}\}$ are local minima and local maxima of $v$, respectively, satisfying $\eta_{i}<\xi_{i}<\eta_{i+1}, i=1.2,....$ Define an energy function $$E(t):=\frac{1}{2}(v')^{2}-\frac{L^{p-1}}{2}v^{2}+\frac{1}{p+1}k(t)v^{p+1}+bv. \eqno(2.3)$$ Now, multiplying (2.2) by $v'(t)$ and integrating by parts over $[t,T]$ ($T$ is a constant) we obtain \[ E(t)+a\int_{t}^{T}(v')^{2}ds+ [g(t)-b]v(t)=C(T)+\int_{t}^{T}g'ds+\frac{1}{p+1}\int_{t}^{T}v^{p+1}k'ds. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2.4) \] Assume that $f, K$ satisfy $(f.1), (K.1)$ and $(f'.2), (K'.2)$ and $p>\frac{n+2+2l}{n-2}$, from (2.4), we have \begin{eqnarray*} &&E(t)+a\int_{t}^{T}(v')^{2}ds+ [g(t)-b]v(t)+\frac{1}{p+1}\int_{t}^{T}v^{p+1}[k']^{-}ds+\int_{t}^{T}[g']^{-}ds\\ &&=C(T)+\int_{t}^{T}[g']^{+}ds+\frac{1}{p+1}\int_{t}^{T}v^{p+1}[k']^{+}ds. \end{eqnarray*} Since $v$ is bounded, $E(\eta_{i})$ is bounded independently of $i$ and $[k']^{+}, [g']^{+}\in L^{1}(T,\infty)$, we conclude that $$\int_{T}^{\eta_{i}}V_{s}^{2}ds,\ \ \int_{T}^{\eta_{i}} [k']^{-}V^{p+1}ds\ \ \mbox{and} \ \int_{T}^{\eta_{i}}[g']^{-}ds$$ are bounded independent of $i$ which implies that $$\int_{-\infty}^{T}(v')^{2}ds<\infty. \eqno(2.5)$$ Similarly, if $f, K$ satisfy $(f.1), (K.1)$ and $(f'.3), (K'.3)$, we also have $V_{s}\in L^{2}(-\infty,T)$. It follows from (2.4) and (2.5) that $E=\lim_{t\to-\infty}E(t)$ exists, which in turn from (2.3) implies $v'(t)$ is bounded. Then, from (2.2), $v''(t)$ is bounded also and from (2.5), $v'(t)\to 0$ as $t\to -\infty$. Let $h(v)=-\frac{L^{p-1}}{2}v^{2}+\frac{k_{0}}{p+1}v^{p+1}+bv$. Since $$\lim_{i\to\infty}E(\eta_{i})=h(\alpha)=E=h(\beta)=\lim_{i\to\infty}E(\xi_{i}),$$ we choose $\alpha<\gamma<\beta$ and $t_{i}\in(\eta_{i},\xi_{i})$ such that $v(t_{i})=\gamma, \frac{dh}{dv}(\gamma)=0$ and $h(\gamma)\neq E$. However $E=\lim_{i\to\infty}E(t_{i})=\lim_{i\to\infty}(\frac{1}{2}v'(t_{i})^{2}+h(\gamma))=h(\gamma)$, a contradiction. Therefore, $v_{\infty}=\lim_{t\to-\infty}v(t)$ exists. For given $\tilde{\xi}>0$, there exists a sequence $\{s_{i}\}$ converging to $-\infty$ such that $|v'(s_{i})|\leq \tilde{\xi}$ for $ i=1,2,....$ Since $E(s_{i})$ is bounded, we obtain (2.5) from (2.4). From (2.4) and (2.5), $\lim_{t\to-\infty}E(t)$ exists. Thus, (2.3) implies $\lim_{t\to-\infty}v'(t)=0$. Then, by (2.2), $\lim_{t\to\infty} v''(t)$ exists and must be 0. Therefore, we conclude from (2.2) that $b\leq\max_{z\in R^{+}}\{L^{p-1}z-k_{0}z^{p}\}$ and $v_{\infty}=z_{1}\ \mbox{or}\ z_{2}$. The proof is completed. \vskip0.2in {\bf Corollary 2.3.} Let $u$ be a positive solution of (1.2) near 0, and $f, K$ satisfy the same condition as in Theorem 2.2 except that $d=m+2$ is replaced by $0<d<m+2$. 
Then $\lim_{r\to0}r^{m}u(r)=\frac{L^{p-1}}{k_{0}}$ or 0.\vskip0.1in In case $0<d<m+2$, then $\lim_{t\to\infty} g(t)=\lim_{t\to-\infty}e^{(m+2)t}f(e^{t})=0$, similar to that of Theorem 2.2, we can immediately obtain $\lim_{r\to0}r^{m}u(r)=\frac{L^{p-1}}{k_{0}}$ or 0. The detail proof is omitted here \vskip 0.2in {\bf Theorem 2.4.} \ Let $p>\frac{n+l}{n-2}$. Suppose that $f(r)=O(r^{-d})$ at 0 with $d<m+2$. Then, any positive solution of (1.2) satisfying $\lim_{r\to 0}r^{m}u(r)=0$ has the asymptotic behavior at 0 such that \begin{eqnarray*} u(r)=\left\{ \begin{array}{lll} O(r^{2-d}) & \mbox{if}\ 2<d<m+2,\\ O(|\log r|) & \mbox{if}\ d=2,\\ O(1) & \mbox{if}\ d<2. \end{array} \right. \end{eqnarray*} \vskip 0.2in {\bf Proof.} First, we claim that there exists a constant $\beta>0$ such that $u(r)=O(r^{-m+\beta})$ near 0. Set $v(r)=r^{m}u(r)$ for $r>0$ and $L_{\varepsilon}v \equiv \Delta v-\frac{2mv^{\prime }}{r}-m(n-2-m-\varepsilon )\frac{v}{r^{2}}+\mu r^{m}f.$ Then, $v(r)$ satisfies $$L_{\varepsilon}v-m\varepsilon\frac{v}{r^{2}}+\frac{v^{p}}{r^{2}}K(r)r^{-l}=0$$ and for any $\varepsilon>0$, there exists $R_{\varepsilon}>0$ such that $L_{\varepsilon}v\geq0$ for $0<r\leq R_{\varepsilon}$. On the other hand, for $0<\varepsilon <n-2-m$, let $\varphi _{\varepsilon }(x)=|x|^{\beta _{\varepsilon }}$ we have \[ L_{\varepsilon }\varphi _{\varepsilon }=[\beta (\beta -1)+(n-1-2m)\beta -m(n-2-m-\varepsilon )]|x|^{\beta _{\varepsilon }-2}+\mu r^{m}f \] in $R^{n}\setminus \{0\}.$ Choosing $\beta _{\varepsilon }>0$ sufficiently small such that \[ \beta _{\varepsilon }(\beta _{\varepsilon }-1)+(n-1-2m)\beta _{\varepsilon }-m(n-2-m-\varepsilon )\leq 0, \] and \[ \frac{r^{m}f}{r^{\beta _{\varepsilon }-2}}\rightarrow 0 \ \ \ \ \ \mbox{as}\ \ \ \ \ r \to0 . \] So there exists an $R_{\varepsilon }^{\prime }>0$ such that \[ L_{\varepsilon }\varphi _{\varepsilon }\leq 0\ \ \ \ \ \ \mbox{in}\ \ \ \ \ \ \ \ 0<r<R_{\varepsilon }^{\prime }. \] Setting $R_{\varepsilon }^{\prime \prime }=\min \{R_{\varepsilon }^{\prime },R_{\varepsilon }\}$, $C_{\varepsilon }=v(R_{\varepsilon }^{\prime \prime })(R_{\varepsilon }^{\prime \prime })^{-\beta _{\varepsilon }},$ we see that \begin{eqnarray*} \left\{ \begin{array}{lll} L_{\varepsilon }(v-C_{\varepsilon }\varphi _{\varepsilon })\geq 0 & \ \ \ \ \ \ \mbox{for}\ \ \ \ 0<r\leq R_{\varepsilon }^{\prime \prime }, \\ v-C_{\varepsilon }\varphi _{\varepsilon }=0 & \ \ \ \ \ \ \mbox{at}\ \ \ \ R_{\varepsilon }^{\prime \prime }, \\ v(r)-C_{\varepsilon }\varphi _{\varepsilon }(r)\rightarrow 0 & \ \ \ \ \ \ \mbox{as}\ \ \ \ \ r\to 0 , & \end{array} \right. \end{eqnarray*} since $\beta _{\varepsilon }>0$. Observing that the coefficient of the term $ v$ in $L_{\varepsilon }$ is negative, we conclude by the maximum principle that $v-C_{\varepsilon }\varphi _{\varepsilon }\leq 0$ for $0<r\leq R_{\varepsilon }^{\prime \prime }$, i.e. $v(r)\leq C_{\varepsilon }r^{\beta _{\varepsilon }}$ near 0. This guarantees that $u(r)\leq C_{\varepsilon }r^{-m+\beta _{\varepsilon }}$ near 0. Since, there exists a sequence ${t_{i}}$ going to $-\infty$ such that $v'(t_{i})\to 0$ with $r_{i}=e^{t_{i}}$, then, $r_{i}^{\frac{p+1+l}{p-1}}u'(r_{i})\to0$ as $r_{i}\to0$ and thus $$ \lim_{r_{i}\to0}r_{i}^{n-1}u'(r_{i})=0. \eqno(2.6)$$ By (1.2) and (2.6), we observe that $$u'(r)=-\frac{1}{r^{n-1}}\int_{0}^{r}(K(r)u^{p}+\mu f)s^{n-1}ds \eqno(2.7)$$ for $0<r\leq R$. 
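We also remark, for the reader's convenience, why (2.6) indeed holds: since $p>\frac{n+l}{n-2}$, we have $m<n-2$, i.e. $\frac{p+1+l}{p-1}=m+1<n-1$, and hence $$r_{i}^{n-1}|u'(r_{i})|=r_{i}^{n-2-m}\cdot r_{i}^{m+1}|u'(r_{i})|\to0 \ \ \mbox{as}\ \ r_{i}\to0.$$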
Integrating (2.7) over $[r,R]$, and changing the order of integration, we have $$\begin{array}{lllllllll} u(r)&=& \int_{r}^{R}\frac{1}{t^{n-1}}[\int_{0}^{t}(K(s)u^{p}+\mu f)s^{n-1}ds]dt+u(R)\\\\ &\leq& \frac{r^{2-n}}{n-2}\int_{0}^{r}(K(s)u^{p}+\mu f)s^{n-1}ds+ \frac{1}{n-2}\int_{r}^{R}(K(s)u^{p}+\mu f)sds+u(R). \end{array}\eqno(2.8)$$ Then, for small $r>0$, \begin{eqnarray*} u(r)\leq \left\{ \begin{array}{lll} C(1+r^{-m+p\beta}+r^{2-d} & \mbox{if}\ p\beta\neq m,d>2,\\ C(1+|\log r|+r^{2-d}) & \mbox{if}\ p\beta=m,d>2. \end{array} \right. \end{eqnarray*} If $m+2-p\beta\leq d$, then $u(r)=O(r^{2-d})$ near 0. Otherwise, repeating the above arguments with $\beta$ replace by $p\beta$ leads to $m+2-p^{2}\beta>d$ and in turn $m+2-p^{k}\beta>d$ for any positive integer $k$, which is absurd. Therefore, we have $u(r)=O(r^{2-d})$ near 0. If $d=2$, then it follows (2.8) that for small $r>0$, \begin{eqnarray*} u(r)\leq \left\{ \begin{array}{lll} C(1+r^{-m+p\beta}+|\log r|) & \mbox{if}\ p\beta\neq m, \\ C(1+|\log r|) & \mbox{if}\ p\beta= m. \end{array} \right. \end{eqnarray*} The case $p\beta\neq m$ can be dealt as same in the above. Therefore, $u(r)=O(\log r)$ near 0. On the other hand, if $d<2$, then it follows (2.8) that for small $r>0$, \begin{eqnarray*} u(r)\leq \left\{ \begin{array}{lll} C(1+r^{-m+p\beta}) & \mbox{if}\ p\beta\neq m, \\ C(1+|\log r|) & \mbox{if}\ p\beta= m. \end{array} \right. \end{eqnarray*} If $-m+p\beta>0$, then $u(r)=O(1)$ near 0. Otherwise, repeating the above arguments with $\beta$ replace by $p\beta$ leads to $-m+p^{2}\beta<0$ and in turn $-m+p^{k}\beta<0$ for any positive integer $k$, which is absurd. Therefore, we have $u(r)=O(1)$ near 0. The proof is completed. \baselineskip 0.2in \vskip 0.2in {\bf 3 Uniqueness of singular solution }\setcounter{section}{5} \setcounter{equation}{0}\vskip0.1in In order to prove the Theorem 1, we need following Lemma\vskip0.1in {\bf Lemma 3.2.} Suppose that $K(r),f(r)$ satisfy the same condition as in the Theorem 2.2 except that $d=m+2$ is replaced by $d<m+2$. Let $u(r)$ be a positive solution of (1.2), then, $$\lim_{r\to0^{+}}r\frac{d}{dr}(r^{m}u(r))=0.$$ {\bf Proof.} Let $v(t)=r^{m}u(r), r=e^{t}$, then $$v''+b_{0}v'-L^{p-1}v+k(t)v^{p}+\mu f(e^{t})e^{(m+2)t}=0$$ or $$(e^{b_{0}t}v'(t))'+e^{b_{0}t}(k(t)v^{p}-L^{p-1}v+\mu f(e^{t})e^{(m+2)t})=0.$$ Integrating from $T$ to $t$ for $T<t$, we have $$e^{b_{0}t}v'(t)-e^{b_{0}T}v'(T)+\int_{T}^{t}e^{b_{0}s}(k(s)v^{p}-L^{p-1}v+\mu f(e^{s})e^{(m+2)s})=0.$$ By Corollary 2.3, we know that $\lim_{t\to-\infty}(k(t)v^{p}-L^{p-1}v+\mu f(e^{t})e^{(m+2)t})=0$. Given $\varepsilon>0$, there exists $t_{\varepsilon}$ such that $|k(t)v^{p}-L^{p-1}v+\mu f(e^{t})e^{(m+2)t}|<\varepsilon$, when $t<t_{\varepsilon}$, and $$|v'(t)|\leq |e^{b_{0}(T-t)}v'(T)|+\frac{\varepsilon}{b_{0}}(-e^{-b_{0}(t-T)}+1).$$ Letting $T$ (if necessary, take a subsequence) go to $-\infty$, we have $|v'(t)|\leq\frac{\varepsilon}{b_{0}}$, if $t<t_{\varepsilon}$, since $\varepsilon$ can be arbitrary small, we conclude $\lim_{t\to-\infty}v'(t)=0$, or equivalently $\lim_{r\to0^{+}}r\frac{d}{dr}(r^{m}u(r))=0.$\vskip0.1in {\bf Lemma 3.3.}$^{[11]}$ \ Suppose $f(t)$ and $g(t)$ are continues functions, $\lim_{t\to+\infty}f(t)=b>0$, $\lim_{t\to+\infty}g(t)=c>0$. Let $y(t)$ be a solution of $$y''-f(t)y'+g(t)y=0.$$ Then $y(t)$ is unbounded as $t\to+\infty$.\vskip0.1in {\bf Proof of Theorem 1.} First, we prove (1.2) has only one singular solution. Suppose $u_{1}(r)$ and $u_{2}(r)$ are two different singular solutions. 
As we did in Lemma 3.2, let $v_{i}(t)=r^{m}u_{i}(r), i=1,2, r=e^{t}$, and $h(t)=\frac{v_{2}(t)}{v_{1}(t)},$ then we have $$ h''+(b_{0}+\frac{2v_{1}'}{v_{1}})h'+k(t)v_{1}^{p-1}(h^{p}-h)-\mu f(e^{t})e^{(m+2)t}(\frac{h-1}{v_{1}})=0.$$ Let $Q(t)=h(-t)-1, f(t)=b_{0}+\frac{2v_{1}'}{v_{1}}$, $g_{1}(t)=k(t)v_{1}^{p-1}(h^{p}-h)/(h-1)$ when $h\neq1$, and $g_{1}(t)=(p-1)k(t)v_{1}^{p-1}$ when $h=1$, $g_{2}(t)=\mu f(e^{t})e^{(m+2)t}/v_{1}, g(t)=g_{1}(t)-g_{2}(t)$. Then $Q(t)$ satisfies $$Q''-f(-t)Q'+g(-t)Q=0.$$ Suppose $Q(t)\not \equiv0$. By Corollary 2.3 and Lemma 3.2, $\lim_{t\to-\infty}Q(t)=0, \lim _{t\to-\infty}Q'(t)=0, \lim_{t\to+\infty} f(-t)=b_{0}$ and $\lim_{t\to+\infty}g(-t)=c_{0}$. But by the Lemma 3.3, $Q'(t)$ is unbounded as $t\to+\infty$. The contradiction shows (1.2) has only one singular solution. Now we show the existence of singular solution of (1.2). By Theorem A, for any $\alpha>\alpha_{**}$, there exists a positive solution $u_{\alpha}$ of (1.2) which is strictly increasing in $\alpha$, and $r^{m}u_{\alpha}<\frac{L}{[r^{-l}K(r)]^{\frac{1}{p-1}}}$ by Lemma 2.2 of [6]. From (1.2) we have that $$(r^{n-1}u_{\alpha}'(r))'+r^{n-1}(K(r)u_{\alpha}^{p}+\mu f(r))=0$$ Integrating from 0 to $r$, we have that, from the fact $r^{-l}K(r)$ is non-increasing, \begin{eqnarray*} u'_{\alpha}(r)&=&-\frac{1}{r^{n-1}}\int_{0}^{r}(K(s)u_{\alpha}^{p}(s)+\mu f)s^{n-1}\\ &\leq&\frac{L^{p}}{r^{n-1}}\int_{0}^{r}s^{n-1-\frac{2p}{p-1}}K(s)^{-\frac{1}{p-1}}ds +\frac{1}{r^{n-1}}(\int_{0}^{r_{0}}+\int_{r_{0}}^{r}\mu f(s)s^{n-1}ds\\ &\leq& \frac{L^{p}}{r^{n-1}}r^{\frac{l}{p-1}}K(r)^{\frac{-1}{p-1}}\int_{0}^{r}s^{n-1-\frac{2p}{p-1}-\frac{l}{p-1}}ds +(Cr^{1-n}+Cr^{1-q})\\ &=&\frac{(p-1)L^{p}}{[(n-2)p-(n+l)][r^{p+1}K(r)]^{\frac{1}{p-1}}}+(Cr^{1-n}+Cr^{1-q}), \end{eqnarray*} where $r_{0}$ is positive number. Hence, $u_{\alpha}'$ is uniformly bounded on any compact subset of $(0,\infty)$ for $\alpha$ and consequently, ${u_{\alpha}}$ is equicontinuous on any compact subset of $(0,\infty)$. By the Arzela-Ascoli Theorem and a standard diagonal argument, there exists a sequence ${\alpha_{j}}$ tending to $\infty$ as $j\to\infty$ such that $u_{\alpha_{j}}$ converges uniformly on compact subsets, and thus $U(r):=\lim_{\alpha_{j}\to\infty}u_{\alpha_{j}}$ is continuous on $(0,\infty)$. From the Lemma 2.2 of [6], we have that, for each $\alpha>0$, $$r^{m}u_{\alpha}<r^{m}U(r)\leq L/(r^{-l}K(r))^{\frac{1}{p-1}}\ \ \mbox{on}\ \ (0,\infty).$$ Considering the equation $$u''_{\alpha}=-\frac{n-1}{r}u'_{\alpha}-Ku_{\alpha}^{p}-\mu f,\eqno(3.1)$$ we observe that $u_{\alpha}''$ is uniformly bounded in $\alpha$ on any compact subset of $(0,\infty)$. The Arzela-Ascoli Theorem implies again that there exists a subsequence of ${\alpha_{j}}$ (still denote it by ${\alpha_{j}}$) such that $u'_{\alpha_{j}}(r)$ converges uniformly on any compact subset of $(0,\infty)$ as $\alpha_{j}\to\infty$. Then, $U$ is differentiable on $(0,\infty)$ and $u'_{\alpha_{j}}(r)\to U'$ uniformly on any compact subset of $(0,\infty)$ as $\alpha_{j}\to\infty$. By (3.1), $u''_{\alpha_{j}}$ converges also uniformly on any compact subset. Therefore, $U'$ is differentiable on $(0,\infty)$ and $u''_{\alpha_{j}}\to U''$ uniformly on any compact subset of $(0,\infty)$ as $\alpha_{j}\to\infty$. From (3.1), letting $j\to\infty,U\geq0$ satisfies $$U''=-\frac{n-1}{r}U'-KU^{p}-\mu f\ \ \ \mbox{on}\ \ (0,\infty)$$ and is a singular solution. 
By the maximum principle, $U>0$, which completes the proof.\vskip0.1in \textbf{Acknowledgement}\vskip0.1in The authors would like to express their thanks to Prof Yi-Li for some helpful suggestions and the referee for helpful comments. \end{document}
\begin{document}
\maketitle
\begin{abstract}
Let $M$ be a tame mouse modelling $\mathrm{ZFC}$. We show that $M$ satisfies ``$V=\mathrm{HOD}_x$ for some real $x$'', and that the restriction $\mathbb{E}^M\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)$ of the extender sequence $\mathbb{E}^M$ of $M$ to indices above $\omega_1^M$ is definable without parameters over the universe of $M$. We show that $M$ has universe $\mathrm{HOD}^M[X]$, where $X=M|\omega_1^M$ is the initial segment of $M$ of height $\omega_1^M$ (including $\mathbb{E}^M\!\upharpoonright\!\omega_1^M$), and that $\mathrm{HOD}^M$ is the universe of a premouse over some $t\subseteq\omega_2^M$. We also show that $M$ has no proper grounds via strategically $\sigma$-closed forcings. We then extend some of these results partially to non-tame mice, including a proof that many natural $\varphi$-minimal mice model ``$V=\mathrm{HOD}$'', assuming a certain fine structural hypothesis whose proof has almost been given elsewhere.\footnote{Theorems \ref{thm:tame_e_def_from_x}, \ref{thm:E_almost_def_tame_L[E]}, \ref{thm:tame_HOD_x} and \ref{tm:HOD_tame_mouse} were presented in a series of talks at the WWU M\"unster set theory seminar in the summer semester of 2016, and Theorems \ref{thm:tame_e_def_from_x}, \ref{thm:tame_HOD_x} and a summary of some other results at the UC Irvine conference in inner model theory in 2016 (handwritten notes at \cite{irvine_talk_notes}). A summary was also presented at the Leeds Logic Colloquium 2016 (slides at \cite{leeds_talk_slides}). However, the very last theorem presented in the latter talk (slide 46) was overstated: the known proof only works with $\omega_3^M$ replacing $\omega_2^M$, as it is stated here in Theorem \ref{tm:HOD_non-tame_mouse}. The author apologises for the oversight. Theorems \ref{tm:tame_grounds} and \ref{tm:V=HOD_in_transcendent_mice} were noticed later.}
\end{abstract}
\section{Introduction}
Let $M$ be a mouse. We write $\mathbb{E}^M$ for the extender sequence of $M$, not including the active extender $F^M$ of $M$.\footnote{See \S\ref{subsec:terminology} for (a reference to) more terminology.} It was shown in \cite{V=HODX} that if $M$ has no largest cardinal (in fact more generally than this) then $\mathbb{E}^M$ is definable over the universe $\univ{M}$ of $M$ from the parameter $M|\omega_1^M$. We consider here the following questions:
\begin{enumerate}[label=--]
\item Is $\mathbb{E}^M$ definable over $\univ{M}$ from a real parameter?
\item How much of the iteration strategy $\Sigma_{\canM}$ for $\canM=M|\omega_1^M$ is known to $M$?
\item What can be said about the structure of $\mathrm{HOD}^{\univ{M}}$? How close is $\mathrm{HOD}^{\univ{M}}$ to $M$?
\end{enumerate}
We will see that these questions are related. We write $\mathbb{E}_+^M=\mathbb{E}^M\ \widehat{\ }\ F^M$ and $\canM^M=M|\omega_1^M$. Recall that a premouse $M$ is \emph{non-tame} iff there is $E\in\mathbb{E}_+^M$ and $\delta$ such that $\mathrm{cr}(E)<\delta<\mathrm{lh}(E)$ and $M|\mathrm{lh}(E)\models$``$\delta$ is Woodin as witnessed by $\mathbb{E}$''. The power set axiom is denoted $\mathrm{PS}$.
Our first result answers a question of Steel and Schindler from \cite{sile}; the proof owes much to their methods employed in that paper:
\begin{tm}\label{thm:tame_e_def_from_x}
Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse satisfying ``$\omega_1$ exists''. Then $\canM^M$ is definable over $\mathcal{H}=\mathcal{H}_{\delta}^M$ from a real parameter, where $\delta=\omega_2^M$. In fact $\{\canM^M\}$ is $\Sigma_2^{\mathcal{H}}(\{x\})$ for some $x\in\mathbb R^M$.
\end{tm}
In the preceding theorem, if $\omega_1^M$ is the largest cardinal of $M$ then $\mathcal{H}_\delta^M$ denotes $\univ{M}$. Combining \cite[Theorem 1.1]{V=HODX} and Theorem \ref{thm:tame_e_def_from_x} above, we have:
\begin{cor}\label{cor:tame_V=HODx}
Let $M$ be a tame, $(0,\omega_1+1)$-iterable premouse such that $\univ{M}\models\mathrm{ZFC}$. Then $M\models$``$V=\mathrm{HOD}_{x}$ for some $x\in\mathbb R$''.
\end{cor}
We also use a variant of the proof to yield some information regarding grounds of tame mice, relating to a question of Miedzianowski and Goldberg \cite{gironaconfproblems}; see Theorem \ref{tm:tame_grounds}.

In \S\S\ref{sec:HOD_tame_mice},\ref{sec:HOD_in_non-tame} we prove some facts regarding $\mathrm{HOD}^{\univ{M}}$, for mice $M$ satisfying $\mathrm{ZFC}$. (In fact, the assumption that $M\models\mathrm{ZFC}$ is only stated in order that the usual definition of $\mathrm{HOD}$ works. The results do not really depend on this.) Let $N=L[M_1^\#]$, which is of course tame. It is well known that $N\models$``$V\neq\mathrm{HOD}$'' (see \ref{rem:L[M_1^sharp]}). Therefore $\mathbb{E}^N$ is not definable over $\univ{N}$ without parameters. However, we will show that any tame mouse $M$ satisfying $\mathrm{PS}$ can \emph{almost} define $\mathbb{E}^M$ from no parameters. (In the statement below, \emph{tractability} (Definition \ref{dfn:tractable}) is just a fine structural requirement, and holds in particular if $M\models$``$\omega_2$ exists'', and $\mathscr{P}^M$ (Definition \ref{dfn:candidate,Ppp}) is roughly the collection of premice in $M$ which ``eventually agree'' with $M|\omega_1^M$):
\begin{tm}\label{thm:E_almost_def_tame_L[E]}
Let $M$ be a $(0,\omega_1+1)$-iterable tame tractable premouse satisfying ``$\omega_1$ exists'' and let $\delta=\omega_2^M$ \textup{(}with $\delta=\mathrm{OR}^M$ if $\omega_1$ is the largest cardinal of $M$\textup{)}. Then:
\begin{enumerate}[label=--]
\item $\mathbb{E}^M\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)$ is $\univ{M}$-definable without parameters, and
\item $\mathscr{P}^M$ is definable over $\mathcal{H}_\delta^M$ without parameters.
\end{enumerate}
\end{tm}
The results above concern tame mice. We now turn to (short-extender) mice in general with no smallness restriction. All of our results here rely on a technical hypothesis, STH ($*$-translation hypothesis, Definition \ref{dfn:Q^*_when_Q_correct}), which is almost proved in \cite{closson}, but not quite, and which should be routine to verify with basically the methods of \cite{closson}. We give the key definitions in \S\ref{sec:star}, but a proof of STH is beyond the scope of this paper.
Many typical $\varphi$-minimal mice are \emph{transcendent} (Definition \ref{dfn:transcendent}), including for example $M_1^\#$, and assuming STH, $M_{\mathrm{wlim}}^\#$ (the sharp for a Woodin limit of Woodins), the least mouse with an active superstrong extender (in MS-indexing, so this is not $0^\#$), and many more.
\begin{tm}\label{tm:V=HOD_in_transcendent_mice}
Assume STH. Let $M\in\mathrm{pm}_1$ be a transcendent tractable $\omega$-mouse. Let $\delta=\omega_2^M$. Then $\canM^M$ is definable without parameters over $\mathcal{H}_\delta^M$. Therefore if $N\triangleleft M$ with $M|\omega_2^M\trianglelefteq N$ and $N\models\mathrm{PS}$ or $N\models\mathrm{ZFC}^-$, then $\mathbb{E}^N$ is definable without parameters over $\univ{N}$, so if $N\models\mathrm{ZFC}$ then $\univ{N}\models$``$V=\mathrm{HOD}$''.
\end{tm}
We finally consider the question of the structure of $\mathrm{HOD}^{L[\mathbb{E}]}$. Our results here only give information ``above $\delta$'', where $\delta=\omega_2^{L[\mathbb{E}]}$ if $L[\mathbb{E}]$ is tame and $\delta=\omega_3^{L[\mathbb{E}]}$ otherwise. The question of the nature of $\mathrm{HOD}^{L[\mathbb{E}]}$ below $\delta$ appears to be much more subtle, and relates to the question of the nature of $\mathrm{HOD}^{L[x]}$ for a cone of reals $x$.\footnote{See \cite{lcfd} for partial results, and \cite{zhu_iterates}, \cite{a_long_comparison}, \cite{local_mantles_in_Lx} for possibly related issues.} For by considering arbitrary mice, we are including examples like $L[M_n^\#]=L[x]$.

Before we state the results, we make a coarser remark. Let $M$ be a mouse modelling $\mathrm{ZFC}$ and $\canM=\canM^M$. By \cite{V=HODX}, $\univ{M}=\mathrm{HOD}^{\univ{M}}_{\canM}$. So letting $H=\mathrm{HOD}^{\univ{M}}$ and $\mathbb P\in H$ be the Vopenka forcing for adding subsets of $\omega_1^M$ (as computed in $M$) and $G_{\canM}$ the generic for $\canM$, standard facts on Vopenka forcing give
\[H[G_{\canM}]=\mathrm{HOD}_{\canM}^{\univ{M}}=\univ{M}\]
(cf.~Footnote \ref{ftn:Vopenka_extension} for some explanation). In $M$, there are only $\omega_3^M$-many subsets of $(\mathcal{H}_{\omega_2})^M$, so $\card^M(\mathbb P)\leq\omega_3^M$. In fact, this Vopenka forcing has the $\omega_3^M$-cc in $H$, because the antichains correspond in $M$ to partitions of $\mathcal{P}(\omega_1)^M$. Therefore $M$ and $H$ have the same cardinals $\geq\omega_3^M$. Therefore $\mathbb P$ is in fact equivalent in $H$ to a forcing $\subseteq\omega_3^M$. (Actually, arguing as in the proof of Lemma \ref{lem:tame_Vopenka}, one can also prove this directly, and show that there is such a $\mathbb P\subseteq\omega_3^M$ which is definable without parameters over $\univ{M}$.) In particular, there is $X\in\mathcal{P}(\omega_3)^M$ such that $H[X]=\univ{M}$. One can ask whether this is optimal. In fact, it can be somewhat improved:
\begin{dfn}
We say that a premouse $M$ is \emph{below a Woodin limit of Woodins} iff there is no segment of $M$ satisfying ``There is a Woodin limit of Woodins''.
\end{dfn}
$\mathbb B_{\mathrm{ml},\delta}$ (Definition \ref{dfn:meas-lim_extender_algebra}) denotes a simple variant of the extender algebra at $\delta$.
\begin{tm}\label{tm:HOD_non-tame_mouse}
Let $M$ be a $(0,\omega_1+1)$-iterable premouse satisfying $\mathrm{ZFC}$ with $H=\mathrm{HOD}^{\univ{M}}\subsetneq M$, $\delta=\omega_3^M$ and $\canM=\canM^M$. Then $H[M|\delta]=\univ{M}$, and if $M$ is below a Woodin limit of Woodins then there is $X\subseteq\omega_2^M$ such that $H[X]=\univ{M}$. Moreover, there is an $M$-class premouse $W$, definable over $M$ without parameters, such that $W|\delta$ is a segment of an iterate of $\canM^M$ via $\Sigma_{\canM^M}$, and $W\models$``$\delta$ is Woodin'', and $H=W[t]$ where $t=\Th_{\Sigma_2}^{\mathcal{H}}(\delta)$ and $\mathcal{H}=\mathcal{H}_\delta^M$, and $t$ is $(W,\mathbb B_{\mathrm{ml},\delta}^W)$-generic.\footnote{Note that we do not claim that $\delta$ is the least Woodin of $W$, nor even that $\delta$ is a cutpoint of $W$.}
\end{tm}
In the tame case we can state a tighter relationship between $M$ and $W$, $\omega_3^M$ is reduced to $\omega_2^M$, and we get $H[\canM^M]=\univ{M}$. But we defer the full statement (see Theorem \ref{tm:HOD_tame_mouse}).
\subsection{Conventions and Notation}\label{subsec:terminology}
For a summary of terminology see \cite{fsfni}. We just mention a few non-standard and key points here. We deal with premice $M$ with Mitchell-Steel indexing and fine structure, except that we allow superstrong extenders on the extender sequence $\mathbb{E}_+^M$ and use the modifications to the fine structure explained in \cite[\S5]{V=HODX}.

Let $M$ be a premouse (possibly proper class). We say $M\in\mathrm{pm}_n$ iff $M\models$``$\omega_n$ exists''.\footnote{Here the ``$\omega_n$'' is not supposed to refer to $\omega_n^V$; we just mean that $M\models$``There are at least $(n+1)$ infinite cardinals''.} An \emph{$\omega$-premouse} is a sound premouse $N$ with $\rho_\omega^N=\omega$; an \emph{$\omega$-mouse} is an $(\omega,\omega_1+1)$-iterable $\omega$-premouse. The \emph{degree} $\deg(N)$ of an $\omega$-premouse $N$ is the least $n<\omega$ such that $\rho_{n+1}^N=\omega$. If $N$ is an $\omega$-mouse, we write $\Sigma_N$ for the unique $(\omega,\omega_1+1)$-strategy for $N$. We write $\canM^M$ for $M|\omega_1^M$. Suppose $M$ is $k$-sound where $k<\omega$. We say that $M$ satisfies \emph{$(k+1)$-condensation} iff it satisfies the conclusion of \cite[Theorem 5.2]{premouse_inheriting}. Let $\dot{p}\in V_\omega\backslash\omega$ be some fixed constant. Then for $\rho_{k+1}^M\leq\alpha\leq\rho_k^M$, we write $t^M_{k+1}(\alpha)$ for the theory given by replacing $\vec{p}_{k+1}^M$ in $\Th_{\mathrm{r}\Sigma_{k+1}}^M(\alpha\cup\{\vec{p}_{k+1}^M\})$ with $\dot{p}$, and write $t^M_{k+1}=t^M_{k+1}(\rho_{k+1}^M)$. For a limit length iteration tree $\mathcal{T}$ on an $\omega$-premouse and a $\mathcal{T}$-cofinal branch $b$, $Q(\mathcal{T},b)$ denotes the Q-structure $Q\trianglelefteq M^\mathcal{T}_b$ for $\delta(\mathcal{T})$, if this exists, and otherwise $Q=M^\mathcal{T}_b$.
\section{Local branch definability}
\begin{lem}\label{lem:Q_computes_b}
Let $\mathcal{T}$ be a limit length $\omega$-maximal tree on an $\omega$-premouse and $b$ a $\mathcal{T}$-cofinal branch with $M^\mathcal{T}_b$ being $\delta(\mathcal{T})$-wellfounded and $Q=Q(\mathcal{T},b)$ wellfounded.
Let $\delta=\delta(\mathcal{T})$, $t=t^Q_{q+1}(\delta)$ where $Q$ is $q$-sound and $\rho_{q+1}^Q\leq\delta\leq\rho_q^Q$, and $X=\mathrm{trancl}(\{\mathcal{T},t\})$. Then:
\begin{enumerate}[label=\textup{(}\roman*\textup{)}]
\item\label{item:b_in_J(X)} $b\in\mathcal{J}(X)$, and
\item\label{item:b_from_params_unif} $b$ is $\Sigma_1^{\mathcal{J}(X)}(\{\mathcal{T},t\})$, uniformly in $(\mathcal{T},t)$.
\end{enumerate}
\end{lem}
\begin{proof}
Part \ref{item:b_in_J(X)}: If $Q=M^\mathcal{T}_b$ then we can just use standard calculations using core maps (done in the codes given by the theory $t$, however) to find a tail sequence of extenders used along $b$, and hence, find $b$ itself, from $(\mathcal{T},Q)$. So suppose $Q\triangleleft M^\mathcal{T}_b$, so $\rho_\omega^Q=\delta$ and $Q$ is fully sound.
\begin{case}
$Q$ singularizes $\delta$.

Let $f:\theta\to\delta$ be cofinal in $\delta$, with $\theta=\cof^{M^\mathcal{T}_b}(\delta)$, $f$ the least which is definable over $Q$ (without parameters). Let $\alpha\in b$ be such that $(\alpha, b]_\mathcal{T}$ does not drop and $\delta\in\mathrm{rg}(i^\mathcal{T}_{\alpha b})$ (so $Q,f\in\mathrm{rg}(i^\mathcal{T}_{\alpha b})$) and $\theta<\kappa=\mathrm{cr}(i^\mathcal{T}_{\alpha b})$. Let $i^\mathcal{T}_{\alpha b}(\bar{\delta},\bar{f})=(\delta,f)$. For $\gamma<\theta$, let $\beta_\gamma$ be the least $\beta\in[\alpha,\mathrm{lh}(\mathcal{T}))$ such that $\alpha\leq_\mathcal{T}\beta$ and $(\alpha,\beta]_\mathcal{T}$ does not drop and $i^\mathcal{T}_{\alpha\beta}(\bar{f}(\gamma))=f(\gamma)<\lambda(E^\mathcal{T}_\beta)$. Then $\beta_\gamma\in b$. (Suppose not. Let $\xi+1\in b$ be least such that $\xi\geq\beta_\gamma$. Let $\delta=\mathrm{pred}^\mathcal{T}(\xi+1)$. So $\alpha\leq_\mathcal{T}\delta<\beta_\gamma\leq\xi$, so by the minimality of $\beta_\gamma$,
\[
\mathrm{cr}(E^\mathcal{T}_\xi)=\mathrm{cr}(i^\mathcal{T}_{\delta b})\leq f(\gamma)<\lambda(E^\mathcal{T}_{\beta_\gamma})\leq\lambda(E^\mathcal{T}_\xi).
\]
But then $f(\gamma)\notin\mathrm{rg}(i^\mathcal{T}_{\delta b})$, so $f(\gamma)\notin\mathrm{rg}(i^\mathcal{T}_{\alpha b})$, a contradiction.) So $b$ is appropriately computable from $(\mathcal{T},t)$ and the parameter $(\alpha,\bar{\delta})$. But if we define another branch $b'$ from $(\mathcal{T},t)$, in the same manner, but from some other parameter $(\alpha',\bar{\delta}')$, with $\alpha'\notin b$, then $Q(\mathcal{T},b')\neq Q(\mathcal{T},b)$, and this fact is first-order over $(Q,t)$, because we can compute the corresponding theory $t'$ of $Q(\mathcal{T},b')$ by consulting the theories of the models along $b'$. So by demanding that the selected parameter results in a Q-structure whose theory agrees with $t$, we can actually compute the correct $b$ from $(\mathcal{T},t)$ without the extra parameter.
\end{case}
\begin{case}
$Q$ does not singularize $\delta$.

Let $A\subseteq\delta$ be definable over $Q$ without parameters, such that no $\kappa<\delta$ is ${<\delta}$-$A$-reflecting. Let $C$ be the set of all limit cardinals $\lambda<\delta$ of $Q$ such that for all $\kappa<\lambda$, $\kappa$ is not ${<\lambda}$-$A$-reflecting.
Then $C$ is club in $\delta$ because $Q$ does not singularize $\delta$. Let $\alpha\in b$ be such that $\delta\in\mathrm{rg}(i^\mathcal{T}_{\alpha b})$. Let $i^\mathcal{T}_{\alpha b}(\bar{C})=C$. For $\gamma\in C$, let $\beta_\gamma$ be the least $\beta\in[\alpha,\mathrm{lh}(\mathcal{T}))$ such that $\gamma<\mathrm{lh}(E^\mathcal{T}_\beta)$. Then $\beta_\gamma\in b$. (Suppose not, and let $\xi\geq\beta_\gamma$ be least with $\xi+1\in b$. Let $\delta=\mathrm{pred}^\mathcal{T}(\xi+1)$, so $\alpha\leq_\mathcal{T}\delta<\beta_\gamma\leq\xi$. So
\[
\kappa=\mathrm{cr}(E^\mathcal{T}_\xi)<\mathrm{lh}(E^\mathcal{T}_\delta)\leq\gamma<\mathrm{lh}(E^\mathcal{T}_{\beta_\gamma})\leq\mathrm{lh}(E^\mathcal{T}_\xi)
\]
and $\gamma\leq\nu(E^\mathcal{T}_\xi)$ since $\gamma$ is a $Q$-cardinal. But since $i^\mathcal{T}_{\alpha b}(\bar{A})=A$, we have
\[
i_{E^\mathcal{T}_\xi}(A\cap\kappa)\cap\gamma=A\cap\gamma,
\]
so by the ISC, restrictions of $E^\mathcal{T}_\xi$ witness the fact that $\kappa$ is ${<\gamma}$-$A$-strong in $Q$, so $\gamma\notin C$, contradiction.) So $b$ is computable from $(\mathcal{T},t)$ and the parameter $(\alpha,\bar{\delta})$, and like before, we actually therefore get a computation from $(\mathcal{T},t)$ without the extra parameter.
\end{case}
Part \ref{item:b_from_params_unif}: It seems we can't quite uniformly tell which of the above three cases holds. But the calculations used in the case that $Q\triangleleft M^\mathcal{T}_b$ still work when $Q=M^\mathcal{T}_b$ and $\delta$ is not $\utilde{\rSigma}_k^Q$-Woodin, but $\rho_{k+10}^Q=\delta$. So our $\Sigma_1$ formula seeks either some $k<\omega$ such that $Q$ is not $k$-sound, and applies the procedure for when $Q=M^\mathcal{T}_b$, or some $k<\omega$ such that $Q$ is $(k+10)$-sound and $\rho_{k+10}^Q=\delta$, but $\delta$ is not $\utilde{\rSigma}_k^Q$-Woodin, and then uses the procedure for when $Q\triangleleft M^\mathcal{T}_b$ (with complexity say $\utilde{\rSigma}_{k+5}$). We have enough information in some $\mathcal{S}_n(X)$ to verify all the relevant computations, including that $Q$ is the correct direct limit of certain substructures appearing along the branch $b$. This yields the desired uniform computation for \ref{item:b_from_params_unif}.
\end{proof}
\begin{dfn}
Let $\mathcal{T}$ be as above and $Q$ be a (wellfounded) Q-structure for $M(\mathcal{T})$, and $t$ as above for $Q$. Then $\mathrm{branch}(\mathcal{T},Q)$ or $\mathrm{branch}(\mathcal{T},t)$ is the unique $\mathcal{T}$-cofinal branch $b$ computed from $(\mathcal{T},Q)$ as above (as the output of our $\Sigma_1^{\mathcal{J}(X)}(\{\mathcal{T},Q\})$ procedure) if it exists, and is otherwise undefined.
\end{dfn}
\section{Self-iterability and definability}\label{sec:self-it}
We begin with some basic examples which provide some context for the paper.
\begin{tm}\label{thm:1-small_pc_V=HOD}
Let $M$ be a proper class, $1$-small, $(0,\omega_1+1)$-iterable premouse. Then $\mathbb{E}^M$ is definable over $\univ{M}$, so $\univ{M}\models$``$V=\mathrm{HOD}$''.
\end{tm}
\begin{proof}
By \cite[Theorem 3.11(b)***]{V=HODX}, it suffices to see that $\canM^M$ is definable over $\univ{M}$.
But because $M$ is a proper class, and trees $\mathcal{T}$ on $\canM^M$ in $M$ are guided by Q-structures of the form $\mathcal{J}_\alpha(M(\mathcal{T}))$, we get $M\models$``$\canM^M$ is $(\omega,\omega_1+1)$-iterable'', so $\canM^M$ is outright definable over $\univ{M}$, and hence so is $\mathbb{E}^M$.
\end{proof}
In particular $M_1\models$``$V=\mathrm{HOD}$'', a fact first proven by Steel, via other means. On the other hand:
\begin{rem}\label{rem:L[M_1^sharp]}Assume that $M_1^\#$ exists (and is $(\omega,\omega+1)$-iterable) and let $N=L[M_1^\#]$. Note that $N$ is an $(\omega,\omega_1+1)$-iterable tame premouse. Standard descriptive set theoretic observations show that $\univ{N}\models$``$V\neq\mathrm{HOD}$'', and in fact, that $\omega_1^{N}$ is measurable in $\mathrm{HOD}^{\univ{N}}$. (So by Theorem \ref{thm:1-small_pc_V=HOD}, $N$ is the least such proper class mouse.) For the record, we give the proof that $\omega_1^{N}$ is measurable in $\mathrm{HOD}^{\univ{N}}$. It suffices to see that $N\models\Delta^1_2$-determinacy, for then $N\models\mathrm{OD}$-determinacy (by Kechris-Solovay \cite[Corollary 6.8]{lcfd}), and hence $\omega_1^N$ is measurable in $\mathrm{HOD}^{\univ{N}}$ by the effective version of Solovay's result (see \cite[Theorem 2.15]{lcfd}). (Further, $\omega_2^N$ is Woodin in $\mathrm{HOD}^{\univ{N}}$ by Woodin \cite[Theorem 6.10]{lcfd}.) So let $g\in N$ be $M_1$-generic for $\mathrm{Coll}(\omega,\delta)$ where $\delta$ is Woodin in $M_1$ (note $(\delta^+)^{M_1}<\omega_1^N$, so such a $g$ exists). By Neeman \cite[Corollary 6.12]{detLR}, $M_1[g]\models\Delta^1_2$-determinacy. Let $X\in N$ be $\Delta^1_2$, and $\varphi,\psi$ be $\Pi^1_2$ formulas such that
\[
X=\{x\in\mathbb R^N\mid N\models\varphi(x)\}\text{ and }Y=\mathbb R^N\backslash X=\{x\in\mathbb R^N\mid N\models\psi(x)\}.\]
Let $\bar{X}=X\cap M_1[g]$ and $\bar{Y}=Y\cap M_1[g]$. By absoluteness,
\[
\bar{X}=\{x\in \mathbb R^{M_1[g]}\mid M_1[g]\models\varphi(x)\}\text{ and }
\bar{Y}=\{x\in\mathbb R^{M_1[g]}\mid M_1[g]\models\psi(x)\},\]
so $\bar{X}$ is $\Delta^1_2$ in $M_1[g]$. Let $\sigma\in M_1[g]$ be a winning strategy for the game $\mathcal{G}^{M_1[g]}_{\bar{X}}$. The fact that $\sigma$ is winning is a $\Pi^1_2$ assertion (for either player), so $\sigma$ is still winning in $N$. This verifies that $N$ satisfies $\Delta^1_2$-determinacy.

This proof relies heavily on descriptive set theory. Is there an inner model theoretic proof that $\univ{N}\models$``$V\neq\mathrm{HOD}$''? There \emph{is} such a proof that $L[x]\models$``$V\neq\mathrm{HOD}$'' for a cone of reals $x$ (assuming $M_1^\#$); see \cite{a_long_comparison}.
\end{rem}
\begin{rem}
Note that we have not ruled out the possibility of set-sized mice $N$ which model $\mathrm{ZFC}$ and are $1$-small, and such that $N\models$``$V\neq\mathrm{HOD}$''. Let $M$ be the least mouse satisfying $\mathrm{ZFC}+$``There is a Woodin cardinal''. Then $M$ is pointwise definable and $\mathcal{J}(M)$ is sound, $\rho_1^{\mathcal{J}(M)}=\omega$ and $p_1^{\mathcal{J}(M)}=\{\mathrm{OR}^M\}$.
Let $N$ be the least mouse with $M\triangleleft N$ and $N\models\mathrm{ZFC}$; so $N=\mathcal{J}_\alpha(M)$ for some $\alpha\in\mathrm{OR}$, and $N\triangleleft M_1$ and $N$ is pointwise definable and $\mathcal{J}(N)$ is sound and $\rho_1^{\mathcal{J}(N)}=\omega$. Then genericity iterations can be used to show that $N\models$``$M$ is not $(\omega,\omega_1+1)$-iterable'', and the author does not know whether $\univ{N}\models$``$V=\mathrm{HOD}$''. \end{rem}
\begin{rem} Considering again $N=L[M_1^\#]$, clearly $\univ{N}\models$``$V=\mathrm{HOD}_x$ for some real $x$''. Steel and Schindler showed that if $M$ is a tame mouse satisfying $\mathrm{ZFC}^-+$``$\omega_1$ exists'', then there is $\alpha<\omega_1^M$ such that $M\models$``$\canM^M$ is above-$\alpha$, $(\omega,\omega_1)$-iterable''. We next show that this cannot be improved to ``above-$\alpha$, $(\omega,\omega_1+1)$-iterable''. So we cannot use $(\omega_1+1)$-iterability to prove Theorem \ref{thm:tame_e_def_from_x}.\end{rem}
\begin{dfn}\label{dfn:meas-lim_extender_algebra} Working in a premouse $M$, the \emph{meas-lim extender algebra at $\delta$}, written $\mathbb B_{\mathrm{ml},\delta}$, is the version of the $\delta$-generator extender algebra at $\delta$ in which we only induce axioms with extenders $E\in\mathbb{E}^M$ such that $\nu_E$ is an inaccessible limit of measurable cardinals of $M$. And $\mathbb B^{\geq\alpha}_{\mathrm{ml},\delta}$ denotes the variant using only extenders $E$ with $\mathrm{cr}(E)\geq\alpha$. \end{dfn}
\begin{exm}\label{exm:M_1^sharp-closed_reals} Let $S$ be the least active mouse such that $S|\omega_1^S$ is closed under the $M_1^\#$-operator and let $N=L[S|\omega_1^S]$. Note that $N\models$``I am $\omega_1$-iterable'', and in fact, letting $\Sigma$ be the correct strategy for $N$, then $\Sigma\!\upharpoonright\!\mathrm{HC}^N$ is definable over $N$. We claim that, however,
\[ N\models\neg\exists\alpha<\omega_1\ [\canM^N\text{ is above-}\alpha,\ (\omega,\omega_1+1)\text{-iterable}].\]
Let $P\triangleleft N|\omega_1^N$ project to $\omega$. We will construct a tree $\mathcal{T}\in N$ on $R=M_1(P)$, above $P$, of length $\omega_1^N$, via the correct strategy, such that $\mathcal{T}$ has no cofinal branch in $N$. Since $M_1^\#(P)\triangleleft N$ and $P$ can be taken arbitrarily high below $\omega_1^N$, this suffices. Let $\mathbb B=(\mathbb B^{\geq\mathrm{OR}^P}_{\mathrm{ml},\delta^R})^R$. We define $\mathcal{T}$ by $\mathbb{E}^{N|\omega_1^N}$-genericity iteration with respect to $\mathbb B$ (and its images), interweaving short linear iterations at successor measurables, as follows. Work in $N$. The tree $\mathcal{T}$ will be nowhere dropping. We define a continuous sequence $\left<\eta_\alpha\right>_{\alpha<\omega_1^N}$ where $\eta_\alpha$ is either $0$ or a limit ordinal ${<\omega_1^N}$, and define $\mathcal{T}\!\upharpoonright\!(\eta_\alpha+1)$, by induction on $\alpha$. Set $\eta_0=0$. Suppose we have defined $\mathcal{T}\!\upharpoonright\!(\eta_\alpha+1)$ and it is short; so
\[ i^\mathcal{T}_{0\eta_\alpha}(\delta^R)>\delta=\delta(\mathcal{T}\!\upharpoonright\!\eta_\alpha)\]
(where $\delta(\mathcal{T}\!\upharpoonright\! 0)=0$).
Let $G=G^\mathcal{T}_{\eta_\alpha}$ be the least \emph{bad} extender $G\in\mathbb{E}(M^\mathcal{T}_{\eta_\alpha})$; that is, the least $G$ inducing an axiom of $i^\mathcal{T}_{0\eta_\alpha}(\mathbb B)$ which is false for $\mathbb{E}^{N|\omega_1^N}$ (or set $G^\mathcal{T}_{\eta_\alpha}=\emptyset$ if there is no such extender; in fact there will be one). By induction, we will have $\delta\leq\nu_G<\mathrm{lh}(G)$ (assuming $G$ is defined). By definition of $\mathbb B$, $\nu_G$ is a limit of measurables of $M^\mathcal{T}_{\eta_\alpha}$. Suppose $\nu_G>\delta$ (or $G$ is undefined). Let $\mu$ be the least $M^\mathcal{T}_{\eta_\alpha}$-measurable such that $\delta<\mu$, and let $D\in\mathbb{E}(M^\mathcal{T}_{\eta_\alpha})$ be the (unique) normal measure on $\mu$. Note that $\mathrm{lh}(D)<\mathrm{lh}(G)$ (if $G$ is defined). Let $Q\triangleleft N$ be least such that $Q=M_1^\#(S)$ for some $S\triangleleft N$ with $\rho_\omega^S=\omega$ and $\mu<\mathrm{OR}^S$. Let $\eta_{\alpha+1}=\mathrm{OR}^Q$. Then $\mathcal{T}\!\upharpoonright\![\eta_\alpha,\eta_{\alpha+1}+1]$ is given by linearly iterating with $D$ and its images. Now suppose instead that $\nu_G=\delta$. Then we set $\eta_{\alpha+1}=\eta_\alpha+\omega$, set $E^\mathcal{T}_{\eta_\alpha}=G$, and letting $\mu$ be the least successor measurable of $M^\mathcal{T}_{\eta_\alpha+1}$ with $\mu>\delta$ and $D\in\mathbb{E}(M^\mathcal{T}_{\eta_\alpha+1})$ be the normal measure on $\mu$, let $\mathcal{T}\!\upharpoonright\![\eta_\alpha+1,\eta_{\alpha+1}+1]$ be given by linear iteration with $D$ and its images. Note that in both cases, because $\mu$ is a successor measurable, this does not leave any bad extender algebra axioms induced by extenders $G$ such that
\[ \delta<\mathrm{lh}(G)<\delta(\mathcal{T}\!\upharpoonright\!\eta_{\alpha+1}).\]
So it is straightforward to see that $\mathcal{T}$ is normal and is nowhere dropping. We set $\mathcal{T}=\mathcal{T}\!\upharpoonright\!\eta_\alpha$ where $\alpha$ is least such that either $\alpha=\omega_1^N$ or $\mathcal{T}\!\upharpoonright\!\eta_\alpha$ is maximal (non-short). Note that $\mathcal{T}\in N$ and $\mathcal{T}$ is via the correct strategy, so it suffices to verify:
\begin{clm} $\mathrm{lh}(\mathcal{T})=\omega_1^N$ and $N$ has no $\mathcal{T}$-cofinal branch.\end{clm}
\begin{proof} Suppose $\mathrm{lh}(\mathcal{T})=\omega_1^N$ but $N$ has a $\mathcal{T}$-cofinal branch $b$. We do the usual reflection argument, and get an elementary $\pi:M\to N|\gamma$ for some countable $M$ and large $\gamma$, with $\mathcal{T},b\in\mathrm{rg}(\pi)$. Let $\kappa=\mathrm{cr}(\pi)$. Let $\beta+1=\min(b\backslash(\kappa+1))$. Because $\mathcal{T}$ is normal and by the usual proof that genericity iterations succeed, it suffices to see that $E^\mathcal{T}_\beta=G^\mathcal{T}_\beta$. But if not then note that $\beta=\kappa$ and there is $\alpha$ such that $\kappa\in(\eta_\alpha,\eta_{\alpha+1})$ (if $\kappa=\eta_\alpha$ but $E^\mathcal{T}_\beta\neq G^\mathcal{T}_\beta$ then $\delta(\mathcal{T}\!\upharpoonright\!\eta_\alpha)<\kappa$, contradiction).
But $N|\kappa$ is closed under the $M_1^\#$-operator, and since $\eta_\alpha<\kappa$, it follows that $\eta_{\alpha+1}<\kappa$, a contradiction. Now suppose that $\mathrm{lh}(\mathcal{T})<\omega_1^N$; then $\mathcal{T}=\mathcal{T}\!\upharpoonright\!\eta_\alpha$ is maximal with some $\alpha<\omega_1^N$. Note that $\alpha$ is a limit. Let $b$ be the correct $\mathcal{T}$-cofinal branch, chosen in $V$. So
\[ i^\mathcal{T}_b(\delta^R)=\delta=\delta(\mathcal{T}\!\upharpoonright\!\eta_\alpha)\text{ is Woodin in }M^\mathcal{T}_b,\]
and $\delta<\omega_1^N$. Let $Q$ result from linearly iterating out the sharp of $M^\mathcal{T}_b$. Then $N|\delta$ is $Q$-generic for $i^\mathcal{T}_b(\mathbb B)$, and since $\alpha$ is a limit ordinal and because of the linear iterations inserted in $\mathcal{T}$, $N|\delta$ is closed under the $M_1^\#$-operator. But $\delta$ is regular in $Q[N|\delta]$, hence regular in $L[N|\delta]$. This easily contradicts the minimality of $N$.\end{proof}
\end{exm}
\section{Ordinal-real definability in tame mice}\label{sec:tame_V=HOD_x}
In this section we prove some results for tame mice, including Theorem \ref{thm:tame_e_def_from_x}, which has the consequence that every tame mouse satisfying $\mathrm{ZFC}$ satisfies ``$V=\mathrm{HOD}_x$ for some real $x$'', and also that every tame mouse satisfying ``$\omega_1$ exists'' satisfies ``there is a wellorder of $\mathbb R$ definable from a real parameter over $\mathcal{H}_{\omega_2}$'' (the wellorder is just the canonical one of $\canM^M$). This answers an (implicit) question of Steel and Schindler from \cite[p.~2]{sile}. The methods are, moreover, very similar to those of \cite{sile}.
\begin{dfn} For an $\omega$-mouse $M$, or for a mouse $M$ satisfying ``$V=\mathrm{HC}$'', $\Sigma_M$ denotes the unique $(\omega,\omega_1+1)$-iteration strategy for $M$. Let $M$ be a $(0,\omega_1+1)$-iterable premouse satisfying ``$\omega_1$ exists''. Let $\canM=\canM^M$ and $\alpha<\omega_1^M$. Then $\Sigma_{\canM\canM\alpha}$ denotes the restriction of $\Sigma_{\canM}$ to above-$\alpha$ trees in $\canM$. \end{dfn}
\begin{tm}\label{thm:tame_HOD_x} Let $M$ be a tame mouse satisfying ``$\omega_1$ exists'' and $\canM=\canM^M$. Then there is an $\alpha<\omega_1^M$ such that:
\begin{enumerate}
\item $\Sigma_{\canM\canM\alpha}\in M$; in fact, this strategy is definable over $\canM$ from the parameter $\alpha$,
\item\label{item:om_1_iter_enough} for every sound tame $\omega$-premouse $R$ with $M|\alpha\trianglelefteq R\in M$, if $M\models$``$R$ is above-$\alpha$, $(\omega,\omega_1)$-iterable'' then $R\triangleleft\canM$.
\end{enumerate}
Therefore:
\begin{enumerate}[label=--]
\item $\canM$ is definable over $(\mathcal{H}_{\omega_2})^M$ from the parameter $M|\alpha$, and
\item if $M\models\mathrm{PS}$ or $\univ{M}\models\mathrm{ZF}^-$ then $\mathbb{E}^M$ is definable over $\univ{M}$ from $M|\alpha$.
\end{enumerate}
\end{tm}
\begin{proof} By \cite{sile}, we may fix $\bar{R}\triangleleft M|\omega_1^M$ such that $\rho_\omega^{\bar{R}}=\omega$ and $\canM^M$ is above-$\mathrm{OR}^{\bar{R}}$, $\omega_1$-iterable in $M$, in fact via the restriction of the correct strategy $\Sigma_{\canM}$. That is, $\Sigma_{\canM\canM\alpha}\in M$ where $\alpha=\mathrm{OR}^{\bar{R}}$.
Given $R$ such that $\bar{R}\trianglelefteq R\triangleleft\canM$, $\Sigma_R^M$ denotes the restriction of this strategy to trees on $R$. We say that $(R,S)\in M$ is a \emph{conflicting pair} iff:
\begin{enumerate}[label=--]
\item $R,S$ are tame sound premice which project to $\omega$,
\item $\bar{R}\triangleleft R\triangleleft\canM$ and $\bar{R}\triangleleft S$ and $R|\omega_1^{R}=S|\omega_1^{S}$ but $R\neq S$, and
\item $S$ is $\omega_1$-iterable in $M$.
\end{enumerate}
If part \ref{item:om_1_iter_enough} of the theorem fails for every $\alpha<\omega_1^M$, note that for every such $\alpha$ there is a conflicting pair $(R,S)$ with $\alpha<\omega_1^R$. However, for the present we just assume that we have some conflicting pair and work with this, without assuming that part \ref{item:om_1_iter_enough} fails for every $\alpha$. So fix a conflicting pair $(R_0,S_0)$. Let $\Gamma_0$ be an $\omega_1^M$-strategy for $S_0$ in $M$. Working in $M$, we attempt to compare $R_0,S_0$, via $\Sigma^M_{R_0},\Gamma_0$, folding in extra extenders to ensure that for every limit stage $\lambda$ of the comparison, letting
\begin{enumerate}[label=--]
\item $\delta_\lambda=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$ and
\item $N_\lambda=M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$,
\end{enumerate}
we have that
\begin{enumerate}[label=\textup{(}$*$\arabic*\textup{)}]
\item $M|\delta_\lambda$ is generic for the meas-lim extender algebra of $N_\lambda$ at $\delta_\lambda$ and
\item\label{item:limit_definability} if $N_\lambda$ is not a Q-structure for $\delta_\lambda$ then $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda\subseteq M|\delta_\lambda$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda$ is definable over $M|\delta_\lambda$ from parameters (and therefore so is $N_\lambda$).\footnote{The statement that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda\subseteq M|\delta_\lambda$ is to be interpreted as saying that for each $\alpha<\lambda$ we have $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\alpha\in M|\delta_\lambda$, where $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\alpha$ incorporates all models $M^\mathcal{T}_\beta$ and embeddings $i^\mathcal{T}_{\beta\gamma}$ for $\beta\leq\gamma<\alpha$, and the tree structure ${<_\mathcal{T}}\!\upharpoonright\!\alpha$, etc., and likewise for $\mathcal{U}$. The definability condition adds the requirement that the sequence $\left<(\mathcal{T},\mathcal{U})\!\upharpoonright\!\alpha\right>_{\alpha<\lambda}$ is definable.}
\end{enumerate}
Note, however, that there need not actually be Woodin cardinals in $R_0,S_0$, and the trees might drop in model at points. To deal with this correctly, the folding in of extenders for genericity iteration (and other purposes) is done much as in \cite{backcomp}, and also in \cite[Definition 5.4]{iter_for_stacks}. We clarify below exactly how this is executed, along with ensuring the definability condition \ref{item:limit_definability}. We will define $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$ by induction on $\eta$.
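(As a reading aid only, and under the standard conventions for the extender algebra, which we do not redevelop here, we recall roughly the shape of the axioms in play: an extender $E$ on the relevant sequence, with critical point $\kappa$ and natural length $\nu_E$, induces axioms of the form
\[ \bigvee_{\alpha<\kappa}\varphi_\alpha\ \longleftrightarrow\ \bigvee_{\alpha<\nu_E}i_E\bigl(\left<\varphi_\beta\right>_{\beta<\kappa}\bigr)_\alpha, \]
and in the meas-lim variant of Definition \ref{dfn:meas-lim_extender_algebra} only those $E$ for which $\nu_E$ is an inaccessible limit of measurables are used; as in Example \ref{exm:M_1^sharp-closed_reals}, an extender is \emph{bad} at a given stage of a genericity iteration if it induces such an axiom which is false of the designated generic object.)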
Given $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$, if this constitutes a successful comparison (that is, $M^\mathcal{T}_\eta\trianglelefteq M^\mathcal{U}_\eta$ or vice versa), we stop at stage $\eta$ (and will then derive a contradiction via Claim \ref{clm:comparison_doesnt_terminate} below). Now suppose otherwise and let $F^\mathcal{T}_\eta,F^\mathcal{U}_\eta$ be the extenders witnessing the least disagreement between $M^\mathcal{T}_\eta,M^\mathcal{U}_\eta$ (as explained below, we might not use these extenders in $\mathcal{T},\mathcal{U}$, however). Of course, one of these extenders might be empty, but at least one is non-empty. Let $\ell_\eta=\mathrm{lh}(F^\mathcal{T}_\eta)$ or $\ell_\eta=\mathrm{lh}(F^\mathcal{U}_\eta)$, whichever is defined, and
\[ K_\eta=M^\mathcal{T}_\eta||\ell_\eta=M^\mathcal{U}_\eta||\ell_\eta. \]
We will define the comparison $(\mathcal{T},\mathcal{U})$ in certain blocks, during some of which we fold in short linear iterations. In order to ensure the definability condition \ref{item:limit_definability} above, initially we must linearly iterate to the point in $M$ which constructs $(R_0,S_0)$, and following certain limit stages $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$ ($\eta$ a limit ordinal) of the comparison, we will fold in a linear iteration out to a segment of $M$ which constructs $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$. So, we define a strictly increasing, continuous sequence $\left<\eta_\alpha\right>_{\alpha<\omega_1^M}$ of ordinals $\eta_\alpha$ such that either $\eta_\alpha=0$ or $\eta_\alpha$ is a limit, and simultaneously define
\[ (\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_\alpha+1).\]
Given a limit $\eta<\omega_1^M$, let $Q^\mathcal{T}_\eta=Q(\mathcal{T}\!\upharpoonright\!\eta,[0,\eta)_\mathcal{T})$ (the Q-structure $Q$ for $\delta_\eta$ with $Q\trianglelefteq M^\mathcal{T}_\eta$) and likewise $Q^\mathcal{U}_\eta=Q(\mathcal{U}\!\upharpoonright\!\eta,[0,\eta)_\mathcal{U})$. These exist as $R_0,S_0$ project to $\omega$ and are sound. Also let $Q^\mathcal{T}_0=R_0$ and $Q^\mathcal{U}_0=S_0$, and let $N_0=R_0|\omega_1^{R_0}=S_0|\omega_1^{S_0}$. We set $\eta_0=0$. Suppose we have defined $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_\alpha+1)$, so $\eta_\alpha<\omega_1^M$ and $\eta_\alpha=0$ or is a limit. We next define $\eta_{\alpha+1}$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_{\alpha+1}$, and hence $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_{\alpha+1}+1)$. In the definition we literally assume that we reach no $\eta<\eta_{\alpha+1}$ such that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$ is a successful comparison; if we do reach such an $\eta$ then we stop the construction there. There are three cases to consider.
\begin{casetwo}\label{case:Qs_inequal} $Q^\mathcal{T}_{\eta_\alpha}\neq Q^\mathcal{U}_{\eta_\alpha}$ (note this includes the case $\alpha=0$).
If $F^\mathcal{T}_{\eta_\alpha}$ is defined then $\mathrm{lh}(F^\mathcal{T}_{\eta_\alpha})\leq\mathrm{OR}(Q^\mathcal{T}_{\eta_\alpha})$, and likewise for $F^\mathcal{U}_{\eta_\alpha}$.
Thus, note that by tameness (or otherwise if $\alpha=0$), $\delta_{\eta_\alpha}$ is a strong cutpoint of $Q^\mathcal{T}_{\eta_\alpha}$ and of $Q^\mathcal{U}_{\eta_\alpha}$, so $\mathcal{T}\!\upharpoonright\![\eta_\alpha,\infty)$ will be based on $Q^\mathcal{T}_{\eta_\alpha}$ and above $\delta_{\eta_\alpha}$, and likewise $\mathcal{U}\!\upharpoonright\![\eta_\alpha,\infty)$. We insert a short linear iteration past the point where $M$ constructs $Q^\mathcal{T}_{\eta_\alpha},Q^\mathcal{U}_{\eta_\alpha}$, and hence (by Lemma \ref{lem:Q_computes_b}), the branches $[0,\eta_\alpha)_\mathcal{T}$ and $[0,\eta_\alpha)_\mathcal{U}$, if $\alpha>0$. Let $\eta_{\alpha+1}$ be the least limit ordinal $\eta<\omega_1^M$ such that $Q^\mathcal{T}_{\eta_\alpha},Q^\mathcal{U}_{\eta_\alpha}\in M|(\eta+\omega)$ (clearly if $\alpha>0$ then $\eta_\alpha\leq\delta_{\eta_\alpha}<\eta_{\alpha+1}$). Now $(\mathcal{T},\mathcal{U})\!\upharpoonright\![\eta_\alpha,\eta_{\alpha+1})$ is given as follows: Let $\eta\in[\eta_\alpha,\eta_{\alpha+1})$ and suppose we have defined $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$. Recall that $K_\eta$ was defined above. If $K_\eta$ has a ($K_\eta$-total) measurable $\mu>\delta_{\eta_\alpha}$ then, letting $\mu$ be least such, $E^\mathcal{T}_\eta=E^\mathcal{U}_\eta$ is the unique normal measure on $\mu$ in $\mathbb{E}^{K_\eta}$. Otherwise, $E^\mathcal{T}_\eta=F^\mathcal{T}_\eta$ and $E^\mathcal{U}_\eta=F^\mathcal{U}_\eta$. Note that if $\alpha=0$ then $\delta_{\eta_\alpha}=\omega_1^{N_{\eta_{\alpha+1}}}$, and if $\alpha>0$ then $N_{\eta_{\alpha+1}}\models$``$\delta_{\eta_\alpha}$ is Woodin'', and in either case, $N_{\eta_{\alpha+1}}\models$``there are no measurables or Woodins $>\delta_{\eta_\alpha}$''. So $Q^\mathcal{T}_{\eta_{\alpha+1}}=N_{\eta_{\alpha+1}}=Q^\mathcal{U}_{\eta_{\alpha+1}}$, and (by tameness) $N_{\eta_{\alpha+1}}$ has no extenders inducing meas-lim extender algebra axioms with index in $[\delta_{\eta_\alpha},\delta_{\eta_{\alpha+1}}]$.
\end{casetwo}
\begin{casetwo}\label{case:pc_Woodins} $N_{\eta_\alpha}\models$``There is a proper class of Woodins'' (so $Q^\mathcal{T}_{\eta_\alpha}=N_{\eta_\alpha}=Q^\mathcal{U}_{\eta_\alpha}$).
By tameness, it follows that $\delta_{\eta_\alpha}$ is a cutpoint (maybe not a strong cutpoint) of either $M^\mathcal{T}_{\eta_\alpha}$ or $M^\mathcal{U}_{\eta_\alpha}$,\footnote{That is, either $\mathcal{T}$ or $\mathcal{U}$ uses non-empty extenders cofinally below $\eta_\alpha$; if $\mathcal{T}$ does then $\delta_{\eta_\alpha}$ is a limit of Woodins of $M^\mathcal{T}_{\eta_\alpha}$, and likewise for $\mathcal{U}$.} and hence $(\mathcal{T},\mathcal{U})\!\upharpoonright\![\eta_\alpha,\infty)$ will be above $\delta_{\eta_\alpha}$.
In this case we need to insert a short linear iteration past the point in $M$ which constructs $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\alpha$ (we will have $\alpha=\eta_\alpha=\delta_{\eta_\alpha}$ and already have
\[ (\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta\in M|\eta_\alpha \]
for every $\eta<\eta_\alpha$, but it is not clear that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\alpha$ is actually definable over $M|\eta_\alpha$, as it is not clear that the branch choices of $\mathcal{U}$ are appropriately definable). So let $\eta<\omega_1^M$ be the least limit ordinal such that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\alpha\in M|(\eta+\omega)$ (we have $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\alpha\in\mathrm{HC}^M$ by assumption). Note then that
\[ [0,\eta_\alpha)_\mathcal{T},[0,\eta_\alpha)_\mathcal{U}\in M|(\eta+\omega) \]
by tameness. We set $\eta_{\alpha+1}=\max(\eta,\eta_\alpha+\omega)$. Now $(\mathcal{T},\mathcal{U})\!\upharpoonright\![\eta_\alpha,\eta_{\alpha+1})$ is constructed as in the previous case, and note that again, $N_{\eta_{\alpha+1}}$ has no measurables $>\delta_{\eta_\alpha}$ (maybe $\delta_{\eta_\alpha}$ itself is measurable; in order to ensure that we get a useful comparison, it is important here that we do not iterate at $\delta_{\eta_\alpha}$ itself during the interval $[\eta_\alpha,\eta_{\alpha+1})$).
\end{casetwo}
\begin{casetwo}\label{case:Qs_match} $Q^\mathcal{T}_{\eta_\alpha}=Q^\mathcal{U}_{\eta_\alpha}$ and $N_{\eta_\alpha}\models$``There is not a proper class of Woodins''.
We set $\eta_{\alpha+1}=\eta_\alpha+\omega$. Let $\eta\in[\eta_{\alpha},\eta_{\alpha+1})$ and suppose we have defined $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta+1)$. If $\eta=\eta_\alpha$ (and in fact in general),
\begin{equation}\label{eqn:Qs_match} Q^\mathcal{T}_{\eta_\alpha}=Q^\mathcal{U}_{\eta_\alpha}\triangleleft K_\eta.\end{equation}
If there is any $E\in\mathbb{E}^{K_\eta}$ such that $\nu_E$ is a $K_\eta$-inaccessible limit of $K_\eta$-measurables, and $E$ induces an extender algebra axiom which is false of $\mathbb{E}^M$, then set $E^\mathcal{T}_\eta=E^\mathcal{U}_\eta=$ the least such $E$. Otherwise set $E^\mathcal{T}_\eta=F^\mathcal{T}_\eta$ and $E^\mathcal{U}_\eta=F^\mathcal{U}_\eta$. Note then that
\[ \mathrm{OR}(Q^\mathcal{T}_{\eta_\alpha})=\mathrm{OR}(Q^\mathcal{U}_{\eta_\alpha})<\ell_{\eta_\alpha}\leq\ell_{\eta} \]
by line (\ref{eqn:Qs_match}), tameness and since $Q^\mathcal{T}_{\eta_\alpha}$ projects to $\delta_{\eta_\alpha}$.
\end{casetwo}
This completes all cases. Of course, limit stages ${<\omega_1^M}$ are taken care of by our strategies. This completes the definition of the comparison.
\begin{clmtwo} $\mathcal{T},\mathcal{U}$ are normal, and moreover, if $\alpha<\beta$ and [$E=E^\mathcal{T}_\alpha\neq\emptyset$ or $E=E^\mathcal{U}_\alpha\neq\emptyset$] and [$F=E^\mathcal{T}_\beta\neq\emptyset$ or $F=E^\mathcal{U}_\beta\neq\emptyset$], then $\mathrm{lh}(E)<\mathrm{lh}(F)$. \end{clmtwo}
\begin{proof} This is a straightforward induction.
\end{proof}
\begin{clmtwo}\label{clm:comparison_doesnt_terminate} For each $\alpha<\omega_1^M$, either $F^\mathcal{T}_\alpha$ or $F^\mathcal{U}_\alpha$ is defined, and hence, we get a comparison $(\mathcal{T},\mathcal{U})$ of length $\omega_1^M$. \end{clmtwo}
\begin{proof} Suppose not and let $\alpha$ be least such. So $M^\mathcal{T}_\alpha=M^\mathcal{U}_\alpha$, since $R_0,S_0$ are both sound and project to $\omega$. So letting $C=\mathfrak{C}_\omega(M^\mathcal{T}_\alpha)=\mathfrak{C}_\omega(M^\mathcal{U}_\alpha)$, there is $\beta+1<\mathrm{lh}(\mathcal{T},\mathcal{U})$ such that $\beta<_\mathcal{T}\alpha$ and, letting $\varepsilon=\mathrm{succ}^\mathcal{T}(\beta,\alpha]$, $(\varepsilon,\alpha]_\mathcal{T}\cap\mathscr{D}^\mathcal{T}=\emptyset$ and $C=M^{*\mathcal{T}}_\varepsilon\trianglelefteq M^\mathcal{T}_\beta$ and $E^\mathcal{T}_\beta\in\mathbb{E}_+^C$; and likewise there is $\gamma+1<\mathrm{lh}(\mathcal{T},\mathcal{U})$ such that $\gamma<_\mathcal{U}\alpha$ and, letting $\eta=\mathrm{succ}^\mathcal{U}(\gamma,\alpha]$, we have $(\eta,\alpha]_\mathcal{U}\cap\mathscr{D}^\mathcal{U}=\emptyset$ and $C=M^{*\mathcal{U}}_\eta\trianglelefteq M^\mathcal{U}_\gamma$ and $E^\mathcal{U}_\gamma\in\mathbb{E}_+^C$. Since $R_0\neq S_0$ but $R_0|\omega_1^{R_0}=S_0|\omega_1^{S_0}$, we have $C\neq R_0$ and $C\neq S_0$, so in fact, $C\triangleleft M^\mathcal{T}_\beta$ and $C\triangleleft M^\mathcal{U}_\gamma$. Now since $E^\mathcal{T}_\beta\in\mathbb{E}_+^C$ and $E^\mathcal{U}_\gamma\in\mathbb{E}_+^C$, but $E^\mathcal{T}_\beta$ is the least disagreement between $C$ and $M^\mathcal{T}_\alpha$, and $E^\mathcal{U}_\gamma$ is the least disagreement between $C$ and $M^\mathcal{U}_\alpha=M^\mathcal{T}_\alpha$, we must have $\beta=\gamma$ and $E^\mathcal{T}_\beta=E^\mathcal{U}_\beta$. Therefore $E=E^\mathcal{T}_\beta=E^\mathcal{U}_\beta$ was chosen either for genericity iteration purposes, or for short linear iteration purposes. We have $\beta<\alpha$, so either $F^\mathcal{T}_\beta$ or $F^\mathcal{U}_\beta$ is defined; suppose $F=F^\mathcal{T}_\beta$ is defined. Since this is the least disagreement between $M^\mathcal{T}_\beta,M^\mathcal{U}_\beta$, but $C\triangleleft M^\mathcal{T}_\beta$ and $C\triangleleft M^\mathcal{U}_\beta$, we have $\mathrm{OR}^C<\mathrm{lh}(F)$. We also have $\mathrm{lh}(E)\leq\mathrm{OR}^C$, and note that by how we chose $E$, $\nu_E$ is a cardinal of $K_\beta=M^\mathcal{T}_\beta||\mathrm{lh}(F)$. But $\mathrm{cr}(E^\mathcal{T}_{\varepsilon-1})<\nu_E$, and therefore $E^\mathcal{T}_{\varepsilon-1}$ is total over $K_\beta$, so
\[ M^{*\mathcal{T}}_\varepsilon=C\triangleleft M^\mathcal{T}_\beta|\mathrm{lh}(F)\trianglelefteq M^{*\mathcal{T}}_\varepsilon,\]
a contradiction.
\end{proof}
Let $N=N_{\omega_1^M}=M(\mathcal{T},\mathcal{U})$, so $\mathrm{OR}^N=\omega_1^M$.
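(Recall that, as usual, $\delta(\mathcal{T},\mathcal{U})$ denotes the supremum of the lengths of the extenders used in the comparison, and $M(\mathcal{T},\mathcal{U})$ denotes the common part model: the premouse of ordinal height $\delta(\mathcal{T},\mathcal{U})$ given by
\[ \bigcup_{\alpha}M^\mathcal{T}_\alpha||\mathrm{lh}(E^\mathcal{T}_\alpha)\ \cup\ \bigcup_{\alpha}M^\mathcal{U}_\alpha||\mathrm{lh}(E^\mathcal{U}_\alpha). \]
This is only a reminder of standard notation, not anything specific to the present construction.)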
\begin{clmtwo}\label{clm:no_pc_Woodins} $N\models$``There is not a proper class of Woodins''.\end{clmtwo}
\begin{proof} Otherwise, by tameness, we get $\mathcal{T}$- and $\mathcal{U}$-cofinal branches $b,c$. That is, for each $\delta<\omega_1^M$ such that $\delta$ is Woodin in $N$, $\delta$ is a cutpoint of $(\mathcal{T},\mathcal{U})$, meaning that there is no extender used in $(\mathcal{T},\mathcal{U})$ which overlaps $\delta$. But then letting $W$ be the set of all such $\delta$, $b=\bigcup_{\delta\in W}[0,\delta]_\mathcal{T}$ is a $\mathcal{T}$-cofinal branch, and likewise for $\mathcal{U}$. Now working in $M$, we argue much as in the usual proof of termination of comparison/genericity iteration, with one extra observation. We get some $\lambda<\mathrm{OR}^M$ and some sufficiently elementary $\pi:\bar{M}\to M|\lambda$ with the relevant objects ``in $\mathrm{rg}(\pi)$'' and $\bar{M}$ countable.\footnote{At worst we might have $\mathrm{OR}^M=\omega_1^M+\omega$ and $\lambda=\omega_1^M$, and in such a case we instead choose $n<\omega$ and some $x\in\canM$ such that $(\mathcal{T},\mathcal{U})$ is definable from $x$ over $\canM$, and let $\bar{M}$ be a countable $\Sigma_n$-elementary hull of $\canM$ including $x$.} Let $\kappa=\mathrm{cr}(\pi)$. Then $\kappa=\eta_\kappa=\delta_{\eta_\kappa}$. Let
\[ \beta+1=\mathrm{succ}^\mathcal{T}(\kappa,\omega_1^M)\text{ and }\gamma+1=\mathrm{succ}^\mathcal{U}(\kappa,\omega_1^M).\]
Then $E^\mathcal{T}_\beta,E^\mathcal{U}_\gamma$ are compatible through $\min(\nu(E^\mathcal{T}_\beta),\nu(E^\mathcal{U}_\gamma))$. The usual arguments for termination of comparison/genericity iteration show that $\beta=\gamma=\kappa$ and $E=E^\mathcal{T}_\kappa=E^\mathcal{U}_\kappa$ was chosen for short linear iteration purposes, and $\mathrm{cr}(E)=\kappa$. Since $N\models$``There is a proper class of Woodins'', $N_\kappa=N|\kappa$ satisfies the same. But since $\kappa=\eta_\kappa=\delta_{\eta_\kappa}$ and by the rules of choosing $E^\mathcal{T}_\kappa$, we therefore have $\mathrm{cr}(E)>\kappa$, a contradiction.\footnote{Of course, in a case like when $\mathrm{OR}^M=\omega_1^M+\omega$, all the calculations here need to be refined.}
\end{proof}
Using Claim \ref{clm:no_pc_Woodins} we may fix $\eta^*<\omega_1^M$ with $\eta^*$ above all Woodins of $N$.
\begin{clmtwo}\label{clm:Qs_eventually_match} For all limits $\lambda<\omega_1^M$ such that $\delta_\lambda>\eta^*$, we have $Q^\mathcal{T}_\lambda=Q^\mathcal{U}_\lambda$ and $Q^\mathcal{T}_\lambda\triangleleft M^\mathcal{T}_\lambda$ and $Q^\mathcal{U}_\lambda\triangleleft M^\mathcal{U}_\lambda$. \end{clmtwo}
\begin{proof} If $Q^\mathcal{T}_\lambda\neq Q^\mathcal{U}_\lambda$ then comparison would force us to use some extender within the Q-structures, and this would mean that $\delta_\lambda$ is Woodin in $N$, contradicting the choice of $\eta^*$. But then if, say, $Q^\mathcal{T}_\lambda=M^\mathcal{T}_\lambda$, then $M^\mathcal{T}_\lambda= M^\mathcal{U}_\lambda$, contradicting Claim \ref{clm:comparison_doesnt_terminate}.
\end{proof}
\begin{clmtwo}\label{clm:no_cofinal_branches} There is no $\mathcal{T}$-cofinal branch $b\in M$, and no $\mathcal{U}$-cofinal branch $c\in M$. \end{clmtwo}
\begin{proof} If both such $b$ and $c$ exist in $M$ then we can reach a contradiction much as in the proof of Claim \ref{clm:no_pc_Woodins}. Now suppose that we have such a branch $b\in M$, but not $c$. Let $Q=Q(\mathcal{T},b)$. If $Q=M^\mathcal{T}_b$ then, working in $M$, we can take a hull, and letting $\kappa$ be the resulting critical point, with $\eta^*<\kappa$, note that $Q^\mathcal{T}_\kappa=M^\mathcal{T}_\kappa$, contradicting Claim \ref{clm:Qs_eventually_match}. So $Q^\mathcal{T}_b\triangleleft M^\mathcal{T}_b$. We claim that
\[ \mathrm{branch}(\mathcal{U},Q^\mathcal{T}_b)\text{ yields a }\mathcal{U}\text{-cofinal branch }c\in M,\]
a contradiction. For assuming not, again working in $M$ we can take a hull, and letting $\kappa$ be the resulting critical point, note that
\[ \mathrm{branch}(\mathcal{U}\!\upharpoonright\!\kappa,Q^\mathcal{T}_\kappa)\text{ does not yield a }\mathcal{U}\!\upharpoonright\!\kappa\text{-cofinal branch,}\]
contradicting Claim \ref{clm:Qs_eventually_match} and Lemma \ref{lem:Q_computes_b}. If instead we have $c\in M$ but no such $b\in M$, the argument is symmetric.
\end{proof}
We will now give a more thorough analysis of the stages of the comparison, and how they relate to the Woodins of the final common model $N$ and to segments of $M$ which project to $\omega$. Let $\left<\beta_\gamma\right>_{\gamma<\Omega}$ enumerate the Woodin cardinals of $N$ in increasing order, and let $\beta_\Omega=\omega_1^M$. Let
\[ \alpha_\gamma=\sup_{\gamma'<\gamma}\beta_{\gamma'},\]
so $\alpha_\gamma<\beta_\gamma$ and either $\alpha_\gamma=0$ or $\alpha_\gamma$ is Woodin or a limit of Woodins in $N$, and $\alpha_\Omega$ is the supremum of all Woodins of $N$. We will show below that for each $\gamma$, we have [if $\gamma>0$ then $\alpha_\gamma=\eta_{\alpha_\gamma}=\delta_{\eta_{\alpha_\gamma}}$], and either:
\begin{enumerate}[label=--]
\item $\gamma=\alpha_\gamma=0$ (and recall that Case \ref{case:Qs_inequal} attains at stage $0$ of the comparison), and we let $\chi_0=\eta_1$, that is, $\chi_0$ is the least $\chi$ such that $(R_0,S_0)\in M|(\chi+\omega)$, or
\item $\gamma$ is a successor (so $\alpha_\gamma=\beta_{\gamma-1}$ is Woodin in $N$), and Case \ref{case:Qs_inequal} attains at stage $\alpha=\alpha_\gamma$ of the comparison, and we let $\chi_\gamma=\eta_{\alpha_\gamma+1}$, that is, $\chi_\gamma$ is the least $\chi$ such that $Q^\mathcal{T}_{\eta_{\alpha_\gamma}},Q^\mathcal{U}_{\eta_{\alpha_\gamma}}\in M|(\chi+\omega)$, or
\item $\gamma$ is a limit (so $\alpha_\gamma$ is a limit of Woodins of $N$), and Case \ref{case:pc_Woodins} attains at stage $\alpha=\alpha_\gamma$ of the comparison, and we let $\chi_\gamma=\eta_{\alpha_\gamma+1}$, that is, $\chi_\gamma=\max(\chi,\eta_{\alpha_\gamma}+\omega)$ where $\chi$ is least such that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_{\alpha_\gamma}\in M|(\chi+\omega)$.
\end{enumerate}
\begin{clmtwo}\label{clm:thorough_analysis} Let $\gamma\leq\Omega$.
Then we have:
\begin{enumerate}
\item\label{item:alpha_gamma_closure} $\alpha_\gamma=\eta_{\alpha_\gamma}$, and if $\gamma>0$ then $\alpha_\gamma=\delta_{\eta_{\alpha_\gamma}}$,
\item\label{item:case_for_alpha_gamma} Case \ref{case:Qs_inequal} or Case \ref{case:pc_Woodins} attains at stage $\alpha_\gamma$ of the comparison, according to the discussion above,
\item\label{item:param_projs_to_om} $M|\chi_\gamma$ projects to $\omega$, and if $\gamma>0$ then $M|\alpha_\gamma$ has largest cardinal $\omega$,
\item\label{item:beta_gamma_closure} $\beta_\gamma=\eta_{\beta_\gamma}=\delta_{\eta_{\beta_\gamma}}$,
\item\label{item:case_for_beta_gamma} if $\gamma<\Omega$ then Case \ref{case:Qs_inequal} attains at stage $\beta_\gamma$ of the comparison,
\end{enumerate}
and for every limit $\zeta\in[\alpha_\gamma,\beta_\gamma]$, if $N_\zeta$ is not a Q-structure for $\delta_\zeta$ then:
\begin{enumerate}[resume*]
\item\label{item:zeta_closure} $\zeta=\eta_\zeta=\delta_{\eta_\zeta}$, and if $\zeta>\alpha_\gamma$ then $\zeta>\chi_\gamma$,
\item\label{item:M|zeta_lgcd_om} $M|\zeta\models\mathrm{ZFC}^-$ and has largest cardinal $\omega$,
\item $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta\subseteq M|\zeta$,
\item if $\zeta>\alpha_\gamma$ then $x=(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha_\gamma+1)\in M|\zeta$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$ is definable over $M|\zeta$ from the parameter $x$,
\item\label{item:M|zeta_generic} $M|\zeta$ is $N_\zeta$-generic for the meas-lim extender algebra of $N_\zeta$ at $\zeta$,
\item\label{item:Q_is_P-con_of_projector} $Q^\mathcal{T}_\zeta$ is the output of the P-construction (see \cite{sile}) of $M|\xi$ over $N_\zeta$, where $\xi$ is least such that $\xi\geq\zeta$ and $\rho_\omega^{M|\xi}=\omega$ (so in fact $\xi>\zeta$),
\item\label{item:Qs_matching_below_beta_gamma} if $\alpha_\gamma<\zeta<\beta_\gamma$ then $Q^\mathcal{T}_\zeta=Q^\mathcal{U}_\zeta\triangleleft N$,
\item\label{item:Qs_not_matching_at_beta_gamma} if $\zeta=\beta_\gamma<\omega_1^M$ then $Q^\mathcal{T}_\zeta\neq Q^\mathcal{U}_\zeta$ and $Q^\mathcal{T}_\zeta,Q^\mathcal{U}_\zeta\ntrianglelefteq N$.
\end{enumerate}
\end{clmtwo}
\begin{proof} By induction on $\gamma$, with a sub-induction on $\zeta$. Also note that if $N_\zeta$ is a Q-structure for $\delta_\zeta$ then $[0,\zeta)_\mathcal{T}$ and $[0,\zeta)_\mathcal{U}$ are easily definable from $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$. Note then that parts \ref{item:alpha_gamma_closure} and \ref{item:case_for_alpha_gamma} follow easily by induction from parts \ref{item:beta_gamma_closure} and \ref{item:case_for_beta_gamma} (we have $0=\alpha_0=\eta_{\alpha_0}$ by definition, and $\delta_0=\omega_1^{R_0}$). Consider part \ref{item:param_projs_to_om}. If $\gamma=0$ this is just because $R_0,S_0$ are sound and project to $\omega$. Suppose $\gamma>0$. Then $M|\alpha_\gamma$ has largest cardinal $\omega$ by induction. Clearly $\rho_\omega(M|\chi_\gamma)\leq\alpha_\gamma$, so suppose that $\rho_\omega(M|\chi_\gamma)=\alpha_\gamma$.
Then $\alpha_\gamma=\omega_1^{\mathcal{J}(M|\chi_\gamma)}$ and by Lemma \ref{lem:Q_computes_b} we have
\[ (\mathcal{T}\!\upharpoonright\!\alpha_\gamma,b),(\mathcal{U}\!\upharpoonright\!\alpha_\gamma,c)\in\mathcal{J}(M|\chi_\gamma) \]
where $b=[0,\alpha_\gamma)_\mathcal{T}$ and $c=[0,\alpha_\gamma)_\mathcal{U}$. But then working inside $\mathcal{J}(M|\chi_\gamma)$ we can use parts of the proofs of Claims \ref{clm:no_pc_Woodins} and \ref{clm:no_cofinal_branches} to reach a contradiction.
Now it suffices to verify parts \ref{item:zeta_closure}--\ref{item:Qs_not_matching_at_beta_gamma} for each limit $\zeta\in[\alpha_\gamma,\beta_\gamma]$, since then parts \ref{item:beta_gamma_closure} and \ref{item:case_for_beta_gamma} follow from parts \ref{item:zeta_closure} and \ref{item:Qs_not_matching_at_beta_gamma}. If $\zeta=\alpha_\gamma$ then the required facts already hold by induction if $\gamma$ is a successor or trivially if $\gamma$ is a limit, as then $N_\zeta\models$``There is a proper class of Woodins'', so $N_\zeta$ is a Q-structure for itself. So suppose $\zeta>\alpha_\gamma$ and that $N_\zeta$ is not a Q-structure for itself, that is, $N_\zeta\models\mathrm{ZFC}$ and $\mathcal{J}(N_\zeta)\models$``$\delta_\zeta=\mathrm{OR}(N_\zeta)$ is Woodin''. So $N_\zeta\models$``There is a proper class of measurables'', so note $\zeta=\eta_\varphi$ for some $\varphi>0$, and since we integrated genericity iteration into $(\mathcal{T},\mathcal{U})$, part \ref{item:M|zeta_generic} (genericity of $M|\delta_\zeta$) holds, so $\delta_\zeta$ is regular in $\mathcal{J}(N_\zeta)[M|\delta_\zeta]$, hence regular in $\mathcal{J}(M|\delta_\zeta)$, so $M|\delta_\zeta\models\mathrm{ZFC}^-$. Let us verify $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta\subseteq M|\delta_\zeta$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$ is definable over $M|\delta_\zeta$ from the parameter $x=(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha_\gamma+1)$. We have $\chi_\gamma<\delta_\zeta$ because $N_\zeta$ is not a Q-structure for itself. So $x\in M|\delta_\zeta$. But then working in $M|\delta_\zeta$, which satisfies $\mathrm{ZFC}^-$, we can define $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$, because the extender selection algorithm can be executed in $M|\delta_\zeta$, in particular since we only need to make $M|\delta_\zeta$ generic in that interval, and at non-trivial limit stages $\zeta'\in(\alpha_\gamma,\zeta)$ (when $N_{\zeta'}$ is not a Q-structure for itself) we use the inductively established fact that $Q^\mathcal{T}_{\zeta'}=Q^\mathcal{U}_{\zeta'}$ is computed by P-construction from some proper segment of $M$, and in fact some proper segment of $M|\delta_\zeta$ (as $Q^\mathcal{T}_{\zeta'}\triangleleft N$ in this case). Now since $M|\delta_\zeta\models\mathrm{ZFC}^-$ and $\delta_\zeta=\delta_{\eta_\varphi}$, it follows that $\varphi=\delta_{\eta_\varphi}=\delta_\zeta=\zeta$. It also follows that $\omega$ is the largest cardinal of $M|\zeta=M|\delta_\zeta$, as otherwise working in $M|\zeta$, which then satisfies ``$\omega_1$ exists'', we can establish a contradiction to termination of comparison/genericity iteration much as before.
So $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta\subseteq M|\zeta$ and both $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$ and $N_\zeta$ are definable from $x$ over $M|\zeta$, and we have extender algebra genericity as stated earlier. Therefore we are in a position to form the P-construction of segments of $M$ above $N_\zeta$. Let $\xi$ be least such that $\xi\geq\zeta$ and $\rho_\omega^{M|\xi}=\omega$. Given $\eta\in[\zeta,\xi]$ let $P_\eta$ be the P-construction of $M|\eta$ over $N_\zeta$, if it exists. Now if $P_\xi$ exists then it must be a Q-structure for $\zeta$. For otherwise, we have that $\zeta$ is Woodin in $\mathcal{J}(P_\xi)$, and $M|\zeta$ is generic for the meas-lim extender algebra of $\mathcal{J}(P_\xi)$ at $\zeta$, so $\zeta$ is regular in $\mathcal{J}(P_\xi)[M|\zeta]$, but then $\zeta$ is regular in $\mathcal{J}(M|\xi)$, contradicting that $\rho_\omega^{M|\xi}=\omega$. Now suppose there is $\eta<\xi$ such that $P_\eta$ exists and either projects $<\zeta$ or is a Q-structure for $N_\zeta$. If $P_\eta$ projects $<\zeta$ then note that $P_\eta=M^\mathcal{T}_\zeta$. But then working in $M|\xi$, noting that $\zeta=\omega_1^{M|\xi}$, we can reach a contradiction as in the proof of Claim \ref{clm:no_cofinal_branches}. So $\rho_\omega^{P_\eta}=\zeta$ and $P_\eta$ is a Q-structure for $\zeta$. But here we also reach a contradiction as in the proof of Claim \ref{clm:no_cofinal_branches}. It follows that $P_\xi$ exists and $P_\xi=Q^\mathcal{T}_\zeta$. Finally, if $\zeta<\beta_\gamma$ then $Q^\mathcal{T}_\zeta=Q^\mathcal{U}_\zeta\triangleleft N$, since $\zeta$ is not Woodin in $N$ by assumption; and if $\zeta=\beta_\gamma<\omega_1^M$ then $Q^\mathcal{T}_\zeta\neq Q^\mathcal{U}_\zeta$ and hence, $Q^\mathcal{T}_\zeta\ntrianglelefteq N$, since if $Q^\mathcal{T}_\zeta=Q^\mathcal{U}_\zeta$ then Case \ref{case:Qs_match} would attain at stage $\zeta$ (recall $\alpha_\gamma<\zeta$) and then we would have $Q^\mathcal{T}_\zeta\triangleleft N$, contradicting the fact that $\beta_\gamma$ is Woodin in $N$.
\end{proof}
\begin{clmtwo}\label{clm:not_Q_and_ZFCmin,V=HC} Let $\gamma\leq\Omega$ and $\zeta\in(\alpha_\gamma,\beta_\gamma]$ be a limit. Then the following are equivalent:
\begin{enumerate}[label=\textup{(}\roman*\textup{)}]
\item\label{item:not_Q-structure} $\mathcal{J}(N_\zeta)\models$``$\delta_\zeta$ is Woodin'' (equivalently, $N_\zeta$ is not a Q-structure for $\delta_\zeta$),
\item\label{item:ZFCmin_and_closure} $M|\zeta\models\mathrm{ZFC}^-\ \&\ V=\mathrm{HC}$ and $\zeta>\chi_\gamma$,
\item\label{item:ZFC_min_etc} $M|\zeta\models\mathrm{ZFC}^-\ \&\ V=\mathrm{HC}$ and $\zeta=\eta_\zeta=\delta_{\eta_\zeta}$.
\end{enumerate}
\end{clmtwo}
\begin{proof} We have that \ref{item:ZFC_min_etc} implies \ref{item:ZFCmin_and_closure}, because $\eta_{\alpha_\gamma+1}\geq\chi_\gamma$. And \ref{item:not_Q-structure} implies \ref{item:ZFC_min_etc} by the previous claim. So it suffices to see that \ref{item:ZFCmin_and_closure} implies \ref{item:not_Q-structure}. So suppose $\zeta>\chi_\gamma$ and $M|\zeta\models\mathrm{ZFC}^-$, where $\zeta\in(\alpha_\gamma,\beta_\gamma]$.
Then as in the proof of Claim \ref{clm:thorough_analysis}, and because $M|\zeta\models$``$V=\mathrm{HC}$'', $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta\subseteq M|\zeta$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\zeta$ is definable from the parameter $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha_\gamma+1)$ over $M|\zeta$ (the fact that $M|\zeta\models$``$V=\mathrm{HC}$'' ensures that at each non-trivial intermediate limit stage $\zeta'$, the P-construction computing $Q^\mathcal{T}_{\zeta'}$ is performed by a proper segment of $M|\zeta$, using part \ref{item:Q_is_P-con_of_projector} of Claim \ref{clm:thorough_analysis}). So $\delta_\zeta=\zeta$ and $N_\zeta$ is a class of $M|\zeta$. Now if $\mathcal{J}(N_\zeta)\models$``$\zeta$ is not Woodin'' then $[0,\zeta)_\mathcal{T}$ and $[0,\zeta)_\mathcal{U}$ are in $M|(\zeta+\omega)$, but $\zeta=\omega_1^{M|(\zeta+\omega)}$ since $M|\zeta\models\mathrm{ZFC}^-$, so we can again run the usual proof working in $M|(\zeta+\omega)$ for a contradiction.
\end{proof}
Write $\mathcal{T}_0=\mathcal{T}$ and $\mathcal{U}_0=\mathcal{U}$. We now enter a proof by contradiction, by assuming that part \ref{item:om_1_iter_enough} of the theorem fails for every $\alpha<\omega_1^M$. Then we can fix a conflicting pair $(R_1,S_1)$ with $\chi_\Omega<\zeta=_{\mathrm{def}}\omega_1^{R_1}=\omega_1^{S_1}$. So $R_1\triangleleft M$ and $R_1|\zeta=M|\zeta=S_1|\zeta$. We have
\[ M|\zeta\models\mathrm{ZFC}^-\ \&\ V=\mathrm{HC},\]
so by Claim \ref{clm:not_Q_and_ZFCmin,V=HC}, $N_\zeta$ is not a Q-structure for itself, so the conclusions of Claim \ref{clm:thorough_analysis} hold for $\zeta$. Repeat the foregoing comparison with $(R_1,S_1)$ replacing $(R_0,S_0)$, producing trees $\mathcal{T}_1$ on $R_1$ and $\mathcal{U}_1$ on $S_1$. Continue in this manner, producing a sequence
\[ \left<R_n,\mathcal{T}_n,S_n,\mathcal{U}_n\right>_{n<\omega}. \]
(It is not relevant whether the sequence is in $M$.) Now $\mathcal{T}_1$ is a tree on $R_1$, above $\zeta=\omega_1^{R_1}$. By Claim \ref{clm:thorough_analysis}, $Q^{\mathcal{T}_0}_\zeta$ is the output of the P-construction of $R_1$ above $N_\zeta$. So we can translate $\mathcal{T}_1$ into a tree $\mathcal{T}_1'$ on $Q^{\mathcal{T}_0}_\zeta$; note this tree is above $\zeta$. So
\[ \mathcal{X}_1=\mathcal{T}_0\!\upharpoonright\!(\zeta+1)\ \widehat{\ }\ \mathcal{T}_1' \]
is a correct normal tree on $R_0$, $\zeta$ is a strong cutpoint of $\mathcal{X}_1$, and $\mathcal{X}_1$ drops in model at $\zeta+1$, as $\mathcal{X}_1\!\upharpoonright\![\zeta,\infty)$ is based on $Q^\mathcal{T}_\zeta$ and $Q^\mathcal{T}_\zeta\triangleleft M^\mathcal{T}_\zeta$ by Claim \ref{clm:thorough_analysis}.
Continue recursively, defining $\left<\mathcal{X}_n\right>_{n<\omega}$, by setting $\zeta_n=\omega_1^{R_{n+1}}$, and translating $\mathcal{T}_{n+1}$ (on $R_{n+1}$) into a tree $\mathcal{T}_{n+1}'$ on $Q(\mathcal{X}_n,[0,\zeta_n)_{\mathcal{X}_n})\triangleleft M^{\mathcal{X}_n}_{\zeta_n}$ (of course there is a natural finite sequence of intermediate translations between $\mathcal{T}_{n+1}$ and $\mathcal{T}_{n+1}'$), and setting
\[ \mathcal{X}_{n+1}=\mathcal{X}_n\!\upharpoonright\!(\zeta_n+1)\ \widehat{\ }\ \mathcal{T}_{n+1}'.\]
Let $\mathcal{X}=\liminf_{n<\omega}\mathcal{X}_n$. Then $\mathcal{X}$ is a correct normal tree on $R_0$. But it has a unique cofinal branch, which drops in model infinitely often, a contradiction. This completes the proof of the theorem. \end{proof}
\begin{dfn} Let $N$ be a premouse, $\mathcal{T}$ a limit length iteration tree on some $M\triangleleft N$, and $\mathrm{OR}^M<\delta\leq\mathrm{OR}^N$. We say that $\mathcal{T}$ is \emph{$N|\delta$-optimal} iff:
\begin{enumerate}[label=--]
\item $\delta=\delta(\mathcal{T})$,
\item $\mathcal{T}\subseteq N||\delta$ and $\mathcal{T}$ is definable from parameters over $N||\delta$, and
\item $N||\delta$ is generic for the $\delta^{<\omega}$-generator meas-lim extender algebra of $M(\mathcal{T})$.\qedhere
\end{enumerate}
\end{dfn}
For the remainder of this section we restrict our attention to tame mice.
\begin{dfn}\label{dfn:iterability-good} Let $N$ be a tame pm satisfying ``$\mathrm{ZFC}^-+V=\mathrm{HC}$''. $\Lambda_{\mathrm{t}}^N$ ($\mathrm{t}$ for \emph{tame}) denotes the partial putative $(\omega,\mathrm{OR}^N)$-iteration strategy $\Lambda$ for $N$, defined over $N$ as follows. We define $\Lambda$ by induction on the length of trees. Let $\mathcal{T}\in N$. We say that $\mathcal{T}$ is \emph{necessary} iff $\mathcal{T}$ is an iteration tree via $\Lambda$, of limit length, and letting $\delta=\delta(\mathcal{T})$, then either $M(\mathcal{T})$ is a Q-structure for itself, or [$\mathcal{T}$ is $N|\delta$-optimal and either $\mathrm{lgcd}(N|\delta)=\omega$ or $\mathrm{lgcd}(N|\delta)=\omega_1^{N|\delta}$].\footnote{The restriction on $\mathrm{lgcd}(N|\delta)$ could be reduced, but we only need it in this case; note that it ensures that $\delta$ is a strong cutpoint of $N|\delta$.} Every $\mathcal{T}\in\mathrm{dom}(\Lambda)$ is necessary. Let $\mathcal{T}$ be necessary. Then $\Lambda(\mathcal{T})=b$ iff $b\in N$ and either $Q(\mathcal{T},b)=M(\mathcal{T})$ or there is $R\triangleleft N$ such that $\delta$ is a strong cutpoint of $R$ and $Q(\mathcal{T},b)$ is the output of the P-construction of $R$ above $M(\mathcal{T})$.\footnote{That is, the P-construction $Q$ of $R$ above $M(\mathcal{T})$ is defined, $\mathrm{OR}^Q=\mathrm{OR}^{R}$ and $Q=Q(\mathcal{T},b)$.} We say that $N$ is \emph{tame-iterability-good} iff all putative trees via $\Lambda^N_{\mathrm{t}}$ have wellfounded models, and $\Lambda^N_{\mathrm{t}}(\mathcal{T})$ is defined for all necessary $\mathcal{T}$.
\end{dfn}
\begin{rem} Note that because $N\models$``$V=\mathrm{HC}$'', every tree $\mathcal{T}$ on $N$ drops immediately to some proper segment, and $Q(\mathcal{T},b)$ exists for every limit length $\mathcal{T}$ and $\mathcal{T}$-cofinal branch $b$ with $M^\mathcal{T}_b$ wellfounded. By Lemma \ref{lem:Q_computes_b} (local branch definability), $\{b=\Lambda(\mathcal{T})\}$ is uniformly $\Sigma_1^{\mathcal{J}(Q^*)}(\{\mathcal{T}\})$, where $Q=Q(\mathcal{T},b)$ and either $Q^*$ is the least segment of $N$ such that $\mathcal{T}$ is definable from parameters over $Q^*$, when $Q=M(\mathcal{T})$, or $Q^*$ is the (least) segment of $N$ whose P-construction above $M(\mathcal{T})$ reaches $Q$, when $M(\mathcal{T})\triangleleft Q$. In particular, $\Lambda$ is $\Sigma_1$-definable over $N$, and \emph{tame-iterability-good} is expressed by a first-order formula $\varphi$ (modulo $\mathrm{ZFC}^-$). \end{rem}
The following lemma is proved as in \cite{sile}:
\begin{lem} Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse satisfying either $\mathrm{ZFC}^-$ or ``$\omega_1$ exists''. Then $\canM^M$ is tame-iterability-good and $\Lambda_{\mathrm{t}}^{\canM^M}\subseteq\Sigma_{\canM^M}$. \end{lem}
Miedzianowski and Goldberg asked \cite{gironaconfproblems} about the nature of grounds of mice via specific kinds of forcings, in particular $\sigma$-closed and $\sigma$-distributive ones. A partial result on this was established in \cite[\S11]{fsfni}, and we now improve this for tame mice modelling $\mathrm{ZFC}$:
\begin{tm}\label{tm:tame_grounds} Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse modelling $\mathrm{ZFC}$. Then $M$ has no proper ground $W$ via a forcing $\mathbb P\in W$ such that $W\models$``$\mathbb P$ is strategically $\sigma$-closed''. \end{tm}
\begin{proof} By \cite[\S11]{fsfni}, it suffices to see that $\mathbbm{m}=\canM^M\in W$, and of course we already have $\canM\subseteq W$. So suppose otherwise. We will reach a contradiction via a slight variant of the construction for Theorem \ref{thm:tame_HOD_x}, so we just give a sketch. Fix a name $\dot{\mathbbm{m}}\in W$ for $\mathbbm{m}$. We may fix $\xi<\omega_1^M=\omega_1^W$ and a name $\dot{\Sigma}\in M$ such that in $W$, $\mathbb P$ forces ``the universe is that of a tame premouse $N$ such that $\canM^N=\dot{\canM}$ is tame-iterability-good and $\dot{\Sigma}$ is an above-$\check{\xi}$-$(\omega,\omega_1)$-strategy for $\dot{\canM}$, $\dot{\Sigma}$ is consistent with $\Lambda_{\mathrm{t}}^{\dot{\canM}}$, $N=\mathrm{cs}(\dot{M})$, $N$ satisfies various first order facts established here and elsewhere for tame mice, and $\dot{\mathbbm{m}}\notin\check{V}$''. Work in $W$. Fix a strategy $\Psi$ witnessing that $\mathbb P$ is strategically $\sigma$-closed. Pick some $(p_0,q_0)\in\mathbb P\times\mathbb P$ and some conflicting pair $(R_0,S_0)$ such that $p_0\forces_\mathbb P$``$R_0\triangleleft\canM$'' and $q_0\forces_\mathbb P$``$S_0\triangleleft\canM$'' (where \emph{conflicting pair} is defined like before, but with $R_0|\xi=S_0|\xi$ and $\xi<\omega_1^{R_0}=\omega_1^{S_0}$). Let $p'_0=\Psi(\left<p_0\right>)$. Let $G\times H$ be $(W,\mathbb P\times\mathbb P)$-generic with $(p_0',q_0)\in G\times H$.
Note that $\mathbb P\forces$``$\check{\mathbb P}$ is strategically $\sigma$-closed, as witnessed by $\check{\Psi}$''. Work in $W[G,H]$. It follows that $\mathrm{HC}^{W[G,H]}=\mathrm{HC}^{W[G]}=\mathrm{HC}^{W[H]}=\mathrm{HC}^M$, and therefore $\dot{\Sigma}_G,\dot{\Sigma}_H$ are above-$\xi$-$(\omega,\omega_1)$-strategies in $W[G,H]$. Compare $R_0,S_0$ in the manner of the previous proof, producing trees $(\mathcal{T},\mathcal{U})$, via $\dot{\Sigma}_G,\dot{\Sigma}_H$, except that we fold in $\dot{\mathbbm{m}}_G$-genericity instead of $\canM$-genericity. As before, the comparison lasts $\omega_1^M$ stages and $M(\mathcal{T},\mathcal{U})$ has boundedly many Woodins. Therefore there is some $\xi_1<\omega_1^M$ after which $\mathcal{T},\mathcal{U}$ agree about all Q-structures, and these Q-structures are given by P-construction using proper segments of $\dot{\mathbbm{m}}_G$. Let $\dot{\mathcal{T}}_0,\dot{\mathcal{U}}_0\in W$ be $\mathbb P\times\mathbb P$-names for $\mathcal{T},\mathcal{U}$. Work in $W$. Let $(p_0'',q_0'')\leq(p_0',q_0)$ be such that $p_0''\forces$``$\dot{\mathcal{T}}_0$ is a tree via $\dot{\Sigma}$ of length $\omega_1$, and for every $\delta\in(\check{\xi}_1,\omega_1)$, if $\dot{\canM}|\delta\models\mathrm{ZF}^-+V=\mathrm{HC}$ then $\delta=\delta(\dot{\mathcal{T}}_0\!\upharpoonright\!\delta)$ and $Q=Q(\dot{\mathcal{T}}_0\!\upharpoonright\!\delta,[0,\delta)_{\dot{\mathcal{T}}_0})$ is given by P-construction of $Q'$ above $M(\dot{\mathcal{T}}_0)$, where $Q'\triangleleft\dot{\canM}$ is the least $\omega$-premouse such that $\delta\leq\mathrm{OR}^{Q'}$, and $Q\triangleleft M^{\dot{\mathcal{T}}_0}_\delta$, and $\dot{\mathcal{T}}_0\!\upharpoonright\!(\delta+1)$ is via $\dot{\Sigma}$''. Pick some $(p_1,q_1)\in\mathbb P\times\mathbb P$ and some $(R_1,S_1)$ such that $p_1,q_1\leq p_0''$, and $(R_1,S_1)$ is a conflicting pair with $R_1|\xi_1=S_1|\xi_1$ and $\xi_1<\omega_1^{R_1}=\omega_1^{S_1}$ and $p_1\forces$``$\check{R_1}\triangleleft\dot{\canM}$'' and $q_1\forces$``$\check{S_1}\triangleleft\dot{\canM}$''. So $(p_1,q_0'')\forces$``With $\delta=\omega_1^{\check{R_1}}$, and $Q'\triangleleft\dot{\mathbbm{m}}$ as above, then $Q'=\check{R_1}$'', and likewise for $(q_1,q_0'')$ and $S_1$. So let $p_1'=\Psi(p_0,p_0',p_1)$. Carry on in this way, much as before, but also producing the sequence $\left<p_n,p_n'\right>_{n<\omega}$ via $\Psi$. We can therefore find $p_\omega\in\mathbb P$ with $p_\omega\leq p_n$ for all $n<\omega$. But then $p_\omega$ forces the existence of a tree via $\dot{\Sigma}$ whose only cofinal branch has infinitely many drops, a contradiction. \end{proof}
\section{Candidates and their extensions}\label{sec:candidates}
We now prepare for the proof of Theorem \ref{thm:E_almost_def_tame_L[E]}. The proof will use a combination of the methods of the previous section with those of \cite{V=HODX}. But nothing in this section requires tameness, and what we establish will also be used in \S\ref{sec:HOD_in_non-tame}.
\begin{dfn}\label{dfn:candidate,Ppp} Let $M\in\mathrm{pm}_1$.
We say that $\mathbbm{n}$ is an \emph{$M$-candidate} iff $\mathbbm{n}\in M$, $\mathbbm{n}$ is a premouse with $\univ{\mathbbm{n}}=\mathrm{HC}^M$, and every initial segment of $\mathbbm{n}$ satisfies $(n+1)$-condensation for every $n<\omega$. Let $\mathbbm{p},\mathbbm{n}$ be $M$-candidates and $\alpha<\omega_1^M$. We say that $\mathbbm{p},\mathbbm{n}$ \emph{converge at $\alpha$} iff: \begin{enumerate}[label=--] \item $\univ{\mathbbm{p}|\alpha}=\univ{\mathbbm{n}|\alpha}$ (hence $\omega_1^{\mathbbm{p}|\alpha}=\omega_1^{\mathbbm{n}|\alpha}$), \item $\mathbbm{p}|\alpha,\mathbbm{n}|\alpha$ are inter-definable from parameters (that is, $\mathbb{E}_+^{\mathbbm{p}|\alpha}$ is definable over $\mathbbm{n}|\alpha$ from parameters and likewise $\mathbb{E}_+^{\mathbbm{n}|\alpha}$ over $\mathbbm{p}|\alpha$), \item $\rho_\omega^{\mathbbm{p}|\alpha}\leq\omega_1^{\mathbbm{p}|\alpha}$ (and note $\rho_\omega^{\mathbbm{n}|\alpha}=\rho_\omega^{\mathbbm{p}|\alpha}$). \end{enumerate} We say that $\mathbbm{p},\mathbbm{n}$ \emph{$\omega$-converge at $\alpha$} iff $\mathbbm{p},\mathbbm{n}$ converge at $\alpha$ and $\rho_\omega^{\mathbbm{p}|\alpha}=\omega$. Let $\mathbbm{p},\mathbbm{n}$ be $M$-candidates. We write $\mathbbm{p}\sim_\alpha \mathbbm{n}$ iff $\mathbbm{p},\mathbbm{n}$ converge at $\alpha$ and \[ \mathbb{E}^\mathbbm{p}\!\upharpoonright\!(\alpha,\omega_1^M)=\mathbb{E}^\mathbbm{n}\!\upharpoonright\!(\alpha,\omega_1^M).\] Let $\mathscr{P}^M=\{\mathbbm{n}\in M\bigm|\mathbbm{n}\text{ is an }M\text{-candidate and }\exists\alpha<\omega_1^M\ [\mathbbm{n}\sim_\alpha\canM^M]\}$. \end{dfn} Note that if $M\in\mathrm{pm}_1$ then $\mathscr{P}^M\in M$, and for each $N\in\mathscr{P}^M$, we have $\univ{N}=\mathrm{HC}^M$ and $N$ is $\Sigma_1$-definable from parameters over $\canM^M$. \begin{dfn} Let $M\in\mathrm{pm}_1$ with either $M\models\mathrm{PS}$ or $\univ{M}\models\mathrm{ZFC}^-$. Work in $\univ{M}$ and let $\mathbbm{n}$ be a candidate. If the inductive condensation stack $S$ above $\mathbbm{n}$ (see \cite[Definition 3.12]{V=HODX}) has universe $V$, then we define $\mathrm{cs}(\mathbbm{n})=S$; otherwise $\mathrm{cs}(\mathbbm{n})$ is undefined. \end{dfn} \begin{lem}\label{lem:tame_es_above_om_1_def_from_Ppp} Let $M\in\mathrm{pm}_1$ be $(0,\omega_1+1)$-iterable, satisfying either $\mathrm{ZFC}^-$ or $\mathrm{PS}$. Then: \begin{enumerate} \item\label{item:elements_of_scrP} For all $\mathbbm{n}\in\mathscr{P}^M$, $\mathrm{cs}(\mathbbm{n})^M$ is well-defined, so has universe $\univ{M}$, the proper segments of $\mathrm{cs}(\mathbbm{n})$ satisfy standard condensation, and $\mathbb{E}^{\mathrm{cs}(\mathbbm{n})}\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)=\mathbb{E}^M\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)$. \item Therefore, $\mathbb{E}^M\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)$ is definable over $\univ{M}$ from the parameter $\mathscr{P}^M$. \end{enumerate} \end{lem} \begin{proof} Work in $\univ{M}$. Let $\mathbbm{n}\in\mathscr{P}^M$. \begin{clmthree} $\mathrm{cs}(\mathbbm{n})$ is defined (so has universe $V$) and $\mathbb{E}^{\mathrm{cs}(\mathbbm{n})}\!\upharpoonright\![\omega_1,\mathrm{OR}^M)=\mathbb{E}^M\!\upharpoonright\![\omega_1,\mathrm{OR}^M)$.
\end{clmthree} \begin{proof} Let $\alpha<\omega_1$ be such that $\mathbbm{n}\sim_\alpha\canM^M$. So $\mathbbm{n}|\alpha$ and $M|\alpha$ project to $\omega$ and are inter-definable from parameters. Fix a real $x$ coding the pair $(\mathbbm{n}|\alpha,M|\alpha)$, $x$ definable over $M|\alpha$. Let $M_x,\mathbbm{n}_x$ be the translations of $M,\mathbbm{n}$ to $x$-premice. Then $\mathbbm{n}_x=M_x|\omega_1$. So by the relativization of \cite[3.11, 3.12]{V=HODX}, \[ \mathrm{cs}(\mathbbm{n}_x)=\mathrm{cs}(M_x|\omega_1)^+=M_x \] is defined and has universe $V$, $\mathbb{E}^{\mathrm{cs}(\mathbbm{n}_x)}\!\upharpoonright\![\omega_1,\mathrm{OR}^M)=\mathbb{E}^M\!\upharpoonright\![\omega_1,\mathrm{OR}^M)$, and since $\mathrm{cs}(\mathbbm{n}_x)=M_x$ is iterable (as an $x$-mouse), its proper segments satisfy standard condensation (for $x$-mice). Let $\widetilde{\mathbbm{n}}$ be the translation of $\mathrm{cs}(\mathbbm{n}_x)$ to a standard premouse extending $\mathbbm{n}|\alpha$. So $\widetilde{\mathbbm{n}}$ has universe $V$. We claim that $\widetilde{\mathbbm{n}}=\mathrm{cs}(\mathbbm{n})$. Most of the defining properties for $\mathrm{cs}(\mathbbm{n})$ (see \cite[Definition 3.12]{V=HODX}) just carry over from $\mathrm{cs}(\mathbbm{n}_x)$. However, some of the required properties are not quite immediate, because we can have hulls of segments of $\widetilde{\mathbbm{n}}$ which do not include $x$ in them, so do not correspond to hulls of $\mathrm{cs}(\mathbbm{n}_x)$. So let $R\triangleleft\widetilde{\mathbbm{n}}$ and let $\bar{R}$ be countable and $\pi:\bar{R}\to R$ be elementary. We claim that there is some $S\triangleleft \mathbbm{n}$ and $\sigma:\bar{R}\to S$ such that $\sigma$ is elementary. For let $R_x$ be the translation of $R$ to an $x$-premouse. Let $\bar{R}^+_x$ be countable and $\pi^+_x:\bar{R}^+_x\to R_x$ be elementary with $\mathrm{rg}(\pi)\subseteq\mathrm{rg}(\pi^+_x)$. So there is some $S_x\triangleleft \mathbbm{n}_x|\omega_1$ and $\sigma_x^+:\bar{R}^+_x\to S_x$ which is elementary, and $x\in\mathrm{rg}(\sigma_x^+)$. Let $S\triangleleft \mathbbm{n}$ be the translation of $S_x$ to a standard premouse. Let $\tau:\bar{R}\to\bar{R}^+_x$ be the commuting map. Then clearly $\sigma^+_x\circ\tau:\bar{R}\to S$ is elementary, as desired. Standard condensation for proper segments of $\widetilde{\mathbbm{n}}$ (which is used both in the proof that $\widetilde{\mathbbm{n}}=\mathrm{cs}(\mathbbm{n})$, and also otherwise for part \ref{item:elements_of_scrP}) now follows easily: supposing $R\triangleleft\widetilde{\mathbbm{n}}$ fails some condensation fact, let $\pi:\bar{R}\to R$ be elementary with $\bar{R}$ countable and $\pi,\bar{R}$ in $M$, and let $S\triangleleft \mathbbm{n}$ and $\sigma\in \mathbbm{n}$ with $\sigma:\bar{R}\to S$ elementary. Then the failure of condensation reflects into $S$, contradicting our assumptions about $\mathbbm{n}$.\end{proof} The lemma follows immediately from the claim.\end{proof} By the lemma, to prove Theorem \ref{thm:E_almost_def_tame_L[E]}, it suffices to see that $\mathscr{P}^M$ is definable over $(\mathcal{H}_{\omega_2})^M$ without parameters. For this we will use a comparison argument very much like that of the proof of Theorem \ref{thm:tame_e_def_from_x}. \begin{dfn}\label{dfn:tractable} A sound pm $N$ satisfies \emph{standard condensation} iff $N$ satisfies $(n+1)$-condensation for every $n<\omega$.
Let $P\in\mathrm{pm}_1$ with $P\models$``$\omega_1$ is the largest cardinal''. We say that $P$ satisfies \emph{$(1,\omega_1)$-condensation} iff for every premouse ${\bar{P}}$ with $\eta=\omega_1^{\bar{P}}<\omega_1^P$, if ${\bar{P}}$ is $\eta$-sound and $\rho_1^{\bar{P}}\leq\eta$ and $\pi:{\bar{P}}\to P$ is a near $0$-embedding with $\mathrm{cr}(\pi)=\eta=\omega_1^{\bar{P}}$ (so $\pi(\eta)=\omega_1^P$) then ${\bar{P}}\triangleleft P$. A pm $M$ is \emph{tractable} if (i) $M\in\mathrm{pm}_1$, (ii) all proper segments of $M$ satisfy standard condensation, and (iii) if $M\models$``$\omega_1$ is the largest cardinal'' then $\omega<\rho_1^M$ and $M$ satisfies $(1,\omega_1)$-condensation. \end{dfn} \begin{dfn} Let $M\in\mathrm{pm}_1$. Work in $M$. Let $\mathbbm{n}$ be an $M$-candidate. A \emph{Jensen extension} of $\mathbbm{n}$ is a sound pm $\mathbbm{n}'$ such that $\mathbbm{n}\trianglelefteq\mathbbm{n}'$, and there is $n<\omega$ such that $\rho_{n+1}^{\mathbbm{n}'}=\omega_1^M$ and $(n+1)$-condensation and $(1,\omega_1)$-condensation hold for $\mathbbm{n}'$. An \emph{$\mathcal{S}$-Jensen extension} of $\mathbbm{n}$ is a structure of the form $\mathcal{S}_n(\mathbbm{n}')$, where $\mathbbm{n}'$ is a Jensen extension of $\mathbbm{n}$ and $n<\omega$. \end{dfn} \begin{lem}\label{lem:Jensen_stack} Let $M\in\mathrm{pm}_1$. Work in $M$. Let $\mathbbm{n}$ be a candidate. Then: \begin{enumerate} \item\label{item:segs_satisfy_cond} For each Jensen extension $S$ of $\mathbbm{n}$, all segments of $S$ satisfy standard condensation. \item\label{item:compatibility_Jensen_exts} For all Jensen extensions $S_0,S_1$ of $\mathbbm{n}$, either $S_0\trianglelefteq S_1$ or $S_1\trianglelefteq S_0$. \end{enumerate} \end{lem} \begin{proof} Part \ref{item:segs_satisfy_cond}: All segments of $\mathbbm{n}$ satisfy standard condensation, as $\mathbbm{n}$ is a candidate. But by the assumed condensation for $S$, we can reflect segments of $S$ down to segments of $\mathbbm{n}$, with a $\Sigma_m$-elementary map, for $m<\omega$ arbitrarily large. Part \ref{item:compatibility_Jensen_exts}: One can run Jensen's standard proof (see e.g. \cite[Fact 3.1***]{V=HODX}) inside $M$, unless $M=\mathcal{J}(M')$ for some $M'$. In the latter case, we get $S_0,S_1\in\mathcal{S}_n(M')$ for some $n<\omega$. But then for any $m<\omega$, in $M$ we can form $\Sigma_m$-elementary substructures of $\mathcal{S}_n(M')$ whose transitive collapse $\bar{\mathcal{S}}$ is in $M|\omega_1^M$, and with the uncollapse map $\pi:\bar{\mathcal{S}}\to\mathcal{S}_n(M')$ in $M$, and such that $\mathrm{rg}(\pi)\cap\omega_1^M=\alpha$ for some $\alpha<\omega_1^M$. By condensation, we get a contradiction as in Jensen's proof. \end{proof} Lemma \ref{lem:Jensen_stack} gives that the stack $\mathrm{sJs}(\mathbbm{n})$ defined below is a premouse extending $\mathbbm{n}$: \begin{dfn} Let $M\in\mathrm{pm}_1$. Work in $M$. Let $\mathbbm{n}$ be a candidate. Then $\mathrm{sJs}(\mathbbm{n})$ denotes the stack of all $\mathcal{S}$-Jensen extensions of $\mathbbm{n}$. We also often write $\mathbbm{n}^+=\mathrm{sJs}(\mathbbm{n})$. We say $\mathbbm{n}$ is \emph{strong} iff (i) $\mathrm{sJs}(\mathbbm{n})$ has universe $\mathcal{H}_{\omega_2}$ and (ii) if $M\models$``$\omega_1$ is the largest cardinal'' then $\mathrm{sJs}(\mathbbm{n})$ satisfies $(1,\omega_1)$-condensation.
\end{dfn} \begin{lem}\label{lem:iterable_tract_implies_strong} If $M$ is a $(0,\omega_1+1)$-iterable tractable pm then $\canM^M$ is a strong candidate in $M$. \end{lem} \begin{proof} This follows from the definitions and standard condensation facts. \end{proof} \begin{dfn} Let $M\in\mathrm{pm}_1$. Let $\mathbbm{p},\mathbbm{n}$ be candidates of $M$. Let $\varepsilon<\omega_1^M$. We say $(\mathbbm{p},\mathbbm{n})$ \emph{diverges at $\varepsilon$} iff there is $\gamma<\varepsilon$ such that $(\mathbbm{p},\mathbbm{n})$ converges at $\gamma$ and $\varepsilon$ is least ${>\gamma}$ such that $\mathbb{E}^\mathbbm{p}_\varepsilon\neq\mathbb{E}^\mathbbm{n}_\varepsilon$. We say $(\mathbbm{p},\mathbbm{n})$ \emph{$\omega$-diverges at $\varepsilon$} iff $(\mathbbm{p},\mathbbm{n})$ diverges at $\varepsilon$ and there is $\gamma$ as above such that $(\mathbbm{p},\mathbbm{n})$ $\omega$-converges at $\gamma$. If $(\mathbbm{p},\mathbbm{n})$ $\omega$-diverges at $\varepsilon$ then $\delta^{\mathbbm{p},\mathbbm{n}}_\varepsilon$ denotes $\omega_1^{\mathbbm{p}|\varepsilon}=\omega_1^{\mathbbm{n}|\varepsilon}$ (so by $\omega$-divergence, $\gamma<\delta^{\mathbbm{p},\mathbbm{n}}_\varepsilon$). \end{dfn} Note that if $(\mathbbm{p},\mathbbm{n})$ converges at $\gamma$ then $\gamma$ is a strong cutpoint of $\mathbbm{p},\mathbbm{n}$, and if also $\mathbb{E}^\mathbbm{p}\!\upharpoonright\!(\gamma,\varepsilon)=\mathbb{E}^\mathbbm{n}\!\upharpoonright\!(\gamma,\varepsilon)$, then $\univ{\mathbbm{p}|\varepsilon}=\univ{\mathbbm{n}|\varepsilon}$ and $\mathbbm{p}||\varepsilon,\mathbbm{n}||\varepsilon$ are inter-definable from $\gamma$ and parameters in $\mathbbm{p}|\gamma$, uniformly in $\varepsilon$ in a $\Delta_1$ fashion, and likewise for $\mathbbm{p}|\varepsilon,\mathbbm{n}|\varepsilon$ if $F^{\mathbbm{p}|\varepsilon}=F^{\mathbbm{n}|\varepsilon}$. Note also that if $(\mathbbm{p},\mathbbm{n})$ diverges at $\varepsilon$ and $\gamma$ is as above, then $\gamma<\omega_2^{\mathbbm{p}|\varepsilon}=\omega_2^{\mathbbm{n}|\varepsilon}$ (note that $\omega_2^{\mathbbm{p}|\varepsilon}=\omega_2^{\mathbbm{n}|\varepsilon}<\varepsilon$ as either $\mathbbm{p}|\varepsilon$ or $\mathbbm{n}|\varepsilon$ is active). \begin{lem}\label{lem:cofinal_converge_points} Let $M$ be a $(0,\omega_1+1)$-iterable tractable pm. Let $\mathbbm{p}\in M$ be a strong candidate for $M$. Then $(\canM^M,\mathbbm{p})$ $\omega$-converges at unboundedly many $\gamma<\omega_1^M$. \end{lem} \begin{proof} We consider primarily the case that either $M\in\mathrm{pm}_2$ or there is no $M'\triangleleft M$ such that $M=\mathcal{J}(M')$, and then sketch the modifications needed for the other case. Write $\mathbbm{n}^+=\mathrm{sJs}^M(\mathbbm{n})$ for $M$-candidates $\mathbbm{n}$. Let $\mathbbm{m}_0=\mathbbm{m}=\canM^M$ and $\mathbbm{p}_0=\mathbbm{p}$. (So $\mathbbm{m}^+=M|\omega_2^M$.) Given $\mathbbm{m}_n,\mathbbm{p}_n$, let $\mathbbm{m}_{n+1}$ be the least $\mathbbm{m}'\triangleleft \mathbbm{m}^+$ with $\mathbbm{m}_n\triangleleft \mathbbm{m}'$ and $\mathbbm{p}_n\in \mathbbm{m}'$ and $\rho_\omega^{\mathbbm{m}'}=\omega_1^M$; and define $\mathbbm{p}_{n+1}$ symmetrically from $\mathbbm{p}_n,\mathbbm{m}_n,\mathbbm{p}^+$. Let $\widetilde{\mathbbm{m}}$ be the stack of all $\mathbbm{m}_n$, and $\widetilde{\mathbbm{p}}$ likewise.
Note that $\widetilde{\mathbbm{m}},\widetilde{\mathbbm{p}}$ have the same universe $U$, and $\widetilde{\mathbbm{p}}$ is $\Sigma_1^U(\{\mathbbm{p}_0\})$ (as $\widetilde{\mathbbm{p}}$ is the stack of all Jensen extensions of $\mathbbm{p}_0$ in $U$), and likewise for $\widetilde{\mathbbm{m}}$ from $\mathbbm{m}_0$, so in particular, $\widetilde{\mathbbm{p}},\widetilde{\mathbbm{m}}$ are inter-definable from parameters. Also, $\widetilde{\mathbbm{m}}\trianglelefteq M|\omega_2^M$. Note that $\left<\mathbbm{m}_n,\mathbbm{p}_n\right>_{n<\omega}$ is also $\Sigma_1^U(\{(\mathbbm{m}_0,\mathbbm{p}_0)\})$, so $\rho_1^{\widetilde{\mathbbm{m}}}=\omega_1^M=\rho_1^{\widetilde{\mathbbm{p}}}$. Let $\eta_0<\omega_1^M$ be the least $\eta$ such that \[ p_1^{\widetilde{\mathbbm{m}}},p_1^{\widetilde{\mathbbm{p}}}, w_1^{\widetilde{\mathbbm{m}}},w_1^{\widetilde{\mathbbm{p}}},\mathbbm{m}_0,\mathbbm{p}_0\in \mathrm{Hull}_1^{\widetilde{\mathbbm{m}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{m}}}\})\cap\mathrm{Hull}_1^{\widetilde{\mathbbm{p}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{p}}}\})\] (where $w_1$ denotes the set of $1$-solidity witnesses). For $\eta\in[\eta_0,\omega_1^M)$, note that because of the definability of $\widetilde{\mathbbm{p}}$ from $\mathbbm{p}_0$ and $\widetilde{\mathbbm{m}}$ from $\mathbbm{m}_0$, \[ \mathrm{Hull}_1^{\widetilde{\mathbbm{m}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{m}}}\})\text{ and }\mathrm{Hull}_1^{\widetilde{\mathbbm{p}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{p}}}\})\text{ have the same elements.}\] Let $H_\eta,H'_\eta$ be the transitive collapses of the respective hulls, and $\pi_\eta:H_\eta\to \widetilde{\mathbbm{m}}$ and $\pi'_\eta:H'_\eta\to \widetilde{\mathbbm{p}}$ the uncollapse maps. Note $\pi_\eta=\pi'_\eta$ and $H_\eta,H'_\eta$ have the same universe and are inter-definable from parameters. Let $C$ be the set of all $\eta\in[\eta_0,\omega_1^M)$ with $\eta=\omega_1^{H_\eta}=\mathrm{cr}(\pi_\eta)$. Note that if $\mathrm{OR}^U=\omega_2^M$, i.e. $U=(\mathcal{H}_{\omega_2})^M$, then $M\models$``$\omega_2$ does not exist'', so $\omega<\rho_1^M$ by tractability. So in any case, $C$ is club in $\omega_1^M$. Let $\eta\in C$. Then $H_\eta,H'_\eta$ are $\eta$-sound, so by $(1,\omega_1)$-condensation, $H_\eta\triangleleft\canM$ and $H'_\eta\triangleleft \mathbbm{p}$. So $(\canM,\mathbbm{p})$ converge at $\mathrm{OR}^{H_\eta}$. Now let $\eta<\xi$ be consecutive elements of $C$. Then $\rho_1^{H_\xi}=\rho_1^{H'_\xi}=\omega$, because \[ \xi=\omega_1^M\cap\mathrm{Hull}^{\widetilde{\mathbbm{m}}}((\eta+1)\cup\{p_1^{\widetilde{\mathbbm{m}}}\}), \] so $H_\xi=\mathrm{Hull}_1^{H_\xi}(\{q\})$ where $\pi_{\xi}(q)=\{\eta,p_1^{\widetilde{\mathbbm{m}}}\}$, and likewise for $H_\xi'$. So $(\canM,\mathbbm{p})$ $\omega$-converge at $\xi$. Since this holds for cofinally many $\xi<\omega_1^M$, we are done. If instead $M\models$``$\omega_1$ is the largest cardinal'' and $M=\mathcal{J}(M')$ (so $\rho_\omega^{M'}=\omega_1^M$) then proceed similarly, but define $\left<\mathbbm{m}_n,k_n,\mathbbm{p}_n,\ell_n\right>_{n<\omega}$ with $k_0=\ell_0=0$, where $\mathbbm{m}_{n+1}$ is the least $\mathbbm{m}'\triangleleft M$ such that $\mathbbm{m}_n\trianglelefteq \mathbbm{m}'$ and there is $k<\omega$ such that $\mathbbm{p}_n\in\mathcal{S}_k(\mathbbm{m}')$ and $k_n,\ell_n<k$; then let $k_{n+1}$ be the least witnessing $k$, and define $\mathbbm{p}_{n+1},\ell_{n+1}$ symmetrically.
Define $\widetilde{\mathbbm{m}},\widetilde{\mathbbm{p}}$ in the obvious manner from this sequence (and once again, they have a common universe $U$, and now $\widetilde{\mathbbm{p}}=\mathrm{sJs}^U(\mathbbm{p}_0)$, etc). Now proceed much as before. \end{proof} \begin{dfn}\label{dfn:ptilde(m)} In the above context, let $\widetilde{\mathbbm{p}}(\mathbbm{m})$ denote $\widetilde{\mathbbm{p}}$ and $\widetilde{\mathbbm{m}}(\mathbbm{p})$ denote $\widetilde{\mathbbm{m}}$. \end{dfn} \section{Tail definability of $\mathbb{E}$ in tame mice}\label{sec:tail_def_tame} For this section and the next, we restrict our attention to tame mice. \begin{dfn} Let $M\in\mathrm{pm}_1$ be tame. We say that an $M$-candidate $\mathbbm{n}$ is \emph{tame-good} iff $\mathbbm{n}$ is strong and tame-iterability-good in $M$. We write $\mathscr{G}_{\mathrm{t}}^M$ for the set of tame-good candidates of $M$. For the most part we abbreviate $\mathscr{G}_{\mathrm{t}}$ by $\mathscr{G}$.\end{dfn} \begin{lem}\label{lem:def_of_Ppp_in_tame_mice} Let $M$ be a $(0,\omega_1+1)$-iterable tractable tame premouse. Then $\mathscr{G}_{\mathrm{t}}^M\subseteq\mathscr{P}^M$. Therefore $\mathscr{P}^M$ is definable over $(\mathcal{H}_{\omega_2})^M$ without parameters. \end{lem} \begin{proof} We write $\mathscr{G}=\mathscr{G}_{\mathrm{t}}$. The ``therefore'' clause follows from the rest, as given any $M$-candidate $\mathbbm{n}$, we get $\mathbbm{n}\in\mathscr{P}^M$ iff $\mathbbm{n}\sim_\alpha\mathbbm{m}$ for some $\mathbbm{m}\in\mathscr{G}^M$ and some $\alpha<\omega_1^M$. So let $\mathbbm{n}\in\mathscr{G}^M$; we show $\mathbbm{n}\in\mathscr{P}^M$. For this, we use a comparison argument very much like in the proof of Theorem \ref{thm:tame_HOD_x} (but only its first round, which produced the trees $\mathcal{T}_0,\mathcal{U}_0$ there), so we only outline enough to explain the differences. It suffices to find some $\gamma<\omega_1^M$ such that $\mathbbm{n},\canM^M$ $\omega$-converge at $\gamma$, and do not diverge at any $\varepsilon>\gamma$. So suppose there is no such $\gamma$, and for each $\gamma$ at which $\mathbbm{n},\canM^M$ $\omega$-converge, let $\varepsilon'_\gamma$ be the least $\varepsilon>\gamma$ such that $(\mathbbm{n},\canM^M)$ diverges at $\varepsilon$. Let $C'$ be the set of all $\gamma<\omega_1^M$ such that $(\mathbbm{n},\canM^M)$ $\omega$-converges at $\gamma$. By \ref{lem:cofinal_converge_points}, $C'$ is cofinal in $\omega_1^M$, and clearly $0\in C'$. Define a sequence $\left<\gamma_\alpha\right>_{\alpha<\omega_1^M}$ by setting $\gamma_0=0$ and, given $\left<\gamma_\alpha\right>_{\alpha<\lambda}$ with $\lambda<\omega_1^M$, letting $\gamma_{\lambda}$ be the least $\gamma\in C'$ with $\gamma\geq\sup_{\alpha<\lambda}\varepsilon'_{\gamma_\alpha}$. So $\left<\gamma_\alpha\right>_{\alpha<\omega_1^M}$ is cofinal in $\omega_1^M$. Now let $\varepsilon_\alpha=\varepsilon'_{\gamma_\alpha}$ and \[ \delta_\alpha=\omega_1^{M|\varepsilon_\alpha}=\omega_1^{\mathbbm{n}|\varepsilon_\alpha}.\] Let $R_\alpha$ be the least $R\triangleleft M$ with $M|\varepsilon_\alpha\trianglelefteq R$ and $\rho_\omega^R=\omega$, and $S_\alpha\triangleleft \mathbbm{n}$ likewise. So \[ \gamma_\alpha<\delta_\alpha=\omega_1^{R_\alpha}=\omega_1^{S_\alpha}<\varepsilon_\alpha\leq \mathrm{OR}^{R_\alpha},\mathrm{OR}^{S_\alpha}\leq\gamma_{\alpha+1}.
\] Note that $\gamma_0=0$ and $\varepsilon_0$ indexes the least disagreement between $M,\mathbbm{n}$. We will define a length $\omega_1^M$ comparison/genericity iteration $(\mathcal{T},\mathcal{U})$ of $(R_0,S_0)$, via $(\Lambda_{\mathrm{t}}^{\canM^M},\Lambda_{\mathrm{t}}^\mathbbm{n})$, such that $\left<\delta_\alpha\right>_{0<\alpha<\omega_1^M}$ are exactly the Woodin cardinals of $M(\mathcal{T},\mathcal{U})$. Then as in the proof of \ref{thm:tame_HOD_x}, because $M(\mathcal{T},\mathcal{U})$ has a proper class of Woodins and $M,\mathbbm{n}$ are tame, we will have $\mathcal{T}$-cofinal and $\mathcal{U}$-cofinal branches, and this will give a contradiction. Given $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha+1)$, let $F^\mathcal{T}_\alpha,F^\mathcal{U}_\alpha$ be the least disagreement between $(M^\mathcal{T}_\alpha,M^\mathcal{U}_\alpha)$, write $\ell_\alpha=\mathrm{lh}(F^\mathcal{T}_\alpha)$ or $\ell_\alpha=\mathrm{lh}(F^\mathcal{U}_\alpha)$, whichever is defined, and $K_\alpha=M^\mathcal{T}_\alpha||\ell_\alpha=M^\mathcal{U}_\alpha||\ell_\alpha$. We first define $(\mathcal{T}_1,\mathcal{U}_1)=(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\delta_1+1)$; this will yield $\delta((\mathcal{T}_1,\mathcal{U}_1)\!\upharpoonright\!\delta_1)=\delta_1$ and $M((\mathcal{T}_1,\mathcal{U}_1)\!\upharpoonright\!\delta_1)$ will be definable from parameters over $M|\delta_1$, and equivalently, over $\mathbbm{n}|\delta_1$ (note that $M|\delta_1,\mathbbm{n}|\delta_1$ are inter-definable from parameters). We have $\mathrm{OR}^{R_0},\mathrm{OR}^{S_0}\leq\gamma_1$. We construct $(\mathcal{T}_1,\mathcal{U}_1)$ by comparison subject to folding in meas-lim genericity iteration and short linear iterations, much as in the proof of \ref{thm:tame_HOD_x}. Now $(\mathcal{T}_1,\mathcal{U}_1)$ has two phases. In the first we fold in a short linear iteration at the least measurable of $K_\alpha$ (that is, if $K_\alpha$ has a least measurable cardinal $\mu$, then we set $E^{\mathcal{T}_1}_\alpha=E^{\mathcal{U}_1}_\alpha=$ the least normal measure on $\mu$, and otherwise $E^{\mathcal{T}_1}_\alpha=F^\mathcal{T}_\alpha$ and $E^{\mathcal{U}_1}_\alpha=F^\mathcal{U}_\alpha$), until we reach the least $\alpha$ such that $\gamma_1\leq\ell_\alpha$. In the second phase, we fold in meas-lim extender algebra violations for making $(\mathbb{E}^M,\mathbb{E}^\mathbbm{n})$ generic (with the meas-lim requirements from the perspective of $K_\alpha$, as in the proof of \ref{thm:tame_HOD_x}). We continue in this manner until producing $(\mathcal{T}_1,\mathcal{U}_1)$ of length $\delta_1$. At limit stages of $(\mathcal{T}_1,\mathcal{U}_1)$ (and $(\mathcal{T},\mathcal{U})$ in general) we use $(\Lambda_{\mathrm{t}}^{\canM^M},\Lambda_{\mathrm{t}}^\mathbbm{n})$ to select branches. Thus, we need to verify that this makes sense, i.e. that the trees at those stages are necessary.
Note also that $M|\delta_1$ and $\mathbbm{n}|\delta_1$ satisfy $\mathrm{ZFC}^-$, contain $R_0,S_0,\gamma_1$, and moreover, $M|\delta_1$ and $\mathbbm{n}|\delta_1$ are inter-definable from parameters, so the extender selection process just described is definable from parameters over both. Let $\lambda\leq\delta_1$ be a limit with $N=M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$ not a Q-structure for itself, and let $\delta=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$. We claim: \begin{enumerate} \item\label{item:M|delta,N|delta_generic} $M|\delta$ and $\mathbbm{n}|\delta$ are meas-lim extender algebra generic over $N$ at $\delta$, \item\label{item:M|delta,N|delta_sat_ZFC-+V=HC} $M|\delta$ and $\mathbbm{n}|\delta$ satisfy $\mathrm{ZFC}^-+$``$V=\mathrm{HC}$'', \item $N\models$``There are no Woodin cardinals'', \item \label{item:Tt,Uu_def} $\lambda=\delta$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\delta\subseteq(M|\delta)\cap(\mathbbm{n}|\delta)$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\delta$ is definable from parameters over $M|\delta$ and $\mathbbm{n}|\delta$, \item\label{item:Q-struc_comp} letting $\gamma\geq\delta$ be least with $\rho_\omega^{M|\gamma}=\omega$, the P-construction $Q$ of $M|\gamma$ over $N$ is defined, $\mathrm{OR}^Q=\gamma$, and $Q$ is a Q-structure for $N$; and likewise for $\mathbbm{n}$ and the least $\gamma'\geq\delta$ with $\rho_\omega^{\mathbbm{n}|\gamma'}=\omega$, which yields a Q-structure $Q'$ for $N$, \item\label{item:Qs_match_below_delta_1} if $\delta<\delta_1$ then $Q=Q'$, where $Q,Q'$ are as above. \end{enumerate} This is by induction on $\lambda$, and much as in the proof of \ref{thm:tame_HOD_x}. Items \ref{item:M|delta,N|delta_generic} and \ref{item:M|delta,N|delta_sat_ZFC-+V=HC} are as there. Now suppose that $N$ has no Woodins, and we deduce items \ref{item:Tt,Uu_def}, \ref{item:Q-struc_comp} and \ref{item:Qs_match_below_delta_1}. The parameter we need to define the trees is $(R_0,S_0)$, which we have in the relevant segments of $M,\mathbbm{n}$ because we initially folded in linear iteration past $\gamma_1$. As mentioned above, the extender selection process is definable from parameters over $M|\delta_1$ and $\mathbbm{n}|\delta_1$, and note that, in fact, $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda$ is definable from parameters over $M|\delta$ and $\mathbbm{n}|\delta$. Because $N$ has no Woodins, the Q-structures $Q_\xi,Q_\xi'$ used at limit stages $\xi<\lambda$ in $\mathcal{T},\mathcal{U}$ to determine $[0,\xi)_\mathcal{T}$ and $[0,\xi)_\mathcal{U}$ respectively are identical and are proper segments of $N$. By induction, these are computed as in item \ref{item:Q-struc_comp}, and the segments of $M,\mathbbm{n}$ used to compute them have height ${<\delta}$, so $M|\delta,\mathbbm{n}|\delta$ can determine them, and hence $[0,\xi)_\mathcal{T}$ and $[0,\xi)_\mathcal{U}$. So $M|\delta,\mathbbm{n}|\delta$ can compute $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda'$ as long as $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda'\subseteq(M|\delta)\cap(\mathbbm{n}|\delta)$. But if this fails for some $\lambda'<\lambda$, we contradict the fact that $M|\delta$ and $\mathbbm{n}|\delta$ satisfy $\mathrm{ZFC}^-$. Item \ref{item:Tt,Uu_def} now follows.
It also follows that $\mathcal{T}\!\upharpoonright\!\delta$ and $\mathcal{U}\!\upharpoonright\!\delta$ are necessary, so $\Lambda_{\mathrm{t}}^{\canM^M}(\mathcal{T}\!\upharpoonright\!\delta)$ and $\Lambda_{\mathrm{t}}^\mathbbm{n}(\mathcal{U}\!\upharpoonright\!\delta)$ are defined, and the process continues. Let $\gamma$ be as in item \ref{item:Q-struc_comp}. Let $Q$ be the result of the P-construction of $M$ above $N$ (recall this stops as soon as it reaches a Q-structure or projects across $\delta$). Because $\delta$ is regular in $Q[M|\delta]$, we cannot have $M|\gamma\in Q[M|\delta]$, so $\mathrm{OR}^Q\leq\gamma$. But if $\mathrm{OR}^Q<\gamma$ then we reach a contradiction as in the proof of Claim \ref{clm:no_cofinal_branches} in the proof of \ref{thm:tame_HOD_x}. So $\mathrm{OR}^Q=\gamma$. It is analogous for $\mathbbm{n}$. For item \ref{item:Qs_match_below_delta_1}, by item \ref{item:Q-struc_comp} and by the agreement of $M|\delta_1$ and $\mathbbm{n}|\delta_1$, if $\delta<\delta_1$ then $Q=Q'$. (Note here $\gamma,\gamma'<\delta_1$, as $M|\delta_1,\mathbbm{n}|\delta_1$ have largest cardinal $\omega$.) It remains to verify that $N$ has no Woodins. So suppose $N\models$``$\eta$ is Woodin'' and let $\eta$ be least such. Then because we have folded in meas-lim genericity iteration, $M|\eta,\mathbbm{n}|\eta$ are $(N,\mathbb B_{\mathrm{ml},\eta}^N)$-generic, so $M|\eta$ and $\mathbbm{n}|\eta$ satisfy $\mathrm{ZFC}^-$. Let $\lambda'<\lambda$ be least such that $\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda')\geq\eta$. Then note that by $\mathrm{ZFC}^-$ and as before, $M|\eta$ and $\mathbbm{n}|\eta$ can compute $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda'$, and we get $\lambda'=\eta$. But since $N|\eta$ has no Woodins, the preceding applies with $\lambda$ replaced by $\lambda'=\eta<\lambda$. In particular, $Q_\eta=Q_\eta'$, where these are the Q-structures determining $[0,\eta)_\mathcal{T},[0,\eta)_\mathcal{U}$. Since $\eta$ is Woodin in $N$, $E^\mathcal{T}_\eta$ or $E^\mathcal{U}_\eta$ must come from $Q_\eta=Q_\eta'$. But then $E^\mathcal{T}_\eta=E^\mathcal{U}_\eta$, so this extender is being used for linear iteration or genericity iteration purposes, and $Q_\eta\triangleleft K_\eta$. But $\eta$ is a strong cutpoint of $Q_\eta$, so $E^\mathcal{T}_\eta$ causes a drop in model to some $P\trianglelefteq Q_\eta$. But then $E^\mathcal{T}_\eta$ is not $K_\eta$-total, a contradiction. This completes the induction, giving $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\delta_1+1)$. Now suppose $\lambda=\delta_1$. By item \ref{item:Q-struc_comp}, letting $b,c$ be the branches chosen in $\mathcal{T},\mathcal{U}$, $Q(\mathcal{T},b)$ results from the P-construction of $R_1$ above $N=M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\delta_1)$, and has height $\mathrm{OR}^{R_1}$, and $Q(\mathcal{U},c)$ is that of $S_1$ above $N$, of height $\mathrm{OR}^{S_1}$. But $\varepsilon_1$ indexes the least disagreement between $R_1,S_1$ above $\delta_1$.
Now \[ Q(\mathcal{T},b)||\varepsilon_1=Q(\mathcal{U},c)||\varepsilon_1 \text{ but } Q(\mathcal{T},b)|\varepsilon_1\neq Q(\mathcal{U},c)|\varepsilon_1.\] For if $Q(\mathcal{T},b)|\varepsilon_1=Q(\mathcal{U},c)|\varepsilon_1$ then because $Q(\mathcal{T},b)[M|\delta_1]$ and $Q(\mathcal{T},b)[\mathbbm{n}|\delta_1]$ have the same universe and the forcing is small relative to the active extenders, there is a unique possible extension of the extenders to the generic extensions, so $R_1|\varepsilon_1=S_1|\varepsilon_1$, contradiction. So the overall comparison now reduces to a comparison of $Q(\mathcal{T},b)$ with $Q(\mathcal{U},c)$, and therefore $\delta_1$ will be the least Woodin cardinal, and hence (by tameness, or in this case, just the fact that $\delta_1$ is the least Woodin) also a strong cutpoint of the final model. Now suppose $\alpha>0$ and we have defined $(\mathcal{T}_\alpha,\mathcal{U}_\alpha)$, of length $\delta_\alpha+1$, and the P-constructions of $R_{\alpha+1},S_{\alpha+1}$ yield the Q-structures $Q(\mathcal{T}\!\upharpoonright\!\delta_\alpha,b')$ and $Q(\mathcal{U}\!\upharpoonright\!\delta_\alpha,c')$ etc. We then define $(\mathcal{T}_{\alpha+1},\mathcal{U}_{\alpha+1})$ extending $(\mathcal{T}_\alpha,\mathcal{U}_\alpha)$, above $\delta_\alpha$, of length $\delta_{\alpha+1}+1$. Here we again have two phases. In the first we fold in linear iteration past $\gamma_{\alpha+1}$, at the least measurable $>\delta_\alpha$, and in the second we fold in genericity iteration. Everything is analogous to the case $\alpha=1$ (there are now Woodin cardinals in $M((\mathcal{T}_{\alpha+1},\mathcal{U}_{\alpha+1})\!\upharpoonright\!\lambda)$, but they are exactly the $\delta_\beta$ for $\beta\leq\alpha$). Given $\left<\mathcal{T}_\alpha,\mathcal{U}_\alpha\right>_{\alpha<\eta}$ for a limit $\eta$, this gives $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda$ where $\lambda=\sup_{\alpha<\eta}\delta_\alpha$. Note $\lambda=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$. Because $M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda)$ satisfies ``There is a proper class of Woodins'' by induction, it is a Q-structure for itself, so $\mathcal{T}\!\upharpoonright\!\lambda$ and $\mathcal{U}\!\upharpoonright\!\lambda$ are necessary (as they are in $M$), and hence in the domains of the iteration strategies. This yields $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\lambda+1)$. We get $M^\mathcal{T}_\lambda\ntrianglelefteq M^\mathcal{U}_\lambda\ntrianglelefteq M^\mathcal{T}_\lambda$. Since $\lambda$ is a limit of strong cutpoints of $M^\mathcal{T}_\lambda,M^\mathcal{U}_\lambda$, the comparison now reduces to a comparison of $M^\mathcal{T}_\lambda,M^\mathcal{U}_\lambda$, above $\lambda$.
Note that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\lambda+1)$ is definable from parameters over $M|\gamma_\eta$, and over $\mathbbm{n}|\gamma_\eta$ (or at least, $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\lambda$ is definable from parameters over those segments, and $[0,\lambda)_\mathcal{T},[0,\lambda)_\mathcal{U}$ are also, so the models $M^\mathcal{T}_\lambda,M^\mathcal{U}_\lambda$ are definable ``in the codes'', but might literally have ordinal height $>\gamma_\eta$). At this stage we fold in linear iteration past $\gamma_\eta$, at the least measurable $\mu>\lambda$, if there is one, and then genericity iteration, to produce $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\delta_\eta+1)$ much as before. This completes the description of the comparison. We produce trees $(\mathcal{T},\mathcal{U})$ of length $\omega_1^M$, and $\left<\delta_\alpha\right>_{\alpha<\omega_1^M}$ enumerates the Woodins of $M(\mathcal{T},\mathcal{U})$, cofinal in $\omega_1^M$. By tameness, we get $\mathcal{T}$-cofinal and $\mathcal{U}$-cofinal branches $b,c\in M$ (this doesn't require any further iterability assumptions). One now reaches a contradiction as in the proof of Theorem \ref{thm:tame_HOD_x}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:E_almost_def_tame_L[E]}] By Lemma \ref{lem:def_of_Ppp_in_tame_mice}, $\mathscr{P}^M$ is definable over $(\mathcal{H}_{\omega_2})^M$ without parameters. So by Lemma \ref{lem:tame_es_above_om_1_def_from_Ppp}, $\mathbb{E}^M\!\upharpoonright\![\omega_1^M,\mathrm{OR}^M)$ is definable over $\univ{M}$ without parameters. \end{proof} \section{$\mathrm{HOD}$ in tame mice}\label{sec:HOD_tame_mice} Write $\mathscr{G}=\mathscr{G}_{\mathrm{t}}$. By \ref{lem:def_of_Ppp_in_tame_mice}, $\mathscr{G}^M\subseteq\mathscr{P}^M$ and $\mathscr{G}^M$ is $\Pi_2^{(\mathcal{H}_{\omega_2})^M}$ (without parameters). In Theorem \ref{tm:HOD_tame_mouse} we give an analysis of $\mathrm{HOD}^{L[\mathbb{E}]}$ above $\omega_2^{L[\mathbb{E}]}$, for tame $L[\mathbb{E}]$. This uses Vopenka forcing: \begin{dfn} Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse satisfying $\mathrm{ZFC}$. Then $\mathrm{Vop}_{*\mathscr{G}}^M$ denotes the Vopenka forcing corresponding to non-empty $\mathrm{OD}^{\univ{M}}$ subsets of $\mathscr{G}^M$, coded in the usual manner with ordinals as conditions. (Let $\mathbb P_0$ be the forcing whose conditions are non-empty $\mathrm{OD}^{\univ{M}}$ subsets of $\mathscr{G}^M$, with $A\leq B$ iff $A\subseteq B$. Then $\mathrm{Vop}_{*\mathscr{G}}^M$ is the natural isomorph of $\mathbb P_0$, using standard ordinal codes for conditions in $\mathbb P_0$.) \end{dfn} \begin{rem}Note that $\mathrm{Vop}_{*\mathscr{G}}^M$ is definable over $\univ{M}$ without parameters. Once we have proved the following lemma, we will define $\mathrm{Vop}_{\mathscr{G}}^M$ as a more natural isomorph of $\mathrm{Vop}_{*\mathscr{G}}^M$, which is a subset of $\omega_2$, and is definable over $(\mathcal{H}_{\omega_2})^M$ without parameters. \end{rem} \begin{lem}\label{lem:tame_Vopenka} Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse satisfying $\mathrm{ZFC}$.
Let $\mathbb P=\mathrm{Vop}_{*\mathscr{G}}^M$ and $\delta=\omega_2^M$. Let $H=\mathrm{HOD}^{\univ{M}}$. Then: \begin{enumerate} \item\label{item:Vop_delta-cc} $\mathbb P\in H$ and $H\models$``$\mathbb P$ is a $\delta$-cc complete Boolean algebra''. \item\label{item:Vop_sub_delta} $\mathbb P$ is isomorphic to some $\mathbb P'\subseteq\delta$ which is $(\Sigma_3\wedge\Pi_3)^{(\mathcal{H}_{\omega_2})^M}$-definable without parameters, \item\label{item:H[G]=H[M|om_1]} There is $G$ which is $(H,\mathbb P)$-generic, with $H[G]=H[\canM^M]$ having universe $\univ{M}$. \item\label{item:every_Vop_cond_generic} For every $p\in\mathbb P$ there is an $(H,\mathbb P)$-generic $G'\in M$ such that $p\in G'$ and $H[G']$ has universe $\univ{M}$. \end{enumerate} \end{lem} \begin{proof} Part \ref{item:Vop_delta-cc}: We have $\mathbb P\in H$ and $H\models$``$\mathbb P$ is a complete Boolean algebra'' by the usual proof for Vopenka forcing. We have $H\models$``$\mathbb P$ is $\delta$-cc'' because in $M$, $\mathscr{G}^M$ has cardinality $\leq\omega_1^M$, and all maximal antichains of $\mathbb P$ in $H$ correspond to partitions of $\mathscr{G}^M$ in $M$. Part \ref{item:Vop_sub_delta}: A \emph{nice code} is a triple $(\alpha,\beta,\varphi)$ such that $\alpha<\beta<\omega_2^M$ and $\varphi$ is a formula. The nice code $(\alpha,\beta,\varphi)$ codes the set \[ A_{\alpha\beta\varphi}=\{\mathbbm{n}\in\mathscr{G}^M\bigm|\mathrm{sJs}(\mathbbm{n})|\beta\models\varphi(\alpha)\}. \] \begin{clmfour}\label{clm:every_OD_set_has_nice_code}A set $A\subseteq\mathscr{G}^M$ is $\mathrm{OD}^{\univ{M}}$ iff $A$ has a nice code.\end{clmfour} \begin{proof} Each $A_{\alpha\beta\varphi}$ is $\mathrm{OD}^{\univ{M}}$ since $\mathscr{G}^M$ and $\mathbbm{n}\mapsto\mathrm{sJs}(\mathbbm{n})$ are $\univ{M}$-definable. So suppose $A\subseteq\mathscr{G}^M$ is $\mathrm{OD}^{\univ{M}}$ but has no nice code. Let $\lambda\in\mathrm{OR}^M$ be a limit cardinal of $M$ and $\xi<\lambda$ and $\varphi$ be a formula (in the language of set theory) such that $\mathbbm{n}\in A$ iff $\mathcal{H}_\lambda^M\models\varphi(\mathbbm{n},\xi)$. In fact, because we are arguing by contradiction, we may assume $\xi=0$ (take the least $\xi$ such that $\varphi(\cdot,\xi)$ yields a set with no nice code, and then by substituting another formula for $\varphi$, we can take $\xi=0$). Let $\mathbbm{n}\in\mathscr{G}^M$. Then $N=\mathrm{cs}(\mathbbm{n})$ is well-defined, has universe $\univ{M}$, and satisfies standard condensation, by Lemma \ref{lem:tame_es_above_om_1_def_from_Ppp}. Also, as in the proof of that lemma, $N$ can be translated into an iterable $x$-mouse $N_x$ for some $x\in\mathbb R^M$. Let \[ H^\mathbbm{n}=\mathrm{Hull}_1^{N|(\lambda+\omega)}(\{\lambda\}\cup\omega_1^M), \] let $C^\mathbbm{n}$ be its transitive collapse, and let $\pi^\mathbbm{n}:C^\mathbbm{n}\to N|(\lambda+\omega)$ be the uncollapse. By the iterability of $N_x$ (as an $x$-mouse), and since $x\in\mathrm{rg}(\pi^\mathbbm{n})$ and $\lambda$ is an $M$-cardinal, $C^\mathbbm{n}$ is $1$-sound with $\pi^\mathbbm{n}(p_1^{C^\mathbbm{n}})=\{\lambda\}$.
So by standard condensation, $C^\mathbbm{n}\triangleleft N$, so in fact $C^\mathbbm{n}\triangleleft\mathrm{sJs}(\mathbbm{n})$. But the elements of $H^\mathbbm{n}$ are independent of $\mathbbm{n}$, because given $\mathbbm{n}'\in\mathscr{G}^M$, $(\mathbbm{n},\mathbbm{n}')$ are inter-definable from parameters, so $(\mathrm{cs}(\mathbbm{n})|\lambda,\mathrm{cs}(\mathbbm{n}')|\lambda)$ are also (as they have the same extender sequence above $\omega_1^M$). So $\mathrm{OR}(H^\mathbbm{n})$ and $\pi^\mathbbm{n}$ are also independent of $\mathbbm{n}$. Let $\pi=\pi^\mathbbm{n}$ and $\pi({\bar{\lambda}})=\lambda$. Let $\psi_\varphi$ be the formula, in the language of premice, asserting $\varphi(L[\mathbb{E}]|\omega_1)$. Then \[ \mathbbm{n}\in A\iff\mathcal{H}_\lambda^M\models\varphi(\mathbbm{n})\iff \mathrm{cs}(\mathbbm{n})|\lambda\models\psi_\varphi\iff \mathrm{sJs}(\mathbbm{n})|{\bar{\lambda}}\models\psi_\varphi.\] So $(0,{\bar{\lambda}},\psi_\varphi)$ is a nice code for $A$, a contradiction. \end{proof} So let $\mathbb P'$ be the coding of $\mathbb P$ via nice codes (for non-empty subsets of $\mathscr{G}^M$). Then $\mathbb P'\subseteq\delta^3$ and, because $\mathscr{G}^M$ is $\Pi_2^{(\mathcal{H}_{\omega_2})^M}$, the set of conditions $(\alpha,\beta,\varphi)\in\mathbb P'$ is $\Sigma_3^{(\mathcal{H}_{\omega_2})^M}$ (as one must assert $A_{\alpha\beta\varphi}\neq\emptyset$), and the ordering restricted to these conditions is $\Pi_3^{(\mathcal{H}_{\omega_2})^M}$ (as $(\alpha,\beta,\varphi)\leq(\alpha',\beta',\varphi')$ asserts $A_{\alpha\beta\varphi}\subseteq A_{\alpha'\beta'\varphi'}$, i.e.\ that every $\mathbbm{n}\in\mathscr{G}^M$ with $\mathrm{sJs}(\mathbbm{n})|\beta\models\varphi(\alpha)$ satisfies $\mathrm{sJs}(\mathbbm{n})|\beta'\models\varphi'(\alpha')$). Parts \ref{item:H[G]=H[M|om_1]}, \ref{item:every_Vop_cond_generic}: As usual, for every $\mathbbm{n}\in\mathscr{G}^M$ we have the generic filter \[ G_\mathbbm{n}=\{(\alpha,\beta,\varphi)\in\mathbb P'\bigm| \mathbbm{n}\in A_{\alpha\beta\varphi}\}. \] \begin{clmfour} $H[\mathbbm{n}]\subseteq H[\mathbbm{n}^+]=H[G_\mathbbm{n}]=\univ{M}$. \end{clmfour} \begin{proof} $G_\mathbbm{n}$ and $\mathbbm{n}^+=\mathrm{sJs}(\mathbbm{n})$ are easily inter-computable, so $H[\mathbbm{n}^+]=H[G_\mathbbm{n}]$. By standard Vopenka facts, we have $H[G_\mathbbm{n}]=\mathrm{HOD}_{\mathbbm{n}}^{\univ{M}}$.\footnote{\label{ftn:Vopenka_extension}That is, let $X\subseteq\eta\in\mathrm{OR}^M$ with $X\in\mathrm{HOD}_{\mathbbm{n}}^{\univ{M}}$, and fix a formula $\varphi$ and $\alpha\in\mathrm{OR}$ such that $X=\{\beta<\eta\bigm|\univ{M}\models\varphi(\mathbbm{n},\alpha,\beta)\}$. For $\beta<\eta$ let $p^*_\beta=\{\mathbbm{n}'\in\mathscr{G}^M\bigm|\univ{M}\models\varphi(\mathbbm{n}',\alpha, \beta)\}$, and noting $p^*_\beta\in\mathbb P$, let $p_\beta\in\mathbb P'$ be the corresponding element, and letting $\tau:\eta\to V$ with $\tau(\beta)=p_\beta$, note $\tau\in H$. But $\tau$ is a $\mathbb P'$-name and $\tau_{G_{\mathbbm{n}}}=X$.} But by Lemma \ref{lem:tame_es_above_om_1_def_from_Ppp}, we have $\mathrm{HOD}_{\mathbbm{n}}^{\univ{M}}=\univ{M}$. \end{proof} \begin{clmfour}\label{clm:N_generates_N^+} $H[\mathbbm{n}]=H[\mathbbm{n}^+]$. \end{clmfour} \begin{proof} It suffices to see that $\mathbbm{n}^+\subseteq H[\mathbbm{n}]$, because then $\mathbbm{n}^+$ is just the Jensen stack above $\mathbbm{n}$ in $H[\mathbbm{n}]$, so $\mathbbm{n}^+\in H[\mathbbm{n}]$ also. Fix $\xi\in(\omega_1^M,\omega_2^M)$ such that $\mathbbm{n}^+|\xi$ projects to $\omega_1^M$.
It suffices to see that $\mathbbm{n}^+|\xi\in H[\mathbbm{n}]$, and again via the Jensen stack, we may assume that $\gamma=\omega_2^{\mathbbm{n}^+|\xi}<\xi$ and $\mathbbm{n}^+|\gamma\in H[\mathbbm{n}]$ and there is some $\lambda\in(\gamma,\xi]$ such that $\mathbbm{n}^+|\lambda$ is active. Let $\mathbb Q=\mathbb P'\cap \mathbbm{n}^+|\gamma$. Note that $\mathbb Q$ is definable over $\univ{\mathbbm{n}^+|\gamma}$ (just as $\mathbb P'$ is defined over $(\mathcal{H}_{\omega_2})^M=\univ{\mathbbm{n}^+}$). We have $\mathbb Q\in H$ as $\mathbb Q=\mathbb P'\cap\gamma^3$. Let $\lambda$ be the supremum of all $\lambda'\leq\xi$ such that $\mathbbm{n}^+|\lambda'$ is active. So $\gamma=\omega_2^{\mathbbm{n}^+|\lambda}$. So working over $\mathbbm{n}^+|\lambda$ (or equivalently, $M|\lambda$), let $R$ be the result of the P-construction of $\mathbbm{n}^+|\lambda$ above $(\gamma^3,\mathbb Q)$. Then $R\in H$, because $\mathbb Q\in H$, and given any $\mathbbm{n}'\in\mathscr{G}^M$, the extender sequences of $(\mathbbm{n}')^+$ and $\mathbbm{n}^+$ agree above $\omega_1^M$, so $\mathbb Q$ is definable over $(\mathbbm{n}')^+|\gamma$ just as over $\mathbbm{n}^+|\gamma$ (as they have the same universe), and their P-constructions yield the same output $R$. As before, $R||\lambda\models$``$\mathbb Q$ is a $\gamma$-cc complete Boolean algebra'' and $G_{\mathbbm{n},\gamma}=G_\mathbbm{n}\cap\gamma^3$ is $R||\lambda$-generic for $\mathbb Q$. Therefore the P-construction of $\mathbbm{n}^+|\lambda$ yields a $(\gamma^3,\mathbb Q)$-premouse (which is $R$), and we have the usual fine structural correspondence between segments of $\mathbbm{n}^+$ of height in $(\gamma,\lambda]$, and the corresponding segments of $R$. Now by induction, we have $\mathbbm{n}^+|\gamma\in H[\mathbbm{n}]$, and $\mathbbm{n}^+|\gamma$ is inter-computable with $G_{\mathbbm{n},\gamma}$. But then the extender sequence of $\mathbbm{n}^+|\lambda$ is determined by that of $R|\lambda$, as $\mathbbm{n}^+|\lambda$ is a small generic extension thereof. So $\mathbbm{n}^+|\lambda\in H[\mathbbm{n}]$, and therefore $\mathbbm{n}^+|\xi\in H[\mathbbm{n}]$, as desired. \end{proof} There is also an alternate proof of this last claim, which is quite different: \begin{proof}[Sketch of alternate proof of Claim \ref{clm:N_generates_N^+}] If our mice were Jensen-indexed, we could argue as follows: Given $\alpha$ such that $\mathbbm{n}^+|\alpha$ is active, let $\xi_\alpha=(\kappa^+)^{\mathbbm{n}^+|\alpha}$ where $\kappa=\mathrm{cr}(F^{\mathbbm{n}^+|\alpha})$. The sequence \[ \mathscr{F}=\left<F^{\mathbbm{n}^+|\alpha}\!\upharpoonright\!\xi_\alpha\mid\alpha\in(\omega_1^M, \omega_2^M)\text{ and }F^{\mathbbm{n}^+|\alpha}\neq\emptyset\right> \] would be in $H$, because the sequence is independent of $\mathbbm{n}\in\mathscr{G}^M$. But $(\mathbbm{n},\mathscr{F})$ determines $\mathbbm{n}^+$, by standard arguments. (Let $P$ be an active premouse with Jensen indexing. Let $G=F^P\!\upharpoonright\!(\kappa^+)^P$ where $\kappa=\mathrm{cr}(F^P)$. Then $G$ and $P||\mathrm{OR}^P$ determine $F^P$ as follows. Let $X\subseteq\kappa$; we want to determine $i^P_F(X)$. Let $\alpha<(\kappa^+)^P$ be such that $X\in P|\alpha$ and $P|\alpha$ projects to $\kappa$.
Note that there is a unique elementary embedding $\pi:P|\alpha\to P|G(\alpha)$ with $\pi\!\upharpoonright\!\kappa=\mathrm{id}$, and $\pi$ is determined by the first-order theory of $P|G(\alpha)$. But then $\pi(X)=i^P_F(X)$, determining the latter, as desired.) But we work with Mitchell-Steel indexing, and it is not obvious to the author how to use the preceding kind of argument directly with this indexing. So instead, we convert indexing first. Let $\widetilde{\mathbbm{n}^+}$ be the above-$\omega_1^M$ Jensen-indexed conversion of $\mathbbm{n}^+$. It isn't relevant here whether the structure we get is actually a premouse, with sound segments etc. It only needs to code the information in $\mathbbm{n}^+$ above $\omega_1^M$ via a coherent sequence of Jensen-indexed extenders.\footnote{Given a Mitchell-Steel indexed $P$ satisfying ``$\omega_1$ exists'', define $\widetilde{P}$ by induction on sequences of ultrapowers. First set $\widetilde{P_0}=P_0$ where $P_0=P|(\omega_1^P+\omega)$. If $P$ is active then \[ \widetilde{P}=(\widetilde{U^P},F^P\!\upharpoonright\!(P|(\kappa^+)^P)) \] where $U^P=\mathrm{Ult}(P|(\kappa^+)^P,F^P)$ and $\kappa=\mathrm{cr}(F^P)$. If $P$ is passive then $\widetilde{P}=\mathrm{stack}_{Q\triangleleft P}\mathcal{J}(\widetilde{Q})$.} Because the extender sequence of $\mathbbm{n}^+$ above $\omega_1^M$ is independent of $\mathbbm{n}\in\mathscr{G}^M$, so is the extender sequence of $\widetilde{\mathbbm{n}^+}$ above $\omega_1^M$. Let $\widetilde{\mathscr{F}}$ be the restriction to ordinals of $\mathbb{E}^{\widetilde{\mathbbm{n}^+}}$ above $\omega_1^M$. By a variant of the argument in parentheses above, from $\mathbbm{n}$ and $\widetilde{\mathscr{F}}$ we can compute $\widetilde{\mathbbm{n}^+}$, so $\widetilde{\mathbbm{n}^+}\in H[\mathbbm{n}]$. (In the argument above we used that the proper segments of premice are sound, but we don't need this property of our Jensen-indexed structure. For if $\widetilde{\mathbbm{n}^+}|\alpha$ is active with extender $F$, then we first convert $\widetilde{\mathbbm{n}^+}||\alpha$ to a Mitchell-Steel indexed premouse $Q$, and then from $Q$ and $F\!\upharpoonright\!\mathrm{OR}$ we can compute $F$ much as before.) So $\widetilde{\mathbbm{n}^+}\in H[\mathbbm{n}]$, but then we can (as above) convert back to Mitchell-Steel indexing, so $\mathbbm{n}^+\in H[\mathbbm{n}]$. \end{proof} Applying the above with $\mathbbm{n}=\canM^M$, we have established part \ref{item:H[G]=H[M|om_1]}. To complete the proof of part \ref{item:every_Vop_cond_generic}, observe that if $p\in\mathbb P'$ then there is $\mathbbm{n}\in\mathscr{G}^M$ with $p\in G_\mathbbm{n}$ (because the forcing includes only nice codes for non-empty sets) and we have just seen that $H[\mathbbm{n}]=H[G_\mathbbm{n}]=\univ{M}$, as desired. \end{proof} \begin{dfn} $\mathrm{Vop}_{\mathscr{G}}^M$ denotes the forcing $\mathbb P'$ of the previous lemma. \end{dfn} We finally use similar methods as part of the proof of the following theorem: \begin{tm}\label{tm:HOD_tame_mouse} Let $M$ be a $(0,\omega_1+1)$-iterable tame premouse satisfying $\mathrm{ZFC}$. Let $H=\mathrm{HOD}^{\univ{M}}$ and suppose that $H\neq\univ{M}$. Let $\delta=\omega_2^M$ and $t = \Th_{\Sigma_3}^{\mathcal{H}_{\delta}^M}(\delta)$.
Then there are $\mathbb{F},\mathcal{H},W$ such that: \begin{enumerate}[label=--] \item $\mathcal{H}=(H,\mathbb{F},t)$ is a $(0,\omega_1+1)$-iterable $(\delta,t)$-premouse with universe $H$ and $\mathbb{E}^\mathcal{H}=\mathbb{F}$, \item every $X\in H$ with $X\subseteq\eta$ for some $\eta<\delta$ is encoded into $t$, so $X\in\mathcal{J}((\delta,t))$, \item $\mathbb{F}$ is the restriction of $\mathbb{E}^M\!\upharpoonright\![\delta,\infty)$ to $H$, \item $\mathcal{H}$ is definable over $\univ{M}$ without parameters, \item $\univ{M}$ is a generic extension of $H$ via a poset in $\mathcal{J}((\delta,t))$, \item $\univ{M}=H[\mathbbm{m}^M]$, \item $W$ is a premouse and a lightface proper class of $\univ{M}$, and $W\subseteq H$, \item $W\models$``$\delta$ is the least Woodin cardinal'', \item $t$ is generic for the meas-lim extender algebra of $W$ at $\delta$, \item $\mathbb{E}^W\!\upharpoonright\![\delta,\infty)$ is the restriction of $\mathbb{F}$ to $W$, \item $H=W[t]$, and \item if $M=\mathrm{Hull}^M(\emptyset)$ then $W\trianglelefteq X$ for some correct iterate $X$ of $\canM^M$.\footnote{Recall that when we write $M=\mathrm{Hull}^M(\emptyset)$, the definability can refer to $\mathbb{E}^M$, so this does not trivially imply that $\univ{M}\models$``$V=\mathrm{HOD}$''.} \end{enumerate} \end{tm} \begin{proof} Let $D$ be the set of all $\gamma<\omega_2^M$ such that $M|\gamma\models\mathrm{ZFC}^-+$``$\omega_1$ is the largest cardinal''. Let $\vec{R}=\left<\mathbb P_\gamma,R_\gamma\right>_{\gamma\in D}$, where $\mathbb P_\gamma=\mathrm{Vop}_{\mathscr{G}}^M\cap\gamma^3$ and $R_\gamma$ is the output of the P-construction of $M|\lambda$ above $\mathbb P_\gamma$, where $\xi$ is least such that $\xi>\gamma$ and $\rho_\omega^{M|\xi}=\omega_1^M$, and $\lambda$ is the supremum of $\gamma$ and all $\lambda'\leq\xi$ such that $M|\lambda'$ is active. By the proof of Lemma \ref{lem:tame_Vopenka}, $D$, $\vec{R}$ and $\mathrm{Vop}_{\mathscr{G}}^M$ are $\Sigma_3^{(\mathcal{H}_{\omega_2})^M}$, and hence encoded into $t$. Let $R$ be the output of the P-construction of $M$ above $(\delta,t)$. As in Lemma \ref{lem:tame_Vopenka}, $R$ is definable without parameters over $\univ{M}$, so $R\subseteq H$. We have $\mathrm{Vop}_{\mathscr{G}}^M\in R$, and for each $\mathbbm{n}\in\mathscr{G}^M$, $G_\mathbbm{n}$ is $R$-generic for $\mathrm{Vop}_{\mathscr{G}}^M$, and $R[G_\mathbbm{n}]=R[\mathbbm{n}]$ has universe $\univ{M}$. By \ref{lem:tame_Vopenka}, for each $p\in\mathrm{Vop}_{\mathscr{G}}^M$ we have some such $\mathbbm{n}\in\mathscr{G}^M$ with $p\in G_\mathbbm{n}$. It follows that $H\subseteq R$ ($R$ computes the theory of ordinals in $\univ{M}$ by considering what is forced by $\mathrm{Vop}_{\mathscr{G}}^M$). So $\univ{R}=H$. Setting $\mathbb{F}=\mathbb{E}^R$, we have the desired $\mathcal{H}=(H,\mathbb{F},t)$. The fact that every bounded $X\subseteq\delta$ in $H$ is encoded into $t$ is proved as in Claim \ref{clm:every_OD_set_has_nice_code} of Lemma \ref{lem:tame_Vopenka} part \ref{item:Vop_sub_delta}. We now construct $W$.
We get $W|\delta$ from a certain simultaneous comparison/genericity iteration of all $\mathbbm{n}\in\mathscr{G}^M$, and then $\mathbb{E}^W\!\upharpoonright\![\delta,\infty)$ is the restriction to $W$ of $\mathbb{E}^M\!\upharpoonright\![\delta,\infty)$. The details of the comparison are similar to those in the proof of Theorem \ref{thm:tame_HOD_x}, so we just give a sketch. For $\mathbbm{n}\in\mathscr{G}^M$, let $\mathcal{T}_\mathbbm{n}$ be the tree on $\mathbbm{n}$ produced by the comparison. Given $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!(\alpha+1)$ for all $\mathbbm{n}$, let $F^{\mathcal{T}_\mathbbm{n}}_\alpha$ be the least disagreement extenders, indexed at $\ell_\alpha$ when non-empty, and $K_\alpha=M^{\mathcal{T}_\mathbbm{n}}_\alpha||\ell_\alpha$. For $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!(\omega_1^M+1)$, we compare, subject to folding in linear iteration at the least measurable of $K_\alpha$. For $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!(\omega_1^M,\omega_2^M)$, we compare, subject to folding in meas-lim genericity iteration for making $t_{\mathbbm{n}}=\Th_{\Sigma_3}^{\mathbbm{n}^+}(\delta)$ generic (recall $\mathbbm{n}^+=\mathrm{sJs}(\mathbbm{n})$, a premouse with universe $(\mathcal{H}_{\omega_2})^M$ in this case, but the theory here can also refer to $\mathbb{E}^{\mathbbm{n}^+}$). For each $\mathbbm{n}'\in\mathscr{G}^M$, since $\mathbbm{n},\mathbbm{n}'$ are inter-definable from parameters and the genericity iteration only begins above $\omega_1^M$, the theories $t_{\mathbbm{n}},t_{\mathbbm{n}'}$ are easily inter-computable, and locally so (ordinal-by-ordinal modulo some fixed parameters $<\omega_1^M$), so genericity iteration with respect to $t_{\mathbbm{n}}$ is equivalent to that with respect to $t_{\mathbbm{n}'}$. Let $\Lambda_{\mathrm{t},2}^\mathbbm{n}$ be the putative extension of $\Lambda_{\mathrm{t}}^\mathbbm{n}$ to trees $\mathcal{T}$ of length $<\omega_2^M$, which satisfy the other requirements of necessity, but relative to $\mathbbm{n}^+$, and still using P-construction to compute Q-structures. Then $\Lambda_{\mathrm{t},2}^\mathbbm{n}$ is defined for all necessary trees, and yields wellfounded models, by an easy reflection argument: if not, then we can fix some $R\triangleleft \mathbbm{n}^+$ witnessing this which projects to $\omega_1^M$, and then use condensation to reflect to some hull $\bar{R}\triangleleft \mathbbm{n}$, and deduce that $\Lambda_{\mathrm{t}}^\mathbbm{n}$ is defective.\footnote{Here and below we use the possibility that $\mathrm{lgcd}(N|\delta)=\omega_1^{N|\delta}$ in Definition \ref{dfn:iterability-good}.} We use $\Lambda_{\mathrm{t},2}^\mathbbm{n}$ to form $\mathcal{T}_\mathbbm{n}$. We stop the comparison if it reaches length $\omega_2^M$. Let us verify that it in fact has length $\omega_2^M$. Much as before, it cannot terminate early, in that we cannot reach a stage $\alpha$ such that for some $\mathbbm{n}$, we have $M^{\mathcal{T}_\mathbbm{n}}_\alpha\trianglelefteq M^{\mathcal{T}_{\mathbbm{n}'}}_\alpha$ for every $\mathbbm{n}'$. So we just need to see that $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda\in\mathrm{dom}(\Lambda_{\mathrm{t},2}^\mathbbm{n})$ for every limit $\lambda\leq\omega_2^M$.
We also claim that $M(\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda)$ has no Woodin cardinals, and if $M(\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda)$ is not a Q-structure for itself then $\delta(\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda)=\lambda$ and ($*$) $\mathbbm{n}^+||\lambda\preccurlyeq_{\Sigma_3} \mathbbm{n}^+$. Property ($*$), together with the usual fact that the earlier Q-structures are retained, ensures that $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda$ (and in fact the entire comparison through length $\lambda$) is definable over $\mathbbm{n}^+|\lambda$. This is mostly as before, but ($*$) is new, so we focus on its verification. Let $\left<\gamma_\alpha\right>_{\alpha<\omega_2^M}$ enumerate the set $C$ of ordinals $\gamma<\omega_2^M$ with $M||\gamma\preccurlyeq_{\Sigma_3}M|\omega_2^M$, in increasing order. Let $H_\beta=\mathrm{Hull}_{\Sigma_3}^{M|\omega_2^M}(\beta)$. Note that $C$ is club in $\omega_2^M$ and $\omega_1^M<\gamma_0$, $H_{\gamma_\alpha}=M||\gamma_\alpha$, and if $\gamma_\alpha<\gamma<\gamma_{\alpha+1}$ then $H_\gamma=H_{\gamma_{\alpha+1}}$. Moreover, if $\gamma_\alpha<\xi\leq\gamma_{\alpha+1}$ and \[ t_\xi=\Th_{\Sigma_3}^{M|\omega_2^M}(\xi), \] then $t_\xi$ encodes a surjection of $(\gamma_\alpha+1)^{<\omega}$ onto $\xi$. Write $t_{\canM^M\xi}=t_\xi$ and $t_{\mathbbm{n}\xi}$ for the corresponding theory for other $\mathbbm{n}\in\mathscr{G}^M$; so when $\omega_1^M\leq\xi$, there is a simple translation between $t_{\canM^M\xi}$ and $t_{\widetilde{\mathbbm{n}}\xi}$. Now suppose that $M(\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda)$ is not a Q-structure for itself. We claim that \[ \xi=_{\mathrm{def}}\delta(\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\lambda)=\gamma_\alpha \] for some limit $\alpha$. For suppose that $\gamma_\alpha<\xi\leq\gamma_{\alpha+1}$ for some $\alpha$ (the case $\xi\leq\gamma_0$ is similar). Then $t_{\mathbbm{n}\xi}$ is meas-lim extender algebra generic over $M(\mathcal{T})$, and $\xi$ is regular in $\mathcal{J}(M(\mathcal{T}))[t_{\mathbbm{n}\xi}]$. But $t_{\mathbbm{n}\xi}$ encodes a surjection of $(\gamma_\alpha+1)^{<\omega}$ onto $\xi$, collapsing $\xi$ in $\mathcal{J}(M(\mathcal{T}))[t_{\mathbbm{n}\xi}]$, a contradiction. By the previous paragraph, combined with the standard arguments, we now get that $\lambda=\xi$ and $M|\xi\preccurlyeq_{\Sigma_3}M|\omega_2^M$ and $M|\xi\models\mathrm{ZFC}^-$ and $\mathcal{T}_\mathbbm{n}\!\upharpoonright\!\xi$ is definable over $M|\xi$. So the arguments from earlier proofs now go through. So we get a comparison of length $\delta=\omega_2^M$. Let $W|\delta$ be the resulting common part model. Note that $W|\delta\in H$, and in fact, $W|\delta$ is definable without parameters over $\mathcal{H}_\delta^M$. It follows that $W|\delta$ is in fact definable (in the codes) over $(\delta,t)$, via consulting what is forced by $\mathrm{Vop}_{\mathscr{G}}^M$. (Note here that because $\mathrm{Vop}_{\mathscr{G}}^M$ has the $\delta$-cc in $H$, every bounded subset of $\delta$ in $M$ has a name in $H$ given by some bounded $X\subseteq\delta$, and since each such $X$ is encoded into $t$, $\Th_{\Sigma_n}^{\mathcal{H}_\delta^M}$ is definable over $(\delta,t)$ for each $n<\omega$.)
Also, each $t_{\mathbbm{n}\delta}$ is meas-lim extender algebra generic over $W|\delta$, but $t$ is easily locally computed from any $t_{\mathbbm{n}\delta}$, and hence is also generic over $W|\delta$. So $W|\delta$ and $(\delta,t)$ are generically equivalent, so we can build the P-construction of $\mathcal{H}$ above $W|\delta$, or equivalently, the P-construction of $\mathrm{cs}(\mathbbm{n})^{\unioniv{M}}$ above $W|\delta$, for any $\mathbbm{n}\in\mathscr{G}^M$. Let $W$ be the resulting model. Because $W$ was produced by comparison, the P-construction cannot reach a Q-structure, so $W\models$``$\delta$ is Woodin'', and note $H=W[t]$ and $\mathbb{F}$ is induced by $\mathbb{E}^W\!\upharpoonright\![\delta,\infty)$. Finally suppose that $M=\mathrm{Hull}^M(\emptyset)$. Then $\mathcal{J}(M)$ is an $\omega$-mouse. In particular, $M$ is countable. The tree $\mathcal{T}=\mathcal{T}_{\canM^M}$ is on $\canM^M$, via the correct strategy, and has countable length, since $M$ is countable. Let $b=\Sigma_{\canM^M}(\mathcal{T})$ and $Q=Q(\mathcal{T},b)$. By tameness, $\delta$ is a strong cutpoint of $Q$, and it follows that $W\trianglelefteq \mathcal{J}(W)=Q\trianglelefteq M^\mathcal{T}_b$, as desired. \end{proof} \begin{rem} We actually now get another alternate proof of the fact that $H[\canM^M]=\unioniv{M}$: We have $H=W[t]$, and note that in $W[t][\canM^M]$, we can recover the tree on $\canM^M$ which leads to $W|\delta$, by comparing $\canM^M$ with $W|\delta$, and noting that since $\delta$ is the least Woodin of $W$, all the Q-structures guiding this tree are available for this. But then, starting from $\canM^M$, we can inductively recover $M|\delta$ by translating the Q-structures over to segments of $M|\delta$ extending $\canM^M$. We will also use a variant of this later, in the non-tame context.\end{rem} \section{$\star$-translation}\label{sec:star} We now prepare to deal more carefully with non-tame mice, by discussing the basics of $\star$-translation and its inverse, the latter being the generalization of P-construction to non-tame mice. This section is essentially a summary of results from \cite{closson}, slightly adapted. \begin{dfn} Let $N$ be an $n$-sound premouse. Fix some constant symbol $\dot{p}\in V_\omega\backslash\mathrm{OR}$. For $\alpha\leq\mathrm{OR}^N$ we write $t_{n+1}^N(\alpha)$ for the theory in the language of premice with constants in $\alpha\cup\{\dot{p}\}$, which results by modifying $\Th_{n+1}^N(\alpha\cup\{\vec{p}_{n+1}^N\})$ by replacing $\vec{p}_{n+1}^N$ with $\dot{p}$. We write $t_{n+1}^N$ for $t_{n+1}^N(\rho_{n+1}^N)$. \end{dfn} \begin{dfn} Let $P$ be a sound premouse. We say that $\mathcal{T}$ is \emph{$P$-optimal} iff \begin{enumerate}[label=--] \item $\mathcal{T}$ is $\omega$-maximal on some $\omega$-premouse $N\triangleleft P|\omega_1^P$, \item $\mathcal{T}$ has limit length $\delta=\delta(\mathcal{T})$, \item $\delta$ is a successor cardinal of $P$, \item $\mathcal{J}(M(\mathcal{T}))\models$``$\delta$ is Woodin'', \item $\mathcal{T}$ is definable from parameters over $P$, and \item $\rho_\omega^P\leq\delta$ and $t_{k+1}^P(\delta)$ is $\mathbb{B}_{\measlim,\delta}^{M(\mathcal{T})}$-generic over $M(\mathcal{T})$, where $k$ is least with $\rho_{k+1}^P\leq\delta$.
\end{enumerate} Given $M\in\mathrm{pm}_1$, we say that $\mathcal{T}$ is \emph{$P$-optimal for $M$} iff $\mathcal{T}\in M$ and $P\triangleleft M$ and $\mathcal{T}$ is $P$-optimal and $\delta(\mathcal{T})$ is a cutpoint (hence strong cutpoint) of $M$. \end{dfn} \begin{lem}\label{lem:P-optimal_uniqueness} Let $M$ be a premouse. Let $\mathcal{T}$ be both $P$- and $P'$-optimal for $M$. Then $P=P'$. \end{lem} \begin{proof} Suppose not; since $P,P'\triangleleft M$, we may assume $P\triangleleft P'$. Let $k$ be least with $\rho_{k+1}^{P'}\leq\delta=\delta(\mathcal{T})$. Note $\rho_1^{\mathcal{J}(P)}\leq\delta=\rho_\omega^P$. Let $R=M(\mathcal{T})$ and $t=t_1^{\mathcal{J}(R)}(\delta)$ and $u=t_{k+1}^{P'}(\delta)$. Then $t$ is computable from $t_1^{\mathcal{J}(P)}(\delta)$ (since $R$ is definable from parameters over $P$), hence computable from $u$, since $\mathcal{J}(P)\trianglelefteq P'$. So $t\in\mathcal{J}(R)[u]$ (recall $u$ is $\mathbb{B}_{\measlim,\delta}^{R}$-generic over $\mathcal{J}(R)$). Now $\delta$ is $\utilde{\rSigma}_\omega^{\mathcal{J}(R)}$-regular because $\delta$ is regular in $\mathcal{J}(R)[u]$ and $t\in\mathcal{J}(R)[u]$. We claim $\rho_1^{\mathcal{J}(R)}=\delta$. So suppose $\rho_1^{\mathcal{J}(R)}<\delta$. Let \[ H=\mathrm{Hull}_1^{\mathcal{J}(R)}(\rho_1^{\mathcal{J}(R)}\cup\{p_1^{\mathcal{J}(R)}\}) \] and $\gamma=\sup(H\cap\delta)$. Then $\gamma<\delta$ by the $\utilde{\rSigma}_\omega^{\mathcal{J}(R)}$-regularity of $\delta$. Let \[ H'=\mathrm{Hull}_1^{\mathcal{J}(R)}(\gamma\cup\{p_1^{\mathcal{J}(R)}\}). \] Then $H'\cap\delta=\gamma$ by a familiar argument, but then the transitive collapse of $H'$ is in $R$, a contradiction. (It follows that $\rho_\omega^{\mathcal{J}(R)}=\delta$; otherwise we get an $\utilde{\rSigma}_{n+1}^{\mathcal{J}(R)}$-singularization of $\delta=\rho_n^{\mathcal{J}(R)}$ for some $n\in[1,\omega)$.) Now for $n<\omega$ let $t_n=\{\varphi\in t\bigm|\mathcal{S}_n(R)\models\varphi\}$, so $t_n\in\mathcal{J}(R)$ and $t=\bigcup_{n<\omega}t_n$. Let $\tau\in\mathcal{J}(R)$ be a name such that $\tau_G=t$, where $G$ is the generic filter associated to $u$. Let $p\in\mathbb{B}_{\measlim,\delta}^{R}$ be the Boolean value of ``$\tau$ is a consistent theory in parameters in $\delta\cup\{\dot{p}\}$''. For each $n<\omega$, let $p_n\in\mathbb{B}^R_{\measlim,\delta}$ be the conjunction of $p$ with the Boolean value of ``$\check{t_n}\subseteq\tau$''. So $p_n\in R$ and $\left<p_n\right>_{n<\omega}$ is $\utilde{\rSigma}_1^{\mathcal{J}(R)}$. In fact $\left<p_n\right>_{n<\omega}\in R$, since $\mathcal{J}(R)$ does not definably singularize $\delta$ and $\rho_1^{\mathcal{J}(R)}=\delta$. So $q=\bigwedge_{n<\omega}p_n\in\mathbb{B}^R_{\measlim,\delta}$. Now $q\neq 0$ and $q\in G$, since $\tau_G=t=\bigcup_{n<\omega}t_n$. But then $t=\{\varphi\bigm|q\forces\varphi\in\tau\}$. So $t\in\mathcal{J}(R)$, which is impossible.
\end{proof} \begin{dfn}\label{dfn:transcendent} A premouse $M$ is \emph{transcendent} iff $M\in\mathrm{pm}_1$, $M$ is an $\omega$-mouse and for all $\mathcal{T},P\in M$, letting $\canM=\canM^M$, if \begin{enumerate}[label=--] \item $\rho_\omega^P=\rho_{k+1}^P=\delta=\omega_1^M$, \item $\mathcal{T}$ is on $\canM$, via $\Sigma_{\canM}$, and $\mathrm{lh}(\mathcal{T})=\delta=\delta(\mathcal{T})$, \item $\mathcal{T}$ is $P$-optimal for $M$ and \item $\mathcal{J}(M(\mathcal{T}))\models$``$\delta$ is Woodin'', \end{enumerate} then letting $Q=Q(\mathcal{T},\Sigma_M(\mathcal{T}))$ and $n<\omega$, $\Th_{\Sigma_{n+1}}^M(\emptyset)$ is not definable from parameters over $Q[t_{k+1}^P]$. Given an $\omega$-mouse $R\triangleleft M$, \emph{transcendent above $R$} is the relativization to parameter $R$ and trees above $R$. \end{dfn} \begin{rem} Note that $M_n^\#$ is transcendent for $n\leq\omega$ (of course here the $\star$-translations are just inverse P-construction). Many other such standard ``minimal'' mice are transcendent; for example, we will also observe in Remark \ref{rem:Wlim} that $M_{\mathrm{wlim}}^\#$ (the sharp for a Woodin limit of Woodins) is transcendent, as is the minimal mouse $M$ with an active superstrong extender. But $(M_1^\#)^\#$ is not transcendent, which is easily seen via genericity iteration. However, $(M_1^\#)^\#$ is trivially transcendent above $M_1^\#$. But the sharp of the model $S$ of Example \ref{exm:M_1^sharp-closed_reals} is not transcendent above any $\omega$-mouse $R\triangleleft\canM^S$. For let $\mathcal{T}$ on $M_1^\#(R)$ be as there, and note that $\mathcal{T}$ is $S|\omega_1^S$-optimal, but we get $Q=M^\mathcal{T}_b$ is the output of the P-construction of $S$ above $M(\mathcal{T})$, and $\mathrm{OR}^Q=\mathrm{OR}^S$. \end{rem} \begin{rem} Let $\mathcal{T}$ be $P$-optimal and $\delta=\delta(\mathcal{T})$. We next define the \emph{$\star$-translation} $Q^\star=Q^\star(\mathcal{T},P)$ of certain premice $Q$ extending $M(\mathcal{T})$ (in the right context). This is a simple variant of the procedure in \cite{closson}. The goal is to convert $Q$, which may have extenders $E\in\mathbb{E}_+^Q$ with $\mathrm{cr}(E)\leq\delta$, into some premouse $M$ extending $P$, having $\delta$ as a strong cutpoint, but containing essentially the same information (modulo the generic object $P$). The overlapping extenders $E$ are converted into ultrapower maps, which can be recovered by $M$ by computing the corresponding core maps. The differences with \cite{closson} are (i) we define $R^\star$ for all \emph{valid} segments $R\trianglelefteq Q$, beginning with $M(\mathcal{T})$ itself (instead of waiting for the least admissible beyond $M(\mathcal{T})$; \emph{valid} is defined presently and pertains to condition (iii) below), (ii) we set $M(\mathcal{T})^\star=P$ (so $P$ is the starting point, instead of basically $M|\delta$), and (iii) we allow $\delta$ to be the critical point of extenders in $\mathbb{E}_+^Q$. Items (i) and (ii) only involve slight fine structural changes, just at the bottom of the hierarchy, and are straightforward.
To translate the extenders as in (iii), one takes ultrapowers just as for other extenders, the difference being that the ultrapower is formed of some segment of $Q$ instead of a segment of a model of $\mathcal{T}$. Otherwise things are very similar to \cite{closson}. We give the definition now in detail, and will then state some facts about it, but a proof of those facts is beyond the scope of the paper, so we will just take them as a hypothesis throughout this section. \end{rem} \begin{dfn}\label{dfn:star} Let $\mathcal{T}$ be $P$-optimal and $\delta=\delta(\mathcal{T})$. Let $Q$ be a premouse. A \emph{$\delta$-measure} of $Q$ is an $E\in\mathbb{E}_+^Q$ such that $\mathrm{cr}(E)=\delta$ and $E$ is $Q$-total. Let $\mu^Q_\delta$ denote the least such, if it exists. Say $Q$ is \emph{$\star$-valid} iff \begin{enumerate}[label=(\roman*)] \item $M(\mathcal{T})\trianglelefteq Q$ and if $M(\mathcal{T})\triangleleft Q$ then $Q\models$``$\delta$ is Woodin'', and \item if $Q$ has a $\delta$-measure then $Q$ is $\delta$-sound and there is $q<\omega$ such that $\rho_{q+1}^Q\leq\delta$. \end{enumerate} Given $\kappa<\delta$, let $\beta_\kappa$ be the least $\beta<\mathrm{lh}(\mathcal{T})$ such that $\kappa<\nu(E^\mathcal{T}_\beta)$, let $M^*_{\kappa}$ be the largest $N\trianglelefteq M^\mathcal{T}_{\beta_\kappa}$ such that $N\cap\mathcal{P}(\kappa)\subseteq M(\mathcal{T})$, and let $n_\kappa$ be the largest $n<\omega$ such that $\kappa<\rho_n^{M^*_\kappa}$. Assuming $R$ is $\star$-valid, we (attempt to) define the \emph{$\star$-translation} recursively as follows: \begin{enumerate} \item $M(\mathcal{T})^\star=P$. \item If $R$ has a $\delta$-measure and $\rho_{r+1}^R\leq\delta(\mathcal{T})<\rho_r^R$, then $R^\star=\mathrm{Ult}_r(R,\mu^R_\delta)^\star$ (and note that if wellfounded, $\mathrm{Ult}_r(R,\mu^R_\delta)$ is valid and has no $\delta$-measure). \end{enumerate} Suppose from now on that $R$ has no $\delta$-measure. Then: \begin{enumerate}[resume*] \item If $R$ is active with $\kappa=\mathrm{cr}(F^R)<\delta$ then: \begin{enumerate} \item If $R$ is type 2 and $\delta=\mathrm{lgcd}(R)$ and $U=\mathrm{Ult}(R,F^R)$ has a $\delta$-measure, then $R^\star=\mathrm{Ult}_0(R,\mu^U_\delta)^\star$. \item Otherwise $R^\star=\mathrm{Ult}_{n_\kappa}(M^{*}_\kappa,F^R)^\star$. \end{enumerate} \item If $R$ is passive and $R=\mathcal{J}(S)$ (note then $S$ is valid) then $R^\star=\mathcal{J}(S^\star)$. \item If $R$ is passive of limit type then $R^\star$ is the stack of all $S^\star$ for $\star$-valid $S\triangleleft R$ such that $S$ has no $\delta$-measure (note there are cofinally many $\star$-valid $S\triangleleft R$). \item If $R$ is active with $\mathrm{cr}(F^R)>\delta$ and \begin{enumerate} \item the universe of $(R^\mathrm{pv})^\star$ is that of $R[P]$ (a meas-lim extender algebra extension), and \item the canonical extension $F^*$ of $F^R$ to the generic extension induces a premouse $((R^\mathrm{pv})^\star,F^*)$, \end{enumerate} then we set $R^\star=$ this premouse. \item Otherwise, $R^\star$ is ill-defined.
\end{enumerate} This definition proceeds along a natural linear order (we leave the details of this to the reader, but it is implicit in Remark \ref{rem:closson_correction} below), and if this is illfounded, then $R^\star$ is also ill-defined. \end{dfn} \begin{rem}\label{rem:closson_correction} If $\Phi(\mathcal{T})\ \widehat{\ }\ (Q,q)$ is iterable (where $q$ indicates the degree associated to $Q$, and either $q=0$ or $Q$ has a $\delta$-measure and $\rho_{q+1}^Q\leq\delta<\rho_q^Q$), then it is straightforward to see that the definition of $Q^\star$ is by recursion along a wellorder (consider finite degree-maximal trees $\mathcal{U}$ on $\Phi(\mathcal{T})\ \widehat{\ }\ (Q,q)$ such that $\mathrm{cr}(E^\mathcal{U}_\alpha)\leq\delta$ for all $\alpha+1<\mathrm{lh}(\mathcal{U})$). The fact that $Q^\star$ is a well-defined premouse, however, takes serious fine structural calculation, as executed in \cite{closson}. There are however some small issues in \cite{closson} which need some correction; most significantly (as far as the author is aware), the description of the relationship between the standard parameters of $Q$ and those of $Q^\star$ is incorrect in some cases, which come up in clause (d'') of Theorem 1.2.9 of \cite{closson}, when $j=1$ there.\footnote{In the notation there, assuming $\mathcal{P}$ is $1$-sound and Dodd-sound, it should be $p_{n+1}(\mathcal{P}[g]^*)=j(p_{n+1}^R\backslash\kappa)\ \widehat{\ }\ q$ where $j:R\to\mathrm{Ult}_n(R,F^\mathcal{P})$ is the ultrapower map for the relevant $R$ and $\kappa=\mathrm{cr}(j)$, and $q=t^\mathcal{P}\backslash\delta$ where $t^{\mathcal{P}}$ is the Dodd parameter of $\mathcal{P}$. This clause is only addressed in the very last paragraph of the proof (p.~52,53 of \cite{closson}), by being omitted at that point.} The author intends to write a corrected account of \cite{closson}, incorporating of course the modifications (i)--(iii). But this is beyond the scope of the present paper, and here we will just summarize the features we need, make the assumption that these do indeed work out and complete the proofs of the current paper using this assumption. \end{rem} \begin{dfn}\label{dfn:Q^*_when_Q_correct} The \emph{$\star$-translation hypothesis} (\emph{STH}) is the following assertion: Let $\mathcal{T}$ be $P$-optimal, $\delta=\delta(\mathcal{T})$ and $Q$ be $\star$-valid. Write $Q^\star=Q^\star(\mathcal{T},P)$. Then: \begin{enumerate} \item If $Q^\star$ is a well-defined premouse, then $\mathcal{P}(\delta)\cap Q[P]=\mathcal{P}(\delta)\cap Q^\star$, and letting $n<\omega$ and $x\in Q^\star$ and \begin{enumerate} \item $\theta=\rho_\omega^Q$, if $Q$ is sound with $\delta\leq\rho_\omega^Q$, or \item $\theta=\delta$ otherwise, \end{enumerate} the theory $\Th_{\Sigma_{n+1}}^{Q^\star}(\theta\cup\{x\})$ is definable from parameters over $Q[P]$. \item If $\mathcal{T}$ is on an $\omega$-mouse $N$, of countable length, via $\Sigma_N$, and $Q=Q(\mathcal{T},\Sigma_N(\mathcal{T}))$, then $Q^\star$ is a well-defined premouse, is $\delta$-sound and above-$\delta$-$(q,\omega_1+1)$-iterable whenever $\delta<\rho_q(Q^\star)$.\qedhere \end{enumerate} \end{dfn} The proof of STH is almost as in \cite{closson}, though see Remark \ref{rem:closson_correction}.
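For orientation, the way STH and the ${\text{\footnotesize\spiral}}$-construction introduced below will be used can be summarized schematically as follows; this merely restates clause (1) of STH together with Lemma \ref{lem:bh_inverts_star} below, and is not an additional assumption. If $\mathcal{T}$ is $P$-optimal for $M$, $\delta=\delta(\mathcal{T})$, $Q$ is $\star$-valid and $Q^\star=Q^\star(\mathcal{T},P)$ is a well-defined premouse with $Q^\star\trianglelefteq M$, then
\[
\mathcal{P}(\delta)\cap Q[P]=\mathcal{P}(\delta)\cap Q^\star
\qquad\text{and}\qquad
(Q^\star)^{\text{\tiny\spiral}}_n(\mathcal{T},P)=Q\ \text{ for some }n<\omega,
\]
so that, modulo the generic object, $Q$ and its $\star$-translation carry the same information about subsets of $\delta$, and each can be recovered from the other.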
An immediate consequence of STH is that if $\mathcal{T}$ is $P$-optimal for $M$ where $M$ is $(0,\omega_1+1)$-iterable, then since $\delta$ is a successor cardinal of $P$ and a strong cutpoint of $M$, and $\delta$ is Woodin in $Q$, hence regular in $Q[P]$, we get \begin{enumerate}[label=--] \item either $Q^\star\triangleleft M$ or [$M||(\delta^+)^M=Q^\star||(\delta^+)^M$ and $\delta$ is a successor cardinal in $M$], and \item if $M$ is an $\omega$-mouse then $Q^\star\trianglelefteq M$. \end{enumerate} We now invert the $\star$-translation, also using a small modification of \cite{closson}. \begin{dfn} Let $M\in\mathrm{pm}_1$ be a premouse and $\mathcal{T}$ be $P$-optimal for $M$. Let $\delta=\delta(\mathcal{T})$. Let $q<\omega$ and $Q$ be a $q$-sound, $(q+1)$-universal premouse such that $M(\mathcal{T})\trianglelefteq Q$ and $Q\models$``$\delta$ is Woodin'', $\rho_{q+1}^Q\leq\delta\leq\rho_q^Q$, and $\mathfrak{C}_{q+1}(Q)$ is $(q+1)$-solid. For $\kappa\in[\rho_{q+1}^Q,\rho_q^Q)$, recall that $Q$ has the \emph{hull property at $\kappa$} iff \[\mathcal{P}(\kappa)\cap Q\subseteq C_\kappa=\mathrm{cHull}_{q+1}^Q(\kappa\cup\vec{p}_{q+1}^Q).\] Let $\pi_\kappa:C_\kappa\to Q$ be the uncollapse map. (So $Q$ has the hull property at $\rho_{q+1}^Q$.) Say $Q$ is \emph{$\delta$-critical} iff $Q$ is non-$\delta$-sound (hence $\delta<\rho_q^Q$) but has the hull property at $\delta$. Say $Q$ is \emph{$\star$-$\delta$-critical} iff $Q$ is $\delta$-critical, $(\delta+1)$-sound, and letting $\mu$ be the normal measure on $\delta$ derived from $\pi_\delta$, either \begin{enumerate}[label=(\roman*)] \item $\mu\in\mathbb{E}_+^{C_\delta}$ (hence $Q=\mathrm{Ult}_q(C_\delta,\mu)$ and $C_\delta||\mathrm{lh}(\mu)=Q||\mathrm{lh}(\mu)$ and $\mathrm{lh}(\mu)=(\delta^{++})^{Q}$), or \item $C_\delta$ is active type 2 with $\mathrm{lgcd}(C_\delta)=\delta$ and $\mu\in\mathbb{E}^U$ where $U=\mathrm{Ult}(C_\delta,F(C_\delta))$ (hence $q=0$ and $Q=\mathrm{Ult}_0(C_\delta,\mu)$ and $U||\mathrm{lh}(\mu)=Q||(\delta^{++})^Q$ and $C_\delta^\mathrm{pv}=Q||(\delta^+)^Q$). \end{enumerate} Say $Q$ \emph{successor-projects across $\delta$} iff \begin{enumerate}[label=(\roman*)] \item $\rho_{q+1}^Q<\delta$, \item \emph{if} $Q$ has the hull property at $\delta$ \emph{then} $Q$ is $\delta$-sound, and \item there is a largest $\kappa<\delta$ such that $Q$ has the hull property at $\kappa$. \end{enumerate} Suppose $Q$ successor-projects across $\delta$, as witnessed by $\kappa$, and let $E$ be the $(\kappa,\pi_\kappa(\kappa))$-extender derived from $\pi_\kappa$. Say $Q$ \emph{$\star$-successor-projects across $\delta$} iff $C_\kappa=M^*_\kappa$ and $q=n_\kappa$ and there is $\nu\in[(\kappa^+)^Q,\pi_\kappa(\kappa)]$ such that $E\!\upharpoonright\!\nu$ is non-type Z and the trivial completion of $E\!\upharpoonright\!\nu$ is not in $\mathbb{E}_+^Q$, and taking $\nu$ least such, we have $\delta\leq\nu$ and $Q=\mathrm{Ult}_{n_\kappa}(M^*_\kappa,E\!\upharpoonright\!\nu)$ (hence $\delta<\rho_q^Q$). In this case, the \emph{extender-core} of $Q$ is $N=(Q||(\nu^+)^Q,E')$ where $E'$ is the trivial completion of $E\!\upharpoonright\!\nu$ (so $N^\mathrm{pv}\trianglelefteq Q^\mathrm{pv}$ and $N||(\nu^+)^N=Q||(\nu^+)^Q$).
Say $Q$ is \emph{terminal} iff either \begin{enumerate}[label=(\roman*)] \item $Q$ is fully sound with $\rho_\omega^Q=\delta$ and $Q$ is a Q-structure for $\delta$, or \item there is $r<\omega$ such that $Q$ is $r$-sound but not $(r+1)$-sound, $\rho_{r+1}^Q<\delta\leq\rho_r^Q$, $Q$ is $\delta$-sound and $(r+1)$-universal, and there is no largest $\kappa<\delta$ such that $Q$ has the hull property at $\kappa$. \end{enumerate} We say that $Q$ is \emph{$\star$-terminal} iff terminal and if $\rho_{r+1}^Q<\delta\leq\rho_r^Q$ then there are cofinally many $\kappa<\delta$ where $Q$ has the hull property.\footnote{Note that if $M(\mathcal{T})\trianglelefteq R\trianglelefteq Q=Q(\mathcal{T},b)$ where $M^\mathcal{T}_b$ is wellfounded, then $R$ is terminal iff $R$ is $\star$-terminal iff $R=Q$.} Let $R\trianglelefteq M$ with $P\trianglelefteq R$. The \emph{black hole construction} $R^{\text{\tiny\spiral}}=R^{\text{\tiny\spiral}}(\mathcal{T},P)$ is defined as follows. It is a kind of background construction using all extenders in $\mathbb{E}_+^R$ beyond $P$ (as far as it is defined), but with a modified coring process which allows the appearance of extenders $E$ with $\mathrm{cr}(E)\leq\delta$. The intent is to invert the $\star$-translation.\footnote{Thanks to Henri Menke for the latex code for ${\text{\tiny\spiral}}$.} For $R$ such that $P\trianglelefteq R\trianglelefteq M$ we (attempt to) define $R^{\text{\tiny\spiral}}$, as follows: Set $P^{\text{\tiny\spiral}}_0=M(\mathcal{T})$. Suppose we have $R^{\text{\tiny\spiral}}_0$. We attempt to define models $R^{\text{\tiny\spiral}}_n$ for $n<\omega$, and then set $R^{\text{\tiny\spiral}}=\lim_{n<\omega}R^{\text{\tiny\spiral}}_n$. Suppose we have $R'=R^{\text{\tiny\spiral}}_n$. If $R'$ is sound and $\delta\leq\rho_\omega^{R'}$ then we define $R^{\text{\tiny\spiral}}=R^{\text{\tiny\spiral}}_m=R'$ for all $m\in[n,\omega)$. Otherwise let $q<\omega$ be least such that $R'$ is $q$-sound and either $\rho=\rho_{q+1}^{R'}<\delta$ or $R'$ is non-$(q+1)$-sound. We assume the following, and proceed as indicated; otherwise we give up and $R^{\text{\tiny\spiral}}_{n+1}$ is undefined: \begin{enumerate} \item $\rho\leq\delta$ and $R'$ is non-$(q+1)$-sound, but $R'$ is $(q+1)$-universal and $\mathfrak{C}_{q+1}(R')$ is $(q+1)$-solid, \item if $R'$ is $\delta$-critical then $R'$ is $\star$-$\delta$-critical and we set $R^{\text{\tiny\spiral}}_{n+1}=$ the $\delta$-core of $R'$, \item if $R'$ successor-projects across $\delta$ then $R'$ $\star$-successor-projects across $\delta$, and we set $R^{\text{\tiny\spiral}}_{n+1}=$ the extender-core of $R'$, and \item if $R'$ is terminal then $R'$ is $\star$-terminal, and we set $R^{\text{\tiny\spiral}}=R'$ (and the construction goes no further). \end{enumerate} This completes the description of $R^{\text{\tiny\spiral}}_{n+1}$. Note that if $R^{\text{\tiny\spiral}}_n$ exists for all $n<\omega$ then $\lim_{n<\omega}R^{\text{\tiny\spiral}}_n$ also exists, so we have defined $R^{\text{\tiny\spiral}}$. Now let $R=M||\alpha$ or $R=M|\alpha$ for some $\alpha$, and suppose we have successfully defined $S^{\text{\tiny\spiral}}$ for all $S\triangleleft R$, these are sound premice, and none are terminal. If $R=\mathcal{J}(S)$ then we set $R^{\text{\tiny\spiral}}_0=\mathcal{J}(S^{\text{\tiny\spiral}})$.
If $R$ is passive of limit type then $R^{\text{\tiny\spiral}}_0=\liminf_{S\triangleleft R}S^{\text{\tiny\spiral}}$ (note that this exists, like with standard background constructions). And if $R$ is active, hence with $\delta<\mathrm{cr}(F^R)$, then we assume that $F^R$ restricts to an extender $E$ such that $S=((R^\mathrm{pv})^{\text{\tiny\spiral}},E)$ is a premouse, and we set $R^{\text{\tiny\spiral}}_0=S$ (and otherwise $R^{\text{\tiny\spiral}}_0$ is undefined). \end{dfn} The following lemma, saying in particular that the ${\text{\footnotesize\spiral}}$-construction and $\star$-translation are inverses, is straightforward to verify by induction: \begin{lem}\label{lem:bh_inverts_star} Let $\mathcal{T}$ be $P$-optimal for $M$. Adopting the notation of Definition \ref{dfn:star} ($\star$-translation), suppose that $Q$ is $\star$-valid, $Q^\star=Q^\star(\mathcal{T},P)$ is well-defined and $Q^\star\trianglelefteq M$. Then there is $n<\omega$ such that $(Q^\star)^{\text{\tiny\spiral}}_n(\mathcal{T},P)$ is well-defined and $(Q^\star)^{\text{\tiny\spiral}}_n=Q$. Conversely, let $R$ and $r<\omega$ be such that $P\trianglelefteq R\trianglelefteq M$ and $R^{\text{\tiny\spiral}}_r=R^{\text{\tiny\spiral}}_r(\mathcal{T},P)$ is well-defined. Then $R^{\text{\tiny\spiral}}_r$ is $\star$-valid, $(R^{\text{\tiny\spiral}}_r)^\star=(R^{\text{\tiny\spiral}}_r)^\star(\mathcal{T},P)$ is well-defined and $(R^{\text{\tiny\spiral}}_r)^\star=R$. Moreover, if $(P,0)\trianglelefteq(S,s)\trianglelefteq(R,r)$ then $(S^{\text{\tiny\spiral}}_s)||(\delta^+)^{S^{\text{\tiny\spiral}}_s}=(R^{\text{\tiny\spiral}}_r)||(\delta^+)^{S^{\text{\tiny\spiral}}_s}$. \end{lem} \begin{dfn} Let $\mathcal{T}$ be $P$-optimal for $M$, $\delta=\delta(\mathcal{T})$ and $P\trianglelefteq R\triangleleft M$ with $\delta$ an $R$-cardinal or $\delta=\mathrm{OR}^R$. We say that $R$ is \emph{just beyond $\delta$-projection} iff there is $S$ such that $P\trianglelefteq S\triangleleft R$ and $\rho_\omega^S=\delta$ and there is no admissible $R'\trianglelefteq R$ with $S\triangleleft R'$. \end{dfn} So if $R$ is just beyond $\delta$-projection then $\rho_1^R\leq\delta$. The ${\text{\footnotesize\spiral}}$-construction is almost completely local, but it seems it may not be quite completely local at the level of measurable Woodins, because of the requirement of computing cores which project to $\delta$ (if there is such a non-trivial core, then there are $\delta$-measures, hence measurable Woodins). To handle this we split into two cases in what follows.\footnote{The construction is completely local \emph{in the codes}, but it seems it may not be literally so. More precisely, if $\rho_\omega^{R^{\text{\tiny\spiral}}}=\delta$ but $R^{\text{\tiny\spiral}}$ is not sound, and $\alpha\in\mathrm{OR}$, then while it is not clear that the model $\mathcal{J}_\alpha(\mathfrak{C}_\omega(R^{\text{\tiny\spiral}}))$ is definable from parameters over $\mathcal{J}_\alpha(R^{\text{\tiny\spiral}})$, the theory $\Th_{\mathrm{r}\Sigma_{n+1}}^{\mathcal{J}_\alpha(\mathfrak{C}_\omega(R^{\text{\tiny\spiral}}))}(\delta\cup\{x\})$ is definable from parameters over $\mathcal{J}_\alpha(R^{\text{\tiny\spiral}})$, for each $n<\omega$ and $x\in\mathcal{J}_\alpha(\mathfrak{C}_\omega(R^{\text{\tiny\spiral}}))$.
However, if $\alpha\geq(\omega\cdot\mathrm{OR}(\mathfrak{C}_\omega(R^{\text{\tiny\spiral}})))$, then we do have $\mathcal{J}_\alpha(\mathfrak{C}_\omega(R^{\text{\tiny\spiral}}))$ literally definable from parameters over $\mathcal{J}_\alpha(R^{\text{\tiny\spiral}})$.} \begin{lem}\label{lem:compute_*_q_translation} There are formulas $\psi_{{\text{\tiny\spiral}}}$ and $\psi'_{{\text{\tiny\spiral}}}$ of the language of premice such that for all $M\in\mathrm{pm}_1$, all $\mathcal{T},P,R\in M$ such that $\mathcal{T}$ is $P$-optimal for $M$, $\delta=\delta(\mathcal{T})$ is an $R$-cardinal, $P\triangleleft R\triangleleft M$ and $R^{\text{\tiny\spiral}}_0=R^{\text{\tiny\spiral}}_0(\mathcal{T},P)$ is well-defined, we have: \begin{enumerate} \item\label{item:card_agmt} $R$ and $R^{\text{\tiny\spiral}}_0$ have the same cardinals $\kappa\geq\delta$, and for each such $\kappa>\delta$, we have $R^{\text{\tiny\spiral}}_0|\kappa=(R|\kappa)^{\text{\tiny\spiral}}_0=(R|\kappa)^{\text{\tiny\spiral}}$ (whereas $R^{\text{\tiny\spiral}}_0|\delta=P^{\text{\tiny\spiral}}_0=P^{\text{\tiny\spiral}}$). \item\label{item:proj_agmt} If $\rho_\omega^R=\mathrm{OR}^R$ then $R^{\text{\tiny\spiral}}=R^{\text{\tiny\spiral}}_0\subseteq R$, $\rho_\omega(R^{\text{\tiny\spiral}})=\mathrm{OR}(R^{\text{\tiny\spiral}})=\mathrm{OR}(R)$. \item If $R$ is not just beyond $\delta$-projection then $R^{\text{\tiny\spiral}}_0\subseteq R$ and $\Th_{\mathrm{r}\Sigma_0}^{R^{\text{\tiny\spiral}}_0}(R^{\text{\tiny\spiral}}_0)$ is defined over $R$ by $\psi_{{\text{\tiny\spiral}}}$ from the parameter $\mathcal{T}$, and \item\label{item:R_just_beyond_delta} If $R$ is just beyond $\delta$-projection then $\rho_1(R^{\text{\tiny\spiral}}_0)\leq\delta$, $R^{\text{\tiny\spiral}}_0$ is $\delta$-sound, and $t^{R^{\text{\tiny\spiral}}_0}_1(\delta)$ is defined over $R$ by $\psi_{{\text{\tiny\spiral}}}'$ from the parameter $\mathcal{T}$. \end{enumerate} \end{lem} \begin{proof}[Proof sketch] The formula $\psi_{\text{\tiny\spiral}}$ basically says to perform the ${\text{\footnotesize\spiral}}$-construction, whereas $\psi'_{\text{\tiny\spiral}}$ says to do that up to a point, and then to perform a coded version of it, working with theories $\subseteq\delta$ instead of the actual models. We won't write down the formulas explicitly, but just sketch out some main considerations and an explanation of parts \ref{item:card_agmt}, \ref{item:proj_agmt} and \ref{item:R_just_beyond_delta}. The proof that everything works is by induction on $R$. If $R$ has no largest cardinal, it is easy by induction, leading to (the relevant clause of) $\psi_{\text{\tiny\spiral}}$ for this case. Suppose $R$ has a largest cardinal $\kappa>\delta$. We can compute $(R|\kappa)^{\text{\tiny\spiral}}$ definably over $R|\kappa$, and it has height $\kappa$. We claim that for each $S\triangleleft R$ such that $\rho_\omega^S=\kappa$, we have $\rho_\omega(S^{\text{\tiny\spiral}})=\kappa$ and $S^{\text{\tiny\spiral}}\triangleleft R^{\text{\tiny\spiral}}_0$, and hence, $(R^\mathrm{pv})^{\text{\tiny\spiral}}_0$ is the stack of $\mathcal{J}(S^{\text{\tiny\spiral}})$ over all such $S$. Given this, we get an appropriate definition of $(R^\mathrm{pv})^{\text{\tiny\spiral}}_0$ over $R$, and then if $R$ is active, we just add the restriction of $F^R$, leading to $\psi_{\text{\tiny\spiral}}$ for this case.
For suppose $\rho_\omega(S^{\text{\tiny\spiral}})<\kappa$. Then by induction, we can take a hull of $S$ to which condensation applies, producing some $\bar{S}\triangleleft S$, such that $\rho_\omega(\bar{S}^{\text{\tiny\spiral}})=\rho_\omega(S^{\text{\tiny\spiral}})$ and $\bar{S}^{\text{\tiny\spiral}}$ defines the set missing from $S^{\text{\tiny\spiral}}$, which gives a contradiction. Conversely, since $(S^{\text{\tiny\spiral}})^\star=S$, STH implies the set $t\subseteq\kappa$ missing from $S$ is definable from parameters over $S^{\text{\tiny\spiral}}[P]$. But then if $\kappa<\rho_\omega(S^{\text{\tiny\spiral}})$, then there is a set in $\mathcal{P}(\kappa)\cap S^{\text{\tiny\spiral}}$ coding the relevant forcing relation, which implies $t\in S^{\text{\tiny\spiral}}[P]\subseteq S$, contradiction. Now suppose $\mathrm{lgcd}(R)=\delta$. If $R$ is not just beyond $\delta$-projection, then $R$ is admissible or a limit of admissible proper segments, and it is then easy to define $R^{\text{\tiny\spiral}}_0$ over $R$. So suppose $P\trianglelefteq S\triangleleft R$ and $\rho_\omega^S=\delta$ but there is no admissible $R'\trianglelefteq R$ with $S\triangleleft R'$, and let $S$ be least such. Then much as before, we can take $n<\omega$ such that $S^{\text{\tiny\spiral}}_n$ is sound and $\rho_\omega(S^{\text{\tiny\spiral}}_n)=\delta$, and then $S^{\text{\tiny\spiral}}=S^{\text{\tiny\spiral}}_n$, and note $R^{\text{\tiny\spiral}}_0=\mathcal{J}_\alpha(S^{\text{\tiny\spiral}}_n)$, where $R=\mathcal{J}_\alpha(S)$. Let $k<\omega$ be such that $\rho_{k+1}(S^{\text{\tiny\spiral}}_n)=\delta$, and note that $t=t_{k+1}^{S^{\text{\tiny\spiral}}_n}$ is definable from $\mathcal{T}$ over $S$. Starting from the parameter $t$, it is straightforward to uniformly define $t_1^{\mathcal{J}_\beta(S^{\text{\tiny\spiral}}_n)}(\delta)$ over $\mathcal{J}_\beta(S)$, for $\beta\in(0,\alpha]$. This leads to $\psi_{\text{\tiny\spiral}}'$. Finally let us observe that $R_0^{\text{\tiny\spiral}}$ is $\delta$-sound. Note that $R_0^{\text{\tiny\spiral}}=\mathrm{Hull}_1^{R_0^{\text{\tiny\spiral}}}(\delta\cup\{\gamma\})$ where $\gamma=\mathrm{OR}(S^{\text{\tiny\spiral}})$, and let $\xi$ be least such that $\gamma\in\mathrm{Hull}_1^{R_0^{\text{\tiny\spiral}}}(\delta\cup\{\xi\})$. If $\xi=0$ then we are done, so suppose $\xi\geq\delta$. Then $R_0^{\text{\tiny\spiral}}=\mathrm{Hull}_1^{R_0^{\text{\tiny\spiral}}}(\delta\cup\{\xi\})$, and note that $\mathrm{Hull}_1^{R_0^{\text{\tiny\spiral}}}(\xi)\cap\mathrm{OR}=\xi$, and it follows that $p_1^{R_0^{\text{\tiny\spiral}}}\backslash\delta=\{\xi\}$ and $R_0^{\text{\tiny\spiral}}$ is $1$-solid above $\delta$ and is $\delta$-sound. \end{proof} A full analysis of $\star$-translation and proof of STH needs a sharper, more extensive version of the preceding lemma. \begin{rem}\label{rem:Wlim} Assume STH and that $M_{\mathrm{wlim}}^\#$ exists and is $(\omega,\omega_1+1)$-iterable. Then $M_{\mathrm{wlim}}^\#$ is transcendent. For suppose not, and let $\mathcal{T},P\in M$ be a counterexample, where $M=M^\#_{\mathrm{wlim}}$; so $t=\Th_{\mathrm{r}\Sigma_1}^{M^\#_{\mathrm{wlim}}}(\emptyset)$ is in $\mathcal{J}(Q[P])$ where $Q=Q(\mathcal{T},\Sigma_{\canM}(\mathcal{T}))$. But then if $Q^\star\triangleleft M$ then $Q\in M$, so $Q[P]\in M$, so $t\in M$, contradiction. So $M\trianglelefteq Q^\star$, which implies $M=Q^\star$.
But note then that $M^{\text{\tiny\spiral}}_0$ is produced by iterating the phalanx $\Phi(\mathcal{T})\ \widehat{\ }\ \left<Q\right>$ finitely many steps (via extenders with critical points $\leq\delta$), so $M^{\text{\tiny\spiral}}_0$ is also an iterate of $\canM$ or a segment thereof. But $M^{\text{\tiny\spiral}}_0[P]$, a generic extension via the meas-lim extender algebra, has universe that of $M$, and the extenders in $\mathbb{E}_+(M^{\text{\tiny\spiral}}_0)$ with critical point $>\delta$ are exactly the level-by-level restrictions of those of $\mathbb{E}_+^M$. So $M^{\text{\tiny\spiral}}_0$ inherits all the Woodin cardinals of $M$, and the active sharp, and this contradicts the minimality of $M$. The argument for the least mouse with an active superstrong extender is very similar. And obviously there are many such variants. \end{rem} \section{$\mathrm{HOD}$ in non-tame mice}\label{sec:HOD_in_non-tame} We can now begin our analysis of ordinal definability in non-tame mice. All the results will assume STH. Recall that \S\ref{sec:candidates} applies. \begin{dfn} Let $\mathbbm{n}$ be a premouse satisfying ``$\mathrm{ZFC}^-+V=\mathrm{HC}$''. Then $\Lambda^\mathbbm{n}$ denotes the partial $(\omega,\mathrm{OR}^\mathbbm{n})$-iteration strategy $\Lambda$ for $\mathbbm{n}$, defined over $\mathbbm{n}$ as follows. We define $\Lambda$ by induction on the length of trees. Let $\mathcal{T}\in\mathbbm{n}$. We say that $\mathcal{T}$ is \emph{necessary} iff $\mathcal{T}$ is an iteration tree via $\Lambda$, of limit length, and letting $\delta=\delta(\mathcal{T})$, either $M(\mathcal{T})$ is a Q-structure for itself, or $\mathcal{T}$ is $P$-optimal for $\mathbbm{n}$, for some $P\triangleleft\mathbbm{n}$. Every $\mathcal{T}\in\mathrm{dom}(\Lambda)$ is necessary. Let $\mathcal{T}$ be necessary, and $P$-optimal for $\mathbbm{n}$ if such $P$ exists. Then $\Lambda(\mathcal{T})=b$ iff $b\in\mathbbm{n}$ and letting $Q=Q(\mathcal{T},b)$, if $M(\mathcal{T})\triangleleft Q$ then $Q^\star=Q^\star(\mathcal{T},P)$ is well-defined and $Q^\star\triangleleft\mathbbm{n}$. (Note that if $\Lambda(\mathcal{T})=b$ then $b,Q\in\mathcal{J}_\lambda(Q^\star)$, where $\mathcal{J}_\lambda(Q^\star)$ is admissible, and the assertion that ``$\Lambda(\mathcal{T})=b$'' is uniformly $\Sigma_1^{\mathcal{J}_\lambda(Q^\star)}(\{\mathcal{T}\})$, by Lemmas \ref{lem:bh_inverts_star}, \ref{lem:compute_*_q_translation} and \ref{lem:Q_computes_b}. So $\Lambda$ is $\Sigma_1$-definable over $\mathbbm{n}$.\footnote{Here of course we can refer to $\mathbb{E}^{\mathbbm{n}}$. Since $\mathbbm{n}\models$``$V=\mathrm{HC}$'', we can say that ``$\delta$ is a cutpoint of $\mathbbm{n}$'' by just saying it is a cutpoint of some segment of $\mathbbm{n}$ which projects to $\omega$.}) We say that $\mathbbm{n}$ is \emph{iterability-good} iff all trees via $\Lambda^\mathbbm{n}$ have wellfounded models, and $\Lambda^\mathbbm{n}(\mathcal{T})$ is defined for all necessary $\mathcal{T}$. (Note that \emph{iterability-good} is expressed by a first-order formula $\varphi$ (modulo $\mathrm{ZFC}^-$).) \end{dfn} \begin{lem}\label{lem:M_is_iterability-good} Assume STH. Let $M\in\mathrm{pm}_1$ be $(0,\omega_1+1)$-iterable and $\mathbbm{m}=\mathbbm{m}^M$.
Then $\Lambda^{\mathbbm{m}}\subseteq\Sigma_{\mathbbm{m}}$ and $\mathbbm{m}$ is iterability-good. \end{lem} \begin{dfn}\label{dfn:G^M} Let $M\in\mathrm{pm}_1$. Then $\mathscr{G}^M$ denotes the set of all strong iterability-good $M$-candidates $\mathbbm{n}$ such that for every $P\triangleleft\mathbbm{n}$, if $P$ has no largest cardinal then $P\models$``I am $\mathrm{cs}(P|\omega_1^P)$''.\footnote{The clause regarding the $P\triangleleft\mathbbm{n}$ is not needed in the proof of Theorem \ref{tm:V=HOD_in_transcendent_mice}.} \end{dfn} \begin{proof}[Proof of Theorem \ref{tm:V=HOD_in_transcendent_mice}] We are assuming STH and that $M\in\mathrm{pm}_1$ is a transcendent tractable $\omega$-mouse, and want to see that $\canM=\canM^M$ is definable without parameters over $\mathcal{H}_\lambda^M$, where $\lambda=\omega_2^M$. We will show that $\mathscr{G}^M=\{\canM\}$, which suffices. We will not use the assumption that $M$ is an $\omega$-premouse, nor that it is transcendent, until the very last sentence of the proof. So what we establish prior to that point can and will also be used in the proof of Theorem \ref{tm:HOD_non-tame_mouse}. We know $\canM\in\mathscr{G}^M$, by Lemmas \ref{lem:iterable_tract_implies_strong}, \ref{lem:M_is_iterability-good} and \cite{V=HODX}, so suppose $\mathbbm{n}\in\mathscr{G}^M$ with $\mathbbm{m}\neq\mathbbm{n}$. We will form and analyse a genericity comparison of $\mathbbm{m}$ with $\mathbbm{n}$. (In the proof of Theorem \ref{tm:HOD_non-tame_mouse}, we will need to adapt this to a simultaneous comparison of all elements of $\mathscr{G}^M$, and we will leave the adaptation to the reader, but it is straightforward.) Let $\widetilde{\mathbbm{m}}=\widetilde{\mathbbm{m}}(\mathbbm{n})$ and $\widetilde{\mathbbm{n}}=\widetilde{\mathbbm{n}}(\mathbbm{m})$ (see Definition \ref{dfn:ptilde(m)}). Recall that $\widetilde{\mathbbm{m}}\trianglelefteq \mathbbm{m}^+=M|\omega_2^M$ and $\widetilde{\mathbbm{n}}\trianglelefteq\mathbbm{n}^+$ and $\rho_1^{\widetilde{\mathbbm{m}}}=\omega_1^M=\rho_1^{\widetilde{\mathbbm{n}}}$ and $\unioniv{\widetilde{\mathbbm{m}}}=U=\unioniv{\widetilde{\mathbbm{n}}}$ and there is $\xi<\omega_1^M$ such that $\Sigma_1^{\widetilde{\mathbbm{n}}}(\{\xi,p_1^{\widetilde{\mathbbm{n}}}\})$ is recursively equivalent to $\Sigma_1^{\widetilde{\mathbbm{m}}}(\{\xi,p_1^{\widetilde{\mathbbm{m}}}\})$, meaning that there are recursive functions $\varphi\mapsto\varphi'$ and $\varphi\mapsto\widehat{\varphi}$ such that for all $x\in U$ and $\Sigma_1$ formulas $\varphi$ in the passive premouse language, $\widetilde{\mathbbm{m}}\models\varphi(\xi,p_1^{\widetilde{\mathbbm{m}}},x)$ iff $\widetilde{\mathbbm{n}}\models\varphi'(\xi,p_1^{\widetilde{\mathbbm{n}}},x)$, and $\widetilde{\mathbbm{n}}\models\varphi(\xi,p_1^{\widetilde{\mathbbm{n}}},x)$ iff $\widetilde{\mathbbm{m}}\models\widehat{\varphi}(\xi,p_1^{\widetilde{\mathbbm{m}}},x)$. We may assume that the $1$-solidity witnesses for $\widetilde{\mathbbm{m}}$ are in $\mathrm{Hull}_1^{\widetilde{\mathbbm{m}}}(p_1^{\widetilde{\mathbbm{m}}}\cup\xi)$ and likewise for $\widetilde{\mathbbm{n}}$. Let $t^{\widetilde{\mathbbm{m}}}=t_1^{\widetilde{\mathbbm{m}}}$ and $t^{\widetilde{\mathbbm{n}}}=t_1^{\widetilde{\mathbbm{n}}}$. Let $(A,B)$ be the least conflicting pair with $A\triangleleft\mathbbm{m}$ and $B\triangleleft\mathbbm{n}$.
We construct a $t^{\widetilde{\mathbbm{m}}}$-genericity comparison $(\mathcal{T},\mathcal{U})$ of $(A,B)$, via $(\Lambda^{\mathbbm{m}},\Lambda^{\mathbbm{n}})$, folding in initial linear iteration past $(\xi,A,B)$, and linear iterations past $\star$-translations of non-trivial Q-structures. We now turn to the details. We first set up some notation. For $\eta\in(\xi,\omega_1^M)$, let \[ H_\eta=\mathrm{cHull}_1^{\widetilde{\mathbbm{m}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{m}}}\})\text{ and } \pi_\eta:H_\eta\to \widetilde{\mathbbm{m}}\text{ be the uncollapse},\] \[ J_\eta=\mathrm{cHull}_1^{\widetilde{\mathbbm{n}}}(\eta\cup\{p_1^{\widetilde{\mathbbm{n}}}\})\text{ and }\sigma_\eta:J_\eta\to\widetilde{\mathbbm{n}}\text{ be the uncollapse}.\] Note that $\mathrm{rg}(\pi_\eta)=\mathrm{rg}(\sigma_\eta)$ and $\mathrm{OR}^{H_\eta}=\mathrm{OR}^{J_\eta}$ and $H_\eta\triangleleft\mathbbm{m}$ and $J_\eta\triangleleft\mathbbm{n}$. Let $C\subseteq\omega_1^M$ be the club of all $\eta$ such that $\eta=\mathrm{cr}(\pi_\eta)=\mathrm{cr}(\sigma_\eta)$. So for $\eta\in C$, we have $\rho_1^{H_\eta}\leq\omega_1^{H_\eta}=\eta$ and $\rho_1^{J_\eta}\leq\omega_1^{J_\eta}=\eta$ and $\pi_\eta(\eta)=\omega_1^M=\sigma_\eta(\eta)$ and $\pi_\eta(p_1^{H_\eta}\backslash\eta)=p_1^{\widetilde{\mathbbm{m}}}$ and $\sigma_\eta(p_1^{J_\eta}\backslash\eta)=p_1^{\widetilde{\mathbbm{n}}}$ and \begin{equation}\label{eqn:t_rest_eta_is_theory_of_H_eta} t^{\widetilde{\mathbbm{m}}}\!\upharpoonright\!\eta=\Th_1^{H_\eta}(\eta\cup\{p_1^{H_\eta}\})(p_1^{H_\eta}/\dot{p}) \end{equation} and likewise for $t^{\widetilde{\mathbbm{n}}}\!\upharpoonright\!\eta$ and $J_\eta$. And given $\eta<\delta\leq\eta'$ with $\eta,\eta'\in C$ consecutive, \begin{equation}\label{eqn:encoded_surj}t^{\widetilde{\mathbbm{m}}}\!\upharpoonright\!\delta\text{ encodes a surjection } (\eta+1)^{<\omega}\to\delta.\end{equation} If $\eta\in C$ and $\rho_1^{H_\eta}<\eta$ (equivalently, $\rho_1^{J_\eta}<\eta$) then (since $H_\eta$ is $1$-sound and $\pi_\eta(p_1^{H_\eta}\backslash\eta)=p_1^{\widetilde{\mathbbm{m}}}$), \begin{equation}\label{eqn:encoded_surj_when_rho_1=om}t^{\widetilde{\mathbbm{m}}}\!\upharpoonright\!\eta\text{ encodes a surjection } \omega\to\eta.\end{equation} We will construct a strictly increasing sequence $\left<\eta_\beta\right>_{\beta<\omega_1^M}$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_\beta+1)$, recursively in $\beta$. The ordinals $\eta_\beta$ will be exactly those $\eta$ such that $M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$ is not a Q-structure for itself (and then $\eta=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$, but $\eta$ need not be Woodin in the eventual $M(\mathcal{T},\mathcal{U})$). We will see that each $\eta_\beta$ is a limit point of $C$ with $\rho_1^{H_{\eta_\beta}}=\eta_\beta$. If we have constructed $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha+1)$ where $\alpha<\omega_1^M$, we let $F^\mathcal{T}_\alpha,F^\mathcal{U}_\alpha,K_\alpha$ be as usual, and will have $F^\mathcal{T}_\alpha\neq\emptyset$ or $F^\mathcal{U}_\alpha\neq\emptyset$. We now begin the construction, considering first $\beta=0$. We construct $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_0$ in two phases.
In the first phase (given $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\alpha+1)$ where $\alpha<\eta_0$), we compare, subject to linear iteration of the least measurable $\mu$ of $K_\alpha$, until $\mu\geq\max(\xi,\mathrm{OR}^A,\mathrm{OR}^B)$. In the second, we compare, subject to $t^{\widetilde{\canM}}$-genericity iteration for meas-lim extender algebra axioms of $K_\alpha$ (equivalently, $t^{\widetilde{\mathbbm{n}}}$-genericity). Let $\eta_0$ be the least $\eta$ such that $M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$ is not a Q-structure for itself. The iteration strategies $\Lambda^{\canM},\Lambda^{\mathbbm{n}}$ apply trivially prior to stage $\eta_0$, and an easy reflection argument shows that $\eta_0<\omega_1^M$ exists (recall $M\models\mathrm{ZFC}$, so we have enough space above $\omega_1^M$ for this). Since $R=M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_0)$ is not a Q-structure for itself, we need to see that $\mathcal{T}\in\mathrm{dom}(\Lambda^{\mathbbm{m}})$ and $\mathcal{U}\in\mathrm{dom}(\Lambda^{\mathbbm{n}})$. Let $\delta=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_0)$. So $t^{\widetilde{\canM}}\!\upharpoonright\!\delta$ and $t^{\widetilde{\mathbbm{n}}}\!\upharpoonright\!\delta$ are $\mathbb{B}_{\measlim,\delta}^{\mathcal{J}(R)}$-generic over $\mathcal{J}(R)$, and $\delta$ is regular in $\mathcal{J}(R)[t^{\widetilde{\canM}}\!\upharpoonright\!\delta]$. So by line (\ref{eqn:encoded_surj}), it follows that $\delta$ is a limit point of $C$, so $\delta=\omega_1^{H_\delta}=\omega_1^{J_\delta}$, and by line (\ref{eqn:encoded_surj_when_rho_1=om}), it follows that $\rho_1^{H_\delta}=\delta$, and in fact note $\rho_\omega^{H_\delta}=\delta$ (since each $\mathrm{r}\Sigma_{n+1}$ theory in parameters can be defined from $t^{\widetilde{\mathbbm{m}}}\!\upharpoonright\!\delta$). Likewise, $\rho_1^{J_\delta}=\delta=\rho_\omega^{J_\delta}$. Note also that $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_0\subseteq(H_\delta|\delta)\cap(J_\delta|\delta)$ and $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_0$ is definable from the parameter $(A,B,\xi)$ over $H_\delta$, and likewise over $J_\delta$, and so $\eta_0=\delta$ (the most complex aspect of the definition being the $t^{\widetilde{\mathbbm{m}}}$-genericity iteration, but this is equivalent to $t^{\widetilde{\mathbbm{m}}}\!\upharpoonright\!\delta$ for this segment, and that is definable over $H_\delta$ and over $J_\delta$). (So $\eta_0$ is indeed a limit point of $C$, etc.) Now it follows that $\mathbbm{m}\models$``$\mathcal{T}\!\upharpoonright\!\eta_0$ is $H_{\eta_0}$-optimal'' and $\mathbbm{n}\models$``$\mathcal{U}\!\upharpoonright\!\eta_0$ is $J_{\eta_0}$-optimal'', and hence these trees are in the domains of our strategies, as desired. Now suppose we have constructed $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_\beta+1)$ for some $\beta$, with $\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\beta)=\eta_\beta$. To reach $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_{\beta+1}+1)$, we first determine whether there is $E\in\mathbb{E}(K_{\eta_\beta})$ which induces a bad meas-lim extender algebra axiom with $\nu(E)=\eta_\beta$.
If so, set $E^\mathcal{T}_{\eta_\beta}=E^\mathcal{U}_{\eta_\beta}=$ the least such. After that, or otherwise, we proceed with comparison, subject to pushing the least measurable of $K_\alpha$ which is $>\eta_\beta$, to $\geq\max(\mathrm{OR}((Q^\mathcal{T})^\star),\mathrm{OR}((Q^\mathcal{U})^\star))$, where $Q^\mathcal{T}=Q(\mathcal{T}\!\upharpoonright\!\eta_\beta,[0,\eta_\beta)_\mathcal{T})$ and likewise for $Q^\mathcal{U}$, and the superscript-$\star$ denotes the associated $\star$-translation (using $H_{\eta_\beta}$ and $\mathcal{T}\!\upharpoonright\!\eta_\beta$ for the $\mathcal{T}$-side, and $J_{\eta_\beta}$ and $\mathcal{U}\!\upharpoonright\!\eta_\beta$ for the $\mathcal{U}$-side). After that, we again compare subject to $t^{\widetilde{\canM}}$-genericity iteration as before. By induction, $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\beta$ is definable from parameters over $H_{\eta_\beta}$ and over $J_{\eta_\beta}$, which are segments of $(Q^\mathcal{T})^\star$ and $(Q^\mathcal{U})^\star$ respectively. So from $(Q^\mathcal{T})^\star$ we can recover (from parameters) first $\mathcal{T}\!\upharpoonright\!\eta_\beta$, and hence also $\mathcal{T}\!\upharpoonright\!(\eta_\beta+1)$, the last step because $Q^\mathcal{T}=((Q^\mathcal{T})^\star)^{{\text{\tiny\spiral}}}$; likewise $\mathcal{U}\!\upharpoonright\!(\eta_\beta+1)$ from $(Q^\mathcal{U})^\star$. And the construction from $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_\beta+1)$ through $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_{\beta+1}+1)$ is like that for $(\mathcal{T},\mathcal{U})\!\upharpoonright\!(\eta_0+1)$. Now suppose we have defined $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta$ where $\eta=\sup_{\beta<\lambda}\eta_\beta$ and $\lambda$ is a limit. So $\eta=\delta((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$ and $\eta$ is a limit point of $C$. Suppose first that $M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$ is a Q-structure for itself (hence we will set $\eta<\eta_\lambda$). We now compare subject to pushing the least measurable of $K_\alpha$ which is $>\eta$, to above where $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta$ is constructed in $\mathbbm{m}$ and $\mathbbm{n}$, and then proceed subject to genericity iteration, producing $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\lambda$. Finally suppose that $M((\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta)$ is not a Q-structure for itself. Then $\eta_\lambda=\eta\in C$ is a limit point of $C$, etc. We can now proceed basically as in the successor case to see that $\mathcal{T}\!\upharpoonright\!\eta\in\mathrm{dom}(\Lambda^{\mathbbm{m}})$, etc., but also using now that we can uniformly compute the Q-structures which guide $(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta$, by ${\text{\footnotesize\spiral}}$-construction, since we inductively folded in iteration past their $\star$-translations (the computation of the genericity iteration aspect is as before). This completes the construction of the comparison. It lasts $\delta=\omega_1^M$ stages.
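Before extracting consequences, let us record, purely as a restatement of the inductive features just established (for ease of reference in the claims below), that for each $\beta$,
\[
(\mathcal{T},\mathcal{U})\!\upharpoonright\!\eta_\beta\ \text{ is definable from parameters over }H_{\eta_\beta}\text{ and over }J_{\eta_\beta},
\]
and the Q-structures guiding the comparison are recovered, via the ${\text{\footnotesize\spiral}}$-construction, from their $\star$-translations, past which the comparison was folded to iterate.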
Either $\mathcal{T}$ or $\mathcal{U}$ has no cofinal branch in $M$, as before. Let $b=\Sigma_A(\mathcal{T})$ (the correct $\mathcal{T}$-cofinal branch) and $Q=Q(\mathcal{T},b)$. Let $Q^\star=Q^\star(\mathcal{T},\widetilde{\mathbbm{m}})$. \begin{clmfive}\label{clm:Q^star_reaches_M} $\mathbbm{m}^+=Q^\star||\mathrm{OR}(\mathbbm{m}^+)$.\end{clmfive} \begin{proof} Suppose not. By STH, it follows that $Q^\star\triangleleft\mathbbm{m}^+$. And $Q^{\star{\text{\tiny\spiral}}}=Q^{\star{\text{\tiny\spiral}}}(\mathcal{T},\widetilde{\mathbbm{m}})=Q$, so by Lemmas \ref{lem:compute_*_q_translation} and \ref{lem:Q_computes_b}, we get $b\in M$, and hence there is no $\mathcal{U}$-cofinal branch in $M$. (Our assumptions seem to allow the possibility that $M=\mathcal{J}(Q^\star)$, in which case it may be that $Q\notin M$, but still the relevant theory $t\in M$.) \begin{sclmfive} $(\mathbbm{n}^+)_0^{\text{\tiny\spiral}}=(\mathbbm{n}^+)_0^{\text{\tiny\spiral}}(\mathcal{U},\widetilde{\mathbbm{n}})$ is well-defined and satisfies ``$\delta$ is Woodin'' (note if $M\models$``$\omega_2$ exists'' then it follows that $\mathbbm{n}^{+{\text{\tiny\spiral}}}=(\mathbbm{n}^+)_0^{\text{\tiny\spiral}}$). \end{sclmfive} \begin{proof} Suppose not and let $R\triangleleft\mathbbm{n}^+$ be least such that $\widetilde{\mathbbm{n}}\trianglelefteq R$ and either (i) $R^{\text{\tiny\spiral}}=R^{\text{\tiny\spiral}}(\mathcal{U},\widetilde{\mathbbm{n}})$ is ill-defined or not a premouse, or (ii) it is a well-defined premouse and is a Q-structure for $M(\mathcal{U})$ or projects $<\delta$. If (i) holds then working in $\mathbbm{n}^+$, which has universe that of $\mathbbm{m}^+$, we can use condensation to find $\bar{R}\triangleleft\mathbbm{n}$ and a sufficiently elementary $\pi:\bar{R}\to R$ with $\mathrm{cr}(\pi)=\bar{\delta}=\omega_1^{\bar{R}}$, $\widetilde{\mathbbm{n}},\mathcal{U}\in\mathrm{rg}(\pi)$, $\pi(\bar{\widetilde{\mathbbm{n}}})=\widetilde{\mathbbm{n}}$, $\pi(\bar{\mathcal{U}})=\mathcal{U}$ and hence $\bar{\mathcal{U}}=\mathcal{U}\!\upharpoonright\!\bar{\delta}$. Also, $\bar{\widetilde{\mathbbm{n}}}\trianglelefteq\bar{R}$, and $\bar{\mathcal{U}}$ is $\bar{\widetilde{\mathbbm{n}}}$-optimal. By Lemma \ref{lem:compute_*_q_translation}, the ill-definedness of $R^{\text{\tiny\spiral}}$ reflects to $\bar{R}^{\text{\tiny\spiral}}(\bar{\mathcal{U}},\bar{\widetilde{\mathbbm{n}}})$, contradicting that $\mathbbm{n}$ is iterability-good. So (ii) holds. But then $R^{\text{\tiny\spiral}}$ must determine a $\mathcal{U}$-cofinal branch, because otherwise, we can do a similar reflection argument to get a Q-structure for some $M(\bar{\mathcal{U}})$ with $\bar{\mathcal{U}}\triangleleft\mathcal{U}$, produced by ${\text{\footnotesize\spiral}}$-construction, which does not yield a $\bar{\mathcal{U}}$-cofinal branch, again contradicting that $\mathbbm{n}$ is iterability-good. \end{proof} By the subclaim, $Q\ntriangleleft(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$. \begin{sclmfive} In $M$ (hence also in $\mathbbm{n}^+$) there is a club $C\subseteq\delta$ consisting of Woodin cardinals of $M(\mathcal{T},\mathcal{U})$, hence Woodin cardinals of $(\mathbbm{n}^+)_0^{\text{\tiny\spiral}}$.
\end{sclmfive} \begin{proof} By Lemma \ref{lem:compute_*_q_translation}, $t=t_{q+1}^Q(\delta)\in\unioniv{\mathbbm{m}^+}=\unioniv{\mathbbm{n}^+}$, where $\rho_{q+1}^Q\leq\delta\leq\rho_q^Q$. Fix the least $N\triangleleft\mathbbm{n}^+$ such that $t\in\mathcal{m}athcal{J}(N)$, so $\rho_\omega^N=\delta$. By STH and \ref{lem:compute_*_q_translation}, $N^{\text{\tiny\spiral}}=N^{\text{\tiny\spiral}}(\mathcal{m}athcal{U},\widetilde{\mathbbm{n}})\triangleleft(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$, $\rho_\omega(N^{\text{\tiny\spiral}})=\delta$, $(N^{\text{\tiny\spiral}})^\ \|\ ar=N$ and $t$ is definable from parameters over $N^{\text{\tiny\spiral}}[t_1^{\widetilde{\mathbbm{n}}}]$. We claim that $N^{\text{\tiny\spiral}}\ntriangleleft Q$. For suppose $R\triangleleft Q$ and $t$ is definable from parameters over $R[t_1^{\widetilde{\mathbbm{n}}}]$. We have that $t_1^{\widetilde{\mathbbm{n}}}$ is also generic over $Q$ for $\mathcal{m}athbb B_{\mathcal{m}easlim,\delta}^Q$, and from $t$ and $t_1^{\widetilde{\mathbbm{n}}}$ one can compute the corresponding theory of $Q[t_1^{\widetilde{\mathbbm{n}}}]$ which could be denoted $t_{q+1}^{Q[t_1^{\widetilde{\mathbbm{n}}}]}$. But that theory is not in $Q[t_1^{\widetilde{\mathbbm{n}}}]$ by a standard diagonalization. So $N^{\text{\tiny\spiral}}\ntriangleleft Q$, but $Q\ntrianglelefteq N^{\text{\tiny\spiral}}$. And we have $Q^\ \|\ ar\triangleleft\mathbbm{m}^+$ and $N\triangleleft\mathbbm{n}^+$. So working in $M$, we can fix $P\triangleleft M$ with $\rho_\omega^{P}=\delta$ and these objects all in $\mathcal{m}athcal{J}(P)$, and a form a continuous, increasing chain $\left<P'_\alpha\right>_{\alpha<\omega_1^M}$ of substructures $P'_\alpha\preccurlyeq_n P$, with $n<\omega$ sufficiently large, and all relevant objects definable from parameters in $P'_0$, and a club $C=\left<\delta_\alpha\right>_{\alpha<\omega_1^M}$, such that $P'_\alpha\cap\delta=\delta_\alpha$. Let $P_\alpha$ be the transitive collapse of $P'_\alpha$ and $\pi_\alpha:P_\alpha\to P$ the uncollapse, so $\mathcal{m}athrm{cr}(\pi_\alpha)=\delta_\alpha$ and $\pi_\alpha(\delta_\alpha)=\delta$. By condensation, we have $P_\alpha\triangleleft\mathbbm{m}$. Let $\widetilde{\mathbbm{m}}_\alpha$, $Q^{\text{s}}_\alpha$, $Q_\alpha$, $\widetilde{\mathbbm{n}}_\alpha$, $N_\alpha$, $N_\alpha^{\text{b}}$ be the resulting ``preimages'' of $\widetilde{\mathbbm{m}}$, $Q^\ \|\ ar$, $Q$, $\widetilde{\mathbbm{n}}$, $N$, $N^{\text{\tiny\spiral}}$ respectively. Then (because $n$ is large enough), condensation and elementarity give that $\widetilde{\mathbbm{m}}_\alpha\trianglelefteq Q^{\text{s}}_\alpha\triangleleft\mathbbm{m}$ and $\widetilde{\mathbbm{n}}_\alpha\trianglelefteq N_\alpha\triangleleft\mathbbm{n}$ and the relevant first order properties reflect down to these models at each $\alpha$, along with $(\mathcal{m}athcal{T},\mathcal{m}athcal{U})\!\upharpoonright\!\delta_\alpha$, which is the preimage of $(\mathcal{m}athcal{T},\mathcal{m}athcal{U})$. It follows that the Q-structures used at stage $\delta_\alpha$ in $\mathcal{m}athcal{T},\mathcal{m}athcal{U}$ are distinct, and therefore $\delta_\alpha$ is Woodin in $M(\mathcal{m}athcal{T},\mathcal{m}athcal{U})$. So $C$ is a club of Woodins of $M(\mathcal{m}athcal{T},\mathcal{m}athcal{U})$. \end{proof} We can now easily reach a contradiction. We have $((\mathbbm{n}^+)^{\text{\tiny\spiral}}_0)^\ \|\ ar(\mathcal{m}athcal{U},\widetilde{\mathbbm{n}})=\mathbbm{n}^+$. Let $R'\triangleleft\mathbbm{n}^+$ be least such that $C\in\mathcal{m}athcal{J}(R')$, so $\rho_\omega^{R'}=\delta$. 
Let $Q'=(R')^{\text{\tiny\spiral}}$. So $\rho_\omega^{Q'}=\delta$ and $Q'\triangleleft\mathbbm{n}^{+{\text{\tiny\spiral}}}$ and $R'=(Q')^\ \|\ ar$. So $C$ is definable from parameters over $Q'[t_1^{\widetilde{\mathbbm{n}}}]$, so $C\in(\mathbbm{n}^+)_0^{\text{\tiny\spiral}}[t_1^{\widetilde{\mathbbm{n}}}]$. But since $\delta$ is Woodin in $(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$, the forcing is $\delta$-cc in $(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$, so there is a club $D\subseteq C$ with $D\in(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$. Letting $\eta$ be the least limit point of $D$, then $\cof^{(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0}(\eta)=\omega$, so $\eta$ is not Woodin in $(\mathbbm{n}^+)^{\text{\tiny\spiral}}_0$, hence not Woodin in $M(\mathcal{m}athcal{T},\mathcal{m}athcal{U})$, a contradiction, completing the proof of the claim.\end{proof} But now since $M$ is an $\omega$-mouse, the claim implies $M\trianglelefteq Q^\ \|\ ar$, which by STH contradicts the assumption that $M$ is transcendent. \end{proof} \begin{proof}[Proof of Theorem \ref{tm:HOD_non-tame_mouse}] We are no longer assuming that $M$ is transcendent, nor an $\omega$-mouse. But we assume that $M\mathcal{m}odels\mathcal{m}athrm{ZF}C$ and have $H=\mathcal{m}athrm{HOD}^{\unioniv{M}}$. Suppose $H\neq\unioniv{M}$; we want to analyse $H$. The analysis is analogous to that in the tame case, Theorem \ref{tm:HOD_tame_mouse}. However, we will not prove that $\mathcal{m}athbb{E}^W$ is the restriction of $\mathcal{m}athbb{E}^M$ above $\omega_3^M$ (or above anywhere); we will instead get that $M$ is a $\ \|\ ar$-translation of some appropriate $W$. Let $\mathbbm{n}\in\mathcal{m}athscr{G}^M$. Recall that everything in the proof of Theorem \ref{tm:V=HOD_in_transcendent_mice} preceding its very last sentence applies. So we can compare $(\mathbbm{m},\mathbbm{n})$ as before, producing a comparison $(\mathcal{m}athcal{T},\mathcal{m}athcal{U})$ of length $\bar{\delta}=\omega_1^M$, and either $\mathcal{m}athcal{T}$ or $\mathcal{m}athcal{U}$ has no cofinal branch in $M$. (In the current proof we write $\delta=\omega_3^M$.) Let $b=\Sigma_A(\mathcal{m}athcal{T})$ (the correct $\mathcal{m}athcal{T}$-cofinal branch) and $Q=Q(\mathcal{m}athcal{T},b)$. By Claim \ref{clm:Q^star_reaches_M} of the preceding proof, $\mathbbm{m}^+=Q^\ \|\ ar||\omega_2^M$. We are now also assuming that $M\mathcal{m}odels\mathcal{m}athrm{ZF}C$, so $\mathbbm{m}^+\mathcal{m}odels\mathcal{m}athrm{ZF}C^-$, so we can't have $\mathbbm{m}^+= Q^\ \|\ ar$ (but it seems $Q^\ \|\ ar$ might be active at $\omega_2^M$). We also have $\canM^{+{\text{\tiny\spiral}}}=\canM^{+{\text{\tiny\spiral}}}(\mathcal{m}athcal{T},\widetilde{\mathbbm{m}})=Q||\omega_2^M$ is well-defined, and satisfies ``$\bar{\delta}$ is Woodin''. \begin{clmsix}\label{clm:1} $\mathbbm{n}^{+{\text{\tiny\spiral}}}=\mathbbm{n}^{+{\text{\tiny\spiral}}}(\mathcal{m}athcal{U},\widetilde{\mathbbm{n}})$ is well-defined and $\canM^{+{\text{\tiny\spiral}}}=\mathbbm{n}^{+{\text{\tiny\spiral}}}$. \end{clmsix} \begin{proof} A reflection argument like before shows that either $\mathbbm{n}^{+{\text{\tiny\spiral}}}$ is well-defined (producing a model of height $\omega_2^M$) or it reaches a Q-structure. If it reaches a Q-structure, we can argue as above to produce a club of Woodins $\subseteq\bar{\delta}$ for a contradiction. 
And if it does not reach a Q-structure but $\mathbbm{m}^{+{\text{\tiny\spiral}}}\neq\mathbbm{n}^{+{\text{\tiny\spiral}}}$, we can again reflect a disagreement down, noting that it also produces a club $\left<\delta_\alpha\right>_{\alpha<\omega_1^M}$ with disagreement between the Q-structures at stage $\delta_\alpha$, and hence again a club of Woodins. \end{proof} \begin{clmsix}\label{clm:M^moon=N^moon} We have: \begin{enumerate} \item\label{item:claim4} $M^{\text{\tiny\spiral}}=M^{\text{\tiny\spiral}}(\mathcal{m}athcal{T},\widetilde{\mathbbm{m}})$ is well-defined, $M^{\text{\tiny\spiral}}|({\bar{\delta}}^+)^{M^{\text{\tiny\spiral}}}=(\canM^+)^{\text{\tiny\spiral}}$ and $M^{\text{\tiny\spiral}}\mathcal{m}odels$``$\bar{\delta}$ is Woodin'', \item\label{item:claim5(i)} $\mathcal{m}athrm{cs}(\mathbbm{n})^{\unioniv{M}}$ is well-defined, and hence is a proper class premouse $N$ with $\unioniv{N}=\unioniv{M}$, \item\label{item:claim5(ii)} $N^{\text{\tiny\spiral}}=N^{\text{\tiny\spiral}}(\mathcal{m}athcal{U},\widetilde{\mathbbm{n}})$ is well-defined, and \item\label{item:claim5(iii)} $M^{\text{\tiny\spiral}}=N^{\text{\tiny\spiral}}$. \end{enumerate} \end{clmsix} \begin{proof} Part \ref{item:claim4}: The well-definedness follows easily from condensation, since $(\canM^+)^{\text{\tiny\spiral}}$ is well-defined. The rest is by Lemma \ref{lem:compute_*_q_translation}. Parts \ref{item:claim5(i)}--\ref{item:claim5(iii)}: Suppose not. We have $M\mathcal{m}odels\mathcal{m}athrm{ZF}C$. So fix a limit cardinal $\lambda$ of $M$ such that either $\mathcal{m}athrm{cs}(\mathbbm{n})^{M|\lambda}$ or $(\mathcal{m}athrm{cs}(\mathbbm{n})^{M|\lambda})^{\text{\tiny\spiral}}$ is not well-defined, or $(\mathcal{m}athrm{cs}(\mathbbm{n})^{M|\lambda})^{\text{\tiny\spiral}}\neq (M|\lambda)^{\text{\tiny\spiral}}$. We will again reflect the failure down to a segment of $\mathbbm{m}^+$, and reach a contradiction. We have to be a little careful how we form the hull to do this, however. Note that standard condensation holds for all segments of $M^{\text{\tiny\spiral}}$, since otherwise by condensation in $M$ we could reflect the failure down to $(\canM^+)^{\text{\tiny\spiral}}$, which is iterable. Let $R=\mathcal{m}athcal{J}(M|\lambda)^{\text{\tiny\spiral}}$; because $\lambda$ is an $M$-cardinal, we have $R=\mathcal{m}athcal{J}((M|\lambda)^{\text{\tiny\spiral}})$ and $\mathcal{m}athrm{OR}^R=\lambda+\omega$ and $\rho_\omega^R=\lambda$ and $R\triangleleft M^{\text{\tiny\spiral}}$. Let $\alpha<\omega_2^M$ be such that $\mathbbm{n}\in M|\alpha$ and $\alpha=\mathcal{m}athrm{cr}(\pi_\alpha)$ where $\pi_\alpha:C_\alpha\to R$ is the uncollapse map for $C_\alpha=\mathrm{cHull}_1^{R}(\alpha\union\{\lambda\})$. (Note that $\widetilde{\mathbbm{m}},\widetilde{\mathbbm{n}}, \mathcal{m}athcal{T},\mathcal{m}athcal{U}\in M|\alpha$ also.) So $C_\alpha=\mathcal{m}athcal{J}(K)$ for some $K$, and $K\preccurlyeq_1 C_\alpha$ (as $R|\lambda\preccurlyeq_1 R$ as $\lambda$ is a cardinal of $R$), so $C_\alpha$ is $\alpha$-sound, with $\rho_1^{C_\alpha}\leq\alpha$ and $p_1^{C_\alpha}\backslash\alpha=\{\pi_\alpha^{-1}(\lambda)\}$. Therefore $C_\alpha\triangleleft R$. Let \[ C=\mathcal{m}athrm{Hull}_1^R(\bar{\delta}\union\{\lambda,\alpha\}) \] and $\pi:C\to R$ the uncollapse. Note that $C_\alpha\in\mathcal{m}athrm{rg}(\pi)$, since $C_\alpha\trianglelefteq D$ where $D$ is the least segment of $R$ projecting to ${\bar{\delta}}$ with $\alpha\leq\mathcal{m}athrm{OR}^D$. Hence $\pi(C_\alpha)=C_\alpha$. 
It easily follows that $C$ is $1$-sound with $\rho_1^C={\bar{\delta}}$ and $p_1^C=\{\pi^{-1}(\lambda),\alpha\}$, and so $C\triangleleft R$. Let \[ C'=\mathrm{cHull}_1^{\mathcal{m}athcal{J}(M|\lambda)}({\bar{\delta}}\union\{\lambda,\alpha\}) \] and $\pi':C'\to\mathcal{m}athcal{J}(M|\lambda)$ the uncollapse. Note that $\mathcal{m}athrm{rg}(\pi')\cap\mathcal{m}athrm{OR}=\mathcal{m}athrm{rg}(\pi)\cap\mathcal{m}athrm{OR}$, since all $\Sigma_1$ facts true in $R[\widetilde{\mathbbm{m}}]$ are $\Sigma_1$-forced over $R$ and $({\bar{\delta}}+1)\subseteq\mathcal{m}athrm{rg}(\pi)$, and by STH and Lemma \ref{lem:compute_*_q_translation}, $\mathcal{m}athcal{J}(M|\lambda)$ is $\Sigma_1^{R[\widetilde{\mathbbm{m}}]} (\{\mathcal{m}athcal{T},\widetilde{\mathbbm{m}},\lambda\})$ and, conversely, $R$ is $\Sigma_1^{\mathcal{m}athcal{J}(M|\lambda)}(\{\mathcal{m}athcal{T},\widetilde{\mathbbm{m}},\lambda\})$. Much as above, $C'\triangleleft M$, and note that $C=(C')^{\text{\tiny\spiral}}$, by the elementarity of $\pi,\pi'$ and the corresponding $\Sigma_1$-definability of the $\ \|\ ar$-translation/${\text{\footnotesize\spiral}}$-construction of $C,C'$.\footnote{Alternatively, we have $\rho_1^{(C')^{\text{\tiny\spiral}}}=\rho_1^{C'}={\bar{\delta}}$, and in the ${\text{\footnotesize\spiral}}$-construction, after projecting to $\bar{\delta}$, a segment cannot later be lost, so $C\trianglelefteq(C')^{\text{\tiny\spiral}}$ or vice versa, but $({\bar{\delta}}^+)^{C}=({\bar{\delta}}^+)^{C'}=({\bar{\delta}}^+)^{(C')^{\text{\tiny\spiral}}}$, so $C=(C')^{\text{\tiny\spiral}}$.} Now $C\triangleleft\mathbbm{m}^{+{\text{\tiny\spiral}}}=\mathbbm{n}^{+{\text{\tiny\spiral}}}$. Let $K$ be such that $C=\mathcal{m}athcal{J}(K)$. Then $(K^\ \|\ ar)^{\mathbbm{m}}=K^\ \|\ ar(\mathcal{m}athcal{T},\widetilde{\mathbbm{m}})\trianglelefteq \mathbbm{m}^+$ and $(K^\ \|\ ar)^{\mathbbm{n}}=K^\ \|\ ar(\mathcal{m}athcal{U},\widetilde{\mathbbm{n}})\trianglelefteq\mathbbm{n}^+$. Because $K$ has no largest cardinal, $(K^\ \|\ ar)^{\mathbbm{m}}=\mathbbm{m}^+|\mathcal{m}athrm{OR}^K$ has universe that of $K[\widetilde{\mathbbm{m}}]$, and $(K^\ \|\ ar)^{\mathbbm{n}}=\mathbbm{n}^+|\mathcal{m}athrm{OR}^K$ that of $K[\widetilde{\mathbbm{n}}]$, but note these universes are identical. Because $\mathbbm{m},\mathbbm{n}\in\mathcal{m}athscr{E}^M$ and by an easy reflection below $\omega_1^M$, it follows that $\mathbbm{n}^+|\mathcal{m}athrm{OR}^K\mathcal{m}odels$``I am $\mathcal{m}athrm{cs}(\mathbbm{n})$ and $\mathcal{m}athrm{cs}(\mathbbm{n})^{{\text{\tiny\spiral}}}$ is well-defined and equals $K$''. But since also $K=(\mathbbm{m}^+)^{{\text{\tiny\spiral}}}$, and $\pi':C'\to\mathcal{m}athcal{J}(M|\lambda)$ is sufficiently elementary, this gives a contradiction, establishing the claim. \end{proof} Now we have $\card^M(\mathcal{m}athscr{G}^M)\leq\omega_2^M$ and \footnote{In the analogous situation in the tame case, we had $\mathcal{m}athscr{G}^M\subseteq\mathcal{m}athscr{P}^M$ and $\card^M(\mathcal{m}athscr{P}^M)\leq\omega_1^M$, but for non-tame, as far as the author knows, we might have $\mathcal{m}athscr{G}^M\not\subseteq\mathcal{m}athscr{P}^M$.} and $\delta=\omega_3^M$. For each $\mathbbm{n}$, write $\mathbbm{n}^{++}=\mathcal{m}athrm{sJs}(\mathcal{m}athrm{sJs}(\mathbbm{n}))^{\mathcal{m}athcal{H}_\delta^M}$ (so $\mathbbm{m}^{++}=M|\delta$, and by Claim \ref{clm:M^moon=N^moon}, $\mathbbm{n}^{++}$ has universe $\mathcal{m}athcal{H}_\delta^M$ for all $\mathbbm{n}$) and $t_{\mathbbm{n}}=\Th_{\mathcal{m}athrm{r}\Sigma_2}^{\mathbbm{n}^{++}}(\delta)$. 
Then for all $\mathbbm{n}_1,\mathbbm{n}_2\in\mathcal{m}athscr{G}^M$, there are parameters $\vec{\beta}\in\omegaega_2^M$ such that for all $\vec{\alpha}<\omegaega_3^M$, \[ t_{\mathbbm{n}_1}\!\upharpoonright\!\{\vec{\beta},\vec{\alpha}\}\text{ is recursively equivalent to } t_{\mathbbm{n}_2}\!\upharpoonright\!\{\vec{\beta},\vec{\alpha}\},\text{ uniformly in }\vec{\alpha}; \] this follows from the claims regarding the comparison above (comparing each of $\mathbbm{n}_1,\mathbbm{n}_2$ with $\mathbbm{m}=\canM^M$ and considering the respective common ${\text{\footnotesize\spiral}}$-constructions, to translate between $t_{\mathbbm{n}_i}$ and $t^{\mathbbm{m}}$). So for extenders with critical points $>\omegaega_2^M$, $t_{\mathbbm{n}}$-genericity iteration for some $\mathbbm{n}$, is equivalent to simultaneous $t_\mathbbm{n}$-genericity iteration for all $\mathbbm{n}$. Now consider the simultaneous comparison of all $\mathbbm{n}\in\mathcal{m}athscr{G}^M$, as above, interweaving $t_\mathbbm{n}$-genericity iteration, and interweaving least measurables until passing $\omega_2^M$, using $\Lambda^{\mathbbm{n}^{++}}$ to iterate $\mathbbm{n}$; that is, the strategy defined like $\Lambda^{\mathbbm{n}}$, but over $\mathbbm{n}^{++}$. (Since $\mathbbm{n}\preccurlyeq_1\mathbbm{n}^{++}$, this works.) Since $H\subsetneq M$, we must have $\{\canM\}\subsetneq\mathcal{m}athscr{G}^M$, so the comparison cannot succeed. Let $\mathcal{m}athcal{T}_{\mathbbm{n}}$ be the tree on $\mathbbm{n}$. We can analyse the comparison like we analysed the comparison of two models earlier, and we get similar results. It lasts exactly $\omega_3^M$ steps, and letting $\mathbbm{n}^{+\infty}=\mathcal{m}athrm{cs}(\mathbbm{n})^{\unioniv{M}}$, then $\mathbbm{n}'=(\mathbbm{n}^{+\infty})^{\text{\tiny\spiral}}(\mathcal{m}athcal{T}_{\mathbbm{n}},\mathbbm{n}^{++})$ is a proper class premouse extending $M(\mathcal{m}athcal{T}_{\mathbbm{n}})$, satisfies ``$\delta$ is Woodin'', and is independent of $\mathbbm{n}$. So $W=\mathbbm{n}'$ is definable without parameters over $\unioniv{M}$, and each $t_{\mathbbm{n}}$ is $(W,\mathcal{m}athbb B_{\mathcal{m}easlim,\delta}^W)$-generic. In particular, $W\subseteq H=\mathcal{m}athrm{HOD}^{\unioniv{M}}$. Let $t=\Th_{\Sigma_2}^{\mathcal{m}athcal{H}_\delta^M}(\delta)$. \begin{clmsix}\label{clm:6} $H=\unioniv{W}[t]$ and $\unioniv{M}=H[\canM^M|\delta]$.\end{clmsix} \begin{proof}By the previous paragraph, $t$ is $(W,\mathcal{m}athbb B_{\mathcal{m}easlim,\delta}^W)$-generic and $W[t]\subseteq H$. And letting $\mathcal{m}athbb Q\in H$ be Vopenka for adding subsets of $\omega_1^M$, then $G_{\canM^M}$ is $(H,\mathcal{m}athbb Q)$-generic. We need to examine more closely the particular Vopenka needed to add $\canM^M$. \begin{sclmsix}Let $A\subseteq\mathcal{m}athscr{G}^M$ be $\mathcal{m}athrm{OD}^{\unioniv{M}}$. Then $A$ is $\Sigma_2^{\mathcal{m}athcal{H}_{\delta}^M}(\{\alpha\})$ for some $\alpha<\delta$.\end{sclmsix} \begin{proof} Let $\lambda$ be some limit cardinal of $M$ such that $A$ is $\mathcal{m}athrm{OD}$ over $\mathcal{m}athcal{H}_\lambda^M$. 
Let $\mathbbm{n}\in\mathcal{m}athscr{G}^M$ and choose $\alpha<\delta$ such that letting \[ C_{\mathbbm{n}}=\mathrm{cHull}_1^{\mathcal{m}athcal{J}(\mathbbm{n}^{+\infty}|\lambda)} (\omegaega_2^M\union\{\lambda,\alpha\}) \] and $\pi_{\mathbbm{n}}:C_{\mathbbm{n}}\to\mathcal{m}athcal{J}(\mathbbm{n}'|\lambda)$ be the uncollapse, then $C_{\mathbbm{n}}$ is sound with $\rho_1^{C_{\mathbbm{n}}}=\omegaega_2^M$ and $p_1^{C_{\mathbbm{n}}}=\{(\pi_{\mathbbm{n}})^{-1}(\lambda),\alpha\}$ and $C_{\mathbbm{n}}\triangleleft {\mathbbm{n}}^{++}$. For $\mathbbm{n}_1,\mathbbm{n}_2\in\mathcal{m}athscr{G}^M$, then $\mathbbm{n}_1^{+\infty}|\lambda$ is inter-definable with $\mathbbm{n}_2^{+\infty}|\lambda$, uniformly in parameters $\mathbbm{n}_1,\mathbbm{n}_2\in\mathcal{m}athcal{H}_\gamma^M\subseteq C_{\mathbbm{n}_1}\cap C_{\mathbbm{n}_2}$, where $\gamma=\omega_2^M$. It follows that $\xi=_{\mathrm{def}}\mathcal{m}athrm{OR}(C_{\mathbbm{n}_1})=\mathcal{m}athrm{OR}(C_{\mathbbm{n}_2})$ and $C_{\mathbbm{n}_1},C_{\mathbbm{n}_2}$ have the same universe. But then note that $A$ is $\Sigma_2^{\mathcal{m}athcal{H}_{\delta}^M}(\{\xi,\alpha\})$ for some $\alpha<\xi$, because $\mathcal{m}athscr{G}^M$ is definable over $\mathcal{m}athcal{H}_{\gamma}^M$ and $\{\mathcal{m}athcal{H}_{\gamma}^M\}$ is $\Sigma_2^{\mathcal{m}athcal{H}_\delta^M}$, so the function $\mathbbm{n}\mathcal{m}apsto C_{\mathbbm{n}}$ is $\Sigma_2^{\mathcal{m}athcal{H}_\delta^M}(\{\xi\})$, and this suffices. This proves the subclaim.\end{proof} Let $\mathcal{m}athbb P\in H$ be the Vopenka corresponding to $\mathcal{m}athrm{OD}^M$ subsets of $\mathcal{m}athscr{G}^M$, taking ordinal codes $<\delta$ in the natural form given be the foregoing proof, as conditions. Note then that $\mathcal{m}athbb P$ (with its ordering) is $\Sigma_2^{\mathcal{m}athcal{H}_\delta^M}$, and $\mathcal{m}athbb P\in\unioniv{W}[t]$. Given $\mathbbm{n}\in\mathcal{m}athscr{G}^M$, note that $\mathbbm{n}^{++}$ can be computed from $(G_\mathbbm{n},t)$, so $\mathbbm{n}^{++}\in W[t][G_\mathbbm{n}]$. Conversely, easily $G_\mathbbm{n}\in W[t][\mathbbm{n}^{++}]$. Since $\mathbbm{n}^{+\infty}=W^\ \|\ ar(\mathcal{m}athcal{T}_{\mathbbm{n}},\mathbbm{n}^{++})$, therefore $\unioniv{\mathbbm{n}^{+\infty}}=W[t][\mathbbm{n}^{++}]=W[t][G_\mathbbm{n}]$. In particular, \[ \unioniv{M}=W[t][G_{\canM^M}]=H[G_{\canM^M}]. \] It follows that $\unioniv{W}[t]=H$, just by the general $\mathcal{m}athrm{ZF}C$ fact that if $N_1\subseteq N_2$ are proper class transitive models of $\mathcal{m}athrm{ZF}C$ and there is $\mathcal{m}athbb P\in N_1$ and $G$ which is both $(N_1,\mathcal{m}athbb P)$-generic and $(N_2,\mathcal{m}athbb P)$-generic and $N_1[G]=N_2[G]$, then $N_1=N_2$. This proves Claim \ref{clm:6}. \end{proof} We have now completed the proof except for one more fact when below a Woodin limit of Woodins: \begin{clmsix} Suppose $M$ is below a Woodin limit of Woodins. Then there is $\alpha<\omega_3^M$ such that $\unioniv{M}=H[M|\alpha]$, and hence some $X\subseteq\omega_2^M$ with $\unioniv{M}=H[X]$. \end{clmsix} \begin{proof}For this, let $\alpha_0$ be a proper limit stage of $\mathcal{m}athcal{T}=\mathcal{m}athcal{T}_{\canM^M}$ such that the Woodins of $W|\delta$ are bounded strictly below $\delta(\mathcal{m}athcal{T}\!\upharpoonright\!\alpha_0)$, and let $\alpha>\delta(\mathcal{m}athcal{T}\!\upharpoonright\!\alpha_0)$ be such that $\mathcal{m}athcal{T}\!\upharpoonright\!(\alpha_0+1)\in M|\alpha$. 
Then $M|\delta$ can be inductively recovered from $M|\alpha$ and $W|\delta$, by comparing $\mathcal{m}athcal{T}\!\upharpoonright\!(\alpha_0+1)$ (as a phalanx) against $W|\delta$, using the $\ \|\ ar$-translations $Q^\ \|\ ar$ of the Q-structures $Q=Q(\mathcal{m}athcal{T}\!\upharpoonright\!\lambda,b)\trianglelefteq W$ to compute projecting mice $N\triangleleft M|\delta$ (noting that if $Q\neq M(\mathcal{m}athcal{T}\!\upharpoonright\!\lambda)$ then $\rho_\omega(Q^\ \|\ ar)=\omega_2^M$, because otherwise $\lambda=\omega_3^{\mathcal{m}athcal{J}(Q^\ \|\ ar)}$, and as $Q$ is the common Q-structure for all trees at stage $\lambda$, working inside $\mathcal{m}athcal{J}(Q^\ \|\ ar)$, we can compute $\mathcal{m}athcal{T}_{\mathbbm{n}}\!\upharpoonright\!\lambda$-cofinal branches for all $\mathbbm{n}\in\mathcal{m}athscr{G}^M$, which contradicts comparison termination as before). \end{proof} This proves the theorem. \end{proof} \section*{Bibliography} \end{document}
\begin{document} \title{When heterodyning beats homodyning: an assessment with quadrature moments} \author{Y.~S.~Teo} \affiliation{BK21 Frontier Physics Research Division, Seoul National University, 08826 Seoul, South Korea} \author{C.~R.~M\"{u}ller} \affiliation{Max-Planck-Institut f\"ur die Physik des Lichts, Staudtstra\ss e 2, 91058 Erlangen, Germany} \author{H.~Jeong} \affiliation{Center for Macroscopic Quantum Control, Seoul National University, 08826 Seoul, South Korea} \author{Z.~Hradil} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 12, 77146 Olomouc, Czech Republic} \author{J.~\v{R}eh\'{a}\v{c}ek} \affiliation{Department of Optics, Palack\'{y} University, 17. listopadu 12, 77146 Olomouc, Czech Republic} \author{L.~L.~S\'{a}nchez-Soto} \affiliation{Departamento de \'Optica, Facultad de F\'{\i}sica, Universidad Complutense, 28040 Madrid, Spain} \affiliation{Max-Planck-Institut f\"ur die Physik des Lichts, Staudtstra\ss e 2, 91058 Erlangen, Germany} \begin{abstract} We examine the moment-reconstruction performance of both the homodyne and heterodyne (double-homodyne) measurement schemes for arbitrary quantum states and introduce moment estimators that optimize the respective schemes for any given data. In the large-data limit, these estimators are as efficient as the maximum-likelihood estimators. We then illustrate the superiority of the heterodyne measurement for the reconstruction of the first and second moments by analyzing Gaussian states and many other significant non-classical states. \end{abstract} \pacs{03.65.Ta, 03.67.Hk, 42.50.Dv, 42.50.Lc} \maketitle \section{Introduction} The next-generation quantum technologies introduce novel and innovative routes to the understanding and implementation of measurements, communication, and computation. In this respect, the manipulation of a quantum light source using continuous-variable~(CV) measurements offers many advantages~\cite{Braunstein:2005aa,Ferraro:2005ns,CV2007:aa,Andersen:2010ng,Adesso:2014pm}. There exist two standard CV measurement schemes. The more commonly employed homodyne detection~\cite{Yuen:1983ba,Abbas:1983ak,Schumaker:1984qm}, which performs an approximate measurement of rotated photonic quadratures~\cite{Banaszek:1997ot}, probes the marginal distribution of the Wigner function of the unknown quantum state~\cite{Vogel:1989zr}. The other, less widely adopted, double-homodyne detection, or heterodyne detection, involves the joint measurement of complementary observables~\cite{Arthurs:1965al,Yuen:1982hh,Arthurs:1988aa,Martens:1990al,Martens:1991aa,Raymer:1994aj,Trifonov:2001up,Werner:2004as} that directly samples the phase space according to the Husimi function~\cite{Stenholm:1992ps} and is connected to the conventional heterodyne scheme~\cite{Javan:1962aa,Read:1965aa,Carleton:1968aa,Gerhardt:1972aa,Yuen:1980ys,Shapiro:1984aa,Shapiro:1985aa,Walker:1986qn,Collett:1987aa,Lai:1989ah}. These measurement schemes, which experimentally probe quasi-probability distributions, can also be equivalently understood as practical means to directly characterize the source in terms of the ordered moments of the quadrature operators in phase space. Gaussian states~\cite{Ferraro:2005ns}, for example, which are important in analyzing CV quantum information processing~\cite{Lorenz:2004aa,Lance:2005aa,Scarani:2009cq,Weedbrook:2012ag}, are conveniently described by this representation since all their operator moments are functions of only the first and second moments.
Therefore, estimating the first and second moments is enough to fully reconstruct the Gaussian state or to verify whether the reconstructed state is accurately Gaussian \cite{Rehacek:2009ys}. Higher moments come into play for general quantum states. In its own right, the topic of operator moments of quantum states draws interest in the context of generalized uncertainty relations~\cite{Angulo:1993aa,Angulo:1994ps}, non-classicality detection~\cite{Simon:9709030,Arvind:1998aa}, entanglement detection~\cite{Namiki:2012gs,Ivan:2012da}, and cryptography~\cite{Leverrier:2012qs,Thearle:2016aa}. In Refs.~\cite{Rehacek:2015qp} and \cite{Muller:2016da}, we theoretically and experimentally compared the two measurement schemes, using a polarization-squeezing setup~\cite{Heersink:2005ul,Grangier:1987fk,Josse:2004ys,Marquardt:2007bh,Muller:2012ys,Peuntinger:2014aa} for the latter. We analyzed the physical implications, for moment reconstruction, of the unavoidable Arthurs-Kelly-type noise that is inherent in the joint-measurement heterodyne scheme. We found that, despite this additional noise, for a single-mode central-Gaussian source the heterodyne scheme still results in second-moment estimators that are more accurate than those of the homodyne scheme for a wide range of the squeezing strength and temperature parameter. In this article, we extend the theory of these two CV measurement schemes to general quantum states and show that the tomographic advantage of the heterodyne scheme carries over to other interesting and important non-Gaussian states. This message is conveyed in five main sections. Section~\ref{sec:cov_mom} gives an overview of the fundamental elements in first- and second-moment tomography, as well as the concept of reconstruction accuracy. These elements are then used to discuss the general theory of moment reconstruction for the homodyne and heterodyne schemes in Sec.~\ref{sec:gen_theory}. In that section, we shall also introduce optimal moment estimators that asymptotically approach the respective Cram{\'e}r-Rao bounds, which are derived in Appendix~\ref{app:optimality}. In Sec.~\ref{sec:first-mom}, we shall study the CV schemes in first-moment estimation, where it shall be shown that heterodyne detection always outperforms homodyne detection unless the state is of minimum uncertainty, in which case the two schemes give equal reconstruction accuracy per sampling event. This result shall be discussed for some interesting classes of non-Gaussian states. Next, we study the results for second-moment estimation in Sec.~\ref{sec:sec-mom} with the same classes of non-Gaussian states and illustrate once again the tomographic advantages of using the heterodyne scheme in moment tomography. Finally, Sec.~\ref{sec:conc} summarizes the presented results. \section{The covariance matrix and moment-reconstruction accuracy} \label{sec:cov_mom} In dealing with single-mode bosonic systems such as photons, the pair of position $X$ and momentum $P$ quadrature operators obeying $[X,P]=\mathrm{i}$ (with the quantum unit $\hbar\equiv1$) form the column $\rvec{R}=\TP{(X\,\,\,P)}$, and the covariance matrix can be written as \begin{equation} \dyadic{G}=\RE{\langle\rvec{R}\rvec{R}^\textsc{t}\rangle}-\langle\rvec{R}\rangle\TP{\langle\rvec{R}\rangle}=\dyadic{G}_2-\dyadic{G}_1\,, \end{equation} where we have introduced the first- ($\dyadic{G}_1=\langle\rvec{R}\rangle\TP{\langle\rvec{R}\rangle}$) and second-moment ($\dyadic{G}_2=\RE{\langle\rvec{R}\rvec{R}^\textsc{t}\rangle}$) matrices.
The two independent parameters $\left\{\langle X\rangle,\langle P\rangle\right\}$ in $\dyadic{G}_1$ and the three independent parameters $\left\{\langle X^2\rangle,\frac{1}{2}\langle\{X,P\}\rangle,\langle P^2\rangle\right\}$ in $\dyadic{G}_2$ constitute the complete set of five parameters that characterize $\dyadic{G}$. The well-known class of Gaussian states is characterized by a Gaussian Wigner function (and, correspondingly, a Gaussian form for any other well-behaved quasi-probability distribution). As a consequence, any Gaussian state is fully described by only $\dyadic{G}_1$ and $\dyadic{G}_2$. The covariance matrix for any quantum state obeys the inequality $\dyadic{G}\geq\dyadic{\sigma}_y/2$ in terms of the Pauli matrix $\dyadic{\sigma}_y$, which is a recast of the Heisenberg-Robertson-Schr{\"o}dinger (HRS) uncertainty relation for position and momentum operators. Equivalently, this gives the inequality $\det{\dyadic{G}}\geq1/4$ in addition to the standard positivity constraint for $\dyadic{G}$. The reconstruction of the full covariance matrix $\dyadic{G}$ involves the quantum tomography of all the five independent parameters that define the first and second operator moments of the state. Here, the figure of merit for the reconstruction accuracy is the mean squared error~(MSE) $\mathcal{D}=\MEAN{}{\Tr{\left(\widehat{\dyadic{G}}-\dyadic{G}\right)^2}}$ between $\dyadic{G}$ and its estimator $\widehat{\dyadic{G}}$. In terms of $\dyadic{G}_1$ and $\dyadic{G}_2$, \begin{align} \mathcal{D}=&\,\underbrace{\MEAN{}{\Tr{\left(\widehat{\dyadic{G}}_1-\dyadic{G}_1\right)^2}}}_{\mathclap{\displaystyle \qquad\!\!\!=\mathcal{D}_1}}+\underbrace{\MEAN{}{\Tr{\left(\widehat{\dyadic{G}}_2-\dyadic{G}_2\right)^2}}}_{\mathclap{\displaystyle \qquad\!\!\!=\mathcal{D}_2}}\nonumber\\ &\,+\left\{\text{cross terms}\right\}\,. \label{eq:G1G2MSE} \end{align} To illustrate the physics behind moment reconstruction, we shall analyze the first- and second-moment reconstruction accuracies separately. In practice, these analyses are relevant to the situation where the reconstructions of $\dyadic{G}_1$ and $\dyadic{G}_2$ are carried out with independent data. For this situation, the $\left\{\text{cross terms}\right\}$ in Eq.~\eqref{eq:G1G2MSE} vanish, so that the total MSE is the sum of the respective MSEs $\mathcal{D}_1$ and $\mathcal{D}_2$ of the reconstructed moments. From here on, to facilitate discussions, we shall analyze the quantity $\rvec{r}=\left<\rvec{R}\right>$ in place of $\dyadic{G}_1$, where $\mathcal{D}_1=\MEAN{}{(\widehat{\rvec{r}}-\rvec{r})^2}$. In unbiased statistical estimation theory~\cite{Cox:2006dv}, the MSE is bounded from below according to $\mathcal{D}\geq\Tr{\dyadic{F}^{-1}}$ by the inverse of the Fisher information matrix $\dyadic{F}$; this is the Cram{\'e}r-Rao bound (CRB). Consequently, we have the respective first- and second-moment CRBs $\mathcal{D}_1\geq\Tr{\dyadic{F}_1^{-1}}$ and $\mathcal{D}_2\geq\Tr{\dyadic{F}_2^{-1}}$. Therefore, the general theory of the Fisher matrices $\dyadic{F}_1$ and $\dyadic{F}_2$ for the two CV schemes is in order.
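To make these objects concrete for readers who wish to experiment numerically, the following minimal sketch (ours, not part of the original analysis; the chosen squeezing parameter, thermal occupation and displacement are arbitrary illustrations) assembles $\dyadic{G}_1$, $\dyadic{G}_2$ and $\dyadic{G}$ for a displaced squeezed thermal state and checks the constraint $\det{\dyadic{G}}\geq1/4$:
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters: squeezing r,
# thermal occupation nbar and displacement r0 = (<X>, <P>).
r, nbar = 0.6, 0.3
r0 = np.array([1.0, -0.5])

# Covariance matrix of a squeezed thermal state.
G = (nbar + 0.5) * np.diag([np.exp(-2 * r), np.exp(2 * r)])

G1 = np.outer(r0, r0)            # first-moment matrix  <R><R>^T
G2 = G + G1                      # second-moment matrix Re<R R^T>

assert np.allclose(G, G2 - G1)   # G = G2 - G1
print(np.linalg.det(G) >= 0.25)  # HRS constraint det(G) >= 1/4 -> True
\end{verbatim}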
\begin{figure} \caption{\label{fig:Fig1} Schematic of (a) the homodyne and (b) the heterodyne (double-homodyne) detection schemes.} \end{figure} \section{General theory} \label{sec:gen_theory} \subsection{Homodyne detection} The homodyne detection~\cite{Yuen:1983ba,Abbas:1983ak,Schumaker:1984qm} involves a 50:50 beam splitter that introduces an interference between the optical source of an unknown quantum state $\rho$ (the signal) and the local oscillator (a coherent-state reference source, or simply LO), the latter of which is set to a much larger optical intensity than the mean intensity of the signal~[see Fig.~\ref{fig:Fig1}(a)]. A subtraction of the output photocurrents gives a distribution of voltage readouts $-\infty<x_\vartheta<\infty$ for the LO phase $0\leq\vartheta\leq\pi$, which essentially corresponds to the eigenvalue probability distribution of the quadrature operator $X_\vartheta=X\cos\vartheta+P\sin\vartheta$. It then follows that, statistically, the expectation value $\left<X_\vartheta^m\right>$ for any integer value $m$ contains all measurable information about the $m$th operator moments of $X$ and $P$. Since the data acquired with this scheme are the marginals of the Wigner function, the first $(m=1)$ and second $(m=2)$ moments, or $\dyadic{G}$, that are reconstructed with these data may be attributed to this quasi-probability distribution function. In a typical homodyne experiment, the value of $\vartheta$ is set to increase linearly. The data collected would then be binned for all the measured $\vartheta$ values. The data bins are mutually independent, so that the Fisher matrices $\dyadic{F}_{1,\textsc{hom}}$ and $\dyadic{F}_{2,\textsc{hom}}$ for the respective first- and second-moment CRBs can each be understood as a summation of Fisher matrices of every LO phase bin according to the additivity property of the Fisher information. In the limit of a large number of sampling events $N$, the central limit theorem states that the unbiased estimator $\widehat{\left<X^m_\vartheta\right>}$ of the $m$th quadrature moment $\left<X^m_\vartheta\right>$, which is defined as an average sum of independently collected random voltage values for the phase $\vartheta$, follows a Gaussian distribution of data mean $\mu=\mu(\vartheta)=\left<X^m_\vartheta\right>$ and data variance $\sigma^2/N$, where $\sigma^2=\sigma(\vartheta)^2=\left<X^{2m}_\vartheta\right>-\left<X^m_\vartheta\right>^2$, so that the Fisher matrix \begin{equation} \dyadic{F}_{\vartheta,m}=\dfrac{N}{\sigma^2}\dfrac{\partial\mu}{\partial\rvec{a}}\TP{\left(\dfrac{\partial\mu}{\partial\rvec{a}}\right)}+\dfrac{1}{2\sigma^4}\dfrac{\partial\sigma^2}{\partial\rvec{a}}\TP{\left(\dfrac{\partial\sigma^2}{\partial\rvec{a}}\right)} \label{eq:fisher_theta} \end{equation} for a given LO phase $\vartheta$ in the large-$N$ limit follows the well-known expression for Gaussian distributions, where in our case $\rvec{a}$ is the column of $m$th-moment parameters we are interested in reconstructing. Since only the first term of Eq.~\eqref{eq:fisher_theta} survives in this limit, we have the \emph{scaled} homodyne Fisher matrix \begin{equation} \widetilde{\dyadic{F}}_{m,\textsc{hom}}=\int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\,\dfrac{\dyadic{F}_{\vartheta,m}}{N}=\int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\,\dfrac{1}{\sigma(\vartheta)^2}\dfrac{\partial\mu(\vartheta)}{\partial\rvec{a}}\TP{\left(\dfrac{\partial\mu(\vartheta)}{\partial\rvec{a}}\right)} \label{eq:Fhomm} \end{equation} with respect to the number of sampling events $N$ for the complete set of homodyne quadrature-eigenstate outcomes.
Scaled statistical quantities such as this one shall be the focus of this article in analyzing tomographic performances, as the scaled CRB (sCRB) represents the power-law coefficient of the MSE in this limit and thus determines the difficulty in obtaining an estimator of a certain pre-chosen MSE accuracy. \subsubsection{First-moment reconstruction} All information about the first moments, $\rvec{a}=\rvec{r}$, is completely encoded in the expectation value $\mu(\vartheta)=\langle X_\vartheta\rangle$. The variance for the data is then given by $\sigma(\vartheta)^2=\langle X_\vartheta^2\rangle-\langle X_\vartheta\rangle^2$. The scaled Fisher matrix for the first-moment estimation with homodyne data is therefore given by \begin{equation} \widetilde{\dyadic{F}}_{1,\textsc{hom}}=\int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\dfrac{\dyadic{m}_\vartheta}{\langle X^2_\vartheta\rangle-\langle X_\vartheta\rangle^2}\,, \label{eq:Fisher_HOM_first} \end{equation} where \begin{equation} \dyadic{m}_\vartheta\,=\begin{pmatrix} (\cos\vartheta)^2 & \sin\vartheta\cos\vartheta\\ \sin\vartheta\cos\vartheta & (\sin\vartheta)^2 \end{pmatrix}\,. \end{equation} The integral can be evaluated exactly, and the trace of the inverse of the resulting matrix brings us to the closed-form sCRB expression \begin{equation} \mathcal{H}_{1,\textsc{hom}}=\Tr{\dyadic{G}}+2\sqrt{\det{\dyadic{G}}}\,. \label{eq:CRB_HOM_first} \end{equation} With the machinery of quantum tomography (see Appendix~\ref{subsec:app:firstmom-est}), an observer can construct the optimal moment estimator that achieves the sCRB. Suppose that the observer collects homodyne data for $N$ sampling events and bins the voltage values into $\{x_{jk}\}$ according to a discrete number $n_\vartheta$ of LO phase bins $\vartheta_k$, where $j$ labels the $n_x$ real voltage values per LO phase bin $\vartheta=\vartheta_k$ and $k$ labels the phase bins. Then an unbiased estimator for any particular expectation value $\langle X_k\rangle\equiv\langle X_{\vartheta_k}\rangle$ would be \begin{equation} \widehat{\langle X_k\rangle}=\dfrac{1}{N_k}\sum^{n_x}_{j=1}n_{jk}x_{jk}\,,\quad \sum^{n_x}_{j=1}n_{jk}=N_k\,,\quad \sum^{n_\vartheta}_{k=1}N_k=N\,. \end{equation} Then, upon denoting $\rvec{u}_k\equiv\rvec{u}_{\vartheta_k}=(\cos{\vartheta_k}\,\,\sin{\vartheta_k})^{\textsc{t}}$, where we note that $\dyadic{m}_k=\rvec{u}_k\TP{\rvec{u}}_k$, the optimal first-moment estimator is given by \begin{align} \widehat{\rvec{r}}^{(\textsc{opt})}_\textsc{hom}&=\dyadic{W}_1^{-1}\sum^{n_\vartheta}_{k=1}\rvec{u}_k\dfrac{N_k\widehat{\left<X_k\right>}}{\widehat{\left<X^2_k\right>}-\widehat{\left<X_k\right>}^2}\,,\nonumber\\ \dyadic{W}_1&=\sum^{n_\vartheta}_{k=1}\dyadic{m}_{k}\dfrac{N_k}{\widehat{\left<X_k^2\right>}-\widehat{\left<X_k\right>}^2}\,, \end{align} which is immediately computable given the processed data $\left\{\widehat{\left<X_k\right>}\right\}$ and $\left\{\widehat{\left<X^2_k\right>}\right\}$ that are defined by \begin{equation} \widehat{\left<X^m_k\right>}=\dfrac{1}{N_k}\sum^{n_x}_{j=1}n_{jk}x^m_{jk}\quad(m=1,2,\ldots)\,. \label{eq:data_l} \end{equation} That this estimator achieves the sCRB asymptotically is also shown in Appendix~\ref{subsec:app:firstmom-est}. This equivalently implies that the optimal estimator is as efficient as the maximum-likelihood~(ML) estimator for the multinomially-distributed binned data $\{x_{jk}\}$.
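For concreteness, here is a minimal numerical sketch of the optimal first-moment estimator above (the function name and calling convention are ours and purely illustrative); the inputs play the roles of the LO phases $\vartheta_k$, the bin counts $N_k$ and the processed data of Eq.~\eqref{eq:data_l}:
\begin{verbatim}
import numpy as np

def optimal_first_moment_hom(thetas, Nk, X1, X2):
    """Optimal homodyne first-moment estimator r_hat.

    thetas : LO phases theta_k of the bins
    Nk     : number of sampling events per bin
    X1, X2 : per-bin sample moments <X_k> and <X_k^2>
    """
    rhs = np.zeros(2)
    W1 = np.zeros((2, 2))
    for th, n, x1, x2 in zip(thetas, Nk, X1, X2):
        u = np.array([np.cos(th), np.sin(th)])
        w = n / (x2 - x1**2)       # inverse-variance weight N_k / var_k
        W1 += w * np.outer(u, u)   # accumulates m_k N_k / var_k
        rhs += w * x1 * u
    return np.linalg.solve(W1, rhs)
\end{verbatim}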
\subsubsection{Second-moment reconstruction} To estimate $\dyadic{G}_2$, it is clear that second-moment information is completely encoded in the second quadrature moment $\langle X^2_\vartheta\rangle$, which is a function of the three independent parameters $a_1=\langle X^2\rangle$, $a_2=\frac{1}{2}\langle\{X,P\}\rangle$ and $a_3=\langle P^2\rangle$. From Eq.~\eqref{eq:Fhomm}, the corresponding $3\times3$ scaled Fisher matrix for these three parameters is \begin{equation} \widetilde{\dyadic{F}}_{2,\textsc{hom}}=\int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\dfrac{\dyadic{M}_\vartheta}{\langle X^4_\vartheta\rangle-\langle X^2_\vartheta\rangle^2}\,, \label{eq:Fisher_HOM} \end{equation} where \begin{equation} \dyadic{M}_\vartheta\,\,\widehat{=}\begin{pmatrix} \left(\cos{\vartheta}\right)^2\\ \sqrt{2}\sin{\vartheta}\cos{\vartheta}\\ \left(\sin{\vartheta}\right)^2 \end{pmatrix}\begin{pmatrix} \left(\cos{\vartheta}\right)^2 & \sqrt{2}\sin{\vartheta}\cos{\vartheta} & \left(\sin{\vartheta}\right)^2 \end{pmatrix}\,. \end{equation} The analytical expression for $\widetilde{\dyadic{F}}_{2,\textsc{hom}}$ for an arbitrary state, and hence its inverse trace $\mathcal{H}_{2,\textsc{hom}}=\Tr{\widetilde{\dyadic{F}}_{2,\textsc{hom}}^{-1}}$, is difficult to calculate, as the denominator of the integrand generally contains trigonometric functions in a complicated manner. Nevertheless, the integral can be calculated explicitly for many interesting and important quantum sources. The optimal second-moment estimator (see Appendix~\ref{subsec:app:secmom-est}) that achieves the corresponding sCRB can be cleanly expressed using the vectorization operation $\VEC{\dyadic{Y}}$ that turns a matrix into a column according to \begin{equation} \dyadic{Y}\,\,\widehat{=}\begin{pmatrix} y_1 & y_2\\ y_2 & y_3 \end{pmatrix}\quad\mapsto\quad\VEC{\dyadic{Y}}\,\,\widehat{\equiv}\begin{pmatrix} y_1\\ \sqrt{2}\,y_2\\ y_3 \end{pmatrix} \end{equation} in any pre-chosen basis, such that $\Tr{\dyadic{Y}_1\dyadic{Y}_2}=\TP{\VEC{\dyadic{Y}_1}}\VEC{\dyadic{Y}_2}$ for any two $2\times2$ symmetric matrices $\dyadic{Y}_1$ and $\dyadic{Y}_2$. Given the processed data defined in Eq.~\eqref{eq:data_l}, the final operationally-ready expressions for this optimal estimator are given as follows: \begin{align} \widehat{\dyadic{G}}^{(\textsc{opt})}_{2,\textsc{hom}}&=\dyadic{W}_2^{-1}\sum^{n_\vartheta}_{k=1}\VEC{\dyadic{m}_k}\dfrac{N_k\widehat{\left<X_k^2\right>}}{\widehat{\left<X^4_k\right>}-\widehat{\left<X_k^2\right>}^2}\,,\nonumber\\ \dyadic{W}_2&=\sum^{n_\vartheta}_{k=1}\dyadic{M}_k\dfrac{N_k}{\widehat{\left<X_{k}^4\right>}-\widehat{\left<X_{k}^2\right>}^2}\,. \end{align} For accurate tomography, the value of $N$ is typically large enough such that $\widehat{\dyadic{G}}^{(\textsc{opt})}_{2,\textsc{hom}}$ is a proper second-moment matrix and approaches the ML estimator that asymptotically achieves the sCRB; strictly speaking, this is the regime in which $\widehat{\dyadic{G}}^{(\textsc{opt})}_{2,\textsc{hom}}$ is to be used for second-moment tomography. On a separate note, optimal estimators for overcomplete quantum-state tomography of $\rho$ were developed in \cite{Zhu:2014aa} and later rederived in \cite{Teo:2015qs} with the variational principle that is also used to construct the optimal moment estimators in Appendix~\ref{app:optimality}.
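Returning to the estimator above, the following minimal sketch (ours; the function name is an illustrative assumption) implements it with the $\sqrt{2}$-weighted vectorization convention of the text and returns the reconstructed $2\times2$ matrix:
\begin{verbatim}
import numpy as np

def optimal_second_moment_hom(thetas, Nk, X2, X4):
    """Optimal homodyne second-moment estimator of G2."""
    rhs = np.zeros(3)
    W2 = np.zeros((3, 3))
    for th, n, x2, x4 in zip(thetas, Nk, X2, X4):
        # vec(m_theta) in the (y1, sqrt(2) y2, y3) convention
        v = np.array([np.cos(th)**2,
                      np.sqrt(2) * np.sin(th) * np.cos(th),
                      np.sin(th)**2])
        w = n / (x4 - x2**2)
        W2 += w * np.outer(v, v)   # accumulates M_k N_k / var_k
        rhs += w * x2 * v
    g = np.linalg.solve(W2, rhs)   # (G2_xx, sqrt(2) G2_xp, G2_pp)
    return np.array([[g[0], g[1] / np.sqrt(2)],
                     [g[1] / np.sqrt(2), g[2]]])
\end{verbatim}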
\subsection{Heterodyne detection} \label{subsec:heterodyne} The heterodyne detection scheme essentially uses two homodyne setups to perform a joint measurement of two complementary observables~[see Fig.~\ref{fig:Fig1}(b)], which are in this case chosen to be the standard $X$ and $P$ quadrature pair for convenience. It is well known~\cite{Arthurs:1965al,Yuen:1982hh,Arthurs:1988aa,Martens:1990al,Martens:1991aa,Raymer:1994aj,Trifonov:2001up,Werner:2004as} that the product of their joint-measurement standard deviations has a larger lower bound than the usual one-half of a quantum unit given by the original Heisenberg relation, owing to the additional quantum noise introduced by the joint measurement. The outcomes for this scheme are in fact the overcomplete set of coherent states. This means that the resulting data are direct phase-space samples of the Husimi~function for the statistical operator $\rho$. The technical complication of having additional measurement noise can therefore be translated completely into the phase-space language that is relevant in our subsequent analysis. Given an infinite set of the Husimi-function data, we have access to the moments $\overline{x^kp^l}$ (the overline denotes the average with respect to the Husimi function, or simply the Husimi average), with which the corresponding ``$\dyadic{G}$'' matrix \begin{equation} \dyadic{G}_{\textsc{het}}\,\widehat{=}\begin{pmatrix} \overline{x^2}-\overline{x}^2 & \overline{xp}-\overline{x}\,\overline{p}\\ \overline{xp}-\overline{x}\,\overline{p} & \overline{p^2}-\overline{p}^2 \end{pmatrix} \end{equation} can be directly constructed. One can then show that for any quantum state, \begin{equation} \dyadic{G}_{\textsc{het}}=\dyadic{G}+\dfrac{\dyadic{1}}{2}\,. \label{eq:Ghet_G_half} \end{equation} The corresponding Arthurs-Kelly type measurement uncertainty relation \begin{equation} \VAR{\textsc{q}}{x}\VAR{\textsc{q}}{p}=\left(\langle(\delta X)^2\rangle+\frac{1}{2}\right)\left(\langle(\delta P)^2\rangle+\frac{1}{2}\right)\geq1\,, \end{equation} which is saturated by coherent states $[\langle(\delta X)^2\rangle=\langle(\delta P)^2\rangle=1/2]$, can thereafter be understood as a physical manifestation of the Gauss-Weierstrass transform [related to Eq.~\eqref{eq:Ghet_G_half}] between the Wigner and Husimi functions if the joint-measurement data are directly used to calculate variances (here denoted by $\VAR{\textsc{q}}{y}=\overline{y^2}-\overline{y}^2$ for complete Husimi-function data $\{y\}$). We shall show that, despite this additional quantum noise, optimal tomography strategies applied to heterodyne data can still lead to better moment-reconstruction accuracies relative to the homodyne scheme. \subsubsection{First-moment reconstruction} From Sec.~\ref{subsec:heterodyne}, we note that the data collected from the heterodyne scheme are a scatter set of phase-space coordinates $\{(x_j,p_j)\}$ that are distributed according to the Husimi function.
As Eq.~\eqref{eq:Ghet_G_half} tells us, there is no difference between the state average $\rvec{r}$ and the Husimi average of $(x\,\,\,p)^\textsc{t}$. Since this is a two-parameter estimation scheme, the first-moment sCRB with respect to the heterodyne data can again be found by taking the average of the squared distance between the estimator \begin{equation} \widehat{\rvec{r}}_\textsc{het}\,\widehat{=}\,\dfrac{1}{N}\sum^N_{j=1}\begin{pmatrix} x_j\\ p_j \end{pmatrix} \end{equation} and the true column $\rvec{r}^\textsc{t}\widehat{=}\,(\overline{x}\,\,\,\overline{p})$: \begin{equation} \mathcal{D}_{1,\textsc{het}}=\MEAN{}{\left(\widehat{\rvec{r}}_\textsc{het}-\rvec{r}\right)^2}=\dfrac{1}{N}\left(\VAR{\textsc{q}}{x}+\VAR{\textsc{q}}{p}\right)\,, \end{equation} so that \begin{equation} \mathcal{H}_{1,\textsc{het}}=\VAR{\textsc{q}}{x}+\VAR{\textsc{q}}{p}=\Tr{\dyadic{G}}+1\,. \label{eq:CRB_HET_first} \end{equation} That $N\mathcal{D}_{1,\textsc{het}}=\mathcal{H}_{1,\textsc{het}}$ follows in the limit of large $N$, where the unbiased estimator $\widehat{\rvec{r}}_\textsc{het}$ is asymptotically optimal since, in this limit, the distribution of $\widehat{\rvec{r}}_\textsc{het}$ follows a bivariate Gaussian distribution with vanishing widths, such that $\widehat{\rvec{r}}_\textsc{het}$ becomes the ML estimator that approaches the sCRB for this Gaussian distribution. \subsubsection{Second-moment reconstruction} Similarly, to arrive at the optimal accuracy for estimating $\dyadic{G}_2$ using heterodyne data, we define the natural second-moment estimator \begin{equation} \widehat{\dyadic{G}}_{2,\textsc{het}}\,\widehat{=}\dfrac{1}{N}\sum^{N}_{j=1}\begin{pmatrix} x_j^2 & x_jp_j\\ x_jp_j & p_j^2 \end{pmatrix}\,, \end{equation} where $\{(x_j,p_j)\}$ are again the sampled Husimi-function data collected during heterodyne detection. From Eq.~\eqref{eq:Ghet_G_half}, we get \begin{equation} \dyadic{G}_{2,\textsc{het}}=\dyadic{G}_{2}+\dfrac{\dyadic{1}}{2}\,. \label{eq:onehalf} \end{equation} The MSE $\mathcal{D}_{2,\textsc{het}}$ for heterodyne detection concerning second-moment estimation is consequently given by \begin{align} \mathcal{D}_{2,\textsc{het}}&=\MEAN{}{\Tr{\left(\widehat{\dyadic{G}}_{2,\textsc{het}}-\dyadic{G}_{2,\textsc{het}}\right)^2}}\nonumber\\ &=\Tr{\MEAN{}{\widehat{\dyadic{G}}_{2,\textsc{het}}^2}}-\Tr{\dyadic{G}_{2,\textsc{het}}^2}\nonumber\\ &=\dfrac{1}{N}\left(\VAR{\textsc{q}}{x^2}+\VAR{\textsc{q}}{p^2}+2\,\VAR{\textsc{q}}{xp}\right)\,. \end{align} In the large-$N$ limit, the scaled MSE $N\mathcal{D}_{2,\textsc{het}}$ is essentially the sCRB \begin{equation} \mathcal{H}_{2,\textsc{het}}=\VAR{\textsc{q}}{x^2}+\VAR{\textsc{q}}{p^2}+2\,\VAR{\textsc{q}}{xp}\,, \label{eq:CRB_HET} \end{equation} since $\widehat{\dyadic{G}}_{2,\textsc{het}}$ again becomes the ML estimator. To see this, we inspect the Fisher matrix $\dyadic{F}_{2,\textsc{het}}$ for the estimator $\widehat{\dyadic{G}}_{2,\textsc{het}}$.
If we look at the random column \begin{equation} \rvec{x}=\VEC{\widehat{\dyadic{G}}_{2,\textsc{het}}}\widehat{\equiv}\,\dfrac{1}{N}\sum^N_{j=1}\begin{pmatrix} x_j^2\\ \sqrt{2}\,x_jp_j\\ p_j^2 \end{pmatrix} \end{equation} that represents $\widehat{\dyadic{G}}_{2,\textsc{het}}$, we find that in the limit of large $N$, the central limit theorem again says that $\rvec{x}$ follows a Gaussian distribution defined by the mean $\rvec{\mu}=\overline{\rvec{x}}\,\widehat{=}\,\TP{(\overline{x^2}\,\,\,\sqrt{2}\,\overline{xp}\,\,\,\,\overline{p^2})}$ and the covariance matrix \begin{equation} \dyadic{\Sigma}=\overline{\rvec{x}\TP{\rvec{x}}}-\rvec{\mu}\TP{\rvec{\mu}}\,\widehat{=}\dfrac{1}{N}\begin{pmatrix} \VAR{\textsc{q}}{x^2} & * & *\\ * & 2\,\VAR{\textsc{q}}{xp} & *\\ * & * & \VAR{\textsc{q}}{p^2} \end{pmatrix}\,, \end{equation} so that we eventually recover the well-known result $\dyadic{\Sigma}=\dyadic{F}_{2,\textsc{het}}^{-1}$ for Gaussian scatter data that saturates the CRB, recalling that $\Tr{\dyadic{\Sigma}}=\mathcal{D}_{2,\textsc{het}}$. Equation~\eqref{eq:CRB_HET} then follows tout de suite. \section{First-moment estimation} \label{sec:first-mom} \subsection{General optimality of heterodyne tomography} As far as first-moment estimation is concerned, the general results in Eqs.~\eqref{eq:CRB_HOM_first} and \eqref{eq:CRB_HET_first} imply that $\mathcal{H}_{1,\textsc{het}}\leq\mathcal{H}_{1,\textsc{hom}}$ for \emph{any} quantum state. This main result hinges on the physical HRS uncertainty relation, which is equivalent to the constraint $\det{\dyadic{G}}\geq1/4$ for the covariance matrix $\dyadic{G}$. This constraint means that \begin{equation} \mathcal{H}_{1,\textsc{hom}}=\Tr{\dyadic{G}}+2\sqrt{\det{\dyadic{G}}}\geq\Tr{\dyadic{G}}+1=\mathcal{H}_{1,\textsc{het}}\,. \end{equation} This implies that for \emph{all} quantum states, the reconstruction accuracy of the optimal heterodyne first-moment estimator is always higher than or equal to that of the optimal homodyne first-moment estimator in locating the average center of the quantum state in phase space. For minimum-uncertainty states, the accuracies of the two schemes are equal ($\mathcal{H}_{1,\textsc{hom}}=\mathcal{H}_{1,\textsc{het}}$). Subsequent well-known and interesting examples merely illustrate this fundamental fact. In terms of the first-moment performance ratio \begin{equation} \gamma_1=\dfrac{\mathcal{H}_{1,\textsc{het}}}{\mathcal{H}_{1,\textsc{hom}}}\,, \end{equation} a subunit magnitude indicates that the heterodyne scheme outperforms the homodyne scheme. \subsection{Gaussian states} \label{subsec:mom1_gauss} For a Gaussian state, where the covariance matrix $\dyadic{G}$ characterizes the spread of its Wigner function, the state variance of $X_\vartheta$ is simply \begin{equation} \langle X^2_\vartheta\rangle-\langle X_\vartheta\rangle^2=\TP{\rvec{u}}_\vartheta\,\dyadic{G}\,\rvec{u}_\vartheta\,. \end{equation} From Eqs.~\eqref{eq:CRB_HOM_first} and \eqref{eq:CRB_HET_first}, the first-moment performance ratio \begin{equation} \gamma_1=\dfrac{\Tr{\dyadic{G}}+1}{\Tr{\dyadic{G}}+2\sqrt{\det{\dyadic{G}}}}\leq1 \end{equation} clearly cannot exceed one, since any physical state satisfying the HRS uncertainty relation must have $\det{\dyadic{G}}\geq1/4$. The maximum value of $\gamma_1=1$ is attained for minimum-uncertainty states. \subsection{Fock states} A Fock state, described by the ket $\ket{n}$, is always centered at the origin of phase space $(\rvec{r}=\rvec{0})$.
The circular symmetry of these states implies that $\langle(\delta X)^2\rangle=\langle(\delta P)^2\rangle=n+1/2=\langle(\delta X_\vartheta)^2\rangle$, whence \begin{equation} \mathcal{H}_{1,\textsc{hom}}=2(2n+1) \label{eq:CRB_HOM_Fock_first} \end{equation} since such states have zero first moments. On the other hand, for the heterodyne scheme, we get \begin{equation} \mathcal{H}_{1,\textsc{het}}=2(n+1) \label{eq:CRB_HET_Fock_first} \end{equation} by simply using the Husimi~characteristic function from Table~\ref{tab:char} in Appendix~\ref{app:char_func}. Therefore, we get a \begin{equation} \gamma_1=\dfrac{n+1}{2n+1} \label{eq:gamma_FOCK} \end{equation} that is always sub-unity unless $n=0$, a result that is again familiar from Sec.~\ref{subsec:mom1_gauss}. In the limit of large photon numbers, the first-moment performance ratio $\gamma_1$ approaches $1/2$. \subsection{Even/odd coherent states} Another popular class of non-Gaussian states with interesting phase-space quantum-interference features is that of the even/odd coherent states characterized by the ket $\ket{\pm;\alpha_0}=(\ket{\alpha_0}\pm\ket{-\alpha_0})\mathcal{N}_\pm$ with the appropriate normalization constants $\mathcal{N}_\pm=1/\sqrt{2\pm2\,\E{-2|\alpha_0|^2}}$, whose first moments $\rvec{r}$ are all equal to zero. Using the definitions $a=\frac{1}{2}\left[\left<(\delta X)^2\right>-\left<(\delta P)^2\right>\right]=\alpha_0^2$ and $b_\pm=\frac{1}{2}\left[\left<(\delta X)^2\right>+\left<(\delta P)^2\right>\right]=\alpha_0^2\left[\tanh(\alpha_0^2)\right]^{\pm1}+1/2$, \begin{equation} \mathcal{H}_{1,\textsc{hom}}=2\left(b_\pm+\sqrt{b_\pm^2-a^2}\right)\,. \label{eq:CRB_HOM_padd_first} \end{equation} For the heterodyne counterpart, one finds that \begin{equation} \mathcal{H}_{1,\textsc{het}}=2\left(b_\pm+\frac{1}{2}\right)\,, \end{equation} which gives the performance ratio \begin{equation} \gamma_1=\dfrac{b_\pm+\frac{1}{2}}{b_\pm+\sqrt{b_\pm^2-a^2}}\,. \end{equation} For both types of coherent-state superpositions, $\gamma_1\rightarrow1$ as $\alpha_0\rightarrow\infty$. For even coherent states, the performance ratio $\gamma_1=1$ when $\alpha_0=0$, as it should since the state then reduces to the vacuum. Otherwise, this ratio is always less than one for any positive $\alpha_0$. There exists a single local minimum of $\gamma_1\approx0.7577$ at $\alpha_0\approx1.715$. For odd coherent states, $\gamma_1<1$ for \emph{all} $\alpha_0$ values, with the minimum value of $\gamma_1=2/3$ at $\alpha_0=0$, in agreement with Eq.~\eqref{eq:gamma_FOCK} for $n=1$ since the odd coherent state approaches the single-photon Fock state in this limit. For these states, $\gamma_1$ increases monotonically to one as $\alpha_0$ tends to infinity. \subsection{Displaced Fock states} Displacement and photon addition are two important physical procedures that are frequently discussed in quantum physics. The different orders in which these processes are carried out on the vacuum state give output states of a different nature. Displacing an $m$-photon-added vacuum state by a complex amplitude $\alpha_0$ results in the displaced Fock state defined by the ket $D(\alpha_0)\ket{m}$; this displacement can be effectively performed using a beam splitter with a high transmissivity and a strong coherent state~\cite{Paris:1996do,Banaszek:1999qn}. It can be shown easily that the first-moment sCRBs are indeed given by Eqs.~\eqref{eq:CRB_HOM_Fock_first} and \eqref{eq:CRB_HET_Fock_first}, so that the performance ratio is then completely identical to that of the usual central Fock states in Eq.~\eqref{eq:gamma_FOCK}.
This reflects the physical fact that the accuracy in estimating the displacement cannot explicitly depend on where the center of the displaced Fock state is when full sets of CV measurement outcomes are considered, as the tomographic coverage of the entire phase space is then complete. This accuracy depends only on the variances, which describe the second-order symmetry and are unaffected by the displacement. \subsection{Photon-added coherent states} A swap in the order of photon addition and displacement on the vacuum state gives the photon-added coherent state of $m$ added photons and reference amplitude $\alpha_0$, defined by the ket $\ket{m;\alpha_0}=\mathcal{N}_{m,|\alpha_0|^2}{A^\dagger}^m\ket{\alpha_0}$ with the bosonic annihilation operator $A$, where the normalization constant $\mathcal{N}_{m,|\alpha_0|^2}=\E{|\alpha_0|^2/2}/\sqrt{m!\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}$ involves the confluent hypergeometric function of the first kind $\pFq{1}{1}{a}{b}{y}$. The integer value $m$ denotes the extent to which the mean photon number \begin{equation} \left<A^\dagger A\right>=(m+1)\dfrac{\pFq{1}{1}{m+2}{1}{|\alpha_0|^2}}{\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}-1\,, \label{eq:mean_padd} \end{equation} which is always larger than $|\alpha_0|^2+m$ whenever $\alpha_0\neq0$, is increased nonlinearly by the action of ${A^\dagger}^m$ on the reference coherent ket $\ket{\alpha_0}$. This particular class of quantum states is but one of many possible kinds of photon-added states, which are of interest to the quantum community for testing some fundamental statements~\cite{Parigi:2004qc,Kim:2008bc,Zavatta:2009aa}. For these photon-added coherent states, the second-order symmetry is now affected by the combined action of the displacement and photon addition, so that $\left<(\delta X)^2\right>$ and $\left<(\delta P)^2\right>$ are functions of $m$ and $\alpha_0$. These expressions can be straightforwardly computed with the help of the characteristic functions given in Table~\ref{tab:char} in Appendix~\ref{app:char_func}. By defining \begin{align} a &= - \frac{\alpha_0^2(m+1)}{2\, \pFq{1}{1}{m+1}{1}{\alpha_0^2}^2} \left[2(m+1)\,\pFq{1}{1}{m+2}{2}{\alpha_0^2}^2 \right. \nonumber \\ & \left. - (m+2)\,\pFq{1}{1}{m+1}{1}{\alpha_0^2}\pFq{1}{1}{m+3}{3}{\alpha_0^2}\right] \,,\nonumber\\ b & = m + \frac{1}{2}-\frac{\alpha_0^2 m \,\pFq{1}{1}{m+1}{2}{\alpha_0^2}} {\,\pFq{1}{1}{m+1}{1}{\alpha_0^2}^2} \left[\,\pFq{1}{1}{m+1}{1}{\alpha_0^2} \right. \nonumber \\ & \left. + m \, \pFq{1}{1}{m+1}{2}{\alpha_0^2}\right] \,, \end{align} such that $b>a$, the first-moment sCRB for homodyne detection is of the same form as in Eq.~\eqref{eq:CRB_HOM_padd_first}, namely \begin{equation} \mathcal{H}_{1,\textsc{hom}}=2\left(b+\sqrt{b^2-a^2}\right)\,. \end{equation} The first-moment sCRB for heterodyne detection is given by \begin{equation} \mathcal{H}_{1,\textsc{het}}=2\left[a+(m+1)\frac{\pFq{1}{1}{m+2}{2}{\alpha_0^2}}{\pFq{1}{1}{m+1}{1}{\alpha_0^2}}\right]\,. \end{equation} Clearly, when $\alpha_0=0$, the answers in Eqs.~\eqref{eq:CRB_HOM_Fock_first} and \eqref{eq:CRB_HET_Fock_first} for an $m$-photon Fock state are reproduced exactly. With $m=0$, both sCRBs take the value of 2 for all $\alpha_0$, which is furthermore consistent with Sec.~\ref{subsec:mom1_gauss}. Otherwise, $\gamma_1$ is always sub-unity and approaches unity as $\alpha_0\rightarrow\infty$.
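The closed-form ratios obtained in this section are easy to evaluate numerically. The short sketch below (our own illustrative check, not part of the original derivations) reproduces, for instance, the large-$n$ Fock limit $\gamma_1\to1/2$, the odd-coherent-state value $\gamma_1=2/3$ at $\alpha_0\to0$, and the even-coherent-state local minimum $\gamma_1\approx0.758$:
\begin{verbatim}
import numpy as np

def gamma1_fock(n):
    # Eq. (gamma_FOCK)
    return (n + 1) / (2 * n + 1)

def gamma1_cat(alpha0, parity=+1):
    # parity = +1: even coherent state, -1: odd coherent state
    a = alpha0**2
    b = alpha0**2 * np.tanh(alpha0**2)**parity + 0.5
    return (b + 0.5) / (b + np.sqrt(b**2 - a**2))

print(gamma1_fock(1000))             # ~0.5
print(gamma1_cat(1e-3, parity=-1))   # ~2/3
print(gamma1_cat(1.715, parity=+1))  # ~0.758
\end{verbatim}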
\section{Second-moment estimation} \label{sec:sec-mom} \subsection{Gaussian states} \label{subsec:second_mom_gauss} It seems fitting to commence the discussion of second-moment estimation with Gaussian states, by generalizing the results that already appeared in Refs.~\cite{Rehacek:2015qp} and \cite{Muller:2016da} to general noncentral Gaussian states $\left(\rvec{r}\neq\rvec{0}\right)$. We suppose that the Gaussian state with covariance matrix $\dyadic{G}$ is centered at $\rvec{r}=\rvec{r}_0=\TP{(x_0\,\,\,p_0)}$. From Table~\ref{tab:char} in Appendix~\ref{app:char_func}, by defining $\mu_\vartheta=\TP{\rvec{u}}_\vartheta\,\rvec{r}_0$ and $\sigma_\vartheta^2=\TP{\rvec{u}}_\vartheta\,\dyadic{G}\,\rvec{u}_\vartheta$, the variance for the second quadrature moment reads \begin{equation} \left<X_\vartheta^4\right>-\left<X_\vartheta^2\right>^2=2\,\sigma_\vartheta^2\left(\sigma_\vartheta^2+2\,\mu_\vartheta^2\right)\,. \label{eq:GAUSS_var} \end{equation} For \emph{central} Gaussian states $(\langle X\rangle=\langle P\rangle=0)$, we have $\langle X^4_\vartheta\rangle=3\langle X^2_\vartheta\rangle^2$ and the scaled Fisher matrix in Eq.~\eqref{eq:Fisher_HOM} turns into the familiar form in \cite{Rehacek:2015qp,Muller:2016da}. For the more general situation, one can repeat the contour-method integration in \cite{Rehacek:2015qp} to calculate the scaled Fisher matrix in Eq.~\eqref{eq:Fisher_HOM}. The answer is given as \begin{widetext} \begin{align} \dyadic{F}_{2,\textsc{hom}}=\dfrac{-2}{(c+\mathrm{i} b)(w_3+\mathrm{i} w_2)} \left [ \dfrac{M_{z=0}}{z_{1+}z_{1-}z_{2+}z_{2-}}+\dfrac{M_{z=z_{1-}}}{z_{1-}(z_{1-}-z_{1+})(z_{1-}-z_{2+})(z_{1-}-z_{2-})}+ \dfrac{M_{z=z_{2-}}}{z_{2-}(z_{2-}-z_{1-})(z_{2-}-z_{1+})(z_{2-}-z_{2+})}\right ] \label{eq:Fisher_GAUSS} \end{align} together with the definitions \begin{align} a&=\dfrac{1}{2}\Tr{\dyadic{G}}\,,\,\,b=\dfrac{1}{2}\left(\dyadic{G}_{11}-\dyadic{G}_{22}\right)\,,\,\,c=\dyadic{G}_{12}\,,\nonumber\\ w_1&=a+\rvec{r}_0^2\,,\!\quad w_2=b+x_0^2-p_0^2\,,\,\quad w_3=c+2x_0p_0\,,\nonumber\\ z_{1\pm}&=\dfrac{-a\pm\mathrm{i}\sqrt{-a^2+b^2+c^2}}{b-\mathrm{i} c}\,,\,\,z_{2\pm}=\dfrac{-w_1\pm\mathrm{i}\sqrt{-w_1^2+w_2^2+w_3^2}}{w_2-\mathrm{i} w_3}\,,\nonumber\\ M_z&\,\,\widehat{=}\,\dfrac{1}{16}\begin{pmatrix} (z+1)^4 & -\mathrm{i}\sqrt{2}(z-1)(z+1)^3 & -(z^2-1)^2\\ -\mathrm{i}\sqrt{2}(z-1)(z+1)^3 & -2(z^2-1)^2 & \mathrm{i}\sqrt{2}(z+1)(z-1)^3\\ -(z^2-1)^2 & \mathrm{i}\sqrt{2}(z+1)(z-1)^3 & (z-1)^4 \end{pmatrix}\,. \end{align} \end{widetext} When $\rvec{r}_0=\rvec{0}$, we have $w_1=a$, $w_2=b$ and $w_3=c$, and the scaled Fisher matrix $\dyadic{F}_{2,\textsc{hom}}$ reduces to that for the central Gaussian state in \cite{Rehacek:2015qp}. For the general setting, the full expression of $\mathcal{H}_{2,\textsc{hom}}$ is omitted here due to its complexity. On the other hand, the sCRB with the heterodyne scheme for these noncentral Gaussian states can be calculated directly from Eq.~\eqref{eq:CRB_HET} using the characteristic function in Table~\ref{tab:char} and is given by \begin{align} \mathcal{H}_{2,\textsc{het}}=2\,\Big(&\Tr{\dyadic{G}_\textsc{het}}^2-\det\dyadic{G}_\textsc{het}\nonumber\\ &+\TP{\rvec{r}_0}\,\dyadic{G}_\textsc{het}\,\rvec{r}_0+\Tr{\dyadic{G}_\textsc{het}}\rvec{r}_0^2\Big)\,, \label{eq:CRB_HET_GAUSS} \end{align} where one immediately verifies the counterpart expression in \cite{Rehacek:2015qp} for the central Gaussian states upon setting $\rvec{r}_0=\rvec{0}$.
At this stage, we reassure ourselves of the physics of the problem of second-moment tomography. First, when tomography is performed on the \emph{full} covariance matrix $\dyadic{G}$, the sCRB, which is the minimum of the MSE, should not depend on the orientation of the two-dimensional uncertainty region (here an ellipse for any Gaussian state) described by the eigenvectors of this matrix, but only on its eigenvalues, owing to the form of the MSE. Additionally, the accuracy should also be independent of $\rvec{r}_0$. When only the second-moment matrix $\dyadic{G}_2$ is reconstructed, the sCRB should likewise not depend on its eigenvectors but only on its eigenvalues. The physics remains the same. However, there is a difference between estimating the full matrix $\dyadic{G}$ and estimating just $\dyadic{G}_2$. Since $\dyadic{G}_2$ is in general an increasing function of the first moments, the geometric mean of eigenvalues (GME) of $\dyadic{G}_2$ becomes larger as the displacement of the center of the quantum state from the phase-space origin increases. The second-order ``temperature'' of the state, a terminology borrowed from Gaussian states and here described by the GME, is then higher, which results in a stronger $\dyadic{G}_2$-``thermal'' property much like that of the thermal Gaussian states. So we would expect, based on the findings in \cite{Rehacek:2015qp}, that states with large displacements give poor second-moment tomographic accuracies for \emph{both} CV schemes, and yet provide a subunit performance ratio \begin{equation} \gamma_2=\dfrac{\mathcal{H}_{2,\textsc{het}}}{\mathcal{H}_{2,\textsc{hom}}}\,. \end{equation} It is also physically intuitive that the accuracies for both schemes should be independent of the angle of displacement, and depend only on the magnitude of the displacement. For non-Gaussian states, the \emph{fourth} moments arising from the structure of the MSE, which are no longer functions of the first and second moments as is the case for Gaussian states, also contribute to the sCRB, and therefore to $\gamma_2$, as described in the general theory in Sec.~\ref{sec:gen_theory}. This physics, however, \emph{seems} to be violated by the noncentral-Gaussian-state expressions in \eqref{eq:Fisher_GAUSS} and \eqref{eq:CRB_HET_GAUSS}, in that $\mathcal{H}_{2,\textsc{het}}$ depends explicitly on the displacement vector $\rvec{r}_0$ and the covariance matrix $\dyadic{G}$, for instance. This apparent mishap has nothing to do with any kind of physical violation, but only with the way we specify Gaussian states. By choosing to parametrize a multivariate Gaussian distribution using the natural independent parameters $\rvec{r}_0$ and $\dyadic{G}$ (the full matrix), we inadvertently change the eigenvalues of $\dyadic{G}_2$ by changing $\rvec{r}_0$ while fixing $\dyadic{G}$. This becomes obvious when one finds that the two positive eigenvalues $\lambda_\pm$ of $\dyadic{G}_2$ are given by \begin{equation} \lambda_{\pm}=|\alpha_0|^2+\dfrac{1}{2}\Tr{\dyadic{G}_\textsc{het}}\pm\left|\alpha_0^2+\TP{\rvec{w}}\,\dyadic{G}_\textsc{het}\,\rvec{w}\right|^2\,, \label{eq:GAUSS_EIGs} \end{equation} where $\rvec{w}= \tfrac{1}{\sqrt{2}} \TP{(1\,\,\,\,\mathrm{i})}$ and $\alpha_0=(x_0+\mathrm{i} p_0)/\sqrt{2}$. This natural parametrization is what gives rise to the apparent observation above.
The noncentral Gaussian states so defined are the only example in this article where this happens; the two other classes of noncentral non-Gaussian states which we shall soon visit do not have this technical issue. \begin{figure} \caption{\label{fig:Fig2} Second-moment performance ratio $\gamma_2$ for noncentral Gaussian states, plotted for various values of $|\alpha_0|$.} \end{figure} To investigate the second-moment performance ratio $\gamma_2=\mathcal{H}_{2,\textsc{het}}/\mathcal{H}_{2,\textsc{hom}}$, we may reparametrize the eigenvalues of $\dyadic{G}$ with the squeezing strength $1\leq\lambda<\infty$ and the temperature parameter $1\leq\mu<\infty$, as is commonly adopted in describing all Gaussian states. Then $\dyadic{G}$ has the spectral decomposition \begin{equation} \dyadic{G}\,\,\widehat{=}\begin{pmatrix} \cos\phi & -\sin\phi\\ \sin\phi & \cos\phi \end{pmatrix} \begin{pmatrix} \dfrac{\mu}{2\lambda} & 0\\ 0 & \dfrac{\mu\lambda}{2} \end{pmatrix} \begin{pmatrix} \cos\phi & \sin\phi\\ -\sin\phi & \cos\phi \end{pmatrix}\,, \label{eq:G_spec} \end{equation} where $\phi$ orients the eigenvectors of $\dyadic{G}$. In this parametrization, we can clearly see that a large displacement magnitude contributes to a large temperature, so that a small value of $\gamma_2$ can be anticipated for these highly displaced or $\dyadic{G}_2$-thermal Gaussian states based on the conclusions in \cite{Rehacek:2015qp} and \cite{Muller:2016da}. The behavior of $\gamma_2$ is very similar to that for the central Gaussian states and is plotted in Fig.~\ref{fig:Fig2} for various values of $|\alpha_0|$. The lowest achievable $\gamma_2$ values go with the highly thermal Gaussian states ($\lambda=1$, $\mu\gg|\rvec{r}_0|$), whose covariance matrix $\dyadic{G}=\mu\dyadic{1}/2$ is simply a multiple of the identity. Their second quadrature moment has a variance $\left<X_\vartheta^4\right>-\left<X_\vartheta^2\right>^2=\mu^2/2$, according to Eq.~\eqref{eq:GAUSS_var}, that is of course independent of the LO phase $\vartheta$ due to the rotational symmetry. The performance ratio then takes the minimum value of $3/10$. The maximum of $\gamma_2$ occurs with the coherent states ($\mu=\lambda=1$) and takes a value of $6/5$ for $\rvec{r}_0=\rvec{0}$. As the magnitude of $\alpha_0$ grows, the value of $\gamma_2$ drops below unity beyond $|\alpha_0|=\sqrt{5/32}$, a crossover that can be obtained by equating the two sCRBs. One may verify that at this critical magnitude, $\mathcal{H}_{2,\textsc{hom}}=\mathcal{H}_{2,\textsc{het}}=63/8$. So, given a displacement magnitude larger than $\sqrt{5/32}$, the heterodyne scheme always outperforms the homodyne scheme in second-moment estimation. In the limit of large $\mu$ and $\lambda$, which we may take with $\mu=\lambda$ without loss of generality in the spectral decomposition of Eq.~\eqref{eq:G_spec}, the values of $\gamma_2$ for $\phi=0$ plotted in Fig.~\ref{fig:Fig3} for different $\mu$ indicate that $\gamma_2\leq1$ in this limit. Different $\phi$ values simply rotate these plots in the $x_0$--$p_0$ plane.
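The coherent-state crossover just quoted can be verified numerically without the contour integration, by evaluating the scaled homodyne Fisher matrix directly as the phase average of $\dyadic{M}_\vartheta$ divided by the variance of Eq.~\eqref{eq:GAUSS_var} (the matrix $\dyadic{M}_\vartheta$ and the vectorization convention are those of Appendix~\ref{app:optimality}), and comparing $\Tr{\dyadic{F}_{2,\textsc{hom}}^{-1}}$ with Eq.~\eqref{eq:CRB_HET_GAUSS}. The Python sketch below is purely illustrative; in particular, it assumes $\dyadic{G}_\textsc{het}=\dyadic{G}+\dyadic{1}/2$ for the heterodyne covariance, which is not spelled out in this section.
\begin{verbatim}
import numpy as np

def vec(mat):
    # vectorization of a real symmetric 2x2 matrix: (y1, sqrt(2) y2, y3)
    return np.array([mat[0, 0], np.sqrt(2) * mat[0, 1], mat[1, 1]])

def crb2_hom(G, r0, nth=4096):
    # F = (1/pi) * int dtheta M_theta / Var(X_theta^2),
    # with Var(X_theta^2) = 2 s^2 (s^2 + 2 mu^2), Eq. (GAUSS_var);
    # the sCRB is Tr F^{-1}
    F = np.zeros((3, 3))
    for th in np.linspace(0, np.pi, nth, endpoint=False):
        u = np.array([np.cos(th), np.sin(th)])
        m = vec(np.outer(u, u))
        s2, mu = u @ G @ u, u @ r0
        F += np.outer(m, m) / (2 * s2 * (s2 + 2 * mu**2))
    F /= nth                       # phase average, (1/pi) * dtheta * sum
    return np.trace(np.linalg.inv(F))

def crb2_het(G, r0):
    # Eq. (CRB_HET_GAUSS); G_het = G + identity/2 is an assumption here
    Gh = G + 0.5 * np.eye(2)
    return 2 * (np.trace(Gh)**2 - np.linalg.det(Gh)
                + r0 @ Gh @ r0 + np.trace(Gh) * (r0 @ r0))

alpha0 = np.sqrt(5 / 32)                 # quoted crossover amplitude
G = 0.5 * np.eye(2)                      # coherent state, mu = lambda = 1
r0 = np.array([np.sqrt(2) * alpha0, 0])  # x0 = sqrt(2) alpha0, p0 = 0
print(crb2_hom(G, r0), crb2_het(G, r0), 63 / 8)   # all ~7.875
\end{verbatim}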
\begin{figure} \caption{\label{fig:Fig3} \label{fig:Fig3} \end{figure} \subsection{Fock states} \label{subsec:second_mom_fock} Owing to the rotational symmetry of the Fock states [$\dyadic{G}_2=(n+1/2)\dyadic{1}$], the second and fourth quadrature moments \begin{equation} \left< X_\vartheta^4\right>-\langle X_\vartheta^2\rangle^2=\dfrac{1}{2}\langle X_\vartheta^2\rangle^2+\dfrac{3}{8} \label{eq:fock_Xt} \end{equation} are independent of the local-oscillator phase $\vartheta$, so that the Fisher matrix \begin{equation} \dyadic{F}_{2,\textsc{hom}}=\dfrac{1}{4(n^2+n+1)}\begin{pmatrix} 3 & 0 & 1\\ 0 & 2 & 0\\ 1 & 0 & 3 \end{pmatrix}\,. \end{equation} It then follows that the sCRB is given by \begin{equation} \mathcal{H}_{2,\textsc{hom}}=5\,(n^2+n+1)\,. \label{eq:CRB_HOM_Fock} \end{equation} On the other hand, the Husimi characteristic function for the Fock states in Appendix~\ref{app:char_func} produces the answer \begin{equation} \mathcal{H}_{2,\textsc{het}}=2\,(n+1)(n+3)\,. \label{eq:CRB_HET_Fock} \end{equation} The performance ratio \begin{equation} \gamma_2=\dfrac{2\,(n+1)(n+3)}{5\,(n^2+n+1)} \end{equation} is less than one for $n\geq2$, in which regime the Fock states are sufficiently $\dyadic{G}_2$-``thermal''. For $n=0$, we evidently obtain the familiar answer $\gamma_2=6/5$ for the vacuum state, whereas for $n=1$, $\gamma_2=16/15$. In the limit of large $n$, $\gamma_2\rightarrow2/5$ (see Fig.~\ref{fig:Fig4}). \begin{figure} \caption{\label{fig:Fig4} \label{fig:Fig4} \end{figure} \subsection{Even/odd coherent states} Since the eigenvalues \begin{equation} \lambda^{(\pm)}_\pm=\dfrac{1}{2}+|\alpha_0|^2\left\{\left[\tanh\!\left(|\alpha_0|^2\right)\right]^{(\pm1)}\pm 1\right\} \end{equation} of $\dyadic{G}_2$ are simple functions of $|\alpha_0|^2$ for the even/odd $(\pm)$ coherent states, we may take $\alpha_0\geq0$ without loss of generality. The quadrature moments can be easily derived with the help of Appendix~\ref{app:char_func}, which give the following second-moment variance \begin{align} \left<X_\vartheta^4\right>-\left<X_\vartheta^2\right>^2=&\,\dfrac{1}{2}+2\alpha_0^2\left\{\cos(2\vartheta)+\left[\tanh\!\left(\alpha_0^2\right)\right]^{\pm1}\right\}\nonumber\\ &\,\pm\dfrac{4\alpha_0^4}{\left(\E{\alpha_0^2}\pm\E{-\alpha_0^2}\right)^2}\,. \end{align} By relying on the asymptotic behaviors $\coth y\approx 1/y$ and $\mathrm{cosech}\, y\approx1/y$ of the hyperbolic trigonometric functions for small arguments, we revert to the limiting second-moment variances for $n=0$ and $n=1$, which is consistent with the fact that the even states approach the vacuum state and the odd states approach the single-photon Fock state. The Fisher matrix $\dyadic{F}_{2,\textsc{hom}}$ thus takes the simple form \begin{align} \dyadic{F}_{2,\textsc{hom}}&=\int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\,\dfrac{\dyadic{M}_\vartheta}{m_\pm+l\cos(2\vartheta)}\quad (l=2\alpha_0^2<m_\pm)\,,\nonumber\\ m_\pm&=\dfrac{1}{2}+2\alpha_0^2\left[\tanh\!\left(\alpha_0^2\right)\right]^{\pm1}\pm\dfrac{4\alpha_0^4}{\left(\E{\alpha_0^2}\pm\E{-\alpha_0^2}\right)^2}\,, \end{align} whence one obtains \begin{equation} \mathcal{H}_{2,\textsc{hom}}=6m_\pm+4\sqrt{m_\pm^2-l^2} \label{eq:CRB_HOM_oe} \end{equation} after carrying out the integration, matrix inversion and matrix trace. 
On the other hand, the Husimi-average moments of the heterodyne data contribute to the result \begin{equation} \mathcal{H}_{2,\textsc{het}}=6+12\alpha_0^2\left[\tanh\!\left(\alpha_0^2\right)\right]^{\pm1}\pm\dfrac{8\alpha_0^4}{\left(\E{\alpha_0^2}\pm\E{-\alpha_0^2}\right)^2} \label{eq:CRB_HET_oe} \end{equation} for the heterodyne sCRB. \begin{figure} \caption{\label{fig:Fig5} \label{fig:Fig5} \end{figure} We once again remind the reader that the sCRBs stated in Eqs.~\eqref{eq:CRB_HOM_oe} and \eqref{eq:CRB_HET_oe} are independent of the phase of the even/odd coherent states, as this phase amounts to a rotation in phase space that is immaterial in determining the moment-estimation accuracy. For arbitrary complex values of $\alpha_0$, the expressions are still valid after the change $\alpha_0^2\rightarrow|\alpha_0|^2$. The ratio $\gamma_2$ is greater than one for small values of $\alpha_0$, with the special limiting cases ($\alpha_0=0$) being those of the respective Fock states, and less than one for large values of $\alpha_0$. The crossover values for which these states become sufficiently $\dyadic{G}_2$-``thermal'' such that $\gamma_2=1$ differ for both the even and odd states (see Fig.~\ref{fig:Fig5}). For sufficiently large $\alpha_0$, $\gamma_2$ approaches unity from below. This can be clearly seen by taking the limit $\alpha_0\rightarrow\infty$. In this limit, we have $m_\pm\rightarrow2\alpha_0^2=l$ so that $\mathcal{H}_{2,\textsc{hom}}\rightarrow12\alpha_0^2\approx\mathcal{H}_{2,\textsc{het}}$. For these class of states, $\gamma_2$ has a stationary minimum that is again different for the two types of states, and this is elucidated in Fig.~\ref{fig:Fig5}. At $\alpha_0\approx0.631$, the $\gamma_2$ values for the even and odd states are equal, even though their $\dyadic{G}_2$ matrices are very different. The reason is that the combined contributions of all the second and fourth moments give an overall multiplicative factor of about 2.0694 to both $\mathcal{H}_{2,\textsc{het}}$ and $\mathcal{H}_{2,\textsc{hom}}$ for the odd state relative to the even state. \subsection{Displaced Fock states} As opposed to the previous three classes of states, the displaced Fock states (as well as the photon-added coherent states that follow) possess a nonzero quadrature first moment. As the only two parameters $\alpha_0=(x_0+\mathrm{i} p_0)/\sqrt{2}$ and $m$ that characterize these displaced Fock states do not, in any way, restrict the covariance matrix $\dyadic{G}$, it is easy to show that the $\dyadic{G}_2$ geometry, and hence its reconstruction accuracy, depends only on the displacement magnitude $|\alpha_0|^2$ and not its phase. This is done by directly inspecting the eigenvalues of $\dyadic{G}_2$, namely \begin{align} \lambda_1&=m+\dfrac{1}{2}\,,\nonumber\\ \lambda_2&=m+2|\alpha_0|^2+\dfrac{1}{2}\,, \end{align} one of which is an increasing function of $|\alpha_0|^2$. As a result, we only need to consider the case where $\alpha_0=x_0/\sqrt{2}$ is positive. As $\alpha_0$ increases, the GME increases, which means that the quantum state becomes more $\dyadic{G}_2$-``thermal''. We shall soon see that an increase in $|\alpha_0|^2$ results in a smaller performance ratio $\gamma_2$ in favor of the heterodyne scheme. 
To calculate the homodyne sCRB, we first note that the relevant even-order quadrature moments (see Appendix~\ref{app:char_func}) supply the second-moment quadrature variance \begin{align} \left<X_\vartheta^4\right>-\left<X_\vartheta^2\right>^2&=m_0+l\cos(2\vartheta)\,,\nonumber\\ m_0&=\dfrac{1}{2}\left[m^2+m+1+\alpha_0^2\,(8m+4)\right]\,,\nonumber\\ l&=2\,\alpha_0^2\,(2m+1)<m_0\,, \end{align} which bears a striking resemblance in form to that for the even/odd coherent states, so that the sCRB takes the same closed form as Eq.~\eqref{eq:CRB_HOM_oe}, namely \begin{equation} \mathcal{H}_{2,\textsc{hom}}=6\,m_0+4\sqrt{m_0^2-l^2}\,. \label{eq:CRB_HOM_disp} \end{equation} For the heterodyne scheme, we subsequently get \begin{equation} \mathcal{H}_{2,\textsc{het}}=2\,(m+1)(m+3+6\,\alpha_0^2) \label{eq:CRB_HET_disp} \end{equation} by again referring to Table~\ref{tab:char}. The interplay between the discrete ($m$) and continuous ($\alpha_0$) parameters gives rise to familiar cases that have already been analyzed previously for the Gaussian and Fock states. For $m=0$, we of course have the coherent state of amplitude $\alpha_0$, for which the maximum $\gamma_2(\alpha_0=0)=6/5$ and the crossover point $\gamma_2(\alpha_0=\sqrt{5/32})=1$, beyond which $\gamma_2<1$, are reproduced by Eqs.~\eqref{eq:CRB_HOM_disp} and \eqref{eq:CRB_HET_disp}. For $m=1$, we have the $m=1$ Fock state at $\alpha_0=0$, so that the unsurprising value $\gamma_2(\alpha_0=0)=16/15$ comes out of the same sCRB expressions. The crossover point for $\gamma_2=1$ is located at $\alpha_0=\frac{1}{2}\sqrt{19/3-2\sqrt{87}/3}\approx0.1696$. The performance ratio becomes subunit for \emph{all} displacements $\alpha_0$ for $m\geq2$, just like for the Fock states. In the limit of large displacements $\alpha_0^2\gg m$, we have $m_0\rightarrow l$ and \begin{equation} \gamma_2\!\left(\alpha_0^2\gg m\right)=\dfrac{m+1}{2m+1}\,, \end{equation} which approaches $1/2$ in the regime $\alpha_0^2\gg m\gg1$. For this two-parameter quantum state, it is interesting to look at the minimum value of $\gamma_2$ over all possible displacement magnitudes $\alpha_0$ for each $m$ [see Fig.~\ref{fig:Fig6}(a)]. To calculate the minimum stationary points $\alpha_0=\widetilde{\alpha}_0$, we differentiate $\gamma_2$ with respect to $\alpha_0$ and set the derivative to zero. While the analytical form for the optimal $\gamma_2=\gamma_{2,\text{min}}$ as a complicated function of $m$ exists, the approximated forms \begin{equation} \gamma_{2,\text{min}}\approx\begin{cases}0.8504-0.5893\,m&(\text{small-}m\,\text{ regime})\\ 0.3693+\dfrac{0.6565}{m}&(\text{large-}m\,\text{ regime})\end{cases} \end{equation} are enough to understand the optimal-$\gamma_2$ curve in terms of a power law already for moderately large $m$. Interestingly, the saturation point for $\gamma_2$ is slightly lower than $2/5$, which is the $\gamma_2$ for the Fock state of an infinitely large $m$ value. This hints that the optimal center for the displaced Fock state of a large $m$, for which $\gamma_2=\gamma_{2,\text{min}}$, is significantly far away from the phase-space origin.
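Because Eqs.~\eqref{eq:CRB_HOM_disp} and \eqref{eq:CRB_HET_disp} are in closed form, the quoted crossover and limiting values are easy to re-evaluate numerically. The following Python sketch is only an illustrative check of those formulas; a crude scan over $\alpha_0$ also locates the minimising $\widetilde{\alpha}_0$ well away from the phase-space origin for moderate and large $m$, in line with the hint above.
\begin{verbatim}
import numpy as np

def gamma2_displaced_fock(alpha0, m):
    # Eqs. (CRB_HOM_disp) and (CRB_HET_disp)
    m0 = 0.5 * (m**2 + m + 1 + alpha0**2 * (8 * m + 4))
    l = 2 * alpha0**2 * (2 * m + 1)
    hom = 6 * m0 + 4 * np.sqrt(m0**2 - l**2)
    het = 2 * (m + 1) * (m + 3 + 6 * alpha0**2)
    return het / hom

# m = 1 crossover: gamma_2 = 1 at alpha_0 ~ 0.1696
a_cross = 0.5 * np.sqrt(19 / 3 - 2 * np.sqrt(87) / 3)
print(a_cross, gamma2_displaced_fock(a_cross, 1))    # ~0.1696, ~1.0

# large-displacement limit (m + 1)/(2m + 1)
print(gamma2_displaced_fock(1e3, 5), 6 / 11)         # both ~0.545

# rough location of the minimum over alpha_0 for a few m values
alphas = np.linspace(1e-3, 20, 100000)
for m in (1, 5, 20):
    g = gamma2_displaced_fock(alphas, m)
    print(m, alphas[np.argmin(g)], g.min())
\end{verbatim}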
This is indeed consistent with the behavior of the minimum point $\widetilde{\alpha}_0$, which also has a complicated closed-form expression~[plotted in Fig.~\ref{fig:Fig6}(a)], so that we only present the more useful approximated forms \begin{equation} \widetilde{\alpha}_0\approx\begin{cases}1.2929+2.2060\,m-3.2976\,m^2\!\!\!\!&(\text{small-}m\,\text{ regime})\\ 0.3993\sqrt{m}+\dfrac{2.8174}{\sqrt{m}}&(\text{large-}m\,\text{ regime})\end{cases} \end{equation} that highlight the main gradient features. To summarize, the minimum value of $\gamma_2$ essentially behaves as a power law in $m$, and the corresponding stationary minimum $\widetilde{\alpha}_0$ is quadratic for small $m$ and goes as a square-root curve for large $m$. \begin{figure} \caption{\label{fig:Fig6} \label{fig:Fig6} \end{figure} \subsection{Photon-added coherent states} As in the case of the displaced Fock states, the eigenvalues of $\dyadic{G}_2$ for the photon-added coherent states, \begin{align} \lambda_1=&\,(m+1)\dfrac{\pFq{1}{1}{m+2}{2}{|\alpha_0|^2}}{\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}-\dfrac{1}{2}\,,\nonumber\\ \lambda_2=&\,2m+2|\alpha_0|^2+\dfrac{1}{2}\nonumber\\ &\,+m(2|\alpha_0|^2-1)\dfrac{\pFq{1}{1}{m+1}{2}{|\alpha_0|^2}}{\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}\,, \end{align} are also functions of $|\alpha_0|^2$, which correctly coincides with the physics of the second-moment estimation problem. This also means that discussing in terms of the range $\alpha_0\geq0$ covers the tomography analysis sufficiently. Moreover, the eigenvalues are increasing functions of the displacement magnitude, so that the GME becomes larger with $\alpha_0$, thereby rendering the photon-added states more $\dyadic{G}_2$-``thermal''. This again gives a smaller performance ratio $\gamma_2$, or a better tomographic performance for the heterodyne scheme compared to the homodyne scheme. Once more with the help of Table~\ref{tab:char} in Appendix~\ref{app:char_func}, the quadrature moments can be written down in principle, but they are represented by bulky expressions that are hardly worth any analytical value and the Fisher-matrix integral in Eq.~\eqref{eq:Fisher_HOM} has no known closed-form expression. However, we may still briefly discuss the important limiting cases. For $\alpha_0\ll\sqrt{m}$, to second order in $\alpha_0$, it can be shown that \begin{equation} \mathcal{H}_{2,\textsc{hom}}\approx5(m^2+m+1)+10\alpha_0^2(m+1)(m+2)\,, \end{equation} where the asymptotic connection with Fock states is clear. On the other hand, in the regime of large $\alpha_0\gg\sqrt{m}$, we find that \begin{equation} \mathcal{H}_{2,\textsc{hom}}=3+12\alpha_0^2+2\sqrt{1+8\alpha_0^2}\approx12\alpha_0^2\,, \label{eq:CRB_HOM_padd_large} \end{equation} which is the second-moment homodyne sCRB for coherent states. This is also the homodyne sCRB for large-intensity even/odd coherent states. The reason is that for large amplitudes, all these states behave like a coherent state of amplitude $\alpha_0$ as far as second-moment estimation is concerned since all their $\dyadic{G}_2$ eigenvalues are indistinguishable in this limit. Upon revisiting Eq.~\eqref{eq:CRB_HET}, the heterodyne sCRB can be shown to have the closed form \begin{align} \mathcal{H}_{2,\textsc{het}}=\,2\Bigg\{&3+4m+2\alpha_0^2(m+3)-m\dfrac{\pFq{1}{1}{m+1}{2}{\alpha_0^2}}{\left[\pFq{1}{1}{m+1}{1}{\alpha_0^2}\right]^2}\nonumber\\ &\times\Big[2(\alpha_0^4-3\alpha_0^2-m)\,\pFq{1}{1}{m+1}{1}{\alpha_0^2}\nonumber\\ &\quad\,\,\,\,+m(2\alpha_0^4-2\alpha_0^2+1)\,\pFq{1}{1}{m+1}{2}{\alpha_0^2}\Big]\Bigg\}\,. 
\end{align} for $\alpha_0>0$. The behavior to leading order in $\alpha_0^2$ for $\alpha_0\ll\sqrt{m}$, \begin{equation} \mathcal{H}_{2,\textsc{het}}\approx\left(2+4\alpha_0^2\right)(m+1)(m+3)\,, \label{eq:CRB_HET_padd_small} \end{equation} is evidently consistent with the known result for Fock states. For $\alpha_0\gg\sqrt{m}$, we once again have $\mathcal{H}_{2,\textsc{het}}\approx12\alpha_0^2$. Note also that, as expected, the exact expression in Eq.~\eqref{eq:CRB_HOM_padd_large} equals 5 at $\alpha_0=0$, and Eq.~\eqref{eq:CRB_HET_padd_small} gives a value of 6 for the vacuum state ($m=0$), consistent with $\gamma_2=6/5$ for the vacuum. For $m\geq2$, the ratio $\gamma_2<1$ for all $\alpha_0$. This natural extension of the result for the Fock states means that for highly nonlinear photon-``adding'' operations, the performance of heterodyne detection is always better than that of homodyne detection in terms of second-moment covariance-dyadic estimation. For $m=0$, the analysis reverts to that for the coherent state, where the crossover occurs at $\alpha_0=\sqrt{5/32}$ after solving for $\mathcal{H}_{2,\textsc{hom}}=\mathcal{H}_{2,\textsc{het}}=2\left(3+6\alpha_0^2\right)$, so that $\gamma_2(\alpha_0>\sqrt{5/32})<1$, which is again consistent with Sec.~\ref{subsec:second_mom_gauss}. For $m=1$, the crossover point $\alpha_0\approx0.2$ may be obtained numerically. As $\alpha_0$ approaches infinity, the previous arguments imply that $\gamma_2\rightarrow1$ for any $m$. In view of the behavior of $\gamma_2$, another interesting limit is the high-nonlinearity limit ($m\rightarrow\infty$). In this case, we notice that the value $\alpha_0=\widetilde{\alpha}_0$ for which $\gamma_2$ is minimum approaches zero. A good model for this minimum point in this limit is given by \begin{equation} \widetilde{\alpha}_0\approx\dfrac{3}{2m}\,,\label{eq:model_a0} \end{equation} which is obtained from curve fitting. Therefore, in the large-$m$ limit, the optimum performance ratio $\gamma_2$ is that of a Fock state of a large photon number, and so we expect the minimum value of $\gamma_2$ to approach $2/5$, as discussed in Sec.~\ref{subsec:second_mom_fock}. In other words, for sufficiently large $m$, the minimum of $\gamma_2$ follows the shifted power law \begin{equation} \min_{\alpha_0}\{\gamma_2\}=\gamma_2\Big|_{\alpha_0=\widetilde{\alpha}_0}\approx\dfrac{2}{5} + \dfrac{6}{5m}\,. \end{equation} Figure~\ref{fig:Fig6}(b) succinctly highlights these observations. \section{Conclusion} \label{sec:conc} We compared the moment-reconstruction performances of the homodyne and heterodyne measurement schemes using optimal moment estimators that minimize the mean squared error. We first showed that in first-moment tomography, the heterodyne scheme is always tomographically superior to, or at least as good as, the homodyne scheme for \emph{all} quantum states in terms of the mean squared error of the moment estimators. The underlying physical reason is solely the Heisenberg-Robertson-Schr{\"o}dinger uncertainty relation for complementary observables. For second-moment tomography, we showed that the heterodyne scheme can often outperform the homodyne scheme for Gaussian states and many other interesting and important classes of non-Gaussian states. All these states exhibit a trend whereby a larger geometric mean of the second-moment eigenvalues (the second-moment ``temperature'') improves the moment-reconstruction accuracy of the heterodyne scheme relative to the homodyne scheme.
This trend, however, is not monotonic in the second-moment ``temperature'', because there is also an influence from the fourth moments originating from the form of the mean squared error; the combined contributions of both give rise to interesting features in the reconstruction accuracy, as illustrated by the examples in this article. The general theory introduced in Sec.~\ref{sec:gen_theory} can be applied to higher-moment estimation, which is important in general operator-moment applications and source-calibration protocols, and these extensions shall be reported in the future. \section{Acknowledgments} We acknowledge financial support from the BK21 Plus Program (21A20131111123) funded by the Ministry of Education (MOE, Korea) and National Research Foundation of Korea (NRF), the NRF grant funded by the Korea government (MSIP) (Grant No. 2010-0018295), the Korea Institute of Science and Technology Institutional Program (Project No. 2E26680-16-P025), the European Research Council (Advanced Grant PACART), the Spanish MINECO (Grant FIS2015-67963-P), the Grant Agency of the Czech Republic (Grant No. 15-03194S), and the IGA Project of the Palack{\'y} University (Grant No. IGA PrF 2016-005). \appendix \section{Optimal estimators for homodyne tomography} \label{app:optimality} \subsection{First-moment estimation} \label{subsec:app:firstmom-est} In this discussion, the reconstruction accuracy of the estimator $\widehat{\rvec{r}}_\textsc{hom}$ for $\rvec{r}$ shall be taken to be the usual MSE distance measure \begin{equation} \mathcal{D}_{1,\textsc{hom}}=\overline{\left(\widehat{\rvec{r}}_\textsc{hom}-\rvec{r}\right)^2}\,, \end{equation} which is the measure typically defined for columns. One straightforward way to obtain an estimator $\widehat{\rvec{r}}_\textsc{hom}$ is to make use of $\left<X_\vartheta\right>=\left<X\right>\cos\vartheta+\left<P\right>\sin\vartheta$ to ascertain that \begin{equation} \dyadic{L}_\vartheta\rvec{r}=\rvec{r}_\vartheta \end{equation} for an $n_\vartheta\times2$ matrix $\dyadic{L}_\vartheta$ ($n_\vartheta$ being the number of bins for the LO phases $\vartheta$) and a column $\rvec{r}_\vartheta$ of $n_\vartheta$ true averages $\left<X_\vartheta\right>$. The highly overcomplete nature of the measurement thus permits us to define, for any experimentally obtained column of estimates of the average values $\widehat{\rvec{r}}_\vartheta\equiv\left(\widehat{\left<X_1\right>}\,\,\widehat{\left<X_2\right>}\,\,\ldots\,\, \widehat{\left<X_{n_\vartheta}\right>}\right)^\textsc{t}$ $\left(\overline{\widehat{\left<X_k\right>}}=\left<X_{\vartheta_k}\right>\right)$, the linear estimator \begin{equation} \widehat{\rvec{r}}^{(\textsc{lin})}_\textsc{hom}=\dyadic{L}_\vartheta^{-}\,\widehat{\rvec{r}}_\vartheta \label{eq:lin_est_firstmom} \end{equation} using the pseudoinverse $\dyadic{L}_\vartheta^{-}$ of $\dyadic{L}_\vartheta$. This estimator, however, is suboptimal in the sense that it does \emph{not} minimize the MSE $\mathcal{D}_{1,\textsc{hom}}$. To obtain the best estimator for $\rvec{r}$ [often known as the best linear unbiased estimator (BLUE)] that minimizes the MSE, we resort to the linear optimization of \begin{equation} \widehat{\rvec{r}}_\textsc{hom}=\sum^{n_\vartheta}_{k=1}\rvec{v}_k\widehat{\left<X_k\right>} \label{eq:gen_est_firstmom} \end{equation} over all possible \emph{reconstruction columns} $\rvec{v}_k$ for the estimates $\widehat{\left<X_k\right>}$.
Data consistency according to $\widehat{\left<X_k\right>}=\rvec{u}_k^{\textsc{t}}\widehat{\rvec{r}}_\textsc{hom}$ requires these reconstruction columns, or \emph{dual columns}, to satisfy the property \begin{equation} \sum^{n_\vartheta}_{k=1}\rvec{v}_k\rvec{u}_k^{\textsc{t}}=\dyadic{1}=\sum^{n_\vartheta}_{k=1}\rvec{u}_k\rvec{v}_k^{\textsc{t}} \label{eq:dual_columns} \end{equation} with the measurement columns $\rvec{u}_k=\rvec{u}_{\vartheta_k}=\TP{(\cos\vartheta_k\,\,\sin\vartheta_k)}$. Logically, we must have \begin{equation} \overline{\widehat{\rvec{r}}_\textsc{hom}}=\sum^{n_\vartheta}_{k=1}\rvec{v}_k\left<X_k\right>=\rvec{r}\,. \label{eq:gen_true_firstmom} \end{equation} The Lagrange function for the optimization is therefore \begin{align} \mathcal{L}_{\textsc{hom}}=\mathcal{D}_\textsc{hom}-\Tr{\dyadic{\Lambda}\left(\sum^{n_\vartheta}_{k=1}\rvec{u}_k\rvec{v}_k^{\textsc{t}}-\dyadic{1}\right)}\,, \end{align} where $\dyadic{\Lambda}$ is the Lagrange matrix for the dual-column constraint in \eqref{eq:dual_columns}. In terms of the dual columns, \begin{align} \mathcal{D}_\textsc{1,hom}=&\,\sum^{n_\vartheta}_{k=1}\sum^{n_\vartheta}_{k'=1}\rvec{v}_k^\textsc{t}\rvec{v}_{k'}\left(\overline{\widehat{\left<X_k\right>}\widehat{\left<X_{k'}\right>}}-\left<X_k\right>\left<X_{k'}\right>\right)\nonumber\\ =&\,\sum^{n_\vartheta}_{k=1}\rvec{v}_k^\textsc{t}\rvec{v}_{k}\overline{\widehat{\left<X_k\right>}^2}+\sum_{k\neq k'}\rvec{v}_k^\textsc{t}\rvec{v}_{k'}\overline{\widehat{\left<X_k\right>}}\,\,\overline{\widehat{\left<X_{k'}\right>}}\nonumber\\ &\,-\sum^{n_\vartheta}_{k=1}\sum^{n_\vartheta}_{k'=1}\rvec{v}_k^\textsc{t}\rvec{v}_{k'}\left<X_k\right>\left<X_{\vartheta_{k'}}\right>\,. \end{align} Since the unbiased estimate \begin{equation} \widehat{\left<X_k\right>}=\dfrac{1}{N_k}\sum^{n_x}_{j=1}n_{jk}x_{jk} \end{equation} is an average sum of all the measured $n_x$ voltage readings $x_{jk}$ per LO phase that are distributed according to the multinomial distribution of random multinomial weights $\sum_jn_{jk}=N_k$, the second moment is given by \begin{align} \overline{\widehat{\left<X_k\right>}^2}&=\dfrac{1}{N_k^2}\sum^{n_x}_{j=1}\sum^{n_x}_{j'=1}\overline{n_{jk}n_{j'k}}x_{jk}x_{j'k}\nonumber\\ &=\dfrac{1}{N_k}\sum^{n_x}_{j=1}p_{jk}x_{jk}^2+\dfrac{N_k-1}{N_k}\sum^{n_x}_{j=1}\sum^{n_x}_{j'=1}p_{jk}p_{j'k}x_{jk}x_{j'k}\nonumber\\ &=\dfrac{1}{N_k}\left<X_k^2\right>+\dfrac{N_k-1}{N_k}\left<X_k\right>^2\,, \end{align} The final equality is valid for sufficiently large data (bins) for all phases, as $p_{jk}\rightarrow\mathrm{d} x_\vartheta\, p(x_\vartheta,\vartheta)$ and \begin{align} \sum^{N_k}_{j=1}p_{jk}x_{jk}^2&\rightarrow\int\mathrm{d} x_\vartheta \,p(x_\vartheta,\vartheta)\,x_\vartheta^2\nonumber\\ &=\int\mathrm{d} x_\vartheta \left<\ket{x_\vartheta}\bra{x_\vartheta}\right>x_\vartheta^2=\left<X^2_\vartheta\right>\,. \end{align} So, we finally get \begin{equation} \mathcal{D}_{1,\textsc{hom}}=\sum^{n_\vartheta}_{k=1}\dfrac{\rvec{v}_k^\textsc{t}\rvec{v}_{k}}{N_k}\left(\left<X_k^2\right>-\left<X_k\right>^2\right)\,. 
\label{eq:D_hom_firstmom} \end{equation} A simple variation of $\mathcal{L}_\textsc{hom}$ therefore gives \begin{align} \updelta\mathcal{L}_\textsc{hom}=&\,\sum^{n_\vartheta}_{k=1}\dfrac{\updelta\rvec{v}_k^\textsc{t}\rvec{v}_{k}+\rvec{v}_k^\textsc{t}\updelta\rvec{v}_{k}}{N_k}\left(\left<X_k^2\right>-\left<X_k\right>^2\right)\nonumber\\ &-\dfrac{1}{2}\Tr{\dyadic{\Lambda}\sum^{n_\vartheta}_{k=1}\left(\rvec{u}_k\updelta\rvec{v}_k^{\textsc{t}}+\updelta\rvec{v}_k\rvec{u}_k^{\textsc{t}}\right)}\equiv0\,, \end{align} or \begin{align} \dfrac{1}{2}\dyadic{\Lambda}&=\dyadic{F}\!\left(\{\left<X_k\right>,\left<X_k^2\right>\}\right)\equiv\sum^{n_\vartheta}_{k=1}\rvec{u}_{k}\rvec{u}_{k}^\textsc{t}\dfrac{N_k}{\left<X_k^2\right>-\left<X_k\right>^2}\,,\nonumber\\ \rvec{v}_k&=\dfrac{N_k}{\left<X_k^2\right>-\left<X_k\right>^2}\dyadic{F}\!\left(\{\left<X_k\right>,\left<X_k^2\right>\}\right)^{-1}\rvec{u}_k \end{align} The matrix $\dyadic{F}\!\left(\{\left<X_k\right>,\left<X_k^2\right>\}\right)$ is known as the frame matrix. The BLUE therefore depends on the true moments which are certainly unavailable in the first place, for no tomography is otherwise necessary at all. Nonetheless, one can substitute the estimated moments for them to obtain an asymptotically efficient optimal estimator that approximates the BLUE. An unbiased estimate for the second moment is given by \begin{equation} \widehat{\left<X^2_k\right>}=\dfrac{1}{N_k}\sum^{N_k}_{j=1}n_{jk}x^2_{jk}\,, \label{eq:secmom_est} \end{equation} so that the asymptotically optimal estimator is given by \begin{align} \widehat{\rvec{r}}^{(\textsc{opt})}_\textsc{hom}&=\dyadic{W}_1^{-1}\sum^{n_\vartheta}_{k=1}\rvec{u}_k\dfrac{N_k\widehat{\left<X_k\right>}}{\widehat{\left<X^2_k\right>}-\widehat{\left<X_k\right>}^2}\,\nonumber\\ \dyadic{W}_1&=\sum^{n_\vartheta}_{k=1}\rvec{m}_{k}\dfrac{N_k}{\widehat{\left<X_k^2\right>}-\widehat{\left<X_k\right>}^2}\,, \end{align} It is easy to see that when the estimated moments approach the true moments, this optimal estimator attains the sCRB. Directly from Eq.~\eqref{eq:D_hom_firstmom}, we immediately know that its corresponding MSE is given by \begin{equation} \mathcal{D}^{(\textsc{opt})}_{1,\textsc{hom}}=\Tr{\dyadic{F}\!\left(\{\left<X_k\right>,\left<X_k^2\right>\}\right)^{-1}} \end{equation} and all we need to realize is that for sufficiently large $N$ and uniformly-distributed quadrature outcomes, $N_k/N\rightarrow\mathrm{d}\vartheta/\pi$ and the frame matrix \begin{equation} \dfrac{1}{N}\dyadic{F}\!\left(\{\left<X_k\right>,\left<X_k^2\right>\}\right)\rightarrow \int_{(\pi)}\dfrac{\mathrm{d}\vartheta}{\pi}\dfrac{\dyadic{m}_\vartheta}{\left<X_{\vartheta}^2\right>-\left<X_{\vartheta}\right>^2}=\dyadic{F}_\textsc{hom}oneN \end{equation} is nothing more than the Fisher matrix introduced in Eq.~\eqref{eq:Fisher_HOM_first}. This also means that the BLUE and the asymptotically optimal estimator are both asymptotically as efficient as the ML estimator. This construction comes with a basic and important lesson. The simple linear estimator $\widehat{\rvec{r}}^{(\textsc{lin})}_\textsc{hom}$ in Eq.~\eqref{eq:lin_est_firstmom}, which is suboptimal, depends only on the first moments. To improve the reconstruction accuracy, more aspects of the data that are attributed to the figure of merit chosen to measure this accuracy would have to be incorporated systematically. In the case of the MSE, these are linear combinations of both the first and second moments, or at least their estimates. 
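As a concrete illustration of this construction, the short Python simulation below compares the simple pseudoinverse estimator of Eq.~\eqref{eq:lin_est_firstmom} with the variance-weighted estimator $\widehat{\rvec{r}}^{(\textsc{opt})}_\textsc{hom}$ on synthetic homodyne data. It is a sketch only: the Gaussian quadrature statistics and the particular squeezed covariance used to generate the data are our own assumptions for the example, not part of the derivation above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r_true = np.array([1.0, -0.5])        # true first moments (x0, p0)
G = np.diag([2.0, 0.125])             # assumed (squeezed) covariance matrix
n_th, N_k = 60, 200                   # phase bins and samples per bin
thetas = np.linspace(0, np.pi, n_th, endpoint=False)
U = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # rows are u_k^T

def one_run():
    xbar = np.empty(n_th)
    x2bar = np.empty(n_th)
    for k, u in enumerate(U):
        x = rng.normal(u @ r_true, np.sqrt(u @ G @ u), size=N_k)
        xbar[k], x2bar[k] = x.mean(), (x * x).mean()
    r_lin = np.linalg.pinv(U) @ xbar                 # Eq. (lin_est_firstmom)
    w = N_k / (x2bar - xbar**2)                      # weights N_k / variance
    W1 = (U * w[:, None]).T @ U                      # frame matrix W_1
    r_opt = np.linalg.solve(W1, U.T @ (w * xbar))    # weighted optimal estimator
    return r_lin, r_opt

errs = np.array([[np.sum((est - r_true)**2) for est in one_run()]
                 for _ in range(500)])
print(errs.mean(axis=0))   # MSE of pseudoinverse vs variance-weighted estimator
\end{verbatim}
Because the simulated quadrature variance depends on the LO phase, the variance-weighted estimator achieves a visibly smaller average squared error than the plain pseudoinverse, as the discussion above leads one to expect.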
Put differently, we should always use the reconstruction estimator that optimize the figure of merit we choose to rank the goodness of the reconstruction. \begin{table*}[htp] \begin{ruledtabular} \begin{small} \resizebox{2\columnwidth}{!}{ \begin{tabular}{lll} Class of Quantum States & Quadrature Characteristic Function & Husimi Characteristic Function\\ \hline {\bf Gaussian} & $\displaystyle\exp\left(-\frac{1}{2}\left(\TP{\rvec{u}}_\vartheta\,\dyadic{G}\,\rvec{u}_\vartheta\right)^{\!2} k^2+\mathrm{i}\,\TP{\rvec{u}}_\vartheta\rvec{r}_0\,k\right)$ \vphantom{$\dfrac{{{W^W}^W}^W}{{{W^W}^W}^W}$} & $\displaystyle\E{g^*\alpha_0+g\alpha^*_0}\,\exp\!\left(\frac{\mathrm{d}ET{\dyadic{G}_\textsc{het}}}{2}\rvec{g}^\dagger\dyadic{M}\,\rvec{g}\right)$ \vphantom{$\dfrac{{{W^W}^W}^W}{{{W^W}^W}^W}$}\\[3ex] {\bf Fock} & $\displaystyle\E{-\frac{k^2}{4}}\LAG{n}{\frac{k^2}{2}}$ & $\displaystyle\pFq{1}{1}{n+1}{1}{|g|^2}$\\[3ex] {\bf Even/odd coherent} & $\displaystyle\E{-\frac{k^2}{4}}\dfrac{\cos(k x_\vartheta)\pm\E{-2|\alpha_0|^2}\cosh(k p_\vartheta)}{1\pm\E{-2|\alpha_0|^2}}$ & $\displaystyle \dfrac{\E{-|\alpha_0|^2}}{2\pm2\,\E{-2|\alpha_0|^2}}\left[\begin{matrix*}[l] &\E{|g+\alpha_0|^2}+\E{|g-\alpha_0|^2}\\ \pm\!\!\!\!&\E{(g^*-\alpha_0^*)(g+\alpha_0)}\pm\,\text{c.c.} \end{matrix*}\right] $\\[3ex] {\bf Displaced Fock} & $\displaystyle\E{-\frac{k^2}{4}+\mathrm{i} kx_\vartheta}\LAG{m}{\frac{k^2}{2}}$ & $\displaystyle\E{g^*\alpha_0+g\alpha^*_0}\pFq{1}{1}{m+1}{1}{|g|^2}$\\[3ex] {\bf Photon-added coherent} & $\displaystyle\E{\frac{k^2}{4}}\dfrac{\pFq{1}{1}{m+1}{1}{\left(\alpha_0+\frac{\mathrm{i} k}{\sqrt{2}}\E{\mathrm{i}\vartheta}\right)\left(\alpha_0^*+\frac{\mathrm{i} k}{\sqrt{2}}\E{-\mathrm{i}\vartheta}\right)}}{\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}$ & $\displaystyle\dfrac{\pFq{1}{1}{m+1}{1}{|g+\alpha_0|^2}}{\pFq{1}{1}{m+1}{1}{|\alpha_0|^2}}$ \end{tabular} } \end{small} \end{ruledtabular} \caption{\label{tab:char} A list of characteristic functions for all the quantum states discussed. The symbols in this table are defined as $\alpha_0\E{-\mathrm{i}\vartheta}=(x_\vartheta+\mathrm{i} p_\vartheta)/\sqrt{2}$ where $x_0=x_{\vartheta=0}$ and $p_0=p_{\vartheta=0}$, $\rvec{g}=\TP{(-g\,\,\,g^*)}$, $\dyadic{M}=\dyadic{H}^\dagger\dyadic{G}_\textsc{het}^{-1}\,\dyadic{H}$, and $\dyadic{H}\,\widehat{=}\dfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1\\ -\mathrm{i} & \mathrm{i} \end{pmatrix}$.} \end{table*} \subsection{Second-moment estimation} \label{subsec:app:secmom-est} By the same token, we can construct the optimal estimator that approximates the BLUE for second-moment estimation by minimizing the MSE \begin{equation} \mathcal{D}_{2,\textsc{hom}}=\overline{\Tr{\left(\widehat{\dyadic{G}}_{2,\textsc{hom}}-\dyadic{G}_{2,\textsc{hom}}\right)^2}} \end{equation} over the estimator that is of the linear form \begin{equation} \widehat{\dyadic{G}}_{2,\textsc{hom}}=\sum^{n_\vartheta}_{k=1}\dyadic{\Theta}_k\widehat{\left<X_{k}^2\right>} \end{equation} with respect to the second-moment estimates. This form is a natural extension to the column estimator $\widehat{\rvec{r}}_\textsc{hom}$ \emph{via} a generalization of the dual columns $\rvec{v}_k$ to \emph{dual} matrices $\dyadic{\Theta}_k$. 
Completely analogous to the discussion in Appendix~\ref{subsec:app:firstmom-est}, consistency with $\widehat{\left<X^2_k\right>}=\rvec{u}_k^{\textsc{t}}\widehat{\dyadic{G}}_{2,\textsc{hom}}\rvec{u}_k$ implies that \begin{equation} \widehat{\dyadic{G}}_{2,\textsc{hom}}=\sum^{n_\vartheta}_{k=1}\dyadic{\Theta}_k\rvec{u}_k^{\textsc{t}}\widehat{\dyadic{G}}_{2,\textsc{hom}}\rvec{u}_k\,. \label{eq:G2hom_consistency} \end{equation} The above relation can be simplified by introducing the \emph{vectorization} notation $\VEC{\dyadic{Y}}$ that turns a matrix $\dyadic{Y}$ into a column. Since all two-dimensional matrices considered here are real and symmetric, they are essentially characterized by three real parameters. Hence in our context, given that \begin{equation} \dyadic{Y}\,\widehat{=}\begin{pmatrix} y_1 & y_2\\ y_2 & y_3 \end{pmatrix}\,, \end{equation} the vectorized quantity is defined as \begin{equation} \VEC{\dyadic{Y}}\,\widehat{\equiv}\begin{pmatrix} y_1\\ \sqrt{2}\,y_2\\ y_3 \end{pmatrix}\,. \end{equation} This operation is a variant of the usual column-stacking vectorization operation to apply on $2\times2$ real symmetric matrices for our case to make contact with the property $\Tr{\dyadic{Y}_1\dyadic{Y}_2}=\TP{\VEC{\dyadic{Y}_1}}\VEC{\dyadic{Y}_2}$ between any pair of such matrices $\dyadic{Y}_1$ and $\dyadic{Y}_2$. In this notation, Eq.~\eqref{eq:G2hom_consistency} becomes \begin{equation} \VEC{\widehat{\dyadic{G}}_{2,\textsc{hom}}}=\sum^{n_\vartheta}_{k=1}\VEC{\dyadic{\Theta}_k}\VEC{\dyadic{m}_k}^\textsc{t}\VEC{\widehat{\dyadic{G}}_{2,\textsc{hom}}}\,, \end{equation} which is equivalent to the vectorized constraint \begin{equation} \sum^{n_\vartheta}_{k=1}\VEC{\dyadic{\Theta}_k}\VEC{\dyadic{m}_k}^\textsc{t}=\dyadic{1}=\sum^{n_\vartheta}_{k=1}\VEC{\dyadic{m}_k}\VEC{\dyadic{\Theta}_k}^\textsc{t}\,. \end{equation} As usual, to derive the expression for the optimal estimator, we first calculate $\mathcal{D}_{2,\textsc{hom}}$ in terms of the dual matrices. For this we shall need the average of the square of the estimate $\widehat{\left<X_{k}^2\right>}$ defined in Eq.~\eqref{eq:secmom_est}: \begin{equation} \overline{\widehat{\left<X_{k}^2\right>}^2}=\dfrac{1}{N_k}\left<X_k^4\right>+\dfrac{N_k-1}{N_k}\left<X_k^2\right>^2\,, \end{equation} from which gives the expression \begin{equation} \mathcal{D}_{2,\textsc{hom}}=\sum^{n_\vartheta}_{k=1}\dfrac{\VEC{\dyadic{\Theta}}_k^\textsc{t}\VEC{\dyadic{\Theta}}_{k}}{N_k}\left(\left<X_k^4\right>-\left<X_k^2\right>^2\right) \label{eq:D_hom_secmom} \end{equation} for the MSE. Then, by carrying out the variation of the appropriate Lagrange function similar to the calculations in Appendix~\eqref{subsec:app:firstmom-est} and remembering the additional association $\dyadic{M}_k=\VEC{\dyadic{m}_{k}}\VEC{\dyadic{m}_{k}}^\textsc{t}$, we find that the optimal matrices for the BLUE are \begin{align} \dyadic{F}\!\left(\{\left<X_k^2\right>,\left<X_k^4\right>\}\right)\equiv\sum^{n_\vartheta}_{k=1}\dyadic{M}_k\dfrac{N_k}{\left<X_k^4\right>-\left<X_k^2\right>^2}\,,\nonumber\\ \VEC{\dyadic{\Theta}_k}=\dfrac{N_k}{\left<X_k^4\right>-\left<X_k^2\right>^2}\dyadic{F}\!\left(\{\left<X_k^2\right>,\left<X_k^4\right>\}\right)^{-1}\VEC{\dyadic{m}_k}\,. 
\end{align} Finally, the asymptotically optimal estimator is given by \begin{align} \widehat{\dyadic{G}}^{(\textsc{opt})}_{2,\textsc{hom}}&=\dyadic{W}_2^{-1}\sum^{n_\vartheta}_{k=1}\VEC{\dyadic{m}_k}\dfrac{N_k\widehat{\left<X_k^2\right>}}{\widehat{\left<X^4_k\right>}-\widehat{\left<X_k^2\right>}^2}\,,\nonumber\\ \dyadic{W}_2&=\sum^{n_\vartheta}_{k=1}\dyadic{M}_k\dfrac{N_k}{\widehat{\left<X_{k}^4\right>}-\widehat{\left<X_{k}^2\right>}^2}\,. \end{align} That this estimator asymptotically attains the sCRB for second-moment estimation is again clear. \section{List of characteristic functions} \label{app:char_func} In calculating the moments for both the homodyne and heterodyne schemes, it is extremely useful to start with the relevant characteristic functions for both schemes. To facilitate the discussions in the main article, we have supplied a list of quadrature characteristic functions $\left(\left<\E{\mathrm{i} k X_\vartheta}\right>\right)$ for the homodyne scheme and a list of Husimi characteristic functions $\displaystyle\left[\overline{\E{g^*\alpha+g\alpha^*}}\,,\,\,g=(u+\mathrm{i} v)/\sqrt{2}\right]$ for the heterodyne scheme respectively in Table~\ref{tab:char} in this appendix section. Then the two kinds of moments can then be readily computed by the prescriptions \begin{align} \left<X^m_\vartheta\right>&=\left(-\mathrm{i}\dfrac{\partial}{\partial k}\right)^m\left<\E{\mathrm{i} k X_\vartheta}\right>\Bigg|_{k=0}\,,\nonumber\\ \overline{x^kp^l}&=\left(\dfrac{\partial}{\partial u}\right)^k\left(\dfrac{\partial}{\partial v}\right)^l\overline{\E{g^*\alpha+g\alpha^*}}\Bigg|_{u,v=0}\,, \end{align} which simply involves multiple differentiations with respect to the free variables and later setting these variables to zero. Some useful identities for the confluent hypergeometric functions and Laguerre polynomials that allow for consistency verification between two characteristic functions of different quantum states are given below: \begin{align} \LAG{0}{x}&=1\,,\nonumber\\ \LAG{1}{x}&=1-x\,,\nonumber\\ \pFq{1}{1}{1}{1}{x}&=\E{x}\,,\nonumber\\ \pFq{1}{1}{2}{1}{x}&=\E{x}(1+x)\,,\nonumber\\ \pFq{1}{1}{n+1}{1}{-x}&=\E{-x}\LAG{n}{x}\,. \end{align} \input{homhet_theory12.bbl} \end{document}
\begin{document} \begin{frontmatter}[classification=text] \title{The Bandwidth Theorem in Sparse Graphs} \author[PA]{Peter Allen} \author[JB]{Julia B\"ottcher} \author[JE]{Julia Ehrenm\"uller} \author[AT]{Anusch Taraz} \begin{abstract} The bandwidth theorem [Mathematische Annalen, 343(1):175--205, 2009] states that any $n$-vertex graph~$G$ with minimum degree $\big(\tfrac{k-1}{k}+o(1)\big)n$ contains all $n$-vertex $k$-colourable graphs~$H$ with bounded maximum degree and bandwidth $o(n)$. We provide sparse analogues of this statement in random graphs as well as pseudorandom graphs. More precisely, we show that for $p\gg \big(\tfrac{\log n}{n}\big)^{1/\Delta}$ asymptotically almost surely each spanning subgraph~$G$ of $G(n,p)$ with minimum degree $\big(\tfrac{k-1}{k}+o(1)\big)pn$ contains all $n$-vertex $k$-colourable graphs~$H$ with maximum degree $\Delta$, bandwidth $o(n)$, and at least $C p^{-2}$ vertices not contained in any triangle. A similar result is shown for sufficiently bijumbled graphs, which, to the best of our knowledge, is the first resilience result in pseudorandom graphs for a rich class of \emph{spanning }subgraphs. Finally, we provide improved results for~$H$ with small degeneracy, which in particular imply a resilience result in $G(n,p)$ with respect to the containment of spanning bounded degree trees for $p\gg \big(\tfrac{\log n}{n}\big)^{1/3}$. \end{abstract} \end{frontmatter} \section{Introduction} A central topic in extremal graph theory is to determine minimum degree conditions which force a graph $G$ to contain a copy of some large or even spanning subgraph $H$. The prototypical example of such a theorem is Dirac's theorem~\cite{dirac1952}, which states that if $\delta(G)\ge\tfrac12 v(G)$ and $v(G)\ge 3$, then $G$ is Hamiltonian. Analogous results were established for a wide range of spanning subgraphs~$H$ with bounded maximum degree such as powers of Hamilton cycles, trees, or $F$-factors for any fixed graph~$F$ (see e.g.~\cite{kuhnsurvey} for a survey). One feature that all these subgraphs~$H$ have in common is that their \emph{bandwidth} is small. The bandwidth of a graph~$H$ is the minimum $b$ such that there is a labelling of the vertex set of~$H$ by integers $1, \ldots, n$ with $|i-j| \leq b$ for every edge $ij$ of~$H$. And indeed, it was shown in~\cite{bottcher2009proof} that a more general result holds, which provides a minimum degree condition forcing any spanning bounded degree subgraphs of small bandwidth. \begin{theorem}[Bandwidth Theorem~\cite{bottcher2009proof}] \label{thm:bandwidth} For every $\gamma >0$, $\Delta \geq 2$, and $k\geq 1$, there exist $\beta>0$ and $n_0 \geq 1$ such that for every $n\geq n_0$ the following holds. If $G$ is a graph on $n$ vertices with minimum degree $\delta(G) \geq \left(\frac{k-1}{k}+\gamma\right)n$ and if $H$ is a $k$-colourable graph on $n$ vertices with maximum degree $\Delta(H) \leq \Delta$ and bandwidth at most $\beta n$, then $G$ contains a copy of $H$. \end{theorem} We remark that in contrast to the above mentioned earlier results for specific bounded degree spanning subgraphs the minimum degree condition in this theorem has an error term~$\gamma n$, and it is known that this cannot completely be omitted in this general statement. In that sense the minimum degree condition in Theorem~\ref{thm:bandwidth} is best-possible. It is also known that the bandwidth condition cannot be dropped completely (see~\cite{bottcher2009proof} for further explanations). 
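To make the bandwidth parameter concrete, note that for very small graphs it can be computed by brute force over all labellings. The following Python sketch is purely illustrative (and hopelessly slow beyond a handful of vertices); it confirms, for example, that a path has bandwidth $1$ while a cycle has bandwidth $2$.
\begin{verbatim}
from itertools import permutations

def bandwidth(n, edges):
    # minimum over labellings 1..n of the maximum |i - j| over edges ij
    best = n
    for perm in permutations(range(1, n + 1)):
        width = max(abs(perm[u] - perm[v]) for u, v in edges)
        best = min(best, width)
    return best

path = [(i, i + 1) for i in range(4)]            # path on 5 vertices
cycle = path + [(4, 0)]                          # cycle on 5 vertices
print(bandwidth(5, path), bandwidth(5, cycle))   # -> 1 2
\end{verbatim}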
Moreover, this condition does not limit the class of graphs under consideration unreasonably, because many interesting classes of graphs have sublinear bandwidth. Indeed, it was shown in \cite{bottcher2010bandwidth} that for bounded degree $n$-vertex graphs, restricting the bandwidth to $o(n)$ is equivalent to restricting the treewidth to $o(n)$ or forbidding linear sized expanding subgraphs, which implies that bounded degree planar graphs, or more generally classes of bounded degree graphs defined by forbidding some fixed minor have bandwidth $o(n)$. Generalisations of Theorem~\ref{thm:bandwidth} were obtained in~\cite{BoeHeiTar,BoeTarWue,KnoTre,Lee:degenerateBandwidth}. In this paper we are interested in the transference of Theorem~\ref{thm:bandwidth} to sparse graphs. Such transference results recently received much attention, including for example the breakthrough result on the transference of Tur\'an's theorem to random graphs by Conlon and Gowers~\cite{ConGow} and Schacht~\cite{Schacht}. The random graph model we shall consider here is the binomial random graph $G(n,p)$, which has $n$ vertices, and each pair of vertices forms an edge independently with probability $p$. We shall study the asymptotic appearance of spanning subgraphs~$H_n$ in~$G(n,p)$ under adversarial edge deletions. We denote the sequence of graphs we consider by $H=(H_n)$, and, abusing notation slightly also often write~$H$ for the graph~$H_n$, when it is clear from the context what~$n$ is. The appearance of large or spanning subgraphs of $G(n,p)$ was studied since the early days of probabilistic combinatorics and by now many important results were obtained. Gems include the theorem of Riordan~\cite{Riordan} which gives a very good, and in many cases tight, upper bound on the threshold for $G(n,p)$ to contain a general sequence of spanning graphs~$H=(H_n)$, and the Johansson--Kahn--Vu theorem~\cite{JohKahVu} on $F$-factors, which we state for the case $F=K_k$. \begin{theorem}[Johansson, Kahn and Vu~\cite{JohKahVu}]\label{thm:JKV} For each $k\ge 3$ there exists $C>0$ such that the following holds for $p=p(n)\ge C n^{-2/k}(\log n)^{1/\binom{k}{2}}$. Asymptotically almost surely, $G(n,p)$ contains a $K_k$-factor, that is, a collection of $\big\lfloor\tfrac{n}{k}\big\rfloor$ pairwise vertex-disjoint copies of $K_k$. \end{theorem} For a sequence of spanning graphs $H=(H_n)$ with maximum degree $\Delta(H)\le\Delta$, Riordan's theorem implies that $G(n,p)$ \emph{asymptotically almost surely} (a.a.s.), that is, with probability tending to~$1$ as~$n$ tends to infinity, contains~$H$ as a subgraph if $p\cdot n^{\frac{2}{\Delta+1}-\frac{2}{\Delta(\Delta+1)}}\to\infty$. This is not believed to be best possible. Indeed, Theorem~\ref{thm:JKV} states that the threshold for $G(n,p)$ to contain a $K_{\Delta+1}$-factor is $n^{-2/(\Delta+1)}(\log n)^{1/\binom{\Delta+1}{2}}$, and it is conjectured in~\cite{FerLuhNgu} that above this probability we also get any other sequence of spanning graphs $H=(H_n)$ with $\Delta(H)\le\Delta$. This was proved, using Theorem~\ref{thm:JKV}, to be true for almost spanning graphs by Ferber, Luh, and Nguyen~\cite{FerLuhNgu}. \begin{theorem}[Ferber, Luh, and Nguyen~\cite{FerLuhNgu}] \label{thm:FerLuhNgu} For every $\varepsilon>0$ and $\Delta\ge 1$, and every sequence $H=(H_n)$ of graphs with $v(H)\le(1-\varepsilon)n$ and $\Delta(H)\le\Delta$, the random graph $G(n,p)$ a.a.s.\ contains~$H$ if $p\cdot n^{2/(\Delta+1)}/(\log n)^{1/\binom{\Delta+1}{2}} \to\infty$. 
\end{theorem} Better bounds are available if we further know that the degeneracy of~$H$ is bounded by a constant much smaller than $\Delta(H)$. The \emph{degeneracy} of~$H$ is the smallest integer~$D$ such that any subgraph of~$H$ has a vertex of degree at most~$D$. Surprisingly, for this class of graphs~$H$ already Riordan's theorem implies an essentially optimal bound. \begin{corollary}[of Riordan's theorem~\cite{Riordan}] \label{cor:Riordan} For every $\Delta\ge 1$ and $D\ge 3$, and every sequence $H=(H_n)$ of graphs with $v(H)\le n$ and $\Delta(H)\le\Delta$ and degeneracy at most~$D$, the random graph $G(n,p)$ a.a.s.\ contains~$H$ if $p\cdot n^{1/D} \to\infty$. \end{corollary} This is best possible because a simple first moment calculation shows that if $p\cdot n^{1/D}\to 0$ then $G(n,p)$ a.a.s.\ does not contain the $D$-th power of a Hamilton path, which is a $D$-degenerate graph with maximum degree $2D$. Two features that both Riordan's theorem and Theorem~\ref{thm:JKV} (and consequently all results which rely on them, such as Theorem~\ref{thm:FerLuhNgu} and Corollary~\ref{cor:Riordan}) have in common is that their proofs are non-constructive, and the proof techniques do not allow for so-called universality results. A graph~$G$ is said to be \emph{universal} for a family~$\mathcal{H}$ of graphs if~$G$ contains copies of all graphs in~$\mathcal{H}$ simultaneously. The random graph~$G(n,p)$ is known to be universal for various families of graphs, but in almost all cases we only know an upper bound on the threshold for universality, which we do not believe is the correct answer. The reason why probabilistic existence results such as Corollary~\ref{cor:Riordan} do not imply universality is that in $G(n,p)$ the failure probability for containing any given spanning graph $H$ without isolated vertices is at least $(1-p)^{n-1}$, the probability that a fixed vertex of $G(n,p)$ is isolated. This probability is too large to apply a union bound. Thus, to prove universality results, one needs to show that any graph $G$ with some collection of properties that $G(n,p)$ a.a.s.\ possesses must contain any given $H\in\mathcal{H}$. Using this approach, and improving on a series of earlier results, Dellamonica, Kohayakawa, R\"odl and Ruci\'nski~\cite{dellamonica2014} obtained the following universality result for the family $\mathcal{H}(n,\Delta)$ of $n$-vertex graphs with maximum degree $\Delta$. \begin{theorem}[Dellamonica, Kohayakawa, R\"odl and Ruci\'nski~\cite{dellamonica2014}] \label{thm:universal} For all $\Delta\ge3$ there is~$C$ such that if $p\ge C\big(\frac{\log n}{n}\big)^{1/\Delta}$ then $G(n,p)$ is a.a.s.\ universal for $\mathcal{H}(n,\Delta)$. \end{theorem} However, it is conjectured that universality and the appearance of a $K_{\Delta+1}$-factor occur together, at the threshold given in Theorem~\ref{thm:FerLuhNgu}. A probability bound which is better, but still far from the conjectured truth, was so far only established for almost spanning graphs by Conlon, Ferber, Nenadov and \v{S}kori\'c~\cite{CFNS}, who showed that for $\Delta\ge 3$, if $p\gg n^{-1/(\Delta-1)}\log^5 n$ then $G(n,p)$ is a.a.s.\ universal for $\mathcal{H}\big((1-o(1))n,\Delta)\big)$. For graphs with small degeneracy, again, the following better bound exists, but this also is far away from the threshold in Corollary~\ref{cor:Riordan}, which is a plausible candidate for the correct answer. 
\begin{theorem}[Allen, B\"ottcher, H\`an, Kohayakawa, Person~\cite{blowup}] \label{thm:Duniversal} For all $\Delta, D\ge 1$ there is~$C$ such that if $p\ge C\big(\frac{\log n}{n}\big)^{1/(2D+1)}$ then $G(n,p)$ is a.a.s.\ universal for all graphs in~$\mathcal{H}(n,\Delta)$ with degeneracy at most~$D$. \end{theorem} Very recently Conlon and Nenadov~\cite{ConNen} established an essentially best possible bound on~$p$ for an almost spanning analogue: They showed that $G(n, p)$ is a.a.s.\ universal for all graphs in $\mathcal{H}\big((1-\varepsilon)n,\Delta\big)$ with degeneracy at most~$D$ for $p\ge\big(\frac{C\log^2 n}{n\log\log n}\big)^{1/D}$. Furthermore, one may ask how robustly $G(n,p)$ contains certain large subgraphs~$H$. Questions of this type were considered by Alon, Capalbo, Kohayakawa, R\"odl, Ruci\'nski and Szemer\'edi~\cite{ACKRRS}, and further popularised by Sudakov and Vu~\cite{SudVu}, who introduced the term local resilience. Let $\mathcal{P}$ be a monotone increasing graph property and let $G$ be a graph in $\mathcal{P}$. The \emph{local resilience} of $G$ with respect to $\mathcal{P}$ is defined to be the minimum $r \in \mathbb R$ such that by deleting at each vertex $v\in V(G)$ at most $r\deg(v)$ edges one can obtain a graph not in $\mathcal{P}$. In this language, for example, Theorem~\ref{thm:bandwidth} says that the local resilience of $K_n=G(n,1)$ with respect to being universal for all $k$-colourable graphs in $\mathcal{H}(n,\Delta)$ with sublinear bandwidth is $\frac1k-o(1)$, and a sparse analogue asks for a similar local resilience result to hold a.a.s.\ for $G(n,p)$. Lee and Sudakov~\cite{lee2012} obtained a very strong local resilience result for Hamilton cycles. Improving on~\cite{SudVu}, they showed that the local resilience of $G(n,p)$ with respect to Hamiltonicity is a.a.s.\ at least $\tfrac12-o(1)$ when $p=\Omega\big(\tfrac{\log n}{n}\big)$. This is optimal up to the constant factor, as below this probability $G(n,p)$ is itself a.a.s.\ not Hamiltonian. Triangle factors were investigated by Balogh, Lee and Samotij~\cite{balogh2012corradi}, who proved that the local resilience of $G(n,p)$ with respect to the containment of a triangle factor on $n-O\big(p^{-2}\big)$ vertices is a.a.s.\ $\frac13-o(1)$ if $p\gg (\frac{\log n}{n})^{1/2}$. It was observed by Huang, Lee, and Sudakov~\cite{huang2012} that we cannot hope to cover all vertices in the host graph with triangles: Already for constant~$p$ it is easy to delete all edges in the neighbourhood of $\Theta(p^{-2})$ vertices in~$G(n,p)$ without violating the resilience condition. Very recently Noever and Steger~\cite{NoeSte} showed that the local resilience of $G(n,p)$ with respect to containing a $(1-o(1))n$-vertex squared cycle (a cycle with all edges between vertices at distance $2$ added) is a.a.s.\ $\tfrac13-o(1)$ provided $p\gg n^{-1/2+o(1)}$. Even more recently, Nenadov and \v{S}kori\'{c}~\cite{NenSko} removed the $\log$-factor from the probability bound of~\cite{balogh2012corradi}. These results are notable in that the bounds on~$p$ are close to optimal: for $p\ll n^{-1/2}$, a.a.s.\ most edges of~$G(n,p)$ are not in triangles and one can obtain a triangle-free graph by deleting only a tiny fraction of edges at each vertex, so that the local resilience of $G(n,p)$ with respect to containing triangles is $o(1)$. Furthermore, for $p\ll n^{-1/2}$ the random graph $G(n,p)$ itself does not contain any $(1-o(1))n$-vertex squared cycle. 
More general subgraphs~$H$ were considered by Huang, Lee and Sudakov~\cite{huang2012}, who proved an analogue of the bandwidth theorem (Theorem~\ref{thm:bandwidth}) in $G(n,p)$ with $0<p<1$ constant (for subgraphs~$H$ with certain vertices not in triangles). A version which works for much smaller probabilities $p\gg(\frac{\log n}{n})^{1/\Delta}$ in the special case of bipartite graphs~$H$ on $(1-o(1))n$ vertices (with maximum degree~$\Delta$ and sublinear bandwidth) was established in~\cite{bottcher2013almost}. In~\cite[Theorem~1.9]{blowup} the so-called sparse blow-up lemma is used to prove a similar result for graphs~$H$ which are not necessarily bipartite. An even better bound on~$p$ was obtained when we restrict the problem to the class of almost spanning trees~$H$: Balogh, Csaba and Samotij~\cite{balogh2011} proved that the local resilience of $G(n,p)$ with respect to containing copies of all trees $T$ on $(1-o(1))n$ vertices with $\Delta(T) \leq \Delta$ is asymptotically almost surely at least~$1/2 - o(1)$ if $p \gg 1/n$, which is optimal. Finally, returning to $H$-factors, Conlon, Gowers, Samotij and Schacht~\cite{CGSS} gave resilience results for almost-spanning $H$-factors which work down to the optimal probability threshold, but leave a linear number of vertices uncovered; Nenadov and \v{S}kori\'c~\cite{NenSko} substantially improved this, but (for most graphs) the number of vertices left uncovered in their result is still not the correct order of magnitude. \subsection*{Our results.} We prove several sparse analogues of the bandwidth theorem (Theorem~\ref{thm:bandwidth}). Our first result is a version for sparse random graphs, extending the resilience results of Huang, Lee and Sudakov~\cite{huang2012}, \cite{bottcher2013almost}, and~\cite{blowup}. \begin{theorem} \label{thm:main} For each $\gamma >0$, $\Delta \geq 2$, and $k \geq 1$, there exist constants $\beta^\ast >0$ and $C^{\ast} >0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p \geq C^{\ast}\big(\frac{\log n}{n}\big)^{1/\Delta}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+ \gamma\right)pn$, and let $H$ be a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$, bandwidth at most $\beta^\ast n$, and with at least $C^{\ast} p^{-2}$ vertices which are not contained in any triangles of $H$. Then $G$ contains a copy of $H$. \end{theorem} Observe that the bound on~$p$ achieved in this result matches the bound in the universality result in Theorem~\ref{thm:universal}. Hence, though we do not believe it to be optimal, improving it will most likely be hard. Moreover, as explained in conjunction with Theorem~\ref{thm:bandwidth}, the minimum degree of $G$ cannot be decreased, nor can the bandwidth restriction be removed. As indicated above, it is also necessary that $\Theta(p^{-2})$ vertices of~$H$ are not in triangles. If in addition the subgraph~$H$ is also $D$-degenerate, we can prove a variant of Theorem~\ref{thm:main} for $p\gg (\log n/n)^{1/(2D+1)}$. Again, this probability bound matches the one in the currently best universality result for $D$-degenerate graphs given in Theorem~\ref{thm:Duniversal}. As before we require a certain number of vertices which are not in triangles of~$H$. But, due to technicalities of our proof method, in addition these vertices are now also required not to be in four-cycles. 
\begin{theorem}\label{thm:degenerate} For each $\gamma >0$, $\Delta \geq 2$, and $D, k \geq 1$, there exist constants $\beta^\ast >0$ and $C^{\ast} >0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p \geq C^{\ast}\big(\frac{\log n}{n}\big)^{1/(2D+1)}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+ \gamma\right)pn$ and let $H$ be a $D$-degenerate, $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$, bandwidth at most $\beta^\ast n$ and with at least $C^{\ast} p^{-2}$ vertices which are not contained in any triangles or four-cycles of $H$. Then $G$ contains a copy of $H$. \end{theorem} Since trees are $1$-degenerate this implies a resilience result for trees when $p\gg (\frac{\log n}{n})^{1/3}$. This probability bound is much worse than that obtained by Balogh, Csaba, and Samotij~\cite{balogh2011} for almost-spanning trees, and unlikely to be optimal, but it is the first resilience result for bounded degree \emph{spanning} trees in $G(n,p)$. Finally, we also establish a sparse analogue of Theorem~\ref{thm:bandwidth} in bijumbled graphs, one of the most widely studied classes of pseudorandom graphs. A graph $\Gamma$ is \emph{$(p,\nu)$-bijumbled} if for all disjoint sets $X,Y\subseteq V(\Gamma)$ we have \[\big|e(X,Y)-p|X||Y|\big|\le\nu\sqrt{|X||Y|}\,.\] This definition goes back to an equivalent notion introduced by Thomason~\cite{Tho87} who initiated the study of pseudorandom graphs. It is also related to the well investigated class of $(n,d,\lambda)$-graphs in that an $(n,d,\lambda)$-graph is $\big(\tfrac{d}{n},\lambda\big)$-bijumbled. Only very recently a universality result similar to Theorem~\ref{thm:universal} was established for bijumbled graphs in~\cite{blowup}, where it was shown that $(p,\nu)$-bijumbled graphs~$G$ with $\delta(G)\ge\frac12pn$ and $\nu\ll p^{\max(4,(3\Delta+1)/2)}n$ are universal for $\mathcal{H}(n,\Delta)$. Our resilience result works for the same bijumbledness condition, though we do not believe it to be optimal. Local resilience results in bijumbled graphs were so far only obtained for special subgraphs~$H$: Dellamonica, Kohayakawa, Marciniszyn, and Steger~\cite{dellamonica2008} considered cycles~$H$ of length $(1-o(1))n$, the results of Conlon, Fox and Zhao~\cite{CFZ} imply resilience for $F$-factors covering $(1-o(1))n$ vertices, and Krivelevich, Lee and Sudakov~\cite{KriLeeSud} established a resilience result for pancyclicity. Hence, previous to this work only little was known about the resilience of bijumbled (or indeed any other common notion of pseudorandom) graphs. \begin{theorem} \label{thm:jumbled} For each $\gamma >0$, $\Delta \geq 2$, and $k \geq 1$, there exists a constant $c >0$ such that the following holds for any $p>0$. Given $\nu\le cp^{\max(4,(3\Delta+1)/2)}n$, suppose $\Gamma$ is a $\big(p,\nu\big)$-bijumbled graph, $G$ is a spanning subgraph of $\Gamma$ with $\delta(G) \geq\big(\tfrac{k-1}{k}+\gamma\big)pn$, and $H$ is a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$ and bandwidth at most $c n$. Suppose further that there are at least $c^{-1}p^{-6} \nu^2n^{-1}$ vertices in $V(H)$ that are not contained in any triangles of $H$. Then $G$ contains a copy of $H$. 
\end{theorem} We remark that the requirement of $c^{-1} p^{-6}\nu^2 n^{-1}$ vertices of $H$ not being in triangles comes from our use of a so-called regularity inheritance lemma proved in~\cite{ABSS}; this bound is not believed to be optimal (see Section~\ref{sec:remarks} for further details). The proofs of our results rely on sparse versions of the so-called blow-up lemma. The blow-up lemma is an important tool in extremal graph theory, which was proved by Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{komlos1997blow} and was, for example, instrumental in the proof of the bandwidth theorem and its analogue in $G(n,p)$ for constant~$p$ by Huang, Lee and Sudakov~\cite{huang2012}. However, it applies only to dense graphs. Several of the earlier resilience results in sparse random graphs developed sparse blow-up type results handling special classes of graphs: Balogh, Lee and Samotij~\cite{balogh2012corradi} proved a sparse blow-up lemma for embedding triangle factors, and in~\cite{bottcher2013almost} a blow-up lemma for embedding almost spanning bipartite graphs in sparse graphs was used. Full versions of the blow-up lemma in sparse random graphs and pseudorandom graphs were established only very recently in~\cite{blowup}. We will use these here. We remark that a simple use of these blow-up lemmas gives almost spanning versions of our main results (as already noted in~\cite{blowup}), and the main work here is to extend this to spanning embedding results, which turns out to be much harder. Further, we note that we actually prove somewhat stronger statements than Theorem~\ref{thm:main}, Theorem~\ref{thm:degenerate}, and Theorem~\ref{thm:jumbled}, in the same sense in which a stronger statement than Theorem~\ref{thm:bandwidth} was proven in~\cite{bottcher2009proof}: we allow~$H$ in fact to be $(k+1)$-colourable, where the additional colour may only be assigned to very few well distributed vertices (for details see, e.g., Theorem~\ref{thm:maink} below). Thus, for instance, even though Theorem~\ref{thm:main} only implies that the local resilience of $G(n,p)$ with respect to Hamiltonicity is a.a.s.\ $\tfrac12-o(1)$ when $n$ is even, Theorem~\ref{thm:maink} implies it also for $n$ odd. \subsection*{Organisation.} The remainder of this paper is organised as follows. In Section~\ref{sec:preliminaries} we introduce necessary definitions and collect some known results which we need in our proofs. Next, in Section~\ref{sec:mainlemmas}, we outline the proof of the bandwidth theorem in sparse random graphs, Theorem~\ref{thm:main}, and state the four technical lemmas we require. Their proofs are given in Sections~\ref{sec:prooflemG}--\ref{sec:prooflembalancing}, and the proof of Theorem~\ref{thm:main} is presented in Section~\ref{sec:proofmain}. We provide the modifications required to obtain Theorem~\ref{thm:degenerate} in Section~\ref{sec:proofdegen}, and those required for Theorem~\ref{thm:jumbled} in Section~\ref{sec:proofjumbled}. Finally, Section~\ref{sec:remarks} contains some concluding remarks, and Appendix~\ref{app:tools} contains proofs of a few results which are more or less standard but which we could not find in the form we need in the literature. \section{Preliminaries} \label{sec:preliminaries} Throughout the paper $\log$ denotes the natural logarithm. We assume that the order $n$ of all graphs tends to infinity and therefore is sufficiently large whenever necessary. For reals $a, b >0$ and integer $k\in \mathbb N$, we use the notation $(a\pm b) = [a-b, a+b]$ and $[k] = \{1, \ldots, k\}$.
Our graph-theoretic notation is standard and follows \cite{bollobas1998modern}. In particular, given a graph $G$ its vertex set is denoted by $V(G)$ and its edge set by $E(G)$. Let $A,B\subseteqeq V(G)$ be disjoint vertex sets. We denote the number of edges between $A$ and $B$ by $e(A,B)$. For a vertex $v \in V(G)$ we write $N_G(v)$ for the neighbourhood of $v$ in $G$ and $N_G(v,A):= N_G(v) \cap A$ for the neighbourhood of $v$ restricted to $A$ in $G$. Given vertices $v_1, \ldots, v_k \in V(G)$ we denote the joint neighbourhood of $v_1, \ldots, v_k$ restricted to a set $A$ by $N_G(v_1, \ldots, v_k; A) = \bigcap_{i\in[k]} N_G(v_i, A)$. Finally, we use the notation $\deg_G(v) := |N_G(v)|$ and $\deg_G(v, A) := |N_G(v,A)|$, as well as $\deg_G(v_1, \ldots, v_k; A) := |N_G(v_1, \ldots, v_k;A)|$, for the degree of $v$ in $G$, the degree of $v$ restricted to $A$ in $G$, and the size of the joint neighbourhood of $v_1, \ldots, v_k$ restricted to $A$ in $G$. For the sake of readability, we do not intend to optimise the constants in our theorems and proofs. Now we introduce some definitions and results of the regularity method as well as related tools that are essential in our proofs. In particular, we state a minimum degree version of the sparse regularity lemma (Lemma~\ref{lem:regularitylemma}) and the sparse blow up lemma (Lemma~\ref{thm:blowup}). Both lemmas use the concept of regular pairs. Let $G= (V,E)$ be a graph, $\varepsilon, d >0$, and $p \in (0,1]$. Moreover, let $X,Y \subseteqeq V$ be two disjoint nonempty sets. The \emph{$p$-density} of the pair $(X,Y)$ is defined as \[d_{G,p}(X,Y) := \frac{e_G(X,Y)}{p|X||Y|}.\] For most of this paper, when we work with random graphs, we will be interested in the regularity concept called \emph{lower-regularity}. When we work with bijumbled graphs, on the other hand, we will need the stronger concept of \emph{regularity}. The difference is that in the former we impose only lower bounds on $p$-densities, whereas in the latter we impose in addition upper bounds. The main reason for this difference is that our `regularity inheritance lemmas' below have different requirements in random and in bijumbled graphs; we do not otherwise make use of the extra strength of `regular' as opposed to `lower-regular'. We also need to define super-regularity, for which we require $G$ to be a subgraph of a graph $\Gamma$, which will be the random or bijumbled graph whose resilience properties we are establishing. \begin{definition}[$(\varepsilon,d,p)$-(super-)(lower-)regular pairs] \label{def:regular} Let~$G$ and~$\Gamma$ be graphs with $G\subseteq\Gamma$. The pair $(X,Y)$ is called \emph{$(\varepsilon,d,p)_G$-lower-regular} if for every $X'\subseteqeq X$ and $Y'\subseteqeq Y$ with $|X'|\geq \varepsilon|X|$ and $|Y'|\geq \varepsilon |Y|$ we have $d_{G,p}(X',Y') \geq d- \varepsilon$. It is called \emph{$(\varepsilon,d,p)_G$-regular} if there exists $d'\ge d$ such that for every $X'\subseteqeq X$ and $Y'\subseteqeq Y$ with $|X'|\geq \varepsilon|X|$ and $|Y'|\geq \varepsilon |Y|$ we have $d_{G,p}(X',Y') = d'\pm \varepsilon$.
If $(X,Y)$ is either $(\varepsilon,d,p)_G$-lower-regular or $(\varepsilon,d,p)_G$-regular, and in addition we have \begin{align*} |N_G(x,Y)| &\geq (d-\varepsilon)\max\big(p|Y|,\deg_\Gamma(x,Y)/2\big)\quad\text{and}\\ |N_G(y,X)| &\geq (d-\varepsilon)\max\big(p|X|,\deg_\Gamma(y,X)/2\big) \end{align*} for every $x \in X$ and $y \in Y$, then the pair $(X,Y)$ is called \emph{$(\varepsilon,d,p)_G$-super-regular}. When we use super-regularity it will be clear from the context whether $(X,Y)$ is lower-regular or regular. \end{definition} Note that a regular pair is by definition lower-regular, though the converse does not hold. Furthermore, although the definition of super-regularity of $G$ contains a reference to $\Gamma$, at each place in this paper where we use super-regularity, we will see that the first term in the maximum is larger than the second. When it is clear from the context, we may omit the subscript $G$ in $(\varepsilon,d,p)_G$-\mbox{(super-)}regular which is used to indicate with respect to which graph a pair is \mbox{(super-)}regular. A direct consequence of the definition of $(\varepsilon,d,p)$-lower-regular pairs is the following proposition about the sizes of neighbourhoods in lower-regular pairs. \begin{proposition} \label{prop:neighbourhood} Let $(X,Y)$ be $(\varepsilon, d,p)$-lower-regular. Then there are less than $\varepsilon |X|$ vertices $x\in X$ with $|N(x,Y)| < (d-\varepsilon)p|Y|$. \qed \end{proposition} The next proposition asserts that altering the vertex sets in an $(\varepsilon,d,p)$-(lower-)regular pair slightly does not destroy (lower-)regularity. \begin{proposition} \label{prop:subpairs3} Let $(X,Y)$ be an $(\varepsilon,d,p)$-lower-regular pair in a graph $G$ and let $\hat{X}$ and $\hat Y$ be two subsets of $V(G)$ such that $|X\triangle \hat{X}| \leq \mu |X|$ and $|Y \triangle \hat Y| \leq \nu |Y|$ for some $0 \leq \mu, \nu \leq 1$. Then $(\hat X, \hat Y)$ is $(\hat \varepsilon, d, p)$-lower-regular, where $\hat \varepsilon := \varepsilon + 2\sqrt{\mu} + 2 \sqrt{\nu}$. Furthermore, if for any disjoint $A,A'\subseteq V(G)$ with $|A|\ge\mu|X|$ and $|A'|\ge\nu|Y|$ we have $e(A,A')\le (1+\mu+\nu)p|A||A'|$, and $(X,Y)$ is $(\varepsilon,d,p)$-regular, then $(\hat X, \hat Y)$ is $(\hat \varepsilon, d, p)$-regular. \end{proposition} We defer the proof of this to Appendix~\ref{app:tools}. In order to state the sparse regularity lemma, we need some more definitions. A partition $\mathcal V = \{V_i\}_{i\in\{0,\ldots,r\}}$ of the vertex set of $G$ is called an \emph{$(\varepsilon,p)_G$-regular partition} of $V(G)$ if $|V_0|\leq \varepsilon |V(G)|$ and $(V_i,V_{i'})$ forms an $(\varepsilon,0,p)_G$-regular pair for all but at most $\varepsilon\binom{r}{2}$ pairs $\{i,i'\}\in \binom{[r]}{2}$. It is called an \emph{equipartition} if $|V_i| = |V_{i'}|$ for every $i,i'\in[r]$. The partition $\mathcal V$ is called \emph{$(\varepsilon,d,p)$-(lower-)regular} on a graph $R$ with vertex set $[r]$ if $(V_i, V_{i'})$ is $(\varepsilon,d,p)_G$-(lower-)regular for every $\{i,i'\} \in E(R)$. The graph $R$ is referred to as the \emph{$(\varepsilon,d,p)_G$-reduced graph} of $\mathcal V$, the partition classes $V_i$ with $i \in [r]$ as \emph{clusters}, and $V_0$ as the \emph{exceptional set}. We also say that $\mathcal V$ is \emph{$(\varepsilon,d,p)_G$-super-regular} on a graph $R'$ with vertex set $[r]$ if $(V_i, V_{i'})$ is $(\varepsilon,d,p)_G$-super-regular for every $\{i,i'\}\in E(R')$. 
Again, when we talk about reduced graphs or super-regularity, whether we are using lower-regularity or regularity will be clear from the context. We will however always specify whether a partition is regular or only lower-regular on $R$. Analogously to Szemer\'edi's regularity lemma for dense graphs, the sparse regularity lemma, proved by Kohayakawa and Rödl~\cite{kohayakawa1997, kohayakawa2003}, asserts the existence of an $(\varepsilon,p)$-regular partition of constant size of any sparse graph. We state a minimum degree version of this lemma, whose proof (following~\cite{bottcher2013almost}) we defer to Appendix~\ref{app:tools}. \begin{lemma}[Minimum degree version of the sparse regularity lemma] \label{lem:regularitylemma} For each $\varepsilon >0$, each $\alpha \in [0,1]$, and $r_0\geq 1$ there exists $r_1\geq 1$ with the following property. For any $d\in[0,1]$, any $p>0$, and any $n$-vertex graph $G$ with minimum degree $\alpha p n$ such that for any disjoint $X,Y\subseteq V(G)$ with $|X|,|Y|\ge\tfrac{\varepsilon n}{r_1}$ we have $e(X,Y)\le \big(1+\tfrac{1}{1000}\varepsilon^2\big)p|X||Y|$, there is an $(\varepsilon,p)_G$-regular equipartition of $V(G)$ with $(\varepsilon,d,p)_G$-reduced graph $R$ satisfying $\delta(R) \geq (\alpha-d-\varepsilon)|V(R)|$ and $r_0 \leq |V(R)| \leq r_1$. \end{lemma} A key ingredient in the proof of our main theorem is the so-called sparse blow up lemma developed by H{\`a}n, Kohayakawa, Person, and two of the current authors in~\cite{blowup}. Given a subgraph $G \subseteqeq \Gamma =G(n,p)$ with $p \gg (\log n/n)^{1/\Delta}$ and an $n$-vertex graph $H$ with maximum degree at most $\Delta$ with vertex partitions $\mathcal V$ and $\mathcal W$, respectively, the sparse blow up lemma guarantees under certain conditions a spanning embedding of $H$ in $G$ which respects the given partitions. In order to state this lemma we need to introduce some definitions. \begin{definition}[$(\vartheta, R')$-buffer] \label{def:buffer} Let $R'$ be a graph on $r$ vertices and let $H$ be a graph with vertex partition $\mathcal W=\{W_i\}_{i\in[r]}$. We say that the family $\widetilde{\mathcal{W}}=\{\widetilde{W}_i\}_{i\in[r]}$ of subsets $\widetilde{W}_i\subseteqeq W_i$ is an \emph{$(\vartheta,R')$-buffer} for $H$ if \begin{enumerate}[label=\itmit{\roman{*}}] \item $|\widetilde{W}_i|\geq\vartheta |W_i|$ for all $i\in[r]$, and \item for each $i\in[r]$ and each $x\in\widetilde{W}_i$, the first and second neighbourhood of $x$ go along $R'$, i.e.,\ for each $\{x,y\},\{y,z\}\in E(H)$ with $y\in W_j$ and $z\in W_k$ we have $\{i,j\}\in E(R')$ and $\{j,k\}\in E(R')$. \end{enumerate} \end{definition} Let $G$ and $H$ be graphs on $n$ vertices with partitions $\mathcal V=\{V_i\}_{i\in[r]}$ of $V(G)$ and $\mathcal W=\{W_i\}_{i\in[r]}$ of $V(H)$. We say that $\mathcal V$ and $\mathcal W$ are \emph{size-compatible} if $|V_i|=|W_i|$ for all $i\in[r]$. If there exists an integer $m \geq 1$ such that $m \leq |V_i| \leq \kappa m$ for every $i\in [r]$, then we say that $(G,\mathcal V)$ is $\kappa$-balanced. Given a graph $R$ on $r$ vertices, we call $(G, \mathcal V)$ an \emph{$R$-partition} if for every edge $\{x,y\}\in E(G)$ with $x \in V_i$ and $y\in V_{i'}$ we have $\{i,i'\}\in E(R)$. We will actually need a little more than just an embedding of $H$ into $G$ respecting given partitions: we will need to restrict the images of some vertices of $H$ to subsets of the clusters of $G$. 
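As an informal illustration of how such restrictions will arise (the precise mechanism is part of the proof outline in Section~\ref{sec:mainlemmas}): if a neighbour $y$ of a vertex $x\in W_i$ has already been embedded, in an earlier step, to some vertex $u$ of $\Gamma$, then $x$ must later be embedded to a neighbour of $u$, so its admissible image inside $V_i$ is no longer all of $V_i$ but a subset $I_x$ of $N_\Gamma(u,V_i)$, and $u$ is recorded as a `restricting vertex' for $x$.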
The following definition encapsulates the properties we have to guarantee for the sparse blow-up lemma to obtain such an embedding. \begin{definition}[Restriction pair] \label{def:restrict} Let $\varepsilon,d>0$, $p \in [0,1]$, and let $R$ be a graph on $r$ vertices. Furthermore, let $G$ be a (not necessarily spanning) subgraph of $\Gamma = G(n,p)$ and let $H$ be a graph given with vertex partitions $\mathcal V= \{V_i\}_{i\in[r]}$ and $\mathcal W = \{W_i\}_{i\in[r]}$, respectively, such that $(G,\mathcal V)$ and $(H,\mathcal W)$ are size-compatible $R$-partitions. Let $\mathcal I=\{I_x\}_{x\in V(H)}$ be a collection of subsets of $V(G)$, called \emph{image restrictions}, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a collection of subsets of $V(\Gamma)\setminus V(G)$, called \emph{restricting vertices}. For each $i\in [r]$ we define $R_i\subseteqeq W_i$ to be the set of all vertices $x \in W_i$ for which $I_x \neq V_i$. We say that $\mathcal I$ and $\mathcal J$ are a \emph{$(\rho,\zeta,\Delta,\Delta_J)$-restriction pair} if the following properties hold for each $i\in[r]$ and $x\in W_i$. \begin{enumerate}[label=\itmarab{RP}] \item\label{itm:restrict:numres} We have $|R_i|\leq\rho|W_i|$. \item\label{itm:restrict:sizeIx} If $x\in R_i$, then $I_x\subseteqeq \bigcap_{u\in J_x} N_\Gamma(u, V_i)$ is of size at least $\zeta(dp)^{|J_x|}|V_i|$. \item\label{itm:restrict:Jx} If $x\in R_i$, then $|J_x|+\deg_H(x)\leq\Delta$ and if $x\in W_i\setminus R_i$, then $J_x=\varnothing$. \item\label{itm:restrict:DJ} Each vertex in $V(G)$ appears in at most $\Delta_J$ of the sets of $\mathcal J$. \item\label{itm:restrict:sizeGa} We have $\big|\bigcap_{u\in J_x} N_\Gamma(u, V_i)\big| = (p\pm\varepsilon p)^{|J_x|}|V_i|$. \item\label{itm:restrict:Ireg} If $x\in R_i$, for each $xy\in E(H)$ with $y\in W_j$, \[\text{the pair }\quad\Big( V_i \cap \bigcap_{u\in J_x}N_\Gamma(u), V_j \cap \bigcap_{v\in J_y}N_\Gamma(v)\Big)\quad\text{ is $(\varepsilon,d,p)_G$-lower-regular.}\] \end{enumerate} \end{definition} Suppose $\mathcal V$ is an $(\varepsilon,d,p)_G$-lower-regular partition of $V(G)$ with reduced graph $R$, and let $R'$ be a subgraph of $R$. We say $(G,\mathcal V)$ has \emph{one-sided inheritance on $R'$} if for every $\{i,j\}, \{j,k\}\in E(R')$ and every $v\in V_i$ the pair $\big(N_\Gamma(v, V_j),V_k\big)$ is $(\varepsilon,d,p)_G$-lower-regular. Given a $(\vartheta,R')$-buffer $\widetilde{\mathcal{W}}$, we say that $(G,\mathcal V)$ has \emph{two-sided inheritance on $R'$ for $\widetilde{\mathcal{W}}$} if whenever there is a triangle $w_iw_jw_k\in H$ with $w_i\in\widetilde{W}_i$, $w_j\in W_j$ and $w_k\in W_k$, it follows that for every $v\in V_i$ the pair $\big(N_\Gamma(v, V_j),N_\Gamma(v, V_k)\big)$ is $(\varepsilon,d,p)_G$-lower-regular. Now we can finally state the sparse blow up lemma. \begin{lemma}[{\cite[Lemma 1.21]{blowup}}] \label{thm:blowup} For each $\Delta$, $\Delta_{R'}$, $\Delta_J$, $\vartheta,\zeta, d>0$, $\kappa>1$ there exist $\eBL,\rho>0$ such that for all $r_1$ there is a $\CBL$ such that for $p\geq\CBL(\log n/n)^{1/\Delta}$ the random graph $\Gamma=G_{n,p}$ asymptotically almost surely satisfies the following. Let $R$ be a graph on $r\le r_1$ vertices and let $R'\subseteqeq R$ be a spanning subgraph with $\Delta(R')\leq \Delta_{R'}$. Let $H$ and $G\subseteqeq \Gamma$ be graphs given with $\kappa$-balanced, size-compatible vertex partitions $\mathcal W=\{W_i\}_{i\in[r]}$ and $\mathcal V=\{V_i\}_{i\in[r]}$ with parts of size at least $m\geq n/(\kappa r_1)$. 
Let $\mathcal I=\{I_x\}_{x\in V(H)}$ be a family of image restrictions, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a family of restricting vertices. Suppose that \begin{enumerate}[label=\itmarab{BUL}] \item\label{itm:blowup:H} $\Delta(H)\leq \Delta$, for every edge $\{x,y\}\in E(H)$ with $x\in W_i$ and $y\in W_j$ we have $\{i,j\}\in E(R)$ and $\widetilde{\mathcal{W}}=\{\widetilde{W}_i\}_{i\in[r]}$ is an $(\vartheta,R')$-buffer for $H$, \item\label{itm:blowup:G} $\mathcal V$ is $(\eBL,d,p)_G$-lower-regular on $R$, $(\eBL,d,p)_G$-super-regular on $R'$, has one-sided inheritance on $R'$, and two-sided inheritance on $R'$ for $\widetilde{\mathcal{W}}$, \item\label{itm:blowup:restrict} $\mathcal I$ and $\mathcal J$ form a $(\rho,\zeta,\Delta,\Delta_J)$-restriction pair. \end{enumerate} Then there is an embedding $\phi\colon V(H)\to V(G)$ such that $\phi(x)\in I_x$ for each $x\in H$. \end{lemma} Observe that in the blow up lemma for dense graphs, proved by Koml{\'o}s, S{\'a}rk{\"o}zy, and Szemer{\'e}di~\cite{komlos1997blow}, one does not need to explicitly ask for one- and two-sided inheritance properties since they are always fulfilled by dense regular partitions. This is, however, not true in general in the sparse setting. The following two lemmas will be very useful whenever we need to choose vertices whose neighbourhoods inherit lower-regularity. \begin{lemma}[One-sided lower-regularity inheritance,~\cite{blowup}] \label{lem:OSRIL} For each $\eo, \ao >0$ there exist $\varepsilon_0 >0$ and $C >0$ such that for any $0 < \varepsilon < \varepsilon_0$ and $0 < p <1$ asymptotically almost surely $\Gamma= G(n,p)$ has the following property. For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|\geq C\max\big(p^{-2}, p^{-1} \log n\big)$ and $|Y| \geq C p^{-1} \log n$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon, \ao,p)_G$-lower-regular, there are at most $C p^{-1}\log (en/|X|)$ vertices $z \in V(\Gamma)$ such that $(X \cap N_{\Gamma}(z),Y)$ is not $(\eo,\ao,p)_G$-lower-regular. \end{lemma} \begin{lemma}[Two-sided lower-regularity inheritance,~\cite{blowup}] \label{lem:TSRIL} For each $\et,\at>0$ there exist $\varepsilon_0>0$ and $C >0$ such that for any $0<\varepsilon<\varepsilon_0$ and $0<p<1$, asymptotically almost surely $\Gamma=G_{n,p}$ has the following property. For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|,|Y|\ge C\max\{p^{-2},p^{-1}\log n\}$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon,\at,p)_G$-lower-regular, there are at most $C\max\{p^{-2},p^{-1}\log (en/|X|)\}$ vertices $z \in V(\Gamma)$ such that $\big(X\cap N_\Gamma(z),Y\cap N_\Gamma(z)\big)$ is not $(\et,\at,p)_G$-lower-regular. \end{lemma} We close this section with some probabilistic tools. We start with the following useful observation. Roughly speaking, it states that a.a.s.~nearly all vertices in $G(n,p)$ have approximately the expected number of neighbours within large enough subsets. \begin{proposition}[] \label{prop:chernoff} For each $\varepsilon>0$ there exists a constant $C >0$ such that for every $0<p<1$ asymptotically almost surely $\Gamma=G(n,p)$ has the following properties. For any disjoint $X,Y\subseteq V(\Gamma)$ with $|X|\ge Cp^{-1}\log n$ and $|Y|\ge Cp^{-1}\log (en/|X|)$, we have $e(X,Y)=(1\pm\varepsilon)p|X||Y|$ and $e(X)\le 2p|X|^2$. Furthermore, for every $X \subseteqeq V(\Gamma)$ with $|X| \geq C p^{-1} \log n$, the number of vertices $v \in V(\Gamma)$ with $\big||N_{\Gamma}(v,X)| - p |X|\big| > \varepsilon p |X|$ is at most $C p^{-1} \log (en/|X|)$. 
\end{proposition} Note that in most of this paper we will use the upper bound $\log(en/|X|)\le\log n$ when applying this proposition, and Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, valid since (in all applications) we have $|X|\ge e$. We will only need the full strength of these three results when proving the Lemma for $G$ (Lemma~\ref{lem:G}). In the proof of Proposition~\ref{prop:chernoff} we use the following version of Chernoff's Inequalities (see e.g.~\cite[Chapter~2]{janson2011random} for a proof). \begin{theorem}[Chernoff's Inequality,~\cite{janson2011random}] \label{thm:chernoff} Let $X$ be a random variable which is the sum of independent Bernoulli random variables. Then we have for $\varepsilon\leq 3/2$ \[\mathbb{P}\big[|X-\mathbb{E}[X]| > \varepsilon \mathbb{E}[X]\big] < 2e^{-\varepsilon^2\mathbb{E}[X]/3}\,.\] Furthermore, if $t\ge 6\mathbb{E}[X]$ then we have \[\mathbb{P}\big[X\ge\mathbb{E}[X]+t\big]\le e^{-t}\,.\] \end{theorem} \begin{proof}[Proof of Proposition~\ref{prop:chernoff}] Since the statement of the proposition is stronger when $\varepsilon$ is smaller, we may assume that $0<\varepsilon\le 1$. We set $C'=100\varepsilon^{-2}$ and $C=1000C'\varepsilon^{-1}$. We first show that $\Gamma=G(n,p)$ a.a.s.\ has the following two properties. For any disjoint $A,B\subseteq V(\Gamma)$, with $|A|\ge C'p^{-1}\log n$ and $|B|\ge C'p^{-1}\log(en/|A|)$, we have $e(A,B)=\big(1\pm\tfrac{\varepsilon}{2}\big)p|A||B|$. For any $A\subseteq V(\Gamma)$, we have $e(A)\le 4p|A|^2+2|A|\log n$, and if $|A|\ge C'p^{-1}\log n$ then $e(A)\le 2p|A|^2$. Note that these properties imply the first two conclusions of the proposition. We estimate the failure probability of the first property using Theorem~\ref{thm:chernoff} and the union bound. Assuming without loss of generality that $|A|\ge|B|$, this probability is at most \begin{align*} \sum_{|A|,|B|\le n} \binom{n}{|A|}^2\cdot 2e^{-\varepsilon^2p|A||B|/12}&\le 2n\sum_{|A|}\Big(\frac{en}{|A|}\Big)^{2|A|}e^{-\varepsilon^2C'|A|\log (en/|A|)/12}\\ &<2n\sum_{|A|}\Big(\frac{en}{|A|}\Big)^{-2|A|}\,. \end{align*} For the second property, observe that $4p|A|^2>7p\binom{|A|}{2}$, so that for any given $A$ by Theorem~\ref{thm:chernoff} we have \[\mathbb{P}\big[e(A)\ge 4p|A|^2+2|A|\log n\big]\le e^{-2|A|\log n}=n^{-2|A|}\,.\] Taking a union bound over the at most $n^{|A|}$ choices of $A$ given $|A|$, we see that the failure probability of the second property is at most $\sum_{a=1}^nn^{-a}$. Finally, the failure probability of the last property is at most \[\sum_{|A|\ge C'p^{-1}\log n}n^{|A|}\cdot 2e^{-p\binom{|A|}{2}/3}\le\sum_{|A|}2n^{|A|}e^{-C'|A|\log n/12}\le 2n^{-2}\,,\] and since all three failure probabilities tend to zero as $n\to\infty$, we conclude that a.a.s.\ $G(n,p)$ enjoys both properties. Now suppose $\Gamma$ has these properties, and let $X\subseteq V(\Gamma)$ have size at least $Cp^{-1}\log n$. We first show that there are at most $C'p^{-1}\log (en/|X|)$ vertices in $\Gamma$ which have less than $(1-\varepsilon)p|X|$ neighbours in $X$. If this were false, then we could choose a set $Y$ of $C'p^{-1}\log (en/|X|)$ vertices in $\Gamma$ which have less than $(1-\varepsilon)p|X|$ neighbours in $X$. By choice of $C$ and since $|X|>e$, we have $(1-\varepsilon)p|X|\le \big(1-\tfrac\varepsilon2\big)p|X\setminus Y|$, so we see that $e(Y,X\setminus Y)<\big(1-\tfrac{\varepsilon}{2}\big)p|Y||X\setminus Y|$. This is a contradiction since $|X\setminus Y|\ge C'p^{-1}\log n$. 
Next we show that there are at most $2C'p^{-1}\log (en/|X|)$ vertices of $\Gamma$ which have more than $(1+\varepsilon)p|X|$ neighbours in $X$. Again, if this is not the case we can let $Y$ be a set of $2C'p^{-1}\log (en/|X|)$ vertices of $\Gamma$ with more than $(1+\varepsilon)p|X|$ neighbours in $X$. Now $e(Y)\le 4p|Y|^2+2|Y|\log n=8C'|Y|\log(en/|X|)+2|Y|\log n\le 10C'|Y|\log n$, so there are at most $|Y|/2$ vertices in $Y$ which have $40C'\log n$ or more neighbours in $Y$. Let $Y'\subseteq Y$ consist of those vertices with at most $40C'\log n$ neighbours in $Y$. For each $v\in Y'$ we have \[(1+\varepsilon)p|X|\le\deg(v;X)\le \deg(v;Y)+\deg(v;X\setminus Y)\,,\] and so, by choice of $C$, each vertex of $Y'$ has at least $\big(1+\tfrac\varepsilon2\big)p|X\setminus Y|$ neighbours in $X\setminus Y$. Since $|Y'|\ge C'p^{-1}\log(2en/|X|)$ and $|X\setminus Y|\ge |X|/2\ge C'p^{-1}\log n$, this is a contradiction. Finally, since by choice of $C$ we have $3C'p^{-1}\log n<Cp^{-1}\log n$ we conclude that all but at most $Cp^{-1}\log (en/|X|)$ vertices of $\Gamma$ have $(1\pm\varepsilon)p|X|$ neighbours in $X$, as desired. \end{proof} Now let $N$, $m$, and $s$ be positive integers and let $S$ and $S' \subseteqeq S$ be two sets with $|S| = N$ and $|S'| = m$. The \emph{hypergeometric distribution} is the distribution of the random variable $X$ that is defined by drawing $s$ elements of $S$ without replacement and counting how many of them belong to $S'$. It can be shown that Theorem~\ref{thm:chernoff} still holds in the case of hypergeometric distributions (see e.g.~\cite[Chapter~2]{janson2011random} for a proof) with $\mathbb{E}[X]:= ms/N$. \begin{theorem}[Hypergeometric inequality,~\cite{janson2011random}] \label{thm:hypergeometric} Let $X$ be a random variable that follows the hypergeometric distribution with parameters $N$, $m$, and $s$. Then for any $\varepsilon>0$ and $t\ge\varepsilon ms/N$ we have \[\mathbb{P}\big[|X - ms/N| > t \big] < 2e^{-\varepsilon^2t/3}\,.\] \end{theorem} We require the following technical lemma, which is a consequence of the hypergeometric inequality stated in Theorem~\ref{thm:hypergeometric}. \begin{lemma}\label{lem:hypgeo} For each $\eta>0$ and $\Delta$ there exists $C$ such that the following holds. Let $W\subseteq [n]$, let $t\le 100n^\Delta$, and let $T_1,\ldots,T_t$ be subsets of $W$. For each $m\le |W|$ there is a set $S\subseteq W$ of size $m$ such that \[|T_i\cap S|=\frac{m}{|W|}|T_i|\pm \big(\eta|T_i|+C\log n\big)\text{ for every }i\in[t]\,.\] \end{lemma} \begin{proof} Set $C=30\eta^{-2}\Delta$ and let $S\subseteq W$ be a set of size $m$ chosen uniformly at random. Observe that for each $i$, the size of $T_i\cap S$ is hypergeometrically distributed. By Theorem~\ref{thm:hypergeometric}, for each $i$ we have \[\mathbb{P}\Big[|T_i\cap S|\neq \frac{m}{|W|}|T_i|\pm \big(\eta|T_i|+C\log n\big)\Big]<2e^{-\eta^2C\log n/3}<\frac{2}{n^{1+\Delta}}\,,\] so taking the union bound over all $i\in[t]$ we conclude that the probability of failure is at most $2t/n^{1+\Delta}\le 200/n\to 0$ as $n\to\infty$, as desired. \end{proof} We shall also use McDiarmid's Inequality. \begin{lemma}[McDiarmid's Inequality~\cite{McDiarmid}]\label{lem:McDiarmid} Let $X_1, \ldots, X_k$ be independent random variables, where $X_i$ takes values in a finite set $A_i$ for each $i\in [k]$.
Suppose that a function $g: A_1 \times \ldots \times A_k \to \mathbb R$ satisfies for each $i\in [k]$ $$\sup_{x_1,\ldots, x_k, \hat x_i}|g(x_1,x_2,\ldots,x_k)-g(x_1,x_2,\ldots, x_{i-1}, \hat{x}_i, x_{i+1}, \ldots, x_k)| \leq c_i.$$ Then, for any $\varepsilon >0$, we have $$\mathbb{P}\big[|\mathbb{E}[g(X_1,\ldots, X_k)]- g(X_1,\ldots, X_k)| \geq \varepsilon \big]\leq 2\exp\left\{-\frac{2\varepsilon^2}{\sum_{i\in[k]} c_i^2}\right\}\,.$$ \end{lemma} \section{Proof overview and main lemmas} \label{sec:mainlemmas} Theorem~\ref{thm:main} is a corollary of the following more general Theorem~\ref{thm:maink}, which we prove in Section~\ref{sec:proofmain}. We require one preliminary definition. \begin{definition}[Zero-free colouring]\label{def:zerofree} Let $H$ be a $(k+1)$-colourable graph on $n$ vertices and let $\mathcal L$ be a labelling of its vertex set by integers $1, \ldots, n$ of bandwidth at most $\beta n$. A proper $(k+1)$-colouring $\sigma:V(H) \to \{0,\ldots,k\}$ of its vertex set is said to be \emph{$(z,\beta)$-zero-free} with respect to $\mathcal L$ if any $z$ consecutive blocks contain at most one block with colour zero, where a block is defined as a set of the form $\{(t-1)4k\beta n +1, \ldots, t4k\beta n\}$ with some $t \in [1/(4k\beta)]$, and a block with colour zero is a block in which at least one vertex is coloured with zero. \end{definition} \begin{theorem} \label{thm:maink} For each $\gamma>0$, $\Delta \geq 2$, and $k\geq 2$, there exist constants $\beta >0$, $z>0$, and $C>0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p\geq C\left(\frac{\log n}{n}\right)^{1/\Delta}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+\gamma\right)pn$ and let $H$ be a graph on $n$ vertices with $\Delta(H) \leq \Delta$ that has a labelling $\mathcal L$ of its vertex set of bandwidth at most $\beta n$, a $(k+1)$-colouring that is $(z,\beta)$-zero-free with respect to $\mathcal L$ and where the first $\sqrt{\beta} n$ vertices in $\mathcal L$ are not given colour zero and the first $\beta n$ vertices in $\mathcal L$ include $C p^{-2}$ vertices that are not contained in any triangles of $H$. Then $G$ contains a copy of $H$. \end{theorem} \subsection{Proof overview}\label{subsec:over} We now give a brief sketch of the proof of Theorem~\ref{thm:maink}. Ultimately, our goal is to apply the sparse blow-up lemma, Lemma~\ref{thm:blowup}, to find an embedding of $H$ into $G$. Thus, the proof boils down to obtaining the required conditions. But there is a catch: this is not as such possible, as for any lower-regular partition of $G$ there can be $O(p^{-2})$ exceptional vertices which are `badly behaved' with respect to the partition. These vertices will never satisfy the conditions of the sparse blow-up lemma, and we will have to deal with them beforehand. We will do this by `pre-embedding' some vertices of $H$ to cover the exceptional vertices, and then apply the sparse blow-up lemma to complete the embedding of $H$ into $G$, using image restrictions to ensure we really obtain an embedding of $H$. Let us now fill in a few more details. We start by obtaining, in the lemma for $G$, Lemma~\ref{lem:G}, a lower-regular partition of $G$ into parts $V_0$ and $V_{i,j}$ for $i\in[r]$ (where $r$ may be large but is bounded above by a constant) and $j\in [k]$ with several extra properties. 
The most important properties are that $|V_0|=O\big(p^{-2}\big)$, that the corresponding reduced graph, which we call $R^k_r$, on the vertex set $[r]\times[k]$ has high minimum degree and contains a so-called backbone graph on which the partition does not merely provide lower-regular pairs but super-regular pairs, and that all vertices outside $V_0$ have inheritance properties with respect to all lower-regular pairs. Here, a \emph{backbone graph} has all edges between $(i,j)$ and $(i',j')$ with $|i-i'|\le 1$ and $j\neq j'$. One should think of the backbone graph as consisting of copies of $K_k$ (one for each $i\in[r]$) connected in a linear order; and the high minimum degree of $R^k_r$ ensures that each $K_k$ extends to $K_{k+1}$ in $R^k_r$. In short, if the exceptional vertices $V_0$ did not exist, this partition, together with a corresponding partition of $V(H)$, would be what we need to apply the sparse blow-up lemma. Passing over for now the inconvenient existence of $V_0$, our next task is to find the corresponding partition of $V(H)$, for which we use the lemma for $H$, Lemma~\ref{lem:H2}. The basic idea is then to split $H$ into intervals in the bandwidth order. We assign the first interval to the first $K_k$ of the backbone graph according to the given colouring of $H$, with the few vertices of colour zero assigned to a vertex extending this clique of the backbone graph to $K_{k+1}$, and so on. Using the bandwidth property and zero-freeness of the colouring one can do this in such a way as to obtain a graph homomorphism from $H$ to $R^k_r$, which is what we need. In addition, we need the number of vertices assigned to each $(i,j)\in V(R^k_r)$ to be very close to $|V_{i,j}|$. We cannot guarantee exact equality, but we can get very close by making further use of bandwidth, zero-freeness, and the fact that $K_k$s in $R^k_r$ extend to $K_{k+1}$s. Now we have to deal with the exceptional set $V_0$. We do this as follows. We choose a vertex $v$ in the exceptional set, and `pre-embed' to it a vertex $x$ picked from the first $\sqrt{\beta}n$ vertices of $\mathcal L$ which is not in any triangle of $H$. Using the common neighbourhood lemma, Lemma~\ref{lem:common}, we choose $\Delta$ neighbours of $v$ which are `well-behaved' with respect to the clusters $V_{i,j}$ for some $i\in[r]$, and pre-embed the neighbours of $x$ to these vertices. The `well-behaved' properties are what we need to generate image restrictions for the second neighbours of $x$ (which we will embed using the sparse blow-up lemma) satisfying the restriction pair properties. We also need to change the assignment from the Lemma for $H$ locally (up to a large but constant distance from $x$) to accommodate this: the vertex $x$, and its first and second neighbours, might have been assigned somewhere quite different previously. We repeat this until we have pre-embedded to all exceptional vertices, and let $H'$ and $G'$ be respectively the unembedded vertices of $H$ and the vertices of $G$ to which we did not pre-embed. At this point we have all the conditions we need to apply the sparse blow-up lemma to complete the embedding, except that the partitions of $H'$ and $G'$ we have do not quite have parts of matching sizes. We use the balancing lemma, Lemma~\ref{lem:balancing}, to deal with this. 
The idea is simple: we take some carefully selected vertices in clusters of $G$ which are too big (compared to the assigned part of $H$) and move them to other clusters, first in order to make sure that the total number of vertices in $\bigcup_i V_{i,j}$ is correct for each $j$ (using the high minimum degree of $R^k_r$) and then (using the structure of the backbone graph) to give each cluster the correct size. At last, applying the sparse blow-up lemma, Lemma~\ref{thm:blowup}, we complete the embedding of $H$ into $G$. We note that this proof sketch glosses over some subtleties. In particular, at the two places where `we choose' vertices onto which to pre-embed, we have to be quite careful to choose vertices correctly so that this strategy can be completed and we do not destroy good properties obtained earlier. We will return to this point immediately before the proof of Theorem~\ref{thm:maink} in Section~\ref{sec:proofmain} to explain how we do this. \subsection{Main lemmas} In this subsection we formulate the four main lemmas that we use in the proof of Theorem~\ref{thm:maink} mentioned in the above overview. We defer the proofs of these lemmas to later sections. Before stating these lemmas, we need some more definitions. Let $r, k \geq 1$ and let $B^k_r$ be the backbone graph on $kr$ vertices. That is, we have \[V(B^k_r) := [r] \times [k]\] and for every $j \neq j' \in [k]$ we have $\{(i,j),(i',j')\} \in E(B^k_r)$ if and only if $|i-i'|\le1$. Let $K^k_r \subseteqeq B^k_r$ be the spanning subgraph of $B^k_r$ that is the disjoint union of $r$ complete graphs on $k$ vertices given by the following components: the complete graph $K^k_r[\{(i,1),\ldots, (i,k)\}]$ is called the \emph{$i$-th component} of $K^k_r$ for each $i\in [r]$. A vertex partition $\mathcal V' = \{V_{i,j}\}_{i\in[r],j\in[k]}$ is called \emph{$k$-equitable} if $\big||V_{i,j}| -|V_{i,j'}|\big|\leq 1$ for every $i\in [r]$ and $j,j'\in[k]$. Similarly, an integer partition $\{n_{i,j}\}_{i\in[r],j\in[k]}$ of $n$ (meaning that $n_{i,j} \in \mathbb Z_{\geq 0}$ for every $i\in [r],j\in[k]$ and $\sum_{i\in[r]j\in[k]} n_{i,j} = n$) is \emph{$k$-equitable} if $|n_{i,j}-n_{i,j'}| \leq 1$ for every $i\in[r]$ and $j,j'\in[k]$. The lemma for $G$ says that a.a.s.~$\Gamma = G(n,p)$ satisfies the following property if $p \gg (\log n/n)^{1/2}$. For any spanning subgraph $G\subseteq\Gamma$ with minimum degree a sufficiently large fraction of $pn$, there exists an $(\varepsilon,d,p)_G$-lower-regular vertex partition $\mathcal V$ of $V(G)$ whose reduced graph $R^k_r$ contains a clique factor $K^k_r$ on which the corresponding vertex sets of $\mathcal V$ are pairwise $(\varepsilon,d,p)$-super-regular. Furthermore, $(G,\mathcal V)$ has one-sided and two-sided inheritance with respect to $R^k_r$, and the $\Gamma$-neighbourhoods of all vertices but the ones in the exceptional set of $\mathcal V$ have almost exactly their expected size in each cluster. The proof of Lemma~\ref{lem:G} is given in Section~\ref{sec:prooflemG}. \begin{lemma}[Lemma for $G$] \label{lem:G} For each $\gamma > 0$ and integers $k \geq 2$ and $r_0 \geq 1$ there exists $d > 0$ such that for every $\varepsilon \in \left(0, \frac{1}{2k}\right)$ there exist $r_1\geq 1$ and $C^{\ast}>0$ such that the following holds a.a.s.~for $\Gamma = G(n,p)$ if $p \geq C^{\ast} \left(\log n/n\right)^{1/2}$. Let $G=(V,E)$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq \left(\frac{k-1}{k} + \gamma\right)pn$. 
Then there exists an integer $r$ with $r_0\leq kr \leq r_1$, a subset $V_0 \subseteqeq V$ with $|V_0| \leq C^{\ast} p^{-2}$, a $k$-equitable vertex partition $\mathcal V = \{V_{i,j}\}_{i\in[r],j\in[k]}$ of $V(G)\setminus V_0$, and a graph $R^k_r$ on the vertex set $[r] \times [k]$ with $K^k_r \subseteqeq B^k_r \subseteqeq R^k_r$, with $\delta(R^k_r) \geq \left(\frac{k-1}{k} + \frac{\gamma}{2}\right)kr$, and such that the following are true. \begin{enumerate}[label=\itmarab{G}] \item \label{lemG:size} $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item \label{lemG:regular} $\mathcal V$ is $(\varepsilon,d,p)_G$-lower-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item \label{lemG:inheritance} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon,d,p)_G$-lower-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, \item \label{lemG:gamma} $|N_{\Gamma}(v,V_{i,j})| = (1 \pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \end{enumerate} Furthermore, if we replace~\ref{lemG:inheritance} with the weaker \begin{enumerate}[label=\itmsol{G}{3'}] \item \label{lemG:inheritancep} $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ is an $(\varepsilon,d,p)_G$-lower-regular pair for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, \end{enumerate} then we have the stronger bound $|V_0|\le C^{\ast} p^{-1}$. \end{lemma} After Lemma~\ref{lem:G} has constructed a lower-regular partition $\mathcal V$ of $V(G)$, the second main lemma deals with the graph $H$ that we would like to find as a subgraph of $G$. More precisely, Lemma~\ref{lem:H2} provides a homomorphism $f$ from the graph $H$ to the reduced graph $R^k_r$ given by Lemma~\ref{lem:G} which has, among others, the following properties. The edges of $H$ are mapped to the edges of $R^k_r$, and the vast majority of the edges of $H$ are assigned to edges of the clique factor $K^k_r \subseteqeq R^k_r$. The number of vertices of $H$ mapped to a vertex of $R^k_r$ only differs slightly from the size of the corresponding cluster of $\mathcal V$. The lemma further guarantees that each of the first $\sqrt{\beta}n$ vertices of the bandwidth ordering of $V(H)$ is mapped to $(1,j)$, where $j$ is the colour that the vertex received in the given colouring of $H$. In case $H$ is $D$-degenerate, the next lemma also ensures that for every $(i,j) \in [r] \times [k]$ a constant fraction of the vertices mapped to $(i,j)$ each have at most $2D$ neighbours. \begin{lemma}[Lemma for~$H$]\label{lem:H2} Given $D, k, r \geq 1$ and $\xi, \beta > 0 $ the following holds if $\xi \leq 1/(kr)$ and $\beta \leq 10^{-10}\xi^2/(D k^4r)$. Let $H$ be a $D$-degenerate graph on $n$ vertices, let $\mathcal L$ be a labelling of its vertex set of bandwidth at most $\beta n$ and let $\sigma: V(H) \to \{0,\ldots, k\}$ be a proper $(k+1)$-colouring that is $(10/\xi, \beta)$-zero-free with respect to $\mathcal L$, where the colour zero does not appear in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$. Furthermore, let $R^k_r$ be a graph on vertex set $[r] \times [k]$ with $K^k_r \subseteqeq B^k_r \subseteqeq R^k_r$ such that for every $i\in [r]$ there exists a vertex $z_i \in \big([r]\setminus\{i\}\big) \times [k]$ with $\big\{z_i, (i,j)\big\} \in E(R^k_r)$ for every $j\in [k]$.
Then, given a $k$-equitable integer partition $\{m_{i,j}\}_{i\in[r],j\in[k]}$ of $n$ with $n/(10kr) \leq m_{i,j} \leq 10n/(kr)$ for every $i \in[r]$ and $j\in [k]$, there exists a mapping $f \colon V(H) \to [r]\times[k]$ and a set of special vertices $X \subseteqeq V(H)$ such that we have for every $i\in [r]$ and $j\in[k]$ \begin{enumerate}[label=\itmarab{H}] \item\label{lemH:H1} $m_{i,j} - \xi n \leq |f^{-1}(i,j)| \leq m_{i,j} + \xi n$, \item\label{lemH:H2} $|X| \leq \xi n$, \item\label{lemH:H3} $\{f(x),f(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H)$, \item\label{lemH:H4} $y,z\in \bigcup_{j'\in[k]}f^{-1}(i,j')$ for every $x\in f^{-1}(i,j)\setminus X$ and $xy,yz\in E(H)$, \item\label{lemH:H5} $f(x) = \big(1, \sigma(x)\big)$ for every $x$ in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$, and \item\label{lemH:H6} $|\{x\in f^{-1}(i,j): \deg(x) \leq 2D\}| \geq \tfrac{1}{24D} |f^{-1}(i,j)|$. \end{enumerate} \end{lemma} Lemma~\ref{lem:H2} is a strengthened version of~\cite[Lemma~8]{BoeTarWue}. The proof of~\cite[Lemma~8]{BoeTarWue} is deterministic; here we use a probabilistic argument to show the existence of a function $f$ that also satisfies the additional property~\ref{lemH:H6}, which is required for Theorem~\ref{thm:degenerate}. However, we still borrow ideas from the proof of~\cite[Lemma~8]{BoeTarWue}. The proof of Lemma~\ref{lem:H2} will be given in Section~\ref{sec:lemH}. During the pre-embedding, we embed a vertex $x$ of $H$ onto a vertex $v$ of $V_0$, and we also embed its neighbours $N_H(x)$. This creates restrictions on the vertices of $G$ to which we can embed the second neighbours, and for the application of Lemma~\ref{thm:blowup} we need certain conditions to be satisfied. The next lemma states that we can find vertices in $N_G(v)$, to which we will embed $N_H(x)$, satisfying these conditions. \begin{lemma}[Common neighbourhood lemma] \label{lem:common} For each $d>0$, $k \geq 2$, and $\Delta \geq 2$ there exists $\alpha >0$ such that for every $\varepsilon^\ast \in (0,1)$ there exists $\varepsilon_0 >0$ such that for every $r\geq 1$ and every $0<\varepsilon\le\varepsilon_0$ there exists $C^{\ast} >0$ such that the following is true. If $p \geq C^{\ast} \left(\log n/n\right)^{1/\Delta}$, then $\Gamma = G(n,p)$ a.a.s.~satisfies the following. Let $G=(V,E)$ be a (not necessarily spanning) subgraph of $\Gamma$ and $\{V_i\}_{i\in[k]}\cup \{W\}$ a vertex partition of a subset of $V$ such that the following are true for every $i,i'\in [k]$. \begin{enumerate}[label=\itmarab{V}] \item\label{cnl:bal} $\frac{n}{4kr}\le |V_i|\le \frac{4n}{kr}$, \item\label{cnl:Vreg} $(V_i,V_{i'})$ is $(\varepsilon, d, p)_G$-lower-regular, \item\label{cnl:W} $|W|\ge 10^{-10}\frac{\varepsilon^4 pn}{k^4r^4}$, and \item\label{cnl:Wdeg} $|N_G(w,V_i)| \geq dp|V_i|$ for every $w \in W$.
\end{enumerate} Then there exists a tuple $(w_1, \ldots, w_\Delta) \in \binom{W}{\Delta}$ such that for every $\Lambda,\Lambda^\ast\subseteqeq[\Delta]$, and every $i \neq i' \in [k]$ we have \begin{enumerate}[label=\itmarab{W}] \item\label{cnl:Gsize} $|\bigcap_{j\in \Lambda} N_G(w_j,V_i)|\geq \alpha p^{|\Lambda|}|V_i|$, \item\label{cnl:Gasizen} $|\bigcap_{j\in \Lambda} N_{\Gamma}(w_j)| \le (1 + \varepsilon^\ast)p^{|\Lambda|}n$, \item\label{cnl:Gasize} $|\bigcap_{j\in \Lambda} N_{\Gamma}(w_j,V_i)| = (1 \pm \varepsilon^\ast)p^{|\Lambda|}|V_i|$, and \item\label{cnl:Nreg} $\big(\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j^\ast\in \Lambda^\ast}N_{\Gamma}(w_{j^\ast},V_{i'})\big)$ is $(\varepsilon^\ast, d,p)_G$-lower-regular if $|\Lambda|,|\Lambda^\ast| < \Delta$ and either $\Lambda\cap\Lambda^\ast=\varnothing$ or $\Delta\geq 3$ or both. \end{enumerate} \end{lemma} Let $H'$ and $G'$ denote the subgraphs of $H$ and $G$ that result from removing all vertices that were used in the pre-embedding process. As a last step before finally applying the sparse blow-up lemma, the clusters in $\restr{\mathcal V}{G'}$ need to be adjusted to the sizes of $\restr{W_{i,j}}{H'}$. The next lemma states that this is possible, and that after this redistribution the regularity properties needed for Lemma~\ref{thm:blowup} still hold. \begin{lemma}[Balancing lemma] \label{lem:balancing} For all integers $k\geq 1$, $r_1, \Delta \geq 1$, and reals $\gamma, d >0$ and $0 < \varepsilon < \min\{d,1/(2k)\}$ there exist $\xi >0$ and $C^{\ast} >0$ such that the following is true for every $p \geq C^{\ast} \left(\log n/n\right)^{1/2}$ and every $10\gamma^{-1}\le r \leq r_1$ provided that $n$ is large enough. Let $\Gamma$ be a graph on the vertex set $[n]$ and let $G=(V,E)\subseteqeq \Gamma$ be a (not necessarily spanning) subgraph with vertex partition $\mathcal V = \{V_{i,j}\}_{i\in[r],j\in[k]}$ that satisfies $n/(8kr) \leq |V_{i,j}| \leq 4n/(kr)$ for each $i\in[r]$, $j\in[k]$. Let $\{n_{i,j}\}_{i \in [r], j\in [k]}$ be an integer partition of $\sum_{i\in[r],j\in[k]} |V_{i,j}|$. Let $R^k_r$ be a graph on the vertex set $[r] \times [k]$ with minimum degree $\delta(R^k_r) \geq \big((k-1)/k+\gamma/2\big) kr$ such that $K^k_r \subseteqeq B^k_r \subseteqeq R^k_r$. Suppose that the partition $\mathcal V$ satisfies the following properties for each $i\in[r]$, each $j\neq j'\in[k]$, and each $v\in V$. \begin{enumerate}[label=\itmarab{B}] \item \label{lembalancing:sizes} We have $n_{i,j} - \xi n \leq |V_{i,j}| \leq n_{i,j} + \xi n$, \item \label{lembalancing:regular1} $\mathcal V$ is $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-lower-regular on $R^k_r$ and $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-super-regular on $K^k_r$, \item \label{lembalancing:inheritance1} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i,j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i,j'})\big)$ are $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-lower-regular pairs, and \item \label{lembalancing:gamma1} we have $|N_{\Gamma}(v,V_{i,j})| = \big(1 \pm \tfrac{\varepsilon}{4}\big)p|V_{i,j}|$. \end{enumerate} Then, there exists a partition $\mathcal{V'}= \{V'_{i,j}\}_{i\in[r],j\in[k]}$ of $V$ such that the following properties hold for each $i\in[r]$, each $j\neq j'\in [k]$, and each $v\in V$. 
\begin{enumerate}[label=\itmarabp{B}{'}] \item\label{lembalancing:sizesout} We have $|V'_{i,j}|=n_{i,j}$, \item\label{lembalancing:symd} We have $|V_{i,j}\triangle V'_{i,j}|\le 10^{-10}\varepsilon^4k^{-2}r_1^{-2} n$, \item \label{lembalancing:regular} $\mathcal{V'}$ is $(\varepsilon,d,p)_G$-lower-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item \label{lembalancing:inheritance} both $\big(N_{\Gamma}(v,V'_{i,j}), V'_{i,j'}\big)$ and $\big(N_{\Gamma}(v,V'_{i,j}), N_\Gamma(v,V'_{i,j'})\big)$ are $(\varepsilon,d,p)_G$-lower-regular pairs, and \item\label{lembalancing:gammaout} For each $1\le s\le\Delta$ and vertices $v_1,\ldots,v_s\in[n]$ we have \[\big|N_\Gamma(v_1,\ldots,v_s;V_{i,j})\triangle N_\Gamma(v_1,\ldots,v_s;V'_{i,j})\big|\le 10^{-10}\varepsilon^4k^{-2}r_1^{-2}\deg_\Gamma(v_1,\ldots,v_s)+C^{\ast}\log n\,.\] \end{enumerate} Furthermore, if for any two disjoint vertex sets $A,A'\subseteq V(\Gamma)$ with $|A|,|A'|\ge\tfrac{1}{50000kr_1}\varepsilon^2\xi pn$ we have $e_\Gamma(A,A')\le \big(1+\tfrac{1}{100}\varepsilon^2\xi\big)p|A||A'|$, and if `lower-regular' is replaced with `regular' in~\ref{lembalancing:regular1}, and~\ref{lembalancing:inheritance1}, then we can replace `lower-regular' with `regular' in~\ref{lembalancing:regular} and~\ref{lembalancing:inheritance}. \end{lemma} \section{The lemma for \texorpdfstring{$G$}{G}} \label{sec:prooflemG} In this section we prove the Lemma for~$G$ (Lemma~\ref{lem:G}), which borrows from the proof of \cite[Proposition~17]{bottcher2009proof} and from the proof of \cite[Lemma~9]{bottcher2013almost}. Our strategy is as follows. We first apply Lemma~\ref{lem:regularitylemma} to obtain an equitable partition of $V(G)$ within whose reduced graph we can find a backbone graph by Theorem~\ref{thm:bandwidth}. We let $Z_1$ be the vertices whose $\Gamma$-degrees are `wrong' to this partition, or whose neighbourhoods fail to inherit lower-regularity (plus a few extra to maintain $k$-equitability), and we remove the vertices $Z_1$. Now there may be some vertices in each cluster which destroy super-regularity on the clique factor of the backbone graph. We redistribute these, and the exceptional set of the regular partition, to other clusters. Now we would like to say we are finished, but the moving of vertices may have destroyed some of the regularity inheritance, $\Gamma$-neighbourhood, and super-regularity properties we tried to obtain. However, it is easy to check that a vertex only witnesses failure of these properties if exceptionally many of its $\Gamma$-neighbours were moved from or to a cluster. We let $Z_2$ be the set of all such vertices, and remove them. We will see that $Z_2$ is so small that its removal does not significantly affect the properties we want, so that we can set $V_0=Z_1\cup Z_2$ and we are done. \begin{proof}[Proof of Lemma~\ref{lem:G}] We first fix the constants in the proof. Given $\gamma>0$, $k\ge 2$ and $r_0\ge 1$, set $d=\tfrac{\gamma}{32}$. Let $\beta$ and $n_0$ be returned by Theorem~\ref{thm:bandwidth} for input $\tfrac12\gamma$, $3k$ and $k$. Let $r'_0=\max\{n_0,k/d,10k/\beta,r_0\}$. Given $\varepsilon\in\big(0,\tfrac{1}{2k}\big]$, let $0<\varepsilona\le 10^{-10}\varepsilon^2\gamma k^{-2}$ be small enough for both Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL} for input $\tfrac12\varepsilon$ and $d$. 
Let $C$ be large enough for these applications of Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and also large enough for Proposition~\ref{prop:chernoff} with input $\tfrac{1}{1000}\big(\tfrac{\varepsilona}{k}\big)^2$. Now Lemma~\ref{lem:regularitylemma}, with input $\tfrac{1}{k}\varepsilona$, $\tfrac{k-1}{k}+\gamma$ and $r'_0+k$, returns $r_1$. We set $C^{\ast}=1000k^3r_1^5C/(\varepsilona)^2$. Given $p\ge C^{\ast}\big(\tfrac{\log n}{n}\big)^{1/2}$, the random graph $G(n,p)$ a.a.s.\ satisfies the good events of Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and Proposition~\ref{prop:chernoff}. We condition on $\Gamma=G(n,p)$ satisfying these good events. Given $G\subseteq\Gamma$ with $\delta(G)\ge\big(\tfrac{k-1}{k}+\gamma\big)pn$, we apply Lemma~\ref{lem:regularitylemma}, with input $\tfrac{1}{k}\varepsilona$, $\tfrac{k-1}{k}+\gamma$, $r'_0+k$, and $d$, to $G$. We may do this because $G$ is a subgraph of $\Gamma$, and by choice of $C^{\ast}$ we have $Cp^{-1}\log n\le\tfrac{\varepsilona n}{kr_1}$, so that the condition of Lemma~\ref{lem:regularitylemma} is satisfied because the good event of Proposition~\ref{prop:chernoff} holds for $\Gamma$. The result is a $\big(\tfrac{1}{k}\varepsilona,p\big)$-lower-regular partition of $V(G)$ into $t'\in [r'_0+k, r_1]$ equally sized clusters, with exceptional set of size at most $\tfrac{1}{k}\varepsilona n$, whose $(\varepsilona,d,p)$-reduced graph has minimum degree at least $\big(\tfrac{k-1}{k}+\gamma-d-\tfrac{1}{k}\varepsilona\big)t'$. We remove at most $k-1$ of these clusters to the exceptional set, obtaining an $(\varepsilona,p)$-lower-regular partition $\mathcal U$ of $V(G)$ into $kr$ equally sized clusters, where $r'_0\le kr\le r_1$, with exceptional set $U_0$ of size at most $\varepsilona n$, whose $(\varepsilona,d,p)$-reduced graph $R^k_r$ has minimum degree at least $\big(\tfrac{k-1}{k}+\gamma-d-\tfrac{1}{k}\varepsilona\big)kr-k$. By choice of $d$ and $\varepsilona$, and by choice of $r'_0$, we have \[\Big(\frac{k-1}{k}+\gamma-d-\frac{1}{k}\varepsilona\Big)kr-k\ge\Big(\frac{k-1}{k}+\frac{\gamma}{2}\Big)kr\,.\] Observe that $B^k_r$ has bandwidth at most $2k<\beta r'_0$, and maximum degree less than $3k$. Thus Theorem~\ref{thm:bandwidth}, with input $\tfrac{\gamma}{2}$, $3k$, and $k$, in particular states that $R^k_r$ contains a copy of $B^k_r$. We fix one such copy. We let its vertices $\{(i,j)\}_{i\in[r],j\in[k]}$ label the vertices of $R^k_r$, and similarly let the cluster of $\mathcal U$ corresponding to the vertex $(i,j)$ of $B^k_r$ be $U_{i,j}$ for each $i\in[r]$ and $j\in[k]$. The partition $\mathcal U$ is equitable, and thus in particular $k$-equitable. We now create $Z_1$ as follows. We start with all vertices $v$ of $G$ for which there are $(i,j)$ and $(i',j')$ in $V(R^k_r)$, with $\{(i,j),(i',j')\}$ an edge of $R^k_r$, such that either $\big(N_\Gamma(v,U_{i,j}),U_{i',j'}\big)$ or $\big(N_\Gamma(v,U_{i,j}),N_\Gamma(v,U_{i',j'})\big)$ is not $\big(\tfrac{1}{2}\varepsilon,d,p\big)_G$-lower-regular. We add all vertices $v$ of $G$ for which there exists $U_{i,j}$ with $\deg_\Gamma(v,U_{i,j})\neq(1\pm\varepsilona)p|U_{i,j}|$, or for which $\deg_\Gamma(v,U_0)>2\varepsilona p n$. Finally we add a minimum number of vertices to obtain $k$-equitability of the sets $\big\{U_{i,j}\setminus Z_1\big\}_{i\in[r],j\in[k]}$. Note that we have $|U_{i,j}|\ge n/(2kr_1)$ for each $i,j$, and we can estimate the number of vertices with more than $2\varepsilona p n$ neighbours in $U_0$ by considering a superset of $U_0$ of size $\varepsilona n$. 
It follows that for each $i,j$ we have $\log(en/|U_{i,j}|),\log(en/|U_0|)\le\log(ekr_1/\varepsilona)$. By Lemma~\ref{lem:OSRIL} and Lemma~\ref{lem:TSRIL}, and Proposition~\ref{prop:chernoff}, we have \begin{equation}\label{eq:sizeZ1} |Z_1|\le 4kr_1^2 C\max\big\{p^{-2},p^{-1}\log (ekr_1/\varepsilona)\big\}\le 8k^2r_1^3Cp^{-2}/\varepsilona\le\frac{\varepsilona}{kr_1} n\,, \end{equation} where the factor $k$ accounts for vertices removed to maintain $k$-equitability. We now try to obtain super-regularity on the copy of $K^k_r$ in $B^k_r$. For each $i\in[r]$ and $j\in[k]$ let $W_{i,j}$ be the vertices of $U_{i,j}\setminus Z_1$ which have less than $(d-2\varepsilona)p|U_{i,j'}|$ neighbours in $U_{i,j'}$ for some $j'\neq j$. Because $(U_{i,j},U_{i,j'})$ is $(\varepsilona,d,p)$-lower-regular for each $i\in[r]$ and $j\neq j'\in[k]$, we have $|W_{i,j}|\le k\varepsilona|U_{i,j}|$ for each $i\in[r]$ and $j\in[k]$. Now let $W$ contain $U_0\setminus Z_1$ together with all $W_{i,j}$ for $i\in[r]$ and $j\in[k]$, and a minimum number of additional vertices from $V(G)\setminus Z_1$ to obtain $k$-equitability of the sets $\big\{U_{i,j}\setminus (Z_1\cup W)\big\}_{i\in[r],j\in[k]}$. By construction, we have $|W|\le\varepsilona n+kr\cdot k\varepsilona\tfrac{n}{kr}\le 2k\varepsilona n$. Given any $w\in W$, because $w\not\in Z_1$ we have \[\deg_\Gamma(w,U_0)\le 2\varepsilona pn\quad\text{and}\quad\deg_\Gamma(w,U_{i,j})\le(1+\varepsilona)p|U_{i,j}|\] for each $i\in[r]$ and $j\in[k]$. Now let us consider the edges of $G$ leaving $w$. At most $2\varepsilona pn$ of these go to $U_0$, and by definition at most $2dpn$ go to sets $U_{i,j}$ such that $\deg_G(w,U_{i,j})\le 2dp|U_{i,j}|$. Since $\deg_G(w)\ge\big(\tfrac{k-1}{k}+\gamma\big)pn$, at least $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)pn$ edges leaving $w$ go to sets $U_{i,j}$ with $\deg_G(w,U_{i,j})\ge 2dp|U_{i,j}|$. Since $|U_{i,j}|\le \tfrac{1}{kr}n$, in particular there are at least \[\frac{\Big(\frac{k-1}{k}+\frac{\gamma}{2}\Big)pn}{(1+\varepsilona)p\frac{n}{kr}}\ge \Big(\frac{k-1}{k}+\frac{\gamma}{4}\Big)kr\] sets $U_{i,j}$ with $i\in[r]$ and $j\in[k]$ such that $\deg_G(w,U_{i,j})\ge 2dp|U_{i,j}|$. It follows that there are at least $\tfrac{\gamma}{4}r$ indices $i\in[r]$ such that $\deg_G(w,U_{i,j})\ge 2dp|U_{i,j}|$ for each $j\in[k]$. We now assign to each $w\in W$ sequentially an index $c(w)\in[r]\times [k]$. For each $w$, we choose $c(w)=(i,j)$ as follows. The index $i$ is chosen minimal in $[r]$ such that $\deg_G(w,U_{i,j'})\ge 2dp|U_{i,j'}|$ for each $j'\in[k]$, but at most $\tfrac{100}{r}k\varepsilona \gamma^{-1} n$ vertices $w'\in W$ have so far been assigned $c(w')=(i,j')$ for any $j'\in[k]$. We choose $j\in[k]$ minimising the number of vertices $w'\in W$ with $c(w')=(i,j)$. Because $|W|\le 2k\varepsilona n$, this assignment is always possible: each $w$ has at least $\tfrac{\gamma}{4}r$ valid indices $i$, while at most $|W|\big/\big(\tfrac{100}{r}k\varepsilona\gamma^{-1}n\big)\le\tfrac{\gamma}{50}r$ indices $i$ can already have received the maximum number of vertices. Next, for each $i\in[r]$ and $j\in[k]$, we let $V'_{i,j}$ consist of $U_{i,j}\setminus (Z_1\cup W_{i,j})$, together with all $w\in W$ such that $c(w)=(i,j)$. By construction, we have \[|U_{i,j}\triangle V'_{i,j}|\le |Z_1|+|W_{i,j}|+\frac{100}{r}k\varepsilona \gamma^{-1}n\le 1000k^2\varepsilona\gamma^{-1}|U_{i,j}|\,.\] Finally, we let $Z_2$ be the vertices $v\in V(G)\setminus Z_1$ with $\deg_\Gamma(v,U_{i,j}\triangle V'_{i,j})\ge 2000k^2\varepsilona\gamma^{-1} p|U_{i,j}|$ for some $i\in[r]$ and $j\in[k]$, together with a minimum number of additional vertices of $V(G)\setminus Z_1$ to obtain $k$-equitability of the sets $V_{i,j}:=V'_{i,j}\setminus Z_2$. We set $V_0=Z_1\cup Z_2$.
We claim that $\mathcal V=\{V_{i,j}\}_{i\in[r],j\in[k]}$ is the desired partition of $V(G)\setminus V_0$. Note that the sets $V'_{i,j}$ and $V'_{i,j'}$ differ in size by at most one for any $i\in[r]$ and $j,j'\in[k]$, by our construction of the assignment $c$. We apply Proposition~\ref{prop:chernoff} to estimate the number of vertices $v\in V(G)\setminus Z_1$ with $\deg_\Gamma(v,U_{i,j}\triangle V'_{i,j})\ge 2000k^2\varepsilona\gamma^{-1} p|U_{i,j}|$ by considering a superset of $U_{i,j}\triangle V'_{i,j}$ of size $1000k^2\varepsilona\gamma^{-1}|U_{i,j}|\ge \varepsilona n/r_1$. By Proposition~\ref{prop:chernoff} we thus have \begin{equation}\label{eq:sizeZ2} |Z_2|\le r_1+Ckr_1p^{-1}\log (er_1/\varepsilona)\le 4Ckr_1^2p^{-1}/\varepsilona\le\frac{\varepsilona}{kr_1}pn\,. \end{equation} This gives \begin{equation}\label{eq:symUV} |U_{i,j}\triangle V_{i,j}|\le|U_{i,j}\triangle V'_{i,j}|+|Z_2|\le 2000k^2\varepsilona\gamma^{-1}|U_{i,j}|\,. \end{equation} Now given any $v\in V(G)\setminus V_0$, for each $i\in[r]$ and $j\in[k]$, because $v\not\in Z_2$ we have $\deg_\Gamma(v,U_{i,j}\triangle V'_{i,j})\le 2000k^2\varepsilona\gamma^{-1}p|U_{i,j}|$. We thus have \begin{equation}\label{eq:symdUV} \deg_\Gamma(v,U_{i,j}\triangle V_{i,j})\le 2000k^2\varepsilona\gamma^{-1}p|U_{i,j}|+|Z_2|\le 3000k^2\varepsilona\gamma^{-1}p|U_{i,j}|\,, \end{equation} and because $v\not\in Z_1$ we have $\deg_\Gamma(v,U_{i,j})=(1\pm\varepsilona)p|U_{i,j}|$, and hence by~\eqref{eq:symUV} \begin{equation}\label{eq:vU} \deg_\Gamma(v,V_{i,j})=\big(1\pm 10000k^2\varepsilona\gamma^{-1}\big)p|V_{i,j}|\,. \end{equation} Adding up~\eqref{eq:sizeZ1} and~\eqref{eq:sizeZ2}, we conclude \begin{equation}\label{eq:sizeV0} |V_0|\le 8k^2r_1^3Cp^{-2}/\varepsilona+4Ckr_1^2p^{-1}/\varepsilona\le C^{\ast} p^{-2}\,, \end{equation} as desired. The partition $\mathcal V=\{V_{i,j}\}_{i\in[r],j\in[k]}$ is by construction $k$-equitable, and the graph $R^k_r$ has minimum degree $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)kr$ as desired. For each $i\in[r]$ and $j\in[k]$ we have $|U_{i,j}|=(1\pm\varepsilona)\frac{n}{kr}$, and so~\eqref{eq:symUV} and our choice of $\varepsilona$ give~\ref{lemG:size}. Next, if $\{(i,j),(i',j')\}$ is an edge of $R^k_r$, then $G$ is $(\varepsilona,d,p)$-lower-regular on $(U_{i,j},U_{i',j'})$ by construction. By~\eqref{eq:symUV}, Proposition~\ref{prop:subpairs3}, and our choice of $\varepsilona$, $G$ is $(\varepsilon,d,p)$-lower-regular on $(V_{i,j},V_{i',j'})$. Given $i\in[r]$ and $j\neq j'\in[k]$, let $v$ be a vertex of $V_{i,j}$. Observe that since $v\in V_{i,j}$, either we have $v\in U_{i,j}$, in which case, since $v\not\in W$ we have $\deg_G(v,U_{i,j'})\ge (d-2\varepsilona)p|U_{i,j'}|$, or $v$ is in $W$ and has $c(v)=(i,j)$, in which case $\deg_G(v,U_{i,j'})\ge dp|U_{i,j'}|$. By~\eqref{eq:symUV} and~\eqref{eq:symdUV} we have \[\deg_G(v,V_{i,j'})\ge(d-2\varepsilona)p|U_{i,j'}|-3000k^2\varepsilona\gamma^{-1}p|U_{i,j'}|\ge(d-\varepsilon)p|V_{i,j'}|\,,\] giving~\ref{lemG:regular}. If $\{(i,j),(i',j')\}\in E(R^k_r)$, then for any $v\in V(G)\setminus V_0$, since $v\not\in Z_1$, the pairs $\big(N_\Gamma(v,U_{i,j}),U_{i',j'}\big)$ and $\big(N_\Gamma(v,U_{i,j}),N_\Gamma(v,U_{i',j'})\big)$ are $\big(\tfrac12\varepsilon,d,p\big)_G$-lower-regular. Using~\eqref{eq:symUV} and~\eqref{eq:symdUV}, Proposition~\ref{prop:subpairs3} and our choice of $\varepsilona$, we conclude~\ref{lemG:inheritance}. Finally,~\ref{lemG:gamma} follows from~\eqref{eq:vU} and our choice of $\varepsilona$. 
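For completeness, we record the short calculation behind~\eqref{eq:vU}. Since $v\notin Z_1$ we have $\deg_\Gamma(v,U_{i,j})=(1\pm\varepsilona)p|U_{i,j}|$, and by~\eqref{eq:symdUV} passing from $U_{i,j}$ to $V_{i,j}$ changes this degree by at most $3000k^2\varepsilona\gamma^{-1}p|U_{i,j}|$, so that
\[\deg_\Gamma(v,V_{i,j})=\big(1\pm(\varepsilona+3000k^2\varepsilona\gamma^{-1})\big)p|U_{i,j}|\,.\]
Moreover, \eqref{eq:symUV} gives $|U_{i,j}|=\big(1\pm 3000k^2\varepsilona\gamma^{-1}\big)|V_{i,j}|$, and combining the two estimates (using that $\varepsilona\le 10^{-10}\varepsilon^2\gamma k^{-2}$ is tiny) yields~\eqref{eq:vU}.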
Note that if we alter the definition of $Z_1$, removing the condition on $\big(N_\Gamma(v,U_{i,j}),N_\Gamma(v,U_{i',j'})\big)$, then we do not need to use Lemma~\ref{lem:TSRIL} and the bound in~\eqref{eq:sizeZ1} improves to $|Z_1|\le 8k^2r_1^3Cp^{-1}/\varepsilona$. Thus, if we only require~\ref{lemG:inheritancep}, we obtain $|V_0|\le C^{\ast} p^{-1}$ as claimed. \end{proof} \section{The lemma for \texorpdfstring{$H$}{H}}\label{sec:lemH} In this section we present the proof of Lemma~\ref{lem:H2}. The proof idea is as follows. First, given the zero-free labelling $\mathcal L$ and $(k+1)$-colouring $\sigma$ of $H$, we split $\mathcal L$ into the blocks of the definition of zero-freeness. We partition the blocks into $r$ `sections' of consecutive blocks, such that the $i$-th section contains about $\sum_{j\in[k]}m_{i,j}$ vertices, and furthermore such that the `boundary vertices', namely the first and last $\beta n$ vertices of each section, do not receive colour zero. Now it is easy to check that assigning the vertices of colour $j$ in the $i$-th section to $(i,j)$ for each $i\in[r]$ and $j\in[k]$, and the vertices of colour zero in the $i$-th section to $z_i$, is a graph homomorphism. However, it can be very unbalanced, since different colours in $[k]$ may be used with very different frequencies in each section. To fix this, we replace $\sigma$ with a new colouring $\sigma'$, which we obtain as follows. We partition each section into `intervals' of consecutive blocks, and for each interval except the last in each section, we pick a random permutation of $[k]$. We will show that there is a colouring $\sigma'$ such that all but the first few vertices of each interval are coloured according to the permutation applied to $\sigma$, with vertices of colour zero staying coloured zero. We use this colouring $\sigma'$ in place of $\sigma$ to define the mapping $f$. We let $X$ consist of all vertices at distance two or less from boundary vertices, from vertices near the start of an interval, or from colour zero vertices. To complete the proof, we show that so few vertices receive colour zero that they do not much affect the desired conclusions. Now the mapping $f$ is in expectation balanced, and using Lemma~\ref{lem:McDiarmid} we can show that it is also with high probability close to balanced. It is also easy to check that, since $H$ is $D$-degenerate, in the $i$-th section of $\mathcal L$ there are many vertices of degree at most $2D$. In expectation these are distributed about evenly over the $\big\{(i,j)\big\}_{j\in[k]}$ by $f$, and again McDiarmid's inequality shows that with high probability the same holds. These two observations give us~\ref{lemH:H1} and~\ref{lemH:H6}, while the other four desired conclusions hold by construction. \begin{proof}[Proof of Lemma~\ref{lem:H2}] For given $D\geq 1$, set $\alpha = 1/(24D)$. Let $k, r \geq 1$ and $\xi, \beta >0$ be given, where $\xi \leq 1/(kr)$ and $\beta \leq 10^{-10}\xi^2/(D k^4r)$. Let $H$ and $K^k_r \subseteqeq B^k_r \subseteqeq R^k_r$ be graphs as in the statement of the lemma. Let $\mathcal L$ be the given labelling of $V(H)$ of bandwidth at most $\beta n$. We denote the set of the first $\sqrt{\beta}n$ vertices of $\mathcal L$ by $F$. Let $\sigma: V(H) \to \{0,\ldots, k\}$ be the given proper $(k+1)$-colouring of $V(H)$ that is $(10/\xi, \beta)$-zero-free with respect to $\mathcal L$ and such that $\sigma(F) \subseteqeq [k]$.
Also, let $z_1, \ldots, z_r$ be vertices such that $z_i \in \big([r]\setminus\{i\}\big) \times [k]$ with $\big\{z_i, (i,j)\big\} \in E(R^k_r)$ for every $i\in [r]$ and $j\in [k]$. Finally, set $b=k/\sqrt{\beta}$. Let $\{m_{i,j}\}_{i\in[r],j\in[k]}$ be the given $k$-equitable integer partition of $n$ with $n/(10kr) \leq m_{i,j} \leq 10n/(kr)$ for every $i\in[r]$ and $j\in[k]$. Let us now introduce the notation that we use in this proof. Recall that for every $t\in \big[1/(4k\beta)\big]$ the $t$-th block is defined as \[B_t:= \{(t-1)4k\beta n +1, \ldots, t4k\beta n\}.\] Next we split the labelling $\mathcal L$ into $r$ \emph{sections}, where the first and the last block of each section are zero-free. Each section is partitioned into \emph{intervals}, each of which but possibly the last one consists of $b$ \emph{blocks}. Since $\sigma$ is $(10/\xi, \beta)$-zero-free with respect to $\mathcal L$, we can choose indices $0 = t_0 \leq t_1 \leq \ldots \leq t_{r-1} \leq t_r = 1/(4k\beta)$ such that $B_{t_i}$ and $B_{t_{i}+1}$ are zero-free blocks for every $i\in [r]$ and \[\sum_{t=1}^{t_i}|B_t| \leq \sum_{t=1}^{i} \sum_{j\in[k]}m_{t,j} < 12k\beta n + \sum_{t=1}^{t_i} |B_t|.\] Since $m_{i,j} \geq n/(10kr) > 12k\beta n$, the indices $t_0, \ldots, t_r$ are distinct. For every $i \in [r]$ we define the $i$-th section $S_i$ as \[\bigcup_{t = t_{i-1}+1}^{t_i} B_t.\] This means by the choice of the indices $t_0, \ldots, t_r$ that the first and last block of each section are zero-free. Since $\{m_{i,j}\}_{i\in[r],j\in[k]}$ is a $k$-equitable partition, we have in particular \begin{equation}\label{eq:mijSi} \frac{1}{k} (|S_i|- 12k\beta n) \leq m_{i,j} \leq \frac{1}{k} \big(|S_i| + 12 k \beta n\big). \end{equation} The last $\beta n$ vertices of the blocks $B_{t_i}$ and the first $\beta n$ vertices of the blocks $B_{t_i+1}$ are called \emph{boundary vertices} of $H$. Notice that colour zero is never assigned to boundary vertices by $\sigma$. For each $i\in [r]$, we split $S_i$ into $s_i:= \left\lceil (t_i-t_{i-1}-1)/b \right\rceil$ intervals, where each of the first $(s_i-1)$ intervals is the concatenation of exactly $b$ blocks and the last interval consists of $t_i-t_{i-1}-1 - b(s_i-1) \leq b$ blocks. Therefore, for every $i\in [r]$, we have \begin{equation}\label{eq:sizeSi} s_i(b-1)4k\beta n +1 \leq |S_i| \leq s_i b 4 k \beta n. \end{equation} Using Equation~\eqref{eq:mijSi}, $b= k/\sqrt{\beta}$, and $n/(10kr)\leq m_{i,j} \leq 10n/(kr)$ we get, for every $i\in[r]$, the following bounds on $s_i$: \begin{equation*}\label{eq:si} \frac{1}{100rk^2\sqrt{\beta}} \leq s_i \leq \frac{10}{rk^2\sqrt{\beta}}. \end{equation*} We denote the intervals of the $i$-th section by $I_{i,1}, \ldots, I_{i,s_i}$. Let $B^{\mathrm{sw}}_{i,\ell}$ denote the union of the first two blocks of the interval $I_{i,\ell}$. All of these sets but $B^{\mathrm{sw}}_{i,1}$ and $B^{\mathrm{sw}}_{i,s_i}$ will be used to switch colours within parts of $H$. Notice that we have $|B^{\mathrm{sw}}_{i,\ell}| = 8k \beta n$ and, since $\sigma$ is $(10/\xi, \beta)$-zero-free with respect to $\mathcal L$, at least one of the two blocks of $B^{\mathrm{sw}}_{i,\ell}$ is zero-free. We will not use $B^{\mathrm{sw}}_{i,1}$ and $B^{\mathrm{sw}}_{i,s_i}$ to switch colours because we will need that the boundary vertices do not receive colour zero. For every $i\in [r]$ and every $\ell \in \{2,\ldots, s_i-1\}$, we choose a permutation $\pi_{i,\ell}: [k] \to [k]$ uniformly at random.
The next claim ensures that we can use zero-free blocks to obtain a proper colouring of the vertex set such that vertices before the switching block are coloured according to the original colouring and the colours of the vertices after the switching block are permuted as wished. A proof can be found in \cite{BoeTarWue}. \begin{claim}[\cite{BoeTarWue}] \label{claim:switching} Let $\sigma: [n] \to \{0,\ldots, k\}$ be a proper $(k+1)$-colouring of $H$, let $B_t$ be a zero-free block and let $\pi$ be any permutation of $[k]$. Then there exists a proper $(k+1)$-colouring $\sigma'$ of $H$ with $\sigma'(x) = \sigma(x)$ for all $x\in \bigcup_{i<t} B_i$ and \[\sigma'(x) = \begin{cases} \pi(\sigma(x)) & \text{if } \sigma(x) \neq 0 \\ 0 & \text{otherwise} \end{cases}\] for all $x\in \bigcup_{i>t}B_i$. \end{claim} We use Claim~\ref{claim:switching} to switch colours at the beginning of each interval except for the first and last interval of each section. More precisely, we switch colours within the sets $B^{\mathrm{sw}}_{i,\ell}$ so that the colouring of the remaining vertices in the interval $I_{i,\ell}$ matches $\pi_{i, \ell}$. Note that we can indeed use $B^{\mathrm{sw}}_{i,\ell}$ to do the switching since one of the two blocks in $B^{\mathrm{sw}}_{i,\ell}$ is zero-free. In particular, we get a proper $(k+1)$-colouring $\sigma'= \sigma'\big(\pi_{1,2}, \ldots, \pi_{r,s_r-1}\big): V(H) \to \{0, \ldots, k\}$ of $H$ that fulfils the following. For every $x\in I_{1,1}$ we have \[\sigma'(x) = \sigma(x),\] for each $i \in [r]$ and $\ell \in \{2,\ldots, s_i-1\}$ and every $x\in I_{i,\ell} \setminus B^{\mathrm{sw}}_{i,\ell}$ we have that \[\sigma'(x) = \begin{cases} \pi_{i,\ell}\big(\sigma(x)\big) & \text{if } \sigma(x) \neq 0 \\ 0 & \text{otherwise} \end{cases}\] and for each $i\in [r]$ and every $x\in I_{i,s_{i}}\cup I_{i+1,1}$ (where $I_{r+1,1}:= \varnothing$) we have that \[\sigma'(x) = \begin{cases} \pi_{i, s_i-1}\big(\sigma(x)\big) & \text{if } \sigma(x) \neq 0 \\ 0 & \text{otherwise.} \end{cases}\] While $\sigma'$ is well-defined on the sets $B^{\mathrm{sw}}_{1,2}, \ldots, B^{\mathrm{sw}}_{r, s_r-1}$ by Claim~\ref{claim:switching}, the definition on these sets is rather complicated as it depends on which of the two blocks in $B^{\mathrm{sw}}_{i,\ell}$ is zero-free and on the colourings before and after the switching. However, the precise definition on these sets is not important for the remainder of the proof. Hence, we omit it here. Observe that $\sigma'$ never assigns colour zero to boundary vertices. Using $\sigma'$ we now define $f= f\big(\pi_{1,2}, \ldots, \pi_{r,s_r-1}\big): V(H) \to [r] \times [k]$ as follows. For each $i\in [r]$ and $x\in S_i$ we set \[f(x):= \begin{cases} \big(i, \sigma'(x)\big) & \text{if } \sigma'(x) \neq 0 \\ z_i & \text{otherwise},\end{cases}\] where $z_i \in \big([r]\setminus\{i\}\big) \times [k]$ is the vertex defined in the statement of the lemma. Let $X$ consist of all vertices at distance two or less from a boundary vertex of $\mathcal L$, from a vertex in any $B^{\mathrm{sw}}_{i,\ell}$, or from a colour zero vertex. We now show that $f$ and $X$ satisfy Properties~\ref{lemH:H2}--\ref{lemH:H5} with probability 1 and Properties~\ref{lemH:H1} and~\ref{lemH:H6} with positive probability. In particular, this implies that the desired $f$ and $X$ exist. We start with Property~\ref{lemH:H1}.
For each $i\in [r]$ let \[S^\ast_i:=S_i \setminus \left( \bigcup_{ \ell \in [s_i]} B^{\mathrm{sw}}_{i,\ell} \cup I_{i,1} \cup I_{i,s_i} \right)\] be the set of all vertices in $S_i$ except for the first and last interval and the first two blocks of each interval of $S_i$. We will also make use of the following restricted function \[f^\ast = f^\ast\big(\pi_{1,2},\ldots, \pi_{r,s_r-1}\big):= \restr{f}{\bigcup_{i\in[r]} S^\ast_i }.\] The basic idea of the proof of Property~\ref{lemH:H1} is to determine bounds on $|{f^\ast}^{-1}(i,j)|$ that hold with positive probability and then deduce the desired bounds on $|f^{-1}(i,j)|$. Since the permutations $\pi_{i,\ell}$ were chosen uniformly at random, we have by definition of $f^\ast$ that the expected number of vertices mapped to $(i,j)\in [r]\times[k]$ by $f^\ast$ is \begin{multline*} \mathbb{E}\big[|{f^{\ast}}^{-1}(i,j)|\big] = \frac{1}{k} \Big[ (s_i-2) (b-2) 4k \beta n -\big|\{x \in S^\ast_{i}: \sigma(x) = 0 \}\big| \Big] \\ + \big|\bigcup_{\iota\in [r]\setminus \{i\}} \{x\in S^\ast_{\iota}: \sigma(x) = 0 \text{ and } z_{\iota} = (i,j) \}\big|\,. \end{multline*} In particular, the following bounds on the expected value of $|{f^{\ast}}^{-1}(i,j)|$ hold. \begin{equation}\label{eq:fastijlow} \mathbb{E}\big[|{f^{\ast}}^{-1}(i,j)|\big] \leq (s_i-2) (b-2) 4 \beta n + \frac{\xi}{10} n \end{equation} and \begin{equation}\label{eq:fastijup} \mathbb{E}\big[|{f^{\ast}}^{-1}(i,j)|\big] \geq (1- \xi/ 10) (s_i-2) (b-2) 4 \beta n \geq (s_i-2) (b-2) 4 \beta n - \frac{\xi}{10}n. \end{equation} If one replaced a permutation $\pi_{i,\ell}$ by some other permutation $\tilde{\pi}: [k] \to [k]$, then $|{f^\ast}^{-1}(i,j)|$ would change by at most $(b-2) 4k\beta n$. Hence, by McDiarmid's Inequality (Lemma~\ref{lem:McDiarmid}) we have \begin{multline} \mathbb{P}\left[\big| (s_i-2) (b-2) 4 \beta n- |{f^\ast}^{-1}(i,j)|\big| \geq \frac{\xi}{5} n \right] \overset{\eqref{eq:fastijlow}, \eqref{eq:fastijup}}\leq \\ \mathbb{P}\left[\big| \mathbb{E}\big[|{f^\ast}^{-1}(i,j)|\big] - |{f^\ast}^{-1}(i,j)|\big| \geq \frac{\xi}{10} n \right] \leq 2\exp\left\{-\frac{\xi^2 n^2}{50 (s_i-2) \big((b-2)4k\beta n\big)^2}\right\}. \end{multline} Taking the union bound over all $j\in [k]$ and using $s_i\leq 10/(rk^2\sqrt{\beta})$ and $b=k/\sqrt{\beta}$ as well as $\beta \leq 10^{-10}\xi^2/(D k^4r)$ yields \[ \mathbb{P}\left[\big| (s_i-2) (b-2) 4 \beta n- |{f^\ast}^{-1}(i,j)|\big| \geq \frac{\xi}{5} n \text{ for some } j\in [k] \right] \leq 2k \exp\left\{-\frac{\xi^2r}{8000k^2\sqrt{\beta}}\right\}\leq 2k e^{-k} < 1 .\] Observe that $|{f^\ast}^{-1}(i,j)|$ is independent of the choices for $\pi_{i',\ell}$ if $i' \neq i$. Hence, with positive probability we have, for every $i\in [r]$ and $j\in [k]$, that \[(s_i-2) (b-2) 4 \beta n - \frac{\xi}{5}n \leq |{f^\ast}^{-1}(i,j)| \leq (s_i-2) (b-2) 4 \beta n + \frac{\xi}{5} n. \] From the definition of $f^\ast$ it follows that $|f^{-1}(i,j)|\geq |{f^\ast}^{-1}(i,j)|$ and \[|f^{-1}(i,j)| \leq |{f^\ast}^{-1}(i,j)| + |I_{i,1}| + |I_{i,s_i}| + \sum_{\ell=2}^{s_i-1} |B^{\mathrm{sw}}_{i,\ell}|+ \Big|\big\{x \in\hspace{-2mm} \bigcup_{\iota \in [r] \setminus \{i\}} S_{\iota}\setminus S^\ast_{\iota}: \sigma'(x) = 0 \text{ and } z_{\iota}=(i,j) \big\}\Big|.
\] Using $s_i\leq 10/(rk^2\sqrt{\beta})$ and $b=k/\sqrt{\beta}$ and $\beta \leq 10^{-10}\xi^2/(D k^4r)$, with positive probability we have for every $i\in[r]$ and $j\in[k]$ that \begin{align*} |f^{-1}(i,j)| &\geq |{f^\ast}^{-1}(i,j)| \geq (s_i-2) (b-2) 4 \beta n - \frac{\xi}{5} n\\ &\geq (s_i-2)(b-2) 4 \beta n - \frac{\xi}{5} n + \left(8 (s_i + b) \beta n - \frac{4}{5}\xi n\right)\\ &\geq s_ib4\beta n + 16 \beta n - \xi n\\ & \overset{\eqref{eq:sizeSi}}\geq \frac{1}{k}\big(|S_i| + 16 k \beta n\big) - \xi n \overset{\eqref{eq:mijSi}}\geq m_{i,j} - \xi n. \end{align*} On the other hand, \begin{align*} |f^{-1}(i,j)| &\leq |{f^\ast}^{-1}(i,j)| + |I_{i,1}| + |I_{i,s_i}| + \sum_{\ell=2}^{s_i-1} |B^{\mathrm{sw}}_{i,\ell}| \\&\,\,\quad+ \Big|\big\{x \in \bigcup_{\iota \in [r] \setminus \{i\}} S_{\iota}\setminus S^\ast_{\iota}: \sigma'(x) = 0 \text{ and } z_{\iota}=(i,j) \big\}\Big|\\& \leq (s_i-2) (b-2) 4 \beta n + \frac{\xi}{5} n +8bk\beta n+ (s_i-2)8k\beta n+ \frac{\xi}{10}n \\& \leq \frac{1}{k}\big((s_i-2)(b-2)4k\beta n \big) + \xi n\\& \leq \frac{1}{k} (|S_i|-12k\beta n) + \xi n \overset{\eqref{eq:mijSi}}\leq m_{i,j}+\xi n, \end{align*} which shows that Property~\ref{lemH:H1} holds with positive probability. By definition of $X$, since $\mathcal L$ is a $\beta n$-bandwidth ordering, any vertex in $X$ is at distance at most $2\beta n$ in $\mathcal L$ from a boundary vertex, a vertex of some $B^{\mathrm{sw}}_{i,\ell}$, or from a vertex assigned colour zero. Because there are $r$ sections, the boundary vertices form $r-1$ intervals each of length $2\beta n$, and so at most $6r\beta n$ vertices of $H$ are at distance $2$ or less from a boundary vertex. There are $\sum_{i\in[r]} s_i$ intervals and hence $\sum_{i\in[r]} s_i$ switching blocks each of size $8k\beta n$. As $s_i \leq 10/(rk^2\sqrt{\beta})$ for every $i\in[r]$, there are at most $(4+8k)\beta n\cdot 10/(k^2\sqrt{\beta})$ vertices at distance 2 or less from a vertex of some switching block. Similarly, because $\sigma$ is $(10/\xi,\beta)$-zero-free with respect to $\mathcal L$, in any consecutive $10/\xi$ blocks at most one contains vertices of colour zero, and hence at most $(8+4k)\beta n$ vertices in any such $10/\xi$ consecutive blocks are at distance $2$ or less from a vertex of colour zero. Thus we have \[|X|\le 6r\beta n+(4+8k)\beta n\left(\frac{10}{k^2\sqrt{\beta}}\right)+(8+4k)\beta n\left(\frac{n}{4k\beta n\cdot 10/\xi}+1\right)\le 6r\beta n+\frac{1}{4}\xi n+\frac{1}{3}\xi n\le\xi n\,,\] which gives~\ref{lemH:H2}. Since $\sigma'$ is a proper colouring, and boundary vertices are not adjacent to colour zero vertices, by definition, $f$ restricted to the boundary vertices is a graph homomorphism to $B^k_r$. On the other hand, on each section $S_i$, again since $\sigma'$ is a proper colouring and since $\big\{(i,j)\big\}_{j\in[k]}\cup\{z_i\}$ forms a clique in $R^k_r$, $f$ is a graph homomorphism to $R^k_r$. Since $\mathcal L$ is a $\beta n$-bandwidth ordering, any edge of $H$ is either contained in a section or goes between two boundary vertices, and we conclude that $f$ is a graph homomorphism from $H$ to $R^k_r$, giving~\ref{lemH:H3}. Now, given $i\in[r]$ and $j\in[k]$, and $x\in f^{-1}(i,j)\setminus X$, if $\{x,y\}$ and $\{y,z\}$ are edges of $H$, then $y$ and $z$ are at distance two or less from $x$ in $H$. In particular, by definition of $X$ neither $y$ nor $z$ is a boundary vertex, lies in any $B^{\mathrm{sw}}_{i,\ell}$, or is assigned colour zero.
Since boundary vertices appear in intervals of length $2\beta n$ in $\mathcal L$, and $\mathcal L$ is a $\beta n$-bandwidth ordering, it follows that $y$ and $z$ are both in $S_i$. Furthermore, suppose $x\in I_{i,\ell}$ for some $\ell$. By definition $x\not\in B^{\mathrm{sw}}_{i,\ell}$. Because $B^{\mathrm{sw}}_{i,\ell}$ and $B^{\mathrm{sw}}_{i,\ell+1}$ (if the latter exists) are intervals of length $8k\beta n$, both $y$ and $z$ are also in $I_{i,\ell}\setminus B^{\mathrm{sw}}_{i,\ell}$, and in particular both $y$ and $z$ are in $\bigcup_{j'\in[k]}f^{-1}(i,j')$, giving~\ref{lemH:H4}. Since $\sqrt{\beta}n \leq b4k\beta n \leq |I_{1,1}|$ and $\sigma'(x) \neq 0$ for each $x$ in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$, it follows directly from the definition of $f$ that $f(x) = \big(1,\sigma(x)\big)$, which shows Property~\ref{lemH:H5}. Finally, we show that Property~\ref{lemH:H6} holds with positive probability. Let $i\in[r]$ and $j\in[k]$. We define the random variable $\mathcal E_{i,j} := |\{x \in {f^\ast}^{-1}(i,j): \deg(x) \leq 2 D\}|$. Since $H$ is $D$-degenerate and $\mathcal L$ is a labelling of bandwidth at most $\beta n$ we have \[e\big(S_i^\ast, V(H)\big) \leq D |S_i^\ast| + D 4\beta n \leq D \big(1+ 1/(4D)\big) |S_i^\ast|.\] Hence, it must hold that $|\{x\in S_i^\ast: \deg(x) \geq 2D + 1\}| (2D + 1) \leq 2 D \big(1+1/(4D)\big) |S_i^\ast|.$ This yields $|\{x\in S_i^\ast: \deg(x) \leq 2D\}| \geq |S_i^\ast|/(6D)$ and therefore \[\mathbb{E}[\mathcal E_{i,j}] \geq \frac{1}{6kD} |S_i^\ast| \geq \frac{1}{6D} (s_i-2) (b-2)4\beta n.\] By applying Chernoff's Inequality (Theorem~\ref{thm:chernoff}) and using Equations~\eqref{eq:mijSi} and~\eqref{eq:sizeSi} as well as $\alpha= 1/(24D)$ we get \begin{align*} &\mathbb{P}\Big[\big|\{x \in f^{-1}(i,j):\deg(x) \leq 2 D \}\big| < \alpha |f^{-1}(i,j)|\Big] \overset{\ref{lemH:H1}}\leq \mathbb{P}\Big[\mathcal E_{i,j} < \alpha (s_i b 4 \beta n + 2\xi n)\Big]\\ &\leq \mathbb{P}\Big[\mathcal E_{i,j} < 2 \alpha \big((s_i-2)(b-2) 4 \beta n\big)\Big] \leq \mathbb{P}\Big[\mathcal E_{i,j} < \frac{1}{2}\mathbb{E}[\mathcal E_{i,j}]\Big] < 2 \exp\left\{- \frac{(s_i-2)(b-2)4\beta n}{72}\right\}. \end{align*} Taking the union bound over all $i\in[r]$ and $j\in [k]$ yields that Property~\ref{lemH:H6} holds with positive probability. \end{proof} \section{The common neighbourhood lemma} In order to prove Lemma~\ref{lem:common} we need the following version of the Sparse Regularity Lemma, allowing for a partition equitably refining an initial partition with parts of very different sizes. Given a partition $V(G)=V_1\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}}\dots\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}} V_s$, we say a partition $\{V_{i,j}\}_{i\in[s],j\in[t]}$ is an equitable $(\varepsilon,p)$-regular refinement of $\{V_i\}_{i\in[s]}$ if $|V_{i,j}|=|V_{i,j'}|\pm 1$ for each $i\in[s]$ and $j,j'\in[t]$, and there are at most $\varepsilon s^2t^2$ pairs $(V_{i,j},V_{i',j'})$ which are not $(\varepsilon,0,p)$-regular. \begin{lemma} \label{lem:SRLb} For each $\varepsilon>0$ and $s\in\mathbb{N}$ there exists $t_1\geq 1$ such that the following holds. Given any graph $G$, suppose $V_1\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}}\dots\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}} V_s$ is a partition of $V(G)$. Suppose that $e(V_i)\le 3p|V_i|^2$ for each $i\in[s]$, and $e(V_i,V_{i'})\le 2p|V_i||V_{i'}|$ for each $i\neq i'\in[s]$.
Then there exist sets $V_{i,0}\subseteq V_i$ for each $i\in[s]$ with $|V_{i,0}|<\varepsilon|V_i|$, and an equitable $(\varepsilon,p)$-regular refinement $\{V_{i,j}\}_{i\in[s],j\in[t]}$ of $\{V_i\setminus V_{i,0}\}_{i\in[s]}$ for some $t\le t_1$. \end{lemma} The proof is standard, following Scott's method~\cite{Scott}. We defer it to Appendix~\ref{app:tools}. To prove Lemma~\ref{lem:common}, we work as follows. First, we choose a regularity parameter $\varepsilonaa_0$ and apply Lemma~\ref{lem:SRLb} with $\varepsilonaa_0$ and the initial partition $V_1\setminus W,\dots,V_k\setminus W,W$. From this partition, all we need is a part $W'\subseteq W$ and parts $V'_i\subseteq V_i\setminus W$ for each $i\in[k]$, such that each pair $(W',V'_i)$ is $(\varepsilonaa_0,d/2,p)$-lower-regular, which we find by averaging. We now choose our vertices $w_1,\dots,w_\Delta$ sequentially (in Claim~\ref{claim:common}), such that the desired~\ref{cnl:Gsize}--\ref{cnl:Nreg} hold for all subsets of the so far chosen vertices at each stage. This is in spirit very much like the usual dense case `Key Lemma' sequential embedding of vertices using regularity, but in the sparse setting here we need to work somewhat harder and use the regularity inheritance lemmas to show that we can choose vertices which give us lower-regular pairs for future embedding (rather than this being automatic from the slicing lemma, as it is in the dense case). Thus, the proof mainly amounts to showing that the number of vertices which break one of the desired properties and which we therefore cannot choose is always much smaller than $|W'|$. In order to show this for~\ref{cnl:Gsize} we need to maintain some extra properties, specifically sizes of $G$- and $\Gamma$-neighbourhoods of chosen vertices within each $V'_i$, and that these $\Gamma$-neighbourhoods of chosen vertices in each $V'_i$ form lower-regular pairs with $W'$. Note that the way we choose our various regularity parameters amounts to ensuring that, even after $\Delta-1$ successive applications of regularity inheritance lemmas, we still have sufficient regularity for our argument. Furthermore, it is important to note that the choice of $\varepsilonaa_0$ does not have anything to do with $\varepsilona$ or $\varepsilon_0$; rather, it affects only the returned value of $\alpha$. \begin{proof}[Proof of Lemma~\ref{lem:common}] First we fix all constants that we need throughout the proof. Given $d>0$, $k\geq 1$, and $\Delta\geq 2$, let $\varepsilonaa_{\Delta}:=8^{-\Delta}\frac{1}{(k+1)^2}\left(\frac d 8\right)^{\Delta}$. Now, for each $j=1,\dots,\Delta$ sequentially, choose $\varepsilonaa_{\Delta-j}\le\varepsilonaa_{\Delta-j+1}$ not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:OSRIL} for input $\varepsilonaa_{\Delta-j+1}$ and $\tfrac{d}{2}$. Now, Lemma~\ref{lem:SRLb} with input $\varepsilonaa_0$ and $s=k+1$ returns $t_1\ge 1$. We set \[\alpha:=\frac{1}{2t_1}\Big(\frac{d}{4}\Big)^\Delta\,.\] Next, given $\varepsilona>0$, let $\varepsilona_{\Delta-1,\Delta-1}:=\varepsilona$, and let $\varepsilona_{j,\Delta}=\varepsilona_{\Delta,j}=1$ for each $1\le j\le\Delta$.
For each $(j,j')\in[\Delta]^2\setminus\{(1,1)\}$ in lexicographic order sequentially, we choose \[\varepsilona_{\Delta-j,\Delta-j'}\le\min\{\varepsilona_{\Delta-j+1,\Delta-j'},\varepsilona_{\Delta-j,\Delta-j'+1},\varepsilona_{\Delta-j+1,\Delta-j'+1}\}\] not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:OSRIL} for both input $\varepsilona_{\Delta-j+1,\Delta-j'}$ and $d$, and for input $\varepsilona_{\Delta-j,\Delta-j'+1}$ and $d$, and not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:TSRIL} for input $\varepsilona_{\Delta-j+1,\Delta-j'+1}$ and $d$. We choose $\varepsilon_0$ small enough such that $(1+\varepsilon_0)^{\Delta} \leq 1+\varepsilona$ and $(1-\varepsilon_0)^{\Delta} \geq 1-\varepsilona$. Given $r\ge 1$ and $\varepsilon$ with $0<\varepsilon\le\varepsilon_0$, suppose that $C$ is large enough for each of these calls to Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and for Proposition~\ref{prop:chernoff} with input $\varepsilon_0$. Finally, we set \[C^{\ast}= 10^{12}k^4t_1r^4\varepsilon^{-4}2^{2\Delta}C\,.\] Given $p\ge C^{\ast}\big(\tfrac{\log n}{n}\big)^{1/\Delta}$, a.a.s.\ the good events of each of the above calls to Lemma~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and to Proposition~\ref{prop:chernoff} and Lemma~\ref{lem:SRLb}, occur. We condition from now on upon these events occurring for $\Gamma=G(n,p)$. Let $G=(V,E)$ be a subgraph of $\Gamma$. Suppose $\{V_i\}_{i\in[k]}$ and $W$ satisfy the conditions of the lemma. We first apply Lemma~\ref{lem:SRLb}, with the promised input parameters $\varepsilonaa_0$ and $s=k+1$, to $G[V_1\cup\dots\cup V_k\cup W]$, with input partition $\{V_i\setminus W\}_{i\in[k]}\cup\{W\}$. We can do this because $Cp^{-1}\log n<10^{-10}\frac{\varepsilon^4 pn}{k^4r^4}$, so that the good event of Proposition~\ref{prop:chernoff} guarantees that the conditions of Lemma~\ref{lem:SRLb} are satisfied. This returns a partition refining each set of $\{V_i\setminus W\}_{i\in[k]}\cup\{W\}$ into $1\le t\le t_1$ clusters together with a small exceptional set. Let $W'\subseteq W$ be a cluster which is in at most $2k\varepsilonaa_0 t$ pairs with clusters in $\big(V_1\cup\dots\cup V_k\big)\setminus W$ which are not $(\varepsilonaa_0,p)_G$-lower-regular. Such a cluster exists by averaging. By Proposition~\ref{prop:chernoff} and~\ref{cnl:bal}, at most $4(k+1)\varepsilonaa_0 p \tfrac{4n}{r}|W'|$ edges lie in the pairs between $W'$ and the $V_i$ which are not lower-regular, and by Proposition~\ref{prop:chernoff} and~\ref{cnl:W} at most $2p|W||W'|<\varepsilonaa_0 p\tfrac{n}{r}|W'|$ edges leaving $W'$ lie in $W$. By~\ref{cnl:Wdeg}, for each $i\in[k]$ each $w\in W'$ has at least $dp|V_i|$ neighbours in $V_i$, and hence there are at least $\tfrac{dp}{2}|V_i||W'|$ edges from $W'$ to $V_i\setminus W$ which lie in $(\varepsilonaa_0,p)_G$-lower-regular pairs. By averaging, for each $i\in[k]$ there exists a cluster $V'_i$ of the partition such that $(W',V_i')$ is $(\varepsilonaa_0, d/2, p)_G$-lower-regular. For the remainder of the proof, we will only need these $k+1$ clusters from the partition. Notice that for every $i\in[k]$ we have \[|V_i| \geq |V_i'| \geq \frac{n}{8kt_1r} \geq \frac{1}{8kt_1r} (C^{\ast})^{2}p^{-2}\log n \geq C^{\ast} p^{-2} \log n\] and \begin{equation} \label{eq:sizeW} |W'| \geq 10^{-11}\frac{\varepsilon^4 pn}{t_1k^4r^4} \geq 10^{-11}\frac{\varepsilon^4}{t_1k^4r^4}(C^{\ast})^{2}p^{-1}\log n \geq C^{\ast} p^{-1} \log n \end{equation} both by the choice of $C^{\ast}$ and $p$. 
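For concreteness, let us spell out why these two displayed bounds follow from the choice of $C^{\ast}$ and $p$. Since $\Delta\ge 2$ and $p\ge C^{\ast}\big(\tfrac{\log n}{n}\big)^{1/\Delta}$ we have
\[p^2n\ge (C^{\ast})^2(\log n)^{2/\Delta}n^{1-2/\Delta}\ge (C^{\ast})^2\log n\,,\]
that is, $n\ge (C^{\ast})^{2}p^{-2}\log n$ and, multiplying by $p$, also $pn\ge (C^{\ast})^{2}p^{-1}\log n$; these are precisely the estimates substituted in the two displays, and the remaining factors are absorbed since $C^{\ast}\ge 8kt_1r$ and $10^{-11}\varepsilon^4C^{\ast}/(t_1k^4r^4)\ge 1$ by the definition of $C^{\ast}$.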
We choose the $\Delta$-tuple $(w_1,\dots,w_\Delta)$ inductively, using the following claim. \begin{claim} \label{claim:common} For each $0\le \ell\le\Delta$ there exists an $\ell$-tuple $(w_1,\ldots,w_\ell) \in \binom{W'}{\ell}$ such that the following holds. For every $\Lambda, \Lambda^\ast \subseteqeq [\ell]$, and every $i \neq i' \in [k]$ we have \begin{enumerate}[label=\itmarab{L}] \item\label{cnl:cl:Wreg} $\big(\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i'),W'\big)$ is $(\varepsilonaa_{|\Lambda|},\frac d 2, p)_G$-lower-regular if $|\Lambda|< \Delta$, \item\label{cnl:cl:NGVp} $|\bigcap_{j\in\Lambda} N_G(w_j,V_i')| \geq \big(\frac{d}{4}\big)^{|\Lambda|} p^{|\Lambda|}|V_i'|$, \item\label{cnl:cl:NGa} $|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j)| \leq (1+\varepsilon_0)^{|\Lambda|}p^{|\Lambda|} n$, \item\label{cnl:cl:NGaVp} $|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i')| = (1\pm\varepsilon_0)^{|\Lambda|}p^{|\Lambda|} |V_i'|$, \item\label{cnl:cl:NGaV} $|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i)| = (1\pm\varepsilon_0)^{|\Lambda|}p^{|\Lambda|} |V_i|$, and \item\label{cnl:cl:Vreg} $\big(\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j^\ast\in\Lambda^\ast}N_{\Gamma}(w_{j^\ast},V_{i'})\big)$ is $(\varepsilona_{|\Lambda|,|\Lambda^\ast|},d,p)_G$-lower-regular if\\ $|\Lambda|,|\Lambda^\ast| < \Delta$ and either $\Delta \geq 3$ or $\Lambda \cap \Lambda^\ast = \varnothing$ or both. \end{enumerate} \end{claim} We prove this claim by induction on $\ell$. Recall that if $\Lambda=\emptyset$ then $\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i')$ is by definition equal to $V'_i$, and that $[0]=\emptyset$. \begin{claimproof}[Proof of Claim~\ref{claim:common}] For the base case $\ell=0$, observe that~\ref{cnl:cl:Wreg} follows from our choice of $W'$ and the $V_i'$. For every $i\neq j\in [k]$, the pair $(V_i,V_j)$ is $(\varepsilon,d,p)_G$-lower-regular by~\ref{cnl:Vreg}, and since $\varepsilon\le\varepsilona_{0,0}$ this gives~\ref{cnl:cl:Vreg}. The remaining four properties~\ref{cnl:cl:NGVp},~\ref{cnl:cl:NGa},~\ref{cnl:cl:NGaVp} and~\ref{cnl:cl:NGaV} are tautologies for $\ell=0$. For the inductive step, suppose that for some $0\le\ell<\Delta$ there exists an $\ell$-tuple $(w_1, \ldots, w_{\ell}) \in \binom{W'}{\ell}$ satisfying~\ref{cnl:cl:Wreg}--\ref{cnl:cl:Vreg}. We now find a vertex $w_{\ell+1} \in W'$ such that the $(\ell+1)$-tuple $(w_1, \ldots, w_{\ell+1})$ still satisfies~\ref{cnl:cl:Wreg}--\ref{cnl:cl:Vreg}. We do this by determining, for each of these six conditions, an upper bound on the number of vertices in $W'$ that violate it, and show that the sum of these upper bounds is less than $|W'|-\ell$. Suppose $\Lambda \subseteqeq [\ell]$ satisfies $|\Lambda| < \Delta -1$. By the choice of $C$ and $p$ we have for every $i \in [k]$ \begin{equation}\label{eq:common:sizeNGa} \big|\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i')\big|\geBy{\ref{cnl:cl:NGaVp}} (1-\varepsilon_0)^{|\Lambda|}p^{|\Lambda|}|V_i'| \overset{|\Lambda| < \Delta-1}\geq (1-\varepsilon_0)^{\Delta-2} p^{\Delta-2} \frac{n}{8ktr} \geq C p^{-2} \log n\,. \end{equation} We also have $|W'| \geq C^{\ast} p^{-1} \log n$ by \eqref{eq:sizeW} and $\big(\bigcap_{j\in \Lambda} N_{\Gamma}(w_j,V_i'), W'\big)$ is an $(\varepsilonaa_{|\Lambda|}, d/2, p)_G$-lower-regular pair by~\ref{cnl:cl:Wreg}.
Since the good event of Lemma~\ref{lem:OSRIL} with input $\varepsilonaa_{|\Lambda|+1}$ and $\tfrac{d}{2}$ occurs, there exist at most $C p^{-1} \log n$ vertices $w$ in $W'$ such that \[\bigg(\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i') \cap N_{\Gamma}(w), W'\bigg) = \bigg(\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i') \cap N_{\Gamma}(w,V_i'),W'\bigg)\] is not $(\varepsilonaa_{|\Lambda|+1},\frac d 2, p)_G$-lower-regular. Summing over all possible choices of $\Lambda \subseteqeq [l]$ and $i \in [k]$, there are at most $2^\Delta k^2 C p^{-1} \log n$ vertices $w$ in $W'$ such that $(w_1, \ldots, w_l, w)$ does not satisfy~\ref{cnl:cl:Wreg}. Moving on to~\ref{cnl:cl:NGVp}, let $\Lambda\subseteq[\ell]$ and $i\in[k]$ be given. We have \begin{align*} \big|\bigcap_{j\in\Lambda}N_G(w_j,V'_i)\big|&\geBy{\ref{cnl:cl:NGVp}}\Big(\frac{d}{4}\Big)^{|\Lambda|}p^{|\Lambda|}|V'_i|\quad\text{and}\\ \big|\bigcap_{j\in\Lambda}N_\Gamma(w_j,V'_i)\big|&\leBy{\ref{cnl:cl:NGaVp}}(1+\varepsilon_0)^{|\Lambda|}p^{|\Lambda|}|V'_i|\,. \end{align*} By choice of $\varepsilon_0$ and $\varepsilonaa_{|\Lambda|}$, we thus have $\big|\bigcap_{j\in\Lambda}N_G(w_j,V'_i)\big|\ge\varepsilonaa_{|\Lambda|}\big|\bigcap_{j\in\Lambda}N_\Gamma(w_j,V'_i)\big|$. Now by~\ref{cnl:cl:Wreg}, the pair $\big(W',\bigcap_{j\in\Lambda}N_\Gamma(w_j,V'_i)\big)$ is $\big(\varepsilonaa_{|\Lambda|},\tfrac{d}{2},p\big)_G$-lower-regular, and thus the number of vertices $w\in W'$ such that \[\big|N_G(w,V'_i)\cap\bigcap_{j\in\Lambda}N_G(w_j,V'_i)\big|<\Big(\frac{d}{4}\Big)^{|\Lambda|+1}p^{|\Lambda|+1}|V'_i|\] is at most $\varepsilonaa_{|\Lambda|}|W'|\le\varepsilonaa_{\Delta}|W'|$. Summing over the choices of $\Lambda\subseteq[\ell]$ and $i\in[k]$, the number of $w\in W'$ violating~\ref{cnl:cl:NGVp} is at most $2^\Delta k\varepsilonaa_{\Delta}|W'|$. For~\ref{cnl:cl:NGaVp}, given $\Lambda\subseteq[\ell]$ and $i\in[k]$, by~\ref{cnl:cl:NGaVp} we have \[ \big|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i')\big| = (1\pm\varepsilon_0)^{|\Lambda|}p^{|\Lambda|} |V_i'|\,,\] and by choice of $\varepsilon_0$ and $p$, in particular $\big|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i')\big|\ge C p^{-1}\log n$. Since the good event of Proposition~\ref{prop:chernoff} occurs, the number of vertices $w\in W'$ such that $\big|N_{\Gamma}(w,V'_i)\cap\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i')\big|$ is either smaller than $(1-\varepsilon_0)^{|\Lambda|+1}p^{|\Lambda|+1}|V_i'|$ or larger than $(1+\varepsilon_0)^{|\Lambda|+1}p^{|\Lambda|+1}|V_i'|$ is at most $2C p^{-1}\log n$. Summing over the choices of $\Lambda\subseteq[\ell]$ and of $i\in[k]$, we conclude that at most $2^{\Delta+1}kC p^{-1}\log n$ vertices of $W'$ violate~\ref{cnl:cl:NGaVp}. Since $n\ge |V_i|\ge|V'_i|$, the same calculation shows that a further at most $2^{\Delta+1}kC p^{-1}\log n$ vertices of $W'$ violate~\ref{cnl:cl:NGaV}, and at most $2^{\Delta+1}kC p^{-1}\log n$ vertices of $W'$ violate~\ref{cnl:cl:NGa}. Finally, we come to~\ref{cnl:cl:Vreg}. Suppose we are given $\Lambda,\Lambda'\subseteq[\ell]$ and distinct $i,i'\in[k]$. Suppose that $|\Lambda|\le\Delta-2$ and $|\Lambda'|\le\Delta-1$. 
We wish to show that for most vertices $w\in W'$, the pair $\big(N_{\Gamma}(w,V_i)\cap \bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\big)$ is $\big(\varepsilona_{|\Lambda|+1,|\Lambda'|},d,p\big)_G$-lower-regular, and furthermore, if $\Delta\ge 3$ and $|\Lambda'|\le\Delta-2$, that the pair \[\bigg(N_{\Gamma}(w,V_i)\cap \bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),N_{\Gamma}(w,V_{i'})\cap\bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\bigg)\] is $\big(\varepsilona_{|\Lambda|+1,|\Lambda'|+1},d,p\big)_G$-lower-regular. By~\ref{cnl:cl:NGaV}, and by choice of $\varepsilon_0$, $C$ and $p$, we have \begin{align*} \big|\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i)\big|&\ge(1-\varepsilon_0)^{|\Lambda|} p^{|\Lambda|}|V_i|\ge C p^{|\Lambda|-\Delta}\log n\quad\text{and}\\ \big|\bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\big|&\ge(1-\varepsilon_0)^{|\Lambda'|} p^{|\Lambda'|}|V_{i'}|\ge C p^{|\Lambda'|-\Delta}\log n\,. \end{align*} By~\ref{cnl:cl:Vreg}, the pair $\big(\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\big)$ is $\big(\varepsilona_{|\Lambda|,|\Lambda'|},d,p\big)_G$-lower-regular. Since the good event of Lemma~\ref{lem:OSRIL} with input $\varepsilona_{|\Lambda|+1,|\Lambda'|}$ and $d$ occurs, there are at most $C p^{-1}\log n$ vertices $w$ of $W'$ such that $\big(N_{\Gamma}(w,V_i)\cap\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\big)$ is not $\big(\varepsilona_{|\Lambda|+1,|\Lambda'|},d,p\big)_G$-lower-regular. Furthermore, if $|\Lambda'|\le\Delta-2$, then since the good event of Lemma~\ref{lem:TSRIL} with input $\varepsilona_{|\Lambda|+1,|\Lambda'|+1}$ and $d$ occurs, there are at most $C p^{-2}\log n$ vertices $w$ of $W'$ such that \[\Big(N_{\Gamma}(w,V_i)\cap\bigcap_{j\in\Lambda}N_{\Gamma}(w_j,V_i),N_{\Gamma}(w,V_{i'})\cap \bigcap_{j\in\Lambda'}N_{\Gamma}(w_j,V_{i'})\Big)\text{ is not }\big(\varepsilona_{|\Lambda|+1,|\Lambda'|+1},d,p\big)_G\text{-lower-regular.}\] Observe that if $\Delta=2$ the property~\ref{cnl:cl:Vreg} does not require this pair to be lower-regular. Summing over the choices of $\Lambda,\Lambda'\subseteq[\ell]$ and $i,i'\in[k]$, we conclude that if $\Delta=2$ then at most $2^{2\Delta}k^2C p^{-1}\log n$ vertices $w$ of $W'$ cause~\ref{cnl:cl:Vreg} to fail, while if $\Delta\ge 3$, at most $2^{2\Delta}k^2C(p^{-1}+p^{-2})\log n$ vertices $w$ of $W'$ violate~\ref{cnl:cl:Vreg}. Summing up, if $\Delta=2$ then at most \begin{equation}\label{eq:common:bad2} 2^{\Delta}k^2C p^{-1}\log n+2^\Delta k\varepsilonaa_{\Delta}|W'|+3\cdot 2^{\Delta+1}kC p^{-1}\log n+2^{2\Delta}k^2C p^{-1}\log n \end{equation} vertices $w$ of $W'$ cannot be chosen as $w_{\ell+1}$. By choice of $C^{\ast}$ and $\varepsilonaa_{\Delta}$, and by choice of $p$, this is at most $\tfrac12|W'|$, so that there exists a vertex of $W'$ which can be chosen as $w_{\ell+1}$, as desired. If on the other hand $\Delta\ge 3$, then at most \begin{equation}\label{eq:common:bad3} 2^{\Delta}k^2C p^{-1}\log n+2^\Delta k\varepsilonaa_{\Delta}|W'|+3\cdot 2^{\Delta+1}kC p^{-1}\log n+2^{2\Delta}k^2C (p^{-1}+p^{-2})\log n \end{equation} vertices of $W'$ cannot be chosen as $w_{\ell+1}$. Again by choice of $C^{\ast}$, $\varepsilonaa_{\Delta}$ and $p$, this is at most $\tfrac12|W'|$, and again we therefore can choose $w_{\ell+1}$ satisfying~\ref{cnl:cl:Wreg}--\ref{cnl:cl:Vreg} as desired. \end{claimproof} Finally, let us argue why the lemma is a consequence of Claim~\ref{claim:common}.
Let $(w_1,\ldots, w_\Delta) \in \binom{W'}{\Delta}$ be a tuple satisfying~\ref{cnl:cl:Wreg}--\ref{cnl:cl:Vreg}. By~\ref{cnl:cl:NGVp}, for any $\Lambda\subseteq[\ell]$ and $i\in[k]$ we have \[\Big|\bigcap_{j\in\Lambda}N_G(w_j,V_i)\Big|\ge \Big(\frac{d}{4}\Big)^{|\Lambda|} p^{|\Lambda|}|V_i'|\ge\Big(\frac{d}{4}\Big)^\Delta p^{|\Lambda|}\frac{|V_i|}{2t_1}\ge\alpha p^{|\Lambda|}|V_i|\,,\] as required for~\ref{cnl:Gsize}. Properties~\ref{cnl:Gasizen},~\ref{cnl:Gasize} and~\ref{cnl:Nreg} are respectively~\ref{cnl:cl:NGa},~\ref{cnl:cl:NGaV} and~\ref{cnl:cl:Vreg}, by choice of $\varepsilon_0$. \end{proof} \section{The balancing lemma} \label{sec:prooflembalancing} The statement of Lemma~\ref{lem:balancing} gives us a partition of $V(G)$ with parts $\big(V_{i,j}\big)_{i\in[r],j\in[k]}$, and a collection of `target integers' $\big(n_{i,j}\big)_{i\in[r],j\in[k]}$, with each $n_{i,j}$ close to $|V_{i,j}|$, and with $\sum n_{i,j}=\sum |V_{i,j}|$. Our aim is to find a partition of $V(G)$ with parts $\big(V'_{i,j}\big)_{i\in[r],j\in[k]}$ such that $|V'_{i,j}|=n_{i,j}$ for each $i,j$. This partition is required to maintain similar regularity properties as the original partition, while not substantially changing common neighbourhoods of vertices. There are two steps to our proof. In a first step, we correct \emph{global imbalance}, that is, we find a partition $\widetilde{\mathcal{V}}$ which maintains all the desired properties and which has the property that $\sum_i |\widetilde{V}_{i,j}|=\sum_i n_{i,j}$ for each $j\in[k]$. To do this, we identify some $j^\ast$ such that $\sum_i |V_{i,j^\ast}|>\sum_i n_{i,j^\ast}$ and $j'$ such that $\sum_i |V_{i,j'}|<\sum_i n_{i,j'}$. We move $\sum_i |V_{i,j^\ast}|- n_{i,j^\ast}$ vertices from $V_{1,j^\ast}$ to some cluster $V_{i',j'}$, maintaining the desired properties, and repeat this procedure until no global imbalance remains. In a second step, we correct \emph{local imbalance}, that is, for each $i=1,\dots,r-1$ sequentially, and for each $j\in[k]$, we move vertices between $\widetilde{V}_{i,j}$ and $\widetilde{V}_{i+1,j}$, maintaining the desired properties, to obtain the partition $\mathcal V'$ such that $|V'_{i,j}|=n_{i,j}$ for each $i,j$. Observe that because $\widetilde{\mathcal{V}}$ is globally balanced, once we know $|V'_{i,j}|=n_{i,j}$ for each $i\in[r-1]$ and each $j\in[k]$ we are guaranteed that $|V'_{r,j}|=n_{r,j}$ for each $j\in[k]$. The proof of the lemma then comes down to showing that we can move vertices and maintain the desired properties. Because we start with a partition in which $V_{i,j}$ is very close to $n_{i,j}$ for each $i$ and $j$, the total number of vertices we move in any step is at most the sum of the differences, which is much smaller than any $n_{i,j}$. The following lemma shows that we can move any small (compared to all $n_{i,j}$) number of vertices from one part to another and maintain the desired properties. \begin{lemma}\label{lem:smallmove} For all integers $k, r_1, \Delta \geq 1$, and reals $d>0$ and $0<\varepsilon<1/2k$ as well as $0 < \xi < 1/(100kr_1^3)$, there exists $C^{\ast}>0$ such that the following holds for all sufficiently large $n$. Let $\Gamma$ be a graph on vertex set $[n]$, and let $G$ be a not necessarily spanning subgraph. Let $X,Z_1,\ldots,Z_{k-1} \subseteqeq V(G)$ be pairwise disjoint subsets, each of size at least $n/(16kr_1)$, such that $(X,Z_i)$ is $(\varepsilon,d,p)_G$-lower-regular for each $i$. 
Then for each $1\le m\le 2r_1^2\xi n$, there exists a set $S$ of $m$ vertices of $X$ with the following properties. \begin{enumerate}[label=\itmarab{SM}] \item\label{smallmove:degG} For each $v\in S$ we have $\deg_G(v;Z_i)\ge(d-\varepsilon)p|Z_i|$ for each $i\in[k-1]$, and \item\label{smallmove:I} for each $1\le s\le\Delta$ and every collection of vertices $v_1,\ldots,v_s\in[n]$ we have \[\deg_\Gamma(v_1,\ldots,v_s;S)\le 100kr_1^3\xi\deg_\Gamma(v_1,\ldots,v_s; X)+\frac{1}{100}C^{\ast}\log n\,.\] \end{enumerate} \end{lemma} \begin{proof} Given $k$, $r_1$, $\Delta$, $d$, $\xi$ and $\varepsilon$, let $C$ be returned by Lemma~\ref{lem:hypgeo} for input $\xi$ and $\Delta$. We set $C^{\ast}=100C$. Given $\Gamma$, $G$ and $X$, $Z_1,\ldots,Z_{k-1}$, let $X'$ be the set of vertices $v\in X$ such that $\deg_G(v;Z_i)\ge(d-\varepsilon)p|Z_i|$ for each $i\in[k-1]$. Because each pair $(X,Z_i)$ for $i\in[k-1]$ is $(\varepsilon,d,p)_G$-lower-regular, we have $|X'|\ge |X|-k\varepsilon|X|\ge|X|/2$. We now apply Lemma~\ref{lem:hypgeo}, with input $\xi$, $\Delta$, $W=X'$ and the sets $T_i$ being the sets $N_\Gamma(v_1,\ldots,v_s;X')$ for each $1\le s\le\Delta$ and $v_1,\ldots,v_s\in[n]$, to choose a set $S$ of size $m\leq 2r_1^2\xi n\leq |X'|$ in $X'$. We then have \begin{align*} \deg_\Gamma(v_1,\ldots,v_s;S)&\le \left(\frac{2r_1^2\xi n}{|X'|}+\xi\right)\deg_\Gamma(v_1,\ldots,v_s;X')+C\log n\\ &\le 100kr_1^3\xi\deg_\Gamma(v_1,\ldots,v_s;X)+\frac{1}{100}C^{\ast}\log n\,, \end{align*} where the final inequality is by choice of $C^{\ast}$, and since $|X'|\ge|X|/2\ge n/(32kr_1)$. Thus the set $S$ satisfies~\ref{smallmove:I}, and since $S\subseteqeq X'$ we have~\ref{smallmove:degG}. \end{proof} We now prove the balancing lemma. \begin{proof}[Proof of Lemma~\ref{lem:balancing}] Given integers $k, r_1, \Delta \geq 1$ and reals $\gamma, d >0$ and $0< \varepsilon < \min\{d, 1/(2k)\}$, we set \[\xi =10^{-15} \varepsilon^4d/(k^3r_1^5).\] Let $C^{\ast}_1$ be returned by Lemma~\ref{lem:smallmove} with input $k$, $r_1$, $\Delta$, $d$, $\varepsilon/4$ and $\xi$, and let $C^{\ast}_2$ be returned by Lemma~\ref{lem:smallmove} with input $k$, $r_1$, $\Delta$, $d$, $3\varepsilon/4$ and $\xi$. We set $C^{\ast} = \max\{C^{\ast}_1, C^{\ast}_2\}$. Now suppose that $p\ge C^{\ast}\big(\frac{\log n}{n}\big)^{1/2}$, that $10\gamma^{-1}\le r\le r_1$, and that graphs $\Gamma$ and $G$, a partition $\mathcal V$ of $V=V(G)$, and graphs $R^k_r$, $B^k_r$ and $K^k_r$ on $[r]\times [k]$ as in the statement of Lemma~\ref{lem:balancing} are given. \textbf{First stage (global imbalance):} We use the following algorithm. \begin{algorithm} \caption{Global balancing}\label{alg:global} \While{$\exists j\in[k]$ such that $\sum_{i\in[r]}\big(|V_{i,j}|-n_{i,j}\big)\neq 0$}{ Choose $j^\ast\in[k]$ maximising $\sum_{i\in[r]}\big(|V_{i,j^\ast}|-n_{i,j^\ast}\big)$ \; Choose $i'>1$ such that, for every $j\in[k]$, $V_{i',j}$ is not flagged as changed and $(V_{1,j^\ast},V_{i',j})$ is $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-lower-regular\; Choose $j'\in[k]$ such that $\sum_{i\in[r]}\big(|V_{i,j'}|-n_{i,j'}\big)<0$ \; Select $S\subseteq V_{1,j^\ast}$ with $|S|=\sum_{i\in[r]}\big(|V_{i,j^\ast}|-n_{i,j^\ast}\big)$ \; Set $V_{1,j^\ast}:=V_{1,j^\ast}\setminus S$ and $V_{i',j'}:=V_{i',j'}\cup S$ \; Flag $V_{1,j^\ast}$ and $V_{i',j'}$ as changed \; } \end{algorithm} In each step where we select $S$, we make use of Lemma~\ref{lem:smallmove} to do so, with input $k$, $r_1$, $\Delta$, $d$, $\varepsilon/4$ and $\xi$, with $X=V_{1,j^\ast}$ and with the $Z_1,\ldots,Z_{k-1}$ being the $V_{i',j''}$ with $j''\neq j'$.
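As an illustration of a single iteration in the simplest case: if $k=2$ and $\sum_{i\in[r]}\big(|V_{i,1}|-n_{i,1}\big)=m>0$, then necessarily $\sum_{i\in[r]}\big(|V_{i,2}|-n_{i,2}\big)=-m$, so the algorithm takes $j^\ast=1$ and $j'=2$, moves a set $S$ of $m$ vertices from $V_{1,1}$ to some $V_{i',2}$, and the While condition then fails. In general several iterations may be needed, but, as we argue next, each $j\in[k]$ plays the role of $j^\ast$ at most once.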
We claim that the algorithm completes successfully, in other words that each of the choices is possible, and that Lemma~\ref{lem:smallmove} is always applicable. In each While loop, since $\sum_{i,j}\big(|V_{i,j}|-n_{i,j}\big)=0$ and since the While condition is satisfied, $j^\ast$ satisfies $\sum_{i\in[r]}\big(|V_{i,j^\ast}|-n_{i,j^\ast}\big)>0$. Observe that the While loop is run at most $k$ times, since at the end of the While loop in which we selected some $j=j^\ast$ we have $\sum_{i\in[r]}\big(|V_{i,j^\ast}|-n_{i,j^\ast}\big)=0$ and therefore we do not select $j$ as either $j^\ast$ or $j'$ in future iterations. It follows that the number of $V_{i,j}$ flagged as changed never exceeds $2k$. Now the set $V_{1,j^\ast}$ has degree at least $\big(k-1+\tfrac{\gamma k}{2}\big)r$ in $R^k_r$, and so there are at least $\gamma k r/2$ indices $i\in[r]$ such that $V_{1,j^\ast}$ is adjacent to each $V_{i,j}$ in $R^k_r$. Since $\gamma k r/2>3k$, in particular we can choose $i'$ such that $V_{1,j^\ast}$ is adjacent to each $V_{i',j}$ in $R^k_r$ and no $V_{i',j}$ is flagged as changed. It follows that each pair $(V_{1,j^\ast},V_{i',j})$ is $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-lower-regular and thus it is possible to choose $i'$. It is possible to choose $j'$ since the While condition holds. Finally, we need to show that Lemma~\ref{lem:smallmove} is always applicable with the given parameters. In each application, the sets denoted $X,Z_1,\ldots,Z_{k-1}$ are parts of the partition $\mathcal V$ (so they were not changed by the algorithm yet). It follows that each set has size at least $n/(8kr)>n/(16kr_1)$. The pairs $(X,Z_1),\dots,(X,Z_{k-1})$ are $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-lower-regular by the choice of $i'$ in Algorithm~\ref{alg:global}, as required. Moreover, by choice of $j^\ast$ we see that the sizes of the sets $S$ we select in each step are decreasing, so it is enough to show that in the first step we have $|S|\le r\xi n$, which follows from~\ref{lembalancing:sizes}. Thus Lemma~\ref{lem:smallmove} is applicable in each step, and we conclude that the algorithm indeed completes. We denote the resulting vertex partition by $\widetilde{\mathcal{V}}=\{\widetilde{V}_{i,j}\}_{i\in[r],j\in[k]}$. \begin{claim} We have the following properties. \begin{enumerate}[label=\itmarab{P}] \item\label{claimpVt:sizeparts} For each $i\in[r]$ and $j\in[k]$ we have $\big||\widetilde{V}_{i,j}|-n_{i,j}\big|\le 2r\xi n$, \item\label{claimpVt:regular} $\widetilde{\mathcal{V}}$ is $\big(\tfrac{\varepsilon}{2},d,p\big)_G$-lower-regular on $R^k_r$ and $\big(\tfrac{\varepsilon}{2},d,p\big)_G$-super-regular on $K^k_r$, \item\label{claimpVt:NGa} For each $i\in[r]$, $j\in[k]$ and $1\le s\le\Delta$ and $v_1,\ldots,v_s\in[n]$ we have \[|N_{\Gamma}(v_1,\ldots,v_s;\widetilde{V}_{i,j}) \triangle N_{\Gamma}(v_1,\ldots,v_s;V_{i,j})| \leq 100kr_1^3\xi\deg_\Gamma\big(v_1,\ldots,v_s;V(G)\big)+\frac{1}{100}C^{\ast}\log n\,.\] \end{enumerate} \end{claim} \begin{claimproof} Observe that vertices were removed from or added to each $V_{i,j}$ to form $\widetilde{V}_{i,j}$ at most once in the running of Algorithm~\ref{alg:global}, and the number of vertices added or removed was at most $r\xi n$. Since $|V_{i,j}|$ satisfies~\ref{lembalancing:sizes}, we conclude that~\ref{claimpVt:sizeparts} holds. Furthermore, the vertices added to or removed from $V_{i,j}$ satisfy~\ref{smallmove:I} and therefore~\ref{claimpVt:NGa} holds.
Since each set $V_{i,j}$ has size at least $n/(8kr)$, we can apply Proposition~\ref{prop:subpairs3} with $\mu=\nu=8kr^2\xi$ to each edge of $R^k_r$, concluding that $\widetilde{\mathcal{V}}$ is $\big(\tfrac{\varepsilon}{2},d,p\big)_G$-lower-regular on $R^k_r$ since $\tfrac{\varepsilon}{4}+4\sqrt{8kr^2\xi}<\tfrac{\varepsilon}{2}$. Now for any $i\in[r]$ and $j\in[k]$, consider $v\in\widetilde{V}_{i,j}$. If $v\not\in V_{i,j}$, then we applied Lemma~\ref{lem:smallmove} to select $v$, and when we did so no $V_{i,j'}$ was flagged as changed by Algorithm~\ref{alg:global}. Thus by~\ref{smallmove:degG} we have \[\deg_G(v;\widetilde{V}_{i,j'})=\deg_G(v;V_{i,j'})\ge\Big(d-\frac{\varepsilon}{4}\Big)p|V_{i,j'}|=\Big(d-\frac{\varepsilon}{4}\Big)p|\widetilde{V}_{i,j'}|\] for each $j'\neq j$, since $V_{i,j}$ is then flagged as changed and thus $V_{i,j'}=\widetilde{V}_{i,j'}$ for each $j'\neq j$. If on the other hand $v\in V_{i,j}$, then by~\ref{lembalancing:regular1} we started with $\deg_G(v;V_{i,j'})\ge\big(d-\tfrac{\varepsilon}{4}\big)p|V_{i,j'}|$. By~\ref{smallmove:I} and~\ref{lembalancing:gamma1}, we have \[\deg_G(v;\widetilde{V}_{i,j'})\ge \Big(d-\frac{\varepsilon}{4}\Big)p|V_{i,j'}|-\frac{\varepsilon^2}{1000kr_1}\Big(1+\frac{\varepsilon}{4}\Big)p|V_{i,j'}|-\frac{1}{100}C^{\ast}\log n \ge\Big(d-\frac{\varepsilon}{2}\Big)p|\widetilde{V}_{i,j'}|\,,\] where the final inequality follows by choice of $n$ sufficiently large and since $|\widetilde{V}_{i,j'}|\le|V_{i,j'}|+r\xi n\le \big(1+\tfrac{\varepsilon d}{100}\big)|V_{i,j'}|$. We conclude that $\widetilde{\mathcal{V}}$ is $\big(\tfrac{\varepsilon}{2},d,p\big)_G$-super-regular on $K^k_r$, giving~\ref{claimpVt:regular}. \end{claimproof} \textbf{Second stage (local imbalance):} We use Algorithm~\ref{alg:local} to correct the local imbalances in $\widetilde{\mathcal{V}}$. \begin{algorithm} \caption{Local balancing}\label{alg:local} \ForEach{$i=1,\dots,r-1$}{ \ForEach{$j=1,\dots,k$}{ \If{$|\widetilde{V}_{i,j}|>n_{i,j}$}{ Select $S\subseteq \widetilde{V}_{i,j}$ with $|S|=|\widetilde{V}_{i,j}|-n_{i,j}$ \; Set $\widetilde{V}_{i,j}:=\widetilde{V}_{i,j}\setminus S$ and $\widetilde{V}_{i+1,j}:=\widetilde{V}_{i+1,j}\cup S$ \; } \Else{ Select $S\subseteq \widetilde{V}_{i+1,j}$ with $|S|=n_{i,j}-|\widetilde{V}_{i,j}|$ \; Set $\widetilde{V}_{i+1,j}:=\widetilde{V}_{i+1,j}\setminus S$ and $\widetilde{V}_{i,j}:=\widetilde{V}_{i,j}\cup S$ \; } } } \end{algorithm} Again, in each step when we select $S$ we make use of Lemma~\ref{lem:smallmove} to do so. If we select $S$ from $\widetilde{V}_{i,j}$, then we use input $k$, $r_1$, $\Delta$, $d$, $3\varepsilon/4$ and $\xi$ with $X=\widetilde{V}_{i,j}$ and the sets $Z_1,\ldots,Z_{k-1}$ being $\widetilde{V}_{i+1,j'}$ for $j'\neq j$. If on the other hand we select $S$ from $\widetilde{V}_{i+1,j}$, then we use input $k$, $r_1$, $\Delta$, $d$, $3\varepsilon/4$ and $\xi$, with $X=\widetilde{V}_{i+1,j}$ and the sets $Z_1,\ldots,Z_{k-1}$ being $\widetilde{V}_{i,j'}$ for $j'\neq j$. We claim that Lemma~\ref{lem:smallmove} is always applicable. To see that this is true, observe first that the number of vertices which we move between any $\widetilde{V}_{i,j}$ and $\widetilde{V}_{i+1,j}$ in a given step is by~\ref{claimpVt:sizeparts} bounded by $2k^2r^2\xi n$. We change any given $\widetilde{V}_{i,j}$ at most twice in the running of the algorithm, so that in total at most $4k^2r^2\xi n$ vertices are changed.
In particular, we maintain $|\widetilde{V}_{i,j}|\ge n/(16kr_1)$ throughout, and, by Proposition~\ref{prop:subpairs3}, with input $\mu=\nu=\tfrac{4r^2\xi n}{n/(16kr_1)}<100r_1^3k\xi$, and using~\ref{claimpVt:regular}, we maintain the property that any pair in $R^k_r$, and in particular any pair in $B^k_r$, is $\big(\tfrac{3\varepsilon}{4},d,p\big)$-lower-regular throughout. This shows that Lemma~\ref{lem:smallmove} is always applicable, and therefore the algorithm completes and returns a partition $\mathcal V'$. We claim that this is the desired partition. We need to check that~\ref{lembalancing:sizesout}--\ref{lembalancing:gammaout} hold. Since for each $j\in[k]$ we have $\sum_i|V'_{i,j}|=\sum_i|\widetilde{V}_{i,j}|=\sum_in_{i,j}$, and since $|V'_{i,j}|=n_{i,j}$ for each $i\in[r-1]$ and $j\in[k]$, we conclude that $|V'_{i,j}|=n_{i,j}$ for all $i$ and $j$, giving~\ref{lembalancing:sizesout}. For the first part of~\ref{lembalancing:regular}, we have justified that we maintain $\big(\tfrac{3\varepsilon}{4},d,p\big)_G$-lower-regularity on $R^k_r$ throughout the algorithm. For the second part, we need to show that for each $i\in[r]$ and $j\neq j'\in[k]$, and each $v\in V'_{i,j}$, we have $\deg_G(v;V'_{i,j'})\ge(d-\varepsilon)p|V'_{i,j'}|$. If $v\in\widetilde{V}_{i,j}$, then by~\ref{claimpVt:regular} we have $\deg_G(v;\widetilde{V}_{i,j'})\ge\big(d-\tfrac{\varepsilon}{2}\big)p|\widetilde{V}_{i,j'}|$. We change $\widetilde{V}_{i,j'}$ at most twice to obtain $V'_{i,j'}$, both times by adding or removing vertices satisfying~\ref{smallmove:I}. As in the proof of the Claim above, using~\ref{lembalancing:gamma1} and~\ref{claimpVt:NGa} we obtain $\deg_G(v;V'_{i,j'})\ge(d-\varepsilon)p|V'_{i,j'}|$ as desired. If $v\not\in\widetilde{V}_{i,j}$, then it was added to the set $\widetilde{V}_{i,j}$ by Algorithm~\ref{alg:local}, and $\widetilde{V}_{i,j'}$ was changed at most twice thereafter. Again, using~\ref{smallmove:degG},~\ref{smallmove:I},~\ref{lembalancing:gamma1} and~\ref{claimpVt:NGa} we obtain $\deg_G(v;V'_{i,j'})\ge(d-\varepsilon)p|V'_{i,j'}|$ as desired. Now~\ref{lembalancing:symd} holds since the total number of vertices moved in Algorithm~\ref{alg:global} is at most $k^2r\xi n$, in Algorithm~\ref{alg:local} at most $4k^2r^2\xi n$ vertices are changed in each cluster, and by choice of $\xi$. To see that~\ref{lembalancing:inheritance} holds, observe that by~\ref{lembalancing:gamma1},~\ref{claimpVt:NGa} and~\ref{smallmove:I} we have \[\big|N_\Gamma(v;V_{i,j})\triangle N_\Gamma(v;V'_{i,j}) \big|\le \frac{\varepsilon^2}{100kr_1}\deg_\Gamma\big(v;V(G)\big)+\frac{1}{10}C^{\ast}\log n\le \frac{\varepsilon^2}{50}\deg_\Gamma(v;V_{i,j})\] where the final inequality follows by choice of $p$ and of $n$ sufficiently large. Using~\ref{lembalancing:inheritance1}, we can apply Proposition~\ref{prop:subpairs3}, with $\mu=\nu=\tfrac{\varepsilon^2}{50}$, to deduce~\ref{lembalancing:inheritance}. For~\ref{lembalancing:gammaout}, observe that for any given $i\in[r]$ and $j\in[k]$ we change $\widetilde{V}_{i,j}$ at most twice in the running of Algorithm~\ref{alg:local}, both times either adding or removing a set satisfying~\ref{smallmove:I}. By~\ref{claimpVt:NGa} and choice of $\xi$, we conclude that~\ref{lembalancing:gammaout} holds. Finally, suppose that for any two disjoint vertex sets $A,A'\subseteq V(\Gamma)$ with $|A|,|A'|\ge \tfrac{1}{50000kr_1}\varepsilon^2\xi pn$ we have $e_\Gamma(A,A')\le \big(1+\tfrac{1}{100}\varepsilon^2\xi\big)p|A||A'|$.
In each application of Proposition~\ref{prop:subpairs3} we have $\mu,\nu\ge\tfrac{1}{50}\varepsilon^2\xi$, and if we have `regular' in place of `lower-regular' in~\ref{lembalancing:regular1} and~\ref{lembalancing:inheritance1}, we always apply Proposition~\ref{prop:subpairs3} to a regular pair with sets of size at least $\tfrac{\varepsilon}{1000kr_1}pn$, so it returns regular pairs for~\ref{lembalancing:regular} and~\ref{lembalancing:inheritance}, as desired. \end{proof} \section{The Bandwidth Theorem in random graphs} \label{sec:proofmain} Before embarking on the proof, we first recall from the proof overview (Section~\ref{subsec:over}) the main ideas. Given $G$, we first use the lemma for~$G$ (Lemma~\ref{lem:G}) to find a lower-regular partition of $V(G)$, with an extremely small exceptional set $V_0$, and whose reduced graph $R^k_r$ contains a spanning backbone graph $B^k_r$, on whose subgraph $K^k_r$ the graph $G$ is super-regular and has one- and two-sided inheritance. Given this, and $H$ together with a $(z,\beta)$-zero-free $(k+1)$-colouring, we use the lemma for~$H$ (Lemma~\ref{lem:H2}) to find a homomorphism $f$ from $V(H)$ to $R^k_r$ almost all of whose edges are mapped to $K^k_r$ and in which approximately the `right' number of vertices of $H$ are mapped to each vertex of $R^k_r$. At this point, if $V_0$ were empty, and if the `approximately' were exact, we would apply the sparse blow-up lemma (Lemma~\ref{thm:blowup}) to obtain an embedding of $H$ into $G$. Our first aim is to deal with $V_0$. We do this one vertex at a time. Given $v\in V_0$, we choose $x\in V(H)$ from the first $\beta n$ vertices of the supplied bandwidth order $\mathcal L$ which is not contained in any triangle. We embed $x$ to $v$. We then embed the neighbours of $x$ to carefully chosen neighbours of $v$, which we obtain using Lemma~\ref{lem:common}. Here we use the fact that $N_H(x)$ is independent. This then fixes a clique of $K^k_r$ to which $N^2_H(x)$ must be assigned, and gives image restrictions in the corresponding parts of the lower-regular partition for these vertices. Since $N^2_H(x)$ may have been assigned by $f$ to some quite different clique in $K^k_r$, we have to adjust $f$ to match. This we can do using the fact, which follows from our assumptions on $\mathcal L$, that $x$ is far from vertices of colour zero. Now the idea is simply to repeat the above procedure, choosing vertices of $V(H)$ to pre-embed which are widely separated in $H$, until we have pre-embedded vertices to all of $V_0$. We end up with a homomorphism $f^*$ from what remains of $V(H)$ to $R^k_r$. It is easy to check that this homomorphism still maps about the right number of vertices of $H$ to each vertex of $R^k_r$, simply because $V_0$ is small. We now apply the Balancing Lemma (Lemma~\ref{lem:balancing}) to correct the sizes of the clusters to match $f^*$, and complete the embedding of $H$ using the Sparse Blow-up Lemma (Lemma~\ref{thm:blowup}). There are two difficulties with this idea, the `subtleties' mentioned in the proof overview (Section~\ref{subsec:over}). First, if $\Delta=2$ we might have $|V_0|\gg pn$, so that we should be worried that at some stage of the pre-embedding we choose $v\in V_0$ and discover most or all of its neighbours have already been pre-embedded to. It turns out to be easy to resolve this: we choose each $v\in V_0$ not arbitrarily, but by taking those which have the fewest available neighbours first. We will show that this is enough to avoid the problem.
More seriously, because we perform the pre-embedding sequentially, we might use up a significant fraction of $N_G(w)$ for some $w\in V(G)$ in the pre-embedding, destroying super-regularity of $G$ on $K^k_r$, or we might use up a significant fraction of some common neighbourhood which defines an image restriction for the sparse blow-up lemma. In order to avoid this, before we begin the pre-embedding we fix a set $S\subseteq V(G)$ whose size is a very small constant times $n$, chosen using Lemma~\ref{lem:hypgeo} to not have a large intersection with any $N_G(w)$ or with any $\Gamma$-common neighbourhood of at most $\Delta$ vertices of $\Gamma$ (which could define an image restriction). We perform the pre-embedding as outlined above, \emph{except} that we choose our neighbours of each $v$ within $S$. This procedure is guaranteed not to use up neighbourhood sets guaranteed by super-regularity or image restriction sets, since these sets are all contained in $V\setminus V_0$ and even using up all of $S$ would not be enough to do damage. \begin{proof}[Proof of Theorem~\ref{thm:maink}] Given $\gamma>0$, $\Delta\ge 2$ and $k\ge 2$, let $d$ be returned by Lemma~\ref{lem:G}, with input $\gamma$, $k$ and $r_0:=10\gamma^{-1}$. Let $\alpha$ be returned by Lemma~\ref{lem:common} with input $d$, $k$ and $\Delta$. We set $D=\Delta$, and let $\eBL>0$ and $\rho>0$ be returned by Lemma~\ref{thm:blowup} with input $\Delta$, $\Delta_{R'}=3k$, $\Delta_J=\Delta$, $\vartheta=\tfrac{1}{100D}$, $\zeta=\tfrac14\alpha$, $d$ and $\kappa=64$. Next, putting $\varepsilona=\tfrac18\eBL$ into Lemma~\ref{lem:common} returns $\varepsilon_0>0$. We choose $\varepsilon=\min\big(\varepsilon_0,d,\tfrac{1}{4D}\varepsilona,\tfrac{1}{100k}\big)$. Putting $\varepsilon$ into Lemma~\ref{lem:G} returns $r_1$. Next, Lemma~\ref{lem:balancing}, for input $k$, $r_1$, $\Delta$, $\gamma$, $d$ and $8\varepsilon$, returns $\xi>0$. We assume without loss of generality that $\xi\le 1/(10kr_1)$, and set $\beta=10^{-12}\xi^2/(\Delta k^4r_1^2)$. Let $\mu=\tfrac{\varepsilon^2}{100000kr}$. Finally, suppose $C^{\ast}$ is large enough for each of these lemmas, for Lemma~\ref{thm:blowup}, for Proposition~\ref{prop:chernoff} with input $\varepsilon$, and for Lemma~\ref{lem:hypgeo} with input $\varepsilon\mu^2$ and $\Delta$. We set $C=10^{10}k^2r_1^2\varepsilon^{-2}\xi^{-1}\Delta^{2r_1+20}\mu^{-\Delta}C^{\ast}$, and $z=10/\xi$. Given $p\ge C\big(\tfrac{\log n}{n}\big)^{1/\Delta}$, a.a.s.\ $G(n,p)$ satisfies the good events of Lemma~\ref{thm:blowup}, Lemma~\ref{lem:G} and Lemma~\ref{lem:common}, and Proposition~\ref{prop:chernoff}, with the stated inputs. Suppose that $\Gamma=G(n,p)$ satisfies these good events. Suppose $G\subseteq \Gamma$ is any spanning subgraph with $\delta(G) \geq \big(\tfrac{k-1}{k}+\gamma\big)pn$. Let $H$ be a graph on $n$ vertices with $\Delta(H) \leq \Delta$, and $\mathcal{L}$ be a labelling of the vertex set $V(H)$, of bandwidth at most $\beta n$, such that the first $\beta n$ vertices of $\mathcal L$ include $Cp^{-2}$ vertices that are not contained in any triangles of $H$, and such that there exists a $(k+1)$-colouring that is $(z, \beta)$-zero-free with respect to $\mathcal L$, and the colour zero is not assigned to the first $\sqrt{\beta}n$ vertices.
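We remark, for later use, that the probability requirement of Lemma~\ref{lem:balancing} is met in this setting: every factor in the definition of $C$ is at least $1$, so $C\ge C^{\ast}$, and since $\Delta\ge 2$ and $\tfrac{\log n}{n}<1$ we have \[p\ge C\Big(\tfrac{\log n}{n}\Big)^{1/\Delta}\ge C^{\ast}\Big(\tfrac{\log n}{n}\Big)^{1/2}\,,\] which is what we will need when we come to apply Lemma~\ref{lem:balancing}.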
Applying Lemma~\ref{lem:G} to $G$, with input $\gamma$, $k$, $r_0$ and $\varepsilon$, we obtain an integer $r$ with $10\gamma^{-1}\le kr\le r_1$, a set $V_0\subseteq V(G)$ with $|V_0|\le C^{\ast} p^{-2}$, a $k$-equitable partition $\mathcal V=\{V_{i,j}\}_{i\in[r],j\in[k]}$ of $V(G)\setminus V_0$, and a graph $R^k_r$ on vertex set $[r]\times[k]$ with minimum degree $\delta(R^k_r)\ge\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)kr$, such that $K^k_r\subseteq B^k_r\subseteq R^k_r$, and such that \begin{enumerate}[label=\itmarabp{G}{a}] \item\label{main:Gsize} $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{main:Greg} $\mathcal V$ is $(\varepsilon,d,p)_G$-lower-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item\label{main:Ginh} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon, d,p)_G$-lower-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{main:Ggam} $|N_{\Gamma}(v,V_{i,j})| = (1 \pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \end{enumerate} Given $i\in[r]$, because $\delta(R^k_r)>(k-1)r$, there exists $v\in V(R^k_r)$ adjacent to each $(i,j)$ with $j\in[k]$. This, together with our assumptions on $H$, allows us to apply Lemma~\ref{lem:H2} to $H$, with input $D$, $k$, $r$, $\tfrac{1}{10}\xi$ and $\beta$, and with $m_{i,j}:=|V_{i,j}|+\tfrac{1}{kr}|V_0|$ for each $i\in[r]$ and $j\in[k]$, choosing the rounding such that the $m_{i,j}$ form a $k$-equitable integer partition of $n$. Since $\Delta(H)\le\Delta$, in particular $H$ is $\Delta$-degenerate. Let $f\colon V(H) \to [r] \times [k]$ be the mapping returned by Lemma~\ref{lem:H2}, let $W_{i,j} := f^{-1}(i,j)$, and let $X \subseteq V(H)$ be the set of special vertices returned by Lemma~\ref{lem:H2}. For every $i\in [r]$ and $j\in [k]$ we have \begin{enumerate}[label=\itmarabp{H}{a}] \item\label{H:size} $m_{i,j} - \tfrac{1}{10}\xi n \leq |W_{i,j}| \leq m_{i,j} + \tfrac{1}{10}\xi n$, \item\label{H:sizeX} $|X| \leq \xi n$, \item\label{H:edge} $\{f(x),f(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H)$, \item\label{H:special} $y,z\in \bigcup_{j'\in[k]}f^{-1}(i,j')$ for every $x\in f^{-1}(i,j)\setminus X$ and $xy,yz\in E(H)$, and \item\label{H:v1} $f(x)=\big(1,\sigma(x)\big)$ for every $x$ in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$. \end{enumerate} Lemma~\ref{lem:H2} actually gives a little more, which we do not require for this proof. We let $F$ be the first $\beta n$ vertices of $\mathcal{L}$. By definition of $\mathcal{L}$, in $F$ there are at least $C p^{-2}$ vertices whose neighbourhood in $H$ is independent. Next, we apply Lemma~\ref{lem:hypgeo}, with input $\varepsilon\mu^2$ and $\Delta$, to choose a set $S\subseteq V(G)$ of size $\mu n$. We let the $T_i$ of Lemma~\ref{lem:hypgeo} be all sets which are common neighbourhoods in $\Gamma$ of at most $\Delta$ vertices of $\Gamma$, together with the sets $V_{i,j}$ for $i\in[r]$ and $j\in[k]$.
The result of Lemma~\ref{lem:hypgeo} is that for any $1\le\ell\le\Delta$ and vertices $u_1,\dots,u_\ell$ of $V(G)$, we have \begin{equation}\label{eq:intS} \begin{split} \Big|S\cap\bigcap_{1\le i\le\ell}N_\Gamma(u_i)\Big|&=(1\pm\varepsilon\mu)\mu\Big|\bigcap_{1\le i\le\ell}N_\Gamma(u_i)\Big|\pm \varepsilon\mu p^\ell n\,,\quad \text{and}\\ \big|S\cap V_{i,j}\big|&\le 2\mu|V_{i,j}|\quad\text{for each $i\in[r]$ and $j\in[k]$,} \end{split} \end{equation} where we use the fact $p\ge C\big(\tfrac{\log n}{n}\big)^{1/\Delta}$ and choice of $C$ to deduce $C^{\ast}\log n<\varepsilon\mu p^\Delta n$. Our next task is to create the pre-embedding that covers the vertices of $V_0$. We use the following algorithm, starting with $\phi_0$ the empty partial embedding. \begin{algorithm} \caption{Pre-embedding} \label{alg:pre} Set $t:=0$ \; \While{$V_0\setminus\mathrm{im}(\phi_t)\neq\emptyset$}{ \lnl{line:choosev} Let $v_{t+1}\in V_0\setminus\mathrm{im}(\phi_t)$ minimise $\big|\big(N_G(v)\cap S\big)\setminus\mathrm{im}(\phi_t)\big|$ over $v\in V_0\setminus\mathrm{im}(\phi_t)$ \; Choose $x_{t+1}\in F$ with $N_H(x_{t+1})$ independent, with $\mathrm{dist}\big(x_{t+1},\mathrm{dom}(\phi_t)\big)\ge 2r+20$ \; Let $N_H(x_{t+1})=\{y_1,\dots,y_\ell\}$ \; \lnl{line:choosenbs} Choose $w_1,\dots,w_{\ell}\in \big(N_G(v_{t+1})\cap S\big)\setminus\mathrm{im}(\phi_t)$ \; $\phi_{t+1}:=\phi_t\cup\{x_{t+1}\to v_{t+1}\}\cup\{y_1\to w_1\}\cup\dots\cup\{y_\ell\to w_\ell\}$ \; $t:=t+1$ \; } \end{algorithm} Suppose this algorithm does not fail, terminating with $t=t^*$. The final $\phi_{t^*}$ is an embedding of some vertices of $H$ into $V(G)$ which covers $V_0$ and is contained in $V_0\cup S$. Before we specify how exactly we choose vertices at line~\ref{line:choosenbs}, we justify that the algorithm does not fail. In other words, we need to justify that at every time $t$ there are vertices of $F$ whose neighbourhood is independent and which are not close to any vertices in $\mathrm{dom}(\phi_t)$, and that at every time $t$, the set $\big(N_G(v)\cap S\big)\setminus\mathrm{im}(\phi_t)$ is big. For the first, observe that since $|V_0|\le C^{\ast} p^{-2}$, we have $|\mathrm{dom}(\phi_t)|\le(\Delta+1)C^{\ast} p^{-2}$ at every step. Thus the number of vertices at distance less than $2r+20$ from $\mathrm{dom}(\phi_t)$ is at most \[\big(1+\Delta+\dots+\Delta^{2r+19}\big)(\Delta+1)C^{\ast} p^{-2}< 4C^{\ast}\Delta^{2r+20} p^{-2}\] which by choice of $C$ is smaller than the number of vertices in $F$ with $N_H(x)$ independent. For the second part, suppose that at some time $t$ we pick a vertex $v$ such that $\big|\big(N_G(v)\cap S\big)\setminus\mathrm{im}(\phi_t)\big|<\tfrac14\mu pn$. For each $t-\tfrac{1}{100(\Delta+1)}\mu pn\le t'<t$, we have $\big|\big(N_G(v)\cap S\big)\setminus\mathrm{im}(\phi_{t'})\big|<\tfrac3{10}\mu pn$, yet at each of these times $v$ is not picked, so that the vertex picked at each time has at most as many uncovered neighbours in $S$ as $v$. Let $Z$ be the set of vertices chosen at line~\ref{line:choosev} in each of these time steps. Then for each $z\in Z$ we have $\big|\big(N_G(z)\cap S\big)\setminus\mathrm{im}(\phi_t)\big|\le\tfrac3{10}\mu pn$. But since $\delta(G)>\tfrac12pn$, by~\eqref{eq:intS} and choice of $\varepsilon$ we have $\big|N_G(z)\cap S\big|\ge \tfrac25\mu pn$, so $\big|N_G(z)\cap\mathrm{im}(\phi_t)\big|\ge \tfrac1{10}\mu pn$ for each $z\in Z$. By choice of $C$, we have $|Z|=\tfrac{1}{100(\Delta+1)}\mu pn \ge C^{\ast} p^{-1}\log n$.
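For concreteness, this final inequality is a routine consequence of the choice of $C$ (together with $C^{\ast}\ge1$ and $\mu\le1$, which we may clearly assume): since $\Delta\ge2$ and $\tfrac{\log n}{n}<1$ we have $p^2n\ge C^2\log n$, and therefore \[\tfrac{1}{100(\Delta+1)}\mu pn=\frac{\mu p^2n}{100(\Delta+1)p}\ge\frac{\mu C^2}{100(\Delta+1)}\,p^{-1}\log n\ge C^{\ast}p^{-1}\log n\,.\]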
Since $|\mathrm{im}(\phi)|\le(\Delta+1)|V_0|\le\tfrac{1}{100}\mu n$, by choice of $C$, this contradicts the good event of Proposition~\ref{prop:chernoff}. We have justified that Algorithm~\ref{alg:pre} completes, and indeed that at each time we reach line~\ref{line:choosenbs} there are at least $\tfrac14\mu pn$ vertices of $\big(N_G(v)\cap S\big)\setminus\mathrm{im}(\phi)$ to choose from. In order to specify how to choose these vertices, we need the following claim. \begin{claim}\label{cl:chooseW} Given any set $Y$ of $\tfrac14\mu pn$ vertices of $V(G)$, there exists $W\subseteq Y$ of size at least $\tfrac{1}{8r}\mu pn$ and an index $i\in[r]$ with the following property. For each $w\in W$ and each $j\in[k]$, we have $|N_G(w,V_{i,j})|\ge dp|V_{i,j}|$. \end{claim} \begin{claimproof} First let $Y'$ be obtained from $Y$ by removing all vertices $y\in Y$ such that either $|N_\Gamma(y,V_0)|\ge \varepsilon p n$, or for some $i\in[r]$ and $j\in[k]$ we have $\big|N_\Gamma(y,V_{i,j})\big|\neq (1\pm\varepsilon)p|V_{i,j}|$. Because the good event of Proposition~\ref{prop:chernoff} occurs, the total number of vertices removed is at most $2kr C^{\ast} p^{-1}\log n<\tfrac12|Y|$, where the inequality is by choice of $C$. Now given any $y\in Y'$, if for each $i\in[r]$ there is $j\in[k]$ such that $\big|N_G(y,V_{i,j})\big|<dp|V_{i,j}|$, then, since the $\{V_{i,j}\}$ are $k$-equitable, we have $|N_G(y)|\le \varepsilon p n+dpn+(1+\varepsilon)\tfrac{k-1}{k}pn+r<\big(\tfrac{k-1}{k}+\gamma\big)pn$, a contradiction. We conclude that for each $y\in Y'$ there exists $i\in[r]$ such that $|N_G(y,V_{i,j})|\ge dp|V_{i,j}|$ for each $j\in[k]$. We let $W$ be the set of vertices of $Y'$ giving the most common choice of $i$. \end{claimproof} Now at each time $t$, in line~\ref{line:choosenbs} of Algorithm~\ref{alg:pre}, we choose the vertices $w_1,\dots,w_\ell$ as follows. Let $Y=\big(N_G(v_t)\cap S\big)\setminus\mathrm{im}(\phi_t)$. Let $i_t\in[r]$ be an index, and $W\subseteq Y$ be a set of size $\tfrac{1}{8r}\mu pn$, such that $\big|N_G(w,V_{i_t,j})\big|\ge dp|V_{i_t,j}|$ for each $w\in W$ and $j\in[k]$, whose existence is guaranteed by Claim~\ref{cl:chooseW}. By construction, and by our choice of $\mu$, we can apply Lemma~\ref{lem:common} with input $d$, $k$, $\Delta$, $\varepsilona$, $r$ and $\varepsilon$, with the clusters $\big\{V_{i_t,j}\big\}_{j\in[k]}$ as the $\big\{V_i\big\}_{i\in[k]}$, and inputting a subset of $W$ of size $10^{-10}\tfrac{\varepsilon^4pn}{k^4r^4}$ as required for~\ref{cnl:W}. This last is possible by choice of $\mu$. To verify the conditions of Lemma~\ref{lem:common}, observe that~\ref{cnl:bal} follows from~\ref{main:Gsize},~\ref{cnl:Vreg} from~\ref{main:Greg}, and~\ref{cnl:Wdeg} from Claim~\ref{cl:chooseW}. We obtain a $\Delta$-tuple of vertices in $W$ satisfying~\ref{cnl:Gsize}--\ref{cnl:Nreg}. We let $w_1,\dots,w_\ell$ be the first $\ell$ vertices of this tuple. Let $H'=H- \mathrm{dom}(\phi_{t^*})$. We next define image restricting vertex sets and create an updated homomorphism $f^*:V(H')\to [r]\times[k]$. For each $x\in V(H)\setminus\mathrm{dom}(\phi_{t^*})$, let $J_x=\phi_{t^*}\big(N_H(x)\cap\mathrm{dom}(\phi_{t^*})\big)$. Now, since the vertices $\{x_t\}_{t\in[t^*]}$ are by construction at pairwise distance at least $2r+20$, in particular for each $y\in V(H')$ with $J_y\neq\emptyset$ the vertex $y$ is at distance at most two from one $x_t$, and at distance greater than $r+10$ from all others. Let $j\in [k]$ be such that $f(y)=(1,j)$. Then we set $f^*(y):=(i_{t},j)$.
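Note that such a $j$ exists: any such $y$ is at distance at most two in $H$ from the corresponding $x_t$, which lies among the first $\beta n$ vertices of $\mathcal L$, so since $\mathcal L$ has bandwidth at most $\beta n$ the vertex $y$ lies among the first $3\beta n\le\sqrt{\beta}n$ vertices of $\mathcal L$, and hence $f(y)$ is of the form $(1,j)$ by~\ref{H:v1}.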
Next, for each $t\in[t^*]$ and each $z\in V(H)$ at distance $3,\dots,i_t+1$ from $x_t$, we set $f^*(z)$ as follows. Recall that $f(z)=(1,j)$ for some $j\in[k]$. We set $f^*(z)=\big(i_t+2-\mathrm{dist}(x_t,z),j\big)$. Because the $\{x_t\}$ are at pairwise distance at least $2r+20$, no vertex is at distance $r+5$ or less from any two $x_t$ and $x_{t'}$, so that $f^*$ is well-defined. Because $R^k_r$ contains $B^k_r$, the $f^*$ we constructed so far is a graph homomorphism. Furthermore, for each $x_t$ the vertices $z$ at distance $i_t+1$ from $x_t$ lie in the first $\sqrt{\beta}n$ vertices of $\mathcal L$, and so by~\ref{H:v1} satisfy $f^*(z)=f(z)$. We complete the construction of $f^*$ by setting $f^*(z)=f(z)$ for each remaining $z\in V(H)\setminus\mathrm{dom}(\phi_{t^*})$. Because $f$ is a graph homomorphism, $f^*$ is also a graph homomorphism whose domain is $V(H')$. For each $i\in[r]$ and $j\in[k]$, let $W'_{i,j}$ be the set of vertices $w\in V(H')$ with $f^*(w)=(i,j)$, and let $X'$ consist of $X$ together with all vertices of $H'$ at distance $r+10$ or less from some $x_t$ with $t\in[t^*]$. The total number of vertices $z\in V(H)$ at distance at most $r+10$ from some $x_t$ is at most $2\Delta^{r+10}|V_0|<\tfrac{1}{100}\xi n$. Since $W_{i,j}\triangle W'_{i,j}$ contains only such vertices, we have \begin{enumerate}[label=\itmarabp{H}{b}] \item\label{Hp:sizeWp} $m_{i,j}-\tfrac15\xi n\le |W'_{i,j}|\le m_{i,j}+\tfrac15\xi n$, \item\label{Hp:sizeX} $|X'| \leq 2\xi n$, \item\label{Hp:edge} $\{f^*(x),f^*(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H')$, and \item\label{Hp:special} $y,z\in \bigcup_{j'\in[k]}W'_{i,j'}$ for every $x\in W'_{i,j}\setminus X'$ and $xy,yz\in E(H')$. \end{enumerate} where~\ref{Hp:sizeX},~\ref{Hp:edge} and~\ref{Hp:special} hold by~\ref{H:sizeX} and definition of $X'$, by definition of $f^*$, and by~\ref{H:special} and choice of $X'$ respectively. Furthermore, we have \begin{enumerate}[label=\itmarabp{G}{a}] \item $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item $\mathcal V$ is $(\varepsilon,d,p)_G$-lower-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon, d,p)_G$-lower-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item $|N_{\Gamma}(v,V_{i,j})| = (1\pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{main:GpI} $\big|V_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\alpha p^{|J_x|}|V_{f^*(x)}|$ for each $x\in V(H')$, \item\label{main:GpGI} $\big|V_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm\varepsilon^*)p^{|J_x|}|V_{f^*(x)}|$ for each $x\in V(H')$, and \item\label{main:GpIreg} $\big(V_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(\varepsilon^*,d,p)_G$-lower-regular for each $xy\in E(H')$. \item\label{main:GaI} $\big|\bigcap_{u\in J_x}N_\Gamma(u)\big|\le(1+\varepsilona) p^{|J_x|}n$ for each $x\in V(H')$. \end{enumerate} Properties~\ref{main:Gsize} to~\ref{main:Ggam} are repeated for convenience. Properties~\ref{main:GpI},~\ref{main:GpGI} and~\ref{main:GaI} are trivial when $J_x=\emptyset$, and are otherwise guaranteed by Lemma~\ref{lem:common}. Finally~\ref{main:GpIreg} follows from~\ref{main:Greg} when $J_x,J_y=\emptyset$, and otherwise is guaranteed by Lemma~\ref{lem:common}.
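We also note that the bound $2\Delta^{r+10}|V_0|<\tfrac{1}{100}\xi n$ used above is indeed ensured by the choice of $C$: since $\Delta\ge2$ and $p\ge C\big(\tfrac{\log n}{n}\big)^{1/\Delta}$ we have $p^{-2}\le C^{-2}n$, and hence, using $r\le r_1$ and the fact that $C\ge 10^{10}\xi^{-1}\Delta^{2r_1+20}C^{\ast}$, \[2\Delta^{r+10}|V_0|\le 2\Delta^{r_1+10}C^{\ast}C^{-2}n<\tfrac{1}{100}\xi n\,.\]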
For each $i\in[r]$ and $j\in[k]$, let $V'_{i,j}=V_{i,j}\setminus\mathrm{im}(\phi_{t^*})$, and let $\mathcal V'=\{V'_{i,j}\}_{i\in[r],j\in[k]}$. Because $V_{i,j}\setminus V'_{i,j}\subseteq S$ for each $i\in[r]$ and $j\in[k]$, using~\eqref{eq:intS} and Proposition~\ref{prop:subpairs3}, and our choice of $\mu$, we obtain \begin{enumerate}[label=\itmarabp{G}{b}] \item\label{Gp:sizeV} $\frac{n}{6kr}\leq |V'_{i,j}| \leq \frac{6n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{Gp:Greg} $\mathcal V'$ is $(2\varepsilon,d,p)_G$-lower-regular on $R^k_r$ and $(2\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item\label{Gp:Ginh} both $\big(N_{\Gamma}(v, V'_{i,j}),V'_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V'_{i,j}),N_{\Gamma}(v, V'_{i',j'})\big)$ are $(2\varepsilon, d,p)_G$-lower-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{Gp:GsGa} $|N_{\Gamma}(v,V'_{i,j})| = (1 \pm 2\varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{Gp:sizeI} $\big|V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\tfrac12\alpha p^{|J_x|}|V'_{f^*(x)}|$, \item\label{Gp:sizeGa} $\big|V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm2\varepsilon^*)p^{|J_x|}|V'_{f^*(x)}|$, and \item\label{Gp:Ireg} $\big(V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V'_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(2\varepsilon^*,d,p)_G$-lower-regular. \item\label{Gp:GaI} $\big|\bigcap_{u\in J_x}N_\Gamma(u)\big|\le(1+2\varepsilona) p^{|J_x|}n$ for each $x\in V(H')$. \end{enumerate} We are now almost finished. The only remaining problem is that we do not necessarily have $|W'_{i,j}|=|V'_{i,j}|$ for each $i\in[r]$ and $j\in[k]$. Since $|V'_{i,j}|=|V_{i,j}|\pm 2\Delta^{r+10}|V_0|=m_{i,j}\pm 3\Delta^{r+10}|V_0|$, by~\ref{Hp:sizeWp} we have $|V'_{i,j}|=|W'_{i,j}|\pm \xi n$. We can thus apply Lemma~\ref{lem:balancing}, with input $k$, $r_1$, $\Delta$, $\gamma$, $d$, $8\varepsilon$, and $r$. This gives us sets $V''_{i,j}$ with $|V''_{i,j}|=|W'_{i,j}|$ for each $i\in[r]$ and $j\in[k]$ by~\ref{lembalancing:sizesout}. Let $\mathcal V''=\{V''_{i,j}\}_{i\in[r],j\in[k]}$. Lemma~\ref{lem:balancing} guarantees us the following. \begin{enumerate}[label=\itmarabp{G}{c}] \item\label{Gpp:sizeV} $\frac{n}{8kr}\leq |V''_{i,j}| \leq \frac{8n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{Gpp:Greg} $\mathcal V''$ is $(4\varepsilona,d,p)_G$-lower-regular on $R^k_r$ and $(4\varepsilona,d,p)_G$-super-regular on $K^k_r$, \item\label{Gpp:Ginh} both $\big(N_{\Gamma}(v, V''_{i,j}),V''_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V''_{i,j}),N_{\Gamma}(v, V''_{i',j'})\big)$ are $(4\varepsilona, d,p)_G$-lower-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{Gpp:GsGa} we have $(1-4\varepsilon)p|V''_{i,j}| \leq |N_{\Gamma}(v,V''_{i,j})| \leq (1 + 4\varepsilon)p|V''_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{Gpp:sizeI} $\big|V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\tfrac14\alpha p^{|J_x|}|V''_{f^*(x)}|$, \item\label{Gpp:sizeGa} $\big|V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm4\varepsilon^*)p^{|J_x|}|V''_{f^*(x)}|$, and \item\label{Gpp:Ireg} $\big(V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V''_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(4\varepsilon^*,d,p)_G$-lower-regular.
\end{enumerate} Here~\ref{Gpp:sizeV} comes from~\ref{Gp:sizeV} and~\ref{lembalancing:symd}, while~\ref{Gpp:Greg} comes from~\ref{lembalancing:regular} and choice of $\varepsilon$. \ref{Gpp:Ginh} is guaranteed by~\ref{lembalancing:inheritance}. Now, each of~\ref{Gpp:GsGa},~\ref{Gpp:sizeI} and~\ref{Gpp:sizeGa} comes from the corresponding~\ref{Gp:GsGa},~\ref{Gp:sizeI} and~\ref{Gp:sizeGa} together with~\ref{lembalancing:gammaout}. Finally,~\ref{Gpp:Ireg} comes from~\ref{Gp:Ireg} and~\ref{Gp:GaI} together with Proposition~\ref{prop:subpairs3} and~\ref{lembalancing:gammaout}. For each $x\in V(H')$ with $J_x=\emptyset$, let $I_x=V''_{f^*(x)}$. For each $x\in V(H')$ with $J_x\neq\emptyset$, let $I_x=V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)$. Now $\mathcal W'$ and $\mathcal V''$ are $\kappa$-balanced (by~\ref{Gpp:sizeV}) and size-compatible (by construction) partitions of $V(H')$ and $V(G)\setminus\mathrm{im}(\phi_{t^*})$ respectively, with parts of size at least $n/(\kappa r_1)$ by~\ref{Gpp:sizeV}. Letting $\widetilde{W}_{i,j}:=W'_{i,j}\setminus X'$, by~\ref{Hp:sizeX}, choice of $\xi$, and~\ref{Hp:special}, $\{\widetilde{W}_{i,j}\}_{i\in[r],j\in[k]}$ is a $\big(\vartheta,K^k_r\big)$-buffer for $H'$. Furthermore, since $f^*$ is a graph homomorphism from $H'$ to $R^k_r$, we have~\ref{itm:blowup:H}. By~\ref{Gpp:Greg},~\ref{Gpp:Ginh} and~\ref{Gpp:GsGa} we have~\ref{itm:blowup:G}, with $R=R^k_r$ and $R'=K^k_r$. Finally, the pair $(\mathcal I,\mathcal J)=\big(\{I_x\}_{x\in V(H')},\{J_x\}_{x\in V(H')}\big)$ forms a $\big(\rho,\tfrac14\alpha,\Delta,\Delta\big)$-restriction pair. To see this, observe that the total number of image restricted vertices in $H'$ is at most $\Delta^2|V_0|<\rho|V_{i,j}|$ for any $i\in[r]$ and $j\in[k]$, giving~\ref{itm:restrict:numres}. Since for each $x\in V(H')$ we have $|J_x|+\deg_{H'}(x)=\deg_H(x)\le\Delta$ we have~\ref{itm:restrict:Jx}, while~\ref{itm:restrict:sizeIx} follows from~\ref{Gpp:sizeI}, and~\ref{itm:restrict:sizeGa} follows from~\ref{Gpp:sizeGa}. Finally,~\ref{itm:restrict:Ireg} follows from~\ref{Gpp:Ireg}, and~\ref{itm:restrict:DJ} follows since $\Delta(H)\le\Delta$. Together this gives~\ref{itm:blowup:restrict}. Thus, by Lemma~\ref{thm:blowup} there exists an embedding $\phi$ of $H'$ into $G\setminus\mathrm{im}(\phi_{t^*})$, such that $\phi(x)\in I_x$ for each $x\in V(H')$. Finally, $\phi\cup\phi_{t^*}$ is an embedding of $H$ in $G$, as desired. \end{proof} With Theorem~\ref{thm:maink} in hand, we can now present the proof of Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Given $\gamma$, $\Delta$, and $k$, let $\beta>0$, $z>0$, and $C >0$ be returned by Theorem~\ref{thm:maink} with input $\gamma$, $\Delta$, and $k$. Set $\beta^\ast := \beta/2$ and $C^{\ast} := C/\beta$. Let $H$ be a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$ such that there exists a set $W$ of at least $C^{\ast} p^{-2}$ vertices in $V(H)$ that are not contained in any triangles of $H$ and such that there exists a labelling $\mathcal L$ of its vertex set of bandwidth at most $\beta^\ast n$. By the choice of $C^{\ast}$ we find an interval $I \subseteq \mathcal L$ of length $\beta n$ containing a subset $F \subseteq W$ with $|F| = C p^{-2}$. Now we can rearrange the labelling $\mathcal L$ to a labelling $\mathcal L'$ of bandwidth at most $2 \beta^\ast n = \beta n$ such that $F$ is contained in the first $\beta n$ vertices in $\mathcal L'$.
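To see that an interval $I$ as above exists, one may for instance split $\mathcal L$ into $\beta^{-1}$ consecutive intervals of length $\beta n$ (assume for simplicity of exposition that $\beta^{-1}$ is an integer); by pigeonhole, one of these intervals contains at least $\beta|W|\ge\beta C^{\ast}p^{-2}=Cp^{-2}$ vertices of $W$.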
Then, by Theorem~\ref{thm:maink} we know that $\Gamma = G(n,p)$ satisfies the following a.a.s.~if $p\geq C(\log n/n)^{1/\Delta}$ and in particular if $p\geq C^{\ast}(\log n/n)^{1/\Delta}$. If $G$ is a spanning subgraph of $\Gamma$ with $\delta(G) \geq \big((k-1)/k+\gamma\big)pn$, then $G$ contains a copy of $H$, which finishes the proof. \end{proof} \section{Lowering the probability for degenerate graphs} \label{sec:proofdegen} As with Theorem~\ref{thm:main}, we deduce Theorem~\ref{thm:degenerate} from the following more general statement. \begin{theorem}\label{thm:degen} For each $\gamma>0$, $\Delta \geq 2$, $D\ge 1$ and $k\geq 1$, there exist constants $\beta >0$, $z>0$, and $C>0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p\geq C\big(\frac{\log n}{n}\big)^{1/(2D+1)}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\big(\frac{k-1}{k}+\gamma\big)pn$ and let $H$ be a graph on $n$ vertices with $\Delta(H) \leq \Delta$ and degeneracy at most $D$, that has a labelling $\mathcal L$ of its vertex set of bandwidth at most $\beta n$, a $(k+1)$-colouring that is $(z,\beta)$-zero-free with respect to $\mathcal L$, and where the first $\sqrt{\beta} n$ vertices in $\mathcal L$ are not given colour zero and the first $\beta n$ vertices in $\mathcal L$ include $C p^{-2}$ vertices that are not in any triangles or copies of $C_4$ in $H$. Then $G$ contains a copy of $H$. \end{theorem} The proof of Theorem~\ref{thm:degen} is quite similar to that of Theorem~\ref{thm:maink}. We provide only a sketch, highlighting the differences. (For more details and background on this result see~\cite{JuliaEsDiss}.) The most important of these differences are that we do not use Lemma~\ref{lem:common} in the pre-embedding, and that we use a version of Lemma~\ref{thm:blowup} whose performance is better for degenerate graphs. In order to state this, we need the following definitions. Given an order $\tau$ on $V(H)$ and a family $\mathcal J$ of image restricting vertices, we define $\pi^\tau(x):=|J_x|+\big|\{y\in N_H(x):\tau(y)<\tau(x)\}\big|$. Now the condition on $\tau$ we need for our enhanced blow-up lemma is the following. \begin{definition}[$(\tilde{D},p,m)$-bounded order]\label{def:Dpm_bdd_order} Let~$H$ be a graph given with buffer sets $\widetilde{\mathcal{W}}$ and a restriction pair~$\mathcal I=\{I_x\}_{x\in V(H)}$ and~$\mathcal J=\{J_x\}_{x\in V(H)}$. Let~$\widetilde{W}=\bigcup\widetilde{\mathcal{W}}$. Let~$\tau$ be an ordering of $V(H)$ and $W^e\subseteq V(H)$. Then~$\tau$ is a \emph{$(\tilde{D},p,m)$-bounded order} for~$H$, $\widetilde{\mathcal{W}}$, $\mathcal I$ and $\mathcal J$ with \emph{exceptional set} $W^e$ if the following conditions are satisfied for each $x\in V(H)$. \begin{enumerate}[label=\itmarab{ORD}] \item\label{ord:Dx} Define \[ \tilde{D}_x:=\begin{cases} \tilde{D}-2 & \text{if there is $yz\in E(H)$ with $y,z\in N_H(x)$ and $\tau(y),\tau(z)>\tau(x)$}\\ \tilde{D}-1 & \text{else if there is $y\in N_H(x)$ with $\tau(y)>\tau(x)$}\\ \tilde{D} & \text{otherwise}\,. \end{cases}\] We have $\pi^\tau(x)\le \tilde{D}_x$, and if $x\in N(\widetilde{W})$ even $\pi^\tau(x)\le \tilde{D}_x-1$. Finally, if $x\in\widetilde{W}$ we have $\deg(x)\le \tilde{D}$. \item\label{ord:halfD} One of the following holds: \begin{itemize} \item $x\in W^e$, \item $\pi^\tau(x)\le \frac12 \tilde{D}$, \item $x$ is not image restricted and every neighbour~$y$ of~$x$ with $\tau(y)<\tau(x)$ satisfies $\tau(x)-\tau(y)\le p^{\pi^\tau(x)}m$.
\end{itemize} \item\label{ord:NtX} If $x\in N(\widetilde{W})$ then all but at most $\tilde{D}-1-\max_{z\not\in W^e}\pi^\tau(z)$ neighbours~$y$ of $x$ with $\tau(y)<\tau(x)$ satisfy $\tau(x)-\tau(y)\le p^{\tilde{D}} m$. \end{enumerate} \end{definition} To obtain the best probability bound, one should choose $\tau$ to minimise $\tilde{D}$. In the proof of Theorem~\ref{thm:degen} we will take $\tau$ to be an order witnessing $D$-degeneracy, $W^e$ will contain all image restricted vertices, and we will choose buffer sets containing vertices of degree at most $2D+1$. One can easily check that this allows us to choose $\tilde{D}=2D+1$. \begin{lemma}[{\cite[Lemma~1.23]{blowup}}] \label{thm:dblow} For all $\Delta\ge 2$, $\Delta_{R'}$, $\Delta_J$, $\tilde{D}$, $\alpha,\zeta, d>0$, $\kappa>1$ there exist $\varepsilon,\rho>0$ such that for all $r_1$ there is a $C$ such that for \[p\ge C\bigg(\frac{\log n}{n}\bigg)^{1/\tilde{D}}\] the random graph $\Gamma=G_{n,p}$ a.a.s.\ satisfies the following. Let $R$ be a graph on $r\le r_1$ vertices and let $R'\subseteq R$ be a spanning subgraph with $\Delta(R')\leq \Delta_{R'}$. Let $H$ and $G\subseteq \Gamma$ be graphs with $\kappa$-balanced, size-compatible vertex partitions $\mathcal W=\{W_i\}_{i\in[r]}$ and $\mathcal V=\{V_i\}_{i\in[r]}$, respectively, which have parts of size at least $m\ge n/(\kappa r_1)$. Let $\widetilde{\mathcal{W}}=\{\widetilde{W}_i\}_{i\in[r]}$ be a family of subsets of $V(H)$, $\mathcal I=\{I_x\}_{x\in V(H)}$ be a family of image restrictions, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a family of restricting vertices. Let $\tau$ be an order of $V(H)$ and $W^e\subseteq V(H)$ be a set of size $|W^e|\le\varepsilon p^{{\max_{x\in W^e}\pi^\tau(x)}}n/r_1$. Suppose that \begin{enumerate}[label=\itmarab{DBUL}] \item\label{dbul:1} $\Delta(H)\leq \Delta$, $(H,\mathcal W)$ is an $R$-partition, and $\widetilde{\mathcal{W}}$ is an $(\alpha,R')$-buffer for $H$, \item\label{dbul:2} $(G,\mathcal V)$ is an $(\varepsilon,d,p)$-lower-regular $R$-partition, which is $(\varepsilon,d,p)$-super-regular on $R'$, has one-sided inheritance on~$R'$, and two-sided inheritance on~$R'$ for $\widetilde{\mathcal{W}}$, \item\label{dbul:3} $\mathcal I$ and $\mathcal J$ form a $(\rho,\zeta,\Delta,\Delta_J)$-restriction pair. \item\label{dbul:4} $\tau$ is a $(\tilde{D},p,\varepsilon n/r_1)$-bounded order for~$H$, $\widetilde{\mathcal{W}}$, $\mathcal I$, $\mathcal J$ with exceptional set~$W^e$. \end{enumerate} Then there is an embedding $\psi\colon V(H)\to V(G)$ such that $\psi(x)\in I_x$ for each $x\in H$. \end{lemma} \begin{proof}[Sketch proof of Theorem~\ref{thm:degen}] We set up constants quite similarly as in the proof of Theorem~\ref{thm:maink}. Specifically, given $\gamma>0$, $\Delta\ge 2$, $D$ and $k\ge 2$, let $d$ be returned by Lemma~\ref{lem:G}, with input $\gamma$, $k$ and $r_0:=10\gamma^{-1}$. Let $\alpha=\tfrac{d}{2}$. Let $\tilde{D}=2D+1$. Now let $\eBL>0$ and $\rho>0$ be returned by Lemma~\ref{thm:dblow} with input $\Delta$, $\Delta_{R'}=3k$, $\Delta_J=\Delta$, $\tilde{D}$, $\vartheta=\tfrac{1}{100D}$, $\zeta=\tfrac14\alpha$, $d$ and $\kappa=64$. Let $\varepsilona=\tfrac18\eBL$, and then Lemma~\ref{lem:OSRIL}, for input $\varepsilona$ and $d$, returns $\varepsilon_1>0$. Let $\varepsilon_0>0$ be small enough both for Lemma~\ref{lem:TSRIL} with input $\varepsilona$ and $d$, and for Lemma~\ref{lem:OSRIL} with input $\varepsilon_1$ and $d$. We choose $\varepsilon=\min\big(\varepsilon_0,d,\tfrac{1}{4D}\varepsilona,\tfrac{1}{2k}\big)$.
Putting $\varepsilon$ into Lemma~\ref{lem:G} returns $r_1$. Next, Lemma~\ref{lem:balancing}, for input $k$, $r_1$, $\Delta$, $\gamma$, $d$ and $8\varepsilon$, returns $\xi>0$. We assume without loss of generality that $\xi\le 1/(10kr_1)$, and set $\beta=10^{-12}\xi^2/(\Delta k^4r_1^2)$. Let $\mu=\tfrac{\varepsilon^2}{100000kr}$. Finally, suppose $C^{\ast}$ is large enough for each of these lemmas, for Lemma~\ref{thm:dblow}, for Proposition~\ref{prop:chernoff} with input $\varepsilon$, and for Lemma~\ref{lem:hypgeo} with input $\varepsilon\mu^2$ and $\Delta$. We set $C=10^{10}k^2r_1^2\varepsilon^{-2}\xi^{-1}\Delta^{2r_1+20}\mu^{-1}C^{\ast}$, and $z=10/\xi$. Given $p\ge C\big(\tfrac{\log n}{n}\big)^{1/(2D+1)}$, a.a.s.\ $G(n,p)$ satisfies the good events of Lemma~\ref{thm:dblow}, Lemma~\ref{lem:G}, Lemma~\ref{lem:OSRIL} and Lemma~\ref{lem:TSRIL}, and Proposition~\ref{prop:chernoff}, with the stated inputs. Suppose that $\Gamma=G(n,p)$ satisfies these good events. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G)\ge\big(\tfrac{k-1}{k}+\gamma\big)pn$. Let $H$ be any graph on $n$ vertices with $\Delta(H)\le\Delta$ and degeneracy at most $D$, and let $\mathcal L$ be a labelling of $V(H)$ of bandwidth at most $\beta n$ whose first $\beta n$ vertices include $C p^{-2}$ vertices that are not contained in any triangles or four-cycles of $H$, and such that there exists a $(k+1)$-colouring that is $(z,\beta)$-zero-free with respect to $\mathcal L$, and the colour zero is not assigned to the first $\sqrt{\beta}n$ vertices. Furthermore, let $\tau$ be a $D$-degeneracy order of $V(H)$. Next, as in the proof of Theorem~\ref{thm:maink}, we apply Lemma~\ref{lem:G} to $G$, obtaining a partition of $V(G)$ with the properties~\ref{main:Gsize}--\ref{main:Ggam}. Note that if $D=1$, in place of~\ref{main:Ginh} we will ask only for the weaker condition \begin{enumerate}[label=\itmsol{G}{3'}] \item\label{main:Ginhp} $\big(N_\Gamma(v,V_{i,j}),V_{i',j'}\big)$ is an $(\varepsilon,d,p)_G$-lower-regular pair for every $\big\{(i,j),(i',j')\big\}\in E(R^k_r)$ and $v\in V\setminus V_0$, \end{enumerate} and thus for $D=1$ we have $|V_0|\le C^{\ast} p^{-1}$, while for $D\ge 2$ we have $|V_0|\le C^{\ast} p^{-2}$. Next, we apply Lemma~\ref{lem:H2} to obtain a partition of $V(H)$. We use the same inputs as in the proof of Theorem~\ref{thm:maink}, with the exception that $D$ is now given in the statement of Theorem~\ref{thm:degen} rather than being set equal to $\Delta$. The result is a function $f:V(H)\to V(R^k_r)$ and a special set $X$ with the same properties~\ref{H:size}--\ref{H:v1}, and in addition \begin{enumerate}[label=\itmsol{H}{6a}] \item\label{H:buf} $|\{x\in f^{-1}(i,j): \deg(x) \leq 2D\}| \geq \tfrac{1}{24D} |f^{-1}(i,j)|$. \end{enumerate} We now continue following the proof of Theorem~\ref{thm:maink}, using Lemma~\ref{lem:hypgeo} with input $\varepsilon\mu^2$ and $D+1$ (rather than $\varepsilon\mu^2$ and $\Delta$), to choose a set $S$ satisfying~\eqref{eq:intS} for each $1\le\ell\le D+1$ and vertices $u_1,\dots,u_{\ell}$ of $V(G)$. We use the same pre-embedding Algorithm~\ref{alg:pre}, with the exception that we choose vertices at line~\ref{line:choosenbs} differently. As before, given $v_{t+1}\in V_0\setminus\mathrm{im}(\phi_t)$, we use Claim~\ref{cl:chooseW} to find a set $W\subseteq N_G(v_{t+1})$ of size at least $\tfrac{1}{8r}\mu p n$ and an index $i\in[r]$ such that for each $w\in W$ we have $\big|N_G(w,V_{i,j})\big|\ge dp|V_{i,j}|$ for each $j\in[k]$.
However, rather than applying Lemma~\ref{lem:common}, we let $w_1,\dots,w_\ell$ be distinct vertices of $W$ which satisfy~\ref{main:GpI}--\ref{main:GaI}. We now justify that this is possible. We choose the $w_1,\dots,w_\ell$ successively. Since $x_{t+1}$ is not contained in any triangle or four-cycle of $H$, we have $|J_x|\le 1$ for each $x\in V(H')$, so that~\ref{main:GpI} is automatically satisfied. By Proposition~\ref{prop:chernoff},~\ref{main:GpGI} and~\ref{main:GaI} are satisfied for all but at most $2C^{\ast} kr_1 p^{-1}\log n$ vertices of $W$. It remains to show that we can obtain~\ref{main:GpIreg}, which we do as follows. For $s\in[\ell]$, when we come to choose $w_s$, we insist that for any $\big\{(i,j),(i',j')\big\}\in E(R^k_r)$, the following hold. First, $\big(N_\Gamma(w_s,V_{i,j}),V_{i',j'}\big)$ is $(\varepsilon_1,d,p)_G$-lower-regular. Second, $\big(N_\Gamma(w_s,V_{i,j}),N_\Gamma(w_s,V_{i',j'})\big)$ is $(\varepsilona,d,p)_G$-lower-regular. Third, for each $1\le t\le s-1$, $\big(N_\Gamma(w_s,V_{i,j}),N_\Gamma(w_t,V_{i',j'})\big)$ is $(\varepsilona,d,p)_G$-lower-regular. The conditions of respectively Lemma~\ref{lem:OSRIL}, Lemma~\ref{lem:TSRIL}, and Lemma~\ref{lem:OSRIL} are in each case satisfied (in the last case by choice of $w_t$) and thus in total at most $3C^{\ast} k^2r_1^2\max\{p^{-2},p^{-1}\log n\}$ vertices of $W$ are prohibited. Since $5C^{\ast} k^2r_1^2\max\{p^{-2},p^{-1}\log n\}+\ell<\tfrac{|W|}{2}$ by choice of $C$, at each step there is a valid choice of $w_s$. Since for each $x\in V(H')$ we have $|J_x|\le 1$, this construction guarantees~\ref{main:GpIreg}. We now return to following the proof of Theorem~\ref{thm:maink}. We obtain $\mathcal V'$ by removing the images of pre-embedded vertices, and $\mathcal V''$ by applying Lemma~\ref{lem:balancing}. Note that here~\ref{lembalancing:gammaout} may be trivial, that is, the error term $C^{\ast}\log n$ may dominate the main term when $s$ is large, but we only require it for $s=1$ to obtain~\ref{Gpp:sizeV}--\ref{Gpp:Ireg}. Finally, we are ready to apply Lemma~\ref{thm:dblow} to complete the embedding. We define $(\mathcal I,\mathcal J)$ as in the proof of Theorem~\ref{thm:maink}. We however let $\widetilde{W}_{i,j}$ consist of the vertices of $W'_{i,j}\setminus X$ whose degree is at most $2D$. By~\ref{H:buf} there are at least $\tfrac{1}{100D}|W'_{i,j}|$ of these, so that $\widetilde{\mathcal{W}}$ is a $(\vartheta,K^k_r)$-buffer, giving~\ref{dbul:1}. Now~\ref{dbul:2} follows from~\ref{Gpp:Greg} and~\ref{Gpp:Ginh}. Finally, $(\mathcal I,\mathcal J)$ is a $(\rho,\tfrac14\alpha,\Delta,\Delta)$-restriction pair, giving~\ref{dbul:3}, exactly as in the proof of Theorem~\ref{thm:maink}. However now we need to give an order $\tau'$ on $V(H')$ and a set $W^e\subseteq V(H')$. The former is simply the restriction of $\tau$ to $V(H')$, and the set $W^e$ consists of all vertices $x\in V(H)$ with $|J_x|>0$. We now verify the remaining conditions of Lemma~\ref{thm:dblow}. We claim $|W^e|\le \Delta^2|V_0|\le \varepsilon p^{\max_{x\not\in W^e}\pi^{\tau'}(x)}n/r_1$. Observe that $\pi^{\tau'}(x)\le\pi^\tau(x)+|J_x|\le D+1$. For $D=1$, we have $|V_0|\le C^{\ast} p^{-1}$, and by choice of $C$ the desired inequality follows. For $D\ge 2$, we have $|V_0|\le C^{\ast} p^{-2}$, and again by choice of $C$ we have the desired inequality. The last condition we must verify is~\ref{dbul:4}, that $\tau'$ is a $(\tilde{D},p,\varepsilon n/r_1)$-bounded order.
For any vertex $x$ of $H'$, we have $\pi^{\tau'}(x)\le\pi^\tau(x)+1\le D+1$, and furthermore for all vertices not in $W^e$ we have $\pi^{\tau'}(x)=\pi^\tau(x)\le D$. To verify~\ref{ord:Dx}, first note that by construction the vertices of $\bigcup\widetilde{\mathcal{W}}$ have degree at most $2D\le\tilde{D}$. Further, observe that if $D=1$ then $H'$ contains no triangles, and $\tilde{D}=3=D+2$. Since vertices in $N\big(\bigcup\widetilde{\mathcal{W}}\big)$ are by construction not image restricted, and so are not in $W^e$, this is as required for~\ref{ord:Dx}. If on the other hand $D\ge 2$ then $\tilde{D}\ge D+3$, and again the conditions of~\ref{ord:Dx} are met. Next, if $x\not\in W^e$ then $\pi^{\tau'}(x)\le D$, so that~\ref{ord:halfD} holds. Finally, observe that $\max_{z\not\in W^e}\pi^{\tau'}(z)\le D$, and vertices $x\in N\big(\bigcup\widetilde{\mathcal{W}}\big)$ by construction have $\pi^{\tau'}(x)=\pi^\tau(x)\le D$, so that~\ref{ord:NtX} holds. We can thus apply Lemma~\ref{thm:dblow} to embed $H'$ into $G\setminus\mathrm{im}(\phi_{t^*})$, completing the embedding of $H$ into $G$ as desired. \end{proof} The proof of Theorem~\ref{thm:degenerate} from Theorem~\ref{thm:degen} follows the deduction of Theorem~\ref{thm:main} from Theorem~\ref{thm:maink}, and we omit it. \section{The Bandwidth Theorem in bijumbled graphs} \label{sec:proofjumbled} Again, Theorem~\ref{thm:jumbled} is a consequence of the following. \begin{theorem}\label{thm:jumbledk} For each $\gamma >0$, $\Delta \geq 2$, and $k \geq 1$, there exist constants $c >0$ and $z>0$ such that the following holds for any $p>0$. Given $\nu\le cp^{\max(4,(3\Delta+1)/2)}n$, suppose $\Gamma$ is a $\big(p,\nu\big)$-bijumbled graph, $G$ is a spanning subgraph of $\Gamma$ with $\delta(G) \geq\big(\tfrac{k-1}{k}+\gamma\big)pn$, and $H$ is a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$ and bandwidth at most $c n$. Suppose further that $H$ has a labelling $\mathcal L$ of its vertex set of bandwidth at most $c n$, a $(k+1)$-colouring that is $(z,c)$-zero-free with respect to $\mathcal L$, and where the first $\sqrt{c} n$ vertices in $\mathcal L$ are not given colour zero, and the first $cn$ vertices in $\mathcal L$ include $c^{-1}p^{-6} \nu^2n^{-1}$ vertices in $V(H)$ that are not contained in any triangles of $H$. Then $G$ contains a copy of $H$. \end{theorem} The proof of Theorem~\ref{thm:jumbledk} is a straightforward modification of that of Theorem~\ref{thm:maink}. Rather than repeating the entire proof, we sketch the modifications which have to be made. Again, for more details and background on this result see~\cite{JuliaEsDiss}. Since we are working with bijumbled graphs, we need to work with regular pairs, rather than lower-regular pairs, at all times. In order to use this concept, and to work with bijumbled graphs, we need versions of Lemmas~\ref{thm:blowup},~\ref{lem:OSRIL}, and~\ref{lem:TSRIL}, and Proposition~\ref{prop:chernoff}, which work with regular pairs and with $\Gamma$ a bijumbled graph rather than a random graph. We also need the following easy proposition, which lower bounds the possible $\nu$ for a $(p,\nu)$-jumbled graph with $p>0$. \begin{proposition}\label{prop:bijn} Suppose $\tfrac{16}{n}<p<1-\tfrac{16}{n}$. There does not exist any $(p,\nu)$-bijumbled $n$-vertex graph with $\nu\le\min\big(\sqrt{pn/32},\sqrt{(1-p)n/32}\big)$. \end{proposition} \begin{proof} Suppose that $\Gamma$ is a $(p,\nu)$-bijumbled graph on $n$ vertices with $p\le\tfrac12$.
If $\Gamma$ contains $\tfrac12n$ vertices of degree at least $4pn$, then we have $e(\Gamma)\ge pn^2$, and letting $A,B$ be a maximum cut of $\Gamma$, by bijumbledness we have \[\frac12pn^2\le e(A,B)\le p|A||B|+\nu\sqrt{|A||B|}\le\tfrac14pn^2+\frac12\nu n\,,\] and thus $\nu\ge pn/2\ge\sqrt{pn/32}$. If on the contrary $\Gamma$ contains at least $\tfrac12n$ vertices of degree less than $4pn$, then let $A$ be a set of $\tfrac{1}{8p}$ such vertices, and $B$ a set of $\tfrac{n}{4}$ vertices with no neighbours in $A$. By bijumbledness, we have \[0\ge p|A||B|-\nu\sqrt{|A||B|}=\frac{n}{32}-\nu\sqrt{n/(32p)}\] and thus $\nu\ge \sqrt{pn/32}$. The same argument applied to $\overline{\Gamma}$ proves the $p\ge\tfrac12$ case. \end{proof} The following sparse blow-up lemma for jumbled graphs is proved in~\cite{blowup}. \begin{lemma}[{\cite[Lemma~1.25]{blowup}}]\label{thm:jblowup} For all $\Delta\ge 2$, $\Delta_{R'}$, $\Delta_J$, $\alpha,\zeta, d>0$, $\kappa>1$ there exist $\varepsilon,\rho>0$ such that for all $r_1$ there is a $c>0$ such that if $p>0$ and \[\beta\le cp^{\max(4,(3\Delta+1)/2)}n\] any $(p,\beta)$-bijumbled graph~$\Gamma$ on $n$ vertices satisfies the following. Let $R$ be a graph on $r\le r_1$ vertices and let $R'\subseteq R$ be a spanning subgraph with $\Delta(R')\leq \Delta_{R'}$. Let $H$ and $G\subseteq \Gamma$ be graphs given with $\kappa$-balanced, size-compatible vertex partitions $\mathcal X=\{X_i\}_{i\in[r]}$ and $\mathcal V=\{V_i\}_{i\in[r]}$, respectively, which have parts of size at least $m\ge n/(\kappa r_1)$. Let $\widetilde{\mathcal{X}}=\{\widetilde{X}_i\}_{i\in[r]}$ be a family of subsets of $V(H)$, $\mathcal I=\{I_x\}_{x\in V(H)}$ be a family of image restrictions, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a family of restricting vertices. Suppose that \begin{enumerate}[label=\itmarab{JBUL}] \item $\Delta(H)\leq \Delta$, $(H,\mathcal X)$ is an $R$-partition, and $\widetilde{\mathcal{X}}$ is an $(\alpha,R')$-buffer for $H$, \item $(G,\mathcal V)$ is an $(\varepsilon,d,p)$-regular $R$-partition, which is $(\varepsilon,d,p)$-super-regular on $R'$, and has one-sided inheritance on~$R'$, and two-sided inheritance on~$R'$ for $\widetilde{\mathcal{X}}$, \item $\mathcal I$ and $\mathcal J$ form a $(\rho p^\Delta,\zeta,\Delta,\Delta_J)$-restriction pair. \end{enumerate} Then there is an embedding $\psi\colon V(H)\to V(G)$ such that $\psi(x)\in I_x$ for each $x\in H$. \end{lemma} There are three differences between this result and Lemma~\ref{thm:blowup}. First, we assume a bijumbledness condition on $\Gamma$, rather than that $\Gamma$ is a typical random graph. Second, we require regular pairs in place of lower-regular pairs. Third, the number of vertices we may image restrict is much smaller. We will see that these last two restrictions do not affect our proof substantially. Next, in~\cite{ABSS}, the following regularity inheritance lemmas for bijumbled graphs are proved. \begin{lemma}[{\cite[Lemma~3]{ABSS}}]\label{lem:pOSRIL} For each $\varepsilon',d>0$ there are $\varepsilon,c>0$ such that for all $0<p<1$ the following holds. Let $G\subseteq \Gamma$ be graphs and $X,Y,Z$ be disjoint vertex sets in $V(\Gamma)$. Assume that \begin{itemize} \item $(X,Z)$ is $(p,cp^{3/2}\sqrt{|X||Z|})$-bijumbled in $\Gamma$, \item $(X,Y)$ is $\big(p,cp^2(\log_2\tfrac{1}{p})^{-1/2}\sqrt{|X||Y|}\big)$-bijumbled in $\Gamma$, and \item $(X,Y)$ is $(\varepsilon,d,p)_G$-regular. 
\end{itemize} Then, for all but at most $\varepsilon'|Z|$ vertices~$z$ of~$Z$, the pair $\big(N_\Gamma(z)\cap X,Y\big)$ is $(\varepsilon',d,p)_G$-regular. \end{lemma} \begin{lemma}[{\cite[Lemma~4]{ABSS}}]\label{lem:pTSRIL} For each $\varepsilon',d>0$ there are $\varepsilon,c>0$ such that for all $0<p<1$ the following holds. Let $G\subseteq \Gamma$ be graphs and $X,Y,Z$ be disjoint vertex sets in $V(\Gamma)$. Assume that \begin{itemize} \item $(X,Z)$ is $(p,cp^{2}\sqrt{|X||Z|})$-bijumbled in $\Gamma$, \item $(Y,Z)$ is $(p,cp^3\sqrt{|Y||Z|})$-bijumbled in $\Gamma$, \item $(X,Y)$ is $(p,cp^{5/2}\big(\log_2\tfrac{1}{p}\big)^{-\frac12}\sqrt{|X||Y|})$-bijumbled in $\Gamma$, and \item $(X,Y)$ is $(\varepsilon,d,p)_G$-regular. \end{itemize} Then, for all but at most $\varepsilon'|Z|$ vertices~$z$ of~$Z$, the pair $\big(N_\Gamma(z)\cap X,N_\Gamma(z)\cap Y\big)$ is $(\varepsilon',d,p)_G$-regular. \end{lemma} The following two lemmas, which more closely resemble Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, are corollaries. \begin{lemma} \label{lem:pseudOSRIL} For each $\eo, \ao >0$ there exist $\varepsilon_0 >0$ and $C >0$ such that for any $0 < \varepsilon < \varepsilon_0$ and $0 < p <1$, if $\Gamma$ is any $(p,\nu)$-bijumbled graph the following holds. For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|\geq C p^{-3} \nu$ and $|Y| \geq C p^{-2} \nu$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon, \ao,p)_G$-regular, there are at most $C p^{-3}\nu^2|X|^{-1}$ vertices $z \in V(\Gamma)$ such that $(X \cap N_{\Gamma}(z),Y)$ is not $(\eo,\ao,p)_G$-regular. \end{lemma} \begin{lemma} \label{lem:pseudTSRIL} For each $\et,\at>0$ there exist $\varepsilon_0>0$ and $C >0$ such that for any $0<\varepsilon<\varepsilon_0$ and $0<p<1$, if $\Gamma$ is any $(p,\nu)$-bijumbled graph the following holds. For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|,|Y|\ge Cp^{-3}\nu$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon,\at,p)_G$-regular, there are at most $Cp^{-6}\nu^2/\min\big(|X|,|Y|\big)$ vertices $z \in V(\Gamma)$ such that $\big(X\cap N_\Gamma(z),Y\cap N_\Gamma(z)\big)$ is not $(\et,\at,p)_G$-regular. \end{lemma} Note that the bijumbledness requirements of this lemma are such that if $Y$ and $Z$ are sets of size $\Theta(n)$, then $X$ must have size $\Omega\big(p^{-6}\nu^2 n^{-1}\big)$. This is where the requirement of Theorem~\ref{thm:jumbledk} for vertices of $H$ not in triangles comes from. Finally, we give a bijumbled graphs version of Proposition~\ref{prop:chernoff}. We defer its proof, which is standard and similar to that of Proposition~\ref{prop:chernoff}, to Appendix~\ref{app:tools}. \begin{proposition} \label{prop:pseudchernoff} For each $\varepsilon>0$ there exists a constant $C >0$ such that for every $p>0$, any graph $\Gamma$ which is $(p,\nu)$-jumbled has the following property. For any disjoint $X,Y \subseteq V(\Gamma)$ with $|X|,|Y|\ge \varepsilon^{-1}p^{-1}\nu$, we have $e(X,Y)=(1\pm\varepsilon)p|X||Y|$, and $e(X)\le 2p|X|^2$. Furthermore, for every $Y\subseteq V(\Gamma)$ with $|Y|\ge Cp^{-1}\nu$, the number of vertices $v \in V(\Gamma)$ with $\big||N_{\Gamma}(v,Y)| - p |Y|\big| > \varepsilon p |Y|$ is at most $Cp^{-2}\nu^2|Y|^{-1}$. \end{proposition} Now, using these lemmas, we can prove bijumbled graph versions of Lemmas~\ref{lem:G} and~\ref{lem:common}, and use these to complete the proof of Theorem~\ref{thm:jumbledk}. All these proofs are straightforward modifications of those in the previous sections.
Briefly, the modifications we make are to replace `lower-regular' with `regular' in all proofs, to replace applications of lemmas for random graphs with the bijumbled graph versions above, and to recalculate some error bounds. The only one of our main lemmas which changes in an important way is the following Lemma for $G$. \begin{lemma}[Lemma for $G$, bijumbled graph version] \label{lem:pseudG} For each $\gamma > 0$ and integers $k \geq 2$ and $r_0 \geq 1$ there exists $d > 0$ such that for every $\varepsilon \in \left(0, \frac{1}{2k}\right)$ there exist $r_1\geq 1$ and $c,C^{\ast}>0$ such that the following holds for any $n$-vertex $(p,\nu)$-bijumbled graph $\Gamma$ with $\nu\le c p^3n$ and $p>0$. Let $G=(V,E)$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq \left(\frac{k-1}{k} + \gamma\right)pn$. Then there exists an integer $r$ with $r_0\leq kr \leq r_1$, a subset $V_0 \subseteq V$ with $|V_0| \leq C^{\ast} p^{-6}\nu^2 n^{-1}$, a $k$-equitable vertex partition $\mathcal V = \{V_{i,j}\}_{i\in[r],j\in[k]}$ of $V(G)\setminus V_0$, and a graph $R^k_r$ on the vertex set $[r] \times [k]$ with $K^k_r \subseteq B^k_r \subseteq R^k_r$, with $\delta(R^k_r) \geq \left(\frac{k-1}{k} + \frac{\gamma}{2}\right)kr$, and such that the following are true. \begin{enumerate}[label=\itmarab{G}] \item \label{plemG:size} $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item \label{plemG:regular} $\mathcal V$ is $(\varepsilon,d,p)_G$-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item \label{plemG:inheritance} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v', V_{i,j}),N_{\Gamma}(v', V_{i',j'})\big)$ are $(\varepsilon,d,p)_G$-regular pairs for every edge $\{(i,j),(i',j')\} \in E(R^k_r)$, every $v\in V\setminus (V_0 \cup V_{i,j})$ and $v'\in V\setminus (V_0 \cup V_{i,j} \cup V_{i',j'})$, \item \label{plemG:gamma} we have $(1-\varepsilon)p|V_{i,j}| \leq |N_{\Gamma}(v,V_{i,j})| \leq (1 + \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \end{enumerate} \end{lemma} The change here, apart from replacing `lower-regular' with `regular', and working in bijumbled graphs, is that $V_0$ may now be a much larger set. Nevertheless, the proof is basically the same. \begin{proof}[Sketch proof of Lemma~\ref{lem:pseudG}] We begin the proof as in that of Lemma~\ref{lem:G}, setting up the constants in the same way, with the exception that we replace Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL} with Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}, and Proposition~\ref{prop:chernoff} with Proposition~\ref{prop:pseudchernoff}. We require $C$ to be sufficiently large for Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}, and for Proposition~\ref{prop:pseudchernoff}. We define $C^{\ast}=100k^2r_1^3C/\varepsilona$ as in the proof of Lemma~\ref{lem:G}, and set \[c=10^{-5}(\varepsilona)^3(kr_1)^{-3}(C^{\ast})^{-1}\,.\] We now assume $\Gamma$ is $(p,\nu)$-bijumbled rather than random, with $\nu\le cp^3n$. In particular, by choice of $c$ this implies that \begin{equation}\label{eq:psG:s} 10k^2r_1^2 Cp^{-2}\nu^2n^{-1}\le\varepsilona pn\quad\text{and}\quad 10k^2r_1^3 C p^{-6}\nu^2n^{-1}\le\varepsilona n\,. \end{equation} We obtain a regular partition, with a reduced graph containing $B^k_r$, exactly as in the proof of Lemma~\ref{lem:G}, using Proposition~\ref{prop:pseudchernoff} in place of Proposition~\ref{prop:chernoff} to justify the use of Lemma~\ref{lem:regularitylemma}.
The next place where we need to change things occurs in defining $Z_1$, where we replace `lower-regular' with `regular', and in estimating the size of $Z_1$. Using Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}, and Proposition~\ref{prop:pseudchernoff} in place of Proposition~\ref{prop:chernoff}, we replace~\eqref{eq:sizeZ1} with \[|Z_1|\le kr_1^2Cp^{-6}\nu^2n^{-1}+kr_1^2Cp^{-3}\nu^2n^{-1}+2kr_1Cp^{-2}\nu^2n^{-1}\le 4kr_1^2Cp^{-6}\nu^2n^{-1} \leByRef{eq:psG:s}\frac{\varepsilona}{kr_1}n\,.\] Note that the final conclusion is as in~\eqref{eq:sizeZ1}. We can now continue following the proof of Lemma~\ref{lem:G} until we come to estimate the size of $Z_2$, where we use Proposition~\ref{prop:pseudchernoff} and replace~\eqref{eq:sizeZ2} with \[|Z_2|\le r_1+kr_1 Cp^{-2}\nu^2n^{-1}\leByRef{eq:psG:s}\frac{\varepsilona}{kr_1}pn\,.\] Again, the final conclusion is as in~\eqref{eq:sizeZ2}. The next change we have to make is in estimating the size of $V_0$, where we replace~\eqref{eq:sizeV0} with \[|V_0|\le|Z_1|+|Z_2|\le 4kr_1^2 Cp^{-6}\nu^2n^{-1}+r_1+kr_1Cp^{-2}\nu^2 n^{-1}\le C^{\ast} p^{-6}\nu^2n^{-1}\,.\] Finally, we need regular pairs in~\ref{plemG:regular} and~\ref{plemG:inheritance}. We obtained regular pairs from Lemma~\ref{lem:regularitylemma} and in the definition of $Z_1$, so that we only need Proposition~\ref{prop:subpairs3} to return regular pairs. We always apply Proposition~\ref{prop:subpairs3} to pairs of sets of size at least $\tfrac{\varepsilona pn}{r_1}$, altering them by a factor $\varepsilona$. Now Proposition~\ref{prop:pseudchernoff} shows that if $X$ and $Y$ are disjoint subsets of $\Gamma$ with $|X|,|Y|\ge (\varepsilona p)^{-1}\nu$, then $e_\Gamma(X,Y)\le (1+\varepsilona)p|X||Y|$, as required. By choice of $c$, we have $(\varepsilona p)^{-1}\nu\le (\varepsilona)^2pn/r_1$, so that the condition of Proposition~\ref{prop:subpairs3} to return regular pairs is satisfied. \end{proof} The other one of our main lemmas which requires change, Lemma~\ref{lem:common}, only requires changing `lower-regular' to `regular' and replacing the random graph with a bijumbled $\Gamma$. This does require some change in the proof, as we then use the bijumbled graph versions of various lemmas, whose error bounds are different. \begin{lemma}[Common neighbourhood lemma, bijumbled graph version] \label{lem:pseudcommon} For each $d>0$, $k \geq 1$, and $\Delta \geq 2$ there exists $\alpha >0$ such that for every $\varepsilon^\ast \in (0,1)$ there exists $\varepsilon_0 >0$ such that for every $r\geq 1$ and every $0<\varepsilon\le\varepsilon_0$ there exists $c>0$ such that the following is true. For any $n$-vertex $(p,cp^{\Delta+1}n)$-bijumbled graph $\Gamma$ the following holds. Let $G=(V,E)$ be a (not necessarily spanning) subgraph of $\Gamma$ and $\{V_i\}_{i\in[k]}\cup \{W\}$ a vertex partition of a subset of $V$ such that the following are true for every $i,i'\in [k]$. \begin{enumerate}[label=\itmarab{G}] \item\label{pcnl:bal} $\frac{n}{4kr}\le |V_i|\le \frac{4n}{kr}$, \item\label{pcnl:Vreg} $(V_i,V_{i'})$ is $(\varepsilon, d, p)_G$-regular, \item\label{pcnl:W} $|W|\ge\frac{\varepsilon pn}{16kr^2}$, and \item\label{pcnl:Wdeg} $|N_G(w,V_i)| \geq dp|V_i|$ for every $w \in W$.
\end{enumerate} Then there exists a tuple $(w_1, \ldots, w_\Delta) \in \binom{W}{\Delta}$ such that for every $\Lambda,\Lambda^\ast\subseteq[\Delta]$, and every $i \neq i' \in [k]$ we have \begin{enumerate}[label=\itmarab{W}] \item\label{pcnl:Gsize} $|\bigcap_{j\in \Lambda} N_G(w_j,V_i)|\geq \alpha p^{|\Lambda|}|V_i|$, \item\label{pcnl:Gasizen} $|\bigcap_{j\in \Lambda} N_{\Gamma}(w_j)| \le (1 + \varepsilon^\ast)p^{|\Lambda|}n$, \item\label{pcnl:Gasize} $ (1-\varepsilon^\ast)p^{|\Lambda|}|V_i| \leq |\bigcap_{j\in \Lambda} N_{\Gamma}(w_j,V_i)| \leq (1 + \varepsilon^\ast)p^{|\Lambda|}|V_i|$, and \item\label{pcnl:Nreg} $\big(\bigcap_{j\in \Lambda}N_{\Gamma}(w_j,V_i),\bigcap_{j^\ast\in \Lambda^\ast}N_{\Gamma}(w_{j^\ast},V_{i'})\big)$ is $(\varepsilon^\ast, d,p)_G$-regular if $|\Lambda|,|\Lambda^\ast| < \Delta$ and either $\Lambda\cap\Lambda^\ast=\varnothing$ or $\Delta\geq 3$ or both. \end{enumerate} \end{lemma} The main modifications we make to the proof of Lemma~\ref{lem:common} are to replace Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL} with Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}, and Proposition~\ref{prop:chernoff} with Proposition~\ref{prop:pseudchernoff}, and to replace all occurrences of `lower-regular' with `regular'. We sketch the remaining modifications below. \begin{proof}[Sketch proof of Lemma~\ref{lem:pseudcommon}] We begin the proof by setting constants as in the proof of Lemma~\ref{lem:common}, but appealing to Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}, and Proposition~\ref{prop:pseudchernoff}, rather than their random graph equivalents. We set $c=10^{-20}2^{-2\Delta}\varepsilon^5(Ct_1kr)^{-4}$. Suppose $\nu\le cp^{\Delta+2}n$, and that $\Gamma$ is an $n$-vertex $(p,\nu)$-bijumbled graph rather than a random graph. In order to apply Lemma~\ref{lem:SRLb} to $G$, we need to observe that its condition is satisfied by Proposition~\ref{prop:pseudchernoff} and because $\varepsilon^{-1}p^{-1}\nu<10^{-10}\tfrac{\varepsilon^4pn}{k^4r^4}$ by choice of $c$. The same inequality justifies further use of Proposition~\ref{prop:pseudchernoff} to find the desired $W'$. Estimating the size of $W'$, we replace~\eqref{eq:sizeW} with \begin{equation}\label{eq:psizeW} |W'|\ge 10^{-11}\frac{\varepsilon^4pn}{t_1k^4r^4}\ge 10^5 Cp^{-2}\nu\,, \end{equation} where the final inequality is by choice of $c$. We only need to change the statement of Claim~\ref{claim:common} by replacing `lower-regular' with `regular' in~\ref{cnl:cl:Wreg} and~\ref{cnl:cl:Vreg}. However, we need to make rather more changes to its inductive proof. The base case remains trivial. In the induction step, we need to replace~\eqref{eq:common:sizeNGa} with \[\big|\bigcap_{j\in\Lambda}N_\Gamma(w_j,V'_i)\big|\ge (1-\varepsilon_0)^{\Delta-2}p^{\Delta-2}\frac{n}{8tr}\ge 10^5Cp^{-4}\nu\,,\] where the final inequality is by choice of $c$. This, together with $|W'|\ge 10^5Cp^{-2}\nu$ from~\eqref{eq:psizeW}, justifies that we can apply Lemma~\ref{lem:pseudOSRIL}. We obtain that at most $2^\Delta k^2 Cp^{-3}\nu^2\tfrac{8krt_1}{n}$ vertices $w$ in $W$ violate~\ref{cnl:cl:Wreg}. The estimate on the number of vertices violating~\ref{cnl:cl:NGVp} does not change. For~\ref{cnl:cl:NGaVp}, we need to observe that $\big|\bigcap_{j\in\Lambda}N_\Gamma(w_j,V'_i)\big|=(1\pm\varepsilon_0)^{|\Lambda|}p^{|\Lambda|}|V'_i|$, and in particular by choice of $\varepsilon_0$ and $c$ this quantity is at least $Cp^{-1}\nu$.
Proposition~\ref{prop:pseudchernoff} then gives that at most $2^{\Delta+1}kCp^{-2}\nu^2\tfrac{8krt_1}{n}$ vertices destroy~\ref{cnl:cl:NGaVp}, and the same calculation gives the same bound for the number of vertices violating~\ref{cnl:cl:NGaV} and~\ref{cnl:cl:NGa}. Finally, for~\ref{cnl:cl:Vreg}, we need to use the inequality $(1-\varepsilon_0)^{\Delta-1}p^{\Delta-1}\tfrac{n}{4kr}\ge Cp^{-2}\nu$, which holds by choice of $c$, to justify that Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL} can be applied as the corresponding random graph versions are in Lemma~\ref{lem:common}. We obtain quite different bounds from these lemmas, however. If $\Delta=2$, then we only use Lemma~\ref{lem:pseudOSRIL}, with an input regular pair having both sets of size at least $\tfrac{n}{4kr}$, so that the number of vertices violating~\ref{cnl:cl:Vreg} in this case is at most $2^{2\Delta}k^2Cp^{-3}\nu^2\tfrac{4kr}{n}$. If $\Delta\ge 3$, we use both Lemmas~\ref{lem:pseudOSRIL} and~\ref{lem:pseudTSRIL}. The set playing the r\^ole of $X$ in Lemma~\ref{lem:pseudOSRIL} has size at least $(1-\varepsilon_0)^{\Delta-2}p^{\Delta-2}\tfrac{n}{4kr}$, while we apply Lemma~\ref{lem:pseudTSRIL} with both sets of the regular pair having at least this size. As a consequence, the number of vertices violating~\ref{cnl:cl:Vreg} is at most $2^{2\Delta+1}k^2Cp^{-6}\nu^2 (1-\varepsilon_0)^{2-\Delta}p^{2-\Delta}\tfrac{4kr}{n}$ for the case $\Delta\ge 3$. Putting this together, for the case $\Delta=2$ we replace~\eqref{eq:common:bad2} with the following upper bound for the number of vertices $w\in W'$ which cannot be chosen as $w_{\ell+1}$. \[2^\Delta k^2Cp^{-3}\nu^2\frac{8krt_1}{n}+2^\Delta k\varepsilonaa_\Delta|W'|+3\cdot 2^{\Delta+1} kCp^{-2}\nu^2\frac{8krt_1}{n}+2^{2\Delta}k^2Cp^{-3}\nu^2\frac{4kr}{n}\] By choice of $c$ and $\varepsilonaa_\Delta$, this quantity is at most $\tfrac12|W'|$, completing the induction step for $\Delta=2$. For $\Delta\ge 3$, we replace the upper bound~\eqref{eq:common:bad3} with \[2^\Delta k^2Cp^{-3}\nu^2\frac{8krt_1}{n}+2^\Delta k\varepsilonaa_\Delta|W'|+3\cdot 2^{\Delta+1} kCp^{-2}\nu^2\frac{8krt_1}{n}+2^{2\Delta+1}k^2Cp^{-6}\nu^2(1-\varepsilon_0)^{2-\Delta}p^{2-\Delta}\frac{4kr}{n}\] which by choice of $c,\varepsilon_0$ and $\varepsilonaa_\Delta$ is at most $\tfrac12|W'|$, completing the induction step for $\Delta\ge 3$. We conclude that the modified Claim~\ref{claim:common} continues to hold, and this implies the statement of Lemma~\ref{lem:pseudcommon} as in the proof of Lemma~\ref{lem:common}. \end{proof} The proof of Theorem~\ref{thm:jumbledk} is similar to that of Theorem~\ref{thm:maink}. Again, we sketch the modifications. \begin{proof}[Sketch proof of Theorem~\ref{thm:jumbledk}] We begin as in the proof of Theorem~\ref{thm:maink}, setting up constants as there, but replacing Lemma~\ref{lem:G} with Lemma~\ref{lem:pseudG}, Lemma~\ref{lem:common} with Lemma~\ref{lem:pseudcommon}, Lemma~\ref{thm:blowup} with Lemma~\ref{thm:jblowup}, and Proposition~\ref{prop:chernoff} with Proposition~\ref{prop:pseudchernoff}. In addition to the constants defined in the proof of Theorem~\ref{thm:maink} we require $0<c\le 10^{-50}\varepsilon^8\mu\rho\xi^2(\Delta k r_1C)^{-10}$ to be small enough for Lemmas~\ref{lem:pseudG} and~\ref{lem:pseudcommon}. Now, instead of assuming $\Gamma$ to be a typical random graph, suppose $\nu\le cp^{\max\{4,(3\Delta+1)/2\}}n$, and let $\Gamma$ be an $n$-vertex $(p,\nu)$-bijumbled graph.
By Proposition~\ref{prop:bijn} we have \begin{equation}\label{eq:jsizep} p\ge C^{\ast}\Big(\frac{\log n}{n}\Big)^{1/2}\,. \end{equation} We continue following the proof of Theorem~\ref{thm:maink}. We now assume the first $\beta n$ vertices of $\mathcal L$ include $Cp^{-6}\nu^2n^{-1}$ vertices that are not contained in any triangles of $H$. We appeal to Lemma~\ref{lem:pseudG} rather than Lemma~\ref{lem:G} to obtain a partition of $V(G)$. This partition has $|V_0|\le C^{\ast} p^{-6}\nu^2n^{-1}$ (which is different to the upper bound in the proof of Theorem~\ref{thm:maink}), but still satisfies~\ref{main:Gsize} and~\ref{main:Ggam}, and~\ref{main:Greg} and~\ref{main:Ginh} when `lower-regular' is replaced by `regular' in both statements. The application of Lemma~\ref{lem:H2} is identical. The application of Lemma~\ref{lem:hypgeo} is also identical, and the deduction of~\eqref{eq:intS} is still valid by~\eqref{eq:jsizep}. The pre-embedding is also identical, except that we replace each occurrence of $C^{\ast}\max\{p^{-2},p^{-1}\log n\}$ with $C^{\ast} p^{-6}\nu^2n^{-1}$, and that we replace the application of Proposition~\ref{prop:chernoff} justifying that at each visit to Line~\ref{line:choosev} we have at least $\tfrac14\mu p n$ choices with an application of Proposition~\ref{prop:pseudchernoff}. To verify the condition of the latter, and to see that this yields a contradiction, we use the inequality $|Z|\ge\tfrac{1}{100(\Delta+1)}\mu p n\ge 2C^{\ast} p^{-2}\nu^2\tfrac{8r}{\varepsilon n}$, which holds by choice of $c$. Moving on, we justify Claim~\ref{cl:chooseW} by observing that $\tfrac{\varepsilon n}{4kr_1}\ge Cp^{-1}\nu$, which allows us to apply Proposition~\ref{prop:pseudchernoff} in place of Proposition~\ref{prop:chernoff}, and that $2krC^{\ast} p^{-2}\nu^2\tfrac{4kr_1}{\varepsilon n}\le\tfrac{|Y|}{2}$, both inequalities following by choice of $c$. Now Lemma~\ref{lem:pseudcommon}, in place of Lemma~\ref{lem:common}, finds $w_1,\dots,w_\ell$. Our construction of $f^*$, and its properties, is identical, while Lemma~\ref{lem:pseudcommon} gives~\ref{main:Gsize}--\ref{main:GaI}, with `lower-regular' replaced by `regular' in~\ref{main:Greg},~\ref{main:Ginh} and~\ref{main:GpIreg}. The deduction of~\ref{Gp:sizeV}--\ref{Gp:GaI} is identical, except that we use the `regular' consequence of Proposition~\ref{prop:subpairs3}. To justify this, observe that each time we apply Proposition~\ref{prop:subpairs3}, we apply it to a regular pair with sets of size at least $(1-\varepsilona)p^{\Delta-1}\tfrac{n}{4kr}$ by~\ref{main:Gsize} and~\ref{main:GpGI}, and we change the set sizes by a factor $(1\pm 2\mu)$, so that Proposition~\ref{prop:pseudchernoff} gives the required condition. To check this in turn, we need to observe that $2\mu(1-\varepsilona)p^{\Delta-1}\tfrac{n}{4kr}\ge100\mu^{-1}p^{-1}\nu$, which follows by choice of $c$. We can thus replace `lower-regular' with `regular' in~\ref{Gp:Greg},~\ref{Gp:Ginh} and~\ref{Gp:Ireg}. Next, we still have $3\Delta^{r+10}|V_0|\le\tfrac{1}{10}\xi n$, so that $|V'_{i,j}|=|W'_{i,j}|\pm\xi n$ is still valid for each $i\in[r]$ and $j\in[k]$. This, together with~\eqref{eq:jsizep}, Proposition~\ref{prop:pseudchernoff}, and the inequality $\tfrac{1}{50000kr_1}\varepsilon^2\xi pn\ge 100\varepsilon^{-2}\xi^{-1}p^{-1}\nu$, justifies that we can apply Lemma~\ref{lem:balancing} to obtain~\ref{Gpp:sizeV}--\ref{Gpp:sizeGa}, with `lower-regular' replaced by `regular' in~\ref{Gpp:Greg} and~\ref{Gpp:Ginh}.
To obtain~\ref{Gpp:Ireg} with `lower-regular' replaced by `regular', we use Proposition~\ref{prop:subpairs3}, with the condition to output regular pairs guaranteed by the inequality $10^{-20}\varepsilon^4k^{-3}r_1^{-3}p^{\Delta-1}n\ge 10^{20}\varepsilon^{-4}k^3r_1^3Cp^{-1}\nu$, which follows by choice of $c$, and Proposition~\ref{prop:pseudchernoff}. Finally, we verify the conditions for Lemma~\ref{thm:jblowup}. The only point where we have to be careful is with the number of image restricted vertices. The total number of image restricted vertices in $H'$ is at most $\Delta^2|V_0|\le\Delta^2C^{\ast} p^{-6}\nu^2n^{-1}$, which by choice of $c$ and by~\ref{Gpp:sizeV} is smaller than $\rho p^\Delta|V_{i,j}|$ for any $i\in[r]$ and $j\in[k]$, justifying that $(\mathcal I,\mathcal J)$ is indeed a $(\rho p^\Delta,\tfrac14\alpha,\Delta,\Delta)$-restriction pair. The remaining conditions of Lemma~\ref{thm:jblowup} are verified as in the proof of Theorem~\ref{thm:maink}, and applying it we obtain an embedding $\phi$ of $H'$ into $G\setminus\mathrm{im}(\phi_{t^*})$, so that $\phi\cup\phi_{t^*}$ is the desired embedding of $H$ into $G$. \end{proof} Finally, the deduction of Theorem~\ref{thm:jumbled} from Theorem~\ref{thm:jumbledk} is essentially the same as that of Theorem~\ref{thm:main} from Theorem~\ref{thm:maink}, and we omit it. \section{Concluding remarks} \label{sec:remarks} \subsection{General spanning subgraphs} Our main theorems place restrictions on the graphs $H$ with respect to whose containment random or pseudorandom graphs have local resilience. As was shown by Huang, Lee and Sudakov~\cite{huang2012}, such restrictions are necessary. Given $\varepsilon>0$, if $\Gamma$ is either a typical random graph $G(n,p)$ or a pseudorandom graph with density $p$, and $p$ is sufficiently small, then one can delete edges from $\Gamma$ in order to remove all triangles at a given vertex $v$, without deleting more than $\varepsilon p n$ edges at any vertex. Thus, if $H$ is any graph all of whose vertices are in triangles and $p=o(1)$, the local resilience of $\Gamma$ with respect to containment of $H$ is $o(1)$. This leads to the question: if we instead restrict $G$, requiring in addition to the conditions of Theorem~\ref{thm:main} that $G$ contains a positive proportion of the copies of $K_{\Delta+1}$ in $\Gamma$ at each vertex, is it true that $G$ will contain any $k$-colourable, bounded degree spanning subgraph $H$ with sublinear bandwidth without further restriction? We study this question in a forthcoming companion note to this paper, together with Schnitzer~\cite{ABEST}. \subsection{Optimality of Theorem~\ref{thm:main}} Recall that Huang, Lee and Sudakov~\cite{huang2012} proved that the restriction on $H$ that $C^{\ast} p^{-2}$ vertices should not be in triangles is necessary for all $p$. For $p$ constant, they proved a version of Theorem~\ref{thm:main}, but the number of vertices in $H$ they require to have independent neighbourhoods grows as a tower-type function of $p^{-1}$, and they also require these vertices to be well-distributed in the bandwidth order, so that our result is strictly stronger than theirs. On the other hand, we do not believe that the lower bound on $p$ in Theorem~\ref{thm:main} is optimal. For $\Delta=2$, the statement is certainly false for $p\ll n^{-1/2}$, since then $G(n,p)$ has a.a.s.~local resilience $o(1)$ with respect to containing even one triangle. It seems likely that the statement is true down to this point, a log factor improvement on our result.
For $\Delta=3$, the statement as written is false for $p\ll n^{-1/3}$. Briefly, the reason for this is that in expectation a vertex is in $O\big(p^6n^3\big)$ copies of $K_4$ in $G(n,p)$, and (with some work) this implies that there is a.a.s.\ a subgraph of $G(n,p)$ with minimum degree very close to $pn$ in which $p^{-5}n^{-1}$ vertices are not in copies of $K_4$. For $p\ll n^{-1/3}$, $p^{-5}n^{-1}\gg p^{-2}$, so that we would also have to insist on many vertices of $H$ not being in copies of $K_4$ to accommodate this. Generalising this, we obtain the following conjecture. \begin{conjecture} For each $\gamma >0$, $\Delta \geq 2$, and $k \geq 1$, there exist constants $\beta^\ast >0$ and $C^{\ast} >0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p \geq C^{\ast} n^{-2/(\Delta+2)}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+ \gamma\right)pn$ and let $H$ be a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$ and bandwidth at most $\beta^\ast n$, such that there are at least $C^{\ast} p^{-2}$ vertices in $V(H)$ that are not contained in any triangles of $H$, and at least $C^{\ast} p^{-(\Delta+2)(\Delta-1)/2}n^{2-\Delta}$ vertices in $V(H)$ which are not contained in any copies of $K_{\Delta+1}$. Then $G$ contains a copy of $H$. \end{conjecture} This conjecture seems to be hopelessly out of reach with our current state of knowledge. We cannot even prove that $G(n,p)$ itself is universal for graphs on $\tfrac{n}{2}$ vertices with maximum degree~$\Delta$. The best current result in this direction is due to Conlon, Ferber, Nenadov and \v{S}kori\'c~\cite{CFNS}, who show that for $\Delta\ge 3$, if $p\gg n^{-1/(\Delta-1)}\log^5 n$ then $G(n,p)$ is a.a.s.\ universal for graphs on $\big(1-o(1)\big)n$ vertices of maximum degree $\Delta$, finally breaking the $n^{-1/\Delta}$ barrier which is reached by several papers, but still far from the conjectured truth. It is possible that their methods could be used to prove a version of Theorem~\ref{thm:main} for almost-spanning $H$ in sparser random graphs, but this does not appear to be straightforward. \subsection{Optimality of Theorem~\ref{thm:degenerate}} The `extra' restriction we place in Theorem~\ref{thm:degenerate}, of having many vertices of $H$ which are neither in triangles nor four-cycles, is an artifact of our proof. It would be possible to remove the stipulation regarding four-cycles---one can prove a version of Lemma~\ref{lem:common} capable of embedding vertices in a degeneracy order. However, this comes at the cost of a worse lower bound on $p$. It seems likely that one would be able to obtain a result for $p\gg\big(\tfrac{\log n}{n}\big)^{1/(2D+2)}$, but we did not check the details. As with Theorem~\ref{thm:main}, we expect that the bound $p\ge\big(\tfrac{\log n}{n}\big)^{1/(2D+1)}$ in Theorem~\ref{thm:degenerate} is far from the truth: again the exponent is most likely a factor of roughly $2$ too small. Again, however, proving such a statement in general seems hopeless. Nevertheless, in one interesting case we can substantially improve on Theorem~\ref{thm:degenerate}. Specifically, if $H$ is an $F$-factor for some fixed $F$, then we can follow the proof of Theorem~\ref{thm:degenerate}, but set $\tilde{D}=D+3$. We can do this because we choose a degeneracy order on $H$ in which the copies of $F$ are segments.
We obtain a version of Theorem~\ref{thm:degenerate} in which $H$ is required to be an $F$-factor, where $F$ is $D$-degenerate, but the lower bound on $p$ improves to $p\ge C^{\ast}\big(\tfrac{\log n}{n}\big)^{1/(D+3)}$. This is still not optimal, but at least the exponent is asymptotically optimal as $D$ grows, rather than being off by a factor of two in the limit. For some specific $F$ one can improve this bound further; moreover for $F$-factors one can slightly improve on Lemma~\ref{thm:dblow} (see the concluding remarks of~\cite{blowup}). \subsection{Optimality of Theorem~\ref{thm:jumbled}} The requirement of $C^{\ast} p^{-6}\nu^2 n^{-1}$ vertices of $H$ not in triangles comes from Lemma~\ref{lem:pTSRIL}. This lemma is proved in~\cite{ABSS}, where it is conjectured that the bijumbledness requirement is not optimal. What exactly the optimal result should be is not clear. When $|X|=|Y|=|Z|=\tfrac{n}{3}$, a construction of Alon~\cite{AlonConstr} shows that $\big(p,cp^2n\big)$-bijumbledness is necessary for some $c>0$, but in our application we are interested in $Y$ and $Z$ being of order $n$, and $X$ much smaller. We also do not believe that the bijumbledness requirement of Theorem~\ref{thm:jumbled} is optimal. This requirement comes from Lemma~\ref{thm:jblowup}, and it is suggested there that the statement could still hold given only $\big(p,cp^{\Delta+C}n\big)$-bijumbledness for some $C$. Such an improvement there would immediately improve the results here correspondingly. It is generally conjectured that substantial further improvement is not possible, in the strong form that it is likely that for some $C>0$ and all $\Delta$ there exists $c>0$ such that for all large $n$ an $n$-vertex $\big(p,cp^{\Delta-C}n\big)$-bijumbled graph exists which does not contain $K_{\Delta+1}$ at all. \appendix \section*{Appendix} \section{Tools} \label{app:tools} We collect in this appendix proofs of results which are more or less standard but which we could not find in the form we require in the literature. We begin by showing that small alterations to regular pairs give us regular pairs. \begin{proof}[Proof of Proposition~\ref{prop:subpairs3}] Let $A \subseteq \hat X$ and $B \subseteq \hat Y$ with $|A| \geq \hat \varepsilon |\hat X|$ and $|B| \geq \hat \varepsilon |\hat Y|$ be given. Define $A' := A \cap X$ and $B':= B \cap Y$ and note that \begin{equation*} |A'| \geq |A| - \mu |X| \geq \hat \varepsilon |\hat X| - \mu |X| \geq \hat{\varepsilon}(1-\mu) |X| - \mu |X| \geq \big(\hat \varepsilon - 2 \sqrt{\mu}\big) |X| \geq \varepsilon |X| \end{equation*} by the definition of $\hat\varepsilon$. Analogously, one can show that $|B'| \geq \varepsilon |Y|$. Since $(X,Y)$ is an $(\varepsilon, d,p)$-regular pair, we know that $d_p(A',B') \geq d- \varepsilon$. Furthermore, we have \[|A'| \geq |A| - \mu |X| \geq |A| - \mu \frac{|A|}{\hat\varepsilon} \geq \big(1- \sqrt{\mu}\big)|A|\] and by an analogous calculation we get $|B'| \geq \big(1- \sqrt{\nu}\big)|B|$. For the number of edges between $A$ and $B$ we get \begin{align*} e(A,B) &\geq e(A',B') \geq (d- \varepsilon) p|A'| |B'| \geq (d-\varepsilon)p \big(1-\sqrt{\mu}\big)\big(1-\sqrt{\nu}\big) |A| |B|\\ & \geq \big(d- \varepsilon - 2 \sqrt{\mu} - 2\sqrt{\nu}\big) p |A| |B| \geq (d-\hat\varepsilon) p |A| |B|. \end{align*} Therefore we have \[d_p(A,B) \geq d-\hat\varepsilon,\] which proves the first statement. Now suppose that $(X,Y)$ is $(\varepsilon,d,p)$-fully-regular.
Let $d'$ be such that $d_p(A',B')=d'\pm\varepsilon$ for any $A'\subseteq X$ and $B'\subseteq Y$ with $|A'|\ge\varepsilon|X|$ and $|B'|\ge\varepsilon |Y|$. Let $A\subseteq\hat{X}$ and $B\subseteq \hat{Y}$ with $|A| \geq \hat \varepsilon |\hat X|$ and $|B| \geq \hat \varepsilon |\hat Y|$ be given. As above, we obtain $e(A,B)\ge (d'-\hat\varepsilon) p |A| |B|$. We also have \begin{align*} e(A,B)&\le e(A',B')+e(A',B\setminus B')+e(A\setminus A', B)\\ &\le (d'+\varepsilon)p|A'||B'|+(1+\mu+\nu)p|A'|\nu|B|+(1+\mu+\nu)p\mu|A||B|\\ &\le (d'+\hat{\varepsilon})p|A||B|\,, \end{align*} so that $(\hat{X},\hat{Y})$ is $(\hat\varepsilon,d,p)$-fully-regular, as desired. \end{proof} Next, we prove the Sparse Regularity Lemma variant Lemma~\ref{lem:SRLb}, whose proof follows~\cite{Scott}. \begin{proof}[Proof of Lemma~\ref{lem:SRLb}] Given $\varepsilon>0$ and $s$, let $L=100s^2\varepsilon^{-1}$. Let $n_1=1$, and for each $j\ge 2$ let $n_j=10000\varepsilon^{-1}n_{j-1}2^{sn_{j-1}}$. Let $t_1=n_{1000\varepsilon^{-5}(L^2+16Ls^2)+1}$. We define the energy of a pair of disjoint sets $P,P'$ contained in respectively $V_i$ and $V_{i'}$ to be \[\mathcal{E}(P,P'):=\frac{|P||P'|\min\big(d_p(P,P')^2,2Ld_p(P,P')-L^2\big)}{|V_i||V_{i'}|}\,.\] Note that this quantity is convex in $d_p(P,P')$. Now given a partition $\mathcal{P}$ refining $\{V_i\}_{i\in[s]}$, we define the energy of $\mathcal{P}$ to be \[\mathcal{E}(\mathcal{P}):=\sum_{\{P,P'\}\subseteq\mathcal{P}}\mathcal{E}(P,P')\,.\] We now construct a succession of partitions $\mathcal{P}_{j+1}$ for each $j\ge 1$, refining $\mathcal{P}_1:=\{V_i\}_{i\in[s]}$. We claim that for each $j$, the following hold. \begin{enumerate}[label=\itmarab{R}] \item\label{srl:r1} $\mathcal{P}_j$ partitions $V_i$ into between $n_j$ and $\big(1+\tfrac{1}{100}\varepsilon\big)n_j$ sets, of which the largest $n_j$ are equally sized. \item\label{srl:r2} $\mathcal{E}(\mathcal{P}_j)\ge \tfrac{1}{1000}\varepsilon^5(j-1)$. \end{enumerate} We stop if $\mathcal{P}_j$ is $\big(\tfrac12\varepsilon,p\big)$-regular. If not, we apply the following procedure. For each pair of $\mathcal{P}_j$ which is not $\big(\tfrac12\varepsilon,0,p\big)$-regular, we take a witness of its irregularity, consisting of a subset of each side of the pair. We let $\mathcal{P}'_j$ be the union of the Venn diagrams of all witness sets in each part of $\mathcal{P}_j$. Since $\mathcal{P}_j$ is not $\big(\tfrac12\varepsilon,p\big)$-regular, there are at least $\tfrac12\varepsilon s^2 n_j^2$ pairs which are not $\big(\tfrac12\varepsilon,0,p\big)$-regular. By choice of $L$ and by~\ref{srl:r1}, at least $\tfrac14\varepsilon s^2 n_j^2$ of these pairs have density not more than $\tfrac12L$. By the defect Cauchy-Schwarz inequality, just from refining these pairs we conclude that $\mathcal{E}(\mathcal{P}'_j)\ge\mathcal{E}(\mathcal{P}_j)+\tfrac{1}{1000}\varepsilon^5$. Note that, by convexity of $\mathcal{E}(P,P')$ in $d_p(P,P')$, refining the other pairs does not affect $\mathcal{E}(\mathcal{P}'_j)$ negatively. We now let $\mathcal{P}_{j+1}$ be obtained by splitting each set of $\mathcal{P}'_j$ within each $V_i$ into sets of size $\tfrac{1000-\varepsilon}{1000n_{j+1}}|V_i|$ plus at most one smaller set. Again by Jensen's inequality, we have $\mathcal{E}(\mathcal{P}_{j+1})\ge\mathcal{E}(\mathcal{P}'_j)$, giving~\ref{srl:r2}. Since $\mathcal{P}'_j$ partitions each $V_i$ into at most $n_j2^{sn_j}=\tfrac{1}{10000}\varepsilon n_{j+1}$ sets, the total number of smaller sets is at most $\tfrac{1}{10000}\varepsilon n_{j+1}$. This gives~\ref{srl:r1}.
Now observe that for any partition $\mathcal{P}$ refining $\mathcal{P}_1$, we have $\mathcal{E}(\mathcal{P})\le L^2+16Ls^2$. It follows that this procedure must terminate with $j\le 1000\varepsilon^{-5}(L^2+16Ls^2)+1$. The final $\mathcal{P}_j$ is thus $\big(\tfrac12\varepsilon,p\big)$-regular. For each $i\in[s]$, let $V_{i,0}$ consist of the union of all but the largest $n_j$ parts of $\mathcal{P}_j$. Let $\mathcal{P}$ be the partition of $\bigcup_{i\in[s]}V_i\setminus V_{i,0}$ given by $\mathcal{P}_j$. This is the desired equitable $(\varepsilon,p)$-regular refinement of $\{V_i\setminus V_{i,0}\}_{i\in[s]}$. \end{proof} Using Lemma~\ref{lem:SRLb} (purely in the interests of self-containment, as we could also use the results of~\cite{kohayakawa1997}), we now prove Lemma~\ref{lem:regularitylemma}. \begin{proof}[Proof of Lemma~\ref{lem:regularitylemma}] Given $\varepsilon>0$ and $r_0$, without loss of generality we assume $\varepsilon\le\tfrac1{10}$. Let $t_1$ be returned by Lemma~\ref{lem:SRLb} for input $\tfrac{1}{1000}\varepsilon^2s^{-1}$ and $s=100r_0\varepsilon^{-1}$. Let $r_1=st_1$. Given $\alpha>0$, let $G$ be an $n$-vertex graph with minimum degree $\alpha p n$. Let $\{V_i\}_{i\in[s]}$ be an arbitrary partition of $V(G)$ into sets of as equal as possible size. By assumption, we have $e(V_i,V_{i'})\le 2p|V_i||V_{i'}|$ for each $i\neq i'$. Furthermore, if $V_i$ is a part with $e(V_i)\ge 3p|V_i|^2$, then taking a maximum cut $A,A'$ of $V_i$ we have $e(A,A')\ge\tfrac32 p|V_i|^2$. Enlarging the smaller of $A$ and $A'$ if necessary, we have a pair of sets both of size at most $|V_i|$ between which there are at least $\tfrac32p|V_i|^2$ edges, again contradicting the assumption of Lemma~\ref{lem:regularitylemma}. Thus $G$ satisfies the conditions of Lemma~\ref{lem:SRLb} with input $\tfrac1{1000}\varepsilon^2s^{-1}$ and $s$. Applying that lemma, we obtain a collection $\{V_{i,0}\}_{i\in[s]}$ of sets, and an $(\varepsilon,p)$-regular partition $\mathcal{P}$ of $\bigcup_{i\in[s]}V_i\setminus V_{i,0}$ which partitions each $V_i\setminus V_{i,0}$ into $t\le t_1$ sets. Note that $s\le |\mathcal{P}|\le r_1$ by construction. Now let $V'_0$ be the union of the $V_{i,0}$ for $i\in[s]$, any sets $W\in\mathcal{P}$ that lie in more than $\tfrac14\varepsilon s t$ pairs which are not $(\tfrac{1}{1000}\varepsilon,p)$-regular, and at most two vertices from each set $W\in\mathcal{P}$ in order that the partition of $V(G)\setminus V'_0$ induced by $\mathcal{P}$ is an equipartition. Because the total number of pairs which are not $(\tfrac1{1000}\varepsilon,p)$-regular is at most $\tfrac{1}{1000}\varepsilon^2s^{-1}(r_0 t)^2$, the number of such sets in any given $V_i$ is at most $\tfrac{1}{100}\varepsilon t$, so $|V'_{i,0}|$ is at most $\tfrac{1}{50}\varepsilon |V_i|$, and the number of parts of $\mathcal{P}$ in $V_i\setminus V'_{i,0}$ is larger than $\tfrac{t}{2}$. Thus the partition $\mathcal{P}'$ of $V(G)\setminus V'_0$ induced by $\mathcal{P}$ is an $(\varepsilon,p)$-regular equipartition of $V(G)\setminus V'_0$, and we have $|V'_0|\le\varepsilon n$. We claim that this partition $\mathcal{P}'$ has all the properties we require. It remains only to check that for each $d\in[0,1]$, the $d$-reduced graph of $\mathcal{P}'$ has minimum degree at least $(\alpha-d-\varepsilon)t'$. Suppose that $P$ is a part of $\mathcal{P}'$.
Now we have $e(P)\le 3p|P|^2$, since otherwise, as before, a maximum cut $A,A'$ of $P$ has at least $\tfrac32p|P|^2<\tfrac{1}{20}\varepsilon p|P|n$ edges, yielding a contradiction to the assumption on the maximum density of pairs of $G$. By construction, $P$ lies in at most $\tfrac12\varepsilon t'$ pairs which are not $(\varepsilon,p)$-regular, and these contain at most $(1+\tfrac{1}{10}\varepsilon)p|P|\big(\tfrac12\varepsilon t'|P|\big)<\tfrac{3}{4}\varepsilon p |P|n$ edges of $G$. We conclude that at least $\alpha p |P|n-\tfrac{7}{8}\varepsilon p |P|n$ edges of $G$ leaving $P$ lie in $(\varepsilon,p)$-regular pairs of $\mathcal{P}'$. Of these, at most $dp|P|n$ can lie in pairs of density less than $dp$, so that the remaining at least $\big(\alpha-d-\tfrac{7}{8}\varepsilon\big)p|P|n$ edges lie in $(\varepsilon,d,p)$-regular pairs. If so many edges were in less than $(\alpha-d-\varepsilon)t'$ pairs leaving $P$, this would contradict our assumption on the maximum density of $G$, so that we conclude $P$ lies in at least $(\alpha-d-\varepsilon)t'$ pairs which are $(\varepsilon,d,p)$-regular, as desired. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:pseudchernoff}] Given $\varepsilon>0$, set $C'=100\varepsilon^{-2}$ and $C=200C'\varepsilon^{-1}$. Suppose that $\Gamma$ is $(p,\nu)$-bijumbled. First, given disjoint $X,Y\subseteq V(\Gamma)$ with $|X|,|Y|\ge \varepsilon^{-1}p^{-1}\nu$, by $(p,\nu)$-bijumbledness of $\Gamma$ we have $e(X,Y)=p|X||Y|\pm\nu\sqrt{|X||Y|}$, and we need only verify that $\nu\sqrt{|X||Y|}\le\varepsilon p|X||Y|$, which follows from the lower bound on $|X|,|Y|$. For the second property, let $(A,B)$ be a maximum cut of $X$. We have $e(A,B)\ge\tfrac12e(X)$, and $|A||B|\le\tfrac14|X|^2$. By $(p,\nu)$-bijumbledness of $\Gamma$, we conclude \[e(X)\le 2e(A,B)\le 2p|A||B|+2\nu\sqrt{|A||B|}\le \frac12p|X|^2+\nu|X|\] so that it is enough to verify $\nu|X|\le p|X|^2$, which duly follows from the lower bound on $|X|$. Now let $Y\subseteq V(\Gamma)$ have size at least $Cp^{-1}\nu$. We first show that there are at most $C'p^{-2}\nu^2|Y|^{-1}$ vertices in $\Gamma$ which have less than $(1-\varepsilon)p|Y|$ neighbours in $Y$. If this were false, then we could choose a set $X$ of $C'p^{-2}\nu^2|Y|^{-1}$ vertices in $\Gamma$ which have less than $(1-\varepsilon)p|Y|$ neighbours in $Y$. Since by choice of $C$ we have $(1-\varepsilon)p|Y|\le \big(1-\tfrac\varepsilon2\big)p|Y\setminus X|$, we see that $e(X,Y\setminus X)<\big(1-\tfrac{\varepsilon}{2}\big)p|X||Y\setminus X|$. Since \[\nu\sqrt{|X||Y|}=\nu\sqrt{C'p^{-2}\nu^2}=\sqrt{C'}\nu^2p^{-1}<\frac{\varepsilon}{2}p|X||Y\setminus X|\] this is a contradiction to $(p,\nu)$-bijumbledness of $\Gamma$. Next we show that there are at most $2C'p^{-2}\nu^2|Y|^{-1}$ vertices of $\Gamma$ which have more than $(1+\varepsilon)p|Y|$ neighbours in $Y$. Again, if this is not the case we can let $X$ be a set of $2C'p^{-2}\nu^2|Y|^{-1}$ vertices of $\Gamma$ with more than $(1+\varepsilon)p|Y|$ neighbours in $Y$. If there are more than $\tfrac12|X|$ vertices of $X$ with more than $\tfrac12\varepsilon p|Y|$ neighbours in $X$, then we have $e(X)\ge\tfrac18\varepsilon p|X||Y|$. Taking a maximum cut $A,B$ of $X$, we have $e(A,B)\ge\tfrac{1}{16}\varepsilon p|X||Y|$, and by $(p,\nu)$-bijumbledness of $\Gamma$ we therefore have \[\frac{1}{16}\varepsilon p|X||Y|\le p|A||B|+\nu\sqrt{|A||B|}\le\frac14p|X|^2+\frac12\nu|X|\,,\] and since $|X|\le\tfrac{1}{100}\varepsilon |Y|$, we conclude $|Y|\le 100\varepsilon^{-1}p^{-1}\nu$, a contradiction to the choice of $C$.
We conclude that there is a set $X'$ of $\tfrac12|X|$ vertices of $X$ which have at most $\tfrac12\varepsilon p|Y|$ neighbours in $X$, and hence at least $\big(1+\tfrac12\varepsilon\big)p|Y|$ neighbours in $Y\setminus X$. By $(p,\nu)$-bijumbledness of $\Gamma$ we have \[\frac12|X|\Big(1+\frac12\varepsilon\Big)p|Y|\le e(X',Y\setminus X)\le \frac12 p|X||Y|+\nu\sqrt{\frac{1}{2}|X||Y|}\,,\] from which we have $\varepsilon C'p^{-1}\nu^2\le 2\sqrt{C'}\nu^2p^{-1}$, a contradiction to the choice of $C'$. \end{proof} \begin{aicauthors} \begin{authorinfo}[PA] Peter Allen\\ London School of Economics, \\ Department of Mathematics, \\ Houghton Street, \\ London WC2A 2AE, UK \\ p.d.allen@lse.ac.uk \end{authorinfo} \begin{authorinfo}[JB] Julia B\"ottcher\\ London School of Economics, \\ Department of Mathematics, \\ Houghton Street, \\ London WC2A 2AE, UK \\ j.boettcher@lse.ac.uk \end{authorinfo} \begin{authorinfo}[JE] Julia Ehrenm\"uller\\ Technische Universit\"at Hamburg, \\ Institut f\"ur Mathematik, \\ Am Schwarzenberg-Campus 3, \\ 21073 Hamburg, Germany \\ julia.ehrenmueller@gmail.com \end{authorinfo} \begin{authorinfo}[AT] Anusch Taraz\\ Technische Universit\"at Hamburg, \\ Institut f\"ur Mathematik, \\ Am Schwarzenberg-Campus 3, \\ 21073 Hamburg, Germany \\ taraz@tuhh.de \end{authorinfo} \end{aicauthors} \end{document}
\begin{document} \setlength{\parindent}{0mm} \title{Cavity optomechanics with ultra-cold Bose gases for quasiparticle state manipulation and prospects for sensing applications} \author{Benjamin Maa{\ss}} \author{Daniel Hartley} \author{Kurt Busch} \author{Dennis R\"atzel} \email{[email protected]} \address{Humboldt-Universität zu Berlin, Institut für Physik, AG Theoretische Optik \& Photonik, Newtonstraße 15, 12489 Berlin, Germany} \begin{abstract} Ensembles of ultra-cold atoms have been proven to be versatile tools for high precision sensing applications. Here, we present a method for manipulation and readout of the state of trapped clouds of ultra-cold bosonic atoms. In particular, we discuss the creation of coherent and squeezed states of quasiparticles and the coupling of quasiparticle modes through an external cavity field. This enables operations like state swapping and beam splitting which can be applied to realize a Mach-Zehnder interferometer (MZI) in frequency space. We present two explicit example applications in sensing: the measurement of the healing length of the condensate with the MZI scheme, and the measurement of an oscillating force gradient with a pulsed optomechanical readout scheme. Furthermore, we calculate fundamental limitations based on parameters of state-of-the-art technology. \noindent{\it Keywords\/}: Bose-Einstein condensation, cavity optomechanics, quasiparticle, phonons, sensing \end{abstract} \maketitle \section{Introduction} The development of atom trapping and cooling technology has led to an explosion of applications in many areas of physics. One of the most well-known applications of control on the single atom scale is the atomic clock, which helped define the second in terms of the fundamental constants of nature \cite{ludlow2015optical}. Cold atoms are also used for quantum simulation, studying models from condensed matter \cite{lewenstein2007ultracold} as well as artificial gauge fields \cite{Dalibard2011coll} and other exotic topological states \cite{cooper2019topological}. Matter-wave interferometry with cold atoms has been applied to sensing applications such as the measurement of gravitational fields \cite{peters1999measurement,mcguirk2002sensitive,geiger2011detecting,bidel2018absolute} and precision tests of fundamental physics, such as measuring Newton's gravitational constant \cite{rosi2014precision} and the fine structure constant \cite{parker2018measurement} and testing the equivalence principle \cite{Fray2004atomic,Schlippert2014quantum}, as well as physics beyond the standard model \cite{hamilton_atom-interferometry_2015}. When a three-dimensional cloud of atoms is cooled to such a degree that a macroscopic fraction of the atoms fall into the motional ground state, they condense into a Bose-Einstein condensate (BEC). Strictly speaking, in lower dimensions, Bose-Einstein condensation does not occur. Instead, quasi-condensates form that do not exhibit the long-range order of a BEC. Unless we explicitly refer to BECs or quasi-condensates, we will use the term condensate for both in the following. Condensates of ultra-cold atoms can be used to push the sensitivity of the fundamental physics tests mentioned above even further \cite{Muentinga:2013int,van2010bose,gaaloul2014precision,Abend:2016atom,Hardman:2016sim,Asenbaum:2017pha}, even to applications in extraterrestrial space \cite{becker2018space,aveline2020observation}. The interaction between the atoms in the condensate leads to low-energy quasiparticles taking the form of phonons, i.e.
quantised sound waves. Phonons are extensively studied in the field of quantum simulation \cite{bloch2012quantum,gring2012relaxation,rauer2018recurrences,Michael2019from}, analogue gravity including the simulation of event horizons \cite{barcelo2001analogue,lahav2010realization,steinhauer2014observation}, cosmic inflation \cite{fischer2004quantum} and gravitational waves \cite{bravo2015analog,hartley2018analogue}. Collective oscillations of condensates may also be used for sensing applications as demonstrated by the measurement of the thermal Casimir-Polder force presented in \cite{Obrecht:2007meas,Antezza:2004effect}. Further proposals include force sensing \cite{Motazedifard2019force}, gravimetry \cite{ratzel_dynamical_2018,bravo2020phononic}, tests of gravitationally induced collapse models \cite{Howl:2019expl} and even gravitational wave detection \cite{Sabin:2014gravwave,sabin_thermal_2016,Schuetzhold:2018int,Robbins_2019,robbins2021detection}. Collective oscillations in BECs have been already studied in early experiments \cite{Jin:1996coll,Mewes:1996coll,Stamper-Kurn:1998coll} and it has been demonstrated that highly excited quasi-particle states can be created with light pulses \cite{Katz:2004high} and periodic modulations of the trap potential \cite{Jaskula:2012acoustic,Michael2019from}. Readout methods for quasiparticle excitations of condensates include self-interference of the Bose gas after release from the trap denoted as heterodyning \cite{Katz:2004high} or time-of-flight measurements (e.g. \cite{stamper1999excitation}) and in-situ phase contrast imaging \cite{Stamper-Kurn:1998coll,Schley:2013planck}. Here, we present an alternative approach for creating, manipulating and reading out quasiparticle states of a condensate based on cavity optomechanics. The coupling of optical cavity modes to BECs has already been experimentally achieved \cite{brennecke2007cavity,brennecke2008cavity}. Our mechanism allows for displacement, single-mode squeezing or two-mode squeezing, and the coupling of two nearly arbitrary modes with interactions that are reminiscent of beam splitters and mirrors. Our manuscript is organized as follows: we introduce the cavity-condensate coupling in section \ref{sec:hamiltonian} and our proposed approach for state manipulation in section \ref{sec:stateman}. We present a Mach-Zehnder interferometer in frequency space in section \ref{sec:MZI} and a possible read-out mechanism via pulsed optomechanics in section \ref{sec:pulsed}. In section \ref{sec:damping}, we discuss damping mechanisms, and in section \ref{sec:applications}, we present two potential applications as examples of how to use our state manipulation scheme. We conclude our findings in section \ref{sec:conclusion}. \section{Dynamics of the composite system} \label{sec:hamiltonian} We begin by considering a condensate trapped in an external potential within a Fabry--P\'{e}rot type high finesse optical cavity (see Figure \ref{fig:cavity}). The trap is considered to be elongated and the orientation of the cavity is considered to be aligned with the elongated axis of the condensate and the $z$-direction. 
Then, we restrict our considerations to quasiparticle modes in the elongated direction of the trap and treat the system in a one-dimensional way considering an effective cross sectional area $\mathcal{A}$ of the condensate \footnote{This approximation is valid if the parameters and the geometry of the setup are chosen such that the coupling of the modes in the elongated direction to those in the transverse directions is sufficiently suppressed, for example, in the case of very tight transverse confinement (as considered below in the example application).}. \begin{figure} \caption{\label{fig:cavity} Schematic of the setup considered in this work: an elongated, trapped ultra-cold Bose gas placed inside a driven high-finesse optical cavity, with the cavity axis aligned with the long ($z$-)axis of the condensate.} \end{figure} The time evolution of the total system is described by the Hamiltonian $\hat{H}_\text{total}= \hat{H}_\mathrm{cav}+\hat{H}_\mathrm{disp}+\hat{H}_\mathrm{cond}$, where $\hat{H}_\mathrm{cav}$ is the cavity field Hamiltonian, $\hat{H}_\mathrm{disp}$ is the coupling of light and atomic cloud that we will introduce below, and the Hamiltonian that describes the time-evolution of the atomic cloud is \begin{align} \nonumber \hat{H}_\mathrm{cond} = \int dz\,\hat{\psi}^\dagger\Bigg[ & -\frac{\hbar^2}{2m_a}\partial_z^2 + \frac{\tilde{g}}{2} \hat{\psi}^\dagger\hat{\psi} + V_0 + \delta V_\mathrm{ext}\Bigg]\hat{\psi} \end{align} where $m_a$ is the atomic mass and $\tilde{g}$ is the atom-atom interaction strength. $V_0$ is the trap potential and $\delta V_\mathrm{ext}$ includes all other external potentials that may affect the condensate; for example, an external gravitational potential. We assume that $\delta V_\mathrm{ext}$ can be considered to be small in comparison to the trapping potential and only changes the density distribution of the condensate slightly. Later, we will consider the sensing of $\delta V_\mathrm{ext}$ via its effect on the condensate as a specific application. The one-dimensional description represented by $\hat{H}_\mathrm{cond}$ can be directly derived from the three-dimensional standard description of interacting Bose gases \cite{Pitaevskii:2003bose}. For example, in the case of a three-dimensional condensate in a three-dimensional box trap with a box-shaped ground state \cite{gaunt2013bose}, $\tilde{g}=g/\mathcal{A}$ where $g=4\pi\hbar^2a_{sc}/m_a$ is the 3D coupling constant parameterised by the s-wave scattering length $a_{sc}$ \footnote{Note that $a_\mathrm{sc}$ can, in general, be widely tuned for some atom species that possess Feshbach resonances (e.g. $^{23}$Na \cite{inouye1998observation}, $^{85}$Rb \cite{roberts1998resonant} and $^{87}$Rb \cite{marte2002feshbach}) by employing strong magnetic fields to modify the s-wave scattering length. Unfortunately, the three-body loss rate is also strongly enhanced near a Feshbach resonance (see also \cite{chin2010feshbach}); three-body loss is the dominating loss mechanism for trapped condensates and the dominating limiting factor for the maximal experimental time considered in this paper. Therefore, Feshbach resonances are of little use for our proposal and will not be considered.}. Another possibility is to assume a strong harmonic transverse trap potential (such that the atoms are transversely mostly in the trap's ground state) which leads to a Gaussian shape of the condensate in the transverse direction and a one-dimensional quasi-condensate implying $\tilde{g}=g_\mathrm{1D}=g/(2\pi a_\perp^2)$, where $a_\perp=\sqrt{\hbar/(m_a\omega_\perp)}$ is the transverse oscillator length given by the transverse trapping frequency $\omega_\perp$.
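To give a feeling for the magnitude of the quasi-one-dimensional coupling, the following minimal numerical sketch evaluates $g$ and $g_\mathrm{1D}$ from the relations above. The parameter values ($^{87}$Rb, $a_\mathrm{sc}\approx 100\,a_0$ and a $2\pi\times 2\,\mathrm{kHz}$ transverse trap) are illustrative assumptions chosen for this example only and are not fixed by the discussion above.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34          # J s
m_a  = 86.909 * 1.660539e-27    # kg, 87Rb (illustrative atom choice)
a_sc = 5.3e-9                   # m, illustrative s-wave scattering length (~100 a_0)
omega_perp = 2*np.pi*2e3        # rad/s, illustrative tight transverse trap

g3d    = 4*np.pi*hbar**2*a_sc/m_a           # 3D coupling constant g
a_perp = np.sqrt(hbar/(m_a*omega_perp))     # transverse oscillator length
g1d    = g3d/(2*np.pi*a_perp**2)            # quasi-1D coupling \tilde{g} = g_1D

print(f"a_perp = {a_perp*1e6:.3f} um, g_1D = {g1d:.3e} J m")
\end{verbatim}
For these assumed numbers one finds $a_\perp\approx 0.24\,\mu$m and $g_\mathrm{1D}\approx 1.4\times 10^{-38}\,$J\,m, equivalently $g_\mathrm{1D}=2\hbar\omega_\perp a_\mathrm{sc}$.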
Then, the Hamiltonian (\ref{eq:hamiltonian}) is an approximation that corresponds to the case of a low one-dimensional density $\rho_\mathrm{1d} = \langle \hat{\psi}^\dagger\hat{\psi} \rangle$ of the condensate such that $\rho_\mathrm{1d} a_\mathrm{sc} \ll 1$ (see e.g. \cite{Salasnich2002eff,rauer2018recurrences}) \footnote{For $\rho_\mathrm{1d} a_\mathrm{sc} \gtrsim 1$, we would need to replace $\hat{\psi}^\dagger\hat{\psi}$ by more complicated functions of $\hat{\psi}^\dagger\hat{\psi}$ (see e.g. \cite{Salasnich2002eff,rauer2018recurrences}). For the sake of simplicity we refrain from this here.}. \subsection{Light-matter coupling and full potential} We consider a single optical cavity mode with annihilation and creation operators $\hat{a}$ and $\hat{a}^\dagger$ respectively, and free evolution with $\hat{H}_\mathrm{cav} = \hbar\omega_c \hat{a}^\dagger\hat{a}$ that is coupled to the atomic field operator $\hat{\psi}$ via the dispersive coupling Hamiltonian (photon absorption and stimulated re-emission) \cite{nagy2009nonlinear,ritsch2013cold} \begin{align}\label{eq:Hdisp} \hat{H}_\mathrm{disp}= \int dz \,\hat{\psi}^\dagger(z) \hbar\frac{g_0^2}{\Delta_A}f_\mathrm{cav}^2(z)\,\hat{a}^\dagger\hat{a}\,\hat{\psi}(z) \,. \end{align} To achieve this form of coupling, the cavity mode frequency is chosen close to an atomic resonance with a detuning $\Delta_A$ and the single photon Rabi frequency $g_0 = d\sqrt{\omega_c/(2 \hbar\epsilon_0\mathcal{V}_c)}$, where $d$ is the atomic dipole moment along the cavity mode polarization, $\mathcal{V}_c = \mathcal{A}_c \int dz \,|f_\mathrm{cav}(z)|^2$ is the effective cavity mode volume, $\mathcal{A}_c$ is the effective cross sectional area of the cavity mode and $f_\mathrm{cav}(z)$ is the cavity mode function. We assume that the cavity mode is driven by a strong laser field with frequency $\omega=\omega_c +\Delta_c$, and we move into the corresponding rotating frame. This allows us to treat the cavity field perturbatively by splitting the mode operators $\hat{a}$ and $\hat{a}^\dagger$ into their expectation values and fluctuations as $\hat{a}=\langle \hat{a} \rangle + \delta \hat{a}$ \cite{aspelmeyer2014cavity,pikovski:thesis}. Since $\hat{H}_\mathrm{disp}$ is invariant under phase factors $\hat{a} \rightarrow \hat{a}e^{i\zeta}$, without loss of generality, we consider $\langle \hat{a} \rangle$ as real valued and set $\langle \hat{a} \rangle = \sqrt{N_{ph}}$, where $N_{ph}$ is the number of photons in the cavity mode. The photon number is related to the circulating power in the cavity as $P_c = \hbar \omega_c N_{ph} c/(2L_c) $, where $c$ is the speed of light and $L_c$ is the length of the cavity. $P_c$ can be varied by modulating the cavity pump power on time scales much larger than the life time of photons in the cavity mode. This is the basis for the state manipulation of the quasiparticles that we present in this work, and in the following, we consider $N_{ph}$ as generally time-dependent. 
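As a rough orientation for the photon numbers involved, the relation $P_c = \hbar\omega_c N_{ph}c/(2L_c)$ can simply be inverted for $N_{ph}$. The following minimal sketch does this for illustrative values; the near-infrared wavelength, cavity length and circulating power are assumptions for demonstration purposes only and are not prescribed by the text above.
\begin{verbatim}
import numpy as np

hbar, c = 1.054571817e-34, 2.998e8
lam     = 780e-9                 # m, illustrative wavelength (near the Rb D2 line)
omega_c = 2*np.pi*c/lam          # cavity mode (angular) frequency
L_c     = 1e-4                   # m, illustrative cavity length
P_c     = 1e-3                   # W, illustrative circulating power

# invert P_c = hbar*omega_c*N_ph*c/(2*L_c) for the intracavity photon number
N_ph = 2*L_c*P_c/(hbar*omega_c*c)
print(f"N_ph ~ {N_ph:.3e}")      # a few thousand photons for these assumed numbers
\end{verbatim}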
Introducing the split $\hat{a}=\langle \hat{a} \rangle + \delta \hat{a}$ into $\hat{H}_\mathrm{disp}$ and neglecting the second order term in $\delta \hat{a}$, we obtain \cite{aspelmeyer2014cavity} \begin{align}\label{eq:hamiltonian} \nonumber \hat{H}_\text{total} = -\hbar\Delta_c \delta\hat{a}^\dagger\delta\hat{a} + \int dz\,\hat{\psi}^\dagger\Bigg[ & -\frac{\hbar^2}{2m_a}\partial_z^2 + \frac{\tilde{g}}{2} \hat{\psi}^\dagger\hat{\psi} + V\left(z,t\right) \\ & + \hbar\frac{g_0^2\sqrt{N_{ph}(t)} }{\Delta_A}f_\mathrm{cav}^2(z)\left(\delta\hat{a}^\dagger + \delta\hat{a} \right) \Bigg]\hat{\psi}\,, \end{align} where the full potential acting on the condensate is \begin{equation}\label{eq:potential} V\left(z,t\right) = V_0\left(z\right) + \delta V_\mathrm{ext}\left(z,t\right) + \hbar\frac{g_0^2}{\Delta_A}f_\mathrm{cav}^2\left(z\right)N_{ph}(t) = V_0\left(z\right) + \delta V\left(z,t\right) \,. \end{equation} We restrict our considerations to situations where $\delta V$ oscillates around a mean. We split $\delta V$ into its time-average $\overline{\delta V}$ and the purely oscillating part $V_\mathrm{osc}=\delta V - \overline{\delta V}$. Later we will assume that $V_\mathrm{osc}$ oscillates on resonance with a quasiparticle excitation or quasiparticle transitions. Therefore, we can already conclude that $\overline{\delta V}$ modifies the stationary condensate ground state and the quasiparticle mode functions, while $V_\mathrm{osc}$ drives the quasiparticle modes. \subsection{Condensate ground state and Bogoliubov approximation} We split the field operator into a part describing the atoms in the collective ground state of the atomic ensemble and a part describing trapped atoms in states above the ground state. In the Heisenberg picture, we split the atom field operator into a part describing the macroscopic condensate fraction and a field operator $\hat\vartheta$ that describes the non-condensate atoms in the form \begin{eqnarray}\label{eq:expansion} \hat\psi(z,t)= (\hat c_0 \psi_0(z) + \hat\vartheta(z,t) ) e^{-i\left( \mu t + \int_0^t dt'\, \delta\mu_\mathrm{osc}(t')\right)/\hbar} \,, \end{eqnarray} where $\hat c_0$ is the annihilation operator for atoms in the collective ground state $\psi_0$ which is normalized as $\int dz\, |\psi_0|^2 = 1$ and fulfills the stationary Gross-Pitaevski (GP) equation \begin{equation}\label{eq:statgp} \left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} + \tilde{g}N_0 |\psi_0|^2 \right)\psi_0 = \mu\, \psi_0\,, \end{equation} with the chemical potential $\mu$. Furthermore, we have defined $\delta\mu_\mathrm{osc} = \int dz \, |\psi_0|^2\, V_\mathrm{osc}$ which can be interpreted as a time-dependent shift of the chemical potential due to the oscillating part of the external potential (see Appendix \ref{sec:intham}). Assuming that $\overline{\delta V}$ perturbs the ground state of the condensate only slightly, we can make the ansatz $\psi_0 = \bar\psi_0 + \delta \psi_0$ with $\bar\psi_0 $ a solution of Eq. (\ref{eq:statgp}) with $\overline{\delta V}\rightarrow 0$. Expressions for the perturbation $\delta \psi_0$ can be found as a solution of a linearized version of the GP equation as presented in Appendix \ref{sec:gstate_pert}. 
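In practice, the stationary GP equation (\ref{eq:statgp}) can be solved numerically; one standard route (not spelled out in this manuscript) is imaginary-time propagation with a split-step Fourier method. The sketch below is a minimal illustration in dimensionless units $\hbar=m_a=1$, for an assumed box-like trap and an assumed effective nonlinearity that plays the role of $\tilde{g}N_0$ (with $\psi_0$ normalised to one); it is meant only to indicate how $\bar\psi_0$ and $\mu$ may be obtained in practice.
\begin{verbatim}
import numpy as np

# grid and illustrative parameters (dimensionless units, hbar = m_a = 1)
L, N, gN0 = 50.0, 1024, 0.05
z  = np.linspace(-L/2, L/2, N, endpoint=False)
dz = z[1] - z[0]
k  = 2*np.pi*np.fft.fftfreq(N, d=dz)
V0 = np.where(np.abs(z) < 0.4*L, 0.0, 50.0)   # box-like trap potential

psi = np.ones(N, dtype=complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dz)      # normalise: int |psi|^2 dz = 1
dt = 1e-3
for _ in range(20000):                         # imaginary-time split-step evolution
    psi *= np.exp(-0.5*dt*(V0 + gN0*np.abs(psi)**2))
    psi  = np.fft.ifft(np.exp(-dt*0.5*k**2)*np.fft.fft(psi))
    psi *= np.exp(-0.5*dt*(V0 + gN0*np.abs(psi)**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dz)  # re-normalise each step

kin = np.fft.ifft(0.5*k**2*np.fft.fft(psi))    # -(1/2) d^2 psi / dz^2
mu  = (np.sum(np.conj(psi)*(kin + (V0 + gN0*np.abs(psi)**2)*psi)).real)*dz
print("chemical potential estimate:", mu)
\end{verbatim}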
The perturbation leads to a modification of the quasiparticle modes which, in turn, leads to terms of second order in $\delta V$ in the quasiparticle Hamiltonian (see also Appendix \ref{sec:gstate_pert}) and we will neglect $\overline{\delta V}$ in the following (restricting our considerations to effects of first order in $\delta V$). Since the time evolution of the field operator $\hat\psi(z,t)$ is governed by the Heisenberg equation with respect to the Hamiltonian $\hat{H}_\text{total}$, we find that the time evolution of $\hat\psi'(z,t) :=\hat c_0 \psi_0(z) + \hat\vartheta(z,t)$ is governed by the Heisenberg equation with respect to the Hamiltonian \begin{equation} \hat H_\text{total}':= \hat H_\text{total} - (\mu+\delta\mu_\mathrm{osc}) \hat N \end{equation} (similar to the grand canonical Hamiltonian) where $\hat N(t) = \int dz\, \hat\psi^{\prime\dagger}(z,t) \hat\psi^\prime(z,t)$ is the number operator of the atom field. In the next step, we apply the Bogoliubov approximation to $\hat H_\text{total}'$, where the field $\hat\vartheta$ is treated as a small perturbation while the ground state is strongly occupied such that the replacement $\hat c_0 \rightarrow \sqrt{N_0} \mathbb{I}$ can be performed, where $N_0$ is the number of atoms in the condensate. Then, we obtain the driving Hamiltonian (see Appendix \ref{sec:intham} for the details of the derivation) \begin{eqnarray}\label{eq:Hprimeint} \hat H_\mathrm{dr} & = & \sqrt{N_0} \int dz\, (V_\mathrm{osc} - \delta\mu_\mathrm{osc})\left(\psi_0^* \hat\vartheta + \psi_0 \hat\vartheta^{\dagger}\right) + \int dz\, \hat\vartheta^{\dagger}(V_\mathrm{osc} -\delta\mu_\mathrm{osc})\hat\vartheta\,. \end{eqnarray} In lowest order in the atom field operators, the back action on the cavity mode fluctuation quadrature is given via the Hamiltonian \begin{equation}\label{eq:backaction_hamiltonian_unexp} \hat H_\mathrm{ba} = \hbar\sqrt{N_0} \frac{g_0^2\sqrt{N_{ph}} }{\Delta_A} \left(\delta\hat{a}^\dagger + \delta\hat{a} \right) \int dz\, f_\mathrm{cav}^2 \left(\psi_0^*\hat\vartheta + \psi_0\hat\vartheta^{\dagger} \right)\,. \end{equation} Here, a term that is proportional to $\left(\delta\hat{a}^\dagger + \delta\hat{a}\right)$ and independent of $\hat\vartheta$ has been absorbed into a re-normalization of the cavity mode frequency \footnote{This is the effect of the refractive index change due to the presence of the condensate atoms in the cavity.} (see also Appendix \ref{sec:freqshift} for how to include this frequency shift from the start). \subsection{Quasiparticle mode expansion} We expand the field operator in terms of Bogoliubov modes describing the quasiparticle excitations as \begin{equation} \hat\vartheta = \sum_n \left(u_n\hat b_{n} + v_n^*\hat b_{n}^\dagger \right)\,, \end{equation} where $[b_{n},b_{m}^\dagger]=\delta_{nm}$ and the normalized mode functions $u_n$ and $v_n$ fulfill the stationary Bogoliubov-de~Gennes (BDG) equations which can be found in Appendix \ref{sec:intham}. The expansion of $\hat\vartheta$ in the Bogoliubov basis diagonalizes the free quasiparticle Hamiltonian $\hat H_\mathrm{BdG}$ to second order in $\hat\vartheta$ (the Bogoliubov-de~Gennes Hamiltonian, see Appendix \ref{sec:intham}), which governs the free evolution of the condensate with $V(z,t)\rightarrow V_0(z)$, that is, $\hat H_\mathrm{BdG} = \sum_n \hbar \omega_n \hat b_{n}^\dagger \hat b_{n}$. 
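For orientation, it may help to recall the textbook homogeneous case, which we quote here for illustration rather than derive: for a uniform (box-trapped) condensate of constant density $\rho_\mathrm{1d}$ the BdG equations are solved by plane-wave modes $u_k,v_k\propto e^{ikz}$ whose amplitudes satisfy (up to the usual BdG normalisation)
\begin{equation*}
\hbar\omega_k=\sqrt{\epsilon_k\left(\epsilon_k+2\tilde{g}\rho_\mathrm{1d}\right)}\,,\qquad \epsilon_k=\frac{\hbar^2k^2}{2m_a}\,,\qquad |u_k+v_k|\propto\left(\frac{\epsilon_k}{\hbar\omega_k}\right)^{1/2}\,.
\end{equation*}
In the phononic regime $\epsilon_k\ll\tilde{g}\rho_\mathrm{1d}$ the dispersion is linear and the combination $\psi_0^*u_n+\psi_0 v_n$, which enters the cavity coupling introduced below, is suppressed as $(\epsilon_k/\hbar\omega_k)^{1/2}$.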
Then, taking the free time evolution of the creation and annihilation operators $\delta\hat{a}^\dagger$, $\delta\hat{a}$, $\hat{b}^\dagger$ and $\hat{b}$ into account, we obtain the back-action Hamiltonian in the corresponding rotating frames (the interaction picture) as \begin{align} \hat{H}_\mathrm{ba} & = \left(\delta\hat{a}^\dagger e^{-i\Delta_c t} + \delta\hat{a} e^{+i\Delta_c t}\right)\sum_{n}\kappa_{n} \left(\hat{b}_{n}e^{-i(\omega_{n}t-\theta_n)} + \hat{b}_{n}^{\dagger}e^{i(\omega_{n}t-\theta_n)}\right) \label{eq:backaction_hamiltonian} \end{align} where the real quantities $\kappa_n$ and $\theta_n$ are defined such that \begin{align}\label{eq:coupling} \kappa_n e^{i\theta_n} = \frac{\hbar g_0^2\sqrt{N_0 N_{ph}} }{\Delta_A } \int dz \,f_\mathrm{cav}^2 \left( \psi_0^* u_n + \psi_0 v_n \right) \, \,. \end{align} Using the mode decomposition above, we find that the driving Hamiltonian in Eq. (\ref{eq:Hprimeint}) assumes the form \begin{align} \label{eq:interaction_hamiltonian} \nonumber \hat{H}_\mathrm{dr} & =\sum_{n} \left(P_{n} \hat{b}_{n} e^{-i\omega_{n}t} + P_{n}^* \hat{b}_{n}^{\dagger}e^{i\omega_{n}t}\right)+\sum_{n}\left(O_{n}\hat{b}_{n}^{\dagger}\hat{b}_{n} + \left( N_{n} \left(\hat{b}_{n}^{\dagger}\right)^{2}e^{+2i\omega_{n}t} + N_{n}^*\,\hat{b}_{n}^{2}e^{-2i\omega_{n}t}\right)\right)\\ & \hphantom{==}+\sum_{n,l<n} \left( M_{nl} \hat{b}_{n}^{\dagger}\hat{b}_{l}e^{i(\omega_{n} - \omega_{l})t} + M_{nl}^* \hat{b}_{l}^{\dagger}\hat{b}_{n}e^{-i(\omega_{n} - \omega_{l})t} \right) \\ & \nonumber \hphantom{==}+\sum_{n,l<n} \left(L_{nl} \hat{b}_{n}\hat{b}_{l} e^{-i(\omega_{n} + \omega_{l})t} + L_{nl}^* \hat{b}_{n}^{\dagger}\hat{b}_{l}^{\dagger}e^{i(\omega_{n} + \omega_{l})t} \right) , \end{align} where the form of each of the time-dependent coefficients is \begin{align}\label{eq:coefficients} \nonumber P_{n} & = \sqrt{N_0} \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc}) \,\left( \psi_0^* u_n + \psi_0 v_n \right) \\ \nonumber O_{n} & = \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc})\, \left(|u_n|^2 + |v_n|^2\right) \\ N_{n} & = \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc})\, u_n^* v_n^* \\ \nonumber M_{nl} & = \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc})\, \left( u_n^* u_l + v_n^* v_l\right) \\ \nonumber L_{nl} & = \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc})\, \left( u_n v_l + v_n u_l\right) \,. \end{align} and we have neglected a vacuum term $\propto |v_n|^2$. The driving Hamiltonian is one of the main results of this work. The coefficients can be interpreted as the moments of the external potential with respect to the quasiparticle modes and different processes. The first term with the coefficient $P_n$ corresponds to a linear displacement of the quasiparticle state by the cavity field or potential perturbation. The second order terms can be interpreted as various nonlinear processes familiar from optical four-wave mixing; the term with the coefficient $O_n$ corresponds to a time-dependent frequency shift of the quasiparticle modes, the terms with the coefficients $N_n$ and $L_{nl}$ take the form of one-mode and two-mode squeezing operators, and the term with the coefficient $M_{nl}$ has the form of a beam splitting operation. \section{State manipulation by modulated cavity power} \label{sec:stateman} In the following, we consider a cavity field which is periodically intensity modulated on resonance with a particular quasiparticle mode or mixture thereof at frequency $\omega_m$. 
Then, particular processes in $\hat{H}_\mathrm{dr}$ are strongly enhanced while the non-resonant processes are averaged out. In particular, we set $N_{ph}\left(t\right)=N_{ph,0}\left(1+\eta\cos\left(\omega_{m}t\right)\right)$ where $\eta\le 1$ is some constant modulation amplitude. If we assume that there is no external potential besides the time-independent trapping potential, we find from equation (\ref{eq:potential}) \begin{equation} V_\mathrm{osc} - \delta\mu_\mathrm{osc} = \hbar\frac{g_0^2}{\Delta_A}\left(f_\mathrm{cav}^2 - \frac{1}{2}\right)N_{ph,0}\,\eta\cos\left(\omega_{m}t\right) \,. \end{equation} Through the rotating wave approximation (RWA), we discard terms that must oscillate at a non-zero frequency and cannot be brought into resonance. Then, to first and second order in the quasiparticle mode operators, we obtain \begin{equation} \begin{split} \hat{H}_\mathrm{dr} & = \sum_n \left(\bar{P}_{n}\hat{b}_{n}e^{-i\left(\omega_{n}-\omega_{m}\right)t} + \bar{P}_{n}^*\hat{b}_{n}^{\dagger}e^{i\left(\omega_{n}-\omega_{m}\right)t}\right)\\ & \hphantom{==} +\sum_{n} \left(\bar{N}_{n}\left(\hat{b}_{n}^{\dagger}\right)^{2}e^{i\left(2\omega_{n}-\omega_{m}\right)t}+\bar{N}_{n}^*\hat{b}_{n}^{2}e^{-i\left(2\omega_{n}-\omega_{m}\right)t}\right)\\ & \hphantom{==}+\sum_{n,l<n}\left( \bar{M}_{nl}\hat{b}_{n}^{\dagger}\hat{b}_{l}e^{i\left(\omega_{n}-\omega_{l}-\omega_{m}\right)t} + \bar{M}_{nl}^*\hat{b}_{l}^{\dagger}\hat{b}_{n}e^{-i\left(\omega_{n}-\omega_{l}-\omega_{m}\right)t}\right)\\ & \hphantom{==}+\sum_{n,l<n}\left(\bar{L}_{nl}\hat{b}_{n}\hat{b}_{l}e^{-i\left(\omega_{n}+\omega_{l}-\omega_{m}\right)t} + \bar{L}_{nl}^*\hat{b}_{n}^{\dagger}\hat{b}_{l}^{\dagger}e^{i\left(\omega_{n}+\omega_{l}-\omega_{m}\right)t}\right) \end{split} \label{eq:interaction_hamiltonian_resonantdriving} \end{equation} where the coefficients are given by equation (\ref{eq:coefficients}) with the replacement $V_\mathrm{osc} - \delta\mu_\mathrm{osc}\rightarrow \hbar g_0^2 (f_\mathrm{cav}^2 - 1/2)N_{ph,0}\,\eta/(2\Delta_A)$. From the interaction Hamiltonian (\ref{eq:interaction_hamiltonian_resonantdriving}), we can see that we are able to selectively drive particular quasiparticle interactions via the choice of the cavity field intensity oscillation frequency $\omega_m$. To be able to apply the RWA, we must ensure that the quasiparticle time scale (given by the inverse of the mode frequencies involved) and the measurement time are well separated. \subsection{Beam splitting and mode swapping} \label{subsec:beamsplit} The first type of resonant interaction that we consider here is achieved with the resonance condition $\omega_m=\omega_n-\omega_l$, for which the interaction Hamiltonian (\ref{eq:interaction_hamiltonian_resonantdriving}) further reduces with the RWA to $\hat{H}_\mathrm{dr}=\bar{M}_{nl}\hat{b}_{n}^{\dagger}\hat{b}_{l} + \bar{M}_{nl}^*\hat{b}_{l}^{\dagger}\hat{b}_{n}$, assuming no other resonance conditions are met. The time evolution operator due to this Hamiltonian is thus \begin{equation} \label{eq:mirror} \hat{U}_\mathrm{dr}=\exp\left[i\left(\mathcal{M}_{nl}\hat{b}_{n}^{\dagger}\hat{b}_{l} + \mathcal{M}_{nl}^*\hat{b}_{l}^{\dagger}\hat{b}_{n}\right)\right] \end{equation} where $\mathcal{M}_{nl}=\bar{M}_{nl}t/\hbar$. This has the form of a beam splitting or mode mixing operator, where the phase can be tuned both by the cavity intensity and interaction time.
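To illustrate the selectivity implied by the RWA, the following minimal Python sketch (with assumed mode frequencies of the order of those in Sec.~\ref{sec:applications}) time-averages the phase factors appearing in Eq. (\ref{eq:interaction_hamiltonian_resonantdriving}) over a drive of duration $T$; only the combination matching the modulation frequency $\omega_m$ survives:
\begin{verbatim}
import numpy as np

# Minimal sketch: time averages of the phase factors in the driven Hamiltonian
# for a modulation resonant with the beam-splitting transition omega_n - omega_l.
omega_l, omega_n = 2*np.pi*170.0, 2*np.pi*15e3   # rad/s (assumed mode frequencies)
omega_m = omega_n - omega_l                      # modulation frequency
T  = 0.2                                         # drive duration [s]
t  = np.linspace(0.0, T, 400001)

def avg(detuning):
    """|<exp(i*detuning*t)>| averaged over the drive duration."""
    return abs(np.mean(np.exp(1j*detuning*t)))

print("beam splitter  |<e^{i(w_n-w_l-w_m)t}>| =", avg(omega_n - omega_l - omega_m))
print("displacement   |<e^{i(w_n-w_m)t}>|     =", avg(omega_n - omega_m))
print("1-mode squeeze |<e^{i(2w_n-w_m)t}>|    =", avg(2*omega_n - omega_m))
print("2-mode squeeze |<e^{i(w_n+w_l-w_m)t}>| =", avg(omega_n + omega_l - omega_m))
\end{verbatim}
The residual averages scale as the inverse of the product of detuning and drive duration, in line with the separation of time scales required for the RWA stated above.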
Calculating the evolution of the quasiparticle modes due to such an operation as $\hat{b}'=\hat{U}_\mathrm{dr}\hat{b}\hat{U}_\mathrm{dr}^\dagger$, we see that the effect can be written as \begin{equation} \begin{pmatrix} \hat{b}'_{n}\\ \hat{b}'_{l} \end{pmatrix}= \begin{pmatrix} \cos\left(|\mathcal{M}_{nl}|\right) & -ie^{i\theta_{nl}}\sin\left(|\mathcal{M}_{nl}|\right)\\ -ie^{-i\theta_{nl}}\sin\left(|\mathcal{M}_{nl}|\right) & \cos\left(|\mathcal{M}_{nl}|\right) \end{pmatrix} \begin{pmatrix} \hat{b}_{n}\\ \hat{b}_{l} \end{pmatrix} \end{equation} where $\theta_{nl}$ is the phase of $\mathcal{M}_{nl}$ defined such that $\mathcal{M}_{nl}=|\mathcal{M}_{nl}|e^{i\theta_{nl}}$. If $|\mathcal{M}_{nl}|=\pi/4$, then we are applying a symmetrical beam splitting operation, where two different quasiparticle modes are mixed with each other. If $|\mathcal{M}_{nl}|=\pi/2$, then we have a "mirror" or "swap" operation, where the occupations of the two quasiparticle modes are swapped. These operations are highly versatile and could be used to create novel, currently unfeasible states. A "mirror"-type operation could directly populate high order quasiparticle modes in a targeted way, starting from an easily attainable initial thermal state. A beam splitting operation is particularly interesting for both quantum metrology and fundamental quantum mechanics applications, as this creates entanglement between two modes. Examples, in which we construct a quasiparticle Mach-Zehnder interferometer and incorporate it into a measurement scheme, will be given in section \ref{sec:applications}. \subsection{Displacement and squeezing} \label{subsec:squeezing} The second resonance condition we consider here is $\omega_m=\omega_n$. As above, this resonance results in a time evolution of the form \begin{equation}\label{eq:Udisp} \hat{U}_\mathrm{dr}=\exp\left[i\left(\mathcal{P}_n\hat{b}_{n}+\mathcal{P}_n^*\hat{b}^\dagger_{n}\right)\right] \end{equation} where $\mathcal{P}_{n} =\bar{P}_{n} t/\hbar$. This is a linear displacement operator, which over the interaction time $t$ creates a single-mode coherent quasiparticle state in the mode labelled by $n$, with an average quasiparticle population of $|\mathcal{P}_{n}|^2$. If instead we consider $\omega_m=2\omega_n$, the only resonant term results in a time evolution operator \begin{equation} \hat{U}_\mathrm{dr}=\exp\left[\frac{i}{2}\left(\mathcal{N}_{n}\left(\hat{b}_{n}^{\dagger}\right)^{2} + \mathcal{N}_{n}^*\hat{b}_{n}^{2}\right)\right] \end{equation} where $\mathcal{N}_{n}=2\bar{N}_{n}t/\hbar$, which is a single-mode squeezing operator, where $\mathcal{N}_{n}$ plays the role of the squeezing parameter. Finally, when $\omega_m=\omega_n+\omega_l$, we have a two-mode squeezing operator \begin{equation} \hat{U}_\mathrm{dr}=\exp\left[\frac{i}{2}\left(\mathcal{L}_{nl}\hat{b}_{n}\hat{b}_{l} + \mathcal{L}_{nl}^*\hat{b}_{n}^{\dagger}\hat{b}_{l}^{\dagger}\right)\right] \end{equation} where the squeezing parameter is given by $\mathcal{L}_{nl}=2\bar{L}_{nl}t/\hbar$. These operations can in particular be used to generate coherent excitations of quasiparticles that may be used, for example, for sensing applications. In the next section, we will describe a specific sensing scheme: a quasiparticle Mach-Zehnder interferometer in frequency space. \section{A quasiparticle Mach-Zehnder interferometer} \label{sec:MZI} The operations on the quasiparticle modes that we discussed in the previous section can be combined to realize quantum protocols with quasiparticles.
As a specific example, we will discuss a quasiparticle Mach-Zehnder interferometer in this section. In particular, it can be used for sensing applications as we will explain later in our first example application; measuring the s-wave scattering length. A Mach-Zehnder interferometer consists of two consecutive beam splitting operations acting on two modes, with a period of free evolution between them in which a phase difference $\Theta$ is accumulated. With the second beam splitting operation, the phase difference is imprinted on the individual modes by constructive or destructive interference, and can in principle be extracted. Here, we realize the beam splitting operations with the driving Hamiltonian as described in Sec. \ref{subsec:beamsplit} with $|\mathcal{M}_{nl}|=\pi/4$. With a vacuum state and a coherent state with the displacement parameter $\alpha_n$ as respective input states, it can be shown that the fundamental precision limit for phase estimation is given by \cite{demkowicz2015quantum} \begin{equation}\label{eq:MZIsens} \Delta \Theta \ge \frac{1}{|\alpha_n|}\,, \end{equation} and is reached for $\Theta=\pi/2$. This corresponds to the standard quantum limit. The creation of the initial state can be realized in our setup using the displacement part of the driving Hamiltonian. Additional enhancement can be achieved, for example, with squeezed probe states which can be created with the squeezing operation discussed above. However, this quantum enhancement is strongly limited due to noise as we discuss in Sec. \ref{sec:damping}. Therefore, we do not consider quantum enhanced sensing in this article. \section{Pulsed readout} \label{sec:pulsed} To apply the above-described manipulation methods to tasks in fundamental research and metrology, it must be possible to read out the state of the condensate. Several different techniques have been developed for measuring quasiparticle excitations of condensates. Excitations can be measured directly by measuring phase oscillations through "heterodyne detection" \cite{Katz:2004high} (see also \cite{Tozzo:2004pho}) or density perturbations by phase-contrast imaging \cite{Andrews:1996dir,Stamper:1998col}. Quasiparticle momenta can also be mapped onto internal states of atoms via doubly detuned Raman pulses \cite{Schuetzhold2006det}. Another option is time-of-flight measurements where the quasiparticle momentum is mapped onto free particle momenta after trap release and the propagation of the atoms falling in the gravitational field of the earth is measured (e.g. \cite{stamper1999excitation}). A particular version of this is to first split the elongated condensate in two parts that are then later brought into interference \cite{schumm2005matter,schweigler2017experimental,van2018projective,rauer2018recurrences,gluza2020quantum}. In this section, we propose a method for imprinting the displacement of a single quasiparticle mode onto the cavity field phase, which can be read out with high precision through a homodyne detection scheme on the light leaving the cavity. The method is based on the approach of pulsed optomechanics presented, for example, in \cite{vanner2011pulsed,pikovski:thesis}. To this end, we assume that the dynamics of the quasiparticles is much slower than the measurement. This requires the measurement to be constructed from pulses shorter than the time scale of the quasiparticle dynamics. 
As we will show below, for typical experimental parameters, this requires light pulses with a length below the microsecond level to access high order quasiparticle modes, which is well within the capabilities of current technology. For the pulsed optomechanics scheme, the external read-out laser is tuned on resonance with the cavity field such that $\Delta_c = 0$ and we obtain the sum of driving and back-action Hamiltonian to leading order in $1/\sqrt{N_0}$ \begin{align} \hat{H}_\mathrm{plsd} = \hat{H}_\mathrm{dr} + \hat{H}_\mathrm{ba} & = \sum_n \kappa_{n} \left(\sqrt{N_{ph}} + \delta\hat{a}^\dagger + \delta\hat{a} \right) \left(\hat{b}_{n}e^{-i(\omega_{n}t_m - \theta_n)} + \hat{b}_{n}^{\dagger}e^{i(\omega_{n}t_m - \theta_n)}\right) \,. \end{align} If we consider the dynamics of the cavity field momentum quadrature perturbation $\hat{P}_L=i\left(\delta\hat{a}^\dagger-\delta\hat{a}\right)$, we find \begin{equation} \begin{split} \hat{P}_{L}\left( \Delta t\right) & =\exp\left[\frac{i}{\hbar}\hat{H}_\mathrm{plsd} \Delta t\right]\hat{P}_{L}\exp\left[-\frac{i}{\hbar}\hat{H}_\mathrm{plsd} \Delta t\right]\\ & =\hat{P}_{L} - \sum_n \frac{2\kappa_{n}}{\hbar} \left(\hat{b}_{n}e^{-i(\omega_{n}t_m - \theta_n)} + \hat{b}_{n}^{\dagger}e^{i(\omega_{n}t_m - \theta_n)}\right) \Delta t \,. \end{split} \end{equation} Thus, we see that $\hat{P}_{L}$ depends on the quasiparticle displacement. Since we assume that the initial probe state of the light field is prepared such that $\langle \delta \hat{a} \rangle = 0$, $\langle \hat{a}^\dagger - \hat{a}\rangle = 0$ and $\langle \hat{a}^\dagger + \hat{a}\rangle = 2\sqrt{N_{ph}}\gg 1$, and the phase $\phi$ of a general coherent state is defined such that $\langle \hat{a}^\dagger + \hat{a}\rangle = 2\sqrt{N_{ph}} \cos \phi$ and $i\langle \hat{a}^\dagger - \hat{a}\rangle = 2\sqrt{N_{ph}} \sin \phi$, for small $\langle \hat{P}_{L}\left( \Delta t\right)\rangle$, we can interpret the shift of $\hat{P}_{L}$ as a small phase shift of the cavity mode state \begin{equation} \phi_\mathrm{ba} = \frac{\langle\hat{P}_{L}\left( \Delta t\right)\rangle}{2\sqrt{N_{ph}}} \,. \end{equation} This change of phase can, in principle, be read out by interfering the light from the cavity with a local phase reference, e.g. a homodyne measurement with fundamental precision limit given by the standard quantum limit \begin{equation} \Delta \phi_\mathrm{ba} \ge \frac{1}{\sqrt{N_{ph}}} \end{equation} and signal to noise ratio \begin{equation}\label{eq:SNR} \mathrm{SNR} = \frac{|\phi_\mathrm{ba}|}{\Delta \phi_\mathrm{ba}} \le \frac{1}{2} |\langle\hat{P}_{L}\left( \Delta t\right)\rangle|\,. \end{equation} A closer analysis of the measurement precision employing tools of quantum metrology reveals that the fundamental precision limit for the sensing of displacement of the quasiparticle modes saturates for large times and large coupling $\kappa_{2n_\mathrm{cav}}$; the detailed calculations can be found in Appendix \ref{app:fund-readout-limit}. To summarize, the fundamental precision limit depends on the Quantum Fisher Information (QFI), which quantifies the maximal amount of information that can be gained about a parameter encoded in a given state of the system from a measurement on the system (maximized over all possible measurements). 
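For orientation, the following minimal Python sketch (with an assumed coupling rate $|\kappa_{2n_\mathrm{cav}}|/\hbar$, pulse duration and photon number of the order of the example in Sec.~\ref{sec:applications}) evaluates the quadrature shift, the imprinted phase and the SNR of Eq. (\ref{eq:SNR}) for a single quasiparticle, $\langle \hat b_n\rangle = 1$:
\begin{verbatim}
import numpy as np

# Minimal sketch: quadrature shift, imprinted cavity phase and homodyne SNR
# for an assumed coupling rate and pulse duration (illustrative numbers).
kappa_over_hbar = 6.25e5       # |kappa_{2 n_cav}| / hbar  [1/s]  (assumed)
Dt   = 800e-9                  # accumulated pulse duration [s]
Nph  = 1e8                     # intra-cavity photon number
disp = 2.0                     # <b_n + b_n^dagger> for <b_n> = 1

P_L    = -2.0*kappa_over_hbar*disp*Dt      # shift of the momentum quadrature
phi_ba = P_L/(2.0*np.sqrt(Nph))            # imprinted phase of the cavity field
dphi   = 1.0/np.sqrt(Nph)                  # standard quantum limit of homodyning
print("phi_ba =", phi_ba, ", SNR =", abs(phi_ba)/dphi, "(= |<P_L>|/2)")
\end{verbatim}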
For a coherent state of the cavity field with amplitude $\sqrt{N_{ph}}$ and quasiparticle mode with amplitude $\mathcal{P}_{n}$, the QFI for the estimation of $\mathcal{P}_{n}$ is \begin{equation} F_{\rho_{F}}\left(\mathcal{P}_{n}\right)=\frac{16\chi^{2}}{1+4\chi^{2}}\,. \end{equation} From the QFI, we can then calculate the Cramér-Rao bound for the fundamental precision limit as \begin{equation} (\Delta\mathcal{P}_{n})^2 \ge \frac{1}{F_{\rho_{F}}} =\frac{1}{4}+\frac{1}{16\chi^{2}}\rightarrow \frac{1}{4} \quad\text{for}\quad \chi\rightarrow \infty\,, \end{equation} where $\chi = \kappa_{2n_\mathrm{cav}}\Delta t/\hbar$. Note that, alternatively, the interaction described by $\hat{H}_\mathrm{ba}$ can be employed for mode cooling and state preparation by tuning the external driving laser frequency to the quasiparticle sidebands in the sideband-resolved regime, that is, $\omega_n$ much larger than the cavity bandwidth. However, this cannot coexist with the state manipulation via intensity modulation and the pulsed optomechanics scheme, which require the opposite regime, where $\omega_n$ is much smaller than the cavity bandwidth. \section{Damping and decoherence} \label{sec:damping} For the above description of the condensate, we have considered the quasiparticle dynamics as lossless and we have neglected terms of third and fourth order in $\hat\vartheta$ in the expansion of $\hat{H}_\mathrm{total}$. In particular, terms of third order in $\hat\vartheta$ lead to Beliaev damping \cite{Giorgini:1998dam} and Landau damping \cite{Szepfalusy:1974on,Shi:1998fin,Fedichev:1998damp}, which are significant loss mechanisms in condensates. Landau damping is induced by a process where a quasiparticle excitation scatters inelastically with a thermal quasiparticle excitation. Beliaev damping is the decay of a quasiparticle excitation (via the scattering with a condensate atom) into two quasiparticle excitations. While Landau damping can be suppressed by reducing the temperature, Beliaev damping is present also at zero temperature and scales very strongly with the quasiparticle momentum. For one-dimensional quasi-condensates, both processes are highly suppressed and fourth order processes become relevant (e.g. \cite{yuen2015enhanced}). For example, for a condensate that is uniform in the elongated direction, with transverse harmonic trapping and at low temperatures, the scattering-induced damping rate of high-energy quasiparticles becomes (see e.g. $\Gamma^\mathrm{damp}$ on page 7 of \cite{mazets2011dynamics}) \begin{equation}\label{eq:1dloss} \gamma^\mathrm{1D}_\mathrm{sc} = 72 \sqrt{3} (\ln (4/3) )^2 \omega_\perp \left(\frac{\rho_\mathrm{1d} a_\mathrm{sc}^2 }{a_\perp}\right)^2 \,. \end{equation} Another damping process that has to be taken into account is three-body losses, where three atoms interact. Two atoms form a molecule and the binding energy is transferred to the molecule and the third atom, which are then ejected from the condensate. The corresponding decay rate is $\gamma_\mathrm{3B} := 3D\rho_0^2$, where $D$ is the three-body decay constant. For example, for $\rho_0 = 10^{14}\,\mathrm{cm}^{-3}$ and a decay constant $D\sim 5.8\times 10^{-30}\,\mathrm{cm}^6\mathrm{s}^{-1}$ for rubidium atoms \cite{Burt:1997coh}, we find $\gamma_\mathrm{3B} = 3D\rho_0^2 \sim 0.2 \,\mathrm{s}^{-1}$. Similarly, for an ytterbium BEC with $D \sim 4\times 10^{-30}\,\mathrm{cm}^6\mathrm{s}^{-1}$ \cite{Takuso:2003spin} and the same density, we obtain $\gamma_\mathrm{3B} \sim 0.1 \,\mathrm{s}^{-1}$.
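A minimal Python check of the loss rates quoted above (using the literature decay constants and, for Eq. (\ref{eq:1dloss}), parameters of the order of the example in Sec.~\ref{sec:applications}) reads:
\begin{verbatim}
import numpy as np

# Minimal check of the damping estimates quoted in the text.
# Three-body loss, gamma_3B = 3*D*rho0^2:
rho0 = 1e14                                              # atoms / cm^3
for label, D in [("Rb-87", 5.8e-30), ("Yb", 4.0e-30)]:   # D in cm^6 / s
    print(label, "gamma_3B ~", 3*D*rho0**2, "1/s")

# 1D scattering-induced damping with assumed example parameters:
omega_perp   = 2*np.pi*7e3                               # rad/s
rho_1d       = 1e3/200e-6                                # atoms / m
a_sc, a_perp = 5.18e-9, 130e-9                           # m
gamma_1d = 72*np.sqrt(3)*np.log(4/3)**2*omega_perp*(rho_1d*a_sc**2/a_perp)**2
print("gamma_sc^1D ~", gamma_1d, "1/s")
\end{verbatim}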
Three-body loss is also suppressed for one-dimensional quasi-condensates (see \cite{mehta_three-body_2007,haller_three-body_2011,tolra_observation_2004} and Appendix C of \cite{raetzel2021decay} for detailed discussion), so the effect of three-body loss can be limited by increasing the strength of the transverse confinement of the condensate. In addition to the resulting loss of quasiparticle excitations, loss mechanisms are always accompanied by noise, which leads to the decay of quantum enhancements in sensing applications \cite{Howl_2017,raetzel2021decay}. Both effects limit the time over which coherent processes with quasiparticles can be performed in a condensate. To ensure that the approximations made for the description introduced in the previous section are valid for the examples below, we assume that all processes are performed on a time-scale much smaller than that of all loss mechanisms. We do not consider quantum enhancement explicitly in this article. \section{Example applications} \label{sec:applications} Here, we present two examples of measurements that could be performed with the interactions we propose in this paper. \subsection{Restriction to box traps} For ease of calculation, we consider the trapping potential $V_0\left(z\right)$ in the elongated direction to be a uniform box potential of length $L$, e.g. as in \cite{rauer2018recurrences} or \cite{gaunt2013bose}, as the mode functions of the quasiparticle excitations have a particularly simple form. Furthermore, we assume that $L$ is much larger than the healing length of the condensate $\xi$, where $\xi = \hbar/\sqrt{2 m_a \tilde{g} \rho_\mathrm{0,1d}}$ and $\xi = \hbar/\sqrt{4 m_a \tilde{g} \rho_\mathrm{0,1d}}$ for three-dimensional BECs in transversely box-shaped traps and one-dimensional quasi-condensates in tight transverse harmonic traps, respectively, where $\rho_\mathrm{0,1d}=N_0/L$. This implies that the ground state wave function $\psi_0$ is almost constant over the length of the trap in the elongated direction except for a small region (on the length scale of $\xi$) at the boundary of the box potential, where it quickly decays to zero \cite{Pitaevskii:2003bose} (see figure \ref{fig:ground-state} for an example). It follows from the stationary GP equation (\ref{eq:statgp}) that the chemical potential becomes $\mu=\tilde{g} \rho_\mathrm{0,1d}+V_0(z_0)$, where $z_0$ is chosen inside the box and $\overline{\delta V}$ is neglected as explained in Section \ref{sec:hamiltonian}. To be able to derive analytical results, we restrict our considerations to two distinct regimes of quasiparticle modes. On the one hand, we consider modes where the atomic kinetic energy $\hbar^2 k_n^2/2m_a$ is much smaller than the interaction energy $\mu_0 = \tilde{g} \rho_\mathrm{0,1d}$, which implies that $k_n^2\xi^2\ll 1$ and, therefore, that the wavelength is much larger than the healing length. On the other hand, we consider high-energy modes, where $\hbar^2 k_n^2/2m_a \gg \mu_0$, which implies that the wavelength is of the same order as or much shorter than the healing length. \subsubsection{Low energy modes} For the low-energy modes, the ground state wave function appears box-like as $\psi_0=\chi_\mathrm{BT}/\sqrt{L}$, where $\chi_\mathrm{BT}$ is the characteristic function of the 1-dimensional box potential in the $z$-direction (i.e. $\chi_\mathrm{BT} = 1$ inside and $\chi_\mathrm{BT}=0$ outside the box, respectively).
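The healing length sets the scale of the boundary region and separates the two regimes just introduced. A minimal Python sketch (anticipating, for illustration, parameter values of the order of those used in the explicit examples below) evaluates $\xi$ and the ratio of kinetic to interaction energy for a few mode numbers:
\begin{verbatim}
import numpy as np

# Minimal sketch: healing length and kinetic-to-interaction energy ratio
# for a 1D quasi-condensate in a box of length L (illustrative parameters).
hbar, m = 1.054e-34, 1.44e-25              # J s, kg
L, N0   = 200e-6, 1e3
gt      = 2*hbar*(2*np.pi*7e3)*5.18e-9     # \tilde{g} ~ 2 hbar omega_perp a_sc (assumed)
rho     = N0/L
mu0     = gt*rho

xi = hbar/np.sqrt(4*m*gt*rho)              # healing length, 1D quasi-condensate
print("xi =", xi, "m")
for n in (20, 50, 1020):
    k = n*np.pi/L
    print("n =", n, "  (hbar^2 k^2 / 2m) / mu0 =", (hbar*k)**2/(2*m)/mu0)
\end{verbatim}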
The abrupt decay of the density at the condensate's boundary leads to a delta-function-like term in the stationary BDG equations that govern the spatial dependence of the mode functions. This term implies Neumann boundary conditions on the quasiparticle modes. Furthermore, for the low-energy modes, we can apply the Thomas-Fermi approximation, where the kinetic term in the stationary BDG equations is neglected. Therefore, the quasiparticle modes assume a particularly simple form \begin{equation} u_{n}^\mathrm{low}\left(z\right)=\alpha_n \psi_0(z) \varphi^\mathrm{c}_{n}\left(z\right)\text{ , and } \, \, v_{n}^\mathrm{low}\left(z\right)=\beta_n \psi_0(z)\varphi^\mathrm{c}_{n}\left(z\right), \end{equation} where \begin{align} \nonumber \alpha_n &= \left(\sigma_{n}^{-1}+\sigma_{n}\right)/2 \text{ , } \beta_n = \left(\sigma_{n}^{-1}-\sigma_{n}\right)/2\,\text{,}\\ \sigma_{n}&= \left(1+2\tilde{g} \rho_\mathrm{0,1d}\left(\frac{\hbar^2k_n^2}{2m_a}\right)^{-1}\right)^{1/4} \text{ , and }\,\, \varphi^\mathrm{c}_{n}\left(z\right)= \sqrt{2 } \cos\left(k_{n}\left(z+\frac{L}{2}\right)\right)\text{ , }k_{n}=\frac{n\pi}{L}\,, \end{align} and we have chosen the boundaries of the trap potential at $z=-L/2$ and $z=L/2$. The mode frequencies become \begin{equation}\label{eq:freq} \omega_{n}=\frac{\hbar k_{n}^{2}}{2m_a}\sigma_{n}^2\,, \end{equation} which, in the low-energy limit, simplifies to $\omega_{n}\approx c_{s}k_{n}$, where $c_{s}=\sqrt{\tilde{g}\rho_\mathrm{0,1d}/m_a}$ is the speed of sound. \subsubsection{High energy modes} The high-energy modes are defined by having kinetic energies much larger than the atom-atom interaction energy $\mu_0$, which implies that these modes are dominated by their kinetic energy. Depending on the depth of the trapping potential, high-energy quasiparticles can be free or bound. In the case of free propagation, we would recover the mode decomposition chosen in \cite{nagy2009nonlinear,ritsch2013cold}. In that case, quasiparticles would eventually be lost in the form of atoms leaving the condensate, and the length of the condensate would limit the time available for driving and sensing. In the following, we assume that the trapping potential is deep enough for atoms with high-energy quasiparticle momenta to be bound states. For the momenta $\hbar k$ of photons in the optical range, the recoil momentum $p_\mathrm{rec}=2\hbar k$ corresponds to a kinetic energy $p_\mathrm{rec}^2/(2m)$ of the order of $2\pi \hbar \times 10^4\,$Hz for $^{87}\mathrm{Rb}$ atoms. Trap depths of this order are consistently achieved with harmonic traps and are achievable in principle for box traps as well. In practice, the box trap will not have perfectly steep walls on the length scale defined by the wavelength of high-energy modes, which is also on the same order or below the healing length $\xi$ of the condensate. Therefore, in contrast to the case of low-energy modes, there are no simple boundary conditions for high-energy modes. The mode functions and energies have to be calculated numerically for explicit experimental situations. In Appendix \ref{app:toymodel}, we present a toy model with a trapezoidal well where this is realized. Assuming that $\xi\ll L$, in most parts of the trap the high-energy modes can be approximated as a linear combination of sine and cosine functions \footnote{Note that the conventional mode decomposition for uniform condensates would be modes of the form $\exp(i k_n z)$ and their complex conjugate.
However, since the light mode is confined in a cavity, the light-atom interaction is symmetric under the exchange of the propagation direction of the quasiparticles and modes of the form $\cos(k_{n}(z+L/2))$ and $\sin(k_{n}(z+L/2))$ are directly addressed. The explicit coupling strength will depend on the position of the condensate inside the optical resonator.}, that is \begin{eqnarray} u_{n}^\mathrm{high}\left(z\right)&\approx& \frac{1}{\sqrt{L}}\left( A^\mathrm{c}_n \varphi^\mathrm{c}_{n}\left(z\right) + A^\mathrm{s}_n \varphi^\mathrm{s}_{n}\left(z\right)\right) \end{eqnarray} where $\varphi^\mathrm{c}_{n}$ was defined above and \begin{align} \varphi^\mathrm{s}_{n}\left(z\right)= \sqrt{2 } \sin\left(k_{n}\left(z+\frac{L}{2}\right)\right)\,. \end{align} For the sake of simplicity and since we are only interested in principle limits of our scheme here, we assume in the following that the trap potential is optimized such that the contribution $A^\mathrm{s}_{n}$ is negligible for all modes that we couple to or address below (see Appendix \ref{app:toymodel} for an example) and set \begin{eqnarray} u_{n}^\mathrm{high}\left(z\right)&\approx& \frac{1}{\sqrt{L}} \varphi^\mathrm{c}_{n}\left(z\right) \, \end{eqnarray} inside the trap and $u_{n}^\mathrm{high}=0$ outside the trap. \subsubsection{The back-action Hamiltonian and pulsed readout} By choosing the cavity mode appropriately (e.g. by choosing the length of the cavity, positioning of the cavity mirrors with respect to the atomic cloud), we can achieve that the coupling $\kappa_n$ is strongly emphasized for a specific mode and $\hat{H}_\mathrm{ba}$ in equation (\ref{eq:backaction_hamiltonian}) reduces approximately to the standard linearized optomechanical coupling Hamiltonian. In the following, we choose \begin{equation} f_\mathrm{cav}(z) = \sin\left(k_\mathrm{cav}\left(z+\frac{L}{2}\right)\right) \end{equation} and we assume that $k_\mathrm{cav}=n_\mathrm{cav} \pi/L$ where $n_\mathrm{cav}\in \mathbb{N}$. Dirichlet boundary conditions are then fulfilled at $z_{j_1}=-(j_1+1/2)L$ and $z_{j_2}=(j_2 + 1/2) L $ with $j_1,j_2 \in \mathbb{N}$, which implies that the cavity mirrors could be placed at any of these positions to realize the modes $f_\mathrm{cav}(z)$. The effective cavity mode volume becomes $\mathcal{V}_c=L_c \mathcal{A}_c/2$. Based on the approximation made above for the mode functions, we find $\theta_n=0$ (as defined in equation (\ref{eq:coupling})) and \begin{eqnarray} \kappa_n &\approx & - \frac{\hbar g_0^2\sqrt{N_0 N_{ph}} }{2\sqrt{2}\Delta_A }\, \sigma_{2n_\mathrm{cav}}^{-1} \delta^{2n_\mathrm{cav}}_n \,, \end{eqnarray} where $\delta^a_b$ is the Kronecker delta. Then, we obtain that the coupling is only significant for $n = 2n_\mathrm{cav} $, that is $k_n = 2k_\mathrm{cav}$, which implies that momentum is conserved as in uniform condensates \cite{nagy2009nonlinear}. The back-action Hamiltonian becomes \begin{align} \hat{H}_\mathrm{plsd} & = \kappa_{2n_\mathrm{cav}} \left(\sqrt{N_{ph}} + \delta\hat{a}^\dagger + \delta\hat{a} \right) \left(\hat{b}_{2n_\mathrm{cav}} + \hat{b}_{2n_\mathrm{cav}}^{\dagger} \right) \label{eq:backaction_hamiltonian_2c} \end{align} where we have assumed that the measurement time $t_m$ is chosen such that $\omega_{2n_\mathrm{cav}} t_m$ is a multiple of $2\pi$. 
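The Kronecker delta in $\kappa_n$ can be checked numerically. A minimal Python sketch evaluating the spatial overlap $\int dz\, f_\mathrm{cav}^2\, \varphi^\mathrm{c}_n/L$ for a range of $n$ (dimensionless units, with $n_\mathrm{cav}$ chosen arbitrarily) shows that it vanishes except for $n=2n_\mathrm{cav}$:
\begin{verbatim}
import numpy as np

# Minimal sketch: the overlap integral entering kappa_n is non-zero only for
# n = 2 n_cav (momentum conservation).  Crude rectangle-rule quadrature.
L, n_cav, nz = 1.0, 5, 200000
z  = np.linspace(-L/2, L/2, nz, endpoint=False)
dz = z[1] - z[0]
f2 = np.sin(n_cav*np.pi/L*(z + L/2))**2            # f_cav^2

for n in range(1, 16):
    phi_n   = np.sqrt(2.0)*np.cos(n*np.pi/L*(z + L/2))
    overlap = np.sum(f2*phi_n)*dz/L
    print("n =", n, "  overlap =", round(overlap, 6))
\end{verbatim}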
The momentum quadrature evolves accordingly as \begin{equation}\label{eq:quadev} \hat{P}_{L}\left(t\right) = \hat{P}_{L}- \frac{2\kappa_{2n_\mathrm{cav}}}{\hbar} \left(\hat{b}_{2n_\mathrm{cav}} + \hat{b}_{2n_\mathrm{cav}}^{\dagger}\right) \Delta t \, \end{equation} which implies that the resulting phase shift can be associated with one specific quasiparticle mode. \subsubsection{The driving Hamiltonian} As for the back-action Hamiltonian, momentum conservation restricts the modes that the driving Hamiltonian allows access to. Based on the approximation made above for the mode functions, for the first order term in the driving Hamiltonian, we immediately find \begin{align}\label{eq:Pn} \bar{P}_{n} & \approx -\frac{\eta}{2 } \sigma_n^{-1}\bar{\kappa} \sqrt{2 N_0} \,\delta^{2n_\mathrm{cav}}_n \quad\text{where}\quad \bar{\kappa} = \hbar g_0^2 N_{ph,0}/(4\Delta_A)\,. \end{align} If a high-energy mode is involved, we find that all squeezing processes are suppressed and we do not write the expressions here. For low-energy modes, we obtain \begin{equation} \bar{N}_{n} \approx - \frac{\eta}{8}\left(\sigma_{n}^{-2}-\sigma_{n}^{2}\right) \bar{\kappa} \,\delta_{n}^{n_\mathrm{cav}} \,, \end{equation} while the coefficient for two mode squeezing becomes \begin{align}\label{eq:coefficients3} \bar{L}_{nl} & \approx -\frac{\eta}{4}\left(\sigma_{n}^{-1}\sigma_{l}^{-1} - \sigma_{n}\sigma_{l}\right) \bar{\kappa} \, \left(\delta_{n-l}^{2n_\mathrm{cav}} + \delta_{n+l}^{2n_\mathrm{cav}}\right) \,. \end{align} The general expression for the coefficient for the beam-splitting operation is \begin{align}\label{eq:coefficients2} \bar{M}_{nl} & \approx -\frac{\eta}{4}\left(\sigma_{n}^{-1}\sigma_{l}^{-1} + \sigma_{n}\sigma_{l}\right) \bar{\kappa} \,\left(\delta_{n-l}^{2n_\mathrm{cav}} + \delta_{n+l}^{2n_\mathrm{cav}}\right) \,. \end{align} where we took into account that $l<n$. The above list of coefficients will be the basis for explicit numerical examples in the next sections. In the expressions for $\bar{L}_{nl}$ and $\bar{M}_{nl}$, several options for combinations of momenta of coupled modes appear. For equidistant energies, each expression contains a momentum combination that, together with the energy condition of resonant driving, leads to resonant direct driving of a third mode. For example, for $\bar{L}_{nl}$, the condition $n+l=2n_\mathrm{cav}$ on the momenta and $\omega_n+\omega_l=\omega_m$ would lead to direct driving of the mode $n+l$. For $\bar{M}_{nl}$, the condition $n-l=2n_\mathrm{cav}$ on the momenta and $\omega_n-\omega_l=\omega_m$ would lead to direct driving of the mode $n-l$. Since the coupling through $\bar{L}_{nl}$ and $\bar{M}_{nl}$ is weaker than the direct driving via $\bar{P}_{n}$ by a factor $2\sqrt{N_0}$, the direct driving of the additional modes would be significant, in general, even too large for the system to stay in the Bogoliubov regime. Therefore, the momentum combinations that lead to the driving of additional modes have to be avoided or at least one of the involved modes has to be a high-energy mode as the energy grows quadratically with the wave number in the high-energy regime. \subsection{Explicit numerical examples} In the following, we give two explicit examples that may be realized with typical parameters of state-of-the-art experimental systems. The wavelength of typical lasers is below one micron and very high densities of the condensate would be necessary to obtain a healing length which is at least one order of magnitude smaller than the wavelength. 
This would limit the condensate lifetime and the quasiparticle coherence significantly (see section \ref{sec:damping}). Therefore, we will consider the first-order coupling (back-action and displacement) only for high-energy modes, i.e. for modes for which $\hbar^2k_n^2/(2m_a) \gg \tilde{g} \rho_\mathrm{0,1d}$. In all of the following examples, we consider two consecutive beam splitting or mode swapping operations that couple the high-energy mode to a low-energy mode. We consider a rubidium-87 1d quasi-condensate of $N_0=10^3$ atoms in a $200\,\mu$m long box trap at a density of $10^{14}$ atoms per cm$^3$, for which the healing length is approximately $\xi=2.77\times10^{-7}\,$m and the effective cross-sectional area $\mathcal{A}$ is equivalent to that of a circle with radius $a_\perp \sim 130$\,nm, which could be created with a harmonic trap in the transverse direction with trap frequency $\sim 7\,$kHz. The scattering length of rubidium-87 is $\sim 5.18$\,nm \cite{Egorov:2013meas} and its atomic mass is $1.44\times 10^{-25}\,\mathrm{kg}$, which implies $\rho_\mathrm{1d}a_\mathrm{sc}\sim 0.03\ll 1$, justifying the description of the quasi-condensate based on $\hat{H}_\mathrm{total}$. In figure \ref{fig:ground-state}, we show a plot of the absolute value squared of the ground state wave function in an ideal box potential and in a more realistic continuous potential approximating a box potential. We see that the change of the ground state is small and we consider the case of a discontinuous box potential for our analytical estimates below. Rubidium atoms have two D-lines that can be used for the dispersive coupling; the D1-line is at $\nu_{D1}\sim 377$\,THz and the D2-line at $\nu_{D2}\sim 384$\,THz with a natural linewidth $\Gamma \sim 2\pi \times 6$\,MHz and dipole moments of $d_1 \sim 2.5\times 10^{-29}$\,C\,m and $d_2 \sim 3.6\times 10^{-29}$\,C\,m for the D1 and D2 line, respectively. We will consider a first laser mode used for the preparation of the probe state through the displacement term in the driving Hamiltonian, and choose a detuning of $\Delta_{A,1}/2\pi \sim +300$\,GHz from the D2-line. This implies that a mode with mode number $n_\mathrm{high} \sim 1020$ is addressed by direct driving. This mode is at a frequency of about $15$\,kHz for which $\hbar^2k_{n_\mathrm{high}}^2/(2m_a \tilde{g} \rho_\mathrm{0,1d}) \sim 40$ and the mode is in the high-energy regime. For the pulsed read-out, we consider a second laser mode with a much smaller detuning of $\Delta_{A,1'}/2\pi \sim 1$\,GHz to increase the coupling strength. This laser couples to the same quasiparticle mode as the driving laser. This is because the difference in the detuning is small enough such that the photon momenta do not differ significantly. We consider two options for a laser mode for beam splitting/mode swapping operations. For our first example, a low-energy phonon mode with high frequency is advantageous. For the above parameters, the condition $\hbar^2k_n^2/(2m_a) \ll \tilde{g} \rho_\mathrm{0,1d}$ for low-energy modes limits the mode number to $n_\mathrm{low}\sim 50$ with a frequency of $\sim 170\,$Hz. Coupling this mode to the high-energy mode necessitates a detuning of the beam splitting laser of $\Delta_{A,2}/2\pi \sim -20\,$THz from the D1-line. This detuning is larger than the frequency difference between the two lines, which implies that both lines contribute to the driving process. For our second example application, a low frequency of the low-energy mode is advantageous.
The mode number is already limited from below by the necessity to perform the driving over several periods of the mode while keeping the total driving time significantly below $1/\gamma^\mathrm{1D}_\mathrm{sc}$. Therefore, we choose a detuning of $\Delta_{A,2'}/2\pi \sim -300\,$GHz from the D1-line, which implies a beam splitter/mode swapper coupling of the mode $n_\mathrm{high}\sim 1020$ to the mode $n_\mathrm{low}'\sim 20$ with a frequency of $\sim 70\,$Hz. \begin{figure} \caption{\label{fig:ground-state} Absolute value squared of the condensate ground state wave function in an ideal box potential and in a more realistic continuous potential approximating a box potential (see the discussion in the text).} \end{figure} Assuming a cavity length of $L_\mathrm{cav} \sim 10$\,cm, the free spectral range becomes $\sim 3$\,GHz and it will be easy to find matching cavity modes in the chosen frequency ranges. Furthermore, we assume an effective cross-sectional area of the cavity mode of 1\,mm$^2$. This implies single-photon Rabi frequencies of $g_{0,2} \sim 180$\,kHz and $g_{0,1} \sim 130$\,kHz for the D2-line and D1-line, respectively. To exclude all other processes besides the wanted beam splitting/mode swapping operation, we use a modulation of the intensity of the beam splitting laser mode in the cavity on resonance with the frequency difference between the high-energy and the low-energy mode and assume that each beam splitting/mode swapping operation lasts for $t_\mathrm{BS}=200$\,ms. Furthermore, we consider $100$\,ms for the accumulation of the signal between the beam splitting/mode swapping operations, which gives a total time of the whole scheme of $500$\,ms plus the time for the preparation of the probe state, where we assumed that the read-out happens on a much shorter time-scale. For the parameters considered here, we find that $\gamma_\mathrm{sc}^\mathrm{1D} \sim 0.5$\,s$^{-1}$ for the mode $n_\mathrm{high}$ and we will neglect the decay of quasiparticles in the following. With the above parameters, for the pulsed optomechanical read-out scheme, we obtain single-quasiparticle precision (SNR $\sim 1$ for $\langle b_{n_\mathrm{high}}\rangle = 1$, compare Eq. (\ref{eq:SNR}) with Eq. (\ref{eq:quadev})) if the accumulated total duration of the measurement is $\Delta t \gtrsim \hbar/(2 |\kappa_{2n_\mathrm{cav}}|) = \sqrt{2} \Delta_{A,1'}/(g_{0,2}^2\sqrt{N_0 N_{ph}})\sim 800\,\mathrm{ns}$, assuming $N_{ph}\sim 10^8$ photons in the cavity mode (corresponding to an intra-cavity power of $\sim 40$\,mW and an intensity of $4\,\mathrm{W/cm^2}$). To generate, for example, a coherent state of $N_{n_\mathrm{high}} = 10$ quasiparticles in the high-energy mode by resonant modulation of the intra-cavity power of the first laser mode for a duration of $t_\mathrm{cr}\sim 30$\,ms, an average number of photons in the cavity $N_{ph,0}=|8\Delta_{A,1} \sqrt{N_{n_\mathrm{high}}}/(\eta g_{0,2}^2 t_\mathrm{cr} \sqrt{2N_0}) | \sim 10^3$ \footnote{Compare Eq. (\ref{eq:Pn}) with Eq. (\ref{eq:Udisp}).} (corresponding to $\sim 0.4\,\mathrm{\mu W}$ intra-cavity power) is sufficient, given a modulation amplitude $\eta=1$\footnote{Note that we are interested in keeping the photon number sufficiently large to reduce the potential effects of photon number fluctuations (shot noise). The photon number can be increased, for example, by increasing the detuning. For sufficiently high photon numbers, the photon number fluctuations will average out due to the short lifetime of photons in the cavity of much less than the read-out time of 400\,ns.}.
Furthermore, we obtain that the spatial maximum of the corresponding time-averaged light potential is $\mathrm{max}(\overline{\delta V})=8\hbar \sqrt{N_{n_\mathrm{high}}}/(t_\mathrm{cr}\eta \sqrt{2N_0}) \sim 0.008 \mu_0$, which justifies our treatment of the light potential as a small perturbation. For the two driving laser modes employed for beam splitting/mode swapping with detunings $\Delta_{A,2}$ and $\Delta_{A,2'}$, we find a necessary photon number for a full mode swapping operation (i.e. $\bar{M}_{n_\mathrm{low}n_\mathrm{high}}t/\hbar \sim \pi/2$) at $t_\mathrm{BS}\sim 100$\,ms of $N_{ph,0}\gtrsim |4\pi(g_{0,1}^2/\Delta_{A,2} + g_{0,2}^2/(2\pi(\nu_{D1}-\nu_{D2})+\Delta_{A,2}))^{-1}/(\eta \alpha_{n_\mathrm{low}} t_\mathrm{BS}) | \sim 10^5$ and $N_{ph,0}\gtrsim |4\pi \Delta_{A,2'}/(\eta\alpha_{n_\mathrm{low}'} g_{0,1}^2 t_\mathrm{BS})| \sim 4\times 10^3$, respectively, for the Bogoliubov coefficients $\alpha_{n_\mathrm{low}}\sim 1.3$ and $\alpha_{n_\mathrm{low}'}\sim 1.8$ and a modulation amplitude $\eta=1$. For a beam splitting operation, half the number of photons may be employed. For a full mode swapping operation, the spatial maximum of the time-averaged light-potential becomes $\mathrm{max}(\overline{\delta V})=4\pi\hbar/(\eta\alpha_{n_\mathrm{low}}t_\mathrm{BS}) \sim 0.02 \mu_0 $ for both driving laser frequencies. This value is sufficiently low with respect to the chemical potential to treat the resulting change of the condensate's ground state as a small perturbation, justifying our approach. \subsubsection{Measuring the s-wave scattering length} A change of the s-wave scattering length from $a_{\mathrm{sc},1}$ to some $a_{\mathrm{sc},2}$ will change the frequency of low-energy modes through the dispersion relation (\ref{eq:freq}), i.e. $\omega_{n_\mathrm{low}} \propto \sqrt{a_\mathrm{sc}}$. The frequency of the high-energy mode can be approximated as unaffected. Therefore, the relevant phase shift accumulated between the two modes of the quasiparticle MZI is $\Delta \phi = \Delta\omega_{n_\mathrm{low}} t_\mathrm{int}$, where $\Delta\omega_{n_\mathrm{low}} =\omega_{n_\mathrm{low}}(a_{\mathrm{sc},1})-\omega_{n_\mathrm{low}}(a_{\mathrm{sc},2})$ is the frequency shift due to the change of the scattering length and $t_\mathrm{int}$ is the time during which the s-wave scattering length is modified. We choose $t_\mathrm{int}=100$\,ms, equal to the time between the two beam splitting operations of the MZI. By Gaussian error propagation, we obtain from Eq. (\ref{eq:MZIsens}) the fundamental relative precision limit for the estimation of a small change in the scattering length as \begin{equation} \frac{\Delta a_\mathrm{sc}}{a_\mathrm{sc}} \ge \left|\frac{d \Delta\omega_{n_\mathrm{low}}}{d a_\mathrm{sc}} t_\mathrm{int}\right|^{-1} \frac{1}{a_\mathrm{sc} \sqrt{N_{n_\mathrm{high}}}} \sim \frac{2}{\omega_{n_\mathrm{low}} t_\mathrm{int} \sqrt{N_{n_\mathrm{high}}}} \end{equation} where we have assumed that the low-energy mode is initially in the ground state and the high-energy mode is brought into a coherent probe state with quasiparticle number $N_{n_\mathrm{high}}$. If we assume that the probe state contains $N_{n_\mathrm{high}} = 10$ quasiparticles and use $\omega_{n_\mathrm{low}}\sim 2\pi\times 170\,$Hz for $n_\mathrm{low}\sim 50$, we find for the relative precision $\Delta a_\mathrm{sc}/a_\mathrm{sc}\gtrsim 0.006$.
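A minimal Python check of this estimate, using the numbers just quoted, reads:
\begin{verbatim}
import numpy as np

# Minimal check of the relative precision on the s-wave scattering length.
omega_low = 2*np.pi*170.0     # rad/s,  mode n_low ~ 50
t_int     = 0.1               # s
N_high    = 10                # quasiparticles in the coherent probe state
print("Delta a_sc / a_sc >=", 2.0/(omega_low*t_int*np.sqrt(N_high)))   # ~ 0.006
\end{verbatim}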
This value for the precision limit can be optimized by considering longer interaction times which is, however, limited by the decay of quasiparticles due to the damping mechanisms described above. A larger number of quasiparticles in the probe state would also be advantageous; this is however limited to be much smaller than the number of atoms in the quasi-condensate, which is limited due to the size of the trap and the necessity to keep the density low to avoid three-body losses. \subsubsection{Sensing oscillating force gradients} \label{subsec:oscillating_mass} As a second example application, we consider a temporally oscillating force gradient represented by the potential $\delta V=-z^2 G_0 \sin\left(\Omega t\right)$. In \cite{ratzel_dynamical_2018}, it has been shown that the effect on quasiparticle modes is stronger for smaller frequencies and enhanced by resonance. Therefore, we assume that $\Omega$ is on resonance with the low-energy mode $n_\mathrm{low}$. In the Heisenberg picture and to leading order in the quasiparticle field $\hat\vartheta$, the interaction Hamiltonian takes the form (see Appendix \ref{sec:derivationintHamPi}) \begin{equation}\label{eq:intHamPi} \hat{H}_\mathrm{int}= \Pi_{n_\mathrm{low}}\left(\hat{b}_{n_\mathrm{low}} - \hat{b}_{n_\mathrm{low}}^{\dagger}\right), \end{equation} where, \begin{eqnarray} \Pi_{n_\mathrm{low}} &=& i\frac{\sqrt{N_0} G_0}{2 L \sigma_{n_\mathrm{low}}} \int_{-L/2}^{L/2} dz\, \left(z^2 - \frac{L^2}{12} \right) \, \varphi_{n_\mathrm{low}}^\mathrm{c}(z) = i\frac{\sqrt{2 N_0} G_0}{\sigma_{n_\mathrm{low}}k_{n_\mathrm{low}}^2} \nonumber \\ &\approx & i G_0 \sqrt{2\pi N_0 \rho_0 a_\mathrm{sc}} \left(\frac{\hbar}{m_a \omega_n} \right)^{3/2} \,, \end{eqnarray} for $n_\mathrm{low}$ even and $\Pi_{n_\mathrm{low}}=0$ for $n_\mathrm{low}$ odd, and through the RWA, we disregard terms that are off resonant. The effect of the oscillating force is to create quasiparticles through a linear displacement with amplitude $|\Pi_{n_\mathrm{low}} t/\hbar|$. We first apply a mode swapping operation between modes $n_\mathrm{low}=20$ and $n_\mathrm{high}=1020$, as detailed in section \ref{subsec:beamsplit}. This is advantageous as the low-energy mode has higher thermal occupancy than the high-energy mode we consider for read-out. In particular, we can assume that the probe state is the vacuum state and that the time-scale of thermalization which is bound from below by $1/\gamma_\mathrm{sc}^\mathrm{1D}$ is much larger than the time scale of the full sensing protocol. We then let the quasiparticle modes evolve while the force gradient oscillates, creating a displaced state in mode $n_\mathrm{low}=20$. A second mode swapping operation is applied to bring the displacement signal back to $n_\mathrm{high}$, where it can be read out with an appropriate read-out scheme, for example, the pulsed scheme outlined in section \ref{sec:pulsed}. We assume that the read-out achieves a single-quasiparticle precision in a single-shot experiment. Given the above parameters, one quasiparticle is created after $t_\mathrm{int}=100\,$ms for a minimal force gradient \begin{equation} G_{0,\mathrm{min}} = \frac{\left(m_a \omega_{n_\mathrm{low}}\right)^{3/2}}{t_\mathrm{int} \sqrt{ 2\pi \hbar N_0 \rho_0 a_\mathrm{sc}}} \sim 10^{-23}\,\mathrm{Nm^{-1}} \,. \end{equation} This value corresponds to the force gradient induced by a harmonic potential with frequency $\sqrt{G_{0,\mathrm{min}}/m_a}/(2\pi) \sim 1$\,Hz acting on the atoms. 
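A minimal Python check of this number and of the equivalent trap frequency (using the parameters quoted above) is:
\begin{verbatim}
import numpy as np

# Minimal check of the minimal detectable force gradient G_0,min.
hbar, m  = 1.054e-34, 1.44e-25        # J s, kg
omega_n  = 2*np.pi*70.0               # rad/s, probe mode n_low' ~ 20
t_int    = 0.1                        # s
N0, rho0 = 1e3, 1e20                  # atoms, atoms / m^3
a_sc     = 5.18e-9                    # m

G0min = (m*omega_n)**1.5/(t_int*np.sqrt(2*np.pi*hbar*N0*rho0*a_sc))
print("G_0,min ~", G0min, "N/m")
print("equivalent trap frequency ~", np.sqrt(G0min/m)/(2*np.pi), "Hz")
\end{verbatim}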
Repeating the experiment for about a week, that is about $10^4$ times, would reduce the minimal force gradient by another 2 orders of magnitude to $\sqrt{G_{0,\mathrm{min}}/m_a}/(2\pi) \sim 10$\,mHz. This theoretical prediction can be compared with the experiment described in \cite{Obrecht:2007meas}, where a precision slightly below 100\,mHz for a time-independent Casimir-Polder force gradient has been reached by employing the center-of-mass motion of a BEC. However, one has to take into account that sources of noise and other systematic errors, which play an important role in \cite{Obrecht:2007meas}, are not considered here. The minimal force gradient can, in principle, be decreased by choosing a lower frequency of the probe mode. For example, employing $n=2$ instead of $n=20$ would lead to a decrease of $G_{0,\mathrm{min}}$ by a factor $\sim 30$. However, to ensure that the mode swapping only couples to the low-energy mode, the driving time has to be increased by a factor 10 as well, which increases the total interaction time to the same value as the time scale of the decay of high-energy quasiparticles $1/\gamma_\mathrm{sc}^\mathrm{1D}$. As above, an increase of the number of atoms seems to be advantageous as it would increase the driving strength. However, it would also lead to a strong decrease in condensate lifetime, which overcompensates the increase in driving strength. \section{Conclusion and Discussion} \label{sec:conclusion} We developed a framework for the description of cavity optomechanics with trapped condensates to describe the manipulation and read-out of quasiparticle states with laser fields. The set of possible operations includes displacement, single-mode squeezing, two-mode squeezing and mode-mixing. In particular, displacement and mode-mixing can be used to create a quasiparticle Mach-Zehnder interferometer (MZI) in frequency space. We have presented example applications and considered parameters of state-of-the-art technology to enable estimates of the fundamental limitations of the scheme for quantum metrology. For example, the measurement scheme for force gradients that we discussed may be employed to measure the thermal Casimir-Polder force, as in \cite{Obrecht:2007meas}, and the quasiparticle MZI may be employed for sensing effects that lead to a differential change of the frequencies of the quasiparticle modes. However, we found that the fundamental sensitivity bounds for state-of-the-art parameters are not very promising and significant technological progress would be needed to overcome the inherent limitations. The most severe restrictions arise from noise in the condensate due to quasiparticle-quasiparticle interactions and atom loss. Such effects limit the lifetime of the condensate, and thus limit the time for signal accumulation and state manipulation. We conclude that, in practice, displacements can only be induced in modes with particle-like dispersion due to the short wavelengths of lasers, which can readily be confined to cavities. For these high-energy modes, Beliaev damping becomes strongly pronounced in three-dimensional ultra-cold Bose gases. Therefore, we considered one-dimensional quasi-condensates in our example applications. It would be interesting to re-assess our example applications if laser-cavity systems with wavelengths in the range of $10-100\,\mathrm{\mu m}$ were available.
In that case, low-energy quasiparticle modes could be addressed directly and three-dimensional quantum gases could be considered, allowing for much higher numbers of condensate atoms and quasiparticles in the probe states. Another option to access low-lying phonon modes directly would be to arrange the beam line of the driving laser at a large angle to the longitudinal axis of the BEC \footnote{In this case, momentum transfer to the phonons would couple photons out of the cavity mode, which implies a different coupling Hamiltonian than the one used in this article.}, which however does not lift the restriction to one-dimensional condensates, as the mode structure has to be confined to restrict the coupling to longitudinal modes. Here, we have estimated the fundamental capabilities of our scheme without considering experimental imperfections such as vibrations, trap instability, imperfect boundary conditions and atom losses. As these would depend on a specific implementation, we leave these calculations to future work, which could assist in further identifying the technological advancements necessary to implement our scheme. It would be interesting to investigate similar sensing and mode manipulation schemes with superfluid helium, which is much more stable and enables much higher coherence times as well as direct driving of modes with wavelengths in the microwave range (see for example \cite{Lorenzo_2014}). \appendix \section{Cavity frequency shift} \label{sec:freqshift} Due to the proportionality of $\hat{H}_\mathrm{disp}$ in Eq. (\ref{eq:Hdisp}) to the photon number operator $\hat{a}^\dagger\hat{a}$, we can conclude that $\hat{H}_\mathrm{disp}$ leads to a shift of the cavity frequency proportional to the number of atoms $N_0$ in the atomic ensemble weighted with the overlap of $\hat{\psi}^\dagger(z)\hat{\psi}(z)$ and the square of the cavity mode function. Effectively, this is a refractive index change due to the presence of the atoms in the cavity. We take the average frequency shift into account from the start by renormalizing the cavity mode frequency as $\omega_c \rightarrow \omega_c - \delta \omega_c$, where \begin{equation} \delta \omega_c = \frac{g_0^2 N_0}{\Delta_A} \int dz \,\chi_\mathrm{BT}(z) f_\mathrm{cav}(z)^2/L\,, \end{equation} where $L$ is the length of the box potential and $\chi_\mathrm{BT}$ is the characteristic function of the 1-dimensional box potential in the $z$-direction (i.e. $\chi_\mathrm{BT} = 1$ inside and $\chi_\mathrm{BT}=0$ outside the box, respectively). For each single run of the experiment, the atom number is fixed and does not have quantum properties. Therefore, we can formally replace $N_0$ with the atom number operator $\hat N = \mathcal{A}\int dz\, \hat\psi^\dagger(z) \hat\psi(z)$ in the following.
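For a box-trapped condensate and a cavity mode with an integer number of half wavelengths inside the box, the overlap integral evaluates to $1/2$, and the size of the shift can be estimated with a minimal Python sketch (the single-photon Rabi frequency and detuning are assumed values of the order of those used in the examples of the main text):
\begin{verbatim}
import numpy as np

# Minimal sketch: average cavity frequency shift delta_omega_c for a box-trapped
# condensate, with the overlap integral (1/L) int chi_BT f_cav^2 dz = 1/2.
g0      = 2*np.pi*180e3      # rad/s, single-photon Rabi frequency (assumed)
N0      = 1e3                # condensate atom number
Delta_A = 2*np.pi*300e9      # rad/s, detuning from the atomic transition (assumed)

delta_omega_c = g0**2*N0/Delta_A*0.5
print("delta_omega_c / 2pi ~", delta_omega_c/(2*np.pi), "Hz")
\end{verbatim}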
Then, together with the Hamiltonian governing the dynamics of the atomic ensemble, the total Hamiltonian of our system is \begin{align}\label{eq:hamiltonianshifted} \nonumber \hat{H}_\text{total} & = \hbar \omega_c \hat{a}^\dagger\hat{a} + \int dz \, \hat{\psi}^\dagger(z) \, \hbar\frac{g_0^2}{\Delta_A} \,\hat{a}^\dagger\hat{a} \left( f_\mathrm{cav}^2(z) - \int dz'\, \chi_\mathrm{BT}(z') f_\mathrm{cav}^2(z')/L\right) \hat{\psi}(z) \\ & + \int dz\,\hat{\psi}^\dagger(z)\Bigg[ -\frac{\hbar^2}{2m}\partial_z^2 + \frac{\tilde{g}}{2} \hat{\psi}^\dagger(z)\hat{\psi}(z) + V_0\left(z\right) + \delta V_\mathrm{ext}\left(z,t\right) \Bigg]\hat{\psi}(z) \end{align} \section{The interaction Hamiltonian} \label{sec:intham} Starting from the split of the atomic field into the ground state part and excitations as $\hat\psi = \hat{c}_0 \psi_0 + \hat{\varphi}$, neglecting all contributions of the perturbation $\hat\varphi$ and the fluctuations of the light field $\delta\hat{a}$, we obtain \begin{eqnarray} \nonumber \hat H_\mathrm{total} &=& \int dz\, \hat{c}_0^\dagger \psi_0^{*}\left(-\frac{\hbar^2}{2m} \partial_z^2 + V + \frac{\tilde{g}}{2}\hat{c}_0^\dagger\hat{c}_0|\psi_0|^2\right)\hat{c}_0\psi_0 \,. \end{eqnarray} With the canonical commutation relations $[\hat{c}_0,\hat{c}_0^\dagger]=1$, the Heisenberg equation of motion for the ground state leads to \begin{equation} \hat{c}_0(t) = \hat{c}_0(0) \exp\left(-\frac{i}{\hbar}\left(\mu t + \int_0^t dt'\delta \mu_\mathrm{osc}(t') + \tilde{g}t \int dz\,|\psi_0|^4 \left(\hat{N}_0 - (N_0+1)\right)\right)\right)\,, \end{equation} where $\hat{N}_0=\hat{c}_0^\dagger\hat{c}_0$ for normalized $\psi_0$, where we defined $\delta \mu_\mathrm{osc}(t)= \int dz\, |\psi_0|^2 V_\mathrm{osc}(t)$. If we assume that the state of condensate is restricted to the particle sector around the particle number $N_0\gg 1$ that is much larger than the particle number fluctuations, we can neglect the last term in the above equation and obtain \begin{equation} \hat{c}_0(t) \approx \hat{c}_0(0) \exp\left(-\frac{i}{\hbar}\left(\mu t + \int_0^t dt'\delta \mu_\mathrm{osc}(t')\right)\right)\,, \end{equation} With this observation, we re-define the splitting of the atomic field operator in the Heisenberg picture as \begin{eqnarray} \hat\psi= (\hat c_0 \psi_0 + \hat\vartheta) e^{-i\left( \mu t + \int_0^t dt'\, \delta\mu_\mathrm{osc}(t')\right)/\hbar} \,. \end{eqnarray} Then, we find that the time evolution of $\hat\psi' :=\hat c_0 \psi_0 + \hat\vartheta$ is governed by the Heisenberg equation with respect to the Hamiltonian \begin{equation} \hat H_\text{total}':= \hat H_\text{total} - (\mu+\delta\mu_\mathrm{osc}) \hat N \end{equation} (similar to the grand canonical Hamiltonian) where $\hat N = \int dz\, \hat\psi^{\prime\dagger}\hat\psi^\prime$ is the full atom number operator. $\mu+\delta\mu_\mathrm{osc}$ can be interpreted as a time-dependent chemical potential. 
We continue with the Hamiltonian $\hat{H}'_\mathrm{total}$ and use the Bogoliubov approximation $\hat{c}_0\rightarrow \sqrt{N_0}\mathbb{I}$ to find the expansion up to second order in $\hat\vartheta$ \begin{eqnarray}\label{eq:expansionfull} \nonumber \hat H'_\mathrm{total} &=& -\hbar\Delta_c \delta\hat{a}^\dagger\delta\hat{a} + N_0\int dz\, \psi_0^{*}\left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} + \frac{\tilde{g}N_0}{2}|\psi_0|^2\right)\psi_0 \\ \nonumber && + \sqrt{N_0}\int dz\, \left(\hat\vartheta^{\dagger}\left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} + \tilde{g}N_0|\psi_0|^2 \right)\psi_0 + h.c.\right) \\ \nonumber && + \int dz\, \left(\hat\vartheta^{\dagger}\left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} + 2\tilde{g}N_0|\psi_0|^2 \right)\hat\vartheta + \frac{\tilde{g}N_0}{2}\left(\hat\vartheta^{\dagger 2}\psi_0^{ 2} + \psi_0^{*2}\hat\vartheta^{2}\right)\right) \\ && + \sqrt{N_0} \int dz\, \tilde{g}\left(\hat\vartheta^{\dagger 2}\hat\vartheta \psi_0 + \psi_0^{*}\hat\vartheta^{\dagger}\hat\vartheta^{2}\right) + \int dz\, \frac{\tilde{g}}{2}\hat\vartheta^{\dagger 2}\hat\vartheta^{2}\\ \nonumber && + \hbar\frac{g_0^2\sqrt{N_{ph}} }{\Delta_A} \left(\delta\hat{a}^\dagger + \delta\hat{a} \right) \int dz \, f_\mathrm{cav}^2\left( N_0 \psi_0^{*} \psi_0 + \sqrt{N_0}\left(\psi_0^{*}\hat\vartheta + \psi_0\hat\vartheta^\dagger \right) + \hat\vartheta^\dagger\hat\vartheta \right)\\ \nonumber && + N_0 \int dz\, \psi_0^{*} V_\mathrm{osc}\psi_0 + \sqrt{N_0}\int dz\, \left(\psi_0^{*} V_\mathrm{osc}\hat\vartheta + \hat\vartheta^{\dagger} V_\mathrm{osc}\psi_0\right) + \int dz\, \hat\vartheta^{\dagger} V_\mathrm{osc}\hat\vartheta \\ \nonumber && - N_0 \mu \int dz\, \psi_0^{*} \psi_0 - N_0 \delta\mu_\mathrm{osc} \int dz\, \psi_0^{*} \psi_0 - \sqrt{N_0} \mu \int dz\, \left(\psi_0^{*} \hat\vartheta + \hat\vartheta^{\dagger}\psi_0\right) \\ \nonumber && - \sqrt{N_0}\delta\mu_\mathrm{osc} \int dz\, \left(\psi_0^{*} \hat\vartheta + \hat\vartheta^{\dagger}\psi_0\right) - \mu\int dz\, \hat\vartheta^{\dagger} \hat\vartheta- \delta\mu_\mathrm{osc}\int dz\, \hat\vartheta^{\dagger} \hat\vartheta \end{eqnarray} In the last three lines, we see the contribution of the time dependent potential perturbation and $ - (\mu+\delta\mu_\mathrm{osc}) \hat N$. The second term in the second to last line and the first term in the third to last line cancel. With Eq. (\ref{eq:statgp}), the second term in the first line in Eq. (\ref{eq:expansionfull}) gives the classical energy of the condensate, and with the first term in the second last line of Eq. (\ref{eq:expansionfull}) \begin{equation} E^{(0)} = -\frac{\tilde{g}N_0^2}{2}\int dz\, |\psi_0|^4\,. \end{equation} Again with the stationary GP equation (\ref{eq:statgp}), the second line of Eq. (\ref{eq:expansionfull}) becomes \begin{equation} \hat H^{(1)} = \sqrt{N_0}\mu\int dz\, \left(\hat\vartheta^{\dagger}\psi_0 + h.c.\right)\,, \end{equation} which cancels with the last term in the second last line of Eq. (\ref{eq:expansionfull}). The third line of Eq. (\ref{eq:expansionfull}) gives rise to the Bogoliubov Hamiltonian. We combine the third line and the second term in the last line of Eq. (\ref{eq:expansionfull}) as \begin{eqnarray} \hat H^{(2)} &:=& \quad \int dz\, \left(\hat\vartheta^{\dagger}\left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} - \mu + 2\tilde{g}N_0|\psi_0|^2 \right)\hat\vartheta \right. \\ \nonumber && \left. 
+ \frac{\tilde{g}N_0}{2}\left(\hat\vartheta^{\dagger 2}\psi_0^{2} + \psi_0^{*2}\hat\vartheta^{2}\right)\right) \,, \end{eqnarray} As explained in the main text, we expand the field operator in terms of Bogoliubov modes describing the quasiparticle excitations as \begin{equation} \hat\vartheta = \sum_n \left(u_n\hat b_{n} + v_n^*\hat b_{n}^\dagger \right)\,, \end{equation} where $[b_{n},b_{m}^\dagger]=\delta_{nm}$, the mode functions $u_n$ and $v_n$ fulfill the stationary Bogoliubov-de~Gennes (BDG) equations \begin{eqnarray}\label{eq:BDG} \hbar \omega_n u_n(z) &=& \left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} - \mu + 2\tilde{g}N_0|\psi_0|^2\right)u_n(z) + \tilde{g}N_0\psi_0^2 v_n(z)\\ -\hbar \omega_n v_n(z) &=& \left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + \overline{\delta V} - \mu + 2\tilde{g}N_0|\psi_0|^2\right)v_n(z) + \tilde{g}N_0\psi_0^{*2} u_n(z)\, \end{eqnarray} and are normalized with respect to the inner product \begin{equation}\label{eq:normauv} \int dz \, \left(u_n^* u_m - v_n^* v_m\right) = \delta_{nm}\,. \end{equation} The expansion of $\hat\vartheta$ in the Bogoliubov basis diagonalizes the Bogoliubov Hamiltonian as $\hat H_\mathrm{BdG} = \, :H^{(2)}:\, = \sum_n \hbar \omega_n \hat b_{n}^\dagger \hat b_{n}$, where $:\quad:$ denotes the normal ordering with respect to the Bogoliubov mode operators $b_{n}^\dagger$ and $\hat b_{n}$, which leads to the omission of the constant vacuum energy. The fifth line in equation (\ref{eq:expansionfull}) governs the back action on the light field. The first term in brackets corresponds to a time-independent shift of the cavity field frequency. This is discussed in more details in the main text and in Appendix \ref{sec:freqshift} and we omit this term. The third term in brackets is of higher order and will be omitted as well. The second term gives rise to the back action Hamiltonian in equation (\ref{eq:backaction_hamiltonian_unexp}). Furthermore, we combine the remaining terms of Eq. (\ref{eq:expansionfull}) to the driving Hamiltonian in equation (\ref{eq:Hprimeint}) and neglect terms of third and fourth order in $\hat\vartheta$ that give rise to quasiparticle-quasiparticle interactions that are discussed in Section \ref{sec:damping}. \section{Derivation of the driving Hamiltonian due to an external force gradient} \label{sec:derivationintHamPi} Starting from Eqs. (\ref{eq:interaction_hamiltonian}) and (\ref{eq:coefficients}), to first order in $1/\sqrt{N_0}$ and with $V_\mathrm{osc}=-z^2 G_0 \sin\left(\Omega t\right)$, we obtain the driving Hamiltonian due to the external force gradient as \begin{align} \nonumber \hat{H}_\mathrm{int} &:= \hat{H}_\mathrm{dr} =\sum_{n} \left(P_{n} \hat{b}_{n} e^{-i\omega_{n}t} + P_{n}^* \hat{b}_{n}^{\dagger}e^{i\omega_{n}t}\right) \end{align} where \begin{align} P_{n} = \sqrt{N_0} \int dz \, (V_\mathrm{osc} - \delta\mu_\mathrm{osc}) \,\left( \psi_0^* u_n + \psi_0 v_n \right) \,. \end{align} Making the assumptions of Sec. 
\ref{sec:applications} (homogeneous condensate in a box trap etc.), assuming the resonance condition $\Omega=\omega_{n_\mathrm{low}}$ and applying the RWA, we find the interaction Hamiltonian \begin{align} \nonumber \hat{H}_{int} &= \Pi_{n_\mathrm{low}} \left(\hat{b}_{n_\mathrm{low}} - \hat{b}_{n_\mathrm{low}}^{\dagger}\right)\,, \end{align} where \begin{align} \Pi_{n_\mathrm{low}} = i\frac{G_0 \sqrt{N_0}}{2\sqrt{L}} \int dz \, \left(z^2 - \frac{L^2}{12}\right) \,\left( u_{n_\mathrm{low}} + v_{n_\mathrm{low}} \right) = i\frac{G_0 \sqrt{N_0}}{2L \sigma_{n_\mathrm{low}}} \int dz \, \left(z^2 - \frac{L^2}{12}\right) \,\varphi_{n_\mathrm{low}}^c\,. \end{align} \section{Ground state and mode function perturbations} \label{sec:gstate_pert} In this appendix, we discuss the dependence of the ground state on the time-averaged perturbation of the external potential $\delta V$ that is discussed in Sec. \ref{sec:hamiltonian}. We have that $\delta \psi_0$ is a solution of the linearized GP equation \begin{equation}\label{eq:statgp_lin} \overline{\delta V}\,\bar\psi_0 + \left(-\frac{\hbar^2}{2m} \partial_z^2 + 2\tilde{g}N_0|\bar{\psi}_0|^2 \right)\delta\psi_0 + \tilde{g}N_0\bar{\psi}_0^2 \delta\psi_0^* = \mu \delta\psi_0\,. \end{equation} We apply the Thomas-Fermi approximation and neglect the kinetic term. Then, the linearized GP equation can be solved as \begin{equation} \mathrm{Re}\left(\delta\psi_0\right) = - \overline{\delta V}/(2\mu \mathcal{V}^{1/2})\,, \end{equation} and $\mathrm{Im}(\delta\psi_0) = 0$. In a similar fashion, we can find the modifications of the quasiparticle mode function $u_n$ and $v_n$. As all of these modifications are of first order in $\overline{\delta V}$ and $\hat{H}_\mathrm{dr}$ as well as $\hat{H}_\mathrm{ba}$ are both of first order in the external potential and light field interaction, respectively, their modifications due to $\overline{\delta V}$ will be of higher order and can be neglected for the purposes of this article. \section{Fundamental sensitivity of the pulsed readout} \label{app:fund-readout-limit} When considering only the resonant targeted quasiparticle mode for the pulsed measurement scheme described in section \ref{sec:pulsed}, the time-evolution operator has the form \begin{equation} \hat{U}_{pulse}=\exp\left[-\frac{i}{\hbar} \kappa_{2n_\mathrm{cav}} \left(\sqrt{N_{ph}} + \left(\delta\hat{a}+\delta\hat{a}^{\dagger}\right)\right)\left(\hat{b}_{2n_\mathrm{cav}}+\hat{b}_{2n_\mathrm{cav}}^{\dagger}\right)\Delta t \right]. \end{equation} The measurement sequence comprising two mirror operations and the gravitationally induced displacement can be described with the time-evolution operator \begin{equation}\hat{U}_{D}=\left(\hat{U}_{n,c}^{M}\right)^{\dagger}\hat{D}_{n}\hat{U}_{n,c}^{M} \end{equation} where $\hat{D}_{n}=\exp\left[-i\mathcal{P}_{n}\left(\hat{b}_{n}-\hat{b}_{n}^{\dagger}\right)\right]$, $\mathcal{P}_{n}=\Pi_{n}t/\hbar$, and $\hat{U}_{n,c}^{M}$ is given by (\ref{eq:mirror}) for the specific modes labelled $n$ and $c$ with the interaction time appropriately tuned. For some initial density matrix $\rho_{0}$, the final state of the condensate-cavity system is then described as \begin{equation} \rho_{F}=\hat{U}_{pulse}\hat{U}_{D}\rho_{0}\hat{U}_{D}^{\dagger}\hat{U}_{pulse}^{\dagger}. \end{equation} For notational convenience, we also define $\chi=\kappa_{2n_\mathrm{cav}}\Delta t/\hbar$. 
We define the operator basis $\hat{X}_{cav}=\delta\hat{a}+\delta\hat{a}^{\dagger}$ and $\hat{P}_{cav}=i\left(\delta\hat{a}^{\dagger}-\delta\hat{a}\right)$ and the vector $\hat{x}=\left(\hat{X}_{cav},\hat{P}_{cav}\right)$. For an initially thermal quasiparticle state (negligible initial occupancy in the high order mode c) and a coherent cavity state, the displacement vector in this basis for the reduced cavity state is given by \begin{equation} d=\left\langle \hat{x}\right\rangle =\left(0,4\chi\mathcal{P}_{n}\right) \end{equation} where the expectation values are taken with respect to $\rho_F$. The covariance matrix, with elements defined by $\Sigma_{i,j}=\left\langle \hat{x}_{i}\hat{x}_{j}+\hat{x}_{j}\hat{x}_{i}\right\rangle -\left\langle \hat{x}_{i}\right\rangle \left\langle \hat{x}_{j}\right\rangle$, is given by \begin{equation} \Sigma=\begin{pmatrix}1 & 0\\ 0 & 1+4\chi^{2} \end{pmatrix}. \end{equation} The quantum Fisher information (QFI) $H_{\rho}\left(\lambda\right)$ gives a measure of the amount of information about a parameter $\lambda$ which can be extracted from a state $\rho$ optimised over all possible measurements. The QFI can be calculated for Gaussian states using only the displacement vector and covariance matrix \cite{safranek2015}. We find that the QFI for estimating the displacement $\mathcal{P}_{n}$ of the quasiparticle state is given by \begin{equation} H_{\rho_{F}}\left(\mathcal{P}_{n}\right)=\frac{16\chi^{2}}{1+4\chi^{2}}. \end{equation} The quantum Cramer-Rao bound \cite{koklovett_qprocessing} links the possible measurement sensitivity with the QFI, and in our case it has the form \begin{equation} \left(\Delta\mathcal{P}_{n}\right)^{2}=\frac{1}{N_{meas}}\left(\frac{1}{4}+\frac{1}{16\chi^{2}}\right) \end{equation} where $N_{meas}$ is the number of repetitions of the measurement scheme performed. \section{A toy model for high-energy modes - trapezoid potential} \label{app:toymodel} To show how the coefficients $A^c_n$ and $A^s_n$ of modes in the high-energy regime are constructed and vary with the mode number, we consider a toy model. We start by modelling the trap potential as a well with finitely steep walls, i.e. with a certain inclination, such that \begin{equation} V_0(z) = a( -(z+L/2) \Theta(-(z+L/2)) + (z-L/2) \Theta(z-L/2))\,. \end{equation} We want the potential to rise to the kinetic energy of the high energy move in a fraction of $L$, i.e. $a=b\hbar^2(2k_\mathrm{cav})^2/(2mL)$, where $b\gtrsim 10$. For high-energy modes, we can set $v_n=0$, and for $\delta V=0$, the BDG equations in (\ref{eq:BDG}) become the stationary Schrödinger equation for $u_n$ \begin{eqnarray}\label{eq:stSchröd} 0 &=& \left(-\frac{\hbar^2}{2m} \partial_z^2 + (\tilde{V}_0(z)- E_n) \right)u_n(z) \end{eqnarray} with $E_n=\hbar\omega_n$ and the potential $\tilde{V}_0(z)= V_0 - \mu_0 + 2\tilde{g}N_0|\psi_0|^2$. We note that for a small healing length in comparison to the trap length, we can approximate $2\tilde{g}N_0|\psi_0|^2\approx 2\tilde{g}\rho_\mathrm{1d} = 2\mu_0$ and the interaction energy $\mu_0 = \tilde{g}\rho_\mathrm{1d}$ can be regarded as a small offset of the potential such that \begin{eqnarray} \nonumber \tilde{V}_0(z) &:=& V_0(z) - \tilde{g}\rho_\mathrm{1d} \\ \nonumber &\approx& a( -(z+L/2-\mu_0/a) \Theta(-(z+L/2)) + (z-L/2+\mu_0/a) \Theta(z-L/2)) \\ && + (\Theta(z+L/2) - \Theta(z-L/2)) \mu_0\,. \end{eqnarray} and $L+2\mu_0/a \approx L$. 
Then, we define three regions I, II and III, where I and III are left of $z=-L/2$ and right of $z=L/2$, respectively, and II lies between I and III. \begin{figure} \caption{\label{fig:trapezoid-potential} Trapezoid-potential toy model: squared magnitudes of the cavity mode function and of the quasiparticle mode function with $k_{n_\mathrm{high}}\sim 2k_\mathrm{cav}$, the coefficients $A_n^\mathrm{c}$ and $A_n^\mathrm{s}$, and the coupling coefficient $\kappa_n$ for $n$ close to $n_\mathrm{high}$.} \end{figure} In region III, we find the stationary Schrödinger equation \begin{eqnarray} 0 &=& \left(- \partial_z^2 + a\frac{2m}{\hbar^2}(z - L/2 + \mu_0/a - E_n/a) \right)u_n(z)\,. \end{eqnarray} We redefine the spatial coordinate as $\bar{z}=(2m a/\hbar^2)^{1/3}(z - L/2 + \mu_0/a - E_n/a)$ such that \begin{eqnarray}\label{eq:stSchrödI} 0 &=& \left(-\partial_{\bar{z}}^2 + \bar{z} \right)\bar{u}_n(\bar{z})\,. \end{eqnarray} This differential equation is solved by the Airy function $\mathrm{Ai}(\bar{z})$ and we obtain the solution \begin{equation} \tilde{u}_{n,III}(\tilde{z}) = C_{R,n} \mathrm{Ai}(\tilde{z} - \tilde{L}/2 - \tilde{E}_n)\,, \end{equation} where we defined $\tilde{z}=\tilde{a}^{1/3} z$, $\tilde{L}=\tilde{a}^{1/3} L$, $\tilde{E}_n=\tilde{a}^{-2/3}\, 2m(E_n - \mu_0)/\hbar^2$ and $\tilde{a}=2m a/\hbar^2=b(2k_\mathrm{cav})^2/L$. We find the corresponding solution in region I by reflection at the origin and for region II, we find the differential equation \begin{eqnarray}\label{eq:stSchrödII} 0 &=& \left(-\partial_{\tilde{z}}^2 - \tilde{E}_n \right)\tilde{u}_n(\tilde{z})\,, \end{eqnarray} which is simply solved by \begin{equation} \tilde{u}_{n,II}(\tilde{z}) = \sqrt{\frac{2}{L}} \left(A^\mathrm{c}_n \cos\left(\tilde{k}_{n}\left(\tilde{z}+\frac{\tilde{L}}{2}\right)\right) + A^\mathrm{s}_n \sin\left(\tilde{k}_{n}\left(\tilde{z}+\frac{\tilde{L}}{2}\right)\right)\right)\,, \end{equation} where $\tilde{k}_{n}=\sqrt{\tilde{E}_n}=\tilde{a}^{-1/3} k_n$. From imposing the continuity of the wave function and its first derivative at $z=-L/2$ and $z=L/2$, we obtain the following expressions for the coefficients \begin{eqnarray} \nonumber C_{R,n} &=& \mathcal{N} \left( \cos(\tilde{k}_{n} \tilde{L}) - \frac{\mathrm{Ai}'(-\tilde{E}_n)}{\tilde{k}_{n} \mathrm{Ai}(-\tilde{E}_n)}\sin(\tilde{k}_{n} \tilde{L}) \right)\\ A^\mathrm{c}_n &=& \sqrt{\frac{L}{2}} \mathcal{N} \mathrm{Ai}(-\tilde{E}_n) \\ \nonumber A^\mathrm{s}_n &=& -\sqrt{\frac{L}{2}} \mathcal{N} \frac{\mathrm{Ai}'(-\tilde{E}_n)}{\tilde{k}_n}\,, \end{eqnarray} and the consistency condition \begin{equation}\label{eq:spectrumcond} \mathrm{Ai}'(-\tilde{E}_n)\cos(\tilde{k}_n\tilde{L}) + \frac{\tilde{E}_n \mathrm{Ai}(-\tilde{E}_n)^2 -\mathrm{Ai}'(-\tilde{E}_n)^2}{\tilde{k}_n \mathrm{Ai}(-\tilde{E}_n)} \sin(\tilde{k}_n\tilde{L}) = 0\,, \end{equation} which defines the spectrum of possible $\tilde{E}_n$. The factor $\mathcal{N}$ is a normalization constant. Solutions to equation (\ref{eq:spectrumcond}) as well as $\mathcal{N}$ can be found numerically. In figure \ref{fig:trapezoid-potential}, we show the squared magnitude of the cavity mode function and the quasiparticle mode function with $k_{n_\mathrm{high}}\sim 2k_\mathrm{cav}$, a plot of the coefficients $A_n^c$ and $A_n^s$ as well as the coupling coefficient $\kappa_n$ for $n$ close to $n_\mathrm{high}$. We plot these for parameters equivalent to those used in Section \ref{sec:applications}, apart from the trap length, which is chosen to be 198 microns, and $b=108.5$, which does not appear in the main text. We find that the length scale $\tilde{a}^{-1/3}\sim 190\,$nm. We have chosen the values for $L$ and $b$ such that $A_n^c/A_n^s \gg 1$ and $k_{n_\mathrm{high}}$ is very close to $2k_\mathrm{cav}$ to optimize the overlap.
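The roots of the consistency condition (\ref{eq:spectrumcond}) can be located numerically with a simple sign-change scan. The following minimal sketch (the value of $\tilde{L}$ is illustrative only, and this is not the code used for figure \ref{fig:trapezoid-potential}) multiplies the condition by $\tilde{k}_n\mathrm{Ai}(-\tilde{E}_n)$ to avoid the poles at the zeros of $\mathrm{Ai}$, and refines each bracketed root with a standard bracketing solver.
\begin{verbatim}
import numpy as np
from scipy.special import airy
from scipy.optimize import brentq

L_tilde = 60.0  # dimensionless box length (illustrative value only)

def g(E):
    # consistency condition multiplied by k*Ai(-E), so that zeros of Ai(-E)
    # do not produce poles; roots of g with Ai(-E) != 0 are the allowed E_n
    ai, aip, _, _ = airy(-E)              # Ai(-E), Ai'(-E)
    k = np.sqrt(E)                        # dimensionless k_n = sqrt(E_n)
    return k * aip * ai * np.cos(k * L_tilde) \
        + (E * ai**2 - aip**2) * np.sin(k * L_tilde)

grid = np.linspace(0.05, 5.0, 20000)      # candidate dimensionless energies
vals = np.array([g(E) for E in grid])
roots = [brentq(g, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
print(roots[:5])                          # lowest few allowed values of E_n
\end{verbatim}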
In particular, we see that the coupling to modes with $n\neq n_\mathrm{high}$ is very small and can be neglected which was assumed in the main text. \section{Numerical treatment of the GP equation} \label{app:numericalGP} In this appendix, we introduce the numerical method used for the calculation of the ground state presented in Fig. \ref{fig:ground-state}. We restrict our considerations to one-dimensional quasi-condensates and start with the time dependent GP equation \begin{equation}\label{eq:time_dep_gp} \left(-\frac{\hbar^2}{2m} \partial_z^2 + V_0 + g_\mathrm{1d} N_0 |\bar\psi|^2 \right)\bar\psi = i\hbar \partial_t\bar\psi\, \end{equation} for the ground state wave function normalized as $\int dz\,|\bar\psi|^2 = 1$ \footnote{Note that, for $\bar\psi_0=\psi_0e^{-i\mu t}$, we would have $i\hbar\partial_t\bar\psi_0=\mu$ and recover the stationary GP equation (\ref{eq:statgp}).}. We define $\tilde{z}=z/\xi$, $\tau = \mu_0 t/\hbar$, $\tilde{V}=V/\mu_0$, $\tilde{\psi}=\sqrt{\xi}\bar\psi$ and $\mu_0 =g_\mathrm{1d}\rho_\mathrm{0,1d}$, where the one-dimensional density $\rho_\mathrm{0,1d}=N_0/L$ and the healing length $\xi=\hbar/\sqrt{4m_a \mu_0}$ have been already defined in the main text. Then, equation (\ref{eq:time_dep_gp}) can be rewritten in dimensionless form as \begin{equation}\label{eq:time_dep_gp_dimless} \left(-2\partial_{\tilde{z}}^2 + \tilde{V} + \tilde{L}|\tilde\psi|^2 \right)\tilde\psi = i \partial_\tau \tilde\psi\,, \end{equation} where $\tilde{L}=L/\xi$ and $\int d\tilde{z}\,|\tilde\psi|^2 = 1$. To obtain the ground state, we perform an imaginary time propagation \cite{minguzzi2004numerical}, that is, we use imaginary time steps $-i\,d\tau$. Furthermore, we solve the differential equation numerically on a spatial grid with a discrete time split step method (see \cite{minguzzi2004numerical} and Appendix A of \cite{barenghi2016primer}). More concretely, the wave function is represented as a one-dimensional array $\tilde\psi_\mathrm{ar}=\{\tilde\psi_j\}$ of complex values of $\tilde\psi$ at the points of a one-dimensional array $\{\tilde{z}_j\}$ with $J$ entries of equidistant points from an interval of length $L_g$ of the $\tilde{z}$-axis. In each time step from $\tau_s$ to $\tau_{s+1}=\tau_s+d\tau$, first, we perform the operation $U^{1/2}_{\tilde{z}}(\tau_s,d\tau)=\{\exp(-i(\tilde{V}(\tilde{z}_j,\tau_s) + \tilde{L}|\tilde\psi_j(\tau_s)|^2)d\tau/2)\}$, on the wave function $\tilde\psi_\mathrm{ar}(\tau_s)$, perform a discrete Fourier transform $\mathcal{F}$ of $U^{1/2}_{\tilde{z}}(\tau_s,d\tau)\tilde\psi_\mathrm{ar}(\tau_s)$ where Fourier space is represented by another array $\{\tilde{k}_j\}$ of wave numbers $k_j=2\pi (j-1-J/2)/L_g$. Then, we apply the operation $U_{\tilde{k}}(d\tau)=\exp(-2i \tilde{k}^2 d\tau)$ on $\mathcal{F}[U^{1/2}_{\tilde{z}}(\tau_s,d\tau)\tilde\psi_\mathrm{ar}(\tau_s)]$, apply the Fourier back-transform $\mathcal{F}^{-1}$ and another time $U^{1/2}_{\tilde{z}}(\tau_s,d\tau)$. That is, the full operation can be written as \begin{equation} \tilde{\psi}_\mathrm{ar}(\tau_{s+1}) = U^{1/2}_{\tilde{z},0}(\tau_s,-i\,d\tau) \mathcal{F}^{-1}[U_{\tilde{k}}(-i\,d\tau)\mathcal{F}[U^{1/2}_{\tilde{z},0}(\tau_s,-i\,d\tau)\tilde\psi_\mathrm{ar}(\tau_s)]]\,, \end{equation} where $U^{1/2}_{\tilde{z},0}$ is equivalent to $U^{1/2}_{\tilde{z}}$ with $\tilde{V}$ replaced by the time-independent trap potential $\tilde{V}_0=V_0/\mu_0$ and $U_{\tilde{k}}=U^{1/2}_{\tilde{k}}U^{1/2}_{\tilde{k}}$. After each time step, $\tilde{\psi}_\mathrm{ar}$ is normalized to one. \end{document}
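As a minimal illustration of the imaginary-time split-step scheme described in the preceding appendix, the following sketch finds an approximate ground state of the dimensionless GP equation in a box-like trap. Grid size, box length, wall height, time step and iteration count are illustrative assumptions, not the parameters used for the figures of the article.
\begin{verbatim}
import numpy as np

J, L_grid = 2048, 60.0                 # grid points, grid length (units of xi)
L_tilde = 50.0                         # box length L/xi (illustrative)
dz = L_grid / J
z = (np.arange(J) - J / 2) * dz
k = 2.0 * np.pi * np.fft.fftfreq(J, d=dz)          # wave numbers

V0 = np.where(np.abs(z) < L_tilde / 2, 0.0, 50.0)  # steep-walled box (illustrative)
psi = np.exp(-(z / (L_tilde / 3)) ** 2)            # initial guess
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dz)      # normalize to one

dtau = 1e-3
for _ in range(20000):
    # half step with the potential and interaction term (imaginary time)
    U_half = np.exp(-0.5 * dtau * (V0 + L_tilde * np.abs(psi) ** 2))
    psi = U_half * psi
    # full kinetic step in Fourier space: exp(-2 k^2 dtau)
    psi = np.fft.ifft(np.exp(-2.0 * dtau * k ** 2) * np.fft.fft(psi))
    psi = U_half * psi
    # renormalize after each imaginary-time step, as in the appendix
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dz)

density = np.abs(psi) ** 2             # approximate ground-state density
\end{verbatim}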
\begin{document} \title[Leavitt path algebras with bounded index of nilpotence]{Leavitt path algebras with bounded index of nilpotence} \subjclass[2010]{16D50, 16D60.} \keywords{Leavitt path algebras, bounded index of nilpotence, direct-finiteness, simple modules, injective modules} \author{Kulumani M. Rangaswamy} \address{Departament of Mathematics, University of Colorado at Colorado Springs, Colorado-80918, USA} \email{[email protected]} \author{Ashish K. Srivastava} \address{Department of Mathematics and Statistics, St. Louis University, St. Louis, MO-63103, USA} \email{[email protected]} \thanks{The work of the second author is partially supported by a grant from Simons Foundation (grant number 426367).} \maketitle \begin{abstract} In this paper we completely describe graphically Leavitt path algebras with bounded index of nilpotence. We show that the Leavitt path algebra $L_{K}(E)$ has index of nilpotence at most $n$ if and only if no cycle in the graph $E$ has an exit and there is a fixed positive integer $n$ such that the number of distinct paths that end at any given vertex $v$ (including $v$, but not including the entire cycle $c$ in case $v$ lies on $c$) is less than or equal to $n$. Interestingly, the Leavitt path algebras having bounded index of nilpotence turn out to be precisely those that satisfy a polynomial identity. Furthermore, Leavitt path algebras with bounded index of nilpotence are shown to be directly-finite and to be $\mathbb{Z}$-graded $\Sigma$-$V$ rings. As an application of our results, we answer an open question raised in \cite{JST} whether an exchange $\Sigma$-$V$ ring has bounded index of nilpotence. \end{abstract} \section{Introduction} \noindent The objective of this paper is to characterize Leavitt path algebras with bounded index of nilpotence. A ring $R$ is said to have \textit{bounded index of} \textit{nilpotence} if there is a positive integer $n$ such that $x^{n}=0$ for all nilpotent elements $x$ in $R$. If $n$ is the least such integer then $R$ is said to have \textit{index of nilpotence $n$}. We show that the Leavitt path algebra $L:=L_{K}(E)$ of a directed graph $E$ over a field $K$ has index of nilpotence at most $n$ if and only if no cycle in the graph $E$ has an exit and there is a fixed positive integer $n$ such that the number of distinct paths that end at any given vertex $v$ (including $v$, but not including the cycle $c$ in case $v$ lies on $c$) is less than or equal to $n$. In this case, $L$ becomes a subdirect product of matrix rings $M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ of finite order $t\leq n$. Examples are constructed showing that $L$ need not decompose as a direct sum of these matrix rings $M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$, though the decomposition is possible when $E$ is row-finite. We show that a Leavitt path algebra $L$ with bounded index of nilpotence is always directly-finite and that $L$ is a $ \mathbb{Z} $-graded $\Sigma$-$V$ ring, that is, each graded simple left/right $L$-module is graded $\Sigma$-injective. Examples show that the converse of these statements do not hold. Interestingly, it turns out that the graphical conditions on $E$ that ensure $L$ has a bounded index of nilpotence are exactly the same graphical conditions on $E$ that were shown in \cite{BLR} to imply that $L$ satisfies a polynomial identity. When $E$ is a finite graph, these graphical conditions also imply that $L$ has GK-dimension $\leq1$. 
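To fix ideas, here is a small illustration of this graphical condition (our illustration, not one of the examples constructed later in the paper): let $E$ be the graph with two vertices $u,v$, a single edge $e$ from $u$ to $v$, and a loop $c$ based at $v$. The loop has no exit, the only path ending at $u$ is $u$ itself, and the paths ending at $v$ that do not traverse the entire loop are $v$ and $e$; the bound is therefore $n=2$. Accordingly, $L_{K}(E)\cong M_{2}(K[x,x^{-1}])$, and every nilpotent $2\times2$ matrix over the commutative domain $K[x,x^{-1}]$ squares to zero, so the index of nilpotence is exactly $2$.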
Such statements illustrate a unique phenomenon in the study of Leavitt path algebras where a single graph property of $E$ often implies different ring-theoretic properties for $L$ and these ring-theoretic properties are usually independent of each other for general rings (see \cite{R1} for several illustrations of this phenomenon of Leavitt path algebras). This feature of Leavitt path algebras makes them really useful tools in constructing examples of rings of various desired ring-theoretic properties. Finally, as an application of our results, we answer a question raised in \cite{JST}, whether an exchange $\Sigma$-$V$ ring has bounded index of nilpotence. For the general notation, terminology and results in Leavitt path algebras, we refer the reader to \cite{AArS}.\ We give below an outline of some of the needed basic concepts and results. A (directed) graph $E=(E^{0},E^{1},r,s)$ consists of two sets $E^{0}$ and $E^{1}$ together with maps $r,s:E^{1}\rightarrow E^{0}$. The elements of $E^{0}$ are called \textit{vertices} and the elements of $E^{1}$ \textit{edges}. A vertex $v$ is called a \textit{sink} if it emits no edges and a vertex $v$ is called a \textit{regular} \textit{vertex} if it emits a non-empty finite set of edges. An \textit{infinite emitter} is a vertex which emits infinitely many edges. For each $e\in E^{1}$, we call $e^{\ast}$ a ghost edge. We let $r(e^{\ast})$ denote $s(e)$, and we let $s(e^{\ast})$ denote $r(e)$. A\textit{\ path} $\mu$ of length $n>0$ is a finite sequence of edges $\mu=e_{1}e_{2}\cdot\cdot\cdot e_{n}$ with $r(e_{i})=s(e_{i+1})$ for all $i=1,\cdot\cdot\cdot,n-1$. In this case $\mu^{\ast}=e_{n}^{\ast}\cdot \cdot\cdot e_{2}^{\ast}e_{1}^{\ast}$ is the corresponding ghost path. A vertex is considered a path of length $0$. A path $\mu$ $=e_{1}\dots e_{n}$ in $E$ is \textit{closed} if $r(e_{n} )=s(e_{1})$, in which case $\mu$ is said to be \textit{based at the vertex }$s(e_{1})$. A closed path $\mu$ as above is called \textit{simple} provided it does not pass through its base more than once, i.e., $s(e_{i})\neq s(e_{1})$ for all $i=2,...,n$. The closed path $\mu$ is called a \textit{cycle} if it does not pass through any of its vertices twice, that is, if $s(e_{i})\neq s(e_{j})$ for every $i\neq j$. An \textit{exit }for a path $\mu=e_{1}\dots e_{n}$ is an edge $e$ such that $s(e)=s(e_{i})$ for some $i$ and $e\neq e_{i}$. An \textit{infinite rational path} $p$ is an infinite path of the form $p=x_{1}x_{2} \ldots x_{n} \ldots$ where there is an $m \geq1$ such that $x_{k} = g$, a fixed closed path for all $k \geq m$ and that $x_{k}$ is an edge $e_{k}$ if $k <m$. Thus $p$ will be of the form $p=e_{1}e_{2}\cdot \cdot\cdot e_{m-1}gggg\cdot\cdot\cdot$ where $g$ is a closed path and the $e_{i}$ are edges. An infinite path which is not rational is called an \textit{irrational path}. A graph $E$ is said to satisfy \textit{Condition (K)}, if every vertex $v$ on a closed path $c$ is also the base of a another closed path $c^{\prime} $different from $c$. A graph $E$ is said to satisfy \textit{Condition (L)} if every cycle has an exit. If there is a path from vertex $u$ to a vertex $v$, we write $u\geq v$. A subset $D$ of vertices is said to be \textit{downward directed }\ if for any $u,v\in D$, there exists a $w\in D$ such that $u\geq w$ and $v\geq w$. A subset $H$ of $E^{0}$ is called \textit{hereditary} if, whenever $v\in H$ and $w\in E^{0}$ satisfy $v\geq w$, then $w\in H$. 
A hereditary set $H$ is \textit{saturated} if, for any regular vertex $v$, $r(s^{-1}(v))\subseteq H$ implies $v\in H$. Given an arbitrary graph $E$ and a field $K$, the \textit{Leavitt path algebra }$L_{K}(E)$ is defined to be the $K$-algebra generated by a set $\{v:v\in E^{0}\}$ of pairwise orthogonal idempotents together with a set of variables $\{e,e^{\ast}:e\in E^{1}\}$ which satisfy the following conditions: (1) \ $s(e)e=e=er(e)$ for all $e\in E^{1}$. (2) $r(e)e^{\ast}=e^{\ast}=e^{\ast}s(e)$\ for all $e\in E^{1}$. (3) (The ``CK-1 relations") For all $e,f\in E^{1}$, $e^{\ast}e=r(e)$ and $e^{\ast}f=0$ if $e\neq f$. (4) (The ``CK-2 relations") For every regular vertex $v\in E^{0}$, \[ v=\sum_{e\in E^{1},s(e)=v}ee^{\ast}. \] Every Leavitt path algebra $L_{K}(E)$ is a $\mathbb{Z}$\textit{-graded algebra}, namely, $L_{K}(E)={\displaystyle\bigoplus\limits_{n\in\mathbb{Z}}} L_{n}$ induced by defining, for all $v\in E^{0}$ and $e\in E^{1}$, $\deg (v)=0$, $\deg(e)=1$, $\deg(e^{\ast})=-1$. Here the $L_{n}$ are abelian subgroups satisfying $L_{m}L_{n}\subseteq L_{m+n}$ for all $m,n\in\mathbb{Z} $. Further, for each $n\in\mathbb{Z}$, the \textit{homogeneous component }$L_{n}$ is given by \[ L_{n}=\{ {\textstyle\sum} k_{i}\alpha_{i}\beta_{i}^{\ast}\in L:\text{ } |\alpha_{i}|-|\beta_{i}|=n\}. \] Elements of $L_{n}$ are called \textit{homogeneous elements}. An ideal $I$ of $L_{K}(E)$ is said to be a \textit{graded ideal} if $I={\displaystyle\bigoplus\limits_{n\in\mathbb{Z}}} (I\cap L_{n})$. If $A,B$ are graded modules over a graded ring $R$, we write $A\cong_{gr}B$ if $A$ and $B$ are graded isomorphic and we write $A\oplus_{gr}B$ to denote a graded direct sum. We will also be using the usual grading of a matrix of finite order. For this and for the various properties of graded rings and graded modules, we refer to \cite{H-1}, \cite{HR} and \cite{NV}. A \textit{breaking vertex }of a hereditary saturated subset $H$ is an infinite emitter $w\in E^{0}\backslash H$ with the property that $0<|s^{-1}(w)\cap r^{-1}(E^{0}\backslash H)|<\infty$. The set of all breaking vertices of $H$ is denoted by $B_{H}$. For any $v\in B_{H}$, $v^{H}$ denotes the element $v-\sum_{s(e)=v,r(e)\notin H}ee^{\ast}$. Given a hereditary saturated subset $H$ and a subset $S\subseteq B_{H}$, $(H,S)$ is called an \textit{admissible pair.} Given an admissible pair $(H,S)$, the ideal generated by $H\cup \{v^{H}:v\in S\}$ is denoted by $I(H,S)$. It was shown in \cite{T} that the graded ideals of $L_{K}(E)$ are precisely the ideals of the form $I(H,S)$ for some admissible pair $(H,S)$. Moreover, $L_{K}(E)/I(H,S)\cong L_{K} (E\backslash(H,S))$. Here $E\backslash(H,S)$ is a \textit{quotient graph of }$E$ where $(E\backslash(H,S))^{0}=(E^{0}\backslash H)\cup\{v^{\prime}:v\in B_{H}\backslash S\}$ and $(E\backslash(H,S))^{1}=\{e\in E^{1}:r(e)\notin H\}\cup\{e^{\prime}:e\in E^{1}\text{ with }r(e)\in B_{H}\backslash S\}$ and $r,s$ are extended to $(E\backslash(H,S))^{0}$ by setting $s(e^{\prime})=s(e)$ and $r(e^{\prime})=r(e)^{\prime}$. It is known (see \cite{R}) that if $P$ is a prime ideal of $L$ with $P\cap E^{0}=H$, then $E^{0}\backslash H$ is downward directed. Let $\Lambda$ be an infinite set and $R$ a ring. Then $M_{\Lambda}(R)$ denotes the ring of $\Lambda\times\Lambda$ matrices in which all except at most finitely many entries are zero. \section{Results} \noindent In this section, we characterize Leavitt path algebras having bounded index of nilpotence.
We begin with the following useful proposition some part of which might be implicit in earlier works on Leavitt path algebras. \begin{prop} \label{matrixCreation} Let $E$ be an arbitrary graph and let $L:=L_{K}(E)$. \begin{enumerate} [(a)] \item Let $v$ be a vertex in $E$ which does not lie on a closed path. If, for some $n\geq1$, there are $n$ distinct paths $p_{1},\cdot\cdot\cdot,p_{n}$ in $E$ that end at $v$, then the set $T_{n}=\{{\displaystyle\sum\limits_{i=1} ^{n}}{\displaystyle\sum\limits_{j=1}^{n}}k_{ij}p_{i}p_{j}^{\ast}:k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. \item Let $v$ be a vertex in $E$ lying on a cycle $c$ and let $f$ be an exit for $c$ at $v$. Then, for every integer $n\geq1$, the subset $S_{n}=\{{\displaystyle\sum\limits_{i=1}^{n}}$ ${\displaystyle\sum \limits_{j=1}^{n}}k_{ij}c^{i}ff^{\ast}(c^{\ast})^{j}:k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. \item Let $v$ be a vertex lying on a cycle $c$ without exits in $E$. If, for some $n\geq1$, there are $n$ distinct paths $p_{1},\cdot\cdot\cdot,p_{n}$ in $E$ that end at $v$ and do not go through the entire cycle $c$, then again the set $T_{n}=\{{\displaystyle\sum\limits_{i=1}^{n}}{\displaystyle\sum \limits_{j=1}^{n}}k_{ij}p_{i}p_{j}^{\ast}:k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. \end{enumerate} \end{prop} \begin{proof} (a) First observe that $p_{j}^{\ast}p_{k}\neq0$ if and only if $p_{j}=p_{k}$. Because, if $p_{j}^{\ast}p_{k}\neq0$, then either $p_{j}=p_{k}p^{\prime}$ or $p_{k}=p_{j}q^{\prime}$ for some paths $p^{\prime},q^{\prime}$. Since $r(p_{j})=r(p_{k})=v$, we get $s(p^{\prime})=v=r(p^{\prime})$ and $s(q^{\prime})=v=r(q^{\prime})$. Since $v$ does not lie on a closed path, we conclude that $p^{\prime}=v=q^{\prime}$. So $p_{j}=p_{k}$. Conversely, if $p_{j}=p_{k}$, then clearly $p_{j}^{\ast}p_{k}=$ $p_{j}^{\ast}p_{j}=v\neq0$. For all $i,j$, let $\varepsilon_{ij}=p_{i}p_{j}^{\ast}$. Clearly, $(\varepsilon_{ii})^{2}=\varepsilon_{ii}$ and $\varepsilon_{ij}\varepsilon _{kl}=\varepsilon_{il}$ or $0$ according as $j=k$ or not. Thus the $\varepsilon_{ij}$ form a set of matrix units and it is readily seen that the set $T_{n}=\{{\displaystyle\sum\limits_{i=1}^{n}}{\displaystyle\sum \limits_{j=1}^{n}}k_{ij}p_{i}p_{j}^{\ast}:k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. (b) Suppose $c$ is a cycle in $E$ with an exit $f$ at a vertex $v$. Consider the set $\{\varepsilon_{ij}=c^{i}ff^{\ast}(c^{\ast})^{j}:1\leq i,j\leq n\}$. Clearly, the $\varepsilon_{ij}$ form a set of matrix units as $(\varepsilon _{ii})^{2}=\varepsilon_{ii}$ and $\varepsilon_{ij}\varepsilon_{kl} =\varepsilon_{il}$ or $0$ according as $j=k$ or not. It is then easy to check that the set $S_{n}=\{{\displaystyle\sum\limits_{i=1}^{n}}$ ${\displaystyle\sum\limits_{j=1}^{n}}k_{ij}c^{i}ff^{\ast}(c^{\ast})^{j} :k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. (c) Let $v$ be the base of a cycle $c$ without exits and $p_{1},\cdot \cdot\cdot,p_{n}$ be $n$ distinct paths that end at $v$ and not go through the entire cycle $c$. Using the fact that $c$ is a cycle without exits and repeating the arguments as in (a), it follows that $p_{j}^{\ast}p_{k}\neq0$ if and only if $p_{j}=p_{k}$. As before let $\varepsilon_{ij}=p_{i}p_{j}^{\ast}$ with $1\leq i,j\leq n$. Clearly, $(\varepsilon_{ii})^{2}=\varepsilon_{ii}$ and $\varepsilon_{ij}\varepsilon_{kl}=\varepsilon_{il}$ or $0$ according as $j=k$ or not. 
Thus the $\varepsilon_{ij}$ form a set of matrix units and it is readily seen that $T_{n}=\{{\displaystyle\sum\limits_{i=1}^{n}} {\displaystyle\sum\limits_{j=1}^{n}}k_{ij}p_{i}p_{j}^{\ast}:k_{ij}\in K\}$ is a subring of $L$ isomorphic to the matrix ring $M_{n}(K)$. \end{proof} We are now ready to describe all the Leavitt path algebras with bounded index of nilpotence. It is interesting to note that the Leavitt path algebras having bounded index of nilpotence are precisely those that satisfy a polynomial identity. Recall that an algebra $A$ over a field $K$ is said to satisfy a polynomial identity if there exists a non-zero element $f$ in $K[x_{1},\ldots,x_{n}]$ such that $f(a_{1},\ldots,a_{n})=0$ for all $a_{i}$ in $A$. Clearly every commutative ring satisfies a polynomial identity but there are many interesting classes of noncommutative rings too that satisfy a polynomial identity.\ For instance, the Amitsur-Levitzky theorem (see \cite{P}) states that, for any $n\geq1$, the matrix ring $M_{n}(R)$ over a commutative ring $R$ satisfies a polynomial identity of degree $2n$. In \cite{BLR} it is shown that the Leavitt path algebra $L_{K}(E)$ of an arbitrary graph $E$ over a field $K$ satisfies a polynomial identity if and only if no cycle in $E$ has an exit and there is a fixed positive integer $d$ such that the number of distinct paths that end at any given vertex $v$ (including $v$, but not including the entire cycle $c$ in case $v$ lies on $c$) is less than or equal to $d$. When $E$ is a finite graph, then the Leavitt path algebra $L_{K}(E)$ satisfying a polynomial identity is known to be equivalent to the Gelfand-Kirillov dimension of $L_{K}(E)$ being at most one \cite{BLR}. \begin{theorem} \label{bdd Index} Let $E$ be an arbitrary graph. Then the following properties are equivalent for $L:=L_{K}(E)$: \begin{enumerate} \item $L$ has index of nilpotence less than or equal to $n$; \item No cycle in $E$ has an exit and there is a fixed positive integer $n$ such that the number of distinct paths that end at any given vertex $v$ (including $v$, but not including the entire cycle $c$ in case $v$ lies on $c$) is less than or equal to $n$; \item For any graded prime ideal $P$ of $L$, $L/P\cong_{gr}M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ where $t\leq n$ with appropriate matrix gradings; \item $L$ is a graded subdirect product of graded rings $\{A_{i}:i\in I\}$ where, for each $i$, $A_{i}\cong_{gr}M_{t_{i}}(K)$ or $M_{t_{i}}(K[x,x^{-1}]) $ with appropriate matrix gradings where, for each $i$, $t_{i}\leq n$, a fixed positive integer. \item $L$ satisfies a polynomial identity. \end{enumerate} \end{theorem} \begin{proof} Assume (i), that is, assume that the index of nilpotence of $L$ is $\leq n$. We claim that no cycle in $E$ can have an exit. Because, otherwise, by Proposition \ref{matrixCreation} (b), $L$ will contain subrings of matrices of arbitrary finite size and this will give rise to unbounded index of nilpotence for $L$. Thus every vertex $v$ in $E$ either does not lie on a closed path or lies on a cycle without exits. If there are more than $n$ distinct paths ending at $v$, then again, by Proposition \ref{matrixCreation} (a) and (c), $L$ will contain a copy of a matrix ring of order greater than $n$ over $K$ which will imply that the index of nilpotence of $L$ is greater than $n$, a contradiction. This proves (ii). Assume (ii). Let $P=I(H,S)$ be a graded prime ideal of $L$. 
Our hypothesis implies that no cycle in $E\backslash(H,S)$ has an exit and that $n$ is also an upper bound for the number of distinct paths ending at any vertex in $E\backslash(H,S)$. So $E\backslash(H,S)$ contains no infinite irrational paths. This means that every path ends at a sink or at a cycle without exits. Also, as $I(H,S)$ is a graded prime ideal, Theorem 3.12 of \cite{R} implies that $(E\backslash(H,S))^{0}$ is downward directed. Consequently, $E\backslash(H,S)$ contains either (a) exactly one sink $w$ or (b) exactly one cycle $c$ without exits based at a vertex $v$. Now in case (a), there are no more than $n$ distinct paths ending at $w$ and, in case (b), there are no more than $n$ paths which end at $v$ and do not go through the cycle $c$. We then appeal to Corollary 2.6.5 and Lemma 2.7.1 of \cite{AArS} to conclude that $L/P\cong L_{K}(E\backslash(H,S))\cong M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ according as $E\backslash(H,S)$ contains a sink or a cycle without exits. This proves (iii). Assume (iii). Now, for any graded prime ideal $P$, $L/P\cong_{gr}M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ with appropriate matrix gradings where $t\leq n$, a fixed positive integer. It is known that the intersection $I$ of all graded prime ideals of $L$ is zero. For the sake of completeness, we shall outline the argument. If $I\neq0$, being a graded ideal, $I$ will contain vertices. But, given any vertex $v$, a graded ideal $M$ maximal among graded ideals with respect to $v\notin M$ is a graded prime ideal, because, for any two homogeneous elements $a,b$, if $a\notin M$ and $b\notin M$, then $v\in M+LaL$ and $v\in M+LbL$. Consequently, $v=v^{2}\in(M+LaL)(M+LbL)\subseteq M+LaLbL$. Since $v\notin M$, $aLb\nsubseteqq M$. Thus $M$ is a graded prime ideal. But then $v\notin M$ implies $v\notin I$, a contradiction. Thus $\cap\{P:P$ graded prime ideal$\}=0$. Consequently, $L$ is a graded subdirect product of the graded rings $L/P$ graded isomorphic to $M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ under appropriate matrix gradings, where $t\leq n$, a fixed positive integer. This proves (iv). Assume (iv). If $t\leq n$, then the matrix rings $M_{t}(K)$ and $M_{t} (K[x,x^{-1}])$ will each have index of nilpotence $\leq n$. Consequently, a subdirect product of such rings will also have nilpotence index $\leq n$. This proves (i). Assume (iv). By the Amitsur-Levitzky theorem \cite{P}, both $M_{t}(K)$ and $M_{t}(K[x,x^{-1}])$ with $t\leq n$ are polynomial identity rings satisfying a polynomial identity of degree $\leq2n$. From this, it is clear that the subdirect product $L$ also satisfies a polynomial identity of degree $\leq2n$. This proves (v). The implication (v) $\implies$ (ii) has been established in \cite{BLR}. \end{proof} \noindent The Leavitt path algebra in Theorem \ref{bdd Index} need not decompose as a direct sum of matrix rings, as the following example shows. \begin{example} \textrm{\label{InfiniteClock} Consider the following \textquotedblleft infinite clock" graph $E$: \[ \begin{array} [c]{ccccc} & & \bullet_{w_{1}} & & \bullet_{w_{2}}\\ & \ddots & \uparrow & \nearrow & \\ & \cdots & \bullet_{v} & \longrightarrow & \bullet_{w_{3}}\\ & \swarrow & \vdots & \ddots & \\ \bullet_{w_{n}} & & & & \end{array} \] \noindent Thus $E^{0}=\{v\}\cup\{w_{1},w_{2},\cdot\cdot\cdot,w_{n},\cdot \cdot\cdot\}$ where the $w_{i}$ are all sinks. For each $n\geq1$, let $e_{n}$ denote the single edge connecting $v$ to $w_{n}$. The graph $E$ is acyclic and so every ideal of $L$ is graded (\cite{HR}).
The number of distinct paths ending at any given sink \ (including the sink) is $\leq2$. For each $n\geq1$, $H_{n}=\{w_{i}:i\neq n\}$ is a hereditary saturated set, $B_{H_{n}}=\{v\}$ and ($E\backslash(H_{n},B_{H_{n}}))^{0}=\{v,w_{n}\}$ is downward directed. Also E $\backslash$ (H$_{n}$,B$_{H_{n}}$) trivially satisfies Condition (L). Hence the ideal $P_{n}$ generated by $H_{n}\cup\{v-e_{n}e_{n}^{\ast}\}$ is a\ graded primitive ideal by Theorem 4.3(iii) of \cite{R} and $L_{K}(E)/P\cong M_{2}(K)$. Moreover, every graded primitive (equivalently, prime) ideal $P$ of $L_{K}(E)$ is equal to $P_{n}$ for some $n$. By \cite[Theorem 4.12]{HRS-1}, $L_{K}(E)$ is a graded $\Sigma$-$V$ ring. } \textrm{But $L_{K}(E)$ cannot decompose as a direct sum of the matrix rings $M_{2}(K) $. Because, otherwise, $v$ would lie in a direct sum of finitely many copies of $M_{2}(K)$. Since the ideal generated by $v$ is $L_{K}(E)$, $L_{K}(E)$ will then be a direct sum of finitely many copies of $M_{2}(K)$. This is impossible since $L_{K}(E)$ contains an infinite set of orthogonal idempotents $\{e_{n}e_{n}^{\ast}:n\geq1\}$. } \textrm{We can also describe the internal structure of this ring $L_{K}(E)$. The socle $S$ of $L_{K}(E)$ is the ideal generated by the sinks $\{w_{i} :i\geq1\}$, $S\cong\bigoplus\limits_{\aleph_{0}}M_{2}(K)$ and $L_{K}(E)/S\cong K$. } \end{example} But the decomposition is possible if the graph is row-finite, as shown in the following theorem. \begin{theorem} \label{row-finite bddIndex}Let $E$ be a row-finite graph. Then the following properties are equivalent for $L:=L_{K}(E)$: \begin{enumerate} [(i)] \item $L$ has bounded index of nilpotence $\leq n$; \item There is a fixed positive integer $n$ and a graded isomorphism \[ L\cong_{gr}{\displaystyle\bigoplus\limits_{i\in I}}M_{n_{i}}(K)\oplus {\displaystyle\bigoplus\limits_{j\in J}}M_{n_{j}}(K[x,x^{-1}]) \] where $I,J$ are arbitrary index sets and, for all $i\in I$ and $j\in J$, $n_{i}\leq n$ and $n_{j}\leq n$ . In particular, $L$ is graded semi-simple (that is, a direct sum of graded simple left/right ideals). \end{enumerate} \end{theorem} \begin{proof} Assume (i). By Theorem \ref{bdd Index}, no cycle in $E$ has an exit and the number of distinct paths that end at any vertex $v$ is $\leq n$, with the proviso that if $v$ sits on a cycle $c$, then these paths do not include the entire cycle $c$. If $A$ is the graded ideal generated by all the sinks in $E$ and all the vertices on cycles without exits, then, by Corollary 2.6.5 and Lemma 2.7.1 of \cite{AArS}, $A\cong{\displaystyle\bigoplus\limits_{i\in I} }M_{n_{i}}(K)\oplus{\displaystyle\bigoplus\limits_{j\in J}}M_{n_{j} }(K[x,x^{-1}])$. By giving appropriate matrix gradings, this isomorphism becomes a graded isomorphism. We claim that $L=A$. Let $H\subseteq A$ be the set consisting of all the sinks and all the vertices on cycles in $E$. By hypothesis, every path in $E$ that does not include an entire cycle has length $\leq n$ and ends at a vertex in $H$. So if $u$ is any vertex in $E$, using the fact that all the vertices in $E$ are regular and by a simple induction on the length of the longest path from $u$, we can conclude that $u$ belongs to the saturated closure of $H$. This implies that $L=A\cong_{gr} {\displaystyle\bigoplus\limits_{i\in I}}M_{n_{i}}(K)\oplus {\displaystyle\bigoplus\limits_{j\in J}}M_{n_{j}}(K[x,x^{-1}])$. This proves (ii). 
(ii) $\implies$ (i) follows from the fact that the matrix rings $M_{n_{i}}(K) $ and $M_{n_{j}}(K[x,x^{-1}])$ with $n_{i},n_{j}\leq n$ have index of nilpotence $\leq n$. \end{proof} One consequence of Theorem \ref{bdd Index} is the following. \begin{prop} \label{bdx => SigmaV} Let $E$ be an arbitrary graph. If $L:=L_{K}(E)$ has bounded index of nilpotence $n$, then $L$ is a graded $\Sigma$-$V$ ring. \end{prop} \begin{proof} If $L$ has bounded index of nilpotence $n$, then for any graded primitive ideal $P$ of $L$ (since it is also graded prime), we have, by Theorem \ref{bdd Index}(iii), $L/P\cong_{gr}M_{t}(K)$ or $M_{t}(K[x,x^{-1}])$ with appropriate matrix gradings, where $t\leq n$. By \cite[Theorem 4.12]{HRS-1}, we then conclude that $L$ is a graded $\Sigma$-$V$ ring. \end{proof} The converse of the above result does not hold, as can be seen in the two examples below. \begin{example} \textrm{\label{Inverse infinite clock} Let $E$ be the \textquotedblleft inverse infinite clock" graph consisting of a sink $w$ and countably infinite edges $\{e_{n}:n\geq1\}$ with $r(e_{n})=w$ and s(e$_{n}$) = w}$_{n}$ \textrm{ for all $n$. } \textrm{ \[ \begin{array} [c]{ccccc} & & \bullet_{w_{1}} & & \bullet_{w_{2}}\\ & \ddots & \downarrow & \swarrow & \\ & \cdots & \bullet_{w} & \longleftarrow & \bullet_{w_{3}}\\ & \nearrow & \vdots & \ddots & \\ \bullet_{w_{n}} & & & & \end{array} \] } \textrm{\noindent Then $L_{K}(E)\cong M_{\infty}(K)$, the infinite $\omega\times\omega$ matrix with finitely many non-zero entries. Now, under appropriate matrix grading, $M_{\infty}(K)$ is graded semisimple (that is, a graded direct sum of \ graded simple modules) and so all graded left/right $M_{\infty}(K)$-modules are graded injective and hence $L$ is a graded $\Sigma$-$V$ ring. But $L$ does not have bounded index of nilpotence by Theorem \ref{bdd Index}(ii). } \end{example} \begin{example} \textrm{\label{bdd idx no sigmaV} Consider the following graph $F$ consisting of two cycles $g$ and $c$ connected by an edge: \[ \begin{array} [c]{ccccccc} \bullet & \longrightarrow & \bullet & & \bullet & \longrightarrow & \bullet\\ \uparrow & g & \downarrow & & \uparrow & c & \downarrow\\ \bullet & \longleftarrow & \bullet & \longrightarrow & \bullet_{v} & \longleftarrow & \bullet \end{array} \] } \textrm{\noindent Now $F$ is downward directed, $c$ is a cycle without exits and the various powers of the cycle $g$ give rise to infinitely many distinct paths that end at the base $v$ of the cycle $c$. Hence $L_{K}(F)\cong M_{\infty}(K[x,x^{-1}])$ (by Lemma 2.7.1 of \cite{AArS}) and is graded semisimple. Hence each graded simple module over $L_{K}(F)$ is graded $\Sigma $-injective. But $L_{K}(F)$ does not have bounded index of nilpotence, as $M_{\infty}(K[x,x^{-1}])$ contains subrings isomorphic to $M_{n}(K[x,x^{-1}])$ for every positive integer $n$. } \end{example} In the monograph \cite{JST}, the following open question (6.33: Problem 3, Chapter 6) was raised: \begin{question} Does every exchange right $\Sigma$-$V$ ring have bounded index of nilpotence? \end{question} We answer this question in the negative in the following remark. \begin{remark} \textrm{Consider the graph $E$ of Example \ref{Inverse infinite clock}. As noted there, $L_{K}(E)\cong M_{\infty}(K)$ $\mathrm{\ }$which is semisimple and hence is a $\Sigma$-$V$ ring. Since $E$ is acyclic, $L_{K}(E)$ is von Neumann regular \cite{AR} and hence is an exchange ring. But $L_{K}(E)$ does not have bounded index of nilpotence by Theorem \ref{bdd Index}(ii). 
} \end{remark} \begin{remark} \textrm{It was shown in \cite{HRS-1} that a Leavitt path algebra $L_{K}(E)$ is directly-finite (equivalently, graded directly-finite with respect to vertices) if and only if no cycle in $E$ has an exit. In view of Theorem \ref{bdd Index}, it is clear that a Leavitt path algebra of bounded index is always directly-finite. But, for arbitrary rings with bounded index of nilpotence, this is not the case. Let $S=\prod R_{k}$, where each $R_{k} \cong\mathbb{Z}(p^{n})$, the ring of integers modulo $p^{n}$ for a fixed prime $p$ and a fixed integer $n\geq2$. Now $(pS)^{n}=0$ and if $a\in S$ is nilpotent, then $a\in pS$ and $a^{n}=0$. Consequently, $S$ has bounded index of nilpotence. In fact, the index of nilpotence of $S$ is $n$. But $S$ is not directly-finite, since $S\cong \prod\limits_{k\geq2}R_{k}\cong\prod\limits_{k\geq3}R_{k}\cong\cdot\cdot\cdot $. } \textrm{Conversely, a directly-finite Leavitt path algebra need not have bounded index of nilpotence. If $E$ is the graph consisting of an infinite line segment \[ \bullet\longrightarrow\bullet\longrightarrow\bullet\longrightarrow\cdot \cdot\cdot\bullet\longrightarrow\cdot\cdot\cdot \] then clearly $L_{K}(E)$ is directly-finite, but, by Theorem \ref{bdd Index} (ii), $L_{K}(E)\cong M_{\infty}(K)$ does not have bounded index of nilpotence. } \end{remark} \end{document}
\begin{document} \preprint{APS/123-QED} \title{ Quantum walks on graphs representing the firing patterns of a quantum neural network} \author{Maria Schuld} \email{[email protected]} \author{Ilya Sinayskiy} \author{Francesco Petruccione} \affiliation{Quantum Research Group, School of Chemistry and Physics, University of KwaZulu-Natal Durban, KwaZulu-Natal, 4001, South Africa\\ and National Institute for Theoretical Physics (NITheP), KwaZulu-Natal, 4001, South Africa } \date{\today} \begin{abstract} Quantum walks have been shown to be fruitful tools in analysing the dynamic properties of quantum systems. This article proposes to use quantum walks as an approach to Quantum Neural Networks (QNNs). QNNs replace binary McCulloch-Pitts neurons with a qubit in order to use the advantages of quantum computing in neural networks. A quantum walk on the firing states of such a QNN is supposed to simulate central properties of the dynamics of classical neural networks, such as associative memory. It is shown that a biased discrete Hadamard walk derived from the updating process of a biological neuron does not lead to a unitary walk. However, a Stochastic Quantum Walk between the global firing states of a QNN can be constructed and it is shown that it contains the feature of associative memory. The quantum contribution to the walk accounts for a modest speed-up in some regimes.\bigbreak \begin{description} \item[PACS numbers] 03.67.Lx, 03.65.Yz, 87.18.Sn \end{description} \end{abstract} \pacs{Valid PACS appear here} \maketitle \section{Introduction} Quantum walks, the quantum equivalent of classical random walks, became a booming research field in the last decade \citep{kempe03, kendon07, venegas12}. Based on the theory of Markov chains, classical random walks study the evolution of the probability distribution of an abstract walker's position on a graph. The positions or vertices of the graph are connected by edges symbolising transition probabilities. In each step, the walker makes a random decision (often described by a coin toss) to which adjacent position to jump. Quantum walks, in which the walker's position on the graph can be a superposition and the decision process is simulated by a `quantum coin' such as the Hadamard transformation, show a surprisingly different behaviour to classical random walks. Quantum walks have been formulated as discrete \citep{aharonov01} or continuous \citep{childs02} walks, and led to new versions such as the semi-classical Stochastic Quantum Walk \citep{whitfield10} or the Open Quantum Walk \citep{attal12}. \\ The potential of quantum walks is based on their fruitful application for quantum computing just like classical walks lead to efficient classical algorithms \citep{moore02, ambainis03}. An important application of quantum walks has been found in the newly emerging field of quantum biology \citep{ball11}. Evidence suggests that photosynthetic plants use nontrivial quantum effects such as superposition and interference for energy transport in their light-harvesting complexes \citep{engel07, panitchayangkoon10}. The trajectory of an excitation `jumping' between molecular `sites' from the antenna to the reaction center can thereby be modeled with the formalism of quantum walks \citep{ishizaki09, hoyer10, mohseni08, rebentrost09}. \\ The success story of quantum biology is a motivation to reaccess questions of quantum dynamics in another important biological system: the brain. 
In 2006, Christof Koch and Klaus Hepp wrote in their \textit{Nature} contribution entitled `Quantum Mechanics in the Brain' \citep{koch06}: ``The critical question [...] is whether any components of the nervous system - a $300^{\circ}$ Kelvin tissue strongly coupled to its environment - display macroscopic quantum behaviours, such as quantum entanglement, that are key to the brain's function.'' Besides a number of controversial theories on the `quantum brain' \citep{hameroff13, freeman08}, there has been no evidence for nontrivial quantum effects in the nervous system yet. On the contrary, the macroscopic nature of signal transmission between neural cells seems to make quantum coherence impossible \citep{tegmark00}. However, the intersection of neuroscience and quantum physics has been accessed from the perspective of computational science. In the last two decades, various Quantum Neural Network (QNN) models \citep{andrecut02, altaisky01,gupta01, behrman02, fei03, zhou12, oliveira08, toth00} have been proposed. Although not claiming to be realistic quantum models of biological neural networks, these proposals explore alternative ways of computation using both the advantages of quantum computing and neural computing.\\ This article follows the perspective of QNN research and investigates a new approach to introduce quantum physics into neural networks by making use of the theory of quantum walks. The sites of the quantum walk symbolise the firing patterns of a neural network consisting of simplified binary neurons with the states `active' and `resting'. A firing pattern is given by a binary string encoding which neuron of a network is firing and which is resting. The current position of the walker represents the network's firing state. A \textit{quantum} walker is of course able to be in a superposition of firing patterns. We show that a discrete quantum walk, in which a Hadamard-like biased coin is successively flipped to decide on the firing state of single neurons clashes with the framework of unitary quantum walks. To simulate a neural network's dissipative dynamics, we therefore need to focus on quantum walks that incorporate decoherence. A continuous Stochastic Quantum Walk on the hypercube, obtaining basic features of the brain's property of associative memory (i.e. retrieving a memorised pattern upon an imperfect initial pattern), is implemented and analysed. It can be shown that under certain conditions, the quantum part of the walk accounts for a modest speed-up of the walk. These results serve as an example of the application of quantum walks to obtain specific dynamics of a quantum system.\\ The paper has the following structure: Section II and III very briefly introduce into the necessary theoretical background of quantum walks as well as Quantum Neural Networks. Section IV gives an idea of how quantum walks can be constructed in the context of QNNs, and explains the reason for the failure of the most intuitive way. A more mature approach is presented. The conclusion (Section V) offers a discussion including a way forward for the use of quantum walks in QNN research. \section{A brief introduction to quantum walks} On any graph of $n$ vertices a Markov chain can be defined. A Markov chain is a sequence of events that is governed by a stochastic process in which the results of a timestep only depend on the results of the previous timestep \citep{feller08}. 
Markov chains are described by a stochastic matrix $ M(n\times n, \mathbb{R})$ with $\sum^n_{j = 1} m_{ij} = 1$ and entries $m_{ij}$ representing the weight of the directed edge going from vertex $i$ to $j$ (see Fig. \ref{markovchain}). These weights can be interpreted as a transition rate from site $i$ to $j$. Repeatedly applying $M$ to a $n$-dimensional stochastic vector $\vec{\pi}$ with $\sum^n_{l=1} \pi_{l}=1$ evolves an initial probability distribution through discrete time steps. The probability of being at vertex $i$ changes according to \citep{childs02} \begin{equation} \frac{d\pi_i}{dt} = -\sum \limits_j M_{ij}\pi_j(t). \label{machain} \end{equation} Markov chains on regular undirected graphs result in a stationary probability distribution $\pi_s$ which is independent of the initial state \citep{aharonov01}. The time it takes to reach the stable distribution is called the \textit{mixing time}.\\ \begin{figure} \caption{(Colour online) Any weighed graph defines a Markov chain represented by a stochastic matrix.} \label{markovchain} \end{figure} Markov chains with equal probability to jump from site $i$ to any of the $d$ sites adjacent to $i$ are also known as random walks on a graph \footnote{Some authors extend the random walk model to biased probabilities \citep{mackay02, kendon07, stefanak09, ribeiro04}. These so called `biased random walks' are nothing else than Markov chains introduced here. We will use the more general definition.}. Random walks are based on the idea of an abstract walker who in each step tosses a $d$-dimensional coin to choose one of $d$ possible directions at random. Random walks have been proven to be powerful tools in constructing efficient algorithms in computer science (for references see \citep{moore02}). \\ In the quantum equivalent of random walks, a quantum walker walks between sites by changing its position state $\ket{x} \in \{\ket{1},..., \ket{n}\}$. The difference to classical walks is twofold: First, the various paths are realised in a superposition and thus interfere with one another, and a measurement `collapses' the paths into a current position. Second, the dynamics have to preserve the squared amplitude vector instead of the stochastic vector to preserve the total probability. This means that the evolution has to be unitary, or in the most general case of an open system, a completely positive trace preserving map \citep{aharonov02, nielsen10}. \\ The unitarity of quantum walks furthermore implies that the evolution is reversible. Quantum walks therefore do not have a stationary probability distribution $\pi_s$ as classical random walks do. However, it turns out that taking an average over the probability distribution over states $\ket{x}$, \begin{equation} \bar{P}_T(x) = \frac{1}{T} \sum^{T-1}_{t=0} | \braket{x}{\psi(t)}|^2, \label{LD}\end{equation} leads to a stable distribution $\bar{P}_s$ \citep{aharonov01}. Quantum walks received a fair amount of attention and have been the topic of extensive reviews, books and attempts of implementations \citep{kendon07, kempe03, venegas12, wang13, travaglione02}. The reasons are that first, quantum walks show markedly different features than their classical counterparts. Second, quantum walks were in some cases able to outperform classical walks \citep{kempe02, childs02, aharonov02, ambainis03, shenvi03}.\\ Two versions of quantum walks were established and exist in parallel: the discrete \citep{aharonov01} and continuous time quantum walk \citep{farhi98, childs02}. 
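As a minimal illustration of these classical notions (our sketch; the graph and the number of steps are arbitrary choices), the following few lines evolve a random walk on a cycle of $n=7$ vertices and show the approach to the uniform stationary distribution. An odd cycle is used so that the chain is aperiodic and a single walk indeed mixes.
\begin{verbatim}
import numpy as np

n = 7                                   # odd cycle -> aperiodic chain
M = np.zeros((n, n))
for i in range(n):
    M[(i - 1) % n, i] = 0.5             # jump to the left neighbour ...
    M[(i + 1) % n, i] = 0.5             # ... or to the right neighbour
# M is symmetric and doubly stochastic, so pi -> M pi preserves probability

pi = np.zeros(n)
pi[0] = 1.0                             # walker starts at vertex 0
for t in range(200):                    # 200 discrete time steps
    pi = M @ pi

print(pi)                               # close to the uniform distribution 1/n
\end{verbatim}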
The bridge between the discrete-time and the continuous-time version was eventually established in \citep{strauch06}. An important development was also the exploration of decoherence in quantum walks \citep{kendon07}. Recently, an interesting version of the continuous quantum walk with decoherence has been introduced \citep{whitfield10}. So-called Stochastic Quantum Walks obey a Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) type master equation \citep{lindblad76,gorini78} that consists of a coherent as well as an incoherent part. These three versions will be important in the application further below and shall therefore be briefly presented.\\ \textit{Discrete quantum walks}\\ In a discrete quantum walk, the `walker' is associated with a wave function describing a quantum system with states $\ket{\psi} = \ket{c} \otimes \ket{i} \in \mathcal{H}_c \otimes \mathcal{H}_i$. The Hilbert space $\mathcal{H}_i$ has a basis $\{\ket{1},\dots, \ket{n}\}$ ($n$ may be countably infinite) representing the sites or vertices on which the walk takes place. The Hilbert space $\mathcal{H}_c$ with basis $\{\ket{1},\dots, \ket{d_i}\}$ is a `coin' space that encodes the current state of a coin `tossed' to decide which direction to take next. Note that usually only regular graphs are considered, so that $d$ is independent of the current position $\ket{i}$. The discrete walk then consists of two substeps in each step $t\rightarrow t+1$, executed by a coin and a shift operator: \[\ket{\psi_{t+1}} = \hat{S}(\hat{C} \otimes 1)\ket{\psi_t}.\] First, the coin is `tossed' by applying $\hat{C}$ to the coin space, which puts the coin state into a superposition. Second, the conditional shift operator $\hat{S}$ shifts the state to the $r$-th adjacent site if the outcome of the coin is $\ket{r}$ ($r \in \{1,...,d\}$). \\ The best-known coin is the Hadamard transformation, which acts on the two-dimensional basis $\{\ket{a}, \ket{b}\}$ as \begin{equation}\hat{H} \ket{a} = \frac{1}{\sqrt{2}} (\ket{a} + \ket{b}), \; \; \hat{H} \ket{b} = \frac{1}{\sqrt{2}} (\ket{b} - \ket{a}). \label{had}\end{equation} The minus sign in the second equation indicates that even the Hadamard transformation is not completely unbiased; at the same time it marks the fundamental difference from classical walks, as it is the source of interference \footnote{The reason why a separate coin space is introduced is that the unitarity condition is not fulfilled by most examples of walks (a detailed argument can be found in \citep{ambainis03}).}.\\ \textit{Continuous quantum walks}\\ In 2001, Andrew Childs, Edward Farhi and Sam Gutmann published a continuous version of the quantum walk. Their idea is based on the equivalence between Eq. (\ref{machain}) and the Schr\"odinger equation for the state $\ket{\psi (t)}$, \begin{equation} \imath \frac{d}{dt} \braket{i}{\psi(t)} = \sum \limits_j \bra{i}H\ket{j} \braket{j}{\psi(t)}. \label{SE} \end{equation} While Eq. (\ref{machain}) preserves $\sum^n_{l=1} \pi_{l}=1$, the Schr\"odinger equation (\ref{SE}) makes sure that $\sum_i |\braket{i}{\psi(t)}|^2 = 1$ is fulfilled. The difference between the two evolutions is essentially the imaginary unit in the latter \citep{childs02}. Comparing both equations, one can see that the stochastic matrix $M$ of the classical Markov chain is replaced by the Hamiltonian $H$ of the quantum system. $H$ consequently corresponds to the weighted adjacency matrix of the graph. To obtain a Hermitian and thus symmetric operator (as the entries are real numbers), the graph needs to be undirected.\\
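A minimal implementation of the coined walk may help to fix ideas. The following Python sketch (NumPy assumed; the choice of a cycle graph and of the initial coin state is ours and purely illustrative) performs the update $\ket{\psi_{t+1}} = \hat{S}(\hat{C} \otimes 1)\ket{\psi_t}$ with the Hadamard coin of Eq. (\ref{had}).

\begin{verbatim}
import numpy as np

n_sites, T = 101, 40                       # cycle with n_sites vertices, T walk steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# psi[c, x] is the amplitude for coin state c (0 = left, 1 = right) at site x
psi = np.zeros((2, n_sites), dtype=complex)
psi[0, n_sites // 2] = 1.0                 # start at the centre with coin state |0>_c

for _ in range(T):
    psi = np.einsum("cd,dx->cx", H, psi)   # coin toss: (C x 1)|psi>
    shifted = np.empty_like(psi)
    shifted[0] = np.roll(psi[0], -1)       # coin outcome 0 -> step to the left
    shifted[1] = np.roll(psi[1], +1)       # coin outcome 1 -> step to the right
    psi = shifted                          # conditional shift S

P = np.sum(np.abs(psi) ** 2, axis=0)       # position distribution
x = np.arange(n_sites) - n_sites // 2
print("total probability:", round(float(P.sum()), 6))   # stays 1: evolution is unitary
print("mean position:", round(float(P @ x), 2))         # nonzero: asymmetric spreading
print("standard deviation:", round(float(np.sqrt(P @ x**2 - (P @ x)**2)), 2))
\end{verbatim}

The standard deviation grows linearly with the number of steps, the ballistic spreading that distinguishes the Hadamard walk from the diffusive spreading of its classical counterpart.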
\textit{Stochastic Quantum Walks}\\ A number of contributions investigate what happens when decoherence is introduced into quantum walks \citep{kendon07, brun03}. Decoherence destroys the quantum property of coherent states and drives the dynamics into the classical regime. In some cases this can lead to preferred dynamics \citep{kendon07}. An interesting proposal for a decohered continuous quantum walk has recently been introduced \citep{whitfield10}. A so-called Stochastic Quantum Walk (SQW) evolves a density matrix according to the GKLS master equation \citep{lindblad76,gorini78} \begin{equation} \dot{\rho} =- \imath \kappa [H, \rho] - \gamma \sum_k \left( \frac{1}{2} L_k^{\dagger} L_k \rho + \frac{1}{2}\rho L_k^{\dagger} L_k - L_k\rho L_k^{\dagger} \right). \label{lindblad} \end{equation} Note that here and in the following, $\hbar$ is set to one. The parameters $\kappa$ and $\gamma$ define the influence of the two terms on the right-hand side. The sum term describes the stochastic evolution, in which the $L_k$ denote Lindblad jump operators that decohere the quantum state. The first term is the usual Schr\"odinger quantum evolution as known from the von Neumann equation. The Hamiltonian represents a (weighted) adjacency matrix as in the continuous walk given in Eq. (\ref{SE}). In this fashion, the ``evolution among vertices happens through coherences developed by a Hamiltonian'' \citep{whitfield10} while the system is constantly exposed to decoherence. Thus, the advantages of both dissipation and coherence can be used, and transitions from one to the other can be studied. We will implement a version of the Stochastic Quantum Walk in Section IVb. \section{Neural Networks and the concept of the `quron'} To get an understanding of what is meant by the `firing patterns of a QNN', we need to briefly introduce some fundamentals of computational neuroscience as well as the basic concept of a QNN.\\ Neural networks are computational systems inspired by the biological neural networks forming our brain. The brain is believed to compute information by carrying electric signals, so-called \textit{action potentials}, along the membranes of interconnected neural cells \citep{purves08, dayan01}. The algorithmic dynamics of biological neural networks are defined by the connection strengths between neurons as well as by the activation function of a neuron due to the input signals from other neurons. Information is encoded in the global firing pattern of the neural network.\\ It turns out that important properties of the brain, such as the computation of incomplete or imperfect input, can be reproduced by the simplest possible model of a neuron, introduced by McCulloch and Pitts as early as 1943 \citep{mcculloch43}: an active neuron firing a sequence of action potentials in a given time window is represented by a `1', while a resting neuron is represented by a `0' (for other types of neural networks, see \citep{rabinovich06}). The $N$ neurons of a neural network can thus be described by variables $x_i \in \{0,1\}, \; i = 1,...,N$. Each neuron $x_i$ is assigned a characteristic threshold $\theta_i$. The biologically derived activation or updating function of a neuron $x_i$ is then given by \begin{equation} x_i = \left\{ \begin{array}{l l} 1, & \quad \text{if} \; \sum\limits_{i \neq j=1}^N w_{ji} x_j \geq \theta_i,\\ 0, & \quad \text{else,} \end{array} \right .
\label{update}\end{equation} where the $w_{ij},\; i,j = 1,...,N$, are real numbers denoting the strength of the connection between neurons $x_i$ and $x_j$. This setup is called a `perceptron' (see Fig. \ref{neural}). The vector $(x_1,...,x_N)$ is called the `state' or firing pattern of the network. Initialising the network means setting each neuron to either $1$ or $0$. An update step of the global network state can happen either through a synchronous update of all neurons or through a chronological or random sequence of individual updates according to Eq. (\ref{update}).\\ \begin{figure} \caption{(Colour online) Illustration of a perceptron, a mathematical model of the neural activation mechanism with neurons $x_1,x_2,x_3, x_4$ and connection strengths $w_{14}, \dots$} \label{neural} \end{figure} One of the milestones in artificial neural network research was John J. Hopfield's 1982 publication on a network today widely known as the `Hopfield network' \citep{hopfield82}, in which the connection strengths fulfill \[w_{ij} = w_{ji}, \qquad w_{ii} = 0. \] Despite its simple setup, the Hopfield model shows the powerful feature of associative memory. Associative memory is the ability to retrieve, out of a number of stored network states, the one in whose dynamic basin of attraction the input pattern lies. Hopfield networks thus store firing patterns as dynamic attractors. These dynamic attractors are minima of the Ising-type energy function \begin{equation} E(x_1,...,x_N) = - \frac{1}{2} \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{N} w_{ij} x_i x_j + \sum\limits_{i=1}^{N} \theta_{i} x_i. \label{energy} \end{equation} A Hopfield network always inherits attractors from the nonlinearity of the updating process. The specific dynamics of a neural network are then solely defined by the choice of weights $w_{ij}$. The property $w_{ii} = 0$ makes sure that all attractors are stable states (as opposed to limit cycles of alternating states) \citep{rojas96}. After a finite number of updating steps, any initial state of the network will consequently end up in the `closest' attracting network state, which is then reproduced by the updating process. An important implication is that neural networks based on a step activation function manipulate information irreversibly, since the step function is not injective; i.e. a state of a neural network at timestep $t_{n-1}$ cannot be reconstructed from its state at timestep $t_n$. This might be different for other types of activation functions such as the sigmoid function. It is also interesting to note that in conventional neural networks, the number of neural excitations (i.e. of neurons in the state `active') is not conserved, which is crucial for the attempt to construct quantum walks of excitations in a Quantum Neural Network. \\ Approaches to QNNs \citep{kak95, perus00, menneer95, zak98, gupta01, oliveira08, behrman99, li02} are mostly based on Hopfield-type neural networks.
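To keep in mind the classical behaviour that the quantum models below aim to reproduce, the following Python sketch (our own illustration, assuming NumPy) implements a small Hopfield network with asynchronous updates, cf. Eqs. (\ref{update}) and (\ref{energy}). It uses the equivalent $\pm1$ spin convention $s_i = 2x_i - 1$ and a Hebbian choice of weights, neither of which is prescribed by the discussion above, and retrieves a stored pattern from a corrupted input.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # number of (classical) neurons

# Two stored firing patterns, written in the equivalent +/-1 convention
# (s = 2x - 1); this is a common reformulation of the 0/1 model above.
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian weights with w_ii = 0 (symmetric, as required for a Hopfield net)
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    # Ising-type energy, cf. Eq. (energy) with zero thresholds
    return -0.5 * s @ W @ s

# Corrupt the first pattern by flipping 15 of its neurons
s = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
s[flip] *= -1

# Asynchronous updates in random order until nothing changes any more
changed = True
while changed:
    changed = False
    for i in rng.permutation(N):
        new = 1 if W[i] @ s >= 0 else -1
        if new != s[i]:
            s[i] = new
            changed = True

print("energy of retrieved state:", energy(s))
print("overlap with stored pattern:", int(np.sum(s == patterns[0])), "/", N)
\end{verbatim}

Retrieval of the stored pattern from the corrupted input is exactly the associative-memory behaviour that the Stochastic Quantum Walk of Section IV is designed to mimic.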
The basic idea of introducing quantum properties is to replace the McCulloch-Pitts neuron $x \in \{0,1\}$ by a `qubit neuron' $\ket{x}$ of the two-dimensional Hilbert space $\mathcal{H}^2$ with basis $\{\ket{0}, \ket{1}\}$. We propose to simply call this object a `quron'. The central property of a quron is that it can be in a superposition of its two firing states with complex amplitudes $\alpha, \beta$, \begin{equation}\ket{x} = \alpha \ket{0} + \beta \ket{1}, \;\; |\alpha|^2 + |\beta|^2 = 1. \label{qubit} \end{equation} The state of a network with $N$ qurons thus becomes a quantum product state of the $2^N$-dimensional Hilbert space $\mathcal{H}^{2^N} =\mathcal{H}^2_{(1)} \otimes \dots \otimes \mathcal{H}^2_{(N)}$, \[\ket{\psi} = \ket{x_1} \otimes \ket{x_2} \otimes \dots \otimes \ket{x_N} = \ket{x_1 x_2 \dots x_N}.\] These are the firing states of a QNN on which a quantum walk will be constructed in the following section. \section{Quantum walks between quantum neural network states} The genius of the Hopfield model lies in the fact that operations on the neuron as a local unit impose dynamics on the global network state. These global dynamics can be understood as a classical random walk between network states, \[(x_1,...,x_n)_{t_0} \rightarrow (x_1,...,x_n)_{t_1} \rightarrow (x_1,...,x_n)_{t_2} \rightarrow \hdots, \] beginning with the initial pattern and in each step jumping to the updated network state. After a finite number of steps, the chain reproduces the stable state serving as the output of the algorithm. \\ Likewise, a quantum walk on the firing states of a QNN can be defined as an evolution \[\ket{\psi}_{t_0} \rightarrow \ket{\psi}_{t_1} \rightarrow \ket{\psi}_{t_2}\rightarrow \hdots,\] following the laws of quantum mechanics. An important question is the structure of the graph underlying such a quantum walk. The vertices of the graph are given by binary strings denoting all possible firing patterns. The connectivity thus depends on the updating protocol. If all neurons are updated synchronously, each transition between firing patterns is theoretically possible and the graph is fully connected. We will concentrate on the more common case of updating single neurons at a time, which gives rise to the hypercube shown in Fig. \ref{HC} for the $N = 3$ dimensional case. The hypercube connects binary strings that differ by only one bitflip (in other words, they have a Hamming distance of one \citep{hamming50}). We add self-connections of every site to account for updates that leave the firing pattern unchanged.\\ \begin{figure} \caption{The graph of a quantum walk on the firing states of a QNN has the structure of a hypercube, where the firing patterns sit at the corners.} \label{HC} \end{figure} The remaining part of this article first discusses a very intuitive way of implementing a quantum walk on a QNN by tossing a quantum coin to decide upon the updated state of each quron, and shows why it fails to lead to a discrete, unitary quantum walk. It then presents a version of a Stochastic Quantum Walk that simulates an associative memory and discusses the results. \subsection{Why the most intuitive version of a discrete quantum walk fails} A straightforward way to construct a QNN seems to be to replace the updating process of a neuron given in Eq. (\ref{update}) by a biased Hadamard-like transformation on a two-dimensional coin state $\ket{c} \in \{\ket{0}_c, \ket{1}_c\}$ and to flip the state of the quron depending on the outcome of the coin, as done in the discrete quantum walk. To obtain nontrivial dynamics, we would want a biased coin leading to a superposition of the form of Eq. (\ref{qubit}) in which the amplitudes $|\alpha|^2$ and $|\beta|^2$ encode the probability of the corresponding neuron to fire or to rest.
This can be done by defining the firing probability $0\leq p_i \leq1$ of a neuron $x_i$ as \begin{equation} p_i = \frac{\sum_j w_{ij} x_j + (N-1)}{2(N-1)}, \label{firingprob} \end{equation} and choosing \[\alpha_i = \sqrt{1- p_i}, \qquad \beta_i = \pm \sqrt{ p_i}. \] The $\pm$ in front of $\beta_i$ is introduced to mimic the quantum properties of the Hadamard transformation or, in other words, to introduce interference. The firing probability is nothing other than the normalised total signal coming from all input neurons to an output neuron. Since we sum over $N-1$ weighted inputs $w_{ij}x_j \in [-1,1]$, the incoming signal lies in the interval $[-(N-1), (N-1)]$. To obtain a positive value normalised to $[0,1]$ representing the probability, we consequently need to `shift' the signal to positive values and divide by the range of the interval, $2(N-1)$. Thus, if the incoming signal is strong, the probability for the neuron to become active is high, regardless of its prior state. Note that the thresholds $\theta_i$ of classical neurons are set to zero when dealing with qurons in the following. The updating process for quron $\ket{x_i}$ can consequently be formulated as a transformation \begin{multline} \hat{H} \ket{0}_c = \sqrt{1-p_i} \ket{0}_c +\sqrt{p_i} \ket{1}_c, \\ \hat{H} \ket{1}_c =\sqrt{1-p_i} \ket{0}_c - \sqrt{p_i} \ket{1}_c. \label{neuroncoin} \end{multline} This is slightly different from the biased Hadamard transformation used in biased quantum walks \citep{mackay02, kendon07, stefanak09, ribeiro04}, \begin{multline} \hat{H} \ket{0}_c = \sqrt{p_i} \ket{0}_c +\sqrt{1-p_i} \ket{1}_c, \\ \hat{H} \ket{1}_c =\sqrt{1-p_i} \ket{0}_c - \sqrt{p_i} \ket{1}_c, \label{biasedcoin} \end{multline} where the variable $p_i$ denotes the probability that the coin state flips its value, so that the biased Hadamard coin depends on the history of the coin state. This small difference leads to a problem in the implementation of the quantum walk proposed here, since the coin (Eq. \ref{neuroncoin}) is nonunitary. In fact, this property is not surprising since it stems from the dissipative dynamics of the Hopfield network, in which information about the former state of the neuron does not feed into the updating process (due to $w_{ii} = 0$). A direct application of coherent quantum walks to neural dynamics is thus not trivial. We conclude that quantum walks that include decoherence must be considered in order to incorporate dissipation.
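The nonunitarity of the coin in Eq. (\ref{neuroncoin}) is easy to check numerically. The following short Python sketch (NumPy assumed; an illustration of ours rather than part of the original argument) builds the corresponding matrix and shows that $\hat{H}^{\dagger}\hat{H}$ deviates from the identity for every firing probability except $p_i = 1/2$, where the coin reduces to the ordinary Hadamard transformation.

\begin{verbatim}
import numpy as np

def neuron_coin(p):
    # Matrix of Eq. (neuroncoin) in the basis {|0>_c, |1>_c}:
    # the columns are the images of |0>_c and |1>_c.
    return np.array([[np.sqrt(1 - p),  np.sqrt(1 - p)],
                     [np.sqrt(p),     -np.sqrt(p)]])

for p in [0.5, 0.3, 0.8]:
    H = neuron_coin(p)
    deviation = np.linalg.norm(H.conj().T @ H - np.eye(2))
    print(f"p = {p}: ||H^dag H - 1|| = {deviation:.3f}")
# Only p = 0.5 (the unbiased Hadamard coin) gives zero deviation;
# for any other firing probability the coin fails to be unitary.
\end{verbatim}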
\subsection{A Stochastic Quantum Walk on the hypercube} We want to propose another type of quantum walk that is not derived from the neural updating mechanism, but still reproduces the Hopfield network's dynamics of associative memory. Hence, our goal is to introduce a Stochastic Quantum Walk on a hypercube graph that ends up in whichever of two `attracting firing states' is closer to the initial state in terms of Hamming distance.\\ The hypercube of dimension $N$ is given by a set of vertices $\mathcal{V}^{2^N}$ as `corners', representing the density matrices $\ket{x_1,...,x_N}_i\bra{x_1,...,x_N} = \ket{i}\bra{i}, i = 1,...,2^N$ (compare Fig. \ref{sqwHC}). The quantum state $\ket{x_1,...,x_N}_i$ is thereby the $i$-th basis state of a QNN of 2-level qurons as introduced above, and the shorthand $\ket{i}$ is used to simplify notation. In the hypercube, two vertices $\ket{i}\bra{i}$ and $\ket{j}\bra{j}$ are connected by an edge if their respective network states differ by one quron's state or not at all, or in other words, if the Hamming distance $d_H(i,j)$ between the two states $\ket{i}$ and $\ket{j}$ is one or zero. The hypercube graph's adjacency matrix is consequently given by \[ H_{ij} = \left\{ \begin{array}{l l} a_{ij}, & \quad \text{if } d_H(i,j) = 0,1 ,\\ 0, & \quad \text{else.} \end{array} \right. \] For now we set $a_{ij}=1$ \footnote{The weights $a_{ij}$ can be chosen to be biased so that edges further away from both sinks get a lower value than those close to a sink. This gives another speed-up, especially in high dimensions.}. We introduce sinks by removing the edges leading to/from the vertices that represent the patterns we want to memorise. For simplicity we shall consider the example of only two `sink states', but this case can easily be generalised. Removing the edges is necessary since, once the walker has arrived at a sink, it is supposed to be trapped with no possibility of leaving. Since in continuous quantum walks the adjacency matrix plays the role of the Hamiltonian, the graph structure needs to be undirected to ensure the Hermiticity of the Hamiltonian. This is why, in removing the edges leading out of the sinks, we have to remove the edges leading to the sinks at the same time. The graph structure of the coherent part of the stochastic quantum walk is sketched in Fig. \ref{sqwHC} on the left.\\ The dissipative part of the GKLS master equation (\ref{lindblad}) can be written with the help of jump operators $L_k$. We use these jump operators to account for the `directed' part of the walk. Each edge $i\leftrightarrow j$ between vertices $\ket{i}\bra{i}$ and $\ket{j}\bra{j}$ is assigned a jump operator $L_k =L_{ i\rightarrow j} =\ket{j}\bra{i}$, where $\ket{j}\bra{j}$ is the vertex closer to a sink state. If both vertices sharing an edge have the same distance to a sink, no jump operator is attributed to that edge. This setup creates a flow towards the sink states in the dissipative part and builds a `bridge' between the graph structure of the coherent part and the disconnected sinks. The graph structure of the dissipative part of the stochastic quantum walk is sketched in Fig. \ref{sqwHC} on the right. \\ \begin{figure} \caption{(Colour online) Illustration of the construction of the graph for the Stochastic Quantum Walk described in the text. The left figure represents the coherent part, using a Hamiltonian derived from the graph's adjacency matrix. The sinks ($\ket{7}$, \dots) are disconnected from the coherent part.} \label{sqwHC} \end{figure} The resulting master equation for the stochastic quantum walk with the two sink states $\ket{l}\bra{l},\ket{m}\bra{m}$ is then given by Eq.
(\ref{lindblad}) with \begin{align*} H &= \sum \limits_{\langle i,j\rangle \neq l,m}a_{ij} \ket{i}\bra{j},\\ L_k &= L_{ i\rightarrow j} = \ket{j}\bra{i}, \end{align*} where $\langle i,j\rangle = \{\ket{i}\bra{i},\ket{j}\bra{j} \in \mathcal{V}^{2^N} \,|\, d_H(i,j) =1 \}$ is a pair of connected vertices, $i\rightarrow j = \{\ket{i}\bra{i},\ket{j}\bra{j} \in \mathcal{V}^{2^N} \,|\; \mathrm{min}[d_H(j,l),d_H(j,m)] \leq \mathrm{min}[d_H(i,l),d_H(i,m)] \}$ denotes a pair of connected vertices in which $\ket{j}\bra{j}$ is the vertex at least as close to a sink state, and $a_{ij} = a_{ji}$.\\ \begin{figure} \caption{(Colour online) Example of the evolution of the QNN firing states' probabilities in the Stochastic Quantum Walk on the hypercube of dimension $N =3$ as introduced here, with sinks at $\ket{101}$, \dots} \label{HCplot1} \end{figure} \begin{figure} \caption{(Colour online) In this example, the sink states are given by $\ket{011}$, \dots} \label{HCplot2} \end{figure} \subsection{Results} The Stochastic Quantum Walk starts at the vertex representing the initial firing pattern and propagates over the hypercube. After a time evolution on the order of several time units, the walk always finds the sink closest to the initial state in terms of Hamming distance with a probability of nearly $1$ (Fig. \ref{HCplot1}). The model consequently shows the basic neural network feature of associative memory. If the two sinks have the same distance to the initial state, the output is an equal probability for both sink states (Fig. \ref{HCplot2}). This is an improvement over a classical, deterministic associative memory, which favours one of two states of equal Hamming distance to the initial state. \\ \begin{figure} \caption{(Colour online) Mixing time $T_M$ in inverse units of $\gamma$ to reach a steady state as a function of the parameters $\kappa$ and $\gamma$ in the Stochastic Quantum Walk on the $4$-dimensional hypercube, with sink states $\ket{1011}$, \dots} \label{parHC} \end{figure} It turns out that the dynamics of the Stochastic Quantum Walk on the hypercube are mainly influenced by the incoherent part of Eq. (\ref{lindblad}). Figure \ref{parHC} shows the time until the average probability distribution reaches the correct stable state (the mixing time) as a function of the two parameters $\kappa$ and $\gamma$ of Eq. (\ref{lindblad}). One can see that, for small values of $\gamma$, the time to reach a stable distribution depends only on $\kappa$, which sets the strength of the coherent or quantum part of the stochastic walk. However, for values $\gamma < 1$, the quantum part can increase the speed of the walk by several time units. Since the quantum walk has been shown to traverse the hypercube exponentially faster than a classical walk \citep{kempe02}, the contribution of the quantum speed-up might be larger in higher dimensions. Due to the exponential growth in the dimension of the Hamiltonian, simulations in dimensions $\geq 7$ require large computational resources. \\
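The construction described in this section is small enough to be simulated directly. The sketch below (Python with NumPy, written by us as an illustration; the choice of sink states, initial pattern, integration scheme and parameters is ours) builds the hypercube Hamiltonian with the sink vertices disconnected, attaches jump operators pointing towards the sinks, integrates Eq. (\ref{lindblad}) with a simple Runge-Kutta scheme and prints the final sink populations.

\begin{verbatim}
import numpy as np
from itertools import product

N = 3                                    # number of qurons -> 2**N firing patterns
dim = 2 ** N
sinks = [0b101, 0b010]                   # two memorised patterns (arbitrary choice)
start = 0b100                            # initial pattern, Hamming-closer to 101
kappa, gamma = 1.0, 1.0                  # weights of the coherent/dissipative parts

def hamming(i, j):
    return bin(i ^ j).count("1")

def dist_to_sink(i):
    return min(hamming(i, s) for s in sinks)

# Coherent part: hypercube adjacency with every edge touching a sink removed.
# (Self-loops only shift the non-sink block by a constant and are omitted here.)
H = np.zeros((dim, dim))
for i, j in product(range(dim), repeat=2):
    if hamming(i, j) == 1 and i not in sinks and j not in sinks:
        H[i, j] = 1.0

# Dissipative part: one jump operator per edge, pointing to the vertex that is
# strictly closer to a sink; edges between equidistant vertices get none.
jumps = []
for i, j in product(range(dim), repeat=2):
    if hamming(i, j) == 1 and dist_to_sink(j) < dist_to_sink(i):
        L = np.zeros((dim, dim))
        L[j, i] = 1.0
        jumps.append(L)

def gkls_rhs(rho):
    # Right-hand side of the GKLS master equation, Eq. (lindblad)
    out = -1j * kappa * (H @ rho - rho @ H)
    for L in jumps:
        LdL = L.T @ L
        out -= gamma * (0.5 * LdL @ rho + 0.5 * rho @ LdL - L @ rho @ L.T)
    return out

rho = np.zeros((dim, dim), dtype=complex)
rho[start, start] = 1.0                  # pure initial firing pattern

dt, steps = 0.01, 2000                   # classical fourth-order Runge-Kutta
for _ in range(steps):
    k1 = gkls_rhs(rho)
    k2 = gkls_rhs(rho + 0.5 * dt * k1)
    k3 = gkls_rhs(rho + 0.5 * dt * k2)
    k4 = gkls_rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

pops = np.real(np.diag(rho))
for s in sinks:
    print(f"population of sink |{s:03b}>: {pops[s]:.3f}")
print("total probability:", round(float(pops.sum()), 3))
\end{verbatim}

In the purely dissipative limit ($\kappa \rightarrow 0$) the jump operators funnel the walker deterministically into the sink closest to the initial pattern; the coherent part adds transitions among the non-sink vertices, so the printed populations show how the probability is shared between the two memorised patterns for the chosen parameters.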
\section{Conclusions} This article studied some aspects of the application of quantum walks to Quantum Neural Networks. It was argued that a direct translation of the neural updating mechanism into a Hadamard-like transformation faces the problem of a nonunitary coin operator. This stems from the dissipative nature of the neural activation function and is symptomatic of the attempt to combine the attractor-like dynamics of neural networks with the linear, unitary dynamics of quantum objects. We concluded that decoherence needs to be introduced into the model. A Stochastic Quantum Walk on the hypercube was therefore constructed, and we could demonstrate its property of associative memory, an important feature of neural networks. Due to the weak dependence on the quantum evolution or coherent part of the walk, this model is only to a limited extent a candidate for a quantum walk on the firing states of a Quantum Neural Network. However, these results can be seen as a first attempt in this direction and serve as an example of the application of quantum walks to obtain certain algorithmic dynamics of quantum systems. \\ There are other versions of quantum walks that might be worth investigating in order to overcome the flaws presented by Stochastic Quantum Walks and coined quantum walks. For example, there are different ways to introduce decoherence into the dynamics, such as projective measurements on the coin \citep{brundec03, kendon03, kendon07}. In fact, measurements have been proposed by several authors searching for a QNN model to account for the nonlinear updating process of neurons in a quantum regime \citep{kak95, perus00, menneer95, zak98}. Another interesting class of quantum walks are the recently developed Open Quantum Walks \citep{attal12, sinayskiy12, bauer13}. Based on the theory of open quantum systems, Open Quantum Walks describe a walker whose internal degrees of freedom interact with an environment and influence the walker's external degree of freedom. The formalism shows a striking analogy to the updating function of neurons, with the advantage that it does not require the global coherence of QNN states that the walks on network states investigated here rely on. Open Quantum Walks might consequently be a natural candidate when studying possible dynamics of Quantum Neural Networks.\\ The underlying idea of this paper was to use the formalism of quantum walks in order to find a dynamic evolution of a Quantum Neural Network that optimises the computational properties of classical neural networks \citep{schuld14}. In a second step, one could then attempt to attribute this dynamic evolution to physical processes, possibly leading in the long run to a quantum model of biological neural networks. It is therefore interesting to ask how the model can incorporate learning, a mechanism characteristic of neural networks. It is important to emphasise again that the nodes of the graph constructed in Fig. \ref{sqwHC} do not represent qurons, but entire firing states of a Quantum Neural Network, and the edges $a_{ij}$ consequently do not correspond to the neural weights $w_{ij}, \; i,j= 1,...,N$. However, similar to the way Hopfield networks `learn' a pattern by choosing appropriate weights that imprint the memory states into the energy function of Eq. (\ref{energy}), choosing the connections $a_{ij}$ defines the dynamics of the Quantum Associative Memory model presented here. In both cases, learning is static, i.e. done by the initial construction of the network (or the graph). To obtain a quantum model that includes dynamic learning, it would be a fruitful perspective to construct a quantum walk that simulates the above-mentioned \textit{feed-forward neural networks}. Feed-forward networks are dynamically trained by so-called backpropagation algorithms, where random initial weights are repeatedly adjusted to minimise an error function comparing the target outputs of a training set to the actual outputs calculated by the neural network \citep{rabinovich06}.
Such a quantum walk would be required to reproduce feed-forward networks' characteristics such as pattern recognition and could serve as an interesting continuation of the results found here. \\ \begin{thebibliography}{66} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Kempe}(2003)}]{kempe03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Contemp. Phys.}\ }\textbf {\bibinfo {volume} {44}},\ \bibinfo {pages} {307} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kendon}(2007)}]{kendon07} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Kendon}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Math. Structures Comput. Sci.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {1169} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Venegas-Andraca}(2012)}]{venegas12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Venegas-Andraca}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Processing}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {1015} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2001)\citenamefont {Aharonov}, \citenamefont {Ambainis}, \citenamefont {Kempe},\ and\ \citenamefont {Vazirani}}]{aharonov01} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ambainis}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vazirani}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proc.
33rd STOC (ACM)}\ ,\ \bibinfo {pages} {50–59}} (\bibinfo {year} {2001})},\ \bibinfo {note} {quant-ph/0012090}\BibitemShut {NoStop} \bibitem [{\citenamefont {Childs}\ \emph {et~al.}(2002)\citenamefont {Childs}, \citenamefont {Farhi},\ and\ \citenamefont {Gutmann}}]{childs02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Childs}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Processing}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {35} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Whitfield}\ \emph {et~al.}(2010)\citenamefont {Whitfield}, \citenamefont {Rodr\'\i~guez-Rosario},\ and\ \citenamefont {Aspuru-Guzik}}]{whitfield10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Rodr\'\i~guez Rosario}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {022323} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Attal}\ \emph {et~al.}(2012)\citenamefont {Attal}, \citenamefont {Petruccione},\ and\ \citenamefont {Sinayskiy}}]{attal12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Attal}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Petruccione}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Sinayskiy}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {376}},\ \bibinfo {pages} {1545} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Moore}\ and\ \citenamefont {Russell}(2002)}]{moore02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Moore}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Russell}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Randomization and Approximation Techniques in Computer Science}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {2002})\ pp.\ \bibinfo {pages} {164--178}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ambainis}(2003)}]{ambainis03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ambainis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Int. J. 
Quantum Inf.}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {507} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ball}(2011)}]{ball11} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Ball}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {272} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Engel}\ \emph {et~al.}(2007)\citenamefont {Engel}, \citenamefont {Calhoun}, \citenamefont {Read}, \citenamefont {Ahn}, \citenamefont {Man{\v{c}}al}, \citenamefont {Cheng}, \citenamefont {Blankenship},\ and\ \citenamefont {Fleming}}]{engel07} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Engel}}, \bibinfo {author} {\bibfnamefont {T.~R.}\ \bibnamefont {Calhoun}}, \bibinfo {author} {\bibfnamefont {E.~L.}\ \bibnamefont {Read}}, \bibinfo {author} {\bibfnamefont {T.-K.}\ \bibnamefont {Ahn}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Man{\v{c}}al}}, \bibinfo {author} {\bibfnamefont {Y.-C.}\ \bibnamefont {Cheng}}, \bibinfo {author} {\bibfnamefont {R.~E.}\ \bibnamefont {Blankenship}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~R.}\ \bibnamefont {Fleming}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {446}},\ \bibinfo {pages} {782} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Panitchayangkoon}\ \emph {et~al.}(2010)\citenamefont {Panitchayangkoon}, \citenamefont {Hayes}, \citenamefont {Fransted}, \citenamefont {Caram}, \citenamefont {Harel}, \citenamefont {Wen}, \citenamefont {Blankenship},\ and\ \citenamefont {Engel}}]{panitchayangkoon10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Panitchayangkoon}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hayes}}, \bibinfo {author} {\bibfnamefont {K.~A.}\ \bibnamefont {Fransted}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Caram}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Harel}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wen}}, \bibinfo {author} {\bibfnamefont {R.~E.}\ \bibnamefont {Blankenship}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Engel}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {12766} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ishizaki}\ and\ \citenamefont {Fleming}(2009)}]{ishizaki09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ishizaki}}\ and\ \bibinfo {author} {\bibfnamefont {G.~R.}\ \bibnamefont {Fleming}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {17255} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hoyer}\ \emph {et~al.}(2010)\citenamefont {Hoyer}, \citenamefont {Sarovar},\ and\ \citenamefont {Whaley}}]{hoyer10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hoyer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sarovar}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~B.}\ \bibnamefont {Whaley}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {065041} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mohseni}\ \emph {et~al.}(2008)\citenamefont {Mohseni}, \citenamefont {Rebentrost}, \citenamefont {Lloyd},\ and\ \citenamefont {Aspuru-Guzik}}]{mohseni08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rebentrost}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys.}\ }\textbf {\bibinfo {volume} {129}},\ \bibinfo {pages} {174106} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rebentrost}\ \emph {et~al.}(2009)\citenamefont {Rebentrost}, \citenamefont {Mohseni}, \citenamefont {Kassal}, \citenamefont {Lloyd},\ and\ \citenamefont {Aspuru-Guzik}}]{rebentrost09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rebentrost}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Kassal}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {033003} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koch}\ and\ \citenamefont {Hepp}(2006)}]{koch06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Koch}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Hepp}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {440}},\ \bibinfo {pages} {611} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hameroff}\ and\ \citenamefont {Penrose}(2013)}]{hameroff13} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}\ \bibnamefont {Hameroff}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Penrose}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Life Rev.}\ }\textbf {\bibinfo {volume} {403}}\ (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Freeman}\ and\ \citenamefont {Vitiello}(2008)}]{freeman08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Freeman}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vitiello}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {304042} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tegmark}(2000)}]{tegmark00} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tegmark}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo {pages} {4194} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Andrecut}\ and\ \citenamefont {Ali}(2002)}]{andrecut02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Andrecut}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ali}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Int. J. Mod. Phys. 
C}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {75} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Altaisky}(2001)}]{altaisky01} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Altaisky}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint quant-ph/0107012}\ } (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gupta}\ and\ \citenamefont {Zia}(2001)}]{gupta01} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gupta}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zia}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Comput. System Sci.}\ }\textbf {\bibinfo {volume} {63}},\ \bibinfo {pages} {355} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Behrman}\ \emph {et~al.}(2002)\citenamefont {Behrman}, \citenamefont {Chandrashekar}, \citenamefont {Wang}, \citenamefont {Belur}, \citenamefont {Steck},\ and\ \citenamefont {Skinner}}]{behrman02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~C.}\ \bibnamefont {Behrman}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Chandrashekar}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Belur}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Steck}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Skinner}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint quant-ph/0202131}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fei}\ and\ \citenamefont {Baoyu}(2003)}]{fei03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Fei}}\ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Baoyu}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Neural Networks and Signal Processing, 2003. Proceedings of the 2003 International Conference on}}},\ Vol.~\bibinfo {volume} {1}\ (\bibinfo {organization} {IEEE},\ \bibinfo {year} {2003})\ pp.\ \bibinfo {pages} {539--542}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2012)\citenamefont {Zhou}, \citenamefont {Wang}, \citenamefont {Wu},\ and\ \citenamefont {Shi}}]{zhou12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Wu}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Shi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Int. J. Theor. Phys.}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {705} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Oliveira}\ \emph {et~al.}(2008)\citenamefont {Oliveira}, \citenamefont {Silva}, \citenamefont {Ludermir}, \citenamefont {Leonel}, \citenamefont {Galindo},\ and\ \citenamefont {Pereira}}]{oliveira08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Oliveira}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {T.~B.}\ \bibnamefont {Ludermir}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Leonel}}, \bibinfo {author} {\bibfnamefont {W.~R.}\ \bibnamefont {Galindo}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Pereira}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Neural Networks, 2008. SBRN'08. 
10th Brazilian Symposium on}}}\ (\bibinfo {organization} {IEEE},\ \bibinfo {year} {2008})\ pp.\ \bibinfo {pages} {147--152}\BibitemShut {NoStop} \bibitem [{\citenamefont {T{\'o}th}\ \emph {et~al.}(2000)\citenamefont {T{\'o}th}, \citenamefont {Lent}, \citenamefont {Tougaw}, \citenamefont {Brazhnik}, \citenamefont {Weng}, \citenamefont {Porod}, \citenamefont {Liu},\ and\ \citenamefont {Huang}}]{toth00} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {T{\'o}th}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont {Lent}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont {Tougaw}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Brazhnik}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Weng}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Porod}}, \bibinfo {author} {\bibfnamefont {R.-W.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.-F.}\ \bibnamefont {Huang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint cond-mat/0005038}\ } (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Feller}(2008)}]{feller08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}\ \bibnamefont {Feller}},\ }\href@noop {} {\emph {\bibinfo {title} {An introduction to probability theory and its applications}}}\ (\bibinfo {publisher} {John Wiley \& Sons},\ \bibinfo {year} {2008})\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1} \BibitemOpen \bibinfo {note} {Some authors extend the random walk model to biased probabilities \protect \citep {mackay02, kendon07, stefanak09, ribeiro04}. These so called `biased random walks' are nothing else than Markov chains introduced here. We will use the more general definition.}\BibitemShut {Stop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2002)\citenamefont {Aharonov}, \citenamefont {Ambainis}, \citenamefont {Kempe},\ and\ \citenamefont {Vazirani}}]{aharonov02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ambainis}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vazirani}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv:quant-ph/0012090}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont {Chuang}(2010)}]{nielsen10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum computation and quantum information}}}\ (\bibinfo {publisher} {Cambridge university press},\ \bibinfo {year} {2010})\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ and\ \citenamefont {Manouchehri}(2013)}]{wang13} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Manouchehri}},\ }\href@noop {} {\emph {\bibinfo {title} {Physical Implementation of Quantum Walks}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {2013})\BibitemShut {NoStop} \bibitem [{\citenamefont {Travaglione}\ and\ \citenamefont {Milburn}(2002)}]{travaglione02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Travaglione}}\ and\ \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Milburn}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} 
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {032310} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kempe}(2002)}]{kempe02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint quant-ph/0205083}\ } (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shenvi}\ \emph {et~al.}(2003)\citenamefont {Shenvi}, \citenamefont {Kempe},\ and\ \citenamefont {Whaley}}]{shenvi03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Shenvi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~B.}\ \bibnamefont {Whaley}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {052307} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ and\ \citenamefont {Gutmann}(1998)}]{farhi98} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {915} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Strauch}(2006)}]{strauch06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~W.}\ \bibnamefont {Strauch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {030301} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lindblad}(1976)}]{lindblad76} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Lindblad}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Commun. Math. Phys.}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {119} (\bibinfo {year} {1976})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gorini}\ \emph {et~al.}(1978)\citenamefont {Gorini}, \citenamefont {Frigerio}, \citenamefont {Verri}, \citenamefont {Kossakowski},\ and\ \citenamefont {Sudarshan}}]{gorini78} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Gorini}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Frigerio}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Verri}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kossakowski}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Sudarshan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rep. Math. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {149} (\bibinfo {year} {1978})}\BibitemShut {NoStop} \bibitem [{Note2()}]{Note2} \BibitemOpen \bibinfo {note} {The reason why a separate coin space is introduced is that the unitarity condition is not fulfilled by most examples of walks (a detailed argumentation can be found in \protect \citep {ambainis03}).}\BibitemShut {Stop} \bibitem [{\citenamefont {Brun}\ \emph {et~al.}(2003{\natexlab{a}})\citenamefont {Brun}, \citenamefont {Carteret},\ and\ \citenamefont {Ambainis}}]{brun03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont {Brun}}, \bibinfo {author} {\bibfnamefont {H.~A.}\ \bibnamefont {Carteret}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ambainis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {052317} (\bibinfo {year} {2003}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Purves}(2008)}]{purves08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Purves}},\ }\href@noop {} {\emph {\bibinfo {title} {Neuroscience}}},\ \bibinfo {edition} {3rd}\ ed.\ (\bibinfo {publisher} {Sinauer},\ \bibinfo {year} {2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Dayan}\ and\ \citenamefont {Abbott}(2001)}]{dayan01} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Dayan}}\ and\ \bibinfo {author} {\bibfnamefont {L.~F.}\ \bibnamefont {Abbott}},\ }\href@noop {} {\emph {\bibinfo {title} {Theoretical neuroscience}}},\ Vol.~\bibinfo {volume} {31}\ (\bibinfo {publisher} {MIT press Cambridge, MA},\ \bibinfo {year} {2001})\BibitemShut {NoStop} \bibitem [{\citenamefont {McCulloch}\ and\ \citenamefont {Pitts}(1943)}]{mcculloch43} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~S.}\ \bibnamefont {McCulloch}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Pitts}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {B. Math. Biol.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {115} (\bibinfo {year} {1943})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rabinovich}\ \emph {et~al.}(2006)\citenamefont {Rabinovich}, \citenamefont {Varona}, \citenamefont {Selverston},\ and\ \citenamefont {Abarbanel}}]{rabinovich06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~I.}\ \bibnamefont {Rabinovich}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Varona}}, \bibinfo {author} {\bibfnamefont {A.~I.}\ \bibnamefont {Selverston}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~D.}\ \bibnamefont {Abarbanel}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {1213} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hopfield}(1982)}]{hopfield82} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Hopfield}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Proc. Natl. Acad. Sci.}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {2554} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rojas}(1996)}]{rojas96} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Rojas}},\ }\href@noop {} {\emph {\bibinfo {title} {Neural Nets: A Systematic Introduction}}}\ (\bibinfo {publisher} {Springer-Verlag New York Incorporated},\ \bibinfo {year} {1996})\BibitemShut {NoStop} \bibitem [{\citenamefont {Kak}(1995)}]{kak95} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Kak}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Adv. Imag. Elect. Phys.}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {259} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peru{\v{s}}}(2000)}]{perus00} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Peru{\v{s}}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Neural Netw. 
\end{thebibliography} \end{document}
\begin{document} \title{Inequalities for integrals of the modified Struve function of the first kind II} \author{Robert E. Gaunt\footnote{School of Mathematics, The University of Manchester, Manchester M13 9PL, UK}} \date{\today} \maketitle \begin{abstract}Simple inequalities are established for integrals of the type $\int_0^x \mathrm{e}^{-\gamma t} t^{-\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t$, where $x>0$, $0\leq\gamma<1$, $\nu>-\frac{3}{2}$ and $\mathbf{L}_{\nu}(x)$ is the modified Struve function of the first kind. In most cases, these inequalities are tight in certain limits. As a consequence we deduce a tight double inequality, involving the modified Struve function $\mathbf{L}_{\nu}(x)$, for a generalized hypergeometric function. \end{abstract} \noindent{{\bf{Keywords:}}} Modified Struve function; inequality; integral \noindent{{\bf{AMS 2010 Subject Classification:}}} Primary 33C10; 26D15 \section{Introduction}\label{intro} In a series of recent papers \cite{gaunt ineq1, gaunt ineq3, gaunt ineq6}, simple lower and upper bounds, involving the modified Bessel function of the first kind $I_\nu(x)$, were obtained for the integrals \begin{equation}\label{intbes}\int_0^x \mathrm{e}^{-\gamma t} t^{\pm\nu} I_\nu(t)\,\mathrm{d}t, \end{equation} where $x>0$, $0\leq\gamma<1$ and $\nu>-\frac{1}{2}$. For $\gamma\not=0$ there do not exist simple closed-form expressions for these integrals. The inequalities of \cite{gaunt ineq1,gaunt ineq3} were needed in the development of Stein's method \cite{stein,chen,np12} for variance-gamma approximation \cite{eichelsbacher, gaunt vg, gaunt vg2}. Moreover, as they are simple and surprisingly accurate, the inequalities may also prove useful in other problems involving modified Bessel functions; see, for example, \cite{baricz3}, in which inequalities for modified Bessel functions of the first kind were used to obtain lower and upper bounds for integrals involving modified Bessel functions of the first kind. The modified Struve function of the first kind, defined for $x\in\mathbb{R}$ and $\nu\in\mathbb{R}$ by \begin{equation*}\mathbf{L}_\nu(x)=\sum_{k=0}^\infty \frac{\big(\frac{1}{2}x\big)^{\nu+2k+1}}{\Gamma(k+\frac{3}{2})\Gamma(k+\nu+\frac{3}{2})}, \end{equation*} is closely related to the modified Bessel function $I_\nu(x)$, and either shares, or has close analogues of, the properties of $I_\nu(x)$ that were used by \cite{gaunt ineq1,gaunt ineq3,gaunt ineq6} to obtain inequalities for the integrals in (\ref{intbes}). The function $\mathbf{L}_\nu(x)$ is itself a widely used special function; see a standard reference, such as \cite{olver}, for its basic properties. It has numerous applications in the applied sciences, including leakage inductance in transformer windings \cite{hw94}, perturbation approximations of lee waves in a stratified flow \cite{mh69}, and scattering of plane waves by soft obstacles \cite{s84}; see \cite{bp13} for a list of further application areas. It is therefore a natural problem to ask for simple inequalities, involving the modified Struve function of the first kind, for the integrals \begin{equation}\label{intstruve}\int_0^x \mathrm{e}^{-\gamma t} t^{\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t, \qquad \int_0^x \mathrm{e}^{-\gamma t} t^{-\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t, \end{equation} where $x>0$, $0\leq\gamma<1$ and $\nu>-\frac{3}{2}$.
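Although the focus of this note is on analytic bounds, it is often convenient to evaluate the integrals in (\ref{intstruve}) numerically when checking such bounds. The following minimal sketch (not part of the original analysis) assumes that SciPy is available; it uses \texttt{scipy.special.modstruve} for $\mathbf{L}_\nu$ and \texttt{scipy.integrate.quad} for the quadrature, and the chosen parameter values are illustrative only.
\begin{verbatim}
# Minimal numerical sketch: evaluate the integrals in (intstruve) by
# quadrature.  scipy.special.modstruve(v, x) computes L_v(x); the values
# of x, nu and gamma below are arbitrary illustrations.
import numpy as np
from scipy.special import modstruve
from scipy.integrate import quad

def struve_integral(x, nu, gamma, power):
    # int_0^x exp(-gamma*t) * t**power * L_nu(t) dt, with power = +nu or -nu
    integrand = lambda t: np.exp(-gamma * t) * t**power * modstruve(nu, t)
    value, _ = quad(integrand, 0.0, x)
    return value

x, nu, gamma = 5.0, 1.5, 0.3
print(struve_integral(x, nu, gamma, nu))    # first integral in (intstruve)
print(struve_integral(x, nu, gamma, -nu))   # second integral in (intstruve)
\end{verbatim}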
When $\gamma=0$ both integrals in (\ref{intstruve}) can be evaluated exactly, because the modified Struve function $\mathbf{L}_{\nu}(x)$ can be represented as a generalized hypergeometric function. To see this, recall that the generalized hypergeometric function (see \cite{olver} for this definition and further properties) is defined by \begin{equation*}{}_pF_q\big(a_1,\ldots,a_p;b_1,\ldots,b_q;x\big)=\sum_{k=0}^\infty \frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\frac{x^k}{k!}, \end{equation*} and the Pochhammer symbol is given by $(a)_0=1$ and $(a)_k=a(a+1)(a+2)\cdots(a+k-1)$, $k\geq1$. Then, for $-\nu-\frac{3}{2}\notin\mathbb{N}$, we have the representation \begin{equation*}\mathbf{L}_\nu(x)=\frac{x^{\nu+1}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})} {}_1F_2\bigg(1;\frac{3}{2},\nu+\frac{3}{2};\frac{x^2}{4}\bigg) \end{equation*} (see also \cite{bp13} for other representations in terms of the generalized hypergeometric function). A straightforward calculation then yields \begin{equation}\label{besint6}\int_0^x \frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t=\frac{x^{2}}{\sqrt{\pi}2^{\nu+1}\Gamma(\nu+\frac{3}{2})}{}_2F_3\bigg(1,1;\frac{3}{2},2,\nu+\frac{3}{2};\frac{x^2}{4}\bigg), \end{equation} with a similar formula available for $\int_0^x t^{\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t$. When $\gamma\not=0$, there does not, however, exist a closed-form formula for the integrals in (\ref{intstruve}). Moreover, even when $\gamma=0$ the integrals are given in terms of the generalized hypergeometric function. This provides the motivation for establishing simple bounds, involving the modified Struve function $\mathbf{L}_\nu(x)$, for these integrals. Inequalities were established by \cite{gaunt ineq4} for the first integral in (\ref{intstruve}) by adapting the techniques used by \cite{gaunt ineq1,gaunt ineq3} to bound the related integral involving the modified Bessel function $I_\nu(x)$. In this note, we obtain lower and upper bounds for the second integral in (\ref{intstruve}). We proceed in a similar manner to \cite{gaunt ineq4} by adapting the methods used by \cite{gaunt ineq6} to bound related integrals involving $I_\nu(x)$, and the inequalities obtained in this note take a similar form to those obtained by \cite{gaunt ineq6}. As already noted, the reason for this similarity is that many of the properties of $I_\nu(x)$ that were exploited in the proofs of \cite{gaunt ineq1,gaunt ineq3,gaunt ineq6} are shared by $\mathbf{L}_\nu(x)$, which we now list. All these formulas can be found in \cite{olver}, except for the inequality, which is given in \cite{bp14}. Further inequalities for $\mathbf{L}_\nu(x)$ can be found in \cite{bp14,bps17,gaunt ineq5,jn98}, some of which improve on the inequality of \cite{bp14}. For positive values of $x$ the function $\mathbf{L}_{\nu}(x)$ is positive for $\nu>-\frac{3}{2}$.
The function $\mathbf{L}_{\nu}(x)$ satisfies the recurrence relation and differentiation formula \begin{align}\label{Iidentity}\mathbf{L}_{\nu -1} (x)- \mathbf{L}_{\nu +1} (x) &= \frac{2\nu}{x} \mathbf{L}_{\nu} (x)+\frac{\big(\frac{1}{2}x\big)^\nu}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}, \\ \label{diffone}\frac{\mathrm{d}}{\mathrm{d}x} \bigg(\frac{\mathbf{L}_{\nu} (x)}{x^\nu} \bigg) &= \frac{\mathbf{L}_{\nu +1} (x)}{x^\nu}+\frac{2^{-\nu}}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}, \end{align} and has the following asymptotic properties: \begin{align}\label{Itend0}\mathbf{L}_{\nu}(x)&\sim \frac{2}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}\bigg(\frac{x}{2}\bigg)^{\nu+1}, \quad x \downarrow 0, \: \nu>-\tfrac{3}{2}, \\ \label{Itendinfinity}\mathbf{L}_{\nu}(x)&\sim \frac{\mathrm{e}^{x}}{\sqrt{2\pi x}}, \quad x \rightarrow\infty, \: \nu\in\mathbb{R}. \end{align} Let $x > 0$. Then \begin{equation}\label{Imon}\mathbf{L}_{\nu} (x) < \mathbf{L}_{\nu - 1} (x), \quad \nu \geq \tfrac{1}{2}. \end{equation} We end this introduction by noting that \cite{gaunt ineq6} also derived lower and upper bounds for the integral $\int_x^\infty \mathrm{e}^{\gamma t} t^{-\nu} K_\nu(t)\,\mathrm{d}t$, where $x>0$, $\nu>-\frac{1}{2}$, $0\leq\gamma<1$ and $K_\nu(x)$ is a modified Bessel function of the second kind. Analogously to the problem studied in this note, it is natural to ask for bounds for the integral $\int_x^\infty \mathrm{e}^{\gamma t} t^{-\nu} \mathbf{M}_\nu(t)\,\mathrm{d}t$, where $\mathbf{M}_\nu(x)=\mathbf{L}_\nu(x)-I_\nu(x)$ is the modified Struve function of the second kind. However, the inequalities of \cite{gaunt ineq6} do not have a natural analogue for $\mathbf{M}_\nu(x)$; a discussion as to why this is the case is given in the Introduction of \cite{gaunt ineq4}. \section{Inequalities for integrals of the modified Struve function of the first kind}\label{sec2} The following theorem complements the inequalities for the integral $\int_0^x \mathrm{e}^{-\gamma t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t$ that are given in Theorem 2.1 of \cite{gaunt ineq4}. The inequalities are natural analogues of the inequalities obtained in Theorem 2.5 of \cite{gaunt ineq6} for the related integrals involving the modified Bessel function $I_\nu(x)$. Before stating the theorem, we introduce the notation \begin{align*}a_{\nu,n}&=\frac{2\nu+n+1}{\sqrt{\pi}2^{\nu+n+2}(n+2)(\nu+n+1)\Gamma(\nu+n+\frac{5}{2})}, \\ b_{\nu,n}&=\frac{(2\nu+n+1)(2\nu+n+3)}{\sqrt{\pi}2^{\nu+n+4}(n+1)(n+4)(\nu+n+3)\Gamma(\nu+n+\frac{9}{2})}, \\ c_{\nu,n}&=\frac{2\nu+n+1}{\sqrt{\pi}2^{\nu+n+1}(n+1)(n+2)\Gamma(\nu+n+\frac{5}{2})}. \end{align*} \begin{theorem}\label{tiger1}Let $0<\gamma<1$ and $n>-1$.
Then, for all $x>0$, \begin{align}\label{bi1}\int_0^x \frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t&>\frac{\mathbf{L}_\nu(x)}{x^\nu}-\frac{x}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}, \quad \nu>-\tfrac{3}{2}, \\ \label{bi2}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t&>\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu}-a_{\nu,n}x^{n+2}, \quad \nu>-\tfrac{1}{2}(n+1), \\ \label{bi3}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t&<\frac{2(\nu+n+1)}{n+1}\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu}-\frac{2\nu+n+1}{n+1}\frac{\mathbf{L}_{\nu+n+3}(x)}{x^\nu}\nonumber \\ &\quad+b_{\nu,n}x^{n+4}-c_{\nu,n}x^{n+2}, \quad \nu>-\tfrac{1}{2}(n+1), \\ \label{bi4}\int_0^x \mathrm{e}^{-\gamma t}\frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t&>\frac{1}{1-\gamma}\bigg(\mathrm{e}^{-\gamma x}\int_0^x\frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t-\frac{1-(1+\gamma x)\mathrm{e}^{-\gamma x}}{\sqrt{\pi}\gamma2^\nu\Gamma(\nu+\frac{3}{2})}\bigg), \quad \nu>-\tfrac{3}{2}, \\ \label{bi5}\int_0^x \mathrm{e}^{-\gamma t}\frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t&>\frac{1}{1-\gamma}\bigg(\mathrm{e}^{-\gamma x}\frac{\mathbf{L}_{\nu}(x)}{x^\nu}-\frac{(1+\gamma x)(1-\mathrm{e}^{-\gamma x})}{\sqrt{\pi}\gamma2^\nu\Gamma(\nu+\frac{3}{2})}\bigg), \quad \nu>-\tfrac{3}{2}. \end{align} We have equality in (\ref{bi2}) and (\ref{bi3}) if $\nu=-\frac{1}{2}(n+1)$. Inequalities (\ref{bi1})--(\ref{bi5}) are tight as $x\rightarrow\infty$ and inequality (\ref{bi3}) is also tight as $x\downarrow0$. Now suppose that $\nu>-\frac{1}{2}(n+1)$, and let \begin{equation*}D_{\nu,n}:=\sup_{x>0}\frac{x^\nu}{\mathbf{L}_{\nu+n}(x)}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t. \end{equation*} The existence of $D_{\nu,n}$ is guaranteed by inequalities (\ref{bi3}) and (\ref{Imon}), and we have $D_{\nu,n}<2(\nu+n+1)$. Suppose also that $0<\gamma<\frac{1}{D_{\nu,n}}$. Then, for all $x>0$, \begin{align}\label{bi7}\int_0^x \mathrm{e}^{-\gamma t}\frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t&<\frac{\mathrm{e}^{-\gamma x}}{1-D_{\nu,n}\gamma}\int_0^x\frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t, \\ \int_0^x \mathrm{e}^{-\gamma t}\frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t&< \frac{\mathrm{e}^{-\gamma x}}{1-D_{\nu,n}\gamma}\bigg(\frac{2(\nu+n+1)}{n+1}\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu} \nonumber \\ \label{bi8}&\quad-\frac{2\nu+n+1}{n+1}\frac{\mathbf{L}_{\nu+n+3}(x)}{x^\nu}+b_{\nu,n}x^{n+4}-c_{\nu,n}x^{n+2} \bigg). \end{align} \end{theorem} \begin{proof}We first establish inequalities (\ref{bi1})--(\ref{bi8}) and then prove that the inequalities are tight in certain limits. (i) From inequality (\ref{Imon}) we obtain \begin{align*}\int_0^x \frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t>\int_0^x \frac{\mathbf{L}_{\nu+1}(t)}{t^\nu}\,\mathrm{d}t =\frac{\mathbf{L}_\nu(x)}{x^\nu}-\frac{x}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}, \end{align*} where we used the differentiation formula (\ref{diffone}) and limiting form (\ref{Itend0}) to evaluate the integral. (ii) The assertion that there is equality in (\ref{bi2}) and (\ref{bi3}) if $\nu=-\frac{1}{2}(n+1)$ can be seen from the fact that both these upper and lower bounds (which we now prove) are equal in this case. We now suppose that $\nu>-\frac{1}{2}(n+1)$. Consider the function \begin{equation*}u(x)=\int_0^x \frac{\mathbf{L}_{\nu +n}(t)}{t^\nu}\,\mathrm{d}t-\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu}+a_{\nu,n}x^{n+2}. \end{equation*} We argue that $u(x)>0$ for all $x>0$, which will prove the result.
We first note that from the differentiation formula (\ref{diffone}) followed by identity (\ref{Iidentity}) we have that \begin{align}&\frac{\mathrm{d}}{\mathrm{d}x}\bigg(\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu}\bigg)=\frac{\mathrm{d}}{\mathrm{d}x}\bigg(x^{n+1}\cdot\frac{\mathbf{L}_{\nu+n+1}(x)}{x^{\nu+n+1}}\bigg)\nonumber\\ &\quad=(n+1)\frac{\mathbf{L}_{\nu+n+1}(x)}{x^{\nu+1}}+\frac{\mathbf{L}_{\nu+n+2}(x)}{x^{\nu}}+\frac{x^{n+1}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}\nonumber\\ &\quad=\frac{n+1}{2(\nu+n+1)}\bigg(\frac{\mathbf{L}_{\nu+n}(x)}{x^\nu}-\frac{\mathbf{L}_{\nu+n+2}(x)}{x^\nu}-\frac{x^{n+1}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}\bigg)\nonumber\\ &\quad\quad+\frac{\mathbf{L}_{\nu+n+2}(x)}{x^\nu}+\frac{x^{n+1}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}\nonumber \\ \label{num3}&\quad=\frac{n+1}{2(\nu+n+1)}\frac{\mathbf{L}_{\nu+n}(x)}{x^\nu}+\frac{2\nu+n+1}{2(\nu+n+1)}\frac{\mathbf{L}_{\nu+n+2}(x)}{x^\nu}+(n+2)a_{\nu,n}x^{n+1}. \end{align} Therefore \begin{align*}u'(x)=\frac{2\nu+n+1}{2(\nu+n+1)}\bigg(\frac{\mathbf{L}_{\nu+n}(x)}{x^\nu}-\frac{\mathbf{L}_{\nu+n+2}(x)}{x^\nu}\bigg) >0, \end{align*} where we used (\ref{Imon}) to obtain the inequality. Also, from (\ref{Itend0}), as $x\downarrow0$, \begin{align*}u(x)&\sim \int_0^x \frac{t^{n+1}}{\sqrt{\pi}2^{\nu+n}\Gamma(\nu+n+\frac{3}{2})}\,\mathrm{d}t-\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}+a_{\nu,n}x^{n+2}\\ &=\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n}(n+2)\Gamma(n+\nu+\frac{3}{2})} -\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}+a_{\nu,n}x^{n+2}\\ &=\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n}\Gamma(\nu+n+\frac{3}{2})}\bigg(\frac{1}{n+2}-\frac{1}{2(\nu+n+\frac{3}{2})}\bigg)+a_{\nu,n}x^{n+2} >0, \end{align*} where the inequality can be seen to hold because $\nu>-\frac{1}{2}(n+1)$. Thus, we conclude that $u(x)>0$ for all $x>0$, as required. (iii) Integrating both sides of (\ref{num3}) over $(0,x)$, applying the fundamental theorem of calculus and rearranging gives \begin{align*}\int_0^x \frac{\mathbf{L}_{\nu +n} (t)}{t^\nu}\,\mathrm{d}t &= \frac{2(\nu+n+1)}{n+1} \frac{\mathbf{L}_{\nu +n+1} (x)}{x^\nu} -\frac{2\nu+n+1}{n+1} \int_0^x \frac{\mathbf{L}_{\nu +n +2} (t)}{t^\nu}\,\mathrm{d}t \\ &\quad-\frac{2\nu+n+1}{n+1}\int_0^x\frac{t^{n+1}}{\sqrt{\pi}2^{\nu+n+1}\Gamma(\nu+n+\frac{5}{2})}\,\mathrm{d}t. \end{align*} Evaluating the second integral on the right-hand side of the above expression and using inequality (\ref{bi2}) to bound the first integral then yields (\ref{bi3}). (iv) Let $\nu>-1$. Then integration by parts and inequality (\ref{bi1}) gives \begin{align*} \int_0^x \mathrm{e}^{-\gamma t} \frac{\mathbf{L}_\nu(t)}{t^\nu} \,\mathrm{d}t &= \mathrm{e}^{-\gamma x}\int_0^x \frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t + \gamma \int_0^x \mathrm{e}^{-\gamma t}\bigg(\int_0^t \frac{\mathbf{L}_\nu(u)}{u^\nu} \,\mathrm{d}u\bigg) \,\mathrm{d}t \\ &> \mathrm{e}^{-\gamma x}\int_0^x \frac{\mathbf{L}_\nu(t)}{t^\nu}\,\mathrm{d}t + \gamma \int_0^x \mathrm{e}^{-\gamma t} \frac{\mathbf{L}_\nu(t)}{t^\nu} \,\mathrm{d}t-\gamma \int_0^x \frac{t\mathrm{e}^{-\gamma t}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})} \,\mathrm{d}t, \end{align*} whence on evaluating $\int_0^xt\mathrm{e}^{-\gamma t} \,\mathrm{d}t=\frac{1}{\gamma^2}(1-(1+\gamma x)\mathrm{e}^{-\gamma x})$ and rearranging we obtain (\ref{bi4}). (v) Apply inequality (\ref{bi1}) to inequality (\ref{bi4}).
(vi) We now prove inequality (\ref{bi7}); the assertion that $D_{\nu,n}<2(\nu+n+1)$ is immediate from inequalities (\ref{bi3}) and (\ref{Imon}). Now, integrating by parts similarly to part (iv), we have \begin{align*}\int_0^x \mathrm{e}^{-\gamma t} \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu} \,\mathrm{d}t &= \mathrm{e}^{-\gamma x}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t + \gamma \int_0^x \mathrm{e}^{-\gamma t}\bigg(\int_0^t \frac{\mathbf{L}_{\nu+n}(u)}{u^\nu} \,\mathrm{d}u\bigg) \,\mathrm{d}t \\ &<\mathrm{e}^{-\gamma x}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t + D_{\nu,n}\gamma \int_0^x \mathrm{e}^{-\gamma t} \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu} \,\mathrm{d}t. \end{align*} As we assumed $0<\gamma<\frac{1}{D_{\nu,n}}$, on rearranging we obtain inequality (\ref{bi7}). (vii) Apply inequality (\ref{bi3}) to inequality (\ref{bi7}). (viii) Finally, we prove that inequalities (\ref{bi1})--(\ref{bi5}) are tight as $x\rightarrow\infty$ and inequality (\ref{bi3}) is also tight as $x\downarrow0$. We begin by noting that a straightforward asymptotic analysis using (\ref{Itendinfinity}) gives that, for $0\leq\gamma<1$, $n>-\tfrac{3}{2}$ and $\nu\in\mathbb{R}$, \begin{equation}\label{eqeq1} \int_0^x \mathrm{e}^{-\gamma t}\frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t\sim \frac{1}{\sqrt{2\pi}(1-\gamma)}x^{-\nu-1/2}\mathrm{e}^{(1-\gamma)x}, \quad x\rightarrow\infty, \end{equation} and we also have \begin{equation}\label{eqeq2}\mathrm{e}^{-\gamma x}\frac{\mathbf{L}_{\nu+n}(x)}{x^\nu}\sim \frac{1}{\sqrt{2\pi}}x^{-\nu-1/2}\mathrm{e}^{(1-\gamma)x}, \quad x\rightarrow\infty. \end{equation} One can now readily check with the aid of (\ref{eqeq1}) and (\ref{eqeq2}) that inequalities (\ref{bi1})--(\ref{bi5}) are tight as $x\rightarrow\infty$. It now remains to prove that inequality (\ref{bi3}) is tight as $x\downarrow0$. From (\ref{Itend0}), we have on the one hand, as $x\downarrow0$, \begin{equation*}\int_0^x \frac{\mathbf{L}_{\nu+n}(t)}{t^\nu}\,\mathrm{d}t\sim\int_0^x \frac{t^{n+1}}{\sqrt{\pi}2^{\nu+n}\Gamma(\nu+n+\frac{3}{2})}\,\mathrm{d}t=\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n}(n+2)\Gamma(\nu+n+\frac{3}{2})}, \end{equation*} and on the other, \begin{align*}&\frac{2(\nu+n+1)}{n+1}\frac{\mathbf{L}_{\nu+n+1}(x)}{x^\nu}-\frac{2\nu+n+1}{n+1}\frac{\mathbf{L}_{\nu+n+3}(x)}{x^\nu}+b_{\nu,n}x^{n+4}-c_{\nu,n}x^{n+2} \\ &\quad \sim \frac{(\nu+n+1)x^{n+2}}{\sqrt{\pi}2^{\nu+n}(n+1)\Gamma(\nu+n+\frac{5}{2})}-\frac{(2\nu+n+1)x^{n+2}}{\sqrt{\pi}2^{\nu+n+1}(n+1)(n+2)\Gamma(\nu+n+\frac{5}{2})} \\ &\quad=\frac{(2(\nu+n+1)(n+2)-(2\nu+n+1))x^{n+2}}{\sqrt{\pi}2^{\nu+n+1}(n+1)(n+2)\Gamma(\nu+n+\frac{5}{2})} \\ &\quad=\frac{2(n+1)(\nu+n+\frac{3}{2})x^{n+2}}{\sqrt{\pi}2^{\nu+n+1}(n+1)(n+2)\Gamma(\nu+n+\frac{5}{2})} =\frac{x^{n+2}}{\sqrt{\pi}2^{\nu+n}(n+2)\Gamma(\nu+n+\frac{3}{2})}, \end{align*} where we used that $u\Gamma(u)=\Gamma(u+1)$. This proves the claim. \end{proof} \begin{remark}The constants $D_{\nu,n}$ can be computed numerically. As an example, we used \emph{Mathematica} to find $D_{0,0}=1.109$, $D_{1,0}=1.331$, $D_{3,0}=1.693$, $D_{5,0}=1.990$ and $D_{10,0}=2.584$. \end{remark} \begin{remark}The upper bounds (\ref{bi7}) and (\ref{bi8}) are not tight in the limits $x\downarrow0$ and $x\rightarrow\infty$, but they are of the correct order in both limits ($O(x^{n+1})$ as $x\downarrow0$, and $O(x^{-\nu-1/2}\mathrm{e}^{(1-\gamma)x})$ as $x\rightarrow\infty$).
The bounds are simple but are not entirely satisfactory in that they only hold for $0<\gamma<\frac{1}{D_{\nu,n}}$, whereas one would like the inequalities to be valid for all $0<\gamma<1$. It should be mentioned that a similar problem was encountered by \cite{gaunt ineq3} in that the upper bounds obtained for $\int_0^x \mathrm{e}^{-\gamma t} t^\nu I_\nu(t)\,\mathrm{d}t$ were only valid for $0<\gamma<\alpha_\nu$, for some $0<\alpha_\nu<1$. \end{remark} We end by noting that one can combine the inequalities of Theorem \ref{tiger1} and the integral formula (\ref{besint6}) to obtain lower and upper bounds for a generalized hypergeometric function. We give an example in the following corollary. \begin{corollary}\label{struvebessel}Let $\nu>\frac{1}{2}$. Then, for all $x>0$, \begin{align}\mathbf{L}_{\nu}(x)-a_{\nu-1,0}x^{\nu+1}&<\frac{x^{\nu+1}}{\sqrt{\pi}2^{\nu}\Gamma(\nu+\frac{1}{2})}{}_2F_3\bigg(1,1;\frac{3}{2},2,\nu+\frac{1}{2};\frac{x^2}{4}\bigg)\nonumber \\ \label{dob22}&<2\nu \mathbf{L}_\nu(x)-(2\nu-1)\mathbf{L}_{\nu+2}(x)+b_{\nu-1,0}x^{\nu+3}-c_{\nu-1,0}x^{\nu+1}. \end{align} \end{corollary} \begin{proof}Combine the integral formula (\ref{besint6}) and inequalities (\ref{bi2}) and (\ref{bi3}) (with $n=0$) of Theorem \ref{tiger1}, and replace $\nu$ by $\nu-1$. \end{proof} \begin{remark}We know from Theorem \ref{tiger1} that the two-sided inequality (\ref{dob22}) is tight in the limit $x\rightarrow\infty$, and the upper bound is also tight as $x\downarrow0$. To elaborate further, we denote by $F_\nu(x)$ the expression involving the generalized hypergeometric function in (\ref{dob22}), and the lower and upper bounds by $L_\nu(x)$ and $U_\nu(x)$. We used \emph{Mathematica} to compute the relative error in approximating $F_\nu(x)$ by $L_\nu(x)$ and $U_\nu(x)$, and numerical results are given in Tables \ref{table1} and \ref{table2}. We observe that, for a given $x$, the relative error in approximating $F_\nu(x)$ by either $L_\nu(x)$ or $U_\nu(x)$ increases as $\nu$ increases. We also notice from Table \ref{table1} that, for a given $\nu$, the relative error in approximating $F_\nu(x)$ by $L_\nu(x)$ decreases as $x$ increases. However, from Table \ref{table2} we see that, for a given $\nu$, as $x$ increases the relative error in approximating $F_\nu(x)$ by $U_\nu(x)$ initially increases before decreasing. This is because the upper bound is tight as $x\downarrow0$. \begin{comment} We now note the bound $\frac{I_{\nu+1}(x)}{I_\nu(x)}>\frac{x}{2(\nu+1)+x}$, $\nu>-1$, which is the simplest lower bound of a sequence of more complicated rational lower bounds given in \cite{nasell2}. $\frac{\mathbf{L}_{\nu+1}(x)}{\mathbf{L}_\nu(x)}>\frac{x}{x+2\nu+3}$, $\nu>-1$ We thus obtain that the relative error in approximating $F_\nu(x)$ by either $L_\nu(x)$ or $U_\nu(x)$ is at most \begin{align*}&\frac{2\nu \mathbf{L}_\nu(x)-(2\nu-1)\mathbf{L}_{\nu+2}(x)+b_{\nu-1,0}x^{\nu+3}-c_{\nu-1,0}x^{\nu+1}}{\mathbf{L}_\nu(x)-a_{\nu-1,0}x^{\nu+1}}-1\\ &\quad=(2\nu-1)\bigg(1-\frac{\mathbf{L}_{\nu+2}(x)}{\mathbf{L}_{\nu+1}(x)}\frac{\mathbf{L}_{\nu+1}(x)}{\mathbf{L}_{\nu}(x)}\bigg)+R_\nu(x) \\ &\quad<(2\nu-1)\bigg(1-\frac{x^2}{(x+2\nu+7)(x+2\nu+5)}\bigg)+R_\nu(x)\\ &\quad=\frac{(2\nu-1)((2\nu+5)(2\nu+7)+(4\nu+12)x)}{(x+2\nu+5)(x+2\nu+7)}+R_\nu(x), \end{align*} which, for fixed $x$, has rate $\nu$ as $\nu\rightarrow\infty$ and, for fixed $\nu$, has rate $x^{-1}$ as $x\rightarrow\infty$.
\end{comment} \begin{table}[h] \centering \caption{\footnotesize{Relative error in approximating $F_\nu(x)$ by $L_\nu(x)$.}} \label{table1} {\scriptsize \begin{tabular}{|c|rrrrrrr|} \hline \backslashbox{$\nu$}{$x$} & 0.5 & 5 & 10 & 25 & 50 & 100 & 250 \\ \hline 1 & 0.4959 & 0.2540 & 0.1089 & 0.0409 & 0.0202 & 0.0101 & 0.0040 \\ 2.5 & 0.7979 & 0.6225 & 0.3708 & 0.1539 & 0.0784 & 0.0396 & 0.0159 \\ 5 & 0.8992 & 0.8229 & 0.6374 & 0.3130 & 0.1678 & 0.0869 & 0.0355 \\ 7.5 & 0.9329 & 0.8923 & 0.7741 & 0.4407 & 0.2482 & 0.1318 & 0.0547 \\ 10 & 0.9498 & 0.9249 & 0.8472 & 0.5426 & 0.3205 & 0.1745 & 0.0735 \\ \hline \end{tabular} } \end{table} \begin{table}[h] \centering \caption{\footnotesize{Relative error in approximating $F_\nu(x)$ by $U_\nu(x)$.}} \label{table2} {\scriptsize \begin{tabular}{|c|rrrrrrr|} \hline \backslashbox{$\nu$}{$x$} & 0.5 & 5 & 10 & 25 & 50 & 100 & 250 \\ \hline 1 & 0.0041 & 0.1939 & 0.1981 & 0.1034 & 0.0558 & 0.0289 & 0.0118 \\ 2.5 & 0.0070 & 0.5184 & 0.9270 & 0.6847 & 0.4073 & 0.2213 & 0.0930 \\ 5 & 0.0062 & 0.5679 & 1.6268 & 2.0626 & 1.4411 & 0.8462 & 0.3721 \\ 7.5 & 0.0051 & 0.4985 & 1.7368 & 3.4231 & 2.7983 & 1.7750 & 0.8169 \\ 10 & 0.0043 & 0.4285 & 1.6301 & 4.5028 & 4.2818 & 2.9312 & 1.3959 \\ \hline \end{tabular} } \end{table} \end{remark} \end{document}
\begin{document} \title{\textbf{Pseudo minimum phi-divergence estimator for multinomial logistic regression with complex sample design}} \author{Elena Castilla, Nirian Mart\'{\i}n and Leandro Pardo\\{\small Department of Statistics and Operations Research, Complutense University of Madrid, Spain}} \date{\today} \maketitle \begin{abstract} This article develops the theoretical framework needed to study the multinomial logistic regression model for complex sample design with pseudo minimum phi-divergence estimators. Through a numerical example and simulation study new estimators are proposed for the parameter of the logistic regression model with overdispersed multinomial distributions for the response variables, the pseudo minimum Cressie-Read divergence estimators, as well as new estimators for the intra-cluster correlation coefficient. The results show that the Binder's method for the intra-cluster correlation coefficient exhibits an excellent performance when the pseudo minimum Cressie-Read divergence estimator, with $\lambda=\frac{2}{3}$, is plugged. \end{abstract} \noindent\underline{\textbf{AMS 2001 Subject Classification}}\textbf{: }62F12, 62J12 \noindent\underline{\textbf{Keywords and phrases}}\textbf{:} Design effect; Cluster sampling; Pseudo-likelihood; Sample weight. \section{Introduction\label{sec1}} Multinomial logistic regression is frequently the method of choice when the response is a qualitative variable, with two or more mutually exclusive unordered response categories, and interest is in the relationship between the response variables with respect to their corresponding explanatory variables or covariates. The $k$ explanatory variables of interest, $\boldsymbol{x} =\left( x_{1},...,x_{k}\right) ^{T}$, may be binary, categorical, ordinal or continuos. The multinomial logistic regression procedure is based on assuming that the $(d+1)$-dimensional response random variable $\boldsymbol{Y} =(Y_{1},...,Y_{d+1})^{T}$ is a multinomial random variable of a unique observation with parameters $\pi_{1}\left( \boldsymbol{\beta}\right) ,...,\pi_{d+1}\left( \boldsymbol{\beta}\right) $ being \begin{equation} \pi_{r}\left( \boldsymbol{\beta}\right) =\Pr\left( Y_{r}=1|\boldsymbol{x} \right) =\left\{ \begin{array} [c]{ll} \dfrac{\exp\{\boldsymbol{x}^{T}\boldsymbol{\beta}_{r}\}}{1+ {\textstyle\sum_{s=1}^{d}} \exp\{\boldsymbol{x}^{T}\boldsymbol{\beta}_{s}\}}, & r=1,...,d\\ \dfrac{1}{1+ {\textstyle\sum_{s=1}^{d}} \exp\{\boldsymbol{x}^{T}\boldsymbol{\beta}_{s}\}}, & r=d+1 \end{array} \right. , \label{1.1} \end{equation} with $\boldsymbol{\beta}=(\boldsymbol{\beta}_{1}^{T},...,\boldsymbol{\beta }_{d}^{T})^{T}$, where $\boldsymbol{\beta}_{r}=\left( \beta_{1r} ,...,\beta_{kr}\right) ^{T}$ is a $k$-dimensional real value vector of unknown parameters for $r=1,...,d$. An observation of $\boldsymbol{Y}$, $\boldsymbol{y}$, is any $(d+1)$-dimensional vector with $d$ zeros and a unique one (classification vector), which is observed together with explanatory variables $\boldsymbol{x}$. In order to make inferences about $\boldsymbol{\beta}_{r}$, $r=1,...,d$, a random sample $\left( \boldsymbol{Y} _{i},\boldsymbol{x}_{i}\right) $, $i=1,...,n$ is considered, where $\boldsymbol{Y}_{i}=(Y_{i1},...,Y_{i,d+1})^{T}$ and $\boldsymbol{x} _{i}=\left( x_{i1},...,x_{ik}\right) ^{T}$. 
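For readers who prefer code to formulas, (\ref{1.1}) is the familiar multinomial logit: a softmax with the $(d+1)$-th category acting as baseline. The following is a minimal sketch only, assuming NumPy is available and using arbitrary illustrative values; the function and variable names are ours, not part of any particular software package.
\begin{verbatim}
# Minimal sketch of the probabilities in (1.1): multinomial logit with the
# last category as baseline.  beta has one column per non-baseline category
# (k x d); x is a k-vector of covariates.  All values are illustrative.
import numpy as np

def category_probs(x, beta):
    eta = x @ beta                        # linear predictors x^T beta_r, r=1..d
    expo = np.exp(eta)
    return np.append(expo, 1.0) / (1.0 + expo.sum())   # (pi_1,...,pi_{d+1})

rng = np.random.default_rng(0)
k, d = 3, 2
beta = rng.normal(size=(k, d))
x = rng.normal(size=k)
pi = category_probs(x, beta)
print(pi, pi.sum())                       # the probabilities sum to 1
\end{verbatim}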
For more details about multinomial logistic regression models see for instance Agresti (2002), Amemiya (1981), Anderson (1972, 1982, 1984), Engel (1988), Lesaffre (1986), Lesaffre and Albert (1986, 1989), Liu and Agresti (2005), Mantel (1966), Theil (1969), McCullagh (1980). In that papers the inferences about the parameters are carried out on the basis of the maximum likelihood estimator in the case of the estimation and on the likelihood ratio test and Wald tests in the case of testing. In Gupta et al. (2006a, 2006b, 2007, 2008) new procedures for making statistical inference in the multinomial logistic regression were presented based on phi-divergences measures. When the data have been collected not under the assumptions of simple random sampling but in a complex survey, with stratification, clustering, or unequal selection probabilities, for example, the estimation of the multinomial logistic regression coefficients and their estimated variances that ignore these features may be misleading. Discussions of multinomial logistic regression in sample surveys can be seen in Binder (1983), Roberts, Rao and Kumar (1987), Skinner, Holt and Smith (1989), Morel (1989), Lehtonen and Pahkinen (1995) and Morel and Neerchal (2012). In this paper, we consider the multinomial logistic regression model with complex survey and we shall introduce for this model the pseudo minimum phi-divergence estimator for the regressions coefficients, deriving its asymptotic distribution. As a particular case, we shall obtain the asymptotic distribution of the pseudo maximum likelihood estimator. In Section \ref{sec2}, we present some notation as well as some results in relation to the maximum likelihood estimator. Section \ref{sec3} is devoted to introduce the pseudo minimum phi-divergence estimator as an extension of the maximum likelihood estimator as well as its asymptotic distribution. In Section \ref{sec4} and \ref{sec5}, the numerical example and simulation study are swown. Finally, in Section \ref{sec6}, some concluding remarks are given. \section{Multinomial logistic regression model for complex sample design\label{sec2}} We shall assume that the population under consideration is divided into $H$ distinct strata. In each stratum $h$, the sample is consisted of $n_{h}$ clusters, $h=1,...,H$, and each cluster is comprised of $m_{hi}$ units, $h=1,...,H,$ $i=1,...,n_{h}$. Let \begin{equation} \boldsymbol{y}_{hij}=\left( y_{hij1},....,y_{hij,d+1}\right) ^{T},\text{ }h=1,...,H,\text{ }i=1,...,n_{h},\text{ }j=1,...,m_{hi} \label{2.1} \end{equation} be the $(d+1)$-dimensional classification vectors, with $y_{hijr}$ $=1$ and $y_{hijs}$ $=0$ for $s\in\{1,...,d+1\}-\{r\}$ if the $j$-th unit selected from the $i$-th cluster of the $h$-th stratum fall in the $r$-th category. Let $\boldsymbol{x}_{hij}=\left( x_{hij1},....,x_{hijk}\right) ^{T}$ be a $k$-dimensional vector of explanatory variables associated with the $i$-th cluster in the $h$-th stratum for the $j$-th individual. We shall also denote by $w_{hi}$ the sampling weight from the $i$-th cluster of the $h$-th stratum. 
For each $i$, $h$ and $j$, the expectation of the $r$-th element of $\boldsymbol{Y}_{hij}=(Y_{hij1},...,Y_{hij,d+1})^{T}$, with a realization $\boldsymbol{y}_{hij}$, is determined by the multinomial logistic regression relationship \begin{equation} \pi_{hijr}\left( \boldsymbol{\beta}\right) =\left\{ \begin{array} [c]{ll} \dfrac{\exp\{\boldsymbol{x}_{hij}^{T}\boldsymbol{\beta}_{r}\}}{1+ {\textstyle\sum_{s=1}^{d}} \exp\{\boldsymbol{x}_{hij}^{T}\boldsymbol{\beta}_{s}\}}, & r=1,...,d\\ \dfrac{1}{1+ {\textstyle\sum_{s=1}^{d}} \exp\{\boldsymbol{x}_{hij}^{T}\boldsymbol{\beta}_{s}\}}, & r=d+1 \end{array} \right. , \label{2.1.0} \end{equation} with $\boldsymbol{\beta}_{r}=\left( \beta_{1r},...,\beta_{kr}\right) ^{T}\in \mathbb{R} ^{k}$, $r=1,...,d$. We shall denote by $\boldsymbol{\pi}_{hij}\left( \boldsymbol{\beta}\right) $ the $(d+1)$-dimensional probability vector \begin{equation} \boldsymbol{\pi}_{hij}\left( \boldsymbol{\beta}\right) =\left( \pi _{hij1}\left( \boldsymbol{\beta}\right) ,...,\pi_{hij,d+1}\left( \boldsymbol{\beta}\right) \right) ^{T}. \label{2.2} \end{equation} The parameter space associated to the multinomial logistic regression model considered in (\ref{2.1.0}) is given by \[ \Theta=\{\boldsymbol{\beta}=(\boldsymbol{\beta}_{1}^{T},...,\boldsymbol{\beta }_{d}^{T})^{T},\text{ }\boldsymbol{\beta}_{j}=\left( \beta_{j1} ,...,\beta_{jk}\right) ^{T}\in\mathbb{R}^{k},\text{ }j=1,...,d\}=\mathbb{R} ^{dk}. \] In this context and taking into account the weights $w_{hi}$, the pseudo log-likelihood, $\mathcal{L}\left( \boldsymbol{\beta}\right) $, for the multinomial logistic regression model given in (\ref{2.1.0}) has the expression \begin{equation} \mathcal{L}\left( \boldsymbol{\beta}\right) = {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} {\displaystyle\sum\limits_{j=1}^{m_{hi}}} w_{hi}\log\boldsymbol{\pi}_{hij}^{T}\left( \boldsymbol{\beta}\right) \boldsymbol{y}_{hij}, \label{2.3} \end{equation} where $\log\boldsymbol{\pi}_{hij}\left( \boldsymbol{\beta}\right) =\left( \log\pi_{hij1}\left( \boldsymbol{\beta}\right) ,...,\log\pi_{hij,d+1}\left( \boldsymbol{\beta}\right) \right) ^{T}$. For more details about $\mathcal{L}\left( \boldsymbol{\beta}\right) $ see for instance Morel (1989) and Morel and Neerchal (2012). In practice, it is not a strong assumption to consider that the expectation of the $r$-th component of $\boldsymbol{Y}_{hij}$ does not depend on $j$, i.e., \[ \pi_{hijr}\left( \boldsymbol{\beta}\right) =\pi_{hir}\left( \boldsymbol{\beta}\right) ,\quad j=1,...,m_{hi}, \] where $\pi_{hijr}\left( \boldsymbol{\beta}\right) =\mathrm{E}[Y_{hijr} ]=\Pr(Y_{hijr}=1)$. This is related to a common vector of explanatory variables $\boldsymbol{x}_{hi}=\left( x_{hi1},....,x_{hik}\right) ^{T}$ for all the individuals in the $i$-th cluster of the $h$-th stratum and we shall denote $\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) $ instead of $\boldsymbol{\pi}_{hij}\left( \boldsymbol{\beta}\right) $ the vector mean associated to $\boldsymbol{Y}_{hij}$. Let \begin{equation} \widehat{\boldsymbol{Y}}_{hi}= {\displaystyle\sum\limits_{j=1}^{m_{hi}}} \boldsymbol{Y}_{hij}=\left( {\displaystyle\sum\limits_{j=1}^{m_{hi}}} Y_{hij1},..., {\displaystyle\sum\limits_{j=1}^{m_{hi}}} Y_{hij,d+1}\right) ^{T}=(\widehat{Y}_{hi1},...,\widehat{Y}_{hi,d+1})^{T} \label{2.201} \end{equation} be the random vector of counts in the $i$-th cluster of the $h$-th stratum. 
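To make the bookkeeping concrete, the following minimal sketch (assuming NumPy; the data and container format are illustrative only) forms the cluster count vectors $\widehat{\boldsymbol{Y}}_{hi}$ of (\ref{2.201}) and evaluates the weighted pseudo log-likelihood (\ref{2.3}) under the homogeneity assumption stated above.
\begin{verbatim}
# Minimal sketch of (2.201) and (2.3).  Each cluster is stored as a tuple
# (w_hi, x_hi, Y_hij), where Y_hij is an (m_hi x (d+1)) array of
# classification vectors.  Data are illustrative only.
import numpy as np

def category_probs(x, beta):              # model probabilities (2.1.0)
    expo = np.exp(x @ beta)
    return np.append(expo, 1.0) / (1.0 + expo.sum())

def pseudo_loglik(beta, clusters):
    total = 0.0
    for w_hi, x_hi, Y_hij in clusters:
        Y_hat_hi = Y_hij.sum(axis=0)      # cluster counts, eq. (2.201)
        total += w_hi * Y_hat_hi @ np.log(category_probs(x_hi, beta))
    return total

# one stratum, two clusters, d + 1 = 3 categories, k = 2 covariates
clusters = [
    (1.5, np.array([1.0, 0.2]),  np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])),
    (0.8, np.array([1.0, -1.0]), np.array([[0, 1, 0], [0, 1, 0]])),
]
print(pseudo_loglik(np.zeros((2, 2)), clusters))
\end{verbatim}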
Under homogeneity assumption within the clusters, the pseudo log-likelihood is \begin{align} \mathcal{L}\left( \boldsymbol{\beta}\right) & = {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} {\displaystyle\sum\limits_{j=1}^{m_{hi}}} w_{hi}\log\boldsymbol{\pi}_{hi}^{T}\left( \boldsymbol{\beta}\right) \boldsymbol{y}_{hij}\nonumber\\ & = {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}\log\boldsymbol{\pi}_{hi}^{T}\left( \boldsymbol{\beta}\right) \widehat{\boldsymbol{y}}_{hi}. \label{2.3.1} \end{align} The pseudo maximum likelihood estimator $\widehat{\boldsymbol{\beta}}_{P}$ of $\boldsymbol{\beta}$ is obtained maximizing in $\boldsymbol{\beta}$ the pseudo log-likelihood given in (\ref{2.3.1}). This estimator can be obtained as the solution of the system of equations \begin{equation} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}\frac{\partial\boldsymbol{\pi}_{hi}^{\ast T}\left( \boldsymbol{\beta }\right) }{\partial\boldsymbol{\beta}}\boldsymbol{\Delta}^{-1} (\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\boldsymbol{r}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) =\boldsymbol{0} _{dk}, \label{2.4} \end{equation} being \begin{align*} \frac{\partial\boldsymbol{\pi}_{hi}^{\ast T}\left( \boldsymbol{\beta}\right) }{\partial\boldsymbol{\beta}} & =\mathcal{\boldsymbol{\Delta}} (\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\otimes \boldsymbol{x}_{hi},\\ \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta }\right) ) & =\mathrm{diag}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )-\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) \boldsymbol{\pi}_{hi}^{\ast T}\left( \boldsymbol{\beta}\right) ,\\ \boldsymbol{r}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) & =\widehat{\boldsymbol{y}}_{hi}^{\ast}-m_{hi}\boldsymbol{\pi}_{hi}^{\ast }\left( \boldsymbol{\beta}\right) . \end{align*} With superscript $^{\ast}$ on a vector we denote the vector obtained deleting\ the last component from the initial vector, and thus $\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) =\left( \pi_{hi1}\left( \boldsymbol{\beta}\right) ,...,\pi_{hid}\left( \boldsymbol{\beta}\right) \right) ^{T}$ and $\widehat{\boldsymbol{y}} _{hi}^{\ast}=\left( \widehat{y}_{hi1}^{\ast},...,\widehat{y}_{hid}^{\ast }\right) ^{T}$. The system of equations (\ref{2.4}) can be written as $\boldsymbol{u}\left( \boldsymbol{\beta}\right) =\boldsymbol{0}_{dk}$, being \begin{align} \boldsymbol{u}\left( \boldsymbol{\beta}\right) & = {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) ,\label{Un}\\ \boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) & =w_{hi} \boldsymbol{r}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) \otimes \boldsymbol{x}_{hi}. \label{Un2} \end{align} \section{Pseudo minimum phi-divergence estimator: asymptotic distribution\label{sec3}} In this Section we shall introduce, for the fist time, the pseudo minimum phi-divergence estimator, $\widehat{\boldsymbol{\beta}}_{\phi,P}$, of the parameter $\boldsymbol{\beta}$ as a natural extension of the pseudo maximum likelihood estimator $\widehat{\boldsymbol{\beta}}_{P}$. 
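Before doing so, we record a minimal numerical sketch of the estimating function in (\ref{Un})--(\ref{Un2}), which the divergence-based estimators below generalize. It again assumes NumPy, with illustrative data structured as in the previous sketch; the helper names are ours.
\begin{verbatim}
# Minimal sketch of (Un)-(Un2): u(beta) = sum_{h,i} w_hi (yhat*_hi -
# m_hi pi*_hi(beta)) kron x_hi.  Clusters are (w_hi, x_hi, Y_hij) as before.
import numpy as np

def category_probs(x, beta):
    expo = np.exp(x @ beta)
    return np.append(expo, 1.0) / (1.0 + expo.sum())

def score(beta, clusters):
    k, d = beta.shape
    u = np.zeros(d * k)
    for w_hi, x_hi, Y_hij in clusters:
        m_hi = Y_hij.shape[0]
        y_hat_star = Y_hij.sum(axis=0)[:-1]               # drop last category
        r_star = y_hat_star - m_hi * category_probs(x_hi, beta)[:-1]
        u += w_hi * np.kron(r_star, x_hi)                 # eq. (Un2)
    return u

clusters = [
    (1.5, np.array([1.0, 0.2]),  np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])),
    (0.8, np.array([1.0, -1.0]), np.array([[0, 1, 0], [0, 1, 0]])),
]
# u(beta) at beta = 0; the pseudo maximum likelihood estimator solves u = 0
print(score(np.zeros((2, 2)), clusters))
\end{verbatim}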
We define the following theoretical probability vector \[ \boldsymbol{\pi}\left( \boldsymbol{\beta}\right) =\frac{1}{\tau} (w_{11}m_{11}\boldsymbol{\pi}_{11}^{T}(\boldsymbol{\beta}),...,w_{1n_{1} }m_{1n_{1}}\boldsymbol{\pi}_{1n_{1}}^{T}(\boldsymbol{\beta}),...,w_{H1} m_{H1}\boldsymbol{\pi}_{H1}^{T}\left( \boldsymbol{\beta}\right) ,...,w_{Hn_{H}}m_{Hn_{H}}\boldsymbol{\pi}_{Hn_{H}}^{T}(\boldsymbol{\beta }))^{T}, \] with \begin{equation} \tau= {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi} \label{2.202} \end{equation} being a known value. Based on $\widehat{\boldsymbol{y}}_{hi}$, observation of $\widehat{\boldsymbol{Y}}_{hi}$\ defined in (\ref{2.201}), we consider the vector $\widehat{\boldsymbol{y}}_{h}$ for each stratum $h$, \[ \widehat{\boldsymbol{y}}_{h}=(w_{h1}\widehat{\boldsymbol{y}}_{h1} ^{T},...,w_{hn_{h}}\widehat{\boldsymbol{y}}_{hn_{h}}^{T})^{T}. \] We shall also consider the non-parametric probability vector \begin{align*} \widehat{\boldsymbol{p}} & =\frac{1}{\tau}(\widehat{\boldsymbol{y}}_{1} ^{T},...,\widehat{\boldsymbol{y}}_{H}^{T})^{T}\\ & =\frac{1}{\tau}(w_{11}\widehat{\boldsymbol{y}}_{11}^{T},...,w_{1n_{1} }\widehat{\boldsymbol{y}}_{1n_{1}}^{T},...,w_{H1}\widehat{\boldsymbol{y}} _{H1}^{T},...,w_{Hn_{H}}\widehat{\boldsymbol{y}}_{Hn_{H}}^{T})^{T}. \end{align*} The Kullback-Leibler divergence between the probability vectors $\widehat{\boldsymbol{p}}$ and $\boldsymbol{\pi}\left( \boldsymbol{\beta }\right) $ is given by \begin{align} d_{K\mathrm{-}L}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) & =\frac{1}{\tau} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi} {\displaystyle\sum\limits_{s=1}^{d+1}} \widehat{y}_{his}\log\frac{\widehat{y}_{his}}{m_{hi}\pi_{his}\left( \boldsymbol{\beta}\right) }\label{3.1}\\ & =K-\frac{1}{\tau} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi} {\displaystyle\sum\limits_{s=1}^{d+1}} \widehat{y}_{his}\log\pi_{his}\left( \boldsymbol{\beta}\right) \nonumber\\ & =K-\frac{1}{\tau} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}\log\boldsymbol{\pi}_{hi}^{T}\left( \boldsymbol{\beta}\right) \widehat{\boldsymbol{y}}_{hi},\nonumber \end{align} with $K$ being a constant not dependent of $\boldsymbol{\beta}$. Based on (\ref{2.3.1}) and (\ref{3.1}), we can define the pseudo maximum likelihood estimator for the multinomial logistic regression model given in (\ref{2.1.0}) by \begin{equation} \widehat{\boldsymbol{\beta}}_{P}=\arg\min_{\boldsymbol{\beta\in\Theta} }d_{K\mathrm{-}L}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) . 
\label{3.2} \end{equation} But Kullback-Leibler divergence is a particular divergence measure in the family of phi-divergence measures, \begin{equation} d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) =\frac{1}{\tau}\sum\limits_{h=1}^{H} \sum\limits_{i=1}^{n_{h}}w_{hi}m_{hi}\sum\limits_{s=1}^{d+1}\pi_{his}\left( \boldsymbol{\beta}\right) \phi\left( \frac{\widehat{y}_{his}}{m_{hi} \pi_{his}\left( \boldsymbol{\beta}\right) }\right) , \label{3.3} \end{equation} where $\phi\in\Phi^{\ast}$ is the class of all convex functions $\phi\left( x\right) $, defined for $x>0$, such that at $x=1$, $\phi\left( 1\right) =0$, $\phi^{\prime\prime}\left( 1\right) >0,$ and at $x=0$, $0\phi\left( 0/0\right) =0$ and $0\phi\left( p/0\right) =\lim_{u\rightarrow\infty} \phi\left( u\right) /u$. For every $\phi\in\Phi^{\ast}$ differentiable at $x=1$, the function \[ \varphi\left( x\right) \equiv\phi\left( x\right) -\phi^{\prime}\left( 1\right) \left( x-1\right) \] also belongs to $\Phi^{\ast}$. Then we have $d_{\varphi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) =d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) $, and $\varphi$ has the additional property that $\varphi^{\prime}\left( 1\right) =0$. Because the two divergence measures are equivalent, we can consider the set $\Phi^{\ast}$ to be equivalent to the set \[ \Phi\equiv\Phi^{\ast}\cap\left\{ \phi:\phi^{\prime}\left( 1\right) =0\right\} . \] In what follows, we give our theoretical results for $\phi\in\Phi$, but we often apply them to choices of functions in $\Phi^{\ast}$. An equivalent definition of (\ref{3.3}) is a weighted version of phi-divergences between the cluster non-parametric probabilities and theoretical probabilities \[ d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) =\sum\limits_{h=1}^{H}\sum\limits_{i=1} ^{n_{h}}\frac{w_{hi}m_{hi}}{\tau}d_{\phi}\left( \tfrac {\widehat{\boldsymbol{y}}_{hi}}{m_{hi}},\boldsymbol{\pi}_{hi} (\boldsymbol{\beta})\right) , \] where \[ d_{\phi}\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{hi}},\boldsymbol{\pi }_{hi}(\boldsymbol{\beta})\right) =\sum\limits_{s=1}^{d+1}\pi_{his}\left( \boldsymbol{\beta}\right) \phi\left( \frac{\widehat{y}_{his}}{m_{hi} \pi_{his}\left( \boldsymbol{\beta}\right) }\right) . \] For more details about phi-divergences measures see Pardo (2005). Based on (\ref{3.2}) and (\ref{3.3}) we shall introduce, in this paper, the pseudo minimum phi-divergence estimator for the parameter $\boldsymbol{\beta}$ in the multinomial logistic regression model under complex survey defined in (\ref{2.1.0}) as follows, \begin{definition} We consider the multinomial logistic regression model with complex survey defined in (\ref{2.1.0}). The pseudo minimum phi-divergence estimator of $\boldsymbol{\beta}$ is defined as \[ \widehat{\boldsymbol{\beta}}_{\phi,P}=\arg\min_{\boldsymbol{\beta}\in\Theta }d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) , \] where $d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) $, the phi-divergence measure between the probability vectors $\widehat{\boldsymbol{p}}$ and $\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) $, is given in (\ref{3.3}). 
\end{definition} For $\phi(x)=x\log x-x+1$ the associated phi-divergence (\ref{3.3}) coincides with the Kullback-Leibler divergence (\ref{3.1}), therefore the pseudo minimum phi-divergence estimator of $\boldsymbol{\beta}$ based on $\phi(x)$ contains as special case the pseudo maximum likelihood estimator. With the same philosophy, the following result generalizes $\boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) $ given in (\ref{Un2}) and later this result plays an important role for the asymptotic distribution of the pseudo minimum phi-divergence estimator, $\widehat{\boldsymbol{\beta}}_{\phi,P}$. \begin{theorem} \label{Th0}The pseudo minimum phi-divergence estimator of $\boldsymbol{\beta} $, $\widehat{\boldsymbol{\beta}}_{\phi,P}$, is obtained by solving the system of equations $\boldsymbol{u}_{\phi}\left( \boldsymbol{\beta}\right) =\boldsymbol{0}_{dk}$, where \begin{align} \boldsymbol{u}_{\phi}\left( \boldsymbol{\beta}\right) & =\sum \limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h}}\boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) ,\label{3.09}\\ \boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) & =\frac {w_{hi}m_{hi}}{\phi^{\prime\prime}(1)}\boldsymbol{\Delta}(\boldsymbol{\pi }_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\boldsymbol{f}_{\phi ,hi}^{\ast}(\tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{hi}},\boldsymbol{\beta })\otimes\boldsymbol{x}_{hi}, \label{3.010} \end{align} where \begin{align} \boldsymbol{f}_{\phi,hi}^{\ast}(\tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{hi} },\boldsymbol{\beta}) & =(f_{\phi,hi1}(\tfrac{\widehat{y}_{hi1}}{m_{hi} },\boldsymbol{\beta}),...,f_{\phi,hid}(\tfrac{\widehat{y}_{hid}}{m_{hi} },\boldsymbol{\beta}))^{T},\nonumber\\ f_{\phi,his}(x,\boldsymbol{\beta}) & =\frac{x}{\pi_{his}(\boldsymbol{\beta })}\phi^{\prime}\left( \frac{x}{\pi_{his}(\boldsymbol{\beta})}\right) -\phi\left( \frac{x}{\pi_{his}(\boldsymbol{\beta})}\right) \label{3.010b} \end{align} \end{theorem} \begin{proof} The pseudo minimum phi-divergence estimator of $\boldsymbol{\beta}$, $\widehat{\boldsymbol{\beta}}_{\phi,P}$, is obtained by solving the system of equations $\frac{\partial}{\partial\boldsymbol{\beta}}d_{\phi}\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta}\right) \right) =\boldsymbol{0}_{dk}$, and then it is also obtained from $\boldsymbol{u}_{\phi}\left( \boldsymbol{\beta}\right) =\boldsymbol{0}_{dk} $, where \[ \boldsymbol{u}_{\phi}\left( \boldsymbol{\beta}\right) =-\frac{\tau} {\phi^{\prime\prime}(1)}\frac{\partial}{\partial\boldsymbol{\beta}}d_{\phi }\left( \widehat{\boldsymbol{p}},\boldsymbol{\pi}\left( \boldsymbol{\beta }\right) \right) =\sum\limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h} }\boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) , \] with \begin{align} \boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) & =-\frac {w_{hi}m_{hi}}{\phi^{\prime\prime}(1)}\frac{\partial}{\partial \boldsymbol{\beta}}d_{\phi}\left( \tfrac{\widehat{\boldsymbol{y}}_{hi} }{m_{hi}},\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\right) =\frac {w_{hi}m_{hi}}{\phi^{\prime\prime}(1)} {\displaystyle\sum\limits_{s=1}^{d+1}} \frac{\partial\pi_{his}(\boldsymbol{\beta})}{\partial\boldsymbol{\beta} }f_{\phi,his}(\tfrac{\widehat{y}_{his}}{m_{hi}},\boldsymbol{\beta})\nonumber\\ & =\frac{w_{hi}m_{hi}}{\phi^{\prime\prime}(1)}\frac{\partial\boldsymbol{\pi }_{hi}^{T}(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}}\boldsymbol{f} _{\phi,hi}(\tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{hi}},\boldsymbol{\beta}), \label{uu} \end{align} and \[ 
\boldsymbol{f}_{\phi,hi}(\tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{hi} },\boldsymbol{\beta})=(f_{\phi,hi1}(\tfrac{\widehat{y}_{hi1}}{m_{hi} },\boldsymbol{\beta}),...,f_{\phi,hi,d+1}(\tfrac{\widehat{y}_{hi,d+1}}{m_{hi} },\boldsymbol{\beta}))^{T}. \] Since \begin{equation} \frac{\partial\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}}=\left( \boldsymbol{I}_{d\times d},\boldsymbol{0} _{d\times1}\right) \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) )\otimes\boldsymbol{x}_{hi}, \label{der} \end{equation} the expression of $\boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) $\ is rewritten as (\ref{3.010}). \end{proof} \begin{remark} An important family of divergence measures is obtained by restricting $\phi$ from the family of convex\ functions to the Cressie-Read subfamily \begin{equation} \phi_{\lambda}(x)=\left\{ \begin{array} [c]{ll} \frac{1}{\lambda(1+\lambda)}\left[ x^{\lambda+1}-x-\lambda(x-1)\right] , & \lambda\in \mathbb{R} -\{-1,0\}\\ \lim_{\upsilon\rightarrow\lambda}\frac{1}{\upsilon(1+\upsilon)}\left[ x^{\upsilon+1}-x-\upsilon(x-1)\right] , & \lambda\in\{-1,0\} \end{array} \right. . \label{CR} \end{equation} We can observe that for $\lambda=0$, we have \[ \phi_{\lambda=0}(x)=\lim_{\upsilon\rightarrow0}\frac{1}{\upsilon(1+\upsilon )}\left[ x^{\upsilon+1}-x-\upsilon(x-1)\right] =x\log x-x+1, \] and the associated phi-divergence (\ref{3.3}), coincides with the Kullback-Leibler divergence (\ref{3.1}), therefore the pseudo minimum phi-divergence estimator of $\boldsymbol{\beta}$ based on $\phi_{\lambda}(x)$ contains as special case the pseudo maximum likelihood estimator and $\boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) $ given in (\ref{Un2}) matches $\boldsymbol{u}_{\phi,hi}\left( \boldsymbol{\beta}\right) $ given in (\ref{3.010}). 
For the Cressie-Read subfamily it is established that for $\lambda\neq-1$, $\boldsymbol{u}_{\phi_{\lambda}}\left( \boldsymbol{\beta }\right) = {\textstyle\sum\nolimits_{h=1}^{H}} {\textstyle\sum\nolimits_{i=1}^{n_{i}}} \boldsymbol{u}_{\phi_{\lambda},hi}\left( \boldsymbol{\beta}\right) $, where \[ \boldsymbol{u}_{\phi_{\lambda},hi}\left( \boldsymbol{\beta}\right) =\frac{w_{hi}}{(\lambda+1)m_{hi}^{\lambda}}\frac{\partial\boldsymbol{\pi} _{hi}^{T}(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}}\mathrm{diag} ^{-(\lambda+1)}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta} ))\widehat{\boldsymbol{y}}_{hi}^{\lambda+1}, \] since we have (\ref{uu}) with \begin{equation} \boldsymbol{f}_{\phi_{\lambda},hi}(\tfrac{\widehat{\boldsymbol{y}}_{hi} }{m_{hi}},\boldsymbol{\beta})=\frac{1}{\lambda+1}\left( \frac{1} {m_{hi}^{\lambda+1}}\mathrm{diag}^{-(\lambda+1)}(\boldsymbol{\pi} _{hi}(\boldsymbol{\beta}))\widehat{\boldsymbol{y}}_{hi}^{\lambda +1}-\boldsymbol{1}_{d+1}\right) , \label{f} \end{equation} From (\ref{der}) and \begin{align*} \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) )\mathrm{diag}^{-(\lambda+1)}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})) & =\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) )\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))\mathrm{diag} ^{-\lambda}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))\\ & =\mathrm{diag}^{-\lambda}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta }))-\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\boldsymbol{1}_{d+1} ^{T}\mathrm{diag}^{-\lambda}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})), \end{align*} it is concluded that \begin{align} \boldsymbol{u}_{\phi_{\lambda},hi}\left( \boldsymbol{\beta}\right) & =\frac{w_{hi}}{(\lambda+1)m_{hi}^{\lambda}}\left( \mathrm{diag}^{-\lambda }(\boldsymbol{\pi}_{hi}^{\ast}(\boldsymbol{\beta}))\widehat{\boldsymbol{y} }_{hi}^{\ast,\lambda+1}-[\boldsymbol{1}_{d+1}^{T}\mathrm{diag}^{-\lambda }(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))\widehat{\boldsymbol{y}} _{hi}^{\lambda+1}]\boldsymbol{\pi}_{hi}^{\ast}(\boldsymbol{\beta})\right) \otimes\boldsymbol{x}_{hi}\nonumber\\ & =\frac{w_{hi}}{(\lambda+1)m_{hi}^{\lambda}}\left\{ \mathrm{diag}^{\lambda }(\boldsymbol{\epsilon}_{hi}^{\ast})\widehat{\boldsymbol{y}}_{hi}^{\ast }-\left[ \boldsymbol{1}_{d+1}^{T}\mathrm{diag}^{\lambda}(\boldsymbol{\epsilon }_{hi})\widehat{\boldsymbol{y}}_{hi}\right] \boldsymbol{\pi}_{hi}^{\ast }(\boldsymbol{\beta})\right\} \otimes\boldsymbol{x}_{hi}, \label{ulam} \end{align} where \[ \boldsymbol{\epsilon}_{hi}=\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi} (\boldsymbol{\beta}))\widehat{\boldsymbol{y}}_{hi},\qquad\boldsymbol{\epsilon }_{hi}^{\ast}=\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}^{\ast} (\boldsymbol{\beta}))\widehat{\boldsymbol{y}}_{hi}^{\ast}. \] Notice that replacing $\lambda=0$ in $\boldsymbol{u}_{\phi_{\lambda} ,hi}\left( \boldsymbol{\beta}\right) $ given in (\ref{ulam}), $\boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) $ given in (\ref{Un2}) is obtained. 
For $\lambda=-1$ in (\ref{f}), we have \[ \lim_{\lambda\rightarrow-1}\boldsymbol{f}_{\phi_{\lambda},hi}(\tfrac {\widehat{\boldsymbol{y}}_{hi}}{m_{hi}},\boldsymbol{\beta})=\log\left( \mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))\frac {\widehat{\boldsymbol{y}}_{hi}}{m_{hi}}\right) , \] and therefore \[ \lim_{\lambda\rightarrow-1}\boldsymbol{u}_{\phi_{\lambda},hi}\left( \boldsymbol{\beta}\right) =w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi }_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\log\left( \mathrm{diag} ^{-1}(\boldsymbol{\pi}_{hi}^{\ast}(\boldsymbol{\beta}))\frac {\widehat{\boldsymbol{y}}_{hi}^{\ast}}{m_{hi}}\right) \otimes\boldsymbol{x} _{hi}. \] The family of pseudo minimum divergence estimators, obtained from $\phi_{\lambda}(x)$given in (\ref{CR}), will be called the pseudo minimum Cressie-Read divergence estimators and for $\boldsymbol{\beta}$\ they will be denoted by $\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P}$. This family of estimators will be used in Sections \ref{sec4} and \ref{sec5}. \end{remark} In the following theorem we shall present the asymptotic distribution of the pseudo minimum phi-divergence estimator, $\widehat{\boldsymbol{\beta}} _{\phi,P}$. \begin{theorem} \label{Th1}Let $\widehat{\boldsymbol{\beta}}_{\phi,P}$ the pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$ for a multinomial logistic regression model with complex survey, $n= {\displaystyle\sum\limits_{h=1}^{H}} n_{h}$ the total of clusters in all the strata of the sample and $\eta _{h}^{\ast}$\ an unknown proportion obtained as $\lim_{n\rightarrow\infty }\frac{n_{h}}{n}=\eta_{h}^{\ast}$, $h=1,...,H$. Then we have \[ \sqrt{n}(\widehat{\boldsymbol{\beta}}_{\phi,P}-\boldsymbol{\beta} _{0})\overset{\mathcal{L}}{\underset{n\mathcal{\rightarrow}\infty }{\longrightarrow}}\mathcal{N}\left( \boldsymbol{0}_{dk},\mathbf{H} ^{-1}\left( \boldsymbol{\beta}_{0}\right) \mathbf{G}\left( \boldsymbol{\beta}_{0}\right) \mathbf{H}^{-1}\left( \boldsymbol{\beta} _{0}\right) \right) , \] where \[ \mathbf{H}\left( \boldsymbol{\beta}\right) =\lim_{n\rightarrow\infty }\mathbf{H}_{n}\left( \boldsymbol{\beta}\right) = {\displaystyle\sum\limits_{h=1}^{H}} \eta_{h}^{\ast}\lim_{n_{h}\rightarrow\infty}\mathbf{H}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}\right) \text{,\quad}\mathbf{G}\left( \boldsymbol{\beta }\right) =\lim_{n\rightarrow\infty}\mathbf{G}_{n}\left( \boldsymbol{\beta }\right) = {\displaystyle\sum\limits_{h=1}^{H}} \eta_{h}^{\ast}\lim_{n_{h}\rightarrow\infty}\mathbf{G}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}\right) , \] with \[ \mathbf{H}_{n}\left( \boldsymbol{\beta}\right) =\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\otimes\boldsymbol{x}_{hi}\boldsymbol{x}_{hi} ^{T},\text{\quad}\mathbf{H}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}\right) =\frac{1}{n_{h}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\otimes\boldsymbol{x}_{hi}\boldsymbol{x}_{hi} ^{T}, \] \[ \mathbf{G}_{n}\left( \boldsymbol{\beta}\right) =\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \boldsymbol{V}[\boldsymbol{U}_{hi}\left( \boldsymbol{\beta}\right) ],\text{\quad}\mathbf{G}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}\right) =\frac{1}{n_{h}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \boldsymbol{V}[\boldsymbol{U}_{hi}\left( \boldsymbol{\beta}\right) 
],\text{\quad}\boldsymbol{V}[\boldsymbol{U}_{hi}\left( \boldsymbol{\beta }\right) ]=w_{hi}^{2}\boldsymbol{V}[\widehat{\boldsymbol{Y}}_{hi}^{\ast }]\otimes\boldsymbol{x}_{hi}\boldsymbol{x}_{hi}^{T}, \] $\mathbf{H}\left( \boldsymbol{\beta}\right) $ is the Fisher information matrix, $\boldsymbol{V}[\boldsymbol{\cdot}]$ denotes the variance-covariance matrix of a random vector and $\boldsymbol{U}_{hi}\left( \boldsymbol{\beta }\right) $ is the random variable generator of $\boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) $, given by (\ref{Un2}). \end{theorem} \begin{proof} From Theorem \ref{Th0} and by following the same steps of the linearization method of Binder (1983), \[ \mathbf{G}\left( \boldsymbol{\beta}\right) =\lim_{n\rightarrow\infty }\boldsymbol{V}[\tfrac{1}{\sqrt{n}}\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) ]\quad\text{and}\quad\mathbf{H}\left( \boldsymbol{\beta}\right) =-\lim_{n\rightarrow\infty}\frac{1}{n} \frac{\partial\boldsymbol{U}_{\phi}^{T}\left( \boldsymbol{\beta}\right) }{\partial\boldsymbol{\beta}}, \] where $\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) $ is the random vector generator of $\boldsymbol{u}_{\phi}\left( \boldsymbol{\beta}\right) $, given by (\ref{3.09}). Taking into account that $f_{\phi,his}(\pi _{his}(\boldsymbol{\beta}),\boldsymbol{\beta})=0$ and $f_{\phi,his}^{\prime }(\pi_{his}(\boldsymbol{\beta}),\boldsymbol{\beta})=\frac{1}{\pi _{his}(\boldsymbol{\beta})}\phi^{\prime\prime}\left( 1\right) $, a first Taylor expansion of $f_{\phi,his}(\tfrac{\widehat{Y}_{his}}{m_{hi} },\boldsymbol{\beta})$ given in (\ref{3.010b}) is \begin{align} f_{\phi,his}(\tfrac{\widehat{Y}_{his}}{m_{hi}},\boldsymbol{\beta}) & =f_{\phi,his}(\pi_{his}(\boldsymbol{\beta}),\boldsymbol{\beta})+f_{\phi ,his}^{\prime}(\pi_{his}(\boldsymbol{\beta}),\boldsymbol{\beta})(\tfrac {\widehat{Y}_{his}}{m_{hi}}-\pi_{his}(\boldsymbol{\beta}))+o(\tfrac {\widehat{Y}_{his}}{m_{hi}}-\pi_{his}(\boldsymbol{\beta}))\nonumber\\ & =\frac{\phi^{\prime\prime}\left( 1\right) }{\pi_{his}(\boldsymbol{\beta })}(\tfrac{Y_{his}}{m_{hi}}-\pi_{his}(\boldsymbol{\beta}))+o(\tfrac {\widehat{Y}_{his}}{m_{hi}}-\pi_{his}(\boldsymbol{\beta})), \label{ff} \end{align} i.e. \[ \boldsymbol{f}_{\phi,hi}(\tfrac{\widehat{\boldsymbol{Y}}_{hi}}{m_{hi} },\boldsymbol{\beta})=\phi^{\prime\prime}\left( 1\right) \mathrm{diag} ^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))(\tfrac {\widehat{\boldsymbol{Y}}_{hi}}{m_{hi}}-\boldsymbol{\pi}_{hi} (\boldsymbol{\beta}))+o\left( \boldsymbol{1}_{d+1}\left\Vert \tfrac {\widehat{\boldsymbol{Y}}_{hi}}{m_{hi}}-\boldsymbol{\pi}_{hi} (\boldsymbol{\beta})\right\Vert \right) , \] and hence from (\ref{uu}) \[ \frac{1}{\sqrt{n}}\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) =\frac{1}{\sqrt{n}}\sum\limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h}}w_{hi} m_{hi}\frac{\partial\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})} {\partial\boldsymbol{\beta}}\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi} (\boldsymbol{\beta}))(\tfrac{\widehat{\boldsymbol{Y}}_{hi}}{m_{hi} }-\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))+ {\displaystyle\sum\limits_{h=1}^{H}} \sqrt{\eta_{h}^{\ast}}o\left( \boldsymbol{1}_{dk}\left\Vert \frac{1} {\sqrt{n_{h}}}\left( {\displaystyle\sum_{i=1}^{n_{h}}} \widehat{\boldsymbol{Y}}_{hi}- {\displaystyle\sum_{i=1}^{n_{h}}} m_{hi}\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\right) \right\Vert \right) . 
\] From the Central Limit Theorem given in Rao (1973, page 147) \[ \frac{1}{\sqrt{n_{h}}}\left( {\displaystyle\sum_{i=1}^{n_{h}}} \widehat{\boldsymbol{Y}}_{hi}- {\displaystyle\sum_{i=1}^{n_{h}}} m_{hi}\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\right) \underset{n_{h} \rightarrow\infty}{\overset{\mathcal{L}}{\longrightarrow}}\mathcal{N} (\boldsymbol{0}_{d+1},\lim_{n_{h}\rightarrow\infty}\tfrac{1}{n_{h}} {\textstyle\sum_{i=1}^{n_{h}}} \boldsymbol{V}[\widehat{\boldsymbol{Y}}_{hi}]), \] then \[ o\left( \boldsymbol{1}_{dk}\left\Vert \frac{1}{\sqrt{n_{h}}}\left( {\displaystyle\sum_{i=1}^{n_{h}}} \widehat{\boldsymbol{Y}}_{hi}- {\displaystyle\sum_{i=1}^{n_{h}}} m_{hi}\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\right) \right\Vert \right) =o\left( o_{p}(\boldsymbol{1}_{dk})\right) =o_{p}(\boldsymbol{1}_{dk}), \] and thus \[ \frac{1}{\sqrt{n}}\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) =\frac{1}{\sqrt{n}}\sum\limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h}}w_{hi} \frac{\partial\log\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})} {\partial\boldsymbol{\beta}}(\widehat{\boldsymbol{y}}_{hi}-m_{hi} \boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))+o_{p}(\boldsymbol{1}_{dk}). \] Since \begin{align*} \frac{\partial\log\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})} {\partial\boldsymbol{\beta}}\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}) & =\frac{\partial\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}}\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta }))\boldsymbol{\pi}_{hi}(\boldsymbol{\beta})\\ & =\frac{\partial\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})} {\partial\boldsymbol{\beta}}\boldsymbol{1}_{d+1}=\frac{\partial\left( \boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})\boldsymbol{1}_{d+1}\right) }{\partial\boldsymbol{\beta}}=\boldsymbol{0}_{dk}, \end{align*} \begin{align*} \frac{\partial\log\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})} {\partial\boldsymbol{\beta}}\widehat{\boldsymbol{y}}_{hi} & =\frac {\partial\boldsymbol{\pi}_{hi}^{T}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}}\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta }))\widehat{\boldsymbol{Y}}_{hi}\\ & =\left( \left( \boldsymbol{I}_{d\times d},\boldsymbol{0}_{d\times 1}\right) \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta }\right) )\otimes\boldsymbol{x}_{hi}\right) \mathrm{diag}^{-1} (\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))\widehat{\boldsymbol{Y}}_{hi}\\ & =\left( \boldsymbol{I}_{d\times d},\boldsymbol{0}_{d\times1}\right) \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) )\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\boldsymbol{\beta} ))\widehat{\boldsymbol{Y}}_{hi}\otimes\boldsymbol{x}_{hi}\\ & =\left( \boldsymbol{I}_{d\times d},\boldsymbol{0}_{d\times1}\right) \left( \widehat{\boldsymbol{Y}}_{hi}-\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) \boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta }\right) ^{T}\mathrm{diag}^{-1}\left( \pi_{hi}\left( \boldsymbol{\beta }\right) \right) \widehat{\boldsymbol{Y}}_{hi}\right) \otimes \boldsymbol{x}_{hi}\\ & =\left( \boldsymbol{I}_{d\times d},\boldsymbol{0}_{d\times1}\right) \left( \widehat{\boldsymbol{Y}}_{hi}-m_{hi}\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta}\right) \right) \otimes\boldsymbol{x}_{hi}\\ & =\left( \widehat{\boldsymbol{y}}_{hi}^{\ast}-m_{hi}\boldsymbol{\pi} _{hi}^{\ast}\left( \boldsymbol{\beta}\right) \right) \otimes\boldsymbol{x} _{hi}, \end{align*} it follows that \begin{equation} \frac{1}{\sqrt{n}}\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) 
=\frac{1}{\sqrt{n}}\sum\limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h}}w_{hi}\left( \widehat{\boldsymbol{y}}_{hi}^{\ast}-m_{hi}\boldsymbol{\pi}_{hi}^{\ast }(\boldsymbol{\beta})\right) \otimes\boldsymbol{x}_{hi}+o_{p}(\boldsymbol{1} _{dk}), \label{Texp} \end{equation} Then $\mathbf{H}\left( \boldsymbol{\beta}_{0}\right) $ is the limit of \begin{align*} -\frac{1}{n}\frac{\partial}{\partial\boldsymbol{\beta}}\boldsymbol{U}_{\phi }^{T}\left( \boldsymbol{\beta}\right) & =\frac{1}{n}\sum\limits_{h=1} ^{H}\sum\limits_{i=1}^{n_{h}}w_{hi}m_{hi}\frac{\partial}{\partial \boldsymbol{\beta}}\boldsymbol{\pi}_{hi}^{\ast}(\boldsymbol{\beta} )\otimes\boldsymbol{x}_{hi}+o_{p}(\boldsymbol{1}_{dk\times dk})\\ & =\frac{1}{n}\sum\limits_{h=1}^{H}\sum\limits_{i=1}^{n_{h}}w_{hi} m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}\right) )\otimes\boldsymbol{x}_{hi}+o_{p}(\boldsymbol{1} _{dk\times dk}), \end{align*} as $n$ increases, and hence $\mathbf{H}\left( \boldsymbol{\beta}\right) =\lim_{n\rightarrow\infty}\mathbf{H}_{n}\left( \boldsymbol{\beta}\right) $. On the other hand, from (\ref{Texp}) it follows that \[ \frac{1}{\sqrt{n}}\boldsymbol{U}_{\phi}\left( \boldsymbol{\beta}\right) =\frac{1}{\sqrt{n}}\boldsymbol{U}\left( \boldsymbol{\beta}\right) +o_{p}(\boldsymbol{1}_{dk}), \] and this justifies that $\mathbf{G}\left( \boldsymbol{\beta}\right) =\lim_{n\rightarrow\infty}\mathbf{G}_{n}\left( \boldsymbol{\beta}\right) $. \end{proof} The following result justifies how to estimate $\mathbf{G}_{n}\left( \boldsymbol{\beta}\right) $, in particular $\widehat{\mathbf{G}} _{n}(\widehat{\boldsymbol{\beta}}_{P})$ given in (\ref{GHat0}), which is provided by the \texttt{SURVEYLOGISTIC} procedure\ of \texttt{SAS}. \begin{remark} \label{Th2}Matrix $\mathbf{G}\left( \boldsymbol{\beta}_{0}\right) $ of Theorem \ref{Th1} can be consistently estimated as \begin{equation} \widehat{\mathbf{G}}_{n}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \left( \boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})-\tfrac {1}{n}\boldsymbol{u}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) \left( \boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})-\tfrac{1} {n}\boldsymbol{u}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T} \label{GHat} \end{equation} with $\widehat{\boldsymbol{\beta}}_{\phi,P}$ being any pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$. In particular, if $\phi(x)=x\log x-x+1$, \begin{equation} \widehat{\mathbf{G}}_{n}(\widehat{\boldsymbol{\beta}}_{P})=\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{P})\boldsymbol{u}_{hi} ^{T}(\widehat{\boldsymbol{\beta}}_{P}), \label{GHat0} \end{equation} since $\boldsymbol{u}(\widehat{\boldsymbol{\beta}}_{P})=\boldsymbol{0}_{dk}$. On the other hand, matrix $\mathbf{H}\left( \boldsymbol{\beta}_{0}\right) $ of Theorem \ref{Th1} can be consistently estimated as \[ \mathbf{H}_{n}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast} (\widehat{\boldsymbol{\beta}}_{\phi,P}))\otimes\boldsymbol{x}_{hi} \boldsymbol{x}_{hi}^{T}. \] \end{remark} Let $\widehat{\boldsymbol{\beta}}_{\phi}$ denote the minimum phi-divergence estimator of $\boldsymbol{\beta}$ for simple random sampling within each cluster, i.e. multinomial sampling. 
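Before comparing with the simple random sampling benchmark, note that the estimators of Remark \ref{Th2} translate directly into a small numerical routine. The following Python sketch assembles $\mathbf{H}_{n}$, $\widehat{\mathbf{G}}_{n}$ and the resulting sandwich covariance $\frac{1}{n}\mathbf{H}_{n}^{-1}\widehat{\mathbf{G}}_{n}\mathbf{H}_{n}^{-1}$ from Theorem \ref{Th1}; it assumes the score contributions $\boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})$ and the per-cluster information blocks have already been computed, uses hypothetical variable names, and is only an illustration (not the \texttt{SURVEYLOGISTIC} implementation).
\begin{verbatim}
import numpy as np

def sandwich_covariance(u_list, H_blocks):
    """Sandwich covariance of the pseudo minimum phi-divergence estimator.

    u_list   : score contributions u_hi(beta_hat), each a vector of length d*k
    H_blocks : matrices w_hi*m_hi*(Delta(pi*_hi) kron x_hi x_hi^T), each (d*k, d*k)
    Returns the estimate of Var(beta_hat), i.e. H_n^{-1} G_n H_n^{-1} / n.
    """
    n = len(u_list)
    U = np.asarray(u_list)                    # n x (d*k) matrix of cluster scores
    u_bar = U.mean(axis=0)                    # (1/n) * sum_{h,i} u_hi
    G_n = (U - u_bar).T @ (U - u_bar) / n     # estimator (GHat)
    H_n = sum(H_blocks) / n                   # estimator H_n(beta_hat)
    H_inv = np.linalg.inv(H_n)
    return H_inv @ G_n @ H_inv / n
\end{verbatim}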
By following Gupta and Pardo (2007), it can be seen that \[ \lim_{n\rightarrow\infty}\boldsymbol{V}[\sqrt{n}\widehat{\boldsymbol{\beta} }_{\phi}]=\mathbf{H}^{-1}\left( \boldsymbol{\beta}_{0}\right) . \] The \textquotedblleft design effect matrix\textquotedblright\ for the multinomial logistic regression model with sample survey design is defined as $\lim_{n\rightarrow\infty}\boldsymbol{V}[\sqrt{n}\widehat{\boldsymbol{\beta} }_{\phi,P}]\boldsymbol{V}^{-1}[\sqrt{n}\widehat{\boldsymbol{\beta}}_{\phi }]=\mathbf{H}^{-1}\left( \boldsymbol{\beta}_{0}\right) \mathbf{G}\left( \boldsymbol{\beta}_{0}\right) $ and the \textquotedblleft design effect\textquotedblright, denoted by $\nu$, for the multinomial logistic regression model with sample survey design is defined as $\nu\mathbf{(} \boldsymbol{\beta}_{0})=\frac{1}{dk}\mathrm{trace}\left( \mathbf{H} ^{-1}\mathbf{(}\boldsymbol{\beta}_{0})\mathbf{G(}\boldsymbol{\beta} _{0})\right) $. In practice, $\mathbf{H(}\boldsymbol{\beta}_{0})$ and $\mathbf{G(}\boldsymbol{\beta}_{0})$\ can be consistently estimated through the pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$ as \[ \mathbf{H}_{n}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1}{n} {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast} (\widehat{\boldsymbol{\beta}}_{\phi,P}))\otimes\boldsymbol{x}_{hi} \boldsymbol{x}_{hi}^{T}, \] and $\widehat{\mathbf{G}}_{n}(\widehat{\boldsymbol{\beta}}_{\phi,P})$\ given in (\ref{GHat}). For more details about the design matrix in other models see for instance Rao and Scott (1984) or formula 7.6 in Rao and Thomas (1989). \begin{definition} A consistent estimator of the design effect matrix, $\mathbf{H}^{-1}\left( \boldsymbol{\beta}\right) \mathbf{G}\left( \boldsymbol{\beta}\right) $, based on the linearization method of Binder (1983) and the pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$, is \begin{align*} \mathbf{H}_{n}^{-1}(\widehat{\boldsymbol{\beta}}_{\phi,P})\widehat{\mathbf{G} }_{n}(\widehat{\boldsymbol{\beta}}_{\phi,P}) & =\left( {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{hi}m_{hi}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast} (\widehat{\boldsymbol{\beta}}_{\phi,P}))\otimes\boldsymbol{x}_{hi} \boldsymbol{x}_{hi}^{T}\right) ^{-1}\\ & \times {\displaystyle\sum\limits_{h=1}^{H}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} \left( \boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})-\tfrac {1}{n}\boldsymbol{u}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) \left( \boldsymbol{u}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})-\tfrac{1} {n}\boldsymbol{u}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T}. \end{align*} Similarly, a consistent estimator of the design effect, $\nu\left( \boldsymbol{\beta}_{0}\right) =\frac{1}{dk}\mathrm{trace}\left( \mathbf{H}^{-1}\left( \boldsymbol{\beta}_{0}\right) \mathbf{G}\left( \boldsymbol{\beta}_{0}\right) \right) $, based on the linearization method of Binder (1983) and the pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$, is \begin{equation} \widehat{\nu}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1}{dk} \mathrm{trace}\left( \mathbf{H}_{n}^{-1}(\widehat{\boldsymbol{\beta}} _{\phi,P})\widehat{\mathbf{G}}_{n}(\widehat{\boldsymbol{\beta}}_{\phi ,P})\right) . 
\label{eff} \end{equation} \end{definition} The estimator of the design effect is specially interesting for clusters such that \begin{align} \boldsymbol{E}[\widehat{\boldsymbol{Y}}_{hi}] & =m_{h}\boldsymbol{\pi} _{hi}\left( \boldsymbol{\beta}_{0}\right) \quad\text{and}\quad \boldsymbol{V}[\widehat{\boldsymbol{Y}}_{hi}]=\nu_{m_{h}}m_{h} \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta} _{0}\right) ),\label{multOver}\\ \nu_{m_{h}} & =1+\rho_{h}^{2}(m_{h}-1),\nonumber \end{align} with $\nu_{m_{h}}$ being the overdispersion parameter,$\ \rho_{h}^{2}$\ being the intra-cluster correlation coefficient and equal cluster sizes in the strata, $m_{hi}=m_{h}$, $h=1,...,H$, $i=1,...,n_{h}$. Examples of distributions of $\widehat{\boldsymbol{y}}_{hi}$ verifying (\ref{multOver}) are the so-called \textquotedblleft overdispersed multinomial distributions\textquotedblright\ (see Alonso et al. (2016)). For these distributions, once the pseudo minimum phi-divergence estimator of parameter $\boldsymbol{\beta}$, $\widehat{\boldsymbol{\beta}}_{\phi,P}$, is obtained, the interest lies in estimating $\rho_{h}^{2}$. In Theorems \ref{Th3} and \ref{Th4} two proposals of families of estimates for $\nu_{m_{h}}$ and $\rho_{h}^{2}$ are established. Both proposals are independent of the weights except for $\widehat{\boldsymbol{\beta}}_{\phi,P}$, and this fact has a logical explanation taking into account that the weights are constructed only for estimation of $\boldsymbol{\beta}$. \begin{theorem} \label{Th3}Let $\widehat{\boldsymbol{\beta}}_{\phi,P}$ the pseudo minimum phi-divergence estimate of parameter $\boldsymbol{\beta}$ for a multinomial logistic regression model with \textquotedblleft overdispersed multinomial distribution\textquotedblright. Assume that $w_{hi}=w_{h}$, $i=1,...,n_{h}$. Then \begin{align} \widehat{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P}) & =\frac{1} {dk}\mathrm{trace}\left( \left( {\displaystyle\sum\limits_{i=1}^{n_{h}}} m_{h}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast} (\widehat{\boldsymbol{\beta}}_{\phi,P}))\otimes\boldsymbol{x}_{hi} \boldsymbol{x}_{hi}^{T}\right) ^{-1}\right. \nonumber\\ & \left. \times {\displaystyle\sum\limits_{i=1}^{n_{h}}} \left( \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P} )-\boldsymbol{\bar{v}}_{h}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) \left( \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P} )-\boldsymbol{\bar{v}}_{h}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T}\right) \label{overdisp} \end{align} with \begin{align*} \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P}) & =\boldsymbol{r} _{hi}^{\ast}\left( \boldsymbol{\beta}\right) \otimes\boldsymbol{x}_{hi},\\ \boldsymbol{\bar{v}}_{h}(\widehat{\boldsymbol{\beta}}_{\phi,P}) & =\tfrac {1}{n_{h}} {\displaystyle\sum\limits_{k=1}^{n_{h}}} \boldsymbol{v}_{hk}(\widehat{\boldsymbol{\beta}}_{\phi,P}), \end{align*} is an estimator of $\nu_{m_{h}}$ based on the \textquotedblleft linearization method of Binder\textquotedblright\ and the pseudo minimum phi-divergence estimator of $\widehat{\boldsymbol{\beta}}_{\phi,P}$, and \[ \widehat{\rho}_{h}^{2}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac {\widehat{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})-1}{m_{h}-1} \] is an estimator of $\rho_{h}^{2}$ based on the \textquotedblleft linearization method of Binder\textquotedblright\ and the pseudo minimum phi-divergence estimator of $\widehat{\boldsymbol{\beta}}_{\phi,P}$. 
\end{theorem} \begin{proof} If $\boldsymbol{V}[\widehat{\boldsymbol{Y}}_{hi}]=\nu_{m_{h}}m_{h} \boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}\left( \boldsymbol{\beta} _{0}\right) )$, then from the expression of $\mathbf{G}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}_{0}\right) $ given in Theorem \ref{Th2}, \begin{align*} \mathbf{G}_{n_{h}}^{(h)}\left( \boldsymbol{\beta}_{0}\right) & =\frac {1}{n_{h}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{h}^{2}\boldsymbol{V}[\widehat{\boldsymbol{Y}}_{hi}^{\ast}]\otimes \boldsymbol{x}_{hi}\boldsymbol{x}_{hi}^{T}=\nu_{m_{h}}w_{h}\frac{1}{n_{h}} {\displaystyle\sum\limits_{i=1}^{n_{h}}} w_{h}m_{h}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}_{0}\right) )\otimes\boldsymbol{x}_{hi}\boldsymbol{x} _{hi}^{T}\\ & =\nu_{m_{h}}w_{h}\mathbf{H}_{n_{h}}^{(h)}\left( \boldsymbol{\beta} _{0}\right) . \end{align*} Hence, from \[ \mathrm{trace}\left( \mathbf{H}_{n_{h}}^{(h)}\left( \boldsymbol{\beta} _{0}\right) ^{-1}\mathbf{G}_{n_{h}}^{(h)}\left( \boldsymbol{\beta} _{0}\right) \right) =\nu_{m_{h}}w_{h}dk, \] and consistency of $\mathbf{H}_{n_{h}}^{(h)}(\widehat{\boldsymbol{\beta} }_{\phi,P})$ and $\widehat{\mathbf{G}}_{n_{h}}^{(h)} (\widehat{\boldsymbol{\beta}}_{\phi,P})$, \[ \widehat{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1} {dk}\mathrm{trace}\left( \frac{1}{w_{h}}\mathbf{H}_{n_{h}}^{(h)} (\widehat{\boldsymbol{\beta}}_{\phi,P})^{-1}\widehat{\mathbf{G}}_{n_{h}} ^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) , \] is proven with \begin{align*} \frac{1}{w_{h}}\mathbf{H}_{n_{h}}^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi ,P})^{-1}\widehat{\mathbf{G}}_{n_{h}}^{(h)}(\widehat{\boldsymbol{\beta}} _{\phi,P}) & =\left( {\displaystyle\sum\limits_{i=1}^{n_{h}}} m_{h}\boldsymbol{\Delta}(\boldsymbol{\pi}_{hi}^{\ast} (\widehat{\boldsymbol{\beta}}_{\phi,P}))\otimes\boldsymbol{x}_{hi} \boldsymbol{x}_{hi}^{T}\right) ^{-1}\\ & \times {\displaystyle\sum\limits_{i=1}^{n_{h}}} \left( \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P} )-\boldsymbol{\bar{v}}_{h}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) \left( \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P} )-\boldsymbol{\bar{v}}_{h}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T},\\ \boldsymbol{v}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P}) & =\frac{1} {w_{h}}\boldsymbol{u}_{hi}\left( \boldsymbol{\beta}\right) , \end{align*} which is equivalent to (\ref{overdisp}). \end{proof} \begin{remark} Since \begin{equation} \widehat{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1}{w_{h} }\frac{1}{dk}\mathrm{trace}\left( \mathbf{H}_{n_{h}}^{(h)} (\widehat{\boldsymbol{\beta}}_{\phi,P})^{-1}\widehat{\mathbf{G}}_{n_{h}} ^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) =\frac{1}{w_{h} }\widehat{\nu}^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi,P}), \label{overdisp2} \end{equation} unless $w_{h}=1$, the overdispersion parameter $\widehat{\nu}_{m_{h} }(\widehat{\boldsymbol{\beta}}_{\phi,P})$ and the design effect $\widehat{\nu }^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi,P})$ of the $h$-th stratum are not in general equivalent. Based on the expression of (\ref{overdisp}) $\widehat{\nu}_{m_{h}}(\cdot)$, does not depend on the weights except for that $\widehat{\boldsymbol{\beta}}_{\phi,P}$ is plugged in $\widehat{\nu}_{m_{h} }(\cdot)$, additionally based on (\ref{overdisp2}) it is concluded that $\widehat{\nu}^{(h)}(\widehat{\boldsymbol{\beta}}_{\phi,P})$\ is directly proportional to the weights. 
\end{remark} \begin{theorem} \label{Th4}Let $\widehat{\boldsymbol{\beta}}_{\phi,P}$ the pseudo minimum phi-divergence estimate of parameter $\boldsymbol{\beta}$ for a multinomial logistic regression model with \textquotedblleft overdispersed multinomial distribution\textquotedblright. Then \[ \widetilde{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac{1} {n_{h}d}\sum\limits_{i=1}^{n_{h}}\sum\limits_{s=1}^{d+1}\frac{\left( \widehat{y}_{his}-m_{h}\pi_{his}(\widehat{\boldsymbol{\beta}}_{\phi ,P})\right) ^{2}}{m_{h}\pi_{his}(\widehat{\boldsymbol{\beta}}_{\phi,P})} \] is an estimation of $\nu_{m_{h}}$ based on the \textquotedblleft method of moments\textquotedblright\ and the pseudo minimum phi-divergence estimator of $\widehat{\boldsymbol{\beta}}_{\phi,P}$, and \[ \widetilde{\rho}_{h}^{2}(\widehat{\boldsymbol{\beta}}_{\phi,P})=\frac {\widetilde{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})-1}{m_{h}-1} \] is an estimation of $\rho_{h}^{2}$ based on the \textquotedblleft method of moments\textquotedblright\ and the pseudo minimum phi-divergence estimator of $\widehat{\boldsymbol{\beta}}_{\phi,P}$. \end{theorem} \begin{proof} The mean vector and variance-covariance matrix of \[ \boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta}_{0})=\sqrt{m_{h}} \boldsymbol{\Delta}^{-\frac{1}{2}}(\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}_{0}\right) )(\tfrac{\widehat{\boldsymbol{Y}}_{hi}^{\ast} }{m_{h}}-\boldsymbol{\pi}_{hi}^{\ast}\left( \boldsymbol{\beta}_{0}\right) ), \] are respectively \begin{align*} \boldsymbol{E}[\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta}_{0})] & =\boldsymbol{0}_{d},\\ \boldsymbol{V}[\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta}_{0})] & =\nu_{m_{h}}\boldsymbol{I}_{d}, \end{align*} for $h=1,...,H$. An unbiased estimator of $\boldsymbol{V}[\boldsymbol{Z} _{hi}^{\ast}(\boldsymbol{\beta}_{0})]$ is \[ \widehat{\boldsymbol{V}}[\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta} _{0})]=\frac{1}{n_{h}}\sum\limits_{i=1}^{n_{h}}\boldsymbol{Z}_{hi}^{\ast }(\boldsymbol{\beta}_{0})\boldsymbol{Z}_{hi}^{\ast T}(\boldsymbol{\beta} _{0}), \] from which is derived \begin{align*} E\left[ \mathrm{trace}\widehat{\boldsymbol{V}}[\boldsymbol{Z}_{hi}^{\ast }(\boldsymbol{\beta}_{0})]\right] & =\mathrm{trace}\boldsymbol{V} [\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta}_{0})],\\ E\left[ \frac{1}{n_{h}}\sum\limits_{i=1}^{n_{h}}\mathrm{trace}\left( \boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta}_{0})\boldsymbol{Z}_{hi}^{\ast T}(\boldsymbol{\beta}_{0})\right) \right] & =\mathrm{trace}\left( \nu_{m_{h}}\boldsymbol{I}_{d}\right) ,\\ E\left[ \frac{1}{n_{h}}\sum\limits_{i=1}^{n_{h}}\boldsymbol{Z}_{hi}^{\ast T}(\boldsymbol{\beta}_{0})\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta} _{0})\right] & =\nu_{m_{h}}d,\\ E\left[ \frac{1}{n_{h}d}\sum\limits_{i=1}^{n_{h}}\boldsymbol{Z}_{hi}^{\ast T}(\boldsymbol{\beta}_{0})\boldsymbol{Z}_{hi}^{\ast}(\boldsymbol{\beta} _{0})\right] & =\nu_{m_{h}}. 
\end{align*}
This expression suggests using
\begin{align*}
\widetilde{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})  & =\frac{1}{n_{h}d}\sum\limits_{i=1}^{n_{h}}\widehat{\boldsymbol{z}}_{hi,\phi,P}^{\ast T}(\widehat{\boldsymbol{\beta}}_{\phi,P})\widehat{\boldsymbol{z}}_{hi,\phi,P}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P})\\
& =\frac{1}{n_{h}d}\sum\limits_{i=1}^{n_{h}}m_{h}\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}^{\ast}}{m_{h}}-\boldsymbol{\pi}_{hi}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T}\boldsymbol{\Delta}^{-1}(\boldsymbol{\pi}_{hi}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P}))\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}^{\ast}}{m_{h}}-\boldsymbol{\pi}_{hi}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) \\
& =\frac{1}{n_{h}d}\sum\limits_{i=1}^{n_{h}}m_{h}\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{h}}-\boldsymbol{\pi}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ^{T}\boldsymbol{\Delta}^{-}(\boldsymbol{\pi}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P}))\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}}{m_{h}}-\boldsymbol{\pi}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) ,\\
\widehat{\boldsymbol{z}}_{hi,\phi,P}^{\ast}  & =\sqrt{m_{h}}\boldsymbol{\Delta}^{-\frac{1}{2}}(\boldsymbol{\pi}_{hi}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P}))\left( \tfrac{\widehat{\boldsymbol{y}}_{hi}^{\ast}}{m_{h}}-\boldsymbol{\pi}_{hi}^{\ast}(\widehat{\boldsymbol{\beta}}_{\phi,P})\right) .
\end{align*}
Finally, since $\boldsymbol{\Delta}^{-}(\boldsymbol{\pi}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P}))=\mathrm{diag}^{-1}(\boldsymbol{\pi}_{hi}(\widehat{\boldsymbol{\beta}}_{\phi,P}))$ is a valid choice of generalized inverse, the desired expression for $\widetilde{\nu}_{m_{h}}(\widehat{\boldsymbol{\beta}}_{\phi,P})$ is obtained.
\end{proof}

\section{Numerical Example\label{sec4}}

In this Section we consider an example, which appears in SAS Institute Inc. (2013, Chapter 95) as well as in An (2002), in order to illustrate how the pseudo minimum phi-divergence estimator works for multinomial logistic regression with a complex sample survey.
\begin{table}[htbp]
\tabcolsep2.8pt
\centering
$
\begin{tabular}
[c]{cc}\hline
\textbf{Class} & \textbf{Enrollment}\\\hline
\multicolumn{1}{l}{Freshman} & 3734\\
\multicolumn{1}{l}{Sophomore} & 3565\\
\multicolumn{1}{l}{Junior} & 3903\\
\multicolumn{1}{l}{Senior} & 4196\\\hline
\end{tabular}
$
\caption{Number of students in each class of the target population for the survey.\label{table1}}
\end{table}
A market research firm conducts a survey among undergraduate students at the University of North Carolina (UNC), at Chapel Hill, to evaluate three new web designs for a commercial web-site targeting undergraduate students. The total number of students in each class in the Fall semester of 2001 is shown in Table \ref{table1}. The sample design is a stratified sample with clusters nested within the strata, the strata being the four students' classes and the clusters the three web designs. Initially, $100$ students were planned to be randomly selected for each of the $n=12$ clusters (class--design combinations) using simple random sampling (without replacement). For this reason, the weights for estimation are taken to be $w_{1}=\frac{3734}{300}$, $w_{2}=\frac{3565}{300}$, $w_{3}=\frac{3903}{300}$, $w_{4}=\frac{4196}{300}$. Since $m_{hi}=100$ for $h=1,2,3,4=H$ (strata) and $i=1,2,3=n_{h}$ (clusters), except for $m_{12}=90$ and $m_{43}=97$, in practice some observations are missing.
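Before turning to the survey responses themselves, we remark that the moment-based estimators of Theorem \ref{Th4} are immediate to compute once $\widehat{\boldsymbol{\beta}}_{\phi,P}$ is available. The following Python sketch simply transcribes the formulas for $\widetilde{\nu}_{m_{h}}$ and $\widetilde{\rho}_{h}^{2}$; the array names are hypothetical and it is meant only as an illustration.
\begin{verbatim}
import numpy as np

def moment_overdispersion(y, pi, m):
    """Method-of-moments estimates of nu_{m_h} and rho_h^2 (Theorem Th4).

    y  : (n_h, d+1) observed counts for the clusters of one stratum
    pi : (n_h, d+1) fitted probabilities pi_his(beta_hat)
    m  : common cluster size m_h
    """
    n_h, d_plus_1 = y.shape
    d = d_plus_1 - 1
    nu_hat = np.sum((y - m * pi) ** 2 / (m * pi)) / (n_h * d)
    rho2_hat = (nu_hat - 1.0) / (m - 1.0)
    return nu_hat, rho2_hat
\end{verbatim}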
Each student selected in the sample is asked to evaluate the three Web designs and to rate them ranging from dislike very much to like very much: ($1$) dislike very much, ($2$) dislike, ($3$) neutral, ($4$) like, ($5=d+1$) like very much. The survey results are collected and shown in Table \ref{table2}, with the three different Web designs coded A, B and C. This table matches the one given in An (2002) and the version appeared in SAS Institute Inc. (2013, Chapter 95) is slightly different. \begin{table}[htbp] \tabcolsep2.8pt \centering $ \begin{tabular} [c]{lcccccc}\hline & & \multicolumn{5}{c}{\textbf{Rating Counts}}\\\cline{3-7} \textbf{Strata} & \textbf{Design} & 1 & 2 & 3 & 4 & 5\\\hline Freshman & A & \multicolumn{1}{r}{10} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{25} & \multicolumn{1}{r}{16} & \multicolumn{1}{r}{15}\\ & B & \multicolumn{1}{r}{5} & \multicolumn{1}{r}{10} & \multicolumn{1}{r}{24} & \multicolumn{1}{r}{30} & \multicolumn{1}{r}{21}\\ & C & \multicolumn{1}{r}{11} & \multicolumn{1}{r}{14} & \multicolumn{1}{r}{20} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{21}\\\hline Sophomore & A & \multicolumn{1}{r}{19} & \multicolumn{1}{r}{12} & \multicolumn{1}{r}{26} & \multicolumn{1}{r}{18} & \multicolumn{1}{r}{25}\\ & B & \multicolumn{1}{r}{10} & \multicolumn{1}{r}{18} & \multicolumn{1}{r}{32} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{17}\\ & C & \multicolumn{1}{r}{15} & \multicolumn{1}{r}{22} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{9} & \multicolumn{1}{r}{20}\\\hline Junior & A & \multicolumn{1}{r}{8} & \multicolumn{1}{r}{21} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{26} & \multicolumn{1}{r}{22}\\ & B & \multicolumn{1}{r}{1} & \multicolumn{1}{r}{14} & \multicolumn{1}{r}{25} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{37}\\ & C & \multicolumn{1}{r}{16} & \multicolumn{1}{r}{19} & \multicolumn{1}{r}{30} & \multicolumn{1}{r}{23} & \multicolumn{1}{r}{12}\\\hline Senior & A & \multicolumn{1}{r}{11} & \multicolumn{1}{r}{14} & \multicolumn{1}{r}{24} & \multicolumn{1}{r}{33} & \multicolumn{1}{r}{18}\\ & B & \multicolumn{1}{r}{8} & \multicolumn{1}{r}{15} & \multicolumn{1}{r}{35} & \multicolumn{1}{r}{30} & \multicolumn{1}{r}{12}\\ & C & \multicolumn{1}{r}{2} & \multicolumn{1}{r}{34} & \multicolumn{1}{r}{27} & \multicolumn{1}{r}{18} & \multicolumn{1}{r}{16}\\\hline \end{tabular} \ \ \ \ \ $\caption{Evaluation of New Web Designs.\label{table2}} \end{table} The explanatory variables are qualitative, and valid to distinguish the clusters within the strata. With respect to design A, it is given by $\boldsymbol{x}_{h1}^{T}=\boldsymbol{x}_{1}^{T}=(1,0,0)$, $h=1,2,3,4$; with respect to design B, by $\boldsymbol{x}_{h2}^{T}=\boldsymbol{x}_{2} ^{T}=(0,1,0)$, $h=1,2,3,4$; with respect to design C, by $\boldsymbol{x} _{h3}^{T}=\boldsymbol{x}_{3}^{T}=(0,0,1)$, $h=1,2,3,4$. In Table \ref{table4} every row represents the pseudo minimum Cressie-Read divergence estimates of the $5$-dimensional probability vector $\boldsymbol{\pi}_{hi} (\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})=\boldsymbol{\pi} _{i}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$, for the $i$-th cluster $i=1,2,3$, for any stratum $h=1,2,3,4$, and a specific value in $\lambda \in\{0,\frac{2}{3},1,1.5,2,2.5\}$. 
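Estimates of this kind can, in principle, be reproduced by direct numerical minimization. The sketch below assumes that the pseudo minimum Cressie--Read divergence estimator minimizes the weighted divergence $\sum_{h,i}w_{hi}m_{hi}\,d_{\phi_{\lambda}}(\widehat{\boldsymbol{y}}_{hi}/m_{hi},\boldsymbol{\pi}_{hi}(\boldsymbol{\beta}))$, with $\phi_{\lambda}$ the standard Cressie--Read family of (\ref{CR}); the data layout and function names are hypothetical, and the analysis reported here was carried out with \texttt{SAS}, not with this code.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cressie_read(p, q, lam):
    """Power divergence d_lambda(p, q); lam = 0 is the Kullback-Leibler limit."""
    keep = p > 0
    p, q = p[keep], q[keep]
    if lam == 0.0:
        return np.sum(p * np.log(p / q))
    return np.sum(p * ((p / q) ** lam - 1.0)) / (lam * (lam + 1.0))

def fit_pseudo_mcr(y, m, w, X, lam, d, beta0):
    """Pseudo minimum Cressie-Read divergence estimate of beta (sketch only).

    y : (N, d+1) cluster counts, m : (N,) cluster sizes, w : (N,) weights,
    X : (N, k) covariate vectors x_hi, beta0 : initial guess of length d*k.
    """
    def probs(beta):
        B = beta.reshape(d, -1)                          # beta_{d+1} = 0 (reference)
        eta = np.hstack([X @ B.T, np.zeros((X.shape[0], 1))])
        e = np.exp(eta - eta.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    def objective(beta):
        P = probs(beta)
        phat = y / m[:, None]
        return sum(w[i] * m[i] * cressie_read(phat[i], P[i], lam)
                   for i in range(len(m)))
    return minimize(objective, beta0, method="Nelder-Mead").x
\end{verbatim}
For example, the $\lambda=0$ column of the tables corresponds to the pseudo maximum likelihood fit, while the remaining columns are obtained by re-running the same minimization with $\lambda\in\{\frac{2}{3},1,1.5,2,2.5\}$.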
Each column of Table \ref{table3} summarizes, first the pseudo minimum Cressie-Read divergence estimates of $\boldsymbol{\beta}=(\boldsymbol{\beta}_{1}^{T},\boldsymbol{\beta}_{2} ^{T},\boldsymbol{\beta}_{3}^{T},\boldsymbol{\beta}_{4}^{T})^{T}$, with $\boldsymbol{\beta}_{i}^{T}=(\beta_{i1},\beta_{i2},\beta_{i3})$ $i=1,2,3,4$ \ and\ $\lambda\in\{0,\frac{2}{3},1,1.5,2,2.5\}$, as well as the two versions of the intra-cluster correlation estimates according to Theorems \ref{Th3} and \ref{Th4} for the strata with the same cluster sizes, i.e. Sophomore ($2$) and Junior ($3$). Section \ref{sec5} is devoted to study through simulation the best choice for the value of $\lambda$ according to the root of the minimum square error of $\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P}$, $\widehat{\rho}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$ and $\widetilde{\rho}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$. \begin{table}[htbp] \tabcolsep2.8pt \centering $ \begin{tabular} [c]{ccccccc}\hline & & \multicolumn{5}{c}{\textbf{Rating Counts}}\\\cline{3-7} $\lambda$ & \textbf{Design} & 1 & 2 & 3 & 4 & 5\\\hline $0$ & A & $0.1185$ & $0.2016$ & $0.2445$ & $0.2363$ & $0.1991$\\ & B & $0.0611$ & $0.1458$ & $0.2983$ & $0.2727$ & $0.2222$\\ & C & $0.1083$ & $0.2276$ & $0.2791$ & $0.2124$ & $0.1727$\\ $\frac{2}{3}$ & A & $0.1200$ & $0.2079$ & $0.2387$ & $0.2369$ & $0.1965$\\ & B & $0.0660$ & $0.1439$ & $0.2931$ & $0.2672$ & $0.2297$\\ & C & $0.1145$ & $0.2275$ & $0.2723$ & $0.2167$ & $0.1690$\\ $1$ & A & $0.1208$ & $0.2109$ & $0.2359$ & $0.2371$ & $0.1952$\\ & B & $0.0676$ & $0.1431$ & $0.2909$ & $0.2648$ & $0.2336$\\ & C & $0.1163$ & $0.2279$ & $0.2695$ & $0.2188$ & $0.1675$\\ $1.5$ & A & $0.1221$ & $0.2152$ & $0.2319$ & $0.2374$ & $0.1934$\\ & B & $0.0693$ & $0.1420$ & $0.2879$ & $0.2616$ & $0.2392$\\ & C & $0.1179$ & $0.2289$ & $0.2659$ & $0.2215$ & $0.1657$\\ $2$ & A & $0.1234$ & $0.2191$ & $0.2282$ & $0.2376$ & $0.1917$\\ & B & $0.0705$ & $0.1410$ & $0.2854$ & $0.2587$ & $0.2444$\\ & C & $0.1188$ & $0.2301$ & $0.2630$ & $0.2240$ & $0.1641$\\ $2.5$ & A & $0.1246$ & $0.2226$ & $0.2248$ & $0.2377$ & $0.1902$\\ & B & $0.0714$ & $0.1402$ & $0.2831$ & $0.2562$ & $0.2491$\\ & C & $0.1192$ & $0.2314$ & $0.2604$ & $0.2262$ & $0.1628$\\\hline \end{tabular} \ $ \caption{Pseudo minimum Cressie-Read divergence estimates of probabilities for any of the four strata.\label{table4}} \end{table} \begin{table}[htbp] \tabcolsep2.8pt \centering $ \begin{tabular} [c]{cccccccc}\hline & \multicolumn{7}{c}{$\lambda$}\\\cline{2-8} & $\qquad0$ & $\qquad\frac{2}{3}$ & $\qquad1$ & $\qquad1.5$ & $\qquad2$ & $\qquad2.5$ & $\quad$\\\hline $\widehat{\beta}_{11,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$-0.5188$} & \multicolumn{1}{r}{$-0.4933$} & \multicolumn{1}{r}{$-0.4802$} & \multicolumn{1}{r}{$-0.4604$} & \multicolumn{1}{r}{$-0.4411$} & \multicolumn{1}{r}{$-0.4228$} & \\ $\widehat{\beta}_{12,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$-1.2910$} & \multicolumn{1}{r}{$-1.2475$} & \multicolumn{1}{r}{$-1.2400$} & \multicolumn{1}{r}{$-1.2381$} & \multicolumn{1}{r}{$-1.2424$} & \multicolumn{1}{r}{$-1.2494$} & \\ $\widehat{\beta}_{13,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$-0.4665$} & \multicolumn{1}{r}{$-0.3889$} & \multicolumn{1}{r}{$-0.3649$} & \multicolumn{1}{r}{$-0.3397$} & \multicolumn{1}{r}{$-0.3230$} & \multicolumn{1}{r}{$-0.3116$} & \\\cline{2-8} $\widehat{\beta}_{21,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.0127$} & \multicolumn{1}{r}{$0.0564$} & \multicolumn{1}{r}{$0.0773$} & \multicolumn{1}{r}{$0.1069$} & 
\multicolumn{1}{r}{$0.1336$} & \multicolumn{1}{r}{$0.1573$} & \\ $\widehat{\beta}_{22,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$-0.4210$} & \multicolumn{1}{r}{$-0.4676$} & \multicolumn{1}{r}{$-0.4899$} & \multicolumn{1}{r}{$-0.5213$} & \multicolumn{1}{r}{$-0.5498$} & \multicolumn{1}{r}{$-0.5750$} & \\ $\widehat{\beta}_{23,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.2761$} & \multicolumn{1}{r}{$0.2974$} & \multicolumn{1}{r}{$0.3079$} & \multicolumn{1}{r}{$0.3233$} & \multicolumn{1}{r}{$0.3380$} & \multicolumn{1}{r}{$0.3517$} & \\\cline{2-8} $\widehat{\beta}_{31,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.2056$} & \multicolumn{1}{r}{$0.1947$} & \multicolumn{1}{r}{$0.1894$} & \multicolumn{1}{r}{$0.1816$} & \multicolumn{1}{r}{$0.1741$} & \multicolumn{1}{r}{$0.1670$} & \\ $\widehat{\beta}_{32,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.2946$} & \multicolumn{1}{r}{$0.2438$} & \multicolumn{1}{r}{$0.2196$} & \multicolumn{1}{r}{$0.1857$} & \multicolumn{1}{r}{$0.1551$} & \multicolumn{1}{r}{$0.1280$} & \\ $\widehat{\beta}_{33,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.4803$} & \multicolumn{1}{r}{$0.4770$} & \multicolumn{1}{r}{$0.4754$} & \multicolumn{1}{r}{$0.4733$} & \multicolumn{1}{r}{$0.4714$} & \multicolumn{1}{r}{$0.4697$} & \\\cline{2-8} $\widehat{\beta}_{41,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.1715$} & \multicolumn{1}{r}{$0.1870$} & \multicolumn{1}{r}{$0.1944$} & \multicolumn{1}{r}{$0.2048$} & \multicolumn{1}{r}{$0.2143$} & \multicolumn{1}{r}{$0.2228$} & \\ $\widehat{\beta}_{42,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.2048$} & \multicolumn{1}{r}{$0.1512$} & \multicolumn{1}{r}{$0.1256$} & \multicolumn{1}{r}{$0.0896$} & \multicolumn{1}{r}{$0.0570$} & \multicolumn{1}{r}{$0.0280$} & \\ $\widehat{\beta}_{43,\phi_{\lambda},P}$ & \multicolumn{1}{r}{$0.2070$} & \multicolumn{1}{r}{$0.2488$} & \multicolumn{1}{r}{$0.2668$} & \multicolumn{1}{r}{$0.2906$} & \multicolumn{1}{r}{$0.3111$} & \multicolumn{1}{r}{$0.3288$} & \\\hline $\widehat{\rho}_{2}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$ & \multicolumn{1}{r}{$0.0119$} & \multicolumn{1}{r}{$0.0123$} & \multicolumn{1}{r}{$0.0127$} & \multicolumn{1}{r}{$0.0135$} & \multicolumn{1}{r}{$0.0142$} & \multicolumn{1}{r}{$0.0150$} & \\ $\widetilde{\rho}_{2}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$ & \multicolumn{1}{r}{$0.0119$} & \multicolumn{1}{r}{$0.0048$} & \multicolumn{1}{r}{$0.0051$} & \multicolumn{1}{r}{$0.0056$} & \multicolumn{1}{r}{$0.0061$} & \multicolumn{1}{r}{$0.0067$} & \\\cline{2-8} $\widehat{\rho}_{3}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$ & \multicolumn{1}{r}{$0.0088$} & \multicolumn{1}{r}{$0.0072$} & \multicolumn{1}{r}{$0.0066$} & \multicolumn{1}{r}{$0.0059$} & \multicolumn{1}{r}{$0.0054$} & \multicolumn{1}{r}{$0.0051$} & \\ $\widetilde{\rho}_{3}^{2}(\widehat{\boldsymbol{\beta}}_{\phi_{\lambda},P})$ & \multicolumn{1}{r}{$0.0088$} & \multicolumn{1}{r}{$0.0014$} & \multicolumn{1}{r}{$0.0010$} & \multicolumn{1}{r}{$0.0006$} & \multicolumn{1}{r}{$0.0003$} & \multicolumn{1}{r}{$0.0000$} & \\\hline \end{tabular} \ \ $ \caption{Pseudo minimum Cressie-Read divergence estimates of $\boldsymbol{\beta }$ and $\rho ^{2}$.\label{table3}} \end{table} \pagebreak \section{Simulation Study\label{sec5}} In order to analyze the performance of the proposed estimators through root of the mean square errors (RMSE), an adapted design focussed in the simulation experiment proposed in Morel (1989) is conducted. 
Based on a single stratum with $n$ clusters of the same size $m$, three overdispersed multinomial distributions for $\widehat{\boldsymbol{Y}}_{i}$, characterized by
\begin{align*}
\boldsymbol{E}[\widehat{\boldsymbol{Y}}_{i}]  & =m\boldsymbol{\pi}_{i}\left( \boldsymbol{\beta}_{0}\right) \quad\text{and}\quad\boldsymbol{V}[\widehat{\boldsymbol{Y}}_{i}]=\nu_{m}m\boldsymbol{\Delta}(\boldsymbol{\pi}_{i}\left( \boldsymbol{\beta}_{0}\right) ),\\
\nu_{m}  & =1+\rho^{2}(m-1),
\end{align*}
are considered for $i=1,...,n$: the Dirichlet-multinomial (DM), the random-clumped (RC) and the $m$-inflated ($m$-I) distributions, all of them with the same parameters $\boldsymbol{\pi}_{i}\left( \boldsymbol{\beta}_{0}\right) $ and $\rho$ (see the Appendix of Alonso et al. (2016) for details of their generators). The true probability vector associated with the $i$-th cluster is $\boldsymbol{\pi}_{i}\left( \boldsymbol{\beta}_{0}\right) =(\pi_{i1}\left( \boldsymbol{\beta}_{0}\right) ,\pi_{i2}\left( \boldsymbol{\beta}_{0}\right) ,\pi_{i3}\left( \boldsymbol{\beta}_{0}\right) ,\pi_{i4}\left( \boldsymbol{\beta}_{0}\right) )^{T}$, where
\[
\pi_{ir}\left( \boldsymbol{\beta}_{0}\right) =\dfrac{\exp\{\boldsymbol{x}_{i}^{T}\boldsymbol{\beta}_{r,0}\}}{{\textstyle\sum_{s=1}^{d+1}}\exp\{\boldsymbol{x}_{i}^{T}\boldsymbol{\beta}_{s,0}\}},\quad r=1,2,3,4,
\]
$\boldsymbol{\beta}=(\boldsymbol{\beta}_{1}^{T},\boldsymbol{\beta}_{2}^{T},\boldsymbol{\beta}_{3}^{T},\boldsymbol{\beta}_{4}^{T})^{T}$, with $\boldsymbol{\beta}_{1}^{T}=(-0.3,-0.1,0.1,0.2)$, $\boldsymbol{\beta}_{2}^{T}=(0.2,-0.2,-0.2,0.1)$, $\boldsymbol{\beta}_{3}^{T}=(-0.1,0.3,-0.3,0.1)$, $\boldsymbol{\beta}_{4}^{T}=(0,0,0,0)$, and
\[
\boldsymbol{x}_{i}\overset{ind}{\sim}\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma}),\quad\boldsymbol{\mu}=(1,-2,1,5)^{T},\quad\boldsymbol{\Sigma}=\mathrm{diag}\{0,25,25,25\},\quad i=1,\ldots,n,
\]
while the true value of the intra-cluster correlation parameter, $\rho^{2}$, differs depending on the scenario. Notice that $d=3$ and $k=4$, and the values of $n$ and $m$ also differ depending on the scenario.
\begin{itemize}
\item Scenario 1: $n=60$, $m=21$, $\rho^{2}\in\{0.05i\}_{i=0}^{19}$, DM, RC and $m$-I distributions (Figures \ref{fig1}-\ref{fig3});
\item Scenario 2: $n\in\{10i\}_{i=1}^{15}$, $m=21$, $\rho^{2}=0.25$, RC distribution (Figure \ref{fig4});
\item Scenario 3: $n=60$, $m\in\{10i\}_{i=1}^{10}$, $\rho^{2}=0.25$, RC distribution (Figures \ref{fig5}-\ref{fig6}, above);
\item Scenario 4: $n=60$, $m\in\{10i\}_{i=1}^{10}$, $\rho^{2}=0.75$, RC distribution (Figures \ref{fig5}-\ref{fig6}, middle);
\item Scenario 5: $n=20$, $m\in\{10i\}_{i=1}^{10}$, $\rho^{2}=0.25$, RC distribution (Figures \ref{fig5}-\ref{fig6}, below).
\end{itemize}
In these scenarios the RMSEs of the pseudo minimum Cressie-Read divergence estimators of $\boldsymbol{\beta}$ with $\lambda\in\{0,\frac{2}{3},1,1.5,2,2.5\}$ are studied, as well as those of the estimators of $\rho^{2}$, depending on the method (of moments or Binder) and on the value of $\lambda$ used to estimate $\boldsymbol{\beta}$ (ordinal axis of the plots). As expected from a theoretical point of view, the simulations show that the RMSE increases as $\rho^{2}$ increases, $n$ decreases or $m$ decreases. For $\boldsymbol{\beta}$, the interest of the pseudo minimum Cressie-Read divergence estimators is clearly justified for small to moderate values of $n$ and moderate to strong intra-cluster correlation. The cluster size, $m$, has an effect, but not as strong as that of the number of clusters, $n$.
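As a side note on implementation, cluster counts with the overdispersed mean--variance structure above can be generated, for instance, through the Dirichlet-multinomial construction (the RC and $m$-inflated generators are described in the Appendix of Alonso et al. (2016) and are not reproduced here). The following Python sketch produces one replicate of a scenario of the type just described; it is only illustrative and is not the exact code behind the figures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2023)

def dirichlet_multinomial(m, pi, rho2, rng):
    """One cluster count with E[Y] = m*pi, V[Y] = (1 + rho2*(m-1)) * m * Delta(pi).

    Uses p ~ Dirichlet(alpha*pi) with alpha = (1 - rho2)/rho2, then Y ~ Mult(m, p);
    valid for 0 < rho2 < 1.
    """
    alpha = (1.0 - rho2) / rho2
    return rng.multinomial(m, rng.dirichlet(alpha * pi))

# One replicate with n = 60 clusters of size m = 21 and rho^2 = 0.25
n, m, rho2 = 60, 21, 0.25
mu = np.array([1.0, -2.0, 1.0, 5.0])
sd = np.sqrt(np.array([0.0, 25.0, 25.0, 25.0]))          # Sigma = diag{0,25,25,25}
beta = np.array([[-0.3, -0.1,  0.1, 0.2],
                 [ 0.2, -0.2, -0.2, 0.1],
                 [-0.1,  0.3, -0.3, 0.1],
                 [ 0.0,  0.0,  0.0, 0.0]])               # beta_4 = 0 (reference)
X = mu + sd * rng.standard_normal((n, 4))                # x_i ~ N(mu, Sigma)
eta = X @ beta.T
P = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
Y = np.array([dirichlet_multinomial(m, P[i], rho2, rng) for i in range(n)])
\end{verbatim}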
More specifically, in those cases (small to moderate $n$ and moderate to strong intra-cluster correlation), the values $\lambda\in\{\frac{2}{3},1,1.5,2,2.5\}$ exhibit better performance than the pseudo maximum likelihood estimator ($\lambda=0$). For the estimators of the intra-cluster correlation coefficient, two clear and important findings, valid for any value of $n$, $m$, or true value of $\rho^{2}$, are:
\begin{description}
\item[*] The estimator of $\rho^{2}$ based on the method of moments is not recommended, since the estimator based on Binder's method is much better.
\item[*] The best estimator of $\rho^{2}$ based on Binder's method is obtained with $\lambda=\frac{2}{3}$.
\end{description}
\begin{figure}
\caption{RMSEs of the pseudo minimum Cressie-Read divergence estimators of $\boldsymbol{\beta}$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{RMSEs of the estimators of $\rho^{2}$.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{RMSEs of the estimators of $\rho^{2}$.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{RMSEs of the estimators of $\boldsymbol{\beta}$.}
\label{fig4}
\end{figure}
\begin{figure}
\caption{RMSEs of the estimators of $\boldsymbol{\beta}$.}
\label{fig5}
\end{figure}
\begin{figure}
\caption{RMSEs of the estimators of $\rho^{2}$.}
\label{fig6}
\end{figure}
\section{Concluding remarks\label{sec6}}
Even though multinomial logistic regression is an extensively applied model, to our knowledge there is no study comparing the method of moments and Binder's method for estimating the intra-cluster correlation coefficient. The simulation study designed in this paper shows that Binder's method is by far the better choice. As future research, we would like to extend the proposed methodology to the estimation of $\boldsymbol{\beta}$ and $\rho^{2}$ when the cluster sizes differ.
\pagebreak

\end{document}
\begin{document} \title[Wave Equations with Robin--Acoustic Perturbation]{Attractors for Damped Semilinear Wave Equations with a Robin--Acoustic Boundary Perturbation} \author[J. L. Shomberg]{Joseph L. Shomberg} \subjclass[2000]{Primary: 35B25, 35B41; Secondary: 35L20, 35Q72.} \keywords{Damped semilinear wave equation, acoustic boundary condition, Robin boundary condition, singular perturbation, global attractor, upper-semicontinuity, exponential attractor.} \address{Department of Mathematics and Computer Science, Providence College, Providence, Rhode Island 02918, USA, \\ {\tt{[email protected]}}} \date\today \begin{abstract} Under consideration is the damped semilinear wave equation \[ u_{tt}+u_t-\Delta u + u + f(u)=0 \] on a bounded domain $\Omega$ in $\mathbb{R}^3$ with a perturbation parameter $\varepsilon>0$ occurring in an acoustic boundary condition, limiting ($\varepsilon=0$) to a Robin boundary condition. With minimal assumptions on the nonlinear term $f$, the existence and uniqueness of global weak solutions is shown for each $\varepsilon\in[0,1]$. Also, the existence of a family of global attractors is shown to exist. After proving a general result concerning the upper-semicontinuity of a one-parameter family of sets, the result is applied to the family of global attractors. Because of the complicated boundary conditions for the perturbed problem, fractional powers of the Laplacian are not well-defined; moreover, because of the restrictive growth assumptions on $f$, the family of global attractors is obtained from the asymptotic compactness method developed by J. Ball for generalized semiflows. With more relaxed assumptions on the nonlinear term $f$, we are able to show the global attractors possess optimal regularity and prove the existence of an exponential attractor, for each $\varepsilon\in[0,1].$ This result insures that the corresponding global attractor inherits finite (fractal) dimension, however, the dimension is {\em{not}} necessarily uniform in $\varepsilon$. \end{abstract} \maketitle \section{Introduction to the model problems} \label{s:intro} Let $\Omega\subset\mathbb{R}^3$ be a bounded domain with smooth boundary $\Gamma$. The equation under consideration in the unknown $u=u(t,x)$ is the semilinear damped wave equation \begin{equation}\label{damped-wave-equation} u_{tt} + u_t-\Delta u + u + f(u) = 0 \quad \text{in} \quad (0,\infty)\times\Omega \end{equation} with the initial conditions \begin{equation} \label{Robin-initial-conditions} u(0,\cdot) = u_0, ~u_t(0,\cdot) = u_1 \quad \text{on} \quad \{0\}\times \Omega \end{equation} and the homogeneous Robin boundary condition \begin{equation} \label{Robin-boundary} \partial_{\bf{n}}u + u = 0 \quad \text{on} \quad (0,\infty)\times\Gamma, \end{equation} where $\bf{n}$ is the outward pointing unit vector normal to the surface $\Gamma$ at $x$, and $\partial_{\bf{n}}u$ denotes the normal derivative of $u$, $\nabla u\cdot \bf{n}$. Assume the nonlinear term $f\in C^1(\mathbb{R})$ satisfies the growth condition \begin{equation} \label{f-assumption-1} |f'(s)|\leq \ell(1+s^2) \end{equation} for some $\ell\geq 0$, and the sign condition \begin{equation}\label{f-assumption-2} \liminf_{|s|\rightarrow\infty}\frac{f(s)}{s} > -1 \end{equation} Collectively, denote the IBVP (\ref{damped-wave-equation})-(\ref{Robin-boundary}), with (\ref{f-assumption-1})-(\ref{f-assumption-2}), or (\ref{f-assumption-2}), (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) as Problem (R). 
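For orientation, note that a standard example satisfying these hypotheses is the cubic nonlinearity $f(s)=s^{3}$: indeed, $|f'(s)|=3s^{2}\leq 3(1+s^{2})$, so (\ref{f-assumption-1}) holds with $\ell=3$, and $f(s)/s=s^{2}\rightarrow+\infty$ as $|s|\rightarrow\infty$, so the sign condition (\ref{f-assumption-2}) holds as well; moreover, $|f''(s)|=6|s|\leq 6(1+|s|)$ and $f'(s)=3s^{2}\geq 0$, so this example also satisfies the stronger assumptions (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) introduced below.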
Also of interest is the following ``relaxation'' of the Robin boundary condition, where, for $\delta=\delta(t,x)$ and $\varepsilon>0$, the acoustic perturbation boundary condition is employed \begin{equation} \label{acoustic-boundary} \left\{ \begin{array}{ll} \delta_{tt} + \varepsilon[\delta_t + \delta + g(\delta)] = -u_t & \quad \text{on} \quad (0,\infty)\times\Gamma \\ \delta_t = \partial_{\bf{n}}u & \quad \text{on} \quad (0,\infty)\times\Gamma, \end{array}\right. \end{equation} supplemented with the additional initial conditions \begin{equation}\label{acoustic-initial-conditions} \varepsilon\delta(0,\cdot) = \varepsilon\delta_0, \quad \delta_t(0,\cdot) = \delta_1 \quad \text{on} \quad \{0\}\times \Gamma. \end{equation} During the discussion on well-posedness and the dissipative qualitative behavior of the perturbation problem, we assume $g\in C^1(\mathbb{R})$ satisfies the growth condition \begin{equation}\label{g-assumption-1} |g'(s)|\leq \rho \end{equation} for some $\rho\geq 0$, and the sign condition \begin{equation}\label{g-assumption-2} \liminf_{|s|\rightarrow\infty}\frac{g(s)}{s}>-1. \end{equation} Denote the IBVP (\ref{damped-wave-equation})-(\ref{Robin-initial-conditions}), (\ref{acoustic-boundary})-(\ref{acoustic-initial-conditions}) with (\ref{f-assumption-1})-(\ref{f-assumption-2}) or (\ref{f-assumption-2}), (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) (below), and (\ref{g-assumption-1})-(\ref{g-assumption-2}), or (\ref{g-reg-ass-1}) (below) by Problem (A). During portions of this article, we will make different assumptions on the nonlinear terms $f$ and $g$. When we discuss global attractors for Problem (R) and Problem (A), we will take \begin{equation} \label{g-reg-ass-1} g\equiv0. \end{equation} Also, when we discuss regularity results or exponential attractors, we will need to require more from the nonlinear term $f$; in that case, we assume the growth condition \begin{equation}\label{f-reg-ass-2} |f''(s)|\leq \ell_1(1+|s|) \end{equation} and the bound \begin{equation}\label{f-reg-ass-3} f'(s)\geq -\ell_2, \end{equation} holds for some $\ell_1,\ell_2\geq 0$, in addition to (\ref{f-assumption-2}). The idea of ``relaxing'' a Robin boundary condition into an acoustic boundary condition comes from \cite{Beale&Rosencrans74}. In the singular case of the acoustic boundary condition; that is, when $\varepsilon=0$ in (\ref{acoustic-boundary}), the following Robin boundary condition in $u_t$ is obtained, \begin{equation} \label{Robin-boundary-ut} \frac{\partial^2 u}{\partial t \partial{\bf{n}}} + u_t = 0 \quad \text{on} \quad (0,\infty)\times\Gamma. \end{equation} The condition in equation (\ref{Robin-boundary-ut}) can be expressed as the system on $(0,\infty)\times\Gamma$, \begin{equation} \label{key-2} \left\{ \begin{array}{l} \delta_{tt} = -u_t \\ \delta_t = \partial_{\bf{n}}u. \end{array} \right. \end{equation} In general, the damped wave equation (\ref{damped-wave-equation}) has several applications to physics, such as relativistic quantum mechanics (cf. e.g.\cite{Babin&Vishik92,Temam88}). Problem (R) may be used to govern a thermoelastic medium with (\ref{Robin-boundary}) indicating that the amount of heat is proportional to its flux at the boundary. For this reason, the Robin boundary condition is also called a convection surface condition as well. One way to motivate the Robin boundary condition may be as a static condition based on the dynamic boundary condition, \begin{equation} \label{DBC} \partial_{\bf{n}}u+u+u_t=0. 
\end{equation} The dynamic term $u_t$ that now appears can be used to account for frictional forces that produce damping along the boundary on the physical domain. The condition (\ref{DBC}) also has motivations coming from thermodynamics and also appears in connection with the Wentzell boundary condition (cf. \cite{Gal12-2} and the references therein). Problem (A) describes a gas experiencing irrotational forces from a rest state in a domain $\Omega$. The surface $\Gamma$ acts as a locally reacting spring-like mechanism in response to excess pressure in $\Omega$. The unknown $\delta =\delta (t,x)$ represents the {\em{inward}} ``displacement'' of the boundary $\Gamma$ reacting to a pressure described by $-u_{t}$. The first equation (\ref{acoustic-boundary})$_{1}$ describes the spring-like effect in which $\Gamma $ (and $\delta $) interacts with $-u_{t}$, and the second equation (\ref{acoustic-boundary})$_{2}$ is the continuity condition: velocity of the boundary displacement $\delta $ agrees with the normal derivative of $u$. The presence of the term $g$ indicates nonlinear effects in the damped oscillations occurring on the surface. Together, (\ref{acoustic-boundary}) describe $\Gamma $ as a so-called locally reactive surface. In applications the unknown $u$ may be taken as a velocity potential of some fluid or gas in $\Omega $ that was disturbed from its equilibrium. The acoustic boundary condition was rigorously described by Beale and Rosencrans in \cite{Beale76,Beale&Rosencrans74}. Various recent sources investigate the wave equation equipped with acoustic boundary conditions, \cite{CFL01,GGG03,Mugnolo10,Vicente09}. However, more recently, it has been introduced as a dynamic boundary condition for problems that study the asymptotic behavior of weakly damped wave equations, see \cite{Frigeri10}. Because the Laplacian operator is compact, self-adjoint, and strictly positive with Robin boundary conditions in (\ref{Robin-boundary}), the Laplacian admits a countable collection of eigenfunctions and a corresponding non-decreasing set of positive eigenvalues. Also, one is able to define fractional powers of the Laplacian. Hence, the existence and uniqueness of a (local) weak solution can be sought through a Faedo-Galerkin approximation procedure by projecting the problem onto a subspace spanned by a finite set of eigenfunctions. Through the use of {\em{a priori}} estimates and compactness methods, the weak solution is usually obtained as a subsequence of a weakly converging sequence (see \cite{Lions69,Lions&Magenes72,Temam88} for a more detailed description of Faedo-Galerkin methods and their application to nonlinear partial differential equations). However, because of the nature of Problem (R) (in fact weak solutions are mild solutions), semigroups methods are utilized to obtain a local weak solution to Problem (R). Well-defined fractional powers of the Laplacian are usually utilized to decompose the solution operator into two operators: a part that decays to zero and a compact part. In either problem though, the growth assumptions on the nonlinear term $f(u)$ are not sufficient to apply such a decomposition. Thus, the existence of the global attractor is obtained through generalized semiflow methods contributed by Ball in \cite{Ball00,Ball04}. One must show that the solution operators are weakly continuous and asymptotically compact. On the other hand, not even fractional powers of the Laplacian are well-defined with acoustic boundary conditions. 
This means the solutions of Problem (A) cannot be obtained via a spectral basis, so local weak solutions to Problem (A) are obtained with semigroup methods. Both problems will be formulated in an abstract form and posed as an equation in a Banach space, containing a linear unbounded operator, which is the infinitesimal generator of a strongly continuous semigroup of contractions on that space, together with a locally Lipschitz continuous nonlinear part. The global attractors for Problem (A) are also obtained through the weak continuity and asymptotic compactness of the solution operators (again, cf. \cite{Ball00,Ball04}). Actually, the $\varepsilon=1$ case of Problem (A) has already been studied in \cite{Frigeri10}, and it is that work, along with \cite{Beale&Rosencrans74}, that has brought the current one into view.

The stability of solutions to partial differential equations under singular perturbations has been a topic undergoing rapid and strong growth; in particular, the continuity of attracting sets has been studied extensively. Only some of the results are mentioned below. An upper-semicontinuous family of global attractors for wave equations obtained from a perturbation of hyperbolic-relaxation type appears in \cite{Hale&Raugel88}. The problem is of the type
\[
\varepsilon u_{tt}+u_t-\Delta u+\phi(u)=0,
\]
where $\varepsilon\in [0,1]$. The equation possesses Dirichlet boundary conditions, and $\phi\in C^2(\mathbb{R})$ satisfies the growth assumption
\[
\phi''(s)\leq C(1+|s|)
\]
for some $C>0$. The global attractor for the parabolic problem, $\mathcal{A}_0\subset H^2(\Omega)\cap H^1_0(\Omega)$, is ``lifted'' into the phase space for the hyperbolic problems, $X=H^1_0(\Omega)\times L^2(\Omega)$, by defining
\begin{equation} \label{HR-lift}
\mathcal{LA}_0:=\{(u,v)\in X:u\in\mathcal{A}_0,~v=f-g(u)+\Delta u\}.
\end{equation}
The family of sets in $X$ is defined by
\begin{equation} \label{gfam-1}
\mathbb{A}_\varepsilon:=\left\{
\begin{array}{ll}
\mathcal{L}\mathcal{A}_0 & \text{for}~\varepsilon=0 \\
\mathcal{A}_\varepsilon & \text{for}~\varepsilon\in(0,1],
\end{array}\right.
\end{equation}
where $\mathcal{A}_\varepsilon\subset X$ denotes the global attractor for the hyperbolic-relaxation problem. The main result in \cite{Hale&Raugel88} is the upper-semicontinuity of the family of sets $\mathbb{A}_\varepsilon$ in $X$; i.e.,
\begin{equation} \label{4usc}
\lim_{\varepsilon\rightarrow 0} {\mathrm{dist}}_X(\mathbb{A}_\varepsilon,\mathbb{A}_0):= \lim_{\varepsilon\rightarrow 0}\sup_{a\in\mathbb{A}_\varepsilon}\inf_{b\in\mathbb{A}_0}\|a-b\|_X=0.
\end{equation}
To obtain this result, we will replace the initial conditions (\ref{acoustic-initial-conditions}) with the following
\begin{equation} \label{acoustic-initial-conditions2}
\varepsilon\delta(0,\cdot) = \varepsilon\delta_0, \quad \delta_t(0,\cdot) = \varepsilon\delta_1-(1-\varepsilon)u_0 \quad \text{on} \quad \{0\}\times \Gamma.
\end{equation}
{\em{The result in (\ref{4usc}) insures that for every Robin-type problem, Problem (R), there is an acoustic relaxation, Problem (A), for which (\ref{4usc}) holds.}}

\begin{remark} \label{key}
To motivate the choice of initial conditions in (\ref{acoustic-initial-conditions2}), consider the following. The formal limit of (\ref{acoustic-boundary}) obtained in (\ref{key-2}) leads to the equation on $\Gamma$,
\[
\partial_{\mathbf{n}}u(t) = -u(t) + \phi,
\]
where $\phi=\phi(x)$ is an arbitrary function obtained from integration with respect to $t$.
So at $t=0$ we find, \[ \delta_t(0)=-u(0)+\phi. \] Thus, when (\ref{acoustic-initial-conditions2}) holds, we are guaranteed $\phi\equiv0.$ Moreover, this means the formal restriction ``$\varepsilon=0$'' of Problem (A) does in fact coincide with Problem (R). \end{remark} Since this result appeared, an upper-continuous family of global attractors for the Cahn-Hilliard equations has been found \cite{Zheng&Milani05}. Robust families of exponential attractors (that is, both upper- and lower-semicontinuous with explicit control over semidistances in terms of the perturbation parameter) of the type reported in \cite{GGMP05} have successfully been demonstrated to exist in numerous applications spanning partial differential equations of evolution: the Cahn-Hilliard equations with a hyperbolic-relaxation perturbation \cite{GGMP05-CH3D,GGMP05-CH1D}, applications with a perturbation appearing in a memory kernel have been treated for reaction diffusion equations, Cahn-Hilliard equations, phase-field equations, wave equations, beam equations, and numerous others \cite{GMPZ10}. Recently, the existence of an upper-semicontinuous family of global attractors for a reaction-diffusion equation with a singular perturbation of hyperbolic relaxation type and dynamic boundary conditions has appeared in \cite{Gal&Shomberg15}. Robust families of exponential attractors have also been constructed for equations where the perturbation parameter appears in the boundary conditions. Many of these applications are to the Cahn-Hilliard equations and to phase-field equations \cite{Gal08,GGM08-2,Miranville&Zelik02}. Also, continuous families of inertial manifolds have been constructed for wave equations \cite{Mora&Morales89-2}, Cahn-Hilliard equations \cite{BGM10}, and more recently, for phase-field equations \cite{Bonfoh11}. Finally, for generalized semiflows and for trajectory dynamical systems (dynamical systems where well-possedness of the PDE---uniqueness of the solution, in particular---is not guaranteed), some continuity properties of global attractors have been found for the Navier-Stokes equations \cite{Ball00}, the Cahn-Hilliard equations \cite{Segatti06}, and for wave equations \cite{Ball04,Zelik04}. The thrust behind robustness is typically an estimate of the form, \begin{equation} \label{robust-intro} \|S_\varepsilon(t)x-\mathcal{L}S_0(t)\Pi x\|_{X_\varepsilon}\leq C\varepsilon, \end{equation} where $x\in X_\varepsilon$, $S_\varepsilon(t)$ and $S_0(t)$ are semigroups generated by the solutions of the perturbed problem and the limit problem, respectively, $\Pi$ denotes a projection from $X_\varepsilon$ onto $X_0$ and $\mathcal{L}$ is a ``lift'' (such as (\ref{HR-lift})) from $X_0$ into $X_\varepsilon$. The estimate (\ref{robust-intro}) means we can approximate the limit problem with the perturbation with control explicitly written in terms of the perturbation parameter. Usually, such control is only exhibited on compact time intervals. It is important to realize that the lift associated with a hyperbolic-relaxation problem, for example, requires a certain degree of regularity from the limit problem. In particular, \cite{Gal&Shomberg15,Hale&Raugel88} rely on (\ref{HR-lift}); so one needs $\mathcal{A}_0\subset H^2$ in order for $\mathcal{LA}_0\subset L^2$ to be well-defined. For the above model problem, the perturbation parameter appears in a (dynamic) boundary condition. 
The perturbation is singular in nature; however, additional regularity from the global attractor $\mathcal{A}_0$ is not required in order for the lift to be well-defined. In place of this, we will rely on the (local) Lipschitz continuity of the corresponding semiflows; both the limit problem ($\varepsilon=0$) and the perturbation problem ($\varepsilon>0$) exhibit Lipschitz continuous solution operators. This property allows us to prove an estimate of the form (\ref{robust-intro}), and additionally, the upper-semicontinuity of the family of global attractors, without additional regularity from $\mathcal{A}_0$. This fact is the key motivation for using the first-order growth condition (\ref{f-assumption-1}); no further regularity is needed to obtain an upper-semicontinuous family of global attractors, and hence a more general result is obtained.

Unlike the global attractors described above, exponential attractors (sometimes called inertial sets) are positively invariant sets possessing finite fractal dimension that attract bounded subsets of the phase space exponentially fast. It can readily be seen that when both a global attractor $\mathcal{A}$ and an exponential attractor $\mathcal{M}$ exist, then $\mathcal{A}\subseteq \mathcal{M}$, and so the global attractor is also finite dimensional. When we turn our attention to proving the existence of exponential attractors, certain higher-order dissipative estimates are required. In the case of Problem (A), the estimates cannot be obtained along the lines of multiplication by fractional powers of the Laplacian; as we have already described, we need to resort to other methods. In particular, we will apply $H^2$-elliptic regularity methods as in \cite{Pata&Zelik06}. Here, the main idea is to differentiate the equations with respect to time $t$ to obtain uniform estimates for the new equations. This strategy has recently received a lot of attention. Some successes include dealing with a damped wave equation with acoustic boundary conditions \cite{Frigeri10} and a wave equation with a nonlinear dynamic boundary condition \cite{CEL02,CEL04-2,CEL04}. Also, there is the hyperbolic relaxation of a Cahn-Hilliard equation with dynamic boundary conditions \cite{CGG11,Gal&Grasselli12}. Additionally, this approach was also taken in \cite{Gal&Shomberg15}. The drawback of this approach is the difficulty in finding appropriate estimates that are {\em{uniform}} in the perturbation parameter $\varepsilon.$ Indeed, this was the case in \cite{Gal&Shomberg15}. There, the authors were able to find an upper-semicontinuous family of global attractors and a family of exponential attractors. It turned out that a certain higher-order dissipative estimate depends on $\varepsilon$ in a crucial way, and consequently, the robustness/H\"older continuity of the family of exponential attractors cannot (yet) be obtained. Furthermore, as it turns out, the global attractors found in \cite{Gal&Shomberg15} have finite (fractal) dimension, although the dimension is not necessarily independent of $\varepsilon.$ It appears that similar difficulties persist with the model problem examined here.

The main results in this paper are:
\begin{itemize}
\item An upper-semicontinuity result for a {\em{generic}} family of sets for a family of semiflows, where in particular, the limit ($\varepsilon=0$) semigroup of solution operators is locally Lipschitz continuous, uniformly in time on compact intervals.
This result largely rests on the fact that the difference between two trajectories emanating from the same initial data, on compact time intervals, can be estimated in the phase space by a constant times $\sqrt{\varepsilon}$. \item Problem (R) and Problem (A) admit a family of global attractors $\{\mathcal{A}_\varepsilon\}_{\varepsilon\in[0,1]}$, where each is bounded, uniformly in $\varepsilon$, in the respective phase space. The rate of attraction of bounded sets to $\mathcal{A}_\varepsilon$ approaches a constant as $\varepsilon$ approaches zero. \item The generic semicontinuity result is applied to the family of global attractors $\{\mathcal{A}_\varepsilon\}_{\varepsilon\in[0,1]}$. Because of the Lipschitz continuity of the semiflow generated by Problem (R), $S_0$, no further regularity is required from the attractor $\mathcal{A}_0$ to ensure that $\mathcal{LA}_0$ is well-defined in the phase space of Problem (A). \item Under more restrictive assumptions on the nonlinear terms $f$ and $g$, the global attractors are shown to possess optimal regularity by being bounded in a compact subset of the phase space; however, this bound is {\em{no longer}} independent of $\varepsilon$. \item There exists a family of exponential attractors $\{\mathcal{M}_\varepsilon\}_{\varepsilon\in[0,1]}$, admitted by the semiflows associated with Problem (R) and Problem (A). Since $\mathcal{A}_\varepsilon\subset\mathcal{M}_\varepsilon$ for each $\varepsilon\in[0,1]$, this result ensures that the global attractors inherit finite (fractal) dimension. However, we cannot conclude that this dimension is uniform in $\varepsilon$ (this result remains open). \end{itemize} \subsection{Notation and conventions} We take the opportunity here to introduce some notations and conventions that are used throughout the paper. We denote by $\Vert \cdot \Vert $, $\Vert \cdot \Vert _{k}$, the norms in $L^{2}(\Omega )$, $H^{k}(\Omega )$, respectively. We use the notation $\langle \cdot ,\cdot \rangle $ and $\langle \cdot ,\cdot \rangle_{k}$ to denote the products on $L^{2}(\Omega )$ and $H^{k}(\Omega)$, respectively. For the boundary terms, $\Vert \cdot \Vert _{L^{2}(\Gamma )}$ and $\langle \cdot ,\cdot \rangle _{L^{2}(\Gamma )}$ denote the norm and, respectively, product on $L^{2}(\Gamma )$. We will require the norm in $H^{k}(\Gamma )$, to be denoted by $\Vert \cdot \Vert _{H^{k}(\Gamma )}$, where $k\geq 1$. The $L^{p}(\Omega )$ norm, $p\in (0,\infty ]$, is denoted $|\cdot |_{p}$. The dual pairing between $H^{1}(\Omega )$ and its dual $H^{-1}(\Omega) := (H^{1}(\Omega ))^{\ast }$ is denoted by $(u,v)_{H^{-1}\times H^1}$. In many calculations, functional notation indicating dependence on the variable $t$ is dropped; for example, we will write $u$ in place of $u(t)$. Throughout the paper, $C>0$ will denote a \emph{generic} constant, while $Q:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ will denote a \emph{generic} increasing function. All these quantities, unless explicitly stated, are \emph{independent} of the perturbation parameter $\varepsilon$. Further dependencies of these quantities will be specified on occurrence. We will use $\|B\|_{W}:=\sup_{\Upsilon\in B}\|\Upsilon\|_W$ to denote the ``size'' of the subset $B$ in the Banach space $W$. Later in the article, we will rely on the Laplace-Beltrami operator $-\Delta_\Gamma$ on the surface $\Gamma.$ This operator is positive definite and self-adjoint on $L^2(\Gamma)$ with domain $D(-\Delta_\Gamma)$.
The Sobolev spaces $H^s(\Gamma)$, for $s\in\mathbb{R}$, may be defined as $H^s(\Gamma)=D((-\Delta_\Gamma)^{s/2})$ when endowed with the norm whose square is given by, for all $u\in H^s(\Gamma)$, \begin{equation} \label{LB-norm} \|u\|^2_{H^s(\Gamma)} := \|u\|^2_{L^2(\Gamma)} + \left\|(-\Delta_\Gamma)^{s/2}u\right\|^2_{L^2(\Gamma)}. \end{equation} As for the plan of the paper, in Section 2 we review the important results concerning the limit ($\varepsilon=0$) Problem (R), and in Section 3 we discuss the relevant results concerning the perturbation Problem (A). Some important remarks describing several instances of how Problem (A) depends on the perturbation parameter $\varepsilon>0$ are given throughout Section 3. Section 4 contains a new abstract upper-semicontinuity result that is then tailored specifically for the model problem under consideration. The final Section 5 summarizes our findings. \section{Attractors for Problem (R), the $\varepsilon=0$ case} \label{s:Robin} It is shown that Problem (R) possesses unique global weak solutions in a suitable phase space, and the solutions depend continuously on the initial data. Under suitable assumptions on $f$, we also establish the existence of global strong solutions. The solutions generate a Lipschitz continuous semiflow which admits a bounded, absorbing, positively invariant set in the phase space. The existence of a global attractor is achieved by using Ball's asymptotic compactness method when we set $g\equiv 0$, and, with suitable assumptions on $f$, the existence of an exponential attractor follows from the recent work \cite{Gal&Shomberg15}. \subsection{The functional framework for Problem (R)} The phase space and the abstract Cauchy problem are formulated. In addition, the Laplacian, $-\Delta$, with the Robin boundary conditions described by (\ref{Robin-boundary}) is briefly discussed. The spectral basis is used in finding a Poincar\'e type inequality for Problem (R). Define \[ \mathcal{H}_0:=H^1(\Omega) \times L^2(\Omega). \] The space $\mathcal{H}_0$ is Hilbert when endowed with the norm whose square is given by, for $\varphi=(u,v)\in\mathcal{H}_0$, \begin{align*} \|\varphi\|^2_{\mathcal{H}_0} &:= \|u\|^2_1 + \|u\|^2_{L^2(\Gamma)} + \|v\|^2 \\ & = \left( \|\nabla u\|^2 + \|u\|^2 \right) + \|u\|^2_{L^2(\Gamma)} + \|v\|^2. \end{align*} Recall that if $u\in H^1(\Omega)$, then $u_{\mid\Gamma} \in H^{1/2}(\Gamma)\hookrightarrow L^2(\Gamma)$ by the trace theorem, so $\|\cdot\|_{\mathcal{H}_0}$ is well-defined. The embedding in the previous assertion follows from the embedding theorem on the {\em{two}}-dimensional surface $\Gamma$ (cf. \cite[Theorem 2.6]{Hebey99}). As introduced in \cite{CEL02,Wu&Zheng06}, let $\Delta_{\mathrm{R}}:L^{2}(\Omega )\rightarrow L^{2}(\Omega )$ be the ``Robin-Laplacian'' operator with domain \begin{equation*} D(\Delta_{\mathrm{R}}) = \{u\in H^2(\Omega ): \partial _{\bf{n}}u + u = 0 \ \text{on} \ \Gamma \}. \end{equation*} The Robin-Laplace operator $-\Delta_{\mathrm{R}}$ is self-adjoint and strictly positive; indeed, for all $u,v\in H^1(\Omega)$, \[ (-\Delta_{\rm{R}} u,v)_{H^{-1}\times H^1} = (u,-\Delta_{\rm{R}} v)_{H^{-1}\times H^1}, \] and \[ (-\Delta_{\rm{R}} u,u)_{H^{-1}\times H^1} = \|\nabla u\|^2 + \|u\|^2_{L^2(\Gamma)} \geq 0.
\] By the spectral theorem, the operator $-\Delta_{\mathrm{R}}$ admits a system of eigenfunctions $(\omega_j)_{j=1}^\infty$ that is a complete orthonormal system in $L^2(\Omega)$, and a corresponding sequence of eigenvalues $(\lambda_j)_{j=1}^\infty$ which can be ordered into a nondecreasing sequence, such that $\lim_{j\rightarrow \infty}\lambda_j=\infty$. Using the Fourier series representation of $u$, the Poincar\'e inequality is computed as, for all $u\in H^1(\Omega)$, \begin{equation} \label{our-Poincare-0} \|u\|\leq \frac{1}{\sqrt{\lambda_1}}\left( \|\nabla u\|^2 + \|u\|^2_{L^2(\Gamma)} \right)^{1/2}, \end{equation} where $\lambda_1>0$ is the first eigenvalue of the Laplacian with Robin boundary conditions (\ref{Robin-boundary}). The Robin-Laplacian is extended to a continuous operator $\Delta_{\mathrm{R}}:H^{1}(\Omega )\rightarrow \left( H^{1}(\Omega)\right) ^{\ast }$, defined by, for all $v\in H^{1}(\Omega )$, \begin{equation*} (-\Delta_{\mathrm{R}}u,v)_{H^{-1}\times H^1}=\langle \nabla u,\nabla v\rangle +\langle u,v\rangle _{L^{2}(\Gamma )}. \end{equation*} We mention that it is well known that the Dirichlet trace map $ {\mathrm{tr}} D:C^\infty({\overline{\Omega}})\rightarrow C^\infty(\Gamma)$, defined by $ {\mathrm{tr}} D(u):= u_{\mid\Gamma}$, extends to a linear continuous operator $ {\mathrm{tr}} D:H^r(\Omega)\rightarrow H^{r-1/2}(\Gamma)$, for all $r>1/2$. Hence, \[ u\in H^2(\Omega) \Longrightarrow u_{\mid\Gamma}\in H^{3/2}(\Gamma) \quad\text{and}\quad\partial_{\bf{n}} u := \nabla u\cdot {\bf{n}} \in H^{1/2}(\Gamma). \] Thus, the equation $\partial_{\bf{n}}u= -u$ is interpreted in the following distributional sense on $\Gamma$: for all $u_1\in D(\Delta_{\mathrm{R}})$ and $u_2\in H^1(\Omega)$, \[ \int_\Omega (-\Delta_{\mathrm{R}})u_1 u_2 {\mathrm{d}} x = \int_\Gamma u_1 u_2 {\mathrm{d}} \sigma + \int_\Omega \nabla u_1 \nabla u_2 {\mathrm{d}} x. \] With $D(\Delta_{\rm{R}}),$ define the set \begin{align*} \mathcal{D}_0 & := D(\Delta_{\rm{R}}) \times H^1(\Omega) \\ & = \left\{ (u,v)\in H^2(\Omega) \times H^1(\Omega) : \partial_{\bf{n}}u + u = 0 \ \text{on} \ \Gamma \right\}, \end{align*} and the linear unbounded operator $\mathrm{R}:D(\mathrm{R})\subset\mathcal{H}_0\rightarrow\mathcal{H}_0$, where $D(\mathrm{R})=\mathcal{D}_0$, by \[ \mathrm{R}:=\begin{pmatrix} 0 & 1 \\ \Delta_{\rm{R}}-1 & -1 \end{pmatrix}. \] By the Lumer-Phillips theorem (cf. e.g. \cite[Theorem I.4.3]{Pazy83}) it is not hard to see that the operator $(\mathrm{R},D(\mathrm{R}))$ is an infinitesimal generator of a strongly continuous semigroup of contractions on $\mathcal{H}_0$, denoted $e^{\mathrm{R}t}$. The set $D(\mathrm{R})$ is dense in $\mathcal{H}_0=H^1(\Omega)\times L^2(\Omega)$, and $\mathrm{R}$ is dissipative since, for all $\varphi=(u,v)\in D(\mathrm{R})$, \[ \langle \mathrm{R}\varphi,\varphi \rangle_{\mathcal{H}_0} = \langle v,u \rangle_1 + \langle v,u \rangle_{L^2(\Gamma)} + \langle \Delta_{\rm{R}} u-u-v,v \rangle = -\|v\|^2 \leq 0. \] The surjectivity requirement, namely that $(I+\mathrm{R})\varphi=\theta$ be solvable, can be verified with the aid of the Lax-Milgram theorem; the elliptic system \[ \left\{ \begin{array}{ll} u + v = \chi, & u\in D(\Delta_{\rm{R}}), \ v,\chi\in H^1(\Omega) \\ -\Delta_{\rm{R}} u + u = -\psi, & \psi \in L^2(\Omega). \end{array} \right. \] admits a unique weak solution $\varphi=(u,v)\in D(\mathrm{R})$ for any $\theta=(\chi,\psi)\in\mathcal{H}_0$. Define the map $\mathcal{F}:\mathcal{H}_0\rightarrow\mathcal{H}_0$ by \[ \mathcal{F}(\varphi):=\begin{pmatrix} 0 \\ -f(u) \end{pmatrix} \] for all $\varphi\in\mathcal{H}_0$.
Since $f:H^1(\Omega)\rightarrow L^2(\Omega)$ is locally Lipschitz continuous \cite[cf. e.g. Theorem 2.7.13]{Zheng04}, it follows that the map $\mathcal{F}:\mathcal{H}_0\rightarrow\mathcal{H}_0$ is as well. Problem (R) may be put into the abstract form in $\mathcal{H}_0$ \begin{equation} \label{abstract-Robin-problem} \left\{ \begin{array}{l} \displaystyle\frac{ {\mathrm{d}} \varphi}{ {\mathrm{d}} t} = \mathrm{R}\varphi + \mathcal{F}(\varphi) \\ \varphi(0)=\varphi_0 \end{array} \right. \end{equation} where $\varphi=\varphi(t)=(u(t),u_t(t))$ and $\varphi_0=(u_0,u_1)\in\mathcal{H}_0$; $v=u_t$ in the sense of distributions. \begin{lemma} \label{adjoint-r} The adjoint of $\mathrm{R}$, denoted $\mathrm{R}^{\ast}$, is given by \begin{equation*} \mathrm{R}^{\ast}:= -\begin{pmatrix} 0 & 1 \\ \Delta _{\mathrm{R}}-1 & 1 \end{pmatrix}, \end{equation*} with domain \begin{equation*} D(\mathrm{R}^{\ast }):=\{(\chi ,\psi )\in H^{2}(\Omega )\times H^{1}(\Omega ):\partial _{\mathbf{n}}\chi +\chi =0 \ \text{on} \ \Gamma \}. \end{equation*} \end{lemma} \begin{proof} The proof is a calculation similar to, e.g., \cite[Lemma 3.1]{Ball04}. \end{proof} \subsection{Well-posedness of Problem (R)} Semigroup methods are applied to obtain local weak/mild solutions. {\em{A priori}} estimates are then made to show that the local solutions are indeed global ones. Continuous dependence of the solution on the initial conditions and the uniqueness of solutions follows in the usual way by estimating the difference of two solutions. A semiflow (semigroup of solution operators) acting on the phase space is defined. Because of the continuous dependence estimate, the semiflow is Lipschitz continuous on the phase space. Due to the restrictive growth conditions imposed on $f(u)$, there is no regularity result. Formal multiplication of the PDE (\ref{damped-wave-equation}) by $2u_t$ in $L^2(\Omega)$ produces the {\em{energy equation}} \begin{equation}\label{Robin-energy-3} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} \left\{ \|\varphi\|^2_{\mathcal{H}_0} + 2\int_\Omega F(u) {\mathrm{d}} x \right\} + 2\|u_t\|^2 = 0. \end{equation} Here, $F(s)=\int_0^s f(\sigma) {\mathrm{d}} \sigma$. Note that from assumption (\ref{f-assumption-2}), it follows that there is a constant $\mu_0\in(0,1]$ such that, for all $\xi\in H^1(\Omega)$, \begin{equation}\label{from-f-assumption-2} 2\int_\Omega F(\xi) {\mathrm{d}} x \geq -(1-\mu_0)\|\xi\|^2_1 - \kappa_f \end{equation} for some constant $\kappa_f \ge 0$. A proof of (\ref{from-f-assumption-2}) can be found in \cite[page 1913]{CEL02}. On the other hand, using (\ref{f-reg-ass-3}) and integration by parts on $F(s)=\int_{0}^{s}f(\sigma ) {\mathrm{d}} \sigma $, we have the upper-bound \begin{align} \label{consequence-F-2} \int_\Omega F(\xi) {\mathrm{d}} x & \leq \langle f(\xi),\xi \rangle + \frac{\ell_2}{2\lambda_1}\|\xi\|^2_1. \end{align} Moreover, the inequality \begin{equation}\label{from-f-assumption-1} \langle f(u),u \rangle \geq -(1-\mu_0)\|u\|^2_1 - \kappa_f \end{equation} follows from the sign condition (\ref{f-assumption-2}) where $\mu_0\in(0,1]$ and $\kappa_f\geq0$ are from (\ref{from-f-assumption-2}). The definition of weak solution is from \cite{Ball77}. \begin{definition} Let $T>0$. 
A map $\varphi\in C([0,T];\mathcal{H}_0)$ is a {\em{weak solution}} of (\ref{abstract-Robin-problem}) on $[0,T]$ if for each $\theta\in D(\mathrm{R}^*)$ the map $t \mapsto \langle \varphi(t),\theta \rangle_{\mathcal{H}_0}$ is absolutely continuous on $[0,T]$ and satisfies, for almost all $t\in[0,T]$, \begin{equation} \label{abs-1} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\langle \varphi(t),\theta \rangle_{\mathcal{H}_0} = \langle \varphi(t),\mathrm{R}^*\theta \rangle_{\mathcal{H}_0} + \langle \mathcal{F}(\varphi(t)),\theta \rangle_{\mathcal{H}_0}. \end{equation} The map $\varphi$ is a weak solution on $[0,\infty)$ (i.e. a {\em{global weak solution}}) if it is a weak solution on $[0,T]$ for all $T>0$. \end{definition} According to \cite[Definition 3.1 and Proposition 3.5]{Ball04}, the notion of weak solution above is equivalent to the following notion of a mild solution. \begin{definition} Let $T>0$. A function $\varphi:[0,T]\rightarrow\mathcal{H}_0$ is a weak/mild solution of (\ref{abstract-Robin-problem}) on $[0,T]$ if and only if $\mathcal{F}(\varphi(\cdot))\in L^1(0,T;\mathcal{H}_0)$ and $\varphi$ satisfies the variation of constants formula, for all $t\in[0,T],$ \[ \varphi(t)=e^{\mathrm{R}t}\varphi_0 + \int_0^t e^{\mathrm{R}(t-s)}\mathcal{F}(\varphi(s)) {\mathrm{d}} s. \] \end{definition} Furthermore, by \cite[Proposition 3.4]{Ball04} and the explicit characterization of $D(\mathrm{R}^*)$, our notion of weak solution is also equivalent to the standard concept of a weak (distributional) solution to Problem (R). \begin{definition} \label{weak} A function $\varphi=(u,u_{t}):[0,T]\rightarrow \mathcal{H}_0$ is a weak solution of (\ref{abstract-Robin-problem}) (and, thus of (\ref{damped-wave-equation})-(\ref{Robin-boundary})) on $[0,T],$ if, for almost all $t\in \left[ 0,T\right],$ \begin{equation*} \varphi =(u,u_{t})\in C(\left[ 0,T\right] ;\mathcal{H}_0), \end{equation*} and, for each $\psi \in H^{1}\left( \Omega \right) ,$ $\langle u_{t},\psi \rangle \in C^{1}\left( \left[ 0,T\right] \right) $ with \begin{equation*} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\left\langle u_{t}\left( t\right) ,\psi \right\rangle + \left\langle u_{t}\left( t\right) ,\psi \right\rangle + \left\langle u\left( t\right) ,\psi \right\rangle_1 + \left\langle u\left( t\right) ,\psi \right\rangle _{L^{2}\left( \Gamma \right) }=-\left\langle f\left( u\left( t\right) \right) ,\psi \right\rangle. \end{equation*} \end{definition} Indeed, by \cite[Lemma 3.3]{Ball04} we have that $f:H^{1}\left( \Omega \right) \rightarrow L^{2}\left( \Omega \right) $ is sequentially weakly continuous and continuous thanks to the assumptions (\ref{f-assumption-1}) and (\ref{f-assumption-2}). Moreover, by \cite[Proposition 3.4]{Ball04} and the explicit representation of $D(\mathrm{R}^*)$ in Lemma \ref{adjoint-r}, $\langle \varphi_{t},\theta \rangle \in C^{1}\left( \left[ 0,T\right] \right) $ for all $\theta \in D\left( \mathrm{R}^{\ast }\right) $, and (\ref{abs-1}) is satisfied. Finally, the notion of strong solution to Problem (R) is as follows. \begin{definition} \label{Robin-strong} Let $\varphi_0 = (u_0,u_1) \in \mathcal{D}_0$: that is, let $\varphi_0\in H^{2}(\Omega )\times H^{1}(\Omega )$ be such that \begin{equation*} \partial _{\bf{n}}u_{0}+u_{0}=0 \ \text{on} \ \Gamma \end{equation*} is satisfied. 
A function $\varphi(t) = (u(t),u_t(t))$ is called a (global) strong solution if it is a (global) weak solution in the sense of Definition \ref{weak} and if it satisfies the following regularity properties: \begin{equation} \label{regularity-property} \varphi \in L^{\infty }([0,\infty) ;\mathcal{D}_0)\quad\text{and}\quad\partial_t\varphi\in L^{\infty }([0,\infty) ;\mathcal{H}_0). \end{equation} Therefore, $\varphi(t) = (u(t), u_t(t)) $ satisfies the equations (\ref{damped-wave-equation})-(\ref{Robin-boundary}) almost everywhere; i.e., is a strong solution. \end{definition} We now have the first main result. \begin{theorem}\label{t:Robin-weak-solutions} Assume (\ref{f-assumption-1}) and (\ref{f-assumption-2}) hold. Let $\varphi_0\in\mathcal{H}_0$. Then there exists a unique global weak solution $\varphi\in C([0,\infty);\mathcal{H}_0)$ to (\ref{abstract-Robin-problem}). For each weak solution, the map \begin{equation}\label{C1-map} t \mapsto \|\varphi(t)\|^2_{\mathcal{H}_0} + 2\int_\Omega F(u(t)) {\mathrm{d}} x \end{equation} is $C^1([0,\infty))$ and the energy equation (\ref{Robin-energy-3}) holds (in the sense of distributions). Moreover, for all $\varphi_0, \theta_0\in\mathcal{H}_0$, there exists a positive constant $\nu_0>0$, depending on $\|\varphi_0\|_{\mathcal{H}_0}$ and $\|\theta_0\|_{\mathcal{H}_0}$, such that, for all $t\geq 0$, \begin{equation}\label{continuous-dependence} \|\varphi(t)-\theta(t)\|_{\mathcal{H}_0} \leq e^{\nu_0 t} \|\varphi_0 - \theta_0\|_{\mathcal{H}_0}. \end{equation} Additionally, when (\ref{f-assumption-2}), (\ref{f-reg-ass-2}), and (\ref{f-reg-ass-3}) hold, and $\varphi_0\in \mathcal{D}_0$, there exists a unique global strong solution $\varphi\in C([0,\infty);\mathcal{D}_0)$ to (\ref{abstract-Robin-problem}). \end{theorem} \begin{proof} As discussed in the previous sections, the operator $\mathrm{R}$ with domain $D(\mathrm{R})=D(\Delta_{\rm{R}}) \times H^1(\Omega)$ is an infinitesimal generator of a strongly continuous semigroup of contractions on $\mathcal{H}_0$, and the map $\mathcal{F}:\mathcal{H}_0\rightarrow\mathcal{H}_0$ is locally Lipschitz continuous. Therefore, by, e.g. \cite[Theorem 2.5.4]{Zheng04}, for any $\varphi_0\in\mathcal{H}_0$, there is a $T^*>0$, depending on $\|\varphi_0\|_{\mathcal{H}_0}$, such that the abstract Problem (\ref{abstract-Robin-problem}) admits a unique local weak solution on $[0,T^*)$ satisfying \[ \varphi\in C([0,T^*);\mathcal{H}_0). \] The next step is to show that $T^*=\infty$. Since the map (\ref{C1-map}) is absolutely continuous on $[0,T^*)$, then integration of the energy equation (\ref{Robin-energy-3}) over $(0,t)$ yields, for all $t\in[0,T^*)$, \begin{equation}\label{Robin-energy-4} \|\varphi(t)\|^2_{\mathcal{H}_0} + 2\int_\Omega F(u(t)) {\mathrm{d}} x + 2\int_0^t \|u_t(\tau)\|^2 d\tau= \|\varphi_0\|^2_{\mathcal{H}_0} + 2\int_\Omega F(u_0) {\mathrm{d}} x. \end{equation} Applying inequality (\ref{from-f-assumption-2}) to (\ref{Robin-energy-4}), omitting the remaining (positive) integral on the left hand side of (\ref{Robin-energy-4}), and using the fact that the estimate $|F(s)|\leq C|s|(1+|s|^3)$ follows from assumption (\ref{f-assumption-1}), for some constant $C>0$, then, for all $t\in[0,T^*)$, \begin{equation}\label{Robin-bound-1} \|\varphi(t)\|_{\mathcal{H}_0} \leq C \|\varphi_0\|_{\mathcal{H}_0}. \end{equation} Since the bound on the right hand side of (\ref{Robin-bound-1}) is independent of $t\in[0,T^*)$, $T^*$ can be extended indefinitely; therefore $T^*=+\infty$. 
To show that (\ref{continuous-dependence}) holds, let $\varphi_0=(u_0,u_1), \theta_0=(v_0,v_1)\in\mathcal{H}_0$. Let $\varphi(t)=(u(t),u_t(t))$ and, respectively, $\theta(t)=(v(t),v_t(t))$ denote the corresponding global solutions of Problem (R) on $[0,\infty)$ with the initial data $\varphi_0$ and $\theta_0$. For all $t\ge0$, \begin{align} \bar\varphi(t) & := \varphi(t)-\theta(t) \notag \\ & = \left( u(t),u_t(t) \right) - \left( v(t),v_t(t) \right) \notag \\ & =: \left( z(t),z_t(t) \right), \notag \end{align} and \begin{align} \bar\varphi_0 & := \varphi_0-\theta_0 \notag \\ & = (u_0-v_0,u_1-v_1) \notag \\ & =: (z_0,z_1). \notag \end{align} Then $z$ satisfies the IBVP \begin{equation}\label{dependence-1} \left\{ \begin{array}{ll} z_{tt}+z_t-\Delta z+z+f(u)-f(v)=0 & \text{in} \ (0,\infty)\times\Omega \\ z(0,\cdot)=z_0, ~z_t(0,\cdot)=z_1 & \text{on} \ \{0\}\times\Omega \\ \partial_{\bf{n}}z+z=0 & \text{on} \ (0,\infty)\times\Gamma. \end{array} \right. \end{equation} Multiply (\ref{dependence-1})$_1$ by $2z_t$ in $L^2(\Omega)$ to yield, for almost all $t\geq0$, \begin{equation}\label{dependence-2} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} \|\bar\varphi(t)\|^2_{\mathcal{H}_0} + 2\|z_t\|^2 = 2\langle f(v)-f(u),z_t \rangle. \end{equation} Since $f:H^1(\Omega)\rightarrow L^2(\Omega)$ is locally Lipschitz continuous, we have \begin{equation}\label{dependence-3} 2|\langle f(v)-f(u),z_t \rangle| \leq C\|z\|^2_1 + \|z_t\|^2, \end{equation} where the constant $C>0$ depends on the local Lipschitz continuity of $f$ as well as the uniform bound on the weak solutions $u$ and $v$ thanks to (\ref{Robin-bound-1}). After omitting the term $2\|z_t\|^2$ on the left hand side of (\ref{dependence-2}) and adding $\|z\|^2_{L^2(\Gamma)}$ to the right hand side, combining (\ref{dependence-2}) and (\ref{dependence-3}) produces, for almost all $t\geq 0$, \[ \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\|\bar\varphi(t)\|^2_{\mathcal{H}_0} \leq C\|\bar\varphi(t)\|^2_{\mathcal{H}_0}. \] The mapping $t\mapsto \|(z(t),z_t(t))\|^2_{\mathcal{H}_0}$ is absolutely continuous on $(0,\infty)$, so the estimate (\ref{continuous-dependence}) is obtained after applying a Gr\"onwall inequality. The uniqueness of the solutions now follows from (\ref{continuous-dependence}). To prove the existence of strong solutions, let $\varphi_0\in\mathcal{D}_0$ and assume (\ref{f-assumption-2}), (\ref{f-reg-ass-2}), and (\ref{f-reg-ass-3}) hold. (Recall, $\mathcal{D}_0=D(\mathrm{R})$.) We know that $\mathcal{F}:D(\mathrm{R})\rightarrow D(\mathrm{R})$ is locally Lipschitz continuous thanks to (\ref{f-reg-ass-2}). It now follows that (cf. e.g. \cite[Theorem 2.5.6]{Zheng04}) there exists a unique (global) solution; i.e., a weak solution satisfying \begin{equation*} \varphi\in C^1([0,\infty);\mathcal{H}_0) \cap C^0([0,\infty);D(\mathrm{R})). \end{equation*} Thus, (\ref{regularity-property}) holds. This finishes the proof. \end{proof} \begin{remark} The interested reader may also consult \cite{Wu&Zheng06}, where strong solutions are obtained for (\ref{damped-wave-equation}) with the dynamic boundary condition (\ref{DBC}). There, the authors also show the convergence of strong solutions to equilibrium when $f$ is (real) analytic. \end{remark} The following provides the dynamical system we associate with Problem (R). \begin{corollary} \label{sf-r} Let $\varphi_0=(u_0,u_1)\in\mathcal{H}_0$ and $u$ be the unique global solution of Problem (R).
The family of maps $S_0=(S_0(t))_{t\geq 0}$ defined by \[ S_0(t)\varphi_0(x):=(u(t,x,u_0,u_1),u_t(t,x,u_0,u_1)) \] is a {\em{semiflow}} generated by Problem (R). The operators $S_0(t)$ satisfy \begin{enumerate} \item $S_0(t+s)=S_0(t)S_0(s)$ for all $t,s\geq 0$. \item $S_0(0)=I_{\mathcal{H}_0}$ (the identity on $\mathcal{H}_0$) \item $S_0(t)\varphi_0\rightarrow S_0(t_0)\varphi_0$ for every $\varphi_0\in\mathcal{H}_0$ when $t\rightarrow t_0$. \end{enumerate} Additionally, each mapping $S_0(t):\mathcal{H}_0\rightarrow\mathcal{H}_0$ is Lipschitz continuous, uniformly in $t$ on compact intervals; i.e., for all $\varphi_0, \theta_0\in\mathcal{H}_0$, and for each $T\geq 0$, and for all $t\in[0,T]$, \begin{equation}\label{S0-Lipschitz-continuous} \|S_0(t)\varphi_0-S_0(t)\theta_0\|_{\mathcal{H}_0} \leq e^{\nu_0 T}\|\varphi_0-\theta_0\|_{\mathcal{H}_0}. \end{equation} \end{corollary} \begin{proof} The semigroup properties (1) and (2) are well-known and apply to a general class of abstract Cauchy problems possessing many applications (see \cite{Babin&Vishik92,Goldstein85,Morante79,Tanabe79}; in particular, a proof of property (1) is given in \cite[\S1.2.4]{Milani&Koksch05}). The continuity in $t$ described by property (3) follows from the definition of weak solution (this also establishes strong continuity of the operators when $t_0=0$). The continuity property (\ref{S0-Lipschitz-continuous}) follows from (\ref{continuous-dependence}). \end{proof} \subsection{Dissipativity of Problem (R)} We will now show that the dynamical system $(S_0(t),\mathcal{H}_0)$ generated by the weak solutions of Problem (R) is dissipative in the sense that $S_0$ admits a closed, positively invariant, bounded absorbing set in $\mathcal{H}_0$. The following functional will be of use in the proof of the following theorem. For $\varphi=(u,v)\in\mathcal{H}_0$, define the map $E_0:\mathcal{H}_0\rightarrow\mathbb{R}$ by \begin{equation} \label{Robin-functional-0} E_0(\varphi) :=\|\varphi\|^2_{\mathcal{H}_0} + 2\mu_0\langle u,v \rangle + 2\int_\Omega F(u) {\mathrm{d}} x. \end{equation} Using (\ref{from-f-assumption-2}) and the growth estimate (\ref{f-assumption-1}), the functional $E_0(\varphi)$ satisfies, for some constants $C_1,C_2>0$ and for all $\varphi\in\mathcal{H}_0$, \begin{equation} \label{Robin-functional-1} C_1\|\varphi\|^2_{\mathcal{H}_0} - \kappa_f \leq E_0(\varphi) \leq C_2\|\varphi\|_{\mathcal{H}_0}(1+\|\varphi\|^3_{\mathcal{H}_0}). \end{equation} \begin{lemma} Assume (\ref{f-assumption-1}) and (\ref{f-assumption-2}) hold. There exists $R_0>0$ with the property that: for every $R>0$, there exists $t_0=t_0(R)\geq 0$, depending on $R$, such that for every $\varphi_0\in\mathcal{H}_0$ with $\|\varphi_0\|_{\mathcal{H}_0}\le R$ and for every $t\geq t_0$, \[ \|S_0(t)\varphi_0\|_{\mathcal{H}_0}\leq R_0. \] Furthermore, the set \begin{equation} \label{abss-0} \mathcal{B}_0:=\{ \varphi\in \mathcal{H}_0 : \|\varphi\|_{\mathcal{H}_0} \le R_0 \} \end{equation} is closed, bounded, absorbing, and positively invariant for the semiflow $S_0$ in $\mathcal{H}_0$. \end{lemma} \begin{proof} Multiply equation (\ref{damped-wave-equation}) by $u_t+\eta u$, where $\eta>0$ is a sufficiently small constant yet to be chosen, and integrate over $\Omega$ to yield, for almost all $t\geq 0$, \begin{equation} \label{Robin-absorbing-set-1} \frac{1}{2}\frac{ {\mathrm{d}} }{ {\mathrm{d}} t}E_0(\varphi) + \eta\|u\|^2_1 + \eta\|u\|^2_{L^2(\Gamma)} + \eta\langle u,u_t \rangle + (1-\eta)\|u_t\|^2 + \eta\langle f(u),u \rangle = 0. 
\end{equation} Directly from (\ref{from-f-assumption-1}), \begin{equation}\label{from-f-1-and-Poincare} \eta\langle f(u),u \rangle \ge -\eta(1-\mu_0)\|u\|^2_1 - \eta\kappa_f. \end{equation} With the basic estimate, \begin{equation} \label{fuk-1} \eta\langle u,u_t \rangle \ge -\frac{\eta\mu_0}{2}\|u\|^2_1 - \frac{\eta}{2\mu_0}\|u_t\|^2, \end{equation} and after also applying (\ref{from-f-1-and-Poincare}) to equation (\ref{Robin-absorbing-set-1}), we find that for each $\mu_0\in(0,1]$ and $0<\eta<\left( 1+\frac{1}{2\mu_0} \right)^{-1}$, there is a constant $m_0=m_0(\mu_0,\eta)>0$ such that, for almost all $t\geq 0$, \[ \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}E_0(\varphi) + 2m_0\|\varphi\|^2_{\mathcal{H}_0} \le 2\eta\kappa_f. \] Let $\widetilde R>0$. For all $\varphi_0\in \mathcal{H}_0$ with $\|\varphi_0\|_{\mathcal{H}_0}\leq \widetilde R$, the upper-bound in (\ref{Robin-functional-1}) reads \[ E_0(\varphi_0) \leq C_2\widetilde R(1+\widetilde{R}^3). \] Hence, for all $\widetilde R>0$, there exists $R>0$ such that, for all $\varphi_0\in\mathcal{H}_0$ with $\|\varphi_0\|_{\mathcal{H}_0}\le \widetilde R$, we have $E_0(\varphi_0)\le R.$ From the lower-bound in (\ref{Robin-functional-1}), we immediately see that $E_0(\varphi(t))\ge - \kappa_f$ for all $t\ge0.$ By Lemma \ref{t:diff-ineq-1} there exists $t_0>0$, depending on $\kappa_f$, such that for all $t\geq t_0$ and for all $\varphi_0\in \mathcal{H}_0$ with $\|\varphi_0\|_{\mathcal{H}_0}\leq \widetilde R$, \[ E_0(S_0(t)\varphi_0) \le \sup_{\varphi\in \mathcal{H}_0} \left\{ E_0(\varphi) : m_0\|\varphi\|^2_{\mathcal{H}_0}\leq \eta\kappa_f \right\}. \] Thus, there is $R_0>0$ such that, for all $t\geq t_0$ and for all $\varphi_0\in \mathcal{H}_0$ with $\|\varphi_0\|_{\mathcal{H}_0}\leq \widetilde R$, \begin{equation}\label{Robin-semiflow-absorbing} \|S_0(t)\varphi_0\|_{\mathcal{H}_0} \leq R_0. \end{equation} By definition, the set $\mathcal{B}_0$ in (\ref{abss-0}) is closed and bounded in $\mathcal{H}_0$. The inequality in (\ref{Robin-semiflow-absorbing}) implies that $\mathcal{B}_0$ is absorbing: given any nonempty bounded subset $B$ of $\mathcal{H}_0$, there is a $t_0\geq 0$ depending on $B$ such that, for all $t\geq t_0$, $S_0(t)B\subseteq \mathcal{B}_0$. Consequently, since $\mathcal{B}_0$ is bounded, $\mathcal{B}_0$ is also positively invariant under the semiflow $S_0$. \end{proof} \begin{remark} \label{Robin-time} According to Lemma \ref{t:diff-ineq-1}, $t_0$ depends on $\eta$, $m_0$ (both described above in the proof) and $\kappa_f$ as \[ t_0=\frac{1}{\iota}\left( C_2R(1+R^3)+\eta\kappa_f \right). \] Moreover, thanks to the bounds given in (\ref{Robin-functional-1}), and with the differential inequality (\ref{id-007}), we may explicitly compute the radius of $\mathcal{B}_0$. To begin, suppose $\varphi_0\in\mathcal{H}_0$ is such that $\|\varphi_0\|_{\mathcal{H}_0}\le R.$ Integrate (\ref{id-007}) on $[0,t]$. After applying (\ref{Robin-functional-1}) twice, we obtain the inequality \[ \|\varphi(t)\|_{\mathcal{H}_0}\le C_1^{1/2} \left( (\eta\kappa_f-m_0R^2)t + C_2R(1+R^3)+\eta\kappa_f \right)^{1/2}. \] Since (\ref{Robin-bound-1}) must hold, it must be the case that $\eta\kappa_f-m_0 R^2<0;$ i.e., \[ R>\sqrt{\frac{\eta\kappa_f}{m_0}}.
\] Thus, we set $R=R(\iota)=\sqrt{\frac{\eta\kappa_f+\iota}{m_0}}$, for any $\iota>0$, and therefore obtain the radius of $\mathcal{B}_0$ to be, \begin{equation} \label{reg-radii} R_0^2(\iota) := \frac{C_2}{C_1} \left(\frac{\eta\kappa_f+\iota}{m_0}\right)^{1/2} \left( \eta\kappa_f+1 + \left( \frac{\eta\kappa_f+\iota}{m_0} \right)^{3/2} \right). \end{equation} \end{remark} \subsection{Global attractor for Problem (R)} We now aim to prove \begin{theorem} \label{t:robin-global} Assume (\ref{f-assumption-1}) and (\ref{f-assumption-2}) hold. The semiflow $S_0$ generated by the weak solutions of Problem (R) admits a global attractor $\mathcal{A}_0$ in $\mathcal{H}_0$. The global attractor is invariant under the semiflow $S_0$ (both positively and negatively) and attracts all nonempty bounded subsets of $\mathcal{H}_0$; precisely, \begin{enumerate} \item For each $t\geq 0$, $S_0(t)\mathcal{A}_0=\mathcal{A}_0$, and \item For every nonempty bounded subset $B$ of $\mathcal{H}_0$, \[ \lim_{t\rightarrow\infty}{\rm{dist}}_{\mathcal{H}_0}(S_0(t)B,\mathcal{A}_0):=\lim_{t\rightarrow\infty}\sup_{\varphi\in B}\inf_{\theta\in\mathcal{A}_0}\|S_0(t)\varphi-\theta\|_{\mathcal{H}_0}=0. \] \end{enumerate} The global attractor is unique and given by \[ \mathcal{A}_0=\omega(\mathcal{B}_0):=\bigcap_{s\geq 0}{\overline{\bigcup_{t\geq s} S_0(t)\mathcal{B}_0}}^{\mathcal{H}_0}. \] Furthermore, $\mathcal{A}_0$ is the maximal compact invariant subset in $\mathcal{H}_0$. \end{theorem} In order to prove Theorem \ref{t:robin-global}, we now develop further important properties of the semiflow $S_0$; the semiflow is weakly continuous and asymptotically compact. Both properties utilize only assumptions (\ref{f-assumption-1}) and (\ref{f-assumption-2}) on the nonlinear term $f(u)$. Consequently, the method of decomposing the semiflow into decay and compact parts to deduce the existence of the global attractor cannot be applied. A different approach is taken to deduce the existence of a global attractor; whereby the semiflow is shown to be weakly continuous and asymptotically compact (see \cite{Frigeri10}). Since the semiflow admits a bounded absorbing set, it follows from the theory of generalized semiflows by Ball (cf. \cite{Ball00,Ball04}) that the semiflow $S_0$ also admits a global attractor in the phase space $\mathcal{H}_0$. The first property follows from the general result in \cite[Theorem 3.6]{Ball04}. \begin{lemma} \label{t:Robin-weakly-continuous} The semiflow $S_0$ is weakly continuous on $\mathcal{H}_0$; i.e., for each $t\geq 0$, \[ S_0(t)\varphi_{0n}\rightharpoonup S_0(t)\varphi_0 \ \text{in} \ \mathcal{H}_0 ~\text{when}~\varphi_{0n}\rightharpoonup \varphi_0 \ \text{in} \ \mathcal{H}_0. \] \end{lemma} \begin{lemma} \label{t:Robin-asymptotic-compactness} The semiflow $S_0$ is asymptotically compact in $\mathcal{H}_0$; i.e., if $\varphi_{0n}=(u_{0n},u_{1n})$ is any bounded sequence in $\mathcal{H}_0$ and if $t_n$ is any sequence such that $t_n\rightarrow\infty$ as $n\rightarrow \infty$, then the sequence $\varphi^{(n)}(t_n) = S_0(t_n)\varphi_{0n}$ has a convergent subsequence. \end{lemma} \begin{proof} The proof essentially follows from \cite[Proposition 2]{Frigeri10}. Indeed, the proof needed here is much simpler because of the (static) Robin boundary condition. \end{proof} The existence of a global attractor for the dynamical system $(S_0(t),\mathcal{H}_0)$ now follows from \cite[Theorem 3.3]{Ball00}. 
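\begin{remark}
For the reader's orientation, we briefly sketch the mechanism behind Lemma \ref{t:Robin-asymptotic-compactness}; the sketch is only schematic, and the complete argument is carried out in \cite{Ball04,Frigeri10}. Given a bounded sequence $\varphi_{0n}$ in $\mathcal{H}_0$ and $t_n\rightarrow\infty$, the uniform bound (\ref{Robin-bound-1}) permits the extraction of a subsequence (not relabeled) with $S_0(t_n)\varphi_{0n}\rightharpoonup \Phi$ weakly in $\mathcal{H}_0$. Roughly speaking, the energy equation (\ref{Robin-energy-3}) is then exploited to show that the norms also converge, at which point one concludes with the elementary Hilbert space fact
\[
S_0(t_n)\varphi_{0n}\rightharpoonup \Phi \ \text{in} \ \mathcal{H}_0 \quad\text{and}\quad \|S_0(t_n)\varphi_{0n}\|_{\mathcal{H}_0}\rightarrow\|\Phi\|_{\mathcal{H}_0} \quad\Longrightarrow\quad S_0(t_n)\varphi_{0n}\rightarrow \Phi \ \text{in} \ \mathcal{H}_0.
\]
\end{remark}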
\subsection{Optimal regularity for $\mathcal{A}_0$} In this section we report the result concerning the optimal regularity for the global attractor associated with Problem (R). It is at this point that we need to assume further smoothness from the nonlinear term $f$. Moving forward, we assume (\ref{f-assumption-2}) and (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}). With these assumptions, we find the asymptotic compactness property for the weak solutions from \cite[Theorem 3.17]{Gal&Shomberg15}. \begin{theorem} \label{exp-attr-r-1} Assume (\ref{f-assumption-2}) and (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) hold. There exists a closed and bounded subset $\mathcal{U}_0\subset \mathcal{D}_0$, such that for every nonempty bounded subset $B\subset \mathcal{H}_0$, \begin{equation} \mathrm{dist}_{\mathcal{H}_0}(S_0(t)B,\mathcal{U}_0) \le Q(\left\Vert B\right\Vert _{\mathcal{H}_0}) e^{-\omega_0 t}, \label{tran-1} \end{equation} for some constant $\omega_0>0$. \end{theorem} By more standard arguments in the theory of attractors (see, e.g., \cite{Hale88, Temam88}), it follows that the global attractor $\mathcal{A}_0 \subset \mathcal{U}_0$ for the semigroup $S_0(t)$ is bounded in $\mathcal{D}_0$. \begin{corollary} The global attractor admitted by $S_0$ for Problem (R) satisfies \begin{equation*} \mathcal{A}_0\subset\mathcal{U}_0. \end{equation*} Consequently, the global attractor $\mathcal{A}_0$ is bounded in $\mathcal{D}_0$ and consists only of strong solutions. \end{corollary} \subsection{Exponential attractor for Problem (R)} The existence of an exponential attractor depends on certain properties of the semigroup; namely, the smoothing property for the difference of any two trajectories and the existence of a more regular bounded absorbing set in the phase space. The existence of exponential attractors for Problem (R) follows directly from \cite{Gal&Shomberg15}. Indeed, Theorem \ref{ear} (below) was shown to hold for (\ref{damped-wave-equation}), under assumptions (\ref{f-assumption-2}) and (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}), with the dynamic boundary condition (\ref{DBC}). Moreover, the existence of the bounded absorbing set $\mathcal{B}_{0}^1\subset \mathcal{D}_0$ for the limit semigroup $S_{0}(t)$ associated with Problem (R) was established in \cite[Lemma 4.6]{Gal&Shomberg15}. \begin{theorem} \label{ear} Assume (\ref{f-assumption-2}) and (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) hold. The dynamical system $\left( S_0,\mathcal{H}_0\right)$ associated with Problem (R) admits an exponential attractor $\mathcal{M}_0$ compact in $\mathcal{H}_0$, and bounded in $\mathcal{U}_0$. Moreover, the following hold: (i) For each $t\geq 0$, $S_0(t)\mathcal{M}_0\subseteq \mathcal{M}_0$. (ii) The fractal dimension of $\mathcal{M}_0$ with respect to the metric $\mathcal{H}_0$ is finite, namely, \begin{equation*} \dim_{\mathrm{F}}\left( \mathcal{M}_0,\mathcal{H}_0\right) \leq C<\infty, \end{equation*} for some positive constant $C$. (iii) There exist $\omega_1>0$ and a nonnegative monotonically increasing function $Q(\cdot)$ such that, for all $t\geq 0$, \begin{equation} \label{exp-attn-9} {\mathrm{dist}} _{\mathcal{H}_0}(S_0(t)B,\mathcal{M}_0)\leq Q(\Vert B\Vert _{\mathcal{H}_0})e^{-\omega_1 t}, \end{equation} for every nonempty bounded subset $B$ of $\mathcal{H}_0$. \end{theorem} The proof of Theorem \ref{ear} follows from the application of an abstract result tailored specifically to our needs (see, e.g., \cite[Proposition 1]{EMZ00}, \cite{FGMZ04}, \cite{GGMP05}; cf. also Remark \ref{rem_att}\ below).
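\begin{remark}
To orient the reader, we indicate schematically the type of hypotheses such abstract results typically require (for the precise formulation, see \cite[Proposition 1]{EMZ00}, \cite{FGMZ04}, \cite{GGMP05}): there exist a time $t^*>0$, a constant $C^*>0$, and a Banach space $\mathcal{V}_0$ compactly embedded into $\mathcal{H}_0$ such that, for all $\varphi_0,\theta_0\in\mathcal{B}^1_0$,
\[
\|S_0(t^*)\varphi_0-S_0(t^*)\theta_0\|_{\mathcal{V}_0}\leq C^*\|\varphi_0-\theta_0\|_{\mathcal{H}_0},
\]
together with Lipschitz (or H\"older) continuity of the map $(t,\varphi_0)\mapsto S_0(t)\varphi_0$ on $[0,t^*]\times\mathcal{B}^1_0$. The first condition is the smoothing property for the difference of two trajectories mentioned at the beginning of this subsection, and it is precisely here that the higher-order dissipative estimates described in the introduction are needed.
\end{remark}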
\begin{remark} Above, \begin{equation*} \dim_{\mathrm{F}}(\mathcal{M}_0,\mathcal{H}_0):=\limsup_{r\rightarrow 0}\frac{\ln \mu _{\mathcal{H}_0}(\mathcal{M}_0,r)}{-\ln r}<\infty , \end{equation*} where $\mu _{\mathcal{H}_0}(\mathcal{X},r)$ denotes the minimum number of $r$-balls from $\mathcal{H}_0$ required to cover $\mathcal{X}$ (the so-called Kolmogorov entropy of $\mathcal{X}$). \end{remark} \begin{remark} \label{rem_att} According to the above sources, $\mathcal{M}_0\subset\mathcal{B}^1_0$ is only guaranteed to attract bounded subsets of $\mathcal{B}^1_0$ exponentially fast (in the topology of $\mathcal{H}_0$); i.e., \[ {\mathrm{dist}} _{\mathcal{H}_0}(S_0(t)\mathcal{B}^1_0,\mathcal{M}_0) \leq Q(\|\mathcal{B}^1_0\|_{\mathcal{H}_0})e^{-\omega_2 t}, \] for some constant $\omega_2>0$. However, from \cite[Lemma 4.6]{Gal&Shomberg15}, we have that for any bounded subset $B$ of $\mathcal{H}_0$, \[ {\mathrm{dist}} _{\mathcal{H}_0}(S_0(t)B,\mathcal{B}^1_0) \leq Q(\|B\|_{\mathcal{H}_0})e^{-\omega_3t}, \] for some constant $\omega_3>0$ (moreover, this can be seen from (\ref{tran-1}) after possibly increasing the radius of $\mathcal{B}^1_0$ in order to contain $\mathcal{U}_0$). Thus, by the so-called ``transitivity of exponential attraction'' (see Lemma \ref{t:exp-attr}) it follows that (\ref{exp-attn-9}) holds for all non-empty bounded subsets $B$ in $\mathcal{H}_0$. \end{remark} Finally, we recall the following. \begin{corollary} There holds the bound \begin{equation*} \dim_{\mathrm{F}}(\mathcal{A}_0,\mathcal{H}_0)\leq \dim_{\mathrm{F}}(\mathcal{M}_0,\mathcal{H}_0). \end{equation*} That is, the global attractor $\mathcal{A}_0$ also possesses finite fractal dimension. \end{corollary} \section{Attractors for Problem (A), the $\varepsilon>0$ case} \label{s:acoustic} In this section Problem (A) is discussed. The $\varepsilon=1$ case was already presented in \cite{Frigeri10}. The main results presented here follow directly from \cite{Frigeri10} with suitable modifications to account for the perturbation parameter $\varepsilon$ occurring in the equation governing the acoustic boundary condition. Generally, we do not need to present the proofs for the case $\varepsilon\in(0,1)$ since the modified proofs follow directly from Frigeri's work \cite{Frigeri10}. However, in some instances $\varepsilon$ may appear in a crucial way in some parameters. Hence, it is important to keep track of the required modifications. These observations will be explained, where needed, by a remark following the statement of the claim. Indeed, the radius of the absorbing set $\mathcal{B}_\varepsilon^1$ associated with the semiflow $S_\varepsilon$ here depends on $\varepsilon$ in a crucial way (see Remark \ref{r:time-1}). Despite this fact, we are still able to show that a family of global attractors exists for each $\varepsilon\in(0,1]$. Furthermore, under assumptions (\ref{f-assumption-2}), (\ref{g-reg-ass-1})-(\ref{f-reg-ass-3}), we obtain the optimal regularity of the global attractors by showing that they are bounded (though not uniformly with respect to $\varepsilon$) in a compact subset of the phase space. Additionally, under assumptions (\ref{f-assumption-2}), (\ref{g-reg-ass-1})-(\ref{f-reg-ass-3}), we also show the existence of a family of exponential attractors. Consequently, the corresponding global attractor possesses finite fractal dimension.
However, due to a lack of appropriate estimates uniform in the perturbation parameter $\varepsilon$, any robustness/H\"older continuity result for the family of exponential attractors is still out of reach. The section begins with the functional setting for the problem. By using the same arguments in \cite{Frigeri10}, it can easily be shown that, for each $\varepsilon\in(0,1]$, Problem (A) possesses unique global weak solutions in a suitable phase space, and the solutions depend continuously on the initial data. For the reader's convenience, we sketch the main arguments involved in the proofs. As with Problem (R), the solutions generate a family of Lipschitz continuous semiflows, now depending on $\varepsilon$, each of which admits a bounded, absorbing, positively invariant set. As in \cite{Frigeri10}, the existence of global attractors follows using Ball's asymptotic compactness methods. As mentioned, under suitable assumptions on $f$ and $g$, we will also establish the existence of a family of exponential attractors. \subsection{The functional framework for Problem (A)} The phase space and abstract formulation for the perturbation problem are given in this section. The formulation depends on the parameter $\varepsilon$. Let \[ \mathcal{H}:= H^1(\Omega) \times L^2(\Omega) \times L^2(\Gamma) \times L^2(\Gamma). \] The space $\mathcal{H}$ is Hilbert with the norm whose square is given by, for $\zeta=(u,v,\delta,\gamma)\in\mathcal{H}$, \begin{align} \|\zeta\|^2_{\mathcal{H}} & := \|u\|^2_1 + \|v\|^2 + \|\delta\|^2_{L^2(\Gamma)} + \|\gamma\|^2_{L^2(\Gamma)} \notag \\ & = \left( \|\nabla u\|^2 + \|u\|^2 \right) + \|v\|^2 + \|\delta\|^2_{L^2(\Gamma)} + \|\gamma\|^2_{L^2(\Gamma)}. \notag \end{align} Let $\varepsilon>0$ and denote by $\mathcal{H}_\varepsilon$ the space $\mathcal{H}$ when endowed with the $\varepsilon$-weighted norm whose square is given by \begin{align} \|\zeta\|^2_{\mathcal{H}_\varepsilon} := \|u\|^2_1 + \|v\|^2 + \varepsilon\|\delta\|^2_{L^2(\Gamma)} + \|\gamma\|^2_{L^2(\Gamma)}. \notag \end{align} Let \[ D(\Delta):= \{ u\in L^2(\Omega) : \Delta u\in L^2(\Omega) \}, \] and define the set \[ D(\mathrm{A}_\varepsilon) := \left\{ (u,v,\delta,\gamma)\in D(\Delta) \times H^1(\Omega) \times L^2(\Gamma) \times L^2(\Gamma) : \partial_{\bf{n}}u = \gamma \ \text{on} \ \Gamma \right\}. \] Define the linear unbounded operator $\mathrm{A}_\varepsilon:D(\mathrm{A}_\varepsilon)\subset\mathcal{H}_\varepsilon\rightarrow\mathcal{H}_\varepsilon$ by \[ \mathrm{A}_\varepsilon:=\begin{pmatrix} 0 & 1 & 0 & 0 \\ \Delta-1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -1 & -\varepsilon & -\varepsilon \end{pmatrix}. \] For each $\varepsilon\in(0,1]$, the operator $\mathrm{A}_\varepsilon$ with domain $D(\mathrm{A}_\varepsilon)$ is an infinitesimal generator of a strongly continuous semigroup of contractions on $\mathcal{H}_\varepsilon$, denoted $e^{\mathrm{A}_\varepsilon t}$. According to \cite{Frigeri10}, the $\varepsilon=1$ case follows from \cite[Theorem 2.1]{Beale76}. For each $\varepsilon\in(0,1]$, $\mathrm{A}_\varepsilon$ is dissipative because, for all $\zeta=(u,v,\delta,\gamma)\in D(\mathrm{A}_\varepsilon)$, \[ \langle \mathrm{A}_\varepsilon\zeta,\zeta \rangle_{\mathcal{H}_\varepsilon} = -\|v\|^2 -\varepsilon\|\gamma\|^2_{L^2(\Gamma)} \leq 0. \] Also, the Lax-Milgram theorem can be applied to show that the elliptic system, $(I+\mathrm{A}_\varepsilon)\zeta=\xi$, admits a unique weak solution $\zeta\in D(\mathrm{A}_\varepsilon)$ for any $\xi\in\mathcal{H}_\varepsilon$. 
Thus, $\mathrm{R}(I+\mathrm{A}_\varepsilon)=\mathcal{H}_\varepsilon$. \begin{remark} Notice that ${\rm{rank}}(\mathrm{R})=3$ while ${\rm{rank}}(\mathrm{A}_\varepsilon)=4$; that is, when $\varepsilon=0$, the operator $\mathrm{R}$ exhibits a ``drop in rank.'' This feature corresponds to the Robin boundary condition given in terms of $u_t$ that results when $\varepsilon=0$ in the boundary condition equation (\ref{acoustic-boundary}). \end{remark} For each $\varepsilon\in(0,1]$, the map $\mathcal{G}_\varepsilon:\mathcal{H}_\varepsilon\rightarrow\mathcal{H}_\varepsilon$ given by \[ \mathcal{G}_\varepsilon(\zeta):=\begin{pmatrix} 0 \\ -f(u) \\ 0 \\ -\varepsilon g(\delta) \end{pmatrix} \] for all $\zeta=(u,v,\delta,\gamma)\in\mathcal{H}_\varepsilon$ is locally Lipschitz continuous because the map $f:H^1(\Omega)\rightarrow L^2(\Omega)$ is locally Lipschitz continuous, and, because of the growth assumption on $g$ given in (\ref{g-assumption-1}), it is easy to see that $g:L^2(\Gamma)\rightarrow L^2(\Gamma)$ is globally Lipschitz continuous. Then Problem (A) may be put into the abstract form in $\mathcal{H}_\varepsilon$ \begin{equation}\label{abstract-acoustic-problem} \left\{ \begin{array}{l} \displaystyle\frac{ {\mathrm{d}} \zeta}{ {\mathrm{d}} t} = \mathrm{A}_\varepsilon\zeta + \mathcal{G}_\varepsilon(\zeta) \\ \zeta(0)=\zeta_0 \end{array} \right. \end{equation} where $\zeta=\zeta(t)=(u(t),u_t(t),\delta(t),\delta_t(t))$ and $\zeta_0=(u_0,u_1,\delta_0,\delta_1)\in\mathcal{H}_\varepsilon$, now where $v=u_t$ and $\gamma=\delta_t$ in the sense of distributions. To obtain the {\em{energy equation}} for Problem (A), multiply (\ref{damped-wave-equation}) by $2u_t$ in $L^2(\Omega)$ and multiply (\ref{acoustic-boundary}) by $2\delta_t$ in $L^2(\Gamma)$, then sum the resulting identities to obtain \begin{equation}\label{acoustic-energy-3} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} \left\{ \|\zeta\|^2_{\mathcal{H}_\varepsilon} + 2\int_\Omega F(u) {\mathrm{d}} x + 2\varepsilon\int_\Gamma G(\delta) {\mathrm{d}} \sigma \right\} + 2\|u_t\|^2 + 2\varepsilon\|\delta_t\|^2_{L^2(\Gamma)} = 0, \end{equation} where $F(s)=\int_0^s f(\xi) {\mathrm{d}} \xi$ and now $G(s)=\int_0^s g(\xi) {\mathrm{d}} \xi$, and $ {\mathrm{d}} \sigma$ represents the natural surface measure on $\Gamma$. Reflecting (\ref{from-f-assumption-2}), there is a constant $\mu_1\in(0,1]$ such that \begin{equation}\label{from-g-assumption-2} 2\varepsilon\int_\Gamma G(\delta) {\mathrm{d}} \sigma \geq -(1-\mu_1)\varepsilon\|\delta\|^2_{L^2(\Gamma)} - \varepsilon\kappa_g \end{equation} for some constant $\kappa_g\geq 0$. Additionally, from the sign condition (\ref{g-assumption-2}), there holds \begin{equation}\label{from-g-assumption-1} \varepsilon\langle g(\delta),\delta \rangle_{L^2(\Gamma)} \ge -(1-\mu_1)\varepsilon\|\delta\|^2_{L^2(\Gamma)} - \varepsilon\kappa_g. \end{equation} \begin{lemma} \label{adjoint-a} For each $\varepsilon \in (0,1]$, the adjoint of $\mathrm{A}_\varepsilon$, denoted $\mathrm{A}_\varepsilon^{\ast }$, is given by \begin{equation*} \mathrm{A}_\varepsilon^{\ast }:= -\begin{pmatrix} 0 & 1 & 0 & 0 \\ \Delta-1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -1 & -\varepsilon & \varepsilon \end{pmatrix}, \end{equation*} with domain \begin{equation*} D(\mathrm{A}_\varepsilon^{\ast }):=\{(\chi ,\psi, \phi,\xi )\in D(\Delta) \times H^1(\Omega) \times L^2(\Gamma) \times L^2(\Gamma) : \partial _{\mathbf{n}}\chi = - \xi \ \text{on} \ \Gamma \}. \end{equation*} \end{lemma} \begin{proof} The proof is a calculation similar to, e.g., \cite[Lemma 3.1]{Ball04}. 
\end{proof} \subsection{Well-posedness of Problem (A)} Again, the definition of weak solution is from \cite{Ball77}. \begin{definition} Let $T>0$. A map $\zeta\in C([0,T];\mathcal{H}_\varepsilon)$ is a {\em{weak solution}} of (\ref{abstract-acoustic-problem}) on $[0,T]$ if for each $\xi\in D(\mathrm{A}^*_\varepsilon)$ the map $t \mapsto \langle \zeta(t),\xi \rangle_{\mathcal{H}_\varepsilon}$ is absolutely continuous on $[0,T]$ and satisfies, for almost all $t\in[0,T]$, \begin{equation} \label{abs-2} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\langle \zeta(t),\xi \rangle_{\mathcal{H}_\varepsilon} = \langle \zeta(t),\mathrm{A}^*_\varepsilon\xi \rangle_{\mathcal{H}_\varepsilon} + \langle \mathcal{G}_\varepsilon(\zeta(t)),\xi \rangle_{\mathcal{H}_\varepsilon}. \end{equation} The map $\zeta$ is a weak solution on $[0,\infty)$ (i.e. a {\em{global weak solution}}) if it is a weak solution on $[0,T]$ for all $T>0$. \end{definition} Following \cite{Ball04}, we provide the equivalent notion of a mild solution. \begin{definition} Let $T>0$. A function $\zeta:[0,T]\rightarrow\mathcal{H}_\varepsilon$ is a weak/mild solution of (\ref{abstract-acoustic-problem}) on $[0,T]$ if and only if $\mathcal{G}_\varepsilon(\zeta(\cdot))\in L^1(0,T;\mathcal{H}_\varepsilon)$ and $\zeta$ satisfies the variation of constants formula, for all $t\in[0,T],$ \[ \zeta(t)=e^{\mathrm{A}_\varepsilon t}\zeta_0 + \int_0^t e^{\mathrm{A}_\varepsilon(t-s)}\mathcal{G}_\varepsilon(\zeta(s)) {\mathrm{d}} s. \] \end{definition} Again, our notion of weak solution is equivalent to the standard concept of a weak (distributional) solution to Problem (A). Indeed, $f:H^{1}\left( \Omega \right) \rightarrow L^{2}\left( \Omega \right) $ is sequentially weakly continuous and continuous, $\left( \zeta_{t},\theta \right) \in C^{1}\left( \left[ 0,T\right] \right) $ for all $\theta \in D\left( \mathrm{A}^{\ast }_\varepsilon\right) $, and (\ref{abs-2}) is satisfied. \begin{definition} \label{aweak} A function $\zeta=(u,u_{t},\delta,\delta_t):[0,T]\rightarrow \mathcal{H}_\varepsilon$ is a weak solution of (\ref{abstract-acoustic-problem}) (and, thus of (\ref{damped-wave-equation}), (\ref{Robin-initial-conditions}), (\ref{acoustic-boundary}) and (\ref{acoustic-initial-conditions})) on $[0,T],$ if, for almost all $t\in \left[ 0,T\right],$ \begin{equation*} \zeta =(u,u_{t},\delta,\delta_t)\in C(\left[ 0,T\right] ;\mathcal{H}_\varepsilon), \end{equation*} and, for each $\psi \in H^{1}\left( \Omega \right),$ $\langle u_{t},\psi \rangle \in C^{1}\left( \left[ 0,T\right] \right) $ with \begin{equation*} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\left\langle u_{t}\left( t\right) ,\psi \right\rangle + \left\langle u_{t}\left( t\right) ,\psi \right\rangle + \left\langle u\left( t\right) ,\psi \right\rangle_1 = -\left\langle f\left( u\left( t\right) \right) ,\psi \right\rangle - \left\langle \delta_t\left( t\right) ,\psi \right\rangle _{L^{2}\left( \Gamma \right) }, \end{equation*} and, for each $\phi \in L^{2}\left(\Gamma\right),$ $\left( \delta_{t},\phi \right) \in C^{1}\left( \left[ 0,T\right] \right) $ with \begin{equation*} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\left\langle \delta_{t}\left( t\right) ,\phi \right\rangle_{L^{2}\left( \Gamma \right) } + \left\langle \delta_{t}\left( t\right) ,\phi \right\rangle_{L^{2}\left( \Gamma \right) } + \left\langle \delta\left( t\right) ,\phi \right\rangle _{L^{2}\left( \Gamma \right) } = - \left\langle g\left( \delta\left( t\right) \right) ,\phi\right\rangle_{L^2(\Gamma)}.
\end{equation*} \end{definition} Recall from the previous section that $f:H^1(\Omega)\rightarrow L^2(\Omega)$ is sequentially weakly continuous and continuous. Recall that, by \cite[Proposition 3.4]{Ball04} and Lemma \ref{adjoint-a}, $\langle \zeta,\xi \rangle_{\mathcal{H}_\varepsilon}\in C([0,T])$ for all $\xi\in D(\mathrm{A}^*_\varepsilon)$. The definition of strong solution follows. First, for each $\varepsilon\in(0,1]$, define the space, \[ \mathcal{D}_\varepsilon:=\left\{ (u,v,\delta,\gamma)\in H^2(\Omega)\times H^1(\Omega)\times H^{1/2}(\Gamma)\times H^{1/2}(\Gamma):\partial_{\bf{n}}u = \gamma ~~\text{on}~~ \Gamma \right\}, \] and let $\mathcal{D}_\varepsilon$ be equipped with the $\varepsilon$-weighted norm whose square is given by, for all $\zeta=(u,v,\delta,\gamma)\in\mathcal{D}_\varepsilon$, \[ \|\zeta\|^2_{\mathcal{D}_\varepsilon}:=\|u\|^2_2+\|v\|^2_1+\varepsilon\|\delta\|^2_{H^{1/2}(\Gamma)}+\|\gamma\|^2_{H^{1/2}(\Gamma)}. \] \begin{definition} \label{d:acoustic-strong} Let $\zeta_0 = \left(u_0,u_1,\delta_0,\delta_1\right) \in \mathcal{D}_\varepsilon$: that is, let $\zeta_0\in H^{2}(\Omega )\times H^{1}(\Omega )\times H^{1/2}(\Gamma)\times H^{1/2}(\Gamma)$ be such that the compatibility condition \begin{equation*} \partial _{\bf{n}}u_{0} = \delta_1 \ \text{on} \ \Gamma \end{equation*} is satisfied. A function $\zeta (t) =(u(t),u_t(t),\delta(t),\delta_t(t))$ is called a (global) strong solution if it is a (global) weak solution in the sense of Definition \ref{aweak} and if it satisfies the following regularity properties: \begin{equation} \label{acoustic-regularity-property} \begin{array}{l} \zeta \in L^{\infty }([0,\infty);\mathcal{D}_\varepsilon)\quad\text{and}\quad\partial_t\zeta\in L^{\infty }([0,\infty);\mathcal{H}_\varepsilon). \end{array} \end{equation} Therefore, $\zeta(t) = (u(t), u_t(t),\delta(t),\delta_t(t)) $ satisfies the equations (\ref{damped-wave-equation}), (\ref{acoustic-boundary}) and (\ref{acoustic-initial-conditions}) almost everywhere; i.e., is a strong solution. \end{definition} Now we give the first main result in this section. \begin{theorem} \label{t:ws-a} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), (\ref{g-assumption-1}), and (\ref{g-assumption-2}) hold. Let $\zeta_0\in\mathcal{H}_\varepsilon$. For each $\varepsilon\in(0,1]$, there exists a unique global weak solution $\zeta\in C([0,\infty);\mathcal{H}_\varepsilon)$ to (\ref{abstract-acoustic-problem}). For each weak solution, the map \begin{equation}\label{acoustic-C1-map} t \mapsto \|\zeta(t)\|^2_{\mathcal{H}_\varepsilon} + 2\int_\Omega F(u(t)) {\mathrm{d}} x + 2\varepsilon\int_\Gamma G(\delta(t)) {\mathrm{d}} \sigma \end{equation} is $C^1([0,\infty))$ and the energy equation (\ref{acoustic-energy-3}) holds (in the sense of distributions). Moreover, for all $\zeta_0, \xi_0\in\mathcal{H}_\varepsilon$, there exists a positive constant $\nu_1>0$, depending on $\|\zeta_0\|_{\mathcal{H}_\varepsilon}$ and $\|\xi_0\|_{\mathcal{H}_\varepsilon}$, such that for all $t\geq 0$, \begin{equation}\label{acoustic-continuous-dependence} \|\zeta(t)-\xi(t)\|_{\mathcal{H}_\varepsilon} \leq e^{\nu_1 t} \|\zeta_0 - \xi_0\|_{\mathcal{H}_\varepsilon}. \end{equation} Furthermore, when (\ref{f-assumption-2}) and (\ref{g-reg-ass-1})-(\ref{f-reg-ass-3}) hold, and $\zeta_0\in\mathcal{D}_\varepsilon$, there exists a unique global strong solution $\zeta\in C([0,\infty);\mathcal{D}_\varepsilon)$ to (\ref{abstract-acoustic-problem}). \end{theorem} \begin{proof} We only sketch the first part of the proof.
Following \cite[Proof of Theorem 1]{Frigeri10}: The operator $\mathrm{A}_\varepsilon$ is the generator of a $C^0$-semigroup of contractions in $\mathcal{H}_\varepsilon$. This follows from \cite{Beale76} and the Lumer-Phillips theorem. Also, by (\ref{f-assumption-1}) and (\ref{g-assumption-1}), the functional $\mathcal{G}_\varepsilon:\mathcal{H}_\varepsilon\rightarrow\mathcal{H}_\varepsilon$ is locally Lipschitz continuous. So there is $T^*>0$ and a maximal weak solution $\zeta\in C([0,T^*),\mathcal{H}_\varepsilon)$ (cf. e.g. \cite{Zheng04}). To show $T^*=+\infty$, observe that integrating the energy identity (\ref{acoustic-energy-3}) over $(0,t)$ yields, for all $t\in[0,T^*),$ \begin{align} & \|\zeta(t)\|^2_{\mathcal{H}_\varepsilon} + 2\int_\Omega F(u(t)) {\mathrm{d}} x + 2\varepsilon\int_\Gamma G(\delta(t)) {\mathrm{d}} \sigma + \int_0^t \left( 2\|u_\tau(\tau)\|^2 + 2\varepsilon\|\delta_\tau(\tau)\|^2_{L^2(\Gamma)} \right) {\mathrm{d}} \tau \notag \\ &= \|\zeta_0\|^2_{\mathcal{H}_\varepsilon} + 2\int_\Omega F(u_0) {\mathrm{d}} x + 2\varepsilon\int_\Gamma G(\delta_0) {\mathrm{d}} \sigma. \label{fit-1} \end{align} Applying (\ref{from-f-assumption-2}) and (\ref{from-g-assumption-2}) to (\ref{fit-1}), we find that, for all $t\in[0,T^*)$, \begin{equation*} \|\zeta(t)\|_{\mathcal{H}_\varepsilon}\le C(\|\zeta_0\|_{\mathcal{H}_\varepsilon}), \end{equation*} with some $C>0$ independent of $t$, which of course means $T^*=+\infty.$ Moreover, we know that when $\zeta_0\in\mathcal{H}_\varepsilon$ is such that $\|\zeta_0\|_{\mathcal{H}_\varepsilon}\le R$ for all $\varepsilon\in(0,1],$ then there holds the uniform bound, for all $t\ge0,$ \begin{equation} \label{acoustic-bound} \|\zeta(t)\|_{\mathcal{H}_\varepsilon}\le Q(R). \end{equation} The remainder of the proof follows directly from \cite[Theorem 1]{Frigeri10}. \end{proof} As above, we formalize the dynamical system associated with Problem (A). \begin{corollary} \label{sf-a} Let $\zeta_0=(u_0,u_1,\delta_0,\delta_1)\in\mathcal{H}_\varepsilon$ and let $u$ and $\delta$ be the unique solution of Problem (A). For each $\varepsilon\in(0,1],$ the family of maps $S_\varepsilon=(S_\varepsilon(t))_{t\geq 0}$ defined by \begin{align} &S_\varepsilon(t)\zeta_0(x):= \notag \\ &(u(t,x,u_0,u_1,\delta_0,\delta_1),u_t(t,x,u_0,u_1,\delta_0,\delta_1),\delta(t,x,u_0,u_1,\delta_0,\delta_1),\delta_t(t,x,u_0,u_1,\delta_0,\delta_1)) \notag \end{align} is the semiflow generated by Problem (A). The operators $S_\varepsilon(t)$ satisfy \begin{enumerate} \item $S_\varepsilon(t+s)=S_\varepsilon(t)S_\varepsilon(s)$ for all $t,s\geq 0$. \item $S_\varepsilon(0)=I_{\mathcal{H}_\varepsilon}$ (the identity on $\mathcal{H}_\varepsilon$) \item $S_\varepsilon(t)\zeta_0\rightarrow S_\varepsilon(t_0)\zeta_0$ for every $\zeta_0\in\mathcal{H}_\varepsilon$ when $t\rightarrow t_0$. \end{enumerate} Additionally, each mapping $S_\varepsilon(t):\mathcal{H}_\varepsilon\rightarrow\mathcal{H}_\varepsilon$ is Lipschitz continuous, uniformly in $t$ on compact intervals; i.e., for all $\zeta_0, \chi_0\in\mathcal{H}_\varepsilon$, and for each $T\geq 0$, and for all $t\in[0,T]$, \begin{equation}\label{sf-a-lc} \|S_\varepsilon(t)\zeta_0-S_\varepsilon(t)\chi_0\|_{\mathcal{H}_\varepsilon} \leq e^{\nu_1 T}\|\zeta_0-\chi_0\|_{\mathcal{H}_\varepsilon}. \end{equation} \end{corollary} \begin{proof} The proof is not much different from the proof of Corollary \ref{sf-r} above. The Lipschitz continuity property follows from (\ref{acoustic-continuous-dependence}).
\end{proof} \subsection{Dissipativity of Problem (A)} The dynamical system $(S_\varepsilon(t),\mathcal{H}_\varepsilon)$ is shown to admit a positively invariant, bounded absorbing set in $\mathcal{H}_\varepsilon$. The argument follows \cite[Theorem 2]{Frigeri10}. \begin{lemma} \label{t:a-abs-set} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), (\ref{g-assumption-1}), and (\ref{g-assumption-2}) hold. For each $\varepsilon\in(0,1]$, there exists $R_{1}>0$, independent of $\varepsilon$, such that the following holds: for every $R>0$, there exists $t_{1\varepsilon}=t_1(\varepsilon,R) \ge 0$, depending on $\varepsilon$ and $R$, so that, for all $\zeta_0\in\mathcal{H}_\varepsilon$ with $\|\zeta_0\|_{\mathcal{H}_\varepsilon} \le R$ for every $\varepsilon\in(0,1]$, and for all $t\geq t_{1\varepsilon}$, \begin{equation} \label{acoustic-radius} \|S_\varepsilon(t)\zeta_0\|_{\mathcal{H}_\varepsilon} \le R_{1}. \end{equation} Furthermore, for each $\varepsilon\in(0,1],$ the set \begin{equation} \label{abss-1} \mathcal{B}_\varepsilon:=\{ \zeta\in \mathcal{H}_\varepsilon : \|\zeta\|_{\mathcal{H}_\varepsilon} \leq R_{1} \} \end{equation} is closed, bounded, absorbing, and positively invariant for the dynamical system $(S_\varepsilon,\mathcal{H}_\varepsilon)$. \end{lemma} \begin{remark} \label{r:time-1} The proof of Lemma \ref{t:a-abs-set} utilizes Proposition \ref{t:diff-ineq-1}, and as such, the ``time of entry,'' $t_{1\varepsilon},$ may be explicitly calculated in terms of the parameters in (\ref{from-f-assumption-2}) and (\ref{from-g-assumption-2}), and in terms of $R$ of course, as \begin{align} t_{1\varepsilon}(\iota) & := \frac{1}{\iota}\left( C_{2}R(1+R^3) + \kappa_f+\varepsilon\kappa_g \right). \notag \end{align} Furthermore, after a calculation similar to the one given in Remark \ref{Robin-time}, we find the radii of $\mathcal{B}_\varepsilon$ to be given by, for all $\varepsilon\in(0,1]$ and $\iota>0,$ \begin{align} R^2_{1\varepsilon}(\iota) &:= \frac{C_2}{C_1} \left( \kappa_f+\varepsilon\kappa_g+\frac{\iota}{m_1\varepsilon} \right)^{1/2} \left( \kappa_f+\varepsilon\kappa_g+1 + \left( \kappa_f+\varepsilon\kappa_g+\frac{\iota}{m_1\varepsilon} \right)^{3/2} \right). \notag \end{align} Observe, in general, $R_{1\varepsilon}\sim\varepsilon^{-1}.$ However, we now choose $\iota=\iota_\varepsilon=m_1 \varepsilon.$ Then \[ t_{1\varepsilon}(m_1\varepsilon):=\frac{1}{\varepsilon m_1}\left( 1+\frac{C_2R(1+R^3)}{\kappa_f+\varepsilon\kappa_g} \right). \] Hence, $t_{1\varepsilon}\rightarrow+\infty$ as $\varepsilon\rightarrow0$. But with such a choice of $\iota$, the radius $R_{1\varepsilon}(m_1\varepsilon)$ has a fixed upper-bound {\em{independent}} of $\varepsilon$, \begin{equation} \label{g-attr-bound} R^2_{1\varepsilon}(m_1\varepsilon):= \frac{C_2}{C_1} \left(\kappa_f+\varepsilon\kappa_g+1\right)^{3/2} \left( 1 + \left( \kappa_f+\varepsilon\kappa_g+1 \right)^{1/2} \right). \end{equation} Compare this, and the limit as $\varepsilon\rightarrow 0$, to the radius of $\mathcal{B}_0$ given in (\ref{reg-radii}). \end{remark} \subsection{Global attractors for Problem (A)} Concerning the existence of global attractors, it is now assumed that (\ref{g-reg-ass-1}) holds; i.e., $g\equiv0$, (yet it is still assumed that $f$ only satisfy (\ref{f-assumption-1})-(\ref{f-assumption-2})). 
Hence, the corresponding acoustic boundary condition is now \begin{equation}\label{acoustic-boundary-2} \left\{ \begin{array}{ll} \delta_{tt} + \varepsilon[\delta_t + \delta] = -u_t & \text{on} \ (0,\infty)\times\Gamma \\ \delta_t = \partial_{\bf{n}}u & \text{on} \ (0,\infty)\times\Gamma, \end{array}\right. \end{equation} again supplemented with the initial conditions (\ref{acoustic-initial-conditions}). By using asymptotic compactness methods, it can be shown, with some modifications to \cite{Frigeri10}, that the semiflow $S_\varepsilon$ admits a global attractor in $\mathcal{H}_\varepsilon$, for each $\varepsilon\in(0,1]$; thus, defining a family of global attractors in $\mathcal{H}_\varepsilon$ (cf. e.g. (\ref{gfam-1})). The continuity properties of this family of sets will be developed in the next section. The existence of a family of global attractors in $\mathcal{H}_\varepsilon$ admitted by the semiflows $S_\varepsilon$ for Problem (A) follows from \cite[Theorem 3.3]{Ball00} as in \S\ref{s:Robin} for Problem (R). We remind the reader that for this result, $g\equiv0,$ and Problem (A) is equipped with the linear boundary condition (\ref{acoustic-boundary-2}). \begin{theorem}\label{t:acoustic-global} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), and (\ref{g-reg-ass-1}) hold. For each $\varepsilon\in(0,1]$, the dynamical system $(S_\varepsilon(t),\mathcal{H}_\varepsilon)$ admits a global attractor $\mathcal{A}_\varepsilon$ in $\mathcal{H}_\varepsilon$. The global attractor is invariant under the semiflow $S_\varepsilon$ (both positively and negatively) and attracts all nonempty bounded subsets of $\mathcal{H}_\varepsilon$; precisely, \begin{enumerate} \item For each $t\geq 0$, $S_\varepsilon(t)\mathcal{A}_\varepsilon=\mathcal{A}_\varepsilon$, and \item For every nonempty bounded subset $B$ of $\mathcal{H}_\varepsilon$, \[ \lim_{t\rightarrow\infty}{\rm{dist}}_{\mathcal{H}_\varepsilon}(S_\varepsilon(t)B,\mathcal{A}_\varepsilon):=\lim_{t\rightarrow\infty}\sup_{\varphi\in B}\inf_{\theta\in\mathcal{A}_\varepsilon}\|S_\varepsilon(t)\varphi-\theta\|_{\mathcal{H}_\varepsilon}=0. \] \end{enumerate} The global attractor is unique and given by \[ \mathcal{A}_\varepsilon=\omega(\mathcal{B}_\varepsilon):=\bigcap_{s\geq 0}{\overline{\bigcup_{t\geq s} S_\varepsilon(t)\mathcal{B}_\varepsilon}}^{\mathcal{H}_\varepsilon}. \] Furthermore, $\mathcal{A}_\varepsilon$ is the maximal compact invariant subset in $\mathcal{H}_\varepsilon$. \end{theorem} Again, the weak continuity of the semiflow $S_\varepsilon$ in $\mathcal{H}_\varepsilon$ follows from the general result from \cite[Theorem 3.6]{Ball04}. The asymptotic compactness result is borrowed from \cite{Frigeri10}, and generalized from $\varepsilon=1$ to the case where $\varepsilon\in(0,1]$. \begin{lemma} For each $\varepsilon\in(0,1]$, the semiflow $S_\varepsilon$ is weakly continuous on $\mathcal{H}_\varepsilon$; i.e., for each $t\geq 0$, \[ S_\varepsilon(t)\zeta_{0n}\rightharpoonup S_\varepsilon(t)\zeta_0 \ \text{in} \ \mathcal{H}_\varepsilon ~\text{when}~\zeta_{0n}\rightharpoonup \zeta_0 \ \text{in} \ \mathcal{H}_\varepsilon. \] \end{lemma} \begin{proof} For each fixed $\varepsilon\in(0,1],$ the proof from \cite[Theorem 3.6]{Ball04} applies to Problem (A). 
\end{proof} \begin{lemma} \label{aac} For each $\varepsilon\in(0,1]$, the semiflow $S_\varepsilon$ is asymptotically compact in $\mathcal{H}_\varepsilon$; i.e., if $\zeta_{0n}=(u_{0n},u_{1n},\delta_{0n},\delta_{1n})$ is any bounded sequence in $\mathcal{H}_\varepsilon$ and if $t_n$ is any sequence such that $t_n\rightarrow\infty$ as $n\rightarrow \infty$, then the sequence $\zeta^{(n)}(t_n) = S_\varepsilon(t_n)\zeta_{0n}$ has a convergent subsequence. \end{lemma} \begin{proof}[Proof of Theorem \ref{t:acoustic-global}] We apply the method of generalized semiflows due to Ball \cite{Ball00,Ball04}. This result follows directly from the existence of a bounded absorbing set $\mathcal{B}_\varepsilon$, due to Lemma \ref{t:a-abs-set}, and the asymptotic compactness of the semiflow $S_\varepsilon$, proven in Lemma \ref{aac}. \end{proof} \subsection{Optimal regularity for $\mathcal{A}_\varepsilon$} This section discusses the asymptotic compactness result for the weak solutions to Problem (A), assuming the following (\ref{f-assumption-2}), (\ref{g-reg-ass-1}), (\ref{f-reg-ass-2}), and (\ref{f-reg-ass-3}) hold. The results directly follow the presentation in \cite{Frigeri10} with modifications to include the perturbation parameter $\varepsilon\in(0,1].$ \begin{theorem} \label{optimal-a} Assume (\ref{f-assumption-2}), (\ref{g-reg-ass-1}), (\ref{f-reg-ass-2}), and (\ref{f-reg-ass-3}) hold. For each $\varepsilon\in(0,1]$, there exists a closed and bounded subset $\mathcal{U}_\varepsilon\subset \mathcal{D}_\varepsilon$, such that for every nonempty bounded subset $B\subset \mathcal{H}_\varepsilon$, \begin{equation} \mathrm{dist}_{\mathcal{H}_\varepsilon}(S_\varepsilon(t)B,\mathcal{U}_\varepsilon) \le Q_\varepsilon(\left\Vert B\right\Vert _{\mathcal{H}_\varepsilon}) e^{-\omega_{4 \varepsilon} t}, \label{tran-2} \end{equation} for some nonnegative monotonically increasing function $Q_{\varepsilon}(\cdot)$ and for some positive constant $\omega_{4 \varepsilon}>0$, both depending on $\varepsilon$ where $Q_\varepsilon(\cdot)\sim\varepsilon^{-1}$ and $\omega_{4\varepsilon}\sim\varepsilon^{-1}.$ \end{theorem} Hence, we immediately have the following \begin{corollary} For each $\varepsilon\in(0,1]$, the global attractor $\mathcal{A}_\varepsilon$ admitted by the semiflow $S_\varepsilon$ satisfies \begin{equation*} \mathcal{A}_\varepsilon\subset\mathcal{U}_\varepsilon. \end{equation*} Consequently, for each $\varepsilon\in(0,1]$, the global attractor $\mathcal{A}_\varepsilon$ is bounded in $\mathcal{D}_\varepsilon$ and consists only of strong solutions. \end{corollary} The proof of Theorem \ref{optimal-a} proceeds along the usual lines; whereby decomposing the semiflow $S_\varepsilon$ into two parts, one which decays (exponentially) to zero, and one part which is precompact. Recall, since fractional powers of the Laplacian with acoustic boundary conditions are undefined, the precompactness property will be earned through the application of $H^2$-elliptic regularity results. Define \begin{equation} \label{beta} \psi (s):=f(s)+\beta s \end{equation} for some constant $\beta \ge \ell_2$ to be determined later (observe, $\psi'(s)\ge 0$ thanks to assumption (\ref{f-reg-ass-3})). Set $\Psi (s):=\int_{0}^{s}\psi (\sigma ) {\mathrm{d}} \sigma$. Let $\zeta_0=(u_0,u_1,\delta_0,\delta_1)\in\mathcal{H}_\varepsilon$. Let $\zeta(t)=(u(t),u_t(t),\delta(t),\delta_t(t))$ denote the corresponding global solution of Problem (A) on $[0,\infty)$ with the initial data $\zeta_0$. 
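For orientation, observe that, in view of (\ref{beta}), the interior equation (\ref{damped-wave-equation}) of Problem (A) may be rewritten as
\begin{equation*}
u_{tt}+u_{t}-\Delta u+u+f(u)=0
\quad\Longleftrightarrow\quad
u_{tt}+u_{t}-\Delta u+u+\psi(u)=\beta u,
\end{equation*}
since $f(u)=\psi(u)-\beta u$. It is precisely this form that the splitting introduced below reproduces.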
We then decompose Problem (A) into the following systems of equations. For all $t\ge0$, set \begin{align} \zeta(t) & = (u(t),u_t(t),\delta(t),\delta_t(t)) \notag \\ & = \left( v(t),v_t(t),\gamma(t),\gamma_t(t) \right) + \left( w(t),w_t(t),\theta(t),\theta_t(t) \right) \notag \\ & =: \xi(t) + \chi(t). \notag \end{align} Then $\xi$ and $\chi$ satisfy the IBVPs, \begin{equation} \left\{ \begin{array}{ll} \label{pde-v} v_{tt} + v_{t} - \Delta v + v + \psi (u) - \psi (w) = 0 & \text{in}\quad(0,\infty )\times \Omega, \\ \gamma_{tt} + \varepsilon [ \gamma_t + \gamma ] = -v_t & \text{on}\quad(0,\infty)\times \Gamma, \\ \gamma_t = \partial_{\bf{n}}v & \text{on}\quad(0,\infty )\times \Gamma, \\ \xi(0) = \zeta_0 & \text{at}\quad\{0\}\times{\overline{\Omega}}, \\ \end{array} \right.\end{equation} and, respectively, \begin{equation} \left\{ \begin{array}{ll} \label{pde-w} w_{tt} + w_{t} - \Delta w + w + \psi (w) = \beta u & \text{in}\quad(0,\infty)\times \Omega, \\ \theta_{tt} + \varepsilon [ \theta_t + \theta ] = -w_t & \text{on}\quad(0,\infty)\times \Gamma, \\ \theta_{t} = \partial_{\bf{n}}w & \text{on}\quad(0,\infty)\times \Gamma, \\ \chi(0) = {\bf{0}} & \text{at}\quad\{0\}\times{\overline{\Omega}}. \end{array} \right. \end{equation} In view of Lemmas \ref{t:uniform-bound-w} and \ref{t:uniform-decay} below, we define the one-parameter family of maps, $K_\varepsilon(t):\mathcal{H}_\varepsilon\rightarrow \mathcal{H}_\varepsilon$, by \begin{equation*} K_\varepsilon(t)\zeta_{0} := \chi(t), \end{equation*} where $\chi(t)$ is the solution of (\ref{pde-w}). With such $\chi(t)$, we may define a second function $\xi(t)$ for all $t\ge 0$ as the solution of (\ref{pde-v}). Through the dependence of $\xi$ on $\chi$ and $\zeta_{0}$, the solution of (\ref{pde-v}) defines a one-parameter family of maps, $Z_\varepsilon(t):\mathcal{H}_{\varepsilon}\rightarrow \mathcal{H}_\varepsilon$, by \begin{equation*} Z_\varepsilon(t)\zeta_{0} := \xi(t). \end{equation*} Notice that if $\xi$ and $\chi$ are solutions to (\ref{pde-v}) and (\ref{pde-w}), respectively, then the function $\zeta := \xi + \chi$ is a solution to the original Problem (A), for each $\varepsilon\in(0,1]$. The first lemma shows that the operators $K_\varepsilon$ are bounded in $\mathcal{H}_\varepsilon$, uniformly with respect to $\varepsilon$. The result largely follows from the existence of a bounded absorbing set $\mathcal{B}_\varepsilon$ in $\mathcal{H}_\varepsilon$ for $S_\varepsilon$ (recall (\ref{acoustic-bound})). \begin{lemma} \label{t:uniform-bound-w} For each $\varepsilon \in (0,1]$ and $\zeta _{0}=(u_{0},u_{1},\delta_0,\delta_1) \in \mathcal{H}_\varepsilon$, there exists a unique global weak solution $\chi = (w, w_{t}, \theta, \theta_t) \in C([0,\infty );\mathcal{H}_\varepsilon)$ to problem (\ref{pde-w}) satisfying \begin{equation} \label{bnd-2} \theta_{t}\in L_{\mathrm{loc}}^{2}([0,\infty )\times \Gamma ). \end{equation} Moreover, for all $\zeta_{0}\in \mathcal{H}_\varepsilon$ with $\left\Vert \zeta_{0}\right\Vert _{\mathcal{H}_\varepsilon}\leq R$ for all $\varepsilon \in (0,1]$, there holds for all $t\geq 0$, \begin{equation} \label{uniform-bound-w} \Vert K_\varepsilon(t)\zeta _{0}\Vert _{\mathcal{H}_\varepsilon} \le Q(R).
\end{equation} \end{lemma} \begin{lemma} \label{t:Gronwall-bound} For each $\varepsilon\in(0,1]$ and for all $\eta >0$, there is a function $Q_{\eta}(\cdot) \sim \eta^{-1}$, such that for every $0\leq s\leq t$, $\zeta_{0} = (u_{0},u_{1},\delta_0,\delta_1) \in \mathcal{B}_{\varepsilon }$, and $\varepsilon\in(0,1]$, \begin{equation} \label{Gronwall-bound-0} \int_{s}^{t} \left( \Vert u_{t}(\tau )\Vert ^{2} + \Vert w_{t}(\tau )\Vert ^{2} + \varepsilon\|\theta(\tau)\|^2_{L^2(\Gamma)} \right) {\mathrm{d}} \tau \le \frac{\eta }{2}(t - s) + Q_{\eta}(R), \end{equation} where $R>0$ is such that $\left\Vert \zeta _{0}\right\Vert _{\mathcal{H}_{\varepsilon }}\leq R,$ for all $\varepsilon\in(0,1]$. \end{lemma} The next result shows that the operators $Z_\varepsilon$ are uniformly decaying to zero in $\mathcal{H}_\varepsilon$, for each $\varepsilon\in(0,1]$. \begin{lemma} \label{t:uniform-decay} For each $\varepsilon \in (0,1]$ and $\zeta_{0}=(u_{0},u_{1},\delta_0,\delta_1) \in \mathcal{H}_{\varepsilon }$, there exists a unique global weak solution $\xi=(v,v_t,\gamma,\gamma_t)\in C([0,\infty );\mathcal{H}_{\varepsilon })$ to problem (\ref{pde-v}) satisfying \begin{equation} \label{bounded-boundary-3} \gamma_{t}\in L_{\mathrm{loc}}^{2}([0,\infty )\times \Gamma ). \end{equation} Moreover, for all $\zeta_0\in\mathcal{D}_\varepsilon$ with $\left\Vert\zeta_{0}\right\Vert _{\mathcal{H}_{\varepsilon }}\le R$ for all $\varepsilon \in (0,1]$, there is a constant $\omega_{5 \varepsilon} > 0$, depending on $\varepsilon $, as $\omega_{5\varepsilon}\sim\varepsilon$, and there is a positive monotonically increasing function $Q_\varepsilon(\cdot)\sim\varepsilon^{-1}$, such that, for all $t\geq 0$, \begin{equation} \label{uniform-decay} \Vert Z_{\varepsilon }(t)\zeta_{0}\Vert _{\mathcal{H}_{\varepsilon }} \le Q_\varepsilon(R)e^{-\omega_{5\varepsilon} t}. \end{equation} \end{lemma} The following lemma establishes the precompactness of the operators $K_{\varepsilon }$, for each $\varepsilon\in(0,1].$ \begin{lemma} \label{compact-a} For all $\varepsilon\in(0,1]$, and for each $R>0$ and $\zeta\in\mathcal{D}_\varepsilon$ such that $\|\zeta\|_{\mathcal{D}_\varepsilon}\le R$ for all $\varepsilon\in(0,1]$, there exist constants $\omega_{6\varepsilon},R_{2\varepsilon}>0$, both depending on $\varepsilon$, with $\omega_{6\varepsilon}\sim\varepsilon$ and $R_{2\varepsilon}\sim\varepsilon^{-1}$, in which, for all $t\ge0$, there holds \begin{equation} \label{a-exp-attr-1} \|K_\varepsilon(t)\zeta_0\|^2_{\mathcal{D}_\varepsilon} \le Q(R)e^{-\omega_{6\varepsilon}t} + R_{2\varepsilon}. \end{equation} \end{lemma} \subsection{Exponential attractors for Problem (A)} We now turn to the existence of exponential attractors for each $\varepsilon\in(0,1]$ for Problem (A). By \cite[Theorem 5]{Frigeri10}, we already know that Problem (A) with $\varepsilon=1$ admits an exponential attractor described by Theorem \ref{eaa} below. Moreover, the result for the perturbed case follows from \cite{Frigeri10} after suitable modifications to include $\varepsilon\in(0,1]$ appearing in the boundary condition (\ref{acoustic-boundary}). \begin{theorem} \label{eaa} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), and (\ref{g-reg-ass-1}) hold. For each $\varepsilon \in (0,1]$, the dynamical system $(S_\varepsilon,\mathcal{H}_\varepsilon)$ associated with Problem (A) admits an exponential attractor $\mathcal{M}_{\varepsilon }$ compact in $\mathcal{H}_{\varepsilon},$ and bounded in $\mathcal{D}_\varepsilon$. 
Moreover, for each $\varepsilon\in(0,1]$ fixed, there hold: (i) For each $t\geq 0$, $S_{\varepsilon }(t)\mathcal{M}_{\varepsilon}\subseteq \mathcal{M}_{\varepsilon }$. (ii) The fractal dimension of $\mathcal{M}_{\varepsilon }$ with respect to the metric $\mathcal{H}_{\varepsilon }$ is finite, namely, \begin{equation*} \dim_{\mathrm{F}}\left( \mathcal{M}_{\varepsilon },\mathcal{H}_{\varepsilon }\right) \leq C_{\varepsilon }<\infty, \end{equation*} for some positive constant $C$ depending on $\varepsilon$. (iii) There exists a positive constant $\nu_{1\varepsilon}>0$ and a nonnegative monotonically increasing function $Q_{\varepsilon}$ both depending on $\varepsilon$, and where $Q_\varepsilon\sim\varepsilon^{-1}$, such that, for all $t\geq 0$, \begin{equation*} {\mathrm{dist}} _{\mathcal{H}_{\varepsilon }}(S_{\varepsilon }(t)B,\mathcal{M}_{\varepsilon })\leq Q_{\varepsilon}(\Vert B\Vert _{\mathcal{H}_{\varepsilon }})e^{-\nu_{1\varepsilon} t}, \end{equation*} for every nonempty bounded subset $B$ of $\mathcal{H}_{\varepsilon }$. \end{theorem} As with Problem (R) above, the proof of Theorem \ref{eaa} will follow from the application of an abstract proposition reported specifically for our current case below (see, e.g., \cite[Proposition 1]{EMZ00}, \cite{FGMZ04}, \cite{GGMP05}). \begin{proposition} \label{abstract-a} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), and (\ref{g-reg-ass-1}) hold. Let $(S_{\varepsilon},\mathcal{H}_{\varepsilon}) $ be a dynamical system for each $\varepsilon >0$. Assume the following hypotheses hold: \begin{enumerate} \item[(H1)] There exists a bounded absorbing set $\mathcal{B}_{\varepsilon }^{1}\subset \mathcal{D}_{\varepsilon }$ which is positively invariant for $S_{\varepsilon }(t).$ More precisely, there exists a time $t_{2\varepsilon}>0,$ which \emph{depends} on $\varepsilon >0$, such that \begin{equation*} S_{\varepsilon }(t)\mathcal{B}_{\varepsilon }^{1}\subset \mathcal{B}_{\varepsilon }^{1} \end{equation*} for all $t\geq t_{2\varepsilon}$ where $\mathcal{B}_{\varepsilon }^{1}$ is endowed with the topology of $\mathcal{H}_{\varepsilon }.$ \item[(H2)] There is $t^{\ast }\geq t_{2\varepsilon}$ such that the map $S_{\varepsilon }(t^{\ast })$ admits the decomposition, for each $\varepsilon \in (0,1]$ and for all $\zeta_{0},\xi_{0}\in \mathcal{B}_{\varepsilon }^{1}$, \begin{equation*} S_{\varepsilon }(t^{\ast })\zeta_{0}-S_{\varepsilon }(t^{\ast })\xi_{0}=L_{\varepsilon }(\zeta_{0},\xi_{0})+R_{\varepsilon }(\zeta_{0},\xi_{0}) \end{equation*} where, for some constants $\alpha ^{\ast }\in (0,\frac{1}{2})$ and $\Lambda ^{\ast }=\Lambda ^{\ast }(\Omega ,t^{\ast })\geq 0$ with $\Lambda ^{\ast }$ depending on $\varepsilon >0$, the following hold: \begin{equation}\label{dd-l-a} \Vert L_{\varepsilon }(\zeta_{0},\xi_{0})\Vert _{\mathcal{H}_{\varepsilon }}\leq \alpha ^{\ast }\Vert \zeta_{0}-\xi_{0}\Vert _{\mathcal{H}_{\varepsilon }} \end{equation} and \begin{equation} \label{dd-k-a} \Vert R_{\varepsilon }(\zeta_{0},\xi_{0})\Vert _{\mathcal{D}_{\varepsilon }}\leq \Lambda ^{\ast }\Vert \zeta_{0}-\xi_{0}\Vert _{\mathcal{H}_{\varepsilon }}. \end{equation} \item[(H3)] The map \begin{equation*} (t,U)\mapsto S_{\varepsilon }(t)\zeta:[t^{\ast },2t^{\ast }]\times \mathcal{B}_{\varepsilon }^{1}\rightarrow \mathcal{B}_{\varepsilon }^{1} \end{equation*} is Lipschitz continuous on $\mathcal{B}_{\varepsilon }^{1}$ in the topology of $\mathcal{H}_{\varepsilon}$. 
\end{enumerate} Then, $(S_{\varepsilon },\mathcal{H}_{\varepsilon })$ possesses an exponential attractor $\mathcal{M}_{\varepsilon }$ in $\mathcal{B}_{\varepsilon }^{1}.$ \end{proposition} \begin{remark} As in the case for Problem (R), the basin of exponential attraction is indeed the entire phase space thanks to (\ref{a-exp-attr-1}) below. This in turn implies the exponential attraction of subsets of $\mathcal{B}^1_\varepsilon$ and the transitivity of exponential attraction (see Proposition \ref{t:exp-attr}). \end{remark} \begin{corollary} For each $\varepsilon\in(0,1]$, the global attractors of Theorem \ref{t:acoustic-global} are bounded in $\mathcal{D}_\varepsilon$. In addition, there holds \begin{equation*} \dim_{\mathrm{F}}(\mathcal{A}_\varepsilon,\mathcal{H}_\varepsilon)\leq \dim_{\mathrm{F}}(\mathcal{M}_\varepsilon,\mathcal{H}_\varepsilon)\le C_\varepsilon \end{equation*} for some constant $C>0$, depending on $\varepsilon.$ As a consequence, the global attractors $\mathcal{A}_\varepsilon$ are finite dimensional; however, the dimension of $\mathcal{M}_{\varepsilon }$ is not necessarily uniform with respect to $\varepsilon >0$. \end{corollary} To prove Theorem \ref{eaa}, we apply the abstract result expressed in Proposition \ref{abstract-a}. As a preliminary step, we make an observation on the energy equation (\ref{acoustic-energy-3}) associated with Problem (A). \begin{lemma} \label{t:to-H1} Conditions (H1), (H2) and (H3) hold for each fixed $\varepsilon \in (0,1]$. Moreover, for each $\varepsilon\in(0,1]$, and for each $R>0$ and $\zeta\in\mathcal{D}_\varepsilon$ such that $\|\zeta\|_{\mathcal{D}_\varepsilon}\le R$ for all $\varepsilon\in(0,1]$, there exist constants $\widetilde \omega_{6\varepsilon},\widetilde R_{2\varepsilon}>0$, both depending on $\varepsilon$, with $\widetilde\omega_{6\varepsilon}\sim\varepsilon$ and $\widetilde R_{2\varepsilon}\sim\varepsilon^{-1}$, in which, for all $t\ge0$, there holds \begin{equation} \label{a-exp-attr-2} \|S(t)\zeta_0\|^2_{\mathcal{D}_\varepsilon} \le Q(R)e^{-\widetilde\omega_{6\varepsilon}t/2} + \widetilde R_{2\varepsilon}. \end{equation} \end{lemma} \begin{remark} The ``time of entry'' of some bounded set $B\subset\mathcal{D}_\varepsilon$ into $\mathcal{B}^1_\varepsilon$ is given by \[ t_{2\varepsilon}=t_{2}(\|B\|_{\mathcal{H}_\varepsilon},R_\varepsilon) = \max\left\{ \frac{1}{\widetilde\omega_{6\varepsilon}}\ln\left( \frac{Q\left(\|B\|_{\mathcal{H}_\varepsilon}\right)}{R_\varepsilon - \widetilde R_{2\varepsilon}} \right),0 \right\} \] where $R_\varepsilon$ is the radius of the absorbing set $\mathcal{B}^1_\varepsilon$ in $\mathcal{D}_\varepsilon$. Furthermore, both $R_{2 \varepsilon}$ and $t_{2 \varepsilon} \rightarrow+\infty$ as $\varepsilon\rightarrow 0$. \end{remark} \section{The continuity of families of sets} This section contains a new abstract theorem (Theorem \ref{t:robustness}) concerning the upper-semicontinuity of a family of sets. The key assumption in the theorem involves a comparison of the semiflow corresponding to the unperturbed problem to the semiflow corresponding to the perturbation problem in the topology of the perturbation problem. The unperturbed problem is ``fitted'' into the phase space of the perturbed problem through the use of two maps, a {\em{canonical extension}} and {\em{lift}}. This approach for obtaining an upper-semicontinuous family of sets developed in this section is largely motivated by \cite{GGMP05}. 
In the setting presented in this paper, where the perturbation is isolated to the boundary condition, the upper-semicontinuity result is obtained for a broad range of families of sets and the overall analysis is much simpler. The result is then applied to Problem (R) and Problem (A). In the final part of this section, we also consider the difference between Problem (A) and Problem (R), whereby, this time, we {\em{project}} Problem (A) onto the phase space for Problem (R) for an estimate in $\mathcal{H}_0$. We now develop the abstract upper-semicontinuity theorem. \begin{definition} Given two bounded subsets $A$, $B$ in a Banach space $X$, the {\em Hausdorff (asymmetric) semidistance} between $A$ and $B$, in the topology of $X$, is defined by \[ {\mathrm{dist}} _X(A,B):=\sup_{a\in A}\inf_{b\in B}\|a-b\|_X. \] \end{definition} Suppose $X_0$ is a Banach space with the norm $\|\chi\|_{X_0}$, for all $\chi\in X_0$, and suppose $Y$ is a Banach space with the norm $\|\psi\|_Y$, for all $\psi\in Y$. For $\varepsilon\in(0,1]$, let $X_\varepsilon$ be the one-parameter family of Banach product spaces \[ X_\varepsilon=X_0\times Y \] with the $\varepsilon$-weighted norm whose square is given by \[ \|(\chi,\psi)\|^2_{X_\varepsilon}=\|\chi\|^2_{X_0} + \varepsilon\|\psi\|^2_{Y}. \] For each $\varepsilon\in(0,1]$, let $S_\varepsilon$ be a semiflow on $X_\varepsilon$ and let $S_0$ be a semiflow on $X_0$. Let $\Pi:X_\varepsilon\rightarrow X_0$ be the {\em{projection}} from $X_\varepsilon$ onto $X_0$; for every subset $B_\varepsilon\subset X_\varepsilon$, \[ \Pi B_\varepsilon=B_0\subset X_0 \] (recall that the correspondence between Problem (A) and Problem (R) given by the projection is indicated in Remark \ref{key}). Next, define the ``lift'' map, which maps sets $B_0\subset X_0$ to sets in the product $X_\varepsilon$. With the lift map it is possible to measure the semidistance between sets from $X_0$ and sets in $X_\varepsilon$, using the topology of $X_\varepsilon$. \begin{definition} Given a map $\mathcal{E}:X_0\rightarrow Y$, locally Lipschitz in $X_0$, the map $\mathcal{L}:X_0\rightarrow X_\varepsilon$ defined by $\chi\mapsto (\chi,\mathcal{E}\chi)$ is called a {\em{lift}} map. The map $\mathcal{E}$ is called a {\em{canonical extension}}. If $B_0$ is a bounded subset of $X_0$, the set $\mathcal{E}B_0\subset Y$ is called the canonical extension of $B_0$ into $Y$, and the set \[ \mathcal{L}B_0=\{(\chi,\psi)\in X_\varepsilon : \chi\in B_0, \ \psi\in \mathcal{E}B_0\} \] is called the lift of $B_0$ into $X_\varepsilon$. \end{definition} What follows is a general description of the type of families that will be shown to be upper-semicontinuous in $X_\varepsilon$. Let $W_0$ be a bounded subset of $X_0$, and let $S_0$ be a semiflow on $X_0$. For each $\varepsilon\in(0,1]$ let $W_\varepsilon$ be a bounded subset of $X_\varepsilon$, and let $S_\varepsilon$ be a semiflow on $X_\varepsilon$. Let $T>0$ and define the sets \begin{equation}\label{set-family-0} \mathcal{U}_0 = \bigcup_{t\in[0,T]} S_0(t)W_0 \end{equation} and \begin{equation}\label{set-family-1} \mathcal{U}_\varepsilon = \bigcup_{t\in[0,T]} S_\varepsilon(t)W_\varepsilon. \end{equation} Define the family of sets $(\mathbb{U}_\varepsilon)_{\varepsilon\in[0,1]}$ in $X_\varepsilon$ by \begin{equation}\label{set-family-2} \mathbb{U}_\varepsilon=\left\{ \begin{array}{ll} \mathcal{U}_\varepsilon & 0<\varepsilon\leq 1 \\ \mathcal{L}\mathcal{U}_0 & \varepsilon=0. \end{array} \right.
\end{equation} \begin{remark} The sets $W_0$ and $W_\varepsilon$ are not assumed to be absorbing or positively invariant. \end{remark} \begin{theorem}\label{t:robustness} Suppose that the semiflow $S_0$ is Lipschitz continuous on $X_0$, uniformly in $t$ on compact intervals. Suppose the lift map $\mathcal{L}$ satisfies the following: for $T>0$ and for any bounded set $B_\varepsilon$ in $X_\varepsilon$, there exists $M=M(\|B_\varepsilon\|_{X_\varepsilon})>0$, depending on $B_\varepsilon$, $\rho\in(0,1]$, both independent of $\varepsilon$, such that for all $t\in [0,T]$ and $(\chi,\psi)\in B_\varepsilon$, \begin{equation}\label{robustness} \|S_\varepsilon(t)(\chi,\psi)-\mathcal{L}S_0(t)\Pi(\chi,\psi)\|_{X_\varepsilon}\leq M\varepsilon^\rho. \end{equation} Then the family of sets $(\mathbb{U}_\varepsilon)_{\varepsilon\in[0,1]}$ is upper-semicontinuous in the topology of $X_\varepsilon$; precisely, \[ {\rm dist}_{X_\varepsilon}(\mathbb{U}_\varepsilon,\mathbb{U}_0) \le M\varepsilon^\rho. \] \end{theorem} \begin{proof} To begin, \[ {\mathrm{dist}} _{\mathcal{H}_\varepsilon}(\mathbb{U}_\varepsilon,\mathbb{U}_0) = \sup_{a\in\mathcal{U}_\varepsilon}\inf_{b\in\mathcal{L}\mathcal{U}_0}\|a-b\|_{X_\varepsilon}. \] Fix $t\in[0,T]$ and $\alpha\in W_\varepsilon$ so that $a=S_\varepsilon(t)\alpha\in\mathcal{U}_\varepsilon$. Then \begin{align} \inf_{b\in\mathcal{L}\mathcal{U}_0}\|a-b\|_{X_\varepsilon} & = \inf_{\substack{\tau\in [0,T] \\ \theta\in W_0}}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(\tau)\theta\|_{X_\varepsilon} \notag \\ & \leq \inf_{\theta\in W_0}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon}. \notag \end{align} Since $S_\varepsilon(t)\alpha=a$, \begin{align} \sup_{\alpha\in W_\varepsilon}\inf_{b\in\mathcal{L}\mathcal{U}_0}\|S_\varepsilon(t)\alpha-b\|_{X_\varepsilon} & \leq \sup_{\alpha\in W_\varepsilon}\inf_{\theta\in W_0}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \notag \\ & = {\mathrm{dist}} _{X_\varepsilon}(S_\varepsilon(t)W_\varepsilon,\mathcal{L}S_0(t)W_0) \notag \\ & \leq \max_{t\in [0,T]} {\mathrm{dist}} _{X_\varepsilon}(S_\varepsilon(t)W_\varepsilon,\mathcal{L}S_0(t)W_0). \notag \end{align} Thus, \begin{align} \sup_{t\in [0,T]}\sup_{\alpha\in W_\varepsilon}\inf_{b\in\mathcal{L}\mathcal{U}_0} \|S_\varepsilon(t)\alpha-b\|_{X_\varepsilon} \leq \max_{t\in [0,T]} {\mathrm{dist}} _{X_\varepsilon}(S_\varepsilon(t)W_\varepsilon,\mathcal{L}S_0(t)W_0), \notag \end{align} and \begin{align} \sup_{a\in\mathcal{U}_\varepsilon}\inf_{b\in\mathcal{L}\mathcal{U}_0}\|a-b\|_{X_\varepsilon} & \leq \sup_{t\in [0,T]}\sup_{\alpha\in W_\varepsilon}\inf_{b\in\mathcal{L}\mathcal{U}_0} \|S_\varepsilon(t)\alpha-b\|_{X_\varepsilon} \notag \\ & \leq \max_{t\in [0,T]} {\mathrm{dist}} _{X_\varepsilon}(S_\varepsilon(t) W_\varepsilon,\mathcal{L}S_0(t)W_0) \notag \\ & \leq \max_{t\in [0,T]} \sup_{\alpha\in W_\varepsilon}\inf_{\theta\in W_0}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon}. \notag \end{align} The norm is then expanded \begin{align} \|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \leq \|S_\varepsilon(t)\alpha & - \mathcal{L}S_0(t)\Pi\alpha\|_{X_\varepsilon} \notag \\ & + \|\mathcal{L}S_0(t)\Pi\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \label{4-1} \end{align} so that by the assumption described in (\ref{robustness}), there is a constant $M>0$ such that for all $t\in[0,T]$ and for all $\alpha\in W_\varepsilon$, \[ \|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\Pi\alpha\|_{X_\varepsilon} \leq M\varepsilon^\rho. 
\] Expand the square of the norm on the right-hand side of (\ref{4-1}) to obtain, for $\Pi\alpha=\Pi(\alpha_1,\alpha_2)=\alpha_1\in X_0$ and $\theta\in X_0$, \begin{equation}\label{triangle-r} \|\mathcal{L}S_0(t)\Pi\alpha-\mathcal{L}S_0(t)\theta\|^2_{X_\varepsilon} = \|S_0(t)\Pi\alpha-S_0(t)\theta\|^2_{X_0} + \varepsilon\|\mathcal{E}S_0(t)\Pi\alpha - \mathcal{E}S_0(t)\theta\|^2_Y. \end{equation} By the local Lipschitz continuity of $\mathcal{E}$ on $X_0$, and by the local Lipschitz continuity of $S_0$ on $X_0$, there is $L>0$, depending on $W_0$, but independent of $\varepsilon$, such that (\ref{triangle-r}) can be estimated by \[ \|\mathcal{L}S_0(t)\Pi\alpha-\mathcal{L}S_0(t)\theta\|^2_{X_\varepsilon} \leq L^2(1+\varepsilon)\|\Pi\alpha-\theta\|^2_{X_0}. \] Hence, (\ref{4-1}) becomes \[ \|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \leq M\varepsilon^\rho + L\sqrt{1+\varepsilon}\|\Pi\alpha-\theta\|_{X_0} \] and \[ \inf_{\theta\in W_0}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \leq M\varepsilon^\rho + L\sqrt{1+\varepsilon}\inf_{\theta\in W_0}\|\Pi\alpha-\theta\|_{X_0}. \] Since $\Pi\alpha\in\Pi W_\varepsilon=W_0$, it is possible to choose $\theta=\Pi\alpha\in W_0$, so the last infimum vanishes. Therefore, as $t\in[0,T]$ and $\alpha\in W_\varepsilon$ were arbitrary, \[ {\mathrm{dist}} _{X_\varepsilon}(\mathbb{U}_\varepsilon,\mathbb{U}_0) \le \sup_{t\in[0,T]}\sup_{\alpha\in W_\varepsilon}\inf_{\theta\in W_0}\|S_\varepsilon(t)\alpha-\mathcal{L}S_0(t)\theta\|_{X_\varepsilon} \leq M\varepsilon^\rho. \] This establishes the upper-semicontinuity of the sets $\mathbb{U}_\varepsilon$ in $X_\varepsilon$. \end{proof} \begin{remark} The upper-semicontinuity result given in Theorem \ref{t:robustness} is reminiscent of robustness results (cf. \cite{GGMP05}) insofar as we obtain explicit control over the semidistance in terms of the perturbation parameter $\varepsilon$. \end{remark} \subsection{The upper-semicontinuity of the family of global attractors for the model problems} The goal of this section is to show that the assumptions of Theorem \ref{t:robustness} are met. The conclusion is that the family of global attractors for the model problems is upper-semicontinuous. First, in the notation of the previous section, $X_0=\mathcal{H}_0$, $Y=L^2(\Gamma)\times L^2(\Gamma)$, and $X_\varepsilon=X_0\times Y=\mathcal{H}_\varepsilon$. Recall that by equation (\ref{S0-Lipschitz-continuous}), $S_0$ is locally Lipschitz continuous on $\mathcal{H}_0$. Define the projection $\Pi:\mathcal{H}_\varepsilon\rightarrow\mathcal{H}_0$ by \[ \Pi(u,v,\delta,\gamma) = (u,v); \] thus, for every subset $E_\varepsilon\subset\mathcal{H}_\varepsilon$, $\Pi E_\varepsilon=E_0\subset\mathcal{H}_0$. Define the canonical extension $\mathcal{E}:\mathcal{H}_0\rightarrow L^2(\Gamma)\times L^2(\Gamma)$ by, for all $(u,v)\in\mathcal{H}_0$, \[ \mathcal{E}(u,v)=(0,-u). \] Clearly, $\mathcal{E}$ is locally Lipschitz on $\mathcal{H}_0$. Then the lift map, $\mathcal{L}:\mathcal{H}_0\rightarrow\mathcal{H}_\varepsilon$, is defined, for any bounded set $E_0$ in $\mathcal{H}_0$, by \[ \mathcal{L}E_0:=\{(u,v,\delta,\gamma)\in\mathcal{H}_\varepsilon:(u,v)\in E_0, \delta=0, \gamma=-u \}. \] Recall that $\gamma=\delta_t$ in the sense of distributions, so the Robin boundary condition---written in terms of $u_t$---is recovered by differentiating the last equation with respect to $t$; hence, $\delta_{tt}=-u_t$. The condition $\delta=0$ highlights the drop in rank that the Robin problem possesses when $\varepsilon=0$ compared to the acoustic boundary condition when $\varepsilon>0$.
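The local Lipschitz property of $\mathcal{E}$ asserted above can be made explicit using only the trace theorem; we record the elementary estimate, since it is all that the lift requires: for $(u,v),(\bar u,\bar v)\in\mathcal{H}_0$,
\begin{equation*}
\|\mathcal{E}(u,v)-\mathcal{E}(\bar u,\bar v)\|_{Y}
=\|u-\bar u\|_{L^2(\Gamma)}
\le C_{\mathrm{tr}}\|u-\bar u\|_{H^1(\Omega)}
\le C_{\mathrm{tr}}\|(u,v)-(\bar u,\bar v)\|_{\mathcal{H}_0},
\end{equation*}
where $C_{\mathrm{tr}}>0$ denotes the norm of the trace map $H^1(\Omega)\rightarrow L^2(\Gamma)$ and $Y=L^2(\Gamma)\times L^2(\Gamma)$ carries its natural product norm. In particular, no regularity beyond membership in $\mathcal{H}_0$ is required of a set $E_0$ for its lift $\mathcal{L}E_0$ to be well-defined and bounded in $\mathcal{H}_\varepsilon$.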
The main result of the section is \begin{theorem} \label{upper} Assume (\ref{f-assumption-1}), (\ref{f-assumption-2}), and (\ref{g-reg-ass-1}) hold. Let $\mathcal{A}_0$ denote the global attractor corresponding to Problem (R) and for each $\varepsilon\in(0,1]$, let $\mathcal{A}_\varepsilon$ denote the global attractor corresponding to Problem (A). The family of global attractors $(\mathbb{A}_\varepsilon)_{\varepsilon\in[0,1]}$ in $\mathcal{H}_\varepsilon$ defined by \[ \mathbb{A}_\varepsilon=\left\{ \begin{array}{ll} \mathcal{A}_\varepsilon & 0<\varepsilon\leq 1 \\ \mathcal{L}\mathcal{A}_0 & \varepsilon=0. \end{array} \right. \] is upper-semicontinuous in $\mathcal{H}_\varepsilon$, with explicit control over semi-distances in terms of $\varepsilon$. (Note: we are not claiming that $\mathcal{LA}_0$ is a global attractor for Problem (R) in $\mathcal{H}_\varepsilon.$) \end{theorem} The following claim establishes the assumption made in equation (\ref{robustness}). The claim indicates that trajectories on $\mathcal{A}_0$ and $\mathcal{A}_\varepsilon$, with the same initial data, may be estimated, on compact time intervals and in the topology of $\mathcal{H}_\varepsilon$, by a constant depending on the radii of the absorbing sets $\mathcal{B}_\varepsilon$ and by the perturbation parameter $\varepsilon$ to some power (recall Remark \ref{r:time-1}, these radii are bounded independent of $\varepsilon$). \begin{lemma} \label{compare} Let $T>0$. There is a constant $\Lambda_1>0$, independent of $\varepsilon$, such that, for all $t\in[0,T]$ and for all $\zeta_0\in \mathcal{A}_\varepsilon$, \begin{equation} \label{robust-7} \|S_\varepsilon(t)\zeta_0 - \mathcal{L}S_0(t)\Pi\zeta_0\|_{\mathcal{H}_\varepsilon} \le \Lambda_1\sqrt{\varepsilon}. \end{equation} \end{lemma} \begin{proof} Let $u$ denote the weak solution of Problem (A) corresponding to the initial data $\zeta_0=(u_0,u_1,\delta_0,\delta_1)\in \mathcal{A}_\varepsilon$, and let $\bar u$ denote the weak solution of Problem (R) corresponding to the initial data $\Pi\zeta_0=(u_0,u_1)\in \mathcal{A}_0$. Rewrite the Robin boundary condition as the following system in $\bar u$ and $\bar\delta$, \begin{equation}\label{Robin-system-1} \left\{\begin{array}{l} \bar\delta_t = -\bar u \\ \bar\delta_t = \partial_{\bf{n}}{\bar{u}}. \end{array}\right. \end{equation} To compare the Robin problem with the acoustic problem in $\mathcal{H}_\varepsilon$, the first equation is differentiated with respect to $t$ and the corresponding system is equipped with initial conditions, \[ \left\{\begin{array}{l} \bar\delta_{tt} = -\bar u_t \\ \bar\delta_t = \partial_{\bf{n}}{\bar{u}} \\ \bar\delta(0,\cdot)=0, \ \bar\delta_t(0,\cdot)=-u_0. \end{array}\right. \] Observe that through the definition of the lift map, \[ (\bar\delta(0,\cdot),\bar\delta_t(0,\cdot))=\mathcal{E}(u(0,\cdot),u_t(0,\cdot))=(0,-u_0). \] Let $z=u-\bar u$ and $w=\delta-\bar\delta$; hence, $z$ and $w$ satisfy the system \begin{equation}\label{z-difference} \left\{\begin{array}{ll} z_{tt} + z_t - \Delta z + z + f(u) - f(\bar u) = 0 & \text{in} \ (0,\infty)\times\Omega \\ z(0,\cdot)=0, \ z_t(0,\cdot)=0 & \text{on} \ \{0\}\times\Omega \\ w_{tt} + \varepsilon(w_t + w) - \varepsilon(\bar\delta_t + \bar\delta) = -z_t & \text{on} \ (0,\infty)\times\Gamma \\ w_t = \partial_{\bf{n}}z & \text{on} \ (0,\infty)\times\Gamma \\ \varepsilon w(0,\cdot)=\varepsilon\delta_0, \ w_t(0,\cdot)=\varepsilon(\delta_1+u_0) & \text{on} \ \{0\}\times\Gamma. \end{array}\right. 
\end{equation} Multiply equation (\ref{z-difference})$_1$ by $2z_t$ in $L^2(\Omega)$ to obtain \begin{align} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} & \|z_t\|^2 + 2\|z_t\|^2 + \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\|\nabla z\|^2 - 2\left\langle \partial_{\bf{n}}z,z_t \right\rangle_{L^2(\Gamma)} \notag \\ & + \frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\|z\|^2 + 2\langle f(u)-f(\bar u),z_t \rangle = 0. \label{robust-1} \end{align} Multiply equation (\ref{z-difference})$_3$ by $2w_t$ in $L^2(\Gamma)$, to obtain \begin{align} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} & \|w_t\|^2_{L^2(\Gamma)} + 2\varepsilon\|w_t\|^2_{L^2(\Gamma)} + \varepsilon\frac{ {\mathrm{d}} }{ {\mathrm{d}} t}\|w\|^2_{L^2(\Gamma)} \notag \\ & - 2\varepsilon\langle \bar\delta_t+\bar\delta, w_t \rangle_{L^2(\Gamma)} = - 2\langle z_t, w_t \rangle_{L^2(\Gamma)}. \label{robust-2} \end{align} Since $w_t=\partial_{\bf{n}}z$ on the boundary $\Gamma$, then $ - 2\langle \partial_{\bf{n}}z, z_t \rangle_{L^2(\Gamma)} = -2\langle z_t, w_t \rangle_{L^2(\Gamma)}$. Hence, summing equations (\ref{robust-1}) and (\ref{robust-2}) gives, for almost all $t\ge0,$ \begin{align} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} & \left\{ \|z\|^2_1 + \|z_t\|^2 + \varepsilon\|w\|^2_{L^2(\Gamma)} + \|w_t\|^2_{L^2(\Gamma)} \right\} + 2\|z_t\|^2 + 2\varepsilon\|w_t\|^2_{L^2(\Gamma)} \notag \\ & - 2\varepsilon\langle \bar\delta_t+\bar\delta, w_t \rangle_{L^2(\Gamma)} + 2\langle f(u)-f(\bar u),z_t \rangle = 0. \label{robust-3} \end{align} Estimating the first product yields, \[ 2\varepsilon|\langle \bar\delta_t+\bar\delta,w_t \rangle_{L^2(\Gamma)}| \leq 2\varepsilon^2(\|\bar\delta_t\|^2_{L^2(\Gamma)}+\|\bar\delta\|^2_{L^2(\Gamma)}) + \|w_t\|^2_{L^2(\Gamma)}. \] From (\ref{Robin-system-1}), by comparing directly with the solution of the Problem (R), we find that $\bar u\in L^2([0,\infty);H^1(\Omega))$, so by virtue of the trace map, $\bar u_{\mid\Gamma}=\bar\delta_t\in L^2([0,\infty);H^{1/2}(\Gamma)) \hookrightarrow L^2([0,\infty);L^2(\Gamma))$. By the definition of weak solution for the Robin problem, the map $t\mapsto\|\bar u(t)\|^2_{L^2(\Gamma)}$ is continuous. So as $\bar\delta_t(t)=-\bar u(t)$ in $L^2(\Gamma)$, the auxiliary term $\|\bar\delta(t)\|^2_{L^2(\Gamma)}$ is bounded, uniformly in $t$ on compact intervals. Since the global attractor $\mathcal{A}_0$ is bounded in $\mathcal{B}_0$, the maps $t\mapsto\|\bar\delta(t)\|_{L^2(\Gamma)}$ and $t\mapsto\|\bar\delta_t(t)\|_{L^2(\Gamma)}$ are bounded, uniformly in $t$ and $\varepsilon\in(0,1]$, by a the radius of $\mathcal{B}_0$, $R_0$. Thus, there is a constant $C=C(R_0)>0$, independent of $\varepsilon$, such that, for all $t\in [0,T]$, \begin{align} 2\varepsilon|\langle \bar\delta_t+\bar\delta,w_t \rangle_{L^2(\Gamma)}| & \le \varepsilon^2\cdot C(R_0) + \|w_t\|^2_{L^2(\Gamma)}. 
\label{robust-4} \end{align} Now estimate the remaining product using the local Lipschitz continuity of $f$, \begin{equation}\label{robust-5} 2|\langle f(u)-f(\bar u),z_t \rangle| \leq C_\Omega\|z\|^2_1 + \|z_t\|^2, \end{equation} where $C_\Omega$ is due to the continuous embedding $H^1(\Omega)\hookrightarrow L^6(\Omega).$ Combining (\ref{robust-3}) (after omitting the two positive terms $2\|z_t\|^2 + 2\varepsilon\|w_t\|^2_{L^2(\Gamma)}$), (\ref{robust-4}), and (\ref{robust-5}), and adding the terms $\varepsilon\|w\|^2_{L^2(\Gamma)} + \|w_t\|^2_{L^2(\Gamma)}$ to the right-hand side, produces the differential inequality, which holds for almost all $t\in[0,T]$, \begin{align} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} & \left\{ \|z\|^2_1 + \|z_t\|^2 + \varepsilon\|w\|^2_{L^2(\Gamma)} + \|w_t\|^2_{L^2(\Gamma)} \right\} \notag \\ & \leq \varepsilon^2\cdot C(R_0) + C_\Omega(\|z\|^2_1 + \|z_t\|^2 + \varepsilon\|w\|^2_{L^2(\Gamma)} + \|w_t\|^2_{L^2(\Gamma)}). \notag \end{align} Applying Gronwall's inequality on the compact interval $[0,T]$ (and absorbing harmless constants into $C(R_0)$) yields, for all $t\in[0,T]$, \begin{align} & \|z(t)\|^2_1 + \|z_t(t)\|^2 + \varepsilon\|w(t)\|^2_{L^2(\Gamma)} + \|w_t(t)\|^2_{L^2(\Gamma)} \notag \\ & \leq e^{C_\Omega T}(\|z(0)\|^2_1 + \|z_t(0)\|^2 + \varepsilon\|w(0)\|^2_{L^2(\Gamma)} + \|w_t(0)\|^2_{L^2(\Gamma)}) \notag \\ & + \varepsilon^2\cdot C(R_0) \left( e^{C_\Omega T}-1 \right). \label{robust-6} \end{align} Because of the initial conditions, $z(0)=z_t(0)=0$, \[ \varepsilon\|w(0)\|^2_{L^2(\Gamma)} = \varepsilon\|\delta_0\|^2_{L^2(\Gamma)} \quad \text{and}\quad \|w_t(0)\|^2_{L^2(\Gamma)}=\varepsilon\|\delta_1+u_0\|^2_{L^2(\Gamma)}. \] Since the initial condition $\zeta_0=(u_0,u_1,\delta_0,\delta_1)$ belongs to the bounded attractor $\mathcal{A}_\varepsilon$, $\|w_t(0)\|_{L^2(\Gamma)}\le \varepsilon\cdot R_1$. Thus, inequality (\ref{robust-6}) can be written as \begin{equation}\label{final-1} \|z(t)\|^2_1 + \|z_t(t)\|^2 + \varepsilon\|w(t)\|^2_{L^2(\Gamma)} + \|w_t(t)\|^2_{L^2(\Gamma)} \le \varepsilon\cdot C(R_0,R_1,\Omega). \end{equation} To show (\ref{robust-7}) as claimed, recall that \begin{align} \|S_\varepsilon(t)\zeta_0-\mathcal{L}S_0(t)\Pi\zeta_0\|^2_{\mathcal{H}_\varepsilon} & \notag \\ = \|z(t)\|^2_1 & + \|z_t(t)\|^2 + \varepsilon\|\delta(t)\|^2_{L^2(\Gamma)} + \|\delta_t(t)+\bar u(t)\|^2_{L^2(\Gamma)}. \label{final-2} \end{align} The last two terms are estimated from above by \begin{equation} \label{final-3} \varepsilon\|\delta(t)\|^2_{L^2(\Gamma)} = \varepsilon\|\delta(t)-\bar\delta(t)+\bar\delta(t)\|^2_{L^2(\Gamma)} \leq 2\varepsilon\|w(t)\|^2_{L^2(\Gamma)} + 2\varepsilon\|\bar\delta(t)\|^2_{L^2(\Gamma)}, \end{equation} and \begin{align} \|\delta_t(t)+\bar u(t)\|^2_{L^2(\Gamma)} & = \|\delta_t(t)-\bar\delta_t(t)+\bar\delta_t(t)+\bar u(t)\|^2_{L^2(\Gamma)} \notag \\ & \leq 2\|w_t(t)\|^2_{L^2(\Gamma)} + 2\|\bar\delta_t(t)+\bar u(t)\|^2_{L^2(\Gamma)}. \label{final-4} \end{align} It follows from (\ref{Robin-system-1}) that, on $\Gamma$, $\bar\delta_t(t)=-\bar u(t)$ so $\|\bar\delta_t(t) + \bar u(t)\|^2_{L^2(\Gamma)}\equiv 0$. Combining inequalities (\ref{final-2}), (\ref{final-3}), and (\ref{final-4}) with (\ref{final-1}), and recalling the uniform bound on $\|\bar\delta(t)\|_{L^2(\Gamma)}$ established above, we arrive at \[ \|S_\varepsilon(t)\zeta_0-\mathcal{L}S_0(t)\Pi\zeta_0\|^2_{\mathcal{H}_\varepsilon} \le \varepsilon\cdot C(R_0,R_1,\Omega). \] This establishes equation (\ref{robust-7}). \end{proof} The proof of the main result now follows from a direct application of Theorem \ref{t:robustness} to the model problem.
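For the reader's convenience, we spell out the identifications used in this application of Theorem \ref{t:robustness}: take
\begin{equation*}
X_0=\mathcal{H}_0,\qquad Y=L^2(\Gamma)\times L^2(\Gamma),\qquad X_\varepsilon=\mathcal{H}_\varepsilon,\qquad W_0=\mathcal{A}_0,\qquad W_\varepsilon=\mathcal{A}_\varepsilon.
\end{equation*}
Then Lemma \ref{compare} verifies hypothesis (\ref{robustness}) with $M=\Lambda_1$ and $\rho=\frac{1}{2}$, so Theorem \ref{t:robustness} yields
\begin{equation*}
{\mathrm{dist}}_{\mathcal{H}_\varepsilon}(\mathbb{A}_\varepsilon,\mathbb{A}_0)\le \Lambda_1\sqrt{\varepsilon}.
\end{equation*}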
Theorem \ref{t:robustness} may actually be applied to any family of sets that may be described by (\ref{set-family-0})-(\ref{set-family-2}), which includes the family of global attractors found above. However, since the bound on the exponential attractors is {\em{not}} uniform in $\varepsilon$, this upper-semicontinuity result cannot be applied to the corresponding family of exponential attractors. \begin{proof}[Proof of Theorem \ref{upper}] Because of the invariance of the global attractors, setting $W_0=\mathcal{A}_0$ in equation (\ref{set-family-0}) and setting $W_\varepsilon=\mathcal{A}_\varepsilon$ in equation (\ref{set-family-1}) produces, respectively, $\mathcal{U}_0=\mathcal{A}_0$ and $\mathcal{U}_\varepsilon=\mathcal{A}_\varepsilon$; the claim then follows from Lemma \ref{compare} and Theorem \ref{t:robustness}. \end{proof} We conclude this section with some remarks and final observations. \begin{remark} One can see from (\ref{final-3}) that the continuity result in Theorem \ref{upper} depends on the topology of $\mathcal{H}_\varepsilon$. Conversely, the result in \cite{Hale&Raugel88} holds in the corresponding topology with $\varepsilon=1$ fixed. But recall, the argument made in their work requires more regularity from the solutions. \end{remark} In the same spirit as Lemma \ref{compare}, we also obtain the following explicit estimate for the difference of two trajectories originating from corresponding initial data, whereby we project Problem (A) onto the phase space for Problem (R). The result shows that the first two components of the solution to Problem (A) converge to the solution to Problem (R) as $\varepsilon\to 0$, starting with some fixed initial data. \begin{lemma} Let $T>0$. There is a constant $\Lambda_2>0$, independent of $\varepsilon$, such that, for all $t\in[0,T]$ and for all $\zeta_0\in \mathcal{A}_\varepsilon$, \begin{equation} \label{robust-7-70} \|\Pi S_\varepsilon(t)\zeta_0 - S_0(t)\Pi\zeta_0\|_{\mathcal{H}_0} \le \Lambda_2\sqrt{\varepsilon}. \end{equation} \end{lemma} \begin{proof} Let us here consider the difference between Problem (R) and Problem (A) whereby this time we project Problem (A) onto the phase space for Problem (R). The {\em{projected Problem (A)}} is obtained from equations (\ref{damped-wave-equation})-(\ref{Robin-initial-conditions}) and (\ref{acoustic-boundary}), and (\ref{acoustic-initial-conditions2}), \begin{equation} \label{proj-A} \left\{\begin{array}{ll} u_{tt} + u_t - \Delta u + u + f(u) = 0 & \text{in} \ (0,\infty)\times\Omega \\ u(0,\cdot)=u_0, \ u_t(0,\cdot)=u_1 & \text{on} \ \{0\}\times\Omega \\ \delta_{tt} = -u_t & \text{on} \ (0,\infty)\times\Gamma \\ \delta_t = \partial_{\bf{n}}u & \text{on} \ (0,\infty)\times\Gamma \\ \delta_t(0,\cdot)=\varepsilon\delta_1-(1-\varepsilon)u_0 & \text{on} \ \{0\}\times\Gamma. \end{array}\right. \end{equation} The associated solution operator is denoted $\Pi S_\varepsilon(t).$ (It is important to note that the initial data for the projected Problem (A) is not projected, only the corresponding solution is.) Observe that the final three equations, (\ref{proj-A})$_3$-(\ref{proj-A})$_5$, may be reduced: differentiating (\ref{proj-A})$_4$ with respect to $t$ and using (\ref{proj-A})$_3$ gives $\partial_t(\partial_{\bf{n}}u+u)=\delta_{tt}+u_t=0$ on $\Gamma$, so $\partial_{\bf{n}}u+u$ is independent of $t$; evaluating at $t=0$ with (\ref{proj-A})$_5$ yields \[ \partial_{\bf{n}}u+u=\phi(x) \quad \text{on}~\Gamma, \quad \text{where} \quad \phi=\varepsilon(\delta_1+u_0). \] Let $u$ denote the weak solution of Problem (A) corresponding to the initial data $\zeta_0=(u_0,u_1,\delta_0,\delta_1)\in \mathcal{A}_\varepsilon$, and let $\bar u$ denote the weak solution of Problem (R) corresponding to the initial data $\Pi\zeta_0=(u_0,u_1)\in \mathcal{A}_0$.
Again, rewriting the Robin boundary condition as in (\ref{Robin-system-1}) and letting $z=u-\bar u$ and $w=\delta-\bar\delta$, we find that, this time, $z$ satisfies the system \begin{equation} \label{z-difference-70} \left\{\begin{array}{ll} z_{tt} + z_t - \Delta z + z + f(u) - f(\bar u) = 0 & \text{in} \ (0,\infty)\times\Omega \\ z(0,\cdot)=0, \ z_t(0,\cdot)=0 & \text{on} \ \{0\}\times\Omega \\ \partial_{\bf{n}}z + z = \varepsilon\phi(x) & \text{on} \ (0,\infty)\times\Gamma. \end{array}\right. \end{equation} At this point it is easy to check that, by virtue of the continuous dependence argument for Problem (R) (cf. (\ref{continuous-dependence})), for almost all $t\ge0$ there holds, with $\bar\varphi:=(z,z_t)$, \begin{equation} \label{proj-p-1} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} \|\bar\varphi\|^2_{\mathcal{H}_0} \le C \|\bar\varphi\|^2_{\mathcal{H}_0} + \varepsilon\|\phi(x)\|^2. \end{equation} Thus, by Gronwall's inequality, we find that there is a constant $\Lambda_2>0$ such that, for all $t\in[0,T]$, (\ref{robust-7-70}) holds. \end{proof} \begin{remark} At this point it is worth contrasting the above results with what we know of the following related perturbation problem where, for $\varepsilon\in(0,1],$ \begin{equation} \label{abc2} \left\{ \begin{array}{ll} \varepsilon\delta_{tt}+\delta_t+\delta=-u_t \\ \delta_t=\partial_{\bf{n}}u & \quad\text{on}\quad (0,\infty)\times\Gamma. \end{array} \right. \end{equation} The limit problem is $\partial_{\bf{n}}u+\delta=-u_t$, which could be compared with the dynamic boundary condition \begin{equation} \label{dybc} \partial_{\bf{n}}\bar u+\bar u_t=0 \quad\text{on}\quad (0,\infty)\times\Gamma, \end{equation} as found in \cite[Equation (1.4)]{Gal12}. Writing (\ref{dybc}) as the system \begin{equation} \label{dybc2} \left\{ \begin{array}{ll} \bar\delta_t=-\bar u_t \\ \bar\delta_t=\partial_{\bf{n}}\bar u & \quad\text{on}\quad (0,\infty)\times\Gamma. \end{array} \right. \end{equation} allows us to estimate the difference once the initial conditions \[ \zeta(0)=\zeta_0=(u_0,u_1,\varepsilon\delta_0,\delta_1)\in\mathcal{H}_\varepsilon, \] and \[ \mathcal{L}\Pi\zeta_0=(u_0,u_1,\varepsilon u_0,-u_1) \] are taken into account. Letting $z=u-\bar u$ and $w=\delta-\bar\delta$ as above, we find \begin{equation} \label{z-difference2} \left\{\begin{array}{ll} z_{tt} + z_t - \Delta z + z + f(u) - f(\bar u) = 0 & \text{in} \quad (0,\infty)\times\Omega \\ z(0,\cdot)=0, \ z_t(0,\cdot)=0 & \text{at} \quad \{0\}\times\Omega \\ \varepsilon w_{tt} + w_t + w= -z_t - \bar\delta - \varepsilon\bar\delta_{tt} & \text{on} \quad (0,\infty)\times\Gamma \\ w_t = \partial_{\bf{n}}z & \text{on} \quad (0,\infty)\times\Gamma \\ w(0,\cdot)=\varepsilon(\delta_0-u_0), \ \varepsilon w_t(0,\cdot)=\varepsilon(\delta_1+u_1) & \text{at} \quad \{0\}\times\Gamma. \end{array}\right. \end{equation} The first problem that arises concerns the bound on the term $\bar\delta_{tt}$, uniform in $\varepsilon$ and $t$ on compact intervals. Since $\bar\delta_{tt}=-\bar u_{tt}$ by (\ref{dybc2}), such a bound can be obtained from arguments similar to those in \cite[Proof of Lemma 3.16]{Gal&Shomberg15}; however, this in turn necessitates regularity assumptions such as (\ref{f-assumption-2}), (\ref{f-reg-ass-2})-(\ref{f-reg-ass-3}) for Problem (R). Also, another concern comes from the presence of the new term $\bar\delta$ on the right-hand side of (\ref{z-difference2})$_3$. The function $\bar\delta$ is determined from (\ref{dybc2}) (that is, the transport-type equation (\ref{dybc})) and the initial condition $\bar\delta(0,\cdot)=\varepsilon u_0$.
The $\varepsilon$ present in this initial condition ensures we obtain a control like that obtained in (\ref{robust-7}). This model will be examined in a subsequent article. \end{remark} \section{Conclusions} In this article, an upper-semicontinuous family of global attractors was constructed for a damped semilinear wave equation possessing a singular perturbation parameter occurring in the prescribed acoustic boundary condition. The result is obtained under a rather restrictive growth condition on the nonlinear term; consequently, the global attractors are obtained via an asymptotic compactness property of the semiflows. With the perturbation parameter occurring in the boundary conditions, the semiflow corresponding to the limit problem, Problem (R), is Lipschitz continuous on its phase space. This result is utilized at a critical step in the proof of the upper-semicontinuity of a generic family of sets, which was then applied to the global attractors. Another crucial property in the framework of this problem is that the lift map does not require any regularity. For example, the global attractor $\mathcal{A}_0$ for Problem (R) lies in $H^1(\Omega)\times L^2(\Omega)$, while its lift lies in $H^1(\Omega)\times L^2(\Omega)\times L^2(\Gamma)\times L^2(\Gamma)$; no additional regularity of the attractor is required in order to obtain the upper-semicontinuity result. Recall, this is certainly not the case for problems with a perturbation of hyperbolic-relaxation type. In comparison to the upper-semicontinuity result for the global attractors in \cite{Hale&Raugel88}, the perturbation parameter there occurs as a hyperbolic-relaxation term (see the motivation in \S1). The global attractors for the parabolic problem must be in (at least) $H^2(\Omega)\cap H^1_0(\Omega)$ in order for the lift to be well-defined in $H^1(\Omega)\times L^2(\Omega)$. Such a regularity result requires stronger assumptions on the nonlinear term than those initially used here. In addition, while the parabolic semiflow $S_0$ is Lipschitz continuous in $L^2(\Omega)$, it is not necessarily so in $H^1_0(\Omega)$. Because of this, another approach is needed when investigating the continuity of the family of global attractors in $H^1(\Omega)\times L^2(\Omega)$. The global attractors $\mathcal{A}_\varepsilon$ obtained under the restrictive growth assumptions were shown to be bounded, uniformly with respect to the perturbation parameter $\varepsilon$, in the phase space $\mathcal{H}_\varepsilon.$ With assumptions sufficient to allow the existence of both weak and strong solutions to both Problem (R) and Problem (A), we showed that the family of global attractors, $\mathcal{A}_{\varepsilon}$, $\varepsilon\in[0,1],$ possesses optimal regularity; each is bounded in the more regular phase space $\mathcal{D}_\varepsilon.$ However, here, the bound is no longer independent of $\varepsilon.$ With this further regularity assumed on the nonlinear term, we showed that the semiflows $S_\varepsilon$ admit a decomposition into a part decaying exponentially to zero and a uniformly precompact part. Since fractional powers of the Laplacian are undefined for Problem (A), we resorted to other $H^2$-regularity methods. In a natural way, these results allowed us to show the existence of a bounded absorbing set $\mathcal{B}^1_\varepsilon$ in $\mathcal{D}_\varepsilon;$ a first step toward proving the existence of an exponential attractor.
Indeed, further properties of $S_\varepsilon$ are shown: Lipschitz continuity on $[T,2T]\times\mathcal{B}^1_\varepsilon$, for some $0<T<\infty,$ and a squeezing property. The existence of a family of exponential attractors means the corresponding family of global attractors (for each $\varepsilon\in[0,1]$) possesses finite fractal dimension. Through various estimates which depend on $\varepsilon$ in a crucial way, the radius of the absorbing set $\mathcal{B}^1_\varepsilon$ depends on $\varepsilon$; moreover, the fractal dimension of $\mathcal{A}_\varepsilon$ and $\mathcal{M}_\varepsilon$ is not necessarily uniform in $\varepsilon.$ Two important practical results stemming from this work are the following: First, the nature of the upper-semicontinuity result of the attractors, as presented here, means Problem (A) is a ``relaxation'' of Problem (R). Precisely, in light of Lemma \ref{compare}, Problem (A) can be interpreted as an approximation of Problem (R), and, in this case, the difference between corresponding trajectories, on compact time intervals, is controlled explicitly in terms of $\sqrt{\varepsilon}$. Secondly, the finite dimensionality of the attractors means the infinite-dimensional dynamics inherent in the systems associated with Problem (R) and Problem (A) can be reduced to a finite-dimensional system of ODEs. \appendix \section{} In this section we include some useful results utilized by Problem (R) and Problem (A). The first result can be found in \cite[Lemma 2.7]{Belleri&Pata01}. \begin{proposition} \label{t:diff-ineq-1} Let $X$ be an arbitrary Banach space, and $Z\subset C([0,\infty);X)$. Suppose that there is a functional $E:X\rightarrow\mathbb{R}$ such that, for every $z\in Z$, \begin{equation}\label{zupper-1} \sup_{t\geq 0} E(z(t))\geq -r \ \text{and} \ E(z(0))\leq R \end{equation} for some $r,R\geq 0$. In addition, assume that the map $t\mapsto E(z(t))$ is $C^1([0,\infty))$ for every $z\in Z$ and that for almost all $t\geq 0$, the differential inequality holds \begin{equation} \label{id-007} \frac{ {\mathrm{d}} }{ {\mathrm{d}} t} E(z(t)) + m\|z(t)\|^2_X \leq C, \end{equation} for some $m>0$, $C\geq 0$, both independent of $z\in Z$. Then, for every $\iota>0$, there exists $t_0\geq 0$, depending on $R$ and $\iota$, such that for every $z\in Z$ and for all $t\geq t_0$, \begin{equation}\label{wer-2} E(z(t))\leq \sup_{\xi\in X}\{ E(\xi):m\|\xi\|^2_X\leq C+\iota \}. \end{equation} Furthermore, $t_0=(r+R)/\iota$. \end{proposition} The following result is the so-called transitivity property of exponential attraction from \cite[Theorem 5.1]{FGMZ04}. \begin{proposition} \label{t:exp-attr} Let $(\mathcal{X},d)$ be a metric space and let $S_t$ be a semigroup acting on this space such that \[ d(S_t m_1,S_t m_2) \leq C e^{Kt} d(m_1,m_2), \] for appropriate constants $C$ and $K$. Assume that there exist three subsets $M_1, M_2, M_3\subset\mathcal{X}$ such that \[ {\rm{dist}}_\mathcal{X}(S_t M_1,M_2) \leq C_1 e^{-\alpha_1 t} \quad\text{and}\quad{\rm{dist}}_\mathcal{X}(S_t M_2,M_3) \leq C_2 e^{-\alpha_2 t}. \] Then \[ {\rm{dist}}_\mathcal{X}(S_t M_1,M_3) \leq C' e^{-\alpha' t}, \] where $C'=CC_1+C_2$ and $\alpha'=\frac{\alpha_1\alpha_2}{K+\alpha_1+\alpha_2}$. \end{proposition} \section*{Acknowledgments} The author gratefully acknowledges Sergio Frigeri for his consulting on this project.
\end{document}
\begin{document} \title{Resilience of multi-photon entanglement under losses} \author{G.A.\ Durkin$^{1,2}$} \email{[email protected]} \author{C.\ Simon$^{1,2,3}$} \author{J.\ Eisert$^{4}$} \author{D.\ Bouwmeester$^2$} \affiliation{ $^1$ Centre for Quantum Computation, University of Oxford, OX1 3PU, UK \\ $^2$ Department of Physics, University of California, Santa Barbara, CA 93106, USA \\ $^3$ Laboratoire de Spectrom\'{e}trie Physique, CNRS et Universit\'{e} J.\ Fourier - Grenoble, BP 87, 38402 St.\ Martin d'H\`{e}res, France\\ $^4$ Institut f\"{u}r Physik, University of Potsdam, D-14469 Potsdam, Germany } \date{\today} \begin{abstract} We analyze the resilience under photon loss of the bi-partite entanglement present in multi-photon states produced by parametric down-conversion. The quantification of the entanglement is made possible by a symmetry of the states that persists even under polarization-independent losses. We examine the approach of the states to the set of states with a positive partial transpose as losses increase, and calculate the relative entropy of entanglement. We find that some bi-partite distillable entanglement persists for arbitrarily high losses. \end{abstract} \pacs{PACS: 03.67.-a, 42.50.-p, 03.65.Ud} \maketitle \section{Introduction} Parametric down-conversion has been used in many experiments \cite{pdcexp} to create polarization entangled photon pairs \cite{kwiat}. Recent experimental \cite{lamasdemart,eisenberg} and theoretical \cite{durkin,simon,reid} work has studied the creation of strong entanglement of large numbers of photons. The states under consideration are entangled pairs of light pulses such that the polarization of each pulse is completely undetermined, but the polarizations of the two pulses are always anti-correlated. Such states are the polarization equivalent of approximate singlet states of two potentially very large spins \cite{howell}. An application of the states for quantum key distribution has been suggested \cite{durkin}. In any realistic experiment photons will be lost during propagation. It is therefore of great practical interest to analyze the resilience of the multi-photon entanglement under loss. A priori this seems like a very difficult task, because it requires the quantification of the entanglement present in mixed quantum states of high or actually even infinite dimensionality. However, the multi-photon states introduced in the above work exhibit very high symmetry - in the absence of losses they are spin singlets. The related symmetry under joint polarization transformations on both pulses is preserved even in the presence of polarization-independent losses. This makes it possible to apply the concepts of `entanglement under symmetry' developed in Refs.\ \cite{Werner,Rains,Sym,Regul,KGH} to the quantification of the multi-photon entanglement in the presence of losses. We calculate the degree of entanglement for the resulting states of high symmetry, as quantified in terms of the relative entropy of entanglement. We show that some (distillable) entanglement remains for arbitrarily high losses. \section{Symmetry of the states in the presence of losses} In the above-mentioned experiments and proposals a non-linear crystal is pumped with a strong laser pulse, and a three-wave mixing effect leads to the creation of photons along two directions $a$ and $b$. 
To a good approximation the Hamiltonian in the interaction picture in a four-mode description is given by \begin{equation}\label{Hamiltonian} H=e^{i\phi} \kappa({a}_{h}^{\dagger } {b}_{v}^{\dagger}-{a}_{v}^{\dagger }{b}_{h}^{\dagger}) + e^{- i\phi} \kappa({a}_{h}{b}_{v}-{a}_{v}{b}_{h}). \end{equation} The real coupling constant $\kappa$ is proportional to the amplitude of the pump field and to the relevant non-linear optical coefficient of the crystal, and $\phi$ denotes the phase of the pump field. Photons are created into the four modes with annihilation operators $a_{h}$, $a_{v}$, $b_{h}$, $b_{v}$, where $h$ and $v$ denote horizontal and vertical polarization. Note that both the modes and the associated annihilation operators will be denoted with the same symbol. In the absence of losses, this Hamiltonian leads to a state vector of the form \cite{durkin,simon} \begin{eqnarray} |\psi\rangle = e^{-iHt}|0\rangle= \frac{1}{\cosh^{2}\tau}\sum _{n=0}^{\infty}e^{i n \phi} \sqrt{n+1} \;\tanh^{n}\tau\;|\psi^{n}_{-}\rangle, \label{pdcstate} \end{eqnarray} where $\tau=\kappa t$ is the effective interaction time and \begin{eqnarray} &&|\psi^{n}_{-}\rangle = \frac{1}{\sqrt{n+1}} \frac{1}{n!}(a^{\dagger }_{h}b^{\dagger }_{v}-a^{\dagger }_{v}b^{\dagger }_{h})^n |0\rangle \label{psin-}\\ &&= \frac{1}{\sqrt{n+1}}\sum_{m=0}^{n}(-1)^{m}|n\!\!-\!\!m\rangle_{a_h} |\,m\rangle_{a_v}|\,m\rangle_{b_h}|n\!\!-\!\!m\rangle_{b_v} \,. \nonumber \end{eqnarray} In experiments the pump phase is typically unknown, and data is collected over time intervals much longer than the pump field coherence time. We will therefore consider the state $\rho$ obtained from the state vector Eq.\ (\ref{pdcstate}) by uniformly averaging over the pump phase $\phi\in[0,2\pi)$: \begin{equation} \rho = \frac{1}{\cosh^{4}\tau}\sum _{n=0}^{\infty}(n+1) \;\tanh^{2n}\tau\;|\psi^{n}_{-}\rangle \langle \psi^{n}_{-}|. \label{pdcrho} \end{equation} The Hamiltonian $H$ is invariant under any joint polarization transformation in the spatial modes $a$ and $b$. That is, if one defines ${\bf a}=(a_h,a_v)$ and ${\bf b}=(b_h,b_v)$, then $H$ is invariant under the joint application of the same unitary $U$ from $SU(2)$ to both vectors, ${\bf a} \mapsto {U}{\bf a}$ and ${\bf b} \mapsto { U} {\bf b}$. This invariance of $H$ is inherited by the multi-photon states created through the action of $H$ on the vacuum. This symmetry can be expressed as \begin{equation} V(U)\rho V(U)^{\dagger}=\rho \label{sym} \end{equation} for all $U\in SU(2)$, where $V(U)= e^{i {\bf n} {\bf J}}$, and the real vector ${\bf n}$ is specified by $U=e^{i{\bf n}\sigma/2}$, $\sigma$ denoting the vector of Pauli matrices. Here the angular momentum operator ${\bf J}$ can be written as ${\bf J}={\bf J}_{a}+ {\bf J}_{b}$. The components of ${\bf J}_{a}$ associated with spatial mode $a$ are given by the familiar quantum Stokes parameters ${J}_{a,x}=(a_{+}^{\dagger} a_{+} - a_{-}^{\dagger} a_{-})/2$, ${J}_{a,y}=(a_{l}^{\dagger} a_{l} - a_{r}^{\dagger} a_{r})/2$, and ${J}_{a,z}=(a_{h}^{\dagger} a_{h} - a_{v}^{\dagger} a_{v})/2$, with $a_{\pm}=(a_{h}\pm a_{v})/\sqrt{2}$ corresponding to light that is linearly polarized at $\pm 45^o$, and $a_{l,r}=(a_{h}\pm i a_{v})/\sqrt{2}$ to left- and right-hand circularly polarized light. Analogous relations hold for spatial mode $b$. In the present work we are interested in the states created by $H$ in the presence of losses. 
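Before introducing the loss model, a quick consistency check on Eq.~(\ref{pdcrho}) may be useful: the weights $(n+1)\tanh^{2n}\tau/\cosh^{4}\tau$ should sum to one, and the mean number of created pairs (equal to the mean photon number in each spatial arm) should equal $2\sinh^{2}\tau$, consistent with the average photon number before loss quoted later in connection with Fig.~2 if that number is read per spatial mode. The following minimal Python snippet (ours, with an arbitrary value of $\tau$ and a finite truncation, not part of the original analysis) confirms both.
\begin{verbatim}
import numpy as np

tau = 0.8                                    # effective interaction time (arbitrary choice)
n = np.arange(0, 400)                        # truncation of the pair-number sum
P = (n + 1) * np.tanh(tau) ** (2 * n) / np.cosh(tau) ** 4

print(P.sum())                               # ~1.0: the weights are normalized
print((n * P).sum(), 2 * np.sinh(tau) ** 2)  # mean pair number vs. 2*sinh^2(tau)
\end{verbatim}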
These losses will be modeled by four beam splitters of transmittivity $\eta\in [0,1]$, one for each of the modes $a_h, a_v, b_h, b_v$, where the modes are mixed with vacuum modes. Explicitly, the operation ${\cal L}^a_{\eta}$ corresponding to losses characterized by $\eta$ acting on a single mode $a$ is given by \begin{equation} {\cal L}^a_{\eta}(\rho)=\sum \limits_{n=0}^{\infty} L_n^a \rho (L_n^a)^{\dagger}, \end{equation} with $L_n^a$ being given by \begin{equation} L_n^a=\frac{1}{\sqrt{n!}}\left(\frac{1-\eta}{\eta}\right)^{\frac{n}{2}} a^n\, \eta^{\frac{1}{2}a^{\dagger}a}. \end{equation} One can easily verify that these operators satisfy \begin{equation} \sum \limits_{n=0}^{\infty} (L_n^a)^{\dagger} L_n^a = \openone, \end{equation} required for trace preservation. 
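This completeness relation is easy to verify numerically on a truncated Fock space. The following short Python sketch (ours, not from the original work; the truncation dimension and the transmittivity are arbitrary choices) builds the $L^a_n$ as matrices with the operator ordering given above and checks the relation.
\begin{verbatim}
import numpy as np
from math import factorial

d, eta = 12, 0.6                                   # Fock truncation and transmittivity

a = np.diag(np.sqrt(np.arange(1, d)), k=1)         # truncated annihilation operator
attenuate = np.diag(eta ** (np.arange(d) / 2.0))   # matrix of eta^(a^dag a / 2)

def L(n):
    # L_n = ((1-eta)/eta)^(n/2) / sqrt(n!) * a^n * eta^(a^dag a / 2)
    return ((1 - eta) / eta) ** (n / 2) / np.sqrt(factorial(n)) \
           * np.linalg.matrix_power(a, n) @ attenuate

completeness = sum(L(n).conj().T @ L(n) for n in range(d))
assert np.allclose(completeness, np.eye(d))        # trace preservation on the truncated space
\end{verbatim}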
In this paper we are interested in the situation where an equal amount of loss occurs in all four modes. We will denote the corresponding quantum operation by \begin{equation} {\cal L}_{\eta}={\cal L}^{a_h}_{\eta}\otimes {\cal L}^{a_v}_{\eta}\otimes{\cal L}^{b_h}_{\eta}\otimes{\cal L}^{b_v}_{\eta}. \label{loss} \end{equation} It is not difficult to apply this loss channel to the state $\rho$ of Eq.\ (\ref{pdcrho}). However, the resulting expression is quite unwieldy, and quantifying the entanglement present in the state seems like a hopeless task at first sight. We will now discuss general properties of the resulting state that allow a simple parametrization and, as a consequence, the determination of its entanglement. In the absence of losses, all components of the state created by the action of $H$ have an equal number of photons in the $a$ modes and in the $b$ modes, since photons are created in pairs. The state vector $|\psi\rangle$ of Eq.\ (\ref{pdcstate}) is a superposition of terms corresponding to different total photon numbers. For any given term we will denote the number of photons in the $a$ modes by $\alpha=\alpha_h+\alpha_v$, where $\alpha_h$ is the number of photons in mode $a_h$, etc. Analogously, the number of photons in the $b$ modes is denoted by $\beta=\beta_h+\beta_v$. The relative phase between terms with different values of $\alpha$ or $\beta$ depends on the pump phase $\phi$. The corresponding coherences in the density matrix are removed when averaging over the pump phase. Losses lead to the appearance of terms with $\alpha \neq \beta$. The state $\rho'= {\cal L}_{\eta}(\rho)$ after losses now has the form \begin{equation} \rho'=\sum_{\alpha,\beta=0}^{\infty} P{(\alpha,\beta)} \rho^{(\alpha,\beta)}, \label{rho} \end{equation} where $P{(\alpha,\beta)}$ is the probability to have photon numbers $\alpha$ and $\beta$ in the $a$ and $b$ modes respectively, and $\rho^{(\alpha,\beta)}$ is the corresponding state. In the state before losses, the terms $\rho^{(\alpha,\alpha)}$ are maximally entangled states (for $\alpha \neq 0$), denoted by $|\psi_-^{\alpha}\rangle \langle \psi_-^{\alpha}|$ in the notation of Eq.\ (\ref{psin-}). Losses reduce this entanglement, but do not make the state become separable, as will be seen below. The state vector $|\alpha_h,\alpha_v\rangle|\beta_h,\beta_v\rangle$ corresponds to a spin state vector $|j_a,m_a\rangle|j_b,m_b\rangle$ with $j_a=(\alpha_h+\alpha_v)/2$, $m_a=(\alpha_h-\alpha_v)/2$, $j_b=(\beta_h+\beta_v)/2$, and $m_b=(\beta_h-\beta_v)/2$. Note that in this representation a single photon corresponds to a spin-1/2 system. A state with fixed photon numbers $\alpha$ and $\beta$ thus corresponds to a state of two fixed general spins $j_a=\alpha/2$ and $j_b=\beta/2$. The key feature of the lossy channel ${\cal L}_{\eta}$ of Eq.\ (\ref{loss}) is that it does not destroy the symmetry described by Eq.\ (\ref{sym}). We have that \begin{equation} V(U) {\cal L}_{\eta}(\rho)V(U)^{\dagger}={\cal L}_{\eta}(\rho) \end{equation} for all losses $\eta$ and all $U\in SU(2)$. To sketch the argument why this symmetry is retained we will resort to the Heisenberg picture. Polarization-independent loss in the $a$ modes can be described by the map \begin{equation} {\bf a}\mapsto {\bf a'}=\sqrt{\eta}{\bf a}+\sqrt{1-\eta} {\bf c}, \end{equation} where ${\bf c}=(c_h,c_v)$ is a vector of unpopulated modes that are coupled into the system due to the loss. Applying ${ U}\in SU(2)$ to ${\bf a'}$ gives \begin{equation} {\bf a''}={ U}{\bf a'}=\sqrt{\eta}{ U}{\bf a}+\sqrt{1-\eta}{ U}{\bf c}. \end{equation} On the other hand, applying first ${ U}$ and then the loss operation gives \begin{equation} {\bf a''}=\sqrt{\eta}{ U}{\bf a}+\sqrt{1-\eta}{\bf c}, \end{equation} in which the last term is different. However, this term just corresponds to a coupling in of unpopulated modes with a coefficient $\sqrt{1-\eta}$. The resulting lossy channel is invariant under the map ${\bf c}\mapsto U{\bf c}$, since these modes are unpopulated. This implies that the state after application of the loss operation ${\cal L}_{\eta}$ has the same symmetry as before. Note that for this argument to hold, the amount of loss in the $a$ and $b$ modes does not have to be the same, since the transformations are applied independently to each of $\mathbf{a}$ and $\mathbf{b}$. However, within each spatial mode, losses must be polarization insensitive. The identification of the above symmetry dramatically simplifies the description of the resulting states. The most general state $\rho^{(\alpha,\beta)}$ with fixed values of $\alpha$ and $\beta$ for which $V(U)\rho^{(\alpha,\beta)}V(U)^{\dagger}= \rho^{(\alpha,\beta)}$ for all $U\in SU(2)$ is of the form \begin{equation} \label{rhoalphabeta} \rho^{(\alpha,\beta)}=\sum_{j=|j_a-j_b|}^{j_a+j_b} \mu^{(\alpha,\beta)}_j\Omega^{(\alpha,\beta)}_j, \end{equation} where $j_a=\alpha/2$, $j_b=\beta/2$ \cite{KGH}, essentially as a consequence of Schur's lemma \cite{grouptheory}. Here, for each $(\alpha,\beta)$, the coefficients $\mu^{(\alpha,\beta)}_j$, with $j$ running over the allowed values, form a probability distribution. In turn, $\Omega^{(\alpha,\beta)}_j$ is, up to normalization to unit trace, a projection onto the space of total spin $j$ (for fixed $j_a=\alpha/2$, $j_b=\beta/2$). That is, $\Omega^{(\alpha,\beta)}_j= {\mathbbm{1}}^{(\alpha,\beta)}_j/(2j+1)$, where ${\mathbbm{1}}^{(\alpha,\beta)}_j$ is equal to the identity when acting on the space labeled by $\alpha$, $\beta$, and $j$, and zero otherwise \cite{KGH,Schliemann}. As an example, let us consider the case with exactly one photon in each spatial mode, i.e., $\alpha=\beta=1$. Then there are just two terms in the expansion of Eq.\ (\ref{rhoalphabeta}), proportional to $\Omega^{(1,1)}_0$ and $\Omega^{(1,1)}_1$. The state $\Omega^{(1,1)}_0$ is the projector onto the two-photon singlet state with state vector $(({a}_{h}^{\dagger } {b}_{v}^{\dagger}-{a}_{v}^{\dagger}{b}_{h}^{\dagger})/\sqrt{2})|0\rangle$, while $\Omega^{(1,1)}_1$ is the normalized projector onto the spin-1 triplet. 
The trace condition $\mu^{(1,1)}_{0}+\mu^{(1,1)}_{1}=1$ means that the set of all invariant states $\rho^{(1,1)}$ is characterized by just one parameter. Note that the most general state with exactly one photon in each spatial mode would be characterized by $15$ parameters. \section{Quantifying the entanglement} In order to quantify the entanglement in a given physical situation, one has to determine the coefficients $P{(\alpha,\beta)}$ of Eq.\ (\ref{rho}) and $\mu^{(\alpha,\beta)}_j$ of Eq.\ (\ref{rhoalphabeta}), which may be calculated from the polarization-dependent photon counting probabilities $p(\alpha_h,\alpha_v,\beta_h,\beta_v)$. These in turn can be determined by explicitly applying the loss channel ${\cal L}_{\eta}$ of Eq.\ (\ref{loss}) to the state $\rho$ of Eq.\ (\ref{pdcrho}). One finds \begin{eqnarray} &&p(\alpha_h,\alpha_v,\beta_h,\beta_v)=\frac{ \eta^{\alpha+\beta}}{(1-\eta)^{\alpha+\beta}\,(\cosh (\kappa t))^4\, \alpha_h! \alpha_v! \beta_h! \beta_v!}\label{probs} \\ &&\times \sum \limits_{m= m_{0},n=n_{0}}^{\infty} \frac{((1-\eta)\tanh (\kappa t))^{2(m+n)}(m!)^2 (n!)^2}{(m-\alpha_h)!(m-\beta_v)!(n-\alpha_v)!(n-\beta_h)!},\nonumber \end{eqnarray} where $m_0=\max(\alpha_h,\beta_v)$ and $n_{0}=\max(\alpha_v,\beta_h)$. The probabilities $P(\alpha,\beta)$ are obtained by summing this expression over all $\alpha_h,\alpha_v,\beta_h,\beta_v$ with $\alpha_h+\alpha_v=\alpha$ and $\beta_h+\beta_v=\beta$. The coefficients $\mu^{(\alpha,\beta)}_j$ may be written as linear combinations of the $p(\alpha_h,\alpha_v,\beta_h,\beta_v)$ via the Clebsch-Gordan coefficients \cite{grouptheory} by means of the standard procedure of `coupling spins'. Polarization-sensitive photon counting in the spatial modes $a$ and $b$ corresponds to the basis spanned by the $|j_a,m_a\rangle|j_b,m_b\rangle$, while the $\mu^{(\alpha,\beta)}_j$ and $\Omega^{(\alpha,\beta)}_j$ are defined in terms of the total spin, corresponding to the label $j$. Since the $\mu^{(\alpha,\beta)}_j$ characterize the normalized state $\rho^{(\alpha,\beta)}$, they only depend on the relative probabilities of the different values of $\alpha_h,\alpha_v,\beta_h,\beta_v$ for given $\alpha$ and $\beta$. Eq.\ (\ref{probs}) then implies that they depend on the interaction time $t$ and the transmission $\eta$ only via the combination $\xi=(1-\eta)\tanh(\kappa t)\in [0,1]$, which ranges from zero for perfect transmission (or, less interestingly, zero interaction time) to one in the limit of complete loss and infinite interaction time. For example, for $\alpha=\beta=1$, the single independent parameter $\mu^{(1,1)}_0$ is given by \begin{eqnarray} \mu^{(1,1)}_{0}= 1 - \frac{3}{2} \bigl( p(1,0,1,0)+p(0,1,0,1)\bigr)/P(1,1), \nonumber \end{eqnarray} where $P(1,1)=p(1,0,1,0)+p(1,0,0,1)+p(0,1,1,0)+p(0,1,0,1)$. This gives \begin{equation} \mu^{(1,1)}_{0}=(1+ \xi^2/2)/(1+2\xi^2).\label{muzero} \end{equation} 
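Equation (\ref{muzero}) can be cross-checked numerically. The short Python sketch below (our illustration, not part of the original analysis; the parameter values and the truncation order of the double sum are arbitrary) evaluates Eq.~(\ref{probs}) directly and reproduces $(1+\xi^2/2)/(1+2\xi^2)$.
\begin{verbatim}
from math import factorial, tanh, cosh

kappa_t, eta, M = 0.8, 0.3, 25   # interaction time, transmittivity, truncation (arbitrary)

def p(ah, av, bh, bv):
    """Truncated evaluation of the counting probabilities of Eq. (probs)."""
    xi = (1 - eta) * tanh(kappa_t)
    pref = (eta / (1 - eta)) ** (ah + av + bh + bv) / (
        cosh(kappa_t) ** 4 * factorial(ah) * factorial(av) * factorial(bh) * factorial(bv))
    total = 0.0
    for m in range(max(ah, bv), M):
        for n in range(max(av, bh), M):
            total += xi ** (2 * (m + n)) * factorial(m) ** 2 * factorial(n) ** 2 / (
                factorial(m - ah) * factorial(m - bv) * factorial(n - av) * factorial(n - bh))
    return pref * total

P11 = p(1, 0, 1, 0) + p(1, 0, 0, 1) + p(0, 1, 1, 0) + p(0, 1, 0, 1)
mu0_numeric = 1 - 1.5 * (p(1, 0, 1, 0) + p(0, 1, 0, 1)) / P11
xi = (1 - eta) * tanh(kappa_t)
print(mu0_numeric, (1 + xi ** 2 / 2) / (1 + 2 * xi ** 2))   # the two values should agree
\end{verbatim}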
To quantify the entanglement present in the total state, one can proceed by considering each $\rho^{(\alpha,\beta)}$ separately. There is no unique measure of entanglement for mixed states. Instead, there are several inequivalent ones, each of which is associated with a different physical operational interpretation \cite{Int}. The relative entropy of entanglement \cite{Relent}, which will be employed in the present paper, specifies to what extent a given state can be operationally distinguished from the closest state that is regarded as being disentangled. The relative entropy of entanglement of a state $\rho$ is defined as \begin{eqnarray}\label{relent} E_{R}(\rho )= \inf_{\sigma \in \mathcal{D}} S(\rho||\sigma), \end{eqnarray} where $S(\rho||\sigma)=\text{tr} [\rho \log \rho - \rho \log \sigma]$ denotes the quantum relative entropy of the state $\rho$ relative to the state $\sigma$. Here ${\mathcal D}$ is taken to be the set of states with a positive partial transpose \cite{peres} (PPT states). This set of states includes the set of separable states, but in general also contains bound entangled states \cite{boundent}. The relative entropy of entanglement is an upper bound to the distillable entanglement \cite{Int}, providing a measure of the entanglement available as a resource for quantum information purposes \cite{regular}. The symmetry of the states dramatically simplifies the calculation of the relative entropy of entanglement. As follows immediately from the convexity of the relative entropy and the invariance under joint unitary operations, the closest PPT state can always be taken to be a state of the same symmetry \cite{Rains,KGH}. Hence, the closest PPT state is characterized by the same small number of parameters. For simplicity of notation, we will denote the subset of state space corresponding to specific numbers $\alpha$, $\beta$ of photons as the $(\alpha,\beta)$-photon space. In the $(1,1)$-photon space let us denote the closest PPT state as \begin{eqnarray} \sigma^{(1,1)}= \zeta^{(1,1)}_{0} \Omega^{(1,1)}_{0} + (1-\zeta^{(1,1)}_{0}) \Omega^{(1,1)}_{1}. \end{eqnarray} Forming the partial transpose of this state, and demanding that the resulting operator be non-negative, gives the condition $ \zeta^{(1,1)}_{0} \leq 1/2$. In this simplest space, all symmetric states lie on the straight line segment $\mu^{(1,1)}_{0} \in [0,1]$, with the PPT region extending from the origin to the midpoint (see Fig.\ 1). In general, for higher photon numbers $\alpha$ and $\beta$, the set of symmetric states is represented by a simplex in a $(\text{min}(\alpha,\beta)+1)$-dimensional space, the coordinates of which are denoted by $\mu^{(\alpha,\beta)}_j$. In turn, the PPT criterion gives rise to a number of linear inequalities, such that the set of invariant operators with a positive partial transpose corresponds again to a simplex. The intersection of the two simplices corresponds to the invariant PPT states, and the coordinates are denoted by $\zeta^{(\alpha,\beta)}_j$ \cite{Hendriks}. The situation with $\alpha=\beta =1,2,3$ is depicted explicitly in Fig.\ 1. The simplex corresponding to symmetric states, characterized by the condition that the $ \mu^{(\alpha,\beta)}_j$ form a probability distribution, is in these three cases a straight line segment, an equilateral triangle, and a regular tetrahedron, respectively. The vertices of the simplex represent the normalized projectors $\Omega^{(\alpha,\beta)}_{j}$. States in the interior of the simplex are convex combinations of all the allowed projectors. The PPT set with the same symmetry is clearly marked. 
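To make the minimization in Eq.~(\ref{relent}) concrete in the simplest case, the following sketch (ours, not the authors' code; a base-2 logarithm is assumed) computes $E_R(\rho^{(1,1)})$: since $\rho^{(1,1)}$ and every invariant PPT state are diagonal in the same total-spin basis, $S(\rho||\sigma)$ reduces to a classical relative entropy of the spectra, and the optimization runs over the single parameter $\zeta^{(1,1)}_0\in[0,1/2]$.
\begin{verbatim}
import numpy as np

def relative_entropy_11(mu0, log=np.log2):            # base-2 logarithm assumed
    """E_R of rho^(1,1) = mu0*Omega_0 + (1-mu0)*Omega_1, minimized over symmetric PPT states."""
    spec_rho = np.array([mu0] + 3 * [(1 - mu0) / 3])  # eigenvalues of rho^(1,1)
    def S(zeta0):                                     # classical relative entropy of spectra
        spec_sig = np.array([zeta0] + 3 * [(1 - zeta0) / 3])
        return float(np.sum(spec_rho * (log(spec_rho) - log(spec_sig))))
    zetas = np.linspace(1e-6, 0.5, 2001)              # PPT condition: zeta_0 <= 1/2
    return min(S(z) for z in zetas)

xi = 0.3                                              # arbitrary value of xi
mu0 = (1 + xi ** 2 / 2) / (1 + 2 * xi ** 2)           # Eq. (muzero)
print(relative_entropy_11(mu0))   # the minimum is attained at the PPT boundary zeta_0 = 1/2
\end{verbatim}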
\begin{figure} \caption{\small{The simplices of symmetric states for the cases $(\alpha,\beta)$ with $\alpha=\beta=1,2,3$, respectively. The equilateral triangle has been marked with contour lines, on each of which one of the parameters is constant. The set of PPT states is indicated by the grey line segment in the top graph, the shaded area of the $(2,2)$ triangle, and the filled polygon which obscures part of the $(3,3)$ tetrahedron.}} \end{figure} \begin{figure} \caption{\small{Lower bounds to the relative entropy of entanglement for down-conversion states with initial average photon numbers of $0.5$ (solid), $1$ (dashed), and $3$ (dotted line) subject to loss, obtained by evaluating the sum of Eq.\ (\ref{full}) over a finite range of photon numbers.}} \end{figure} Fig.\ 1 also shows the curves traced by the down-conversion states when they are subject to loss. As discussed above, the position of the states on the curve is determined by the single parameter $\xi$. For perfect transmission, corresponding to $\eta=1$, the quantum state in an $\alpha=\beta$ photon space has $\mu^{(\alpha,\beta)}_{0}=1$ for all values of $t$, corresponding to maximal entanglement. As losses are increased the state migrates towards the PPT boundary. It is an important immediate consequence of Eq.\ (\ref{muzero}) that for any nonzero transmission $\eta>0$, the number $\mu^{(\alpha,\alpha)}_{0}$ is always greater than $1/2$ for any finite $t$ and for all $\alpha$. For any finite $t$, $\mu^{(\alpha,\alpha)}_{0}\rightarrow 1/2$ as $\xi\rightarrow 1$ (which corresponds to the limit of vanishing transmission and infinite interaction time). This holds true for $(\alpha,\alpha)=(1,1)$, but also for higher values of $\alpha$: the state remains outside the PPT set for any non-vanishing $t$ and for arbitrarily high losses. Therefore, the above results show that there is always some entanglement in the down-conversion state, as quantified in terms of the relative entropy of entanglement. As a corollary, which one can already infer from the lowest-dimensional subspace, $(\alpha,\alpha)=(1,1)$, there is actually distillable entanglement in the down-conversion state, no matter how lossy the transmission from the source to the detector is. We now proceed to quantify the entanglement in the states more explicitly. Since $E_R$ is convex and the set of symmetric PPT states is convex, finding the closest state $\sigma$ amounts to solving a convex optimization problem. For different values of $\alpha,\beta$ the quantities $S(\rho^{(\alpha,\beta)}||\sigma^{(\alpha,\beta)})$ have been evaluated, where $\sigma^{(\alpha,\beta)}$ denotes the PPT state which is the unique global minimum in the convex optimization problem, i.e., the PPT state closest to the down-conversion state. For generic states, this optimization problem would still be convex, yet the dimensionality of state space grows as $(\alpha+1)^2(\beta+1)^2-1$. The symmetry dramatically reduces the dimensionality of the set to be searched to $\text{min}(\alpha,\beta)$, and thus makes the quantification of the entanglement a feasible task. For instance, for a state with three photons on each side, one has to consider only three objective variables instead of 255. The total relative entropy of entanglement is given by the expression \begin{eqnarray}\label{full} E_{R}(\rho) = \sum_{\alpha,\beta=0}^{\infty} P{(\alpha,\beta)} E_{R} (\rho^{(\alpha,\beta)}). \end{eqnarray} The average photon number before loss, $N$, is related to the interaction time $t$ as $N=2 \sinh^2 (\kappa t)$. The average photon number after loss is $n=\eta N$. Fig.\ 2 shows the relative entropy of entanglement calculated as described above for $N=0.5$, $N=1$ and $N=3$. One sees that significant entanglement remains even for substantial losses. 
\section{Conclusions} We have shown how symmetry considerations make possible the quantification of entanglement for states produced by parametric down-conversion and subject to losses. The resilience of the entanglement of these multi-photon states under photon loss makes them an excellent system for the experimental demonstration of entanglement of large photon numbers \cite{eisenberg} and good candidates for quantum communication schemes \cite{durkin}. \begin{references} \bibitem{pdcexp} For recent reviews see A.\ Zeilinger, Rev.\ Mod.\ Phys.\ {\bf 71}, S288 (1999); N.\ Gisin, G.\ Ribordy, W.\ Tittel, and H.\ Zbinden, Rev.\ Mod.\ Phys.\ {\bf 74}, 145 (2002); D.\ Bouwmeester, A.\ Ekert, and A.\ Zeilinger, {\it The Physics of Quantum Information} (Springer, Heidelberg-Berlin-New York, 2000). \bibitem{kwiat} P.G.\ Kwiat, K.\ Mattle, H.\ Weinfurter, A.\ Zeilinger, A.V.\ Sergienko, and Y.\ Shih, Phys.\ Rev.\ Lett.\ {\bf 75}, 4337 (1995). \bibitem{lamasdemart} A.\ Lamas-Linares, J.C.\ Howell, and D.\ Bouwmeester, Nature {\bf 412}, 887 (2001); F. \ De Martini, G. \ Di Giuseppe, and S. \ P\'{a}dua, Phys.\ Rev.\ Lett.\ {\bf 87}, 150401 (2001) \bibitem{eisenberg} H.S.\ Eisenberg, G.\ Khoury, G.A.\ Durkin, C.\ Simon, and D.\ Bouwmeester, Phys.\ Rev.\ Lett.\ {\bf 93}, 193901 (2004). \bibitem{durkin} G.A.\ Durkin, C.\ Simon, and D.\ Bouwmeester, Phys. Rev.\ Lett.\ {\bf 88}, 187902 (2002). \bibitem{simon} C.\ Simon and D.\ Bouwmeester, Phys.\ Rev.\ Lett.\ {\bf 91}, 053601 (2003). \bibitem{reid} M.D.\ Reid, W.J.\ Munro, and F.\ De Martini, Phys.\ Rev.\ A {\bf 66}, 033801 (2002); A.B.\ U'Ren, K.\ Banaszek, and I.A.\ Walmsley, Quant.\ Inf.\ Comp.\ {\bf 3}, 480 (2003). \bibitem{howell} J.C.\ Howell, A.\ Lamas-Linares, and D. Bouwmeester, Phys.\ Rev.\ Lett.\ {\bf 88}, 030401 (2002). \bibitem{Werner} R.F.\ Werner, Phys.\ Rev.\ A {\bf 40}, 4277 (1989). \bibitem{Rains} E.M.\ Rains, Phys.\ Rev.\ A {\bf 60}, 179 (1999). \bibitem{Sym} J.\ Eisert, T.\ Felbinger, P.\ Papadopoulos, M.B.\ Plenio, and M.\ Wilkens, Phys.\ Rev.\ Lett.\ {\bf 84}, 1611 (2000); B.M.\ Terhal and K.G.H.\ Vollbrecht, Phys.\ Rev.\ Lett.\ {\bf 85}, 2625 (2000). \bibitem{Regul} K.\ Audenaert, J.\ Eisert, E.\ Jan{\'e}, M.B.\ Plenio, S.\ Virmani, and B.\ De Moor, Phys.\ Rev.\ Lett.\ {\bf 87}, 217902 (2001); K.\ Audenaert, B.\ De Moor, K.G.H.\ Vollbrecht, and R.F.\ Werner, Phys.\ Rev.\ A {\bf 66}, 032310 (2002). \bibitem{KGH} K.G.H.\ Vollbrecht and R.F.\ Werner, Phys.\ Rev.\ A {\bf 64}, 062307 (2001). \bibitem{grouptheory} H.F.\ Jones, {\it Groups, Representations and Physics}, 2nd edition (Institute of Physics, London, 1998). \bibitem{Schliemann} J.\ Schliemann, Phys.\ Rev.\ A {\bf 68}, 012309 (2003). \bibitem{Int} The measures distillable entanglement and entanglement cost specify certain optimal conversion rates: these are the rates that can be achieved in an asymptotic extraction of and preparation procedure starting from maximally entangled qubit pairs. These procedures are thought to be implemented by employing local quantum operations and classical communication (LOCC) only \cite{Bennett,Rains}. \bibitem{Bennett} C.H.\ Bennett, D.P.\ DiVincenzo, J.A.\ Smolin, and W.K.\ Wootters, Phys.\ Rev.\ A {\bf 54}, 3824 (1996). \bibitem{Relent} V.\ Vedral, M.B.\ Plenio, M.A.\ Rippin, and P.L.\ Knight, Phys.\ Rev.\ Lett. {\bf 78}, 2275 (1997); V.\ Vedral and M.B.\ Plenio, Phys.\ Rev.\ A {\bf 57}, 1619 (1998); J.\ Eisert, C.\ Simon, and M.B.\ Plenio, J.\ Phys.\ A {\bf 35}, 3911 (2002). 
\bibitem{peres} This is the transposition with respect to one part of a bi-partite quantum system. If the resulting partial transpose is a positive operator, then the original state is said to have a positive partial transpose. See A.\ Peres, Phys.\ Rev.\ Lett.\ {\bf 77}, 1413 (1996). \bibitem{boundent} M.\ Horodecki, P.\ Horodecki, and R.\ Horodecki, Phys.\ Rev.\ Lett.\ {\bf 80}, 5239 (1998). \bibitem{regular} With the methods of Refs.\ \cite{Regul}, the asymptotic versions $E_\infty (\rho)= \lim_{n\rightarrow \infty} E_R(\rho^{\otimes n})/n$ would in principle also be accessible for this class of states. \bibitem{Hendriks} The set of bi-partite PPT states with $SU(2)$ symmetry has been investigated independently by B.\ Hendriks (Diploma thesis, University of Braunschweig, 2002) under the supervision of R.F.\ Werner. \end{references} \end{document}
\begin{document} \begin{abstract} For a complex variety $\widehat X$ with an action of a reductive group $\widehat G$ and a geometric quotient $\pi: \widehat X \to X$ by a closed normal subgroup $H \subset \widehat G$, we show that open sets of $X$ admitting good quotients by $G=\widehat G / H$ correspond bijectively to open sets in $\widehat X$ with good $\widehat G$-quotients. We use this to compute GIT-chambers and their associated quotients for the diagonal action of $\text{PGL}_2$ on $(\mathbb{P}^1)^n$ in certain subcones of the $\text{PGL}_2$-effective cone via a torus action on affine space. This allows us to represent these quotients as toric varieties with fans determined by convex geometry. \end{abstract} \maketitle \newtheoremstyle{test} {} {} {\it} {} {\bfseries} {.} { } {} \newtheoremstyle{test2} {} {} {} {} {\bfseries} {.} { } {} \theoremstyle{test} \newtheorem{Def}{Definition}[section] \newtheorem{Exa}[Def]{Example} \newtheorem{Exe}[Def]{Exercise} \newtheorem{Theo}[Def]{Theorem} \newtheorem{Lem}[Def]{Lemma} \newtheorem{Cor}[Def]{Corollary} \newtheorem{Pro}[Def]{Proposition} \newtheorem*{Exa*}{Example} \newtheorem*{Pro*}{Proposition} \newtheorem*{Def*}{Definition} \newtheorem*{Cor*}{Corollary} \newtheorem*{Lem*}{Lemma} \newtheorem*{Theo*}{Theorem} \theoremstyle{test2} \newtheorem{Rmk}[Def]{Remark} \newtheorem*{Rmk*}{Remark} \section{Introduction} Let $G$ be a reductive group acting on a variety $X$, then an important question in Geometric Invariant theory is to classify the open $G$-invariant subsets $U$ of $X$ having a good quotient under the action of $G$. Define \begin{align} \label{eqn:DefU} \mathcal{U}_{(X,G)} = \left\{ \parbox{0.7\linewidth}{$U \subset X$ nonempty, open $G$-invariant such that a good quotient $U \to U/\!\!/ G$ exists (in schemes over $\mathbb{C}$)}\right\},\\ \label{eqn:DefUpr} \mathcal{U}^{\text{pr}}_{(X,G)} = \left\{ \parbox{0.7\linewidth}{$U \subset X$ nonempty, open $G$-invariant such that a good quotient $U \to U/\!\!/ G$ exists with $U /\!\!/ G$ a projective variety}\right\},\\ \label{eqn:DefUprg} \mathcal{U}^{\text{pr,g}}_{(X,G)} = \left\{ \parbox{0.7\linewidth}{$U \subset X$ nonempty, open $G$-invariant such that an affine, geometric quotient $U \to U/\!\!/ G$ exists and such that $U /\!\!/ G$ is a projective variety}\right\}. \end{align} In this note we describe a geometric situation, in which these collections of open sets for two pairs $(X,G)$, $(\widehat X, \widehat G)$ can be identified. \begin{Def} Let $G,H$ be reductive, linear algebraic groups and let $G$ act on a variety $X$. A good (resp. geometric) $H$-lift of $(X,G)$ is the data of \begin{itemize} \item a reductive algebraic group $\widehat G$ containing $H$ as a closed normal subgroup together with an identification $\widehat G / H = G$, \item a variety $\widehat X$ with an action of $\widehat G$, \item a morphism $\pi: \widehat X \to X$, which is \begin{itemize} \item $\widehat G$-equivariant with respect to the action of $\widehat G$ on $X$ induced by the action of $G$ and the morphism $\widehat G \to \widehat G / H=G$, \item a good (resp. geometric) quotient for the induced action of $H$ on $\widehat X$. \end{itemize} \end{itemize} \end{Def} Then we prove the following result. \begin{Theo} \label{Theo:lift} Let $\pi: \widehat X \to X$ be a good $H$-lift of $(X,G)$ for $X$ a variety with an action of the reductive group $G$. Then the map $U \mapsto \pi^{-1}(U)$ induces an injection $\mathcal{U}_{(X,G)} \to \mathcal{U}_{(\widehat X,\widehat G)}$. 
Moreover, for $U \in \mathcal{U}_{(X,G)}$ the map $\pi$ induces a natural isomorphism $\pi^{-1}(U) /\!\!/ \widehat G \cong U /\!\!/ G$. Thus we also get an injection $ \mathcal{U}^{\text{pr}}_{(X,G)} \to \mathcal{U}^{\text{pr}}_{(\widehat X,\widehat G)}$. If $\pi$ is a geometric $H$-lift, the correspondence above is a bijection and it induces a bijection $\mathcal{U}^{\text{pr,g}}_{(X,G)} \to \mathcal{U}^{\text{pr,g}}_{(\widehat X,\widehat G)}$. \end{Theo} It has already been observed by various authors (see \cite[Theorem 6.1.5]{bbquotients} for an overview) that for a geometric $H$-lift $\pi: \widehat X \to X$ a $G$-invariant open subset $U \subset X$ has a good/geometric quotient by $G$ iff $\pi^{-1}(U)$ has a good/geometric quotient by $\widehat G$. Below we give a self-contained argument that also includes the case of good $H$-lifts. The structure of the paper is as follows: as a motivation for the definition of $\mathcal{U}^{\text{pr}}_{(X,G)}$, we show in Section \ref{Sect:Linebundles} how its elements relate to $G$-linearized line bundles on $X$ for $X$ a smooth variety. This makes a connection to the classical approach of \cite{git} of obtaining (semi)stable sets from linearized line bundles. We give two special situations where we can identify the elements of $\mathcal{U}^{\text{pr,g}}_{(X,G)}$ with chambers of the $G$-effective cone in $\text{NS}^G(X)$: \begin{itemize} \item for $X$ a projective homogeneous variety and $G$ reductive, \item for $X = \mathbb{A}^n$ and $G=T$ a torus acting linearly. \end{itemize} In Section \ref{Sect:Hlifts} we give the proof of Theorem \ref{Theo:lift} and describe situations where $H$-lifts appear naturally. Moreover, we show that $H$-lifts where the action of $H$ on $\widehat X$ is free give an identification $\text{Pic}^G(X) \cong \text{Pic}^{\widehat G}(\widehat X)$ compatible with forming semistable sets. Finally, in Section \ref{Sect:Exa} we give an explicit example, showing how to compute parts of the VGIT-decomposition of the $G$-effective cone for the componentwise action of $G=\text{PGL}_2$ on $(\mathbb{P}^1)^n$ using toric quotients. \section*{Conventions} In the paper we are going to work over the complex numbers. For us, a \emph{good quotient} of the action of an algebraic group $G$ on a scheme $X$ is a morphism $p: X \to Y$ to a scheme $Y$ satisfying \begin{enumerate} \item $p$ is surjective, affine and $G$-invariant, \item $p_*(\mathcal{O}_X^G) = \mathcal{O}_Y$, where $\mathcal{O}_X^G$ is the sheaf of $G$-invariant functions on $X$, \item for $Z_1, Z_2 \subset X$ closed, disjoint $G$-invariant subsets, their images $p(Z_1), p(Z_2)$ are also closed and disjoint. \end{enumerate} On the other hand, for $p$ to be a \emph{geometric quotient} we require the properties above except that $p$ be affine, and additionally we require the fibres of $p$ over geometric points to be orbits of $G$. This is the definition of \cite{git}. \todo{Def: saturated} \section{Motivation: Projective quotients from linearized line-bundles} \label{Sect:Linebundles} Let $X$ be a smooth, irreducible variety with an action of a connected reductive group $G$. We want to study good quotients of open sets $U \subset X$ by $G$ which are projective varieties. \begin{Lem} \label{Lem:ULcorr} Let $X$ be a smooth, irreducible variety with an action of a connected reductive group $G$. Then the open sets $U$ in $\mathcal{U}^{\text{pr}}_{(X,G)}$ are all of the form $U=X^{ss}(L)$ for a $G$-linearized line bundle $L$ on $X$. 
\end{Lem} \begin{proof} By \cite[Theorem 6.1.5]{bbquotients}, all open $G$-invariant sets $U \subset X$ with a good quotient $U /\!\!/ G$ that is a quasi-projective variety are saturated subsets of some $X^{ss}(L)$ for $L$ a $G$-linearized line bundle. Let $\pi: X^{ss}(L) \to X^{ss}(L) /\!\!/ G$ be the corresponding good quotient. Then we have $U /\!\!/ G \subset X^{ss}(L) /\!\!/ G$ contained as an open subset. If $U /\!\!/ G$ is in addition projective, this inclusion is an isomorphism. But as $U$ is a saturated open subset, we have $U=\pi^{-1}(\pi(U))=\pi^{-1}(U /\!\!/ G) = X^{ss}(L)$ as claimed. \end{proof} \begin{Rmk} We can generalize the setting above to $X$ being a normal variety if we work with $G$-linearized Weil divisors instead of line bundles, as described in \cite{hausenweil}. However, as our applications work with smooth $X$, we stay in the more classical setting of line bundles. \end{Rmk} An advantage of the condition ``$U /\!\!/ G$ is projective'' in comparison with ``$U$ is maximal with respect to saturated inclusion in $\mathcal{U}_{(X,G)}$'' is that it can be verified intrinsically, using only the action of $G$ on $U$ (without reference to the ambient variety $X$). Thus the definition of $\mathcal{U}^{\text{pr}}$ is compatible with the restriction to open $G$-invariant subsets in the following sense. \begin{Cor} \label{Cor:chamberopen} Let $X$ be a variety with an action of a reductive group $G$. Let $U_0 \subset X$ be an open $G$-invariant subset. Then we have $\mathcal{U}^{\text{pr}}_{(U_0,G)} = \mathcal{U}^{\text{pr}}_{(X,G)} \cap \{U: U \subset U_0\}$ (similarly for $\mathcal{U}^{\text{pr,g}}$). \end{Cor} \todo{Hilbert-Mumford implies this is a subcone, doesn't it? For every point in the complement, we have the condition that it is unstable, this is a half-space on the set of line bundles.} In the following two subsections, we are going to present situations where the dependence of $X^{ss}(L)$ on $L$ has been studied before and where the classes of $G$-linearized line bundles are partitioned into cones on which $X^{ss}(L)$ is constant. \subsection{Actions of reductive groups on smooth projective varieties} \label{Sect:projVGIT} Let $X$ be an irreducible, smooth projective variety acted upon by a connected, reductive linear algebraic group $G$. In this situation, Dolgachev and Hu defined in \cite{dolgachevhu} the $G$-ample cone $C^G(X) \subset \text{NS}^G(X)_\mathbb{R}$ inside the N\'eron--Severi group of $G$-linearized line bundles. It is spanned by the classes of $G$-linearized ample line bundles $L$ such that $X^{ss}(L) \neq \emptyset$. In \cite[Theorem 3.3.2]{dolgachevhu}, they show that if this cone has nonempty interior, it contains open chambers such that two elements $L,L' \in C^G(X)$ are in the same chamber $\sigma$ iff we have \[ X^{ss}(L) = X^s(L) = X^{s}(L') = X^{ss}(L')=:X^{ss}(\sigma).\] Furthermore, as $X$ is projective, for any $L$ in a chamber as above, we have that $X^{ss}(L) /\!\!/ G$ is projective. This shows that the set of chambers of $C^G(X)$ injects into $\mathcal{U}^{\text{pr,g}}_{(X,G)}$ by sending a chamber $\sigma$ to $X^{ss}(\sigma)$. Note, however, that this inclusion can be strict: in \cite{bbexotic} Bia{\l}ynicki-Birula and {\'S}wi{\polhk{e}}cicka give an example of a smooth projective variety $X$ with an action of a torus $T$ together with an open set $U \subset X$ which has a projective geometric quotient $U /\!\!/ T$ but is not of the form $X^s(L)$ for $L$ ample and $G$-linearized. 
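As an illustration of this chamber structure, we recall a classical example (included here only for orientation; the corresponding $\text{PGL}_2$-situation is taken up again in Section \ref{Sect:Exa}).
\begin{Exa*} Let $G=\text{SL}_2$ act diagonally on $X=(\mathbb{P}^1)^n$ and linearize in $L=\mathcal{O}(a_1,\ldots,a_n)$ with all $a_i>0$. The Hilbert-Mumford criterion shows that a configuration $(x_1,\ldots,x_n)$ is semistable iff for every point $p\in\mathbb{P}^1$ we have $\sum_{i:\, x_i=p} a_i\leq \frac{1}{2}\sum_{i} a_i$, and stable iff all these inequalities are strict. Thus $X^{ss}(L)=X^{s}(L)$ precisely when no subset of the weights sums to exactly half the total weight, and the finitely many walls $\sum_{i\in I}a_i=\frac{1}{2}\sum_i a_i$, $I\subset\{1,\ldots,n\}$, cut the $G$-ample cone into the chambers described above. \end{Exa*}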
For a treatment of the behaviour of $X^{ss}(L)$ when $L$ is outside the ample cone see \cite{beyondample}. However, for certain special varieties $X$ the correspondence between chambers of $C^G(X)$ and elements of $\mathcal{U}^{\text{pr,g}}_{(X,G)}$ is bijective. \begin{Pro} \label{Pro:Xsmoothproj} Let $X$ be an irreducible, smooth projective variety acted upon by a connected, reductive linear algebraic group $G$. Assume that every effective divisor is semiample (i.e. some positive power is base-point free) and that $C^G(X)$ has nonempty interior with all walls having positive codimension. Then the chambers of $C^G(X)$ are in bijection with $\mathcal{U}^{\text{pr,g}}_{(X,G)}$ via $\sigma \mapsto X^{ss}(\sigma)$. \end{Pro} \begin{proof} Let $U \in \mathcal{U}^{\text{pr,g}}_{(X,G)}$ then we need to show that $U$ is of the form $U=X^s(L') = X^{ss}(L')$ for some ample $G$-linearized line bundle $L'$. By Lemma \ref{Lem:ULcorr} a priori we only know that $U=X^{ss}(L)$ for some $G$-linearized (not necessarily ample) $L$. As $U \to U/\!\!/ G$ is a geometric quotient, all orbits in $U$ are fibres of this map and hence closed, so $U=X^s(L)=X^{ss}(L)$. As $U$ is nonempty, the bundle $L$ must have at least one section, so its associated divisor is effective and thus semiample by assumption. We want to show that for $m$ sufficiently large and a suitable $L_0$ in the interior of $C^G(X)$ the line bundle $L'=L^{\otimes m} \otimes L_0$ satisfies $U \subset X^s(L')=X^{ss}(L')$. Then as in the proof of Lemma \ref{Lem:ULcorr} we see that this inclusion is already the identity. But as such $L'$ are ample and $G$-effective, this finishes the proof. In the following, we can use the Hilbert-Mumford criterion to determine the (semi)stable points of $L'$. For this recall from \cite[Section 1.1]{dolgachevhu} the construction of the function $M^\bullet(x): \text{Pic}^G(X)_{\mathbb{R}} \to \mathbb{R}$ for $x \in X$. To define it let $\lambda : \mathbb{C}^* \to G$ be a $1$-parameter subgroup of $G$ then, as $X$ is proper, the map $\mathbb{C}^* \to X, t \mapsto \lambda(t).x$ has a limit $z$ over $t=0$. The point $z$ is fixed by $\lambda$ and for $L \in \text{Pic}^G(X)$, $\mathbb{C}^*$ acts on the fibre $L_z$ of $L$ over $z$ with weight $r=:\mu(x,\lambda)$. Let $T$ be a maximal torus in $G$ and let $\|\ \|$ be a Weyl-invariant norm on the group of $1$-parameter subgroups of $T$ tensor $\mathbb{R}$. Then for any $1$-parameter subgroup $\lambda$ of $G$ define $\| \lambda \|$ to be the norm of a suitable conjugate of $\lambda$ contained in $T$. We set \[M^L(x) = \sup_{\lambda \text{ 1-PSG of }G} \frac{\mu^L(x,\lambda)}{\|\lambda\|}.\] By \cite[Lemma 3.2.5.]{dolgachevhu} the function $M^\bullet(x)$ factors through $\text{NS}^G(X)_\mathbb{R}$ and satisfies \[M^{L_1 + L_2}(x) \leq M^{L_1}(x) + M^{L_2}(x), M^{m L}(x) = m M^{L}(x)\] for $L_1, L_2 \in \text{Pic}^G(X)$, $m>0$. For $L'$ ample $G$-linearized, $x$ is semistable (properly stable) with respect to $L'$ iff $M^{L'}(x) \leq 0$ ($M^{L'}(x) < 0$). For our given $L$, we first show that $M^L(x)<0$ for all $x \in X^s(L)$. Observe that as $L$ is semiample, by \cite[Corollary 1]{semiample} there exists a $1$-parameter subgroup $\lambda$ of $G$ with $M^L(x) = \mu^L(x,\lambda)/\|\lambda\|$. Thus it suffices to show $\mu^L(x,\lambda) <0$ for all $1$-parameter subgroups $\lambda$. As $x$ is stable, there exists an invariant section $s$ of some tensor power of $L$ with $x \in X_s$ and $Gx \subset X_s$ closed. 
We claim that then $z=\lim_{t \to 0} \lambda(t)x$ is not contained in $X_s$. Indeed assume otherwise, then $z\in \overline{Gx} \cap X_s = Gx$, so $z=gx$ for some $g \in G$. However then the stabilizer $G_{gx}$ contains all of $\lambda(\mathbb{C}^*)$, so it is not finite. But then also the stabilizer of $x$ is not finite and we obtain a contradiction, as our assumptions imply that all stable points are properly stable\footnote{In \cite{dolgachevhu} finiteness of stabilizers was part of the definition of a stable point.}. We conclude that $s(z)=0$ and by \cite[Proposition 1]{semiample} this implies $\mu(x,\lambda)<0$. Let $L_0 \in C^G(X)$ such that for all $0 < r \ll 1$ we have that $L + rL_0$ is contained in a chamber of $C^G(X)$. As there are only finitely many walls (\cite[Theorem 3.3.3]{dolgachevhu}), which are all of positive codimension, such $L_0$ exist. For $m \gg 0$ an integer, we have that $L'=L^{\otimes m} \otimes L_0$ is ample and for a fixed $x \in X^s(L)$ we know \[M^{L'}(x) \leq m M^{L}(x) + M^{L_0}(x).\] As $M^L(x)<0$ we can choose $m$ sufficiently big such that $M^{L'}(x)<0$ and hence $x \in X^s(L')$. The subsets \[Y_m = X^s(L) \setminus X^s(L^{\otimes m} \otimes L_0)\] form a descending chain of closed subsets of $X^s(L)$ and for all $x \in X^s(L)$ there exists $m$ with $x \notin Y_m$. By Noetherian induction we can thus choose $m_0$ such that $U=X^s(L) \subset X^s(L^{\otimes m} \otimes L_0)$ for all $m \geq m_0$. But by the choice of $L_0$ we have that $L'=L^{\otimes m} \otimes L_0$ is contained in a chamber of $C^G(X)$ for $m$ sufficiently large. \end{proof} The condition that every effective divisor $D$ is semiample is for instance satisfied for homogeneous projective varieties $X=G/P$. Indeed, in this case \[G \mapsto \text{Pic}(X), g \mapsto \mathcal{O}(g.D),\] where $g.D$ is the translate of $D$ by $g$, is a family of line bundles over $G$. By \cite[Proposition 7]{popov}, $\text{Pic}(X)$ is discrete and hence the map above is constant and equal to $\mathcal{O}(D)$. But as the $G$-translates of $X \setminus D$ cover $X$, this shows that $\mathcal{O}(D)$ is base-point free. \todo{ Mention: \\ The criterion is also satisfied for smooth projective curves (by Riemann-Roch) and abelian varieties (by the Theorem of the Square). ?} \subsection{Toric quotients of affine space} \label{Sect:toric} In this section, we explain how for linear actions $(\mathbb{C}^*)^n \curvearrowright \mathbb{C}^r$ we can compute open sets in $\mathcal{U}_{(\mathbb{C}^r,(\mathbb{C}^*)^n)}$ via elementary and algorithmically accessible operations involving fans and polyhedra. We closely follow \cite[Section 14]{toric} in notation and presentation. Let an algebraic torus $G=(\mathbb{C}^*)^{n}$ act faithfully, linearly on the affine space $X=\mathbb{C}^r$. By a suitable change of coordinates, we may assume that $G$ acts by diagonal matrices. For $\textbf t = (t_1, \ldots, t_n) \in G$ and $\beta = (\beta^1, \ldots, \beta^n) \in \mathbb{Z}^n$ write \[\textbf t^\beta = t_1^{\beta^1} t_2^{\beta^2} \cdots t_n^{\beta^n}.\] Then after coordinate change, the action of $\textbf t \in G$ on $x \in \mathbb{C}^r$ is given by \[\textbf t.x = \text{diag}(\textbf t^{\beta_1}, \ldots, \textbf t^{\beta_r}) x\] for integer vectors $\beta_1, \ldots, \beta_r \in \mathbb{Z}^n$. 
Note that via the identification of $\mathbb{Z}^n$ with the character group $\widehat G$ of $G$, the $\beta_i$ are simply the restrictions of the characters $\textbf{t} \mapsto t_i$ of $(\mathbb{C}^*)^r$ along the map $(\mathbb{C}^*)^n \to (\mathbb{C}^*)^r \subset \text{GL}(\mathbb{C}^r)$ specifying the action. Let \[\gamma: \mathbb{Z}^r \cong \widehat{(\mathbb{C}^*)^r} \to \widehat G \cong \mathbb{Z}^n\] be this restriction map (such that $\gamma(e_i)=\beta_i$). The assumption that the action is faithful implies that $\gamma$ is surjective (\cite[Lemma 14.2.1]{toric}). Let $\delta: M \to \mathbb{Z}^r$ be the kernel of $\gamma$. Setting $N=\Hom(M,\mathbb{Z})$, the map $\delta$ is given by \[\delta(m)=(\langle m, \nu_1 \rangle, \ldots, \langle m, \nu_r \rangle)\] for some $\nu_1, \ldots, \nu_r \in N$. Below we will see that the vectors $\beta_1, \ldots, \beta_r$ control the linearizations and GIT-chambers for quotients of $\mathbb{C}^r$ by $G$ and these quotients are toric varieties of fans in $N_\mathbb{R}=N \otimes_{\mathbb{Z}} \mathbb{R}$ with rays spanned by some of the vectors $\nu_1, \ldots, \nu_r$. For this note that, as all line bundles on $\mathbb{C}^r$ are trivial, the $G$-linearized line bundles $L=\mathbb{C}^r \times \mathbb{C} \to \mathbb{C}^r$ are specified by characters $\chi \in \widehat G$ via \[\textbf t . (x,y) = (\textbf t.x, \chi(\textbf t) y),\text{ with }\textbf t \in G, x \in \mathbb{C}^r, y \in \mathbb{C}.\] Denote by $(\mathbb{C}^r)^{ss}_{\chi}, (\mathbb{C}^r)^{s}_{\chi}$ the (semi)stable points with respect to these linearizations and by $\mathbb{C}^r /\!\!/_\chi G$ the categorical quotient of $(\mathbb{C}^r)^{ss}_{\chi}$ by $G$. Let $\widehat G_\mathbb{R} = \widehat G \otimes_{\mathbb{Z}} \mathbb{R}$ and let $C_\beta \subset \widehat G_\mathbb{R}$ be the cone spanned by $\beta_1 \otimes 1, \ldots, \beta_r \otimes 1$. Note that as $\gamma$ was surjective, we have $\text{dim}\ C_\beta = \text{dim}\ \widehat G_\mathbb{R}$. Then we have the following results: \begin{enumerate} \item The set $(\mathbb{C}^r)^{ss}_{\chi}$ of semistable points is nonempty iff $\chi \otimes 1 \in C_\beta$. (\cite[Proposition 14.3.5]{toric}) \item The set $(\mathbb{C}^r)^{s}_{\chi}$ of stable points is nonempty iff $\chi \otimes 1$ is in the interior of $C_\beta$. (\cite[Proposition 14.3.5]{toric}) \item The quotient $\mathbb{C}^r /\!\!/_\chi G$ is projective for some $\chi \otimes 1 \in C_\beta$ iff all $\beta_i$ are nonzero and $C_\beta$ is strongly convex (i.e. $C_\beta \cap (- C_\beta) = \{0\}$). In this case all nonempty quotients $\mathbb{C}^r /\!\!/_\chi G$ are projective. (\cite[Proposition 14.3.10]{toric}) \item We have $(\mathbb{C}^r)^{s}_{\chi} = (\mathbb{C}^r)^{ss}_{\chi}$ iff $\chi \otimes 1$ does not lie on a cone $C_{\beta'}$ generated by a subset $\beta'$ of the $\beta_i$ with $\text{dim}\ C_{\beta'} < \text{dim}\ C_{\beta}$. (\cite[Theorem 14.3.14]{toric}) \end{enumerate} In fact the behaviour of $(\mathbb{C}^r)^{ss}_{\chi}$ as $\chi \otimes 1$ varies in $C_\beta$ is completely determined by the so-called secondary fan $\Sigma_{\text{GKZ}}$ (see \cite[Theorem 14.4.7]{toric}). This is a rational fan in $\widehat G_\mathbb{R}$ with support $C_\beta$ such that $(\mathbb{C}^r)^{ss}_{\chi}$ is constant for $\chi \otimes 1$ moving in the relative interior of any of the cones $\sigma \in \Sigma_{\text{GKZ}}$. Let $\chi \in \widehat G \cap C_\beta$ be given by $\chi = \sum_{i=1}^r a_i \beta_i$, i.e. $\chi=\gamma(\textbf{a})$. 
Define the polyhedron \[P_{\textbf a} = \{m \in M_\mathbb{R}: \langle m, \nu_i \rangle \geq -a_i,\ 1 \leq i \leq r\} \subset M_\mathbb{R}= M \otimes_{\mathbb{Z}} \mathbb{R}.\] Let $\Sigma_\chi$ be the normal fan of $P_{\textbf a}$ (see \cite[Proposition 14.2.10]{toric}); it is independent of the choice of ${\textbf a} \in \gamma^{-1}(\chi)$, and the quotient $\mathbb{C}^r /\!\!/_{\chi} G$ is isomorphic to the toric variety associated to $\Sigma_\chi$ (\cite[Theorem 14.2.13]{toric}). Moreover, we have an explicit description of the set $(\mathbb{C}^r)^{ss}_{\chi}$ of semistable points. Set \[I_{\emptyset,\chi} = \{i \in \{1, \ldots, r\}: P_{\textbf a} \cap \{m: \langle m, \nu_i \rangle = -a_i\} = \emptyset \}.\] Define the ideal \[B(\Sigma_\chi, I_{\emptyset,\chi}) = \left(\prod_{i \notin I_{\emptyset,\chi}:\, \nu_i \notin \sigma} x_i: \sigma \in \Sigma_\chi \right) \cdot \left( \prod_{i \in I_{\emptyset,\chi}} x_i \right)\] in $\mathbb{C}[x_1, \ldots, x_r]$. Then $(\mathbb{C}^r)^{ss}_{\chi} = \mathbb{C}^r \setminus V(B(\Sigma_\chi, I_{\emptyset,\chi}))$ (\cite[Corollary 14.2.22]{toric}). In fact, the fan $\Sigma_\chi$ and the set $I_{\emptyset, \chi}$ of indices are also constant on the relative interior of the cones of $\Sigma_{\text{GKZ}}$, and these cones are uniquely indexed by this data and written as $\Gamma_{\Sigma, I_\emptyset}$. \begin{Pro} \label{Pro:toriccorr} The map $\Sigma_{\text{GKZ}} \to \mathcal{U}_{(\mathbb{C}^r,G)}$ associating to a cone $\Gamma_{\Sigma, I_\emptyset}$ the set $(\mathbb{C}^r)^{ss}_{\chi}$ for any $\chi \otimes 1$ in the relative interior of $\Gamma_{\Sigma, I_\emptyset}$ is well-defined and injective. If all vectors $\beta_i$ are nonzero and the cone $C_\beta$ is strongly convex, the map above is a bijection from $\Sigma_{\text{GKZ}}$ to $\mathcal{U}^{\text{pr}}_{(\mathbb{C}^r,G)}$ sending chambers to elements of $\mathcal{U}^{\text{pr},g}_{(\mathbb{C}^r,G)}$. Conversely, every $U \in \mathcal{U}_{(\mathbb{C}^r,G)}$ is a saturated open set of some $(\mathbb{C}^r)^{ss}_{\chi}$. \end{Pro} \begin{proof} We have already remarked that in the relative interior of a cone $\sigma$, the set $(\mathbb{C}^r)^{ss}_{\chi}$ is constant, so we show injectivity. Assume that $\Gamma_{\Sigma, I_\emptyset}$ and $\Gamma_{\Sigma', I_\emptyset'}$ map to the same set $U$ of semistable points. Then the vanishing ideal $I$ of $\mathbb{C}^r \setminus U$ is the radical of the ideals $B(\Sigma, I_\emptyset), B(\Sigma', I_\emptyset')$ as defined above. But as these two are ideals generated by square-free monomials, they are already radical (see for instance \cite[Corollary 1.2.5]{monomial}), so $B(\Sigma, I_\emptyset) = B(\Sigma', I_\emptyset')$. By \cite[Corollary 14.4.15]{toric} this implies $\Gamma_{\Sigma, I_\emptyset} = \Gamma_{\Sigma', I_\emptyset'}$. The additional conditions on the $\beta_i$ guarantee that all quotients $(\mathbb{C}^r)^{ss}_\chi /\!\!/ G$ for $\chi \otimes 1 \in C_\beta$ are projective. Assume conversely that we have $U\in \mathcal{U}^{\text{pr,g}}_{(\mathbb{C}^r,(\mathbb{C}^*)^n)}$; then by Lemma \ref{Lem:ULcorr} it is of the form $U=X^{ss}(L)$ for $L$ a $G$-linearized line bundle corresponding to the character $\chi$ of $G$. As this set is nonempty, we have $\chi \otimes 1 \in C_\beta$ and, as the fan $\Sigma_{\text{GKZ}}$ has support $C_\beta$, it is contained in the relative interior of one of its cones. The last statement above is again Lemma \ref{Lem:ULcorr}. \end{proof}
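To see this machinery at work in the simplest possible case, here is a small worked illustration (ours; it follows immediately from the definitions above and is not needed in the sequel).
\begin{Exa*} Let $G=\mathbb{C}^*$ act on $\mathbb{C}^2$ with weights $\beta_1=\beta_2=1$, so that $\gamma:\mathbb{Z}^2\to\mathbb{Z}$ is $(a_1,a_2)\mapsto a_1+a_2$ and $C_\beta=\mathbb{R}_{\geq 0}$. The kernel of $\gamma$ is $M=\mathbb{Z}\cdot(1,-1)$, so $N\cong \mathbb{Z}$ with $\nu_1=1$ and $\nu_2=-1$. For the character $\chi=1=\gamma(1,0)$, i.e.\ $\textbf a=(1,0)$, we get $P_{\textbf a}=\{m\in M_\mathbb{R}: m\geq -1,\ -m\geq 0\}=[-1,0]$, whose normal fan $\Sigma_\chi$ has the maximal cones $\mathbb{R}_{\geq 0}$ and $\mathbb{R}_{\leq 0}$, i.e.\ the fan of $\mathbb{P}^1$. Both facets of $P_{\textbf a}$ are nonempty, so $I_{\emptyset,\chi}=\emptyset$ and $B(\Sigma_\chi,I_{\emptyset,\chi})=(x_2,x_1)=(x_1,x_2)$. Hence $(\mathbb{C}^2)^{ss}_{\chi}=\mathbb{C}^2\setminus\{0\}$ and $\mathbb{C}^2/\!\!/_{\chi}\mathbb{C}^* = \mathbb{P}^1$. Since $\chi\otimes 1$ lies in the interior of $C_\beta$ and on no lower-dimensional cone $C_{\beta'}$, stable and semistable points coincide, in accordance with (2) and (4) above; and since both $\beta_i$ are nonzero and $C_\beta$ is strongly convex, the quotient is projective, in accordance with (3). \end{Exa*}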
\section{Properties of \texorpdfstring{$H$}{H}-lifts} \label{Sect:Hlifts} We are now ready to prove Theorem \ref{Theo:lift}. For this, we need the following technical result, which we prove here for lack of a good reference. \begin{Lem} \label{Lem:epi} Let $\pi: Z \to X$ be a surjective morphism of schemes with $X$ reduced. Then $\pi$ is an epimorphism, i.e.\ two maps $\varphi_1, \varphi_2 : X \to Y$ to some scheme $Y$ agree iff $\varphi_1 \circ \pi = \varphi_2 \circ \pi$. In particular, good quotients of reduced schemes are epimorphisms. \end{Lem} \begin{proof} For morphisms $\varphi_1, \varphi_2$ as above we have a fibred diagram \begin{center} \begin{tikzcd} W \arrow{r}\arrow{d} & Y\arrow{d}{\delta_Y}\\ X \arrow{r}{(\varphi_1, \varphi_2)} & Y \times Y \end{tikzcd} \end{center} where $\delta_Y$ is the diagonal map of $Y$, which is a locally closed embedding. In particular, $W \to X$ is also a locally closed embedding. Assume that $\varphi_1 \circ \pi = \varphi_2 \circ \pi$; then by definition the map $\pi$ factors through $W \to X$. In particular, $W \to X$ is surjective and hence a closed embedding. But as $X$ is reduced, this means that it is an isomorphism. Then the diagram above shows that $\varphi_1 = \varphi_2$. In particular, if $\pi$ is a good quotient and $Z$ is reduced, so is $X$ and thus the assumptions above are satisfied. \end{proof} \begin{proof}[Proof of Theorem \ref{Theo:lift}] We first note that as $\pi$ is surjective, the $G$-invariant subsets $U$ of $X$ inject via $\pi^{-1}$ into the $\widehat G$-invariant subsets $\widehat U$ of $\widehat X$. If $\pi$ is a geometric quotient this is a bijection with inverse map given by $\widehat U \mapsto \pi(\widehat U)$. This is well-defined because $\pi$ sends open $H$-invariant sets to open sets, and it is an inverse to $\pi^{-1}$ as the fibres of $\pi$ are orbits. Before we continue, recall the following fact, which is Lemma 5.1 in \cite{ramanathan}. Let a reductive algebraic group $G'$ act on schemes $Y,Z$. If $Y \to Z$ is an affine, $G'$-equivariant morphism and $Z \to Z /\!\!/ G'$ is a good quotient, then $Y$ also has a good quotient $Y \to Y /\!\!/ G'$ and the induced morphism $Y /\!\!/ G' \to Z /\!\!/ G'$ is affine. First assume that $U \in \mathcal{U}_{(X,G)}$, so we have a good quotient $U \to U /\!\!/ G = U /\!\!/ \widehat G$. The map $\pi|_{\pi^{-1}(U)}$ is $\widehat G$-equivariant and affine. Then by the result above, $\pi^{-1}(U)$ has a good quotient $\pi^{-1}(U) /\!\!/ \widehat G$, which maps to $U /\!\!/ G$ via a map $\psi$. We want to show $\psi$ is an isomorphism, so we construct an inverse $\varphi$. The $H$-invariant map $\pi^{-1}(U) \to \pi^{-1}(U) /\!\!/ \widehat G$ factors uniquely through a map $\varphi': U \to \pi^{-1}(U) /\!\!/ \widehat G$, as $\pi$ is a categorical $H$-quotient, and $\varphi'$ is $G$-invariant. But as $U \to U /\!\!/ G$ is a quotient for the $G$-action on $U$, the map $\varphi'$ factors uniquely through some map $\varphi: U /\!\!/ G \to \pi^{-1}(U) /\!\!/ \widehat G$. 
We can write the following commutative diagram \begin{center} \begin{tikzcd} \pi^{-1}(U) \arrow{r}{\pi} \arrow{d} & U \arrow{d} \arrow{dl}{\varphi'} & \pi^{-1}(U)\arrow{l} \arrow{r}\arrow{d} & U\arrow{d}\\ \pi^{-1}(U) /\!\!/ \widehat G & U /\!\!/ G \arrow{l}{\varphi} & \pi^{-1}(U) /\!\!/ \widehat G \arrow{l}{\psi} & U /\!\!/ G \arrow{l}{\varphi} \end{tikzcd} \end{center} Via diagram chase and using that good quotients of reduced schemes are epimorphisms (Lemma \ref{Lem:epi}), we conclude that $\varphi \circ \psi$ and $\psi \circ \varphi$ are both the identity on their domains. If $U \to U /\!\!/ G$ and $\pi$ are geometric quotients, then the preimage of some geometric point $p \in U /\!\!/ G$ in $U$ is a $G$-orbit and thus its preimage in $\pi^{-1}(U)$ is a $\widehat G$-orbit, hence $\pi^{-1}(U) \to U /\!\!/ G$ is a geometric quotient. Now assume that $\pi^{-1}(U)$ has a good quotient map $\pi^{-1}(U) \to \pi^{-1}(U) /\!\!/ \widehat G$. For the trivial $H$-action on the latter space, this is a $H$-equivariant affine map and clearly the identity on $\pi^{-1}(U) /\!\!/ \widehat G$ is a good quotient for the trivial $H$-action. Thus by the result from \cite{ramanathan}, the $H$-action on $\pi^{-1}(U)$ has a good quotient (which is isomorphic to $U$, as $\pi$ is a good $H$-quotient) and the map $\psi: U \to \pi^{-1}(U) /\!\!/ \widehat G$ is affine. To show that it is a good quotient, we consider the diagram \begin{center} \begin{tikzcd} \pi^{-1}(U) \arrow{r}{\pi} \arrow{d} & U\arrow{dl}{\psi}\\ \pi^{-1}(U) /\!\!/ \widehat G \end{tikzcd} \end{center} and use that $\pi$ is an epimorphism to show that $\psi$ is surjective, $G$-invariant and sends disjoint closed $G$-invariant sets to disjoint closed sets. Given a $G$-invariant local function $f$ on $U$, the function $f \circ \pi$ is $\widehat G$-invariant, so it factors uniquely through some function $g$ on $\pi^{-1}(U) /\!\!/ \widehat G$. Again using that $\pi$ is an epimorphism, we see $f=g \circ \psi$, so indeed $f$ factors through $\pi^{-1}(U) /\!\!/ \widehat G$. Thus $\psi$ is a good $G$-quotient. If $\pi^{-1}(U) \to \pi^{-1}(U) /\!\!/ \widehat G$ is a geometric quotient, its geometric fibres are orbits of $\widehat G$, so the fibres in $U$ are $G$-orbits and thus $\psi$ is a geometric quotient. \end{proof} Instead of looking at the correspondence of open sets admitting a good quotient induced by $H$-lifts, we can also directly consider the behaviour of equivariant Picard groups and the corresponding (semi)stable sets. Here we have the following result. \begin{Pro} \label{Pro:Piccorr} Let $\pi: \widehat X \to X$ be a good $H$-lift of $(X,G)$ for $X$ a variety with an action of the reductive group $G$. Then pullback by $\pi$ induces an map $\pi^* : \text{Pic}^G(X) \to \text{Pic}^{\widehat G}(\widehat X)$ and we have \[\widehat X^{ss}(\pi^* L) = \pi^{-1}(X^{ss}(L))\] for $L \in \text{Pic}^G(X)$. If $\pi$ is a geometric $H$-lift and $H$ acts freely on $\widehat X$, the map $\pi^*$ is an isomorphism. \end{Pro} \begin{proof} Via the map $\widehat G \to G=\widehat G / H$ we have a natural map $\text{Pic}^G(X) \to \text{Pic}^{\widehat G}(X)$ by extending a $G$-action on a line bundle $L$ to a $\widehat G$-action. For the $\widehat G$-equivariant morphism $\pi$ we then have a natural pullback map $\text{Pic}^{\widehat G}(X) \to \text{Pic}^{\widehat G}(\widehat X)$ and the map $\pi^*$ above is the composition of these two homomorphisms. 
Fix $L$ (the total space of) a $G$-linearized line bundle on $X$ and let $p: L \to X$ be the corresponding $G$-equivariant morphism. Then we have a cartesian diagram \begin{equation*} \begin{tikzcd} \pi^*(L) \arrow{r} \arrow{d}{\widehat p} & L\arrow{d}{p}\\ \widehat X \arrow{r}{\pi} & X \end{tikzcd} \end{equation*} where all maps are $\widehat G$-equivariant. Now $G$-invariant global sections of $L$ are $G$-equivariant sections of $p$ and those correspond bijectively to $\widehat G$-equivariant sections of $\widehat p$, i.e. global sections of $\pi^*(L)$. Here we use that $\pi$ is a categorical $H$-quotient. Thus $\pi^*$ induces a natural isomorphism $\Gamma(L)^G \cong \Gamma(\pi^*(L))^{\widehat G}$. Of course this argument also works after replacing $L$ by $L^{\otimes k}$ for $k \geq 1$. Now let $x \in X^{ss}(L)$, then there exists a $G$-invariant section $s$ of some $L^{\otimes k}$ with $x \in X_s = \{x':s(x') \neq 0\}$ and $X_s$ is affine. But then $\pi^*s$ is a $\widehat G$-invariant section of $\pi^* L^{\otimes k}$ and $\widehat X_{\pi^* s} = \pi^{-1}(X_s)$ is affine as $\pi$ is an affine morphism. Hence all elements of $\pi^{-1}(x)$ are $\pi^*(L)$-semistable. Conversely for $\widehat x \in \widehat X^{ss}(\pi^*L)$ there exists a $\widehat G$-invariant section $\widehat s$ of some $\pi^* L^{\otimes k}$ with $\widehat x \in \widehat X_{\widehat s}$ and $\widehat X_{\widehat s}$ affine. By the argument above, $\widehat s = \pi^* s$ for some $G$-invariant section $s$ of $L^{\otimes k}$ and we only need to show $X_s$ affine. But clearly $\widehat X_{\widehat s} \to X_s$ is a categorical quotient of the affine variety $\widehat X_{\widehat s}$ by $H$ and thus $X_s$ is affine by \cite[Theorem 1.1]{git}. Hence $\pi(\widehat x)$ is $L$-semistable. If the action of $H$ is free on $\widehat X$, by \cite[Proposition 0.9]{git} the map $\pi$ is a fppf-locally trivial $H$-torsor. The fact that $\pi^*$ is an isomorphism $\text{Pic}^G(X) \to \text{Pic}^{\widehat G}(\widehat X)$ then follows from descent along torsors. A concise way to put the proof, using the language of stacks, is the following: the fact that $\pi$ is a $H$-torsor implies that there is a canonical isomorphism $X \cong [\widehat X/H]$. Taking the quotient stack under the actions of $G=\widehat G / H$ on both sides we have \[[X/G] \cong [[\widehat X/H]/(\widehat G/H)] \cong [\widehat X / \widehat G],\] where in the last isomorphism we use \cite[Remark 2.4]{romagny2005}. Taking Picard groups on both sides we see \[\text{Pic}^G(X) = \text{Pic}([X/G]) \cong \text{Pic}([\widehat X / \widehat G]) = \text{Pic}^{\widehat G}(\widehat X)\] and this isomorphism is exactly given by pullback via $\pi$. \end{proof} In the example presented in Section \ref{Sect:Exa}, all $H$-lifts that are used will come from a free $H$-action on $\widehat X$, so we have isomorphisms of Picard groups as above. \section{Applications} In this section we will see several situations, where $H$-lifts naturally appear and thus allow us to conclude results about the chamber-decompositions of $G$-effective cones. \subsection{Partial quotients} One possibility to construct $H$-lifts is basically a reformulation of the definition. \begin{Pro} \label{Pro:partquot} Let $\widehat X$ be a variety acted upon by a reductive group $\widehat G$ and assume a closed, normal subgroup $H \subset \widehat G$ acts on $\widehat X$ with a good (resp. geometric) quotient $\pi: \widehat X \to X$, where $X$ is a variety. 
Then $X$ carries an induced action of $\widehat G/H$ making $(\widehat X, \widehat G)$ a good (resp. geometric) $H$-lift of $(X,\widehat G/H)$. \end{Pro} Combined with Corollary \ref{Cor:chamberopen}, this tells us the following: assume we are given a $\widehat G$-action on $\widehat X$ and a closed normal subgroup $H$ of $\widehat G$ acting on the open, $\widehat G$-invariant set $U_0 \subset \widehat X$ with geometric quotient $U_0 /\!\!/ H$. Then the open sets $U \in \mathcal{U}^{\text{pr,g}}_{(\widehat X, \widehat G)}$ contained in $U_0$ are in bijection with $\mathcal{U}^{\text{pr,g}}_{(U_0 /\!\!/ H,\widehat G/H)}$, which is a problem on a smaller-dimensional variety. \subsection{Morphisms to homogeneous spaces} \begin{Pro} \label{Pro:homspace} Let a reductive group $G$ act on an irreducible variety $X$ and assume we are given a $G$-equivariant morphism $\varphi: X \to Z$ to a homogeneous $G$-space $Z$ (i.e. the action of $G$ on $Z$ is transitive). Let $z_0 \in Z$ be a closed point and let $H=G_{z_0}$ be its stabilizer in $G$, which we assume to be reductive. Consider the variety \[\widehat X = \{(g,x) : \varphi(g x) = z_0\} \subset G \times X\] with the action of $G \times H$ given by \[(g',h).(g,x) = (h g(g')^{-1},g'x).\] Then the projection \[\pi_X : \widehat X \to X, (g,x) \mapsto x\] makes $(\widehat X, G \times H)$ a geometric $H$-lift for $(X,G)$. On the other hand, for $Y=\varphi^{-1}(z_0) \subset X$ with the induced action of $H=G_{z_0}$, the map \[\pi_Y : \widehat X \to Y, (g,x) \mapsto gx\] makes $(\widehat X, G \times H)$ a geometric $G$-lift of $(Y,H)$. \end{Pro} \begin{proof} By definition of $Y$ and $\widehat X$, we have cartesian diagrams \begin{equation}\label{eqn:DefhatXY} \begin{tikzcd} \widehat X \arrow{r}{\pi_Y} \arrow{d} & Y \arrow{r} \arrow{d} &\{z_0\} \arrow{d}\\ G \times X \arrow{r}{\sigma} & X \arrow{r}{\varphi} & Z \end{tikzcd} \end{equation} where $\sigma$ is the action map of $G$ on $X$. As we are in characteristic zero and as $G \times X$ and $X$ are irreducible, the fibres over the generic point $\eta_Z$ of $Z$ under $\varphi$ and $\varphi \circ \sigma$ are geometrically reduced. Hence by \cite[Theorem 9.7.7]{egaIV3}, the set of closed points in $Z$ whose fibre under $\varphi$ and $\varphi \circ \sigma$ is geometrically reduced is open and nonempty. But because the $G$-action on $Z$ is transitive, all these fibres are isomorphic to $Y$ and $\widehat X$, respectively. Thus these are varieties over $\mathbb{C}$. From the formula for the action of $G \times H$ on $\widehat X$, it is clear that the maps $\pi_X, \pi_Y$ are $G \times H$-equivariant for the induced actions of $G=G\times H / H$ on $X$ and $H=G \times H / G$ on $Y$. For the map $\pi_X$, observe that we can obtain it using a different cartesian diagram, namely \[\begin{tikzcd} \widehat X \arrow{r} \arrow{d}{\pi_X} & G \arrow{d}{\psi}\\ X \arrow{r}{\varphi} & Z \end{tikzcd} \] where $\psi(g) = g^{-1} z_0$. Clearly $\psi$ is an fpqc-locally trivial $H$-torsor representing $Z$ as the quotient $G/H$. But then its base change $\pi_X$ via $\varphi$ is still an fpqc-locally trivial $H$-torsor and thus a geometric quotient. On the other hand, for $\pi_Y$ we see from the diagram (\ref{eqn:DefhatXY}) that it is a base change of the map $\sigma$, which clearly is a (trivial) $G$-torsor (using the automorphism $(g,x) \mapsto (g,g^{-1}x)$ of $G \times X$). Thus it is a (trivial) $G$-torsor itself and hence a geometric quotient. 
\end{proof} Using Theorem \ref{Theo:lift} we see that given a $G$-action on a variety $X$ and a subset $Y$ of $X$ obtained as the fibre of an equivariant map to a $G$-homogeneous space, we have a bijection between $\mathcal{U}_{(X,G)}$ and $\mathcal{U}_{(Y,G_Y)}$, where $G_Y$ is the subgroup of $G$ leaving $Y$ stable. Note that in \cite[Definition 15.1]{bbquotients} a subvariety $Y \subset X$ such that for all $y \in Y$ we have $H=\{g \in G: gy \in Y\}$ is called a strong $H$-section of $X$. If $X$ is normal, \cite[Lemma 15.2]{bbquotients} says that for such a $Y$ the morphism $G \times_H Y \to X$ given by $[(g,y)] \mapsto gy$ is a $G$-isomorphism. Using this isomorphism, we have a $G$-equivariant projection $X \cong G \times_H Y \to G / H$ with fibre $Y$ over $[e] \in G /H$, placing us in the situation of Proposition \ref{Pro:homspace}. Again it has been noted before that $Y$ has a good/geometric $H$-quotient iff $X$ has a good/geometric $G$-quotient (\cite[Corollary 15.3]{bbquotients}). \section{Example} \label{Sect:Exa} To illustrate how the techniques above can be used in practice, consider the diagonal action of $G=\text{PGL}_2$ on $X=(\mathbb{P}^1)^n$. We demonstrate how some of the chambers of the $G$-effective cone can be related to chambers of the cone $C_\beta$ for a linear action of $(\mathbb{C}^*)^{n-1}$ on $\mathbb{C}^{2n-4}$. Here we can compute the chamber decomposition as well as the resulting quotient varieties using the toric methods we recalled in Section \ref{Sect:toric}. The action of $G$ on $X$ has been intensely studied in the past (\cite{git}, \cite{polito}, \cite{ballquot}, \cite{weighted}). The line bundle $\mathcal{O}(a_1, \ldots, a_n)$ on $X$ carries a (unique) $G$-linearization iff the sum of the $a_i$ is even, so \[\text{Pic}^G(X) \cong \left\{(a_1, \ldots, a_n) \in \mathbb{Z}^n: \sum_{i=1}^n a_i \equiv 0 \text{ mod } 2\right\} \subset \mathbb{Z}^n\] and the effective cone is given by \[(\mathbb{R}_{\geq 0})^n \subset \mathbb{R}^n = \text{Pic}^G(X)_{\mathbb{R}}.\] We can analyze (semi)stability with respect to a given polarization using the Hilbert-Mumford numerical criterion. For $a=(a_1, \ldots, a_n) \in (\mathbb{Z}_{>0})^n$ with $|a|=\sum_{i=1}^n a_i$ even, a point $p=(p_1, \ldots, p_n) \in X$ is semistable with respect to $\mathcal{O}(a_1, \ldots, a_n)$ iff for all $q \in \ensuremath{\mathbb{P}}^1$ we have $\sum_{i: p_i=q} a_i \leq |a|/2$. The point $p$ is stable iff all inequalities above are strict. From this we see that the $G$-effective ample cone is given by \[C^G(X) = \left\{(a_1, \ldots, a_n) \in \mathbb{R}^n: 0 < a_i \leq \sum_{j \neq i} a_j\right\} \subset \mathbb{R}^n.\] The criterion above also gives an explicit identification of the VGIT-chamber structure of $C^G(X)$. For $S \subset \{1, \ldots, n\}$ consider the half-space \[H_S = \left\{(a_1, \ldots, a_n) \in \mathbb{R}^n: \sum_{i \in S} a_i \geq \sum_{i \in \{1,\ldots, n\} \setminus S} a_i\right\} \subset \mathbb{R}^n.\] Then the hyperplanes corresponding to the half-spaces above divide $C^G(X)$ into connected components, which are exactly the chambers of the VGIT-decomposition as in Section \ref{Sect:projVGIT}. Though it is easy to determine the various chambers, it is more difficult to compute the quotients associated to them. In \cite{ballquot}, some of the quotients are computed for $n=5,6,7,8$. Using the techniques from the previous sections, we are able to compute these quotients for chambers contained in certain subcones of $C^G(X)$. 
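To make the semistability criterion concrete, we record a minimal worked example; it is the classical case $n=4$ with the symmetric weight, included only for illustration, and it plays no role in the arguments below. For $a=(1,1,1,1)$ we have $|a|/2=2$, so a configuration $p=(p_1,\ldots,p_4)$ satisfies \[p \in X^{ss}(\mathcal{O}(1,1,1,1)) \iff \#\{i : p_i = q\} \leq 2 \text{ for every } q \in \ensuremath{\mathbb{P}}^1,\] i.e. $p$ is semistable iff no point of $\ensuremath{\mathbb{P}}^1$ occurs more than twice among the $p_i$, and stable iff the four points are pairwise distinct. The weight $(1,1,1,1)$ lies on every wall $H_S$ with $|S|=2$, so it is not in the interior of a chamber and strictly semistable points exist. The resulting good quotient is the classical one: it is isomorphic to $\ensuremath{\mathbb{P}}^1$, and on the stable locus the quotient map is given by the cross-ratio of the four points.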
For the notation below, recall from Section \ref{Sect:toric} that given an action of a torus $T$ on $\mathbb{C}^k$, the set of linearizations of the action is given by the character group $\widehat T$. Inside $\widehat T \otimes \mathbb{R}$ we have a fan $\Sigma_{\text{GKZ}}$ such that the set of $\chi$-semistable points in $\mathbb{C}^k$ is constant as the linearization $\chi$ varies in the relative interior of the cones of $\Sigma_{\text{GKZ}}$, which are denoted by $\Gamma_{\Sigma,I_\emptyset}$. \begin{Theo} \label{Theo:Sl2Pn} Let $S \subset \{1, \ldots, n\}$ with $|S|=2$, then the chambers $\sigma$ of $C^G(X)$ contained in $H_S$ are in bijective correspondence to the chambers $\Gamma_{\Sigma, I_\emptyset} \in \Sigma_{\text{GKZ}}$ for the action of $T=(\mathbb{C}^*)^{n-1}$ on $\mathbb{C}^{2n-4}$ given by \begin{align} &(t_1, \ldots, t_{n-2},s).(x_1, y_1, x_2, y_2, \ldots, x_{n-2}, y_{n-2}) \label{eqn:Taction}\\ =&(t_1 x_1, s t_1 y_1, t_2 x_2, s t_2 y_2, \ldots, t_{n-2} x_{n-2},s t_{n-2} y_{n-2}). \nonumber \end{align} Under this correspondence, the quotient variety associated to $\sigma$ is the toric variety associated to the fan $\Sigma$. \end{Theo} \begin{proof} Assume for simplicity of notation $S=\{1,2\}$ below. As $X$ is smooth and as every effective divisor is semiample on $X$, by Proposition \ref{Pro:Xsmoothproj} the chambers of $C^G(X)$ are in bijection with the open sets in $\mathcal{U}^{\text{pr,g}}_{(X,G)}$ by sending $\sigma$ to $X^{ss}_\sigma$. Now for any $a \in C^G(X) \cap \mathrm{int}(H_S)$ and $p \in X$ semistable with respect to $a$ we know $p_1 \neq p_2$ by the Hilbert-Mumford criterion. Thus under the correspondence above, the chambers contained in $H_S$ correspond to the subset \[\mathcal{U}^{\text{pr,g}}_{((\ensuremath{\mathbb{P}}^1)^{n} \setminus \Delta_{12},G)} \subset \mathcal{U}^{\text{pr,g}}_{(X,G)},\] where $\Delta_{12}=\{(p_1, \ldots, p_n): p_1 = p_2\}$. So the projection $\varphi: (\ensuremath{\mathbb{P}}^1)^{n} \setminus \Delta_{12} \to \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1 \setminus \Delta=Z$ to the first two factors is a $G$-equivariant morphism to the homogeneous $G$-space $Z$. For $z_0=([0:1],[1:0]) \in Z$, the stabilizer $G_{z_0}$ is exactly the diagonal torus \[H=\mathbb{C}^*=\left\{\left[ \begin{pmatrix} 1 & 0\\0 & a \end{pmatrix} \right] : a \in \mathbb{C}^* \right\} \subset \text{PGL}_2.\] By Proposition \ref{Pro:homspace} we obtain a variety $\widehat X$ with an action of $G \times H$ which is a geometric $H$-lift for $(X,G)$ and a geometric $G$-lift for $(\varphi^{-1}(z_0), H) = ((\ensuremath{\mathbb{P}}^1)^{n-2}, \mathbb{C}^*)$. Here the action of $\mathbb{C}^*$ on $(\mathbb{P}^1)^{n-2}$ is given by \[a.([x_1,y_1], \ldots, [x_{n-2},y_{n-2}]) = ([x_1,a y_1], \ldots, [x_{n-2},a y_{n-2}]).\] By Theorem \ref{Theo:lift} the geometric $H$ and $G$-lifts above give a natural bijection \[\mathcal{U}^{\text{pr,g}}_{((\ensuremath{\mathbb{P}}^1)^{n} \setminus \Delta_{12},G)} = \mathcal{U}^{\text{pr,g}}_{((\ensuremath{\mathbb{P}}^1)^{n-2},\mathbb{C}^*)}.\] We now approach the pair $((\ensuremath{\mathbb{P}}^1)^{n-2}, \mathbb{C}^*)$ from a different angle. Of course the space $(\ensuremath{\mathbb{P}}^1)^{n-2}$ is a geometric quotient of $(\mathbb{C}^2 \setminus \{0\})^{n-2}$ by $(\mathbb{C}^*)^{n-2}$ via the action \begin{align*} &(t_1, \ldots, t_{n-2}).(x_1,y_1, \ldots, x_{n-2}, y_{n-2})\\ =&(t_1 x_1, t_1 y_1, t_2 x_2, t_2 y_2, \ldots, t_{n-2} x_{n-2}, t_{n-2} y_{n-2}). 
\end{align*} The action of $\mathbb{C}^*$ on $(\ensuremath{\mathbb{P}}^1)^{n-2}$ lifts to a linear action on the prequotient $\mathbb{C}^{2n-4}$, which commutes with the action of $(\mathbb{C}^*)^{n-2}$ above and together they determine the action of $(\mathbb{C}^*)^{n-1}$ given in (\ref{eqn:Taction}). By Proposition \ref{Pro:toriccorr} the chambers of the secondary fan $\Sigma_{\text{GKZ}}$ for this toric action correspond to elements of $\mathcal{U}^{\text{pr,g}}_{(\mathbb{C}^{2n-4}, (\mathbb{C}^*)^{n-1})}$ by sending a chamber $\Gamma_{\Sigma, I_{\emptyset}}$ to $(\mathbb{C}^{2n-4})^{ss}_{\chi}=(\mathbb{C}^{2n-4})^{s}_{\chi}$ for any $\chi$ contained in this chamber. However, for the action above no point $(x_1, y_1, \ldots, x_n, y_n)$ with $x_i = y_i=0$ for some $i$ can be stable (with respect to any character) as it has nonfinite stabilizer. Thus we have \[\mathcal{U}^{\text{pr,g}}_{(\mathbb{C}^{2n-4}, (\mathbb{C}^*)^{n-1})} = \mathcal{U}^{\text{pr,g}}_{((\mathbb{C}^2 \setminus \{0\})^{n-2}, (\mathbb{C}^*)^{n-1})}.\] Using Proposition \ref{Pro:partquot} the space $((\mathbb{C}^2 \setminus \{0\})^{n-2}, (\mathbb{C}^*)^{n-1})$ is a $(\mathbb{C}^*)^{n-2}$-lift of $((\ensuremath{\mathbb{P}}^1)^{n-2}, \mathbb{C}^*)$ as above, so we can identify \[\mathcal{U}^{\text{pr,g}}_{((\ensuremath{\mathbb{P}}^1)^{n-2},\mathbb{C}^*)} = \mathcal{U}^{\text{pr,g}}_{((\mathbb{C}^2 \setminus \{0\})^{n-2}, (\mathbb{C}^*)^{n-1})}.\] Combining the correspondences above (see also the diagram in Remark \ref{Rmk:overview}) we have proved the claim. \end{proof} \begin{Rmk} \label{Rmk:overview} In the situation of Theorem \ref{Theo:Sl2Pn} we can not only relate chambers of the $G$-effective cones for $((\ensuremath{\mathbb{P}}^1)^n,\text{PGL}_2)$ and $(\mathbb{C}^{2n-4},(\mathbb{C}^*)^{n-1})$ abstractly but we can actually find a linear map between the equivariant Picard groups inducing this correspondence. Recall from Proposition \ref{Pro:Piccorr} that for $\pi: \widehat X \to X$ a geometric $H$-lift with respect to a free $H$-action, the pullback by $\pi$ induces an isomorphism $\pi^*: \text{Pic}^G(X) \to \text{Pic}^{\widehat G}(\widehat X)$ with $\pi^{-1}(X^{ss}(L))=\widehat X^{ss}(\pi^* L)$ for $L \in \text{Pic}^G(X)$. We illustrate again the course of the proof of Theorem \ref{Theo:Sl2Pn}. \[ \begin{tikzcd}[column sep=small] \text{PGL}_2 \curvearrowright (\ensuremath{\mathbb{P}}^1)^n& & (\mathbb{C}^*)^{n-1} \curvearrowright \mathbb{C}^{2n-4}\\ \text{PGL}_2 \curvearrowright (\ensuremath{\mathbb{P}}^1)^{n} \setminus \Delta_{12} \arrow[Subseteq]{u}{} \arrow{rd}& & (\mathbb{C}^*)^{n-1} \curvearrowright (\mathbb{C}^2 \setminus \{0\})^{n-2}\arrow[Subseteq]{u}{} \arrow{ld}\\ & \mathbb{C}^* \curvearrowright (\mathbb{P}^1)^{n-2} \end{tikzcd} \] Both arrows at the bottom are (compositions of) geometric $H$-lifts for free $H$-actions, so they induce isomorphisms of equivariant Picard groups compatible with forming semistable sets. We have to see how the two inclusions at the top behave in this respect. The inclusion $(\mathbb{C}^2 \setminus \{0\})^{n-2} \subset \mathbb{C}^{2n-4}$ has complement of codimension $2$, so it induces isomorphisms of (equivariant) Picard groups and (invariant) sections of line bundles. Also the complement of the inclusion above consists of points with nonfinite stabilizers. So for every linearization on $\mathbb{C}^{2n-4}$ such that stable and semistable points agree, these sets are anyway contained in $(\mathbb{C}^2 \setminus \{0\})^{n-2}$. 
Thus on the interior of the chambers in $\text{Pic}^{(\mathbb{C}^*)^{n-1}}(\mathbb{C}^{2n-4})_\mathbb{Q}$ the isomorphism above respects the formation of (semi)stable points. Note that this is not true for all linearizations: for the trivial linearization all of $\mathbb{C}^{2n-4}$ is semistable, but on $(\mathbb{C}^2 \setminus \{0\})^{n-2}$ the trivial linearization has no semistable points (as this variety is not affine). For the other inclusion $i: (\ensuremath{\mathbb{P}}^1)^{n} \setminus \Delta_{12} \hookrightarrow (\ensuremath{\mathbb{P}}^1)^n$ we have \[\text{Pic}^G((\ensuremath{\mathbb{P}}^1)^n \setminus \Delta_{12})_\mathbb{Q} = \text{Pic}^G((\ensuremath{\mathbb{P}}^1)^n)_\mathbb{Q} / \mathbb{Q} \mathcal{O}(1,1,0, \ldots, 0)\] and $i^*$ is the corresponding quotient map. For any $G$-linearized line bundle $L'$ on $(\ensuremath{\mathbb{P}}^1)^n \setminus \Delta_{12}$, which is the restriction of a bundle $L$ on $(\ensuremath{\mathbb{P}}^1)^n$, any invariant section $s'$ of $(L')^{\otimes k}$ extends to a section $s$ of $(L \otimes\mathcal{O}(m,m,0, \ldots, 0))^{\otimes k}$ vanishing on $\Delta_{12}$ for $m \gg 0$ (take $m$ greater than the order of the rational section $s$ of $L^{\otimes k}$ along $\Delta_{12}$). Conversely, for $L=\mathcal{O}(a_1, a_2, \ldots, a_n)$ on $(\ensuremath{\mathbb{P}}^1)^n$ with $a_1 + a_2 > a_3 + \ldots + a_n$ we consider again the Hilbert-Mumford criterion from above. For $S \subset \{1, \ldots, n\}$ and $\Sigma_S(\textbf a) = \Sigma_{s \in S} a_s - \Sigma_{s \notin S} a_s$ we have \begin{itemize} \item $\Sigma_{S}(\textbf{a})>0$ for $1,2 \in S$, \item $\Sigma_{S}(\textbf{a})<0$ for $1, 2 \notin S$, \item $\Sigma_{S}(\textbf{a}) = \Sigma_{S}(\textbf{a}+(m,m,0, \ldots, 0))$ for $1 \in S, 2 \notin S$ or $1 \notin S, 2 \in S$. \end{itemize} So we see that twisting $L$ by $\mathcal{O}(m,m,0, \ldots, 0)$ does not change the set of semistable points. In fact this shows that all cones of the VGIT-fan in $\text{Pic}^G((\ensuremath{\mathbb{P}}^1)^n)_{\mathbb{R}}$ with relative interior strictly inside the interior of the half-space $H_{\{1,2\}}=\{a:\Sigma_{\{1,2\}}\geq 0\}$ have the cone generated by $\mathcal{O}(1,1, 0, \ldots, 0)$ in their closure and thus as a face. Moreover, the set of semistable points of $L$ is contained in $(\ensuremath{\mathbb{P}}^1)^n \setminus \Delta_{12}$. All this shows that for any $L \in \text{Pic}^G((\ensuremath{\mathbb{P}}^1)^n)$ and $L' = i^* L$ its restriction to $(\ensuremath{\mathbb{P}}^1)^n \setminus \Delta_{12}$, we have \[((\ensuremath{\mathbb{P}}^1)^n \setminus \Delta_{12})^{ss}_{L'} = ((\ensuremath{\mathbb{P}}^1)^n)^{ss}_{L\otimes\mathcal{O}(m,m,0, \ldots, 0)} \] for $m \gg 0$. To conclude, inside $\text{Pic}^G((\ensuremath{\mathbb{P}}^1)^n)_{\mathbb{R}}$ we have the subfan of the VGIT-fan contained in $H_{\{1,2\}}$. Via the map $i^*$ it maps to its quotient fan by the ray $\text{Cone}(\mathcal{O}(1,1, 0, \ldots, 0))$. Moreover, on the relative interior of the cones in the quotient fan, the set of semistable points is constant and equal to the semistable points on the cone in the preimage containing $\mathcal{O}(1,1, 0, \ldots, 0)$. \end{Rmk} \begin{Rmk} The linear action of $T=(\mathbb{C}^*)^{n-1}$ on $\mathbb{C}^{2n-4}$ that arises above has been studied in the Master thesis \cite{toricblowup} of the author. It arises as the canonical representation of the toric variety $\text{Bl}_{n-2} \mathbb{P}^{n-3}$ as a torus quotient of affine space with respect to a symmetric linearization (i.e. 
the character $(1, \ldots, 1)$ of $T$). In the thesis a family of chambers of the secondary fan together with their quotients is explicitly identified. The quotients occurring in this family are iterated projective $\ensuremath{\mathbb{P}}^1$-bundles over some $\text{Bl}_{k} \mathbb{P}^{n'-3}$ $(k \leq n'-2)$. \end{Rmk} \end{document}
\begin{document} \title{The Quantum Liouville Equation is non-Liouvillian} \author{Dimitris Kakofengitis and Ole Steuernagel} \affiliation{School of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield, AL10 9AB, UK} \email{[email protected]} \date{\today} \begin{abstract} The Hamiltonian flow of a classical, time-independent, conservative system is incompressible, it is Liouvillian. The analog of Hamilton's equations of motion for a quantum-mechanical system is the quantum-Liouville equation. It is shown that its associated quantum flow in phase space, Wigner flow, is not incompressible. It gives rise to a quantum analog of classical Hamiltonian vector fields: the Wigner phase space velocity field~$\bm w$, the divergence of which can be unbounded. The loci of such unbounded divergence form lines in phase space which coincide with the lines of zero of the Wigner function. Along these lines exist characteristic pinch points which coincide with stagnation points of the Wigner flow. \end{abstract} \pacs{03.65.-w, 03.65.Ta } \maketitle \section{Introduction} In classical phase space the coordinates $\bm r = \binom{\bm q}{\bm p}$ are position~${\bm q}$ and momentum~${\bm p}$ with the associated dynamics described by the Hamiltonian velocity field $\bm v = \binom{\bm {\dot q}}{\bm {\dot p}}$ giving rise to a continuity equation~$\frac{\partial}{\partial t} \rho(\bm r;t) + \bm \nabla \bullet \bm j(\bm r;t) = \sigma(\bm r;t)$ for the movement of the classical probability density~$\rho$ and its flow~$\bm j$. Because probability is locally conserved the source term $\sigma(\bm r;t) =0$. Famously, classical Hamiltonian vector fields for time-independent conservative systems are \noindent\begin{eqnarray} \text{divergence-free } \quad &\bm \nabla \bullet \bm v \; & = 0 , \label{eq:_Div_v_zero_classical} \\ \text{or, the flow is incompressible } \quad &\frac{ D \rho }{Dt} & = 0 . \label{eq:_Dtotal_zero_classical} \end{eqnarray} Eq.~(\ref{eq:_Dtotal_zero_classical}) follows from~(\ref{eq:_Div_v_zero_classical}) if the density is made up of ``carriers'', particles or charges (and their respective probability distributions), such that the flow $\bm j$ can be decomposed into the product \begin{equation} \bm j = \rho \bm v . \label{eq:_j_rho_times_velocity} \end{equation} Then, the \emph{total derivative}~\cite{ComovingDerivativeNames} of~$\rho$ is $ \frac{ D \rho}{Dt} = - \rho \bm \nabla \bullet \bm v$. In quantum mechanics, Wigner's quantum phase space-based distribution function~$W$~\cite{Wigner_32,Hillery_PR84} obeys~\cite{Wigner_32} the, so-called, quantum Liouville equation~\cite{Schleich_01} \begin{equation} \frac{\partial W(\bm r;t) }{\partial t} + \bm \nabla \bullet \bm J(\bm r;t) = 0 \; , \label{eq:W_Continuity} \end{equation} where~$\bm J$ is the Wigner flow~\cite{Ole_PRL13} of the system. Here we establish that the quantum Liouville equation is typically non-Liouvillian, that Wigner's phase space velocity~$\bm w$, the quantum analog of~$\bm v$, can have unbounded divergence, and that the structure of the divergence of~$\bm w$ can help us to investigate the phase space structure of a quantum system's dynamics. We first review features of Wigner flow and introduce the concept of the Wigner phase space velocity~$\bm w$~(\ref{eq:w}), in section~\ref{sec:WignerFlow}. 
We then consider quantum systems for which the flow of~$\bm w$ is always incompressible (harmonic oscillator), in section~\ref{sec:HarmWignerFlow}, incompressible for energy eigenstates only (`squared' harmonic oscillator), in section~\ref{sec:KerrWignerFlow}, and generically non-Liouvillian (anharmonic oscillator), in section~\ref{sec:AnHarmWignerFlow}, before we conclude in section~\ref{sec:Conclusion}. \section{Wigner Flow\label{sec:WignerFlow}} From now on, we will only consider motion in one spatial dimension~$x$. In this case~$W$ is a one-dimensional Fourier transform \begin{equation}\label{eq:W} W_\varrho(x,p;t) \equiv \frac{1}{\pi \hbar} \int_{-\infty}^{\infty} dy \, \varrho(x+y,x-y,t) \cdot e^{\frac{2i}{\hbar} p y} \; , \end{equation} of the off-diagonal coherences~$\varrho(x+y,x-y,t)$ of the quantum mechanical density matrix~$\varrho$, which has the form~$\varrho = \Psi^*(x+y,t)\Psi(x-y,t)$ if the system is in a pure state~$\Psi$ (the star `*' denotes complex conjugation); $\hbar=h/(2\pi)$ is the reduced Planck constant. $W$ is real valued but can be negative~\cite{Wigner_32} and therefore is a quantum-mechanical `quasi-probability' function~\cite{Hillery_PR84,Schleich_01}. For time-independent conservative systems such as a point mass~$M$ moving under the influence of a potential~$U$, described by the Hamiltonian \begin{equation} H(x,p) = \frac{p^2}{2M} + U(x) , \label{eq:_classical_Hamiltonian} \end{equation} where the potential~$U(x)$ can be Taylor-expanded (giving rise to finite forces only), $\bm J$ of~(\ref{eq:W_Continuity}) has the explicit form~\cite{Wigner_32} \begin{eqnarray} {\bm J} = \binom{ J_x }{ J_p } = \begin{pmatrix} \frac{p}{M} W \\ -\sum\limits_{l=0}^{\infty}{\frac{(i\hbar/2)^{2l}}{(2l+1)!} \partial_p^{2l}W \partial_x^{2l+1}U } \end{pmatrix} \; . \label{eq:FlowComponents} \end{eqnarray} Here, the notation~$\partial_x^l=\frac{\partial^l}{\partial x^l}$, etc., is used for conciseness. Explicit reference to the dependence on $\bm r$ and $t$ is now dropped. Wigner flow's complicated form makes it non-trivial to work out its overall structure. To characterize Wigner flow it is useful to determine its orientation winding number~\cite{Ole_PRL13} (or Poincar\'e index) \begin{eqnarray} \label{eq:WindingNumber} \omega({\cal L},t) =\frac{1}{2\pi} \varointctrclockwise_{\cal L} d \varphi \; . \end{eqnarray} The Poincar\'e index $\omega$~tracks the orientation angle~$\varphi$ of the flow vectors~$\bm J$ along continuous, closed, self-avoiding loops~$\cal L$ in phase space. Because the components of the flow are continuous functions, $\omega$ is zero except for the case when the loop contains stagnation points. In such a case a non-zero value of $\omega$ can occur and this value is conserved unless the system's dynamics transports a stagnation point across~$\cal L$~\cite{Ole_PRL13}. When comparing Wigner flow with classical Hamiltonian flow, it is not unreasonable to argue that the first-order terms of Wigner flow~(\ref{eq:FlowComponents}) have the classical form \begin{equation} \binom{ J_x }{ J_p } = \binom{ v_x W}{ - W \partial_x U } + \binom{ 0 }{ {\cal O}(\hbar^2) } \ ; \label{eq:WFlow_mot_Classical} \end{equation} and that therefore, whenever higher-order quantum terms~${\cal O}(\hbar^2)$ are present, Wigner flow cannot be Liouvillian~\cite{Zachos_book_05}. It turns out, though, that for eigenstates of Kerr oscillators (section~\ref{sec:KerrWignerFlow}, below) this argument is not correct. 
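As a small, self-contained sanity check of the series in Eq.~(\ref{eq:FlowComponents}) we record the following computer-algebra sketch. It is an illustration only: the quartic potential, the symbol names and the use of \texttt{sympy} are our own choices and are not taken from the references.
\begin{verbatim}
# Symbolic check of the momentum flow component J_p for a quartic
# potential U(x) = k x^2/2 + lam x^4 (illustrative choice); for such a
# U the series terminates, since d^5 U / dx^5 = 0.
import sympy as sp

x, p, hbar, M, k, lam = sp.symbols('x p hbar M k lam', positive=True)
W = sp.Function('W')(x, p)          # a generic phase-space function W(x, p)
U = k*x**2/2 + lam*x**4             # d^3U/dx^3 = 24 lam x

# J_p = - sum_l (i hbar/2)^(2l) / (2l+1)!  *  d^{2l}_p W  *  d^{2l+1}_x U
J_p = -sum((sp.I*hbar/2)**(2*l) / sp.factorial(2*l + 1)
           * sp.diff(W, p, 2*l) * sp.diff(U, x, 2*l + 1)
           for l in range(3))       # terms with l >= 2 vanish for quartic U
J_x = p/M * W

print(sp.expand(J_p))
# = -(k*x + 4*lam*x**3)*W + hbar**2*lam*x * d^2W/dp^2   (up to rearrangement),
# i.e. the classical term -W dU/dx plus an O(hbar^2) correction.
\end{verbatim}
The ${\cal O}(\hbar^2)$ correction is proportional to $\partial_p^2 W$ rather than to $W$ itself, so for an anharmonic potential the flow ${\bm J}$ is in general not of the product form `density times $W$-independent velocity field' that underlies Eqs.~(\ref{eq:_j_rho_times_velocity}) and~(\ref{eq:_Dtotal_zero_classical}).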
We note that, firstly, Wigner's function typically has areas of negative value, which is why classical probability arguments have to be used cautiously. Secondly, a clear identification of the terms responsible for deviation from the classical case might be of interest in its own right. And, thirdly, we have, so far, little intuition regarding the behaviour of Wigner flow, and we show here that the divergence of its flow can be tied to other physical phenomena, such as the formation of stagnation points of~$\bm J$ in phase space. To establish that in general quantum phase space flow is non-Liouvillian, let us cast it into a form analogous to Eq.~(\ref{eq:_j_rho_times_velocity}), namely~$ {\bm J} = W {\bm w}$, and investigate the divergence of the \emph{Wigner phase space velocity} \begin{equation}\label{eq:w} \bm w = \frac{\bm J}{W} . \end{equation} According to Eq.~(\ref{eq:_Div_v_zero_classical}), to establish when Wigner flow is Liouvillian, we determine when \begin{equation} \boldsymbol{\nabla} \bullet \bm w = 0 \; . \label{eq:Wigner_velocity_Liouvillian} \end{equation} With $\boldsymbol{\nabla} \bullet \bm J = W \boldsymbol{\nabla} \bullet \bm w + \bm w \bullet \boldsymbol{\nabla} W = -\partial_t W$ we have \begin{equation} \boldsymbol{\nabla} \bullet \bm w = - \frac{ { \bm J} \bullet \boldsymbol{\nabla} W + W \partial_t W}{W^2} . \label{eq:Wigner_velocity_Divergence} \end{equation} \section{Wigner Flow of Harmonic Oscillators \label{sec:HarmWignerFlow}} \begin{figure} \caption{\CO{(Color online)}\label{fig:flow_harm}} \end{figure} For a harmonic potential $ U^\odot(x) = \frac{k}{2}x^2 $ with spring constant~$k$, Wigner flow~(\ref{eq:FlowComponents}) has the `classical' form \begin{equation} {\bm J}^\odot = \binom{J^\odot_{ x}}{J^\odot_{ p}} = W^\odot(x,p,t) \cdot \binom{ \frac{p}{M} }{ - k x } \, . \label{eq:WF_harm} \end{equation} Inserting~(\ref{eq:WF_harm}) into~(\ref{eq:Wigner_velocity_Divergence}) yields $ \bm \nabla \bullet \bm w^\odot = 0 $, always. A harmonic oscillator's quantum phase space flow is always Liouvillian; see Fig.~\ref{fig:flow_harm}. \section{Wigner Flow of the Kerr Oscillator\label{sec:KerrWignerFlow}} An example of a system whose energy eigenstates yield Liouvillian Wigner flow, but whose superposition states do not, is the `squared' harmonic oscillator, described by the Kerr Hamiltonian \begin{equation} {\cal \hat H_K} = \left( \frac{\hat p^2}{2M} + \frac{k}{2} \hat x^2 \right) + \Lambda^2 \left( \frac{\hat p^2}{2M} + \frac{k}{2} \hat x^2 \right)^2 \, . \label{eq_Kerr_Hamiltonian} \end{equation} The parameter $\Lambda$ parameterizes the system's (quantum-optical) Kerr--non-linearity, $\Lambda \propto \sqrt{\chi^{(3)}}$~\cite{Walls_Milburn_QuopBook,Osborn_JPA09,Manko_PSc10,Kirchmair_NAT13}, i.e. in field operator language~$ {\cal \hat H_K} = \left( a^\dagger a + \frac{1}{2} \right) + \tilde \chi^{(3)} \left( a^\dagger a + \frac{1}{2} \right)^2$. The eigenfunctions of the harmonic oscillator are also eigenfunctions of the Kerr Hamiltonian, rendering the entire system analytically solvable. Note that ${\cal \hat H_K}$ contains products of $\hat x$ and $\hat p$; this implies that the terms of the Wigner flow are not of the form~(\ref{eq:FlowComponents}). 
Instead, the Wigner flow components can be determined using Moyal brackets~\cite{Zachos_book_05} and are found to be of the form~\cite{Kerr_WignerFlow} \begin{figure*} \caption{\CO{(Color online)}\label{fig:flow_Kerr}} \end{figure*} \begin{widetext} \begin{eqnarray} & J_x \; & = \left[ \Lambda^2 \left( -{ \frac {\hbar^2 p}{4{M}^{2}}} \partial_{x}^{2} + \left\{ { \frac {{p}^{3}}{{M}^{2}}}+ {\frac {k{x}^{2}p}{M}} \right\} - {\frac {\hbar^2 k}{4 M}} \partial_{p}^{2} p \right)+\left\{ \frac {p}{M} \right\} \right] W(x,p,t) \label{eq:J_x} \\ \mbox{ and } \qquad & J_p\; & = \left[ \Lambda^2 \left( \frac{\hbar^2 k^2 x}{4} \partial_{p}^2 - \left\{ {k}^{2}{x}^{3} + {\frac {kx{p}^{2}}{M}} \right\} + \frac {\hbar^2 k}{4 M} \partial_{x}^2 x \right) - \left\{ k x \right\} \right] W(x,p,t) \; , \label{eq:J_p} \end{eqnarray} \end{widetext} where the curly brackets surround the classical terms. All other terms (of order~${\cal O}(\hbar^2)$) are of quantum origin. For symmetry reasons the quantum terms cancel for eigenstates, but not otherwise; they are responsible for the non-Liouvillian nature of the quantum phase space flow of superposition states of the Kerr oscillator. For energy eigenstates, Eq.~(\ref{eq:Wigner_velocity_Divergence}) reads \begin{equation} \boldsymbol{\nabla} \bullet \bm w = - \frac{ { \bm J} \bullet \boldsymbol{\nabla} W }{W^2} . \label{eq:Wigner_velocity_Divergence_Eigenstates} \end{equation} For eigenstates of the Kerr system, $\bm J$ is always perpendicular to $\bm \nabla W$, and therefore the quantum phase space flow of its eigenstates is Liouvillian. For a superposition state (depicted in Fig.~\ref{fig:flow_Kerr}) Wigner flow is non-Liouvillian and forms isolated flow stagnation points at the intersections of the lines of vanishing $J_x$-component of the flow (thick green lines in Fig.~\ref{fig:flow_Kerr}) with lines of vanishing $J_p$-component of the flow (thick blue lines in Fig.~\ref{fig:flow_Kerr}). The flow's corresponding stagnation points are depicted by red plus-signs, if their Poincar\'e index is~$\omega=1$, and by yellow minus signs, if their Poincar\'e index is~$\omega=-1$. \section{Wigner Flow of Anharmonic Oscillators\label{sec:AnHarmWignerFlow}} For Hamiltonians of the form~(\ref{eq:_classical_Hamiltonian}) with an anharmonic potential~$U$ it is no longer true that the divergence of~$\bm w$ for eigenstates is zero. In this case Wigner flow typically expands or compresses, i.e., it is non-Liouvillian at all times, almost everywhere in phase space. This can be understood from the previous discussion of Eq.~(\ref{eq:WFlow_mot_Classical}). The quantum terms in Wigner flow yield terms that break the incompressibility of classical phase space flow~\cite{Zachos_book_05}, and there are no symmetries, such as those for the eigenstates of the Kerr system, to offset their influence. For illustration we show the Wigner flow portrait and the associated divergence map for the first excited bound state of a Morse oscillator in Fig.~\ref{fig:flow_Anharmonic}. \begin{figure*} \caption{\CO{(Color online)}\label{fig:flow_Anharmonic}} \end{figure*} In the case of mechanical quantum systems, described by Hamiltonians of the form~(\ref{eq:_classical_Hamiltonian}), the (black dashed) line of zero of the Wigner function, according to Eq.~(\ref{eq:WFlow_mot_Classical}), coincides with (thick green) lines of zero of the $J_x$-component. According to Eq.~(\ref{eq:Wigner_velocity_Divergence}) this is the location where the divergence of the Wigner phase space velocity~$\bm w$ becomes unbounded. 
The (thick blue) lines of vanishing $J_p$-component of the flow typically do not coincide with $J_x$-zero lines; this leads to the formation of isolated stagnation points of the flow~\cite{Ole_PRL13,Kakofengitis14} wherever (off the $x$-axis) blue and green lines cross each other; see Fig.~\ref{fig:flow_Anharmonic}. In other words, when we follow the line of unbounded divergence of~$\bm w$ we trace out the line where $J_x=0$. If such a line crosses (off the $x$-axis) a line where $J_p=0$, then $\bm \nabla \bullet \bm w$ changes sign; this leads to the formation of the pinch-points of~$\bm \nabla \bullet \bm w$ evident in the right panel of Fig.~\ref{fig:flow_Anharmonic}. Off the $x$-axis, these pinch-points thus coincide with flow stagnation points. \section{Conclusion\label{sec:Conclusion}} We introduce the concept of the Wigner phase space velocity~$\bm w$. We show that the quantum-Liouville equation~(\ref{eq:W_Continuity}) is generically non-Liouvillian and would better be called a quantum continuity equation. Only in the case of the harmonic oscillator is the flow of the Wigner phase space velocity divergence-free. Generically, for any anharmonic quantum-mechanical oscillator, Wigner flow is non-Liouvillian and features unbounded divergence. Field-oscillators of the Kerr type show intermediate behaviour in that their eigenstates feature Liouvillian flow, but their coherent superpositions do not. In anharmonic quantum-\emph{mechanical} systems~(\ref{eq:_classical_Hamiltonian}) the (off-axis) pinch-points of unbounded divergence of Wigner's phase space velocity~$\bm w$ coincide with the stagnation points of Wigner flow~$\bm J$. \end{document}
\begin{document} \title{Separating the online and offline DP-chromatic numbers} \begin{abstract} The DP-coloring problem is a generalization of the list-coloring problem in which the goal is to find an independent transversal in a certain topological cover of a graph $G$. In the online DP-coloring problem, the cover of $G$ is revealed one component at a time, and the independent transversal of the cover must be constructed in parts based on incomplete information. Kim, Kostochka, Li, and Zhu asked whether the chromatic numbers corresponding to these two graph coloring problems can have an arbitrarily large difference in a single graph. We answer this question in the affirmative by constructing graphs for which the gap between the online DP-chromatic number and the offline DP-chromatic number is arbitrarily large. \end{abstract} \section{Introduction} We will consider several graph coloring problems. In the \emph{list coloring problem}, we have a graph $G$ and a list $L(v)$ of colors at each vertex $v \in V(G)$. In this setting, we say that an \emph{$L$-coloring} of $G$ is a proper coloring $\varphi:V(G) \rightarrow \bigcup_{v \in V(G)} L(v)$ of $G$ in which $\varphi(v) \in L(v)$ for every vertex $v \in V(G)$. If $G$ always has an $L$-coloring whenever $|L(v)| = f(v)$ for each vertex $v \in V(G)$, then we say that $G$ is \emph{$f$-choosable}. If $f$ is a constant function $f(v) = k$, then we say that the \emph{list-chromatic number} (or \emph{choosability}) of $G$ is at most $k$, and we write $\ch(G) \leq k$. The \emph{DP-coloring problem} is a generalization of the list coloring problem introduced by Dvo\v{r}\'ak and Postle \cite{DP}, defined as follows. Given a graph $G$ and a function $f:V(G) \rightarrow \mathbb N$, an \emph{$f$-fold cover} of $G$ is a graph $H$ obtained by the following process: \begin{itemize} \item For each vertex $v \in V(G)$, add a clique $K_{f(v)}$ to $H$, and write $L(v)$ for the vertex set of this clique. \item For each edge $uv \in E(G)$, add a matching between $L(u)$ and $L(v)$. \end{itemize} Then, we say that an independent set in $H$ of size $|V(G)|$ is a \emph{DP-coloring} of $G$ with respect to $H$. If $G$ always has a DP-coloring for every $f$-fold cover $H$ of $G$, then we say that $G$ is DP-$f$-colorable. If $f$ is a constant function $f(v) = k$, then we say that $H$ is a \emph{$k$-fold} cover of $G$, and if $G$ always has a DP-coloring for every $k$-fold cover $H$ of $G$, then we say that the DP-chromatic number of $G$ is at most $k$, and we write $\chi_{DP}(G) \leq k$. Given a cover $H$ of $G$, we often refer to the vertices of $H$ as \emph{colors}, and if $c \in L(v)$ for a vertex $v \in V(G)$, then we say that the color $c$ is \emph{above} $v$. Note that when $f(v) = k$ is a constant function, if the cliques in $H$ corresponding to vertices in $G$ are replaced with independent sets, and if each matching between sets $L(u)$ and $L(v)$ is a perfect matching, then $H$ is a $k$-sheeted covering space of $G$, and a DP-coloring of $G$ is equivalent to an independent transversal of the fibers in $H$ above the vertices of $G$ (see \cite{Hatcher} for an introduction to graphs as topological spaces). Every list-coloring problem can be transformed into a DP-coloring problem as follows. Given a graph $G$ with a color list $L'(v)$ at every vertex $v \in V(G)$, we construct a cover $H$ of $G$ by adding a clique with vertex set $L(v)$ for every vertex $v \in V(G)$, with elements of $L(v)$ corresponding to colors in $L'(v)$. 
Then, we consider each edge $uv \in E(G)$, and we add an edge in $H$ between each pair $(c,c') \in L(u) \times L(v)$ for which $c$ and $c'$ both correspond to a common color from $L'(u) \cap L'(v)$. When $H$ is constructed this way, a DP-coloring of $G$ with respect to $H$ is equivalent to an $L'$-coloring of $G$. Therefore, it holds that $\ch(G) \leq \chi_{DP}(G)$. We will also consider two online graph coloring problems. The \emph{online DP-coloring} problem takes place in the form of a DP-coloring game between two players, called Lister and Painter. The game is played on a graph $G$ with a function $f:V(G) \rightarrow \mathbb N$. At the beginning of the game, each vertex $v \in V(G)$ has $f(v)$ tokens. On each turn $i$, Lister removes some number $m_i(v)$ (possibly zero) of tokens from each vertex $v \in V(G)$ and then reveals a clique $K_{m_i(v)}$ above each vertex $v$. Furthermore, for each edge $uv \in E(G)$, Lister reveals a matching between the revealed cliques above $u$ and $v$. The cliques and matchings revealed on this turn $i$ form a cover $H_i$ of $G$. After $H_i$ is revealed, Painter chooses an independent set from the vertices of $H_i$. The game ends when $G$ has no more tokens for Lister to remove. Painter wins the game if she manages to choose at least one color above each vertex of $G$ before the game is over; otherwise, Lister wins. If Painter always has a winning strategy in the DP-coloring game on a graph $G$ when each vertex $v \in V(G)$ begins with $k$ tokens, then we say that the \emph{online DP-chromatic number} (or \emph{DP-paintability}) of $G$ is at most $k$, and we write $\chi_{DPP}(G) \leq k$. Observe that if each vertex of $G$ begins with $k$ tokens, then Lister has the option of revealing a $k$-fold cover $H$ of $G$ on the first turn and asking Painter to find a DP-coloring of $G$ with respect to $H$, and therefore $\chi_{DP}(G) \leq \chi_{DPP}(G)$. If, in the DP-coloring game, Lister is only allowed to remove at most one token from each vertex of $G$ during each turn and must always reveal edges wherever possible, then we call this variant of the game the \emph{list-coloring game}. For the list-coloring game, we may equivalently imagine that on each turn $i$, Lister reveals a single color $c_i$ above each vertex of some induced subgraph $G'_i$ of $G$, and Painter must choose some independent set $I_i$ of $G'_i$ and color each vertex in $I_i$ with $c_i$. In this equivalent setting, each vertex $v \in V(G)$ still begins with $f(v)$ tokens, and a single token is removed from $v$ whenever Lister reveals a color above $v$. In this setting, Painter wins the game if and only if she can color every vertex of $G$ before the game ends. If Painter always has a strategy to win the list-coloring game on a graph $G$ when each vertex $v \in V(G)$ begins with $f(v)$ tokens, then we say that $G$ is \emph{$f$-paintable}. If $f$ is a constant function $f(v) = k$, then we say that $G$ is \emph{$k$-paintable}, and we write $\chi_P(G) \leq k$. The online list-coloring game was invented independently by Schauz \cite{Schauz} and Zhu \cite{ZhuP}, using this framework of revealing colors above vertices. At the end of the list-coloring game on $G$ with a constant function $f(v) = k$, the colors revealed at each vertex $v$ form a set $L(v)$ of $k$ colors, and if Painter wins the game, then Painter completes a proper $L$-coloring of $G$. 
Since Lister is free to form any list assignment $L$ on the vertices of $G$, it follows that if $G$ is $k$-paintable, then $G$ is also $k$-choosable, and hence $\chi_{\ell}(G) \leq \chi_P(G)$. Also, since the online list-coloring game is at least as difficult for Lister as the DP-coloring game, it also follows that $\chi_P(G) \leq \chi_{DPP}(G)$. After putting all of the inequalities between these parameters together, we obtain two inequality chains: \begin{eqnarray*} \chi_{\ell}(G) \leq & \chi_{DP}(G) & \leq \chi_{DPP}(G); \\ \chi_{\ell}(G) \leq & \chi_{P}(G) & \leq \chi_{DPP}(G) . \end{eqnarray*} Given these inequality chains, it is natural to ask whether the differences between adjacent parameters can be arbitrarily large. For three out of these four differences, we find an affirmative answer by letting $G = K_{n,n}$. Indeed, Bernshteyn \cite{Bernshteyn} showed that a graph $G$ of average degree $d$ satisfies $\chi_{DP}(G) = \Omega \left ( \frac{d}{\log d} \right )$, implying that $\chi_{DP}(K_{n,n}) = \Omega \left (\frac{n}{\log n} \right )$. Since it is known that $\chi_{\ell}(K_{n,n}) \leq \chi_P(K_{n,n}) = \log_2 n + O(1)$ \cite{Duraj}, this shows that \[\chi_{DP}(K_{n,n}) - \chi_{\ell}(K_{n,n}) = \Omega \left ( \frac{n}{\log n} \right )\textrm{ \indent and \indent} \chi_{DPP}(K_{n,n}) - \chi_{P}(K_{n,n}) = \Omega \left ( \frac{n}{\log n} \right ).\] Duraj, Gutowski, and Kozik \cite{Duraj} also showed that \[\chi_{P}(K_{n,n}) - \chi_{\ell}(K_{n,n}) = \Omega(\log \log n).\] Therefore, by letting $G = K_{n,n}$, we achieve an arbitrarily large difference for each adjacent pair of parameters except for $\chi_{DPP}(G) - \chi_{DP}(G)$. For this final difference, Kim, Kostochka, Li, and Zhu \cite{KKLZ} showed there exist graphs $G$ for which $\chi_{DPP}(G) - \chi_{DP}(G) \geq \chi_{P}(G) - \chi_{DP}(G) \geq 1$, implying that this final difference can be positive. However, it has not been shown that this difference can be arbitrarily large. In this note, we will show that the difference $\chi_{DPP}(G) - \chi_{DP}(G)$ can also be arbitrarily large, answering a question of Kim, Kostochka, Li, and Zhu \cite{KKLZ}. Rather than choosing $G = K_{n,n}$, we will construct a graph $G_t$ for each $t \geq 1$ that satisfies $\chi_{DPP}(G_t) - \chi_{DP}(G_t) \geq t$. We construct our graphs $G_t$ by generalizing an idea from the original paper of Kim, Kostochka, Li, and Zhu \cite{KKLZ}. Our graphs $G_t$ will also satisfy the additional property that $\chi_{P}(G_t) - \chi_{DP}(G_t) \geq t$. By combining this inequality with the already-known estimate $\chi_{DP}(K_{n,n}) - \chi_P(K_{n,n}) = \Omega \left ( \frac{n}{\log n} \right )$, we see that the difference $\chi_{DP}(G) - \chi_{P}(G)$ can achieve both positive and negative values of arbitrarily large magnitude. \section{The construction} For each integer $t \geq 1$, we will construct a graph $G_t$ that satisfies $\chi_P(G_t) - \chi_{DP}(G_t) \geq t$. Since $\chi_{DPP}(G) \geq \chi_P(G)$ for all graphs $G$, our graphs $G_t$ will also satisfy $\chi_{DPP}(G_t) - \chi_{DP}(G_t) \geq t$. Our construction is based on a generalization of an idea of Kim, Kostochka, Li, and Zhu~\cite{KKLZ}. As we are concerned with showing that the paintability of each graph $G_t$ is large enough, we begin with an observation about the online list-coloring game. 
If Lister and Painter play the online list-coloring game on a graph $G$ with some initial token assignment, then Lister wins if and only if he can reach a position in which each uncolored vertex $v \in V(G)$ has some $g(v)$ remaining tokens, and the uncolored subgraph of $G$ is not $g$-choosable. In the original paper of Kim, Kostochka, Li, and Zhu~\cite{KKLZ}, the authors take advantage of this idea in order to construct a graph $G$ satisfying $\chi_P(G) \geq \chi_{DP}(G) + 1$. In order to show that their graph $G$ has a large enough paintability, these authors show that in the online list-coloring game on their graph $G$, Lister always has a strategy to create an uncolored $K_{1,k}$ subgraph of $G$ in which each leaf $\ell$ has $g(\ell) = 1$ token and the center vertex $v$ has $g(v) = k$ tokens. Since $K_{1,k}$ is not $g$-choosable, it follows that Lister has a strategy to win the online list-coloring game on their graph $G$. In our construction, we will use a similar idea. We first fix the value $k = 2^{8t^3}$. (With more careful calculation, our proof would work with a smaller value of $k$, but we use this larger value for clearer presentation.) In each of our graphs $G_t$, we will show that Lister can always create an uncolored $K_{t,k^t}$ subgraph in which each $t$-degree vertex $u$ has $g(u) = t$ tokens and each $k^t$-degree vertex $v$ has $g(v) = k$ tokens. The following lemma shows that if Lister manages to create such a subgraph of $G_t$, then Lister wins the online list-coloring game. \begin{lemma} \label{lem:ktkt} Given the function $g: V(K_{t,k^t}) \rightarrow \mathbb N$ defined above, $K_{t,k^t}$ is not $g$-choosable. \end{lemma} \begin{proof} We let the $t$ vertices $v_1, \dots, v_t$ of degree $k^t$ have pairwise disjoint color lists $L(v_1), \dots, L(v_t)$ of size $k$. Then, for each of the $k^t$ elements $L \in L(v_1) \times \dots \times L(v_t)$, we let $L$ be the list of a vertex $u$ of degree $t$. Then, for any coloring of $v_1, \dots, v_t$ using colors from their lists, some vertex $u$ of degree $t$ has no available color in its list, and hence $K_{t,k^t}$ is not $g$-choosable. \end{proof} The most important piece of our construction of $G_t$ will be the following gadget $H_t$. We construct our gadget $H_t$ along with a function $h:V(H_t) \rightarrow \mathbb N$ as follows. We let $H_t$ contain $(t+1)k^t$ copies $K^1, \dots, K^{(t+1)k^t}$ of the clique $K_{t+1}$, and we write $u^{\ell}_1, \dots, u^{\ell}_{t+1}$ for the vertices of each clique $K^{\ell}$. We write $U$ for the set of all of these vertices of the form $u_j^{\ell}$; in other words, we let $U$ consist of all vertices that we have introduced so far. Then, for each value $1 \leq j \leq t+1$, we add $t$ independent vertices $x_j^1,\dots,x_j^t$, and we make each of these vertices adjacent to $u^1_j, u^2_j,\dots,u^{(t+1)k^t}_j$. We write $X$ for the set consisting of all of these vertices of the form $x_j^i$. For each vertex $u_j^{\ell} \in U$, we let $h(u_j^{\ell}) = t+1$, and for each vertex $x_j^i \in X$, we let $h(x_j^i) = k - t + 1$. We now prove two lemmas that show that under appropriate circumstances, winning the online list-coloring game on $H_t$ as Painter is much harder than finding a DP-coloring on $H_t$. \begin{lemma} \label{lem:no} $H_t$ is not $(h+t-1)$-paintable. \end{lemma} \begin{proof} We give each vertex $v \in V(H_t)$ exactly $h(v) + t - 1$ tokens, and we show that Painter cannot win the list-coloring game on $H_t$. 
On each of the first $t$ turns, Lister reveals a color at each vertex of each clique $K^{\ell}$ and reveals an edge wherever possible. After these first $t$ turns, each clique $K^{\ell}$ must have an uncolored vertex $u_j^{\ell}$ with exactly $h(u_j^{\ell}) - 1 = t$ tokens. Furthermore, since we have $(t+1)k^t$ cliques $K^{\ell}$, each with at least one uncolored vertex, there must exist some value $1 \leq j^* \leq t+1$ for which at least $k^t$ vertices of the form $u_{j^*}^{\ell}$ are uncolored. However, the $k^t$ vertices of the form $u_{j^*}^{\ell}$ along with the $t$ vertices $x^1_{j^*}, \dots, x^t_{j^*}$ form an uncolored $K_{t,k^t}$ subgraph in which each $t$-degree vertex $v$ has only $g(v) = t$ remaining tokens, and each $k^t$-degree vertex $v$ has only $g(v) = k$ remaining tokens. Since Lemma \ref{lem:ktkt} shows that this $K_{t,k^t}$ subgraph is not $g$-choosable, Lister has a strategy to win the game. \end{proof} \begin{lemma} \label{lem:yes} $H_t$ is DP-$h$-colorable. \end{lemma} \begin{proof} Consider an $h$-fold cover $H'$ of $H_t$. Recall that given a vertex $v \in V(H_t)$, we say that $v$ corresponds to a clique $K_{h(v)}$ in $H'$ with a vertex set $L(v)$, and we say that $L(v)$ contains $h(v)$ \emph{colors}. Using this terminology, we say that each color in $H'$ appears above only one vertex of $H_t$, as the sets $L(v)$ forming the cliques of $H'$ are pairwise disjoint. We will first color the vertices $x^i_j \in X$. For each vertex $x^i_j \in X$ and color $c \in L(x^i_j)$, we assign a set $S_c \subseteq [(t+1)k^t]$ that consists of those indices $\ell$ for which $L(u_j^{\ell})$ contains a color adjacent to $c$. We would like to color the $t(t+1)$ vertices of $X$ using $t(t+1)$ colors $c_1, \dots, c_{t(t+1)}$ that correspond to a family $\mathcal S =\{ S_{c_1}, \dots, S_{c_{t(t+1)}}\}$ such that for each value $1 \leq q \leq t+1$, the following property holds: \begin{equation}\tag{$\star$} \label{star} \textrm{The intersection of any $q$ sets of $\mathcal S$ contains at most $k^{t - \frac{qt}{t+1}} - 1$ elements.} \end{equation} In particular, the intersection of any $t+1$ sets of $\mathcal S$ is empty. We show that we may greedily color each vertex $x_j^i \in X$ while satisfying (\ref{star}). Suppose we wish to color some vertex $x \in X$ and that we have already colored some subset $Y \subseteq X$ while satisfying (\ref{star}). For each subset $A \subseteq Y$ of size $q \in [0,t]$ whose vertices are colored with colors $a_1, \dots, a_q$, we must choose some color $c \in L(x)$ for which \begin{equation}\tag{$\bullet$} \label{eqn:int} |S_c \cap S_{a_1} \cap \dots \cap S_{a_q}| \leq k^{t - \frac{(q+1)t}{t+1}} - 1. \end{equation} Note that since $h(u^{\ell}_j) = t+1$ for each vertex $ u^{\ell}_j \in U$, each value $\ell \in [(t+1)k^t]$ appears in at most $t+1$ sets $S_c$ for colors $c \in L(x)$. Furthermore, since $|S_{a_1} \cap \dots \cap S_{a_q}| < k^{t - \frac{qt}{t+1}}$ whenever $q \geq 1$, the number of colors $c \in L(x)$ that do not satisfy (\ref{eqn:int}) for a given $A \subseteq X$ is at most \[\frac{(t+1)k^{t - \frac{qt}{t+1}} }{k^{t - \frac{(q+1)t}{t+1}}} = (t+1)k^{\frac{t}{t+1}}.\] Furthermore, since fewer than $2^{t(t+1)}$ subsets $A \subseteq Y$ can be chosen, the number of colors $c \in L(x)$ that would cause (\ref{star}) to be violated is less than \begin{eqnarray*} 2^{t(t+1)}(t+1) k^{\frac{t}{t+1}} &= & 2^{t(t+1) + \frac{8t^4}{t+1}}(t+1) \\ &=&2^{t(t+1) + 8(t^3 - t^2 + t - 1 + \frac{1}{t+1} )}(t+1) \\ &< & 2^{8t^3 - t} \\ & < & k - t + 1 = h(x). 
\end{eqnarray*} Therefore, some color $c \in L(x)$ can be used to color $x$ without violating (\ref{star}). Now, with every vertex $x_j^i \in X$ colored, and with (\ref{star}) satisfied, no index $\ell$ belongs to the intersection of more than $t$ sets $S_c$, where $c$ is a color used at a vertex $x_j^i$, and hence after coloring the vertices in $X$, at most $t$ colors are unavailable at each clique $K^{\ell}$. Therefore, the vertices of each clique $K^{\ell}$ can be ordered so that the first vertex has at least one available color, the second vertex has at least two available colors, and so forth, until the last vertex has $t+1$ available colors. Therefore, each remaining clique $K^{\ell}$ can be $DP$-colored with its available colors, and the lemma is proven. \end{proof} Now, we construct our graph $G_t$. First, we make $k^{k-2t}$ copies of the graph $H_t$, and we index them by the $(k-2t)$-tuples in $[k]^{k-2t}$. We also add $k-2t$ vertices $y_1, \dots, y_{k-2t}$ that are adjacent to all vertices of $U$ in each copy of $H_t$. We write $\tilde{U}$ for this set of neighbors of $y_1, \dots, y_{k-2t}$, that is, the set of vertices belonging to a set $U$ in some copy of $H_t$. The following two theorems show that $\chi_P(G_t) - \chi_{DP}(G_t) \geq t$. \begin{theorem} $\chi_{DP}(G_t) \leq k-t+1$. \end{theorem} \begin{proof} We may give $G_t$ a DP-coloring with lists of size $(k-t+1)$ as follows. First, we arbitrarily color the vertices $y_1, \dots, y_{k-2t}$. Next, we observe that the vertices in $\tilde U$ have lost at most $k-2t$ available colors, so for each vertex $v$ in a copy of $H_t$, $v$ has at least $h(v)$ available colors remaining. Therefore, by Lemma \ref{lem:yes}, we may finish our DP-coloring of $G_t$ by giving each remaining copy of $H_t$ a DP-$h$-coloring. \end{proof} \begin{theorem} \label{thm:no} $\chi_P(G_t) > k$. \end{theorem} \begin{proof} Suppose that the online list-coloring game is played on $G_t$ with $k$ tokens at each vertex. We will show that Lister has a winning strategy. For each pair $(i,j)$ satisfying $1 \leq i \leq k$ and $1 \leq j \leq k-2t$, Lister executes the following command. When Lister executes the command for a given pair $(i,j)$, we say that this takes place on \emph{turn $(i,j)$}. \begin{quote} Reveal a color $c_{i,j}$ above $y_j$ and above each vertex of $\tilde{U}$ that belongs to a copy of $H_t$ indexed by a $(k-2t)$-tuple with the value $i$ in the $j$th coordinate. \end{quote} For each value $j \in [k-2t]$, we write $L(y_j) = \{c_{1,j}, \dots, c_{k,j}\}$ for the set of colors revealed above $y_j$. For each $j \in [k-2t]$, we may assume that for some value $i_{j} \in [k]$, Painter colors $y_{j}$ during turn $(i_{j}, j)$ and hence does not color any vertex of $\tilde U$ during turn $(i_j, j)$. Indeed, if this is not the case, then $y_j$ is never colored, and Painter will not have another chance to color $y_{j}$. Therefore, for each value $j \in [k-2t]$, we may assume that no vertex in a copy of $H_t$ indexed by a $(k-2t)$-tuple with an $i_j$ entry in the $j$th coordinate is colored using a color in $L(y_j)$. Now, let $H$ be the copy of $H_t$ indexed by the $(k-2t)$-tuple $(i_1, \dots, i_{k-2t})$. By our observation above, no vertex of $H$ has been colored by a color in $L(y_1) \cup \dots \cup L(y_{k-2t})$, and hence no vertex of $H$ has been colored. Furthermore, since $k-2t$ tokens have been removed from each vertex in $\tilde{U} \cap V(H)$, it follows that for each vertex $v \in V(H)$, only $h(v)+t-1$ tokens remain at $v$. 
Therefore, Lister can follow the strategy in Lemma \ref{lem:no} on $H$ in order to win the game, and thus $\chi_P(G_t) > k$. \end{proof} \section{Conclusion} While we have shown for each $t \geq 1$ the existence of a graph $G_t$ for which $\chi_P(G_t) - \chi_{DP}(G_t) \geq t$, it is still open whether there exists a sequence $\{G_t\}_{t \geq 1}$ of graphs for which \[\lim_{t \rightarrow \infty} \frac{\chi_{DPP}(G_t)}{\chi_{DP}(G_t)} > 1 \textrm{ \indent or \indent } \lim_{t \rightarrow \infty} \frac{\chi_{P}(G_t)}{\chi_{\ell}(G_t)} > 1.\] On the other hand, it is unknown whether $\chi_P(G)$ can be bounded above by a linear or even polynomial function of $\chi_{\ell}(G)$, and it is unknown whether $\chi_{DPP}(G)$ can be bounded above by a linear function of $\chi_{DP}(G)$. Duraj, Gutowski, and Kozik \cite{Duraj} have pointed out that currently, the best known bound for $\chi_P(G)$ in terms of $\chi_{\ell}(G)$ comes from the relationship between a graph's choosability and minimum degree. Namely, a result of Saxton and Thomason \cite{Saxton} states that a graph $G$ of minimum degree $\delta$ satisfies $\chi_{\ell}(G) \geq (1 + o(1)) \log_2 \delta$. Writing $d$ for the degeneracy of a graph $G$, we observe that $G$ must have a subgraph of minimum degree $d$, and hence we may use this result to conclude that \[\chi_P(G) \leq d+1 \leq 2^{(1+o(1))\chi_{\ell} (G)}.\] For $\chi_{DPP}(G)$, we may use a result of Bernshteyn showing that a graph $G$ of minimum degree $\delta$ satisfies $\chi_{DP}(G) \geq \frac{\delta / 2}{\log (\delta / 2)}$ in order to bound $\chi_{DPP}(G)$ in terms of $\chi_{DP}(G)$ in a similar way. Using the same observation as above, if $d$ is the degeneracy of $G$, then \[\chi_{DPP}(G) \leq d + 1\leq (2 + o(1)) \chi_{DP}(G) \log \chi_{DP}(G).\] It is likely that a deeper understanding of these coloring parameters is necessary to determine tight bounds between them. \section{Acknowledgment} I am grateful to Alexandr Kostochka for offering helpful feedback on an earlier version of this paper. \raggedright \end{document}
\begin{document} \title[Nevanlinna representations]{Nevanlinna representations in several variables} \author{J. Agler, R. Tully-Doyle and N. J. Young} \keywords{Pick class, Loewner class, Cauchy transform, self-adjoint operator, resolvent} \subjclass[2010]{32A30, 30E20, 30E05, 47B25, 47A10} \thanks{The first author was partially supported by National Science Foundation Grant on Extending Hilbert Space Operators DMS 1068830. The third author was partially supported by the UK Engineering and Physical Sciences Research Council grant EP/J004545/1.} \date{23rd June, 2012} \begin{abstract} We generalize to several variables the classical theorem of Nevanlinna that characterizes the Cauchy transforms of positive measures on the real line. We show that for the Loewner class, a large class of analytic functions that have non-negative imaginary part on the upper polyhalfplane, there are representation formulae in terms of densely-defined self-adjoint operators on a Hilbert space. We identify four types of such representations, and we obtain function-theoretic conditions that are necessary and sufficient for a given function to possess a representation of each of the four types.\\ \end{abstract} \maketitle \input intro \input structured \input matricial \input type4 \input type321 \input asymptotic \input caraInfty \input caraTypes \input growth \input resolvident \input nevanlinna.bbl \nin J. Agler, Department of Mathematics, University of California at San Diego, CA 92103, USA.\\ \nin R. Tully-Doyle, Department of Mathematics, University of California at San Diego, CA 92103, USA.\\ \nin N. J. Young, School of Mathematics, Leeds University, Leeds LS2 9JT {\em and} School of Mathematics and Statistics, Newcastle University, Newcastle upon Tyne NE3 4LR, England. Email [email protected] \end{document}
\begin{document} \includepdf{FrontPage} \title{Feature Selection in Data Envelopment Analysis: A Mathematical Optimization approach} \author[1]{Sandra Ben\'{\i}tez Pe\~na} \author[2]{Peter Bogetoft} \author[2]{Dolores Romero Morales} \affil[1]{Instituto de Matem\'aticas de la Universidad de Sevilla (IMUS), Seville, Spain \newline \tt{[email protected]}} \affil[2]{Copenhagen Business School, Frederiksberg, Denmark \newline \tt{\{pb.eco,drm.eco\}@cbs.dk}} \date{\today} \maketitle \begin{abstract} \noindent This paper proposes an integrative approach to feature (input and output) selection in Data Envelopment Analysis (DEA). The DEA model is enriched with zero-one decision variables modelling the selection of features, yielding a Mixed Integer Linear Programming formulation. This single-model approach can handle different objective functions as well as constraints to incorporate desirable properties from the real-world application. Our approach is illustrated on the benchmarking of electricity Distribution System Operators (DSOs). The numerical results highlight the advantages that our single-model approach provides to the user, in terms of making the choice of the number of features, as well as modeling their costs and their nature. \vspace*{1cm} \noindent \Keywords{Benchmarking; Data Envelopment Analysis; Feature Selection; Mixed Integer Linear Programming} \end{abstract} \section{Introduction \label{sec:intro}} Organisations need to know whether they are using the best practices to produce their products and services, and to do so they benchmark their performance with that of others. There are many documented examples of the use of benchmarking in the literature from both the private and the public sector, such as airlines, banks, hospitals, universities, manufacturers, schools, and municipalities, see \cite{bogetoft2013performance}, and references therein. Within benchmarking, Data Envelopment Analysis (DEA) \citep{CHARNES1978429} is one of the most widely used tools, \cite{COOK201945,emrouznejad2018survey,jiang2015dearank,landete2017robust,li2017dynamic,petersen2018directional,RUIZ20161,RUIZ2018}. It aims at benchmarking the performance of decision making units (DMUs), which use the same types of inputs and produce the same types of outputs, against each other. DEA calculates an efficiency score for each of the DMUs, so that DMUs with a score equal to one are in the so-called efficient frontier. DMUs outside the efficient frontier are deemed underperforming, and a further analysis gives insights as to what they can do to improve their efficiency. The efficiency of DMUs in DEA is measured as the weighted summation of the outputs divided by the weighted summation of the inputs, and the weights are found by solving a Linear Programming problem for each DMU. DEA model specification, in the form of feature (where the term feature is used to refer to either outputs, inputs or environmental variables) selection, has a significant impact on the shape of the efficient frontier in DEA as well as the insights given to the inefficient DMUs \cite{golany1989application}. Moreover, it is known to improve the discriminatory power of DEA models \cite{bogetoft2010benchmarking}. Our paper proposes and investigates a mathematical optimization approach for feature selection in DEA. In benchmarking projects, as in most applied statistical analyses, one of the most challenging tasks is the choice of the DEA model specification.
First, a good model should make conceptual sense not only from the theoretical but also from a practical point of view. The interpretation must be easy to understand and the properties of the model must be natural. This contributes to the acceptance of the model by stakeholders and provides a safeguard against spurious models developed without much understanding of the industry. More precisely, this has to do with the choice of outputs in DEA that are natural cost drivers and with functional forms that, for example, have reasonable returns to scale and curvature properties. Second, it is important to guide the search for a good model with classical statistical tests. We typically seek models that have significant features with the right signs and that do not leave a large unexplained variation. Third, intuition and experience are a less stringent but important safeguard against false model specifications and the over- or underuse of data to draw false conclusions. It is important that the models produce results that are not that different from the results one would have found in other data sets, e.g., from other countries or related industries. However, intuition and experience must be used with caution. We may screen away extraordinary but true results (Type 1 error) and we may go for a more common set of results based on false models (Type 2 error). One aspect of this is that one will tend to be more confident in a specification of inputs and outputs that leads to comparable results in alternative estimation approaches, e.g., in the DEA and Stochastic Frontier Analysis models. Finally, the choice of model specification has to be pragmatic. We need to take into account the availability of data as well as what the model is going to be used for. In benchmarking, it matters if the model is used to learn best practices, to reallocate resources between entities or to directly incentivize firms or managers by performance-based payment schemes. Our approach gives a tool that can support the selection of features in benchmarking, allowing the user to navigate through a large number of DEA models and a large number of constraints modeling knowledge in the form of intuition and experience, in an efficient manner. The complexity of the model specification phase partially explains the lack of guidance in the literature in this respect, \cite{cook2014data,luo2012input,SOLEIMANIDAMANEH20095146}, and most of the effort goes into the analysis and interpretation of a given DEA model. Within the strand of literature on feature selection, the most common approach is to use a priori rules based on Statistical Analysis (such as correlations, dimensionality reduction techniques, and regression), and Information Theory (such as AIC or Shannon entropy). Alternatively, an ex-post analysis of the sensitivity of the efficient frontier to additional features can be run to detect whether relevant features have been left out. See \cite{adler2010improving,fernandez2018stepwise,li2017variable,nataraja2011guidelines,pastor2002statistical,sirvent2005monte,SOLEIMANIDAMANEH20095146,wagner2007stepwise}, and references therein. Recently, there have been attempts to use LASSO techniques from Statistical Learning to build sparse benchmarking models, i.e., models using just a few features, \cite{caiLASSODEA,LEE2018,qin2014joint}.
In this paper, the DEA Linear Programming formulation is enriched with zero-one decision variables modelling the selection of features for different objective functions, such as the average efficiency or the squared distance to the ideal point where all DMUs are efficient, and for different sets of constraints that incorporate knowledge from the industrial application, such as bounds on the weights as well as costs on the features. This yields either a Mixed Integer Linear Programming (MILP) problem, or a Mixed Integer Quadratic Programming (MIQP) one. Thus, in contrast to the existing literature, which tends to combine statistical analysis with the mathematical programming based DEA, we propose an approach that is entirely driven by mathematical optimization. We illustrate our models in the benchmarking of electricity Distribution System Operators (DSOs), where there is a pool of 100 potential outputs. The contributions of our approach are threefold. First, our single-model mathematical approach can better guide the selection of features: it directly controls the number of chosen features, as opposed to techniques based on seeking sparsity, being thus able to quantify the added value of additional features; works directly with the original features, as opposed to dimensionality reduction techniques, which create artificial features that are difficult to interpret; and can derive a collection of models by shaping in alternative ways the distribution of the efficiencies, using different objective functions that focus on different groups of DMUs, which can be combined through, for instance, Shannon entropy \cite{SOLEIMANIDAMANEH20095146}. Second, while the previous literature has focused on the choice of variables from a small set of candidates, e.g., \cite{LEE2018}, in the era of Big Data, the set of alternatives to choose from is expanding at a fast pace, and the challenge is often not the lack of data, but the abundance of data, \cite{ZHU2018291}. In the numerical section, we show how our MILP/MIQP approach is able to make the selection from a large pool of outputs. Third, we introduce an element of game theory when selecting features. In applied projects, the evaluated DMUs will typically try to influence the feature selection since this will affect how one firm is evaluated relative to others. It is therefore important to think about the conflict between the DMUs (the players of the game) when choosing the set of features (the strategies of the game) used in the calculation of the efficiencies (outcome). We illustrate the results for a simpler game setting where the strategies are derived from the individual and the joint models. The remainder of the paper is structured as follows. In Section \ref{sec:OS}, we introduce the individual feature selection problem where the selection is tailored to a given DMU. In Section \ref{sec:singleOS}, we introduce the joint feature selection problem where the selection is imposed to be the same for all DMUs. Section \ref{sec:numerical} is devoted to the illustration of our models in the benchmarking of electricity Distribution System Operators (DSOs). We end the paper in Section \ref{sec:conclusions} with some conclusions and lines for future research. \section{The individual selection model\label{sec:OS}} In this section, and for an individual DMU, we propose a Mixed Integer Linear Programming (MILP) formulation to select outputs in Data Envelopment Analysis (DEA).
Consider $K$ DMUs (indexed by $k$), using $I$ inputs (indexed by $i$), and producing $O$ outputs (indexed by $o$). DMU $k$ uses vector of inputs $\mathbf{x}^{(k)}\in {\mathbb R}_+^I$ to produce vector of outputs $\mathbf{y}^{(k)}\in {\mathbb R}_+^O$. Let $E^{(k)}$ be the so-called Farrell input-oriented efficiency of DMU $k$, which is the optimal solution value to a DEA model. Our goal is to select the $p$ outputs from the $O$ potential ones that yield the maximal efficiency for DMU $k$. We first start with the classical formulation of the problem which solves a Linear Programming model, and subsequently include the output selection decision variables. The output selection model for DMU $k$ is enriched with input selection decision variables, as well as constraints modeling desirable properties about the selected features. Please note that our approach can easily be extended to the use of other efficiency measures, including the output-based Farrell efficiency. \subsection{The selection model for a DMU} We start with the formulation of the classical DEA model, in which we can make use of the $O$ outputs available. The input-oriented efficiency of DMU $k$, $E^{(k)}$, in a DEA model with constant returns to scale (CRS) is equal to the optimal solution value of the following Linear Programming formulation, \begin{eqnarray} E^{(k)} = \mbox{maximize}_{(\bm{\alpha}^{(k)},\bm{\beta}^{(k)})} \ \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \label{eq:of} \end{eqnarray} \noindent {\em s.t.} (DEA$^{(k)}$) \begin{eqnarray} \displaystyle \sum_{o=1}^O \beta_o^{(k)} y_o^{(j)} - \sum_{i=1}^I \alpha_i^{(k)} x_i^{(j)} \le 0 && \forall j=1,\ldots,K\label{eq:DEA1}\\ \sum_{i=1}^I \alpha_i^{(k)} x_i^{(k)} =1 && \label{eq:DEA2}\\ \alpha^{(k)} \in {\mathbb R}^I_+ && \label{eq:DEA3}\\ \beta^{(k)} \in {\mathbb R}^O_+, && \label{eq:DEA4} \end{eqnarray} where $\alpha_i^{(k)}$ is the weight for input $i$ and $\beta_o^{(k)}$ the weight for output $o$. (DEA$^{(k)}$) has $K+1$ linear constraints and $I+ O$ continuous variables, and thus can be solved efficiently even for large problem instances. We continue with the model in which $p$ outputs are to be selected from the $O$ available ones such that the efficiency of DMU $k$ is maximized. Let $z^{(k)}_o$ be equal to 1 if output $o$ can be used in the calculation of the efficiency of DMU $k$, and 0 otherwise. Let $E^{(k)}(\mathbf{z}^{(k)})$ denote the corresponding efficiency. The decision variables $\beta_o^{(k)}$ and $\alpha_i^{(k)}$ are defined as above. The Output Selection for DMU $k$, where $p$ outputs must be selected such that $E^{(k)}(\mathbf{z}^{(k)})$ is maximized, can be written as the following MILP: \begin{eqnarray} v^{(k)}(p) := \mbox{maximize}_{(\bm{\alpha}^{(k)},\bm{\beta}^{(k)},\mathbf{z}^{(k)})} \ \displaystyle \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \end{eqnarray} \noindent {\em s.t.} (OSDEA$^{(k)}(p)$) \begin{eqnarray} \displaystyle \eqref{eq:DEA1}-\eqref{eq:DEA4} && \nonumber\\ \beta^{(k)}_o \le M z^{(k)}_o && \forall o=1,\ldots,O\label{eq:DEA5OS}\\ \sum_{o=1}^O z^{(k)}_o = p && \label{eq:DEA6OS}\\ z^{(k)}_o \in \{0,1\} && \forall o=1,\ldots,O\label{eq:DEA7OS}, \end{eqnarray} where $M$ is a big constant. Constraints \eqref{eq:DEA1}-\eqref{eq:DEA4} were already present in the classical DEA model. Constraints (\ref{eq:DEA5OS}) make sure that the selection variables $z^{(k)}_o$ are well defined: if $z^{(k)}_o$ equals $0$, then $\beta_o^{(k)}$ equals $0$ too. Constraint (\ref{eq:DEA6OS}) models the number of features to be selected. 
Finally, constraints (\ref{eq:DEA7OS}) relate to the range of decision variables $z^{(k)}_o$. (OSDEA$^{(k)}(p)$) has $K+O+2$ linear constraints and $I+ 2 \, O$ variables, where $I+O$ are continuous and $O$ are binary ones. Our numerical experiments show that this problem can be solved efficiently, although the solution time is affected by the value of the $M$ constant. The value of $M$, and thus the computational burden of the problem, can be reduced using an upper bound on the weight associated with output $o$, for each $o=1,\ldots,O$. It is not difficult to see that, without loss of optimality, $\beta^{(k)}_o=0$ if $y_o^{(k)}=0$, and thus $z^{(k)}_o=0$. Otherwise, $\beta^{(k)}_o \le \frac{1}{y_o^{(k)}}$, by combining \eqref{eq:DEA1} and \eqref{eq:DEA2}. Thus, constraints \eqref{eq:DEA5OS} can be tightened to \begin{eqnarray*} \displaystyle \beta^{(k)}_o = 0 && \forall o=1,\ldots,O, \mbox{such that } y_o^{(k)}=0\\ \beta^{(k)}_o \le \frac{1}{y_o^{(k)}} \,\, z^{(k)}_o && \forall o=1,\ldots,O, \mbox{such that } y_o^{(k)}>0. \end{eqnarray*} Let $z^{(k)}(p)$ denote the optimal selection variables to (OSDEA$^{(k)}(p)$), i.e., the $p$ outputs that yield the maximum efficiency for DMU $k$. Thus, the optimal solution value to (OSDEA$^{(k)}(p)$), denoted above by $v^{(k)}(p)$, is equal to $E^{(k)}(z^{(k)}(p))$. A few things are known about the maximum efficiency $v^{(k)}(p)$ as a function of $p$. The efficiency $v^{(k)}(p)$ is nondecreasing in $p$, i.e., the more outputs we select the better the efficiency of DMU $k$ can be. Moreover, in the extreme case when all outputs are considered, we have that $v^{(k)}(O)=E^{(k)}$. Thus, a plausible strategy to choose the value of $p$ is to look at the marginal contribution of an additional feature, i.e., $v^{(k)}(p+1) - v^{(k)}(p)$, and stop when this is below a threshold. \subsection{Extensions} \label{sec:extensions} In this section we discuss several interesting extensions that can be carried out using the previous model as a basis. First, we present the formulation when both inputs and outputs are to be selected, all at once. Second, we model constraints on the weights attached to the outputs. Finally, we discuss how other attributes attached to the outputs, such as costs and correlations, may constrain the feature selection. \subsubsection{Selection of inputs and outputs} Note that up to now, and for the sake of clarity, we have focused on the selection of outputs. The selection of $\tilde{p}$ inputs from the $I$ potential ones can be included in a similar fashion. Indeed, let us consider the new binary variables $\tilde{z}^{(k)}_i$, equal to 1 if input $i$ can be used in the calculation of the efficiency for DMU $k$, and 0 otherwise. Hence, the Feature Selection for DMU $k$, (FSDEA$^{(k)}(p)$), where $\tilde{p}$ inputs and $p$ outputs are selected, can also be written as an MILP \begin{eqnarray} \mbox{maximize}_{(\bm{\alpha},\bm{\beta},\mathbf{z}^{(k)},\tilde{\mathbf{z}}^{(k)})} \ \displaystyle \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \end{eqnarray} \noindent {\em s.t.} (FSDEA$^{(k)}(p)$) \begin{eqnarray} \displaystyle \eqref{eq:DEA1}-\eqref{eq:DEA7OS}&& \nonumber\\ \alpha^{(k)}_i \le \tilde{M} \tilde{z}^{(k)}_i && \forall i=1,\ldots,I\label{eq:DEA252OS}\\ \sum_{i=1}^I \tilde{z}^{(k)}_i = \tilde{p} && \label{eq:DEA262OS}\\ \tilde{z}^{(k)}_i \in \{0,1\} && \forall i=1,\ldots,I\label{eq:DEA272OS}, \end{eqnarray} where $ \tilde{M}$ is another big constant.
Constraints (\ref{eq:DEA252OS})--(\ref{eq:DEA272OS}) are the counterparts of (\ref{eq:DEA5OS})--(\ref{eq:DEA7OS}) but modelling the selection of inputs instead of outputs. (FSDEA$^{(k)}(p)$) has $K+O+I+3$ linear constraints and $2 \, I+ 2 \, O$ variables, where half of them are continuous and the other half binary. Although running times are not an issue for this model, one can lower them even further by finding tighter values of $M$ and $ \tilde{M}$. As above, this can be done using bounds on the inputs and the outputs. \subsubsection{Modeling constraints on weights} Our (OSDEA$^{(k)}(p)$) improves the discriminatory power of the DEA model by focusing on a few outputs, and eliminating the rest from the calculation of the efficiency of DMU $k$. There is a strand of literature that, also using the discriminatory power as a basis, argues the necessity of controlling the values of the weights \citep{allen1997weights,GREEN1996461,joro2015data,PODINOVSKI2016916,ramon2010choice,doi101002ev1441}. In these works, it is assumed that we have upper and lower bounds on the weight $\beta^{(k)}_o$, say, $L^{(k)}_{o}$ and $U^{(k)}_{o}$, for $o = 1,\ldots,O$, i.e., \begin{eqnarray} \displaystyle L^{(k)}_{o} \leq \beta^{(k)}_o \leq U^{(k)}_{o} & \forall o = 1,\ldots,O. \label{eq:lowerupperbound} \end{eqnarray} Gathering nontrivial values for these bounds is not a straightforward task for the user in the presence of many outputs, as in the dataset on the benchmarking of electricity DSOs in our numerical section. In any case, we can enrich our (OSDEA$^{(k)}(p)$) to control not only whether an output can be used, but also the range of values for the corresponding weight. These bounds can be incorporated in constraints (\ref{eq:DEA5OS}) in (OSDEA$^{(k)}(p)$) yielding $$ \begin{array}{ll} \displaystyle L^{(k)}_{o}z^{(k)}_o \leq \beta^{(k)}_o \leq U^{(k)}_{o} z^{(k)}_o & \forall o = 1,\ldots,O. \end{array} \eqno{(7')} $$ There are a few observations to be made. First, the knowledge of upper bounds on the weights naturally tightens the value of $M$. Second, if there are meaningful lower bounds on the weights, i.e., if $L^{(k)}_{o}> 0$, then $z^{(k)}_o$ must be equal to $1$. Third, these positive lower bounds make the selection problem (OSDEA$^{(k)}(p)$) infeasible for small values of $p$. Indeed, this is the case when there are more than $p$ outputs with a positive lower bound. \subsubsection{Modeling attributes of the outputs} Outputs may have attributes attached to them, which may affect the selection. We will model two of those. First, we will consider that outputs are different in nature and therefore we will attach a different cost to them. Let $c_o$ denote the cost associated with output $y_o, o = 1, \ldots, O$, which can measure the collection and the verification of this output in a repeated setting. To select $p$ outputs so that their total cost does not exceed a given amount $C$, we need to add to (OSDEA$^{(k)}(p)$) the following constraint \begin{eqnarray} \displaystyle \sum_{o=1}^O c_o z^{(k)}_o \leq C. && \label{eq:DEAOScost} \end{eqnarray} Second, we can consider the outputs being partitioned into $S$ clusters, with outputs within a cluster being similar in terms of what they measure. In the context of benchmarking electricity Distribution System Operators (network companies), clusters may relate to the many different measurements of connections, transformers, lines, cables, etc.
Let $\mathcal{H}= \{H_1, \ldots, H_S\}$ denote the partitioning of the outputs, namely $H_\ell \cap H_s = \emptyset$ for $\ell \neq s$ and $\cup_{\ell=1}^S H_\ell=\{1,\ldots,O\}$. Given the similarity of outputs within a cluster, we will impose that at most (respectively, at least) $p^{\rm (max)}_\ell$ (respectively, $p^{\rm (min)}_\ell$) outputs can be selected from $H_\ell$. In order to do so, we need to add to (OSDEA$^{(k)}(p)$) the following constraints \begin{eqnarray} \displaystyle \sum_{o \in H_\ell} z^{(k)}_o \leq p^{\rm (max)}_\ell && \forall \ell=1,\ldots,S. \label{eq:DEAOScluster1}\\ \sum_{o \in H_\ell} z^{(k)}_o \geq p^{\rm (min)}_\ell && \forall \ell=1,\ldots,S. \label{eq:DEAOScluster2} \end{eqnarray} Finally, we have correlation $\rho_{oo'}$ between outputs $o$ and $o'$ as another attribute. If two outputs are highly correlated, we may be interested in using only one of them, since the information they provide is almost the same and can lead to the problem of multicollinearity \cite{Bertsimas2016ORF}. Hence, we want to impose that if $\rho_{oo'}$ is greater than a preselected threshold, then outputs $o$ and $o'$ cannot be chosen simultaneously. We can model this by first defining a 0--1 matrix $R$, in which $R_{oo'}=0$ if $\rho_{oo'}$ is lower than the threshold, and $1$ otherwise. Then, we simply have to add to (OSDEA$^{(k)}(p)$) the constraints \begin{eqnarray} \displaystyle z^{(k)}_o + z^{(k)}_{o'} \leq 2- R_{oo'}, & \forall o < o'.\label{eq:correlmodel} \end{eqnarray} The choice of the threshold has to be made with care, since some works like \cite{nunamaker1985using} suggest that the addition of a highly correlated variable may increase the efficiency. Throughout this section, we have made the selection of outputs individually for DMU $k$ with the goal of maximizing the efficiency of DMU $k$. Therefore, for two different DMUs, $k$ and $ k'$, the selected outputs, $z^{(k)}(p)$ and $z^{(k')}(p)$, may differ. In model specification one is interested in finding the most discriminatory features in order to build a valid model for all DMUs. With this in mind, we propose in the next section a mathematical optimization model that selects the outputs jointly for all DMUs, ensuring they will be the same ones for all DMUs. \section{The joint selection model\label{sec:singleOS}} In this section, we address the problem in which the selected outputs have to be the same for all DMUs. First, this joint selection is made maximizing the average efficiency of all DMUs, yielding an MILP formulation. The model can be enriched as in the previous section with input selection decision variables, as well as constraints modeling desirable properties about the selected features. Second, we propose alternatives to the maximization of the average efficiency when making the joint selection of outputs, such as the maximization of the weighted average efficiency, the minimum efficiency, or a percentile of the efficiencies. The joint selection model can again be formulated as an MILP problem. We also consider the minimization of the square of the Euclidean distance from each DMU efficiency to the ideal value of 1, where the joint selection model can be rewritten as a Mixed Integer Quadratic Programming problem.
\subsection{The selection model for all DMUs} To obtain all efficiencies $E^{(k)}$ simultaneously, one can solve the following single-objective Linear Programming formulation \begin{eqnarray} \frac{1}{K} \sum_{k=1}^K E^{(k)} = \mbox{maximize}_{(\bm{\alpha},\bm{\beta})} \ \displaystyle \frac{1}{K} \sum_{k=1}^K \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \label{eqn:ofsingle} \end{eqnarray} \noindent {\em s.t.} \begin{eqnarray} \displaystyle \sum_{o=1}^O \beta_o^{(k)} y_o^{(j)} - \sum_{i=1}^I \alpha_i^{(k)} x_i^{(j)} \le 0 && \forall j=1,\ldots,K; \forall k=1,\ldots,K \label{eq:DEA1single}\\ \sum_{i=1}^I \alpha_i^{(k)} x_i^{(k)} =1 && \forall k=1,\ldots,K \label{eq:DEA2single}\\ \alpha \in {\mathbb R}^{I \cdot K}_+ && \label{eq:DEA3single}\\ \beta \in {\mathbb R}^{O \cdot K}_+. && \label{eq:DEA4single} \end{eqnarray} It is easy to see that this problem decomposes by DMU, and that each of the subproblems is equivalent to (DEA$^{(k)}$), whose optimal solution value is $E^{(k)}$. We continue with the model in which $p$ outputs are to be selected from the $O$ available ones, the same ones for all DMUs. The goal in this section is to maximize the average efficiency across all DMUs. Let $z_o$ be equal to 1 if output $o$ can be used in the calculation of the efficiencies, and 0 otherwise. The decision variables $\beta_o^{(k)}$ and $\alpha_i^{(k)}$ are defined as above. The Output Selection for DEA problem, (OSDEA$(p)$), where $p$ outputs must be selected such that the average efficiency across all DMUs is maximized, can be written as the following MILP: \begin{eqnarray} v(p) := \mbox{maximize}_{(\bm{\alpha},\bm{\beta},\mathbf{z})} \ \displaystyle \frac{1}{K} \sum_{k=1}^K \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \end{eqnarray} \noindent {\em s.t.} (OSDEA$(p)$) \begin{eqnarray} \displaystyle \eqref{eq:DEA1single}-\eqref{eq:DEA4single} && \nonumber\\ \beta^{(k)}_o \le M z_o && \forall o=1,\ldots,O; \forall k=1,\ldots,K\label{eq:DEA5OSNEW}\\ \sum_{o=1}^O z_o = p && \label{eq:DEA6OSNEW}\\ z_o \in \{0,1\} && \forall o=1,\ldots,O\label{eq:DEA7OSNEW}, \end{eqnarray} where $M$ is a big constant. Constraints \eqref{eq:DEA1single}-\eqref{eq:DEA4single} are necessary to find the weights of the inputs and the outputs that yield the efficiency of each DMU. Constraints (\ref{eq:DEA5OSNEW}) make sure that the selection variables $z_o$ are well defined with respect to $\beta_o^{(k)}$. Constraint (\ref{eq:DEA6OSNEW}) models the number of features to be selected. Finally, constraints (\ref{eq:DEA7OSNEW}) relate to the range of decision variables $z_o$. (OSDEA$(p)$) has $K(K+O+1)+1$ linear constraints and $K(O+I)+O$ variables, where $K(O+I)$ are continuous and $O$ are binary ones. We have multiplied the size of the problem by $K$, except for the number of binary variables, which are still one per output. Our numerical experiments show that this problem can still be solved efficiently. Moreover, and as in the previous section, the computational burden of the problem depends on the value of $M$ and we can tighten it using similar bounds. As before, we might extend the model as in Section \ref{sec:extensions}, with input selection decision variables, as well as constraints to model desirable properties of the outputs. Let $z(p)$ denote the optimal selection variables to (OSDEA$(p)$), i.e., the $p$ outputs that yield the maximum average efficiency. Thus, the optimal solution value to (OSDEA$(p)$), denoted above by $v(p)$, is equal to $\frac{1}{K} \sum_{k=1}^K E^{(k)}(z(p))$.
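The model above is straightforward to prototype with an off-the-shelf MILP solver. The following is a minimal sketch using the gurobipy interface of the solver employed in Section \ref{sec:numerical}; the arrays \texttt{x} and \texttt{y} holding the input and output data, the constant \texttt{M}, and all parameter values are placeholders, and the code is illustrative rather than the exact implementation used for the experiments. Restricting the sums to a single DMU $k$ (and indexing \texttt{z} by $k$) recovers the individual model (OSDEA$^{(k)}(p)$) of Section \ref{sec:OS}.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def osdea_joint(x, y, p, M=1000.0):
    # x[k][i]: input i of DMU k;  y[k][o]: output o of DMU k
    K, I, O = len(x), len(x[0]), len(y[0])
    m = gp.Model("OSDEA(p)")
    alpha = m.addVars(K, I, lb=0.0, name="alpha")      # input weights, one set per DMU
    beta  = m.addVars(K, O, lb=0.0, name="beta")       # output weights, one set per DMU
    z     = m.addVars(O, vtype=GRB.BINARY, name="z")   # output selection, shared by all DMUs
    # constraints eq:DEA1single: weighted outputs never exceed weighted inputs
    m.addConstrs(gp.quicksum(beta[k, o] * y[j][o] for o in range(O))
                 - gp.quicksum(alpha[k, i] * x[j][i] for i in range(I)) <= 0
                 for k in range(K) for j in range(K))
    # constraints eq:DEA2single: normalisation of each DMU's own weighted input
    m.addConstrs(gp.quicksum(alpha[k, i] * x[k][i] for i in range(I)) == 1
                 for k in range(K))
    # constraints eq:DEA5OSNEW: big-M link between the weights and the selection variables
    m.addConstrs(beta[k, o] <= M * z[o] for k in range(K) for o in range(O))
    # constraint eq:DEA6OSNEW: exactly p outputs are selected
    m.addConstr(z.sum() == p)
    # objective: average efficiency v(p)
    m.setObjective(gp.quicksum(beta[k, o] * (y[k][o] / K)
                               for k in range(K) for o in range(O)), GRB.MAXIMIZE)
    m.optimize()
    return m.objVal, [o for o in range(O) if z[o].X > 0.5]
\end{verbatim}
Attribute constraints such as the cost budget \eqref{eq:DEAOScost} can be added in the same fashion with a single extra \texttt{addConstr} call on the \texttt{z} variables.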
In general, we have that $E^{(k)}(z(p)) \le E^{(k)}(z^{(k)}(p))$, since $z^{(k)}(p)$ is the best strategy for DMU $k$. As in the previous section, the maximum average efficiency $v(p)$ is nondecreasing in $p$, i.e., the more outputs we select the better the average efficiency can be. In the limit case, we have that $v(O)=\frac{1}{K} \sum_{k=1}^K E^{(k)}$. The number of selected outputs $p$ is a parameter of our model. The user should make the choice of $p$ after inspecting the curve $v(p)$. As before, a plausible strategy to choose the value of $p$ is to look at the marginal contribution of an additional feature, i.e., $v(p+1) - v(p)$, and stop when this is below a threshold. The question is whether this marginal contribution is nonincreasing, i.e., $v(p+2) - v(p+1) \le v(p+1) - v(p)$, for all $p=1,\ldots,O-2$. Below, we show a toy example where this inequality is not satisfied, and thus $v(p)$ is not a concave function of $p$. In the numerical section, devoted to the benchmarking of electricity DSOs, the function $v(\cdot)$ that we obtain empirically is concave. \begin{counterex} \label{counterex:concave} Consider $5$ DMUs, each one described by a single input and four different outputs, as can be seen in Table~\ref{tab:Counter2}. When performing the feature selection procedure, the results that we obtain are the following. In the case of selecting just one output, say $p=1$, the procedure chooses ``Output 1'' and the obtained efficiencies are then just the same as the values of ``Output 1'' for each DMU. Hence, the average efficiency is 0.8. When two outputs are selected, the procedure chooses ``Output 1'' and ``Output 2''. These outputs make the average efficiency equal to 0.867. Furthermore, if three outputs are selected, the procedure chooses ``Output 2'', ``Output 3'' and ``Output 4''. These outputs make all the DMUs efficient (i.e., efficiency equal to $1$) and thus the average efficiency is 1. Clearly, \[v(3)-v(2) >v(2)-v(1),\] and therefore $v(\cdot)$ is not concave. \begin{table} $$ \begin{tabular}{|c|c|cccc|} \hline DMU & Input 1 & Output 1 & Output 2 & Output 3 & Output 4 \\ \hline 1 & 1 & 0.6 & $\frac{1}{3}$ & $\frac{1}{3}$ & $\frac{1}{3}$ \\ 2 & 1 & 0.7 & $\frac{1}{3}$ & $\frac{1}{3}$ & $\frac{1}{3}$ \\ 3 & 1 & 0.8 & 1 & 0 & 0 \\ 4 & 1 & 0.9 & 0 & 1 & 0 \\ 5 & 1 & 1 & 0 & 0 & 1 \\ \hline \end{tabular} $$ \caption{Toy example for which $v(\cdot)$ is not concave}\label{tab:Counter2} \end{table} \end{counterex} A greedy approach is provided in \cite{pastor2002statistical} to address the feature selection problem in a nested fashion. In short, this greedy nested procedure works as follows. For $p=1$, (OSDEA$(p)$) is solved to optimality. Let $o(1)$ be its best output. For $p=2$, (OSDEA$(p)$) is solved to optimality, with the additional constraint that $z_{o(1)}=1$. Let $o(2)$ be its best output. In general, for $p$, (OSDEA$(p)$) is solved to optimality, with the additional constraints that $z_{o(1)}=z_{o(2)}=\ldots=z_{o(p-1)}=1$. Let $o(p)$ be its best output. Clearly, this greedy approach returns a sequence of outputs that is nested, i.e., the outputs selected in iteration $p-1$ will also be selected in iteration $p$, for all $p$. The following is a toy example that illustrates that the approach in \cite{pastor2002statistical} does not provide, in general, the optimal solution to (OSDEA$(p)$).
\begin{counterex} \label{counterex:nested} Consider $4$ DMUs, each one described by a single input and three different outputs, as can be seen in Table~\ref{tab:Counter}. When performing the feature selection procedure, the results that we obtain are the following. In the case of selecting just one output, say $p=1$, the procedure chooses ``Output 1'' and the obtained efficiencies are then just the same as the values of ``Output 1'' for each DMU. When two outputs are selected, the procedure chooses ``Output 2'' and ``Output 3''. These outputs make all the DMUs efficient. However, if either ``Output 1'' and ``Output 2'' or ``Output 1'' and ``Output 3'' were used instead, the efficiencies would be \{0.85,0.9,0.95,1\} and \{1,1,0.927,1\}, respectively. \begin{table} $$ \begin{tabular}{|c|c|ccc|} \hline DMU & Input 1 & Output 1 & Output 2 & Output 3 \\ \hline 1 & 1 & 0.85 & 0.2 & 0.8 \\ 2 & 1 & 0.95 & 0.4 & 0.6 \\ 3 & 1 & 0.9 & 0.6 & 0.4 \\ 4 & 1 & 1 & 0.8 & 0.2 \\ \hline \end{tabular} $$ \caption{Toy example for which the approach in \cite{pastor2002statistical} does not provide the optimal solution to (OSDEA$(p)$)}\label{tab:Counter} \end{table} \end{counterex} \subsection{Alternative objective functions} In (OSDEA$(p)$), we maximize the average efficiency across all DMUs. In this section, we propose other objective functions $\phi(\cdot)$ to select the outputs. A straightforward generalization would be to consider the weighted average efficiency. \begin{equation} \label{eq:quadratic} \phi^{\rm (w)}(\bm{\alpha},\bm{\beta},\mathbf{z}) = \frac{1}{K} \sum_{k=1}^K \sum_{o=1}^O \omega^{(k)} \beta_o^{(k)} y_o^{(k)}. \end{equation} This is relevant if the DMUs are not equally important. If there is only one input, say cost in $\$$, we could for example use $\omega^{(k)}=x^{(k)}$, and (\ref{eq:quadratic}) would correspond to minimizing the total sector loss from inefficiency. Instead of the weighted average efficiency, one could be interested in measuring how far each DMU is from efficiency. This can be measured with the following quadratic function \begin{equation} \label{eq:quadratic2} \phi^{\rm (q)}(\bm{\alpha},\bm{\beta},\mathbf{z}) = \frac{1}{K} \sum_{k=1}^K (1- \sum_{o=1}^O \beta_o^{(k)}y_o^{(k)})^2. \end{equation} Alternatively, our goal could have been maximizing the worst efficiency, i.e., the minimum one \begin{equation} \label{eq:quadratic3} \phi^{\rm (m)}(\bm{\alpha},\bm{\beta},\mathbf{z})= \min_{k=1,\ldots,K} \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)}. \end{equation} This is relevant, for example, if outputs are selected with the aim of being Rawlsian fair towards all DMUs. Instead of the minimum, we could have optimized another $\pi$-percentile, $\pi =1,\ldots,100$, of the efficiency distribution. Assuming that the efficiencies are given in non-decreasing order, $ \sum_{o=1}^O\beta_o^{(k)} y_o^{(k)} \le \sum_{o=1}^O\beta_o^{(k+1)}y_o^{(k+1)}$, for all $k$, we would have \begin{equation} \label{eq:percentile} \phi^{(\pi)}(\bm{\alpha},\bm{\beta},\mathbf{z}) = \sum_{o=1}^O\beta_o^{(k(\pi))}y_o^{(k(\pi))}, \end{equation} with $k(\pi)=\lfloor K \, \dfrac{\pi}{100}\rfloor.$ The Output Selection for DEA problem where the goal is to maximize $\phi^{\rm (w)}$ in \eqref{eq:quadratic}, (OSDEA$(p)$)$^{\rm (w)}$, can be formulated in the same fashion as (OSDEA$(p)$). The Output Selection for DEA problem where the goal is to minimize $\phi^{\rm (q)}$ in \eqref{eq:quadratic2}, (OSDEA$(p)$)$^{\rm (q)}$, can be formulated similarly to (OSDEA$(p)$).
While the feasible region remains the same, the objective function becomes quadratic and the goal is to minimize it, yielding a Mixed Integer Quadratic Programming formulation. The Output Selection for DEA problem where the goal is to maximize the minimum efficiency $\phi^{\rm (m)}$ in \eqref{eq:quadratic3}, (OSDEA$(p)$)$^{\rm (m)}$ can be written as an MILP. Here, we need to define a new variable $\lambda$ to rewrite the minimum in the objective function, and include the corresponding constraints to ensure that the new variable is well defined. \begin{equation} \label{eq:rewritemin} \lambda \le \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \quad k=1,\ldots,K, \end{equation} and thus \begin{eqnarray} \mbox{maximize}_{(\bm{\alpha},\bm{\beta},\mathbf{z},\lambda)} \ \displaystyle \lambda \end{eqnarray} \noindent {\em s.t.} (OSDEA$(p)$)$^{\rm (m)}$ \begin{eqnarray*} \displaystyle \eqref{eq:DEA1single}-\eqref{eq:DEA4single}; \eqref{eq:DEA5OS}-\eqref{eq:DEA7OS}; \eqref{eq:rewritemin}. && \nonumber \end{eqnarray*} The Output Selection for DEA problem where the goal is to maximize the $\pi$-percentile $\phi^{(\pi)}$ in \eqref{eq:percentile}, (OSDEA$(p)$)$^{(\pi)}$, can also be written as an MILP, similarly as in \cite{benati2015using}. We need to define a new variable $\lambda$ that is equal to the percentile, as well as include the corresponding constraints to ensure that the new variable is well defined. We also need a new binary variable, $\delta^{(k)}$, that is equal to $1$ if the efficiency of DMU $k$, $\sum_{o=1}^O\beta_o^{(k)} y_o^{(k)}$, is at least $\lambda$ and $0$ otherwise. \begin{eqnarray} \mbox{maximize}_{(\bm{\alpha},\bm{\beta},\mathbf{z},\lambda,\delta)} \ \displaystyle \lambda \end{eqnarray} \noindent {\em s.t.} (OSDEA$(p)$)$^{(\pi)}$ \begin{eqnarray} \displaystyle \eqref{eq:DEA1single}-\eqref{eq:DEA4single}; \eqref{eq:DEA5OS}-\eqref{eq:DEA7OS} && \nonumber\\ \sum_{o=1}^O \beta_o^{(k)} y_o^{(k)} \geq \lambda - M' (1- \delta^{(k)}) && \forall k=1,\ldots,K \label{eq:DEA8singleOSpercentile}\\ \sum_{k=1}^{K} \delta^{(k)} = \lfloor K \, \dfrac{\pi}{100}\rfloor \label{eq:DEA9singleOSpercentile}\\ \delta^{(k)} \in \{0,1\} && \forall k=1,\ldots,K \label{eq:DEA10singleOSpercentile}, \end{eqnarray} with $M'$ a big constant. \section{Numerical section}\label{sec:numerical} In this section, we illustrate the models in previous sections using a real-world dataset in benchmarking of electricity DSOs \cite{agrell2017regulatory,agrell2018theory}. Here, we have $K=182$ DMUs, $O=100$ outputs, and $I=1$ input. As customary, each output has been normalized dividing it by the difference between the maximum and the minimum values of the output. Figure \ref{fig:Corrp10} displays the correlations between the outputs, with darker colours pointing at higher correlations. This matrix reveals subsets of outputs highly correlated with each other, such as outputs 23 to 31, where correlations are above 0.5, except for $\mbox{corr}(23,27)=0.37$ and $\mbox{corr}(27,29)=0.35$. The experiments were run on a computer with an Intel$^\circledR$ Core{$^\textrm{TM}$} i7-6700 processor at $3.4$ GHz using $16$ GB of RAM, running Windows 10 Home. All the optimization problems have been solved using Python 3.5 interface \cite{pthn} with Gurobi 7.0.1 solver, \cite{gurobi}. We have solved (OSDEA$(p)$) for $p=1,\ldots,10$, with $M$ equal to $1000$. We have run the approach in \cite{pastor2002statistical} to provide (OSDEA$(p)$) with an initial solution. 
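For completeness, the greedy nested procedure of \cite{pastor2002statistical} that we use to provide an initial solution can be sketched as follows; \texttt{solve\_osdea} stands for any routine that solves (OSDEA$(p)$) with some selection variables fixed to one (for instance, a variant of the sketch in Section \ref{sec:singleOS}) and is a placeholder name rather than part of our implementation.
\begin{verbatim}
def greedy_nested(x, y, p_max, solve_osdea):
    # Nested selection: at step p, re-solve OSDEA(p) with the p-1 previously
    # chosen outputs forced into the model, so the selections are nested by construction.
    fixed = []                 # outputs o whose z_o is fixed to 1
    values = []                # v(1), v(2), ..., v(p_max) of the nested solutions
    for p in range(1, p_max + 1):
        v, selected = solve_osdea(x, y, p, fixed_to_one=fixed)
        new = [o for o in selected if o not in fixed]
        fixed.append(new[0])   # exactly one new output enters at each step
        values.append(v)
    return fixed, values
\end{verbatim}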
A time limit of 300 seconds has been imposed, although this is not binding for small values of $p$. Once (OSDEA$(p)$) has selected the $p$ outputs, $p=1,\ldots,10$, we calculate the efficiencies of the DSOs obtained with the chosen outputs. The results are summarized in Table \ref{table:summary statistics}, and Figures \ref{fig:distFSallp} and \ref{fig:distFSBoxplot}, while the correlation matrix in Figure \ref{fig:Corrp10-selected} highlights the correlations between the selected outputs. Table \ref{table:summary statistics} presents summary statistics of the distribution of the efficiencies, namely the minimum, the maximum, the average (i.e., $v(p)$), the standard deviation, the quartiles $q_i$, and the interquartile range (i.e., $q_3$-$q_1$). The last column of this table reports the selected outputs. Figure \ref{fig:distFSBoxplot} displays the box-and-whiskers plots as well as the average efficiency $v(p)$, and Figure \ref{fig:distFSallp} the histograms of the distribution of the efficiencies. The average efficiency improves with the number of selected outputs, $p$, increasing from $0.5555$ to $0.8732$. Figure \ref{fig:distFSBoxplot} shows that the marginal effect of increasing $p$ to $p+1$ is decreasing for this dataset. When looking at the quartiles, we can see that there is a substantial improvement too by increasing $p$. When the number of selected features, $p$, is small the chosen features give poor efficiencies to some of the DMUs. Indeed, for $p\le5$, the minimum efficiency is below $0.1200$. As $p$ increases, we can see that this minimum increases rapidly with $p$. The first quartile increases from $0.4743$ to $0.7902$. Similarly, for the median, we have $0.5637$ to $0.9090$, while for the third quartile $0.6300$ to $1.0000$. The maximum efficiency is already maximal, i.e., equal to $1.0000$, for the smallest value of $p$, and remains like that for all values of $p$ tested. In general, the standard deviation of the efficiencies decreases with $p$, while the interquartile range increases. In terms of the correlations, for small values of $p$, the selected outputs are highly correlated with each other. For instance, for $p=2$, we choose outputs 11 and 31, for which $\mbox{corr}(11,31)=0.91$; while for $p=3$, we choose outputs 21, 31, 54, with $\mbox{corr}(21,31)=0.89$, $\mbox{corr}(21,54)=0.88$ and $\mbox{corr}(31,54)=0.91.$ Actually, for $p=2,\ldots,5$, the smallest correlation between the selected outputs is equal to $0.61$. For $p=10$, there are only two outputs, for which we can find correlations below 0.5, namely outputs 19 and 74. \begin{table} \begin{scriptsize} $$ \begin{tabular}{|c|cc|cc|cccc|l|} \hline $p$&$\min$&$\max$&\textrm{mean}&\textrm{st. 
dev.}&\textrm{$q_1$}&\textrm{$q_2$}&\textrm{$q_3$}&\textrm{$q_3$-$q_1$}&\textrm{selected features}\\ \hline 1&0.0000&1.0000&0.5555&0.1695&0.4743&0.5637&0.6300&0.1557&\textrm{59}\\ 2&0.0006&1.0000&0.6553&0.1708&0.5772&0.6682&0.7482&0.1710&\textrm{11 31}\\ 3&0.0009&1.0000&0.7118&0.1643&0.6391&0.7222&0.7839&0.1448&\textrm{21 31 54}\\ 4&0.1161&1.0000&0.7487&0.1511&0.6645&0.7479&0.8404&0.1759&\textrm{16 21 31 59}\\ 5&0.1161&1.0000&0.7812&0.1494&0.6962&0.7738&0.8911&0.1949&\textrm{16 21 31 59 94}\\ 6&0.3105&1.0000&0.8082&0.1402&0.7222&0.8068&0.9305&0.2083&\textrm{16 19 21 31 59 94}\\ 7&0.3105&1.0000&0.8290&0.1404&0.7474&0.8355&0.9545&0.2071&\textrm{16 19 21 31 59 91 94}\\ 8&0.3105&1.0000&0.8462&0.1402&0.7658&0.8689&0.9802&0.2144&\textrm{16 19 21 31 59 91 94 97}\\ 9&0.3105&1.0000&0.8610&0.1370&0.7789&0.8841&0.9972&0.2183&\textrm{16 19 21 31 59 74 91 94 97}\\ 10&0.4576&1.0000&0.8732&0.1304&0.7902&0.9090&1.0000&0.2098&\textrm{16 19 21 29 31 59 74 91 94 97}\\ \hline \end{tabular} $$ \caption{Summary statistics for the distribution of efficiencies, for $p=1,\ldots,10$}\label{table:summary statistics} \end{scriptsize} \end{table} In real-life applications, the DMUs may be consulted on the chosen outputs. This may be useful to ensure that the resulting choices make conceptual sense. However, such involvement is likely to lead to strategic behavior. The evaluated DMUs may try to influence the choice of outputs to their own advantage. This may lead to a game between the DMUs (the players), each of which has preferred selections of outputs (strategies), and the modeler, who is trying to make a reasonable selection based on the resulting efficiencies of all DMUs (the outcome). To start investigating the challenges of such strategic behavior, we will consider a simple game or social choice problem. Without loss of generality, we assume that $p$ is fixed. We assume that there are $K$ players, the DMUs, and $K+1$ strategies or choices, namely the selection of outputs $z^{(k)}(p)$ according to the individual preferences as determined by (OSDEA$^{(k)}(p)$), $k=1,\ldots,K$, and to the joint selection $z(p)$ as determined by (OSDEA$(p)$). We will think of the joint selection $z(p)$ as the default or status quo selection, and the question is now whether one of the individual selections $z^{(k)}(p)$ would be preferred by a large group of DMUs. If so, the modeler is likely to face strong opposition to his proposed selection of outputs. For DMU $k'$, the selection of outputs made with (OSDEA$^{(k')}(p)$), $z^{(k')}(p)$, is at least as attractive as the one made with (OSDEA$(p)$), $z(p)$, or in other words, the reported efficiency with $z^{(k')}(p)$ is at least as high as the one with $z(p)$. However, for any other DMU $k\not = k'$, it is not clear whether the selection $z^{(k')}(p)$ reports a higher efficiency for DMU $k$ or not. One would like to know how many DMUs prefer the joint strategy $z(p)$ over the $K$ individual strategies $z^{(k')}(p)$. The so-called cross-efficiency measures this preference. Let $\Delta^{(k)}(k')=E^{(k)}(\mathbf{z}^{(k')}(p))-E^{(k)}(\mathbf{z}(p))$, with $k,k'=1,\ldots,K$, be equal to the difference in reported efficiency for DMU $k$ between the individual selection model for DMU $k'$ and the joint selection model. We can define \[\Pi(k') = \frac{100}{K} \, \mbox{cardinality} (\{k \, : \, \Delta^{(k)}(k')>0\}),\] i.e., the percentage of DMUs that prefer the individual strategy of DMU $k'$ over the joint one.
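As a small illustration of this computation, given the reported efficiencies of every DMU under the joint selection and under each individual selection, $\Pi(k')$ can be obtained as follows (a minimal sketch; \texttt{E\_ind} and \texttt{E\_joint} are placeholder names for the cross-efficiency data):
\begin{verbatim}
def preference_shares(E_ind, E_joint):
    # E_ind[k][kp]: efficiency of DMU k under the selection z^{(k')}(p) of DMU k' = kp
    # E_joint[k]  : efficiency of DMU k under the joint selection z(p)
    K = len(E_joint)
    # Pi(k') = percentage of DMUs k with Delta^{(k)}(k') = E_ind[k][kp] - E_joint[k] > 0
    return [100.0 / K * sum(1 for k in range(K)
                            if E_ind[k][kp] - E_joint[k] > 0)
            for kp in range(K)]
\end{verbatim}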
In Figure \ref{fig:distFSindvsjointallp}, we illustrate the share of DMUs that prefer individual selections to the joint selection. As for the joint strategy, a time limit of 300 seconds has been imposed on (OSDEA$^{(k')}(p)$), although this is not binding for any value of $p$. We have binned the support for the individual selections in intervals of width 5\%. The height of the first bar indicates how many individual strategies are preferred by [0\%,5\%) of the DMUs, the height of the second bar corresponds to [5\%,10\%) DMUs supporting it, etc. When $p$ increases, we see that fewer individual strategies are preferred by many DMUs over the joint strategy. Indeed, for values of $p$ above 5, the joint strategy is supported by at least 50\% of the DMUs over any of the individual strategies, while for values of $p$ above 7, this becomes at least 60\%. We can therefore conclude that, in this simple game, as the model gets larger, it becomes less likely that a large group of DMUs will agree on an alternative to the modeler's selection. \section{Conclusions} \label{sec:conclusions} In this paper, we have proposed a single-model approach for feature selection in DEA. When the objective is the average efficiency of the DMUs, the problem can be written as an MILP formulation. We have considered other objectives such as the squared distance to the ideal point, where all the DMUs are efficient, yielding a Mixed Integer Quadratic Programming formulation; and we have shown how to enrich the model to allow for situations where different features come with different costs, e.g., related to data collection or data quality, and where features can be grouped and restrictions can be placed on the use of different groups of features in the specific industrial application. Our numerical section illustrates that we can find good solutions in a reasonable amount of time for the case in which the average efficiency is the goal, which boils down to an MILP. Our approach deviates from previous literature on feature selection in several ways. It is purely based on mathematical programming as opposed to a mixture of statistical and mathematical programming methods, where the desirable properties above can be modeled in a natural way. It works directly with the original features as opposed to dimensionality reduction techniques, which create artificial features. It focuses on the choice of features from a large set of potential candidate features. Finally, it can handle different objective functions to reflect the underlying objective of the modeler and the application context, e.g., the conflicts between different groups of DMUs in the evaluation. In this paper, we have also introduced an element of game theory. This is relevant since the evaluated parties in applied projects typically will try to influence the feature selection. It is therefore important to think about the conflict between choosing features from a joint and an individual point of view. We have shown how conflicts can be partially analyzed via the cross-efficiency matrix and we have illustrated the conflict between individual and joint perspectives in the numerical application. In the future, it would however be relevant to further explore these issues. One limitation of our analysis above is that we only consider $K$ specific alternatives to the modeler's joint selection. In theory, there are, of course, many more alternatives.
Indeed, any subset of size $p$ from the set of potential outputs $O$ could potentially muster the support of many DMUs against the modeler's proposal. The strategic analysis of all possible alternatives is likely to become overwhelming. It may therefore be relevant to introduce some restrictions. One idea is to detect relevant clusters of DMUs and make the selection of features tailored to them. By looking at likely interest groups, the game theoretical analysis may be less complex. Groupings could, for example, refer to small versus large, start-up versus well-established, urban versus rural, and investor-owned versus cooperatively owned DMUs. Also, it would be interesting to add constraints to the feature selection model in order to guarantee the support of a number of DMUs, e.g., a majority of DMUs, thereby modeling more general game theoretical aspects, such as the scope for forming coalitions. \begin{figure} \caption{Correlation matrix for the outputs, highlighting the correlations with the selected outputs for $p=10$}\label{fig:Corrp10} \end{figure} \begin{figure} \caption{Box-and-whiskers plots of efficiencies, including average efficiency, for $p=1,\ldots,10$} \label{fig:distFSBoxplot} \end{figure} \begin{figure} \caption{Histograms of the distribution of the efficiencies, $p=1,\ldots,10$}\label{fig:distFSallp} \end{figure} \begin{figure} \caption{Correlation matrix for the selected outputs for $p=10$}\label{fig:Corrp10-selected} \end{figure} \begin{figure} \caption{Histograms of the distribution of preferences of individual strategies over the joint strategy, $p=1,\ldots,10$}\label{fig:distFSindvsjointallp} \end{figure} \end{document}
\begin{document} \title{General sending-or-not-sending twin field protocol for quantum key distribution with asymmetric source parameters} \author{Xiao-Long Hu$ ^{1}$, Cong Jiang$ ^{1}$, Zong-Wen Yu$ ^{2}$, and Xiang-Bin Wang$ ^{1,3,4,5\footnote{Email Address: [email protected]}} $ } \affiliation{ \centerline{$^{1}$State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics,} \centerline{Tsinghua University, Beijing 100084, People's Republic of China} \centerline{$^{2}$Data Communication Science and Technology Research Institute, Beijing 100191, China} \centerline{$^{3}$ Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China} \centerline{ Hefei, Anhui 230026, China } \centerline{$^{4}$ Jinan Institute of Quantum technology, SAICT, Jinan 250101, People's Republic of China} \centerline{$^{5}$ Shenzhen Institute for Quantum Science and Engineering, and Physics Department,} \centerline{ Southern University of Science and Technology, Shenzhen 518055, China} } \begin{abstract} The sending-or-not-sending (SNS) protocol of the twin-field quantum key distribution (TFQKD) can tolerate large misalignment errors and its key rate can exceed the linear bound of repeaterless QKD. But the original SNS protocol requires the two users to use the same source parameters. Here we propose a general protocol with asymmetric source parameters and give the security proof of this protocol. Our general protocol has a much better performance than that of the original SNS protocol when the channel is asymmetric. \end{abstract} \pacs{ 03.67.Dd, 42.81.Gs, 03.67.Hk } \maketitle \section{Introduction} Quantum key distribution provides a method for unconditionally secure communication~\cite{bennett1984quantum,lo1999unconditional,shor2000simple,mayers2001unconditional,gisin2002quantum,gisin2007quantum,renner2008security,scarani2009security,koashi2009simple} between two parties, Alice and Bob. Combined with the decoy-state method~\cite{inamori2007unconditional,gottesman2004security,hwang2003quantum,wang2005beating,lo2005decoy,adachi2007simple} and measurement-device-independent QKD (MDIQKD) protocol~\cite{lo2012measurement,braunstein2012side}, QKD can overcome the security loophole from the nonideal single-photon sources~\cite{huttner1995quantum,yuen1996quantum,brassard2000limitations} and imperfect detection devices~\cite{lydersen2010hacking,gerhardt2011full} and has developed rapidly both in theory~\cite{wang2007quantum,hayashi2007upper,wang2013three,sasaki2014practical,curty2014finite,xu2013practical,xu2014protocol,song2012finite,zhou2014tightened,yu2015statistical,zhou2016making,jiang2016measurement,jiang2017measurement,zhou2017obtaining,huang2018quantum,chau2018decoy,hu2018measurement,wang2018prefixed} and experiment~\cite{rosenberg2007long,schmitt2007experimental,peng2007experimental,boaron2018secure,yuan2007unconditionally,wang2008experimental,peev2009secoqc,dixon2010continuous,sasaki2011field,frohlich2013quantum,rubenok2013real,liu2013experimental,da2013proof,chan2014modeling,tang2014experimental,tang2014measurement,takesue2015experimental,wang2015phase,pirandola2015high,comandar2016quantum,wang2017measurement,liao2017satellite,liao2018satellite}. The maximum distance of decoy-state MDIQKD has been experimentally increased to 404 kilometers~\cite{yin2016measurement}.
But the key rate of BB84, the MDIQKD protocol, or any modified version of these protocols cannot exceed the linear bounds of repeaterless QKD, such as the TGW bound~\cite{takeoka2014fundamental} and the PLOB bound~\cite{pirandola2017fundamental}. Recently, a new protocol named twin-field quantum key distribution (TFQKD) was proposed~\cite{lucamarini2018overcoming} whose key rate dependence on the channel transmittance $\eta$ is $R\sim O(\sqrt\eta)$. Following this protocol, many variants of TFQKD were proposed~\cite{wang2018twin,tamaki2018information,ma2018phase,lin2018simple,cui2019twin,curty2018simple,lu2019twin,grasselli2019practical,xu2019general,zhang2019twin,zhou2019asymmetric,maeda2019repeaterless} and some experiments of TFQKD were demonstrated~\cite{minder2019experimental,liu2019experimental,wang2019beating,zhong2019proof}. Among those protocols, one efficient protocol named the sending-or-not-sending (SNS) protocol~\cite{wang2018twin} has the advantages of unconditional security under coherent attacks and it can tolerate large misalignment errors. The SNS protocol with finite data size has also been studied~\cite{yu2019sending,jiang2019unconditional}. However, the security proof of the SNS protocol requires the condition that the two users, Alice and Bob, use the same source parameters, such as the intensities of signal and decoy sources and the probability for sending coherent pulses in the $Z$ windows. Here we propose a general SNS protocol where Alice and Bob are not required to use the same source parameters. We give a security proof for this general protocol. Then we apply our general protocol to the case of asymmetric channels, i.e., the channel between Alice and Charlie (we will call it ``Alice's channel'' for simplicity in this paper) and that between Bob and Charlie (we will call it ``Bob's channel'' for simplicity in this paper) are not the same. The numerical results show that in this case the key rate of our general protocol is much higher than that of the original SNS protocol. This paper is arranged as follows. In Sec.~\ref{sec:protocol}, we present the procedures of our general SNS protocol with asymmetric source parameters. In Sec.~\ref{sec:proof}, we give a security proof of our protocol through three virtual protocols and their reductions. We show the results of numerical simulation of the general SNS protocol compared with the original SNS protocol in Sec.~\ref{sec:simulation}. The article ends with some concluding remarks in Sec.~\ref{sec:conclusion}. We give the formulas for key rate calculation in the appendix. \section{General SNS protocol with asymmetric source parameters}\label{sec:protocol} A schematic of our general SNS protocol is shown in Figure~\ref{fig:sketch}. The two legitimate users, Alice and Bob, independently send coherent pulses and vacuum pulses to an untrusted third party (UTP), Charlie. Charlie applies compensation to the pulses, measures them, and announces the measurement results. Then Alice and Bob distill the final key from a set of the pulses according to the announced data. The details of the protocol are shown as follows. \begin{figure} \caption{A schematic of the setup for the general SNS protocol. IM: intensity modulator; PM: phase modulator; BS: beam-splitter; $D_L$ \& $D_R$: single-photon detectors in the measurement station of Charlie.} \label{fig:sketch} \end{figure} {\bf Step 1.} In each time window $i$, {\em they} (Alice and Bob) independently decide whether this is a decoy window or a signal window.
In her (his) decoy window, she (he) randomly chooses one of a few states $\rho_{Ak}$ ($\rho_{Bk}$), for $k=0,1,2,\dots$, to send out a decoy pulse to Charlie, where $\rho_{A0}=\rho_{B0}=\oprod{0}{0}$ are the vacuum states and $\rho_{Ak}$ ($\rho_{Bk}$), $k>0$, are coherent states $\ket{\sqrt{\mu_{Ak}} e^{\mathbf{i}\delta_{Ai}+\mathbf{i}\gamma_{Ai}}}$ ($\ket{\sqrt{\mu_{Bk}} e^{\mathbf{i}\delta_{Bi}+\mathbf{i}\gamma_{Bi}}}$). (In this paper, we denote the imaginary unit as $\mathbf{i}$.) In her (his) signal window, she (he) decides to send out to Charlie a signal pulse in the state $\ket{\sqrt{\mu_{A}^\prime} e^{\mathbf{i}\delta_{Ai}+\mathbf{i}\gamma_{Ai}}}$ ($\ket{\sqrt{\mu_{B}^\prime} e^{\mathbf{i}\delta_{Bi}+\mathbf{i}\gamma_{Bi}}}$) and puts down a bit value 1 (0) with probability $\epsilon_A$ ($\epsilon_B$), or decides not to send it out and puts down a bit value 0 (1) with probability $1-\epsilon_A$ ($1-\epsilon_B$). Here, $\delta_{Ai}$, $\delta_{Bi}$, $\gamma_{Ai}$, and $\gamma_{Bi}$ are random phases. The global phases $\gamma_{Ai}$ and $\gamma_{Bi}$ can be any reference phases and may be known to everyone. The private phase $\delta_{Ai}$ ($\delta_{Bi}$) is a random phase chosen by Alice (Bob) secretly. In addition, we impose the following mathematical constraint on the source parameters: \begin{equation}\label{equ:protocol1} \frac{\mu_{Ak}}{ \mu_{Bk}} = \frac{\epsilon_A(1-\epsilon_B) \mu_A^\prime e^{-\mu_A^\prime}}{\epsilon_B(1-\epsilon_A) \mu_B^\prime e^{-\mu_B^\prime}} \end{equation} for each $k>0$. As the major result of this work, this constraint guarantees the security of the general SNS protocol with asymmetric source parameters for Alice and Bob. The real protocol here is actually the same as that in ref.~\cite{wang2018twin} except for this mathematical constraint; however, the virtual protocols used in the security proof will involve different types of entangled states. For ease of presentation, we will omit the subscript $i$ if it does not cause any confusion. But keep in mind that all of $\delta_A$, $\delta_B$, $\gamma_A$, and $\gamma_B$ are chosen differently in different time windows. {\bf Step 2.} Charlie is supposed to measure all twin fields with a beam splitter after taking phase compensation and announce the measurement outcome. Note: Charlie is expected to remove the phases $\gamma_{A}$ and $\gamma_{B}$ by phase compensation. His action only affects the key rate, but has no influence on the security of the protocol. {\bf Step 3.} {\em They} announce each one's decoy windows and signal windows. {\em They} also announce the intensities of the decoy pulses and the private phases $\delta_{A}$ and $\delta_{B}$ in each decoy window. {\em Definition: }We define a $Z$-window when both of {\em them} determine signal windows and an $X$-window when both of {\em them} determine decoy windows. We define an {\em effective event} when one and only one of Charlie's detectors clicks, and the corresponding window is called an {\em effective window}. Given that $\delta_A$ ($\delta_B$) is randomized, whenever Alice (Bob) sends a coherent state with intensity $\mu_A^\prime$ ($\mu_B^\prime$), it can be equivalently regarded as a density matrix $\sum_{k=0}^\infty\frac{e^{-\mu_A^\prime}\mu_A^{\prime k}}{k!}\oprod{k}{k}$ ($\sum_{k=0}^\infty\frac{e^{-\mu_B^\prime}\mu_B^{\prime k}}{k!}\oprod{k}{k}$), which is a classical mixture of different photon number states only. Hence among all $Z$-windows, we can define a set of $Z_1$-windows, in which one and only one of {\em them} decides to send and she (he) actually sends a single-photon state.
{\em They} don't know which time window is a $Z_1$-window, but {\em they} can calculate the number of $Z_1$-windows in an experiment. Among all $X$-windows, we define a set of $\tilde{X}$-windows, in which {\em they} choose the intensities $\mu_{Ak}$ and $\mu_{Bk}$ with the same $k$ and the phase shifts $\delta_A$ and $\delta_B$ satisfy the restriction \begin{equation}\label{equ:restriction} 1 - |\cos (\delta_{A} - \delta_{B} + \Delta\varphi)| \le |\lambda|. \end{equation} Here $\Delta\varphi$ and $\lambda$ are values determined by Alice and Bob according to the results of channel testing and calibration in the experiment, chosen so as to obtain a satisfactory key rate, and the value of $\Delta\varphi$ can be different from time to time. In this paper, we will set $\Delta\varphi=0$ for simplicity of presentation, i.e. \begin{equation}\label{equ:criterion} 1 - |\cos (\delta_{A} - \delta_{B})| \le |\lambda|, \end{equation} but this does not affect the validity of the security proof if we use Eq.(\ref{equ:restriction}) for the post-selection. {\em They} keep the data of the effective events and discard all the others. {\bf Step 4.} {\em They} randomly choose some events from the effective $Z$-windows to do the error test. A bit-flip error occurs when Alice's bit value is different from Bob's in a $Z$-window. {\em They} discard the test bits, and the remaining events from effective $Z$-windows will be distilled for the final key. {\bf Step 5.} Based on the measurement outcome in effective $X$-windows, {\em they} calculate $n_1$, the number of effective events in $Z_1$-windows. Based on the measurement outcome and the announced values of $\delta_A$ and $\delta_B$ in effective $\tilde{X}$-windows, {\em they} calculate $e_1^{ph}$, the phase-flip error rate of effective events in $Z_1$-windows. {\em Note:} In effective $\tilde{X}$-windows, an error occurs when \\ (1) the left detector clicks and $\cos(\delta_A-\delta_B)<0$; \\ (2) the right detector clicks and $\cos(\delta_A-\delta_B)>0$.\\ Given this definition, {\em they} can observe the error rates in $\tilde{X}$-windows for each intensity of the input light. With this, {\em they} can estimate $e_1^{ph}$ through the decoy-state analysis, which requires them to observe the counting rates for various intensities of input light. As proved in ref.~\cite{wang2018twin}, the decoy-state method can be applied to our protocol as if the phases $\delta_A$ and $\delta_B$ were not announced. In the asymptotic case where there are decoy states with infinitely many different intensities, {\em they} can obtain the exact value of $e_1^{ph}$. In the case where there are decoy states with finitely many different intensities, {\em they} can obtain an upper bound of $e_1^{ph}$. {\em Note:} The Appendix shows the four-intensity method of this protocol. In this case, the formulas for $n_1$ and $e_1^{ph}$ are given in Eqs.(\ref{equ:s1Z})-(\ref{equ:e1ph}). {\bf Step 6.} {\em They} perform the post-processing and obtain the final key with length \begin{equation}\label{equ:length} N_f = n_1 [1 - H(e_1^{ph})] - f n_t H(E_Z) \end{equation} where $f$ is the error correction inefficiency, $n_t$ is the number of effective $Z$-windows, $E_Z$ is the bit-flip error rate in effective $Z$-windows, and $H(x)=-x\log_2 x -(1-x)\log_2 (1-x)$ is the binary entropy function. Details for calculating the length of the final key (or the key rate) with the four-intensity decoy-state method are presented in the Appendix. {\em Note:} If we set $\epsilon_A=\epsilon_B$ and $\mu_A^\prime=\mu_B^\prime$, this protocol is actually the same as the original SNS protocol in ref.~\cite{wang2018twin}.
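As an illustration of how the constraint of Eq.(\ref{equ:protocol1}) and the key length of Eq.(\ref{equ:length}) can be evaluated in practice, the following is a minimal Python sketch. It is our own illustration rather than part of the protocol, and all function names and sample parameter values are hypothetical.
\begin{verbatim}
import math

def binary_entropy(x):
    # Binary Shannon entropy H(x); H(0) = H(1) = 0 by convention.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def decoy_ratio(eps_a, eps_b, mu_a_sig, mu_b_sig):
    # Right-hand side of the constraint: the required ratio mu_Ak / mu_Bk (k > 0).
    return (eps_a * (1 - eps_b) * mu_a_sig * math.exp(-mu_a_sig)) / \
           (eps_b * (1 - eps_a) * mu_b_sig * math.exp(-mu_b_sig))

def key_length(n1, e1_ph, n_t, E_Z, f=1.1):
    # Final key length N_f = n1 [1 - H(e1_ph)] - f * n_t * H(E_Z).
    return n1 * (1 - binary_entropy(e1_ph)) - f * n_t * binary_entropy(E_Z)

# Hypothetical sample values, for illustration only.
eps_a, eps_b = 0.35, 0.25          # sending probabilities in signal windows
mu_a_sig, mu_b_sig = 0.45, 0.30    # signal intensities mu_A', mu_B'
mu_a1 = 0.10                       # a decoy intensity chosen on Alice's side
mu_b1 = mu_a1 / decoy_ratio(eps_a, eps_b, mu_a_sig, mu_b_sig)  # Bob's matching intensity
print(mu_b1, key_length(n1=1.0e5, e1_ph=0.03, n_t=4.0e5, E_Z=0.01))
\end{verbatim}
In this sketch, once Alice fixes a decoy intensity, Bob's corresponding intensity is obtained directly from the required ratio, so the constraint is satisfied by construction.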
\section{Security proof with virtual protocols and reduction}\label{sec:proof} \subsection{Introduction of the ancillary photons and the extended states} Similar to the security proof in Ref.~\cite{wang2018twin}, we use the idea of entanglement distillation with ancillary photons to prove the security of our protocol. Imagine that in a $Z$-window, if Alice (Bob) decides to send a coherent state $\rho_A$ ($\rho_B$) to Charlie, she (he) puts down a local ancillary qubit in the state $\ket{1}$ ($\ket{0}$), and if Alice (Bob) decides not to send, she (he) puts down a local ancillary qubit in the state $\ket{0}$ ($\ket{1}$). To Alice (Bob), the state $\ket{1}$ corresponds to the bit value 1 (0), and the state $\ket{0}$ corresponds to the bit value 0 (1). We denote by $\mathcal{T}$ the subspace of the sent-out states and by $\mathcal{A}n$ the subspace of the local ancillary states. Therefore, the extended state in the composite space $\mathcal{T}\otimes\mathcal{A}n$ in the $Z$-window can be written as \begin{equation}\label{equ:Omega} \begin{split} \Omega &= \epsilon_A \epsilon_B (\rho_A \tilde{\otimes} \rho_B) \otimes \oprod{11}{11} \\ &+ \epsilon_A (1-\epsilon_B) (\rho_A \tilde{\otimes} \oprod{0}{0}) \otimes \oprod{10}{10} \\ &+ (1-\epsilon_A) \epsilon_B (\oprod{0}{0} \tilde{\otimes} \rho_B) \otimes \oprod{01}{01} \\ &+ (1-\epsilon_A) (1-\epsilon_B) (\oprod{0}{0} \tilde{\otimes} \oprod{0}{0}) \otimes \oprod{00}{00}. \end{split} \end{equation} Here both symbols $\otimes$ and $\tilde{\otimes}$ denote tensor products: $\tilde{\otimes}$ is the tensor product inside $\mathcal{T}$, while $\otimes$ is the tensor product between $\mathcal{T}$ and $\mathcal{A}n$. The states on the left side of $\otimes$ are in $\mathcal{T}$ and we name them {\em real-photon states}. The states on the right side of $\otimes$ are in $\mathcal{A}n$ and we name them {\em ancillary-photon states}. Since the private phases $\delta_A$ and $\delta_B$ in $Z$-windows are kept secret all the time, the coherent states $\rho_A$ and $\rho_B$ are actually phase-randomized coherent states, whose density matrices are \begin{equation}\label{equ:rhoAB} \rho_k = \sum_{n=0}^\infty \frac{e^{-\mu^\prime_k}\mu_k^{\prime n}}{n!}\oprod{n}{n}= \mu_k^{\prime}e^{-\mu^\prime_k}\oprod{1}{1} + (1-\mu_k^{\prime}e^{-\mu^\prime_k})\bar{\rho}_k, \end{equation} with $k=A,B$ and \begin{equation}\label{equ:rhobar} \bar{\rho}_k = \frac{1}{1-\mu_k^{\prime}e^{-\mu^\prime_k}} \sum_{n\neq1} \frac{e^{-\mu^\prime_k}\mu_k^{\prime n}}{n!}\oprod{n}{n}. \end{equation} So the extended state in the $Z$-window can be written in another form: \begin{equation}\label{equ:Omeganew} \Omega=\sum_{r=1}^4 q_r \Omega_r \end{equation} and \begin{equation}\label{equ:Omegar} \begin{split} \Omega_1 =& C_1 [\epsilon_A(1-\epsilon_B) \mu_A^\prime e^{-\mu_A^\prime} \oprod{10}{10} \otimes \oprod{10}{10} \\ & + \epsilon_B(1-\epsilon_A) \mu_B^\prime e^{-\mu_B^\prime} \oprod{01}{01} \otimes \oprod{01}{01}] \\ \Omega_2 =& C_2 [\epsilon_A(1-\epsilon_B) (1-\mu_A^\prime e^{-\mu_A^\prime}) (\bar\rho_A \tilde\otimes \oprod{0}{0}) \otimes \oprod{10}{10} \\ & + \epsilon_B(1-\epsilon_A) (1-\mu_B^\prime e^{-\mu_B^\prime}) (\oprod{0}{0} \tilde\otimes \bar\rho_B) \otimes \oprod{01}{01}] \\ \Omega_3 =& \oprod{00}{00} \otimes \oprod{00}{00} \\ \Omega_4 =& (\rho_A \tilde\otimes \rho_B) \otimes \oprod{11}{11} \end{split} \end{equation} where $C_1$ and $C_2$ are some normalization factors. With the condition in Eq.
(\ref{equ:protocol1}), $\Omega_1$, the target state we use to prove the security, can be written as \begin{equation}\label{equ:Omega1} \begin{split} \Omega_1 =& C^2 (\mu_{A1} \oprod{10}{10} \otimes \oprod{10}{10} + \mu_{B1} \oprod{01}{01} \otimes \oprod{01}{01}) \end{split} \end{equation} with \begin{equation}\label{equ:C} C = 1 / \sqrt{\mu_{A1} + \mu_{B1}}. \end{equation} In $X$-windows, the two-mode states sent by {\em them} are \begin{equation}\label{equ:rhoX} \rho_{Xk} = \oprod{\beta_k}{\beta_k} \end{equation} where \begin{equation}\label{equ:betak} \ket{\beta_k} = \ket{\sqrt{\mu_{Ak}} e^{\mathbf{i}\delta_{A}+\mathbf{i}\gamma_{A}}} \ket{\sqrt{\mu_{Bk}} e^{\mathbf{i}\delta_{B}+\mathbf{i}\gamma_{B}}}. \end{equation} In our protocol, the states in the $Z$-windows are actually classical mixtures of $\Omega_1$, $\Omega_2$, $\Omega_3$, and $\Omega_4$. In the security proof, we first show the security of the protocol with only the state $\Omega_1$ and then show the security of the protocol with $\Omega$ by the tagged model~\cite{inamori2007unconditional,gottesman2004security}. \subsection{Virtual Protocol 1} {\em Definition:} We have defined an {\em effective event} in the protocol. An {\em effective ancillary photon} is an ancillary photon corresponding to an effective event. \subsubsection{Preparation stage} For each time window $i$, {\em they} preshare the classical information about whether this time window is an $X$-window or a $Z$-window. {\em They} preshare an extended state \begin{equation}\label{equ:Omega0} \Omega_{0} = \oprod{\Psi}{\Psi} \end{equation} where \begin{equation}\label{equ:Psi} \ket{\Psi} = C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A}} \ket{10} \otimes \ket{10} + \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B}} \ket{01} \otimes \ket{01}) \end{equation} and $\gamma_{A}$ and $\gamma_{B}$ are announced publicly. (Recall that $\Omega_0$ and $\ket\Psi$ vary in different time windows.) In a time window $i$ which is a $Z$-window, through discussion over a secret channel, Alice chooses a random phase $\delta_{A}$ and Bob chooses a random phase $\delta_{B}$, which satisfy the restriction in Eq.(\ref{equ:criterion}). Then {\em they} apply phase shifts $\delta_A$ and $\delta_B$ to their own real photons, respectively. In a time window $i$ which is an $X$-window, {\em they} apply random and independent phase shifts $\delta_A$ and $\delta_B$ to their own real photons, respectively. After the phase shifts, the extended state changes into \begin{equation}\label{equ:OmegaZX} \Omega_{Z} = \oprod{\Psi^\prime}{\Psi^\prime},\ \Omega_{X} = \oprod{\Psi^\prime}{\Psi^\prime} \end{equation} and \begin{equation}\label{equ:Psi'} \begin{split} \ket{\Psi^\prime} = C (&\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} \otimes \ket{10} \\ &+ \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01} \otimes \ket{01}) \end{split} \end{equation} Among all $X$-windows, we define a set of $\tilde{X}$-windows, in which the phase shifts $\delta_A$ and $\delta_B$ satisfy the restriction in Eq.(\ref{equ:criterion}). The states in $Z$-windows are not identical to those in $X$-windows, but they are identical to those in $\tilde{X}$-windows.
In addition, we define real-photon states $\ket{\chi^0}$ and $\ket{\chi^1}$ for any time window: \begin{equation}\label{equ:chi0} \begin{split} \ket{\chi^0} =& C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} + \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \\ \ket{\chi^1} =& C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} - \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \\ &\text{if}\quad \cos(\delta_A-\delta_B) \ge 0 \end{split} \end{equation} or \begin{equation}\label{equ:chi1} \begin{split} \ket{\chi^0} =& C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} - \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \\ \ket{\chi^1} =& C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} + \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \\ &\text{if}\quad \cos(\delta_A-\delta_B) < 0 \end{split} \end{equation} \subsubsection{Protocol} {\bf V1-1: }At any $Z$-window ($X$-window), {\em they} send the real photons of $\Omega_Z$ ($\Omega_X$) defined in Eq. (\ref{equ:OmegaZX}) to Charlie and keep the ancillary photons locally. {\bf V1-2: }Charlie measures the real photons from Alice and Bob with a beam splitter after taking phase compensation according to the strong reference lights with phases $\gamma_A$ and $\gamma_B$. He announces his measurement outcome and then {\em they} announce the values of $\delta_A$ and $\delta_B$ of all $X$-windows. With the preshared information of $X$- and $Z$-windows, the measurement outcome, and the announced values of $\delta_A$ and $\delta_B$, {\em they} can identify the effective $Z$-windows and effective $\tilde{X}$-windows. The data of other time windows will be discarded. {\em Definition: }After Step V1-2, the remaining effective events can be divided into eight subsets according to the window information ($Z$-window or $\tilde{X}$-window), the clicking detector (the left $L$ or the right $R$) and the sign of $\cos(\delta_A-\delta_B)$ (positive $+$ or negative $-$). These subsets are labeled as $\Gamma_{(a,d)}$, where $\Gamma=\tilde{X},Z$, $a=+,-$, and $d=L,R$. For example, the subset $Z_{(-,R)}$ is the set of effective $Z$-windows when the right detector clicks and $\cos(\delta_A-\delta_B)<0$. Correspondingly, the effective ancillary photons can be divided into eight subsets labeled as $\mathcal{A}_{\Gamma_{(a,d)}}$. {\bf V1-3: }{\em They} check the phase-flip error rate $E_{(a,d)}$ of the set $\mathcal{A}_{\tilde{X}_{(a,d)}}$, where $a=+,-$ and $d=L,R$. Since the effective $Z$-windows are identical to the effective $\tilde{X}$-windows, the phase-flip error rate of $\mathcal{A}_{\tilde{X}_{(a,d)}}$ should be the same as that of $\mathcal{A}_{Z_{(a,d)}}$ (asymptotically).
{\em Note: }The state $\ket{\Psi^\prime}$ in Eq.(\ref{equ:Psi'}) can be written in another form: \begin{equation}\label{equ:Psi'1} \ket{\Psi^\prime} = \frac{1}{\sqrt2} (\ket{\chi^0}\otimes\ket{\Phi^0} + \ket{\chi^1}\otimes\ket{\Phi^1}),\ \text{if }a=+ \end{equation} or \begin{equation}\label{equ:Psi'2} \ket{\Psi^\prime} = \frac{1}{\sqrt2} (\ket{\chi^1}\otimes\ket{\Phi^0} + \ket{\chi^0}\otimes\ket{\Phi^1}),\ \text{if }a=- \end{equation} where $\ket{\Phi^k}$, $k=0,1$, are two-mode states of the ancillary photons: \begin{equation}\label{equ:Phi} \ket{\Phi^0} = \frac{1}{\sqrt2} (\ket{10}+\ket{01}),\ket{\Phi^1} = \frac{1}{\sqrt2} (\ket{10}-\ket{01}) \end{equation} To obtain the phase-flip error rate $E_{(a,d)}$ of the set $\mathcal{A}_{\tilde{X}_{(a,d)}}$, each ancillary photon of this set is measured in the basis $\{\ket{\Phi^0},\ket{\Phi^1}\}$ and there are $n_{(a,d)}^{(0)}$ outcomes of $\oprod{\Phi^0}{\Phi^0}$ and $n_{(a,d)}^{(1)}$ outcomes of $\oprod{\Phi^1}{\Phi^1}$. The phase-flip error rate $E_{(a,d)}$ of $\mathcal{A}_{\tilde{X}_{(a,d)}}$ is defined as: \begin{equation}\label{equ:Ead} E_{(a,d)} = \frac{\min \big(n_{(a,d)}^{(0)},n_{(a,d)}^{(1)}\big)}{n_{(a,d)}} \end{equation} where $n_{(a,d)}=n_{(a,d)}^{(0)}+n_{(a,d)}^{(1)}$ is the number of the ancillary photons in $\mathcal{A}_{\tilde{X}_{(a,d)}}$. {\bf V1-4: }With the estimated value of $E_{(a,d)}$, {\em they} can purify the ancillary photons in $\mathcal{A}_{Z_{(a,d)}}$ with $(a,d)=(+,R),(+,L),(-,R),(-,L)$ separately. After purification, {\em they} obtain ancillary photons in (almost 100\%) pure single-photon entangled states $\ket{\Phi^0}$ (or $\ket{\Phi^1}$). Then {\em they} perform local measurement on their own ancillary photons and obtain the final key. Alice (Bob) puts down a bit value 0 (1) or 1 (0) when her (his) measurement outcome is $\oprod{0}{0}$ or $\oprod{1}{1}$, respectively. {\em Note 1-Security: }The security of the final key is based on the faithfulness of the purification. If {\em they} estimate the error rate $E_{(a,d)}$ of a set of ancillary photons exactly, {\em they} can purify these ancillary photons to get pure entangled photons. Although Charlie measured the real photons and {\em they} selected the set of ancillary photons based on his announced measurement outcomes, {\em they} check the phase-flip error rate of these photons by themselves. Since the extended states of effective $\tilde{X}$-windows are identical to those of effective $Z$-windows, the phase-flip error rate of $\mathcal{A}_{\tilde{X}_{(a,d)}}$ is statistically the same as that of $\mathcal{A}_{Z_{(a,d)}}$. {\em They} can obtain the phase-flip error rate in $\mathcal{A}_{Z_{(a,d)}}$ by testing the ancillary photons in $\mathcal{A}_{\tilde{X}_{(a,d)}}$ and then perform the purification on $\mathcal{A}_{Z_{(a,d)}}$. {\bf As a result, the security of the key doesn't rely on Charlie's honesty}. {\em Note 2-Estimation of the phase-flip error rate: }According to the definition of $E_{(a,d)}$ in Eq.(\ref{equ:Ead}), {\em they} have to measure the ancillary photons in the basis $\{\ket{\Phi^0},\ket{\Phi^1}\}$. It is easy to prove that {\em they} can perform local measurements in the basis $\{\ket{x\pm}=(\ket0\pm\ket1)/\sqrt2\}$ and check the parity of the outcome instead of measuring in the basis $\{\ket{\Phi^0},\ket{\Phi^1}\}$. Explicitly, their outcome with even parity ($\ket{x+}\ket{x+}$ or $\ket{x-}\ket{x-}$) corresponds to the outcome $\ket{\Phi^0}$ and that with odd parity ($\ket{x+}\ket{x-}$ or $\ket{x-}\ket{x+}$) corresponds to the outcome $\ket{\Phi^1}$.
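The correspondence between the parity of the local $\ket{x\pm}$ outcomes and the $\{\ket{\Phi^0},\ket{\Phi^1}\}$ measurement can be verified with a few lines of linear algebra. The following Python sketch is our own illustration (with hypothetical variable names); it checks, for a sample ancillary state in the span of $\ket{\Phi^0}$ and $\ket{\Phi^1}$, that the probability of an even-parity outcome equals the probability of the outcome $\ket{\Phi^0}$.
\begin{verbatim}
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
xp = (ket0 + ket1) / np.sqrt(2)    # |x+>
xm = (ket0 - ket1) / np.sqrt(2)    # |x->

phi0 = (np.kron(ket1, ket0) + np.kron(ket0, ket1)) / np.sqrt(2)   # |Phi^0>
phi1 = (np.kron(ket1, ket0) - np.kron(ket0, ket1)) / np.sqrt(2)   # |Phi^1>

# Projector onto the even-parity X-basis outcomes |x+ x+> and |x- x->.
P_even = np.outer(np.kron(xp, xp), np.kron(xp, xp)) \
       + np.outer(np.kron(xm, xm), np.kron(xm, xm))

# A hypothetical ancillary state alpha |Phi^0> + beta |Phi^1>.
alpha, beta = 0.6, 0.8
state = alpha * phi0 + beta * phi1

p_even = state @ P_even @ state    # probability of an even-parity outcome
p_phi0 = (phi0 @ state) ** 2       # probability of the outcome |Phi^0>
assert np.isclose(p_even, p_phi0)  # both equal alpha**2
\end{verbatim}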
{\em Note 3-Reduction of the preshared states in $X$-windows: }Since measurement in the basis $\{\ket{x\pm}\}$ is a local operation on the ancillary photons, it makes no difference whether {\em they} measure their ancillary photons after sending the real photons or before that. So they can measure the ancillary photons before the protocol starts, and label this time window an $X_0$-window if the outcome is even parity, or label it an $X_1$-window if the outcome is odd parity. Then {\em they} prepare and send the real photon in the state \begin{equation}\label{equ:chi+} \ket{\chi^+} = C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} + \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \end{equation} in an $X_0$-window, or prepare and send that in the state \begin{equation}\label{equ:chi-} \ket{\chi^-} = C (\sqrt{\mu_{A1}} e^{\mathbf{i}\gamma_{A} + \mathbf{i}\delta_{A}} \ket{10} - \sqrt{\mu_{B1}} e^{\mathbf{i}\gamma_{B} + \mathbf{i}\delta_{B}} \ket{01}) \end{equation} in an $X_1$-window. Alternatively, {\em they} can start with the information of $X_0$-windows and $X_1$-windows and the states in Eqs.~(\ref{equ:chi+}) and (\ref{equ:chi-}). {\em They} prepare and send real photons in the state $\ket{\chi^+}$ in $X_0$-windows or in the state $\ket{\chi^-}$ in $X_1$-windows. In this way, the ancillary photons in $X$-windows in the above virtual protocol are not necessary, and the formula for the phase-flip error rate should be changed correspondingly. We introduce a symbol $\tilde{X}_{(a,b,d)}$ for the set of effective time windows, which satisfy the restriction in Eq.(\ref{equ:criterion}), with joint events $a$, $b$, $d$, where Event $a$ ($a=+,-$): the sign of $\cos(\delta_A-\delta_B)$; Event $b$ ($b=0,1$): this time window is an $X_b$-window; Event $d$ ($d=L,R$): the $d$-detector clicks and the other does not click. We also introduce $N_{\tilde{X}_{(a,b,d)}}$ for the number of time windows in the set $\tilde{X}_{(a,b,d)}$. Therefore, we have \begin{equation}\label{equ:NX} n^{(b)}_{(a,d)} = N_{\tilde{X}_{(a,b,d)}} \end{equation} and \begin{equation}\label{equ:Ead'} E_{(a,d)} = \frac{\min (N_{\tilde{X}_{(a,0,d)}},N_{\tilde{X}_{(a,1,d)}})}{n_{(a,d)}} \end{equation} This reduction of $X$-windows leads to Virtual Protocol 2. \subsection{Virtual Protocol 2} \subsubsection{Preparation stage} For each time window $i$, {\em they} preshare the classical information about whether this time window is an $X_0$-window, an $X_1$-window or a $Z$-window. {\em They} preshare an extended state $\Omega_Z$ in Eq.(\ref{equ:OmegaZX}) for $Z$-windows, a real-photon state $\ket{\chi^+}$ for $X_0$-windows and a real-photon state $\ket{\chi^-}$ for $X_1$-windows. \subsubsection{Protocol} {\bf V2-1: }At any $Z$-window, {\em they} send out the real photons of $\Omega_Z$ to Charlie and keep the ancillary photons locally. At any $X_0$-window ($X_1$-window), {\em they} send $\ket{\chi^+}$ ($\ket{\chi^-}$) to Charlie. {\bf V2-2: }Charlie measures the real photons from Alice and Bob with a beam splitter after taking phase compensation according to the strong reference lights with phases $\gamma_A$ and $\gamma_B$. He announces his measurement outcome and then {\em they} announce the values of $\delta_A$ and $\delta_B$ of all $X$-windows. {\bf V2-3: }{\em They} check the phase-flip error rate $E_{(a,d)}$ using the sets $\tilde{X}_{(a,0,d)}$ and $\tilde{X}_{(a,1,d)}$, where $a=+,-$ and $d=L,R$.
{\bf V2-4: }{\em They} purify the ancillary photons in $\mathcal{A}_{Z_{(a,d)}}$ with $(a,d)=(+,R),(+,L),(-,R),(-,L)$ separately with the estimated value of $E_{(a,d)}$. Then {\em they} perform local measurement on their own ancillary photons and obtain the final key. {\em Note: Reduction of preshared states in $X$-windows.} \\{\em Reduction 1: }The real-photon states with $a=+$ ($a=-$) in $X_0$-windows are actually identical to those with $a=-$ ($a=+$) in $X_1$-windows, i.e. \begin{equation} \rho_{(+,0)} = \rho_{(-,1)},\rho_{(-,0)} = \rho_{(+,1)}. \end{equation} So we can conclude that $N_{\tilde{X}_{(a,0,d)}}=N_{\tilde{X}_{(\bar{a},1,d)}}$, where $\bar{a}$ stands for the opposite sign of $a$. This means that all the values $N_{\tilde{X}_{(a,1,d)}}$ in the phase-flip error rate in Eq.(\ref{equ:Ead'}) can be replaced by $N_{\tilde{X}_{(\bar{a},0,d)}}$ so that $X_1$-windows are not necessary. {\em They} can just use the data from $X_0$-windows to estimate the phase-flip error rate, and no one else will find any difference. Therefore, {\em they} use only $X_0$-windows and send only the state $\ket{\chi^+}$ in $X$-windows. The number of effective events from the state $\ket{\chi^+}$ and the joint events $a$, $d$ (with $a=+,-;d=L,R$) is denoted as $n_{\tilde{X}_{(a,d)}}$. The formula for the phase-flip error rate should be changed into: \begin{equation}\label{equ:Ead''} E_{(a,d)} = \frac{\min (n_{\tilde{X}_{(a,d)}},n_{\tilde{X}_{(\bar{a},d)}})}{n_{(a,d)}} \end{equation} where the formula for $n_{(a,d)}$ should be changed into $n_{(a,d)} = n_{\tilde{X}_{(a,d)}}+n_{\tilde{X}_{(\bar{a},d)}}$. \\{\em Reduction 2: }All the effective ancillary photons in $Z$-windows can be purified in one batch. The phase-flip error rate is \begin{equation} \begin{split} E^{ph} = \frac{\sum_{a,d} \min (n_{\tilde{X}_{(a,d)}},n_{\tilde{X}_{(\bar{a},d)}})}{\sum_{a,d} n_{(a,d)}} \\ =\frac{2\sum_{d} \min (n_{\tilde{X}_{(+,d)}},n_{\tilde{X}_{(-,d)}})}{2 n_1} \end{split} \end{equation} where $n_1=n_{\tilde{X}_{(+,L)}}+n_{\tilde{X}_{(+,R)}}+n_{\tilde{X}_{(-,L)}}+n_{\tilde{X}_{(-,R)}}$ is the total number of effective events in $\tilde{X}$-windows. Using the relations that $n_{\tilde{X}_{(-,L)}} \ge \min(n_{\tilde{X}_{(+,L)}},n_{\tilde{X}_{(-,L)}})$ and $n_{\tilde{X}_{(+,R)}} \ge \min(n_{\tilde{X}_{(+,R)}},n_{\tilde{X}_{(-,R)}})$, the phase-flip error rate can be bounded by \begin{equation}\label{equ:Eph} E^{ph} \le \frac{n_{\tilde{X}_{(-,L)}} + n_{\tilde{X}_{(+,R)}}}{n_1}. \end{equation} In this formula for the phase-flip error rate, we only need the total number of effective events in $\tilde{X}$-windows and the numbers of these two kinds of effective events: \\1. the left detector clicks and $\cos(\delta_A-\delta_B)<0$; \\2. the right detector clicks and $\cos(\delta_A-\delta_B)\ge0$. \\Therefore, we can define these two kinds of effective events as {\em error events} and the corresponding time windows are defined as {\em error windows}. If {\em they} set the value of $\lambda$ small enough and Charlie performs the compensation honestly, {\em they} may get few error events so that the phase-flip error rate will be quite low and the key rate will be high.
\\ \\{\em Reduction 3: }The density matrix in Eq.(\ref{equ:rhoX}) with randomized $\delta_A$ and $\delta_B$ can be written as a classical mixture of a set of states $\{\ket{\psi_l^{(k)}}\}$: \begin{equation}\label{equ:rhoX'} \rho_{Xk} = \sum_l p_l^{(k)} \oprod{\psi_l^{(k)}}{\psi_l^{(k)}} \end{equation} with \begin{equation}\label{equ:psik} \begin{split} \ket{\psi_l^{(k)}} = D_l^{(k)} \sum_{n=0}^l \frac{(\sqrt{\mu_{Ak}} e^{\mathbf{i}\delta_{A} + \mathbf{i}\gamma_{A}})^n}{\sqrt{n!}} \frac{(\sqrt{\mu_{Bk}} e^{\mathbf{i}\delta_{B} + \mathbf{i}\gamma_{B}})^{l-n}}{\sqrt{(l-n)!}} \\ \ket{n,l-n} \end{split} \end{equation} where $D_l^{(k)}$ are some normalization factors and $\ket{\psi_1^{(k)}}$ is exactly $\ket{\chi^+}$ when the condition in Eq.(\ref{equ:protocol1}) is satisfied. So {\em they} don't need to preshare the state $\ket{\chi^+}$. {\em They} can send the phase-randomized coherent states $\ket{\sqrt{\mu_{Ak}} e^{\mathbf{i}\delta_{A} + \mathbf{i}\gamma_{A}}}$ and $\ket{\sqrt{\mu_{Bk}} e^{\mathbf{i}\delta_{B} + \mathbf{i}\gamma_{B}}}$ to Charlie in $X$-windows, and then use the decoy-state method to estimate the bound on the phase-flip error rate of $\ket{\chi^+}$, $e_1^{ph}$. \\{\em Note:} If some decoy states with intensities $\mu_{Am}$ and $\mu_{Bm}$ are not used to estimate $e_1^{ph}$, these intensities don't have to satisfy the constraint in Eq.(\ref{equ:protocol1}). \subsection{Virtual Protocol 3} \subsubsection{Preparation stage} For each time window $i$, {\em they} preshare the classical information about whether this time window is an $X$-window or a $Z$-window. {\em They} preshare an extended state $\Omega_Z$ in Eq.(\ref{equ:OmegaZX}) for $Z$-windows. \subsubsection{Protocol} {\bf V3-1: }At any $Z$-window, {\em they} send out the real photons of $\Omega_Z$ to Charlie and keep the ancillary photons locally. At any $X$-window, Alice (Bob) sends a coherent state $\ket{\sqrt{\mu_{Ak}} e^{\mathbf{i}\delta_{A} + \mathbf{i}\gamma_{A}}}$ ($\ket{\sqrt{\mu_{Bk}} e^{\mathbf{i}\delta_{B} + \mathbf{i}\gamma_{B}}}$) with random $\delta_A$ and $\gamma_A$ ($\delta_B$ and $\gamma_B$) to Charlie. {\bf V3-2: }Charlie measures the real photons from Alice and Bob with a beam splitter after taking phase compensation according to the strong reference lights with phases $\gamma_A$ and $\gamma_B$. He announces his measurement outcome and then {\em they} announce the values of $\delta_A$ and $\delta_B$ of all $X$-windows. {\bf V3-3: }{\em They} apply the decoy-state method to the data of the effective $\tilde{X}$-windows to estimate the phase-flip error rate $e_1^{ph}$. {\bf V3-4: }{\em They} purify the ancillary photons in the effective $Z$-windows with the estimated value of $e_1^{ph}$. Then {\em they} perform local measurement on their own ancillary photons and obtain the final key. {\em Note 1: Reduction of preshared states in $Z$-windows.} \\{\em Reduction 1: }The state $\Omega_Z$ with the restriction in Eq.(\ref{equ:criterion}) is identical to $\Omega_Z$ without the restriction in Eq.(\ref{equ:criterion}). If we regard all $Z$-windows as a whole, the condition in Eq.(\ref{equ:criterion}) can be loosened, which means that the phase shifts $\delta_A$ and $\delta_B$ applied to the real photons can be randomized in the range $[0,2\pi)$ independently. In this way, {\em they} don't need to share any information about the phase shifts $\delta_A$ and $\delta_B$.
\\ \\{\em Reduction 2: }The process in which {\em they} purify the effective ancillary photons in $Z$-windows and then perform local measurement on them is equivalent to the process in which {\em they} measure these ancillary photons in the photon-number basis and then apply classical distillation to the classical data, which is called quasipurification~\cite{shor2000simple}. In the latter process, the state in a $Z$-window is \begin{equation}\label{equ:OmegaZ'} \Omega_Z^\prime = C^2 (\mu_{A1} \oprod{10}{10} \otimes \oprod{10}{10} + \mu_{B1} \oprod{01}{01} \otimes \oprod{01}{01}), \end{equation} which is equivalent to the state $\Omega_1$ in Eq.(\ref{equ:Omega1}). This means that the protocol can just start with the state $\Omega$ and apply the tagged model to distill the final key from the effective events using the state $\Omega_1$. The length of the final key should be \begin{equation}\label{equ:length2} n_f = n_1 [1 - H (e_1^{ph})] - n_t H(E_Z) \end{equation} where $n_1$ is the number of effective events with state $\Omega_1$ estimated by the decoy-state method, $n_t$ is the number of effective events in $Z$-windows, and $E_Z$ is the bit-flip error rate of these $n_t$ events. A bit-flip error occurs when Alice's bit value is different from Bob's in a $Z$-window. {\em Note 2: Reduction of preshared information of time windows.} \\Alice (Bob) determines a signal window with probability $p^Z_{A}$ ($p^Z_{B}$) and determines a decoy window with intensity $\mu_{Ak}$ ($\mu_{Bk}$) with probability $p^X_{Ak}$ ($p^X_{Bk}$), where $p^Z_{A}+\sum_k p^X_{Ak}=1$ ($p^Z_{B}+\sum_k p^X_{Bk}=1$). A $Z$-window is defined when both of {\em them} determine signal windows and an $X$-window is defined when both of {\em them} determine decoy windows. Other time windows are regarded as mismatch windows and they will be discarded. In this way, {\em they} don't need to preshare any information of time windows, and the states in $Z$-windows and $X$-windows are $\Omega$ and $\rho_{Xk}$, respectively. With the reductions above, the virtual protocol is equivalent to our asymmetric SNS protocol. \section{Numerical Simulation With Asymmetric Channels}\label{sec:simulation} Here we present the results of numerical simulations of different SNS protocols in the case where the channels are asymmetric, i.e., Alice's channel and Bob's channel are different, e.g., they have different channel losses. The original SNS protocol can be modified a little to fit the asymmetric channels; we call this ``the modified SNS protocol'' in the following. Consider the case where the original SNS protocol is applied to asymmetric channels: when {\em they} use the same sources, the intensities of the pulses interfering at the beam-splitter will differ a lot due to the different channel transmittances, which will cause a high error rate in $X$-windows and therefore increase the single-photon phase-flip error rate $e_1^{ph}$. In the modified SNS protocol, {\em they} still use the same source parameters, but Charlie adds an extra loss to one of the channels to make the transmittances of the two channels the same. Explicitly, if the transmittance of Alice's channel, $\eta_A$, is larger than that of Bob's channel, $\eta_B$, Charlie adds an extra loss $1-\eta_B/\eta_A$ to Alice's channel. On the contrary, if $\eta_A<\eta_B$, Charlie adds an extra loss $1-\eta_A/\eta_B$ to Bob's channel. Since Charlie's action doesn't affect the security, the security of the modified SNS protocol is guaranteed automatically.
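As a small illustration of the channel balancing used in the modified SNS protocol (our own sketch, not taken from the protocols; the fiber loss coefficient of 0.2 dB/km matches the value used in the simulations below), the following Python snippet computes the two channel transmittances from the fiber lengths and the extra loss Charlie would add to the stronger channel.
\begin{verbatim}
def transmittance(length_km, alpha_db_per_km=0.2):
    # Transmittance of a fiber channel of the given length.
    return 10 ** (-alpha_db_per_km * length_km / 10)

def extra_loss_for_balancing(eta_a, eta_b):
    # Extra loss Charlie adds to the stronger channel so that the two
    # effective transmittances become equal.
    if eta_a > eta_b:
        return ("Alice", 1 - eta_b / eta_a)
    return ("Bob", 1 - eta_a / eta_b)

# Example: L_A = 0 km, L_B = 50 km, i.e. a 50 km channel-length difference.
eta_a, eta_b = transmittance(0.0), transmittance(50.0)
print(extra_loss_for_balancing(eta_a, eta_b))  # -> ('Alice', 0.9), i.e. about 10 dB
\end{verbatim}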
We show the numerical results of the optimal key rates of the original, the modified, and the general SNS protocols with the asymmetric channels. The effect of the finite data size has been considered in our calculation. The device parameters used in the simulation are listed in Table~\ref{tab:parameters}. \begin{table} \begin{ruledtabular} \begin{tabular}{ccccccc} $N_t$ & $e_d$ & $d$ & $\eta_d$ & $f_e$ & $\xi$ & $\alpha$ \\ \hline $10^{13}$ & 5\% & $10^{-10}$ & 50\% & 1.1 & $10^{-10}$ & 0.2 dB/km\\ \end{tabular} \end{ruledtabular} \caption{\label{tab:parameters} Device parameters used in the numerical simulations. $N_t$ is the total number of pulse pairs; $e_d$ is the misalignment error in the $X$-windows; $d$ is the dark count rate of each detector at the UTP; $\eta_d$ is the detection efficiency of each detector at the UTP; $f_e$ is the error correction inefficiency; $\xi$ is the failure probability in the parameter estimation; $\alpha$ is the channel loss coefficient.} \end{table} In Figure~\ref{fig:50km} and Figure~\ref{fig:100km}, we show the optimal key rates of the three protocols with the difference in length between the two channels ($L_B-L_A$) fixed at 50 km and 100 km, respectively. \begin{figure} \caption{(Color online) The optimized key rates (per pulse pair) versus transmission distance between Alice and Charlie with three different SNS protocols. Here the difference in length between Alice's and Bob's channels is fixed at 50 km. } \label{fig:50km} \end{figure} \begin{figure} \caption{(Color online) The optimized key rates (per pulse pair) versus transmission distance between Alice and Charlie with three different SNS protocols. Here the difference in length between Alice's and Bob's channels is fixed at 100 km. } \label{fig:100km} \end{figure} In the figures, we have also compared our results with the linear bound of repeaterless QKD. There are excellent theoretical linear bounds for the key rate of repeaterless QKD, such as the famous TGW bound~\cite{takeoka2014fundamental} and the PLOB bound~\cite{pirandola2017fundamental}. Also, we show some details of the optimal key rates with different SNS protocols in Table~\ref{tab:keyrate}. \begin{table} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c} $L_A$ (km) & $L_B$ (km) & general SNS & modified SNS & original SNS \\ \hline 0 & 50 & $7.21\times10^{-4}$ & $2.89\times10^{-4}$ & $1.17\times10^{-4}$ \\ \hline 150 & 200 & $4.73\times10^{-7}$ & $1.95\times10^{-7}$ & $5.52\times10^{-8}$ \\ \hline 250 & 300 & $1.19\times10^{-9}$ & $5.08\times10^{-11}$ & $0$ \\ \hline 0 & 100 & $8.89\times10^{-5}$ & $2.63\times10^{-5}$ & $8.09\times10^{-7}$ \\ \hline 100 & 200 & $6.71\times10^{-7}$ & $1.95\times10^{-7}$ & $1.73\times10^{-9}$ \\ \hline 200 & 300 & $2.09\times10^{-9}$ & $5.08\times10^{-11}$ & $0$ \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:keyrate} The optimal key rates with different SNS protocols. The device parameters used in the simulation are listed in Table~\ref{tab:parameters}.} \end{table} It is easy to see that with asymmetric channels, the performance of our general SNS protocol is much better than that of the other two protocols, especially when the difference in length between Alice's and Bob's channels is large. \section{Conclusion}\label{sec:conclusion} In this paper, we propose a general SNS protocol with asymmetric source parameters and give a security proof of this protocol.
The intensities and the probabilities for sending at Alice's and Bob's sides should satisfy the condition given in Eq.(\ref{equ:protocol1}) to guarantee the security in the asymmetric case. We present the numerical results of different SNS protocols to show that our general SNS protocol gives a much higher key rate than the other SNS protocols when Alice's and Bob's channels are different. When the difference in length between Alice's and Bob's channels is 100 km, the key rate of the general SNS protocol is tens to hundreds of times higher than that of the original SNS protocol. Our general SNS protocol can be applied directly to the SNS experiments with asymmetric channels. If we use the method of two-way classical communication~\cite{xu2019general} on our protocol, the key rate of our general protocol can be improved further. We shall report this elsewhere. \section*{Appendix: Formulas For Key Rate Calculation}\label{sec:formulas} \subsection{Four-Intensity Decoy-State Method and Parameter Estimation} In order to make our protocol easy to demonstrate experimentally, we give the four-intensity decoy-state method for our protocol. ``Four-intensity'' means that Alice (Bob) uses four different intensities, $\mu_A^\prime$ ($\mu_B^\prime$) in signal windows and $0$, $\mu_{A1}$ ($\mu_{B1}$), $\mu_{A2}$ ($\mu_{B2}$) in decoy windows. All measurement results in effective $X$-windows are used to estimate the bound on $n_1$. Only measurement results in effective $\tilde{X}$-windows when Alice uses the intensity $\mu_{A1}$ and Bob uses the intensity $\mu_{B1}$ are used to estimate the bound on $e_1^{ph}$. The formula for the length of the final key in Eq.(\ref{equ:length}) can be written in the form of a key rate per time window: \begin{equation}\label{equ:keyrate} \begin{split} R = p_A^Z p_B^Z \{ &[\epsilon_A(1-\epsilon_B) \mu_A^\prime e^{-\mu_A^\prime} + \epsilon_B(1-\epsilon_A) \mu_B^\prime e^{-\mu_B^\prime}]\\ & s_1^Z [1 - H(e_1^{ph})] - f S_Z H(E_Z) \} \end{split} \end{equation} where $s_1^Z$ is the counting rate in $Z_1$-windows, and $S_Z$ is the counting rate of $Z$-windows. If there are $m$ effective windows in a set $\zeta$ of $n$ time windows, the counting rate of $\zeta$ is defined as $S_\zeta=m/n$. So we need to estimate the bounds of $\mean{s_1^Z}$ and $\mean{e_1^{ph}}$ by the four-intensity decoy-state method. Here $\mean{\cdot}$ stands for the expected value of a variable. Similar to the methods in Ref.~\cite{yu2019sending}, the lower bound of $\mean{s_1^Z}$ is given by: \begin{equation}\label{equ:s1Z} \mean{s_1^Z} \ge \underline{\mean{s_1^Z}} = \frac{\mu_{A1}}{\mu_{A1}+\mu_{B1}} \underline{\mean{s_{10}^Z}} + \frac{\mu_{B1}}{\mu_{A1}+\mu_{B1}} \underline{\mean{s_{01}^Z}} \end{equation} where $\underline{\mean{s_{10}^Z}}$ is the lower bound of the expected value of the counting rate of the state $\oprod{10}{10}$ with \begin{equation}\label{equ:s10} \underline{\mean{s_{10}^Z}} \!=\! \frac{\mu_{A2}^2 e^{\mu_{A1}} \mean{S_{\mu_{A1}0}} \!-\! \mu_{A1}^2 e^{\mu_{A2}} \mean{S_{\mu_{A2}0}} \!-\! (\mu_{A2}^2 \!-\! \mu_{A1}^2) \mean{S_{00}}}{\mu_{A1} \mu_{A2} (\mu_{A2} - \mu_{A1})} \end{equation} and $\underline{\mean{s_{01}^Z}}$ is the lower bound of the expected value of the counting rate of the state $\oprod{01}{01}$ with \begin{equation}\label{equ:s01} \underline{\mean{s_{01}^Z}} \!=\! \frac{\mu_{B2}^2 e^{\mu_{B1}} \mean{S_{0\mu_{B1}}} \!-\! \mu_{B1}^2 e^{\mu_{B2}} \mean{S_{0\mu_{B2}}} \!-\! (\mu_{B2}^2 \!-\! \mu_{B1}^2) \mean{S_{00}}}{\mu_{B1} \mu_{B2} (\mu_{B2} - \mu_{B1})}.
\end{equation} Here $\mean{S_{\alpha\beta}}$ is the expected value of the counting rate of the time windows when Alice and Bob send the decoy pulses with intensities $\alpha$ and $\beta$, respectively. If the data size is infinite, these expected values are exactly the values observed in the experiments. If the data size is finite, we should use the Chernoff Bound introduced in the next section to estimate the bound of the expected values from the observed values. The upper bound of $\mean{e_1^{ph}}$ is given by: \begin{equation}\label{equ:e1ph} \mean{e_1^{ph}} \le \overline{\mean{e_1^{ph}}} = \frac{\mean{T_{\Delta}} - e^{-(\mu_{A1}+\mu_{B1})} \mean{S_{00}} /2}{e^{-(\mu_{A1}+\mu_{B1})} (\mu_{A1}+\mu_{B1}) \underline{\mean{s_1^Z}}} \end{equation} where $\mean{T_{\Delta}}$ is the expected value of the error counting rate of $\tilde{X}$-windows, when Alice and Bob send the decoy pulses with intensities $\mu_{A1}$ and $\mu_{B1}$, respectively. If there are $m$ error windows in a set $\zeta$ of $n$ time windows, the error counting rate of $\zeta$ is defined as $T_\zeta=m/n$. \subsection{Chernoff Bound} In the asymptotic case where the data size is infinite, the observed values are the same as the expected values. But in the nonasymptotic case where the data size is finite, the observed values are different from the expected values. So we need the Chernoff bound~\cite{chernoff1952measure} to estimate the range of expected values from the observed values and use the worst case to ensure that the final key is secure. Let $X_1,X_2,\dots,X_n$ be $n$ random variables whose observed values are either 0 or 1, $X$ be their sum $X=\sum_i X_i$, and $\phi$ be the expected value of $X$. We have the lower and the upper bound of $\phi$: \begin{equation}\label{equ:chornoffL} \phi^L (X) = \frac{X}{1+\delta_1(X)} \end{equation} \begin{equation}\label{equ:chornoffU} \phi^U (X) = \frac{X}{1-\delta_2(X)} \end{equation} where $\delta_1(X)$ and $\delta_2(X)$ are the solutions of the following equations: \begin{equation}\label{equ:delta1} \left(\frac{e^{\delta_1}}{(1+\delta_1)^{1+\delta_1}}\right)^{\frac{X}{1+\delta_1}} = \frac{\xi}{2} \end{equation} \begin{equation}\label{equ:delta2} \left(\frac{e^{-\delta_2}}{(1-\delta_2)^{1-\delta_2}}\right)^{\frac{X}{1-\delta_2}} = \frac{\xi}{2} \end{equation} where $\xi$ is the failure probability. With the above equations, we have \begin{equation}\label{equ:bound1} N_{\alpha\beta} \underline{\mean{S_{\alpha\beta}}} = \phi^L(N_{\alpha\beta} S_{\alpha\beta}),N_{\alpha\beta} \overline{\mean{S_{\alpha\beta}}} = \phi^U(N_{\alpha\beta} S_{\alpha\beta}). \end{equation} Here $S_{\alpha\beta}$ is the observed value of the counting rate. Then in Eq.(\ref{equ:keyrate}) we need the real values of $s_1^Z$ and $e_1^{ph}$ in a specific experiment. 
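To make the Chernoff-bound estimation concrete, the following is a minimal Python sketch (our own illustration, not part of the protocol; the function and variable names are hypothetical). It solves the two equations above for $\delta_1(X)$ and $\delta_2(X)$ with a standard root finder and returns the bounds $\phi^L(X)$ and $\phi^U(X)$ on the expected value.
\begin{verbatim}
import math
from scipy.optimize import brentq

def chernoff_bounds(X, xi=1e-10):
    # Lower and upper bounds phi^L(X), phi^U(X) on the expected value of a
    # sum X of 0/1 random variables, with failure probability xi.
    log_target = math.log(xi / 2)

    # Equation for delta_1, taken in logarithms:
    # [delta_1 - (1+delta_1) ln(1+delta_1)] * X/(1+delta_1) = ln(xi/2).
    def f1(d):
        return (d - (1 + d) * math.log(1 + d)) * X / (1 + d) - log_target

    # Equation for delta_2, taken in logarithms:
    # [-delta_2 - (1-delta_2) ln(1-delta_2)] * X/(1-delta_2) = ln(xi/2).
    def f2(d):
        return (-d - (1 - d) * math.log(1 - d)) * X / (1 - d) - log_target

    delta1 = brentq(f1, 1e-12, 100.0)
    delta2 = brentq(f2, 1e-12, 1 - 1e-12)
    return X / (1 + delta1), X / (1 - delta2)

# Hypothetical example: 10^6 observed counts in some set of time windows.
print(chernoff_bounds(1.0e6))
\end{verbatim}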
Therefore, Eqs.(\ref{equ:chornoffL})-(\ref{equ:bound1}) can be written in another form to estimate the upper and lower bounds of the real values from the expected values: \begin{equation}\label{equ:chornoffU'} X^U(\phi) = [1+\delta_1^\prime(\phi)]\phi \end{equation} \begin{equation}\label{equ:chornoffL'} X^L(\phi) = [1-\delta_2^\prime(\phi)]\phi \end{equation} where $\delta_1^\prime(\phi)$ and $\delta_2^\prime(\phi)$ are the solutions of the following equations: \begin{equation}\label{equ:delta1'} \left(\frac{e^{\delta_1^\prime}}{(1+\delta_1^\prime)^{1+\delta_1^\prime}} \right)^{\phi} = \frac{\xi}{2} \end{equation} \begin{equation}\label{equ:delta2'} \left(\frac{e^{-\delta_2^\prime}}{(1-\delta_2^\prime)^{1-\delta_2^\prime}} \right)^{\phi} = \frac{\xi}{2} \end{equation} With the above equations, we have \begin{equation}\label{equ:bound2} \begin{split} N_1^Z s_1^Z &\ge X^L(N_1^Z \underline{\mean{s_1^Z}}),\\ N_1^Z \underline{\mean{s_1^Z}} e_1^{ph} &\le X^U(N_1^Z \underline{\mean{s_1^Z}} \overline{\mean{e_1^{ph}}}). \end{split} \end{equation} where $N_1^Z=N_t p_A^Z p_B^Z [\epsilon_A(1-\epsilon_B) \mu_A^\prime e^{-\mu_A^\prime} + \epsilon_B(1-\epsilon_A) \mu_B^\prime e^{-\mu_B^\prime}]$ is the number of single-photon states in the $Z$-windows in which one and only one of {\em them} decides to send, and $N_t$ is the total number of time windows. \subsection{Finite Key Size Effect} Similar to the analysis of the effect of the finite key size in ref.~\cite{jiang2019unconditional}, we give the key rate formula of our general SNS protocol with the effect of the finite key size in the universally composable framework~\cite{muller2009composability}. If the length of the final key satisfies \begin{equation}\label{equ:finitekey} \begin{split} N_f = n_1 [1 - H(e_1^{ph})] - f n_t H(E_Z) \\ - \log_2 \frac{2}{\varepsilon_{\text{cor}}} - 2\log_2 \frac{1}{\sqrt2 \varepsilon_{\text{PA}} \hat\varepsilon}, \end{split} \end{equation} the protocol is $\varepsilon_{\text{sec}}$-secret with $\varepsilon_{\text{sec}}=2\hat\varepsilon+4\bar\varepsilon+\varepsilon_{\text{PA}}+\varepsilon_{n_1}$, and the total security coefficient of the protocol is $\varepsilon_{\text{tot}}=\varepsilon_{\text{cor}}+\varepsilon_{\text{sec}}$. Here $\varepsilon_{\text{cor}}$ is the probability that the error correction fails, $\bar\varepsilon$ is the probability that the real value of $e_1^{ph}$ is not in the range that we estimate, $\varepsilon_{\text{PA}}$ is the failure probability of the privacy amplification, and $\varepsilon_{n_1}$ is the probability that the real value of $n_1$ is not in the range that we estimate. According to Eqs.(\ref{equ:s1Z})-(\ref{equ:bound2}), we have $\bar\varepsilon=3\xi$ and $\varepsilon_{n_1}=6\xi$. If we set $\varepsilon_{\text{cor}}=\hat\varepsilon=\varepsilon_{\text{PA}}=\xi$ in our numerical simulation, the total security coefficient of our protocol is $\varepsilon_{\text{tot}}=22\xi=2.2\times10^{-9}$. Also, Eq.(\ref{equ:finitekey}) can be written in the form of a key rate per time window in terms of the source parameters: \begin{equation}\label{equ:finitekeyrate} \begin{split} R = p_A^Z p_B^Z \{ &[\epsilon_A(1-\epsilon_B) \mu_A^\prime e^{-\mu_A^\prime} + \epsilon_B(1-\epsilon_A) \mu_B^\prime e^{-\mu_B^\prime}]\\ &\cdot s_1^Z [1 - H(e_1^{ph})] - f S_Z H(E_Z) \} \\ &- \frac{1}{N_t} (\log_2 \frac{2}{\varepsilon_{\text{cor}}} + 2\log_2 \frac{1}{\sqrt2 \varepsilon_{\text{PA}} \hat\varepsilon}). \end{split} \end{equation} \newline \end{document}
\begin{document} \title{On cyclic branched coverings of prime knots} \author{Michel Boileau and Luisa Paoluzzi} \date{\today} \maketitle \begin{abstract} We prove that a prime knot $K$ is not determined by its $p$-fold cyclic branched cover for at most two odd primes $p$. Moreover, we show that for a given odd prime $p$, the $p$-fold cyclic branched cover of a prime knot $K$ is the $p$-fold cyclic branched cover of at most one more knot $K'$ non equivalent to $K$. To prove the main theorem, a result concerning the symmetries of knots is also obtained. This latter result can be interpreted as a characterisation of the trivial knot. \vskip 2mm \noindent\emph{AMS classification:} Primary 57M25; Secondary 57M12; 57M50. \vskip 2mm \noindent\emph{Keywords:} Prime knots, cyclic branched covers, symmetries of a knot, $JSJ$-decomposition. \end{abstract} \section{Introduction} Two knots $K$ and $K'$ are \emph{equivalent} if there is a homeomorphism of ${\mathbb S}^{3}$ sending $K$ to $K'$. Given a knot $K \subset {\mathbb S}^{3}$ and an integer $p \geq 2$ one can construct the (total space of the) $p$-fold cyclic cover $M_{p}(K)$ of ${\mathbb S}^{3}$ branched along $K$: it is a fundamental object in knot theory. There are non-prime knots all of whose cyclic branched covers are homeomorphic. This is no longer true for prime knots: S. Kojima \cite{Ko} proved that for each prime knot $K \subset {\mathbb S}^{3}$ there is an integer $n_{K} \geq 2$ such that two prime knots $K$ and $K'$ are equivalent if their $p$-fold cyclic branched covers are homeomorphic for some $p > \max (n_{K}, n_{K'})$. There are many examples of prime knots in ${\mathbb S}^{3}$ which are not equivalent but share homeomorphic $p$-fold cyclic branched covers due to C. Giller \cite{Gi}, C. Livingston \cite{Li}, Y. Nakanishi \cite{Na}, M. Sakuma \cite{Sa1}. Moreover there is no universal bound for $n_{K}$. The main goal of this article is to study the relationship between prime knots and their cyclic branched covers when the number of sheets is an odd prime number. \begin{Definition} Let $K \subset {\mathbb S}^3$ be a prime knot. A knot $K' \subset {\mathbb S}^3$ which is not equivalent to $K$ and which has the same $p$-fold cyclic branched cover as $K$ is called a \emph{$p$-twin} of $K$. \end{Definition} There are examples of prime knots, even hyperbolic knots (e.g. Montesinos knots) with an arbitrarily large number of non-equivalent $2$-twins. In contrast, for an odd prime number $p$, the number of $p$-twins is very restricted, according to our main result: \begin{Theorem}\label{thm:twins} Let $K\subset {\mathbb S}^3$ be a prime knot. Then: \item{(i)} There are at most two odd prime numbers $p$ for which $K$ admits a $p$-twin. \item{(ii)} For a given odd prime number $p$, $K$ admits at most one $p$-twin. \item{(iii)} Suppose that a prime knot $K$ admits the same knot $K'$ as a $p$-twin and a $q$-twin for two distinct odd prime numbers $p$ and $q$. Then $K$ has two commuting rotational symmetries of order $p$ and $q$ with trivial quotients. \end{Theorem} A \emph{rotational symmetry of order $p$} of a knot $K \subset {\mathbb S}^3$ is an orientation preserving periodic diffeomorphism $\psi$ of the pair $({\mathbb S}^3, K)$ with period $p$ and non-empty fixed-point set disjoint from $K$. We say that the rotational symmetry $\psi$ has \emph{trivial quotient} if $K/\psi$ is the trivial knot. For hyperbolic knots Theorem \ref{thm:twins} is in fact a consequence of B.
Zimmermann's result in \cite{Zim1} whose proof uses the orbifold theorem and the Sylow theory for finite groups. The result in Theorem \ref{thm:twins} is sharp: for any pair of coprime integers $p> q >2$ B. Zimmermann has constructed examples of prime hyperbolic knots with the same $p$-fold and $q$-fold branched coverings \cite{Zim2}. The second named author \cite{Pao2} has proved that a hyperbolic knot is determined by three cyclic branched covers of pairwise distinct orders. The following straightforward corollary of Theorem \ref{thm:twins} shows that a stronger conclusion holds for arbitrary prime knots when we focus on branched coverings with odd prime orders. \begin{Corollary}\label{cor: three covers} A prime knot is determined by three cyclic branched covers of pairwise distinct odd prime orders. More specifically, for every knot $K$ there is at least one integer $p_K \in \{3, 5, 7 \}$ such that $K$ is determined by its $p_K$-fold cyclic branched cover. \end{Corollary} Another straightforward consequence of Theorem \ref{thm:twins} is: \begin{Corollary}\label{cor:composite} Let $K=K_1\sharp...\sharp K_t$ and $K'=K'_1\sharp...\sharp K'_t$ be two composite knots with the same cyclic branched covers of orders $p_j$, $j=1,2,3$, for three fixed, pairwise distinct, odd prime numbers. Then, after a reordering, the (non oriented) knots $K_i$ and $K'_i$ are equivalent for all $i=1,...,t$. \end{Corollary} Part (ii) of Theorem \ref{thm:twins} states that for a given odd prime number $p$ a closed, orientable $3$-manifold can be the $p$-fold cyclic branched cover of at most two non-equivalent knots in ${\mathbb S}^3$. In \cite{BPZ} it has been shown that an integer homology sphere which is an $n$-fold cyclic branched cover of ${\mathbb S}^3$ for four distinct odd prime numbers $n$ is in fact ${\mathbb S}^3$. By putting together these two results we get the following corollary: \begin{Corollary}\label{cor:homologysphere} Let $M$ be an irreducible integer homology $3$-sphere. Then there are at most three distinct knots in ${\mathbb S}^3$ having $M$ as a cyclic branched cover of odd prime order. \end{Corollary} Our main task will be to prove Theorem \ref{thm:twins} for a satellite knot: that is a knot whose exterior ${\mathbb S}^3\setminus{\mathcal U}(K)$ has a non trivial Jaco-Shalen-Johannson decomposition \cite{JS}, \cite{Jo} (in the sequel we use $JSJ$-decomposition for short). Otherwise the knot is called simple: in this case, due to Thurston's hyperbolization theorem \cite{Th2}, its exterior is either hyperbolic, in which case the proof follows already from the works in \cite{Pao2} and \cite{Zim1}, or the knot is a torus knot, in which case a simple combinatorial argument applies. The proof of Theorem \ref{thm:twins} for satellite knots relies on the study of the \emph{partial symmetries} of the exterior $E(K)$ of $K$ induced by the covering transformations associated to the twins of $K$ and on the localization of their axes of fixed points in the components of the $JSJ$-decomposition of $E(K)$. In particular the proof uses the following result about rotational symmetries of prime knots which is of interest in its own right. \begin{Theorem}\label{thm:three rotations} Let $K$ be a knot in ${\mathbb S}^3$ admitting three rotational symmetries with trivial quotients and whose orders are three pairwise distinct numbers $>2$. Then $K$ is the trivial knot.
\end{Theorem} Since the trivial knot admits a rotational symmetry with trivial quotient of order $p$ for each integer $p \geq 2$, the above Theorem \ref{thm:three rotations} can be interpreted as a characterisation of the trivial knot, i.e. a knot is trivial if and only if it admits three rotational symmetries of pairwise distinct orders $>2$ and trivial quotients. \section{Rotational symmetries of knots} A \emph{rotational symmetry} of order $p$ of a knot $K \subset {\mathbb S}^3$ is an orientation preserving, periodic diffeomorphism $\psi$ of the pair $({\mathbb S}^3, K)$ of order $p$ and non-empty fixed-point set disjoint from $K$. We say that the rotational symmetry $\psi$ has \emph{trivial quotient} if $K/\psi$ is the trivial knot. \begin{Remark}\label{rem:lift} Let $K$ be a knot and let $\psi$ be a rotational symmetry of $K$ of order $p$. The symmetry $\psi$ lifts to a periodic diffeomorphism $\tilde\psi$ of the $p$-fold branched cover $M_p(K)$ with order $p$ and non-empty fixed-point set, which commutes with the covering transformation $h$ of $K$ acting on $M_p(K)$. Then the symmetry $\psi$ has trivial quotient if and only if $(M,Fix(\tilde\psi))/<\tilde\psi> \cong ({\mathbb S}^3,K')$. Moreover in this case $K$ and $K'$ have a common quotient link with two trivial components (see \cite{Zim1}). In particular a symmetry of a knot $K$ induced by the covering transformation associated to a $p$-twin $K'$ of $K$ is a $p$-rotational symmetry with trivial quotient. This follows from the fact that the two commuting deck transformations associated to the two twins induce on $M_p(K)$ a ${\mathbb Z}/p{\mathbb Z} \oplus {\mathbb Z}/p{\mathbb Z}$-cover of ${\mathbb S}^3$ branched over a link with two unknotted components. \end{Remark} The main result of this section is the following theorem whose assertion (i) is Theorem \ref{thm:three rotations}: \begin{Theorem}\label{thm:rotations} Let $K$ be a knot in ${\mathbb S}^3$. \item{(i)} Assume that $K$ admits three rotational symmetries with trivial quotients and whose orders are three pairwise distinct numbers $>2$. Then $K$ is the trivial knot. \item{(ii)} Assume that $K$ admits two rotational symmetries $\psi$ and $\varphi$ with trivial quotients and of distinct orders $>2$. Then the fixed-point sets $Fix(\psi)$ and $Fix(\varphi)$ sit in the $JSJ$-component of $E(K)$ which contains $\partial E(K)$. \end{Theorem} We prove first a weaker version of Theorem \ref{thm:three rotations} that we shall use in the remainder of this section (see also \cite[Scholium]{Pao2}). \begin{Proposition}\label{prop:commuting rotations} Let $K$ be a knot in ${\mathbb S}^3$ admitting three commuting rotational symmetries of orders $p>q>r\geq2$. If the symmetries of order $q$ and $r$ have trivial quotients, then $K$ is the trivial knot. \end{Proposition} {\bf Proof.} Denote by $\varphi$, $\psi$ and $\rho$ the three symmetries. If two of them, say $\varphi$ and $\psi$, have the same axis, then by hypothesis the one with smaller order, say $\psi$, must have trivial quotient, i.e. $K/\psi$ is the trivial knot. Since the three symmetries commute, $\varphi$ induces a rotational symmetry of $K/\psi$ which is non trivial since the order of $\varphi$ is larger than that of $\psi$. The axis ${\mathcal A}$ of this induced symmetry is the image of $Fix(\psi)$ in the quotient by the action of $\psi$. In particular $K/\psi$ and ${\mathcal A}$ form a Hopf link and $K$ is the trivial knot: this follows from the equivariant Dehn lemma, see \cite{Hil}.
We can thus assume that the axes are pairwise disjoint. Note that even if $r=2$, since the symmetries commute, the symmetry of order $2$ cannot act as a strong inversion on the axes of the other two symmetries. In this case we would have that the axis of $\rho$, which is a trivial knot, admits two commuting rotational symmetries, $\varphi$ and $\psi$, with distinct axes, which is impossible: this follows, for instance, from the fact (see \cite[Thm 5.2]{EL}) that one can find a fibration of the complement of the trivial knot which is equivariant with respect to the two symmetries. \qed The proof of Theorem \ref{thm:rotations} is based on a series of Lemmata. The first result concerns the structure of the $JSJ$-decomposition of the $p$-fold cyclic branched cover $M$ of a prime knot $K \subset {\mathbb S}^3$. Let $h$ be the covering transformation; then the quotient space $M/<h>$ has a natural orbifold structure, denoted by ${\mathcal O}_p(K)$, with underlying space ${\mathbb S}^3$ and singular locus $K$ with local group a cyclic group of order $p$ (cf. \cite[Chap. 2]{BMP}). According to Bonahon-Siebenmann \cite{BS} and the orbifold theorem \cite{BoP}, \cite{CHK}, such an orbifold admits a characteristic collection of toric $2$-suborbifolds, which split ${\mathcal O}_p(K)$ into geometric suborbifolds. Moreover this characteristic collection of toric $2$-suborbifolds lifts to the $JSJ$-collection of tori for $M$. It follows that for $p > 2$ the Bonahon-Siebenmann characteristic collection of toric $2$-suborbifolds coincides with the $JSJ$-collection of tori for the exterior $E(K) = {\mathbb S}^3\setminus{\mathcal U}(K)$ of $K$. \begin{Lemma}\label{lem:JSJ} Let $p >2$ be an integer and let $M$ be the $p$-fold cyclic branched cover of a prime knot $K$ in the $3$-sphere. Then: \item{(a)} The dual graph associated to the $JSJ$-decomposition of $M$ is a tree. \item{(b)} The fixed-point set of the group of deck transformations is entirely contained in one geometric piece of the decomposition. \end{Lemma} {\bf Proof.} \noindent{\bf (a)} Note, first of all, that $M$ is irreducible since $K$ is prime. Hence the Bonahon-Siebenmann decomposition of the orbifold ${\mathcal O}_p(K)$ lifts to the $JSJ$-collection for $M$ since $p>2$. Moreover, the graph dual to the Bonahon-Siebenmann decomposition of the orbifold ${\mathcal O}_p(K)$, which lifts to the $JSJ$-decomposition for $M$, is a tree. Cutting along a torus of the former decomposition and considering the component $C$ which does not contain $K$, one gets the complement of a knot in ${\mathbb S}^3$. The lemma follows now from the fact that each connected component of a cyclic branched cover of $C$ has a unique boundary component. \noindent{\bf (b)} Note that the group of deck transformations preserves the $JSJ$-collection of tori. If $p>2$, the fixed-point set of this group does not meet any torus of the $JSJ$-decomposition, because each $JSJ$-torus is separating and the fixed-point set is connected. Being connected, the fixed-point set is thus entirely contained in one geometric piece of the $JSJ$-decomposition. \qed \begin{Remark} Note that the conclusion of the first part of the lemma holds also for covers of order $2$. For covers of prime order this property follows also from the fact that $M_p(K)$ is a ${\mathbb Z}/p{\mathbb Z}$-homology sphere (see \cite{Go}). \end{Remark} \begin{Lemma}\label{lem:prime} If a knot $K \subset {\mathbb S}^3$ has a rotational symmetry with trivial quotient, then $K$ is prime. \end{Lemma} {\bf Proof.} M.
Sakuma \cite[Thm 4]{Sa2} showed that the only possible rotational symmetries of a composite knot must either permute cyclically its prime summands, or act as a symmetry of one prime summand while permuting the remaining ones. In particular the quotient knot cannot be trivial. \qed The following is a key lemma for the proofs of Theorems \ref{thm:twins} and \ref{thm:rotations}. \begin{Lemma}\label{lem:companion} Let $K$ be a knot admitting a rotational symmetry $\psi$ of order $p>2$ and consider the $JSJ$-decomposition of its exterior $E(K) = {\mathbb S}^3 \setminus{\mathcal U}(K)$. \item{(i)} A torus $T$ of the decomposition does not separate $\partial E(K)$ from $Fix(\psi)$ if and only if the orbit $\psi T$ has $p$ elements. \item{(ii)} Under the assumption that $\psi$ has trivial quotient, each torus which separates $\partial E(K)$ from $Fix(\psi)$ corresponds to a prime companion of $K$ on which $\psi$ acts with trivial quotient. \end{Lemma} {\bf Proof.} Let $T$ be a torus of the $JSJ$-decomposition of $E(K)$ considered as a torus inside $S^3$: $T$ separates the $3$-sphere into a solid torus containing $K$ and the exterior of a non trivial knot $K_T$ which is a companion of $K$. Note that, since the order of the symmetry $\psi$ is $>2$, its axis cannot meet $T$. Assume that the axis $Fix(\psi)$ of the symmetry is contained in the solid torus. If the orbit of $T$ under $\psi$ does not contain $p$ elements, then a non-trivial power of $\psi$ leaves $T$ invariant, and thus it also leaves the solid torus and the knot exterior invariant. The restriction of this power of $\psi$ to the solid torus acts as a rotation of order $m >1$ around its core and leaves invariant each meridian. This non-trivial power of $\psi$ would then be a rotational symmetry about the non trivial knot $K_T$, which is absurd because of the proof of the Smith conjecture (see \cite{MB}). For the reverse implication, it suffices to observe that the geometric pieces of the decomposition containing $\partial E(K)$ and $Fix(\psi)$ must be invariant by $\psi$, and so must be the unique geodesic segment joining the corresponding vertices in the tree dual to the decomposition. For the second part of the Lemma, note that $K_T/\psi$ is a companion of $K/\psi$, which is trivial by hypothesis. In particular $K_T/\psi$ is also trivial, and thus, by Lemma \ref{lem:prime}, $K_T$ must be prime. \qed The following lemma gives a weaker version of assertion (ii) of Theorem \ref{thm:rotations} under a commutativity hypothesis: \begin{Lemma}\label{lem:two rotations} Let $K$ be a prime knot admitting two commuting rotational symmetries $\psi$ and $\varphi$ of orders $p,q>2$. Then: \item{(i)} The fixed-point sets of $\psi$ and $\varphi$ are contained in the same geometric component of the $JSJ$-decomposition for $E(K)$; \item{(ii)} If $\psi$ has trivial quotient and $p \not = q$, the fixed-point sets of $\psi$ and $\varphi$ sit in the component which contains $\partial E(K)$. \end{Lemma} {\bf Proof.} \noindent{\bf Part (i)} Let $v_{\psi}$ (respectively $v_{\varphi}$) be the vertex of the graph $\Gamma_K$ dual to the $JSJ$-decomposition of $E(K)$ corresponding to the geometric component containing $Fix(\psi)$ (respectively $Fix(\varphi)$). Since the two rotational symmetries commute, $\psi$ (respectively $\varphi$) must leave $Fix(\varphi)$ (respectively $Fix(\psi)$) invariant, and so the geodesic segment of $\Gamma_K$ joining $v_{\psi}$ to $v_{\varphi}$ must be fixed by the induced actions of $\psi$ and $\varphi$ on $\Gamma_K$.
If this segment contains an edge $e$, the corresponding $JSJ$-torus $T$ in $E(K)$ is left invariant by both symmetries, yet it cannot separate both $Fix(\varphi)$ and $Fix(\psi)$ from $\partial E(K)$, since these fixed-point sets lie on opposite sides of $T$. This would contradict part (i) of Lemma \ref{lem:companion}. \noindent{\bf Part (ii)} Let $M$ be the $p$-fold cyclic branched cover of $K$ and let $h$ be the associated covering transformation. According to Remark \ref{rem:lift} the lift $\tilde \psi$ of $\psi$ to $M$ is the deck transformation of a cyclic cover of ${{\mathbb S}}^3$ branched along a knot $K'$. Note that both $\tilde \psi$ and $\tilde \varphi$ (the lift of $\varphi$ to $M$) commute on $M$ with the covering transformation $h$. In particular $\tilde \varphi$ and $h$ induce commuting rotational symmetries of $K'$ with order $q$ and $p$ respectively. According to part $(i)$, $Fix(\tilde \varphi)$ and $Fix(h)$ belong to the same piece of the $JSJ$-decomposition of $M$. Since $Fix(h)$ maps to $K$ and $p \not = q$, $Fix(\varphi)$ sits in the $JSJ$-piece of $E(K)$ which contains $\partial E(K)$ and the conclusion follows since $Fix(\psi)$ belongs to the same $JSJ$-piece as $Fix(\varphi)$. \qed \begin{Lemma}\label{lem:torus} Let $K$ be a knot admitting a rotational symmetry $\psi$ with trivial quotient and of order $p>2$. Let $M$ be the $p$-fold cyclic branched cover of $K$ and denote by $\pi:M \longrightarrow({\mathbb S}^3,K)$ the associated branched cover. Let $T$ be a torus in the $JSJ$-collection of tori of $E(K)$. \item{(i)} The torus $T$ is left invariant by $\psi$ if and only if $\pi^{-1}(T)$ is connected. \item{(ii)} If $\pi^{-1}(T)$ is connected, then the companion $K_T$ of $K$ corresponding to $T$ is prime and the winding number of $T$ with respect to $K$ is coprime with $p$, so in particular it is not zero. \item{(iii)} The torus $T$ is not left invariant by $\psi$ if and only if $\pi^{-1}(T)$ has $p$ components. \end{Lemma} {\bf Proof.} \noindent{\bf Part (i)}. According to Remark \ref{rem:lift}, the $p$-fold cyclic branched cover $M$ of $K$ admits two commuting diffeomorphisms of order $p$, $h$ and $h'=\tilde\psi$, such that: $(M,Fix(h))/<h> \cong ({\mathbb S}^3,K)$ on which $h'$ induces the $p$-rotational symmetry $\psi$ with trivial quotient, and $(M,Fix(h'))/<h'> \cong ({\mathbb S}^3,K')$ on which $h$ induces a $p$-rotational symmetry $\psi'$ with trivial quotient. The preimage $\pi^{-1}(T) = \tilde T$ is connected if and only if it corresponds to a torus $\tilde T$ of the $JSJ$-decomposition of $M$ which is left invariant by $h$. If, by contradiction, $\psi$ does not leave $T$ invariant, then the $h'$-orbit of $\tilde T$ consists of $m>1$ elements. Cutting $M$ along these $m$ separating tori, one gets $m+1$ connected components. \begin{Claim}\label{claim:component} Both $Fix(h)$ and $Fix(h')$ must be contained in the same connected component. \end{Claim} {\bf Proof.} The diffeomorphism $h'$ cyclically permutes the $m$ connected components which do not contain $Fix(h')$. Since $h$ and $h'$ commute, $h$ leaves invariant each of these $m$ components and it acts in the same way on each of them (that is, the restrictions of $h$ to each component are conjugate). Since the set $Fix(h)$ is connected, the claim follows. \qed The $m$ components permuted by $h'$ project to a connected submanifold of the exterior $E(K')$ of the knot $K'$ with connected boundary the image $T'$ of $\tilde T$. This submanifold is invariant by the action of $\psi'$ but does not contain $Fix(\psi')$. This contradicts Lemma \ref{lem:companion}(i).
To conclude the proof of Lemma \ref{lem:torus} (i), it suffices to observe that $h$ and $h'$ play symmetric roles. \noindent{\bf Part (ii)} The first part of assertion (ii) is a straightforward consequence of assertion (i) and of Lemma \ref{lem:companion}. The second part follows from the fact that for $\pi^{-1}(T)$ to be connected, the winding number of $T$ and $p$ must be coprime. \noindent{\bf Part (iii)} is a consequence of the proof of part (i) of Lemma \ref{lem:companion} and of the fact that $h$ and $h'$ play symmetric roles. \qed {\bf Proof of Theorem \ref{thm:rotations}.} The proof is achieved in three steps. \noindent {\bf Step 1.} Theorem \ref{thm:rotations} is true under the assumption that the rotational symmetries commute pairwise. In this case, assertion (i) is the statement of Proposition \ref{prop:commuting rotations}. Assertion (ii) follows from Lemma \ref{lem:two rotations}. \noindent{\bf Step 2.} Theorem \ref{thm:rotations} is true under the assumption that every companion of $K$ is prime (i.e. $K$ is \emph{totally prime}) and has non vanishing winding number (i.e. $K$ is \emph{pedigreed}). Assume that we are in the hypotheses of Theorem \ref{thm:rotations}. Then Lemma \ref{lem:prime} assures that $K$ is a prime knot. If $K$ is also totally prime and pedigreed then M. Sakuma \cite[Thm 4 and Lemma 2.3]{Sa2} proved that, up to conjugacy, the rotational symmetries belong either to a finite cyclic subgroup or to an $S^1$-action in $Diff^{+,+}(S^3,K)$. Thus after conjugacy, step 1 applies. For part (ii) note that the distances of the fixed point set of the symmetries to the vertex containing $\partial E(K)$ in the $JSJ$-graph $\Gamma_K$ do not change by conjugacy. \noindent {\bf Step 3.} Reduction of the proof to step 2. If $K$ is not totally prime or not pedigreed, then it has at least one companion and is therefore non-trivial. We shall construct a non trivial, totally prime and pedigreed knot verifying the hypotheses of Theorem \ref{thm:rotations}. Assertion (i) then follows by contradiction. For assertion (ii) we need to verify that the construction does not change the distance of the pieces containing the axes of rotations to the root containing $\partial E(K)$. Roughly speaking, we consider the $JSJ$-tori closest to $\partial E(K)$ and corresponding either to non-prime or to winding number zero companions. Then we cut $E(K)$ along these tori and keep the component $W$ containing $\partial E(K)$ and suitably Dehn-fill $W$ along these tori to get the exterior of a non-trivial knot $\hat K$ in ${\mathbb S}^3$, which verifies Sakuma's property. More precisely, let $\Gamma_K$ be the tree dual to the $JSJ$-decomposition of $E(K)$ and let $\Gamma_0$ be its maximal (connected) subtree with the following properties: \begin{itemize} \item $\Gamma_0$ contains the vertex $v_\partial$ corresponding to the geometric piece whose boundary contains $\partial E(K)$. Note that the geometric piece of the decomposition corresponding to $v_\partial$ cannot be a composing space since $K$ is prime; \item no vertex of $\Gamma_0$ corresponds to a composing space (i.e. a space homeomorphic to a product $S^1 \times B$ where $B$ is an $n$-punctured disc with $n \geq 2$); \item no edge of $\Gamma_0$ corresponds to a torus whose meridian has linking number $0$ with $K$. \end{itemize} Denote by $X(\Gamma_0)$ the submanifold of $E(K)$ corresponding to $\Gamma_0$. The following claim describes certain properties of $X(\Gamma_0)$ with respect to a rotational symmetry $\psi$ of $({\mathbb S}^3,K)$.
\begin{Claim}\label{claim:sym} Let $\psi$ be a rotational symmetry of $({\mathbb S}^3,K)$ with order $p >2$ and trivial quotient. Then: \item{(i)} The fixed-point set of $\psi$ is contained in $X(\Gamma_0)$. \item{(ii)} The tree $\Gamma_0$ is invariant by the automorphism of $\Gamma_K$ induced by $\psi$ and the submanifold $X(\Gamma_0)$ is invariant by $\psi$. \end{Claim} {\bf Proof.} \noindent{\bf Assertion {(i)}.} Let $\gamma$ be the unique geodesic segment in $\Gamma_K$ which joins the vertex $v_\partial$ to the vertex corresponding to the geometric piece containing $Fix(\psi)$ (see Lemma \ref{lem:JSJ}; note that here we use $p>2$). According to assertion (ii) of Lemma \ref{lem:companion}, no vertex along $\gamma$ can be a composing space. Since the linking number of $K$ and $Fix(\psi)$ must be coprime with $p$, no torus corresponding to an edge of $\gamma$ can have winding number $0$ (see Lemma \ref{lem:torus}). \noindent{\bf Assertion {(ii)}.} This is just a consequence of the maximality of $\Gamma_0$ and the fact that elements of the group $\langle\psi \rangle$ generated by $\psi$ must preserve the $JSJ$-decomposition of $E(K)$ and the winding numbers of the $JSJ$-tori, as well as send composing spaces to composing spaces. \qed Let $\pi : M_{p}(K) \longrightarrow ({\mathbb S}^3,K)$ be the $p$-fold cyclic branched cover. Let $T$ be a torus of the $JSJ$-collection of tori for $E(K)$. Denote by $E_T$ the manifold obtained as follows: cut $E(K)$ along $T$ and choose the connected component whose boundary consists only of $T$. Note that $E_T$ is the exterior of the companion $K_T$ of $K$ corresponding to $T$. \begin{Claim}\label{claim:meridian-longitude} Let $T$ be a torus of $\partial X(\Gamma_0)\setminus\partial E(K)$. The preimage $\pi^{-1}(T)$ consists of $p$ components, each bounding a copy of $E_T$ in $M_{p}(K)$. In particular, there is a well-defined meridian-longitude system $(\mu_T,\lambda_T)$ on each boundary component of $X(\Gamma_0)$, different from $\partial E(K)$, which is preserved by taking the $p$-fold cyclic branched covers. \end{Claim} {\bf Proof.} According to Lemma \ref{lem:torus}, the preimage of $T$ is either connected or consists of $p$ components. If the preimage of $T$ were connected, the tree $\Gamma_0$ would not be maximal according to Lemma \ref{lem:torus}(ii). The remaining part of the Claim is then easy. \qed We wish now to perform Dehn fillings on the boundary of $X(\Gamma_0)$ in order to obtain a totally prime and pedigreed knot admitting pairwise distinct rotational symmetries with trivial quotients. On each component $T$ of $\partial X(\Gamma_0)\setminus\partial E(K)$ we fix the curve $\alpha_n = \lambda_T+n\mu_T$. \begin{Claim}\label{claim:surgery} For all but finitely many $n \in {\mathbb Z}$ the Dehn filling of each component $T$ of $\partial X(\Gamma_0)\setminus\partial E(K)$ along the curve $\alpha_n$ produces the exterior of a non-trivial, prime and pedigreed knot $\hat K$ in ${\mathbb S}^3$. \end{Claim} {\bf Proof.} Note that by the choice of surgery curves the resulting manifold $\hat X(\Gamma_0)$ is the exterior of a knot $\hat K$ in the $3$-sphere, i.e. $\hat X(\Gamma_0)\subset {\mathbb S}^3$, and thus is irreducible. We distinguish two cases: \noindent{\bf {(1)}} The $JSJ$-component $X_T$ of $X(\Gamma_0)$ adjacent to $T$ is Seifert fibred. Then, by the choice of $\Gamma_0$, $X_T$ is a cable space (i.e. the exterior of an $(a,b)$-torus knot in the solid torus bounded by $T$ in ${\mathbb S}^3$).
Moreover the fiber $f$ of the Seifert fibration of $X(\Gamma_0)$ is homologous to $a\mu_{T} + b\lambda_{T}$ on $T$ and the intersection number $\vert \Delta(f,\mu_T) \vert = b > 1$. The intersection number of the filling curve $\alpha_n$ with the fiber $f$ is then $\vert \Delta(f,\alpha_n) \vert = \vert a - nb \vert$ and is $> 1$ for all but finitely many $n \in {\mathbb Z}$. In this case the resulting manifold $X_T(\alpha_n)$ is the exterior of a non trivial torus knot which is prime and pedigreed \cite{CGLS}. \noindent {\bf (2)} The $JSJ$-component $X_T$ of $X(\Gamma_0)$ adjacent to $T$ is hyperbolic. By Thurston's hyperbolic Dehn filling theorem \cite[Chap. 5]{Th1} (see also \cite[Appendix B]{BoP}), for all but finitely many $n \in {\mathbb Z}$ the Dehn filling of each component $T \subset \partial X_T \cap (\partial X(\Gamma_0)\setminus\partial E(K))$ along the curve $\alpha_n$ produces a hyperbolic manifold $X_T(\alpha_n)$ with finite volume. Therefore for all but finitely many $n \in {\mathbb Z}$ the Dehn filling of each component $T \subset \partial X(\Gamma_0)\setminus\partial E(K)$ along the curve $\alpha_n$ produces a $\partial$-irreducible $3$-manifold $\hat X(\Gamma_0) \subset {\mathbb S}^3$ such that each Seifert piece of its $JSJ$-decomposition is either a Seifert piece of $X(\Gamma_0)$ or a non-trivial torus knot exterior. Hence it corresponds to the exterior of a non-trivial knot $\hat{K} \subset {\mathbb S}^3$ which is totally prime. It is also pedigreed by the choice of $\Gamma_0$. \qed Let $\psi$ be a rotational symmetry of $({\mathbb S}^3,K)$ with order $p >2$. Then the restriction ${\psi}_{\vert_{X(\Gamma_0)}}$, given by Claim \ref{claim:sym}, extends to $\hat X(\Gamma_0)$, giving a $p$-rotational symmetry $\hat\psi$ of the non-trivial, totally prime and pedigreed knot $({\mathbb S}^3,\hat K)$. In order to be able to apply step 2 to the knot $\hat K$ and the induced rotational symmetries, we still need to check that the rotational symmetry $\hat \psi$ has trivial quotient when $\psi$ has trivial quotient. This is the aim of the following: \begin{Claim}\label{claim:quotient} If the knot $K/\psi$ is trivial, then the knot $\hat K/\hat\psi$ is trivial. \end{Claim} {\bf Proof.} Let $\pi:M_{p}(K) \longrightarrow({\mathbb S}^3,K)$ be the $p$-fold cyclic branched cover. Let $h$ be the deck transformation of this cover and $h'$ the lift of $\psi$. According to Remark \ref{rem:lift}, $h'$ is the deck transformation for the $p$-fold cyclic cover of the $3$-sphere branched along a knot $K'$. Note that, by Claim \ref{claim:meridian-longitude}, $M_{p}(K)\setminus\pi^{-1}(X(\Gamma_0) \cup {\mathcal U}(K))$ is a disjoint union of $p$ copies of $E(K) \setminus X(\Gamma_0)$. It follows that the $p$-fold cyclic branched cover $M_{p}(\hat K)$ of $\hat K$ is the manifold obtained by a $(\lambda_T+n\mu_T)$-Dehn filling on all the boundary components of $\pi^{-1}(X(\Gamma_0) \cup {\mathcal U}(K))$. The choice of the surgery shows that both $h$ and $h'$ extend to diffeomorphisms $\hat h$ and $\hat h'$ of order $p$ of $M_{p}(\hat K)$. By construction we have that $M_{p}(\hat K)/<\hat h> \cong {\mathbb S}^3$. In the same way $M_{p}(\hat K)/<\hat h'>$ is obtained from $M_{p}(K)/<h'> \cong {\mathbb S}^3$ by cutting off a copy of $E(K) \setminus X(\Gamma_0)$ and Dehn filling along $\partial X(\Gamma_0)$.
The choice of the surgery curve assures that the resulting manifold is again ${\mathbb S}^3$ and the conclusion follows from Remark \ref{rem:lift}. \qed From the non-trivial prime knot $K$, we have thus constructed a non-trivial, totally prime and pedigreed knot $\hat K$ which has the property that every rotational symmetry $\psi$ of $K$ with trivial quotient and order $>2$ induces a rotational symmetry $\hat \psi$ of $\hat K$ with trivial quotient and the same order. Moreover by the choice of the Dehn filling curve in the construction of $\hat K$, the vertex containing $Fix(\hat \psi)$ remains at the same distance from the vertex containing $\partial E(\hat K)$ in the $JSJ$-tree $\Gamma_{\hat K}$ as the vertex containing $Fix(\psi)$ from the vertex containing $\partial E(K)$ in the $JSJ$-tree $\Gamma_K$. Then the conclusion is a consequence of step 2.\qed \section{Twins of a prime knot} In this section we prove Theorem \ref{thm:twins}. If $K$ is trivial, the theorem is a consequence of the proof of Smith's conjecture (see \cite{MB}). We shall thus assume in the remainder of this section that $K$ is non-trivial and $p$ is an odd prime number. Let $M$ be the common $p$-fold cyclic branched cover of two prime knots $K$ and $K'$ in ${\mathbb S}^3$. Let $h$ and $h'$ be the deck transformations for the coverings of $K$ and $K'$ respectively. By the orbifold theorem \cite{BoP} (see also \cite{CHK}), one can assume that $h$ and $h'$ act \emph{geometrically} on the geometric pieces of the $JSJ$-decomposition of $M$, i.e. by isometries on the hyperbolic pieces and respecting the fibration on the Seifert fibred ones. The following lemma describes the Seifert fibred pieces of the $JSJ$-decomposition of the $p$-fold branched cyclic cover $M$ (see also \cite{Ja} and \cite[Lemma 2]{Ko}). \begin{Lemma}\label{lem:seifert} Let $p$ be an odd prime integer and let $M$ be the $p$-fold cyclic branched cover of ${\mathbb S}^3$ branched along a prime, satellite knot $K$. If $V$ is a Seifert piece in the $JSJ$-decomposition for $M$, then the base $B$ of $V$ can be: \begin{enumerate} \item A disc with $2$, $p$ or $p+1$ singular fibres; \item A disc with $1$ hole, i.e. an annulus, with $1$ or $p$ singular fibres; \item A disc with $p-1$ holes and $1$ singular fibre; \item A disc with $p$ holes and $1$ singular fibre; \item A disc with $n$ holes, $n\ge 2$. \end{enumerate} \end{Lemma} {\bf Proof.} It suffices to observe that $V$ projects to a Seifert fibred piece $V'$ of the Bonahon-Siebenmann decomposition for the orbifold ${\mathcal O}_p(K)$. There are four possible cases: \noindent{\bf (a)} $V'$ contains $K$: $V'$ is topologically a non trivially fibred solid torus and $K$ is a regular fibre of the fibration, i.e. a torus knot $K(a,b)$, since it cannot be the core of the fibred solid torus. The knot $K$ lifts to a singular fibre of order $p$ if $p$ does not divide $ab$ and to a regular fibre otherwise. The core of the solid torus is a singular fibre of order, say, $a$. It lifts to a regular fibre if $a=p$, a singular fibre of order $a/(a,p)$ if $p$ does not divide $b$, or to $p$ singular fibres of order $a$ if $p$ divides $b$. Thus $V$ has $p$ boundary components if $p$ divides $a$ and $1$ otherwise. An Euler characteristic calculation shows that $B$ is either a disc with $2$ or $p$ singular fibres, or a disc with $p-1$ holes and with at most $1$ singular fibre. \noindent{\bf (b)} $V'$ is the complement of a torus knot $K(a,b)$ in ${\mathbb S}^3$.
In this case, either $V$ is a copy of $V'$ and $B$ is a disc with $2$ singular fibres, or $V$ is a true $p$-fold cover of $V'$; in the latter case $V$ has exactly one boundary component. Reasoning as in case (a), we see that the two singular fibres of $V'$ must lift to either $2$ singular fibres, or $1$ regular fibre and $p$ singular fibres, or $1$ singular fibre and $p$ singular fibres. In particular $B$ is a disc with $2$, $p$ or $p+1$ singular fibres. \noindent{\bf (c)} $V'$ is the complement of a torus knot $K(a,b)$ in a solid torus, i.e. a cable space, and its base is an annulus with $1$ singular fibre. Reasoning as in (b), we find that $B$ can be a disc with $1$ hole and $1$ or $p$ singular fibres or a disc with $p$ holes and at most $1$ singular fibre. \noindent{\bf (d)} $V'$ is a composing space with at least $3$ boundary components and thus so is $V$. More precisely, note that either $V'$ lifts to $p$ disjoint copies of itself, or $V$ and $V'$ are homeomorphic and $V'$ is obtained by quotienting $V$ via the $p$-translation along the ${\mathbb S}^1$ fibre. In this case $B$ is a disc with at least $2$ holes. This analysis ends the proof of Lemma \ref{lem:seifert}. \qed \begin{Proposition}\label{prop:subtree} Let $M$ be the common $p$-fold cyclic branched cover of two prime knots $K$ and $K'$ in ${\mathbb S}^3$, $p$ an odd prime number, and let $h$ be the deck transformation for the covering of $K$. Let $\Gamma$ be the tree dual to the $JSJ$-decomposition of $M$. The deck transformation $h'$ for the covering of $K'$ can be chosen (up to conjugacy) in such a way that: \item{(i)} There exists a subtree $\Gamma_f$ of $\Gamma$ on which the actions induced by $h$ and $h'$ are trivial; \item{(ii)} The vertices of $\Gamma$ corresponding to the geometric pieces of the decomposition which contain $Fix(h)$ and $Fix(h')$ belong to $\Gamma_f$; \item{(iii)} Let $M_f$ be the submanifold of $M$ corresponding to $\Gamma_f$. The restrictions of $h$ and $h'$ to $M_f$ commute. \end{Proposition} {\bf Proof.} The proof relies on the study of the actions of the two covering transformations $h$ and $h'$ on the $JSJ$-decomposition of the common $p$-fold cyclic branched covering $M$. Since $\Gamma$ is finite, the group generated by the tree automorphisms induced by $h$ and $h'$ is finite as well. Standard theory of group actions on trees assures that a finite group acting on a tree without inversion must have a global fixed point and that its fixed-point set is connected. Thus part (i) of the proposition follows, using the fact that $h$ and $h'$ have odd orders. Choose now $h'$, up to conjugacy in $Diff^+(M)$, in such a way that $\Gamma_f$ is maximal. We want to show that, in this case, $M_f$ contains $Fix(h)$ and $Fix(h')$. Assume by contradiction that the vertex $v_h$ of $\Gamma$ corresponding to the geometric piece containing $Fix(h)$, whose existence is ensured by Lemma \ref{lem:JSJ}, does not belong to $\Gamma_f$. Let $\gamma_h$ be the unique geodesic path in $\Gamma$ connecting $v_h$ to $\Gamma_f$. Let $e_h$ be the edge in $\gamma_h$ adjacent to $\Gamma_f$ and denote by $T$ the corresponding torus of the $JSJ$-collection of tori for $M$. Let $U$ be the connected component of $M\setminus T$ which contains $Fix(h)$. Consider the $\langle h,h'\rangle$-orbit of $U$. This orbit is the disjoint union of $h$-orbits (and likewise of $h'$-orbits). Remark that the $h$-orbit of $U$ is $\{U\}$.
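Before stating the next claim, we record, purely for the reader's convenience, the elementary counting fact that underlies the arguments below (it is nothing more than the orbit-stabiliser theorem for a cyclic group of prime order; the symbol $Y$ is auxiliary notation introduced only here): since $h$ and $h'$ have prime order $p$, for any object $Y$ permuted by one of them (an element of the orbit $\langle h,h'\rangle U$, a $JSJ$-torus, etc.) one has
\[
\vert \langle h\rangle\, Y\vert \in \{1,p\} \qquad \text{and} \qquad \vert \langle h'\rangle\, Y\vert \in \{1,p\}.
\]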
\begin{Claim}\label{claim:orbit} The orbit $\langle h,h'\rangle U$ must contain an $h$-orbit, different from $\{U\}$ and consisting of a single element. \end{Claim} {\bf Proof.} Otherwise all the $h$-orbits in $\langle h,h'\rangle U$ different from $\{U\}$ would have $p$ elements, since $p$ is prime. In particular, the cardinality of $\langle h,h'\rangle U$ would be of the form $kp+1$. This implies that at least one of the $h'$-orbits in $\langle h,h'\rangle U$ must consist of a single element $U'$, for otherwise the cardinality of $\langle h,h'\rangle U$ would be a multiple of $p$. Up to conjugacy with an element of $\langle h,h'\rangle$ (whose induced action on $\Gamma_f$ is trivial), we can assume that $U=U'$, contradicting the hypothesis that $h'$ was chosen up to conjugacy in such a way that $\Gamma_f$ is maximal. \qed Let $U'\neq U$ be the element of $\langle h,h'\rangle U$ such that $h(U')=U'$. Note that $U$ and $U'$ are homeomorphic since they belong to the same $\langle h,h'\rangle$-orbit. \begin{Claim}\label{claim:knot complement} $U$ is homeomorphic to the exterior $E({{\mathcal K}})$ of a knot ${\mathcal K} \subset {\mathbb S}^3$ admitting a free symmetry of order $p$. \end{Claim} {\bf Proof.} The first part of the Claim follows from the fact that, by maximality of $\Gamma_f$, $h'$ cannot leave $U$ invariant, so it must freely permute $p$ copies of $U$ belonging to $\langle h,h'\rangle U$. Thus $U$ must appear as a union of geometric pieces of the $JSJ$-splitting of $E(K')$. The second part follows from the fact that $h$ must act freely on $U'$, which is homeomorphic to $U$. \qed \begin{Remark}\label{rem:free quotient} Note that the quotient of $U$ by the action of its free symmetry of order $p$ is also a knot exterior because $h$ acts freely on $U'$ and $U'$ must project to a union of geometric pieces of the $JSJ$-splitting of $E(K)$. \end{Remark} \begin{Claim}\label{claim:non-free quotient} $U$ admits a rotational symmetry of order $p$ whose quotient $U/\langle h\rangle$ is topologically a solid torus. \end{Claim} {\bf Proof.} The quotient $U/\langle h\rangle$ is obtained by cutting ${\mathbb S}^3$ along an essential torus in $E(K)$. Since $K \subset U/\langle h\rangle$, it must be a solid torus. \qed It follows from Claim \ref{claim:non-free quotient} and Lemma \ref{lem:prime} that the knot ${\mathcal K}$ is prime. Moreover, according to Claims \ref{claim:knot complement} and \ref{claim:non-free quotient}, ${\mathcal K}$ admits a rotational symmetry and a free symmetry, both of order $p$. This is however impossible because M. Sakuma \cite[Thm. 3]{Sa2} showed that a prime knot can only have one symmetry of odd order up to conjugacy. This contradiction proves part (ii) of Proposition \ref{prop:subtree}. To prove part (iii) we shall consider two cases, according to the structure of $\Gamma_f$. \noindent {\bf Case (a)}: {\it $\Gamma_f$ contains an edge.} Choose an edge in $\Gamma_f$ and let $T$ be the corresponding torus in the $JSJ$-collection of tori for $M$. Let $V$ be a geometric piece of the $JSJ$-decomposition of $M$ adjacent to $T$. Then Lemma \ref{lem:commutation} below, together with a simple induction argument, shows that $h'$ can be chosen (up to conjugacy) in such a way that its restriction to $M_f$ commutes with the restriction of $h$. \begin{Lemma}\label{lem:commutation} If the covering transformations $h$ and $h'$ preserve a $JSJ$-torus $T$ of $M$ then, up to conjugacy in $Diff^{+}(M)$, $h$ and $h'$ commute on the union of the geometric components of the $JSJ$-decomposition adjacent to $T$.
\end{Lemma} {\bf Proof.} First we show that $h$ and $h'$ commute on each geometric component adjacent to $T$. Since $h$ and $h'$ preserve the orientation of $M$, we deduce that $h(V)=V$ and $h'(V)=V$, and that $h$ and $h'$ act geometrically on the geometric piece $V$. A product structure on $T$ can always be induced by the geometric structure on $V$: either by considering the induced Seifert fibration on $T$ if $V$ is Seifert fibred, or by identifying $T$ with a section of a cusp in the complete hyperbolic manifold $V$. Since $h$ and $h'$ are isometries of order $p$, for such a product structure on $T$ they act as (rational) translations, i.e. their action on $T={{\mathbb S}}^1\times{{\mathbb S}}^1$ is of the form $(\zeta_1,\zeta_2) \mapsto (e^{2i\pi r_1/p}\zeta_1,e^{2i\pi r_2/p}\zeta_2)$, where at least one of $r_1$ and $r_2$ is coprime with $p$. Since two such translations compose, in either order, to the translation determined by the sums of the corresponding exponents, $h$ and $h'$ commute on $T$. If $V$ is hyperbolic, we have just seen that $h$ and $h'$ are two isometries of $V$ which commute on the cusp corresponding to $T$. Thus they must commute on $V$. If $V$ is Seifert fibred, then the Seifert fibration is unique up to isotopy, and $h$ and $h'$ preserve this fibration. \begin{Remark} Note that the quotient of $V$ by a fibre-preserving diffeomorphism of finite order $h$ only depends on the combinatorial behaviour of $h$, i.e. its translation action along the fibre and the induced permutation on cone points and boundary components of the base. In particular, the conjugacy class of $h$ only depends on these combinatorial data. Note moreover that two geometric symmetries having the same combinatorial data are conjugate via a diffeomorphism isotopic to the identity. \end{Remark} Since the translation along the fibres commutes with every fibre-preserving diffeomorphism of $V$, it suffices to see whether $h$ and $h'$ commute, up to a conjugation of $h'$, on the base $B$ of $V$. It is enough then to consider the possible actions of order $p$ on the possible bases. According to Lemma \ref{lem:seifert}, the possible actions of $h$ and $h'$ are described below: \begin{enumerate} \item If $B$ is a disc with $2$ singular fibres, or an annulus with $1$ singular fibre, or a disc with $n$ holes, $n\neq p$, or a disc with $p-1$ holes and $1$ singular fibre, then the action on $B$ is necessarily trivial and there is nothing to prove. Note that, according to the proof of Lemma \ref{lem:seifert}, if $B$ is a disc with $p-1$ holes with one singular fibre, no boundary torus is left invariant, so this possibility in fact does not occur. \item If $B$ is a disc with $p$ holes and $1$ singular fibre or a disc with $p+1$ singular fibres, then the only possible action is a rotation about a singular fibre cyclically permuting the holes or the remaining singular fibres. \item If $B$ is a disc with $p$ singular fibres, then the action must be a rotation about a regular fibre which cyclically exchanges the singular fibres. \item If $B$ is an annulus with $p$ singular fibres, the action must be a free rotation cyclically exchanging the singular fibres. Note that in the three latter cases the action can never be trivial on the base. \item If $B$ is a disc with $n$ holes then two situations can arise: either the action is trivial on the base (case (d) in the proof of Lemma \ref{lem:seifert}; note that in case (a), when $n=p-1$, all boundary components must be cyclically permuted), or $n=p$ and the action is a rotation about a regular fibre which cyclically permutes the $p$ holes (see part (c) of Lemma \ref{lem:seifert}).
\end{enumerate} We shall now show that, if both $h$ and $h'$ induce non trivial actions on the base of $V$, then, up to conjugacy, $h$ and $h'$ can be chosen so that their actions on $B$ coincide. Note that for $h$ and $h'$ to commute it would suffice that the action of $h'$ on $B$ coincides with the action of some power of $h$; however, the stronger version will be needed in the proof of Lemma \ref{cor:extension}. First of all, remark that, if $B$ is a disc with $p+1$ singular fibres (case 2) and $h$ and $h'$ leave invariant distinct singular fibres, then all the singular fibres must have the same order (in fact, must have the same invariants). This means that, after conjugating $h'$ by a homeomorphism of $V$ which is either an isotopy exchanging two regular fibres or a Dehn twist along an incompressible torus exchanging two singular fibres, one can assume that, in cases 2 and 3, $h$ and $h'$ leave set-wise invariant the same fibre. Note that this homeomorphism is isotopic to the identity on $\partial V$ and thus extends to $M$. In fact, using Lemma \ref{lem:seifert} one can show that the fibres cannot all have the same order. Since the actions of $h$ and $h'$ consist in permuting exactly $p$ holes or singular fibres, it suffices to conjugate $h'$ via a homeomorphism of $V$ (which is a composition of Dehn twists along incompressible tori) in such a way as to exchange the order of the holes or singular fibres so that $h'$ and $h$ cyclically permute them in the same order. Note that in the case of singular fibres this product of Dehn twists is isotopic to the identity on $\partial V$ and thus extends to $M$. In the case of holes, the product of Dehn twists extends to $M$ since it induces the identity on the fundamental groups of the tori of $\partial V$ and the connected components of $M\setminus V$ adjacent to boundary tori different from $T$ are necessarily homeomorphic. Once the two diffeomorphisms $h$ and $h'$ commute on the two geometric pieces adjacent to $T$, the commutation can be extended to a product neighborhood of $T$, since the two finite abelian groups generated by the restrictions of $h$ and $h'$ on each side of $T$ have the same action on $T$. Indeed, the slope of the translation induced by $h'$ on $T$ has been left unchanged by the conjugation. \qed \begin{Remark}\label{rem:same action} Note that in case 1 of the proof of the above Lemma, the actions of $h$ and $h'$ must coincide after taking a power, i.e. $h$ and $h'$ generate the same cyclic group. This is not necessarily true in the remaining cases, even if $h$ and $h'$ induce the same action on $B$. Indeed, they can induce different translations along the fibres. Nevertheless, in both cases, to assure that the actions of $h$ and $h'$ coincide on $V$, it suffices to check that they coincide on $T$. \end{Remark} \noindent {\bf Case (b)}: {\it $\Gamma_f$ is a single vertex.} Let $V=M_f$ be the geometric piece corresponding to the unique vertex of $\Gamma_f$. If $V=M$, then the result is already known. We can thus assume that $V\neq M$. According to part (ii) of Proposition \ref{prop:subtree}, we can assume that the fixed-point sets of $h$ and $h'$ are contained in $V$. If $V$ is Seifert fibred, then case (a) of the proof of Lemma \ref{lem:seifert} shows that the base $B$ of $V$ is either a disc with $2$ or $p+1$ singular fibres, or a disc with $p-1$ holes and with $1$ or $2$ singular fibres. In the first case, the boundary torus of $V$ is preserved by $h$ and $h'$ and the assertion follows from Lemma \ref{lem:commutation}.
In the second case, the action on the base is necessarily a rotation fixing two points (either the unique singular fibre and a regular one, or the two singular fibres) and cyclically permuting the $p$ boundary components. Then conjugating $h'$ by a product of Dehn twists along incompressible tori, which extends to $M$ as in the proof of Lemma \ref{lem:commutation}, leads to the desired conclusion. The case where $V$ is hyperbolic is due to B. Zimmermann \cite{Zim1}. We give the argument for completeness. Since $V$ is hyperbolic, we consider the group ${\mathcal I}_V$ of isometries of $V$ induced by diffeomorphisms of $M$ which leave $V$ invariant. Let ${\mathcal S}$ be the $p$-Sylow subgroup of ${\mathcal I}_V$. Up to conjugacy, we can assume that both $h=h_{\vert_V}$ and $h'=h'_{\vert_V}$ belong to ${\mathcal S}$. If the groups $\langle h\rangle$ and $\langle h'\rangle$ generated by $h$ and $h'$ are conjugate, we can assume that $h=h'$ and we are done. So we assume that $\langle h\rangle$ and $\langle h'\rangle$ are not conjugate. Then it suffices to prove that $h'$ normalises $\langle h\rangle$, because each element normalising $\langle h\rangle$ must leave invariant $Fix(h)$ and the subgroup of ${\mathcal I}_V$ which leaves invariant a simple closed geodesic, like $Fix(h)$, must be a finite subgroup of ${\mathbb Z}/2{\mathbb Z}\ltimes({\mathbb Q}/{\mathbb Z}\oplus{\mathbb Q}/{\mathbb Z})$. In particular, elements of odd order must commute, since they are necessarily contained in the abelian subgroup ${\mathbb Q}/{\mathbb Z}\oplus{\mathbb Q}/{\mathbb Z}$, every element outside it having order at most two. Assuming that $\langle h\rangle$ and $\langle h'\rangle$ are not conjugate, we have that $\langle h\rangle \subsetneq {\mathcal S}$ and, by \cite[Ch 2, 1.5]{Su}, either $\langle h\rangle$ is normal in ${\mathcal S}$ and we have reached the desired conclusion, or there exists an element $\hat{h}=ghg^{-1}$, conjugate to $h$ in ${\mathcal S}$, which normalises $\langle h\rangle$ and such that $\langle h\rangle \cap\langle{\hat{h}}\rangle=\{1\}$. We want to show that $h'$ normalises $\langle h\rangle$. Assume, by contradiction, that $h'$ is not contained in $\langle h,\hat{h}\rangle = {\mathbb Z}/p{\mathbb Z}\oplus{\mathbb Z}/p{\mathbb Z}$. Then this group is smaller than ${\mathcal S}$ and again we are able to find a new cyclic group $H$ of order $p$ whose intersection with $\langle h,\hat{h}\rangle$ is reduced to the identity and which normalises $\langle h,\hat{h}\rangle$. Since the order of $H$ is an odd prime number and since, by \cite[Proposition 4]{MZ}, $\langle h\rangle$ and $\langle\hat{h}\rangle$ are the only subgroups of $\langle h,\hat{h}\rangle$ which fix a geodesic point-wise, $H$ would commute with $\langle h,\hat{h}\rangle$, which is a contradiction to the structure of a group leaving a geodesic invariant. This final contradiction shows that, up to conjugacy, the subgroups $\langle h\rangle$ and $\langle h'\rangle$ either commute or coincide on $V$. This finishes the proof of Proposition \ref{prop:subtree}. \qed The following proposition shows that a prime knot $K$ having a $p$-twin either admits a rotational symmetry of order $p$, or a well-specified submanifold $E_p(K)$ built up of geometric pieces of the $JSJ$-decomposition of $E(K)$ admits a symmetry of order $p$ with non-empty fixed-point set. \begin{Definition} Let $K$ be a prime knot in ${\mathbb S}^3$.
For each odd prime number $p$ we define $E_p(K)$ to be the connected submanifold of $E(K)$ containing $\partial E(K)$ and such that $\partial E_p(K) \setminus \partial E(K)$ is the union of the $JSJ$-tori of $E(K)$ with winding number $p$ which are closest to $\partial E(K)$. \end{Definition} \begin{Proposition}\label{prop:orbifold} Let $K$ be a prime knot and let $p$ be an odd prime number. Then for any $p$-twin $K'$, the deck transformation of the branched cover $M\longrightarrow({{\mathbb S}}^3,K')$ induces on $E_p(K)$ a symmetry of order $p$, with non-empty fixed-point set and which extends to ${\mathcal U}(K)$. \end{Proposition} {\bf Proof.} First we show that the deck transformation of the branched cover $M\longrightarrow({{\mathbb S}}^3,K')$ associated to a $p$-twin of $K$ induces on $E_p(K)$ a symmetry of order $p$. Let $K'$ be a $p$-twin of $K$. Let $h$ and $h'$ be the deck transformations on $M$ for the $p$-fold cyclic branched covers of $K$ and $K'$. We shall start by understanding the behaviour of $h$ and $h'$ on $M$. We have seen in Proposition \ref{prop:subtree} that $h$ and $h'$ can be chosen to commute on the submanifold $M_f$ of $M$ corresponding to the maximal subtree of $\Gamma$ on which both $h$ and $h'$ induce a trivial action. Let $\Gamma_c$ be the maximal $\langle h,h'\rangle$-invariant subtree of $\Gamma$ containing $\Gamma_f$, such that, up to conjugacy, $h$ and $h'$ can be chosen to commute on the corresponding submanifold $M_c$ of $M$. If $M_c = M$, then after conjugation $h'$ commutes with $h$ on $M$, but is distinct from $h$ because the knots $K$ and $K'$ are not equivalent. Hence it induces a rotational symmetry of order $p$ of the pair $(S^3,K)$ and we are done. So we consider now the case where $\partial M_c$ is not empty. It is sufficient to show that $E_p(K) \subset M_c/<h>$: then the symmetry of order $p$ induced by $h'$ on $M_c/<h>$ must preserve $E_p(K)$ since each $JSJ$-torus of $E(K)$ can only be mapped to another torus of the family with the same winding number and the same distance from $\partial E(K)$. First we show: \begin{Claim}\label{claim:permutation} Let $T$ be a connected component of $\partial M_c$. The $h$-orbit of $T$ consists of $p$ elements which are permuted in the same way by $h$ and $h'$. \end{Claim} {\bf Proof.} Let $T$ be a torus in $\partial M_c$ and let $U$ be the connected component of $M\setminus M_c$ adjacent to $T$. Because of Lemma \ref{lem:commutation}, $T$ cannot be preserved by both $h$ and $h'$, for otherwise $M_c$ would not be maximal. Without loss of generality, we can assume that either: \noindent{\bf (a)} $h(T) \neq T$ and $h'(T) \neq T$; \noindent or \noindent{\bf (b)} $h(T)=T$ but $h'(T)\neq T$; in this case, since $h$ and $h'$ commute on $M_c$, we have that $h(h{'}^{\alpha}(U)) = h{'}^{\alpha}(U)$. Then part (ii) of Proposition \ref{prop:subtree} implies that $h$ acts freely on $h{'}^{\alpha}(U)$ for each $\alpha=0,...,p-1$. In case (a), the orbit of $T$ by the action of the group $\langle h,h'\rangle$ consists of $p$ or $p^2$ elements which bound on one side $M_c$ and on the other side a manifold homeomorphic to $U$. If the orbit consists of $p$ elements, since $h$ and $h'$ commute on $M_c$, up to choosing a different generator in $\langle h'\rangle$ we can assume that $h$ and $h'$ permute the elements of the orbit in the same way. Indeed, we have $h'h(T)=hh'(T)=h(h^{\alpha}(T))=h^{\alpha}(h(T))$.
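To spell out this last step (a short verification under the case (a) assumption; the integers $\alpha$, $\beta$ and $j$ below are auxiliary and introduced only for this purpose): since $h(T)\neq T$ and $p$ is prime, the $h$-orbit of $T$ already has $p$ elements and therefore coincides with the $\langle h,h'\rangle$-orbit, so that $h'(T)=h^{\alpha}(T)$ for some $1\le \alpha\le p-1$. The displayed identities, which only use the commutativity of $h$ and $h'$ on $M_c$, then give
\[
h'\bigl(h^{j}(T)\bigr)\;=\;h^{j}\bigl(h'(T)\bigr)\;=\;h^{j+\alpha}(T),\qquad j=0,\dots,p-1,
\]
so $h'$ permutes the orbit of $T$ exactly as $h^{\alpha}$ does; replacing $h'$ by the generator $(h')^{\beta}$ of $\langle h'\rangle$, where $\alpha\beta\equiv 1 \pmod p$, yields a generator permuting the orbit exactly as $h$ does.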
If the orbit consists of $p^2$ elements, $U$ is a knot exterior and there is a well-defined longitude-meridian system on each component of the $\langle h,h'\rangle$-orbit of $T$. In particular, there is a unique way to glue a copy of $U$ along the projection of $T$ in $M_c/\langle h,h'\rangle$. This implies that $h$ and $h'$ commute up to conjugacy on $M_c\cup\langle h,h'\rangle U$, contradicting the maximality of $M_c$. Note also that in this latter case the stabiliser of each component of $\langle h,h'\rangle U$ is reduced to the identity, which clearly extends to $\langle h,h'\rangle U$. Assume we are in case (b). Consider the restriction of $h$ and $h_\alpha=h{'}^{-\alpha}hh{'}^\alpha$ to $U$. Since $h$ and $h'$ commute on $M_c$, $h$ and $h_\alpha$ coincide on $T$. Let $V$ be the geometric piece of the $JSJ$-decomposition for $M$ adjacent to $T$ and contained in $U$. Using Lemma \ref{lem:commutation}, we see that $h$ and $h_\alpha$ commute on $V$ and thus coincide on it, because they coincide on $T$. Thus $h$ and $h'$ commute on $M_c\cup_{\alpha=0}^{p-1}h{'}^\alpha(V)$, and again we reach a contradiction to the maximality of $M_c$. \qed We can thus assume to be in case (a) and that the $\langle h,h'\rangle$-orbit of $T$ has $p$ elements. \begin{Claim}\label{claim:winding number} Each torus in the boundary of $M_c/<h>$ has winding number $p$ with respect to $K$. \end{Claim} {\bf Proof.} Since a boundary component $T$ of $M_c/<h>$ lifts to $p$ boundary components of $M_c$, the winding number of $T$ with respect to $K$ must be a multiple of $p$. We shall now reason by induction on the number $n$ of boundary components of $M_c/<h>$. If $n=0$, there is nothing to prove. If $n = 1$, the quotient spaces $M_c/<h>$ and $M_c/<h'>$ are solid tori, i.e. each is the exterior of a trivial knot, which can be identified with a meridian of the corresponding solid torus. Note that the winding number of $T$ is precisely the linking number of $K$ with such a meridian. Note, moreover, that the spaces $M_c/<h>$ and $M_c/<h'>$ have a common quotient ${\mathcal O}$ which is obtained by quotienting $M_c/<h>$ (respectively $M_c/<h'>$) via the symmetry $\psi$ (respectively $\psi'$) of order $p$ and with non-empty fixed-point set, induced by $h'$ (respectively $h$). Since $\psi'$ preserves $\partial{(M_c/<h'>)}$ and has non-empty fixed-point set, $Fix(\psi')$ and the meridian of $\partial{(M_c/<h'>)}$ must form a Hopf link; in particular, their linking number is $1$. The image of $Fix(\psi')$ and of the meridian of $\partial{(M_c/<h'>)}$ form again a Hopf link in ${\mathcal O} =(M_c/<h'>)/\psi'$. By lifting them up to $M_c/<h>$, we see that the meridian lifts to a meridian and the image of $Fix(\psi')$ lifts to $K$, which thus have linking number $p$. Hence the property is proved in this case. If $n>1$, we shall perform trivial Dehn surgery on $n-1$ boundary components of $M_c/<h>$. Note that such a surgery does not change the winding number of the remaining boundary components (for the boundary components are unlinked), that the symmetry of order $p$ of $M_c/<h>$ extends to the resulting solid torus, and that the surgery can be lifted to $M_c$ in such a way that the quotient of the resulting manifold by the action of the diffeomorphism induced by $h'$ is again a solid torus. This last property follows from the fact that each connected component of $E(K)\setminus(M_c/\langle h\rangle)$ is the exterior of a knot which lifts in $M$ to $p$ diffeomorphic copies.
These $p$ copies of the knot exterior are permuted by $h'$ and a copy appears in the $JSJ$-decomposition of $E(K')$. This means that on each boundary component there is a well-defined meridian-longitude system which is preserved by $h$ and $h'$ and by passing to the quotient. The claim follows now from the case $n=1$. \qed Now Claims \ref{claim:permutation} and \ref{claim:winding number} imply that $E_p(K)$ is a submanifold of $(M_c/\langle h\rangle) \cap E(K)$. Note, moreover, that the fixed-point set of the induced symmetry is contained in $M_f/\langle h\rangle\subset M_c/\langle h\rangle$. In particular, each torus of the $JSJ$-family separating this fixed-point set from $K$ lifts to a single torus of the $JSJ$-family for $M$ and its winding number cannot be a multiple of $p$. We can thus conclude that the fixed-point set of the symmetry induced by $h'$ is contained in $E_p(K)$. This finishes the proof of Proposition \ref{prop:orbifold}. \qed \begin{Remark}\label{rem:orbifold} Note that $(M_c/\langle h\rangle) \cap E(K)$ can be larger than $E_p(K)$, for there might be tori of the $JSJ$-collection for $M$ which have an $\langle h,h'\rangle$-orbit containing $p^2$ elements and which project to tori with winding number $p$. Note also that $E_p(K)$ coincides with $E(K)$ if there are no $JSJ$-tori in $E(K)$ with winding number $p$. \end{Remark} \begin{Remark}\label{rem:commutation} The deck transformations $h$ and $h'$ cannot commute on the submanifolds $U$ of $M$ corresponding to branches of $\Gamma$ whose $h$- and $h'$-orbits coincide and consist of $p$ elements, if $h$ and $h'$ are different; that is, the stabiliser $h'h^{-1}$ is a finite order diffeomorphism of $U$ if and only if it is trivial. To see this, assume that there is a unique orbit of this type and assume by contradiction that $h$ and $h'$ commute on $M$ and are distinct. The diffeomorphism $h'$ would induce a non trivial symmetry of $E(K)$ of order $p$ and non-empty fixed-point set which fixes set-wise the projection of $U$ and acts freely on it. This contradicts the first part of Lemma \ref{lem:companion}. If there are $n>1$ such orbits, an equivariant Dehn surgery argument on $n-1$ components leads again to a contradiction. \end{Remark} Here is a straightforward corollary of Proposition \ref{prop:orbifold} which generalises a result proved by B. Zimmermann \cite{Zim1} for hyperbolic knots. \begin{Corollary}\label{cor:p-symmetry} Let $K$ be a prime knot and let $p$ be an odd prime number. If $K$ has no companion of winding number $p$ and has a $p$-twin, then $K$ admits a rotational symmetry of order $p$ with trivial quotient. \qed \end{Corollary} So far we have proved that if a prime knot $K$ has a $p$-twin, then either $E(K)$ admits a $p$-rotational symmetry or a well-specified submanifold $E_p(K)$ of $E(K)$ admits a symmetry of order $p$ with non-empty fixed-point set. We shall say that the $p$-twin induces a \emph{symmetry}, respectively a \emph{partial symmetry}, of $K$. \begin{Proposition}\label{prop:twin} Let $K$ be a prime knot. Assume that $K$ has a $p$-twin and a $q$-twin for two distinct odd prime numbers $p$ and $q$. \item{(i)} At least one twin, say the $q$-twin, induces a $q$-rotational symmetry $\psi_q$ of $K$. Moreover: \item{(ii)} If the $p$-twin induces a partial $p$-symmetry of $K$, then $\partial E_p(K) \setminus \partial E(K)$ is a $JSJ$-torus which separates the fixed point set $Fix(\psi_q)$ from $\partial E(K)$. \end{Proposition} First we study some properties of partial symmetries induced by $p$-twins for an odd prime number $p$.
\begin{Lemma}\label{lem:partial companion} Let $K$ be a prime knot and let $\psi$ be the partial symmetry of order $p$ induced on $E_p(K)$ by a $p$-twin. Let $T$ be a torus of the $JSJ$-collection of $E_p(K)$ which is not in the boundary. Then $T$ does not separate $\partial E(K)$ from $Fix(\psi)$ if and only if its $\psi$-orbit has $p$ elements. Moreover, this is the case if and only if the preimage of $T$ in the $p$-fold cyclic branched cover of $K$ consists of $p$ components. \end{Lemma} {\bf Proof.} It suffices to perform $\psi$-equivariant Dehn fillings on the boundary components $\partial E_p(K) \setminus \partial E(K)$ of $E_p(K)$ in such a way that the resulting manifold is a knot exterior $E(\hat K)$ and that the graph dual to the $JSJ$-decomposition of $E(\hat K)$ remains unchanged after filling (see the proof of Theorem \ref{thm:rotations}). Part (i) of Lemma \ref{lem:companion} then applies to the resulting knot $\hat K$ and the induced rotational symmetry. To apply Lemma \ref{lem:torus}, it suffices to note that, as in the proof of Claim \ref{claim:winding number}, the fillings can be chosen in such a way that the induced fillings on the quotient $E_p(K)/\langle \psi \rangle$ also give a solid torus (see Remark \ref{rem:lift}). \qed \begin{Remark}\label{rem:case b} In particular, case (b) of the proof of Claim \ref{claim:permutation} cannot happen for a torus $T$ in the situation of Lemma \ref{lem:partial companion}. \end{Remark} \begin{Lemma}\label{lem:vertex} Let $K$ be a prime knot and let $\psi$ be the partial symmetry of order $p$ induced on $E_p(K)$ by a $p$-twin. Let $T \subset \partial E_p(K) \setminus \partial E(K)$ be a torus which is $\psi$-invariant. Let $e_{T}$ be the corresponding edge in the tree dual to the $JSJ$-decomposition of $E_p(K)$. Let $v_K$ and $v_\psi$ be the vertices corresponding to the geometric pieces containing $\partial E(K)$ and $Fix(\psi)$ respectively. Then $v_\psi$ belongs to the unique geodesic joining $v_K$ to $e_{T}$ in this $JSJ$-tree. \end{Lemma} {\bf Proof.} If we cut ${\mathbb S}^3$ along a torus of the $JSJ$-collection of $E_p(K)$, the connected component which does not contain $K$ is a knot exterior and is thus contained in a ball in ${\mathbb S}^3$. If the conclusion of the Lemma were false, then we could find two tori of the $JSJ$-decomposition of $E_p(K)$ contained in two disjoint balls, one torus separating $Fix(\psi)$ from $K$ and the other coinciding with $T$ or separating it from $K$. In particular the linking number of $Fix(\psi)$ and a meridian of the solid torus bounded by $T$ (i.e. the winding number of $T$ with respect to $Fix(\psi)$) would be zero. This is impossible since $\psi$ leaves $T$ set-wise invariant. \qed \begin{Remark}\label{rem:adjacency} Lemma \ref{lem:vertex} has two interesting consequences. Since $h$ and $h'$ play symmetric roles, we deduce that $Fix(\psi)$ and $\partial E(K)$ must belong to the same geometric piece of the $JSJ$-decomposition of $E_p(K)$. This follows from the fact that, in $E_p(K') \cup {\mathcal U}(K')$, $Fix(\psi)$ maps to $K'$, $K$ maps to $Fix(\psi')$, and $T$ maps to a $\psi'$-invariant torus. Moreover, each invariant boundary torus $T$ is adjacent to the geometric component containing $Fix(\psi)$ and $K$; otherwise we would get a contradiction to Lemma \ref{lem:partial companion}.
\end{Remark} {\bf Proof of Proposition \ref{prop:twin}(i).} We argue by contradiction, assuming that there are a $p$-twin and a $q$-twin of $K$ which induce only partial symmetries of $E(K)$ for two distinct odd prime numbers $p$ and $q$. Then $\partial E_p(K)$ and $\partial E_q(K)$ are not empty. Moreover, we must have $E(K)\setminus E_p(K) \subset E_q(K)$ since the winding number along nested tori is multiplicative and thus the winding number of any $JSJ$-torus contained in $E(K)\setminus E_p(K)$ must be of the form $kp$ and cannot be $q$. In particular $\partial E_p(K) \setminus \partial E(K) \subset \text{int}(E_q(K))$. Let $T \subset \partial E_p(K) \setminus \partial E(K)$ be a torus and let $\psi$ be the $q$-symmetry with non-empty fixed-point set induced on $E_q(K)$ by the $q$-twin. Since the winding number of $T$ is $p$, its lift to the $q$-fold cyclic branched cover of $K$ is connected. According to part (i) of Lemma \ref{lem:companion} and to Lemmata \ref{lem:torus} and \ref{lem:partial companion}, $T$ must separate $\partial E(K)$ from $Fix(\psi)$. Since $Fix(\psi)$ is connected, we see that $\partial E_p(K) \setminus \partial E(K)$ must be connected as well, so that $\partial E_p(K) \setminus \partial E(K) = T$. The final contradiction is then reached by applying Remark \ref{rem:adjacency}. \qed {\bf Proof of Proposition \ref{prop:twin}(ii).} This is a consequence of the proof of part (i): note that in the proof $\psi$ may be a global or partial symmetry. \qed We are now in a position to prove Theorem \ref{thm:twins}. {\bf Proof of part (i) of Theorem \ref{thm:twins}.} We argue by contradiction, assuming that $K$ admits twins for three distinct odd prime numbers $p, q, r$. Under this assumption, it follows that $K$ is a non-trivial knot. If the three twins induce rotational symmetries of the knot $K$, then part (i) of Theorem \ref{thm:rotations} gives a contradiction. Therefore part (i) of Proposition \ref{prop:twin} implies that the twins of two of the orders, say $q$ and $r$, induce rotational symmetries $\psi_q$ and $\psi_r$ of $K$ having order $q$ and $r$ respectively, while a $p$-twin induces only a partial rotational symmetry of $E(K)$ of order $p$. Then part (ii) of Proposition \ref{prop:twin} shows that $\partial E_p(K) \setminus \partial E(K)$ is a $JSJ$-torus in $E(K)$ which separates $\partial E(K)$ from both $Fix(\psi_q)$ and $Fix(\psi_r)$. This contradicts part (ii) of Theorem \ref{thm:rotations}, which states that $Fix(\psi_q)$ and $Fix(\psi_r)$ must sit in the $JSJ$-component containing $\partial E(K)$. \qed {\bf Proof of part (ii) of Theorem \ref{thm:twins}.} Let $K$ be a prime knot and let $p$ be an odd prime number. We assume that $K$ has at least two non-equivalent $p$-twins $K_1$ and $K_2$ and look for a contradiction. Denote by $\psi_{1}$ and $\psi_{2}$ the symmetries of order $p$ (possibly partial) induced on $K$ by $K_1$ and $K_2$ respectively, as in Proposition \ref{prop:orbifold}. If both $\psi_{1}$ and $\psi_{2}$ are rotational symmetries of order $p$ of $K$, then by M. Sakuma \cite[Thm. 3]{Sa2} they are conjugate since $K$ is prime. This would contradict the hypothesis that the knots $K_1$ and $K_2$ are not equivalent. Assume now that at least one symmetry, say $\psi_{1}$, is partial. Then $\psi_{1}$ and $\psi_{2}$ are rotational symmetries of order $p$ of the submanifold $E_p(K) \subset E(K)$. Let $X_0$ be the geometric piece of the $JSJ$-decomposition of $E(K)$ containing $\partial E(K)$. Then $\psi_1$ (respectively $\psi_2$) generates a finite cyclic subgroup $G_1$ (respectively $G_2$) of the group $Diff^{+,+}(X_0, \partial E(K))$ of diffeomorphisms of the pair $(X_0, \partial E(K))$ which preserve the orientations of $X_0$ and of $\partial E(K)$.
Moreover, one can assume that $G_1$ and $G_2$ act geometrically on $X_0$. If $X_0$ admits a hyperbolic structure, it is a consequence of the proof of the Smith conjecture (see for example \cite[Lemma 2.2]{Sa2}) that the subgroup of $Diff^{+,+}(X_0, \partial E(K))$ consisting of restrictions of isometries of $X_0$ is finite cyclic. Hence $G_1 = G_2$ and, up to taking a power, $\psi_1 = \psi_2$ on $X_0$. If $X_0$ is Seifert fibred, then it must be a cable space, since $K$ is prime. The uniqueness of the Seifert fibration and the fact that the basis of the Seifert fibration has no symmetry of finite order imply that the cyclic groups $G_1$ and $G_2$ belong to the circle action $S^1 \subset Diff^{+,+}(X_0, \partial E(K))$ inducing the Seifert fibration of $X_0$, see \cite[Lemma 2.3]{Sa2}. Since $G_1$ and $G_2$ have the same prime order, again up to taking a power $\psi_1 = \psi_2$ on $X_0$. Let $h_1$ and $h_2$ be the deck transformations on $M$ associated to the $p$-fold cyclic coverings branched along $K_1$ and $K_2$, and which induce $\psi_1$ and $\psi_2$. Then, by taking suitable powers, $h_1$ and $h_2$ coincide up to conjugacy on the geometric piece $\widetilde X_0$ of the $JSJ$-decomposition of $M$ containing the preimage of $K$. The following lemma shows that they will coincide on $M$, contradicting our hypothesis. \qed \begin{Lemma}\label{cor:extension} If the covering transformations $h$ and $h'$ preserve a $JSJ$-piece or a $JSJ$-torus of $M$ and coincide on it, then they can be chosen, up to conjugacy, to coincide everywhere. \end{Lemma} {\bf Proof.} This is a consequence of the proofs of Propositions \ref{prop:subtree} and \ref{prop:orbifold}. We shall start by showing that we can always assume that there is a piece $V$ of the $JSJ$-decomposition on which $h$ and $h'$ coincide. To this purpose, assume that $h$ and $h'$ coincide only on a $JSJ$-torus $T$. According to Lemma \ref{lem:commutation} and Remark \ref{rem:same action}, $h$ and $h'$ coincide on the geometric pieces of the decomposition adjacent to $T$, which are also invariant. Consider now the maximal subtree $\Gamma_1$ of $\Gamma$ such that the restrictions of $h$ and $h'$ to the corresponding submanifold $M_1$ of $M$ coincide, up to conjugacy, and such that $V\subset M_1$. Let $S$ be a $JSJ$-torus for $M$ in the boundary of $M_1$. Since $h$ and $h'$ coincide on $M_1$, the $h$-orbit and the $h'$-orbit of $S$ coincide as well and consist of either one single element $\{S\}$ or $p$ elements $\{S,h(S)=h'(S),...,h^{p-1}(S)={h'}^{p-1}(S)\}$. In the former case, according to Lemma \ref{lem:commutation}, $\Gamma_1$ would not be maximal. In the latter case, we are precisely in the situation described in part (a) of Claim \ref{claim:permutation}. Once more, $\Gamma_1$ is not maximal because one can impose that $h$ and $h'$ act in the same way on the $p$ connected components with connected boundary obtained by cutting $M$ along the $\langle h,h'\rangle$-orbit of $S$ (see Remark \ref{rem:commutation}). This contradiction shows that $M=M_1$ and the lemma is proved. \qed {\bf Proof of part (iii) of Theorem \ref{thm:twins}.} First we analyse the case of a knot admitting two twins, one of which induces a partial symmetry. \begin{Proposition}\label{prop:partial} Let $K$ be a prime knot admitting a $p$-twin $K'$ and a $q$-twin $K''$ for two distinct odd prime numbers $p$ and $q$. If $K'$ induces a partial symmetry of $K$, then $K'$ and $K''$ are not equivalent.
\end{Proposition} {\bf Proof.} By part (ii) of Proposition \ref{prop:twin}, $E_p(K)$ has a unique boundary component which separates $\partial E(K)$ from the fixed-point set of the $q$-rotational symmetry $\psi$ induced by $K''$. By cutting ${\mathbb S}^3$ along $T = \partial E_p(K)$ we obtain a solid torus $V=E_p(K)\cup {\mathcal U}(K)$ containing $K$, and a knot exterior $E_T$. $K$ admits a $q$-rotational symmetry $\psi$ induced by $K''$ which preserves this decomposition and induces a $q$-rotational symmetry with trivial quotient (see Lemma \ref{lem:companion}) on $E_T$ and a free $q$-symmetry $\tilde\psi$ on $V$. The covering transformation for the knot $K'$ induces a $p$-symmetry $\varphi$ of $V$ with non-empty fixed-point set. Assume now by contradiction that $K'=K''$. Since $K'$ induces a partial symmetry of $K$ and vice versa, $S^3$ admits a decomposition into two pieces: $V'=E_p(K')\cup {\mathcal U}(K')$ and $E_T$. On the other hand, since $K''$ induces a genuine $q$-rotational symmetry of $K$, $K''$ admits a $q$-rotational symmetry $\psi''$ induced by $K$ which preserves the aforementioned decomposition and induces a $q$-rotational symmetry with trivial quotient on $E_T$. Using the fact that $E_T$ is the exterior of a prime knot (see Lemma \ref{lem:prime}) and M. Sakuma's result \cite[Thm. 3]{Sa2}, we see that the two $q$-rotational symmetries with trivial quotient induced by $\psi$ and $\psi''$ on $E_T$ act in the same way. Let now $E_0$ be the smallest knot exterior of the $JSJ$-decomposition of $E_T$ on which $\psi=\psi''$ induces a $q$-rotational symmetry with trivial quotient (this is obtained by cutting $E_T$ along the torus of the $JSJ$-decomposition closest to $Fix(\psi)$, respectively $Fix(\psi'')$, and separating it from $T$). Consider now the lift, denoted by $(X,{\mathcal K})$, to $(S^3,K'')$ of $(E_0, Fix(\psi))/\psi$. We claim that $(X,{\mathcal K})=(V',K')$. Indeed, $X$ contains $K''=K'$ by construction, and its boundary is the unique torus of the $JSJ$-decomposition which is left invariant by the $q$-rotational symmetry of $K''$, by construction again, and which is closest to $K''$ (compare Remark \ref{rem:adjacency}). Since $E_0/\psi=E_0/\psi''$, and a solid torus has a unique $q$-fold cyclic cover, we deduce that $(V',K')=(X,{\mathcal K})=(V,K)$. In particular, the deck transformations for $K$ and $K'$ on their common $p$-fold cyclic branched cover can be chosen to coincide on the lift of $V=V'$. Lemma \ref{cor:extension} implies that $K=K'$, contradicting the fact that $K'$ is a $p$-twin. \qed Let $K'$ be a $p$-twin and a $q$-twin of $K$ for two distinct odd prime numbers $p$ and $q$. Proposition \ref{prop:partial} implies that $K'$ induces two rotational symmetries $\psi_p$ and $\psi_q$ of $K$ with trivial quotients and orders $p$ and $q$. Part (ii) of Theorem \ref{thm:rotations} shows that the fixed-point sets $Fix(\psi_p)$ and $Fix(\psi_q)$ lie in the $JSJ$-component of $E(K)$ which contains $\partial E(K)$. Then the proof of part (iii) of Theorem \ref{thm:twins} follows from the following: \begin{Lemma}\label{lem:commuting symmetries} Let $K$ be a prime knot admitting two rotational symmetries $\psi$ and $\varphi$ of odd prime orders $p > q$. If the fixed-point sets of $\psi$ and $\varphi$ lie in the component which contains $\partial E(K)$, then the two symmetries commute up to conjugacy.
\end{Lemma} {\bf Proof.} Reasoning as in the proof of part (ii) of Theorem \ref{thm:twins}, one can show that $\psi$ and $\varphi$ commute on the component which contains $\partial E(K)$. Since all other components are freely permuted according to part (i) of Lemma \ref{lem:companion}, the conclusion follows as in the proof of part (a) of Claim \ref{claim:permutation}. \qed {\bf Proof of Corollary \ref{cor:composite}.} First of all note that, because of the uniqueness of the Milnor-Kneser decomposition of the covers of $K$ and $K'$, the number of prime summands of $K$ and $K'$ is the same. After discarding summands of $K$ and $K'$ that appear in both decompositions in equal number, we can assume that $K_i$ is not equivalent to $K'_\ell$, for all $i,\ell=1,...,t$. If $K$ and $K'$ have three common cyclic branched covers of odd prime orders, we deduce that for each $i=1,...,t$, $K_i$ is not determined by its $p_j$-fold cyclic branched cover, $j=1,2,3$, for it is also the $p_j$-fold cyclic branched cover of some $K'_{i_j}$ not equivalent to $K_i$. Hence $K_i$ would have twins for three distinct odd prime orders, which is impossible by Theorem \ref{thm:twins}. \qed \section{Examples} Examples of prime knots admitting a $p$-twin which induces a global rotational symmetry of order $p$ were first constructed by Y. Nakanishi \cite{Na} and M. Sakuma \cite{Sa1}. They considered a prime link with two trivial components whose linking number is $1$. By taking the $p$-fold cyclic cover of ${\mathbb S}^3$ branched along the first (respectively the second) component of the link one gets again ${\mathbb S}^3$, and the second (respectively first) component lifts to a prime knot. The two knots thus constructed have the same $p$-fold cyclic branched cover by construction (see also Remark \ref{rem:lift}); moreover, by computing their Alexander polynomials they were shown to be distinct. In \cite[Thm 3 and Cor. 1]{Zim1} B. Zimmermann showed that if a hyperbolic knot has a $p$-twin, for $p\ge 3$, then the $p$-twin induces a global symmetry and the two knots are thus obtained by Y. Nakanishi and M. Sakuma's construction, where the quotient link is hyperbolic and admits no symmetry which exchanges its two components. In fact, the links considered by Y. Nakanishi and M. Sakuma are hyperbolic, and so are the resulting twins if $p$ is at least $3$, according to the orbifold theorem \cite{BoP}, see also \cite{CHK}. Note that, when $p=2$, the situation, even in the case of hyperbolic knots, is much more complex and there are several ways to construct $2$-twins of a given knot. In this section we shall see how one can construct, for each given odd prime $p$, two prime, non-simple knots which are $p$-twins, and such that the symmetries they induce are not global. The first construction shows that the number $\nu$ of components of $\partial E_p(K)\setminus\partial E(K)$ can be arbitrarily large. This means that the situation encountered in Proposition \ref{prop:twin}(ii) is extremely special. The second construction shows that our result is indeed best possible even for prime knots with $p$-twins inducing partial symmetries: we shall construct prime knots admitting a $p$-twin inducing a partial symmetry and a $q$-twin inducing a global rotational symmetry.
\subsection{Knots admitting a $p$-twin inducing only a partial symmetry} Assume we are given a hyperbolic link $L=L_1\cup...\cup L_{\nu+2}$, with $\nu+2 \ge 3$ components, satisfying the following requirements: {\bf Property $*$} \begin{enumerate} \item The sublink $L_3\cup...\cup L_{\nu+2}$ is the trivial link; \item For each $i=1,2$ and $j=3,...,\nu+2$, the sublink $L_i\cup L_j$ is a Hopf link; \item ${\rm lk}(L_1,L_2)$ is prime to $p$; \item No symmetry of $L$ exchanges $L_1$ and $L_2$. \end{enumerate} We shall consider the orbifold ${\mathcal O}=({\mathbb S}^3,(L_1\cup L_2)_p)\setminus {\mathcal U}(L_3\cup...\cup L_{\nu+2})$, which is the $3$-sphere with singular set of order $p$ the (sub)link $L_1\cup L_2$ and an open tubular neighbourhood of the (sub)link $L_3\cup...\cup L_{\nu+2}$ removed. ${\mathcal O}$ is hyperbolic if $p\ge3$, and will represent the quotient of ${\mathcal O}_p=E_p(K)\cup{\mathcal U}(K)$ and ${\mathcal O}_p'=E_p(K')\cup{\mathcal U}(K')$ via the action of the partial $p$-symmetries. Indeed, to obtain ${\mathcal O}_p$ (respectively ${\mathcal O}_p'$) take the $p$-fold cyclic orbifold cover of $({\mathbb S}^3,(L_1\cup L_2)_p)\setminus {\mathcal U}(L_3\cup...\cup L_{\nu+2})$ which desingularises $L_2$ (respectively $L_1$). Observe that one can fix a longitude-meridian system on each boundary component of ${\mathcal O}$, induced by those of $L_i$, $i=3,\dots,\nu+2$. Note that, because of condition 4 of Property $*$, the two orbifolds ${\mathcal O}_p$ and ${\mathcal O}_p'$ with the fixed peripheral systems are distinct. Remark that ${\mathcal O}_p$ and ${\mathcal O}'_p$ can be obtained by the orbifold covers, analogous to those described above, of $({\mathbb S}^3,(L_1\cup L_2)_p)$ (which are topologically ${\mathbb S}^3$) by removing open regular neighbourhoods of the lifts of the components $L_3\cup...\cup L_{\nu+2}$. Note that these components lift to trivial components whose linking number with the lift of $L_i$, $i=1,2$, is precisely $p$, because of condition 2, and which form again a trivial link. For each $j=3,...,\nu+2$, choose a knot exterior $E({\mathcal K}_j)$ to be glued along the $j$-th boundary component of ${\mathcal O}_p$ and ${\mathcal O}'_p$ in such a way that a fixed longitude-meridian system on $E({\mathcal K}_j)$ is identified with the lift of the longitude-meridian system on the $j$-th boundary component of ${\mathcal O}$. The underlying spaces of the orbifolds ${\mathcal O}_p\cup_{j=3}^{\nu+2} E({\mathcal K}_j)$ and ${\mathcal O}'_p\cup_{j=3}^{\nu+2} E({\mathcal K}_j)$ are topologically ${\mathbb S}^3$ and it is easy to see that their singular sets are connected (see condition 3). The resulting knots have the same $p$-fold cyclic branched cover; however, since ${\mathcal O}_p$ and ${\mathcal O}'_p$ are distinct, they are not equivalent. \begin{Remark} Observe that we have just shown that the number of connected components of $\partial E_p(K)\setminus\partial E(K)$, which is precisely $\nu$, can be arbitrarily large. Note also that if $\nu\ge2$, according to Proposition \ref{prop:twin}, the knot $K$ has no $q$-twins for any odd prime $q\neq p$. \end{Remark} We shall now prove that links with Property $*$ exist. Notice that for $\nu=1$ links satisfying all the requirements were constructed by Zimmermann in \cite{Zim2}, see also \cite{Pao1}.
\begin{figure} \caption{The link $L$ and its Bonahon-Siebenmann decomposition.} \label{fig:one} \end{figure} Consider the link given in Figure \ref{fig:one} for $\nu=3$ (the generalization for arbitrary $\nu\ge1$ is obvious). Most conditions are readily checked just by looking at the figure, and we only need to show that $L$ is hyperbolic and has no symmetries which exchange $L_1$ and $L_2$. To this purpose, we shall describe the Bonahon-Siebenmann decomposition of the orbifold $({\mathbb S}^3,(L)_2)$, where all components have ${\mathbb Z}/2{\mathbb Z}$ as local group. The decomposition consists of a single hyperbolic piece (see Figure \ref{fig:one}) and $\nu+1$ (respectively $1$) Seifert fibred pieces if $\nu\ge2$ (respectively $\nu=1$). Since the Seifert fibred pieces contain no incompressible torus, the hyperbolicity of $L$ follows. Note now that every symmetry of $L$ must leave invariant the unique hyperbolic piece of the decomposition. This piece is obtained by quotienting the hyperbolic knot $10_{155}$ via its full symmetry group ${\mathbb Z}/2{\mathbb Z}\oplus{\mathbb Z}/2{\mathbb Z}$ and thus has no symmetries (for more details see \cite{Pao1}), so we conclude that the components $L_1$ and $L_2$ are not exchangeable. \subsection{Knots admitting a $p$-twin inducing a partial symmetry and a $q$-twin inducing a global symmetry} Let ${\mathcal K}$ be a hyperbolic knot admitting a $p$-twin and a $q$-twin; the twins of ${\mathcal K}$ induce global symmetries, so that ${\mathcal K}$ admits a $p$- and a $q$-rotational symmetry with trivial quotient (see \cite{Zim2}, where a method to construct hyperbolic knots with two twins is described). Remove a tubular neighbourhood of the axis of the symmetry of order $q$ (note that the two symmetries have disjoint axes), and use the resulting solid torus $V$ to perform Dehn surgery on the exterior $E$ of the $(2,q)$-torus knot. Denote by $K$ the image of ${\mathcal K}$ after surgery. We require that: \begin{enumerate} \item The resulting manifold is ${\mathbb S}^3$; \item The $q$-rotational symmetry of $E$ and the restriction of the $q$-rotational symmetry of ${\mathcal K}$ to $V$ give a global $q$-rotational symmetry of $K$; \item The $q$-rotational symmetry of $K$ has trivial quotient. \end{enumerate} \begin{figure} \caption{Satellising so that the induced rotation has trivial quotient.} \label{fig:two} \end{figure} Note that the last requirement can be met by choosing the longitude appropriately when satellising, as illustrated in Figure \ref{fig:two}. We claim that $K$ admits a $q$-twin, $K''$, and a $p$-twin, $K'$. $K''$ is obtained by the standard method described in Remark \ref{rem:lift}. Note that $K\neq K''$, for the roots of the $JSJ$-decompositions of the exteriors of $K$ and $K''$ are hyperbolic and Seifert fibred respectively. To construct $K'$, consider the $p$-twin ${\mathcal K}'$ of ${\mathcal K}$ and let $V'$ be the solid torus obtained by removing the axis of the $q$-rotational symmetry of ${\mathcal K}'$. Note that $V$ and $V'$ have a common quotient obtained by taking the space of orbits of the $p$-rotational symmetries; however, $V$ and $V'$ are different orbifolds by construction. Fix a longitude-meridian system on $V$ (the one used for the surgery): by first quotienting and then lifting it, one gets a longitude-meridian system on $V'$ that must be used to perform surgery along a copy of $E$. The image of ${\mathcal K}'$ after the surgery will be $K'$.
Note that, when taking the $p$-fold cyclic branched covers of $K$ and $K'$, the hyperbolic orbifolds $V$ and $V'$ lift to the same manifold by construction, while the Seifert fibred part lifts, in both cases, to $p$ copies of $E$. Again by construction, the gluings are compatible and the two covers coincide. It is also evident that $K'$ can only induce a partial symmetry of $K$, and the claim is proved. \begin{Remark} Note that according to Proposition \ref{prop:partial} the $p$-twins and $q$-twins obtained in this construction cannot be equivalent. \end{Remark} \section{Homology spheres as cyclic branched covers} By the proof of the Smith conjecture, Corollary \ref{cor:homologysphere} is true for the $3$-sphere $S^3$. So from now on we assume that the integral homology sphere $M$ is not homeomorphic to $S^3$. Then by \cite[Thm 1]{BPZ}, $M$ can be a $p_i$-fold cyclic branched cover of ${\mathbb S}^3$ for at most three pairwise distinct odd prime numbers $p_i$. Moreover, if $M$ is irreducible and is the $p_i$-fold cyclic branched cover of ${\mathbb S}^3$ for three pairwise distinct odd prime numbers $p_i$, then the proof of \cite[Corollary 1.(i)]{BPZ} shows that for each prime $p_i$, $M$ is the $p_i$-fold cyclic branched cover of precisely one knot. Since a knot admits at most one $p$-twin for an odd prime integer $p$, we only need to consider the case when the irreducible integral homology sphere $M$ is the branched cover of ${\mathbb S}^3$ for precisely two distinct odd primes, say $p$ and $q$. Moreover, \cite[Corollary 1.(ii)]{BPZ} shows that $M$ has a non-trivial $JSJ$-decomposition. Looking for a contradiction, we can assume that, for each prime, $M$ is the branched covering of two distinct knots with covering transformations $\psi$, $\psi'$ of order $p$ and $\varphi$, $\varphi'$ of order $q$. If each rotation of order $p$ commutes with each rotation of order $q$ up to conjugacy, then the contradiction follows from the following claim, which is an easy consequence of Sakuma's result \cite[Thm. 3]{Sa2} (see \cite[Claim 8]{BPZ}). \begin{Claim}\label{claim:unique symmetry} Let $n\ge3$ be a fixed odd integer. Let $\rho$ be a rotation with trivial quotient of an irreducible manifold $M$. All the rotations of $M$ of order $n$ which commute with $\rho$ are conjugate in $Diff(M)$ into the same cyclic group of order $n$. \qed \end{Claim} Otherwise, consider the subgroup $G=\langle \psi, \psi', \varphi, \varphi' \rangle$ of diffeomorphisms of $M$. According to the proof of \cite[Proposition 4]{BPZ}, each rotation of order $p$ commutes with each rotation of order $q$ up to conjugacy, unless the induced action of $G$ on the dual tree of the $JSJ$-decomposition for $M$ fixes precisely one vertex corresponding to a hyperbolic piece $V$ of the decomposition and $\{p,q\}=\{3,5\}$. In this case, one deduces as in the proof of \cite[Corollary 1.(ii)]{BPZ} that the restrictions of $\psi$ and $\psi'$ (respectively $\varphi$ and $\varphi'$) coincide up to conjugacy on $V$. Then the desired contradiction follows from Lemma \ref{cor:extension}, which implies that $\psi$ and $\psi'$ (respectively $\varphi$ and $\varphi'$) coincide up to conjugacy on $M$. \qed \begin{footnotesize} {\textsc{Laboratoire Emile Picard (UMR 5580 du CNRS)}} {\textsc{Universit\'e Paul Sabatier}} {\textsc{118 route de Narbonne}} {\textsc{31062 Toulouse cedex 4, France}} {\tt{[email protected]}} {\textsc{I.M.B.(UMR 5584 du CNRS)}} {\textsc{Universit\'e de Bourgogne}} {\textsc{B.P. 47 870}} {\textsc{9 av.
Alain Savary}} {\textsc{21078 Dijon cedex, France}} {\tt{[email protected]}} \end{footnotesize} \end{document}
\begin{document} \title{Tensor network states and geometry} \author{G. Evenbly$^{1}$, G. Vidal} \affiliation{School of Mathematics and Physics, the University of Queensland, Brisbane 4072, Australia} \affiliation{Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5 Canada} \date{\today} \begin{abstract} Tensor network states are used to approximate ground states of local Hamiltonians on a lattice in $D$ spatial dimensions. Different types of tensor network states can be seen to generate different geometries. Matrix product states (MPS) in $D=1$ dimensions, as well as projected entangled pair states (PEPS) in $D>1$ dimensions, reproduce the $D$-dimensional physical geometry of the lattice model; in contrast, the multi-scale entanglement renormalization ansatz (MERA) generates a $(D+1)$-dimensional holographic geometry. Here we focus on homogeneous tensor networks, where all the tensors in the network are copies of the same tensor, and argue that certain structural properties of the resulting many-body states are preconditioned by the geometry of the tensor network and are therefore largely independent of the choice of variational parameters. Indeed, the asymptotic decay of correlations in homogeneous MPS and MERA for $D=1$ systems is seen to be determined by the structure of geodesics in the physical and holographic geometries, respectively; whereas the asymptotic scaling of entanglement entropy is seen to always obey a simple boundary law -- that is, again in the relevant geometry. This geometrical interpretation offers a simple and unifying framework to understand the structural properties of, and helps clarify the relation between, different tensor network states. In addition, it has recently motivated the branching MERA, a generalization of the MERA capable of reproducing violations of the entropic boundary law in $D>1$ dimensions. \end{abstract} \pacs{03.67.-a, 03.65.Ud, 02.70.-c, 05.30.Fk} \maketitle \section{Introduction} \label{sect:intro} In recent years tensor network states \cite{Fannes92,Ostlund95,Rommer97,Perez-Garcia07, White92, White93, Schollwoeck05, Schollwoeck11, Vidal03, Vidal04, Daley04, White04, Shi06, Alba11, Vidal07, Vidal08, Evenbly09, Giovannetti08, Pfeifer09, Vidal10, Verstraete04, Sierra98, Nishino98, Nishio04, Murg07, Jordan08, Gu08, Jiang08, Xie09, Murg09, Tagliacozzo09, Murg10, Evenbly10, Evenbly10b, Aguado08, Cincio08, Evenbly09b, Koenig09, Evenbly10c, Corboz10, Kraus10, Pineda10, Corboz09, Barthel09, Shi09, Li10, Corboz10b, Pizorn10, Gu10, Corboz10c} have emerged as an important theoretical tool to investigate quantum many-body systems. They offer a novel conceptual framework to describe and classify the possible phases of matter\cite{Pollmann09,Chen11,Schuch10,Chen11b}. At the same time, as variational ans\"atze, tensor network states are the basis of numerical approaches to quantum many-body problems. The simplest and best known tensor network state is the matrix product state (MPS)\cite{Fannes92,Ostlund95,Rommer97,Perez-Garcia07}. The MPS is at the core of the extraordinary success of White's density matrix renormalization group (DMRG) \cite{White92, White93, Schollwoeck05, Schollwoeck11}, which for almost twenty years has dominated numerical research in one-dimensional lattice models, such as quantum spin chains, providing extremely accurate ground state properties.
The MPS is also used to simulate dynamics with the time-evolving block decimation (TEBD)\cite{Vidal03, Vidal04, Daley04, White04} algorithm and variations thereof, often referred to as time-dependent DMRG. Other tensor network states for one-dimensional systems include the tree tensor network (TTN)\cite{Shi06, Alba11} and the multi-scale entanglement renormalization ansatz (MERA)\cite{Vidal07, Vidal08, Evenbly09, Giovannetti08, Pfeifer09, Vidal10}, with the latter being particularly successful at describing ground states at quantum critical points. Each of the above tensor network states for $D=1$ dimensional systems has a natural generalization in $D>1$ dimensions. The projected entangled-pair state (PEPS)\cite{Verstraete04, Sierra98, Nishino98, Nishio04, Murg07, Jordan08, Gu08, Jiang08, Xie09, Murg09} generalizes the MPS, whereas $D>1$ versions of TTN\cite{Tagliacozzo09, Murg10} and MERA \cite{Evenbly10, Evenbly10b, Aguado08, Cincio08, Evenbly09b, Koenig09, Evenbly10c} also exist. Among those generalizations, PEPS and MERA stand out for offering efficient representations of many-body wave functions, thus leading to scalable simulations in $D>1$ dimensions; and, importantly, for also being able to address systems that are beyond the reach of quantum Monte Carlo approaches due to the so-called sign problem, including frustrated spins\cite{Murg09,Evenbly10c} and interacting fermions\cite{Corboz10, Kraus10, Pineda10, Corboz09, Barthel09, Shi09, Li10, Corboz10b, Pizorn10, Gu10, Corboz10c}. An attractive feature of tensor network states is that they are largely unbiased variational ans\"atze, in the sense that they are capable of representing many different types of ground states through a proper choice of variational parameters, as clearly witnessed by two decades of MPS explorations with DMRG\cite{White92,White93,Schollwoeck05,Schollwoeck11}. By increasing the bond dimension $\chi$ of the MPS\cite{Vidal03,Perez-Garcia07}, which governs the size of its tensors and therefore the number of variational parameters, more entanglement can be reproduced and a more accurate approximation to the ground state of a lattice model is obtained. As a matter of fact, an MPS can exactly reproduce any many-body state of the system provided that the bond dimension $\chi$ is large enough\cite{Vidal03, Perez-Garcia07}, although this will typically require a prohibitively large value of $\chi$, namely a value exponentially large in the system size -- leading to an inefficient representation. What makes the MPS interesting is that some moderately small value of $\chi$ is often already capable of accurately approximating the ground state of a local Hamiltonian in $D=1$ dimensions\cite{White92, White93, Schollwoeck05, Schollwoeck11}, as recently clarified by Hastings\cite{Hastings07,Hastings07b}. For instance, for most gapped Hamiltonians, an accurate MPS approximation is obtained already with some finite bond dimension $\chi$ that depends on $H$ but is essentially independent of the size of the system. For critical (and thus gapless) systems much of the same is true of the MERA\cite{Vidal07, Vidal08, Evenbly09, Giovannetti08, Pfeifer09}, which with a fixed bond dimension $\chi$ is capable of accurately reproducing large-scale properties of the ground state, such as the asymptotic scaling of two-point correlators and of entanglement entropy.
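[A heuristic way to see why a modest $\chi$ can suffice for gapped chains, added here for orientation and not part of the original argument: each bond index of dimension $\chi$ can contribute at most $\log_2 \chi$ to the entanglement entropy across the corresponding cut (cf. Eq. \ref{eq:upperBound} below), so a gapped $D=1$ ground state whose block entropy saturates at a constant value is heuristically within reach of an MPS whose bond dimension is exponential in that constant but independent of the system size.]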
Tensor network states in $D>1$ dimensions, such as PEPS\cite{Verstraete04, Sierra98, Nishino98, Nishio04, Murg07, Jordan08, Gu08, Jiang08, Xie09, Murg09} and MERA\cite{Evenbly10, Evenbly10b, Aguado08, Cincio08, Evenbly09b, Koenig09, Evenbly10c}, are also thought to be capable of accurately describing a large variety of ground states. This is supported both by growing numerical evidence and by the existence of analytical MERA\cite{Aguado08, Koenig09} and PEPS\cite{Buerschaper09,Gu09} constructions for some topologically ordered ground states in $D=2$ dimensions. However, a sharp increase of computational costs with bond dimension $\chi$ implies that, in practice, simulations in $D>1$ dimensions are restricted to small values of $\chi$. This restriction implies favouring weakly entangled states over more robustly entangled states. As a result, unless specific preventive measures are taken, tensor network states in $D>1$ dimensions may artificially favour local order over topological order, or may effectively open a gap in a critical system. It is therefore important to understand how a finite bond dimension preconditions the properties of a given tensor network ansatz. The goal of our review paper is to collect together a number of previous results in this direction, which can be found scattered through the literature, and to present them under a simple and unifying framework. We consider homogeneous tensor network states with a finite bond dimension $\chi$ and look into those structural properties (in practice, scaling of correlators and of entanglement entropy) that follow from the way the tensors are connected into a network -- and are therefore independent of the choice of variational parameters. Those properties can be directly associated with properties of the (discrete) geometry reproduced by the network. A main merit of this presentation is that it exposes, in very simple geometric terms, the main structural differences between the MPS and the scale invariant MERA in $D=1$ dimensions, while also explaining why these two tensor network states are natural ans\"atze to represent ground states of gapped and gapless Hamiltonians, respectively. The situation turns out to be much less clear-cut in $D>1$ dimensions, where not only the scale invariant MERA but also PEPS can describe certain types of gapless ground states; in addition, both PEPS and MERA fail to describe another type of more robustly entangled, gapless ground states. A second merit of our presentation is that it allows us to express, again in simple geometric terms, the limitations experienced by PEPS and MERA in $D>1$ dimensions, while setting the stage to explain how to overcome these limitations, namely by using tensor networks corresponding to more sophisticated geometries, as recently proposed in Ref. \onlinecite{Evenbly11}. The rest of the paper is divided into sections as follows. Section \ref{sect:GS} briefly reviews the typical behaviour of correlations and entanglement entropy in ground states of local Hamiltonians. In Sect. \ref{sect:TN} we argue that a key difference between MPS/PEPS and MERA is in the geometry that these tensor network states describe. MPS and PEPS reproduce the $D$-dimensional \textit{physical} geometry of the lattice model, whereas the MERA describes a $(D+1)$-dimensional \textit{holographic} geometry, with the additional dimension corresponding to length scale. Then Sect.
\ref{sect:correlation} points out that the decay of correlations in the MPS and the scale invariant MERA can be regarded as following from the structure of geodesics in the physical and holographic geometries, respectively. Similarly, Sect. \ref{sect:entropy} argues that the scaling of entanglement entropy in the MPS and the scale invariant MERA can be understood to follow from a simple boundary law in some appropriate region of the physical and holographic geometries. Sect. \ref{sect:gapped} considers the holographic geometry of ground states of gapped systems and a related tensor network state, the finite range MERA, which in some sense interpolates between the MPS and the scale invariant MERA. Finally, Sect. \ref{sect:discussion} argues that thinking about tensor network states in terms of geometry offers more than just an attractive way of presenting previously known results. As demonstrated by the recent proposal of the branching MERA, it can also stimulate further progress in the field. Specifically, the restriction experienced by PEPS and MERA to obeying the boundary law for entanglement entropy in $D>1$ dimensions is overcome by considering tensor network states that reproduce a $(D+1)$-dimensional holographic geometry with a more sophisticated structure, including branching in the length scale direction. \section{Ground states of local Hamiltonians} \label{sect:GS} Let $\mathcal{L}$ denote a lattice in $D$ spatial dimensions made of $N$ sites, where each site is described by a complex vector space $\mathbb{V}$, and let $H:\mathbb{V}^{\otimes N} \rightarrow \mathbb{V}^{\otimes N}$, with $H^{\dagger} = H$, be a Hamiltonian that decomposes as a sum of terms, with each term involving only a few neighbouring sites. We refer to any such Hamiltonian, made of short-range interactions, as a \textit{local} Hamiltonian. The ground state $\ket{\Psi_{\mbox{\tiny GS}}} \in \mathbb{V}^{\otimes N}$ of $H$ is the state that minimizes the expectation value $\bra{\Psi}H\ket{\Psi}$. Typically, the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of a given local Hamiltonian $H$ has a number of structural properties that are common to most ground states of local Hamiltonians. Here we consider two such structural properties: the behaviour of two-point correlation functions and the scaling of entanglement entropy. \subsection{Correlations} \label{sect:GS:corr} Let us first consider the two-point correlator \begin{equation} C(x_1,x_2) \equiv \langle P_{x_1} Q_{x_2} \rangle - \langle P_{x_1} \rangle \langle Q_{x_2} \rangle, \end{equation} where $P_{x_1}$ and $Q_{x_2}$ denote two local operators acting on (a neighbourhood of) sites $x_1,x_2\in \mathcal{L}$, and $\langle O \rangle$ stands for the ground state expectation value $\bra{\Psi_{\mbox{\tiny GS}}} O\ket{\Psi_{\mbox{\tiny GS}}}$. In most ground states of local Hamiltonians, $C(x_1,x_2)$ decays with the distance between positions $x_1$ and $x_2$. In the limit of large distances, this happens in one of two characteristic ways. When the Hamiltonian $H$ is gapped, correlations decay exponentially\cite{Hastings04}, \begin{equation} C(x_1,x_2) \approx e^{-|x_1-x_2|/\xi}, \label{eq:Cgap} \end{equation} where $\xi\geq 0$ denotes a correlation length. Instead, when the Hamiltonian $H$ is gapless, correlations decay polynomially\cite{Sachdev99}, \begin{equation} C(x_1,x_2) \approx |x_1-x_2|^{-q}, \label{eq:Ccrit} \end{equation} where $q\geq 0$ is some exponent.
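[As a purely illustrative comparison, with numbers that are ours and not from the original text: taking $\xi = 8$ in Eq. \ref{eq:Cgap} and $q=2$ in Eq. \ref{eq:Ccrit}, at separation $|x_1-x_2|=64$ the two forms give $e^{-8}\approx 3\times 10^{-4}$ and $64^{-2}\approx 2.4\times 10^{-4}$, which are comparable, whereas at separation $256$ they give $e^{-32}\approx 10^{-14}$ versus $256^{-2}\approx 1.5\times 10^{-5}$; at large distances the exponential decay is thus dramatically faster.]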
In this paper we refer to gapped systems also as non-critical systems, and to gapless systems as critical systems. A gapped/non-critical system that is close to a critical point may display two-point correlations that decay according to the more refined, combined form \cite{DiFrancesco97} \begin{equation} C(x_1,x_2) \approx \frac{e^{-|x_1-x_2|/\xi}}{|x_1-x_2|^{q}}, \label{eq:Cmixed} \end{equation} which is dominated by the polynomial decay for distances smaller than the correlation length, $|x_1-x_2| \ll \xi$, and reduces to the exponential decay of Eq. \ref{eq:Cgap} at larger distances, $|x_1-x_2| \gg \xi$. \subsection{Entanglement entropy} \label{sect:GS:entropy} The amount of entanglement between a region $A$ of the lattice and the rest of the system can be measured by the entanglement entropy \begin{equation} S(A) \equiv - \mbox{tr} \left(\rho_A \log_2 \rho_A\right), \end{equation} where $\rho_A$ is the reduced density matrix for region $A$, obtained from the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ by tracing out the rest of the system, denoted $B$, \begin{equation} \rho_A \equiv \mbox{tr}_{B} \proj{\Psi_{\mbox{\tiny GS}}}. \label{eq:rhoA} \end{equation} In $D$ spatial dimensions, most ground states of local Hamiltonians obey a boundary law (often also referred to as ``area law'') for entanglement entropy\cite{Srednicki93, Latorre04, Plenio05, Bravyi06, Eisert06, Hastings07c, Masanes09, Eisert10}, in the sense that the entanglement entropy of a hyperblock $A$ of $L^{D}$ sites scales as the size $|\partial A|$ of its boundary $\partial A$, \begin{equation} S(A) \approx |\partial A| \approx L^{D-1},~~~~~~\mbox{(boundary law)} \label{eq:boundaryLaw} \end{equation} instead of scaling as the size $|A| = L^{D}$ of the bulk of the block $A$. However, there are also ground states that display logarithmic corrections to the above boundary law \cite{Holzhey94, Callan94, Fiola94, Vidal03b, Wolf06, Gioev06, Li06, Barthel06, Swingle10, Swingle09b, Motrunich07, Senthil08, Liu09}, \begin{equation} S(A) \approx L^{D-1}\log_2 (L). \label{eq:LogD} \end{equation} Specifically, in $D=1$ dimensions, gapped systems obey a boundary law, which in this case means that the entanglement entropy saturates to a constant $S_0$ as a function of the block size $L$, \begin{equation} S(A) \leq S_0. \label{eq:1Dgap} \end{equation} Instead, critical systems display a logarithmic correction to the boundary law, \begin{equation} S(A) \approx \frac{c}{3} \log_2 (L), \label{eq:1Dcrit} \end{equation} where $c$ is the central charge of the corresponding CFT. Near criticality, the entropy grows with $L$ as in Eq. \ref{eq:1Dcrit} until the saturation constant $S_0$ is reached, with the latter scaling with the correlation length $\xi$ as \begin{equation} S_0 \approx \frac{c}{3} \log_2 (\xi). \label{eq:1Dmixed} \end{equation} The scalings of Eqs. \ref{eq:1Dgap}-\ref{eq:1Dmixed} were first presented in Ref. \onlinecite{Vidal03b} for specific quantum spin chains and are consistent with previous entropy calculations in quantum field theory by Holzhey et al. \cite{Holzhey94}, Callan and Wilczek \cite{Callan94} and Fiola et al. \cite{Fiola94}. Subsequently, Jin and Korepin\cite{Jin04} formalized the quantum spin chain results, whereas Cardy and Calabrese\cite{Calabrese04, Calabrese05} formalized and generalized the quantum field theory calculations.
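[A concrete and well-known instance, quoted here for orientation rather than taken from the original text: the critical transverse-field Ising chain has central charge $c=1/2$, so Eq. \ref{eq:1Dcrit} predicts $S(A)\approx \frac{1}{6}\log_2 (L)$; doubling the block size increases the entanglement entropy by only $1/6$ of a bit, which is the kind of slow growth that a finite bond dimension can track over a wide range of block sizes.]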
In $D>1$, the relation between the scaling of entanglement entropy and the existence of a gap in $H$ is less clear-cut. For instance, the study of possible scalings of entanglement entropy in the ground state of systems of free fermions shows that the boundary law is obeyed by gapped systems, as expected, but also by a class of critical systems (namely, systems with a Fermi surface of dimension $\Gamma$ smaller than $D-1$), whereas a second class of critical systems (with a Fermi surface of dimension $\Gamma = D-1$) displays logarithmic multiplicative corrections to the boundary law\cite{Wolf06, Gioev06, Li06, Barthel06}, see Table \ref{table:fermions}. Such logarithmic corrections are also believed to be present in other gapless systems in $D>1$ dimensions, such as Fermi liquids and spin Bose metals\cite{Swingle10, Swingle09b, Motrunich07, Senthil08, Liu09}. \begin{table}[!htb] \begin{tabular}{|lcccc|} \hline & $~\textrm{gapped}$ $\; \; \; \; \; \; \; $ & $\Gamma=0$ $\; \; \; \; \; \; \; $ & $\Gamma=1$ $\; \; \; \; \; \; \;$ & $\Gamma=2$ $\; \; \; \; \; \; \; $ \\ \hline D=1 & $S_0$ & $\log_2 (L)$ & - & - \\ D=2 & $L$ & $L$ & $L \log_2 (L)$ & - \\ D=3 & $L^2$ & $L^2$ & $L^2$ & $L^2 \log_2 (L)$ \\ \hline \end{tabular} \caption{Scaling of entanglement entropy $S(A)$ of a block $A$ made of $L^D$ sites for the ground state of free fermion models on a $D$-dimensional lattice and with a $\Gamma$-dimensional Fermi surface.} \label{table:fermions} \end{table} We emphasize that the above simplified characterizations of correlations and entanglement entropy touch only on those aspects that are needed for subsequent analysis, and ignore a number of other important results. For instance, there are additive corrections to the boundary law for entanglement entropy in topological phases, known as topological entanglement entropy\cite{Kitaev06, Levin06}. Although both MERA\cite{Aguado08, Koenig09} and PEPS\cite{Buerschaper09, Gu09} can describe topologically ordered systems and in particular account for the topological entanglement entropy, the latter is only an additive correction to the scaling of entanglement entropy and as such will play no role in our discussions. \section{Geometry of tensor network states} \label{sect:TN} \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{geometries.eps} \caption{ (Color online) A local Hamiltonian on a $D$-dimensional lattice defines a discrete $D$-dimensional geometry. To each ground state we can attach a $(D+1)$-dimensional geometry, where the additional dimension corresponds to length scale. } \label{fig:geometries} \end{center} \end{figure} A tensor network state expresses the wave-function of a lattice system as a collection of tensors (i.e. multi-dimensional arrays of complex coefficients) that are connected according to a network pattern. We refer to the abundant literature for further details\cite{Fannes92,Ostlund95,Rommer97,Perez-Garcia07, White92, White93, Schollwoeck05, Schollwoeck11, Vidal03, Vidal04, Daley04, White04, Shi06, Alba11, Vidal07, Vidal08, Evenbly09, Giovannetti08, Pfeifer09, Vidal10, Verstraete04, Sierra98, Nishino98, Nishio04, Murg07, Jordan08, Gu08, Jiang08, Xie09, Murg09, Tagliacozzo09, Murg10, Evenbly10, Evenbly10b, Aguado08, Cincio08, Evenbly09b, Koenig09, Evenbly10c, Corboz10, Kraus10, Pineda10, Corboz09, Barthel09, Shi09, Li10, Corboz10b, Pizorn10, Gu10, Corboz10c}.
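To make the notion of a tensor network state concrete, the following minimal sketch (ours, not part of the original paper; all parameter values and names are purely illustrative) evaluates a single wave-function coefficient of a homogeneous open-boundary MPS by multiplying, from left to right, the matrices selected by the physical indices:
\begin{verbatim}
import numpy as np

# Illustrative parameters (not from the paper): bond dimension chi,
# physical dimension d, and N lattice sites.
chi, d, N = 3, 2, 8
rng = np.random.default_rng(0)

A = rng.standard_normal((d, chi, chi))   # repeated tensor: A[s] is a chi x chi matrix
vL = rng.standard_normal(chi)            # left boundary vector
vR = rng.standard_normal(chi)            # right boundary vector

def mps_amplitude(spins):
    """Coefficient <s_1 ... s_N | Psi> of the homogeneous open-boundary MPS."""
    M = np.eye(chi)
    for s in spins:                      # contract the chain site by site
        M = M @ A[s]
    return vL @ M @ vR

print(mps_amplitude([0, 1, 0, 0, 1, 1, 0, 1]))
\end{verbatim}
Increasing the bond dimension enlarges the matrices, and hence the amount of entanglement the state can carry, at the price of more variational parameters.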
In this paper we are concerned with the use of a tensor network state as an efficient, approximate representation of the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of a local Hamiltonian $H$ on a $D$-dimensional lattice. For concreteness, we consider square (or hypercubic) lattices, although most considerations can be easily generalised to other types of lattices. There are two \textit{geometries} that are relevant to the problem of representing a ground state, and most of the existing tensor network states can be broadly classified according to which of these two geometries their networks reproduce, see Fig. \ref{fig:geometries}. \begin{figure}[!tb] \begin{center} \includegraphics[width=7cm]{MPS-PEPS.eps} \caption{ (Color online) (i) Matrix product state (MPS) for the ground state of a local Hamiltonian $H$ in a one-dimensional lattice. The tensors are connected according to a one-dimensional array, in correspondence with the one-dimensional physical geometry dictated by the interactions in $H$. (ii) Projected entangled pair state (PEPS) for the ground state of a two-dimensional lattice. The tensors are connected into a network that reproduces the two-dimensional physical geometry. } \label{fig:MPS-PEPS} \end{center} \end{figure} \subsection{Physical geometry} \label{sect:TN:physical} First of all, there is the geometry generated by the pattern of interactions in $H$, which we refer to as \textit{physical geometry}. In a $D$-dimensional lattice $\mathcal{L}$, a short-ranged Hamiltonian $H$ connects neighbouring sites of $\mathcal{L}$. That is, two sites are close to each other in the physical geometry if and only if they are also close in the lattice $\mathcal{L}$. Therefore the physical geometry is also $D$-dimensional and essentially equivalent to the lattice $\mathcal{L}$ itself. By definition, the physical geometry only depends on the pattern of interactions in Hamiltonian $H$ and is insensitive to any further details, such as whether $H$ is gapped or gapless. In particular, the physical geometry is also largely independent of specific structural properties of its ground state $\ket{\Psi_{\mbox{\tiny GS}}}$, e.g. decay of correlations or scaling of entanglement entropy. An important class of tensor network states consists of collections of tensors connected into a network that reproduces the physical geometry. For instance, an MPS reproduces the one-dimensional physical geometry of a spin chain, whereas PEPS reproduces the $D$-dimensional physical geometry in lattice models in $D>1$ spatial dimensions, see Fig. \ref{fig:MPS-PEPS}. If we regard the lattice model as a discretization of continuous space, then an infinite MPS describes a discretized version of the line, and an infinite PEPS in $D>1$ dimensions describes a discretized version of a $D$-dimensional hyperplane. \begin{figure}[!tb] \begin{center} \includegraphics[width=7cm]{MERA.eps} \caption{ (Color online) Multi-scale entanglement renormalization ansatz (MERA) for the ground state of a local Hamiltonian $H$ in a one-dimensional lattice. The tensors form a two-dimensional \textit{holographic geometry}. The horizontal direction reproduces the spatial dimension of the lattice model, whereas the vertical direction corresponds to the different length scales that are relevant to describing the structure of entanglement in the ground state of the system. More generally, the MERA for a system in $D$ dimensions spans a holographic geometry in $D+1$ dimensions.
} \label{fig:MERA} \end{center} \end{figure} \subsection{Holographic geometry} \label{sect:TN:holographic} A second geometry is given by the pattern of entanglement in the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of $H$. This pattern is naturally described by incorporating an additional dimension into the $D$-dimensional physical geometry. This additional dimension is associated with length scale (equivalently, energy scale), in the spirit of the holographic principle\cite{Maldacena98,Gubser98,Witten98,Swingle09}. Here we refer to the $(D+1)$-dimensional geometry generated in this way by the entanglement in the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ as the \textit{holographic geometry} of $\ket{\Psi_{\mbox{\tiny GS}}}$, and use the scale parameter $z$, \begin{equation} z \equiv \log_2 \lambda, \end{equation} where $\lambda$ is a length scale, to label it. The physical geometry corresponds to setting $z=0$. In the MERA, tensors are connected so as to reproduce the holographic geometry. For instance, the MERA for a one-dimensional system, Fig. \ref{fig:MERA}, spans two dimensions, thus describing a discrete, two-dimensional holographic geometry, with tensors labeled by a space coordinate $x$ and the scale parameter $z$. More generally, the MERA for the ground state of a $D$-dimensional system reproduces a discrete, $(D+1)$-dimensional holographic geometry. The MERA can be regarded as defining a real space renormalization group transformation, where the scale parameter $z$ labels coarse-grained lattices $\mathcal{L}^{(z)}$ that offer an effective description of the system at length scale $\lambda = 2^{z}$. This additional dimension allows the MERA to store, using different parts of the network, properties of the ground state corresponding to different length scales $\lambda$. In contrast with the physical geometry, which is only sensitive to the pattern of interactions in the local Hamiltonian $H$, the holographic geometry depends also on certain structural properties of the ground state, such as the existence of a finite correlation length $\xi$. In order to emphasize the differences between the physical and holographic geometries, here and in the next two sections we consider the holographic geometry of a critical, scale invariant ground state, in which the correlation length $\xi$ diverges and all length scales are equivalent. Correspondingly, in these sections we will restrict our considerations to the scale invariant MERA. Only later, in Sect. \ref{sect:gapped}, we will also address the case of a gapped Hamiltonian, where the correlation length $\xi$ is finite, and will consider a version of the MERA adequate to that situation. \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{AdSCFT.eps} \caption{ (Color online) As pointed out by Swingle \cite{Swingle09}, the scale invariant MERA for the ground state of a quantum spin chain can be interpreted as a discrete realization of the AdS/CFT correspondence. The ground state of the one-dimensional lattice model corresponds to a discrete version of the vacuum of a CFT$_{1+1}$, whereas the MERA spans a two-dimensional geometry that corresponds to a discrete version of a time slice of AdS$_{2+1}$. The figure shows a MERA similar to that of Fig. \ref{fig:MERA}, but from another perspective, with the scale parameter $z$ as a radial coordinate. } \label{fig:AdSCFT} \end{center} \end{figure} The connection between MERA and the holographic principle was first explored by Swingle\cite{Swingle09}.
Specifically, the scale invariant MERA\cite{Vidal07, Vidal08} used to describe the ground state of a quantum spin chain at criticality can be understood as a discrete realization of the AdS/CFT correspondence. Indeed, the (scale invariant) ground state of the critical chain is a discrete version of the vacuum of a $1+1$ conformal field theory (CFT), whereas the scale invariant MERA can be regarded as defining a discrete version of a $2+1$ anti de Sitter (AdS) space, see Fig. \ref{fig:AdSCFT}. [Notice that since we are describing time-independent ground states, the time direction is not captured by the tensor network and is irrelevant in the present discussion]. \subsection{Homogeneous tensor networks} Several structural properties of the states that can be represented by the above tensor networks are pre-determined by the choice of geometry they reproduce. The next two sections review the behaviour of correlators and entanglement entropy in MPS, PEPS and MERA, and relate them to properties of the appropriate geometry. For simplicity, we mostly consider homogeneous tensor networks, in which all the tensors are copies of a single tensor (in the scale invariant MERA, two different tensors are necessary). We call \textit{generic} a property that is typically observed in a homogeneous tensor network where the coefficients of the tensor have been chosen randomly. In the following we consider \textit{generic} properties of \textit{homogeneous} MPS, PEPS and MERA with a finite bond dimension $\chi$. We consider states of an infinite lattice $\mathcal{L}$, see Fig. \ref{fig:homogeneous}, and focus mostly on the \textit{asymptotic} behaviour of such properties, namely the decay of correlations at large distances and the scaling of entanglement entropy for large blocks of sites. \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{homogeneous.eps} \caption{ (Color online) Homogeneous tensor network states for the ground state in an infinite lattice in $D=1$ spatial dimensions. (i) A homogeneous MPS is characterized by a single tensor that is repeated infinitely many times throughout the tensor network. (ii) A homogeneous scale invariant MERA is characterized by two tensors, a disentangler and an isometry, repeated throughout the tensor network, which consists of infinitely many layers. } \label{fig:homogeneous} \end{center} \end{figure} \section{Correlations and geodesics} \label{sect:correlation} The asymptotic decay of correlations has long been known to be exponential in an MPS \cite{Fannes92, Ostlund95, Rommer97} and polynomial in the scale invariant MERA \cite{Vidal08, Giovannetti08, Pfeifer09}. In this section we point out that such behaviour is dictated by the structure of geodesics in the geometry attached to each of these tensor network states. For an MPS, the latter is a rather straightforward statement; for the MERA, it was first noted by Swingle\cite{Swingle09}. \begin{figure}[!tb] \begin{center} \includegraphics[width=7cm]{geodesicTN.eps} \caption{ (Color online) (i) Two sites $x_1$ and $x_2$ of the lattice $\mathcal{L}$ are connected through several paths within the tensor network. The geodesic corresponds to the shortest of such paths, where the length of a path is measured e.g. by the number of tensors in the path. In the example, the shortest path between sites $x_1$ and $x_2$ contains 6 tensors, and therefore the length of the geodesic is $D(x_1,x_2) = 6$. Eq.
\ref{eq:corrTN} relates the length of geodesics with the asymptotic decay of correlations in the tensor network (assuming that correlations are predominantly carried by the tensors in the geodesic). (ii) Region $\Omega_A$ of the tensor network that contains (the indices corresponding to) region $A$ of the lattice $\mathcal{L}$. The boundary $\partial \Omega_A$ of region $\Omega_A$ consists of the set of indices connecting $\Omega_A$ with the rest of the tensor network. The number $n(A)$ of such indices is interpreted as a measure of the size $|\partial \Omega_A|$ of the boundary $\partial \Omega_A$. In the example, $n(A) = |\partial \Omega_A|=3$. An upper bound to the entanglement entropy of a region $A$ of the lattice $\mathcal{L}$ is given in terms of $n(A)$, Eq. \ref{eq:upperBound}. } \label{fig:geodesicTN} \end{center} \end{figure} \subsection{Geodesics within a tensor network} Given a tensor network state for the state $\ket{\Psi}$ of a lattice $\mathcal{L}$, and two sites of $\mathcal{L}$ at positions $x_1$ and $x_2$, we can define a notion of distance between these two sites \textit{within the tensor network} as follows. First we notice that the two sites are connected by paths within the tensor network, where each path consists of a list of tensors and links/indices connecting the tensors. To any such path, we then associate a length, as given by the number of tensors (or links) in the path. Then the distance $D(x_1,x_2)$ between these sites is defined as the length of the shortest path connecting them, see Fig. \ref{fig:geodesicTN}(i). Let $C(x_1,x_2)$ denote a correlation function between positions $x_1$ and $x_2$. It turns out that for both the MPS and the scale invariant MERA, the decay of correlations can be expressed in terms of the distance $D(x_1,x_2)$ within the tensor network, \begin{equation} C(x_1,x_2) \approx e^{-\alpha D(x_1,x_2)}, \label{eq:corrTN} \end{equation} for some positive constant $\alpha$. This expression assumes that the correlations between the two sites are mostly carried through the tensors/links in the geodesic path connecting them. It originates in the fact that for both the MPS ($D=1$ dimensions) and the scale invariant MERA (in any dimension), the correlator $C(x_1,x_2)$ can be obtained by evaluating an expression with the (approximate) form \begin{equation} C(x_1,x_2) \approx {\vec{v}_L}^{\dagger} \cdot (T)^{D(x_1,x_2)} \cdot \vec{v}_R, \label{eq:corrTNexplain} \end{equation} that is, a scalar product involving two vectors ${\vec{v}_L}$ and $\vec{v}_R$ and the $D(x_1,x_2)$-th power of some transfer matrix $T$. The eigenvalues of matrix $T$ give rise to the possible correlation lengths $\xi$ in Eq. \ref{eq:Cgap} for the MPS and the possible exponents $q$ in Eq. \ref{eq:Ccrit} for the scale invariant MERA. Instead of reproducing the original derivation of this result for MPS\cite{Fannes92, Ostlund95, Rommer97} and MERA\cite{Vidal08, Giovannetti08, Pfeifer09}, here we will focus on the geometrical interpretation of Eq. \ref{eq:corrTN} in terms of the structure of geodesics.
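The role of the transfer matrix in Eq. \ref{eq:corrTNexplain} can be checked with a short numerical sketch (ours, purely illustrative and not part of the original derivation; the random symmetric matrix below stands for the part of $T$ that governs connected correlations, rescaled so that its spectral radius is smaller than one):
\begin{verbatim}
import numpy as np
from numpy.linalg import matrix_power, eigvalsh

# Illustrative sketch of Eq. (corrTNexplain): a correlator of the form
# vL . T^D . vR decays exponentially in the geodesic length D, at a rate
# set by the largest-magnitude eigenvalue of the transfer matrix T.
rng = np.random.default_rng(1)
chi = 4
B = rng.standard_normal((chi, chi))
T = (B + B.T) / 2                           # symmetric, so eigenvalues are real
T = T / (1.5 * np.max(np.abs(eigvalsh(T)))) # rescale: spectral radius < 1
vL = rng.standard_normal(chi)
vR = rng.standard_normal(chi)

lam = np.max(np.abs(eigvalsh(T)))           # decay rate alpha = -ln(lam)
for D in (5, 10, 20, 40):
    C = abs(vL @ matrix_power(T, D) @ vR)
    print(D, C, lam**D)                     # |C| tracks lam**D up to a prefactor
\end{verbatim}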
\subsection{Correlations in the MPS} \label{sect:correlation:MPS} The MPS reproduces the physical geometry of a lattice $\mathcal{L}$ in $D=1$ dimensions, and therefore the induced \textit{physical distance}, \begin{equation} D_{\text{\tiny phys}}(x_1,x_2) \approx |x_1-x_2|, \label{eq:Dphys} \end{equation} is simply proportional to the number of lattice sites between positions $x_1$ and $x_2$, see Fig. \ref{fig:geodesic}(i). Replacing the physical distance in Eq. \ref{eq:corrTN} leads to the following asymptotic expression for the correlators of the MPS, \begin{equation} C_{\text{\tiny MPS}}(x_1,x_2) \approx e^{-\alpha D_{\text{\tiny phys}}(x_1,x_2)} \approx e^{-|x_1-x_2|/\xi}, \label{eq:CMPS} \end{equation} for some correlation length $\xi>0$, which indeed reproduces the exponential decay of correlations characteristic of gapped systems, see Eq. \ref{eq:Cgap}. \subsection{Correlations in the scale invariant MERA} \label{sect:correlation:MERA} In the scale invariant MERA, two sites at positions $x_1$ and $x_2$ of the lattice $\mathcal{L}$ are connected by a geodesic path of length $O(\log_2(|x_1-x_2|))$, see Fig. \ref{fig:geodesic}(ii), giving rise to the holographic distance \begin{equation} D_{\text{\tiny{hol}}}(x_1,x_2) \approx \log_2(|x_1-x_2|), \label{eq:Dhol} \end{equation} which is consistent with the structure of geodesics in AdS space\cite{Swingle09}. Replacing this holographic distance in Eq. \ref{eq:corrTN} leads to the following asymptotic expression for the correlators of the scale invariant MERA, \begin{eqnarray} C_{\text{\tiny{MERA}}}(x_1,x_2) &\approx& e^{-\alpha D_{\text{\tiny hol}}(x_1,x_2)} \\ &\approx& e^{-\alpha\log_2(|x_1-x_2|)} = |x_1-x_2|^{-q}, \label{eq:CMERA} \end{eqnarray} for some exponent $q = \alpha/\ln 2 \geq 0$, which reproduces the polynomial decay of correlators characteristic of critical systems, Eq. \ref{eq:Ccrit}. \subsection{Correlations in $D>1$ dimensions} In $D>1$ dimensions, parts of the same analysis can be conducted again for the PEPS and the scale invariant MERA. The physical geometry reproduced by the PEPS induces a physical distance $D_{\text{\tiny{phys}}}(x_1,x_2)$ which, as in Eq. \ref{eq:Dphys}, is proportional to the distance within the lattice $\mathcal{L}$, whereas the scale invariant MERA leads to a holographic distance $D_{\text{\tiny{hol}}}(x_1,x_2)$ analogous to that of Eq. \ref{eq:Dhol}. It is also true that, for a generic choice of tensors in the PEPS and MERA, we again recover an asymptotic decay of correlation functions $C(x_1,x_2)$ that is exponential and polynomial in $|x_1-x_2|$, respectively, see Eqs. \ref{eq:CMPS} and \ref{eq:CMERA}. However, we point out that for certain (non-generic) choices of variational parameters, PEPS can also display polynomial decay of correlations, as is the case e.g. of a PEPS built from a critical classical partition function\cite{Verstraete06}. Such (non-generic) behaviour is incompatible with the assumption implicit in Eq. \ref{eq:corrTN}, namely that correlations are mostly carried by the tensors/links included in the geodesic path connecting positions $x_1$ and $x_2$. In a PEPS with polynomially decaying correlation functions, correlations between two sites are instead obtained from a sum of contributions involving the many different paths connecting the two sites within the $D$-dimensional network, and not just from the geodesic paths. Therefore, Eq. \ref{eq:corrTN} does not hold for critical PEPS.
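[To close this section with a concrete comparison of the two $D=1$ behaviours (the numbers are ours, added for orientation): for two sites separated by $|x_1-x_2| = 2^{10} = 1024$ lattice sites, Eqs. \ref{eq:Dphys} and \ref{eq:Dhol} give $D_{\text{\tiny phys}} \approx 1024$ and $D_{\text{\tiny hol}} \approx \log_2 1024 = 10$, so that with the same constant $\alpha$ in Eq. \ref{eq:corrTN} one finds $C_{\text{\tiny MPS}} \approx e^{-1024\alpha}$, which is negligible for any appreciable $\alpha$, while $C_{\text{\tiny MERA}} \approx e^{-10\alpha}$ remains sizeable. The same local decay rate $\alpha$ thus translates into a finite correlation length $\xi = 1/\alpha$ in the physical geometry and into a power law in the holographic one.]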
\begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{geodesic.eps} \caption{ (Color online) (i) In an MPS, two spins at positions $x_1$ and $x_2$ are connected by a path containing $|x_1-x_2|$ tensors. (ii) In a MERA, the same two spins are connected by a path that only has $O(\log_2(|x_1 - x_2|))$ tensors, in correspondence with geodesics in AdS space. } \label{fig:geodesic} \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=6cm]{entropyMPS.eps} \caption{ (Color online) Upper bound for the entropy $S_L$ of the reduced density matrix $\rho_L$ of a region $A$ of linear size $L$: (i) In an MPS, the tensors describing a block $A$ of $L$ sites, that is region $\Omega_A^{\text{\tiny{phys}}}$, are connected with the rest of the tensor network by means of two (that is, a constant number of) bonds, $n(A)=2$. Therefore an MPS can at most reproduce a constant entanglement entropy $S_L \approx \mbox{const.}$, which corresponds to a (physical) boundary law. (ii) In a PEPS for a two-dimensional system, the number $n(A)$ of bond indices connecting $\Omega_A^{\text{\tiny{phys}}}$ with the rest of the tensor network is proportional to the size of the boundary of the square region $A$, $n(A)\approx L$. Therefore the entropy scales at most as $S_{L} \approx L$, which again is a boundary law. } \label{fig:entropyMPS} \end{center} \end{figure} \section{Entanglement entropy and boundary laws} \label{sect:entropy} The scaling of the entanglement entropy $S(A)$ of a region $A$ is well understood for an MPS\cite{Vidal03,Perez-Garcia07}, a PEPS\cite{Verstraete06} and a scale invariant MERA\cite{Vidal08pre}. In this section we review these results and re-express them as a simple boundary law for a related region $\Omega_A$ in the appropriate geometry. \subsection{Entropy as the size of a boundary} An upper bound for the entanglement entropy $S(A)$ in a tensor network state is obtained as follows. First we consider splitting the tensor network into two parts, $\Omega_A$ and $\Omega_B$, where $\Omega_A$ contains the open indices corresponding to all the sites in region $A$ and the other part $\Omega_B$ contains the open indices corresponding to region $B$, namely the rest of the sites in lattice $\mathcal{L}$. Then we count the number of bond indices $n(A)$ that connect regions $\Omega_A$ and $\Omega_B$. Since each bond index can contribute at most $\log_2 (\chi)$ to the entropy of $\rho_A$, we obtain the upper bound\cite{entropyCount} (see Fig. \ref{fig:geodesicTN}(ii)), \begin{equation} S(A) \leq n(A) \log_2 (\chi). \label{eq:upperBound} \end{equation} Among all such partitions, the one that minimizes $n(A)$ is the one that provides the tightest upper bound to $S(A)$. From now on, we use $\Omega_A$ and $\Omega_B$ to refer to this optimal partition, and $n(A)$ to denote the corresponding minimal number of bond indices. The optimal upper bound of Eq. \ref{eq:upperBound} is saturated for MPS, PEPS and MERA, in the sense that plenty of numerical evidence shows that for a generic choice of coefficients in a homogeneous tensor network, the entanglement entropy scales proportional to $n(A)$, \begin{equation} S(A) \approx n(A).
\label{eq:SAnA} \end{equation} In our discrete geometries, we can think of $n(A)$ as a measure of the size $|\partial {\cal O}mega_A|$ of the boundary $\partial {\cal O}mega_A$ of the minimally connected region ${\cal O}mega_A$, and therefore interpret Eq. \ref{eq:SAnA} as stating that the entropy $S(A)$ is proportional to the size of the boundary of region ${\cal O}mega_A$, \begin{equation}gin{equation} S(A) \approx |\partial{\cal O}mega_A|. \label{eq:SAOA} \end{equation} This expression, equally valid for MPS, PEPS and MERA, allows us to always interpret the scaling of entanglement entropy as a simple boundary law in the appropriate geometry. The specific scaling of entanglement entropy for each tensor network state is then obtained by replacing $|\partial{\cal O}mega_A|$ in Eq. \ref{eq:SAOA} with its explicit dependence on the linear size $L$ of region $A$, as we do next. \subsection{Entanglement entropy in MPS and PEPS} \label{sect:entropy:MPS} Consider a hypercubic region $A$ made of $L^D$ sites. The MPS and the PEPS reproduce the physical geometry. Therefore the hypercubic region $A$ of the lattice and the region ${\cal O}mega^{\text{\tiny{phys}}}_A$ of the tensor network that contains the open indices corresponding to sites in $A$ are essentially equivalent. In particular, the size of their boundaries is proportional, $|\partial {\cal O}mega^{\text{\tiny{phys}}}_A| \approx |\partial A|$. In other words, region ${\cal O}mega^{\text{\tiny{phys}}}_A$ is connected with the rest of the tensor network by a number of bond indices $n(A)$ proportional to the size $|\partial A|$ of the boundary $\partial A$ of the hypercubic region $A$ itself, see Fig. \ref{fig:entropyMPS}. Since $|\partial A| \approx L^{D-1}$, we obtain, \begin{equation}gin{equation} n(A) \approx |\partial A| \approx L^{D-1}, \end{equation} which implies, together with Eq. \ref{eq:SAnA}, that the entanglement entropy of the MPS\cite{Vidal03,Perez-Garcia07} and PEPS\cite{Verstraete06} scales with the linear size $L$ according to the boundary law of Eq. \ref{eq:boundaryLaw}, that is \begin{equation}gin{eqnarray} S_{\text{\tiny MPS}}(A) &\approx& L^{D-1} \approx S_0 ~~~~~~(D=1) \label{eq:SMPS}\\ S_{\text{\tiny PEPS}}(A) &\approx& L^{D-1} ~~~~~~~~~~~~~(D>1). \end{eqnarray} Here we will refer to such scaling of entanglement entropy as a \textit{physical boundary law} or simply \textit{boundary law}. \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=8.5cm]{entropyMERA.eps} \caption{ (Color online) Upper bound for the entropy $S_L$ of the reduced density matrix $\rho_L$ of a region $A$ of $L$ contiguous sites. In a MERA for a one-dimensional system, the minimally connected region ${\cal O}mega_A^{\text{\tiny{hol}}}$ for region $A$ of the lattice is connected with the rest of the tensor network by a number $n(A)$ that grows logarithmically with the size of region $A$, $n(A) \approx \log (L)$. Therefore the entanglement entropy scales at most as $S_L \approx \log (L)$, which is a logarithmic violation of the (physical) boundary law. A more detailed analysis of the scaling is found in Fig. \ref{fig:OmegaA1D}. } \label{fig:entropyMERA} \end{center} \end{figure} \subsection{Entanglement entropy in the scale invariant MERA} \label{sect:entropy:MERA} Given a $D$-dimensional hypercubic region $A$ of lattice $\mathcal{L}$, the minimally connected region ${\cal O}mega^{\text{\tiny{hol}}}_A$ in the scale invariant MERA is $(D+1)$-dimensional, see Fig. 
\ref{fig:entropyMERA}, with the additional dimension labelled by the scale parameter $z$. The number $n(A)$ of bond indices connecting ${\cal O}mega^{\text{\tiny{hol}}}_A$ with the rest of the tensor network is the sum of $T \approx \log_2 L$ different contributions $n_z(A)$, \begin{equation}gin{equation} n(A) \approx \sum_{z=0}^{T-1} n_z(A), \label{eq:nA_MERA} \end{equation} where each contribution $n_z(A)$ corresponds to a different length scale $\lambda=2^z$, with $z\in \{0,1,\cdots, T-1\}$. As explained in Ref. \onlinecite{Vidal08pre}, the contribution $n_z(A)$ is proportional to the size of the boundary of a region $A_z$ obtained from region $A$ by means of $z$ coarse-graining steps, where each coarse-graining step divides the linear size of the region roughly by two, and where the size of the boundary of a region is measured by the number of boundary sites included in the region. Let us explicitly perform the sum in Eq. \ref{eq:nA_MERA}. It is useful to address $D=1$ and $D>1$ separately. In $D=1$ dimensions, given a region $A$ made of $L$ sites, each region $A_z$ has a boundary $\partial A_z$ made of two sites, so that each contribution $n_z(A) \approx |\partial A_z| = 2$ is constant. Therefore $n(A)$, which is made of $T\approx \log_2(L)$ constant contributions (Fig. \ref{fig:OmegaA1D}), grows logarithmically with $L$, \begin{equation}gin{equation} n(A) \approx 2T \approx \log (L). \label{eq:nLogL} \end{equation} Then Eqs. \ref{eq:SAnA} and \ref{eq:nLogL} imply that the entanglement entropy in the scale invariant MERA in $D=1$ dimensions grows as\cite{Vidal08pre} \begin{equation}gin{equation} S_{\text{\tiny{MERA}}}(A) \approx \log (L)~~~~~~~~~~~~(D=1), \label{eq:SMERA1D} \end{equation} which reproduces the scaling characteristic of ground states of quantum critical systems, see Eq. \ref{eq:1Dcrit}. \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=6cm]{OmegaA1D.eps} \caption{ (Color online) Scaling of entanglement entropy in the MERA in $D=1$ dimensions. (i) Caricature of region $A$ in the lattice $\mathcal{L}$ and of the corresponding region ${\cal O}mega_A^{\text{\tiny{hol}}}$ in the MERA (see also Fig. \ref{fig:entropyMERA}). (ii) The total number $n(A)$ of bond indices connecting ${\cal O}mega_A^{\text{\tiny{hol}}}$ with the rest of the tensor network is the result of $\log (L)$ identical contributions, each corresponding to a different length scale or value of $z$. Thus, $n(A) \approx \log_2(L)$. As a result, the entropy of region $A$ in the MERA for $D=1$ dimensions scales at most as $S(A) \approx \log_2(L)$, which is a logarithmic violation of the (physical) boundary law. } \label{fig:OmegaA1D} \end{center} \end{figure} Instead, in $D>1$ dimensions, each region $A_z$ is a hypercubic block of size $\approx L/2^{z}$ (Fig. \ref{fig:OmegaA2D}), and therefore the size $|\partial A_z|$ of its boundary $\partial A_z$ scales with $L$ as \begin{equation}gin{equation} |\partial A_z| \approx \left(\frac{L}{2^{z}}\right)^{D-1}. \end{equation} Using again that $n_z(A)$ is proportional to $|\partial A_z|$ we find that now the contributions $n_z(A)$ to $n(A)$ in Eq. \ref{eq:nA_MERA} depend on $L$. Their sum leads to \begin{equation}gin{equation} n(A) \approx L^{D-1} \sum_{z=0}^{T} 2^{-z} \approx L^{D-1}. \label{eq:nLD1} \end{equation} Then Eqs. 
\ref{eq:SAnA} and \ref{eq:nLD1} imply that the entanglement entropy in the scale invariant MERA in $D>1$ dimensions grows as\cite{Vidal08pre} \begin{equation}gin{equation} S_{\text{\tiny{MERA}}}(A) \approx L^{D-1}~~~~~~~~~~~~(D>1), \label{eq:SMERA2D} \end{equation} which reproduces the scaling characteristic of ground states in most gapped systems and some gapless systems in $D>1$ dimensions, see Eq. \ref{eq:boundaryLaw}. \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=6cm]{OmegaA2D.eps} \caption{ (Color online) Scaling of entanglement entropy in the MERA in $D=2$ dimensions. (A similar analysis applies to $D>2$ dimensions). (i) Caricature of region $A$ in the lattice $\mathcal{L}$ and of the corresponding region ${\cal O}mega_A^{\text{\tiny{hol}}}$ in the MERA. (ii) The total number $n(A)$ of bond indices connecting ${\cal O}mega_A^{\text{\tiny{hol}}}$ with the rest of the tensor network is the result of $\log (L)$ contributions $n_z(A)$. Contribution $n_z(A)$ corresponds to length scale $\lambda = 2^z$ and is proportional to the size $|\partial A_z|\approx L/2^{z}$ of the boundary $\partial A_z$ of a coarse-grained region $A_z$. The sum of contributions is dominated by the smallest length scale, $z=0$, and is thus proportional to $L$. As a result, the entropy of region $A$ in the MERA for $D=2$ dimensions scales at most as $S(A) \approx L$, which is a (physical) boundary law. } \label{fig:OmegaA2D} \end{center} \end{figure} Eq. \ref{eq:SMERA1D} corresponds to a logarithmic violation of the (physical) boundary law in $D=1$ dimensions, whereas Eq. \ref{eq:SMERA2D} corresponds to the (physical) boundary law. Here we reinterpret both Eq. \ref{eq:SMERA1D} and Eq. \ref{eq:SMERA2D} as a \textit{holographic boundary law}, that is, as a boundary law in the region ${\cal O}mega^{\text{\tiny hol}}_A$ of the holographic geometry. This geometric interpretation of the scaling of entanglement entropy in the scale invariant MERA is inspired by (and can be considered a lattice version of) the results of Ref. \onlinecite{Ryu06, Ryu06b, Nishioka09}, where the entanglement entropy of a CFT is computed using the holographic principle, by noticing that it scales as the size of the boundary of a region ${\cal O}mega^{\text{\tiny hol}}_A$ with minimal boundary. We emphasize, however, that while Refs. \onlinecite{Ryu06, Ryu06b, Nishioka09} discuss the scaling of entanglement entropy in the actual ground state of a physical theory, our present discussion only concerns the scaling of entanglement entropy in a variational ansatz (which we hope to be a good approximate representation of ground states). One merit of this geometric interpretation is that it motivates a strategy to build tensor network states that violate the boundary law also in $D>1$, as presented in Ref. \onlinecite{Evenbly11} and discussed in Sect. \ref{sect:discussion}. \section{Holographic geometry in gapped systems} \label{sect:gapped} The holographic geometry considered so far in Sects. \ref{sect:TN}-\ref{sect:entropy} corresponds to scale invariant, critical ground states, as described by the scale-invariant MERA. This particular scenario has been used there to emphasize the differences between physical and holographic geometries, which are most evident for critical systems. However, all ground states, whether corresponding to a critical system or a non-critical one, have a holographic geometry. 
For completeness, in this section we consider the holographic geometry of the ground states of gapped systems, which was first discussed by Swingle\cite{Swingle09}. These ground states can be represented by a finite range MERA\cite{Evenbly09} -- a MERA with a finite number of layers of tensors, where tensors in different layers are in principle allowed to be different. Since the finite range MERA is not a homogeneous tensor network, its structural properties do not only depend on the way the tensors are connected into a network -- different layers of the MERA may contribute differently to, say, correlations and entanglement entropy. However, even in this case geometrical considerations alone will already allow us to reproduce some of the key properties that differentiate the ground states of gapped systems from critical ones. In addition, near criticality, where the correlation length $\xi$ is much larger than the lattice spacing, we will recover aspects of the scaling of correlations and entanglement entropy as a function of $\xi$. Finally, in the opposite limit --namely when the correlation length $\xi$ is of the order of the lattice spacing-- we will see that the holographic geometry reduces to the physical geometry. Correspondingly, for ground states close to this limit, a finite range MERA representation becomes equivalent to an MPS/PEPS representation. \subsection{Correlation length, finite range MERA and truncated holographic geometry} Let us then consider the ground state \ket{\Psi_{\mbox{\tiny GS}}} of a gapped Hamiltonian $H$ in $D$ dimensions, in which correlations decay exponentially with distance according to Eq. \ref{eq:Cgap} (or possibly Eq. \ref{eq:Cmixed}) and therefore have a characteristic length scale, the correlation length $\xi$. Notice that if we coarse-grain the lattice according to a scheme that maps a block of $2^D$ sites into one site, after one coarse-graining step the correlation length $\xi$ has shrunk by a factor two, $\xi \rightarrow \xi' = \xi/2$. By applying more coarse-graining steps the correlation length will shrink further. In particular, after \begin{equation}gin{equation} z_\xi \equiv \log_2(\xi) \end{equation} coarse-graining steps the correlation length will become one (in units of separation between lattice sites), and a few additional coarse-graining steps, say a fixed number $\Delta z$ (independent of $\xi$), will render all two-point correlators negligible (i.e. smaller than some pre-determined, small constant). That is, starting with a ground state \ket{\Psi_{\mbox{\tiny GS}}} with correlation length $\xi$, it takes \begin{equation}gin{equation} z_0 \equiv z_{\xi} + \Delta z \approx O(\log_2(\xi)) \end{equation} steps of coarse-graining to produce a state with negligible two-point correlators. Here we will assume that after the $z_{0}$ steps of coarse-graining (according to an entanglement renormalization scheme\cite{Vidal07,Vidal10}) the original ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of the system has been transformed into a state that can be well approximated by the product state \begin{equation}gin{equation} \ket{\text{prod}} \equiv \ket{0}\otimes \ket{0} \otimes \cdots \otimes \ket{0}, \label{eq:prod} \end{equation} namely a state with no correlations between different lattice sites. State $\ket{\text{prod}}$ describes the ground state at a fixed-point of the RG flow that corresponds to a gapped phase without topological order. 
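The counting of coarse-graining steps used above is elementary; the following sketch (with an arbitrary illustrative value of the correlation length) simply iterates the halving $\xi \rightarrow \xi/2$.
\begin{verbatim}
import math

# Elementary counting of coarse-graining steps: each step of entanglement
# renormalization halves the correlation length (xi = 200 is an arbitrary
# illustrative value, in units of the lattice spacing).
xi = 200.0
steps = 0
while xi > 1.0:
    xi /= 2.0        # one coarse-graining step: xi -> xi/2
    steps += 1
print(steps, math.log2(200.0))   # 8 steps, i.e. log2(200) ~ 7.6 rounded up
# A further, xi-independent number Delta z of steps then suppresses the
# remaining short-range correlations, giving z_0 = z_xi + Delta z.
\end{verbatim}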
In this case, the ground state $\ket{\Psi_{{\mbox{\tiny GS}}}}$ of an infinite system can be represented by a \textit{finite range} MERA\cite{Evenbly09}, which is made of just a finite number $z_0$ of layers of disentanglers and isometries. The finite range MERA is not homogeneous, in that the tensors in different layers are allowed to be different, reflecting the fact that the properties of the ground state are now different at different length scales. Here we will consider a particular choice of finite range MERA, where the first $z_{\xi}$ layers of tensors correspond to those in the scale invariant MERA that describes the neighbouring critical point, and the remaining $\Delta z$ layers are chosen to minimize the ground state energy of the gapped Hamiltonian $H$. This choice is illustrated in Fig. \ref{fig:frMERA}(i) for a finite range MERA in $D=1$ dimensions.\cite{otherFiniteRangeMERA} \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=8.5cm]{frMERA.eps} \caption{ (Color online) Finite range MERA for an infinite lattice in $D=1$ spatial dimensions. (i) $z_{\xi}\equiv \log_2 (\xi)$ layers of identical disentanglers and isometries obtained from the scale invariant MERA for the critical case are followed by some fixed number $\Delta z$ of non-homogeneous layers of tensors (where the tensors on different layers are allowed to be different). In this simple example, $z_{\xi} = 2$ and $\Delta z =1$, so that the total number of layers of tensors is $z_0 \equiv z_{\xi} + \Delta z = 3$. The finite range MERA represents a ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of a gapped system that can be transformed into the unentangled state $\ket{\text{prod}}$ after three layers of coarse-graining. (ii) The finite range MERA can be combined with another tensor network (e.g. MPS in the figure) in order to represent a ground state $\ket{\Psi_{GS}}$ of a gapped system that flows towards an entangled fixed point ground state $\ket{\Psi_{\mbox{\tiny f.p.}}}$ under coarse-graining transformations. The MPS at the top of the MERA can also be used to accurately describe the exponential decay of correlations that dominate the limit $|x_1-x_2|\gg \xi$ of Eq. \ref{eq:Cmixed}. } \label{fig:frMERA} \end{center} \end{figure} The holographic geometry attached to the ground state of a gapped system is still $(D+1)$-dimensional, but it is truncated in the RG direction, with the scale parameter $z$ restricted to values in the interval $[0,z_0]$. This truncation has an immediate effect on the possible decay of correlations and scaling of entanglement entropy that the tensor network can reproduce. \subsection{Correlators} Let us consider first a correlator between two sites $x_1$ and $x_2$ such that $|x_1-x_2|$ is smaller than the correlation length $\xi$, $|x_1-x_2| \ll \xi$. In this case the geodesic connecting the two sites within the truncated holographic geometry only runs through length scales $z$ smaller than $z_0$ and therefore is not affected by the existence of the truncation at $z=z_0$. As a result, the length of the geodesic is still logarithmic in $|x_1-x_2|$ as in a critical system, Eq. \ref{eq:Dhol}, see Fig. \ref{fig:frMERAgeodesic}(i). In addition, since the geodesic only runs through the homogeneous, scale-invariant region of the finite range MERA, $z\in [0,z_{\xi}]$, the two-point correlator is expected to decay polynomially as in a critical system, Eq. \ref{eq:Ccrit}. 
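The two regimes of geodesic lengths in the truncated geometry, the short-distance one just described and the long-distance one discussed in the next paragraph, can be visualized with the following caricature, in which a geodesic climbs up and down at most $z_0$ layers and otherwise proceeds horizontally across the top of the truncated network (a toy formula chosen only to reproduce the expected asymptotics, not derived from any specific finite range MERA).
\begin{verbatim}
import numpy as np

# Caricature of geodesic lengths in the truncated holographic geometry of a
# finite range MERA with z0 layers: for separations r below 2**z0 a geodesic
# climbs up and down the layers; for larger r it must also run horizontally
# across the top of the truncated network.  This toy formula only mimics the
# two expected regimes; it is not derived from a specific tensor network.
def D_hol(r, z0):
    if r <= 2 ** z0:
        return 2 * np.log2(r)                # ~ log(r), as at criticality
    return 2 * z0 + r / 2 ** z0              # ~ r, as in Eq. (CgapMERA)

z0 = 5                                        # i.e. xi ~ 2**z0 = 32 (assumed)
for r in (4, 16, 32, 128, 1024):
    print(f"r = {r:5d}   D_hol ~ {D_hol(r, z0):7.2f}")
# Logarithmic growth for r << xi, linear growth for r >> xi.
\end{verbatim}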
On the other hand, when $|x_1-x_2|$ is larger than the correlation length $\xi$, $|x_1-x_2|\gg \xi$, the length of the geodesic connecting the two sites within the tensor network grows proportional to $|x_1-x_2|$, see Fig. \ref{fig:frMERAgeodesic}(ii), \begin{equation}gin{equation} D_{\text{\tiny hol}}(x_1,x_2) \approx |x_1-x_2|,~~~~ |x_1-x_2|\gg \xi ~~~~\text{(gapped $H$)} \label{eq:CgapMERA} \end{equation} That is, for sufficiently large $|x_1-x_2|$, distances $D_{\text{\tiny hol}}(x_1,x_2)$ in the holographic geometry become proportional to distances $D_{\text{\tiny phys}}(x_1,x_2)$ in the physical geometry. It would be tempting to say that, in this second regime, the structure of geodesics in the finite range MERA, Eq. \ref{eq:CgapMERA}, implies that correlations at large distances decay exponentially. While it is the case that the finite range MERA can approximate exponentially decaying correlations \cite{corrMERA}, these follow from the use of different tensors in top $\Delta z$ layers of the network, and cannot be interpreted in simple geometric terms. Nevertheless, what is clear from geometric arguments (together with some underlying transfer mechanism for the propagation of correlations) is that a truncated holographic geometry can no longer give rise to polynomially decaying correlations at long distances, $|x_1-x_2| \gg \xi$, since the length of geodesics is no longer logarithmic, Eq. \ref{eq:CgapMERA}. In conclusion, we have argued that the truncated holographic geometry of the ground state of a gapped system gives rise to a modified structure of geodesics that is compatible with the decay of two-point correlators expressed in Eq. \ref{eq:Cmixed}, with polynomial decay for $|x_1-x_2| \ll \xi$ and exponential decay for $|x_1-x_2| \gg \xi$. \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=8.5cm]{frMERAgeodesic.eps} \caption{ (Color online) Geodesics in the finite range MERA. (i) When $|x_1-x_2|$ is smaller than the correlation length $\xi$, the geodesic connecting sites $x_1$ and $x_2$ within the tensor network is identical to the scale invariant case, and its length is therefore logarithmic in $|x_1-x_2|$. (ii) When $|x_1-x_2|$ is larger than the correlation length $\xi$, the geodesic connecting the two sites sees the presence of the truncation and grows proportional to $|x_1-x_2|$. The structure of geodesics in the truncated holographic geometry is therefore compatible with the refined decay of correlations of Eq. \ref{eq:Cmixed}. } \label{fig:frMERAgeodesic} \end{center} \end{figure} \begin{equation}gin{figure}[!tb] \begin{equation}gin{center} \includegraphics[width=8.5cm]{frMERAentropy.eps} \caption{ (Color online) Minimally connected regions in the finite range MERA. (i) When region $A$ is smaller than the correlation length $\xi$, $L\ll \xi$, the minimally connected region ${\cal O}mega_{A}^{\text{\tiny hol}}$ within the tensor network is identical to the scale invariant case, and the size $n(A) \equiv |{\cal O}mega_{A}^{\text{\tiny hol}}|$ of its boundary ${\cal O}mega_{A}^{\text{\tiny hol}}$ is therefore logarithmic in $L$. (ii) When region $A$ is larger than the correlation length $\xi$, $L\gg \xi$, the size of the boundary ${\cal O}mega_{A}^{\text{\tiny hol}}$ saturates to a constant (as a function of $L$) that is proportional to the number $z_0$ of layers in the finite range MERA and thus grows logarithmically with $\xi$. 
The structure of minimally connected regions in the truncated holography is therefore compatible with a saturation of the entropy, Eq. \ref{eq:1Dgap}, with a saturation value $S_0$ that scales as $\log_2(\xi)$, Eq. \ref{eq:1Dmixed}. } \label{fig:frMERAentropy} \end{center} \end{figure} \subsection{Entanglement entropy} In $D=1$ dimensions, the scaling of entanglement entropy in the finite range MERA (for gapped systems) is also different from that in the scale invariant MERA (for critical systems), with the difference having a straightforward geometric interpretation. Let us consider Fig. \ref{fig:frMERAentropy}. If we first consider a region $A$ with length $L$ smaller than the correlation length $\xi$, $L \ll \xi$, then the minimally connected region $\Omega_A^{\text{\tiny hol}}$ in the tensor network only involves length scales $z$ smaller than $z_\xi$. As illustrated in Fig. \ref{fig:frMERAentropy}(i), in this case the size $n(A) \equiv |\partial \Omega_A^{\text{\tiny hol}}|$ of the boundary $\partial \Omega_A^{\text{\tiny hol}}$ scales as $\log_2(L)$ as in the scale invariant MERA, Eq. \ref{eq:nLogL}, and the entanglement entropy grows as in a critical system, Eq. \ref{eq:1Dcrit}. However, when the size $L$ of region $A$ is larger than the correlation length $\xi$, $L \gg \xi$, the minimally connected region $\Omega^{\text{\tiny{hol}}}_A$ in the finite range MERA has a boundary $\partial \Omega^{\text{\tiny{hol}}}_A$ that saturates to a constant size $|\partial \Omega^{\text{\tiny{hol}}}_A|$, see Fig. \ref{fig:frMERAentropy}(ii), with \begin{equation} |\partial \Omega^{\text{\tiny{hol}}}_A| \equiv n(A) \approx z_\xi \equiv \log_2 (\xi) ~~~~(D=1,\text{ gapped $H$}). \end{equation} That is, region $\Omega^{\text{\tiny{hol}}}_A$ is now connected with the rest of the tensor network through a number $n(A)$ of bond indices proportional to $z_{\xi} \equiv \log_2 (\xi)$, which is independent of $L$. This implies a constant upper bound for the entanglement entropy \begin{equation} S_{\text{\tiny MERA}}(A) \approx S_0 ~~~~(D=1,\text{ gapped $H$}), \end{equation} and therefore the finite range MERA obeys the boundary law of Eq. \ref{eq:1Dgap}. In addition, the entropy saturates to a constant $S_0\approx z_{\xi}$ that grows logarithmically with the correlation length $\xi$\cite{entMERA}, thus also producing a scaling compatible with Eq. \ref{eq:1Dmixed}. \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{OmegaAfinite.eps} \caption{ (Color online) Scaling of entanglement entropy in the finite range MERA in $D=1$ and $D=2$ dimensions. (i) For $D=1$, region $\Omega_A^{\text{\tiny hol}}$ is a truncated version of that in Fig. \ref{fig:OmegaA1D}. (ii) Only length scales smaller than $\xi$ contribute to the total number $n(A)$ of bond indices connecting $\Omega_A^{\text{\tiny hol}}$ with the rest of the tensor network. This number is therefore upper bounded by a constant, which grows with the correlation length as $\log_2(\xi)$. (iii) For $D=2$, the region $\Omega_A^{\text{\tiny hol}}$ is a truncated version of that in Fig. \ref{fig:OmegaA2D}. (iv) Again, only length scales smaller than $\xi$ contribute to the total number $n(A)$ of bond indices connecting $\Omega_A^{\text{\tiny hol}}$ with the rest of the tensor network. However, this does not change the linear dependence of $n(A)$ on $L$.
} \label{fig:OmegaAfinite} \end{center} \end{figure} Therefore we see that the structure of minimally connected regions in the truncated holographic geometry reproduces well the scaling of entanglement entropy in gapped systems, both for block lengths $L$ larger and smaller than the correlation length $\xi$. In $D>1$ dimensions, the truncation of the holographic geometry to $z \leq z_0$ due to the presence of a finite correlation length $\xi$ does not alter the scaling of entanglement entropy, which is dominated by the $z=0$ contribution (see Fig. \ref{fig:OmegaAfinite}), and therefore the finite range MERA still obeys a boundary law, \begin{equation} S_{\text{\tiny MERA}}(A) \approx |\partial \Omega_A^{\text{\tiny hol}}| \approx L^{D-1} ~~~~(D>1,\text{ gapped $H$}). \end{equation} \subsection{Equivalence between holographic and physical geometries} \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{MERAtoMPS.eps} \caption{ (Color online) The finite range MERA can be converted into an MPS with a sufficiently large, but finite bond dimension $\chi_{\text{\tiny{MPS}}}$ as given by Eq. \ref{eq:MPSvMERA}. Specifically, each bond index of the MPS has to account for $O(\log_2 (\xi))$ bond indices of the MERA. When $\xi$ is small, the two-dimensional holographic geometry of $\ket{\Psi_{GS}}$ and the one-dimensional physical geometry of $H$ are essentially equivalent, and the ground state can be accurately described by either an MPS or a MERA. Figs. (i)-(iii) illustrate in diagrammatical notation how to convert a finite range MERA into an MPS, where the bond dimension of the MPS grows exponentially with the number of layers in the MERA. } \label{fig:MERAtoMPS} \end{center} \end{figure} We have seen that for gapped systems, the holographic geometry is truncated at a value $z_0$ of the scale parameter $z$ corresponding (up to a constant) to $z_{\xi}\equiv\log_2(\xi)$, where $\xi$ is the correlation length. We have also seen that the presence of the truncation in the holographic geometry implies that the length $D_{\text{\tiny hol}}(x_1,x_2)$ of geodesics and the size $|\partial \Omega_{A}^{\text{\tiny hol}}|$ of the boundary of minimally connected regions in the holographic geometry scale asymptotically as in the physical geometry. As a matter of fact, when the correlation length $\xi$ is of the order of the lattice spacing, so that $z_0$ is a small number, it is no longer possible to distinguish between holographic and physical geometries at all. Correspondingly, as we discuss below, in $D=1$ dimensions the finite range MERA can be efficiently mapped into an MPS. This mapping is still possible for a large correlation length $\xi$, but the resulting MPS has a bond dimension $\chi$ that grows with $\xi$ and diverges at a critical point. \subsection{From MERA to MPS} As illustrated in Fig. \ref{fig:MERAtoMPS} for gapped systems in $D=1$ dimensions, a finite range MERA made of $z_0$ layers of tensors and with bond dimension $\chi_{\text{\tiny{MERA}}}$ can be re-expressed as an MPS with bond dimension $\chi_{\text{\tiny{MPS}}}$ given by\cite{MERAtoMPS} \begin{equation} \chi_{\text{\tiny{MPS}}} \approx \left(\chi_{\text{\tiny{MERA}}}\right)^{z_0},~~~~~z_0\approx z_{\xi} \equiv \log_2(\xi). \label{eq:MPSvMERA} \end{equation} Therefore, when the correlation length $\xi$ is small, the holographic geometry is a narrow strip and the MERA can be re-expressed as an MPS with a small bond dimension $\chi_{\text{\tiny{MPS}}}$.
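A minimal numerical illustration of Eq. \ref{eq:MPSvMERA} (with an arbitrary illustrative value of $\chi_{\text{\tiny{MERA}}}$) also makes explicit the equivalent rewriting $\left(\chi_{\text{\tiny{MERA}}}\right)^{\log_2(\xi)} = \xi^{\log_2(\chi_{\text{\tiny{MERA}}})}$: the required MPS bond dimension grows as a power of the correlation length.
\begin{verbatim}
import numpy as np

# Eq. (MPSvMERA): chi_MPS ~ chi_MERA**z0 with z0 ~ log2(xi).  Equivalently
# chi_MPS ~ xi**log2(chi_MERA), i.e. the required MPS bond dimension grows
# as a power of the correlation length and diverges at the critical point.
# chi_MERA = 8 is an arbitrary illustrative value.
chi_mera = 8
for xi in (2, 8, 32, 128, 512):
    z0 = np.log2(xi)
    chi_mps = chi_mera ** z0
    print(f"xi = {xi:4d}   z0 = {z0:4.1f}   chi_MPS ~ {chi_mps:10.3e}"
          f"   xi**log2(chi_MERA) = {xi ** np.log2(chi_mera):10.3e}")
\end{verbatim}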
However, as one gets closer to a quantum critical point and the correlation length $\xi$ becomes larger, the holographic geometry becomes a strip with larger width $z_0$. The MERA can still be re-expressed as an MPS, but with a bond dimension $\chi_{\text{\tiny{MPS}}}$ that grows exponentially with the number $z_0$ of layers in the MERA, Eq. \ref{eq:MPSvMERA}. Since the computational cost of MPS algorithms scales as a power of $\chi_{\text{\tiny{MPS}}}$, that is exponentially with $z_0$, numerical simulations must be restricted to small values of $z_0$. Finally, at the critical point, where $\xi$ diverges, the holographic geometry extends indefinitely in the coarse-graining direction $z$, and a MERA with finite bond dimension $\chi_{\text{\tiny{MERA}}}$ can no longer be replaced by an MPS with finite bond dimension $\chi_{\text{\tiny{MPS}}}$. In $D>1$ dimensions, a finite range MERA with a finite bond dimension $\chi_{\text{\tiny{MERA}}}$ can also be re-expressed as a PEPS of finite bond dimension $\chi_{\text{\tiny{PEPS}}}$. However, the bond dimension $\chi_{\text{\tiny{PEPS}}}$ does not grow significantly with $z_0$ and, as a matter of fact, even the scale invariant MERA in $D>1$ (with $z_0=\infty$) can be exactly represented by a PEPS with finite bond dimension $\chi_{\text{\tiny{PEPS}}}$, as recently shown in Ref. \onlinecite{Barthel10} (see also Ref. \onlinecite{PEPSinhomo}). \section{Discussion} \label{sect:discussion} In this manuscript we have reviewed a number of results concerning correlations and entanglement in tensor network states and presented them in a unified way by pointing out that they can be interpreted as geometric properties of some underlying discrete geometry. Specifically, MPS and PEPS have been argued to describe a $D$-dimensional physical geometry dictated by the interactions of a local Hamiltonian $H$ in $D$-dimensions, whereas the MERA has been seen to describe a $(D+1)$-dimensional holographic geometry associated to the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ of $H$. Our presentation clearly emphasizes the main structural differences between MPS/PEPS and MERA, and interprets the decay of correlations and the scaling of entanglement entropy in terms of geometric concepts such as geodesics and regions of minimal surface within the relevant geometry, as summarized in Eqs. \ref{eq:corrTN} and \ref{eq:SAOA}. The geometrical interpretation is also the natural language to connect the MERA with the holographic principle. We conclude the present review with two brief discussions of related issues. The first is a practical warning for future tensor network practitioners. The second is a pointer to on-going developments that have been motivated by the geometric perspective described here. \subsection{MPS for critical systems in $D=1$ dimensions; and for systems in $D=2$ dimensions.} First, a word of caution on the use of geometric considerations to characterize tensor network states is in order. Our discussion has mostly focussed on the \textit{asymptotic} decay of correlations and \textit{asymptotic} scaling of entanglement entropy. In $D=1$ dimensions, this analysis pointed at the MPS as a natural representation for the ground state of gapped systems, and at the scale invariant MERA as a natural representation for the ground state of critical systems. However, this should not be understood as implying that an MPS cannot be used to study ground states of critical systems in $D=1$ dimensions, or even ground states in $D=2$ dimensional lattices. 
In a homogeneous MPS with finite bond dimension $\chi_{\text{\tiny MPS}}$, two-point correlators are indeed constrained to asymptotically decay exponentially, Eq. \ref{eq:CMPS}, whereas entanglement entropy must saturate, Eq. \ref{eq:SMPS}, thus reproducing the scaling of Eqs. \ref{eq:Cgap} and \ref{eq:1Dgap} characteristic of gapped systems. However, for some intermediate values of distance $|x_1-x_2|$ and size $L$, an MPS can still accurately approximate a polynomial decay $|x_1-x_2|^{-q}$ of correlations and a logarithmic growth $\log_2(L)$ of entanglement entropy. More specifically, a finite bond dimension $\chi_{\text{\tiny{MPS}}}$ in the MPS has been seen\cite{Tagliacozzo08, Nishino96, Pollmann09b} to introduce an artificial, finite correlation length $\xi_{\chi}$ (where $\xi_{\chi}$ depends on central charge $c$ of the CFT that describes the critical point under consideration) such that the correct scaling of correlators and entropy is reproduced for $|x_1-x_2|$ and $L$ smaller than $\xi_{\chi}$. A relatively mild scaling of computational costs with the bond dimension of the MPS, namely as $\chi_{\text{\tiny MPS}}$ to the third power, implies that very large bond dimensions (of the order of thousands) can be afforded with reasonably modest computational resources, leading to large values of the effective correlation length $\xi_{\chi}$. This, together with the use of finite size scaling techniques, make the MPS a very suitable tool to study critical ground states, which explains the success of DMRG also for critical systems\cite{White92, White93, Schollwoeck05, Schollwoeck11}. Similarly, an MPS may appear as an unlikely candidate to represent ground states of $D=2$ lattice models, since the only way it can afford reproducing the boundary law of entanglement entropy in $D=2$ dimensions, Eq. \ref{eq:boundaryLaw}, is through a bond dimension $\chi_{\text{\tiny MPS}}$ that grows exponentially in the linear size of the lattice. Once more, however, using an MPS with very large $\chi_{\text{\tiny MPS}}$ (which can again be afforded due to the relatively mild scaling of computational costs with $\chi_{\text{\tiny MPS}}$) and finite size scaling arguments, an MPS has been successfully used to study ground states of two-dimensional lattice models\cite{Liang94, White98, Xiang01, White07, Yan11}. \subsection{Beyond the entropic boundary law in $D>1$ dimensions.} The present analysis has also reminded us of an important limitation of PEPS and MERA in $D>1$ dimensions. Recall that these tensor network states are constrained to obey a strict boundary law for entanglement entropy, Eq. \ref{eq:boundaryLaw}. However, there is an important class of gapless systems in $D>1$ dimensions whose ground states display a logarithmic violation of the boundary law, Eq. \ref{eq:LogD}. These systems include Fermi gases and liquids with a $(D-1)$-dimensional Fermi surface, as well as spin Bose-metals with an analogous Bose surface\cite{Swingle09b, Swingle10, Motrunich07, Senthil08, Liu09}. How may we go about using a tensor network state to represent such ground states? In the case of PEPS, the boundary law cannot be easily overcome, since it is an intrinsic property of the physical geometry that the ansatz reproduces. 
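Before turning to possible ways around the boundary law, it may be useful to make explicit the elementary counting behind the remarks of the previous subsection (cf. the counting used in Eq. \ref{eq:upperBound}): across any cut, a bond dimension $\chi$ supports at most $\log_2(\chi)$ of entanglement entropy, so sustaining an entropy $S$ requires $\chi \geq 2^{S}$. In the sketch below the $D=1$ critical prefactor is set to one purely for illustration.
\begin{verbatim}
import math

# Counting behind the bond-dimension remarks above (cf. Eq. (upperBound)):
# across any cut, a bond dimension chi supports at most log2(chi) of
# entanglement entropy, so sustaining an entropy S requires chi >= 2**S.
for L in (4, 8, 16, 32, 64):
    S_1d_critical = math.log2(L)   # S ~ log2(L) in D=1 at criticality
                                   # (prefactor set to 1 for illustration)
    S_2d_boundary = L              # S ~ L for a D=2 boundary law
    print(f"L = {L:3d}   chi(D=1 critical) >= {2 ** S_1d_critical:6.0f}"
          f"   chi(D=2) >= 2**{S_2d_boundary}")
# The required chi grows polynomially with L in the first case (hence the
# practical success of MPS/DMRG for critical chains), but exponentially
# with the linear size L in the second.
\end{verbatim}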
Mimicking the previous discussion on the use of MPS to study critical systems, one could, perhaps, study ground states with a logarithmic violation of the boundary law with a PEPS by considering finite systems and by suitably increasing the bond dimension $\chi_{\text{\tiny PEPS}}$ with the system size. Then finite size scaling techniques could be used to extrapolate finite size results to the thermodynamic limit. However, the cost of PEPS simulations grows as a much larger power of the bond dimension $\chi_{\text{\tiny PEPS}}$ than in the case of MPS, confining $\chi_{\text{\tiny PEPS}}$ to small values and seriously limiting the viability of this strategy. \begin{figure}[!tb] \begin{center} \includegraphics[width=8.5cm]{BoundLawBranch.eps} \caption{ (Color online) Holographic branching in a $D=1$ dimensional system. (i) Graphical representation of the holographic geometry in the absence of branching. Space is labelled by the coordinate $x$, whereas the scale parameter $z$ labels the different length scales $\lambda \equiv 2^z$ in the system. (ii) A region $A$ in the $D=1$ system defines a minimally connected region $\Gamma_A^{\text{\tiny hol}}$ in the holographic geometry, as discussed in Sect. \ref{sect:entropy}. (iii) Boundary $\partial \Gamma_A^{\text{\tiny hol}}$ of the minimally connected region $\Gamma_A^{\text{\tiny hol}}$. (iv) Graphical representation of the holographic geometry in the presence of branching at the value $z=z^{\star}$ of the scale parameter, corresponding to length scale $\lambda^{\star} \equiv 2^{z^{\star}}$. (v) The same region $A$ of (ii) now gives rise to a minimally connected region $\Gamma_A^{\text{\tiny hol}}$ which is also affected by the branching (provided that the size $L$ of region $A$ is larger than the length scale $\lambda^{\star}$ at which branching occurs). (vi) As a result of the branching, the boundary $\partial \Gamma_A^{\text{\tiny hol}}$ of the minimally connected region $\Gamma_A^{\text{\tiny hol}}$ is larger. A larger boundary leads, together with Eq. \ref{eq:SAOA}, to a larger amount of entanglement entropy. } \label{fig:BoundLawBranch} \end{center} \end{figure} Similar considerations apply to the MERA: a systematic increase of bond dimension $\chi_{\text{\tiny MERA}}$ as larger systems in $D>1$ dimensions are considered seems unviable, due to the sharp increase of computational costs with $\chi_{\text{\tiny MERA}}$. However, remember that in $D=1$ dimensions the logarithmic violation of the boundary law, Eq. \ref{eq:1Dcrit}, could be interpreted as a boundary law in the holographic geometry, Eqs. \ref{eq:SAOA} and \ref{eq:SMERA1D}. This strongly suggests a possible alternative. Indeed, it is natural to wonder whether, once more, the logarithmic violations of the boundary law in $D>1$ dimensions, Eq. \ref{eq:LogD}, can follow from a boundary law in a more elaborate, yet unknown, $(D+1)$-dimensional holographic geometry. A generalized MERA that would reproduce this holographic geometry would then automatically display a logarithmic violation of the boundary law. \begin{figure}[!tb] \begin{center} \includegraphics[width=8cm]{subsume.eps} \caption{ (Color online) Schematic representation of different holographic geometries in $D=2$ dimensional systems in terms of the holographic tree introduced in Ref. \onlinecite{Evenbly11}.
(i) The holographic geometry of a gapped system has a single branch with a finite extension in the $z$ direction, namely $z\in [0,z_0]$ where $z_0 = O(\log (\xi))$ as discussed in Sect. \ref{sect:gapped}. This geometry corresponds to a finite range MERA. (ii) The holographic geometry of a gapless system that obeys the entropic boundary law may also consist of a single branch, but this extends indefinitely in the $z$ direction, as in the scale-invariant MERA. (iii) Holographic geometry with an infinite number of branching points, capable of reproducing the logarithmic violation of the boundary law characteristic of the ground state of several gapless systems, including free fermions, see Table \ref{table:fermions}. Notice that the holographic geometries of (i) and (ii) allow us to distinguish between two types of ground states, corresponding to gapped and gapless systems, that obey the boundary law. In this sense, the holographic geometry can be used to issue a more refined classification of ground states according to their pattern of entanglement. } \label{fig:subsume} \end{center} \end{figure} \subsection{Holographic branching} \label{sect:discussion:holBranch} As recently discussed in Ref. \onlinecite{Evenbly11}, it turns out that, indeed, one can engineer holographic geometries such that Eq. \ref{eq:LogD}, as well as many other forms of scaling, can be understood to follow from a holographic boundary law. A key ingredient in these holographic geometries is the presence of branching, by means of which a single $(D+1)$-dimensional geometry associated to small length scales (high energies) becomes two independent $(D+1)$-dimensional geometries at large length scales (lower energies), see Fig. \ref{fig:BoundLawBranch}(iv). Physically, holographic branching describes the decoupling of a single theory into two theories (or sets of degrees of freedom) that do not interact with each other at energy scales lower than some decoupling energy -- equivalently, at length scales larger than some decoupling length $\lambda$. Thus, at length scales smaller than $\lambda$ there is a single lattice model, whereas at length scales larger than $\lambda$, the lattice model breaks into two independent lattice models. Fig. \ref{fig:BoundLawBranch} illustrates how the presence of holographic branching affects the amount of entanglement entropy in the ground state. The holographic region $\Omega_A$ associated with a physical region $A$ of linear size $L$ larger than $\lambda$ also branches into two pieces. As a result, the entropy $S(A)$ receives contributions from two pieces of the boundary $\partial \Omega_A$. As discussed in Ref. \onlinecite{Evenbly11}, it turns out that a sequence of holographic branchings occurring at different length scales, as represented by a branching tree (see Fig. \ref{fig:subsume}(iii) for an example), leads to a wide range of forms of scaling for the entanglement entropy $S(A)$ of a region $A$ in the original lattice, including Eq. \ref{eq:LogD}. The resulting tensor network state, the branching MERA, reproduces these more elaborate holographic geometries and has been shown to efficiently represent, e.g., the ground state of a $D=2$ dimensional fermionic lattice model with a one-dimensional Fermi surface. The study of holographic geometries with (possibly multiple) branching points opens up a number of exciting new possibilities, presently under consideration. On the one hand, it motivates a revision of the RG flow and its structure of fixed points.
As we progress towards low energies, a single theory may branch into (perhaps infinitely many) other theories. In particular, new fixed points of this revised RG flow, including branching at all length scales, seem to include certain $D=2$ dimensional systems with a one-dimensional Fermi surface. On the other hand, the holographic geometry also offers a new venue to characterize entanglement of ground states. The pattern of branching (as given by a holographic tree) of a ground state $\ket{\Psi_{\mbox{\tiny GS}}}$, as well as the extent of each branch in the scale direction $z$, leads to a new classification of ground states that subsumes the one provided by considering only the scaling of the entanglement entropy $S(A)$ of a region $A$ of the lattice, see Fig. \ref{fig:subsume}. For instance, while gapped systems and some gapless systems in $D>1$ dimensions cannot be distinguished by the scaling of entanglement entropy (since they all obey the boundary law of Eq. \ref{eq:boundaryLaw}), their holographic geometry is clearly distinct. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164. The authors also acknowledge support from the Australian Research Council under Grants FF0668731 and DP1092513. \begin{equation}gin{thebibliography}{150} \bibitem{Fannes92} M. Fannes, B. Nachtergaele, and R. F. Werner, Commun. Math. Phys. 144, 443 (1992). \bibitem{Ostlund95} S. Ostlund and S. Rommer, Phys. Rev. Lett. 75, 3537 (1995). \bibitem{Rommer97} S. Rommer and S. Ostlund, Phys. Rev. B 55, 2164 (1997). \bibitem{Perez-Garcia07} D. Perez-Garcia, F. Verstraete, M. M.Wolf, and J. I. Cirac, Quant. Inf. Comput. 7, 401 (2007). \bibitem{White92} S.R. White, Phys. Rev. Lett. 69, 2863 (1992). \bibitem{White93} S.R. White, Phys. Rev. B, 48, 10345 (1993). \bibitem{Schollwoeck05} U. Schollwoeck, Rev. Mod. Phys., 77, 259 (2005). \bibitem{Schollwoeck11} U. Schollwoeck, Ann. of Phys. 326, 96 (2011). \bibitem{Vidal03} G. Vidal, Phys. Rev. Lett., 91, 147902 (2003). \bibitem{Vidal04} G. Vidal, Phys. Rev. Lett., 93, 040502 (2004). \bibitem{Daley04} A. J. Daley, C. Kollath, U. Schollweock, and G. Vidal, J. Stat. Mech. Theor. Exp., P04005 (2004). \bibitem{White04} S. R. White and A. E. Feiguin, Phys. Rev. Lett., 93, 076401 (2004). \bibitem{Shi06} Y. Shi, L.-M. Duan and G. Vidal, Phys. Rev. A, 74, 022320 (2006). \bibitem{Alba11} V. Alba, L. Tagliacozzo, P. Calabrese, arXiv:1103.3166v1 [cond-mat.stat-mech] \bibitem{Vidal07} G. Vidal, Phys. Rev. Lett., 99, 220405 (2007). \bibitem{Vidal08} G. Vidal, Phys. Rev. Lett., 101, 110501 (2008). \bibitem{Evenbly09} G. Evenbly and G. Vidal, Phys. Rev. B, 79, 144108 (2009). \bibitem{Giovannetti08} V. Giovannetti, S. Montangero, and R. Fazio, Phys. Rev. Lett., 101, 180503 (2008). \bibitem{Pfeifer09} R.N.C. Pfeifer, G. Evenbly, and G. Vidal, Phys. Rev. A, 79, 040301(R) (2009). \bibitem{Vidal10} G. Vidal, in \textit{Understanding Quantum Phase Transitions}, edited by L. D. Carr (Taylor $\&$ Francis, Boca Raton, 2010). \bibitem{Verstraete04} F. Verstraete, and J. I. Cirac, arXiv:cond-mat/0407066v1 (2004). \bibitem{Sierra98} G. Sierra and M.A. Martin-Delgado, arXiv:cond-mat/9811170v3 (1998). \bibitem{Nishino98} T. Nishino and K. Okunishi, J. Phys. Soc. Jpn., 67, 3066, 1998. \bibitem{Nishio04} Y. Nishio, N. Maeshima, A. Gendiar, and T. Nishino, arXiv:condmat/0401115. \bibitem{Murg07} V. Murg, F. Verstraete, and J. I. Cirac, Phys. Rev. A, 75, 033605 (2007). \bibitem{Jordan08} J. Jordan, R. Orus, G. Vidal, F. Verstraete, and J. I. Cirac, Phys. 
Rev. Lett., 101, 250602 (2008). \bibitem{Gu08} Z.-C. Gu, M. Levin, and X.-G. Wen, Phys. Rev. B, 78, 205116 (2008). \bibitem{Jiang08} H. C. Jiang, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett., 101, 090603 (2008). \bibitem{Xie09} Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett., 103, 160601 (2009). \bibitem{Murg09} V. Murg, F. Verstraete, and J. I. Cirac, Phys. Rev. B, 79, 195119 (2009). \bibitem{Tagliacozzo09} L. Tagliacozzo, G. Evenbly, and G. Vidal, Phys. Rev. B 80, 235127 (2009). \bibitem{Murg10} V. Murg, F. Verstraete, O. Legeza, and R. M. Noack, Phys. Rev. B 82, 205105 (2010). \bibitem{Evenbly10} G. Evenbly and G. Vidal, Phys. Rev. B, 81, 235102 (2010). \bibitem{Evenbly10b} G. Evenbly and G. Vidal, New J. Phys., 12, 025007 (2010). \bibitem{Aguado08} M. Aguado and G. Vidal, Phys. Rev. Lett., 100, 070404 (2008). \bibitem{Cincio08} L. Cincio, J. Dziarmaga, and M. M. Rams Phys. Rev. Lett., 100, 240603 (2008). \bibitem{Evenbly09b} G. Evenbly and G. Vidal, Phys. Rev. Lett., 102, 180406 (2009). \bibitem{Koenig09} R. Koenig, B.W. Reichardt, and G. Vidal, Phys. Rev. B, 79, 195123 (2009). \bibitem{Evenbly10c} G. Evenbly and G. Vidal, Phys. Rev. Lett., 104, 187203 (2010). \bibitem{Corboz10} P. Corboz, G. Evenbly, F. Verstraete, and G. Vidal, Phys. Rev. A, 81, 010303(R) (2010). \bibitem{Kraus10} C. V. Kraus, N. Schuch, F. Verstraete, and J. I. Cirac, Phys. Rev. A, 81, 052338 (2010). \bibitem{Pineda10} C. Pineda, T. Barthel, and J. Eisert, Phys. Rev. A, 81, 050303(R) (2010). \bibitem{Corboz09} P. Corboz and G. Vidal, Phys. Rev. B, 80, 165129 (2009). \bibitem{Barthel09} T. Barthel, C. Pineda, and J. Eisert, Phys. Rev. A, 80, 042333 (2009). \bibitem{Shi09} Q.-Q. Shi, S.-H. Li, J.-H. Zhao, and H.-Q. Zhou, arXiv:0907.5520v1 [cond-mat.str-el] (2009). \bibitem{Li10} S.-H. Li, Q.-Q. Shi, H.-Q. Zhou, arXiv:1001.3343v1 [cond-mat.supr-con] (2010). \bibitem{Corboz10b} P. Corboz, R. Orus, B. Bauer, and G. Vidal, Phys. Rev. B, 81, 165104 (2010). \bibitem{Pizorn10} I. Pizorn and F. Verstraete, Phys. Rev. B, 81, 245110 (2010). \bibitem{Gu10} Z.-C. Gu, F. Verstraete, and X.-G. Wen, arXiv:1004.2563v1 [cond-mat.str-el] (2010). \bibitem{Corboz10c} P. Corboz, J. Jordan and G. Vidal, Phys. Rev. B 82, 245119 (2010). \bibitem{Pollmann09} F. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010). \bibitem{Chen11} X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B , 035107 (2011). \bibitem{Schuch10} N. Schuch, D. Perez-Garcia, I. Cirac, arXiv:1010.3732. \bibitem{Chen11b} X. Chen, Z.-C. Gu, X.-G. Wen, arXiv:1103.3323. \bibitem{Hastings07} M. B. Hastings, J. Stat. Mech. , P08024 (2007). \bibitem{Hastings07b} M. B. Hastings, Phys. Rev. B 76, 035114 (2007). \bibitem{Buerschaper09} O. Buerschaper, M. Aguado, G. Vidal, Phys. Rev. B 79, 085119 (2009) \bibitem{Gu09} Z.-C. Gu, M. Levin, B. Swingle, X.-G. Wen, Phys. Rev. B 79, 085118 (2009) \bibitem{Evenbly11} G. Evenbly and G. Vidal, \textit{Branching MERA}, in preparation. \bibitem{Hastings04} M. B. Hastings, Phys. Rev. B 69, 104431 (2004). \bibitem{Sachdev99} S. Sachdev, \textit{Quantum Phase Transitions}, Cambridge Univ. Press, Cambridge (1999) \bibitem{DiFrancesco97} P. Di Francesco, P. Mathieu, and D. Senechal, \textit{Conformal Field Theory}, Springer-Verlag, New York, 1997. \bibitem{Srednicki93} M. Srednicki, Phys. Rev. Lett. 71 (1993) 666-669 \bibitem{Latorre04} J. I. Latorre, E. Rico, G. Vidal, Quant. Inf. Comput. 4 (2004) 48-92. \bibitem{Plenio05} M.B. Plenio, J. Eisert, J. Dreissig, and M. Cramer Rev. Lett. 
94, 060503 (2005). \bibitem{Bravyi06} S. Bravyi, M. B. Hastings, and F. Verstraete, Phys. Rev. Lett. 97, 050401 (2006). \bibitem{Eisert06} J. Eisert, T. J. Osborne, Phys. Rev. Lett. 97, 150404 (2006). \bibitem{Hastings07c} M. B. Hastings, JSTAT, P08024 (2007). \bibitem{Masanes09} Lluis Masanes, Phys. Rev. A 80, 052104 (2009). \bibitem{Eisert10} J. Eisert, M. Cramer, and M.B. Plenio, Rev. Mod. Phys. 82, 277 (2010). \bibitem{Holzhey94} C. Holzhey, F. Larsen and F.Wilczek, Nucl. Phys. B 424 (1994) 443-467. \bibitem{Callan94} C. G. Callan and F. Wilczek, Phys. Lett. B333 (1994) 55-61. \bibitem{Fiola94} T. M. Fiola, J. Preskill, A. Strominger and S. P. Trivedi, Phys. Rev. D 50 (1994) 3987-4014. \bibitem{Vidal03b} G. Vidal et al, Phys. Rev. Lett. 90 (2003) 227902. \bibitem{Wolf06} M. M. Wolf, Phys. Rev. Lett. 96, 010404 (2006). \bibitem{Gioev06} D. Gioev, I. Klich, Phys. Rev Lett. 96, 100503 (2006). \bibitem{Li06} W. Li, L. Ding, R. Yu, T. Roscilde, and S. Haas, Phys. Rev. B 74, 073103 (2006). \bibitem{Barthel06} T. Barthel, M.-C. Chung, U. Schollwoeck, Phys. Rev. A 74, 022329 (2006). \bibitem{Swingle10} B. Swingle, arXiv:1002.4635 (2010). \bibitem{Swingle09b} B. Swingle, arXiv:0908.1724 (2009). \bibitem{Motrunich07} O. Motrunich and M. Fisher, Phys. Rev. B 75, 235116 (2007). \bibitem{Senthil08} T. Senthil, Phys. Rev. B 78, 035103 (2008). \bibitem{Liu09} H. Liu, J. McGreevy, and D. Vegh, arXiv:0903.2477 (2009). \bibitem{Jin04} B.-Q. Jin and V.E. Korepin, Jour. Stat. Phys. 116, 79-95 (2004). \bibitem{Calabrese04} P. Calabrese and J. Cardy, J. Stat. Mech. P06002 (2004). \bibitem{Calabrese05} P. Calabrese and J. Cardy, Int. J. Quant. Inf. 4, 429 (2006). \bibitem{Kitaev06} A. Kitaev, J. Preskill, Phys. Rev. Lett. 96 (2006) 110404. \bibitem{Levin06} M. Levin, X.-G. Wen, Phys. Rev. Lett., 96, 110405 (2006) \bibitem{Maldacena98} J. M. Maldecena, Adv. Theor. Math. Phys. 2, 231 (1998). \bibitem{Gubser98} S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, Phys. Lett. B 428, 105 (1998). \bibitem{Witten98} E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998). \bibitem{Swingle09} B. Swingle, arXiv:0905.1317 \bibitem{Verstraete06} F. Verstraete, M. M. Wolf, D. Perez-Garcia, J. I. Cirac, Phys. Rev. Lett. 96, 220601 (2006). \bibitem{Vidal08pre} The scaling of entanglement entropy in the MERA for $D=1$ and $D>1$ was first presented in the pre-print manuscript [G. Vidal, arXiv:quant-ph/0610099v1] and was excluded from the journal version, Ref. \onlinecite{Vidal08}, due to space limitations. \bibitem{Ryu06} S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602 (2006). \bibitem{Ryu06b} S. Ryu and T. Takayanagi, JHEP 0608 (2006) 045. \bibitem{Nishioka09} T. Nishioka, S. Ryu, T. Takayanagi, J. Phys. A 42, 504008 (2009). \bibitem{MERAtoMPS} A related expression, namely $\chi_{\text{\tiny MPS}} \approx (\chi_{\text{\tiny MERA}})^{\log_2 N}$, was first quoted in the refereences of Ref. \onlinecite{Vidal08}, where it was erroneously written as $\chi_{\text{\tiny MPS}} \approx (\chi_{\text{\tiny MERA}}){\log_2 N}$. \bibitem{Barthel10} T. Barthel, M. Kliesch, J. Eisert, Phys. Rev. Lett. 105, 010502 (2010). \bibitem{Tagliacozzo08} L. Tagliacozzo, Thiago. R. de Oliveira, S. Iblisdir, J. I. Latorre, Phys. Rev. B 78, 024410 (2008). \bibitem{Nishino96} T. Nishino, K. Okunishi and M. Kikuchi, Phys. Lett. A 213, 69 (1996). \bibitem{Pollmann09b} F. Pollmann, S. Mukerjee, A. Turner, J. E. Moore, Phys. Rev. Lett. 102, 255701 (2009) \bibitem{Liang94} S. Liang and H. Pang, Physical Review B 49, 9214 (1994). \bibitem{White98} S. R. 
White and D. J. Scalapino, Phys. Rev. Lett. 80, 1272 (1998). \bibitem{Xiang01} T. Xiang, J. Lou, and Z. Su, Physical Review B 64, 104414 (2001). \bibitem{White07} S. R. White and A. L. Chernyshev, Phys. Rev. Lett. 99, 127004 (2007). \bibitem{Yan11} S. Yan, D. A. Huse, S. R. White, arXiv:1011.6114v1. \bibitem{entropyCount} The reduced density matrix $\rho_A$ of Eq. \ref{eq:rhoA} can be seen to have a rank $r(\rho_A)$ upper bounded by the total dimension of the $n(A)$ indices connecting parts $A$ and $B$ of the wavefunction, that is $r(\rho_A) \leq \chi^{n(A)}$, where $\chi$ is the dimension of each index. Then Eq. \ref{eq:upperBound} follows from the basic entropic inequality $ \mbox{tr}(\rho_A \log_2 \rho_A) \leq r(\rho_A)$, which is saturated when all the eigenvalues of $\rho_A$ are equal to $1/r(\rho_A)$. \bibitem{otherFiniteRangeMERA} The finite range MERA can also be used, in combination with another tensor network, to describe other gapped phases of matter, such as (possibly symmetry protected) topologically ordered phases, that are characterized by some entangled fixed-point ground state $\ket{\Phi_{\mbox{\tiny f.p.}}}$ instead of $\ket{\text{prod}}$. In this case, the finite range MERA is used to coarse-grain the ground state $\ket{\Psi_{\mbox{\tiny GS}}}$ into $\ket{\Phi_{\mbox{\tiny f.p.}}}$, which is then represented by some other tensor network (e.g. an MPS, PEPS or scale invariant MERA). Fig. \ref{fig:frMERA}(ii) shows an example in D=1 dimensions. \bibitem{corrMERA} Strictly speaking, the exponential decay of correlations in the finite range MERA does not accept a simple geometric interpretation. On the one hand, the ansatz is not homogenerous (that is, different tensors describe different length scales) and therefore the relation between the distance $D_{\text{hol}}(x_1,x_2)$ and a correlation function $C(x_1,x_2)$ is no longer be given by Eq. \ref{eq:corrTN}. On the other hand, in a finite range MERA all correlations $C(x_1,x_2)$ for distances $|x_1-x_2|$ larger than $O(2^{z_0})$ are zero, as a result of the isometric character of tensors in the MERA. This last limitation disappears if the tensors at the top layer of the MERA are not constrained to be isometric. \bibitem{entMERA} In the MERA for a ground state near a quantum critical point, most of the entanglement entropy in a block of size $L\gg \xi$ comes from the bond indices corresponding from the first $z_0\equiv \log_2(\xi)$ layers of the finite range MERA, which make approximately equal contributions, so that the entropy saturates to a constant that depends on the correlation length $\xi$ as $S_{\text{\tiny{MERA}}}(A)\approx \log_2(\xi)$. \bibitem{PEPSinhomo} The PEPS resulting from rewriting a scale invariant MERA in $D>1$ is significantly inhomogeneous (it requires infinitely many different tensors, one for each value of the scale parameter $z$) and its bond indices have to communicate correlations at all length scales. As a result, its practical use as a variational ansatz is not clear. \end{thebibliography} \end{document}
\begin{document} \title{Designing Progressive Dinner Parties} \begin{abstract} I recently came across a combinatorial design problem involving progressive dinner parties (also known as safari suppers). In this note, I provide some elementary methods of designing schedules for these kinds of dinner parties. \end{abstract} \section{The Problem} \label{intro.sec} A simple form of \emph{progressive dinner party} could involve three couples eating a three-course dinner, with each couple hosting one course. I received email from Julian Regan asking if there was a nice way to design a more complicated type of progressive dinner party, which he described as follows: \begin{quote}The event involves a number of couples having each course of a three-course meal at a different person's house, with three couples at each course, every couple hosting once and no two couples meeting more than once. \end{quote} Let us represent each couple by a \emph{point} $x \in X$ and each course of each meal by a \emph{block} consisting of three points. Suppose there are $v$ points (i.e., couples). Evidently we want a collection of blocks of size three, say $\mathcal{B}$, such that the following conditions are satisfied: \begin{enumerate} \item The blocks can be partitioned into three parallel classes, each consisting of $v/3$ disjoint blocks. (Each parallel class corresponds to a specific course of the meal.) Hence, there are a total of $v$ blocks and we require $v \equiv 0 \bmod 3$. \item No pair of points occurs in more than one block. \item There is a bijection $h : \mathcal{B} \rightarrow X$ such that $h(B) \in B$ for all $B \in \mathcal{B}$. (That is, we can identify a \emph{host} for each block in such a way that each point occurs as a host exactly once.) \end{enumerate} We will refer to such a collection of blocks as a PDP$(v)$. It is not hard to see that a PDP$(v)$ does not exist if $v=3$ or $v=6$, because we cannot satisfy condition 2. However, for all larger values of $v$ divisible by three, we show in Section \ref{solns.sec} that it is possible to construct a PDP$(v)$. Section \ref{generalization.sec} considers a generalization of the problem in which there are $k$ courses and $k$ couples present at each course, and gives a complete solution when $k = 4$ or $k=5$. \section{Two Solutions} \label{solns.sec} We begin with a simple construction based on latin squares. A \emph{latin square} of order $n$ is an $n$ by $n$ array of $n$ symbols, such that each symbol occurs in exactly one cell in each row and each column of the array. A \emph{transversal} of a latin square of order $n$ is a set of $n$ cells, one from each row and each column, that contain $n$ different symbols. Two transversals are \emph{disjoint} if they do not contain any common cells. \begin{lemma} \label{LS.lem} Suppose there is a latin square of order $w$ that contains three disjoint transversals. Then there is a PDP$(3w)$. \end{lemma} \begin{proof} Let $L$ be a latin square of order $w$ that contains disjoint transversals $T_1, T_2$ and $T_3$. Let the rows of $L$ be indexed by $R$, let the columns be indexed by $C$ and let the symbols be indexed by $S$. We assume that $R$, $C$ and $S$ are three mutually disjoint sets. Each transversal $T_i$ consists of $w$ ordered pairs in $R \times C$. We will construct a PDP$(3w)$ on points $X = R \cup C \cup S$. 
For $1 \leq i \leq 3$, we construct a parallel class $P_i$ as follows: \[ P_i = \{ \{r,c,L(r,c)\} : (r,c) \in T_i \}.\] Finally, for any block $B = \{r,c,s\}\in P_1 \cup P_2 \cup P_3$, we define $h(B)$ as follows: \begin{itemize} \item if $B \in P_1$, then $h(B) = r$ \item if $B \in P_2$, then $h(B) = c$ \item if $B \in P_3$, then $h(B) = s$. \end{itemize} The verifications are straightforward. \begin{itemize} \item First, because each $T_i$ is a transversal, it is clear that each $P_i$ is a parallel class. \item No pair of points $\{r,c\}$ occurs in more than one block because the three transversals are disjoint. \item Suppose a pair of points $\{r,s\}$ occurs in more than one block. Then there are cells $(r,c) \in T_i$ and $(r,c') \in T_j$ such that $L(r,c) = L(r,c')$. $T_i$ and $T_j$ are disjoint, so $c \neq c'$. But then we have two occurrences of the same symbol in row $r$ of $L$, which contradicts the assumption that $L$ is a latin square. \item The argument that no pair of points $\{c,s\}$ occurs in more than one block is similar. \item Finally, the mapping $h$ satisfies property 3 because each $T_i$ is a transversal. \end{itemize} \end{proof} \begin{theorem} There is a PDP$(3w)$ for all $w \geq 3$. \end{theorem} \begin{proof} If $w \geq 3$, $w \neq 6$, there is a pair of orthogonal latin squares of order $w$. It is well-known that a pair of orthogonal latin squares of order $w$ is equivalent to a latin square of order $w$ that contains $w$ disjoint transversals (see, e.g., \cite[p.\ 162]{HCD}). Since $w \geq 3$, we have three disjoint transversals and we can apply Lemma \ref{LS.lem} to obtain a PDP$(3w)$. There does not exist a pair of orthogonal latin squares of order 6, but there is a latin square of order $6$ that contains four disjoint transversals (see, e.g., \cite[p.\ 193]{HCD}). So we can also use Lemma \ref{LS.lem} to construct a PDP$(18)$. \end{proof} \begin{example} \label{12.ex} We construct a PDP$(12)$. Start with a pair of orthogonal latin squares of order $4$: \[ \begin{array}{rr} L_1 = \begin{array}{|c|c|c|c|} \hline 1 & 3 & 4 & 2\\ \hline 4 & 2 & 1 & 3\\ \hline 2 & 4 & 3 & 1\\ \hline 3 & 1 & 2 & 4\\ \hline \end{array}\, , & L_2 = \begin{array}{|c|c|c|c|} \hline 1 & 4 & 2 & 3\\ \hline 3 & 2 & 4 & 1\\ \hline 4 & 1 & 3 & 2\\ \hline 2 & 3 & 1 & 4\\ \hline \end{array}. \end{array} \] Each symbol in $L_2$ gives us a transversal in $L_1$. Suppose we index the rows by $r_i$ ($1 \leq i \leq 4$) and the columns by $c_j$ ($1 \leq j \leq 4$). From symbols $1,2$ and $3$, we obtain the following three disjoint transversals in $L_1$: \begin{align*} T_1 &= \{ (r_1,c_1), (r_2,c_4), (r_3,c_2), (r_4,c_3)\}\\ T_2 &= \{ (r_1,c_3), (r_2,c_2), (r_3,c_4), (r_4,c_1)\}\\ T_3 &= \{ (r_1,c_4), (r_2,c_1), (r_3,c_3), (r_4,c_2)\}. \end{align*} Suppose we relabel the points as $1, \dots , 12$, replacing $r_1,\dots , r_4$ by $1, \dots , 4$; replacing $c_1,\dots , c_4$ by $5, \dots , 8$; and replacing the symbols $1,\dots , 4$ by $9, \dots , 12$. Then we obtain the following PDP$(12)$, where the hosts are indicated in red: \begin{align*} P_1 &= \{ \{\textcolor{red}{1},5,9\}, \{\textcolor{red}{2},8,11\}, \{\textcolor{red}{3},6,12\}, \{\textcolor{red}{4},7,10\} \} \\ P_2 &= \{ \{1,\textcolor{red}{7},12\}, \{2,\textcolor{red}{6},10\} , \{3,\textcolor{red}{8}, 9\}, \{4,\textcolor{red}{5},11\} \} \\ P_3 &= \{ \{1,8,\textcolor{red}{10}\} , \{2,5,\textcolor{red}{12}\} , \{3,7, \textcolor{red}{11}\} , \{4,6,\textcolor{red}{9}\} \}. \end{align*} \end{example} Of course, using a pair of orthogonal latin squares is overkill.
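The construction of Lemma \ref{LS.lem} is also easy to check by computer. The following short Python sketch (added here for illustration; it is not part of the original construction, and the variable names are ours) rebuilds the PDP$(12)$ of Example \ref{12.ex} from $L_1$ and the three transversals, and verifies the three defining properties of a PDP.
\begin{verbatim}
# Sketch: build the PDP(12) of the example from L1 and three disjoint
# transversals, then verify properties 1-3 of a PDP.
from itertools import combinations

L1 = [[1, 3, 4, 2], [4, 2, 1, 3], [2, 4, 3, 1], [3, 1, 2, 4]]
# transversals given as lists of (row, column) cells, 0-indexed
T = [[(0, 0), (1, 3), (2, 1), (3, 2)],
     [(0, 2), (1, 1), (2, 3), (3, 0)],
     [(0, 3), (1, 0), (2, 2), (3, 1)]]

# relabel: rows -> 1..4, columns -> 5..8, symbols -> 9..12
classes = [[(r + 1, c + 5, L1[r][c] + 8) for (r, c) in Ti] for Ti in T]
blocks = [b for P in classes for b in P]
hosts = [b[i] for i, P in enumerate(classes) for b in P]  # row/col/symbol

# property 1: each class partitions the 12 points
assert all(sorted(x for b in P for x in b) == list(range(1, 13))
           for P in classes)
# property 2: no pair of points occurs in more than one block
pairs = [p for b in blocks for p in combinations(sorted(b), 2)]
assert len(pairs) == len(set(pairs))
# property 3: the hosts form a system of distinct representatives
assert sorted(hosts) == list(range(1, 13))
print(classes)
\end{verbatim}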
It would perhaps be easier just to give explicit formulas to construct a PDP. Here is one simple solution that works for all $v \geq 9$ such that $v \equiv 0 \bmod 3$ and $v \neq 12$. \begin{theorem} \label{direct.thm} Let $w \geq 3$, $w \neq 4$, and let $X = \ensuremath{\mathbb{Z}}_w \times \{0,1,2\}$. Define the following three parallel classes: \begin{align*} P_0 &= \{ \{(0,0), (0,1), (0,2)\} \bmod w\}\\ P_1 &= \{ \{(0,0), (1,1), (2,2)\} \bmod w\}\\ P_2 &= \{ \{(0,0), (2,1), (4,2)\} \bmod w\}. \end{align*} For any block $B = \{(i,0), (j,1), (k,2)\} \in P_0 \cup P_1 \cup P_2$, define $h(B)$ as follows. \begin{itemize} \item if $B \in P_0$, then $h(B) = (i,0)$ \item if $B \in P_1$, then $h(B) = (j,1)$ \item if $B \in P_2$, then $h(B) = (k,2)$. \end{itemize} Then $P_0, P_1, P_2,$ and $h$ yield a PDP$(3w)$. \end{theorem} \begin{proof} It is clear that each $P_i$ is a parallel class because we are developing a base block modulo $w$ and each base block contains one point with each possible second coordinate. For the same reason, the mapping $h$ satisfies property 3. Consider the differences $(y -x) \bmod w$ that occur between pairs of points $\{ (x,0), (y,1)\}$. We obtain all pairs with differences $0,1$ and $2$ when we develop the three base blocks. The same thing happens when we look at the differences $(y -x) \bmod w$ between pairs of points $\{ (x,1), (y,2)\}$. Finally, consider the differences $(y -x) \bmod w$ that occur between pairs of points $\{ (x,0), (y,2)\}$. We obtain all pairs with differences $0,2$ and $4$ modulo $w$ when we develop the three base blocks. Since $w \neq 4$, these differences are distinct and the pairs obtained by developing the base blocks are also distinct. \end{proof} If $w = 4$, then the construction given in Theorem \ref{direct.thm} does not yield a PDP$(12)$, because various pairs occur in more than one block. For example, the pair $\{(0,0), (0,2)\}$ occurs in a block of $P_0$ as well as in a block of $P_2$. \begin{example} We apply Theorem \ref{direct.thm} with $w=5$. The three parallel classes, with hosts in red, are: \[ \begin{array}{c|c|c} P_0 & P_1 & P_2 \\ \hline \{ \textcolor{red}{(0,0)}, (0,1), (0,2) \} & \{ (0,0), \textcolor{red}{(1,1)}, (2,2) \} & \{ (0,0), (2,1), \textcolor{red}{(4,2)} \} \\ \{ \textcolor{red}{(1,0)}, (1,1), (1,2) \} & \{ (1,0), \textcolor{red}{(2,1)}, (3,2) \} & \{ (1,0), (3,1), \textcolor{red}{(0,2)} \} \\ \{ \textcolor{red}{(2,0)}, (2,1), (2,2) \} & \{ (2,0), \textcolor{red}{(3,1)}, (4,2) \} & \{ (2,0), (4,1), \textcolor{red}{(1,2)} \} \\ \{ \textcolor{red}{(3,0)}, (3,1), (3,2) \} & \{ (3,0), \textcolor{red}{(4,1)}, (0,2) \} & \{ (3,0), (0,1), \textcolor{red}{(2,2)} \} \\ \{ \textcolor{red}{(4,0)}, (4,1), (4,2) \} & \{ (4,0), \textcolor{red}{(0,1)}, (1,2) \} & \{ (4,0), (1,1), \textcolor{red}{(3,2)} \} \end{array} \] \end{example} \subsection{Finding Hosts} The specific constructions that we provided in Section \ref{solns.sec} led to a very simple method to identify hosts. However, no matter what collection of three parallel classes we use, it will be possible to define hosts in such a way that property 3 of a PDP will be satisfied. \begin{theorem} Suppose that $P_1,P_2$ and $P_3$ are three parallel classes of blocks of size three, containing points from a set $X$ of size $v \equiv 0 \bmod 3$. Then we can define a mapping $h$ that satisfies property 3. \end{theorem} \begin{proof} Construct the bipartite point-block incidence graph of the design. The nodes in this graph are all the elements of $X \cup \mathcal{B}$. 
For $x \in X$ and $B\in \mathcal{B}$, we create an edge from $x$ to $B$ if and only if $x \in B$. The resulting graph is a 3-regular bipartite graph and hence it has a perfect matching $M$ (this is a corollary of Hall's Theorem, e.g., see \cite[Corollary 16.6]{BM}). For every $B \in \mathcal{B}$, define $h(B) = x$, where $x$ is the point matched with $B$ in the matching $M$. \end{proof} The following corollary is immediate. \begin{corollary} Suppose that $P_1,P_2$ and $P_3$ are three parallel classes of blocks of size three, containing points from a set $X$ of size $v \equiv 0 \bmod 3$. Suppose also that no pair of points occurs in more one block in $\mathcal{B} = P_1 \cup P_2 \cup P_3$. Then there is a PDP$(v)$. \end{corollary} \section{A Generalization} \label{generalization.sec} Suppose we now consider a generalization where meals have $k$ courses and each course includes $k$ couples. We define a PDP$(k,v)$ to be a set of blocks of size $k$, defined on a set of $v$ points, which satisfies the following properties: \begin{enumerate} \item The blocks can be partitioned into $k$ parallel classes, each consisting of $v/k$ disjoint blocks. Hence, there are a total of $v$ blocks and we require $v \equiv 0 \bmod k$. \item No pair of points occurs in more than one block. \item There is a bijection $h : \mathcal{B} \rightarrow X$ such that $h(B) \in B$ for all $B \in \mathcal{B}$. \end{enumerate} The problem we considered in Section \ref{intro.sec} was just the special case $k=3$ of this general definition. Here is a simple necessary condition for existence of a PDP$(k,v)$. \begin{lemma} If a PDP$(k,v)$ exists, then $v \geq k^2$. \end{lemma} \begin{proof} A given point $x$ occurs in $k$ blocks, each having size $k$. The points in these blocks (excluding $x$) must be distinct. Therefore, \[v \geq k(k-1)+ 1 = k^2 - (k-1).\] Since $k$ divides $v$, we must have $v \geq k^2$. \end{proof} We have the following results that are straightforward generalizations of our results from Section \ref{solns.sec}. The first three of these results are stated without proof. \begin{lemma} \label{LS-k.lem} Suppose there are $k-2$ orthogonal latin squares of order $w$ that contain $k$ disjoint common transversals. Then there is a PDP$(k,kw)$. \end{lemma} \begin{corollary} \label{LS-k.cor} Suppose there are $k-1$ orthogonal latin squares of order $w$. Then there is a PDP$(k,kw)$. \end{corollary} \begin{theorem} \label{hosts-k.thm} Suppose that $P_1, \dots , P_k$ are $k$ parallel classes of blocks of size $k$, containing points from a set $X$ of size $v \equiv 0 \bmod k$. Then we can define a mapping $h$ that satisfies property 3. \end{theorem} Our last construction generalizes Theorem \ref{direct.thm}. \begin{theorem} \label{direct-k.thm} Let $w \geq k \geq 3$. Suppose that the following condition holds: \begin{equation} \label{num.eq} \text{There is no factorization $w = st$ with $2 \leq s \leq k-1$ and $2 \leq t \leq k-1$.} \end{equation} Then there is a PDP$(k,kw)$. \end{theorem} \begin{proof} Define $X = \ensuremath{\mathbb{Z}}_w \times \{0,\dots , k-1\}$ and define the following $k$ parallel classes, $P_0, \dots , P_{k-1}$: \begin{align*} P_i &= \{ \{(0,0), (i,1), (2i,2), \dots , ((k-1)i, k-1)\} \bmod w\}, \end{align*} for $i = 0, \dots , k-1$. Finally, define the mapping $h$ as follows. For any block $B \in P_{\ell}$, define $h(B) = (x,\ell)$, where $(x,\ell)$ is the unique point in $B$ having second coordinate equal to $\ell$. Then $P_0, \dots , P_{k-1}$ and $h$ yield a PDP$(k,kw)$. 
Most of the verifications are straightforward, but it would perhaps be useful to see how condition (\ref{num.eq}) arises. Consider the differences $(y -x) \bmod w$ that occur between pairs of points $\{ (x,c), (y,c+d)\}$, where $c$ and $d$ are fixed, $0 \leq c \leq k-2$, $1 \leq d \leq k-c-1$. These difference are \[0,d,2d, \dots ,(k-1)d \bmod w,\] where $0 < d \leq k-1$. We want all of these differences to be distinct. Suppose that \[ id \equiv jd \bmod w\] where $0 \leq j < i \leq k-1$. Then \[ (i-j)d \equiv 0 \bmod w.\] Hence, \[ ed \equiv 0 \bmod w\] where $0 < e \leq k-1$ and $0 < d \leq k-1$. Then, it not hard to see that $w$ can be factored as the product of two positive integers, both of which are at most $k-1$. Conversely, suppose such a factorization exists, say $w = st$. Then the pair $\{ (0,0) , (0,t)\}$ occurs in a block in $P_0$ and again in a block in $P_s$. \end{proof} Observe that condition (\ref{num.eq}) of Theorem \ref{direct-k.thm} holds if $w$ is prime or if $w > (k-1)^2$. Therefore we have the following corollary of Theorem \ref{direct-k.thm}. \begin{corollary} \label{direct-k.cor} Let $w \geq k \geq 3$. Suppose that $w$ is prime or $w > (k-1)^2$. Then there is a PDP$(k,kw)$. \end{corollary} In general, some values of $w$ will be ruled out (in the sense that Theorem \ref{direct-k.thm} cannot be applied) for a given value of $k$. For example, as we have already seen in the previous section, we cannot take $w=4$ in Theorem \ref{direct-k.thm} if $k = 3$. However, a PDP$(12)$ was constructed by a different method in Example \ref{12.ex}. We have the following complete results for $k=4$ and $k=5$. \begin{theorem} \label{k=4,5.thm} There is a PDP$(4,4w)$ if and only if $w \geq 4$. Further, there is a PDP$(5,5w)$ if and only if $w \geq 5$. \end{theorem} \begin{proof} For $k=4$, we proceed as follows. Theorem \ref{direct-k.thm} yields a PDP$(4,4w)$ for all $w \geq 4$, $w\neq 4, 6, 9$. Theorem \ref{LS-k.cor} provides a PDP$(4,16)$ and a PDP$(4,36)$ since three orthogonal latin squares of orders $4$ and $9$ are known to exist (see \cite{HCD}). The last case to consider is $w=6$. Here we can use a resolvable $4$-GDD of type $3^8$ (\cite{Shen}). Actually, we only need four of the seven parallel classes in this design. Then, to define the hosts, we can use Theorem \ref{hosts-k.thm}. We handle $k=5$ in a similar manner. Theorem \ref{direct-k.thm} yields a PDP$(5,5w)$ for all $w \geq 5$, $w\neq 6,8,9, 12$ or $16$. There are four orthogonal latin squares of orders $8,9, 12$ and $16$ (see \cite{HCD}) so these values of $w$ are taken care of by Theorem \ref{LS-k.cor}. Finally, the value $w=6$ is handled by a direct construction due to Marco Buratti \cite{marco}. Define $X= \ensuremath{\mathbb{Z}}_{30}$ and \[\mathcal{B} = \{ \{ 0,1,8,12,14\} \bmod 30 \}.\] So we have thirty blocks that are obtained from the base block $B_0 = \{ 0,1,8,12,14\}$. It is easy to check that no pair of points is repeated, because the differences of pairs of points occurring in $B_0$ are all those in the set \[\pm \{1, 2, 4,6,7,8,11,12,13,14\}.\] Define \[P_0=\{B_0+5j \bmod 30 : j=0,1,\dots,5\}\] and for $1 \leq i \leq 4$, let \[P_i= \{ B+i \bmod 30: B \in P_0 \}.\] In this way, $\mathcal{B}$ is partitioned into five parallel classes, each containing six blocks. Theorem \ref{hosts-k.thm} guarantees that we can define hosts in a suitable fashion. However, it is easy to write down an explicit formula, namely, $ h(B_0 + i) = i$ for $0 \leq i \leq 29$. \end{proof} \end{document}
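The difference construction of Theorem \ref{direct-k.thm} above is also easy to test by computer. The following Python sketch (ours, added purely for illustration) develops the $k$ base blocks modulo $w$ and checks that no pair of points is repeated exactly when condition (\ref{num.eq}) holds; for blocks in $P_i$, the host is then read off as in the proof, namely the point with second coordinate $i$.
\begin{verbatim}
# Sketch (illustrative, not from the note): the difference construction.
# Points are (x, j) with x in Z_w and 0 <= j < k; class P_i develops the
# base block {(0,0),(i,1),...,((k-1)i,k-1)} modulo w.
from itertools import combinations

def direct_pdp(k, w):
    classes = []
    for i in range(k):
        base = [((j * i) % w, j) for j in range(k)]
        classes.append([[((x + s) % w, j) for (x, j) in base]
                        for s in range(w)])
    return classes

def no_repeated_pairs(classes):
    pairs = [frozenset(p) for P in classes for b in P
             for p in combinations(b, 2)]
    return len(pairs) == len(set(pairs))

def condition(k, w):
    # condition (num.eq): no factorization w = s*t with 2 <= s, t <= k-1
    return not any(w % s == 0 and 2 <= w // s <= k - 1 for s in range(2, k))

for k in (3, 4, 5):
    for w in range(k, 20):
        assert no_repeated_pairs(direct_pdp(k, w)) == condition(k, w)
print("difference construction checked for k = 3, 4, 5")
\end{verbatim}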
\begin{document} \thispagestyle{empty}
\begin{center}{\large\bf Delta shocks in the relativistic full Euler equations for a Chaplygin gas$^{**}$ \footnotetext{$^{*}$Corresponding author. Tel: +86-0591-83852790.\\ \indent \,\,\,\,\,\,\,\,E-mail address: [email protected].\\ \indent \,\,\,\,\,$^{**}$Supported by the National Natural Science Foundation of China (No. 70371025), the Scientific Research Foundation of the Ministry of Education of China (No. 02JA790014), the Natural Science Foundation of Fujian Province of China (No. 2015J01014) and the Science and Technology Developmental Foundation of Fuzhou University (No. 2004-XQ-16). }}\end{center}
\begin{center}{\bf Zhiqiang Shao$^{a, *}$\\ {\it $^{a}$Department of Mathematics, Fuzhou University, Fuzhou 350002, China} }\end{center}
\noindent{\small\bf Abstract}\quad The relativistic full Euler equations for a Chaplygin gas are studied. The Riemann problem is solved constructively. There are two kinds of Riemann solutions: one consists of three contact discontinuities, while the other involves a delta shock wave on which both state variables, the rest mass density and the proper energy density, simultaneously contain Dirac delta functions. This is quite different from the previous ones, on which only one state variable contains the Dirac delta function. The formation mechanism, generalized Rankine-Hugoniot relation and entropy condition are clarified for this type of delta shock wave. Under the generalized Rankine-Hugoniot relation and entropy condition, the existence and uniqueness of delta shock solutions are also established.

\noindent{\small\bf MSC:} 35L65; 35L67

\noindent{\small\bf Keywords:} Relativistic full Euler equations; Chaplygin gas; Riemann problem; Delta shock wave

\baselineskip 15pt \sec{\Large\bf 1.\quad Introduction} The relativistic Euler equations of the conservation laws of baryon number, momentum and energy read (see [8, 27]) $$ \left\{\begin{array}{ll}\Big(\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big)_{t}+\Big(\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big)_x=0, \\[4mm]\Big(\frac{(p/c^{2}+\rho ) v}{1-v^2/c^2}\Big)_t+\Big(\frac{(p/c^{2}+\rho )v^2}{1-v^2/c^2}+p\Big)_x=0,\\[4mm] \Big(\frac{(p/c^{2}+\rho )v^2/c^2}{1-v^2/c^2}+\rho\Big)_t+\Big(\frac{(p/c^{2}+\rho ) v}{1-v^2/c^2}\Big)_x=0,\end{array}\right. \eqno{(1.1)} $$ where $n$, $\rho$, $p$ and $v$ represent the rest mass density, the proper energy density, the pressure and the particle speed, respectively, and the constant $c$ is the speed of light. In his fundamental work of 1948, Taub [42] derived system (1.1) and then obtained the Hugoniot curve of the relativistic shocks, and also showed that $\Gamma$, the ratio of specific heats, must be less than $\frac{5}{3}$. He gave a more systematic description of relativistic hydrodynamics in his later work [43].
In 1986, Thompson [44] established several relations on the relativistic shock curves. He observed that ``the relativistic shock equations are much more complicated and do not lend themselves to expressions that are both simple and general''. Owing to the high complexity of the system itself, up to now there have been few results for this system in the literature. Chen [8] solved the Riemann problem for system (1.1) for the polytropic gas with the equations of state $p=(\Gamma-1)c^{2}(\rho-n)$ and $p=kSn^{\Gamma}$. Recently, its vanishing pressure limit problem was studied by Yin and Sheng [48]. Besides, Geng and Li [15] studied the non-relativistic global limits of the entropy solutions to the Cauchy problem of system (1.1) for the isothermal flow $p=k^{2}\rho$. Ding [14] proved the global stability of the strong rarefaction wave for the 1-D piston problem of system (1.1) for the polytropic gas with the equations of state $p=(\Gamma-1)c^{2}(\rho-n)$ and $p=kSn^{\Gamma}$. Here, we are concerned with the equation of state $$p=-\frac{1}{\rho}, \eqno{(1.2)} $$ which was introduced by Chaplygin [5], Tsien [45] and von Karman [19] as a suitable mathematical approximation for calculating the lifting force on a wing of an airplane in aerodynamics. A gas is called a Chaplygin gas if it satisfies the equation of state (1.2). The Chaplygin gas has negative pressure and occurs in certain theories of cosmology. Such a gas has been advertised as a possible model for dark energy [1, 16]. In recent years, astrophysicists have had a growing interest in Chaplygin gas dynamics, which replaces the polytropic equation of state $p(\rho)=\rho^\Gamma$ $(\Gamma >1)$ (e.g., see [6-7, 18, 37, 40]) with $p(\rho)= -\rho^{-1}$. The typical feature of Chaplygin gas dynamics is that $\delta$-shocks appear in non-zero pressure cases. The Riemann problem was solved for the nonrelativistic case by Brenier [2] and Serre [35], and for the relativistic case by Cheng and Yang [10], followed by its vanishing pressure limit problem by Yin and Song [49]. In this paper, we are interested in the Riemann problem for (1.1) and (1.2) with Riemann initial data $$(n, \rho, v)(0,x)=\left\{\begin{array}{ll}(n_{-},\rho_{-},v_{-}), & x<0, \\ (n_{+},\rho_{+}, v_{+}), & x>0, \end{array}\right. \eqno{(1.3)}$$ where $\rho_{\pm} >0$, $n_{\pm} >0$ and $v_{\pm} $ are given constant states. The Riemann problem is a special initial value problem for one-dimensional hyperbolic systems of conservation laws, in which the initial data consist of two constant states separated by a jump discontinuity at the origin. It is well known that the Riemann problem is the most fundamental problem in the field of nonlinear hyperbolic conservation laws. Theories of hyperbolic systems of conservation laws can be found in [3-4, 11, 25, 27, 36, 39] etc. For the Chaplygin gas, the considered relativistic full Euler equations possess three linearly degenerate characteristic fields, so the classical elementary waves only involve contact discontinuities. The rarefaction wave curves and the shock wave curves actually coincide with the so-called contact discontinuities in the state space.
Although the system is much more complicated and the results are much harder to obtain, with the help of the contact discontinuity curves, an analysis of the physically relevant region and the method of characteristic analysis, we construct Riemann solutions involving only contact discontinuities when $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}>\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$. However, for the case $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\leq\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, we find that the Riemann solution cannot be constructed from these classical contact discontinuities, and delta shocks should occur. In this case, we rigorously analyze the formation mechanism of the delta shock wave in which both state variables, the rest mass density and the proper energy density, contain a Dirac delta function. Based on the definition of the delta shock wave solution to (1.1) and (1.2) in the sense of distributions, we propose the generalized Rankine-Hugoniot relation and entropy condition for this type of delta shock wave. Thus both existence and uniqueness of delta shock wave solutions can be obtained by solving the generalized Rankine-Hugoniot relation under the entropy condition. In this work, it is proven that a delta shock wave with a Dirac delta function in both state variables, the rest mass density and the proper energy density, develops in solutions of the relativistic full Euler equations for the Chaplygin gas. It is quite different from the previous ones, on which only one state variable contains the Dirac delta function. To our knowledge, this type of delta shock wave has not been found in the previous studies on the relativistic Euler equations. For related research on delta shock waves, we refer the reader to [6, 9-10, 12-13, 17, 20-24, 26, 28-35, 37-38, 41, 46-49] and the references cited therein for more details. For the theory of the delta shock wave with a Dirac delta function in multiple state variables, interested readers may refer to [9, 30-33, 46-47] for further details. Besides, substantially different from the works [9, 30-31, 46-47], where the delta shock wave with a Dirac delta function in multiple state variables has been found only in some non-strictly hyperbolic systems of conservation laws, we find this type of delta shock wave in a linearly degenerate and strictly hyperbolic system of conservation laws. The rest of this paper is organized as follows. In Section 2, we first clarify the physically relevant region where we can present classical solutions and delta shock waves, deduce the classical contact discontinuity curves, and then construct Riemann solutions involving only the classical contact discontinuities. In Section 3, we analyze the formation mechanism of the delta shock wave with a Dirac delta function in both the rest mass density and the proper energy density. We also propose the generalized Rankine-Hugoniot relation and entropy condition for this type of delta shock wave and then prove the existence and uniqueness of delta shock wave solutions under the generalized Rankine-Hugoniot relation and entropy condition. \baselineskip 15pt \sec{\Large\bf 2.\quad Preliminaries and classical Riemann solutions} In this section, we present some preliminary knowledge for system (1.1) and construct classical Riemann solutions of (1.1)-(1.2) with initial data (1.3).
The physically relevant region for solutions is $$\Lambda=\betaigg\{(n, \rho, v)|n>0, \rho>\frac{1}{c}, |v|<c\betaigg\},\eqno{(2.1)}$$that is, the sonic speed $\sigmaqrt{p'(\rho)}$ should be strictly less than the speed of light (see [8]). For any smooth solution, system (1.1) with (1.2) can be written in matrix form $$A\left(\betaegin{array}{cc}n\\\rho\\v \end{array}\right)_t+B\left(\betaegin{array}{cc}n\\\rho\\v \end{array}\right)_x=0,\eqno{(2.2)} $$ where $$A=\left( \betaegin{array}{ccc}\frac{1}{\sigmaqrt{1-v^{2}/c^{2}}}& 0 & \frac{nv}{c^{2}(1-v^{2}/c^{2})^{3/2}}\\ 0& \frac{\betaig(\frac{1}{\rho^{2}c^{2}}+1\betaig)v}{1-v^{2}/c^{2}} & \frac{\betaig(-\frac{1}{\rho c^{2}}+\rho\betaig)(1+v^{2}/c^{2})}{(1-v^{2}/c^{2})^{2}} \\0& \frac{1+\frac{v^{2}}{\rho^{2}c^{4}}}{1-v^{2}/c^{2}} & \frac{\frac{2v}{c^{4}}\betaig(-\frac{1}{\rho}+\rho c^{2}\betaig)}{(1-v^{2}/c^{2})^{2}} \end{array}\right),\eqno{(2.3)}$$ and $$ B=\left( \betaegin{array}{ccc}\frac{v}{\sigmaqrt{1-v^{2}/c^{2}}}& 0 & \frac{n}{(1-v^{2}/c^{2})^{3/2}}\\ 0& \frac{v^{2}+1/\rho^{2}}{1-v^{2}/c^{2}} & \frac{2\rho v \betaig(1-\frac{1}{\rho^{2}c^{2}}\betaig)}{(1-v^{2}/c^{2})^{2}} \\0& \frac{\betaig(\frac{1}{\rho^{2}c^{2}} +1 \betaig )v}{1-v^{2}/c^{2}} & \frac{\betaig(-\frac{1}{\rho c^{2}}+\rho\betaig) (1+v^{2}/c^{2})}{(1-v^{2}/c^{2})^{2}} \end{array}\right).\eqno{(2.4)}$$ It follows from (2.3) and (2.4) that $$ A^{-1}B=\left( \betaegin{array}{ccc}v& \frac{\frac{nv}{c^{2}\rho^{2}}(1-v^{2}/c^{2})}{\betaig(\frac{v^{2}}{\rho^{2} c^{4}}-1\betaig)\betaig(-\frac{1}{\rho c^{2}}+\rho\betaig)} & \frac{n}{1-\frac{v^{2}}{\rho^{2} c^{4}}}\\ 0& \frac{v \betaig(\frac{1}{\rho^{2}c^{2}}-1\betaig)}{\frac{v^{2}}{\rho^{2} c^{4}}-1} & \frac{\rho \betaig(\frac{1}{\rho^{2}c^{2}}-1\betaig)}{\frac{v^{2}}{\rho^{2} c^{4}}-1} \\0& \frac{-\frac{1}{\rho^{2}} (1-v^{2}/c^{2})^{2}}{\betaig(-\frac{1}{\rho c^{2}}+\rho\betaig)\betaig(\frac{v^{2}}{\rho^{2} c^{4}}-1\betaig)} & \frac{v \betaig(\frac{1}{\rho^{2}c^{2}}-1\betaig)}{\frac{v^{2}}{\rho^{2} c^{4}}-1} \end{array}\right).\eqno{(2.5)}$$ By (2.5), it is not difficult to see that system (1.1) with (1.2) has three real and distinct eigenvalues$$\langlembdabda_{1}=\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}},\,\,\,\, \langlembdabda_{2}=v,\,\,\,\,\langlembdabda_{3}=\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}}, \eqno{(2.6)} $$ with the corresponding right eigenvectors $$\overlineerrightarrow{r}_1=\betaigg(\frac{-n}{\betaig(\rho-\frac{1}{\rho c^{2}}\betaig)\betaig(1-v^{2}/c^{2}\betaig)},\frac{-1}{1-v^{2}/c^{2}},\frac{1/\rho}{\rho-\frac{1}{\rho c^{2}}}\betaigg)^{T},\,\,\overlineerrightarrow{r}_2=(1,0,0 )^T,\,\, \overlineerrightarrow{r}_3=\betaigg(\frac{n}{\betaig(\rho-\frac{1}{\rho c^{2}}\betaig)\betaig(1-v^{2}/c^{2}\betaig)},\frac{1}{1-v^{2}/c^{2}},\frac{1/\rho}{\rho-\frac{1}{\rho c^{2}}}\betaigg)^{T}, \eqno{(2.7)}$$ satisfying $$ \betaigtriangledown\langlembdabda_{i}{\cal D }ot \overlineerrightarrow{r_i}\equiv 0\, \,(i=1,2,3).$$ Therefore, system (1.1) with (1.2) is strictly hyperbolic and fully linearly degenerate, and the associated waves are contact discontinuities. 
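For instance, the linear degeneracy of the second characteristic field can be checked in one line from (2.6) and (2.7) (we record this short verification here for the reader's convenience): $$\lambda_{2}=v,\qquad \nabla_{(n,\rho,v)}\lambda_{2}=(0,0,1),\qquad \nabla\lambda_{2}\cdot\overrightarrow{r}_{2}=(0,0,1)\cdot(1,0,0)^{T}=0,$$ and the identities $\nabla\lambda_{1}\cdot\overrightarrow{r}_{1}\equiv 0$ and $\nabla\lambda_{3}\cdot\overrightarrow{r}_{3}\equiv 0$ follow in the same way from a slightly longer computation.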
Since system (1.1) with (1.2) and the Riemann data (1.3) are invariant under stretching of coordinates: $(t, x)\rightarrow (\alphalpha t, \alphalpha x)~(\alphalpha$ is a constant), we seek the self-similar solution $$(n, \rho,v)(t, x)=(n, \rho,v)(\xi),\,\,\,\,\xi=\frac{x}{t}.$$ Then Riemann problem (1.1), (1.2) and (1.3) is reduced to the following boundary value problem of ordinary differential equations: $$ \left\{\betaegin{array}{ll} -\xi\Big(\frac{n}{\sigmaqrt{1-v^{2}/c^{2}}}\Big)_{\xi}+\Big(\frac{nv}{\sigmaqrt{1-v^{2}/c^{2}}}\Big)_{\xi}=0,\\ -\xi\Big(\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig) v}{1-v^2/c^2}\Big)_{\xi}+\Big(\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)v^2}{1-v^2/c^2}-\frac{1}{\rho}\Big)_{\xi}=0,\\ -\xi\Big(\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)v^2/c^2}{1-v^2/c^2}+\rho\Big)_{\xi}+\Big(\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig) v}{1-v^2/c^2}\Big)_{\xi}=0,\end{array}\right .\eqno{(2.8)} $$ with $(n, \rho,v)(\pm\infty)=(n_{\pm}, \rho_{\pm},v_{\pm}).$\\ \indent For any smooth solution, system (2.8) can be rewritten as $$\left( \betaegin{array}{ccc}\frac{v-\xi}{\sigmaqrt{1-v^{2}/c^{2}}}&0 & \frac{nc^{2}-nv\xi }{c^{2}(1-v^2/c^2)^{3/2}} \\0& \frac{c^{2}(\frac{1}{\rho^{2}}+v^{2})-\xi v(\frac{1}{\rho^{2}}+c^{2})}{c^{2}-v^{2}} &\frac{(-\frac{1}{\rho}+\rho c^{2})(2vc^{2}-\xi c^{2}-\xi v^{2})}{(c^{2}-v^{2})^{2}} \\0& \frac{(\frac{1}{\rho^{2}}+c^{2})c^{2}v-\xi(c^{4}+\frac{v^{2}}{\rho^{2}})}{c^{2}(c^{2}-v^{2})} &\frac{(-\frac{1}{\rho}+\rho c^{2})(c^{2}+v^{2}-2v\xi)}{(c^{2}-v^{2})^{2}} \end{array}\right)\left(\betaegin{array}{cccc}dn\\ d\rho\\deltav \end{array}\right)=0.\eqno{(2.9)}$$ It provides either the general solution (constant state) $$(n, \rho, v)={\mathrm Constant }, $$ or the singular solutions $$ \left\{ \betaegin{array}{ll} \xi=\langlembdabda_{1}=\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}},\, \\ d\betaigg(\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}}\betaigg)=0,\\\frac{dn}{d\rho} =\frac{n \rho c^{2}}{\rho^{2}c^{2}-1}, \end{array} \right. \eqno{(2.10)} $$ $$ \left\{ \betaegin{array}{ll} \xi=\langlembdabda_{2}=v, \\ d\rho=0,\,\,dv=0,\,\,dn\neq0, \end{array} \right. \eqno{(2.11)} $$ $$ \left\{ \betaegin{array}{ll} \xi=\langlembdabda_{3}=\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}}, \\ d\betaigg(\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}}\betaigg)=0,\\\frac{dn}{d\rho} =\frac{n\rho c^{2}}{\rho^{2}c^{2}-1}. \end{array} \right. \eqno{(2.12)} $$ Integrating (2.10) from $(n_{-}, \rho_{-},v_{-})$ to $(n, \rho,v)$ yields that $$\xi=\langlembdabda_{1}=\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\,\, {\mathrm and}\,\, \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}. \eqno{(2.13)} $$ Similarly, we have $$\xi=\langlembdabda_{2}=v= v_{-}\, \, \,\, \rho=\rho_{-}\,\,\, {\mathrm and}\,\,\,n\neq n_{-}, \eqno{(2.14)} $$ $$\xi=\langlembdabda_{3}=\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}} =\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}}\,\, {\mathrm and}\,\, \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}. 
\eqno{(2.15)} $$ \indent For a bounded discontinuity at $\xi=\sigmaigma,$ the Rankine-Hugoniot relation holds: $$ \left\{\betaegin{array}{ll} -\sigmaigma\Big[\frac{n}{\sigmaqrt{1-v^{2}/c^{2}}}\Big]+\Big[\frac{nv}{\sigmaqrt{1-v^{2}/c^{2}}}\Big]=0,\\ -\sigmaigma\Big[\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig) v}{1-v^2/c^2}\Big]+\Big[\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)v^2}{1-v^2/c^2}-\frac{1}{\rho}\Big]=0,\\ -\sigmaigma\Big[\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)v^2/c^2}{1-v^2/c^2}+\rho\Big]+\Big[\frac{\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig) v}{1-v^2/c^2}\Big]=0,\end{array}\right .\eqno{(2.16)} $$ where $[q]=q -q_{-}$ is the jump of $q$ across the discontinuity and $\sigmaigma$ is the velocity of the discontinuity. Eliminating $\sigmaigma$ in the second and third equations of (2.16), we have $$\betaigg(-\frac{1}{\rho}+\frac{1}{\rho_{-}}\betaigg)(\rho-\rho_{-})\betaigg(1-\frac{v^{2}}{c^{2}}\betaigg)\betaigg(1-\frac{v_{-}^{2}}{c^{2}}\betaigg)=\betaigg(-\frac{1}{\rho c^{2}}+\rho \betaigg)\betaigg(-\frac{1}{\rho_{-} c^{2}}+\rho_{-} \betaigg)(v-v_{-})^{2}.\eqno{(2.17)}$$ Then, from (2.17) it follows that $$\frac{(v-v_{-})^{2}}{\betaigg(1-\frac{v^{2}}{c^{2}}\betaigg)\betaigg(1-\frac{v_{-}^{2}}{c^{2}}\betaigg)}=\frac{(\rho-\rho_{-})^{2}} {\betaigg(\rho^{2}-\frac{1}{ c^{2}} \betaigg)\betaigg(\rho_{-}^{2}-\frac{1}{ c^{2}} \betaigg)},\eqno{(2.18)}$$ and $$\betaigg(\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}\betaigg)^{2}=\frac{\betaig(-\frac{1}{\rho}+\frac{1}{\rho_{-}}\betaig)(\rho -\rho_{-})}{\betaig(-\frac{1}{\rho c^{2}}+\rho _{-} \betaig)\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho \betaig)}.\eqno{(2.19)}$$ By a simple calculation, it is easy to see that (2.19) is equivalent to $$\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}=\pm\betaigg(\frac{\rho -\rho_{-}}{\rho\rho _{-} -\frac{1}{c^{2}}} \betaigg).\eqno{(2.20)}$$ Thus, we have two cases, namely, Case 1: $\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}=\frac{\rho -\rho_{-}}{\rho\rho _{-} -\frac{1}{c^{2}}},$ which gives a 1-shock $$S_{1}:\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}, \,\, {\mathrm with}\,\,p>p_{-},\,\rho>\rho_{-},\,v<v_{-}. 
\eqno{(2.21)} $$ Hence, from (2.21) and (2.18) it follows that $$\frac{v-v_{-}}{\sigmaqrt{\Big(1-\frac{v^{2}}{c^{2}}\Big)\Big(1-\frac{v_{-}^{2}}{c^{2}}\Big)}}=-\frac{\rho-\rho_{-}}{\sigmaqrt{\Big(\rho^{2}-\frac{1}{ c^{2}} \Big)\Big(\rho_{-}^{2}-\frac{1}{ c^{2}} \Big)}},\eqno{(2.22)}$$ $$v=\frac{v_{-}-a}{1-\frac{v_{-}a}{c^{2}}},\eqno{(2.23)}$$ $$\sigmaqrt{1-\frac{v^{2}}{c^{2}}}=\frac{\sigmaqrt{c^{2}-a^{2}}}{c\Big(1-\frac{v_{-}a}{c^{2}}\Big)}\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}, \eqno{(2.24)}$$ $$\sigmaqrt{c^{2}-a^{2}}=\frac{c\sigmaqrt{\rho\rho_{-}\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho_{-} \betaig)}}{\rho\rho_{-} -\frac{1}{c^{2}}},\eqno{(2.25)}$$ where $$a=\frac{\rho-\rho_{-}}{\rho\rho_{-}-\frac{1}{c^{2}}}.$$ Eliminating $\sigmaigma$ in the first and second equations of (2.16), we have $$\frac{v-v_{-}}{\sigmaqrt{\Big(1-\frac{v^{2}}{c^{2}}\Big)\Big(1-\frac{v_{-}^{2}}{c^{2}}\Big)}}\Bigg(\frac{n_{-}v\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)}{\sigmaqrt{1-\frac{v^{2}}{c^{2}}}}-\frac{nv_{-}\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho_{-} \betaig)}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}\Bigg)=\Big(-\frac{1}{\rho}+\frac{1}{\rho_{-}}\Big)\Bigg(\frac{n}{\sigmaqrt{1-\frac{v^{2}}{c^{2}} }} -\frac{n_{-}}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}\Bigg).\eqno{(2.26)}$$ Substituting (2.22)-(2.25) into (2.26), we get, after a straightforward calculation that$$\Bigg(\frac{n\rho_{-}(\rho\rho_{-}-\frac{1}{c^{2}})}{\sigmaqrt{\rho\rho_{-}\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho_{-} \betaig)}}-\frac{n_{-}(\rho\rho_{-}-\frac{1}{c^{2}})}{-\frac{1}{\rho_{-} c^{2}}+\rho_{-} }\Bigg)\Big(v_{-}-\frac{1}{\rho_{-}}\Big)(\rho-\rho_{-})=0. \eqno{(2.27)} $$ When $\rho\neq\rho_{-}$, the second part of the left side in the above expression will not be zero if $v_{-}\neq\frac{1}{\rho_{-}}$, which means $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\,\,\,\,\,\,\,\,\,{\mathrm if }\,\,v_{-}\neq\frac{1}{\rho_{-}}. 
\eqno{(2.28)} $$ Eliminating $\sigmaigma$ in the first and third equations of (2.16), we have $$\frac{nv\Big(-\frac{1}{\rho c^{2}}+\rho\Big)}{\sigmaqrt{1-\frac{v^{2}}{c^{2}}}}+\frac{n_{-}v_{-}\Big(-\frac{1}{\rho_{-} c^{2}}+\rho_{-}\Big)}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}-\frac{n_{-}\Big(-\frac{1}{\rho c^{2}}+\rho\Big)\Big(v-\frac{v_{-}v^{2}}{c^{2}}\Big)}{\Big(1-\frac{v^{2}}{c^{2}}\Big)\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}$$ $$-\frac{n\Big(-\frac{1}{\rho_{-} c^{2}}+\rho_{-}\Big)\Big(v_{-}-\frac{vv_{-}^{2}}{c^{2}}\Big)}{\Big(1-\frac{v_{-}^{2}}{c^{2}}\Big)\sigmaqrt{1-\frac{v^{2}}{c^{2}}}}=(\rho-\rho_{-}) \Bigg(\frac{nv}{\sigmaqrt{1-\frac{v^{2}}{c^{2}}}} -\frac{n_{-}v_{-}}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}\Bigg).$$ We rearrange terms to get $$\frac{nv\Big(-\frac{1}{\rho c^{2}}+\rho_{-}\Big)}{\sigmaqrt{1-\frac{v^{2}}{c^{2}}}}+\frac{n_{-}v_{-}\Big(-\frac{1}{\rho_{-} c^{2}}+\rho\Big)}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}=\frac{1-\frac{vv_{-}}{c^{2}}}{\sigmaqrt{\Big(1-\frac{v^{2}}{c^{2}}\Big)\Big(1-\frac{v_{-}^{2}}{c^{2}}\Big)}} \Bigg(\frac{n_{-}v\Big(-\frac{1}{\rho c^{2}}+\rho\Big)}{\sigmaqrt{1-\frac{v^{2}}{c^{2}}}}+\frac{nv_{-}\Big(-\frac{1}{\rho_{-} c^{2}}+\rho_{-}\Big)}{\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}}\Bigg).\eqno{(2.29)} $$ Substituting (2.22)-(2.25) into (2.29), and noting that $\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}=\frac{\rho -\rho_{-}}{\rho\rho _{-} -\frac{1}{c^{2}}}$, we get, after a straightforward calculation that$$\Bigg(\frac{n(\rho\rho_{-}-\frac{1}{c^{2}})}{\sigmaqrt{\rho\rho_{-}\betaig(-\frac{1}{\rho c^{2}}+\rho \betaig)\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho_{-} \betaig)}}-\frac{n_{-}(\rho\rho_{-}-\frac{1}{c^{2}})}{\rho_{-}\betaig(-\frac{1}{\rho_{-} c^{2}}+\rho_{-}\betaig) }\Bigg)\Big(\frac{v_{-}}{\rho_{-}c^{2}}-1\Big)(\rho-\rho_{-})=0. \eqno{(2.30)} $$ When $\rho\neq\rho_{-}$, the second part of the left side in the above expression will not be zero if $v_{-}\neq\rho_{-}c^{2}$, which means $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\,\,\,\,{\mathrm if}\,\, v_{-}\neq\rho_{-}c^{2}. \eqno{(2.31)} $$ Because $v_{-}=\rho_{-}c^{2}$ contradicts with $v_{-}=\frac{1}{\rho_{-}}$, the above expression (2.31) together with (2.28) yields that $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}. \eqno{(2.32)} $$ This defines the Hugoniot curve of the relativistic shock. Substituting (2.23)-(2.25) and (2.32) into the first equation of (2.16), we get, after a straightforward calculation that $$\sigmaigma=\frac{v_{-}\rho_{-}(\rho-\rho_{-})+a(\frac{1}{c^{2}}-\rho\rho_{-})}{\rho_{-}(\rho-\rho_{-})+\frac{v_{-}a}{c^{2}}(\frac{1}{c^{2}}-\rho\rho_{-})} =\frac{(\rho-\rho_{-})(v_{-}\rho_{-}-1)}{(\rho-\rho_{-})(\rho_{-}-\frac{v_{-}}{c^{2}})}.\eqno{(2.33)} $$ When $\rho\neq\rho_{-}$, from (2.33), it is easy to find that $$\sigmaigma =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}.\eqno{(2.34)} $$ When $\rho=\rho_{-}$, the situation is simple. From (2.23) and (2.16), we can easily obtain that $$ \sigmaigma=v= v_{-},\, \, \,\, \rho=\rho_{-}\,\,\, {\mathrm and}\,\,\,n\neq n_{-}, \eqno{(2.35)} $$ Case 2: $\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}=-\frac{\rho -\rho_{-}}{\rho\rho _{-} -\frac{1}{c^{2}}},$ which gives a 3-shock $$S_{3}:\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}} =\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}}, \,\, {\mathrm with}\,\,p<p_{-},\,\rho<\rho_{-},\,v<v_{-}. 
\eqno{(2.36)} $$ Then, from (2.36) and (2.18) it follows that $$\frac{v-v_{-}}{\sigmaqrt{\Big(1-\frac{v^{2}}{c^{2}}\Big)\Big(1-\frac{v_{-}^{2}}{c^{2}}\Big)}}=\frac{\rho-\rho_{-}}{\sigmaqrt{\Big(\rho^{2}-\frac{1}{ c^{2}} \Big)\Big(\rho_{-}^{2}-\frac{1}{ c^{2}} \Big)}},\eqno{(2.37)}$$ $$v=\frac{v_{-}+a}{1+\frac{v_{-}a}{c^{2}}},\eqno{(2.38)}$$ $$\sigmaqrt{1-\frac{v^{2}}{c^{2}}}=\frac{\sigmaqrt{c^{2}-a^{2}}}{c\Big(1+\frac{v_{-}a}{c^{2}}\Big)}\sigmaqrt{1-\frac{v_{-}^{2}}{c^{2}}}, \eqno{(2.39)}$$ $$\sigmaqrt{c^{2}-a^{2}}=\frac{c\sigmaqrt{\betaig(\rho^{2}-\frac{1}{ c^{2}}\betaig)\betaig(\rho_{-}^{2}-\frac{1}{ c^{2}} \betaig)}}{\rho\rho_{-} -\frac{1}{c^{2}}},\eqno{(2.40)}$$ where $$a=\frac{\rho-\rho_{-}}{\rho\rho_{-}-\frac{1}{c^{2}}}.$$ Substituting (2.37)-(2.40) into (2.26), we get, after a straightforward calculation that $$\Bigg(\frac{n(\rho_{-}-\frac{1}{\rho c^{2}})}{\rho_{-}\sigmaqrt{\betaig(\rho^{2}-\frac{1}{ c^{2}} \betaig)\betaig(\rho_{-}^{2}-\frac{1}{ c^{2}} \betaig)}}-\frac{n_{-}(\rho-\frac{1}{\rho_{-}c^{2}})}{\rho\betaig(\rho_{-}^{2}-\frac{1}{ c^{2}}\betaig) }\Bigg)(\rho_{-}v_{-}+1)(\rho-\rho_{-})=0. \eqno{(2.41)} $$ When $\rho\neq\rho_{-}$, the second part of the left side in the above expression will not be zero if $v_{-}\neq -\frac{1}{\rho_{-}}$, which means $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\,\,\,\,\,\,\,\,\,{\mathrm if }\,\,v_{-}\neq -\frac{1}{\rho_{-}}. \eqno{(2.42)} $$ Substituting (2.37)-(2.40) into (2.29), and noting that $\frac{v-v_{-}}{\frac{vv_{-}}{c^{2}}-1}=-\frac{\rho -\rho_{-}}{\rho\rho _{-} -\frac{1}{c^{2}}}$, we get, after a straightforward calculation that$$\Bigg(\frac{n(\rho\rho_{-}-\frac{1}{c^{2}})}{\sigmaqrt{\betaig(\rho^{2}-\frac{1}{ c^{2}} \betaig)\betaig(\rho_{-}^{2}-\frac{1}{ c^{2}} \betaig)}}-\frac{n_{-}(\rho\rho_{-}-\frac{1}{c^{2}})}{\rho_{-}^{2}-\frac{1}{ c^{2}} }\Bigg)\Big(\frac{v_{-}}{\rho_{-}c^{2}}+1\Big)(\rho-\rho_{-})=0. \eqno{(2.43)} $$ When $\rho\neq\rho_{-}$, the second part of the left side in the above expression will not be zero if $v_{-}\neq -\rho_{-}c^{2}$, which means $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\,\,\,\,{\mathrm if}\,\, v_{-}\neq -\rho_{-}c^{2}. \eqno{(2.44)} $$ Because $v_{-}=-\rho_{-}c^{2}$ contradicts with $v_{-}=-\frac{1}{\rho_{-}}$, the above expression (2.44) together with (2.42) yields that $$\frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}. \eqno{(2.45)} $$ This defines the Hugoniot curve of the relativistic shock. Substituting (2.38)-(2.40) and (2.45) into the first equation of (2.16), we get, after a straightforward calculation that $$\sigmaigma=\frac{v_{-}\rho_{-}(\rho-\rho_{-})+a(\rho\rho_{-}-\frac{1}{c^{2}})}{\rho_{-}(\rho-\rho_{-})+\frac{v_{-}a}{c^{2}}(\rho\rho_{-}-\frac{1}{c^{2}})} =\frac{(\rho-\rho_{-})(v_{-}\rho_{-}+1)}{(\rho-\rho_{-})(\rho_{-}+\frac{v_{-}}{c^{2}})}.\eqno{(2.46)} $$ When $\rho\neq\rho_{-}$, from (2.46), it is easy to find that $$\sigmaigma =\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}}.\eqno{(2.47)} $$ When $\rho=\rho_{-}$, the situation is simple. 
From (2.38) and (2.16), we can easily obtain that $$ \sigmaigma=v= v_{-},\, \, \,\, \rho=\rho_{-}\,\,\, {\mathrm and}\,\,\,n\neq n_{-}, \eqno{(2.48)} $$ According to above discussions, there are two types of shock curves $S_{i}$ $(i=1,3)$, which are given by $$S_{1}:\left\{ \betaegin{array}{ll} \sigmaigma=\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}},\\ \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\, \end{array} \right. \eqno{(2.49)} $$ with$$\,\,\,\,p>p_{-},\,\rho>\rho_{-},\,v<v_{-}, \eqno{} $$ and $$S_{3}:\left\{ \betaegin{array}{ll} \sigmaigma=\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}} =\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}},\\ \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\, \end{array} \right. \eqno{(2.50)} $$ with$$\,\,p<p_{-},\,\rho<\rho_{-},\,v<v_{-}. \eqno{} $$ From (2.13), (2.15) and (2.49)-(2.50), we can find that the rarefaction waves and the shock waves are coincident in the state space, which correspond to contact discontinuities of the first and the third families: $$J_{1}:\xi=\sigmaigma=\frac{v-\frac{1}{\rho}}{1-\frac{v}{\rho c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\,\, {\mathrm and}\,\, \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}, \eqno{(2.51)} $$ $$J_{3}:\xi=\sigmaigma=\frac{v+\frac{1}{\rho}}{1+\frac{v}{\rho c^{2}}} =\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}}\,\, {\mathrm and}\,\, \frac{n}{n_{-}}=\sigmaqrt{\frac{(\rho c-1)(\rho c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}}. \eqno{(2.52)} $$ From (2.14), (2.35) and (2.48), we can find that there is a contact discontinuity of the second family: $$J_{2}: \xi=\sigmaigma=v= v_{-},\, \, \,\, \rho=\rho_{-}\,\,\, {\mathrm and}\,\,\,n\neq n_{-}. \eqno{(2.53)} $$ \varsigmakip 0.1in\varsigmakip 0.1in \noindentindent{\sigmamall {\sigmamall\betaf Remark 1.} In [8], the author derived the Hugoniot curve of the relativistic shocks by considering such a coordinate system that the shock speed is zero. Different from that, in this paper, we obtain the result generally and ulteriorly give the analytical formula of relativistic shocks ( also see [15, 48]). \varsigmakip 0.1in In the state space, starting from a given state $(n_{-},\rho_{-},v_{-}),$ we draw the contact discontinuity curves (2.51) and (2.52) for $\rho>\frac{1}{c}$, the projections of which onto the $(\rho, v)$-plane are denoted by $J_{1}$ and $J_{3}$, respectively. So, $J_{1}$ has the asymptotic line $v=\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$ and the singularity point $(\rho, v)=(\frac{1}{c}, c),$ and $J_{3}$ has the asymptotic line $v=\frac{v_{-}+\frac{1}{\rho_{-}}}{1+\frac{v_{-}}{\rho_{-} c^{2}}}$ and the singularity point $(\rho, v)=(\frac{1}{c}, -c).$ Also, starting from the point $\betaigg(n_{-},\rho_{-}, \frac{\rho_{-}v_{-}c^{2}-2c^{2}+\frac{v_{-}}{\rho_{-}}}{\rho_{-}c^{2}-2v_{-}+\frac{1}{\rho_{-}}}\betaigg),$ in the state space, we draw the contact discontinuity curve (2.52), the projection of which onto the $(\rho, v)$-plane is denoted by $S_{\deltaelta}$. Then, $S_{\deltaelta}$ has the asymptotic line $v=\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$ and the singularity point $(\rho, v)=(\frac{1}{c}, -c).$ The projections of these curves onto the $(\rho, v)$-plane divide the $(\rho, v)$-plane into five regions I, II, III, IV and V, as shown in Fig. 1. 
\centerline{\bf Fig. 1.\, The projections of the curves $J_{1}$ and $J_{3}$ onto the $(\rho, v)$-plane. }
\vskip 0.2in
\centerline{\bf Fig. 2.\, The Riemann solution of the case $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}>\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$.}
\vskip 0.1in
\indent For any given right state $(n_{+},\rho_{+}, v_{+})$, according to Fig. 1, we can construct Riemann solutions of (1.1), (1.2) and (1.3). When the projection of the state $(n_{+},\rho_{+}, v_{+})$ onto the $(\rho, v)$-plane lies in $I\cup II\cup III\cup IV$, the Riemann problem can be solved in the following way. On the physically relevant region, we draw the contact discontinuity curves $J_{1}(n_{-}, \rho_{-},v_{-})$ and $J_{3}(n_{+}, \rho_{+},v_{+})$. The projections of these contact discontinuity curves onto the $(\rho, v)$-plane have a unique intersection point $(\rho_{*}, v_{*})$ determined by $$\left\{\begin{array}{ll}\frac{v_{*}-\frac{1}{\rho_{*}}}{1-\frac{v_{*}}{\rho_{*} c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}},\\ \frac{v_{*}+\frac{1}{\rho_{*}}}{1+\frac{v_{*}}{\rho_{*} c^{2}}} =\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}.\end{array}\right. \eqno{(2.54)}$$ Then, we draw the contact discontinuity curve $J_{2}(n_{-}, \rho_{*},v_{*})$, which intersects the contact discontinuity curves $J_{1}(n_{-}, \rho_{-}, v_{-})$ and $J_{3}(n_{+}, \rho_{+}, v_{+})$ at the unique points $(n_{*1},\rho_{*1}, v_{*1})$ and $(n_{*2},\rho_{*2}, v_{*2})$, with $\rho_{*1}=\rho_{*2}=\rho_{*}$ and $v_{*1}=v_{*2}=v_{*}$. So far, we have completely obtained a solution of (1.1), (1.2) and (1.3), as shown in Fig. 2.
Thus, we have proved the following result. \vskip 0.1in \noindent{\small\bf Theorem 2.1.} For Riemann problem (1.1), (1.2) and (1.3), on the physically relevant region, under the condition $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}>\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, there exists a unique entropy solution, which can be expressed as $$(n, \rho, v)(t,x)=\left\{ \begin{array}{ll} (n_{-}, \rho_{-}, v_{-}), & -\infty<x/t<\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}},\\ (n_{*1},\rho_{*1}, v_{*1}), & \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\leq x/t\leq v_{*1},\\ (n_{*2},\rho_{*2}, v_{*2}), & v_{*1}<x/t\leq\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}},\\ (n_{+},\rho_{+}, v_{+}), & \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}<x/t<+\infty, \end{array} \right. \eqno{(2.55)}$$ where $$\left\{\begin{array}{ll}\rho_{*1}=\rho_{*2}=\rho_{*}, \\ v_{*1}=v_{*2}=v_{*},\\ \frac{v_{*}-\frac{1}{\rho_{*}}}{1-\frac{v_{*}}{\rho_{*} c^{2}}} =\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}},\\ \frac{v_{*}+\frac{1}{\rho_{*}}}{1+\frac{v_{*}}{\rho_{*} c^{2}}} =\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}},\\ n_{*1}=n_{-}\sqrt{\frac{(\rho_{*} c-1)(\rho_{*} c+1)}{(\rho_{-} c-1)(\rho_{-} c+1)}},\\ n_{*2}=n_{+}\sqrt{\frac{(\rho_{*} c-1)(\rho_{*} c+1)}{(\rho_{+} c-1)(\rho_{+} c+1)}}.\end{array}\right. \eqno{(2.56)}$$
\centerline{\bf Fig. 3.\, The characteristic lines from initial data for the case $\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\geq \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}$.}
\vskip 0.1in
\baselineskip 15pt \sec{\Large\bf 3.\quad Delta shock solutions} In this section, we construct the Riemann solutions of (1.1)-(1.2) with initial data (1.3) when the projection of the state $(n_{+},\rho_{+}, v_{+})$ onto the $(\rho, v)$-plane lies in $V$, namely, $$\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\geq \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}.\eqno{(3.1)}$$ In this case, the linearly degenerate characteristic lines issuing from the initial data overlap in the domain $\Omega=\{(t,x)\,|\,\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}t\leq x\leq\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}t,\ 0\leq t<+\infty\}$ shown in Fig. 3.
So, the singularity must develop in ${\Bbb O}mega$. It is easy to know that the singularity is impossible to be a jump with finite amplitudes because the Rankine-Hugoniot relation is not satisfied on the bounded jump. To analyze the singularity, we first study the special case $\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}= \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}},$ let us consider the limit of the solution $ (n,\rho,v)(\xi)$ when $n_{-}, $ $\rho_{-}, $ $v_{-}, $ $n_{+}$ and $\rho_{+} $ are fixed, $ \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\rightarrow \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}+0$. When $ \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}> \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, the solution is given by (2.55), where $\rho_{*}$ and $ v_{*}$ satisfy $$\left\{\betaegin{array}{ll}\frac{c^{2}(v_{*}-\frac{1}{\rho_{*}})}{c^{2}-\frac{v_{*}}{\rho_{*} }} =\frac{c^{2}(v_{-}-\frac{1}{\rho_{-}})}{c^{2}-\frac{v_{-}}{\rho_{-} }},\\\frac{c^{2}(v_{*}+\frac{1}{\rho_{*}})}{c^{2}+\frac{v_{*}}{\rho_{*} }} =\frac{c^{2}(v_{+}+\frac{1}{\rho_{+}})}{c^{2}+\frac{v_{+}}{\rho_{+} }}.\end{array}\right .\eqno{(3.2)}$$ We can employ (3.2) and calculate to obtain $$\rho_{*}=\frac{c^{2}-ab+\sigmaqrt{c^{4}+a^{2}b^{2}-c^{2}(a^{2}+b^{2})}}{c^{2}(b-a)},\eqno{(3.3)}$$ where $$a=\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}} \,\, { \mathrm \,\, and}\,\,\,\,b = \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}.$$ Therefore, as $ \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\rightarrow \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}+0$, namely, $b \rightarrow a^{+}$, the combination of (3.2)-(3.3) and (2.56) yields $$\left\{\betaegin{array}{ll}\rho_{*1}=\rho_{*2}=\rho_{*}\rightarrow +\infty, n_{*1}\rightarrow +\infty,n_{*2}\rightarrow +\infty,\\ v_{*1}=v_{*2}=v_{*}\rightarrow \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}.\end{array}\right .\eqno{(3.4)}$$ These show that contact discontinuities $J_{1}$, $J_{2}$ and $J_{3}$ coincide to form a new type of nonlinear hyperbolic wave. 
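To see the limit (3.4) directly from (3.3), we add the following short computation for completeness. Note that $c^{4}+a^{2}b^{2}-c^{2}(a^{2}+b^{2})=(c^{2}-a^{2})(c^{2}-b^{2})$, so that, as $b\rightarrow a^{+}$, $$\sqrt{(c^{2}-a^{2})(c^{2}-b^{2})}\rightarrow c^{2}-a^{2},\qquad c^{2}-ab+\sqrt{(c^{2}-a^{2})(c^{2}-b^{2})}\rightarrow 2(c^{2}-a^{2})>0,$$ since $|a|<c$ and $|b|<c$ on the physically relevant region, while the denominator $c^{2}(b-a)$ of (3.3) tends to $0^{+}$; hence $\rho_{*}\rightarrow+\infty$, and then $n_{*1}, n_{*2}\rightarrow+\infty$ by (2.56).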
Now let us calculate the total amounts of $n$, $\rho$ and $v$ between $J_{1}$ and $J_{3}$ as $n_{-}$, $\rho_{-}$, $v_{-}$, $n_{+}$ and $\rho_{+}$ are kept fixed and $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\rightarrow \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}+0$:
$$\lim\limits_{b\rightarrow a^{+}}\int_{a}^{b}\rho(\xi)d\xi=\lim\limits_{b\rightarrow a^{+}}\int_{a}^{b}\rho_{*}d\xi=\lim\limits_{b\rightarrow a^{+}}\frac{c^{2}-ab+\sqrt{c^{4}+a^{2}b^{2}-c^{2}(a^{2}+b^{2})}}{c^{2}}=\frac{2(c^{2}-a^{2})}{c^{2}}\neq 0,\eqno{(3.5)}$$
$$\lim\limits_{b\rightarrow a^{+}}\int_{a}^{b}v(\xi)d\xi=\lim\limits_{b\rightarrow a^{+}}\int_{a}^{b}v_{*}d\xi=\lim\limits_{b\rightarrow a^{+}}v_{*}(b-a)= 0,\eqno{(3.6)}$$
and
$$\lim\limits_{b\rightarrow a^{+}}\int_{a}^{b}n(\xi)d\xi=\lim\limits_{b\rightarrow a^{+}}\bigg(\int_{a}^{v_{*}}n_{-}\sqrt{\frac{c^{2}\rho_{*}^{2}-1}{c^{2}\rho_{-}^{2}-1}}d\xi +\int^{b}_{v_{*}}n_{+}\sqrt{\frac{c^{2}\rho_{*}^{2}-1}{c^{2}\rho_{+}^{2}-1}}d\xi\bigg)$$
$$=\lim\limits_{b\rightarrow a^{+}}\bigg(\frac{(c^{2}-v_{*}a)n_{-}}{\rho_{*}c^{2}}\sqrt{\frac{c^{2}\rho_{*}^{2}-1}{c^{2}\rho_{-}^{2}-1}} +\frac{(c^{2}-v_{*}b)n_{+}}{\rho_{*}c^{2}}\sqrt{\frac{c^{2}\rho_{*}^{2}-1}{c^{2}\rho_{+}^{2}-1}}\bigg)$$
$$=\frac{c^{2}-a^{2}}{c^{2}} \bigg(\frac{n_{-}}{\sqrt{\rho_{-}^{2}-\frac{1}{c^{2}}}}+\frac{n_{+}}{\sqrt{\rho_{+}^{2}-\frac{1}{c^{2}}}}\bigg)\neq0.\eqno{(3.7)}$$
Hence, (3.5) and (3.7) show that $\rho(\xi)$ and $n(\xi)$ develop the same kind of singularity, a weighted Dirac delta function at $\xi= \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, while (3.6) implies that $v(\xi)$ has bounded variation. Thus, this new type of nonlinear hyperbolic wave of (1.1) and (1.2) is a delta shock wave carrying a weighted Dirac delta function in both $n$ and $\rho$, denoted by $S_{\delta}$. It is quite different from the previously known delta shocks, in which only one state variable contains a Dirac delta function. To our knowledge, this type of delta shock wave has not been found in previous studies on the relativistic Euler equations. Moreover, for $S_{\delta}$ in this case, the inequality
$$\lambda_{1}(n_{+},\rho_{+}, v_{+})<\lambda_{2}(n_{+},\rho_{+}, v_{+})<\lambda_{3}(n_{+},\rho_{+}, v_{+})=\sigma=\lambda_{1}(n_{-},\rho_{-}, v_{-})<\lambda_{2}(n_{-},\rho_{-}, v_{-})<\lambda_{3}(n_{-},\rho_{-}, v_{-})$$
holds, where $\sigma=\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$ is the propagation speed of $S_{\delta}$. This means that none of the six characteristic lines on the two sides of $S_{\delta}$ is outgoing with respect to $S_{\delta}$. Guided by this analysis, for the case $\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}\geq \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}$ we shall construct a delta shock wave solution containing a Dirac delta function in both $n$ and $\rho$. We first introduce three definitions.
\vskip 0.1in
\noindent{\small\bf Definition 3.1.} A triple $(n, \rho, v)$ constitutes a solution of (1.1) in the sense of distributions if it satisfies
$$ \left\{ \begin{array}{ll} \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\Big(\Big(\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{t}+\Big(\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{x}\Big)dxdt=0,\\ \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\Big(\Big(\frac{(p/c^{2}+\rho ) v}{1-v^2/c^2}\Big)\phi_{t}+\Big(\frac{(p/c^{2}+\rho )v^2}{1-v^2/c^2}+p\Big)\phi_{x}\Big)dxdt=0,\\ \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\Big(\Big(\frac{(p/c^{2}+\rho )v^2/c^2}{1-v^2/c^2}+\rho\Big) \phi_{t}+\Big(\frac{(p/c^{2}+\rho ) v}{1-v^2/c^2}\Big)\phi_{x}\Big)dxdt=0, \end{array} \right. \eqno{(3.8)}$$
for all test functions $\phi\in C^{\infty}_{0}(R^{+}\times R^{1})$.
\vskip 0.1in
\noindent{\small\bf Definition 3.2.} A two-dimensional weighted delta function $w(s)\delta_{L}$ supported on a smooth curve $L$ parameterized as $t=t(s)$, $x=x(s)$ $(c\leq s\leq d)$ is defined by
$$ \langle w(s)\delta_{L},\phi\rangle=\int_{c}^{d}w(s)\phi(t(s),x(s))ds, \eqno{(3.9)}$$
for all test functions $\phi\in C^{\infty}_{0}(R^{2})$.
\vskip 0.1in
\noindent{\small\bf Definition 3.3.} A triple distribution $(n, \rho, v)$ is called a delta shock wave solution of (1.1) if it is represented in the form
$$(n, \rho,v)(t, x)=\left\{ \begin{array}{ll} (n_{l}, \rho_{l}, v_{l})(t,x), & \hbox{$x<x(t)$,} \\ (h(t)\delta(x-x( t)),w(t)\delta(x-x( t)), v_{\delta}(t)), & \hbox{$x=x(t)$,} \\ (n_{r}, \rho_{r}, v_{r})(t,x), & \hbox{$x>x(t)$} \end{array} \right. \eqno{(3.10)}$$
and satisfies Definition 3.1, where $(n_{l}, \rho_{l}, v_{l})(t,x)$ and $(n_{r}, \rho_{r}, v_{r})(t,x)$ are piecewise smooth bounded solutions of (1.1).
\vskip 0.1in
\indent With Definitions 3.1-3.3, we seek a delta shock wave solution of (1.1) with (1.2), with discontinuity $x=x(t)$, in the form
$$(n, \rho,v)(t, x)=\left\{ \begin{array}{ll} (n_{-}, \rho_{-}, v_{-}), & \hbox{$x<x(t)$,} \\ (h(t)\delta(x-x( t)),w(t)\delta(x-x( t)), v_{\delta}(t)), & \hbox{$x=x(t)$,} \\ (n_{+}, \rho_{+}, v_{+}), & \hbox{$x>x(t)$,} \end{array} \right. \eqno{(3.11)}$$
where $x(t), h(t), w(t)\in C^{1}[0, +\infty)$, $\delta(\cdot)$ is the standard Dirac measure supported on the curve $x=x(t)$, and $h(t), w(t)$ are the weights of the delta shock wave on the state variables $n, \rho$, respectively. Similar to [2, 10, 32], we define $\rho^{-1}$ as follows:
$$\rho^{-1}=\left\{ \begin{array}{ll} \rho^{-1}_{-}, & \hbox{$x<x(t)$,} \\ 0, & \hbox{$x=x(t)$,} \\ \rho^{-1}_{+}, & \hbox{$x>x(t)$.} \end{array} \right.\eqno{(3.12)}$$
We assert that (3.11) is a delta shock wave solution of (1.1) with (1.2) in the sense of distributions if it satisfies the following generalized Rankine-Hugoniot relation:
$$ \left\{ \begin{array}{ll} \frac{dx(t)}{dt}=v_{\delta}(t), \\ \frac{d}{dt}\Big(\frac{h(t)}{\sqrt{1-v_{\delta}^{2}(t)/c^{2}}}\Big)=v_{\delta}(t)\Big[\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big]-\Big[\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big], \\ \frac{d}{dt}\Big(\frac{w(t) v_{\delta}(t)}{1-v_{\delta}^{2}(t)/c^2}\Big)=v_{\delta}(t) \Big[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big) v}{1-v^2/c^2}\Big] -\Big[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big)v^2}{1-v^2/c^2}-\frac{1}{\rho}\Big],\\ \frac{d}{dt}\Big(\frac{w(t)}{1-v_{\delta}^{2}(t)/c^2}\Big)=v_{\delta}(t) \Big[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big)v^2/c^2}{1-v^2/c^2}+\rho\Big] -\Big[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big) v}{1-v^2/c^2}\Big], \end{array} \right.\eqno{(3.13)}$$
where $[q]= q_{+}-q_{-}$ denotes the jump of a quantity $q$ across the discontinuity. In fact, if relation (3.13) holds, then for any test function $\phi\in C^{\infty}_{0}([0,+\infty)\times R)$, Green's formula gives
$$ \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\Big(\Big(\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{t}+\Big(\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{x}\Big)dxdt$$
$$=\int_{0}^{+\infty}\int_{-\infty}^{x(t)}\frac{n_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi_{t}+ \frac{n_{-}v_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}} \phi_{x}dxdt$$
$$+\int_{0}^{+\infty}\int_{x(t)}^{+\infty}\frac{n_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}}\phi_{t}+ \frac{n_{+}v_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}} \phi_{x}dxdt$$
$$+\int_{0}^{+\infty}\frac{h(t)}{\sqrt{1-v_{\delta}^{2}(t)/c^{2}}}\phi_{t}(t,x(t))+ \frac{h(t)v_{\delta}(t)}{\sqrt{1-v_{\delta}^{2}(t)/c^{2}}} \phi_{x}(t, x(t))dt.\eqno{(3.14)}$$
\vskip 0.1in
Without loss of generality, we assume that $v_{\delta}(t):=\sigma_{\delta}$ is a constant and $\sigma_{\delta}>0$. By exchanging the order of integration and changing variables, the first term on the right-hand side of (3.14) equals
$$\int_{0}^{+\infty}\int_{-\infty}^{0}\frac{n_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi_{t}dxdt + \int_{0}^{+\infty}\int_{0}^{x(t)}\frac{n_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi_{t}dxdt +\int_{0}^{+\infty}\int_{-\infty}^{x(t)} \frac{n_{-}v_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}} \phi_{x}dxdt $$
$$=\int_{0}^{+\infty}dx\int_{t(x)}^{+\infty}\frac{n_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi_{t}dt +\int_{0}^{+\infty}\frac{n_{-}v_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi(t,x(t))dt$$
$$=-\int_{0}^{+\infty}\frac{n_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi(t(x),x)dx +\int_{0}^{+\infty}\frac{n_{-}v_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}\phi(t,x(t))dt$$
$$=\int_{0}^{+\infty}\bigg(\frac{n_{-}v_{-}}{\sqrt{1-v_{-}^{2}/c^{2}}}-\frac{n_{-}v_{\delta}(t)}{\sqrt{1-v_{-}^{2}/c^{2}}}\bigg)\phi(t,x(t))dt.\eqno{(3.15)}$$
Similarly, the second term on the right-hand side of (3.14) equals
$$\int_{0}^{+\infty}dx\int^{t(x)}_{0}\frac{n_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}}\phi_{t}dt -\int_{0}^{+\infty}\frac{n_{+}v_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}} \phi(t, x(t))dt$$
$$=\int_{0}^{+\infty}\frac{n_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}}\phi(t(x),x)dx-\int_{0}^{+\infty}\frac{n_{+}v_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}} \phi(t, x(t))dt$$
$$=\int_{0}^{+\infty}\bigg(\frac{n_{+}v_{\delta}(t)}{\sqrt{1-v_{+}^{2}/c^{2}}}-\frac{n_{+}v_{+}}{\sqrt{1-v_{+}^{2}/c^{2}}}\bigg) \phi(t, x(t))dt.\eqno{(3.16)}$$
Integrating by parts, we obtain from (3.14)-(3.16) that
$$ \int_{0}^{+\infty}\int_{-\infty}^{+\infty}\Big(\Big(\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{t}+\Big(\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big)\phi_{x}\Big)dxdt $$
$$=\int_{0}^{+\infty}\bigg(v_{\delta}(t)\Big[\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big]-\Big[\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big] - \frac{d}{dt}\Big(\frac{h(t)}{\sqrt{1-v_{\delta}^{2}(t)/c^{2}}}\Big)\bigg)\phi(t, x(t))dt=0,\eqno{(3.17)}$$
which yields the first equality of (3.8). The second and third equalities of (3.8) can be proved similarly. Thus, the assertion is true.
\vskip 0.1in
\noindent{\small\bf Remark 2.} The generalized Rankine-Hugoniot relation (3.13) reflects the exact relationship among the limit states on both sides of the delta shock wave and its location, propagation speed, weights, and the reassignment of $v$ on the delta shock wave.
\indent In addition, to guarantee uniqueness, we impose the following entropy condition:
$$ \frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}} \leq v_{\delta}(t)\leq \frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}, \eqno{(3.18)}$$
which means that all characteristic lines on both sides of the delta shock wave are incoming. A discontinuity satisfying (3.13) and (3.18) will be called a delta shock wave of system (1.1). The Riemann problem is thus reduced to solving the ordinary differential equations (3.13) with the initial data
$$t=0:\,\,\,\,x(0)=0,\,\,\,v_{\delta}(0)=v_{0},\,\,\,h(0)=0,\,\,\,w(0)=0,\eqno{(3.19)}$$
where $v_{0}$ is an undetermined constant. Integrating (3.13) from $0$ to $t$ with initial data (3.19), we have
$$\left\{\begin{array}{ll} \frac{h(t)}{\sqrt{1-v_{\delta}^{2}(t)/c^{2}}}=\Big[\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big]x(t)-\Big[\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big]t, \\ \frac{w(t) v_{\delta}(t)}{1-v_{\delta}^{2}(t)/c^2}= Fx(t)-Gt,\\ \frac{w(t)}{1-v_{\delta}^{2}(t)/c^2}= Ex(t)-Ft, \end{array} \right.\eqno{(3.20)}$$
where
$$E=\bigg[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big)v^2/c^2}{1-v^2/c^2}+\rho\bigg],\quad F=\bigg[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big) v}{1-v^2/c^2}\bigg],\quad G=\bigg[\frac{\big(-\frac{1}{\rho c^{2}}+\rho \big)v^2}{1-v^2/c^2}-\frac{1}{\rho}\bigg].$$
Under the entropy condition (3.18), we can solve (3.20) to obtain
$$ \left\{ \begin{array}{ll} x(t)=\frac{F+\sqrt{F^{2}-EG}}{E}t, \\ v_{\delta}(t)=\frac{F+\sqrt{F^{2}-EG}}{E}, \\ w(t) =\sqrt{F^{2}-EG}\bigg(1-\Big(\frac{F+\sqrt{F^{2}-EG}}{cE}\Big)^2\bigg)t,\\ h(t)= \sqrt{1-\Big(\frac{F+\sqrt{F^{2}-EG}}{cE}\Big)^2}\,\bigg(\Big[\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big]\frac{F+\sqrt{F^{2}-EG}}{E} -\Big[\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big]\bigg)t, \end{array} \right.\eqno{(3.21)}$$
for $E\neq 0$, and
$$ \left\{ \begin{array}{ll} x(t)=\frac{G}{2F}t, \\ v_{\delta}(t)=\frac{G}{2F}, \\ w(t) =-F\bigg(1-\Big(\frac{G}{2Fc}\Big)^{2}\bigg)t,\\ h(t)=\sqrt{1-\Big(\frac{G}{2Fc}\Big)^{2}}\,\bigg(\Big[\frac{n}{\sqrt{1-v^{2}/c^{2}}}\Big]\frac{G}{2F}-\Big[\frac{nv}{\sqrt{1-v^{2}/c^{2}}}\Big]\bigg)t, \end{array} \right.\eqno{(3.22)}$$
for $E=0$. The proof is similar to that in [10], so we omit it.
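\vskip 0.1in
\noindent{\small\bf Remark.} Formulas (3.20)-(3.22) are straightforward to evaluate. The following Python sketch (an illustration only; the sample Riemann data are hypothetical, chosen merely so that condition (3.1) and the entropy condition (3.18) hold) computes the propagation speed $v_{\delta}$ and the weight rates $w(t)/t$ and $h(t)/t$ from given constant states.
\begin{verbatim}
import math

def delta_shock(n_m, rho_m, v_m, n_p, rho_p, v_p, c=1.0):
    jump = lambda qm, qp: qp - qm            # [q] = q_+ - q_-
    q = lambda rho: -1.0/(rho*c**2) + rho    # -1/(rho c^2) + rho
    gam = lambda v: 1.0 - v**2/c**2          # 1 - v^2/c^2
    E = jump(q(rho_m)*v_m**2/c**2/gam(v_m) + rho_m,
             q(rho_p)*v_p**2/c**2/gam(v_p) + rho_p)
    F = jump(q(rho_m)*v_m/gam(v_m), q(rho_p)*v_p/gam(v_p))
    G = jump(q(rho_m)*v_m**2/gam(v_m) - 1.0/rho_m,
             q(rho_p)*v_p**2/gam(v_p) - 1.0/rho_p)
    if abs(E) > 1e-12:                       # formula (3.21)
        v_d = (F + math.sqrt(F**2 - E*G)) / E
        w_rate = math.sqrt(F**2 - E*G) * (1.0 - (v_d/c)**2)
    else:                                    # formula (3.22)
        v_d = G / (2.0*F)
        w_rate = -F * (1.0 - (v_d/c)**2)
    N  = jump(n_m/math.sqrt(gam(v_m)), n_p/math.sqrt(gam(v_p)))
    Nv = jump(n_m*v_m/math.sqrt(gam(v_m)), n_p*v_p/math.sqrt(gam(v_p)))
    h_rate = math.sqrt(1.0 - (v_d/c)**2) * (N*v_d - Nv)
    return v_d, w_rate, h_rate

# hypothetical Riemann data in the delta shock regime (3.1)
print(delta_shock(n_m=1.0, rho_m=2.0, v_m=0.6, n_p=2.0, rho_p=3.0, v_p=-0.6))
\end{verbatim}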
\vskip 0.1in
Thus, we have proved the following result.
\vskip 0.1in
\noindent{\small\bf Theorem 3.1.} On the physically relevant region, under the condition $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\leq\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, the Riemann problem (1.1), (1.2) and (1.3) admits a unique entropy solution in the sense of distributions of the form
$$(n, \rho,v)(t, x)=\left\{ \begin{array}{ll} (n_{-}, \rho_{-}, v_{-}), & \hbox{$x<x(t)$,} \\ (h(t)\delta(x-x( t)),w(t)\delta(x-x( t)), v_{\delta}(t)), & \hbox{$x=x(t)$,} \\ (n_{+}, \rho_{+}, v_{+}), & \hbox{$x>x(t)$,} \end{array} \right. \eqno{(3.23)}$$
where $x(t)$, $v_{\delta}(t)$, $h(t)$ and $w(t)$ are given by (3.21) for $E\neq 0$ or by (3.22) for $E=0$.
\vskip 0.1in
Finally, combining this with the results of Section 2, we conclude the following.
\vskip 0.1in
\noindent{\small\bf Theorem 3.2.} For the Riemann problem (1.1), (1.2) and (1.3), on the physically relevant region, there exists a unique entropy solution, which consists of three contact discontinuities when $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}>\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$, and of a delta shock wave, on which both $\rho$ and $n$ simultaneously contain a Dirac delta function, when $\frac{v_{+}+\frac{1}{\rho_{+}}}{1+\frac{v_{+}}{\rho_{+} c^{2}}}\leq\frac{v_{-}-\frac{1}{\rho_{-}}}{1-\frac{v_{-}}{\rho_{-} c^{2}}}$.
\vskip 10pt
\begin{thebibliography}{99}
\bibitem{s1} N. Bilic, G. B. Tupper, R. D. Viollier, Dark matter, dark energy and the Chaplygin gas, arXiv:astro-ph/0207423.
\bibitem{s1} Y. Brenier, Solutions with concentration to the Riemann problem for one-dimensional Chaplygin gas equations, J. Math. Fluid Mech. 7 (2005) S326-S331.
\bibitem{s1} A. Bressan, Hyperbolic Systems of Conservation Laws: The One-Dimensional Cauchy Problem, Oxford Univ. Press, Oxford, 2000.
\bibitem{s1} T. Chang, L. Hsiao, The Riemann Problem and Interaction of Waves in Gas Dynamics, Pitman Monogr. Surv. Pure Appl. Math., vol. 41, Longman Scientific and Technical, 1989.
\bibitem{s11} S. Chaplygin, On gas jets, Sci. Mem. Moscow Univ. Math. Phys. 21 (1904) 1-121.
\bibitem{s7} G.Q. Chen, H. Liu, Formation of $\delta$-shocks and vacuum states in the vanishing pressure limit of solutions to the Euler equations for isentropic fluids, SIAM J. Math. Anal. 34 (2003) 925-938.
\bibitem{4} G.Q. Chen, Y. Li, Stability of Riemann solutions with large oscillation for the relativistic Euler equations, J. Differential Equations 202 (2004) 332-353.
\bibitem{s1} J. Chen, Conservation laws for relativistic fluid dynamics, Arch. Ration. Mech. Anal. 139 (1997) 377-398.
\bibitem{s3} H. Cheng, Riemann problem for one-dimensional system of conservation laws of mass, momentum and energy in zero-pressure gas dynamics, Differ. Equ. Appl. 4 (2012) 653-664.
\bibitem{s1} H. Cheng, H. Yang, Riemann problem for the relativistic Chaplygin Euler equations, J. Math. Anal. Appl. 381 (2011) 17-26.
\bibitem{s1} C. M. Dafermos, Hyperbolic Conservation Laws in Continuum Physics, Springer, New York, 1999.
\bibitem{s1} V.G. Danilov, D. Mitrovic, Delta shock wave formation in the case of triangular hyperbolic system of conservation laws, J. Differential Equations 245 (2008) 3704-3734.
\bibitem{s1} V.G. Danilov, V.M.
Shelkovich, Dynamics of propagation and interaction of $\deltaelta$-shock waves in conservation law systems, J. Differential Equations 211 (2005) 333-381. \betaibitem{s1}M. Ding, Existence and stability of rarefaction wave to 1-D piston problem for the relativistic full Euler equations, J. Differential Equations 262 (2017) 6068-6108. \betaibitem{s1} Y. Geng, Y. Li, Non-relativistic global limits of entropy solutions to the extremely relativistic Euler equations, Z. Angew. Math. Phys. 61 (2010) 201-220. \betaibitem{s1} V. Gorini, A. Kamenshchik, U. Moschella, V. Pasquier, The Chaplygin gas as a model for dark energy, arXiv:gr-qc/0403062. \betaibitem{s1} B.T. Hayes, P.G. LeFloch, Measure solutions to a strictly hyperbolic system of conservation laws, Nonlinearity 9 (1996) 1547-1563. \betaibitem{s1}C. H. Hsu, S. S. Lin, T. Makino, On the relativistic Euler equations, Methods Appl. Anal. 8 (2001) 159-207. \betaibitem{s12} T. Karman, Compressibility effects in aerodynamics, J Aeron Sci 8 (1941) 337-365. \betaibitem{s14} B.L. Keyfitz, Conservation laws, delta-shocks and singular shocks, in: M. Grosser, G. Hormann, M. Oberguggenberger (Eds.), Nonlinear Theory of Generalized Functions, Chapman and Hall/CRC, Boca Raton, FL, 1999, pp. 99-112. \betaibitem{s14} B. L. Keyfitz, H. C. Kranzer, A viscosity approximation to a system of conservation laws with no classical Riemann solution, Nonlinear hyperbolic problems, Bordeaux 1988, Lecture Notes in Math., vol. 1402, Springer, Berlin, 1989. pp. 185-197. \betaibitem{s14} B. L. Keyfitz, H. C. Kranzer, Spaces of weighted measures for conservation laws with singular shock solutions, J Differential Equations 118 (1995) 420-451. \betaibitem{s14} B. L. Keyfitz, C. Tsikkou, Conserving the Wrong Variables in Gas Dynamics: A Riemann Solution with Singular Shocks, Quart. Appl. Math. 70 (2012) 407-436. In the special issue in honor of Dafermos' 70th birthday. \betaibitem{s1} D.J. Korchinski, Solution of a Riemann problem for a 2$\timeses$2 system of conservation laws possessing no classical weak solution, thesis, Adelphi University, 1977. \betaibitem{s1} P.D. Lax, Hyperbolic systems of conservation laws and the mathematical theory of shock waves, Regional Conf. Series in Appl. Math. 11, SIAM, Philadelphia, 1973. \betaibitem{s1} J. Li, T. Zhang, S.L. Yang, The Two-Dimensional Riemann Problem in Gas Dynamics, Longman Scientific and Technical, 1998. \betaibitem{s1} T.T. Li, T.H. Qin, Physics and Partial Differential Equations, Volume II, Higher Education Press, Beijing, 2014, translated by Yachun Li. \betaibitem{s15} M. Nedeljkov, Shadow Waves: Entropies and Interactions for Delta and Singular Shocks, Arch. Ration. Mech. Anal. 197 (2010) 489-537. \betaibitem{s15} M. Nedeljkov, Higher order shadow waves and delta shock blow up in the Chaplygin gas, J. Differential Equations 256 (2014) 3859-3887. \betaibitem{s1} B. Nilsson, V.M. Shelkovich, Mass, momentum and energy conservation laws in zero-pressure gas dynamics and delta-shocks, Appl. Anal. 90 (2011) 1677-1689. \betaibitem{s1} B. Nilsson, O.S. Rozanova, V.M. Shelkovich, Mass, momentum and energy conservation laws in zero-pressure gas dynamics and $\deltaelta$-shocks: II, Appl. Anal. 90 (2011) 831-842. \betaibitem{s1}Y. Pang, Delta shock wave in the compressible Euler equations for a Chaplygin gas, J. Math. Anal. Appl. 448 (2017) 245-261. \betaibitem{s1}Y. Pang, Delta shock wave with Dirac delta function in multiple components for the system of generalized Chaplygin gas dynamics, Boundary Value Problems (2016) 2016: 202. 
\betaibitem{s18} E. Panov, V. M. Shelkovich, $\deltaelta'$-Shock waves as a new type of solutions to systems of conservation laws, J. Differential Equations 228 (2006) 49-86. \betaibitem{9} D. Serre, Multidimensional shock interaction for a Chaplygin gas, Arch. Ration. Mech. Anal. 191 (2009) 539-577. \betaibitem{9} D. Serre, Systems of conservation laws I: Hyperbolicity, Entropies, Shock waves, Systems of Conservation Laws II: Geometric Structures, Oscillations, and Initial-Boundary Value Problems, Cambridge University Press, Cambridge, 2000. \betaibitem{s16} W. Sheng, G. Yin, Delta shocks and vacuum states in vanishing pressure limits of solutions to the relativistic Euler equations for polytropic gases, J. Math. Anal. Appl. 355 (2009) 594-605. \betaibitem{s3} W. Sheng, T. Zhang, The Riemann problem for the transportation equations in gas dynamics, in: Mem. Amer. Math. Soc., 137, AMS, Providence, 1999. \betaibitem{s3}J. Smoller, Shock Waves and Reaction Diffusion Equations, Springer-Verlag, New York, 1983. \betaibitem{s3} J. Smoller, B. Temple, Global solutions of the relativistic Euler equations, Comm. Math. Phys 156 (1993) 67-99. \betaibitem{s3} D.C. Tan, T. Zhang, Y.X. Zheng, Delta-shock waves as limits of vanishing viscosity for hyperbolic systems of conservation laws, J. Differential Equations 112 (1994) 1-32. \betaibitem{s1} A. H. Taub, Relativistic Rankine-Hugoniot Equations, Phys. Rev. 74 (1948) 328-334. \betaibitem{s1} A. H. Taub, Relativistic Hydrodynamics, Relativity Theory and Astrophysics 1, Relativity and Cosmology (J. Ehlers, ed.), American Mathematical Society, Providence, 1967, pp. 170-193. \betaibitem{s1} K. W. Thompson, The special relativistic shock tube, J. Fluid Mech. 171 (1986) 365-375. \betaibitem{s12} H. Tsien, Two dimensional subsonic flow of compressible fluids, J Aeron Sci 6 (1939) 399-407. \betaibitem{s1} H. Yang, Y. Zhang, New developments of delta shock waves and its applications in systems of conservation laws, J. Differential Equations 252 (2012) 5951-5993. \betaibitem{s1} H. Yang, Y. Zhang, Delta shock waves with Dirac delta function in both components for systems of conservation laws, J. Differential Equations 257 (2014) 4369-4402. \betaibitem{s1}G. Yin, W. Sheng, Delta wave formation and vacuum state in vanishing pressure limit for system of conservation laws to relativistic fluid dynamics, Z. Angew. Math. Mech. 95 (2015) 49-65. \betaibitem{s1} G. Yin, K. Song, Limits of Riemann Solutions to the Relativistic Euler Systems for Chaplygin Gas as Pressure Vanishes, Abstract and Applied Analysis, vol. 2013, Article ID 296361, 15 pages. \end{thebibliography} \end{document}
\begin{document}
\title[Well-posedness]{On well-posedness of generalized Hall-magneto-hydrodynamics}
\author[Mimi Dai]{Mimi Dai}
\address{Department of Mathematics, Stat. and Comp. Sci., University of Illinois Chicago, Chicago, IL 60607, USA}
\email{[email protected]}
\author[Han Liu]{Han Liu}
\address{Department of Mathematics, Stat. and Comp. Sci., University of Illinois Chicago, Chicago, IL 60607, USA}
\email{[email protected]}
\thanks{The work of the authors was partially supported by NSF Grant DMS--1815069.}
\begin{abstract}
We obtain a local well-posedness result for the generalized Hall-magneto-hydrodynamics system in the Besov spaces ${\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}(\mathbb R^3)}$ with suitable indices $\alpha_1, \alpha_2, \beta$ and $\gamma$. As a corollary, the hyperdissipative electron magneto-hydrodynamics system is globally well-posed in $\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbb R^3)$ for small initial data.
\bigskip
KEY WORDS: Well-posedness, Hall-MHD, Electron-MHD, Small data.
CLASSIFICATION CODE: 35Q35, 35Q60, 35A05
\end{abstract}
\maketitle
\section{Introduction}
In this paper, we study the well-posedness problem for the following generalized Hall-magneto-hydrodynamics (Hall-MHD) system
\begin{equation}\label{fhmhd}
\begin{cases}
u_t+(u\cdot \nabla)u-(b \cdot \nabla)b+\nabla p = -\nu(-\Delta)^{\alpha_1} u,\\
b_t+(u \cdot \nabla )b-(b \cdot \nabla) u+ \eta \nabla \times ((\nabla \times b) \times b)= -\mu (-\Delta)^{\alpha_2} b,\\
\nabla \cdot u=0, \ \nabla \cdot b=0,\\
u(0,x)=u_0, \ b(0, x)=b_0, \ t \in \mathbf{R}^+, \ x\in \mathbf{R}^3,
\end{cases}
\end{equation}
with parameters $\alpha_1, \alpha_2 >0$ and constants $\nu, \mu >0$, $\eta \geq 0$. In particular, the fourth term on the left-hand side of the second equation is called the Hall term. When $\alpha_1=\alpha_2=1$ and $\eta>0$, system (\ref{fhmhd}) becomes the standard Hall-MHD system, whereas the case $\eta=0$ corresponds to the generalized magneto-hydrodynamics (MHD) system. Derived in \cite{ADFL} as the incompressible limit of a two-fluid isothermal Euler-Maxwell system for electrons and ions, the Hall-MHD system describes the evolution of a system of charged particles that can be approximated as a conducting fluid in the presence of a magnetic field $b$, with $u$ denoting the fluid velocity, $p$ the pressure, $\nu$ the viscosity, $\mu$ the magnetic resistivity and $\eta$ a constant determined by the ion inertial length. The MHD and Hall-MHD systems have a wide range of applications in plasma physics and astrophysics, including modelling solar wind turbulence, designing tokamaks, and studying the origin and dynamics of the terrestrial magnetosphere. Notably, the Hall-MHD system plays a vital role in interpreting the magnetic reconnection phenomenon frequently observed in space plasmas. For more physical background, we refer readers to \cite{C, G1, G2, GB, L, PM}. Over the past decade, various mathematical results concerning the Hall-MHD system have been obtained. A mathematically rigorous derivation of the system is due to Acheritogaray, Degond, Frouvelle and Liu \cite{ADFL}.
Concerning the solvability of the system, Chae, Degond and Liu \cite{CDL} obtained global-in-time existence of weak solutions and local-in-time existence of classical solutions. In \cite{CL}, Chae and Lee established a blow-up criterion and a small data global existence result. In addition, local well-posedness results can be found in the works of Dai \cite{D2, D3}, and global existence results for small data were also proved by Wan and Zhou \cite{WZ1} as well as by Kwak and Lkhagvasuren \cite{KL}. For various regularity criteria, readers are referred to \cite{D1, FFNZ, FLN, HAHZ, WL, Y3, Y4, YZ0, Z}. Regarding the properties of the solutions, the temporal decay of weak solutions was studied by Chae and Schonbek \cite{CS}, while the stability of global strong solutions is due to Benvenutti and Ferreira \cite{BF}. On the other hand, in the irresistive setting, there are striking ill-posedness results due to Chae and Weng \cite{CW} as well as Jeong and Oh \cite{JO}. Recently, Dai \cite{D4} proved the non-uniqueness of Leray-Hopf weak solutions via a convex integration scheme.
The generalized system (\ref{fhmhd}) has also attracted mathematicians' attention. Chae, Wan and Wu \cite{CWW} proved local well-posedness in the case $\alpha_1=0$, $\alpha_2>\frac{1}{2}$, while local well-posedness for $0<\alpha_1 \leq 2$, $1 < \alpha_2 \leq 2$ and global well-posedness for $\alpha_1 \geq \frac{5}{4}$, $\alpha_2 \geq \frac{7}{4}$ were obtained by Wan and Zhou \cite{WZ2} and by Wan \cite{W}, respectively. Small data global solutions were established in \cite{PMZ, WYT, Y1}. In addition, decay results for global smooth solutions in the cases where either $\alpha_1=0$ or $\alpha_2=0$ are due to Dai and Liu \cite{DL}. We refer readers to \cite{FSZ, GMS, JZ, PZ} for a number of regularity criteria.
In this paper, we shall prove that system (\ref{fhmhd}) is locally well-posed in the Besov space ${\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}}(\mathbf{R}^3)$ for suitable choices of $\alpha_1, \alpha_2, \beta$ and $\gamma$. Our main result is stated as follows.
\begin{Theorem}[Local well-posedness]\label{lwp}
For $(u_0, b_0) \in {\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}}(\mathbf{R}^3)$, there exists a unique local-in-time solution $(u,b)$ to system (\ref{fhmhd}) such that
$$(u,b) \in L^\infty\big(0,T; {\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}(\mathbf{R}^3)}\big)$$
with $T = T \big(\nu, \mu, \eta, \|u_0\|_{\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}}, \|b_0\|_{\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}}\big)$, provided that the parameters $\alpha_1, \alpha_2, \beta$ and $\gamma$ satisfy the following constraints:
\begin{equation}
\begin{cases}
\gamma \geq \max\{1, \frac{\alpha_1}{\alpha_2}\}, \\
\beta \geq \max\{ 2, \frac{(\gamma +1)\alpha_2}{2\alpha_1}\},\\
\frac{\gamma}{2}< \alpha_1 < \gamma,\\
\frac{\beta}{2} <\alpha_2<\beta.
\end{cases}
\end{equation}
\end{Theorem}
An interesting byproduct of the above result is small data global well-posedness for the electron MHD (EMHD) equations, the fluid-free version of system (\ref{fhmhd}).
\begin{Theorem}[Global existence for small data]\label{gwp}
Let $1<\alpha_2 <2$. There exists some $\varepsilon=\varepsilon(\mu)>0$ such that if $\|b_0\|_{\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)} \leq \varepsilon$, then there exists a solution $b$ to the EMHD system, i.e., system (\ref{fhmhd}) with $u \equiv 0$, satisfying
$$b \in L^\infty\big(0, +\infty; {\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)}\big) \text{ and } \sup_{t>0} t^{\frac{\alpha_2-1}{\alpha_2}}\|b\|_{L^\infty(\mathbf{R}^3)} < \infty.$$
\end{Theorem}
For the generalized MHD system, local and global well-posedness results in Besov spaces were proved in \cite{YZ} via the same mechanism as the one used in this paper, in spite of a major difference between the MHD and Hall-MHD systems in terms of scaling properties. In brief, the generalized MHD system scales as
$$ u_\lambda(t,x)=\lambda^{2\alpha_1-1}u(\lambda^{2\alpha_1}t, \lambda x), \ b_\lambda(t,x)=\lambda^{2\alpha_2-1}b(\lambda^{2\alpha_2}t, \lambda x), $$
while the EMHD equations scale as $b_\lambda(t,x)=\lambda^{2\alpha_2-2}b(\lambda^{2\alpha_2}t, \lambda x)$, resulting in an absence of scaling invariance, and hence of a notion of criticality, for the Hall-MHD system; this seems to render global well-posedness for the full system (\ref{fhmhd}) rather elusive. For system (\ref{fhmhd}) we can only establish local well-posedness, in contrast to the generalized MHD system, which possesses global-in-time solutions in the largest critical space $\dot B^{-(2\alpha_1-1)}_{\infty, \infty} \times \dot B^{-(2\alpha_2-1)}_{\infty, \infty}(\mathbf{R}^3)$, with $\alpha_1=\alpha_2$, $\frac{1}{2}< \alpha_1, \alpha_2 <1$, for small initial data, as proven in \cite{YZ}. The fact that the well-posedness result for the Hall-MHD system deviates from that for the MHD system is evidence that the new scales and nonlinear interactions introduced by the Hall term $\nabla \times ((\nabla \times b)\times b)$ play a significant role.
\bigskip
\section{Preliminaries} \label{sec:pre}
\subsection{Notation}
Throughout the paper, we will use $C$ to denote various constants. The notation $A \lesssim B$ means that $A \leq CB$ for some constant $C$. For simplicity, we denote the caloric extensions $e^{-\nu t (-\Delta)^{\alpha_1}}u_0$ and $e^{-\mu t(-\Delta)^{\alpha_2}}b_0$ by $\tilde u_0$ and $\tilde b_0$, respectively. In addition, we use $\mathbb{P}$ to denote the Helmholtz-Leray projection onto solenoidal vector fields, which acts on a vector field $\phi$ as
$$\mathbb{P}\phi = \phi +\nabla (-\Delta)^{-1} \mathrm{div}\, \phi.$$
\subsection{Besov spaces via Littlewood-Paley theory} \label{sec:LPD}
We briefly recall the homogeneous Littlewood-Paley decomposition, through which we define the homogeneous Besov spaces. For a complete description of Littlewood-Paley theory and its applications, we refer readers to \cite{BCD, G}. We introduce a radial function $\chi\in C_0^\infty(\mathbf{R}^n)$ such that $0 \leq \chi \leq 1$ and
\begin{equation}\notag
\chi(\xi)=
\begin{cases}
1, \ \ \mbox{ for } |\xi|\leq\frac{3}{4},\\
0, \ \ \mbox{ for } |\xi|\geq 1.
\end{cases}
\end{equation}
Let $\varphi \in C_0^\infty(\mathbf{R}^n)$ be such that $\varphi(\xi)=\chi(\xi/2)-\chi(\xi)$. We construct a family of smooth functions $\{\varphi_q \}_{q \in \mathbb{Z}}$ supported on dyadic annuli in frequency space, defined as
\begin{equation}\notag
\varphi_q(\xi)=\varphi(2^{-q}\xi), \ q \in \mathbb{Z}.
\end{equation}
One can see that $\{ \varphi_q\}_{q \in\mathbb Z}$ is a partition of unity on $\mathbf{R}^n\setminus\{0\}$. Denoting the Fourier transform and its inverse by $\mathcal{F}$ and $\mathcal{F}^{-1}$, respectively, we introduce $h:=\mathcal F^{-1}\varphi$. For $u \in \mathcal{S}'$, the homogeneous Littlewood-Paley projections are defined as
\begin{equation}\notag
\dot \Delta_qu:=\mathcal F^{-1}(\varphi(2^{-q}\xi)\mathcal Fu)=2^{nq}\displaystyle\int_{\mathbf{R}^n} h(2^qy)u(x-y)dy, \ q \in \mathbb{Z}.
\end{equation}
In view of the above definitions, the following identity holds in the sense of distributions:
$$ u= \sum_{q \in \mathbb Z} \dot \Delta_q u.$$
With each $\dot \Delta_q u$ supported in an annular domain in frequency space, the Littlewood-Paley projections provide a way to decompose a function into pieces with localized frequencies. For $s \in \mathbf{R}$ and $1 \leq p,q \leq \infty$, we define the homogeneous Besov space $\dot B^s_{p,q}$ as
$$\dot B^s_{p,q}(\mathbf{R}^n) = \big\{f \in \mathcal{S}'(\mathbf{R}^n) : \|f\|_{\dot B^{s}_{p,q}(\mathbf{R}^n)} < \infty \big\},$$
with the norm given by
\begin{equation}\notag
\|f\|_{\dot B^{s}_{p,q}(\mathbf{R}^n)}=
\begin{cases}
\displaystyle\big(\sum_{j \in \mathbb Z} (2^{sj}\|\dot \Delta_j f \|_{L^p(\mathbf{R}^n)})^q \big)^{\frac{1}{q}}, \ \text{ if } 1 \leq q < \infty,\\
\displaystyle \sup_{j \in \mathbb Z} (2^{sj}\|\dot \Delta_j f \|_{L^p(\mathbf{R}^n)}), \ \text{ if } q= \infty.
\end{cases}
\end{equation}
In this paper, we are primarily interested in the $L^\infty, \ell^\infty$-based Besov spaces $\dot B^{s}_{\infty, \infty}$.
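The dyadic decomposition above is also easy to experiment with numerically. The following Python sketch (an informal illustration only, not used in the proofs) approximates the blocks $\dot\Delta_j f$ of a function sampled on a large periodic interval by sharp Fourier cut-offs to the annuli $2^j\le|\xi|<2^{j+1}$, a crude surrogate for the smooth bumps $\varphi_j$, and evaluates the corresponding quantity $\sup_j 2^{sj}\|\dot\Delta_j f\|_{L^\infty}$; the sample function, the interval length and the range of $j$ are arbitrary choices made for the illustration.
\begin{verbatim}
import numpy as np

def dyadic_blocks(f, L, j_min=-4, j_max=8):
    # sharp-cut-off surrogate for the Littlewood-Paley blocks of f on [0, L)
    N = len(f)
    xi = 2*np.pi*np.fft.fftfreq(N, d=L/N)     # discrete frequencies
    fhat = np.fft.fft(f)
    blocks = {}
    for j in range(j_min, j_max + 1):
        mask = (np.abs(xi) >= 2.0**j) & (np.abs(xi) < 2.0**(j + 1))
        blocks[j] = np.fft.ifft(fhat*mask).real
    return blocks

def besov_seminorm(f, L, s, **kw):
    # sup_j 2^{sj} ||Delta_j f||_{L^infty}
    return max(2.0**(s*j)*np.max(np.abs(u))
               for j, u in dyadic_blocks(f, L, **kw).items())

L, N = 64*np.pi, 2**14
x = np.linspace(0.0, L, N, endpoint=False)
f = np.cos(7*x)*np.exp(-(x - L/2)**2/20.0)    # a smooth, localized sample
print(besov_seminorm(f, L, s=-0.5))
\end{verbatim}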
\subsection{Besov spaces and the heat kernel}
It turns out that negative-order Besov spaces can also be characterized via the action of the heat kernel. In particular, we have the following lemma, for whose proof we refer readers to \cite{Lr}.
\begin{Lemma}
Let $f \in {\dot B^{s}_{\infty, \infty}}$ for some $s<0$. The following norm equivalence holds:
\begin{equation}
\|f\|_{\dot B^{s}_{\infty, \infty}}=\sup_{t>0} t^{-\frac{s}{2\alpha}}\|e^{-t(-\Delta)^{\alpha}}f\|_{L^\infty}, \text{ where } \alpha >0.
\end{equation}
\end{Lemma}
More generally, the following lemma concerning the action of the heat semigroup in Besov spaces holds true and shall be used extensively in this paper.
\begin{Lemma}\label{bsv}
i) For $\alpha>0$, the following inequalities hold:
\begin{gather*}
\|e^{-t(-\Delta)^{\alpha}}f\|_{L^\infty} \leq C \|f\|_{L^\infty},\\
\|\nabla e^{-t(-\Delta)^{\alpha}}f\|_{L^\infty} \leq C t^{-\frac{1}{2\alpha}}\|f\|_{L^\infty},\\
\|\nabla \mathbb{P}e^{-t(-\Delta)^{\alpha}}f\|_{L^\infty} \leq C t^{-\frac{1}{2\alpha}}\|f\|_{L^\infty}.
\end{gather*}
ii) For $\alpha>0$ and $s_0 \leq s_1$, the following inequalities hold:
\begin{gather*}
\|e^{-t(-\Delta)^{\alpha}}f\|_{\dot B^{s_1}_{\infty, \infty}} \leq C t^{-\frac{1}{2\alpha}(s_1-s_0)} \|f\|_{\dot B^{s_0}_{\infty, \infty}},\\
\|\nabla^k e^{-t(-\Delta)^{\alpha}}f\|_{\dot B^{s_1}_{\infty, \infty}} \leq C t^{-\frac{1}{2\alpha}(s_1-s_0+k)} \|f\|_{\dot B^{s_0}_{\infty, \infty}}.
\end{gather*}
\end{Lemma}
Proofs of Lemma \ref{bsv} can be found in \cite{KOT, MYZ}.
\subsection{Mild solutions}
A mild solution to system (\ref{fhmhd}) is a fixed point of the map
\begin{equation}\label{msl}
S(u,b):=\begin{pmatrix}S_1(u,b) \\ S_2(u,b)\end{pmatrix},
\end{equation}
where $S_1(u,b)$ and $S_2(u,b)$ are given by Duhamel's formulae
\begin{equation}\label{msu}
\begin{split}
S_1(u,b):= u(t,x)=& e^{-\nu t (-\Delta)^{\alpha_1}}u_0(x)- \int_0^t e^{-\nu (t-s)(-\Delta)^{\alpha_1}}\mathbb{P}\nabla \cdot (u \otimes u)(s) \mathrm{d}s\\
& + \int_0^t e^{-\nu (t-s)(-\Delta)^{\alpha_1}}\mathbb{P}\nabla \cdot (b \otimes b)(s) \mathrm{d}s,
\end{split}
\end{equation}
\begin{equation}\label{msb}
\begin{split}
S_2 (u,b):= b(t,x) =& e^{-\mu t (-\Delta)^{\alpha_2}}b_0(x)- \int_0^t e^{-\mu (t-s)(-\Delta)^{\alpha_2}}\mathbb{P}\nabla \cdot (u \otimes b)(s) \mathrm{d}s\\
& + \int_0^t e^{-\mu (t-s)(-\Delta)^{\alpha_2}}\mathbb{P}\nabla \cdot (b \otimes u)(s) \mathrm{d}s\\
& - \eta \int_0^t e^{-\mu (t-s)(-\Delta)^{\alpha_2}}\nabla \times( \nabla \cdot (b \otimes b))(s) \mathrm{d}s.
\end{split}
\end{equation}
In (\ref{msb}), we have applied the vector identity $\nabla \times (\nabla \cdot (b \otimes b))=\nabla \times ((\nabla \times b)\times b)$ to the Hall term. To further simplify notation, we view the integrals in (\ref{msu}) and (\ref{msb}) as bilinear forms.
\begin{Definition}[Bilinear forms]
Let $f,g \in \mathcal{S}^{'}$. The bilinear forms $\mathcal{B}_{\alpha_1}(\cdot,\cdot)$, $\mathcal{B}_{\alpha_2}(\cdot, \cdot)$ and $\mathfrak{B}_{\alpha_2}(\cdot, \cdot)$ are defined as follows:
\begin{equation}\notag
\begin{split}
\mathcal{B}_{\alpha_1}(f,g)=& \int_0^t e^{-\nu (t-s)(-\Delta)^{\alpha_1}}\mathbb{P}\nabla \cdot (f \otimes g)(s) \mathrm{d}s;\\
\mathcal{B}_{\alpha_2}(f,g)=& \int_0^t e^{-\mu (t-s)(-\Delta)^{\alpha_2}}\mathbb{P}\nabla \cdot (f \otimes g)(s) \mathrm{d}s;\\
\mathfrak{B}_{\alpha_2}(f,g)= & \eta \int_0^t e^{-\mu (t-s)(-\Delta)^{\alpha_2}}\nabla \times( \nabla \cdot (f \otimes g))(s) \mathrm{d}s.
\end{split}
\end{equation}
\end{Definition}
In view of the above, we can write (\ref{msl}), (\ref{msu}) and (\ref{msb}) as
\begin{equation}
\begin{split}
S_1(u,b)= & \tilde u_0(x) - \mathcal{B}_{\alpha_1} (u,u) + \mathcal{B}_{\alpha_1} (b,b),\\
S_2(u,b)= & \tilde b_0(x) - \mathcal{B}_{\alpha_2} (u,b) + \mathcal{B}_{\alpha_2} (b,u) - \mathfrak{B}_{\alpha_2} (b,b).
\end{split}
\end{equation}
\subsection{The contraction principle}
Given the mild solution formulation (\ref{msl}), a classical approach is to find a fixed point by iterating the map $(u,b) \mapsto S(u,b)$. In order to do so, it is essential to find a space $\mathcal{E}$ such that the bilinear forms $\mathcal{B}_{\alpha}(\cdot, \cdot)$ and $\mathfrak{B}_{\alpha}(\cdot, \cdot)$ are bounded from $\mathcal{E} \times \mathcal{E}$ to $\mathcal{E}$. In this paper, we shall use the following lemma, proven in \cite{Lr} and \cite{M} as a simple consequence of the Banach fixed point theorem.
\begin{Lemma}\label{pic}
Let $\mathcal{E}$ be a Banach space. Given a bilinear form $\mathbb{B}: \mathcal{E} \times \mathcal{E} \to \mathcal{E}$ such that $\|\mathbb{B}(u,v)\|_\mathcal{E} \leq C_0 \|u\|_\mathcal{E} \|v\|_\mathcal{E}$ for all $u, v \in \mathcal{E}$ and some constant $C_0 >0$, the following assertions hold for the equation
\begin{equation}\label{eq1}
u= y +\mathbb{B}(u,u).
\end{equation}
i) Suppose that $y \in B_\varepsilon(0):= \{f\in \mathcal{E}: \|f\|_{\mathcal{E}}< \varepsilon \}$ for some $\varepsilon \in \big(0, \frac{1}{4C_0}\big)$. Then equation (\ref{eq1}) has a solution $u \in B_{2\varepsilon}(0):= \{f \in \mathcal{E}: \|f\|_{\mathcal{E}}<2\varepsilon \}$, which is, in fact, the unique solution in the ball $\overline{B_{2\varepsilon}(0)}$.
ii) In addition to the hypotheses of i), suppose that $\bar y \in B_\varepsilon(0)$, $\bar u \in B_{2\varepsilon}(0)$ and $\bar u=\bar y+\mathbb{B}(\bar u, \bar u)$. Then the following continuous dependence holds:
\begin{equation}\label{inq}
\|u-\bar u\|_\mathcal{E} \leq \frac{1}{1-4\varepsilon C_0}\|y -\bar y\|_\mathcal{E}.
\end{equation}
\end{Lemma}
It can be seen from inequality (\ref{inq}) that to ensure local well-posedness it suffices that $C_0=CT^a$ for some $a>0$, while global well-posedness requires $C_0$ to be bounded above by a time-independent constant.
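The mechanism behind Lemma \ref{pic} is elementary and can be illustrated on a scalar model: with $\mathbb{B}(u,v)=C_0\,uv$, the equation $u=y+C_0u^2$ has a root of size comparable to $|y|$ whenever $|y|<\frac{1}{4C_0}$, and Picard iteration converges to it. The following Python sketch (an informal toy illustration only; the numerical values of $C_0$ and $y$ are arbitrary) carries out this iteration and compares the limit with the explicit small root.
\begin{verbatim}
import math

def picard(y, C0, tol=1e-14, max_iter=1000):
    # iterate u_{k+1} = y + C0*u_k^2, the scalar model of u = y + B(u,u)
    u = 0.0
    for _ in range(max_iter):
        u_next = y + C0*u*u
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

C0, y = 1.0, 0.2                      # note |y| < 1/(4*C0) = 0.25
u = picard(y, C0)
u_exact = (1.0 - math.sqrt(1.0 - 4.0*C0*y)) / (2.0*C0)   # small root
print(u, u_exact)                     # the two values agree
\end{verbatim}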
\bigskip
\section{Proofs of Theorems}
This section is devoted to the proofs of Theorems \ref{lwp} and \ref{gwp}. We work within a framework based on the concepts of the ``admissible path space'' and the ``adapted value space'', as formulated in \cite{Lr}. The idea is to first identify an ``admissible path space'' $\mathcal{E}_T$ in which we may apply the contraction principle, and then characterize the ``adapted value space'' $E_T$ associated with $\mathcal{E}_T$. In our case, we consider the space
$$ E_T =\{f: f \in \mathcal{S}', \ e^{-t(-\Delta)^{\alpha_i}}f \in \mathcal{E}_T, \ 0< t <T\}, \ i=1 \text{ or } 2. $$
To start, we define the Banach spaces $X_T$ and $Y_T$ and the admissible path space $\mathcal{E}_T:=X_T \times Y_T$:
\begin{equation}\label{exy}
X_T=\Big\{f: \mathbf{R}^{+} \to L^\infty(\mathbf{R}^3): \nabla \cdot f=0 \text{ and } \sup_{0< t <T} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\|f(t)\|_{L^\infty(\mathbf{R}^3)} <\infty \Big\},
\end{equation}
\begin{align}\label{eyx}
Y_T=\Big\{f: \mathbf{R}^{+} \to L^\infty(\mathbf{R}^3): \nabla \cdot f=0 \text{ and } \sup_{ 0< t <T} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\|f(t)\|_{L^\infty(\mathbf{R}^3)} <\infty \Big\}.
\end{align}
By formulae (\ref{msu}) and (\ref{msb}), together with the characterization of homogeneous Besov spaces in terms of the heat flow (Lemma \ref{bsv}), we have the following inequalities:
\begin{equation}\notag
\begin{split}
\|u\|_{X_T} \leq & \sup_{t>0} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\| \tilde u_0 \|_\infty + \|\mathcal{B}_{\alpha_1}(u,u)\|_{X_T} + \|\mathcal{B}_{\alpha_1}(b,b)\|_{X_T}\\
\leq & C_\nu \|u_0 \|_{ \dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty} } + \|\mathcal{B}_{\alpha_1}(u,u)\|_{X_T} + \|\mathcal{B}_{\alpha_1}(b,b)\|_{X_T},
\end{split}
\end{equation}
\begin{equation}\notag
\begin{split}
\|b\|_{Y_T} \leq & \sup_{t>0} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\| \tilde b_0 \|_\infty + \|\mathcal{B}_{\alpha_2}(u,b)\|_{Y_T}+ \|\mathcal{B}_{\alpha_2}(b,u)\|_{Y_T}+\|\mathfrak{B}_{\alpha_2}(b,b)\|_{Y_T}\\
\leq & C_\mu \|b_0 \|_{ \dot B^{-(2\alpha_2-\beta)}_{\infty, \infty} }+ \|\mathcal{B}_{\alpha_2}(u,b)\|_{Y_T}+ \|\mathcal{B}_{\alpha_2}(b,u)\|_{Y_T}+\|\mathfrak{B}_{\alpha_2}(b,b)\|_{Y_T}.
\end{split}
\end{equation}
Clearly, ${\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}(\mathbf{R}^3)}$ is an adapted value space corresponding to the admissible path space $\mathcal{E}_T$ given by (\ref{exy}) and (\ref{eyx}). We proceed to prove the following proposition.
\begin{Proposition}
Suppose that the parameters $\alpha_1, \alpha_2, \beta$ and $\gamma$ satisfy
\begin{equation}\label{cnt}
\begin{cases}
\gamma \geq \max\{1, \frac{\alpha_1}{\alpha_2}\}, \\
\beta \geq \max\{ 2, \frac{(\gamma +1)\alpha_2}{2\alpha_1}\},\\
\frac{\gamma}{2}< \alpha_1 < \gamma,\\
\frac{\beta}{2} <\alpha_2<\beta.
\end{cases}
\end{equation}
If $(u,b) \in \mathcal{E}_T$ for some $0<T<\infty$, then $S(u,b)-(\tilde u_0, \tilde b_0) \in \mathcal{E}_T$. In particular,
\begin{equation}\label{bbd}
\|S(u,b)-(\tilde u_0, \tilde b_0)\|_{\mathcal{E}_T} \leq C T^a \|(u,b)\|_{\mathcal{E}_T}^2
\end{equation}
for some $a>0$ and $C=C(\nu, \mu, \eta)>0$.
\end{Proposition}
\textbf{Proof:\ } First, we remark that the constraints on the parameters yield a non-empty set, since the combination $\gamma=1$, $\beta=2$, $\alpha_1=1-\delta$ and $\alpha_2=2-2\delta$ with $\frac{1}{4}< \delta < \frac{1}{2}$ clearly satisfies (\ref{cnt}). To prove (\ref{bbd}), it suffices to show that the bilinear forms are bounded from $\mathcal{E}_T\times \mathcal{E}_T$ to $\mathcal{E}_T$, with bounds depending on $\nu, \mu, \eta$ and $T$. To this end, we invoke the following property of the Beta function.
More specifically, for $\alpha>1$ and $0 < \theta < \alpha$, we have
\begin{equation}\label{bet}
\int^t_0 (t-\tau)^{-\frac{1}{\alpha}}\tau^{-\frac{\theta}{\alpha}}\mathrm{d}\tau = t^{1-\frac{1}{\alpha}-\frac{\theta}{\alpha}}B\Big(1-\frac{\theta}{\alpha}, 1-\frac{1}{\alpha}\Big) \leq C t^{1-\frac{1}{\alpha}-\frac{\theta}{\alpha}}.
\end{equation}
Let $\gamma \geq 1$ and $\frac{\gamma}{2} < \alpha_1 < \gamma$. Via integration by parts, H\"older's inequality, identity (\ref{bet}) and definition (\ref{exy}), we have the following inequalities:
\begin{equation}\notag
\begin{split}
\|\mathcal{B}_{\alpha_1}(u,u)\|_{X_T} \leq & C_\nu \sup_{0<t<T} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\int^t_0(t-s)^{-\frac{1}{2\alpha_1}}\|u(s)\|_\infty\|u(s)\|_\infty \mathrm{d}s\\
\leq & C_\nu \|u\|_{X_T}^2 \sup_{0<t<T} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\int^t_0(t-s)^{-\frac{1}{2\alpha_1}}s^{-2+\frac{\gamma}{\alpha_1}} \mathrm{d}s\\
\leq & C_\nu T^{\frac{\gamma-1}{2\alpha_1}} \|u\|_{X_T}^2.
\end{split}
\end{equation}
Similarly, the following estimate holds provided that $\gamma \geq 1$, $\frac{\gamma}{2} < \alpha_1 < \gamma$, $\frac{\beta}{2} < \alpha_2 < \beta$ and $\beta \geq \frac{(\gamma+1)\alpha_2}{2\alpha_1}$:
\begin{equation}\notag
\begin{split}
\|\mathcal{B}_{\alpha_1}(b,b)\|_{X_T} \leq & C_\nu \sup_{0<t<T} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\int^t_0(t-s)^{-\frac{1}{2\alpha_1}}\|b(s)\|_\infty\|b(s)\|_\infty \mathrm{d}s\\
\leq & C_\nu \|b\|_{Y_T}^2\sup_{0<t<T} t^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\int^t_0(t-s)^{-\frac{1}{2\alpha_1}}s^{-2+\frac{\beta}{\alpha_2}} \mathrm{d}s\\
\leq & C_\nu T^{\frac{\beta}{\alpha_2}-\frac{\gamma+1}{2\alpha_1}} \|b\|_{Y_T}^2.
\end{split}
\end{equation}
To bound the term $\|\mathcal{B}_{\alpha_2}(b,u)\|_{Y_T}$, we further require that $\alpha_2 > \frac{1}{2}$ and $\gamma \geq \frac{\alpha_1}{\alpha_2}$:
\begin{equation}\notag
\begin{split}
\|\mathcal{B}_{\alpha_2}(b,u)\|_{Y_T} \leq & C_\mu \sup_{0<t<T} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{2\alpha_2}}\|u(s)\|_\infty\|b(s)\|_\infty \mathrm{d}s\\
\leq & C_\mu \|u\|_{X_T} \|b\|_{Y_T} \sup_{0<t<T} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{2\alpha_2}}s^{-2+\frac{\gamma}{2\alpha_1}+\frac{\beta}{2\alpha_2}}\mathrm{d}s\\
\leq & C_\mu T^{\frac{\gamma}{2\alpha_1}-\frac{1}{2\alpha_2}} \|u\|_{X_T} \|b\|_{Y_T}.
\end{split}
\end{equation}
We note that the term $\|\mathcal{B}_{\alpha_2}(u,b)\|_{Y_T}$ can be estimated in an identical manner.
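The identity (\ref{bet}), which drives all of the estimates above, is also easy to confirm numerically. The following Python sketch (an informal check only; the values of $\alpha$, $\theta$ and $t$ are arbitrary admissible choices) compares a direct quadrature of the left-hand side with the closed form on the right-hand side; the integrand has integrable endpoint singularities, which the adaptive routine handles adequately for this purpose.
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import beta as Beta

def check_beta_identity(alpha, theta, t):
    # left-hand side of the identity, by adaptive quadrature
    lhs, _ = quad(lambda tau: (t - tau)**(-1.0/alpha)*tau**(-theta/alpha),
                  0.0, t)
    # right-hand side: t^{1 - 1/alpha - theta/alpha} B(1 - theta/alpha, 1 - 1/alpha)
    rhs = t**(1.0 - 1.0/alpha - theta/alpha)*Beta(1.0 - theta/alpha, 1.0 - 1.0/alpha)
    return lhs, rhs

print(check_beta_identity(alpha=1.5, theta=0.8, t=2.0))   # the two values agree
\end{verbatim}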
Finally, we integrate by parts twice to estimate the Hall term; this produces the additional condition $\alpha_2>1$, together with all the constraints from the previous estimates:
\begin{equation}\notag
\begin{split}
\|\mathfrak{B}_{\alpha_2}(b,b)\|_{Y_T} \leq & C_{\mu, \eta} \sup_{0<t<T} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{\alpha_2}}\|b(s)\|_\infty\|b(s)\|_\infty \mathrm{d}s\\
\leq & C_{\mu, \eta} \|b\|_{Y_T}^2 \sup_{0<t<T} t^{\frac{2\alpha_2-\beta}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{\alpha_2}}s^{-2+\frac{\beta}{\alpha_2}}\mathrm{d}s \\
\leq & C_{\mu, \eta} T^{\frac{\beta-2}{2\alpha_2}}\|b\|_{Y_T}^2.
\end{split}
\end{equation}
\par{\raggedleft$\Box$\par}
\textbf{Proof of Theorem \ref{lwp}}: By inequality (\ref{bbd}), Lemma \ref{bsv} and Lemma \ref{pic}, there exists a solution $(u,b) \in \mathcal{E}_T$ provided that the initial data $(u_0, b_0)$ and the time $T$ satisfy
\begin{equation}\notag
4 C T^a \Big(C_{\nu} \|u_0 \|_{ \dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty} }+ C_{\mu, \eta}\|b_0 \|_{ \dot B^{-(2\alpha_2-\beta)}_{\infty, \infty} }\Big) < 1.
\end{equation}
It remains to show that $(u,b) \in L^\infty\big(0,T; {\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}(\mathbf{R}^3)}\big)$. By (\ref{msu}) and Lemma \ref{bsv}, it holds that
\begin{equation}\notag
\begin{split}
\|S_1 u(t)\|_{\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} = & \sup_{0<\tau <T}\tau^{\frac{2\alpha_1-\gamma}{2\alpha_1}} \|e^{-\nu \tau (-\Delta)^{\alpha_1}}S_1 u(t) \|_{L^\infty}\\
\lesssim & \sup_{0<\tau <T} \tau^{\frac{2\alpha_1-\gamma}{2\alpha_1}} \|e^{-\nu(\tau+t)(-\Delta)^{\alpha_1}}u_0 \|_{L^\infty}\\
& +\sup_{0<\tau <T} \tau^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\|u\|_{X_T}^2\int^{\tau+t}_0 (\tau+t-s)^{-\frac{1}{2\alpha_1}}s^{-2+\frac{\gamma}{\alpha_1}}\mathrm{d}s \\
& +\sup_{0<\tau <T} \tau^{\frac{2\alpha_1-\gamma}{2\alpha_1}}\|b\|_{Y_T}^2\int^{\tau+t}_0 (\tau+t-s)^{-\frac{1}{2\alpha_1}}s^{-2+\frac{\beta}{\alpha_2}}\mathrm{d}s.
\end{split}
\end{equation}
Estimating with the help of (\ref{bet}), we have
\begin{equation}\notag
\begin{split}
\|S_1 u(t)\|_{\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \lesssim & \sup_{0<\tau <T} \tau^{\frac{2\alpha_1-\gamma}{2\alpha_1}} \Big(\|e^{-\nu\tau(-\Delta)^{\alpha_1}}u_0 \|_{L^\infty} +(\tau+t)^{-1+\frac{2\gamma-1}{2\alpha_1}}\|u\|_{X_T}^2\\
&+ (\tau+t)^{-1-\frac{1}{2\alpha_1}+\frac{2\beta}{2\alpha_2}}\|b\|_{Y_T}^2 \Big)\\
\lesssim & \|u_0\|_{\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty} }+ T^a\|(u,b)\|_{\mathcal{E}_T}^2.
\end{split}
\end{equation}
In a similar fashion, the following inequalities follow from (\ref{msb}) and Lemma \ref{bsv}:
\begin{equation}\notag
\begin{split}
\|S_2 b(t)\|_{\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}} = & \sup_{0<\tau <T}\tau^{\frac{2\alpha_2-\beta}{2\alpha_2}} \|e^{-\mu \tau (-\Delta)^{\alpha_2}}S_2 b(t) \|_{L^\infty}\\
\lesssim & \sup_{0<\tau <T} \tau^{\frac{2\alpha_2-\beta}{2\alpha_2}} \bigg( \|e^{-\mu(\tau+t)(-\Delta)^{\alpha_2}}b_0 \|_{L^\infty}\\
& + 2 \|u\|_{X_T}\|b\|_{Y_T}\int^{\tau+t}_0 (\tau+t-s)^{-\frac{1}{2\alpha_2}}s^{-2+\frac{\gamma}{2\alpha_1}+\frac{\beta}{2\alpha_2}}\mathrm{d}s \\
& +\|b\|_{Y_T}^2\int^{\tau+t}_0 (\tau+t-s)^{-\frac{1}{\alpha_2}}s^{ -2+\frac{\beta}{\alpha_2}}\mathrm{d}s \bigg).
\end{split}
\end{equation}
The integrals can be evaluated thanks to (\ref{bet}), which yields the bound on $S_2 b$:
\begin{equation}\notag
\begin{split}
\|S_2 b(t)\|_{\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}} \lesssim & \sup_{0<\tau <T} \tau^{\frac{2\alpha_2-\beta}{2\alpha_2}} \Big(\|e^{-\mu\tau(-\Delta)^{\alpha_2}}b_0 \|_{L^\infty}\\
& +(\tau+t)^{-1+\frac{\gamma}{2\alpha_1}+\frac{\beta-1}{2\alpha_2}}\|u\|_{X_T} \|b\|_{Y_T}+ (\tau+t)^{-1+\frac{\beta-1}{\alpha_2}}\|b\|_{Y_T}^2 \Big)\\
\lesssim & \|b_0\|_{\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}}+ T^a\|(u,b)\|_{\mathcal{E}_T}^2.
\end{split}
\end{equation}
The inequalities above imply that
$$(u,b) \in L^\infty\big(0,T; {\dot B^{-(2\alpha_1-\gamma)}_{\infty, \infty}} \times {\dot B^{-(2\alpha_2-\beta)}_{\infty, \infty}(\mathbf{R}^3)}\big).$$
\par{\raggedleft$\Box$\par}
However, a well-posedness result for the standard Hall-MHD system, i.e., the case $\alpha_1=\alpha_2=1$, is unattainable by this approach, as the above method breaks down in that case. We now turn to the hyper-resistive EMHD equations, written as
\begin{equation}\label{emhde}
\begin{cases}
b_t + \eta \nabla \times ((\nabla \times b) \times b)= -\mu (-\Delta)^{\alpha_2}b,\\
\nabla\cdot b=0,\\
b(0,x)=b_0, \ t \in \mathbf{R}^+, \ x \in \mathbf{R}^3,
\end{cases}
\end{equation}
where $1< \alpha_2 <2$. This system is the small-scale limit of the Hall-MHD system, corresponding to the scenario in which the ions are practically static and simply form a neutralizing background for the moving electrons; it is named electron MHD because the dynamics is determined solely by the electrons. In astrophysics, system (\ref{emhde}) appears frequently in studies of the magnetosphere and the solar wind, whose dynamics can be puzzling due to high-frequency magnetic fluctuations. Readers may consult \cite{G1, G3, MG} for the relevant physics background. Unlike the complete system (\ref{fhmhd}), system (\ref{emhde}) possesses a scaling invariance. More specifically, if $b(t,x)$ solves system (\ref{emhde}) with initial data $b_0$, then $b_\lambda(t,x)= \lambda^{2\alpha_2-2}b(\lambda^{2\alpha_2}t, \lambda x)$ is a solution subject to the initial data $\lambda^{2\alpha_2-2}b_0(\lambda x)$. One can see that the space $L^\infty\big(0, \infty; \dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)\big)$ is the largest critical space with respect to this scaling. We proceed to prove Theorem \ref{gwp} by finding a ball $B \subset Y_T$ on which the solution map $S_2$ is a contraction. We have the following two propositions.
\begin{Proposition}
Let $\alpha_2 \in (1,2)$ and $\beta =2$. For $0< T \leq \infty$, the map $S_2$ satisfies
\begin{equation}\label{bd2}
\|S_2 b - \tilde b_0\|_{Y_T} \leq C \|b\|_{Y_T}^2.
\end{equation}
Therefore, there exists some $\varepsilon_1 >0$ such that $S_2$ is a self-mapping on the ball
$$B_{\varepsilon_1}\big(\tilde b_0\big):=\{ f\in Y_T: \|f-\tilde b_0\|_{Y_T} < \varepsilon_1 \},$$
provided that $\|b_0\|_{\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)} <\varepsilon_1$.
\end{Proposition}
\textbf{Proof:\ } The inequality (\ref{bd2}) follows from the following estimate.
\begin{equation}\notag
\begin{split}
\|\mathfrak{B}_{\alpha_2}(b,b)\|_{Y_T} \leq & C_{\mu, \eta}\sup_{t>0} t^{\frac{2\alpha_2-2}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{\alpha_2}}\|b(s)\|_\infty\|b(s)\|_\infty \mathrm{d}s\\
\leq & C_{\mu, \eta}\|b\|_{Y_T}^2 \sup_{t>0} t^{\frac{2\alpha_2-2}{2\alpha_2}}\int^t_0(t-s)^{-\frac{1}{\alpha_2}}s^{-2+\frac{2}{\alpha_2}}\mathrm{d}s \\
\leq & C_{\mu, \eta} \|b\|_{Y_T}^2.
\end{split}
\end{equation}
Since it is assumed that $b \in B_{\varepsilon_1}\big(\tilde b_0\big)$ and $\|b_0\|_{\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)} <\varepsilon_1$, it follows from inequality (\ref{bd2}) and Lemma \ref{bsv} that
\begin{equation}\notag
\|S_2 b - \tilde b_0\|_{Y_T} \leq C \|b\|_{Y_T}^2 \leq C \big( \|b-\tilde b_0\|_{Y_T}^2 + \|\tilde b_0\|_{Y_T}^2 \big) \leq C \varepsilon_1^2.
\end{equation}
\par{\raggedleft$\Box$\par}
\begin{Proposition}\label{ctr}
Let $1< \alpha_2 <2$ and $\beta =2$. For any $T \in (0, \infty]$, there exists some $\varepsilon_2 \in (0, \varepsilon_1)$ such that if $\|b_0\|_{\dot B^{-(2\alpha_2-2)}_{\infty, \infty}(\mathbf{R}^3)} <\varepsilon_2$, then the solution map $S_2$ is a contraction mapping on the ball
$$B_{\varepsilon_2}\big(\tilde b_0\big):=\{ f\in Y_T: \|f-\tilde b_0\|_{Y_T} < \varepsilon_2 \}.$$
\end{Proposition}
\textbf{Proof:\ } Let $b, \bar b \in B_{\varepsilon_2}\big( \tilde b_0 \big)$. Clearly, the following inequalities hold.
At this moment, we are inclined to believe that in this setting, the system is ill-posed instead. \betaigskip {\textbf{Acknowledgement.}} The authors would like to thank Prof. Isabelle Gallagher and Dr. Trevor Leslie for helpful discussions. \betaegin{thebibliography}{XX} \betaibitem{ADFL} M. Acheritogaray, P. Degond, A. Frouvelle and J. Liu. \newblock {\epsilonm Kinetic formulation and global existence for the Hall-Magneto-hydrodynamics system}. \newblock Kinet. Relat. Models Vol. 4(4), 901-918, 2011. \betaibitem{BF} M. J. Benvenutti and L. C. F. Ferreira. \newblock {\epsilonm Existence and stability of global large strong solutions for the Hall-MHD system.} \newblock Differ. Integral Equ. Vol. 29(9–10), 977–1000, 2016. \betaibitem{BCD} H. Bahouri, J. Chemin, and R. Danchin. \newblock {\epsilonm Fourier Analysis and Nonlinear Partial Differential Equations}. \newblock Grundlehren der mathematischen Wissenschaften, 343. Springer, Heidelberg, 2011. \betaibitem{C} L. M. B. C. Campos. \newblock {\epsilonm On hydromagnetic waves in atmospheres with application to the Sun.} \newblock Theor. Comput. Fluid Dyn. 10 (1-4), 37-70, 1998. \betaibitem{CDL} D. Chae, P. Degond and J. Liu. \newblock {\epsilonm Well-posedness for Hall-magnetohydrodynamics}. \newblock Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire Vol. 31, No. 3, 555-565, 2014. \betaibitem{CL} D. Chae and J. Lee. \newblock {\epsilonm On the blow-up criterion and small data global existence for the Hall-magnetohydrodynamics.} \newblock J. Diff. Eq. Vol. 256(11), 3835-3858, 2014. \betaibitem{CS} D. Chae and M. E. Schonbek. \newblock {\epsilonm On the temporal decay for the Hall-magnetohydrodynamic equations.} \newblock J. Diff. Eq. Vol. 255(11), 3971-3982, 2013. \betaibitem{CWW} D. Chae, R. Wan and J. Wu. \newblock {\epsilonm Local well-posedness for the Hall-MHD equations with fractional magnetic diffusion.} \newblock J. Math. Fluid Mech. Vol. 17(4), 627-638, 2015. \betaibitem{CW} D. Chae and S. Weng. \newblock {\epsilonm Singularity formation for the incompressible Hall-MHD equations without resistivity.} \newblock Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire Vol. 33, No. 4, 1009-1022, 2016. \betaibitem{D1} M. Dai. \newblock {\epsilonm Regularity criterion for the 3D Hall-magneto-hydrodynamics.} \newblock J. Diff. Eq. Vol. 261(1), 573-591, 2016. \betaibitem{D2} M. Dai. \newblock {\epsilonm Local well-posedness of the Hall-MHD system in $H^s(\mathbf{R}^n)$ with $s>n/2$.} \newblock arXiv: 1709.02347. \betaibitem{D3} M. Dai. \newblock {\epsilonm Local well-posedness for the Hall-MHD system in optimal Sobolev Spaces.} \newblock arXiv: 1803.09556. \betaibitem{D4} M. Dai. \newblock {\epsilonm Non-uniqueness of Leray-Hopf weak solutions of the 3D Hall-MHD system}. \newblock arXiv: 1812.11311. \betaibitem{DL} M. Dai and H. Liu. \newblock {\epsilonm Long time behavior of solutions to the 3D Hall-magneto-hydrodynamics system with one diffusion}. \newblock J. Diff. Eq., Vol. 266, 7658-7677, 2019. \betaibitem{FFNZ} J. Fan, Y. Fukumoto, G.Nakamura and Y. Zhou. \newblock {\epsilonm Regularity criteria for the incompressible Hall-MHD system .} \newblock Z. Angew. Math. Mech. 95(11), 1156-1160, 2015. \betaibitem{FLN} J. Fan, F. Li and G. Nakamura. \newblock {\epsilonm Regularity criteria for the incompressible Hall-magnetohydrodynamic equations .} \newblock Nonlinear Anal. 109: 173-179, 2014. \betaibitem{FSZ} J. Fan, B. Samet and Y. Zhou. \newblock {\epsilonm A regularity criterion for a generalized Hall-MHD system.} \newblock Comput. Math. Appl. 
74(10), 2438-2443, 2017. \betaibitem{G1} S. Galtier. \newblock{\epsilonm Introduction to Modern Magnetohydrodynamics.} \newblock Cambridge University Press, Cambridge, UK, 2016. \betaibitem{G2} S. Galtier. \newblock {\epsilonm Wave turbulence in incompressible Hall magnetohydrodynamics .} \newblock Journal of Plasma Physics 72 (5), 721-769, 2006. \betaibitem{GB} S. Galtier and E Buchlin. \newblock {\epsilonm Multiscale Hall-magnetohydrodynamic turbulence in the solar wind .} \newblock The Astrophysical Journal 656 (1), 560, 2007. \betaibitem{G3} S. Galtier. \newblock {\epsilonm Exact scaling laws for 3D electron MHD turbulence.} \newblock Journal of Geophysical Research: Space Physics 113 (A1), 2008. \betaibitem{G} L. Grafakos. \newblock {\epsilonm Modern Fourier Analysis .} \newblock Graduate Texts in Mathematics, Vol. 250, 2nd edition, Springer, New York, 2009. \betaibitem{GMS} W. Gu, C. Ma and J. Sun. \newblock {\epsilonm A regularity criterion for the generalized Hall-MHD system.} \newblock Bound. Value Probl. 2016:188, 2016. \betaibitem{HAHZ} F. He, B. Ahmad, T. Hayat and Y. Zhou. \newblock {\epsilonm On regularity criteria for the 3D Hall-MHD equations in terms of the velocity.} \newblock Nonlinear Anal. RWA 32, 35-51, 2016. \betaibitem{JO} I. Jeong and S. Oh. \newblock{\epsilonm On the Cauchy problem for the Hall and electron magnetohydrodynamic equations without resistivity I: illposedness near degenerate stationary solutions.} \newblock arXiv: 1902.02025. \betaibitem{JZ} Z. Jiang and M. Zhu. \newblock {\epsilonm Regularity criteria for the 3D generalized MHD and Hall-MHD systems.} \newblock Bull. Malays. Math. Sci. Soc. 41 (1), 105-122, 2018. \betaibitem{KOT} H. Kozono, T. Ogawa and Y. Taniuchi. \newblock {\epsilonm Navier-Stokes equations in the Besov space near $L^\infty$ and BMO.} \newblock Kyushu Journal of Mathematics 57:303-324, 2003. \betaibitem{KL} M. Kwak and B. Lkhagvasuren. \newblock {\epsilonm Global wellposedness for Hall-MHD equations }. \newblock Nonlinear Anal. 174: 104-117, 2018. \betaibitem{Lr} P. G. Lemari\'e-Rieusset \newblock {\epsilonm Recent Development in the Navier-Stokes Problem.} \newblock Chapman \& Hall/CRC Press: Boca Raton, 2002. \betaibitem{L} M. J. Lighthill. \newblock {\epsilonm Studies on magneto-hydrodynamic waves and other anisotropic wave motions.} \newblock Phil. Trans. R. Soc. A 252 (1014), 397-430, 1960. \betaibitem{M} Y. Meyer. \newblock {\epsilonm Wavelets, paraproducts and Navier-Stokes equations .} \newblock Current Development in Mathematics, 1996, p105-212, International Press, Cambridge, MA, 1999. \betaibitem{MG} R. Meyrand and S. Galtier. \newblock {\epsilonm Anomalous Spectrum in Electron Magnetohydrodynamic Turbulence.} \newblock Physical review letters 111 (26), 264501, 2013. \betaibitem{MYZ} C. Miao, B. Yuan and B. Zhang. \newblock {\epsilonm Well-posedness of the Cauchy problem for the fractional power dissipative equations.} \newblock Nonlinear Analysis: Theory, Methods \& Applications 68: 461-484, 2008. \betaibitem{PMZ} N. Pan, C. Ma and M. Zhu. \newblock {\epsilonm Global regularity for the 3D generalized Hall-MHD system.} \newblock Appl. Math. Lett. 61: 62-66, 2016. \betaibitem{PZ} N. Pan and M. Zhu. \newblock {\epsilonm A new regularity criterion for the 3D generalized Hall-MHD system with $\beta \in (\frac12,1]$.} \newblock J. Math. Anal. Appl, Vol. 445(1), 604-611, 2017. \betaibitem{PM} J. M. Polygiannakis and X. Moussas. 
\newblock {\epsilonm A review of magneto-vorticity induction in Hall-MHD plasmas.} \newblock Plasma Phys. Control. Fusion 43 (2), 195, 2001. \betaibitem{W} R. Wan. \newblock {\epsilonm Global regularity for generalized Hall Magneto-Hydrodynamics systems .} \newblock Electron. J. Differ. Equ. 179, 1-18, 2015. \betaibitem{WZ1} R. Wan and Y. Zhou. \newblock {\epsilonm On global existence, energy decay and blow-up criteria for the Hall-MHD system.} \newblock J. Diff. Eq. 259 (11), 5982-6008, 2015. \betaibitem{WZ2} R. Wan and Y. Zhou. \newblock {\epsilonm Low Regularity Well-Posedness for the 3D Generalized Hall-MHD System.} \newblock Acta Appl. Math. 147, 95-111, 2017. \betaibitem{WL} Y. Wang and H. Li. \newblock {\epsilonm Beale-Kato-Madja type criteria of smooth solutions to 3D Hall-MHD flows.} \newblock Appl. Math. Comput. 286, 41-48, 2016. \betaibitem{WYT} X. Wu, Y. Yu and Y. Tang \newblock {\epsilonm Global existence and asymptotic behavior for the 3D generalized Hall-MHD system.} \newblock Nonlinear Anal. 151:41-50, 2017. \betaibitem{Y1} Z. Ye. \newblock {\epsilonm Regularity criteria and small data global existence to the generalized viscous Hall-magnetohydrodynamics.} \newblock Comput. Math. Appl. 70(8), 2137-2154, 2015. \betaibitem{Y2} Z. Ye \newblock {\epsilonm Global well-posedness and decay results to 3D generalized viscous magnetohydrodynamic equations.} \newblock Ann. Mat. Pura Appl. (4) 195 (4), 1111-1121, 2016. \betaibitem{Y3} Z. Ye. \newblock {\epsilonm Regularity criterion for the 3D Hall-magnetohydrodynamic equations involving the vorticity.} \newblock Nonlinear Anal. 144, 182-193, 2016. \betaibitem{Y4} Z. Ye. \newblock {\epsilonm A logarithmically improved regularity criterion for the 3D Hall-MHD equations in Besov spaces with negative indices.} \newblock Appl. Anal. 96 (16), 2669-2683, 2017. \betaibitem{YZ0} Z. Ye and Z. Zhang. \newblock {\epsilonm A remark on regularity criterion for the 3D Hall-MHD equations based on the vorticity.} \newblock Appl. Math. Comput. 301: 70–77, 2017. \betaibitem{YZ} X. Yu and Z. Zhai. \newblock {\epsilonm Well-posedness for fractional Navier-Stokes equations in the largest critical spaces $\deltaot B^{-(2\beta-1)}_{\infty, \infty}(\mathbf{R}^n)$.} \newblock Math. Meth. Appl. Sci. 35(6), 676-683, 2012. \betaibitem{Z} Z. Zhang. \newblock {\epsilonm A remark on the blow-up criterion for the 3D Hall-MHD system in Besov spaces.} \newblock J. Math. Anal. Appl. 441 (2), 692-701, 2016. \epsilonnd{thebibliography} \epsilonnd{document}
\begin{document} \title{\LARGE\sc Virial theorem and generalized momentum in\\ quaternic quantum mechanics\\ } \author{\tt\large SERGIO GIARDINO} \email{[email protected]} \affiliation{ Departamento de Matem\'atica Pura e Aplicada, Universidade Federal do Rio Grande do Sul (UFRGS)\\ Avenida Bento Gon\c calves 9500, Caixa Postal 15080, 91501-970 Porto Alegre, RS, Brazil} \begin{abstract} \noindent After a review on recent results on quaternic quantum mechanics ($\mathbb{H}$QM), we present further consistency tests that reinforce its compatibility with the usual complex quantum mechanics ($\mathbb{C}$QM). The novel results comprises the Virial theorem, the quantum quaternic Lorentz force, the existence of a quaternic magnetic monopole and the redefinition of the expectation value. \end{abstract} \maketitle \tableofcontents \section{\sc Introduction} After almost one century of research, quantum mechanics remains unfinished. There are open questions involving several topics, like the generality of quantum theory, the hermiticity, the entanglement, the measurement, among others. This article concerns the proposal for generalization of quantum mechanics throughout using quaternic numbers. We remember that quaternions ($\mathbb{H}$) are hyper-complex numbers with three imaginary units \cite{Rocha:2013qtt}, then $q\in\mathbb{H}$ supposes that \begin{equation}\label{i1} q=x_0 + x_1 i + x_2 j + x_3 k, \qquad\mbox{where}\qquad x_0,\,x_1,\,x_2,\,x_3\in\mathbb{R}\qquad\mbox{and}\qquad i^2=j^2=k^2=-1. \end{equation} The imaginary units $i,\,j$ and $k$ are anti-commutative (by way of example, $ij=-ji$), consequently quaternions are non-commutative hyper-complex numbers. In symplectic notation, a quaternic number may be written in terms of complex components, where $\,q=z_0+z_1j\,$ and $\,z_0,\,z_1\in\mathbb{C}.\,$ Thus, quaternic quantum mechanics ($\mathbb{H}$QM) inquires whether we can generalize quantum mechanics by replacing the complex wave functions with quaternic wave functions. In symplectic notation, a quaternic wave function reads \begin{equation}\label{i1} \Psi=\Psi_0+\Psi_1\,j, \end{equation} where $\Psi_0$ and $\Psi_1$ are complex functions. In fact, a quaternic quantum theory was proposed by John von Neumann and Garrett Birkhoff \cite{Birkhoff:2017kpl}, wondering whether quantum mechanics admits a formulation that is independent of a numerical system. It was the fist attempt o obtain a propositional quantum logic, and another deep question of the article was: why do the wave functions are evaluated over the complex numbers? In other words, Birkhoff and von Neumann wondered a criterion for choosing the complexes out of the four division algebras, namely the reals ($\mathbb{R}$), the complexes ($\mathbb{C}$), the quaternions ($\mathbb{H}$) and the octonions ($\mathbb{O}$), and either what we gain or what we lose with each choice. Ernst Stueckelberg developed real quantum mechanics ($\mathbb{R}$QM) \cite{Stueckelberg:1960rsi,Stueckelberg:1961rsi,Stueckelberg:1961psg,Stueckelberg:1962fra}, and this formulation is considered equivalent to $\mathbb{C}$QM. However, Stueckelberg's theory is involved; it demands an anti-unitary operator $\,J\,$ that replaces the complex imaginary unit $\,i\,$ in Schr\"odinger equation, and this anti-unitary operator $\,J\,$ is specific for each anti-commuting pair of operators \cite{Stueckelberg:1960rsi}. 
In spite of these problems, $\mathbb{R}$QM is still an object of research, and it plays a role within quantum information \cite{Mosca:2009sqs,Borish:2013rqm,Wootters:2014ret,Wootters:2012esh} and mathematical physics \cite{Oppio:2016pbf}, thus indicating that alternatives to complex quantum mechanics ($\mathbb{C}$QM) may be useful for understanding new physics. The next possibility is to use quaternions to build a quantum theory. The success of this enterprise is only partial, and various unsolved questions remain. The majority of the results in $\mathbb{H}$QM concern what we call anti-hermitian $\mathbb{H}$QM, whose development is summarized in a book by Stephen Adler \cite{Adler:1995qqm}. We recall that $\mathcal{A}^\dagger$ denotes the adjoint of the operator $\mathcal{A}$, and that an anti-hermitian operator satisfies $\mathcal{A}^\dagger=-\mathcal{A}$. For mathematical convenience, anti-hermitian $\mathbb{H}$QM uses an anti-hermitian Hamiltonian operator in the Schr\"odinger equation, while the remaining physical operators are still hermitian. Beyond this unexplained singularity of the Hamiltonian operator, the anti-hermitian formulation of $\mathbb{H}$QM presents several problems, such as an ill-defined classical limit and exact solutions that are scarce and difficult to compare to $\mathbb{C}$QM \cite{Davies:1989zza,Davies:1992oqq,Ducati:2001qo,Nishi:2002qd,DeLeo:2005bs,Madureira:2006qps,Ducati:2007wp,Davies:1990pm,DeLeo:2013xfa,DeLeo:2015hza,Giardino:2015iia,Sobhani:2016qdp,Procopio:2016qqq,Sobhani:2017yee,DeLeo:2019bcw,Hassanabadi:2017wrt,Hassanabadi:2017jiz}. The anti-hermitian solutions are difficult to interpret because there are no simple solutions of $\mathbb{H}$QM analogous to the free particle and the harmonic oscillator. Consequently, the exact effects of using a wave function (\ref{i1}), which has the additional degree of freedom represented by $\Psi_1$ and the constraint generated by the quaternic imaginary unit $j$, are difficult to evaluate. However, a recent series of papers has indicated an alternative to anti-hermitian $\mathbb{H}$QM \cite{Giardino:2016abe,Giardino:2018lem,Giardino:2018rhs,Giardino:2017yke,Mandal:2019nsf}. First of all, it was realized that a non-anti-hermitian Hamiltonian may generate probability-conserving solutions. In the second article of the series \cite{Giardino:2018lem}, it was proven that a well-defined classical limit is attainable in $\mathbb{H}$QM without the anti-hermitian assumption. Finally, it was ascertained that this non-anti-hermitian quaternic theory is defined in a real Hilbert space, where the spectral theorem holds and the time evolution is non-abelian \cite{Giardino:2018rhs}. A technique for obtaining quaternic solutions is also proposed in \cite{Giardino:2017yke} for the case of the free particle. In this article, we continue grounding $\mathbb{H}$QM without the anti-hermitian assumption. The generalized momentum operator obtained in \cite{Giardino:2016abe} is used to obtain the quaternic virial theorem in Section \ref{V} and a quaternic quantum Lorentz force in Section \ref{L}. As a by-product, we are obliged to redefine the expectation value in Section \ref{Q}. However, before considering these results, the next section provides a summary of the results obtained in the three previous articles.
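Before doing so, we record two elementary identities of the symplectic notation, added here for the reader's convenience; they follow directly from the anti-commutativity of the imaginary units. For every complex number $z$ one has $z\,j=j\,z^*$, and hence, for $q=z_0+z_1 j$ with $z_0,\,z_1\in\mathbb{C}$,
\begin{equation}\notag
q^*=z_0^* - z_1\,j \qquad\mbox{and}\qquad q\,q^*=|z_0|^2+|z_1|^2.
\end{equation}
In particular, for a quaternic wave function $\Psi=\Psi_0+\Psi_1 j$ the density $\Psi\Psi^*=|\Psi_0|^2+|\Psi_1|^2$ is real and non-negative.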
\section{\sc quaternic quatum mechanics in real Hilbert space\label{R}} First of all, we point out that $\mathbb{H}$QM in real space has been proven to be mathematically consistent in \cite{Giardino:2018rhs}, where the spectral theorem has been proven and the Fourier series and the Fourier transform have been studied. Then, let us entertain the quaternic Schr\"odinger equation \begin{equation}\label{r1} \hslash\frac{\partial\Psi}{\partial t}\,i\,=\,\left[-\frac{\hslash^2}{2m}\Big(\bm\nabla- \bm{\mathcal A}\Big)^2+U\right]\Psi,\qquad \mbox{where}\qquad\bm{\mathcal A}=\bm\alpha i+\bm\beta j,\qquad U=V +W\,j, \end{equation} where $\bm\alpha$ is a real vector function, $\bm\beta$ is a complex vector function and the quaternic scalar potential $U$ comprises the complex functions $V$ and $W$. We point out the position of the imaginary unit $\,i\,$ at the right hand side of $\partial_t\Psi$. The non-commutativity of $\Psi$, implies that $i\Psi\neq\Psi i$, and hence the choice $\,i\partial_t\Psi\,$ in (\ref{r1}) generates another wave equation, which we briefly discuss in the appendix. Irrespective of that, the proposition of (\ref{r1}) demands a series of studies concerning the physical and mathematical consistency of the quantum theory described through it. First of all, the a right multiplication of (\ref{r1}) by $(-i)$ does not necessarily generate an anti-hermitian operator, and consequently the equation cannot be treated using the current anti-hermitian $\mathbb{H}$QM formalism. Of course, one could do it, but this means a restriction of the solutions, and we seek the maximal generality. Then, we firstly ascertain the interpretation of the wave function as a density of probability. Defining $\,q^*=z_0^*-z_1j\,$ as the quaternic conjugate of the arbitrary quaternion $\,q,\,$ the continuity equation \cite{Giardino:2018lem} reads \begin{equation}\label{r2} \frac{\partial \rho}{\partial t}+ \bm{\nabla\cdot J}=g, \end{equation} where $\rho=\Psi\Psi^*$ is the probability density, $g$ is a probability source, $\bm J$ is the probability current and $\bm\Pi$ is the generalized momentum operator, so that \begin{equation}\label{r3} g=\frac{1}{\hslash}\Big(\Psi i\,\Psi^*U^*-U\Psi i\Psi^* \Big),\;\;\;\; \bm J=\frac{1}{2m}\Big[(\bm\Pi\Psi)\Psi^*+\Psi(\bm\Pi\Psi\big)^* \Big]\;\;\;\;\mbox{and}\qquad \bm\Pi\Psi=-\hslash\big(\bm\nabla-\bm{\mathcal A}\big)\Psi i. \end{equation} (\ref{r2}) indicates that the probability density is conserved for $g=0$, and this is achieved for real $U$, in agreement with complex quantum mechanics ($\mathbb{C}$QM). In the case of complex potentials (where $W=0$), (\ref{r2}) recovers the continuity equation valid for non-hermitian $\mathbb{C}$QM \cite{Bender:2007nj,Moiseyev:2011nhq} irrespective of the non-hermitian vector potential $\bm\beta$. This coincidence permit us to formulate the hypothesis of a relation between $\mathbb{H}$QM and non-hermitian $\mathbb{C}$QM. This idea is of central importance and will be considered in future investigations. A second element of the non-anti-hermitian $\mathbb{H}$QM comes by analogy with $\mathbb{C}$QM, where the momentum expectation value is related to the probability current through $\langle \bm \Pi\rangle=\langle \bm J\rangle/m$. This analogy has been used in \cite{Giardino:2018lem} to define the expectation value for an arbitrary quaternic operator $\mathcal{O}$ as \begin{equation}\label{r4} \langle\mathcal O\rangle= \frac{1}{2}\int dx^3\Big[\big(\mathcal{O}\Psi\big)\Psi^* +\Psi\big(\mathcal{O}\Psi\big)^* \Big]. 
\end{equation} The expectation value $\langle\mathcal O\rangle$ is always real, irrespective of the hermiticity of $\mathcal O$, something desirable physically. Furthermore, (\ref{r4}) recovers the $\mathbb{C}$QM expectation value for complex wave functions and hermitian operators. Conversely, (\ref{r4}) has an interesting consequence: the Hilbert space is real, in contrast to $\mathbb{C}$QM, which is developed over a complex Hilbert space. Thus, the expression for the expectation was obtained using a physical motivation but it must satisfy several mathematical requirements that are necessary to functions that belong to a Hilbert space. The most important requirement is whether the spectral theorem holds and consequently establishes the correspondence between physical observables and eigenvalues. In a real Hilbert space, the fundamental theorem of algebra indicates that a restricted class of operators may have physical significance. On the other hand, in $\mathbb{C}$QM the more general complex Hilbert space is restricted to a subspace where the wave functions are physically meaningful by using hermitian operators. Such a procedure is not necessary in the case of the real Hilbert space because every eigenvalue is real an potentially meaningful. Only the application can decide the suitable formalism for each physical situation, but the most fundamental mathematical questions concerning these matters were addressed in \cite{Giardino:2018lem,Giardino:2018rhs}, where the details may be found. A further important consistency test is the Ehrenfest theorem, which describes the time evolution of the quantum expectation values as a classical dynamics, and that has been considered in \cite{Giardino:2018lem}. For the position expectation value, we have \begin{equation}\label{r5} \frac{d \langle \bm r \rangle}{dt}=\frac{\langle \bm \Pi\rangle}{m}-\frac{2}{\hbar}\big\langle (U\bm r|i)\big\rangle, \end{equation} where we define the notation \begin{equation}\label{r6} (a|b)\Psi=a\,\Psi b. \end{equation} We observe that the second term \begin{equation} \big\langle (U\bm r|i)\big\rangle=\int\bm r\Big(U\Psi i\Psi^*-\Psi i \Psi^* U^*\Big)dx^3 \end{equation} is identically zero for real $U$ and pure imaginary for complex $U$. Consequently, the dynamics of the position expectation value recovers the classical dynamics for real $U$ and recovers the $\mathbb{C}$QM for complex $U$, another exact correspondence between $\mathbb{H}$QM and non-hermitian $\mathbb{C}$QM. Furthermore, the momentum expectation value gives \begin{equation}\label{r7} \frac{d\langle p_x\rangle}{dt}=\int dx^3\left(U\Psi\frac{\partial\Psi^*}{\partial x}+\frac{\partial\Psi}{\partial x}\Psi U^*\right). \end{equation} For real $U$, the right hand side of (\ref{r7}) gives $\langle-\,\partial_x U\rangle$, in perfect agreement with hermitian $\mathbb{C}$QM. Using expectation values, we may express (\ref{r7}) as \begin{equation}\label{r8} \frac{d\langle p_x\rangle}{dt}=2\left\langle - \frac{\partial U}{\partial x}\right\rangle+ 2\left\langle -U\frac{\partial}{\partial x} \right\rangle. \end{equation} As in the expectation value of the position, the dynamics is classical for real $U$ and recovers the $\mathbb{C}$QM when $W=0$ and $U$ is consequently complex. Thus, classical limit of $\mathbb{H}$QM is well defined, and the case when the dynamics is non-classical are also known. Anti-hermitian $\mathbb{H}$QM does not address these issues satisfactorily. 
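Before proceeding, we add a one-line verification of the statement above that (\ref{r4}) is always real. Since $(pq)^*=q^*p^*$ for arbitrary quaternions $p$ and $q$, one has $\Psi\big(\mathcal{O}\Psi\big)^*=\big[\big(\mathcal{O}\Psi\big)\Psi^*\big]^*$, so the integrand of (\ref{r4}) is twice the real part of $\big(\mathcal{O}\Psi\big)\Psi^*$ and
\begin{equation}\notag
\langle\mathcal O\rangle= \int dx^3\,\mathrm{Re}\Big[\big(\mathcal{O}\Psi\big)\Psi^*\Big],
\end{equation}
which is manifestly real.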
Finally, the dynamical equation for the expectation values has been obtained in \cite{Giardino:2018lem}, so that \begin{equation}\label{r9} \frac{d}{dt}\Big\langle \big(\mathcal{O}|i\big) \Big\rangle =\left\langle\frac{\partial}{\partial t}\big(\mathcal O|i\big) \right\rangle +\frac{1}{\hslash}\left\langle \Big[\mathcal O,\,\mathcal H \Big]\right\rangle +\frac{\partial\langle(\mathcal O|i)\rangle}{\partial t}, \end{equation} which may also be written \begin{equation}\label{r10} \frac{d}{dt}\Big\langle\mathcal{O} \Big\rangle =\left\langle\frac{\partial}{\partial t}\mathcal O \right\rangle +\frac{1}{\hslash}\left\langle \Big[\mathcal H,\,(\mathcal O|i)\Big]\right\rangle +\frac{\partial\langle\mathcal O\rangle}{\partial t}. \end{equation} If $\Psi$ is complex and $\mathcal{O}$ is complex and hermitian, (\ref{r10}) recovers the $\mathbb{C}$QM equations for the dynamics of expectation values in the Schr\"odinger picture. Thus, there are clear evidences of a quaternic theory that contains the complex quantum mechanics as a particular case. However, there is a remaining question, whether (\ref{r9}) and (\ref{r10}) are equivalent or not. This paper considers this question in the following sections, and then let us start considering the virial theorem for $\mathbb{H}$QM. \section{\sc The quaternic virial theorem\label{V}} Let us start with another evidence of a well behaved classical limit of the $\mathbb{H}$QM: the virial theorem. Considering $|\bm{\mathcal A}|=0$ in the Hamiltonian of (\ref{r1}), the time evolution of the expectation values (\ref{r10}) encompasses the following stationary dynamics \begin{align}\label{v1} \frac{d}{dt}\langle \bm{r\cdot p} \rangle&=\frac{1}{\hslash}\Big\langle\big[\mathcal{H},\,(\bm{r\cdot p}|i)\big]\Big\rangle\qquad \mbox{where}\qquad \bm{p}\Psi=-\hslash\bm\nabla\Psi\,i \end{align} Thus, we have \begin{align}\nonumber \big[\mathcal H,\,(\bm{r\cdot p}|i)\big]&=\big[\mathcal H,\,\bm{r}\big]\bm\cdot(\bm p|i)+\bm{r\cdot }\big[\mathcal H,\,(\bm{p}|i)\big]\\ \label{v2} &=\frac{1}{2m}\big[p^2,\,\bm r\big]\bm\cdot(\bm p|i)+\bm {r\cdot} \big[U,\,(\bm p|i)\big], \end{align} where $\,[p^2,\,\bm p]=[U,\,\bm r]=0\,$ have been used. Using the identities \begin{equation}\label{v3} \big[p^2,\,r_a]=\bm p\bm\cdot\big[\bm p,\,r_a\big]+\big[\bm p,\,r_a\big]\bm{\cdot p} \qquad\mbox{and}\qquad \big[p_a,\,r_b\big]=-\hslash\,\big(\delta_{ab}\big|i\big) \end{equation} we get \begin{align}\label{v4} \big[p^2,\,\bm r\big]=-2\hslash\big(\bm p\big|i\big)\qquad\mbox{and}\qquad \big[\mathcal{H},\,(\bm{r\cdot p}|i)\big]=\frac{\hslash}{m}p^2-\hslash\,\bm{r\cdot\nabla}U. \end{align} Consequently, \begin{equation}\label{v5} \frac{1}{\hslash} \left\langle\big[\mathcal{H},\,(\bm{r\cdot p}|i)\big]\right\rangle=\frac{1}{m}\langle p^2\rangle -\frac{1}{2}\left\langle\bm{r\cdot\nabla}(U+U^*)\right\rangle \end{equation} In a stationary state, the time derivative on the left hand side of (\ref{v1}) is zero, and thus we obtain the virial theorem. For real $U$, the result is identical to the $\mathbb{C}$QM virial theorem. However, using a complex $U$, where $W=0$, an imaginary contribution depending on $\,V-V^*\,$ lacks on the right hand side of (\ref{v5}), the $\mathbb{C}$QM result is not recovered. This residual complex contribution is eliminated because the expectation value (\ref{r4}) is always real, and the pure imaginary components of $U$ do not contribute to (\ref{v5}). We may have a further insight considering equation (\ref{r9}). 
Firstly, \begin{equation} \big[\mathcal{H},\,\bm{r\cdot p}\big]=-\frac{\hslash}{m}\left(p^2\big|\,i\,\right)+\hslash\,\left(\bm{r\cdot\nabla}U\big|\,i\,\right), \end{equation} and thus \begin{align}\nonumber \frac{d}{dt}\big\langle (\bm{r\cdot p}|i) \big\rangle&=\frac{1}{\hslash}\Big\langle\big[\mathcal{H},\,\bm{r\cdot p}\big]\Big\rangle\\ \label{v6} &=\frac{1}{2}\left\langle\Big(\bm{r\cdot\nabla}(U-U^*)\Big|i\Big)\right\rangle. \end{align} For the contributions to be correctly considered, we have to sum (\ref{v5}) and (\ref{v6}), so that \begin{equation}\label{v7} \frac{d}{dt}\big\langle \bm{r\cdot p} +(\bm{r\cdot p}|i)\big\rangle=\frac{1}{m}\langle p^2\rangle -\frac{1}{2}\left\langle\bm{r\cdot\nabla}(U+U^*)\right\rangle +\frac{1}{2}\left\langle\Big(\bm{r\cdot\nabla}(U-U^*)\Big|i\Big)\right\rangle \end{equation} The left hand side of (\ref{v7}) is zero because we are considering stationary states, and now we have the complete quaternic counterpart of the virial theorem, where the non-hermitian virial theorem is recovered for complex $U$. The result is a piece of evidence that (\ref{r9}) and (\ref{r10}) are not equivalent, and the correct expression for the virial theorem must consider the sum of both of the expressions. In the next section we give further arguments that reinforce the validity of adding (\ref{v5}) and (\ref{v6}) to obtain (\ref{v7}), and in Section \ref{Q} we propose a general expression. \section{\sc The quaternic quantum Lorentz force\label{L}} In order to get a quaternic version of the quantum Lorentz force, (\ref{r10}) furnishes \begin{equation}\label{l1} \frac{d}{dt}\Big\langle\mathcal{\bm\Pi} \Big\rangle =\left\langle\frac{\partial}{\partial t}\bm\Pi \right\rangle +\frac{1}{\hslash}\left\langle \Big[\mathcal H,\,(\bm\Pi|i)\Big]\right\rangle +\frac{\partial\langle\bm\Pi\rangle}{\partial t}, \end{equation} where \begin{equation}\label{l2} \Big[\mathcal H,\,(\bm\Pi|i)\Big]=\frac{1}{2m}\Big[\Pi^2,\,(\bm\Pi|i)\Big]+\Big[U,\,(\bm\Pi|i)\Big]. \end{equation} On the other hand, from the total linear momentum $\;\Pi^2=\Pi_1^2+\Pi_2^2+\Pi_3^2\;$ and from an arbitrary component $\;\Pi_c\;$ of $\;\bm\Pi$, we have \begin{eqnarray}\nonumber \big[\Pi^2,\,(\Pi_c|i)\big]&=&\bm{\Pi\cdot}\big[\bm\Pi,\,(\Pi_c|i)\big]+\big[\bm\Pi,\,(\Pi_c|i)\big]\bm{\cdot\Pi}\\ \label{l3} &=&\Pi_a\big[\Pi_a,\,(\Pi_c|i)\big]+\big[\Pi_a,\,(\Pi_c|i)\big]\Pi_a +\Pi_b\big[\Pi_b,\,(\Pi_c|i)\big]+\big[\Pi_b,\,(\Pi_c|i)\big]\Pi_b. \end{eqnarray} Introducing the notation \begin{equation}\label{l4} \mathcal{P}_{[a}\mathcal{Q}_{b]}= \mathcal{P}_{a}\mathcal{Q}_{b}- \mathcal{P}_{b}\mathcal{Q}_{a}, \end{equation} we obtain \begin{equation}\label{l5} \big[\Pi_a,\,(\Pi_b|i)\big]=\hslash^2\Big(\partial_{[a}\mathcal A_{b]}-\mathcal A_{[a}\mathcal A_{b]}\big|i\Big), \end{equation} where \begin{align}\label{l6} \partial_{[a}\mathcal A_{b]}&=\;\partial_{[a}\alpha_{b]}i\,+\,\partial_{[a}\beta_{b]}\,j\;=\;\varepsilon_{abc}\Big[i\big(\bm{\nabla\times\alpha}\big)_c +\big(\bm{\nabla\times\beta}\big)_c j\,\Big],\\ \mathcal A_{[a}\mathcal A_{b]}&=-\beta_{[a}^{\,}\beta^*_{b]}+2\alpha_{[a}\beta_{b]}ij=\varepsilon_{abc}\Big[\bm{\big(\beta^*\times\beta}\big)_c+2i\big(\bm{\alpha\times\beta}\big)_cj\,\Big], \end{align} and $\varepsilon_{abc}$ is the anti-symmetric Levi-Civita symbol. 
After defining the vectors \begin{equation}\label{l7} \bm\kappa=i\,\bm{\nabla\times\alpha}\,+\,\bm{\beta\times\beta^*},\qquad \bm\lambda=\bm{\nabla\times\beta}\,+\,2i\,\bm{\beta\times\alpha}\qquad\mbox{and}\qquad\bm{\mathcal B}=\bm\kappa+\bm\lambda j, \end{equation} equation (\ref{l5}) gives \begin{equation}\label{l8} \big[\Pi_a,\,(\Pi_b|i)\big]=\hslash^2\varepsilon_{abc}\big(\,\mathcal B_c\,\big|\,i\,\big). \end{equation} We call $\bm{\mathcal B}$ the quaternic magnetic field, and it is a pure imaginary quaternic vector, so that $\,\bm{\mathcal B}^*=-\bm{\mathcal B}.\,$ In order to handle the quaternic vector $\bm{\mathcal B}$, one defines a vector product between the quaternic vectors $\;\bm X=\bm X_0+\bm X_1j\;$ and $\;\bm Y=\bm Y_0+\bm Y_1j,\;$ namely \begin{equation}\label{l9} \bm X\times\bm Y\,=\, \bm X_0\times\bm Y_0\,-\,\bm X_1\times\bm Y_1^*\,+\,\big(\bm X_0\times\bm Y_1\,+\,\bm X_1\times\bm Y_0^*\big)\,j. \end{equation} This quaternic vector product is not identical to the usual real or complex vector product, one immediately sees that $\,\bm{X\times Y}\neq -\bm{Y\times X}.\,$ Next, using (\ref{l3}), (\ref{l8}) and (\ref{l9}), we get \begin{align}\label{l10} \big[\,\Pi^2,\,(\bm\Pi|i)\,\big]=\hslash^2\Big[\big(\bm{\mathcal B}\big|i\big)\times\bm\Pi\,-\,\bm\Pi\times\big(\bm{\mathcal B}\big|i\big)\Big] \end{align} However, $\langle\bm{\nabla\times\mathcal{B}}\rangle=0$ implies that \begin{align}\label{l11} \frac{1}{\hslash^3} \left\langle\Big[\Pi^2,\,(\bm\Pi|i)\Big]\right\rangle=&\;\big\langle\bm{\mathcal A\times \mathcal B}\big\rangle -\big\langle\bm{\mathcal B\times \mathcal A}\big\rangle\\ \nonumber =&\,-\,2\Big\langle\bm{\alpha\times\nabla\times\alpha}\,+\, \bm{\beta\times\nabla\times\beta}^*\,-\,4i\,\bm{\alpha\times(\beta\times\beta^*)}\big\rangle, \end{align} and the second commutator term of (\ref{l2}) additionally renders \begin{equation}\label{l12} \big[\,U,(\bm\Pi|i)\big]=\hslash\big[\,\bm{\mathcal A},\,U\big]-\hslash\bm\nabla U\qquad \mbox{and}\qquad \Big\langle\big[\,U,\,(\bm\Pi|i)\big]\Big\rangle=-\frac{\hslash}{2}\Big\langle\bm\nabla \big(U+U^*\big)\Big\rangle. \end{equation} Consequently (\ref{l1}) allots \begin{equation}\label{l13} \frac{d}{dt}\Big\langle\mathcal{\bm\Pi} \Big\rangle =\hslash^2\Big[\big\langle\bm{\mathcal A\times \mathcal B}\big\rangle- \big\langle\bm{\mathcal B\times \mathcal A}\big\rangle\Big]-\frac{1}{2}\Big\langle\bm\nabla \big(U+U^*\big)\Big\rangle+ \left\langle\frac{\partial(\bm{\mathcal{A}}|i)}{\partial t}\right\rangle, \end{equation} where it has been used that \begin{equation}\label{l14} \left\langle\frac{\partial\bm\Pi}{\partial t}\right\rangle=\hslash\left\langle\frac{\partial(\bm{\mathcal{A}}|i)}{\partial t}\right\rangle. \end{equation} Equation (\ref{l13}) neither describes a classical dynamics nor recovers the $\mathbb{C}$QM result in a complex limit. It is physically meaningless. However, as has been done for the virial theorem, let us then consider (\ref{r9}) with $\mathcal O=\bm\Pi$. 
In this case, we have \begin{align}\label{l15} \frac{d\langle(\Pi|i)\rangle}{dt}=&\;\hslash\,\big\langle\bm{\mathcal B}\times\bm p-\bm p\times\bm{\mathcal B}\big\rangle +\frac{1}{2}\Big\langle\bm\nabla \Big(U-U^*\Big|i\Big)\Big\rangle+\left\langle\big[U,\,(\bm{\mathcal A}|i)\big]\right\rangle, \end{align} where it has been used that \begin{equation} \left\langle\frac{\partial(\bm\Pi|i)}{\partial t}\right\rangle=-\hslash\left\langle\frac{\partial\bm{\mathcal{A}}}{\partial t}\right\rangle=0, \end{equation} and also that \begin{equation}\label{l16} \big[\,U,\bm\Pi\big]=\hslash\big[U,\,(\bm{\mathcal A}|i)\big]+\hslash(\bm\nabla U|i)\qquad \mbox{and}\qquad \left\langle\big[\,U,\bm\Pi\big]\right\rangle=\hslash\left\langle\big[U,\,(\bm{\mathcal A}|i)\big]\right\rangle+ \frac{\hslash}{2}\Big\langle\bm\nabla \Big(U-U^*\Big|i\Big)\Big\rangle. \end{equation} In the same fashion as has been done for the virial theorem, we sum (\ref{l13}) and (\ref{l15}) in order to obtain a quaternic quantum Lorentz force \begin{align}\label{l17} \Big\langle\bm F_{\,\mathbb H}\Big\rangle=& \frac{d\langle \Pi\rangle}{dt}+\frac{d\langle(\Pi|i)\rangle}{dt}=\\\nonumber =&\;\hslash\,\big\langle\bm{\mathcal B}\times\bm p-\bm p\times\bm{\mathcal B}\big\rangle +\hslash^2\Big\langle\bm{\mathcal A\times \mathcal B}- \bm{\mathcal B\times \mathcal A}\Big\rangle+\left\langle \frac{\partial(\bm{\mathcal A}|i)}{\partial t}\right\rangle- \,\left\langle\bm\nabla\frac{U+U^*}{2}\right\rangle +\left\langle\bm\nabla \frac{U-U^*}{2}\Big|i\right\rangle+\left\langle\big[U,\,(\bm{\mathcal A}|i)\big]\right\rangle. \end{align} The above result does make sense: it recovers the $\mathbb{C}$QM result in the complex limit and there are contributions for each of the potentials. At the present point, it is too early to know precise the nature of the new quaternic potential and fields $\,\bm{\mathcal A},\,\bm{\mathcal B}\,$ and the imaginary components of $U$, but the existence of a dynamical equation is the first step to ascertain their physical character. As has been said in the beginning of this article, ascertaining the physical meaning of these fields is an exciting challenge for future research. However, we may attain an initial conclusion from the results. The definition of the quaternic magnetic field (\ref{l7}) immediately gives \begin{equation} \bm{\mathcal B}=\bm{\nabla\times\mathcal A}-\bm{\mathcal{A}\times\mathcal{A}}\qquad\mbox{where}\qquad\bm{\mathcal{A}\times\mathcal{A}}=\bm{\beta\times\beta}^*+2i\,\bm{\beta\times\alpha}\,j. \end{equation} Consequently \begin{equation}\label{l17} \langle \bm{\nabla\cdot\mathcal B}\rangle= 0,\qquad\qquad\mbox{but}\qquad\qquad \langle (\bm{\nabla\cdot\mathcal B}|i)\rangle\neq 0. \end{equation} Hence, the model naturally predicts the existence of a quantum magnetic monopole, something that is not predicted in $\mathbb{C}$QM. Relations between quaternions and magnetic monopoles are already known \cite{Hitchin:1988dmm}, and this relation indicates that the result is correct as much as indicates an exciting direction for future research. \section{\sc quaternic quantum formalism revisited\label{Q}} In Section \ref{R} an ambiguity in the dynamical evolution of the expectation value was observed in (\ref{r9}) and (\ref{r10}). In Section \ref{V} and in Section \ref{L} we obtained that the sum of (\ref{r9}) and (\ref{r10}) give physically consistent results for the momentum of a particle. 
Consequently, the results of the preceding sections enable us to revisit the introduction and emend the formalism of $\mathbb{H}$QM. A novel expectation value has to be defined, namely
\begin{equation}\label{q1}
\big\langle\mathcal O_\mathbb{H}\big\rangle=\big\langle\,\mathcal O +\big(\mathcal O\,|\,i\big)\big\rangle =\big\langle \big(\mathcal O\,|\,1+i\big)\big\rangle.
\end{equation}
The expectation value (\ref{q1}) resembles a complexification of the real Hilbert space, something already considered in \cite{Sharma:1988css,Sharma:1988chs}. In fact, these works argue that real Hilbert spaces must be ruled out of quantum mechanics, while other works point out the fundamental role of complex numbers in quantum mechanics \cite{Toyoda:1973nch,Lahti:2017wch}. Furthermore, the formulation of anti-hermitian $\mathbb{H}$QM in a quaternic Hilbert space has also been criticized \cite{Graydon:2013sra,Gantner:2017ech}. We understand that our results are in agreement with these previous works. Equation (\ref{q1}) couples a complex structure to a quaternic operator, where $\mathcal{O}\to(\mathcal{O}|1+i)$ determines a physical measure, thus imposing a difference between the inner product and the physical expectation value. This conciliation between a real Hilbert space and complex numbers must be confirmed in future research. Further evidence in favour of (\ref{q1}) is the invariance of the quaternic Ehrenfest theorem, because $\langle (x|i)\rangle=\langle (p_x|i)\rangle=0$, and also the unification of (\ref{r9}-\ref{r10}) into a single dynamical equation for expectation values in the Schr\"odinger picture,
\begin{equation}\label{i110}
\frac{d}{dt}\Big\langle\mathcal{O} +(\mathcal O|i)\Big\rangle =\left\langle\frac{\partial}{\partial t}\Big(\mathcal O +(\mathcal O|i) \Big)\right\rangle +\frac{1}{\hslash}\left\langle \Big[\mathcal O-(\mathcal O|i),\,\mathcal H\Big]\right\rangle +\frac{\partial\langle\mathcal O+(\mathcal O|i)\rangle}{\partial t}.
\end{equation}
The new definition and dynamics for quantum expectation values, based on physical grounds, reinforce the consistency of $\mathbb{H}$QM. They are further elements that compose the whole picture of the theory, adding to the well-defined wave equation, expectation values, complex limits, classical limits, and spectral decomposition. We may consider the framework a solution in search of a problem, and hence the pursuit of exact solutions seems to be an urgent direction for future research.
\section{\sc Conclusion\label{C}}
In this article we have revisited and emended the mathematical machinery of $\mathbb{H}$QM in real Hilbert space \cite{Giardino:2016abe,Giardino:2018lem,Giardino:2018rhs}. The previous results demonstrate that the theory is provided with a wave equation, a momentum operator, conservation of probability, expectation values, a classical limit and a spectral decomposition. The validity of the virial theorem and the existence of a time evolution for the generalized linear momentum established here provide further evidence of the consistency of the theory. The existence of a consistent framework permits us to formulate further questions about quaternic quantum mechanics. A general question about $\mathbb{H}$QM involves its relation to non-hermitian $\mathbb{C}$QM \cite{Bender:2007nj,Moiseyev:2011nhq}, a framework where parity and time-reversal invariance ($\mathcal{PT}$-invariance) replaces hermiticity.
This approach rendered fruitful theoretical developments \cite{Scolarici:2003wu,Scolarici:2009zz,Solombrino:2002vk,Mostafazadeh:2001jk,Mathur:2010nhh,Mathur:2014rnh} and also experimental tests of the theory have been done \cite{Makris:2008dyn,Guo:2009obs,Rueter:2010pst,Regensburger:2012phl}. Inquiring whether the quaternic and the complex formulations relate themselves is very relevant. Conversely, specific quaternic solutions are needed to evidence the physical meaning of the pure quaternic terms of the wave function, of the scalar potential and of the gauge field $\bm\beta$. Inquiring about a quaternic magnetic monopole is another fascinating question. In fact, the research of exact solutions is the most important trend in $\mathbb{H}$QM, a task that is easier in the present situation of consistency of the theory presented in this paper. \appendix \section{\sc the left complex wave equation} In this section Virial theorem and the quantum Lorentz force are calculated considering the left-complex quaternic Schr\"odinger equation, namely \begin{equation}\label{a1} i\hslash\frac{\partial\Psi}{\partial t}\,=\mathcal{H}\Psi\qquad\mbox{and}\qquad \mathcal{H}=\frac{\hbar^2}{2m}i\Big(\bm\nabla -\bm A\Big)\bm\cdot i\Big(\bm\nabla -\bm A\Big)+U \end{equation} where the quaternic vector potential $\,\bm{\mathcal{A}}\,$ and scalar potential $\,U\,$ are defined in (\ref{r1}). There are a slight variation in the terms of the continuity equation (\ref{r4}), where the source $g$, the probability density $\bm J$, and the gauge-invariant quaternic linear momentum operator is $\bm\Pi$ are as follows \begin{equation}\label{a2} \rho=\Psi^*\Psi,\qquad g=\Psi^*\left(\frac{V^* i-i\, V}{\hbar}\right)\Psi,\qquad \bm J=\frac{1}{2m}\left[\Psi^*\big(\bm\Pi\Psi\big)+\big(\bm\Pi\Psi\big)^*\Psi\,\right], \qquad\mbox{and}\qquad\bm\Pi\Psi=-i\,\hbar\big(\bm\nabla-\bm Q\big)\Psi. \end{equation} The Ehrenfest theorem has slight differences in the dynamics of position and linear momentum (\ref{r5}-\ref{r8}) as well, which may be obtained in \cite{Giardino:2018lem}, but the conclusions are identical. 
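We also note in passing, for clarity, that $\Psi^*\Psi=\Psi\Psi^*=|\Psi_0|^2+|\Psi_1|^2$, so the probability density $\rho$ in (\ref{a2}) coincides with the density of Section \ref{R}.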
Nevertheless, there are many differences in the expectation value dynamics between (\ref{r9}-\ref{r10}) and the dynamical equations obtained from (\ref{a1}), accordingly \begin{align} & \frac{d}{dt}\Big\langle \mathcal O - i\mathcal O i\Big\rangle =\left\langle\frac{\partial}{\partial t}(\mathcal O - i\mathcal O i)\right\rangle +\frac{1}{\hslash}\left\langle \Big[\mathcal H,\,\mathcal O i+i\mathcal O\Big]\right\rangle+ \frac{\partial}{\partial t}\Big\langle \mathcal O - i\mathcal O i\Big\rangle,\label{a3} \\ & \frac{d}{dt}\Big\langle \mathcal O i + i\mathcal O \Big\rangle =\left\langle\frac{\partial}{\partial t}(\mathcal O i +i\mathcal O )\right\rangle -\frac{1}{\hslash}\left\langle \Big[\mathcal H,\,\mathcal O -i\mathcal O i\Big]\right\rangle+ \frac{\partial}{\partial t}\Big\langle \mathcal O i+ i\mathcal O \Big\rangle,\label{a4} \\ & \frac{d}{dt}\Big\langle \mathcal O + i\mathcal O i\Big\rangle =\left\langle\frac{\partial}{\partial t}(\mathcal O + i\mathcal O i)\right\rangle -\frac{1}{\hslash}\left\langle \Big\{\mathcal H,\,\mathcal O i-i\mathcal O\Big\}\right\rangle+ \frac{\partial}{\partial t}\Big\langle \mathcal O + i\mathcal O i\Big\rangle,\label{a5} \\ & \frac{d}{dt}\Big\langle \mathcal O i-i\mathcal O \Big\rangle =\left\langle\frac{\partial}{\partial t}(\mathcal O i - i\mathcal O )\right\rangle +\frac{1}{\hslash}\left\langle \Big\{\mathcal H,\,\mathcal O +i\mathcal O i\Big\}\right\rangle+ \frac{\partial}{\partial t}\Big\langle \mathcal O i- i\mathcal O \Big\rangle.\label{a6} \end{align} In the case of the virial theorem, we already have a difference between the results, so that (\ref{v7}) turns into \begin{equation}\label{a7} \frac{d}{dt}\big\langle \bm{r\cdot p} +(\bm{r\cdot p}|i)\big\rangle=\frac{1}{m}\langle p^2\rangle -\frac{1}{2}\left\langle\bm{r\cdot\nabla}(U+U^*)\right\rangle +\frac{1}{2}\left\langle\bm{r\cdot\nabla}(iU-U^*i)\right\rangle+\big\langle 2W\bm{r\cdot p}\big\rangle, \end{equation} The last term of (\ref{a7}) is absent in (\ref{v7}). This result confirms that the left-complex quaternic Schr\"odinger equation (\ref{a1}) is not equivalent to (\ref{r1}). The calculation of a Lorentz force from (\ref{a1}) reinforces this conclusion. The commutators \begin{eqnarray} \big[\;\Pi_a,\,\Pi_b\;\big]&=&\hslash^2\Big[\partial_{[a}A_{b]}-\Big(A_{[a}+iA_{[a}i\Big)\partial_{b]}+iA_{[a}iA_{b]}\Big]\\ \big[\,\Pi_a,\,\Pi_b i\,\big]&=&\hslash^2\Big[\partial_aA_bi-\partial_biA_a+\Big(A_bi-iA_b\Big)\partial_a+iA_aiA_bi+iA_bA_a\Big]\\ \big[\,\Pi_a,\,i\Pi_b\,\big]&=&\hslash^2\Big[i\partial_{[a}A_{b]}-\Big(A_bi-iA_b\Big)\partial_a-iA_aA_b+A_biA_a\Big]\\ \big[\Pi_a,\,i\Pi_b i\big]&=&\hslash^2 \Big[i\partial_aA_bi+\partial_bA_a+\Big(A_{(a}+iA_{(a}i\Big)\partial_{b)}-iA_aA_bi-A_bA_a\Big], \end{eqnarray} do not seem to enable a quantum Lorentz force. We cannot say that is impossible to get such a result. It is of course an open question, however, the Virial theorem (\ref{a7}) leads us to expect additional terms the Lorentz force, thence the physical interpretation is probably more difficult, because of this, we consider that this result indicates that only (\ref{r1}) is physically meaningful. The wave equation (\ref{a1}) with $|\bm{\mathcal{A}}|=0$ has been adopted in anti-hermitian $\mathbb{H}$QM \cite{Adler:1995qqm}, and this fact maybe explains the difficulty in obtaining physically meaningful solutions in such framework. \end{document}
\begin{equation}gin{document} \title{\Large On Sprays with Vanishing $\chi$-Curvature} \begin{equation}gin{abstract} Every Riemannian metric or Finsler metric on a manifold induces a spray via its geodesics. In this paper, we discuss several expressions for the $\chi$-curvature of a spray. We show that the sprays obtained by a projective deformation using the S-curvature always have vanishing $\chi$-curvature. Then we establish the Beltrami Theorem for sprays with $\chi=0$. \noindent {\bf Keywords:} Sprays, Isotropic curvature, $\chi$-curvature and $S$-curvature. \\ {\bf MR(2000) subject classification: } 53C60, 53B40 {\varepsilon}nd{abstract} \section{Introduction} A spray $G$ on a manifold $M$ is a special vector field on the tangent bundle $TM$. In a standard local coordinate system $(x^i, y^i)$ in $TM$, a spray $G$ can be expressed by \[ G = y^i \frac{{\partial} }{{\partial} x^i} - 2 G^i \frac{{\partial} }{{\partial} y^i},\] where $G^i = G^i(x,y) $ are local $C^{\infty}$ functions on non-zero vectors with the following homogeneity: $G^i(x, \lambda y)= \lambda^2 G^i(x,y),$ $ \forall \lambda >0.$ Every Finsler metric induces a spray on a manifold. Some geometric quantities of a Finsler metric are actually defined by the induced spray only. These quantities are extremely interesting to us. For a spray $G$ on a manifold $M$, with the Berwald connection, we define two key quantities: the Riemann curvature tensor $R^{\ i}_{j \ kl}$ and the Berwald curvature tensor $B^{\ i}_{j \ kl}$ (see \cite{Sh}). Certain averaging process gives rise to various notions of Ricci curvature tensor. One of them is the Ricci curvature tensor: $ {\rm Ric}_{jl} :=\frac{1}{2} \{ R^{\ m}_{j \ m l} + R^{\ m}_{l \ m j}\} $ (\cite{LiSh1}). The well known Ricci curvature ${\rm Ric} := {\rm Ric}_{jl} y^jy^l = R^{\ m}_{j \ ml}y^j y^l$ has been studied for a long time by many people. Besides these quantities, we have another important quantity which is expressed in terms of the vertical derivatives of the Riemann curvature. It is the so-called $\chi$-curvature defined by \begin{equation} \chi_k:= -\frac{1}{6} \Big \{ 2 R^m_{\ k\cdot m} + R^m_{\ m \cdot k } \Big \} .\label{chi_def} {\varepsilon}nd{equation} where $R^i_{\ k} = y^j R^{\ i}_{j \ kl}y^l$. The $\chi$-curvature can be expressed in several forms. For an arbitrary volume form $dV$, \begin{equation} \chi_k = \frac{1}{2} \Big \{ S_{\cdot k|m} y^m-S_{|k} \Big \}, \label{chi_S} {\varepsilon}nd{equation} where $S= S_{(G, dV)}$ is the S-curvature of $(G, dV)$ (\cite{Sh1}). For a spray induced by a Finsler metric, the $\chi$-curvature can be expressed by \begin{equation} \chi_k = \frac{1}{2} \Big \{ I_{k|p|q} y^py^q + I_m R^m_{\ k} \Big\}, \label{chi_I} {\varepsilon}nd{equation} where $I_k := g^{ij} C_{ijk}$ denotes the mean Cartan torsion (\cite{Sh0} \cite{ChSh}). These are three typical expressions for the mysterious quantity $\chi$. In this paper, we shall focus on sprays with $\chi=0$. For a spray $G$ on a manifold $M$, in the projectively equivalent class of $G$, there is always a spray with $\chi=0$. More precisely, for any volume form $dV$ on $M$, we may construct a spray $\hat{G}$ by a projective change: \[ \hat{G}^i := G^i -\frac{S}{n+1} y^i,\] where $S$ is the S-curvature of $(G, dV)$. This spray $\hat{G}$ is invariant under a projective change with $dV$ fixed. This projective deformation is first introduced in \cite{Sh}. We prove the following \begin{equation}gin{thm}\label{thm1.1} Let $G$ be a spray on a manifold $M$. 
For any volume form $dV$, the spray $\hat{G}$ associated with $(G, dV)$ has vanishing $\chi$-curvature, $\hat{\chi}=0$. {\varepsilon}nd{thm} Note that $\hat{G}$ is projectively equivalent to $G$. Hence if $G$ is of scalar curvature, then $\hat{G}$ is of scalar curvature too. Hence it is of isotropic curvature since $\hat{\chi}=0$. Thus $\hat{G}$ must be of isotropic curvature. We obtain the following \begin{equation}gin{cor}\label{cor1.2} Let $G$ be a spray of scalar curvature on a manifold $M$. For any volume form $dV$, the spray $\hat{G}$ associated with $(G, dV)$ must be of isotropic curvature. {\varepsilon}nd{cor} The well-known Beltrami Theorem in Riemannian geometry says that for two projectively equivalent Riemannian metrics $g_1, g_2$, the metric $g_1$ is of constant curvature if and only if $g_2$ is of constant curvature. In particular, if a Riemannian metric $g$ is locally projectively flat, then it is of constant curvature since $g$ is locally projectively equivalent to the standard Euclidean metric. This theorem can be extended to sprays with $\chi=0$. \begin{equation}gin{thm}\label{thm1.3} For two projectively equivalent sprays $G_1, G_2$ with $\chi=0$, $G_1$ is of isotropic curvature if and only if $G_2$ is of isotropic curvature. In particular, if a spray $G$ is locally projectively flat with $\chi=0$, then it is of isotropic curvature. {\varepsilon}nd{thm} Sprays or Finsler metrics with $\chi=0$ deserve further study. Spherically symmetric metrics with $\chi=0$ have been studied in \cite{Zhu}. \noindent {\bf Acknowledgment}: The primary version of this note is part of my lectures during the summer school in 2018 in Xiamen University, China. \section{Preliminaries} A spray $G$ on a manifold $M$ is a vector field on the tangent bundle $TM$ which is locally expressed in the following form \[ G = y^i \frac{{\partial} }{{\partial} x^i} - 2 G^i \frac{{\partial} }{{\partial} y^i},\] where $G^i = G^i(x,y) $ are local $C^{\infty}$ function on $TU {\varepsilon}quiv U \times R^n$, \[ G^i(x, \lambda y ) = \lambda^2 G^i(x, y), \ \ \ \ \ \lambda >0.\] Put \[ N^i_j := \frac{{\partial} G^i}{{\partial} y^j}, \ \ \ \ \ \Gamma^i_{jk} = \frac{{\partial}^2 G^i}{{\partial} y^j {\partial} y^k}.\] Let $\omega^i := dx^i$ and $\omega^{n+i} := dy^i + N^i_j dx^j$ and $\omega_j^{\ i} := \Gamma^i_{jk} dx^k$. We have \[ d\omega^i = \omega^j \wedge \omega_j^{ \ i} .\] Put \[ \Omega_j^{\ i} := d\omega_j^{\ i} -\omega_j^{\ k} \wedge \omega_k^{\ i}.\] We obtain two quantities $R$ and $B$: \[ \Omega_j^{\ i} =\frac{1}{2} R^{\ i}_{j \ kl} \omega^k \wedge \omega^l - B^{\ i}_{j \ kl} \omega^k \wedge \omega^{n+l},\] where $R^{\ i}_{j \ kl} + R^{\ i}_{j \ lk}=0$. \[ R^{\ i}_{j \ kl} = {\delta \Gamma^i_{jl}\over \delta x^k} - {\delta \Gamma^i_{jk}\over \delta x^l} +\Gamma^i_{ks}\Gamma^s_{jl} - \Gamma^s_{jk}\Gamma^i_{ls}, \] \begin{equation} B^{\ i}_{j \ kl} = \frac{{\partial} \Gamma^i_{kl}}{{\partial} y^j}. \label{Bcurvature} {\varepsilon}nd{equation} We have the first set of Bianchi identities \begin{equation}gin{eqnarray} && R^{\ i}_{j \ kl} + R^{\ i}_{k \ lj} + R^{\ i}_{l\ jk} =0\\ && B^{\ i}_{j \ kl} = B^{\ i}_{k \ jl}. {\varepsilon}nd{eqnarray} In fact $B^{\ i}_{j \ kl}$ is symmetric in $j, k, l$ and $y^j B^{\ i}_{j \ kl} =0$. 
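For orientation, we record a simple example, added here as an illustration. The spray induced by a Riemannian metric $g=g_{ij}(x)y^iy^j$ has coefficients $G^i=\frac{1}{2}\gamma^i_{jk}(x)y^jy^k$, where $\gamma^i_{jk}$ denote the Christoffel symbols of $g$ (a notation used only in this remark). These coefficients are quadratic in $y$, so $\Gamma^i_{jk}=\gamma^i_{jk}(x)$ depends on $x$ alone and
\[ B^{\ i}_{j \ kl} = \frac{\partial \gamma^i_{kl}}{\partial y^j}=0 ,\]
that is, Riemannian sprays have vanishing Berwald curvature.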
Put \[ R^i_{\ kl} := y^jR^{\ i}_{j \ kl}, \ \ \ \ \ R^{\ i}_{j \ k}:= R^{\ i}_{j \ kl}y^l, \ \ \ \ \ R^i_{\ k} := y^j R^{\ i}_{j \ kl} y^l.\] The two-index Riemann curvature tensor $ R^i_{\ k} $ and the four-index Riemann curvature tensor $R^{\ i}_{j \ kl}$ determine each other by the following identity: \begin{equation} R^{\ i}_{j \ kl} = \frac{1}{3} \Big \{ R^i_{\ k\cdot l\cdot j} - R^i_{\ l\cdot k\cdot j} \Big \}, \label{RRR} {\varepsilon}nd{equation} We also have \begin{equation}gin{eqnarray} R^{\ i}_{j \ k} & = & \frac{1}{3} \Big \{ 2 R^i_{\ k\cdot j}+R^i_{\ j \cdot k} \Big \}, \label{RRR2}\\ R^i_{\ kl} & = & \frac{1}{3} \Big \{ R^i_{\ k\cdot l} - R^i_{\ l\cdot k} \Big \},\label{RRR3} {\varepsilon}nd{eqnarray} where $T^{*}_{\ *\cdot k} $ is the vertical covariant derivative, namely, $T^*_{*\cdot k} = \frac{{\partial}}{{\partial} y^k} ( T^*_{\ *})$. Further covariant derivatives yield the second set of Bianchi identities: \begin{equation}gin{eqnarray} &&R^{\ i}_{j \ kl|m}+ R^{\ i}_{j \ lm|k} + R^{\ i}_{j \ mk|l}\nonumber\\ && \hspace{1 cm} + B^{\ i}_{j \ mp}R^p_{ \ kl} + B^{\ i}_{j \ lp}R^p_{\ mk} + B^{\ i}_{j \ kp}R^p_{\ lm} =0\label{Bi1}\\ && R^{\ i}_{j \ kl\cdot m} = B^{\ i}_{j\ ml|k} - B^{\ i}_{j\ km|l}\label{Bi2} \\ && B^{\ i}_{j \ kl\cdot m} = B^{\ i}_{j \ km\cdot l}. \label{Bi3} {\varepsilon}nd{eqnarray} Contracting (\ref{Bi1}) with $y^j$ yields \begin{equation} R^i_{\ kl|m} + R^i_{\ lm|k}+ R^i_{\ mk|l} =0. \label{Bi4} {\varepsilon}nd{equation} Contracting (\ref{Bi4}) with $y^l$ yields \begin{equation} R^i_{\ k|m} - R^i_{\ m|k} + R^i_{\ mk|l}y^l =0. \label{RRRR} {\varepsilon}nd{equation} \section{The $\chi$-curvature} The $\chi$-curvature defined by the Riemann curvature tensor in (\ref{chi_def}) can be expressed in several ways. \begin{equation}gin{lem}\label{lem3.1} \begin{equation} \chi_k = -\frac{1}{2} R^{\ m}_{m \ k} = - \frac{1}{2} R^{\ m}_{m \ kl}y^l. \label{XR} {\varepsilon}nd{equation} {\varepsilon}nd{lem} {\it Proof}: It follows from (\ref{RRR2}). \hspace*{\fill}Q.E.D. Lemma \ref{lem3.1} tells us that if $R^{\ m}_{m\ k}=0$, then $\chi=0$. Put \begin{equation} T^i_{\ k} : = R^i_{ \ k} - \Big\{ R \delta^i_{\ k} - \frac{1}{2} R_{\cdot k} y^i \Big \}, \label{Tcurvature} {\varepsilon}nd{equation} where $R:= \frac{1}{n-1} R^m_{\ m}$. By definition, $G$ is of isotropic curvature if $T^i_{\ k}=0$. Note that \[ {\rm trace} (T) := T^m_{\ m} =0.\] By a direct computation, we can obtain another expression for $\chi_k$. \begin{equation}gin{lem}\label{lem3.2} \begin{equation} \chi_k = -\frac{1}{3} T^m_{\ k\cdot m}. {\varepsilon}nd{equation} {\varepsilon}nd{lem} Lemma \ref{lem3.2} tells us that if $G$ is of isotropic curvature, then $\chi=0$. Recall the definition of the Weyl curvature \begin{equation} W^i_{\ k} := A^i_{\ k} - \frac{1}{n+1} A^m_{\ k \cdot m} y^i, {\varepsilon}nd{equation} where $A^i_{ \ k} := R^i_{\ k} - R \delta^i_{\ k}$. Clearly, \[ W^m_{\ k \cdot m}=0.\] We obtain a nice formula for the Weyl curvature. \begin{equation}gin{lem} The Weyl curvature is given by \begin{equation} W^i_{\ k} = R^i_{\ k} -\Big \{ R\delta^i_{\ k} - \frac{1}{2} R_{\cdot k} y^i \Big \} +\frac{3}{n+1} \chi_k y^i. \label{WRX} {\varepsilon}nd{equation} {\varepsilon}nd{lem} {\it Proof}: One can easily rewrite $W^i_{\ k}$ as \[ W^i_{\ k} = R^i_{\ k} - \Big \{ R\delta^i_{\ k} - \frac{1}{2}R_{\cdot k} y^i \Big \} -\frac{1}{2(n+1)} \Big \{ 2R^m_{\ k \cdot m} +(n-1) R_{\cdot k}\Big \} y^i.\] By (\ref{XR}), we prove the lemma. \hspace*{\fill}Q.E.D. 
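As a quick consistency check of (\ref{WRX}), added here although it is not needed in what follows, take the trace $i=k$. By definition $R^m_{\ m}=(n-1)R$; Euler's theorem applied to $R$, which is positively homogeneous of degree two in $y$, gives $R_{\cdot m}y^m=2R$; and (\ref{XR}) together with the skew-symmetry of $R^{\ i}_{j \ kl}$ in $k, l$ gives $\chi_m y^m =-\frac{1}{2}R^{\ m}_{m \ kl}y^ky^l=0$. Hence
\[ W^m_{\ m}=(n-1)R-nR+R+0=0,\]
so the Weyl curvature defined above is trace-free.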
Given a volume $dV= \sigma (x) dx^1 \cdots dx^n$, the S-curvature of $(G, dV)$ is defined by \[ S := \Pi- y^m \frac{{\partial}rtial }{{\partial}rtial x^m} \Big ( \ln \sigma \Big ).\] We have the following expression for $\chi$. \begin{equation}gin{lem}(\cite{LiSh1}) \begin{equation} \chi_k = \frac{1}{2} \Big \{ S_{\cdot k |m} y^m - S_{|k} \Big \}. \label{XS} {\varepsilon}nd{equation} {\varepsilon}nd{lem} In local coordinates, by (\ref{XS}), one can easily get \begin{equation} \chi_k = \frac{1}{2} \Big \{ \Pi_{x^m y^k} y^m - \Pi_{x^k} -2 \Pi_{y^k y^m} G^m \Big \}, \label{Slocal} {\varepsilon}nd{equation} where $ \Pi := \frac{{\partial}rtial G^m}{{\partial}rtial y^m} $. Clearly, $\chi$ is independent of $dV$. \section{Sprays with $\chi=0$} A spray is said to be {\it $S$-closed} if in local coordinates, $\Pi = \frac{{\partial} G^m}{{\partial} y^m} $ is a closed local $1$-form. The spray induced by a Riemannian metric $g=g_{ij}(x)y^iy^j$ is S-closed. In fact \begin{equation} \Pi = y^k \frac{{\partial} }{{\partial} x^k} \Big [ \ln \sqrt{\det(g_{ij}(x) ) } \Big ] .\label{Piclosed} {\varepsilon}nd{equation} By (\ref{Piclosed}), for any volume form $dV=\sigma(x) dx^1\wedge \cdots \wedge dx^n$, the S-curvature of $(G, dV)$ is a closed $1$-form, \[ S = y^k \frac{{\partial} }{{\partial} x^k}[\ln \varphi(x)] ,\] where $\varphi ( x) = \sqrt{\det(g_{ij}(x) ) }/\sigma(x)$. We have the following \begin{equation}gin{prop}\label{propS=closed} If a spray is S-closed, then $\chi=0$. In particular, if for some volume form $dV = \sigma dx^1 \cdots dx^n$, the S-curvature of $(G, dV)$ is a closed $1$-form, then $\chi=0$. {\varepsilon}nd{prop} {\it Proof}: By assumption, \[ S = \Pi - y^m \frac{{\partial} }{{\partial} x^m} (\ln \sigma) = {\varepsilon}ta _k y^k, \] with $({\varepsilon}ta_k)_{x^l} = ({\varepsilon}ta_l)_{x^k}$. Then by (\ref{Slocal}), $\chi_k=0$. \hspace*{\fill}Q.E.D. Let $\tilde{F}$ be a Finsler metric and $G$ be a spray on a manifold $M$. The spray coefficients $\tilde{G}^i$ of $\tilde{F}$ can be expressed as follows \begin{equation} \tilde{G}^i = G^i + P y^i + \frac{1}{2} \tilde{F} \tilde{g}^{ik} \Big \{ \tilde{F}_{\cdot k |m} y^m -\tilde{F}_{|k} \Big \}. {\varepsilon}nd{equation} where $P= \tilde{F}_{|m}y^m/(2\tilde{F})$. Thus $\tilde{F}$ is projectively equivalent to $G$ if and only if \begin{equation} \tilde{F}_{\cdot k |m} y^m -\tilde{F}_{|k} =0. {\varepsilon}nd{equation} This is the generalized version of the famous Rapcs\'{a}k Theorem. By (\ref{XS}), we obtain the following \begin{equation}gin{thm}\label{thm4.3} Let $G$ be a spray with $\chi=0$ and $ dV$ be a volume form. If for the S-curvature $S$ of $(G, dV)$, $\tilde{F}=|S|$ is a Finsler metric, then it is projectively equivalent to $G$. {\varepsilon}nd{thm} \section{Sprays of isotropic curvature} A spray $G$ is said to be {\it of scalar curvature} if \begin{equation} R^i_{ \ k} = R \delta^i_{\ k} - \tau_k y^i , \label{RRt} {\varepsilon}nd{equation} where $\tau_k $ is a positively homogeneous function of degree one with $\tau_k y^k = R$. This is equivalent to that $W^i_{\ k} =0$. By (\ref{WRX}), we see that (\ref{RRt}) is equivalent to the following \begin{equation} R^i_{ \ k} = R \delta^i_{\ k} -\frac{1}{2} R_{\cdot k} y^i -\frac{3}{n+1}\chi_k y^i. \label{RRCS} {\varepsilon}nd{equation} The $\chi$-curvature characterizes sprays of isotropic curvature among sprays of scalar curvature. By (\ref{RRCS}), we obtain the following \begin{equation}gin{thm}\label{propsi} (\cite{LiSh2}) \label{lemchi=0} Let $G$ be a spray of scalar curvature. 
$G$ is of isotropic curvature if and only if $\chi=0$.
\end{thm}
\noindent {\it Proof of Theorem \ref{thm1.3}}: If $G_1$ is of isotropic curvature, then $G_2$ is of scalar curvature by the projective equivalence. Since $\chi=0$, we see that $G_2$ is of isotropic curvature by Theorem \ref{propsi}. \hspace*{\fill}Q.E.D.
If $G$ is of isotropic curvature, then
\[ R^{\ i}_{j \ kl} = \frac{1}{2} \Big \{ R_{\cdot l \cdot j} \delta^i_{\ k} - R_{\cdot k \cdot j} \delta^i_{\ l} \Big \}, \]
\[ R^i_{\ kl} =\frac{1}{2} \Big \{ R_{\cdot l} \delta^i_{\ k} -R_{\cdot k}\delta^i_{\ l} \Big \}. \]
Assume that $G$ is of isotropic curvature. By (\ref{Bi1}), we obtain
\begin{equation}
(R_{\cdot l\cdot j|m}-R_{\cdot m \cdot j |l} )\delta^i_{\ k} + (R_{\cdot m\cdot j|k} - R_{\cdot k\cdot j|m} )\delta^i_{\ l} + (R_{\cdot k \cdot j |l} -R_{\cdot l \cdot j |k} )\delta^i_{\ m} =0.
\end{equation}
This yields
\begin{equation}
(R_{\cdot l|m}-R_{\cdot m|l})\delta^i_{\ k} +(R_{\cdot m|k} - R_{\cdot k|m} )\delta^i_{\ l} + (R_{\cdot k|l} - R_{\cdot l|k} )\delta^i_{\ m}=0. \label{Bi5}
\end{equation}
Contracting (\ref{Bi5}) with $y^m$ yields
\begin{equation}
(R_{\cdot l|m}y^m- 2 R_{|l})\delta^i_{\ k} + (2 R_{|k}-R_{\cdot k|m}y^m) \delta^i_{\ l} + (R_{\cdot k|l}-R_{\cdot l|k}) y^i =0. \label{RR1}
\end{equation}
Taking the trace $ i=k$ in (\ref{RR1}), we obtain
\begin{equation}
(n-2) (R_{\cdot l|m}y^m - 2 R_{|l} ) =0.\label{RR2}
\end{equation}
\begin{thm} \label{thmdual}
If $G$ is an $n$-dimensional spray of isotropic curvature $R$ ($n\geq 3$), then $R$ satisfies
\begin{equation}
\frac{1}{2} R_{\cdot l |m} y^m - R_{|l}=0. \label{RRik}
\end{equation}
\end{thm}
{\it Proof}: By the assumption $n\geq 3$, we obtain from (\ref{RR2}) that
\[ R_{|l} - \frac{1}{2} R_{\cdot l|m}y^m =0.\]
\hspace*{\fill}Q.E.D.
For a spray $G$, we introduce a new quantity $\eta = \eta_k dx^k$,
\begin{equation}
\eta_k:= \frac{1}{2} R_{\cdot k|m}y^m - R_{|k}, \label{eta}
\end{equation}
where $R:=\frac{1}{n-1} {\rm Ric} $. For a spray of isotropic curvature $R$ on an $n$-dimensional manifold $M$ ($n\geq 3$), by Theorem \ref{thmdual}, $\eta =0$.
Let $L:=\tilde{F}^2$ be a Finsler metric and $G$ a spray on a manifold. The spray coefficients of $\tilde{F}$ can be expressed as
\begin{equation}
\tilde{G}^i=G^i + \frac{1}{4} \tilde{g}^{ik} L_{|k} + \tilde{g}^{ik} \Big \{ \frac{1}{2} L_{\cdot k | m} y^m - L_{|k} \Big \}. \label{Gdual}
\end{equation}
$L:=\tilde{F}^2$ is said to be {\it dually equivalent to} $G$ if
\begin{equation}
\tilde{G}^i=G^i + \frac{1}{4} \tilde{g}^{ik} L_{|k}.
\end{equation}
This is equivalent to
\begin{equation}
\frac{1}{2} L_{\cdot k | m} y^m - L_{|k} =0. \label{LLL}
\end{equation}
Let $G$ be a spray of isotropic curvature $R$ on an $n$-dimensional manifold $M$ ($n\geq 3$). If $R$ is a Finsler metric, then, by Theorem \ref{thmdual}, one can see that $R$ is dually equivalent to $G$.
Then the S-curvature of $(\hat{G}, dV)$ vanishes. Hence, $\hat{\chi} =0$.
\end{lem}
{\it Proof}: Recall
\[ \hat{\chi}_k = \frac{1}{2} \Big \{ \hat{S}_{|m \cdot k} y^m - \hat{S}_{|k} \Big \}.\]
On the other hand, $\hat{G}^i = G^i + Py^i$ with $P = - \frac{S}{n+1}$. Thus
\[ \hat{S}= S +(n+1) P = 0.\]
This yields that $ \hat{\chi}=0$. \hspace*{\fill}Q.E.D.
\begin{lem}
If $G_1$ and $G_2$ are two projectively equivalent sprays on a manifold $M$, then for any volume form $dV$, the spray $\hat{G}_1$ associated with $(G_1, dV)$ and the spray $\hat{G}_2$ associated with $(G_2, dV)$ are equal, i.e., $\hat{G}_1 = \hat{G}_2$.
\end{lem}
{\it Proof}: It is easy to see that if $ G^i_1 = G^i_2 + P y^i$, then
\[ S_1 = S_2 + (n+1) P.\]
Then
\begin{eqnarray*}
\hat{G}_1^i & = & G^i_1 - \frac{S_1}{n+1} y^i \\
& = & [G^i_2 + Py^i] - \frac{S_2+(n+1) P}{n+1} y^i \\
& = & G^i_2 - \frac{S_2}{n+1} y^i = \hat{G}^i_2.
\end{eqnarray*}
\hspace*{\fill}Q.E.D.
\noindent {\it Proof of Corollary \ref{cor1.2}}: First, by definition, $\hat{G}$ is projectively equivalent to $G$. Thus $\hat{G}$ is of scalar curvature. Since $\hat{\chi}=0$, by Theorem \ref{lemchi=0}, we see that $\hat{G}$ is of isotropic curvature. \hspace*{\fill}Q.E.D.
By the above lemma, any geometric quantity of $\hat{G}$ is a projective invariant of $G$ with respect to a fixed volume form $dV$. Further, if the geometric quantity of $\hat{G}$ is independent of the volume form $dV$, then the quantity is a projective quantity of $G$.
\begin{lem}
Let $G$ be a spray and $dV$ a volume form on a manifold $M$. For the spray $\hat{G}$ associated with $(G, dV)$, the Riemann curvature of $\hat{G} $ is given by
\begin{equation}
\hat{R}^i_{\ k} = R^i_{\ k} + \tau \delta^i_{\ k} - \frac{1}{2} \tau_{\cdot k} y^i +\frac{3 \chi_k}{n+1} y^i, \label{RRtaux}
\end{equation}
where
\begin{equation}
\tau : = \Big ( \frac{S}{n+1} \Big )^2 + \frac{1}{n+1} S_{|m}y^m.
\end{equation}
\end{lem}
{\it Proof}: By a direct argument. \hspace*{\fill}Q.E.D.
By (\ref{RRtaux}), we get the projective Ricci curvature tensor $\widehat{\rm Ric}_{jl} :=\frac{1}{2} \{ \hat{R}^{\ m}_{j \ ml} +\hat{R}^{\ m}_{l \ mj} \}$ and the projective Ricci curvature $\widehat{\rm Ric} := \widehat{\rm Ric}_{jl}y^jy^l$:
\begin{eqnarray}
\widehat{\rm Ric}_{jl} & = & {\rm Ric}_{jl} +\frac{n-1}{2} \tau_{\cdot j\cdot l} +H_{jl}, \label{hatRicij}\\
\widehat{\rm Ric} & = & {\rm Ric} + (n-1) \tau,\label{hatRic}
\end{eqnarray}
where $\widehat{\rm Ric} = \widehat{\rm Ric}_{jl} y^j y^l$ is the Ricci curvature of $\hat{G}$ and
\[ H_{ij} := \frac{1}{2}\Big \{ \chi_{i\cdot j}+\chi_{j\cdot i} \Big \}. \]
It is natural to consider other quantities of $\hat{G}$, such as the Berwald curvature defined in (\ref{Bcurvature}) and the T-curvature defined in (\ref{Tcurvature}):
\[ \hat{B}^{\ i}_{j \ kl} = \frac{\partial^3 \hat{G}^i}{\partial y^j \partial y^k \partial y^l},\]
\[ \hat{T}^i_{\ k} = \hat{R}^i_{\ k} - \Big \{ \hat{R}\delta^i_{\ k} - \frac{1}{2} \hat{R}_{\cdot k} y^i \Big \}.\]
Clearly, $\hat{B}$ and $\hat{T}$ are projective invariants with a fixed volume form $dV$. We have the following.
\begin{prop}
Let $G$ be a spray on a manifold and $\hat{G}$ the spray associated with $(G, dV)$ for some volume form $dV$. Then the Berwald curvature $\hat{B}$ and the T-curvature $\hat{T}$ are independent of $dV$; hence they are projective invariants of $G$.
In fact, $ \hat{B} = D$ is the Douglas curvature and $\hat{T} = W$ is the Weyl curvature of $G$.
\end{prop}
Here we provide another description of the Douglas curvature and the Weyl curvature of a spray.
Let $G$ be a spray and $\hat{G}$ be the spray associated with $(G, dV)$ for some volume form $dV$. Let $\hat{\eta}$ be the quantity of $\hat{G}$ defined in (\ref{eta}). Then $\hat{\eta}$ is a projective invariant of $G$ for a fixed volume form $dV$. In fact, $\hat{\eta} = {\bf W}^o$, the so-called {\it Berwald-Weyl curvature} (\cite{Sh}). If $G$ is of scalar curvature, then $\hat{G}$ is of isotropic curvature. Thus $\hat{\eta}=0$ when $n=\dim M \geq 3$ by Theorem \ref{thmdual}.
\begin{prop}
Let $G$ be a spray on a manifold and $\hat{G}$ the spray associated with $(G, dV)$ for some volume form $dV$. Assume that $G$ is of scalar curvature. Then the projective invariant $\hat{\eta} =0$ in dimension $n\geq 3$.
\end{prop}
\section{Examples}
In this section, we shall give some sprays of isotropic curvature.
\begin{ex}{\rm
Let $ F =\alpha+\beta$ be a Randers metric on an $n$-dimensional manifold $M$, where $\alpha =\sqrt{a_{ij}(x)y^iy^j}$ is a Riemannian metric and $\beta = b_i (x) y^i$ is a $1$-form on $M$. Let $\nabla \beta =b_{i|j} y^i dx^j $ denote the covariant derivative of $\beta$ with respect to $\alpha$. Let
\[ r_{ij}:= \frac{1}{2} (b_{i|j} + b_{j|i} ), \ \ \ \ s_{ij}:= \frac{1}{2} ( b_{i|j}-b_{j|i} ), \ \ \ \ s_j := b^i s_{ij},\]
\[ q_{ij}:= r_{im}s^m_{\ j}, \ \ \ t_{ij}:=s_{im}s^m_{\ j}, \ \ \ \ t_j:=b^i t_{ij}.\]
Let
\begin{equation}
\hat{G}^i: = G^i_{\alpha} + \alpha s^i_{\ 0}.\label{RandersG}
\end{equation}
In fact, $\hat{G}$ is the spray associated with $(G, dV_{\alpha})$. It is proved in \cite{ShYi} that $\hat{G}$ is of scalar curvature if and only if the Riemann curvature $\bar{R}^i_{\ k}$ of $\alpha$ and the covariant derivatives of $\beta$ satisfy the following equations:
\begin{eqnarray}
\bar{R}^i_{\ k} & = & \kappa \Big \{ \alpha^2 \delta^i_k - y_k y^i \Big \}\nonumber\\
&& +\alpha^2 t^i_{\ k}+ t_{00} \delta^i_k-t_{k0}y^i - t^i_{\ 0} y_k - 3s^i_{\ 0} s_{k0} , \label{eqW1AAA}\\
s_{ij|k} & = & \frac{1}{n-1} \Big \{a_{ik} s^m_{\ j|m} - a_{jk} s^m_{\ i|m} \Big \}, \label{eqW2**AA}
\end{eqnarray}
where $\kappa =\kappa (x)$ is a scalar function on $M$. In this case, $\hat{G}$ is actually of isotropic curvature, that is, $\hat{R}^i_{\ k} = \hat{R}\delta^i_k -\frac{1}{2} \hat{R}_{\cdot k} y^i$. By a simple computation, we obtain a formula for $\hat{R}:= \frac{1}{n-1} \widehat{\rm Ric}$:
\[ \hat{R} = \kappa \alpha^2 + t_{00}+ \frac{2}{n-1} \alpha s^m_{\ 0|m}.\]
}
\end{ex}
\begin{ex}{\rm
Consider a spray on an open subset $U\subset R^2$,
\[ G = y^1 \frac{\partial }{\partial x^1}+ y^2 \frac{\partial }{\partial x^2} - 2 G^1 \frac{\partial }{\partial y^1}-2 G^2 \frac{\partial }{\partial y^2},\]
where
\begin{eqnarray*}
G^1 & = & B (y^1)^2 + 2 C y^1y^2 + D (y^2)^2 +\frac{1}{3} ( f_{x^1} (y^1)^2 +f_{x^2} y^1y^2),\\
G^2 & = & - A (y^1)^2 -2 B y^1y^2 - C (y^2)^2 +\frac{1}{3} (f_{x^1} y^1y^2 +f_{x^2} (y^2)^2),
\end{eqnarray*}
and
\[ A = A(x^1,x^2), \ \ B = B(x^1,x^2), \ \ C = C(x^1,x^2), \ \ D = D(x^1, x^2), \ \ f = f(x^1, x^2)\]
are $C^{\infty}$ functions on $U$.
The geodesics are the graphs of $x^2 = \phi (x^1)$, where $\phi$ satisfies
\[ \phi'' = 2 A(x^1, \phi) + 6 B(x^1, \phi) \phi'+ 6 C(x^1, \phi) (\phi')^2 + 2 D(x^1, \phi) (\phi')^3.\]
We have
\[ \Pi =\frac{\partial G^m}{\partial y^m} = f_{x^1} y^1 + f_{x^2} y^2.\]
Thus $\chi_k =0$. Further computation shows that $G$ is of isotropic curvature.
}
\end{ex}
\begin{thebibliography}{lbl}
\bibitem{ChSh} X. Cheng and Z. Shen, {\it Finsler Geometry --- An approach via Randers spaces}, Springer-Verlag, 2012.
\bibitem{LiSh1} B. Li and Z. Shen, {\it Ricci curvature tensor and non-Riemannian quantities}, Canadian Mathematical Bulletin, {\bf 58} (2015), 530-537.
\bibitem{LiSh2} B. Li and Z. Shen, {\it On sprays of isotropic curvature}, International Journal of Mathematics, {\bf 29} (2018), https://doi.org/10.1142/S0129167X18500039.
\bibitem{Sh0} Z. Shen, {\it Finsler manifolds with nonpositive flag curvature and constant S-curvature}, Mathematische Zeitschrift, {\bf 249} (2005), 625-639.
\bibitem{Sh1} Z. Shen, {\it On some non-Riemannian quantities in Finsler geometry}, Canad. Math. Bull. {\bf 56} (2013), 184-193.
\bibitem{Sh} Z. Shen, {\it Differential Geometry of Spray and Finsler Spaces}, Kluwer Academic Publishers, 2001.
\bibitem{ShYi} Z. Shen and G. C. Yildirim, {\it A characterization of Randers metrics of scalar flag curvature}, Recent Developments in Geometry and Analysis, Advanced Lectures in Mathematics {\bf 23} (2013), 345-358.
\bibitem{Zhu} H. Zhu, {\it On a class of Finsler metrics with special curvature properties}, Balkan Journal of Geometry and Its Applications, {\bf 23} (2018), 97-108.
\end{thebibliography}
\noindent Zhongmin Shen
\noindent Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis, IN 46202-3216, USA.
\noindent \verb"[email protected]"
\end{document}
\begin{document} \begin{center} \uppercase{\bf When waiting moves you\\ in scoring combinatorial games} \vskip 20pt {\bf Urban Larsson\footnote{Supported by the Killam Trust}}\\ {\smallit Dalhousie University, Canada}\\ {\bf Richard J.~Nowakowski}\\ {\smallit Dalhousie University, Canada}\\ {\bf Carlos P. Santos\footnote{Corresponding author: Centro de Estruturas Lineares e Combinatórias, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C6, Piso 2, 1749-016 Lisboa, Portugal; [email protected]}}\\ {\smallit Center for Linear Structures and Combinatorics, Portugal}\\ \end{center} \begin{abstract} Combinatorial Scoring games, with the property `extra pass moves for a player does no harm', are characterized. The characterization involves an order embedding of Conway's Normal-play games. Also, we give a theorem for comparing games with scores (numbers) which extends Ettinger's work on dicot Scoring games. \end{abstract} \section{Introduction} \textit{The Lawyer's offer:} To settle a dispute, a court has ordered you and your opponent to play a Combinatorial game, the winner (most number of points) takes all. Minutes before the contest is to begin, your opponent's lawyer approaches you with an offer: "You, and you alone, will be allowed a pass move to use once, at any time in the game, but you must use it at some point (unless the other player runs out of moves before you used it)." Should you accept this generous offer? We will show when you should accept and when you should decline the offer. It all depends on whether Conway's Normal-play games (last move wins) can be embedded in the `game' in an order preserving way. \textit{Combinatorial games} have perfect information, are played by two players who move alternately, but moreover, the games finish regardless of the order of moves. When one of the players cannot move, the winner of the game is declared by some predetermined winning condition. The two players are usually called Left (female pronoun) and Right (male pronoun). Many combinatorial games have the property that the game decomposes into independent sub-positions. A player then has the choice of playing in exactly one of the sub-positions; the whole position is the \textit{disjunctive sum} of the sub-positions. The disjunctive sum of positions $G$ and $H$ is written $G+H$. Such \emph{additive} \cite{Johns2014} games include \textsc{go, domineering, konane, amazons, dots\&boxes} and also (end-positions of) \textsc{chess} but do not include \textsc{hex} or any type of Maker-Maker and Maker-Breaker games. See \cite{Beck2006}, for example, for techniques to analyse these latter games. \textit{Normal-play} games have the last player to move as the winner; \textit{Mis\`ere-play} games have that player as the loser. In this paper the focus is on \textit{Scoring games} in which the player with the greatest score wins. Finding general results for Scoring games has proven difficult. There are only five contributors known to the authors: Milnor \cite{Milno1953}, followed by Hanner \cite{Hanne1959}, considered games with no Zugzwang positions; Johnson \cite{Johns2014} abstracted from a game played with knots; Ettinger \cite{Ettin1996,Ettin2000} considered \textit{dicots}, that is games where either both players have a move or neither does; Stewart \cite{Stewa2011} considered a very general class of games. We give an overview of their results in Section \ref{sec:survey}. Aviezri Fraenkel coined the terms `Math' games and `Play' games. 
The former have properties that mathematicians like. On the other hand, Play games tend to be harder to analyze, for example \textsc{go, dots-\&-boxes, othello, blokus} and \textsc{kulami}; moreover they give direction to the mathematical research. `Play' Scoring games tend to have some common strategic considerations. This paper focuses on three. \begin{itemize} \item \textit{Zugzwang} (German for ``compulsion to move'') is a situation where one player is put at a disadvantage because he has to make a move when he would prefer to pass and make no move. \item \textit{Bonus/penalty}: In many Scoring games, there are penalties or bonuses to be awarded when a game finishes. \item \textit{Greediness principle}: Given two games $G$ and $H$, Left prefers a game $G$ for a game $H$, whenever each Left option of $H$ is also a Left option in $G$, and each Right option of $G$ is also a Right option of $H$. \end{itemize} Zugzwang and the Greediness principle relate to the question posed in the Lawyer's offer. Perhaps one would believe that if all other things remain equal, giving Left an extra option is an advantage, at least no disadvantage, to Left. Surprisingly, this is not always true, indeed it is not true in \cite{Stewa2011}, nor is it true in Mis\`ere-play games. If there are Zugzwangs in the `game', then you would be inclined to accept, but, as we will see, this does not reveal the whole truth. See Figure \ref{fig:sample} for an example. Classes of Scoring games $\mathcal{S}$, like Normal and Mis\`ere-play games, have a defined equivalence, $\equiv$, (often called `equality') which gives rise to equivalence classes, and where $S/\!\!\equiv$ forms a monoid. In Normal-play games this gives an ordered abelian group. For the class of all Mis\`ere-play games the monoid has little structure. Significant results concerning Mis\`ere-play games only became possible after Plambeck and Plambeck \& Siegel (see \cite{PlambS2008}) pioneered the approach of restricting the total set of games under consideration. As shown in Stewart \cite{Stewa2011}, the monoid based on the full class of Scoring games also has little structure. Following Plambeck \& Siegel's approach, we restrict the subset of Scoring games under consideration to obtain a monoid with a useful structure. In Section \ref{sec:terminology} we formally develop the concepts needed for Scoring games, together with some basic results. In Section \ref{sec:Normal-play} we give some Normal-play background; see \cite{AlberNW2007, Siege2013} for more on Normal-play games. Our main results are given in Section \ref{sec:obstacle}. In Theorem \ref{thm:emb}, we study when Normal-play games can be embedded in a family of Scoring games in an order preserving way. For games $G$ and $H$, to check if $G\geqslant H$ (see Definition \ref{def:equality}) involves comparisons using all scoring games. Theorem \ref{thm:comp} gives a method that avoids this if one of the games is a number, and in particular answers the question: `who wins?', i.e. is $G\geqslant 0?$ To illustrate the concepts, we refer to games based on \textsc{konane} (see also Section~\ref{sec:ScoringGames}). \subsection{\textsc{konane}} \textsc{konane} is a traditional Hawaiian Normal-play game, played on a $m\times n$ checker-board with white stones on the white squares and black on the black squares with some stones removed. Stones move along rows or columns but not both in the same move. 
A stone can jump over an adjacent opponent's stone provided that there is an empty square on the other side, and the opponent's stone is removed. Multiple jumps are allowed on a single move but are not mandatory. When the player to move has no more options then the game is over and, in Normal-play, the player is the loser. \textsc{scoring-konane} is played as \textsc{konane}, but when the player to move has no options then the game is over, and the score is \textit{the number of stones Left has removed minus the number of stones Right has removed.} Left wins if the score is positive, loses if it is negative and ties if it is zero. \begin{figure} \caption{To the left, a {\sc scoring-konane} \label{fig:sample} \end{figure} In Figure \ref{fig:sample}, we show that it is not clear whether you should accept the Lawyer's offer if you only get beforehand information that the `game' to play is {\sc scoring-konane}, but not which particular position. In Section~\ref{sec:ScoringGames}, we develop a variation of {\sc scoring konane}, with a bonus/penalty rule, for which you gladly would accept the offer, irrespective of any particular position. \section{Games terminology}\label{sec:terminology} Throughout, we assume best play, i.e., for example, `Left wins' means that Left can force a win against all of Right's strategies. We first give the definitions common to all variants of additive combinatorial games. Other concepts require the winning condition and these we give in separate sub-sections. We denote by $\mathbb{Np}$, the set of short, Normal-play games. The word `game' has multiple meanings. Following \cite{AlberNW2007, Siege2013}, when referring to a set of rules, the explicit game is in \textsc{small capitals}, otherwise, in proofs, `game' and `position' will be used interchangeably. Given a game $G$, an \textit{option} of $G$ is a position that can be reached in one move; a \emph{Left (Right) option} of $G$ is a position that can be reached in one move by Left (Right). The set of Left, respectively Right, options of $G$ are denoted by $G^\mathcal{L} $ and $G^\mathcal{R} $ and we write $G^{\rm L}$ and $G^{\rm R}$ to denote a typical representative of $G^\mathcal{L} $ and $G^\mathcal{R} $ respectively. A combinatorial game is defined recursively as $G=\{G^\mathcal{L} \midG^\mathcal{R} \}$. Further, $G$ is a \textit{short game} if it has a form $\{G^\mathcal{L} \midG^\mathcal{R} \}$ such that $G^\mathcal{L} $ and $G^\mathcal{R} $ are finite sets of short games. A game $H$ is a \textit{follower} of a game $G$ if there is any sequence of moves (including the empty sequence, and not necessarily alternating) starting at $G$ that results in the game $H$. The \textit{disjunctive sum} of the games $G$ and $H$ is the game $G+H$, in which a player may move in $G$ or in $H$, but not both. That is, $G+H = \{G^\mathcal{L} +H, G+H^\mathcal{L}\midG^\mathcal{R} +H, G+H^\mathcal{R}\}$ where, for example, if $G^\mathcal{L} =\{G^{\rm L_1},G^{\rm L_2},\ldots\}$ then $G^\mathcal{L} +H = \{G^{\rm L_1}+H,G^{\rm L_2}+H,\ldots\}$. Clearly, the disjunctive sum operation is associative and commutative. There is a well-known operation of `turning the board around', that is, reversing the roles of both players. In Normal-play games, given a game $G$, this new game is the additive inverse of the old and the `turned' board is denoted by $-G$ giving rise to the desirable statement $G-G=0$, indicating that there is no advantage to either Left or Right in $G-G$. 
In Scoring games and also in Mis\`ere-play, the underlying structure is not necessarily a group, so for most games $G - G\ne 0$. To avoid misleading equations, we call this operation \textit{conjugation}, and denote the \textit{conjugate} by $\sim \! G$. If $G=\{G^{\rm L_1},G^{\rm L_2},\ldots \mid G^{\rm R_1},G^{\rm R_2},\ldots \}$ then recursively $\sim \! G = \{\sim \! G^{\rm R_1},\sim \! G^{\rm R_2},\ldots \mid \, \sim \! G^{\rm L_1},\sim \! G^{\rm L_2},\ldots\}$. The notation $G=\{G^\mathcal{L}\mid G^\mathcal{R}\}$ has been identified with Normal-play games in the literature. In this paper, since we will refer to both Normal- and Scoring-play as different entities, we will use $\langle \,G^\mathcal{L}\mid G^\mathcal{R}\, \rangle$ to refer to Scoring games in order to avoid confusion. The classes and subclasses of games that are mentioned in this paper, and in papers about Mis\`ere-play games, all have some common properties and have been given a designation. \begin{definition} Let $\mathcal{U}$ be a set of combinatorial games. Then $\mathcal{U}$ is a \emph{universe} if \begin{itemize} \item[(1)] $\mathcal{U}$ is closed under disjunctive sum; \item[(2)] $\mathcal{U}$ is closed under taking options, that is, if $G\in\mathcal{U}$ then every $G^{\rm L}\inG^\mathcal{L} $ and $G^{\rm R}\inG^\mathcal{R} $ are in $\mathcal{U}$; \item[(3)] $\mathcal{U}$ is closed under conjugation, that is, if $G\in\mathcal{U}$ then $\sim \! G\in\mathcal{U}$. \end{itemize} \end{definition} \subsection{Scoring games terminology}\label{sec:ScoringGames} All of the previous works on Scoring games used different terminology and notation although the concepts were very similar. We unify the notation; for example, even though players sometimes have a means of keeping score during the play, the score is uniquely determined only when the player to move has no options. \begin{definition}\label{basic} Let $G$ be a game with no Left options. Then we write $G^\mathcal{L}=\emptyset^{\ell}$ to indicate that, if Left to move, the game is over and the score is the real number $\ell$. Similarly, if $G^\mathcal{R}=\emptyset^r$, and it is Right's move, there are no Right options and the score is $r$. We refer to $\emptyset^s$ as an \textit{atom} or, if needed for specificity, the $s$\textit{-atom}. Scores will always be real numbers with the convention: Left wins if $s>0$; Right wins if $s<0$ and the game is a tie (drawn) if $s=0$. Since $\langle \, \emptyset^s\mid\emptyset^{s} \, \rangle$ results in a score of $s$ regardless of who moves next, we call this game $s$. \end{definition} Games $\langle \, \emptyset^{\ell}\mid\emptyset^{r} \, \rangle$ with any of the three conditions $\ell<r$, $\ell=r$ and $\ell>r$ can occur in practice. Allowing only $\ell=r$ gives the scoring universe studied in \cite{Stewa2011}. Since addition and subtraction of real numbers does not pose a problem, we will revert to using $-s$ instead of $\sim\! s=\langle \emptyset^{-s}\mid \emptyset^{-s}\rangle$ for the conjugate of $s$. \begin{example} \textsc{diskonnect}\footnote{The first world championships, at TRUe games May 2014, were played on a $8\times 8$ board with the middle $2\times 2$ square empty. 
The authors placed 4th, 11th and 2nd respectively, Paul Ottaway placed 1st and Svenja Huntemann 3rd.} is played as \textsc{scoring-konane}, but with an additional bonus/penalty rule at the end: a piece is \emph{insecure} if it can be captured by the opponent with a well-chosen sequence of moves (ignoring the alternating-move condition), and otherwise, the piece is \emph{safe}. When a player to move, say Left (Black), has no more options then the game is over and Right (White) removes all of Left's insecure stones on the board. This is the penalty a player has for running out of options. The score when the game ends is \textit{the number of stones Left has removed minus the number of stones Right has removed.} \end{example} \begin{figure} \caption{An endgame in \textsc{konane} \label{fig:disk1} \end{figure} The game in Figure \ref{fig:disk1} is $1=\{ 0 \mid \emptyset \}$ in (Normal-play) \textsc{konane}, $\langle\langle \emptyset^1\mid \emptyset^{1} \rangle \mid \emptyset^{0} \rangle = \langle 1\mid \emptyset^{0} \rangle $ in \textsc{scoring-konane} and $\langle 1 \mid \emptyset^{2} \rangle $ in \textsc{diskonnect}. In \textsc{diskonnect}, if Left moves, she jumps one stone for a score of 1 and the game is over. If it is Right to move, he has no moves and the bonus clause is invoked and since there are two white stones that could be taken, the score is 2. Observe that even though Left cannot, in play, actually take both stones, for each stone there is a legal sequence that leads to it being removed and so, each is insecure. Scoring games can be defined in a recursive manner using the atoms. \begin{definition}\label{def:S} The games born on Day 0 are $S_0=\{\langle \, \emptyset^{\ell}\mid \emptyset^r \, \rangle : \ell,r\in \mathbb{R} \}$. For $i=0,1,2,\ldots$, let $S_{i+1}$, the games born by Day $i+1$, be the set of games of the form $\langle \, {\cal G}\mid {\cal H} \, \rangle$, where ${\cal G}$ and ${\cal H} $ are non-empty finite subsets of $S_{i}$, or where either or both can be a single atom. The games in $S_{i+1}\setminus S_{i}$ are said to have \textit{birthday} $i+1$. Let $\mathbb{S}=\cup_{i\geqslant 0}S_i$. \end{definition} We now make explicit the effect of taking a disjunctive sum of Scoring games. (See Figure \ref{fig:gametree} for an example.) 
\begin{definition}
Given two Scoring games $G$ and $H$, the disjunctive sum is given by:
\begin{eqnarray*}
G+H&=& \langle \, \emptyset^{\ell_1+\ell_2}\mid\emptyset^{r_1+r_2} \, \rangle, \quad\textrm{ if $G=\langle \, \emptyset^{\ell_1}\mid\emptyset^{r_1} \, \rangle$ and $H=\langle \, \emptyset^{\ell_2}\mid\emptyset^{r_2} \, \rangle$;}\\
&=&\langle \, \emptyset^{\ell_1+\ell_2}\mid G^\mathcal{R} +H,G+H^\mathcal{R} \, \rangle, \textrm{ if $G=\langle \, \emptyset^{\ell_1}\mid G^\mathcal{R} \, \rangle$ and $H=\langle \, \emptyset^{\ell_2}\mid H^\mathcal{R} \, \rangle$},\\
&{}&\qquad \textrm{ and at least one of $G^\mathcal{R} $ and $H^\mathcal{R}$ is not empty;}\\
&=&\langle \, G^\mathcal{L} +H,G+H^\mathcal{L}\mid \emptyset^{r_1+r_2} \, \rangle, \textrm{ if $G=\langle \, G^\mathcal{L} \mid\emptyset^{r_1} \, \rangle$ and $H=\langle \, H^\mathcal{L} \mid\emptyset^{r_2} \, \rangle$},\\
&{}&\qquad \textrm{ and at least one of $G^\mathcal{L} $ and $H^\mathcal{L}$ is not empty;}\\
&=&\langle \, G^\mathcal{L} +H,G+H^\mathcal{L}\mid G^\mathcal{R} +H,G+H^\mathcal{R} \, \rangle, \textrm{ otherwise.}
\end{eqnarray*}
\end{definition}
Note that the option $G^\mathcal{L} +H$ does not exist if $G^\mathcal{L} $ is empty (there is no addition rule for adding an empty set of options to a game). For example, consider $\langle \, \emptyset^1\mid 2 \, \rangle+ \langle \, 2\mid -1 \, \rangle$. If Left plays, she has no move in the first component, but does have a move in the second, so the score in the first component is not yet triggered. She must move to $\langle \, \emptyset^1\mid 2 \, \rangle + 2$, whereupon Right moves to $2+2=4>0$, and Left wins. If Right plays, he should move to $\langle \, \emptyset^1\mid 2 \, \rangle + (-1)$; now Left has no move in the sum, and the score of Left's empty set of options is triggered, giving a total score of $1-1=0$, a tie. Note also that the addition of numbers is covered by $p+s = \langle \, \emptyset^p\mid\emptyset^{p} \, \rangle + \langle \, \emptyset^s\mid\emptyset^{s} \, \rangle= \langle \, \emptyset^{p+s}\mid\emptyset^{p+s} \, \rangle$.
Game trees are a standard way to represent combinatorial games, and for Scoring games each leaf is typically labelled with the score of the game; here, if one of the players runs out of moves, the node must have an atom attached to it (Figure~\ref{fig:gametree}). For Scoring (and Normal-play) games, we use the convention that edges down and to the right represent a Right move, and those down and to the left represent a Left move.
\begin{figure}
\caption{The disjunctive sum of two game trees.}
\label{fig:gametree}
\end{figure}
We now turn our attention to the partial order of Scoring games. We will use Left- and Right-scores, obtained from alternating play, for the comparison of Scoring games.
\begin{definition}\label{SStops}
The \emph{Left-score} and the \emph{Right-score} of a scoring game $G$ are:
\begin{eqnarray}
\mathit{Ls}(G) &=& \begin{cases}\ell & \text{if $G^\mathcal{L} =\emptyset^\ell$}, \\ \max(\mathit{Rs}(G^{\rm L})) & \text{otherwise};\end{cases} \\
\mathit{Rs}(G) &=& \begin{cases}r & \text{if $G^\mathcal{R} =\emptyset^r$}, \\ \min(\mathit{Ls}(G^{\rm R})) & \text{otherwise}.\end{cases}
\end{eqnarray}
\end{definition}
\begin{definition}\label{def:equality}(Inequality in Scoring Universes)\\
Let $\mathcal{U}\subseteq \mathbb{S}$ be a universe of combinatorial Scoring games, and let $G, H\in {\cal U}$. Then $G\geqslant H$ if
$$\mathit{Ls}(G+X)\geqslant \mathit{Ls}(H+X)$$
and
$$\mathit{Rs}(G+X)\geqslant \mathit{Rs}(H+X),$$
for all $X\in \mathcal{U}$.
\end{definition} For game equivalence, we replace all inequalities in the definition, by equalities. It follows that any universe of Scoring games $\mathcal{U}\subseteq \mathbb{S}$ is a monoid, that is $0 + X = X$ for any $X\in {\cal U}$. \section{A natural scoring universe}\label{sec:obstacle} Normal-play games can be regarded as Scoring games, with all scores being zero. From a mathematical perspective, it is of interest to know when they can be embedded in a scoring universe in an order preserving way. \begin{definition}\label{defn:naturalembed} Let $\mathcal{U}\subseteq \mathbb{S}$ be a universe of Scoring games with $\mathbb{R}\subset \mathcal U$. Define the \emph{Normal-play mapping} $\zeta:\mathbb{Np} \hookrightarrow \mathcal{U}$ as $\zeta(G)=\Num{G}\in \mathcal{U}$ where $\Num{G}$ is the game obtained by replacing each empty set of options in the followers of $G\in \mathbb{Np}$, with the $0$-atom,~$\emptyset^0$. \end{definition} Since each atom in $\Num{G}$ is the $0$-atom, the outcome is obviously a tie when the game is played in isolation. But the importance of the $\mathbb{Np}$-mapping is revealed in Definition \ref{def:natural}, and later in Theorem~\ref{thm:emb}. The mapping $f: X\rightarrow Y$ is an \emph{order-embedding} if, for all $x_1,x_2\in X$, $x_1\leqslant x_2$ if and only if $f(x_1)\leqslant f(x_2)$. \begin{definition}\label{def:natural} A scoring universe $\mathcal{U}$ is \emph{natural} if the $\mathbb{Np}$-mapping $\zeta$ is an order-embedding. \end{definition} Tactically, the games $\Num{n}$ can be regarded as \textit{waiting-moves}, for $n$ an integer. We will say that Left (Right) \textit{waits} to mean that she uses one of the waiting-moves in $G + \Num{n}$ if $n$ is positive (negative). For example, in the disjunctive sum $\Num{2}+\langle \, -4\mid\langle \, -3 \mid 5 \, \rangle \, \rangle$ Left is happy to play her waiting-move giving the option $\Num{1}+\langle \, -4\mid\langle \, -3\, |\, 5 \, \rangle \, \rangle$; Right responds to $\Num{1}+\langle \, -3 \mid5 \, \rangle$; Left again waits giving $\Num{0}+\langle \, -3 \mid5 \, \rangle = \langle \, -3 \mid5 \, \rangle$; Right is forced to move to $5$. Further analysis shows that Left much prefers $\Num{2}+\langle \, -4\mid\langle \, -3 \mid5 \, \rangle \, \rangle$ to $\Num{1}+\langle \, -4\mid\langle \, -3 \mid5 \, \rangle \, \rangle$. However, there is an even more compelling reason coming from tactical situations that appear naturally within the games we play. We invite the reader, playing Left, to contemplate which of the two games $G_1$ and $G_2$, in Figure~\ref{fig:disk4} they would prefer to add to a disjunctive sum of Scoring games. \begin{figure} \caption{The \textsc{Diskonnect} \label{fig:disk4} \end{figure} From a Normal-play point of view, it is natural that more waiting-moves give a tactical advantage over fewer and so the order-embedding of Normal-play games is something to be desired if we wish to use any Normal-play intuition in Scoring play. 
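The recursive definitions above are directly computable for short games, and the waiting-move analysis of $\Num{2}+\langle \, -4\mid\langle \, -3 \mid 5 \, \rangle \, \rangle$ can be checked mechanically. The following Python sketch is purely illustrative: the representation of games and the helper names (\texttt{atom}, \texttt{num}, \texttt{waiting}, \texttt{add}, \texttt{Ls}, \texttt{Rs}) are our own ad hoc choices and are not taken from any of the cited works.
\begin{verbatim}
# Illustrative sketch only: a scoring game is a pair (L, R); each side is
# either ('atom', s) -- the s-atom, no options and score s if that player
# is to move -- or a tuple of games (the options on that side).

def atom(l, r):               # the game < atom^l | atom^r >
    return (('atom', l), ('atom', r))

def num(s):                   # the number s = < atom^s | atom^s >
    return atom(s, s)

def is_atom(side):
    return len(side) == 2 and side[0] == 'atom'

def waiting(n):               # the embedded Normal-play integer n >= 0
    G = num(0)
    for _ in range(n):        # n Left waiting-moves, no Right moves
        G = ((G,), ('atom', 0))
    return G

def add(G, H):                # disjunctive sum, following the definition above
    def side(gs, hs):
        if is_atom(gs) and is_atom(hs):
            return ('atom', gs[1] + hs[1])   # scores accumulate
        opts = ()
        if not is_atom(gs):
            opts += tuple(add(g, H) for g in gs)
        if not is_atom(hs):
            opts += tuple(add(G, h) for h in hs)
        return opts
    return (side(G[0], H[0]), side(G[1], H[1]))

def Ls(G):                    # Left-score, by the recursive definition
    L, _ = G
    return L[1] if is_atom(L) else max(Rs(GL) for GL in L)

def Rs(G):                    # Right-score
    _, R = G
    return R[1] if is_atom(R) else min(Ls(GR) for GR in R)

if __name__ == "__main__":
    A = ((num(-3),), (num(5),))       # < -3 | 5 >
    H = ((num(-4),), (A,))            # < -4 | < -3 | 5 > >
    print(Ls(add(waiting(2), H)))     # 5  : two waiting-moves
    print(Ls(add(waiting(1), H)))     # -3 : only one waiting-move
\end{verbatim}
Running the sketch prints $5$ and $-3$: with two waiting-moves Left forces Right to move to $5$, while with only one waiting-move the best Left-score is $-3$, in line with the discussion above. On positions of this size the recursion is immediate; for larger game trees one would memoise on a canonical encoding of positions, but that is beside the point here.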
More concretely, in $G_1$, Figure~\ref{fig:disk4}, by `general principles', Left's jumping only one stone is clearly dominated (we invite the reader to consider the intuition of this) so \[ G_1 = \langle \, 2\mid \emptyset^2 \, \rangle = 2 + \langle \, 0\mid \emptyset^0 \, \rangle = 2+\langle \, \langle \, \emptyset^0\mid \emptyset^0 \, \rangle \mid \emptyset^0 \, \rangle = 2 + \Num{1} \] and \[G_2 = \langle \, 1+\langle \, 1\mid \emptyset^1 \, \rangle \mid \emptyset^2 \, \rangle = 2 + \langle \, \langle \, \langle \, \emptyset^0\mid \emptyset^0 \, \rangle \mid \emptyset^0 \, \rangle \mid \emptyset^0 \, \rangle = 2+\Num{2} \] Here, $G_2$, with the extra waiting-move, seems preferable over $G_1$. Games of the form $\langle \, \emptyset^\ell\midG^\mathcal{R} \, \rangle $ and $\langle \, G^\mathcal{L} \mid\emptyset^r \, \rangle $ (including $\langle \, \emptyset^\ell\mid \emptyset^r\, \rangle $) will be called \textit{atomic} in general, or \textit{Left-atomic} and \textit{Right-atomic}, respectively, if more precision is required. \begin{definition}\label{def:guaranteed} Let $G\in \mathbb{S}$ be an atomic game. Then $G$ is \emph{stable} if $\mathit{Ls}(G)\leqslant \mathit{Rs}(G)$, and $G$ is guaranteed if, for every atom $\emptyset^s$ which is a follower of $G^\mathcal{L}$, and every atom $\emptyset^t$ which is a follower of $G^\mathcal{R}$, $s\leqslant t$. In general, a game in $\mathbb{S}$ is \textit{stable} (\emph{guaranteed}) if every atomic follower is stable (guaranteed). Let $\mathbb{GS}$ be the class of all guaranteed Scoring games. \end{definition} It is clear that $\mathbb{GS}$ is a universe, since `guaranteed' is an hereditary property; moreover, this class is also generated recursively when $\mathbb{S}$ is generated, see Definition~\ref{def:S}. Note that $G\in \mathbb{GS}$ implies that $G$ is stable. To motivate our approach for guaranteed games, we first prove an intermediate result on the stable games. In particular, the existence of `hot-atomic-games', such as $\langle \, \emptyset^3\mid-2 \, \rangle$, prevents $\zeta$ from being an order-embedding. \begin{theorem}\label{thm:sbU} If $\mathcal{U}$ is a natural universe, then each game in $\mathcal{U}$ is stable. \end{theorem} \begin{proof} Suppose that $\mathcal{U}$ contains a non-stable game $G = \langle \, \emptyset^\ell \mid G^\mathcal{R} \, \rangle\in\mathcal{U}$, where $\ell > \mathit{Rs}(G)$. Choose $k$ where $\ell>k>\mathit{Rs}(G)$ and put $X=G-k$. Consider the Left-score of $0+X$: \begin{eqnarray*} \mathit{Ls}(0+X) &=& \mathit{Ls}(\langle \, \emptyset^\ell\midG^\mathcal{R} \, \rangle-k),\quad \\ &=&\ell-k>0, \mbox{since Left has no move}; \end{eqnarray*} and the Left-score of $\Num{1} +X$: \begin{eqnarray*} \mathit{Ls}(\, \Num{1}+X) &=& \mathit{Ls}( \, \Num{1}+\langle \, \emptyset^\ell\midG^\mathcal{R} \, \rangle-k),\quad \\ &=&\mathit{Rs}(\langle \, \emptyset^\ell\midG^\mathcal{R} \, \rangle)-k, \mbox{since Left only had the waiting-move},\\ &=&\mathit{Rs}(G)-k < 0. \end{eqnarray*} Since $\mathit{Ls}(0+X)>\mathit{Ls}( \, \Num{1}+X)$, it follows from Definition \ref{def:equality} that $0$ is not less than $\Num{1}$ and consequently $\mathcal{U}$ is not natural. \end{proof} Universes in which every game is stable are not necessarily natural. \begin{example}\label{ex:non-natural} For example, in $\mathbb{Np}$ the game 0 has many forms, and in a natural universe they all are mapped to the same game. However, in $\mathbb{Np}$, let $*=\{0\mid 0\}$, which gives $0=\{*\mid*\}$. 
Consider the Scoring-game $G=\langle \, \emptyset^2\mid\langle \, \langle \, -5\mid5\, \rangle\mid-5 \, \rangle \, \rangle$, which is stable because $\mathit{Ls}(G)=2<\mathit{Rs}(G)=5$. Now $\mathit{Ls}( \, \Num{0}+G)>0$ and $\mathit{Ls}( \Num{\{*\mid*\}}+G)<0$, i.e. $\Num{\{*\mid*\}}\ne \Num{0} = 0$ and the two representations of 0 are mapped to different games. \end{example} We already know that $\mathbb{GS}$ is a stable universe, and the non-existence of hot-atomic-games ensures that it is also natural. \begin{theorem}\label{thm:emb} The universe $\mathbb{GS}$ is natural. \end{theorem} \begin{proof} We must demonstrate that the Normal-play mapping is an order-embedding. First we show that $\zeta$ is order-preserving. Consider $G, H\in\mathbb{Np}$ such that $G\geqslant H$. We want to argue that $\Num{G}\geqslant \Num{H}$ in $\mathbb{GS}$. The next arguments show that $\mathit{Ls}(\, \Num{G}+X)\geqslant \mathit{Ls}(\, \Num{H}+X)$, for all $X\in\mathbb{GS}$. To this purpose, we induct on the sum of the birthdays of $G$, $H$ and $X$. First, suppose that Left has no options in $\Num{H}+X$. This occurs when $X=\langle \,\emptyset^x\midX^\mathcal{R}\, \rangle$ and $H=\{\emptyset | \,H^\mathcal{R}\}$ (that is $\Num{H}=\langle \,\emptyset^0\mid\Num{H}^{\cal R}\, \rangle$). In this case, $\mathit{Ls}(\, \Num{H}+X)=x$. Now $\Num{G}+X=\Num{G}+\langle \,\emptyset^x\midX^\mathcal{R}\, \rangle$. If Left has a move in $G$ then, in $X$, which is a guaranteed game, the score will be larger than or equal to $x$. If Left cannot play in $G$ then the game is over and she has a score of $x$. Therefore, in both cases, because the score in $\Num{G}$ is always $0$, $\mathit{Ls}(\, \Num{G}+X)\geqslant x$. Now we assume that Left has a move in $\Num{H}+X$. If there is a Left move, $X^{\rm L}$, such that $\mathit{Ls}(\, \Num{H}+X)=\mathit{Rs}(\, \Num{H}+X^{\rm L})$, then, by induction $\mathit{Rs}(\, \Num{H}+X^{\rm L})\leqslant \mathit{Rs}(\, \Num{G}+X^{\rm L})$. By the definitions of the Left- and Right-scores, $\mathit{Rs}(\, \Num{G}+X^{\rm L})\leqslant \mathit{Ls}(\, \Num{G}+X)$, giving $\mathit{Ls}(\, \Num{H}+X) \leqslant \mathit{Ls}(\, \Num{G}+X)$. The remaining case is that there is a Left move in $\Num{H}$ with $\mathit{Ls}(\, \Num{H}+X)=\mathit{Rs}(\, \Num{H}^{\rm L}+X)$. In $\mathbb{Np}$, $G\geqslant H$, i.e., $G-H\geqslant 0$ and so Left has a winning move in $G-H^{\rm L}$. There are two possibilities, either $G^{\rm L}-H^{\rm L}\geqslant 0$ or $G-H^{\rm LR}\geqslant 0$. If the first occurs, then $G^{\rm L}\geqslant H^{\rm L}$ and, by induction, $\mathit{Rs}(\, \Num{G}^{\rm L}+X)\geqslant \mathit{Rs}(\, \Num{H}^{\rm L}+X)$ which gives us the inequalities \[\mathit{Ls}(\, \Num{H}+X)=\mathit{Rs}(\, \Num{H}^{\rm L}+X)\leqslant \mathit{Rs}(\, \Num{G}^{\rm L}+X)\leqslant \mathit{Ls}(\, \Num{G}+X). \] If $G-H^{\rm LR}\geqslant 0$ occurs, then, by induction, $\mathit{Ls}(\, \Num{G}+X) \geqslant \mathit{Ls}(\, \Num{H}^{\rm LR}+X)$. By the definitions of Left- and Right scores, we also have $\mathit{Ls}(\, \Num{H}^{\rm LR}+X) \geqslant \mathit{Rs}(\, \Num{H}^{\rm L}+X)$ and since we are assuming that $\mathit{Ls}(\, \Num{H}+X)=\mathit{Rs}(\, \Num{H}^{\rm L}+X)$, we can conclude that $\mathit{Ls}(\, \Num{G}+X)\geqslant \mathit{Ls}(\, \Num{H}+X)$. The arguments to show that $\mathit{Rs}(\, \Num{G}+X)\geqslant \mathit{Rs}(\, \Num{H}+X)$ for all $X\in\mathbb{\mathbb{GS}}$ are analogous and we omit these. 
Since $\mathit{Ls}(\, \Num{G}+X)\geqslant \mathit{Ls}(\, \Num{H}+X)$ and $\mathit{Rs}(\, \Num{G}+X)\geqslant \mathit{Rs}(\, \Num{H}+X)$ for all $X\in\mathbb{\mathbb{GS}}$, then, by Definition~\ref{def:equality}, $\Num{G}\geqslant \Num{H}$. To demonstrate that $\zeta$ is an order-embedding, it suffices to show that $G>H$ implies $\Num{G}>\Num{H}$ and $G \mid\mid H$ implies $\Num{G} \mid\mid \Num{H}$. We already know that $G>H$ implies $\Num{G}\geqslant \Num{H}$, so it suffices to show that $\Num{G}\neq \Num{H}$. Consider the distinguishing game $X=\; \sim\! \Num{H} +\langle \, -1\mid 1\, \rangle$. We get $\Num{H} + X=H+ \sim \!\! H +\langle \, -1\mid 1\, \rangle$, and Left (next player) loses playing first in $H-H$. So $\mathit{Ls}(\Num{H} + X)=-1$ (no player will play in the Zugzwang). But, similarly, $G>H$ implies $\mathit{Ls}(\Num{G} + X)=1$, since Left wins playing first in $G-H$. Hence $\Num{H}\ne \Num{G}$. To prove that $H\mid\mid G$, we use the same distinguishing game $X=\; \sim \! \Num{H} +\langle \, -1\mid~1\, \rangle$. We have that $-1=\mathit{Ls}(H+ X)<\mathit{Ls}(G+ X)=1$ and $1=\mathit{Rs}(H+ X) >\mathit{Rs}(G+ X)=-1$, which proves the claim. \end{proof} Some of the properties of Normal-play will hold in our Scoring universe, but there is some cost to play in a fairly general Scoring universe; there are non-invertible elements (see also Section \ref{sec:survey}). Hence comparison of games, in general, cannot be carried out as easily as in Normal-play, where $G\geqslant H$ is equivalent to $G-H\geqslant 0$ (Left win playing second in $G-H$). Here we begin by demonstrating how to compare games with scores. Note that since $\Num{-n} =\, \sim\!\Num{n}$ we will revert to the less cumbersome notation $-\Num{n}$. \begin{definition} Let $G$ be a game in $\mathbb{S}$. Then $\mathit{Ls}u(G)=\min\{\mathit{Ls}(G-\Num{n}\, ):n\in\mathbb{N}_0\}$ is the \emph{Right's pass-allowed Left-score} (\emph{pass-allowed Left-score}). The \emph{pass-allowed Right-score} is defined analogously, $\mathit{Rs}o(G)=\max\{\mathit{Rs}(G+\Num{n}):n\in\mathbb{N}_0\}$. \end{definition} In brief, the `overline' indicates that Left can pass and the `underline' that Right can pass. \begin{lemma} Let $G\in \mathbb{S}$. Then \begin{enumerate} \item $\mathit{Ls}(G)\geqslant \mathit{Ls}u(G)$ and $\mathit{Rs}(G)\leqslant \mathit{Rs}o(G)$; \item $\mathit{Rs}o(G+H)\geqslant \mathit{Rs}o(G)+\mathit{Rs}o(H)$ and $\mathit{Ls}u(G+H)\leqslant \mathit{Ls}u(G)+\mathit{Ls}u(H)$. \end{enumerate} \end{lemma} \begin{proof} The inequalities in statement 1 are obvious from the definition of the $\min$-function, and since $G+\Num{0} =G$. If Left answers Right in the same component (including the possibility of a waiting-move) then this is not the full complement of strategies available to her, and this proves that $ \mathit{Rs}o(G+H)\geqslant \mathit{Rs}o(G)+\mathit{Rs}o(H)$. The inequalities for $\mathit{Ls}u$ are proved similarly. \end{proof} \begin{definition} Let $\ell \in \mathbb{R}$. Then, $G$ is \textit{left-$\ell$-protected} if $\mathit{Ls}u(G)\geqslant \ell$ and, for all $G^{\rm R}$, there exists $G^{\rm RL}$ such that $G^{\rm RL}$ is left-$\ell$-protected. Similarly, $G$ is Right-$r$-protected if $\mathit{Rs}o(G)\geqslant r$ and, for all $G^{\rm L}$, there exists $G^{\rm LR}$ such that $G^{\rm LR}$ is right-$r$-protected. \end{definition} The concept of $\ell$-protection allows for comparisons with numbers in $\mathbb{GS}$. \begin{theorem}\label{thm:comp} Let $G\in \mathbb{GS}$. 
Then $G\geqslant \ell$ if and only if $G$ is left-$\ell$-protected. \end{theorem} \begin{proof} ($\Rightarrow$) Suppose that $G$ is not left-$\ell$-protected. We will use distinguishing games of the form $X=\langle \, \emptyset^a\mid b- \Num{n} \, \rangle$ to obtain contradictory inequalities $\mathit{Rs}(G+X)<0<\mathit{Rs}(\ell+X)$. The cases $1$ and $2$ are general considerations that don't need induction and that can be used whenever we want. We begin with the base case of the induction on the rank of $G$, that will be used to prove case $3$.\\ \noindent Base case (rank 0): $G=\langle \, \emptyset^v\mid \emptyset^s\, \rangle$, with $v\leqslant s$.\\ Because $G$ is not left-$\ell$-protected, we have $\mathit{Ls} (G) = v < \ell$. Therefore, in order to have a distinguishing game to build the induction, we consider $X=\langle \, \emptyset^a\mid a+\Num{0} \, \rangle$ where $-\ell<a<-v$. Then, $\mathit{Rs}(G+X)=v+a<0<a+\ell=\mathit{Rs}(r+X)$.\\ \noindent {\rm Case 1:} $\mathit{Ls}u(G)=v<\ell$.\\ Consider $X=\langle \, \emptyset^a\mid a- \Num{n} \, \rangle$, where $-\ell<a<-v$, and where $n$ is large enough to obtain $\mathit{Ls}u(G)$. Then, $$\mathit{Rs}(G+X)\leqslant v+a<0$$ $$\mathit{Rs}(\ell+X)=\ell+a>0$$ This is contradictory with $G\geqslant \ell$.\\ \noindent Case 2: There exists a $G^{\rm R}$ that is Left-atomic.\\ If $G^{\rm R}=\langle \, \emptyset^v \mid (G^{\rm R})^\mathcal{R} \, \rangle$, then consider $X=\langle \, \emptyset^a\mid b+\Num{0} \, \rangle$ such that $a<-\ell$, $b>-\ell$ and $a<b$. Then, $$\mathit{Rs}(G+X)\leqslant \mathit{Ls}(G^{\rm R}+X)=v+a<0$$ $$\mathit{Rs}(\ell+X)=\ell+b>0.$$ \noindent Case 3: There exists $G^{\rm R}$ such that ${\cal G}^{\rm RL}\neq\emptyset$ and all $G^{\rm RL}$ are not left-$\ell$-protected.\\ Consider $G^{\rm RL_1}$, $G^{\rm RL_2}$, $G^{\rm RL_3}$,\ldots,$G^{\rm RL_k}$; the games in ${\cal G}^{\rm RL}$. By induction, for all $i\in \{1,\ldots k\}$, consider the distinguishing games $X_i = \langle \, \emptyset^{a_i}\mid b_i- \Num{n_i}\, \rangle$ such that $$\mathit{Rs}(G^{\rm RL}_i+X_i)<0<\mathit{Rs}(\ell+X_i),$$ for all $i$. Let $X=\langle \, \emptyset^{ \min(a_i)}\mid \min(b_i)- \max(\Num{n_i}) \, \rangle$. By convention and Theorem~\ref{thm:emb}, each $X_i$ is no better than $X$ for Right. Hence, we get that $\mathit{Rs}(G^{\rm RL}_i+X)<0$, for all $i$. We also get $\mathit{Rs}(\ell+X)=\ell+\min(b_i)>0$. Therefore, $\mathit{Rs}(G+X)\leqslant \mathit{Ls}(G^{\rm R}+X)=\max_i \mathit{Rs}(G^{\rm RL}_i+X) <0$. This contradicts $G\geqslant \ell$, since $\mathit{Rs}(\ell+X)>0$, and thus finishes the induction step.\\ ($\Leftarrow$) Assume that $G$ is left-$\ell$-protected. We need to prove that $\mathit{Rs}(G+X)\geqslant \mathit{Rs}(\ell+X)$ and $\mathit{Ls}(G+X)\geqslant \mathit{Ls}(\ell+X)$ $\forall X\in\mathbb{GS}$. Fix some $X\in\mathbb{GS}$ and we proceed by induction on the sum of the ranks of $G$ and $X$. Suppose there is no Right option in $G+X$, that is, $G=\langle G^\mathcal{L} \mid\emptyset^p \rangle $ and $X=\langle X^\mathcal{L} \mid\emptyset^q \rangle$. It follows that $\mathit{Rs}(G+X) = p+q$ and $\mathit{Rs}(\ell+X) = \ell+q$. Let $s=\mathit{Ls}u(G)$. Since $G$ is left-$\ell$-protected, we get $s\geqslant \ell$. Further, since $G$ is guaranteed, we have that $s\leqslant p$, and therefore $\mathit{Rs}(G+X) = p+q \geqslant s+q \geqslant \ell+q =\mathit{Rs}(\ell+X)$. We may now suppose that Right has a move in $G+X$. Suppose that Right's best move is in $X$, to say $G+X^{\rm R}$. 
We then have the chain of relations \begin{eqnarray*} \mathit{Rs}(G+X) &=& \mathit{Ls}(G+X^{\rm R}), \mbox{ by assumption},\\ &\geqslant & \mathit{Ls}(\ell+X^{\rm R}), \mbox{ by induction on the sum of the ranks},\\ &= & \mathit{Rs}(\ell+X), \mbox{ by definition and since numbers have empty sets of options.} \end{eqnarray*} If Right's best move is in $G$, to say $G^{\rm R}+X$, then we have the relations \begin{eqnarray*} \mathit{Rs}(G+X) &=& \mathit{Ls}(G^{\rm R}+X), \mbox{ by assumption,}\\ &\geqslant & \mathit{Rs}(G^{\rm RL}+X), \mbox{ Left might have chosen a non-optimal option,}\\ &\geqslant & \mathit{Rs}(\ell+X), \mbox{ by induction, since $G^{\rm RL}$ is left-$\ell$-protected.} \end{eqnarray*} In all cases we have that $\mathit{Rs}(G+X)\geqslant \mathit{Rs}(\ell+X)$. If Left has a move in $X$ then let $X^{\rm L}$ be the move such that $\mathit{Ls}(\ell+X)=\mathit{Rs}(\ell+X^{\rm L})$. By the previous paragraphs, we have that $\mathit{Rs}(\ell+X^{\rm L})\leqslant \mathit{Rs}(G+X^{\rm L})$. Since moving to $X^{\rm L}$ may not be the best Left move in $G+X$, we know that $\mathit{Rs}(G+X^{\rm L})\leqslant \mathit{Ls}(G+X)$, i.e. $\mathit{Ls}(\ell+X)\leqslant \mathit{Ls}(G+X)$. Now, if Left has no move in $X$ then $X = \langle \, \emptyset^s|X^\mathcal{R} \, \rangle$. We know that $\mathit{Ls}(\ell+X) = \ell + s$. In $G+X$ we restrict Left's strategy, which can only give her the same or a lower score. We denote the continuation of the two games as the $G$ and $X$ components. Whenever there is a Left option in the $G$ component, Left plays the best one. If there is no option in the $G$ component, then and only then Left plays in the $X$ component. Any move by Right in the $X$ component is a waiting-move in the $G$ component so Left achieves a score of at least $\mathit{Ls}u(G)$; moreover, by assumption, $\mathit{Ls}u(G)\geqslant \ell$. Since $X$ is guaranteed, the score from this component is at least $s$. That is, $\mathit{Ls}(G+X)\geqslant \ell+s = \mathit{Ls}(\ell+X)$. For all $X$, we have shown that $\mathit{Ls}(G+X)\geqslant \mathit{Ls}(\ell+X)$ and $\mathit{Rs}(G+X)\geqslant \mathit{Rs}(\ell+X),$ that is, $G\geqslant \ell$. \end{proof} An interesting question is: \textit{What games, and how many, are equal to 0?} We have, for example, $G=\langle \, \langle \, 1\mid 0 \, \rangle \mid \langle \, 0\mid -1 \, \rangle \, \rangle = 0$, a game not obtained from the natural embedding. But each natural embedding of Normal-play game equal to zero maps to zero in $\mathbb{GS}$. Note that, for example, $\langle \, 0\mid 0 \, \rangle \ne 0$, which is of course what we want since in $\mathbb{Np}$, $\mathord{\ast} = \{ 0\mid 0 \}$ (compare this with Milnor's scoring universe in Section \ref{sec:survey}). But also $ \langle \, \langle \, 1\mid 0 \, \rangle \mid \emptyset^0 \, \rangle = 0$, a game that is not obtained from a 0 in Normal-play and neither is it a dicot. The size of the equivalence class of $0$ gives a lower bound on the sizes of the other equivalence classes since $G+H=H$ if $G=0$. Further, note that, if $G\geqslant 0$, then $G$ is Left-0-protected, and thus Right loses moving first, which is similar to the situation in Normal-play. The following corollary of Theorem \ref{thm:comp} gives a criteria that enables us to determine when a game is equal to 0. \begin{corollary}\label{0corollary} Let $G\in \mathbb{GS}$. 
Then $G=0$ iff G is left-$0$- and right-$0$-protected, that is iff $\mathit{Ls}u(G) = \mathit{Rs}o(G)= 0$ and, for all $G^{\rm R}\in G^\mathcal{R} $, there exists $G^{\rm RL}\geqslant 0$ and, for all $G^{\rm L}\in G^\mathcal{L} $, there exists $G^{\rm LR}\leqslant 0$. \end{corollary} \section{Survey of other scoring universes}\label{sec:survey} Let us begin by listing some of the game properties of the other scoring universes in the literature. In the columns, we list the authors of the three most relevant scoring universes in relation to this work; the properties are discussed, as appropriate, later in this section. The `Yes' in Stewart's column means that there are no known `practical methods', or that the answer is trivial as for the invertible elements. The games in Milnor's and Ettinger's universes are trivially stable, since the only atomic games are numbers. Summarizing: \begin{center} \begin{tabular}{|c|c|c|c|} \hline {\bf Properties}& {\bf Milnor}&{\bf Ettinger}&{\bf Stewart}\\ \hline Ordered Abelian Group&Yes&No&No\\ \hline Equivalence Class of 0 & Large & Large & Small\\ \hline Invertible Elements &All& Many&Numbers \\ \hline Constructive Comparisons & Yes&Yes&No \\ \hline Greediness Principle & Yes&Yes&No \\ \hline Game Reductions & Yes &Yes& `Yes'\\ \hline Characterization of Invertible Elements & Yes&Yes&`Yes'\\ \hline Unique Canonical Forms & Yes&No&`Yes' \\ \hline All Games Stable & `Yes' &`Yes' &No\\ \hline Natural Embedding&No&No&No\\ \hline \end{tabular} \end{center} \subsection{Milnor's non-negative incentive games } Milnor \cite{Milno1953} and Hannor \cite{Hanne1959} considered dicot Scoring games in which there is a non-negative incentive for each player to move, and where the atoms (base of recursion) are real numbers. We denote this universe by $\mathbb{PS}$. Non-negative incentive translates to $\mathit{Ls}(G) \geqslant \mathit{Rs}(G)$, for all positions, that is \emph{Zugzwang} situations never occur and the universe is an abelian group. As soon as $\mathit{Ls}(G)=\mathit{Rs}(G)$, the game is over and the players add up the score. There is a similarity between these games and Normal-play games. If the scores are always integers, at the end, the players could count the score by imagining they are making `score'-many independent moves. Using this idea, there have been many advances in the endgame of \textsc{go} (last point). Games such as \textsc{amazons}\footnote{see \url{http://en.wikipedia.org/wiki/Game_of_the_Amazons}} and \textsc{domineering} can also be thought of as `territorial games' (where the score depends on the size of captured land), but, in the literature, they have so far been analyzed under the guise of Normal-play. Because the games in this universe have non-negative incentive and all games of $\mathbb{Np}$ would have 0 incentive (no change in score) there is no natural embedding of $\mathbb{Np}$ into $\mathbb{PS}$. In Milnor's universe when we have the disjunctive sum of a Normal-play component with a non-negative incentive component, the outcome is determined by the non-negative incentive component. Both players want to play in the non-negative incentive component because there are no Zugzwangs. A Normal-play component alone is a tie. So, any Normal-play component is irrelevant; i.e., all embedded Normal-play components are equal to zero, and so, $\mathbb{PS}$ is not natural. 
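Milnor's requirements---dicot positions, number atoms and non-negative incentive, i.e. $\mathit{Ls}(G)\geqslant \mathit{Rs}(G)$ at every follower---are also easy to test mechanically on short games. The sketch below reuses the illustrative helpers (\texttt{is\_atom}, \texttt{num}, \texttt{Ls}, \texttt{Rs}) from the Python fragment in Section~\ref{sec:obstacle}; again, the encoding and function names are ours and are not notation from \cite{Milno1953}.
\begin{verbatim}
# Illustrative only: recursively check Milnor's conditions, reusing the
# is_atom, num, Ls, Rs helpers from the earlier sketch.
def followers(G):
    yield G                       # every game is a follower of itself
    for side in G:
        if not is_atom(side):
            for option in side:
                yield from followers(option)

def is_milnor(G):
    for P in followers(G):
        L, R = P
        if is_atom(L) != is_atom(R):
            return False          # not dicot: one player is stuck, the other is not
        if is_atom(L) and is_atom(R) and L[1] != R[1]:
            return False          # atoms must be plain numbers
        if Ls(P) < Rs(P):
            return False          # a Zugzwang position
    return True

print(is_milnor(((num(-3),), (num(3),))))   # False: < -3 | 3 > is a Zugzwang
print(is_milnor(((num(3),), (num(-3),))))   # True:  < 3 | -3 > has non-negative incentive
\end{verbatim}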
\subsection{Ettinger's dicot Scoring games}
Ettinger's universe \cite{Ettin1996,Ettin2000} also consists of dicot games, but \emph{Zugzwang} games like $\langle \, -3\mid3\, \rangle $ are now allowed. In Ettinger's universe, $\mathbb{DS}$, the atomic games are the real numbers and, moreover, there are no games of the form $\langle \, \emptyset^r\mid \emptyset^s \, \rangle$, $r\neq s$ (although the latter game is a dicot). He noted that real numbers are not necessary, just some values taken from an ordered abelian group. The definitions of (in)equality and of Left- and Right-scores are as in Definitions~\ref{SStops} and \ref{def:equality}.
\begin{definition}
The disjunctive sum $G+H$ with $G,H\in \mathbb{DS}$ reduces to
\begin{displaymath}
G+H= \left\{ \begin{array}{ll}
r+s & \textrm{if $G=r$ and $H=s$ are numbers},\\
\langle \, G^\mathcal{L} +H,G+H^\mathcal{L}\mid G^\mathcal{R} +H, G+H^\mathcal{R} \, \rangle & \textrm{otherwise.}
\end{array} \right.
\end{displaymath}
\end{definition}
Disjunctive sum is associative and commutative, and the universe is partially ordered. The problem is that games do not necessarily have inverses. The following concept was used by Ettinger \cite[p. 20-22]{Ettin1996}.
\begin{definition}
Let $r\in \mathbb{R}$. Then, $G$ is \textit{left-$r$-safe} if $\mathit{Ls}(G)\geqslant r$ and, for all $G^{\rm R}$, there exists $G^{\rm RL}$ such that $G^{\rm RL}$ is left-$r$-safe. The concept of right-$r$-safety is defined analogously.
\end{definition}
Two important results are:
\begin{theorem}[Ettinger's Theorem]\label{Ettingertheorem}
Let $r\in \mathbb{R}$ and $G\in \mathbb{DS}$. Then, $G\geqslant r$ iff $G$ is left-$r$-safe.
\end{theorem}
Writing the explicit condition for `safety', we get:
\begin{corollary}[Ettinger's Corollary]\label{Ettingercorollary}
Let $G\in \mathbb{DS}$. Then $G=0$ iff $\mathit{Ls}(G)=\mathit{Rs}(G)=0$ and for all $G^{\rm R}\in G^\mathcal{R} $ there exists $G^{\rm RL}\geqslant 0$ and for all $G^{\rm L}\in G^\mathcal{L} $ there exists $G^{\rm LR}\leqslant 0$.
\end{corollary}
In \cite{Ettin1996} (p. 48), it is proved that $G\in \mathbb{DS}$ is invertible iff $G +\sim \! G = 0$. In fact, Ettinger proved that there are non-invertible elements and that $\mathbb{DS}$ is just a semigroup (monoid). Namely, consider $G=\langle \, \langle \, 1 \mid -1 \, \rangle \mid \langle \, 1 \mid 1 \, \rangle \, \rangle $. Then $\sim \! G = \langle \, \langle \, -1\mid -1\, \rangle \mid \langle \, 1\mid -1 \, \rangle \, \rangle $. Now $\mathit{Ls}(G + \sim \! G)=-2$. Therefore, using the distinguishing game $0$, $\mathit{Ls}(G + \sim \! G+0) < \mathit{Ls}(0+0)$ and so, by Definition~\ref{def:equality}, $G + \sim \! G\neq 0$.
Lacking a group structure, how do we, constructively, know if $G\geqslant r$? As explained in Theorem \ref{Ettingertheorem}, Ettinger solved the problem with the concept of $r$-safety. The Greediness principle holds in $\mathbb{DS}$. Also, $\mathbb{DS}$ does not have hot-atomic-games; the only games with empty sets of options are the real numbers. Moreover, in \cite{Ettin1996}, despite there being reductions (domination and reversibility), this does not lead to a unique canonical form for the equivalence classes. Finally, since $\mathbb{DS}$ is a dicot universe and $\mathbb{Np}$ is not ($\{ 0 \mid \emptyset \}$ is a game in $\mathbb{Np}$), there is no natural order-preserving embedding from $\mathbb{Np}$ into $\mathbb{DS}$.
The only relation is with the subgroup of $\mathbb{Np}$ constituted by the dicot games (formerly called \textit{all-small}) (\cite{AlberNW2007}, p.~185 ff.).
\begin{figure} \caption{Inclusion maps of $\mathbb{Np}$ and $\mathbb{DS}$ into the universe of Guaranteed Scoring games.} \label{fig:compare} \end{figure}
Although the inclusion map from Ettinger's universe to Guaranteed Scoring is not order preserving, some nice properties still hold, using a refined setting of pass-allowed stops; see Figure \ref{fig:compare}. See also item 4 in Section \ref{sec:Ste}.
\subsection{Stewart's general Scoring games}\label{sec:Ste} The Scoring-universe that Stewart defined \cite{Stewa2011}, $\mathbb{S}' \subset \mathbb{S}$ (the only restriction is $\langle \, \emptyset^r\mid\emptyset^{s} \, \rangle\in \mathbb{S}'$ implies $r=s$), does allow non-stable games, in particular hot-atomic-games, such as $\langle \, \emptyset^9\mid-5\, \rangle$. It has some disadvantageous properties.\\
\noindent \textbf{1)} In $\mathbb{S}'$, we have $G=0\Rightarrow G\cong \langle \, \emptyset^0\mid \emptyset^0 \, \rangle $, where `$\cong$' denotes identical game trees. (See the discussion before Corollary \ref{0corollary} about the size of equivalence classes.) The argument is as follows. Suppose that $G=0$ is such that $G^\mathcal{L} \neq\emptyset$ and consider the distinguishing game $X=\langle \, \emptyset^a\mid b \, \rangle $, where $a>0$ and $b$ is less than all the real numbers that occur in any follower of $G$. If Left starts in $G+X$ she loses; if Left starts in $0+X$ she wins (\cite{Johns2014}, p.36). Therefore, $G\neq 0$. This situation occurs because $X$ is a ``strange'' game where Left wants to have the turn but she has no moves. So, the only invertible games of $\mathbb{S}'$ are the numbers.\\
\noindent \textbf{2)} $\mathbb{S}'$ is non-natural. In a natural universe we have $\Num{1}>0$. Recall, $\zeta(1)=\langle \, 0 \mid \emptyset^0 \, \rangle = \Num{1}$ and $\zeta(0)=\Num{0}=0$. Consider the distinguishing game $X=\langle \, \emptyset^2\mid -3 \, \rangle $: \begin{align*} \mathit{Ls}(\langle \, 0 \mid \emptyset^0 \, \rangle + \langle \, \emptyset^2 \mid -3 \, \rangle ) &= -3,\\ \mathit{Ls}(0+\langle \, \emptyset^2 \mid -3 \, \rangle )&=2, \text{ since Left has no move.} \end{align*} Thus, by Definition \ref{def:equality}, $\Num{1}>0$ is not true in $\mathbb{S}'$.\\
\noindent \textbf{3)} The Greediness principle fails in $\mathbb{S}'$. Consider $G=\langle \, 0\mid \emptyset^0 \, \rangle$ and $H=\langle \, \emptyset^0\mid \emptyset^0 \, \rangle $. There are instances when Left does not prefer $G$; for example, if $X=\langle \, \emptyset^1\mid -1 \, \rangle $, then $\mathit{Ls}(H+X) = 1$, $\mathit{Rs}(H+X) = -1$ and $\mathit{Ls}(G+X) = -1$, $\mathit{Rs}(G+X) = -1$. Thus, by Definition \ref{def:equality}, $G\not\geqslant H$. \\
\noindent \textbf{4)} Clearly $\mathbb{DS}\subset \mathbb{S}$ but the inclusion map is not order-preserving (see also Figure~\ref{fig:compare} for a diagram of the results in this paper). Consider the dicot game $G=\langle \, \langle \, 1\mid 1 \, \rangle \mid \langle \, 1\mid 1 \, \rangle \, \rangle$. By Corollary \ref{Ettingercorollary}, $G>0$ in $\mathbb{DS}$. However, in $\mathbb{S}$, using the distinguishing game $X=\langle \, \emptyset^{\frac{1}{2}}\mid \langle \, \langle \, -2\mid 2\, \rangle \mid -3 \, \rangle \, \rangle $, of course, $\mathit{Ls}(0+X) = \frac{1}{2} > 0$.
But, in the game $G+X$, Left has only one legal move, namely to $\langle \, 1\mid 1 \, \rangle + X$, and so Right goes to $\langle \, 1\mid 1 \, \rangle + \langle \, \langle \, -2\mid 2 \, \rangle \mid -3 \, \rangle $, from which the best Left can obtain is $-1$; hence $\mathit{Ls}(G+X) = -1 < \frac{1}{2} = \mathit{Ls}(0+X)$. Thus, $G>0$ in $\mathbb{DS}$, but $G\ngeqslant0$ in $\mathbb{S}$.
\subsection{Johnson's well-tempered, dicot Scoring games} Johnson's \cite{Johns2014} universe consists of dicot games in which, for a given game $G$, the length of any play (distance to any leaf on the game tree) has the same parity. The games are called \textit{even-tempered} if all the lengths are even and called \textit{odd-tempered} otherwise. A game $G$ is \textit{inversive} if $\mathit{Ls}(G+X)\geqslant \mathit{Rs}(G+X)$ for every even-tempered game $X$. Although the whole set of games is not well-behaved (for example, canonical forms do not exist), each inversive game has a canonical form and an additive inverse, which is equal to its conjugate; in fact, the inversive games form an abelian group. Moreover, $G\geqslant H$ if $G$ and $H$ have the same `temper' and $\mathit{Rs}(G-H)\geqslant 0$.
\section{Normal-play games}\label{sec:Normal-play} The definitions for Normal-play are standard and can be found, along with other material, in any of \cite{BerleCG2001-2004, AlberNW2007, Siege2013}. Under Normal-play, there are four outcome classes: \small \begin{center} \begin{tabular}{|l|l|l|} \hline Class & Name & Definition \\ \hline ${\cal N}$ & incomparable & The ${\cal N}$ext player wins\\ \hline ${\cal P}$ & zero & The ${\cal P}$revious player wins\\ &&(more precisely ${\cal N}$ext player loses) \\ \hline ${\cal L}$ & positive & ${\cal L}$eft wins regardless of who plays first \\ \hline ${\cal R}$ & negative & ${\cal R}$ight wins regardless of who plays first \\ \hline \end{tabular} \end{center} \normalsize \noindent We write $\circ(G)$ to designate the outcome of $G$. The fundamental definitions of Normal-play structure are based on these outcomes.
\begin{definition} \label{Equivalence} (Equivalence) $G=H$ if $\circ(G+X)=\circ(H+X)$ for all games $X$. \end{definition}
The convention is that positive is good for Left and negative for Right, and the outcomes are ordered: $\mathcal{L}$ is greater than both $\mathcal{N}$ and $\mathcal{P}$, which in turn are both greater than $\mathcal{R}$, and, finally, $\mathcal{N}$ and $\mathcal{P}$ are incomparable. In Normal-play games, there is a way to check for the equivalence of games $G$ and $H$, which does not require considering any third game: $$\textit{$G=H$ iff $G-H$ is a second player win. }$$
\begin{definition} \label{Order} (Order) $G\geqslant H$ if, for all games $X$, $\circ(G+X)\geqslant \circ(H+X)$. \end{definition}
\begin{definition} \label{Number} (Number) A game $G$ is a number if all $G^{\rm L}$ and $G^{\rm R}$ are numbers and for each pair of options, $G^{\rm L}<G^{\rm R}$. \end{definition}
Note that $0=\{\emptyset \mid \emptyset\}$ is a number since there are no options to compare. If the number is positive, then this represents the number of moves of advantage that Left has over Right. An important concept is the `stop'---the best number that either player can achieve when going first.
\begin{definition}\label{Stops} (Stops) The \emph{Left-stop} and the \emph{Right-stop} of a game $G$ are: \begin{eqnarray} LS(G) &=& \begin{cases}G & \text{if $G$ is a number}, \\ \max(RS(G^{\rm L})) & \text{if $G$ is not a number};\end{cases} \\ RS(G) &=& \begin{cases}G & \text{if $G$ is a number}, \\ \min(LS(G^{\rm R})) & \text{if $G$ is not a number}.\end{cases} \end{eqnarray} \end{definition} There are two possible relations between the stops. One: $LS(G) > RS(G)$ and $G$ is referred to as a \textit{hot} game. The term was chosen to give the idea that the players really want to play first in $G$. Two: $LS(G) = RS(G)$ and $G$ is either a number or a \emph{tepid} game, such as a position in \textsc{nim} or \textsc{clobber} \cite{AlberNW2007}. In the presence of a hot game, moves in a tepid game are not urgent, and moves in numbers are never urgent. The situation $LS(G) < RS(G)$ does not occur, since then $G$ would be a Zugzwang game, but in Normal-play such games are numbers, and so the relationship reverts to $LS(G) = RS(G)$. The Lawyer's offer should obviously have been accepted if the question had concerned Normal-play, because, under Normal-play, the worst thing imaginable is to run out of options. \end{document}
\begin{document} \title[On Rigid Manifolds of Kodaira Dimension 1]{On Rigid Manifolds of Kodaira Dimension 1} \author{Ingrid Bauer, Christian Gleissner, Julia Kotonski} \thanks{ \textit{2020 Mathematics Subject Classification}: 32G05; 14L30; 14J10; 14J40; 14M25; 14B05.\\ \textit{Keywords}: Rigid complex manifolds, deformation theory, quotient singularities, toric geometry. \\ The second author wants to thank Stephen Coughlan and Andreas Demleitner for interesting and useful conversations about rigid manifolds.} \begin{abstract} We discuss rigid compact complex manifolds of Kodaira dimension 1, arising as product-quotient varieties. First, we show that there is no free rigid action on the product of $(n-1)$ elliptic curves and a curve of genus at least two. Then, we describe the occurring groups, study the quotients and prove that there is always a suitable resolution of singularities preserving rigidity. Finally, we give a complete classification of the cases arising for minimal group orders. \end{abstract} \maketitle \begin{dedication} Dedicated to the memory of Alberto Collino. \end{dedication} \tableofcontents \section{Introduction} A compact complex manifold with no non-trivial deformations is called \emph{rigid}. In the seminal paper \cite{rigidity}, Bauer and Catanese discussed and posed various open questions and problems regarding rigid manifolds with certain geometric properties, among others the relation between rigidity and Kodaira dimension. Clearly, the only rigid curve is $\ensuremath{\mathbb{P}}^1$. In the above paper, it was shown that rigid surfaces occur only in Kodaira dimension $-\infty$ and $2$, see \cite{rigidity}*{Theorem 1.3}. Moreover, the authors showed that examples of rigid manifolds in any dimension $n\geq 3$ and any Kodaira dimension $\kappa= -\infty, 0, 2, 3,\ldots , n$, can easily be constructed. This indicates that in higher dimensions, rigid varieties should be more frequent and should appear with any Kodaira dimension (see also \cite{beau} for $\kappa=0$). The missing case $\kappa=1$ was left as an open problem: \begin{question-non} Do there exist rigid compact complex manifolds of dimension $n \geq 3$ and Kodaira dimension $1$? \end{question-non} In \cite{BG20}, the first two authors gave a positive answer to this question by the following ad-hoc construction: let $F$ be the Fermat cubic curve and $Q$ the Klein quartic. The unique non-abelian group $G$ of order $21$ acts on these curves in a way such that the diagonal action on the product $F^{n-1} \times Q$ leads to a rigid singular product-quotient variety \[ X_n = (F^{n-1} \times Q) /G \qquad \makebox{for} \qquad n\geq 3. \] Moreover, the authors showed that there is a resolution of singularities $\rho \colon \hat{X}_n \to X_n$ such that $\hat{X}_n$ is still rigid and has Kodaira dimension $1$. It seems natural to ask whether it is possible to find \qq{easier} examples, e.g., product-quotients by \emph{free} actions, so that we can avoid to deal with singular varieties. In the present article, we show that, in contrast to all other Kodaira dimensions, this is not possible. Furthermore, it would be desirable to have a classification of all rigid projective manifolds of Kodaira dimension one arising as a suitable resolution of singularities of product-quotient varieties as it was done in \cite{bauergleissner2} for the case of Kodaira dimension $0$. 
Guided by the above example, we are looking for $X$, the quotient of a product of $(n-1)$ elliptic curves and a curve of genus at least two by a diagonal action of a finite group $G$, which is a normal projective variety with isolated canonical quotient singularities, has Kodaira dimension $1$ and satisfies $H^1(X, \Theta_{X}) = 0$. By a result of Schlessinger and by Kuranishi theory, it follows that $X$ is a rigid (singular) variety. Since we are looking for rigid {\em manifolds}, we construct a suitable resolution $\rho \colon \hat{X} \ensuremath{\rightarrow} X$ of singularities and show that $H^1(X, \Theta_{X}) = H^1(\hat{X}, \Theta_{\hat{X}})$. Our first main result is: \begin{theorem-non} Let $G$ be a finite group admitting a rigid diagonal action on a product $$E_1\times \ldots\times E_{n-1}\times C,$$ where the $E_j$ are elliptic curves, $C$ is a curve of genus at least two, and the action is faithful on each factor. Then: \begin{enumerate} \item The elliptic curves $E_j$ are all the same and either isomorphic to $\ensuremath{\mathbb{C}}/\ensuremath{\mathbb{Z}}[i]$ or $\ensuremath{\mathbb{C}}/\ensuremath{\mathbb{Z}}[\ze_3]$. \item The group $G$ is a semi-direct product $A\rtimes \ensuremath{\mathbb{Z}}_d$, where $A$ is abelian and $\ensuremath{\mathbb{Z}}_d$ is the cyclic group of order $d$, with $d=3,4$ or $6$. \item The action of $G$ is never free. The isolated fixed points descend to cyclic quotient singula\-ri\-ties of type \[ \frac{1}{\ell}(1,\ldots, 1) \qquad \rm{or} \qquad \frac{1}{\ell}(1,\ldots, 1,\ell-1), \] where $\ell$ divides $d$. They are canonical if $n\geq d$. \item For each quotient $X=(E^{n-1}\times C)/G$, there exists a resolution of singularities $\rho\colon\hat{X}\to X$ preserving the rigidity. If $n\geq d$, then $\hat{X}$ is a rigid manifold of Kodaira dimension one. \end{enumerate} \end{theorem-non} By the rigidity of the action, the curves $C$ and $E$ in the theorem are realized as triangle curves, i.e., Galois $G$-covers of the projective line $\mathbb P^1$ branched in three points $p_1,p_2$ and $p_3$. To such a cover, we can attach a generating triple for the group $G$ via the monodromy map associated to the cover: the elements are the images $g_i$ of simple loops $\gamma_i$ around $p_i$. They generate the group $G=A\rtimes\ensuremath{\mathbb{Z}}_d$ and fulfill the relation $g_1\cdot g_2\cdot g_3=1$. They will play a crucial role in deriving our classification results, since many geometric properties of triangle curves are encoded in the generating triples. For a more detailed discussion, we refer to Section~\ref{se3} or to the paper \cite{IFG}. It turns out that the geometry of the rigid quotient varieties $(E^{n-1}\times C)/G$ differs slightly depending on whether the generating triple $S_C$ corresponding to the curve $C$ has elements belonging to $A$ or not. We write for short $S_C\subset G\setminus A$ or $S_C\not\subset G\setminus A$, respectively. The first case is special because it can only occur for $d=6$. Here, all rigid quotients are obtained as finite covers of a minimal one. More precisely, we have: \begin{theorem-non} If $S_C\subset G\setminus A$, then there exists a finite holomorphic cover onto a unique minimal rigid quotient \[ f\colon (E^{n-1}\times C)/G\longrightarrow X_{min}:=(E^{n-1}\times C')/\ensuremath{\mathbb{Z}}_6, \] where $C'$ is the hyperelliptic curve \[ C':=\{y^2=x_0^6+x_1^6\}\subset\ensuremath{\mathbb{P}}(1,1,3) \] of genus two and $E=\ensuremath{\mathbb{C}}/\ensuremath{\mathbb{Z}}[\ze_3]$ is the Fermat elliptic curve.
\end{theorem-non} In the general case $S_C\not\subset G\setminus A$, we can classify the minimal examples, i.e., the rigid quotients $(E^{n-1} \times C)/G$, where the group $G=A \rtimes_{\varphi_d} \mathbb Z_d$ has the smallest group order. Among them, there is also the example of \cite{BG20} that we sketched above. \begin{theorem-non} The smallest groups of the form $A\rtimes\ensuremath{\mathbb{Z}}_d$ allowing a rigid diagonal action on $E^{n-1}\times C$ such that the $G$-action on $C$ has a generating triple $S_C\not\subset G\setminus A$ are: \begin{itemize} \item $G_3=\langle s,t ~ \big\vert ~ s^3=t^7=1,~ sts^{-1} =t^4 \ensuremath{\rightarrow}ngle$, \item $G_4=\langle s,t ~ \vert ~ s^4=t^5 =1, ~ sts^{-1}=t^3 \ensuremath{\rightarrow}ngle$, \item $G_6=\langle s,t ~ \big\vert ~ s^6=t^3=1,~ sts^{-1} =t^2 \ensuremath{\rightarrow}ngle$. \end{itemize} If $d=6$, then there is precisely one isomorphism class of such quotients.\\ If $d=3,4$, then there are at most two isomorphism classes of such quotients, which can be identified under complex conjugation.\\ In each case, the curves $C=C_d$ are uniquely determined up to isomorphism: $C_3$ is the Klein quartic, $C_4$ is Bring's curve and $C_6$ is a smooth curve on the Fermat cubic surface with equation \[ x_0x_2+x_1x_3=0. \] \end{theorem-non} Unfortunately, we are not able to decide whether the complex conjugate varieties in the above theorem (for $d=3,4$) are biholomorphic or not. We pose this as an open problem. The paper is organized as follows: in Section \ref{se2}, we briefly discuss rigid group actions on complex manifolds, in particular diagonal actions on products of curves. Section \ref{se3} is about the basic theory of {\em triangle curves}, i.e., Galois covers of the projective line $\mathbb P^1$ branched in three points. Here, we also introduce spherical gene\-rating triples of a finite group $G$, which is a convenient notion, commonly used to describe triangle curves from a group theoretical point of view. In Section \ref{se4}, we provide a detailed analysis of finite groups $G$ affording a rigid diagonal action on a product of $(n-1)$ elliptic curves and a curve $C$ of genus at least two. These groups are always semi-direct products $A\rtimes \ensuremath{\mathbb{Z}}_d$, where $A$ is abelian and $d=3,4$ or $6$. We show that one of the elements in the spherical generating system $S_C$ attached to the triangle curve $C$ must be contained in $A$, except possibly for $d=6$. Here, the case $S_C\subset G\setminus A$ can also occur. This exceptional case is discussed in Section \ref{se5}, while Section \ref{se6} is devoted to the general case $S_C \not\subset G\setminus A$. In these two sections, we complete the proofs of our main theorems, except for the existence of resolutions of the singularities that preserve the rigidity. Such resolutions are constructed in the final Section \ref{sec:resolutions} using methods from toric geometry that generalize the ideas of our previous work in {\cite{BG20}. \section{Rigid Group Actions}\label{se2} \noindent In this section, we recall some notions and results concerning {\it rigid group actions on complex mani\-folds}. \begin{definition} Let $X$ be a compact complex manifold, let $\Theta_X$ be its tangent sheaf and let $G$ be a finite group acting holomorphically on $X$. We say that the action is {\it rigid} if and only if $H^1(X,\Theta_X)^G=0$. 
\end{definition} \begin{rem}\label{firstremark} \ \begin{enumerate} \item To be precise, in the previous definition, the correct wording should be that the action is {\it infinitesimally rigid} because $H^1(X,\Theta_X)^G$ parametrizes the first order infinitesimal $G$-invariant deformations of $X$. But since \qq{infinitesimal rigidity} is the only type of rigidity we are considering in this paper, we say \qq{rigid} by a slight abuse of notation. \item If $G$ acts freely in codimension one, then there are isomorphisms $$H^i(X/G,\Theta_{X/G}) \cong H^i(X,\Theta_X)^G,\qquad\makebox{for all} \qquad i \geq 0.$$ In particular, if the action is rigid, the quotient $X/G$ has no infinitesimal {\it equisingular deformations}, i.e., no deformations preserving the singularities of $X/G$, since they are parameterized by $H^1(X/G,\Theta_{X/G})$. If $\dim(X)\geq 3$ and if the action has only isolated fixed points, then every infinitesimal deformation of the quotient preserves its singularities due to a result of Schlessinger \cite{schlessinger}. Thus, in this case, the quotient is in fact rigid, if the action is rigid. \item Let $C$ be a compact Riemann surface and $G \leq \Aut(C)$ be a finite group. Then, the $G$-action on $C$ is rigid if and only if $C/G \cong \mathbb P^1$ and the quotient map $C \to C/G \cong \mathbb P^1$ is branched in three points. Such curves are called {\em triangle curves} and $f \colon C \rightarrow C/G \cong \mathbb P^1$ is called a {\em triangle cover}. \end{enumerate} \end{rem} Let $G$ be a finite group acting holomorphically on the compact complex manifolds $X_1, \ldots, X_n$, then obviously, the cartesian product $G^n$ acts on the the product $X_1 \times \ldots \times X_n$, hence also the diagonal $\Delta_G \cong G \leq G^n$ by the formula $$ g(x_1, \ldots, x_n):=(g x_1, \ldots, g x_n). $$ We call this action the {\it diagonal} $G$-action. K\"unneth's formula allows us to derive a criterion for the rigidity of the diagonal action (see \cite{bauergleissner2}, Proposition 2.3.): \begin{proposition}\label{rigiddiag} Let $G$ be a finite group acting holomorphically on the compact complex manifolds $X_1, \ldots, X_n$. Then, the diagonal action on $X_1 \times \ldots \times X_n$ is rigid if and only if: \begin{enumerate} \item the $G$ action on each $X_i$ is rigid and \item $\big(H^0(X_i,\Theta_{X_i}) \otimes H^1(X_j,\mathcal O_{X_j})\big)^G=0$ for all $i \neq j$. \end{enumerate} \end{proposition} In this paper, we are mainly interested in the special case where the complex manifolds $X_i$ are compact Riemann surfaces $C$. There, it is convenient to rephrase the rigidity conditions of Proposition~\ref{rigiddiag} in terms of the characters $\chi_C$ of the canonical representation \[ \rho_C\colon G \to \GL(H^0(C,\omega_C)), \quad g \mapsto [\alpha \mapsto (g^{-1})^{\ast}(\alpha)]. \] \begin{corollary}\label{charconds} Let $G$ be a finite group acting holomorphically on the elliptic curves $E_1, \ldots, E_n$ and on the curves $C_{1}, \ldots, C_m$ of genus at least two, then the diagonal action on the product $$E_1 \times \ldots \times E_n \times C_{1} \times \ldots \times C_m$$ is rigid if and only if \begin{enumerate} \item the $G$-action on each $E_i$ and $C_j$ is rigid, \item $\langle \chi_{E_i} \cdot \chi_{C_j}, \chi_{triv}\ensuremath{\rightarrow}ngle =0 $ for all $1 \leq i \leq n$ and $1 \leq j \leq m$, and \item $\chi_{E_i} \cdot \chi_{E_j} \neq \chi_{triv}$ for all $1 \leq i < j \leq n$. 
\end{enumerate} \end{corollary} Recall that $\chi_{triv}$ denotes the trivial character and $\langle -,-\ensuremath{\rightarrow}ngle$ is the inner product in the space of class functions of $G$. \begin{proof} By Serre duality and since $H^0(\Theta_{C_j})=0$, it follows that condition $(2)$ of Proposition \ref{rigiddiag} is equivalent to $$ \big(H^1(\omega_{E_i}^{\otimes 2}) \otimes H^0(\omega_{C_j})\big)^G =0 \ \rm{and} \ \big(H^1(\omega_{E_i}^{\otimes 2}) \otimes H^0(\omega_{E_j})\big)^G =0. $$ By Dolbeault's interpretation of cohomology $$ H^1(\omega_{E_i}^{\otimes 2}) \simeq H_{\overline{\partial}}^{1,1}(E_i,\omega_{E_i}) = \langle (dz \wedge d\overline{z}) \otimes dz \ensuremath{\rightarrow}ngle $$ and the fact that the $G$-action on $dz \wedge d\overline{z}$ is trivial, it follows that the character of the representation $G \to \GL(H^1(\omega_{E_i}^{\otimes 2}))$ is equal to $\chi_{E_i}$. This proves the claim. \end{proof} \section{Group Theoretical Description of Triangle Curves}\label{se3} In what follows, we briefly recall the theory of triangle covers from the group theoretical point of view. As mentioned in Remark~\ref{firstremark}(3), triangle curves are finite Galois covers of the projective line branched on three points $\mathcal B:=\lbrace p_1,p_2,p_3 \rbrace \subset \mathbb P^1$. The fundamental group $\pi_1(\mathbb P^1 \setminus \mathcal B, \infty)$ is generated by three simple loops $ \gamma_1, \gamma_2$ and $\gamma_3$ around the points $p_1, p_2$ and $p_3$, which fulfill a single relation, namely $$ \pi_1(\mathbb P^1 \setminus \mathcal B, \infty ) = \langle \gamma_1, \gamma_2, \gamma_3 ~ | ~ \gamma_1 \cdot \gamma_2 \cdot \gamma_3 =1 \ensuremath{\rightarrow}ngle. $$ \begin{definition} Let $G$ be a finite group. A triple $S=[g_1,g_2,g_3]$ of non-trivial group elements is called a {\em spherical triple of generators} or shortly a \emph{generating triple} of $G$ if \[ G= \langle g_1,g_2,g_3\ensuremath{\rightarrow}ngle\qquad\makebox{and} \qquad g_1\cdot g_2 \cdot g_3=1_G. \] The \emph{type of $S$} is defined as $t(S):=[\ord(g_1),\ord(g_2),\ord(g_3)]$. \end{definition} Observe that a generating triple $S=[g_1,g_2,g_3]$ of a finite group $G$ defines a surjective homomorphism $\eta_S \colon \pi_1(\mathbb P^1 \setminus \mathcal B, \infty ) \to G$, $\gamma_i \mapsto g_i$. By \emph{Riemann's existence theorem}, the homomorphism $\eta_S$ induces a Galois triangle cover $f_S \colon (C_S,q_0) \to (\mathbb P^1,\infty)$ with branch locus $\mathcal B$ together with a unique isomorphism $\psi \colon G \to \Deck(f_S)$ such that the composition \[ (\psi \circ \eta_S) \colon \pi_1(\mathbb P^1 \setminus \mathcal B, \infty ) \to \Deck(f_S) \] is the monodromy map of the associated unramified cover. For details, we refer to the textbook \cite{miranda}*{Section~4}. \begin{rem}\label{triangle}\ \begin{enumerate} \item A point $q\in C_S$ has non-trivial stabilizer if and only if it belongs to one of the fibres $f_S^{-1}(p_i),\,i=1,2,3$. In this case, its stabilizer group has order $n_i=\ord(g_i)$. The genus of $C= C_S$, the order of $G$ and the $n_i$ are related by \emph{Hurwitz's formula}: $$ 2g(C)-2=|G|\left(1- \frac{1}{n_1} - \frac{1}{n_2} - \frac{1}{n_3} \right). $$ For this reason, $t(S)$ is also called the \emph{branching signature} of $f_S$. \item Applying a projectivity of the base $\mathbb P^1$, we can and will assume $p_1=-1, p_2=0$ and $p_3=1$. 
Moreover, we shall use generators $\gamma_i,\,i=1,2,3,$ of $\pi_1(\mathbb P^1 \setminus \mathcal B, \infty )$ such that the complex conjugate $\bar{\gamma}_i$ is the inverse of $\gamma_i$ (for example, those described in \cite{IFG}*{p.~7}). \end{enumerate} \end{rem} It is important to understand when two generating triples give the same triangle cover. \begin{definition} A \emph{twisted covering isomorphism} of two triangle $G$-covers $f_i\colon C_i \to \mathbb P^1$, $i=1,2$, branched on $\mathcal B=\lbrace -1,0,1 \rbrace$, is a pair $(u,v)$ of biholomorphic maps $$ u \colon C_1 \to C_2, \ \ v \colon \mathbb P^1 \to \mathbb P^1 $$ such that $v(\mathcal B) = \mathcal B$ and $v \circ f_1 = f_2 \circ u$. \end{definition} \begin{rem} Let $\psi_i \colon G \to \Deck(f_i)$ be the corresponding $G$-actions; then the existence of a twisted covering isomorphism is equivalent to the existence of an automorphism $\alpha \in \Aut(G)$ and a biholomorphism $u\colon C_1 \to C_2$ such that \[ \psi_2(\alpha(g))\circ u = u \circ \psi_1(g) \quad \makebox{for all} \quad g \in G. \] As we shall see, this holds if and only if the corresponding generating triples belong to the same orbit of a certain group action on the set $\mathcal S(G)$ of all generating triples of $G$:\\ First of all, there is a natural action of the \emph{Artin-Braid} group $$ \mathcal B_3:=\langle \sigma_1,\sigma_2 ~ \vert ~ \sigma_1\sigma_2\sigma_1 =\sigma_2 \sigma_1 \sigma_2 \rangle $$ on $\mathcal S(G)$ defined by: \begin{itemize} \item $\sigma_1([g_1,g_2,g_3]):= [g_1g_2g_1^{-1},g_1,g_3]$, \item $\sigma_2([g_1,g_2,g_3]):= [g_1,g_2g_3g_2^{-1},g_2]$. \end{itemize} This action commutes with the diagonal action of an automorphism $\alpha \in \Aut(G)$ given by $$\alpha ([g_1,g_2,g_3]):=[\alpha( g_1) , \alpha( g_2) ,\alpha( g_3) ].$$ Thus, we get a well-defined action of the group $\Aut(G) \times \mathcal B_3$ on $\mathcal S(G)$. \end{rem} The following result can be found in \cite{IFG}: \begin{theorem}\label{CoveringIsomorphisms} Let $G$ be a finite group and $S,S' \in \mathcal S(G)$ be two generating triples of $G$. Then, the following are equivalent: \begin{enumerate} \item There is a twisted covering isomorphism of the covers $f_S\colon C_S \to \mathbb P^1$ and $f_{S'}\colon C_{S'} \to \mathbb P^1$. \item The generating triples $S$ and $S'$ are in the same $(\Aut(G) \times \mathcal B_3)$-orbit. \end{enumerate} \end{theorem} In the remaining part of the section, we shall apply the above to products of triangle curves. A diagonal rigid action on a product of curves $C_1\times\ldots\times C_n$ such that the action on each curve is faithful corresponds to an $n$-tuple $[S_1,\ldots,S_n]$ of generating triples of $G$. In order to classify the quotients, we need a criterion for when two given tuples of generating triples yield isomorphic quotients.\\ Let $[S_1,\ldots,S_n]$ and $[S'_1,\ldots, S'_n]$ be two tuples of generating triples of a finite group $G$. Under the assumption that all curves have genus at least two and that the diagonal action of $G$ on the product of the curves is free, the argument in \cite{cat00} generalizes to: every biholomorphism $f\colon X\to X'$ between the quotients lifts to a biholomorphism $\hat{f}\colon C_{S_1}\times\ldots\times C_{S_n}\to C_{S'_1}\times\ldots\times C_{S'_n}$ of the form \[ (z_1,\ldots,z_n)\mapsto (u_1(z_{\tau(1)}),\ldots,u_n(z_{\tau(n)}))\qquad \makebox{with}\qquad \tau\in\mathfrak S_n.
\] If the action is not free or if there occur curves of genus at most one, then it is not clear that any isomorphism $f\colon X\to X'$ lifts, and if a lift exists, it is not necessarily of the above form. Nevertheless, the existence of an isomorphism having a lift as above can easily be checked using the generating triples and Theorem~\ref{CoveringIsomorphisms}, which gives us at least a sufficient criterion when two quotients are biholomorphic: \begin{corollary}\label{bihollift} Let $[S_1,\ldots,S_n]$ and $[S'_1,\ldots, S'_n]$ be two tuples of generating triples of a finite group $G$. Then, the following are equivalent: \begin{enumerate} \item There exists a biholomorphism $f\colon X\to X'$ between the quotients which lifts to a biholomorphism $\hat{f}\colon C_{S_1}\times\ldots\times C_{S_n}\to C_{S'_1}\times\ldots\times C_{S'_n}$ of the form \[ (z_1,\ldots,z_n)\mapsto (u_1(z_{\tau(1)}),\ldots,u_n(z_{\tau(n)}))\qquad \makebox{with}\qquad \tau\in\mathfrak S_n. \] \item There exist $\alpha\in\Aut(G)$, $\delta_1,\ldots,\delta_n\in \mathcal B_3$ and $\tau\in \mathfrak S_n$ such that \[ S'_j= \alpha(\delta_j(S_{\tau(j)})) \qquad \makebox{for all}\qquad j=1,\ldots,n. \] \end{enumerate} \end{corollary} Given a generating triple $S=[g_1,g_2,g_3]$ of $G$, the \emph{conjugate} of $S$ is defined as \[ \iota(S):=[g_1^{-1},g_1g_3,g_3^{-1}]. \] In fact, since $\bar{\gamma}_i$ is the inverse of the path $\gamma_i$ (cf. Remark~\ref{triangle}), the conjugate triple yields the complex conjugate curve: \begin{proposition}[\cite{IFG}, Proposition~2.3] For $S\in \mathcal S(G)$, it holds $\overline{C_S}=C_{\iota(S)}$. \end{proposition} \begin{rem} If $G$ is a finite group acting holomorphically on $X$, then, we obtain a natural holomorphic action of $G$ on the complex conjugate variety $\overline{X}$. The complex conjugate of the quotient $X/G$ is the same as the quotient of $\overline{X}$ by the natural $G$-action. \end{rem} \begin{corollary} The complex conjugate of the quotient corresponding to a tuple $[S_1,\ldots,S_n]$ of generating vectors of $G$ equals the quotient corresponding to $[\iota(S_1),\ldots,\iota(S_n)]$. \end{corollary} \section{Rigid actions on curves of genus $g\geq 1$}\label{se4} In this section, we consider finite groups $G$ which admit a rigid action on curves of genus $g= 1$ and $g\geq 2$. For elliptic curves, i.e., in the case $g=1$, it is well-known that only very special groups can act faithfully on them. If we assume furthermore that this group $G$ also admits a faithful rigid action on a curve of genus $g\geq 2$, this will become even more restrictive. Recall that the automorphism group of an elliptic curve $E$ is a semidirect product $$\Aut(E)=E \rtimes \Aut_0(E),$$ where $\Aut_0(E) \cong \mathbb Z_2$, $\mathbb Z_4$ or $\mathbb Z_6$ (cf. \cite{miranda}*{Chapter III Proposition 1.12.}). In \cite{bauergleissner2}*{Proposition~3.6}, the finite subgroups of $\Aut(E)$ allowing a rigid action on $E$ were classified: \begin{proposition}\label{wallgrps} A finite group $G$ admits a faithful rigid holomorphic action on an elliptic curve $E$ if and only if it is isomorphic to a semidirect product $$ A \rtimes_{\varphi_d} \mathbb Z_d, $$ where $d=3,4$ or $6$ and $A \leq \mathbb Z_n^2$ is a subgroup for some $n$, invariant under the action $$ \varphi_d \colon \mathbb Z_d \to \Aut\big(\mathbb Z_n^2\big), $$ defined by: \begin{itemize} \item $\varphi_3(1)(a,b)=(-b,a-b)$, \item $\varphi_4(1)(a,b)=(-b,a)$ or \item $\varphi_6(1)(a,b)=(-b,a+b)$. 
\end{itemize} The possible branching signatures $[n_1,n_2,n_3]$ of the triangle cover $E \to E/G$, the abelianizations of $G$ and the isomorphism types of $E$ are summarised in the table below: {\begin{center} \begin{tabular}{ c c c c } & $ $d=3$ ~ $ & $ ~ $d=4$ ~ $ & $ ~ $d=6$~ $ \\ \hline \hline $[n_1,n_2,n_3]$ & $ \quad [3,3,3] \quad $ & $ \quad [2,4,4] \quad $ & $ \quad [2,3,6] \quad $ \\ $G^{ab}$ & $ \quad \mathbb Z_3 $ or $ \mathbb Z_3^2 \quad $ & $ \quad \mathbb Z_4 $ or $ \mathbb Z_2 \times \mathbb Z_4 \quad $ & $ \quad \mathbb Z_6 \quad $ \\ $E$ & $\mathbb C/\mathbb Z[\zeta_3]$ & $\mathbb C/ \mathbb Z[i]$ & $\mathbb C/ \mathbb Z[\zeta_3]$ \\ \hline \end{tabular} \end{center} } \end{proposition} An immediate geometric consequence of Proposition \ref{wallgrps} is: \begin{corollary}\label{alliso} Let $G$ be a finite group with a rigid diagonal action on a product $ X \times E_1 \times \ldots \times E_n$ which is faithful on each factor, where $X$ is a compact complex manifold and $E_1, \ldots , E_n$ are elliptic curves. Then, the elliptic curves are all isomorphic to $\mathbb C/\mathbb Z[i]$ or they are all isomorphic to $\mathbb C/\mathbb Z[\zeta_3]$. Moreover, the branching signature $[n_1,n_2,n_3]$ is the same for each cover $E_i \to E_i/G$. \end{corollary} We also need to recall the following: \begin{proposition}[\cite{bauergleissner2}, Proposition 4.4.]\label{uniquetrans} Let $G=A \rtimes_{\varphi_d} \mathbb Z_d$ be a finite group and $\psi \colon G \ensuremath{\rightarrow} \Aut(E)$ be a rigid faithful action on an elliptic curve $E$. Then, the translation subgroup of $G$, i.e., $$T_{\psi} :=\lbrace g \in G ~ \big\vert ~ \psi(g) ~\makebox{is a translation} \rbrace,$$ is always equal to $A$, except if $G$ is one of the following: $$ \mathbb Z_3^2, \ \ \mathbb Z_3^2 \rtimes_{\varphi_3} \mathbb Z_3, \ \ \mathbb Z_2 \times \mathbb Z_4 \ \ \rm{or} \ \ \mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4. $$ \end{proposition} These four groups will be called {\em exceptional} in the remaining part of the article. We will see later on that these groups don't admit rigid actions on a curve of genus $g\geq 2$. Next, we present two lemmata concerning the structure of semidirect products $A \rtimes_{\varphi_d} \mathbb Z_d$. The first one concerns the orders of elements not contained in $A$, the second one is a basic result about subgroups of semidirect products $\mathbb Z_n^2 \rtimes_{\varphi_d} \mathbb Z_d$. They turn out to be important for the classification of groups admitting a rigid diagonal action on a product $E^{n-1}\times C$, especially for finding such groups of minimal order. \begin{lemma}[\cite{bauergleissner2}, Lemma 3.9.] \label{orderel} The order of an element of $A \rtimes_{\varphi_d} \mathbb Z_d$ which is not contained in $A$ is equal to the order of its image under the canonical projection $ A \rtimes_{\varphi_d} \mathbb Z_d \to \mathbb Z_d$. \end{lemma} \begin{lemma}\label{cyclicinvar} Let $p$ be a prime number. \begin{enumerate} \item If $d=4$, then there exists a $\varphi_4$-invariant cyclic subgroup of $\mathbb Z_p^2$ of order $p$ if and only if $p=2$ or $p$ is odd and $4 ~ \big\vert ~ (p-1)$. \item If $d=3$ or $6$, then there exists a $\varphi_d$-invariant cyclic subgroup of $\mathbb Z_p^2$ of order $p$ if and only if $p=3$ or $p>3$ and $3 ~ \big\vert ~ (p-1)$. \end{enumerate} \end{lemma} \begin{proof} Assume $d=4$. We show that an invariant subgroup exists if and only if the equation $x^2 =-1$ has a solution in $\mathbb Z_p$. 
Suppose $c$ is a solution, then the subgroup $\mathbb Z_p \simeq \langle (1,c) \ensuremath{\rightarrow}ngle$ is $\varphi_4$-invariant since $\varphi_4(1)(1,c)=(-c,1)=-c(1,c)$. Conversely, let $\mathbb Z_p \simeq\langle (a,b) \ensuremath{\rightarrow}ngle$ be a $\varphi_4$-invariant subgroup. Then, since $a \neq 0$, the element $a^{-1}\cdot (a,b) = (1,a^{-1}b)$ is also a generator for this group and the $\varphi_4$-invariance implies that $a^{-1} b$ is a solution of the equation $x^2=-1$. Now for $p=2$, the equation $x^2 =-1$ has a solution. If $p$ is odd, there is a solution if and only if the Legendre symbol has value one, i.e., $1= \left({\tfrac {-1}{p}}\right)= (-1)^{\tfrac{p-1}{2}}$, where the second equality is given by Euler's criterion. This equality holds if and only if $4 ~ \big\vert ~ (p-1)$. For $d=3$ or $d=6$, the existence of an invariant subgroup is equivalent to the existence of a solution $c \in \mathbb Z_p$ for the equation $x^2-x+1=0$ or $x^2+x+1=0$, respectively. In each case, a solution exists if and only if $p=3$ or $p>3$ and $\left({\tfrac {-3}{p}}\right)=1$. By Euler and by quadratic reciprocity, we obtain that \[ \left({\tfrac {-3}{p}}\right)= (-1)^{\tfrac{p-1}{2}}\cdot \left({\tfrac {3}{p}}\right) =\left(\tfrac {p}{3}\right)=1\iff 3~ \big\vert ~ (p-1).\qedhere \] \end{proof} \begin{rem}\label{CyclicN} If $\mathbb Z_n^2$ has a cyclic $\varphi_d$-invariant subgroup of order $n$, then for each prime divisor $p$ of $n$, the group $\mathbb Z_p^2$ has a cyclic invariant subgroup of order $p$. In particular, $p$ has to fulfill the conditions of lemma \ref{cyclicinvar}. \end{rem} The next proposition analyzes which of the groups of Proposition \ref{wallgrps}, i.e., groups admitting a faithful rigid action on an elliptic curve, also admit a faithful rigid action on a curve of genus at least two. In fact, we describe generating triples yielding triangle curves of genus at least 2, also correcting a flaw in \cite{IFG}*{Proposition~6.5} Assume that $G =A \rtimes_{\varphi_d} \mathbb Z_d$ and let $S = [g_1,g_2,g_3]$ be a generating triple for $G$. By abuse of notation, we shall write $S \not\subset G\setminus A$ if $\{g_1,g_2,g_3\} \not\subset G\setminus A$, and $S \subset G\setminus A$ otherwise. \begin{proposition}\label{genshapes} Assume that $G =A \rtimes_{\varphi_d} \mathbb Z_d$, where $d \in \{3,4,6\}$, admits a faithful rigid action on a curve of genus $g(C) \geq 2$ and let $S$ be a generating triple. \begin{enumerate} \item If $d=6$ and $S \subset G\setminus A$, then $S$ is of type $[3,6,6]$ and of the form $[s^4h,sk,sc]$, where $s$ is a generator of $\mathbb Z_6$ and $h,k,c \in A$. \item If $d=3$ or $4$, then $S \not\subset G\setminus A$. \item If $d$ is arbitrary and $S \not\subset G\setminus A$, then $S$ is of type $[d,d,\ell]$ and $S$ is of the form $[sh,s^{-1}k,c]$, where $s$ is a generator of $\mathbb Z_d$ and $h,k,c \in A$. \end{enumerate} \end{proposition} \begin{proof} We start with proving the third statement. By assumption, one of the entries of $S=[g_1,g_2,g_3]$ is contained in $A$. W.l.o.g., we can assume $g_3$ to be this element. The relation $g_1 \cdot g_2 \cdot g_3 =1$ implies that the equality $\overline{g_1}= \overline{g_2}^{-1}$ holds in the quotient $G/A \simeq \mathbb Z_d$. Thus, $\overline{g_1}$ is a generator of $G/A \simeq \mathbb Z_d$, and the claim follows from Lemma \ref{orderel}.\\ In order to prove the first and second part of the proposition, we assume $S \subset G \setminus A$. 
By Hurwitz's formula, we have the inequality \begin{equation}\label{ineq} 1 - \frac{1}{\ord(g_1)} - \frac{1}{\ord(g_2)} - \frac{1}{\ord(g_3)} > 0. \end{equation} Since the orders of the elements $g_i$ coincide with the orders of their classes in $G/A \simeq \mathbb Z_d$, the triple $[\overline{g_1},\overline{g_2},\overline{g_3}]$ is a generating triple for $G/A \simeq \mathbb Z_d$ of the same type as the type of $S$. In particular, the elements $\overline{g_i}$ are non-trivial and their orders divide $d$. This is impossible for $d=3$ by the above inequality \ref{ineq}. If $d=4$, then the only possible type fulfilling inequality \ref{ineq} is $[4,4,4]$. We can exclude this case since $G/A \simeq \mathbb Z_4$ has no generating triple of this type. If $d=6$, the list of possible types is: $[ 2, 6, 6 ]$, $[ 3, 3, 6 ]$, $[ 3, 6, 6 ]$ and $[ 6, 6, 6 ]$. All but $[ 3, 6, 6 ]$ can be excluded because $\mathbb Z_6$ does not have a generating triple of one of the other types. \end{proof} As a direct consequence, we can exclude all the exceptional groups: \begin{corollary} None of the exceptional groups (cf. Proposition \ref{uniquetrans}) admits a rigid action on a curve of genus $g \geq 2$. \end{corollary} \begin{proof} No exceptional group has a generating triple of type $[d,d,\ell]$ such that $1-\tfrac{2}{d}-\tfrac{1}{\ell} >0$. \end{proof} With the further knowledge about the groups and its generating triples, we can investigate the rigidity of a diagonal action on a product $E^{n-1}\times C$ in more detail. It will turn out that second condition of Corollary~\ref{charconds} is almost always guaranteed. \begin{general} From now on, when we talk about a \emph{diagonal action} of a group $G$ on a product $E^{n-1}\times C$, we implicitly assume that the action is faithful on each factor. \end{general} \begin{rem} Assume that $G=A \rtimes_{\varphi_d} \mathbb Z_d$ admits a rigid diagonal action on $E^{n-1} \times C $, where $g(C)\geq 2$. Then, the canonical representation \[ \rho_E\colon G \to \GL\big(H^0(E,\omega_E)\big) \] must be the same for each copy of $E$. Using Proposition~\ref{uniquetrans}, its character $\chi_{E}$ is the composition of the quotient map $G \to G/A \simeq \mathbb Z_d$ and one of the characters $\chi_{\zeta_d}$ or $\chi_{\zeta_d^{-1}}$ . By abuse of notation, we identify $\chi_E$ with the corresponding character $\chi_{\zeta_d}$ or $\chi_{\zeta_d^{-1}}$, respectively. \end{rem} As a corollary, we can prove that in almost all cases, condition (2) of Corollary~\ref{charconds} is automatically fulfilled: \begin{corollary}\label{allbutonerigid} Assume that $A \rtimes_{\varphi_d} \mathbb Z_d$ admits faithful rigid actions on an elliptic curve $E$ and on $C$, where $C$ has genus $\geq 2$. Then, $\langle \chi_{E} \cdot \chi_{C}, \chi_{triv}\ensuremath{\rightarrow}ngle =0 $, except for case $(1)$ in Proposition \ref{genshapes}. In this case, the generating triple corresponding to the action on $C$ is of the form $S_C=[s^4h,sk,sc]$, and then, the action on the product is rigid if and only if $\chi_E(s) = \zeta_6^{-1}$. \end{corollary} \begin{proof} Assume that we are not in case (1) of Proposition \ref{genshapes}. Then, the generating triple $S_C$ giving the $G$-action on $C$ is of type $[d,d,\ell]$ and of the form $[sh,s^{-1}k,c]$, where $s$ is a generator of $\mathbb{Z}_d$ and $h,k,c \in A$. 
Observe that $\langle \chi_{E} \cdot \chi_{C}, \chi_{triv}\ensuremath{\rightarrow}ngle = \langle \overline{\chi_{E}} ,\chi_{C}\ensuremath{\rightarrow}ngle$ and that moreover, by the formula of Chevalley-Weil (cf. \cite[Theorem~2.8]{FG16}), we get: $$ \langle \overline{\chi_{E}} ,\chi_{C}\ensuremath{\rightarrow}ngle = -1 + \tfrac{1}{d} + \tfrac{d-1}{d} =0. $$ Assume now that $d=6$ and the generating triple $S_C$ is contained in $G\setminus A$. Then, $S_C$ is of the form $[s^4h,sk,sc]$. If $\chi_E(s) = \zeta_6$, then $$ \langle \overline{\chi_{E}} ,\chi_{C}\ensuremath{\rightarrow}ngle = \langle\chi_{\zeta_6^{-1}}, \chi_C\ensuremath{\rightarrow}ngle =-1 + \tfrac{1}{3} + \tfrac{5}{6} + \tfrac{5}{6} =1, $$ but if $\chi_E(s) = \zeta_6^{-1}$, then $\langle \overline{\chi_{E}} ,\chi_{C}\ensuremath{\rightarrow}ngle =0$. \end{proof} Using Proposition \ref{genshapes}, we can work out the possible types of singularities of our quotients and in particular see that a rigid diagonal action on $E^{n-1} \times C $ is never free. \begin{corollary}\label{cor:sing} Assume that $G=A \rtimes_{\varphi_d} \mathbb Z_d$ admits a rigid diagonal action on $E^{n-1} \times C $. Then, the quotient $X_n:=(E^{n-1} \times C)/G$ is singular and the singularities are of type $$ \frac{1}{\ell}(1,\ldots, 1) \quad \rm{or} \quad \frac{1}{\ell}(1,\ldots, 1,\ell-1), $$ where $\ell$ is a divisor of $d$. In particular, $X_n$ has canonical singularities if $n \geq d$. \end{corollary} \begin{proof} Since $S_C$ always contains at least one element that is not contained in $A$, we always find an element of $G$ having fixed points on $E$ and on $C$. Thus, the action is not free and $X_n$ must be singular.\\ Let $S_C =[sh,s^{-1}k,c] \not\subset G\setminus A$ and $p=(p_1, \ldots, p_n)\in E^{n-1} \times C$ be a point with non-trivial stabilizer and $s^m a$ be a generator, where $a \in A$. Note that $m\neq 0$, as $s^m a$ is not a translation. By rigidity, the linear part of the action on $E^{n-1}$ is the same on each copy of $E$, and we may assume that $s$ acts on $E$ by multiplication with $\zeta_d $. Since $s^m a$ is contained in $\Stab(p_n)$ and not in $A$, the stabilizer $\Stab(p_n)$ is a conjugate of $\langle sh\ensuremath{\rightarrow}ngle$ or $\langle s^{-1}k\ensuremath{\rightarrow}ngle$. In the first case, the action of $s^ma$ around $p$ is $\diag(\zeta_d^m, \ldots, \zeta_d^m,\zeta_d^m)$ and in the second case $\diag(\zeta_d^m, \ldots, \zeta_d^m,\zeta_d^{-m})$. We conclude that $p$ descends to a singularity of type $$ \frac{1}{\ell}(1,\ldots, 1) \quad \rm{or} \quad \frac{1}{\ell}(1,\ldots, 1,\ell-1), $$ where $\ell=\ord(s^ma)=\ord(\zeta_d^m)$ divides $d$. The criterion of Reid-Shepherd-Barron-Tai \cite{R87} tells us that these singularities are canonical if $n\geq d$.\\ The proof in the case $S_C\subset G\setminus A$ is similar. \end{proof} Since we are interested in rigid \emph{manifolds}, we have to provide resolutions of the rigid singular quotients $(E^{n-1}\times C)/G=X$ preserving the rigidity. \begin{theorem}\label{theo:Resolutions} Any quotient $X_n:=(E^{n-1}\times C)/G$ by a rigid diagonal action of $G=A\rtimes_{\varphi_d}\ensuremath{\mathbb{Z}}_d$ has a resolution of singularities $\rho\colon\hat{X}_n\to X$ such that $H^1(\hat{X}_n,\Theta_{\hat{X}_n})=0$.\\ If $n\geq d$, $\hat{X}_n$ is a rigid manifold of Kodaira dimension one. \end{theorem} By Corollary~\ref{cor:sing}, the singularites of the quotients are isolated, hence, the construction of such resolutions is a local problem. 
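For the reader's convenience, and consistently with the local computations in the proof of Corollary~\ref{cor:sing}, we recall the standard notation used for these germs: $\frac{1}{\ell}(a_1,\ldots,a_n)$ denotes the cyclic quotient singularity $\mathbb{C}^n/\mu_\ell$, where a fixed primitive $\ell$-th root of unity $\zeta_\ell$ acts diagonally by
\[
(z_1,\ldots,z_n)\;\longmapsto\;(\zeta_\ell^{a_1}z_1,\ldots,\zeta_\ell^{a_n}z_n).
\]
In particular, the types $\frac{1}{\ell}(1,\ldots, 1)$ and $\frac{1}{\ell}(1,\ldots, 1,\ell-1)$ above correspond to the local actions $\diag(\zeta_d^m,\ldots,\zeta_d^m)$ and $\diag(\zeta_d^m,\ldots,\zeta_d^m,\zeta_d^{-m})$ appearing in that proof.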
Here, methods from toric geometry can be applied because germs of cyclic quotient singularities are represented by affine toric varieties. We postpone the proof of the theorem to section~\ref{sec:resolutions}. \section{The case $S_C \subset G \setminus A$}\label{se5} The next two sections are devoted to analyze the rigid quotients of products $E^{n-1}\times C$ in more detail. We start with the investigation of case (1) of Proposition \ref{genshapes}, i.e., $d=6$ and the entries of the generating triple $S_C$ for the $G$-action on $C$ are all contained in $G\setminus A$. It turns out that under this assumption, all the possible rigid quotients $(E^{n-1} \times C)/G$ are obtained as covers from a \emph{minimal one}, where $G=\mathbb Z_6$ and the curve $C$ has genus two. This minimal quotient is described as follows: consider the hyperelliptic curve $$C':=\lbrace y^2=x_0^6+x_1^6\rbrace \subset \mathbb P(1,1,3)$$ of genus $2$ and consider Fermat's elliptic curve $E=\ensuremath{\mathbb{C}}/\ensuremath{\mathbb{Z}}[\ze_3]$ together with the rigid actions of $\ensuremath{\mathbb{Z}}_6$: \begin{equation}\label{minimal} s(x_0:x_1:y)= (x_0:\zeta_6 x_1: y) \qquad \rm{and} \qquad s(z) = \zeta_6 z. \end{equation} Then, the induced diagonal action on $E^{n-1} \times C'$ is rigid and we denote the quotient by $X_{min}$. More precisely, it holds: \begin{theorem}\label{quot366} Assume that $E^{n-1} \times C$ admits a rigid diagonal action of the group $G=A \rtimes_{\varphi_6} \mathbb Z_6$ such that the $G$-action on $C$ has a generating triple $S_C \subset G\setminus A$. Then: \begin{enumerate} \item The group $A$ acts freely on $C$ and the quotient $C':=C/A$ is isomorphic to the hyperelliptic curve \[ \lbrace y^2=x_0^6+x_1^6\rbrace \subset \mathbb P(1,1,3). \] The $G/A \simeq \mathbb Z_6$-action on $C'$ is given by $s(x_0:x_1:y)= (x_0:\zeta_6 x_1: y)$, up to the automorphism of $\mathbb Z_6$ exchanging $s$ and $s^{-1}$. \item The elliptic curve $E/A$ is isomorphic to $E$ and the induced $G/A \simeq \mathbb Z_6$-action on $ E^{n-1} \times C'$ is rigid and compatible with the $G$-action on $E^{n-1} \times C$. There is a finite holomorphic cover \[ f\colon (E^{n-1} \times C)/G \to (E^{n-1} \times C')/\mathbb Z_6 = X_{min}. \] of degree $\big\vert A\big\vert^{n-1}$. \item The singularities of $X_{min}=(E^{n-1} \times C')/\mathbb Z_6$ are: \begin{center} {\scriptsize \renewcommand{1.5}{2.0} \begin{tabular}{| c | c | c | c | c | } \hline type & $\frac{1}{2}(1, \ldots, 1)$ & $\frac{1}{3}(1, \ldots, 1)$ & $\frac{1}{3}(1, \ldots, 1,2)$ & $\frac{1}{6}(1, \ldots, 1)$ \\ \hline number & $\frac{2}{3}(4^{n-1}-1)$ & $3^{n-1}$ & $3^{n-1}-1$ & $2$ \\ \hline \end{tabular} } \end{center} \end{enumerate} \end{theorem} \begin{proof} According to Proposition \ref{genshapes}, a generating triple $S_C$ for the $G$-action on $C$ is of the form $[s^4h,sk,sc]$. Hence, the only group elements that can have fixed points on $C$ are contained in $G \setminus A$. This shows that $A$ acts freely on $C$. Hurwitz's formula tells us that the genus of $C'=C/A$ is two, and thus, the curve is hyperelliptic. Before we derive the equation of $C'$, we prove the second part of the theorem. Clearly, $E/A$ is an elliptic curve, which is isomorphic to $E$. Moreover, the generating triple for the action of $G/A\simeq \mathbb Z_6$ on $C'$ is the image of $S_C$ under the quotient map $G \to \mathbb Z_6$ and therefore equal to $[s^4,s,s]$. 
Hence, by Corollary~\ref{allbutonerigid}, the action of $\mathbb Z_6$ on $E^{n-1} \times C'$ is also rigid since $g(C')=2$ and $\chi_E(s)=\zeta_6^{-1}$.\\ The degree of the induced cover $f$ equals $\lvert A\rvert^{n-1}$ as it fits in the following commutative diagram \[ \begin{tikzcd} E^{n-1}\times C \arrow{r}\arrow{d} & (E/A)^{n-1}\times (C/A)\simeq E^{n-1}\times C'\arrow{d}\\ (E^{n-1}\times C)/G \arrow{r}{f} & (E^{n-1}\times C')/(G/A) \end{tikzcd} \] Next, we determine the equation of $C'\subset \mathbb P(1,1,3)$, which must be of the form $y^2=f_6(x_0,x_1)$, where $f_6$ is homogenous of degree six. Note that $G/A\simeq \mathbb Z_6$ does not contain the hyperelliptic involution, otherwise the cover $\pi' \colon C'\to C'/\mathbb Z_6 \simeq \mathbb P^1$ would factor through the hyperelliptic cover \[ \Xi \colon C' \to \mathbb P^1, \qquad (x_0:x_1:y) \mapsto (x_0:x_1). \] This is impossible because $\Xi$ has $6$ ramification points and $\pi'$ has only $3$. Hence, $s$ descends to an automorphism $\widehat{s}$ of $\mathbb P^1$ of order six such that $\Xi \circ s = \widehat{s} \circ \Xi$. Up to a change of coordinates, $\widehat{s}$ is defined by \[ \widehat{s}(x_0:x_1)= (x_0:\zeta_6^{\pm 1} x_1). \] This forces $f_6$ to be $f_6(x_0,x_1)=x_0^6+x_1^6$ up to multiplication of $x_0$ and $x_1$ by non-zero scalars.\\ The action of $s$ on $C'$ must be one of the following: \[ s(x_0:x_1:y)= (x_0:\zeta_6^{\pm 1} x_1: \pm y). \] Up to permutation of $x_0$ and $x_1$, only the two possibilities \[ s(x_0:x_1:y)= (x_0:\zeta_6 x_1:y) \qquad \makebox{and} \qquad s(x_0:x_1:y)= (x_0:\zeta_6^{-1} x_1:y) \] remain. We claim that only the first choice for $s$ yields a rigid action on the product $E^{n-1}\times C'$.\\ By Corollary~\ref{charconds}, the action is rigid if and only if $\langle \chi_{C'},\overline{\chi}_E\ensuremath{\rightarrow}ngle=0$. The character of the canonical representation of the curve $C'$ is easy to determine: on the open affine $x_0=1$, the curve is defined by $y^2=x^6+1$ and a basis of $H^0(C', \omega_{C'})$ is given by the 1-forms $\frac {dx}{y}$ and $x \frac{dx}{y}$ for $y \neq 0$. The pullback of this 1-forms with the inverse of $s$ yields the character $\chi_{C'}=\chi_{\zeta_6^{-1}}+ \chi_{\zeta_6^4}$ for the first choice of the action and $\chi_{C'}=\chi_{\zeta_6}+ \chi_{\zeta_6^2}$ for the second. Since $\overline{\chi}_E(s)=\zeta_6$, the claim follows. Finally, we have to determine the singular points of the minimal quotient $X_{min}=(E^{n-1} \times C')/\mathbb Z_6$. For this, we need to know the points on $C'$ and on $E$ with non-trivial stabilizer and the action of the generator of the stabilizer-group in local coordinates. 
The table below gives an overview: \begin{center} {\footnotesize { \renewcommand{1.5}{1.5} \setlength{\tabcolsep}{4pt} \begin{tabular}{lclc|} \begin{tabular}{|c |} \hline point $q$ \\ \hline generator of $\Stab(q)$ \\ \hline local action \\ \hline \end{tabular} & \begin{tabular}{|c | c | c |} \hline $(1:0:\pm 1) $ & $(0:1:\pm 1)$ \\ \hline $ s $ & $ s^2 $ \\ \hline $x \mapsto \zeta_6 x$ & $x \mapsto \zeta_6^4 x$ \\ \hline \end{tabular} & \begin{tabular}{|c | c | c | } \hline $ 0 $ & $ \pm\frac{2+\zeta_3}{3} $ & $ \frac{1}{2}, \frac{\zeta_3}{2}, \frac{1+\zeta_3}{2}$ \\ \hline $ s $ & $ s^2 $ & $ s^3 $ \\ \hline $x \mapsto \zeta_6 x$ & $x \mapsto \zeta_6^2 x$ & $x \mapsto -x$ \\ \hline \end{tabular} \end{tabular} }} \end{center} \noindent The singularities of type $\tfrac{1}{2}(1,\ldots, 1)$ are the images of the points \[ (z_1, \ldots,z_{n-1},p) \in E^{n-1} \times C' \] having a stabilizer group of order two. Over each of these singularities, there are three points in the fibre. Using the table, we see that these points have coordinates \[ p \in \lbrace (1:0:\pm 1) \rbrace , \qquad z_i \in \left\lbrace 0 , \tfrac{1}{2}, \tfrac{\zeta_3}{2}, \tfrac{1+\zeta_3}{2}\right\rbrace, \] where at least one $z_i \neq 0$. The number of these points is $2 (4^{n-1} -1)$, which implies that there are $\tfrac{2}{3} (4^{n-1} -1)$ singularities of type $\tfrac{1}{2}(1,\ldots, 1)$. The points on the product $E^{n-1} \times C'$ with stabilizer of order $6$ are $\left(0, \ldots,0,(1:0:\pm 1)\right)$. They descend to $2$ singular points of type $\frac{1}{6}(1, \ldots,1)$. For the points $(z_1, \ldots,,z_{n-1},p)$ with stabilizer of order $3$, we distinguish the cases $|\Stab(p)|=3$ and $|\Stab(p)|=6$. In the first case, the coordinates of these points are \[ p \in \lbrace (0:1:\pm 1) \rbrace , \qquad z_i \in \left\lbrace 0, \pm\tfrac{2+\zeta_3}{3} \right\rbrace. \] These $2 \cdot 3^{n-1}$ points descend to $3^{n-1}$ singularities of type $\tfrac{1}{3}(1, \ldots, 1,2)$. In the second case, where $|\Stab(p)|=6$, we have two choices for $p$ and $3$ choices for $z_i$, where at least one of the $z_i$ has stabilizer of order $3$, i.e., at least one $z_i \neq 0$. There are $2(3^{n-1}-1)$ of these points. They descend to $3^{n-1}-1$ singularities of type $\tfrac{1}{3}(1, \ldots, 1)$. \end{proof} \section{The case $S_C \not\subset G \setminus A$}\label{se6} If the action on $C$ has a generating triple $S_C \not\subset G \setminus A$, then Theorem~\ref{quot366} cannot hold without modification: \begin{rem} Assume that $C$ admits a rigid action of the group $G=A \rtimes_{\varphi_d} \mathbb Z_d$, with a generating triple $S_C\not\subset G\setminus A$ of type $[d,d,\ell]$, then $C/A$ is the projective line. \end{rem} \begin{proof} Indeed, by Hurwitz's formula \[ 2g(C/A) -2 = d\left(-2 + 2 -\tfrac{2}{d} \right)= -2. \qedhere \] \end{proof} However, we can mod out proper subgroups of $A$ to produce groups of smaller order. Under some assumptions, the quotient $C/A$ still has genus at least two, and we obtain a similar result as in Theorem~\ref{quot366}: \begin{proposition} Assume that $E^{n-1} \times C$ admits a rigid diagonal action of a group $G=A \rtimes_{\varphi_d} \mathbb Z_d$, such that the $G$-action on $C$ has a generating triple of type $[d,d,\ell]$ of the form $[sh,s^{-1}k,c]$. Let $A' \lneq A$ be a proper $\varphi_d$-invariant subgroup. 
Then: \begin{enumerate} \item The group $G'=A/A' \rtimes_{\varphi_d} \mathbb Z_d$ has a generating triple of type $[d,d,\ell']$, where $\ell'$ is the order of $\overline{c}$ in $A/A'$. \item The quotient curve $C':=C/A'$ has genus at least two if and only if \[ 1- \tfrac{2}{d} - \tfrac{1}{\ell'} >0. \] \item The quotient $E/A'$ is isomorphic to $E$ and the induced $G'$-action on $E^{n-1} \times C'$ is rigid and compatible with the $G$-action on $E^{n-1} \times C$. There is a finite holomorphic cover \[ (E^{n-1} \times C)/G \to (E^{n-1} \times C')/G'. \] \end{enumerate} \end{proposition} \begin{proof} Let $\pi \colon G \to G'$ be the quotient map, then $$[\pi(sh),\pi(s^{-1}k),\pi(c)]= [s\overline{h}, s^{-1}\overline{k},\overline{c}]$$ is a generating triple of $G'$. Note that, since $A'\lneq A$, the class $\overline{c}$ is non-trivial. It corresponds to the induced $G'$-action on $C'=C/A'$. Using Lemma~\ref{orderel}, we see that the type of this generating triple is $[d,d,\ell']$, where $\ell'=\ord(\overline{c})$. \\ The second statement follows immediately from Hurwitz's formula, and for the last one, it suffices to note that $E/A' \simeq E$. The rigidity of the induced action is clear by Corollary~\ref{allbutonerigid}. \end{proof} In analogy to Theorem~\ref{quot366}, we want to find for each $d=3,4, 6$ the minimal examples, i.e., the rigid quotients $(E^{n-1} \times C)/G$ where the group $G=A \rtimes_{\varphi_d} \mathbb Z_d$ has the smallest group order. \begin{lemma}\label{le:MinimalGroupsSnotinA} The following three groups are the smallest groups of the form $A \rtimes_{\varphi_d} \mathbb Z_d$ which allow a faithful rigid and holomorphic action on a smooth curve $C$ of genus $g \geq 2$, such that the generating triple is not contained in $G \setminus A$: \begin{itemize} \item $G_3=\langle s,t ~ \big\vert ~ s^3=t^7=1,~ sts^{-1} =t^4 \ensuremath{\rightarrow}ngle$, \item $G_4=\langle s,t ~ \vert ~ s^4=t^5 =1, ~ sts^{-1}=t^3 \ensuremath{\rightarrow}ngle$, \item $G_6=\langle s,t ~ \big\vert ~ s^6=t^3=1,~ sts^{-1} =t^2 \ensuremath{\rightarrow}ngle$. \end{itemize} \end{lemma} \begin{proof} We only treat the case $d=3$ because the arguments for $d=4$ and $6$ are similar. Note that $G_3$ admits the generating triple $[s,s^2t,t^6]$ of type $[3,3,7]$. Hence, it is enough to exclude the groups $A \rtimes_{\varphi_3} \mathbb Z_3$ with $|A| \leq 6$. By Hurwitz's formula: \[ 0 < 1-\tfrac{2}{3} - \tfrac{1}{\ell}, \] which implies $4 \leq \ell$, i.e., $A$ has an element of order at least four. Since $A$ has at most six elements, it must be cyclic of order $n=4$, $5$ or $6$. This is impossible by Remark \ref{CyclicN} because each $n$ has a prime divisor $p \neq 3$ such that $3 \nmid (p-1)$. \end{proof} \begin{proposition}\label{minimalex} For each of the groups $G_d$, there exists up to isomorphism a unique faithful rigid and holomorphic action on a smooth curve $C_d$. The curves $C_d$ are canonically embedded. 
Their equations in $\mathbb P^{g-1}$ and the actions by projective transformations are given in the table below: \begin{center} {\footnotesize \begin{tabular}{| l | l | } \hline & \\ d=3 & $G_3= \langle s,t ~ \big\vert ~ s^3=t^7=1,~ sts^{-1} =t^4 \ensuremath{\rightarrow}ngle$ \\ & \\ & $C_3= \lbrace x_0^3x_1+x_1^3x_2+x_0x_2^3=0 \rbrace \subset \mathbb P^2$ \\ & \\ & $ s \mapsto \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix},$ \qquad $ t \mapsto \begin{pmatrix} \zeta_7^4 & 0 & 0 \\ 0 & \zeta_7^2 & 0 \\ 0 & 0 & \zeta_7 \end{pmatrix}$ \\ & \\ \hline & \\ d=4 & $G_4 =\langle s,t ~ \vert ~ s^4=t^5 =1,~ sts^{-1}=t^3 \ensuremath{\rightarrow}ngle$ \\ & \\ & $C_4=\lbrace x_0x_3+x_1x_2 = 0, ~~ x_0^2x_2-x_0x_1^2+x_1x_3^2-x_2^2x_3 = 0\rbrace \subset \mathbb P^3$ \\ & \\ & $s \mapsto \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix},$ \qquad $t \mapsto \begin{pmatrix} \zeta_5 & 0 & 0 & 0 \\ 0 & \zeta_5^2 & 0 & 0 \\ 0 & 0 & \zeta_5^3 & 0 \\ 0 & 0 & 0 & \zeta_5^4 \end{pmatrix}$ \\ & \\ \hline & \\ d=6 & $G_6 =\langle s,t ~ \big\vert ~ s^6=t^3=1,~ sts^{-1} =t^2 \ensuremath{\rightarrow}ngle$ \\ & \\ & $C_6=\lbrace x_1x_3+x_0x_2=0, ~~ x_0^3+x_1^3+x_2^3+x_3^3=0\rbrace \subset \mathbb P^3$ \\ & \\ & $s \mapsto \begin{pmatrix} 0 & \zeta_3 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \zeta_3^2 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \qquad t \mapsto \begin{pmatrix} \zeta_3 & 0 & 0 & 0\\ 0 & \zeta_3^2 & 0 & 0 \\ 0 & 0 & \zeta_3^2 & 0 \\ 0 & 0 & 0 & \zeta_3 \end{pmatrix}.$ \\ & \\ \hline \end{tabular} } \end{center} \end{proposition} \begin{rem} The curve $C_3$ is the \emph{Klein Quartic}, a curve of genus three with automorphism group $\PSL(2,\mathbb F_7)$, a finite simple group of order $168$. It is the largest group of automorphisms for a genus three curve (cf. \cite{Klein}). \\ The curve $C_4$ is isomorphic to \emph{Bring's curve} in $\mathbb P^4$, a smooth curve of genus $4$ on the \emph{Clebsch diagonal cubic surface} defined by the equations: \[ x_0^3+ \ldots + x_4^3= x_0^2+ \ldots + x_4^2=x_0+ \ldots + x_4=0. \] A biholomorphic map from $C_4$ to Bring's curve is induced by \[ \begin{pmatrix} -1 & 1 & 1 & -1 \\ -\zeta_5& \zeta_5^2 & \zeta_5^3 & -\zeta_5^4 \\ -\zeta_5^2 & \zeta_5^4 & \zeta_5 & -\zeta_5^3 \\ -\zeta_5^3 & \zeta_5 & \zeta_5^4 & -\zeta_5^2 \\ -\zeta_5^4 & \zeta_5^3 & \zeta_5^2 & -\zeta_5 \end{pmatrix} \] cf. \cite[proof of Theorem 9.5.8]{Dol12}. Note that the full automorphism group of Bring's curve is $\mathfrak S_5$, acting in the obvious way. It is the largest group of automorphisms for a genus $4$ curve (cf. \cite{wiman}).\\ A detailed description of the geometry of this curve can be found in the recent preprint \cite{disney}. \end{rem} \begin{proof}[Proof of Proposition~\ref{minimalex}] First, we show that the curves are canonical. Suppose that one of the groups $G_d$ acts on a hyperelliptic curve. Then, since the center $Z(G_d)$ contains no involution, the group $G_d$ embeds in $\PGL(2,\mathbb C)$. This is impossible because the finite subgroups of $\PGL(2,\mathbb C)$ are known to be cyclic, dihedral, $\mathfrak A_4$, $\mathfrak S_4$ and $\mathfrak A_5$.\\ Next, we shall derive equations for the curves and the actions. Here, we only treat $G_4$. The computations in case of the other two groups are similar. Since $C_4 \subset \mathbb P^{g-1}$ is canonically embedded, the group $G_4$ acts by projective transformations. Note that $g(C_4)=4$ as the type of the generating triple of $G_4$ equals $[4,4,5]$. 
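Indeed, a generating triple of type $[4,4,5]$ corresponds to a branched covering $C_4 \to C_4/G_4 \simeq \mathbb P^1$ ramified over three points with branching indices $4$, $4$ and $5$, so Hurwitz's formula yields \[ 2g(C_4)-2 = |G_4|\left(-2 + \left(1-\tfrac{1}{4}\right) + \left(1-\tfrac{1}{4}\right) + \left(1-\tfrac{1}{5}\right)\right) = 20\cdot \tfrac{3}{10} = 6, \] i.e., $g(C_4)=4$.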
The group $G_4$ has four representations of degree $1$, which are obtained from $G_4/\langle t \ensuremath{\rightarrow}ngle \simeq \mathbb Z_4$ by inflation, and one irreducible representation of degree $4$ defined by: \[ s \mapsto \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix} \qquad \makebox{and} \qquad t \mapsto \begin{pmatrix} \zeta_5 & 0 & 0 & 0 \\ 0 & \zeta_5^2 & 0 & 0 \\ 0 & 0 & \zeta_5^3 & 0 \\ 0 & 0 & 0 & \zeta_5^4 \end{pmatrix}. \] The canonical representation $G_4 \to GL(H^0(\omega_{C_4}))$ is therefore either the sum of four $1$-dimensional representations or equal to the irreducible representation of degree $4$. The first possibility can be ruled out since the canonical representation is faithful and $G_4$ is not abelian. This shows that the $G_4$-action on $C_4$ is given by the above matrices because the vector spaces $H^0(\omega_{C_4})$ and $\mathbb C[x_0, \ldots, x_3]_1$ are in natural bijection. The elements of order four in $G_4$ are conjugated either to $s$ or to the inverse $s^{-1}$. As a projective transformation, $s$ has four fixed points: \[ p_1:=(1:i:-i:-1),\ p_2:=(1:-i:i:-1), \ p_3:=(1:1:1:1),\ p_4:=(1:-1:-1:1) \] Since $g=g(C_4)=4$, the curve $C_4$ is a complete intersection of a quadric $Q$ and a cubic $K$ (cf. \cite[p.~258]{GriffH}). To find $Q$ and $K$, we use Noether's classical theorem (cf. \cite[p.~253]{GriffH}), which says that the following sequence is exact for all $k\geq 1$: \begin{equation}\label{noetherseq} 0 \to I(C_4)_k \to \mathbb C[x_0, \ldots, x_{3}]_k \to H^0(C,\omega_C^{\otimes k}) \to 0. \end{equation} The vector space $I(C_4)_2$ is $1$-dimensional and generated by $Q$, whereas $I(C_4)_3$ is $5$-dimensional and generated by $K$ together with the four reducible cubics $x_0Q, \ldots, x_3 Q$. Since the exact sequence \eqref{noetherseq} consists of compatible $G_4$-representations, they remain exact when we take the invariant parts. In particular, for $k=2$, we obtain the equality \[ I(C_4)_2^{G_4} = \mathbb C[x_0, \ldots, x_{3}]_2^{G_4} \] by the rigidity of the action. Since $\mathbb C[x_0, \ldots, x_{3}]_2$ is the symmetric square of the canonical representation, we easily find that $\mathbb C[x_0, \ldots, x_{3}]_2^{G_4}$ is one-dimensional, i.e., the quadric $Q$ is $G_4$-invariant. In particular, it is invariant under the action of $t$ which is diagonal. This implies that $Q$ is a linear combination of the monomials $x_0x_3$ and $x_1x_2$. These monomials are swapped by $s$, which forces the quadric to be \[ Q=x_0x_3 + x_1x_2, \] up to a non-zero scalar. We immediately see that $p_3$ and $p_4$ do not belong to $Q$. Hence, the two $G$-orbits of points with stabilizer of order $4$ are represented by $p_1$ and $p_2$.\\ Next, we consider the sequence~\eqref{noetherseq} for $k=3$. As $Q$ is invariant under $G_4$, the subspace of $I_3$ which is spanned by the products $x_0 Q, \ldots , x_3 Q$ is the irreducible $4$-dimensional representation of $G$. The cubic $K$, on the other hand, yields a one-dimensional representation. Since all one-dimensional representations of $G_4$ are obtained from $G_4/\langle t \ensuremath{\rightarrow}ngle$, they are trivial on $\langle t \ensuremath{\rightarrow}ngle$. In particular, $K$ is invariant under the action of $t$, and therefore a linear combination of the $t$-invariant monomials of degree three: \[ x_0^2x_2, \quad x_0x_1^2, \quad x_1x_3^2 \quad \makebox{and} \quad x_2^2x_3. 
\] These monomials are cyclically permuted by $s$, which shows that the cubic $K$ is (up to a scalar multiple) one of the following: \[ K_{\lambda}= x_0^2x_2 + \lambda x_0x_1^2+ \lambda^2 x_1x_3^2+ \lambda^3x_2^2x_3, \quad \makebox{where} \quad \lambda \in \lbrace \pm1, ~ \pm i \rbrace. \] The possibilities $K_{\pm i}$ can be ruled out because $p_1 \notin K_{i}$ and $p_2 \notin K_{-i}$. We claim that $I(C_4)_3^{G_4}=0$, which excludes $K=K_{1}$, and finally implies $K=K_{-1}$. To proof the claim, we consider the $G_4$-invariant part of \ref{noetherseq} for $k=3$: \begin{equation}\label{G4invariant} 0 \to I(C_4)_3^{G_4} \to \mathbb C[x_0, \ldots, x_{3}]_3^{G_4} \to H^0(C,\omega_{C_4}^{\otimes 3})^{G_4} \to 0. \end{equation} The space $\mathbb C[x_0, \ldots, x_{3}]_3^{G_4}$ is easily seen to be one-dimensional by computing the inner product \[ \langle \operatorname{Sym}^3(\chi_{C_4}), \chi_{triv} \ensuremath{\rightarrow}ngle = \frac{1}{|G_4|} \sum_{g \in G_4} \operatorname{Sym}^3(\chi_{C_4})(g) \] with the help of the well known formula: \[ \operatorname{Sym}^3(\chi_{C_4})(g)= \tfrac{1}{6}\cdot\left(\chi_{C_4}(g)^3+3\chi_{C_4}(g^2)\chi_{C_4}(g)+ 2\chi_{C_4}(g^3)\right), \qquad g \in G_4. \] \cite[Lemma VI.11, Examples VI.12]{beauAlgSurf} allows us to compute \[ h^0(\omega_{C_4}^{\otimes 3})^G= h^0\left(\mathcal O_{\mathbb P^1}\left(-6+2 \left\lfloor \tfrac{9}{4} \right\rfloor + \left\lfloor \tfrac{12}{5} \right\rfloor \right)\right) =h^0(\mathcal O_{\mathbb P^1})=1, \] and the exactness of the sequence \ref{G4invariant} yields $I(C_4)_3^{G_4}=0$. \end{proof} \begin{theorem}\label{main2} Let $n\geq 2$ and $X_d=(E^{n-1}\times C_d)/G_d$ a rigid quotient. Then, the types of the singularities of $X_d$ and their numbers are: \begin{center} {\scriptsize\renewcommand1.5{2.0}\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\tfrac{1}{2}(1,\ldots,1)$ & $\tfrac{1}{3}(1,\ldots,1)$& $\tfrac{1}{3}(1,\ldots,1,2)$ & $\tfrac{1}{4}(1,\ldots,1)$ & $\tfrac{1}{4}(1,\ldots,1,3)$ & $\tfrac{1}{6}(1,\ldots,1)$ & $\tfrac{1}{6}(1,\ldots,1,5)$\\ \hline $d=3$ & -- & $3^{n-1}$ & $3^{n-1}$ & -- & -- & -- & --\\ \hline $d=4$ & $2^{n-1}(2^{n-1}-1)$ & -- & -- & $2^{n-1}$ & $2^{n-1}$ & -- & --\\ \hline $d=6$ & $\tfrac{2}{3}(4^{n-1}-1)$ & $\tfrac{1}{2}(3^{n-1}-1)$ & $\tfrac{1}{2}(3^{n-1}-1)$ & -- & -- & $1$ & $1$\\ \hline \end{tabular} } \end{center} If $d=6$, then there is precisely one isomorphism class of such quotients.\\ If $d=3,4$, then there are at most two isomorphism classes, which can be identified under complex conjugation. \end{theorem} \begin{proof} First, we prove the statement about the isomorphism classes. We start with the group $G_6=\langle 18,3 \ensuremath{\rightarrow}ngle \simeq \mathbb Z_3 \rtimes_{\varphi_6} \mathbb Z_6$. Let $\mathcal S_C$ be the set of generating triples $S$ for $G_6$ of type $[6,6,3]$ such that $S\not\subset G_6\setminus A$. As usual, $A$ is the unique normal abelian subgroup of $G_6$ of index $6$. Let $\mathcal S_E$ be the set of generating triples of $G_6$ of type $[2,3,6]$. Let \[ \mathfrak X \subset \mathcal S_E^{n-1}\times\mathcal S_C \] be the set of $n$-tuples of generating triples which correspond to a rigid action. We show that the action of $\Aut(G) \times \mathcal B_3^n \times \mathfrak S_{n-1}$ on this set is transitive, i.e., there is only one orbit and therefore only one isomorphism class. 
Let $R_C\subset \mathcal S_C$ be a set of representatives for the $(\Aut(G) \times \mathcal B_3)$-action on $\mathcal S_C$ and $R_E \subset \mathcal S_E$ be a set of representatives for the $\mathcal B_3$-action on $\mathcal S_E$. Clearly, each orbit of the $(\Aut(G) \times \mathcal B_3^n \times \mathfrak S_{n-1})$-action on $\mathfrak X$ has a representative in the set $R_E^{n-1}\times R_C$. Using a MAGMA computation, we see that the set $R_C$ has just one element, which we denote by $S$. Note that the transitivity of the action on $\mathcal S_C$ follows also from the uniqueness of the curve proven in Proposition~\ref{minimalex}. Furthermore, the set $R_E$ has exactly two elements $S_1$ and $S_2$; the characters of the corresponding canonical actions are $\chi_{\zeta_6}$ and $\chi_{\zeta_6^{-1}}$. According to Corollary \ref{charconds}, the unique tuples in $R_E^{n-1}\times R_C$ that give rigid actions are $[S_1,\ldots,S_1,S]$ and $[S_2,\ldots,S_2,S]$. Thus, there are at most two isomorphism classes of rigid quotients. Again, a MAGMA computation shows that $[S_1,S]$ and $[S_2,S]$ are in the same orbit under the action of $\Aut(G) \times \mathcal B_3^2$. This implies that $[S_1,\ldots,S_1,S]$ and $[S_2,\ldots,S_2,S]$ are equivalent under the action of $\Aut(G) \times \mathcal B_3^n \times \mathfrak S_{n-1}$, and therefore, there is a unique isomorphism class of rigid quotients, see Corollary \ref{bihollift}. Now, let $d=3,4$. Also here, following the above strategy, only two tuples of generating triples remain, $[S_1,\ldots,S_1,S]$ and $[S_2,\ldots,S_2,S]$. However, in this case, they belong to different orbits under the action of $\Aut(G) \times \mathcal B_3^n \times \mathfrak S_{n-1}$. But the complex conjugate of $[S_1,\ldots,S_1,S]$ lies in the same orbit as the second tuple. This means that the first quotient is isomorphic to the complex conjugate of the second quotient. The reader can find the source code for the MAGMA computations on the webpage \begin{center} \url{http://www.staff.uni-bayreuth.de/~bt300503/publi.html}. \end{center} The computation of the singularities can be done in the same way as in the proof of Theorem~\ref{quot366} (see also \cite{BG20} for the case $d=3$). \end{proof} \begin{openprob} It is not clear to us whether the complex conjugate varieties in Theorem \ref{main2} (for $d=3,4$) are biholomorphic or not. Corollary \ref{bihollift} only implies that there is no biholomorphism lifting to a biholomorphism of the form \begin{align*} E^{n-1}\times C_d &\longrightarrow E^{n-1}\times C_d\\ (z_1,\ldots,z_n)&\longmapsto (u_1(z_{\tau(1)}),\ldots,u_{n-1}(z_{\tau(n-1)}), u_{n}(z_{n})), \end{align*} for some permutation $\tau\in\mathfrak S_{n-1}$. However, there might be other biholomorphisms that are not of product type or do not even lift. \end{openprob} \section{Proof of Theorem~\ref{theo:Resolutions}}\label{sec:resolutions} This section is devoted to the proof of Theorem~\ref{theo:Resolutions}. For this, we construct resolutions of the rigid singular quotients $X$ of the form \[ (E^{n-1}\times C)/G, \qquad \makebox{where} \qquad G= A\rtimes_{\varphi_d}\ensuremath{\mathbb{Z}}_d, \] preserving the rigidity in order to obtain rigid projective \emph{manifolds}.\\ If $n\geq d$, then all singularities of the quotients are canonical (cf. Corollary~\ref{cor:sing}), which implies that the Kodaira dimension of the resolutions is the same as the Kodaira dimension of the product $E^{n-1}\times C$, namely $1$.
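Indeed, $\kappa(E^{n-1}\times C)=(n-1)\,\kappa(E)+\kappa(C)=0+1=1$, since the Kodaira dimension is additive on products; moreover, a resolution of a variety with at most canonical singularities has the same plurigenera, and hence the same Kodaira dimension.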
Leray's spectral sequence can be used to derive sufficient conditions on a resolution $\rho\colon\hat{X}\to X$ to guarantee that $\hat{X}$ is still infinitesimally rigid: \begin{proposition}[{\cite{BG20}, Proposition~2.10}]\label{prop:CondRes} Let $Y$ be a projective manifold and $G\leq\Aut(Y)$ be a finite group acting freely in codimension one. Let $\rho\colon \hat{X}\to X$ be a resolution of the quotient $X=Y/G$ such that \begin{enumerate} \item $\rho_\ast\Theta_{\hat{X}}\simeq \Theta_X$, \item $R^1\rho_\ast\Theta_{\hat{X}}=0$. \end{enumerate} Then, $H^1(\hat{X},\Theta_{\hat{X}})\simeq H^1(X,\Theta_X)\simeq H^1(Y,\Theta_Y)^G$. \end{proposition} By Corollary~\ref{cor:sing}, our quotients have only isolated singularities. Thus, the construction of resolutions with those properties is a local problem. Furthermore, the singularities are all cyclic quotient singularities of type \[ \frac{1}{\ell}(1,\ldots,1)\qquad\makebox{or}\qquad \frac{1}{\ell}(1,\ldots,1,\ell-1), \qquad\makebox{where}\qquad\ell\geq 2. \] Those singularities are known to be represented by affine toric varieties with cone $$\sigma:=\cone(e_1,\ldots, e_n)\subset\ensuremath{\mathbb{R}}^n$$ and lattice \[N_j:=\ensuremath{\mathbb{Z}}^n +\ensuremath{\mathbb{Z}}\cdot \tfrac{1}{\ell}(1,\ldots,1,a_j)\subset \ensuremath{\mathbb{R}}^n,\] where $a_1=1$ and $a_2=\ell-1$.\\ This allows us to use tools from toric geometry to construct such a resolution. The basic references for toric geometry are the textbooks \cite{F93} and \cite{CLS11}.\\ For the singularities of type $\tfrac{1}{\ell}(1,\ldots,1),\:\ell\geq 2$, we can use the toric blowup as a resolution. The verification of the conditions outlined in Proposition~\ref{prop:CondRes} is analogous to \cite{BG20}.\\ For the singularities of the other type, we generalize the construction and the proof of \cite{BG20}, where only the case $\ell=3$ was settled, to prove the following: \begin{proposition}\label{prop:Resolution} Let $n\geq 3$, $\ell\geq 2$, and let $U$ be the affine cyclic quotient singularity of type $\tfrac{1}{\ell}(1,\ldots,1,\ell-1)$. Then, there exists a resolution $\rho\colon \hat{U}\to U$ of singularities with the following properties: \begin{enumerate} \item $\rho_\ast\Theta_{\hat{U}}\simeq\Theta_{U}$\: and \item $R^1\rho_\ast\Theta_{\hat{U}}=0.$ \end{enumerate} \end{proposition} For the construction of the resolution, we use the toric description from above (set $N:=N_2$) and consider the finite sequence of star subdivisions of the cone $\sigma$ along the rays generated by the elements \[v_k:=\tfrac{1}{\ell}(k,\ldots,k,\ell-k),\quad k=1,\ldots,\ell-1.\] This yields a fan $\Sigma$ with maximal cones \begin{align*} \begin{split} \sigma_i^{(0)}&:=\cone(e_1,\dots,\widehat{e_i},\dots,e_n,v_1),\hspace{1.7cm} i=1,\dots,n-1,\\ \sigma_i^{(k)}&:=\cone(e_1,\dots,\widehat{e_i},\dots,e_{n-1},v_k,v_{k+1}),\quad i=1,\dots,n-1;\:k=1,\dots,\ell-2,\\ \sigma_n&:=\cone(e_1,\dots,e_{n-1},v_{\ell-1}). \end{split} \end{align*} It is an easy computation to show that all these maximal cones are smooth. Therefore, the subdivisions induce a resolution $\rho\colon U_\Sigma\to U$, where $U_\Sigma$ is the toric variety of the fan $\Sigma$. The resolution admits $\ell-1$ exceptional divisors $E_1,\ldots,E_{\ell-1}$ corresponding to the additional rays generated by $v_k,\:k=1,\ldots,\ell-1$. We denote the divisors corresponding to the rays $\ensuremath{\mathbb{R}}_{\geq 0}e_i$ by $D_i$.
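The smoothness of the maximal cones can also be checked by machine. As a sanity check (not part of the argument), the following short script -- it assumes SymPy and only tests small values of $n$ and $\ell$ -- verifies that the generators of each maximal cone form a $\mathbb Z$-basis of $N$, i.e., that their determinant equals $\pm\tfrac{1}{\ell}$, the covolume of $N$:
{\small
\begin{verbatim}
# Sanity check that the maximal cones of the star subdivision are smooth:
# a full-dimensional simplicial cone is smooth iff its generators form a
# Z-basis of N = Z^n + Z*(1/l)(1,...,1,l-1), iff |det(gens)| = covol(N) = 1/l.
import sympy as sp
from itertools import product

def v(k, n, l):            # the ray v_k = (1/l)(k,...,k,l-k)
    return [sp.Rational(k, l)] * (n - 1) + [sp.Rational(l - k, l)]

def e(i, n):               # standard basis vector e_{i+1}
    return [1 if j == i else 0 for j in range(n)]

def is_smooth(gens, l):
    return abs(sp.Matrix(gens).det()) == sp.Rational(1, l)

def maximal_cones(n, l):
    cones = [[e(j, n) for j in range(n) if j != i] + [v(1, n, l)]
             for i in range(n - 1)]                                # sigma_i^(0)
    cones += [[e(j, n) for j in range(n - 1) if j != i] + [v(k, n, l), v(k + 1, n, l)]
              for k in range(1, l - 1) for i in range(n - 1)]      # sigma_i^(k)
    cones.append([e(j, n) for j in range(n - 1)] + [v(l - 1, n, l)])  # sigma_n
    return cones

for n, l in product(range(3, 6), range(2, 6)):
    assert all(is_smooth(c, l) for c in maximal_cones(n, l)), (n, l)
print("all maximal cones are unimodular for the tested (n, l)")
\end{verbatim}
}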
\begin{proposition}\label{prop:excdivisor} The exceptional divisors of the toric resolution can be described as follows: \begin{enumerate} \item The exceptional prime divisor $E_{\ell-1}$ is isomorphic to $\ensuremath{\mathbb{P}}^{n-1}$. \item For $k=1,\ldots,\ell-2,$ the exceptional prime divisor $E_k$ is isomorphic to the projective bundle \[pr\colon E_k\simeq \ensuremath{\mathbb{P}}(\Oh_{\ensuremath{\mathbb{P}}^{n-2}}\oplus\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(\ell-k))\to\ensuremath{\mathbb{P}}^{n-2}.\] In particular, \[\omega_{E_k}\simeq pr^*\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-n+2)\otimes\Oh_{E_k}(E_k).\] \end{enumerate} \end{proposition} \begin{proof} We leave the proof for the divisor $E_{\ell-1}$ to the reader as the reasoning behind it is similar to the following proof for the other divisors (but easier). Let $k$ be in $\{1,\ldots,\ell-2\}$. As a toric variety, $E_k$ is given by the quotient lattice $N(v_k)=N/\ensuremath{\mathbb{Z}} v_k$ and quotient cones \[\overline{\sigma_i^{(k-1)}},\:\overline{\sigma_i^{(k)}}\subset N(v_k)_ \ensuremath{\mathbb{R}},\quad 1\leq i\leq n-1\] together with their faces. Denote with $u_1,\ldots,u_{n-1}$ the standard basis of $\ensuremath{\mathbb{Z}}^{n-1}$ and define $e:=u_{n-1}$ and $u_0:=-(u_1+\ldots+u_{n-2})$. The computation for $k=1$ is the same as in \cite[Proposition~5.5]{BG20}. In the case $k>1$, the quotient lattice is generated by the classes $[e_2],\ldots,[e_{n-1}],[v_{k-1}]$, and we can identify the lattice with $\ensuremath{\mathbb{Z}}^{n-2}\times \ensuremath{\mathbb{Z}}$ via the isomorphism \[\phi\colon N(v_k)\longrightarrow\ensuremath{\mathbb{Z}}^{n-2}\times\ensuremath{\mathbb{Z}},\quad [e_i]\mapsto u_{i-1},\,[v_{k-1}]\mapsto -e.\] One can easily show that we can find integers $\mu$ and $\lambda$ such that \[e_1=-e_2-\ldots-e_{n-1}+(k-\ell)\cdot v_{k-1}+\mu\cdot v_k\quad\mathrm{and}\quad v_{k+1}=-v_{k-1}+\lambda\cdot v_k.\] Therefore, we get $\phi([e_1])=u_0+(\ell-k)\cdot e$ and $\phi([v_{k+1}])=e$. Using the $\ensuremath{\mathbb{R}}$-linear extension of $\phi$, which identifies $N(v_k)\otimes\ensuremath{\mathbb{R}}$ with $\ensuremath{\mathbb{R}}^{n-1}$, the quotient cones, viewed as cones in $\ensuremath{\mathbb{R}}^{n-1}$, are given by: \begin{align*} \overline{\sigma_i^{(k-1)}}&\simeq \cone(u_0+(\ell-k)\cdot e,u_1,\dots,\widehat{u_{i-1}},\dots,u_{n-2},-e),\\ \overline{\sigma_i^{(k)}}&\simeq\cone(u_0+(\ell-k)\cdot e,u_1,\dots,\widehat{u_{i-1}},\dots,u_{n-2},e). \end{align*} According to \cite[Example 7.3.5]{CLS11}, these cones and their faces build the fan of the projective bundle $E_k\simeq\ensuremath{\mathbb{P}}(\Oh\oplus\Oh(\ell-k))$.\\ Finally, we compute the canonical bundle of $E_k$ for $k=1,\ldots,\ell-2$ as follows: By using the adjunction formula and \cite[Theorem~8.2.3]{CLS11}, we get: \[\omega_{E_k}\simeq\Oh_{E_k}(-D_1-\ldots-D_n-E_1-\ldots-\widehat{E_k}-\ldots-E_{\ell-1}).\] Since $pr^\ast\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(1)\simeq\Oh_{E_k}(D_2)$, compare \cite[Proposition~6.2.7]{CLS11}, and \[0\sim_{lin}\divi(e_1-(n-3)\cdot e_2+e_3+\ldots+e_n)=D_1-(n-3)\cdot D_2+D_3+\ldots+D_n+E_1+\ldots+E_{\ell-1},\] we conclude \[\omega_{E_k}\simeq \Oh_{E_k}(-(n-2)\cdot D_2+ E_k)\simeq pr^*\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-n+2)\otimes\Oh_{E_k}(E_k).\qedhere \] \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:Resolution}] Primarily, we prove that the resolution $\rho\colon U_\Sigma\to U$ satisfies the condition $\rho_\ast\Theta_{U_\Sigma}\simeq\Theta_{U}$. 
Let $D_i\subset U_\Sigma$ and $D_i'\subset U$ be the divisors corresponding to the rays generated by $e_i$. As proved in \cite[Proposition~5.8]{BG20}, we have to show that \[P_{D_i}\cap N^\vee=P_{D_i'}\cap N^\vee\] is true for all $i=1,\ldots,n$, where $P_{D_i}$ and $P_{D_i'}$ are the polyhedra of the divisors $D_i$ and $D_i'$. These polyhedra are given by the following conditions: \begin{align*} P_{D'_i}&=\{x\in\ensuremath{\mathbb{R}}^n\mid x_i\geq -1,\, x_j\geq 0,\: j\neq i\}\quad\quad\mathrm{and}\\ P_{D_i}&=P_{D'_i}\cap\{\langle x,v_k\rangle\geq 0, \:k=1,\dots,\ell-1\}=\\ &=P_{D'_i}\cap\{kx_1+\ldots+kx_{n-1}+(\ell-k)x_n\geq 0,\:k=1,\dots,\ell-1\}. \end{align*} The dual lattice $N^\vee$ consists of all points $x\in\ensuremath{\mathbb{Z}}^n$ satisfying that the sum $x_1+\ldots+x_{n-1}+(\ell-1)x_n$ is divisible by $\ell$. Obviously, the integral points of $P_{D_i}$ are contained in the polyhedron $P_{D_i'}$. So let $x$ be an element of $P_{D_i'}\cap N^\vee$ and $k\in\{1,\ldots,\ell-1\}$. Since \[k(x_1+\ldots+x_{n-1}+(\ell-1)x_n)=kx_1+\ldots+kx_{n-1}+(\ell-k)x_n+\ell(k-1)x_n\] and since the left-hand side of this equation is divisible by $\ell$, the sum $kx_1+\ldots+kx_{n-1}+(\ell-k)x_n$ is divisible by $\ell$ as well. By assumption, we know that this sum is an integer at least $-(\ell-1)$. Therefore, it must be greater than or equal to zero, and we conclude that $x$ is an element of $P_{D_i}$, too.\\ It remains to show that $R^1\rho_\ast\Theta_{U_\Sigma}=0$. The arguments are similar to the proof of Proposition~5.10 in \cite{BG20}. Using the toric Euler sequence and the fact that $U_\sigma$ has rational singularities, one can show that this is the case if and only if \begin{enumerate} \item $R^1\rho_*\Oh_{U_\Sigma}(D_i)=0$ for all $i=1,\dots,n$, \item $R^1\rho_*\Oh_{E_k}(E_k)=0$ for all $k=1,\dots,\ell-2$ and \item $R^1\rho_*\Oh_{E_{\ell-1}}(E_{\ell-1})=0$. \end{enumerate} The vanishing of $R^1\rho_*\Oh_{E_{\ell-1}}(E_{\ell-1})$ is clear since $E_{\ell-1}\simeq\ensuremath{\mathbb{P}}^{n-1}$, which implies that the bundle $\Oh_{E_{\ell-1}}(E_{\ell-1})$ is a multiple of $\Oh_{\ensuremath{\mathbb{P}}^{n-1}}(1)$, whose first cohomology vanishes ($n\geq 3$).\\ By symmetry, it is enough to consider the cases $i=1,n$ to show (1). We start with $D_1$. The Cartier data of $D_1$ is the collection of the vectors \begin{align*} m_{\sigma_n}=-e_1+(\ell-1)\cdot e_n,\quad m_{\sigma_1^{(k)}}=0\quad\mathrm{and}\quad m_{\sigma_i^{(k)}}=-e_1+e_i, \end{align*} for $i=2,\ldots,n-1$ and $k=0,\ldots,\ell-1$. Since all these vectors are elements of the polyhedron $P_{D_1}$, the sheaf $\Oh_{U_\Sigma}(D_1)$ is globally generated according to \cite[Proposition~6.1.1]{CLS11}. Using Demazure vanishing \cite[Theorem~9.2.3]{CLS11}, we conclude $R^1\rho_*\Oh_{U_\Sigma}(D_1)=0$.\\ The Cartier data of the divisor $D_n$ is the following collection: \begin{align*} m_{\sigma_n}=0,\quad m_{\sigma_i^{(0)}}=-e_n+(\ell-1)\cdot e_i\quad\mathrm{and}\quad m_{\sigma_i^{(k)}}=0 \end{align*} for $i=1,\ldots,n-1$ and $k=1,\ldots, \ell-1$. These elements are all contained in the polyhedron associated to $D_n$, and we can argue as above.\\ Finally, let $k\in\{1,\ldots,\ell-2\}$.
By Proposition~\ref{prop:excdivisor}, the canonical bundle of $E_k$ is \[\omega_{E_k}\simeq pr^*\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-n+2)\otimes\Oh_{E_k}(E_k).\] Using Serre duality on $E_k$ and the projection formula, we can conclude: \begin{align*} H^1(E_k,\Oh_{E_k}(E_k)) &\simeq H^{n-2}(E_k,pr^*\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-n+2))^\vee \simeq H^{n-2}(\ensuremath{\mathbb{P}}^{n-2},\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-n+2))^\vee\\ &\simeq H^0(\ensuremath{\mathbb{P}}^{n-2},\Oh_{\ensuremath{\mathbb{P}}^{n-2}}(-1))=0. \qedhere \end{align*} \end{proof} {\tiny MATHEMATISCHES INSTITUT, UNIVERSIT\"AT BAYREUTH, 95447 BAYREUTH, GERMANY} {\scriptsize\emph{E-mail address: [email protected],}} \quad {\scriptsize\emph{[email protected],}} \quad {\scriptsize\emph{[email protected]}} \end{document}
\mathrm{b}egin{document} \title{Fluctuations of the free energy of the spherical Sherrington--Kirkpatrick model with ferromagnetic interaction} \mathrm{b}egin{abstract} We consider a spherical spin system with pure 2-spin spherical Sherrington--Kirkpatrick Hamiltonian with ferromagnetic Curie--Weiss interaction. The system shows a two-dimensional phase transition with respect to the temperature and the coupling constant. We compute the limiting distributions of the free energy for all parameters away from the critical values. The zero temperature case corresponds to the well-known phase transition of the largest eigenvalue of a rank 1 spiked random symmetric matrix. As an intermediate step, we establish a central limit theorem for the linear statistics of rank 1 spiked random symmetric matrices. \end{abstract} \section{Introduction} \subsection{Model} Let $A=(A_{ij})_{i,j=1}^N$ be a real symmetric matrix where $A_{ij}$, $1 \leq i < j \leq N$, are independent random variables with mean $0$ and variance $1$, and the diagonal entries $A_{ii}=0$. The pure $2$-spin spherical Sherrington--Kirkpatrick (SSK) model with no external field is a disordered system defined by the random Hamiltonian \mathrm{b}egin{equation} H_N^{\SSK} (\mathrm{b}oldsymbol{\sigma}) := {\mathfrak a}c1{\sqrt{N}} \langle \mathrm{b}oldsymbol{\sigma}, A \mathrm{b}oldsymbol{\sigma} \rangle = {\mathfrak a}c1{\sqrt{N}} \sum_{i,j=1}^N A_{ij} \sigma_i\sigma_j \end{equation} for the spin variables on the sphere, $\mathrm{b}oldsymbol{\sigma} \in S_{N-1}$, where $S_{N-1} := \{ \mathrm{b}oldsymbol{\sigma} \in \R^N : \| \mathrm{b}oldsymbol{\sigma} \|^2 = N \}$. For the history and the existing results on the model, including the proof of the Parisi formula, we refer to \cite{KosterlitzThoulessJones, CrisantiSommers, TalagrandParisiSpher, PanchekoTalagrand2007} and references therein. We are interested in the spherical spin system with (random) Hamiltonian \mathrm{b}egin{equation} \label{eq:defofH} H_N (\mathrm{b}oldsymbol{\sigma}) = H_N^{\SSK}(\mathrm{b}oldsymbol{\sigma}) + H_N^{\CW}(\mathrm{b}oldsymbol{\sigma}), \qquad \mathrm{b}oldsymbol{\sigma} \in S_{N-1}, \end{equation} where the Curie--Weiss (CW) Hamiltonian with coupling constant $J$ is defined by \mathrm{b}egin{equation} H_N^{\CW}(\mathrm{b}oldsymbol{\sigma}) := {\mathfrak a}c{J}{N} \sum_{i,j=1}^N \sigma_i\sigma_j = {\mathfrak a}c{J}{N} \left( \sum_{i=1}^N \sigma_i \right)^2. \end{equation} Note that $H_N^{\CW}$ is large in magnitude when all $\sigma_i$ have the same sign. The Hamiltonian $H_N$ is similar to the SSK model with external field, \mathrm{b}egin{equation} H_N^{\exte}(\mathrm{b}oldsymbol{\sigma})= H_N^{\SSK}(\mathrm{b}oldsymbol{\sigma})+ h \sum_{i=1}^N \sigma_i. \end{equation} See \cite{Chen2014} for a relation between these two Hamiltonians. The main result of this paper is a limit theorem for the free energy at positive temperature $1/\mathrm{b}eta>0$ with positive coupling constant $J$. This paper is an extension of our previous paper \cite{BaikLee} in which we obtained limit theorems for pure $2$-spin SSK model (with $J=0$). Before we state our result, we first summarize the known limit theorems for the free energy of SSK and also the Sherrington--Kirkpatrick (SK) model. Results for (3)--(7) were established in the same year 2015. We indicate the limiting distribution and the order of fluctuations of the free energy. These results assume that $A_{ij}$ are standard Gaussian. 
However, the result (1) was extended to non-Gaussian $A_{ij}$ in \cite{GuerraToninelli2002, CarmonaHu}, and the results (3) and (4) were obtained for general normalized random variables. \subsubsection*{No external field.} When there is no external field ($h=0$), the following are known for pure $p$-spin models: \mathrm{b}egin{enumerate}[(1)] \item Pure $2$-spin SK model for $\mathrm{b}eta\in (0, \mathrm{b}eta_c)$: Gaussian, $O(N^{-1})$ \cite{AizenmanLebowitzRuelle, FrohlochZegarlinski1987, CometsNeveu1995} \item Pure $p$-spin SK model for $\mathrm{b}eta\in (0, \mathrm{b}eta^{(p)}_{c})$: Gaussian, $O(N^{-p/2})$ \cite{BovierKurkovaLowes} \item Pure $2$-spin SSK model for $\mathrm{b}eta\in (0, \mathrm{b}eta_c)$: Gaussian, $O(N^{-1})$ \cite{BaikLee} \item Pure $2$-spin SSK model for $\mathrm{b}eta\in (\mathrm{b}eta_c, \infty)$: $\TW_1$, $O(N^{-2/3})$ \cite{BaikLee} \item Pure $p$-spin SSK model for $p\ge 3$ at $\mathrm{b}eta=\infty$: Gumbel, $O(N^{-1})$ \cite{SubagZeitouni2015} \end{enumerate} Here, $\TW_1$ denotes the GOE Tracy-Widom distribution. The numbers $\mathrm{b}eta^{(p)}_c$, $p\ge 3$, and $\mathrm{b}eta_c$ are certain critical values. A results for pure $p$-spin SSK model with $p\ge 3$ for low temperature is given in \cite{Subag2016}. We also remark that the free energy of the pure $2$-spin SSK model at zero temperature, $\mathrm{b}eta=\infty$, is, after modifying the definition slightly, equal to the rescaled largest eigenvalue of symmetric random matrix $A$. Hence, from the well-known result in the random matrix theory \cite{So1999, TV2010, EYY}, this case also corresponds to $\TW_1$ with $O(N^{-2/3})$ fluctuation. Comparing with (5), we find that at zero temperature the free energy fluctuates differently for $p=2$ and $p\ge 3$. There is an important difference of $p=2$ case and $p\ge 3$ case: the number of critical points for the Hamiltonian (subject to the constraint $\|\mathrm{b}oldsymbol{\sigma}\|^2=N$) is $2N$ for $p=2$, but is exponential in $N$ for $p\ge 3$ as proved in \cite{ABC2013} (for upper bound) and \cite{Subag2015} (for lower bound). The critical points are the eigenvectors of $A$ for $p=2$, hence strongly correlated, whereas the extremal process of critical points converges in distribution to a Poisson point process for $p \ge 3$. See Theorem 1 of \cite{SubagZeitouni2015} for more detail. \subsubsection*{Positive external field.} The behavior of the free energy changes drastically under the presence of an external field ($h>0$). For this case, the more complicated model with mixed $p$-spin interactions were also studied. \mathrm{b}egin{enumerate} \item[(6)] Mixed $p$-spin SK and SSK models (without odd $p$-interactions for $p\ge 3$) with $h>0$ for all $\mathrm{b}eta\in (0, \infty)$: Gaussian, $O(N^{-1/2})$ \cite{ChenDeyPanchenkp2015} \item[(7)] Mixed $p$-spin SSK model with $h>0$ at $\mathrm{b}eta=\infty$: Gaussian, $O(N^{-1/2})$ \cite{ChenSen2015} \end{enumerate} Note that the fluctuations are significantly increased from the $h=0$ case. It is interesting to scale $h\to 0$ with $N$ and consider a transition from (6) and (7) to (4) or (5). By matching the variance when $h=0$ and $h>0$, it is expected that the transitional scaling is $h=O(N^{-1/6})$ for $p=2$. For the large deviation analysis for the pure $2$-spin SSK model and discussions for such $h$, we refer to \cite{FyodorovLeDoussal2014} for deterministic $h$ and \cite{DemboZeitouni2015} for random $h$. \subsection{Definitions} We first define a Hamiltonian that generalizes $H_N$ in \eqref{eq:defofH}. 
\mathrm{b}egin{defn}[Interactions] \label{def:M} \label{def:Wigner} \label{cond:nonzero} Let $A_{ij}$, $1\le i\le j$, be independent real random variables satisfying the following conditions: \mathrm{b}egin{itemize} \item All moments of $A_{ij}$ are finite and $\mathbb{E}[A_{ij}]=0$. \item For all $i<j$, $\mathbb{E}[A_{ij}^2]=1$, $\mathbb{E}[A_{ij}^3]=W_3$, and $\mathbb{E}[A_{ij}^4]=W_4$ for some constants $W_3\in \R$ and $W_4> 0$. \item For all $i$, $\mathbb{E}[A_{ii}^2]=w_2$ for a constant $w_2\ge 0$. \end{itemize} Set $A_{ji}=A_{ij}$ for $i<j$, and set $A=(A_{ij})_{i,j=1}^N$. Let \mathrm{b}egin{equation} \label{defofMma} M_{ij} = {\mathfrak a}c{A_{ij}}{\sqrt N} + {\mathfrak a}c{J}{N} \quad (i \neq j), \qquad M_{ii} = {\mathfrak a}c{A_{ii}}{\sqrt N} + {\mathfrak a}c{J'}{N} \end{equation} for some ($N$-independent) non-negative constants $J$ and $J'$. Set $M=(M_{ij})_{i,j=1}^N$. We call $M$ a Wigner matrix with non-zero mean. \end{defn} The Hamiltonian in \eqref{eq:defofH} is obtained by setting $A_{ii}=0$ and $J'=0$. \mathrm{b}egin{defn}[Free energy] \label{def:partition} Define the Hamiltonian $H_N (\mathrm{b}oldsymbol{\sigma}) = \langle \mathrm{b}oldsymbol{\sigma}, M \mathrm{b}oldsymbol{\sigma} \rangle$ on sphere $\|\mathrm{b}oldsymbol{\sigma}\|=\sqrt{N}$. For $\mathrm{b}eta>0$, define the partition function and the free energy as \mathrm{b}egin{equation} \label{eq:partition} Z_N= Z_N(\mathrm{b}eta)= \int_{S_{N-1}} e^{\mathrm{b}eta H_N(\mathrm{b}oldsymbol{\sigma})} \mathrm{d} \omega_N(\mathrm{b}oldsymbol{\sigma}), \qquad F_N= F_N(\mathrm{b}eta)= {\mathfrak a}c1{N} \log Z_N, \end{equation} where $\mathrm{d}\omega_N$ is the normalized uniform measure on the sphere $S_{N-1} = \{ \mathrm{b}oldsymbol{\sigma} \in \R^N : \| \mathrm{b}oldsymbol{\sigma} \|^2 = N \}$. \end{defn} \mathrm{b}egin{rem} We may also consider complex matrix $M$. In this case, the real and the complex entries are independent and we add an extra condition that $\E A_{ij}^2=0$. The results in this paper have corresponding results for complex $M$, but we do not state them here. \end{rem} \subsection{Results} \label{sec:resultsm} The following is the main result. The case when $J=0$ was proved previously in \cite{BaikLee}. \mathrm{b}egin{thm} \label{thm:main} The following holds as $N\to \infty$ where all the convergences are in distribution. The notation ${\mathcal N}(a,b)$ denotes Gaussian distribution with mean $a$ and variance $b$ and $\TW_1$ is the GOE Tracy--Widom distribution. \mathrm{b}egin{enumerate}[(i)] \item (Spin glass regime) If $\mathrm{b}eta > {\mathfrak a}c{1}{2}$ and $J < 1$, then \mathrm{b}egin{equation} \label{eq:thmspin1low} {\mathfrak a}c{1}{\mathrm{b}eta-{\mathfrak a}c{1}{2}} N^{2/3} \left( F_N - F(\mathrm{b}eta) \right) \Rightarrow \TW_{1}. \end{equation} \item (Paramagnetic regime) If $\mathrm{b}eta < {\mathfrak a}c{1}{2}$ and $ \mathrm{b}eta < {\mathfrak a}c{1}{2J}$, then \mathrm{b}egin{equation} \label{eq:thmspin1high} N \left( F_N - F(\mathrm{b}eta) \right) \Rightarrow \mathcal{N} \left(f_1, \mathrm{a}lpha_1 \right). \end{equation} \item (Ferromagnetic regime) If $J > 1$ and $\mathrm{b}eta > {\mathfrak a}c{1}{2J}$, then \mathrm{b}egin{equation} \label{eq:thmspin1mid} \sqrt{N} \left( F_N - F(\mathrm{b}eta) \right) \Rightarrow \mathcal{N} \left(0, \mathrm{a}lpha_2 \right). 
\end{equation} \end{enumerate} The leading order limit of the free energy is given by \mathrm{b}egin{equation} \label{Flimit} F(\mathrm{b}eta)= \mathrm{b}egin{cases} 2\mathrm{b}eta - {\mathfrak a}c12 \log(2\mathrm{b}eta) -{\mathfrak a}c34 \qquad &\text{for (i)} \\ \mathrm{b}eta^2 \qquad &\text{for (ii)} \\ \mathrm{b}eta \left( J + {\mathfrak a}c{1}{J} \right) - {\mathfrak a}c{1}{2} \log (2\mathrm{b}eta J) - {\mathfrak a}c{1}{4J^2} - {\mathfrak a}c{1}{2}, \quad &\text{for (iii).} \end{cases} \end{equation} The parameters for case (ii) in \eqref{eq:thmspin1high} are \mathrm{b}egin{equation} \label{eq:f1} f_1 = {\mathfrak a}c{1}{4} \log(1-4\mathrm{b}eta^2) + \mathrm{b}eta^2(w_2-2) + 2\mathrm{b}eta^4(W_4-3) - \mathrm{b}eta J - {\mathfrak a}c{1}{2} \log(1-2\mathrm{b}eta J) + \mathrm{b}eta J' \end{equation} and \mathrm{b}egin{equation} \label{eq:alpha1} \mathrm{a}lpha_1 = -{\mathfrak a}c{1}{2} \log (1-4\mathrm{b}eta^2) + \mathrm{b}eta^2 (w_2-2) + 2\mathrm{b}eta^4 (W_4-3), \end{equation} and the parameter for case (iii) in \eqref{eq:thmspin1mid} is \mathrm{b}egin{equation} \mathrm{a}lpha_2 = 2 \left( 1 - {\mathfrak a}c{1}{J^2} \right) \left(\mathrm{b}eta - {\mathfrak a}c{1}{2J} \right)^2. \end{equation} \end{thm} If we set $T = {\mathfrak a}c{1}{2\mathrm{b}eta}$, then the trichotomy corresponds to the cases $\max \{ T, J, 1 \} = 1$, $\max \{T, J, 1\} = T$, and $\max \{ T, J, 1 \} = J$, respectively. See Figure \ref{fig:diagram} for the phase diagram. \mathrm{b}egin{figure} \centering \mathrm{b}egin{tikzpicture}[scale=0.9] \draw[thick, ->] (0,0) -- (4,0); \draw[thick, ->] (0,0) -- (0,4); \draw[thick] (2, 0) -- (2,2); \draw[thick] (0, 2) -- (2,2); \draw[thick] (2, 2) -- (3.6, 3.6); \draw node at (4.3,0) {$J$}; \draw node at (-0.5, 3.6) {${\mathfrak a}c1{2\mathrm{b}eta}$}; \draw node at (-0.2, -0.25) {$0$}; \draw node at (2, -0.25) {$1$}; \draw node at (-0.25, 2) {$1$}; \draw node at (1,1) {spin glass}; \draw node at (1.3,3) {paramagnetic}; \draw node at (3.5,1) {ferromagnetic}; \end{tikzpicture} \caption{Phase diagram} \label{fig:diagram} \end{figure} The above result implies that \mathrm{b}egin{equation} F_N(\mathrm{b}eta) \to F(\mathrm{b}eta) \end{equation} in probability for $(J, \mathrm{b}eta)$ not on the critical lines. The formula \eqref{Flimit} of $F(\mathrm{b}eta)$, and hence also the phase diagram, were obtained by Kosterlitz, Thouless, and Jones \cite{KosterlitzThoulessJones}. Their proof is not completely rigorous but can be made rigorous by using the estimates that were developed later in random matrix theory. In this paper, we make their analysis rigorous and improve it to obtain the results on the fluctuations. Note that even though the paramagnetic regime and the ferromagnetic regime both have a Gaussian as the limiting distribution, the order of the fluctuations are different. The reason for this can be seen from the following theorem of which Theorem \ref{thm:main} is a consequence. \mathrm{b}egin{thm} \label{thm:main2} Let $\mu_1\ge \mu_2\ge\dots\ge \mu_N$ be the eigenvalues of Wigner matrix with non-zero mean $M$ in Definition \ref{def:M}. For every $\epsilon>0$ and $D > 0$, the following holds as $N\to \infty$ with probability higher than $1-N^{-D}$. \mathrm{b}egin{enumerate}[(i)] \item (Spin glass regime) If $\mathrm{b}eta > {\mathfrak a}c{1}{2}$ and $J < 1$, then \mathrm{b}egin{equation} \label{eq:thmspin1low002} F_N= F(\mathrm{b}eta) + \left( \mathrm{b}eta-{\mathfrak a}c{1}{2} \right) \left( \mu_1- 2 \right) + O(N^{-1+\epsilon}). 
\end{equation} \item (Paramagnetic regime) If $\mathrm{b}eta < {\mathfrak a}c{1}{2}$ and $ \mathrm{b}eta < {\mathfrak a}c{1}{2J}$, then \mathrm{b}egin{equation} \label{eq:thmspin1high002} \mathrm{b}egin{split} F_N = 2\mathrm{b}eta^2 - {\mathfrak a}c12 \log (2\mathrm{b}eta) - {\mathfrak a}c{1}{2N} \sum_i g(\mu_i) + {\mathfrak a}c1{N} \left( \log(2\mathrm{b}eta) -{\mathfrak a}c12 \log \left( -{\mathfrak a}c1{N} \sum_i g''(\mu_i) \right) \right) + O(N^{-2+\epsilon}) \end{split} \end{equation} where \mathrm{b}egin{equation} g(x) = \log \left( 2\mathrm{b}eta+{\mathfrak a}c1{2\mathrm{b}eta} - x \right). \end{equation} \item (Ferromagnetic regime) If $J > 1$ and $\mathrm{b}eta > {\mathfrak a}c{1}{2J}$, then \mathrm{b}egin{equation} \label{eq:thmspin1mid002} F_N = F(\mathrm{b}eta) + \left(\mathrm{b}eta - {\mathfrak a}c1{2J} \right) \left( \mu_1 - J- {\mathfrak a}c1{J} \right) + O(N^{-1}\log N). \end{equation} \end{enumerate} \end{thm} Intuitively, the free energy is dominated by the ground state, $\mu_1$, at low temperature, and by all eigenvalues at high temperature. The above result makes this intuition precise: in the spin glass regime (i) and the ferromagnetic regime (iii), the fluctuations of the free energy are governed by the ground state, the largest eigenvalue $\mu_1$, while in the ferromagnetic regime (ii), they are governed by all of the eigenvalues in the form of the linear statistics $\sum_i g(\mu_i)$ of a specific function $g$. The Wigner matrix with non-zero mean $M$ is a rank 1 case of so-called a spiked random matrix. A spiked random matrix is a random matrix perturbed additively by a deterministic matrix of fixed $N$-independent rank. Spiked random matrices were studied extensively in random matrix theory \cite{Baik-Ben_Arous-Peche05, Feral-Peche07, CDF2012, PRS2013, KY2013_iso}. Since the perturbation has a rank independent of $N$, the semi-circle law (see \eqref{semicirlw} below) still holds. However, the top eigenvalues may have different limit theorems. For the rank 1 case $M$, it was shown in Theorem 1.3 of \cite{PRS2013} that \mathrm{b}egin{equation} \label{spikedr1fl} \mathrm{b}egin{cases} N^{2/3} ( \mu_1 - 2 ) \Rightarrow \TW_1, \qquad &J<1 \\ N^{1/2} \left(\mu_1 - (J + {\mathfrak a}c{1}{J}) \right) \Rightarrow {\mathcal N}(0, 2(1-{\mathfrak a}c{1}{J^2})),\qquad &J>1 . \end{cases} \end{equation} (See also Theorem 3.4 of \cite{CDF2012}.) For Hermitian matrix, \eqref{spikedr1fl} was first proved in \cite{Baik-Ben_Arous-Peche05}. When $J<1$, then the perturbation has little effect on $\mu_1$. But when $J>1$, $\mu_1$ becomes an ``outlier'' in the sense that it is separated from the support of the semi-circle and as a consequence, becomes ``freer'' to fluctuate; the fluctuation order $N^{-1/2}$ is bigger in this case. Theorem \ref{thm:main} (i) and (iii) follow directly from Theorem \ref{thm:main2} and \eqref{spikedr1fl}. \subsection{Linear statistics for Wigner matrix with non-zero mean}\label{sec:linstsu} In order to prove Theorem \ref{thm:main} (ii) from Theorem \ref{thm:main2} (ii), we need a limit theorem for the linear statistic $\sum_i g(\mu_i)$. It is a well-known result in random matrix theory that for mean-zero Wigner matrices (i.e. $J=0$ case), the linear statistics converge to Gaussian distributions with scale $O(1)$ instead of the classical diffusive $O(N^{1/2})$ scale for the sum of independent random variables \cite{Johansson98, SiSo, BS2004, BY2005, LP}. 
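As a numerical illustration (not used in any proof), the dichotomy \eqref{spikedr1fl} and the $O(1)$ fluctuations of linear statistics are already visible at moderate $N$. The following sketch -- assuming standard Gaussian off-diagonal entries, zero diagonal, and $J'=0$, as a special case of Definition \ref{def:M} -- samples the spectrum of $M$ for a subcritical and a supercritical value of $J$:
{\small
\begin{verbatim}
import numpy as np

# Hypothetical illustration (not from the paper): for the rank-1 spiked
# Wigner matrix M = A/sqrt(N) + (J/N)*ones, the largest eigenvalue stays
# near 2 for J<1 but becomes an outlier near J+1/J for J>1, while the
# linear statistic sum_i mu_i^2 - N fluctuates on scale O(1) for either J.
rng = np.random.default_rng(0)
N, trials = 400, 100

def eigenvalues(J):
    A = np.triu(rng.standard_normal((N, N)), 1)
    A = A + A.T                                   # symmetric, zero diagonal
    return np.linalg.eigvalsh(A / np.sqrt(N) + J / N * np.ones((N, N)))

for J in (0.5, 2.0):
    samples = [eigenvalues(J) for _ in range(trials)]
    mu1 = np.array([ev[-1] for ev in samples])    # eigvalsh sorts ascending
    lin = np.array([np.sum(ev ** 2) - N for ev in samples])
    print(f"J={J}: mean mu_1 = {mu1.mean():.3f} (J+1/J = {J + 1/J:.2f}),"
          f" std mu_1 = {mu1.std():.3f}, std of sum mu_i^2 - N = {lin.std():.3f}")
\end{verbatim}
}
For $J=0.5$ the sample mean of $\mu_1$ should be close to $2$, and for $J=2.0$ close to $2.5$, with visibly larger fluctuations in the latter case; the spread of the linear statistic is essentially independent of $N$ in both cases.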
The main technical component of this paper is the central limit theorem for the linear statistics of Wigner matrix with non-zero mean (i.e. $J>0$ case). The next theorem shows that the spike (i.e. $J>0$) only changes the mean of the limiting Gaussian distribution; the variance of the Gaussian distribution is same for all $J\ge 0$. We remark that the change of the mean due to the spike is already known for spiked sample covariance matrices \cite{WSY, PMC}. We prove the following result for $J>0$. Set \mathrm{b}egin{equation} \label{Chebyshev formula} \tau_\ell(\varphi)= {\mathfrak a}c1{\pi} \int_{-2}^2 \varphi(x) {\mathfrak a}c{T_\ell(x/2)}{\sqrt{4-x^2}} \, \mathrm{d} x = {\mathfrak a}c1{2\pi} \int_{-\pi}^\pi \varphi(2\cos\theta) \cos(\ell\theta) \, \mathrm{d} \theta \end{equation} for $\ell=0,1,2,\dots$, where $T_\ell(t)$ are the Chebyshev polynomials of the first kind; $T_0(t)=1$, $T_1(t)=t$, $T_2(t)=2t^2-1$, $T_3(t)=4t^3-3t$, $T_4(t)=8t^4-8t^2+1$, etc. \mathrm{b}egin{thm}[Linear statistics of Wigner matrix with non-zero mean]\label{thm:linear} Let $M$ be an $N \times N$ Wigner matrix with non-zero mean as in Definition \ref{def:M}. Denote by $\mu_1 \geq \mu_2 \geq \dots \geq \mu_N$ the eigenvalues of $M$. Set \mathrm{b}egin{equation} \label{hat J} \mathrm{w}idehat J = \mathrm{b}egin{cases} J + J^{-1} & \text{ if } J>1 \,, \\ 2 & \text{ if } J\leq 1 \,. \end{cases} \end{equation} Then, for any function $\varphi : \R \to \R$ that is analytic in an open neighborhood of $[-2, \mathrm{w}idehat{J} \, ]$ and has compact support, the random variable \mathrm{b}egin{equation} T_N(\varphi) := \sum_{i=1}^N \varphi(\mu_i) - N \int_{-2}^2 \varphi(x) {\mathfrak a}c{\sqrt{4-x^2}}{2\pi} \mathrm{d} x \end{equation} converges in distribution to the Gaussian distribution with mean $M(\varphi)$ and variance $V(\varphi)$, where \mathrm{b}egin{equation} \label{eq:Mvarfo} \mathrm{b}egin{split} M(\varphi) &= {\mathfrak a}c{1}{4} \left( \varphi(2) + \varphi(-2) \right) -{\mathfrak a}c{1}{2} \tau_0(\varphi) + J' \tau_1(\varphi) + (w_2-2) \tau_2(\varphi) + \left( W_4 - 3 \right) \tau_4(\varphi) \\ &\qquad + {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint \varphi \left(-s-{\mathfrak a}c{1}{s} \right) {\mathfrak a}c{J^2 s}{1 + Js} \mathrm{d} s \end{split} \end{equation} and \mathrm{b}egin{equation} \label{eq:Vvarfo} V(\varphi) = (w_2 -2) \tau_1(\varphi)^2 + (W_4 -3) \tau_2(\varphi)^2 + 2 \sum_{\ell=1}^{\infty} \ell \tau_{\ell}(\varphi)^2. \end{equation} The contour for the integral in \eqref{eq:Mvarfo} is any simple closed contour containing $0$ inside in the slit disk $\{ |s|<1\}\setminus [-1, -1/J]$ in which $\varphi \left(-s-{\mathfrak a}c{1}{s} \right)$ is analytic. (The analyticity condition of $\varphi$ implies that there is such a contour.) \end{thm} Note that the variance does not depend on $J$ and $J'$ but the mean does. Among various methods of studying the linear statistics in random matrix theory, we follow the method of Bai and Silverstein, and Bai and Yao \cite{BS2004, BY2005} to prove the above result. Specifically, we extend the analysis of \cite{BY2005} to the $J>0$ case. Let $\rho_N={\mathfrak a}c{1}{N} \sum_{j=1}^N \delta_{\mu_j}$ be the empirical spectral distribution of $M$. As $N \to \infty$, $\rho_N$ converges to the semicircle measure $\rho$, defined by \mathrm{b}egin{equation} \label{semicirlw} \rho(\mathrm{d} x) = {\mathfrak a}c{1}{2\pi} \sqrt{4-x^2}_+ \mathrm{d} x. \end{equation} Let $s_N(z)$ and $s(z)$ be the Stieltjes transforms of $\rho_N$ and $\rho$, respectively, for $z \in \C^+$. 
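For concreteness, we recall the explicit formula (a standard fact, stated here with the convention $s(z)=\int_{\R} \frac{\rho(\mathrm{d} x)}{x-z}$): \begin{equation*} s(z) = \frac{-z+\sqrt{z^2-4}}{2}, \qquad s(z)^2 + z s(z) + 1 = 0, \end{equation*} where the branch of the square root is chosen so that $s(z) \sim -1/z$ as $z \to \infty$.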
Then, $T_N(\varphi)$ admits an integral representation, which can be easily converted to a contour integral that contains $\xi_N(z) := s_N(z)-s(z)$ in its integrand. The problem then reduces to showing that $\xi_N(z)$ converges to a Gaussian process $\xi(z)$. Due to the non-zero mean of the entries $M_{ij}$, the proof of convergence of $\xi_N(z)$ and the evaluation of the mean and the covariance of $\xi(z)$ become complicated. The main technical input we use in the estimate is the local semicircle law obtained in \cite{EKYY1}. Theorem \ref{thm:main} (ii) follows from Theorem \ref{thm:main2} and Theorem \ref{thm:linear} once we evaluate the mean and the variance of the limiting Gaussian distribution: see Section \ref{sec:proofofthmma}. \mathrm{b}egin{rem} It is direct to check that the integral in \eqref{eq:Mvarfo} can also be expressed as: \mathrm{b}egin{equation} \label{eq:Mvarfo22} \mathrm{b}egin{split} {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint \varphi \left(-s-{\mathfrak a}c{1}{s} \right) {\mathfrak a}c{J^2 s}{1 + Js} \mathrm{d} s = \mathrm{b}egin{cases} \sum_{\ell=2}^{\infty} J^{\ell} \tau_{\ell}(\varphi) & \text{ if } J<1,\\ {\mathfrak a}c{1}{2} \varphi(2) -{\mathfrak a}c{1}{2} \tau_0(\varphi) - \tau_1(\varphi) & \text{ if } J=1, \\ \varphi ( \mathrm{w}idehat J ) - \tau_0(\varphi) - \mathrm{w}idehat J \tau_1(\varphi) - \sum_{\ell=2}^{\infty} J^{-\ell} \tau_{\ell}(\varphi) & \text{ if } J>1. \end{cases} \end{split} \end{equation} \end{rem} \subsection{Transitions} \label{sec:transitions} It is interesting to consider the phase transition and the near-critical behaviors in Theorem \ref{thm:main} and \ref{thm:main2}. We have the following result for the transition between the spin glass regime (i) and the ferromagnetic regime (iii). For fixed $\mathrm{b}eta > 1/2$, consider $J$ depending on $N$ as \mathrm{b}egin{equation} \label{Jscaletr} J = 1+wN^{-1/3}. \end{equation} Then for each $w\in \R$, the asymptotic result \eqref{eq:thmspin1low002} still holds. Now in the theory of spiked random matrices, the distribution of $\mu_1$ is known to have the transition under the scaling \eqref{Jscaletr}: \mathrm{b}egin{equation} \label{Jscaletr2} N^{2/3} \left( \mu_1- 2 \right) \Rightarrow \TW_{1, w} \end{equation} where $\TW_{1,w}$ is a one-parameter family of random variables with the distribution function obtained in Theorems 1.5 and 1.7 of \cite{Bloemendal-Virag1}. See also \cite{Mo} for the Gaussian case and \cite{Feral-Peche07} for a more general class of Wigner matrices. See Section \ref{sub:transition23}. For other transitions, by matching the fluctuation scales, we expect that the critical window for the transition between the paramagnetic regime (ii) and the ferromagnetic regime (iii) is $J = {\mathfrak a}c{1}{2\mathrm{b}eta} + O(N^{-1/2})$ for each $\mathrm{b}eta<1$ and that of the transition between the spin glass regime (i) and the paramagnetic regime (ii) is $\mathrm{b}eta = {\mathfrak a}c{1}{2} + O({\mathfrak a}c{\sqrt{\log N}}{N^{1/3}})$ for each $J<1$. However, the analysis of these transition regimes is yet to be done. \subsection{Organization} The rest of paper is organized as follows. In Section \ref{sec:proofofthmma} and Section \ref{sec:proofmain}, we prove the main results, Theorem \ref{thm:main} and Theorem \ref{thm:main2}, respectively. Theorem \ref{thm:linear} (linear statistics) is proved in Section \ref{sec:lspr} assuming Proposition \ref{prop:gaussian xi} and Lemma \ref{lem:Gamolrsm}. 
Proposition \ref{prop:gaussian xi} is proved in Sections \ref{sec:outline}--\ref{sec:miscellanies}. Lemma \ref{lem:Gamolrsm} is proved in Section \ref{sub:nonrandom}. Certain technical large deviation estimates are proved in Section \ref{subsec:largdeves}. \begin{nrem} Throughout the paper we use $C$ or $c$ to denote a constant that is independent of $N$. For the convenience of the presentation, we may use the same notation $C$ or $c$ for constants that differ from one place to another, as long as they do not depend on $N$. \end{nrem} \begin{nrem} The notation $\Rightarrow$ denotes the convergence in distribution as $N\to \infty$. \end{nrem} \begin{nrem} For random variables $X$ and $Y$ depending on $N$, we use the notation $X \prec Y$ to mean that $$ \p(|X| > N^{\epsilon} |Y|) < N^{-D} $$ for any (small) $\epsilon > 0$ and (large) $D > 0$. The relation $\prec$ is transitive and satisfies the arithmetic rules, e.g., if $X_1 \prec Y_1$ and $X_2 \prec Y_2$ then $X_1 + X_2 \prec Y_1 + Y_2$ and $X_1 X_2 \prec Y_1 Y_2$. We will also use the notation $X = {\mathcal O}(N^p)$ if $X \prec N^p$ for a constant $p$. \end{nrem} \subsubsection*{Acknowledgments} We would like to thank Zhidong Bai, Zhigang Bao, Wei-Kuo Chen, Jack Silverstein, and Jianfeng Yao for several useful communications. The work of Jinho Baik was supported in part by NSF grant DMS1361782. The work of Ji Oon Lee was supported in part by Samsung Science and Technology Foundation project number SSTF-BA1402-04. \section{Proof of Theorem \ref{thm:main}} \label{sec:proofofthmma} We already discussed in Section \ref{sec:resultsm} how Theorem \ref{thm:main} (i), (iii) follow from Theorem \ref{thm:main2} and \eqref{spikedr1fl}. We now check that Theorem \ref{thm:main} (ii) follows from Theorem \ref{thm:main2} and Theorem \ref{thm:linear}. In Theorem \ref{thm:linear}, we use the function $\varphi(x)=g(x)= \log\left( 2\beta+ \frac{1}{2\beta} -x\right)$. We first evaluate $M(\varphi)$ and $V(\varphi)$ in Theorem \ref{thm:linear} for this function. The variance $V(\varphi)$ does not depend on $J$ and $J'$, and hence it is the same as in the $J=J'=0$ case. The value $\sigma^2 = \frac{1}{4} V(\varphi)$ was evaluated in (3.13) of \cite{BaikLee} (see the second-to-last sentence in Section 5 of \cite{BaikLee}); this is equal to $\alpha_1$ in \eqref{eq:alpha1}. Now consider $M(\varphi)$. For the function $\varphi=g$, it was shown in (A.17) of \cite{BaikLee} that \begin{equation} \tau_0(\varphi)=-\log(2\beta), \quad \tau_1(\varphi)=-2\beta, \quad \tau_2(\varphi)=-2\beta^2, \quad \tau_4(\varphi)= -4\beta^4. \end{equation} We now evaluate \begin{equation} \label{eq:Mvarfospecf} \begin{split} \frac{1}{2\pi \mathrm{i}} \oint \varphi \left(-s-\frac{1}{s} \right) \frac{J^2 s}{1 + Js} \mathrm{d} s = \frac{1}{2\pi \mathrm{i}} \oint \log \left(2\beta+ \frac{1}{2\beta}+s+\frac{1}{s} \right) \frac{J^2 s}{1 + Js} \mathrm{d} s. \end{split} \end{equation} Set $B=2\beta$. Then $B<\min\{1, 1/J\}$ since we are in the paramagnetic regime. The above integral is \begin{equation} F(B)= \frac{1}{2\pi \mathrm{i}} \oint_{|s|=r} \log \left(B+\frac{1}{B}+s+\frac{1}{s} \right) \frac{J^2 s}{1 + Js} \mathrm{d} s \end{equation} where we can take $r$ to be any number satisfying $B<r<\min\{1, 1/J\}$.
Its derivative is \mathrm{b}egin{equation} \label{F derivative} F'(B)= {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint_{|s|=r} {\mathfrak a}c{(B^2-1)J^2 s^2}{B(B+s)(1+Bs)(1 + Js)} \mathrm{d} s =- {\mathfrak a}c{J^2B}{1-JB} = J - {\mathfrak a}c{J}{1-JB} \end{equation} by the calculus of residue: the one pole inside the contour is $s=-B$. Hence $F(B)=JB+ \log(1-JB)+C$ for a constant $C$ for every $B$ satisfying $0<B<\min\{1, 1/J\}$. To find the constant $C$, note that \mathrm{b}egin{equation} F(B)= {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint_{|s|=r} \log \left( {\mathfrak a}c{(B+s)(1+Bs)}{Bs} \right) {\mathfrak a}c{J^2 s}{1 + Js} \mathrm{d} s = {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint_{|s|=r} \log \left( {\mathfrak a}c{(B+s)(1+Bs)}{s} \right) {\mathfrak a}c{J^2 s}{1 + Js} \mathrm{d} s \end{equation} since the integral of ${\mathfrak a}c{J^2 s}{1 + Js}$ over the circle $|s|=r$ is zero. Hence $F(B)\to 0$ as $B\to 0$. This implies that $C=0$ and therefore $F(B)= JB+ \log(1-JB)$. This implies that \mathrm{b}egin{equation} {\mathfrak a}c{1}{2\pi \mathrm{i}} \oint \varphi \left(-s-{\mathfrak a}c{1}{s} \right) {\mathfrak a}c{J^2 s}{1 + Js} \mathrm{d} s =2\mathrm{b}eta J + \log(1-2\mathrm{b}eta J). \end{equation} Therefore, \mathrm{b}egin{equation} M(\varphi)= {\mathfrak a}c{1}{2} \log\left( 1-4\mathrm{b}eta^2 \right) -2\mathrm{b}eta J' - 2\mathrm{b}eta^2 (w_2-2) - 4\mathrm{b}eta^4 \left( W_4 - 3 \right) + 2\mathrm{b}eta J + \log(1-2\mathrm{b}eta J). \end{equation} We also have (see (A.5) of \cite{BaikLee}) \mathrm{b}egin{equation} \int_{-2}^2 \log\left( 2\mathrm{b}eta+{\mathfrak a}c1{2\mathrm{b}eta}-x \right) \rho(\mathrm{d} x) = 2\mathrm{b}eta^2-\log\left(2\mathrm{b}eta \right). \end{equation} Furthermore, applying Theorem \ref{thm:linear} to function $-g''(x)$, we have \mathrm{b}egin{equation} - {\mathfrak a}c1{N} \sum_i g''(\mu_i) \to \int_{-2}^2 {\mathfrak a}c1{(2\mathrm{b}eta+{\mathfrak a}c1{2\mathrm{b}eta}-x)^2} \rho(\mathrm{d} x) = {\mathfrak a}c{4\mathrm{b}eta^2}{1-4\mathrm{b}eta^2}. \end{equation} in probability (see (A.8) of \cite{BaikLee} for the equality). Therefore, Theorem \ref{thm:main2} (ii) implies that \mathrm{b}egin{equation} N\left( F_N-\mathrm{b}eta^2 \right) \Rightarrow \mathcal{N} \left(f_1, \mathrm{a}lpha_1 \right) \end{equation} where \mathrm{b}egin{equation} f_1= -{\mathfrak a}c12 M(\varphi)+ \log(2\mathrm{b}eta) - {\mathfrak a}c12 \log \left( {\mathfrak a}c{4\mathrm{b}eta^2}{1-4\mathrm{b}eta^2} \right), \qquad \mathrm{a}lpha_1= {\mathfrak a}c14 V(\varphi). \end{equation} These are same as \eqref{eq:f1} and \eqref{eq:alpha1}. The proof is complete. \section{Proof of Theorem \ref{thm:main2}} \label{sec:proofmain} As we mentioned before, the leading order limit of the free energy \eqref{Flimit} was obtained in \cite{KosterlitzThoulessJones}. This is based on the following integral representation for the quenched case, i.e. for fixed matrix $M$. \mathrm{b}egin{lem}[\cite{KosterlitzThoulessJones}; also Lemma 1.3 of \cite{BaikLee}] \label{lem:inverse laplace} Let $M$ be an $N \times N$ symmetric matrix with eigenvalues $\mu_1 \geq \mu_2 \geq \dots \geq \mu_N$. 
Then \mathrm{b}egin{equation} \label{integral representation0} \int_{S_{N-1}} e^{\mathrm{b}eta \langle \mathrm{b}oldsymbol{\sigma}, M \mathrm{b}oldsymbol{\sigma} \rangle }\mathrm{d} \omega_N(\mathrm{b}oldsymbol{\sigma}) = C_N \int_{\gamma - \mathrm{i} \infty}^{\gamma + \mathrm{i} \infty} e^{{\mathfrak a}c{N}{2} G(z)} \mathrm{d} z, \quad G(z) = 2\mathrm{b}eta z - {\mathfrak a}c{1}{N} \sum_i \log (z - \mu_i), \end{equation} where $\gamma$ is any constant satisfying $\gamma>\mu_1$, the integration contour is the vertical line from $\gamma-\mathrm{i} \infty$ to $\gamma+\mathrm{i} \infty$, the $\log$ function is defined in the principal branch, and \mathrm{b}egin{equation} C_N = {\mathfrak a}c{\Gamma(N/2)}{2\pi \mathrm{i} (N\mathrm{b}eta)^{N/2-1}}. \end{equation} Here $\Gamma(z)$ denotes the Gamma function. \end{lem} Now for the spin system, the eigenvalues $\mu_i$ are random, but using random matrix theory, there are precise estimates on these random variables, and we can still apply the method of steepest-descent. A formal application of the method of steepest-descent was done in \cite{KosterlitzThoulessJones} and obtained the leading order term. In \cite{BaikLee}, we supply necessary estimates and made the result of \cite{KosterlitzThoulessJones} rigorous when $J=0$. We furthermore, extended the analysis to the next order term and obtained limit theorems, Theorem \ref{thm:main} when $J=0$. It is not explicitly stated in \cite{BaikLee}, but the analysis in it proved Theorem \ref{thm:main2} for $J=0$ as well. We now follow the similar approach and prove Theorem \ref{thm:main2} for $J>0$. \subsection{Rigidity estimates of the eigenvalues} Let $M$ be a Wigner matrix $M$ with non-zero mean in Definition \ref{def:M}. By definition, \mathrm{b}egin{enumerate}[(a)] \item For $i \neq j$, $\E M_{ij} = J N^{-1}$, $\E |M_{ij}|^2 = N^{-1} + J^2 N^{-2}$, $\E |A_{ij}|^4 = W_4 N^{-2} + O(N^{-{\mathfrak a}c{3}{2}})$. In addition, for Hermitian case, $\E M_{ij}^2 = J^2 N^{-2}$. \item For $i=j$, $\E M_{ii} = J' N^{-1}$, $\E |M_{ii}|^2 = W_2 N^{-1} + (J'N^{-1})^2$. \end{enumerate} For $M$, we have the following precise rigidity estimate for all eigenvalues other than the largest one. \mathrm{b}egin{lem}[Theorem 2.13 of \cite{EKYY1}, rigidity] \label{lem:rigidity} For a positive integer $k \in [1, N]$, let $\hat k := \min \{ k, N+1-k \}$. Let $\gamma_k$ be the classical location defined by \mathrm{b}egin{equation} \label{eq:classicallocationdef} \int_{\gamma_k}^{\infty} \mathrm{d} \rho_{sc} = {\mathfrak a}c{1}{N} \left( k - {\mathfrak a}c{1}{2} \right). \end{equation} Then, \mathrm{b}egin{equation} \label{eq:rigidity} |\mu_k - \gamma_k| \prec \hat k^{-1/3} N^{-2/3} \end{equation} for all $k=2,3,\dots, N$. \end{lem} The largest eigenvalue $\mu_1$ depends on $J$ and we have the following Dichotomy: \mathrm{b}egin{lem}[Theorem 6.3 of \cite{KY2013_iso}] \label{lem:largest eigenvalue} \mbox{ } \mathrm{b}egin{enumerate}[(a)] \item If $J \leq 1$, \mathrm{b}egin{equation} \label{sp1sub} |\mu_1 - 2| \prec N^{-2/3} \end{equation} \item If $J > 1$, \mathrm{b}egin{equation} \left| \mu_1 - (J + {\mathfrak a}c{1}{J}) \right| \prec \sqrt{{\mathfrak a}c{J-1 + N^{-1/3}}{N}}. \end{equation} \end{enumerate} \end{lem} \subsection{Proof} We apply the method of steepest-descent to the integral in Lemma \ref{lem:inverse laplace}. 
It is easy to check that $G'(z)$ is an increasing function of $z$ on $(\mu_1, \infty)$, hence there exists a unique $\gamma \in (\mu_1, \infty)$ satisfying the equation $G'(\gamma) = 0$: see Lemma 4.1 of \cite{BaikLee}. We see in the analysis below that in the spin glass regime and the ferromagnetic regime, $\gamma$ is close to $\mu_1$ with distance of order $O(N^{\epsilon-1})$. On the other hand, for the paramagnetic regime, $\gamma$ is away from $\mu_1$ with distance of order $O(1)$. \subsubsection{Spin glass regime: $\mathrm{b}eta > {\mathfrak a}c{1}{2}$ and $J < 1$} \label{sec:spinglass} \mathrm{b}egin{proof}[Proof of Theorem \ref{thm:main2} (i)] In Theorem 2.11 of \cite{BaikLee}, we obtained a Tracy-Widom limit theorem, Theorem \ref{thm:main2} (i), for general symmetric random matrix $M$ without assuming that the mean is zero. This theorem assumes three conditions, Condition 2.3 (Regularity of measure), Condition 2.4 (Rigidity of eigenvalues), and Condition 2.6 (Tracy-Widom limit of the largest eigenvalue). The proof actually establishes Theorem \ref{thm:main2} (i) under Condition 2.3 and Condition 2.4 first, which then implies Theorem \ref{thm:main} (i) if we add Condition 2.6: See (6.3) in \cite{BaikLee} and then the sentence below it. Now for Wigner matrix with non-zero mean $M$, Condition 2.3 and Condition 2.4 are satisfied clearly from Lemma \ref{lem:rigidity} and Lemma \ref{lem:largest eigenvalue}, including the largest eigenvalue. Hence Theorem \ref{thm:main2} (i) is proved. \end{proof} \subsubsection{Paramagnetic regime: $\mathrm{b}eta < {\mathfrak a}c{1}{2}$ and $\mathrm{b}eta < {\mathfrak a}c{1}{2J}$} \mathrm{b}egin{proof}[Proof of Theorem \ref{thm:main2} (ii)] In Theorem 2.10 of \cite{BaikLee}, we proved a Gaussian limit theorem, Theorem \ref{thm:main} (ii), for general symmetric random matrix $M$ without assuming that the mean is zero. This theorem assumes three conditions, Condition 2.3 (Regularity of measure), Condition 2.4 (Rigidity of eigenvalues), and Condition 2.5 (Linear statistics of the eigenvalues). Similar to the spin glass regime, the proof actually establishes Theorem \ref{thm:main2} (ii) under Condition 2.3 and Condition 2.4 first, which then implies Theorem \ref{thm:main} (ii) if we add Condition 2.5: See (5.27) and (5.29) in \cite{BaikLee}. Now for Wigner matrix with non-zero mean $M$, Condition 2.4 is not satisfied when $J>1$ due to Lemma \ref{lem:largest eigenvalue}. However, we can easily modify the proof of Theorem 2.10 of \cite{BaikLee} for the paramagnetic conditions as we see now. The case $J \leq 1$ follows from Theorem 2.10 of \cite{BaikLee} directly, but we consider this case as well here. We choose $\gamma$ in Lemma \ref{lem:inverse laplace} as the unique critical value of $G(z)$ on the part of the real line $z \in (\mu_1, \infty)$. In order to evaluate the integral in \eqref{integral representation0}, we introduce a deterministic function \mathrm{b}egin{equation} \label{deterministic G} \mathrm{w}idehat G(z) = 2\mathrm{b}eta z - \int_{-2}^2 \log(z-x) \mathrm{d} \rho(x) \end{equation} where $\rho$ is the semicircle measure. Let $\mathrm{w}idehat \gamma$ be the critical point of $\mathrm{w}idehat G$ in the interval $(2, \infty)$. As in (A.4) of \cite{BaikLee}, it can be easily checked that \mathrm{b}egin{equation} \label{deterministic gamma} \mathrm{w}idehat \gamma = 2\mathrm{b}eta + {\mathfrak a}c{1}{2\mathrm{b}eta}. \end{equation} Recall the definition of $\mathrm{w}idehat J$ in \eqref{hat J}. 
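As a quick sanity check of \eqref{deterministic gamma} (a sketch of the computation referred to in (A.4) of \cite{BaikLee}), note that $\widehat G'(z) = 2\beta - \int_{-2}^2 \frac{\rho(\mathrm{d} x)}{z-x}$ and that, for $z = b + \frac{1}{b}$ with $b > 1$,
\begin{equation}
\int_{-2}^2 \frac{\rho(\mathrm{d} x)}{z - x} = \frac{z - \sqrt{z^2-4}}{2} = \frac{(b + \frac1b) - (b - \frac1b)}{2} = \frac{1}{b},
\end{equation}
so that taking $b = \frac{1}{2\beta} > 1$ (recall that $\beta < \frac12$ in this regime) indeed gives $\widehat G'\big(2\beta + \frac{1}{2\beta}\big) = 2\beta - 2\beta = 0$.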
Since $\beta < \frac12$ and $\beta < \frac{1}{2J}$ in the paramagnetic regime, we find that
\begin{equation}
\widehat \gamma > \widehat J,
\end{equation}
hence $\widehat \gamma > \mu_1$ with high probability. Recall that $\gamma_1$ is the classical location of the largest eigenvalue as defined in \eqref{eq:classicallocationdef}. Since $|\mu_1 - \gamma_1| = O(1)$ with high probability, Lemma 5.1 and Corollary 5.2 of \cite{BaikLee} hold in this case as well. Then Corollary 5.3 and Lemma 5.4 of \cite{BaikLee} also hold, which implies the calculations up to (5.27) and (5.29) of \cite{BaikLee}. This proves Theorem \ref{thm:main2} (ii).
\end{proof}

\subsubsection{Ferromagnetic regime: $J > 1$ and $\beta > \frac{1}{2J}$}

In this case, $\widehat \gamma$ in \eqref{deterministic gamma} satisfies $\widehat \gamma < \widehat J$ since $\beta > \frac{1}{2J}$, and hence the proof for the paramagnetic regime does not apply. Instead, this case is similar to the spin glass regime, and we modify the proof of Theorem 2.11 of \cite{BaikLee}. The following lemma shows that $\gamma$ is close to $\mu_1$, up to order $1/N$. It is analogous to Lemma 6.1 of \cite{BaikLee}.

\begin{lem} \label{lem:case2 gamma}
Let $c>0$ be a constant such that $2\beta - \frac{1}{J} > c$ and $J - 1 > c$. Then,
\begin{equation}
\frac{1}{3\beta N} \leq \gamma - \mu_1 \leq \frac{2}{cN}
\end{equation}
with high probability.
\end{lem}

\begin{proof}
Note that
\begin{equation}
G'(z) = 2\beta - \frac{1}{N} \sum_i \frac{1}{z-\mu_i}.
\end{equation}
Since $G'(z)< 2\beta - \frac{1}{N(z-\mu_1)}$, we find that $G'(\mu_1 + \frac{1}{3\beta N}) < 0$. Since $G'(z)$ is an increasing function of $z$ on $(\mu_1, \infty)$, it suffices to show that $G'(\mu_1 + \frac{2}{cN}) > 0$. In order to show this, we first notice that
\begin{equation}
G'(z) = 2\beta - \frac{1}{N} \frac{1}{z-\mu_1} - \frac{1}{N} \sum_{i=2}^N \frac{1}{z-\mu_i} \geq 2\beta - \frac{1}{N} \frac{1}{z-\mu_1} - \frac{1}{N} \sum_{i=2}^N \frac{1}{\mu_1 -\mu_i}
\end{equation}
for $z \geq \mu_1$. From Lemma \ref{lem:rigidity}, we may assume that the eigenvalues $\mu_k \, (k \geq 2)$ satisfy the rigidity estimate \eqref{eq:rigidity}. Thus, for any $\epsilon > 0$, if $z > \mu_1 > 2$,
\begin{equation}
\begin{split}
G'(z) &\geq 2\beta - \frac{1}{N} \frac{1}{z-\mu_1} - \frac{1}{N} \sum_{i=2}^N \left( \frac{1}{\mu_1 -\gamma_i} + \hat{i}^{-1/3} N^{-2/3+\epsilon} \right) \\
&\geq 2\beta - \frac{1}{N} \frac{1}{z-\mu_1} - \int_{-2}^2 \frac{\mathrm{d} \rho(x)}{\mu_1 - x} - C N^{-1+\epsilon} = 2\beta - \frac{1}{N} \frac{1}{z-\mu_1} - \frac{\mu_1 -\sqrt{\mu_1^2 -4}}{2} - C N^{-1+\epsilon}.
\end{split}
\end{equation}
From Lemma \ref{lem:largest eigenvalue}, we thus find that, for any $0 < \delta < \frac{c}{4}$,
\begin{equation}
\begin{split}
G' \left(\mu_1 + \frac{2}{cN} \right) &\geq 2\beta - \frac{c}{2} - \frac{1}{2} \left( J + \frac{1}{J} - \sqrt{ \left( J + \frac{1}{J} \right)^2 -4} \right) -\delta - C N^{-1+\epsilon} \\
&\geq 2\beta - \frac{1}{J} - c > 0
\end{split}
\end{equation}
with high probability. This proves the lemma.
\end{proof}

The following lemma is a modification of Lemma 6.2 of \cite{BaikLee}. The proof is simpler here because $\mu_1$ is separated from $\mu_2$ by a distance of order $O(1)$.

\begin{lem} \label{lem:case2G}
Assume that there exists a constant $c > 0$ such that $2\beta - \frac{1}{J} > c$ and $J - 1 > c$. Let $\gamma$ be the solution of the equation $G'(\gamma) = 0$ as in Lemma \ref{lem:case2 gamma}. Then, for any $0 < \epsilon < 1$,
\begin{equation}
G(\gamma) = \widehat{G}(\mu_1) + O(N^{-1+\epsilon})
\end{equation}
with high probability. (See \eqref{deterministic G} for the definition of $\widehat{G}$.) Moreover, there exist constants $C_0, C_1 > 0$ such that
\begin{equation}
C_0 N^{\ell-1} \leq \frac{(-1)^{\ell}}{(\ell-1)!} G^{(\ell)}(\gamma) \leq C_1^{\ell} N^{\ell-1}
\end{equation}
for all $\ell = 2, 3, \dots$ with high probability. Here, $C_0$ and $C_1$ do not depend on $\ell$.
\end{lem}

\begin{proof}
We assume that the eigenvalues $\mu_k \, (k \geq 2)$ satisfy the rigidity estimate \eqref{eq:rigidity}. Then, from Lemma \ref{lem:case2 gamma},
\begin{equation}
G(\gamma) = 2\beta \mu_1 - \int_{-2}^2 \log (\mu_1 -x) \mathrm{d} \rho(x) + O(N^{-1+\epsilon}) = \widehat{G}(\mu_1) + O(N^{-1+\epsilon})
\end{equation}
with high probability. Thus, the first part of the lemma holds. For the second part of the lemma, note that, with high probability, there exists a constant $\delta > 0$ such that $\gamma - \mu_i > \delta$ for all $i = 2, 3, \dots, N$. Since
\begin{equation}
G^{(\ell)}(\gamma) = \frac{(-1)^{\ell} (\ell-1)!}{N (\gamma - \mu_1)^{\ell}} + \frac{(-1)^{\ell} (\ell-1)!}{N} \sum_{i=2}^N \frac{1}{(\gamma - \mu_i)^{\ell}},
\end{equation}
we conclude that the second part of the lemma holds.
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm:main2} (iii)]
Using the above two lemmas, the proof of Lemma 6.3 of \cite{BaikLee} applies without any change, and we find that there exists $K \equiv K(N)$ satisfying $N^{-C} < K < C$ for some constant $C > 0$ such that
\begin{equation}
\int_{\gamma - \mathrm{i} \infty}^{\gamma + \mathrm{i} \infty} e^{\frac{N}{2} G(z)} \mathrm{d} z = \mathrm{i} e^{\frac{N}{2} G(\gamma)} K
\end{equation}
with high probability. This implies that, as in (6.61) of \cite{BaikLee},
\begin{equation}
Z_N = \frac{\sqrt{N} \beta}{\mathrm{i} \sqrt{\pi} (2\beta e)^{N/2}} e^{\frac{N}{2} G(\gamma)} K (1 + O(N^{-1}))
\end{equation}
with high probability. Recall that $\widehat J = J + \frac{1}{J}$.
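Before carrying out the expansion, we record (as a brief sketch, using the logarithmic potential of the semicircle measure as in (A.5) of \cite{BaikLee}) the values of $\widehat G$ and $\widehat G'$ at $\widehat J$ that enter the next display: for $J > 1$,
\begin{equation}
\int_{-2}^2 \log\big(\widehat J - x\big) \, \rho(\mathrm{d} x) = \log J + \frac{1}{2J^2}, \qquad \int_{-2}^2 \frac{\rho(\mathrm{d} x)}{\widehat J - x} = \frac{1}{J},
\end{equation}
hence
\begin{equation}
\widehat G(\widehat J) = 2\beta \left( J + \frac{1}{J} \right) - \log J - \frac{1}{2J^2}, \qquad \widehat G'(\widehat J) = 2\beta - \frac{1}{J}.
\end{equation}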
Then, using Lemma \ref{lem:case2G} and evaluating $\widehat{G}$ as in (A.5) of \cite{BaikLee}, we find that
\begin{equation}
\begin{split} \label{eq:case2FN}
F_N &= \frac{1}{2} [G(\gamma) - 1 - \log (2\beta)] + O(N^{-1} \log N) = \frac{1}{2} [\widehat{G}(\mu_1) - 1 - \log (2\beta)] + O(N^{-1} \log N) \\
&= \frac{1}{2} [\widehat{G}(\widehat J) - 1 - \log (2\beta)] + \frac{1}{2} \widehat{G}' (\widehat J) \cdot (\mu_1 -\widehat J) + O(N^{-1} \log N)\\
&= \beta \left( J + \frac{1}{J} \right) - \frac{1}{4J^2} - \frac{1}{2} \log (2\beta J) - \frac{1}{2} + \left( \beta - \frac{1}{2J} \right) (\mu_1 - \widehat J) + O(N^{-1} \log N),
\end{split}
\end{equation}
hence
\begin{equation} \label{eq:case2fluctuation}
F_N - F(\beta) = \left( \beta - \frac{1}{2J} \right) (\mu_1 - \widehat J) + O(N^{-1} \log N)
\end{equation}
with high probability. This completes the proof.
\end{proof}

\subsubsection{Transition between the spin glass regime and the ferromagnetic regime} \label{sub:transition23}

Consider a fixed $\beta > 1/2$ and $J = 1 + w N^{-1/3}$. As in Section \ref{sec:spinglass}, we can prove Theorem \ref{thm:main2} (i) assuming Condition 2.3 (Regularity of measure) and Condition 2.4 (Rigidity of eigenvalues) of \cite{BaikLee}, and Conditions 2.3 and 2.4 are satisfied by Lemma \ref{lem:rigidity} and Lemma \ref{lem:largest eigenvalue}. As discussed in Section \ref{sec:transitions}, for the Gaussian case
\begin{equation}
N^{2/3} \left( \mu_1- 2 \right) \Rightarrow \TW_{1, w}
\end{equation}
where $\TW_{1,w}$ is a one-parameter family of random variables with the distribution function obtained in Theorems 1.5 and 1.7 of \cite{Bloemendal-Virag1}. For a non-Gaussian Wigner matrix with non-zero mean, the limit theorem can be proved by applying the Green function comparison method based on the Lindeberg replacement strategy as in Theorems 2.4 and 6.3 of \cite{EYY}. The proof of Theorem 6.3 of \cite{EYY} can be reproduced by assuming the rigidity of the eigenvalues and the local semicircle law, which also hold for a Wigner matrix with non-zero mean by Lemmas \ref{lem:rigidity}, \ref{lem:largest eigenvalue}, and \ref{lem:local law} (see also Theorem 3.3 and Lemma 3.5 of \cite{LY} for more details on the case where the variance of the diagonal entries does not match that of the GOE).

\section{Linear statistics} \label{sec:lspr}

\subsection{Proof of Theorem \ref{thm:linear}} \label{sec:prooflinear}

For a function $\varphi$ that satisfies the assumptions of the theorem, we consider $T(\varphi)$, the weak limit of the random variable
\begin{equation}
T_N(\varphi) = \sum_{i=1}^N \varphi(\mu_i) - N \int_{-2}^2 \varphi(x) \frac{\sqrt{4-x^2}}{2\pi} \mathrm{d} x = N \int_{-\infty}^{\infty} \varphi(x) [\rho_N - \rho](\mathrm{d} x).
\end{equation}
Fix ($N$-independent) constants $a_- < -2$ and $a_+ > \widehat J$. Let $\Gamma$ be the rectangular contour whose vertices are $(a_- \pm \mathrm{i} v_0)$ and $(a_+ \pm \mathrm{i} v_0)$ for some $v_0 \in (0, 1]$.
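We briefly justify the contour representation below (a sketch, under the assumption that $\varphi$ is analytic in a neighborhood of the closed rectangle enclosed by $\Gamma$, which the contour manipulations in this section require): since $a_- < -2$ and $a_+ > \widehat J$, Lemma \ref{lem:rigidity} and Lemma \ref{lem:largest eigenvalue} imply that, with high probability, all eigenvalues $\mu_i$ and the support $[-2,2]$ of $\rho$ lie inside $\Gamma$, so that the Cauchy integral formula gives
\begin{equation}
\varphi(x) = \frac{1}{2\pi \mathrm{i}} \oint_{\Gamma} \frac{\varphi(z)}{z-x} \, \mathrm{d} z
\end{equation}
for every $x$ in the support of $\rho_N$ and of $\rho$.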
Then,
\begin{equation} \label{eq:contour representation}
T_N(\varphi) = \frac{N}{2\pi \mathrm{i}} \int_{\R} \oint_{\Gamma} \frac{\varphi(z)}{z-x} [\rho_N - \rho](\mathrm{d} x) \mathrm{d} z = -\frac{1}{2\pi \mathrm{i}} \oint_{\Gamma} \varphi(z) \xi_N (z) \mathrm{d} z
\end{equation}
where
\begin{equation} \label{xidefni}
\xi_N (z):= N \int_{\R} \frac1{x-z} (\rho_N - \rho)(\mathrm{d} x).
\end{equation}
Decompose $\Gamma$ into $\Gamma = \Gamma_u \cup \Gamma_d \cup \Gamma_l \cup \Gamma_r \cup \Gamma_0$, where
\begin{align}
\Gamma_u &= \{ z = x + \mathrm{i} v_0 : a_- \leq x \leq a_+ \}, \\
\Gamma_d &= \{ z = x - \mathrm{i} v_0 : a_- \leq x \leq a_+ \}, \\
\Gamma_l &= \{ z = a_- + \mathrm{i} y : N^{-\delta}\le |y|\leq v_0 \}, \\
\Gamma_r &= \{ z = a_+ + \mathrm{i} y : N^{-\delta}\le |y|\leq v_0 \}, \\
\Gamma_0 &= \{ z = a_- + \mathrm{i} y : |y| < N^{-\delta} \} \cup \{ z = a_+ + \mathrm{i} y : |y| < N^{-\delta} \},
\end{align}
for some sufficiently small $\delta > 0$. In Sections \ref{sec:outline}--\ref{sec:miscellanies}, we prove the following result for $\xi_N(z)$.

\begin{prop} \label{prop:gaussian xi}
Let
\begin{equation} \label{sstism}
s(z)=\int \frac1{x-z} \rho(\mathrm{d} x)= \frac{-z + \sqrt{z^2 -4}}{2}
\end{equation}
be the Stieltjes transform of the semicircle measure $\rho$. Fix a (small) constant $c > 0$ and a path ${\mathcal K} \subset \C^+$ such that $\im z > c$ for every $z \in {\mathcal K}$. Then, the process $\{ \xi_N(z): z \in {\mathcal K} \}$ converges weakly to a Gaussian process $\{ \xi(z): z \in {\mathcal K} \}$ with mean
\begin{equation} \label{eq:mean xi}
b(z) = \frac{s(z)^2}{1-s(z)^2} \left( -J' + \frac{J^2 s(z)}{1 + Js(z)} + (w_2 -1) s(z) + s'(z) s(z) + \left( W_4 - 3 \right) s(z)^3 \right)
\end{equation}
and covariance
\begin{equation} \label{eq:covariance xi}
\Gamma(z_i, z_j) = s'(z_i) s'(z_j) \left( (w_2 - 2) + 2(W_4 - 3) s(z_i) s(z_j) + \frac{2}{(1-s(z_i) s(z_j))^2} \right).
\end{equation}
\end{prop}

On the other hand, the following lemma is proved in Section \ref{sub:nonrandom}.

\begin{lem} \label{lem:Gamolrsm}
For sufficiently small $\delta > 0$,
\begin{equation} \label{nonrandom1}
\lim_{v_0 \searrow 0} \limsup_{N \to \infty} \int_{\Gamma_{\sharp}} \E |\xi_N(z) |^2 \mathrm{d} z = 0,
\end{equation}
where $\Gamma_{\sharp}$ can be $\Gamma_r$, $\Gamma_l$, or $\Gamma_0$.
\end{lem}

From the explicit formulas \eqref{eq:mean xi} and \eqref{eq:covariance xi}, it is straightforward to check that
\begin{equation} \label{nonrandom2}
\lim_{v_0 \searrow 0} \int_{\Gamma_{\sharp}} \E |\xi(z)|^2 \mathrm{d} z = 0.
\end{equation}
Combining Proposition \ref{prop:gaussian xi}, Lemma \ref{lem:Gamolrsm}, and \eqref{nonrandom2}, we obtain that $T_N(\varphi)$ converges in distribution to a Gaussian random variable $T(\varphi)$ with mean and variance
\begin{equation} \label{eq:Eintb}
\E [T(\varphi)]= -\frac{1}{2\pi \mathrm{i}} \oint_{\Gamma} \varphi(z) b(z) \mathrm{d} z, \qquad \var [T(\varphi)] = \frac{1}{(2\pi \mathrm{i})^2} \oint_{\Gamma} \oint_{\Gamma} \varphi(z_1) \varphi(z_2) \Gamma(z_1, z_2) \mathrm{d} z_1 \mathrm{d} z_2.
\end{equation}
These integrals are equal to $M(\varphi)$ and $V(\varphi)$ in \eqref{eq:Mvarfo} and \eqref{eq:Vvarfo}; see Lemma \ref{lem:meanvacopm} below. This completes the proof of Theorem \ref{thm:linear}.
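For later use, we also record two elementary identities for the Stieltjes transform \eqref{sstism}; a short verification is included here for the reader's convenience, as these identities are used repeatedly below. From \eqref{sstism}, $s(z)$ satisfies the quadratic equation $s^2 + zs + 1 = 0$, i.e., $z = -s - \frac{1}{s}$. Differentiating the quadratic equation in $z$ gives $2ss' + s + zs' = 0$, hence
\begin{equation}
s'(z) = \frac{-s}{2s+z} = \frac{-s}{s - \frac1s} = \frac{s^2}{1-s^2}, \qquad \text{or equivalently} \qquad s' = s^2 (1+s').
\end{equation}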
\begin{rem}
The covariance $\Gamma(z_i, z_j)$ in \eqref{eq:covariance xi} coincides with the one obtained in Proposition 4.1 of \cite{BY2005}. On the other hand, the mean $b_N(z)$ is different from the one in Proposition 3.1 of \cite{BY2005}.
\end{rem}

\subsection{Computation of the mean and variance of $T(\varphi)$} \label{sub:computation}

\begin{lem} \label{lem:meanvacopm}
We have $\E [T(\varphi)]= M(\varphi)$ and $\var [T(\varphi)] =V(\varphi)$.
\end{lem}

\begin{proof}
Consider $\var[T(\varphi)]$. Since $\Gamma(z_1, z_2)$ is the same as in the $J=J'=0$ case, $\var[T(\varphi)]$ is the same as the one in \cite{BY2005}, and we obtain the result. We note that it was further shown in \cite{BY2005} that
\begin{equation}
\begin{split}
\covar [ T(\varphi_1), T(\varphi_2) ] &= (w_2 -2) \tau_1(\varphi_1) \tau_1(\varphi_2) + (W_4 -3) \tau_2(\varphi_1) \tau_2(\varphi_2) + 2 \sum_{\ell=1}^{\infty} \ell \tau_{\ell}(\varphi_1) \tau_{\ell}(\varphi_2).
\end{split}
\end{equation}

Now let us consider $\E [T(\varphi)]$. Recall (see \eqref{Chebyshev formula}) that, for $\ell=0,1,2,\dots$,
\begin{equation}
\tau_\ell(\varphi) = \frac1{2\pi} \int_{-\pi}^\pi \varphi(2\cos\theta) \cos(\ell\theta) \, \mathrm{d} \theta = \frac{(-1)^{\ell}}{2\pi \mathrm{i}} \oint_{|s|=1} \varphi \left(-s-\frac{1}{s} \right) s^{\ell-1} \, \mathrm{d} s,
\end{equation}
where we set $s=-e^{\mathrm{i} \theta}$ for the second equality. We change the variable $z$ to $s=s(z)$ in the first integral in \eqref{eq:Eintb}. Note that \eqref{sstism} implies that $s+1/s=-z$ and that the map $z\mapsto s$ maps $\C\setminus [-2, 2]$ to the disk $|s|<1$. Then $\Gamma$ is mapped to a contour with negative orientation that contains $0$ and lies in the slit disk $\Omega:=\{|s|<1\} \setminus [-1, -1/J]$. Changing the orientation of the contour, we obtain
\begin{equation}
\begin{split} \label{Tphi}
\E[T(\varphi)]&= \frac{1}{2\pi \mathrm{i}} \oint \varphi \left(-s-\frac{1}{s} \right) \left[ -J' + \frac{J^2 s}{1 + Js} + (w_2-1)s + \frac{s^3}{1-s^2} + \left( W_4 - 3 \right) s^3 \right] \mathrm{d} s
\end{split}
\end{equation}
along a contour with positive orientation that contains $0$ and lies in $\Omega$. Note that $\varphi \left(-s-\frac{1}{s} \right)$ is analytic in a neighborhood of the boundary of $\Omega$. The first, third, and fifth terms in the integrand of \eqref{Tphi} are, using analyticity, equal to
\begin{equation}
\begin{split} \label{mean1}
&\frac{1}{2\pi \mathrm{i}} \oint_{|s|=1} \varphi \left(-s-\frac{1}{s} \right) \left[ -J' + (w_2-1)s + \left( W_4 - 3 \right) s^3 \right] \mathrm{d} s \\
&=J' \tau_1(\varphi) + (w_2-1) \tau_2(\varphi) + \left( W_4 - 3 \right) \tau_4(\varphi).
\end{split}
\end{equation}
For the fourth term in the integrand of \eqref{Tphi}, when we deform the contour to the unit circle, the two poles $s=-1$ and $s=1$ on the circle yield half of the residue terms and the integral becomes a principal value. The principal value integral is, after setting $-s=e^{\mathrm{i}\theta}$,
\begin{equation}
\begin{split} \label{residue00}
\mathrm{P.V.} \; \frac{1}{2\pi} \int_{-\pi}^{\pi} \varphi(2\cos \theta) \frac{e^{4\mathrm{i} \theta}}{1-e^{2\mathrm{i} \theta}} \mathrm{d} \theta = \frac{1}{2\pi} \int_{-\pi}^{\pi} \varphi(2\cos \theta)\left(-\frac{1}{2} - \cos 2\theta \right) \mathrm{d} \theta = -\frac{1}{2} \tau_0(\varphi) - \tau_2(\varphi).
\end{split}
\end{equation}
Hence we obtain
\begin{equation}
\begin{split} \label{residue}
&\frac{1}{2\pi \mathrm{i}} \oint_{|s|=r} \varphi \left(-s-\frac{1}{s} \right) \frac{s^3}{1-s^2} \mathrm{d} s = \frac14 \left( \varphi(-2)+ \varphi(2) \right) -\frac{1}{2} \tau_0(\varphi) - \tau_2(\varphi) .
\end{split}
\end{equation}
From \eqref{Tphi}, \eqref{mean1}, and \eqref{residue}, we conclude that $\E [T(\varphi)]= M(\varphi)$.
\end{proof}

\section{Outline of the proof of Proposition \ref{prop:gaussian xi}} \label{sec:outline}

Sections \ref{sec:outline}--\ref{sec:miscellanies} are dedicated to proving Proposition \ref{prop:gaussian xi}. By Theorem 8.1 of \cite{Billingsley_conv}, we need to show (a) the finite-dimensional convergence of $\xi_N(z)$ to a Gaussian vector and (b) the tightness of $\xi_N(z)$. To establish part (a), we compute the limits of the mean $\E [\xi_N(z)]$ and the covariance $\covar [ \xi_N(z_1), \xi_N(z_2)]$ in Sections \ref{sec:mean} and \ref{sec:covariance}, respectively. Part (a) is concluded in Section \ref{sub:martingale CLT}, and part (b) is proved in Section \ref{sub:tightness}. We use the following known results for the resolvent and the associated large deviation estimates.

\subsection{Local semicircle law and large deviation estimates} \label{sec:prelim}

The Green function (resolvent) of $M$ is $R(z) = (M - zI)^{-1}$. The normalized trace of the Green function is defined as
\begin{equation}
s_N(z) = \frac{1}{N} \Tr R(z) = \frac{1}{N} \sum_{i=1}^N \frac{1}{\mu_i - z},
\end{equation}
which is also the Stieltjes transform of $\rho_N$. Recall that
\begin{equation}
\xi_N (z) = N \int_{\R} \frac1{x-z} (\rho_N - \rho)(\mathrm{d} x) = N \left( s_N(z) - s(z) \right).
\end{equation}
We also set
\begin{equation}
\zeta_N (z) := \xi_N (z) - \E \xi_N (z) = N \left( s_N(z) - \E s_N(z) \right).
\end{equation}

\begin{lem}[Theorem 2.9 of \cite{EKYY1}, local semicircle law] \label{lem:local law}
Let $\Sigma \geq 3$ be a fixed but arbitrary constant and define the domain $D = \{ z = E + \mathrm{i} \eta \in \C : |E| \leq \Sigma, \eta \in (0, 3) \}$. Set $\kappa = \min\{ |E-2|, |E+2| \}$. Then, for any $z \in D$ with $\im z = \eta$,
\begin{equation}
|s_N(z) - s(z)| \prec \min \left\{ \frac{1}{N\sqrt{\kappa + \eta}}, \frac{1}{\sqrt N} \right\} + \frac{1}{N\eta}
\end{equation}
and
\begin{equation} \label{eq:Rdelo}
\max_{i, j} |R_{ij}(z) - \delta_{ij} s(z)| \prec \frac{1}{\sqrt N} + \sqrt{\frac{\im s(z)}{N\eta}} + \frac{1}{N\eta}.
\end{equation}
\end{lem}

For $\eta \sim 1$, we have the following corollary.

\begin{cor} \label{cor:local law}
Let $\Sigma \geq 3$ be a fixed but arbitrary constant. For a fixed (small) constant $c>0$, define $D_c = \{ z = E + \mathrm{i} \eta \in \C : |E| \leq \Sigma, \eta \in (c, 3) \}$.
Then, for any $z \in D_c$,
\begin{equation} \label{sn_s}
|s_N(z) - s(z)| \prec N^{-1}
\end{equation}
and
\begin{equation} \label{Rbasicestim}
|R_{ii}(z) - s(z)| \prec N^{-\frac{1}{2}}, \qquad |R_{ij}(z)| \prec N^{-\frac{1}{2}} \quad (i \neq j).
\end{equation}
Moreover, \eqref{sn_s} holds for any $z \in \Gamma_r \cup \Gamma_l \cup \Gamma_0$ and \eqref{Rbasicestim} holds for any $z \in \Gamma_r \cup \Gamma_l$.
\end{cor}

\begin{proof}
The bounds for $z \in D_c$ are straightforward since $\eta \sim 1$. For $z \in \Gamma_r \cup \Gamma_l \cup \Gamma_0$, from Lemma \ref{lem:rigidity} and Lemma \ref{lem:largest eigenvalue},
\begin{equation} \label{eq:SnSforrbasp}
\begin{split}
|s_N(z) - s(z)| &= \left| \frac{1}{N} \sum_{j=1}^N \frac{1}{\mu_j -z} - \int_{\R} \frac{\rho(\mathrm{d} x)}{x-z} \right| = \left| \frac{1}{N} \sum_{j=1}^N \frac{1}{\gamma_j -z} - \int_{\R} \frac{\rho(\mathrm{d} x)}{x-z} \right| + {\mathcal O}(N^{-1}) \\
&= {\mathcal O}(N^{-1}).
\end{split}
\end{equation}
To prove \eqref{Rbasicestim} for $z \in \Gamma_r$, we notice that
\begin{equation}
\sqrt{\frac{\im s(z)}{N\eta}} \sim \sqrt{\frac{\eta}{\sqrt{\kappa + \eta}} \frac{1}{N\eta}} \sim \sqrt{\frac{1}{N}}
\end{equation}
since $\kappa \sim 1$ and $N^{-\delta}\le \eta \le v_0$. Since $\frac{1}{N\eta} \le N^{-1+\delta}$, from \eqref{eq:Rdelo}, we find that \eqref{Rbasicestim} holds for $z \in \Gamma_r$. The proof of \eqref{Rbasicestim} for $z \in \Gamma_l$ is the same.
\end{proof}

Let $M^{(a)}$ be the minor of $M$ obtained by removing the $a$-th row and the $a$-th column. We denote by $R^{(a)}$ and $s_N^{(a)}$ the Green function and the normalized trace of the Green function of $M^{(a)}$, respectively. It is well known that
\begin{equation} \label{schur}
R_{ii} = \frac{1}{M_{ii} - z - \sum_{p, q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi}}, \qquad R_{ij} = -R_{ii} \sum_p^{(i)} M_{ip} R^{(i)}_{pj} \quad (i \neq j),
\end{equation}
and
\begin{equation} \label{RRdiff}
R_{ij} - R^{(a)}_{ij} = \frac{R_{ia} R_{aj}}{R_{aa}}.
\end{equation}
Here, the superscript $(i)$ in the summation means that the index $p$ runs over $1, 2, \dots, N$ with $p \neq i$. From the second identity in \eqref{schur}, we also have the estimate
\begin{equation} \label{sumofMRest}
\left| \sum_p^{(i)} M_{ip} R^{(i)}_{pj} \right| = \left| \frac{R_{ij}}{R_{ii}} \right| \prec N^{-\frac{1}{2}}
\end{equation}
for $i \neq j$.

\bigskip

We will also frequently use the following large deviation estimates, which are proved in Section \ref{subsec:largdeves}.

\begin{lem} \label{lem:M deviation}
Let $S$ be an $(N-1) \times (N-1)$ matrix, independent of $M_{ia}$ $(1 \leq a \leq N, a \neq i)$, with matrix norm $\|S\|$. Then, for $n = 1, 2$, there exists a constant $C_n$, depending only on $J$ and $W_4$ in Definition \ref{def:Wigner} and Condition \ref{cond:nonzero}, such that
\begin{equation} \label{eq:large expectation}
\E \left| \sum_{p, q}^{(i)} M_{ip} S_{pq} M_{qi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right|^{2n} \leq \frac{C_n \|S\|^{2n}}{N^{n}}.
\end{equation}
Moreover,
\begin{equation} \label{eq:large general}
\left| \sum_{p, q}^{(i)} M_{ip} S_{pq} M_{qi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right| \prec \frac{\|S\|}{\sqrt N}.
\end{equation}
\end{lem}

\section{The mean function} \label{sec:mean}

In this section, we assume that $z \in {\mathcal K} \cup \Gamma_r \cup \Gamma_l$. The estimate for $z \in \Gamma_r \cup \Gamma_l$ will be used later in the proof of Lemma \ref{lem:Gamolrsm}. Let
\begin{equation}
b_N(z) = \E [\xi_N (z) ]= N[ \E s_N(z) - s(z)].
\end{equation}
From \eqref{schur}, if we set
\begin{equation}
\begin{split} \label{Qidefnh}
Q_i:= -M_{ii} + \sum_{p, q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi},
\end{split}
\end{equation}
we have
\begin{equation}
\begin{split} \label{schur expand}
R_{ii} = \frac{1}{-z-Q_i} &= \frac{1}{-z-s} + \frac{Q_i -s}{(-z-s)^2} + \frac{\left( Q_i -s \right)^2}{(-z-s)^3} + O \left( \frac{|Q_i-s|^3}{|z+s|^4} \right) \\
&= s + s^{2}(Q_i -s) + s^3\left( Q_i -s \right)^2 + O \left( |s|^4 |Q_i-s|^3 \right)
\end{split}
\end{equation}
since $s = -1/(s+z)$. Using $R_{ii} = \frac{1}{-z-Q_i}$, we have
\begin{equation}
\begin{split} \label{Qibasesmt}
Q_i-s = -\frac1{R_{ii}}-z-s= -\frac1{R_{ii}} +\frac1{s} = {\mathcal O}(N^{-\frac{1}{2}}).
\end{split}
\end{equation}
We thus find that
\begin{equation} \label{b_N}
b_N = s^2 \sum_i \E (Q_i-s)+ s^3 \sum_i \E (Q_i-s)^2 + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}

\subsection{$ \sum_i \E (Q_i-s)$}

We first consider
\begin{equation} \label{b1}
\begin{split}
\sum_i \E (Q_i-s) &= \sum_i \E \left[ -M_{ii} + \sum_{p, q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} -s \right] \\
&= -J' + \frac{J^2}{N^2} \E \sum_i \sum_{p, q}^{(i)} R^{(i)}_{pq} + \frac{1}{N} \E \sum_i \sum_p^{(i)} R_{pp}^{(i)} - Ns.
\end{split}
\end{equation}
Naive power counting shows that the second term is $O(N^{\frac{1}{2}+\epsilon})$ and that the third term equals $Ns$ up to an error of order $O(N^{\epsilon})$. We show below that the second term is actually $O(1)$ and that the third term is $Ns$ plus an explicit $O(1)$ term.

\subsubsection{$ \frac{1}{N} \E \sum_i \sum_p^{(i)} R_{pp}^{(i)}$}

From \eqref{RRdiff} and \eqref{Rbasicestim},
\begin{equation}
\sum_p^{(i)} (R_{pp}^{(i)} - R_{pp}) = - \sum_p^{(i)} \frac{R_{pi} R_{ip}}{R_{ii}} = - \frac{1}{s} \sum_{p}^{(i)} R_{pi} R_{ip} + {\mathcal O}(N^{-\frac{1}{2}}).
\end{equation}
This implies
\begin{equation}
\sum_p^{(i)} R_{pp}^{(i)} = \left( \sum_p R_{pp} \right) - R_{ii} - \frac{1}{s} (R^2)_{ii} + \frac1{s} (R_{ii})^2 + {\mathcal O}(N^{-\frac{1}{2}}),
\end{equation}
and hence
\begin{equation}
\frac1{N} \sum_i \sum_p^{(i)} R_{pp}^{(i)} = \frac{N-1}{N} \Tr (R) - \frac{1}{s N} \Tr (R^2) + \frac1{s N } \sum_i (R_{ii})^2 + {\mathcal O}(N^{-\frac{1}{2}}).
\end{equation}
Note that, by the spectral decomposition,
\begin{equation} \label{s derivative}
\frac1{N} \Tr R^2= \frac{1}{N} \sum_i \frac{1}{(\mu_i - z)^2} = \frac{\mathrm{d}}{\mathrm{d} z} s_N(z).
\end{equation}
Since $|s_N(z) - s(z)| \prec N^{-1}$, we find from the Cauchy integral formula that $|s_N'(z) - s'(z)| \prec N^{-1+\delta}$.
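Here is a brief sketch of this standard step (the choice of contour is ours, for illustration): writing, for a radius $r$ comparable to $\min\{\im z, 1\} \gtrsim N^{-\delta}$,
\begin{equation}
s_N'(z) - s'(z) = \frac{1}{2\pi \mathrm{i}} \oint_{|w-z|=r} \frac{s_N(w) - s(w)}{(w-z)^2} \, \mathrm{d} w,
\end{equation}
and using the bound $|s_N(w) - s(w)| \prec N^{-1}$ on the circle of integration (which can be verified as in the proof of Corollary \ref{cor:local law}, since every point of the circle either is at distance of order one from the spectrum or has imaginary part of order one), we obtain $|s_N'(z) - s'(z)| \prec N^{-1}/r \lesssim N^{-1+\delta}$.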
Hence, using \eqref{Rbasicestim},
\begin{equation}
\begin{split}
\frac1{N} \sum_i \sum_p^{(i)} R_{pp}^{(i)} &= (N-1) s_N(z) - \frac{s_N'}{s} + \frac1{s N } \sum_i (R_{ii})^2 + {\mathcal O}(N^{-\frac{1}{2}}) \\
&= N s_N(z) - \frac{s'}{s} + {\mathcal O}(N^{-\frac{1}{2}}).
\end{split}
\end{equation}
We therefore find that
\begin{equation} \label{b13'}
\frac{1}{N} \E \sum_i \sum_p^{(i)} R_{pp}^{(i)} = Ns + b_N -\frac{s'}{s} + O(N^{-\frac{1}{2}+\epsilon}).
\end{equation}

\subsubsection{$\frac{J^2}{N^2} \E \sum_i \sum_{p, q}^{(i)} R^{(i)}_{pq}$}

The case $p=q$ follows from \eqref{b13'}:
\begin{equation} \label{J2N2Ebla}
\frac{J^2}{N^2} \E \sum_i \sum_p^{(i)} R_{pp}^{(i)} = J^2 s + O(N^{-1+\epsilon}),
\end{equation}
since a naive estimate shows that $b_N=O(N^{\epsilon})$ from the definition.

We now consider the case $p\neq q$. We start with a lemma. The strategy of the proof of this lemma is used in several places in the paper.

\begin{lem}\label{lem:unmatch1}
For $q\neq i$,
\begin{equation} \label{unmatching offdiagonal0}
\frac{1}{N} \E \sum_{p}^{(i,q)} R_{pi} R_{iq}^{(p)} = O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
\end{lem}

\begin{proof}
For distinct $p,q, i$, we have from \eqref{schur} and \eqref{sumofMRest} that
\begin{equation}
\begin{split} \label{expRR unmatched}
&R_{pi} R^{(p)}_{iq} = - R_{pp} \left( \sum_r^{(p)} M_{pr} R^{(p)}_{ri} \right) R^{(p)}_{iq} = -s \left( \sum_r^{(p)} M_{pr} R^{(p)}_{ri} \right) R^{(p)}_{iq} + {\mathcal O}(N^{-\frac{3}{2} }) .
\end{split}
\end{equation}
Hence,
\begin{equation} \label{expRRtempo}
\begin{split}
&\E \left[ R_{pi} R^{(p)}_{iq} \right] = -\frac{Js}{N} \E \sum_r^{(p)} R^{(p)}_{ri} R^{(p)}_{iq} + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
Using $R_{ab}^{(c)}-R_{ab} = {\mathcal O}(N^{-1})$, which follows from \eqref{RRdiff} and \eqref{Rbasicestim}, repeatedly, we find that, for distinct $p, q, i$,
\begin{equation} \label{expRRtempo0}
\begin{split}
\sum_r^{(p)} R^{(p)}_{ri} R^{(p)}_{iq} = \sum_r R_{ri} R_{iq} + {\mathcal O}(N^{-\frac{1}{2} }) = \sum_{r}^{(i,q)}R_{ri} R_{iq}^{(r)} + {\mathcal O}(N^{-\frac{1}{2} }).
\end{split}
\end{equation}
Summing \eqref{expRRtempo} over $p$, this implies that
\begin{equation} \label{expRRtempo00}
\begin{split}
&\frac1{N} \sum_{p}^{(i,q)} \E \left[ R_{pi} R^{(p)}_{iq} \right] = -Js \frac{N-1}{N^2} \E \sum_{r}^{(i,q)} R_{ri} R^{(r)}_{iq} + O(N^{-\frac{3}{2} + \epsilon}) .
\end{split}
\end{equation}
Since the sums on the two sides are the same, we obtain that
\begin{equation} \label{unmatching offdiagonal0'}
\frac{1 + Js}{N} \E \sum_{p}^{(i,q)} R_{pi} R_{iq}^{(p)} = O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
We now claim that $|1+Js| > c'$ uniformly on ${\mathcal K} \cup \Gamma_r \cup \Gamma_l$ for some ($N$-independent) constant $c' > 0$. Assuming the claim, the desired lemma clearly follows from \eqref{unmatching offdiagonal0'}.

To prove the claim, we first note that, for $J<1$, the claim is trivial since $|1+Js| \geq 1- J|s| \geq 1-J$. Thus, we assume that $J>1$. Let $z = E + \mathrm{i} \eta$. It is straightforward to check that, for $\im z>0$,
\begin{equation}
\im s(z) \geq C \sqrt{||E| -2| + \eta} \qquad \text{if} \quad |E| < 2 \quad\text{or} \quad ||E| -2| < \eta
\end{equation}
and
\begin{equation}
\im s(z) \geq \frac{C \eta}{\sqrt{||E| -2| + \eta}} \qquad \text{if} \quad |E| \geq 2 \quad\text{and} \quad ||E| -2| \geq \eta
\end{equation}
for some $C>0$ independent of $z$. (See, e.g., Lemma 3.4 of \cite{EYY}.) Thus, $|1+Js| \geq J \im s \sim 1$ for $z \in {\mathcal K}$. Recall that $a_+ > \widehat J \geq 2$. From the definition of $s(z)$, it is direct to see that $s(a_+) > s(\widehat J) = -1/J$. Moreover,
\begin{equation}
\re s(a_+ + \mathrm{i} \eta) = \re \int_{-2}^2 \frac1{x-a_+-\mathrm{i} \eta} \rho(\mathrm{d} x) = \int_{-2}^2 \frac{x-a_+}{(x-a_+)^2 + \eta^2} \rho(\mathrm{d} x),
\end{equation}
hence $\re s(a_+ + \mathrm{i} \eta)$ is an increasing function of $\eta$. Thus, for $z \in \Gamma_r$,
\begin{equation}
|1+Js| \geq 1+ J \re s \geq 1+ Js(a_+) \sim 1.
\end{equation}
For $z \in \Gamma_l$, it is easy to see that $\re s > 0$, hence $|1+Js| \geq 1 + J \re s > 1$. This completes the proof of the lemma.
\end{proof}

From \eqref{RRdiff} and \eqref{Rbasicestim},
\begin{equation} \label{Ripqintermsofdiffrp}
R^{(i)}_{pq} = R_{pq} - \frac{R_{pi} R_{iq}}{R_{ii}} = R_{pq} - \frac{R_{pi} R_{iq}}{s} + {\mathcal O}(N^{-\frac{3}{2}}) = R_{pq} - \frac{R_{pi} R^{(p)}_{iq}}{s} + {\mathcal O}(N^{-\frac{3}{2}}).
\end{equation}
Hence we conclude from \eqref{Ripqintermsofdiffrp} and \eqref{unmatching offdiagonal0} that
\begin{equation} \label{b121}
\begin{split}
\frac{J^2}{N^2} \E \sum_i \sum_{p\neq q}^{(i)} R^{(i)}_{pq} &= \frac{J^2}{N^2} \E \sum_i \sum_{p\neq q}^{(i)} R_{pq} + O(N^{-\frac{1}{2} + \epsilon}) \\
&= \frac{J^2}{N^2} \E \sum_i \sum_{p\neq q} R_{pq} + O(N^{-\frac{1}{2} + \epsilon}) = \frac{J^2}{N} \E \sum_{p\neq q} R_{pq} + O(N^{-\frac{1}{2} + \epsilon}).
\end{split}
\end{equation}
In other words, the superscript $(i)$ can be removed at the cost of a negligible error.

\bigskip

We now compute the right-hand side of \eqref{b121}. From \eqref{schur} and \eqref{schur expand},
\begin{equation} \label{b122}
\begin{split}
R_{pq} &= -R_{pp} \sum_r^{(p)} M_{pr} R^{(p)}_{rq} \\
&= -s \sum_r^{(p)} M_{pr} R^{(p)}_{rq} - s^2 \left(Q_p -s \right) \sum_r^{(p)} M_{pr} R^{(p)}_{rq} + {\mathcal O}(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
Taking the expectation, the first term becomes
\begin{equation} \label{b1221}
-s \E \sum_r^{(p)} M_{pr} R^{(p)}_{rq} = -\frac{Js}{N} \E \sum_r^{(p)} R^{(p)}_{rq} .
\end{equation}
Since
\begin{equation}
\E M_{pp} \sum_r^{(p)} M_{pr} R^{(p)}_{rq} = \frac{J'J}{N^2} \E \sum_r^{(p)} R^{(p)}_{rq} = O(N^{-\frac{3}{2} + \epsilon}),
\end{equation}
the second term in \eqref{b122} satisfies, also using \eqref{b1221},
\begin{equation} \label{b122secondtermap}
\E \left[ \left(Q_p -s \right) \sum_r^{(p)} M_{pr} R^{(p)}_{rq} \right] = \E \sum_{a, b}^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} \sum_r^{(p)} M_{pr} R^{(p)}_{rq} -\frac{Js}{N} \E \sum_r^{(p)} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
We now evaluate the term
$$ \E \sum_{a, b}^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} \sum_r^{(p)} M_{pr} R^{(p)}_{rq} $$
by considering the different choices of the indices $a,b,r$ separately, as follows.
\begin{enumerate}[1)]
\item When $a, b, r$ are all distinct,
\begin{equation} \label{Ethreepartfica}
\E \sum^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} M_{pr} R^{(p)}_{rq} = \frac{J^3}{N^3} \sum^{(p)} \E \left[ R^{(p)}_{ab} R^{(p)}_{rq} \right],
\end{equation}
where the summation is over all distinct $a,b,r$. The part of the sum in which the index $r$ is equal to $q$ is
\begin{equation} \label{Ethreepartfitcatemp}
\frac{J^3}{N^3} \sum^{(p)}_{a\neq b} \E \left[ R^{(p)}_{ab} R^{(p)}_{qq} \right] = O(N^{-3/2+\epsilon})
\end{equation}
by a naive estimate. Hence we may assume that the index $r$ satisfies $r\neq q$. Now, similarly to Lemma~\ref{lem:unmatch1}, for distinct $a,b,r,q, p$,
\begin{equation}
\begin{split}
\E \left[ R^{(p)}_{ab} R^{(p)}_{rq} \right] &= \E \left[ R_{ab} R^{(a)}_{rq} \right] + O(N^{-\frac{3}{2} + \epsilon}) = \E \left[ -R_{aa} \sum_t^{(a)} M_{at} R^{(a)}_{tb} R^{(a)}_{rq} \right] + O(N^{-\frac{3}{2} + \epsilon}) \\
&= -\frac{Js}{N} \E \sum_t^{(a)} R^{(a)}_{tb} R^{(a)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}) = -\frac{Js}{N} \E \sum_t^{(p)} R^{(p)}_{tb} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}) \\
&= -\frac{Js}{N} \E \sum_{t: t \neq b, r, q}^{(p)} R^{(p)}_{tb} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
Summing over $a$,
\begin{equation}
\frac{1}{N} \E \sum_{a: a \neq b, r, q}^{(p)} R^{(p)}_{ab} R^{(p)}_{rq} = -Js \frac{N-4}{N^2} \E \sum_{t: t \neq b, r, q}^{(p)} R^{(p)}_{tb} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon})
\end{equation}
for distinct $b,r, q, p$. Hence, after adding back the three terms with $a \in \{b, r, q\}$, each of which is ${\mathcal O}(N^{-1/2})$, to the sum, we obtain
\begin{equation} \label{Rabrq distinct}
\frac{1}{N} \E \sum_{a}^{(p)} R^{(p)}_{ab} R^{(p)}_{rq} = O(N^{-\frac{3}{2} + \epsilon})
\end{equation}
for distinct $b, r,q, p$. Using this, we find that \eqref{Ethreepartfica}, with the summation over all distinct $a,b,r$ with $r\neq q$, is $O(N^{-\frac{3}{2} + \epsilon})$. Since the case $r=q$ satisfies the same estimate by \eqref{Ethreepartfitcatemp}, we find that
\begin{equation} \label{caseqofq1of}
\E \left[ \sum^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} M_{pr} R^{(p)}_{rq} \right] = O(N^{-\frac{3}{2} + \epsilon}),
\end{equation}
where the summation is over all distinct $a,b,r$.

\item When $a=b \neq r$,
\begin{equation}
\begin{split} \label{Rabr distinct}
&\E \sum_{a \neq r}^{(p)} M_{pa} R^{(p)}_{aa} M_{ap} M_{pr} R^{(p)}_{rq} = \frac{J}{N} \left(\frac1{N}+\frac{J^2}{N^2}\right) \sum_{a \neq r}^{(p)} \E \left[ R^{(p)}_{aa} R^{(p)}_{rq} \right]\\
&\qquad = \frac{J}{N^2} \sum_{a, r}^{(p)} \E \left[ R^{(p)}_{aa} R^{(p)}_{rq} \right] + O(N^{-\frac{3}{2} + \epsilon}) = \frac{J}{N} \E \left[ s_N^{(p)} \sum_r^{(p)} R^{(p)}_{rq} \right]+ O(N^{-\frac{3}{2} + \epsilon}),
\end{split}
\end{equation}
where we define
\begin{equation}
s_N^{(p)} = \frac1{N} \sum_a^{(p)} R_{aa}^{(p)}.
\end{equation}

\item When $a=r \neq b$ (or $b=r \neq a$),
\begin{equation} \label{Rab distinct0}
\E \sum_{a \neq b}^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} M_{pa} R^{(p)}_{aq} = \frac{J}{N} \left(\frac1{N}+\frac{J^2}{N^2}\right) \sum_{a \neq b}^{(p)} \E \left[ R^{(p)}_{ab} R^{(p)}_{aq} \right] .
\end{equation}
The part of the sum in which either $a=q$ or $b=q$ is $O(N^{-\frac{3}{2} + \epsilon})$ by a naive estimate. Now, for $a\neq q$,
\begin{equation} \label{Rab distinct1}
\frac{1}{N} \sum_{b}^{(p,a,q)} R^{(p)}_{ab} R^{(p)}_{aq} = \frac{1}{N} \sum_{b}^{(p,a,q)} R_{ab} R^{(b)}_{aq} + {\mathcal O}(N^{-3/2}) = \frac{1}{N} \sum_{b}^{(a,q)} R_{ab} R^{(b)}_{aq} + {\mathcal O}(N^{-3/2}).
\end{equation}
Following the proof of \eqref{unmatching offdiagonal0}, we can check that $\E \left[ \frac{1}{N} \sum_{b}^{(a,q)} R^{(p)}_{ba} R^{(b)}_{aq} \right] = O(N^{-\frac{3}{2} + \epsilon})$. (This is easy to see for a real symmetric matrix, since $R_{ab}=R_{ba}$.) Thus,
\begin{equation} \label{Rab distinct2}
\E \sum_{a \neq b}^{(p)} M_{pa} R^{(p)}_{ab} M_{bp} M_{pa} R^{(p)}_{aq} = \frac{J}{N^2} \sum_{a}^{(p,q)} \E \left[ \sum_{b}^{(a,q)} R^{(p)}_{ba} R^{(b)}_{aq} \right] + O(N^{-\frac{3}{2} + \epsilon}) = O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}

\item When $a=b=r$,
\begin{equation} \label{Rabr same}
\E \sum_r^{(p)} M_{pr} R^{(p)}_{rr} M_{rp} M_{pr} R^{(p)}_{rq} = \left( \frac{W_3}{N^{\frac{3}{2}}}+ \frac{J^3}{N^3} \right) \E \sum_r^{(p)} R^{(p)}_{rr} R^{(p)}_{rq} = \frac{W_3 s}{N^{\frac{3}{2}}} \E \sum_r^{(p)} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
\end{enumerate}
Putting the above four cases into \eqref{b122secondtermap}, we find that
\begin{equation}
\begin{split}
\E \left[ \left( Q_p -s \right) \sum_r^{(p)} M_{pr} R^{(p)}_{rq} \right] & = \frac{J}{N} \E \left[ \left( s_N^{(p)} - s \right) \sum_r^{(p)} R^{(p)}_{rq} \right] + \frac{W_3 s}{N^{\frac{3}{2}}} \E \sum_r^{(p)} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
Note that
\begin{equation}
s_N^{(p)} - s = \frac1{N} \sum_{a}^{(p)} \left( R_{aa}^{(p)}-R_{aa}\right) - \frac1{N} R_{pp} + (s_N -s) = {\mathcal O}(N^{-1}).
\end{equation}
Hence,
\begin{equation} \label{b1222}
\begin{split}
\E \left[ \left( Q_p -s \right) \sum_r^{(p)} M_{pr} R^{(p)}_{rq} \right] = \frac{W_3 s}{N^{\frac{3}{2}}} \E \sum_r^{(p)} R^{(p)}_{rq} + O(N^{-\frac{3}{2} + \epsilon}) .
\end{split}
\end{equation}

\bigskip

From \eqref{b122}, \eqref{b1221}, and \eqref{b1222}, for $p\neq q$,
\begin{equation}
\begin{split}
\E \left[ R_{pq} \right] &= - \left( \frac{J s}{N} + \frac{W_3 s^3}{N^{\frac{3}{2}}} \right) \E \sum_r^{(p)} R^{(p)}_{rq}+ O(N^{-\frac{3}{2} +\epsilon}).
\end{split}
\end{equation}
Using \eqref{Ripqintermsofdiffrp} and \eqref{unmatching offdiagonal0}, this implies that
\begin{equation}
\begin{split}
\E \left[ R_{pq} \right] &= - \left( \frac{J s}{N} + \frac{W_3 s^3}{N^{\frac{3}{2}}} \right) \E \sum_r R_{rq}+ O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
From this we find that
\begin{equation}
\begin{split}
\frac{1}{N} \E \sum_p R_{pq} &= \frac{1}{N} \E \left[ \sum_p^{(q)} R_{pq} + R_{qq} \right] = - \left( \frac{J s}{N} +\frac{ W_3 s^3}{N^{\frac{3}{2}}} \right) \E \sum_r R_{rq} + \frac{s}{N} + O(N^{-\frac{3}{2} + \epsilon}),
\end{split}
\end{equation}
which implies that
\begin{equation} \label{R unmatched}
\frac{1}{N} \E \sum_p R_{pq} = \frac{s}{(1 + Js) N} + O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
Therefore, we obtain
\begin{equation} \label{b12}
\begin{split}
\frac{J^2}{N} \E \sum_{p \neq q} R_{pq} &= \frac{J^2}{N} \E \sum_{p, q} R_{pq} - \frac{J^2}{N} \E \sum_{p} R_{pp} = \frac{J^2 s}{1 + Js} - J^2 s + O(N^{-\frac{1}{2}+ \epsilon}).
\end{split}
\end{equation}
We obtain from \eqref{J2N2Ebla}, \eqref{b121}, and \eqref{b12} that
\begin{equation} \label{b1'222}
\frac{J^2}{N^2} \E \sum_i \sum_{p, q}^{(i)} R^{(i)}_{pq} = \frac{J^2 s}{1 + Js} + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}

\subsubsection{Conclusion for $\sum_i \E (Q_i-s)$}

From \eqref{b1}, \eqref{b13'}, and \eqref{b1'222},
\begin{equation} \label{b1'}
\sum_i \E (Q_i-s) = -J' + b_N - \frac{s'}{s} + \frac{J^2 s}{1 + Js} + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}

\subsection{$\sum_i \E (Q_i-s)^2$}

We next turn to the second term in \eqref{b_N}. We begin with
\begin{equation}
\begin{split} \label{b2}
\E (Q_i-s)^2 = & \frac{w_2}{N} + \frac{(J')^2}{N^2}+ \frac{2J' s}{N} + s^2 - 2 \left( s+ \frac{J'}{N} \right) \E \sum_{p, q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} \\
& + \E \sum_{p, q, r, t}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} M_{ir} R^{(i)}_{rt} M_{ti}.
\end{split}
\end{equation}
The first sum on the right-hand side satisfies
\begin{equation} \label{b22}
\E \sum_{p, q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} = \frac{1}{N} \E \sum_p^{(i)} R^{(i)}_{pp} + \frac{J^2}{N^2} \E \sum_{p, q}^{(i)} R^{(i)}_{pq} = \E s_N^{(i)} + \frac{J^2 s}{(1 + Js)N} + O(N^{-\frac{3}{2} + \epsilon}),
\end{equation}
using \eqref{R unmatched} (applied to the Green function of an $(N-1) \times (N-1)$ matrix).

\bigskip

\subsubsection{Computation of $\E\big[ \sum_{p, q, r, t}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} M_{ir} R^{(i)}_{rt} M_{ti} \big]$}

In order to evaluate the last term in \eqref{b2}, we consider several cases separately.
\begin{enumerate}[1)]
\item When $p, q, r, t$ are all distinct,
$$ \sum^{(i)} \E \left[ M_{ip} R^{(i)}_{pq} M_{qi} M_{ir} R^{(i)}_{rt} M_{ti} \right] = \frac{J^4}{N^4} \sum^{(i)} \E \left[ R^{(i)}_{pq} R^{(i)}_{rt} \right] = O(N^{-\frac{3}{2} + \epsilon}) $$
due to \eqref{Rabrq distinct}. Here the sum is taken over all distinct $p,q,r,t$.

\item When $| \{p, q, r, t \}| = 3$:
\begin{enumerate}
\item If $p=q$,
$$ \E \left[ M_{ip} R^{(i)}_{pp} M_{pi} M_{ir} R^{(i)}_{rt} M_{ti} \right] = \frac{J^2}{N^2}\left( \frac1{N}+ \frac{J^2}{N^2}\right) \E \left[ R^{(i)}_{pp} R^{(i)}_{rt} \right] = \frac{J^2}{N^3} \E \left[ R_{pp} R_{rt}^{(i)} \right] + O(N^{-\frac{9}{2} + \epsilon}). $$
Thus, using \eqref{Rbasicestim} and \eqref{b12}, we find that
\begin{equation}
\begin{split}
& \sum^{(i)} \E \left[ M_{ip} R^{(i)}_{pp} M_{pi} M_{ir} R^{(i)}_{rt} M_{ti} \right] = \frac{J^2}{N^2} \sum^{(i)}_{r \neq t} \E \left[ s_N R_{rt}^{(i)} \right] + O(N^{-\frac{3}{2} + \epsilon}) \\
&\qquad =\frac{J^2 s}{N^2} \sum^{(i)}_{r \neq t} \E \left[ R_{rt}^{(i)} \right] + O(N^{-\frac{3}{2} + \epsilon}) = -\frac{J^3 s^3}{(1 + Js)N} + O(N^{-\frac{3}{2} + \epsilon}),
\end{split}
\end{equation}
where the first sum is over all distinct $p,r,t$.
\item If $r=t$, the calculation is the same as above.
\item The other cases have negligible contributions, i.e., bounded by $N^{-\frac{3}{2} + \epsilon}$, due to unmatched off-diagonal terms; using \eqref{unmatching offdiagonal0}, the derivation is similar to that of \eqref{Rab distinct2}.
\end{enumerate}

\item When $| \{p, q, r, t \}| = 2$:
\begin{enumerate}
\item If there is a triplet, e.g., $p=q=r$, the contribution is $O(N^{-\frac{3}{2} + \epsilon})$. For example,
\begin{equation}
\begin{split}
\E \sum_{p \neq t}^{(i)} M_{ip} R^{(i)}_{pp} M_{pi} M_{ip} R^{(i)}_{pt} M_{ti} &= \frac{J}{N} \left( \frac{W_3}{N^{\frac{3}{2}}}+ \frac{J^3}{N^3} \right) \E \sum_{p \neq t}^{(i)} R^{(i)}_{pp} R^{(i)}_{pt} \\
&= \frac{W_3 Js}{N^{\frac{5}{2}}} \E \sum_{p \neq t}^{(i)} R^{(i)}_{pt} + O(N^{-\frac{3}{2} + \epsilon}) = O(N^{-\frac{3}{2} + \epsilon}),
\end{split}
\end{equation}
where we used \eqref{b12}.
\item If $p=q$ and $r=t$,
\begin{equation}
\begin{split}
&\E \sum_{p \neq r}^{(i)} M_{ip} R^{(i)}_{pp} M_{pi} M_{ir} R^{(i)}_{rr} M_{ri} = \left( \frac{1}{N} + \frac{J^2}{N^2} \right)^2 \E \sum_{p \neq r}^{(i)} R^{(i)}_{pp} R^{(i)}_{rr}\\
&= \left( \frac{1}{N} + \frac{J^2}{N^2} \right)^2 \E \sum_p^{(i)} R_{pp}^{(i)} \left( Ns_N^{(i)}-R_{pp}^{(i)}\right) = \E \left( s_N^{(i)} \right)^2 - \frac{s^2}{N} + \frac{2 J^2 s^2}{N} + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
\item If $p=t$ and $q=r$,
\begin{equation}
\begin{split}
&\E \sum_{p \neq q}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} M_{iq} R^{(i)}_{qp} M_{pi} = \left( \frac{1}{N} + \frac{J^2}{N^2} \right)^2 \E \sum_{p \neq q}^{(i)} R^{(i)}_{pq} R^{(i)}_{qp} \\
&= \left( \frac{1}{N} + \frac{J^2}{N^2} \right)^2 \left[ \E \Tr (R^{(i)})^2 - \E \sum_p^{(i)} ( R^{(i)}_{pp})^2 \right] = \frac{s'}{N} - \frac{s^2}{N} + O(N^{-\frac{3}{2} + \epsilon}),
\end{split}
\end{equation}
where we used \eqref{s derivative} (applied to an $(N-1)\times (N-1)$ matrix).
\item If $p=r$ and $q=t$, the expectation $\E [M_{ip} R^{(i)}_{pq} M_{qi} M_{ip} R^{(i)}_{pq} M_{qi}]$ is negligible when $M$ is complex Hermitian. When $M$ is real symmetric, the calculation is the same as above, since $R$ is also symmetric, and the contribution is
\begin{equation}
\begin{split}
\frac{s'}{N} - \frac{s^2}{N} + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
\end{enumerate}

\item When $p=q=r=t$,
\begin{equation}
\E \sum_p^{(i)} M_{ip} R^{(i)}_{pp} M_{pi} M_{ip} R^{(i)}_{pp} M_{pi} = \frac{W_4 s^2}{N} + O(N^{-\frac{3}{2} + \epsilon}).
\end{equation}
\end{enumerate}
Combining all the cases together, we obtain
\begin{equation}
\begin{split} \label{b26}
&\E \sum_{p, q, r, t}^{(i)} M_{ip} R^{(i)}_{pq} M_{qi} M_{ir} R^{(i)}_{rt} M_{ti} \\
&= -\frac{2 J^3 s^3}{(1 + Js)N} + \E \left( s_N^{(i)} \right)^2 - \frac{s^2}{N} + \frac{2 J^2 s^2}{N} + \frac{2s'}{N} - \frac{2s^2}{N} + \frac{W_4 s^2}{N} + O(N^{-\frac{3}{2} + \epsilon})
\end{split}
\end{equation}
when $M$ is real symmetric. (For complex Hermitian $M$, we have $\frac{s'}{N} - \frac{s^2}{N}$ instead of $\frac{2s'}{N} - \frac{2s^2}{N}$.)

\subsubsection{Conclusion for $\sum_i \E \left( Q_i-s\right)^2$}

From \eqref{b2}, \eqref{b22}, and \eqref{b26},
\begin{equation}
\begin{split} \label{b2'01}
\E \left( Q_i-s\right)^2 =& s^2+ \E \left( s_N^{(i)} \right)^2 - 2\left( s+ \frac{J'}{N} \right) \E s_N^{(i)} + \frac{2J's}{N} \\
&+ \frac{1}{N} \left( w_2 - 3 s^2 + 2s' + W_4 s^2 \right) + O(N^{-\frac{3}{2} + \epsilon}).
\end{split}
\end{equation}
Using $|s_N^{(i)} - s| \prec N^{-1}$ and summing over $i$, we obtain
\begin{equation}
\begin{split} \label{b2'}
\sum_i \E \left( Q_i-s\right)^2 &= w_2 - 3s^2 + 2s' + W_4 s^2 + O(N^{-\frac{1}{2} + \epsilon}).
\end{split}
\end{equation}

\subsection{Formula for $b_N$}

Inserting \eqref{b1'} and \eqref{b2'} into \eqref{b_N}, we obtain
\begin{equation}
b_N = -s^2 J' - s' s + b_N s^2 + \frac{J^2 s^3}{1 + Js} + w_2 s^3 + 2s' s^3 + \left( W_4 - 3 \right) s^5 + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}
Therefore,
\begin{equation}
b_N = \frac{s^2}{1-s^2} \left( -J' - \frac{s'}{s} + \frac{J^2 s}{1 + Js} + w_2 s + 2s' s + \left( W_4 - 3 \right) s^3 \right) + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}
Using the algebraic identity $s' = \frac{s^2}{1-s^2}$, we can express this as
\begin{equation}
b_N = \frac{s^2}{1-s^2} \left( -J' + \frac{J^2 s}{1 + Js} + (w_2-1)s + s's + \left( W_4 - 3 \right) s^3 \right) + O(N^{-\frac{1}{2} + \epsilon}).
\end{equation}
This converges to $b(z)$ in Proposition \ref{prop:gaussian xi}. We remark that, when $J' = J = 0$, this reduces to
\begin{equation}
b = (1+s')s^3 \left( (w_2-1) + s' + \left( W_4 - 3 \right) s^2 \right),
\end{equation}
which is the same as in Proposition 3.1 of \cite{BY2005}.

\section{The covariance function} \label{sec:covariance}

\subsection{Martingale decomposition}

Following \cite{BY2005}, we consider the filtration
\begin{equation}
{\mathcal F}_k = \sigma( M_{ij}, \, k < i, j \leq N )
\end{equation}
for $k = 0, 1, \dots, N$ and the conditional expectation
\begin{equation}
\E_k( \cdot ) = \E ( \cdot | {\mathcal F}_k ).
\end{equation}
Recall that
\begin{equation}
\zeta_N = \xi_N - \E \xi_N = \Tr R - \E \Tr R.
\end{equation}
We use the following martingale decomposition:
\begin{equation}
\begin{split} \label{eq:zeta_N decomposition0}
\zeta_N &= \sum_{k=1}^N (\E_{k-1} \Tr R - \E_k \Tr R) = \sum_{k=1}^N (\E_{k-1} - \E_k) \Tr R = \sum_{k=1}^N (\E_{k-1} - \E_k) (\Tr R - \Tr R^{(k)}).
\end{split}
\end{equation}
From \eqref{RRdiff} and \eqref{schur},
\begin{equation}
\begin{split} \label{eq:zeta_N decomposition1}
\Tr R - \Tr R^{(k)} = R_{kk} + \sum^{(k)}_i \frac{R_{ik} R_{ki}}{R_{kk}} = R_{kk} + \sum_i^{(k)} R_{kk} \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pi} R^{(k)}_{iq} M_{qk}.
\end{split}
\end{equation}
Hence
\begin{equation}
\begin{split} \label{eq:zeta_N decomposition}
\zeta_N &= \sum_{k=1}^N (\E_{k-1} - \E_k) \left[ R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) \right].
\end{split}
\end{equation}
As in the previous section, we expand $R_{kk}$ using the Schur formula \eqref{schur}. Since
\begin{equation}
\left| \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right| \prec 1
\end{equation}
by \eqref{eq:large general}, one might expect that $R_{kk}$ has to be expanded up to the third-order term, i.e., up to the term of order $N^{-1}$. However, for any random variables $X_k$ and $X_{\ell}$ with $k > \ell$ adapted to the filtration,
\begin{equation}
\begin{split}
&\E \left[ (\E_{k-1} - \E_k)X_k \cdot (\E_{\ell-1} - \E_{\ell}) \overline X_{\ell} \right] = \E \left[ \E_{k-1}[ (\E_{k-1} - \E_k)X_k \cdot (\E_{\ell-1} - \E_{\ell}) \overline X_{\ell}] \right] \\
&= \E \left[ (\E_{k-1} - \E_k)X_k \cdot \E_{k-1}[(\E_{\ell-1} - \E_{\ell}) \overline X_{\ell}] \right] = 0.
\end{split}
\end{equation}
Thus,
\begin{equation} \label{eq:E_k sum change}
\E \left| \sum_{k=1}^N (\E_{k-1} - \E_k)X_k \right|^2 = \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k)X_k \right|^2.
\end{equation}
This implies, in particular, that if a random variable $Y_k$ satisfies $Y_k = {\mathcal O}(N^{-1})$, then
\begin{equation} \label{eq:E_k sum change2}
\sum_{k=1}^N (\E_{k-1} - \E_k) (X_k + Y_k) = \sum_{k=1}^N (\E_{k-1} - \E_k) X_k + {\mathcal O}_p (N^{-\frac{1}{2}}),
\end{equation}
where ${\mathcal O}_p (N^{-\frac{1}{2}})$ denotes an error term that is bounded by $N^{-\frac{1}{2} + \epsilon}$ in probability for any $\epsilon > 0$. Applying this argument to the expansion \eqref{schur expand} of $R_{kk}$ in \eqref{eq:zeta_N decomposition}, we find that
\begin{equation}
\begin{split} \label{zeta}
\zeta_N &= s \sum_{k=1}^N (\E_{k-1} - \E_k) \left[ \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) \right] \\
&\qquad + s^2 \sum_{k=1}^N (\E_{k-1} - \E_k) \left[ \left(Q_k -s \right) \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) \right] + {\mathcal O}_p (N^{-\frac{1}{2}}),
\end{split}
\end{equation}
where $Q_k= -M_{kk} + \sum_{r, t}^{(k)} M_{kr} R^{(k)}_{rt} M_{tk}$ as in \eqref{Qidefnh}.

\subsubsection{First term}

The first term on the right-hand side of \eqref{zeta} is given by
\begin{equation}
\begin{split} \label{z1}
& (\E_{k-1} - \E_k) \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \\
&= \E_{k-1} \left[ \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right] - \E_{k-1} \left[ \frac{J^2}{N^2} \sum_{p, q}^{(k)} (R^{(k)})^2_{pq} + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right] \\
&= \E_{k-1} \left[ \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} - \frac{J^2}{N^2} \sum_{p, q}^{(k)} (R^{(k)})^2_{pq} -s' \right] + {\mathcal O}(N^{-1}).
\end{split}
\end{equation}
This corresponds to $b_k$ of \cite{BY2005}.
\subsubsection{Second term}

In order to compute the second term on the right-hand side of \eqref{zeta}, note that
\begin{equation}
|Q_k-s| \left| \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right) \right| \prec \frac{1}{N}
\end{equation}
from \eqref{Qibasesmt} and \eqref{eq:large general}, since $\| R^{(k)} \| \leq \frac{1}{\im z}$ and $z \in {\mathcal K}$. Thus, the summand in the second term is given by
\begin{equation}
\begin{split} \label{EkEkqQ1k}
& (\E_{k-1} -\E_k ) \left[ (Q_k-s) \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right)\right] + {\mathcal O}(N^{-1}).
\end{split}
\end{equation}
Now
\begin{equation}
\begin{split}
& \E_k \left[ (Q_k-s) \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right)\right] \\
&= \E_{k-1} \left[ \left( - \frac{J'}{N} + \frac{J^2}{N^2} \sum_{r, t}^{(k)} R^{(k)}_{rt} + \frac{1}{N} \sum_{r}^{(k)} R^{(k)}_{rr}-s \right) \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right)\right] \\
&= \E_{k-1} \left[ \left( \frac{J^2}{N^2} \sum_{r, t}^{(k)} R^{(k)}_{rt} \right) \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right)\right] + {\mathcal O}(N^{-1}).
\end{split}
\end{equation}
Hence, \eqref{EkEkqQ1k} becomes
\begin{equation}
\begin{split} \label{z2}
\E_{k-1} \left[ \left( -M_{kk} + \sum_{r, t}^{(k)} M_{kr} R^{(k)}_{rt} M_{tk} - \frac{J^2}{N^2} \sum_{r, t}^{(k)} R^{(k)}_{rt} - s \right) (1 + s') \right] + {\mathcal O}(N^{-1}).
\end{split}
\end{equation}

\subsubsection{Simplified formula for the martingale decomposition}

From \eqref{zeta}, \eqref{z1}, and \eqref{z2}, we find that
\begin{equation} \label{phi_decompose}
\zeta_N = \sum_{k=1}^N \E_{k-1} \phi_k + {\mathcal O}_p(N^{-\frac{1}{2}}),
\end{equation}
where
\begin{equation}
\begin{split} \label{def_phi}
\phi_k &:= s \left( \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} - \frac{J^2}{N^2} \sum_{p, q}^{(k)} (R^{(k)})^2_{pq} - s' \right) \\
&\qquad + s^2 (1+s') \left( -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq} M_{qk} - \frac{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq} - s \right).
\end{split}
\end{equation}
Since $\frac{\mathrm{d}}{\mathrm{d} z} R^{(k)}= (R^{(k)})^2$ and $s'=s^2(1+s')$, this can also be written as
\begin{equation} \label{phi_k}
\phi_k = \frac{\partial}{\partial z} \left[ s \left( -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq} M_{qk} - \frac{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq} - s \right) \right].
\end{equation}
Note that $\phi_k \prec N^{-\frac{1}{2}}$.

\subsection{Covariance}

Let $z_1, z_2, \dots, z_p$ be $p$ distinct points in ${\mathcal K}$. In order to prove the finite-dimensional convergence of $\xi_N$, it suffices to show that the random vector $(\zeta_N (z_1), \zeta_N (z_2), \dots, \zeta_N (z_p))$ converges weakly to a $p$-dimensional mean-zero Gaussian vector with the covariance matrix $\Gamma(z_i, z_j)$ defined in \eqref{eq:covariance xi}. To prove this, we use the martingale CLT for $\sum_k \E_{k-1} \phi_k$. Let $z_1$ and $z_2$ be two distinct points in ${\mathcal K}$. Following \cite{BY2005}, we consider
\begin{equation}
\Gamma_N (z_1, z_2) = \sum_{k=1}^N \E_k \left[ \E_{k-1} [\phi_k (z_1)] \cdot \E_{k-1} [\phi_k (z_2)] \right].
\end{equation}
\end{equation} For simplicity, we introduce the notations \mathrm{b}egin{equation} s_1 = s(z_1), \qquad s_2 = s(z_2). \end{equation} Let \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{tilde Gamma} \mathrm{w}idetilde \Gamma_N (z_1, z_2) &= \sum_{k=1}^N \E_k \left[ \E_{k-1} \left[ -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq}(z_1) M_{qk} - {\mathfrak a}c{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_1) - s_1 \right] \right.\\ &\left. \qquad\qquad\qquad \times \E_{k-1} \left[ -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq}(z_2) M_{qk} - {\mathfrak a}c{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_2) - s_2 \right] \right] \end{split} \end{equation} so that \mathrm{b}egin{equation} \Gamma_N (z_1, z_2) = {\mathfrak a}c{\partial^2}{\partial z_1 \partial z_2} \left[ s_1 s_2 \mathrm{w}idetilde \Gamma_N (z_1, z_2) \right]. \end{equation} \subsubsection{The easy parts of $\mathrm{w}idetilde \Gamma_N (z_1, z_2)$} We now find the limit of $\mathrm{w}idetilde \Gamma_N (z_1, z_2)$. In order to simplify notations, let us write \mathrm{b}egin{equation} S_k(z):= \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq}(z) M_{qk}, \qquad T_k(z):= {\mathfrak a}c{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq}(z). \end{equation} Then a summand in the formula of $\mathrm{w}idetilde \Gamma_N (z_1, z_2)$ is \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{wrGamsumd} \E_k \left[ \E_{k-1} \left[ -M_{kk} +S_k(z_1)-T_k(z_1) - s_1 \right] \cdot \E_{k-1} \left[ -M_{kk} + S_k(z_2)-T_k(z_2) - s_2 \right] \right] . \end{split} \end{equation} We first estimate $S_k(z) - T_k(z) - s(z)$. By definition, \mathrm{b}egin{equation} S_k(z) - T_k(z) = \sum_{p, q}^{(k)} A_{kp} R^{(k)}_{pq}(z) A_{qk} + {\mathfrak a}c{J}{N} \sum_{p, q}^{(k)} A_{kp} R^{(k)}_{pq}(z) + {\mathfrak a}c{J}{N} \sum_{p, q}^{(k)} R^{(k)}_{pq}(z) A_{qk}. \end{equation} Using Lemma \ref{lem:M deviation} with $J=0$ (or the second part and the third part of Lemma \ref{lem:large deviation}), we find that \mathrm{b}egin{equation} \left| \sum_{p, q}^{(k)} A_{kp} R^{(k)}_{pq}(z) A_{qk} - {\mathfrak a}c{1}{N} \sum_p^{(k)} R^{(k)}_{pp}(z) \right| \prec {\mathfrak a}c{\|R^{(k)}\|}{\sqrt N}. \end{equation} Moreover, from the first part of Lemma \ref{lem:large deviation}, \mathrm{b}egin{equation} {\mathfrak a}c{1}{N} \left| \sum_{p, q}^{(k)} A_{kp} R^{(k)}_{pq}(z) \right| \prec {\mathfrak a}c{1}{N} \sum_q^{(k)} \left( {\mathfrak a}c{1}{N} \sum_p^{(k)} |R^{(k)}_{pq}(z)|^2 \right)^{{\mathfrak a}c{1}{2}} \leq \left( \sum_q^{(k)} {\mathfrak a}c{1}{N} \sum_p^{(k)} |R^{(k)}_{pq}(z)|^2 \right)^{{\mathfrak a}c{1}{2}} = {\mathfrak a}c{\|R^{(k)}\|}{\sqrt N}. \end{equation} Since $\| R^{(k)} \| \leq {\mathfrak a}c{1}{\im z}$ and $z \in {\mathcal K}$, and $|s^{(k)}(z) - s(z)| \prec N^{-1}$, we obtain that \mathrm{b}egin{equation} |S_k(z) - T_k(z) - s(z)| \prec N^{-{\mathfrak a}c{1}{2}}. \end{equation} Now we consider \eqref{wrGamsumd}. We note that \mathrm{b}egin{equation} \label{G1} \E_k \left( \E_{k-1} [M_{kk}] \right)^2 = {\mathfrak a}c{w_2}{N} + O(N^{-2}) \end{equation} and \mathrm{b}egin{equation} \label{G2} \E_k \left[ \E_{k-1} \left[ M_{kk} \right] \cdot \E_{k-1} \left[ S_k(z_2)-T_k(z_2) - s_2 \right] \right] = {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) . 
\end{equation} We also have \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{G3} &\E_k \left[ \E_{k-1} \left[ S_k(z_1)\right] \cdot \E_{k-1} \left[ T_k(z_2) + s_2 \right] \right] \\ &= \sum_{p, q}^{(k)} \E[ M_{kp} M_{qk}] \cdot \E_k \left[ \E_{k-1} [ R^{(k)}_{pq}(z_1) ] \cdot \E_{k-1} \left[{\mathfrak a}c{J^2}{N^2} \sum_{r, t}^{(k)} R^{(k)}_{rt}(z_2) + s_2 \right] \right] \\ &= \E_k \left[ \E_{k-1} \left[{\mathfrak a}c{J^2}{N^2} \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_1) + s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[{\mathfrak a}c{J^2}{N^2} \sum_{r, t}^{(k)} R^{(k)}_{rt}(z_2) + s_2 \right] \right] \\ &=\E_k \left[ \E_{k-1} \left[ T_k(z_1) + s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ T_k(z_2) + s_2 \right] \right]. \end{split} \end{equation} Similar estimates hold if $z_2$ in \eqref{G2} and \eqref{G3} is replaced by $z_1$. Noting the similarity of the formula of \eqref{G3} with $\E_k \left[ \E_{k-1} \left[ T_k(z_1) + s_1 \right] \cdot \E_{k-1} \left[ T_k(z_2) + s_2 \right] \right]$, \eqref{wrGamsumd} becomes \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{wrGamsumd1} &{\mathfrak a}c{w_2}{N} + \E_k \left[ \E_{k-1} \left[ S_k(z_1) \right] \cdot \E_{k-1} \left[ S_k(z_2) \right] \right] - \E_k \left[ \E_{k-1} \left[ T_k(z_1) + s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ T_k(z_2) + s^{(k)}_N (z_2) \right] \right] \\ \\ &+ \E_k \left[ \E_{k-1} \left[ s_1-s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ s_2-s^{(k)}_N (z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}). \end{split} \end{equation} \subsubsection{$\E_k \left[ \E_{k-1} \left[ S_k(z_1) \right] \cdot \E_{k-1} \left[ S_k(z_2) \right] \right]$} \label{subsub:S_k} We compute \mathrm{b}egin{equation} \mathrm{b}egin{split} &\E_k \left[ \E_{k-1} \left[ S_k(z_1) \right] \cdot \E_{k-1} \left[ S_k(z_2) \right] \right] \\ &= \E_k \left[ \E_{k-1} \left[ \sum_{p, q}^{(k)} \left( A_{kp} + {\mathfrak a}c{J}{N} \right) R^{(k)}_{pq}(z_1) \left( A_{qk} + {\mathfrak a}c{J}{N} \right) \right] \cdot \E_{k-1} \left[ \sum_{r, t}^{(k)} \left( A_{kr} + {\mathfrak a}c{J}{N} \right) R^{(k)}_{rt}(z_2) \left( A_{tk} + {\mathfrak a}c{J}{N} \right) \right] \right]. \end{split} \end{equation} We rearrange it in descending order of $J$ and calculate the conditional expectations. \mathrm{b}egin{enumerate}[1)] \item For $J^4$-terms, we get $$ {\mathfrak a}c{J^4}{N^4} \E_k \left[ \E_{k-1} \left[ \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_1) \right] \cdot \E_{k-1} \left[ \sum_{r, t}^{(k)} R^{(k)}_{rt}(z_2) \right] \right] = \E_k \left[ \E_{k-1} \left[ T_k(z_1) \right] \cdot \E_{k-1} \left[ T_k(z_2) \right] \right]. $$ \item For $J^3$-terms, the conditional expectation vanishes because it always contains a factor $\E[A_{k \cdot}]$ or $\E[A_{\cdot k}]$. \item For $J^2$-terms, we get \mathrm{b}egin{equation*} \mathrm{b}egin{split} &{\mathfrak a}c{J^2}{N^2} \E_k \left[ \E_{k-1} \left[ s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ \sum_{r, t}^{(k)} R^{(k)}_{rt}(z_2) \right] + \E_{k-1} \left[ \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_1) \right] \cdot \E_{k-1} \left[ s^{(k)}_N (z_2) \right] \right] \\ & = \E_k \left[ \E_{k-1} \left[ s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ T_k(z_2) \right] + \E_{k-1} \left[ T_k(z_1) \right] \cdot \E_{k-1} \left[ s^{(k)}_N (z_2) \right] \right] . \end{split} \end{equation*} We also have other terms, but they are all negligible, i.e., of order ${\mathcal O}(N^{-{\mathfrak a}c{3}{2}})$. (After summing over $k$, the contribution from such terms will be $N^{-{\mathfrak a}c{1}{2}}$.) 
For example, consider \mathrm{b}egin{equation} \mathrm{b}egin{split} X_k&={\mathfrak a}c{J^2}{N^2} \E_k \left[ \E_{k-1} \left[ \sum_{p, q}^{(k)} R^{(k)}_{pq}(z_1) A_{qk} \right] \cdot \E_{k-1} \left[ \sum_{r, t}^{(k)} A_{kr} R^{(k)}_{rt}(z_2) \right] \right] \\ &= {\mathfrak a}c{J^2}{N^3} \E_k \left[ \sum_{q:q>k} \E_{k-1} \left[\sum_p^{(k)} R^{(k)}_{pq}(z_1) \right] \cdot \E_{k-1} \left[ \sum_t^{(k)} R^{(k)}_{qt}(z_2) \right] \right]. \end{split} \end{equation} By naive power counting, we see that $X_k = {\mathcal O}(N^{-1})$. Since the contribution from the case $p=q$ is ${\mathcal O}(N^{-{\mathfrak a}c{3}{2}})$, we may assume that $p \neq q$. Expanding $R^{(k)}_{pq}$ by \mathrm{b}egin{equation} R^{(k)}_{pq}(z_1) = -R^{(k)}_{pp}(z_1) \sum_a^{(k, p)} M_{pa} R^{(k, p)}_{aq}(z_1) = -s_1 \sum_a^{(k, p)} M_{pa} R^{(k, p)}_{aq}(z_1) + {\mathcal O}(N^{-1}), \end{equation} we obtain that \mathrm{b}egin{equation} \mathrm{b}egin{split} X_k &= {\mathfrak a}c{-J^2 s_1}{N^3} \E_k \left[ \sum_{q:q>k} \E_{k-1} \left[ \sum_p^{(k)} \sum_a^{(k, p)} M_{pa} R^{(k, p)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ \sum_t^{(k)} R^{(k)}_{qt}(z_2)\right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= {\mathfrak a}c{-J^3 s_1}{N^4} \E_k \left[ \sum_{q:q>k} \E_{k-1} \left[ \sum_p^{(k)} \sum_a^{(k, p)} R^{(k, p)}_{aq}(z_1)\right] \cdot \E_{k-1} \left[ \sum_t^{(k)} R^{(k)}_{qt}(z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= {\mathfrak a}c{-J^3 s_1}{N^4} \E_k \left[ \sum_{q:q>k} \E_{k-1} \left[ \sum_p^{(k)} \sum_a^{(k)} R^{(k)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ \sum_t^{(k)} R^{(k)}_{qt}(z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= -Js_1 X_k + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}). \end{split} \end{equation} Hence, $X_k = {\mathcal O}(N^{-{\mathfrak a}c{3}{2}})$, which is negligible. \item The $J$-terms can be computed as in the previous case and find that the contribution is negligible, i.e., ${\mathcal O}(N^{-{\mathfrak a}c{3}{2}})$. Since the computation is similar to the previous case, we skip the proof. \item For the terms with no $J$, the conditional expectation vanishes unless $|\{ p, q, r, t \}| = 2$ or $p=q=r=t$. \mathrm{b}egin{enumerate} \item If $p=q \neq r=t$, we get \mathrm{b}egin{equation} \mathrm{b}egin{split} &{\mathfrak a}c{1}{N^2} \E_k \left[ \sum_{p \neq r}^{(k)} \E_{k-1} \left[ R^{(k)}_{pp}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k)}_{rr}(z_2) \right] \right] \\ &= \E_k \left[ \E_{k-1} \left[s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[s^{(k)}_N (z_2) \right] \right] - {\mathfrak a}c{s_1 s_2}{N} + {\mathcal O}(N^{-2}). \end{split} \end{equation} \item If $p=q=r=t$, we get \mathrm{b}egin{equation} \mathrm{b}egin{split} &\E_k \left[ \sum_p^{(k)} \E_{k-1} \left[ A_{kp} R^{(k)}_{pp}(z_1) A_{pk} \right] \cdot \E_{k-1} \left[ A_{kp} R^{(k)}_{pp}(z_2) A_{pk} \right] \right] \\ &= \sum_{p:p<k} {\mathfrak a}c{s_1 s_2}{N^2} + \sum_{p:p>k} {\mathfrak a}c{W_4 s_1 s_2}{N^2} + {\mathcal O}(N^{-2}) = {\mathfrak a}c{k}{N} {\mathfrak a}c{s_1 s_2}{N} + {\mathfrak a}c{N-k}{N} {\mathfrak a}c{W_4 s_1 s_2}{N} + {\mathcal O}(N^{-2}). 
\end{split} \end{equation} \item If $p=t \neq q=r$, we get \mathrm{b}egin{equation} \mathrm{b}egin{split} &\E_k \left[ \sum_{p \neq q}^{(k)} \E_{k-1} \left[ A_{kp} R^{(k)}_{pq}(z_1) A_{qk} \right] \cdot \E_{k-1} \left[ A_{kq} R^{(k)}_{qp}(z_2) A_{pk} \right] \right] \\ &= \E_k \left[ \sum_{p, q: p, q > k, p \neq q} (A_{kp} A_{qk} )^2 \cdot \E_{k-1} \left[ R^{(k)}_{pq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k)}_{qp}(z_2) \right] \right] \\ &= {\mathfrak a}c{1}{N^2} \E_k \left[ \sum_{p, q: p, q > k, p \neq q} \E_{k-1} \left[ R^{(k)}_{pq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k)}_{qp}(z_2) \right] \right] =: Y_k. \end{split} \end{equation} We note that $Y_k = {\mathcal O}(N^{-1})$. The idea in the estimate for $Y_k$ is similar to that for $X_k$, except that we expand both $R^{(k)}_{pq}(z_1)$ and $R^{(k)}_{qp}(z_2)$. Then, \mathrm{b}egin{equation} \mathrm{b}egin{split} Y_k &= {\mathfrak a}c{s_1 s_2}{N^2} \E_k \left[ \sum_{p, q: p, q > k, p \neq q} \sum_{a, b}^{(k, p)} \E_{k-1} \left[ M_{pa} R^{(k, p)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k, p)}_{qb}(z_2) M_{bp} \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= {\mathfrak a}c{s_1 s_2}{N^3} \E_k \left[ \sum_{p, q: p, q > k, p \neq q} \sum_{a:a>k}^{(p)} \E_{k-1} \left[ R^{(k, p)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k, p)}_{qa}(z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= {\mathfrak a}c{s_1 s_2}{N^3} \E_k \left[ \sum_{p, q: p, q > k, p \neq q} \sum_{a:a>k} \E_{k-1} \left[ R^{(k)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k)}_{qa}(z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}) \\ &= {\mathfrak a}c{N-k-1}{N} {\mathfrak a}c{s_1 s_2}{N^2} \E_k \left[ \sum_{a, q: a, q > k} \E_{k-1} \left[ R^{(k)}_{aq}(z_1) \right] \cdot \E_{k-1} \left[ R^{(k)}_{qa}(z_2) \right] \right] + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}). \end{split} \end{equation} Thus, writing the last sum for $a\neq q$ and $a=q$ separately, we find that \mathrm{b}egin{equation} Y_k = {\mathfrak a}c{N-k-1}{N} s_1 s_2 Y_k + {\mathfrak a}c{(N-k-1)(N-k)}{N^3} (s_1 s_2)^2 + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}), \end{equation} and we obtain that \mathrm{b}egin{equation} Y_k = \left( 1 - {\mathfrak a}c{N-k}{N} s_1 s_2 \right)^{-1} {\mathfrak a}c{(N-k)^2}{N^3} (s_1 s_2)^2 + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}). \end{equation} \item If $p=r \neq q=t$, the conditional expectation is the same as $Y_k$ in the previous case. (When $A$ is complex Hermitian, it vanishes.) \end{enumerate} \end{enumerate} Altogether, we obtain that \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{G4} &\E_k \left[ \E_{k-1} \left[ S_k(z_1) \right] \cdot \E_{k-1} \left[ S_k(z_2) \right] \right] = \E_k \left[ \E_{k-1} \left[ T_k(z_1) + s^{(k)}_N (z_1) \right] \cdot \E_{k-1} \left[ T_k(z_2) + s^{(k)}_N (z_2) \right] \right] \\ &- {\mathfrak a}c{s_1 s_2}{N} + {\mathfrak a}c{k}{N} {\mathfrak a}c{s_1 s_2}{N} + {\mathfrak a}c{N-k}{N} {\mathfrak a}c{W_4 s_1 s_2}{N} + 2\left( 1 - {\mathfrak a}c{N-k}{N} s_1 s_2 \right)^{-1} {\mathfrak a}c{(N-k)^2}{N^3} (s_1 s_2)^2 + {\mathcal O}(N^{-{\mathfrak a}c{3}{2}}). 
\end{split} \end{equation}
\subsubsection{Conclusion for $\widetilde \Gamma_N (z_1, z_2)$ and $\Gamma_N (z_1, z_2)$}
Combining \eqref{wrGamsumd1} and \eqref{G4}, we find that \eqref{wrGamsumd} is equal to
\begin{equation} \begin{split} \label{G45}
\frac{w_2}{N}- \frac{s_1 s_2}{N} + \frac{k}{N} \frac{s_1 s_2}{N} + \frac{N-k}{N} \frac{W_4 s_1 s_2}{N} + 2\left( 1 - \frac{N-k}{N} s_1 s_2 \right)^{-1} \frac{(N-k)^2}{N^3} (s_1 s_2)^2 + {\mathcal O}(N^{-\frac{3}{2}}).
\end{split} \end{equation}
Summing over $k$, we obtain from \eqref{tilde Gamma} that
\begin{equation} \begin{split}
\widetilde \Gamma_N (z_1, z_2) &= w_2 - 1 + \frac{1}{2} \left(W_4 - 3 \right) s_1 s_2 - \frac{2 \log (1- s_1 s_2)}{s_1 s_2} + {\mathcal O}(N^{-\frac{1}{2}}),
\end{split} \end{equation}
where we used
\begin{equation}
\int_0^1 \frac{a^2 x^2}{1- ax} \mathrm{d} x = -\frac{a}{2} - 1 - \frac{\log (1-a)}{a}
\end{equation}
for $a\in \C\setminus [1, \infty)$. To check that $s_1 s_2 \in \C\setminus [1, \infty)$, we notice that $\im s_1(z), \im s_2(z) > 0$ for $z \in {\mathcal K}$. If $(\re s_1)(\re s_2) > 0$, $\im (s_1 s_2) \neq 0$. If $(\re s_1)(\re s_2) \leq 0$, $\re (s_1 s_2) < 0$. Thus, in any case, $s_1 s_2 \in \C\setminus [1, \infty)$. Therefore,
\begin{equation} \begin{split} \label{eq:covariance gamma_N}
\Gamma_N (z_1, z_2) &= \frac{\partial^2}{\partial z_1 \partial z_2} \left[ s_1 s_2 \widetilde \Gamma_N (z_1, z_2) \right] \\
&= s_1' s_2' \left( (w_2 - 1) + 2 \left(W_4 - 3 \right) s_1 s_2 + \frac{2}{(1-s_1 s_2)^2} \right) + {\mathcal O}(N^{-\frac{1}{2}}),
\end{split} \end{equation}
which converges to $\Gamma(z_1, z_2)$ in probability.
\section{Proof of Proposition \ref{prop:gaussian xi}} \label{sec:miscellanies}
We conclude the proof of Proposition \ref{prop:gaussian xi} by establishing (a) the finite-dimensional convergence to Gaussian vectors and (b) the tightness of $\xi_N(z)$, as discussed in Section \ref{sec:outline}.
\subsection{Finite-dimensional convergence} \label{sub:martingale CLT}
To prove the finite-dimensional convergence, we use Theorem 35.12 of \cite{Billingsley_prob_meas}, the martingale central limit theorem. Recall the definition of $\phi_k$ in \eqref{phi_decompose} and \eqref{def_phi}. Since we already proved the convergence of the variance in the previous section, it suffices to check that
\begin{equation}
\sum_{k=1}^N \E \left[ |\E_{k-1} [\phi_k]|^2 \chi_{|\E_{k-1} [\phi_k]| \geq \epsilon} \right] \to 0
\end{equation}
for any ($N$-independent) $\epsilon > 0$, as $N \to \infty$. Since
\begin{equation}
\E \left[ |\E_{k-1} [\phi_k]|^2 \chi_{|\E_{k-1} [\phi_k]| \geq \epsilon} \right] \leq \frac{1}{\epsilon^2} \E \left[ |\E_{k-1} [\phi_k]|^4 \right],
\end{equation}
it is sufficient to prove that
\begin{equation} \label{Lyapunov}
\sum_{k=1}^N \E \left[ |\E_{k-1} [\phi_k]|^4 \right] \to 0
\end{equation}
as $N \to \infty$, which is the Lyapunov condition in \cite{BY2005}. The Lyapunov condition \eqref{Lyapunov} is obvious from the estimate $\phi_k \prec N^{-\frac{1}{2}}$, which was established in the previous section.
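Indeed, by the conditional Jensen inequality, $\E |\E_{k-1} [\phi_k]|^4 \leq \E |\phi_k|^4$, and the estimate $\phi_k \prec N^{-\frac{1}{2}}$ can be upgraded to the moment bound $\E |\phi_k|^4 \leq N^{-2+\epsilon}$ for any (small) $\epsilon > 0$ and $N$ large enough, since the high moments of $\phi_k$ are bounded by a fixed power of $N$ under the uniform subexponential decay condition. Hence
\begin{equation}
\sum_{k=1}^N \E \left[ |\E_{k-1} [\phi_k]|^4 \right] \leq \sum_{k=1}^N \E \left[ |\phi_k|^4 \right] \leq N \cdot N^{-2+\epsilon} = N^{-1+\epsilon} \to 0.
\end{equation}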
\subsection{Tightness of $(\zeta_N)$} \label{sub:tightness}
{Since $\xi_N(z)=\zeta_N(z)+ \E[\xi_N(z)]$ and the mean $\E[\xi_N(z)]$ converges, it is enough to check the tightness of the sequence $\zeta_N(z)$.} From Theorem 12.3 of \cite{Billingsley_conv}, it suffices to show that $(\zeta_N(z))$ is tight for a fixed $z$ and the following H\"older condition as in \cite{BY2005}: for some ($N$-independent) constant $K >0$,
\begin{equation} \label{eq:Holder condition}
\E|\zeta_N(z_1) - \zeta_N(z_2)|^2 \leq K|z_1 - z_2|^2, \qquad z_1, z_2 \in {\mathcal K}.
\end{equation}
The fact that $(\zeta_N(z))$ is tight for a fixed $z$ follows from the fact that the variance is bounded uniformly in $N$, as shown in \eqref{eq:covariance gamma_N}. We now check the H\"older condition. Note that since $R(z_1) - R(z_2) = (z_1 - z_2) R(z_1) R(z_2)$, we have
\begin{equation} \begin{split} \label{holder1}
&\E|\zeta_N(z_1) - \zeta_N(z_2)|^2 = |z_1 - z_2|^2 \E | \Tr R(z_1) R(z_2) - \E \Tr R(z_1) R(z_2)|^2 \\
&\qquad = |z_1 - z_2|^2 \E \left| \sum_{k=1}^N (\E_{k-1} - \E_k) \left( \Tr R(z_1) R(z_2) - \Tr R^{(k)}(z_1) R^{(k)}(z_2) \right) \right|^2.
\end{split} \end{equation}
We follow the arguments in Section \ref{sec:covariance} to estimate the right hand side of \eqref{holder1}. When compared with \eqref{eq:zeta_N decomposition0}, the main difference is that we do not need to precisely find the leading order term as in the covariance computation in Section \ref{sec:covariance}. For ease of notation, we set
\begin{equation}
R \equiv R(z_1), \qquad S \equiv R(z_2).
\end{equation}
We will frequently use the estimate
\begin{equation}
\| R \|, \| S \|, \| R^{(k)} \|, \| S^{(k)} \| \leq C
\end{equation}
for any $k = 1, 2, \dots, N$, uniformly for $z_1, z_2 \in {\mathcal K}$. For $i, j \neq k$,
\begin{equation} \begin{split}
R_{ij} S_{ji} - R^{(k)}_{ij} S^{(k)}_{ji} &= \left( R_{ij} - R^{(k)}_{ij} \right) S^{(k)}_{ji} + R^{(k)}_{ij} \left( S_{ji} - S^{(k)}_{ji} \right) + \left( R_{ij} - R^{(k)}_{ij} \right) \left( S_{ji} - S^{(k)}_{ji} \right) \\
&= \frac{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} + R^{(k)}_{ij} \frac{S_{jk} S_{ki}}{S_{kk}} + \frac{R_{ik} R_{kj}}{R_{kk}} \frac{S_{jk} S_{ki}}{S_{kk}}.
\end{split} \end{equation}
Thus, using \eqref{RRdiff},
\begin{equation}
\Tr RS - \Tr \left( R^{(k)} S^{(k)} \right) = \sum_{i, j}^{(k)} \left( \frac{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} + R^{(k)}_{ij} \frac{S_{jk} S_{ki}}{S_{kk}} + \frac{R_{ik} R_{kj}}{R_{kk}} \frac{S_{jk} S_{ki}}{S_{kk}} \right) + 2(RS)_{kk}
\end{equation}
and
\begin{equation} \begin{split} \label{eq:rs difference}
&\E|\zeta_N(z_1) - \zeta_N(z_2)|^2 \\
&= |z_1 - z_2|^2 \E \left| \sum_{k=1}^N (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} \left( \frac{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} + R^{(k)}_{ij} \frac{S_{jk} S_{ki}}{S_{kk}} + \frac{R_{ik} R_{kj}}{R_{kk}} \frac{S_{jk} S_{ki}}{S_{kk}} \right) + 2(RS)_{kk} \right|^2 \\
&= |z_1 - z_2|^2 \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} \left( \frac{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} + R^{(k)}_{ij} \frac{S_{jk} S_{ki}}{S_{kk}} + \frac{R_{ik} R_{kj}}{R_{kk}} \frac{S_{jk} S_{ki}}{S_{kk}} \right) + 2(RS)_{kk} \right|^2,
\end{split} \end{equation}
where we used \eqref{eq:E_k sum change} to get the last line.
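Let us also record why the resolvent identity $R(z_1) - R(z_2) = (z_1 - z_2) R(z_1) R(z_2)$ used above holds: writing the resolvent as $R(z) = (M - z)^{-1}$,
\begin{equation}
R(z_1) - R(z_2) = R(z_1) \left( (M - z_2) - (M - z_1) \right) R(z_2) = (z_1 - z_2) R(z_1) R(z_2),
\end{equation}
and the analogous identity holds for the minors $R^{(k)}$. In particular, $R(z_1)$ and $R(z_2)$ commute, which is used below.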
To estimate the right hand side of \eqref{eq:rs difference}, we rewrite the first term in the summand as \mathrm{b}egin{equation} \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} = \sum_{i, j}^{(k)} R_{kk} \sum_{p, q}^{(k)} M_{pk} R^{(k)}_{ip} R^{(k)}_{qj} M_{kq} S^{(k)}_{ji} = R_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{qp} M_{pk}. \end{equation} Since \mathrm{b}egin{equation} (\E_{k-1} - \E_k) \left[ {\mathfrak a}c{s_N^{(k)} (z_1)}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right] = 0, \end{equation} we obtain \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{rs1} &\E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} \right|^2 \\ &= \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \left[ R_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{qp} M_{pk} - {\mathfrak a}c{s_N^{(k)} (z_1)}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right] \right|^2 \\ &\leq 4 \sum_{k=1}^N \E \left| R_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{qp} M_{pk} - {\mathfrak a}c{s_N^{(k)} (z_1)}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right|^2. \end{split} \end{equation} Using that $|R_{kk}| \leq \| R \| \leq C$, we get \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{rs11} &\E \left| R_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{qp} M_{pk} - {\mathfrak a}c{R_{kk}}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right|^2 \\ &\leq C \, \E \left| \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{qp} M_{pk} - {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right|^2 \leq {\mathfrak a}c{C \| R^{(k)} S^{(k)} R^{(k)}\|^2}{N} \leq {\mathfrak a}c{C}{N}, \end{split} \end{equation} where we used Lemma \ref{lem:M deviation} to get the second inequality. Moreover, since \mathrm{b}egin{equation} \left| {\mathfrak a}c{1}{N} \sum_p^{(k)} \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right| \leq \left\| R^{(k)} S^{(k)} R^{(k)} \right\| \leq C, \end{equation} we also have that \mathrm{b}egin{equation} \label{rs12'} \E \left| {\mathfrak a}c{R_{kk}}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} - {\mathfrak a}c{s_N^{(k)} (z_1)}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right|^2 \leq C \, \E \left| R_{kk} - s_N^{(k)} (z_1) \right|^2. \end{equation} Recall that we defined $Q_k= -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq} M_{qk}$. Applying \eqref{schur expand} to expand $R_{kk}$ and using Corollary \ref{cor:local law}, we find that \mathrm{b}egin{equation} \label{r-s expansion} R_{kk} - s_N^{(k)} (z_1) = s(z_1) - s_N^{(k)} (z_1) + s(z_1)^2 (Q_k -s(z_1)) + {\mathcal O}(N^{-1}) = s(z_1)^2 (Q_k - s_N^{(k)} (z_1) ) + {\mathcal O}(N^{-1}). \end{equation} Thus, from Lemma \ref{lem:M deviation}, \mathrm{b}egin{equation} \label{rs12''} \E \left| R_{kk} - s_N^{(k)} (z_1) \right|^2 \leq C \, \E \left| -M_{kk} + \sum_{p, q}^{(k)} M_{kp} R^{(k)}_{pq} M_{qk} - {\mathfrak a}c{1}{N} \sum_p^{(k)} R^{(k)}_{pp} \right|^2 \leq {\mathfrak a}c{C}{N} \end{equation} hence, together with \eqref{rs12'}, we get \mathrm{b}egin{equation} \label{rs12} \E \left| {\mathfrak a}c{R_{kk}}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} - {\mathfrak a}c{s_N^{(k)} (z_1)}{N} \sum_p \left( R^{(k)} S^{(k)} R^{(k)} \right)_{pp} \right|^2 \leq {\mathfrak a}c{C}{N}. 
\end{equation} Combining \eqref{rs11} and \eqref{rs12} with \eqref{rs1}, we find that \mathrm{b}egin{equation} \label{eq:rs1 bound} \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} S^{(k)}_{ji} \right|^2 \leq C. \end{equation} Similarly, we can also obtain a bound \mathrm{b}egin{equation} \label{eq:rs2 bound} \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} R^{(k)}_{ij} {\mathfrak a}c{S_{jk}{S_{ki}}}{S_{kk}} \right|^2 \leq C. \end{equation} We expand the third term of the summand in \eqref{eq:rs difference} as \mathrm{b}egin{equation} \mathrm{b}egin{split} \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} {\mathfrak a}c{S_{jk}{S_{ki}}}{S_{kk}} &= R_{kk} S_{kk} \sum_{i, j}^{(k)} \sum_{p, q}^{(k)} M_{pk} R^{(k)}_{ip} R^{(k)}_{qj} M_{kq} \sum_{r, t}^{(k)} M_{rk} S^{(k)}_{jr} S^{(k)}_{ti} M_{kt} \\ &= R_{kk} S_{kk} \sum_{t, p}^{(k)} M_{kt} \left( S^{(k)} R^{(k)} \right)_{tp} M_{pk} \sum_{q, r}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} \right)_{qr} M_{rk} \\ &= R_{kk} S_{kk} \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} \right)^2 \end{split} \end{equation} since $R$ and $S$ commute. Following the decomposition idea we used in the proof of \eqref{eq:rs1 bound}, we first observe \mathrm{b}egin{equation} (\E_{k-1} - \E_k) \left[ s_N^{(k)} (z_1) s_N^{(k)} (z_2) \left( {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right)^2 \right] = 0. \end{equation} Thus, \mathrm{b}egin{equation} \mathrm{b}egin{split} &\E \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} {\mathfrak a}c{S_{jk}{S_{ki}}}{S_{kk}} R_{kk} \right|^2 \\ &= \E \left| (\E_{k-1} - \E_k) \left[ R_{kk} S_{kk} \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} \right)^2 - s_N^{(k)} (z_1) s_N^{(k)} (z_2) \left( {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right)^2 \right] \right|^2. \end{split} \end{equation} Since $|R_{kk} S_{kk}| \leq \| R \| \| S \| \leq C$ and ${\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \leq \| R \| \| S \| \leq C$, \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{rs31} &\E \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} {\mathfrak a}c{S_{jk}{S_{ki}}}{S_{kk}} R_{kk} \right|^2 \\ &\leq C \, \E \left| \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} \right)^2 - \left( {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right)^2 \right|^2 + C \, \E \left| R_{kk} S_{kk} - s_N^{(k)} (z_1) s_N^{(k)} (z_2) \right|^2. \end{split} \end{equation} Using a simple identity $A^2 - B^2 = (A-B)^2 + 2B(A-B)$, we estimate the first term in the right hand side of \eqref{rs31} by \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{rs311} &\E \left| \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} \right)^2 - \left( {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right)^2 \right|^2 \\ &= \E \left| \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} - {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right)^2 \right. \\ &\qquad \qquad \qquad \left. 
+ {\mathfrak a}c{2}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \left( \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} - {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right) \right|^2 \\ &\leq C \, \E \left| \sum_{p, q}^{(k)} M_{kp} \left( R^{(k)} S^{(k)} \right)_{pq} M_{qk} - {\mathfrak a}c{1}{N} \sum_p \left( R^{(k)} S^{(k)} \right)_{pp} \right|^2 \leq {\mathfrak a}c{C}{N}, \end{split} \end{equation} where we used Lemma \ref{lem:M deviation} in the last inequality. From \eqref{rs12}, we also find that \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{rs312} &\E \left| R_{kk} S_{kk} - s_N^{(k)} (z_1) s_N^{(k)} (z_2) \right|^2 = \E \left| \left( R_{kk} - s_N^{(k)} (z_1) \right) S_{kk} + s_N^{(k)} (z_1) \left( S_{kk} - s_N^{(k)} (z_2) \right) \right|^2 \\ &\leq C \, \E \left| R_{kk} - s_N^{(k)} (z_1) \right|^2 + C \, \E \left| S_{kk} - s_N^{(k)} (z_2) \right|^2 \leq {\mathfrak a}c{C}{N}. \end{split} \end{equation} From \eqref{rs31}, \eqref{rs311}, and \eqref{rs312}, we obtain a bound \mathrm{b}egin{equation} \label{eq:rs3 bound} \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \sum_{i, j}^{(k)} {\mathfrak a}c{R_{ik} R_{kj}}{R_{kk}} {\mathfrak a}c{S_{jk}{S_{ki}}}{S_{kk}} \right|^2 \leq C. \end{equation} Finally, the last term in \eqref{eq:rs difference} becomes \mathrm{b}egin{equation} (RS)_{kk} = R_{kk} S_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} \right)_{qp} M_{pk}, \end{equation} and one can prove by following the same argument as in the derivation of \eqref{eq:rs3 bound} that \mathrm{b}egin{equation} \label{eq:rs4 bound} \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \left[ R_{kk} S_{kk} \sum_{p, q}^{(k)} M_{kq} \left( R^{(k)} S^{(k)} \right)_{qp} M_{pk} \right] \right|^2 \leq C. \end{equation} From \eqref{eq:rs difference}, \eqref{eq:rs1 bound}, \eqref{eq:rs2 bound}, \eqref{eq:rs3 bound}, and \eqref{eq:rs4 bound}, we find that the H\"older condition \eqref{eq:Holder condition} holds, which concludes the proof for tightness of $(\zeta_N)$. \section{Proof of Lemma \ref{lem:Gamolrsm}} \label{sub:nonrandom} For $z \in \Gamma_0$, we have $|s_N(z) - s(z)| \prec N^{-1}$ from Corollary \ref{cor:local law}. Thus, for any $\epsilon > 0$, \mathrm{b}egin{equation} \int_{\Gamma_0} \E |\xi_N(z)|^2 \mathrm{d} z \leq N^{-1 + \epsilon} |\Gamma_0| = 4 N^{-1 + \epsilon - \delta}. \end{equation} Setting $\epsilon = {\mathfrak a}c{\delta}{2}$, we find that \eqref{nonrandom1} holds for $\Gamma_0$. To prove \eqref{nonrandom1} for $\Gamma_r$, it suffices to show that $\E|\xi_N(z)|^2 < K$ for some ($N$-independent) constant $K>0$. In Section \ref{sec:mean}, we proved that \mathrm{b}egin{equation} \E \xi_N = {\mathfrak a}c{s^2}{1-s^2} \left( -J' + {\mathfrak a}c{J^2 s}{1 + Js} + (w_2-1)s + s's + \left( W_4 - 3 \right) s^3 \right) + O(N^{-{\mathfrak a}c{1}{2} + \epsilon}), \end{equation} thus $|\E \xi_N|^2 < C$ for $z \in \Gamma_r$. We now estimate $\E |\zeta_N|^2 = \E |\xi_N - \E \xi_N|^2$. Recall that we showed in \eqref{eq:zeta_N decomposition} that \mathrm{b}egin{equation} \zeta_N = \sum_{k=1}^N (\E_{k-1} - \E_k) \left[ R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) \right]. 
\end{equation}
Following the idea in \eqref{rs11}, we use
\begin{equation}
(\E_{k-1} - \E_k) \left[ s_N^{(k)} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right] =0,
\end{equation}
hence
\begin{equation} \begin{split} \label{rl}
\E |\zeta_N|^2 &= \E \sum_{k=1}^N \left| (\E_{k-1} - \E_k) \left[ R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - s_N^{(k)} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right] \right|^2 \\
&\leq 4 \sum_{k=1}^N \E \left| R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - s_N^{(k)} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right|^2.
\end{split} \end{equation}
Define the event
\begin{equation}
\Omega_N := \{ \mu_1 \leq \widehat{J} + N^{-1/3} \}.
\end{equation}
From Lemma \ref{lem:largest eigenvalue}, we find $\p (\Omega_N^c) < N^{-D}$ for any (large) fixed $D>0$. On $\Omega_N$,
\begin{equation}
|R_{kk}| \leq \| R \| \leq \frac{1}{a_+ -\widehat{J} - N^{-1/3}} \leq C
\end{equation}
for any $k=1, 2, \dots, N$, uniformly for $z \in \Gamma_r$. Similarly, $\| R^{(k)} \| \leq C$ for any $k=1, 2, \dots, N$, uniformly for $z \in \Gamma_r$. Thus,
\begin{equation} \begin{split} \label{rl1}
&\E \left| \mathbbm{1}(\Omega_N) \left[ R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - R_{kk} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right) \right] \right|^2 \\
&\leq C \, \E \left| \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} - \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right|^2 \leq \frac{C \| R^{(k)} \|^4}{N} \leq \frac{C}{N}.
\end{split} \end{equation}
Moreover, since
\begin{equation}
\left| \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right| \leq \| R^{(k)} \|^2 \leq C
\end{equation}
on $\Omega_N$, from \eqref{rs12''}, we get
\begin{equation} \begin{split} \label{rl2}
&\E \left| \mathbbm{1}(\Omega_N) \left[ R_{kk} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right) - s_N^{(k)} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right] \right|^2 \\
&\leq C \, \E \left| \mathbbm{1}(\Omega_N) \left[R_{kk} - s_N^{(k)} \right] \right|^2 \leq \frac{C}{N}.
\end{split} \end{equation}
On $\Omega_N^c$, we use the trivial bound $\| R \|, \| R^{(k)} \| \leq \frac{1}{\im z} \leq N^{\delta}$. Then,
\begin{equation} \begin{split} \label{rl1'}
&\E \left| \mathbbm{1}(\Omega_N^c) \left[ R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - R_{kk} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right) \right] \right|^2 \\
&\leq \left( \E \left[ \mathbbm{1}(\Omega_N^c) |R_{kk}|^2 \right] \right)^{1/2} \left( \E \left| \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} - \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right|^4 \right)^{1/2} \leq C\p (\Omega_N^c)^{1/2} \frac{\| R \| \| R^{(k)} \|^4}{N} \leq \frac{C}{N}
\end{split} \end{equation}
and similarly,
\begin{equation} \label{rl2'}
\E \left| \mathbbm{1}(\Omega_N^c) \left[ R_{kk} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp} \right) - s_N^{(k)} \left( 1 + \frac{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right] \right|^2 \leq \frac{C}{N}.
\end{equation} Combining \eqref{rl1}, \eqref{rl2}, \eqref{rl1'}, and \eqref{rl2'}, we obtain \mathrm{b}egin{equation} \E \left| R_{kk} \left( 1 + \sum_{p, q}^{(k)} M_{kp} (R^{(k)})^2_{pq} M_{qk} \right) - s_N^{(k)} \left( 1 + {\mathfrak a}c{1}{N} \sum_p^{(k)} (R^{(k)})^2_{pp}\right) \right|^2 \leq {\mathfrak a}c{C}{N}, \end{equation} thus, from \eqref{rl}, \mathrm{b}egin{equation} \E |\xi_N|^2 \leq 2\, \E |\zeta_N|^2 + 2\, |\E \xi_N|^2 \leq C, \end{equation} which proves the lemma for $\Gamma_r$. The proof of the lemma for $\Gamma_l$ is the same. \section{Large deviation estimates} \label{subsec:largdeves} We prove Lemma \ref{lem:M deviation} for the spiked random matrix $M$. The case of non-spiked random matrix is well-known and we adapt its proof. We use the following lemma, sometimes referred to as `large deviation estimates'. \mathrm{b}egin{lem}[Lemma 8.1 and Lemma 8.2 of \cite{EYY_bulk}] \label{lem:large deviation} Let $a_1, \dots, a_N$ be independent (complex) random variables with mean zero and variance $1$. Suppose that $a_1, \dots, a_N$ satisfies the uniform subexponential decay condition. Then, for any deterministic complex numbers $A_i$ and $B_{ij}$ $(i, j = 1, 2, \dots, N)$, \mathrm{b}egin{equation} \mathrm{b}egin{split} \left| \sum_{i=1}^N A_i a_i \right| &\prec \left( \sum_{i=1}^N |A_i|^2 \right)^{{\mathfrak a}c{1}{2}} \\ \left| \sum_{i=1}^N A_i |a_i|^2 - \sum_{i=1}^N A_i \right| &\prec \left( \sum_{i=1}^N |A_i|^2 \right)^{{\mathfrak a}c{1}{2}} \\ \left| \sum_{i \neq j} a_i B_{ij} a_j \right| &\prec \left( \sum_{i \neq j} |B_{ij}|^2 \right)^{{\mathfrak a}c{1}{2}} \end{split} \end{equation} \end{lem} For the proof of Lemma \ref{lem:large deviation}, see Appendix B of \cite{EYY_bulk}. \mathrm{b}egin{proof}[Proof of Lemma \ref{lem:M deviation}] We consider the case $n=1$ for the first part of the lemma. We first decompose \mathrm{b}egin{equation} \label{eq:M decompose} M_{ip} S_{pq} M_{qi} = A_{ip} S_{pq} A_{qi} + {\mathfrak a}c{J}{N} S_{pq} A_{qi} + {\mathfrak a}c{J}{N} A_{ip} S_{pq} + {\mathfrak a}c{J^2}{N^2} S_{pq}. \end{equation} Then, \mathrm{b}egin{equation} \mathrm{b}egin{split} \label{eq:M variance} &\left| \sum_{p, q}^{(i)} M_{ip} S_{pq} M_{qi} - {\mathfrak a}c{1}{N} \sum_p^{(i)} S_{pp} \right|^2 \\ &\leq 4 \left| \sum_{p, q}^{(i)} A_{ip} S_{pq} A_{qi} - {\mathfrak a}c{1}{N} \sum_p^{(i)} S_{pp} \right|^2 + {\mathfrak a}c{4J^2}{N^2} \left| \sum_{p, q}^{(i)} S_{pq} A_{qi} \right|^2 + {\mathfrak a}c{4J^2}{N^2} \left| \sum_{p, q}^{(i)} A_{ip} S_{pq} \right|^2 + {\mathfrak a}c{4J^4}{N^4} \left| \sum_{p, q}^{(i)} S_{pq} \right|^2. \end{split} \end{equation} Taking the expectation, \mathrm{b}egin{equation} \mathrm{b}egin{split} &\E \left| \sum_{p, q}^{(i)} A_{ip} S_{pq} A_{qi} - {\mathfrak a}c{1}{N} \sum_p^{(i)} S_{pp} \right|^2 \\ &= \E \sum_{p, q, r, s}^{(i)} A_{ip} S_{pq} A_{qi} A_{ir} \overline{S_{rs}} A_{si} - \E \sum_{p, q, r} A_{ip} S_{pq} A_{qi} \overline{S_{rr}} - \E \sum_{p, q, r} A_{ip} \overline{S_{pq}} A_{qi} S_{rr} + {\mathfrak a}c{1}{N^2} \sum_{p, q}^{(i)} \overline{S_{pp}} S_{qq} \\ &= {\mathfrak a}c{1}{N^2} \sum_{p, q}^{(i)} |S_{pq}|^2 + {\mathfrak a}c{1}{N^2} \sum_{p, q}^{(i)} S_{pq} \overline{S_{qp}} + {\mathfrak a}c{W_4}{N^2} \sum_p^{(i)} |S_{pp}|^2. \end{split} \end{equation} Since \mathrm{b}egin{equation} \sum_{p, q}^{(i)} |S_{pq}|^2 = \| S \|_{HS}^2 \leq N \| S \|^2 \end{equation} where $\| \cdot \|_{HS}$ denotes the Hilbert-Schmidt norm. 
Thus, we find that
\begin{equation}
\E \left| \sum_{p, q}^{(i)} A_{ip} S_{pq} A_{qi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right|^2 \leq \frac{W_4 +2}{N} \| S \|^2.
\end{equation}
Similarly, for the other terms in \eqref{eq:M variance},
\begin{equation}
\E \left| \sum_{p, q}^{(i)} S_{pq} A_{qi} \right|^2 = \E \left| \sum_{p, q}^{(i)} A_{ip} S_{pq} \right|^2 \leq N \| S \|^2
\end{equation}
and
\begin{equation}
\left| \sum_{p, q}^{(i)} S_{pq} \right|^2 \leq N^2 \sum_{p, q}^{(i)} |S_{pq}|^2 \leq N^3 \| S \|^2.
\end{equation}
Altogether, we obtain that
\begin{equation}
\E \left| \sum_{p, q}^{(i)} M_{ip} S_{pq} M_{qi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right|^2 \leq 4(W_4 + 2 + 2J^2 + J^4) \frac{\| S \|^2}{N},
\end{equation}
which proves the first part of the lemma for $n=1$. The case $n=2$ can be proved analogously. Next, we prove the second part of the lemma. From the second inequality in Lemma \ref{lem:large deviation},
\begin{equation}
\left| \sum_p^{(i)} A_{ip} S_{pp} A_{pi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right| \prec \frac{1}{N} \left( \sum_p^{(i)} |S_{pp}|^2 \right)^{\frac{1}{2}}.
\end{equation}
From the third inequality in Lemma \ref{lem:large deviation},
\begin{equation}
\left| \sum_{p \neq q}^{(i)} A_{ip} S_{pq} A_{qi} \right| \prec \frac{1}{N} \left( \sum_{p \neq q}^{(i)} |S_{pq}|^2 \right)^{\frac{1}{2}}.
\end{equation}
Summing the inequalities above, we find that
\begin{equation}
\left| \sum_{p, q}^{(i)} A_{ip} S_{pq} A_{qi} - \frac{1}{N} \sum_p^{(i)} S_{pp} \right| \prec \frac{1}{N} \left( \sum_{p, q}^{(i)} |S_{pq}|^2 \right)^{\frac{1}{2}} = \frac{\| S \|_{HS}}{N} \leq \frac{\| S \|}{\sqrt N}.
\end{equation}
For the second term in \eqref{eq:M decompose}, we apply the first inequality in Lemma \ref{lem:large deviation} and get
\begin{equation} \begin{split}
\left| \frac{J}{N} \sum_{p, q}^{(i)} S_{pq} A_{qi} \right| \prec \frac{1}{N\sqrt N} \sum_p^{(i)} \left( \sum_q^{(i)} |S_{pq}|^2 \right)^{\frac{1}{2}} \leq \frac{1}{N} \left( \sum_{p, q}^{(i)} |S_{pq}|^2 \right)^{\frac{1}{2}} \leq \frac{\| S \|}{\sqrt N}.
\end{split} \end{equation}
The same estimate holds for the third term in \eqref{eq:M decompose}. Finally, for the last term in \eqref{eq:M decompose},
\begin{equation}
\left| \sum_{p, q}^{(i)} \frac{J^2}{N^2} S_{pq} \right| \leq \frac{J^2}{N} \left( \sum_{p, q}^{(i)} |S_{pq}|^2 \right)^{\frac{1}{2}} \leq \frac{\| S \|}{\sqrt N}.
\end{equation}
Summing the estimates, we obtain \eqref{eq:large general}.
\end{proof}
\begin{thebibliography}{10}
\bibitem{AizenmanLebowitzRuelle} M.~Aizenman, J.~L. Lebowitz, and D.~Ruelle.
\newblock Some rigorous results on the {S}herrington-{K}irkpatrick spin glass model.
\newblock {\em Comm. Math. Phys.}, 112(1):3--20, 1987.
\bibitem{ABC2013} A.~Auffinger, G.~Ben~Arous, and J.~{\v{C}}ern{\'y}.
\newblock Random matrices and complexity of spin glasses.
\newblock {\em Comm. Pure Appl. Math.}, 66(2):165--201, 2013.
\bibitem{BS2004} Z.~Bai and J.~W. Silverstein.
\newblock {CLT} for linear spectral statistics of large-dimensional sample covariance matrices.
\newblock {\em Ann. Probab.}, 32(1A):553--605, 2004.
\bibitem{BY2005} Z.~Bai and J.~Yao.
\newblock On the convergence of the spectral empirical process of {W}igner matrices. \newblock {\em Bernoulli}, 11(6):1059--1092, 2005. \mathrm{b}ibitem{Baik-Ben_Arous-Peche05} J.~Baik, G.~Ben~Arous, and S.~P{\'e}ch{\'e}. \newblock Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. \newblock {\em Ann. Probab.}, 33(5):1643--1697, 2005. \mathrm{b}ibitem{BaikLee} J.~Baik and J.~O. Lee. \newblock Fluctuations of the free energy of the spherical {S}herrington-{K}irkpatrick model. \newblock arXiv:1505.07349. \mathrm{b}ibitem{Billingsley_conv} P.~Billingsley. \newblock {\em Convergence of probability measures}. \newblock John Wiley \& Sons, Inc., New York-London-Sydney, 1968. \mathrm{b}ibitem{Billingsley_prob_meas} P.~Billingsley. \newblock {\em Probability and measure}. \newblock Wiley Series in Probability and Mathematical Statistics. John Wiley \& Sons, Inc., New York, third edition, 1995. \newblock A Wiley-Interscience Publication. \mathrm{b}ibitem{Bloemendal-Virag1} A.~Bloemendal and B.~Vir{\'a}g. \newblock Limits of spiked random matrices {I}. \newblock {\em Probability Theory and Related Fields}, 156(3-4):795--825, 2013. \mathrm{b}ibitem{BovierKurkovaLowes} A.~Bovier, I.~Kurkova, and M.~L{\"o}we, \newblock Fluctuations of the free energy in the {REM} and the {$p$}-spin {SK} models, \newblock {\em Ann. Probab.}, 30(2):605--651, 2002. \mathrm{b}ibitem{CDF2012} M.~Capitaine, C.~Donati-Martin, and D.~F{\'e}ral. \newblock Central limit theorems for eigenvalues of deformations of {W}igner matrices. \newblock {\em Ann. Inst. Henri Poincar\'e Probab. Stat.}, 48(1):107--133, 2012. \mathrm{b}ibitem{CarmonaHu} P.~Carmona and Y.~Hu. \newblock Universality in {S}herrington-{K}irkpatrick's spin glass model. \newblock {\em Ann. Inst. H. Poincar\'e Probab. Statist.}, 42(2):215--222, 2006. \mathrm{b}ibitem{Chen2014} W.-K. Chen. \newblock On the mixed even-spin {S}herrington-{K}irkpatrick model with ferromagnetic interaction. \newblock {\em Ann. Inst. Henri Poincar\'e Probab. Stat.}, 50(1):63--83, 2014. \mathrm{b}ibitem{ChenDeyPanchenkp2015} W.-K. Chen, P.~Dey, and D.~Panchenko. \newblock {Fluctuations of the free energy in the mixed $p$-spin models with external field}. \newblock arxiv:1509.07071. \mathrm{b}ibitem{ChenSen2015} W.-K. Chen and A.~Sen. \newblock {Parisi formula, disorder chaos and fluctuation for the ground state energy in the spherical mixed $p$-spin models}. \newblock arxiv:1512.08492. \mathrm{b}ibitem{CometsNeveu1995} F.~Comets and J.~Neveu. \newblock The {S}herrington-{K}irkpatrick model of spin glasses and stochastic calculus: the high temperature case. \newblock {\em Comm. Math. Phys.}, 166(3):549--564, 1995. \mathrm{b}ibitem{CrisantiSommers} A.~Crisanti and H.~J. Sommers. \newblock The spherical p-spin interaction spin glass model: the statics. \newblock {\em Z. Phys. B. Condensed Matter}, 87(3):341--354, 1992. \mathrm{b}ibitem{DemboZeitouni2015} A.~Dembo and O.~Zeitouni. \newblock Matrix optimization under random external fields. \newblock {\em J. Stat. Phys.}, 159(6):1306--1326, 2015. \mathrm{b}ibitem{EKYY1} L.~Erd{\H{o}}s, A.~Knowles, H.-T. Yau, and J.~Yin. \newblock Spectral statistics of {E}rd{\H o}s-{R}\'enyi graphs {I}: {L}ocal semicircle law. \newblock {\em Ann. Probab.}, 41(3B):2279--2375, 2013. \mathrm{b}ibitem{EYY_bulk} L.~Erd{\H{o}}s, H.-T. Yau, and J.~Yin. \newblock Bulk universality for generalized {W}igner matrices. \newblock {\em Probab. Theory Related Fields}, 154(1-2):341--407, 2012. \mathrm{b}ibitem{EYY} L.~Erd{\H{o}}s, H.-T. 
Yau, and J.~Yin. \newblock Rigidity of eigenvalues of generalized {W}igner matrices. \newblock {\em Adv. Math.}, 229(3):1435--1515, 2012. \mathrm{b}ibitem{Feral-Peche07} D.~F{\'e}ral and S.~P{\'e}ch{\'e}. \newblock The largest eigenvalue of rank one deformation of large {W}igner matrices. \newblock {\em Comm. Math. Phys.}, 272(1):185--228, 2007. \mathrm{b}ibitem{FrohlochZegarlinski1987} J.~Fr{\"o}hlich and B.~Zegarli{\'n}ski. \newblock Some comments on the {S}herrington-{K}irkpatrick model of spin glasses. \newblock {\em Comm. Math. Phys.}, 112(4):553--566, 1987. \mathrm{b}ibitem{FyodorovLeDoussal2014} Y.~V. Fyodorov and P.~Le~Doussal. \newblock Topology trivialization and large deviations for the minimum in the simplest random optimization. \newblock {\em J. Stat. Phys.}, 154(1-2):466--490, 2014. \mathrm{b}ibitem{GuerraToninelli2002} F.~Guerra and F.~L. Toninelli. \newblock The thermodynamic limit in mean field spin glass models. \newblock {\em Comm. Math. Phys.}, 230(1):71--79, 2002. \mathrm{b}ibitem{Johansson98} K.~Johansson. \newblock On fluctuations of eigenvalues of random {H}ermitian matrices. \newblock {\em Duke Math. J.}, 91(1):151--204, 1998. \mathrm{b}ibitem{KY2013_iso} A.~Knowles and J.~Yin. \newblock The isotropic semicircle law and deformation of {W}igner matrices. \newblock {\em Comm. Pure Appl. Math.}, 66(11):1663--1750, 2013. \mathrm{b}ibitem{KosterlitzThoulessJones} J.~Kosterlitz, D.~Thouless, and R.~Jones. \newblock Spherical model of a spin-glass. \newblock {\em Phys. Rev. Lett.}, 36(20):1217--1220, 1976. \mathrm{b}ibitem{LY} J.~O. Lee and J.~Yin. \newblock A necessary and sufficient condition for edge universality of {W}igner matrices. \newblock {\em Duke Math. J.}, 163(1):117--173, 2014. \mathrm{b}ibitem{LP} A.~Lytova and L.~Pastur. \newblock Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. \newblock {\em Ann. Probab.}, 37(5):1778--1840, 2009. \mathrm{b}ibitem{Mo} M.~Y. Mo. \newblock Rank 1 real {W}ishart spiked model. \newblock {\em Comm. Pure Appl. Math.}, 65(11):1528--1638, 2012. \mathrm{b}ibitem{PanchekoTalagrand2007} D.~Panchenko and M.~Talagrand. \newblock On the overlap in the multiple spherical {SK} models. \newblock {\em Ann. Probab.}, 35(6):2321--2355, 2007. \mathrm{b}ibitem{PMC} D.~Passemier, M.~R. McKay, and Y.~Chen. \newblock Asymptotic linear spectral statistics for spiked {H}ermitian random matrices. \newblock {\em J. Stat. Phys.}, 160(1):120--150, 2015. \mathrm{b}ibitem{PRS2013} A.~Pizzo, D.~Renfrew, and A.~Soshnikov. \newblock On finite rank deformations of {W}igner matrices. \newblock {\em Ann. Inst. Henri Poincar\'e Probab. Stat.}, 49(1):64--94, 2013. \mathrm{b}ibitem{SiSo} Y.~Sinai and A.~Soshnikov. \newblock Central limit theorem for traces of large random symmetric matrices with independent matrix elements. \newblock {\em Bol. Soc. Brasil. Mat. (N.S.)}, 29(1):1--24, 1998. \mathrm{b}ibitem{So1999} A.~Soshnikov. \newblock Universality at the edge of the spectrum in {W}igner random matrices. \newblock {\em Comm. Math. Phys.}, 207(3):697--733, 1999. \mathrm{b}ibitem{Subag2015} E.~Subag. \newblock {The complexity of spherical $p$-spin models - a second moment approach}. \newblock arxiv:1504.02251. \mathrm{b}ibitem{Subag2016} E.~Subag. \newblock {The geometry of the Gibbs measure of pure spherical spin glasses}. \newblock arxiv:1604.00679. \mathrm{b}ibitem{SubagZeitouni2015} E.~Subag and O.~Zeitouni. \newblock {The extremal process of critical points of the pure $p$-spin spherical spin glass model}. 
\newblock arxiv:1509.03098. \mathrm{b}ibitem{TalagrandParisiSpher} M.~Talagrand. \newblock Free energy of the spherical mean field model. \newblock {\em Probab. Theory Related Fields}, 134(3):339--382, 2006. \mathrm{b}ibitem{TV2010} T.~Tao and V.~Vu. \newblock Random matrices: universality of local eigenvalue statistics up to the edge. \newblock {\em Comm. Math. Phys.}, 298(2):549--572, 2010. \mathrm{b}ibitem{WSY} Q.~Wang, J.~W. Silverstein, and J.-f. Yao. \newblock A note on the {CLT} of the {LSS} for sample covariance matrix from a spiked population model. \newblock {\em J. Multivariate Anal.}, 130:194--207, 2014. \end{thebibliography} \end{document}
\begin{document} \parskip = 2mm \begin{center} {\bf\Large Lawvere theories and Jf-relative monads\footnote{\em 2000 Mathematical Subject Classification: 18C10 18C99, }} {\large\bf Vladimir Voevodsky}\footnote{School of Mathematics, Institute for Advanced Study, Princeton NJ, USA. e-mail: [email protected]} \vspace {3mm} {\large\bf December 2015 - January 2016} \end{center} \begin{abstract} In this paper we provide a detailed construction of an equivalence between the category of Lawvere theories and the category of relative monads on the obvious functor $Jf:F\rightarrow Sets$ where $F$ is the category with the set of objects $\nn$ and morphisms being the functions between the standard finite sets of the corresponding cardinalities. The methods of this paper are fully constructive and it should be formalizable in the Zermelo-Fraenkel theory without the axiom of choice and the excluded middle. It is also easily formalizable in the UniMath. \end{abstract} \tableofcontents \subsection{Introduction} The notion of a relative monad is introduced in \cite[Def.1, p. 299]{ACU} and considered in more detail in \cite{ACU2}. The categories of relative monads are parametrized by functors rather than by categories, i.e., while one speaks of a monad on a category $C$ one speaks of a relative monad on a functor $J:C\rightarrow D$. We reminds the relevant definitions and constructions of \cite{ACU2} in the first section of the paper. Following \cite{FPT} we let $F$ denote the category with the set of objects $\nn$ and the sets of morphisms $Mor_F(m,n)$ being the sets of functions $stn(m)\rightarrow stn(n)$ where $stn(n)=\{i\in\nn\,|\,i<n\}$ is the standard set with $n$ elements. For a universe $U$ let $Sets(U)$ be the category of sets in $U$ (see a detailed definition in Section \ref{LRML}). For any $U$ there is an obvious functor $Jf_U:F\rightarrow Sets(U)$. The main construction of the paper is a construction of an equivalence between the category $RMon(Jf_U)$ of relative monads on $Jf_U$ and the category $LW(U)$ of Lawvere theories in $U$ (see \cite{LandC} for the precise definition of $LW(U)$). While the main idea of this construction is straightforward its detailed presentation requires a considerable amount of work. In particular, since we work, as in \cite{LandC}, in the Zermelo-Fraenkel set theory without the axiom of choice and without the excluded middle axiom, we had to reprove a number of results about coproducts. One of the unexpected discoveries was the fact that it is impossible to construct the finite coproducts structure on the category $F$ and that instead one has to work with a weaker structure of finite ordered coproducts. We use the diagrammatic order in writing compositions, i.e., for $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ we write $f\circ g$ for the composition of $f$ and $g$. We do not make precise the concept of a {\em universe} that we use for some of the statements of the paper. It would be certainly sufficient to assume that $U$ is a Grothendieck universe. However, it seems likely that sets $U$ satisfying much weaker conditions can be used both for the statements and for the proofs of our results. The problem/construction pairs in the paper can be interpreted in the ZF-formalization as follows. The ``problem'' part is formalized as a formula $P(x_1,\dots,x_n)$ with the free variables $x_1,\dots,x_n$ corresponding to the objects introduced in the problem. 
The ``construction part'' is formalized as a theorem of the form ``there exist unique $x_1,\dots,x_n$ such that $P(x_1,\dots,x_n)$ and $Q(x_1,\dots,x_n)$'' where $Q$ is a formula expressing the detailed properties of the objects defined by the construction. For example, formulas $P$ and $Q$ in the ZF-formalization of the problem ``to construct a homomorphism of groups $H:G_1\rightarrow G_2$'' with the construction ``Let $G_1={\bf Z\rm}/2$, $G_2={\bf Z\rm}/2$ and $H=Id_{{\bf Z\rm}/2}$'' will be as follows. The formula $P(G_1,G_2,H)$ will be expressing the fact that $G_1$ is a group, $G_2$ is a group and $H$ is a homomorphism from $G_1$ to $G_2$. The formula $Q(G_1,G_2,H)$ will be expressing the fact that $G_1={\bf Z\rm}/2$, $G_2={\bf Z\rm}/2$ and $H$ equals the identity homomorphism of ${\bf Z\rm}/2$. One can envision a proof assistant with the user-level language being some convenient dependently typed language that translates this language into formulas and deductions of the ZF and then verifies these formulas and deductions according to the rules of the first-order logic. \subsection{Relative monads} \begin{definition} \llabel{2015.12.22.def1} Let $J:C\rightarrow D$ be a functor. A relative monad ${\bf RR}$ on $J$ or a $J$-relative monad is a collection of data of the form \begin{enumerate} \item a function $RR:Ob(C)\rightarrow Ob(D)$, \item for each $X$ in $C$ a morphism $\eta(X):J(X)\rightarrow RR(X)$, \item for each $X,Y$ in $C$ and $f:J(X)\rightarrow RR(Y)$ a morphism $\rho(f):RR(X)\rightarrow RR(Y)$, \end{enumerate} such that the following conditions hold: \begin{enumerate} \item for any $X\in C$, $\rho(\eta(X))=Id_{RR(X)}$, \item for any $f:J(X)\rightarrow RR(Y)$, $\eta(X)\circ \rho(f)=f$, \item for any $f:J(X)\rightarrow RR(Y)$, $g:J(Y)\rightarrow RR(Z)$, $$\rho(f)\circ \rho(g)=\rho(f\circ \rho(g))$$ \end{enumerate} \end{definition} The following definition repeats \cite[Definition 2.2, p.4]{ACU2}. \begin{definition}\llabel{2015.12.22.def2} Let $J:C\rightarrow D$ be a functor and ${\bf RR}=(RR,\eta,\rho)$, ${\bf RR}'=(RR',\eta',\rho')$ be two relative monads on $J$. A morphism $\phi:{\bf RR}\rightarrow {\bf RR}'$ is a function $\phi:Ob(C)\rightarrow Mor(D)$ that to each $X\in C$ assigns a morphism $\phi(X):RR(X)\rightarrow RR'(X)$ such that \begin{enumerate} \item for any $X\in C$ one has $\eta'(X)=\eta(X)\circ \phi(X)$, \item for any $f:J(X)\rightarrow RR(Y)$ one has $$\rho(f)\circ \phi(Y)=\phi(X)\circ \rho'(f\circ \phi(Y))$$ \end{enumerate} \end{definition} \begin{lemma} \llabel{2015.12.22.l1} Let $J:C\rightarrow D$ be a functor and ${\bf RR}$ a relative monad on $J$. Then the function $X\mapsto Id_{RR(X)}$ is a morphism of relative monads ${\bf RR}\rightarrow {\bf RR}$. \end{lemma} \begin{proof} Both conditions of Definition \ref{2015.12.22.def2} are straightforward to prove. \end{proof} \begin{lemma} \llabel{2015.12.22.l2} Let $J:C\rightarrow D$ be a functor and ${\bf RR},{\bf RR}',{\bf RR}''$ be relative monads on $J$. Then if $\phi$ and $\phi'$ are functions $Ob(C)\rightarrow Mor(D)$ which are morphisms of relative monads ${\bf RR}\rightarrow {\bf RR}'$ and ${\bf RR}'\rightarrow {\bf RR}''$ then the function $X\mapsto \phi(X)\circ \phi'(X)$ is a morphism ${\bf RR}\rightarrow {\bf RR}''$. \end{lemma} \begin{proof} Let $X\in C$ then $$\eta(X)\circ \phi(X)\circ \phi'(X)=\eta'(X)\circ \phi'(X)=\eta''(X)$$ this proves the first condition of Definition \ref{2015.12.22.def2}. 
To prove the second condition let $f:J(X)\rightarrow RR(Y)$ then we have $$\rho(f)\circ \phi(Y)\circ \phi'(Y)=\phi(X)\circ \rho(f\circ \phi(Y))\circ \phi'(Y)=\phi(X)\circ \phi'(X)\circ \rho(f\circ \phi(Y)\circ \phi'(Y))$$ \end{proof} \begin{problem}\llabel{2015.12.22.prob3} Let $J:C\rightarrow D$ be a functor. To construct a category $RMon(J)$ of relative monads on $J$. \end{problem} \begin{construction}\rm\llabel{2015.12.18.constr3} Applying the same approach as before we obtain category data with the set of objects being the set $RMon(J)$ of relative monads on $J$, the set of morphisms being the set of triples $(({\bf RR},{\bf RR}'),\phi)$ where ${\bf RR}$, ${\bf RR}'$ are relative monads on $J$ and $\phi$ is a morphism of relative monads from ${\bf RR}$ to ${\bf RR}'$ as given by Definition \ref{2015.12.22.def2}, the identity morphisms are given by Lemma \ref{2015.12.22.l1} and compositions by Lemma \ref{2015.12.22.l2}. It follows immediately from the corresponding properties of morphisms in $C$ that these data satisfies the left and right identity and the associativity axioms forming a category. The set of morphisms from ${\bf RR}$ to ${\bf RR}'$ in this category is not equal to the set of morphisms of relative monads but it is in the obvious bijective correspondence with this set and we will use both functions of this bijective correspondence as coercions\footnote{When a function $f:X\rightarrow Y$ is declared as a {\em coercion} then every time that one has an expression $a$ that denotes an element of the set $X$ in a position where an element of the set $Y$ is expected one replaces it by $f(a)$}. \end{construction} \begin{lemma} \llabel{2016.01.03.l5} Let $\phi:{\bf RR}\rightarrow {\bf RR}'$ be a morphism of relative monads on $J:C\rightarrow D$ such that for all $X\in C$ the morphism $\phi(X):RR(X)\rightarrow RR'(X)$ is an isomorphism. Then $\phi$ is an isomorphism in the category of relative monads on $J$. \end{lemma} \begin{proof} Set $\phi'(X)=(\phi(X))^{-1}$. In view of the definition of the composition of morphisms of relative monads and the identity morphism of relative monads it is sufficient to verify that the family $\phi'$ is a morphism of relative monads from ${\bf RR}'$ to ${\bf RR}$. That it is the inverse to $\phi$ is then straightforward to prove. Let us check the two conditions of Definition \ref{2015.12.22.def2}. The equality $$\eta(X)=\eta'(X)\circ \phi'(X)$$ follows from the equality $\eta'(X)=\eta(X)\circ \phi(X)$ by composing it with $\phi'(X)$ on the right and using the fact that $\phi(X)\circ \phi'(X)=Id_{RR(X)}$. The second condition is of the form, for any $f':J(X)\rightarrow RR'(Y)$, \begin{eq}\llabel{2016.01.03.eq2} \rho'(f')\circ\phi'(Y)=\phi'(X)\circ\rho'(f'\circ\phi'(Y)) \end{eq} Applying the second condition of Definition \ref{2015.12.22.def2} for $\phi$ to $f=f'\circ \phi'(Y)$ and using the equality $\phi'(Y)\circ\phi(Y)=Id_{RR'(Y)}$ we get $$\rho(f'\circ\phi'(Y))\circ\phi(Y)=\phi(X)\circ\rho'(f'\circ \phi'(Y)\circ\phi(Y))=\phi(X)\circ\rho'(f')$$ It remains to compose this equality with $\phi'(Y)$ on the right and $\phi'(X)$ on the left and rewrite the equalities $\phi(Y)\circ\phi'(Y)=Id_{RR(Y)}$ and $\phi'(X)\circ \phi(X)=Id_{RR'(X)}$. \end{proof} Let us remind the definition of the Kleisli category of a relative monad (see \cite[p.8]{ACU2}). \begin{problem}\llabel{2015.12.22.prob1} Let $J:C\rightarrow D$ be a functor and ${\bf RR}$ be a relative monad on $J$. To define a category $K({\bf RR})$ that will be called Kleisli category of ${\bf RR}$. 
\end{problem} \begin{construction}\rm\llabel{2015.12.22.constr3} We set $Ob(K({\bf RR}))=Ob(C)$ and $$Mor(K({\bf RR}))=\amalg_{X,Y\in Ob(K({\bf RR}))}Mor(J(X),RR(Y))$$ We will, as before, identify the set of morphisms in $K({\bf RR})$ from $X$ to $Y$ with $Mor(J(X),RR(Y))$ by means of the obvious bijections. For $X\in Ob(C)$ we set $Id_{X,K({\bf RR})}=\eta(X)$. For $f\in Mor(J(X),RR(Y))$, $g\in Mor(J(Y),RR(Z))$ we set $f\circ_{K({\bf RR})}g=f\circ_D \rho(g)$. Verification of the associativity and the left and right identity axioms of a category are straightforward. \end{construction} \begin{problem}\llabel{2015.12.22.prob2} Let $J:C\rightarrow D$ be a functor and ${\bf RR}$ be a relative monad on $J$. To construct a functor $L_{{\bf RR}}:C\rightarrow K({\bf RR})$. \end{problem} \begin{construction}\rm\llabel{2015.12.22.constr4} We set $L_{Ob}=Id$ and for $f:X\rightarrow Y$, $L(f)=J(f)\circ_D\eta(Y)$. Verification of the identity and composition axioms of a functor are straightforward. \end{construction} The following lemma will be needed below. \begin{lemma} \llabel{2016.01.03.l4b} Let $u:X\rightarrow Y$ in $C$ and $g:J(Y)\rightarrow RR(Z)$ in $D$. Then one has $$L_{{\bf RR}}(u)\circ_{K({\bf RR})} g=J(u)\circ_D g$$ \end{lemma} \begin{proof} One has $$L_{{\bf RR}}(u)\circ_{K({\bf RR})} g=L_{{\bf RR}}(u)\circ_D\rho(g)=J(u)\circ_D\eta(Y)\circ_D\rho(g)=J(u)\circ_D g$$ \end{proof} \begin{problem}\llabel{2015.12.22.prob4} Let $J:C\rightarrow D$ be a functor and $\phi:{\bf RR}\rightarrow {\bf RR}'$ a morphism of relative monads on $J$. To construct a functor $K(\phi):K({\bf RR})\rightarrow K({\bf RR}')$ such that $L_{{\bf RR}}\circ K(\phi)=L_{{\bf RR}'}$. \end{problem} \begin{construction}\rm\llabel{2015.12.22.constr5} This construction is not, as far as we can tell, described in \cite{ACU2} and we will do all computations in detail. We set $K(\phi)_{Ob}=Id$. For $f\in Mor_D(J(X),RR(Y))$ we set $$K(\phi)(f)=f\circ_D \phi(Y).$$ For the identity axiom of a functor we have $$K(\phi)(Id_{X,K({\bf RR})})=K(\phi)(\eta_X)=\eta_X\circ_D \phi(X)=\eta'_X=Id_{X,K({\bf RR}')}$$ For the composition axiom, for $f\in Mor_D(J(X),RR(Y))$, $g\in Mor_D(J(Y),RR(Z))$ we have $$K(\phi)(f\circ_{{\bf RR}} g)=K(\phi)(f\circ_D \rho(g))=f\circ_D \rho(g)\circ_D \phi(Z)=f\circ_D \phi(Y)\circ_D\rho'(g\circ_D\phi(Z))$$ and $$K(\phi)(f)\circ_{{\bf RR}'} K(\phi)(g)=(f\circ_D\phi(Y))\circ_{{\bf RR}'}(g\circ_D\phi(Z))=f\circ_D\phi(Y)\circ_D\rho'(g\circ_D\phi(Z))$$ The condition $L_{{\bf RR}}\circ K(\phi)=L_{{\bf RR}'}$ obviously holds on objects and on morphisms we have for $f\in Mor_C(X,Y)$: $$(L_{{\bf RR}}\circ K(\phi))(f)=K(\phi)(L_{{\bf RR}}(f))=K(\phi)(J(f)\circ_D\eta(Y))=J(f)\circ_D\eta(Y)\circ_D\phi(Y)=$$$$J(f)\circ_D\eta'(Y)=L_{{\bf RR}'}(f).$$ Construction \ref{2015.12.22.constr5} is completed. \end{construction} \begin{lemma} \llabel{2016.01.01.l2} Let $J:C\rightarrow D$ be a functor. Then one has: \begin{enumerate} \item for a relative monad ${\bf RR}$ on $J$, $K(Id_{{\bf RR}})=Id_{K({\bf RR})}$, \item for morphisms $\phi:{\bf RR}\rightarrow {\bf RR}'$, $\phi':{\bf RR}'\rightarrow {\bf RR}''$ of relative monads on $J$, $K(\phi\circ \phi')=K(\phi)\circ K(\phi')$. \end{enumerate} \end{lemma} \begin{proof} The first assertion follows from the right identity axiom for $D$. The second assertion follows from the associativity of composition in $D$. 
\end{proof} \subsection{Binary coproducts and finite ordered coproducts in the constructive setting} In the absence of the Axiom of Choice (AC) the structure of finite coproducts on a category cannot be obtained from an initial object and the structure of binary coproducts. The same, of course, is true for products: the proof of \cite[Prop.1, p. 73]{MacLane} essentially depends on the AC. However, binary coproducts allow one to construct finite {\em ordered} coproducts as described below. \begin{definition}\llabel{2015.12.20.def1} A binary coproducts structure on a category $C$ is a function that assigns to any pair of objects $X,Y$ of $C$ an object $X\amalg Y$ and two morphisms $$ii_0^{X,Y}:X\rightarrow X\amalg Y$$ $$ii_1^{X,Y}:Y\rightarrow X\amalg Y$$ such that for any object $W$ of $C$ and any two morphisms $f_X:X\rightarrow W$, $f_Y:Y\rightarrow W$ there exists a unique morphism $\Sigma(f_X,f_Y):X\amalg Y\rightarrow W$ such that $$ii_0^{X,Y}\circ\Sigma(f_X,f_Y)=f_X$$ $$ii_1^{X,Y}\circ\Sigma(f_X,f_Y)=f_Y$$ \end{definition} \begin{definition} \llabel{2015.12.24.def2} A finite ordered coproducts structure on a category $C$ is a function that for any $m\ge 0$ and any sequence $X=(X_0,\dots,X_{m-1})$ of objects of $C$ defines an object $\amalg_{i=0}^{m-1}X_i$ and morphisms $ii^X_i:X_i\rightarrow \amalg_{i=0}^{m-1}X_i$ such that for any sequence $f_i:X_i\rightarrow Y$, $i=0,\dots,m-1$ there exists a unique morphism $\Sigma_{i=0}^{m-1} f_i:\amalg_{i=0}^{m-1}X_i\rightarrow Y$ such that \begin{eq}\llabel{2015.12.24.eq2} ii^X_j\circ \Sigma_{i=0}^{m-1} f_i = f_j \end{eq} \end{definition} Note that for $m=0$ there is a unique sequence of the form $(X_0,\dots,X_{m-1})$, namely the empty sequence, and the corresponding $\amalg_{i=0}^{m-1}X_i$ is an initial object of $C$. \begin{problem}\llabel{2015.12.24.prob1} Given a category $C$ with an initial object $0$ and a binary coproducts structure, to construct a finite ordered coproducts structure on $C$. \end{problem} \begin{construction}\rm\llabel{2015.12.24.constr1} By induction on $m$. For $m=0$ one defines $\amalg X_i$ to be $0$. The construction of the morphism $\Sigma f_i$, in this case for the empty set of morphisms $f_i$, and its properties follow easily from the definition of an initial object. For $m=1$ one defines $\amalg X_i=X_0$, $ii^X_0=Id_{X_0}$ and $\Sigma f_i=f_0$. The verification of the conditions is again straightforward. For the successor one defines $$\amalg_{i=0}^m X_i=(\amalg_{i=0}^{m-1}X_i)\amalg X_m$$ and $$\Sigma_{i=0}^m f_i=\Sigma(\Sigma_{i=0}^{m-1}f_i, f_m)$$ The morphisms $ii^X_i$ for $i=0,\dots,m-1$ are given by $$ii^X_i=ii^{X'}_i\circ ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0$$ where $X'$ is the sequence $(X_0,\dots,X_{m-1})$, and $$ii^X_m=ii^{\amalg_{i=0}^{m-1}X_i , X_m}_1$$ To show that $\Sigma_{i=0}^{m}f_i$ satisfies the condition of Definition \ref{2015.12.24.def2} we have: \begin{enumerate} \item for $j<m$ $$ii^X_j\circ \Sigma_{i=0}^{m} f_i = ii^{X}_j\circ \Sigma(\Sigma_{i=0}^{m-1}f_i, f_m) =ii^{X'}_j\circ ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0 \circ \Sigma(\Sigma_{i=0}^{m-1}f_i, f_m)=ii^{X'}_j\circ \Sigma_{i=0}^{m-1}f_i=f_j$$ where the third equality is from the definition of a binary coproduct, \item for $j=m$ $$ii^X_m\circ \Sigma_{i=0}^{m} f_i=ii^{\amalg_{i=0}^{m-1}X_i , X_m}_1\circ \Sigma(\Sigma_{i=0}^{m-1}f_i, f_m)=f_m$$ \end{enumerate} To show that $f=\Sigma_{i=0}^mf_i$ is the unique morphism satisfying these conditions let $g$ be another morphism such that $$ii^X_j\circ g=f_j$$ for all $j=0,\dots,m$.
Both $f$ and $g$ are morphisms from $(\amalg_{i=0}^{m-1}X_i)\amalg X_m$ to $Y$. By the uniqueness condition of Definition \ref{2015.12.20.def1} it is sufficient to show that $$ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0\circ f=ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0\circ g$$ and $$ii^{\amalg_{i=0}^{m-1}X_i , X_m}_1\circ f=ii^{\amalg_{i=0}^{m-1}X_i , X_m}_1\circ g$$ To prove the first equality it is sufficient, by the inductive assumption, to prove that $$ii^{X'}_j\circ ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0\circ f=ii^{X'}_j\circ ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0\circ g$$ for all $j=0,\dots,m-1$. This follows from our assumption since $$ii^{X'}_j\circ ii^{\amalg_{i=0}^{m-1}X_i , X_m}_0=ii^X_j$$ Similarly, the second equality follows from our assumption because $$ii^{\amalg_{i=0}^{m-1}X_i , X_m}_1=ii^X_m.$$ This completes Construction \ref{2015.12.24.constr1}. \end{construction} \begin{lemma} \llabel{2016.01.03.l4} Let $C$ be a category with an initial object $0$ and binary coproducts structure $(\amalg, ii_0, ii_1)$. Let $(\amalg',ii'_i)$ be the finite ordered coproducts structure defined on $C$ by Construction \ref{2015.12.24.constr1}. Then for $X=(X_0,X_1)$ one has $$(\amalg')_{i=0}^1X_i=X_0\amalg X_1$$ and $$(ii')_0^X=ii_0^{X_0,X_1}$$ $$(ii')_1^X=ii_1^{X_0,X_1}$$ \end{lemma} \begin{proof} The proof is by unfolding Construction \ref{2015.12.24.constr1} in the case $m=2$. \end{proof} \begin{lemma} \llabel{2015.12.24.l5} Given a category $C$ with the finite ordered coproducts structure $(\amalg_i X_i,ii^X_i)$ let $f_i:X_i\rightarrow Y$ where $i=0,\dots,m-1$ and $g:Y\rightarrow Z$. Then one has \begin{eq}\llabel{2015.12.24.eq4} (\Sigma_i f_i)\circ g=\Sigma_i ( f_i\circ g) \end{eq} \end{lemma} \begin{proof} By the uniqueness condition of Definition \ref{2015.12.24.def2} it is sufficient to show that for all $i=0,\dots, m-1$ the precompositions of both sides of (\ref{2015.12.24.eq4}) with $ii^X_i$ are equal. We have $$ii^X_i\circ (\Sigma_i f_i)\circ g=f_i\circ g=ii^X_i\circ \Sigma_i( f_i\circ g)$$ \end{proof} \begin{lemma} \llabel{2015.12.24.l4} Let $C$ be a category with a finite ordered coproducts structure and $(X_0,\dots,X_{m-1})$ a sequence of objects of $C$. Then one has $$\Sigma_{i=0}^{m-1} ii_i^X=Id_{\amalg_{i=0}^{m-1}X_i}$$ \end{lemma} \begin{proof} It follows from the uniqueness part of Definition \ref{2015.12.24.def2}. \end{proof} \begin{definition}\llabel{2016.01.01.def1} Let $(C,\amalg,ii_0,ii_1)$ and $(C',\amalg',ii_0',ii_1')$ be two categories with the binary coproducts structure. A functor $G:C\rightarrow C'$ is said to strictly respect the binary coproduct structures if for all $X,Y\in C$ one has: $$G(X\amalg Y)=G(X)\amalg' G(Y)$$ and $$G(ii_0^{X,Y})=(ii_0')^{G(X),G(Y)}$$ $$G(ii_1^{X,Y})=(ii_1')^{G(X),G(Y)}$$ \end{definition} \begin{definition}\llabel{2016.01.01.def2} Let $(C,\amalg,ii_i)$ and $(C',\amalg',ii'_i)$ be two categories with finite ordered coproducts structures. A functor $G:C\rightarrow C'$ is said to strictly respect the finite ordered coproducts structures if for all $m\in\nn$ and all sequences $X=(X_0,\dots,X_{m-1})$ one has $$G(\amalg_{i=0}^{m-1}X_i)=(\amalg')_{i=0}^{m-1}G(X_i)$$ and for all $i=0,\dots,m-1$ one has $$G(ii_i^X)=(ii')_i^{G(X)}$$ \end{definition} \begin{lemma} \llabel{2016.01.01.l3} Let $(C,\amalg,ii_0,ii_1)$ and $(C',\amalg',ii_0',ii_1')$ be two categories with the binary coproducts structure and let $0$, $0'$ be initial objects in $C$ and $C'$ respectively. Let $G:C\rightarrow C'$ be a functor.
Then $G$ strictly respects the finite ordered coproducts structures on $C$ and $C'$ defined by the initial objects and the binary coproducts structures by Construction \ref{2015.12.24.constr1} if and only if one has: \begin{enumerate} \item $G(0)=0'$, \item $G$ strictly respects the binary coproduct structure. \end{enumerate} \end{lemma} \begin{proof} The ``only if'' part follows from the fact that the initial objects of $C$ and $C'$ defined by the finite ordered coproducts structure of Construction \ref{2015.12.24.constr1} are $0$ and $0'$, together with Lemma \ref{2016.01.03.l4}. The proof of the ``if'' part is easy by induction on the length of the sequence $X=(X_0,\dots,X_{m-1})$ of Definition \ref{2016.01.01.def2}. \end{proof} \begin{remark}\rm\llabel{2016.01.05.rem1} It is not true in general that a finite ordered coproducts structure is determined by the corresponding initial object and the binary coproducts structure. In particular, the converse of Lemma \ref{2016.01.01.l3} is false: a functor that strictly respects the initial object and the binary coproducts structure defined by a finite ordered coproducts structure need not strictly respect the finite ordered coproducts structure itself. \end{remark} \begin{lemma}\llabel{2016.01.01.l6} Let $(C,\amalg,ii_i)$ and $(C',\amalg',ii'_i)$ be two categories with finite ordered coproducts structures and $G:C\rightarrow C'$ a functor that strictly respects the finite ordered coproducts structures. Let $X=(X_0,\dots,X_{m-1})$ be a sequence of objects of $C$ and $f_i:X_i\rightarrow Y$ a sequence of morphisms. Then one has \begin{eq}\llabel{2016.01.01.eq2} G(\Sigma_{i=0}^{m-1} f_i)=\Sigma_{i=0}^{m-1} G(f_i) \end{eq} where the $\Sigma$ on the left is with respect to $(\amalg,ii_i)$ and $\Sigma$ on the right is with respect to $(\amalg',ii'_i)$. \end{lemma} \begin{proof} Both the left and the right hand side of (\ref{2016.01.01.eq2}) are morphisms from $\amalg_{i=0}^{m-1}G(X_i)$ to $G(Y)$ according to Definition \ref{2016.01.01.def2}. The right hand side is the unique morphism with these domain and codomain such that for all $i=0,\dots,m-1$ its pre-composition with $(ii')_i^{G(X)}$ equals $G(f_i)$. It remains to show that the same property holds for the left hand side. We have $$(ii')_i^{G(X)}\circ G(\Sigma_{i=0}^{m-1} f_i)=G(ii_i^X)\circ G(\Sigma_{i=0}^{m-1} f_i)=G(ii_i^X\circ \Sigma_{i=0}^{m-1} f_i)=G(f_i).$$ The lemma is proved. \end{proof} \subsection{More on the category $F$} Following \cite{FPT} we let $F$ denote the category with the set of objects $\nn$ and the set of morphisms from $m$ to $n$ being $Fun(stn(m),stn(n))$, where $stn(m)=\{i\in\nn\,|\,i<m\}$ is our choice for the standard set with $m$ elements (cf. \cite{LandC}). For $m,n\in\nn$ let $ii_0^{m,n}:stn(m)\rightarrow stn(m+n)$ and $ii_1^{m,n}:stn(n)\rightarrow stn(m+n)$ be the injections of the initial segment of length $m$ and the concluding segment of length $n$. \begin{lemma} \llabel{2016.01.03.l2} One has: \begin{enumerate} \item $0$ is the initial object of $F$, \item the function $$(m,n)\mapsto (m+n, ii_0^{m,n}, ii_1^{m,n})$$ is a binary coproducts structure on $F$. \end{enumerate} \end{lemma} \begin{proof} We have $stn(0)=\emptyset$ and there is a unique function from $\emptyset$ to any other set. The second assertion can be reduced to the case $n=1$ by induction on $n$ and then proved by direct reasoning involving the details of the set-theoretic definition of a function.
\end{proof} \begin{definition} \llabel{2016.01.03.d1} The binary coproducts structure on $F$ defined by Lemma \ref{2016.01.03.l2} is called the standard binary coproducts structure. The finite ordered coproducts structure on $F$ defined by Lemma \ref{2016.01.03.l2} and Construction \ref{2015.12.24.constr1} is called the standard finite ordered coproducts structure. \end{definition} \begin{example} \llabel{2016.01.03.ex1}\rm There are binary coproducts structures on $F$ that are different from the standard binary coproducts structure. For example, the function that is equal to the standard binary coproducts structure on all pairs $(m,n)$ other than $(1,1)$ and such that $1\amalg 1=2$, $ii_0^{1,1}(0)=1$ and $ii_1^{1,1}(0)=0$ is a binary coproducts structure on $F$ that is not equal to the standard one. \end{example} \begin{remark}\rm \llabel{2016.01.03.rem1} It is easy to define the concept of a finite coproducts structure on a category. The only non-trivial choice one has to make is which of the definitions of a finite set to use, and it is reasonable to define a finite set as a set for which there exists, in the ordinary logical sense, $m\in\nn$ and a bijection from $stn(m)$ to this set. One can show then that it is impossible to construct a finite coproducts structure on $F$ without using the axiom of choice. Indeed, one would have to define for each finite set $I$ and a function $X:I\rightarrow \nn$ the coproduct object $\amalg X=\amalg_{i\in I}X(i)\in\nn$ and a family of functions $$ii_i^X:stn(X(i))\rightarrow stn(\amalg X)$$ for $i\in I$ such that for any $n$ the function $$Fun(stn(\amalg X), stn(n))\rightarrow \prod_{i\in I}Fun(stn(X(i)),stn(n))$$ defined by this family is a bijection. The latter condition is easily shown to be equivalent to the condition that $$stn(\amalg X)=\amalg_{i\in I}Im(ii_i^X)$$ One can also prove that if such a structure exists then $\amalg X=\Sigma_{i\in I} X(i)$ where the sum on the right is the usual commutative sum in $\nn$. Consider the case when $I$ is a set with $2$ elements and $X(i)=1$ for all $i\in I$. Then $\amalg X = 2$ and $ii_i^X:stn(1)\rightarrow stn(2)$ are functions whose images do not intersect and cover $stn(2)$. Then the function $i\mapsto ii_i^X(0)$ is a bijection from $I$ to $stn(2)$, i.e., we have found a canonical bijection from any finite set with 2 elements to $stn(2)$. This amounts to a particular case of the axiom of choice for the proper class of all sets with $2$ elements or, if we consider finite coproducts relative to a universe $U$, for the set of sets with $2$ elements in $U$. \end{remark} \begin{lemma} \llabel{2016.01.03.l3} Consider $F$ with the standard finite ordered coproducts structure. Then for any $m\in\nn$, $n_0,\dots,n_{m-1}\in\nn$ one has: \begin{enumerate} \item $\amalg_{i=0}^{m-1}n_i=\Sigma_{i=0}^{m-1}n_i$, \item for each $i=0,\dots, m-1$ and $j=0,\dots, n_i-1$ one has $$ii_i^{(n_0,\dots,n_{m-1})}(j)=(\Sigma_{l=0}^{i-1}n_l)+j$$ In particular, $ii_i^{(1,\dots,1)}(0)=i$. \end{enumerate} \end{lemma} \begin{proof} By induction on $m$ using Construction \ref{2015.12.24.constr1}. \end{proof} \subsection{Lawvere theories} Lawvere theories were introduced in \cite{Lawvere}. Let us recall an equivalent but more direct definition here.
\begin{definition} \llabel{2015.11.24.def1} A Lawvere theory structure on a category $T$ is a functor $L:F\rightarrow T$ such that the following conditions hold: \begin{enumerate} \item $L$ is a bijection on the sets of objects, \item $L(0)$ is an initial object of $T$, \item for any $m,n\in\nn$ the square $$ \begin{CD} L(0) @>>> L(n)\\ @VVV @VVL(ii_1^{m,n}) V\\ L(m) @>L(ii_0^{m,n})>> L(m+n) \end{CD} $$ is a push-out square. \end{enumerate} A Lawvere theory is a pair $(T,L)$ where $T$ is a category and $L$ is a Lawvere theory structure on $T$. \end{definition} \begin{lemma} \llabel{2015.12.24.l3} A functor $L:F\rightarrow T$ is a Lawvere structure on $T$ if an only if it is bijective on objects, $L(0)$ is an initial object of $T$ and the function $$(X,Y)\rightarrow (L(L^{-1}(X)+L^{-1}(Y)), L(ii_0^{L^{-1}(X),L^{-1}(Y)}), L(ii_1^{L^{-1}(X),L^{-1}(Y)}))$$ is a binary coproducts structure on $T$. \end{lemma} \begin{proof} It follows by unfolding definitions and rewriting the equalities $L(L^{-1}(X))=X$ and $L^{-1}(L(n))=n$. \end{proof} \begin{definition} \llabel{2016.01.03.def2} Let $(T,L)$ be a Lawvere theory. The binary coproducts structure on $T$ defined in Lemma \ref{2015.12.24.l3} is called the standard binary coproducts structure defined by (the Lawvere theory structure) $L$. The finite ordered coproducts structure on $T$ defined by the initial object $L(0)$ and the standard binary coproducts structure on $T$ by Construction \ref{2015.12.24.constr1} is called the standard finite ordered coproducts structure defined by $L$. \end{definition} Everywhere below, unless the opposite is explicitly stated, we consider, for a Lawvere theory $(T,L)$ the category $T$ with the standard binary coproduct and finite ordered coproduct structures. \begin{lemma}\llabel{2015.12.24.l5b} Let $(T,L)$ be a Lawvere theory. Then $L$ strictly respects the standard finite coproduct structures on $F$ and $T$, i.e., for any $m\in\nn$, $n_0,\dots,n_{m-1}\in \nn$ one has: \begin{enumerate} \item $\amalg_{i=0}^{m-1} L(n_i)=L(\Sigma_{i=0}^{m-1} n_i)$, \item for any $i=0,\dots,m-1$, $$L(ii_i^{(n_0,\dots,n_{m-1})})=ii_i^{(L(n_0),\dots,L(n_{m-1}))}$$ \end{enumerate} \end{lemma} \begin{proof} Simple by induction on $m$ using the explicit form of Construction \ref{2015.12.24.constr1}. \end{proof} \begin{lemma}\llabel{2016.01.05.l1} Let $(T,L)$ be a Lawvere theory and let $u\in Fun(stn(m),stn(n))$. Then one has $$L(u)=\Sigma_{i=0}^{m-1}ii^{(L(1),\dots,L(1))}_{u(i)}$$ \end{lemma} \begin{proof} Both sides of the equality are morphisms from $L(m)$ to $L(n)$ in $T$. Since by Lemma \ref{2015.12.24.l5b}(1) $L(m)$ is the finite coproduct of the sequence $(L(1),\dots,L(1))$ to prove that two morphisms from $L(m)$ are equal it is sufficient to prove that their pre-compositions with $ii^{(L(1),\dots,L(1))}_i$ are equal for all $i=0,\dots,m-1$. We have $$ii^{(L(1),\dots,L(1))}_i\circ \Sigma_{i=0}^{m-1}ii^{(L(1),\dots,L(1))}_{u(i)}=ii^{(L(1),\dots,L(1))}_{u(i)}=L(ii^{(1,\dots,1)}_{u(i)})$$ and $$ii^{(L(1),\dots,L(1))}_i\circ L(u)=L(ii^{(1,\dots,1)}_i)\circ L(u)=L(ii^{(1,\dots,1)}_i\circ u)$$ It remains to show that $$ii^{(1,\dots,1)}_{u(i)}=ii^{(1,\dots,1)}_i\circ u$$ in $F$. Since both sides are functions from $stn(1)$ it is sufficient to prove that their values on $0$ are equal. This follows from Lemma \ref{2016.01.03.l3}. \end{proof} Recall that a morphism of Lawvere theories $G:(T,L)\rightarrow (T',L')$ is a functor $G:T\rightarrow T'$ such that $L\circ G=L'$. 
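Note that, since the object components of $L$ and $L'$ are bijections, the condition $L\circ G=L'$ already determines $G$ on objects: for every object $X$ of $T$ one necessarily has $$G(X)=L'(L^{-1}(X)),$$ so that a morphism of Lawvere theories is completely specified by its morphism component. This observation is used implicitly in the constructions below. The next two lemmas describe how such morphisms interact with the standard coproduct structures.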
\begin{lemma} \llabel{2015.01.01.l4} Let $G:(T,L)\rightarrow (T',L')$ be a morphism of Lawvere theories. Then $G$ strictly respects the binary coproduct structures of Lemma \ref{2015.12.24.l3}. \end{lemma} \begin{proof} It follows by unfolding definitions and rewriting the equalities $L(L^{-1}(X))=X$ and $L^{-1}(L(n))=n$. \end{proof} \begin{lemma} \llabel{2016.01.01.l5} Let $G:(T,L)\rightarrow (T',L')$ be a morphism of Lawvere theories. Then $G$ strictly respects the standard ordered finite coproduct structures on $T$ and $T'$. \end{lemma} \begin{proof} It follows directly from Lemmas \ref{2016.01.01.l3} and \ref{2015.01.01.l4} and the equality $G(L(0))=(L\circ G)(0)=L'(0)$. \end{proof} \subsection{Lawvere theories and $Jf$-relative monads} \llabel{LRML} Let us start by reminding that for any set $U$ there is a category $Sets(U)$ of the following form. The set of objects of $Sets(U)$ is $U$. The set of morphisms is $$Mor(Sets(U))=\cup_{X,Y\in U}Fun(X,Y)$$ Since a function from $X$ to $Y$ is defined as a triple $(X,Y,G)$ where $G$ is the graph subset of this function the domain and codomain functions are well defined on $Mor(Sets(U))$ such that $$Mor_{Sets(U)}(X,Y)=Fun(X,Y)$$ and a composition function can be defined that restricts to the composition of functions function on each $Mor_{Sets(U)}(X,Y)$. Finally the identity function $U\rightarrow Mor(Sets(U))$ is obvious and the collection of data that one obtains satisfies the axioms of a category. This category is called the category of sets in $U$ and denoted $Sets(U)$. We will only consider the case when $U$ is a universe. Following \cite{ACU} we let $Jf_U:F\rightarrow Sets(U)$ denote the functor that takes $n$ to $stn(n)$ and that is the identity on morphisms between two objects (on the total sets of morphisms the morphism component of this functor is the inclusion of a subset). Recall that we use the expression ``a $J$-relative monad'' as a synonym for the expression ``a relative monad on $J$''. By simply unfolding definitions we get the following explicit form for the definition of a $Jf_U$-relative monad. \begin{lemma} \llabel{2016.01.01.l1} A $Jf_U$-relative monad is a collection of data of the form: \begin{enumerate} \item for each $n\in\nn$ a set $RR(n)$ in $U$, \item for each $n\in\nn$ a function $stn(n)\rightarrow RR(n)$, \item for each $m,n\in \nn$ and $f:stn(m)\rightarrow RR(n)$, a function $\rho(f):RR(m)\rightarrow RR(n)$, \end{enumerate} such that the following conditions hold: \begin{enumerate} \item for all $n\in\nn$, $\rho(\eta(n))=Id_{RR(n)}$, \item for all $f:stn(m)\rightarrow RR(n)$, $\eta(m)\circ \rho(f)=f$, \item for all $f:stn(k)\rightarrow RR(m)$, $g:stn(m)\rightarrow RR(n)$, $\rho(f)\circ \rho(g)=\rho(f\circ \rho(g))$. \end{enumerate} \end{lemma} The main goal of this section is to provide a construction for the following problem. \begin{problem}\llabel{2016.01.05.prob1} For a universe $U$ to construct an equivalence between the category $LW(U)$ of Lawvere theories in $U$ and the category $RMon(Jf_U)$ of $Jf_U$-relative monads. \end{problem} The construction will be given in Construction \ref{2016.01.05.constr1} below. \begin{lemma}\llabel{2015.12.22.l3} Let ${\bf RR}$ be a relative monad on $Jf:F\rightarrow Sets(U)$. Then $(K({\bf RR}),L_{{\bf RR}})$ is a Lawvere theory. \end{lemma} \begin{proof} We need to prove that the pair $(K({\bf RR}),L_{{\bf RR}})$ satisfies conditions of Definition \ref{2015.11.24.def1}. The first condition is obvious. 
The second condition is also obvious since $Fun(stn(0),RR(n))$ is a one point set for any set $RR(n)$. The third condition is straightforward to prove as well since the square $$ \begin{CD} Fun(stn(m+n),RR(k)) @>ii_1^{m,n}\circ\_>> Fun(stn(n),RR(k))\\ @Vii_0^{m,n}\circ\_VV @VVV\\ Fun(stn(m),RR(k)) @>>> Fun(stn(0),RR(k)) \end{CD} $$ is a pull-back square for any set $RR(k)$. \end{proof} \begin{problem}\llabel{2016.01.01.prob1} To construct a functor $RML_U:RMon(Jf_U)(U)\rightarrow LW(U)$. \end{problem} \begin{construction}\rm\llabel{2015.12.22.def5} We define the object component of $RML$ setting $$RML_{Ob}({\bf RR})=(K({\bf RR}),L_{{\bf RR}})$$ It is well defined by Lemma \ref{2015.12.22.l3}. We define the morphism component of $RLM$ setting $RML_{Mor}(\phi)=K(\phi)$. It is well defined by the condition of Problem \ref{2015.12.22.prob4}. The identity and composition axioms of a functor follow from Lemma \ref{2016.01.01.l2}. \end{construction} Below we consider, for a Lawvere theory $(T,L)$, the category $T$ with the finite ordered coproducts structure obtained by applying Lemma \ref{2015.12.24.l3} and Construction \ref{2015.12.24.constr1}. \begin{problem}\llabel{2015.12.22.prob5} Let $U$ be a universe and $(T,L)$ a Lawvere theory in $U$. To construct a $Jf_U$-relative monad ${\bf RR}=(RR,\eta,\rho)$. \end{problem} \begin{construction}\rm\llabel{2015.12.22.constr6} We set: \begin{enumerate} \item $RR(n)=Mor_T(L(1),L(n))$, \item $\eta(n)$ is the function $stn(n)\rightarrow Mor_T(L(1),L(n))$ given by $$\eta(n)(i) = ii_i^{(L(1),\dots,L(1))}.$$ This function is well defined because $$\amalg_{i=0}^{n-1}L(1)=L(n)$$ by Lemma \ref{2015.12.24.l5b}, \item for $f\in Fun(stn(m),Mor_T(L(1),L(n)))$ we define $$\rho(f)\in Fun(Mor_T(L(1),L(m)), Mor_T(L(1),L(n)))$$ as $g\mapsto g\circ \Sigma_{i=0}^{m-1} f(i)$. This formula is again well-defined in view of Lemma \ref{2015.12.24.l5b}. \end{enumerate} Let us verify the conditions of Lemma \ref{2016.01.01.l1}. For the first condition we have $$\rho(\eta(n))(g)=g\circ \Sigma_{i=0}^{n-1}\eta(n)(i)=g\circ \Sigma_{i=0}^{n-1}ii_i^{(L(1),\dots,L(1))}=g\circ Id_{L(n)}=g$$ where the third equality is by Lemma \ref{2015.12.24.l4}. For the second condition let $f\in Fun(stn(m),Mor_T(L(1),L(n)))$. To verify that $\eta(m)\circ \rho(f)=f$ we need to verify that these two functions from $stn(m)$ are equal, i.e., that for each $i=0,\dots,m-1$ we have $$(\eta(m)\circ \rho(f))(i)=f(i)$$ We have $$(\eta(m)\circ \rho(f))(i)=\rho(f)(\eta(m)(i))=\rho(f)(ii_i^{(L(1),\dots,L(1))})=ii_i^{(L(1),\dots,L(1))}\circ \Sigma_{j=0}^{m-1}f(j)=f(i)$$ To prove the third condition we need to show that $$\rho(f)\circ \rho(g)=\rho(f\circ \rho(g))$$ for all $f\in Fun(stn(k),Mor_T(L(1),L(m)))$ and $g\in Fun(stn(m),Mor_T(L(1),L(n)))$. Both sides are functions from $Mor_T(L(1),L(k))$. To verify that they are equal we need to show that for any $h\in Mor_T(L(1),L(k))$ we have $$(\rho(f)\circ \rho(g))(h)=\rho(f\circ \rho(g))(h)$$ We have $$(\rho(f)\circ \rho(g))(h)=\rho(g)(\rho(f)(h))=\rho(g)(h\circ \Sigma_{i=0}^{k-1}f(i))=h\circ (\Sigma_{i=0}^{k-1} f(i))\circ (\Sigma_{j=0}^{m-1} g(j))$$ and $$\rho(f\circ \rho(g))(h)=h\circ (\Sigma_{i=0}^{k-1} (f\circ \rho(g))(i))=h\circ (\Sigma_{i=0}^{k-1} (\rho(g)(f(i))))=h\circ (\Sigma_{i=0}^{k-1} (f(i)\circ \Sigma_{j=0}^{m-1}g(j)))$$ The right hand sides of these two expressions are equal by Lemma \ref{2015.12.24.l5}. This completes the construction. \end{construction} We let $LRM(T,L)$ denote the $Jf_U$-relative monad defined in Construction \ref{2015.12.22.constr6}. 
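It may be instructive to unfold Construction \ref{2015.12.22.constr6} in the simplest case; the following example is only an illustration and is not used below. Take $(T,L)=(F,Id_F)$, which is a Lawvere theory by Lemma \ref{2016.01.03.l2} and Lemma \ref{2015.12.24.l3}. Then, for $f\in Fun(stn(m),RR(n))$ and $g\in RR(m)$, $$RR(n)=Mor_F(1,n)=Fun(stn(1),stn(n)),\qquad \eta(n)(i)=ii_i^{(1,\dots,1)},\qquad \rho(f)(g)=g\circ \Sigma_{i=0}^{m-1}f(i).$$ Identifying $Fun(stn(1),stn(n))$ with $stn(n)$ by evaluation at $0$ and using Lemma \ref{2016.01.03.l3}, one recovers, up to these obvious bijections, the relative monad with $RR(n)=stn(n)$, $\eta(n)=Id_{stn(n)}$ and $\rho(f)=f$.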
\begin{problem}\llabel{2016.01.01.prob2} Let $G:(T,L)\rightarrow (T',L')$ be a morphism of Lawvere theories. To construct a morphism of relative monads $LRM(T,L)\rightarrow LRM(T',L')$. \end{problem} \begin{construction}\rm\llabel{2016.01.01.constr2} We need to construct a family of functions $$\phi(n):Mor_{T}(L(1),L(n))\rightarrow Mor_{T'}(L'(1),L'(n))$$ that satisfies the conditions of Definition \ref{2015.12.22.def2} for $J=Jf$ and relative monads $LRM(T,L)=(RR,\eta,\rho)$ and $LRM(T',L')=(RR',\eta',\rho')$. Set $$\phi(n)=G_{L(1), L(n)}$$ since $L'=L\circ G$ these functions have the correct domain and codomain. For the first condition of Definition \ref{2015.12.22.def2} we need to show that for any $n\in\nn$ one has $$\eta'(n)=\eta(n)\circ G_{L(1), L(n)}$$ Since both sides are functions from $stn(n)$ it is sufficient to show that for all $i=0,\dots,n-1$ one has $\eta'(n)(i)=(\eta(n)\circ G_{L(1), L(n)})(i)$. By construction $$(\eta(n)\circ G_{L(1),L(n)})(i)=G(\eta(n)(I))=G(ii^X_i)$$ and $$\eta'(n)(i)=ii^{X'}_i$$ where $X=(L(1),\dots,L(1))$ and $X'=(L'(1),\dots,L'(1))$. Therefore we need to show that $G(ii^X_i)=ii^{X'}_i$. This follows from Lemma \ref{2016.01.01.l5}. For the second condition of Definition \ref{2015.12.22.def2} let $f:stn(m)\rightarrow Mor_T(L(1),L(n))$. We need to show that $$\rho(f)\circ \phi(n)=\phi(m)\circ \rho(f\circ \phi(n))$$ Both sides are functions from $Mor_T(L(1),L(m))$ to $Mor_{T'}(L'(1),L'(n))$. To show that they are equal we have to show that for each $g\in Mor_T(L(1),L(m))$ one has $$(\rho(f)\circ \phi(n))(g)=(\phi(m)\circ \rho'(f\circ \phi(n)))(g)$$ For the left hand side of this equality we have: $$(\rho(f)\circ \phi(n))(g)=\phi(n)(\rho(f)(g))=\phi(n)(g\circ \Sigma_{i=0}^{m-1}f(i))=G(g\circ \Sigma_{i=0}^{m-1}f(i))=G(g)\circ G(\Sigma_{i=0}^{m-1}f(i))=$$$$G(g)\circ \Sigma_{i=0}^{m-1}G(f(i))$$ where the last equality follows from Lemma \ref{2016.01.01.l6}. For the right hand side we have: $$(\phi(m)\circ \rho'(f\circ \phi(n)))(g)=\rho'(f\circ \phi(n))(\phi(m)(g))=\rho'(f\circ \phi(n))(G(g))=G(g)\circ \Sigma_{i=0}^{m-1}(f\circ \phi(n))(i)=$$$$G(g)\circ \Sigma_{i=0}^{m-1}(\phi(n)(f(i)))=G(g)\circ \Sigma_{i=0}^{m-1}G(f(i))$$ This completes the proof of the second condition of Definition \ref{2015.12.22.def2} and the construction. \end{construction} We let $LRM(\phi)$ or $LRM_{Mor}(\phi)$ denote the morphism of relative monads defined by Construction \ref{2016.01.01.constr2} \begin{problem} \llabel{2016.01.01.prob3} For a universe $U$, to construct a functor $$LRM_U:LW(U)\rightarrow RMon(Jf_U)$$ \end{problem} \begin{construction}\rm\llabel{2016.01.01.constr5} We define the object component of $LRM$ as the function defined by Construction \ref{2015.12.22.constr6} and the morphism component as the function defined by Construction \ref{2016.01.01.constr2}. We need to verify that these two functions satisfy the identity and composition axioms of a functor. Both follow immediately from the definitions of the identity functor and composition of functors. \end{construction} \begin{problem} \llabel{2016.01.01.prob4} For any universe $U$ to construct an isomorphism of functors $$RML_U\circ LRM_U\rightarrow Id_{RMon(Jf_U)}.$$ \end{problem} \begin{construction}\rm\llabel{2016.01.01.constr6} Let ${\bf RR}=(RR,\eta,\rho)$ be a $Jf_U$-relative monad. 
Let $$(T,L)=RML_U(RR,\eta,\rho)$$ and $$(RR',\eta',\rho')=LRM_U(T,L).$$ We need to construct an isomorphism of relative monads $$\phi_{{\bf RR}}:(RR',\eta',\rho')\rightarrow (RR,\eta,\rho)$$ and show that the family $\phi_{{\bf RR}}$ satisfies the naturality axiom of the definition of functor morphism. We have $$RR'(n)=Mor_T(L(1),L(n))=Mor_{K({\bf RR})}(L_{{\bf RR}}(1),L_{{\bf RR}}(n))=Mor_{K({\bf RR})}(1,n)=$$$$Fun(stn(1),RR(n))$$ and we define $\phi_{{\bf RR}}(n):RR'(n)\rightarrow RR(n)$ as the obvious bijection given by setting $$\phi_{{\bf RR}}(n)(f)=f(0)$$ Let us show that these functions form a morphism of relative monads, i.e., that they satisfy two conditions of Definition \ref{2015.12.22.def2}. We should exchange places between the $\eta$ and $\eta'$ since we consider a morphism ${\bf RR}'\rightarrow {\bf RR}$. The first condition becomes $$\eta(n)(i)=(\eta'(n)\circ\phi_{{\bf RR}}(n))(i)$$ for any $n\in \nn$ and $i=0,\dots,n-1$ and the second $$(\rho'(f)\circ \phi_{{\bf RR}}(n))(g)=(\phi_{{\bf RR}}(m)\circ \rho(f\circ \phi_{{\bf RR}}(n)))(g)$$ for any $f\in Fun(stn(m), RR'(n))$ and $g\in RR'(m)$. For $n\in\nn$ and $i=0,\dots,n-1$ we have $$(\eta'(n)\circ \phi_{{\bf RR}}(n))(i)=\phi_{\bf RR}(n)(\eta'(n)(i))=\phi_{{\bf RR}}(ii_i^{(L(1),\dots,L(1))})=ii_i^{(L(1),\dots,L(1))}(0)=$$$$L(ii_i^{(1,\dots,1)})(0)=L_{{\bf RR}}(ii_i^{(1,\dots,1)})(0)=(ii_i^{(1,\dots,1)}\circ \eta(n))(0)=\eta(n)(ii_i^{(1,\dots,1)}(0))=\eta(n)(i)$$ where the fourth equality is by Lemma \ref{2015.12.24.l5b} and the eighth equality is by Lemma \ref{2016.01.03.l3}. For the second condition, $f\in Fun(stn(m), RR'(n))$ and $g\in RR'(m)$ we have $$(\rho'(f)\circ \phi_{{\bf RR}}(n))(g)=\phi_{{\bf RR}}(n)(\rho'(f)(g))=\phi_{{\bf RR}}(n)(g\circ_T \Sigma_{i=0}^{m-1}f(i))=(g\circ_T \Sigma_{i=0}^{m-1}f(i))(0)$$ where $f$ is considered as an element of $Fun(stn(m),Mor_T(L(1),L(n)))$ and $g$ as an element of $Mor_T(L(1),L(m))$. Next we have: $$(g\circ_T \Sigma_{T,i=0}^{m-1}f(i))(0)=(g\circ_{K({\bf RR})} \Sigma_{T,i=0}^{m-1}f(i))(0)=(g\circ \rho(\Sigma_{T,i=0}^{m-1}f(i)))(0)=\rho(\Sigma_{T,i=0}^{m-1}f(i))(g(0))$$ where on the right $g$ is considered as an element of $Fun(stn(1),RR(m))$. On the other hand we have: $$(\phi_{{\bf RR}}(m)\circ \rho(f\circ \phi_{{\bf RR}}(n)))(g)=\rho(f\circ \phi_{{\bf RR}}(n))(\phi_{{\bf RR}}(m)(g))=\rho(f\circ \phi_{{\bf RR}}(n))(g(0))$$ where on the right $g$ is considered as an element of $Fun(stn(1),RR(m))$. Let us show that $$\Sigma_{T,i=0}^{m-1}f(i)=f\circ \phi_{{\bf RR}}(n),$$ Since both sides are morphisms in $T$ from $L(m)$ to $L(n)$ and it is sufficient to show that for any $j=0,\dots,m$ one has $$ii_j^{(L(1),\dots,L(1))}\circ_T (\Sigma_{T,i=0}^{m-1}f(i))=ii_j^{(L(1),\dots,L(1))}\circ_T (f\circ \phi_{{\bf RR}}(n))$$ The left hand side equals $f(j)$. For the right hand side we have $$ii_j^{(L(1),\dots,L(1))}\circ_T (f\circ \phi_{{\bf RR}}(n))=L(ii_j^{(1,\dots,1)})\circ_T (f\circ \phi_{{\bf RR}}(n))=L(ii_j^{(1,\dots,1)})\circ_{K({\bf RR})} (f\circ \phi_{{\bf RR}}(n))=$$$$ ii_j^{(1,\dots,1)}\circ f\circ \phi_{{\bf RR}}(n)$$ where the first equality is by Lemma \ref{2015.12.24.l5b} and the third equality is by Lemma \ref{2016.01.03.l4b}. Both $f(j)$ and $ii_j^{(1,\dots,1)}\circ f\circ \phi_{{\bf RR}}(n)$ are elements of $Fun(stn(1),RR(n))$. To prove that they are equal it is sufficient to prove that they coincide on $0$. 
We have: $$(ii_j^{(1,\dots,1)}\circ f\circ \phi_{{\bf RR}}(n))(0)=(f\circ \phi_{{\bf RR}}(n))(i)=\phi_{{\bf RR}}(n)(f(i))=f(i)(0)$$ where the first equality is by Lemma \ref{2016.01.03.l3}(2). This completes the proof of the fact that the family of functions $\phi_{{\bf RR}}$ is a morphism of relative monads. Let us show that the family $\phi_{{\bf RR}}$ satisfies the naturality axiom of the definition of functor morphism. Let $u:{\bf RR}_1\rightarrow {\bf RR}_2$ be a morphism of relative monads. Let $(T_i,L_i)=RML({\bf RR}_i)$ and ${\bf RR}'_i=LRM(T_i,L_i)$, $i=1,2$. Let $G=RML(u)$ and $u'=LRM(G)$. We need to show that the square $$ \begin{CD} {\bf RR}'_1 @>u'>> {\bf RR}'_2\\ @V\phi_{{\bf RR}_1} VV @VV\phi_{{\bf RR}_2} V\\ {\bf RR}_1 @>u>> {\bf RR}_2 \end{CD} $$ commutes, i.e., that for any $n\in\nn$ one has \begin{eq}\llabel{2016.01.03.eq1} u'(n)\circ \phi_{{\bf RR}_2}(n)=\phi_{{\bf RR}_1}(n)\circ u(n) \end{eq} We have that $$u'(n)\in Fun(RR'_1(n),RR'_2(n))=Fun(Fun(stn(1),RR_1(n)),Fun(stn(1),RR_2(n)))$$ and $$u'(n)(f)=(LRM(G)(n))(f)=G_{L_1(1),L_1(n)}(f)=G_{1,n}(f)=f\circ u(n)$$ Both sides of (\ref{2016.01.03.eq1}) are functions from $Fun(stn(1),RR_1(n))$. Therefore to prove that they are equal we need to prove that their values on any $f\in Fun(stn(1),RR_1(n))$ are equal. We have: $$(u'(n)\circ \phi_{{\bf RR}_2}(n))(f)=\phi_{{\bf RR}_2}(n)(u'(n)(f))=(u'(n)(f))(0)=(f\circ u(n))(0)=u(n)(f(0))$$ and $$(\phi_{{\bf RR}_1}(n)\circ u(n))(f)=u(n)(\phi_{{\bf RR}_1}(n)(f))=u(n)(f(0)).$$ This completes the proof of the fact that the family $\phi_{{\bf RR}}$ is a morphism of functors $RML_U\circ LRM_U\rightarrow Id_{RMon(Jf_U)}$. That it is an isomorphism follows from the general properties of functor morphisms and Lemma \ref{2016.01.03.l5}. This completes Construction \ref{2016.01.01.prob4}. \end{construction} \begin{problem} \llabel{2016.01.03.prob1} For a universe $U$ to construct a functor isomorphism $$LRM_U\circ RML_U\rightarrow Id_{LW(U)}$$ \end{problem} \begin{construction}\llabel{2016.01.03.constr1}\rm Let $(T,L)$ be a Lawvere theory in $U$. Let $$(RR,\eta,\rho)=LRM(T,L)$$ and $$(T',L')=RML(RR,\eta,\rho)$$ We need to construct an isomorphism of Lawvere theories $$G^{(T,L)}:(T',L')\rightarrow (T,L)$$ and show that the family $G^{(T,L)}$ is natural with respect to the morphisms of Lawvere theories $(T_1,L_1)\rightarrow (T_2,L_2)$. While constructing $G^{(T,L)}$ we will abbreviate its notation to $G$. We have: $$Ob(T')=Ob(K({\bf RR}))=Ob(F)=\nn$$ $$Mor_{T'}(m,n)=Mor_{K({\bf RR})}(m,n)=Fun(stn(m),RR(n))=Fun(stn(m),Mor_T(L(1),L(n)))$$ We set the object component of $G$ to be the object component of $L$. We set the morphism component $$G_{m,n}:Mor_{T'}(m,n)=Fun(stn(m),Mor_T(L(1),L(n)))\rightarrow Mor_T(L(m),L(n))=Mor_{T}(m,n)$$ to be of the form: $$G_{m,n}(f)=\Sigma_{T,i=0}^{m-1}f(i)$$ To show that $G_{m,n}$ is a bijection consider the function in the opposite direction given by, for $u\in Mor_T(m,n)$ and $i=0,\dots,m-1$ $$G^*_{m,n}(u)(i)=ii_i^{(L(1),\dots,L(1))}\circ u$$ The fact that $G$ and $G^*$ are mutually inverse follows easily from the definition of finite ordered coproducts. Let us show that $G$ is a functor. 
For the composition axiom, let $f\in Mor_{T'}(k,m)$, $g\in Mor_{T'}(m,n)$, then $$G_{k,m}(f)\circ_T G_{m,n}(g)=(\Sigma_{T,i=0}^{k-1}f(i))\circ_T (\Sigma_{T,j=0}^{m-1}g(j))=\Sigma_{T,i=0}^{k-1}(f(i)\circ_T (\Sigma_{T,j=0}^{m-1}g(j)))$$ and $$G_{k,n}(f\circ_{T'} g)=\Sigma_{T,i=0}^{k-1}((f\circ\rho(g))(i))=\Sigma_{T,i=0}^{k-1}(\rho(g)(f(i)))=\Sigma_{T,i=0}(f(i)\circ_T(\Sigma_{j=0}^{m-1}g(j)))$$ where the last equality is by Construction \ref{2015.12.22.constr6}(3). For the identity axiom, let $n\in \nn$ then $$G_{n,n}(Id_{T',m})=G_{n,n}(\eta(m))=\Sigma_{T,i=0}^{m-1}(\eta(m)(i))=\Sigma_{T,i=0}^{m-1}(ii_i^{(L(1),\dots,L(1))})=Id_{T,L(m)}$$ where the first equality is by Construction \ref{2015.12.22.constr3}, the third one is by Construction \ref{2015.12.22.constr6}(2) and the third one is by Lemma \ref{2015.12.24.l4}. To prove that $G$ is a morphism of Lawvere theories we have to show that $L'\circ G=L$. On objects the equality is obvious. To show that it holds on morphisms let $u\in Fun(stn(m),stn(n))$. Then $$(L'\circ G)(u)=G(L'(u))=\Sigma_{T,i=0}^m L'(u)(i)=\Sigma_{T,i=0}^m L_{{\bf RR}}(u)(i)=\Sigma_{T,i=0}^m (u\circ \eta(n))(i)=$$$$\Sigma_{T,i=0}^m \eta(n)(u(i))=\Sigma_{T,i=0}^m ii^{(L(1),\dots,L(1))}_{u(i)}=L(u)$$ where the fourth equality is by Construction \ref{2015.12.22.constr4} and the sixth one is by Construction \ref{2015.12.22.constr6}(2) and the seventh one is by Lemma \ref{2016.01.05.l1}. This completes the construction of the Lawvere theory morphisms $G^{(T,L)}$. It remains to show that they are natural with respect to morphisms of Lawvere theories. Let $H:T_1\rightarrow T_2$ be such a morphism. Let $(RR_i,\eta_i,\rho_i)=LRM(T_i,L_i)$ for $i=1,2$, $(T_i',L_i')=RML(RR_i,\eta_i,\rho_i)$, $\phi=LRM(H)$ and $H'=RML(\phi)$. Since $(L_i')_{Ob}=Id_{\nn}$ and $L_1'\circ H'=L_2'$ we have that $(H')_{Ob}=Id_{\nn}$. For $m,n\in\nn$ and $$f\in Mor_{T'_1}(m,n)=Fun(stn(m),Mor_{T_1}(L_1(1),L_1(n)))$$ we have $$H'(f)=RML(\phi)(f)=K(\phi)(f)=f\circ \phi(n)=f\circ LRM(H)(n)=f\circ H_{L_1(1),L_1(n)}$$ where the third equality is by Construction \ref{2015.12.22.constr5} and the fifth equality is by Construction \ref{2016.01.01.constr2}. We need to show that the square $$ \begin{CD} T'_1 @>H'>> T'_2\\ @VG^{(T_1,L_1)}VV @VVG^{(T_2,L_2)}V\\ T_1 @>H>> T_2 \end{CD} $$ commutes. For the object components, since $(G^{(T_i,L_i)})_{Ob}=(L_i)_{Ob}$ it means that for all $n\in \nn$ one has $$L_2(H'(n))=H(L_1(n)),$$ i.e., that $L_2(n)=H(L_1(n))$ which follows from the fact that $H$ is a morphism of Lawvere theories. For the morphism component it means that for all $f\in Fun(stn(m),Mor_{T_1}(L_1(1),L_1(n)))$ one has $$G^{(T_2,L_2)}(H'(f))=H(G^{(T_1,L_1)}(f)),$$ For the left hand side we have: $$G^{(T_2,L_2)}(H'(f))=G^{(T_2,L_2)}(f\circ H_{L_1(1),L_1(n)})=\Sigma_{T_2,i=0}^{m-1}(f\circ H_{L_1(1),L_1(n)})(i)=\Sigma_{T_2,i=0}^{m-1}(H(f(i)))$$ For the right hand side we have: $$H(G^{(T_1,L_1)}(f))=H(\Sigma_{T_1,i=0}^{m-1}f(i))=\Sigma_{T_2,i=0}^{m-1}(H(f(i)))$$ where the second equality is by Lemmas \ref{2016.01.01.l5} and \ref{2016.01.01.l6}. This completes the proof that the constructed family of Lawvere theories morphisms $G^{(T,L)}$ is a morphism of functors and with it completes Construction \ref{2016.01.03.constr1}. \end{construction} We can now provide a construction for Problem \ref{2016.01.05.prob1}. \begin{construction}\rm\llabel{2016.01.05.constr1} A functor $RML_U$ from $RMon(Jf_U)$ to $LW(U)$ is provided by Construction \ref{2015.12.22.def5}. 
A functor $LRM_U$ from $LW(U)$ to $RMon(Jf_U)$ is provided by Construction \ref{2016.01.01.constr5}. A functor isomorphism $RML_U\circ LRM_U\rightarrow Id_{RMon(Jf_U)}$ is provided by Construction \ref{2016.01.01.constr6}. A functor isomorphism $LRM_U\circ RML_U\rightarrow Id_{LW(U)}$ is provided by Construction \ref{2016.01.03.constr1}. \end{construction} \begin{remark}\rm\llabel{2016.01.05.rem2} The composition $RML_U\circ LRM_U$ is just slightly off from being {\em equal} to the identity functor on $RMon(Jf_U)$. It might appear that one can achieve the equality by considering a modified version $RML'$ of the functor $RML$ that sends ${\bf RR}$ to the Lawvere theory based on the family of sets $RR(n)^m$ where for a set $X$ and $m\in \nn$ one defines $X^m$ inductively as $X^0=stn(1)$, $X^1=X$ and $X^{m+1}=X^m\times X$. However, even this modified version of $RML$ fails to achieve the equality due to the coercions that we need to insert to make our expressions completely transparent. Indeed, the set of morphisms of the category $T$ in $(T,L)=RML'({\bf RR})$ is $\amalg_{m,n\in\nn}RR(n)^m$ and the set $RR'(n)$ in ${\bf RR}'=LRM(T,L)$ is $Mor_T(1,n)$, i.e., the set of iterated pairs of the form $((1,n),x)$ where $x\in RR(n)$. \end{remark} {\em Acknowledgements:} This material is based on research sponsored by The United States Air Force Research Laboratory under agreement number FA9550-15-1-0053. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the United States Air Force Research Laboratory, the U.S. Government or Carnegie Mellon University. \end{document}
\begin{document} \setcounter{page}{1} \title{Team semantics for interventionist counterfactuals and causal dependence} \begin{abstract} We introduce a generalization of team semantics which provides a framework for manipulationist theories of causation based on structural equation models, such as Woodward's and Pearl's; our causal teams incorporate (partial or total) information about functional dependencies that are invariant under interventions. We give a unified treatment of observational and causal aspects of causal models by isolating two operators on causal teams which correspond, respectively, to conditioning and to interventionist counterfactual implication. The evaluation of counterfactuals may involve the production of partially determined teams. We suggest a way of dealing with such cases by 1) the introduction of formal entries in causal teams, and 2) the introduction of weaker truth values (falsifiability and admissibility), for which we suggest some plausible semantical clauses. We introduce formal languages for both deterministic and probabilistic causal discourse, and study in some detail their inferential aspects. Finally, we apply our framework to the analysis of direct and total causation, and other notions of dependence and invariance. \end{abstract} \tableofcontents \section{Introduction} Notions of dependence and independence entered the realm of logical investigation in the early days of mathematical logic, essentially with the introduction, by Frege, of nested quantification; this aspect of quantification was made explicit by the notion of Skolem function (\cite{Sko1920}). It is only in the last decades, however, that a systematical analysis of (in)dependence notions within predicative, propositional, modal logical languages has been undertaken. One of the main unifying tools in this enterprise is the so-called \emph{team semantics} (\cite{Hod1997},\cite{Hod1997b},\cite{Vaa2007}), whose key idea is that formulas involving dependencies acquire meaning only when evaluated over \emph{sets} of assignments. Variations of this methodology have allowed a systematical study of logical systems enriched with dependencies that arise from database theory, probabilistic theory and quantum information theory. In many cases, distinct notions of (in)dependence can coexist in one and the same formal language, and this kind of interplay has been systematically investigated from the point of view of definability and complexity. However, to the best of our knowledge, the framework of team semantics has not yet been used to investigate notions of causal and counterfactual dependence. In the present paper, we provide a generalization of team semantics that is also adequate to capture causal, counterfactual and probabilistic notions of (in)dependence which arise from modern manipulationist theories of causation such as Pearl's (\cite{Pea2000}) and Woodward's (\cite{Woo2003}). The generalization is not trivial. It is usually acknowledged in the literature that causal relationships cannot be reduced to mere correlations of data (the latter can be represented by a set of assignments.) Instead, a richer structure, encoding counterfactual assumptions, is needed. The plan of the paper is as follows. In section \ref{THCAUS} we give a short motivation for the manipulationist (interventionist) approach to causation. In section \ref{TEAMSEC} we present team semantics and show how to adequately enrich it so that it can handle interventionist counterfactuals. 
We will introduce several languages to express various (deterministic) notions of dependence. In section \ref{LOGLAWS} we analyze the logical properties of these languages, up to some basic soundness and completeness proofs; we use some of these properties to compare our counterfactuals with those of Stalnaker (\cite{Sta1968}), Lewis (\cite{Lew1973}) and Galles\&Pearl (\cite{GalPea1998}). Section \ref{NONPARAM} is dedicated to the logical issues that arise from nonparametric models. In section \ref{PROBSEC} we introduce probabilistic causal languages. In section \ref{CAUSMOD} we will discuss various notions of causation (mainly taken from Woodward) and invariance, in the light of the logic developed in earlier sections. \section{Theories of causation: background} \label{THCAUS} The reductive approach to causation has been philosophers' favourite tool. It aims, roughly, at finding necessary and sufficient conditions for causal relationships like ``$X$ causes $Y$''. Two such conditions have been prominent in the literature: those formulated in terms of conditional probabilities, and those based on counterfactuals (counterfactual dependence). We discuss them shortly in the next two sections. The main purpose of presenting them is to understand some of the reasons why the reductive approach has been found unsatisfactory and replaced by non-reductive approaches, such as the \emph{manipulationist} or \emph{interventionist} accounts of counterfactuals and causation (Pearl, Woodward, Halpern, Hitchcock, Briggs, among others). \subsection{Conditional probabilities} Two well known endeavours to connect causal relationships with conditional probabilities are due to P. Suppes (\cite{Sup1970}) and N. Cartwright (\cite{Car1983}). For instance, Cartwright (\cite{Car1983}, p. 26) requires that causes raise the probabilities of their effect: \begin{description} \item [{(CC)}] $C$ causes $E$ iff $Pr(E/C,K_{j})>Pr(E/K_{j})$ for all state descriptions $K_{j}$ which satisfy certain conditions. \end{description} Woodward (\cite{Woo2001}) finds (CC) defective in two ways. Firstly, the requirement to conditionalize on all $K_{j}$ is too strong, and a weaker, existential condition would suffice. Secondly, (CC) holds, as initially intended, only for positive causes. When negative (inhibiting) changes are taken into account, it is natural to replace $Pr(E/C,K_{j})>Pr(E/K_{j})$ by $Pr(E/C,K_{j})\neq Pr(E/K_{j})$ which only requires $C$ and $E$ to be (probabilistically) dependent. With these two points in mind, Woodward (\cite{Woo2001}) proposes to replace (CC) with something of the following sort: \begin{description} \item [{({*})}] $X$ causes $Z$ if and only if $X$ and $Z$ are dependent conditional on certain other factors $F$. \end{description} where $X$ and $Z$ are variables standing for properties. The problem now becomes that of specifying the other factors $F$. Cartwright suggests that they include other causes of $Z$ with the exception of those which are on a causal chain from $X$ to $Z$. She recognizes, however (\cite{Car1983}, p. 30; \cite{Car1989}, p. 95 ff; \cite{Woo2001}, p. 58) that the claim that we should never condition on all such intermediate variables is too strong. Woodward (\cite{Woo2001}) makes it clear that in order to understand these restrictions, and more broadly, in order for the project of specifying the other factors $F$ to have any hope of success, we need to refer to other causes of $Y$ and the way they are connected. 
For instance, to see why it is inappropriate to conditionalize on the variables which lie on a causal chain from $X$ to $Z$, as Cartwright's first suggestion goes, it is enough to consider the causal structure \[ X\rightarrow Y \rightarrow Z \] Now if we were to conditionalize on $Y$, we would expect, intuitively, $X$ and $Z$ to be independent, which according to the definition above would result in $X$ not being a cause of $Z$. But this is not what we want. On the other hand, to see that the requirement of never conditionalizing on the variables on causal chains from $X$ to $Z$ is too strong, it is enough to consider the causal structure in which both $X$ and $Y$ are ``direct'' causes of $Z$ and are on the causal paths between $X$ and $Z$. If the causal connection between $X$ and $Z$ is to be reflected in the probabilistic dependence of $Z$ on $X$ conditional on some other properties $F$, then these other properties must include $Y$. In other words, to determine the causal influence of $X$ on $Z$ we must take into account the influence of $Y$ on $Z$. What all this shows, according to Woodward, is that the project to connect causal relationships between $X$ and $Z$ with conditional probabilities, which finds its expression in ({*}), goes via a mechanism which provides information about other causes (contributing or total) of $Z$ besides $X$ and how these causes are connected with one another and with $Z$ (\cite{Woo2001}, p. 58). One such mechanism is that of \textit{causal Bayesian networks} (Pearl 2000/2009 \cite{Pea2000}, Spirtes, Glymour and Scheines 1993/2001 \cite{SpiGlySch1993}). They are built on Directed Acyclic Graphs (DAGs) of the kind we have already encountered in our earlier examples. We start with a set of variables $V=\left\{ X_{1},\dots,X_{n}\right\} $ whose causal relationships we want to investigate and a set of (directed) edges. A directed edge from the variable $X$ (parent) to the variable $Z$ (child) is intended to represent the fact that $X$ is a \textit{direct cause} of $Z$. The set of all parents of $Z$ is denoted as $PA_Z$. The project is now to investigate the connection between causal relationships generated by the edges of the graph and conditional probabilities $P(Z/PA_{Z})$ determined by a joint probability distribution $P$ over $V$. More exactly, let $P$ be a joint probability distribution on the set $V$. We denote by $x_{1},x_{2},...,x_{n}$ the values associated with the variables in $V$, that is, $X_{1}=x_{1},...,X_{n}=x_{n}.$ A DAG $G$ is said to \textit{represent} $P$ if the equation \begin{equation} P(x_{1},...,x_{n})=\prod_{i}P(x_{i}/pa_{i}) \end{equation} holds, where the variables $PA_{i}$ obey the parent-child relation of $G$: $Y\in PA_{i}$ if and only if there is an arrow in $G$ from $Y$ to $X_{i}$. A well known result states the following: \begin{teo}[Markov Condition, \cite{VerPea1990}] Let $G$ be a DAG with $V$ its set of variables. $G$ represents a probability distribution $P$ if and only if every variable in $V$ is independent of all its nondescendants (in $G$) conditional on its parents. \end{teo} A DAG $G$ which represents a probability distribution $P$ is often referred to as a \textit{causal Bayesian network}. The Markov Condition is thought to be important because it establishes a connection between causal relationships as represented by the arrows of a DAG and dependence relationships.
For instance, if we take ``$Z$ is a nondescendant of $X$'' to stand for ``$X$ does not cause $Z$'', then the Markov condition implies that if ``$X$ does not cause $Z$'' then, conditional on its parents, $X$ is independent of $Z$, which by contraposition gives us the right-to-left direction of ({*}): \begin{itemize} \item If variables $X$ and $Z$ are dependent (that is: $Pr(X/Parents(X),Z) \neq Pr(X/Parents(X))$), then $X$ causes $Z$. \end{itemize} (Cf. e.g. \cite{Woo2001}.) As handy as causal Bayesian networks are to connect causes with conditional probabilities, they fail to represent counterfactual reasoning (\cite{Pea2000}, p. 37). In other words, if we want a robust notion of cause which sustains counterfactuals, we need to supplement causal Bayesian networks with an additional, deterministic component. \subsection{Counterfactual dependence} \label{SUBSCOUNT} Accounts of causal relationships based on counterfactual dependence have been available starting with the works of David Lewis and G. H. von Wright. Lewis (\cite{Lew1973b}, \cite{Lew1979}) reduces causal relationships to counterfactual dependence, which, in the end, is defined in terms of similarity between possible worlds. Von Wright (\cite{Wri1971}) distinguishes a causal connection between $p$ and $q$ from an accidental generalization (the concomitance of $p$ and $q$) on the basis of the fact that the former, unlike the latter, sustains a counterfactual assumption of the form ``on occasions where $p$, in fact, was not the case, $q$ would have accompanied it, had $p$ been the case''; that is, $p$ is a state of affairs which we can \textit{produce or suppress at will}. It is thus the \textit{manipulativity} of the antecedent which is the individuating aspect of the cause factor (\cite{Wri1971}, p. 70). Lewis's account has been criticized for relying too much on ``dubious metaphysics'' and von Wright's account for being too ``anthropomorphic''. Both Lewis's and von Wright's accounts contain ingredients which have been incorporated later on into interventionist accounts of counterfactuals and causation. Roughly, one needs a mechanism to represent the exogenous process which is the manipulation of the (variables of the) antecedent of a counterfactual. This mechanism has become known as \textit{intervention}\footnote{Lewis does not speak of interventions, but their role is mirrored, in the work of Lewis, by the notions of ``local miracle'' and ``non-backtracking counterfactual''.}. In addition, we need another mechanism to measure the \textit{effects} the changes of the intervened variables have on the (variables of the) consequent. This mechanism is encoded into the so-called \textit{structural (functional) equations} (\cite{Pea2000}, \cite{Woo2001}). In more detail, we divide the set of variables whose causal relationships we want to investigate into two disjoint sets: a set $V$ of endogenous variables and a set $U$ of exogenous variables. With each endogenous variable $X\in V,$ an equation of the form \[ X=f_{X}(PA_{X}) \] is associated, where the variables $PA_{X}\subseteq U\cup V\setminus\left\{ X\right\} $ are called the \textit{parents of} $X$. The standard interpretation of a functional equation $X=f_{X}(PA_{X})$ is that of a law which specifies the value of $X$ given every possible combination of the values of $PA_{X}$. If we draw an arrow from each variable in $PA_{X}$ to the variable $X$ we obtain directed graphs as in the case of the causal Bayesian frameworks mentioned in the previous section.
The crucial difference between the two frameworks is that in the present case, instead of characterizing the child-parent relationships stochastically in terms of conditional probabilities $P(X/PA_X),$ we characterize them deterministically using the equations. Various notions of intervention have been proposed, both in the context of causal Bayesian networks and in that of structural equations (e.g., \cite{SpiGlySch1993}, \cite{Woo1997}, \cite{Hau1998}, and \cite{Pea2000}). A detailed discussion of this variety is outside the scope of this paper. Suffice it to say that an intervention $do(X=x)$ on a variable $X$ is an action which disconnects the variable $X$ from all the incoming arrows into $X$ while preserving all the other arrows of the graph including those directed out of $X$. This is known as the \textit{arrow-breaking} conception of interventions. In the structural equations framework where the causal graph is induced by the appropriate set of equations, the intervention $do(X=x)$ results also in the alteration of that set: the equation $X=f_{X}(PA_{X})$ associated with the variable $X$ is replaced with a new equation $X=x$, while keeping intact the other equations in the set. It may be useful to illustrate these notions by way of an example (\cite{Woo2001}). Consider the following two equations: \[ \begin{array}{cccccl} (6) & Y=aX & & & (7) & Z=bX+cY\end{array} \] This set of equations induces the DAG with arrows $X\rightarrow Y$, $X\rightarrow Z$ and $Y\rightarrow Z$. If we intervene on $Y$ and set its value to $1$ (i.e., $do(Y=1)$), the result will be the altered system of equations: \[ \begin{array}{cccccc} (6') & Y=1 & & & (7) & Z=bX+cY\end{array} \] corresponding to the new DAG in which the arrow from $X$ to $Y$ has been removed, while the arrows $X\rightarrow Z$ and $Y\rightarrow Z$ are preserved. \subsection{Various notions of cause} \label{SUBSCAUSES} Woodward (\cite{Woo1997},\cite{Woo2001}) uses interventions in the context of structural equations to \textit{define} ``$X$ is a cause of $Y$''. It turns out, however, that there are several distinct notions of cause, each satisfying the central commitment of the manipulability theory of causation: there is a causal relationship between $X$ and $Y$ whenever there is a possible intervention that changes the value of $X$ such that carrying it out changes the value of $Y$ (or its probability) (\cite{Woo2001}, p. 54). Here are several notions of cause: \begin{description} \item [{(DC)}] (Direct cause) A necessary and sufficient condition for $X$ to be a direct cause of $Y$ with respect to some variable set $Z$ is that there be a possible intervention on $X$ that will change $Y$ (or the probability distribution of $Y$) when all the other variables in $Z$ besides $X$ and $Y$ are held fixed at some values by interventions. (\cite{Woo2001}, p. 52) \end{description} For Woodward, direct causes correspond to the parent-child relationships in the underlying DAG. The other notions cannot always be recovered from the arrows of the DAG: \begin{description} \item [{(TC)}] (Total cause) $X$ is a total cause of $Y$ if and only if it has a non-null total effect on $Y$; that is, if and only if there is some intervention on $X$ alone such that for some values of the other variables, this intervention on $X$ will change $Y$. The total effect of a change $dx$ in $X$ on $Y$ is the change in the value of $Y$ that would result from an intervention on $X$ alone that changes it by amount $dx$ (given the values of other variables that are not descendants of $X$). (\cite{Woo2001}, p.
54) \item [{(CC)}] (Contributing cause) $X$ is a contributing cause of $Y$ if and only if it makes non-null contribution to $Y$ along some directed path in the sense that there is some set of values of variables that are not on this path such that if these variables were fixed at those values, there is some intervention on $X$ that will change the value of $Y$. The contribution to a change in the value of $Y$ due to a change $dx$ in the value of $X$ along some directed path is the change in the value of $Y$ that would result from this change in $X$, given that the values of off path variables are fixed by independent interventions. (\cite{Woo2001}, pp. 54-55) \end{description} For instance in our earlier example consisting of the set of equations (6) and (7), $X$ is a direct cause of $Z$. To make this more transparent we shall assume that the coefficients $a,b,c$ are all equal to 1. We first intervene on $Y$ and set its value to $1$ as we did above (i.e., $do(Y=1))$. The result will be the altered system of equations: \[ \begin{array}{cccccc} (6') & Y=1 & & & (7) & Z=X+Y.\end{array} \] Next we perform two (independent) interventions on the system (6')-(7): $do(X=1)$ yields $Z=2$; and $do(X=2)$ yields $Z=3$. We conclude that $X$ is a direct cause of $Z$ in the sense of (DC). Consider now the set of equations \[ \begin{array}{cccccc} (8) & Y=aX & & & (9) & Z=dY\end{array} \] which corresponds to the DAG \[ X\longrightarrow Y \longrightarrow Z \] Any intervention $do(Y=e)$ leads to the system of equations \[ \begin{array}{cccccc} (8') & Y=e & & & (9) & Z=dY\end{array} \] Now it is obvious that no change in the value of $X$ will have any influence on the value of $Z$, hence $X$ is not a direct cause of $Z$ as expected. On the other side, it is easy to see that $X$ is a total cause of $Z$ in the sense of (TC) -- except in the special case that $d=\frac{1}{a}$. In section \ref{CAUSMOD} we shall represent some of these causal notions in the framework of causal team semantics, to which we now turn. \begin{comment} \subsection{The goal of the paper} We introduce a logical framework in which we can talk about the truth of statements of the form ``$X$ causes $Y$''. Our goal is to capture various notions of cause which have appeared in the literature, all bearing the distinguishing feature of sustaining counterfactual statements whose antecedents are made true by interventions. As hopefully it was made clear in the preceding sections, such a framework needs to represent adequately several components: \begin{enumerate} \item A set of variables whose causal relationships we want to investigate, which receive concommitently a class of values \item Directs acyclic graphs and structural equations, which constrain the range of values of the variables in (1). \item Interventions \item Probability distributions of variables \end{enumerate} The last component is needed for an account of causel notions according to which an intervention on the cause $X$ changes the probabilities of the effect $Y$, as in the quote at the beginning of this section. It turns out that there is already a ready made framework for dealing with (1): team semantics. We shall start with their presentation after which we shall add gradually more structure on them in order to represent components (2), (3) and (4). \end{comment} \section{Causal team semantics} \label{TEAMSEC} \subsection{Teams} Team semantics was introduced by W. 
Hodges (\cite{Hod1997},\cite{Hod1997b}) in order to provide a compositional presentation of the (game-theoretically defined) semantics of Independence-Friendly logic (\cite{HinSan1989},\cite{ManSanSev2011}). In the following years, team semantics has been used to extend first-order logic with database dependencies (e.g. Dependence logic \cite{Vaa2007}, Independence logic \cite{GraVaa2013}, Inclusion logic \cite{Gal2012}); similar approaches have been applied to propositional logics (\cite{YanVaa2016},\cite{YanVaa2017}) and modal logics (\cite{Vaa2008}, \cite{Tul2003}, \cite{BraFro2002}). Appropriate generalizations of teams have been used as descriptive languages for probabilistic dependencies (\cite{DurHanKonMeiVir2016}), for quantum phenomena (\cite{HytPaoVaa2015}), and for Bayesian networks (\cite{CorHytKonPenVaa2016}). The basic idea of team semantics is that notions such as dependence and independence, which express properties of relations (instead of individuals), cannot be captured by Tarskian semantics, which evaluates formulas on single assignments\footnote{This can be formally proved, see \cite{CamHod2001}.}; the appropriate unit for semantical evaluation is instead the \emph{team}, i.e., a \emph{set} of assignments (all sharing a common variable domain). In the standard approach, all the values of the variables come from a unique domain of individuals associated with an underlying model. However, in order to model causal and counterfactual dependence, we shall need to relax the assumption that variables may take as values only individuals coming from a common domain. Instead we shall take variables to represent properties which may have all kinds of values (as specified by their range). That is, once a set $Dom$ of variables is fixed, each assignment will be a mapping $s:\:Dom\rightarrow\bigcup_{X\in Dom}Ran(X)$ such that $s(X)\in Ran(X)$ for each $X\in Dom$. A \textbf{team} $T$ of domain $dom(T)=Dom$ will be any set of such assignments. As an example, recall Woodward's DAG corresponding to the structural equations (6) and (7) (subsection \ref{SUBSCOUNT}). We may take $X$ to express the property ``(whether it is) winter'', $Y$ the property ``(whether it is) cloudy'' and $Z$ the property ``(whether it is) snowing'' and take them to be represented in the team $T=\left\{ s_{1},s_{2}\right\} $: \[ \begin{array}{c|c|c|c} & X & Y & Z\\ \hline s_{1} & 0 & 0 & 0\\ \hline s_{2} & 1 & 1 & 1 \end{array} \] The basic semantic relation is now $T\models\psi$: the team $T$ satisfies the formula $\psi$. One can define a team semantics already for classical propositional languages. However, such a semantics does not really add anything new, in the sense that $T\models\psi$ if and only if for all $s\in T$, $s\models\psi$ (in the Tarskian sense). That is, the meaning of a classical formula is always reducible to a property of single assignments.
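As a small, purely illustrative aside (the encoding and names are our own), a team can be represented as a list of assignments, and satisfaction of a classical formula is then checked assignment by assignment:

\begin{verbatim}
# Illustrative sketch: the winter/cloudy/snowing team T = {s1, s2} as a
# list of assignments, with pointwise satisfaction of a classical formula
# (T |= psi  iff  s |= psi for every s in T).

s1 = {"X": 0, "Y": 0, "Z": 0}
s2 = {"X": 1, "Y": 1, "Z": 1}
T = [s1, s2]

def satisfies_pointwise(team, formula):
    """`formula` is any predicate on a single assignment."""
    return all(formula(s) for s in team)

print(satisfies_pointwise(T, lambda s: s["X"] == s["Z"]))  # True
print(satisfies_pointwise(T, lambda s: s["Y"] == 1))       # False
\end{verbatim}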
However, once their semantics is expressed in terms of teams, propositional languages can be extended in ways that would be unavailable within Tarskian semantics; for instance, one can add (functional) dependence atoms $\dep{X_1,\dots,X_n}{Y}$ whose semantics is defined by \begin{description} \item [{({*})}] $T\models\dep{X_1,\dots,X_n}{Y}$ if and only if for all $s,s'\in T$, if $s(X_1)=s'(X_1),\dots,$ $s(X_n)=s'(X_n)$, then $s(Y)=s'(Y)$ \end{description} expressing that $Y$ is functionally determined by $\{X_1,\dots,X_n\}$; and this is a global property of the team, not reducible to properties of the single assignments\footnote{Actually, it is a property of \emph{pairs} of assignments of $T$, but more complex formulas arise whose meaning cannot be analogously reduced.}. Thus, in our example, it holds that whether it is snowing depends completely on whether it is winter, $T\models\dep{X}{Z}$, and whether it is cloudy, $T\models\dep{Y}{Z}$; and whether it is cloudy depends entirely on whether it is winter, $T\models\dep{X}{Y}$. A team might therefore be used, for example, to represent a set of individual records coming from a statistical or experimental investigation; or, to represent all possible configurations that are compatible with the ranges of each variable; or, again, a subset of all possible configurations. This last case is particularly important in our context: it may well happen that some configurations are forbidden, even though they respect all variable ranges; this in particular happens if there are functional dependencies between the variables. As regards the first possibility (analysis of statistical data), for many purposes it may be more suitable to use \emph{multi}teams, which allow multiple copies of assignments. Team-theoretical logical languages could thus be used to express global properties of distributions of values. On a second, different interpretation, teams and multiteams may be used to represent epistemic uncertainty about the current state of affairs; if one thinks of each assignment as a possible world, then the team may be thought of as representing a set of equally plausible worlds. An intervention on such an object should, therefore, produce a new set of equally plausible candidates for the actual world. \begin{comment} One can for example define a standard team semantics for first-order logic; however, such a semantics does not really add anything new, in the sense that (using $X$ to denote teams, $s$ for assignments, $\psi$ a first-order formula): \[ X \models \psi \iff \text{for all $s\in X$, } s\models \psi \] that is, the meaning of a first-order formula is always reducible to a property of single assignments. However, once its semantics is expressed in terms of teams, first-order logic can be extended in ways that were impossible starting from Tarskian semantics; for example, one can add functional dependence atoms $\dep{\overline x}{y}$, whose semantics is defined as: \[ \dep{\overline x}{y} \iff \text{for all } s,s'\in X, \text{ if } s(\overline x) = s'(\overline x) \text{ then } s(y) = s'(y) \] expressing that $y$ is uniquely determined by $\overline x$; and this is a global property of the team, not reducible to properties of the single assignments\footnote{Actually, it is a property of \emph{pairs} of assignments of $X$, but more complex formulas arise whose meaning cannot be analogously reduced.}.
\end{comment} \subsection{Causal teams} Despite some claims to the contrary in the literature, teams are insufficient to represent counterfactuals and causal notions based on them. As explained in subsection \ref{SUBSCOUNT}, a proper treatment of causal notions requires an account of counterfactual information, as encoded e.g. in invariant structural equations. It is true that a team sustains a number of functional dependencies among variables; but such dependencies may well be contingent, and disappear if the system is intervened upon. For this reason, we must extend teams so that they incorporate invariant dependencies or functions; and we must explain what may count as an intervention \emph{on a team}. Besides the ideal case in which a causal model contains a complete description of the functions involved in the structural equations (\emph{parametric} case), we develop our semantics with enough generality to accommodate the more realistic case in which we possess only partial information about the functions (\emph{nonparametric} case); as an extreme case, the model might only incorporate information as to which functional \emph{dependencies} are invariant. In the nonparametric case, not all counterfactual statements can be evaluated; the additional logical complications related to this case will be examined in section \ref{NONPARAM}. Most of the paper will focus on the parametric case. \begin{comment} if and only if for all there is a function which maps the values in $T$ of the variables $\overline{X}$ to the values in $T$ of the variable $Y$. This is an existential statement. However, as mentioned in our introductory sections, in defining various notions of causes based on interventions, we need to be able to measure the \textit{effects} of an intervention (either deterministically or probabilistically). To do this, information about the existence of functions (or about the graph induced by them) is not enough: one needs their explicit formulation, i.e. structural equations. to hold also after some of the interventions performed on variables which figure as their arguments. Although teams, as they stand, sustain a number of functional dependencies in the ``existential mode'', such dependencies may well be contingent and disappear if the system is intervened upon. The most basic objects in the structural equation modeling approach are \emph{variables}, which we will denote with capital letters $U,V,X,Y...$. Each variable $V$ can assume values (typically denoted as $v,v',v''$...) within a certain range of objects, $Ran(V)$. Variables are related by \emph{structural equations}, for example \[ Y := f(X_1,\dots,X_n) \] stating that $Y$ is determined as a function of $X_1,\dots,X_n$. We used the symbol $:=$ instead of an equality symbol to emphasize that the equation should be thought of as non-reversible. The set of arguments of function $f_j$, that is $\{X_1,\dots,X_n\}$, is usually denoted as $PA_Y$ (the set of \emph{parents} of $Y$; $Y$ is a \emph{child} of each of the $X_i$). \end{comment} Before proceeding, we want to fix some notational conventions. As is often done in the literature on causal models, we use the symbol $PA_Y$ ambiguously, so that it may mean either the set of parents of $Y$, or a sequence of the same variables in some fixed alphabetical ordering.
For other sets/sequences of variables, we will adhere to the following conventions: \begin{notat} \begin{itemize} \item We use boldface letters such as \textbf{X} to denote either a set $\{X_1,\dots,X_n\}$ of variables or a sequence of the same variables (in the fixed alphabetical order) \item We use \textbf{x} to denote a set or sequence of values, each of which is a value for exactly one of the variables in \textbf{X}. We leave the details of these correspondences between variables and values as non-formalized. \item Writing $s(\SET X)$ we mean the set/sequence of values $s(X_1),\dots,s(X_n)$ that the assignment $s$ assigns to each of the variables in $\SET X$ \item $Ran(\SET X)$ is an abbreviation for $\prod_{X\in \SET X} Ran(X)$ \item By $\SET{X}\setminus \SET{Y}$ we denote the set/the sequence (in alphabetical order) of variables occurring in $\SET{X}$ but not in $\SET{Y}$, and by $\SET{x}\setminus\SET{y}$ a corresponding set/sequence of values \item By $\SET{X}\cap\SET{Y}$ we denote the set/sequence of variables occurring in both $\SET{X}$ and $\SET{Y}$, and by $\SET{x}\cap\SET{y}$ the corresponding set/sequence of values \end{itemize} and so on. \end{notat} Given a team $T^-$ and a variable $X\in dom(T^-)$, we write $T^-(X)$ for the set of values that are obtained for $X$ in the team $T^-$; that is, $T^-(X)= \{s(X)|s\in T^-\}$. As before, we say that a team $T^-$ satisfies a functional dependence $\dep{X_1,\dots, X_n}{Y}$, and we write $T^-\models \dep{X_1,\dots,X_n}{Y}$, if: \[ \text{for all } s,s'\in T^- \text{, if }s(X_i) = s'(X_i) \text{ for all }i=1..n, \text{ then }s(Y) = s'(Y). \] \begin{df} A \textbf{causal team} $T$ over variable domain $dom(T)$ with endogenous variables $\mathbf V\subseteq dom(T)$ is a quadruple $T = (T^-,G(T),\mathcal{R}_T,\mathcal{F}_T)$, where: \begin{enumerate} \item $T^-$ is a team. \item $G(T) =(dom(T),E)$ is a graph over the set of variables. For any $X\in dom(T)$, we denote as $PA_X$ the set of all variables $Y\in dom(T)$ such that the arrow $(Y,X)$ is in $E$. \item $\mathcal{R}_T = \{(X,Ran(X))|X\in dom(T)\}$ (where the $Ran(X)$ may be arbitrary sets) is a function which assigns a range to each variable. \item $\mathcal{F}_T$ is a function $\{(V_i,f_{V_i})|V_i\in\mathbf V\}$ that assigns to each endogenous variable a $|PA_{V_i}|$-ary function $f_{V_i}:dom(f_{V_i})\rightarrow Ran(V_i)$ \\(for some $dom(f_{V_i})\subseteq Ran(PA_{V_i})$). \end{enumerate} which satisfies the further restrictions: \begin{enumerate}[a)] \item $T^-(X) \subseteq Ran(X)$ for each $X\in dom(T)$ \item If $PA_Y=\{X_1,\dots, X_n\}$, then $T^-\models \dep{X_1,\dots, X_n}{Y}$ \item if $s\in T^-$ is such that $s(PA_Y)\in dom(f_Y)$, then $s(Y)= f_Y(s(PA_Y))$. \end{enumerate} In case $dom(f_V)= Ran(PA_V)$ for each $V\in \SET V$, we say the causal team is \textbf{parametric}; otherwise it is \textbf{nonparametric}. \end{df} Clause b) is there to ensure that whenever the graph contains an arrow $X_i\rightarrow Y$, and $\{X_1,\dots, X_n\}$ is the maximal set of variables whence arrows come to $Y$, then the team satisfies the corresponding functional dependency $\dep{X_1,\dots, X_n}{Y}$. Clause c) further ensures that such a functional dependency is in accordance with the (partial description of the) function $f_Y \in \mathcal{F}_T$. The functional component $\mathcal F_T$ induces an associated system of structural equations, say \[ Y := \mathcal F_T(Y)(PA_Y) \] for each endogenous variable $Y\in \mathbf V$.
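For readers who prefer an operational view, the following Python sketch (again a purely illustrative encoding of our own; the helper names are hypothetical) represents a causal team as a team together with a list of edges, a range assignment and a dictionary of partial functions, and checks conditions a)--c):

\begin{verbatim}
# Illustrative sketch: a causal team as (team, edges, ranges, functions),
# where `functions` maps each endogenous variable to a *partial* function,
# encoded as a dict from tuples of parent values to a value.

def parents_of(edges, v):
    """PA_v, as a tuple in alphabetical order."""
    return tuple(sorted(a for (a, b) in edges if b == v))

def is_causal_team(team, edges, ranges, functions):
    # a) every value occurring in the team lies in the declared range
    if any(s[x] not in ranges[x] for s in team for x in s):
        return False
    for v, f_v in functions.items():
        pa = parents_of(edges, v)
        for s in team:
            for t in team:
                # b) the team satisfies the dependence  =(PA_v ; v)
                if all(s[p] == t[p] for p in pa) and s[v] != t[v]:
                    return False
            # c) where f_v is defined, it agrees with the team
            args = tuple(s[p] for p in pa)
            if args in f_v and f_v[args] != s[v]:
                return False
    return True
\end{verbatim}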
\begin{example} \label{EXCAUSALTEAM} Consider a causal team $T$ which has underlying team $T^- =\{\{(U,2),(X,1),(Y,2),(Z,4)\},$ $\{(U,3),$ $(X,1),(Y,2),(Z,4)\},\{(U,1),(X,3),(Y,3),$ $(Z,1)\},\{(U,1),(X,4),(Y,1),(Z,1)\},\{(U,4),(X,4),$ $(Y,1),(Z,1)\}\}$, graph $G(T) = (\{U,X,Y,Z\},$ $ \{(U,Z),(X,Y),(Y,Z),(X,Z)\})$, ranges $Ran(U) = Ran(X)$ $ = Ran(Y) = Ran(Z) = \{1,2,3,4\}$, and partial description of (one value of) the invariant function for $Z$: $\mathcal F_T(Z)(4,1,2):= 3$. We represent the $T^-$ and $G(T)$ components of $T$ by means of a decorated table: \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U100} \ \tikzmark{X100}X\tikzmark{X100'} \ \ \tikzmark{Y100}Y\tikzmark{Y100'} \, \tikzmark{Z100}Z} \\ \hline $2$ & $1$ & $2$ & $4$\\ \hline $3$ & $1$ & $2$ & $4$\\ \hline $1$ & $3$ & $3$ & $1$\\ \hline $1$ & $4$ & $1$ & $1$\\ \hline $4$ & $4$ & $1$ & $1$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=0.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X100'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y100}); \draw [->] ([yshift=3pt]{pic cs:Y100'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z100}); \draw ([xshift = 1pt,yshift=7pt]{pic cs:X100'}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=6pt]{pic cs:Z100}); \draw ([yshift=8pt]{pic cs:U100}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=8pt]{pic cs:Z100}); \end{tikzpicture} \end{center} \end{example} \subsection{Explicit causal teams} \label{SUBSEXPTEAM} For many purposes -- first of all, to keep a smoother account of the operations of taking subteams, and of applying iterated interventions -- it will be convenient to restrict attention to causal teams of a special form. This restriction causes no loss of generality within the developments pursued in the present paper. \begin{df} A causal team $T=(T^-,G(T),\mathcal{R}_T,\mathcal{F}_T)$ with endogenous variables $\mathbf{V}$ is \textbf{explicit} if, for every $V\in\mathbf{V}$, the following additional condition holds: d) Let $(X_1,\dots,X_n)$ be the list of the parents of $V$ in the fixed alphabetical order. For every $s\in T^-$, $(s(X_1),\dots,s(X_n))\in dom(f_V)$. \end{df} (Here, as in the definition of causal team, $f_V$ is a shorthand for $\mathcal F_T(V)$.) In words, the $\mathcal F_T$ component of an explicit causal team encodes \emph{all} the information of the team that concerns invariant functions; no further values of the functions can be reconstructed from the team component $T^-$. Given any causal team, it is always possible to construct in a canonical way an explicit causal team that corresponds to it, in the sense that it encodes exactly the same information (but it is more stable under the operations of taking subteams or interventions -- to be defined in the following subsections). For this purpose, given a causal team $T=(T^-,G(T),\mathcal{R}_T,\mathcal{F}_T)$ with endogenous variables $\mathbf{V}$, to any variable $Z\in \mathbf{V}$ we may associate an \textbf{explicit function} $h^T_Z:Ran(PA_Z)\rightarrow Ran(Z)$ as follows: given $pa_Z \in Ran(PA_Z)$, define\\ $h^T_Z(pa_Z)=$ $\left\{\begin{array}{cc} f_Z(pa_Z) & \text{if this is defined in $T$} \\ s(Z) & \text{if there is some row $s$ in $T$ with $s(PA_Z) = pa_Z$}\\ \end{array}\right.$ \\ \\ (Conditions b) and c) in the definition of causal team ensure that $h_Z^T$ is well-defined.)
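Continuing the informal encoding of the previous sketch, the explicit function $h^T_Z$ can be computed mechanically from $f_Z$ together with the rows of the team (an illustrative sketch; \texttt{parents\_of} is the helper defined above):

\begin{verbatim}
# Illustrative sketch: building the explicit function h^T_Z from the
# partial function f_Z together with the rows of the team.
# Reuses parents_of() from the previous sketch.

def explicit_function(team, edges, z, f_z):
    pa = parents_of(edges, z)
    h = dict(f_z)              # start from the given values of f_Z
    for s in team:
        args = tuple(s[p] for p in pa)
        # conditions b) and c) guarantee this never conflicts with h
        h.setdefault(args, s[z])
    return h
\end{verbatim}

Collecting such functions for all endogenous variables gives the component $\mathcal H_T$ introduced next.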
We collect these explicit functions in a single function $\mathcal{H}_T$ such that, for each variable $Z\in\mathbf{V}$, $\mathcal{H}_T(Z):= h^T_Z$. Then: \begin{center} The explicit causal team associated to $T$ is $T^E:=(T^-,G(T),\mathcal R_T,\mathcal H_T)$. \end{center} \subsection{Causal subteams} It will be important, in order to define a semantics for our languages, to talk about causal subteams. A causal subteam $S$ of a causal team $T$ is meant to express a condition of lesser uncertainty; this will be encoded by the fact that the assignments in $S^-$ form a subset of the assignments of $T^-$, which may be interpreted as the fact that fewer configurations are considered possible. At the same time, the transition to a subteam should not erase information concerning the graph, the ranges of variables, and the invariant functions. The definitions in the previous subsection should make it clear that this proviso is easily guaranteed if $T$ is an \emph{explicit} causal team. In this case, we can define: \begin{df} Given an explicit causal team $T$, a \textbf{causal subteam} $S$ of $T$ is a causal team with the same domain and the same set of endogenous variables, which satisfies the following conditions: \begin{enumerate} \item $S^-\subseteq T^-$ \item $G(S) = G(T)$ \item $\mathcal{R}_S = \mathcal{R}_T$ \item $\mathcal{F}_S = \mathcal{F}_T$. \end{enumerate} \end{df} In case the team $T$ is not explicit, what can go wrong is that, for some endogenous variable $Y$, there may be some assignment $s\in T^- \setminus S^-$ such that $(s(X_1),\dots,s(X_n))\notin dom(f_Y)$ (where $X_1,\dots,X_n$ list $PA_Y$ in alphabetical order); in such a case, the team $T$ encodes the fact that the invariant function which produces $Y$ assigns, to the list of arguments $(s(X_1),\dots,s(X_n))$, the value $s(Y)$; but this information is lost in the subteam $S$. To avoid this problem, we can define more generally: \begin{df} Given a causal team $T$, a \textbf{causal subteam} $S$ of $T$ is a causal subteam of the associated explicit team $T^E$ (as defined in subsection \ref{SUBSEXPTEAM}). \end{df} This second definition obviously coincides with the previous one over explicit causal teams. \subsection{A basic language and its semantics} \label{BASICLAN} Before discussing interventions and counterfactuals, we need to specify what it means for a causal team to satisfy atomic formulas and their boolean combinations. The kind of language we consider, for now, contains atomic dependence statements of the form $\dep{\SET X}{Y}$; atomic formulas of the forms $Y=y$ and $Y\neq y$, where $Y\in dom(T^-)$ and $y\in Ran(Y)$; connectives $\land$ and $\lor$. By analogy with the other kinds of team semantics that have been proposed in the literature, we can define satisfaction of a formula by a causal team by the clauses: \begin{itemize} \item $T\models \dep{\SET X}{Y}$ if for all $s,s'\in T^-$, $s(\SET X)=s'(\SET X)$ implies $s(Y)=s'(Y)$. \item $T\models Y=y$ if, for all $s\in T^-$, $s(Y)=y$. \item $T\models Y\neq y$ if, for all $s\in T^-$, $s(Y)\neq y$. \item $T\models \psi\land \chi$ if $T\models \psi$ and $T\models \chi$.
\item $T\models \psi\lor \chi$ if there are two causal subteams $T_1,T_2$ of $T$ such that $T_1^-\cup T_2^- = T^-$, $T_1\models \psi$ and $T_2\models \chi$.\footnote{Notice that defining the union of any pair of \emph{causal} teams of the same domain is problematic, as the information on invariant functions given by each team might be incompatible with the information encoded in the other team.} \end{itemize} \subsection{Selective implication} Our main goal is to give an exact semantics to counterfactual statements of the form ``If $\psi$ had been the case, then $\chi$ would have been the case''. Very often, however, one finds examples in the literature where these statements are embedded into a larger context. We have seen that von Wright (\cite{Wri1971}) considers examples of the form ``on occasions where $p$, in fact, was not the case, $q$ would have accompanied it, had $p$ been the case''. Pearl (\cite{Pea2000}) analyzes the following query: ``what is the probability $Q$ that a subject who died under treatment $(X=1,Y=1)$ would have recovered $(Y=0)$ had he or she not been treated $(X=0)$?'' The appropriate representation of the last statement seems to be: \[ (X=1\wedge Y=1)\supset(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=0), \] where the symbol $\hspace{2pt}\Box\hspace{-4pt}\rightarrow$ stands for counterfactual implication, while the \emph{selective implication} $\supset$ is a form of restriction of the range of application of the counterfactual to the available evidence. What it does is to generate a subteam by selecting those assignments which satisfy the antecedent; and then it is checked whether the consequent holds in this subteam. Given a causal team $T$, and a classical formula $\psi$ (that is, a formula as in subsection \ref{BASICLAN}, but without dependence atoms), define the subteam $T^\psi$ by the condition: \begin{itemize} \item $(T^\psi)^- = \{s\in T^- | \{s\}\models \psi\}$. \end{itemize} Then, we define selective implication by the clause: \begin{itemize} \item $T\models \psi \supset \chi$ iff $T^\psi \models \chi$. \end{itemize} Here the consequent $\chi$ can be any formula of our current logical language; therefore, it might happen not to be a property of single assignments. Instead, we require for now the antecedent to be classical. The general idea is that selective implication is a reasonable operator only for antecedents which are \emph{flat} formulas, in the sense with which this word is used in the literature on logics of dependence (which will be reviewed in the following subsections). Actually, typical applications involve at most conjunctions of atomic formulas of the type $Z=z$. \begin{example} We observe that the selective implication \[ T\models Z=3 \supset Y=2 \] holds on any causal team that is based on the team $T$ which is depicted in the figure: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Z \ Y \ X} \\ \hline $1$ & $2$ & $3$ \\ \hline $2$ & $1$ & $1$ \\ \hline $3$ & $2$ & $1$ \\ \hline $3$ & $2$ & $2$ \\ \hline \end{tabular} \end{center} To see that the formula holds on it, we have to construct the reduced subteam $T^{Z=3}$: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Z \ Y \ X} \\ \hline $3$ & $2$ & $1$ \\ \hline $3$ & $2$ & $2$ \\ \hline \end{tabular} \end{center} which is obtained by selecting the third and fourth row of the previous table (the rows that satisfy $Z=3$). We can see, then, that $Y=2$ is satisfied by each row of this smaller table.
Therefore, by the semantical clause for this kind of atomic formulas, $T^{Z=3}\models Y=2$. Then, the semantical clause for selective implication allows us to conclude that $T\models Z=3 \supset Y=2$. Notice that we only needed the team structure in order to evaluate a selective implication; all the information required to evaluate it is already encoded in the team, and no information about the structural equations or the underlying DAG is needed.\\ \end{example} \subsection{Interventions on teams: some examples} Our goal is to define (interventionist) \emph{counterfactual implication}. The task is more complicated than in the case of selective implication; we first illustrate the idea with some examples; formal definitions will be provided after that. Informally, the idea is that a counterfactual $X=x\hspace{2pt}\Box\hspace{-4pt}\rightarrow\psi$ is true in the causal team $T$ if $\psi$ is true in the causal team which results from an intervention $do(X=x)$ applied to the team $T$. \begin{example} Consider any causal team $T$ with $T^- = \{ \{(X,1),(Y,2),(Z,3)\}, $ $ \{(X,2),(Y,1),(Z,4)\}, \{(X,4),(Y,1),(Z,4)\}, \{(X,3),(Y,3),(Z,4)\} \}$, and underlying graph $G(T)= (\{X,Y,Z\}, \{(X,Y), (Y,Z)\})$ (we omit specifying the $\mathcal R_T$ and $\mathcal F_T$ components). We can represent it as an annotated table: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X\tikzmark{X} \ \tikzmark{Y}Y\tikzmark{Y'} \ \tikzmark{Z}Z} \\ \hline $1$ & $2$ & $3$ \\ \hline $2$ & $1$ & $4$ \\ \hline $4$ & $1$ & $4$ \\ \hline $3$ & $3$ & $4$ \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y}); \draw [->] ([yshift=3pt]{pic cs:Y'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z}); \end{tikzpicture} \end{center} We want to establish whether the counterfactual $Y=2 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Z=3$ holds in $T$. The idea is to intervene in $T$ by setting the value of $Y$ to $2$ (this corresponds to replacing the function $f_y$ with the constant function $2$); updating all other variables that \emph{invariantly} depend on $Y$; and removing all the arrows that enter into $Y$. The causal team thus produced will be denoted by $T_{Y=2}$. So, first we intervene on $Y$: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|l|}{X \ \ Y \tikzmark{Y1} \ \tikzmark{Z1}Z} \\ \hline $1$ & $\mathbf{2}$ & $\dots$ \\ \hline $2$ & $\mathbf{2}$ & $\dots$ \\ \hline $4$ & $\mathbf{2}$ & $\dots$ \\ \hline $3$ & $\mathbf{2}$ & $\dots$ \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y1}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z1}); \end{tikzpicture} \end{center} $Z$ is the only variable which has an invariant dependence on $Y$; therefore, we have to update its column. Since the function that determines $Z$ has $Y$ as its only parameter, we just have to consult $T$ and see that, in rows where $Y$ has value $2$, $Z$ takes value 3. 
Therefore, $T_{Y=2}$ looks like this: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|l|}{X \ Y \tikzmark{Y2} \ \tikzmark{Z2}Z} \\ \hline $1$ & $\mathbf{2}$ & $3$ \\ \hline $2$ & $\mathbf{2}$ & $3$ \\ \hline $4$ & $\mathbf{2}$ & $3$ \\ \hline $3$ & $\mathbf{2}$ & $3$ \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y2}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z2}); \end{tikzpicture} \end{center} (Again, we are omitting a representation of the $\mathcal R_T$ and $\mathcal F_T$ components). Now $T_{Y=2}\models Z=3$, therefore we conclude that $T\models Y=2 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Z=3$. The team $T_{Y=2}$ describes a counterfactual situation in which we have certainty about the values of $Y$ and $Z$, but not about the value of $X$. Notice that this example uses both the team and the graph structure, but the specific invariant functions are not needed in the evaluation of this specific sentence. However, the observations above do not cover, for example, evaluation of counterfactuals of the form $Y=4\hspace{2pt}\Box\hspace{-4pt}\rightarrow\dots$, because team and graph structure do not tell us anything about what value should $Z$ take in circumstances in which $Y=4$. \end{example} \begin{example} We consider a slightly less trivial example. Here we evaluate the counterfactual $Y=2\hspace{2pt}\Box\hspace{-4pt}\rightarrow (Z=2\lor Z=3)$ in the causal team $T$ shown in the picture below \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X3} \ \tikzmark{Y3}Y\tikzmark{Y3'} \ \tikzmark{Z3}Z} \\ \hline $1$ & $1$ & $1$ \\ \hline $1$ & $2$ & $2$ \\ \hline $2$ & $2$ & $3$ \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y3'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z3}); \draw ([yshift=8pt]{pic cs:X3}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z3}); \end{tikzpicture} \end{center} (The components $\mathcal R_T$ and $\mathcal F_T$ are omitted as before). So we must produce again $T_{Y=2}$. First we intervene on $Y$ \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X4} \, \tikzmark{Y4}Y \tikzmark{Y4'} \ \ \tikzmark{Z4}Z} \\ \hline $1$ & $\mathbf{2}$ & $\dots$ \\ \hline $1$ & $\mathbf{2}$ & $\dots$ \\ \hline $2$ & $\mathbf{2}$ & $\dots$ \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y4'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z4}); \draw ([yshift=8pt]{pic cs:X4}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z4}); \end{tikzpicture} \end{center} and next we update the value of $Z$, taking into account both the values of $X$ and $Y$ in each row (since $Z$ is reached both by arrows coming from $X$ and from $Y$). 
Notice that now two of the modified rows become identical, so that only one appears in the table: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{X5} \ \tikzmark{Y5}Y\tikzmark{Y5'} \ \tikzmark{Z5}Z} \\ \hline $1$ & $\mathbf{2}$ & $2$\\ \hline $2$ & $\mathbf{2}$ & $3$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y5'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z5}); \draw ([yshift=8pt]{pic cs:X5}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z5}); \end{tikzpicture} \end{center} This team can be partitioned into two causal subteams: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{X6} \ \tikzmark{Y6}Y\tikzmark{Y6'} \ \tikzmark{Z6}Z} \\ \hline $1$ & $2$ & $2$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y6'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z6}); \draw ([yshift=8pt]{pic cs:X6}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z6}); \end{tikzpicture} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{X7} \ \tikzmark{Y7}Y\tikzmark{Y7'} \ \tikzmark{Z7}Z} \\ \hline $2$ & $2$ & $3$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y7'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z7}); \draw ([yshift=8pt]{pic cs:X7}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z7}); \end{tikzpicture} \end{center} the first of which satisfies $Z=2$, while the second satisfies $Z=3$. Therefore, by the semantical clause for disjunction, $T_{Y=2}$ satisfies $Z=2 \lor Z=3$. We may conclude that $T\models Y=2\hspace{2pt}\Box\hspace{-4pt}\rightarrow (Z=2\lor Z=3)$. \end{example} \begin{example} Here we present an example involving nonparametric teams. Our goal is to evaluate $X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=2$ in \emph{the} nonparametric team $T$ shown in the picture: \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U8} \ \tikzmark{X8}X\tikzmark{X8'} \ \ \tikzmark{Y8}Y\tikzmark{Y8'} \, \tikzmark{Z8}Z} \\ \hline $2$ & $1$ & $2$ & $4$\\ \hline $3$ & $1$ & $2$ & $4$\\ \hline $1$ & $3$ & $3$ & $1$\\ \hline $1$ & $4$ & $1$ & $1$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X8'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y8}); \draw [->] ([yshift=3pt]{pic cs:Y8'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z8}); \draw ([xshift=1,yshift=7pt]{pic cs:X8'}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=6pt]{pic cs:Z8}); \draw ([yshift=8pt]{pic cs:U8}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=8pt]{pic cs:Z8}); \end{tikzpicture} \end{center} Here we are assuming that $dom(\mathcal F_T(Y))=dom(\mathcal F_T(Z))= \emptyset$; ranges might be given, for example, by $\mathcal R_T(U) = \mathcal R_T(X) = \mathcal R_T(Y) = \mathcal R_T(Z) = \{1,2,3,4\}$. In order to evaluate $X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=2$ we need to generate the causal team $T_{X=1}$. First we intervene on $X$; this will affect all descendants of $X$, which in this case are the (children) $Y$ and $Z$. 
\begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U9} \, \tikzmark{X9}X\tikzmark{X9'} \ \ \ \tikzmark{Y9}Y\tikzmark{Y9'} \ \ \ \ \tikzmark{Z9}Z} \\ \hline $2$ & $\mathbf{1}$ & $\dots$ & $\dots$\\ \hline $3$ & $\mathbf{1}$ & $\dots$ & $\dots$\\ \hline $1$ & $\mathbf{1}$ & $\dots$ & $\dots$\\ \hline $1$ & $\mathbf{1}$ & $\dots$ & $\dots$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X9'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y9}); \draw [->] ([yshift=3pt]{pic cs:Y9'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z9}); \draw ([xshift=1pt, yshift=7pt]{pic cs:X9'}) edge[line width=0.2mm, out=25,in=135,->] ([yshift=6pt]{pic cs:Z9}); \draw ([yshift=8pt]{pic cs:U9}) edge[line width=0.2mm, out=20,in=135,->] ([yshift=8pt]{pic cs:Z9}); \end{tikzpicture} \end{center} Notice, however, that we cannot yet evaluate $Z$ (which is a function of $U,X$ and $Y$) unless we first update $Y$: \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U10} \, \tikzmark{X10}X\tikzmark{X10'} \ \ \tikzmark{Y10}Y\tikzmark{Y10'} \ \, \tikzmark{Z10}Z} \\ \hline $2$ & $1$ & $\mathbf{2}$ & $\dots$\\ \hline $3$ & $1$ & $\mathbf{2}$ & $\dots$\\ \hline $1$ & $1$ & $\mathbf{2}$ & $\dots$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X10'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y10}); \draw [->] ([yshift=3pt]{pic cs:Y10'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z10}); \draw ([xshift = 1pt,yshift=7pt]{pic cs:X10'}) edge[line width=0.2mm, out=25,in=135,->] ([yshift=6pt]{pic cs:Z10}); \draw ([yshift=8pt]{pic cs:U10}) edge[line width=0.2mm, out=20,in=135,->] ([yshift=8pt]{pic cs:Z10}); \end{tikzpicture} \end{center} Finally we update $Z$, but we have a surprise: \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U11} \, \tikzmark{X11}X\tikzmark{X11'} \ \ \tikzmark{Y11}Y\tikzmark{Y11'} \, \; \; \; \tikzmark{Z11}Z} \\ \hline $2$ & $1$ & $2$ & $\mathbf{4}$\\ \hline $3$ & $1$ & $2$ & $\mathbf{4}$\\ \hline $1$ & $1$ & $2$ & $\hat f_Z(1,1,2)$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X11'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y11}); \draw [->] ([yshift=3pt]{pic cs:Y11'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z11}); \draw ([xshift = 1pt, yshift=7pt]{pic cs:X11'}) edge[line width=0.2mm, out=25,in=150,->] ([yshift=6pt]{pic cs:Z11}); \draw ([yshift=8pt]{pic cs:U11}) edge[line width=0.2mm, out=20,in=140,->] ([yshift=8pt]{pic cs:Z11}); \end{tikzpicture} \end{center} The information contained in the team $T^-$ is insufficient for the evaluation of the $Z$-value of the last row of $T_{X=1}$. Since the triple $(1,1,2)$ is not in the domain of $\mathcal F_T(Z)$, the best we could do is to fill that part of the table with a formal term. Here we wrote $\hat f_Z$ as a formal symbol distinguished from the function $f_Z$. In general, in case of repeated interventions, we can expect also complex terms, with incapsulated function symbols, to be produced. 
Notice, however, that we have no uncertainties about the $Y$ column; so, it is natural to state that $T_{X=1}\models Y=2$, and that, therefore, $T\models X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=2$. What if we were instead trying to evaluate some statement about $Z$ under these same counterfactual circumstances? We might renounce assigning truth values to such statements altogether; or we might perhaps add a further truth value, which holds of such a statement if the statement is \emph{admissible} given the form of the terms involved. A precise semantical definition is not trivial, and we address it later. \end{example} \begin{example} Consider now a causal team $S$ which is identical to the one considered in the previous example, except that its functional component, instead of being empty, contains at least a partial description of the invariant function for $Z$; we assume that $\mathcal{F}_S(Z) = f_Z$ is such that $(1,1,2)\in dom(f_Z)$, and that $f_Z(1,1,2)=3$. If we now apply the intervention $do(X=1)$ to this causal team, we can use this extra information to explicitly evaluate the last entry of $S_{X=1}$, obtaining the causal team \begin{center} $S_{X=1}$: \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{U\tikzmark{U11BIS} \, \tikzmark{X11BIS}X\tikzmark{X11BIS'} \ \ \tikzmark{Y11BIS}Y\tikzmark{Y11BIS'} \, \tikzmark{Z11BIS}Z} \\ \hline $2$ & $1$ & $2$ & $4$\\ \hline $3$ & $1$ & $2$ & $4$\\ \hline $1$ & $1$ & $2$ & $\mathbf{3}$\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X11BIS'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y11BIS}); \draw [->] ([yshift=3pt]{pic cs:Y11BIS'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z11BIS}); \draw ([xshift = 1pt, yshift=7pt]{pic cs:X11BIS'}) edge[line width=0.2mm, out=25,in=150,->] ([yshift=6pt]{pic cs:Z11BIS}); \draw ([yshift=8pt]{pic cs:U11BIS}) edge[line width=0.2mm, out=20,in=140,->] ([yshift=8pt]{pic cs:Z11BIS}); \end{tikzpicture} \end{center} \end{example} \subsection{Formal definition of interventions and counterfactuals} \label{SUBSDEFINT} All the examples in the previous subsection have one aspect in common: the graphs underlying causal teams are acyclic. In this case the corresponding causal team is called \emph{recursive}, by analogy with the literature on structural equation models; the examples show that, for these kinds of causal teams, the notion of intervention is naturally conceived in algorithmic terms. We now move towards a precise definition. We have learned a number of lessons from the previous examples: \begin{enumerate}[A.] \item An intervention $do(X=x)$ amounts to 1) setting the whole $X$-column to $x$; 2) eliminating all arrows that enter into $X$; 3) updating the columns that correspond to descendants of $X$. \item It might not be possible to update all the descendants of $X$ in a single step (actually, the order of updating might not be trivial to decide). \item The information encoded in the causal team might be insufficient for generating, under intervention, a proper causal team; we must then admit teams which assign formal terms to some variables. \end{enumerate} We begin by addressing this last problem.
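In programming terms, the device we are about to introduce for this last problem can be previewed as follows (an illustrative Python sketch; the \texttt{FormalTerm} class is our own device and not part of the formal development):

\begin{verbatim}
# Illustrative sketch of lesson C: when the value f_Z(args) is unknown,
# the updated Z-column can record a formal term instead of a value.

class FormalTerm:
    def __init__(self, fname, args):
        self.fname, self.args = fname, tuple(args)
    def __repr__(self):
        return self.fname + "(" + ", ".join(map(str, self.args)) + ")"

def apply_or_term(f_v, name, args):
    """f_v(args) if the partial function is defined there, else a term."""
    return f_v[args] if args in f_v else FormalTerm(name, args)

f_Z = {(2, 1, 2): 4, (3, 1, 2): 4}           # partial description of f_Z
print(apply_or_term(f_Z, "f_Z", (1, 1, 2)))  # f_Z(1, 1, 2)
print(apply_or_term(f_Z, "f_Z", (2, 1, 2)))  # 4
\end{verbatim}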
Given a graph $G$ whose set of vertices $V$ is a set of variables, we call $L_G$ the set of function symbols $\hat f_X$ (of arity $card(PA_X)$), for each $X\in V$ (actually, we only need one symbol for each endogenous variable, that is, for variables which have indegree at least one). We call $G$-terms the terms generated from variables in $V$ and from symbols in $L_G$ by the obvious inductive rules; the set of $G$-terms will be denoted as $Term_G$. When speaking of a causal team $T$, we will implicitly assume, from now on, that the ranges of all variables contain the set of terms $Term_{G(T)}$. Actually, it is not difficult to prove that (iterated) interventions (as defined below) on a recursive causal team with finite variable ranges can generate only a finite number of formal terms. Therefore, in principle a finite causal team could always be extended to a \emph{finite} causal team with formal terms, by using an appropriate finite subset of $Term_{G(T)}$ to extend the ranges of variables. We may combine these ideas with those of subsection \ref{SUBSEXPTEAM}. Given a causal team $T$, let $T'$ be the causal team obtained by extending the ranges of variables with formal terms, as described above. Then form the corresponding explicit team $T^{FE}:= (T')^E$ following the construction given in subsection \ref{SUBSEXPTEAM}. We call $T^{FE}$ the \textbf{fully explicit} causal team corresponding to $T$. The invariant functions of $T^{FE}$ now encode sufficient information for performing any kind of intervention that we will consider in this paper. The fact that the ranges of variables contain formal terms solves issue C. The fact that the team is explicit ensures that no information is lost after an intervention. Now we address problem B. How do we decide the order of updating for the columns? To understand the problem, observe the graph in the figure: \begin{figure} \[ \begin{array}{ccc} X & \longrightarrow & Y\\ & \searrow & \uparrow\\ & & Z \end{array} \] \caption{A graph in which $Y$ cannot be updated immediately after an intervention on $X$: the updated value of $Z$ is needed first.} \label{fig:BEXAMPLE} \end{figure} $Y$ is connected to $X$ by an arrow, so one might think that, in an intervention on $X$, it is possible to update $Y$ immediately after updating $X$. However, this is impossible, because the updated value of $Z$ is also needed in the evaluation of the new values for $Y$. The point is that, from $X$ to $Y$, there is a longer path than the direct one. This suggests defining a distance from $X$, $d(X,\cdot)$, given by the maximum length of paths from $X$ to $\cdot$. Nodes at distance 1 can be immediately evaluated after updating $X$; at this point it is safe to update nodes at distance 2; and so on. Nodes that are not accessible by directed paths from $X$ will be assigned a negative distance and excluded from the updating procedure. Of course, this strategy will work provided there are no cycles in the graph, or at least no cycles in the part of the graph which is accessible by directed paths from $X$ (the set of descendants of $X$). Since in applications it is very common to have conjunctive interventions, say $do(X_1=x_1 \land\dots \land X_n=x_n)$, we will define, more generally, the notion of distance from a \emph{set} of variables $\mathbf{X}$. In this more complex case, one must consider a reduced graph in which all arrows entering $\SET{X}$ have been removed, and examine the directed paths of this reduced graph. The reader may think of the $card(\mathbf{X})=1$ case for ease of visualization.
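The updating order just described is easy to prototype (an illustrative Python sketch, using the same edge-list encoding as the earlier sketches; the formal definition follows):

\begin{verbatim}
# Illustrative sketch: the evaluation distance d(X, v), i.e. the longest
# directed path from some intervened variable in X to v, in the graph
# where all arrows into X have been removed; -1 if there is no such path.
# Assumes an acyclic graph, encoded as a list of edges.

def evaluation_distance(edges, X, v):
    reduced = [(a, b) for (a, b) in edges if b not in X]
    def longest_to(t):
        best = 0 if t in X else -1
        for (a, b) in reduced:
            if b == t:
                d = longest_to(a)
                if d >= 0:
                    best = max(best, d + 1)
        return best
    return longest_to(v)

edges = [("X", "Y"), ("X", "Z"), ("Z", "Y")]    # the graph in the figure
print(evaluation_distance(edges, {"X"}, "Z"))   # 1: update Z first
print(evaluation_distance(edges, {"X"}, "Y"))   # 2: then update Y
\end{verbatim}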
\begin{df} Given a graph $G=(\SET V,E)$ and $\SET{X}\subseteq \SET V$, \begin{itemize} \item We denote as $G_{-\SET{X}}=(\SET V,E_{-\SET{X}})$ the graph obtained by removing all arrows going into some vertex of $\SET X$ (i.e., an edge $(V_1,V_2)$ is in $E_{-\SET{X}}$ iff it is in $E$ and $V_2\notin \SET X$). Notice that, in the special case that $\SET{X} = \{X\}$, the set of directed paths of $G_{-\SET{X}}$ starting from $X$ coincides with the set of directed paths of $G$ starting from $X$. \item Let $Y\in \SET V$. We call \textbf{(evaluation) distance} between $\mathbf{X}$ and $Y$ the value $d_G(\mathbf{X},Y)= sup\{length(P)| P \text{ dir. path of $G_{-\SET{X}}$ going from some $X\in \mathbf{X}$}$ \\ $\text{to } Y\}$. In case no such path exists, we set $d_G(\mathbf{X},Y)=-1$. Clearly, if the graph is finite and acyclic, $d_G(\mathbf{X},Y)\in \mathbb{N}\cup\{-1\}$ for any pair $\mathbf{X},Y$. When the graph is clear from the context, we simply write $d(\SET X,Y)$. \end{itemize} \end{df} \begin{comment} *************************************************** DO YOU THINK THIS PARAGRAPH CAN BE ELIMINATED AT THIS POINT? OR IS IT NECESSARY (IN SOME MODIFIED FORM) FOR UNDERSTANDING THE FOLLOWING? We must now solve a practical problem. We must define what it means to use the information encoded in a causal team $T$ in order to update an intervened team $T'$; but there are two ways in which such information may be encoded in the causal team. First, a variable $Z$ might be updated applying the invariant function $f_Z$, which is part of the causal team, and applying it to the list of updated values of argument (we may call such list $s(\overline{PA_Z})$; but notice that each assignment of $T$ may give rise to a distinct list of values). Secondarily, if $f_Z$ is not defined on this list of values $pa_Z$, we might still be able to recover a value for $Z$ if the original team $T$ contains an assignment $s$ such that $s(\overline{PA_Z})= pa_Z$; in such case, we update $Z$ with the value $s(Z)$. And finally, if no information can be retrieved from $T$, we can use a formal term $\overline{f_Z}(pa_Z)$ as suggested above. To make some order among all these possibilities, we encode all this information in a single function $g^T_Z$; and our first operation in the definition of intervention will be to replace $f_Z$ with $g^T_Z$. The main point here (besides simplifying the description of what an intervention is) is ensuring that \emph{all} the information encoded in $T$ is also preserved in the intervened teams; this will guarantee that iterated interventions work smoothly. *************************************************** \end{comment} Let $T$ be a \emph{fully explicit}, \emph{recursive} causal team of endogenous variables $\SET V$. Let $\mathbf{X} =\{X_1,\dots,X_n\}\subseteq dom(T)$ and let $x_1\in Ran(X_1),\dots, x_n\in Ran(X_n)$ be corresponding values with the additional property that, if $X_i$ and $X_j$ denote the same variable, then $x_i=x_j$. We define the \textbf{intervention} $do(\bigwedge_{i=1..n} X_i=x_i)$ (in short, $do(\SET{X}=\SET{x})$) as an algorithm\footnote{Notice that our definition allows the presence of many occurrences of the same variable in different conjuncts, provided that all such conjuncts assign the same value to the given variable. We do \emph{not} define interventions associated to contradictory conjunctions.
E.g., the intervention $do(X=x \land X=x \land Y=y)$ is defined, and coincides with the intervention $do(X=x \land Y=y)$; while the intervention $do(X=1 \land X=2)$ is not defined.}: \\ Stage $0$. Delete all arrows coming into $\SET X$, and replace each assignment $s\in T$ with $s(\SET{x}/\SET{X})$. Denote the resulting team\footnote{A warning: the teams $T_n$ produced before the last step of the algorithm may fail to form a causal team together with the other components described, because of violations of conditions b) and c) of the definition of causal team.} as $T_0$. Replace $\mathcal F_T$ with its restriction $\mathcal F_T'$ to $\SET V\setminus\SET X$. \\ Stage $n+1$. If $\{Z_1,\dots, Z_{k_{n+1}}\}$ is the set of all the variables $Z_j$ such that $d_{G(T)}(\SET{X},Z_j) = n+1$, define a new team $T_{n+1}$ by replacing each $s\in T_n$ with the assignment $s(f_{Z_1}(s(PA_{Z_1}))/Z_1, \dots, f_{Z_{k_{n+1}}}(s(PA_{Z_{k_{n+1}}}))/Z_{k_{n+1}})$. \\ End the procedure after step $\hat n = sup\{d_{G(T)}(\SET{X},Z)|Z\in dom(T)\}$.\\ Notice that $\mathcal R_T$ is not modified by the algorithm; and that, except for the modifications to $G(T)$, it would be the same to apply the algorithm to each assignment separately. In case the causal team $T$ is recursive but not fully explicit, we should begin the algorithm with an additional step:\\ Step $-1$. Replace $T$ with the corresponding fully explicit causal team $T^{FE}$.\\ In case the intervention $do(\SET{X}=\SET{x})$ is a terminating algorithm on $T$, we define the causal team $T_{\SET{X}=\SET{x}}$ (of endogenous variables $\SET V\setminus \SET X$) as the quadruple $(T_{\hat n},G(T)_{-\SET{X}},\mathcal R_T, \mathcal F_T')$ which is produced when $do(\SET{X}=\SET{x})$ is applied to $T$. Actually, relaxing the notion of algorithm a bit, the $do$ algorithm applies as well to causal teams with infinite variable ranges: \begin{teo} If $G(T)$ is a finite acyclic graph, then $T_{\SET{X}=\SET{x}}$ is well-defined. \end{teo} \begin{proof} We can assume without loss of generality that $T$ is fully explicit. Assume also, at first, that $T^-$ is finite. Suppose that, for $\SET{X}\subseteq dom(T), Y\in dom(T)$, $d(\SET X,Y)> card(G(T))$. Then there is a path $P$ from some $X\in\mathbf{X}$ to $Y$ such that $length(P)>card(G(T))$. Then there is a node which is crossed at least twice by $P$; so $G(T)$ contains a cycle: contradiction. Therefore, $sup\{d(\mathbf{X},Z)|Z\in dom(T)\}\leq card(G(T))$; this means that the ``for'' loop in the algorithm goes through a finite number of iterations over the variable $n$. Finally, notice that, for each $n$, there are a finite number of variables $Z$ such that $d(\SET{X},Z)=n$ (due to finiteness of $G(T)$) and a finite number of assignments $t$ in the team that is undergoing modification (due to the finiteness of $T^-$). Therefore, the algorithm terminates after a finite number of steps. If instead $T^-$ is infinite, we can replace $do(\SET{X}=\SET{x})$ with an infinitary algorithm which, in each iteration of the ``for'' loop, performs simultaneously the substitution $t \hookrightarrow t(f_{Z_1}(t(PA_{Z_1}))/Z_1, \dots, f_{Z_{k_{n+1}}}(t(PA_{Z_{k_{n+1}}}))/Z_{k_{n+1}})$ for all the assignments $t$ in the current team. By the arguments above, such an ``algorithm'' terminates and yields a well-defined causal team. \end{proof} In case a causal team is not recursive (i.e., its graph is cyclic), the algorithm above may well fail to terminate.
In this case, a definition of the intervention $do(\SET{X}=\SET{x})$ could still be given in terms of the set of solutions of the modified system of structural equations (in which the equations for $\SET{X}$ are replaced by $\SET{X}:=\SET{x}$), provided the team is parametric. Each assignment $s\in T$ is a solution of the system of structural equations encoded in $\mathcal F_T$ (this is ensured by parametricity and conditions a) and c) in the definition of causal team). Galles and Pearl (\cite{GalPea1998}) consider the case of systems with unique solutions, defined as follows: 1) for fixed values of the exogenous variables, the system has a unique solution, and 2) each ``intervened'' system of equations obtained from the initial one by replacing some equations of the form $X:=f_X(PA_X)$ with constant equations $X:=x$ still has a unique solution for each choice of values for the exogenous variables. In case the $\mathcal F_T$ component of a team encodes a system with unique solutions, then the natural way to define an intervention $do(X=x)$ on the team is to replace each assignment $s\in T^-$ with the (unique) assignment $t$ which encodes the solution of the intervened system for the choice $s(\SET{U})$ of values for the exogenous variables\footnote{In case the intervention acts also on some of the exogenous variables, this idea should be modified in an obvious way.}. The definition of the other components of the causal team produced by the intervention is straightforward. Over recursive parametric causal teams, the team so obtained coincides with the causal team that is produced by the algorithm. It is not difficult to extend the previous ideas to systems that may have no solution at all. In case the intervened system obtained by replacing the appropriate equations with $\SET{X}=\SET{x}$ has no solution corresponding to the choice $s(\SET{U})$ for the exogenous variables, then no assignment should correspond to $s$ in the intervened team $T_{\SET X=\SET x}$. Although this extension is straightforward, we can expect significant changes in the underlying logic. The case of systems with multiple solutions is more problematic. If many solutions correspond to an assignment of the initial team, should we include them all in the intervened team? And, as regards the probabilistic developments of the following sections, should we consider all the assignments thus produced as equiprobable? These matters should probably be settled according to the kind of interpretation we want to give to nonrecursive causal teams in a given application, and one is led to the classical problems of interpretation of nonrecursive causal models (see e.g. \cite{StrWol1960}). It might also be reasonable to model such an intervention as producing not one, but multiple teams, corresponding to possible different outcomes of the intervention. This set of ``accessible teams'' would then induce a nontrivial modality, and it would then be reasonable to treat counterfactuals as necessity operators in a dynamic logic setting (in the spirit of \cite{Hal2000}). This would lead us far away from the approach of the present paper, in which we focus on the nonproblematic recursive case. We now return to it. Having defined the intervened team $T_{\SET X=\SET x}$, we are immediately led to a semantical clause for counterfactuals of the form $\SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi$: \[ T\models \SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi \iff T_{\SET X=\SET x} \models \psi.
\] In case the antecedent is inconsistent (i.e., it contains two conjuncts $X_i = x_i, X_i = x_i'$ with $x_i \neq x_i'$), the corresponding intervention is not defined; in this case, we postulate the counterfactual to be (trivially) true. \subsection{Logical languages} We call the (basic) \emph{language of causal dependence}, $\mathcal{CD}$, the language formed by the following rules: \[ Y = y \ | \ Y\neq y \ | \ \dep{\SET X}{Y} \ | \ \psi \land \chi \ | \ \psi \lor \chi \ | \ \theta \supset \chi \ | \ \SET X= \SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi \] for $Y,\SET{X}$ variables, $y,\SET x$ values, $\psi,\chi$ formulae of $\mathcal{CD}$, $\theta$ a classical formula. If we interpret this language by the semantical clauses introduced so far, it can be shown that: \begin{teo} The logic $\mathcal{CD}$ is downwards closed, that is: if $\varphi\in \mathcal{CD}$, $T$ is a parametric causal team with at most unique solutions, $T'$ is a causal subteam of $T$, and $T\models\varphi$, then also $T'\models\varphi$. \end{teo} \begin{proof} We prove it by induction on the syntactic structure of $\varphi$. The atomic and propositional cases are routine. Suppose $T\models \psi \supset \chi$. Then $T^{\psi}\models \chi$. Now $T'^{\psi}\subseteq T^{\psi}$; therefore, by inductive hypothesis, $T'^{\psi}\models \chi$. Thus, by the semantic clause for selective implication, $T'\models \psi \supset \chi$. Suppose $T\models \SET X= \SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. Then $T_{\SET X= \SET x}\models \chi$. But since the algorithm $do(\SET X= \SET x)$ acts on each assignment separately, we can conclude that $(T'_{\SET X= \SET x})^-\subseteq (T_{\SET X= \SET x})^-$. So, by the inductive hypothesis, $T'_{\SET X= \SET x}\models \chi$. Applying again the semantic clause, $T'\models \SET X= \SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. \end{proof} \begin{teo} The logic $\mathcal{CD}$ has the empty set property, that is: for every $\varphi\in \mathcal{CD}$, $\emptyset\models\varphi$. \end{teo} \begin{proof} By induction on the syntax of the formula. The new cases are those for $\supset$ and $\hspace{2pt}\Box\hspace{-4pt}\rightarrow$. Let $\varphi$ be $\psi\supset\chi$. Notice that $\emptyset^{\psi}= \emptyset$; so, by the inductive hypothesis, $\emptyset^{\psi}\models \chi$, and therefore $\emptyset\models \psi \supset \chi$. Let $\varphi$ be of the form $\SET{X} = \SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. Any $do$ algorithm, applied to $\emptyset$, produces $\emptyset$ again; therefore, by the inductive hypothesis, $\emptyset_{\SET X= \SET x}= \emptyset\models \chi$. Thus, by the clause for counterfactuals we obtain $\emptyset\models \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. \end{proof} We also give names to some relevant fragments of $\mathcal{CD}$: \begin{itemize} \item $\mathcal{C}$: the \emph{counterfactual fragment}, where symbols $\dep{\cdot}{\cdot}$ and $\supset$ are not allowed \item $\mathcal{C}O$: the \emph{causal-observational fragment}, where $\dep{\cdot}{\cdot}$ is not allowed.
\end{itemize} We may also define variants $\mathcal{C}O^{neg},\mathcal{C}^{neg}$ that are closed (except for antecedents of counterfactuals) under a dual negation $\neg$, which satisfies the additional semantical clause: \begin{itemize} \item $T\models \neg\psi \iff $ for all $s\in T^-$, $\{s\}\not \models \psi$.\footnote{This definition is reasonable due to the flatness of $\mathcal{C}O$, which is entailed by theorem \ref{TEOFLAT} below.} \end{itemize} We can show that the language $\mathcal{C}O^{neg}$ (and thus also $\mathcal{C}O,\mathcal{C}^{neg},\mathcal{C}$) satisfies a much more restrictive condition: flatness. Here and in the following, we will often abuse notation and write, say, $\{s\}$ for a causal team $T$ whose support $T^-$ contains the single assignment $s$; we will then write $\{s\}_{\SET X= \SET x}$ for the causal team $T_{\SET X= \SET x}$, and so on. \begin{teo} \label{TEOFLAT} The logic $\mathcal{C}O^{neg}$ over unique-solution parametric causal teams is flat, that is: for every formula $\varphi$ of $\mathcal{C}O^{neg}$ and every unique-solution parametric causal team $T$, $T\models \varphi$ iff $\{s\}\models \varphi$ for every assignment $s\in T^-$. \end{teo} \begin{proof} It can be shown by induction on the complexity of the involved formulas. The cases related to propositional connectives are routine. We only have to check the cases of $\varphi = \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$ and $\varphi = \psi \supset \chi$. Case $\hspace{2pt}\Box\hspace{-4pt}\rightarrow$) Suppose first that $T\models \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$, with $\chi$ flat. Then by the semantical clause for counterfactuals, $T_{\SET X= \SET x}\models \chi$. By flatness of $\chi$, $\{t\}\models\chi$ for each $t\in (T_{\SET X= \SET x})^-$. Now, notice that $\{s\}$ is a unique-solution causal team (because $T$ is such), which entails that, for every $s\in T^-$, $\{s\}_{\SET X= \SET x}$ is again a singleton causal team (see the details of the $do$ algorithm). Therefore, by the remarks above, $\{s\}_{\SET X= \SET x}\models \chi$ for each $s\in T$; so, by the semantical clauses, $\{s\}\models \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$ for every $s\in T$. In the other direction, suppose $\{s\}\models \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$ for every $s\in T^-$. Then for every $s\in T^-$, $\{s\}_{\SET X= \SET x}\models \chi$. Again, any such causal team is a singleton set, say $\{s\}_{\SET X= \SET x} =: \{t_s\}$. But notice that, if $t$ is any assignment in $(T_{\SET X= \SET x})^-$, then $t=t_s$ for some $s\in T^-$. So we can apply the inductive hypothesis and conclude that $T_{\SET X= \SET x}\models \chi$. Therefore $T\models \SET X= \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. Case $\supset$) Suppose $T\models \psi \supset \chi$, with $\psi, \chi$ flat. Then, by the semantical clause for $\supset$, $T^\psi\models \chi$. By induction hypothesis, $\{s\}^\psi=\{s\}\models \chi$ for each $s\in (T^\psi)^-$. So $\{s\}\models \psi\supset \chi$. Suppose instead $s'\in T$ is such that $\{s'\}\not\models\psi$. Then $\{s'\}^\psi = \emptyset$. Now $\{s'\}^\psi = \emptyset \models \chi$; so, by the clause for $\supset$, $\{s'\}\models \psi \supset \chi$. In conclusion, for any $s\in T^-$, $\{s\}\models \psi \supset \chi$. Viceversa, suppose $\{s\}\models \psi \supset \chi$ for all $s\in T^-$. Let $s\in T^-$ be such that $\{s\}\models \psi$. Then $\{s\}^\psi = \{s\}$; our assumption then yields $\{s\}\models \chi$. 
Since $\chi$ is flat, we can conclude that $T^\psi \models \chi$. So $T\models \psi \supset\chi$. \end{proof} This flatness theorem uses in an essential way the recursiveness of the causal team. The previous downward closure and empty set property theorems may work as well in the nonrecursive case, provided a definition of intervention is given also in that case, along the lines sketched at the end of subsection \ref{SUBSDEFINT}. Notice also that, unlike in similar results for team semantics (see e.g. \cite{Vaa2007}), the truth of a flat formula in a causal team based on a singleton team uses, through structural equations, information that goes beyond that which is encoded in the single assignment of the team. A singleton causal team can be identified with a structural equation model, enriched with a description of an actual state for all variables in the system; therefore, our flatness result can be seen as expressing a sort of conservativity of the causal team semantics over structural equation modeling. As long as one works within a sufficiently poor language, statements on causal teams can be reduced to statements over multiple enriched structural equation models; but as soon as further non-flat language components are introduced, such as dependence atoms, or the probabilities and the intuitionistic disjunction (see later sections), structural equation models become insufficient, and causal teams are needed. \section{Some logical principles} \label{LOGLAWS} In this section, we analyze some validities and inference rules for the languages $\mathcal{CD},\mathcal{C}O$ and $\mathcal{C}$ when their semantics ranges over \emph{parametric, recursive} causal teams (we will assume this restriction throughout the section). Most observations extend also to unique solution causal teams. We shall write $\psi \models \chi$ to mean that $\chi$ holds on any parametric, recursive causal team on which $\psi$ holds. By $\models\chi$ we mean that $\chi$ holds on any parametric, recursive causal team. \subsection{The law of excluded middle} \label{SUBSEM} Does the fact that the logic $\mathcal{CO}$ is flat entail that its propositional logic is classical? The issue is somewhat problematic. It is well known that in predicate logics of dependence and independence the law of excluded middle fails; in the propositional case this point is more subtle. We propose two formulations of the law of excluded middle, a weak one and a stronger one that comes closer to the classical version:\\ \[ \hspace{-108pt}(WEM): \hspace{15pt}\models \psi \lor \neg\psi \] \[ (SEM): \hspace{15pt}\text{For any team } T, T\models \psi \text{ or } T\models \neg \psi \] Neither formulation covers the full classical scheme, because for many formulas we lack negation. Within $\mathcal{C}O^{neg}$, instead, SEM fundamentally corresponds to the classical law of excluded middle. It is very easy to find a counterexample to SEM, even at the atomic level. Notice indeed that the following team \begin{center} \begin{tabular}{|c|} \hline X \\ \hline 1 \\ \hline 2 \\ \hline \end{tabular} \end{center} does not satisfy $X=1$ nor its negation $X\neq 1$. With regard to WEM, we get: \begin{teo} All formulas in $\mathcal{C}O^{neg}$ satisfy WEM. \end{teo} \begin{proof} Let $\varphi$ be a formula of $\mathcal{C}O^{neg}$. Let $T$ be a team. Let $T_1 = \{s\in T^-| \{s\}\models\varphi\}$, and $T_2 = \{s\in T^-| \{s\}\not\models\varphi\} = \{s\in T^-| \{s\}\models\neg\varphi\}$. Then, by flatness, $T_1\models \varphi$ and $T_2\models \neg \varphi$. 
By the clause for disjunction, $T \models \varphi \lor \neg \varphi$. \end{proof} For trivial reasons (since it lacks negation of all but atomic formulas), $\mathcal{CD}$ itself satisfies WEM. It is not obvious how to extend $\mathcal{CD}$ itself with a negation. One way followed in the literature is to treat the negation of a dependence atom as satisfied only by the empty team. In this case, one immediately finds counterexamples to WEM. Another possibility would be to extend $\mathcal{CD}$ with a contradictory negation of the dependence atom: \begin{itemize} \item $T\models \not\dep{\SET{X}}{Y} \iff$ there are $s,s'\in T$ such that $s(\SET{X})=s'(\SET{X})$ but $s(Y) \neq s'(Y)$. \end{itemize} This option does not save WEM, either: just let $T$ be any team that satisfies the atom $\dep{\SET{X}}{Y}$. Then, in any partition $T_1, T_2$ of $T$, both parts satisfy the atom, and not its negation (also in case one of the two subteams is the empty subteam). Notice also the failure of the empty set property. \subsection{Conditional excluded middle} In the present section we consider the law that is commonly called \emph{conditional excluded middle}: \[ \hspace{-86pt}(CEM): \hspace{15pt}\models \psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\chi \lor \neg \chi). \] This is often considered together with the formally similar law called \emph{distribution}: \[ (D): \hspace{15pt}\psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\chi \lor \chi') \models [(\psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi) \lor (\psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi')] \] These principles are known to hold for Stalnaker counterfactuals and fail for Lewis counterfactuals. We will show in a later subsection (\ref{SUBSCOMPL}) that distribution is valid for our languages. Here we focus mainly on the CEM law. What about systems of interventionist counterfactuals, such as those given by Galles and Pearl (\cite{GalPea1998}), Halpern (\cite{Hal2000}) and Briggs (\cite{Bri2012})? In the system considered by Galles and Pearl, only atomic consequents are allowed; therefore CEM and D have no instances. As for the remaining systems, these fundamentally evaluate formulas on single assignments, which we might think of as singleton causal teams. Each intervention produces again an assignment/singleton team. Therefore, CEM trivially holds for atomic consequents. Simple induction arguments allow us to extend this observation to all the formulas of the language considered by Halpern (which allows boolean operators in the consequents of counterfactuals). Briggs also allows disjunctions and negations in the antecedents of counterfactuals; we do not consider this possibility in the paper. The languages of these papers also have an operator corresponding to classical implication; therefore, D can be internalized and shown to hold there, as is done explicitly in \cite{Hal2000}. As far as causal teams are concerned, by the flatness of $\mathcal{C}O^{neg}$, CEM holds in $\mathcal{C}O^{neg}$, and therefore also in $\mathcal{CD}$. However, it is natural to wonder about a corresponding semantical property: \[ (CEM'): \text{ For every causal team $T$, } T\models \theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi \text{ or } T\models \theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\neg\chi \] We show that CEM' can fail in very simple examples on causal teams; therefore, our logic does \emph{not} coincide with that of usual interventionist counterfactuals.
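The reason why CEM itself holds is worth spelling out briefly; the following is just a sketch, obtained by relativizing the proof of WEM above to an intervened team. Given a causal team $T$, an antecedent $\SET X=\SET x$ and a consequent $\chi$ of $\mathcal{C}O^{neg}$, the support of the intervened team splits as
\[
(T_{\SET X=\SET x})^- = \{t\in (T_{\SET X=\SET x})^- \mid \{t\}\models\chi\} \cup \{t\in (T_{\SET X=\SET x})^- \mid \{t\}\not\models\chi\};
\]
the first part satisfies $\chi$ by flatness, and the second satisfies $\neg\chi$ directly by the clause for dual negation, so that $T_{\SET X=\SET x}\models \chi\lor\neg\chi$, i.e.\ $T\models \SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow(\chi\lor\neg\chi)$. CEM', by contrast, demands that one of the two parts exhaust the whole support, which is a genuinely stronger requirement.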
The simplest possible counterexample is of the form: \begin{center} $T$: \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{X \ Y} \\ \hline 1 & 1 \\ \hline 1 & 2 \\ \hline \end{tabular} \end{center} The intervention $do(X=1)$ does not modify this team, that is: $T_{X=1} = T$. On the other hand, $T\not\models Y=1$, and $T\not\models Y\neq 1$. So, $T\not\models X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=1$, and $T\not\models X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y\neq 1$. Thus, CEM' is falsified. The peculiarity of this example might suggest that the failure of these laws could be attributed to the fact that there are no arrows between the variables of concern. This is not the case, as we can show with a slightly more complex example. Consider the team \begin{center} $S$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{X18} \ \tikzmark{Z18}Z\tikzmark{Z18'} \ \tikzmark{Y18}Y} \\ \hline 1 & 1 & 2\\ \hline 2 & 2 & 4\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Z18'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y18}); \draw ([yshift=7pt]{pic cs:X18}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Y18}); \end{tikzpicture} \end{center} where the behavior of $Y$ is determined by the function $\mathcal{F}_S(Y)(X,Z):= X+Z$. The intervention $do(X=1)$ generates the team \begin{center} $S_{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{X19} \ \tikzmark{Z19}Z\tikzmark{Z19'} \ \tikzmark{Y19}Y} \\ \hline 1 & 1 & 2\\ \hline 1 & 2 & 3\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Z19'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y19}); \draw ([yshift=7pt]{pic cs:X19}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Y19}); \end{tikzpicture} \end{center} It is then immediate to verify that $S$ neither satisfies $X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=2$ nor $X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y\neq 2$, thereby falsifying CEM'. \subsection{Interactions between implications and dependencies} \label{SUBSINTER} As causal team semantics defines both dependencies based on correlations (functional dependence) and causal and counterfactual dependencies, it allows us to investigate the logical principles which govern their interaction. Here we list some of the inference rules that connect the various kinds of dependence that are referred to within the language $\mathcal{CD}$; the simple soundness proofs are left to the reader. Given that the selective implication is strongly analogous to classical implication, the following inference rules hold (assuming the classicality of all antecedents): \[ \frac{\theta \supset \chi}{(\theta \land \psi) \supset \chi} \hspace{25pt} \frac{\theta \supset \chi}{\theta \supset (\chi \lor\psi)} \hspace{25pt} \frac{\theta \supset (\chi \supset \psi)}{(\theta \land \chi) \supset \psi} \] the third of which is also invertible. The second rule can be expected to fail in logics that lack the empty set property. It is time to make the relationship between selective and classical implication more explicit. 
First of all, notice that a restricted version of a semantic Deduction Theorem holds for selective implication: if any team that satisfies a \emph{classical} formula $\theta$ also satisfies $\chi$, then this in particular holds for any team of the form $T^\theta$; therefore $T^\theta\models \chi$, from which we obtain $T\models \theta\supset \chi$ (for any causal team $T$). Therefore, all the valid inference rules that are stated in this section can be internalized in the formal language as selective implications, \emph{provided the antecedents are classical formulas}\footnote{We might consider extending our languages by allowing \emph{flat} instead of classical antecedents for selective implications. This would extend the range of application of the Deduction Theorem, but would have the drawback that the syntax of the resulting language might not be decidable anymore.}. Secondly, we wish to establish whether our approach, of allowing selective implications, is equivalent or not to the approach of adding to our languages the dual negations of all classical formulas; could then a selective implication $\theta\supset\chi$ be replaced by an expression of the form $\neg\theta\lor \chi$? The answer is yes \emph{for downward closed logics}, as those that we have considered thus far; but the approach via selective implication is more general. The point is this: applying the clause for disjunction, $T\models\neg\theta\lor \chi$ holds if and only if there are $S_1,S_2 \subseteq T^-$ such that $S_2\models \chi$ and, for all $s\in S_1$, $\{s\}\not\models\theta$. Then $(T^{\theta})^- \subseteq S_2$; if the logic under consideration is downward closed, then, we can conclude that $T^{\theta}\models \chi$, that is, $T\models \theta\supset\chi$. But it is easy to find counterexamples in logics that are not downward closed. Consider for example the (marginal) \emph{independence atoms} $X \perp Y$ defined (along the lines of \cite{GraVaa2013}) by the semantical clause: \begin{itemize} \item $T\models X \perp Y \iff \text{for every } s,s'\in T^- \text{ there is an } s''\in T^- \text{ s.t. } s''(X) = s(X) \text{ and } s''(Y) = s'(Y)$. \end{itemize} Then the following team \begin{center} $U$: \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{X \ Y} \\ \hline $1$ & $1$ \\ \hline $1$ & $2$ \\ \hline $2$ & $1$ \\ \hline $2$ & $2$ \\ \hline \end{tabular} \end{center} satisfies $X\neq 1 \lor X \perp Y$ (just consider the partition of $U$ into $\emptyset$ and $U$; $\emptyset\models X\neq 1$ and $U\models X \perp Y$). But $U\not \models X=1 \supset X \perp Y$. We now turn to the relationship between selective and counterfactual implication. We start by showing that their commutativity, that is, the equivalence between $\theta \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\chi \supset \psi)$ and $\chi \supset (\theta \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi)$ fails in causal teams in both directions. We show this by a couple of counter-examples. For all the teams involved, the graph will be $\{\{X,Y,Z\}, \{(X,Y),(Z,Y)\}\}$ (so, we do not draw it every time) and the only structural equation will be $Y:= X+Z$; it is not really important to specify the ranges of variables. 
Consider the following teams: \begin{center} $T$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline 2 & 3 & 5 \\ \hline \end{tabular} \hspace{20pt} $T_{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline \textbf{1} & 3 & \textbf{4} \\ \hline \end{tabular} \hspace{20pt} $(T_{X=1})^{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline 1 & 3 & 4 \\ \hline \end{tabular} $T^{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline \end{tabular} \hspace{20pt} $(T^{X=1})_{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline \end{tabular} \end{center} From the tables, it is immediate to see that $T\models X=1 \supset (X=1\hspace{2pt}\Box\hspace{-4pt}\rightarrow (Y=2\lor Y=3))$, but $T\not\models X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow (X=1\supset (Y=2\lor Y=3))$. A counterexample for the opposite direction is given by the following teams, with graph and equation as before: \begin{center} $S$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 1 & 2 & 3 \\ \hline 2 & 1 & 3 \\ \hline \end{tabular} \hspace{20pt} $S_{Z=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 2 & 1 & 3 \\ \hline \end{tabular} \hspace{20pt} $(S_{Z=1})^{Y=3}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 2 & 1 & 3 \\ \hline \end{tabular} $S^{Y=3}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 2 & 3 \\ \hline 2 & 1 & 3 \\ \hline \end{tabular} \hspace{20pt}$(S^{Y=3})_{Z=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X \ Z \ Y} \\ \hline 1 & 1 & 2 \\ \hline 2 & 1 & 3 \\ \hline \end{tabular} \end{center} which show that $S\models Z=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow (Y=3 \supset Y=3)$ but $S\not\models Y=3\supset (Z=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=3)$. The noncommutativity of $\hspace{2pt}\Box\hspace{-4pt}\rightarrow$ with $\supset$ is a phenomenon that, to the best of our knowledge, has no analogous in the usual notations of the causal inference literature; we return to this point in Section \ref{SECDEFPROB}. In our search for further rules linking $\hspace{2pt}\Box\hspace{-4pt}\rightarrow$ and $\supset$, we can now have a look at the literature on Stalnaker-Lewis counterfactuals, and check whether in some case $\supset$ may take the role that in Stalnaker-Lewis rules is taken by material implication. For example, it is known that 1) a Stalnaker-Lewis counterfactual $\theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi$ implies the corresponding material conditional $\theta\rightarrow\chi$, and 2) the converse holds, \emph{provided $\theta$ holds}. It might be interesting to inquire whether similar principles hold for our counterfactual and selective implication, respectively. The literature on causation seems to suggest a negative answer: looking for example at rule 2 of Pearl's \emph{do calculus} (\cite{Pea2000}, sec. 3.4) we see that, in some languages that also allow the discussion of probabilities, replacing an intervention with an observation, or viceversa, can be done only under some graph-theoretical (and not logical) assumptions. 
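To fix ideas, in our notation these two candidate principles would presumably take the following form (we state them only tentatively, keeping in mind that the antecedents of our counterfactuals are restricted to conjunctions of equalities):
\[
(1): \hspace{10pt} \SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi \models \SET X=\SET x \supset \chi
\hspace{25pt}
(2): \hspace{10pt} \SET X=\SET x,\ \SET X=\SET x \supset \chi \models \SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi.
\]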
Explicit counterexamples to 1) and 2) can be constructed in our languages, following the lines of \cite{Bri2012} (sect. 3.1 and 4), for some formulas in which the consequent $\chi$ is itself a counterfactual. The counterexample in \cite{Bri2012} is based on the famous ``execution line'' example (\cite{Pea2000}), but we can construct a much simpler one. Consider the following causal teams, with boolean ranges and invariant functions $f_Y(X):= X$ and $f_Z(X,Y):= X\land Y$: \begin{center} $T = T^{Y=0}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XA} \ \tikzmark{YA}Y\tikzmark{YA'} \ \tikzmark{ZA}Z} \\ \hline 0 & 0 & 0\\ \hline \end{tabular} \hspace{10pt}$(T^{Y=0})_{X=1}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XB} \ \tikzmark{YB}Y\tikzmark{YB'} \ \tikzmark{ZB}Z} \\ \hline 1 & 1 & 1\\ \hline \end{tabular} $T_{Y=0}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XC} \ \tikzmark{YC}Y\tikzmark{YC'} \ \tikzmark{ZC}Z} \\ \hline 0 & 0 & 0\\ \hline \end{tabular} \hspace{10pt}$(T_{Y=0})_{X=1}$ \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XD} \ \tikzmark{YD}Y\tikzmark{YD'} \ \tikzmark{ZD}Z} \\ \hline 1 & 0 & 0\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:XA}) [line width=0.2mm] to ([yshift=3pt]{pic cs:YA}); \draw [->] ([yshift=3pt]{pic cs:YA'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZA}); \draw ([yshift=7pt]{pic cs:XA}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZA}); \draw [->] ([yshift=3pt]{pic cs:XB}) [line width=0.2mm] to ([yshift=3pt]{pic cs:YB}); \draw [->] ([yshift=3pt]{pic cs:YB'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZB}); \draw ([yshift=7pt]{pic cs:XB}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZB}); \draw [->] ([yshift=3pt]{pic cs:YC'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZC}); \draw ([yshift=7pt]{pic cs:XC}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZC}); \draw [->] ([yshift=3pt]{pic cs:YD'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZD}); \draw ([yshift=7pt]{pic cs:XD}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZD}); \end{tikzpicture} \end{center} It is then immediate to see that $T\models Y=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow (X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Z\neq 1)$ but $T\not\models Y=0\supset (X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow Z\neq 1)$. The important element in this example (and similar ones) is that we have an intervention $do(Y=0)$ which does not modify the team component of the causal team, but modifies the graph -- which encodes counterfactual relations. Other simple examples could be constructed for consequents which contain probabilistic statements -- we will define a semantics for them in later sections. We could not find, instead, a counterexample with consequents that involve only selective implications, dependence atoms and connectives; but here is a counterexample using selective implications and \emph{contradictory negations} of dependence atoms, as defined in subsection \ref{SUBSEM}. 
Consider the following causal teams, with invariant function $f_Z(X,Y):= Y$: \begin{center} $S$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XE} \ \tikzmark{YE}Y\tikzmark{YE'} \ \tikzmark{ZE}Z} \\ \hline 0 & 0 & 0\\ \hline 1 & 1 & 1\\ \hline \end{tabular} $(S_{X=0})=(S_{X=0})^{X=0}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XF} \ \tikzmark{YF}Y\tikzmark{YF'} \ \tikzmark{ZF}Z} \\ \hline 0 & 0 & 0\\ \hline 0 & 1 & 1\\ \hline \end{tabular} \hspace{10pt} $S^{X=0} = (S^{X=0})^{X=0}$: \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|c|}{X\tikzmark{XG} \ \tikzmark{YG}Y\tikzmark{YG'} \ \tikzmark{ZG}Z} \\ \hline 0 & 0 & 0\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:YE'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZE}); \draw ([yshift=7pt]{pic cs:XE}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZE}); \draw [->] ([yshift=3pt]{pic cs:YF'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZF}); \draw [->] ([yshift=3pt]{pic cs:YG'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:ZG}); \draw ([yshift=7pt]{pic cs:XG}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:ZG}); \end{tikzpicture} \end{center} Then $S\models X=0 \hspace{2pt}\Box\hspace{-4pt}\rightarrow (X=0 \supset \not\dep{X}{Z})$ but $S\not\models X=0 \supset (X=0 \supset \not\dep{X}{Z})$. Of course, it is questionable whether we should allow this kind of arrow $X\rightarrow Z$ in our definition of causal teams. Let us now see how dependence atoms interact with the two kinds of implication. For what regards selective implication, the following rules are valid: \[ \frac{\dep{\SET{X}}{Y}}{\SET{X} = \SET{x} \supset \con{Y}} \hspace{35pt} \frac{\bigwedge_{\SET{x}\in Ran(\SET X)}\bigvee_{y\in Ran(Y)}(\SET{X}=\SET{x}\supset Y=y)}{\dep{\SET{X}}{Y}}. \] As a simple special case of the second rule, we have that $Y=y$ implies $\con{Y}$. An analogous rule holds for counterfactual implication: \[ \frac{\bigwedge_{\SET{x}\in Ran(\SET X)}\bigvee_{y\in Ran(Y)}(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y)}{\dep{\SET{X}}{Y}}. \] but the implication from $\dep{\SET{X}}{Y}$ to $\SET{X} = \SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow \con{Y}$ only holds on teams where the tuple $\SET{x}$ occurs. \subsection{Permutation and exportation/importation of antecedents} Consider the following laws of \emph{permutation, exportation} and \emph{importation} of antecedents: \[ (P): \psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\psi'\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi) \equiv \psi' \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\psi\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi) \] \[ (E): (\psi \land \psi')\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi \models \psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\psi'\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi) \] \[ (I): \psi \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\psi'\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi) \models (\psi \land \psi')\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi \] These laws famously fail for Stalnaker's (\cite{Sta1968}) and Lewis' (\cite{Lew1973}) counterfactuals; see \cite{Sid2010}, chap. 8; they are also claimed to fail, in general, for natural language counterfactuals. The purpose of this section is to show that under some restriction (that the variables mentioned in $\psi$ and, resp. 
$\psi'$ be distinct) these laws are valid for our counterfactuals. Under this restriction (which will be somewhat relaxed in the next section), exportation/importation amounts to the statement that an intervention over a set of variables can be split into successive interventions over disjoint subsets of variables; permutation corresponds to the assertion that two interventions of this kind can be performed in any order. \begin{rema} We assume in this subsection that causal teams are explicit. However, the results presented here hold for non-explicit causal teams as well, although the proofs in that case are more involved. \end{rema} \begin{lem} \label{LEMPA} Let $G$ be a finite acyclic graph, $\mathbf{X},Y,W$ vertices of $G$ with $W\in PA_Y$, $Y\notin \SET X$ and $Y$ reachable from $\SET X$ (that is, $d(\mathbf{X},Y)\geq 0$). Then $d(\mathbf{X},W)<d(\mathbf{X},Y)$. \end{lem} \begin{proof} $d(\mathbf{X},Y) = \sup\{\mathrm{length}(P) \mid P \text{ a directed path of } G_{-\SET X} \text{ from some } X\in \mathbf{X} \text{ to } Y\}$. But any directed path from $\mathbf{X}$ to $W$ is included in some path in this set. So $d(\mathbf{X},W)\leq d(\mathbf{X},Y)$. Assume first that at least one path from $\mathbf{X}$ to $W$ exists in $G_{-\SET X}$. Since $G$ is finite and acyclic, $d(\mathbf{X},W)=n\in \mathbb{N}$. Let $P$ be a path of $G_{-\SET X}$ from some $X\in \mathbf{X}$ to $W$ of length $n$. Then $P\cup\{(W,Y)\}$ is a path of $G_{-\SET X}$ from $X$ to $Y$ of length $n+1$. This holds for any $X\in \SET X$. Since $\SET X$ is finite, $d(\SET X,Y) = \max_{X\in \SET X}d(X,Y) > \max_{X\in \SET X}d(X,W) = d(\SET X,W)$. If instead no path from $\mathbf{X}$ to $W$ exists, then $d(\mathbf{X},W)=-1$, and the claim of the lemma is satisfied. \end{proof} In the next theorem exportation and importation of antecedents are expressed as rules. A more general version will be proved later on (Theorem \ref{FULLIMPEXP}). \begin{teo}\label{IMPEXP} Let $T$ be a causal team with $G(T)$ finite acyclic, $\mathbf{X},\mathbf{Y}\subseteq dom(T)$ such that $\mathbf{X} \cap \mathbf{Y}= \emptyset$, $\mathbf{x}\in Ran(\mathbf{X})$, and $\mathbf{y}\in Ran(\mathbf{Y})$. Then $T_{\mathbf{X}=\mathbf{x}\land \mathbf{Y}=\mathbf{y}} = (T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}}$. Therefore, the following rules \[ (IMP): \frac{\SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\SET Y=\SET y \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi)}{(\SET X=\SET x \land \SET Y=\SET y) \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi} \hspace{25pt} (EXP): \frac{(\SET X=\SET x \land \SET Y=\SET y) \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi}{\SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\SET Y=\SET y \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi)} \] are valid, under the above restrictions, over recursive causal teams. \end{teo} \begin{proof} Notice first that for any $s'\in (T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}$ and any $s''\in T_{\SET{X}=\SET{x}\land \SET{Y}=\SET{y}}$, $s'(\SET{X})=s''(\SET{X}) = \SET{x}$ and $s'(\SET{Y})=s''(\SET{Y}) = \SET{y}$. Secondly, the restriction to explicit causal teams implies that $\mathcal{F}_{T} \supseteq\mathcal{F}_{T_{{\SET{X}=\SET{x}}}} \supseteq \mathcal{F}_{(T_{{\SET{X}=\SET{x}}})_{\SET{Y}=\SET{y}}} = \mathcal{F}_{T_{{\SET{X}=\SET{x}}\land\SET{Y}=\SET{y}}}$. Fix $s\in T$.
Let $s'\in (T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}$ be the assignment obtained by applying to $\{s\}$ the intervention $do(\SET{X}=\SET{x})$ followed by $do(\SET{Y}=\SET{y})$, and $s''$ the assignment obtained by applying $do(\SET{X}=\SET{x}\land\SET{Y}=\SET{y})$ to $s$. We shall prove, by a simultaneous induction on $m$ and $n$, that $s'(Z)=s''(Z)$ for any variable $Z$ such that $d(\SET{X},Z)\leq m$ and $d(\SET{Y},Z)\leq n$. In the base case $Z\in \SET{X} \cup \SET{Y} $, this result is obvious. Suppose that the statement holds for $m$ and $n$, and let $Z\in dom(T) \setminus( \SET{X} \cup \SET{Y} )$ be a variable such that $d(\SET{X},Z) \leq m$ and $d(\SET{Y},Z) = n+1$ (the proof of the symmetric case is analogous). From the algorithm $do(\SET{Y}=\SET{y})$, it follows that $s'(Z) = \mathcal F_{T_{\SET{X}=\SET{x}}}(Z)(s'(PA_Z))$ (notice indeed that, thanks to acyclicity, the algorithm does not modify the $PA_Z$-columns after modifying the $Z$-column), which, by the observation above, is equal to $\mathcal F_{T}(Z)(s'(PA_Z))$; and from the algorithm $do(\SET{X}=\SET{x}\land \SET{Y}=\SET{y})$, it follows that $s''(Z) = \mathcal F_{T}(Z)(s''(PA_Z))$. By Lemma \ref{LEMPA} (which we can apply because $d(\SET Y, Z) = n+1$ implies $Z\notin \SET Y$), the variables in $PA_Z$ have strictly smaller distances from $\SET{Y}$ than $Z$ (and, obviously, at most the same distance from $\SET X$). Then, by the inductive hypothesis, $s'(PA_Z) = s''(PA_Z)$. So, applying $\mathcal F_{T}(Z)$, we obtain $s'(Z)=s''(Z)$. \end{proof} As a corollary, we obtain the following variant of the permutation rule: \begin{teo}\label{PERM} Let $T$ be a causal team with $G(T)$ finite acyclic, $\mathbf{X},\mathbf{Y}\in dom(T)$ such that $\mathbf{X} \cap \mathbf{Y}= \emptyset$, $\mathbf{x}\in ran(\mathbf{X})$, and $\mathbf{y}\in ran(\mathbf{Y})$. Then $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = (T_{\mathbf{Y}=\mathbf{y}})_{\mathbf{X}=\mathbf{x}}$. Therefore, the following rule \[ (PERM): \frac{\SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\SET Y=\SET y \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi)}{\SET Y=\SET y \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\SET X=\SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi)} \] is valid, under the above restrictions, over recursive causal teams. \end{teo} \begin{proof} From Theorem \ref{IMPEXP} we obtain the equalities $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = T_{\mathbf{X}=\mathbf{x}\land \mathbf{Y}=\mathbf{y}}$ and $(T_{\mathbf{Y}=\mathbf{y}})_{\mathbf{X}=\mathbf{x}} = T_{\mathbf{Y}=\mathbf{y}\land\mathbf{X}=\mathbf{x} }$. Since the order of variables is irrelevant in the definition of the $do$ algorithm, we also have $T_{\mathbf{X}=\mathbf{x}\land \mathbf{Y}=\mathbf{y}} = T_{\mathbf{Y}=\mathbf{y}\land \mathbf{X}=\mathbf{x}}$. Transitivity yields the desired result. \end{proof} \begin{comment} Notice first of all that the graphs generated by both procedures are identical, and that for any $s'\in (T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}$ and any $s''\in (T_{\SET{Y}=\SET{y}})_{\SET{X}=\SET{x}}$, $s'(\SET{X})=s''(\SET{X}) = \SET{x}$ and $s'(\SET{Y})=s''(\SET{Y}) = \SET{y}$. Secondly, we prove that $\mathcal{F}_{T_{{\SET{X}=\SET{x}}}} = \mathcal{F}_{(T_{{\SET{X}=\SET{x}}})_{\SET{Y}=\SET{y}}} = \mathcal{F}_{(T_{\SET{Y}=\SET{y}})_{\SET{X}=\SET{x}}} = \mathcal{F}_{T_{\SET{Y}=\SET{y}}}$. 
To this end, first notice that, for each $W\in dom(T)$, $dom(\mathcal{F}_{T_{\SET{X}=\SET{x}}}(W)) = dom(\mathcal{F}_{(T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}}(W)) = dom(\mathcal{F}_{(T_{\SET{Y}=\SET{y}})_{\SET{X}=\SET{x}}}(W)) = dom(\mathcal{F}_{T_{\SET{Y}=\SET{y}}}(W)) = ran(W)$. Then observe that $\mathcal{F}_{(T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}}(W) = g_W^{T_{\SET{X}=\SET{x}}}$ and $\mathcal{F}_{(T_{\SET{Y}=\SET{y}})_{\SET{X}=\SET{x}}}(W) = g_W^{T_{\SET{Y}=\SET{y}}}$. Lastly, applying twice lemma \ref{LEMG}, we obtain that $g_W^{T_{\SET{X}=\SET{x}}} = g_W^T = g_W^{T_{\SET{Y}=\SET{y}}}$. Let $s\in T$. Let $s'\in (T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}}$ be the assignment obtained by applying to $\{s\}$ the procedure $do(\SET{X}=\SET{x})$ followed by $do(\SET{Y}=\SET{y})$, and $s''$ the assignment obtained applying the two procedures to $s$ in inverse order. We want to prove, by a double induction on $m$ and $n$, that $s'(Z)=s''(Z)$ for any variable $Z$ such that $ed(\SET{X},Z)\leq m$ and $ed(\SET{Y},Z)\leq n$. Suppose that the statement holds for $m$ and $n$, and let $Z$ be a variable such that $ed(\SET{X},Z) \leq m$ and $ed(\SET{Y},Z) = n+1$ (the other case is analogous). Then, by the algorithm $do(\SET{Y}=\SET{y})$, we have $s'(Z) = g^{T_{\SET{X}=\SET{x}}}_Z(s'(\overline{PA_Z}))$ (notice that, thanks to acyclicity, the algorithm does not modify the $\overline{PA_Z}$-columns after modifying the $Z$-column), and by the algorithm $do(\SET{X}=\SET{x})$, we have $s''(Z) = g^{T_{\SET{Y}=\SET{y}}}_Z(s''(\overline{PA_Z}))$. By Lemma \ref{LEMPA}, the variables in $\overline{PA_Z}$ have strictly smaller distances from $\SET{X}$ and $\SET{Y}$ than $Z$. Then, by the inductive hypothesis, $s'(\overline{PA_Z}) = s''(\overline{PA_Z})$. Then $s'(\overline{PA_Z})$ is in the domain of $f_Z = \mathcal{F}_{T_{\SET{Y}=\SET{y}}}(Z) = g_Z^T = \mathcal{F}_{T_{\SET{X}=\SET{x}}}(Z)$. So $s'(Z)=f_Z(s'(\overline{PA_Z}))=f_Z(s''(\overline{PA_Z}))=s''(Z)$, as desired. \end{comment} \subsection{Generalized import-export and permutation rules} The purpose of this section is to show that the assumptions of the permutation and the import-export rules can be relaxed in the following way: if the interventions involved are over two sets of variables $\mathbf{X}$ and $\mathbf{Y}$ with nonempty intersection, we have only to require that the two interventions act ``in the same way'' over the common set of variables $\mathbf{X}\cap\mathbf{Y}$. \begin{lemma}\label{REP} For any $\SET{X}\subseteq dom(T)$, $(T_{\SET{X}=\SET{x}})_{\SET{X}=\SET{x}} = T_{\SET{X}=\SET{x}}$. \end{lemma} \begin{proof} By a simple induction on the steps of the ``for'' cycle in the $do$ algorithm. \end{proof} \begin{lem}\label{LEMSETDIFF} Let $T$ be a causal team with $G(T)$ finite acyclic, $\mathbf{X},\mathbf{Y}\in dom(T)$, $\mathbf{x}\in Ran(\mathbf{X})$, and $\mathbf{y}\in Ran(\mathbf{Y})$. Suppose also that, if $X_i=Y_j\in \mathbf{X}\cap \mathbf{Y}$, then $x_i = y_j$. Let $\mathbf{Y'} = \mathbf{Y} \setminus \SET{X}$, and $\mathbf{y'}=\{y_j|Y_j\in\mathbf{Y'}\}$. Then $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = (T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y'}=\mathbf{y'}}$. \end{lem} \begin{proof} Let $\SET{X'}=\SET{X}\setminus \SET{Y}$, and $\SET{x'}=\{x_j|X_j\in \SET{X'}\}$. Let $\mathbf{Z} = \mathbf{X} \cap \mathbf{Y}$ and $\mathbf{z}= \SET x \cap \SET y$. 
Applying twice Theorem \ref{IMPEXP} (import-export), we get $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = (((T_{\mathbf{X'}=\mathbf{x'}})_{\mathbf{Z}=\mathbf{z}})_{\mathbf{Z} = \mathbf{z}})_{\mathbf{Y'}=\mathbf{y'}}$. But this last term is equal to $((T_{\mathbf{X'}=\mathbf{x'}})_{\mathbf{Z}=\mathbf{z}})_{\mathbf{Y'}=\mathbf{y'}}$, by Lemma \ref{REP}. Given that $\mathbf{Z}$ and $\mathbf{X'}$ are disjoint, we can apply again Theorem \ref{IMPEXP} to obtain $((T_{\mathbf{X'}=\mathbf{x'}})_{\mathbf{Z}=\mathbf{z}})_{\mathbf{Y'}=\mathbf{y'}} = (T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y'}=\mathbf{y'}}$. The chain of equalities yields the desired result. \end{proof} Next we show that the import-export rule works under the weaker hypothesis that the two interventions act in the same way on the shared set of variables. \begin{teo}\label{FULLIMPEXP} Let $T$ be a causal team with $G(T)$ finite acyclic, $\mathbf{X},\mathbf{Y}\in dom(T)$. $\mathbf{x}\in ran(\mathbf{X})$, and $\mathbf{y}\in ran(\mathbf{Y})$. Suppose also that, if $X_i=Y_j\in \mathbf{X}\cap \mathbf{Y}$, then $x_i = y_j$. Then $T_{\mathbf{X}=\mathbf{x}\land \mathbf{Y}=\mathbf{y}} = (T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}}$. \end{teo} \begin{proof} Apply Lemma \ref{LEMSETDIFF} to see that $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = (T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y'}=\mathbf{y'}}$, where $\mathbf{Y'}=\mathbf{Y}\setminus\mathbf{X}$. Given that $\mathbf{X}$ and $\mathbf{Y'}$ are disjoint sets of variables, we can use the weak import-export rule (Theorem \ref{IMPEXP}) to obtain $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y'}=\mathbf{y'}} = T_{\mathbf{X}=\mathbf{x} \land \mathbf{Y'}=\mathbf{y'}}$. But now the formula $\mathbf{X}=\mathbf{x} \land \mathbf{Y}=\mathbf{y}$ differs from $\mathbf{X}=\mathbf{x} \land \mathbf{Y'}=\mathbf{y'}$ only in that it contains some repetitions of conjuncts that are already present in $\mathbf{X}=\mathbf{x} \land \mathbf{Y'}=\mathbf{y'}$. Therefore, the two formulas define the same interventions (as can be seen by checking the $do$ algorithm); this immediately implies that $T_{\mathbf{X}=\mathbf{x} \land \mathbf{Y}=\mathbf{y}} = T_{\mathbf{X}=\mathbf{x} \land \mathbf{Y'}=\mathbf{y'}}$. The whole chain of equalities yields the result. \end{proof} Again, under these hypotheses the order of the interventions is irrelevant. \begin{teo}\label{FULLPERM} Let $T$ be a causal team with $G(T)$ finite acyclic, $\mathbf{X},\mathbf{Y}\in dom(T)$, $\mathbf{x}\in ran(\mathbf{X})$, and $\mathbf{y}\in ran(\mathbf{Y})$. Suppose also that, if $X_i=Y_j\in \mathbf{X}\cap \mathbf{Y}$, then $x_i = y_j$. Then $(T_{\mathbf{X}=\mathbf{x}})_{\mathbf{Y}=\mathbf{y}} = (T_{\mathbf{Y}=\mathbf{y}})_{\SET{X}=\mathbf{x}}$. \end{teo} \begin{proof} This works like the proof of Theorem \ref{PERM}, using the relaxed import-export rule (Theorem \ref{FULLIMPEXP}) instead of Theorem \ref{IMPEXP}. \end{proof} \subsection{Composition and effectiveness rules}\label{SUBSCOMPL} Galles and Pearl (\cite{GalPea1998}) present a complete and sound system for interventionist counterfactuals over recursive causal models (to be more precise, the result holds only for causal models that are parametric and under the restriction that the range of each variable is \emph{finite}). This system is based on two main rules called \emph{composition} and \emph{effectiveness}. 
In informal presentations (\cite{Pea2000}), Pearl claims that these two axioms exhaust the system; a closer look at the original paper (and at a subsequent clarifying paper by Halpern, \cite{Hal2000}) shows that two further axioms (\emph{definiteness} and \emph{uniqueness}), plus ``the rules of logic'' are part of the system. Let us first point out that Pearl uses a very different syntax from ours which originates in a tradition of representation of counterfactuals common in the statistical literature (the ``potential outcome'' approach, \cite{Ney1923},\cite{Rub1974}). He uses \[ Y_x(u) =y \] to mean that, in an actual state in which the (set of) exogenous variables $U$ take values $u$, if we fixed the value of $X$ to $x$ by an intervention, then $Y$ would take value $y$. Given that in a recursive structural equation model the endogenous variables are functionally determined by the exogenous ones, we can represent such an assignment $u$ of values to the exogenous variables as a unique assignment $s_u$. Therefore, we might represent Pearl's counterfactual in a notation closer to the logical tradition as: \[ s_u \models X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y = y. \] Assuming that, unless otherwise specified, variables and values are implicitly universally quantified, here is now the list of properties that Galles and Pearl ascribe to causal models with unique solutions (which include recursive models): \[ (Composition): W_x(u) = w \Rightarrow Y_{xw}(u) = Y_x(u) \] \[ (Effectiveness): X_{xw}(u)=x \] \[ (Definiteness): \text{ there is an $x$ such that } X_{\SET{y}}(u)=x \] \[ (Uniqueness): X_{\SET{y}}(u)=x \land X_{\SET{y}}(u)=x' \Rightarrow x=x' \] to which, according to Halpern (\cite{Hal2000}), we should add all classical propositional tautologies plus Modus Ponens. A further axiom scheme (call it REC) forces the system to be recursive (i.e., the graph to be acyclic). Translating the REC axiom into our language is somewhat problematic; we consider this issue towards the end of the present subsection. Do these axioms hold for our causal teams? First of all, we should decide how these axioms should be interpreted in our framework. One possible way to go is to conceive these counterfactuals as encoding properties of a team instead of an actual state; that is, to have a team $T$ play the role that was previously played by $u$ (or by $s_u$). It is then straightforward to translate effectiveness as the statement that each causal team $T$ satisfies: \[ (EFF): \models (X=x \land \SET{W}=\SET{w}) \hspace{2pt}\Box\hspace{-4pt}\rightarrow X=x. \] Translating composition is a more complex task. First of all, Galles\&Pearl have an operator $\Rightarrow$ which is most likely intended as a material implication (which, by the deduction theorem, reflects logical implication). We do not have in $\mathcal{CD}$ any such operator (we know that our selective implication $\supset$ obeys a deduction theorem, but only for $\mathcal{C}O$ antecedents). For this reason, we will focus here on the languages $\mathcal{C}^{neg}$ and $\mathcal{C}O^{neg}$, which are flat and closed under dual negation. We can therefore add modus ponens in the following form: \[ MP_\lor: \frac{\theta \hspace{15pt} \neg\theta\lor\chi}{\chi}. \] However, in order to facilitate the transition to more general languages like $\mathcal{CD}$, which are not closed under negation, we may also adopt the strategy of rewriting all the implications corresponding to Pearl's axioms as inference rules. 
For example, it will be reasonable to move from the formal language to the semantic metalanguage, and think of composition as a rule of inference rather than an axiom. The composition rule poses a further problem: the formula $Y_{xw}(u) = Y_x(u)$ is not a counterfactual. We may think instead of the equivalent expression (for all $y\in \mathcal R(Y)$): $Y_{xw}(u) = y \iff Y_x(u)=y$. It then becomes natural to decompose ``composition'' into \emph{two} rules of inference: \[ (CE): \frac{\SET{X}=\SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow W=w \hspace{25pt} (\SET{X}=\SET{x} \land W=w)\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y}{\SET{X}=\SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y} \] \[ (CI): \frac{\SET{X}=\SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow W=w \hspace{25pt} \SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y}{(\SET{X}=\SET{x} \land W=w) \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y} \] Definiteness (``Decidability'') contains an existential quantification. \emph{If we assume the ranges of variables to be finite}, then we can formulate this axiom in a way that works relativized to teams that have a specific finite range for $X$: \[ (DEC): \lor_{x\in Ran(X)} (\SET{Y}=\SET{y}\hspace{2pt}\Box\hspace{-4pt}\rightarrow X=x) \] What this axiom expresses about a team is that the team can be decomposed into subteams, each of which satisfies, after the intervention $do(\SET Y = \SET y)$, one of the $X=x$ atoms; it is true of any parametric causal team (in the nonparametric case, the disjunction should not be taken over $Ran(X)$, but over $Ran(X)\cup Term_{G(T)}$). The uniqueness axiom mentions formulas of the form $x=x'$, which are absent from our language. But we can think of a reformulation of these axioms due to Halpern (which he simply calls the ``equality'' axiom scheme), which rephrases it as a scheme of rules of inference (one rule for each variable $X$, set of variables $\SET{Y}$, and pair of values $x,x'\in Ran(X)$ with $x\neq x'$): \[ (UNI): \frac{\SET{Y}=\SET{y}\hspace{2pt}\Box\hspace{-4pt}\rightarrow X=x}{\SET{Y}=\SET{y}\hspace{2pt}\Box\hspace{-4pt}\rightarrow X\neq x'} \] The import of this axiom seems much weaker than in the Galles-Pearl case, given that $X=x$ and $X\neq x'$ are global statements over many assignments. Our translations of the axioms and rules all fall within the language $\mathcal{C}^{neg}$. By its flatness, it immediately follows that the axioms EFF and the appropriate instances of DEC also hold for our causal teams. \begin{teo} The rules CE, CI and UNI are sound for parametric acyclic causal teams. \end{teo} \begin{proof} CE) Assume $T$ satisfies the two assumptions of the rule. Since the consequent $Y=y$ of the counterfactuals involved is a formula that does not contain counterfactuals, in order to prove that $T$ satisfies the conclusion of the rule, it is sufficient to show that $(T_{\SET{X}=\SET{x}})^- = (T_{\SET{X}=\SET{x} \land W = w})^-$. In case $W\in \SET{X}$, the rule holds for trivial reasons. Otherwise, a previous theorem (\ref{IMPEXP}) tells us that $T_{\SET{X}=\SET{x} \land W = w} = (T_{\SET{X}=\SET{x}})_{W = w}$. The first assumption of CE tells us that $T_{\SET{X}=\SET{x}} \models W=w$. But then, obviously, applying $do(W=w)$ to $T_{\SET{X}=\SET{x}}$ leaves its underlying team unvaried. CI) An analogous argument. UNI) Immediate. \end{proof} Finally, notice that, by the flatness of $\mathcal{C}^{neg}$, all classical validities that can be formulated in $\mathcal{C}^{neg}$ are sound for acyclic teams. 
(But remember that some classical semantic principles, like SEM, fail). Halpern (\cite{Hal2000}) considers a language $L_{uniq}$ that allows boolean formulas as consequents, and boolean combinations of counterfactuals, but does not allow embedded counterfactuals. This language can be identified with an appropriate fragment of $\mathcal{C}^{neg}$, call it $\mathcal{C}_u$. What Halpern shows is that the axioms described above form a sound and complete system for $L_{uniq}$ over recursive (i.e., acyclic) causal models. In particular, they axiomatize the set of valid formulas of the Galles-Pearl language (which is restricted to conjunctions of counterfactuals of the form $\SET{X}=\SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y$), although derivations may use formulas that are not part of the Galles-Pearl language. As we have seen, these axioms and rules are sound also for $\mathcal{C}^{neg}$, over recursive causal teams. But unlike $\mathcal{C}_u$, the language $\mathcal{C}^{neg}$ also allows counterfactual conditionals to occur in the consequents. Thus it may be that some more axioms are needed in order to obtain a complete system for $\mathcal{C}^{neg}$ over recursive causal teams. Our strategy for the rest of this section is to show that any counterfactual in $\mathcal{C}^{neg}$ is equivalent to a counterfactual with a ``simple'' consequent (i.e., a consequent which does not contain embedded counterfactuals, or even atomic formulas of the form $Y\neq y$), and to isolate the rules that allow this transformation. Our first observation is that Halpern \cite{Hal2000} proves that recursive causal models (or, more generally, causal models whose equations have unique solutions) satisfy some rules that allow removing connectives from consequents. In our language, these rules may be stated as follows: \[ (OR-OUT): \frac{\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi\lor \psi'}{(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi)\lor(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi')} \] \[ (AND-OUT): \frac{\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi\land \psi'}{(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi)\land(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi')} \] \[ (NEG-OUT): \frac{\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \neg\psi}{\neg(\SET{X}=\SET{x}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi)} \] Halpern also shows the soundness of their inverses (call them OR-IN, AND-IN, NEG-IN). We check now that the same holds in our causal teams. \begin{lemma} \label{INTSPLIT} Suppose a causal team of the form $T_{\SET X = \SET x}$ has causal subteams $U',V'$ such that $(U')^-\cup (V')^- = (T_{\SET X = \SET x})^-$. Then $T$ has causal subteams $U,V$ such that $U_{\SET X = \SET x} = U'$ and $V_{\SET X = \SET x} = V'$. \end{lemma} \begin{proof} Let $T,U',V'$ be as above. For each $t\in (T_{\SET X = \SET x})^-$, define the set of assignments $S_t = \{s\in T^-| \{s\}_{\SET X = \SET x} = \{t\}\}$. Define then $U^-:= \bigcup_{t\in U'} S_t$ and $V^-:= \bigcup_{t\in V'} S_t$. Define $G(U), \mathcal F_U$, etc. in the obvious way. Let us show that $U,V$ behave as wanted. Let $t\in (U')^-$. Pick an $s \in S_t \subseteq U^-$. Then $\{s\}_{\SET X = \SET x} = \{t\}$; so $t\in (U_{\SET X = \SET x})^-$. This proves $(U')^-\subseteq(U_{\SET X = \SET x})^-$. Let $t\notin (U')^-$. Suppose for the sake of contradiction that there is $s\in U$ such that $\{s\}_{\SET X = \SET x}= \{t\}$.
But then there is a $t'\in U'$ such that $s\in S_{t'}\subseteq U^-$. But this implies that $\{t\} = \{s\}_{\SET X = \SET x} = \{t'\}$. Therefore $t \in (U')^-$: contradiction. \end{proof} \begin{teo} The rules OR-OUT, AND-OUT, NEG-OUT, OR-IN, AND-IN, NEG-IN are sound for $\mathcal{C}^{neg}$ on parametric causal teams with unique solutions; except for NEG-IN/OUT, this also holds in $\mathcal{CD}$. \end{teo} \begin{proof} The AND and NOT cases are straightforward. The soundness of OR-OUT/OR-IN follows immediately from Lemma \ref{INTSPLIT}. \end{proof} How should we treat counterfactuals occurring in the consequent? The importation rule presented in previous sections (Theorem \ref{FULLIMPEXP}) can eliminate them only under some syntactic restrictions. We need a different inference rule that may work without restrictions. We prove here that, if two consecutive interventions affect some common set of variables, then the action performed by the first intervention on these common variables is completely overwritten by the second intervention. \begin{lemma} \label{REWRITE} Let $T$ be a recursive causal team, $\SET{X}\subseteq dom(T)$, and $\SET{x},\SET{x'}\in Ran(\SET X)$. Then $(T_{\SET{X} = \SET{x}})_{\SET{X} = \SET{x'}} = T_{\SET{X} = \SET{x'}}$. \end{lemma} \begin{proof} Since all the interventions act on the same set of variables $\SET{X}$, they give rise to the same graph. A straightforward induction on $d(\SET{X},Y)$ can be used to show that, for any $s\in T$, if $\{t\}= (\{s\}_{\SET{X} = \SET{x}})_{\SET{X} = \SET{x'}}$ and $\{t'\} = \{s\}_{\SET{X} = \SET{x'}}$, then $t(Y) = t'(Y)$ for any $Y\in dom(T)$. \end{proof} \begin{teo} Let $T$ be a recursive causal team, $\SET{X},\SET{Y}\subseteq dom(T)$, $\SET{x}\in Ran(\SET{X})$, $\SET{y}\in Ran(\SET{Y})$, $\SET{X'} = \SET{X}\setminus \SET{Y}$ and $\SET{x'} = \SET{x}\setminus \SET{y}$. Then $(T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}} = (T_{\SET{X'}=\SET{x'}})_{\SET{Y}=\SET{y}}$. \end{teo} \begin{proof} Let $\SET{Y'} = \SET{Y}\setminus \SET{X}$ and $\SET{y'} = \SET{y}\setminus \SET{x}$. Let $\SET{Z} = \SET{X}\cap \SET{Y}$. Suppose $\SET{Z}=(X_{i_1},\dots, X_{i_k}) = (Y_{j_1},\dots, Y_{j_k})$. Write $\SET{z_1}$ for $(x_{i_1},\dots, x_{i_k})$ (the restriction of $\SET x$ to values for $\SET Z$), and $\SET{z_2}$ for $(y_{j_1},\dots, y_{j_k})$ (the restriction of $\SET y$ to values for $\SET Z$). Then, by Theorem \ref{IMPEXP}, $(T_{\SET{X}=\SET{x}})_{\SET{Y}=\SET{y}} = (((T_{\SET{X'}=\SET{x'}})_{\SET{Z} = \SET{z_1}})_{\SET{Z} = \SET{z_2}})_{\SET{Y'}=\SET{y'}}$. By Lemma \ref{REWRITE}, this is equal to $((T_{\SET{X'}=\SET{x'}})_{\SET{Z} = \SET{z_2}})_{\SET{Y'}=\SET{y'}}$. Applying Theorem \ref{IMPEXP} again, we obtain $(T_{\SET{X'}=\SET{x'}})_{\SET{Y}=\SET{y}}$. \end{proof} This result, combined with Theorem \ref{IMPEXP}, can be immediately turned into a pair of inference rules: \[ (CF-OUT): \frac{\SET{X}=\SET{x} \hspace{2pt}\Box\hspace{-4pt}\rightarrow (\SET{Y}=\SET{y} \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi)}{(\SET{X'}=\SET{x'} \land \SET{Y}=\SET{y}) \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi} \] where $\SET{X'} = \SET{X}\setminus \SET{Y}$ and $\SET{x'} = \SET{x}\setminus \SET{y}$. Its inverse, CF-IN, works for $\SET{X'}\cap\SET{Y}=\emptyset$ and $\SET{X}\supseteq\SET{X'}$. This pair of rules corresponds essentially to the axiom CM11 proposed by Briggs (\cite{Bri2012}, sect. 4).
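As a quick illustration of how CF-OUT implements this ``overwriting'' (the formula is a toy instance of our own, not one of the examples discussed above): take $\SET X = \{X\}$ with value $1$ and $\SET Y = \{X,Z\}$ with values $(2,0)$; then $\SET{X'} = \SET X\setminus\SET Y = \emptyset$, and CF-OUT licenses the step
\[
\frac{X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow \big((X=2 \land Z=0) \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi\big)}{(X=2 \land Z=0) \hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi}
\]
in which the outer setting $X:=1$ is simply discarded, being overwritten by the inner intervention on $X$.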
Finally, we return to the axiom REC in Halpern (\cite{Hal2000}), which characterizes recursive systems. Halpern shows how to represent, in the potential outcome formalism, the notion ``$X$ affects $Y$ under some intervention'' (we believe this notion is the same as Woodward's notion of \emph{contributing cause}). We were not able to express this notion in $\mathcal{C}^{neg}$. Instead we can express it in an extended language $\mathcal{C}^{neg}(\sqcup)$ where we allow positive occurrences of the so-called \emph{intuitionistic disjunction} $\sqcup$, which follows the semantical clause: \[ T\models \psi \sqcup \psi' \iff T\models \psi \text{ or } T\models \psi' \] With this additional connective, ``$X$ affects $Y$'' can be written as follows; write $ND_X$ for the set of nondescendants of $X$, and $\mathcal W:= \{\SET Z\subset dom(T)\setminus \{X,Y\} | ND_X \subseteq \SET Z\}$: \[ CC(X,Y): \bigsqcup \big\{\SET Z = \SET z \hspace{2pt}\Box\hspace{-4pt}\rightarrow [(X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y) \land (X=x' \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y')]\big\} \] where the disjunction is taken over all $\SET Z\in \mathcal W, \SET z\in Ran(\SET Z),x\neq x'\in Ran(X),y\neq y'\in Ran(Y)$. Then in $\mathcal{C}^{neg}(\sqcup)$ we can write: \[ (REC_n): \frac{CC(X_1,X_2) \hspace{10pt} \dots \hspace{10pt} CC(X_{n-1}, X_n)}{\neg CC(X_n,X_1)} \] and take REC to be the set of all the REC$_n$ rules, for $n\in \mathbb{N}$, $n\geq 2$. Given a causal team $T$, let $AX_\mathcal{C}[T]$\footnote{The extra parameter $T$ is there because we need, as was done in \cite{Hal2000}, to restrict the axioms to teams which share the same ``signature'' as $T$, that is, they have the same domain of variables and the same ranges for each of the variables.} be the axiom system consisting of the axiom schemes EFF, DEC, UNI and the rules CE, CI, OR/AND/NEG/CF-OUT, OR/AND/NEG/CF-IN, MP$_\lor$ and REC. Now $AX_\mathcal{C}[T]$ is a set of valid $\mathcal{C}^{neg}(\sqcup)$ formulas and sound rules for $\mathcal{C}^{neg}(\sqcup)$, but we can show it to be complete for the restricted set of formulas $\mathcal{C}^{neg}$:\footnote{The approach which consists of defining a formal system over a language more general than the set of target formulas is not unheard of; it is for example the approach taken in \cite{GalPea1998}.} \begin{teo} $AX_\mathcal{C}[T]$ is a sound and complete axiom system for the language $\mathcal{C}^{neg}$ on recursive causal teams of domain $dom(T)$ and range function $\mathcal R_T$. \end{teo} \begin{proof} Soundness in $\mathcal{C}^{neg}$ was shown above; it is not difficult to show that soundness extends to $\mathcal{C}^{neg}(\sqcup)$. Let $\Gamma\subseteq \mathcal{C}^{neg}$, $\varphi\in\mathcal{C}^{neg}$ be such that $\varphi$ holds in any recursive causal team on which $\Gamma$ holds. Notice that any $\psi\in \Gamma$ can be transformed into a $\psi'\in \mathcal{C}_u$ by repeated applications of OR/AND/NEG/CF-OUT; call $\Gamma'$ the set of the $\psi'$. By the soundness and invertibility of the rules, $\Gamma$ and $\Gamma'$ hold in the same causal teams. Similarly, $\varphi$ can be transformed into a $\varphi'\in \mathcal{C}_u$ such that $\varphi$ and $\varphi'$ hold in the same causal teams. Therefore, in order to prove completeness, we only have to show that, whenever $\Gamma'\vdash \varphi'$ according to the (translation of the) axiom system of Halpern, we can construct a derivation $\Gamma \vdash_{AX_\mathcal{C}[T]}\varphi$. For suppose we have a derivation $\Gamma'\vdash \varphi'$ in Halpern's axiom system.
Then we can compose it with a derivation of $\Gamma'$ from $\Gamma$ (using the OUT rules) and a derivation of $\varphi$ from $\varphi'$ (using the IN rules) to obtain a derivation of $\varphi$ from $\Gamma$ within $AX_\mathcal{C}[T]$ (after Halpern's rules and axioms are replaced with our translations). \end{proof} The logic $\mathcal{C}O^{neg}$ is, again, flat and closed under dual negation. The obvious way to obtain a complete axiomatization for $\mathcal{C}O^{neg}$ is then to add to $AX_\mathcal{C}[T]$ the conversion rule \[ (SEL_E): \frac{\theta \supset \chi}{\neg \theta\lor\chi} \] and its inverse $SEL_I$, which allow us to reduce the problem to that of completeness of $\mathcal{C}^{neg}$. Notice that SEL$_E$ is valid also for non-flat logics, of course under the restriction that $\theta$ be flat (and its negation be present in the language); the validity of SEL$_I$, instead, requires downward closure of the logic. An alternative approach to obtain a complete axiomatization of $\mathcal{C}O^{neg}$ is to add a rule for the extraction of selective implications from counterfactual consequents: \[ (SEL-OUT): \frac{\SET X = \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow(\psi \supset \chi)}{(\SET X = \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow\psi) \supset (\SET X = \SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi)} \] and its inverse SEL-IN. These rules are sound for $\mathcal{C}O^{neg}$ and are both likely to hold (for the appropriate instances) also in languages that are not downward closed. \section{The nonparametric case: falsifiability and admissibility} \label{NONPARAM} We have allowed, in our causal teams, the possibility of having incomplete information concerning the invariant functions that relate the variables, up to the extreme case where no information at all is present (except for the set of arguments of the function, and the range of its allowed values). As a consequence, it is in general impossible to fill the counterfactual causal teams generated by interventions with proper values; if we do not have sufficient information to fill an entry with a value, we fill it instead with a formal term which describes the functional dependence of this entry on the values of other variables. But how should we evaluate counterfactual statements which involve variables whose columns cannot be filled with proper values? We think it should not be possible to ascribe truth values to such statements by the usual clauses. For example, we are not entitled to assert that $Y=3$ in a team whose non-formal entries for $Y$ are all equal to $3$. Yet in some cases we might be able to observe the falsity of such statements, and therefore to state their contradictory negation. Let us write $\downarrow s(X)$ to signify that $s(X)$ is a proper value and not a formal term. Let $T$ be a causal team, possibly having some formal entries. We read $T\models^f \psi$ as ``$\psi$ is falsifiable in $T$'', but notice that this is not falsifiability in the dual sense; rather, it is a partial notion of contradictory negation.
We propose the following semantical clauses: \begin{itemize} \item $T\models^f X=x \text{ if there is } s\in T^- \text{ such that } \downarrow s(X) \text{ and } s(X) \neq x$ \item $T\models ^f X\neq x \text{ if there is } s\in T^- \text{ such that } \downarrow s(X) \text{ and } s(X) = x$ \item $T\models ^f \dep{\SET X}{Y} \text{ if there are } s,s'\in T^- \text{ such that } \downarrow s(Y),\downarrow s'(Y), s(\SET X) = s'(\SET X) \text{ and } s(Y) \neq s'(Y)$ (notice that the $s(\SET X)$ need not all be proper values) \item $T\models^f \psi\land \chi$ if $T\models^f \psi$ or $T\models^f \chi$ \item $T\models^f \psi\lor \chi$ if for all subteams $T_1,T_2$ of $T$ with $T_1^-\cup T_2^-=T^-$, we have $T_1 \models^f \psi$ or $T_2\models^f \chi$. \end{itemize} (Notice that the clause for disjunction involves universal quantification over teams). Can we extend these clauses also to selective and counterfactual implication? Let us consider as an example the team \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{$\hspace{-5pt}X \ \rightarrow Y$} \\ \hline 2 & 1 \\ \hline 1 & $\hat f_Y(1)$ \\ \hline \end{tabular} \end{center} It seems unreasonable to assert that this team falsifies the formula $Y=1 \supset X=2$, because, as long as we do not know the function $f_Y$, we cannot know whether $\hat f_Y(1)$ is meant to denote $1$ or some other number; therefore, we do not know whether the second assignment is compatible or not with our selection (if it were, then the formula would be falsified, otherwise it would not be). We now give a clause that takes into account this kind of problem. First, let $\psi$ be a classical formula; let $\SET{V}$ be the set of variables occurring in $\psi$; define $T^\psi_*:=T^\psi \cup \{s\in T^- | \not\downarrow s(V) \text{ for some } V\in\SET{V}\}$. Then, for $\chi$ \emph{downward closed} formula (as we know formulas of $\mathcal{CD}$ to be), a somewhat reasonable clause for selective implication seems to be: \begin{itemize} \item $T\models^f \psi \supset \chi$ if $T^\psi_* \models^f \chi$ \end{itemize} This clause is not completely satisfying, because there might be variables in $\SET V$ that are irrelevant for determining the truth value of $\psi$. For counterfactuals, the natural clause is more immediate: \begin{itemize} \item $T\models^f \SET X = \SET x \hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$ if $T_{\SET X = \SET x} \models^f \chi$. \end{itemize} Thirdly, notice that, even though we cannot most often decide of the truth of a statement when columns are incomplete, we might however be interested in asserting that some proposition is \emph{admissible} in the given team, that is, consistent with the data we possess. The following seem to be reasonable clauses for the atomic formulas: \begin{itemize} \item $T\models^a X=x \text{ if for all } s\in T^- \text{ such that } \downarrow s(X), s(X) = x$ \item $T\models^a X\neq x \text{ if for all } s\in T^- \text{ such that } \downarrow s(X), s(X) \neq x$ \item $T\models^a \dep{\SET X}{Y} \text{ if for all } s,s'\in T^- \text{ such that } \downarrow s(Y) \downarrow s'(Y), s(\SET X) = s'(\SET X)$, we have $s(Y) =s'(Y)$. \end{itemize} Finding general clauses for composite formulas appears to be a difficult task; for simplicity, we define this notion only for classical formulas in disjunctive normal form; we also require, in each conjunctive clause, that atomic formulas $P^i_j$ contain distinct variables. 
\begin{itemize} \item $T\models^a \bigvee_{i=1..m} \bigwedge_{j=1..n(i)} P^i_j$ ($P^i_j$ being of the form $X^i_j = x^i_j$ or $X^i_j \neq x^i_j$) if there are subteams $T_i$ of $T$, for $i=1..m$, such that \begin{enumerate} \item $T_i \models^a P^i_j$, for all $j$. \item for each $j,j'=1..n(i)$, if $j\neq j'$, $P^i_j$ is $X^i_j = a$ and $P^i_{j'}$ is $X^i_{j'} = b$ (with $a\neq b$), then for all $s\in T_i$, $s(X^i_j) \neq s(X^i_{j'})$. \item for each $j,j'=1..n(i)$, if $j\neq j'$, $P^i_j$ is $X^i_j = a$ and $P^i_{j'}$ is $X^i_{j'} \neq a$, then for all $s\in T_i$, $s(X^i_j) \neq s(X^i_{j'})$. \end{enumerate} \end{itemize} The clauses 2. and 3. above refer to formal inequality between terms. To have an idea of the intuition behind clause 2., the reader may think, for example, of the problem of checking the admissibility of $X=1 \land Y=2$; imagine that there is a row in which both the $X$-column and the $Y$-columm contain the formal term $f(3, g(2))$; then, surely, the formula is not admissible (for $X=1 \land Y=2$ to hold in the team, it is necessary that the $X$ and $Y$-column differ on each row). An intuition for clause 3. can be similarly obtained by thinking of checking the admissibility of $X=1\land Y\neq 1$; again, for admissibility, we need the term $f(3, g(2))$ not to occur both in the $X$ and $Y$ column. \section{Probabilistic semantics} \label{PROBSEC} \subsection{A probabilistic language} In Woodward's approach, probabilistic notions of causality are considered next to the deterministic ones; for example, a variable $X$ is said to be a \emph{total cause} (in the probabilistic sense) of a different variable $Y$ if there is an intervention $do(X=x)$ which changes the probability of $Y = y$ for some value of $y$. In Pearl (\cite{Pea2000}), and Spirtes, Glymour and Scheines (\cite{SpiGlySch1993}), one finds a representation of causal and counterfactual reasoning in a mixed framework which combines a deterministic component (functional equations) with a probabilistic one (probability distributions over exogenous variables). Such structures, which have been widely studied in the literature, induce causal Bayesian networks, as shortly mentioned in the introduction. In the context of team semantics, probabilities have been recently introduced via the notion of multiteam. A multiteam differs from a team in that it may feature multiple copies of the same assignment; it is therefore closer to a collection of experimental data than teams are. There have been at least two different approaches to the formalization of multiteams in the literature (\cite{Vaa2017}, \cite{DurHanKonMeiVir2016}). However, the approach we take in the following is that of simulating multiteams by means of teams; this can be easily accomplished by assuming that each team has an extra variable Key (which is never mentioned in our formal languages) which takes distinct values on distinct assignments of the same causal team. In this way, we can have two assignments that agree on all the significant variables and just differ on Key. With this assumption in mind, the definition of \emph{causal multiteam} can follow word by word the definition of causal team. We will now work towards the definition of languages that can be appropriate for the discussion and study of probabilistic notions as interpreted over causal multiteams. If we wish to talk about probabilities, it seems natural to add appropriate atomic formulas to our formal language. 
In theory, it would make sense to assign probabilities to any \emph{flat} formula of the resulting language. However, for technical simplicity, we only allow our formal language to talk of probabilities of $\mathcal{C}O$ formulas (which were proved to be flat in earlier sections). \begin{df} The set of probabilistic literals is given by: \[ \neg \alpha | Pr(\chi) \leq \epsilon | Pr(\chi) \geq \epsilon | Pr(\chi) \leq Pr(\theta) | Pr(\chi) \geq Pr(\theta) \] where $\alpha$ is a probabilistic literal, $\chi,\theta$ are formulas of $\mathcal{C}O$ and $\epsilon \in \mathbb{R}\cap [0,1]$. Literals and probabilistic literals without negation will be called atomic formulas. The \emph{(basic) probabilistic causal language} ($\mathcal{PCD}$) is given by the following clauses: \[ \alpha | \psi \land \chi | \psi \lor \chi | \psi \sqcup \chi | \theta\supset \psi | \SET X = \SET x\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi \] where $\alpha$ is a literal or probabilistic literal, $\psi,\chi$ are $\mathcal{PCD}$ formulas, and $\theta$ a boolean combination of (non probabilistic) atoms. \end{df} As in the non-probabilistic case, we also define restricted languages: \begin{itemize} \item $\mathcal{PC}$ if selective implication is not allowed \item $\mathcal{PO}$ if counterfactuals are not allowed \item $\mathcal{P}$ if neither selective implications nor counterfactuals are allowed. \end{itemize} Some words are needed to motivate our choice of languages and their restrictions. First of all, notice that, besides the usual (tensor) disjunction $\lor$ that most naturally arises in the context of logics of dependence, we also add to our probabilistic languages what is called in the literature \emph{intuitionistic disjunction}; that is, a binary connective $\sqcup$ which satisfies the following semantical clause (already introduced in an earlier section): \begin{itemize} \item $T\models \psi \sqcup \chi \iff T\models \psi$ or $T\models \chi$. \end{itemize} If one wants to express that the disjunctive event ``$X=x$ or $Y=y$'' has probability $\epsilon$, it can be done using the formula $Pr(X=x \lor Y=y)=\epsilon$ (the semantics of probabilistic atoms will be defined soon below). If one wishes to make disjunctive statements about probabilities, the correct operator is instead the intuitionistic disjunction. For example, the statement ``either $X=x$ has probability less than one third, or probability greater than two thirds'' will be expressed as $Pr(X=x) <1/3 \sqcup Pr(X=x) > 2/3$. Intuitionistic disjunction turns out to be particularly useful for defining other operators. For example, dependence atoms and conditional independence atoms turn out to be (model-theoretically) \emph{definable} already in $\mathcal{PO}$ (this will be shown later on in the section). This is the reason why we have omitted such atoms from the syntax. Furthermore, we could not come up with any reasonable semantics for selective implications with probabilistic antecedents, and so we also omitted such constructs from the syntax. We will use some obvious abbreviations, such as $Pr(\chi) = \epsilon$ for $Pr(\chi) \leq \epsilon \land Pr(\chi) \geq \epsilon$, or $Pr(\chi) < \epsilon$ for $Pr(\chi) \leq \epsilon \land \neg Pr(\chi) \geq \epsilon$. The semantic clauses are the same as for the language $\mathcal{CD}$, enriched with clauses for the probabilistic atoms. In the following, we will often abuse notation and write $\{s\}$ for a \emph{causal} team whose underlying team contains exactly one assignment. 
Given any $\mathcal{C}O$ formula\footnote{This definition applies as well to any flat formula.} $\chi$ and any team $T$ with \emph{finite, nonempty} support $T^-$, we can define the \textbf{probability of $\chi$ in $T$} as: \[ Pr_T(\chi):= \frac{card(\{s\in T^- | \{s\}\models \chi\})}{card(T^-)}. \] For $T^-=\emptyset$, we conventionally assume $Pr_T(\chi)$ to be undefined; and consequently, expressions like $Pr_T(\chi)\leq \epsilon$, etc. to be false. Then the clauses for probabilistic atoms follow quite smoothly: \begin{itemize} \item $T\models Pr(\chi)\leq \epsilon \text{ iff } Pr_T(\chi)\leq \epsilon$ \item $T\models Pr(\chi)\geq \epsilon \text{ iff } Pr_T(\chi)\geq \epsilon$ \item $T\models Pr(\chi)\leq Pr(\theta) \text{ iff } Pr_T(\chi)\leq Pr_T(\theta)$ \item $T\models Pr(\chi)\geq Pr(\theta) \text{ iff } Pr_T(\chi)\geq Pr_T(\theta)$ \end{itemize} It is easy to see that the resulting logic is \emph{not} downward closed. For example, a team which is such that less than half of its assignments satisfy $\chi$ will satisfy $Pr(\chi)\leq \frac{1}{2}$. But the subteam $T^\chi$ consisting \emph{only} of the assignments that satisfy $\chi$ will not satisfy $Pr(\chi)\leq \frac{1}{2}$. Notice also that the atoms of the form $Pr(\chi)\leq Pr(\theta)$ (and $Pr(\chi)\geq Pr(\theta)$) could be eliminated if we allowed quantification over real values: we could then replace them with $\exists \epsilon(Pr(\chi)\leq \epsilon \land Pr(\theta)\geq \epsilon)$. A crucial question which arises right away is: does our definition really define a probability distribution on a team $T$ of finite support? Let $\mathcal{E}_T$ be the set of all subsets $S$ of $T$ which are definable by a $\mathcal{C}O$ formula $\chi$ (that is, $S= \{s\in T^- | \{s\}\models \chi\}$). This will be our set of \emph{events}. For any event $S\in \mathcal{E}_T$, define $P_T(S):= Pr_T(\psi)$, where $\psi$ is any formula which defines $S$. Now, a nice thing about $\mathcal{C}O$ formulas is that one can explicitly find (within $\mathcal{C}O$ itself) their complementary negations (a formula is a complementary negation of $\psi$ if it is satisfied exactly by those singleton causal teams that do not satisfy $\psi$). We define a canonical complementary negation $\psi^c$ for each formula $\psi$ of $\mathcal{C}O$: \begin{itemize} \item $(X = x)^c$ is $X \neq x$ (for any variable $X$ and value $x$) \item $(X \neq x)^c$ is $X = x$ (for any variable $X$ and value $x$) \item $(\theta \land \chi)^c$ is $(\theta^c \lor\chi^c)$ \item $(\theta \lor \chi)^c$ is $(\theta^c \land \chi^c)$ \item $(\theta\supset\chi)^c$ is $\theta \land \chi^c$ \item $(\theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi)^c$ is $\theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi^c$ \end{itemize} Complementary negations of $\mathcal{C}O$ formulas may fail to be boolean formulas, but they turn out to be useful because of the following: \begin{lem} Let $T$ be a causal multiteam, $\psi\in\mathcal{C}O$, and $S = \{s\in T^- | \{s\}\models \psi\}$. Then $T^- \setminus S = \{s\in T^- | \{s\}\models \psi^c\}$. \end{lem} \begin{proof} By induction on the synctactical complexity of $\psi\in \mathcal{C}O$. We consider the less obvious cases. Let $\psi = \theta\supset \chi$. Then $\{s\}\models\psi^c$ iff $\{s\}\models \theta \land\chi^c$ iff (by inductive hypothesis) $\{s\}\models \theta$ and $\{s\}\not\models \chi$ iff $\{s\}\not\models \theta\supset \chi$. Let $\psi = \theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi$. 
Then $\{s\}\models \psi^c$ iff $\{s\}\models \theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow \chi^c$ iff $\{s\}_\theta\models \chi^c$ iff (by inductive hypothesis) $\{s\}_\theta\not\models \chi$ iff $\{s\}\not\models \theta\hspace{2pt}\Box\hspace{-4pt}\rightarrow\chi$. \end{proof} \begin{teo} Let $T$ be a finite causal multiteam. Then $(T,\mathcal{E}_T, P_T)$ is a probability space. \end{teo} \begin{proof} 1) $T^-$ is definable by the formula $X=x \lor X\neq x$. Therefore $T^-\in \mathcal{E}_T$. 2) $\mathcal{E}$ is obviously closed under unions (due to finiteness). 3) We want to show that $\mathcal{E}$ is closed under complementation. Let $S\in\mathcal{E}$. Then $S$ is defined by some $\mathcal{C}O$ formula $\psi$. But then $T^-\setminus S$ is defined by the complementary formula $\psi^c$, therefore $T^-\setminus S \in \mathcal{E}$. 4) $Pr_T(T^-) = 1$. 5) Let $S_1,S_2 \in \mathcal{E}_T$, with $S_1 \cap S_2 = \emptyset$. $S_1$ and $S_2$ are defined by two $\mathcal{C}O$ formulas $\psi_1,\psi_2$, respectively. Then $S_1\cup S_2$ is defined by $\psi_1 \lor \psi_2$. So \[ P_T(S_1\cup S_2) = Pr_T(\psi_1 \lor \psi_2) = \frac{card(\{s\in T^- | \{s\}\models \psi_1\lor \psi_2\})}{card(T^-)} = \] \[ = \frac{card(\{s\in T^- | \{s\}\models \psi_1\}) + card(\{s\in T^- | \{s\}\models \psi_2\})}{card(T^-)} = \] \[ =\frac{card(\{s\in T^- | \{s\}\models \psi_1\})}{card(T^-)} + \frac{card(\{s\in T^- | \{s\}\models \psi_2\})}{card(T^-)} = \] \[ = Pr_T(\psi_1) + Pr_T(\psi_2) = P_T(S_1) + P_T(S_2); \] the third equality holds due to the assumption $S_1 \cap S_2 = \emptyset$. \end{proof} This result allows us to use the usual rules of probability calculus. \subsection{Comparison with causal models} \label{SUBSCOMP} At this point we are in the position to notice a difference between our approach and the usual causal models. Recall that in the semi-deterministic approach of \cite{Pea2000} and \cite{SpiGlySch1993}, by a \emph{causal model} one usually understands (the terminology is by no means used consistently) a structural equation model enriched with a probabilistic distribution over the exogenous variables $\SET{U}$. If the causal model is recursive (i.e. the graph is acyclic), then each endogenous variable can be represented as a function of the exogenous variables (just consider the structural equation defining an endogenous variable, $V:=f_V(PA_V)$; iteratively replace each of the endogenous variables occurring in the right term of this expression with its structural equation until you obtain a function $f'_V(\SET{U})$). Given recursivity, the probability distribution over exogenous variables induces a joint probability distribution over \emph{all} the variables of the system given by the following equation: \begin{equation}\label{EQEXTENDPROB} P(X_1 = x_1\land\dots\land X_n = x_n) := \end{equation} \[ \sum_{\{(u_1,\dots,u_k)|\text{for all }i=1..n \text{ : } f'_{X_i}(u_1,\dots,u_k)=x_i\}} P(U_1=u_1\land\dots\land U_k=u_k) \] From this joint probability, one can obtain by marginalization the distributions of each set of variables. This approach fails, however, for non-recursive causal models, in case their system of structural equations has multiple solutions or no solutions at all for some configuration(s) of values of the exogenous variables. In this case some endogenous variables may fail to be functions of the exogenous variables (consider e.g. 
a system with variables $U,X,Y$ all having range $\{0,1\}$, and consisting of the equations $X:=Y$ and $Y:= X$, which induce the arrows $X\rightarrow Y$ and $Y\rightarrow X$; here $U$ is the only exogenous variable; given a fixed value $u$ for $U$, both triples $(u,0,0)$ and $(u,1,1)$ are solutions to the system; therefore, neither $X$ nor $Y$ can be determined as functions of $U$). Unlike the semi-deterministic approach of causal models, our definition of probabilities for causal teams applies to both recursive and non-recursive systems. This is due to the fact that we do not conflate in our framework two different accounts of exogeneity, as is often the case in the literature. In the first account, exogenous variables are treated as quantities whose causal mechanisms are not further examined in the model. In the second account, exogenous variables are treated as \emph{unobserved} variables. The treatment of exogenous variables in our causal teams supports the first account: they have no structural equations, but the assignments of the causal teams encode quantitative information about them. To the extent to which causal teams stick to the first account, they are more general than causal models. (Of course, if we further extend causal teams to allow for unobserved variables, probabilities in non-recursive systems might fail again to be determined by the exogenous distribution; such a generalisation is beyond the scope of the present paper). We might ask at this point whether our probabilities extend correctly (``conservatively'') the probabilities typical of recursive causal models. That is, if we take the team-defined probability over the exogenous variables $\SET{U}$, and we define the joint probability by formula \eqref{EQEXTENDPROB}, do we obtain the team-defined joint probability? The answer is affirmative: \begin{teo} Let $T$ be a recursive causal multiteam with exogenous variables $\SET{U} = U_1,\dots, U_k$. For each $(u_1,\dots,u_k)\in \prod_{i=1..k}{Ran(U_i)}$, let $P(U_1=u_1\land\dots\land U_k=u_k):= Pr_T(U_1=u_1\land\dots\land U_k=u_k)$. Let $X_1,\dots,X_n$ be a list of all variables of $T$, and $P(X_1,\dots,X_n)$ be defined from $P(U_1=u_1\land\dots\land U_k=u_k)$ as in equation \eqref{EQEXTENDPROB}. Then $P(X_1=x_1,\dots,X_n=x_n) = Pr_T(X_1=x_1\land\dots\land X_n=x_n)$ for any tuple $(x_1,\dots,x_n) \in \prod_{X_i\in dom(T)}Ran(X_i)$. \end{teo} \begin{proof} By definition, \[ Pr_T(X_1=x_1 \land\dots\land X_n=x_n) = \frac{card(\{s\in T| s(X_1) = x_1 \land\dots\land s(X_n) = x_n\})}{card(T)}. \] Clause c) in the definition of causal teams entails that this last term is equal to \[ \frac{card(\{s\in T | f'_{X_i}(s(U_1),\dots,s(U_k)) = x_i \text{ for } i=1..n\})}{card(T)} \] where the $f'_{X_i}(u_1,\dots,u_k)$ are defined as above. We can further decompose the numerator according to the values taken by $U_1,\dots,U_k$, which ultimately leads to \[ \sum_{(u_1,\dots,u_k)|\text{ for all } i, f'_{X_i}(u_1,\dots,u_k)=x_i}\frac{card(\{s\in T | s(U_1)=u_1,\dots,s(U_k) = u_k \})}{card(T)} \] that is \[ \sum_{(u_1,\dots,u_k)|\text{ for all } i, f'_{X_i}(u_1,\dots,u_k)=x_i}P(U_1=u_1 \land\dots\land U_k = u_k) \] which is the definition of $P(X_1=x_1,\dots,X_n=x_n)$. \end{proof} \subsection{Definability in the probabilistic language} \label{SECDEFPROB} Can we define conditional probabilities in our framework?
Semantically, the obvious definition of a conditional probability over a team is (assuming that $\chi_1,\chi_2$ are $\mathcal{C}O$ formulas, and $\chi_1$ is satisfied by at least one assignment of the team): \[ Pr_T(\chi_2|\chi_1):= \frac{Pr_T(\chi_1\land\chi_2)}{Pr_T(\chi_1)}. \] It is more complicated to assess whether our current formal languages can talk about conditional probabilities. We can see that, in a number of circumstances, the selective implication can take the role of conditioning. For example, under the same assumptions as above, we can write $Pr(\chi_2|\chi_1)\leq \epsilon$ as an abbreviation for \[ \chi_1 \supset Pr(\chi_2)\leq \epsilon \] Here is the proof that the abbreviation has the intended meaning: \[ T\models \chi_1 \supset Pr(\chi_2)\leq \epsilon \iff \] \[ \iff T^{\chi_1}\models Pr(\chi_2)\leq \epsilon \] \[ \iff \frac{card(\{s\in (T^{\chi_1})^- | \{s\}\models \chi_2\})}{card((T^{\chi_1})^-)}\leq \epsilon \] \[ \iff \frac{card(\{s\in (T^{\chi_1})^- | \{s\}\models \chi_2\})}{card(T^-)}\frac{card(T^-)}{card((T^{\chi_1})^-)} \leq \epsilon \] \[ \iff \frac{Pr_T(\chi_1 \land \chi_2)}{Pr_T(\chi_1)} \leq \epsilon \iff Pr_T(\chi_2|\chi_1) \leq \epsilon \] provided $Pr_T(\chi_1)> 0$. In case $Pr_T(\chi_1)= 0$, the causal team $T^{\chi_1}$ is empty, so that our earlier conventions impose that $T^{\chi_1}\not\models Pr(\chi_2)\leq \epsilon$. Therefore, in this case we conventionally treat the expression $Pr(\chi_2|\chi_1)\leq \epsilon$ as not being satisfied by $T$. Things work analogously for inequalities in the opposite direction, and one can similarly prove that $T\models \chi \supset Pr(\psi_1)\leq Pr(\psi_2)$ iff $Pr_T(\psi_1|\chi)\leq Pr_T(\psi_2|\chi)$. We would like, however, to point out that our logical language is in some respect more general than other languages typically used in the causation literature (see e.g. \cite{Pea2000}, chap. 3). To give an example, Pearl would write $P(y|do(x),u)=\epsilon$, or $P(y|\hat x,u)=\epsilon$, to denote that the probability, given the observation that $U$ is equal to $u$, of $Y$ being equal to $y$ is $\epsilon$, in the model where $X$ has been fixed to $x$ by an intervention (see e.g. \cite{Pea2000} sec. 3.4). This is expressed in our notation by $X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow (U=u \supset Pr(Y=y) = \epsilon)$. A straightforward computation shows that this expression is equivalent to the statement that $Pr_{T_{X=x}}(Y=y |U=u) = \epsilon$, that is, that the probability of $Y=y$ conditional on $U=u$ \emph{in the intervened team} is $\epsilon$. But in our language we also have available formulas like $U=u \supset (X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon)$. This formula is not necessarily equivalent to the previous one. This point can be seen by looking back at an example considered in subsection \ref{SUBSINTER} to show the non-commutativity of selective and counterfactual implications. The team $S$ in that example satisfies $Z=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow (Y=3 \supset Pr(Y=3)=1)$ but does not satisfy $Y=3 \supset ( Z=1\hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=3)=1)$. The difference between the two formulas is that in the second we consider the observational evidence that \emph{in the initial model}, $U$ has value $u$. We can explicitly show that the second formula expresses the fact that the probability of $Y=y$, after the intervention $do(X=x)$, is equal to $\epsilon$, conditional on the evidence that $U=u$ \emph{in the pre-intervention team}.
The equivalence can be proved as follows: \[ T\models U=u \supset (X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon) \iff \] \[ \iff T^{U=u} \models X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon \] \[ \iff (T^{U=u})_{X=x} \models Pr(Y=y) = \epsilon \] \[ \iff \frac{card(\{s\in ((T^{U=u})_{X=x})^-|\{s\}\models Y=y\})}{card(((T^{U=u})_{X=x})^-)} = \epsilon \] \[ \iff \frac{card(\{t\in (T^{U=u})^-|\{t\}_{X=x}\models Y=y\})}{card((T^{U=u})^-)} = \epsilon \] \[ \iff \frac{card(\{t\in (T^{U=u})^-|\{t\}_{X=x}\models Y=y\})}{card(T^-)}\cdot\frac{card(T^-)}{card((T^{U=u})^-)} = \epsilon \] \[ \iff \frac{Pr_T(U=u \land (X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y))}{Pr_T(U=u)} = \epsilon \] \[ \iff Pr_T(X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y|U=u) = \epsilon \] (the fourth equivalence makes an essential use of the fact that we are using multiteams, not teams). In Pearl's notation, there is always the implicit convention that what occurs as a conditioning variable (in this case $u$) has to be conditioned on in the post-intervention model. This is also the reason why $u$ and $do(x)$ can occur in the same position in the notation, without any problems of non-commutativity arising. But this limitation seems problematic in the context of causal reasoning. We might want, for example, to discuss the effects of an intervention over a restricted population, and Pearl's notation does not seem to be of any help in this case. The same problem seems to afflict other notational representations of counterfactuals, such as the one coming from the potential outcome tradition (\cite{Ney1923}, \cite{Rub1974}, \cite{GalPea1998}). In that notational system we can write, say, $Y_x(u)=y$ to express that, under the observation that $U$ is $u$, fixing $X$ to $x$ forces $Y$ to take value $y$ ($U=u \supset (X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y)$), but there seems to be no way to express the formula $X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow (U=u \supset Y=y)$. Since exogenous variables are not indirectly affected by interventions, this is just a minor problem; but the limits of the notation immediately arise again if one discusses conditional interventions that involve some exogenous variable. Let us now see what kind of (probabilistic) dependencies and independencies are definable in our framework. First of all, we can give a model-dependent definition of \emph{two $\mathcal{C}O$ formulas being probabilistically independent}: $\chi_1\pindep\chi_2$. To this purpose, notice that all probabilities expressible over a team $T$ are multiples of the minimum unit $u = 1/card(T^-)$. Thanks to this, probabilistic atoms of the form $Pr(\chi_2|\chi_1)\leq Pr(\theta)$ can be given a model-dependent definition within $\mathcal{PO}$ (under the assumption $Pr_T(\chi_1) > 0$): \[ T\models Pr(\chi_2|\chi_1)\leq Pr(\theta) \iff \] \[ \iff T \models \bigsqcup_{m=0..card(T^-)} [(\chi_1 \supset Pr(\chi_2)\leq mu) \land (Pr(\theta)\geq mu)] \] Assuming by default that, if either $\chi_1$ or $\chi_2$ has probability 0, then the two formulas are probabilistically independent, we can take $\chi_1\pindep\chi_2$ as an abbreviation for $Pr(\chi_1) = 0 \sqcup Pr(\chi_2) = 0 \sqcup Pr(\chi_2|\chi_1) = Pr(\chi_2)$. More generally, we can talk of \textbf{conditional independence atoms} $\chi_1\pindep_{\chi_3}\chi_2$ as abbreviations for $Pr(\chi_1|\chi_3) = 0 \sqcup Pr(\chi_2|\chi_3) = 0 \sqcup Pr(\chi_2|\chi_1\land\chi_3) = Pr(\chi_2| \chi_3)$.
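To make the evaluation of these probabilistic atoms concrete, here is a small computational sketch (in Python; our own illustration, not part of the formal development). A multiteam is represented as a plain list of assignments (dictionaries), formulas as predicates on a single assignment (which suffices for the flat $\mathcal{C}O$ formulas considered here), and the names \texttt{pr}, \texttt{pr\_cond} and \texttt{prob\_independent} are our own choices. Exact fractions are used so that equalities between probabilities, as in the independence abbreviation, can be tested without rounding issues.
\begin{verbatim}
from fractions import Fraction

def pr(team, phi):
    # Pr_T(phi) = |{s in T : {s} |= phi}| / |T|, for a nonempty multiteam
    return Fraction(sum(1 for s in team if phi(s)), len(team))

def pr_cond(team, phi2, phi1):
    # Pr_T(phi2 | phi1), evaluated in the subteam T^{phi1}; assumes Pr_T(phi1) > 0
    return pr([s for s in team if phi1(s)], phi2)

def prob_independent(team, phi1, phi2):
    # Abbreviation: Pr(phi1)=0  or  Pr(phi2)=0  or  Pr(phi2|phi1)=Pr(phi2)
    if pr(team, phi1) == 0 or pr(team, phi2) == 0:
        return True
    return pr_cond(team, phi2, phi1) == pr(team, phi2)

# A toy multiteam over the variables X, Y (each assignment is a dict).
T = [{"X": 0, "Y": 0}, {"X": 0, "Y": 1}, {"X": 1, "Y": 0}, {"X": 1, "Y": 1}]
X0 = lambda s: s["X"] == 0
Y1 = lambda s: s["Y"] == 1

print(pr(T, Y1))                    # 1/2
print(pr_cond(T, Y1, X0))           # 1/2 (conditioning via the subteam T^{X=0})
print(prob_independent(T, X0, Y1))  # True
\end{verbatim}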
It would be important to understand whether independence atoms $\chi_1\perp_{\chi_3}\chi_2$ (\cite{GraVaa2013}) are definable or not in this framework, or whether it would be worth adding them to the language. This would lead to a simple uniform definition of dependence atoms; but we do not know the answer to this question. However, we can see that it is possible to give a model-dependent definition (in $\mathcal{PO}$) of dependence atoms in terms of conditional independence atoms. Given a sequence $(x_1,\dots,x_n)$ of values (implicitly associated with variables $X_1,\dots,X_n \in dom(T)$), we say that $(x_1,\dots,x_n)$ \emph{occurs} in $T$ if there is an $s\in T$ such that $s(X_1)=x_1,\dots,s(X_n)=x_n$. \begin{teo} For any finite causal multiteam $T$, $T\models \dep{\SET X}{Y}$ if and only if \[ T\models \bigwedge_{(\SET x,y)\in Ran(\SET X) \times Ran(Y)} Y=y \hspace{5pt}\pindep_{\SET X = \SET x} Y=y. \] \end{teo} \begin{proof} The conjunction says that, for every tuple $\SET x,y$ of appropriate values for $\SET X,Y$, either $Pr_T(Y=y|\SET X = \SET x) = 0$, or $Pr_T(Y = y|Y = y \land \SET X = \SET x) = Pr_T(Y=y|\SET X = \SET x)$. That is, for each $(\SET x,y)\in Ran(\SET X) \times Ran(Y)$, either this tuple does not occur in $T$ or $Pr_T(Y=y|\SET X = \SET x)=1$, which means, more explicitly: \[ \frac{card(\{s\in T^-|s(Y)=y \text{ and } s(\SET X) = \SET x\})}{card(\{s\in T^-| s(\SET X) = \SET x \})} = 1 \] which is equivalent to \[ card(\{s\in T^-|s(Y)=y \text{ and } s(\SET X) = \SET x\}) = card(\{s\in T^-| s(\SET X) = \SET x \}) \] Given the finiteness of the team, this last equality is equivalent to the statement that any assignment $s$ which satisfies $s(\SET X) = \SET x$ also satisfies $s(Y) = y$. \end{proof} \subsection{The Markov Condition} An assumption underlying a vast literature on causal studies is the so-called Markov Condition. As a condition on a causal team $T$, it seems natural to paraphrase it as follows: \begin{center} $Pr_T(X=x | PA_X = pa_X) = Pr_T(X=x | PA_X = pa_X, \SET{N} =\SET{n})$ for any set $\SET{N}$ of nondescendants of $X$ disjoint from $PA_X$ and any tuple $(x,pa_X,\SET{n})$ occurring in $T$. \end{center} This is a scheme of conditions which state that each variable is independent of its nondescendants, conditional on its parents. This condition is considered important in causal inference, for example, because it allows one to write the joint probability of the system in a compact and usually easy to compute form (product formula): \[ Pr_T(X_1=x_1\land \dots \land X_n=x_n)=\prod_{i=1..n} Pr_T(X_i=x_i|PA_i=pa_i) \] where $pa_i$ is $x_1,\dots,x_n$ restricted to the values for $PA_i$. This formula, in turn, gives a modular method for calculating the joint probability in intervened teams: \[ Pr_{T_{\SET{Z} = \SET{z}}}(X_1=x_1\land \dots \land X_n=x_n)=\prod_{X_i \notin \SET{Z}} Pr_T(X_i=x_i|PA_i=pa_i). \] However, in causal (multi)teams the Markov Condition is not automatically satisfied. As a simple counterexample, consider the team \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{X \ Y} \\ \hline 1 & 1 \\ \hline 2 & 1 \\ \hline 1 & 2 \\ \hline \end{tabular} \end{center} Since the parent set of $X$ is the empty set, and the tuple $X=1,Y=2$ occurs in the team, the Markov Condition would imply: $Pr(X=1) = Pr(X=1|Y=2)$. However, here we have $Pr(X=1) = 2/3$, but $Pr(X=1 | Y=2) =1$.
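As a quick check of the counterexample, one can recompute the two probabilities directly; a minimal sketch (our own, in Python, with the multiteam encoded as a list of assignments as in the earlier sketch):
\begin{verbatim}
from fractions import Fraction

team = [{"X": 1, "Y": 1}, {"X": 2, "Y": 1}, {"X": 1, "Y": 2}]

def pr(rows, phi):
    # relative frequency of the rows satisfying phi
    return Fraction(sum(1 for s in rows if phi(s)), len(rows))

p_x1       = pr(team, lambda s: s["X"] == 1)
p_x1_cond  = pr([s for s in team if s["Y"] == 2], lambda s: s["X"] == 1)

print(p_x1, p_x1_cond)   # 2/3 and 1: the Markov Condition fails for this team
\end{verbatim}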
The Markov Condition is a rather complex condition to require on a team; for example, it might be noticed that its verification can require an exponential time in the number of variables of the domain (since we have to verify such a condition for every appropriate \emph{set} $\SET{N}$ of variables). It is probably desirable to find a simpler axiom that implies the Markov Condition. We suggest the following \emph{Markov Axiom Scheme} (MA): \begin{center} $Pr_T(X=x | PA_X = pa_X) = Pr_T(X=x | PA_X = pa_X, \SET{Y} =\SET{y})$ where $\SET{Y}$ is the set of \emph{all} nondescendants of $X$ not occurring in $PA_X$, for any tuple $(x,pa_X,\SET{y})$ occurring in $T$. \end{center} Notice that in this case we have one axiom for each variable in the domain; therefore, the complexity of verifying the Markov Axiom Scheme is linear in the number of variables. We have to prove that this condition is sufficient to enforce the Markov Condition. This is obtained by a merely probabilistic reasoning, which we are allowed to use thanks to our earlier proof that teams endowed with our probability function give rise to probability spaces. The following lemma is an immediate consequence of basic probability calculus: \begin{lemma} \label{SHIFTLEMMA} For any $\mathcal{C}O$ formulas $\psi,\chi,\theta$ and causal multiteam $T$, and assuming that $Pr_T(\chi \land \theta)>0$, it follows that $Pr_T(\psi \land \chi | \theta) = Pr_T(\psi| \chi\land \theta) \cdot Pr_T(\chi|\theta)$. \end{lemma} \begin{teo} Suppose that a causal multiteam $T$ satisfies the Markov Axiom Scheme. Then it satisfies the Markov Condition. \end{teo} \begin{proof} Let $X_i\in dom(T)$, $PA_i$ its set of parents, $\SET{Z}$ a set of variables such that $PA_i \subseteq \SET{Z} \subseteq dom(T)\setminus \{X_i\}$, and such that any element of $\SET{Z}\setminus PA_i$ is a nondescendant of $X_i$. We want to show that $Pr_T(X_i=x_i | \bigwedge_{X_j\in \SET{Z}} X_j = x_j) = Pr_T(X_i=x_i | \bigwedge_{X_j\in PA_i} X_j = x_j)$ for any tuple $(x_1,\dots,x_n)$ occurring in $T$. Let $\SET{W} = \{X_j\in dom(T) | X_j \text{ nondescendant of $X_i$ and } X_j\notin \SET{Z}\}$. In case $\SET{W}$ is empty, this means that $\SET{Z}\setminus PA_i$ is the set of all nondescendants of $X_i$ that are not in $PA_i$; therefore the equality we are trying to prove is simply an instance of the Markov Axiom. Suppose then that $\SET{W}$ is nonempty. We now write $\SET{z}$ for the subsequence of values of the specific sequence $(x_1,\dots,x_n)$ that correspond to the variables in $\SET{Z}$; and $\SET{wz}$ for the concatenation of sequence $\SET{z}$ with some sequence $\SET{w}$ of values for $\SET{W}$. Define $T_{\SET{Z}}(\SET{W}) = \{\SET{w}|\text{ there is } s\in T^- \text{ such that } s(\SET{W}) = \SET{w} \text{ and } s(\SET{Z}) = \SET{z}\} = \{\SET{w}| Pr_T(\SET{W} = \SET{w} \land \SET{Z} = \SET{z})>0 \}$. Notice then that, if $Pr_T(\SET{W} = \SET{w} \land \SET{Z} = \SET{z})=0$, it follows that $Pr_T(X_i = x_i \land \SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) = 0$. Therefore we can write: \[ Pr_T(X_i = x_i | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) = \] \[ = \sum_{\SET{w}\in T_{\SET{Z}}(\SET{W})} Pr_T(X_i = x_i \land \SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) \] because the events $X_i = x_i \land \SET{W} = \SET{w}$, as $\SET{w}$ varies among tuples occurring in $T_{\SET{Z}}(\SET{W})$, are disjoint; and for other values $\SET{w}\notin T_{\SET{Z}}(\SET{W})$, $Pr_T(X_i = x_i \land \SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) = 0$.
Since, furthermore, for any $\SET{w}\in T_{\SET{Z}}(\SET{W})$ it holds that $Pr_T(\SET{W} = \SET{w} \land \bigwedge_{X_j\in\SET{Z}} X_j = x_j)>0$, we are entitled to use Lemma \ref{SHIFTLEMMA} on each summand, which yields the expression \[ \sum_{\SET{w}\in T_{\SET{Z}}(\SET{W})} [ Pr_T(X_i = x_i | \SET{W} = \SET{w}, \bigwedge_{X_j\in\SET{Z}} X_j = x_j)\cdot Pr_T(\SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) ]. \] Now, applying the appropriate instances of the Markov Axiom, one obtains \[ \sum_{\SET{w}\in T_{\SET{Z}}(\SET{W})} [ Pr_T(X_i = x_i | \bigwedge_{X_j\in PA_i} X_j = x_j)\cdot Pr_T(\SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j) ]. \] Since the left factor no longer depends on $\SET{w}$, we can extract it: \[ Pr_T(X_i = x_i | \bigwedge_{X_j\in PA_i} X_j = x_j)\cdot \sum_{\SET{w}\in T_{\SET{Z}}(\SET{W})} Pr_T(\SET{W} = \SET{w} | \bigwedge_{X_j\in\SET{Z}} X_j = x_j). \] Since the events form a $\sigma$-algebra, and $T_{\SET{Z}}(\SET{W})$ contains all the values $\SET{w}$ such that $\SET{W} = \SET{w} \land \bigwedge_{X_j\in\SET{Z}} X_j = x_j$ has positive probability, the right factor sums to 1, and we obtain the desired equality. \end{proof} \subsection{Pearl's example in causal teams} We conclude this section by illustrating the workings of the probabilistic semantics for causal teams over an example taken from Pearl (\cite{Pea2000}, $1.4.4$). For Pearl the purpose of the example is to show that causal Bayesian networks are insufficient to answer counterfactual questions, which require structural equations. Our concern here is only with that aspect of the example (Pearl's ``model $2$'') which illustrates the three stages in his treatment of counterfactuals in the structural equation framework. The following quote gives the spirit of the enterprise: \begin{quote} The preceding considerations further imply that the three tasks listed in the beginning of this section - prediction, intervention, and counterfactuals - form a natural hierarchy of causal reasoning tasks, with increasing levels of refinement and increasing demands on the knowledge required for accomplishing the tasks. Prediction is the simplest of the three, requiring only a specification of a joint distribution function. The analysis of interventions requires a causal structure in addition to a joint distribution. Finally, processing counterfactuals is the hardest task because it requires some information about the functional relationships and/or the distribution of the omitted factors. (p. 38) \end{quote} The example is about a clinical test. There are two endogenous variables $X$ (being treated: $X=1$; not being treated: $X=0$) and $Y$ (which measures the subject's response: recovery: $Y=0$; death: $Y=1$). There are two exogenous (hidden), independent variables $U_{1}$ and $U_{2}$. The equations of the model are: \[ \left\{\begin{array}{l} x=u_{1} \\ y=xu_{2}+(1-x)(1-u_{2}) \end{array}\right. \] They induce the DAG with arrows $U_{1}\rightarrow X$, $X\rightarrow Y$ and $U_{2}\rightarrow Y$. The two independent binary variables are governed by the probability distribution $P(u_{1}=1)=P(u_{2}=1)=\frac{1}{2}$, which generates joint probability distributions $P(x,y,u_{2})$ and $P(x,y)$ that may be calculated using the formula from subsection \ref{SUBSCOMP}. For instance, $P(X=x\land Y=y)=$ \[ = \sum_{\left\{ \left(u_{1},u_{2}\right)\mid u_{1}=x\wedge u_{1}u_{2}+(1-u_{1})(1-u_{2})=y\right\} }P(U_{1}=u_{1}\wedge U_{2}=u_{2}) = 0.25 \] for all $x$ and $y$.
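These joint probabilities can also be recomputed mechanically from the exogenous distribution via formula \eqref{EQEXTENDPROB}; a small sketch (our own, in Python, assuming the uniform distribution on $(u_1,u_2)$ just given) follows, and its output agrees with the marginal column of the table below.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# Structural equations of the model: x = u1,  y = x*u2 + (1-x)*(1-u2).
f_X = lambda u1, u2: u1
f_Y = lambda u1, u2: u1 * u2 + (1 - u1) * (1 - u2)

# P(u1, u2) = 1/4 for each of the four exogenous configurations.
P_U = {u: Fraction(1, 4) for u in product((0, 1), repeat=2)}

def joint(x, y):
    # Sum P(u1, u2) over the exogenous values whose solution gives X=x, Y=y.
    return sum(p for (u1, u2), p in P_U.items()
               if f_X(u1, u2) == x and f_Y(u1, u2) == y)

for x, y in product((0, 1), repeat=2):
    print(x, y, joint(x, y))   # each pair (x, y) gets probability 1/4
\end{verbatim}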
Here is a table which lists $P(x,y,u_{2})$ and $P(x,y):$ \[ \small \begin{array}{c|ccc|ccc|ccc} & & u_{2}=1 & & & u_{2}=0 & & & Marginal\\ & x=1 & & x=0 & x=1 & & x=0 & x=1 & & x=0\\ \hline y=1 & 0 & & 0.25 & 0.25 & & 0 & 0.25 & & 0.25\\ y=0 & 0.25 & & 0 & 0 & & 0.25 & 0.25 & & 0.25 \end{array} \] Pearl considers the following counterfactual query: \begin{description} \item [{({*})}] What is the probability $Q$ that a subject who died under treatment $(x=1,y=1)$ would have recovered $(y=0)$ had he or she not been treated $(x=0)?$ \end{description} He calls the fact that the subject died under treatment the evidence \[ e=\left\{ (x=1,y=1)\right\} \] and points out that answering the above query involves three steps: ($1$) given the evidence $e$, ($3$) we compute the probability $Y=y$ ($2$) under the hypothetical condition $X=x$. In his example this amounts to: \begin{enumerate} \item We apply the evidence $e$ to the set of equations, and conclude that it is compatible with only one realization: $u_{1}=1$ and $u_{2}=1$. This gives us a value for $u_{2}$. \item We make an intervention $do(X=0)$ on the variable $X$. It results in the first equation being replaced with $x=0$. \item We solve the second equation for $x=0,$ $u_{2}=1$. We get $y=0$ from which we conclude that the probability $Q$ is 1. \end{enumerate} We are told that in the general case \begin{itemize} \item step ($1$) corresponds to an abduction: update the probability $P(\SET u)$ to obtain $P(\SET u|\SET e)$ \item step ($2$) corresponds to an action: replace the equations corresponding to each intervened variable $X$ by $X=x$ \item step ($3$) corresponds to a prediction: compute $P(y)$ in the new model. \end{itemize} (Pearl 2009, p.61) Pearl is explicit on the point that if for each value $\SET u$ of the exogenous variables $\SET U$ there is a unique solution for $Y$, then step ($3$) always gives a unique solution for the needed probability: we simply sum up the probabilities $P(\SET u|e)$ assigned to all $\SET u$ which yield $Y=y$ as a solution. More technically: \[ \begin{array}{c} P(y)=\sum_{\left\{ \SET u:f'_{Y}(\SET u)=y\right\} }P(\SET u|e) \end{array} \] Pearl's model $2$ can be represented in our framework by using the causal team $T$ \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{ } \\ \multicolumn{4}{|c|}{$U_1$\tikzmark{UP1} \tikzmark{UP2}$U_2$\tikzmark{UP2'} \ \tikzmark{XP1}$X$\tikzmark{XP1'} \ \tikzmark{YP1}$Y$} \\ \hline 0 & 0 & 0 & 1\\ \hline 0 & 1 & 0 & 0\\ \hline 1 & 0 & 1 & 0\\ \hline 1 & 1 & 1 & 1\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:XP1'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:YP1}); \draw ([yshift=7pt]{pic cs:UP1}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=4pt]{pic cs:XP1}); \draw ([yshift=5pt]{pic cs:UP2'}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=7pt]{pic cs:YP1}); \end{tikzpicture} \end{center} where the range of each variable is $\left\{ 0,1\right\}$ and the arrows of the DAG are induced by the two structural equations given earlier, which are part of the description of the team. Notice also that this team, interpreted as a multiteam, bears the same probabilities that are carried by the system described by Pearl. It can be checked that $T$ is an explicit causal team. 
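Pearl's three steps can also be carried out mechanically on this causal multiteam. The following sketch (our own, in Python; the assignments and the structural equation for $Y$ are exactly the ones displayed above) performs the abduction, action and prediction steps and returns the probability $Q$:
\begin{verbatim}
from fractions import Fraction

T = [{"U1": 0, "U2": 0, "X": 0, "Y": 1},
     {"U1": 0, "U2": 1, "X": 0, "Y": 0},
     {"U1": 1, "U2": 0, "X": 1, "Y": 0},
     {"U1": 1, "U2": 1, "X": 1, "Y": 1}]

# Step 1 (abduction): keep the assignments compatible with the evidence X=1, Y=1.
evidence = [s for s in T if s["X"] == 1 and s["Y"] == 1]

# Step 2 (action): intervene do(X=0); Y is recomputed from its structural equation.
def do_X(rows, x):
    out = []
    for s in rows:
        t = dict(s, X=x)
        t["Y"] = t["X"] * t["U2"] + (1 - t["X"]) * (1 - t["U2"])
        out.append(t)
    return out

post = do_X(evidence, 0)

# Step 3 (prediction): the probability of Y=0 in the resulting multiteam.
Q = Fraction(sum(1 for s in post if s["Y"] == 0), len(post))
print(Q)   # 1, as in Pearl's analysis
\end{verbatim}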
We formulate Pearl's query in our language as \begin{equation}\label{PEARL1} X=1\wedge Y=1\supset(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=0) \end{equation} or, in its probabilistic version, as \begin{equation}\label{PEARL2} X=1\wedge Y=1\supset(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=0)=1). \end{equation} By the clause for selective implication, the formula \eqref{PEARL1} is true in the causal team $T$ if and only if the formula $(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=0)$ is true in the causal team represented by the annotated table \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{ } \\ \multicolumn{4}{|c|}{$U_1$\tikzmark{UQ1} \tikzmark{UQ2}$U_2$\tikzmark{UQ2'} \ \tikzmark{XQ1}$X$\tikzmark{XQ1'} \ \tikzmark{YQ1}$Y$} \\ \hline 1 & 1 & 1 & 1\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:XQ1'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:YQ1}); \draw ([yshift=7pt]{pic cs:UQ1}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=4pt]{pic cs:XQ1}); \draw ([yshift=5pt]{pic cs:UQ2'}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=7pt]{pic cs:YQ1}); \end{tikzpicture} \end{center} and the two structural equations. Now the formula $(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=0)$ is true, given that the intervention $do(X=0)$ generates the causal team represented by the annotated team \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|l|}{ } \\ \multicolumn{4}{|l|}{$U_1$\tikzmark{UR1} \tikzmark{UR2}$U_2$\tikzmark{UR2'} \ \tikzmark{XR1}$X$\tikzmark{XR1'} \ \tikzmark{YR1}$Y$} \\ \hline 1 & 1 & 0 & 0\\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:XR1'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:YR1}); \draw ([yshift=5pt]{pic cs:UR2'}) edge[line width=0.2mm, out=35,in=125,->] ([yshift=7pt]{pic cs:YR1}); \end{tikzpicture} \end{center} and the same system of equations, except for the first one, replaced by $X=0$\footnote{Actually, in the causal team formalism this is represented by the removal of the invariant function $\mathcal{F}_{T^{X=1 \land Y=1}}(X)$, and the change of role of $X$, from endogenous to exogenous variable.}. Obviously $Y=0$ is true in this causal team, which shows that the formula \eqref{PEARL2} is true. Later on in section 7, where we will introduce various notions of cause within causal team semantics, we will see that in this example it is also true that $X$ is a \textit{direct cause} of $Y$ in $T$. Now recall our observations from subsection \ref{SECDEFPROB} concerning the definability of conditional probabilistic formulas. Those results show that \[ X=1\wedge Y=1\supset(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=0)=1) \] is true in a causal team $T$ if and only if \[ Pr_{T}(X=0\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=0 \mid X=1\wedge Y=1)=1. \] We notice that the formula inside the brackets shows nicely Pearl's three ``mental steps''. \section{Causal modeling} \label{CAUSMOD} \subsection{Direct and total cause} What might be interesting to express in our languages? We consider the two basic type-causal notions from Woodward (\cite{Woo2003}), \emph{direct} and \emph{total} cause. Let us consider the case of \textbf{direct cause}. 
Recall Woodward's definition of a direct cause in section \ref{SUBSCAUSES}: \begin{quote} A necessary and sufficient condition for $X$ to be a direct cause of $Y$ with respect to some variable set $V$ is that there be a possible intervention on $X$ that will change $Y$ (or the probability distribution of $Y$) when all other variables in $V$ besides $X$ and $Y$ are held fixed at some value by interventions. \end{quote} Woodward's definition is slightly ambiguous in that it talks about a change in $Y$, but does not say with respect to what the change is made; to $Y$'s actual value? To some possible value of $Y$, i.e., some $y\in Ran(Y)$? To a $y$ occurring in some allowed configuration (assignment)? We resolve the ambiguity by stipulating that the values of $Y$ to be compared be generated by \emph{two} distinct interventions. The kind of intervention needed for the evaluation of direct causation of $X$ on $Y$ is a simultaneous intervention on all variables in the system except for $X$ and $Y$. As an example, let $T$ be the team \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X14} \ \tikzmark{Y14}Z\tikzmark{Y14'} \ \tikzmark{Z14}Y} \\ \hline 1 & 1 & 2 \\ \hline 2 & 2 & 4 \\ \hline 3 & 3 & 6 \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X14}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y14}); \draw [->] ([yshift=3pt]{pic cs:Y14'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z14}); \draw ([yshift=7pt]{pic cs:X14}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z14}); \end{tikzpicture} \end{center} with $\mathcal{F}_Z(x):= x$ and $\mathcal{F}_Y(x,z) = x+z$. It is an essential ingredient of this example that there is an arrow from $X$ to $Y$. We show that $X$ is a direct cause of $Y$ in $T$.
First of all we must fix all other variables (in this case, just $Z$) to an appropriate value (we choose $1$) by an intervention, which also updates $Y$: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X17} \ \tikzmark{Y17}Z\tikzmark{Y17'} \ \ \tikzmark{Z17}Y} \\ \hline 1 & \textbf{1} & \textbf{2} \\ \hline 2 & \textbf{1} & \textbf{3} \\ \hline 3 & \textbf{1} & \textbf{4} \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y17'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z17}); \draw ([yshift=7pt]{pic cs:X17}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z17}); \end{tikzpicture} \end{center} Then we intervene in two different ways on $X$: \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X15} \ \tikzmark{Y15}Z\tikzmark{Y15'} \ \ \tikzmark{Z15}Y} \\ \hline \textbf{1} & 1 & \textbf{2} \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y15'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z15}); \draw ([yshift=7pt]{pic cs:X15}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z15}); \end{tikzpicture} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{ }\\ \multicolumn{3}{|l|}{X\tikzmark{X16} \ \tikzmark{Y16}Z\tikzmark{Y16'} \ \ \tikzmark{Z16}Y} \\ \hline \textbf{2} & 1 & \textbf{3} \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:Y16'}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Z16}); \draw ([yshift=7pt]{pic cs:X16}) edge[line width=0.2mm, out=55,in=125,->] ([yshift=7pt]{pic cs:Z16}); \end{tikzpicture} \end{center} The fact that the two interventions select distinct values for $Y$ proves that $X$ is a direct cause of $Y$. The peculiar form of these kinds of interventions makes so that, \emph{if there is an arrow from $X$ to $Y$}, after the intervention all the columns in the team are constant, that is, a singleton team is produced. This suggests that direct cause can also be defined as an operation on teams, instead of as an operation on assignments. Let $Fix(\SET z)$ be an abbreviation for $\bigwedge_{Z\in Dom(T)\setminus\{X,Y\}}Z=z$: \[ T\models DC(X;Y) \text{ iff there are } \SET z, x\neq x', y\neq y' \text{ such that } \] \[ T\models (Fix(\SET z) \land X=x)\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y \] \[ \text{ and } T\models (Fix(\SET z) \land X=x')\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y'. \] This semantical definition suggests that direct causation might be definable in an appropriate quantifier extension of $\mathcal{CD}$. We do not venture into this direction, but we look instead for a model-dependent definition in the language $\mathcal{CD}$ \emph{enriched with intuitionistic disjunction} $\sqcup$\footnote{The results of \cite{YanVaa2016} entail that, in the case of propositional dependence logic, the intuitionistic disjunction is eliminable in terms of the other propositional connectives (although not uniformly definable, \cite{Yan2017}). It is unclear, at the current state of investigation, whether these results translate into our framework.} \footnote{A remark: adding to $\mathcal{C}$ or $\mathcal{C}O$ the intuitionistic disjunction, one obtains logics that are still downward closed, but not flat.}. 
With the help of this operator, we can express the fact that $X$ is a direct cause of $Y$ in the causal team $T$, $T\models DC(X;Y)$, by the formula \[ \bigsqcup_{x\neq x', y\neq y',\SET z}\big([(Fix(\SET z) \land X=x) \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y] \land [(Fix(\SET z) \land X=x') \hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y']\big) \] where the disjunction ranges over all tuples $(x,x',y,y',\SET z)\in Ran(X)^2\times Ran(Y)^2\times Ran(\SET Z)$ such that $x\neq x'$ and $y\neq y'$. This definition is model-dependent in that it makes reference to the ranges of variables, and to a set $\SET Z = dom(T)\setminus \{X,Y\}$ of variables that is determined by the domain of the causal team. If there is an arrow from $X$ to $Y$, any intervention of this kind will collapse the team to a singleton team, so that the condition for direct cause can be checked (it can fail; see the example below). But if instead some intervention of this form does \emph{not} collapse the team into a singleton team, then necessarily there is no arrow from $X$ to $Y$. Therefore, the graph of direct causes is a subgraph of the graph of the team. It can be a \emph{proper} subgraph, that is, there might be an arrow between two variables $X$ and $Y$, and yet they may fail to be in the direct cause relationship; this is what happens in the following causal team (with, say, $Ran(X)=Ran(Y)=\{0,1\}$): \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{$X$ \hspace{-3pt}$\rightarrow$ \hspace{-3pt}$Y$} \\ \hline 0 & 0 \\ \hline 1 & 0 \\ \hline \end{tabular} \end{center} The definition of probabilistic direct cause in $\mathcal{PCD}$ presents a little surprise. As in the deterministic case, we use counterfactual antecedents of the form $X=x \land Fix(\SET z)$ to try to reduce to one the number of configurations occurring in the multiteam; however, now, even in the presence of an arrow from $X$ to $Y$, the resulting team is not necessarily a singleton team, but it will be in general a collection of identical copies of one and the same assignment; the formula $Y=y$ will assume probability 0 or 1. This leads to simplifications in the definition of probabilistic direct cause. Write $\Psi_{x,x';y;\SET z}$ for \[ ((Fix(\SET z) \land X=x) \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = 0) \land ((Fix(\SET z) \land X=x') \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y)=1). \] Then, probabilistic direct cause can be formulated as: \[ T\models PDC(X;Y) \text{ iff there are } \SET z, x, x', y \text{ such that } T\models \Psi_{x,x';y;\SET z} \] A non-uniform definition in $\mathcal{PCD}$ is as follows: \[ \bigsqcup_{x\neq x',y,\SET z}\Psi_{x,x';y;\SET z} \] We may conclude that the probabilistic notion of direct cause collapses to the deterministic one. The case of \textbf{total cause} (\cite{Woo2003}) is more complicated. Recall Woodward's definition from section 2.3: \begin{quote} $X$ is a total cause of $Y$ if and only if there is a possible intervention on $X$ that will change $Y$ or the probability distribution of $Y$. \end{quote} This formulation of Woodward is again very ambiguous in some respects; as before, we can think of this ``change'' in terms of two interventions; but, are the ``changes'' mentioned relative to some fixed actual configuration of the system (i.e. assignment), or should one check all possible configurations? We opt here for the second interpretation. This gives us a reasonable condition for obtaining again the collapse of the team to a singleton.
The idea to obtain this is to fix (to some value) all the variables that are not descendants of $X$. Let us call this set $ND_X$; we use the letter $\SET w$ to indicate values for these variables. Write $Fix'(\SET w)$ for $\bigwedge_{W\in ND_X} W=w$, and $\Xi_{x, x';y,y';\SET w}$ for \[ Fix'(\SET w) \hspace{2pt}\Box\hspace{-4pt}\rightarrow [(X=x\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y) \land (X=x'\hspace{2pt}\Box\hspace{-4pt}\rightarrow Y=y')]. \] We can then define total cause as: \[ T\models TC(X;Y) \text{ iff there are } x\neq x',\SET w\in T(ND_X), y\neq y' \] \[ \text{ such that } T\models \Xi_{x, x';y,y';\SET w}. \] This, once more, can be written as a single formula: \[ T\models \bigsqcup_{x\neq x',\SET w\in T(ND_X), y\neq y'} \Xi_{x, x';y,y';\SET w}. \] Things work differently in the probabilistic framework. Notice that in the language $\mathcal{PCD}$ we have (abbreviated) formulas of the form \[ X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon \] saying that an intervention fixing the value of $X$ at $x$ will affect in a certain way the probability distribution of $Y$. If a conjunction of the following form \[ \Theta_{x,x';y;\epsilon,\epsilon'}: (X=x \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon) \land (X=x' \hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y) = \epsilon') \] holds for two distinct values $\epsilon, \epsilon'$, we can conclude that $X$ is a total cause in the (probabilistic) sense given by Woodward. More precisely, we can treat probabilistic total causation as follows. $X$ is a \textbf{(probabilistic) total cause} of $Y$, $T\models PTC(X;Y)$, iff \[ \text{there are } x\neq x', y,\epsilon\neq\epsilon' \text{ such that } T\models \Theta_{x,x';y;\epsilon,\epsilon'}. \] Compare this with the definition of deterministic total cause; the clauses have been considerably simplified. This follows from the fact that, unlike the deterministic atoms of the form $Y=y$, the probabilistic atoms $Pr(Y=y)=\epsilon$ are not flat; they instead express a global property of the team. Therefore, there is no need to use the trick that was applied in the deterministic case (fixing counterfactually the nondescendants of $X$, so that the team collapses to a single assignment). In this probabilistic context, total causation is genuinely a property of teams, not of single assignments. \begin{comment} This semantic definition can be easily captured in an infinitary version of $\mathcal{PCD}$: \[ \bigvee_{\overline z, x\neq x',y, \epsilon\neq \epsilon'}\Big[(X=x\hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y)=\epsilon) \land (X=x'\hspace{2pt}\Box\hspace{-4pt}\rightarrow Pr(Y=y)=\epsilon')\Big] \] (where $\overline z, x\neq x',y$ are intended to vary over the ranges of the corresponding variables) and, more compactly, if we allowed existential quantification over real numbers. However, actually we can define probabilistic total cause (non-uniformly) already in the language $\mathcal{PCD}$. \end{comment} Observing that the probability of any formula in the language is an integer multiple of the smallest unit $u = \frac{1}{card(T^-)}$, and that probabilities range within $[0,1]\cap \mathbb{R}$, we can also explicitly define (in a non-uniform way) $T\models PTC(X;Y)$ as: \[ \bigsqcup_{x\neq x',y, m\neq n,m,n=0..card(T^-)}\Theta_{x,x';y;mu,nu}. \] \subsection{Higher-order effects} In general, a variable $X$ might be thought to have some kind of ``causal effect'' on another variable $Y$ if intervening in some way on $X$ can change the state of variable $Y$.
In the kind of language that is usually applied to describe a causal model, the ``state'' of a variable $Y$ is completely summarized by a formula of the form $Y=y$, asserting that $Y$ is taking a specific value $y$. But, in a team, it may happen that a variable $Y$ satisfies \emph{none} of such formulas. Yet there are properties that can be ascribed or not to a variable $Y$ in $T$, and that might cease or begin to hold after an intervention on $X$. One example are statements on the probability distribution of $Y$. Or, there might be some deterministic property, say $\psi(Y)$, which does not hold in $T$, but holds after an intervention that reduces the range of $Y$-values occurring in the team. This suggests that ``$X$ is a total cause of $Y$'' might be redefined, in contexts more general than the usual causal modeling, as the statement that ``there is an intervention on $X$ that changes some property of $Y$''; and similarly for other notions of causation. Therefore, the arguments above rather show that the notion of direct cause is special, in that its evaluation over a team can be reduced to evaluation over causal models. Our formalism allows many more counterfactuals than those that are typically used in the definition of causal notions; it allows us to ponder, for example, what effects an intervention has on a \emph{set} of variables. Let us consider an example which is natural in the team context, but is unlikely to emerge in the usual discussions on interventionist causality. Consider a causal team $T$ of the form \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X\tikzmark{X12} \ \tikzmark{Y12}Y\tikzmark{Y'12} \ \tikzmark{Z12}Z} \\ \hline 1 & 2 & 1 \\ \hline 2 & 3 & 1 \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X12}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y12}); \end{tikzpicture} \end{center} Clearly $T\not\models\dep{Z}{Y}$. However, the intervened team $T_{X=1}$ \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{X\tikzmark{X13} \ \tikzmark{Y13}Y\tikzmark{Y'13} \ \tikzmark{Z13}Z} \\ \hline 1 & 2 & 1 \\ \hline \end{tabular} \begin{tikzpicture}[overlay, remember picture, yshift=.25\baselineskip, shorten >=.5pt, shorten <=.5pt] \draw [->] ([yshift=3pt]{pic cs:X13}) [line width=0.2mm] to ([yshift=3pt]{pic cs:Y13}); \end{tikzpicture} \end{center} is such that $T_{X=1}\models\dep{Z}{Y}$. Said otherwise, $T\models X=1 \hspace{2pt}\Box\hspace{-4pt}\rightarrow \dep{Z}{Y}$. So, intervening on $X$ has changed a property of variable $Y$ (that of being functionally dependent, or not, on $Z$). Clearly, such a team property is completely different from those that are usually considered in discussions about causation; and they may be thought as properties of the \emph{set} of variables $\{Y,Z\}$. Said otherwise, intervening on $X$ may affect the set $\{Y,Z\}$. We can construct similar examples also in the language of probabilities, without need for counterfactuals nor selective implications. 
Consider the following multiteam with variables $X,Y$ and no arrows, and the multiteam obtained from it by applying $do(X=0)$: \begin{center} $S$: \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{$X$ \ \ $Y$} \\ \hline 0 & 0 \\ \hline 0 & 1 \\ \hline 1 & 2 \\ \hline \end{tabular} \hspace{20pt} $S_{X=0}$: \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{$X$ \ \ $Y$} \\ \hline 0 & 0 \\ \hline 0 & 1 \\ \hline 0 & 2 \\ \hline \end{tabular} \end{center} Notice that the intervened team contains a new assignment $\{(X,0),(Y,2)\}$. Therefore, the probability of formula $X=0 \land Y=2$ raised from $0$ (before the intervention) to $1/3$ (after the intervention). In this case, intervening on $X$ affected the set $\{X,Y\}$, in the sense that it changed a property of the relationship between $X$ and $Y$. \subsection{Invariance} We want to make some remarks about one methodological aspect of Pearl's approach to causation. In his causal models, arrows represent the statement, or guess, that some functional dependencies are invariant under \emph{most} of the interventions that can be carried out on the system (a structural equation $X:= f_X(PA_X)$ is invariant under all interventions that do not act on $X$). Graphs therefore draw a line between some of the dependencies occurring in the data, which might be considered either 1) contingent or 2) due to a third common cause, and those that 3) have to be considered causal dependencies. But it must be immediately observed that there may be dependencies that are invariant in the sense specified above, and yet are not represented by arrows in the causal graph. Similarly, in our causal teams, the graph $G(T)$ encodes a set of invariant functional dependencies of the form $ \dep{PA_Y}{Y}$, but the causal team could still sustain invariant dependencies which are not explicitated by $G(T)$ (in this case, we say that a functional dependency $\dep{X_1,\dots,X_n}{Y}$ is \emph{invariant in $T$} if it holds after any intervention on $T$\footnote{One might raise the protest that it should be required that $\dep{X_1,\dots,X_n}{Y}$ be invariant under any \emph{sequence} of interventions. However, the importation law we have proved in earlier sections shows that for any sequence of interventions, there is a single intervention generating the same modified causal team. Notice, futhermore, that in this context the actual team $T$ can be thought of as the result of a null intervention.}); call $G_{inv}(T)$ the graph which represents all the invariant dependencies. Notice that while the structural equation for $Y$ is not invariant under interventions that act on $Y$, the functional dependency $\dep{X_1,\dots,X_n}{Y}$ \emph{is} invariant under such interventions, although its arrows are removed from the graph. What happens is this (we consider for simplicity an intervention over only one variable $Y$): if $T\models\dep{X_1,\dots,X_n}{Y}$, then $(X_i,Y)\in G(T)$ but $(X_i,Y)\notin G(T_{Y=y})$. But since $Y$ is constant in $T_{Y=y}$, no further intervention can disrupt the dependency $\dep{X_1,\dots,X_n}{Y}$; therefore $(X_i,Y)\in G_{inv}(T_{Y=y})$. This shows that there may be invariant dependencies in a causal team that are not represented by arrows of the graph. In general, we can isolate three graphs of interest: $G(T)\subseteq G_{inv}(T) \subseteq G_{cont}(T)$, where $G_{cont}(T)$ represents \emph{all} the functional dependencies in $T^-$, including those that are not invariant. 
(Furthermore, as shown in a previous subsection, it might be of interest to consider the subgraphs of $G(T)$ that represent direct causes, total causes and probabilistic total causes). The point is that the choice of $G(T)$ determines what an intervention is/does; in turn, the set of interventions thus defined may determine some further invariant dependencies (set $G_{inv}(T)$), those that hold both in the initial team and in all the intervened ones. The arrows of this new graph might happen to be inappropriate to capture a notion of ``direct'' or also of ``indirect'' causation. It might be interesting to ask whether this kind of invariance is definable in our languages; the question makes sense not only for functional dependencies, but for any property definable by a formula. The answer is positive; we can give a model-dependent definition. A formula $\psi$ is invariant for $T$ if: \[ T\models INV(\psi) := \bigwedge \{ \SET{Z}=\SET{z}\hspace{2pt}\Box\hspace{-4pt}\rightarrow \psi\} \] where the conjunction ranges over all consistent conjunctions of the form $\SET{Z}=\SET{z}$. Notice that this is not a single formula, but a scheme of formulas of $\mathcal{CD}$; which formula is actually picked depends on the domain of the causal team $T$ and the ranges of its variables. We do not know whether a uniform definition can be given, or if instead adding invariance statements (for sentences, or just for dependencies) might increase the expressivity of our languages. \section{Conclusions} We have shown how to generalize team semantics in a way that it may incorporate the kind of counterfactual and causal reasoning which is typical of recursive, possibly nonparametric structural equation models. We briefly explained how the semantics can be extended to cover also nonrecursive, parametric models with unique or no solutions. We introduced languages for causal discourse (both deterministic and probabilistic) over causal teams; we showed that these languages are in some ways more general than notations typically used in the literature (in that, for example, they may distinguish pre- and post-intervention observations). We explored somewhat systematically the logic underlying deterministic and probabilistic languages for interventionist counterfactual reasoning when these are applied to recursive, parametric causal teams; we also shortly sketched what challenges arise in the treatment of the nonparametric case, and proposed tentative clauses for nonclassical semantical notions that naturally arise in this case. Finally, we analyzed how some of the notions of cause and invariance that arise from manipulationist theories of causation can be expressed and generalized in our framework. These initial explorations of causal team semantics leave space for further investigation. On the side of causal modeling, parametric nonrecursive and nonparametric causal teams certainly deserve a more extensive study than what was sketched here (we find it unlikely, instead, that the study of the nonrecursive nonparametric teams might be fruitful). Another important case in causal modeling, which we only briefly grazed in this paper, are models with unobserved variables; defining a causal team semantics for this case would surely require some deep rethinking of the framework. Similar challenges might be posed by the study of forms of interventions beyond the ``surgical'' ones considered here. 
On the logical side, it would be important to develop tools for the comparison of expressive power of distinct languages, and to investigate systematically whether the introduction of counterfactuals leads to significant new definability results. The inferential aspects of causal languages should also be further investigated; in particular, complete deduction systems for deterministic languages with (in)dependence atoms are missing; and the case of probabilistic languages is still unexplored. Further directions for investigation are comparisons with previous notion of counterfactuals, and the study of the modal structure induced by interventions on teams. \begin{comment} We call $\mathcal{ICD}$ the language $\mathcal{CD}$ extended with atoms of the form $INV(\overline X;Y)$. It is obvious that $\mathcal{ICD}$ is still a downward closed logic. HERE WOULD BE NICE TO DEVELOP A FORMAL CALCULUS TO DERIVE ALL INVARIANCES. More generally, we can consider atoms of the form $INV(\psi)$, with analogous semantic clause, stating that $\psi$ is an invariant property of the team We call $\mathcal{ICD}P$ the language $\mathcal{CD}$ extended with atoms of the form $INV(\psi)$. IS THIS STILL DOWNWARD CLOSED? I THINK SO. AGAIN FORMAL CALCULUS. One defect of this language is that it cannot express basic facts relating invariance and dependence, such as the fact that, if $\psi$ is invariant, then it holds. We then need to include classical implication in our language (OR THE INTUITIONISTIC ONE? WHICH ONE CORRESPONDS MORE NEATLY TO SEMANTICAL IMPLICATION? I AM SURPRISED THAT CLASSICAL IMPLICATION IS NEVER MENTIONED IN WORKS ON PROPOSITIONAL DEPENDENCE LOGIC): \[ T\models \psi\rightarrow \chi \iff T\not\models\psi \text{ or } T\models \chi. \] We call $\ARROW{\mathcal{CD}}$, $\ARROW{\mathcal{ICD}}$, $\ARROW{\mathcal{ICD}}P$ the previously defined logics to which classical implication is added. Now we can state the logical truth \[ \models INV(\psi) \rightarrow \psi \] But notice that the logics with classical implication are not downward closed; for example, any causal team with $T^+ = \{\{(X,1),(Y,1)\},\{(X,2),(Y,1)\}\}$ satisfies $X=1\rightarrow Y=2$, but its subteam with $S^+ = \{\{(X,1),(Y,1)\}\}$ does not. If we want to recover downward closure, we may use instead intuitionistic implication: \[ T\models \psi\hspace{2pt}|\hspace{-4pt}\rightarrow\chi \iff \text{for any subteam $S$ of } T, S\not\models\psi, \text{ or } S\models \chi. \] We name the corresponding logics accordingly. \end{comment} \begin{comment} \subsection{Locality} ADD INTERACTIONS BETWEEN CF AND SELECTION AND FUNCTIONAL DEPENDENCE? ADD MIGHT COUNTERFACTUAL? \end{comment} The present work has been developed within the Academy of Finland Project n.286991, ``Dependence and Independence in Logic: Foundations and Philosophical Significance''. \end{document}
\begin{document} \addtolength{\baselineskip}{2.3mm} \begin{center} {\Large Reply to Comment: \\ Quantum Cryptography Based on Orthogonal States?} \end{center} Peres \cite{Peres} claims that our protocol \cite{GV} does not present any novel feature, and it is very similar to the oldest protocol of Bennett and Brassard \cite{BB84} (BB84). We completely disagree with this claim and with other points raised in the Comment. The essential novelty of our protocol is that the carrier of information is in a quantum state belonging to a definite set of orthonormal states. Any other protocol, as well as the BB84 scheme, does not have this feature, and in fact their security is based on that. Let us quote a recent paper \cite{EHPP} stating that: \begin{quotation} ``... cloning can give a faithful replica, while leaving the state of the original intact, only if it is known in advance that the carrier of information is in a quantum state belonging to a definite set of orthonormal states. If this is not the case, the eavesdropper will not be able to construct even an imperfect cloning device, which would give some information on the carrier without modifying it: a device of this sort would violate unitarity. Therefore coding based on nonorthogonal quantum states (which cannot be cloned) gives the possibility to detect any eavesdropping attempt''. \end{quotation} Thus, the security of BB84 (which uses 4 states, not all orthogonal) is assured by the `no-cloning' theorem, which is not applicable to our case. Peres claims that in our method Eve has access only to nonorthogonal states: ``The $\rho'_\pm$ states, {\it as seen by Eve\/}, are not orthogonal. Their are {\it identical\/}". However, the nonorthogonality (as seen by Eve) in our scheme is not ``just as in the BB84 protocol''. As Peres admits, in the case of {\it known} sending times our protocol is not secure, yet his nonorthogonality argument remains the same. The security of our protocol is not based on nonorthogonality, but on causality. As we have proved in the Letter \cite{GV}, a successful eavesdropping is possible only if some information can reach Bob before it leaves Alice's site - therefore, the protocol is secure. This is also the feature of the protocol proposed at the end of the Comment, in which a particle is sent at a known time in one out of two GV interferometers (GV2). A successful eavesdropping is impossible in this case too, otherwise the wavepacket delayed at Alice's location has to reach Bob's site before leaving Alice' site. Note that in the case of a single GV interferometer (and known sending times) Eve can send Bob a dummy particle at the appropriate time, but here she does not know which of the two interferometers to use for sending it. According to Peres, an important common feature of GV and BB84 (or other protocols interpolating between these two) is that ``information is sent in two {\it consecutive} steps, and security is achieved by withholding the second piece of information until after Bob receives the first one''. In his view the first step is sending the particle, and the second step is sending the necessary classical information: the chosen basis (BB84), the transmission time (GV), or the chosen interferometer (GV2). The first conceptual difference between the protocols is that in BB84 the two steps are necessary for sending the information, while in GV or GV2 one step is enough. The only purpose of the second step is to assure security against eavesdropping. 
The second difference is that the first step of our protocol also consists of two stages: sending the first wavepacket and sending the second wavepacket (the delayed one). Alice does not have to wait until the end of the first step before announcing the sending time (GV) or the interferometer in use (GV2). She can do that after the first stage of the first step (i.e. after the first wavepacket reaches Bob); thus, `the second step' might end before `the first step'. These two stages of the first step, i.e. the fact that the quantum signal consists of two separate parts, are the core of our method, and we do not see their analog in BB84 or any other protocol. Finally, it seems that Peres has not understood the `relativistic' versions of our protocol. First, it is not true that the storage rings have to be larger than the distance between Alice and Bob. When the communication is based on photons which travel on straight lines, the time delay can be made as small as desired (it depends on the width of the wavepackets and on the accuracy of the clocks). Contrary to Peres' claim (and his Fig. 1), Eve can simultaneously access the two branches of the interferometer most of the time, and still the protocol is secure. A proof similar to that given in the Letter \cite{GV} shows that a successful eavesdropping leads to superluminal signaling. Second, it is not true that in the case of the protocol with two widely separated paths and no time delay, ``a {\it team} of eavesdroppers could use mirrors to redirect the photon paths toward a common inspection center, and hence to Bob, without arousing suspicion''. Such an operation invariably increases the flight-time of the photons, and therefore the users can easily expose the team by analyzing the timing. Since the information is encoded in the relative phase between the wavepackets, even more sophisticated eavesdropping methods cannot work, unless they use superluminal particles. \noindent Lior Goldenberg and Lev Vaidman \\ School of Physics and Astronomy, \\ Raymond and Beverly Sackler Faculty of Exact Sciences, \\ Tel-Aviv University, \\ Tel-Aviv 69978, Israel \end{document}
\begin{document} \title{An inertial three-operator splitting algorithm with applications to image inpainting} \author{ Fuying Cui$^{1}${\thanks{email: [email protected]}}, Yuchao Tang$^{1}${\thanks{Corresponding author. email: [email protected]}}, Yang Yang$^{1}${\thanks{email: [email protected]}} \\ \\ { \small 1. Department of Mathematics,} \\ {\small Nanchang University } \\ {\small Nanchang 330031, Jiangxi, P.R. China} \\} \date{} \date{} \maketitle{} {\bf Abstract.} The three-operators splitting algorithm is a popular operator splitting method for finding the zeros of the sum of three maximally monotone operators, with one of which is cocoercive operator. In this paper, we propose a class of inertial three-operator splitting algorithm. The convergence of the proposed algorithm is proved by applying the inertial Krasnoselskii-Mann iteration under certain conditions on the iterative parameters in real Hilbert spaces. As applications, we develop an inertial three-operator splitting algorithm to solve the convex minimization problem of the sum of three convex functions, where one of them is differentiable with Lipschitz continuous gradient. Finally, we conduct numerical experiments on a constrained image inpainting problem with nuclear norm regularization. Numerical results demonstrate the advantage of the proposed inertial three-operator splitting algorithms. \textbf{Key words}: Three-operator splitting algorithm; Inertial three-operator splitting algorithm; Krasnoselskii-Mann iteration; Image inpainting. \textbf{AMS Subject Classification}: 90C25; 65K05; 47H05. \section{ Introduction}\label{sec-introduction} \vskip 3mm Operator splitting algorithms have been widely used for solving many convex optimization problems in signal and image processing, machine learning and medical image reconstruction, etc. The traditional operator splitting algorithms include the forward-backward splitting algorithm \cite{lionsandmercier1979}, the Douglas-Rachford splitting algorithm \cite{bouglasandrachford1956TAMS}, and the forward-backward-forward splitting algorithm \cite{Tseng2000SIAM}, which are originally designed for solving the monotone inclusion of the sum of two maximally monotone operators, where one of which is assumed to be cocoercive or just Lipschitz continuous. In recent years, the monotone inclusion problems with the sum of more than two operators have been received much attention. See, for example \cite{Combettes2012SVVA,Combettes2013Systems,vu2013ACM,Vu2015JOTA,Tang2019JCM}. Let $H$ be a real Hilbert space. Let $A, B: H\rightarrow 2^H$ be two maximally monotone operators and let $C:H\rightarrow H$ be a cocoercive operator. Davis and Yin \cite{davis2015} proposed a so-called three-operator splitting algorithm (also known as Davis-Yin splitting algorithm \cite{Liu2019JOPT}) to solve the monotone inclusion of the form \begin{equation}\label{more two-monotone-inclusion} \textrm{ find }\, x\in H \textrm{ such that }\, 0\in Ax+Bx+Cx. \end{equation} They pointed out that the three-operator splitting algorithm includes many well-known operator splitting algorithms, such as the forward-backward splitting algorithm \cite{combettes2005}, the Douglas-Rachford splitting algorithm \cite{combettes2007} and the backward-forward splitting algorithm. Raguet et al. 
\cite{Raguet-SIAM-2013} proposed a generalized forward-backward splitting algorithm to solve the monotone inclusion of \begin{equation}\label{more-monotone-inclusion} \textrm{ find }\, x\in H \textrm{ such that }\, 0\in{ Bx+\sum_{i=1}^{m}A_{i}x}, \end{equation} where $m\geq 1$ is an integer, $\{A_{i}\}_{i=1}^{m}: H\rightarrow 2^H$ are maximally monotone operators and $B:H\rightarrow H$ is a cocoercive operator. In particular, when $m=1$, the generalized forward-backward splitting algorithm reduces to the forward-backward splitting algorithm. Furthermore, Raguet and Landrceu \cite{Raduet2015SIAMJIS} presented a precondition of the generalized forward-backward splitting algorithm for solving the monotone inclusion (\ref{more-monotone-inclusion}). By introducing a suitable product space, the monotone inclusion (\ref{more-monotone-inclusion}) can be reformulated as the sum of three maximally monotone operators, where one of them is a normal cone of a closed vector subspace. To solve this equivalent monotone inclusion problem, Briceno-Arias \cite{briceno2015Optim} introduced two splitting algorithms: the Forward-Douglas-Rachford splitting algorithm and the forward-partial inverse splitting algorithm. As a consequence, the generalized forward-backward splitting algorithm could be derived from the Forward-Douglas-Rachford splitting algorithm. Notice that the normal cone of a closed vector subspace is also a maximally monotone operator, then the three-operator splitting algorithm can be applied to solve the monotone inclusion studied in \cite{briceno2015Optim}, which also results in the generalized forward-backward splitting algorithm \cite{Raguet-SIAM-2013}. Some recent generalization of the three-operator splitting algorithm in the direction of stochastic and inexact can be found in \cite{Cevher2016Report,Zong2018}. In recent years, the inertial method has become more and more popular. Various inertial type algorithms were studied, see for example \cite{Alvarez2004,Ochs2014,Bot2015NFAO,Chen2015,Dong2016Optim,Combettes2017,Attouch2018} and references therein. The inertial method is also called the heavy ball method, which is based on a discretization of a second order dissipative dynamic system. In 1983, Nesterov \cite{nesterov1983} proposed an inertial gradient descent algorithm, which modified the heavy ball method of Polyak \cite{polyak1964}. Further, G\"{u}ler \cite{Guler1992} generalized Nesterov's method to the proximal point algorithm for solving the minimization of a nonsmooth convex function. In \cite{Alvarez2001}, Alvarez and Attouch proposed an inertial proximal point algorithm (iPPA) for solving zeros of a maximally monotone operator, which is defined by the following: let $x^{0}, x^{-1}\in H$, and set \begin{equation}\label{inertial-proximal-point-algorithm} \left\{ \begin{aligned} \omega^{k} &=x^{k}+\alpha_{k}(x^{k}-x^{k-1}), \\ x^{k+1} &=(I+\lambda_{k}A)^{-1}(\omega^{k}). \end{aligned} \right. \end{equation} They proved the convergence of (\ref{inertial-proximal-point-algorithm}) under the condition: $\inf \lambda_k >0$ and \begin{equation}\label{inertial-parameter-1} \begin{aligned} & \textrm{ (i) }\, \{\alpha_k \} \subseteq [0,\overline{\alpha}),\, \overline{\alpha}\in [0,1), \\ & \textrm{ (ii) } \sum_{k=0}^{+\infty} \alpha_k \| x^k - x^{k-1} \|^2 < +\infty. 
\end{aligned} \end{equation} Moudafi and Oliny \cite{Moudafi2003} introduced an inertial algorithm for finding zeros of the sum of two maximally monotone operators $A$ and $B$, where $B$ is $\beta$-cocoercive, for some $\beta >0$. The following iterative algorithm is defined in \cite{Moudafi2003}. \begin{equation}\label{inertial-algorithm-MO} \left\{ \begin{aligned} \omega^{k} &=x^{k}+\alpha_{k}(x^{k}-x^{k-1}), \\ x^{k+1} &=(I+\lambda_{k}A)^{-1}(\omega^{k} - \lambda_k Bx^k). \end{aligned} \right. \end{equation} They proved the convergence of the proposed inertial algorithm (\ref{inertial-algorithm-MO}) under the same condition (\ref{inertial-parameter-1}) imposed on the inertial parameters $\{\alpha_k\}$ as well as $\lambda_k \in (0, 2\beta)$. Lorenz and Pock \cite{lorenz2015JMIV} proposed a preconditioner, inertial forward-backward splitting algorithm as follows, in which the cocoercive operator is evaluated at the inertial extrapolate iteration scheme and a symmetric, positive define linear operator is used as a preconditioner. \begin{equation}\label{inertial-algorithm-LP} \left\{ \begin{aligned} \omega^{k} &=x^{k}+\alpha_{k}(x^{k}-x^{k-1}), \\ x^{k+1} &=(I+\lambda_{k}M^{-1}A)^{-1}(\omega^{k} - \lambda_k M^{-1} B\omega^k), \end{aligned} \right. \end{equation} where $M$ is a linear self-adjoint and positive define operator. Comparing (\ref{inertial-algorithm-MO}) and (\ref{inertial-algorithm-LP}), we can see that the operator $B$ is evaluated at the current point $x^k$ in (\ref{inertial-algorithm-MO}), but it is calculated at the inertial term in (\ref{inertial-algorithm-LP}). The inertial proximal point algorithm (\ref{inertial-proximal-point-algorithm}) has been proved an effective way to accelerate the speed of the proximal point algorithm. Before the inertial proximal point algorithm, the relaxed proximal point algorithm proposed by Eckstein and Bertsekas \cite{Eckstein1992} is another approach for accelerating the proximal point algorithm. For this purpose, Maing\'{e} \cite{Mainge2008JCAM} combined the relaxed strategy with the inertial method, and proposed the following relaxed inertial proximal point algorithm \begin{equation}\label{relaxed-inertial-proximal-point-algorithm} \left\{ \begin{aligned} \omega^{k} &=x^{k}+\alpha_{k}(x^{k}-x^{k-1}), \\ x^{k+1} &= (1-\rho_k)\omega^k + \rho_k(I+\lambda_{k}A)^{-1}(\omega^{k}). \end{aligned} \right. \end{equation} Under the condition $0<\inf \rho_k \leq \sup \rho_k <2$ and (\ref{inertial-parameter-1}), the weak convergence of (\ref{relaxed-inertial-proximal-point-algorithm}) was proved in \cite{Mainge2008JCAM}, which was based on the inertial Krasnoselskii-Mann iteration (\ref{inertial-KM-iteration}). In order to ensure the convergence of the inertial algorithms mentioned above, the condition (\ref{inertial-parameter-1}) is usually enforced. To deal with the issue of choosing the inertial parameters $\{\alpha_k\}$ as a priori, Alvarez and Attouch \cite{Alvarez2001} proved the convergence of the inertial proximal point algorithm (\ref{inertial-proximal-point-algorithm}) under the requirement of $\{\alpha_k\}$ is a nondecreasing sequence in $[0,\overline{\alpha})$ with $\overline{\alpha}< 1/3$. Lorenz and Pock \cite{lorenz2015JMIV} also proved the convergence of the inertial forward-backward splitting algorithm (\ref{inertial-algorithm-LP}) without the condition (\ref{inertial-parameter-1}). Bo\c{t} et al. 
\cite{Bot2015AMC} proved the convergence of the inertial Krasnoselskii-Mann iteration (\ref{inertial-KM-iteration}) without the condition (\ref{inertial-parameter-1}). Consequently, they proposed an inertial Douglas-Rachford splitting algorithm for solving the monotone inclusion problem of the sum of two maximally monotone operators. Further, in \cite{Bot2016MTA}, Bo\c{t} and Csetnek proposed an inertial alternating direction method of multipliers based on the inertial Douglas-Rachford splitting algorithm. In the context of convex minimization, the inertial forward-backward splitting algorithm leads to the so-called inertial proximal gradient algorithm. As we have mentioned, the inertial parameters greatly affect the performance of inertial algorithms. Nesterov's sequence is one popular choice, which leads to the famous fast iterative shrinkage-thresholding algorithm (FISTA) \cite{beck2009}. However, the convergence of the sequences generated by FISTA remained an open question for a long time. Recently, Chambolle and Dossal \cite{Chambolle2015} proposed a different kind of inertial parameters for studying the convergence of the inertial proximal gradient algorithm. The convergence rate of the inertial proximal gradient algorithm with various options for the inertial sequences $\{\alpha_k\}$ was recently established in \cite{Attouch2016SJO, Attouch2018SJO}. An important property of these special inertial parameters for the inertial proximal gradient algorithm is that they all converge to one. However, this choice imposes more restrictions on the step size and also excludes relaxation parameters. In addition, the convergence analysis of the inertial proximal gradient algorithm with these special choices of inertial parameters is limited to the context of convex minimization. It is not clear whether these results can be extended to the general inertial forward-backward splitting algorithm for solving the monotone inclusion problem, such as (\ref{inertial-algorithm-MO}) or (\ref{inertial-algorithm-LP}). The purpose of this paper is to study a class of inertial three-operator splitting algorithms for solving the monotone inclusion problem (\ref{more two-monotone-inclusion}), which unifies the three-operator splitting algorithm and the inertial methods. We analyze the convergence of the proposed iterative algorithm under different conditions on the parameters. As a consequence, we obtain an inertial three-operator splitting algorithm for solving the convex minimization problem of the sum of three convex functions, where one of them is differentiable with a Lipschitz continuous gradient and the others are convex functions with easily computable proximity operators. We verify the advantage of the proposed inertial three-operator splitting algorithm by applying it to a constrained image inpainting problem. The rest of this paper is organized as follows. In Section 2, we recall some notation and definitions from monotone operator theory and convex analysis; we also give some technical lemmas, which will be used in the following sections. In Section 3, we propose a class of inertial three-operator splitting algorithms and establish the main convergence theorems; we also apply the proposed iterative algorithm to the convex optimization problem of the sum of three convex functions. In Section 4, we apply the proposed inertial three-operator splitting algorithm to a constrained image inpainting problem and report the results of numerical experiments. Finally, we give some conclusions and discuss future work.
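Before turning to the preliminaries, the following minimal sketch illustrates, on a toy problem, the inertial extrapolation pattern shared by the schemes (\ref{inertial-proximal-point-algorithm})--(\ref{relaxed-inertial-proximal-point-algorithm}): an extrapolated point is formed from the two most recent iterates and the operator evaluations are then applied to it, as in (\ref{inertial-algorithm-LP}). The toy problem ($\ell_{1}$-regularized least squares), the constant inertial parameter and the step size are illustrative assumptions only, not part of the analysis below.
\begin{verbatim}
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(A, b, mu, alpha=0.3, n_iter=200):
    # Toy inertial forward-backward iteration for min_x 0.5*||Ax-b||^2 + mu*||x||_1,
    # with the gradient evaluated at the extrapolated point.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    step = 1.0 / L                     # step size in (0, 2*beta), with beta = 1/L
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = x + alpha * (x - x_prev)                    # inertial extrapolation
        grad = A.T @ (A @ w - b)                        # forward (gradient) step at w
        x_prev, x = x, soft_threshold(w - step * grad, step * mu)  # backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = inertial_forward_backward(A, A @ x_true, mu=0.1)
\end{verbatim}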
\section{Preliminaries}\label{sec:pre} \vskip 3mm In this section, we review some basic definitions and lemmas in monotone operator theory and convex analysis. Let $H$ be a real Hilbert space, the scalar product in $H$ is denoted by $\langle\cdot,\cdot\rangle$ and the corresponding norm in $H$ is $\|\cdot\|$. Let $A:H\rightarrow 2^H$ be a set-valued operator. We denote its domain, range, graph and zeros by dom $A= \{ x\in H| Ax \neq \emptyset \}$, ran $A = \{ u\in H | (\exists x\in H) u\in Ax\}$, gra $A = \{ (x,u)\in H\times H | u\in Ax \}$, and zer $A = \{x\in H | 0\in Ax\}$, respectively. $A^{-1}$ be inverse operator of $A$, defined by $(x,u)\in \textrm{gra } A$ if and only if $(u,x)\in \textrm{gra } A^{-1}$. \begin{definition}(\cite{bauschkebook2017}) Let $A:H\rightarrow 2^H$ be a set-valued operator. \noindent \emph{(i)}\, $A$ is said to be monotone, if $$ \langle x-y,u-v \rangle \geq 0, \quad \forall (x,u), (y,v)\in \textrm{gra } A. $$ Moreover, $A$ is said to be maximally monotone, if its graph is not strictly contained in the graph of any other monotone operator. \noindent \emph{(ii)}\, $A$ is said to be uniformly monotone, if there exists an increasing function $\psi: [0,+\infty) \rightarrow [0,+\infty]$ which vanishes only at $0$ such that $$ \langle x-y,u-v \rangle \geq \psi(\|x-y\|), \forall (x,u), (y,v)\in \textrm{gra } A. $$ If $\psi =\gamma (\cdot)^{2}$, then the $A$ is called $\gamma$-strongly monotone. \noindent \emph{(iii)}\, $A$ is said to be demiregular at $x \in \textrm{dom}\, A$, if for all $u\in Ax$ and for all sequences $(x^k, u^k)\in gra\, A$ with $x^k \rightharpoonup x$ and $u^k \rightarrow u$, we have $x^k\rightarrow x$. \end{definition} \begin{definition}(\cite{bauschkebook2017}) Let $B:H\rightarrow H$ be a single-valued operator. $B$ is called $\beta$-cocoercive, where $\beta \in (0, +\infty)$, if $$ \langle x-y, Bx-By \rangle \geq \beta \|Bx-By\|^{2}, \quad \forall x,y\in H. $$ \end{definition} \begin{definition}(\cite{bauschkebook2017}) Let $A:H\rightarrow 2^H$ be a maximally monotone operator. The resolvent operator of $A$ with index $\gamma >0$ is defined as $$ J_{\gamma A} = (I+\gamma A)^{-1}. $$ where $I$ is the identity operator. \end{definition} Next, we recall definitions of nonexpansive and related nonlinear operators. These operators often appear in the convergence analysis of operator splitting algorithms. \begin{definition}(\cite{bauschkebook2017}) Let $C$ be a nonempty subset of $H$. Let $T:C\rightarrow H$, then \noindent \emph{(i)} $T$ is called nonexpansive, if $$ \|Tu-Tv\| \leq \|u-v\|, \quad \forall u,v\in C. $$ \noindent \emph{(ii)} $T$ is called firmly nonexpansive, if $$ \|Tu-Tv\|^2 \leq \|u-v\|^2 - \| (I-T)u- (I-T)v \|^2, \quad \forall u,v\in C. $$ \noindent \emph{(iii)} $T$ is called $\theta$-averaged, where $\theta\in (0,1)$, if there exists a nonexpansive operator $S$ such that $T = (1-\theta)I + \theta S$. \end{definition} It is easy to prove that $T$ is $\theta$-averaged if and only if $$ \|Tu-Tv\|^2 \leq \|u-v\|^2 - \frac{1-\theta}{\theta}\|(I-T)u-(I-T)v\|, \quad \forall u,v\in C. $$. On the other hand, the resolvent operator $J_{\gamma A}$ is firmly nonexpansive and also nonexpansive operator. Davis and Yin \cite{davis2015} proved the following important lemma, which provides a fixed point characterize of the three-operator monotone inclusion problem (\ref{more two-monotone-inclusion}). 
\begin{lemma}(\cite{davis2015})\label{three-operator-lemma} Let $\gamma >0$, define $T = J_{\gamma A}(2J_{\gamma B} - I - \gamma CJ_{\gamma B}) + I - J_{\gamma B}$. The following set equality holds \begin{equation} zer(A+B+C)=J_{\gamma B}(Fix (T)). \nonumber \end{equation} In addition \begin{equation} Fix (T) = \{x+\gamma \mu | 0 \in (A+B+C)x, \mu \in (Bx) \bigcap (-Ax-Cx)\}. \nonumber \end{equation} \end{lemma} \begin{proposition}(\cite{davis2015})\label{three-operator-prop} Suppose that $T_{1}, T_{2}:H\rightarrow H$ are firmly nonexpansive and $C$ is $\beta$-cocoercive, for some $\beta > 0 $. Let $\gamma \in (0,2\beta)$. Then \begin{equation} T:=I-T_{2}+T_{1}\circ (2T_{2}-I-\gamma C\circ T_{2}), \end{equation} is $\alpha$-averaged, with $\alpha =\frac {2\beta}{4\beta-\gamma}< 1$. In particular, the following inequality holds \begin{equation} \|Tx-Ty\|^2 \leq \|x-y\|^2 - \frac{1-\alpha}{\alpha}\|(I-T)x-(I-T)y\|, \quad \forall x,y\in H. \end{equation} Further, for any $\overline{\varepsilon} \in (0,1)$ and $\gamma \in (0,2\beta\overline{\varepsilon})$, let $\overline{\alpha} =\frac {1} {2-\overline{\varepsilon}} < 1$. Then the following holds for all $x,y \in H$ \begin{align} \|Tx-Ty\|^2 &\leq \|x-y\|^2 - \frac{1-\overline{\alpha}}{\overline{\alpha}}\|(I-T)x-(I-T)y\|^{2}\nonumber \\ &-\gamma(2\beta-\frac {\gamma}{\overline{\varepsilon}})\|C\circ T_{2}x-C\circ T_{2}y\|^{2}, \quad \forall x,y\in H. \end{align} \end{proposition} The Krasnoselskii-Mann (KM) iteration scheme plays an important role in studying various fixed point algorithms arising in signal and image processing. Let $x^0\in H$, the KM iteration is defined as follow \begin{equation}\label{KM-iteration} x^{k+1} = (1-\rho_k)x^k + \rho_k Tx^k, \quad k \geq 0, \end{equation} where $\rho_k \in (0,1)$. In order to accelerate the KM iteration (\ref{KM-iteration}), the inertial Krasnoselskii-Mann (iKM) iteration was introduced. Let $x^{0}, x^{-1}\in H$, the iKM iteration scheme reads as \begin{equation}\label{inertial-KM-iteration} \left \{ \begin{aligned} y^k & = x^k + \alpha_k (x^k - x^{k-1}), \\ x^{k+1} & = (1-\rho_k)y^k + \rho_k Ty^k,\quad k \geq 0, \end{aligned}\right. \end{equation} where $\rho_k \in (0,1)$ and $\alpha_k \in (0,1)$. The following results concerned with the convergence analysis of the iKM iteration (\ref{inertial-KM-iteration}). Maing\'{e} \cite{Mainge2008JCAM} proved the following convergence result of the iKM iteration (\ref{inertial-KM-iteration}). \begin{lemma}(\cite{Mainge2008JCAM})\label{iKM-convergence-1} Let $H$ be a real Hilbert space. Let $T:H\rightarrow H$ be a nonexpansive operator such that $Fix (T)\neq \varnothing $. Let $\{x^k\}$ be generated by (\ref{inertial-KM-iteration}), where $\{\rho_k\}\subset (0,1)$ and $\{\alpha_k\}\subset [0,1)$ satisfy the following conditions: \noindent \emph{(i)} $0 \leq \alpha_{k} \leq \alpha < 1 $, $ 0< \underline{\lambda} \leq \lambda_{k} \leq \overline{\lambda} <1$. \noindent \emph{(ii)} $\sum_{k=0}^{+\infty}\alpha_{k}\|x^{k}-x^{k-1}\|^{2} < +\infty$. \noindent Then \noindent \emph{(a)} For any $x^{*} \in Fix(T), \lim_{k\rightarrow +\infty}\|x^{k}-x^{*}\|$ exists; \noindent \emph{(b)} $\{x^{k}\}$ converges weakly to a point in $Fix(T)$. \end{lemma} Bo\c{t} et al. \cite{Bot2015AMC} removed the condition (ii) in Lemma \ref{iKM-convergence-1}, but needed stronger conditions on $\{\alpha_k\}$ and $\{\lambda_k\}$. They proved the following convergence result. \begin{lemma}(\cite{Bot2015AMC})\label{iKM-convergence-2} Let $H$ be a real Hilbert space. 
Let $T:H\rightarrow H$ be a nonexpansive operator such that $Fix (T)\neq \varnothing $. Let $\{x^k\}$ be generated by (\ref{inertial-KM-iteration}), where $\{\rho_k\}\subset (0,1)$ and $\{\alpha_k\}\subset [0,1)$ satisfy the following conditions: \noindent \emph{(i)} $\{\alpha_{k}\}_{k\geq 1}$ is nondecreasing with $\alpha_{1}=0$ and $0\leq \alpha_{k} \leq \alpha <1$ for every $k\geq 1$; \noindent \emph{(ii)} Let $\lambda, \sigma, \delta > 0 $ such that \begin{equation} \delta > \frac{\alpha^{2}(1+\alpha)+\alpha \sigma}{1-\alpha^{2}} \quad and \quad0<\lambda \leq \lambda_{k}\leq \frac{\delta-\alpha[\alpha (1+\alpha)+\alpha \delta+\sigma]}{\delta[1+\alpha (1+\alpha)+\alpha \delta+\sigma]} \end{equation} \noindent Then the following hold: \noindent \emph{(a)} For any $y \in Fix(T), \lim_{k\rightarrow +\infty}\|x^{k}-y\|$ exists; \\ \noindent \emph{(b)} $\sum _{k=0}^{\infty}\|x^{k+1}-x^{k}\|^{2} < +\infty$; \\ \noindent \emph{(c)} $\{x^{k}\}$ converges weakly to a point in $Fix(T)$. \end{lemma} \begin{figure}\label{parameter-range} \end{figure} \begin{remark} The inertial parameters $\{\alpha_k\}$ in Lemma \ref{iKM-convergence-1} have to be calculated based on the current iteration and the last iteration. In Lemma \ref{iKM-convergence-2}, the relaxation parameters $\{\lambda_k\}$ are restricted by the constants $\delta, \sigma$ and $\alpha$. To show the impact of these parameters to the relaxation parameters, we plot the range of the relaxation parameters $\{\lambda_k\}$ in Figure \ref{parameter-range}. Here, we let $\delta = \frac{\alpha^{2}(1+\alpha)+\alpha \sigma}{1-\alpha^{2}} + \delta_{*}$. To get a large selection of the relaxation parameters $\{\lambda_k\}$, we recommend to set $\sigma = 0.01$ or $\sigma = 0.001$ and $\delta_{*}=1$. From Figure \ref{parameter-range}, it is obvious that the relaxation parameters $\{\lambda_k\}$ are decreasing as the maximal inertial parameter $\alpha$ increasing. \end{remark} Finally, we close this section by introducing some elements in convex analysis. See \cite{Zalinescu2012}. Let $f:H\rightarrow (-\infty, +\infty]$. We denote that $\textrm{ gra } f = \{x\in H | f(x)< +\infty\}$ is effective domain of $f$. We call $f$ is proper if $\textrm{ gra } f \neq \emptyset$. Let $\Gamma_{0}(H)$ be the class of proper lower semicontinuous convex functions from $H$ to $(-\infty, + \infty]$. The subdifferential of $f:H\rightarrow (-\infty, +\infty]$ is the set $\partial f(x) = \{u\in H | f(y)\geq f(x) + \langle u, y-x\rangle, \forall y\in H\}$. \begin{definition}(\cite{Zalinescu2012}) Let $f\in \Gamma_{0}(H)$. $f$ is called uniformly convex, if there exists an increasing function $\phi: [0,+\infty) \rightarrow [0,+\infty]$ that vanishes only at $0$ such that, for every $\alpha\in (0,1)$ and every $x,y\in \textrm{ dom } f$, $$ f(\alpha x + (1-\alpha)y) \leq \alpha f(x) + (1-\alpha)f(y) - \alpha (1-\alpha)\phi (\|x-y\|). $$ \end{definition} \begin{definition}(\cite{Zalinescu2012}) Let $f\in \Gamma_{0}(H)$. Let $u\in H$. The proximity operator of $f$ with index $\lambda >0$ is defined by $$ prox_{\lambda f}(u) = \arg\min_{x} \{ \frac{1}{2\lambda} \|x-u\|^2 + f(x) \}. $$ \end{definition} Notice that $J_{\lambda \partial f} = prox_{\lambda f}$. The proximity operator is a generalization of the projection operator $P_{C}$, where $C$ is a nonempty closed convex set. 
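For illustration (two standard closed-form examples, recalled here because they reappear in Section 4): if $f=\iota_{C}$ is the indicator function of a nonempty closed convex set $C$, then $prox_{\lambda f}=P_{C}$ for every $\lambda>0$; and if $f=\|\cdot\|_{*}$ is the nuclear norm of a matrix $x$ with singular value decomposition $x=U\Sigma V^{T}$, $\Sigma=\textrm{diag}(\sigma_{1},\dots,\sigma_{r})$, then
$$
prox_{\lambda f}(x)= U\,\textrm{diag}\big(\max\{\sigma_{1}-\lambda,0\},\dots,\max\{\sigma_{r}-\lambda,0\}\big)\,V^{T},
$$
that is, the singular values of $x$ are soft-thresholded by $\lambda$.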
\section{An inertial three-operator splitting algorithm} \vskip 3mm In this section, we propose an inertial three-operator splitting algorithm to solve the monotone inclusion problem (\ref{more two-monotone-inclusion}). This algorithm combines the inertial iterative algorithm (\ref{inertial-KM-iteration}) with the three-operator splitting algorithm \cite{davis2015}. The detailed iterative algorithm is presented in Algorithm \ref{alg1}. \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \begin{algorithm}[htb] \caption{ An inertial three-operator splitting (iTOS) algorithm} \label{alg1} \begin{algorithmic}[1] \REQUIRE For any given $z^{0}$, $z^{-1}\in H$. Choose $\gamma, \lambda_{k}$ and $\alpha_{k}$. \\ For $k=0,1,2,\cdots$, do \quad \STATE $y^{k}=z^{k}+\alpha_{k}(z^{k}-z^{k-1})$; \quad \STATE $x_{B}^{k}=J_{\gamma B}y^{k}$; \quad \STATE $x_{A}^{k}=J_{\gamma A}(2x_{B}^{k}-y^{k}-\gamma Cx_{B}^{k})$; \quad \STATE $z^{k+1}=y^{k}+\lambda _{k}(x_{A}^{k}-x_{B}^{k})$. End for when some stopping criterion is met. \end{algorithmic} \end{algorithm} Let $\alpha_k =0$, then the inertial three-operator splitting algorithm reduces to the original three-operator splitting algorithm introduced by Davis and Yin \cite{davis2015}. Notice that most of the operator splitting algorithms can be written as KM iteration scheme (\ref{KM-iteration}) for computing fixed points of nonexpansive operators. The iKM iteration (\ref{inertial-KM-iteration}) is useful for developing various inertial operator splitting algorithms. Therefore, we shall make full use of Lemma \ref{iKM-convergence-1} and Lemma \ref{iKM-convergence-2} to prove the convergence of Algorithm \ref{alg1}. Now, we are ready to prove the first convergence theorem of Algorithm \ref{alg1}. \begin{theorem}\label{main-theorem} Let $H$ be a real Hilbert space. Let $A:H \rightarrow 2^{H}$ and $B:H\rightarrow 2^H$ be two maximally monotone operators. Let $C:H\rightarrow H$ be a $\beta$-cocoercive operator, for some $\beta> 0$. Define an operator $T:H\rightarrow H$ as follows \begin{equation}\label{def-T} T:= I-J_{\gamma B}+J_{\gamma A}(2J_{\gamma B}-I-\gamma C J_{\gamma B}). \end{equation} Let the iterative sequences $\{z^{k}\}, \{x_{A}^{k}\}$, and $\{x_{B}^{k}\}$ are generated by Algorithm \ref{alg1}. Assume that the parameters $\gamma, \{\alpha_{k}\}$, and $\{\lambda_{k}\}$ satisfy the following conditions: \noindent \emph{(i1)} $\gamma \in (0, 2\beta \overline{\varepsilon})$, where $\overline{\varepsilon} \in (0,1)$. \noindent \emph{(i2)} $\{\alpha_{k}\}$ is nondecreasing with $k\geq 1, \alpha_{1}=0$ and $ 0\leq \alpha_{k} \leq \alpha < 1 $. \noindent \emph{(i3)} for every $k\geq 1$, and $\lambda, \sigma, \delta > 0 $ such that \begin{equation} \delta > \frac{\alpha^{2}(1+\alpha)+\alpha \sigma}{1-\alpha^{2}} \textrm{ and }\, 0<\lambda \leq \lambda_{k}\leq \frac{\delta-\alpha[\alpha (1+\alpha)+\alpha \delta+\sigma]}{\overline{\alpha}\delta[1+\alpha (1+\alpha)+\alpha \delta+\sigma]}, \end{equation} where $\bar{\alpha} = \frac{1}{2-\overline{\varepsilon}}$. Then the following hold \noindent \emph{(i)} $\{z^{k}\}$ converges weakly to a fixed point of $T$. \noindent \emph{(ii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{B}^{k}\}$ converges weakly to $ J_{\gamma B}z^{*}\in zer(A+B+C)$. \noindent \emph{(iii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{A}^{k}\}$ converges weakly to $ J_{\gamma B}z^{*}\in zer(A+B+C)$. 
\noindent \emph{(iv)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Suppose that one of the following conditions hold \noindent \emph{(a)} $A$ is uniformly monotone on every nonempty bounded subset of $\textrm{ dom } A$.\\ \noindent \emph{(b)} $B$ is uniformly monotone on every nonempty bounded subset of $\textrm{ dom } B$.\\ \noindent \emph{(c)} $C$ is demiregular at every point $x \in zer(A+B+C)$. \\ Then $\{x_{A}^{k}\}$ and $\{x_{B}^{k}\}$ converge strongly to $J_{\gamma B}z^{*}\in zer(A+B+C)$. \end{theorem} \begin{proof} The iterative sequence $\{z^k\}$ generated by Algorithm \ref{alg1} is equivalent to \begin{equation}\label{eq3-1} z^{k+1} =(1- \lambda_{k})y^{k}+\lambda_{k}Ty^{k}. \end{equation} It follows from Proposition \ref{three-operator-prop} that $T$ is $\overline{\alpha}$-averaged. Then there exists a nonexpansive operator $R$ such that $T=(1-\overline{\alpha})I+\overline{\alpha} R$. Hence, we obtain from (\ref{eq3-1}) that \begin{align}\label{eq3-2} z^{k+1} & = (1- \lambda_{k})y^{k}+\lambda_{k}((1-\overline{\alpha})y^{k}+ \overline{\alpha} Ry^{k})\nonumber \\ & = (1- \lambda_{k}\overline{\alpha})y^{k}+\lambda_{k}\overline{\alpha} Ry^{k}. \end{align} Notice that $Fix(T)=Fix(R)\neq \emptyset$, and the conditions of (i1) and (i2) imply that all the conditions of Lemma \ref{iKM-convergence-1} are satisfied. Then we obtain that for any $y \in Fix(R)$, $\lim_{k\rightarrow +\infty}\|z^{k}-y\|$ exists. Moreover, $\sum _{k=0}^{\infty}\|z^{k+1}-z^{k}\|^{2} < +\infty$ and $\{z^{k}\}$ converges weakly to a point in $Fix(R)$. \noindent (i)\, Since $Fix(T)=Fix(R)$, then we obtain the conclusion of (i). \noindent (ii)\, Let $z^{*} \in Fix(T)$, from (\ref{eq3-1}), we have \begin{align}\label{eq3-3} \|z^{k+1}-z^{*}\|^{2} &=\|(1- \lambda_{k})(y^{k}-z^{*})+\lambda_{k}(Ty^{k}-z^{*})\|^{2}\nonumber \\ &=(1- \lambda_{k})\|y^{k}-z^{*}\|^{2}+\lambda_{k}\|Ty^{k}-z^{*}\|^{2}-\lambda_{k}(1- \lambda_{k})\|y^{k}-Ty^{k}\|^{2}. \end{align} Let $T_1 = J_{\gamma A}$ and $T_2 = J_{\gamma B}$ in Proposition \ref{three-operator-prop}, then we have \begin{align}\label{eq3-4} \|Ty^{k}-z^{*}\|^{2} &\leq \|y^{k}-z^{*}\|^{2}-\frac{(1-\overline{\alpha})}{\overline{\alpha}}\|(I-T)y^{k}-(I-T)z^{*}\|^{2}\nonumber\\ &-\gamma (2\beta-\frac {\gamma}{\overline{\epsilon}})\|CJ_{\gamma B}(y^{k})- CJ_{\gamma B}(z^{*})\|^{2}. \end{align} Substituting (\ref{eq3-4}) into (\ref{eq3-3}), we obtain \begin{align}\label{eq3-5} \|z^{k+1}-z^{*}\|^{2} &\leq \|y^{k}-z^{*}\|^{2}- \lambda_{k}(\frac{1}{\overline{\alpha}}-\lambda_{k})\|Ty^{k}-y^{k}\|^{2}\nonumber\\ &-\lambda_{k}\gamma (2\beta-\frac {\gamma}{\epsilon})\|CJ_{\gamma B}(y^{k})- CJ_{\gamma B}(z^{*})\|^{2}. \end{align} Next, we prove $\lim_{k\rightarrow +\infty}\|y^{k}-z^{*}\|=\lim_{k\rightarrow +\infty}\|z^{k}-z^{*}\|.$ In fact, \begin{align}\label{eq3-6} \|y^{k}-z^{*}\| &=\|z^{k}+\alpha_{k}(z^{k}-z^{k-1})-z^{*} \nonumber\|\\ &\leq\|z^{k}-z^{*}\|+\alpha_{k}\|z^{k}-z^{k-1}\|. \end{align} On the other hand, \begin{align}\label{eq3-7} \|z^{k+1}-z^{*}\| &=\|(1-\lambda_{k})(y^{k}-z^{*})+\lambda_{k}(Ty^{k}-z^{*})\| \nonumber\\ &\leq\|y^{k}-z^{*}\|. \end{align} Observe that $\lim_{k\rightarrow +\infty}\|z^{k}-z^{k-1}\|=0$ and $\lim_{k\rightarrow +\infty}\|z^{k}-z^{*}\|$ exists. We can conclude from (\ref{eq3-6}) and (\ref{eq3-7}) that $\lim_{k\rightarrow +\infty}\|y^{k}-z^{*}\|=\lim_{k\rightarrow +\infty}\|z^{k}-z^{*}\|$. 
Therefore, from (\ref{eq3-5}), we obtain \begin{equation}\label{eq3-8} \lim_{k\rightarrow +\infty}\|Ty^{k}-y^{k}\|=0, \end{equation} and \begin{equation}\label{eq3-9} \lim_{k\rightarrow +\infty}\|CJ_{\gamma B}(y^{k})- CJ_{\gamma B}(z^{*})\|=0. \end{equation} Let $\mu_{B}^{k}=\frac {1}{\gamma}(y^{k}-x_{B}^{k})\in Bx_{B}^{k}$. $\mu_{A}^{k}=\frac {1}{\gamma}(2x_{B}^{k}-y^{k}-\gamma C x_{B}^{k}-x_{A}^{k} )\in Ax_{A}^{k}$. It follows from the nonexpansiveness of $J_{\gamma B}$, we have \begin{align}\label{eq3-10} \|x_{B}^{k}-J_{\gamma B}(z^{*})\| &\leq\|y^{k}-z^{*}\| \nonumber \\ &\leq\|z^{k}-z^{*}\|+\alpha_{k}\|z^{k}-z^{k-1}\|\nonumber\\ &\leq\|z^{k}-z^{*}\|+\alpha\|z^{k}-z^{k-1}\|. \end{align} Notice that $\lim_{k\rightarrow +\infty}\|z^{k}-z^{*}\|$ exists and $\lim_{k\rightarrow +\infty}\|z^{k}-z^{k-1}\|=0$, then $\{x_{B}^{k}\}$ is bounded. Let $y$ is a sequential weak cluster point of $\{x_{B}^{k}\}$. That is, there exists a subsequence $\{x_{B}^{k_{n}}\}$ such that $x_{B}^{k_{n}}\rightharpoonup y$ as $k_{n}\rightarrow +\infty$. Let $x^{*}=J_{\gamma B}(z^{*})$. Then $x^{*}\in zer(A+B+C)$. By (\ref{eq3-9}), we have $Cx_{B}^{k}\rightarrow Cx^{*}$. Notice that $x_{B}^{k_{n}}\rightharpoonup y$, since C is maximally monotone, it follows from the weak-to-strong sequentical closedness of $C$ that $ Cy=Cx^{*}$. Then $Cx_{B}^{k}\rightarrow Cy$. We deduce from (\ref{eq3-8}) that $\|x_{A}^{k}-x_{B}^{k}\|=\|Ty^{k}-y^{k}\|\rightarrow 0$ as $k \rightarrow +\infty$. Then, we have $ x_{A}^{k_{n}}\rightharpoonup y$. Since $\|y^k - z^k\| = \alpha_k \|z^k - z^{k-1}\| \rightarrow 0$ as $k\rightarrow +\infty$, we obtain $y^k \rightharpoonup z^{*}$. Therefore, we get $\mu_{B}^{k_{n}}\rightharpoonup \frac {1}{\gamma}(z^{*}-y)$ and $\mu_{A}^{k_{n}}\rightharpoonup \frac {1}{\gamma}(y-z^{*}-\gamma Cy)$. By Corollary 26.8 of \cite{bauschkebook2017}, we obtain \begin{equation}\label{eq3-11} \frac {1}{\gamma}(z^{*}-y)\in By \Rightarrow y = (I+\gamma B)^{-1}z^{*} = J_{\gamma B}z^{*}, \end{equation} and $ y\in zer(A+B+C)$. Therefore $y$ is the unique weak sequential cluster point of $\{x_{B}^{k}\}$. Then $\{x_{B}^{k}\}$ converges weakly to $J_{\gamma B}z^{*} \in zer(A+B+C)$. \noindent (iii)\, The conclusion of $x_{A}^{k}\rightharpoonup J_{\gamma B}z^{*}$ comes from the fact that $\|x_{A}^{k}-x_{B}^{k}\|\rightarrow 0$ as $k \rightarrow +\infty$ and $x_{B}^{k} \rightharpoonup J_{\gamma B}z^{*}$. \noindent (iv)\, Let $ x^{*}=J_{\gamma B}z^{*}$, then $x^{*}\in zer(A+B+C)$. Let $\mu_{B}^{*}=\frac {1}{\gamma}(z^{*}-x^{*})\in Bx^{*}$, hence, we have $\mu_{B}^{*} \in Bx^{*}$. Let $\mu_{A}^{*}=\frac {1}{\gamma}(x^{*}-z^{*})-Cx^{*} \in Ax^{*}$, we derive from $x^{*}\in zer(A+B+C)$ and $\mu_{B}^{*} \in Bx^{*}$ that $\mu_{A}^{*} \in Ax^{*}$. It follows from that $B+C$ is monotone and $(x_{B}^{k}, \mu_{B}^{k})\in \textrm{ gra } B$, we have $\langle x_{B}^{k}-x^{*}, \mu_{B}^{k}+ Cx_{B}^{k}-(\mu_{B}^{*}+Cx_{B}^{k})\rangle \geq 0$. (a)\, Let $M=\{x^{*}\}\bigcup \{x_{A}^{k}| k\geq 0\}$. Then because $A$ is uniformly monotone on every nonempty bounded subset of $\textrm{ dom } A$. 
So there exists an increasing function $\phi_{A}: R_{+}\rightarrow [0, +\infty]$ that only vanishes at $0$ such that \begin{align}\label{eq3-12} \gamma \phi_{A}(\|x_{A}^{k}-x^{*}\|) &\leq \gamma \langle x_{A}^{k}-x^{*}, \mu_{A}^{k}-\mu_{A}^{*}\rangle +\gamma \langle x_{B}^{k}-x^{*}, \mu_{B}^{k} +Cx_{B}^{k}-(\mu_{B}^{*}+Cx^{*})\rangle \nonumber \\ &=\gamma \langle x_{A}^{k}-x_{B}^{k}, \mu_{A}^{k}-\mu_{A}^{*}\rangle +\gamma \langle x_{B}^{k}-x^{*}, \mu_{A}^{k} -\mu_{A}^{*}\rangle \nonumber \\ \quad &+\gamma \langle x_{B}^{k}-x^{*}, \mu_{B}^{k} +Cx_{B}^{k}-(\mu_{B}^{*}+Cx^{*})\rangle \nonumber \\ &=\gamma \langle x_{A}^{k}-x_{B}^{k}, \mu_{A}^{k}-\mu_{A}^{*}\rangle +\gamma \langle x_{B}^{k}-x^{*}, \mu_{A}^{k} +\mu_{B}^{k}+Cx_{B}^{k}\rangle \nonumber \\ &=\langle x_{B}^{k}-x_{A}^{k}, x_{B}^{k}-\gamma\mu_{A}^{k}-(x^{*}-\gamma \mu_{A}^{*}) \rangle \nonumber \\ &\leq \langle x_{B}^{k}-x_{A}^{k}, y^{k}-z^{*}\rangle+\gamma \langle x_{B}^{k}-x_{A}^{k}, Cx_{B}^{k}-Cx^{*}\rangle. \end{align} Since $y^{k}\rightharpoonup z^{*}$, $\|x_{B}^{k}-x_{A}^{k}\|\rightarrow 0$, and $Cx_{B}^{k}-Cx^{*}\rightarrow 0$, as $k\rightarrow +\infty$. Then, we obtain from the above inequality that $x_{A}^{k} \rightarrow x^{*}$. Moreover, $x_{B}^{k}\rightarrow x^{*}$ because $x_{A}^{k}-x_{B}^{k}\rightarrow 0$, as $k\rightarrow +\infty$. (b)\, Since $A+C$ is monotone, we have \begin{equation}\label{eq3-13} 0 \leq \langle x_{A}^{k} - x^{*}, \mu_{A}^{k} + Cx_{A}^{k} - (\mu_{A}^{*}+Cx^{*}) \rangle. \end{equation} Let $M=\{x^{*}\} \bigcup \{x_{B}^{k}| k\geq 0\}$. It follows from the uniformly monotone $B$ that \begin{equation}\label{eq3-14} \phi_{B}(\| x_{B}^{k} - x^{*} \|) \leq \langle x_{B}^{k} - x^{*}, \mu_{B}^{k} - \mu_{B}^{*} \rangle, \end{equation} where $\phi_{B}: R_{+}\rightarrow [0, +\infty]$ that only vanishes at $0$. Multiply $\gamma >0$ on the both side of (\ref{eq3-13}) and (\ref{eq3-14}), then we obtain \begin{align}\label{eq3-15} \gamma \phi_{B}(\| x_{B}^{k} - x^{*} \|) & \leq \gamma \langle x_{B}^{k} - x^{*}, \mu_{B}^{k} - \mu_{B}^{*} \rangle + \gamma \langle x_{A}^{k} - x^{*}, \mu_{A}^{k} + Cx_{A}^{k} - (\mu_{A}^{*}+Cx^{*}) \rangle \nonumber \\ & = \gamma \langle x_{B}^{k} - x_{A}^{k}, \mu_{B}^{k} - \mu_{B}^{*} \rangle + \langle x_{A}^{k} - x^{*}, \mu_{B}^{k} - \mu_{B}^{*} \rangle \nonumber \\ & \quad + \gamma \langle x_{A}^{k} - x^{*}, \mu_{A}^{k} + Cx_{A}^{k} - (\mu_{A}^{*}+Cx^{*}) \rangle \nonumber \\ & = \gamma \langle x_{B}^{k} - x_{A}^{k}, \mu_{B}^{k} - \mu_{B}^{*} \rangle + \gamma \langle x_{A}^{k} - x^{*}, \mu_{A}^{k} + \mu_{B}^{k} + C x_{A}^{k} \rangle \nonumber \\ & = \gamma \langle x_{B}^{k} - x_{A}^{k}, \mu_{B}^{k} - \mu_{B}^{*} \rangle + \langle x_{A}^{k} - x^{*}, x_{B}^{k} - x_{A}^{k} \rangle \nonumber \\ & \quad + \gamma \langle x_{A}^{k} - x^{*}, Cx_{A}^{k} - Cx_{B}^{k} \rangle. \end{align} Since $\lim_{k\rightarrow +\infty}\|x_{B}^{k} - x_{A}^{k}\| = 0$ and $C$ is $1/\beta$-Lipschitz continuous, then $\|Cx_{A}^{k} - Cx_{B}^{k}\| \rightarrow 0 $ as $k\rightarrow +\infty$. Hence, we derive from (\ref{eq3-15}) that $x_{B}^{k} \rightarrow x^{*}$ as $k\rightarrow +\infty$. (c)\, Because $Cx_{B}^{k} \rightarrow Cx^{*}$ and $x_{B}^{k} \rightharpoonup x^{*}$, thus $x_{B}^{k} \rightarrow x^{*}$ by the demiregularity of $C$. This completes the proof. \end{proof} \begin{theorem}\label{main-theorem2} Let $H$ be a real Hilbert space. Let $A:H \rightarrow 2^{H}$ and $B:H\rightarrow 2^H$ be two maximally monotone operators. Let $C:H\rightarrow H$ be a $\beta$-cocoercive operator, for some $\beta> 0$. 
Let the operator $T$ be defined as (\ref{def-T}). Let the iterative sequences $\{z^{k}\}, \{x_{A}^{k}\}, \{x_{B}^{k}\}$ are generated by Algorithm \ref{alg1}. Assume that $0 \leq \alpha_{k} \leq \alpha \leq 1 $, $ 0< \underline{\lambda} \leq \lambda_{k} < 1 $ and $\sum_{k=0}^{+\infty}\alpha_{k}\|z^{k}-z^{k-1}\|^{2}< +\infty$. Then the following hold \noindent \emph{(i)} $\{z^{k}\}$ converges weakly to a fixed point of $T$. \noindent \emph{(ii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{B}^{k}\}$ converges weakly to $ J_{\gamma B}z^{*}\in zer(A+B+C)$. \noindent \emph{(iii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{A}^{k}\}$ converges weakly to $ J_{\gamma B}z^{*}\in zer(A+B+C)$. \noindent \emph{(iv)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Suppose that one of the following conditions hold \noindent \emph{(a)} $A$ is uniformly monotone on every nonempty bounded subset of $\textrm{ dom } A$.\\ \noindent \emph{(b)} $B$ is uniformly monotone on every nonempty bounded subset of $\textrm{ dom } B$.\\ \noindent \emph{(c)} $C$ is demiregular at every point $x \in zer(A+B+C)$. \\ Then $\{x_{A}^{k}\}$ and $\{x_{B}^{k}\}$ converge strongly to $J_{\gamma B}z^{*}\in zer(A+B+C)$. \end{theorem} \begin{proof} (i)\, Since the iterative sequences $\{z^k\}$ generated by Algorithm \ref{alg1} can be rewritten as (\ref{eq3-1}). Hence, by Lemma \ref{iKM-convergence-2}, we can conclude that $\{z^k\}$ converges weakly to a fixed point of $T$. On the other other hand, we deduce from $\sum_{k=0}^{+\infty}\alpha_k \|z^k - z^{k-1}\|^2 < +\infty$ that $\lim_{k\rightarrow +\infty}\alpha_k \|z^k - z^{k-1}\|=0$. Then we have \begin{equation} y^k - z^k = \alpha_k (z^k - z^{k-1}) \rightarrow 0, \, \textrm{ as }\, k \rightarrow +\infty. \end{equation} The conclusions of (ii)-(iv) follow the same proof of Theorem \ref{main-theorem}, so we omit it here. \end{proof} \begin{remark} The conditions of inertial parameters $\{\alpha_k\}$ and $\{\lambda_k\}$ are different in Theorem \ref{main-theorem} and Theorem \ref{main-theorem2}. In Theorem \ref{main-theorem}, the relaxation parameters $\{\lambda_k\}$ are restricted by the upper bound of $\{\alpha_k\}$, while the relaxation parameters $\{\lambda_k\}$ are independent to the inertial parameters in Theorem \ref{main-theorem2}. On the other hand, the inertial parameters $\{\alpha_k\}$ in Theorem \ref{main-theorem2} are depended on the iterative sequences. In general, it is recommended to set as follows: $$ \alpha_k = \min \{ \frac{1}{k^2 \|z^k - z^{k-1}\|^2}, \overline{\alpha} \}, \quad \textrm{ where }\, \overline{\alpha}\in (0,1). $$ However, it is seldom to see any numerical experiments reported on this particular choice of inertial parameters. We will present numerical experiments to demonstrate the effectiveness and efficiency of it. \end{remark} As applications, we consider the following general convex optimization problem: \begin{equation}\label{sum-three-convex} \min_{x\in H}\, h(x) + f(x) + g(x), \end{equation} where $f:H\rightarrow (-\infty,+\infty]$ and $g:H\rightarrow (-\infty,+\infty]$ are proper, lower semi-continuous convex functions, and $h:H\rightarrow R$ is a convex differentiable function with a $1/\beta$-Lipschitz continuous gradient. Under some qualification conditions, the first-order optimality condition of (\ref{sum-three-convex}) is equivalent to the monotone inclusion problem (\ref{more two-monotone-inclusion}). 
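Concretely, spelling out this first-order optimality condition (under a qualification condition that makes the subdifferential sum rule applicable, which we assume here for illustration), a point $x^{*}\in H$ solves (\ref{sum-three-convex}) if and only if
$$
0\in \nabla h(x^{*})+\partial f(x^{*})+\partial g(x^{*}),
$$
which is the monotone inclusion (\ref{more two-monotone-inclusion}) with $A=\partial f$, $B=\partial g$ and $C=\nabla h$.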
Therefore, we obtain a class of inertial three-operator splitting algorithms for solving the convex optimization problem (\ref{sum-three-convex}) of minimizing the sum of three convex functions.
\begin{theorem}\label{coro1}
Let $H$ be a real Hilbert space. Let $f,g:H\rightarrow (-\infty,+\infty]$ be proper, lower semi-continuous convex functions. Let $h:H\rightarrow R$ be convex and differentiable with a $1/\beta$-Lipschitz continuous gradient. Suppose that $zer(\partial f+\partial g+\nabla h)\neq \varnothing$. Let $z^0, z^{-1}\in H$, and set
\begin{equation}\label{inertial-cp-algorithm}
\left\{
\begin{aligned}
y^{k}&=z^{k}+\alpha_{k}(z^{k}-z^{k-1});\\
x_{g}^{k}&= prox_{\gamma g}y^{k};\\
x_{f}^{k}&= prox_{\gamma f}(2x_{g}^{k}-y^{k}-\gamma \nabla h(x_{g}^{k}));\\
z^{k+1}&=y^{k}+\lambda _{k}(x_{f}^{k}-x_{g}^{k}). \\
\end{aligned}
\right.
\end{equation}
Assume that the parameters $\alpha_{k}, \gamma $ and $\lambda_{k}$ satisfy the conditions of Theorem \ref{main-theorem}. Then the following hold:
\noindent \emph{(i)} $\{z^{k}\}$ converges weakly to a fixed point of $T$, where $T := I-prox_{\gamma g}+prox_{\gamma f}(2prox_{\gamma g}-I-\gamma \nabla h( prox_{\gamma g}))$.
\noindent \emph{(ii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{g}^{k}\}$ converges weakly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\noindent \emph{(iii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{f}^{k}\}$ converges weakly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\noindent \emph{(iv)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Suppose that one of the following conditions holds:
\noindent \emph{(a)} $f$ is uniformly convex on every nonempty bounded subset of $\textrm{ dom } \partial f$.\\
\noindent \emph{(b)} $g$ is uniformly convex on every nonempty bounded subset of $\textrm{ dom } \partial g$.\\
\noindent \emph{(c)} $\nabla h$ is demiregular at every point $x \in zer(\partial f+\partial g+\nabla h)$.
Then $\{x_{f}^{k}\}$ and $\{x_{g}^{k}\}$ converge strongly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\end{theorem}
\begin{proof}
Let $A=\partial f, B=\partial g, C=\nabla h$. Since the subdifferential of a proper, lower semi-continuous convex function is a maximally monotone operator, $\partial f$ and $\partial g$ are maximally monotone. On the other hand, by the Baillon-Haddad theorem, $\nabla h$ is $\beta$-cocoercive. Therefore, the conclusions of Theorem \ref{coro1} follow immediately from Theorem \ref{main-theorem}.
\end{proof}
Similarly, we obtain from Theorem \ref{main-theorem2} the following convergence theorem for the iterative scheme (\ref{inertial-cp-algorithm}).
\begin{theorem}\label{coro2}
Let $H$ be a real Hilbert space. Let $f, g:H\rightarrow (-\infty,+\infty]$ be proper, lower semi-continuous convex functions. Let $h:H\rightarrow R$ be a convex differentiable function with a $1/\beta$-Lipschitz continuous gradient. Suppose that $zer(\partial f+\partial g+\nabla h)\neq \varnothing$. Let the iterative sequences $\{z^{k}\}, \{x_{g}^{k}\}$, and $\{x_{f}^{k}\}$ be generated by (\ref{inertial-cp-algorithm}). Assume that the parameters $\alpha_{k}, \gamma $ and $\lambda_{k}$ satisfy the conditions of Theorem \ref{main-theorem2}.
Then the following hold:
\noindent \emph{(i)} $\{z^{k}\}$ converges weakly to a fixed point of $T$, where $T := I-prox_{\gamma g}+prox_{\gamma f}(2prox_{\gamma g}-I-\gamma \nabla h( prox_{\gamma g}))$.
\noindent \emph{(ii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{g}^{k}\}$ converges weakly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\noindent \emph{(iii)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Then $\{x_{f}^{k}\}$ converges weakly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\noindent \emph{(iv)} Let $\lambda_{k}\geq \underline{\lambda}> 0 $ and $z^{*}$ be a fixed point of $T$. Suppose that one of the following conditions holds:
\noindent \emph{(a)} $f$ is uniformly convex on every nonempty bounded subset of $\textrm{ dom } \partial f$.\\
\noindent \emph{(b)} $g$ is uniformly convex on every nonempty bounded subset of $ \textrm{ dom } \partial g$.\\
\noindent \emph{(c)} $\nabla h$ is demiregular at every point $x \in zer(\partial f+\partial g+\nabla h)$.
Then $\{x_{f}^{k}\}$ and $\{x_{g}^{k}\}$ converge strongly to $ prox_{\gamma g}z^{*}\in zer(\partial f+\partial g+\nabla h)$.
\end{theorem}
\section{Numerical experiments}
\vskip 3mm
In this section, we apply the proposed inertial three-operator splitting algorithm (Algorithm \ref{alg1}) to solve a constrained image inpainting problem, and we compare it with the original three-operator splitting (TOS) algorithm. Recall that the convergence of the proposed inertial three-operator splitting algorithm is established in Theorem \ref{main-theorem} and Theorem \ref{main-theorem2}, which differ in their requirements on the inertial parameters $\{\alpha_{k}\}$ and the relaxation parameters $\{\lambda_{k}\}$. Accordingly, we refer to the two corresponding versions of the proposed algorithm as iTOS-1 and iTOS-2, respectively. All experiments are conducted on a Lenovo laptop with an Intel(R) Core(TM) i7-4712MQ CPU and 4 GB of memory, and the codes are run in MATLAB R2014a.
\subsection{Constrained image inpainting problem}
Let $u\in R^{m\times n}$ be a given image, where the entries $\{u_{ij}\}_{(i,j)\in \Omega}$ are observed and the remaining entries are missing. We consider the following constrained image inpainting problem:
\begin{equation}\label{constrained-image-inpainting}
\min_{x\in C}\, \frac{1}{2}\|P_{\Omega}(u)-P_{\Omega}(x)\|_{F}^{2}+ \mu \|x\|_{*},
\end{equation}
where $\|\cdot\|_{F}$ is the Frobenius norm, $\|\cdot\|_{*}$ is the nuclear norm, $C$ is a nonempty closed convex set, and $\mu >0$ is the regularization parameter. Here, $P_{\Omega}$ is defined entrywise by
\[(P_{\Omega}(u))_{ij}=\left\{\begin{array}{ll}
u_{ij}, &\text{$(i,j)\in \Omega$},\\
0, & \textrm{otherwise}.
\end{array}\right.\]
The nuclear norm, which is a convex relaxation of the low-rank constraint, has been widely used in image inpainting and matrix completion problems; see, for example, \cite{yangjfandyuan2013,tang2016-1,tangyc20171}. Here, we introduce a nonempty closed convex set $C$ in (\ref{constrained-image-inpainting}), which provides an easy way to incorporate a priori information. In particular, we choose $C$ as the nonnegative set, that is, $C=\{x\in R^{m\times n} \mid x_{ij}\geq 0\}$.
By virtue of the indicator function $\delta_{C}(x)$, the constrained image inpainting problem (\ref{constrained-image-inpainting}) is equivalent to the following unconstrained image inpainting problem,
\begin{equation}\label{unconstrained-image-inpainting}
\min_{x}\, \frac{1}{2}\|P_{\Omega}(u)-P_{\Omega}(x)\|_{F}^{2}+\mu\|x\|_{*}+\delta_{C}(x).
\end{equation}
The optimization problem (\ref{unconstrained-image-inpainting}) is clearly a special case of the optimization problem (\ref{sum-three-convex}) of the sum of three convex functions. In fact, let $h(x)=\frac{1}{2}\|P_{\Omega}(u)-P_{\Omega}(x)\|_{F}^{2}$, $g(x)=\mu\|x\|_{*}$, and $f(x)=\delta_{C}(x)$. Then $h(x)$ is convex and differentiable with $\nabla h(x)=P_{\Omega}(x)-P_{\Omega}(u)$, which is $1$-Lipschitz continuous. The proximity operator of $g(x)$ can be computed by the singular value decomposition (SVD); see, for example, \cite{Caijf2010SIAM}. The proximity operator of $f(x)$ is the orthogonal projection onto the closed convex set $C$. Therefore, the three-operator splitting algorithm and the inertial three-operator splitting algorithm can be employed to solve the optimization problem (\ref{unconstrained-image-inpainting}); a minimal sketch of the resulting iteration is given below.
\subsection{Evaluation and parameter settings}
To evaluate the quality of the restored images, we use the signal-to-noise ratio (SNR) and the structural similarity index (SSIM) \cite{wangzhou}, which are defined by
\begin{equation}
SNR = 20\log_{10}\frac{\|x\|_{F}}{\|x-x_{r}\|_{F}},
\end{equation}
and
\begin{equation}
SSIM=\frac{(2u_{x}u_{x_r}+c_{1})(2\sigma_{xx_r}+c_{2})}{(u_{x}^{2}+u_{x_r}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{x_r}^{2}+c_{2})},
\end{equation}
where $x$ is the original image, $x_{r}$ is the restored image, $u_{x}$ and $u_{x_r}$ are the mean values of the original image $x$ and the restored image $x_{r}$, respectively, $\sigma_{x}^{2}$ and $\sigma_{x_r}^{2}$ are the corresponding variances, $\sigma_{xx_r}$ is the covariance of the two images, $c_{1}=(K_{1}L)^{2}$ and $c_{2}=(K_{2}L)^{2}$ with $K_{1}=0.01$ and $K_{2}=0.03$, and $L$ is the dynamic range of the pixel values. $SSIM$ ranges from $0$ to $1$, and a value of $1$ indicates perfect recovery. We use the relative change between two successive iterates as the stopping criterion, that is,
\begin{equation}
\frac{\|z^{k+1}-z^{k}\|_{F}}{\|z^{k}\|_{F}}\leq \varepsilon,
\end{equation}
where $\varepsilon$ is a given small constant. The selection of the relaxation parameter $\lambda_k$ and the step size $\gamma$ is crucial for the convergence speed of the three-operator splitting algorithm and of the proposed inertial three-operator splitting algorithm. For the sake of a fair comparison, we specify these parameters in Table \ref{parameters-selection}.
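For illustration, the following is a minimal Python/NumPy sketch of one possible implementation of the iteration (\ref{inertial-cp-algorithm}) applied to (\ref{unconstrained-image-inpainting}), using the iTOS-2-type adaptive inertia. It is not the MATLAB code used in our experiments; the function and variable names, the default parameter values and the tolerance are placeholders only.
\begin{verbatim}
import numpy as np

def prox_nuclear(v, tau):
    # Singular value thresholding: prox of tau*||.||_* at v.
    U, s, Vt = np.linalg.svd(v, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def itos_inpainting(u, mask, mu, gamma=1.0, lam=0.3, alpha_bar=0.5,
                    eps=1e-5, max_iter=500):
    # Inertial three-operator splitting for
    #   min_x 0.5*||P_Omega(u)-P_Omega(x)||_F^2 + mu*||x||_* + delta_C(x),
    # with C the nonnegative set and mask the 0/1 indicator of Omega.
    z_old = np.zeros_like(u)
    z = np.zeros_like(u)
    x_g = z
    for k in range(1, max_iter + 1):
        d = z - z_old
        nrm2 = np.sum(d * d)
        alpha = alpha_bar if nrm2 == 0 else min(1.0 / (k**2 * nrm2), alpha_bar)
        y = z + alpha * d                              # inertial step
        x_g = prox_nuclear(y, gamma * mu)              # prox of gamma*g
        grad_h = mask * (x_g - u)                      # gradient of h at x_g
        x_f = np.maximum(2*x_g - y - gamma*grad_h, 0)  # projection onto C
        z_old, z = z, y + lam * (x_f - x_g)            # relaxation step
        if np.linalg.norm(z - z_old) <= eps * max(np.linalg.norm(z_old), 1e-12):
            break
    return x_g
\end{verbatim}
The iTOS-1 variant corresponds to a fixed inertial parameter $\alpha_k \equiv \alpha$, and the original TOS algorithm to $\alpha_k \equiv 0$.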
\begin{table}[htbp]
\footnotesize
\centering
\caption{Parameter selection for the studied iterative algorithms.}
\vskip 2mm
\begin{tabular}{c|c|c|c|c}
\hline
Type & Methods & Step size & Relaxation parameter & Inertial parameter \\
\hline \hline
\multirow{3}[1]{*}{Case 1} & TOS & \multirow{3}[1]{*}{$\gamma = 1.8$} & $\lambda_k =1$ & None \\
& iTOS-1 & & $\lambda_k = 0.8$ & $\alpha_k = 0.2$ \\
& iTOS-2 & & $\lambda_k =1$ & $\alpha_k = \min \{ \frac{1}{k^2 \| z^k - z^{k-1} \|^2}, 0.2 \}$ \\
\hline
\multirow{3}[1]{*}{Case 2} & TOS & \multirow{3}[1]{*}{$\gamma = 1$} & $\lambda_k = 0.3$ & None \\
& iTOS-1 & & $\lambda_k = 0.3$ & $\alpha_k = 0.5$ \\
& iTOS-2 & & $\lambda_k = 0.3$ & $\alpha_k = \min \{ \frac{1}{k^2 \| z^k - z^{k-1} \|^2}, 0.5 \}$ \\
\hline
\multirow{3}[1]{*}{Case 3} & TOS & \multirow{3}[1]{*}{$\gamma = 0.5$} & $\lambda_k = 1.75$ & None \\
& iTOS-1 & & $\lambda_k = 1.4$ & $\alpha_k = 0.1$ \\
& iTOS-2 & & $\lambda_k = 1.75$ & $\alpha_k = \min \{ \frac{1}{k^2 \| z^k - z^{k-1} \|^2}, 0.1 \}$ \\
\hline
\end{tabular}\label{parameters-selection}
\end{table}
\subsection{Results and discussion}
The test image, a grayscale image of a building, is taken from \cite{davis2015old}; see Figure \ref{tested-images}. We randomly remove pixels from the original image with missing rates of $40 \%$, $60 \%$ and $80 \%$, respectively. In each case, random Gaussian noise with zero mean and standard deviation $0.01$ or $0.05$ is added. The regularization parameter $\mu$ is tuned for each noise level: we set $\mu = 0.5$ for noise level $0.01$ and $\mu = 1.8$ for noise level $0.05$.
\begin{figure}\caption{The test image: a grayscale image of a building from \cite{davis2015old}.}\label{tested-images}
\end{figure}
We test the performance of the studied iterative algorithms (TOS, iTOS-1 and iTOS-2) with the parameter selection cases $1$, $2$ and $3$ of Table \ref{parameters-selection}. The obtained results are presented in Table \ref{results1}, Table \ref{results2} and Table \ref{results3}, respectively. We observe from Table \ref{results1} that, for the stopping tolerance $\varepsilon = 10^{-3}$, TOS, iTOS-1 and iTOS-2 perform nearly the same in terms of SSIM and the number of iterations, while the SNR values of iTOS-1 and iTOS-2 are slightly higher than those of TOS. From Table \ref{results2}, we can see that iTOS-1 requires fewer iterations than TOS and iTOS-2. Further, we can see from Table \ref{results3} that the performance of TOS and iTOS-2 is almost the same, whereas iTOS-1 is the slowest. From Table \ref{results1} to Table \ref{results3}, we conclude that iTOS-1 is faster than the original TOS for the same selection of relaxation parameters smaller than one, and that iTOS-1 performs more stably than iTOS-2. The recovered images are displayed in Figure \ref{restored1}, Figure \ref{restored2} and Figure \ref{restored3}.
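As a complement to the tables, the quality measures defined above can be evaluated for a pair of images with a small helper along the following lines (a sketch with hypothetical helper names; note that the SSIM formula above uses global image statistics, whereas common implementations following \cite{wangzhou} average SSIM over local windows).
\begin{verbatim}
import numpy as np

def snr_db(x, x_r):
    # SNR = 20*log10(||x||_F / ||x - x_r||_F)
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_r))

def ssim_global(x, x_r, L=1.0, K1=0.01, K2=0.03):
    # Global SSIM with statistics taken over the whole image;
    # L is the dynamic range of the pixel values.
    c1, c2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, mr = x.mean(), x_r.mean()
    vx, vr = x.var(), x_r.var()
    cov = ((x - mx) * (x_r - mr)).mean()
    return ((2 * mx * mr + c1) * (2 * cov + c2)) / \
           ((mx**2 + mr**2 + c1) * (vx + vr + c2))
\end{verbatim}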
\begin{table}[htbp] \footnotesize \centering \caption{Comparison results of parameters selection case $1$ in terms of SNR, SSIM and the number of iterations $k$.} \vskip 2mm \begin{tabular}{c|c|c|cccccccc} \hline \multirow{2}[1]{*}{Missing} & \multirow{2}[1]{*}{Noise} & \multirow{2}[1]{*}{Methods} & \multicolumn{3}{c}{$ \epsilon = 10^{-3}$} & & \multicolumn{3}{c}{$ \epsilon = 10^{-5}$} \\ \cline{4-6} \cline{8-10} rate & level & & $SNR$ & $SSIM$ & $k$ & & $SNR$ & $SSIM$ & $k$ \\ \hline \hline \multirow{6}[1]{*}{$40 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $23.4856$ & $0.8877$ & $30$ & & $23.5961$ & $0.8895$ & $51$ \\ & & ITOS-1 & $23.4863$ & $0.8878$ & $30$ & & $23.5962$ & $0.8895$ & $51$ \\ & & ITOS-2 & $23.4868$ & $0.8878$ & $30$ & & $23.5971$ & $0.8896$ & $980$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $19.7318$ & $0.7639$ & $15$ & & $19.7395$ & $0.7642$ & $23$ \\ & & ITOS-1 & $19.7247$ & $0.7636$ & $13$ & & $19.7394$ & $0.7642$ & $20$ \\ & & ITOS-2 & $19.7326$ & $0.7639$ & $15$ & & $19.7409$ & $0.7642$ & $39$ \\ \hline \multirow{6}[1]{*}{$60 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $20.6602$ & $0.8069$ & $49$ & & $20.8415$ & $0.8123$ & $85$ \\ & & ITOS-1 & $20.6622$ & $0.8070$ & $49$ & & $20.8415$ & $0.8123$ & $85$ \\ & & ITOS-2 & $20.6609$ & $0.8070$ & $49$ & & $20.8420$ & $0.8123$ & $82$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $17.5956$ & $0.6628$ & $21$ & & $17.6530$ & $0.6657$ & $36$ \\ & & ITOS-1 & $17.6035$ & $0.6632$ & $21$ & & $17.6529$ & $0.6657$ & $35$ \\ & & ITOS-2 & $17.5972$ & $0.6629$ & $21$ & & $17.6530$ & $0.6657$ & $33$ \\ \hline \multirow{6}[1]{*}{$80 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $16.9807$ & $0.6433$ & $97$ & & $17.3557$ & $0.6617$ & $181$ \\ & & ITOS-1 & $16.9810$ & $0.6433$ & $97$ & & $17.3558$ & $0.6617$ & $181$ \\ & & ITOS-2 & $16.9813$ & $0.6434$ & $97$ & & $17.3565$ & $0.6618$ & $176$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $12.9316$ & $0.3830$ & $44$ & & $13.2010$ & $0.3986$ & $104$ \\ & & ITOS-1 & $12.9417$ & $0.3835$ & $44$ & & $13.2012$ & $0.3986$ & $104$ \\ & & ITOS-2 & $12.9332$ & $0.3830$ & $44$ & & $13.2016$ & $0.3986$ & $98$ \\ \hline \end{tabular}\label{results1} \end{table} \begin{table}[htbp] \footnotesize \centering \caption{Comparison results of parameters selection case $2$ in terms of SNR, SSIM and the number of iterations $k$.} \vskip 2mm \begin{tabular}{c|c|c|cccccccc} \hline \multirow{2}[1]{*}{Missing} & \multirow{2}[1]{*}{Noise} & \multirow{2}[1]{*}{Methods} & \multicolumn{3}{c}{$ \epsilon = 10^{-3}$} & & \multicolumn{3}{c}{$ \epsilon = 10^{-5}$} \\ \cline{4-6} \cline{8-10} rate & level & & $SNR$ & $SSIM$ & $k$ & & $SNR$ & $SSIM$ & $k$ \\ \hline \hline \multirow{6}[1]{*}{$40 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $22.7328$ & $0.8733$ & $133$ & & $23.5898$ & $0.8894$ & $249$ \\ & & ITOS-1 & $23.2498$ & $0.8835$ & $76$ & & $23.5940$ & $0.8895$ & $134$ \\ & & ITOS-2 & $22.7345$ & $0.8734$ & $133$ & & $23.5939$ & $0.8895$ & $244$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $19.5628$ & $0.7582$ & $55$ & & $19.7378$ & $0.7641$ & $103$ \\ & & ITOS-1 & $19.6817$ & $0.7621$ & $32$ & & $19.7389$ & $0.7641$ & $52$ \\ & & ITOS-2 & $19.5635$ & $0.7583$ & $55$ & & $19.7389$ & $0.7641$ & $95$ \\ \hline \multirow{6}[1]{*}{$60 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $19.5084$ & $0.7689$ & $216$ & & $20.8319$ & $0.8120$ & $416$ \\ & & ITOS-1 & $20.2781$ & $0.7949$ & $124$ & & $20.8379$ & $0.8122$ & $224$ \\ & & ITOS-2 & $19.5102$ & $0.7689$ & $216$ & & $20.8380$ & $0.8122$ & $421$ \\ \cline{2-10} 
& \multirow{3}[1]{*}{$0.05$} & TOS & $17.2028$ & $0.6438$ & $94$ & & $17.6490$ & $0.6655$ & $187$ \\ & & ITOS-1 & $17.4596$ & $0.6561$ & $55$ & & $17.6516$ & $0.6657$ & $99$ \\ & & ITOS-2 & $17.2036$ & $0.6439$ & $94$ & & $17.6515$ & $0.6657$ & $179$ \\ \hline \multirow{6}[1]{*}{$80 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $14.3301$ & $0.5126$ & $375$ & & $17.3369$ & $0.6608$ & $877$ \\ & & ITOS-1 & $16.0750$ & $0.5979$ & $232$ & & $17.3485$ & $0.6614$ & $477$ \\ & & ITOS-2 & $14.3323$ & $0.5127$ & $375$ & & $17.3415$ & $0.6611$ & $887$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $11.6758$ & $0.3200$ & $184$ & & $13.1859$ & $0.3977$ & $547$ \\ & & ITOS-1 & $12.3940$ & $0.3544$ & $119$ & & $13.1953$ & $0.3982$ & $300$ \\ & & ITOS-2 & $11.6767$ & $0.3200$ & $184$ & & $13.1953$ & $0.3982$ & $563$ \\ \hline \end{tabular}\label{results2} \end{table} \begin{table}[htbp] \footnotesize \centering \caption{Comparison results of parameters selection case $3$ in terms of SNR, SSIM and the number of iterations $k$.} \vskip 2mm \begin{tabular}{c|c|c|cccccccc} \hline \multirow{2}[1]{*}{Missing} & \multirow{2}[1]{*}{Noise} & \multirow{2}[1]{*}{Methods} & \multicolumn{3}{c}{$ \epsilon = 10^{-3}$} & & \multicolumn{3}{c}{$ \epsilon = 10^{-5}$} \\ \cline{4-6} \cline{8-10} rate & level & & $SNR$ & $SSIM$ & $k$ & & $SNR$ & $SSIM$ & $k$ \\ \hline \hline \multirow{6}[1]{*}{$40 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $23.3412$ & $0.8852$ & $55$ & & $23.5948$ & $0.8895$ & $95$ \\ & & ITOS-1 & $23.2780$ & $0.8841$ & $60$ & & $23.5946$ & $0.8895$ & $106$ \\ & & ITOS-2 & $23.3426$ & $0.8852$ & $55$ & & $23.5951$ & $0.8895$ & $93$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $19.6750$ & $0.7619$ & $23$ & & $19.7389$ & $0.7641$ & $38$ \\ & & ITOS-1 & $19.6579$ & $0.7613$ & $25$ & & $19.7389$ & $0.7641$ & $43$ \\ & & ITOS-2 & $19.6763$ & $0.7619$ & $23$ & & $19.7395$ & $0.7642$ & $51$ \\ \hline \multirow{6}[1]{*}{$60 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $20.4405$ & $0.8001$ & $90$ & & $20.8395$ & $0.8122$ & $160$ \\ & & ITOS-1 & $20.3753$ & $0.7980$ & $99$ & & $20.8391$ & $0.8122$ & $178$ \\ & & ITOS-2 & $20.4415$ & $0.8001$ & $90$ & & $20.8399$ & $0.8122$ & $157$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $17.4874$ & $0.6575$ & $40$ & & $17.6520$ & $0.6657$ & $71$ \\ & & ITOS-1 & $17.4895$ & $0.6576$ & $45$ & & $17.6518$ & $0.6657$ & $79$ \\ & & ITOS-2 & $17.4884$ & $0.6575$ & $40$ & & $17.6522$ & $0.6657$ & $69$ \\ \hline \multirow{6}[1]{*}{$80 \%$} & \multirow{3}[1]{*}{$0.01$} & TOS & $16.5053$ & $0.6195$ & $173$ & & $17.3517$ & $0.6615$ & $341$ \\ & & ITOS-1 & $16.3921$ & $0.6138$ & $190$ & & $17.3508$ & $0.6615$ & $379$ \\ & & ITOS-2 & $16.5064$ & $0.6195$ & $173$ & & $17.3525$ & $0.6616$ & $337$ \\ \cline{2-10} & \multirow{3}[1]{*}{$0.05$} & TOS & $12.6118$ & $0.3657$ & $92$ & & $13.1979$ & $0.3984$ & $217$ \\ & & ITOS-1 & $12.5646$ & $0.3632$ & $101$ & & $13.1972$ & $0.3983$ & $241$ \\ & & ITOS-2 & $12.6128$ & $0.3658$ & $92$ & & $13.1985$ & $0.3984$ & $212$ \\ \hline \end{tabular}\label{results3} \end{table} \begin{figure} \caption{The missing and restored images. (a)\, $40 \%$ missing and $0.01$ noise image (SNR: 3.9784(dB), SSIM: 0.1384); (b)\, TOS (SNR: 23.5961(dB), SSIM: 0.8895); (c)\, iTOS-1 (SNR: 23.5962(dB), SSIM: 0.8895); (d)\, iTOS-2 (SNR: 23.5971(dB), SSIM: 0.8896). } \label{restored1} \end{figure} \begin{figure} \caption{The missing and restored images. 
(a)\, $60 \%$ missing and $0.01$ noise image (SNR: 2.2314(dB), SSIM: 0.0788); (b)\, TOS (SNR: 20.8415(dB), SSIM: 0.8123); (c)\, iTOS-1 (SNR: 20.8415(dB), SSIM: 0.8123); (d)\, iTOS-2 (SNR: 20.8420(dB), SSIM: 0.8123). }
\label{restored2}
\end{figure}
\begin{figure}
\caption{The missing and restored images. (a)\, $80 \%$ missing and $0.01$ noise image (SNR: 0.9641(dB), SSIM: 0.0336); (b)\, TOS (SNR: 17.3557(dB), SSIM: 0.6617); (c)\, iTOS-1 (SNR: 17.3558(dB), SSIM: 0.6617); (d)\, iTOS-2 (SNR: 17.3565(dB), SSIM: 0.6618). }
\label{restored3}
\end{figure}
\section{Conclusions and future work}
The three-operator splitting algorithm is a recently proposed operator splitting algorithm for solving the monotone inclusion problem (\ref{more two-monotone-inclusion}). In this paper, we proposed a class of inertial three-operator splitting algorithms, which combine inertial methods with the three-operator splitting algorithm. We analyzed the convergence of the proposed algorithm based on the iKM iteration (\ref{inertial-KM-iteration}). As a direct application, we developed an inertial three-operator splitting algorithm for solving the convex optimization problem (\ref{sum-three-convex}), which has wide applications in signal and image processing. We also presented numerical experiments on the constrained image inpainting problem (\ref{constrained-image-inpainting}) to demonstrate the advantage of introducing the inertial terms. After completing this work, we found that \cite{attouch:hal-01708905,attouch:hal-01782016} also prove the convergence of the iKM iteration (\ref{inertial-KM-iteration}), under conditions on the inertial parameters and the relaxation parameters that differ from those of Lemma \ref{iKM-convergence-1} and Lemma \ref{iKM-convergence-2}. It is natural to generalize the convergence analysis of the iKM iteration in \cite{attouch:hal-01708905,attouch:hal-01782016} to the proposed inertial three-operator splitting algorithm. In future work, we would like to compare the performance of variants of the inertial three-operator splitting algorithm on a wider range of application problems, and to study the convergence rate of the inertial three-operator splitting algorithm in the context of convex minimization.
\section*{Funding}
This research was funded by the National Natural Science Foundation of China (11661056, 11771198, 11771347, 91730306, 41390454, 11401293), the China Postdoctoral Science Foundation (2015M571989) and the Jiangxi Province Postdoctoral Science Foundation (2015KY51).
\section*{Conflicts of interest}
The authors declare no conflict of interest.
\end{document}
\begin{document} \title{Symmetric div-quasiconvexity and the relaxation of static problems} \author[S.~Conti, S.~M\"uller and M.~Ortiz] {S.~Conti$^1$, S.~M\"uller$^{1,2}$ and M.~Ortiz$^{1,2,3}$} \address { $^1$ Institut f\"ur Angewandte Mathematik, Universit\"at Bonn, Endenicher Allee 60, 53115 Bonn, Germany. } \address { $^2$ Hausdorff Center for Mathematics, Endenicher Allee 60, 53115 Bonn, Germany. } \address { $^3$ Division of Engineering and Applied Science, California Institute of Technology, 1200 E.~California Blvd., Pasadena, CA 91125, USA. } \begin{abstract} We consider problems of static equilibrium in which the primary unknown is the stress field and the solutions maximize a complementary energy subject to equilibrium constraints. A necessary and sufficient condition for the sequential lower-semicontinuity of such functionals is symmetric ${\rm div}$-quasiconvexity, a special case of Fonseca and M\"uller's $\mathcal{A}$-quasiconvexity with $\mathcal{A} = {\rm div}$ acting on $\R^{n\times n}_\sym$. We specifically consider the example of the static problem of plastic limit analysis and seek to characterize its relaxation in the non-standard case of a non-convex elastic domain. We show that the symmetric ${\rm div}$-quasiconvex envelope of the elastic domain can be characterized explicitly for isotropic materials whose elastic domain depends on pressure $p$ and Mises effective shear stress $q$. The envelope then follows from a rank-$2$ hull construction in the $(p,q)$-plane. Remarkably, owing to the equilibrium constraint the relaxed elastic domain can still be strongly non-convex, which shows that convexity of the elastic domain is not a requirement for existence in plasticity. \end{abstract} \maketitle \tableofcontents \section{Introduction} We consider problems of static equilibrium in which the primary unknown is the stress field and the solutions minimize a complementary energy subject to equilibrium constraints. Such problems arise, e.~g., in the limit analysis of solids at collapse, which is characterized by continuing deformations, or yielding, at constant applied loads \cite{Lubliner:1990}. In a geometrically linear framework, the elastic strains and the stress remain constant during collapse. Therefore, the plastic strain rate coincides with the total strain rate and is compatible. In addition, the stress is constrained to be in equilibrium and take values in the elastic domain $K$, which, for ideal plasticity and in the absence of hardening, is a fixed subset of $\R^{n\times n}_\sym$. Static theory then aims to minimize over all possible velocities $v:\mathrm{O}mega\to\R^n$ compatible with the boundary data $g:\partial\mathrm{O}mega\to\R^n$, and maximize over all possible stress fields $\sigma:\mathrm{O}mega\to K$ in equilibrium, the plastic dissipation \begin{equation}\label{4Adabr} \int_\mathrm{O}mega \sigma\cdot Dv \,dx. \end{equation} Natural spaces of functions are $\sigma\in L^\infty(\mathrm{O}mega;\R^{n\times n}_\sym)$ with $\sigma\in K$ almost everywhere and $v\in W^{1,p}(\mathrm{O}mega;\R^n)$ with $v=\gD$ on $\partial\mathrm{O}mega$ in the sense of traces. If the elastic domain $K$ is convex, then the mathematical analysis of the problem is straightforward. Thus, the supremum of (\ref{4Adabr}) with respect to $\sigma$ can be taken locally, and the resulting dissipation functional \begin{equation}\label{eqconvex} \int_\mathrm{O}mega \psi(Dv) \, dx \end{equation} can then be minimized over all admissible $v$. 
In (\ref{eqconvex}), $\psi(\xi):=\sup_{\sigma \in K} \sigma\cdot \xi$ is the dissipation potential. Thus, for convex $K$ the classical kinematic problem of limit analysis is recovered. The functional (\ref{eqconvex}) is itself convex and, for compact $K$, coercive, whence existence of minimizers follows by the direct method of the calculus of variations. However, the elastic domain $K$ of some notable materials is not convex. An illustrative example is silica glass. Indeed, Meade and Jeanloz \cite{Meade1988} made measurements of the shear strength of amorphous silica at pressures up to $81$ GPa at room temperature and showed that the strength initially decreases sharply as the material is compressed to denser structures of higher coordination and then rises again, Fig.~\ref{yl7fiU}a, resulting in a strongly non-convex elastic domain in the pressure-shear stress plane. Several authors \cite{Maloney2008, Schill:2018} have performed molecular dynamics calculations of amorphous solids deforming in pressure-shear and have found that the resulting deformation field forms distinctive patterns to accommodate permanent macroscopic deformations, Fig.~\ref{yl7fiU}b. Remarkably, whereas convex limit analysis is standard \cite{Lubliner:1990}, the case of non-convex elastic domains does not appear to have been studied. \begin{figure} \caption{a) Measurements of the shear yield strength of silica glass at pressures up to $81$ GPa at room temperature reveal a non-convex elastic domain in pressure-shear space \cite[Fig.~1]{Meade1988} \label{yl7fiU} \end{figure} More generally, we may consider static problems where the material response is expressed as \begin{equation}\label{stl2Vo} \varepsilon = \frac{\partial\chi}{\partial\sigma}(x, \sigma) , \end{equation} in terms of a complementary energy function $\chi$. {The} functional of interest is then the complementary energy \begin{equation}\label{l88Opb} \sigma \mapsto \int_{\Gamma_D} \sigma(x) \nu(x) \cdot \gD(x) \, d\mathcal{H}^{d-1} - \int_\mathrm{O}mega \chi(x, \sigma(x)) \, dx , \end{equation} to be minimized subject to the equilibrium constraints \begin{subequations}\label{fRoa1l} \begin{align} & \label{3U6iR0} {\rm div} \sigma(x) + b(x) = 0 , && \text{in } \mathrm{O}mega , \\ & \sigma(x) \nu(x) = h(x) , && \text{on } \Gamma_N , \end{align} \end{subequations} where $\sigma : \mathrm{O}mega \to \mathbb{R}^{{n}\times {n}}$ is a local stress field, $b : \mathrm{O}mega \to \mathbb{R}^{n}$ are body forces and $h : \Gamma_N \to \mathbb{R}^{n}$ applied tractions over the Neumann boundary $\Gamma_N\subseteq\partial\mathrm{O}mega$. If $\chi$ is non-convex, the question of relaxation again becomes non-standard and it may be expected to result in the development of microstructure in the form of rapidly oscillatory stress fields. A powerful mathematical tool for elucidating such questions is furnished by $\mathcal{A}$-quasiconvexity, introduced by Fonseca and M\"uller \cite{FonsecaMueller1999} as a necessary and sufficient condition for the sequential lower-semicontinuity of functionals of the form \begin{equation} (u,v) \mapsto \int_\mathrm{O}mega f(x, u(x), v(x)) \, dx , \end{equation} where $f : \mathrm{O}mega \times \mathbb{R}^m \times \mathbb{R}^d \to [0,+\infty)$ is a normal integrand, $\mathrm{O}mega \subseteq \mathbb{R}^n$ open and bounded, and $v$ must satisfy the differential constraint \begin{equation}\label{Bb8Xd8} \mathcal{A} \, v = 0 . 
\end{equation}
Here,
\begin{equation}\label{eqdefcalA}
\mathcal{A} \, v := \sum_{i=1}^n A^{(i)} \frac{\partial v}{\partial x_i} ,
\end{equation}
with {$A^{(i)} \in {\rm Lin}(\mathbb{R}^l; \mathbb{R}^d)$}, and $\mathcal{A}$ is a constant-rank partial differential operator. Specifically, $f(x, u, \cdot)$ is $\mathcal{A}$-quasiconvex if
\begin{equation}
f(x, u, v) \leq \int_Q f(x, u, v + w(y)) \, dy ,
\end{equation}
for all $v \in \mathbb{R}^d$ and all $w \in C^\infty(Q; \mathbb{R}^d)$ such that $\mathcal{A} w = 0$ and $w$ is $Q$-periodic, with $Q = (0,1)^n$. In particular, with $\mathcal{A} = {\rm curl}$, $\mathcal{A}$-quasiconvexity reduces to Morrey's notion of quasiconvexity. In the context of the static problem (\ref{l88Opb}) and (\ref{fRoa1l}), we may identify the state field $v$ with $\sigma$ and the operative differential operator $\mathcal{A}$ with ${\rm div}$. The pertinent notion of quasiconvexity is, therefore, {\sl ${\rm div}$-quasiconvexity}, acting on fields of symmetric $n\times n$ matrices. Whereas for kinematic problems of the energy-minimization type there is a well-developed theory of relaxation relating to ${\rm curl}$-quasiconvexity, the relaxation of static problems of the form (\ref{l88Opb}) and (\ref{fRoa1l}), relating instead to ${\rm div}$-quasiconvexity, has been less extensively studied. In this paper, we develop a theory of symmetric ${\rm div}$-quasiconvex relaxation for static problems. For definiteness, we confine attention to the static problem of limit analysis \cite{Lubliner:1990}
\begin{equation}\label{eqvarpb1}
\sup\{ F(\sigma): \sigma\in L^\infty(\Omega;K)\} .
\end{equation}
Here, $K \subseteq \mathbb{R}^{n\times n}_{\rm sym}$ is the elastic domain, which we assume to be compact, and
\begin{equation}\label{eqintrodefFsigma}
F(\sigma):=\inf_v \Big\{ \int_\Omega \sigma \cdot D v \, dx \, : \, v\in W^{1,1}(\Omega;\R^n), \ v = \gD \text{ on } \partial\Omega \Big\} ,
\end{equation}
where $\gD\in L^1(\partial\Omega;\R^n)$ gives the boundary data. The domain $\Omega$ is assumed to be a bounded Lipschitz domain. The stress field $\sigma$ is a divergence-free field which takes values in symmetric matrices. This symmetry sets the present setting apart from previous applications of $\div$-quasiconvexity, also denoted $\calS$-quasiconvexity or solenoidal quasi\-convexity, which have focused on {the characterization of the $\div$-quasiconvex hull of a 3-point set in relation with the three-well problem in linear elasticity \cite{GarroniNesi2004,PalombaroPonsiglione2004,PalombaroSmyshlyaev2009}} and on the Born-Infeld equations \cite{MuellerPalombaro2014}. We call the present setting {\sl symmetric $\div$-quasiconvexity}. In Section \ref{secsdqc}, we show how the concept of symmetric $\divv$-quasiconvexity fits within the framework of $\mathcal{A}$-quasiconvexity and discuss the relevant properties of \sdqclong\ functions, which mainly follow directly from \cite{FonsecaMueller1999}. We also present in Lemma \ref{lemmatartar} an important example of a nonconvex \sdqclong\ function. Section \ref{sec:relax} deals with $\divv$-quasiconvexity for sets and their hulls, in the context of relaxation theory. An important result, announced in \cite[Th. 1 and Th. 2]{Schill:2018}, is Theorem \ref{theoexist}, which shows that the variational problem (\ref{eqvarpb1}) has a solution if $K$ is \sdqclong.
We then discuss, in particular, the definition of the \sdqclong\ hull of a set $K$, which in principle depends on the growth of the class of test functions employed. However, we show that all $p\in(1,\infty)$ give equivalent definitions, Theorem \ref{theorempq}. Finally, Section \ref{sec:example} deals with the important case of sets $K$ that can be characterized in terms of the first two stress invariants alone and show how their \sdqclong\ hulls can be explicitly characterized. We recall that this elastic domain representation is the basis for a broad range of pressure-dependent plasticity models, including the Mohr-Coulomb model of sands (\cite{Lubliner:1990} and references therein), the Cam-Clay model of soils (\cite{Schofield:1968} and references therein), the Drucker-Prager model of pressure-dependent metal plasticity (\cite{Lubliner:1990} and references therein) and Gurson's model of porous metal plasticity \cite{Gurson:1977}. \section{Symmetric $\divv$-quasiconvex functions} \label{secsdqc} We start by giving the basic definitions and recalling the main results from \cite{FonsecaMueller1999}, specializing them to the case of interest here. \begin{definition} A Borel-measurable, locally bounded function $f:\R^{n\times n}_\sym\to\R$ is symmetric $\div$-quasiconvex if, for all $\varphi\in C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)$ which obey $\div \varphi=0$ everywhere, \begin{equation}\label{eqdefsdqc} f\bigl(\int_{(0,1)^n} \varphi\, dx\bigr)\le \int_{(0,1)^n} f(\varphi) dx\,. \end{equation} For $\xi\in\R^{n\times n}_\sym$, the \sdqclong\ envelope of $f:\R^{n\times n}_\sym\to \R$ is defined as \begin{equation}\label{eqdefqsdf} \begin{split} \Qsd f(\xi):=\inf\left\{ \int_{(0,1)^n} f(\varphi)dx:\right. &\varphi\in C^\infty_\per((0,1)^n;\R^{n\times n}_\sym), \\ &\left.\Div\varphi=0, \int_{(0,1)^n} \varphi\, dx =\xi\right\}. \end{split} \end{equation} \end{definition} We recall that $C^\infty_\per((0,1)^n)$ is the set of $\varphi\in C^\infty(\R^n)$ such that $\varphi(x+e_i) = \varphi(x)$ for $i=1,\dots,n$. \begin{remark} From the definition it follows that, if $f,g$ are \sdqclong, then so are $\max\{f,g\}$ and $f+\lambda g$, for any $\lambda\in[0,\infty)$. Furthermore, all convex functions are \sdqclong. \end{remark} For a generic first-order differential operator of the form given in (\ref{eqdefcalA}) and a wavevector $w\in\R^n\setminus\{0\}$, the linear operator $\bbA(w)\in \Lin(\R^m;\R^n)$ is defined as \begin{equation} \bbA(w):= \sum_{i=1}^n A^{(i)} w_i \end{equation} The general theory of $\mathcal{A}$-quasiconvexity requires that $\bbA$ be constant rank, in the sense that $\rank\bbA$ does not depend on $w$ (as long as $w\ne 0$). We first show that this condition holds in the present case and compute the characteristic cone. We recall that the characteristic cone is the union of the sets where $\bbA(w)$ vanishes, for $w\ne 0$, and that \sdqclong\ functions are convex in the directions of the characteristic cone. \begin{lemma}\label{lemmaconstantrank} The condition of being divergence-free is constant rank on symmetric $n\times n$ matrices. The characteristic cone consists of all non-invertible matrices and spans $\R^{n\times n}_\sym$. \end{lemma} \begin{proof} Let $J:\R^{n(n+1)/2}\to\R^{n\times n}_\sym$ be a linear bijection which maps $\{e_1\dots e_{n(n+1)/2}\}$ to $\{e_i\odot e_j\}_{1\le i\le j\le n}$. We recall that $(a\odot b)_{ij}:=\frac12 (a_ib_j+a_jb_i)$. 
We define the differential operator $\mathcal{A}^\sdiv$ on $C^\infty(\mathrm{O}mega;\R^{n(n+1)/2})$ as $\mathcal{A}^\sdiv \varphi := \div (J\varphi)$. The corresponding linear operator $\bbA^\sdiv(w)\in \Lin(\R^{n(n+1)/2};\R^n)$, for $w\in \R^n$, is defined by its action on a vector $\xi\in \R^{n(n+1)/2}$, \begin{equation} (\bbA^\sdiv(w)\xi)_i = \sum_{j=1}^n (J\xi)_{ij}w_j, \end{equation} which can be written as $\bbA^\sdiv(w)\xi = (J\xi)w$. For example, for $n=2$, \begin{equation} J\begin{pmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{pmatrix} =\begin{pmatrix} \xi_1 & \frac12\xi_3 \\ \frac12\xi_3 & \xi_2 \end{pmatrix} \end{equation} and \begin{equation} \begin{split} \mathcal{A}^\sdiv \begin{pmatrix} \varphi_1\\ \varphi_2\\ \varphi_3 \end{pmatrix} = \begin{pmatrix} \partial_1\varphi_1 + \frac12 \partial_2\varphi_3\\ \partial_2\varphi_2 + \frac12 \partial_1\varphi_3 \end{pmatrix}, \\ \bbA^\sdiv\begin{pmatrix} w_1\\ w_2 \end{pmatrix} \begin{pmatrix} \xi_1\\ \xi_2\\ \xi_3\end{pmatrix} = \begin{pmatrix} w_1\xi_1 + \frac12 w_2\xi_3\\ w_2\xi_2 + \frac12 w_1\xi_3 \end{pmatrix}. \end{split} \end{equation} We now show that the operator $\bbA^\sdiv(w)$ is surjective for every $w\in S^{n-1}$. Indeed, fix any vector $v\in\R^n$ and let $F^{v,w}\in\R^{n\times n}_\sym$ be such that $F^{v,w}w=v$ (for example, let $F^{v,w}=v\otimes w + w\otimes v - (v\cdot w) w\otimes w$). Then, choose $\xi:=J^{-1}(F^{v,w})$ to obtain $\bbA^\sdiv(w)J^{-1}(F^{v,w}) = F^{v,w}w=v$. Therefore, $\bbA^\sdiv(w)$ has rank $n$ for all $w\ne0$, and the constant-rank condition holds. The characteristic cone, first introduced by Murat and Tartar \cite{Murat1981,Tartar1979}, is defined as \begin{equation} \Lambda := \bigcup_{w\in S^{n-1}} \ker \bbA^\sdiv(w) \subseteq \R^{n(n+1)/2}. \end{equation} {In the present context, the cone $\Lambda$ may be identified (via the mapping $J$) with the set of non-invertible matrices,} \begin{equation}\label{eqcharcone} J\Lambda = \bigcup_{w\in S^{n-1}} \{\sigma \in \R^{n\times n}_\sym: \sigma w =0\} = \{\sigma \in \R^{n\times n}_\sym: \det\sigma=0\}. \end{equation} \end{proof} The following three results are essentially special cases of more general assertions that hold within the framework of $\mathcal{A}$-quasiconvexity in \cite{FonsecaMueller1999}. For convenience, we restate here the statements that are {needed} in the following. \begin{lemma}\label{lemmalambdaconvex} Let $f$ be \sdqclong. Then, it is convex along all non-invertible directions, in the sense that $f(\lambda A + (1-\lambda)B)\le \lambda f(A)+(1-\lambda) f(B)$ whenever $\lambda\in[0,1]$, $A,B\in\R^{n\times n}_\sym$, $\det (A-B)=0$. Furthermore, all such $f$ are locally Lipschitz continuous. \end{lemma} \begin{proof} If $f$ is upper semicontinuous, then the assertion follows directly from \cite[Prop. 3.4]{FonsecaMueller1999} using Lemma \ref{lemmaconstantrank}. Here, we give a direct proof without assuming upper semicontinuity. We first assume that there is a vector $\nu\in \Q^{n}\setminus\{0\}$ such that $(A-B)\nu=0$. We let $h:\R\to\{0,1\}$ be one-periodic, with $h(t)=0$ for $t\in(0,\lambda)$ and $h(t)=1$ for $t\in (\lambda,1)$. We choose $M\in\N$ such that $M\nu\in\Z^n$ and define $u(x):=A+(B-A) h(Mx\cdot \nu)$. From $Me_i\cdot\nu=M\nu_i\in\Z$, we deduce that $u(x+e_i)=u(x)$ for all $i$. Furthermore, $\div u=0$ {in the sense of distributions}, $|\{u=A\}\cap(0,1)^n|=\lambda$, and $|\{u=B\}\cap(0,1)^n|=1-\lambda$, which {implies} $\int_{(0,1)^n} u \, dx = \lambda A + (1-\lambda) B$. Let $\theta_\eps\in C^\infty_c(B_\eps)$ be a mollifier. 
Then, $u\ast \theta_\eps\in C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)$ and, therefore, by (\ref{eqdefsdqc}), we obtain
\begin{equation}
f(\lambda A +(1-\lambda)B)\le \int_{(0,1)^n} f(u\ast \theta_\eps) dx.
\end{equation}
Since $f$ is locally bounded, $u$ is bounded, and $|\{u\ast\theta_\eps\ne u\}\cap(0,1)^n|\to0$, taking the limit $\eps\to0$ we deduce
\begin{equation}
f(\lambda A+(1-\lambda) B)\le \int_{(0,1)^n} f(u)\, dx =\lambda f(A)+(1-\lambda) f(B)
\end{equation}
whenever $A$ and $B$ are such that $(A-B)\nu=0$ for some $\nu\in \Q^n$. In particular, $f$ is separately convex and finite-valued, hence locally Lipschitz continuous. Consider now any two matrices $A,B$ and a vector $w\in S^{n-1}$ such that $(A-B)w=0$. We choose $\nu_j\in \Q^n$ such that $\nu_j\to w$, which implies $(A-B)\nu_j\to0$. Let now $B_j:=B+(A-B)\nu_j\otimes \nu_j/|\nu_j|^2$. Then, $(A-B_j)\nu_j=0$, hence $f(\lambda A+(1-\lambda)B_j)\le \lambda f(A)+(1-\lambda) f(B_j)$. Taking $j\to\infty$, by continuity of $f$ we conclude the proof.
\end{proof}
\begin{lemma}\label{lemmalowersem}\label{lowersem}
\begin{enumerate}
\item \label{lowersem1} Let $f$ be \sdqclong, $u_j\weakstarto u$ weakly-$*$ in $L^\infty(\Omega;\R^{n\times n}_\sym)$, $\div u_j=0$ {in the sense of distributions}. Then,
\begin{equation}
\int_\Omega f(u(x))dx\le \liminf_{j\to\infty} \int_\Omega f(u_j(x))dx.
\end{equation}
\item Let $f$ be \sdqclong, $f(\xi)\le c(|\xi|^p+1)$ for some $p\in[1,\infty)$, $u_j\weakto u$ weakly in $L^p(\Omega;\R^{n\times n}_\sym)$, $\div u_j=0$ {in the sense of distributions}. Then,
\begin{equation}
\int_\Omega f(u(x))dx\le \liminf_{j\to\infty} \int_\Omega f(u_j(x))dx.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Lemma \ref{lemmalambdaconvex} shows that $f$ is continuous. The result then follows immediately from \cite[Th. 3.7]{FonsecaMueller1999} using Lemma \ref{lemmaconstantrank}.
\end{proof}
\begin{lemma}\label{lemmaqsdqc}
Let $f\in C^0(\R^{n\times n}_\sym;[0,\infty))$. Then, $\Qsd f$ is \sdqclong.
\end{lemma}
\begin{proof}
Follows from \cite[Prop. 3.4]{FonsecaMueller1999}.
\end{proof}
We now recall an important example of a nontrivial \sdqclong\ function, due to Luc Tartar.
\begin{lemma}[From \cite{Tartar1985}]\label{lemmatartar}
The function $f_\mathrm{T}:\R^{n\times n}_\sym\to\R$, $f_\mathrm{T}(\sigma):=(n-1)|\sigma|^2-(\Tr\sigma)^2$, is symmetric $\div$-quasiconvex.
\end{lemma}
For completeness, we provide a short proof of this result, which plays an important role in the explicit examples discussed in Section \ref{sec:example}.
\begin{proof}
We first observe that, for any matrix $A\in \C^{n\times n}$, we have
\begin{equation}\label{eqrankAAtrA}
(\rank A) |A|^2 \ge |\Tr A|^2.
\end{equation}
To verify this inequality, it suffices to write $A$ in a basis in which only the first $\rank A$ diagonal entries are nonzero and then to use on this set the basic inequality $|\sum_i A_{ii}|^2\le (\rank A) \sum_i |A_{ii}|^2$. We now show that for any $\varphi \in C^1_\per ((0,1)^n;\R^{n\times n})$ with $\div \varphi=0$ the functional $I(\varphi):=\int_{(0,1)^n} f_\mathrm{T}(\varphi(x))dx$ is nonnegative.
Indeed, letting $\hat \varphi_\lambda$ be the Fourier coefficients of $\varphi$, by Plancharel's theorem we have \begin{equation} \int_{(0,1)^n} f_\mathrm{T}(\varphi)\, dx=\sum_{\lambda\in 2\pi \Z^n} \left[(n-1) |\hat\varphi_\lambda|^2 - |\Tr\hat\varphi_\lambda|^2\right]\ge0 , \end{equation} where we have used (\ref{eqrankAAtrA}) and the fact that $\div\varphi=0$ implies $\hat\varphi_\lambda\lambda = 0$ and therefore $\rank\hat\varphi_\lambda\le n-1$. Let now $\varphi$ be as in the definition of $\div$-quasiconvexity, $\xi:=\int_{(0,1)^n} \varphi \, dx$. Since $f_\mathrm{T}$ is quadratic and $\varphi-\xi$ has average zero, expanding we obtain \begin{equation} \int_{(0,1)^n} f_\mathrm{T}(\varphi) dx = f_\mathrm{T}(\xi) +\int_{(0,1)^n} f_\mathrm{T}(\varphi-\xi) dx \ge f_\mathrm{T}(\xi). \end{equation} \end{proof} We close this section with a brief discussion of the relation to $\div$-quasiconvexity. In particular, we show that symmetric $\div$-quasiconvexity is not equivalent to $\div$-quasiconvexity composed with projection to symmetric matrices. We recall that a Borel-measurable, locally bounded function $f:\R^{m\times n}\to\R$ is $\div$-quasiconvex if, for every $\varphi\in C^\infty_\per((0,1)^n;\R^{m\times n})$ such that $\div \varphi=0$ everywhere, \begin{equation} f(\int_{(0,1)^n} \varphi\,dx)\le \int_{(0,1)^n} f(\varphi) dx\,. \end{equation} \newcommand\symf{\mathcal{S}f} \begin{lemma} For a given function $f:\R^{n\times n}_\sym\to\R$, we define $\symf:\R^{n\times n}\to\R$ as $\symf(\xi) := f((\xi+\xi^T)/2)$. If $\symf$ is $\div$-quasiconvex, then $f$ is symmetric $\div$-quasiconvex. However, there are symmetric $\div$-quasiconvex functions $f$ such that the corresponding $\symf$ is not $\div$-quasiconvex. \end{lemma} \begin{proof} In order to prove that $f$ is symmetric $\div$-quasiconvex, we pick $\varphi \in C^\infty_\per((0,1)^n; \R^{n\times n}_\sym)$ with $\div \varphi=0$ and observe that \begin{equation} f(\int_{(0,1)^n} \varphi\,dx)=\symf(\int_{(0,1)^n} \varphi\,dx)\le \int_{(0,1)^n} \symf(\varphi)dx=\int_{(0,1)^n} f(\varphi)dx. \end{equation} For the converse implication, we consider $n=2$ and $f(F)=\det(F)$, so that \begin{equation} \symf(F)=\det\frac{F+F^T}2 = \det F - \frac14 (F_{12}-F_{21})^2. \end{equation} We first check that $f$ is symmetric $\div$-quasiconvex. Let $\xi\in \R^{2\times 2}_\sym$, $\varphi\in C^\infty_\per([0,1]^2;\R^{2\times 2}_\sym)$ with $\div\varphi=0$ and $\int_{(0,1)^n} \varphi dx=0$. Then, there is $v\in C^\infty(\R^2;\R^2)$ with {$Dv={}^\perp\varphi^{\perp}$}, {where by this compact notation we mean $Dv=R\varphi R$, with $R=e_1\otimes e_2-e_2\otimes e_1$.} Since $\varphi$ has average $0$ and is periodic, we can choose $v\in C^\infty_\per([0,1]^2;\R^2)$. In particular, \begin{equation} \int_{[0,1]^2} f(\xi+\varphi)dx = \det \xi + \int_{[0,1]^2} \det Dv dx = \det\xi = f(\xi). \end{equation} At the same time, the function $\varphi(x):=e_1\otimes e_2 \sin(2\pi x_1)$ is $[0,1]^2$-periodic, divergence-free, has average 0, and gives \begin{equation} \int_{[0,1]^2} \symf(\varphi) dx= -\frac14 \int_{[0,1]^2} \sin^2(2\pi x_1) dx = -\frac{1}8 < 0=\symf(0). \end{equation} \end{proof} \section{Symmetric $\divv$-quasiconvex sets and hulls} \label{sec:relax} \subsection{Symmetric $\divv$-quasiconvex sets} In this section, we discuss symmetric $\div$-quasiconvexity of sets and their hulls. As in the case of quasiconvexity, there are different possible definitions of the hulls, depending on the growth that is assumed. 
For quasiconvexity, it has been shown that the $p$-quasiconvex hull of a compact set does not depend on the assumed growth $p$. The key technical ingredient is Zhang's truncation lemma, see \cite{Zhang1992}. In the present setting, we can only prove the corresponding result for $1<p<\infty$, since the bounds on the potentials of the oscillatory fields are based on singular-integral estimates which only hold in that range; see Lemma \ref{lemmapotentials} below. For clarity, we give separate definitions for $p\in[1,\infty]$.
\begin{definition}\label{defKsdqc}
A compact set $K \subseteq\R^{n\times n}_\sym$ is \sdqclong\ if, for any $\xi\in\R^{n\times n}_\sym\setminus K$, there is a symmetric ${\rm div}$-quasiconvex function $g\in C^0(\R^{n\times n}_\sym;[0,\infty))$ such that $g(\xi)>\max g(K)$. A compact set $K \subseteq\R^{n\times n}_\sym$ is $p$-\sdqclong, with $p\in[1,\infty)$, if the function $g$ can be chosen to have $p$-growth, in the sense that $g(\sigma)\le c(|\sigma|^p+1)$ for some $c\in\R$ and all $\sigma\in\R^{n\times n}_\sym$.
\end{definition}
We remark that the function $g$ can be chosen so that it vanishes on $K$ by replacing it with $\hat g:=\max\{g-\max g(K),0\}$. It is clear that if $K$ is $p$-\sdqclong\ for some $p$ then it is \sdqclong. As in the case of quasiconvexity, the definition for non-compact sets depends crucially on growth and many variants are possible. We do not discuss this case here.
\def\no{
\begin{lemma}
If $K$ is \sdqclong\ then there is a \sdqclong\ function $g\in C^0(\R^{n\times n}_\sym;[0,\infty))$ with $K=\{g=0\}$. If $K$ is $p$-\sdqclong\ then $g$ can be chosen to have $p$ growth.
\end{lemma}
\begin{proof}
First observe that the function $g$ in Definition \ref{defKsdqc} can be chosen to vanish on $K$, by replacing it by $\hat g=\max\{g-\max g(K),0\}$.
\end{proof}
}
\begin{lemma}\label{lemmaLinftyKclosed}
Let $K\subseteq\R^{n\times n}_\sym$ be compact and symmetrically $\div$-quasiconvex, and let $E:=\{\sigma\in L^\infty(\Omega;K): \div \sigma=0\}$. Then, $E$ is closed with respect to weak-$*$ convergence in $L^\infty(\Omega;\R^{n\times n}_\sym)$.
\end{lemma}
\begin{proof}
Let $\sigma_j\in E$ be such that $\sigma_j\weakstarto\sigma$ in $L^\infty(\Omega;\R^{n\times n}_\sym)$. For any $\xi\in\R^{n\times n}_\sym\setminus K$, there is a \sdqclong\ function $g_\xi\in C^0(\R^{n\times n}_\sym;[0,\infty))$ which vanishes on $K$ and satisfies $g_\xi(\xi)>0$. By continuity, $g_\xi>0$ on $B_{r_\xi}(\xi)$, for some $r_\xi>0$. The set $\R^{n\times n}_\sym\setminus K$ can be covered by countably many such balls $B_i$. Let $g_i$ be the corresponding functions. It suffices to show that $\{x: \sigma(x)\in B_i\}$ is a null set for any $i$. By Lemma \ref{lowersem}(\ref{lowersem1}), recalling that $\sigma_j\in K$ almost everywhere for all $j$, we obtain $\int_\Omega g_i(\sigma)dx\le \liminf_{j\to\infty} \int_\Omega g_i(\sigma_j)dx=0$. This implies that $g_i(\sigma(x))= 0$ almost everywhere. Since $g_i>0$ on $B_i$, we obtain that $\{x: \sigma(x)\in B_i\}$ is a null set, which concludes the proof.
\end{proof}
We are now ready to prove our first main result, namely, an existence statement for static problems with \sdqclong\ yield sets. We refer to the introduction for the formulation and the main definitions and recall, in particular, that $\gD\in L^1(\partial\Omega;\R^n)$ denotes the boundary data.
\begin{theorem}\label{theoexist} If $K$ is nonempty and \sdqclong, then $F$ is weakly upper semicontinuous and the problem defined in (\ref{eqvarpb1}) {and (\ref{eqintrodefFsigma})} has a solution $\sigma_*\in L^\infty(\mathrm{O}mega;K)$, which obeys $\div\sigma_*=0$ {in the sense of distributions}. \end{theorem} \begin{proof} We first prove that $\sup F\in\R$. Let $\xi_0\in K$. Using the constant function $\sigma=\xi_0$ gives \begin{equation} F(\xi_0)=\xi_0\cdot \int_\mathrm{O}mega D v\, dx = \xi_0\int_{\partial\mathrm{O}mega} \gD\otimes \nu d\mathcal{H}^{n-1}\in\R, \end{equation} hence $\sup F\ne-\infty$. By the trace theorem for $W^{1,1}$ (see for example \cite[p. 168]{AmbrosioFP}), we can extend $\gD$ to a function $W^{1,1}(\mathrm{O}mega;\R^n)$, which we shall also denote $\gD$. For any $\sigma\in L^\infty(\mathrm{O}mega;K)$ we have \begin{equation} F(\sigma)\le \int_\mathrm{O}mega \sigma \cdot D \gD \, dx\le \|\gD\|_{W^{1,1}}\max\{|\xi|: \xi\in K\}, \end{equation} hence $\sup F\ne +\infty$. Next, we show that only fields $\sigma$ that are divergence-free need be considered. If we assume additional regularity, then an integration by parts gives \begin{equation} \int_\mathrm{O}mega \sigma \cdot D v\, dx = \int_{\partial\mathrm{O}mega} \sigma \gD\cdot \nu d\mathcal{H}^{n-1} - \int_\mathrm{O}mega v \cdot \div\sigma \, dx, \end{equation} which does not contain any derivative of $v$. In particular, the $\inf$ is $-\infty$ unless $\div\sigma=0$ almost everywhere. Consider now a generic $\sigma\in L^\infty(\mathrm{O}mega;\R^{n\times n}_\sym)$. If $\div \sigma\ne0$ {in the sense of distributions}, then there is $\theta\in C^\infty_c(\mathrm{O}mega;\R^n)$ such that $\int_\mathrm{O}mega \sigma\cdot D\theta\, dx\ne 0$. We consider the one-parameter family of test functions $v_t:=\gD+t\theta$ and obtain \begin{equation} F(\sigma) \le \int_\mathrm{O}mega \sigma\cdot Dv_t\, dx = \int_\mathrm{O}mega \sigma\cdot D\gD\, dx + t \int_\mathrm{O}mega \sigma\cdot D\theta \,dx\,\,\, \text{ for all } t\in\R , \end{equation} which shows that $F(\sigma)=-\infty$. Therefore, we can restrict attention to fields $\sigma$ that are divergence-free {in the sense of distributions}. Let $\sigma_k\in L^\infty(\mathrm{O}mega;K)$ be a maximizing sequence. By the preceding argument, $\div\sigma_k=0$ {in the sense of distributions}. Since the sequence is bounded in $L^\infty$, after extracting a subsequence it converges weak-$*$ to some $\sigma_*$, by the properties of distributions $\div \sigma_*=0$. Lemma \ref{lemmaLinftyKclosed} implies that $\sigma_*\in K$ almost everywhere. Hence, we only need to show that it is a maximizer. For any $v\in W^{1,1}(\mathrm{O}mega;\R^n)$ with $v=\gD$ on the boundary we have \begin{equation} \int_\mathrm{O}mega \sigma_*\cdot D v \, dx = \lim_{k\to\infty} \int_\mathrm{O}mega \sigma_k\cdot D v \, dx \ge \limsup_{k\to\infty} F(\sigma_k), \end{equation} hence, \begin{equation} F(\sigma_*)\ge \limsup_{k\to\infty} F(\sigma_k) = \sup F. \end{equation} \end{proof} \subsection{Symmetric $\div$-quasiconvex hulls} We now deal with the case that $K$ is not \sdqclong. Within the framework of relaxation theory, we begin by defining the \sdqclong\ hull. \begin{definition}\label{defKp} Let $K\subseteq\R^{n\times n}_\sym$ be compact, {$p\in[1,\infty)$,} $f_p(\xi):=\dist^p(\xi,K)$. 
We define \begin{equation} \Kp:=\{\xi\in \R^{n\times n}_\sym: \Qsd f_p(\xi)=0\} \end{equation} and \begin{equation} \begin{split} \Kinfty := \{& {\xi} \in\R^{n\times n}_\sym \, : \, g(\xi) \le \max g(K) \\ &\text{ for all symmetric ${\rm div}$-quasiconvex $g\in C^0(\R^{n\times n}_\sym;[0,\infty))$} \} . \end{split} \end{equation} \end{definition} \begin{lemma}\label{smallestsdqc} $\Kinfty$ is the smallest \sdqclong\ compact set that contains $K$. $\Kp$ is the smallest $p$-\sdqclong\ compact set that contains $K$. \end{lemma} As usual, the first assertion means that any \sdqclong\ compact set that contains $K$ also contains $\Kinfty$, and analogously for the second. \begin{proof} We start by $\Kp$. By Lemma \ref{lemmaqsdqc} the function $\Qsd f_p$ is \sdqclong. From $\Qsd f_p\le f_p$ it follows that $\Qsd f_p$ has $p$-growth and that $K\subseteq\Kp$. If $\xi\in\R^{n\times n}_\sym \setminus\Kp$, then $\Qsd f_p(\xi)>0=\max\Qsd f_p(\Kp)$. Therefore, $\Kp$ is $p$-\sdqclong. To show minimality, we consider a $p$-\sdqclong\ compact set $\tilde K$ with $K\subseteq\tilde K$ and show that $\Kp\subseteq\tilde K$. To this end, we fix a $\xi\in\Kp$ and a \sdqclong\ function $g$ with $p$ growth and show that $g(\xi)\le \max g(K)\le \max g(\tilde K)$. If this holds for any such function $g$, then necessarily $\xi\in \tilde K$, which implies $\Kp\subseteq\tilde K$ and concludes the proof. It remains to show that $g(\xi)\le \max g(K)$. Let $\eps>0$. Since $g$ is continuous and $f_p>0$ outside $K$, there is $\delta>0$ such that $g(\sigma)\le \max g(K)+\eps$ for all $\sigma$ with $f_p(\sigma)\le \delta$. Using the fact that $g$ has $p$-growth, we then obtain $g\le \max g(K)+\eps + C_\eps f_p$ pointwise. By monotonicity of the \sdqclong\ envelope, this gives $g=\Qsd g \le \max g(K)+\eps+C_\eps \Qsd f_p$ pointwise and, therefore, $g(\xi)\le \max g(K)+\eps$. Since $\eps$ is arbitrary, this concludes the proof. We now treat the $p=\infty$ case. The fact that $K\subseteq\Kinfty$ is obvious. To show that $\Kinfty$ is \sdqclong, we pick $\xi\not\in \Kinfty$. By the definition of $\Kinfty$, there is a \sdqclong\ function $g$ with $g(\xi)>\max g(K)$. At the same time, for any $\sigma\in\Kinfty$ it follows that $g(\sigma)\le \max g(K)$, which implies $\max g(\Kinfty)=\max g(K)$. We conclude that $g(\xi)>\max g(\Kinfty)$, which shows that $\Kinfty$ is \sdqclong. To show minimality, we assume that $\tilde K$ is \sdqclong\ and $K\subseteq\tilde K$. We wish to show that $\Kinfty\subseteq\tilde K$. To this end, we fix a $\xi\in\R^{n\times n}_\sym\setminus\tilde K$ and choose a \sdqclong\ function $g$ with $g(\xi)>\max g(\tilde K)$. From $K\subseteq\tilde K$, we obtain $\max g(\tilde K)\ge \max g(K)$. Therefore, $\xi\not\in \Kinfty$. This implies $\Kinfty\subseteq\tilde K$ and concludes the proof. \end{proof} We proceed to show that $\Kp$ does not depend on $p$, as long as $p\ne\infty$. One inclusion can easily be obtained from the definition. The other will be discussed in Section \ref{sectrunc} below. \begin{theorem}\label{theorempq} Let $K\subseteq\R^{n\times n}_\sym$ be compact, $1<p<q<\infty$. Then, $K^{(p)}=K^{(q)}$. \end{theorem} \begin{proof} Follows from Lemma \ref{lemmapqsimple} and Lemma \ref{lemmapqdifficult} below. \end{proof} \begin{definition} Let $K\subseteq\R^{n\times n}_\sym$ be compact. For every $p\in(1,\infty)$, we set $\Ksdqc=\Kp$. {This is admissible by Theorem \ref{theorempq}.} \end{definition} \begin{lemma}\label{lemmapqsimple} Let $K\subseteq\R^{n\times n}_\sym$ be compact. 
Then, $K^{(q)}\subseteq K^{(p)}$ for any $p,q$ with $1\le p < q \le \infty$. \end{lemma} \begin{proof} Assume first that $q<\infty$. We write $f_p(\xi):=\dist^p(\xi,K)$ and, analogously, $f_q$. For all $\delta>0$, we have \begin{equation} f_p \le \delta^p + \frac{1}{\delta^{q-p}} f_q \end{equation} and, therefore, \begin{equation} \Qsd f_p \le \delta^p + \delta^{p-q} \Qsd f_q. \end{equation} Let now $\xi\in K^{(q)}$, so that $\Qsd f_q(\xi)=0$. The above inequality implies that $\Qsd f_p(\xi)\le \delta^p$ for any $\delta>0$. We conclude that $\Qsd f_p(\xi)=0$ and $K^{(q)}\subseteq K^{(p)}$. If, instead, $q=\infty$, it suffices to observe that the function $\Qsd f_p$ is \sdqclong\ (Lemma \ref{lemmaqsdqc}). Therefore, it is one of the candidates in the definition of $\Kinfty$. Since $\Qsd f_p=0$ on $K$, we obtain that, necessarily, $\Qsd f_p=0$ on $\Kinfty$. Hence, $\Kinfty\subseteq \Kp$. \end{proof} \begin{remark} By analogy with the case of quasiconvexity, one might expect that $K^{(p)}=\Kinfty$ for every $p\in[1,\infty)$ and every compact set $K$. This property holds in dimension $n=2$, since $\div$-quasiconvexity is equivalent to quasiconvexity composed with a $90$-degree rotation. We do not know if the statement is true in higher dimensions. \end{remark} \begin{lemma}\label{lemmachangevariables} Let $K\subseteq\R^{n\times n}_\sym$ be compact, $A\in\R^{n\times n}$ invertible, $B\in\R^{n\times n}_\sym$. Then, \begin{equation} (AKA^T+B)^\sdqc=AK^\sdqc A^T+B \end{equation} {and \begin{equation} (AKA^T+B)^{(\infty)}=A\Kinfty A^T+B. \end{equation} }\end{lemma} \begin{proof} We shall prove below that \begin{equation}\label{eqcv1} (AKA^T+B)^\sdqc\subseteq AK^\sdqc A^T+B. \end{equation} In order to derive the other inclusion, we then consider the set $\tilde K:=AKA^{T}+B$, so that $K=A^{-1}(\tilde K-B)A^{-T}$. Application of (\ref{eqcv1}) to $\tilde K$ gives \begin{equation} K^\sdqc = (A^{-1}\tilde KA^{-T} - A^{-1}BA^{-T})^\sdqc \subseteq A^{-1}\tilde K^\sdqc A^{-T} - A^{-1}BA^{-T}. \end{equation} Multiplying on the left by $A$ and on the right by $A^T$ yields \begin{equation} A K^\sdqc A^T \subseteq \tilde K^\sdqc -B, \end{equation} which, recalling the definition of $\tilde K$, is the desired second inclusion. It remains to prove (\ref{eqcv1}). We consider the set $H:=AK^\sdqc A^T+B$. It is obvious that $AKA^T+B\subseteq H$. If we can prove that $H$ is $p$-\sdqclong, then Lemma \ref{smallestsdqc} implies $(AKA^T+B)^\sdqc\subseteq H$ and concludes the proof. In order to show that $H$ is $p$-\sdqclong, we fix a symmetric matrix $\hat\sigma\not\in H$ and show that there is a \sdqclong\ function $f$ with $p$-growth such that $f(\hat\sigma)>\max f(H)$. Theorem \ref{theorempq} shows that $p\in(1,\infty)$ can be chosen arbitrarily. In the case of $\Kinfty$, the requirement of $p$-growth does not apply. We define $\sigma:=A^{-1}(\hat\sigma-B)A^{-T}$, so that $\hat\sigma=A\sigma A^T+B$. The definitions of $H$ and $\hat\sigma$ show that $\sigma\not\in K^\sdqc$. Since $K^\sdqc$ is $p$-\sdqclong, there is a \sdqclong\ function $g$ with $p$-growth such that $g(\sigma)>\max g(K^\sdqc)$. We define $f(\xi):=g(A^{-1}(\xi-B) A^{-T})$, so that $f(\hat\sigma)>\max f(H)$. Growth and continuity are automatically inherited from $g$. To conclude the proof it remains to show that $f$ is \sdqclong. To this end, pick some $\varphi\in C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)$ with $\div\varphi=0$ and let $\xi:=\int_{(0,1)^n} \varphi \,dx$. 
For some $F\in\R^{n\times n}$ chosen below, we define $\psi(x):=A^{-1}(\varphi(Fx)-B)A^{-T}$ and compute \begin{equation} \psi_{ij}(x)=\sum_{\alpha,\beta} A^{-1}_{i\alpha} \varphi_{\alpha\beta}(Fx) A^{-1}_{j\beta}-A^{-1}_{i\alpha}B_{\alpha\beta} A^{-1}_{j\beta} \end{equation} and \begin{equation} \partial_k \psi_{ij}(x)=\sum_{\alpha,\beta,\gamma} A^{-1}_{i\alpha} \partial_\gamma \varphi_{\alpha\beta}(Fx) A^{-1}_{j\beta} F_{\gamma k}. \end{equation} Therefore, \begin{equation} (\div \psi)_i(x)=\sum_{\alpha,\beta,\gamma,j} A^{-1}_{i\alpha} \partial_\gamma \varphi_{\alpha\beta}(Fx) A^{-1}_{j\beta} F_{\gamma j}. \end{equation} {We choose} $F:=A$, so that $\sum_j A^{-1}_{j\beta} F_{\gamma j}=\Id_{\beta\gamma}$ and \begin{equation} (\div \psi)_i(x)=\sum_{\alpha,\beta} A^{-1}_{i\alpha} \partial_\beta \varphi_{\alpha\beta}(Fx) =0. \end{equation} Recalling the definitions of $f$ and $\psi$, we compute \begin{equation} \begin{split} \int_{(0,1)^n} f(\varphi(x))dx=& \int_{(0,1)^n} g(A^{-1}(\varphi(x)-B) A^{-T})dx\\ =& \int_{(0,1)^n} g(\psi(A^{-1}x))dx = \det A\int_{A^{-1}(0,1)^n} g(\psi(y))dy. \end{split} \end{equation} The function $\psi$ is $A^{-1}(0,1)^n$-periodic and has average $A^{-1}(\xi-B)A^{-T}$. The maps $u_j(x):=\psi(jx)$ are divergence-free and converge weakly in $L^{\infty}(\R^n;\R^{n\times n}_\sym)$ to their average, which is $A^{-1}(\xi-B)A^{-T}$. The functions $x\mapsto g(u_j(x))=g(\psi(jx))$ are equally periodic and converge weakly to their average, which is the last expression in the previous equation. Since $g$ is \sdqclong, recalling the lower semicontinuity (Lemma \ref{lemmalowersem}) we conclude \begin{equation} g(A^{-1}(\xi-B)A^{-T})\le \det A\int_{A^{-1}(0,1)^n} g(\psi(y))dy, \end{equation} and recalling the definition of $g$ and the previous computation this gives \begin{equation} f(\xi)\le \int_{(0,1)^n} f(\varphi(x))dx. \end{equation} Therefore, $f$ is \sdqclong. This concludes the proof. \end{proof} \begin{lemma}\label{lemmaranktwo} Let $K\subseteq\R^{n\times n}_\sym$ be compact. If $A,B\in K^\sdqc$ and $\rank(A-B)<n$ then $\lambda A+(1-\lambda)B\in K^\sdqc$ for all $\lambda\in[0,1]$. The corresponding assertion holds for $\Kinfty$. \end{lemma} \begin{proof} The proof follows immediately from the definition and Lemma \ref{lemmalambdaconvex}. Indeed, the assumption gives $\Qsd f_p(A)=\Qsd f_p(B)=0$. Since $\Qsd f_p$ is \sdqclong, it is convex in the direction of $B-A$, and $\Qsd f_p(\lambda A+(1-\lambda)B)=0$. In the case of $\Kinfty$, we consider any \sdqclong\ function $f\in C^0(\R^{n\times n}_\sym;[0,\infty))$, and deduce as above $f(\lambda A+(1-\lambda)B)\le\lambda f(A)+(1-\lambda) f(B)\le \max f(\Kinfty)$. By the definition of $\Kinfty$, we obtain $\max f(\Kinfty)=\max f(K)$ and, therefore, $f(\lambda A+(1-\lambda)B)\le \max f(K)$. \end{proof} In closing this section, we present an explicit example in which $K$ consists of two matrices. \begin{lemma}\label{lemmatwomatrix} Let $K:=\{A,B\}\subseteq\R^{n\times n}_\sym$. If $\rank(A-B)=n$, then $K^\sdqc=\Kinfty=K$. Otherwise, $K^\sdqc=\Kinfty=[A,B]$, where $[A,B]$ is the segment with endpoints $A$ and $B$. \end{lemma} \begin{proof} The function $f(\xi):=\dist(\xi,[A,B])$ is convex, hence \sdqclong, therefore $K^\sdqc\subseteq[A,B]$. If $\rank(B-A)<n$, Lemma \ref{lemmaranktwo} shows that $[A,B]\subseteq \Kinfty\subseteq K^\sdqc$ and concludes the proof. Assume now that $\rank(B-A)=n$. 
By Lemma \ref{lemmachangevariables}, it suffices to consider the case $A=\Id$, $B=-\Id$ and we need only show that no matrix of the form $t\Id$, $t\in (-1,1)$, belongs to $K^\sdqc$. Let $f(\xi):=((n-1)|\xi|^2-(\Tr \xi)^2+n)_+$. Lemma \ref{lemmatartar} implies that $f$ is \sdqclong, and we verify that $f(\Id)=f(-\Id)=0$. However, $f(t\Id)=n(1-t^2)>0$ for all $t\in(-1,1)$, hence $t\Id\not\in K^\sdqc$. \end{proof} \subsection{Truncation of symmetric divergence-free fields} \label{sectrunc} In the remainder of this Section, we prove that $K^{(p)}$ does not depend on $p$, for $p\in(1,\infty)$. This proof requires truncation and approximation of vector fields that satisfy differential constraints, which is made much easier by working with the corresponding potentials. Following \cite{Conti:2018}, we introduce a stress potential $\apot$, which is related to the field $\sigma$ by $\sigma=\Div\Div \apot$, in a sense we now make precise. Let $\Rvierst$ be the set of $\zeta\in \R^{n\times n\times n\times n}$ such that \begin{equation}\label{eqdefRvierst} \zeta_{ijhk}=\zeta_{jikh}=-\zeta_{ihjk} \hskip5mm \text{ for all } i,j,k,h\in\{1, 2, \dots, n\}. \end{equation} For $\apot\in L^1_\loc(\R^n;\Rvierst)$ we define the distribution \begin{equation} (\Div\Div \apot)_{ij} = \sum_{h,k}\partial_{h}\partial_{k} \apot_{ijhk}. \end{equation} We observe that, by (\ref{eqdefRvierst}), $\Div (\Div \Div \apot)=0$ and $\Div \Div \apot=(\Div \Div \apot )^T$. Therefore, every potential generates a divergence-free symmetric matrix field. In order to construct potentials, we start from a fixed matrix $M\in\R^{n\times n}_\sym$ and define $\apot^M:\R^n\to\Rvierst$ as \begin{equation}\label{eqdefaM} \begin{split} \apot^M(x)_{ijhk}=\frac1{n(n-1)} &\bigl( M_{ij}x_hx_k+M_{hk}x_ix_j - M_{ih}x_jx_k - M_{kj}x_hx_i \bigr). \end{split} \end{equation} A straightforward computation shows that $\Div\Div \apot^M=M$, with $|\apot^M|(x)\le 2|x|^2|M|$, $|D\apot^M|(x)\le 4|x|\, |M|$, $|D^2\apot^M|(x)\le 4|M|$ for all $x\in\R^n$, $n\ge 2$. Working in Fourier space, this procedure can be generalized to any divergence-free symmetric matrix field. \begin{lemma}\label{lemmapotentials} \begin{enumerate} \item \label{lemmapotentialssmooth} Let $w\in C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)$ with $\div w=0$ and $\int_{(0,1)^n} w\, dx=0$. Then, there is $\apot\in C^\infty_\per((0,1)^n;\Rvierst)$ such that $\Div\Div\apot=w$. The map $w\mapsto \apot$ is linear. \item\label{lemmapotentialsLp} Let $w\in L^p((0,1)^n;\R^{n\times n}_\sym)$ for some $p\in (1,\infty)$, $\div w=0$, $\int_{(0,1)^n} w\, dx=0$. Then, there is $\apot\in W^{2,p}_\per((0,1)^n;\Rvierst)$, with $\|D^2\apot\|_{p}\le c \|w\|_p$ and $\div\div\apot=w$. The map $w\mapsto\apot$ is linear and extends the map in \ref{lemmapotentialssmooth}. \item\label{lemmapotentialsLpLq} Let $w=w_p+w_q$, with $w_p\in L^p((0,1)^n;\R^{n\times n}_\sym)$, $w_q\in L^q((0,1)^n;\R^{n\times n}_\sym)$ for some $p,q\in (1,\infty)$, $\div w=0$, $\int_T w_p\, dx=\int_T w_q\, dx=0$. Then, there are $\apot_p\in W^{2,p}_\per((0,1)^n;\Rvierst)$, with $\|D^2\apot_p\|_{p}\le c \|w_p\|_p$, and $\apot_q\in W^{2,q}_\per((0,1)^n;\Rvierst)$, with $\|D^2\apot_q\|_{q}\le c \|w_q\|_q$, such that $\Div\Div(\apot_p+\apot_q)=w$. \end{enumerate} \end{lemma} We stress that \ref{lemmapotentialsLpLq} does not assert $\Div\Div \apot_p=w_p$. 
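Before giving the proof, and purely for the reader's convenience, we record the elementary computation behind the identity $\Div\Div \apot^M=M$ claimed after (\ref{eqdefaM}); it is not needed in what follows. For fixed indices $i,j$ one has, summing over $h,k\in\{1,\dots,n\}$,
\begin{equation*}
\sum_{h,k}\partial_h\partial_k(x_hx_k)=n^2+n,\hskip1cm
\sum_{h,k}M_{hk}\,\partial_h\partial_k(x_ix_j)=2M_{ij},
\end{equation*}
\begin{equation*}
\sum_{h,k}M_{ih}\,\partial_h\partial_k(x_jx_k)=(n+1)M_{ij},\hskip1cm
\sum_{h,k}M_{kj}\,\partial_h\partial_k(x_hx_i)=(n+1)M_{ij},
\end{equation*}
where the symmetry of $M$ was used in the second identity. Combining these with the signs and the normalization in (\ref{eqdefaM}) gives
\begin{equation*}
(\Div\Div \apot^M)_{ij}=\frac{(n^2+n)+2-2(n+1)}{n(n-1)}\,M_{ij}=M_{ij}.
\end{equation*}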
\begin{proof} \ref{lemmapotentialssmooth}: Let $\hat w:2\pi \Z^n\to \R^{n\times n}_\sym$ be the Fourier coefficients of $w$, so that \begin{equation} w(x)=\sum_{\lambda\in 2\pi \Z^n} \hat w(\lambda) e^{i\lambda\cdot x}. \end{equation} The assumptions on $w$ imply $\hat w(0)=0$, $\hat w_{ij}=\hat w_{ji}$ and $\sum_j\hat w_{ij}\lambda_j=0$. We define, in analogy to (\ref{eqdefaM}), $\hat\apot(0)=0$ and, for $\lambda\in 2\pi\Z^n\setminus\{0\}$, \begin{equation}\label{eqdefaMhat} \hat\apot(\lambda)_{ijhk}=\frac1{|\lambda|^4} \bigl( \hat w_{ij}\lambda_h\lambda_k+\hat w_{hk}\lambda_i\lambda_j - \hat w_{ih}\lambda_j\lambda_k - \hat w_{jk}\lambda_i\lambda_h \bigr). \end{equation} We easily verify that $\hat\apot(\lambda)\in \Rvierst$ and $\sum_{hk}\lambda_h\lambda_k \hat\apot_{ijhk}(\lambda)=\hat w_{ij}(\lambda)$ for all $\lambda$. Since the decay of the coefficients $\hat\apot$ is faster than the decay of the coefficients $\hat w$, the Fourier series \begin{equation} \apot(x)=\sum_{\lambda\in 2\pi \Z^n} \hat \apot(\lambda) e^{i\lambda\cdot x} \end{equation} defines a smooth periodic function $\apot\in C^\infty_\per((0,1)^n;\Rvierst)$ such that $\Div\Div\apot=w$. \ref{lemmapotentialsLp}: Let $T:C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)\to C^\infty_\per((0,1)^n;\Rvierst)$, $w\mapsto Tw:=\apot_w$, be the linear operator defined above. We consider the operator $D^2T : C^\infty_\per((0,1)^n;\R^{n\times n}_\sym)\to C^\infty_\per((0,1)^n;\R^{n^6})$, defined by $w\mapsto D^2Tw:=D^2\apot_w$. Its Fourier symbol is smooth on $S^{n-1}$ and homogeneous of degree zero. By \cite[Proposition 2.13]{FonsecaMueller1999} (which is based on \cite[Ex. (iii), page 94]{Stein70} and \cite[Cor. 3.16, p. 263]{SteinWeiss1971}) the operator $D^2T$ can be extended to a continuous operator from $L^p$ to $L^p$ for any $p\in (1,\infty)$. By Poincar\'e, and using the fact that $Tw$ and $DTw$ have average zero, the estimate in $W^{2,p}$ follows. \ref{lemmapotentialsLpLq}: We define $\apot_p:=Tw_p$, $\apot_q:=Tw_q$. The estimates on the norm follow as for \ref{lemmapotentialsLp}. By linearity of the operator $T$, the differential condition holds as well. We remark that the $L^p$ extension and the $L^q$ extension of the operator defined on smooth functions coincide on $L^p\cap L^q$. Therefore, we can use the symbol $T$ for the operator defined on $L^p\cup L^q$. \end{proof} A crucial element in subsequent steps is the following truncation result, which is a minor variant of those given in Sect. 6.6.2 of \cite{EvansGariepy} and Prop. A.1 of \cite{FJM02b} and is based on Zhang's Lemma \cite{Zhang1992}. \newcommand\utrunc{u} \newcommand\vtrunc{v} \begin{lemma}\label{lemmatrunc} Let $\utrunc\in W^{2,p}_\per(\torus;V)$, $M>0$, $V$ a finite-dimensional vector space. Then, there is $\vtrunc\in W^{2,\infty}_\per(\torus;V)$ such that \begin{enumerate} \item $\displaystyle \|\vtrunc\|_{2,\infty}\le c M$; \item $\displaystyle |\{\vtrunc\ne \utrunc\}| \le \frac{c}{M^p} \int_{|\utrunc|+|D\utrunc|+|D^2\utrunc|>M} |\utrunc|^p+|D\utrunc|^p+|D^2\utrunc|^p dx$. \end{enumerate} The constant depends only on $n$ and $V$. \end{lemma} The above estimates immediately imply \begin{equation} \|D^2\utrunc-D^2\vtrunc\|_p^p \le c\int_{|\utrunc|+|D\utrunc|+|D^2\utrunc|>M} |\utrunc|^p+|D\utrunc|^p+|D^2\utrunc|^p dx. \end{equation} \begin{proof} After choosing a basis and working componentwise, we can assume $V=\R$. We define $h:=(\utrunc,D\utrunc,D^2\utrunc)$ and \begin{equation} E_M:= \{ x\in \torus: \exists r\in (0,\sqrt n): \fint_{B_r(x)} |h(y)| dy \ge 2M\}.
\end{equation} Here and subsequently, $\fint_\Omega f\, dx := |\Omega|^{-1}\int_\Omega f\, dx$. If $E_M$ is a null set, then it suffices to take $\vtrunc=\utrunc$ and the proof is concluded. Otherwise, using the Vitali or the Besicovitch covering theorem it follows that the volume of $E_M$ obeys (ii). We can further enlarge $E_M$ by a null set and assume that all points of $\torus\setminus E_M$ are Lebesgue points of $h$. For $x\in \torus\setminus E_M$ and $r\in (0,\sqrt n)$, we define \begin{equation} \eta_r(x):=\fint_{B(x,r)} |D^2\utrunc(y) -D^2\utrunc(x)| dy\,. \end{equation} From the definition of $E_M$ we obtain $0\le \eta_r \le 4M$ for all $r$ and $x$ and $\eta_r\to0$ pointwise on $(0,1)^n\setminus E_M$. Therefore, by Egorov's theorem there is a set $\tilde E_M$ with $|\tilde E_M|\le |E_M|$ such that $\eta_{r}\to0$ uniformly in $\torus\setminus E_M\setminus \tilde E_M$. We define $S_M:=(0,1)^n\setminus E_M\setminus \tilde E_M$. We have shown that there is $\omega:(0,\infty)\to(0,4M]$ nondecreasing with $\omega_r\to0$ such that \begin{equation} \fint_{B(x,r)} |D^2\utrunc(y) -D^2\utrunc(x)| dy\le \omega_r \text{ for all } x\in S_M, r\in (0,\sqrt n)\,. \end{equation} Fix now $x\in S_M$. By Poincar\'e's inequality, for any $r\in (0,\sqrt n)$ there is $A_r=A_r(x)\in\R^n$ such that \begin{equation} \fint_{B(x,r)} |D\utrunc(y) -A_r-D^2\utrunc(x)(y-x)| dy\le cr\omega_r \text{ for all } r\in (0,\sqrt n)\,. \end{equation} Since $x$ is a Lebesgue point of $D\utrunc$, we have $\lim_{r\to0} A_r=D\utrunc(x)$. Comparing the above equation on the balls $B(x,r)$ and $B(x,r/2)$ we obtain $|A_r-A_{r/2}|\le cr\omega_r$, which (summing the differences $A_{2^{-k}r}-A_{2^{-(k+1)}r}$, which are controlled by a geometric series) implies $|A_r-D\utrunc(x)|\le c r\omega_r$ and \begin{equation} \fint_{B(x,r)} |D\utrunc(y)-D\utrunc(x) -D^2\utrunc(x)(y-x)| dy\le cr\omega_r \text{ for all } r\in (0,\sqrt n)\,. \end{equation} A second application of Poincar\'e's inequality yields \begin{equation} \fint_{B(x,r)} |\utrunc(y) -b_r-D\utrunc(x)(y-x)-\frac12 D^2\utrunc(x)(y-x)(y-x)| dy\le cr^2\omega_r \text{ for all } r\in (0,\sqrt n)\,, \end{equation} for some $b_r=b_r(x)\in\R$, and the same argument as above leads to \begin{equation} \fint_{B(x,r)} |\utrunc(y) -P_x(y)| dy\le c r^2 \omega_r \text{ for all } r\in (0,\sqrt n)\,, \end{equation} where $P_x$ is the second-order Taylor polynomial of $\utrunc$ centered at $x$. For $x,x'\in S_M$ and $r=|x-x'|$, we have \begin{equation} \fint_{B(x,r)\cap B(x',r)} |P_x-P_{x'}| dy\le c r^2 \omega_r\,. \end{equation} Since the space of polynomials of degree two is finite dimensional, this is an estimate on the difference of the coefficients and also a uniform estimate on the difference of the two polynomials. The conclusion then follows from Whitney's extension theorem. We remark that the standard construction in Whitney's extension theorem, if given periodic inputs, produces periodic outputs, and that, if $E_M$ is not a null set, this procedure actually produces a $C^2$ function. \end{proof} We are finally in a position to prove the other inequality in Theorem \ref{theorempq}. Specifically, we show the following. \begin{lemma}\label{lemmapqdifficult} Let $K\subseteq\R^{n\times n}_\sym$ be compact. Then, $K^{(p)}\subseteq K^{(q)}$ for any $p,q$ with $1< p \le q < \infty$. \end{lemma} \begin{proof} As usual, we define $f_p(\sigma):=\dist^p(\sigma,K)$ and, analogously, $f_q$. For brevity, we write $T=(0,1)^n$. Pick $\xi\in K^{(p)}$.
Since $\Qsd f_p(\xi)=0$, by the definition (\ref{eqdefqsdf}) there is a sequence of functions $w_k\in C^\infty_\per(T;\R^{n\times n}_\sym)$ with $\div w_k=0$, $\int_T w_k\, dx=\xi$ and $\int_T f_p(w_k(x))\, dx\to0$. We choose $M>0$ such that $K\subseteq B_{M-1}$ and $|\xi|\le M-1$ and define \begin{equation} w_k^M:= w_k \chi_{|w_k|< M} \hskip5mm\text{ and }\hskip5mm w_k^L := w_k-w_k^M=w_k\chi_{|w_k|\ge M}, \end{equation} where $\chi_{|w_k|< M}(x)=1$ if $|w_k|(x)< M$ and $0$ otherwise. Then, $\|w_k^M\|_{L^{2q}}\le \|w_k^M\|_{L^\infty}\le M$. Since $|\sigma|\ge M$ implies $\dist(\sigma,K)\ge 1$ we obtain \begin{equation} \begin{split} |w_k^L|&=|w_k|\chi_{|w_k|\ge M} \le \dist(w_k, K)+(M-1)\chi_{|w_k|\ge M}\\ &\le M\dist(w_k,K) \end{split} \end{equation} and, therefore, $\|w_k^L\|_{L^p}\to0$. Let $\apot_k^{M}\in W^{2,2q}_\per(T;\Rvierst)$ and $\apot_k^{L}\in W^{2,p}_\per(T;\Rvierst)$ be corresponding potentials obtained from $w_k^M-\int_T w_k^M\, dx\in L^{2q}$ and $w_k^L-\int_T w_k^L\, dx\in L^p$ using Lemma \ref{lemmapotentials}\ref{lemmapotentialsLpLq} with the exponents $2q$ and $p$. In particular, this implies $w_k=\xi+\div\div(\apot_k^M+\apot_k^L)$ with \begin{equation} \| \apot_k^M \|_{2,2q}\le c M \hskip5mm\text{ and }\hskip5mm \| \apot_k^L \|_{2,p}\to0 \text{ as $k\to\infty$.} \end{equation} Let $\apot_k^T\in C^2(T;\Rvierst)$ be the truncation of $\apot_k^{L}$ obtained from Lemma \ref{lemmatrunc}, $\|\apot_k^T\|_{2,\infty}\le cM$. The above estimates show that $\|\apot_k^T\|_{2,p}\to0$ and, therefore, $\|\apot_k^T\|_{2,2q}\to0$. We define $w_k^*:=\xi+\div\div(\apot_k^M+\apot_k^T)\in L^{2q}$. Then, $w_k-w_k^*=\div\div(\apot_k^L-\apot_k^T)\to0$ in $L^p$. We now proceed to prove that $\int_T f_q(w_k^*)dx\to0$ as $k\to\infty$. For every $N>M$, we write \begin{equation}\label{eqsinpq} f_q(w_k^*) \le (2N)^{q-p} f_p(w_k^*)\chi_{|w_k^*|<N} + (2|w_k^*|)^q\chi_{|w_k^*|\ge N} \end{equation} and treat the two terms separately. The second can be estimated as \begin{equation} \limsup_{k\to\infty} \int_{|w_k^*|\ge N} |w_k^*|^q dx \le \limsup_{k\to\infty} \frac{1}{N^q}\int_T |w_k^*|^{2q} dx \le \frac{ c M^{2q}}{N^q}. \end{equation} It remains to estimate the first term. For fixed $N$, the function $f_p$ is uniformly continuous on $B_N$, so there is $\delta_N>0$ such that $|\sigma|<N$, $|\sigma-\eta|<\delta_N$ imply $f_p(\sigma)\le f_p(\eta)+1/N^q$. Therefore, for all $\sigma,\eta\in\R^{n\times n}_\sym$ we have \begin{equation} f_p(\sigma)\chi_{|\sigma|<N} \le f_p(\eta) + \frac1{N^q} + (2N)^p \frac{|\sigma-\eta|^p}{\delta_N^p}. \end{equation} Setting $\sigma=w_k^*(x)$, $\eta=w_k(x)$, integrating, and recalling that $w_k-w_k^*\to0$ in $L^p$ yields \begin{equation}\label{eqliswstpk} \begin{split} \limsup_{k\to\infty} \int_{|w_k^*|<N} f_p(w_k^*) dx \le & \limsup_{k\to\infty} \int_T f_p(w_k) dx + \frac1{N^q}\\ &+ \frac{(2N)^p}{\delta_N^p}\limsup_{k\to\infty} \|w_k-w_k^*\|_p^p = \frac1{N^q}. \end{split} \end{equation} From (\ref{eqsinpq})--(\ref{eqliswstpk}), we conclude that \begin{equation} \limsup_{k\to\infty} \int_T f_q(w_k^*) dx \le \frac{c}{N^p}+ \frac{ c M^{2q}}{N^q} , \end{equation} for all $N>M$ and, therefore, $\int f_q(w_k^*) dx\to0$. Finally, by continuity and density we can replace $w_k^*$ by a sequence of smooth functions with the same properties (mollification preserves the differential constraint, the periodicity and the average), and therefore $\Qsd f_q(\xi)=0$.
\end{proof} \section{Explicit relaxation for yield surfaces depending on the first two invariants} \label{sec:example} \subsection{General setting and main results} In this section, we focus on the case of rotationally symmetric sets of stresses in three dimensions. Lemma \ref{lemmachangevariables} implies that if $K\subseteq\R^{3\times 3}_\sym$ is rotationally invariant, in the sense that $Q^TKQ=K$ for any $Q\in\mathrm{SO}(3)$, then also its \sdqclong\ hull is rotationally invariant, in the sense that $Q^TK^\sdqc Q=K^\sdqc$ for any $Q\in\mathrm{SO}(3)$, and the same for $\Kinfty$. We consider here the situation where $K$ is described by only two invariants, one corresponding to the pressure (the isotropic stress) and another to the deviatoric stress (a measure of the distance to diagonal matrices). We leave the case of generic rotationally invariant elastic domains for future work. For $\sigma\in\R^{3\times 3}_\sym$, we define the two variables \begin{equation}\label{eqdefpqsigma} p(\sigma):=\frac13\Tr\sigma \text{ and } q(\sigma):=\frac{|\sigma-p\Id|}{\sqrt2} \end{equation} and denote by $\Phi:\R^{3\times 3}_\sym\to\R\times[0,\infty)$ the mapping $\Phi:=(p,q)$, so that \begin{equation} \Phi(\sigma)=\left(\frac13\Tr\sigma ,\frac{|\sigma-p\Id|}{\sqrt2}\right). \end{equation} We remark that $2q^2(\sigma)=|\sigma_D|^2$ where $\sigma_D:=\sigma-p\Id$ is the deviatoric part of $\sigma$. For example, for any $(p_*,q_*)\in\R\times[0,\infty)$ the matrices \begin{equation} \xi_0:=\begin{pmatrix} p_*+q_*&0&0\\0&p_*-q_*&0\\0&0&p_* \end{pmatrix} \text{ and } \xi_1:=\begin{pmatrix} p_*&q_*&0\\q_*&p_*&0\\0&0&p_* \end{pmatrix} \end{equation} obey $\Phi(\xi_0)=\Phi(\xi_1)=(p_*,q_*)$. Here, we consider sets $K$ that can be characterized by the values of these two invariants, in the sense that \begin{equation}\label{eqKfromH} K=\{\sigma\in\R^{3\times 3}_\sym: (p(\sigma), q(\sigma))\in H\} \text{ for some } H\subseteq\R\times[0,\infty). \end{equation} We seek a characterization of $K^\sdqc$ in the $(p,q)$ plane, i.~e., we aim at characterizing the set \begin{equation}\label{eqdefphiksdsc} \begin{split} \Phi(K^\sdqc) =&\{(p_*,q_*): \exists \sigma\in K^\sdqc \text{ with } (p(\sigma), q(\sigma))=(p_*,q_*)\}, \end{split} \end{equation} and the same for $\Kinfty$. An explicit expression is given in Theorem \ref{theorelaxexplicitpq} below. In some cases, we shall additionally show that $K^\sdqc$ is fully characterized by the values of $p$ and $q$, in the sense that $\sigma\in K^\sdqc$ if and only if $(p(\sigma),q(\sigma))\in \tilde H$ for some $\tilde H\subseteq \R\times[0,\infty)$, see Theorem \ref{lemmaKfullcharsmallder} below. This is, however, not always true; see Lemma \ref{lemmanotcylndrical} for an example where this representation fails. Our results are restricted to the case in which the relevant set $\tilde H$ is connected. Connectedness of hulls is, in general, a very subtle issue related to the locality of the various convexity conditions. In the case of quasiconvexity, it relates to the compactness of sequences taking values in sets without rank-one connections, a question known as Tartar's conjecture \cite{Tartar1982}. We recall that nonlocality of quasiconvexity was proven, in dimension 3 and above, by Kristensen \cite{Kristensen1999-localityqc} based on \v{S}ver\'{a}k's counterexample to the equivalence of rank-one convexity and quasiconvexity \cite{Sverak1992-rcqc}.
However, in dimension two the situation is different and positive results have been obtained by \v{S}ver\'{a}k \cite{Sverak1993-Tartarsconj} and Faraco and Sz\'{e}kelyhidi \cite{FaracoSzekelyhidi2008}. We begin by explaining the construction qualitatively and then present a proof of its correctness. In order to get started, we fix $p_0\in \R$ and consider the rank-two line \begin{equation}\label{eqlinep0t} t\mapsto \xi_t := \begin{pmatrix} p_0+ t & 0 & 0 \\ 0 & p_0 - t & 0\\ 0 & 0 & p_0 \end{pmatrix}. \end{equation} Clearly, $p(\xi_t)=p_0$ and $q(\xi_t)=|t|$. In particular, if $(p_0, q_0)\in H$ then both $\xi_{q_0}$ and $\xi_{-q_0}$ belong to $K$ and, with Lemma \ref{lemmaranktwo}, we obtain $\xi_t\in \Kinfty\subseteq K^\sdqc$ for all $t\in[-q_0, q_0]$. Based on this argument, we define the set \begin{equation}\label{eqdefhatH} \hat H:=\{(p,q)\in\R\times[0,\infty): (p,q+a)\in H\text{ for some }a\ge 0\}. \end{equation} The set $\Phi(K^\sdqc)$ mentioned in (\ref{eqdefphiksdsc}) will then be characterized in Theorem \ref{theorelaxexplicitpq} as a set $\Hsdqc$ that we now show how to construct explicitly. Specifically, $\Hsdqc$ is obtained from $\hat H$ by first taking the convex hull and then eliminating all points that can be separated from $\hat H$ by means of a translation of Tartar's function, $f(\sigma) := 4q^2(\sigma)-3p^2(\sigma)$, which is symmetric $\div$-quasiconvex, see Lemma \ref{lemmatartar2} below. We say that a point $y_*=(p_*,q_*)$ can be separated from $\hat H$ if there is $y_0=(p_0, q_0)\in \R\times[0,\infty)$ such that the function $f_{y_0}(p,q):=4(q^2-q_0^2)-3 (p-p_0)^2$ obeys $\max f_{y_0}(H)<f_{y_0}(y_*)$. Then, the set $\Hsdqc$ is \begin{equation}\label{eqdefHstar} \Hsdqc:=\{y_*\in \hat H^\conv: \text{$y_*$ cannot be separated from $\hat H$}\}. \end{equation} We refer to Figure \ref{fig-twop-a} for an illustration. Our main result is the following. \begin{theorem}\label{theorelaxexplicitpq} Let $H\subseteq\R\times[0,\infty)$ be a compact set, $K:=\{\sigma\in\R^{3\times 3}_\sym: (p(\sigma), q(\sigma))\in H\}$. If the set $\Hsdqc$ defined in (\ref{eqdefhatH}--\ref{eqdefHstar}) is connected, then $\Phi(K^\sdqc)= \Phi(\Kinfty)= \Hsdqc$. \end{theorem} \begin{proof} The result follows from Lemma \ref{lemmalowerboundexpl} and Lemma \ref{lemmainnerbound1} below, using the inclusion $\Kinfty\subseteq\Ksdqc$ that was proven in Lemma \ref{lemmapqsimple}. \end{proof} With an additional condition on the tangent to the boundary of $\Hsdqc$, we obtain a full characterization of the hull. The necessity of the condition on the tangent is proven in Lemma \ref{lemmanotcylndrical} below. \begin{theorem}\label{lemmaKfullcharsmallder} Under the assumptions of Theorem \ref{theorelaxexplicitpq}, if additionally the tangent to $\partial \Hsdqc$ belongs to $\{e\in S^1: |e_2|\le \frac{\sqrt3}{4}|e_1|\}$ for any $y_*\in \partial \Hsdqc\setminus \hat H$, then $K^\sdqc=\Kinfty=\{\sigma:\Phi(\sigma)\in\Hsdqc\}$. \end{theorem} \begin{proof} The result follows from Lemma \ref{lemmalowerboundexpl} and Lemma \ref{lemmainnerbound2} below, using the inclusion $\Kinfty\subseteq\Ksdqc$ that is proven in Lemma \ref{lemmapqsimple}. \end{proof} \begin{figure} \caption{Sketch of the construction of $\Hsdqc$ in the case that $H$ consists of two points. The set $\hat H$ consists of two segments, which join the points in $H$ with their projections on the $\{q=0\}$ axis.} \label{fig-twop-a} \end{figure} \subsection{Outer bound} The next two Lemmas contain the proof of the outer bound, i.~e., the inclusion $\Phi (K^\sdqc)\subseteq \Hsdqc$.
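Before turning to the proofs, it may help to record the simplest instance of the construction (\ref{eqdefhatH})--(\ref{eqdefHstar}); this is an illustration only and is not used in the sequel. If $H$ consists of a single point $(p_1,q_1)$ with $q_1\ge0$, then $\hat H=\{p_1\}\times[0,q_1]$ coincides with its convex hull, and no point of it can be separated: each $f_{y_0}$ is nondecreasing in $q\ge0$, so that
\begin{equation*}
f_{y_0}(p_1,q)\le f_{y_0}(p_1,q_1)=\max f_{y_0}(H) \hskip5mm\text{ for all } q\in[0,q_1].
\end{equation*}
Hence $\Hsdqc=\{p_1\}\times[0,q_1]$, which is connected, and Theorem \ref{theorelaxexplicitpq} gives $\Phi(K^\sdqc)=\Phi(\Kinfty)=\{p_1\}\times[0,q_1]$ whenever $K$ is defined through (\ref{eqKfromH}) with this $H$.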
\begin{lemma}\label{lemmatartar2} Let $g:\R^{3\times 3}_\sym\to\R$ be defined by $g(\xi):=f_{y_0}(p(\xi),q(\xi))$, where $f_{y_0}(p,q):=4(q^2-q_0^2)-3 (p-p_0)^2$ and $y_0=(p_0, q_0)\in \R\times[0,\infty)$. Then, $g$ is symmetric $\div$-quasiconvex. \end{lemma} \begin{proof} By Lemma \ref{lemmatartar}, we know that the function $f_\mathrm{T}:\R^{3\times 3}_\sym\to\R$, \begin{equation} f_\mathrm{T}(\xi):=2|\xi|^2-(\Tr\xi)^2 \end{equation} is symmetric $\div$-quasiconvex. From \begin{equation} |\xi|^2=|\xi-p(\xi)\Id|^2+|p(\xi)\Id|^2=2q(\xi)^2 + 3 p(\xi)^2 , \end{equation} we obtain \begin{equation} f_\mathrm{T}(\xi)=4q(\xi)^2-3p(\xi)^2. \end{equation} Therefore, $g(\xi)=f_\mathrm{T}(\xi-p_0\Id)-4q_0^2$ is symmetric $\div$-quasiconvex. \end{proof} \begin{lemma}\label{lemmalowerboundexpl} Under the assumptions of Theorem \ref{theorelaxexplicitpq}, $\Phi (K^\sdqc)\subseteq \Hsdqc$. \end{lemma} \begin{proof} We pick a $\sigma\in K^\sdqc$ and define $y:=(p(\sigma),q(\sigma))$. We need to show that $y\in \Hsdqc$. If $y\not\in \hat H^\conv$, then there is an affine function $a:\R^2\to\R$ of the form $(p,q)\mapsto a(p,q)=bp+cq+d$ such that $a(y)>0$ and $a\le 0$ on $\hat H$. We first show that we can assume $c\ge 0$. Indeed, if this were not the case, we could consider the new affine function $a'(p,q):=bp+d$, which obeys $a'(y)\ge a(y)>0$. Let now $(p',q')\in \hat H$. By the definition of $\hat H$ we have $(p',0)\in\hat H$. By the definition of $a'$ and the properties of $a$ we obtain $a'(p',q')=a(p',0)\le 0$. Therefore, we can assume $c\ge 0$, or, equivalently, that $a$ is nondecreasing in its second argument. The function $g:\R^{3\times 3}_\sym\to\R$, $g(\xi):=a(p(\xi),q(\xi))$, is convex, since $p$ is linear, $q$ is convex, and $a$ is affine and nondecreasing in its second argument: \begin{align*} g(\lambda \xi_1+(1-\lambda) \xi_2) =& a(p(\lambda \xi_1+(1-\lambda) \xi_2), q(\lambda \xi_1+(1-\lambda) \xi_2))\\ \le& a(\lambda p(\xi_1)+(1-\lambda) p(\xi_2), \lambda q(\xi_1)+(1-\lambda) q(\xi_2))\\ =& \lambda g(\xi_1)+(1-\lambda) g(\xi_2). \end{align*} In particular, $g\le 0$ on $K$, $g(\sigma)>0$ and $g$ is convex. Hence, $\sigma$ does not belong to the convex hull of $K$ and neither does it belong to the symmetric $\div$-quasiconvex hull, contradicting the assumption $\sigma\in K^\sdqc$. Assume now that $y\in \hat H^\conv\setminus \Hsdqc$. Then, it is separated from $\hat H$ in the sense of (\ref{eqdefHstar}). Let $y_0=(p_0,q_0)$ be as in the definition of separation. By Lemma \ref{lemmatartar2} the function $\xi\mapsto f_{y_0}(p(\xi), q(\xi))=4(q^2(\xi)-q_0^2)-3(p(\xi)-p_0)^2$ is symmetric $\div$-quasiconvex and this implies $\sigma\not \in K^\sdqc$, again a contradiction. Therefore, $\Phi (K^\sdqc)\subseteq \Hsdqc$. \end{proof} \subsection{Inner bound} We now prove the inner bound. Specifically, we first show that for any $y_*\in \Hsdqc$ there is a matrix $\sigma\in \Kinfty$ with $\Phi(\sigma)=y_*$ (Lemma \ref{lemmainnerbound1}) and then that, if an additional condition on the slope of the boundary of $\Hsdqc$ is fulfilled, any matrix $\sigma$ with $\Phi(\sigma)=y_*$ belongs to $\Kinfty$ (Lemma \ref{lemmainnerbound2}). Our key result is a characterization of a family of rank-two curves in the $(p,q)$ plane. We say that $t\mapsto \gamma(t)$ is a rank-two curve if it is a reparametrization of $s\mapsto \Phi(A+s(B-A))$ for some $A$, $B\in\R^{3\times 3}_\sym$ with $\rank(A-B)\le 2$.
The curves we construct are at the same time level sets of \sdqclong\ functions, either of the type used to separate points in the definition of $\Hsdqc$ or (piecewise) affine. This allows us to show (see the proof of Lemma \ref{lemmainnerbound1} below) that any point in $\hat H^\conv$ that cannot be separated from $\hat H$ belongs to $\Phi(\Kinfty)$. This strategy is illustrated in Figure \ref{fig-s1}. \begin{figure} \caption{Strategy for the proof of the inner bound. From every point $y$, we construct a one-parameter family of rank-two lines that start in all possible directions (left panel) and which are at the same time level sets of \sdqclong\ functions. Then, we distinguish two cases: if there is a direction such that the rank-two line intersects the set $H$ on both sides of $y$, then $y$ belongs to the hull. If there is a direction such that the rank-two line does not intersect $H$ on any side of $y$, then we can separate $y$ from $H$. By continuity of the family of curves and compactness of $H$, one of the two must occur.} \label{fig-s1} \end{figure} \begin{lemma}\label{lemmahatHD} Let $K$, $H$ and $\hat H$ be as above. Then, any $\sigma_*\in\R^{3\times 3}_\sym$ with $(p(\sigma_*),q(\sigma_*))\in \hat H$ belongs to $\Kinfty$. \end{lemma} \begin{proof} Let $\sigma_*\in \R^{3\times 3}_\sym$ be such that $p_*:=p(\sigma_*)$, $q_*:=q(\sigma_*)$ obey $(p_*,q_*+a)\in H$ for some $a\ge 0$. If $a=0$, then $\sigma_*\in K\subseteq\Kinfty$ and there is nothing to prove; we may therefore assume $a>0$. We consider the rank-two line \begin{equation} t\mapsto \xi_t := \sigma_* + \begin{pmatrix} t&0&0\\0&-t&0\\0&0&0 \end{pmatrix}. \end{equation} This obeys $\xi_0=\sigma_*$ and $p(\xi_t)=p_*$ for all $t$. The map $t\mapsto q(\xi_t)$ is continuous, equals $q_*$ at $t=0$ and diverges for $t\mapsto\pm\infty$. Hence, there are $t_-<0<t_+$ such that $q(\xi_{t_\pm})=q_*+a$. In particular, $\xi_{t_\pm}\in K$ and, therefore, (Lemma \ref{lemmaranktwo}) $\sigma_*=\xi_0\in \Kinfty$. \end{proof} \begin{lemma}\label{lemmaranktwolines} Let $y=(p_*,q_*)\in \R\times(0,\infty)$. Then, there is a continuous function $\Gamma_y:S^1\times\R\to \R\times[0,\infty)$ such that for any $e\in S^1$ the map $t\mapsto\Gamma_y(e,t)$ is a rank-two curve parametrized by arc-length, with $\Gamma_y(e,0)=y$, $\partial_t \Gamma_y(e,0)=e$, and $\Gamma_y(e,t)=\Gamma_y(-e,-t)$. The curves $\Gamma_y(e,\cdot)$ are either of the form (\ref{ranktwolinesmin}) or of the form (\ref{ranktwolinespl}).
\end{lemma} \begin{figure} \caption{Sketch of the lines constructed in the proof of Lemma \ref{lemmaranktwolines}.} \label{fig:figlemmaranktwo} \end{figure} \begin{proof} For reasons that will become clear subsequently, we treat separately the two sets \begin{equation} S^1_+:=\{e\in S^1: |e_2|\ge \frac{\sqrt 3}2|e_1|\} \hskip3mm\text{ and }\hskip3mm S^1_-:=\{e\in S^1: |e_2|\le \frac{\sqrt 3}2|e_1|\}. \end{equation} We observe that both are closed, that their union is $S^1$ and their intersection consists of the four points $(\pm \frac2{\sqrt7},\pm \frac{\sqrt3}{\sqrt7})$. We start with $S^1_+$. For $p_0,a\in\R$, we consider the rank-two line \begin{equation}\label{ranktwolinesmin} t\mapsto \xi_t := \begin{pmatrix} p_0+(1+ a) t & 0 & 0 \\ 0 & p_0 + (1-a) t & 0\\ 0 & 0 & p_0 \end{pmatrix} \end{equation} (see Figure \ref{fig:figlemmaranktwo}, left panel). We compute \begin{equation} p(\xi_t)=p_0 + \frac23 t \hskip3mm\text{ and }\hskip3mm q^2(\xi_t)= (\frac13+a^2)t^2. \end{equation} Solving the first equation for $t$ and inserting into the second, we obtain that the graph of $t\mapsto (p(\xi_t), q(\xi_t))$ is the set \begin{equation} q^2= \frac34 (1+3a^2)(p-p_0)^2 , \end{equation} which we can rewrite (recalling that $q\ge 0$) as \begin{equation} q= \frac{\sqrt3}2 \sqrt{1+3a^2} |p-p_0|. \end{equation} Therefore, any line of the form $\{q=\alpha|p-p_0|\}$ with $|\alpha|\ge \sqrt{3}/2$ is a rank-two line of the type given in (\ref{ranktwolinesmin}). In turn, this means that we can define \begin{equation} \Gamma_y(e,t):= \Pi (y+et) \hskip1cm \text{ for } e\in S^1_+ \end{equation} where $\Pi(p,q):=(p,|q|)$ denotes reflection onto the upper half-plane. We now turn to $S^1_-$. Let $(p_0,q_0)\in\R\times[0,\infty)$ and consider the rank-two line \begin{equation}\label{ranktwolinespl} t\mapsto\xi_t := \begin{pmatrix} p_0+q_0+ t & 0 & 0 \\ 0 & p_0 -q_0+ t & 0\\ 0 & 0 & p_0 \end{pmatrix}. \end{equation} As above, a simple computation shows that \begin{equation}\label{eqhyperbsimpl} p(\xi_t)=p_0 + \frac23 t \hskip3mm\text{ and }\hskip3mm q^2(\xi_t) = q_0^2 + \frac 13 t^2. \end{equation} We now consider the equation $(p(\xi_{t_*}), q(\xi_{t_*}))=(p_*, q_*)$. For every $t_*\in [-\sqrt3q_*, \sqrt3q_*]$ there is a unique solution $(p_0, q_0)\in\R\times[0,\infty)$, namely, \begin{equation} p_0=p_*-\frac23 t_* \hskip3mm\text{ and }\hskip3mm q_0=\sqrt{ q_*^2-\frac13 t_*^2}. \end{equation} We compute \begin{equation} \left.\frac{d}{dt} \begin{pmatrix} p(\xi_t)\\q(\xi_t) \end{pmatrix}\right|_{t=t_*}=\begin{pmatrix} 2/3 \\ t_*/3 q_* \end{pmatrix}= \frac{1}{3q_*} \begin{pmatrix} 2q_* \\ t_* \end{pmatrix}. \end{equation} Since we can choose $t_*$ freely in $[-\sqrt3q_*, \sqrt3q_*]$, we conclude that for every $e\in S^1_-$ there is a unique triplet $(p_0,q_0,t_*)$ such that the curve $t\mapsto (p(\xi_t),q(\xi_t))$ passes through $y=(p_*,q_*)$ at $t=t_*$ with tangent parallel to $e$. Indeed, this solution can be explicitly written as \begin{equation} t_*=2q_* \frac{e_2}{e_1} \,,\hskip1cm q_0=\sqrt{q_*^2-\frac13 t_*^2}\,,\hskip1cm p_0=p_*-\frac23 t_*. \end{equation} It is clear that this solution and, hence, $\xi_t$, depend continuously on $e$. We finally define $\Gamma_y(e,t)$ for $e\in S^1_-$ as the arc-length reparametrization of $t\mapsto \xi_{t_*+t}$ or $t\mapsto \xi_{t_*-t}$ depending on the sign of $e_1$ (see Figure \ref{fig:figlemmaranktwo}, right panel). It remains to check that this definition agrees with the previous one for the four points in $S^1_-\cap S^1_+$.
For these points, the formulas above give $q_0=0$ and $a=0$, so that the two definitions of $\xi_t$ also coincide (with the same $p_0$). This concludes the proof. \end{proof} \begin{lemma} \label{lemmaranktwodirect} Let $y_*=(p_*,q_*)$ with $q_*>0$, and assume that there are $e\in S^1$ and $t_-<0<t_+$ such that $\Gamma_{y_*}(e,t_\pm)\in \hat H$, where $\Gamma_{y_*}$ is the map constructed in Lemma \ref{lemmaranktwolines}. Then, $y_*\in \Phi( \Kinfty)$. If, additionally, $|e_2|\le \frac{\sqrt3 }{4} |e_1|$ then any matrix $\sigma_*\in \R^{3\times 3}_\sym$ with $\Phi(\sigma_*)=y_*$ belongs to $\Kinfty$. \end{lemma} \begin{proof} In order to prove the first assertion we observe that, by Lemma \ref{lemmaranktwolines}, there is a rank-two line $t\mapsto \xi_t$ such that $\Phi(\xi_0)=y_*$ and $\Gamma_{y_*}(e,\R)$ is the graph of $t\mapsto \Phi(\xi_t)$. In particular, there is $s_-<0$ such that $(p,q)(\xi_{s_-})=\Gamma_{y_*}(e,t_-)\in \hat H$, which by Lemma \ref{lemmahatHD} implies that $\xi_{s_-}\in \Kinfty$. Analogously for $s_+$. By Lemma \ref{lemmaranktwo}, we obtain $\xi_0\in \Kinfty$ and, therefore, $y_*=\Phi(\xi_0)\in\Phi (\Kinfty)$. We now turn to the second assertion. By Lemma \ref{lemmaconstructranktwodir} below, there is a rank-two line $t\mapsto\xi_t$ with the same properties and, additionally, with $\xi_0=\sigma_*$. The same argument then implies $\sigma_*\in \Kinfty$. \end{proof} \begin{lemma}\label{lemmaconstructranktwodir} Let $\sigma_*\in\R^{3\times 3}_\sym$. Let $e\in S^1$ be such that $|e_2|\le \frac{\sqrt3}4 |e_1|$. Then, there is a rank-two line $t\mapsto\xi_t$ through $\xi_0=\sigma_*$ such that the curve $t\mapsto (p(\xi_t),q(\xi_t))$ is a hyperbola of the type (\ref{eqhyperbsimpl}) whose tangent at $t=0$ is parallel to $e$. \end{lemma} \begin{proof} Any rank-two line through $\sigma_*$ has the form $t\mapsto \xi_t:=\sigma_*+t B$, for some $B\in\R^{3\times 3}_\sym$ with $\det B=0$. Let $a,b$ be the eigenvalues of $B$ corresponding to a pair of orthonormal eigenvectors $u,v$, so that $B=au\otimes u + bv\otimes v$. We let $p_*:=p(\sigma_*)$, $q_*:=q(\sigma_*)$ and compute \begin{equation}\label{eqpxit} p(\xi_t)=p_*+\frac{a+b}3 t \end{equation} and \begin{equation} \begin{split} 2q^2(\xi_t)=&|\xi_t|^2-3p(\xi_t)^2 \\ =& 2q^2_*+t^2(a^2+b^2 - \frac13 (a+b)^2) \\ &+2t (au\cdot\sigma_* u+bv\cdot\sigma_* v)-2t p_*(a+b). \end{split} \end{equation} From (\ref{eqpxit}), we obtain $t=3(p(\xi_t)-p_*)/(a+b)$. Inserting in the previous expression leads to \begin{equation} \begin{split} 2q^2(\xi_t)=& 2q^2_*+6(p(\xi_t)-p_*)^2 \frac{(a+b)^2-3ab}{(a+b)^2} \\ &+6\frac{p(\xi_t)-p_*}{a+b} (au\cdot\sigma_* u+bv\cdot\sigma_* v)-6p_*(p(\xi_t)-p_*) \end{split} \end{equation} (the case $a+b=0$ is not relevant, since in this case $t\mapsto p(\xi_t)$ is constant). The expression \begin{equation} \frac{(a+b)^2-3ab}{(a+b)^2}=\frac14 +\frac34 \frac{ (a-b)^2}{(a+b)^2} \end{equation} can take any value in $[1/4,\infty)$ and the value $1/4$ is taken if and only if $a=b$. Therefore, the coefficient of the quadratic term $(p(\xi_t)-p_*)^2 $ can be the required value of $3/2$ (see (\ref{eqhyperbsimpl})) if and only if $a=b$. We can scale to $a=b=1$ and obtain \begin{equation} \begin{split} 2q^2(\xi_t)=& 2q^2_*+\frac32(p(\xi_t)-p_*)^2 \\ &+3(p(\xi_t)-p_*) (u\cdot\sigma_* u+v\cdot\sigma_* v)-6p_*(p(\xi_t)-p_*). \end{split} \end{equation} We are left with the task of choosing $u$ and $v$. Let $g:=u\wedge v$, so that $(u,v,g)$ is an orthonormal basis of $\R^3$.
Then, \begin{equation} u\cdot\sigma_* u+v\cdot\sigma_* v+g\cdot\sigma_*g = \Tr \sigma_*=3p_* , \end{equation} so that, after some rearrangement, the linear term takes the form \begin{equation} \begin{split} &3(p(\xi_t)-p_*)(p_*-g\cdot\sigma_* g). \end{split} \end{equation} We conclude that the graph of $t\mapsto (p(\xi_t),q(\xi_t))$ is the graph of the curve defined by \begin{equation} 2q^2= 2q^2_*+\frac32(p-p_*)^2 +3(p-p_*)(p_*-g\cdot\sigma_* g) \end{equation} and its derivative at $p_*$ is given by \begin{equation} \left.\frac{dq}{dp}\right|_{p=p_*}=\frac{3}{4q_*} (p_*-g\cdot\sigma_* g). \end{equation} It remains to show that we can choose $B$ such that this quantity equals $e_2/e_1$, which is a number in $[-\sqrt3/4,\sqrt3/4]$. To this end, we first show that the ordered eigenvalues $\lambda_1\le\lambda_2\le\lambda_3$ of the matrix $\sigma_D:=\sigma_*-p_*\Id$ obey $\lambda_1\le -q_*/\sqrt3$, $\lambda_3\ge q_*/\sqrt3$. Indeed, assume the former was not the case. If $\lambda_2\le 0$, then $\lambda_3<2 q_*/\sqrt3$ and $\lambda_1^2+\lambda_2^2+\lambda_3^2<(1/3+1/3+4/3 )q_*^2=2q_*^2$, which is a contradiction. If, instead, $\lambda_2\ge0$, then $\lambda_2,\lambda_3\le q_*/\sqrt3$, with the same conclusion. The argument for $\lambda_3$ is similar. Therefore, the set $\{g\cdot \sigma_D g : g\in S^2\}$ contains the interval $[-q_*/\sqrt3,q_*/\sqrt3]$, and we can choose $g$ (and hence $u$, $v$) such that $p_*-g\cdot\sigma_* g=-g\cdot \sigma_D g=4q_*e_2/(3e_1)\in [-q_*/\sqrt3,q_*/\sqrt3]$. \end{proof} \begin{lemma}\label{lemmaABC} Let $\pmin :=\min\{p: \exists q, (p,q)\in H\}$, $\pmax:=\max\{p: \exists q, (p,q)\in H\}$ and \begin{equation} A:=[\pmin,\pmax], \end{equation} \begin{equation} B:=\{p: p\Id\in \Kinfty\}, \end{equation} \begin{equation} C:=\{p: (p,0)\in \Hsdqc\}. \end{equation} Assume $\Hsdqc$ is connected. Then, $A=B=C$. \end{lemma} We remark that the definition of $A$ immediately implies $\hat H^\conv\subseteq A\times[0,\infty)$. \begin{proof} By convexity, we easily obtain $B\subseteq A$ and $C\subseteq A$. By the construction of $\hat H$, we have $\pmin\in C$, $\pmax\in C$. From the construction of $\Hsdqc$, we see that $(p,q)\in \Hsdqc$ implies that the segment joining $(p,q)$ with $(p,0)$ also belongs to $\Hsdqc$. This proves that $\Hsdqc$ is connected if and only if $C$ is connected and that $C$ is the orthogonal projection of $\Hsdqc$ onto the $q=0$ axis. In particular, we have $A=C$. It remains to show that $A\subseteq B$. By Lemma \ref{lemmahatHD}, we have that $\pmin\in B$ and $\pmax\in B$. We define \begin{equation} D_+:=\bigcup\{[p,p+\frac2{\sqrt3} q]: (p,q)\in H\} \end{equation} and \begin{equation} D_-:=\bigcup\{[p-\frac2{\sqrt3} q,p]: (p,q)\in H\} . \end{equation} We first show that $D_+\cap D_-\subseteq B$. Indeed, let $p_*\in D_+\cap D_-$ and let $\sigma_*:=p_*\Id$. By assumption, there are $(p_-,q_-), (p_+,q_+)\in H$ such that $p_-\le p_*\le p_+$, $q_-\ge \gamma(p_*-p_-)$, $q_+\ge \gamma (p_+-p_*)$, where $\gamma:=\frac{\sqrt3}2$. In particular, $(p_-, \gamma(p_*-p_-))\in \hat H$ and $(p_+, \gamma(p_+-p_*))\in \hat H$. We consider the rank-two line \begin{equation} t\mapsto \xi_t := \begin{pmatrix} p_*+ t & 0 & 0 \\ 0 & p_* + t & 0\\ 0 & 0 & p_* \end{pmatrix} \end{equation} and observe that there are $t_-\le 0\le t_+$ such that $p(\xi_{t_\pm})=p_\pm$, $q(\xi_{t_\pm})=\gamma|p_\pm-p_*|$. Lemma \ref{lemmahatHD} implies $\xi_{t_\pm}\in \Kinfty$ and, with Lemma \ref{lemmaranktwo}, one then deduces $\sigma_*=\xi_0\in \Kinfty$.
We next show that $A\subseteq D_+\cup D_-$. Indeed, if $p_*\not\in D_+\cup D_-$ then $q(\sigma)< \frac{\sqrt3}{2} |p(\sigma)-p_*|$ for any $\sigma\in K$. Consider the function $f(p,q):=4q^2-3(p-p_*)^2$. Then, $f(p,q)<0=f(p_*,0)$ for all $(p,q)\in \hat H$, therefore $(p_*,0)$ is separated from $\hat H$ and does not belong to $\Hsdqc$. This implies that $p_*\not\in C=A$. Up to now we have shown that \begin{equation} D_+\cap D_-\subseteq B\subseteq A\subseteq D_+\cup D_-. \end{equation} Assume that there is $p_*\in A\setminus B$. Without loss of generality, assume $p_*\in D_+$. Let $\bar p :=\min\{p\in B: p>p_*\}$. Since $\pmax\in B$, the set is nonempty. Since $B$ is closed, $p_*<\bar p$. The sets $D_+$ and $D_-$ are compact, cover the interval $[p_*,\bar p]$ and are disjoint in $[p_*,\bar p)$. Therefore, $[p_*,\bar p]\subseteq D_+$. Let $p'\in (p_*,\bar p)\subseteq D_+$. If there were $q'\ge0$ such that $(p',q')\in H$, then we would have $(p',0)\in\hat H$ and $p'\in B$. Therefore, $\bigl([p_*,\bar p)\times[0,\infty)\bigr)\cap H=\emptyset$. For any $p'\in (p_*,\bar p)$, since $p'\in D_+$ there is a point $y=(p_-,q_-)\in H$ with $p_-\le p'$ and $q_-\ge \gamma (p'-p_-)$; by the previous step, necessarily $p_-<p_*$. Consider a sequence of such points, $p'_j\to \bar p$. By compactness of $H$, the corresponding points $y_j=(p^-_j, q^-_j)$ converge (after extracting a subsequence) to some $y_0=(p_0,q_0)\in H$. Since $p^-_j<p_*$ for all $j$, we have $p_0\le p_*$; since $H$ contains no point with first coordinate in $[p_*,\bar p)$, in fact $p_0< p_*$. Moreover, passing to the limit in $q^-_j\ge\gamma(p'_j-p^-_j)$ gives $q_0\ge\gamma(\bar p-p_0)$. We finally consider the rank-two line \begin{equation} t\mapsto \xi_t := \begin{pmatrix} \bar p+ t & 0 & 0 \\ 0 & \bar p + t & 0\\ 0 & 0 & \bar p \end{pmatrix}. \end{equation} Let $t_0$ be such that $\bar p + \frac23 t_0=p_0$. The condition $\bar p\in B$ corresponds to $\xi_0=\bar p \Id\in \Kinfty$, the estimate $q_0\ge\gamma(\bar p-p_0)$ shows that $\Phi(\xi_{t_0})=(p_0,\gamma(\bar p-p_0))\in \hat H$ and, with Lemma \ref{lemmahatHD}, we obtain $\xi_{t_0}\in \Kinfty$. Therefore, $\xi_t\in \Kinfty$ for all $t\in[t_0,0]$. Let now $t_1\in (t_0,0)$ be such that $\bar p + \frac23 t_1=p_*$. After swapping coordinates, we see that the two matrices \begin{equation*} \xi_A:=\xi_{t_1}=\begin{pmatrix} \bar p+ t_1 & 0 & 0 \\ 0 & \bar p + t_1 & 0\\ 0 & 0 & \bar p \end{pmatrix}\,,\hskip5mm \xi_B:=\begin{pmatrix} \bar p+ t_1 & 0 & 0 \\ 0 & \bar p & 0\\ 0 & 0 & \bar p+ t_1 \end{pmatrix} \end{equation*} belong to $\Kinfty$. Since $\rank(\xi_A-\xi_B)=2$, so do all matrices in the segment joining them and, in particular, \begin{equation*} \xi_C:=\begin{pmatrix} \bar p+ t_1 & 0 & 0 \\ 0 & \bar p + \frac23 t_1 & 0\\ 0 & 0 & \bar p+\frac13 t_1 \end{pmatrix}\,. \end{equation*} Again swapping coordinates, the same is true for \begin{equation*} \xi_D:=\begin{pmatrix} \bar p+ \frac13t_1 & 0 & 0 \\ 0 & \bar p + \frac23 t_1 & 0\\ 0 & 0 & \bar p+ t_1 \end{pmatrix}\,. \end{equation*} Since $\rank(\xi_D-\xi_C)=2$ and $p_*\Id=\frac12\xi_D+\frac12\xi_C$, we obtain $p_*\Id\in \Kinfty$. This implies $p_*\in B$, a contradiction. Therefore, we conclude that $A\subseteq B$. \end{proof} \begin{lemma}\label{lemmainnerbound1} Under the assumptions of Theorem \ref{theorelaxexplicitpq}, $\Hsdqc\subseteq \Phi (\Kinfty)$. \end{lemma} \begin{proof} We fix $y_*=(p_*, q_*)\in \Hsdqc$. If $q_*=0$, then, in the notation of Lemma \ref{lemmaABC}, we have $p_*\in C=B$ and therefore $p_*\Id\in \Kinfty$. If $y_*\in \hat H$, then the result follows from Lemma \ref{lemmahatHD}. It remains to consider the case $y_*\in \Hsdqc\setminus\hat H$ and $q_*>0$.
We consider the set of directions such that the rank-two line constructed in Lemma \ref{lemmaranktwolines} intersects $\hat H_A:=\hat H\cup (A\times \{0\})$, where $A$ is the set constructed in Lemma \ref{lemmaABC}, and define \begin{equation} D(y_*):=\{e\in S^1: \Gamma_{y_*}(e,[0,\infty))\cap \hat H_A\ne\emptyset\} \end{equation} (this is illustrated in Figure \ref{fig-s1}). By continuity of $\Gamma_{y_*}$ and compactness of $\hat H_A$, it follows that $D(y_*)$ is a closed subset of $S^1$. We now distinguish two cases. If there is $e\in D(y_*)\cap -D(y_*)$, then there are $t_-<0<t_+$ such that $\Gamma_{y_*}(e,t_\pm)\in \hat H_A$, and Lemma \ref{lemmaranktwodirect} (which applies with $\hat H$ replaced by $\hat H_A$, since any $\sigma$ with $\Phi(\sigma)\in A\times\{0\}$ equals $p\Id$ for some $p\in A=B$ and hence belongs to $\Kinfty$ by Lemma \ref{lemmaABC}) implies that $y_*\in\Phi (\Kinfty)$. If instead there is no such $e$, then $D(y_*)$ and $-D(y_*)$ are disjoint. Since they are both closed, and $S^1$ is connected, they cannot cover $S^1$. In particular, there is $e\in S^1$ such that $e,-e\not\in D(y_*)$. In the notation of Lemma \ref{lemmaranktwolines}, if $e\in S^1_+$ then the curve $\Gamma_{y_*}(e,\R)$ is the graph of $q=b|p-p_0|$ for some $b\ge \sqrt3/2$, $p_0\in\R$ such that $q_*=b|p_*-p_0|$. Assume, for definiteness, that $p_0>p_*$. The remaining case is identical up to a few signs. This curve does not intersect $\hat H_A$ and, by the form of $\hat H_A$, this implies that $q<b|p-p_0|$ for all $(p,q)\in \hat H_A$. In particular, $p_0\not\in A$. Since $A$ is an interval and $p_*\in A$, we have that $A\subseteq (-\infty,p_0)$ and $\hat H^\conv\subseteq(-\infty,p_0)\times[0,\infty)$. Hence, $q<b(p_0-p)$ for all $(p,q)\in \hat H$ and, by convexity, $q<b(p_0-p)$ for all $(p,q)\in \hat H^\conv$. But this contradicts the assumption $(p_*,q_*)\in \Hsdqc$. The case $e\in S^1_-$ is similar. The curve $\Gamma_{y_*}(e,\R)$ is of the type $\{f_{y_1}(\cdot)=0\}$, for some $y_1$. Then, $f_{y_1}(y_*)=0$ but $f_{y_1}<0$ on $\hat H$, so that $y_*$ is separated from $\hat H$, contradicting the assumption that $y_*\in \Hsdqc$. \end{proof} \begin{lemma}\label{lemmainnerbound2} Under the assumptions of Theorem \ref{theorelaxexplicitpq}, if additionally the tangent to $\partial \Hsdqc$ belongs to $\{e\in S^1: |e_2|\le \frac{\sqrt3}{4}|e_1|\}$ for any $y_*\in \partial \Hsdqc\setminus \hat H$, then any $\sigma$ with $\Phi(\sigma)\in\Hsdqc$ belongs to $\Kinfty$. \end{lemma} In particular, the assumption implies that $\partial\Hsdqc$ is differentiable (as a graph) at any point not belonging to $\hat H$, but does not require differentiability on $\hat H$. \begin{proof} The argument is similar to the proof of the previous Lemma. By construction of $\Hsdqc$, there is a map $\psi: A\to[0,\infty)$ such that \begin{equation} \Hsdqc=\{(p,q): p\in A, 0\le q\le \psi(p)\}. \end{equation} We first show that any $\sigma_*$ such that $(p_*,q_*):=\Phi(\sigma_*)\in\partial\Hsdqc$ belongs to $\Kinfty$. We distinguish several cases. If $q_*=0$, then $p_*\in A$ and the claim follows from the equality $A=B$ in Lemma \ref{lemmaABC}. If $(p_*,q_*)\in \hat H$, then the claim follows from Lemma \ref{lemmahatHD}. It remains to consider the case in which $(p_*,q_*)\in \hat H^\conv\setminus \hat H$ and cannot be separated from $\hat H$. At this point, we repeat the argument in Lemma \ref{lemmainnerbound1}. In particular, since $y_*\in \Hsdqc$ we know that there is $e\in S^1$ such that $e\in D(y_*)\cap -D(y_*)$. This means that there are $t_-<0<t_+$ such that $\Gamma_{y_*}(e,t_\pm)\in\hat H_A$ and that $\Gamma_{y_*}(e,t)\in \Hsdqc$ for all $t\in [t_-,t_+]$.
This implies that $\Gamma_{y_*}(e,\cdot)$ is tangent to $\partial\Hsdqc$ at $t=0$ and, in particular, that $e$ is tangent to $\partial\Hsdqc$. We remark that $e$ cannot be $(0,\pm1)$, since in that case we would have $y_*\in\hat H$, a case we have already dealt with. Therefore, $|e_2|\le \frac{\sqrt3}{4}|e_1|$, so that Lemma \ref{lemmaconstructranktwodir} provides a rank-two line $t\mapsto\xi_t$ through $\xi_0=\sigma_*$ whose image under $\Phi$ is the curve $\Gamma_{y_*}(e,\cdot)$; in particular, there are $s_\pm$ with $\Phi(\xi_{s_\pm})=\Gamma_{y_*}(e,t_\pm)\in\hat H_A$, which by Lemma \ref{lemmahatHD} (and, for points of $A\times\{0\}$, by Lemma \ref{lemmaABC}) implies $\xi_{s_\pm}\in \Kinfty$. Therefore, $\sigma_*=\xi_0\in \Kinfty$. This shows that, for any $p\in A$, any matrix $\sigma$ with $\Phi(\sigma)=(p,\psi(p))$ belongs to $\Kinfty$. The argument of Lemma \ref{lemmahatHD} then concludes the proof. \end{proof} \begin{figure} \caption{Sketch of the sets $H$ and $\Hsdqc$ in the proof of Lemma \ref{lemmanotcylndrical}.} \label{figlemmanotcylndrical} \end{figure} We finally show that $\Hsdqc=\Phi(\Kinfty)$ does not imply $\Kinfty=\Phi^{-1}(\Hsdqc)$. We refer to Figure \ref{figlemmanotcylndrical} for an illustration. \begin{lemma}\label{lemmanotcylndrical} Let $H:=\{(0,0),(1,\sqrt3/2)\}$, and define $K$ as in (\ref{eqKfromH}). Then, $\Hsdqc=\{(p,q): 0\le p\le 1, 0\le q \le \sqrt 3 p/2\}$, the matrix $\sigma_*:=\mathrm{diag}(1,1/4,1/4)$ obeys $(p(\sigma_*), q(\sigma_*)) = (1/2,\sqrt3/4)\in \Hsdqc$, but $\sigma_*\not\in \Kinfty\subseteq\Ksdqc$. \end{lemma} \begin{proof} The formula for $\Hsdqc$ follows immediately from the definition in (\ref{eqdefhatH}--\ref{eqdefHstar}); the fact that $\sigma_*\in \Hsdqc$ from the definition of $p$ and $q$ in (\ref{eqdefpqsigma}). Lemma \ref{lemmapqsimple} shows that $\Kinfty\subseteq\Ksdqc$. It remains to prove that $\sigma_*\not\in \Kinfty$. Since $\rank \sigma_*=3$, Lemma \ref{lemmatwomatrix} implies $\{0, 2\sigma_*\}^{(\infty)}=\{0, 2\sigma_*\}$. Therefore, it suffices to show that $\sigma_*\in \Kinfty$ would imply $\sigma_*\in \{0, 2\sigma_*\}^{(\infty)}$. We first define $h:\R^{3\times 3}_\sym\to\R$, $h(\xi):=2p(\xi)-\xi_{11}$ and observe that $h(0) = h(\sigma_*) = h(2\sigma_*) = 0$. We fix any $\xi\in K\setminus\{0\}$. Then, necessarily $p(\xi)=1$ and $q(\xi)=\sqrt3/2$. Recalling that $ 2q^2(\xi)=|\xi-p(\xi)\Id|^2$ and $\xi_{33}=3p(\xi)-\xi_{11}-\xi_{22}$, we compute \begin{equation}\label{eqnotcylindr} \begin{split} \frac32= 2q^2(\xi)&=|\xi-p(\xi)\Id|^2= |\xi-\Id|^2\\ &\ge (\xi_{11}-1)^2+(\xi_{22}-1)^2+(2-\xi_{11}-\xi_{22})^2\\ &\ge (\xi_{11}-1)^2+2(\frac12-\frac{\xi_{11}}2)^2 = \frac32 (\xi_{11}-1)^2 \end{split} \end{equation} and we conclude that $\xi_{11}\le 2$, so that $h(\xi)\ge 0$. Furthermore, if $h(\xi)=0$ then necessarily $\xi_{11}=2$, so that equality holds throughout in (\ref{eqnotcylindr}). This, in turn, implies that $\xi=2\sigma_*$. We have therefore proven that $h\ge 0$ on $K$, with $\{h=0\}\cap K=\{0,2\sigma_*\}$. We now assume $\sigma_*\in \Kinfty$, so that, for any $g\in C^0(\R^{3\times 3}_\sym;[0,\infty))$ which is \sdqclong, $g(\sigma_*)\le \max g(K)$. In order to show that $\sigma_*\in \{0, 2\sigma_*\}^{(\infty)}$, we fix a function $f\in C^0(\R^{3\times 3}_\sym;[0,\infty))$ which is \sdqclong, and let $\alpha:=\max\{f(0),f(2\sigma_*)\}$. We need to show that $f(\sigma_*)\le \alpha$. Fix $\eps>0$. By continuity there is $\delta>0$ such that $f\le \alpha+\eps$ on $B_\delta(2\sigma_*)$. Let $M:=\max f(K)\ge\alpha$, $m:=\min h(K\setminus \{0\}\setminus B_\delta(2\sigma_*))>0$. We define \begin{equation} g(\xi) := f(\xi) - (M-\alpha)\frac{h(\xi)}{m} .
\end{equation} Then, $g(0)=f(0)\le\alpha$, $g\le \alpha+\eps$ on $K\cap B_\delta(2\sigma_*)$, $g\le M-(M-\alpha)=\alpha$ on the rest of $K$, and $g$ is continuous and \sdqclong. The function $g_+=\max\{g,0\}\in C^0(\R^{3\times 3}_\sym;[0,\infty))$ obeys $\max g_+(K)\le \alpha+\eps$. Since $\sigma_*\in \Kinfty$, we have $f(\sigma_*)=g_+(\sigma_*)\le \alpha+\eps$. But $\eps$ was arbitrary, hence we conclude that $f(\sigma_*)\le \max f(\{0,2\sigma_*\})$. Therefore, $\sigma_*\in \{0,2\sigma_*\}^\sdqc$, as claimed, and the proof is concluded. \end{proof} \subsection{Examples} We close by presenting two specific examples for which the \sdqclong\ hull can be explicitly characterized. \begin{lemma} Let $p_1,q_1>0$, with $0<p_1<2q_1/\sqrt3$, and let $H:=\{(-p_1,q_1),(p_1,q_1)\}$. Then, \begin{equation}\label{eqtwopointshre} \Hsdqc=\{(p,q): -p_1\le p\le p_1, 0\le q \le \sqrt{q_1^2+\frac34 (p^2-p_1^2)}\} \end{equation} and $\Phi(K^\sdqc)=\Hsdqc$. If, additionally, $p_1\le q_1/\sqrt3$, then \begin{equation} \begin{split} K^\sdqc&=\{\sigma: \Phi(\sigma)\in \Hsdqc\}\\ &=\{\sigma: p(\sigma)\in[-p_1,p_1], q^2(\sigma)-\frac34 p^2(\sigma)\le q_1^2-\frac34 p_1^2\}. \end{split} \end{equation} \end{lemma} We refer to Figure \ref{fig-twop-a} for an illustration. \begin{proof} We observe that $\hat H=\{-p_1,p_1\}\times[0,q_1]$ and $\hat H^\conv=[-p_1,p_1]\times [0, q_1]$. Let $W:=\{(p,q): -p_1\le p\le p_1, q^2-\frac34 p^2\le q_1^2-\frac34p_1^2, q\ge0\}$ be the set in (\ref{eqtwopointshre}). We first show that $\Hsdqc\subseteq W$. We define $q_0:=\sqrt{q_1^2-\frac34 p_1^2}$ and consider the corresponding function $f_{(0,q_0)}(p,q)=4(q^2-q_0^2)-3p^2 = 4(q^2-q_1^2)-3(p^2-p_1^2)$. Then, $f_{(0,q_0)}\le 0$ on $\hat H$, and $f_{(0,q_0)}>0$ on $\hat H^\conv\setminus W$. Recalling (\ref{eqdefHstar}), we obtain $\Hsdqc\subseteq W$. To obtain the remaining inclusion, it suffices to show that we cannot separate any point of $W$ from $\hat H$. We fix a point $(p,q)\in W$ and consider a generic pair $y_0=(p_0,q_0)\in\R\times[0,\infty)$. The function $f_{y_0}$ separates $(p,q)$ from $\hat H$ if \begin{equation} \max \{4(q_1^2-q_0^2)-3(p_1\pm p_0)^2\} < 4(q^2-q_0^2) - 3 (p-p_0)^2, \end{equation} which, expanding all squares, is the same as \begin{equation} 4q_1^2-3 p_1^2 + 6 |p_1p_0|< 4q^2 - 3 p^2 + 6 pp_0. \end{equation} {}From $(p,q)\in W$ we obtain $|p|\le p_1$, which implies $6pp_0\le 6 |p_1p_0|$, and $ 4q^2 - 3 p^2 \le 4q_1^2-3 p_1^2 $. Summing the two gives \begin{equation} 4q^2 - 3 p^2 + 6 pp_0\le 4q_1^2-3 p_1^2 + 6 |p_1p_0|, \end{equation} which means that we cannot separate $(p,q)$ from $\hat H$. Therefore, $W\subseteq \Hsdqc$. From the definition and the condition $p_1<2q_1/\sqrt3$, we see that $\Hsdqc$ is connected, so that the first assertion directly follows from Theorem \ref{theorelaxexplicitpq}. To prove the second assertion we need only control the slope of the boundary. The vertical sides of $\Hsdqc$ belong to $\hat H$. The slope of the hyperbola is maximal at the two extreme points, i.~e., at $(\pm p_1, q_1)$. Differentiating $q^2-\frac34p^2=c$, we obtain $q'q=\frac34 p'p$, which implies that $|q'|/|p'|=\frac34 p_1/q_1$. If $p_1\le q_1/\sqrt3$, this implies that the slope is not larger than $\frac{\sqrt3}4$. The conclusion then follows from Lemma \ref{lemmaKfullcharsmallder}. 
\end{proof} \begin{figure} \caption{Different regions for the location of $D$ with respect to the circle in the construction of (\ref{eqHcirclepoint}).} \label{figpointcircpd} \end{figure} \begin{figure} \caption{Example with $H$ consisting of a point and a half-disc, see Lemma \ref{lemmapointcirc}.} \label{figpointcircI} \end{figure} \begin{figure} \caption{Two examples with $H$ consisting of a point and a half-disc, see Lemma \ref{lemmapointcirc}.} \label{figpointcircII} \end{figure} Next, we consider a second example in which $H$ consists of a half-disc of radius $r$ centered at $C:=(p_C,0)$ and a single point $D:=(p_D,q_D)$, \begin{equation}\label{eqHcirclepoint} H:=\{(p_D,q_D)\} \cup \{(p,q): (p-p_C)^2+q^2\le r^2, q\ge 0\}. \end{equation} There are several different cases, depending on the existence of one or two hyperbolas in the family considered above which contain the point $D$ and are tangent to the circle. The boundaries between the different phases are vertical lines (corresponding to the construction of $\hat H$ from $H$) and lines with slope $\pm \sqrt3/2$ (corresponding to the maximal slope of the hyperbolas, which is also the boundary between $S^1_+$ and $S^1_-$). The phase diagram is sketched in Figure \ref{figpointcircpd}. The critical points are $X=(p_C-\frac{\sqrt7}{\sqrt3} r,0)$, $Y=(p_C+\frac{\sqrt7}{\sqrt3} r,0)$ and $Z=(p_C,\frac{\sqrt7}{\sqrt4} r)$. For definiteness, we focus on two representative regions. \begin{lemma}\label{lemmapointcirc} Let $H$ be as in (\ref{eqHcirclepoint}) with $D$ in region $I$, defined as \begin{equation}\label{eqDregionI} p_D<p_C-r,\hskip5mm \frac{\sqrt3}{2}|p_D- p_X| < q_D< \frac{\sqrt3}{2}|p_D- p_Y | . \end{equation} Then, there is a unique $y_0=(p_0,q_0)\in\R\times[0,\infty)$ such that the hyperbola $\{q^2-q_0^2=\frac34(p-p_0)^2\}$ contains $D=(p_D,q_D)$ and is tangent to the circle with radius $r$ centered at $C=(p_C,0)$ at a point $T$. Furthermore, \begin{equation*} \Hsdqc = H\cup \{(p,q): p_D\le p\le p_T, q^2\le q_0^2 + \frac34 (p-p_0)^2\}. \end{equation*} If, instead, $D$ is in region $II$, defined by \begin{equation} q_D-q_Z \ge \frac{\sqrt3}{2} |p_D-p_C|, \end{equation} then $\Hsdqc=\hat H^\conv$. \end{lemma} \begin{proof} The second case is straightforward. The boundary of $\hat H^\conv$ has slope at least $\sqrt3/2$, hence no point of it can be separated using the given hyperbolas. A sketch is shown in Figure \ref{figpointcircII}. The first case, corresponding to region $I$ in Figure \ref{figpointcircpd}, requires a more detailed argument. We first have to show that there is a unique hyperbola of the type $q^2-q_0^2=\frac34(p-p_0)^2$ which contains $D$ and is tangent to the half-circle. We refer to Figure \ref{figpointcircI} for an illustration. The condition that $D$ belongs to the hyperbola translates into \begin{equation} q_0^2=q_D^2-\frac34 (p_D-p_0)^2. \end{equation} The condition of being tangent means that the system \begin{equation} \begin{cases} q^2=q_D^2-\frac34 (p_D-p_0)^2+\frac34(p-p_0)^2\\ (p-p_C)^2+q^2=r^2 \end{cases} \end{equation} has a double solution. Note that these equations are both quadratic in $p$ and linear in $q^2$, hence the system is overall of second order in these two variables. Substituting $q^2$ into the second equation leads to the condition that \begin{equation} (p-p_C)^2+q_D^2-\frac34 (p_D-p_0)^2+\frac34(p-p_0)^2=r^2 \end{equation} has a double solution $p_T$, which should satisfy $p_T\in [p_C-r,p_C+r]$.
This solution can be computed explicitly, but for proving the assertion existence suffices. To this end, we consider the family of curves $\Gamma_D(e,\R)$ constructed in Lemma \ref{lemmaranktwolines} for $|e_2|\le \frac{\sqrt 3}2e_1$. The assumption (\ref{eqDregionI}) implies that $\Gamma_D((2/\sqrt7, -\sqrt3/\sqrt7), [0,\infty))$ intersects $B_C(r)$, but $\Gamma_D((2/\sqrt7, +\sqrt3/\sqrt7), [0,\infty))$ does not (notice that both these curves are piecewise affine). By continuity there is $e_*$ in the given interval such that $\Gamma_D(e_*,\R)$ is tangent to $B_C(r)$. We denote by $T$ the intersection of the two, and define $(q_0, p_0)$ so that $\Gamma_D(e_*,\R)$ is the set $q^2-q_0^2=\frac34(p-p_0)^2$ (see Figure \ref{figpointcircI}). To conclude the proof, it suffices to show that no point of the given set can be separated by another hyperbola. To this end, it suffices to show that no other hyperbola of the given family can have two points in common with the given one. This follows from the fact that any solution to the system \begin{equation} \begin{cases} q^2-q_0^2=\frac34(p-p_0)^2\\ q^2-q_1^2=\frac34(p-p_1)^2 \end{cases} \end{equation} obeys $q_0^2-q_1^2=\frac34 (p_1^2-p_0^2-2pp_1-2pp_0)$, which is a linear equation in $p$ and, therefore, has at most one solution. If $p$ is unique, since $q\ge 0$ obviously $q$ is also unique. This concludes the proof. \end{proof} \end{document}
\begin{document} \title{Maximum value of the standardized log of odds ratio and celestial mechanics} \thispagestyle{firststyle} \ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{} \dropcap{W}hen both exposure and disease outcome are binary variables, epidemiological data can be conveniently summarized by a 2$\times$2 table: \begin{table}[ht!] \centering \begin{tabular}{ccc} \hline &\multicolumn{2}{l}{Exposure}\\ \cline{2-3} \rule{0pt}{3ex} Disease status & $E$ & $\bar{E}$ \\ \hline \rule{0pt}{3ex} $D$ & $n_{11}=n_D\hat{p}$ & $n_{12}=n_D(1-\hat{p})$\\ $\bar{D}$ & $n_{21}=n_{\bar{D}}\hat{q}$ & $n_{22}=n_{\bar{D}}(1-\hat{q})$\\ \hline \end{tabular} \label{tab1} \end{table} \noindent where $n_{11}+n_{12}$ is the number of cases, $n_D$; $n_{21}+n_{22}$ is the number of controls, $n_{\bar{D}}$; and the number of exposed subjects is $n_{11}+n_{21}$. When sampling is random with respect to exposure $E$, the sample proportions $\hat{p}=n_{11}/n_D$ and $\hat{q}=n_{21}/n_{\bar{D}}$ estimate the population probabilities of exposure among cases and among controls, respectively ($p = \Pr(E|D)$ and $q = \Pr(E|\bar{D})$). Then, in epidemiological studies, the effect of exposure on outcome is often measured by the odds ratio, OR, which is defined as: \begin{eqnarray*} \text{OR} &=& \frac{p/(1-p)}{q/(1-q)} \\ &=& \frac{\Pr(D \mid E)/(1-\Pr(D \mid E)) }{\Pr(D \mid \bar{E})/(1-\Pr(D \mid \bar{E}))}. \end{eqnarray*} The relative risk, $\text{RR} \nobreak= \nobreak\Pr(D|E) \nobreak/ \nobreak\Pr(D|\bar{E})$, cannot be directly estimated from table counts when the sample proportions of cases and controls are fixed by design, but the $\text{OR}$ estimate, $\widehat{\text{OR}} \nobreak= \nobreak \frac{\hat{p}/(1-\hat{p})}{\hat{q}/(1-\hat{q})}$, is unaffected by the study design. Let $\mu$ denote the effect size measured by the log odds ratio. Given the estimated log odds ratio, $\hat{\mu} = \ln(\widehat{\text{OR}})$, a commonly used statistic is: \begin{eqnarray*} T &=& \frac{\ln(\widehat{\text{OR}})} {\sqrt{\sum{1/n_{ij}}}} = \frac{\hat{\mu}}{\sqrt{\sum{1/n_{ij}}}}, \end{eqnarray*} which asymptotically follows the standard normal distribution. The sum of the four cell counts, $N=\sum n_{ij}$, can be factored into this expression as: \begin{eqnarray*} T &=& \sqrt{N} \, \, \frac{\hat{\mu}} {\hat{\sigma}(\hat{w})} \\ \hat{\sigma}(\hat{w}) &=& \sqrt{\frac{1}{\hat{w}} \frac{1}{\hat{p}(1-\hat{p})} + \frac{1}{1-\hat{w}} \frac{1}{\hat{q}(1-\hat{q})}}, \end{eqnarray*} where $\hat{w}$ is the proportion of cases, $n_D/N$. The corresponding population parameter can be written as: \begin{eqnarray} \sigma^2(w) &=& \frac{1}{w} \frac{1}{\Pr(E|D)\left[1-\Pr(E|D)\right]} \label{eq:sd} \\ \nonumber &+& \frac{1}{(1-w)} \frac{1}{\Pr(E|\bar{D})\left[1-\Pr(E|\bar{D})\right]}, \end{eqnarray} where $w=\Pr(D)$ is the disease prevalence. We express the variance as a function of $w$ to emphasize that $\sigma(w)$ will vary depending on the study design. Further, the solution to $\sigma'(w)=0$ under the constraint $0<w<1$ provides the value of $w$ at which the variance is minimized, and thus the value of $\mu / \sigma$ is maximized. This minimizing value is: \begin{eqnarray} w_{m} &=& \argmin_w \sigma(w) = \frac{1}{1 + \frac{\Pr(E|D)}{\Pr(E|\bar{D})} \,\, \sqrt{\text{OR}^{-1}} }. \label{wm} \end{eqnarray} Thus, the ratio $\gamma = \mu / \sigma$ will attain its maximum if $\sigma = \sigma(w_m)$.
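As a hedged illustration (ours, not part of the original derivation), the minimization above is easy to check numerically: the short Python sketch below evaluates $\sigma^2(w)$ from Eq.~(\ref{eq:sd}) on a fine grid, with arbitrary example values assumed for $\Pr(E|D)$ and $\Pr(E|\bar{D})$, and confirms that the grid minimizer agrees with $w_m$ from Eq.~(\ref{wm}).
\begin{verbatim}
# Illustrative sketch (example values assumed): check that sigma^2(w)
# is minimized at w_m.
import numpy as np

p = 0.30   # Pr(E|D), exposure probability among cases (assumed example value)
q = 0.10   # Pr(E|Dbar), exposure probability among controls (assumed example value)
OR = (p / (1 - p)) / (q / (1 - q))

def sigma2(w):
    # variance expression as a function of the disease prevalence w
    return 1.0 / (w * p * (1 - p)) + 1.0 / ((1 - w) * q * (1 - q))

w_grid = np.linspace(1e-4, 1 - 1e-4, 200001)
w_numeric = w_grid[np.argmin(sigma2(w_grid))]
w_m = 1.0 / (1.0 + (p / q) * np.sqrt(1.0 / OR))

print(w_numeric, w_m)   # the two values agree to within the grid resolution
\end{verbatim}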
Alternatively, in terms of the pooled exposure probability, $v=w\Pr(E|D)+(1-w)\Pr(E|\bar{D})$, the value at which the variance is minimized and $\mu / \sigma$ is maximized can be expressed as a function of RR and OR as: \begin{eqnarray} v_m &=& \argmin_v \sigma(v) = \frac{1}{1 + \text{RR} \,\, \sqrt{\text{OR}^{-1}} }. \label{vm} \end{eqnarray} The ratio $\mu / \sigma(v)$ will thus attain its maximum at $\sigma = \sigma(v_m)$. Re-expressing $\sigma$ in Eq. (\ref{eq:sd}) as a function of $v$, we get: \begin{eqnarray} \sigma^2(v) &=& \frac{1}{v} \frac{1}{\Pr(D|E)\left[1-\Pr(D|E)\right]} \label{sigma.vm} \\ \nonumber &+& \frac{1}{1-v} \frac{1}{\Pr(D|\bar{E})\left[1-\Pr(D|\bar{E})\right]}. \end{eqnarray} To obtain the maximum possible standardized $\ln(\text{OR})$, we can substitute $v_m$ and $\Pr(D|\bar{E})=1/(1-\text{OR}\left[1-1/\Pr(D|E)\right])$ into Eq. (\ref{sigma.vm}), and minimize the resulting expression with respect to the exposure risk, $\Pr(D|E)$, and with respect to $v$, which results in: \begin{equation} \Pr(D|E)= 1 - \frac{1}{1 + \sqrt{\text{OR}}}, \label{eq5} \end{equation} \begin{equation} \Pr(D|\bar{E}) = \frac{1}{1 + \sqrt{\text{OR}}} = 1 - \Pr(D|E), \label{eq6} \end{equation} and \begin{equation} v = 1/2. \label{eq7} \end{equation} Next, by substituting Eqs. (\ref{eq5}-\ref{eq7}) into Eq. (\ref{sigma.vm}) we get the denominator of the maximum standardized effect size. Therefore: \begin{eqnarray} \gamma_{\max} = \frac{ \ln(\text{OR}) }{ 2 \sqrt{2 + (1+\text{OR}) / \sqrt{\text{OR}}} }. \label{gamma.max} \end{eqnarray} The above expression depends only on the odds ratio, but it is not monotone in it, and reaches its maximum at a value of ln(OR) of about 4.7987. Perhaps counterintuitively, as ln(OR) exceeds that value, the corresponding standardized statistic, ln(OR)/$\sigma$, starts to decrease. It turns out that there is a peculiar connection between the expression for $\gamma_{\text{max}}$ and the famous orbital mechanics equation: the Kepler equation, $M = E - \varepsilon \sin(E)$. A geometric interpretation of the Kepler equation is illustrated by Figure \ref{fig1}. Suppose that we are inside a circular orbit rescaled to be the unit circle. Our position S is denoted by ``$\large{\star}$''. The shortest path to the orbit has the length $1-\varepsilon$. A celestial body travels the orbit from that point to T. Given the area $M/2$ and the distance $1-\varepsilon$, we want to determine the angle $E$. These three values are related to one another by Kepler's equation. Planetary orbits are elliptical, so the actual orbit is along an ellipse inside the unit circle. Still, the calculation of the {\em eccentric anomaly} $E$ is a crucial step in determining the planet's coordinates along its elliptical orbit at various times. \begin{figure} \caption{\textbf{The Kepler equation: geometric interpretation}} \label{fig1} \end{figure} The Kepler equation (KE) is transcendental, i.e., with no algebraic solution in terms of $M$ and $\varepsilon$, and it has been studied extensively since it is central to celestial mechanics. Colwell writes ``The sole subject of our work is Kepler's Equation'' in the book suitably named ``Solving Kepler's equation over three centuries'' and notes that ``in virtually every decade from 1650 to the present'' there have been papers devoted to that equation.\cite{colwell1993solving} A solution to KE can be written as an infinite series in powers of $\varepsilon$, which is convergent only if $\varepsilon$ is smaller than the ``Laplace Limit Constant'', LLC.
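Before turning to the connection with the Kepler equation, we note that the non-monotonicity of Eq.~(\ref{gamma.max}) is easy to see numerically. The following sketch (ours; purely illustrative) scans $\gamma_{\max}$ over a grid of $\ln(\text{OR})$ values and locates its peak.
\begin{verbatim}
# Illustrative scan of gamma_max as a function of ln(OR).
import numpy as np

ln_or = np.linspace(0.01, 10.0, 100000)
OR = np.exp(ln_or)
gamma_max = ln_or / (2.0 * np.sqrt(2.0 + (1.0 + OR) / np.sqrt(OR)))

i = np.argmax(gamma_max)
print(ln_or[i], gamma_max[i])   # approximately 4.7987 and 0.6627
\end{verbatim}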
To relate LLC to the bounds for standardized $\ln(\text{OR})$, let $x = \ln(\text{OR})$, $x>0$. In terms of $x$, the standardized statistic is given by: \begin{eqnarray} \gamma = \kappa(x) = \frac{x}{2 \sqrt{2 + \frac{1 + \exp{(x)}}{\exp{(x/2)}}}}. \end{eqnarray} Using basic trigonometric identities: \begin{eqnarray*} && \frac{1 + \exp{(x)}}{\exp{(x/2)}} = 2 \cosh (x/2), \\ && 2 + \frac{1 + \exp{(x)}}{\exp{(x/2)}} = 4 ( \cosh (x/4) )^{2}, \quad \text{and} \\ && \sqrt{2 + \frac{1 + \exp{(x)}}{\exp{(x/2)}}} = 2 \cosh (x/4), \end{eqnarray*} we can express $\kappa(x)$ and its derivative in terms of hyperbolic functions as: \begin{eqnarray} \kappa(x) &=& \frac{x/4}{\cosh(x/4)} = (x/4) \, \text{sech}(x/4) \label{eq.kappa} \\ \kappa'(x) &=& \frac{4 - x \tanh(x/4) }{16 \cosh(x/4) }. \end{eqnarray} To maximize the standardized ln(OR), we set $\kappa'(x)$=$0$, which is equivalent to solving $(x/4)\tanh(x/4)=1$. The solution is four times the solution to $x\tanh(x)=1$ equation, which is 1.19967864... This implies maximum ln(OR) $= 4 \times 1.19967864...=4.7987...$ and by substituting this value into Eq. (\ref{gamma.max}) we obtain $\gamma_{\text{max}} = 0.6627...$, the LLC. The solution to KE involves the condition equivalent to Eq. (\ref{eq.kappa}). Namely, the solution can be expressed as the power series in $\varepsilon$, provided $|\varepsilon \sin(E)| < |E-M|$ and that $\varepsilon < x / \cosh(x), x = |E-M|$, which is the LLC.\cite{plummer1918introductory} Although it appears that the LLC bound is a function of odds ratio alone, this bound can only be attained at the specific values of population parameters (or the respective sample values). Namely, (i) $v_m=w_m=1/2$ from Eq. (\ref{eq7}), which implies $\text{RR}^2= \left(\frac{\Pr(E|D)}{\Pr(E|\bar{D})}\right)^2 = \text{OR}$; (ii) $\Pr(D|E)=1-\Pr(D|\bar{E})$ from Eqs. (\ref{eq5} - \ref{eq6}); and (iii) ln(OR) $= 4.7987...$ Next, by solving \begin{eqnarray*} \text{OR} &=& \frac{\Pr(D|E)/(1-\Pr(D|E))}{\Pr(D|\bar{E})/(1-\Pr(D|\bar{E}))} \\ &=& \exp(4.7987\dots) = 121.354\dots \end{eqnarray*} for $\Pr(D|E)$, we obtain: \begin{eqnarray} \Pr(D|E) = \frac{1}{2\,z} + \frac{1}{2}, \end{eqnarray} where $z$ is the solution of $ z \tanh(z)=1$, i.e., $z =1.19967864\dots$ and $\Pr(D|E) = \Pr(E|D) = 0.9167782798\dots$ Similarly, Pearson correlation between two binary variables ranges between -1 and 1, but this range is not free of parameters: these boundary values are possible only in the case when the population (or sample) frequencies of two binary variables are equal to each other. Moreover, these bounds are asymmetric, depending on the sign of the correlation.\cite{weir1979inferences} The range of the standardized statistic, -LLC to LLC, has implications for statistical analysis. For example, several recent publications on P\nobreakdash-value{} replicability posed the following question: given a small initial P\nobreakdash-value{}, what is a likely spread of P\nobreakdash-values{} in subsequent replication studies?\cite{halsey2015fickle,lai2012subjective,lazzeroni2014p,lazzeroni2016solutions,vrz2017bayesian} P\nobreakdash-values{} for $\ln(\text{OR})$ are explicit functions of $\gamma$ because they are defined as $P = \Pr(Z > z_{\alpha})$, where $Z = \sqrt{N} \gamma$ is asymptotically normal. The two-sided P\nobreakdash-value{} can be similarly defined in terms of chi-square distributed $Z^2$. 
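The constants quoted above can be reproduced with a few lines of code. The following minimal sketch (ours) solves $z\tanh(z)=1$ by bisection and recovers the maximizing $\ln(\text{OR})$, the Laplace limit constant, and the corresponding exposure risk.
\begin{verbatim}
# Solve z*tanh(z) = 1 by bisection and recover the constants quoted above.
import math

def bisect(fn, lo, hi, iters=200):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fn(lo) * fn(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

z = bisect(lambda t: t * math.tanh(t) - 1.0, 0.5, 2.0)

print(z)                      # 1.19967864...
print(4.0 * z)                # maximizing ln(OR) = 4.7987...
print(z / math.cosh(z))       # gamma_max = 0.6627..., the Laplace limit constant
print(0.5 + 1.0 / (2.0 * z))  # Pr(D|E) = Pr(E|D) = 0.9167...
\end{verbatim}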
The prior distribution for the standardized effect size occurs naturally and needs to be specified in order to give probabilistic bounds for the spread of future replication P\nobreakdash-values{}. In applications where the prior distribution for the effect size is modeled in terms of ln(OR), our results allow one to specify a reasonable prior range for the standardized value. It has been suggested that summary association statistics can be converted to approximate posterior (Bayesian) summaries about parameters of interest. For example, a one-sided P\nobreakdash-value{}, $P$, for testing the significance of $\ln(\text{OR})$ can be transformed to the normal test statistic, $Z = \Phi^{-1}(1-P)$. This statistic is $Z=\sqrt{N}\ln(\widehat{\text{OR}})/\hat{\sigma}$. An approximate Bayesian false discovery probability can be computed based only on the summary statistics $\ln(\widehat{\text{OR}})$ and $\hat{\sigma}$, the value $N$, and an assumed variance parameter for the zero-mean prior normal distribution for $\ln(\text{OR})$.\cite{wakefield2007bayesian,wakefield2009bayes} For any given value of OR, $\mu=\ln(\text{OR})$ is fixed, but $\sigma$ can vary as a function of $w$. The normal prior distribution for $\mu$ can be characterized simply by $\Pr(\text{OR} > x) = \beta$. Considering the standardized effect, we can write $\beta = \Pr(\text{OR} > x) = \Pr(\mu / \sigma > \ln(x) / \sigma)$. Denote the normal cumulative distribution function with mean $a$ and variance $b$, evaluated at $x$, by $\Phi(x|a,b)$, and its inverse by $\Phi^{-1}(x|a,b)$. Then, $\ln(x) /\sigma = \Phi^{-1}(1-\beta \mid 0,\sigma_0) = \sqrt{\sigma_0} \Phi^{-1}(1-\beta \mid 0,1)$. From this, we can obtain the flattest possible prior distribution for $\mu/\sigma$ as the zero-mean normal with variance \begin{eqnarray} \sigma_0 = \left( \frac{\ln(x)/\sigma_m}{\Phi^{-1}(1-\beta \mid 0,1)} \right)^2. \end{eqnarray} For $\sigma_0$ to be as large as possible, $\sigma_m$ should be equal to $\ln(x)/\gamma_{\max}$ (from Eq. \ref{gamma.max}). Alternatively, either the value $\sigma(w_m)$ or $\sigma(v_m)$ can be specified with some additional assumptions. For example, for $\sigma(w_m)$, $\Pr(D|E)$ needs to be specified. Then, \begin{eqnarray} \Pr(D \mid \bar{E}) &=& \frac{1}{1-\text{OR} \left(1 - \Pr(D \mid E)^{-1} \right)}, \\ \text{RR} &=& \Pr(D \mid E) / \Pr(D \mid \bar{E}), \end{eqnarray} and $\sigma(w_m)$ is obtained using the value $w_m$ from Eq. (\ref{wm}). \\ \end{document}
\begin{document} \title{Exact multiplicity of solutions for some semilinear Dirichlet problems} \begin{abstract} The classical result of A. Ambrosetti and G. Prodi \cite{AP}, in the form of M.S. Berger and E. Podolak \cite{BP}, gives the exact number of solutions for the problem \[ \Delta u+g(u)= \mu \varphi _1(x)+e(x) \; \; \mbox{in $D$} , \; \; u=0 \; \; \mbox{on $\partial D$} \,, \] depending on the real parameter $\mu$, for a class of convex $g(u)$, and $\int _D e(x) \varphi _1(x)\, dx=0$ (where $\varphi _1(x)>0$ is the principal eigenfunction of the Laplacian on $D$, and $D \subset R^n$ is a smooth domain). By considering generalized harmonics, we give a similar result for the problem \[ \Delta u+g(u)= \mu f(x) \; \; \mbox{in $D$} , \; \; u=0 \; \; \mbox{on $\partial D$} \,, \] with $f(x)>0$. Such problems occur, for example, in ``fishing'' applications that we discuss, and propose a new model. \smallskip Our approach also produces a very simple proof of the anti-maximum principle of Ph. Cl\'{e}ment and L.A. Peletier \cite{CP}. \end{abstract} \begin{flushleft} Key words: Global solution curves, exact number of solutions, the anti-maximum principle. \end{flushleft} \begin{flushleft} AMS subject classification: 35J61, 35J25, 92D25. \end{flushleft} \section{Introduction} \setcounter{equation}{0} \setcounter{thm}{0} \setcounter{lma}{0} Consider the problem \begin{equation} \label{i1} \Delta u+g(u)= f(x) \; \; \mbox{in $D$} \,, \; \; u=0 \; \; \mbox{on $\partial D$} \,, \end{equation} where $D$ is a smooth domain in $R^n$, and the functions $g(u)$ and $f(x)$ are given. Decompose $f(x)=\mu \varphi _1(x)+e(x)$, where $\varphi _1(x)>0$ is the principal eigenfunction of the Laplacian on $D$ with zero boundary condition, and $\int _D e(x) \varphi _1(x)\, dx=0$. The classical result of A. Ambrosetti and G. Prodi \cite{AP}, in the form of M.S. Berger and E. Podolak \cite{BP}, says that if $g(u)$ is convex and asymptotically linear at $\pm \infty$, then (under an additional restriction on the slopes of $g(u)$ at $\pm \infty$) there exists a critical $\mu _0=\mu _0(e(x))$, such that the problem (\ref{i1}) has exactly two solutions for $\mu>\mu _0$, exactly one solution if $\mu=\mu _0$, and no solutions for $\mu<\mu _0$. However, sometimes it is desirable to have the parameter $\mu$ in front of the entire right hand side, and to consider the problem \begin{equation} \label{i2} \Delta u+g(u)= \mu f(x) \; \; \mbox{in $D$} \,, \; \; u=0 \; \; \mbox{on $\partial D$} \,. \end{equation} Such problems occur e.g., when one considers ``fishing'' applications, see S. Oruganti et al \cite{SS}, D.G. Costa et al \cite{C}, P. Gir\~{a}o and H. Tehrani \cite{G}, P.M. Gir\~{a}o and M. P\'{e}rez-Llanos \cite{G1}. We present an exact multiplicity result of Berger-Podolak type for the problem (\ref{i2}), provided that $f(x)>0$ on $D$. Throughout the paper, one can easily replace the Laplacian by any uniformly elliptic operator. \smallskip Similar result holds for the problem \[ \Delta u+g(u)= \mu f(x)+e(x) \; \; \mbox{in $D$} \,, \; \; u=0 \; \; \mbox{on $\partial D$} \,, \] with $f(x)>0$ on $D$, and $\int _D e(x) f(x)\, dx=0$, providing a generalization of the above mentioned result of M.S. Berger and E. Podolak \cite{BP}. Our approach involves applying the implicit function theorem for continuation of solutions in a special way.
We restrict the space of solutions by keeping the generalized first harmonic fixed, but in return allow $\mu$ to vary. Then we compute the direction of the turn of the solution curve, similarly to P. Korman \cite{K}. We show that there is at most one turn in case $g(u)$ is either convex or concave. The well-known anti-maximum principle of Ph. Cl\'{e}ment and L.A. Peletier \cite{CP} follows easily with this approach. We apply our results to a population model with fishing. We suggest a modification of the logistic model, to admit sign-changing solutions. We argue that one needs to consider sign-changing solutions to get complete bifurcation diagrams. \; \;ection{The global solution curves} \; \;etcounter{equation}{0} \; \;etcounter{thm}{0} \; \;etcounter{lma}{0} We assume that $D$ is a smooth domain in $R^n$, and denote by $\lambda _k$ the eigenvalues of the Laplacian on $D$, with zero boundary conditions, and by $\varphi _k(x)$ the corresponding eigenfunctions, normalized so that $\int _D \varphi ^2 _k(x) \, dx=1$. It is known that $\varphi _1(x)>0$ is simple, and $0<\lambda _1 <\lambda _2 \leq \lambda _3 \leq \cdots$. We denote by $H^k(D)$ the Sobolev spaces $W^{k,2}(D)$. We shall need the following generalization of Poincare's inequality. \begin{lma}\lambdabel{lma:1} Let $u(x) \in H^1_0(D) $ be such that $\int _D u(x) f(x) \, dx =0 $, for some $f(x) \in L^2(D)$, $f(x) \not\equiv 0$. Denote $f_1= \int _D f(x) \varphi _1(x) \, dx$, and \[ \displaystyle \nu =\lambda _1+(\lambda_2-\lambda_1) \frac{f_1^2}{||f||_{L^2}^2} \,. \] Then \begin{equation} \lambdabel{0} \int _D |\nabla u|^2 \, dx \geq \nu \int _D u^2 \, dx \,. \end{equation} \end{lma} \noindentndent {\bf Proof:} $\; \;$ By scaling of $u(x)$, we may assume that $\int _D u^2(x) \, dx=1$. Writing $\displaystyle u(x) =\; \;um _{k=1}^{\infty} u_k \varphi _k(x)$, $\displaystyle f(x) =\; \;um _{k=1}^{\infty} f_k \varphi _k(x)$, we then have \begin{equation} \lambdabel{1} \; \;um _{k=1}^{\infty} u^2_k=1 \,, \end{equation} \begin{equation} \lambdabel{2} \; \;um _{k=1}^{\infty} u_k f_k=0 \,. \end{equation} We need to show that $\int _D |\nabla u|^2 \, dx \geq \nu $. Using (\ref{1}), we estimate \begin{eqnarray} \lambdabel{3} & \int _D |\nabla u|^2 \, dx=-\int _D u \Delta u \, dx=\; \;um _{k=1}^{\infty} \lambda _k u^2_k \\ \nonumber & =\lambda _1+\; \;um _{k=2}^{\infty} \left( \lambda _k-\lambda _1 \right) u^2_k \geq \lambda _1+\left( \lambda _2 -\lambda _1 \right) \; \;um _{k=2}^{\infty} u^2_k \,. \nonumber \end{eqnarray} From (\ref{2}) \begin{equation} \lambdabel{4} \left( \; \;um _{k=2}^{\infty} u^2_k \right)^{1/2} \left( \; \;um _{k=2}^{\infty} f^2_k \right)^{1/2} \geq \; \;um _{k=2}^{\infty} u_k f_k=|-u_1f_1|=|u_1||f_1| \,. \end{equation} Set $x=\left( \; \;um _{k=2}^{\infty} u^2_k \right)^{1/2}$, $f=\left( \; \;um _{k=2}^{\infty} f^2_k \right)^{1/2}$. In view of (\ref{1}), we get from (\ref{4}) \[ xf \geq |f_1|\; \;qrt{1-x^2} \,, \] or \[ x^2 \geq \frac{f_1^2}{f^2+f_1^2}=\frac{f_1^2}{||f||_{L^2}^2} \,, \] and the proof follows from (\ref{3}). $\diamondsuit$ The inequality (\ref{0}) is sharp in the following sense: when $f=\varphi _1$, we have $\nu=\lambda _2$, and one has an equal sign in (\ref{0}) at $u=\varphi _2$. Clearly, $\nu \leq \lambda _2$, and $\nu > \lambda _1$ if $f(x)>0$. 
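As a hedged numerical illustration of Lemma \ref{lma:1} (ours, not part of the argument), one can check the bound in one space dimension, where the spectrum is explicit. Take $D=(0,\pi)$, so that $\lambda _k=k^2$ and $\varphi _k(x)=\sqrt{2/\pi}\,\sin (kx)$, and take $f(x)\equiv 1>0$. The sketch below works in a truncated Fourier basis: it computes $\nu$ and then the smallest Rayleigh quotient $\int _D |\nabla u|^2\,dx \,/\int _D u^2\,dx$ over truncated $u$ with $\int _D u f\,dx=0$, which by the lemma must be at least $\nu$.
\begin{verbatim}
# Numerical check of the generalized Poincare inequality in 1D (illustration).
# D = (0, pi): lambda_k = k^2, phi_k = sqrt(2/pi) sin(k x); weight f(x) = 1.
import numpy as np

K = 400                                   # truncation level (assumed)
k = np.arange(1, K + 1)
lam = k.astype(float) ** 2                # Dirichlet eigenvalues on (0, pi)

f_coeff = np.sqrt(2.0 / np.pi) * (1.0 - np.cos(k * np.pi)) / k   # f_k for f = 1
f_norm2 = np.pi                           # ||f||_{L^2}^2 on (0, pi)
nu = lam[0] + (lam[1] - lam[0]) * f_coeff[0] ** 2 / f_norm2

# Smallest Rayleigh quotient sum(lam_k u_k^2) / sum(u_k^2) over coefficient
# vectors orthogonal to (f_k): project diag(lam) onto that hyperplane.
e = f_coeff / np.linalg.norm(f_coeff)
P = np.eye(K) - np.outer(e, e)
A = P @ np.diag(lam) @ P
eigvals = np.linalg.eigvalsh(A)           # ascending; eigvals[0] ~ 0 along span{e}
constrained_min = eigvals[1]

print(nu, constrained_min)                # the lemma asserts constrained_min >= nu
\end{verbatim}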
\begin{lma}\lambdabel{lma:2} Let $(w(x), \mu) \in H^2(D) \times R$ solve the problem \begin{eqnarray}\lambdabel{5} & \Delta w+a(x)w=\mu f(x) \; \; \mboxox{in $D$} \,, \; \; w=0 \; \; \mboxox{on $\varphiartial D$} \\ \nonumber & \int _D w(x) f(x) \, dx =0 \,, \nonumber \end{eqnarray} with some $f(x) \in L^2(D)$, $f(x) \not\equiv 0$. Assume that $a(x) \in C(D)$ satisfies $a(x) <\nu$ for all $x \in D$. Then $w(x) \equiv 0$, and $\mu=0$. \end{lma} \noindentndent {\bf Proof:} $\; \;$ Multiply the equation in (\ref{5}) by $w$, and integrate. By Lemma \ref{lma:1}, we have \[ \nu \int _D w^2 \, dx \leq \int _D |\nabla w|^2 \, dx= \int _D a(x)w^2 \, dx <\nu \int _D w^2 \, dx \,. \] Hence, $w(x) \equiv 0$, and from (\ref{5}), $\mu=0$. $\diamondsuit$ \begin{lma}\lambdabel{lma:2.1} Consider the problem (to find $z(x)$ and $\mu^*$) \begin{eqnarray}\lambdabel{5.1} & \Delta z+a(x)z=\mu^* f(x)+e(x) \; \; \mboxox{in $D$} \,, \; \; w=0 \; \; \mboxox{on $\varphiartial D$} \\ \nonumber & \int _D z(x) f(x) \, dx =\xi \,, \nonumber \end{eqnarray} where $f(x) \in L^2(D)$ satisfies $f(x) > 0$ a.e., and $a(x) \in C(D)$ satisfies $a(x) <\nu$ for all $x \in D$. Then for any $e(x) \in L^2(D)$, and any $\xi \in R$, the problem has a solution $(z(x),\mu ^*) \in \left(H^2(D) \cap H^1_0(D)\right) \times R$. \end{lma} \noindentndent {\bf Proof:} $\; \;$ {\em Case 1}. Assume that the operator \[ L[z] \equiv \Delta z+a(x)z \, : H^2(D) \cap H^1_0(D) \rightarrow L^2(D) \] is invertible. We claim that \begin{equation} \lambdabel{5.2} \int _D L^{-1}(f(x)) f(x) \, dx \ \ne 0 \,, \end{equation} where $L^{-1}$ denotes the the inverse operator of $L[z]$. Indeed, assuming otherwise, $w(x) \equiv L^{-1}(f(x))$ is not identically zero, and it satisfies (\ref{5}), with $\mu=1$, which contradicts Lemma \ref{lma:2}. Then the solution of (\ref{5.1}) is \[ z(x)=\mu^* L^{-1}(f(x))+ L^{-1}(e(x)) \,, \] and $\mu ^*$ is chosen so that $\int _D z(x) f(x) \, dx =\xi$, which we can accomplish, in view of (\ref{5.2}). \noindentndent {\em Case 2}. Assume that the operator $L[z]$ is not invertible. Since $a(x) <\nu \leq \lambda _2$, the kernel of $L[z]$ is one-dimensional, spanned by some $\varphi (x)>0$. Since $L[z]$ is a Fredholm operator of index zero, the first equation in (\ref{5.1}) is solvable if and only if its right hand side is orthogonal to $\varphi(x)$. We now obtain the solution $(z(x),\mu ^*)$ of (\ref{5.1}) as follows. Choose $\mu ^*$, so that $ \int _D \left(\mu^* f(x)+e(x) \right) \varphi (x)\, dx =0$. Then the first equation in (\ref{5.1}) has infinitely many solutions of the form \[ z(x)=z_0(x)+c \varphi(x) \,, \] with some $z_0(x)$. We choose the constant $c$, so that $\int _D z(x) f(x) \, dx =\xi$. $\diamondsuit$ We consider next the nonlinear problem \begin{equation} \lambdabel{6} \Delta u+g(u)=\mu f(x) \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} We shall assume that $g(u) \in C^1(R)$, and \begin{equation} \lambdabel{7} g(u)= \left\{ \begin{array}{ll} \gamma_1 u+b_1(u) & \mboxox{if $u < 0$} \\ \gamma_2 u+b_2(u) & \mboxox{if $u \geq 0$}, \end{array} \right. \end{equation} with real constants $\gamma _1$, $\gamma _2$, and $b_1(u)$, $b_2(u)$ bounded for all $u \in R$. Notice that we admit the case of $\gamma _2=\gamma _1$, and in particular we allow bounded $g(u)$, in case $\gamma _2=\gamma _1=0$. We shall consider strong solutions of (\ref{6}), $u(x) \in H^2(D) \cap H^1_0(D)$. 
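To fix ideas, here is a minimal numerical sketch (ours, with arbitrarily chosen data) of one instance of the problem (\ref{6}) in one space dimension. We take $D=(0,\pi)$, $f(x)\equiv 1$, and $g(u)=\gamma _1 u+(\gamma _2-\gamma _1)\ln (1+e^u)$, which is smooth, convex, satisfies the condition (\ref{7}), and has $\gamma _1<g'(u)<\gamma _2$. The sketch discretizes by finite differences and applies a plain Newton iteration for a fixed $\mu$; it is only an illustration of the object under study, not part of the analysis, and convergence from a particular initial guess is not guaranteed.
\begin{verbatim}
# Finite-difference Newton sketch for Delta u + g(u) = mu * f, u = 0 on the
# boundary, on D = (0, pi) with f = 1 (all parameter values are assumed).
import numpy as np

g1, g2, mu = 0.2, 3.0, 10.0              # example values with g1 < lambda_1 = 1 < g2
N = 200                                  # interior grid points
h = np.pi / (N + 1)
x = np.linspace(h, np.pi - h, N)

def g(u):   # convex, asymptotically linear nonlinearity (example choice)
    return g1 * u + (g2 - g1) * np.logaddexp(0.0, u)
def gp(u):  # its derivative, a logistic interpolation between g1 and g2
    return g1 + (g2 - g1) * 0.5 * (1.0 + np.tanh(0.5 * u))

# Second-difference matrix with homogeneous Dirichlet boundary conditions.
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2

u = 5.0 * np.sin(x)                      # initial guess (one of several possible)
for _ in range(50):                      # plain (undamped) Newton iteration
    F = D2 @ u + g(u) - mu               # residual of Delta u + g(u) - mu f
    J = D2 + np.diag(gp(u))              # Jacobian of the discrete system
    u = u - np.linalg.solve(J, F)

# If the residual below is small, u approximates one solution; a different
# initial guess may converge to another solution when several exist.
print(np.linalg.norm(D2 @ u + g(u) - mu))
\end{verbatim}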
Any function $u(x) \in L^2(D)$ can be decomposed as \begin{equation} \lambdabel{8} u(x)=\xi f(x)+U(x) \,, \; \; \mboxox{with $\int _D U(x) f(x) \, dx =0$} \,, \end{equation} for any $f(x) \in L^2(D)$. If $f(x)>0$ a.e., we call the constant $\xi$ the {\em generalized first harmonic} of $u(x)$. We shall need the following a priori estimate. \begin{lma}\lambdabel{lma:3} Assume that $f(x) \in H^2(D)$, $f(x) > 0$ a.e., and $g(u) \in C^1(R)$ satisfies the condition (\ref{7}), and $g'(u)\leq \nu_1$, for some constant $\nu_1<\nu$. Let $u(x) \in H^2(D) \cap H^1_0(D)$ be a solution of (\ref{6}), decomposed as in (\ref{8}). Then for some positive constants $c_1$ and $c_2$ \begin{equation} \lambdabel{9} |\mu |+||U||_{H^2(D)} \leq c_1|\xi|+c_2 \,. \end{equation} \end{lma} \noindentndent {\bf Proof:} $\; \;$ Using the ansatz (\ref{8}) in (\ref{6}), we have \begin{equation} \lambdabel{9.1} \; \;\; \;\; \; \Delta U +\xi \Delta f+g(\xi f(x)+U)=\mu f(x) \; \; \mboxox{in $D$} \,, \; \; U=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} Multiplying by $U$ and integrating, we write the result as \begin{eqnarray}\lambdabel{10} & \int _D |\nabla U|^2 \, dx-\int _D \left(\xi \Delta f \right) U \, dx \\\nonumber & -\int _D \left[g(\xi f(x)+U)-g(\xi f(x))\right]U \, dx-\int _D g(\xi f(x))U \, dx=0 \,. \nonumber \end{eqnarray} Using the mean value theorem, we estimate from below the third term on the left by -$\nu _1 \int _D U^2 \,dx$. If $\xi \geq 0$, then \[ \int _D g(\xi f(x))U \, dx=\int _D \left(\gamma_2 \xi f(x)+b_2(\xi f(x)) \right) U \, dx=\int _D b_2(\xi f(x))U \, dx \,. \] Using Lemma \ref{lma:1}, we have from (\ref{10}), for any small $\epsilon >0$, \[ (\nu -\nu _1)\int _D U^2 \, dx \leq \int _D \left(\xi \Delta f \right) U \, dx+\int _D b_2(\xi f(x))U \, dx \] \[ \leq \epsilon \int _D U^2 \, dx+c(\epsilon) \xi^2 \int _D \left(\Delta f \right)^2 \, dx+\epsilon \int _D U^2 \, dx+c(\epsilon) \,, \] which gives us an estimate of $\int _D U^2 \, dx$ \[ \int _D U^2 \, dx \leq c_1 \xi^2+c_2 \,, \; \; \mboxox{uniformly in $\mu$}\,, \] with some positive constants $c_1$, $c_2$. In case $\xi < 0$, the same estimate follows similarly. Returning to (\ref{10}), and using (\ref{7}), we have \begin{equation} \lambdabel{11} \int _D \left(|\nabla U|^2+U^2 \right) \, dx \leq c_1\xi^2+c_2 \,, \; \; \mboxox{uniformly in $\mu$}\,. \end{equation} (Here and later on, $c_1$, $c_2$ denote possibly new positive constants.) Then \begin{equation} \lambdabel{12} \int _D \left(|\nabla u|^2+u^2 \right) \, dx \leq c_1\xi^2+c_2 \,, \; \; \mboxox{uniformly in $\mu$}\,. \end{equation} To get an estimate of $\mu$, we now multiply (\ref{6}) by $u=\xi f+U$, and integrate \[ \xi \mu \int _D f^2 \, dx=- \int _D |\nabla u|^2 \, dx+\int _D g(u)u \,dx \,, \] which in view of (\ref{12}) implies that \begin{equation} \lambdabel{13} |\xi| | \mu| \leq c_1\xi^2+c_2 \,. \end{equation} (Observe that $|g(u)| \leq A|u|+B$, for some positive constants $A$, $B$, and for all $u$.) Fix some $\xi _0 >0$. Then for $|\xi| \geq \xi _0$, we conclude from (\ref{13}) \begin{equation} \lambdabel{14-} | \mu| \leq c_1|\xi|+c_2 \,. \end{equation} In case $|\xi| \leq \xi _0$, we multiply (\ref{6}) by $\varphi _1$, and integrate to show that $|\mu| \leq c_3$, for some $c_3>0$. We conclude that the bound (\ref{14-}) holds for all $\xi \in R$. We multiply (\ref{6}) by $\Delta u$, and integrate. Obtain \[ \int _D \left(\Delta u \right)^2 \, dx+\int _D \Delta u \, g(u) \, dx=-\mu \int _D \nabla f \cdot \nabla u \, dx \,. 
\] Using the estimates (\ref{12}) and (\ref{14-}), we get \[ \int _D \left(\Delta u \right)^2 \, dx \leq c_1\xi^2+c_2 \,. \] Since $\Delta u=\Delta U+ \xi \Delta f$, we conclude that \[ \int _D \left(\Delta U \right)^2 \, dx \leq c_1\xi^2+c_2 \,. \] By the elliptic estimates we obtain the desired bound on $||U||_{H^2(D)}$. $\diamondsuit$ \begin{cor}\lambdabel{cor1} In case $f(x)=\varphi _1(x)$, the second term on the left in (\ref{10}) vanishes, and we conclude that \[ ||U||_{H^1(D)} \leq c \,, \; \;\; \; \mboxox{uniformly in $\xi$ and $\mu$} \,, \] for some constant $c>0$. \end{cor} \begin{thm}\lambdabel{thm:1} Assume that $f(x) \in H^2(D)$, $f(x) > 0$ a.e., and $g(u) \in C^1(R)$ satisfies the condition (\ref{7}), and we have $g'(u)\leq \nu _1<\nu$ for all $u \in R$. Then for each $\xi \in (-\infty,\infty)$, there exists a unique $\mu$, for which the problem (\ref{6}) has a unique solution $u(x) \in H^2(D) \cap H^1_0(D)$, with the generalized first harmonic equal to $\xi$. The function $\mu =\varphihi (\xi)$ is smooth. \end{thm} \noindentndent {\bf Proof:} $\; \;$ We embed (\ref{6}) into a family of problems \begin{equation} \lambdabel{14} \; \;\; \;\; \; \Delta u+\lambda _1 u+k \left(g(u)-\lambda _1 u \right)-\mu f(x)=0 \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,, \end{equation} with $0 \leq k \leq 1$ ($k=1$ corresponds to (\ref{6})). When $(k=0,\mu=0)$ the problem has solutions $u=a \varphi _1$, where $a$ is any constant. By choosing $a=a_0$, we can get the solution $u=a_0 \varphi _1$ of any generalized first harmonic $\xi ^0$. We now continue in $k$ the solutions of \begin{equation} \lambdabel{14.1} \; \;\; \;\; \;\; \;\; \; F(u,\mu,k) \equiv \Delta u+\lambda _1 u+k \left(g(u)-\lambda _1 u \right)-\mu f(x)=0 \; \mboxox{in $D$} \,, \; \; u=0 \; \mboxox{on $\varphiartial D$} \end{equation} \[ \int _D uf \, dx=\xi ^0 \,, \ \] with the operator $F(u,\mu,k) \, : H^2(D) \times R \times R \rightarrow L^2(D)$. We will show that the implicit function theorem applies, allowing us to continue $(u,\mu)$ as a function of $k$. Compute the Frechet derivative \begin{eqnarray} \nonumber & F_{(u,\mu)}(u,\mu, k)(w, \mu^*)=\Delta w+\lambda _1 w+k\left(g'(u)-\lambda _1 \right)w-\mu^*f(x) \,, \\ \nonumber & \int _D wf \, dx=0 \,. \nonumber \end{eqnarray} By Lemma \ref{lma:2}, the map $F_{(u,\mu)}(u,\mu, k)(w, \mu^*)$ is injective, and by Lemma \ref{lma:2.1} this map is surjective. Hence, the implicit function theorem applies, and we have a solution curve $(u,\mu)(k)$. By the a priori estimate of Lemma \ref{lma:3}, this curve continues for all $0 \leq k \leq 1$, and at $k=1$, we obtain a solution of the problem (\ref{6}) with the generalized first harmonic equal to $\xi ^0$. Turning to the uniqueness, let $(\bar \mu,\bar u(x))$ be another solution of (\ref{6}), and $\bar u(x)$ has the generalized first harmonic equal to $\xi ^0$. Then $(\bar \mu,\bar u(x))$ is solution of (\ref{14.1}) at $k=1$. We continue this solution backward in $k$, until $k=0$, using the implicit function theorem. By the Fredholm alternative, we have $\mu =0$, when $k=0$. Then $u=a_1 \varphi _1$, with $a_1 \ne a_0$ (since the solution curves do not intersect), and $\bar u(x)$ has the generalized first harmonic equal to $\xi ^0$, a contradiction. Finally, we show that solutions of (\ref{6}) can be continued in $\xi$, by using the implicit function theorem. 
Decomposing $u(x)=\xi f(x)+U(x)$, with $\int _D Uf \, dx=0$, we see that $U(x)$ satisfies \begin{eqnarray} \nonumber & F(U,\mu,\xi) \equiv \Delta U+ g\left(\xi f(x)+U(x) \right)=\mu f(x)-\xi \Delta f \; \; \mboxox{in $D$} \,, \; \; U=0 \; \; \mboxox{on $\varphiartial D$} \\ \nonumber & \int _D Uf \, dx=0 \,. \nonumber \end{eqnarray} Compute the Frechet derivative \begin{eqnarray} \nonumber & F_{(U,\mu)}(U,\mu, \xi)(w, \mu^*)=\Delta w+g'\left(\xi f(x)+U(x) \right)w-\mu^*f(x) \,, \\ \nonumber & \int _D wf \, dx=0 \,. \nonumber \end{eqnarray} As before, we see that the implicit function theorem applies, and we have a smooth solution curve $(u,\mu)(\xi)$ for the problem (\ref{6}). By Lemma \ref{lma:3}, this curve continues for all $\xi \in R$. $\diamondsuit$ \noindentndent {\bf Remark} The theorem implies that the value of $\xi$ is a {\em global parameter}, uniquely identifying the solution pair $(\mu, u(x))$. The well-known anti-maximum principle is easily proved by a similar argument. As in J. Shi \cite{S}, we state it along with the classical maximum principle. We present a self-contained proof, since the a priori estimate of Lemma \ref{lma:3} is not needed for this local result. \begin{thm} Consider the following problem, with $f(x) \in L^2(D)$, and $f(x)>0$ a.e. in $D$, \begin{equation} \lambdabel{16} \Delta u+\lambda u= f(x) \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} Then there exists a constant $\delta _f$, which depends on $f$, such that if $\lambda_1<\lambda<\lambda_1 +\delta _f$, then \begin{equation} \lambdabel{17} u(x)>0 \,, \; \; x \in D \,, \; \; \frac{\varphiartial u}{\varphiartial n} <0 \,, \; \; x \in \varphiartial D \,; \end{equation} and if $\lambda <\lambda _1$, then \[ u(x)<0 \,, \; \; x \in D \,, \; \; \frac{\varphiartial u}{\varphiartial n} >0 \,, \; \; x \in \varphiartial D \,. \] \end{thm} \noindentndent {\bf Proof:} $\; \;$ We prove the first part. Consider the problem \begin{equation} \lambdabel{18} \Delta u+\lambda _1 u+k u=\mu f(x) \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} When $k=0$, and $\mu=0$, this problem has a solution $u= \varphi _1$. Decompose $\varphi _1=\xi ^0 f(x)+e(x)$, with $\int _D f(x) e(x) \, dx=0$. We now continue in $k$, $k \geq 0$ the solution $(u,\mu) \in \left(H^2(D) \cap H^1_0(D)\right) \times R$ of \begin{eqnarray} \nonumber & \Delta u+\lambda _1 u+k u=\mu f(x) \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \\ \nonumber & \int _D uf \, dx=\xi ^0 \,, \nonumber \end{eqnarray} beginning with $(\varphi _1,0)$ at $k=0$. By Lemmas \ref{lma:2} and \ref{lma:2.1}, the implicit function theorem applies, and we have a solution curve $(u,\mu)(k)$, at least for small $k$. If $k>0$ is small, then $u(x)$ is close to $\varphi _1(x)$, and we have $u(x)>0$ in $D$, and $\frac{\varphiartial u}{\varphiartial n} <0$ on $\varphiartial D$ (a.e.). Multiplying (\ref{18}) by $\varphi _1(x)$, and integrating over $D$, we conclude that $\mu=\mu(k)>0$. Then $\frac{u(x)}{\mu}$ is the solution of (\ref{16}), satisfying (\ref{17}) (a.e.). $\diamondsuit$ We now study the {\em global solution curve} $\mu =\varphihi (\xi)$ for the problem (\ref{6}), with $u(x) \in H^2(D) \cap H^1_0(D)$, in case $g(u)$ is either convex or concave. \begin{thm}\lambdabel{thm:2} Assume that $f(x) \in H^2(D)$, $f(x) > 0$ a.e., and $g(u) \in C^2(R)$ satisfies the condition (\ref{7}), and we have $g'(u)\leq \nu _1<\nu$ for all $u \in R$. 
Assume that either $g''(u)>0$, or $g''(u)<0$ holds for all $u \in R$. Then the solution curve of the problem (\ref{6}) $\mu =\varphihi (\xi)$ is either monotone, or it has exactly one critical point, which is the point of global minimum in case $g''(u)>0$ for all $u \in R$, and the point of global maximum in case $g''(u)<0$ for all $u \in R$. \end{thm} \noindentndent {\bf Proof:} $\; \;$ By the Theorem \ref{thm:1}, the problem (\ref{6}) has a solution curve $(u,\mu)(\xi)$, where $\xi$ is the generalized first harmonic of $u(\xi)$. Differentiate the equation (\ref{6}) in $\xi$ \begin{equation} \lambdabel{19} \Delta u_{\xi}+g'(u)u_{\xi}=\mu '(\xi) f(x) \; \; \mboxox{in $D$} \,, \; \; u_{\xi}=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} We claim that $u_{\xi}(x) \not \equiv 0$ for all $\xi \in R$. Indeed, since $u(x)=\xi f(x)+U(x)$, we have $u_{\xi}(x)=f(x)+U_{\xi}(x)$. If $u_{\xi} (x) \equiv 0$, then $U_{\xi}(x)=-f(x)$, but $\int _D U_{\xi}(x)f(x) \,dx=0$, a contradiction. Assume that $\mu '(\xi_0)=0$ at some $\xi_0$. Denoting $w(x)=u_{\xi} $ at $\xi=\xi _0$, we see that $w(x)$ is a non-trivial solution of \begin{equation} \lambdabel{20} \Delta w+g'(u)w=0 \; \; \mboxox{in $D$} \,, \; \; w=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} Since $g'(u)<\lambda_2$, it follows that $w(x)>0$ in $D$. In the spirit of \cite{KLO} and \cite{OS}, we differentiate the equation (\ref{19}) once more in $\xi$, and set $\xi=\xi _0$: \begin{equation} \lambdabel{21} \; \;\; \; \Delta u_{\xi \xi}+g'(u)u_{\xi\xi}+g''(u)w^2=\mu ''(\xi_0) f(x) \; \; \mboxox{in $D$} \,, \; \; u_{\xi \xi}=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} Combining the equations (\ref{20}) and (\ref{21}), we have \[ \mu ''(\xi_0) \int _D w f(x) \, dx=\int _D g''(u)w^3 \, dx \,. \] It follows that $\mu ''(\xi_0)>0$ ($\mu ''(\xi_0)<0$) in case $g''(u)>0$ for all $u \in R$ ($g''(u)<0$ for all $u \in R$), so that any critical point of $\mu (\xi)$ is a local minimum (maximum), and hence at most one critical point is possible. $\diamondsuit$ It is now easy to classify all of the possibilities. \begin{thm}\lambdabel{thm:3} Assume that $f(x) \in H^2(D)$, $f(x) > 0$ a.e., and $g(u) \in C^2(R)$ satisfies the condition (\ref{7}), and we have $g'(u)\leq \nu _1<\nu$ for all $u \in R$. Assume also that $g''(u)>0$ for all $u \in R$. \newline (i) If $\gamma _1 \,, \gamma _2<\lambda _1$, then the problem (\ref{6}) has a unique solution for any $\mu \in R$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is defined, and monotone decreasing for all $\xi \in R$.\newline (ii) If $\lambda _1 < \gamma _1 \,, \gamma _2<\nu$, then the problem (\ref{6}) has a unique solution for any $\mu \in R$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is defined, and monotone increasing for all $\xi \in R$.\newline (iii) If $ \gamma _1<\lambda _1 < \gamma _2<\nu$, then there is a critical $\mu _0$, so that the problem (\ref{6}) has exactly two solutions for $\mu >\mu _0$, it has a unique solution at $\mu =\mu _0$, and no solutions for $\mu <\mu _0$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is defined for all $\xi \in R$, it is parabola-like, and $\mu _0$ is its global minimum value. \end{thm} \noindentndent {\bf Proof:} $\; \;$ The convexity of $g(u)$ implies that $\gamma _1 <\gamma _2$. By the Theorem \ref{thm:2}, the problem (\ref{6}) has a solution curve $\mu =\varphihi (\xi)$, defined for all $\xi \in R$, which is either monotone, or it has exactly one critical point, which is the point of global minimum. 
Decompose $u(x)=\bar \xi \varphi_1(x)+\bar U(x)$, where $\int _D \bar U(x) \varphi_1(x) \, dx=0$, and $\bar \xi$ is the first harmonic. We have \[ u(x)=\xi f(x)+ U(x)=\bar \xi \varphi_1(x)+\bar U(x) \,. \] Multiplying this by $f(x)$, and integrating \[ \xi \int _D f^2(x) \, dx=\bar \xi \int _D f(x) \varphi_1(x) \, dx+\int _D \bar U(x) f(x) \, dx \,. \] By the Corollary \ref{cor1} of Lemma \ref{lma:3}, $\int _D \bar U(x) f(x) \, dx$ is uniformly bounded. It follows that $\bar \xi \rightarrow \infty$ ($-\infty$) if an only if $\xi \rightarrow \infty$ ($-\infty$), providing us with a ``bridge" between $\xi$ and $\bar \xi$. Multiply the equation in (\ref{6}) by $\varphihi _1$, and integrate: \begin{equation} \lambdabel{22} \mu \int _D f(x) \varphihi _1(x) \, dx=-\lambda _1 \bar \xi+\int _D g(u) \varphihi _1(x) \, dx \,, \end{equation} with $\int _D f(x) \varphihi _1(x) \, dx>0$. If $\bar \xi >0$ and large, then $u(x)=\bar \xi \varphi_1(x)+\bar U(x)>0$ a.e. in $D$, and we have $u(x)<0$ a.e. in $D$ if $\bar \xi <0$ and $|\bar \xi|$ is large. By the condition (\ref{7}), \[ \mu \; \;im \left( \gamma _2-\lambda _1 \right)\bar \xi \,, \; \; \mboxox{when $\bar \xi >0$ and large} \,, \] \[ \mu \; \;im \left( \gamma _1-\lambda _1 \right)\bar \xi \,, \; \; \mboxox{when $\bar \xi<0$ and $|\bar \xi|$ is large} \,. \] These formulas give us the behavior of $\mu =\varphihi (\xi)$, as $\xi \rightarrow \varphim \infty$, and the theorem follows. $\diamondsuit$ The following result is proved similarly (the concavity of $g(u)$ implies that $\gamma _2 <\gamma _1$). \begin{thm}\lambdabel{thm:4} Assume that $f(x) \in H^2(D)$, $f(x) > 0$ a.e., and $g(u) \in C^2(R)$ satisfies the condition (\ref{7}), and we have $g'(u)\leq \nu _1<\nu$ for all $u \in R$. Assume also that $g''(u)<0$ for all $u \in R$. \newline (i) If $\gamma _1 \,, \gamma _2<\lambda _1$, then the problem (\ref{6}) has a unique solution for any $\mu \in R$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is monotone decreasing for all $\xi \in R$.\newline (ii) If $\lambda _1 < \gamma _1 \,, \gamma _2<\nu$, then the problem (\ref{6}) has a unique solution for any $\mu \in R$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is monotone increasing for all $\xi \in R$.\newline (iii) If $ \gamma _2<\lambda _1 < \gamma _1<\nu$, then there is a critical $\mu _0$, so that the problem (\ref{6}) has exactly two solutions for $\mu <\mu _0$, it has a unique solution at $\mu =\mu _0$, and no solutions for $\mu >\mu _0$. Moreover, the solution curve $\mu =\varphihi (\xi)$ is parabola-like, and $\mu _0$ is its global maximum value. \end{thm} It appears that there is less interest in concave nonlinearities, compared with the convex ones. This may be due to the fact that if one considers {\em positive } solutions of \[ \Delta u+g(u)=0\; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,, \] and $g(0) \geq 0$, then the case of concave $g(u)$ is easy, and the convex case is interesting. However, if $g(0)<0$, the situation may be reversed even for positive solutions, see e.g., \cite{K1}. For sign-changing solutions, it seems that the convex and concave cases are of equal complexity. Examining the proofs, we see that the Theorems \ref{thm:3} and \ref{thm:4} hold verbatim for the problem \[ \Delta u +g(u)=\mu f(x) +e(x) \; \; \mboxox{in $D$} \,, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,, \] with $e(x) \in H^2(D)$ satisfying $\int _D e(x) f(x) \, dx=0$, giving a generalization of the classical results of A. Ambrosetti and G. 
Prodi \cite{AP}, and of M.S. Berger and E. Podolak \cite{BP}. \; \;ection{A population model with fishing} \; \;etcounter{equation}{0} \; \;etcounter{thm}{0} \; \;etcounter{lma}{0} One usually begins population modeling with a logistic model \begin{equation} \lambdabel{f1} u'(t)=au(t)-bu^2(t) \,. \end{equation} Here $u(t)$ can be thought of as the number of fish in a lake at time $t$; $a$ and $b$ are positive constants. When $u(t)$ is small, $u^2(t)$ is negligible, and the population grows exponentially, but after some time the growth rate decreases. Now suppose the lake occupies some region $D \; \;ubset R^n$, and $u=u(x,t)$, with $x \in D$. Suppose that fish diffuses around the lake, and the population is near zero at the banks. Assume also there is time-independent fishing, accounted by the term $\mu f(x)$, where $f(x)$ is a positive function, and $\mu $ is a parameter. Then the model is \[ u_t(x,t)=au(x,t)-bu^2(x,t)+\Delta u(x,t)-\mu f(x) \; \; \mboxox{in $D$}, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,. \] We shall consider its steady state $u=u(x)$, satisfying \begin{equation} \lambdabel{f2} \Delta u(x)+au(x)-bu^2(x)-\mu f(x) =0 \; \; \mboxox{in $D$}, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,. \end{equation} It is customary in the population modeling to look for positive solutions. However one does not expect the solutions of (\ref{f2}) to remain positive, when the parameter $\mu >0 $ is varied (since $f(x)>0$). Therefore, we shall admit sign-changing solutions, with the interpretation that some re-stocking of fish is necessary when $u(x)<0$ (which presumably occurs near the banks, i.e., $\varphiartial D$), to avoid the algae growth or other negative consequences. However, there is no reason to use the logistic model (\ref{f1}) for sign-changing $u$. When $u<0$, it is still reasonable to assume that $u'(t) \approx a u(t)<0$, which corresponds to the assumption that the situation further deteriorates without re-stocking, but there seems to be no justification for the $-bu^2$ term. We consider the following model ($f(x)>0$) \begin{equation} \lambdabel{f3} \Delta u(x)+g(u(x))-\mu f(x) =0 \; \; \mboxox{in $D$}, \; \; u=0 \; \; \mboxox{on $\varphiartial D$} \,, \end{equation} where $g(u)$ is an extension of the logistic model to $u<0$, which we describe next. Namely, we assume that $g(u) \in C^2(R)$, and it satisfies \begin{equation} \lambdabel{f4} g(u)=au-bu^2 \; \; \mboxox{for $u \geq 0$} \,, \; \; \mboxox{with $\lambda _1<a<\nu$, and $b>0$} \,, \end{equation} \begin{equation} \lambdabel{f5} g'(u) <\nu \,, \; \; \mboxox{and} \; \; g''(u)<0 \; \; \mboxox{for $u \in R$} \,, \end{equation} where $ \lambda _1<\nu \leq \lambda _2$ was defined in Lemma \ref{lma:1}. Our conditions imply that $g(u) \; \;im cu+d$ as $u \rightarrow -\infty$, for some constants $0<c<\nu$, and $d>0$. When $\mu =0$ (no fishing), the problem (\ref{f3}) has the trivial solution $u(x) \equiv 0$, and a unique positive solution $u_0(x)$, see e.g., P. Korman and A. Leung \cite{KL}. When $\mu >0$ is varied these two solutions turn out to be connected by a smooth solution curve. To prove this result, we shall need the following consequence of Lemma 3.3 in \cite{A}. 
\begin{picture}(0,40)(-70,0) \; \;calebox{0.85}{ \varphiut(60, 0){\vector(1,0){130}} \varphiut(60,0){\vector(0,1){130}} \multiput(106,0)(0,7){8}{\line(0,1){5}} \thicklines \varphiut(44,101){\makebox(0,0)[l]{{\bf $u_0$}}} \varphiut(196,-1){\makebox(0,0)[l]{${\bf {\Large \mu}}$}} \varphiut(65,135){\makebox(0,0)[l]{${\bf ||u||}$}} \qbezier(60,0)(150,60)(60,102) \varphiut(104,-8){\makebox(0,0)[l]{${\bf { \bar \mu}}$}} \varphiut(10,-46){\makebox(0,0)[l]{Figure 1. The solution curve for the fishing model}} } \end{picture} \begin{lma}\lambdabel{lma:A} Let $u(x,\mu)$ denote the classical solution of (\ref{f3}), depending on a parameter $\mu \in R$. Assume that $\mu _2>\mu _1$, and $u(x,\mu_1)>0$, $u(x,\mu_2)>0$ for all $x \in D$. Then $u(x,\mu_2)>u(x,\mu_1)$ for all $x \in D$. \end{lma} Let $\xi _0=\int_D u_0(x) f(x) \, dx>0$ denote the first generalized harmonic of $u_0(x)$. We have the following result, for possibly sign-changing solutions. \begin{thm}\lambdabel{thm:f} Assume that the conditions (\ref{f4}) and (\ref{f5}) hold, and $f(x) \in C^{\alphapha}(D)$, $f(x)>0$ in $D$, $\alphapha>0$. Then in the $(\xi, \mu)$ plane there is a smooth parabola-like solution curve $\mu=\varphi (\xi)$ of (\ref{f3}), connecting the points $(0,0)$ and $(\xi _0,0)$. It has a unique point of maximum at some $\bar \xi \in (0,\xi _0)$, with $\bar \mu=\varphi (\bar \xi)>0$. Equivalently, for $\mu \in [0,\bar \mu)$ the problem (\ref{f3}) has exactly two solutions, it has exactly one solution at $\mu =\bar \mu$, and no solutions for $\mu >\bar \mu$. Moreover, all solutions lie on a parabola-like solution curve in the $(\mu, ||u||)$ plane, with a turn to the left. \end{thm} \noindentndent {\bf Proof:} $\; \;$ By Theorem \ref{thm:1}, we continue the solution curve from the point $(\xi _0,0)$ in the $(\xi, \mu)$ plane for decreasing $\xi$. By Lemma \ref{lma:A}, it follows that $\mu >0$ for $\xi$ near $\xi _0$, and $\xi<\xi _0$ (if $\mu <0$, then $\xi >\xi _0$). By Theorem \ref{thm:4}, this curve $\mu=\mu(\xi)$ has a unique critical point on $ (0,\xi _0)$, which a point of global maximum, and this curve links up to the point $(0,0)$. This implies that the solution curve in the $(\mu, ||u||)$ plane is as in Figure 1, concluding the proof. $\diamondsuit$ S. Oruganti et al \cite{SS} considered positive solutions of (\ref{f2}). They proved a similar result (as in Figure 1) for $a$ sufficiently close to $\lambda _1$. In that case, $\xi _0$ is small, and the entire solution curve is close to the point $(0,0)$. Working with positive solutions only narrows the class of solutions considerably, and the result of \cite{SS} is probably the best one can get (for the picture as in Figure 1). We showed in \cite{K2} that the picture is different when $a>\lambda _2$ (in case of positive solutions). \begin{picture}(0,40)(-70,0) \; \;calebox{0.85}{ \varphiut(20, 0){\vector(1,0){170}} \varphiut(60,0){\vector(0,1){130}} \multiput(106,0)(0,7){7}{\line(0,1){5}} \thicklines \varphiut(66,103){\makebox(0,0)[l]{{\bf $u_0$}}} \varphiut(196,-1){\makebox(0,0)[l]{${\bf {\Large \mu}}$}} \varphiut(65,135){\makebox(0,0)[l]{${\bf ||u||}$}} \qbezier(60,0)(170,60)(10,125) \varphiut(104,-8){\makebox(0,0)[l]{${\bf { \bar \mu}}$}} \varphiut(-20,-46){\makebox(0,0)[l]{Figure 2. The fishing model with stocking of fish (when $\mu <0$)}} } \end{picture} We show next that the upper branch of the solution curve in Theorem \ref{thm:f} continues for $\xi \in (-\infty,0)$, with $\mu=\varphi (\xi)$ monotone decreasing, and $u(x)>0$ in $D$. 
Moreover, $\lim _{\xi \rightarrow -\infty} \varphi (\xi))=+\infty$, implying that the solution curve in the $(\mu, ||u||)$ plane is as in Figure 2. Indeed, let us return to the solution point $(\xi _0,0)$ in the $(\xi, \mu)$ plane. For $\xi >\xi _0$, we have $\mu <0$ by Lemma \ref{lma:A}. Then $u(x)>0$, by the minimum principle, so that $g'(u)=a-2u<\nu$, and Theorem \ref{thm:4} applies. One can interpret $\mu<0$ as {\em stocking of fish}. \end{document}
\begin{document} \title{A ``quantum public key'' based cryptographic scheme for continuous variables} \author{Patrick Navez, Alessandra Gatti and Luigi A. Lugiato} \affiliation{INFM, Dipartimento di Scienze CC FF MM, Universita degli Studi dell'Insubria, Via Valleggio 11, I-22100 COMO, Italy} \date{\today} \begin{abstract} By analogy to classical cryptography, we develop a ``quantum public key'' based cryptographic scheme in which the public and the private key each consist of one of two entangled beams of squeezed light. An analog message is encrypted by modulating the phase of the beam sent in public. Knowledge of the degree of nonclassical correlation between the beam quadratures measured in private and in public allows only the receiver to decrypt the message. Finally, with a view towards absolute security, we formally prove that any external intervention by an eavesdropper makes her vulnerable to subsequent detection. \end{abstract} \pacs{3.67.Dd, 3.67.Hk, 42.50, 42.65.-k} \maketitle Quantum correlations (entanglement) between light beams are a subject of considerable activity, leading to novel applications such as quantum cryptography, quantum computation and quantum teleportation. As an alternative to classical cryptography, quantum cryptography methods based on the correlation of a single photon pair \cite{Brassard} have been widely studied over the past years. They have led to protocols that permit transmitting confidential messages safely \cite{Bennett,Lo,Lutkenhaus}. On the other hand, few results have been presented for systems with a large number of photons, such as correlated quadratures \cite{Reid, Ralph, Hillery}. The use of intense photon beams could present numerous technological advantages in comparison to the use of single-photon pulses. Photon-counting detectors are much more efficient at detecting squeezed light \cite{Reid}, and its transmission through an optical fiber can be done over a much longer distance \cite{Hillery}. In particular, Pereira et al. \cite{Pereira} have proposed a scheme in which a coherent light beam containing the message is transmitted through an adequate superimposition with two entangled beams of squeezed light. Since each of the two entangled beams is a noisy channel, they need to be recombined in order to decrypt the message. In this approach, the noise is used to hide the message, which thus becomes unreadable during the transmission. Based on similar considerations, we aim at developing a ``quantum public key'' based cryptographic scheme. The public key method \cite{Brassard,Publickey} is widely used nowadays in classical cryptography. It consists of two keys. The first key is public and known to everybody, i.e., to the sender (Alice) and the receiver (Bob) of the message, and to a possible undesirable third party. In contrast, the second key is private and known only to the receiver of the message. The procedure is the following: Bob sends the public key to Alice, who uses it to encrypt the message; then Alice sends back the encrypted information to Bob, who is the only one able to decrypt it thanks to the private key. In this scheme, the essential idea is that the encryption is public in the sense that anyone, including a third party, can encrypt a message, but any decryption process necessarily requires knowledge of the second key. In this manner, messages are transmitted through a public channel with strict confidentiality. In this letter, we investigate theoretically a quantum public key scheme similar to the classical one.
We devise a scheme in which the ``q-private'' and ``q-public'' keys consist of the two entangled photon beams, respectively. We analyse a process to encrypt an analog message using the ``q-public'' light beam. The confidentiality of the message is guaranteed since the second, ``q-private'', light beam is needed for the decryption process. The prefix ``q-'' has been added as a reminder of the quantum character of the signal produced, which, in contrast to a classical signal, cannot necessarily be reproduced. The no-cloning theorem indeed prevents making an exact copy of an unknown quantum signal. Thus, the q-public beam is accessible to any first user, including a third party. The no-cloning theorem is also the basic idea from which one can prove that the transmission is secure against an eavesdropper (Eve) \cite{Bennett, Lo, Lutkenhaus}. The impossibility of reproduction prevents Eve from having access to any quantum state without modifying it. Such a modification makes her vulnerable to any subsequent detection by the sender and/or the receiver. As far as security is concerned, we formally prove for the first time, to our knowledge, the vulnerability to detection of an eavesdropper making any external intervention during a communication process using continuous variables. However, we point out that this result does not imply the absolute security of the transmission, since the vulnerability of Eve does not mean that she will necessarily be detected. An appropriate communication protocol is required to detect that eavesdropping has occurred. This essential issue is not completely discussed in this letter and still remains open in the case of continuous variables. Let us start with the description of the quantum public key based scheme. Suppose that Alice wishes to send a message to Bob. For this purpose, Bob produces an EPR state consisting of two entangled beams characterized by the photon annihilation operators $\hat a_1$ and $\hat a_2$ \cite{epr}. Written in the photon occupation number representation, this state has the form: \begin{eqnarray}\label{squeeze} \!\!\! |\Psi\rangle= \exp \!\left( r{\hat a_1^\dagger}{\hat a_2^\dagger}- r {\hat a_1}{\hat a_2} \right) \!\!|0\rangle_1 |0\rangle_2 =\!\sum_{n=0}^\infty c_n|n\rangle_1|n\rangle_2 \end{eqnarray} where $c_n= (\tanh r )^{n}/\cosh r$ and $r$ is the real squeezing parameter. Bob then sends beam 1 to Alice and keeps beam 2 in his laboratory. Since Bob has access to beam 2, he is the only one able to carry out a measurement on the total wave function (\ref{squeeze}). Alice or Eve, however, is restricted to measurements of observables concerning the subsystem of beam 1. All the information available to them is extracted only from the diagonal density matrix resulting from the partial trace of the total wave function over the unknown beam 2: \begin{eqnarray} {\hat \rho_{1}}= Tr_{2}(|\Psi\rangle \langle \Psi|)= \sum_{n=0}^\infty |c_n|^2|n\rangle_1 \,_1 \langle n| \end{eqnarray} With the ``q-public'' beam received from Bob, Alice encrypts the secret message by making a unitary transformation $\hat M$ which modifies the total wave function but not the density matrix. Consequently, $\hat M$ should satisfy the following criteria: \begin{eqnarray}\label{crit1} {\hat M}|\Psi \rangle \not= |\Psi \rangle \\ \label{crit2} {\hat M}{\hat \rho}_1{\hat M^\dagger} = \hat \rho_1 \end{eqnarray} Then Alice sends beam 1 back to Bob who, thanks to the second ``q-private'' beam, is the only one able to decrypt the message.
Another alternative is that Alice carries out some measurement herself on the beam 1 and sends the results to Bob through a public classical channel. The use of such a classical public channel avoids the blockability of the system by Eve and thus is secure against a split-universe attack \cite{Blow}. Since the density matrix has not been alterated in the encryption process, one does not observe the presence of the message in the signal received by Bob. Among the many existing possibilities, the phase transformation: \begin{eqnarray} {\hat M} =\exp\left(i f(\hat a_1^\dagger \hat a_1)\right) \end{eqnarray} satisfies both criteria (\ref{crit1}) and (\ref{crit2}). $f(x)$ could be any real function. For convenience, we choose a simple linear function $f(x)=\theta x$ which transforms the coefficients $c_n \rightarrow \exp(i\theta n)c_n$. The constant $\theta$ is the analog quantity containing the message that Alice encrypts. In comparison with \cite{Pereira}, in our encryption process, the message is really ``invisible'' in the signal and thus cannot be distinguished from the noise. After the phase transformation, the field operator associated to the first beam becomes $\hat a_{1'}=\hat M \hat a_1 \hat M^\dagger= \exp(-i\theta)\hat a_1$. An efficient way to decrypt the information is to measure observables for which the EPR state is an eigenstate. Because of the difficulty to get access to some of them in experiment, one uses rather the quadrature components operators: \begin{eqnarray} \hat Z_{1'} &=& \hat a_{1'}e^{-i\phi_A}+\hat a_{1'}^\dagger e^{i\phi_A} = e^{-i\theta_A}\hat a_1+e^{i\theta_A}\hat a_1^\dagger \\ \hat Z_{2}&=& e^{-i\theta_B}\hat a_2+e^{i\theta_B}\hat a_2^\dagger \; , \end{eqnarray} where $\theta_A= \phi_A+\theta$. $\phi_A$ and $\theta_B$ are the phases of the local oscillators used by Alice and Bob, respectively, in their homodyne measurements and determine which quadrature they select for their measurements. $\theta_B$ must remain private to Bob whereas $\phi_A$ must be communicated in public to Bob at some stage of the protocol. Although the EPR state is not an eigenstate of these operators, the uncertainty in the quadrature difference $\hat Z_-=\hat Z_{1'}-\hat Z_{2}$ is close to zero as the squeezing parameter $r$ becomes large and $\theta_A +\theta_B =0$. We notice indeed from the expression (\ref{squeeze}) that: \begin{eqnarray} \langle \Psi |\delta^2 \hat Z_-|\Psi \rangle &=& 2\left[ \cosh 2r-\cos(\theta_A+\theta_B) \sinh 2r\right] \\ &\stackrel{r>>1} {\to}& 4\sinh 2r \sin^2{(\theta_A +\theta_B) \over 2} \end{eqnarray} On the other hand, the introduction of non opposite phases $\theta_A+\theta_B \ne 0$ generates for large $r$ a quantum uncertainty which appears under the form of fluctuations during the measurement of the quadrature difference. In this manner, the message is obtained by determining the intensity of the noise resulting from these fluctuations \cite{epr}. Fig.\ref{fig} depicts a possible setup. The two EPR ``q-private'' and ``q-public'' beams are generated through a nondegenerate parametric down conversion process. They result from the fluorescence of a pump beam passing through a type II crystal acting also as an optical parametric amplifier (OPA) \cite{Pereira,Kimble,Yariv,epr}. The quadratures components are measured in a homodyne detection with the help of local oscillators fields (LO). The pump field, which generates the two EPR beams via parametric down conversion, is obtained by one intermediate second harmonic (SHG) step as usually. 
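As a quick sanity check of the variance formula above (a hedged numerical sketch of ours, not part of the scheme), one can build the state (\ref{squeeze}) in a truncated Fock basis and evaluate $\langle \delta^2 \hat Z_-\rangle$ directly; the parameter values below are arbitrary examples.
\begin{verbatim}
# Truncated-Fock-space check of <delta^2 Z_-> for the two-mode squeezed state.
import numpy as np

Ncut = 30                                     # Fock truncation per mode (assumed)
r = 1.0                                       # squeezing parameter (example value)
theta_A, theta_B = 0.30, -0.20                # example quadrature phases

a = np.diag(np.sqrt(np.arange(1, Ncut)), 1)   # truncated annihilation operator
I = np.eye(Ncut)
a1, a2 = np.kron(a, I), np.kron(I, a)

n = np.arange(Ncut)
c = np.tanh(r) ** n / np.cosh(r)              # coefficients c_n of the state
psi = np.zeros(Ncut * Ncut)
psi[n * Ncut + n] = c                         # |Psi> = sum_n c_n |n>|n>
psi /= np.linalg.norm(psi)                    # compensate the truncation

Z1 = np.exp(-1j * theta_A) * a1 + np.exp(1j * theta_A) * a1.conj().T
Z2 = np.exp(-1j * theta_B) * a2 + np.exp(1j * theta_B) * a2.conj().T
Zm = Z1 - Z2

var_numeric = (psi @ Zm @ Zm @ psi).real - (psi @ Zm @ psi).real ** 2
var_formula = 2.0 * (np.cosh(2 * r) - np.cos(theta_A + theta_B) * np.sinh(2 * r))
print(var_numeric, var_formula)               # agree up to truncation error
\end{verbatim}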
While Bob carries out the homodyne detection of one quadrature of beam 2 at phase $\theta_B$, Alice encrypts the message $\theta$ by modulating the phase of beam 1 (e.g. by means of an electro-optical modulator), before carrying out the measurement of the quadrature $\phi_A$. Finally, Alice sends the results of her measurement back to Bob through a public classical channel. Let us now discuss the vulnerability of Eve to subsequent detection by Alice and/or Bob. Let $\hat Z_{1'}$ and $\hat Z_{2}$ be the observables measured by Alice and Bob, respectively. Suppose that Eve tries to gain access to the ``q-public'' beam 1 and thus modifies it by means of a unitary transformation: \begin{eqnarray} \hat a_{1E} = \hat U^\dagger \hat a_1 \hat U \end{eqnarray} The unitary operator $\hat U$ could depend on other external degrees of freedom (observables) introduced by Eve. For example, Eve could use another photon beam, other modes of the same beam taken at a different time, or even a detector in view of a measurement on beam 1. We denote by $|\nu \rangle$ a basis of the Hilbert space characterizing these external degrees of freedom and assume that $|0\rangle$ is the initial state before the modification. The state after the unitary transformation is then $\hat U |\Psi\rangle |0 \rangle$. Let us define the probability distribution of finding the system with values $z_{1'}$ and $z_2$ of the quadrature component observables $\hat Z_{1'}$ and $\hat Z_2$: \begin{eqnarray} P(z_{1'},z_2)=\langle \Psi | \delta(z_{1'}-\hat Z_{1'}) \delta(z_2 - \hat Z_2)| \Psi \rangle \end{eqnarray} \begin{widetext} \begin{figure} \caption{Schematic set-up for the ``quantum public key'' based cryptography with continuous variables} \label{fig} \end{figure} \end{widetext} After Eve's action the probability distribution becomes: \begin{equation} \label{PE} P_E(z_{1'},z_2)=\langle 0 |\langle \Psi | \hat U^\dagger \delta(z_{1'}-\hat Z_{1'}) \delta(z_2 - \hat Z_2) \hat U| \Psi \rangle |0 \rangle \end{equation} To avoid revealing Eve's presence, both probability distributions must be equal for the particular values of $\theta_A$ and $\theta_B$ chosen by Alice and Bob. On the other hand, Eve is vulnerable to detection whenever there exist possible choices of $\theta_A,\theta_B$ (possible homodyne measurements made by Bob or Alice) for which the probability distribution of the measurement outcomes is changed by Eve's action. {\bf THEOREM:} If Alice measures only one quadrature $\hat Z_{1'}$, Eve's attack is not vulnerable to subsequent detection by Bob if and only if $\langle \nu |\hat U|0 \rangle$ is a function only of the quadrature operator $\hat Z_{1'}$. PROOF: When $\langle \nu |\hat U|0 \rangle$ commutes with $\hat Z_{1'}$, the $\hat U$'s cancel each other in (\ref{PE}) and trivially: \begin{eqnarray}\label{probeq} P_E(z_{1'},z_2)=P(z_{1'},z_2) \end{eqnarray} Conversely, let us examine the consequence of requiring (\ref{probeq}) for all possible choices of the quadrature $\hat Z_2$ measured by Bob, but only for the state $|\Psi\rangle$ given by (\ref{squeeze}). Taking the Fourier transform in $z_{1'}$ and $z_2$, equation (\ref{probeq}) becomes: \begin{equation}\label{probeq1} \langle 0 |\langle \Psi | \hat U^\dagger e^{i\hat Z_{1'}s_1} e^{i\hat Z_2 s_2} \hat U| \Psi \rangle |0 \rangle = \langle \Psi | e^{i\hat Z_{1'}s_1} e^{i\hat Z_2 s_2} | \Psi \rangle \end{equation} This equality should be satisfied for any value of the real parameters $s_1$, $s_2$, $\theta_B$.
The last two can be replaced by the single complex parameter $\xi= \xi_X + i \xi_Y= e^{i\theta_B} s_2$, in such a way that $\hat Z_2 s_2= \xi \hat a_2^\dagger + \xi^\star \hat a_2$. Let us introduce: \begin{eqnarray} T_{n,n'}(\xi,\xi^*)= \,_2\langle n'| e^{-i \left(\xi \hat a_2^\dagger + \xi^\star \hat a_2\right)} | n \rangle_2 \end{eqnarray} A calculation shows that for all occupation numbers $n$ and $n'$ of beam 2: \begin{eqnarray}\label{lt} \int{d^2\xi \over \pi} T_{n,n'}(\xi,\xi^*)e^{ i\left(\xi \hat a_2^\dagger + \xi^\star \hat a_2\right) } =|n\rangle_2 \,_2\langle n'| \end{eqnarray} Applying this transformation to (\ref{probeq1}) and using the entanglement property of (\ref{squeeze}), we eliminate the states describing the second beam, since $\hat U$ does not affect beam 2. We obtain, for all occupation numbers $n$ and $n'$ of beam 1: \begin{eqnarray}\label{probeq3} \langle 0 |\,_1\langle n' | \hat U^\dagger e^{i\hat Z_{1'}s_1} \hat U| n \rangle_1|0 \rangle = \,_1\langle n' | e^{i\hat Z_{1'}s_1} |n \rangle_1 \end{eqnarray} Applying the inverse transformation to the matrix element on the right-hand side, we get: \begin{eqnarray}\label{probeq4} \langle 0 |\,_1\langle n' | \hat U^\dagger e^{i\hat Z_{1'}s_1} \hat U e^{-i\hat Z_{1'}s_1}| n \rangle_1 |0 \rangle =\delta_{n',n} \end{eqnarray} Without loss of generality we can write the explicit dependence of $\hat U$ on the observables associated with the first beam as $\hat U = \hat U(\hat Z_{1'},\hat Q_{1'})$, where $\hat Q_{1'}= ( \hat a_1 e^{-i \theta_A} - \hat a_1^\dagger e^{i \theta_A}) /i$ is the observable canonically conjugate to $\hat Z_{1'}$. Noticing that the exponential operator is the generator of translations, i.e. $e^{i\hat Z_{1'}s_1} \hat Q_{1'} e^{-i\hat Z_{1'}s_1}= \hat Q_{1'} -2s_1$, Eq.~(\ref{probeq4}) becomes: \begin{equation}\label{probeq5} \langle 0 |\,_1\langle n' | \hat U^\dagger (\hat Z_{1'},\hat Q_{1'}) \hat U (\hat Z_{1'},\hat Q_{1'}-2s_1) | n \rangle_1 |0 \rangle =\delta_{n',n} \end{equation} Because two normalized states with unit scalar product are necessarily equal, we infer that for all $n$: \begin{eqnarray}\label{probeq6} \hat U (\hat Z_{1'},\hat Q_{1'}) | n \rangle_1 |0 \rangle= \hat U (\hat Z_{1'},\hat Q_{1'}-2s_1) | n \rangle_1 |0 \rangle \end{eqnarray} We conclude that, for any state $ |\nu \rangle$, $\langle \nu | \hat U (\hat Z_{1'},\hat Q_{1'})|0 \rangle$ does not depend on $\hat Q_{1'}$, or equivalently that it depends only on the quadrature operator $\hat Z_{1'}$ measured by Alice. {\bf COROLLARY:} If Alice chooses to measure among at least two distinct and non-opposite quadratures, Eve's attack is not vulnerable if and only if $\hat U$ does not interact with beam 1 and acts only on the state $|0 \rangle$, i.e.: \begin{eqnarray}\label{theo} \langle \nu| \,_1\langle n'|\hat U |n\rangle_1 |0 \rangle =\delta_{n',n} u_\nu \end{eqnarray} where $u_\nu$ does not depend on $n$. PROOF: From the theorem, we deduce that for each non-opposite quadrature we must have the dependence $\hat U (\hat Z_{1'})$. Since the quadratures are independent, the only possibility is that $\hat U$ is independent of any observable relative to beam 1, or is of the form (\ref{theo}). One direct consequence of the theorem and its corollary is that if there is any element of randomness in the choice of the quadrature $\hat Z_{1'}$ made by Alice, then Eve cannot safely extract any information about beam 1 and therefore about the message.
This element of randomness can be introduced as a random choice between two distinct and non-opposite values of $\phi_A$. Alternatively, the message itself can be a random sequence of phase shifts $\theta$, chosen among at least two distinct and non-opposite values. Thus, Eve is also vulnerable in the case of a digital transmission in which, for example, Alice restricts her measurements to two orthogonal quadratures. The facts that the probability distributions must be equal for any value of $\theta_B$, that the second beam has been kept private, and that $|\Psi\rangle$ has the entangled form (\ref{squeeze}) are what make the proofs possible. If the value of $\theta_B$ remains fixed, the linear transformation cannot be carried out, since the integration in (\ref{lt}) must be done over all $\xi$. If Eve has access to the second beam, then the unitary operator also depends on $\hat a_2$ and $\hat a_2^\dagger$, and the passage from (\ref{probeq1}) to (\ref{probeq3}) is not necessarily valid. Finally, if $|\Psi\rangle$ is different, for example a disentangled state, then beam 1 might be cloned by Eve, and she can therefore obtain information without being detected. The vulnerability of Eve does not mean that she will necessarily be detected by Bob. When Bob receives a message, how can he know {\it a priori} that eavesdropping has occurred? For example, Eve can block Bob's public beam and replace it with a public beam of her own. Alice proceeds as planned and sends the results of her measurement to both Bob and Eve. Eve decrypts the correct message but Bob gets a wrong message. To avoid this situation, Bob must be able to check that some expected correlations are recovered. One possibility is the introduction of some redundancy in the message, which can help him detect the presence of the third party. For example, Alice sends portions of the message twice. Any external intervention is detected if two identical portions are not recovered. Therefore, a communication strategy, or protocol, is needed in order to guarantee the absolute security of the transmission. The study of such protocols requires further investigation and is beyond the scope of this letter. In summary, we have developed a ``quantum public key'' scheme for encrypting a message by means of quantum correlated beams. This scheme is based on the principle that any decryption of the message requires measurements of both a ``q-private'' key signal and an encrypted ``q-public'' key signal. Directions for future research include, in addition to the study of secure protocols, the effect of imperfect photon counting in the detectors and of transmission losses, which degrade the correlations. {\bf ACKNOWLEDGEMENTS} The authors warmly thank S. M. Barnett for useful remarks and criticisms. This work was supported by the network QSTRUCT of the TMR programme of the EU. \end{document}
\begin{document} \begin{center} {\Large Integral representation for functionals\\[1mm] defined on $SBD^p$ in dimension two}\\[5mm] {\today}\\[5mm] Sergio Conti$^{1}$, Matteo Focardi$^{2}$, and Flaviana Iurlano$^{1}$\\[2mm] {\em $^{1}$ Institut f\"ur Angewandte Mathematik, Universit\"at Bonn\\ 53115 Bonn, Germany}\\[1mm] {\em $^{2}$ DiMaI, Universit\`a di Firenze\\ 50134 Firenze, Italy}\\[3mm] \begin{minipage}[c]{0.8\textwidth} We prove an integral representation result for functionals with growth conditions which give coercivity on the space $SBD^p(\Omega)$, for $\Omega\subset\R^2$. The space $SBD^p$ of functions whose distributional strain is the sum of an $L^p$ part and a bounded measure supported on a set of finite $\calH^{1}$-dimensional measure appears naturally in the study of fracture and damage models. Our result is based on the construction of a local approximation by $W^{1,p}$ functions. We also obtain a generalization of Korn's inequality in the $SBD^p$ setting. \end{minipage} \end{center} \def\mathcal{M}{\mathcal{M}} \section{Introduction} The direct methods of $\Gamma$-convergence are of paramount importance in studying variational limits and relaxation problems since their introduction in the seminal paper by Dal Maso and Modica \cite{DalMasoModica80}. They focus on the study of abstract limiting functionals $F(u,A)$, obtained for instance using $\overline\Gamma$-convergence arguments; one key ingredient is the proof of an integral representation for $F(u,A)$. Here $u:\Omega\to\R^N$ is an element of a suitable function space $\mathscr{X}(\Omega)$, and $A$ runs in the class $\mathcal{A}(\Omega)$ of open subsets of a given open set $\Omega\subset\R^n$. The notion of \emph{variational functional} is at the heart of the matter: $F$, regarded as depending on the couple $(u,A)\in \mathscr{X}(\Omega)\times\mathcal{A}(\Omega)$, has to satisfy suitable lower semicontinuity, locality and measure theoretic properties (for more details see properties (i)-(iii) in Theorem~\ref{theorepr}). The specific growth conditions of the functional determine the natural functional space in which the function $u$ lies. Under these assumptions $F(u,A)$ can be written as an integral over the domain of integration $A$ with respect to a suitable measure. The integrands may depend on $x$, $u(x)$ and $\nabla u(x)$, and possibly on other local quantities of $u$, such as higher order or distributional derivatives. Furthermore, as first shown in some cases in \cite{DalMasoModica86} and then generalized in \cite{bou-fon-masc}, the corresponding energy densities can be characterized in terms of cell formulas, i.e.~asymptotic Dirichlet problems on small cubes or balls involving $F$ itself, with boundary data depending on the local properties of $u$. Integral representation results have been obtained in several contexts with increasing generality: starting with the pioneering contribution by De Giorgi for limits of area-type integrals \cite{DeGiorgi75}, it has been extended to functionals defined first on Sobolev spaces in \cite{Sbordone1975,CarboneSbordone1979,ButtazzoDalmaso80,ButtazzoDalmaso1985b,ButtazzoDalmaso1985} and on the space of functions with Bounded Variation in \cite{DalMaso80,BouchitteDalMaso93}, and then to energies defined on partitions in \cite{AmbrosioBraides1990a} and on the subspace $SBV$ in \cite{BraidesChiadopiat1996} (we refer to \cite{ButtazzoDalmaso1985,dalmaso,bou-fon-masc,bou-fon-leo-masc} for a more exhaustive list of references). 
The global method for relaxation introduced and developed in \cite{bou-fon-masc,bou-fon-leo-masc} provides a general approach that unifies and extends the quoted results. We address the integral representation of functionals defined on the subspace $SBD^p(\Omega)$ of the space $BD(\Omega)$ in two dimensions. The space of functions of bounded deformation $BD(\Omega)$ is characterized by the fact that the symmetric part of the distributional gradient $Eu:=(Du+Du^T)/2$ of $u\in L^1(\Omega,\R^n)$ is a bounded Radon measure, namely \begin{equation*} BD(\Omega):=\{u\in L^1(\Omega;\R^n): Eu \in \mathcal{M}(\Omega;\R^{n\times n}_\mathrm{sym})\}, \end{equation*} where $\Omega\subseteq\R^n$ is an open set, see \cite{Suquet1978a,Temam1983,AmbrosioCosciaDalmaso1997}. $BD$ and its subspaces $SBD$ and $SBD^p$ constitute the natural setting for the study of plasticity, damage and fracture models in a geometrically linear framework \cite{Suquet1978a,Temam1983,TemamStrang1980,AnzellottiGiaquinta1980,KohnTemam1983}. In particular, $SBD^p$ is the set of $BD$ functions such that the strain $Eu$ can be written as the sum of a function in $L^p(\Omega,\R^{n\times n})$ and a part concentrated on a rectifiable set with finite $\calH^{n-1}$-measure, see \cite{BellettiniCosciaDalmaso1998,Chambolle2004a,Chambolle2004b,cha-gia-pon}. For functionals with linear growth defined on $SBD$ an integral representation result was obtained by Ebobisse and Toader \cite{EboToa03}. These functionals, however, lack coercivity on the relevant space. The situation of functionals defined on $SBD^p$ and with corresponding growth properties is open. We give here a solution in two dimensions. \begin{theorem}\label{theorepr} Let $\Omega\subset\R^2$ be a bounded Lipschitz set, $p\in (1,\infty)$, $F:SBD^p(\Omega)\times \calB(\Omega)\to[0,\infty)$ be such that \begin{enumerate} \item\label{theoreprborel} $F(u,\cdot)$ is a Borel measure for any $u\in SBD^p(\Omega)$; \item\label{theoreprlsc} $F(\cdot, A)$ is lower semicontinuous with respect to the strong $L^1(\Omega,\R^2)$-convergence for any open set $A\subset\Omega$; \item\label{theoreprlocal} $F(\cdot, A)$ is local for any open set $A\subset\Omega$; \item\label{theoreprgrowth} There are $\alpha,\beta>0$ such that for any $u\in SBD^p(\Omega)$, any $B\in \calB(\Omega)$, \begin{align} &\alpha (\int_B |e(u)|^pdx + \int_{J_u\cap B} (1+|[u]|) d{\calH}^{1}) \le F(u,B)\notag\\ \le & \beta (\int_B (|e(u)|^p+1)dx + \int_{J_u\cap B} (1+|[u]|) d{\calH}^{1}).\label{e:growthF} \end{align} \end{enumerate} Then there are two Borel functions $f:\Omega\times \R^2\times \R^{2\times 2}\to [0,\infty)$ and $g:\Omega\times\R^2\times \R^2\times S^1\to [0,\infty)$ such that \begin{equation}\label{e:fg} F(u,B)=\int_B f(x,u(x),\nabla u(x)) dx + \int_{B\cap J_u} g(x,u^-(x), u^+(x), \nu_u(x)) d{\calH}^{1}\,. \end{equation} \end{theorem} Above and throughout the paper we will refer to the book \cite{ambrosio} and to the papers \cite{AmbrosioCosciaDalmaso1997, BellettiniCosciaDalmaso1998} for the notation and results about $BV$ and $BD$ spaces, respectively. In particular, $\calB(\Omega)$ is the family of Borel subsets of $\Omega$. The proof of Theorem~\ref{theorepr}, which is given in Section \ref{s:intrep1}, follows the general strategy introduced in \cite{bou-fon-masc,bou-fon-leo-masc}. Their approach was based on a Poincar\'e-type inequality in $SBV$ by De Giorgi, Carriero and Leaci, which is not known in $SBD^p$ (see \cite{DeGiorgiCarrieroLeaci,ambrosio}). 
Our main new ingredient is the construction of an approximation by $W^{1,p}$ functions, discussed in Section \ref{sec:approximation}, which makes it possible to bypass the De Giorgi-Carriero-Leaci inequality. The approximation is done so that the function is modified only inside a countable family of balls with small total area and perimeter. In each ball, we give a construction of a $W^{1,p}$ extension for the $SBD^p$ function by constructing a finite-element approximation on a countable mesh, which is chosen depending on the function $u$, see Section \ref{sec:grid}. Our $W^{1,p}$ approximation result also leads naturally to the proof of the following variant of Korn's inequality for $SBD^p$ functions. \begin{theorem}\label{theo:korn} Let $\Omega\subset\R^2$ be a connected, bounded, Lipschitz set and let $p\in (1,\infty)$. Then there exists a constant $c$, depending on $p$ and $\Omega$, with the following property: for every $u\in SBD^p(\Omega)$ there exist a set $\omega\subset\Omega$ of finite perimeter, with ${\calH}^{1}(\partial \omega)\leq c{\calH}^{1}(J_u)$, and an affine function $a(x)=Ax+b$, with $A\in\R^{2{\times}2}$ skew-symmetric and $b\in\R^2$, such that \bes{ \|u-a\|_{L^p(\Omega\setminus \omega,\R^2)}\leq c \|e(u)\|_{L^p(\Omega,\R^{2{\times}2})},\\ \|\nabla u-A\|_{L^p(\Omega\setminus \omega,\R^{2{\times}2})}\leq c \|e(u)\|_{L^p(\Omega,\R^{2{\times}2})}. } \end{theorem} This improves a result of \cite{friedrich} to the sharp exponent. Variants of the first inequality were first obtained in \cite{ChambolleContiFrancfort,friedrich1}. The construction of Section \ref{sec:approximation} turns out to be crucial also in proving existence for the Griffith fracture model, generalizing \cite{DeGiorgiCarrieroLeaci} to the case of linearized elasticity in dimension two. This will be the object of the forthcoming paper \cite{ContiFocardiIurlano_exist}. \section{Approximation of \texorpdfstring{$SBD^p$}{SBDp} functions with small jump set} \label{sec:grid} \newcommand\n{\nu} \newcommand\Hu{\mathcal{H}^1} \newcommand\Lu{\mathcal{L}^1} \newcommand\de{\delta} \newcommand\Ri{R_i} \newcommand\G{\mathcal{G}} \newcommand\F{\mathcal{F}} \newcommand{{\mathchoice {\kern1ex\vcenter{\hrule height.4pt width 6pt depth0pt} \kern-9.7pt} {\kern1ex\vcenter{\hrule height.4pt width 4.3pt depth0pt} \kern-7pt} {} {} }}{{\mathchoice {\kern1ex\vcenter{\hrule height.4pt width 6pt depth0pt} \kern-9.7pt} {\kern1ex\vcenter{\hrule height.4pt width 4.3pt depth0pt} \kern-7pt} {} {} }} \def\Xint#1{\mathchoice {\XXint\displaystyle\textstyle{#1}} {\XXint\textstyle\scriptstyle{#1}} {\XXint\scriptstyle\scriptscriptstyle{#1}} {\XXint\scriptscriptstyle\scriptscriptstyle{#1}} \!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$} \vcenter{\hbox{$#2#3$}}\kern-.5\wd0}} \def\Xint-{\Xint-} In this section we prove the following approximation result. \begin{theorem}\label{t:tecnico} Let $n=2$, $p\in[1,\infty)$.
There exists $\eta>0$ and $\tilde{c}>0$ such that if $J\in\mathcal{B}(B_{2r})$, for some $r>0$, satisfies \be{\label{eq:Jsmall}\Hu(J)< 2r\eta,} then there exist $R\in (r,2r)$ for which the following holds: for every $u\in SBD^p(B_{2r})$ with $\Hu(J_u\cap B_{2r}\setminus J)=0$ there exists $\phi(u)\in SBD^p(B_{2r})\cap W^{1,p}(B_R,\R^2)$ such that \begin{enumerate} \item\label{tecnico1} $\Hu(J_u\cap \partial B_R)=0$; \item\label{tecnico2} $\displaystyle{\int_{B_R}|e(\phi(u))|^qdx\leq \tilde{c}\int_{B_R}|e(u)|^qdx}$, for every $q\in[1,p]$; \item\label{tecnico3} $\|u-\phi(u)\|_{L^1(B_R,\R^2)}\leq \tilde{c} R|Eu|(B_R)$; \item\label{tecnico4} $u=\phi(u)$ on $B_{2r}\setminus B_R$, $\Hu(J_{\phi(u)}\cap \partial B_R)=0$; \item\label{tecnico5} if $u\in L^\infty(B_{2r},\R^2)$, then $\|\phi(u)\|_{L^\infty(B_{2r},\R^2)}\leq \sqrt{2}\, \|u\|_{L^\infty(B_{2r},\R^2)}$. \end{enumerate} \end{theorem} \begin{proof} We follow an idea of \cite{ContiSchweizer1,ContiSchweizer2}. Arguing as in \cite[Lemma 4.3]{ContiSchweizer1} we first claim that there exists $R\in (r,2r)$ such that for $\de_k:= R\,2^{-k}$ we have \ba{\label{eq:inter}&\Hu({J}\cap\partial B_R)=0,\\ \label{eq:corona}&\Hu({J}\cap(B_R\setminus B_{R-\de_k}))<20\eta\de_k, \quad \textrm{for every $k\in\N$}.} To prove this, we first observe that (\ref{eq:inter}) holds for almost every $R$, therefore it suffices to show that (\ref{eq:corona}) holds on a set of positive measure. We consider the family of intervals $$\{[R-\de_{k},R]: \Hu({J}\cap(B_R\setminus B_{R-\de_k}))\ge 20\eta\de_{k}\}$$ and we define ${I}$ as the union of all intervals of the family, with $R\in (r,2r)$, $k\in\N$. By Vitali's covering theorem, there exists a countable set $(R_i, k_i)_{i\in\N}$ such that the corresponding intervals $[\Ri-\de_{k_i},\Ri] $ are pairwise disjoint and cover at least one fifth of ${I}$. Therefore by \eqref{eq:Jsmall} we obtain $$2r\eta>\Hu({J}\cap B_{2r})\geq \sum_{i\in \N}\Hu({J}\cap (B_{R_i}\setminus B_{R_i-\de_{k_i}}))\geq \sum_{i\in \N} 20\eta \de_{k_i}.$$ Since $\de_{k_i}=\Lu([\Ri-\de_{k_i},\Ri])$, we conclude that $\Lu({I})<r$. This proves the existence of $R$ such that (\ref{eq:inter}) and (\ref{eq:corona}) hold, which is fixed for the rest of the proof. We define $R_k:= R-\de_k$ and $\overline{x}_{k,j}:=R_k(\cos\frac{2\pi j}{2^k},\sin\frac{2\pi j}{2^k})$, $j=1,\dots,2^k$. We say that $\overline{x}_{k,j}$ and $\overline{x}_{k',j'}$ are neighbors if either $k=k'$ and $j= j'\pm 1$, working modulo $2^k$, or {(up to a permutation)} $k=k'+1$ and $j\in \{2j'-1,2j',2j'+1\}$, again modulo $2^k$. This gives a decomposition of $B_R$ into countably many triangles, whose angles are uniformly bounded away from 0 and $\pi$, see Figure \ref{fig:grid1}. \begin{figure} \caption{Sketch of the construction of the grid in the proof of Theorem \ref{t:tecnico} \label{fig:grid1} \end{figure} We shall construct $\phi(u)$ as a linear interpolation on a triangulation whose vertices are slight modifications of $\overline{x}_{k,j}$. Following the idea of \cite[Proposition 2.2]{ContiSchweizer2}, we next show how to construct the modified triangulation. We start off considering two neighboring points $\overline{x}$ and $\overline{y}$ in $\{\overline{x}_{k,j}\}_{k,j}$, connected by the segment $S_{\overline x,\overline y}\subset \overline B_{R_{k+1}}\setminus B_{R_{k-1}}$ for some $k$, and notice that $c_1\de_k\leq|\overline{x}-\overline{y}|\leq c_2\de_k$ for some $c_1\in(0,1)$, $c_2>1$ independent from $k$. 
Let $\alpha:=c_1/(8c_2)$ and consider the convex envelope \be{\label{eq:C} O_{\overline{x},\overline{y}}:=\conv(B(\overline{x},\alpha\de_k)\cup B(\overline{y},\alpha\de_k)).} Let $a_{\overline{x},\overline{y}}$ denote the infinitesimal rigid movement appearing in the Poincar\'{e}'s inequality for $u$ on the set \begin{equation*} Q_{\overline{x},\overline{y}}:=\{\xi\in B_R:\dist(\xi,S_{\overline{x},\overline{y}})<|\overline{x}-\overline{y}|/{(8c_2)}\}. \end{equation*} Given $\vartheta\in(0,1)$, let us prove that for $\eta$ sufficiently small and $\tilde{c}$ sufficiently large, depending only on $\vartheta$, there exists a subset $F\subset B(\overline{x},\alpha\de_k){\times} B(\overline{y},\alpha\de_k)$ with $\frac{(\calL^2\times\calL^2)(F)}{\calL^2(B_{\alpha\de_k})^2}< \vartheta$, such that for every $(x,y)\notin F$ the one-dimensional section $u_z^\nu$ has the following properties: \begin{enumerate} \renewcommand\theenumi{{(P\arabic{enumi})}} \renewcommand\labelenumi{\theenumi} \item\label{tecp1} $u^{\n}_{z}\in SBV( s_{x,y} )$; \item\label{tecp2} $\calH^0(J_{u^{\nu}_{z}})=0$, so that $u^{\nu}_z\in W^{1,1}( s_{x,y} )$; \item\label{tecp3} $\displaystyle{\int_{ s_{x,y} }|(u^{\nu}_{z})'|dt\leq \frac{\tilde{c}}{\de_k}\int_{O_{\overline{x},\overline{y}}}|e(u)|dx'}$; \item\label{tecp4} $\displaystyle{|u(\xi)-a_{\overline{x},\overline{y}}(\xi)|\leq \frac{\tilde{c}}{\de_k}|Eu|(Q_{\overline{x},\overline{y}})}$, \textrm{ for $\xi=x,y$.} \end{enumerate} Here $u^{\nu}_z(t):=u(z+t{\nu})\cdot {\nu}$ is the slice of $u$ along the line of direction \be{\label{eq:n}{\nu}:=\frac{x-y}{|x-y|},} and passing through \be{\label{eq:z}z:=(\Id-\nu\otimes\nu)x \in \R\nu^{\perp}\cap (x+\R \nu)} where $\R\nu^\perp$ is the linear space orthogonal to $\nu$, and $s_{x,y}\subset\R$ is the segment such that $z+s_{x,y}\nu=S_{x,y}$, see Figure \ref{fig:segment}. \begin{figure} \caption{{Slice along the line of direction $\nu=(x-y)/|x-y|$ passing through $z$ in the proof of Theorem~\ref{t:tecnico} \label{fig:segment} \end{figure} In order to obtain property \ref{tecp1} we first define the measure $\mu_{ \nu ,z}:=|u^ \nu _z|\calL^1$ and we observe that \begin{align}\label{eq:sez} \int_{B(\overline{x},\alpha\de_k){\times} B(\overline{y},\alpha\de_k)}&\mu_{ \nu ,z}( s_{x,y} )dx\,dy\\ &\leq c\de_k^2\int_{B(\overline{x},\alpha\de_k)}dx\int_{\{| \nu -\overline{ \nu }|\leq c(\alpha)\}} \mu_{ \nu ,z}((O_{\overline{x},\overline{y}})^{\nu}_z)d \calH^1(\nu) , \nonumber \end{align} by the change of variables $y=x+s \nu $, where $\overline{\nu}$ is defined as $\nu$ with $\overline{x},\overline{y}$ in place of $x,y$ { and $(O_{\overline{x},\overline{y}})^\nu_z\subset\R$ corresponds to $O_{\overline{x},\overline{y}}\cap (z+\R\nu)$.} By Fubini's theorem the last term in the previous inequality is less than or equal to \be{\label{eq:sez2} c\de_k^3\int_{\{| \nu -\overline{ \nu }|\leq c(\alpha)\}}d \calH^1(\nu) \int_{\nu^\perp}\mu_{ \nu ,z}( (O_{\overline{x},\overline{y}})^ \nu _z)d\Hu(z)\leq c\de_k^3\mu(O_{\overline{x},\overline{y}}), } where $\mu(O_{\overline{x},\overline{y}}):=\int_{O_{\overline{x},\overline{y}}}|u|dx'$. By \eqref{eq:sez} and \eqref{eq:sez2} the set $F_1\subset B(\overline{x},\alpha\de_k){\times} B(\overline{y},\alpha\de_k)$ of points $(x,y)$, for which the inequality $$\mu_{ \nu ,z}( s_{x,y} )>\frac{\tilde{c}}{\de_k}\mu(O_{\overline{x},\overline{y}})$$ holds, satisfies $\frac{(\calL^2\times\calL^2)(F_1)}{\calL^2(B_{\alpha\de_k})^2}< \vartheta/ {16}$, for $\tilde{c}$ large enough, depending on $\vartheta$ and $\alpha$. 
For $(x,y)\in B(\overline{x},\alpha\de_k){\times} B(\overline{y},\alpha\de_k)\setminus F_1$ we now repeat the argument in \eqref{eq:sez} and \eqref{eq:sez2} above, redefining $$\mu_{ \nu ,z}:=|D(u^ \nu _z)|.$$ We find that for $(x,y)$ outside a small (in the previous sense) set $F'_1$ one has $u^ \nu _z\in SBV( s_{x,y} )$ and $$|D(u^ \nu _z)|( s_{x,y} )\leq \frac{\tilde{c}}{\de_k}|Eu|(O_{\overline{x},\overline{y}}).$$ As for property \ref{tecp2}, we use \eqref{eq:sez} and \eqref{eq:sez2} with $\mu_{ \nu ,z}:=\calH^0\res (J_{u^ \nu _z}\cap s_{x,y} )$ and $\mu:=\Hu\res (J_u\cap O_{\overline{x},\overline{y}})$. Now \eqref{eq:Jsmall} implies $$\int_{B(\overline{x},\alpha\de_k){\times} B(\overline{y},\alpha\de_k)}\calH^0(J_{u^ \nu _z}\cap s_{x,y} )dx\,dy\leq c\de_k^4\eta,$$ and hence the set $F_2$ of points $(x,y)$ for which $\calH^0(J_{u^ \nu _z}\cap s_{x,y} )>1/2$ is also small in the previous sense, for $\eta$ small enough. Note that this is the only step which requires the hypothesis on the dimension $n=2$. Properties \ref{tecp3} and \ref{tecp4} can be derived analogously. From the argument above it is straightforward that for many points $x\in B(\overline{x},\alpha\de_k)$, still in the sense of a large $\vartheta$-fraction of $B(\overline{x},\alpha\de_k)$, there are many points $y\in B(\overline{y},\alpha\de_k)$ for which $(x,y)\notin F$. Let us now construct the modified grid with an iterative process (see also \cite[Proposition 3.4]{ContiSchweizer2}). We will use the notation $B_i$ to indicate the balls $B(\overline{x}_{k,j},\alpha\de_k)$, { lexicographically ordered}. We start by fixing a point $x_0\in B_0$ for which there are many good choices in each neighboring ball. We next select $x_1\in B_1$ among the points which are good choices for $x_0$ and which have many good choices in each neighboring subsequent ball $B_i$, $i\geq 2$. Iterating the process, the point $x_m\in B_m$ will be taken among the good choices for the neighboring previously fixed $x_i$, $i<m$, and with the property of having many good choices in the neighboring subsequent $B_i$, $i>m$. Since each ball can have at most seven neighbors, at each step we select $x_m$ avoiding just a small subset of $B_m$. We call $S$ the set of points obtained by this process and we construct a new triangulation, with $x,y$ neighbors if and only if $\bar x$, $\bar y$ are neighbors. Notice that again \be{\label{eq:nondeg}c_1\de_k\leq|x-y|\leq c_2\de_k,} for every pair of neighboring points $x,y$, with the same $k$ as for the corresponding reference points $\overline x$ and $\overline y$, and suitable $c_1,c_2>0$ independent of $k$. We finally define $\phi(u)$ as the linear interpolation of the values $u(x)$, $x\in S$, on each triangle of the triangulation. Given a triangle $T$ and any pair of its vertices $x,y$, we compute a component of the constant matrix $e(\phi(u))$ on $T$ by \be{\label{eq:eu}e(\phi(u))\nu\cdot \nu=\frac{(\phi(u)(x)-\phi(u)(y))\cdot \nu}{|x-y|}=\Xint-_{ s_{x,y} }(u^\nu_z)'dt,} where $\nu$ and $z$ are defined in \eqref{eq:n} and \eqref{eq:z}. We used the fact that $u$ and $\phi(u)$ agree on $x$ and $y$ and that the slice $u^\nu_z$ belongs to $W^{1,1}( s_{x,y} )$ by the choice of $x$ and $y$. By \eqref{eq:eu}, \eqref{eq:nondeg}, and property \ref{tecp3} above it follows $$|e(\phi(u))\nu\cdot \nu|\leq \frac{\tilde{c}}{\de_k^2}\int_{C}|e(u)|dx',$$ where $C=O_{\overline{x},\overline{y}}$ is defined in \eqref{eq:C}. We recall that here and henceforth $\tilde{c}$ can possibly change.
Letting $\nu$ vary among the directions of the sides of $T$, we obtain a control on the whole of $|e(\phi(u))|$ thanks to \eqref{eq:nondeg} \be{\label{eq:euconv}|e(\phi(u))|\leq \frac{\tilde{c}}{\de_k^2}\int_{C_T}|e(u)|dx',} where $C_T$ denotes the convex envelope \begin{equation*} C_T:=\conv(\cup B(\overline{x},\alpha\de_k)) \end{equation*} and the union is taken over the three vertices $\overline{x}$ in the old triangulation corresponding to the three vertices of $T$. We remark that $B(\overline x,\alpha\delta_k)\subset B_{R_{k+1}}\setminus B_{R_{k-1}}$ for all $\overline x\in \partial B_{R_k}$, therefore the sets $C_T$ have finite overlap. We are ready to prove property \ref{tecnico2}. By Jensen's inequality and \eqref{eq:euconv} we have for $1\leq q\leq p$ \begin{align*} \int_T |e(\phi(u))|^qdx'&= \calL^2(T)|e(\phi(u))|^q\leq \tilde{c}\calL^2(C_T) \Big(\Xint-_{C_T}|e(u)|dx'\Big)^q\nonumber\\ &\leq \tilde{c}\int_{C_T}|e(u)|^qdx', \end{align*} and finally, summing over all triangles $T$, we get the conclusion. In order to prove properties \ref{tecnico3} and \ref{tecnico4} we estimate \be{\label{eq:tr1}\int_{T} |u-\phi(u)|dx'\leq \int_{T} |u-a_{\overline{x},\overline{y}}|dx' + \int_{T}|a_{\overline{x},\overline{y}}-\phi(u)|dx',} where $T$ is again a triangle of the modified triangulation with vertices $x,y,z$, $\overline{x},\overline{y},\overline{z}$ denote the three corresponding vertices of the old triangulation, and $a_{\overline{x},\overline{y}}$ is the infinitesimal rigid motion appearing in Poincar\'e's inequality for $u$ on $Q_{\overline{x},\overline{y}}$ (see item \ref{tecp4} above). Let us study first the second term in \eqref{eq:tr1}. Since $a_{\overline{x},\overline{y}}-\phi(u)$ is affine, its modulus achieves its maximum at a vertex $\xi$ of $T$, therefore \begin{equation*} \int_{T}|a_{\overline{x},\overline{y}}-\phi(u)|dx'\leq c\de_k^2|a_{\overline{x},\overline{y}}(\xi)-\phi(u)(\xi)| =c\de_k^2|a_{\overline{x},\overline{y}}(\xi)-u(\xi)|. \end{equation*} Notice that if $\xi=\overline{z}$ then \ba{\label{eq:tr4}c\de_k^2|a_{\overline{x},\overline{y}}(\xi)-u(\xi)|&\leq& c\de_k^2|a_{\overline{x},\overline{y}}(\xi)-a_{\overline{x},\xi}(\xi)|+c\de_k^2|a_{\overline{x},\xi}(\xi)-u(\xi)| \nonumber\\ &\leq& c\int_{B(\overline{x},\alpha\delta_k)}|a_{\overline{x},\overline{y}}-a_{\overline{x},\xi}|dx'+c\de_k |Eu|(Q_{\overline{x},\xi}),} where we used the fact that $a_{\overline{x},\xi},a_{\overline{x},\overline{y}}$ are affine and item \ref{tecp4} above; if $\xi\in\{\overline x, \overline y\}$ then only the second term appears. By \eqref{eq:tr1}-\eqref{eq:tr4}, the triangle inequality, and Poincar\'e's inequality we conclude \be{\label{eq:tr5}\int_{T} |u-\phi(u)|dx'\leq c\delta_k |Eu|(Q_{T}),} where $Q_T:=Q_{\overline{x},\overline{y}}\cup Q_{\overline{y},\overline z}\cup Q_{\overline{z},\overline x}$. Finally, summing over $T$, we obtain property \ref{tecnico3}. We now prove property \ref{tecnico4}; property \ref{tecnico5} holds by construction. We define $\phi(u):=u$ outside $\overline B_R$ and know that $\phi(u)\in W^{1,p}(B_R,\R^2)\cap SBD(B_{2r})$. It remains to prove that the traces on $\partial B_R$ coincide, or, equivalently, that $\calH^1(J_{\phi(u)}\cap \partial B_R)=0$. Let $\psi_k\in C^\infty(B_R)$ be such that $\psi_k=0$ on $B_{R_k}$, $\psi_k=1$ in a neighborhood of $\partial B_R$, and $|\nabla\psi_k|\leq c/\de_k$.
We define $v_k:=(u-\phi(u))\psi_k\in SBD(B_R)$ and we prove that $v_k\to 0$ strongly in $BD$, which in turn implies $v_k|_{\partial B_R}\to 0$ in $L^1(\partial B_R,\R^2)$ in the sense of traces and therefore property \ref{tecnico4}. Clearly \begin{equation*} \int_{B_R}|v_k|dx\leq\int_{B_R\setminus B_{R_k}}|u-\phi(u)|dx\to0 \end{equation*} by the dominated convergence theorem. Finally, using \eqref{eq:tr5} and the fact that the triangles have finite overlap, \bes{|Ev_k|(B_R)&\leq&|E(u-\phi(u))|(B_R\setminus B_{R_k})+\frac{c}{\de_k}\int_{B_R\setminus B_{R_k}}|u-\phi(u)|dx\\ &\leq&\tilde{c}|E(u-\phi(u))|(B_R\setminus B_{R_k}), } where the last term tends to $0$ as $k\to\infty$; this concludes the proof of property \ref{tecnico4}. \end{proof} \section{Regularity of \texorpdfstring{$SBD^p$}{SBDp} functions with small jump set} \label{sec:approximation} We first discuss how $SBD^p$ functions can be approximated by $W^{1,p}$ functions locally away from the jump set (Section \ref{sec:approxbulk}), and then how they can be approximated by piecewise $W^{1,p}$ functions around the jump set (Section \ref{sec:approxinterface}). Our approximation result also leads to the Korn inequality stated in Theorem \ref{theo:korn}. The key ingredient for all these results is the construction of Theorem \ref{t:tecnico}. Throughout the section $\eta\in(0,1)$ will be the constant from Theorem~\ref{t:tecnico} and $n=2$. \subsection{Approximation of \texorpdfstring{$SBD^p$}{SBDp} functions with \texorpdfstring{$W^{1,p}$}{W1p} functions} \label{sec:approxbulk} We shall use that the construction of Theorem \ref{t:tecnico}, using a suitable covering argument, makes it possible to approximate $SBD^p$ functions by $W^{1,p}$ functions which coincide with the original function away from a small neighborhood of the jump set. The neighborhood is the union of countably many balls, such that each of them contains an amount of jump set proportional to the radius. Before discussing the covering argument in Proposition \ref{p:ricopr}, we show that (away from the boundary) almost any point of the jump set is the center of a ball with the appropriate density. \begin{lemma}\label{l:rx} Let $s\in (0,1)$. Let $J\in\mathcal{B}(B_{2\rho})$, for some $\rho>0$, be such that ${\calH}^{1}(J)<\eta(1-s)\rho$. Then for ${\calH}^{1}$-a.e. $x\in J\cap B_{2s\rho}$ there exists a radius $r_x\in(0,(1-s)\rho)$ such that \begin{equation}\label{e:rx0} {\calH}^{1}\big(J\cap \partial B_{r_x}(x)\big)=0, \end{equation} \begin{equation}\label{e:rx} \eta\, {r_x}\leq{\calH}^{1}\big(J\cap B_{r_x}(x)\big)\leq {\calH}^{1}\big(J\cap B_{2r_x}(x)\big)<{2\,\eta\, r_x}. \end{equation} \end{lemma} \begin{proof} { We fix $x\in J\cap B_{2s\rho}$, choose $\lambda_x\in(\rho ,2\rho)$ such that } ${\calH}^{1}\big(J\cap \partial B_{\sfrac{\lambda_x}{2^{k}}}(x)\big)=0$ for all $k\in\N$, and define \[ r_x:=\max\{\sfrac{\lambda_x}{2^{k}}:\,k\in\N,\, {\calH}^{1}\big(J\cap B_{\sfrac{\lambda_x}{2^{k}}}(x)\big)\geq {{\eta\,\lambda_x}{2^{-k}}}\}. \] {The set is nonempty for $\calH^1$-almost every $x$ because $\eta<1$. The estimates \eqref{e:rx} hold by definition.} To conclude that $r_x<(1-s)\rho$ it is enough to notice that the opposite inequality would give the following contradiction \[ {\calH}^{1}(J)\geq {\calH}^{1}\big(J\cap B_{r_x}(x)\big)\geq \eta\,r_x \geq{(1-s)}\eta\,\rho>{\calH}^{1}(J). \] \end{proof} We are now ready to prove the main result of the section via a covering argument, Lemma~\ref{l:rx}, and Theorem~\ref{t:tecnico}. \begin{proposition}\label{p:ricopr} Let $p\in(1,\infty)$, $n=2$.
There exists a universal constant $c>0$ such that if $u\in SBD^p(B_{2\rho})$, {$\rho>0$,} satisfies \[ {\calH}^{1}(J_u\cap B_{2\rho})<{\eta\,(1-s)\rho} \] for $\eta\in(0,1)$ as in Theorem~\ref{t:tecnico} and some $s\in(0,1)$, then there is a countable family $\calF=\{B\}$ of closed balls of radius $r_B<(1-s)\rho$, each contained in $B_{(1+s)\rho}$, and a field $w\in SBD^p(B_{2\rho})$ such that \begin{enumerate} \item $\rho^{-1}\sum_{\calF}{{\calL}^2}\big(B\big)+\sum_{\calF}{\calH}^{1}\big(\partial B\big) \leq{\sfrac {c}\eta}\,{\calH}^{1}(J_u\cap B_{2\rho})$; \item ${\calH}^{1}\big(J_u\cap\cup_{\calF}\partial B\big)={\calH}^{1}\big((J_u\cap {B_{2s\rho}})\setminus \cup_{\calF}B\big)=0$; \item $w= u$ {${\calL}^2$-a.e.} on $B_{2\rho}\setminus\cup_{\calF}B$; \item {$w\in W^{1,p}(B_{2s\rho},\R^2)$} and ${\calH}^{1}(J_w\setminus J_u)=0$; \item \begin{equation}\label{e:volume} \int_{\cup_{\calF}B}|e(w)|^pdx\leq c\int_{\cup_{\calF}B}|e(u)|^pdx; \end{equation} and there exists a skew-symmetric matrix $A$ such that \begin{equation}\label{e:korn} \int_{B_{2s\rho}\setminus\cup_{\calF}B}|\nabla u-A|^pdx\leq c(p)\int_{B_{2\rho}}|e(u)|^pdx; \end{equation} \item $\|u-w\|_{L^1(B,\R^2)}\leq c\,r_B\,|Eu|(B)$, for every $B\in \calF$; \item if, additionally, $u\in L^\infty(B_{2\rho},\R^2)$ then $w\in L^\infty(B_{2\rho},\R^2)$ with \[ \|w\|_{L^\infty(B_{2\rho},\R^2)}\leq {c}\|u\|_{L^\infty(B_{2\rho},\R^2)}. \] \end{enumerate} \end{proposition} \begin{proof} By Lemma~\ref{l:rx} we find a family $\calF^\prime$ of open balls covering ${\calH}^{1}$-a.e. $J_u\cap B_{{2s\rho}}$ that satisfies \eqref{e:rx0} and \eqref{e:rx}. { Setting $J=J_u$, to every $B\in\calF^\prime$ we associate a new ball $B^*\subset B$ with the properties \ref{tecnico1}-\ref{tecnico5} of Theorem~\ref{t:tecnico}. Let $\calF^*$ be the family of the new balls $B^*$, this is still a cover of $J$. Further, the balls $B^*$ can be taken to be closed. By the Besicovitch covering theorem \cite[Theorem 2.17]{ambrosio} there are $\xi$} countable subfamilies $\calF^\prime_j=\{B_j^i\}_{i\in\N}$ of disjoint balls. Therefore, setting $\calF:=\cup_{j=1}^\xi \calF^\prime_j$ we have ${\calH}^{1}\big((J_u\cap B_{2s\rho})\setminus \cup_{\calF}B\big)=0$. In addition, by \eqref{e:rx0} the first condition in item (ii) is satisfied as well, so that (ii) is established. Furthermore, \begin{align*} \sum_{B\in\calF}{\calH}^{1}(\partial B)= &2\pi\sum_{B\in\calF}{r_B} \stackrel{\eqref{e:rx}}{\leq} \frac{2\pi}{\eta}\sum_{B\in\calF}{\calH}^{1}\big(J_u\cap B\big)\\ \leq&\xi\,\frac{2\pi}{\eta}{\calH}^{1}\big(J_u\cap \cup_{B\in\calF}B\big)\leq \xi\,{\frac{2\pi}{\eta}}{\calH}^{1}(J_u\cap B_{2\rho}). \end{align*} The volume estimate follows since $r_B\le \rho$ implies $\sum r_B^2\le \rho\sum r_B$. We remark that a quadratic volume estimate also follows by $\sum r_B^2\le (\sum r_B)^2$. Let $\phi(u)$ be the function given by Theorem \ref{t:tecnico} on the balls of the first family $\calF^\prime_1$ and define for every $h\in\N$ a function \[ w_1^h:=\begin{cases} \phi(u) & B_1^i, \,i\leq h \cr u & \mbox{otherwise} \end{cases} \] such that $w_1^h\in SBD^p(B_{2\rho})$, $w_1^h\in W^{1,p}(\cup_{i\leq h}B_1^i;\R^2)$ with $w_1^h=u$ ${\calL}^2$-a.e. on $B_{2\rho}\setminus \cup_{i\leq h}B_1^i$ and ${\calH}^{1}(J_{w_1^h}\setminus J_u)=0$. 
In addition by item \ref{tecnico2} in Theorem~\ref{t:tecnico} \begin{multline}\label{e:stimavolwh} \int_{B_{2\rho}}|e(w_1^h)|^p\,dx=\int_{\cup_{i\leq h}B_1^i}|e(\phi(u))|^p\,dx+ \int_{B_{2\rho}\setminus \cup_{i\leq h}B_1^i}|e(u)|^p\,dx\\ \leq {\tilde{c}}\int_{\cup_{i\leq h}B_1^i}|e(u)|^p\,dx+ \int_{B_{2\rho}\setminus \cup_{i\leq h}B_1^i}|e(u)|^p\,dx\leq(1+{\tilde{c}})\int_{B_{2\rho}}|e(u)|^p\,dx, \end{multline} and \[ |Ew_1^h|(B_{2\rho})\leq |Eu|\big(B_{2\rho}\setminus \cup_{i\leq h}B_1^i\big) +{\tilde{c}}\int_{\cup_{i\leq h}B_1^i}|e(u)|\,dx. \] Moreover, recalling that the $B_1^i$'s are disjoint and that $w_1^{h-1}=u$ on $B_1^h$, item \ref{tecnico3} in Theorem~\ref{t:tecnico} gives \[ \|w_1^h-w_1^{h-1}\|_{L^1(B_{2\rho};\R^2)}= \|w_1^h-u\|_{L^1(B_1^h;\R^2)}\leq c\,\rho\,|Eu|(B_1^h), \] in turn implying that for all $h\geq k\geq 1$ \[ \|w_1^h-w_1^k\|_{L^1(B_{2\rho};\R^2)}\leq \sum_{i=k+1}^h\|w_1^i-w_1^{i-1}\|_{L^1(B_1^h;\R^2)} \leq c\,\rho\,|Eu|\big(\cup_{k+1\leq i\leq h}B_1^i\big). \] Thus, $w_1^h\to w_1$ in $L^1(B_{2\rho};\R^2)$ with \[ w_1:=\begin{cases} \phi(u) & \cup_{\calF_1^\prime}B \cr u & \mbox{otherwise}. \end{cases} \] The $BD$ compactness theorem then yields that $w_1\in BD(B_{2\rho})$. In turn, by \eqref{e:stimavolwh} and since ${\calH}^{1}( J_{w_1^h}\setminus J_u)=0$, the $SBD$ compactness theorem implies that actually $w_1\in SBD^p(B_{2\rho})$ {(see also \cite[Theorem 11.3]{gbd}).} Furthermore, since \[ {\calH}^{1}\big(J_{w_1^h}\cap\cup_{\calF_1^\prime}B\big)={\calH}^{1}(J_u\cap \cup_{i\geq h+1} B_1^i\big), \] we may conclude that \[ {\calH}^{1}\big(J_{w_1}\cap\cup_{\calF_1^\prime}B\big)\leq \liminf_h{\calH}^{1}\big(J_{w_1^h}\cap\cup_{\calF_1^\prime}B\big)=0, \] and therefore $w_1\in W^{1,p}(\cup_{\calF_1^\prime}B,\R^2)$. Finally, by construction $w_1=u$ ${\calL}^2$-a.e. on $B_{2\rho}\setminus \cup_{\calF_1^\prime}B$ and ${\calH}^{1}(J_{w_1}\setminus J_u)=0$. By iterating the latter construction, for all $1< k\leq\xi$ and for every $h\in\N$ we find \[ w_k^h:=\begin{cases} \phi(w_{k-1}) & B_k^i, \,i\leq h \cr w_{k-1} & \mbox{otherwise} \end{cases} \] such that $w_k^h\in SBD^p(B_{2\rho})$, $w_k^h\in W^{1,p}(\cup_{i\leq h}B_k^i;\R^2)$, $w_k^h=w_{k-1}$ ${\calL}^2$-a.e. on $B_{2\rho}\setminus \cup_{i\leq h}B_k^i$ , ${\calH}^{1}(J_{w_k^h}\setminus J_{w_{k-1}})=0$. In addition, arguing as above, $w_k^h\to w_k$ in $L^1(B_{2\rho},\R^2)$ with \[ w_k:=\begin{cases} \phi(w_{k-1}) & \cup_{\calF_k^\prime}B \cr w_{k-1} & \mbox{otherwise}, \end{cases} \] $w_k\in SBD^p(B_{2\rho})$, $w_k\in W^{1,p}(\cup_{j\leq k}\cup_{\calF_j^\prime}B;\R^2)$, $w_k=w_{k-1}$ ${\calL}^2$-a.e. on $B_{2\rho}\setminus \cup_{\calF_k^\prime}B$ and ${\calH}^{1}(J_{w_k}\setminus J_{w_{k-1}})=0$. Set $w:=w_{\xi}$, then $w\in SBD^p(B_{2\rho})$, $w\in W^{1,p}(\cup_{\calF}B;\R^2)$, $w=u$ ${\calL}^2$-a.e. on $B_{2\rho}\setminus \cup_{\calF}B$, ${\calH}^{1}(J_w\setminus J_u)=0$. Iterating estimate \eqref{e:stimavolwh}, inequality \eqref{e:volume} follows at once with $c:=\max\{1+\tilde{c},2\pi\}\,\xi$, with $\tilde{c}$ the constant in Theorem~\ref{t:tecnico}. {Korn's inequality \eqref{e:korn} follows now immediately by (iii), (iv), and \eqref{e:volume}.} Finally, it is clear that also items (vi) and (vii) are satisfied in view of properties \ref{tecnico3} and \ref{tecnico5} in Theorem~\ref{t:tecnico}. 
\end{proof} \subsection{Korn's inequality in \texorpdfstring{$SBD^p$}{SBDp}} \label{sec:korn} \begin{proof}[Proof of Theorem \ref{theo:korn}] By standard scaling and covering arguments it suffices to prove the assertion for a special Lipschitz domain. Precisely, let $\varphi:\R\to\R$ be Lipschitz with $\min \varphi[(-1,1)]=2$, and set $U:=\{x: x_1\in (-2,2), x_2\in (-2,\varphi(x_1))\}$, and $U^\mathrm{int}:=\{x: x_1\in (-1,1), x_2\in (-1,\varphi(x_1))\}$. It suffices to show that for any $u\in SBD^p(U)$ there are $\omega$ with $\calH^1(\partial\omega) +|\omega|^{1/2}\le c \calH^1(J_u)$ and an affine function $a:\R^2\to\R^2$ such that $\|u-a\|_{L^p(U^\mathrm{int}\setminus\omega,\R^2)} +\|\nabla u-\nabla a\|_{L^p(U^\mathrm{int}\setminus\omega,\R^2)} \le c_{L,p} \|e(u)\|_{L^p(U,\R^{2\times 2})}$, with $c$ depending on $p$ and the Lipschitz constant $L$ of $\varphi$. Obviously we can assume $\calH^1(J_u)$ to be small. Let $q_j:=x_j+(-r_j/2,r_j/2)^2$ and $Q_j:=x_j+(-r_j,r_j)^2$, and assume that $u\in SBD^p(Q_j)$ obeys $\calH^1(J_u\cap Q_j)\le \eta r_j/8$. By Proposition \ref{p:ricopr} with $\rho:=r_j/2$ and $s:=1/\sqrt2$ and Poincar\'e's inequality there are $\omega_j$ and $a_j$ affine with $r_j\calH^1(\partial\omega_j)+|\omega_j|^{1/2}\le c\calH^1(J_u\cap Q_j)$ and $r_j^{-1}\|u_j-a_j\|_{L^p(q_j\setminus\omega_j,\R^2)}+ \|\nabla u_j-\nabla a_j\|_{L^p(q_j\setminus\omega_j,\R^{2\times 2})}\le c_p \|e(u)\|_{L^p(Q_j,\R^{2\times 2})}$, with a constant which depends only on the exponent $p$. To pass to the estimate on $U^\mathrm{int}$ one uses a Whitney cover with pairs of open cubes $q_j$ and $Q_j$ such that the exterior ones have finite overlap and the interior ones form a cover, as done for example in proving the nonlinear Korn's inequality in \cite[Theorem 3.1]{FrieseckeJamesMueller2002}. Following \cite{friedrich}, if $\calH^1(J_u\cap Q_j)\ge \eta r_j/8$ we define $P_j:=x_j+(-r_j,r_j)\times(-r_j,\infty) \cap U$, otherwise $P_j=\emptyset$ and $\omega_j$, $a_j$ are obtained as above. Notice that $\calH^1(P_j)\le c_L r_j$ by the properties of Lipschitz functions and of the Whitney covering. Then it suffices to apply the weighted Poincar\'e inequality, as done in \cite[Theorem 3.1]{FrieseckeJamesMueller2002} and \cite[Theorem 4.2]{friedrich}. By the properties of the covering, for neighboring pairs of squares $|q_j\cap q_i|\ge c r_i^2$, and if $\eta$ is not too large this gives a bound on the difference of the two affine functions. One then defines $a^*\in C^\infty(U,\R^{2})$ using a partition of unity subordinate to the cover $\{q_j\}$, and obtains \begin{equation*} \int_{U^\mathrm{int}\setminus \cup_j P_j} (\varphi(x_1)-x_2)^p |D^2a^*|^p(x) dx\le c_{L,p} \|e(u)\|_{L^p(U,\R^2)}^p\,. \end{equation*} Since the cube $Q_0=(-2,2)^2$ was not removed one has $a^*=a_0$ in $q_0$ and application of the one-dimensional weighted Poincar\'e inequality to $a^*(x_1,\cdot)$ leads to the assertion, with $\omega:=\cup_j (P_j\cup \omega_j)$ and $a:=a_0$. Equivalently, in the last step one may use a Poincar\'e or Korn inequality on John domains, as done in \cite[Theorem 4.2]{friedrich}. \end{proof} We remark that the nonoptimality of the exponent in \cite[Theorem 4.2]{friedrich} is only a consequence of the nonoptimal local estimate employed there (see \cite[Theorem 3.1]{friedrich}). \subsection{{Reflection}} \label{sec:approxinterface} {In this subsection we establish a technical result instrumental in the identification of the surface energy density in Section~\ref{s:surface}.
To this aim, given $u\in SBD^p(\Omega)$ and a point $x_0\in J_u$ we set \begin{equation}\label{e:uxzero} u_{x_0}(x):=\begin{cases} u^+(x_0) & \text{ if } \langle x-x_0,\nu_{x_0}\rangle >0,\\ u^-(x_0) & \text{ if } \langle x-x_0,\nu_{x_0}\rangle <0. \end{cases} \end{equation} } \begin{lemma}\label{lem:refl} {Let $p\in(1,\infty)$}, $u\in SBD^p(\Omega)$, $\Omega\subset\R^2$ open. For $\calH^1$-a.e. $x_0\in J_u$ and any $\rho>0$ sufficiently small there is $v_\rho\in SBD^p(B_{2\rho}(x_0))\cap SBV^p{(B_{\rho}(x_0),\R^2)}$ such that: \begin{enumerate} \item $\displaystyle\lim_{\rho\to0} \frac1\rho \calH^1(J_{v_\rho}\setminus J_u \cap B_{\rho}(x_0)) =0$; \item $\displaystyle\lim_{\rho\to0} {\frac1\rho \int_{B_{\rho}(x_0)} |\nabla v_\rho|^p dx =0}$; \item $\displaystyle\lim_{\rho\to0} \frac1{\rho^2} \calL^2(\{x\in B_{\rho}(x_0):\,u\ne v_\rho\}) =0$; \item $\displaystyle\lim_{\rho\to0} \frac1{\rho^2} \int_{B_{\rho}(x_0)} |v_\rho-u| dx =0$; \item $\displaystyle\lim_{\rho\to0} \frac1{\rho^{p+1}} \int_{B_{\rho}(x_0)} |v_\rho-u_{x_0}|^p dx =0$; \item $\displaystyle\lim_{\rho\to0} \frac1{\rho} \int_{B_{\rho}(x_0)\cap{J_{v_\rho}}} |[v_\rho]-[u]| d\calH^1 =0$. \end{enumerate} \end{lemma} \begin{proof} Since $J_u$ is $(\calH^1,1)$ rectifiable, there exists a sequence $(\Gamma_i)_{i=1}^\infty$ of $C^1$ curves such that $\calH^1(J_u\setminus \cup_{i=1}^\infty\Gamma_i)=0$. For $\calH^1$-a.e. $x_0\in J_u$ we have \bes{ &\displaystyle{\lim_{\rho\to 0}\frac1{2\rho}{\int_{J_u\cap B_{\rho}(x_0)}(|[u]|+1)d\calH^1}=|[u](x_0)|+1},\\ &\displaystyle{\lim_{\rho\to 0}\frac1{2\rho}{\int_{J_u\cap\Gamma\cap B_{\rho}(x_0)}(|[u]|+1)d\calH^1}=|[u](x_0)|+1},} for one of the aforementioned curves $\Gamma$. Therefore \be{\label{eq:diffsimm}\displaystyle{\lim_{\rho\to 0}\frac1{2\rho}}{\int_{(J_u\triangle\Gamma)\cap B_{\rho}(x_0)}(|[u]|+1)d\calH^1}=0} and for $\rho$ small $\Gamma$ separates $B_{6\rho}(x_0)$ into two connected components. It is not restrictive to assume that {$\Gamma\cap B_{6\rho}(x_0)$} is the graph of a function $h\in C^1(\R)$. Moreover the following properties hold $\calH^1$-a.e. $x_0\in J_u$ \ba{ &\displaystyle\lim_{\rho\to0} \frac1\rho \int_{B_\rho(x_0)} |e(u)|^p dx =0,\label{e:choice1}\\ &\displaystyle\lim_{\rho\to0} \frac1{2\rho}|Eu|(B_\rho(x_0))=|[u]\odot\nu_u|(x_0),\label{e:choice2}\\ &\displaystyle\lim_{\rho\to0} \frac1{\rho^2} \int_{B_\rho(x_0)\cap\{\pm(x_2-h(x_1))>0\}}|u-u^\pm(x_0)|dx=0.\label{e:choice3} } For simplicity we next assume that the point $x_0=0$ satisfies all the previous properties \eqref{eq:diffsimm}-\eqref{e:choice3}, {with $h(0)=h^\prime(0)=0$}. We also set $\tau_\rho:=\|h\|_{L^\infty(B_{6\rho})}$ and note that $\tau_\rho/\rho\to 0$ as $\rho\to0$. We now define the reflections of $u$ with respect to the lines $\{x_2=\pm\tau_\rho\}$, in the sense of \cite[Lemma 1]{nitsche}. {More precisely, define $\tilde u^+_\rho$ on the set ${B_{2\rho}}\cap\{x_2<\tau_\rho\}$ by} \[ \begin{cases} (\tilde u^+_\rho)_1(x_1,x_2):= -2 u_1(x_1,3\tau_\rho-2x_2)+3 u_1(x_1,2\tau_\rho-x_2)\\ (\tilde u^+_\rho)_2(x_1,x_2):= 4 u_2(x_1,3\tau_\rho-2x_2)-3 u_2(x_1,2\tau_\rho-x_2) \end{cases} \] and by $u$ otherwise in ${B_{2\rho}}$. Note that $\tilde u^+_\rho\in SBD^p({B_{2\rho}})$ and that \ba{& {\displaystyle\lim_{\rho\to0}\frac1{2\rho}\calH^1(J_{\tilde u^+_\rho}\cap B_{2\rho})=0,}\label{eq:Jutilde}\\ &\|e(\tilde u^+_\rho)\|_{L^p(B_{2\rho},\R^{2{\times}2})}\leq c \|e(u)\|_{L^p(B_{6\rho},\R^{2{\times}2})},\label{eq:eutilde}} for a universal constant $c$. 
Using a similar reflection we define $\tilde u^-_\rho$ in $B_{2\rho}\cap\{(x_1,x_2):x_2>-\tau_\rho\}$ and we set $\tilde u^-_\rho:=u$ otherwise in $B_{2\rho}$. By \eqref{eq:diffsimm} and \eqref{eq:Jutilde} for $\rho$ small we have that $\tilde u^\pm_\rho$ satisfy the hypotheses of Proposition \ref{p:ricopr} on $B_{2\rho}$ with $s=1/2$. Thus, there exist $w^\pm_\rho\in SBD^p(B_{2\rho})\cap W^{1,p}(B_\rho,\R^2)$, for which properties (i)-(vii) hold true. Finally let us define $v_{\rho}\in SBD^p(B_{2\rho})$ by \[ v_{\rho}:=\begin{cases} w^+_\rho & \textrm{in $B_{2\rho}\cap\{x_2>h(x_1)\}$,}\\ w^-_\rho & \textrm{in $B_{2\rho}\cap\{x_2<h(x_1)\}$.} \end{cases} \] Since $w^\pm_\rho\in W^{1,p}(B_{\rho},\R^2)$ we obtain $v_\rho\in SBV^p(B_\rho,\R^2)$ with \begin{align*} Dv_\rho\res B_{\rho}=& \nabla w^+_\rho\,{\calL}^2\res B_{\rho}\cap\{x_2>h(x_1)\} + (w^+_\rho-w^-_\rho)\otimes\nu_\Gamma{\calH}^{1}\res\Gamma\cap B_{\rho}\\ &+ \nabla w^-_\rho\,{\calL}^2\res B_{\rho}\cap\{x_2<h(x_1)\}. \end{align*} We next check that $v_\rho$ satisfies the properties in the statement in the ball $B_\rho$. Property (i) comes straightforwardly from \eqref{eq:diffsimm} and from the fact that $J_{v_\rho}\subset \Gamma$. Moreover \eqref{eq:eutilde}, \eqref{e:volume}, and \eqref{e:choice1} yield \begin{equation}\label{e:evrho} \lim_{\rho\to 0} \frac{1}{\rho}\int_{B_{\rho}}|e(w_\rho^\pm)|^pdx=0. \end{equation} As for property (iii), we observe that \[ \lim_{\rho\to 0}\frac1{\rho^2} \calL^2(\{B_\rho:\,u\ne v_\rho\})\leq \lim_{\rho\to 0}(c\frac{\tau_\rho}{\rho}+ \frac c\rho\calH^1((J_u\setminus \Gamma)\cap B_{6\rho}))=0, \] where we have used Proposition~\ref{p:ricopr} (i) and \eqref{eq:diffsimm}. Let us now prove property (iv). By the definition of $v_{\rho}$ and $\tilde u^\pm_\rho$ and by triangular inequality we obtain \bm{\label{e:tr} \frac1{\rho^2} \int_{B_{\rho}}|v_\rho-u| dx \leq\\ \frac1{\rho^2} \int_{B_{\rho}\cap \{h(x_1)<x_2\}}|w^+_\rho-\tilde u^+_\rho| dx + \frac1{\rho^2} \int_{B_{\rho}\cap \{h(x_1)<x_2<\tau_\rho\}}|\tilde u^+_\rho-u| dx+\\ \frac1{\rho^2} \int_{B_{\rho}\cap\{x_2<h(x_1)\}}|w^-_\rho-\tilde u^-_\rho| dx + \frac1{\rho^2} \int_{B_{\rho}\cap\{-\tau_\rho<x_2<h(x_1)\}}|\tilde u^-_\rho-u| dx. } By the definition of $w^+_\rho$ and Proposition \ref{p:ricopr} (vi) we can estimate \begin{equation*} \frac1{\rho^2} \int_{B_{\rho}}|w^+_\rho-\tilde u^+_\rho| dx\leq {\frac {c}{\rho} |E\tilde u_\rho^+|(B_{2\rho}) \le \frac{c}{\rho} |Eu|(B_{6\rho}\setminus\Gamma). } \end{equation*} By \eqref{eq:diffsimm} and (\ref{e:choice1}) we conclude that the first term of \eqref{e:tr} tends to $0$. {Clearly, the same argument can be applied to the third term there. So, it remains to treat the second term in \eqref{e:tr}, being the fourth one similar.} By triangular inequality and a change of variable we infer \bm{ \frac1{\rho^2}\int_{{B_{\rho}}\cap \{h(x_1)<x_2<\tau_\rho\}}|\tilde u^+_\rho-u| dx\leq\nonumber\\ \frac1{\rho^2}\int_{B_{\rho}}|\tilde u^+_\rho-{u^+(x_0)}| dx+ \frac1{\rho^2}\int_{B_{\rho}\cap \{h(x_1)<x_2\}}|{u^+(x_0)}-u| dx\leq\\ \frac c{\rho^2}\int_{B_{6\rho}\cap \{h(x_1)<x_2\}}|{u^+(x_0)}-u| dx, } and the last term tends to $0$ by \eqref{e:choice3}, hence property (iv) follows. Let us prove now property (v). By Korn's inequality and Poincar\'e's inequality in $W^{1,p}$, there exists an affine function $a_\rho(x):=d_\rho+\beta_\rho x$ such that \begin{equation}\label{e:ppoi} \frac1{\rho^{p+1}} \int_{B_{\rho}} |w^+_\rho-a_\rho|^p dx \leq \frac c{\rho}\int_{B_{\rho}}|e(w^+_\rho)|^pdx. 
\end{equation} {We first claim that \begin{equation}\label{e:drho} \lim_{\rho\to0}d_{\rho}=u^+(x_0). \end{equation} {Let $\omega^+_\rho:=B_\rho\cap \{u= w_\rho^+\}\cap\{x_2>h(x_1)\}$. Since $|\omega^+_\rho|/\rho^2\to \pi/2$, and $a_\rho$ is affine, by \cite[Lemma 4.3]{ContiFocardiIurlano2016} we obtain, for $\rho$ small,} \[ \|a_\rho-u^+(x_0)\|_{L^\infty({B^+_{\rho}},\R^2)} \leq \frac c{\rho^2}\int_{\omega_\rho^+}|w^+_\rho-a_\rho|dx+\frac c{\rho^2}\int_{\omega_\rho^+}|u-u^+(x_0)|dx. \] The right hand side above is infinitesimal by \eqref{e:ppoi}, {(\ref{e:evrho})} and \eqref{e:choice3}, thus we conclude \[ \limsup_{\rho\to0}|d_\rho-u^+(x_0)|\leq \lim_{\rho\to0}\|a_\rho-u^+(x_0)\|_{L^\infty(B^+_{\rho},\R^2)}=0, \] which proves (\ref{e:drho}). Next we prove that} \ba{\label{e:betax0} &\displaystyle{\lim_{\rho\to0}\rho|\beta_\rho|^p=0,}\\ &\displaystyle{{\lim_{\rho\to0}{\rho^{\frac{1-p}{p}}}{|d_\rho- u^+(x_0)|}=0}}.\label{e:ax0} } To establish \eqref{e:betax0}, we fix $\delta>0$ small and we consider $\hat{\rho}$ such that \be{\label{e:delta} \Big(\frac1\rho\int_{B_{\rho}}|e(w^+_\rho)|^pdx\Big)^{\frac1p}<\delta, \qquad\textrm{for $\rho\leq\hat{\rho}$,} } note that this is possible by \eqref{e:evrho}. For $\rho<\hat{\rho}$ we define $\rho_k:=(2^k\rho)\wedge\hat{\rho}$ and we adopt the notation $k$ in place of $\rho_k$ for the subscriptions. As above, using \cite[Lemma 4.3]{ContiFocardiIurlano2016} and the triangular inequality we infer \bm{\nonumber \|a_k-a_{k+1}\|_{L^\infty({B^+_{\rho_k}},\R^2)} \leq\\ \frac c{{\rho_k}^2}\int_{\{u=w^+_{k}\}}|w^+_{k}-a_{k}|dx+\frac c{{\rho^2_{k+1}}}\int_{\{u=w^+_{{k+1}}\}}|w^+_{{k+1}}-a_{{k+1}}|dx \leq c\delta\rho_k^{\frac{p-1}p}, } where the last estimate follows by H\"older's inequality, \eqref{e:ppoi}, and \eqref{e:delta}. Therefore \begin{equation}\label{dadkdk1} |d_k-d_{k+1}|\leq \|a_k-a_{k+1}\|_{L^\infty({{B^+_{\rho_k}}},\R^2)}\leq c\delta\rho_k^{\frac{p-1}p}, \end{equation} and hence once more by \cite[Lemma 4.3]{ContiFocardiIurlano2016} and by the triangular inequality we conclude \begin{equation*} |\beta_k-\beta_{k+1}|\leq c\delta\rho_k^{-\frac{1}p}. \end{equation*} Collecting these estimates as $k$ varies we obtain \[ \rho|\beta_\rho|^p\leq \rho\Big(|\hat\beta|+\sum_{k=0}^{\hat{k}-1}|\beta_k-\beta_{k+1}|\Big)^p\leq c\delta^p+c\rho|\hat{\beta}|^p, \] where $\hat{k}$ is the first index such that $\rho_{\hat{k}}=\hat{\rho}$ and $\hat{\beta}:=\beta_{\hat{k}}=\beta_{\hat{\rho}}$. This proves \eqref{e:betax0} as $\rho\to0$ and $\delta\to0$. We next prove \eqref{e:ax0}. {Similarly to the previous estimate, summing (\ref{dadkdk1}) gives \begin{equation*} |d_\rho -d_{\hat\rho}| \le c \delta \, \hat\rho^{(p-1)/p} \end{equation*} for all $0<\rho<\hat\rho\le \rho_\delta$, with $\delta$ arbitrary and $\rho_\delta$ depending only on $\delta$. Taking $\rho\to0$ and using (\ref{e:drho}) yields \begin{equation*} \hat\rho^{(1-p)/p} |u^+(x_0) -d_{\hat\rho}| \le c \delta \end{equation*} which, since $\delta$ was arbitrary, proves (\ref{e:ax0}) and therefore (v).} { { At this point we turn to property (ii). } Korn's inequality implies that \begin{multline*} \|\nabla w_\rho^+\|_{L^p(B_{\rho},\R^{2{\times}2})}\leq \|\nabla w_\rho^+-\beta_\rho\|_{L^p(B_{\rho},\R^{2{\times}2})}+c\, \rho^{\sfrac2p}|\beta_\rho|\\ \leq c\,\|e(w_\rho^+)\|_{L^p(B_{\rho},\R^{2{\times}2})}+ c\,\rho^{\sfrac2p}|\beta_\rho|, \end{multline*} where $c>0$ is a universal constant. 
{This, together with \eqref{e:evrho} and \eqref{e:betax0} and the corresponding estimates for $w_\rho^-$, implies property (ii).} } We finally show property (vi). Note that by the trace theorem we have \bm{ \frac1{\rho}\int_{\Gamma\cap B_\rho}|v^\pm_\rho-u^\pm|d\calH^1\leq \nonumber\\ \frac c{\rho^2}\int_{B_\rho}|v_\rho-u|dx + \frac c\rho|E(v_\rho-u)|(B_\rho\setminus\Gamma)\leq\\ \frac c{\rho^2}\int_{B_\rho}|v_\rho-u|dx +\frac c\rho\int_{B_\rho}|e(v_\rho)|dx +\frac c{\rho}\int_{B_\rho}|e(u)|dx +\frac c\rho\int_{J_u\setminus\Gamma}|[u]|d\calH^1 } and all terms in the last expression approach $0$ respectively by (iv), \eqref{e:choice1}, \eqref{e:evrho} and \eqref{eq:diffsimm}. \end{proof} \section{Integral representation}\label{s:intrep1} \subsection{Preliminaries} In this Section we prove Theorem \ref{theorepr}, along the lines of \cite[Section 2.2]{bou-fon-leo-masc}. Before starting we specify that property \ref{theoreprlsc} means that if $u_j, u\in SBD^p(\Omega)$ obey $u_j\to u$ in $L^1(\Omega,\R^2)$, then $F(u,A)\le\liminf_{j\to\infty} F(u_j,A)$ for any open set $A$. Property \ref{theoreprlocal} means that if $u,v\in SBD^p(\Omega)$ obey $u=v$ $\calL^2$-a.e. in $A$, then $F(u,A)=F(v,A)$. The functions $f$ and $g$ are defined in (\ref{eqdeff}) and (\ref{eqdefg}) below. The family of balls contained in $\Omega$ is denoted by \begin{equation*} \calA^*(\Omega):=\left\{ B_\eps(x): x\in\Omega, \eps>0\right\}\,. \end{equation*} Let $B\in \calA^*(\Omega)$. We can identify any $u\in SBD^p(B)$ with its zero extension $u\chi_B\in SBD^p(\Omega)$, and correspondingly write $F(u,B)$ for $F(u\chi_B,B)$. By locality, for any other extension the value of the functional is the same. For $B\in \calA^*(\Omega)$ we define \begin{equation*} m(u,B):=\inf \{ F(w,B): w\in SBD^p(B),\,\, w=u \text{ around } \partial B\} \end{equation*} where the condition $w=u$ around $\partial B$ means that a ball $B'\subset\subset B$ exists, so that $w=u$ on $B\setminus B'$. For $\delta>0$, $A\in \calA(\Omega)$, we set \begin{align*} m^\delta(u,A):=\inf\{ &\sum_{i=1}^\infty m(u, B_i): B_i \in \calA^*, B_i\cap B_j=\emptyset, B_i\subset A, \\ &\diam(B_i)<\delta, \ \mu(A\setminus \bigcup_{i=1}^\infty B_i)=0\}\,, \end{align*} where $\mu:= \calL^2 \LL \Omega + (1+|[u]|) {\calH}^{1}\LL (J_u\cap \Omega)$. Since {$\delta\mapsto m^\delta(u,A)$ is decreasing, we can define} \begin{equation*} m^*(u,A):= \lim_{\delta\to0} m^\delta(u,A). \end{equation*} {Moreover, we set} \begin{equation}\label{eqdeff} f(x_0, u_0, \xi) := \limsup_{\eps\to0} \frac{ m(u_0+\xi(\cdot -x_0), B_\eps(x_0))} {\calL^2(B_\eps)} \end{equation} \begin{equation}\label{eqdefg} g(x_0, a,b,\nu) := \limsup_{\eps\to0} \frac{ m(u_{x_0,a,b,\nu}, B_\eps(x_0))} {2\eps}, \end{equation} where $u_{x_0,a,b,\nu}$ is defined as \begin{equation*} u_{x_0,a,b,\nu}(x):=\begin{cases} a & \textrm{if }\langle x-x_0,\nu\rangle >0,\\ b & \textrm{if }\langle x-x_0,\nu\rangle <0. \end{cases} \end{equation*} \begin{lemma}\label{lem1} For all $u\in SBD^p(\Omega)$ and $A\in\calA(\Omega)$, $F(u,A)=m^*(u,A)$. \end{lemma} \begin{proof} By definition, $m(u,B)\le F(u,B)$ for any ball $B$. Since $F(u,\cdot)$ is a measure, we obtain $m^\delta(u,A)\le F(u,A)$ for any $\delta>0$. Therefore $m^*(u,A)\le F(u,A)$. To prove the converse inequality, let $\delta>0$, pick countably many balls $B_i^\delta$ as in the definition of $m^\delta(u,A)$, such that \begin{equation*} \sum_{i=1}^\infty m(u,B_i^\delta) < m^\delta(u,A)+\delta\,. 
\end{equation*} By the definition of $m$ there are functions $v_i^\delta\in SBD^p(B_i^\delta)$ such that $v_i^\delta=u$ around $\partial B_i^\delta$ and $F(v_i^\delta, B_i^\delta)\le m(u,B_i^\delta)+\delta \calL^2(B_i^\delta)$. We define \begin{equation*} v^\delta:=\sum_{i=1}^\infty v_i^\delta \chi_{B_i^\delta} + u \chi_{N_0^\delta} \end{equation*} where $N_0^\delta:=\Omega\setminus \cup_i B_i^\delta$. By the $BD$ compactness theorem $v^\delta\in BD(\Omega)$ and {by the $SBD$ closure theorem (see also \cite[Theorem 11.3]{gbd})} we conclude that $v^\delta\in SBD^p(\Omega)$ and \begin{equation*} Ev^\delta=\sum_{i=1}^\infty Ev_i^\delta \LL B_i^\delta + Eu \LL N_0^\delta\,, \end{equation*} with \begin{equation*} |Ev^\delta|\LL N^\delta=0,\hskip5mm \mu(N^\delta)=0\,, \hskip1cm F(v^\delta, N^\delta)=0 \end{equation*} where $N^\delta:=A\cap N_0^\delta$. Further, \begin{equation*} F(v^\delta, A)=\sum_{i=1}^\infty F(v_i^\delta, B_i^\delta)+F(v^\delta, N^\delta) \le m^\delta(u,A)+\delta + \delta \calL^2(A)\,. \end{equation*} We claim that $v^\delta\to u$ in $L^1(\Omega,\R^2)$. Since $F(\cdot, A)$ is lower semicontinuous, this will imply \begin{equation*} F(u,A)\le \liminf_{\delta\to0} F(v^\delta,A) \le \liminf_{\delta\to0} m^\delta({u},A) =m^*(u,A)\,. \end{equation*} To prove $v^\delta\to u$, we observe that by Poincar\'e's inequality, $\diam B_i^\delta\le \delta$, and $v^\delta=u$ on $\partial B_i^\delta$ we obtain \begin{equation*} \|v^\delta-u\|_{L^1(B_i^\delta,\R^2)} \le c \delta |Ev^\delta-Eu|(B_i^\delta)\,. \end{equation*} Therefore \begin{align*} \|v^\delta-u\|_{L^1(\Omega,\R^2)} \le& \sum_i \|v^\delta-u\|_{L^1(B_i^\delta,\R^2)} \le c \delta (|Ev^\delta|(A)+|Eu|(A))\\ \le &c \delta (F(v^\delta, A)+F(u,A))\,. \end{align*} Since $F(v^\delta,A)$ has a finite limit as $\delta\to0$, this proves $v^\delta\to u$ in $L^1(\Omega,\R^2)$. \end{proof} \begin{lemma}\label{lem3} For any ball $B_r(x_0)\subset\Omega$ and $\delta>0$ sufficiently small we have \begin{enumerate} \item $\displaystyle \lim_{\delta\to0} m(u, B_{r-\delta}(x_0))=m(u,B_r(x_0))$; \item $\displaystyle m(u,B_{r+\delta}(x_0)))\leq m(u,B_r(x_0))+\beta\int_{B_{r+\delta}(x_0)\setminus B_r(x_0)}(1+|e(u)|^p)dx +\beta\int_{J_u\cap B_{r+\delta}(x_0)\setminus B_r(x_0)}(1+|[u]|)d{\calH}^{1}. $ \end{enumerate} \end{lemma} \begin{proof} We drop $x_0$ from the notation. Choose $v_\delta\in SBD^p(B_{r-\delta})$ with $v_\delta=u$ around $\partial B_{r-\delta}$ and $F(v_\delta, B_{r-\delta})\le m(u, B_{r-\delta})+\delta$. We define \begin{equation*} w_\delta(x) := \begin{cases} v_\delta(x) & \text{ if } x\in B_{r-\delta}\,,\\ u(x) & \text{ if } x\in \Omega\setminus B_{r-\delta}\,. \end{cases} \end{equation*} We have \begin{align*} m(u,B_r)\le& F(w_\delta,B_r)\le F(v_\delta, B_{r-\delta}) + F(w_\delta, B_{r}\setminus B_{r-\delta})\\ \le &m(u, B_{r-\delta})+\delta\\ &+ \beta \int_{B_r\setminus B_{r-\delta}} (|e(u)|^p+1)dx +\beta\int_{J_u\cap B_r\setminus B_{r-\delta}} (1+|[u]|) d\calH^{n-1}\,. \end{align*} Since $(1+|e(u)|^p)\calL^2 + (1+|[u]|)\calH^{1}\LL J_u$ is a bounded measure, we conclude that \begin{equation*} m(u,B_r)\le \liminf_{\delta\to0} m(u, B_{r-\delta})\,. \end{equation*} Conversely, for any $\eps>0$ there is $v_\eps\in SBD^p(B_r)$ with $v_\eps=u$ around $\partial B_r$ and $F(v_\eps,B_r)\le m(u, B_{r})+\eps$. For $\delta>0$ sufficiently small one has $v_\eps=u$ on $B_r\setminus B_{r-2\delta}$ and therefore $m(u, B_{r-\delta})\le F(v_\eps, B_{r-\delta}) \le m(u, B_{r})+\eps$. 
Taking first $\delta\to0$ and then $\eps\to0$ concludes the proof of (i). The proof of (ii) is analogous. \end{proof} \begin{lemma}\label{lem2} For $\mu$-a.e. $x_0\in\Omega$, \begin{equation*} \lim_{\eps\to0} \frac{ F(u, B_\eps(x_0))}{\mu(B_\eps(x_0))} = \lim_{\eps\to0} \frac{ m(u, B_\eps(x_0))}{\mu(B_\eps(x_0))}\,. \end{equation*} \end{lemma} \begin{proof} From $m(u,B_\eps(x_0))\le F(u,B_\eps(x_0))$ one immediately obtains \begin{equation*} \limsup_{\eps\to0} \frac{ m(u, B_\eps(x_0))} {\mu(B_\eps(x_0))} \le \limsup_{\eps\to0} \frac{ F(u, B_\eps(x_0))} {\mu(B_\eps(x_0))} \end{equation*} for any $x_0\in\Omega$. To prove the converse inequality, we define for $t>0$ the set \begin{align*} E_t:= \{ x\in \Omega:& \text{ there is } \eps_h\to 0 \text{ such that } \\ & F(u, B_{\eps_h}(x))>m(u, B_{\eps_h}(x))+ t \mu(B_{\eps_h}(x)) \text{ for all } h\}\,. \end{align*} From this definition one immediately has \begin{equation*} \liminf_{\eps\to0} \frac{ F(u, B_\eps(x_0))} {\mu(B_\eps(x_0))} \le \liminf_{\eps\to0} \frac{ m(u, B_\eps(x_0))} {\mu(B_\eps(x_0))} + t \hskip4mm\text{ for all } x_0\in\Omega\setminus E_t\,. \end{equation*} If we can prove that \begin{equation}\label{eqmut} \mu(E_t)=0\text{ for all } t>0 \end{equation} then, recalling that $ \lim_{\eps\to0} \frac{ F(u, B_\eps(x_0))} {\mu(B_\eps(x_0))}$ exists $\mu$-almost everywhere, the proof is concluded. It remains to prove (\ref{eqmut}) for an arbitrary $t>0$. For $\delta>0$ we define \begin{align*} X^\delta:=\{ B_\eps(x): &\,\, \eps<\delta, \hskip2mm \overline B_\eps(x)\subset\Omega, \hskip2mm\mu(\partial B_\eps(x))=0,\\ &F(u, B_\eps(x)) > m(u,B_\eps(x)) + t \mu(B_\eps(x))\} \end{align*} and \begin{align*} U^*:= \bigcap_{\delta>0} \{x: \exists \eps>0 \text{ s.t. } B_\eps(x)\in X^\delta\} \,. \end{align*} We first show that $E_t\subset U^*$. Let $x\in E_t$. Then for any $\delta>0$ there is $\eps\in(0,\delta)$ such that $F(u, B_\eps(x)) > m(u,B_\eps(x)) + t \mu(B_\eps(x))$. By Lemma \ref{lem3} the function $\eps\to m(u,B_\eps(x))$ is left-continuous; $F(u,B_\eps(x))$ is left-continuous because $F(u,\cdot)$ is a measure, therefore the same inequality holds for all $\eps'\in (\eps'',\eps)$. In particular, there is one which additionally obeys $\mu(\partial B_\eps(x))=0$. It remains to show that $\mu(U^*)=0$. We fix a compact set $K\subset U^*$ and $0<\delta<\eta$. Let $U^\eta:=\bigcup\{ B_\eps(x): B_\eps(x)\in X^\eta\}$ and \begin{align*} Y^\delta:= \{ B_\eps(x): \eps<\delta, \overline{B_\eps(x)}\subset U^\eta\setminus K, \mu(\partial B_\eps(x))=0\}\,. \end{align*} By definition, $X^\delta$ is a fine cover of $K$ and $Y^\delta$ of $U^\eta\setminus K$. Therefore there are countably many pairwise disjoint balls $ B_i\in X^\delta$ and $\hat { B}_j\in Y^\delta$ and a set $N$ with $\mu(N)=0$ such that \begin{equation*} U^\eta=\left( \bigcup_{i\in \N} B_i\right) \cup\left( \bigcup_{j\in \N} \hat { B_j}\right)\cup N\,. \end{equation*} Then \begin{align*} F(u, U^\eta)= &\sum_i F(u, B_i)+\sum_j F(u,\hat B_j) +F(u,N) \\ \ge & \sum_i (m(u, B_i)+t\mu(B_i)) + \sum_j m (u,\hat B_j)\\ =& \sum_i m(u, B_i) + \sum_j m(u, \hat B_j) + t\mu(\cup_i B_i)\\ \ge & m^\delta(u, U^\eta) + t \mu(K) \end{align*} where in the last step we used the definition of $m^\delta$. For $\delta\to0$, the definition of $m^*$ and Lemma \ref{lem1} give \begin{align*} F(u, U^\eta)\ge m^*(u, U^\eta) + t\mu(K) = F(u, U^\eta) +t\mu(K)\,. \end{align*} Therefore $\mu(K)=0$, and by the regularity of $\mu$ we conclude $\mu(U^*)=0$. 
\end{proof} \subsection{Bounds on the volume term} In this subsection we identify the volume energy density in the integral representation for $F$ to be the function $f$ defined in \eqref{eqdeff}. Throughout the whole subsection we consider a fixed map $u\in SBD^p(\Omega)$. Our first result shows that the local volume energy density can be computed with a $W^{1,p}$-approximation to the blow-ups of $u$ (see (\ref{eqconveweps}--\ref{eqconvweps}) below), in the sense that \begin{align}\label{equsdfstep12} \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) = \lim_{\eps\to0} \frac{ m\big(w_\eps, B_{\eps}(x_0)\big)}{\calL^2(B_\eps)}. \end{align} We shall however not need (\ref{equsdfstep12}), but only the apparently more complex version in (\ref{equsdfstep1})-(\ref{equsdfstep2}). Taking a diagonal subsequence they imply (\ref{equsdfstep12}). \begin{lemma}\label{lemmabdvolpart1} For $\calL^2$-almost any $x_0\in\Omega$, any $\eps>0$, and any $s\in (0,1)$ there are functions $w_\eps^s\in W^{1,p}(B_{s\eps}(x_0);\R^2)$ which obey \begin{align}\label{equsdfstep1} \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) \le \liminf_{s\to1} \liminf_{\eps\to0} \frac{ m\big(w_\eps^s, B_{s\eps}(x_0)\big)}{\calL^2(B_{s\eps})} \end{align} and \begin{align}\label{equsdfstep2} \limsup_{s\to1} \limsup_{\eps\to0} \frac{ m\big(w_\eps^s, B_{s^2\eps}(x_0)\big)}{\calL^2(B_{s\eps})} \le \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) \end{align} and which approximate the affine function $y\mapsto \nabla u(x_0)(y-x_0)+u(x_0)$ in the sense that \begin{align}\label{eqconveweps} \lim_{\eps\to0} \frac{1}{\eps^2} \int_{B_{\eps}(x_0)} |e(w^s_\eps)-e(u)(x_0)|^p dx=0 \end{align} and \begin{align}\label{eqconvweps} \lim_{\eps\to0} \frac{1}{\eps^{2+p}} \int_{B_{\eps}(x_0)} |w^s_\eps(x)-u(x_0)-\nabla u(x_0)(x-x_0)|^p dx=0\,. \end{align} \end{lemma} We remark that the ball in (\ref{equsdfstep2}) has radius $s^2\eps$ instead of $s\eps$. The estimate would also hold on $B_{s\eps}$, the variant we chose is more convenient in the proof of Lemma \ref{lem:volLB}. \begin{proof} Let $x_0\in \Omega$ be such that \begin{equation}\label{eqx0lebeu} \lim_{\eps\to0} \frac{1}{\eps^2} \int_{B_\eps(x_0)} |e(u)(x)-e(u)(x_0)|^p dx =0\,, \end{equation} \begin{equation}\label{eqx0lebju} \lim_{\eps\to0} \frac{1}{\eps^2} \int_{B_\eps(x_0)\cap J_u} (1+|[u]|)d\calH^1=0\,, \end{equation} and \begin{equation}\label{eqx0leu} \lim_{\eps\to0} \frac{1}{\eps^3} \int_{B_\eps(x_0)} |u(x)-u(x_0) - \nabla u(x_0)(x-x_0)|dx=0\,. \end{equation} By \cite[Th. 7.4]{AmbrosioCosciaDalmaso1997}, $\calL^2$-almost every $x_0$ obeys (\ref{eqx0leu}), the other two are standard. By (\ref{eqx0lebju}), for sufficiently small $\eps$ one has $\calH^1(J_u\cap B_\eps(x_0))\le \eta(1-s)\eps/2$, where $\eta$ is the constant from Theorem~\ref{t:tecnico}. By Proposition~\ref{p:ricopr} applied to $u-u(x_0)-\nabla u(x_0)(\cdot-x_0)$ there is $\tilde{w}^s_\eps\in SBD^p(B_\eps(x_0))\cap W^{1,p}(B_{s\eps}(x_0);\R^2)$ with properties (i)-(vii) and we set $w^s_\eps:=\tilde{w}^s_\eps+u(x_0)+\nabla u(x_0)(\cdot-x_0)$. In particular, (\ref{eqconveweps}) follows from (\ref{e:volume}) and (\ref{eqx0lebeu}), while (\ref{eqconvweps}) follows from Lemma \ref{lemmal1lp} below applied to $\tilde{w}^s_\eps$, estimating the right-hand side with (\ref{eqconveweps}), (vi), and \eqref{eqx0lebeu}-(\ref{eqx0leu}). We first prove (\ref{equsdfstep2}). By the definition of $m$ and the fact that $F(w^s_\eps,\cdot)$ is a measure follows \begin{align*} m(w^s_\eps, B_{s\eps}(x_0)) \le F(w^s_\eps, B_{s\eps}(x_0)) \le F(w^s_\eps, B_{\eps}(x_0))\,. 
\end{align*} Let $(B_i)_{i\in\N}$ be the balls from Proposition~\ref{p:ricopr}. For $M\in\N$ we define \begin{equation*} w_\eps^{s,M}:= u + {\chi_{\cup_{i=1}^M\overline B_i}} (w^s_\eps-u)\,. \end{equation*} Then $w_\eps^{s,M}\in SBD^p(B_\eps(x_0))$ and $w_\eps^{s,M}\to w^s_\eps$ in $L^1$ as $M\to\infty$. Further, \begin{align*} F(w_\eps^{s,M},B_{\eps}(x_0))\le & F(w_\eps^{s,M},B_{\eps}(x_0)\setminus \cup_{i=1}^M\overline B_i)+\sum_{i=1}^M F(w_\eps^{s,M},\overline B_i)\\ \le& F(u,B_{\eps}(x_0)\setminus \cup_{i=1}^M\overline B_i)+\beta\sum_{i=1}^M \int_{\overline B_i} (1+|e(w_\eps^s)|^p) dx \end{align*} since $w^{s,M}_\eps=w^s_\eps$ is a $W^{1,p}$ function on each $\overline B_i$. By monotonicity and lower semicontinuity of $F$ we obtain \begin{align*} F(w_\eps^s,B_{\eps}(x_0))\le & F(u,B_{\eps}(x_0))+\beta\sum_{i=1}^\infty \int_{\overline B_i} (1+|e(w_\eps^s)|^p) dx\\ \le & F(u,B_{\eps}(x_0))+c \calL^2(\cup_i B_i) (1+|e(u)|^p(x_0)) \\ & + c \int_{B_{\eps}(x_0)} (|e(w_\eps^s)-e(u)(x_0)|^p) dx \end{align*} and, recalling Proposition~\ref{p:ricopr} (i), conclude the proof of (\ref{equsdfstep2}) by \eqref{eqconveweps} and \eqref{eqx0lebeu}. It remains to prove (\ref{equsdfstep1}). Let $v_\eps\in SBD^p(B_{s^2\eps}(x_0))$ be such that $v_\eps=w^s_\eps$ around $\partial B_{s^2\eps}(x_0)$ and $F(v_\eps, B_{s^2\eps}) \le m(w^s_\eps, B_{s^2\eps}(x_0))+\eps^3$. We define \begin{equation*} \tilde v_\eps(x):= \begin{cases} v_\eps(x) & \text{ if } x\in B_{s^2\eps}(x_0)\\ w^s_\eps(x) & \text{ if } x\in B_\eps(x_0)\setminus B_{s^2\eps}(x_0)\,. \end{cases} \end{equation*} By definition of $m$ and additivity of $F$ we obtain \begin{align*} m(u, B_{\eps}(x_0))\le &F(\tilde v_\eps, B_{\eps}(x_0)) = F(\tilde v_\eps, B_{s^2\eps}(x_0))+ F(\tilde v_\eps, B_\eps(x_0)\setminus B_{s^2\eps}(x_0)) \end{align*} where by locality of $F$ and definition of $v_\eps$ \begin{align*} F(\tilde v_\eps, B_{s^2\eps}(x_0))= F(v_\eps, B_{s^2\eps}(x_0))\le m(w^s_\eps, B_{s^2\eps}(x_0)) + \eps^3 \end{align*} and, since $\tilde v_\eps=w^s_\eps$ outside $B_{s^2\eps}(x_0)$ and $\calH^1(J_{\tilde v_\eps}\cap \partial B_{s^2\eps}(x_0))=0$, recalling (\ref{e:volume}) we obtain \begin{align*} F(\tilde v_\eps, B_\eps(x_0)\setminus B_{s^2\eps}(x_0))\le & \beta\int_{B_\eps(x_0)\setminus B_{s^2\eps}(x_0)} (1+|e(w_\eps^s)|^p) dx\\ &+ \beta\int_{J_u\cap B_\eps(x_0)\setminus B_{s^2\eps}(x_0)} (1+|[u]|) d\calH^1\\ \le & c\beta \calL^2(B_\eps) (1-s^4) (1+|e(u)|^p(x_0))\\ &+ c\beta \int_{B_\eps(x_0)} |e(w_\eps^s)(x)-e(u)(x_0)|^p dx\\ &+ \beta\int_{J_u\cap B_\eps(x_0)} (1+|[u]|)d\calH^1\,. \end{align*} Dividing by $\calL^2(B_\eps)$ and taking the limit $\eps\to0$ gives \begin{align*} \lim_{\eps\to0} \frac{m(u, B_\eps(x_0))}{\calL^2(B_\eps)} \le \liminf_{\eps\to0} \frac{ m(w^s_\eps, B_{s^2\eps}(x_0))}{\calL^2(B_\eps)} + c\beta (1-s^4)(1+|e(u)|^p(x_0)) \,, \end{align*} where we used \eqref{eqconveweps} and (\ref{eqx0lebju}). Recalling Lemma \ref{lem2} we obtain \begin{align*} \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) = \lim_{\eps\to0} \frac{m(u, B_\eps(x_0))}{\calL^2(B_\eps)} \le \liminf_{s\to1} \liminf_{\eps\to0} \frac{ m(w^s_\eps, B_{s^2\eps}(x_0))}{\calL^2(B_{s\eps})}\,. \end{align*} This concludes the proof of (\ref{equsdfstep1}). \end{proof} The next Lemma is a reverse-H\"older estimate for functions with small strain, of the form $\|v\|_p\le r \|e(v)\|_p + \|v\|_1 r^{-n/p'}$. 
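Here $p'=p/(p-1)$ denotes the conjugate exponent; this informal form is the statement of the lemma below after multiplying the inequality through by $r^{n+p}$ and taking $p$-th roots, namely, up to a dimensional constant,
\begin{equation*}
\|v\|_{L^p(B_r)} \le c\, r\, \|e(v)\|_{L^p(B_r)} + c\, r^{-n/p'}\, \|v\|_{L^1(B_r)},
\end{equation*}
since $r^{\frac{n+p}{p}-(n+1)}=r^{-n/p'}$.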
\begin{lemma}\label{lemmal1lp} For any $p\ge 1$ there is $c>0$ (depending on $n$ and $p$) such that for any $v\in W^{1,p}(B_r;\R^n)$ one has \begin{equation*} \frac{1}{r^{n+p}} \int_{B_r} |v|^p dx \le c \frac{1}{r^n} \int_{B_r} |e(v)|^p dx + c \left(\frac{1}{r^{n+1}}\int_{B_r} |v|dx\right)^p\,. \end{equation*} \end{lemma} \begin{proof} By scaling it suffices to consider $r=1$. By Korn's inequality there is an affine function $a$ such that \begin{equation*} \int_{B_1} |v-a|^p dx \le c \int_{B_1} |e(v)|^p dx \,. \end{equation*} Since $a$ is affine, \begin{equation*} \int_{B_1} |a|^p dx \le c\left( \int_{B_1} |a| dx\right)^p \le c\left( \int_{B_1} |v| dx\right)^p + c \int_{B_1} |v-a|^p dx\,. \end{equation*} A triangular inequality concludes the proof. \end{proof} \begin{lemma}\label{lem:volUB} For ${\calL}^2$-a.e. $x_0\in\Omega$, \begin{equation*} \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) \le f(x_0, u(x_0), \nabla u(x_0)) \end{equation*} where $f$ was defined in (\ref{eqdeff}). \end{lemma} \begin{proof} Let $x_0$, $w_\eps^s$ be as in Lemma \ref{lemmabdvolpart1}, for $s\in (0,1)$. We choose $v^s_\eps\in SBD^p(B_{s^2 \eps}(x_0))$ such that $v^s_\eps(x)=u(x_0)+\nabla u(x_0)(x-x_0)$ around $\partial B_{s^2\eps}(x_0)$ and $F(v^s_\eps, B_{s^2\eps}(x_0))\le m(u(x_0)+\nabla u(x_0)(\cdot-x_0), B_{s^2\eps}(x_0))+\eps^3$. We extend it to $\R^2$ setting it equal to $ u(x_0)+\nabla u(x_0)(\cdot-x_0)$ outside $B_{s^2\eps}(x_0)$ and choose $\varphi\in C^\infty_c(B_{s\eps}(x_0))$ with $\varphi=1$ on $B_{s^2\eps}(x_0)$ and $\|D\varphi\|_\infty \le c/(s(1-s) \eps)$. We define \begin{align*} z^s_\eps := \varphi v^s_\eps + (1-\varphi) w_\eps^s\,. \end{align*} We remark that $z^s_\eps=v^s_\eps$ on $B_{s^2\eps}(x_0)$ and $z^s_\eps\in W^{1,p}( B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0);\R^2)$. Then \begin{align*} m(w^s_\eps, B_{s\eps}(x_0)) \le& F(z^s_\eps, B_{s\eps}(x_0)) \le F(v^s_\eps, B_{s^2\eps}(x_0)) + F(z^s_\eps, B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)) \\ \le &m(u(x_0)+\nabla u(x_0)(\cdot-x_0), B_{s^2\eps}(x_0)) +\eps^3\\ & + \beta \int_{ B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)} (1+|e(z^s_\eps)|^p) dx\,. \end{align*} In order to estimate the error term, we observe that in $B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)$ one has \begin{equation*} \nabla z^s_\eps-\nabla u(x_0)= (u(x_0)+\nabla u(x_0)(\cdot-x_0)-w^s_\eps) \nabla\varphi + (1-\varphi) (\nabla u(x_0)-\nabla w^s_\eps) \end{equation*} which implies \begin{align*} \int_{ B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)} (1+|e(z^s_\eps)|^p) dx \le& c(1-s)\calL^2(B_{s\eps}) (1+|e(u)|^p(x_0))\\ &+ c \int_{B_{s\eps}(x_0)} |e(u)(x_0)-e(w_\eps^s)|^p dx\\ &+c \int_{B_{s\eps}(x_0)}\frac{|u(x_0)+\nabla u(x_0)(\cdot-x_0)-w_\eps^s|^p}{\eps^p s^p(1-s)^p} dx. \end{align*} Therefore \begin{align*} \limsup_{\eps\to0} \frac{F(z^s_\eps, B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)) }{\calL^2(B_{s\eps})}\le c (1-s) (1+|e(u)|^p(x_0)) \end{align*} and \begin{align*} \limsup_{\eps\to0} \frac{ m(w_\eps^s, B_{s\eps}(x_0)) }{\calL^2(B_{s\eps})}\le & \limsup_{\eps\to0} \frac{ m(u(x_0)+\nabla u(x_0)(\cdot-x_0), B_{s^2\eps}(x_0))}{\calL^2(B_{s\eps})}\\ &+ c (1-s) (1+|e(u)|^p(x_0))\\ =& s^2f(x_0,u_0,\nabla u(x_0)) + c (1-s) (1+|e(u)|^p(x_0))\,. \end{align*} Since $s$ was arbitrary, this concludes the proof. \end{proof} \begin{lemma}\label{lem:volLB} For ${\calL}^2$-a.e. $x_0\in\Omega$, \begin{equation*} f(x_0, u(x_0), \nabla u(x_0))\le \frac{ dF(u,\cdot)}{d{\calL}^2} (x_0) \end{equation*} where $f$ was defined in (\ref{eqdeff}). 
\end{lemma} \begin{proof} We choose $x_0$ and $w_\eps^s$ as in Lemma \ref{lemmabdvolpart1}, for $s\in (0,1)$. We let $v^s_{\eps}\in SBD^p(B_{s^2\eps}(x_0))$ be such that $v_{\eps}^s=w_\eps^s$ around $\partial B_{s^2\eps}(x_0)$ and $F(v_\eps^s, B_{s^2\eps}(x_0))\le m(w_\eps^s, B_{s^2\eps}(x_0))+\eps^3$, and extend it to $B_{s\eps}(x_0)$ setting it equal to $w_\eps^s$ outside $B_{s^2\eps}(x_0)$. We choose $\varphi\in C^\infty_c(B_{s\eps}(x_0))$ with $\varphi=1$ on $B_{s^2\eps}(x_0)$ and $\|D\varphi\|_\infty \le c/(s(1-s)\eps)$ and define \begin{align*} z^s_\eps := \varphi v^s_\eps + (1-\varphi) (u(x_0)+\nabla u(x_0)(x-x_0))\,. \end{align*} Then \begin{align*} m(u(x_0)+\nabla u(x_0)(\cdot -x_0), &B_{s\eps}(x_0)) \le F(z^s_\eps, B_{s\eps}(x_0))\\ =& F(v_\eps^s, B_{s^2\eps}(x_0)) +F(z^s_\eps, B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)) \\ \le& m(w_\eps^s, B_{s^2\eps}(x_0)) +\eps^3 + F(z^s_\eps, B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0))\,. \end{align*} In order to estimate the error term, we observe that in $B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)$ one has \begin{equation*} \nabla z^s_\eps-\nabla u(x_0)= -(u(x_0)+\nabla u(x_0)(\cdot-x_0)-w_\eps^s) \nabla\varphi + \varphi (\nabla w_\eps^s-\nabla u(x_0)) \end{equation*} which leads as in the proof of Lemma \ref{lem:volUB} to \begin{align*} \limsup_{\eps\to0} \frac{F(z^s_\eps, B_{s\eps}(x_0)\setminus B_{s^2\eps}(x_0)) }{\calL^2(B_{s\eps})}\le c (1-s) (1+|e(u)|^p(x_0))\,. \end{align*} We conclude that for any $s\in(0,1)$ \begin{align*} &\limsup_{\eps\to0} \frac{ m(u(x_0)+\nabla u (x_0)(\cdot -x_0), B_{s\eps}(x_0))} {{\calL}^2(B_{s\eps})}\\ \le& \limsup_{\eps\to0} \frac{ m(w_\eps^s, B_{s^2\eps}(x_0))} {{\calL}^2(B_{s\eps})} +c (1-s) (1+|e(u)|^p(x_0))\,. \end{align*} Since $s$ was arbitrary, this concludes the proof. \end{proof} \subsection{Bounds on the surface term}\label{s:surface} In the current subsection we identify the function $g$ in \eqref{eqdefg} to be the surface energy density in the integral representation of $F$. As above, we work with a fixed map $u\in SBD^p(\Omega)$. We first prove a technical result. \begin{lemma}\label{lem:surftech} For ${\calH}^{1}$-a.e. $x_0\in J_u$ the functions $v_{2\eps}\in SBV^p(B_{2\eps}(x_0),\R^2)$ introduced in Lemma~\ref{lem:refl} satisfy for all $t\in(0,2)$ \begin{equation}\label{e:mweps} \frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0) =\lim_{\eps\to 0}\frac{m(v_{2\eps},B_{t\eps}(x_0))}{2t\eps}. \end{equation} \end{lemma} \begin{proof} It suffices to consider points $x_0$ such that the conclusions of Lemmata~\ref{lem:refl} and \ref{lem2} hold true, the Radon-Nikodym derivative $\frac{dF(u,\cdot)}{d{\calH}^{1}\res J_u}(x_0)$ exists finite, \begin{equation}\label{e:cond1} \lim_{\eps\to 0}\frac{\mu(B_\eps(x_0))}{2\eps}=1+|[u](x_0)|,\quad \end{equation} and \begin{equation}\label{e:cond2} \lim_{\eps\to 0}\Big(\frac{1}{\eps}\int_{B_\eps(x_0)}|e(u)|^pdx+ \frac{1}{\eps^2}\int_{B_\eps(x_0)}|u(x)-u_{x_0}|dx\Big)=0, \end{equation} where $u_{x_0}$ is the piecewise constant function defined in \eqref{e:uxzero}. In view of all these choices and thanks to Lemma~\ref{lem2} we may conclude that \begin{equation}\label{e:FHuno} \frac{dF(u,\cdot)}{d{\calH}^{1}\res J_u}(x_0)=\lim_{\eps\to 0}\frac{F(u,B_\eps(x_0))}{2\eps} =\lim_{\eps\to 0}\frac{m(u,B_\eps(x_0))}{2\eps}. \end{equation} For $\eps>0$ small enough the function $v_{2\eps}$ introduced in Lemma~\ref{lem:refl} belongs to $SBD^p(B_{4\eps}(x_0))\cap SBV^p(B_{2\eps}(x_0),\R^2)$ and it satisfies properties (i)-(vi). 
We set $w_\eps:=v_{2\eps}$, we are left with proving that for all $t\in(0,2)$ \begin{equation}\label{e:wepsUB} \frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0) \ge \limsup_{\eps\to 0}\frac{m(w_\eps,B_{t\eps}(x_0))}{2t\eps}, \end{equation} \begin{equation}\label{e:wepsLB} \frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0) \le \liminf_{\eps\to 0}\frac{m(w_\eps,B_{t\eps}(x_0))}{2t\eps}. \end{equation} For the sake of notational simplicity we shall prove inequalities \eqref{e:wepsUB} and \eqref{e:wepsLB} only for $t=1$. We start off with \eqref{e:wepsUB}. Let $(\eps_j)_j$ be a sequence such that \begin{equation}\label{e:epsj} \lim_{j\to\infty}\frac{m(w_{\eps_j},B_{\eps_j}(x_0))}{2\eps_j}= \limsup_{\eps\to 0}\frac{m(w_\eps,B_{\eps}(x_0))}{2\eps}. \end{equation} Items (iii) and (iv) in Lemma~\ref{lem:refl} and the Coarea formula yield for a subsequence not relabeled for convenience that for $\calL^1$-a.e. $s\in(0,1)$ \begin{equation}\label{e:coareasurf2} \lim_j\frac{1}{\eps_j}\int_{\partial B_{s\eps_j}(x_0)\cap\{u\neq w_{\eps_j}\}}\big(1+|u- w_{\eps_j}|\big)d{\calH}^{1}=0, \end{equation} \begin{equation}\label{e:coareasurf1} \mu\big(\partial B_{s\eps_j}(x_0)\big)=\calH^1\big(\partial B_{s\eps_j}(x_0)\cap J_{ w_{\eps_j}}\big)=0. \end{equation} We choose $z_j\in SBD^p(B_{s\eps_j}(x_0))$ such that $z_j=u$ around $\partial B_{s\eps_j}(x_0)$ and \[ F(z_j,B_{s\eps_j}(x_0))\leq m(u,B_{s\eps_j}(x_0))+\eps_j^2, \] and define \begin{equation*} \zeta_j:=\begin{cases} z_j & B_{s\eps_j}(x_0) \cr w_{\eps_j} & B_{\eps_j}(x_0)\setminus \overline{B_{s\eps_j}(x_0)}. \end{cases} \end{equation*} The definition of $z_j$, the growth conditions in \eqref{e:growthF}, and the locality of $F$ yield \begin{multline*} m( w_{\eps_j},B_{\eps_j}(x_0))\leq F(\zeta_j,B_{\eps_j}(x_0))\\ \leq F(z_j,B_{s\eps_j}(x_0)) +\underbrace{\beta\int_{B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0)}(1+|e( w_{\eps_j})|^p)\,dx}_{=:I_j^{(1)}}\\ +\underbrace{\beta\int_{\partial B_{s\eps_j}(x_0)\cap\{u\neq w_{\eps_j}\}}(1+|u- w_{\eps_j}|)d{\calH}^{1}}_{=:I_j^{(2)}} +\underbrace{\beta\int_{(B_{\eps_j}(x_0)\setminus \overline{B_{s\eps_j}(x_0)})\cap J_{ w_{\eps_j}}}(1+|[ w_{\eps_j}]|)d{\calH}^{1}}_{=:I_j^{(3)}}\\ \leq m(u,B_{s\eps_j}(x_0))+\eps_j^2+I_j^{(1)}+I_j^{(2)}+I_j^{(3)}. \end{multline*} We note that $I_j^{(1)}$ and $I_j^{(2)}$ are $o(\eps_j)$ as $j\to \infty$ thanks to Lemma~\ref{lem:refl} (ii) and \eqref{e:coareasurf2}, respectively. Instead, employing Lemma~\ref{lem:refl} (vi) and \eqref{e:cond1} to bound $I_j^{(3)}$ we infer that \begin{multline}\label{e:Itrej} \limsup_{j\to\infty}\frac{I_j^{(3)}}{2\eps_j}\leq \limsup_{j\to\infty} \frac\beta{2\eps_j}\int_{(B_{\eps_j}(x_0)\setminus {B_{s\eps_j}(x_0)}\cap J_{u}}(1+|[u]|)d{\calH}^{1}\\ =\beta \limsup_{j\to\infty} \frac{\mu\big(B_{\eps_j}(x_0)\setminus {B_{s\eps_j}(x_0)}\cap J_{u}\big)}{2\eps_j} =(1-s)\beta(1+|[u](x_0)|). \end{multline} Therefore, by \eqref{e:FHuno} we conclude \begin{multline*} \lim_{j\to\infty}\frac{m(w_{\eps_j},B_{\eps_j}(x_0))}{2\eps_j}\leq \liminf_{j\to\infty}\frac{m(u,B_{s\eps_j}(x_0))}{2\eps_j}+(1-s)\beta(1+|[u](x_0)|)\\ =s\frac{dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0)+(1-s)\beta(1+|[u](x_0)|). \end{multline*} Estimate \eqref{e:wepsUB} follows at once by \eqref{e:epsj} and by letting $s\uparrow 1$ in the last inequality. Let now $(\eps_j)_j$ be a sequence such that \begin{equation}\label{e:epsj2} \lim_{j\to\infty}\frac{m(w_{\eps_j},B_{\eps_j}(x_0))}{2\eps_j}= \liminf_{\eps\to 0}\frac{m(w_\eps,B_{\eps}(x_0))}{2\eps}. 
\end{equation} Let $\lambda\in(1,2)$, arguing as for \eqref{e:coareasurf2} and \eqref{e:coareasurf1}, up to a subsequence depending on $\lambda$ and not relabeled for convenience we may assume that for $\calL^1$-a.e. $s\in(0,1)$ \begin{equation}\label{e:coareasurf2b} \lim_{j\to\infty}\frac{1}{\eps_j}\int_{\partial B_{s\lambda\eps_j}(x_0)\cap\{u\neq w_{\eps_j}\}} \big(1+|u-w_{\eps_j}|\big)d{\calH}^{1}=0, \end{equation} and \begin{equation}\label{e:coareasurf1b} \mu\big(\partial B_{s\lambda\eps_j}(x_0)\big)= \calH^1\big(\partial B_{s\lambda\eps_j}(x_0)\cap J_{w_{\eps_j}}\big)=0. \end{equation} Given $z_j\in SBD^p(B_{s\lambda\eps_j}(x_0))$ with $z_j=w_{\eps_j}$ around $\partial B_{s\lambda\eps_j}(x_0)$ and such that \[ F(z_j,B_{s\lambda\eps_j}(x_0))\leq m(w_{\eps_j},B_{s\lambda\eps_j}(x_0))+\eps_j^2, \] define \begin{equation*} \zeta_j:=\begin{cases} z_j & B_{s\lambda\eps_j}(x_0) \cr u & B_{\lambda\eps_j}(x_0)\setminus \overline{B_{s\lambda\eps_j}(x_0)}. \end{cases} \end{equation*} Using $\zeta_j$ as a test field for $m(u,B_{\lambda\eps_j}(x_0))$, by the locality of $F$ and its growth conditions in \eqref{e:growthF} \begin{multline*} m(u,B_{\lambda\eps_j}(x_0))\leq F(\zeta_j,B_{\lambda\eps_j}(x_0))\leq m(w_{\eps_j},B_{s\lambda\eps_j}(x_0))+\eps_j^2\\ +\underbrace{\beta\int_{B_{\lambda\eps_j}(x_0)}(1+|e(u)|^p)\,dx}_{I_j^{(4)}} +\underbrace{\beta\int_{\partial B_{s\lambda\eps_j}(x_0)\cap\{u\neq w_{\eps_j}\}} (1+|u-w_{\eps_j}|)d{\calH}^{1}}_{I_j^{(5)}}\\ +\underbrace{\beta\int_{(B_{\lambda\eps_j}(x_0)\setminus \overline{B_{s\lambda\eps_j}(x_0)})\cap J_{u}}(1+|[u]|)d{\calH}^{1}}_{I_j^{(6)}}. \end{multline*} The terms $I_j^{(4)}$ and $I_j^{(5)}$ are $o(\eps_j)$ by \eqref{e:cond2} and \eqref{e:coareasurf2b}, respectively. The term $I_j^{(6)}$ can be estimated thanks to \eqref{e:cond1}. Hence, we get by \eqref{e:FHuno} \begin{equation}\label{e:basta} \frac{dF(u,\cdot)}{d{\calH}^{1}\res J_u}(x_0)=\limsup_{j\to\infty}\frac{m(u,B_{\lambda\eps_j}(x_0))}{2\lambda\eps_j} \leq\limsup_{j\to\infty}\frac{m(w_{\eps_j},B_{s\lambda\eps_j}(x_0))}{2\lambda\eps_j}. \end{equation} Next, by choosing $s\in(0,1)$ for which \eqref{e:coareasurf2b} and \eqref{e:coareasurf1b} hold and $s\lambda>1$, we may use Lemma~\ref{lem3}(ii) to infer \begin{align} \nonumber m(w_{\eps_j},B_{s\lambda\eps_j}(x_0))\leq& m(w_{\eps_j},B_{\eps_j}(x_0)) + \beta\int_{B_{s\lambda\eps_j}(x_0)\setminus B_{\eps_j}(x_0)}(1+|e(w_{\eps_j})|^p)dx \\ & + \beta\int_{(B_{s\lambda\eps_j}(x_0)\setminus B_{\eps_j}(x_0))\cap J_{w_{\eps_j}}}(1+|[w_{\eps_j}]|) d{\calH}^{1} .\label{e:basta2} \end{align} Clearly, the first integral is $o(\eps_j)$ by Lemma~\ref{lem:refl} (ii), while the other one can be dealt with as $I_j^{(3)}$ in \eqref{e:Itrej}. Thus, \eqref{e:basta} and \eqref{e:basta2} give \[ \frac{dF(u,\cdot)}{d{\calH}^{1}\res J_u}(x_0)\leq\frac 1\lambda\lim_{j\to\infty} \frac{m(w_{\eps_j},B_{\eps_j}(x_0))}{2\eps_j}+(s\lambda-1)\beta(1+|[u](x_0)|). \] In conclusion, by taking into account \eqref{e:epsj2}, we deduce \eqref{e:wepsLB} by taking first the limit as $s\uparrow 1$, for $s\in(0,1)$ chosen as explained above, and then as $\lambda\downarrow 1$ in the latter inequality. \end{proof} We are now ready to show that the function $g$ in \eqref{eqdefg} is the surface energy density of $F$. This task shall be accomplished by proving two inequalities. \begin{lemma}\label{lem:surfUB} For ${\calH}^{1}$-a.e. 
$x_0\in J_u$, \begin{equation*} \frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0) \le g(x_0, u^+(x_0), u^-(x_0),\nu_u(x_0)) \end{equation*} where $g$ was defined in \eqref{eqdefg}. \end{lemma} \begin{proof} We consider the same $x_0$ as in Lemma \ref{lem:surftech}. In view of (\ref{e:mweps}) and the definition of $g$ in \eqref{eqdefg} it suffices to show that \begin{equation}\label{e:FHunob} \lim_{\eps\to 0}\frac{m(w_\eps,B_{\eps}(x_0))}{2\eps} \leq \limsup_{\eps\to 0}\frac{m(u_{x_0},B_{\eps}(x_0))}{2\eps}, \end{equation} where $w_\eps$ is the function introduced in Lemma \ref{lem:surftech}. To prove such a claim consider any sequence $(\eps_j)_j$, we have that for $\calL^1$-a.e. $s\in(0,1)$ \begin{equation}\label{e:coareasurf1c} \mu\big(\partial B_{s\eps_j}(x_0)\big)=\calH^1\big(\partial B_{s\eps_j}(x_0)\cap J_{w_j}\big)=0, \end{equation} where we have set $w_j:=w_{\eps_j}$. Fix $s\in(0,1)$ as above and a test field $z_j\in SBD^p(B_{s\eps_j}(x_0))$ with $z_j=u_{x_0}$ on $\partial B_{s\eps_j}(x_0)$ such that \[ F(z_j,B_{s\eps_j}(x_0))\leq m(u_{x_0},B_{s\eps_j}(x_0))+\eps_j^2. \] Consider a cut-off function $\varphi\in C^\infty_c(B_{\eps_j}(x_0),[0,1])$ such that $\varphi\equiv 1$ on $B_{s\eps_j}(x_0)$ and $\|\nabla \varphi\|_{L^\infty}\leq \frac2{(1-s)\eps_j}$. Define $\zeta_j:=\varphi\,z_j+(1-\varphi)w_j$, with the convention that $z_j$ is extended equal to $u_{x_0}$ outside $B_{s\eps_j}(x_0)$. Therefore, by using $\zeta_j$ as a test field for $m(w_j,B_{\eps_j}(x_0))$ we infer from the growth condition in \eqref{e:growthF} and the locality of $F$ \begin{multline}\label{e:stimasurf1} m(w_j,B_{\eps_j}(x_0))\leq F(\zeta_j,B_{\eps_j}(x_0))\leq F(z_j,B_{s\eps_j}(x_0))\\ +\underbrace{C\int_{B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0)} (1+|e(w_j)|^p)\,dx}_{=:I_j^{(7)}} +\underbrace{\frac{C}{((1-s)\eps_j)^p}\int_{B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0)}|w_j-u_{x_0}|^pdx}_{=:I_j^{(8)}}\\ +\underbrace{C\,{\calH}^{1}\big((B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0))\cap J_{\zeta_j}\big)}_{=:I_j^{(9)}} +\underbrace{C\int_{(B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0))\cap J_{\zeta_j}}|[\zeta_j]|d{\calH}^{1}}_{=:I_j^{(10)}}\\ \leq m(u_{x_0},B_{s\eps_j}(x_0))+\eps_j^2+I_j^{(7)}+I_j^{(8)}+I_j^{(9)}+I_j^{(10)}, \end{multline} with $C=C(\beta,p)>0$. By taking into account Lemma~\ref{lem:refl} (ii) and (v) we deduce that $I_j^{(7)}+I_j^{(8)}=o(\eps_j)$ as $j\to \infty$. Moreover, as \[ {\calH}^{1}((B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0))\cap J_{\zeta_j}\setminus (J_{u_{x_0}}\cup J_{w_j}))=0, \] item (i) in Lemma~\ref{lem:refl} together with \eqref{e:cond1} give \[ \limsup_{j\to\infty}\frac{I_j^{(9)}}{2\eps_j}\leq C(1-s)(1+|[u](x_0)|). \] Furthermore, for ${\calH}^{1}$-a.e. $x\in J_{\zeta_j}\cap (B_{\eps_j}(x_0)\setminus \overline{B_{s\eps_j}(x_0)})$ it holds \begin{equation*} |[\zeta_j]|\leq |[u_{x_0}]|\chi_{J_{u_{x_0}}\cap J_{\zeta_j}}+|[w_j]|\chi_{J_{w_j}\cap J_{\zeta_j}}\leq 2|[u_{x_0}]|\chi_{J_{\zeta_j}}+|[w_j]-[u_{x_0}]|\chi_{J_{w_j}}. \end{equation*} In turn the latter inequality implies by \eqref{e:cond1} and \eqref{e:coareasurf1c} \begin{multline*} \limsup_{j\to\infty}\frac{I_j^{(10)}}{2\eps_j}\leq C(1-s)|[u](x_0)|\\ +C\,\limsup_{j\to\infty}\frac{1}{2\eps_j} \int_{(B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0))\cap J_{\zeta_j}} (|{[w_j]-[u](x_0)}|)\,d{\calH}^{1}\\ \leq C(1-s)|[u](x_0)|, \end{multline*} thanks to item (vi) in Lemma~\ref{lem:refl}. 
Finally, we obtain from \eqref{e:stimasurf1} \begin{multline*} \liminf_{j\to\infty}\frac{m(w_j,B_{\eps_j}(x_0))}{2\eps_j}\leq s\limsup_{j\to\infty}\frac{m(u_{x_0},B_{s\eps_j}(x_0))}{2s\eps_j}+C(1-s)(1+|[u](x_0)|)\\ \leq s\limsup_{\eps\to0}\frac{m(u_{x_0},B_{\eps}(x_0))}{2\eps}+C(1-s)(1+|[u](x_0)|), \end{multline*} and the claim in \eqref{e:FHunob} follows at once by letting $s\to 1$ in the inequality above. \end{proof} The reverse inequality is established arguing in an analogous fashion, therefore we provide a more concise proof. \begin{lemma}\label{lem:surfLB} For ${\calH}^{1}$-a.e. $x_0\in J_u$, \begin{equation*} \frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0) \ge g(x_0, u^+(x_0), u^-(x_0),\nu_u(x_0)) \end{equation*} where $g$ was defined in \eqref{eqdefg}. \end{lemma} \begin{proof} We consider the same points $x_0$ as in Lemma \ref{lem:surftech}. Take any infinitesimal sequence $(\eps_j)_j$ such that \[ g(x_0, u^+(x_0), u^-(x_0),\nu_u(x_0))=\lim_{j\to\infty}\frac{m(u_{x_0},B_{\eps_j}(x_0))}{2\eps_j}, \] and recall that \eqref{e:coareasurf1c} is valid for $\calL^1$-a.e. $s\in(0,1)$ (as usual $w_j=w_{\eps_j}$). Having fixed such an $s$, let $z_j\in SBD^p(B_{s\,\eps_j}(x_0))$ with $z_j=w_j$ on $\partial B_{s\eps_j}(x_0)$ be such that \[ F(z_j,B_{s\eps_j}(x_0))\leq m(w_j,B_{s\eps_j}(x_0))+\eps_j^2. \] Let $\varphi\in C^\infty_c(B_{\eps_j}(x_0),[0,1])$ be a cut-off function such that $\varphi\equiv 1$ on $B_{s\eps_j}(x_0)$ and $\|\nabla \varphi\|_{L^\infty}\leq \frac2{(1-s)\eps_j}$. Define $ \zeta_j:=\varphi\,z_j+(1-\varphi)u_{x_0}$, with the convention that $z_j$ is extended equal to $w_j$ outside $B_{s\eps_j}(x_0)$. By using $\zeta_j$ as a test field for $m(u_{x_0},B_{\eps_j}(x_0))$ we infer from the growth condition in \eqref{e:growthF} and the locality of $F$ \begin{multline*} m(u_{x_0},B_{\eps_j}(x_0))\leq F(\zeta_j,B_{\eps_j}(x_0))\leq m(w_j,B_{s\eps_j}(x_0))+\eps_j^2\\ + C\int_{B_{\eps_j}(x_0)\setminus B_{s\,\eps_j}(x_0)}(1+|e(w_j)|^p)\,dx + \frac{C}{((1-s)\eps_j)^p}\int_{B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0)}|w_j-u_{x_0}|^pdx\\ + C\int_{(B_{\eps_j}(x_0)\setminus B_{s\eps_j}(x_0))\cap J_{\zeta_j}}(1+|[\zeta_j]|)d{\calH}^{1}, \end{multline*} where $C=C(\beta,p)>0$. Arguing as in the corresponding estimate in Lemma~\ref{lem:surfUB} (cf. \eqref{e:stimasurf1}), and by taking into account the choice of $(\eps_j)_j$ we conclude that \begin{multline*} g(x_0, u^+(x_0), u^-(x_0),\nu_u(x_0))\leq\liminf_{j\to\infty}\frac{m(w_j,B_{s\eps_j}(x_0))}{2\eps_j} +C(1-s)(1+|[u](x_0)|)\\ =s\frac{ dF(u,\cdot)}{d{\calH}^{1}\LL J_u} (x_0)+C(1-s)(1+|[u](x_0)|). \end{multline*} The last equality follows from \eqref{e:mweps}. The conclusion is achieved by letting $s\uparrow 1$ in the last inequality, with $s\in(0,1)$ satisfying \eqref{e:coareasurf1c}. \end{proof} \subsection{Proof of Theorem \ref{theorepr}} \begin{proof}[Proof of Theorem \ref{theorepr}] The conclusion straightforwardly follows by Lemmata \ref{lem:volUB}, \ref{lem:volLB}, \ref{lem:surfUB}, and \ref{lem:surfLB}. \end{proof} \begin{proposition}\label{prop:growth} The assertion in Theorem~\ref{theorepr} holds also if property (iv) is replaced by the weaker \begin{enumerate} \item[(iv')] There are $\alpha,\beta>0$ such that for any $u\in SBD^p(\Omega)$, any $B\in \calB(\Omega)$, \begin{align*} &\alpha \Bigl(\int_B |e(u)|^pdx + {\calH}^{1}(J_u\cap B)\Bigr) \le F(u,B)\notag\\ \le & \beta \Bigl(\int_B (|e(u)|^p+1)dx + \int_{J_u\cap B} (1+|[u]|) d{\calH}^{1}\Bigr). 
\end{align*} \end{enumerate} \end{proposition} \begin{proof} Given $F$ satisfying properties (i)-(iii) and (iv'), we define for $\delta>0$ a functional $F_\delta:SBD^p(\Omega)\times \calB(\Omega)\to[0,\infty)$ by $$F_\delta(u,B):=F(u,B)+\delta\int_{J_u\cap B}|[u]|d{\calH}^{1},$$ for $u\in SBD^p(\Omega)$ and $B\in\calB(\Omega)$. Since $F_\delta$ satisfies properties (i)-(iv), there are two functions $f$ and $g_\delta$ such that $F_\delta$ can be represented as in \eqref{e:fg}. The family of functionals $F_\delta$ is pointwise increasing in $\delta$, therefore there exists the pointwise limit $g$ of $g_\delta$ as $\delta\to 0$. We conclude that the representation \eqref{e:fg} holds for $F$ with densities $f$ and $g$. \end{proof} \begin{remark} Since $F$ is lower semicontinuous on $W^{1,p}$, the integrand $f$ is quasiconvex \cite{AcerbiFusco84,Marcellini1985}. Since $F$ is lower semicontinuous on piecewise constant functions, $g$ is $BV$-elliptic \cite{AmbrosioBraides1990a,AmbrosioBraides1990b}. \end{remark} \begin{remark} If the functional $F$ additionally obeys \[ F(u+I,B)=F(I,B), \] for every $u\in SBD^p(\Omega)$, every ball $B\subset\Omega$, and every affine function $I$ such that $e(I)=0$, then there are two functions $f:\Omega\times \R^{2\times 2}\to [0,\infty)$ and $g:\Omega\times\R^2\times S^1\to [0,\infty)$ such that \[ F(u,B)=\int_B f(x, e(u(x))) dx + \int_{B\cap J_u} g(x,[u](x),\nu_u(x)) d{\calH}^{1}\,. \] \end{remark} \begin{remark}\label{r:Fgradiente} A growth condition on the volume part of the type of \eqref{e:growthF} alone does not force the energy density to depend only on $e(u)$. As an example, the integrand $f:\R^{2\times 2}\to [0,\infty)$ defined by \[ f(\xi):={(\xi_{11}+\xi_{22})^2}+\sqrt{(\xi_{12}^2+\xi_{21}^2)^2+1}-2\det(\xi) \] satisfies \[ {\frac18}|\xi+\xi^T|^2\leq f(\xi)\leq \frac14|\xi+\xi^T|^2+1 \] for every $\xi\in\R^{2\times 2}$, but evidently $f(\xi)$ depends also on the skew-symmetric part $\xi-\xi^T$. At the same time, $f$ is quasiconvex. We do not know if there is $g$ such that the functional $F$ defined as in (\ref{e:fg}) satisfies the growth condition \eqref{e:growthF} and is lower semicontinuous. \end{remark} \section*{Acknowledgments} F.~Iurlano wishes to thank Gianni Dal Maso for an interesting discussion. This work was partially supported by the Deutsche Forschungsgemeinschaft through the Sonderforschungsbereich 1060 {\sl ``The mathematics of emergent effects''}, project A6. S.~Conti thanks the University of Florence for the warm hospitality of the DiMaI ``Ulisse Dini'', where part of this work was carried out. M.~Focardi and F.~Iurlano are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). \end{document}
\begin{document} \title[Shadows and Barriers]{Shadows and Barriers} \author{Martin Br\"uckerhoff \address[Martin Br\"uckerhoff]{Universit\"at M\"unster, Germany} \email{[email protected]} \hspace*{0.5cm} Martin Huesmann \address[Martin Huesmann]{Universit\"at M\"unster, Germany} \email{[email protected]} } \thanks{MB and MH are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 –390685587, Mathematics Münster: Dynamics–Geometry–Structure. } \begin{abstract} We show an intimate connection between solutions of the Skorokhod Embedding Problem which are given as the first hitting time of a barrier and the concept of shadows in martingale optimal transport. More precisely, we show that a solution $\tau$ to the Skorokhod Embedding Problem between $\mu$ and $\nu$ is of the form $\tau = \inf \{t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$ for some increasing process $(X_t)_{t \geq 0}$ and a barrier $\mathcal{R}$ if and only if there exists a time-change $(T_l)_{l \geq 0}$ such that for all $l \geq 0$ the equation $$\mathbb{P}[B_{\tau} \in \cdot , \tau \geq T_l] = \shadow{\nu}{\mathbb{P}[B_{T_l} \in \cdot , \tau \geq T_l]}$$ is satisfied, i.e.\ the distribution of $B_{\tau}$ on the event that the Brownian motion is stopped after $T_l$ is the shadow of the distribution of $B_{T_l}$ on this event in the terminal distribution $\nu$. This equivalence allows us to construct new families of barrier solutions that naturally interpolate between two given barrier solutions. We exemplify this by an interpolation between the Root embedding and the left-monotone embedding. \normalem \noindent\emph{Keywords:} Skorokhod embedding, shadows, martingale optimal transport \\ \emph{Mathematics Subject Classification (2020):} Primary 60G40, 60G42, 60J45. \end{abstract} \date{\today} \maketitle \section{Introduction} Let $\mu$ be a probability measure on $\mathbb{R}$ and $(B_t)_{t \geq 0}$ a $\mathcal{F}$-Brownian Motion with initial distribution $\mathrm{Law}(B_0) = \mu$ defined on a filtered probability space $(\Omega, \mathcal{A},\mathbb{P}, (\mathcal{F}_t)_{t \geq 0})$. We assume that the filtration $\mathcal{F} = (\mathcal{F}_t)_{t \geq 0}$ is right-continuous and completed w.r.t.\ $\mathbb{P}$. Given another probability measure $\nu$, a finite $\mathcal{F}$-stopping time $\tau$ is said to be a solution to the Skorokhod Embedding Problem w.r.t.\ $\mu$ and $\nu$, if \begin{equation} \tag{$\mathrm{SEP}(\mu,\nu)$} (B_{t \land \tau})_{t \geq 0} \text{ is uniformly integrable} \quad \text{and} \quad B_{\tau} \sim \nu. \end{equation} It is well known that there exists a solution to $\mathrm{SEP}(\mu,\nu)$ if and only if $\mu \leq_{c} \nu$, i.e.\ if we have $\int_{\mathbb{R}} \varphi \, \mathrm{d} \mu \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \nu$ for all convex functions $\varphi$. In general there exist many different solutions to $\mathrm{SEP}(\mu,\nu)$ (cf.\ \cite{Ob04}). \subsection*{Main Result} In this article, we focus on the subclass of ``barrier solutions'' to the Skorokhod Embedding Problem which includes for instance the Root embedding \cite{Ro69}, the Az\'{e}ma-Yor embedding \cite{AzYo79}, the Vallois embedding \cite{Va83}, and the left-monotone embedding \cite{BeHeTo17}. These solutions can be described as the first time the process $(X_t,B_t)_{t \geq 0}$ hits a barrier in $[0, \infty) \times \mathbb{R}$ (cf.\ Definition \ref{def:Intro}) where $X$ is monotonously increasing, non-negative and $\mathcal{F}$-adapted. 
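For instance, the Root embedding arises from the simplest such choice of level process, namely
\begin{equation*}
X^{r}_t := t, \qquad \tau = \inf\{t \geq 0 : (t,B_t) \in \mathcal{R}\},
\end{equation*}
while for the left-monotone embedding one may take the (time-constant) process $X^{lm}_t = \exp(-B_0)$; both examples are discussed in detail below.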
We show that these embeddings are closely related to the concept of shadows introduced by Beiglb\"ock and Juillet in \cite{BeJu16}.
\begin{definition} \label{def:Intro}
\begin{itemize}
\item [(i)] A set $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ is called a barrier if $\mathcal{R}$ is closed and for all $(l,x) \in \mathcal{R}$ and $l \leq l'$ we have $(l',x) \in \mathcal{R}$.
\item [(ii)] Let $\xi$ and $\zeta$ be finite measures on $\mathbb{R}$. We say that $\xi$ is a submeasure of $\zeta$ if $\xi[A] \leq \zeta[A]$ for all $A \in \mathcal{B}(\mathbb{R})$, denoted by $\xi \leq_{+} \zeta$.
\item [(iii)] Let $\eta$ and $\zeta$ be finite measures on $\mathbb{R}$. A finite measure $\xi$ that satisfies $\eta \leq_{c} \xi \leq_{+} \zeta$ and $\xi \leq_{c} \xi'$ for all $\xi'$ with $\eta \leq_{c} \xi' \leq_{+} \zeta$ is called the shadow of $\eta$ in $\zeta$ and is denoted by $\shadow{\zeta}{\eta}$.
\end{itemize}
\end{definition}
We want to mention that in the literature barriers defined as in Definition \ref{def:Intro}(i) are sometimes called ``right-barriers'' in contrast to ``left-barriers''. The shadow $\shadow{\zeta}{\eta}$ exists whenever the set of possible candidates is not empty, i.e.\ if there exists $\xi$ such that $\eta \leq_{c} \xi \leq_{+} \zeta$. This existence result was first shown by Rost \cite{Ro71}. Later Beiglb\"ock and Juillet \cite{BeJu16} rediscovered this object in the context of martingale optimal transport and coined the name shadow. In the following we use the notation $\mathrm{Law}(X;A)$ for the (sub-)probability measure which is given by the push-forward of $X$ under the restriction of $\mathbb{P}$ to the event $A$ (cf.\ Section \ref{ssec:Notation}).
\begin{theorem} \label{thm:intro}
Let $\mu \leq_{c} \nu$ and let $\tau$ be a solution of $\mathrm{SEP}(\mu,\nu)$. The following are equivalent:
\begin{itemize}
\item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that
\begin{equation*}
\tau = \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\} \quad a.s.
\end{equation*}
\item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have
\begin{equation} \label{eq:ShadowResid}
\mathrm{Law}(B_{\tau}; \tau \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)}.
\end{equation}
\end{itemize}
Moreover, we may choose $T_l = \inf \{t \geq 0: X_t \geq l\}$ or $X_t := \sup \{l \geq 0: T_l \leq t\}$ and $\mathcal{R} := \{ (l,x) \in [0, \infty) \times \mathbb{R} : U_{\mathrm{Law}(B_{T_l \land \tau})} (x) = U_{\nu}(x) \}$, respectively, where $U_\cdot$ denotes the potential function of a finite measure (see Definition \ref{def:PotentialFunction}).
\end{theorem}
\begin{remark}
We want to stress that this theorem also holds for randomized stopping times (cf.\ Theorem \ref{thm:MainEqui}). This concerns the implication $(ii) \Rightarrow (i)$, as part (i) already ensures that the randomized stopping time is induced by a (non-randomized) stopping time.
\end{remark} To the best of our knowledge, the only known connection between shadows and the Skorokhod Embedding Problem is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling (see below). Theorem \ref{thm:intro} shows that this connection is not by accident, but just a special case of an intimate connection between shadows and barrier solutions rooted in potential theory. \begin{figure} \caption{This is a sketch of the support and densities of the measures appearing in \eqref{eq:RootExpl} \label{fig:RootExp} \end{figure} \begin{figure} \caption{This is a sketch of the support and densities of the measure appearing in \eqref{eq:LMExpl} \label{fig:LmExp} \end{figure} Since the time change in Theorem \ref{thm:intro} is given by $T_l = \inf \{t \geq 0 : X_t \geq l\}$ it is straightforward to compute the time changes for well known examples. In the case of the Root-embedding, we have $X^{r}_t := t$, $T^r_l = l$ and Property \eqref{eq:ShadowResid} turns into \begin{equation}\label{eq:RootExpl} \mathrm{Law}(B_\tau; \tau \geq l) = \shadow{\nu}{\mathrm{Law}(B_l; \tau \geq l)} \end{equation} for all $l \geq 0$. The measure $\mathrm{Law}(B_\tau; \tau \geq l)$ is the projection of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ onto the second (the spatial) component. In the SEP context the joint law of $(\tau,B_\tau)$ describes when and where the Brownian motion is stopped. Since $\tau$ is a barrier stopping time the support of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ is on the boundary of the barrier intersected with $[l,\infty)\times \mathbb{R}$. This is depicted on the left hand side of Figure \ref{fig:RootExp}. By \eqref{eq:RootExpl} we can characterize this measure using information from time $l$ only. For each $l \geq 0$, it is given as the shadow of $\mathrm{Law}(B_l; \tau \geq l)$ in the prescribed terminal distribution $\nu$. We have a similar situation in the case of the left-monotone embedding. We have $X^{lm}_t = \exp(-B_0)$, \begin{equation*} T^{lm}_l = \begin{cases} 0 & \exp(-B_0) \geq l \\ + \infty &\exp(-B_0) < l \end{cases} \end{equation*} and Property \eqref{eq:ShadowResid} becomes \begin{equation} \label{eq:LMExpl} \mathrm{Law}(B_{\tau}; B_0 \leq -\ln(l)) = \shadow{\nu}{\mathrm{Law}(B_0; B_0 \leq - \ln(l) )} \end{equation} for all $l \geq 0$. Again the measure $\mathrm{Law}(B_\tau; \tau \geq l)$ is the projection of $\mathrm{Law}((\tau,B_{\tau}); \tau \geq l)$ onto the second component and in the SEP context the latter measure is supported on the boundary of the barrier after time $l$ (left side of Figure \ref{fig:LmExp}). Recall that in the left-monotone phase space, the Brownian motion is only moving vertically. This time the characterization of $\mathrm{Law}(B_\tau; \tau \geq l)$ via the shadow of $\mathrm{Law}(B_0; B_0 \leq - \ln(l) )$ into $\nu$ is completely independent of $\tau$. In particular, \eqref{eq:LMExpl} yields that $\tau$ is the left-monotone embedding of $\mu$ into $\nu$ if and only if $(B_0,B_{\tau})$ is the left-curtain coupling of $\mu$ and $\nu$ (cf.\ \cite{BeJu16}). The shadow $\shadow{\nu}{\eta}$ of a measure $\eta$ in the probability measure $\nu$, is the most concentrated (in the sense of $\leq_{c}$) submeasure of $\nu$ which can be reached by an embedding of $\eta$ into $\nu$ via a (randomized) $\mathcal{F}$-stopping time (cf.\ Lemma \ref{lemma:ConvOrder}). 
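To illustrate how the shadow concentrates mass, consider for instance $\eta = \tfrac12 \delta_0$ and $\nu = \tfrac12 \delta_{-1} + \tfrac12 \delta_{1}$. Every candidate $\xi$ with $\eta \leq_{c} \xi \leq_{+} \nu$ has total mass $\tfrac12$, barycenter $0$ and is concentrated on $\{-1,1\}$, which forces
\begin{equation*}
\shadow{\nu}{\eta} = \tfrac14 \delta_{-1} + \tfrac14 \delta_{1},
\end{equation*}
i.e.\ the mass of $\eta$ is pushed onto the closest available locations in $\nu$, in the least spread-out way that the constraint $\xi \leq_{+} \nu$ allows.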
Hence, Theorem \ref{thm:intro} characterizes in general barrier solutions as those solutions $\tau$, for which there exists a random time given by $(T_l)_{l \geq 0}$ such that for all $l \geq 0$ the mass which is not stopped before $T_l$ under $\tau$, is allocated by $\tau$ as concentrated as possible in the target distribution $\nu$ without interference with the mass that is stopped before $T_l$. \subsection*{Interpolation} If the time-change $(T_l)_{l \geq 0}$ is measurable w.r.t.\ the completion of the natural filtration $\mathcal{F}^B$ generated by the Brownian motion (as it is the case for the Root-embedding and the left-monotone embedding), we can assume that the Brownian motion $B$ is defined on the canonical path space $\Omega = C([0, \infty))$ and we can consider the natural shift operator $\theta$. In this case, for all $\lambda \in (0,\infty)$ we obtain an interpolation $(R^\lambda_l)_{l \geq 0}$ between two $\mathcal{F}$-time-changes $(T_l ^1)_{l \geq 0}$ and $(T_l^2)_{l \geq 0}$ by \begin{equation*} R^\lambda_l := T^1 _{l \land \lambda} + (T^2_{l-\lambda} \circ \theta_{T^1_\lambda}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} T_l^1 & l \leq \lambda \\ T_\lambda^1 + T^2 _l \circ \theta_{T^1_\lambda} & l > \lambda \end{cases}. \end{equation*} For the Root time-change $(T_l ^{r})_{l \geq 0}$ and the left-monotone time-change $(T_l^{lm})_{l \geq 0}$ the interpolation becomes \begin{equation*} R^{\lambda} _l := T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} l & l \leq \lambda \\ l & \exp(-B_\lambda) + \lambda \geq l > \lambda \\ + \infty & \exp(-B_\lambda) + \lambda < l, l > \lambda \end{cases}. \end{equation*} A solution $\tau ^{\lambda}$ to $\mathrm{SEP}(\mu,\nu)$ that satisfies property \eqref{eq:ShadowResid} w.r.t.\ $(R^\lambda _l)_{l \geq 0}$, is by Theorem \ref{thm:intro} a barrier solution w.r.t.\ the level-process \begin{equation} \label{eq:lvlPrc} X_t ^\lambda := \sup \{l \geq 0 : R_l^\lambda \leq t\} = \begin{cases} t & t < \lambda \\ \lambda + \exp(-B_0) & t \geq \lambda \end{cases}. \end{equation} A natural guess is that $\lambda \mapsto \tau ^{\lambda}$ is a reasonable interpolation between the left-monotone embedding ($\lambda \uparrow + \infty$) and the Root embedding $(\lambda \downarrow 0)$. This is indeed the case: \begin{proposition} \label{prop:Interpolation} Let $\lambda \in (0, \infty)$. We define the stochastic process $(X^{\lambda}_t)_{t \geq 0}$ as in \eqref{eq:lvlPrc}. There exists a barrier $\mathcal{R}^{\lambda} \subset [0, \infty) \times \mathbb{R}$ such that the first hitting time \begin{equation*} \tau ^{\lambda} := \inf \{t \geq 0: (X_t ^{\lambda}, B_t) \in \mathcal{R}^{\lambda} \} \end{equation*} is a solution to $\mathrm{SEP}(\mu,\nu)$. Moreover, $\mathrm{Law}(B,\tau ^\lambda)$ (as a measure on $\Omega \times [0, \infty))$, converges weakly to $\mathrm{Law}(B,\tau ^r)$ as $\lambda \rightarrow \infty$ and, if $\mu$ is atomless, converges weakly to $\mathrm{Law}(B,\tau ^{lm})$ as $\lambda \rightarrow 0$. \end{proposition} \begin{figure} \caption{The sketch of two sample paths of $(X^\lambda_t,B_t)_{t \in [0,\tau ^{\lambda} \end{figure} \begin{remark} The choice of the Root embedding and the left-monotone embedding as the endpoints of the interpolation is partially arbitrary. As long as both time-changes are $\mathcal{F}^B$-measurable, this procedure can be applied to any two barrier solutions to obtain a new mixed barrier solution (see Lemma \ref{lemma:Nesting}). 
The continuity and convergence is then a question of the stability properties of the corresponding embeddings. Other approaches to interpolate (in some sense) between two different barrier solutions can be found in \cite{CoHo07} and \cite{GaObZo19}. \end{remark} \subsection*{Multi-Marginal Embeddings} Theorem \ref{thm:intro} can be extended to the case that the barrier solution is ``delayed'', in the sense that the solution can be written as the first hitting time of a barrier after it surpassed a fixed stopping time $\sigma$. \begin{proposition} \label{prop:ShiftedThm} Let $\tau$ be a $\mathcal{F}$-stopping-time that solves $\mathrm{SEP}(\mu,\nu)$. Let $\sigma \leq \tau$ be another $\mathcal{F}$-stopping time. The following are equivalent: \begin{itemize} \item [(i)] There exists a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$ for all $l \geq 0$, and a closed barrier $\mathcal{R} \subset [0, \infty) \times \mathbb{R}$ such that \begin{equation*} \tau = \inf \{t \geq \sigma : (X_t,B_t) \in \mathcal{R} \} \quad a.s. \end{equation*} \item [(ii)] There exists a left-continuous $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ such that for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)}. \end{equation*} \end{itemize} \end{proposition} Motivated by financial applications, there has been an increased interest in the multi-marginal Skorokhod Embedding Problem and in particular in multi-marginal barrier solutions (cf.\ \cite{BeCoHu17b, NuStTa17}). Since this is essentially a sequence of delayed barrier solutions, we can extend Theorem \ref{thm:intro} to this case by an inductive application of Proposition \ref{prop:ShiftedThm}. \begin{corollary} \label{cor:MultiMarginal} Let $\mu \leq_{c} \nu _1 \leq_{c} ... \leq_{c} \nu_n$ be greater than $\mu$ in convex order and $\tau _1 \leq ... \leq \tau _n$ an increasing sequence of uniformly integrable $\mathcal{F}$-stopping times such that $\tau _i$ is a solution to $\mathrm{SEP}(\mu, \nu_i)$ for all $1 \leq i \leq n$. The following are equivalent: \begin{itemize} \item [(i)] There exists a suitable process $(X_t)_{t \geq 0}$, and closed barriers $\mathcal{R}^1,...,\mathcal{R}^n \subset [0, \infty) \times \mathbb{R}$ such that \begin{align*} \tau ^1 &= \inf\{ t \geq 0: (X_t,B_t) \in \mathcal{R}^1\} \quad\text{ and} \\ \tau ^i &= \inf\{ t \geq \tau ^{i-1}: (X_t,B_t) \in \mathcal{R}^i\} \quad \text{for all } 1 \leq i \leq n. \end{align*} \item [(ii)] There exists a suitable time-change $(T_l)_{l \geq 0}$, such that for all $l\geq 0$ we have \begin{align*} \mathrm{Law}(B_{\tau ^1}; \tau ^1 \geq T_l) &= \shadow{\nu _1}{\mathrm{Law}(B_{T_l}; \tau^1 \geq T_l)} \quad\text{ and} \\ \mathrm{Law}(B_{\tau ^i}; \tau ^i \geq \tau^{i-1} \lor T_l) &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau ^{i-1} \lor T_l}; \tau ^i \geq \tau^{i-1} \lor T_l)} \quad \text{for all } 1 \leq i \leq n. \end{align*} \end{itemize} \end{corollary} \subsection*{Another Perspective on Theorem \ref{thm:intro}} We will prove Theorem \ref{thm:intro} in Section \ref{sec:MainResult} using potential theory. 
However, there is an alternative point of view on this theorem using Choquet-type representations of the barrier stopping time $\tau$ and the terminal law $\mathsf{Law}(B_\tau)$ of the stopped process. The most primitive version of a barrier embedding is a first hitting time of the form \begin{equation*} \tau^F := \inf \{t \geq 0 : B_t \in F\} = \inf \{ t \geq 0 : (t,B_t) \in [0, \infty) \times F\} \end{equation*} where $F \subset \mathbb{R}$ is a closed set. The terminal distribution $\mathrm{Law}(B_{\tau ^F})$ w.r.t.\ this stopping time can be characterized using the notion of Kellerer dilations. Given a closed set $F \subset \mathbb{R}$ the Keller dilation is defined as the probability kernel \begin{equation} \label{eq:KellererDilation} K^F(x,dy) = \begin{cases} \frac{x^+ - x}{x^+ - x^-} \, \mathrm{d}lta x_- + \frac{x - x^-}{x^+ - x^-} \, \mathrm{d}lta_{x^-} \quad & x \not \in F \\ \, \mathrm{d}lta_x & x \in F \end{cases} \end{equation} where $x^+ = \inf (F \cap [x, \infty])$ and $x^- = \sup (F \cap (-\infty,x])$. As a direct consequence of \cite[Satz 25]{Ke73}, for every closed set $F \subset \mathbb{R}$ a stopping time $\tau$ satisfies $\tau = \tau ^F$ a.e.\ if and only if $\mathrm{Law}(B_\tau) = \mathrm{Law}(B_0)K^F$. The main idea behind Theorem \ref{thm:intro} is now the following: In the same way that a barrier solution $\tau$ can be represented as a composition of first hitting times $(\tau ^{F_t})_{t \geq 0}$ for an increasing family of closed sets $(F_t)_{t \geq 0}$, the terminal law $\mathsf{Law}(B_\tau)$ w.r.t.\ a stopping time $\tau$ satisfying the shadow relation \eqref{eq:ShadowResid} can be represented using Kellerer dilations $(K^{F_a})_{a \in [0,1]}$ for an increasing family of closed sets $(F_a)_{a \in [0,1]}$. Since for fixed $F$, $\tau ^F$ and $K^F$ are in a one-to-one correspondence, these two representation -one on the level of stopping times and one on the level of target distributions- are two sides of the same coin. In fact, up to reparametrization of the index set, these two families can be chosen identical. Let us explain these two representations in more detail. To keep the notation simple, we will only consider the case of the Root-embedding ($X_t^r = t, T_l ^r = l$). For all $\mathcal{F}^B$-stopping times $\tau_1, \tau _2$ and $s \geq 0$, we define the composition \begin{equation*} C_{s}(\tau _1, \tau _2) := \tau _1 \land s + \tau _2 \circ \theta_{\tau _1 \land s} \end{equation*} where $\theta$ denotes the shift operator on the path space. The composition $C_{s}(\tau _1, \tau _2)$ is again a stopping time. We also inductively define the stopping times \begin{equation*} C_{s_1, ... , s_n}(\tau_1, ... , \tau _n) := C_{s_n}(C_{s_1, ... , s_{n-1}}(\tau _1, ... , \tau _{n-1}), \tau _n). \end{equation*} for $0 \leq s_1 \leq ... \leq s_n$. For all $s > 0$ and for all closed sets $F$, we have $C_s(\tau^F,\tau^F) = \tau^F$. Conversely, if there exists $s \geq 0$ and stopping times $\tau_1, \tau_2$ s.t.\ $\tau^F = C_s(\tau_1,\tau_2)$ for a closed set $F \subset \mathbb{R}$, then $\tau_1 \land s = \tau ^F \land s$ and $\tau _2 \circ \theta_{\tau _1 \land s} = \tau^F \circ \theta_{\tau _1 \land s}$. Therefore, stopping times of the form $\tau^F$ are ``extremal'' or ``atomic'' w.r.t.\ the composition operation $C$. \begin{lemma} Let $\tau$ be a stopping time. The following are equivalent: \begin{itemize} \item[(i)] There exists a right-barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ s.t.\ $\tau = \inf \{t \geq 0 : (t,B_t) \in \mathcal{R}\}$. 
\item[(ii)] There exists an increasing family of closed sets $(F_t)_{t \geq 0}$ such that \begin{equation*} \tau = \lim _{n \rightarrow \infty} C_{2^{-n}, ... , n} (\tau ^{F_{2^{-n}}}, ... , \tau ^{F_n}). \end{equation*} \end{itemize} In this case a possible right-barrier is given by $\mathcal{R} := \overline{\bigcup _{t \geq 0} [t, \infty) \times F_t}$. \end{lemma} The proof of this equivalence is straightforward using the continuity of Brownian motion. We omit the details. On the level of measures, we obtain a similar representation of the shadow. For two probability measures $\zeta_1, \zeta _2$ and all $\alpha \in [0,1]$ the convex combination $(1- \alpha) \zeta_1 + \alpha \zeta_2$ is again a probability measure. By a result of Kellerer \cite[Theorem 1]{Ke73}, for every probability measure $\eta$ the extremal elements of the convex set $\{ \zeta : \eta \leq_{c} \zeta\}$ are given by $\left\{ \eta K^F \, : \, F \subset \mathbb{R} \text{ closed}\right\}$. \begin{lemma} Let $\tau$ be a stopping time and set $l_b := \sup \{l \geq 0: \mathbb{P}[\tau \geq l] \geq b\}$ for $b \in [0,1]$. The following are equivalent: \begin{itemize} \item [(i)] For all $l \geq 0$ we have $\mathrm{Law}(B_\tau; \tau \geq l) = \shadow{\nu}{\mathrm{Law}(B_l; \tau \geq l)}$. \item [(ii)] There exists an increasing family of closed sets $(F_a)_{a \in [0,1]}$ such that \begin{equation*} \mathrm{Law}(B_\tau) = \int _0 ^1 \eta_{1-a} K^{F_a} \, \mathrm{d} a \end{equation*} where the probability measures $\eta_a$ are defined by $\eta _a := \lim_{\varepsilon \rightarrow 0} \varepsilon ^{-1} \left( \overline{\eta}^{a + \varepsilon} - \overline{\eta}^a \right)$, $a \in [0,1]$, and $\overline{\eta}^\alpha := \mathrm{Law}(B_\tau; \tau \geq l_\alpha) - \frac{\mathbb{P}[\tau \geq l_\alpha] - \alpha}{\mathbb{P}[\tau = l_a]}\mathrm{Law}(B_\tau; \tau = l_\alpha)$ for $\alpha \in [0,1]$. \end{itemize} In this case we have $\shadow{\nu}{\mathrm{Law}(B_l; \tau \geq l_b)} = \int_0 ^b \eta_{1-a} K^{F_a} \, \mathrm{d} a$ for all $b \in [0,1]$. \end{lemma} Similar to \cite[Proposition 2.7]{BeJu16b} one can show that (i) implies (ii). The reversed implication is an application of Lemma \ref{lemma:ShadowDecomp}. We leave the details to the reader. \section{Related Literature} The Skorokhod Embedding Problem goes back to Skorokhod's work \cite{Sk65} in 1965. After his own solution to the embedding problem, this problem gained considerable attention in the literature and a wide range of different embeddings exploiting different mathematical tools were found. The survey \cite{Ob04} alone covers more than 20 different solutions. Moreover, several interesting variants of the Skorokhod Embedding are considered. Recently, there is an increased interest in a variant of Skorokhod Embedding Problem, which asks embeddings to minimize or maximize a predetermined cost function of space and time. This variant of the Skorokhod Embedding Problem has a direct connection to robust mathematical finance which was first noticed by Hobson \cite{Ho98}. For further background we refer to \cite{Ho03}. A novel mathematical exploration of properties of the optimal Skorokhod Embedding Problem in combination with optimal transport can be found in \cite{BeCoHu17}. Further variants are for instance the extensions to the embedding of multiple distributions (cf.\ \cite{BeCoHu17b}) and to higher dimensions (cf.\ \cite{GhKiPa19}). 
Among the first solutions to the Skorokhod Embedding Problem was Root's construction \cite{Ro69} of a barrier solution in the time-space phase space in 1969. Shortly after, Rost \cite{Ro76} proved that the Root-embedding is the unique embedding which has minimal variance among all other embeddings and provided an alternative construction of this embedding based on the potential theory for Markov processes. The Root-embedding and properties of the corresponding barrier are still subject of current research \cite{CoWa13,GaObZo19}. Moreover, the Root-embedding was recently used to construct a counterexample to the Cantelli-conjecture \cite{KlKu15}. The Root-embedding is presumably the most prominent barrier solution to the Skorokhod Embedding Problem. However, there are several other embeddings which can be characterized as first hitting times of barriers in a different phase space \cite{AzYo79, Va83}. The shadow for finite measures on the real line was introduced by Beiglböck and Juillet \cite{BeJu16} as the main tool in their construction of the left-curtain coupling. Thereby, they showed important properties as the associativity law and continuity, and coined the name shadow. Nevertheless, the essential concept of the shadow as well as its existence in a very broad framework already appeared in \cite{Ro71}. The shadow is used to study properties of the left-curtain coupling (cf.\ \cite{BeJu16}, \cite{Ju14}, \cite{HoNo17}, \cite{HoNo21}). Furthermore, the shadow can be used to construct and characterize a whole family of martingale couplings on the real line \cite{BeJu16b}, as well as finite-step martingales \cite{NuStTa17} and solutions to the peacock problem \cite{BrJuHu20}. To the best of our knowledge, the only known connection with the Skorokhod Embedding Problem so far is implicitly through the left-monotone embedding because it is uniquely characterized by the property that the induced martingale coupling between the initial and the terminal marginal distribution is precisely the left-curtain coupling. \section{Preliminary Results} \subsection{Notation} \label{ssec:Notation} $\Omega$ is a Polish space equipped with the Borel $\sigma$-algebra, $\mathcal{F}$ is a right-continuous filtration on $\Omega$ and $B$ is a $\mathcal{F}$-Bownian motion on the complete filtered probability space $(\Omega, \mathcal{B}(\Omega), \mathbb{P}, \mathcal{F})$. We use the notation $\mathrm{Law}(X;A)$ for the (sub-)probability measure which is given by the push-forward of the random variable $X$ under the restriction of $\mathbb{P}$ to the Borel set $A$. Alternatively, we sometimes use the notation $X_{\#}(\mathbb{P}_{|A})$ for this object. Further, we denote the set of finite (resp.\ probability) measures on a measurable space $\mathsf{X}$ by $\mathcal{M}(\mathsf{X})$ (resp.\ $\mathcal{P}(\mathsf{X})$). In the case $\mathsf{X} = \mathbb{R}$, we denote by $\mathcal{M}_1(\mathbb{R})$ (resp.\ $\mathcal{P}_1(\mathbb{R})$) the subset of finite (resp.\ probability) measures with finite first moment. We equip $\mathcal{M}_1(\mathbb{R})$ with the initial topology generated by the functionals $(I_f)_{f \in C_b(\mathbb{R}) \cup \{\vert \cdot \vert\}}$ where \begin{equation*} I_f : \mathcal{M}_1(\mathbb{R}) \ni \pi \mapsto \int _{\mathbb{R}} f \, \mathrm{d} \pi \in \mathbb{R}, \end{equation*} $C_b(\mathbb{R})$ is the set of continuous and bounded functions, and $\vert \cdot \vert$ denotes the absolute value function. We denote this topology on $\mathcal{M}_1(\mathbb{R})$ by $\mathcal{T}_1$. 
Finally, we define two order relations on $\mathcal{M}_1(\mathbb{R})$. We say that $\mu \in \mathcal{M}_1(\mathbb{R})$ is smaller than or equal to $\mu ' \in \mathcal{M}_1(\mathbb{R})$ in convex order, $\mu \leq_{c} \mu'$, if \begin{equation} \label{eq:OrderRelation} \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu ' \end{equation} holds for all convex $\varphi$ and $\mu$ is smaller than or equal to $\mu'$ in positive order, $\mu \leq_{+} \nu$, if \eqref{eq:OrderRelation} holds for all non-negative $\varphi$. \subsection{Randomized Stopping Times} The product space $\Omega \times [0,\infty)$ equipped with the product topology and Borel $\sigma$-algebra is again a Polish space. \begin{definition} A randomized stopping time (RST) w.r.t.\ $\mathbb{P}$ is a subprobability measure $\xi$ on $\Omega \times [0, \infty)$ such that the projection of $\xi$ onto $\Omega$ is $\mathbb{P}$ and there exists a disintegration $(\xi_\omega)_{\omega \in \Omega}$ of $\xi$ w.r.t.\ $\mathbb{P}$ such that \begin{equation} \label{eq:DecompRST} \rho _u : \omega \mapsto \inf\{t \geq 0 : \xi_\omega[0,t] \geq u\} \end{equation} is an $\mathcal{F}$-stopping time for all $u \in [0,1]$. We call a RST $\xi$ finite, if $\xi$ is a probability measure. \end{definition} We equip the space of RST with the topology of weak convergence of measures on $\Omega \times [0, \infty)$, i.e.\ the continuity of functionals $\xi \mapsto \int \varphi \, \mathrm{d} \xi$ for all $\varphi \in C_b(\Omega \times [0, \infty))$. The RST-property is closed under this topology (cf.\ \cite[Corollary 3.10]{BeCoHu17}). Any $\mathcal{F}$-stopping time $\tau$ naturally induces a RST by $\xi^{\tau} := \mathrm{Law}_{\mathbb{P}}(B,\tau)$. Conversely, we can represent any randomized stopping time as a usual stopping time by enlarging the filtration. \begin{lemma} [{\cite[Theorem 3.8]{BeCoHu17}}] \label{lemma:ReprRST} For every RST $\xi$ there exists an $(\mathcal{B}([0,1]) \times \mathcal{F}_t)_{t \geq 0}$-stopping-time $\overline{\tau} ^\xi$ on the probability space $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$ where $\overline{\mathbb{P}}$ is the product of the Lebesque measure and $\mathbb{P}$ such that \begin{equation*} \xi = \mathrm{Law}_{\overline{\mathbb{P}}}(\overline{\mathsf{Id}},\overline{\tau}^\xi) \end{equation*} where $\overline{\mathsf{Id}} : (u,\omega) \mapsto \omega$. Moreover, $\overline{B} : (u, \omega) \mapsto B(\omega)$ is a Brownian motion on $([0,1] \times \Omega, \mathcal{B}([0,1] \times \Omega), \overline{\mathbb{P}})$. \end{lemma} This representation is useful to justify the application of known theorems of stopping times to RST and will be used in the following. For further literature on randomized stopping times we refer to \cite{BeCoHu17} and references therein. Provided that $\mathrm{Law}(B_0) = \mu \leq_{c} \nu$, we say that $\xi$ is a solution of $\mathrm{SEP}(\mu,\nu)$ if \begin{align*} \sup _{s \geq 0} \int _{\Omega \times [0,\infty)} B_{s \land t} \, \mathrm{d} \xi(\omega,t) < + \infty \quad \text{and} \quad ((\omega,t) \mapsto B_t(\omega))_{\#} \xi = \nu. \end{align*} If $\xi$ is induced by a $\mathcal{F}$-stopping time $\tau$, this definition is consistent with the definiton of $\mathrm{SEP}(\mu,\nu)$ in the introduction. Especially in Section \ref{sec:MainResult} we will use the notational convention that $(\omega,t)$ always refers to an element of $\Omega \times [0, \infty)$. 
In particular, we will write $\xi[t \geq X]$ instead of $\xi[\{(\omega,t) : t \geq X(\omega)\}]$ where $X$ is a random variable and $\xi$ a RST. \subsection{Potential Theory} \label{ssec:PotentialTheory} Potential Theory is known to be a useful tool when dealing with barrier solutions (cf.\ \cite{Ro76}, \cite{CoWa13}) and the shadow (cf.\ \cite{Ro71}, \cite{BeJu16}). Since it is also a central part of our proof of Theorem \ref{thm:intro}, we recall some results below. \begin{definition} \label{def:PotentialFunction} Let $\eta \in \mathcal{M}_1$. The potential function of $\eta$ is defined by \begin{equation*} U_\eta : \mathbb{R} \rightarrow [0, \infty) \quad U_\eta (x) := \int _{\mathbb{R}} |y - x| \, \mathrm{d} \eta (y). \end{equation*} \end{definition} Since elements of $\mathcal{M}_1$ have finite first moments, the potential function is always well-defined. \begin{lemma}[{cf.\ \cite[Proposition 4.2]{BeJu16}, \cite[p.\ 335]{Ob04}}] \label{lemma:ConvOrder} Let $\mu, \nu \in \mathcal{P}_1(\mathbb{R})$. The following are equivalent: \begin{itemize} \item [(i)] $\mu \leq_{c} \nu$ \item [(ii)] $U_\mu \leq U_\nu$ \item [(iii)] There exists a solution to $\mathrm{SEP}(\mu,\nu)$. \end{itemize} \end{lemma} The equivalence between (i) and (ii) is not restricted to probability measures. Since both the convex order and the order of the potetntial functions are invariant w.r.t.\ scaling with positive factors, for all $\eta, \zeta \in \mathcal{M}_1$ with $\eta(\mathbb{R}) = \zeta(\mathbb{R})$ we have $\eta \leq_{c} \zeta$ if and only if $U_{\eta} \leq U_\zeta$ . \begin{lemma}[{cf.\ \cite[Proposition 4.1]{BeJu16}}] \label{lemma:characPotF} Let $m \in [0,\infty)$ and $x^* \in \mathbb{R}$. For a function $u:\mathbb{R} \rightarrow \mathbb{R}$ the following statements are equivalent: \begin{enumerate} \item [(i)] There exists a finite measure $\mu \in \mathcal{M}_1$ with mass $\mu(\mathbb{R}) = m$ and barycenter $x^* = \int _{\mathbb{R}} x \, \mathrm{d} \mu (x)$ such that $U_\mu = u$ . \item [(ii)] The function $u$ is non-negative, convex and satisfies \begin{equation} \label{eq:characPotF} \lim _{x \rightarrow \pm \infty} u(x) - m|x - x^*| = 0. \end{equation} \end{enumerate} Moreover, for all $\mu, \mu' \in \mathcal{M}_1$ we have $\mu = \mu'$ if and only if $U_\mu = U_{\mu'}$. \end{lemma} \begin{lemma} \label{lemma:PropPotf} Let $\eta$ be a positive measure on $\mathbb{R}$. If there exists an $\varepsilon > 0$ such that $U_{\eta}$ is affine on $[x-\varepsilon, x+ \varepsilon]$, $x \not \in \mathrm{supp}(\eta)$. \end{lemma} \begin{proof} The claim follows from the observation that the potential function of the measure $\eta$ satisfies $\frac{1}{2} U_\eta '' = \eta$ in a distributional sense (cf.\ \cite[Proposition 2.1]{HiRo12}). \end{proof} \begin{corollary} \label{cor:EquaPotfToZero} Let $\mu \leq \nu$ and $\tau$ be a solution to $\mathrm{SEP}(\mu,\nu)$. We have \begin{equation*} \mathbb{P}[\tau > 0, U_{\mu}(B_0) = U_{\nu}(B_0)] = 0. \end{equation*} \end{corollary} \begin{proof} Let $A := \{x \in \mathbb{R} :U_{\mu}(x) = U_{\nu}(x)\}$ and set $\eta := \mathrm{Law}(B_{0}; B_{0} \in A)$. Fubini's Theorem yields \begin{align*} 0 = \int U_{\nu} - U_{\mu} \, \mathrm{d} \eta = \mathbb{E}[U_{\eta}(B_\tau) - U_{\eta}(B_0)] = \mathbb{E}\left[\left(U_{\eta}(B_\tau) - U_{\eta}(B_0)\right) \mathds{1}_{\{\tau > 0\}}\right]. 
\end{align*} Since $U_\eta$ is a convex function and $(B_{t \land \tau})_{t \geq 0}$ is a uniformly integrable martingale, the (conditional) Jensen inequality yields that $U_\eta$ is $\mathbb{P}$-a.s.\ affine at $B_0$ on the set $\tau > 0$. Hence, by Lemma \ref{lemma:PropPotf} the claim follows. \end{proof} \begin{lemma} \label{lemma:T1Conv} Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{M}_1(\mathbb{R})$. The following are equivalent: \begin{itemize} \item [(i)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent and there exists a finite measure $\eta \in \mathcal{M}_1(\mathbb{R})$ such that \begin{equation*} \int _{\mathbb{R}} \varphi \, \mathrm{d} \mu_n \leq \int _{\mathbb{R}} \varphi \, \mathrm{d} \eta \end{equation*} for all non-negative convex $\varphi$. \item [(ii)] The sequence $(\mu_n)_{n \in \mathbb{N}}$ is convergent under $\mathcal{T}_1$. \item [(iii)] The sequence of potential functions is pointwise convergent and the limit is the potential function of a finte measure. \end{itemize} \end{lemma} \begin{proof} For the equiavlence of (ii) and (iii) and the implication (i)$\mathbb{R}ightarrow$(ii) we refer to \cite[Lemma 3.6]{BrJuHu20} and \cite[Lemma 3.3]{BrJuHu20}. It remains to show that (ii) implies (i). Since $\mathcal{T}_1$ is by definition stonger than the weak topology, $(\mu_n)_{n \in \mathbb{N}}$ is weakly convergent. Moreover, by \cite[Proposition 7.1.5]{AmGiSa08} the convergence in $\mathcal{T}_1$ implies that \begin{equation*} \limsup _{K \rightarrow \infty} \sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \mathds{1}_{\{\vert x \vert \geq K\}} \, \mathrm{d} \mu_n(x) = 0. \end{equation*} Hence, there exists a sequence $(K_m)_{m \in \mathbb{N}}$ with $K_{m+1} \geq K_m \geq 1$ such that $$\sup _{n \in \mathbb{N}} \int _{\mathbb{R}} |x| \mathds{1}_{\{\vert x \vert \geq K_m\}} \, \mathrm{d} \mu_n(x) \leq 2^{-m}$$ for all $m \in \mathbb{N}$. The measure \begin{equation*} \eta := \sum _{m = 1} ^{\infty} \sup _{n \in \mathbb{N}} \mu_n \left( [-K_m,-K_{m-1}] \cup [K_{m-1},K_m] \right) \left( \, \mathrm{d}lta _{-K_m} + \, \mathrm{d}lta_{K_m} \right) \end{equation*} is an element of $\mathcal{M}_1(\mathbb{R})$ which satisfies the desired properties. \end{proof} \subsection{Shadows} Recall the definition of the shadow in Definition \ref{def:Intro}. As direct consequences of this definition we obtain that \begin{equation*} \eta \leq_{+} \nu \mathbb{R}ightarrow \shadow{\nu}{\eta} = \eta \quad \text{and} \quad \eta \leq_{c} \eta' \mathbb{R}ightarrow \shadow{\nu}{\eta} \leq_{c} \shadow{\nu}{\eta'}. \end{equation*} In the following we collect further properties of the shadow. \begin{lemma}[{\cite[Theorem 4.8]{BeJu16}}] \label{lemma:ShadowAssz} Let $\eta := \eta _1 + \eta _2 \leq_{c} \nu$, the shadow of $\eta_2$ in $\nu - \shadow{\nu}{\eta_1}$ exists and we have \begin{equation*} \shadow{\nu}{\eta} = \shadow{\nu}{\eta _1} + \shadow{\nu - \shadow{\nu}{\eta _1}}{\eta _2}. \end{equation*} \end{lemma} The statement in Lemma \ref{lemma:ShadowAssz} is the ``associativity law'' for shadows already mentioned in the introduction. \begin{corollary} \label{lemma:CharShad} Let $\mu \leq_{c} \nu$ be probability measures and $A \subset \mathbb{R}$ a Borel set such that $\mu(A) > 0$. 
If a solution $\tau$ of $\mathrm{SEP}(\mu,\nu)$ satisfies \begin{equation*} \forall \tau' \text{ solution of } \mathrm{SEP}(\mu,\nu) \, : \, \mathrm{Law}(B_{\tau}; B_0 \in A) \leq_{c} \mathrm{Law}(B_{\tau'}; B_0 \in A) , \end{equation*} we have $\mathrm{Law}(B_\tau; B_0 \in A) = \shadow{\nu}{\mu _{|A}}$. \end{corollary} \begin{proof} If $\alpha := \mu(A) = 1$, there is nothing to show because $\shadow{\nu}{\mu_{|A}} = \nu = \mathrm{Law}(B_\tau; B_0 \in A)$. Assume $\alpha < 1$. Since $\tau$ is a solution to $\mathrm{SEP}(\mu,\nu)$, we have \begin{equation*} \mu_{|A} = \mathrm{Law}(B_0; B_0 \in A) \leq_{c} \mathrm{Law}(B_\tau; B_0 \in A) \leq_{+} \nu \end{equation*} and hence we obtain $\shadow{\nu}{\mu_{|A}} \leq_{c} \mathrm{Law}(B_\tau; B_0 \in A)$. It remains to show that also the reversed relation holds. By definition of the shadow, we have $\mu_{|A} \leq_{c} \shadow{\nu}{\mu_{|A}}$ and Lemma \ref{lemma:ConvOrder} yields that there exists a solution $\tau^A$ to $\mathrm{SEP}(\alpha^{-1}\mu_{|A}, \alpha^{-1}\shadow{\nu}{\mu_{|A}})$. By Lemma \ref{lemma:ShadowAssz} it is \begin{equation*} \mu_{|A^c} \leq_c \nu - \shadow{\nu}{\mu_{|A}} \end{equation*} and again Lemma \ref{lemma:ConvOrder} yields the existence of a solution $\tau ^{A^c}$ to $\mathrm{SEP}((1-\alpha)^{-1}\mu_{|A^c}, (1-\alpha)^{-1}(\nu - \shadow{\nu}{\mu_{|A}}))$. Since $\{B_0 \in A\} \in \mathcal{F}_0$, \begin{equation*} \tau' := \tau ^A \mathds{1}_{\{B_0 \in A\}} + \tau ^{A^c} \mathds{1}_{\{B_0 \not \in A\}} \end{equation*} is a solution to $\mathrm{SEP}(\mu,\nu)$ and thus \begin{equation*} \mathrm{Law}(B_{\tau}; B_0 \in A) \leq_{c} \mathrm{Law}(B_{\tau'}; B_0 \in A) = \alpha \mathrm{Law}(B_{\tau^A}) = \shadow{\nu}{\mu_{|A}}. \qedhere \end{equation*} \end{proof} \begin{corollary} \label{cor:ShadowOnEqualPart} Let $\mu \leq_{c} \nu$ and $\tau$ be a solution to $\mathrm{SEP}(\mu,\nu)$. Let $A \in \mathcal{F}_0$ such that $U_{\mu}(B_0) = U_{\nu}(B_0)$ on $A$. Then \begin{equation*} \shadow{\nu}{\mathrm{Law}(B_0; A^c)} = \mathrm{Law}(B_\tau; A^c). \end{equation*} \end{corollary} \begin{proof} Set $I := \{ x \in \mathbb{R} : U_{\mu}(x) < U_{\nu}(x)\}$. Since $I$ is the collection of irreducible components of $(\mu,\nu)$ (cf.\ \cite[Section A.1]{BeJu16} ), for any solution $\tau'$ of $\mathrm{SEP}(\mu,\nu)$, the stopped process $(B_{\tau' \land s})_{s \geq 0}$ stays in the irreducible component that it started in. Hence, the measure $\mathrm{Law}(B_{\tau'};B_0 \in I)$ is independent of the specific solution $\tau'$. By Lemma \ref{lemma:CharShad}, we obtain \begin{equation} \label{eq:IrredShadow} \mathrm{Law}(B_{\tau'}; B_{0} \in I) = \shadow{\nu}{\mathrm{Law}(B_{0}; B_{0} \in I)} \end{equation} for any solution $\tau'$ of $\mathrm{SEP}(\mu,\nu)$. Since $\{B_0 \in I\} \subset A^c$ and $\tau = 0$ on $\{B_0 \not \in I\}$ by Corollary \ref{cor:EquaPotfToZero}, we obtain \begin{align*} \mathrm{Law}(B_0; B_0 \not \in I, A^c) = \mathrm{Law}(B_\tau; B_0 \not \in I, A^c) \leq_+ \mathrm{Law}(B_\tau; B_0 \not \in I). \end{align*} Thus, with Lemma \ref{lemma:ShadowAssz} and \eqref{eq:IrredShadow} we obtain \begin{align*} \shadow{\nu}{\mathrm{Law}(B_0; A^c)} &= \shadow{\nu}{\mathrm{Law}(B_0; B_0 \in I)} + \shadow{\nu - \shadow{\nu}{\mathrm{Law}(B_0; B_0 \in I)}}{\mathrm{Law}(B_0; B_0 \not \in I, A^c)} \\ &= \mathrm{Law}(B_\tau ; B_0 \in I) + \shadow{\mathrm{Law}(B_\tau; B_0 \not \in I)}{\mathrm{Law}(B_0; B_0 \not \in I, A^c)} \\ &= \mathrm{Law}(B_\tau ; B_0 \in I) + \mathrm{Law}(B_\tau; B_0 \not \in I, A^c) = \mathrm{Law}(B_\tau ; A^c). 
\qedhere \end{align*} \end{proof} The connection of shadows to potential theory is through the following characterization of the potential functions of the shadow. \begin{lemma}[{\cite[Theorem 2]{BeHoNo20}}] \label{lemma:PotfShad} Let $\hat{\mu} \leq \mu \leq_{c} \nu$. The potential function of the shadow $\shadow{\nu}{\hat{\mu}}$ is given by \begin{equation*} U_{\shadow{\nu}{\hat{\mu}}} = U_{\nu} - \mathrm{conv} \left( U_{\nu} - U_{\hat{\mu}} \right) \end{equation*} where $\mathrm{conv}(f)$ denotes the convex hull of a function $f$, i.e. the largest convex function that is pointwise smaller than $f$. \end{lemma} \begin{lemma} [{\cite[Lemma 1]{BeHoNo20}}] \label{lemma:PropConv} Let $f$ be a continuous function bounded by an affine function from below. If $x \in \mathbb{R}$ satisfies $(\mathrm{conv}(f))(x) < f(x)$, there exists an $\varepsilon > 0$ such that $\mathrm{conv}(f)$ is affine on $[x - \varepsilon, x + \varepsilon]$. \end{lemma} \begin{lemma} \label{lemma:ShadowDecomp} Let $(\mu_a)_{a \in [0,1]}$ be a family of probability measures, $(F_a)_{a \in [0,1]}$ a decreasing sequence of closed subsets of $\mathbb{R}$ and set $\nu = \int _0 ^1 \mu_a K^{F_a} \leq_{+} \nu$. For all $b \in [0,1]$ we have \begin{equation*} \mathcal{S}^{\nu}\left(\int _0 ^b \mu_a \, \mathrm{d} a\right) = \int _0 ^b \mu_a K^{F_a} \, \mathrm{d} a. \end{equation*} \end{lemma} \begin{proof} Let $\eta, \zeta \in \mathcal{M}_1(\mathbb{R})$ and $F \subset \mathbb{R}$ a closed set with $\mathrm{supp}(\zeta) \subset F$. Since we have \begin{equation*} \eta \leq_{c} \eta K^F \leq_{+} \eta K^F + \zeta, \end{equation*} we obtain $\shadow{\eta K^F + \zeta}{\eta} \leq_{c} \eta K^F$. Conversely, we also have \begin{equation*} \eta K^F \leq_{c} \eta K^{\mathrm{supp}(\eta K^F + \zeta)} \leq_{c} \shadow{\eta K^F + \zeta}{\eta} \end{equation*} because $\mathrm{supp}(\eta K^F + \zeta) \subset F$ and by definition $\eta K^{\mathrm{supp}(\eta K^F + \zeta)}$ is the smallest measure in convex order which dominates $\eta$ in convex order and is supported on $\mathrm{supp}(\eta K^F + \zeta)$ (cf.\ \eqref{eq:KellererDilation}). Hence, we have $ \shadow{\eta K^F + \zeta}{\eta} = \eta K^F$. Furthermore, for all $n \in \mathbb{N}$, $\mu_1, \ldots , \mu_n \in \mathcal{M}_1$ and closed sets $F_1, \ldots , F_n \subset \mathbb{R}$ we can apply this equality to get \begin{equation*} \mu_1 K^{F_1} = \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1} \end{equation*} and with Lemma \ref{lemma:ShadowAssz} we inductively obtain \begin{align*} &\shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k}} \\ &= \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} \\ & \quad \quad + \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n} - \shadow{\mu_1 K ^{F_1} + \ldots + \mu_n K^{F_n}}{\mu _1 + \ldots + \mu_{k-1}} }{\mu_{k}} \\ &= \mu _1K^{F_1} + \ldots + \mu_{k-1}K^{F_{k-1}} + \shadow{\mu_{k} K ^{F_{k}} + \ldots + \mu_n K^{F_n}}{\mu_{k}} \\ &= \mu_1 K ^{F_1} + \ldots + \mu_k K^{F_k} \end{align*} for all $2 \leq k \leq n$. Since the map $(\mu,\nu) \mapsto \shadow{\nu}{\mu}$ is continuous under $\mathcal{T}_1$ (cf.\ \cite{Ju14}), the claim follows. \end{proof} \section{Proof of the Main Result} \label{sec:MainResult} We split the proof of Theorem \ref{thm:intro} in three parts. In Subsection \ref{ssec:adjoint} we show that the assumptions on the time-change and the level process in Theorem \ref{thm:intro} correspond to each other. 
In Subsection \ref{ssec:AprioriBound} we construct for every solution of the Skorokhod Embedding Problem an upper bound in the form of a barrier solution and we prove in Subsection \ref{ssec:ActualProof} that this upper bound is attained if and only if the properties of Theorem \ref{thm:intro} are satisfied. \subsection{Monotonously Increasing Processes} \label{ssec:adjoint} \begin{definition} Two monotonously increasing and non-negative families of random variables $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ are adjoint if $\mathbb{P}[X_t \geq l \Leftrightarrow T_l \leq t] = 1$ for all $l,t \geq 0$. \end{definition} \begin{remark} If $(X_t)_{t \geq 0}$ is right-continuous or $(T_l)_{l \geq 0}$ left-continuous and both families are adjoint, we have $\mathbb{P}[\forall l,t \geq 0 : X_t \geq l \Leftrightarrow T_l \leq t] = 1$. \end{remark} \begin{lemma} \label{lemma:ExAdjoint} \begin{itemize} \item [(i)] Let $(X_t)_{t \geq 0}$ be a right-continuous $\mathcal{F}$-adapted stochastic process which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t : X_s = X_t = l] = 0$ for all $l \geq 0$. Then, the family $(T_l)_{l \geq 0}$ defined by \begin{equation*} T_l := \inf \{t \geq 0 : X_t \geq l \} \end{equation*} is a left-continuous $\mathcal{F}$-time change with $T_0 = 0$, $T_\infty = + \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$ which is adjoint to $(X_t)_{t \geq 0}$. \item [(ii)] Let $(T_l)_{l \geq 0}$ be a left-continuous $\mathcal{F}$-time-change with $T_0 = 0$, $T_\infty = \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] = 1$ for all $l \geq 0$. Then. the family $(X_t)_{t \geq 0}$ defined by \begin{equation*} X_t := \sup \{l \geq 0 : T_l \leq t\} \end{equation*} is a right-continuous $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ which is non-negative, monotonously increasing and satisfies $\mathbb{P}[\exists s < t: X_s = X_{t} = l] = 0$ for all $l \geq 0$, and which is adjoint to $(T_l)_{l \geq 0}$. \end{itemize} \end{lemma} \begin{proof} Item (i): Let $t,l \geq 0$. If $X_t \geq l$, $T_l \leq t$ directly by definition. Conversely, if $T_l \leq t$, for all $u > t$ we obtain $X_u \geq l$ and thus $X_t = \lim _{u \downarrow t} X_u \geq l$ by right-continuity of $X$. Hence, $(T_l)_{l \geq 0}$ is adjoint to $(X_t)_{t \geq 0}$. Clearly, $(T_l)_{l \geq 0}$ is monotonously increasing. Since $(T_l)_{l \geq 0}$ and $(X_t)_{t \geq 0}$ are adjoint, the symmetric difference $\{T_l \leq t\} \triangle \{X_t \geq l\}$ is a $\mathbb{P}$-null-set and therefore contained in the completed filtration $\mathcal{F}_t$. Thus, $(T_l)_{l \geq 0}$ is a $\mathcal{F}$-time-change. Since $X_t$ is non-negative and finite, we obtain $T_0 = 0$ and $T_{\infty} = + \infty$. Moreover, $l \mapsto T_l$ is left-continuous by definition. Furthermore, we have $\mathbb{P}[ \lim _{k \downarrow l} T_k > T_l ] \leq \mathbb{P}[\exists s < t : X_s = X_{t} = l] = 0$. Item (ii): Basically the same just in reverse. \end{proof} \textbf{In the following} we fix a $\mathcal{F}$-adapted stochastic process $(X_t)_{t \geq 0}$ and an adjoint $\mathcal{F}$-time-change $(T_l)_{l \geq 0}$ that satisfy the properties listed in Lemma \ref{lemma:ExAdjoint}. \subsection{A-priori Bound} \label{ssec:AprioriBound} Let $B$ be a Brownian motion that starts in $\mu$. Fix a randomized stopping time $\xi$ that is a solution to $\mathrm{SEP}(\mu,\nu)$. 
To simplify notation we will use the following notation for measures derived from $\xi$ \begin{equation} \label{eq:DefRST2} \mathrm{Law}(B_{\sigma \land \xi}) := ((\omega,t) \mapsto B_{\sigma(\omega) \land t}(\omega))_{\#} \xi \end{equation} where $\sigma$ is an $\mathcal{F}$-stopping time. We set $u(l,x) := U_{\mathrm{Law}(B_{T_l \land \xi})}(x)$ and $v(x) := U_{\mathrm{Law}(\nu)}$ for $l \geq 0$ and $x \in \mathbb{R}$. In this part we will show that $\xi$ is bounded from above by the stopping time \begin{equation*} \hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\}, \end{equation*} i.e.\ we have $\xi[t \leq \hat{\tau}] = 1$. Since $u$ depends on $\xi$, $\hat{\tau}$ is obviously not a global bound for all solutions to $\mathrm{SEP}(\mu,\nu)$. Nevertheless, Lemma \ref{lemma:uCont} implies that $\hat{\tau}$ is a barrier solution. \begin{lemma} \label{lemma:uCont} The function $u$ is continuous and monotonously increasing in the first component. Moreover, for all $x \in \mathbb{R}$ we have $v(x) = \lim_{l \rightarrow \infty} u(l,x)$. \end{lemma} \begin{proof} For all $x \in \mathbb{R}$ and $l \leq l'$, by Lemma \ref{lemma:ReprRST} we have \begin{align*} u(l',x) = \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l'} \land \overline{\tau}^\xi} - x| \right] \geq \overline{\mathbb{E}} \left[ |\overline{B}_{T_{l} \land \overline{\tau}^\xi} - x| \right] = u(l,x) \end{align*} because $\mathrm{Law}_{\overline{\mathbb{P}}}(\overline{B}_{T_{l} \land \overline{\tau}^\xi})_{l \geq 0}$ is increasing in convex order by the optional stopping theorem. We chose $(T_l)_{l \geq 0}$ such that, for fixed $l_0 \geq 0$, $l \mapsto T_l$ is $\mathbb{P}$-a.s.\ continuous at $l_0$. Hence, $l \mapsto \mathrm{Law}(B_{T_l \land \xi})$ is weakly continuous and by Lemma \ref{lemma:T1Conv}, $u$ is continuous in the fist component because $\mathrm{Law}(B_{T_l \land \xi}) \leq_c \nu$ for all $l \geq 0$. Furthermore, $u$ is $1$-Lipschitz continuous in the second component because $u(l,\cdot)$ is the potential function of $\mathrm{Law}(B_{T_l \land \xi})$. \end{proof} \begin{lemma} \label{lemma:EquaToLeq} Let $l \geq 0$ and $\sigma$ be a finite $\mathcal{F}$-stopping time. It is \begin{equation*} \xi \left[ u(l,B_{\sigma}) = v(B_{\sigma}), t > \sigma \geq T_l \right] = 0. \end{equation*} \end{lemma} \begin{proof} This is a direct consequence of Lemma \ref{lemma:ReprRST} and Corollary \ref{cor:EquaPotfToZero}. \end{proof} \begin{proposition} \label{prop:EquaToLeq} Let $\sigma$ be a finite $\mathcal{F}$ stopping time with $\mathbb{P}[u(X_{\sigma},B_{\sigma}) = v(B_{\sigma})] = 1$. We have $\xi[t \leq \sigma] = 1$. \end{proposition} \begin{proof} Let $r(x) := \inf \{l \geq 0 : u(l,x) = v(x)\}$ for all $x \in \mathbb{R}$ and let \begin{equation} \label{eq:DefOfL} L := \{ r(x) : x \in \mathbb{R}, \exists \varepsilon > 0 \text{ s.t. } r(x) \leq r(y) \text{ f.a.\ } y \in (x-\varepsilon,x+\varepsilon)\} \end{equation} be the value set of all local minima of $r$. The set $L$ is countable. Indeed, setting $I_{p,q} := \{ x \in (p,q) : r(x) \leq r(y) \text{ f.a.\ } y \in (p,q)\}$, we have $L = \bigcup _{(p,q) \in \mathbb{Q}^2} r(I_{p,q})$ where $r(I_{p,q})$ is either empty or a singleton. 
Since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ and $X_\sigma = l \mathbb{R}ightarrow T_l \leq \sigma$ $\mathbb{P}$-a.s., we obtain \begin{align*} \xi[t > \sigma, X_{\sigma} \in L] &= \sum _{l \in L} \xi[t > \sigma, u(X_{\sigma},B_{\sigma}) = v(B_{\sigma}), X_{\sigma} = l] \\ &\leq \sum _{l \in L} \xi[t > \sigma \geq T_l, u(l,B_{\sigma}) = v(B_{\sigma})] \end{align*} and the r.h.s.\ is equal to $0$ by Lemma \ref{lemma:EquaToLeq}. It remains to show that $\xi[t > \sigma, X_{\sigma} \not \in L] = 0$. To this end, we define $[l]_n := \max \{ i/2^n : i \in \mathbb{N}, i/2^n \leq l \}$ for all $n \in \mathbb{N}$ and $l \geq 0$, and \begin{equation*} \sigma ^n := \inf \{t \geq 0: u([X_t]_n,B_t) = v(B_t) \}. \end{equation*} We claim that \begin{equation} \label{eq:AuxClaim} \mathbb{P}\left[X_{\sigma} \not \in L, \sigma < \inf _{n \in \mathbb{N}} \sigma ^n\right] = 0. \end{equation} Admitting \eqref{eq:AuxClaim}, since for all $n \in \mathbb{N}$ the function $t \mapsto u([X_t]_n,B_t)$ is right-continuous, we have a.s.\ $u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma_n})$ and hence \eqref{eq:AuxClaim} yields \begin{align*} \xi[t > \sigma, X_{\sigma} \not \in L] &\leq \xi\left[ t > \inf _{n \in \mathbb{N}} \sigma^n \right] \\ &\leq \sum _{n \in \mathbb{N}} \xi[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n})] \\ &= \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi \left[ t > \sigma^n, u([X_{\sigma^n}]_n,B_{\sigma^n}) = v(B_{\sigma^n}), \frac{i}{2^n} \leq X_{\sigma ^n} < \frac{i+1}{2^n} \right] \\ &\leq \sum _{n \in \mathbb{N}} \sum _{i = 0} ^{\infty} \xi[ t > \sigma^n \geq i/2^n, u(i/2^n,B_{\sigma^n}) = v(B_{\sigma^n})]. \end{align*} By Lemma \ref{lemma:EquaToLeq}, these summands are zero for all $n,i \in \mathbb{N}$. We are left with verifying \eqref{eq:AuxClaim}. By the definition of $L$ in \eqref{eq:DefOfL}, we see that for every pair $(l,x)$ where $l \not \in L$ and $x \in \mathbb{R}$ with $u(l,x) = v(x)$, there exists a sequence $(x_n)_{n \in \mathbb{N}}$ that converges to $x$ such that $u([l]_n,x_n) = v(x_n)$ for all $n \in \mathbb{N}$ large enough. Indeed, since $u(l,x) = v(x)$, it is $r(x) \leq l$ which leaves us with two cases: If $r(x) < l$, we just need to choose $n$ large enough such that $r(x) \leq [l]_n \leq l$. If $r(x) = l \not \in L$, $x$ cannot be a local minimum of $r$, therefore there exists a sequence $(x_m)_{m \in \mathbb{N}}$ that converges to $x$ with $r(x_m) < l$ and we just need to choose an appropriate subsequence $(x_{m_n})_{n \in \mathbb{N}}$ such that $r(x_m) \leq [l]_{n_m} \leq l$. Thus, since $u(X_\sigma,B_\sigma) = v(B_\sigma)$ $\mathbb{P}$-a.s., we obtain for $\mathbb{P}$-a.e.\ $\omega$ \begin{align*} X_\sigma(\omega) \not \in L \quad &\mathbb{R}ightarrow \quad \forall \, \mathrm{d}lta > 0 \, \exists n \in \mathbb{N} \, \exists y \in \mathcal{B}_\, \mathrm{d}lta (B_{\sigma}(\omega)) \, : u([X_\sigma(\omega)]_n,y) = v(y) \end{align*} where $\mathcal{B}_{\, \mathrm{d}lta}(x)$ denotes the open ball of radius $\, \mathrm{d}lta$ around $x$. 
Hence, for all $\varepsilon > 0$ we have \begin{equation} \label{eq:NastyIneq} \begin{split} &\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_t]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\ \leq \,&\mathbb{P}[\forall n \in \mathbb{N} \, \forall t \in (\sigma, \sigma + \varepsilon) : u([X_\sigma]_n,B_t) < v(B_t), X_{\sigma} \not \in L ] \\ \leq \,& \mathbb{P}[\forall \, \mathrm{d}lta > 0 \, \exists y\in \mathcal{B}_\, \mathrm{d}lta (B_{\sigma}) \, \forall t \in (\sigma, \sigma + \varepsilon) : B_t \neq y]. \end{split} \end{equation} where we used the monotonicity of $u$ in the first component (cf.\ Lemma \ref{lemma:uCont}). By the strong Markov property and the continuity of Brownian motion, we can bound the last term in \eqref{eq:NastyIneq} by the sum of $\mathbb{P}[\forall t \leq \varepsilon : B_t \leq 0]$ and $\mathbb{P}[\forall t \leq \varepsilon : B_t \geq 0]$, and this is clearly $0$. Since $\varepsilon > 0$ is arbitrary, \eqref{eq:AuxClaim} is shown. \end{proof} Recall that $\hat{\tau} := \inf\{ t \geq 0 : u(X_t,B_t) = v(B_t)\}$ where $u(l,\cdot)$ is the potential function of $\mathrm{Law}(B_{\xi \land T_l})$ and $v$ is the potential function of $\nu$. \begin{corollary} \label{cor:Leq} We have a.s.\ $\xi[t \leq \hat{\tau}] = 1$. If $\xi$ is induced by an $\mathcal{F}$-stopping time $\tau$, we have $\tau \leq \hat{\tau}$. \end{corollary} \begin{proof} Since $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, we obtain $\mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1$ and therefore we can apply Proposition \ref{prop:EquaToLeq}. \end{proof} \subsection{Proof of Theorem \ref{thm:MainEqui}} \label{ssec:ActualProof} Recall once again the properties of $(X_t)_{t \geq 0}$ and $(T_l)_{l \geq 0}$ formulated at the end of subsection \ref{ssec:adjoint}. Let $\xi$ be a RST which is a solution to $\mathrm{SEP}(\mu,\nu)$. Additionally to $\mathrm{Law}(B_{T_l \land \xi})$ (cf.\ \eqref{eq:DefRST2}), we introduce notation for the measures \begin{equation} \label{eq:DefRST3} \begin{split} \mathrm{Law}(B_{\xi}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_t(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}} \quad \text{and} \\ \mathrm{Law}(B_{T_l}; \xi \geq T_l) &:= ((\omega,t) \mapsto B_{T_l(\omega)}(\omega))_{\#} \xi _{\vert \{t \geq T_l (\omega)\}} \end{split} \end{equation} The following Lemma \ref{lemma:ShadToSupp2} is the main observation that allows us to show in Lemma \ref{lemma:ShadToEqua} a counterpart to the upper bound stated in Corollary \ref{cor:Leq}. \begin{lemma} \label{lemma:ShadToSupp2} Let $l \geq 0$ and suppose that $\xi$ satisfies \begin{equation} \label{eq:ShaodwProp} \mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}. \end{equation} For all $x \in \mathbb{R}$, if $u(l,x) < v(x)$, $x \not \in \mathrm{supp}(\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l))$. \end{lemma} \begin{proof} Fix $l \geq 0$. By \eqref{eq:DefRST2} and \eqref{eq:DefRST3}, we have \begin{equation} \label{eq:DefinitionId} \mathrm{Law}(B_{T_l \land \xi}) - \mathrm{Law}(B_{T_l}; \xi \geq T_l) = \nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l). \end{equation} Hence, Lemma \ref{lemma:PotfShad} and \eqref{eq:ShaodwProp} yield \begin{align*} v - u(l, \cdot) &= U_{\mathrm{Law}(B_{\xi}; \xi \geq T_l)} - U_{\mathrm{Law}(B_{T_l}; \xi \geq {T_l})} \\ &= v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} - \mathrm{conv} \left( v - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right). 
\end{align*} Let $x \in \mathbb{R}$ with $u(l,x) < v(x)$. By Lemma \ref{lemma:PropConv}, there exists an $\varepsilon > 0$ such that on the interval $[x- \varepsilon, x + \varepsilon]$ the function \begin{align*} \mathrm{conv} \left( v - U _{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} \right) \end{align*} is affine. Rewriting with Lemma \ref{lemma:PotfShad} and \eqref{eq:DefinitionId}, we obtain that the function \begin{align*} u(l, \cdot) - U_{\mathrm{Law}(B_{T_l}; \xi \geq T_l)} = U_{\mathrm{Law}(B_{T_l \land \xi}) - \mathrm{Law}(B_{T_l}; \xi \geq T_l)} = U_{\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l)} \end{align*} is affine around $x$. Hence, Lemma \ref{lemma:PropPotf} yields that $x \not \in \mathrm{supp}(\nu - \mathrm{Law}(B_{\xi}; \xi \geq T_l))$. \end{proof} \begin{lemma} \label{lemma:ShadToEqua} If $\xi$ satisfies $\mathrm{Law}(B_{\xi}; \xi \geq T_l) = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \xi \geq T_l)}$ for all $l \geq 0$, we have $\xi\left[ u(X_t, B_{t}) < v(B_{t}) \right] = 0$. \end{lemma} \begin{proof} Let $\varepsilon > 0$. Since both $u(0,\cdot)$ and $v$ are potential functions of probability measures with the same mass and barycenter, they are continuous and their difference vanishes at $\pm \infty$. Hence, there exists an $M_1 \in \mathbb{N}$ such that $v(x) - u(0,x) \leq \frac{\varepsilon}{2}$ for all $|x| \geq M_1$. On the compact interval $[-M_1,M_1]$ the monotone increasing sequence $(u(l,\cdot))_{l \geq 0}$ converges pointwise to $v$ (see Lemma \ref{lemma:uCont}). Dini's theorem yields that there exists $M_2 \in \mathbb{N}$ such that $\sup _{x \in [-M_1,M_1]} v(x) - u(l,x) \leq \frac{\varepsilon}{2}$ for all $l \geq M_2$. Moreover, since $u$ is jointly continuous on the compact interval $[0,M_2] \times [-M_1,M_1]$,there exists an $n \in \mathbb{N}$ such that \begin{equation*} \forall \, x \in \mathbb{R} \ \forall \, 0 \leq l \leq l' \leq l + \frac{1}{2^n} \, : \quad u(l',x) - u(l,x) \leq \frac{\varepsilon}{2}. \end{equation*} For this $n$, we obtain \begin{align*} \xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t) \right] &= \sum _{i = 1} ^{\infty} \xi \left[ u(X_t, B_t) + \varepsilon \leq v(B_t), \frac{i-1}{2^n} \leq X_t < \frac{i}{2^n} \right] \\ &\leq \sum _{i = 1} ^{\infty} \xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right]. \end{align*} For each $i \in \mathbb{N}$, the summands on the r.h.s.\ are $0$ because Lemma \ref{lemma:ShadToSupp2} yields \begin{align*} \xi \left[ u\left(\frac{i}{2^n}, B_t\right) < v(B_t), X_t < \frac{i}{2^n} \right] = \xi \left[ u\left(\frac{i}{2^n}, B_t \right) < v(B_t), t < T_{\frac{i}{2^n}} \right] = 0 \end{align*} Since $\varepsilon >0$ is arbitrary, the claim follows. \end{proof} \begin{lemma} \label{lemma:RBtoEqua} Let $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ be a closed barrier and $\tau$ an $\mathcal{F}$-stopping time. If $\tau = \inf \{t \geq 0: (X_t,B_t) \in \mathcal{R} \}$ $\mathbb{P}$-a.s., we have $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$. \end{lemma} \begin{proof} For all $(l,x) \in \mathcal{R}$, the Brownian motion $B$ cannot pass through $x$ on $[T_l \land \tau, \tau]$. Indeed, if $ t \in (T_l \land \tau, \tau]$ it is $X_t \geq l$ and since $\tau$ is by assumption the first time the process $(X_t,B_t)_{t \geq 0}$ hits the barrier $\mathcal{R}$, the Brownian motion is stopped at latest when it reaches $[l, \infty) \times \{x\} \subset \mathbb{R}$. 
Hence, we have $(B_{\tau} - x)(B_{\tau \land T_l} - x) \geq 0$ $\mathbb{P}$-a.s., and thus we obtain $$u(l,x) = \mathbb{E}[|B_{T_l \land \tau }-x|] = \mathbb{E}[|B_{\tau} - x|] = v(x).$$ Since $u$ is continuous (cf.\ Lemma \ref{lemma:uCont}) and $t \mapsto (X_t,B_t)$ is right-continuous, we get $\mathbb{P}[u(X_{\tau},B_{\tau}) = v(B_{\tau})] = 1$. \end{proof} \begin{lemma} \label{lemma:InfimumToEqual} Let $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$. For all $l \geq 0$ we have $\mathbb{P}[\hat{\tau} < T_l, u(l,B_{T_l \land \hat{\tau}}) < v(B_{T_l \land \hat{\tau}})] = 0$. \end{lemma} \begin{proof} By Lemma \ref{lemma:uCont}, $u$ is continuous and $t \mapsto (X_t,B_t)$ is $\mathbb{P}$-a.s.\ right-continuous, therefore we obtain from the definition of $\hat{\tau}$ \begin{equation*} \mathbb{P}[u(X_{\hat \tau}, B_{\hat \tau}) = v(B_{\hat \tau})] = 1. \end{equation*} Let $l \geq 0$. Since $(X_t)_{t \geq 0}$ is adjoint to $(T_l)_{l \geq 0}$, we obtain \begin{equation*} \hat{\tau} < T_l \quad \mathbb{R}ightarrow \quad X_{\hat{\tau}} < l \quad \mathbb{P}\text{a.s.} \end{equation*} Since $u$ is also monotonously increasing in the first component and it is $B_{T_l \land \hat{\tau}} = B_{\hat{\tau}}$ on the set $\{\hat{\tau} < T_l\}$, the claim follows. \end{proof} Recall the definitions from the end of Subsection \ref{ssec:adjoint}: \begin{theorem} \label{thm:MainEqui} The following are equivalent: \begin{itemize} \item [(i)] There exists a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that $\xi$ is induced by the $\mathcal{F}$-stopping time $\tau := \inf \{ t \geq 0 : (X_t,B_t) \in \mathcal{R}\}$. \item [(ii)] For all $l \geq 0$ we have $\mathrm{Law}(B_{\xi}; \xi > T_l) = \shadow{\nu}{\mathrm{Law}(B_{\xi}; \xi > T_l)}$. \item [(iii)] $\xi$ is induced by the $\mathcal{F}$ stopping time $\hat{\tau} := \inf\{t \geq 0 : u(X_t,B_t) = v(B_t)\}$. \end{itemize} \end{theorem} \begin{proof} \textit{(i) $\mathbb{R}ightarrow$ (iii):} By Lemma \ref{lemma:RBtoEqua}, $\tau$ satisfies $\mathbb{P}[u(X_\tau,B_\tau) = v(B_{\tau})] = 1$ and thus we have $\mathbb{P}$-a.s. $\hat{\tau} \leq \tau$. The claim follows with Corollary \ref{cor:Leq}. \textit{(iii) $\mathbb{R}ightarrow$ (i):} Lemma \ref{lemma:uCont} yields that $u$ is a jointly continuous function which is monotonously increasing in $l$. Hence, the set $\mathcal{R} := \{(l,x) \in [0, \infty) \times \mathbb{R} : u(l,x) = v(x)\}$ is a closed barrier. \textit{(ii) $\mathbb{R}ightarrow$ (iii):} By Lemma \ref{lemma:ShadToEqua}, $\xi[t \geq \hat \tau] \geq \xi[u(X_t,B_t) = v(B_t)] = 1$ and Corollary \ref{cor:Leq} yields that $\xi [t \leq \hat{\tau}] = 1$. \textit{(iii) $\mathbb{R}ightarrow$ (ii):} Let $l \geq 0$. Since $\xi$ is induced by $\hat{\tau}$ and a solution to $\mathrm{SEP}(\mu,\nu)$, $\hat \tau - \hat{\tau} \land T_l$ is a solution of $\mathrm{SEP}(\mathrm{Law}(B_{\hat{\tau} \land T_l}),\nu)$ w.r.t.\ the Brownian motion $B'_s = B_{s + \hat{\tau} \land T_l}$. Moreover, Lemma \ref{lemma:InfimumToEqual} yields \begin{equation*} \hat{\tau} < T_l \mathbb{R}ightarrow u(l,B'_0) = v(B'_{0}) \quad \mathbb{P}\text{-a.s.} \end{equation*} Hence, by Corollary \ref{cor:ShadowOnEqualPart} it is \begin{align*} \mathrm{Law}(B_{\hat{\tau}}; \hat{\tau} \geq T_l) &= \mathrm{Law}(B'_{\hat{\tau} - \hat{\tau} \land T_l}; \hat \tau \geq T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B'_{0}; \hat \tau \geq T_l)} = \shadow{\nu}{\mathrm{Law}(B_{T_l}; \hat{\tau} \geq T_l)}. 
\qedhere \end{align*} \end{proof} \section{Proof of Proposition \ref{prop:ShiftedThm}} \begin{proof} Let $\tilde{\mathcal{F}}$ be the filtration defined by $\tilde{\mathcal{F}}_s := \mathcal{F}_{\sigma + s}$ and let $\tilde{B}$ be the process defined by $\tilde{B}_s := B_{\sigma + s}$. $\tilde{B}$ is an $\tilde{\mathcal{F}}$-Brownian motion. Moreover, $\tilde{\tau} := \tau - \sigma$ is an $\tilde{\mathcal{F}}$-stopping time because $\{\tilde{\tau} \leq s \} = \{\tau \leq \sigma + s\} \in \tilde{\mathcal{F}}_s$ for all $s \geq 0$. Clearly, we have $\tilde{B}_{\tilde{\tau}} = B_\tau$. Suppose (i) is satisfied. We set $\tilde{X}_s := X_{\sigma + s}$. Since $X$ is $\mathcal{F}$-adapted, $\tilde{X}$ is $\tilde{\mathcal{F}}$-adapted and, furthermore, we have \begin{equation*} \tilde{\tau} := \tau - \sigma = \inf \{ s \geq 0 : (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R} \}. \end{equation*} Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-time-change $(\tilde{T}_l)_{l \geq 0}$ such that for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{T}_l}; \tilde{\tau} \geq \tilde{T}_l}. \end{equation*} In particular, by Theorem \ref{thm:intro} we can choose $\tilde{T}_l = \inf \{ s \geq 0 : \tilde{X}_s \geq l\}$. Moreover, we set $T_l = \inf \{t \geq 0 : X_t \geq l\}$ and see that we have \begin{align*} \sigma + \tilde{T}_l &= \sigma + \inf \{ s \geq 0 : \tilde{X}_s \geq l\} = \sigma + \inf \{ s \geq 0 : X_{\sigma + s} \geq l\} \\ &= \max\{\sigma, T_l \} \end{align*} where the last equality follows from the fact that $X$ is monotonously increasing. We easily verify that a.s. \begin{equation} \label{eq:RelationTilde} \tilde{B}_{\tilde{T}_l} = B_{\sigma + \tilde{T_l}} = B_{\sigma \lor T_l} \quad \text{ and } \quad \{ \tilde{\tau} \geq \tilde{T}_l\} = \{\tau - \sigma \geq \tilde{T}_l \} \} = \{\tau \geq \sigma \lor T_l\}. \end{equation} Hence, for all $l \geq 0$ we obtain \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) = \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l )}. \end{equation*} Conversely, suppose that (ii) is satisfied. We set $\tilde{T}_l := \max\{0,T_l - \sigma\}$. Since $(T_l)_{l \geq 0}$ is an $\mathcal{F}$-time-change, $(\tilde{T}_l)_{l \geq 0}$ is an $\tilde{\mathcal{F}}$-time-change, and by definition we have $\sigma + \tilde{T}_l = \sigma \lor T_l$ such that \eqref{eq:RelationTilde} holds as well. In particular, we obtain \begin{align*} \mathrm{Law}(\tilde{B}_{\tilde{\tau}}; \tilde{\tau} \geq \tilde{T}_l) &= \mathrm{Law}(B_\tau; \tau \geq \sigma \lor T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B_{\sigma \lor T_l}; \tau \geq \sigma \lor T_l)} = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{T_l}}; \tilde{\tau} \geq \tilde{T}_l)}. \end{align*} Applying Theorem \ref{thm:intro} yields the existence of an $\tilde{\mathcal{F}}$-adapted stochastic process $(\tilde{X}_s)_{s \geq 0}$ and a closed barrier $\mathcal{R} \subset [0,\infty) \times \mathbb{R}$ such that \begin{equation*} \tilde{\tau} = \inf \{ s \geq 0: (\tilde{X}_s,\tilde{B}_s) \in \mathcal{R}\}. \end{equation*} In particular, by Theorem \ref{thm:intro} we can choose $\tilde{X}_s := \sup \{l \geq 0 : \tilde{T}_l \leq t\}$. 
Moreover, we set $X_t := \sup \{l \geq 0 : T_l \leq t\}$ and see that \begin{align*} \tilde{X}_s &= \sup \{l \geq 0: \tilde{T}_l \leq s\} = \sup \{l \geq 0: \max\{0, T_l - \sigma\} \leq s\} \\ &= \sup \{l \geq 0 : T_l \leq \sigma + s\} = X_{\sigma + s}. \end{align*} Thus, we have a.s. \begin{align*} \inf \{ t \geq \sigma: (X_t, B_t) \in \mathcal{R} \} &= \sigma + \inf\{ s \geq 0 : (X_{\sigma + s}, B_{\sigma + s}) \in \mathcal{R}\} \\ &= \sigma + \tilde{\tau} = \tau. \qedhere \end{align*} \end{proof} \begin{remark} \label{rem:CondLM} For the left-monotone time-change $(T_l^{lm})_{l \geq 0}$, we have \begin{equation*} \{\tau^i \geq \tau^{i-1} \lor T_l ^{lm} \} = \{\tau^i \geq \tau^{i-1}, T^{lm}_l = 0\} = \{ B_0 \leq q_l\} \end{equation*} where $q_l :=- \ln (l)$ and $T_l ^{lm} = 0$ on this set. Thus, the stopping times $\tau ^1 \leq ... \leq \tau ^n$ are $(T_l ^{lm})_{l \geq 0}$-shadow-residual if and only if \begin{align*} \mathrm{Law}(B_{\tau ^i}; B_0 \leq q_l) &= \mathrm{Law}(B_{\tau ^i}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm}) \\ &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau ^{i-1} \lor T_l ^{lm}}; \tau ^i \geq \tau^{i-1} \lor T_l ^{lm})} \\ &= \shadow{\nu _i}{\mathrm{Law}(B_{\tau ^{i-1}}; B_0 \leq q_l)} \end{align*} for all $1 \leq i \leq n$. Applying this inductively, these stopping times are shadow-residual if and only if \begin{equation*} \mathrm{Law}(B_{\tau ^i}; B_0 \leq q_l) = \shadow{\nu _i}{ ... \, \shadow{\nu_1}{\mathrm{Law}(B_{0}; B_0 \leq q_l)}} =: \shadow{\nu_1,...,\nu _i}{\mathrm{Law}(B_{0}; B_0 \leq q_l)} \end{equation*} for all $1 \leq i \leq n$. This is the obstructed shadow defined by Nutz-Stebegg-Tan in \cite{NuStTa17}. Hence, $(\tau ^1, ... , \tau^n)$ is the multi-marginal lm-solution if and only if the joint distribution $(B_0,B_{\tau ^1}, ... , B_{\tau ^n})$ is the mutliperiod left-monotone transport. \end{remark} \section{Proof of Proposition \ref{prop:Interpolation}} \label{sec:Interpolation} In this subsection we suppose that $\Omega = C([0,\infty))$ is the path space of continuous functions and that $\mathbb{P}$ is a probability measure on the path space such that the canonical process $B: \omega \mapsto \omega$ is a Brownian motion with $\mathrm{Law}_{\mathbb{P}}(B_0) = \mu$. Moreover, we denote by $\theta$ the shift operator on $\Omega$, i.e.\ $\theta_r : (\omega_s)_{s \geq 0} \mapsto (\omega_s)_{s \geq r}$ for all $r \geq 0$. \subsection{Concatenation Method} To simplify notation, we say that a finite stopping time $\tau$ is shadow-residual w.r.t.\ a time-change $(T_l)_{l \geq 0}$ if for all $l \geq 0$ we have \begin{equation*} \mathrm{Law}(B_\tau; \tau \geq T_l) = \shadow{\mathrm{Law}(B_\tau)}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)}. \end{equation*} This is precisely the condition in part (ii) of Theorem \ref{thm:intro}. \begin{lemma} \label{lemma:CombinedStoppingTime} Let $\tau$ and $\sigma$ be two $\mathcal{F}$-stopping times such that $\tau$ is finite. The random variable $\tau + \sigma \circ \theta_{\tau}$ is a again a $\mathcal{F}$ stopping time. \end{lemma} \begin{proof} If $\tau$ takes only values in the countable set $A \subset [0, \infty)$, for all $s \geq 0$ we obtain \begin{equation*} \{\tau + \sigma \circ \ \theta_\tau \leq s\} = \bigcup _{k \in A \cap [0,t]} \{\sigma \circ \theta _k \leq t - k\} \in \mathcal{F}_{k + (t-k)} = \mathcal{F}_t. \end{equation*} A general $\tau$ can be approximated by discrete stopping times. 
\end{proof} \begin{corollary} \label{lemma:NestingPrep} Let $(T_l)_{l \geq 0}$ be a finite $\mathcal{F}$-time-change, $(S_l)_{l \geq 0}$ a $\mathcal{F}$-time-change and $\lambda > 0$. The family $(R_l)_{l \geq 0}$ defined by \begin{equation*} R_l := T_{l \land \lambda} + (S_{l-\lambda} \circ \theta _{T_\lambda}) \mathds{1} _{\{l \geq \lambda\}} = \begin{cases} T_l & l < \lambda \\ T_\lambda + S_{l - \lambda} \circ \theta _{T_\lambda} &l \geq \lambda \end{cases} \end{equation*} is an $\mathcal{F}$-time-change. If additionally, both $(T_l)_{l \geq 0}$ and $(S_l)_{l \geq 0}$ are left-continuous, $T_0 = S_0 = 0$, $T_\infty = S_\infty = + \infty$ and $\mathbb{P}[\lim _{k \downarrow l} T_k = T_l] =\mathbb{P}[\lim _{k \downarrow l} S_k = S_l] = 1$ for all $l \geq 0$, $(R_l)_{l \geq 0}$ satisfies these four properties as well. \end{corollary} \begin{lemma} \label{lemma:Nesting} Suppose we are in the setting of Lemma \ref{lemma:NestingPrep}. Additionally assume that $\tau$ is a solution of $\mathrm{SEP}(\mu,\nu)$ which is shadow residual w.r.t.\ $(T_l)_{l \geq 0}$. If $\sigma$ is a $\mathcal{F}$-stopping time such that $\sigma$ is a solution to $\mathrm{SEP}(\mathrm{Law}(B_{\tau \land T_\lambda}),\nu)$, then \begin{equation*} \rho := \tau \land T_{\lambda} + \sigma \circ \theta _{T_\lambda \land \tau} \end{equation*} is a $\mathcal{F}$-stopping time and a solution to $\mathrm{SEP}(\mu,\nu)$ which is shadow residual w.r.t.\ $(R _l)_{l \geq 0}$. \end{lemma} \begin{proof} We set $\tilde{\mathcal{F}}_s = \mathcal{F}_{s + \tau \land T_\lambda}$, $\tilde{B} := B \circ \theta _{\tau \land T_\lambda}$, $\tilde{\sigma} := \sigma \circ \theta_{\tau \land T_\lambda}$ and $\tilde{S}_l := S_l \circ \theta_{T_\lambda \land \tau}$. $\tilde{\sigma}$ is a stopping time w.r.t.\ the filtration generated by $\tilde{B}$. We also have $\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}) = \nu$ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$-shadow-residual. \texttt{STEP 1:} We have $\mathrm{Law}(B_{\rho}) = \mathrm{Law}(B_{\tau \land T_\lambda + \tilde{\sigma}}) = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}) = \nu$. By Lemma \ref{lemma:CombinedStoppingTime}, $\rho$ is a $\mathcal{F}$-stopping time and $(B_{s \land \rho})_{s \geq 0}$ is uniformly integrable because $\tau$ and $\sigma$ are solutions to $\mathrm{SEP}(\mu,\nu)$ and $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0), \nu)$. Thus, $\rho$ is a solution to $\mathrm{SEP}(\mu,\nu)$. Moreover, we claim that we can represent $\rho$ as \begin{equation} \label{eq:Step1} \rho = \begin{cases} \tau & \tau < T_\lambda \\ T_\lambda + \tilde{\sigma} & \tau \geq T_\lambda \end{cases} \quad \mathbb{P}\text{-a.s.}. \end{equation} Indeed, since $\tau$ is $(T_l)_{l \geq 0}$-shadow-residual, by Theorem \ref{thm:MainEqui} (iii) and Lemma \ref{lemma:InfimumToEqual}, we have $\tau < T_\lambda \mathbb{R}ightarrow u(\lambda,\tilde{B}_0) = v(\tilde{B}_0)$ $\mathbb{P}$-a.s.\ where $u(l,\cdot) := U_{\mathrm{Law}(B_{T_l \land \tau})}$ and $v = U_{\nu}$ for all $l \geq 0$. Thus, we get \begin{align*} \mathbb{P}[\tilde{\sigma} > 0, \tau < T_\lambda] &\leq \mathbb{P}[\tilde{\sigma} > 0, u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)] \\ &= \mathbb{P}[\tilde{\sigma} > 0, U_{\mathrm{Law}(\tilde{B}_0)}(\tilde{B}_0) = U_{\nu}(\tilde{B}_0)]. \end{align*} and the r.h.s. is equal to $0$ because $\tilde{\sigma}$ is a $\tilde{\mathcal{F}}$-stopping-time that solves $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0),\nu)$ (cf.\ Lemma \ref{lemma:EquaToLeq}). It remains to show that $\rho$ is $(R_l)_{l \geq 0}$-shadow-residual. 
We split this up in the cases $l \geq \lambda$ and $l < \lambda$. \texttt{STEP 2:} Suppose $l \geq \lambda$. Since by \texttt{STEP 1} $\{ \rho \geq R_l \} = \{\tilde{\sigma} \geq \tilde{S}_{l - \lambda},\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s.\ and $\tilde{\sigma}$ is $(\tilde{S}_l)_{l \geq 0}$ shadow-residual, Lemma \ref{lemma:ShadowAssz} yields \begin{align*} &\mathrm{Law}(B_\rho; \rho \geq R_l) + \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \\ &\quad = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda}) = \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l - \lambda}}; \tilde{\sigma} \geq \tilde{S}_{l - \lambda} )} \\ & \quad = \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)} \\ & \hspace{2cm} + \shadow{\nu - \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda)} \end{align*} Thus, we obtain $\mathrm{Law}(B_\rho; \rho \geq R_l) = \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)}$ if we show \begin{align} &\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) = \mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \quad \text{and} \label{eq:Step2Eq1}\\ &\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau < T_\lambda) \leq_+ \nu - \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)}. \label{eq:Step2Eq2} \end{align} By \texttt{STEP} 1 it is $\tilde{\sigma} = 0$ on $\{\tau < T_\lambda\}$ and therefore \eqref{eq:Step2Eq1} follows immediately. Moreover, Lemma \ref{lemma:ShadowAssz} yields \begin{align*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} &\leq_{+} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)} \end{align*} On the one hand, by the definition of the shadow we have \begin{equation} \label{eq:Aux2} \begin{split} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{0}; \tau \geq T_\lambda)} &\leq_{c} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda} \land \tilde{\sigma}}; \tau \geq T_\lambda)} \\ &\leq_c \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda)} = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda) \end{split} \end{equation} because $0 \leq \tilde{S}_{l-\lambda} \land \tilde{\sigma} \leq \tilde{\sigma}$ and $\mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda) \leq_{+} \nu$. On the other hand, since $u(\lambda, \tilde{B}_0) = v(\tilde{B}_0)$ on $\{\tau < T_\lambda\}$ by \texttt{STEP 1}, Corollary \ref{cor:ShadowOnEqualPart} yields \begin{equation*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{0}; \tau \geq T_\lambda)} = \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau \geq T_\lambda). \end{equation*} Thus, we have equality in \eqref{eq:Aux2} which implies \begin{equation*} \shadow{\nu}{\mathrm{Law}(\tilde{B}_{\tilde{S}_{l-\lambda}}; \tilde{\sigma} \geq \tilde{S}_{l-\lambda}, \tau \geq T_\lambda)} \leq_{+} \nu - \mathrm{Law}(\tilde{B}_{\tilde{\sigma}}; \tau < T_\lambda) \end{equation*} and thereby \eqref{eq:Step2Eq2}. \texttt{STEP 3:} Now suppose $l < \lambda$. 
Since $R_\lambda = T_\lambda$ and $\{\rho \geq R_\lambda\} = \{\tau \geq T_\lambda\}$ $\mathbb{P}$-a.s., by \texttt{STEP 2} we have \begin{align*} \mathrm{Law}(B_\rho; \rho \geq R_\lambda) &= \shadow{\nu}{\mathrm{Law}(B_{R_\lambda}; \rho \geq R_\lambda)} = \shadow{\nu}{\mathrm{Law}(B_{T_\lambda}; \tau \geq T_\lambda)} \\ &= \mathrm{Law}(B_\tau; \tau \geq T_\lambda) \end{align*} because $\tau$ is $(T_l)_{l \geq 0}$-shadow-residual. In particular, we get \begin{align*} \mathrm{Law}(B_\rho; \rho \geq R_l) &= \mathrm{Law}(B_\rho; \rho \geq R_l, \tau < T_\lambda ) + \mathrm{Law}(B_\rho; \rho \geq R_l, \tau \geq T_\lambda ) \\ &= \mathrm{Law}(B_\tau; T_\lambda > \tau \geq T_l) + \mathrm{Law}(B_\rho; \rho \geq T_\lambda) \\ &= \mathrm{Law}(B_\tau; \tau \geq T_l) \\ &= \shadow{\nu}{\mathrm{Law}(B_{T_l}; \tau \geq T_l)} \\ &= \shadow{\nu}{\mathrm{Law}(B_{R_l}; \rho \geq R_l)} \end{align*} because $\{\rho \geq R_l \} = \{\tau \geq T_\lambda\} \cup \{T_\lambda > \tau \geq T_l\} = \{\tau \geq T_l\}$. \end{proof} \subsection{Robustness of LM-Embedding} \begin{lemma} \label{lemma:SEPcompact} Let $(\mathbb{P}_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion under $\mathbb{P}_n$, $(\nu_n)_{n \in \mathbb{N}}$ a sequence of probability measures on $\mathbb{R}$ and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of RST w.r.t.\ $\mathbb{P}_n$ that are solutions to $\mathrm{SEP}(\mathrm{Law}_{\mathbb{P}_n}(B_0), \nu _n)$ for all $n \in \mathbb{N}$. If $(\mathbb{P}_n)$ converges weakly to $\mathbb{P}$ and $(\nu_n)_{n \in \mathbb{N}}$ converges to the probability measure $\nu$ under $\mathcal{T}_1$, there exists a weakly convergent subsequence of $(\xi ^n)_{n \in \mathbb{N}}$. Moreover, the limit of every convergent subsequence is an RST w.r.t.\ $\mathbb{P}$ that solves $\mathrm{SEP}(\mathrm{Law}_\mathbb{P}(B_0),\nu)$. \end{lemma} \begin{proof} Let $\varepsilon > 0$. Since $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly, there exists a compact set $K_{\varepsilon} \subset \Omega$ such that $\mathbb{P}_n[K_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$. Since $(\nu_n)_{n \in \mathbb{N}}$ converges in $\mathcal{T}_1$, by Lemma \ref{lemma:T1Conv} there exists $\eta \in \mathcal{M}_1$ such that $\int \varphi \, \mathrm{d} \nu _n \leq \int \varphi \, \mathrm{d} \eta$ for all $n \in \mathbb{N}$ and non-negative convex functions $\varphi$. Moreover, by the de la Vall\'ee-Poussin theorem there exists a non-negative convex function $V \in C^2(\mathbb{R})$ with $V'' \geq C > 0$ such that $\int_\mathbb{R} V \, \mathrm{d} \eta < \infty$. For all $s \geq 0$ and $n \in \mathbb{N}$ we have \begin{equation*} \xi ^n [t \geq s] = \overline{\mathbb{P}}[\overline{\tau}^{\xi^n} \geq s] \leq \frac{\overline{\mathbb{E}}[\overline{\tau} ^{\xi^n}]}{s} \leq \frac{2\,\overline{\mathbb{E}}[V(\overline{B}_{\overline{\tau} ^{\xi^n}})]}{Cs} \leq \frac{2}{Cs} \int_{\mathbb{R}} V \, \mathrm{d} \eta \end{equation*} where we used the notation of Lemma \ref{lemma:ReprRST}, the Markov inequality and It\^o's formula. Hence, there exists $s_\varepsilon > 0$ such that $\xi^n[t \leq s_\varepsilon] > 1 - \varepsilon$ for all $n \in \mathbb{N}$. Then the mass of the compact set $K_\varepsilon \times [0,s_\varepsilon]$ under $\xi ^n$ is strictly greater than $1- 2\varepsilon$ for all $n \in \mathbb{N}$. Hence, the set $\{\xi ^n : n \in \mathbb{N} \}$ is tight. By Prokhorov's theorem there exists a weakly convergent subsequence. We denote the limit by $\xi$.
Since the set of RST is closed under weak convergence (cf.\ \cite[Corollary 3.10]{BeCoHu17}), $\xi$ is a RST stopping time w.r.t.\ $\mathbb{P}$. Moreover, $(\omega,t) \mapsto \varphi(\omega_t)$ is a continuous and bounded function on $\Omega \times [0, \infty)$ for all $\varphi \in C_b(\mathbb{R})$, and therefore $\mathrm{Law}(B_\xi) = \nu$. It remains to show that $(B_{\xi \land t})_{t \geq 0}$ is uniformly integrable. Since $|x| \mathds{1} _{|x| \geq K} \leq |x- K/2| + |x + K/2| - K$, we get \begin{equation*} \mathbb{E} \left[ |B_{\xi \land s}| \mathds{1} _{\{|B_{\xi \land s}| \geq K\}} \right] \leq U_{\mathrm{Law}(B_{\xi \land s})}(K/2) + U_{\mathrm{Law}(B_{\xi \land s})} (-K/2) - K \end{equation*} for all $s,K \geq 0$. Moreover, since $\xi ^n$ converges weakly to $\xi$ and $g_m$ defined by $g_m(y) := \min\{|y|,m\}$ is a continuous and bounded function, we obtain for all $x \in \mathbb{R}$ with monotone and dominated convergence \begin{align*} U_{\mathrm{Law}(B_{\xi \land t})}(x) &= \sup_{m \in \mathbb{N}} \lim_{n \rightarrow \infty} \int _{\Omega \times [0, \infty)} g_m(\omega_{t \land s} - x) \, \mathrm{d} \xi ^n(\omega,t) \\ &\leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n} \land s}| \right] \leq \sup _{n \in \mathbb{N}} \mathbb{E}\left[ |B_{\xi^{n}}| \right] = \sup _{n \in \mathbb{N}} U_{\nu_n}(x) \leq U_{\eta}(x) \end{align*} where we used that $\mathrm{Law}(B_{\xi ^n \land t})_{t \geq 0}$ is uniformly integrable for all $n \in \mathbb{N}$ . Thus, using the asymptotic behaviour of potential functions, it is \begin{equation*} \lim _{K \rightarrow \infty} \sup _{t \geq 0} \mathbb{E} \left[ |B_{\xi \land t}| \mathds{1} _{\{|B_{\xi \land t}| \geq K\}} \right] \leq \limsup _{K \rightarrow \infty} U_{\eta}\left(-K/2\right) + U_{\eta}(K/2) - K = 0. \end{equation*} and the claim follows. \end{proof} \begin{lemma} \label{lemma:StabilityLM} Let $(\nu_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\mathbb{R}$, $(\mathbb{P}^n)_{n \in \mathbb{N}}$ be a sequence of probability measures on $\Omega$ such that $B$ is a Brownian motion with initial distribution $\mu_n$ under $\mathbb{P}^n$ and $(\xi^n)_{n \in \mathbb{N}}$ a sequence of corresponding RST which are lm-monotone solutions to $\mathrm{SEP}(\mu_n, \nu_n)$. If $(\mathbb{P}_n)_{n \in \mathbb{N}}$ converges weakly to $\mathbb{P}$ whose initial distribution $\mu$ is atomless and $\nu_n$ converges to $\nu$ in $\mathcal{T}_1$, the sequence $(\xi ^ n)_{n \in \mathbb{N}}$ converges weakly to a RST $\xi$ w.r.t.\ $\mathbb{P}$ which is a lm-monotone solution to $\mathrm{SEP}(\mu,\nu)$ \end{lemma} \begin{proof} By Lemma \ref{lemma:SEPcompact}, any subsequence of $(\xi ^n)_{n \in \mathbb{N}}$ has itself a convergent subsequence and the limit is a solution to $\mathrm{SEP}(\mu,\nu)$. If we show that this limit is $(T_l^{lm})_{l \geq 0}$-shadow-residual, by uniqueness (see \cite[Lemma 4.3]{BeHeTo17}), it has to be the unique left-monotone solution to $\mathrm{SEP}(\mu,\nu)$ and the claim follows. For simplicity, we denote the convergent subsequence of a given subsequence again by $(\xi ^n)_{n \in \mathbb{N}}$ and the limit by $\xi$. Since $\xi^n$ is $(T_l ^{lm})$-shadow-residual, the probability measure $\mathrm{Law}(B_0,B_{\xi ^n})$ is the left-curtain coupling of $\mu_n$ and $\nu_n$. 
Indeed, as in Remark \ref{rem:CondLM} we have for all $l \geq 0$ \begin{equation} \label{eq:LmStability} \begin{split} \mathrm{Law}(B_{\xi^n}; B_0 \leq - \ln(l)) &= \mathrm{Law}(B_{\xi ^n}; \xi ^n \geq T_l ^{lm}) \\ &= \shadow{\nu _n}{\mathrm{Law}(B_0; \xi ^n \geq T_l ^{lm})} = \shadow{\nu _n}{\mathrm{Law}(B_0; B_0 \leq - \ln(l))} \end{split} \end{equation} because $\{t \geq T_l ^{lm}\} = \{T_l ^{lm} = 0\} = \{B_0 \leq - \ln(l) \}$ (where $-\ln(0) := - \infty$). As shown in \cite[Theorem 2.16]{Ju14} (and also as a consequence of the stability of martingale optimal transport \cite[Theorem 1.1]{BaPa19}), the left-curtain coupling is stable under weak convergence, i.e.\ the weak limit $\mathrm{Law}(B_0,B_\xi)$ of $(\mathrm{Law}(B_0,B_{\xi^n}))_{n \in \mathbb{N}}$ is the left-curtain coupling of $\mu$ and $\nu$. Thus, analogous to \eqref{eq:LmStability}, $\xi$ is $(T_l^{lm})_{l \geq 0}$-shadow-residual. \end{proof} \subsection{Application} Fix $\mu \leq_{c} \nu$, let $\tau ^r$ be the Root solution to $\mathrm{SEP}(\mu,\nu)$ and $(T_l^r)_{l \geq 0}$ the Root time-change. Let $\lambda > 0$. We set $ \tilde B^{\lambda} := B \circ \theta _{T^{r} _{\lambda} \land \tau ^{r}}$. By the strong Markov property, $ \tilde B^{\lambda}$ is a Brownian motion and there exists a left-monotone solution $\sigma ^{\lambda}$ of $\mathrm{SEP}(\mathrm{Law}(\tilde B^{\lambda}_0),\nu)$. We define \begin{align*} \tau ^{\lambda} :=& \tau ^{r} \mathds{1} _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \mathds{1} _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \\ =& \tau^r \mathds{1}_{\{\tau ^r < \lambda \}} + \sigma^\lambda \circ \theta _{\lambda} \mathds{1} _{\{\tau ^r \geq \lambda\}} . \end{align*} By Lemma \ref{lemma:Nesting}, $\tau ^\lambda$ is a solution to $\mathrm{SEP}(\mu,\nu)$ which is shadow-residual w.r.t.\ the time-change $(T_l ^\lambda)_{l \geq 0}$ defined as \begin{equation*} T^{\lambda} _l := T^{r}_{l \land \lambda} + (T^{lm} _{l - \lambda} \circ \theta _{T^{r}_{\lambda}}) \mathds{1}_{\{l \geq \lambda\}} = \begin{cases} l & l < \lambda \\ \lambda & \exp(-B_\lambda) + \lambda \geq l \geq \lambda \\ + \infty & \exp(-B_\lambda) + \lambda < l, l \geq \lambda \end{cases}. \end{equation*} Thus, by Theorem \ref{thm:MainEqui}, there exists a barrier $\mathcal{R}^\lambda$ such that \begin{equation*} \tau ^{\lambda} = \inf \{t \geq 0 : (X_t ^\lambda, B_t) \in \mathcal{R}^\lambda\} \end{equation*} where $X^{\lambda}$ is defined as \begin{align*} X_t ^\lambda := \sup \{l \geq 0 : T_l^\lambda \leq t\} = \begin{cases} t & t < \lambda \\ \lambda + \exp(-B_\lambda) & t \geq \lambda \end{cases}. \end{align*} To complete the proof of Proposition \ref{prop:Interpolation}, it remains to show the convergence of $\tau ^\lambda$ to $\tau ^r$ and $\tau ^{lm}$ as randomized stopping times as $\lambda$ tends to $+\infty$ and $0$. This is covered by Lemma \ref{lemma:ConvToRoot} and Lemma \ref{lemma:ConvToLM}. \begin{lemma} \label{lemma:ConvToRoot} The family $(\tau ^\lambda)_{\lambda > 0}$ converges a.s.\ to $\tau ^r$ as $\lambda$ tends to $+ \infty$. In particular, $\mathrm{Law}(B,\tau ^\lambda)$ converges weakly to $\mathrm{Law}(B,\tau ^r)$.
\end{lemma} \begin{proof} Since $\tau^r < + \infty$ and $T_\lambda^r \rightarrow \infty$ as $\lambda \rightarrow + \infty$, we have \begin{equation*} \lim _{\lambda \rightarrow + \infty} \tau ^\lambda = \lim _{\lambda \rightarrow + \infty} \left( \tau ^{r} \mathds{1} _{\{ \tau ^{r} < T^{r}_{\lambda} \}} + \sigma^\lambda \circ \theta_{(T^{r} _{\lambda} \land \tau ^{r})} \mathds{1} _{\{ \tau ^{r} \geq T^{r} _{\lambda} \}} \right) = \tau ^{r} \quad \mathbb{P}\text{-a.s.} \end{equation*} The weak convergence of $\mathrm{Law}(B,\tau ^\lambda)$ to $\mathrm{Law}(B,\tau ^r)$ follows immediately. \end{proof} \begin{lemma} \label{lemma:CompSupp} For all $\varphi \in C_c(\Omega \times [0, \infty))$ we have \begin{equation*} \lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \mathds{1} _{\{\tau^r > 0\}}\right] = 0 \end{equation*} where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$. \end{lemma} \begin{proof} For all $\lambda > 0$ we define the map $\Theta ^\lambda$ on $\Omega \times [0,\infty)$ by \begin{equation*} \Theta ^\lambda : (\omega,t) \mapsto (\omega \circ \theta_\lambda, \max\{t-\lambda,0\}). \end{equation*} A compatible metric on the Polish space $\Omega \times [0, \infty)$ is given by \begin{equation*} d((\omega,t),(\omega',t')) := |t-t'| + \sum _{n \in \mathbb{N}} 2^{-n}\sup _{s \in [0,n]} |\omega_s - \omega'_s| \end{equation*} and under this metric $\Theta ^\lambda$ is $2$-Lipschitz continuous for all $\lambda \in (0,1)$. Moreover, since $\lim _{\lambda \rightarrow 0}\Theta ^\lambda(\omega,t) = (\omega,t)$ for all $(\omega,t) \in \Omega \times [0, \infty)$, $\Theta^\lambda$ converges uniformly on compact sets to the identity on $\Omega \times [0, \infty)$. Thus, we have \begin{align*} &\lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right] \\ & \quad = \lim _{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\Theta^\lambda(B,\lambda + \tilde{\sigma} ^\lambda)) \right\vert \right] = 0. \end{align*} By substituting the definition of $\tau^\lambda$, we obtain the estimate \begin{align*} &\mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \mathds{1} _{\{\tau^r > 0\}}\right] \\ &\quad \leq 2 ||\varphi||_\infty \mathbb{P}[0 <\tau ^r < \lambda] + \mathbb{E}\left[ \left\vert \varphi(B,\lambda + \tilde{\sigma} ^\lambda) - \varphi(\tilde{B}^\lambda,\tilde{\sigma} ^\lambda) \right\vert \right] \end{align*} and therefore the claim follows. \end{proof} \begin{lemma} \label{lemma:ConvToLM} The family $(\mathrm{Law}(B,\tau ^\lambda))_{\lambda > 0}$ converges weakly to $\mathrm{Law}(B,\tau ^{lm})$ as $\lambda$ tends to $0$. \end{lemma} \begin{proof} On the set $\{\tau ^r = 0\}$, $U_\mu(B_0) = u^r(0,B_0) = v(B_0) = U_\nu(B_0)$ and thus $\tau ^{lm} = 0 = \tau ^r$.
Hence, in conjunction with Lemma \ref{lemma:CompSupp} we obtain for all $\varphi \in C_c(\Omega \times [0, \infty))$ \begin{equation} \label{eq:ConvPHI} \lim_{\lambda \rightarrow 0} \mathbb{E}\left[ \left\vert \varphi(B, \tau ^\lambda) - \varphi (\tilde{B}^\lambda, \tilde{\sigma}^\lambda) \right\vert \right] = 0 \end{equation} where $\tilde{B}^\lambda := B \circ \theta_{\tau ^r \land \lambda}$ and $\tilde{\sigma}^\lambda := \sigma^\lambda \circ \theta_{\tau ^r \land \lambda}$ for all $\lambda > 0$. Since both $(\mathrm{Law}(B,\tau ^\lambda))_{\lambda > 0}$ and $(\mathrm{Law}(\tilde{B}^\lambda,\tilde{\sigma}^\lambda))_{\lambda > 0}$ are families of solutions to $\mathrm{SEP}(\mu,\nu)$ and $\mathrm{SEP}(\mathrm{Law}(\tilde{B}_0^\lambda),\nu)$, respectively, both families are tight by Lemma \ref{lemma:SEPcompact}. Thus, \eqref{eq:ConvPHI} also holds for all $\varphi \in C_b(\Omega \times [0, \infty))$. Finally, Lemma \ref{lemma:StabilityLM} shows that $\mathrm{Law}(\tilde{B}^\lambda,\tilde{\sigma}^\lambda)$ converges weakly to $\mathrm{Law}(B,\tau ^{lm})$ as $\lambda$ tends to $0$. \end{proof} \normalem \end{document}
\begin{document} \preprint{ } \title[Equivariant Hopf Bifurcation on a Circular Domain]{Equivariant Hopf Bifurcation in a Class of Partial Functional Differential Equations on a Circular Domain} \author{Yaqi Chen} \affiliation{Department of Mathematics, Harbin Institute of Technology, Weihai, Shandong 264209, P.R.China.} \author{Xianyi Zeng} \affiliation{Department of Mathematics, Lehigh University, Bethlehem, PA 18015, United States.} \author{Ben Niu~*} \email{[email protected].} \affiliation{Department of Mathematics, Harbin Institute of Technology, Weihai, Shandong 264209, P.R.China. } \date{\today} \begin{abstract} Circular domains frequently appear in the fields of ecology, biology and chemistry. In this paper, we investigate the equivariant Hopf bifurcation of partial functional differential equations with Neumann boundary conditions on a two-dimensional disk. The properties of these bifurcations around equilibria are analyzed rigorously by studying the equivariant normal forms. Two reaction-diffusion systems with discrete time delays are selected as numerical examples to verify the theoretical results, in which spatially inhomogeneous periodic solutions, including standing waves and rotating waves, as well as spatially homogeneous periodic solutions are found near the bifurcation points.\\ ~\\ Keywords: Circular domain, Partial functional differential equations, Equivariant Hopf bifurcation, Standing waves, Rotating waves \end{abstract} \maketitle \section{\label{sec1}Introduction} Research on reaction-diffusion equations plays an important role in physics, chemistry, medicine, biology, and ecology. Many mathematical problems, such as the existence, boundedness, regularity and stability of solutions and traveling waves, have been raised in~\citep{Wang1996J,Wang2011J,Nefedov2016J,Hsu2018J,Jin2020J}. Recently, the effect of time delay has drawn a lot of attention, and Hopf bifurcation analysis has become an effective tool to explain complex phenomena in reaction-diffusion systems, and even in more general partial functional differential equations (PFDEs). Wu~\citep{Wu1996M} gave a general Hopf bifurcation theorem for PFDEs by restricting the system to an eigenspace of the Laplacian. Faria \citep{Faria2000J} gave a framework for directly calculating the normal form of PFDEs with parameters. Based on these theories, many achievements have been made in the study of local Hopf bifurcation~\citep{Yi2009J,Hu2011J,Chen2013J,Guo2015J,Yang2022J,Song2021J,Song2021J2} or other codimension-two bifurcations~\citep{Cao2018J,Du2020J,Jiang2020J,Geng2022J}. The phenomenon of symmetry appears frequently in real-world models. It usually leads to multiple eigenvalues, so the standard Hopf bifurcation theory of functional differential equations cannot be applied to such problems. Golubitsky et al.~\citep{Golubitsky1989M} used group theory to characterize transitions in symmetric systems and worked out the bifurcation theory for a number of symmetry groups. Based on these theories, there have been many subsequent studies on symmetry. First, some researchers were concerned with nonlinear optical systems, which can effectively characterize optical problems such as circular diffraction~\citep{Razgulin2013J,Romanenko2014J,Budzinskiy2017J}. Besides, a Hopfield-Cohen-Grossberg network consisting of $n$ identical elements also has a certain symmetry, which has been studied in~\citep{Wu1998J,Guo2013M,Campbell2005J}.
Furthermore, there have been many studies of models established on regions with $O(2)$ symmetry. For example, Gils and Mallet-Paret~\citep{Gils1986J} considered Hopf bifurcation in the presence of $O(2)$ symmetry and classified the phase portraits of the normal form into six cases. In \citep{Schley2003J}, Schley studied a delayed parabolic equation in a disk with the Neumann boundary conditions and proved the existence of rotating waves by eigenfunction methods. In recent years, a sequence of results about equivariant Hopf bifurcation in neutral functional differential equations~\citep{Guo2008J,Guo2010J} and functional differential equations of mixed type \citep{Guo2011J} has been established. In particular, Guo~\citep{Guo2022J} applied the equivariant Hopf bifurcation theorem to study the Hopf bifurcation of a delayed Ginzburg-Landau equation on a two-dimensional disk with the homogeneous Dirichlet boundary condition. More recently, Qu and Guo applied Lyapunov-Schmidt reduction to study the existence of inhomogeneous steady-state solutions on a unit disk~\citep{Qu2023J}, whereas different kinds of spatio-temporal solutions with symmetry have been detected by investigating isotropy subgroups of these equations~\citep{Golubitsky1989M,Wu1998J,Wu1999J,Guo2003J}. In fact, the disk, a typical region with $O(2)$ symmetry, is often used to describe real-world problems. For example, in physics, rotating waves were often observed in the case of a circular aperture~\citep{Akhmanov1992J,Ramazza1996J,Residori2007J}. In chemical experiments, one usually studies chemical reactions in circular Petri dishes, whose size may affect the existence and pattern of spiral waves~\citep{Dai2021J, Joseph1994J}. In the field of ecology, some lakes can be abstracted as circular domains to study the interaction between predator and prey, and the mathematical modeling of predator-prey systems on circular domains has been summarized in~\citep{Abid2015J,Yafia2016C}. However, a complete derivation of normal forms and bifurcation analysis for general partial functional differential equations on two-dimensional circular domains is still lacking. Therefore, in this paper, we consider general partial functional differential equations with homogeneous Neumann boundary conditions defined on a disk and extend the center manifold reduction technique established in \citep{Faria2000J,Faria1995J,Faria1995J2} to the normal form derivation for PFDEs on circular domains, filling this gap. Compared to the results in \citep{Wu1996M}, since the $O(2)$ symmetry leads to multiple purely imaginary eigenvalues, the eigenspace of the Laplacian is sometimes two-dimensional, which gives rise to a higher-dimensional center subspace of the equilibrium at the bifurcation point. By introducing operators similar to those in~\citep{Faria2000J}, we derive the normal form of the equivariant Hopf bifurcation of general partial functional differential equations on a disk in explicit formulas, which can be directly applied to models of practical significance or to models on other kinds of circular domains, for example an annulus or a circular sector. With the aid of the normal forms, we find standing wave solutions and rotating wave solutions in a delayed predator-prey model, after all the coefficients in the normal forms have been explicitly computed. The structure of the article is as follows.
In Section \ref{sec2}, the eigenvalue problem of the Laplace operator on a circular domain is reviewed and the existence of Hopf bifurcation is explored. In Section \ref{sec3}, we study the properties of equivariant Hopf bifurcation on the center manifold. The normal forms are also rigorously derived in this section. In Section \ref{sec4}, two types of reaction-diffusion equations with discrete time delay are selected and numerically solved to verify the theoretical results. \section{Preliminaries}\label{sec2} \subsection{The eigenvalue problem of the Laplace operator on a circular domain}\label{sec2.1} The eigenvalue problem associated with the Laplace operator on a circular domain can be treated in a standard way; see \citep{Murray2001M,Pinchover2005M}. For the convenience of our presentation, we treat the eigenvalue problem by a similar method, state the main results here, and consider the disk $$ \mathbb{D}=\{(r, \theta): 0 \leq r \leq R, 0 \leq \theta \leq 2 \pi\}. $$ The Laplace operator in Cartesian coordinates is $\Delta \varphi=\frac{\partial^{2}}{\partial x^{2}} \varphi+\frac{\partial^{2}}{\partial y^{2}} \varphi$. Letting $x=r\cos(\theta),y=r\sin(\theta)$, it can be written in polar coordinates as $\Delta_{r \theta} \varphi=\frac{\partial^{2}}{\partial r^{2}} \varphi+\frac{1}{r} \cdot \frac{\partial}{\partial r} \varphi+\frac{1}{r^{2}} \cdot \frac{\partial^{2}}{\partial \theta^{2}} \varphi$. One needs to consider the following eigenvalue problem and calculate the eigenfunctions on the disk. \begin{equation}\label{eigenvalue problem on the circular domain} \left\{\begin{array}{l} \Delta_{r \theta}\phi=-\lambda \phi, \\ \phi_r^{\prime}(R,\theta)=0, \theta \in[0,2 \pi]. \end{array}\right. \end{equation} Using the method of separation of variables and letting $\phi(r, \theta)=P(r) \Phi(\theta)$, we get that the eigenfunction corresponding to $\lambda_{nm}$ is \begin{equation}\label{eigenfunction of the Laplace operator} \phi_{n m}(r, \theta)=J_{n}\left(\sqrt{\lambda_{n m}} r\right)\Phi_{n}(\theta), \end{equation} with \begin{equation}\label{Phi theta} \Phi_{n}(\theta)= a_{n} \cos n \theta+ b_{n} \sin n \theta, \end{equation} and \begin{equation}\label{Jn} J_{n}(\rho)=\sum_{m=0}^{+\infty} \frac{(-1)^{m}}{m ! \Gamma(n+m+1)}\left(\frac{\rho}{2}\right)^{n+2 m}. \end{equation} Here $\lambda_{nm}$ is chosen such that the boundary condition $P'(R)=0$ is satisfied. \begin{remark}\label{lambdanm} Considering the Neumann boundary conditions, we have $J_n^{\prime}\left(\sqrt{\lambda_{n m}} R\right)=0$, which indicates that $\sqrt{\lambda_{n m}}\, R$ must be a root of $J_n^{\prime}$. We use $\alpha_{n m}$ to represent these non-zero roots and assume that they are indexed in increasing order, i.e. $J_{n}^{\prime}\left(\alpha_{n m}\right)=0, \alpha_{n 1}<\alpha_{n 2}<\alpha_{n 3}<\cdots$, where $n \ge 0$ is the index of the Bessel function and $m \ge 1$ is the index of these roots. So $\lambda_{n m}=\left(\alpha_{n m} / R\right)^{2}$. For convenience, we set $\alpha_{0 0}=0, \lambda_{0 0}=0$. \end{remark} \begin{remark}\label{basis} From the standard Sturm-Liouville theorem, we know that for any given nonnegative integer $n$, the functions $J_{n}\left(\frac{\alpha_{n m}}{R} r\right)$, where $J_{n}^{\prime}\left(\alpha_{n m}\right)=0$, form an orthogonal set with weight $r$ on the interval $[0,R]$.
That is, for any given $m, k$, we have $$ \int_{0}^{R} r J_{n}\left(\frac{\alpha_{n m}}{R} r\right) J_{n}\left(\frac{\alpha_{n k}}{R} r\right) \mathrm{d} r=\left\{\begin{array}{cc} 0, & m \neq k,\\ \frac{R^2}{2}\left[1-\left(\frac{n}{\alpha_{nm}}\right)^2\right]J_n^2\left(\alpha_{n m}\right), & m=k. \end{array} \right. $$ Furthermore, for any given nonnegative integer $n$, the function system $\left\{J_{n}\left(\frac{\alpha_{n m}}{R} r\right)\right\}$, which includes the root $\alpha_{0 0}$ when $n=0$, is complete in the space $\mathbb{L}^{2}[0,R]$ with weight $r$. Besides, the trigonometric function system is orthogonal on the interval $[0,2 \pi]$ and complete in the space $\mathbb{L}^{2}[0,2 \pi]$. Therefore, for $n=0, 1, 2, \cdots$, $m=1, 2, \cdots$, we use a complexification of the space, and the system of functions $$ \phi_0,~\phi_{nm}^{c},~\phi_{nm}^{s}, $$ constitutes an orthogonal basis with weight $r$ in the space $\mathbb{L}^{2}\{0 \leq \theta \leq 2 \pi, 0 \leq r \leq R\}$ with $$ \phi_0=J_{0}\left(\frac{\alpha_{0 0}}{R} r\right)=1,~\phi_{nm}^{c}=J_{n}\left(\frac{\alpha_{n m}}{R} r\right) \mathrm{e}^{\mathrm{i} n \theta},~\phi_{nm}^{s}=\overline{\phi_{nm}^{c}}=J_{n}\left(\frac{\alpha_{n m}}{R} r\right)\mathrm{e}^{-\mathrm{i} n \theta}. $$ \end{remark} From the above analysis, we can draw the following conclusions. \begin{theorem}\label{F-B} Any function $\varphi(r,\theta)$ defined on $\mathbb{D}$ and satisfying the homogeneous Neumann condition at $r=R$ can be expanded as $$ \varphi(r, \theta)=\phi_{0}(r, \theta)+\sum_{n=0}^{+\infty} \sum_{m=1}^{+\infty}A_{nm}\phi_{nm}^c(r, \theta)+\sum_{n=1}^{+\infty} \sum_{m=1}^{+\infty}B_{nm}\phi_{nm}^s(r, \theta), $$ where $$ A_{nm}=\frac{\delta_{n}}{ R^{2}\pi \left[ 1-\left( \frac{n}{\alpha_{nm}} \right)^2 \right] J_{n}^{2}\left(\alpha_{nm}\right)} \int_{0}^{R} \int_{0}^{2 \pi} r \varphi(r, \theta) J_{n}\left(\frac{\alpha_{nm}}{R} r\right) \mathrm{e}^{-\mathrm{i} n \theta} \mathrm{d} r \mathrm{d} \theta, $$ $$ B_{nm}=\frac{2}{R^{2} \pi \left[ 1-\left( \frac{n}{\alpha_{nm}} \right)^2 \right] J_{n}^{2}\left(\alpha_{nm}\right)} \int_{0}^{R} \int_{0}^{2 \pi} r \varphi(r, \theta) J_{n}\left(\frac{\alpha_{nm}}{R} r\right) \mathrm{e}^{\mathrm{i} n \theta} \mathrm{d} r \mathrm{d} \theta, $$ $$ \delta_{n}=\left\{\begin{array}{l}1, n=0, \\ 2, n \neq 0.\end{array}\right. $$ This means that, for $n=0$, the eigenspace corresponding to the eigenvalue $\lambda_{0m}$ is spanned by $\phi_{0m}^{c},~m=0,1,\cdots$. For $n> 0$, the eigenspace corresponding to the eigenvalue $\lambda_{nm}$ is spanned by $\phi_{nm}^{c}$ and $\phi_{nm}^{s},~m=1,2,\cdots$. \end{theorem} \begin{remark} The above eigenvalue analysis can be directly applied to an annular domain $$ \tilde{\mathbb{D}}=\{(r, \theta):~R_1 \leq r \leq R_2,~0 \leq \theta \leq 2 \pi\}. $$ The difference is that the homogeneous Neumann condition is given at $r = R_1$ and $r = R_2$. Besides, $P(r)$ becomes \begin{equation*} P_n(\sigma_{nm},r)=A_nJ_n(\frac{\sigma_{nm}}{R_2} r)+B_nN_n(\frac{\sigma_{nm}}{R_2} r), \end{equation*} with $P_n^{\prime}(\sigma_{nm},R_1)=P_n^{\prime}(\sigma_{nm},R_2)=0$, where $N_{n}(\rho)=\frac{J_{n}(\rho) \cos n \pi-J_{-n}(\rho)}{\sin n \pi}$. Then $\frac{P_n(\sigma_{nm},r)}{\|P_n(\sigma_{nm},r)\|_{2,2}}$ forms an orthonormal basis of the space $\mathbb{L}^{2}\{ R_1 \leq r \leq R_2\}$. There is no significant difference in the subsequent process of bifurcation analysis.
In fact, the above eigenvalue analysis on a circular sector domain $$ \hat{\mathbb{D}}=\{(r, \theta):~0 \leq r \leq R,~0 \leq \theta \leq \Theta,~\Theta<\pi\} $$ is also similar. The difference is that the homogeneous Neumann conditions make $\Phi(\theta)$ become $\Phi_n(\theta)=\cos \frac{n\pi}{\Theta}\theta$. The subsequent calculation process is even simpler, being a direct extension of the case of one-dimensional intervals. \end{remark} \subsection{The existence of the Hopf bifurcation}\label{sec2.2} We consider general partial functional differential equations with homogeneous Neumann boundary conditions defined on a disk as follows: \begin{equation}\label{PFDEs} \frac{\partial U(t, x, y)}{\partial t}=D(\nu) \Delta U(t, x, y)+L(\nu)U_t(x, y)+F\left(U_t(x, y),\nu\right), \end{equation} where $t \in[0,+\infty),~\Omega=\left\{(x, y) \in \mathbb{R}^{2} \mid x^{2}+y^{2}<R^{2}\right\}$, $$ U(t, x, y)=\left(\begin{array}{c} u_1(t, x, y)\\ u_2(t, x, y)\\ \vdots\\ u_n(t, x, y) \end{array}\right), ~ U_t(\vartheta)(x, y)=\left(\begin{array}{c} u_t^1(\vartheta)(x, y)\\ u_t^2(\vartheta)(x, y)\\ \vdots\\ u_t^n(\vartheta)(x, y) \end{array}\right), ~ D(\nu)=\left(\begin{array}{cccc} d_1(\nu) & 0 & \cdots & 0\\ 0 & d_2(\nu) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & d_n(\nu)\\ \end{array}\right), $$ $d_i(\nu)>0,i=1,2,\cdots,n,~\nu \in \mathbb{R},~U_t(\vartheta)(x, y)=U(t+\vartheta, x, y),~\vartheta \in \left[-\tau,0\right],~U_t(x, y)\in \tilde{\mathscr{C}}:=C([-\tau,0], \tilde{\mathscr{X}_{\mathbb{C}}}),~L:\mathbb{R}\times \tilde{\mathscr{C}}\rightarrow\tilde{\mathscr{X}_{\mathbb{C}}}$ is a bounded linear operator, and ${F}: \tilde{\mathscr{C}} \times \mathbb{R} \rightarrow \tilde{\mathscr{X}_{\mathbb{C}}}$ is a $C^k~(k\ge 3)$ function such that ${F}\left(0, \nu\right)=0$ and $D_{\varphi}{F}\left(0, \nu\right)=0$, where $D_{\varphi}{F}\left(\varphi, \nu\right)$ stands for the Fr\'{e}chet derivative of ${F}\left(\varphi, \nu\right)$ with respect to $\varphi$. Here \begin{displaymath} \tilde{\mathscr{X}_{\mathbb{C}}}=\left\{\tilde{U}(x, y)\in {W}^{2,2}({\Omega}): \nabla\tilde{U}(x, y)\cdot \eta=0,(x, y)\in \partial \Omega \right\}, \end{displaymath} where $\eta$ is the outward unit normal vector. We use the complexification space $\tilde{\mathscr{X}_{\mathbb{C}}}$ because the complex form of the eigenfunctions is more convenient for shortening the expressions in the normal form derivation. Let us now explore the conditions for the existence of Hopf bifurcations in system (\ref{PFDEs}). Again, we use $x=r \cos \theta$, $y=r \sin \theta$, and the domain $\Omega$ is transformed into $\mathbb{D}=\{(r, \theta): 0 \le r<R, 0 \le \theta<2 \pi\}$. For simplicity, we still use the symbols of (\ref{PFDEs}). System (\ref{PFDEs}) can be written in polar coordinates as \begin{equation}\label{PFDEs r theta} \frac{\partial U(t, r, \theta)}{\partial t}=D(\nu) \Delta_{r \theta} U(t, r, \theta)+L(\nu)U_t(r, \theta)+F\left(U_t(r, \theta),\nu\right), \end{equation} and analogously, we define the phase space $$ {\mathscr{C}}:=C([-\tau,0], {\mathscr{X}_{\mathbb{C}}}), $$ where $$ {\mathscr{X}_{\mathbb{C}}}=\left\{\tilde{U}(r, \theta)\in {W}^{2,2}(\mathbb{D}): \partial_r \tilde{U}(R, \theta)=0,~\theta \in [0,2\pi) \right\}, $$ with the $r$-weighted inner product $\langle u(r,\theta),v(r,\theta)\rangle=\iint_{\mathbb{D}}r u(r,\theta) \bar{v}(r,\theta) \mathrm{d} r\mathrm{d}\theta$ for $u(r,\theta),~v(r,\theta) \in \mathscr{X}_{\mathbb{C}}$. Then $U_t(r, \theta)\in {\mathscr{C}}$.
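Before turning to the linearization, we note that the eigenvalues $\lambda_{nm}=\left(\alpha_{nm}/R\right)^2$ of Remark \ref{lambdanm} are readily obtained numerically from the zeros of $J_n^{\prime}$. The following short Python sketch is purely illustrative (it assumes the SciPy library is available; the radius $R=1$ and the orders and numbers of roots shown are arbitrary choices, not tied to any particular model): it computes the first few $\alpha_{nm}$ and checks the $r$-weighted orthogonality of Remark \ref{basis} by numerical quadrature.
\begin{verbatim}
# Illustrative sketch: roots alpha_{nm} of J_n', Neumann eigenvalues
# lambda_{nm} = (alpha_{nm}/R)^2, and a numerical check of the r-weighted
# orthogonality of J_n(alpha_{nm} r / R) on [0, R].
import numpy as np
from scipy import special, integrate

R = 1.0                                   # illustrative radius
for n in range(3):                        # Bessel orders n = 0, 1, 2
    alpha = special.jnp_zeros(n, 3)       # first three non-zero roots of J_n'
    lam = (alpha / R) ** 2                # eigenvalues lambda_{n1}, lambda_{n2}, lambda_{n3}
    print(n, alpha, lam)

# r-weighted inner product of J_1(alpha_{11} r/R) and J_1(alpha_{12} r/R):
a11, a12 = special.jnp_zeros(1, 2)
val, _ = integrate.quad(
    lambda r: r * special.jv(1, a11 * r / R) * special.jv(1, a12 * r / R), 0.0, R)
print(val)                                # approximately zero, confirming the orthogonality relation
\end{verbatim}
The trivial root $\alpha_{00}=0$, i.e.\ the constant eigenfunction $\phi_0$, is handled separately, as in Remark \ref{lambdanm}.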
Linearizing system (\ref{PFDEs r theta}) at the origin, we have \begin{equation}\label{linearized system} \frac{\partial U(t,r,\theta)}{\partial t}={D}(\nu)\Delta_{r \theta} U(t,r,\theta)+L(\nu) U_{t}(r,\theta). \end{equation} The characteristic equation of (\ref{linearized system}) is \begin{equation}\label{characteristic equation} \gamma \varphi -D(\nu) \Delta_{r \theta} \varphi-L(\nu)(\mathrm{e}^{\gamma\cdot}\varphi)=0, \end{equation} where $\mathrm{e}^{\gamma\cdot}(\vartheta)\varphi=\mathrm{e}^{\gamma\vartheta}\varphi$, for $\vartheta \in [-\tau,0]$, and $\gamma$ is an eigenvalue of equation (\ref{linearized system}). By Theorem \ref{F-B}, we find that solving (\ref{characteristic equation}) is equivalent to solving the following two groups of characteristic equations. The first group is given as \begin{equation}\label{characteristic equation1} \mathrm{det}\left[\gamma I +\lambda_{0m}D(\nu)-L(\nu)(\mathrm{e}^{\gamma\cdot}I)\right]=0,~m=0,1,2,\cdots. \end{equation} The second group of equations has multiple roots, since the eigenspace of $\lambda_{nm},~n=1,2,\cdots,~m=1,2,\cdots$, is two-dimensional. They are \begin{equation}\label{characteristic equation2} \mathrm{det}\left[\gamma I +\lambda_{nm}D(\nu)-L(\nu)(\mathrm{e}^{\gamma\cdot}I)\right]^2=0,~n=1,2,\cdots,~m=1,2,\cdots. \end{equation} In order to consider the Hopf bifurcation, we assume that the following conditions hold for some $\nu_{\hat{\lambda}},~\hat{\lambda}=\lambda_{0m}$ or $\lambda_{nm}$. \begin{itemize} \item [$\bm{\mathrm{(H_1)}}$] There exists a neighborhood $\mathscr{U}_1$ of $\nu_{\hat{\lambda}},~\hat{\lambda}=\lambda_{0m}$ such that for $\nu \in \mathscr{U}_1$, system (\ref{linearized system}) has a pair of simple complex conjugate eigenvalues $\alpha_{\hat{\lambda}}(\nu)\pm \mathrm{i}\omega_{\hat{\lambda}}(\nu)$ and the remaining eigenvalues of (\ref{linearized system}) have non-zero real part for $\nu \in \mathscr{U}_1$. \item [$\bm{\mathrm{(H_2)}}$] There exists a neighborhood $\mathscr{U}_2$ of $\nu_{\hat{\lambda}},~\hat{\lambda}=\lambda_{nm}$ such that for $\nu \in \mathscr{U}_2$, system (\ref{linearized system}) has a pair of repeated complex conjugate eigenvalues $\alpha_{\hat{\lambda}}(\nu)\pm \mathrm{i}\omega_{\hat{\lambda}}(\nu)$ and the remaining eigenvalues of (\ref{linearized system}) have non-zero real part for $\nu \in \mathscr{U}_2$. \item [$\bm{\mathrm{(H_3)}}$] $\alpha_{\hat{\lambda}}(\nu)\pm \mathrm{i}\omega_{\hat{\lambda}}(\nu)$ are continuously differentiable in $\nu$ with $\alpha_{\hat{\lambda}}(\nu_{\hat{\lambda}})=0,~\omega_{\hat{\lambda}}(\nu_{\hat{\lambda}})=\omega_{\hat{\lambda}}>0$. \end{itemize} \begin{remark}\label{equicariant} According to \citep{Golubitsky1989M}, problem (\ref{PFDEs r theta}) is $\Gamma$-equivariant in the spatial variable, with $\Gamma = O(2)$. Indeed, writing the right-hand side of system (\ref{PFDEs r theta}) as $\mathscr{F}(U(t,r,\theta))$, we have $$ \mathscr{F}(\kappa U(t,r,\theta))=\kappa \mathscr{F}(U(t,r,\theta)), \forall \kappa \in \Gamma. $$ Thus, in what follows, the image of any solution under the action of this group is still a solution of the equation. \end{remark} By \citep{Golubitsky1989M,Guo2013M,Faria1995J,Ruan2003J} and Remark \ref{equicariant}, if $\bm{\mathrm{(H_1)}}$ and $\bm{\mathrm{(H_3)}}$, or $\bm{\mathrm{(H_2)}}$ and $\bm{\mathrm{(H_3)}}$, hold, then, writing $\hat{\nu}=\min\left\{\nu_{\hat{\lambda}}\right\}$, Hopf bifurcations occur at the critical value $\nu=\hat{\nu}$.
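To illustrate how the critical values in $\bm{\mathrm{(H_1)}}$--$\bm{\mathrm{(H_3)}}$ can be located in practice, consider a hypothetical one-component equation with constant diffusion coefficient $d$, delayed linear part $L(\nu)\varphi=-a\varphi(-\tau)$, and the delay $\tau$ playing the role of the bifurcation parameter (these choices are ours, for illustration only, and are not one of the models studied later). Then (\ref{characteristic equation1}) and (\ref{characteristic equation2}) reduce to $\gamma+d\lambda_{nm}+a\mathrm{e}^{-\gamma\tau}=0$. Setting $\gamma=\mathrm{i}\omega$ with $\omega>0$ gives $\omega=\sqrt{a^2-d^2\lambda_{nm}^2}$ (provided $a>d\lambda_{nm}>0$) and the smallest critical delay $\tau_{\hat{\lambda}}=\arccos\left(-d\lambda_{nm}/a\right)/\omega$. The short Python sketch below evaluates these formulas for placeholder coefficients and verifies numerically that $\gamma=\mathrm{i}\omega$ is indeed a root at $\tau=\tau_{\hat{\lambda}}$.
\begin{verbatim}
# Illustrative sketch for a hypothetical scalar equation
#   du/dt = d*Laplacian(u) - a*u(t - tau),
# whose characteristic equation on the lambda_{nm} eigenspace reads
#   gamma + d*lambda + a*exp(-gamma*tau) = 0.
import numpy as np

d, a = 0.1, 1.0                       # placeholder coefficients
lam = 1.8412 ** 2                     # roughly lambda_{11} for R = 1 (alpha_{11} ~ 1.8412)
omega = np.sqrt(a**2 - (d * lam)**2)  # frequency of the purely imaginary roots
tau_c = np.arccos(-d * lam / a) / omega   # smallest critical delay

gamma = 1j * omega
residual = gamma + d * lam + a * np.exp(-gamma * tau_c)
print(omega, tau_c, abs(residual))    # residual ~ 0: i*omega is a root at tau = tau_c
\end{verbatim}
Transversality, i.e.\ $\bm{\mathrm{(H_3)}}$, can then be checked by differentiating the characteristic equation implicitly with respect to $\tau$ at $\tau=\tau_{\hat{\lambda}}$.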
When $\hat{\lambda}=\lambda_{0m},~m=0,1,2,\cdots$, the center subspace of the equilibrium is two-dimensional, so we call this a standard Hopf bifurcation. When $\hat{\lambda}=\lambda_{nm},~n=1,2,\cdots,~m=1,2,\cdots$, the center subspace of the equilibrium is four-dimensional, and we say this is a (real) equivariant Hopf bifurcation. In the coming section, we will study the equivariant Hopf bifurcation around $E^{*}$. \section{Hopf bifurcation analysis}\label{sec3} \subsection{Normal form for PFDEs} In this section, we will investigate the properties of the equivariant Hopf bifurcation around $E^{*}$, using the theory in \citep{Wu1996M,Faria2000J,Gils1986J,Wu1999J,Faria1995J}. Letting $\nu=\hat{\nu}+\mu$, where $\hat{\nu}$ is given in subsection \ref{sec2.2} and $\mu \in \mathbb{R}$, following the method proposed in \citep{Faria2000J}, and using $\mu$ as a new variable, the Taylor expansions of ${L}(\hat{\nu}+\mu)$ and ${D}(\hat{\nu}+\mu)$ are as follows $$ {L}(\hat{\nu}+\mu)=\tilde{L}_0+\mu \tilde{L}_1+\frac{1}{2}\mu^2 \tilde{L}_2+\cdots, $$ $$ {D}(\hat{\nu}+\mu)=\tilde{D}_0+\mu \tilde{D}_1+\frac{1}{2}\mu^2 \tilde{D}_2+\cdots, $$ where $\tilde{D}_0=D(\hat{\nu})$ and $\tilde{L}_0(\cdot)={L}(\hat{\nu})(\cdot)$ is a linear operator from $\mathscr{C}$ to $\mathscr{X}_{\mathbb{C}}$. Now, in the space $\mathscr{C}$, system (\ref{PFDEs r theta}) is equivalent to \begin{equation}\label{abstract functional differential equation} \frac{\mathrm{d} U(t)}{\mathrm{d} t}=\tilde{D}_0\Delta_{r \theta} U(t)+\tilde{L}_0 U_{t}+\tilde{F}\left(U_{t}, \mu\right), \end{equation} where $\tilde{F}(\varphi, \mu)=[{D}(\hat{\nu}+\mu)-\tilde{D}_0]\Delta_{r\theta}\varphi(0)+[L(\hat{\nu}+\mu)-\tilde{L}_0](\varphi)+F(\varphi,\hat{\nu}+\mu)$, and the linearized system of (\ref{abstract functional differential equation}) is \begin{equation}\label{Linearizing abstract functional differential equation} \frac{\mathrm{d} U(t)}{\mathrm{d} t}=\tilde{D}_0 \Delta_{r \theta} U(t)+\tilde{L}_0 U_{t}. \end{equation} \subsubsection{Decomposition of $\mathscr{C}$} Let $A:{\mathscr{C}} \rightarrow \mathscr{X}_{\mathbb{C}}$ denote the infinitesimal generator of the semigroup induced by the solutions of (\ref{Linearizing abstract functional differential equation}), and let $A^{*}$ be the adjoint operator of $A$; they satisfy \begin{equation}\label{A} A \varphi(\vartheta)=\left\{\begin{array}{cc}\varphi^{\prime}(\vartheta), & \vartheta \in[-\tau,0), \\ \tilde{D}_0 \Delta_{r \theta} \varphi(0)+\tilde{L}_0 \varphi, & \vartheta=0,\end{array}\right. \end{equation} \begin{equation}\label{Astar} A^{*} \psi(\varrho)=\left\{\begin{array}{cc}-\psi^{\prime}(\varrho), & \varrho \in(0,\tau], \\ -\tilde{D}_0 \Delta_{r \theta} \psi(0)-\tilde{L}_0 \psi, & \varrho=0.\end{array}\right. \end{equation} In addition, define a bilinear pairing \begin{equation}\label{bilinear product} \begin{aligned} (\psi, \varphi) &=\langle \varphi(0),\psi(0)\rangle-\int_{-\tau}^{0} \langle \varphi(\xi),\tilde{L}_0{\psi}(\xi+\tau)\rangle \mathrm{d} \xi. \\ \end{aligned} \end{equation} From the discussion in subsection \ref{sec2.2}, we know that $A$ has a pair of repeated purely imaginary eigenvalues $\pm \mathrm{i} \omega_{\hat{\lambda}} $ which are also eigenvalues of $A^{*}$. Let the central subspaces $P$ and $P^{*}$ be the generalized eigenspaces of $A$ and $A^{*}$ associated with $\Lambda_{0}=\{\pm \mathrm{i} \omega_{\hat{\lambda}} ,\pm \mathrm{i} \omega_{\hat{\lambda}} \}$, respectively. $P^{*}$ is the adjoint space of $P$.
An important task is to decompose the space $\mathscr{C}$ through the relationship of bases in $P$ and $P^{*}$, and we write $\mathscr{C}=P_{C N} \oplus Q_{S}$, where $P_{C N}$ is the central subspace and $Q_{S}$ is its complementary space. Define $$ \hat{\phi}_{nm}^c=\frac{\phi_{nm}^c}{\|\phi_{nm}^c\|_{2,2}},~ \hat{\phi}_{nm}^s=\frac{\phi_{nm}^s}{\|\phi_{nm}^s\|_{2,2}}. $$ \begin{lemma} Let the basis of $P$ is \begin{equation}\label{basis of P} \Phi_{r \theta}(\vartheta)=\left(\Phi_{r \theta}^1(\vartheta),\Phi_{r \theta}^2(\vartheta)\right)=\left(\Phi^1(\vartheta)\cdot \hat{\phi}_{nm}^c,\Phi^2(\vartheta)\cdot\hat{\phi}_{nm}^s\right),~\vartheta \in [-\tau,0], \end{equation} with $$ \begin{aligned} \Phi_{r \theta}^1(\vartheta)=\left(\Phi_1(\vartheta)\cdot \hat{\phi}_{nm}^c,\Phi_2(\vartheta)\cdot \hat{\phi}_{nm}^c\right)=\left(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}} \vartheta} \xi \hat{\phi}_{nm}^c, \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}} \vartheta} \bar{\xi}\hat{\phi}_{nm}^c\right),\\ \Phi_{r \theta}^2(\vartheta)=\left(\Phi_3(\vartheta)\cdot \hat{\phi}_{nm}^s,\Phi_4(\vartheta)\cdot \hat{\phi}_{nm}^s\right)=\left(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}} \vartheta} {\xi} \hat{\phi}_{nm}^s, \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}} \vartheta} \bar{\xi} \hat{\phi}_{nm}^s\right), \end{aligned} $$ where $\xi$ can be noted as $\xi=(p_{11},p_{12},\cdots,p_{1n})^{\mathrm{T}}$. A basis for the adjoint space $P^*$ is \begin{equation}\label{basis of P*} \Psi_{r \theta}(\varrho)=\left(\Psi_{r \theta}^1(\varrho),\Psi_{r \theta}^2(\varrho)\right)^{\mathrm{T}}=\left(\Psi^1(\varrho)\cdot\hat{\phi}_{nm}^c,\Psi^2(\varrho)\cdot\hat{\phi}_{nm}^s\right)^{\mathrm{T}},~\varrho \in [0,\tau], \end{equation} with $$ \begin{aligned} \Psi_{r \theta}^1(\varrho)=\left(\Psi_1(\varrho)\cdot \hat{\phi}_{nm}^c,\Psi_2(\varrho)\cdot \hat{\phi}_{nm}^c\right)^{\mathrm{T}}=\left({q}^{-1} \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}} \varrho} {{\xi}}^{\mathrm{T}}\hat{\phi}_{nm}^c, \bar{q}^{-1} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}} \varrho} \bar{\xi}^{\rm{T}} \hat{\phi}_{nm}^c\right)^{\rm{T}},\\ \Psi_{r \theta}^2(\varrho)=\left(\Psi_3(\varrho)\cdot \hat{\phi}_{nm}^s,\Psi_4(\varrho)\cdot \hat{\phi}_{nm}^s\right)^{\mathrm{T}}=\left({{q}}^{-1} \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}} \varrho} {\xi}^{\mathrm{T}} \hat{\phi}_{nm}^s, \bar{q}^{-1} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}} \varrho} \bar{{\xi}}^{\rm{T}} \hat{\phi}_{nm}^s\right)^{\rm{T}}, \end{aligned} $$ where $q$ can be obtained by $(\Psi_{r\theta},\Phi_{r\theta})=I$, according to the adjoint bilinear form defined in (\ref{bilinear product}). \end{lemma} One can decompose $U_t$ into two parts: \begin{equation}\label{fenjie} U_t=U_t^P+U_t^Q=\sum_{k=1}^2 \Phi_{r \theta}^k\left(\Psi_{r \theta}^k, U_t\right)+U_t^Q=\sum_{k=1}^2 \Phi_{r \theta}^k z_{r \theta}^k+y_t, \end{equation} where $z_{r \theta}^k=\left(\Psi_{r \theta}^k, U_t\right),~y_t \in Q_s$. Define $z =(z_{r\theta}^1,z_{r\theta}^2)\equiv (z_1,z_2,z_3,z_4)^{\mathrm{T}}$ as the local coordinate system on the four-dimensional center manifold, which is induced by the basis $\Phi_{r \theta}$. 
Then, we get that \begin{equation}\label{zt2} \begin{aligned} \dot{z}(t) &= \tilde{B} z(t)+\left(\begin{array}{cc} \left\langle \tilde{F}(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k+y,\mu), \Psi_{r \theta}^1(0)\right\rangle\\ \left\langle \tilde{F}(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k+y,\mu), \Psi_{r \theta}^2(0)\right\rangle \end{array}\right),\\ \frac{\mathrm{d}y}{\mathrm{d}t}&=A_Qy+(I-\pi)X_0\tilde{F}\left(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k +y,\mu\right), \end{aligned} \end{equation} where $A_Q$ is the restriction of $A$ on $Q_s$, $A_Q \varphi=A \varphi$ for $\varphi \in Q_s$, $\pi: \mathscr{C} \rightarrow P_{CN}$ is the projection, and $$ \tilde{B}=\left(\begin{array}{cccc} \mathrm{i}\omega_{\hat{\lambda}} & 0 & 0 & 0\\ 0 & -\mathrm{i}\omega_{\hat{\lambda}} & 0 & 0\\ 0 & 0 & \mathrm{i}\omega_{\hat{\lambda}} & 0\\ 0 & 0 & 0 & -\mathrm{i}\omega_{\hat{\lambda}}\\ \end{array} \right). $$ According to the formal Taylor expansion $\tilde{F}(\varphi,\mu)=\sum_{j\ge 2}\frac{1}{j!}\tilde{F}_j(\varphi,\mu)$, (\ref{zt2}) can be written as \begin{equation}\label{zt22} \begin{aligned} \dot{z}(t) &= \tilde{B} z(t)+\sum_{j\ge 2}\frac{1}{j!}f_j^1(z,y,\mu),\\ \frac{\mathrm{d}y}{\mathrm{d}t}&=A_Qy+\sum_{j\ge 2}\frac{1}{j!}f_j^2(z,y,\mu), \end{aligned} \end{equation} where $f_j=(f_j^1,f_j^2),~j\ge 2$ are defined by \begin{equation}\label{fj} \begin{aligned} f_j^1(z,y,\mu)&=\left(\begin{array}{cc} \left\langle \tilde{F}(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k+y,\mu), \Psi_{r \theta}^1(0)\right\rangle\\ \left\langle \tilde{F}(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k+y,\mu), \Psi_{r \theta}^2(0)\right\rangle \end{array}\right),\\ f_j^2(z,y,\mu)&=(I-\pi)X_0 \tilde{F}_j\left(\sum_{k=1}^2\Phi_{r \theta}^k z_{r\theta}^k +y,\mu\right). \end{aligned} \end{equation} Referring to \citep{Faria2000J}, we get the normal form on the center manifold of the origin is \begin{equation}\label{normal form} \begin{aligned} \dot{z}(t)&=\tilde{B}z(t)+\frac{1}{2}g_2^1(z,y,\mu)+\frac{1}{6}g_3^1(z,y,\mu)+h.o.t.,\\ \frac{\mathrm{d}y}{\mathrm{d}t}&=A_Qy+\frac{1}{2}g_2^2(z,y,\mu)+\frac{1}{6}g_3^2(z,y,\mu)+h.o.t., \end{aligned} \end{equation} where $g=(g_j^1,g_j^2),~j\ge 2$ is given by $$ g_j(z,y,\mu)=\bar{f}_j(z,y,\mu)-M_jU_j(z,\mu), $$ where $\bar{f}_j^1$ is the terms of order $j$ in $(z, y)$ obtained after the computation of normal forms up to order $j-1$, $U_j=(U_j^1,U_j^2)$ denotes the change of variables about the transformation from $f_j$ to $g_j$, and the operator $M_j=(M_j^1,M_j^2)$ is defined by \begin{equation}\label{Mj} \begin{aligned} M_j^1&: \mathbb{V}_j^5(\mathscr{C}^4) \rightarrow \mathbb{V}_j^5(\mathscr{C}^4),\\ M_j^1U_j^1&=D_zU_j^1(z,\mu)\tilde{B}z-\tilde{B}U_j^1(z,\mu),\\ M_j^2&: \mathbb{V}_j^5(Q_s) \rightarrow \mathbb{V}_j^5(\mathrm{Ker}\pi),\\ M_j^2U_j^2&=D_zU_j^2(z,\mu)\tilde{B}z-A_QU_j^2(z,\mu), \end{aligned} \end{equation} where $ \mathbb{V}_j^5(Y)$ denotes the space homogeneous polynomials of $z=(z_1,z_2,z_3,z_4)^{\mathrm{T}}$ and $\mu$ with coefficients in $\mathscr{C}^4$. It is easy to verify that \begin{equation}\label{Mj1z} M_j^1(\mu z^p e_k)=\mathrm{i} \omega_{\hat{\lambda}} \mu\left(p_1-p_2+p_3-p_4+(-1)^k\right)z^p e_k,~ |p|=j-1, \end{equation} where $j \ge 2,~ k=1,2,3,4$, and $\{e_1,e_2,e_3,e_4\}$ is the canonical basis for $\mathscr{C}^4$. 
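The kernels appearing in the next two subsubsections can be read off directly from (\ref{Mj1z}): a monomial $\mu z^p e_k$ with $|p|=j-1$ (and, by the same computation, a monomial $z^q e_k$ with $|q|=j$) belongs to $\mathrm{Ker}(M_j^1)$ exactly when $p_1-p_2+p_3-p_4+(-1)^k=0$ (respectively $q_1-q_2+q_3-q_4+(-1)^k=0$). The following Python sketch is only a small bookkeeping aid, independent of the model and of the operators above; it enumerates these resonant monomials and is one way to generate and double-check the bases of $\mathrm{Ker}(M_2^1)$ and $\mathrm{Ker}(M_3^1)$ used below.
\begin{verbatim}
# Bookkeeping sketch: enumerate the resonant monomials z^q e_k of a given
# degree, i.e. those with q1 - q2 + q3 - q4 + (-1)^k = 0; together with the
# factor mu (degree j-1 in z) they span Ker(M_j^1).
from itertools import product

def resonant(degree):
    out = []
    for q in product(range(degree + 1), repeat=4):
        if sum(q) != degree:
            continue
        s = q[0] - q[1] + q[2] - q[3]
        for k in (1, 2, 3, 4):
            if s + (-1) ** k == 0:
                out.append((q, k))
    return out

print(len(resonant(1)), resonant(1))   # 8 monomials z_i e_k: the mu-terms of Ker(M_2^1)
print(len(resonant(3)), resonant(3))   # 24 cubic monomials: the basis of Ker(M_3^1)
\end{verbatim}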
\subsubsection{Calculation of $g_2^1(z,0,\mu)$} For $j=2$, similar to the results in \citep{Budzinskiy2017J,Wu1999J}, we have $$ \mathrm{Ker}(M_2^1)=\mathrm{span}\left\{\left(\begin{array}{cccc} \mu z_1\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ \mu z_2\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ \mu z_3\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ \mu z_4 \end{array} \right), \left(\begin{array}{cccc} \mu z_3\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ \mu z_4\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ \mu z_1\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ \mu z_2 \end{array} \right) \right\}, $$ and hence $$ \begin{aligned} &\mathrm{Ker}(M_2^1) \cap \mathrm{span}\left\{\mu z^p e_k; |p|=1,k=1,2,3,4 \right\}\\ =&\mathrm{span}\left\{\mu z_1 e_1,\mu z_3 e_1,\mu z_2 e_2,\mu z_4 e_2,\mu z_1 e_3,\mu z_3 e_3,\mu z_2 e_4,\mu z_4 e_4\right\}. \end{aligned} $$ Therefore, the second-order term of $\tilde{F}(U_t,\mu)$ is \begin{equation}\label{F2} \tilde{F}_2(U_t,\mu)= \mu \tilde{D}_1 \Delta U_t(0)+\mu \tilde{L}_1 U_t+F_2(U_t,\mu), \end{equation} and \begin{equation}\label{F2z} \begin{aligned} \tilde{F}_2(z,y,\mu)&=\tilde{F}(\Phi_{r\theta} z +y,\mu)\\ &=\mu \tilde{D}_1 \Delta \left(\Phi_{r\theta}(0) z +y(0)\right)+\mu \tilde{L}_1 (\Phi_{r\theta} z +y)+F_2(\Phi_{r\theta} z +y,\mu). \end{aligned} \end{equation} Since $F(0,\mu)=0,~DF(0,\mu)=0$, $F_2(\Phi_{r\theta} z+y,\mu)$ can be written as follows \begin{equation}\label{F2phi} \begin{aligned} F_2(\Phi_{r\theta} z+y,\mu)&=F_2(\Phi_{r\theta} z+y,0)\\ &=\sum_{p_1+p_2+p_3+p_4=2} A_{p_1p_2p_3p_4} \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4} z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4}+S_2(\Phi_{r\theta} z,y)+o(|y|^2), \end{aligned} \end{equation} where $S_2$ collects the terms that are linear in $y$, which can be calculated by $DF_2(\Phi_{r\theta} z+y,0)|_{y=0}(y)$.
By (\ref{zt22})-(\ref{F2phi}), noticing the fact $$ \int_0^R\int_0^{2\pi} r \hat{\phi}_{nm}^c \hat{\phi}_{nm}^s \mathrm{d} \theta \mathrm{d} r=1, $$ and the relationship of $\Phi_{r\theta}$ and $\Psi_{r\theta}$, we obtain \begin{equation}\label{g21} \frac{1}{2} g_2^1(z,0,\mu)=\frac{1}{2} \mathrm{Proj}_{\mathrm{Ker}(M_2^1)}f_2^1(z,0,\mu)=\left( \begin{array}{cccc} B_{11}\mu z_1\\ \overline{B_{11}} \mu z_2\\ B_{11}\mu z_3\\ \overline{B_{11}} \mu z_4 \end{array} \right), \end{equation} with \begin{equation}\label{B11B13} \begin{aligned} &B_{11}=\frac{1}{2}\overline{\Psi_1(0)}(-\lambda_{nm} \tilde{D}_1\Phi_1(0)+\tilde{L}_1\Phi_1).\\ \end{aligned} \end{equation} \subsubsection{Calculation of $g_3^1(z,0,\mu)$} For $j=3$, we have $$ \begin{aligned} \mathrm{Ker}(M_3^1)=\mathrm{span}\left\{\left(\begin{array}{cccc} z_1^2z_2\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_1^2z_2\\ 0 \end{array} \right), \left(\begin{array}{cccc} z_1^2z_4\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_1^2z_4\\ 0 \end{array} \right), \left(\begin{array}{cccc} z_3^2z_2\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_3^2z_2\\ 0 \end{array} \right), \left(\begin{array}{cccc} z_3^2z_4\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_3^2z_4\\ 0 \end{array} \right),\right.\\ \left(\begin{array}{cccc} 0\\ z_2^2z_1\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_2^2z_1 \end{array} \right), \left(\begin{array}{cccc} 0\\ z_2^2z_3\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_2^2z_4 \end{array} \right), \left(\begin{array}{cccc} 0\\ z_4^2z_1\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_4^2z_1 \end{array} \right), \left(\begin{array}{cccc} 0\\ z_4^2z_3\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_4^2z_3 \end{array} \right),\\ \left.\left(\begin{array}{cccc} z_1z_2z_3\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_1z_2z_3\\ 0 \end{array} \right), \left(\begin{array}{cccc} z_1z_3z_4\\ 0\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ z_1z_3z_4\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ z_1z_2z_4\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_1z_2z_4 \end{array} \right),\left(\begin{array}{cccc} 0\\ z_2z_3z_4\\ 0\\ 0 \end{array} \right), \left(\begin{array}{cccc} 0\\ 0\\ 0\\ z_2z_3z_4 \end{array} \right) \right\}, \end{aligned} $$ see \citep{Budzinskiy2017J,Wu1999J} again. Then $$ \mathrm{Ker}(M_3^1) \cap \mathrm{span}\left\{\mu z^p e_k; |p|=2,k=1,2,3,4 \right\} = \emptyset . $$ We define \begin{equation}\label{f31} \bar{f}_3^1(z,0,\mu)=f_3^1(z,0,\mu)+\frac{3}{2}\left[D_z f_2^1(z,0,\mu)U_2^1(z,\mu)+D_y f_2^1(z,0,\mu)U_2^2(z,\mu)-D_zU_2^1(z,\mu)g_2^1(z,0,\mu)\right]. \end{equation} According to \citep{Faria2000J}, the normal form up to the third order is $$ \begin{aligned} g_3^1(z,0,\mu)&=\mathrm{Proj}_{\mathrm{Ker}(M_3^1)}\bar{f}_3^1(z,0,\mu)\\ &=\mathrm{Proj}_{\mathrm{Ker}(M_3^1)}\bar{f}_3^1(z,0,0)+o(\mu^2|x|). \end{aligned} $$ Since $g_2^1(z,0,0)=0$, we only need to calcalate three parts $$ \mathrm{Proj}_{\mathrm{Ker}(M_3^1)} f_3^1(z,0,0), $$ $$ \mathrm{Proj}_{\mathrm{Ker}(M_3^1)} \left(D_z f_2^1(z,0,0)U_2^1(z,0)\right), $$ and $$ \mathrm{Proj}_{\mathrm{Ker}(M_3^1)}\left(D_y f_2^1(z,0,0)U_2^2(z,0)\right). $$ Through calculation, we obtain the main results as follows. Please refer to Appendix \ref{ Appendix A} for the specific calculation process. 
\begin{equation}\label{Part1} \frac{1}{3!} \mathrm{Proj}_{\mathrm{Ker}(M_3^1)}f_3^1(z,0,0)=\left( \begin{array}{cccc} C_{2001}z_1^2z_4+C_{1110}z_1z_2z_3\\ \overline{C_{2001}}z_2^2z_3+\overline{C_{1110}}z_1z_2z_4\\ C_{2001}z_3^2z_2+C_{1110}z_1z_3z_4\\ \overline{C_{2001}}z_4^2z_1+\overline{C_{1110}}z_2z_3z_4 \end{array} \right), \end{equation} where \begin{equation}\label{C} \begin{aligned} &C_{2001}=\frac{1}{6}\overline{\Psi_1(0)}A_{2001}\mathrm{M}_{22},~C_{1110}=\frac{1}{6}\overline{\Psi_1(0)}A_{1110}\mathrm{M}_{22}. \end{aligned} \end{equation} \begin{equation}\label{Part2} \begin{aligned} \frac{1}{3!}\mathrm{Proj}_{\mathrm{Ker(M_3^1)}}\left(D_zf_2^1(z,0,0)U_2^1(z,0)\right)=\bf{0}. \end{aligned} \end{equation} \begin{equation}\label{Part3} \begin{aligned} &\frac{1}{3!}\mathrm{Proj}_{\mathrm{Ker(M_3^1)}}\left(D_yf_2^1(z,0,0)U_2^2(z,0)\right)\\ =&\left( \begin{array}{cccc} E_{2100}z_1^2z_2+E_{2001}z_1^2z_4+E_{0120}z_3^2z_2+E_{0021}z_3^2z_4+E_{1110}z_1z_2z_3+E_{1011}z_1z_3z_4\\ \overline{E_{2100}}z_1z_2^2+\overline{E_{2001}}z_2^2z_3+\overline{E_{0120}}z_4^2z_1+\overline{E_{0021}}z_4^2z_3+\overline{E_{1110}}z_1z_2z_4+\overline{E_{1011}}z_2z_3z_4\\ E_{2100}z_3^2z_4+E_{2001}z_3^2z_2+E_{0120}z_1^2z_4+E_{0021}z_1^2z_2+E_{1110}z_1z_3z_4+E_{1011}z_1z_2z_3\\ \overline{E_{2100}}z_3z_4^2+\overline{E_{2001}}z_4^2z_1+\overline{E_{0120}}z_2^2z_3+\overline{E_{0021}}z_1^2z_2+\overline{E_{1110}}z_2z_3z_4+\overline{E_{1011}}z_1z_2z_4 \end{array} \right), \end{aligned} \end{equation} where $$ \begin{aligned} E_{2100}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^c\left(S_{yz_1}(h_{0k1100}^{ccs})+S_{yz_2}(h_{0k2000}^{ccs})\right)\right],\\ E_{2001}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_1}(h_{0k1001}^{ccs})+\mathrm{M}_{2nkss}^cS_{yz_4}(h_{2nk2000}^{css})\right],\\ E_{0120}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_2}(h_{0k0020})^{ccs})+\mathrm{M}_{2nkss}^cS_{yz_3}(h_{2nk0110}^{css})\right],\\ E_{0021}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{2nkss}^c\left(S_{yz_3}(h_{2nk0011}^{css})+S_{yz_4}(h_{2nk0020}^{css})\right)\right],\\ \end{aligned} $$ $$ \begin{aligned} E_{1110}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^c\left(S_{yz_1}(h_{0k0110}^{ccs})+S_{yz_2}(h_{0k1010}^{ccs})\right)+\mathrm{M}_{2nkss}^cS_{yz_3}(h_{2nk1100}^{css})\right],\\ E_{1011}=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_1}(h_{0k0011}^{ccs}) +\mathrm{M}_{2nkss}^c\left(S_{yz_3}(h_{2nk1001}^{css})+S_{yz_4}(h_{2nk1010}^{css})\right)\right].\\ \end{aligned} $$ Hence, by (\ref{f31}), (\ref{Part1}), (\ref{Part2}),and (\ref{Part3}), we have \begin{equation}\label{g31} \begin{aligned} \frac{1}{3!}g_3^1(z,0,0)=&\frac{1}{3!}\mathrm{Proj}_{\mathrm{Ker(M_3^1)}}\bar{f}_3^1(z,0,0)\\ =&\left( \begin{array}{cccc} B_{2100}z_1^2z_2+B_{2001}z_1^2z_4+B_{0120}z_3^2z_2+B_{0021}z_3^2z_4+B_{1110}z_1z_2z_3+B_{1011}z_1z_3z_4\\ \overline{B_{2100}}z_1z_2^2+\overline{B_{2001}}z_2^2z_3+\overline{B_{0120}}z_4^2z_1+\overline{B_{0021}}z_4^2z_3+\overline{B_{1110}}z_1z_2z_4+\overline{B_{1011}}z_2z_3z_4\\ B_{2100}z_3^2z_4+B_{2001}z_3^2z_2+B_{0120}z_1^2z_4+B_{0021}z_1^2z_2+B_{1110}z_1z_3z_4+B_{1011}z_1z_2z_3\\ \overline{B_{2100}}z_3z_4^2+\overline{B_{2001}}z_4^2z_1+\overline{B_{0120}}z_2^2z_3+\overline{B_{0021}}z_1^2z_2+\overline{B_{1110}}z_2z_3z_4+\overline{B_{1011}}z_1z_2z_4 \end{array} \right), \end{aligned} \end{equation} with $$ B_{p_1p_2p_3p_4}=C_{p_1p_2p_3p_4}+\frac{3}{2}(D_{p_1p_2p_3p_4}+E_{p_1p_2p_3p_4}). 
$$ \subsubsection{The normal form} Based on the above analysis, the normal form truncated to the third order on the center manifold can be summarized as follows: \begin{equation}\label{Normal form} \begin{aligned} &\dot{z}_1=\mathrm{i}\omega_{\hat{\lambda}}z_1+B_{11}z_1\mu+B_{2100}z_1^2z_2+B_{2001}z_1^2z_4+B_{0120}z_3^2z_2+B_{0021}z_3^2z_4+B_{1110}z_1z_2z_3+B_{1011}z_1z_3z_4,\\ &\dot{z}_2=-\mathrm{i}\omega_{\hat{\lambda}}z_2+\overline{B_{11}}z_2\mu+\overline{B_{2100}}z_1z_2^2+\overline{B_{2001}}z_2^2z_3+\overline{B_{0120}}z_4^2z_1+\overline{B_{0021}}z_4^2z_3+\overline{B_{1110}}z_1z_2z_4+\overline{B_{1011}}z_2z_3z_4,\\ &\dot{z}_3=\mathrm{i}\omega_{\hat{\lambda}}z_3+B_{11}z_3\mu+B_{2100}z_3^2z_4+B_{2001}z_3^2z_2+B_{0120}z_1^2z_4+B_{0021}z_1^2z_2+B_{1110}z_1z_3z_4+B_{1011}z_1z_2z_3,\\ &\dot{z}_4=-\mathrm{i}\omega_{\hat{\lambda}}z_4+\overline{B_{11}}z_4\mu+\overline{B_{2100}}z_3z_4^2+\overline{B_{2001}}z_4^2z_1+\overline{B_{0120}}z_2^2z_3+\overline{B_{0021}}z_1z_2^2+\overline{B_{1110}}z_2z_3z_4+\overline{B_{1011}}z_1z_2z_4. \end{aligned} \end{equation} \begin{lemma}\label{reduce} By \citep{Gils1986J}, the normal form truncated to the third order can be reduced to \begin{equation}\label{z1z2z3z4} \begin{aligned} &\dot{z}_1=\mathrm{i}\omega_{\hat{\lambda}}z_1+B_{11}z_1\mu+B_{2001}z_1^2z_4+B_{1110}z_1z_2z_3,\\ &\dot{z}_2=-\mathrm{i}\omega_{\hat{\lambda}}z_2+\overline{B_{11}}z_2\mu+\overline{B_{2001}}z_3z_2^2+\overline{B_{1110}}z_1z_2z_4,\\ &\dot{z}_3=\mathrm{i}\omega_{\hat{\lambda}}z_3+B_{11}z_3\mu+B_{2001}z_3^2z_2+B_{1110}z_1z_3z_4,\\ &\dot{z}_4=-\mathrm{i}\omega_{\hat{\lambda}}z_4+\overline{B_{11}}z_4\mu+\overline{B_{2001}}z_1z_4^2+\overline{B_{1110}}z_2z_3z_4. \end{aligned} \end{equation} \end{lemma} The proof is given in Appendix \ref{Proof of Lemma}. Introducing two sets of polar coordinates \begin{equation}\label{chi} \begin{aligned} z_1=\rho_1\mathrm{e}^{\mathrm{i}\chi_1},~z_4=\rho_1\mathrm{e}^{-\mathrm{i}\chi_1},\\ z_3=\rho_2\mathrm{e}^{\mathrm{i}\chi_2},~z_2=\rho_2\mathrm{e}^{-\mathrm{i}\chi_2}, \end{aligned} \end{equation} we obtain \begin{equation}\label{rho} \begin{aligned} &\dot{\rho}_1=(a_1\mu+a_2\rho_1^2+a_3\rho_2^2)\rho_1,\\ &\dot{\chi}_1=\omega_{\hat{\lambda}},\\ &\dot{\rho}_2=(a_1\mu+a_2\rho_2^2+a_3\rho_1^2)\rho_2,\\ &\dot{\chi}_2=\omega_{\hat{\lambda}}, \end{aligned} \end{equation} with $$ a_1=\mathrm{Re}\{B_{11}\},~a_2=\mathrm{Re}\{B_{2001}\},~a_3=\mathrm{Re}\{B_{1110}\}. $$ Based on the above analysis, by \citep{Gils1986J,Guckenheimer1983M}, system (\ref{rho}) has six unfoldings (see Table \ref{tab1}), and their dynamical classifications for $a_1\mu<0$ and $a_1\mu>0$ are shown in Table \ref{tab2}. \begin{table} \caption{The six unfoldings of system (\ref{rho}).
} \label{tab1} \begin{ruledtabular} \begin{tabular}{ccccccc} {Case} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $a_2$ & -- & -- & -- & + & + & + \\ $a_2+a_3$ & -- & -- & + & -- & + & +\\ $a_2-a_3$ & -- & + & -- & + & + & -- \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{The dynamical classifications of system (\ref{rho}) in each case.} \begin{ruledtabular} \label{tab2} \begin{tabular}{ccccccc} {} & {Case 1} & {Case 2}& {Case 3} & {Case 4} & {Case 5} & {Case 6} \\ \hline {$a_1\mu<0$} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case11.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case21.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case31.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case41.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case51.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case61.pdf}}}\end{minipage} \\ {$a_1\mu>0$} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case12.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case22.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case32.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case42.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case52.pdf}}}\end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\raisebox{-.5\height}{\centerline{\includegraphics[width=0.9\linewidth]{case62.pdf}}}\end{minipage} \end{tabular} \end{ruledtabular} \end{table} Therefore, we can draw the following conclusions. \begin{theorem}\label{rotating and standing} We are mainly concerned with the properties corresponding to the following four equilibrium points of (\ref{rho}).\\ $\mathrm{(i)}$~ $(\rho_1,\rho_2)=(0,0)$ corresponds to the origin in the four-dimensional phase space, that is, a stationary solution, which is spatially homogeneous.\\ $\mathrm{(ii)}$~ $(\rho_1,\rho_2)=(0,\sqrt{\frac{-a_1\mu}{a_2}})$ corresponds to a periodic solution in the plane of $(z_2,z_3)$, which is spatially inhomogeneous. At this point, the periodic solution restricted to the center subspace has the following approximate form $$ U_t(\vartheta)(r,\theta) \approx \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t+n\theta) {e}_i}, $$ where $e_i$ is the $i$th unit coordinate vector of $\mathbb{R}^n$. When $a_1\mu>0(<0),~a_2<0(>0)$, the system undergoes a rotating wave. Only when $a_1\mu>0,~a_2+a_3<0,~a_2-a_3>0$ is the periodic solution of the equivariant Hopf bifurcation orbitally asymptotically stable.\\ $\mathrm{(iii)}$~ $(\rho_1,\rho_2)=(\sqrt{\frac{-a_1\mu}{a_2}},0)$ corresponds to a periodic solution in the plane of $(z_1,z_4)$, which is spatially inhomogeneous.
At this point, the periodic solution restricted to the center subspace has the following approximate form $$ U_t(\vartheta)(r,\theta) \approx \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t-n\theta) {e}_i}. $$ When $a_1\mu>0(<0),~a_2<0(>0)$, the system undergoes a rotating wave in the opposite direction to that in (ii). Its stability conditions are the same as in $\mathrm{(ii)}$.\\ $\mathrm{(iv)}$~ $(\rho_1,\rho_2)=(\sqrt{\frac{-a_1\mu}{a_2+a_3}},\sqrt{\frac{-a_1\mu}{a_2+a_3}})$ corresponds to a periodic solution, which is spatially inhomogeneous. At this point, the periodic solution restricted to the center subspace has the following approximate form $$ U_t(\vartheta)(r,\theta) \approx \sum_{i=1}^n{4|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2+a_3}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t)\cos(n\theta) {e}_i}. $$ When $a_1\mu>0(<0),~a_2+a_3<0(>0)$, the system undergoes a standing wave. Only when $a_1\mu>0,~a_2+a_3<0,~a_2-a_3<0$ is the periodic solution of the equivariant Hopf bifurcation orbitally asymptotically stable. \end{theorem} The proof is given in Appendix \ref{Proof of Theorem}. \begin{corollary}\label{qici} For $\lambda_{0m},~m=0,1,2,\cdots$, the corresponding characteristic function is $\phi_{0m}^{c}$. Define $\hat{\phi}_{0m}^c=\frac{\phi_{0m}^{c}}{\|\phi_{0m}^{c}\|_{2,2}}$. According to \citep{Faria2000J}, after a calculation process similar to the one shown above, the following normal form on the center manifold is obtained: $$ \begin{aligned} &\dot{z}_1=\mathrm{i}\omega_{\hat{\lambda}}z_1+{B}_{11}^{*}z_1\mu+{B}_{2100}^{*}z_1^2z_2+\cdots,\\ &\dot{z}_2=-\mathrm{i}\omega_{\hat{\lambda}}z_2+\overline{{B}^{*}_{11}}z_2\mu+\overline{{B}^{*}_{2100}}z_1z_2^2+\cdots. \end{aligned} $$ Introducing a set of polar coordinates, we get $$ \dot{\rho}=({a}_{1}^{*}\mu+{a}_2^{*}\rho^2)\rho+o(\mu^2\rho+|(\rho,\mu)|^4), $$ with $$ a_1^{*}=\mathrm{Re}\{B_{11}^{*}\},~a_2^{*}=\mathrm{Re}\{B_{2100}^{*}\}, $$ where the specific representation of $B_{11}^{*}$ and $B_{2100}^{*}$ is shown in \citep{Faria2000J}. Moreover, we get that\\ $\mathrm{(i)}$~ When ${a}_{2}^{*}<0(>0)$, the periodic solution is orbitally asymptotically stable (unstable).\\ $\mathrm{(ii)}$~ When ${a}_{1}^{*}{a}_{2}^{*}<0(>0)$, the bifurcation is supercritical (subcritical). \end{corollary} \subsection{Explicit formulas for a class of reaction-diffusion models with discrete time delay}\label{Explicit formulas} In a specific model, it is necessary to calculate $A_{p_1p_2p_3p_4},~S_{y(0)z_k},~S_{y(-1)z_k},~k=1,2,3,4$, to determine the explicit expression of $B_{p_1p_2p_3p_4}$ in the normal form. Therefore, in order to provide a more general symbolic expression, we will consider a class of reaction-diffusion systems with discrete time delay defined on a disk, as follows: \begin{equation}\label{time-delayed reaction-diffusion system r theta} \left\{\begin{array}{l} \frac{\partial u(t, r, \theta)}{\partial t}=d_{1} \Delta_{r \theta} u(t, r, \theta)+F^{(1)}(u(t, r, \theta), v(t, r, \theta)),~(r, \theta) \in \mathbb{D},~t>0, \\ \frac{\partial v(t, r, \theta)}{\partial t}=d_{2} \Delta_{r \theta} v(t, r, \theta)+F^{(2)}(u(t, r, \theta), v(t, r, \theta), u(t-\tau, r, \theta), v(t-\tau, r, \theta)),~(r, \theta) \in \mathbb{D},~t>0, \\ \partial_{r} u(\cdot, R, \theta)=\partial_{r} v(\cdot, R, \theta)=0,~\theta \in [0,2\pi). \end{array}\right.
\end{equation} This type of model covers, for instance, some predator-prey systems and chemical reaction models. While in practice there are many ways to introduce the time delay $\tau$, for simplicity we illustrate the key steps of the analysis by including $\tau$ only in the second equation. The calculation for other types of systems proceeds analogously. Assume that the model has a positive equilibrium point $E^*(u^*,v^*)$ and select the time delay $\tau$ as the bifurcation parameter. Letting $\bar{u}(t, r, \theta)=u(\tau t, r, \theta)-u^{*}, \bar{v}(t, r, \theta)=v(\tau t, r, \theta)-v^{*}$, we drop the bars for simplicity. Then system (\ref{time-delayed reaction-diffusion system r theta}) can be transformed into \begin{equation}\label{time-delayed reaction-diffusion system ijkl} \left\{\begin{aligned} \frac{\partial u(t, r, \theta)}{\partial t}=& \tau d_{1} \Delta_{r \theta} u(t, r, \theta)+\tau\left[a_{11}\left(u(t, r, \theta)+u^{*}\right)+a_{12}\left(v(t, r, \theta)+v^{*}\right)\right] \\ &+\tau \sum_{i+j \geq 2} \frac{1}{i ! j !} F_{i j}^{(1)}(0,0) u^{i}(t, r, \theta) v^{j}(t, r, \theta), \\ \frac{\partial v(t, r, \theta)}{\partial t}=& \tau d_{2} \Delta_{r \theta} v(t, r, \theta)+\tau\left[ a_{21}\left(u(t, r, \theta)+u^{*}\right)+a_{22}\left(v(t, r, \theta)+v^{*}\right) \right] \\ +& \tau\left[ b_{21}\left(u(t-1, r, \theta)+u^{*}\right)+b_{22}\left(v(t-1, r, \theta)+v^{*}\right) \right] \\ +& \tau \sum_{i+j+k+l \geq 2} \frac{1}{i ! j ! k ! l !} F_{i j k l}^{(2)}(0,0,0,0) u^{i}(t, r, \theta) v^{j}(t, r, \theta) u^{k}(t-1, r, \theta) v^{l}(t-1, r, \theta), \end{aligned}\right. \end{equation} with $$ \begin{gathered} \left(\begin{array}{ll} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right)=\left(\begin{array}{cc} \frac{\partial F^{(1)}\left(u^{*}, v^{*}\right)}{\partial u(t)} & \frac{\partial F^{(1)}\left(u^{*}, v^{*}\right)}{\partial v(t)} \\ \frac{\partial F^{(2)}\left(u^{*}, v^{*},u^{*}, v^{*}\right)}{\partial u(t)} & \frac{\partial F^{(2)}\left(u^{*}, v^{*},u^{*}, v^{*}\right)}{\partial v(t)} \end{array}\right), \\ \left(\begin{array}{ll} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array}\right)=\left(\begin{array}{cc} 0 & 0\\ \frac{\partial F^{(2)}\left(u^{*}, v^{*},u^{*}, v^{*}\right)}{\partial u(t-\tau)} & \frac{\partial F^{(2)}\left(u^{*}, v^{*},u^{*}, v^{*}\right)}{\partial v(t-\tau)} \end{array}\right),\\ F_{i j}^{(1)}=\frac{\partial^{i+j} F^{(1)}}{\partial u^{i} \partial v^{j}}(0,0), \\ F_{i j k l}^{(2)}=\frac{\partial^{i+j+k+l} F^{(2)}}{\partial u^{i} \partial v^{j} \partial u^{k}(t-\tau) \partial v^{l}(t-\tau)}(0,0,0,0). \end{gathered} $$ Letting $\tau=\hat{\tau}+\mu$, where $\mu \in \mathbb{R}$ and $\hat{\tau}$ is a critical value at which a Hopf bifurcation occurs, system (\ref{time-delayed reaction-diffusion system ijkl}) can be written in an abstract form like (\ref{abstract functional differential equation}), where the operators ${L}_0$ and $\tilde{F}$ are given, respectively, by $$ L_0(\varphi)=(\hat{\tau}+\mu)\left(\begin{array}{c} a_{11} \varphi_{1}(0)+a_{12} \varphi_{2}(0) \\ b_{21} \varphi_{1}(-1)+b_{22} \varphi_{2}(-1)+a_{21} \varphi_{1}(0)+a_{22} \varphi_{2}(0) \end{array}\right), $$ $$ \tilde{F}(\varphi, \mu)=(\hat{\tau}+\mu)\left(\begin{array}{c}\sum_{i+j \geq 2} \frac{1}{i ! j !} F_{i j}^{(1)}(0,0) \varphi_{1}^{i}(0) \varphi_{2}^{j}(0) \\ \sum_{i+j+k+l \geq 2} \frac{1}{i ! j ! k !
l !} F_{i j k l}^{(2)}(0,0,0,0) \varphi_{1}^{i}(0) \varphi_{2}^{j}(0) \varphi_{1}^{k}(-1) \varphi_{2}^{l}(-1)\end{array}\right), $$ $$ \varphi=(\varphi_1,\varphi_2)^{\rm{T}} \in \mathscr{C}. $$ Choosing $\xi=(1,p_0)^{T}$, with $p_0=\frac{\mathrm{i}\omega_{\hat{\lambda}}+d_1\lambda_{nm}-a_{11}}{a_{12}}$, we get that the basis of $P$ is $$ \Phi_{r\theta}(\vartheta)=\left(\begin{array}{rrrr} \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^c & \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^c & \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^s & \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^s\\ p_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^c & \bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^c & p_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^s & \bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}\vartheta}\hat{\phi}_{nm}^s \end{array}\right), $$ and the basis of $P^*$ can also be obtained. The explicit formulas of $A_{p_1p_2p_3p_4},~S_{y(0)z_k},~S_{y(-1)z_k},~k=1,2,3,4$, and $h_{jp_1p_2p_3p_4}$ in the calculation of the normal form are shown in Appendices \ref{A_{p_1p_2p_3p_4}}, \ref{S} and \ref{h}. \section{Numerical simulations}\label{sec4} \subsection{Numerical example 1: A diffusive Brusselator model with delayed feedback}\label{sec5.1} In~\citep{Zuo2012J}, Zuo and Wei studied a diffusive Brusselator model with delayed feedback. We put this model on a disk with Neumann boundary conditions and perform some numerical simulations. The model in polar coordinates now becomes \begin{equation}\label{Brusselator} \left\{\begin{array}{l} \frac{\partial u(t, r, \theta)}{\partial t}=d_{1} \Delta_{r \theta} u(t, r, \theta)+a-(b+1) u(t, r, \theta)+u^2(t, r, \theta) v(t, r, \theta),~(r, \theta) \in \mathbb{D},~t>0, \\ \frac{\partial v(t, r, \theta)}{\partial t}=d_{2} \Delta_{r \theta} v(t, r, \theta)+b u(t, r, \theta)-u^2(t, r, \theta) v(t, r, \theta)+g (v(t-\tau, r, \theta)- v(t, r, \theta)),~(r, \theta) \in \mathbb{D},~t>0,\\ \partial_{r} u(\cdot, R, \theta)=\partial_{r} v(\cdot, R, \theta)=0,~\theta \in [0,2\pi). \end{array}\right. \end{equation} Fixing $a=1,~b=1.5,~g=2,~d_1=2,~d_2=5,~R=10$, we get that the unique positive equilibrium solution of the model is $(1,1.5)$. When $\hat{\lambda}=\lambda_{00}$, $\omega \approx 0.6166$ and $\hat{\tau} \approx 0.7128$. By the standard Hopf bifurcation analysis, we get that when $\tau \in \left[0,0.7128\right)$, $E^*$ is locally asymptotically stable. When $\tau$ increases from zero and crosses the critical value $\hat{\tau} \approx 0.7128$, a family of spatially homogeneous periodic solutions bifurcates from $E^*$ (see Figure \ref{unstable2} at $\tau=2$). By Corollary \ref{qici}, the Hopf bifurcation is supercritical and the periodic solutions are stable, since ${a}_1^{*}{a}_2^{*} \approx -0.8264<0$ and ${a}_2^{*} \approx -0.6920<0$.
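The critical pair $(\omega,\hat{\tau})\approx(0.6166,0.7128)$ can be cross-checked directly from the characteristic equation of the linearization of (\ref{Brusselator}) at $(1,1.5)$ for the spatially homogeneous mode $\lambda_{00}=0$. The following script is only an illustrative sketch (it is not part of the derivation above; it assumes NumPy and SciPy, and the Jacobian entries are obtained by hand from the Brusselator kinetics):
\begin{verbatim}
# Minimal numerical check: locate the Hopf point of the Brusselator model
# for the spatially homogeneous mode lambda_{00} = 0 (no diffusion term).
import numpy as np
from scipy.optimize import fsolve

a, b, g = 1.0, 1.5, 2.0
u_star, v_star = a, b / a                      # positive equilibrium (1, 1.5)

# Instantaneous part J0 and delayed part Jd of the linearized kinetics
# (the delay only enters through v(t - tau) in the second equation).
J0 = np.array([[-(b + 1) + 2 * u_star * v_star, u_star**2],
               [b - 2 * u_star * v_star,        -u_star**2 - g]])
Jd = np.array([[0.0, 0.0],
               [0.0, g]])

def hopf_conditions(x):
    """Re and Im of det(i*omega*I - J0 - Jd*exp(-i*omega*tau)) = 0."""
    omega, tau = x
    gamma = 1j * omega
    M = gamma * np.eye(2) - J0 - Jd * np.exp(-gamma * tau)
    d = np.linalg.det(M)
    return [d.real, d.imag]

omega_hat, tau_hat = fsolve(hopf_conditions, x0=[0.6, 0.7])
print(f"omega ~ {omega_hat:.4f}, tau_hat ~ {tau_hat:.4f}")  # ~0.6166, ~0.7128
\end{verbatim}
Running it returns $\omega\approx 0.6166$ and $\hat{\tau}\approx 0.7128$, in agreement with the values quoted above.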
\begin{figure} \caption{Spatially homogeneous periodic solutions with parameters: $a=1,~b=1.5,~g=2,~d_1=2,~d_2=5,~R=10,~\tau=2$. (a): $u$, (b): $v$.} \label{unstable2} \end{figure} \subsection{Numerical example 2: A delayed predator-prey model with group defense and nonlocal competition}\label{sec5.2} In \citep{Liu2020J}, the authors investigated a predator-prey model with group defense and nonlocal competition. Here, we use the method established above to investigate the dynamics of such a model on a disk. \begin{equation}\label{nonlocal} \left\{\begin{array}{l} \frac{\partial u(t, r, \theta)}{\partial t}=d_{1} \Delta_{r \theta} u(t, r, \theta)+b u(t,r,\theta)\left(1-\frac{\hat{u}(t,r,\theta)}{K}\right)-a u^{\alpha}(t,r,\theta)v(t,r,\theta),~(r, \theta) \in \mathbb{D},~t>0, \\ \frac{\partial v(t, r, \theta)}{\partial t}=d_{2} \Delta_{r \theta} v(t, r, \theta)-d v(t,r,\theta)+ a eu^{\alpha}(t-\tau,r,\theta)v(t,r,\theta),~(r, \theta) \in \mathbb{D},~t>0, \\ \partial_{r} u(\cdot, R, \theta)=\partial_{r} v(\cdot, R, \theta)=0,~\theta \in [0,2\pi), \end{array}\right. \end{equation} where the states $u$, $v$ and the parameters are defined as in \citep{Liu2020J}. In particular, $\hat{u}(r,\theta,t)$ describes the nonlocal competition and takes the form $$ \hat{u}(r,\theta,t)=\frac{1}{ \pi R^2} \int_{0}^{R} \int_{0}^{2 \pi} \bar{r} u\left(\bar{r},\bar{\theta},t\right)\mathrm{d} \bar{\theta} \mathrm{d} \bar{r}. $$ According to subsection \ref{sec2.2}, the characteristic equations of the linearization at the positive equilibrium point $E^*$ of system (\ref{nonlocal}) are \begin{equation}\label{characteristic equations} \left\{\begin{array}{l} \gamma^2+P_{0m}\gamma+Q_{0m}-a_{12}b_{21}\mathrm{e}^{-\gamma \tau}=0,~m=0,1,2,\cdots,\\ \left(\gamma^2+\bar{P}_{nm}\gamma+\bar{Q}_{nm}-a_{12}b_{21}\mathrm{e}^{-\gamma \tau}\right)^2=0,~n=1,2,\cdots,~m=1,2,\cdots, \end{array}\right. \end{equation} with $$ \begin{aligned} &P_{0m}=(d_1+d_2)\lambda_{0m}^2-a_{11}-c_{11},~{Q}_{0m}=(d_1\lambda_{0m}^2-a_{11}-c_{11})d_2\lambda_{0m}^2,~m=0,1,2,\cdots;\\ &\bar{P}_{nm}=(d_1+d_2)\lambda_{nm}^2-a_{11},~\bar{Q}_{nm}=(d_1\lambda_{nm}^2-a_{11})d_2\lambda_{nm}^2,~n=1,2,\cdots,~m=1,2,\cdots, \end{aligned} $$ where the expressions of $a_{11},~a_{12},~b_{21},~c_{11}$ are shown in \citep{Liu2020J}. Fixing $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,~d_1=0.3,~d_2=0.75,~R=6$ and applying the same mathematical analysis as in \citep{Ruan2003J,Liu2020J} at the unique positive constant steady state, the first two bifurcation curves on the $\alpha-\tau$ plane are shown in Figure \ref{nonlocalfenzhitu}. We select $\alpha=0.6$ in the $\alpha-\tau$ plane. When $\hat{\lambda}=\lambda_{11}$, a Hopf bifurcation occurs at $\hat{\tau}=\tau_{\lambda_{11}}^0 \approx 1.7825$. We know that when $\tau<1.7825$, $E^*$ is locally asymptotically stable, and when $\tau>1.7825$, $E^*$ is unstable. The bifurcation arising at this point is an equivariant Hopf bifurcation. Numerical calculation gives $\mu=1.2175,~B_{11}\approx 0.0021-0.0911\mathrm{i},~B_{2001}\approx -0.1075+0.0745\mathrm{i},~B_{1110}\approx -0.1813+0.1620\mathrm{i}$. Thus, $a_1\mu \approx 0.0026,~a_2 \approx -0.1075,~a_2 +a_3 \approx -0.2888,~a_2 -a_3 \approx 0.0738$, which corresponds to Case 2 with $a_1\mu>0$ in Table \ref{tab2}. By Theorem \ref{rotating and standing}, we know that the system possesses an unstable standing wave (see Figures \ref{S-1}-\ref{S-3}) and two stable rotating waves (see Figures \ref{T-1}-\ref{T-2}).
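Passing from the computed normal-form coefficients to the case of Table \ref{tab1} and to the stability statements of Theorem \ref{rotating and standing} is a purely mechanical sign check. The following snippet is an illustrative sketch (not the code used for the simulations) that reproduces this classification for the values of $\mu$, $B_{11}$, $B_{2001}$ and $B_{1110}$ reported above:
\begin{verbatim}
# Classify the unfolding case (Table 1) and the wave stability (Theorem on
# rotating and standing waves) from the computed normal-form coefficients.
mu = 1.2175
B11, B2001, B1110 = 0.0021 - 0.0911j, -0.1075 + 0.0745j, -0.1813 + 0.1620j

a1, a2, a3 = B11.real, B2001.real, B1110.real
signs = (a2 < 0, a2 + a3 < 0, a2 - a3 < 0)       # sign pattern of Table 1
case = {(True, True, True): 1, (True, True, False): 2, (True, False, True): 3,
        (False, True, False): 4, (False, False, False): 5,
        (False, False, True): 6}[signs]

rotating_stable = (a1 * mu > 0) and (a2 + a3 < 0) and (a2 - a3 > 0)
standing_stable = (a1 * mu > 0) and (a2 + a3 < 0) and (a2 - a3 < 0)

print(f"a1*mu={a1*mu:.4f}, a2={a2:.4f}, a2+a3={a2+a3:.4f}, a2-a3={a2-a3:.4f}")
print(f"Case {case}; rotating waves stable: {rotating_stable}; "
      f"standing wave stable: {standing_stable}")
\end{verbatim}
It reports Case 2 with stable rotating waves and an unstable standing wave, consistent with the simulations shown below.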
\begin{remark} We can see from Figure \ref{nonlocalfenzhitu} that, as $\alpha$ changes, a double Hopf point HH appears. Below the lower line is the stable region of the system, where $\tau<\min\left\{\tau_{\lambda_{00}}^0,\tau_{\lambda_{11}}^{0}\right\}$. Above the lower line, where $\tau>\min\left\{\tau_{\lambda_{00}}^0,\tau_{\lambda_{11}}^{0}\right\}$, the system may produce spatially homogeneous or inhomogeneous periodic solutions. Investigating the detailed bifurcation sets might require studying an at least six-dimensional center manifold. \end{remark} \begin{remark} We find that the spatially inhomogeneous periodic solution takes the form of a standing wave only when the initial value restricted to the center subspace satisfies $\rho_1=\rho_2$. For example, we select the initial value as $u(t,r,\theta)=u^*+\varsigma_1(t,r) \cdot \cos(\theta+\hat{\theta}),~t \in [-\tau,0);~v(t,r,\theta)=v^{*}+\varsigma_2(t,r) \cdot \cos(\theta+\hat{\theta}),~t \in [-\tau,0)$, which has the following approximate form restricted to the center manifold $$ U_0(\vartheta)(r,\theta) \approx 4(\mathrm{Re}\{\varsigma_1(\vartheta,r)\},\mathrm{Re}\{p_0\cdot\varsigma_2(\vartheta,r)\})^{\mathrm{T}}\cos(\theta+\hat{\theta}), $$ with $z_1=z_2=z_3=z_4=1$. Thus, the spatially inhomogeneous periodic solutions in Figures \ref{S-1}-\ref{S-3} are in the form of standing waves. No matter what value of $\hat{\theta}$ is taken, the simulation yields a standing wave solution, which reflects the $O(2)$ equivariance. However, when the initial values of $u$ and $v$ are chosen in other forms, the solutions of the system are attracted to one of the two coexisting stable rotating waves (see Figures \ref{T-1}-\ref{T-2}), which may be clockwise (Figure \ref{T-2}) or counterclockwise (Figure \ref{T-1}). \end{remark} \begin{figure} \caption{Partial bifurcation curves on the $\alpha-\tau$ plane.} \label{nonlocalfenzhitu} \end{figure} \begin{figure} \caption{The system produces standing waves with parameters: $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,\alpha=0.6,~d_1=0.3,~d_2=0.75,~R=6,~\tau=3$. Initial values are $u(t,r,\theta)=13.0320+0.01\cdot \cos t\cdot \cos r\cdot \cos\theta,~v(t,r,\theta)=0.8108+0.01\cdot \cos t\cdot \cos r\cdot \cos\theta,~t\in[-\tau,0)$. $(a):u,~(b):v$.} \label{S-1} \end{figure} \begin{figure} \caption{The system produces standing waves with parameters: $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,\alpha=0.6,~d_1=0.3,~d_2=0.75,~R=6,~\tau=3$. Initial values are $u(t,r,\theta)=13.0320+0.01\cdot \cos t\cdot \cos r\cdot \cos(\theta+\frac{\pi} \label{S-2} \end{figure} \begin{figure} \caption{The system produces standing waves with parameters: $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,\alpha=0.6,~d_1=0.3,~d_2=0.75,~R=6,~\tau=3$. Initial values are $u(t,r,\theta)=13.0320+0.01\cdot \cos t\cdot \cos r\cdot \cos(\theta-\frac{\pi} \label{S-3} \end{figure} \begin{figure} \caption{The system produces rotating waves with parameters: $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,\alpha=0.6,~d_1=0.3,~d_2=0.75,~R=6,~\tau=3$. Initial values are $u(t,r,\theta)=13.0320+0.01\cdot \cos t\cdot \cos r\cdot \sin\theta,~v(t,r,\theta)=0.8108+0.01\cdot \cos t\cdot \cos r\cdot \cos\theta,~t\in[-\tau,0)$. $(a):u,~(b):v$.} \label{T-1} \end{figure} \begin{figure} \caption{The system produces counterpropagating waves with parameters: $b=0.25,~K=20,~a=0.3,~d=0.7,~e=0.5,\alpha=0.6,~d_1=0.3,~d_2=0.75,~R=6,~\tau=3$.
Initial values are $u(t,r,\theta)=13.0320+0.01\cdot \cos t\cdot \cos r\cdot \cos\theta,~v(t,r,\theta)=0.8108+0.01\cdot \cos t\cdot \cos r\cdot \sin\theta,~t\in[-\tau,0)$. $(a):u,~(b):v$.} \label{T-2} \end{figure} \appendix \section{The specific calculation of $g_3^1(z,0,\mu)$ }\label{ Appendix A} \subsection{The calculation of $\mathrm{Proj}_{\mathrm{Ker}(M_3^1)} f_3^1(z,0,0).$} Writing $\tilde{F}_3(\Phi_{r\theta} z,\mu)$ as follows $$ \tilde{F}_3(\Phi_{r\theta} z,0)=\sum_{p_1+p_2+p_3+p_4=3} A_{p_1p_2p_3p_4} \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4} z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4}, $$ then we have $$ \begin{aligned} f_3^1(z,0,0)&=\left(\begin{array}{cc} \left\langle \tilde{F}_3(\Phi_{r\theta} z,0),\Psi_{r\theta}^1(0) \right\rangle \\ \left\langle \tilde{F}_3(\Phi_{r\theta} z,0),\Psi_{r\theta}^2(0) \right\rangle \end{array}\right)\\ &= \overline{\Psi(0)} \left(\begin{array}{cc} \sum_{p_1+p_2+p_3+p_4=3} A_{p_1p_2p_3p_4} \int_0^R \int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4+1}\mathrm{d}\theta \mathrm{d}r z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4}\\ \sum_{p_1+p_2+p_3+p_4=3} A_{p_1p_2p_3p_4} \int_0^R \int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2+1} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4} \mathrm{d}\theta \mathrm{d}r z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4} \end{array} \right). \end{aligned} $$ Noticing the fact $$ \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{4} \mathrm{d} \theta \mathrm{d} r = \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^s\right)^{4} \mathrm{d} \theta \mathrm{d} r=0 , $$ $$ \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{3} \hat{\phi}_{nm}^s \mathrm{d} \theta \mathrm{d} r= \int_0^R\int_0^{2\pi} r \hat{\phi}_{nm}^c \left(\hat{\phi}_{nm}^s\right)^{3} \mathrm{d} \theta \mathrm{d} r=0, $$ $$ \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{2} \left(\hat{\phi}_{nm}^s\right)^{2} \mathrm{d} \theta \mathrm{d} r \triangleq \mathrm{M}_{22} , $$ and the relationship of $\Phi_{r\theta}$ and $\Psi_{r\theta}$, we get (\ref{Part1}). \subsection{The calculation of $\mathrm{Proj}_{\mathrm{Ker}(M_3^1)} \left(D_z f_2^1(z,0,0)U_2^1(z,0)\right).$} We have \begin{equation}\label{f21} \begin{aligned} f_2^1(z,0,0)&=\left(\begin{array}{cc} \left\langle \tilde{F}_2(\Phi_{r\theta} z,0),\Psi_{r\theta}^1(0) \right\rangle \\ \left\langle \tilde{F}_2(\Phi_{r\theta} z,0),\Psi_{r\theta}^2(0)\right\rangle \end{array}\right)\\ &= \overline{\Psi(0)} \left(\begin{array}{cc} \sum_{p_1+p_2+p_3+p_4=2} A_{p_1p_2p_3p_4} \int_0^R \int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4+1}\mathrm{d}\theta \mathrm{d}r z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4}\\ \sum_{p_1+p_2+p_3+p_4=2} A_{p_1p_2p_3p_4} \int_0^R \int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{p_1+p_2+1} \left({\hat{\phi}_{nm}^s}\right)^{p_3+p_4} \mathrm{d}\theta \mathrm{d}r z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4} \end{array} \right). \end{aligned} \end{equation} Noticing the fact that $$ \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{3} \mathrm{d} \theta \mathrm{d} r= \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^s\right)^{3} \mathrm{d} \theta \mathrm{d} r=0, $$ $$ \int_0^R\int_0^{2\pi} r \left(\hat{\phi}_{nm}^c\right)^{2} \hat{\phi}_{nm}^s \mathrm{d} \theta \mathrm{d} r =\int_0^R\int_0^{2\pi} r \hat{\phi}_{nm}^c \left(\hat{\phi}_{nm}^s\right)^2 \mathrm{d} \theta \mathrm{d} r=0 , $$ then we get $\mathrm{Proj}_{\mathrm{Ker}(M_3^1)} \left(D_z f_2^1(z,0,0)U_2^1(z,0)\right)=0$. 
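The vanishing of this projection rests on the angular integrals listed above. As an illustrative check only (assuming the separable form $\hat{\phi}_{nm}^c \propto J_n(\sqrt{\lambda_{nm}}r)\cos(n\theta)$ and $\hat{\phi}_{nm}^s \propto J_n(\sqrt{\lambda_{nm}}r)\sin(n\theta)$ with $n\ge 1$, cf. the expressions in Theorem \ref{rotating and standing}, and assuming the SymPy library), one can verify symbolically that every cubic angular product appearing above integrates to zero over $[0,2\pi]$:
\begin{verbatim}
# Symbolic check that the angular factors of the cubic products
# (cos^3, cos^2*sin, cos*sin^2, sin^3) integrate to zero on [0, 2*pi].
import sympy as sp

theta = sp.symbols('theta', real=True)
for n_val in (1, 2, 3):                       # any n >= 1 behaves the same way
    c = sp.cos(n_val * theta)                 # angular factor of phi^c_{nm}
    s = sp.sin(n_val * theta)                 # angular factor of phi^s_{nm}
    integrals = [sp.integrate(e, (theta, 0, 2 * sp.pi))
                 for e in (c**3, c**2 * s, c * s**2, s**3)]
    print(n_val, integrals)                   # [0, 0, 0, 0] for each n_val
\end{verbatim}
The radial factor is common to all four products, so the full integrals vanish and $\mathrm{Proj}_{\mathrm{Ker}(M_3^1)} \left(D_z f_2^1(z,0,0)U_2^1(z,0)\right)=0$ follows.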
\subsection{The calculation of $\mathrm{Proj}_{\mathrm{Ker}(M_3^1)} \left(D_y f_2^1(z,0,0)U_2^2(z,0)\right).$} Firstly, we calculate the $\rm{Fr\acute{e}chet}$ derivative $D_yf_2^1(z,0,0):Q_s\rightarrow \mathscr{X}_\mathbb{C}$. By (\ref{F2z}) and (\ref{F2phi}), $\tilde{F}_2(z,y,0)$ can be written as \begin{equation}\label{tildeF2} \begin{aligned} \tilde{F}_2(z,y,0)&=S_2(\Phi_{r\theta} z,y)+o(z^2,y^2)\\ &=S_{yz_1}(y)z_1\hat{\phi}_{nm}^c+S_{yz_2}(y)z_2\hat{\phi}_{nm}^c+S_{yz_3}(y)z_3\hat{\phi}_{nm}^s+S_{yz_4}(y)z_4\hat{\phi}_{nm}^s+o(z^2,y^2), \end{aligned} \end{equation} where $S_{yz_k}(k=1,2,3,4):Q_s\rightarrow \mathscr{X}_\mathbb{C}$ are linear operators, and $$ S_{yz_k}(\varphi)=S_{y(0)z_k}(\varphi(0))+S_{y(-1)z_k}(\varphi(-1)). $$ Let $$ U_2^2(z,0)\triangleq h(z)=\sum_{j\ge 0}h_j(z)\hat{\phi}_{jk}(r,\theta), $$ with $$ h_{jk}(z)=\left(\begin{array}{cc} h_{jk}^{(1)}(z)\\ h_{jk}^{(2)}(z) \end{array}\right)=\sum_{p_1+p_2+p_3+p_4=2}\left(\begin{array}{cc} h_{jkp_1p_2p_3p_4}^{(1)}(z)\\ h_{jkp_1p_2p_3p_4}^{(2)}(z) \end{array}\right)z_1^{p_1}z_2^{p_2}z_3^{p_3}z_4^{p_4}. $$ Therefore, $$ \begin{aligned} D_y\tilde{F}_2(z,0,0)\left(U_2^2(z,0)\right)&=\left(\begin{array}{cc} \langle D_y\tilde{F}_2(z,0,0)\left(U_2^2(z,0)\right),\Psi_{r\theta}^1(0) \rangle\\ \langle D_y\tilde{F}_2(z,0,0)\left(U_2^2(z,0)\right),\Psi_{r\theta}^2(0) \rangle \end{array}\right)\\ &= \overline{\Psi(0)} \left(\begin{array}{cc} \sum_{j\ge 0}\left[\mathrm{M}_{jkcs}S_{yz_1}(h_{jk})z_1+\mathrm{M}_{jkcs}S_{yz_2}(h_{jk})z_2\right.\\ \left.+\mathrm{M}_{jkss}S_{yz_3}(h_{jk})z_3+\mathrm{M}_{jkss}S_{yz_4}(h_{jk})z_4\right]\\ \sum_{j\ge 0}[\mathrm{M}_{jkcc}S_{yz_1}(h_{jk})z_1+\mathrm{M}_{jkcc}S_{yz_2}(h_{jk})z_2\\ +\mathrm{M}_{jksc}S_{yz_3}(h_{jk})z_3+\mathrm{M}_{jksc}S_{yz_4}(h_{jk})z_4] \end{array} \right), \end{aligned} $$ where $$ \mathrm{M}_{jksc}=\mathrm{M}_{jkcs}=\int_0^R \int_0^{2\pi} r \hat{\phi}_{jk}\hat{\phi}_{nm}^c\hat{\phi}_{nm}^s \mathrm{d}\theta\mathrm{d}r=\left\{\begin{array}{cccc} \mathrm{M}_{0kcs}^c & \hat{\phi}_{0}~or~\hat{\phi}_{0k}^c,\\ 0, & otherwise, \end{array}\right. $$ $$ \mathrm{M}_{jkss}=\int_0^R \int_0^{2\pi} r \hat{\phi}_{jk}\hat{\phi}_{nm}^s\hat{\phi}_{nm}^s \mathrm{d}\theta\mathrm{d}r=\left\{\begin{array}{cccc} \mathrm{M}_{2nkss}^c, & \hat{\phi}_{jk}=\hat{\phi}_{2nk}^c,\\ 0, & otherwise, \end{array}\right. $$ $$ \mathrm{M}_{jkcc}=\int_0^R \int_0^{2\pi} r \hat{\phi}_{jk}\hat{\phi}_{nm}^c\hat{\phi}_{nm}^c \mathrm{d}\theta\mathrm{d}r=\left\{\begin{array}{cccc} \mathrm{M}_{2nkcc}^s, & \hat{\phi}_{jk}=\hat{\phi}_{2nk}^s,\\ 0, & otherwise. \end{array}\right. 
$$ Moreover, we have $$ \begin{aligned} D_y\tilde{F}_2(z,0,0)\left(U_2^2(z,0)\right) &=\bar{\Psi}(0)\left(\begin{array}{cccc} N_1\\ N_2 \end{array} \right), \end{aligned} $$ with $$ \begin{aligned} N_1=&\mathrm{M}_{0kcs}^c\left(S_{yz_1}(h_{0k}^{ccs})z_1+S_{yz_2}(h_{0k}^{ccs})z_2\right) +\mathrm{M}_{2nkss}^c\left(S_{yz_3}(h_{2nk}^{css})z_3+S_{yz_4}(h_{2nk}^{css})z_4\right),\\ N_2=&\mathrm{M}_{2nkcc}^s\left(S_{yz_1}(h_{2nk}^{ccs})z_1+S_{yz_2}(h_{2nk}^{ccs})z_2\right) +\mathrm{M}_{0kcs}^c\left(S_{yz_3}(h_{0k}^{ccs})z_3+S_{yz_4}(h_{0k}^{ccs})z_4\right).\\ \end{aligned} $$ Thus, \begin{equation}\label{part3} \begin{aligned} &\frac{1}{3!}\mathrm{Proj}_{\mathrm{Ker(M_3^1)}}\left(D_yf_2^1(z,0,0)U_2^2(z,0)\right)\\ =&\left( \begin{array}{cccc} E_{2100}^1z_1^2z_2+E_{2001}^1z_1^2z_4+E_{0120}^1z_3^2z_2+E_{0021}^1z_3^2z_4+E_{1110}^1z_1z_2z_3+E_{1011}^1z_1z_3z_4\\ \overline{E_{2100}^1}z_1z_2^2+\overline{E_{2001}^1}z_2^2z_3+\overline{E_{0120}^1}z_4^2z_1+\overline{E_{0021}^1}z_3z_4^2+\overline{E_{1110}^1}z_1z_2z_4+\overline{E_{1011}^1}z_2z_3z_4\\ E_{2100}^2z_1^2z_2+E_{2001}^2z_1^2z_4+E_{0120}^2z_3^2z_2+E_{0021}^2z_3^2z_4+E_{1110}^2z_1z_2z_3+E_{1011}^2z_1z_3z_4\\ \overline{E_{2100}^2}z_1z_2^2+\overline{E_{2001}^2}z_2^2z_3+\overline{E_{0120}^2}z_4^2z_1+\overline{E_{0021}^2}z_3z_4^2+\overline{E_{1110}^2}z_1z_2z_4+\overline{E_{1011}^2}z_2z_3z_4 \end{array} \right), \end{aligned} \end{equation} where $$ \begin{aligned} E_{2100}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^c\left(S_{yz_1}(h_{0k1100}^{ccs})+S_{yz_2}(h_{0k2000}^{ccs})\right)\right],\\ E_{2001}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_1}(h_{0k1001}^{ccs})+\mathrm{M}_{2nkss}^cS_{yz_4}(h_{2nk2000}^{css})\right],\\ E_{0120}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_2}(h_{0k0020}^{ccs}))+\mathrm{M}_{2nkss}^cS_{yz_3}(h_{2nk0110}^{css})\right],\\ E_{0021}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{2nkss}^c\left(S_{yz_3}(h_{2nk0011}^{css})+S_{yz_4}(h_{2nk0020}^{css})\right)\right],\\ E_{1110}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^c\left(S_{yz_1}(h_{0k0110}^{ccs})+S_{yz_2}(h_{0k1010}^{ccs})\right)+\mathrm{M}_{2nkss}^cS_{yz_3}(h_{2nk1100}^{css})\right],\\ E_{1011}^1=\frac{1}{6}\overline{\Psi_1(0)}&\left[\mathrm{M}_{0kcs}^cS_{yz_1}(h_{0k0011}^{ccs}) +\mathrm{M}_{2nkss}^c\left(S_{yz_3}(h_{2nk1001}^{css})+S_{yz_4}(h_{2nk1010}^{css})\right)\right],\\ E_{2100}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{2nkcc}^s\left(S_{yz_1}(h_{2nk1100}^{ccs})+S_{yz_2}(h_{2nk2000}^{ccs})\right)\right],\\ E_{2001}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{2nkcc}^sS_{yz_1}(h_{2nk1001}^{ccs})+\mathrm{M}_{0kcs}^cS_{yz_2}(h_{0k2000}^{ccs})\right],\\ E_{0120}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{2nkcc}^sS_{yz_2}(h_{2nk0020}^{ccs})+\mathrm{M}_{0kcs}^cS_{yz_3}(h_{0k0110}^{ccs})\right],\\ E_{0021}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{0kcs}^c\left(S_{yz_3}(h_{0k0011}^{ccs})+S_{yz_4}(h_{0k0020}^{ccs})\right)\right],\\ E_{1110}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{2nkcc}^s\left(S_{yz_1}(h_{2nk0110}^{ccs})+S_{yz_2}(h_{2nk1010}^{ccs})\right)+\mathrm{M}_{0kcs}^cS_{yz_3}(h_{0k1100}^{ccs})\right]\\ E_{1011}^2=\frac{1}{6}\overline{\Psi_3(0)}&\left[\mathrm{M}_{2nkcc}^sS_{yz_1}(h_{2nk0011}^{ccs})+\mathrm{M}_{0kcs}^c\left(S_{yz_3}(h_{0k1001}^{ccs})+S_{yz_4}(h_{0k1010}^{ccs})\right)\right]. 
\end{aligned} $$ Now, we need to calculate $$ \begin{aligned} &h_{0k2000}^{ccs},~h_{0k1100}^{ccs},~h_{0k1010}^{ccs},~h_{0k1001}^{ccs},~h_{0k0110}^{ccs},~h_{0k0020}^{ccs},~h_{0k0011}^{ccs},\\ &h_{2nk2000}^{ccs},~h_{2nk1100}^{ccs},~h_{2nk1010}^{ccs},~h_{2nk1001}^{ccs},~h_{2nk0110}^{ccs},~h_{2nk0020}^{ccs},~h_{2nk0011}^{ccs},\\ &h_{2nk2000}^{css},~h_{2nk1100}^{css},~h_{2nk1010}^{css},~h_{2nk1001}^{css},~h_{2nk0110}^{css},~h_{2nk0020}^{css},~h_{2nk0011}^{css}. \end{aligned} $$ From (\ref{zt2}) and (\ref{Mj1z}), we get $$ \begin{aligned} M_2^2U_2^2(z,0)(\vartheta)&=M_2^2h(z)(\vartheta)\\ &=\left\{\begin{array}{cc} D_zh(z)Bz-\tilde{D}_0\Delta h(0)-\tilde{L}_0(h(z)), & \vartheta=0,\\ D_zh(z)Bz-D_{\vartheta}h(z), & \vartheta \neq 0,\\ \end{array}\right.\\ &=\left\{\begin{array}{cc} \sum_{j \ge 0}\left[D_zh_j(z)\hat{\phi}_{jk}(r,\theta)Bz-\tilde{D}_0\Delta h_j(z)\hat{\phi}_{jk}(r,\theta)-\tilde{L}_0(h_j(z)\hat{\phi}_{jk}(r,\theta))\right], & \vartheta=0,\\ \sum_{j \ge 0}\left[D_zh_j(z)\hat{\phi}_{jk}(r,\theta)Bz-D_{\vartheta}h_j(z)\hat{\phi}_{jk}(r,\theta)\right], & \vartheta \neq 0,\\ \end{array}\right. \end{aligned} $$ and $$ f_2^2(z,0,0)=\left\{\begin{array}{cc}\begin{aligned} \tilde{F}_2&(z,0,0)-\Phi_1(0)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^s-\Phi_2(0)f_2^{1(2)}(z,0,0)\hat{\phi}_{nm}^s\\ &-\Phi_3(0)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^c-\Phi_4(0)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^c, & \vartheta=0,\\ -\Phi_1&(\vartheta)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^s-\Phi_2(\vartheta)f_2^{1(2)}(z,0,0)\hat{\phi}_{nm}^s\\ &-\Phi_3(\vartheta)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^c-\Phi_4(\vartheta)f_2^{1(1)}(z,0,0)\hat{\phi}_{nm}^c, & \vartheta \neq 0. \end{aligned}\end{array}\right. $$ Besides, we have \begin{equation}\label{M22U22} \langle M_2^2\left(U_2^2(z,0)\right),\beta_{jk} \rangle= \langle f_2^2(z,0,0),\beta_{jk} \rangle, \end{equation} with $\beta_{jk}=\frac{\phi_{jk}}{\|\phi_{jk}\|}$. Thus, the expressions of $h_{jp_1p_2p_3p_4}$ can be obtained. Due to the large number of expressions, we show the specific results in Appendix \ref{h}. Noting the fact that $$ \mathrm{M}_{2nkcc}^s=\mathrm{M}_{2nkss}^c, $$ we have $$ \begin{aligned} &E_{2100}^1=E_{0021}^2,~E_{2001}^1=E_{0120}^2,~E_{0120}^1=E_{2001}^2,\\ &E_{0021}^1=E_{2100}^2,~E_{1110}^1=E_{1011}^2,~E_{1011}^1=E_{1110}^2. \end{aligned} $$ To simplify the notation, we rewrite (\ref{part3}) as (\ref{Part3}). \section{Proof of Lemma \ref{reduce}}\label{Proof of Lemma} By a smooth transformation \begin{equation}\label{smooth transformation} \begin{aligned} &z_1=\zeta_1+b_1\zeta_1^2\zeta_2+b_2\zeta_3^2\zeta_2+b_3\zeta_3^2\zeta_4+b_4\zeta_1\zeta_3\zeta_4,\\ &z_2=\bar{z}_1,\\ &z_3=\zeta_3+b_1\zeta_3^2\zeta_4+b_2\zeta_1^2\zeta_4+b_3\zeta_1^2\zeta_2+b_4\zeta_1\zeta_2\zeta_3,\\ &z_4=\bar{z}_3, \end{aligned} \end{equation} we have \begin{equation}\label{zeta} \begin{aligned} &\zeta_1=z_1-b_1z_1^2z_2-b_2z_3^2z_2-b_3z_3^2z_4-b_4z_1z_3z_4+o(4),\\ &\zeta_2=\bar{\zeta}_1,\\ &\zeta_3=z_3-b_1z_3^2z_4-b_2z_1^2z_4-b_3z_1^2z_2-b_4z_1z_2z_3+o(4),\\ &\zeta_4=\bar{\zeta}_3.
\end{aligned} \end{equation} Then $$ \begin{aligned} \dot{\zeta}_1=&\dot{z}_1-2b_1z_1z_2\dot{z}_1-b_1z_1^2\dot{z}_2-2b_2z_2z_3\dot{z}_3-b_2z_3^2\dot{z}_2-2b_3z_3z_4\dot{z}_3-b_3z_3^2\dot{z}_4 -b_4z_1z_3\dot{z}_4-b_4z_1z_4\dot{z}_3-b_4z_3z_4\dot{z}_1+o(4)\\ =&(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_1+B_{2100}z_1^2z_2+B_{1011}z_1z_3z_4+B_{2001}z_1^2z_4+B_{0120}z_3^2z_2+B_{0021}z_3^2z_4+B_{1110}z_1z_2z_3\\ &-3b_1(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_1^2z_2-3b_2(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_3^2z_2-3b_3(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_3^2z_4 -3b_4(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_1z_3z_4+o(4).\\ \end{aligned} $$ Let $$ \begin{aligned} &b_1=\frac{B_{2100}}{3(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)},~b_2=\frac{B_{0120}}{3(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)},\\ &b_3=\frac{B_{0021}}{3(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)},~b_4=\frac{B_{1011}}{3(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)}, \end{aligned} $$ then $$ \begin{aligned} \dot{\zeta}_1=&(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)z_1+B_{2001}z_1^2z_4+B_{1110}z_1z_2z_3+o(4)\\ =&(\mathrm{i}\omega_{\hat{\lambda}}+B_{11}\mu)\zeta_1+B_{2001}\zeta_1^2\zeta_4+B_{1110}\zeta_1\zeta_2\zeta_3+o(4). \end{aligned} $$ The same is true for $\dot{\zeta}_2, \dot{\zeta}_3$ and $\dot{\zeta}_4$, so that (\ref{z1z2z3z4}) is established. \section{Proof of Theorem \ref{rotating and standing}}\label{Proof of Theorem} We only need to derive the approximate expressions of the rotating and standing wave solutions restricted to the center subspace; the rest of the theorem follows readily from the previous analysis. By (\ref{basis of P}), (\ref{fenjie}) and (\ref{chi}), we get $$ \begin{aligned} U_t(\vartheta)(r,\theta) \approx &\xi \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{\mathrm{i}n\theta} \rho_1\mathrm{e}^{\mathrm{i}\chi_1(t)} +\bar{\xi} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{\mathrm{i}n\theta} \rho_2\mathrm{e}^{-\mathrm{i}\chi_2(t)}\\ &+\xi \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{-\mathrm{i}n\theta} \rho_2\mathrm{e}^{\mathrm{i}\chi_2(t)} +\bar{\xi} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{-\mathrm{i}n\theta} \rho_1\mathrm{e}^{-\mathrm{i}\chi_1(t)} \end{aligned} $$ with $\xi=(p_{11},p_{12},\cdots,p_{1n})^{\mathrm{T}}$. For simplicity, we also rewrite $p_{1i}$ in the form of a complex angle as $p_{1i}=|p_{1i}|\mathrm{e}^{\mathrm{i} \mathrm{Arg}(p_{1i})}$ in the subsequent calculations. For $(\rho_1,\rho_2)=(0,\sqrt{\frac{-a_1\mu}{a_2}})$, $$ \begin{aligned} U_t(\vartheta)(r,\theta) \approx &\xi \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{\mathrm{i}n\theta} \rho_1\mathrm{e}^{\mathrm{i}\chi_1(t)} +\bar{\xi} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{-\mathrm{i}n\theta} \rho_1\mathrm{e}^{-\mathrm{i}\chi_1(t)}\\ \approx & \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t+n\theta) {e}_i}, \end{aligned} $$ where $e_i$ is the $i$th unit coordinate vector of $\mathbb{R}^n$. This corresponds to the form of a rotating wave solution in the plane of $(z_2,z_3)$.
For $(\rho_1,\rho_2)=(\sqrt{\frac{-a_1\mu}{a_2}},0)$, $$ \begin{aligned} U_t(\vartheta)(r,\theta) \approx &\bar{\xi} \mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{\mathrm{i}n\theta} \rho_2\mathrm{e}^{-\mathrm{i}\chi_2(t)} +\xi \mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\vartheta} J_n(\sqrt{\lambda_{nm}}r)\mathrm{e}^{-\mathrm{i}n\theta} \rho_2\mathrm{e}^{\mathrm{i}\chi_2(t)}\\ \approx & \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t-n\theta) {e}_i}, \end{aligned} $$ which corresponds to the form of a rotating wave solution in the opposite direction in the plane of $(z_1,z_4)$. For $(\rho_1,\rho_2)=(\sqrt{\frac{-a_1\mu}{a_2+a_3}},\sqrt{\frac{-a_1\mu}{a_2+a_3}})$, $$ \begin{aligned} U_t(\vartheta)(r,\theta) \approx & \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2+a_3}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t+n\theta) {e}_i}\\ +& \sum_{i=1}^n{2|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2+a_3}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t-n\theta) {e}_i}\\ \approx &\sum_{i=1}^n{4|p_{1i}|\sqrt{\frac{-a_1\mu}{a_2+a_3}} J_n(\sqrt{\lambda_{nm}}r)\cos(\mathrm{Arg}(p_{1i})+\omega_{\hat{\lambda}}\vartheta+\omega_{\hat{\lambda}}t)\cos(n\theta) {e}_i}, \end{aligned} $$ which means when $n\theta=\frac{\pi}{2}$ or $n\theta=\frac{3\pi}{2}$, the form of the solution does not change over time. In other words, in a two-dimensional plane, the image of the solution has a fixed axis, thus, it corresponds to the form of a standing wave solution. \section{The calculation formula for $A_{p_1p_2p_3p_4}$}\label{A_{p_1p_2p_3p_4}} \subsection{The calculation formula for $A_{p_1p_2p_3p_4}(p_1+p_2+p_3+p_4=2)$} $$ \begin{aligned} A_{2000}=2\hat{\tau}\left(\begin{array}{cccc} A_{2000}^1\\ A_{2000}^2 \end{array}\right)&,~ A_{1100}=2\hat{\tau}\left(\begin{array}{cccc} A_{1100}^1\\ A_{1100}^2 \end{array}\right),\\ A_{1010}=2\hat{\tau}\left(\begin{array}{cccc} A_{1010}^1\\ A_{1010}^2 \end{array}\right)&,~ A_{1001}=2\hat{\tau}\left(\begin{array}{cccc} A_{1001}^1\\ A_{1001}^2 \end{array}\right),\\ A_{0020}=2\hat{\tau}\left(\begin{array}{cccc} A_{0020}^1\\ A_{0020}^2 \end{array}\right)&,~ A_{0011}=2\hat{\tau}\left(\begin{array}{cccc} A_{0011}^1\\ A_{0011}^2 \end{array}\right),\\ A_{0200}=\overline{A_{2000}}&,~A_{0101}=\overline{A_{1010}},\\ A_{0110}=\overline{A_{1001}}&,~A_{0002}=\overline{A_{0020}}, \end{aligned} $$ with $$ \begin{aligned} A_{2000}^1=&F_{20}^{(1)}+F_{11}^{(1)}p_0+F_{02}^{(1)}p_0^2,\\ A_{1100}^1=&2F_{20}^{(1)}+F_{11}^{(1)}(p_0+\bar{p}_0)+2F_{02}^{(1)}p_0\bar{p}_0,\\ A_{1010}^1=&2F_{20}^{(1)}+2F_{11}^{(1)}p_0+2F_{02}^{(1)}p_0^2,\\ A_{1001}^1=&2F_{20}^{(1)}+F_{11}^{(1)}(p_0+\bar{p}_0)+2F_{02}^{(1)}p_0\bar{p}_0,\\ A_{0020}^1=&F_{20}^{(1)}+F_{11}^{(1)}{p}_0+F_{02}^{(1)}p_0^2,\\ A_{0011}^1=&2F_{20}^{(1)}+F_{11}^{(1)}(p_0+\bar{p}_0)+2F_{02}^{(1)}p_0\bar{p}_0,\\ A_{2000}^2=&F_{2000}^{(2)}+F_{1100}^{(2)}p_0+F_{0200}^{(2)}p_0^2+F_{0020}^{(2)}\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} +F_{0011}^{(2)}p_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0002}^{(2)}p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{1010}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+(F_{1001}^{(2)}+F_{0110}^{(2)})p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} 
+F_{0101}^{(2)}p_0^2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}},\\ A_{1100}^2=&2F_{2000}^{(2)}+F_{1100}^{(2)}(p_0+\bar{p}_0)+2F_{0200}^{(1)}p_0\bar{p}_0+2F_{0020}^{(2)} +F_{0011}^{(2)}(p_0+\bar{p}_0)+2F_{0002}^{(2)}p_0\bar{p}_0 +F_{1010}^{(2)}(\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{1001}^{(2)}(p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0110}^{(2)}(p_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0101}^{(2)}(p_0\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}),\\ A_{1010}^2=&2F_{2000}^{(2)}+2F_{1100}^{(2)}p_0+2F_{0200}^{(1)}p_0^2+2F_{0020}^{(2)}\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} +2F_{0011}^{(2)}p_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2F_{0002}^{(2)}p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+2F_{1010}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2(F_{1001}^{(2)}+F_{0110}^{(2)})p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} +2F_{0101}^{(2)}p_0^2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}},\\ A_{1001}^2=&2F_{2000}^{(2)}+F_{1100}^{(2)}(p_0+\bar{p}_0)+2F_{0200}^{(1)}p_0\bar{p}_0+2F_{0020}^{(2)} +F_{0011}^{(2)}(p_0+\bar{p}_0)+2F_{0002}^{(2)}p_0\bar{p}_0 +F_{1010}^{(2)}(\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{1001}^{(2)}(p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0110}^{(2)}(p_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0101}^{(2)}(p_0\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}),\\ A_{0020}^2=&F_{2000}^{(2)}+F_{1100}^{(2)}p_0+F_{0200}^{(2)}p_0^2+F_{0020}^{(2)}\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} +F_{0011}^{(2)}p_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0002}^{(2)}p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{1010}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+(F_{1001}^{(2)}+F_{1001}^{(2)})p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} +F_{0101}^{(2)}p_0^2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}},\\ A_{0011}^2=&2F_{2000}^{(2)}+F_{1100}^{(2)}(p_0+\bar{p}_0)+2F_{0200}^{(1)}p_0\bar{p}_0+2F_{0020}^{(2)} +F_{0011}^{(2)}(p_0+\bar{p}_0)+2F_{0002}^{(2)}p_0\bar{p}_0 +F_{1010}^{(2)}(\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{1001}^{(2)}(p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0110}^{(2)}(p_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0101}^{(2)}(p_0\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}). 
\end{aligned} $$ \subsection{The calculation formula for $A_{p_1p_2p_3p_4}(p_1+p_2+p_3+p_4=3)$} $$ \begin{aligned} A_{2100}=6\hat{\tau}\left(\begin{array}{cccc} A_{2100}^1\\ A_{2100}^2 \end{array}\right)&,~ A_{2010}=6\hat{\tau}\left(\begin{array}{cccc} A_{2010}^1\\ A_{2010}^2 \end{array}\right),\\ A_{2001}=6\hat{\tau}\left(\begin{array}{cccc} A_{2001}^1\\ A_{2001}^2 \end{array}\right)&,~ A_{1020}=6\hat{\tau}\left(\begin{array}{cccc} A_{1020}^1\\ A_{1020}^2 \end{array}\right),\\ A_{0120}=6\hat{\tau}\left(\begin{array}{cc} A_{0120}^1\\ A_{0120}^2 \end{array}\right)&,~ A_{0021}=6\hat{\tau}\left(\begin{array}{cc} A_{0021}^1\\ A_{0021}^2 \end{array}\right),\\ \end{aligned} $$ $$ \begin{aligned} A_{1110}=6\hat{\tau}\left(\begin{array}{cc} A_{1110}^1\\ A_{1110}^2 \end{array}\right)&,~ A_{1011}=6\hat{\tau}\left(\begin{array}{cc} A_{1011}^1\\ A_{1011}^2 \end{array}\right),\\ A_{1200}=\overline{A_{2100}}&,~A_{0210}=\overline{A_{2010}},\\ A_{0201}=\overline{A_{2001}}&,~A_{0002}=\overline{A_{0020}}, \end{aligned} $$ with $$ \begin{aligned} A_{2100}^1=&3F_{30}^{(1)}+(\bar{p}_0+2p_0)F_{21}^{(1)}+(p_0^2+2p_0\bar{p}_0)F_{12}^{(1)}+3p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{2010}^1=&3F_{30}^{(1)}+3p_0F_{21}^{(1)}+3p_0^2F_{12}^{(1)}+3p_0^3F_{03}^{(1)},\\ A_{2001}^1=&3F_{30}^{(1)}+(\bar{p}_0+2p_0)F_{21}^{(1)}+(p_0^2+2p_0\bar{p}_0)F_{12}^{(1)}+3p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{1020}^1=&3F_{30}^{(1)}+3p_0F_{21}^{(1)}+3p_0^2F_{12}^{(1)}+3p_0^3F_{03}^{(1)},\\ A_{0120}^1=&3F_{30}^{(1)}+(\bar{p}_0+2p_0)F_{21}^{(1)}+(p_0^2+2p_0\bar{p}_0)F_{12}^{(1)}+3p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{0021}^1=&3F_{30}^{(1)}+(\bar{p}_0+2p_0)F_{21}^{(1)}+(p_0^2+2p_0\bar{p}_0)F_{12}^{(1)}+3p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{1110}^1=&6F_{30}^{(1)}+(2\bar{p}_0+4p_0)F_{21}^{(1)}+(2p_0^2+4p_0\bar{p}_0)F_{12}^{(1)}+6p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{1011}^1=&6F_{30}^{(1)}+(2\bar{p}_0+4p_0)F_{21}^{(1)}+(2p_0^2+4p_0\bar{p}_0)F_{12}^{(1)}+6p_0^2\bar{p}_0F_{03}^{(1)},\\ A_{2100}^2=&3F_{3000}^{(2)}+(\bar{p}_0+2p_0)F_{2100}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{1200}^{(2)}+3p_0^2\bar{p}_0F_{0300}^{(2)}\\ &+\left(3F_{0030}^{(2)}+(\bar{p}_0+2p_0)F_{0021}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{0012}^{(2)}+3p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(2p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2) +F_{1002}^{(2)}(p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\bar{p}_0)\\ &+F_{0120}^{(2)}(\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0) +F_{0102}^{(2)}(p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0),\\ A_{2010}^2=&3F_{3000}^{(2)}+3p_0F_{2100}^{(2)}+3p_0^2F_{1200}^{(2)}+3p_0^3F_{0300}^{(2)}\\ &+\left(3F_{0030}^{(2)}+3p_0F_{0021}^{(2)}+3p_0^2F_{0012}^{(2)}+3p_0^3F_{0003}^{(2)}\right)\mathrm{e}^{-3\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+\left(3F_{2010}^{(2)}+3p_0F_{2001}^{(2)}+3p_0^2F_{0210}^{(2)} 
+3p_0^3F_{0201}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+\left(3F_{1020}^{(2)}+3p_0F_{0120}^{(2)}+3p_0F_{1002}^{(2)}+ +3p_0^3F_{0102}^{(2)}\right)\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}},\\ A_{2001}^2=&3F_{3000}^{(2)}+(\bar{p}_0+2p_0)F_{2100}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{1200}^{(2)}+3p_0^2\bar{p}_0F_{0300}^{(2)}\\ &+\left(3F_{0030}^{(2)}+(\bar{p}_0+2p_0)F_{0021}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{0012}^{(2)}+3p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(2p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2) +F_{1002}^{(2)}(p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\bar{p}_0)\\ &+F_{0120}^{(2)}(\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0) +F_{0102}^{(2)}(p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0),\\ \end{aligned} $$ $$ \begin{aligned} A_{1020}^2=&3F_{3000}^{(2)}+3p_0F_{2100}^{(2)}+3p_0^2F_{1200}^{(2)}+3p_0^3F_{0300}^{(2)}\\ &+\left(3F_{0030}^{(2)}+3p_0F_{0021}^{(2)}+3p_0^2F_{0012}^{(2)}+3p_0^3F_{0003}^{(2)}\right)\mathrm{e}^{-3\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+\left(3F_{2010}^{(2)}+3p_0F_{2001}^{(2)}+3p_0^2F_{0210}^{(2)} +3p_0^3F_{0201}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+\left(3F_{1020}^{(2)}+3p_0F_{0120}^{(2)}+3p_0F_{1002}^{(2)}+ +3p_0^3F_{0102}^{(2)}\right)\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}},\\ A_{0120}^2=&3F_{3000}^{(2)}+(\bar{p}_0+2p_0)F_{2100}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{1200}^{(2)}+3p_0^2\bar{p}_0F_{0300}^{(2)}\\ &+\left(3F_{0030}^{(2)}+(\bar{p}_0+2p_0)F_{0021}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{0012}^{(2)}+3p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(2p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2) +F_{1002}^{(2)}(p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\bar{p}_0)\\ &+F_{0120}^{(2)}(\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0) +F_{0102}^{(2)}(p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0),\\ A_{0021}^2=&3F_{3000}^{(2)}+(\bar{p}_0+2p_0)F_{2100}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{1200}^{(2)}+3p_0^2\bar{p}_0F_{0300}^{(2)}\\ 
&+\left(3F_{0030}^{(2)}+(\bar{p}_0+2p_0)F_{0021}^{(2)}+(p_0^2+2p_0\bar{p}_0)F_{0012}^{(2)}+3p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(2p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2) +F_{1002}^{(2)}(p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0\bar{p}_0)\\ &+F_{0120}^{(2)}(\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0) +F_{0102}^{(2)}(p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0),\\ A_{1110}^2=&6F_{3000}^{(2)}+(2\bar{p}_0+4p_0)F_{2100}^{(2)}+(2p_0^2+4p_0\bar{p}_0)F_{1200}^{(2)}+6p_0^2\bar{p}_0F_{0300}^{(2)}\\ &+\left(6F_{0030}^{(2)}+(2\bar{p}_0+4p_0)F_{0021}^{(2)}+(2p_0^2+4p_0\bar{p}_0)F_{0012}^{(2)}+6p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(4p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(4p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4) +F_{1002}^{(2)}(2p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0\bar{p}_0)\\ &+F_{0120}^{(2)}(2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0) +F_{0102}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0^2\bar{p}_0),\\ A_{1011}^2=&6F_{3000}^{(2)}+(2\bar{p}_0+4p_0)F_{2100}^{(2)}+(2p_0^2+4p_0\bar{p}_0)F_{1200}^{(2)}+6p_0^2\bar{p}_0F_{0300}^{(2)}\\ &+\left(6F_{0030}^{(2)}+(2\bar{p}_0+4p_0)F_{0021}^{(2)}+(2p_0^2+4p_0\bar{p}_0)F_{0012}^{(2)}+6p_0^2\bar{p}_0F_{0003}^{(2)}\right)\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ &+F_{2010}^{(2)}(2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{2001}^{(2)}(2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{0210}^{(2)}(4p_0\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}})\\ &+F_{0201}^{(2)}(4p_0^2\bar{p}_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2p_0^2\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}) +F_{1020}^{(2)}(2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4) +F_{1002}^{(2)}(2p_0^2\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0\bar{p}_0)\\ 
&+F_{0120}^{(2)}(2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0) +F_{0102}^{(2)}(2p_0^2\bar{p}_0\mathrm{e}^{-2\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+4p_0^2\bar{p}_0). \end{aligned} $$ \section{The calculation formula for $S_{y(0)z_k},~S_{y(-1)z_k},~k=1,2,3,4$}\label{S} $$ \begin{aligned} &S_{y(0)z_1}=\left(\begin{array}{cc} 2 F_{20}^{(1)}+F_{11}^{(1)}p_0 & F_{11}^{(1)}+2 F_{02}^{(1)}p_0\\ 2 F_{2000}^{(2)}+F_{1100}^{(2)}p_0+F_{1010}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} & F_{1100}^{(2)}+2 F_{0200}^{(2)}p_0+F_{0110}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0101}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ \end{array}\right),\\ &S_{y(0)z_2}=\left(\begin{array}{cccc} 2 F_{20}^{(1)}+F_{11}^{(1)}\bar{p}_0 & F_{11}^{(1)}+2 F_{02}^{(1)}\bar{p}_0\\ 2 F_{2000}^{(2)}+F_{1100}^{(2)}\bar{p}_0+F_{1010}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} & F_{1100}^{(2)}+2 F_{0200}^{(2)}\bar{p}_0+F_{0110}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0101}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ \end{array}\right),\\ &S_{y(0)z_3}=\left(\begin{array}{cccc} 2 F_{20}^{(1)}+F_{11}^{(1)}p_0 & F_{11}^{(1)}+2 F_{02}^{(1)}p_0\\ 2 F_{2000}^{(2)}+F_{1100}^{(2)}p_0+F_{1010}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} & F_{1100}^{(2)}+2 F_{0200}^{(2)}p_0+F_{0110}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0101}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ \end{array}\right),\\ &S_{y(0)z_4}=\left(\begin{array}{cccc} 2 F_{20}^{(1)}+F_{11}^{(1)}\bar{p}_0 & F_{11}^{(1)}+2 F_{02}^{(1)}\bar{p}_0\\ 2 F_{2000}^{(2)}+F_{1100}^{(2)}\bar{p}_0+F_{1010}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}} & F_{1100}^{(2)}+2 F_{0200}^{(2)}\bar{p}_0+F_{0110}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0101}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}\\ \end{array}\right),\\ \end{aligned} $$ $$ \begin{aligned} &S_{y(-1)z_1}=\left(\begin{array}{cccc} 0 & 0\\ 2 F_{0020}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0011}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1010}^{(2)}+F_{0110}^{(2)}p_0 & F_{0011}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2 F_{0002}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}+F_{0101}^{(2)}p_0\\ \end{array}\right),\\ &S_{y(-1)z_2}=\left(\begin{array}{cccc} 0 & 0\\ 2 F_{0020}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0011}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1010}^{(2)}+F_{0110}^{(2)}\bar{p}_0 & F_{0011}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2 F_{0002}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}+F_{0101}^{(2)}\bar{p}_0\\ \end{array}\right),\\ &S_{y(-1)z_3}=\left(\begin{array}{cccc} 0 & 0\\ 2 F_{0020}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0011}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1010}^{(2)}+F_{0110}^{(2)}p_0 & 
F_{0011}^{(2)}\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2 F_{0002}^{(2)}p_0\mathrm{e}^{-\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}+F_{0101}^{(2)}p_0\\ \end{array}\right),\\ &S_{y(-1)z_4}=\left(\begin{array}{cccc} 0 & 0\\ 2 F_{0020}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{0011}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1010}^{(2)}+F_{0110}^{(2)}\bar{p}_0 & F_{0011}^{(2)}\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+2 F_{0002}^{(2)}\bar{p}_0\mathrm{e}^{\mathrm{i}\omega_{\hat{\lambda}}\hat{\tau}}+F_{1001}^{(2)}+F_{0101}^{(2)}\bar{p}_0\\ \end{array}\right). \end{aligned} $$ \section{The calculation formula for $h_{jp_1p_2p_3p_4}$}\label{h} $$ \begin{aligned} &h_{0k2000}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{2000},\\ &h_{0k1100}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\left[-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1100},\\ &h_{0k1010}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{1010},\\ &h_{0k1001}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\left[-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1001},\\ &h_{0k0110}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\left[-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0110},\\ &h_{0k0020}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{0020},\\ &h_{0k0011}^{ccs}(\vartheta)=-\mathrm{M}_{0kcs}^c\left[-\lambda_{0k}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0011},\\ \end{aligned} $$ where $k=0,1,2 \cdots,$ $$ \begin{aligned} &h_{2nk2000}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{2000},\\ &h_{2nk1100}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1100},\\ &h_{2nk1010}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{1010},\\ &h_{2nk1001}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1001},\\ &h_{2nk0110}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0110},\\ &h_{2nk0020}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{0020},\\ &h_{2nk0011}^{ccs}(\vartheta)=-\mathrm{M}_{2nkcc}^s\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0011},\\ 
&h_{2nk2000}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{2000},\\ &h_{2nk1100}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1100},\\ &h_{2nk1010}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{1010},\\ &h_{2nk1001}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{1001},\\ &h_{2nk0110}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0110},\\ &h_{2nk0020}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}\vartheta}\left[-2\mathrm{i}\omega_{\hat{\lambda}}-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(\mathrm{e}^{2\mathrm{i}\omega_{\hat{\lambda}}}\cdot I_d)\right]^{-1}A_{0020},\\ &h_{2nk0011}^{css}(\vartheta)=-\mathrm{M}_{2nkss}^c\left[-\lambda_{2nk}\tilde{D}_0+\tilde{L}_0(I_d)\right]^{-1}A_{0011}.\\ \end{aligned} $$ where $k=1,2,\cdots$. \end{document}
\begin{document} \raggedbottom \title{Calculation of molecular vibrational spectra on a quantum annealer} \author{Alexander Teplukhin} \author{Brian K. Kendrick} \email{Correspondence should be addressed to BKK ([email protected]).} \affiliation{Theoretical Division (T-1, MS B221), Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA} \author{Dmitri Babikov} \affiliation{Department of Chemistry, Marquette University, Milwaukee, Wisconsin 53021, USA} \vskip 10pt \begin{abstract} Quantum computers are ideal for solving chemistry problems due to their polynomial scaling with system size, in contrast to classical computers, which scale exponentially. Until now, molecular energy calculations using quantum computing hardware have been limited to quantum simulators. In this paper, a new methodology is presented to calculate the vibrational spectrum of a molecule on a quantum annealer. The key idea of the method is a mapping of the ground state variational problem onto an Ising or quadratic unconstrained binary optimization (QUBO) problem by expressing the expansion coefficients using spins or qubits. The algorithm is general and represents a new revolutionary approach for solving the real symmetric eigenvalue problem on a quantum annealer. The method is applied to two chemically important molecules: O$_2$ (oxygen) and O$_3$ (ozone). The lowest two vibrational states of these molecules are computed using both a hardware quantum annealer and a software-based classical annealer. \end{abstract} \maketitle \section*{Introduction} Quantum computers are seen by many as a future alternative to classical computers. Although quantum supremacy has not yet been achieved, the field is advancing quite rapidly. There are three major types of quantum computing devices \cite{qcreview}: the quantum annealer \cite{dwave}, the quantum simulator \cite{stancil, aguzik, qsimreview}, and the universal quantum computer. The first type is an example of adiabatic quantum computing \cite{aqc} and is used to solve optimization problems, which at first glance appears to be quite restrictive. The second type is based on quantum gates, appears to have wider applicability, and therefore may be able to simulate a larger variety of problems. However, adiabatic and gate-based quantum computing were proven to be formally equivalent \cite{equiv}. Thus, the practical application space is most likely limited by the hardware realization and not necessarily by the type of approach. The third type would be able to solve any problem, but it does not yet exist. Building a universal quantum computer is a very challenging task, and its realization may be decades away. Coming from the physical chemistry community, we asked ourselves if it would be possible to program an important fundamental problem on a quantum annealer such as the commercially available D-Wave machine \cite{dw2000Q}. Typically, people who work with such devices go in the opposite direction: knowing the hardware capabilities, they come up with a suitable optimization problem. As a fundamental problem, we chose to calculate the vibrational ground state and possibly excited states of a molecule.
This problem is very important in chemistry, for example: H$_n^+$ ions \cite{hions1, hions2, hions3, hions4}, CH$_5^+$ and isotopologues \cite{ch5-1, ch5-2}, H$_3$O$^+$, H$_5$O$_2^+$ and deuterated analogues \cite{h3o, h5o2}, hydrogen clusters \cite{h2clust1, h2clust2, h2clust3}, their isotopologues \cite{d2clust1, d2clust2}, hydrogen bonded systems \cite{hbs1}, and Lennard-Jones clusters \cite{lj1, lj2}. The common method to study these molecular systems is a Monte-Carlo (MC) method in its various flavors: variational MC, time-dependent variational MC, diffusion MC, and path integral MC. A very similar ground state problem was addressed recently \cite{vqe1,vqe2,vqe3}, where the authors designed an algorithm to calculate the electronic ground state on a quantum simulator. The result of their work is a Variational Quantum Eigensolver (VQE), where an expectation value of each term in the electronic Hamiltonian is evaluated on a trial wave function using a quantum simulator, and the resultant total energy serves as a guide for generating the next trial wave function. The method is hybrid, because an optimization step, namely the trial generation, is performed on a classical computer. In contrast to the VQE algorithm, which is based on a quantum simulator, the new algorithm presented in this work is based on a quantum annealer and solves any real symmetric eigenvalue problem. To our knowledge, this is the first quantum-annealer-based eigenvalue solver and will be referred to below as the Quantum Annealer Eigensolver (QAE). As discussed in more detail below, our QAE algorithm is also hybrid since the variational eigenvalue problem is solved via a sequence of many quantum annealer optimizations performed with varying weights on the constraint equations (i.e., Lagrange multipliers). The scanning and optimization of the weights is done on a classical computer. Mapping the eigenvalue problem to quantum annealer hardware is non-trivial, because the annealer solves a minimization problem defined by an Ising functional of the form $H(s) = \sum_i h_{i} s_i + \sum_{i<j} J_{ij} s_i s_j$, where the spin variables $s_i$ take values $\{-1,1\}$. Alternatively, the functional can be converted to quadratic unconstrained binary optimization (QUBO) form using variables $x_i \in \{0,1\}$, called qubits, giving $H(x) = \sum_i Q_{ii} x_i + \sum_{i<j} Q_{ij} x_i x_j$ \cite{dwProg}. The problem is how to write down a ground state or eigenvalue problem in QUBO form and explicitly construct the matrix $\mathbf{Q}$. The outline of the paper is as follows: First, we present our solution to this problem, including the extension to the excited state calculations and multiple dimensions. Second, we apply our algorithm to two chemically important species, O$_2$ (oxygen) and O$_3$ (ozone). Third, the introduction of weighted constraints is presented, followed by a technique used to overcome the connectivity issue in the quantum annealer hardware (i.e., D-Wave machine). Noise is also modeled in the algorithm, which is shown to reproduce the results from the D-Wave machine. In the final discussion section, we consider possible improvements of the algorithm and sources of error. \section*{Results} \subsection*{Mapping of ground state problem to QUBO problem} The method is inspired by the variational principle.
Suppose we are interested in the ground state of a one-dimensional system whose wave function $\Psi$ is expanded using an orthonormal basis $\varphi_\alpha$ and unknown expansion coefficients $a_\alpha$: $\Psi = \sum_{\alpha=1}^B a_\alpha \varphi_\alpha$. Then, the ground state energy can be expressed as a double sum over the Hamiltonian matrix elements $E = \langle\Psi\vert \hat H \vert\Psi\rangle = \sum_{\alpha,\beta}^{B,B} a_\alpha a_\beta \langle\varphi_\alpha\vert \hat H \vert\varphi_\beta\rangle = \sum_{\alpha,\beta}^{B,B} a_\alpha a_\beta H_{\alpha\beta}$. It is easy to see that the functional form for the energy $E$ is similar to the QUBO form $H(x)$, except that the coefficients $a_\alpha$ are continuous and $a_\alpha \in [-1;1]$ (since from the normalization condition $\langle\Psi\vert\Psi\rangle = 1$ we know $\sum_{\alpha=1}^B\,a^2_\alpha = 1$). In contrast, the QUBO variables $x_i$ are discrete. The key idea in mapping the eigenvalue problem with Hamiltonian matrix $\mathbf{H}$ to the QUBO optimization problem with matrix $\mathbf{Q}$ is to express each expansion coefficient $a_\alpha$ using $K$ qubits $q_k^\alpha$. This approximation can be done in multiple ways. The approach we followed in this work is a fixed-point representation, as used to represent real numbers in a computer. Since the magnitude of the coefficients $a_\alpha$ never exceeds unity, only the fractional part of the coefficient has to be stored. The last qubit $q_K^\alpha$ stores the sign of $a_\alpha$. The complete expression for the coefficient is $a_\alpha = \sum_{k=1}^{K-1}2^{k-K}q_k^\alpha - q_K^\alpha \in [-1;1)$. Now, the functional $E$ can be expressed explicitly in terms of the qubits $q_k^\alpha$. The powers of two are combined with the matrix elements $H_{\alpha\beta}$ giving the matrix elements $Q_{ij}$. The qubits $q_k^\alpha$ are mapped to the qubits $x_i$ via the relation $i = K(\alpha-1) + k$, where $i \in [1, B \times K]$. Since in the QUBO (or Ising) model the ordering of qubits within a pair does not matter (i.e., the interaction between $i$ and $j$ is the same as between $j$ and $i$), the summation is restricted to $i < j$ and the non-diagonal elements $Q_{ij}$ are multiplied by two. Unfortunately, the minimum of the functional $E$ is a trivial solution $\Psi = 0$, which is due to the lack of the normalization constraint $\|\Psi\| = 1$. The workaround is to add that constraint right into the functional with a strength $\lambda$, giving $I = \langle\Psi\vert \hat H \vert\Psi\rangle + \lambda(1-\langle\Psi\vert\Psi\rangle)^2$. Essentially, the parameter $\lambda$ penalizes any deviation of the norm from unity and it helps to guide the optimization away from the trivial solution. One can think of $\lambda$ as a Lagrange multiplier and the functionals $E$ and $I$ as objective functions. The problem with the functional $I$ is that it is no longer a QUBO functional; rather, it is biquadratic in $x$. The trick is to lower the power of the constraint, giving $G = \langle\Psi\vert \hat H \vert\Psi\rangle + \lambda(1-\langle\Psi\vert\Psi\rangle)$. Dropping the constant shift $\lambda$, which has no effect on the optimization, one obtains the final expression for the functional form used in the present study: $F = \langle\Psi\vert \hat H \vert\Psi\rangle - \lambda\langle\Psi\vert\Psi\rangle$. The main consequence of the decreased power is that the normalization condition is, strictly speaking, no longer enforced (but this can be fixed by renormalizing the final solution).
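To make the mapping concrete, the sketch below assembles the QUBO matrix $\mathbf{Q}$ for the functional $F$ from a given Hamiltonian matrix $\mathbf{H}$, a penalty $\lambda$, and $K$ qubits per coefficient, using the fixed-point encoding defined above. It is a minimal illustration in Python/NumPy rather than the Fortran code used in this work, and all function and variable names are our own.
\begin{verbatim}
import numpy as np

def qubo_from_hamiltonian(H, lam, K):
    # Build Q for F = <Psi|H|Psi> - lam <Psi|Psi>, with each coefficient
    # encoded as a_alpha = sum_{k=1}^{K-1} 2^(k-K) q_k^alpha - q_K^alpha.
    # Convention: F(x) = sum_i Q[i,i] x_i + sum_{i<j} Q[i,j] x_i x_j.
    B = H.shape[0]
    M = H - lam * np.eye(B)                    # quadratic form in the a_alpha
    c = np.array([2.0**(k - K) for k in range(1, K)] + [-1.0])  # bit weights
    n = B * K
    S = np.zeros((n, n))                       # full symmetric form in the bits
    for alpha in range(B):
        for beta in range(B):
            S[alpha*K:(alpha+1)*K, beta*K:(beta+1)*K] = \
                M[alpha, beta] * np.outer(c, c)
    # Fold into QUBO form: x_i^2 = x_i for binary variables, and the
    # off-diagonal elements are doubled since only i < j is kept.
    return np.triu(2.0 * S, k=1) + np.diag(np.diag(S))

def coefficients_from_bits(x, B, K):
    # Decode a 0/1 vector back into the (unnormalized) coefficients a_alpha.
    c = np.array([2.0**(k - K) for k in range(1, K)] + [-1.0])
    return x.reshape(B, K) @ c
\end{verbatim}
The returned matrix can be passed directly to a QUBO solver, and the decoded coefficients are renormalized before the energy is evaluated.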
However, the primary role of the penalty is to avoid the trivial solution, and the functional $F$ serves that purpose. Another issue with the functional $F$ is that it encourages a nonphysical norm $\|\Psi\| > 1$. This limits the number of techniques to find a good parameter $\lambda$. Ideally, $\lambda$ should be large enough to kick the optimization away from the trivial-solution minimum, yet small enough to stay away from the large-norm limit. To find an optimal value for $\lambda$, we scan over $\lambda$ and pick the solution with the lowest energy $E=\langle\Psi\vert \hat H \vert\Psi\rangle$ (where here $\Psi$ has been renormalized: $\langle\Psi\vert\Psi\rangle=1$). \subsection*{Calculation of excited states} The QAE algorithm described above can be easily applied to the calculation of excited states by modifying the initial Hamiltonian $\mathbf{H}$. Specifically, for the first excited state, an outer product matrix of the previously computed ground state wave function is added: $\mathbf{H'} = \mathbf{H} + S_0 \vert\Psi_0\rangle\langle\Psi_0\vert$. The parameter $S_0$ is an arbitrary user-specified energy shift to move the ground state higher in the spectrum. The only requirement on $S_0$ is that it should be larger than the energy of the first excited state; otherwise, the algorithm will keep converging to the ground state. To compute the $i$-th excited state, similar terms for the states $0,\ldots,(i-1)$ should be added to the Hamiltonian. In principle, this iterative procedure allows one to compute the whole spectrum of a molecule. Obviously, for a fixed basis size $B$ and qubit expansion $K$, the higher states will not be described as accurately as the lower ones. \subsection*{Multiple dimensions} The generalization of the QAE method to multiple dimensions is straightforward. For a direct product basis, the one-dimensional expansion is replaced with an $n$-dimensional expansion, but the way to code each expansion coefficient using $K$ qubits remains the same. For example, for a two-dimensional system with the same number of basis functions $B$ for each dimension, the expansion is $\Psi = \sum_{\alpha,\beta}^{B,B} a_{\alpha\beta} \varphi_\alpha \theta_\beta$, where $\varphi_\alpha$ and $\theta_\beta$ are the basis functions in each dimension. The qubits $q_k^{\alpha\beta}$ are now mapped to the variables $x_i$ as follows: $i = KB(\alpha-1) + K(\beta-1)+ k$, where $i \in [1, B^2 \times K]$. The QAE method can also be applied to a non-direct product basis. For example, one can implement a Sequential Diagonalization Truncation (SDT) \cite{sdt1, sdt2, sdt3}, which drastically reduces the size of the Hamiltonian matrix and ultimately results in a much smaller total number of qubits than in the direct product treatment. We use the SDT approach for ozone, and further details of applying the SDT method to that molecule can be found elsewhere \cite{ozone1}. \subsection*{Application to O$_2$ and O$_3$} We applied our algorithm to the calculation of the ground and first excited states of the oxygen and ozone molecules. For both, we used an accurate potential energy surface of ozone \cite{ozonepes}. The one-dimensional O$_2$ potential $V(r_{{\rm O}_2})$ was generated from the full three-dimensional ozone potential by moving one of the oxygen atoms far away from the other two (i.e., $R_{{\rm O}-{\rm O}_2} = 60$ $a_0$). The two lowest wave functions for both molecules are shown in Fig. \ref{fig01}.
The software-based classical QUBO solver reproduces the wave functions computed with a standard classical numerical eigensolver (LAPACK \cite{lapack}), whereas the output of the hardware quantum annealer (D-Wave machine) is less accurate. The sharp edges (low resolution) of the wave functions are due to the small size of the basis set. For oxygen, a Fourier basis ($e^{i\,m\,r_{O_2}}$) was chosen to be just large enough to give the known ground state energy of 791.64 cm$^{-1}$ to within 0.01 cm$^{-1}$ using a standard classical numerical eigensolver (LAPACK). The required number of periods in the Fourier series is $m_{\max} = 4$, which translates to the basis size $B = 2\,m_{\max}+1 = 9$. For ozone, an accurate calculation would require significant computational resources, so we decided to use an SDT basis set truncated at a quite low energy, $E_{cut} = 2000$ cm$^{-1}$, which gives $B = 12$ basis functions. This basis is sufficient to describe the ground state at 1451 cm$^{-1}$ with an error of 120 cm$^{-1}$ using LAPACK, but is too small for excited state calculations (the computed excited state energy is 700 cm$^{-1}$ larger than the true value of 2147 cm$^{-1}$). Nevertheless, in this work we are primarily interested in benchmarking the new QAE method against a standard classical numerical eigensolver (LAPACK) and not in the absolute accuracy of the solutions. We refer the reader interested in accurate classical calculations to the relevant literature \cite{ozone1, ozone2}. For the three-dimensional ozone system, we plot the probability density (the wave function squared) as a function of the symmetric-stretch coordinate $\rho$ in Fig. \ref{fig01}b \cite{ozonecoords1, ozonecoords2}. The SDT basis functions span the other two internal degrees of freedom (not plotted) at each value of $\rho$ and are computed classically \cite{ozone1}. The number of qubits $K$ per expansion coefficient (or basis function) is 7 for oxygen and 5 for ozone, which is the maximum possible number that fits within the 64 fully-connected logical qubits on the hardware quantum annealer (D-Wave machine). Namely, $K \cdot B = 7 \cdot 9 = 63$ for oxygen and $K \cdot B = 5 \cdot 12 = 60$ for ozone. Figure \ref{fig02} illustrates the convergence of the energies as a function of the number of qubits $K$ per expansion coefficient $a_\alpha$ (i.e., the level of discretization). There are several interesting findings to discuss. First, the error decreases exponentially as a function of $K$, which is very appealing. Second, the error decreases and reaches a plateau, for both solvers and both systems. This shared behavior demonstrates that the QAE algorithm itself is universal, but the actual error is solver and system dependent. Third, the quantum annealer (D-Wave machine) is much less accurate (by two to three orders of magnitude) than the classical QUBO solver. In addition, because the total number of qubits currently available in the quantum annealer is rather limited, the corresponding (dashed) curves do not continue to higher values of $K$. Finally, the classical QUBO solver brings the error for O$_2$ down to 0.01 cm$^{-1}$, which coincidentally matches the error of the chosen basis size. No more than $K = 8$ qubits (a qubyte) per coefficient are needed for oxygen. For ozone, $K = 5$ qubits are required by the classical QUBO solver to reach the plateau within an error of less than 3 cm$^{-1}$. This is quite accurate, compared to the 120 cm$^{-1}$ error due to the chosen (small) SDT basis.
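As a concrete illustration of the $\lambda$ scan used to obtain these results, the following sketch loops over the penalty, solves the corresponding QUBO, renormalizes the decoded coefficients, and keeps the lowest-energy solution; it reuses the helpers from the sketch above, and \texttt{solve\_qubo} is a stand-in (our own placeholder) for the classical QUBO solver or the annealer. A deflation helper for the excited-state calculations is included as well.
\begin{verbatim}
import numpy as np

def lambda_scan(H, K, lam_min, d_lam, n_lam, solve_qubo):
    # Scan the normalization penalty lambda and keep the lowest-energy
    # renormalized solution; solve_qubo(Q) must return a 0/1 vector.
    B = H.shape[0]
    best_energy, best_a = np.inf, None
    for step in range(n_lam):
        lam = lam_min + step * d_lam
        Q = qubo_from_hamiltonian(H, lam, K)   # helper sketched earlier
        x = solve_qubo(Q)
        a = coefficients_from_bits(x, B, K)
        norm = np.linalg.norm(a)
        if norm < 1e-12:                       # trivial solution, skip it
            continue
        a = a / norm                           # restore <Psi|Psi> = 1
        energy = a @ H @ a
        if energy < best_energy:
            best_energy, best_a = energy, a
    return best_energy, best_a

def deflate(H, psi0, shift):
    # Shift a converged state up by `shift` to target the next excited state.
    return H + shift * np.outer(psi0, psi0)
\end{verbatim}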
Table S1 in Supplementary Materials gives the parameters used in scanning over the normalization penalty $\lambda$. The initial $\lambda_{min}$ could be zero or some value that is smaller than the expected energy of the state being computed (if it is known). The number of steps $N_\lambda$ specifies the number of samples and is a convergence parameter. The last parameter is the step size $\Delta\lambda$, which ideally should be the same for all $K$ within a given problem. However, for small $K$ we were always getting trivial solutions, which is probably due to the inaccurate description of the problem. The only way we found to avoid this is to increase the step size $\Delta\lambda$ by more than an order of magnitude. We faced the same issue when we were simulating the hardware noise. Increasing both the step size and the number of steps allowed us to overcome this obstacle. \subsection*{Algorithm scaling} It is straightforward to show that the runtime scales as $O(N_\lambda KB^d)$, where $d$ is the number of dimensions and $B$ basis functions are used for each dimension. In practice, however, the QUBO solver may also have some additional (internal) effects on performance. For example, the classical QUBO solver we used in this work is based on a backbone-based method inspired by Glover \textit{et al.} \cite{glover} to partition the problem, which causes a step-like runtime as a function of $K$ (see Figure S1 in Supplementary Materials). Below, we show that the actual computational time is in agreement with the theoretical scaling for the $d$-dimensional harmonic oscillator problem solved using a cosine basis and the classical QUBO solver. The oscillator frequencies were chosen to be different in each dimension, according to $\omega_i\,(\mathrm{cm}^{-1}) = 800 + 200 \cdot (i-1)$, where $i$ is the dimension index. The linear scaling with $N_\lambda$ is obvious. To verify the linear scaling with $K$, we plot the normalized computational time divided by $K$ as a function of $K$ in Figure \ref{fig03}. As expected, all curves are horizontal and their slope does not depend on dimensionality (except for perhaps $d=5$). The rapid increase, or steps, at $K=16$ for 1D, at $K=6$ for 2D, and at $K=2$ for 3D are due to the partitioning size. They appear as soon as the total number of qubits exceeds the sub-QUBO size of 47, which is $3 \times 16$ for 1D, $3^2 \times 6$ for 2D and $3^3 \times 2$ for 3D. Once the linear dependence on $K$ has been established (see Fig. \ref{fig03}), one can then verify the exponential dependence on dimensionality $d$. The logarithm of the normalized computational time is plotted in Figure \ref{fig04} as a function of the dimensionality $d$. All of the curves exhibit a linear dependence on $d$, which confirms the exponential scaling. Again, the times for all $K$ at $d = 1$ and for $K = 4$ at $d = 2$ deviate significantly from the main trend because no partitioning is required in those cases. The average slope calculated based on $d=2$ through $4$ is 0.58, which is close to the theoretically predicted value of $0.48$ (for $B=3$). When $d=5$ is included, the slope increases to 0.72, which is most likely due to the very large problem size. The total number of qubits for $d=5$ is $KB^d = 972$ to 3888 for $K=4$ to 16 and $B=3$, which results in a total number of configurations $10^{300}$ to $10^{1000}$. In summary, Figures \ref{fig03} and \ref{fig04} confirm the general scaling law $O(N_\lambda KB^d)$ of the algorithm, and any deviations from it are due to the particular QUBO solver.
We did not perform scaling studies on the hardware quantum annealer (D-Wave machine) because it was not practical due to the small number of logical qubits and the long runtime. For example, for O$_2$ we were able to treat small problems with $B = 9$ and $K = 1$ to 7, and the runtime was about $t_{dw} = 2500$ s. This time does not depend on the number of qubits $K$ per coefficient, because all problems are treated by the hardware as maximum-size problems. In contrast, the classical QUBO solver runtime for these problems was about $t_{cl} = 30$ s, which is almost two orders of magnitude smaller. The long runtime for the hardware quantum annealer is primarily due to the large number of reads (see Fig. S3). In addition, the analysis for the $d$-dimensional harmonic oscillator would require an extensive QUBO partitioning, which means another factor of ten to a hundred increase in $t_{dw}$. \subsection*{Chaining in quantum annealer} The only constraint we have discussed so far is the normalization constraint with its associated penalty $\lambda$. However, there is another constraint and penalty factor worth mentioning when running on a quantum annealer (D-Wave machine), namely the chain constraint and the associated chain penalty. The physical qubits in the hardware do not have all-to-all connectivity, which is a requirement of the algorithm. In fact, each qubit has six neighbors at most (the Chimera graph) \cite{dwProg}. Fortunately, there is a method to embed a fully connected graph on top of the hardware graph. In this approach, a number of qubits are organized into so-called chains. Qubits within a chain act like a single logical qubit, which is connected to all other logical qubits. To program chains in the hardware Chimera graph, one adds a set of constraints which share a single strength, or chain penalty, $c$. As with any constraint, the associated penalty or weight $c$ should be neither too small, because then the chains are broken, nor too large, because then the hardware becomes insensitive to the original problem. The simplest approach to find a good chain penalty is to perform scanning, similar to the $\lambda$ scanning. Figure \ref{fig05} demonstrates an example of this two-dimensional scanning for the ground state of O$_2$ with a Fourier basis of size $B = 7$ ($m_{max} = 3$) and $K = 3$. As expected, in the region of small $c$ the minimum energy is unacceptably large, simply due to broken chains. For large $c$ we see two regions: the region of small $\lambda$, which contains trivial solutions only (because the normalization constraint is too weak), and the region of large $\lambda$ with reasonable minimum energies. The phase transition between these two regions occurs close to the true ground state energy of 791.64 cm$^{-1}$. The result of this two-dimensional scanning shows that a reasonable chain penalty lies between 10000 and 20000 cm$^{-1}$. We used $c = 15000$ cm$^{-1}$ in all of our calculations. \subsection*{Simulating hardware noise} The results obtained on the quantum annealer are much less accurate than those obtained using the classical QUBO solver (see Figures \ref{fig01} and \ref{fig02}). We believe that this discrepancy is partially due to the error with which the QUBO problem is programmed in the hardware. The reported integrated control errors (ICE) for the hardware we used are quite large \cite{dw2000Q}. For example, the diagonal elements are programmed with a mean error of 0.7\% of the maximum matrix element.
In the O$_2$ calculation, the maximum matrix element is $11.5 \times 10^3$ cm$^{-1}$, which translates into an ICE of 80 cm$^{-1}$. Moreover, the error has a quite broad distribution; its standard deviation is 0.8\%. This can potentially double the ICE, bringing it to 160 cm$^{-1}$. Upon consideration of this source of error, the large difference between the quantum annealer and classical results in Figure \ref{fig02} is no longer so surprising. To quantify the effects of the hardware errors (noise), we performed calculations with a classical solver where random noise was manually added to the QUBO matrix (see Materials and Methods). The errors (relative to LAPACK) of the classical QUBO solutions with different magnitudes of noise are shown in Figure \ref{fig06}. On average, the introduction of noise increases the error. For the 1D harmonic oscillator, the default noise (using the reported ICE values discussed above) mimics the quantum annealer behavior. For oxygen, the quantum annealer behavior is well characterized if the noise is increased by a factor of three. For ozone, a factor of 5 or 7 is enough. As the problem size increases, larger noise scaling is required (to mimic the hardware performance), which implies that there could be some other source of discrepancy between the hardware and software solvers. Also, the introduction of noise required an increase in the strength of the normalization weight $\lambda$, which is reflected in Table S1. \section*{Discussion} There are several places in the method where improvements could be made, and these are worth listing. First, the functional form we used in this study is not the only one. For example, the initial biquadratic functional $I$, which we simplified earlier, can be converted to a quadratic QUBO form using multiple additional constraints. The drawback of this approach is that it will require additional penalty factors and ultimately will make the problem much harder to manage and solve. Second, the method would significantly benefit if the normalization condition could be integrated into the functional, rather than added to it. For example, one can notice that the solution vector \textbf{\textit{a}} represents a point on the unit hypersphere. Thus, its position can be described using spherical coordinates and the angles could be approximated with qubits $\mathbf{q}$. However, the presence of products of sines and cosines in this kind of mapping results in a polynomial of high degree in $\mathbf{q}$. This would require multiple constraints and associated penalty factors to convert the problem into quadratic form. The problem becomes difficult again. Third, the actual scaling of the algorithm depends on the solver and its parameters. For the classical QUBO solver used in this work, the stopping criterion is specified by the number of repeats without any improvement. For the quantum annealer, the user specifies the number of annealing cycles or reads (see Materials and Methods). The scaling we discussed earlier is for the algorithm itself, which excludes internal scaling effects of a particular QUBO solver. Fourth, the physical or working Chimera graph of the quantum annealer is not perfect. The yield of the working graph, i.e., the percentage of working qubits (and couplers) that are present, is 99\% (98\%) \cite{dw2000Q}. To make it a full-yield graph, additional software post-processing is performed before results are sent back to the user.
The user may opt out of this fixing procedure; however, that reduces the portability of the algorithm, since every machine has its own working graph. Still, removing this layer might be worth exploring. Fifth, the annealing time $t_{ann}$ is another parameter that can potentially play some role in the annealing process. We found that increasing $t_{ann}$ does not change the results significantly. What affects the results more is the number of reads $N_{reads}$. Because there is a limitation $t_{ann} \cdot N_{reads} < 3$ seconds in the API, we chose the maximum number of reads per job submission: $N_{reads} = 10^4$ and $t_{ann} = 299$ $\mu$s. The maximum allowed $t_{ann}$ is 2000 $\mu$s, but it would be good to explore larger annealing times if possible. Sixth, it is not quite clear how to properly include the ICEs in the noise simulation tests. The errors are reported solely for the maximum and minimum elements of the QUBO matrix, and they are a function of the annealing time. In addition, they were reported at 70\% of the annealing process (i.e., there is no error data at the end of annealing) \cite{dw2000Q}. For the noise tests reported in this work, we used the reported errors at 70\% and manually scaled them. Seventh, the annealer temperature could be another source of error. In a recent paper \cite{temp}, it was argued that the annealer temperature must be appropriately scaled down with problem size (at least logarithmically, or better yet as a power law). In fact, during our study we had to switch from the device with 1024 qubits (DW2X) to the device with 2048 qubits (DW2000Q). The temperature decreased from 15.7 ${\rm mK}$ to 14.5 ${\rm mK}$, which is almost logarithmic scaling (it should be 14.3 ${\rm mK}$). However, we had to perform 50 times more reads on the larger machine to reproduce trivial solutions for small $\lambda$ and large $c$. Finally, the landscape of the hypothetical configuration space is defined by the problem. It could be that our problems have high and thick barriers on the landscape, which effectively disable the tunneling mechanism in the annealer and restrict exploration. Another possibility is that the landscape has a large number of wells and the solver jumps from one to another. Possibly, a very long annealing time could help to determine if this is the case. In summary, we developed a hybrid algorithm (QAE) for the calculation of the vibrational spectrum of a molecule on a quantum annealer. The eigenvalue problem is mapped to the QUBO (Ising) problem by discretization of the expansion coefficients using qubits. The method is hybrid due to the scanning over a penalty (or weight) used to impose wave function normalization. Running on the actual quantum annealer requires additional scanning over the chain penalty. The method was applied to the ground and first excited vibrational states of two chemically important species: O$_2$ (oxygen) and O$_3$ (ozone). The QAE calculations based on the classical QUBO solver outperform those on the quantum annealer (D-Wave machine) in both accuracy and computational time (i.e., no supremacy of the latter). Our tests show that this is partially due to the errors or noise present in the hardware. Hopefully, in the future it will be possible to build larger, more accurate, and fully-connected quantum annealers. As a final note, the QAE algorithm is universal and can be used in any field of science or engineering to solve the real symmetric eigenvalue problem.
\section*{Materials and Methods} We used qbsolv \cite{qbsolv} as the software classical QUBO solver and the D-Wave 2000Q \cite{dw2000Q} as the hardware quantum solver. The underlying qbsolv algorithm is a combination of Tabu search and a backbone-based method inspired by Glover \textit{et al.} \cite{glover}. The latter is used for partitioning the original (large) QUBO into smaller sub-QUBOs. The only modification we made was to increase the span of partitioning to 1 (the hard-coded value is 0.214). The number of repeats in the stopping criterion is $N_{rep} = 10^4$. For the excited state calculations we used $S_0 = 9000$ cm$^{-1}$; however, 3000 and 6000 cm$^{-1}$ also worked well. The hardware was accessed using the qOp stack: qbsolv, the DW library, and SAPI \cite{qop}. Although qbsolv allows running sub-QUBOs on the hardware, we did not follow this approach, because the contribution of the D-Wave machine to the solution would be hard to estimate. Furthermore, qbsolv does implicit restarts and uses a classical Tabu search to refine solutions. Thus, a hardware calculation using the default qbsolv is actually hybrid and not fully quantum (for problems that fit into one sub-QUBO, qbsolv is completely classical). In order to bring the actual D-Wave performance to the surface, we removed partitioning, restarts, and refinement from qbsolv, so that it serves only as an interface to the hardware. In addition, we raised the number of reads to $10^5$ (the hard-coded value is 25). Figure \ref{fig05} was prepared with $5 \times 10^5$ reads (half a million) and the chain penalty was 15000 cm$^{-1}$ (the hard-coded value is 15). The DW2000Q has 2028 working physical qubits (99\% yield) and 5903 working couplers (98\% yield). However, embedding a fully connected graph leaves us with just 64 logical qubits. Our code generates input QUBO matrices for qbsolv. It is written in Fortran and uses LAPACK \cite{lapack} as the classical numerical eigensolver (for benchmarking the QUBO results). Convergence studies were done for $N_{rep}$, $N_{reads}$, and $N_\lambda$, and are reported in Figures S2-S4 of the Supplementary Materials. We did not see a strong dependence on Tabu memory, so we used the default values. In addition, we tested the code on $d=1$ to 5 dimensional harmonic oscillators, and the convergence for $d=1$ to 3 in terms of the number of qubits $K$ is given in Figure S5. \section*{References} \section*{Figures} \makeatletter \renewcommand{\fnum@figure}{\textbf{Fig.~\thefigure}} \makeatother \begin{figure} \caption{} \label{fig01} \end{figure} \begin{figure} \caption{} \label{fig02} \end{figure} \begin{figure} \caption{} \label{fig03} \end{figure} \begin{figure} \caption{} \label{fig04} \end{figure} \begin{figure} \caption{} \label{fig05} \end{figure} \begin{figure} \caption{} \label{fig06} \end{figure} \end{document}
\begin{document} \twocolumn[ \icmltitle{Optimizing DDPM Sampling with Shortcut Fine-Tuning} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Ying Fan}{yyy} \icmlauthor{Kangwook Lee}{yyy} \end{icmlauthorlist} \icmlaffiliation{yyy}{UW Madison} \icmlcorrespondingauthor{Ying Fan, Kangwook Lee}{[email protected], [email protected]} \icmlkeywords{Machine Learning, ICML} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} In this study, we propose \emph{Shortcut Fine-Tuning (SFT)}, a new approach for addressing the challenge of fast sampling of pretrained Denoising Diffusion Probabilistic Models (DDPMs). SFT advocates for the fine-tuning of DDPM samplers through the direct minimization of Integral Probability Metrics (IPM), instead of learning the backward diffusion process. This enables samplers to discover an alternative and more efficient sampling shortcut, deviating from the backward diffusion process. \textcolor{black}{Inspired by a control perspective,} we propose a new algorithm \textbf{SFT-PG}: \textbf{S}hortcut \textbf{F}ine-\textbf{T}uning with \textbf{P}olicy \textbf{G}radient, and prove that under certain assumptions, gradient descent of diffusion models with respect to IPM is equivalent to performing policy gradient. \textcolor{black}{To the best of our knowledge, this is the first attempt to utilize reinforcement learning (RL) methods to train diffusion models.} Through empirical evaluation, we demonstrate that our fine-tuning method can further enhance existing fast DDPM samplers, resulting in sample quality comparable to or even surpassing that of the full-step model across various datasets. \end{abstract} \section{Introduction} Denoising diffusion probabilistic models (DDPMs)~\citep{ho2020denoising} are parameterized stochastic Markov chains with Gaussian noise, which are learned by gradually adding noise to the data as the forward process, computing the posterior as the backward process, and then training the DDPM to match the backward process. Advances in DDPM~\citep{nichol2021improved,dhariwal2021diffusion} have shown the potential to rival GANs~\citep{gan} in generative tasks. However, one major drawback of DDPM is that a large number of steps $T$ is needed. As a result, there is a line of work focusing on sampling with fewer steps $T'\ll T$ to obtain comparable sample quality: Most works are dedicated to better approximating the backward process as stochastic differential equations (SDEs) with fewer steps, generally via better noise estimation or computing better sub-sampling schedules~\citep{kong2021fast, san2021noise, lam2021bilateral, watson2021learningto, jolicoeur2021gotta, bao2021analytic, bao2022estimating}. Other works aim at approximating the backward process with fewer steps via more complicated non-Gaussian noise distributions~\citep{xiao2021tackling}.\footnote{There is another line of work focusing on fast sampling of DDIM~\citep{song2020denoising} with deterministic Markov sampling chains, which we will discuss in Section~\ref{sec: related}.} \begin{figure} \caption{} \label{fig:control} \end{figure} To the best of our knowledge, existing fast samplers of DDPM stick to imitating the computed backward process with fewer steps.
If we treat data generation as a \textcolor{black}{control task (see Fig.~\ref{fig:control}), the backward process can be viewed as a demonstration to generate data from noise (which might not be optimal in terms of the number of steps), and the training dataset could be an environment that provides feedback on how good the generated distribution is. From this view, imitating the backward process could be viewed as imitation learning~\citep{hussein2017imitation} or behavior cloning~\citep{torabi2018behavioral}. Naturally, one may wonder if we can do better than pure imitation, since learning via imitation is generally useful but rarely optimal, and we can explore alternative paths for optimal solutions during online optimization. } \begin{figure} \caption{A visual illustration of the key idea of Shortcut Fine-Tuning (SFT). DDPMs aim at learning the backward diffusion model, but this approach is limited to a small number of steps. We propose the idea of \emph{not} following the backward process.} \label{fig:smallt} \end{figure} Motivated by the above observation, we study the following underexplored question: \begin{center} \emph{Can we improve DDPM sampling by \textbf{not} following the backward process?} \end{center} In this work, we show that this is indeed possible. We fine-tune pretrained DDPM samplers by directly minimizing an integral probability metric (IPM) and show that fine-tuned DDPM samplers have significantly better generation quality when the number of sampling steps is small. In this way, we can still enjoy diffusion models' multistep capabilities with no need to change the noise distribution, and improve the performance with fewer sampling steps. More concretely, we first show that performing gradient descent of the DDPM sampler w.r.t. the IPM is equivalent to stochastic policy gradient, which echoes the aforementioned RL view but with a changing reward from the optimal critic function given by the IPM. In addition, we present a surrogate function that can provide insights into monotonic improvement. Finally, we present a fine-tuning algorithm with alternating updates between the critic and the generator. We summarize our main contributions as follows: \begin{itemize} \item (Section~\ref{sec: pg}) We propose a novel algorithm to fine-tune DDPM samplers with direct IPM minimization, and we show that performing gradient descent of diffusion models w.r.t. the IPM is equivalent to policy gradient. To the best of our knowledge, this is the first work to apply reinforcement learning methods to diffusion models. \item (Section~\ref{sec: mono}) We present a surrogate function of the IPM, which provides insights into conditions for monotonic improvement and algorithm design. \item (Section~\ref{sec: regularization}) We propose a regularization for the critic based on the baseline function, which shows benefits for the policy gradient training. \item (Section~\ref{sec: full exp}) Empirically, we show that our fine-tuning can improve DDPM sampling performance in two cases: when $T$ itself is small, and when $T$ is large but using a fast sampler where $T' \ll T$. In both cases, our fine-tuning achieves sample quality comparable to or even higher than that of the DDPM with 1000 steps, using only 10 sampling steps. \end{itemize} \section{Background} \subsection{Denoising Diffusion Probabilistic Models (DDPM)} Here we consider denoising diffusion probabilistic models (DDPM) as stochastic Markov chains with Gaussian noise~\citep{ho2020denoising}. Consider a data distribution $x_0 \sim q_0$, $x_0 \in \mathbb{R}^{n}$.
Define the forward noising process: for $t \in \{0,\ldots,T-1\}$, \begin{equation} q(x_{t+1}|x_{t}) := \mathcal{N}(\sqrt{1-\beta_{t+1}}x_{t}, \beta_{t+1}I), \end{equation} where $x_1,\ldots, x_T$ are variables of the same dimensionality as $x_0$ and $\beta_{1:T}$ is the variance schedule. We can compute the posterior as a backward process: \begin{equation} q(x_{t}|x_{t+1}, x_0) = \mathcal{N}(\tilde{\mu}_{t+1}(x_{t+1}, x_0), \tilde{\beta}_{t+1}I), \end{equation} where $\tilde{\mu}_{t+1}(x_{t+1}, x_0) = \frac{\sqrt{\bar{\alpha}_t}\beta_{t+1}}{1-\bar{\alpha}_{t+1}}x_0 + \frac{\sqrt{\alpha_{t+1}}(1-\bar{\alpha}_{t})}{1-\bar{\alpha}_{t+1}}x_{t+1}$, $\alpha_{t+1} = 1-\beta_{t+1}$, $ \bar{\alpha}_{t+1} = \prod_{s=1}^{t+1}\alpha_{s}$. We define a DDPM sampler parameterized by $\theta$, which generates data starting from some pure noise $x_T \sim p_T$: \begin{equation} \begin{split} &x_T \sim p_T = \mathcal{N}(0, I),\\ &x_t \sim p_t^{\theta}(x_t|x_{t+1}),\\ &p_t^{\theta}(x_t|x_{t+1}):=\mathcal{N}\big(\mu_{t+1}^\theta(x_{t+1}), \Sigma_{t+1}\big),\\ \end{split} \end{equation} where $\Sigma_{t+1}$ is generally chosen as $\beta_{t+1}I$ or $\tilde{\beta}_{t+1}I$.\footnote{In this work we consider a DDPM sampler with a fixed variance schedule $\beta_{1:T}$ as in \citet{ho2020denoising}, but it could also be learned as in \citet{nichol2021improved}.} Define \begin{equation} p^{\theta}_{x_{0:T}} := p_T(x_T)\prod_{t=0}^{T-1}p_t^{\theta}(x_t|x_{t+1}), \end{equation} and we have the marginal distribution $p^{\theta}_0(x_0) = \int p_{x_{0:T}}^{\theta}(x_{0:T}) d x_{1:T}$. The sampler is trained by minimizing the sum of KL divergences for each step: \begin{equation} J = \mathbb{E}_{q}\left[\sum_{t=0}^{T-1}D_{KL}(q(x_{t}|x_{t+1}, x_0), p_t^{\theta}(x_{t}|x_{t+1}))\right]. \end{equation} Optimizing the above loss can be viewed as matching the conditional generator $p_t^{\theta}(x_t|x_{t+1})$ with the backward process $q(x_{t}|x_{t+1}, x_0)$ for each step. \citet{song2020score} show that $J$ is equivalent to a score-matching loss when formulating the forward and backward processes as a discrete version of stochastic differential equations. \subsection{Integral Probability Metrics (IPM)} Let $\mathcal{A}$ be a set of parameters such that each $\alpha \in \mathcal{A}$ defines a critic $f_{\alpha}: \mathbb{R}^n \rightarrow \mathbb{R}$. Given a critic $f_\alpha$ and two distributions $p_{0}^{\theta}$ and $q_0$, we define \begin{equation} g(p_0^\theta, f_\alpha, q_0) := \mathop{\mathbb{E}}_{x_0\sim p_{0}^{\theta}}[f_\alpha(x_0)]-\mathop{\mathbb{E}}_{x_0 \sim q_0}[f_\alpha(x_0)]. \end{equation} Let \begin{equation} \Phi(p_{0}^{\theta},q_0) := \sup_{\alpha \in \mathcal{A}} g(p_0^\theta, f_\alpha, q_0). \end{equation} If $\mathcal{A}$ satisfies that $\forall \alpha \in \mathcal{A}$, $\exists \alpha' \in \mathcal{A}$, s.t. $f_{\alpha'} = -f_{\alpha}$, then $\Phi(p_{0}^{\theta},q_0)$ is a pseudo-metric over the probability space of $\mathbb{R}^n$, making it a so-called integral probability metric (IPM). In this paper, we consider $\mathcal{A}$ that makes $\Phi(p_{0}^{\theta},q_0)$ an IPM. For example, when $\mathcal{A} = \{\alpha: ||f_{\alpha}||_{L}\leq 1\}$, $\Phi(p_{0}^{\theta},q_0)$ is the Wasserstein-1 distance; when $\mathcal{A} = \{\alpha: ||f_{\alpha}||_{\infty}\leq 1\}$, $\Phi(p_{0}^{\theta},q_0)$ is the total variation distance; it also includes the maximum mean discrepancy (MMD) when $\mathcal{A}$ defines the unit ball of a Reproducing Kernel Hilbert Space (RKHS).
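As an illustration of the sampler defined above, the following PyTorch-style sketch performs ancestral sampling; \texttt{mu\_net} (a network returning $\mu_{t+1}^\theta(x_{t+1})$) and the per-step variances \texttt{Sigma} are placeholders we introduce for illustration, not objects defined in this paper.
\begin{verbatim}
import torch

def ddpm_sample(mu_net, Sigma, T, shape):
    # x_T ~ N(0, I), then x_t ~ N(mu_net(x_{t+1}, t+1), Sigma[t+1] * I)
    # for t = T-1, ..., 0.  Sigma is a tensor of length T+1 holding the
    # per-step variances (e.g. beta_t or beta_tilde_t).
    x = torch.randn(shape)
    for t in reversed(range(T)):
        mean = mu_net(x, t + 1)
        x = mean + Sigma[t + 1].sqrt() * torch.randn_like(mean)
    return x  # a sample from p_0^theta
\end{verbatim}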
\section{Motivation} \subsection{Issues with Existing DDPM Samplers} \label{sec: motivation} Here we review the existing issues with DDPM samplers 1) when $T$ is not large enough, and 2) when sub-sampling with a number of steps $T' \ll T$; these issues inspire the design of our fine-tuning algorithm. \paragraph{Case 1. Issues caused by training DDPM with a small $T$ (Fig.~\ref{fig:smallt}).} Given a score-matching loss $J$, an upper bound on the Wasserstein-2 distance is given by~\citet{kwon2022scorebased}: \begin{equation} W_2(p_0^\theta, q_0) \leq \mathcal{O}(\sqrt{J})+I(T)W_2(p_T, q_T), \label{eq: upper bound} \end{equation} where $I(T)$ is non-exploding and $W_2(p_T, q_T)$ decays exponentially with $T$ as $T \rightarrow \infty$. From the inequality above, one sufficient condition for the score-matching loss $J$ to be viewed as optimizing the Wasserstein distance is that $T$ is large enough that $I(T)W_2(p_T, q_T) \rightarrow 0$. Now we consider the case when $T$ is small and $p_T \not\approx q_T$.\footnote{Recall that during the diffusion process, the Gaussian noise added at each step needs to be small so that the sampling chain can also be modeled as conditionally Gaussian~\citep{ho2020denoising}. As a result, a small $T$ means $q_T$ is not close to a pure Gaussian, and thus $p_T \not\approx q_T$.} The upper bound in Eq.~(\ref{eq: upper bound}) can be high since $W_2(p_T,q_T)$ is not negligible. As shown in Fig.~\ref{fig:smallt}, pure imitation $p_t^{\theta}(x_t|x_{t+1}) \approx q(x_t|x_{t+1},x_0)$ would not lead the model exactly to $q_0$ when $p_T$ and $q_T$ are not close enough. \paragraph{Case 2. Issues caused by a smaller number of sub-sampling steps ($T' \ll T$) (Fig.~\ref{fig:fastsampling} in Appendix~\ref{app: subsampling}).} We consider DDPM sub-sampling and other fast sampling techniques, where $T$ is large enough s.t. $p_T \approx q_T$, but we try to sample with fewer sampling steps ($T'$). This is generally done by choosing $\tau$ to be an increasing sub-sequence of $T'$ steps in $[0, T]$ starting from $0$. Many works have been dedicated to finding a subsequence and variance schedule to make the sub-sampling steps match the full-step backward process as much as possible~\cite{kong2021fast,bao2021analytic,bao2022estimating}. However, this would inevitably cause degraded sample quality if each step is Gaussian: as discussed in \citet{salimans2021progressive} and \citet{xiao2021tackling}, a multi-step Gaussian sampler cannot be distilled into a one-step Gaussian sampler without loss of fidelity. \subsection{Problem Formulation} In both cases mentioned above, there might exist paths other than imitating the backward process that can reach the data distribution with fewer Gaussian steps. Thus, one may expect to overcome these issues by minimizing the IPM. Here we present the formulation of our problem setting. We assume that there is a target data distribution $q_0$. Given a set of critic parameters $\mathcal{A}$ s.t. $\Phi(p_{0}^{\theta},q_0) = \sup_{\alpha \in \mathcal{A}} g(p_0^\theta, f_\alpha, q_0)$ is an IPM, and given a DDPM sampler with $T$ steps parameterized by $\theta$, our goal is to solve: \begin{align} \min_{\theta} \Phi(p_{0}^{\theta},q_0). \end{align} \subsection{Pathwise Derivative Estimation for Shortcut Fine-Tuning: Properties and Potential Issues} \label{sec: pathwise} One straightforward approach is to optimize $\Phi(p_0^\theta,q_0)$ using pathwise derivative estimation~\citep{rezende2014stochastic} as in GAN training, which we denote as \textbf{SFT} (shortcut fine-tuning).
We can recursively define the stochastic mappings: \begin{equation} h_{\theta, T}(x_T) := x_T, \end{equation} \begin{equation} h_{\theta, t}(x_T) := \mu_{t+1}^{\theta}(h_{\theta, t+1}(x_T)) +\epsilon_{t+1}, \end{equation} \begin{equation} x_0 = h_{\theta, 0}(x_T), \end{equation} where $x_T\sim \mathcal{N}(0,I)$, $\epsilon_{t+1} \sim \mathcal{N}(0,\Sigma_{t+1})$, and $t = 0,\ldots,T-1$. Then we can write the objective function as: \begin{equation} \Phi(p_0^\theta, q_0) = \sup_{\alpha \in \mathcal{A}} \mathop{\mathbb{E}}_{x_T, \epsilon_{1:T}}[f_\alpha(h_{\theta,0}(x_T))]-\mathop{\mathbb{E}}_{x_0 \sim q_0}[f_\alpha(x_0)]. \end{equation} Assume that $\exists \alpha \in \mathcal{A}$, s.t. $g(p_0^\theta, f_\alpha, q_0) = \Phi(p_{0}^{\theta},q_0)$. Let $\alpha^*(p_0^{\theta}, q_0) \in \{\alpha:g(p_0^\theta, f_\alpha, q_0) = \Phi(p_{0}^{\theta},q_0) \}$. When $f_{\alpha}$ is 1-Lipschitz, we can compute the gradient, similarly to WGAN~\citep{arjovsky2017wasserstein}: \begin{equation} \nabla_{\theta}\Phi(p_{0}^{\theta},q_0)=\underset{x_T, \epsilon_{1:T}}{\mathbb{E}}\left[\nabla_{\theta}f_{\alpha^*(p_0^{\theta}, q_0)}(h_{\theta,0}(x_T))\right]. \label{eq:wgan} \end{equation} \paragraph{Implicit requirements on the family of critics $\mathcal{A}$: gradient regularization.} In Eq.~(\ref{eq:wgan}), we can observe that the critic $f_{\alpha^*}$ needs to provide meaningful gradients (w.r.t. the input) for the generator. If the gradient of the critic happens to be 0 at some generated data points, even if the critic's values still make sense, the critic provides no signal for the generator at these points\footnote{For example, MMD with very narrow kernels can produce such critic functions, where each data point defines the center of the corresponding kernel which yields gradient 0.}. Thus GANs trained with IPMs generally need to choose $\mathcal{A}$ such that the gradient of the critic is regularized: for example, Lipschitz constraints like weight clipping~\citep{arjovsky2017wasserstein} and gradient penalty~\citep{gulrajani2017improved} for WGAN, and gradient regularizers for MMD GAN~\citep{arbel2018gradient}. \paragraph{Potential issues.} Besides the implicit requirements on the critic, there might also be issues when computing Eq.~(\ref{eq:wgan}) in practice. It involves differentiating a composite function of $T$ steps, which can cause problems similar to those in RNNs: \begin{itemize} \item Gradient vanishing may result in long-distance dependency being lost; \item Gradient explosion may occur; \item Memory usage is high. \end{itemize} \section{Method: Shortcut Fine-Tuning with Policy Gradient (SFT-PG)} We note that Eq.~(\ref{eq:wgan}) is not the only way to estimate the gradient w.r.t. the IPM. In this section, we show that performing gradient descent of $\Phi(p_0^\theta, q_0)$ can be equivalent to policy gradient (Section~\ref{sec: pg}), provide analysis towards monotonic improvement (Section~\ref{sec: mono}), and then present the algorithm design (Section~\ref{sec: algo}). \subsection{Policy Gradient Equivalence} \label{sec: pg} By modeling the conditional probability through the trajectory, we provide an alternative way of gradient estimation that is equivalent to policy gradient, without differentiating through the composite function. \begin{theorem} (\textbf{Policy gradient equivalence}) Assume that both $p_{x_{0:T}}^\theta(x_{0:T})f_{\alpha^*(p_0^{\theta}, q_0)}(x_0)$ and $\nabla_{\theta}p_{x_{0:T}}^\theta(x_{0:T})f_{\alpha^*(p_0^{\theta}, q_0)}(x_0)$ are continuous functions w.r.t. $\theta$ and $x_{0:T}$.
Then \begin{equation} \begin{aligned} &\nabla_{\theta}\Phi(p_{0}^{\theta},q_0)=\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\big[f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) \sum_{t=0}^{T-1}\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \big]. \end{aligned} \label{eq: pg} \end{equation} \end{theorem} \begin{proof} \begin{equation} \begin{aligned} &\mathrel{\phantom{=}}\nabla_{\theta}\Phi(p_{0}^{\theta},q_0)\\ &= \nabla_{\theta}\int p_{0}^\theta(x_0)f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_0\\ &\mathrel{\phantom{=}}+ \nabla_\theta \alpha^*(p_0^{\theta}, q_0) \nabla_{\alpha^*(p_0^{\theta}, q_0)}\int p_{0}^\theta(x_0)f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_0,\\ \end{aligned} \end{equation} where $\nabla_{\alpha^*(p_0^{\theta}, q_0)}\int p_{0}^\theta(x_0)f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_0$ is 0 from the envelope theorem. Then we have \begin{equation} \begin{aligned} &\mathrel{\phantom{=}}\nabla_{\theta}\int p_{0}^\theta(x_0)f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_0 \\ &= \nabla_{\theta}\int \left(\int p_{x_{0:T}}^{\theta}(x_{0:T}) dx_{1:T}\right)f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_0 \\ &=\nabla_{\theta}\int p_{x_{0:T}}^\theta(x_{0:T})f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) dx_{0:T} \\ &= \int p_{x_{0:T}}^\theta(x_{0:T}) f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) \nabla_{\theta}\log p^{\theta}_{x_{0:T}}(x_{0:T}) dx_{0:T} \\ &= \underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[f_{\alpha^*(p_0^{\theta}, q_0)}(x_0) \sum_{t=0}^{T-1} \nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right], \end{aligned} \end{equation} where the second-to-last equality follows from the continuity assumptions, which allow exchanging the integral and the derivative, together with the log-derivative trick. The proof is then complete. \qedhere \end{proof} \paragraph{MDP construction for policy gradient equivalence.} Here we explain why Eq.~(\ref{eq: pg}) could be viewed as policy gradient. We can construct an MDP with a finite horizon $T$: Treat $p_t^{\theta}(x_t|x_{t+1})$ as a policy, and assume that the transition is an identity mapping, so that the action is to choose the next state. Consider the reward to be $f_{\alpha^*(p_0^{\theta}, q_0)}(x_0)$ at the final step, and $0$ at all other steps. Then Eq.~(\ref{eq: pg}) is equivalent to performing policy gradient~\citep{williams1992simple}. \textbf{Comparing Eq.~(\ref{eq:wgan}) and Eq.~(\ref{eq: pg}):} \begin{itemize} \item Eq.~(\ref{eq:wgan}) uses the gradient of the critic, while Eq.~(\ref{eq: pg}) only uses the value of the critic. This indicates that for policy gradient, weaker conditions are required for critics to provide meaningful guidance for the generator, which means more choices of $\mathcal{A}$ can be applied here. \item We compute the sum of gradients over the steps in Eq.~(\ref{eq: pg}), which does not suffer from exploding or vanishing gradients. Also, we do not need to track gradients through the generated sequence over $T$ steps. \item However, stochastic policy gradient methods usually suffer from higher variance~\citep{mohamed2020monte}. Thanks to techniques from RL, we can reduce the variance via a baseline trick, which will be discussed in Section~\ref{sec: baseline}. \end{itemize} In conclusion, Eq.~(\ref{eq: pg}) is comparable to Eq.~(\ref{eq:wgan}) in expectation, with potential benefits like numerical stability, memory efficiency, and a wider range of the critic family $\mathcal{A}$. It could suffer from higher variance, but the baseline trick can help. We denote this method as \textbf{SFT-PG} (shortcut fine-tuning with policy gradient).
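To illustrate how Eq.~(\ref{eq: pg}) can be used in practice, the sketch below computes a Monte-Carlo surrogate loss whose gradient matches the policy-gradient estimator; \texttt{mu\_net} and \texttt{critic} are assumed networks (our own placeholders), the trajectory is rolled out without building a computation graph through the chain, and the baseline of Section~\ref{sec: baseline} is omitted for brevity.
\begin{verbatim}
import torch
from torch.distributions import Normal

def sft_pg_loss(mu_net, Sigma, critic, T, shape):
    # One Monte-Carlo estimate of E[ f(x_0) * sum_t log p_t(x_t | x_{t+1}) ];
    # minimizing it performs gradient descent on Phi(p_0^theta, q_0).
    with torch.no_grad():                       # roll out x_T, ..., x_0
        xs = [torch.randn(shape)]
        for t in reversed(range(T)):
            mean = mu_net(xs[-1], t + 1)
            xs.append(mean + Sigma[t + 1].sqrt() * torch.randn_like(mean))
        reward = critic(xs[-1]).reshape(-1)     # f_alpha(x_0), one per sample

    log_prob_sum = 0.0
    for step, t in enumerate(reversed(range(T))):
        x_next, x_t = xs[step], xs[step + 1]    # x_{t+1} and x_t
        dist = Normal(mu_net(x_next, t + 1), Sigma[t + 1].sqrt())
        log_prob_sum = log_prob_sum + dist.log_prob(x_t).flatten(1).sum(dim=1)

    return (reward * log_prob_sum).mean()
\end{verbatim}
Only the values of the critic enter this loss, not its input gradients, which is the practical consequence of Eq.~(\ref{eq: pg}) discussed above.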
\paragraph{Empirical comparison.} We conduct experiments on toy datasets (Fig~\ref{fig: comparison}), where we show that the performance of Eq.~(\ref{eq: pg}) with the baseline trick is at least comparable to that of Eq.~(\ref{eq:wgan}) at convergence when both use the same gradient penalty (GP) for critic regularization. We further observe that SFT-PG with the newly proposed baseline regularization (B) enjoys noticeably better performance than SFT with GP. The regularization methods are introduced in Section~\ref{sec: regularization}. Experimental details are in Section~\ref{sec: regularization exp}.
\subsection{Towards Monotonic Improvement} \label{sec: mono}
\begin{figure} \caption{Illustration of the surrogate function given a fixed critic (red), and the actual objective $\Phi(p_{0}^{\theta'}, q_0)$.} \label{fig: surrogate} \end{figure}
The update discussed in Eq.~(\ref{eq: pg}) only supports a single gradient step, given a fixed critic $f_{\alpha^*(p_0^\theta, q_0)}$ that is optimal for the current $\theta$. Questions remain: When is our update guaranteed to yield an improvement? Can we take more than one step and still obtain a descent? We answer these questions by providing a surrogate function for the IPM.
\begin{theorem} (\textbf{The surrogate function of IPM}) Assume that $g(p_0^{\theta}, f_\alpha, q_0)$ is Lipschitz w.r.t. $\theta$, given $q_0$ and $\alpha\in\mathcal{A}$. Given a fixed critic $f_{\alpha^*(p_0^\theta, q_0)}$, there exists $l\geq 0$ such that $\Phi(p_0^{\theta'}, q_0)$ is upper bounded by the surrogate function below:
\begin{equation} \Phi(p_0^{\theta'}, q_0) \leq g(p_0^{\theta'},f_{\alpha^*(p_0^\theta,q_0)}, q_0) + 2l||\theta'-\theta||. \end{equation}
\label{mono} \end{theorem}
The proof of Theorem~\ref{mono} can be found in Appendix~\ref{app: mono}. We provide an illustration of Theorem~\ref{mono} in Fig~\ref{fig: surrogate}. Given a critic that is optimal w.r.t. $\theta$, $\Phi(p_0^{\theta'}, q_0)$ is unknown if $\theta' \neq \theta$. But if we obtain a descent of the surrogate function, we are also guaranteed a descent of $\Phi(p_0^{\theta'}, q_0)$, which justifies further updates even when $\theta' \neq \theta$. Moreover, using a Lagrange multiplier, minimizing the surrogate function can be converted into the constrained problem of optimizing $g(p_0^{\theta'},f_{\alpha^*(p_0^\theta, q_0)}, q_0)$ subject to $||\theta'-\theta||\leq \delta$ for some $\delta >0$. Following this idea, one simple trick is to perform $n_{\text{generator}}$ gradient steps with a small learning rate and clip the gradient norm at a threshold $\gamma$. We present the empirical effect of this simple modification in Section~\ref{sec: clipping}, Table~\ref{gradnorm}.
\paragraph{Discussion.} One may notice that Theorem~\ref{mono} is similar in spirit to Theorem 1 in TRPO~\citep{schulman2015trust}, which provides a surrogate function for a fixed but unknown reward function. In our case, the reward function $f_{\alpha^*(p_0^\theta, q_0)}$ is known for the current $\theta$ but changing: it depends on the current $\theta$, so it remains unknown for $\theta'\neq\theta$. The proof techniques are also different, but both estimate an unknown part of the objective function.
\subsection{Algorithm Design} \label{sec: algo}
In the previous sections, we only considered the case where we have an optimal critic function given $\theta$.
In training, we adopt techniques similar to WGAN~\citep{arjovsky2017wasserstein} and alternate the training of the critic and the generator in order to approximate the optimal critic. Consider the objective function below:
\begin{equation} \min_{\theta} \max_{\alpha\in \mathcal{A}} g(p_0^\theta, f_\alpha, q_0). \end{equation}
Now we discuss techniques to reduce the variance of the gradient estimation and to regularize the critic, and then give an overview of our algorithm.
\subsubsection{Baseline Function for Variance Reduction} \label{sec: baseline}
Given a critic $\alpha$, we can adopt a technique widely used in policy gradient to reduce the variance of the gradient estimate in Eq.~(\ref{eq: pg}). Similarly to \citet{schulman2015high}, we can subtract a baseline function $V^{\omega}_{t+1}(x_{t+1})$ from the reward $f_{\alpha} (x_0)$ without changing the expectation:
\begin{equation} \begin{aligned} &\mathrel{\phantom{=}}\nabla_{\theta} g(p_0^\theta, f_\alpha, q_0)\\ &= \underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[f_{\alpha}(x_0) \sum_{t=0}^{T-1} \nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right]\\ &=\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[\sum_{t=0}^{T-1}(f_{\alpha}(x_0) - V_{t+1}^\omega(x_{t+1}))\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right], \end{aligned} \label{eq: baseline} \end{equation}
where the optimal choice of $V_{t+1}^\omega(x_{t+1})$ to minimize the variance would be $V_{t+1}(x_{t+1},\alpha):=\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}[f_{\alpha}(x_0)|x_{t+1}]$. A detailed derivation of Eq.~(\ref{eq: baseline}) can be found in Appendix~\ref{app: baseline}. Thus, given a critic $\alpha$ and a generator $\theta$, we can train a value function $V_{t+1}^\omega$ by minimizing the objective below:
\begin{equation} R_{B}(\alpha, \omega, \theta)=\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[\sum_{t=0}^{T-1}(V_{t+1}^{\omega}(x_{t+1}) -V_{t+1}(x_{t+1},\alpha))^2\right]. \end{equation}
\subsubsection{Choices of $\mathcal{A}$: Regularizing the Critic} \label{sec: regularization}
Here we discuss different choices of $\mathcal{A}$, which correspond to different regularization methods for the critic.
\paragraph{Lipschitz regularization.} If we choose $\mathcal{A}$ to contain the parameters of 1-Lipschitz functions, we can adopt the same regularization as WGAN-GP~\citep{gulrajani2017improved}:
\begin{equation} R_{GP}(\alpha, \theta) = \underset{\hat{x}_0}{\mathbb{E}}\left[(||\nabla_{\hat{x}_0}f_{\alpha}(\hat{x}_0)||-1)^2\right], \end{equation}
where $\hat{x}_0$ is sampled uniformly on the line segment between $x_0' \sim p_0^\theta$ and $x_0'' \sim q_0$. Then $f_\alpha$ can be trained to maximize $g(p_0^\theta, f_\alpha, q_0) -\eta R_{GP}(\alpha, \theta)$, where $\eta>0$ is the regularization coefficient.
\paragraph{Reusing the baseline for critic regularization.} As discussed in Section~\ref{sec: pg}, since we only use the critic value during generator updates, we can now afford a potentially wider critic family $\mathcal{A}$. Some regularization on $f_\alpha$ is still needed; otherwise its value can explode. Moreover, regularization has been shown to be beneficial for local convergence~\citep{mescheder2018training}. We therefore consider regularization that can be weaker than gradient constraints, so that the critic is more sensitive to changes of the generator, which can be favorable when the critic is updated for only a fixed number of training steps.
We find, interestingly, that the loss $R_{B}(\alpha,\omega,\theta)$ can be \emph{reused} to regularize the value of $f_\alpha$ instead of its gradient, which implicitly defines a set $\mathcal{A}$ that shows empirical benefits in practice. Define
\begin{equation} L(\alpha, \omega, \theta) := g(p_0^\theta, f_\alpha, q_0) -\lambda R_{B}(\alpha, \omega, \theta). \label{eq: b and c} \end{equation}
Given $\theta$, our critic $\alpha$ and baseline $\omega$ can be trained together to maximize $L(\alpha, \omega, \theta)$. We provide an explanation for this kind of implicit regularization. During the update, we can view $V^\omega_{t+1}$ as an approximation of the expected value of $f_{\alpha}$ from the previous step. The regularization thus trades off maximizing $g(p_0^\theta, f_\alpha, q_0)$ against minimizing changes in the expected value of $f_\alpha$, preventing drastic changes in the critic and stabilizing training. Intuitively, it helps local convergence when both the critic and the generator are already near-optimal: there is an extra cost for the critic value to diverge away from the optimal value. As a byproduct, it also makes the baseline function easier to fit, since the regularization loss is reused.
\paragraph{Empirical comparison: baseline regularization and gradient penalty.} We present a comparison of gradient penalty (GP) and baseline regularization (B) for policy gradient training (SFT-PG) on toy datasets in Section~\ref{sec: regularization exp}, Fig~\ref{fig: comparison}, which shows that, in policy gradient training, baseline regularization performs comparably to or even better than gradient penalty.
\subsubsection{Putting It Together: Algorithm Overview}
Now we are ready to present our algorithm. Our critic $\alpha$ and baseline $\omega$ are trained to maximize $L(\alpha, \omega, \theta) = g(p_0^\theta, f_\alpha, q_0) -\lambda R_{B}(\alpha, \omega, \theta)$, and the generator is trained to minimize $g(p_0^\theta, f_\alpha, q_0)$ via Eq.~(\ref{eq: baseline}). To save memory, we use a buffer $\mathcal{B}$ that contains tuples $\{x_{t+1}, x_t, x_0, t\}$ generated from the current generator without tracking gradients, and randomly sample a batch from the buffer to compute Eq.~(\ref{eq: baseline}) and then perform backpropagation. The maximization and minimization steps are performed alternately. See details in Alg~\ref{alg}.
\begin{algorithm}[] \caption{Shortcut Fine-Tuning with Policy Gradient and Baseline Regularization: SFT-PG (B)} \textbf{Input}: $n_{\text{critic}}$, $n_{\text{generator}}$, batch size $m$, critic parameters $\alpha$, baseline function parameters $\omega$, pretrained generator $\theta$, regularization hyperparameter $\lambda$ \begin{algorithmic} \WHILE{$\theta$ not converged} \STATE Initialize trajectory buffer $\mathcal{B}$ as $\emptyset$ \FOR{$i$ = 0,...,$n_{\text{critic}}$} \STATE Obtain $m$ i.i.d. samples from $p^{\theta}_{x_{0:T}}$ \STATE Add all $\{x_{t+1}, x_t, x_0, t\}$ to $\mathcal{B}$, $t=0,...,T-1$ \STATE Obtain $m$ i.i.d.
samples from $q_0$ \STATE Update $\alpha$ and $\omega$ by maximizing Eq.~(\ref{eq: b and c}) \ENDFOR \FOR{$j$ = 0,...,$n_{\text{generator}}$} \STATE Obtain $m$ samples of $\{x_{t+1}, x_t, x_0, t\}$ from $\mathcal{B}$ \STATE Update $\theta$ via policy gradient according to Eq.~(\ref{eq: baseline}) \ENDFOR \ENDWHILE \end{algorithmic} \label{alg} \end{algorithm}
\section{Related Works} \label{sec: related}
\paragraph{GAN and RL.} There are works using ideas from RL to train GANs~\citep{yu2017seqgan,wang2017irgan, sarmad2019rl, bai2019model}. The most relevant work is SeqGAN~\citep{yu2017seqgan}, which uses policy gradient to train the generator network. There are several main differences between their setting and ours. First, different GAN objectives are used: SeqGAN uses the JS divergence, while we use an IPM. Second, in SeqGAN the next token depends on the tokens generated at all previous steps, while in diffusion models the next image depends only on the model output of the previous step; also, the critic takes the whole generated sequence as input in SeqGAN, whereas we only evaluate the final output. Third, in our work rewards are mathematically derived from performing gradient descent w.r.t. the IPM, while in SeqGAN rewards are designed manually. In conclusion, different from SeqGAN, we propose a new policy gradient algorithm to optimize the IPM objective, with a novel analysis of monotonic improvement conditions and a new regularization method for the critic.
\paragraph{Diffusion and GAN.} There are other works combining diffusion and GAN training: \citet{xiao2021tackling} consider multi-modal noise distributions generated by a GAN to enable fast sampling; \citet{zheng2022truncated} consider a truncated forward process, replacing its last steps with an autoencoder that generates noise, starting denoising from the learned autoencoder and then continuing to generate data with the diffusion model; Diffusion GAN~\citep{wang2022diffusion} perturbs the data with an adjustable number of steps and minimizes the JS divergence at all intermediate steps by training a multi-step generator with a time-dependent discriminator. To the best of our knowledge, there is no existing work using GAN-style training to fine-tune a pretrained DDPM sampler.
\paragraph{Fast samplers of DDIM and more.} There is another line of work on fast sampling for DDIM~\citep{song2020denoising}, for example, knowledge distillation~\citep{luhman2021knowledge,salimans2021progressive} and solving ordinary differential equations (ODEs) with fewer steps~\citep{liu2022pseudo,lu2022dpm}. Samples generated by DDIM are generally less diverse than those of DDPM~\citep{song2020denoising}. Also, fast sampling is generally easier for DDIM samplers (with deterministic Markov chains) than for DDPM samplers, since it is possible to combine multiple deterministic steps into one step without loss of fidelity, whereas multiple Gaussian steps cannot be combined into one~\citep{salimans2021progressive}. Fine-tuning DDIM samplers with deterministic policy gradient for fast sampling also seems possible, but deterministic policies may suffer from suboptimality, especially in high-dimensional action spaces~\citep{silver2014deterministic}, though they might require fewer samples. It is also less necessary, since distillation is already possible for DDIM. Moreover, there is also some recent work that uses sample quality metrics to enable fast sampling.
Instead of fine-tuning pretrained models, \citet{watson2021learning} propose to optimize the hyperparameters of the sampling schedule for a family of non-Markovian samplers by differentiating through KID~\citep{binkowski2018demystifying}, which is computed from pretrained Inception features. This is followed by a contemporary work that fine-tunes pretrained DDIM models using an MMD computed from pretrained features~\citep{aiello2023fast}, which is similar to the method discussed in Section~\ref{sec: pathwise} but with a fixed critic and a deterministic sampling chain. Generally speaking, adversarially trained critics can provide stronger signals than fixed ones and are more helpful for training~\citep{li2017mmd}. As a result, besides the potential issues discussed in Section~\ref{sec: pathwise}, such training may also suffer from sub-optimal results when $p^\theta_0$ is not close enough to $q_0$ at initialization, and is highly dependent on the choice of the pretrained features.
\section{Experiments} \label{sec: full exp}
\begin{figure*} \caption{Training curves of swiss roll} \label{fig:swiss} \caption{Roll, SFT (GP)} \label{fig:swiss-wgan} \caption{Roll, SFT-PG (GP)} \label{fig:swiss-gp} \caption{Roll, SFT-PG (B)} \label{fig:swiss-ft} \caption{Training curves of moons} \label{fig:moon} \caption{Moons, SFT (GP)} \label{fig:moon-wgan} \caption{Moons, SFT-PG (GP)} \label{fig:moon-gp} \caption{Moons, SFT-PG (B)} \label{fig:moon-ft} \caption{Training curves~(\ref{fig:swiss}, \ref{fig:moon}) and generated samples on the toy datasets for SFT (GP), SFT-PG (GP), and SFT-PG (B).} \label{fig: comparison} \end{figure*}
In this section, we aim to answer the following questions:
\begin{itemize}
\item (Section~\ref{sec: poc}) Does the proposed algorithm SFT-PG (B) work in practice?
\item (Section~\ref{sec: regularization exp}) How does SFT-PG (Eq.~(\ref{eq: pg})) perform compared to SFT (Eq.~(\ref{eq:wgan})) with the same regularization (GP), and how does baseline regularization (B) compare to gradient penalty (GP) in SFT-PG?
\item (Section~\ref{sec: clipping}) Do more generator steps with gradient clipping improve the performance, as discussed in Section~\ref{sec: mono}?
\item (Section~\ref{sec: benchmark}) Does the proposed fine-tuning SFT-PG (B) improve existing fast samplers of DDPM on benchmark datasets?
\end{itemize}
Code is available at \url{https://github.com/UW-Madison-Lee-Lab/SFT-PG}.
\subsection{Setup}
Here we provide the setup of our training algorithm on the different datasets. Model architectures and training details can be found in Appendix~\ref{app: exp}.
\paragraph{Toy datasets.} The toy datasets we use are swiss roll and two moons~\citep{scikit-learn}. We use $\lambda=0.1$, $n_{\text{critic}} = 5, n_{\text{generator}} = 1$ with no gradient clipping. For evaluation, we use the Wasserstein-2 distance between 10K samples from $p_0^\theta$ and $q_0$ respectively, calculated with POT~\citep{flamary2021pot}.
\paragraph{Image datasets.} We use MNIST~\citep{lecun1998gradient}, CIFAR-10~\citep{krizhevsky2009learning} and CelebA~\citep{liu2015deep}. For hyperparameters, we choose $\lambda=1.0$, $n_{\text{critic}} = 5, n_{\text{generator}} = 10$, $\gamma=0.1$, except when testing different choices of $n_{\text{generator}}$ and $\gamma$ on MNIST, where we use $n_{\text{generator}} = 5$ and varying $\gamma$. For evaluation, we use FID~\citep{heusel2017gans}, measured with 50K samples generated from $p_0^\theta$ and $q_0$ respectively.
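For concreteness, the following is a minimal sketch of the Wasserstein-2 evaluation on the toy datasets using POT; the array names \texttt{gen} and \texttt{data} are placeholders, and this is one straightforward way to compute the metric rather than the exact evaluation script.
\begin{verbatim}
# Sketch: Wasserstein-2 distance between generated and real samples via POT.
import numpy as np
import ot  # POT (python optimal transport)

def wasserstein2(gen: np.ndarray, data: np.ndarray) -> float:
    a = ot.unif(gen.shape[0])                     # uniform weights, generated samples
    b = ot.unif(data.shape[0])                    # uniform weights, data samples
    M = ot.dist(gen, data, metric='sqeuclidean')  # pairwise squared Euclidean costs
    return float(np.sqrt(ot.emd2(a, b, M)))       # exact OT cost; W_2 is its sqrt

# e.g. wasserstein2(np.random.randn(1000, 2), np.random.randn(1000, 2))
\end{verbatim}
Note that the exact solver builds an $n\times n$ cost matrix, so memory grows quadratically with the number of samples.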
\subsection{Proof-of-concept Results}
In this section, we fine-tune pretrained DDPMs with $T=10$ and present the effect of the proposed algorithm SFT-PG with baseline regularization on the toy datasets. We present the results for the different gradient estimators discussed in Section~\ref{sec: pg}, the different critic regularization methods discussed in Section~\ref{sec: regularization}, and the training technique with more generator steps discussed in Section~\ref{sec: mono}.
\subsubsection{Improvement from Fine-Tuning} \label{sec: poc}
On the swiss roll dataset, we first train a DDPM with $T=10$ until convergence, and then use it as the initialization of our fine-tuning. As shown in Table~\ref{tab: swiss}, our fine-tuned sampler with 10 steps not only achieves a better Wasserstein distance than the DDPM with $T=10$, but even outperforms the DDPM with $T=1000$, which is reasonable since we directly optimize the IPM objective.\footnote{Besides, our algorithm also works when training from scratch, with a final performance comparable to fine-tuning, but it takes longer to train.} The training curve and the data visualization can be found in Fig~\ref{fig:swiss} and Fig~\ref{fig:swiss-ft}.
\begin{table}[h] \centering \vskip 0.08in \scalebox{1.0}{ \begin{tabular}{lc} \toprule \textbf{Method} &\textbf{$\mathbf{W_2(p_0^\theta,q_0)}$ ($\times 10^{-2}$) ($\downarrow$)} \\ \hline $T= 10$, DDPM & 8.29\\ $T= 100$, DDPM & 2.36\\ $T= 1000$, DDPM & 1.78 \\ $T=10$, SFT-PG (B) &\textbf{0.64} \\ \bottomrule \end{tabular} } \caption{Comparison of DDPM models and our fine-tuned model on the swiss roll dataset.} \label{tab: swiss} \end{table}
\subsubsection{Effect of Different Gradient Estimators and Regularizations} \label{sec: regularization exp}
On the toy datasets, we compare the gradient estimators SFT-PG and SFT, both with gradient penalty (GP).\footnote{For the gradient penalty coefficient, we tested different choices in $[0.001,10]$ and picked the best choice, $0.001$. We also tried spectral normalization for the Lipschitz constraint, but found that its performance is worse than gradient penalty on these datasets.} We also compare them to our proposed algorithm SFT-PG (B). All methods are initialized with a pretrained DDPM with $T=10$ and then trained until convergence. As shown in Fig~\ref{fig: comparison}, all methods converge and the training curves are largely comparable, while SFT-PG (B) enjoys a slightly better final performance.
\begin{figure*} \caption{CIFAR10, Initialization} \caption{CIFAR10, SFT-PG (B)} \caption{CelebA, Initialization} \caption{CelebA, SFT-PG (B)} \caption{Randomly generated images before and after fine-tuning, on CIFAR10 $(32\times 32)$ and CelebA $(64\times 64)$, $T'=10$. The initialization is from pretrained models with $T=1000$ and sub-sampling schedules with $T'=10$ calculated from FastDPM~\citep{kong2021fast}.} \label{fig: benchmark} \end{figure*}
\subsubsection{Effect of Gradient Clipping with More Generator Steps} \label{sec: clipping}
In Section~\ref{sec: mono}, we discussed that performing more generator steps with the same fixed critic and clipping the gradient norm can improve the training of our algorithm. Here we present the effect of $n_\text{generator} = 1$ or $5$ with different gradient clipping thresholds $\gamma$ on MNIST, initialized with a pretrained DDPM with $T=10$ (FID $= 7.34$). From Table~\ref{gradnorm}, we find that a small $\gamma$ with more steps can improve the final performance, but can hurt it if $\gamma$ is too small.
Randomly generated samples from the model with the best FID are shown in Fig~\ref{mnist}. We also conducted similar experiments on the toy datasets, but found no significant difference in the final results, which is expected since the task is too simple.
\begin{minipage}{\linewidth} \begin{minipage}[b]{0.49\linewidth} \centering \scalebox{0.8}{ \begin{tabular}[b]{ll} \toprule \textbf{Method} &\textbf{FID ($\downarrow$)} \\ \hline 1 step & 1.35\\ 5 steps, $\gamma = 10$ & 0.83\\ 5 steps, $\gamma = 1.0$ & \textbf{0.82} \\ 5 steps, $\gamma = 0.1$ &0.89 \\ 5 steps, $\gamma = 0.001$ &1.46 \\ \bottomrule \end{tabular} } \captionof{table}{Effect of $n_{\text{generator}}$ and $\gamma$.} \label{gradnorm} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \scalebox{1.12}{\includegraphics[width=0.49\linewidth]{mnist.pdf}} \captionof{figure}{Generated samples.} \label{mnist} \end{minipage} \end{minipage}
\subsection{Benchmark Results} \label{sec: benchmark}
To compare with existing fast samplers of DDPM, we take pretrained DDPMs with $T=1000$ and fine-tune them with sampling steps $T'=10$ on the image benchmark datasets CIFAR-10 and CelebA. Our baselines include various fast DDPM samplers with Gaussian noise: naive DDPM sub-sampling, FastDPM~\citep{kong2021fast}, and recent advanced samplers such as Analytic-DPM~\citep{bao2021analytic} and SN-DDPM~\citep{bao2022estimating}. For fine-tuning, we use the fixed variance and sub-sampling schedules computed by FastDPM with $T'=10$ and only train the mean prediction model. From Table~\ref{tab: benchmark}, we observe that the performance of fine-tuning with $T'=10$ is comparable to that of the pretrained model with $T=1000$, outperforming the existing fast DDPM samplers. Randomly generated images before and after fine-tuning are shown in Fig~\ref{fig: benchmark}.
\begin{table}[h] \centering \vskip 0.08in \scalebox{0.9}{ \begin{tabular}{lcc} \toprule \textbf{Method} &\textbf{CIFAR-10} ($32\times32$) & \textbf{CelebA} ($64\times64$) \\ \hline DDPM& 34.76 &36.69 \\ FastDPM & 29.43 & 28.98\\ Analytic-DPM & 22.94 & 28.99\\ SN-DDPM & 16.33& 20.60\\ SFT-PG (B) & \textbf{2.28} & \textbf{2.01}\\ \bottomrule \end{tabular} } \caption{FID ($\downarrow$) on CIFAR-10 and CelebA, $T'=10$ for all methods. Our fine-tuning produces comparable results with the full-step pretrained models (FID = 3.03 for CIFAR-10, and FID = 3.26 for CelebA, $T=1000$).} \label{tab: benchmark} \end{table}
We also present a comparison with DDIM sampling methods on the CIFAR-10 benchmark in Appendix~\ref{app: ddim}, where our method is comparable to progressive distillation with $T' = 8$.
\subsection{Discussion and Limitations}
In our experiments, we only train the mean prediction model given a pretrained DDPM. It is also possible to learn the variance via fine-tuning with the same objective, and we leave this as future work. We also note that although we do not need to track gradients across all sampling steps, we still need to run $T'$ inference steps to collect the sequence, which is inevitably slower than one-step GAN training.
\section{Conclusion}
In this work, we fine-tune DDPM samplers to minimize an IPM via policy gradient. We show that performing gradient descent on stochastic Markov chains w.r.t. the IPM is equivalent to policy gradient, and present a surrogate function for the IPM which sheds light on monotonic improvement conditions. Our fine-tuning improves the existing fast samplers of DDPM, achieving comparable or even higher sample quality than the full-step model on various datasets.
\appendix \onecolumn
\section{Visualization: Effect of Shortcut Fine-Tuning} \label{app: vis}
\begin{figure} \caption{Sampling steps before fine-tuning from DDPM} \label{fig: vis} \caption{Sampling steps after fine-tuning} \label{fig: vis_new} \caption{Visualization of the sampling path before~(\ref{fig: vis}) and after~(\ref{fig: vis_new}) fine-tuning.} \label{fig: visualization} \end{figure}
We provide visualizations of the complete sampling chain before and after fine-tuning in Fig~\ref{fig: visualization}. We generate 50 data points using the same random seed for DDPM and for our fine-tuned model, both trained on the same Gaussian cluster centered at the red spot $(0.5, 0.5)$ with a standard deviation of $0.01$ in each dimension, with $T=2$. The whole sampling path is visualized, with different steps marked by different color intensities: the data points with the darkest color are the ones finally generated. As shown in Fig~\ref{fig: visualization}, our fine-tuning does find a ``shortcut'' path to the final distribution.
\section{Illustration of Sub-sampling with $T'\ll T$ in DDPM} \label{app: subsampling}
\begin{figure} \caption{Illustration of sub-sampling with $T'\ll T$ in DDPM.} \label{fig:fastsampling} \end{figure}
\section{Towards Monotonic Improvement} \label{app: mono}
Here we present a detailed proof of Theorem~\ref{mono}. For simplicity, we denote $p_0^\theta$ by $p_\theta$, $q_0$ by $q$, and use $z\in \mathbb{R}^d$ in place of $x_0$ as the variable in our sample space. Given the generated distribution $p_{\theta}$ and the target distribution $q$, the objective function is:
\begin{equation} \min_{\theta}\max_{\alpha \in \mathcal{A}}g(p_{\theta}, f_\alpha, q), \end{equation}
where $g(p_{\theta}, f_\alpha, q) = \int (p_{\theta}(z) - q(z))f_{\alpha}(z)dz$. Recall $\Phi(p_{\theta},q) = \underset{\alpha \in \mathcal{A}}{\max}\int (p_{\theta}(z) - q(z))f_{\alpha}(z)dz =\int (p_{\theta}(z) - q(z))f_{\alpha^*(p_\theta, q)}(z)dz$. Assume that $g(p_{\theta}, f_\alpha, q)$ is Lipschitz w.r.t. $\theta$, given $q$ and $\alpha\in\mathcal{A}$. Our goal is to show that there exists $l\geq 0$ such that
\begin{equation} \Phi(p_{\theta'}, q) \leq g(p_{\theta'},f_{\alpha^*(p_\theta,q)}, q) + 2l||\theta-\theta'||, \end{equation}
where equality is achieved when $\theta' = \theta$. If the above inequality holds, $L_{\theta}(\theta') = g(p_{\theta'},f_{\alpha^*(p_\theta,q)}, q) + 2l||\theta-\theta'||$ is a surrogate function of $\Phi(p_{\theta'}, q)$: since $\Phi(p_{\theta'}, q) -\Phi(p_\theta, q) \leq L_{\theta}(\theta')- L_{\theta}(\theta)$ and $L_{\theta}(\theta) = \Phi(p_\theta, q)$, any $\theta'$ that improves $L_{\theta}(\theta')$ is also guaranteed to improve $\Phi(p_{\theta'}, q)$.
\begin{proof} Consider
\begin{equation} \begin{split} &\mathrel{\phantom{=}}\Phi(p_{\theta'}, q) - \Phi(p_{\theta}, q)\\ & =\int (p_{\theta'} (z)- q(z))f_{\alpha^*(p_{\theta'}, q)}(z)dz-\int (p_{\theta} (z)- q(z))f_{\alpha^*(p_\theta, q)}(z)dz\\ & =\int (p_{\theta'} (z)- q(z))f_{\alpha^*(p_{\theta'}, q)}(z)dz -\int (p_{\theta'} (z)- q(z))f_{\alpha^*(p_\theta, q)}(z)dz \\ & \mathrel{\phantom{=}}+\int (p_{\theta'} (z)- q(z))f_{\alpha^*(p_\theta, q)}(z)dz- \int (p_{\theta} (z)- q(z))f_{\alpha^*(p_\theta, q)}(z)dz\\ & = \int (p_{\theta'} (z)- q(z))(f_{\alpha^*(p_{\theta'}, q)}(z)-f_{\alpha^*(p_\theta, q)}(z))dz + \int (p_{\theta'} (z)- p_{\theta} (z))f_{\alpha^*(p_\theta, q)}(z)dz\\ & = \int (q(z)-p_{\theta'} (z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz + \int (p_{\theta'} (z)- p_{\theta} (z))f_{\alpha^*(p_\theta, q)}(z)dz.\\ \end{split} \end{equation}
We have
\begin{equation} \begin{split} &\mathrel{\phantom{=}}\int (q(z)-p_{\theta'} (z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz\\ &=\int (p_{\theta}(z)-p_{\theta'} (z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz-\int (p_{\theta}(z)-q(z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz\\ &\leq \int (p_{\theta}(z)-p_{\theta'} (z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz, \end{split} \end{equation}
where the last inequality comes from the definition $\alpha^*(p_\theta, q) = \underset{\alpha \in \mathcal{A}}{\argmax} \int (p_{\theta}(z)-q(z))f_{\alpha}(z)dz$. So
\begin{equation} \begin{split} &\mathrel{\phantom{=}}\Phi(p_{\theta'},q) - \Phi(p_{\theta},q) \\ &= \int (p_{\theta'} (z)- p_{\theta} (z))f_{\alpha^*(p_\theta, q)}(z)dz + \int (q(z)-p_{\theta'} (z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz \\ &\leq g(p_{\theta'}, f_{\alpha^*(p_\theta, q)}, q) - g(p_{\theta}, f_{\alpha^*(p_\theta, q)},q) + \int (p_{\theta}(z)-p_{\theta'}(z))(f_{\alpha^*(p_\theta, q)}(z)-f_{\alpha^*(p_{\theta'}, q)}(z))dz\\ &\leq g(p_{\theta'}, f_{\alpha^*(p_\theta, q)},q) - g(p_{\theta}, f_{\alpha^*(p_\theta, q)},q) + 2l||\theta-\theta'||,\\ \end{split} \end{equation}
where the last inequality comes from the Lipschitz assumption on $g(p_{\theta}, f_{\alpha^*(p_\theta, q)},q)$, applied to $\alpha^*(p_\theta, q)$ and $\alpha^*(p_{\theta'}, q)$. Recalling that $\Phi(p_{\theta},q) = g(p_{\theta}, f_{\alpha^*(p_\theta, q)},q)$, the proof is complete.
\end{proof}
Consider the optimization objective $\text{minimize}_{\theta'}\, L_{\theta}(\theta')$. Using the Lagrange multiplier, we can convert the problem to a constrained optimization problem:
\begin{equation} \begin{split} &\underset{\theta'}{\text{minimize}} \quad g(p_{\theta'},f_{\alpha^*(p_\theta, q)}, q) \\ & \text{s.t.} \quad||\theta' - \theta||\leq \delta \\ \end{split} \end{equation}
where $\delta >0$. The constraint set is convex and the projection onto it is easy to compute via norm regularization, as we discussed in Section~\ref{sec: mono}. Intuitively, this means that as long as we only optimize in a neighborhood of the current generator $\theta$, we can treat $g(p_{\theta'},f_{\alpha^*(p_\theta, q)}, q)$ as an approximation of $\Phi(p_{\theta'}, q)$ during gradient updates.
\section{Baseline Function for Variance Reduction} \label{app: baseline}
Here we present the derivation of Eq~(\ref{eq: baseline}), which is very similar to that in \citet{schulman2015high}.
To show
\begin{equation} \underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[f_{\alpha}(x_0) \sum_{t=0}^{T-1} \nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right]=\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[\sum_{t=0}^{T-1}(f_{\alpha}(x_0) - V_{t+1}^\omega(x_{t+1}))\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right], \end{equation}
we only need to show
\begin{equation} \underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right] =0. \end{equation}
Note that
\begin{equation} \begin{aligned} &\mathrel{\phantom{=}}\underset{p_{x_{0:T}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) \right] \\ &=\underset{p_{x_{t+1:T}}^\theta}{\mathbb{E}}\left[\underset{p_{x_{0:t}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) |x_{t+1:T}\right] \right]\\ &=\underset{p_{x_{t+1:T}}^\theta}{\mathbb{E}}\left[\underset{p_{x_{t}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1})|x_{t+1:T}\right] \right],\\ \end{aligned} \end{equation}
where $\underset{p_{x_{t}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) |x_{t+1:T}\right]=0$ when $p_t^{\theta}(x_t|x_{t+1})$ and $\nabla_{\theta}p_t^{\theta}(x_t|x_{t+1})$ are continuous:
\begin{equation} \begin{aligned} &\mathrel{\phantom{=}}\underset{p_{x_{t}}^\theta}{\mathbb{E}}\left[V_{t+1}^\omega(x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1}) |x_{t+1:T}\right]\\ &=V_{t+1}^\omega(x_{t+1}) \int p_t^{\theta}(x_t|x_{t+1})\nabla_{\theta}\log p_t^{\theta}(x_t|x_{t+1})dx_t\\ &=V_{t+1}^\omega(x_{t+1}) \int\nabla_{\theta}p_t^{\theta}(x_t|x_{t+1})dx_t\\ &=V_{t+1}^\omega(x_{t+1}) \nabla_{\theta} \int p_t^{\theta}(x_t|x_{t+1})dx_t\\ &=0. \end{aligned} \end{equation}
\section{Comparison with DDIM Sampling} \label{app: ddim}
We present a comparison with DDIM sampling methods on the CIFAR-10 benchmark below. Methods marked with * require additional model training, and NFE is the number of sampling steps (number of score function evaluations). All methods are based on the same pretrained DDPM model with $T = 1000$.
\begin{table}[H] \centering \begin{tabular}{lllllll} \toprule Method (DDPM, stochastic) & NFE & FID && Method (DDIM, deterministic) & NFE & FID\\ \hline DDPM & 10 & 34.76 & & DDIM & 10 & 17.33 \\ SN-DDPM & 10 & 16.33 & & DPM-solver & 10 & 4.70 \\ SFT-PG* & 10 & 2.28 & & & & \\ SFT-PG* & 8 & 2.64 & & Progressive distillation*$^+$ & 8 & 2.57 \\ \bottomrule \end{tabular} \caption{Comparison with DDIM sampling methods, which are deterministic given the initial noise.} \end{table}
We can observe that SFT-PG with NFE=10 produces the best FID, and SFT-PG with NFE=8 is comparable to progressive distillation with the same NFE. Our method is orthogonal to other fast sampling methods like distillation. We also note that our fine-tuning is more computationally efficient than progressive distillation: for example, for CIFAR-10, progressive distillation takes about a day using 8 TPUv4 chips, while our method takes about 6h using 4 RTX 2080Ti GPUs, and the original DDPM training takes 10.6h using TPU v3.8. Besides, since we use a fixed small learning rate during training (1e-6), it is also possible to further accelerate our training by choosing appropriate learning rate schedules.
\section{Experimental Details} \label{app: exp}
Here we provide more details of our fine-tuning settings for reproducibility.
\subsection{Experiments on Toy Datasets}
\paragraph{Training sets.} For the 2D toy datasets, each training set contains 10K samples.
\paragraph{Model architecture.} The generator we adopt is a 4-layer MLP with 128 hidden units and softplus activations. The critic and the baseline function are 3-layer MLPs with 128 hidden units and ReLU activations.
\paragraph{Training details.} For optimizers, we use Adam~\citep{kingma2014adam} with $\text{lr} = 5\times 10^{-5}$ for the generator, and $\text{lr} = 1\times 10^{-3}$ for both the critic and the baseline function. Pretraining of the DDPMs is conducted for 2000 epochs for $T=10,100,1000$ respectively. Both pretraining and fine-tuning use batch size 64, and we fine-tune for 300 epochs.
\subsection{Experiments on Image Datasets}
\paragraph{Training sets.} We use 60K training samples from MNIST, 50K training samples from CIFAR-10, and 162K samples from CelebA.
\paragraph{Model architecture.} For the model architecture, we use a U-Net as the generative model, as in \citet{ho2020denoising}. For the critic, we adopt 3 convolutional layers with kernel size = 4, stride = 2, padding = 1 for downsampling, followed by 1 final convolutional layer with kernel size = 4, stride = 1, padding = 0, and then take the average of the final output. The numbers of output channels are 256, 512, 1024, 1 for each layer, with Leaky ReLU (slope = 0.2) as the activation; a minimal sketch of this critic is given below. For the baseline function, we use a 4-layer MLP with timestep embeddings. The numbers of hidden units are 1024, 1024, 256, and the output dimension is 1.
\paragraph{Training details.} For MNIST, we train a DDPM with $T=10$ steps for 100 epochs to convergence as the pretrained model. For CIFAR-10 and CelebA, we use the pretrained models from \citet{ho2020denoising} and \citet{song2020denoising}, respectively, with $T=1000$, and use the sampling schedules calculated by FastDPM~\citep{kong2021fast} with the VAR approximation and the DDPM sampling schedule as the initialization for our fine-tuning. We found that rescaling the pixel values to $[0,1]$ is the default choice in FastDPM, but it hurts training if the rescaled images are fed directly into the critic, so we remove the rescaling during our fine-tuning. For optimizers, we use Adam with $\text{lr} = 1\times 10^{-6}$ for the generator, and $\text{lr} = 1\times 10^{-4}$ for both the critic and the baseline function. We found that smaller learning rates help the stability of training, which is consistent with the theoretical result in Theorem~\ref{mono}. For MNIST and CIFAR-10, we train for 100 epochs with batch size 128. For CelebA, we train for 100 epochs with batch size 64.
\paragraph{More generated samples.} To show the effect of our fine-tuning, we present generated samples from the FastDPM initialization and from our fine-tuned model, using the same random seed, in Fig~\ref{fig: more_cifar} and Fig~\ref{fig: more_celeba}. We notice that some of the images generated by our fine-tuned model are similar to the images at initialization but with much richer colors and more details, while in other cases the images after fine-tuning look very different from those at initialization.
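As an illustration of the critic architecture described above for the image datasets, the following is a minimal PyTorch-style sketch; the class name and the default number of input channels are illustrative, and it is not the exact code used in our experiments.
\begin{verbatim}
# Sketch of the convolutional critic: three stride-2 downsampling convolutions
# (kernel 4, padding 1) with LeakyReLU(0.2), a final kernel-4 convolution to a
# single channel, and a spatial average producing one scalar per image.
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        chs = [in_channels, 256, 512, 1024]
        layers = []
        for c_in, c_out in zip(chs[:-1], chs[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(1024, 1, kernel_size=4, stride=1, padding=0)]
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # average the final feature map to get a scalar critic value per image
        return self.net(x).mean(dim=(1, 2, 3))

# e.g. Critic()(torch.randn(8, 3, 64, 64)) returns a tensor of shape (8,)
\end{verbatim}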
\begin{figure} \caption{ Images generated from FastDPM as initialization (on the top) and from the fine-tuned model (on the bottom), generated using the same seed, trained on CIFAR-10.} \label{fig: more_cifar} \end{figure} \begin{figure} \caption{Images generated from FastDPM as initialization (on the top) and from the fine-tuned model (on the bottom), generated using the same seed, trained on CelebA.} \label{fig: more_celeba} \end{figure} \end{document}
\begin{document} \title[Differential Subordination]{Applications of Theory of Differential Subordination for Functions with Fixed Initial Coefficient to Univalent Functions} \author[S. Nagpal]{Sumit Nagpal} \address{Department of Mathematics, University of Delhi, Delhi--110 007, India} \email{[email protected]} \author{V. Ravichandran} \address{Department of Mathematics, University of Delhi, Delhi--110 007, India \and School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia} \email{[email protected]; [email protected]} \date{} \begin{abstract} By using the theory of first-order differential subordination for functions with fixed initial coefficient, several well-known results for subclasses of univalent functions are improved by restricting the functions to have fixed second coefficient. The influence of the second coefficient of univalent functions is evident in the results obtained. \end{abstract} \subjclass[2010]{30C80} \keywords{Differential subordination, fixed initial coefficient, convex and starlike functions.} \maketitle
\section{Introduction and preliminaries}\label{sec1}
It is well-known that the second coefficient of univalent functions influences many properties. For example, the bound for the second coefficient of univalent functions yields the growth and distortion estimates as well as the Koebe constant. Various subclasses of univalent functions with fixed second coefficients were investigated beginning with Gronwall \cite{gronwall}. For a brief survey of these developments as well as for some radius problems, see \cite{naveen}. The necessary modifications to the theory of differential subordination to handle problems for functions with fixed second coefficients were recently carried out in \cite{sumit}. Using the results in \cite{sumit}, the influence of the second coefficient in certain differential implications associated with starlike and convex functions with fixed second coefficients is investigated in this paper.
Let $p$ be an analytic function in the unit disk $\mathbb{D}=\{z \in \mathbb{C}:|z|<1\}$ and $\psi(r,s)$ be a complex function defined in a domain of $\mathbb{C}^{2}$. Consider a class of functions $\Psi$, and two subsets $\Omega$ and $\Delta$ in $\mathbb{C}$. Given any two of these quantities, the aim of the theory of first-order differential subordination is to determine the third so that the following differential implication is satisfied:
\begin{equation*}\label{subo} \psi \in \Psi \quad \mbox{and} \quad \{\psi(p(z),z p'(z)): z \in \mathbb{D}\} \subset \Omega \quad \Rightarrow \quad p(\mathbb{D}) \subset \Delta. \end{equation*}
Furthermore, the problem is to find the ``smallest'' such $\Delta$ and the ``largest'' such $\Omega$. In \cite{sumit}, the authors proposed a new methodology by making appropriate modifications and improvements to Miller and Mocanu's theory (see \cite{millermocanu1978,millermocanu1981} and their monograph \cite{monograph}) of second-order differential subordination, and gave interesting applications of the newly formulated theory to the classes of normalized convex and starlike functions with fixed second coefficient. Let $\mathcal{H}_{\beta}[a,n]$ consist of analytic functions $p$ of the form \[p(z)=a+\beta z^{n}+p_{n+1}z^{n+1}+\cdots,\] where $\beta \in \mathbb{C}$ is fixed. Without loss of generality, we assume that $\beta$ is a positive real number.
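For instance, if $f(z)=z+bz^{2}+\cdots$ is analytic in $\mathbb{D}$ and $f(z)/z$ does not vanish, then \[p(z)=\frac{zf'(z)}{f(z)}=1+bz+\cdots\] belongs to $\mathcal{H}_{b}[1,1]$; functions of this form appear repeatedly in the applications considered below.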
\begin{defin}\label{sec1,def1}\cite[Definition 1, p.\ 158]{millermocanu1981} Let $Q$ be the class of functions $q$ that are analytic and injective in $\overline{\mathbb{D}}\setminus E(q)$ where \[ E(q) := \{ \zeta \in \partial \mathbb{D}: \lim_{z \rightarrow \zeta} q(z)=\infty\}\] and are such that $q'(\zeta) \neq 0 $ for $\zeta\in \partial \mathbb{D}\setminus E(q)$. \end{defin} \begin{defin}\label{sec1,def2}\cite[Definition 3.1, p. 616]{sumit} Let $\Omega$ be a domain in $\mathbb{C}$, $n \in \mathbb{N}$ and $\beta >0$. Let $q \in {Q}$ be such that $ |q'(0)|\geq \beta$. The class $\Psi_{n,\beta}(\Omega,q)$ consists of \emph{$\beta$-admissible functions} $\, \psi:\mathbb{C}^2\rightarrow \mathbb{C}$ satisfying the following conditions: \begin{itemize} \item[(i)] $ \psi(r,s)$ is continuous in a domain $D\subset \mathbb{C}^2$, \item[(ii)] $(q(0),0)\in{D}$ and $\psi(q(0),0)\in{\Omega}$, \item[(iii)] $\psi(q(\zeta),m \zeta q'(\zeta))\not\in{\Omega}$ whenever $(q(\zeta),m \zeta q'(\zeta))\in{D}$, $\zeta\in \partial \mathbb{D}\setminus E(q)$ and \[m \geq n+\frac{|q'(0)|-\beta }{|q'(0)|+\beta }.\] \end{itemize} We write $\Psi_{1,\beta}(\Omega,q)$ as $\Psi_{\beta}(\Omega,q)$. \end{defin} \begin{thm}\label{sec1,th3}\cite[Theorem 3.1, p. 617]{sumit} Let $q(0)=a$, $\psi \in \Psi_{n,\beta}(\Omega,q)$ with associated domain D, and $\beta >0$ with $ |q'(0)|\geq \beta$. Let $p \in \mathcal{H}_{\beta}[a,n]$. If $(p(z), zp'(z))\in {D}$ for $z \in \mathbb{D}$ and \[ \psi (p(z), zp'(z)) \in {\Omega}\quad ( z\in {\mathbb{D}})\] then $p\prec q$. \end{thm} The special case of $\Delta$ being a half plane is important in our investigation. Let $\Delta=\{w:\RE w>0\}$. The function \[q(z)=\frac{a+\overline{a}z}{1-z} \quad (z \in \mathbb{D})\] where $\RE a >0$, is univalent in $\overline{\mathbb{D}}\setminus\{1\}$ and satisfies $q(\mathbb{D})=\Delta$, $q(0)=a$ and $q \in Q$. Let $\Psi_{n,\beta}(\Omega,a):=\Psi_{n,\beta}(\Omega,q)$ and when $\Omega=\Delta$, denote the class by $\Psi_{n,\beta}\{a\}$ with $\Psi_{\beta}\{a\}:=\Psi_{1,\beta}\{a\}$. The class $\Psi_{n,\beta}(\Omega,a)$ consists of those functions $\psi:\mathbb{C}^2\rightarrow \mathbb{C}$ that are continuous in a domain $D \subset \mathbb{C}^2$ with $(a,0) \in D$ and $\psi(a,0) \in \Omega$, and that satisfy the admissibility condition: \begin{equation}\label{sec1,eq1} \begin{split} \psi(i\rho,\sigma) &\not\in \Omega \quad \mbox{whenever} \quad (i\rho,\sigma) \in D \quad \mbox{and}\quad\\ \sigma &\leq-\frac{1}{2}\left( n+\frac{2\RE a-\beta }{2\RE a+\beta }\right)\frac{|a-i\rho|^{2}}{\RE a}, \end{split} \end{equation} where $\rho \in \mathbb{R}$ and $n \geq 1$. If $a=1$, then \eqref{sec1,eq1} simplifies to \begin{equation}\label{sec1,eq2} \begin{split} \psi(i\rho,\sigma) &\not\in \Omega \quad \mbox{whenever} \quad (i\rho,\sigma) \in D \quad \mbox{and}\quad\\ \sigma &\leq-\frac{1}{2}\left( n+\frac{2-\beta }{2+\beta }\right) (1+\rho^{2}), \end{split} \end{equation} where $\rho \in \mathbb{R}$, and $n \geq 1$. In this particular case, Theorem \ref{sec1,th3} becomes \begin{thm}\label{sec1,th4} \cite[Theorem 3.4, p. 620]{sumit} Let $p \in \mathcal{H}_{\beta}[a,n]$ with $\RE a>0$ and $0<\beta \leq 2\RE a$. \begin{enumerate} \item [(i)] Let $\psi \in \Psi_{n,\beta}(\Omega,a)$ with associated domain $D$. If $(p(z), zp'(z))\in {D} $ and $\psi (p(z), zp'(z)) \in \Omega\quad ( z\in {\mathbb{D}})$, then $\RE p(z) >0\quad (z \in \mathbb{D})$. \item [(ii)] Let $\psi \in \Psi_{n,\beta}\{a\}$ with associated domain $D$. 
If $(p(z), zp'(z))\in {D}$ and $ \RE \psi (p(z), zp'(z))>0 \quad ( z\in {\mathbb{D}})$, then $\RE p(z)>0\quad (z \in \mathbb{D})$. \end{enumerate} \end{thm} \section{Applications in univalent function theory}\label{sec2} Let $\mathcal{A}_n$ be the class consisting of analytic functions $f$ defined on $\mathbb{D}$ of the form $f(z)=z+a_{n+1}z^{n+1}+a_{n+2}z^{n+2}+\cdots$, and $\mathcal{A}:=\mathcal{A}_1$. The class $\mathcal{S}^*(\alpha)$ of starlike functions of order $\alpha$, $0 \leq \alpha <1$, consists of functions $f\in \mathcal{A}$ satisfying the inequality \[ \RE\left(\frac{zf'(z)}{f(z)}\right)>\alpha \quad ( z\in {\mathbb{D}}). \] Similarly, the class $\mathcal{C}(\alpha)$ of convex functions of order $\alpha$, $0 \leq \alpha <1$, consists of functions $f\in \mathcal{A}$ satisfying the inequality \[ \RE\left(1+\frac{zf''(z)}{f'(z)}\right)>\alpha \quad ( z\in {\mathbb{D}}). \] When $\alpha=0$, these classes are respectively denoted by $\mathcal{S}^*$ and $\mathcal{C}$. Let $\mathcal{A}_{n,b}$ denote the class of functions $f \in \mathcal{A}_{n}$ of the form \[f(z)=z+bz^{n+1}+a_{n+2}z^{n+2}+\cdots,\] where $b$ is fixed. We write $\mathcal{A}_{1,b}$ as $\mathcal{A}_{b}$. There are many differential inequalities in classical analysis for which the differential operator is required to have positive real part. A typical example is the Marx-Strohh\"{a}cker result, which states that if $ f \in \mathcal{A}$, then \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >0 \quad(z\in \mathbb{D})\quad \Rightarrow \quad \RE \frac{z f'(z)}{f(z)}>\frac{1}{2}\quad (z\in \mathbb{D}).\] A natural problem is to extend the result by finding a domain $D$ containing the right half-plane so that \[\frac{z f''(z)}{f'(z)}+1 \in D \quad(z\in \mathbb{D})\quad \Rightarrow \quad \RE \frac{z f'(z)}{f(z)}>\frac{1}{2} \quad(z\in \mathbb{D}).\] The domain $D$ cannot be taken as the half-plane $\{w\in\mathbb{C}:\RE w>\alpha\}$, with $ \alpha<0$, for functions $f \in \mathcal{A}$. (For a counter example, see \cite{example}.) However, it is possible to take such a $D$ for functions $f \in \mathcal{A}_{b}$. To prove this result, we shall need the following lemma proved by Ozaki. \begin{lem}\label{lem} \cite{ozaki} If $f \in \mathcal{A}$ satisfies \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >-\frac{1}{2}\quad (z\in \mathbb{D}),\] then $f$ is univalent in $\mathbb{D}$. \end{lem} \begin{thm}\label{sec2,th2} If $f \in \mathcal{A}_{b}$ with $|b| \leq 1$, then the following implication holds: \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >\frac{|b|-1}{2(|b|+1)} \quad (z\in \mathbb{D})\quad \Rightarrow \quad \RE \frac{z f'(z)}{f(z)}>\frac{1}{2}\quad (z\in \mathbb{D}).\] \end{thm} \begin{proof} If we set \begin{equation}\label{sec2,eq1} \alpha:=\frac{|b|-1}{2(|b|+1)} \end{equation} then $\alpha \in [-1/2,0]$. Define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \[p(z):=2\frac{zf'(z)}{f(z)}-1.\] Using Lemma \ref{lem}, it follows that $f$ is univalent and hence \[p(z)=1+2bz+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{2b}[1,1]$ and satisfies \begin{equation}\label{sec2,eq2} \frac{zf''(z)}{f'(z)}+1-\alpha=\frac{p(z)+1}{2}+\frac{zp'(z)}{p(z)+1}-\alpha=\psi(p(z),zp'(z))\quad (z \in \mathbb{D}) \end{equation} where \[\psi(r,s):=\frac{r+1}{2}+\frac{s}{r+1}-\alpha,\] and $\alpha$ is given by \eqref{sec2,eq1}. The function $\psi$ is continuous in the domain $D=(\mathbb{C}\backslash \{-1\})\times \mathbb{C}$, $(1,0) \in D$ and \[\RE \psi(1,0)=1-\alpha >0,\] as $\alpha \in [-1/2,0]$. 
We need to show that the admissibility condition \eqref{sec1,eq2} is satisfied. Since \[\psi(i\rho,\sigma)=\frac{i\rho+1}{2}+\frac{\sigma}{1+\rho^{2}}(1-i\rho)-\alpha\] we have \begin{align*} \RE \psi(i\rho,\sigma)&=\frac{1}{2}+\frac{\sigma}{1+\rho^{2}}-\alpha\\ & \leq \frac{1}{2}-\frac{1}{2}\left(1+\frac{2-2|b|}{2+2|b|}\right)-\alpha\\ &=\frac{|b|-1}{2(|b|+1)}-\alpha=0, \end{align*} whenever $\rho \in \mathbb{R}$ and \[\sigma \leq -\frac{1}{2}\left(1+\frac{2-\beta}{2+\beta}\right)(1+\rho^{2}),\quad \beta=2|b|.\] Thus $\psi \in \Psi_{2b}\{1\}$. From the hypothesis and \eqref{sec2,eq2}, we obtain \[\RE \psi(p(z),zp'(z)) >0 \quad ( z \in \mathbb{D}).\] Therefore, by applying Theorem \ref{sec1,th4} (ii), we conclude that $p$ satisfies \[\RE p(z) >0 \quad ( z \in \mathbb{D}).\] This is equivalent to \[\RE \frac{z f'(z)}{f(z)}>\frac{1}{2}\quad (z\in \mathbb{D}).\qedhere\] \end{proof} \begin{rem}\label{sec2,rem3} For $|b|=1$, Theorem \ref{sec2,th2} reduces to \cite[ Theorem 2.6a, p.\ 57]{monograph}. Also, if $|b|=0$ then $f \in \mathcal{A}_{2}$ and $\alpha=-1/2$. Therefore, Theorem \ref{sec2,th2} reduces to \cite[ Theorem 2.6i, p.\ 68]{monograph} in this case. \end{rem} \begin{thm}\label{sec2,th5} If $f \in \mathcal{A}_{b}$ with $|b| \leq 1$, then the following implication holds: \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >\frac{|b|-1}{|b|+1}\quad (z\in \mathbb{D}) \quad \Rightarrow \quad \RE \sqrt{f'(z)}>\frac{1}{2}\quad (z\in \mathbb{D}),\] where the branch of the square root is so chosen that $\sqrt{1}=1$. \end{thm} \begin{proof}Set \begin{equation}\label{sec2,eq3} \alpha:=\frac{|b|-1}{|b|+1}. \end{equation} Then $\alpha \in [-1,0]$. Define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \[p(z):=2\sqrt{f'(z)}-1.\] Using the hypothesis, it follows that the function \[p(z)=1+2bz+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{2b}[1,1]$ and satisfies \begin{equation}\label{sec2,eq4} \frac{zf''(z)}{f'(z)}+1-\alpha=1+\frac{2zp'(z)}{p(z)+1}-\alpha=\psi(p(z),zp'(z))\quad (z \in \mathbb{D}) \end{equation} where \[\psi(r,s):=1+\frac{2s}{r+1}-\alpha\] and $\alpha$ is given by \eqref{sec2,eq3}. The function $\psi$ is continuous in the domain $D=(\mathbb{C}\backslash \{-1\})\times \mathbb{C}$, $(1,0) \in D$ and \[\RE \psi(1,0)=1-\alpha >0,\] as $\alpha \in [-1,0]$. We now show that the admissibility condition \eqref{sec1,eq2} is satisfied. Since \[\psi(i\rho,\sigma)=1+\frac{2\sigma}{1+\rho^{2}}(1-i\rho)-\alpha\] we have \begin{align*} \RE \psi(i\rho,\sigma)&=1+\frac{2\sigma}{1+\rho^{2}}-\alpha\\ & \leq 1-\left(1+\frac{2-2|b|}{2+2|b|}\right)-\alpha\\ & =\frac{|b|-1}{|b|+1}-\alpha=0, \end{align*} whenever $\rho \in \mathbb{R}$ and \[\sigma \leq -\frac{1}{2}\left(1+\frac{2-\beta}{2+\beta}\right)(1+\rho^{2}), \quad \beta=2|b|.\] Thus $\psi \in \Psi_{2b}\{1\}$. From the hypothesis and \eqref{sec2,eq4}, we obtain \[\RE \psi(p(z),zp'(z)) >0 \quad ( z \in \mathbb{D}).\] Therefore, by applying Theorem \ref{sec1,th4} (ii), we conclude that $p$ satisfies \[\RE p(z) >0 \quad ( z \in \mathbb{D}).\] This is equivalent to \[\RE \sqrt{f'(z)} >\frac{1}{2}\quad (z\in \mathbb{D}).\qedhere\] \end{proof} \begin{rem}\label{sec2,rem6} If $|b|=1$, then $\alpha=0$ and Theorem \ref{sec2,th5} reduces to \cite[Theorem 2.6a, p.\ 57]{monograph}. 
\end{rem} \begin{thm}\label{sec2,th7} If $f \in \mathcal{A}_{b}$ with $|b| \leq 1$, then the following implication holds: \[\RE \frac{zf'(z)}{f(z)} >\frac{|b|}{|b|+1}\quad (z\in \mathbb{D}) \quad \Rightarrow \quad \RE \frac{f(z)}{z}>\frac{1}{2}\quad (z\in \mathbb{D}).\] \end{thm} \begin{proof} Setting \begin{equation}\label{sec2,eq5} \alpha:=\frac{|b|}{|b|+1}, \end{equation} it is seen that $\alpha \in [0,1/2]$. Define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \[p(z):=2\frac{f(z)}{z}-1.\] Since $f \in \mathcal{A}_{b}$, the function \[p(z)=1+2bz+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{2b}[1,1]$ and satisfies \begin{equation}\label{sec2,eq6} \frac{zf'(z)}{f(z)}-\alpha=1+\frac{zp'(z)}{p(z)+1}-\alpha=\psi(p(z),zp'(z))\quad (z \in \mathbb{D}) \end{equation} where \[\psi(r,s):=1+\frac{s}{r+1}-\alpha,\] and $\alpha$ is given by \eqref{sec2,eq5}. The function $\psi$ is continuous in the domain $D=(\mathbb{C}\backslash \{-1\})\times \mathbb{C}$, $(1,0) \in D$ and \[\RE \psi(1,0)=1-\alpha >0,\] as $\alpha \in [0,1/2]$. We now show that the admissibility condition \eqref{sec1,eq2} is satisfied. Since \[\psi(i\rho,\sigma)=1+\frac{\sigma}{1+\rho^{2}}(1-i\rho)-\alpha\] we have \begin{align*} \RE \psi(i\rho,\sigma)&=1+\frac{\sigma}{1+\rho^{2}}-\alpha\\ & \leq 1-\frac{1}{2}\left(1+\frac{2-2|b|}{2+2|b|}\right)-\alpha\\ & =\frac{|b|}{|b|+1}-\alpha=0, \end{align*} whenever $\rho \in \mathbb{R}$ and \[\sigma \leq -\frac{1}{2}\left(1+\frac{2-\beta}{2+\beta}\right)(1+\rho^{2}),\quad \beta=2|b|.\] Thus $\psi \in \Psi_{2b}\{1\}$. From the hypothesis and \eqref{sec2,eq6}, we obtain \[\RE \psi(p(z),zp'(z)) >0 \quad( z \in \mathbb{D}).\] Therefore, by applying Theorem \ref{sec1,th4} (ii), we conclude that $p$ satisfies \[\RE p(z) >0 \quad (z \in \mathbb{D}).\] This is equivalent to \[\RE \frac{f(z)}{z} >\frac{1}{2}\quad (z\in \mathbb{D}).\qedhere\] \end{proof} \begin{rem}\label{sec2,rem8} If $|b|=1$ then $\alpha=1/2$ and Theorem \ref{sec2,th7} reduces to \cite[Theorem 2.6a, p.\ 57]{monograph}. \end{rem} \begin{thm}\label{sec2,th9} If $f \in \mathcal{A}_{b}$ is locally univalent with $|b| \leq 1$, then the following implication holds: \[\RE \sqrt{f'(z)}>\sqrt{\frac{1+|b|}{8}} \quad (z\in \mathbb{D})\quad \Rightarrow \quad \RE \frac{f(z)}{z} >\frac{1}{2}\quad (z\in \mathbb{D}),\] where the branch of the square root is so chosen that $\sqrt{1}=1$. \end{thm} \begin{proof} To begin with, note that if we set \begin{equation}\label{sec2,eq7} \alpha:=\sqrt{\frac{1+|b|}{8}}, \end{equation} then $\alpha \in [1/2\sqrt{2},1/2]$. Define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \[p(z):=\frac{2f(z)}{z}-1.\] Since $f \in \mathcal{A}_{b}$, the function \[p(z)=1+2bz+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{2b}[1,1]$ and satisfies \begin{equation}\label{sec2,eq8} \sqrt{f'(z)}-\alpha=\sqrt{\frac{zp'(z)+p(z)+1}{2}}-\alpha=\psi(p(z),zp'(z))\quad (z \in \mathbb{D}) \end{equation} where \[\psi(r,s):=\sqrt{\frac{r+s+1}{2}}-\alpha,\] and $\alpha$ is given by \eqref{sec2,eq7}. The function $\psi$ is continuous in the domain $D=\mathbb{C}^{2}$, $(1,0) \in D$ and \[\RE \psi(1,0)=1-\alpha >0,\] as $\alpha \in [1/2\sqrt{2},1/2]$. 
We now show that the following admissibility condition holds: \begin{equation}\label{sec2,eq9} \RE \psi(i\rho,\sigma)=\RE \sqrt{\frac{i\rho+\sigma+1}{2}}-\alpha \leq 0, \end{equation} whenever $\rho \in \mathbb{R}$ and \[\sigma \leq -\frac{1}{2}\left(1+\frac{2-\beta}{2+\beta}\right)(1+\rho^{2}),\quad \beta=2|b|.\] If we let $\zeta=\xi+i\eta=(1+\sigma+i\rho)/2$, and using the conditions on $\rho$ and $\sigma$, we obtain \begin{align*} \xi=\frac{1+\sigma}{2} &\leq \frac{1}{2}\left[1-\frac{1}{2}\left(1+\frac{2-2|b|}{2+2|b|}\right)(1+\rho^{2})\right]\\ &=\frac{1}{2}\left[1-\frac{1}{1+|b|}(1+\rho^{2})\right]\\ &=\frac{1}{2(1+|b|)}(|b|-4\eta^{2}). \end{align*} This implies that $\zeta$ is a point inside the parabola \[\eta^{2}=-\frac{(1+|b|)}{2}\left[\xi-\frac{|b|}{2(1+|b|)}\right]\] and \[\RE \sqrt{\zeta}=\RE \sqrt{\xi+i\eta}=\sqrt{\frac{\xi+\sqrt{\xi^{2}+\eta^{2}}}{2}}.\] Since \begin{align*} \xi^{2}+\eta^{2} &\leq \frac{1}{4(1+|b|)^{2}}(|b|-4\eta^{2})^{2}+\eta^{2}\\ &=\frac{1}{4(1+|b|)^{2}}(1+4\eta^{2})(|b|^{2}+4\eta^{2}) \end{align*} and using the fact that the geometric mean is less than or equal to the arithmetic mean, we have \begin{align*} \sqrt{ \xi^{2}+\eta^{2}} &=\frac{1}{2(1+|b|)}\sqrt{(1+4\eta^{2})(|b|^{2}+4\eta^{2})}\\ &\leq \frac{1}{4(1+|b|)}[1+8\eta^{2}+|b|^{2}] \end{align*} so that \begin{align*} \xi+\sqrt{ \xi^{2}+\eta^{2}} &\leq \frac{1}{2(1+|b|)}(|b|-4\eta^{2})+\frac{1}{4(1+|b|)}[1+8\eta^{2}+|b|^{2}]\\ &=\frac{1+|b|}{4}. \end{align*} Thus \[\RE \sqrt{\zeta}- \sqrt{\frac{1+|b|}{8}} \leq 0.\] This is exactly the admissibility condition given in \eqref{sec2,eq9}. Thus $\psi \in \Psi_{2b}\{1\}$. From the hypothesis and \eqref{sec2,eq8}, we obtain \[\RE \psi(p(z),zp'(z)) >0 \quad(z \in \mathbb{D}).\] Therefore, by applying part $(ii)$ of Theorem \ref{sec1,th4} we conclude that $p$ satisfies \[\RE p(z) >0 \quad ( z \in \mathbb{D}).\] This is equivalent to \[\RE \frac{f(z)}{z} >\frac{1}{2}\quad (z\in \mathbb{D}).\qedhere\] \end{proof} \begin{rem}\label{sec2,rem10} If $|b|=1$, then $\alpha=1/2$ and Theorem \ref{sec2,th9} reduces to \cite[Theorem 2.6a, p.\ 57]{monograph}. \end{rem} \section{Two Sufficient conditions for Starlikeness}\label{sec3} In 1989, Nunokawa \cite{nunokawa} gave the following sufficient condition for starlikeness: if $f \in \mathcal{A}$, then \[\RE\left( \frac{zf''(z)}{f'(z)}+1\right)< \frac{3}{2}\quad (z\in \mathbb{D}) \quad \Rightarrow \quad 0<\RE \frac{zf'(z)}{f(z)}<\frac{4}{3}\quad (z\in \mathbb{D}).\] We will improve this result for a function $f \in \mathcal{A}_{b}$. \begin{thm}\label{sec3,th1} If $f \in \mathcal{A}_{b}$ satisfies \[\RE \left(\frac{zf''(z)}{f'(z)}+1\right) <\frac{3}{2}\quad (z\in \mathbb{D}),\] then \[\left|\frac{zf'(z)}{f(z)}-\alpha\right| <\alpha\quad (z\in \mathbb{D}),\] where $\alpha$ is given by \begin{equation}\label{sec3,eq1} \alpha:=\frac{3(|b|+6)+\sqrt{9|b|^{2}+28|b|+4}}{8(|b|+4)}. \end{equation} In particular, \[0<\RE \frac{zf'(z)}{f(z)}<2\alpha\quad (z\in \mathbb{D}).\] \end{thm} \begin{proof} The hypothesis can be written in terms of subordination as \[\frac{z f''(z)}{f'(z)}+1 \prec \frac{1-2z}{1-z} \quad ( z\in \mathbb{D})\] which gives $|b| \leq 1/2$. Also the constant $\alpha$ given by \eqref{sec3,eq1} satisfies the equation \begin{equation}\label{sec3,eq2} 4(|b|+4)\alpha^{2}-3(|b|+6)\alpha+5=0. \end{equation} If $\alpha >2/3$, then we obtain \[3\sqrt{9|b|^{2}+28|b|+4}>7|b|+10.\] On solving this, we get $|b|>1/2$ which is a contradiction. 
Similarly, if we let $\alpha <5/8$, then we obtain $|b|<0$ which is again a contradiction. Thus $\alpha \in [5/8,2/3]$. Define the function \[w=q(z):=\frac{\alpha(1-z)}{(\alpha-1)z+\alpha}\quad (z\in {\mathbb{D}}) \] where $\alpha$ is given by \eqref{sec3,eq1}. As $\alpha \in [5/8,2/3]$, $q$ is analytic and univalent in $\overline{\mathbb{D}}$. Thus, $q\in {Q}$. Since $q(-1)=2\alpha$ and $q(1)=0$, we see that \[q(\mathbb{D})=\{w:|w-\alpha|<\alpha\}.\] Now, define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \[p(z):=\frac{zf'(z)}{f(z)}.\] Since $f \in \mathcal{A}_{b}$ and $f$ is starlike (univalent), the function \[p(z)=1+bz+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{b}[1,1]$ and satisfies \begin{equation}\label{sec3,eq3} \frac{zf''(z)}{f'(z)}+1=p(z)+\frac{zp'(z)}{p(z)}=\psi(p(z),zp'(z))\quad (z \in \mathbb{D}) \end{equation} where \[\psi(r,s):=r+\frac{s}{r}.\] We claim that $\psi \in \Psi_{b}(\Omega,q)$ where $\Omega=\{w:\RE w<3/2\}$. The function $\psi$ is continuous in the domain $D=(\mathbb{C} \backslash {\{0\}}) \times \mathbb{C}$, $(1,0)\in {D}$ and \[\RE \psi (1,0)=1<3/2\] so that $\psi (1,0)\in {\Omega}$. We now show that \[ \RE \psi (q(\zeta),m \zeta q'(\zeta)) \geq \frac{3}{2} , \] where $|\zeta|=1$ and \[m \geq 1+\frac{|q'(0)|-|b|}{|q'(0)|+|b|},\quad q'(0)=\frac{1-2\alpha}{\alpha}.\] Since \begin{align*} \psi (q(\zeta),m \zeta q'(\zeta))&=q(\zeta) + m \frac{\zeta q'(\zeta)}{q(\zeta)}\\ &=\frac{\alpha(1-\zeta)}{(\alpha-1)\zeta+\alpha}+\frac{m(1-2\alpha)\zeta}{(1-\zeta)[(\alpha-1)\zeta+\alpha]}\\ &=-1+\frac{(m+2)\alpha-\zeta}{(\alpha-1)\zeta+\alpha}-\frac{m}{1-\zeta},\quad \zeta\neq 1, \end{align*} we have \begin{equation}\label{sec3,eq4} \RE \psi (q(\zeta),m \zeta q'(\zeta))= -1+\RE\left(\frac{(m+2)\alpha-\zeta}{(\alpha-1)\zeta+\alpha}\right)-m\RE\frac{1}{1-\zeta}, \quad \zeta \neq 1. \end{equation} Since for $\alpha \in [5/8,2/3]$, $m\geq 1$, \[\RE\left(\frac{(m+2)\alpha-\zeta}{(\alpha-1)\zeta+\alpha}\right)\geq (m+2)\alpha +1,\quad |\zeta|=1,\] and \[\RE \frac{1}{1-\zeta}=\frac{1}{2},\quad |\zeta|=1,\quad \zeta \neq 1,\] we have \begin{align*} \RE \psi (q(\zeta),m \zeta q'(\zeta))&\geq -1+\frac{2(m+2)\alpha^{2}-m\alpha-1}{2\alpha-1}-\frac{m}{2}\\ &=(m+2)\alpha-\frac{m}{2}\\ &=\left(\frac{2\alpha-1}{2}\right)m+2\alpha\\ &\geq \left(\frac{2\alpha-1}{2}\right)\left(1+\frac{(2\alpha-1)-|b|\alpha}{(2\alpha-1)+|b|\alpha}\right)+2\alpha\\ &=\frac{2(|b|+4)\alpha^{2}-6\alpha+1}{(2\alpha-1)+\alpha|b|}=\frac{3}{2}, \end{align*} using \eqref{sec3,eq2}. Thus, $\psi \in \Psi_{b} (\Omega,q)$ where $\Omega=\{w:\RE w<3/2\}$. From the hypothesis and \eqref{sec3,eq3}, we obtain \[\psi(p(z),zp'(z))\in {\Omega}\quad (z \in \mathbb{D}).\] Therefore, by applying Theorem \ref{sec1,th3}, we have \[p(z)\prec q(z)\quad(z \in \mathbb{D})\] or equivalently \[ \left|\frac{zf'(z)}{f(z)}-\alpha\right| < \alpha \quad (z \in \mathbb{D}).\] In particular, the above inequality yields the following: \[ 0<\RE \frac{zf'(z)}{f(z)}<2\alpha \quad (z \in \mathbb{D}).\qedhere\] \end{proof} \begin{rem}\label{sec2,rem12} If $|b|=1/2$ then $\alpha$ given by \eqref{sec3,eq1} simplifies to $2/3$. Thus Theorem \ref{sec3,th1} reduces to \cite[Main theorem]{nunokawa} in this case. \end{rem} Another familiar implication is the following \cite[Theorem 2.6i, p.\ 68]{monograph}: \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >-\frac{1}{2}\quad (z \in \mathbb{D}) \quad \Rightarrow \quad \RE \frac{z f'(z)}{f(z)}>\frac{1}{2}\quad (z \in \mathbb{D}) \] for a function $f \in \mathcal{A}_{2}$. 
We generalize this result for a function $f \in \mathcal{A}_{2,b}$. \begin{thm}\label{sec3,th3} If $f \in \mathcal{A}_{2,b}$, then the following implication holds: \[\RE \left(\frac{z f''(z)}{f'(z)}+1\right) >-\frac{1}{2} \quad (z \in \mathbb{D})\quad \Rightarrow \quad \RE \frac{z f'(z)}{f(z)}>\alpha\quad (z \in \mathbb{D}),\] where $\alpha$ is the smallest positive root of the equation \begin{equation}\label{sec3,eq5} 2\alpha^{3}+2(1-|b|)\alpha^{2}-(2|b|+7)\alpha+3+|b|=0 \end{equation} in the interval $[1/2,2/3]$. \end{thm} \begin{proof}First note that in terms of subordination the hypothesis can be written as \[\frac{z f''(z)}{f'(z)}+1 \prec \frac{1+2z}{1-z} \quad (z\in \mathbb{D})\] which gives $|b| \leq 1/2$. Also, the function $g$ defined by \[g(\alpha):=2\alpha^{3}+2(1-|b|)\alpha^{2}-(2|b|+7)\alpha+3+|b|\] is continuous in $[1/2,2/3]$ and satisfies \begin{align*} g\left(\frac{1}{2}\right)&=\frac{1}{4}(1-2|b|) \geq 0,\quad \mbox{and}\\ g\left(\frac{2}{3}\right)&=-\frac{1}{27}(5+33|b|) \leq 0, \end{align*} as $|b| \leq 1/2$. Therefore by Intermediate Value Theorem, there exists a root of $g(\alpha)=0$ in $[1/2,2/3]$. (In fact, $\alpha \in [0.5,\sqrt{2.5}-1]\simeq[0.5,0.58]$.) Define the function $p:\mathbb{D} \rightarrow \mathbb{C}$ by \begin{equation}\label{sec3,eq6} p(z):=\frac{zf'(z)}{f(z)}-\alpha, \end{equation} where $\alpha$ is the smallest positive root of \eqref{sec3,eq5}. Since $f \in \mathcal{A}_{2,b}$ and $f$ is univalent, the function \[p(z)=(1-\alpha)+2bz^{2}+\cdots\] is analytic in $\mathbb{D}$. Thus $p \in \mathcal{H}_{2b}[1-\alpha,2]$ and as $\alpha \leq 2/3$ we readily see that \[\RE p(0)=1-\alpha >0.\] From \eqref{sec3,eq6}, we obtain \[\frac{zf'(z)}{f(z)}=p(z)+\alpha\] so that \[\frac{zf''(z)}{f'(z)}+1=p(z)+\alpha+\frac{zp'(z)}{p(z)+\alpha}=\psi(p(z),zp'(z))\quad (z \in \mathbb{D})\] where \[\psi(r,s):=r+\alpha+\frac{s}{r+\alpha}.\] We need to apply Theorem \ref{sec1,th4} to conclude that $\RE p(z) >0$. If we let \[\Omega=\{w:\RE w >-1/2\}\] then by hypothesis, we have \[\{\psi(p(z),zp'(z)):z \in \mathbb{D}\} \subset \Omega.\] To apply Theorem \ref{sec1,th4}, we need to show that $\psi \in \Psi_{2,2b}(\Omega,1-\alpha)$. The function $\psi$ is continuous in the domain $D=(\mathbb{C}\backslash \{-\alpha\})\times \mathbb{C}$, $(1-\alpha,0) \in D$ and \[\RE \psi(1-\alpha,0)=1 >0.\] We now show that the admissibility condition \eqref{sec1,eq1} is satisfied. Since \[\psi(i\rho,\sigma)=i \rho+\alpha+\frac{\sigma}{\alpha^{2}+\rho^{2}}(\alpha-i\rho)\] we have \begin{align*} \RE \psi(i\rho,\sigma)&=\alpha+\frac{\alpha \sigma}{\alpha^{2}+\rho^{2}}\\ &\leq \alpha-\frac{1}{2}\frac{\alpha}{\alpha^{2}+\rho^{2}}\left(2+\frac{2(1-\alpha)-2|b|}{2(1-\alpha)+2|b|}\right)\frac{(1-\alpha)^{2}+\rho^{2}}{1-\alpha}\\ &=\alpha-\frac{1}{2}\frac{\alpha}{1-\alpha}\frac{3(1-\alpha)+|b|}{(1-\alpha)+|b|}\frac{(1-\alpha)^{2}+\rho^{2}}{\alpha^{2}+\rho^{2}}. \end{align*} Using \eqref{sec3,eq5} and from the monotonicity of the function \[h(t)=\frac{(1-\alpha)^{2}+t}{\alpha^{2}+t},\quad t \geq0,\] it follows that \begin{align*} \RE \psi(i\rho,\sigma)&\leq \alpha-\frac{1}{2}\frac{1-\alpha}{\alpha}\frac{3(1-\alpha)+|b|}{(1-\alpha)+|b|}\\ &=\frac{(2|b|-1)\alpha^{2}-2\alpha^{3}+(6+|b|)\alpha-3-|b|}{2\alpha[(1-\alpha)+|b|]}=-\frac{1}{2}. \end{align*} whenever $\rho \in \mathbb{R}$ and \[\sigma \leq -\frac{1}{2}\left(2+\frac{2\RE p(0)-\beta}{2\RE p(0)+\beta}\right)\frac{|p(0)-i\rho|^{2}}{\RE p(0)},\quad p(0)=1-\alpha, \quad \beta=2|b|.\] Thus $\psi \in \Psi_{2,2b}(\Omega,1-\alpha)$. 
Therefore, by applying part $(i)$ of Theorem \ref{sec1,th4}, we conclude that $p$ satisfies \[\RE p(z) >0 \quad ( z \in \mathbb{D}).\] This is equivalent to \[\RE \frac{z f'(z)}{f(z)}>\alpha\quad (z \in \mathbb{D}),\] where $\alpha$ is the smallest positive root of \eqref{sec3,eq5}. \end{proof} \begin{rem}\label{sec3,rem4} If $|b|=1/2$ then \eqref{sec3,eq5} becomes \[2\alpha^{3}+\alpha^{2}-8\alpha+\frac{7}{2}=0,\] which factors as \[(2\alpha-1)\left(\alpha^{2}+\alpha-\frac{7}{2}\right)=0.\] As $\alpha \in [1/2,2/3]$, we get $\alpha=1/2$. Thus Theorem \ref{sec3,th3} reduces to Theorem 2.6i in \cite[p.\ 68]{monograph} in this case. \end{rem} \end{document}
\begin{document} \begin{abstract} We prove that the loop space of the directed suspension of a directed space is homotopy equivalent to the James construction. In particular, it does not depend on the directed structure of a given directed space. \end{abstract} \maketitle \section{Introduction} The operation of suspension of a topological space $$X\mapsto \Sigma X=S^1\wedge X$$ is far from being faithful. All information about the noncommutativity of $\pi_1(X,x_0)$ is lost. One can read only the abelianization $$\pi_1(X,x_0)_{ab}=H_2(\Sigma X)\,.$$ Similarly, the product structure in the cohomology $H^*(X)$ is lost. Multiplication in $H^{>0}(\Sigma X)$ is trivial. Recently a new kind of structure has come to the attention of topologists. One considers a subset of \emph{allowable} paths $\mathsf{P}s X\subset PX$ in the space of all paths in $X$. This kind of structure is called a directed structure, or a d-structure \cite{Gr}. Any topological space $X$ has many directed structures. The poorest one contains constant paths only; this is the discrete structure $X_\delta$. Another structure is the richest: all paths are allowed. This is the total structure $X_{tot}$. Of course, there are many intermediate structures. The purpose of our paper is to show that, in some sense, the suspension forgets information about the directed structure. Given a topological space $X$ with a chosen directed structure, there are many ways of constructing a directed structure on $\Sigma X$. There are two particularly interesting cases to consider. One is the smash product with the circle $S^1$ with the total structure $$\Sigma X=S^1_{tot}\wedge X\,,$$ the other one is the smash product with the oriented circle $\vec{S}^1$, in which only the paths with non-decreasing angle are allowed. This suspension is denoted by $$\vec{\Sigma} X=\vec{S}^1\wedge X\,.$$ In both cases, we can consider either the minimal structure, which contains only finite concatenations of directed paths coming from $S^1_{tot}\times X$ or $\vec{S}^1\times X$, or the completed one, which allows infinite concatenations. Obviously, all these directed structures depend on the directed structure in $X$. We formulate our results in a way that they can be applied to a wider variety of directed structures. We will not assume explicitly that the directed structure on $\Sigma X$ comes from a directed structure on $X$, only that it satisfies certain conditions. We will study the spaces of directed loops $\vec{\Omega}\Sigma X$ based at the distinguished point of the suspension. It is a subspace of the space of all loops $\Omega\Sigma X$. \begin{thm} \label{twierdzenieglowne} Suppose that $X$ is a connected space. Furthermore, assume that the suspension $\Sigma X$ is equipped with a directed structure that is excisive, finitely generated and translatable (see Definitions \ref{fingen}, \ref{translatable} and \ref{exci}). With the notation as above, the inclusion \begin{equation} \vec{\Omega}\Sigma X \subseteq\Omega\Sigma X\end{equation} is a weak homotopy equivalence. \end{thm} The technical assumptions of Theorem \ref{twierdzenieglowne} are satisfied for a natural class of directed spaces: \begin{thm} The conclusion of Theorem \ref{twierdzenieglowne} holds for the suspension of the realization of cubical complexes with the minimal and the completed directed structures.\end{thm} The homotopy type of the space $\Omega\Sigma X$ does not depend on the directed structure of $X$.
Therefore, the homotopy type of $\vec{\Omega}\Sigma X$ does not depend on the directed structure in $\Sigma X$. Theorem \ref{twierdzenieglowne} is a consequence of the following statements (Theorem \ref{homomain}, Corollary \ref{cormain} and Theorem \ref{spojnosc}): the natural map \begin{equation}\label{JavO} JX\to \vec{\Omega}\Sigma X \end{equation} is a weak homotopy equivalence. Here $JX$ is the free topological monoid generated by $X$, which is called the James construction. By the classical result of James \cite{Ja} (see also \cite[\S5.3]{CaMi}) the natural map \begin{equation}JX\to \Omega\Sigma X\end{equation} is also a weak homotopy equivalence. In another paper \cite{Mi}, Milnor proved by simplicial methods that $\Omega\Sigma X$ is homotopy equivalent to the topological group $GX$ freely generated by $X$. These results agree since $JX$ is connected, and hence the group completion does not change the homotopy type. If $X$ is not connected, then one is tempted to prove that (\ref{JavO}) is a homotopy equivalence. We skip that proof due to point-set topology problems. We rely only on the homological argument, which works fine only for connected spaces. We note that in general $\vec{\Omega}\Sigma X\to \Omega\Sigma X$ is not a homotopy equivalence. For $X=S^0$ we have $$\vec{\Omega}\vec{\Sigma} X=\vec{\Omega}\vec{S}^1\sim JS^0=\mathbb{N},$$ $$\Omega\vec{\Sigma} X=\Omega\vec{S}^1\sim GS^0=\mathbb{Z}$$ and the inclusion is homotopic to the inclusion of the natural numbers into the integers $\mathbb{N}\hookrightarrow \mathbb{Z}$. We construct an explicit homotopy inverse of (\ref{JavO}). We note that the original proof of James and other proofs present in the literature for the classical case do not exhibit an inverse map. Our construction of the inverse map is specific to directed spaces. In the course of the proof we meet several technical problems. Most of them are caused by the fact that the suspension $\vec{\Sigma} X$ might have rather bad structure in a neighbourhood of the distinguished base point. For any space $X$, the suspension admits the effect of a ``peacock feather eye'', i.e.\ there are directed loops which pass infinitely many times through the distinguished point. \obrazek{rys2}{4cm} \begin{center}$[-1,1]\times X$\hskip180pt $\Sigma X$\end{center} \noindent To avoid this problem we first consider the space of paths which are finite concatenations of paths coming from $[-1,1]\times X$, i.e.~we consider the minimal directed structure and later pass to the closure. The paper is organized as follows: after introducing the notions in \S\ref{directed:spaces} and \S\ref{section:cubical}, we discuss directed structures in the smash product $X\wedge Y$. In \S\ref{smash} we show, under suitable conditions, that if $X$ and $Y$ are connected directed spaces, then $\vec{\Omega}(X\wedge Y)$ is connected. Further, in \S\ref{suspension} we describe a convenient model of the suspension $\Sigma X$. The James construction is recalled in \S\ref{James}. We modify the classical homological proof of James in the context of directed spaces. That is done in \S\ref{homologicznie}. In the next section, \S\ref{homotopijnie}, we construct a homotopy inverse of the James map (\ref{JavO}) explicitly. We note that such a construction was not done in the classical context and might be much harder for non-directed paths.
Finally, in \S)\!]ef{straightening} we show how to deform in a canonical way a path to another one which corresponds to a configuration of points in $X$, that is to a path which lies in the image of the James map. {\bf sec}tion{Directed spaces} [\!(abel{directed:spaces} Let $(X,x_0)$ be a pointed Hausdorff space, which is locally contractible. The following definition is due to Grandis \cite{Gr} \begin{df} \emph{A d-structure} on a topological space $X$ is a family of paths \[ \mathcal{D}\subseteq P(X):=\map([0,1],X), \] called \emph{d-paths}, that satisfies the following conditions: \begin{itemize} \item{Every constant path a d-path.} \item{The concatenation of d-paths is a d-path.} \item{Non-decreasing reparametrization of a d-path is a d-path.} \end{itemize} \emph{A d-space}, or \emph{a directed space}, is a space equipped with a d-structure. If it is clear what d-structure is considered on a space $X$, it is denoted by $\vec{P}(X)$. \end{df} A map $f:X\to Y$, where $X$, $Y$ are d-spaces, is \emph{a d-map} if it preserves a d-structure, i.e., if $f(\alpha)\in\vec{P}(Y)$ whenever $\alpha\in\vec{P}(X)$. The d-spaces and d-maps form a category, which is complete and cocomplete \cite{Gr}. For every family $\mathcal{F}\subseteq P(X)$ there exists the unique minimal d-structure $\overline{\mathcal{F}}\supseteq\mathcal{F}$ on $X$ generated by $\mathcal{F}$. It contains all constant paths and all paths having the form \[ (\alpha_1*\dots*\alpha_n)\circ f, \] where $\alpha_i\in\mathcal{F}$ for $i\in\{1,\dots,n\}$ and $f:[0,1]\to[0,1]$ is a continuous non-decreasing map. If $X$ is a d-space, and $A,B\subseteq X$, then we denote \begin{equation} \vec{P}(X)_A^B:=\{\alpha\in \vec{P}(X):\; \alpha(0)\in A,\; \alpha(1)\in B\}. \end{equation} If $(X,x_0)$ is a pointed d-space, we define \emph{the space of directed loops on $X$} as \begin{equation} \vec{\Omega}(X):=\vec{P}(X)_{x_0}^{x_0}. \end{equation} A path $\alpha:[a,b]\to X$, for $(a,b)$ not necessarily equal to $(0,1)$, will be called a d-path if its reparametrization $[0,1]\ni t\mapsto \alpha(a+t(b-a))\in X$ is a d-path. Here we list examples of d-spaces that play an important role in this paper: \begin{itemize} \item{Any topological space has two extreme d-structures: the total $X_{tot}$ one which contains all paths, and the discrete one $X_\delta$ which contains only constant paths.} \item{\emph{A directed Euclidean space $\vec{\mathbb{R}}^n$}, where d-paths are all paths having non-decreasing coordinates.} \item{Every subspace $Y\subseteq X$ of a d-space is a d-space, with a d-structure $\vec{P}(X)\cap P(Y)$. Particular cases are $\vec{I}=[0,1]\subseteq \vec{\mathbb{R}}$ (\emph{the directed interval}) and $\vec{I}^n\subseteq \vec{\mathbb{R}}^n$ (\emph{the directed $n$--cube}).} \item{For every closed subset $Y\subseteq X$ of a d-space, the quotient $X/Y$ is also a d-space, with a d-structure generated by paths having the form $p\circ\alpha$, where $\alpha\in\vec{P}(X)$ and $p:X\to X/Y$ is the projection. We obtain the minimal quotient d-structure $P_{\text{min}}(X/Y)$.} \item{The completed quotient d-structure on $X/Y$, denoted by $\vec{P}_c(X/Y)$, that contains all paths $\alpha$ for which there exists a (non-necessarily finite) cover of $[0,1]$ by closed intervals $E_a$, $a\in A$, such that $\alpha|_{E_a}$ is a projection of a d-path on $X$. 
The completed structure is the completion of the minimal d-structure in the sense of \cite{Z2}.} \item{An important special case is \emph{the directed circle} \[ \vec{S}^1=\vec{I}/\{0,1\}\cong \vec{\mathbb{R}}/\big((-\infty,-1]\cup[1,+\infty)\big).\] In this case the minimal and the completed d-structure are equal. } \end{itemize} Note that the minimal structure in $X/Y$ is not always natural. For example, take the interval $X=[-1,1]$ with the total structure. Then $X/\{-1,1\}\simeq S^1$ as topological spaces. But the directed paths $\alpha:[a,b]\to X/\{-1,1\}$ of the minimal quotient structure have the property that $\alpha^{-1}(*)$ has only finitely many connected components. \begin{df} Let $X$ be a space and let $\vec{P}(X)$ be a d-structure on $X$. \emph{The completion} of $\vec{P}(X)$ is a d-structure $\vec{P}_c(X)$ on $X$ such that $\alpha\in \vec{P}_c(X)$ if and only if there exists a cover \[ \bigcup_{i\in J} [a_i,b_i]=[0,1] \] such that $\alpha|_{[a_i,.b_i]}\in \vec{P}(X)$ for all $i\in J$. \end{df} {\bf sec}tion{Cubical complexes} [\!(abel{section:cubical} The completion of the minimal structure on a quotient space will be called \emph{the completed structure}. Clearly, $\vec{P}_c(X_{tot}/Y_{tot})\cong P_{tot}(X/Y)$. Semi-cubical complexes \cite{FGR} form a family of d-spaces, which is especially important because of their applications in Computer Science. In this paper we consider cubical sets: a class which contains semi-cubical sets and is closed with respect to taking suspensions. \begin{df}[{\cite{K}}] \emph{A cubical set} $K$ is a sequence of disjoint sets $(K_n)$, $n=0,1,\dots$ equipped with \begin{itemize} \item{face maps $d^\varepsilon_{i}:K_n\to K_{n-1}$, for $n>0$, $i\in\{1,\dots,n\}$, $\varepsilon\in\{0,1\}$,} \item{degeneracy maps $s_i:K_n\to K_{n+1}$, for $n\geq 0$, $i=1,\dots,n+1$,} \end{itemize} which satisfy the following cubical relations: \begin{itemize} \item{$d^\varepsilon_i d^\eta_j=d^\eta_{j-1} d^\varepsilon_i$ for $i<j$,} \item{$s_is_j=s_{j-1} s_i$ for $i<j$,} \item{$s_jd^\varepsilon_i=\begin{cases} d^\varepsilon_i s_{j-1} &\text{for $i<j$,}\\ \text{identity} & \text{for $i=j$,}\\ d^\varepsilon_{i-1}s_j & \text{for $i>j$.} \end{cases}$} \end{itemize} \end{df} Every cubical set has a geometric realization which is a d-space. For $\varepsilon\in\{0,1\}$ and $i\in\{1,\dots,n\}$, define d-maps \begin{align*} \delta^\varepsilon_i: \vec{I}^{n-1} \ni (t_1,\dots,t_{n-1})& \mapsto (t_1,\dots,t_{i-1},\varepsilon,t_{i+1},\dots,t_n)\in \vec{I}^n\\ \sigma_i:\vec{I}^{n+1}\ni (t_1,\dots,t_{n+1})&\mapsto (t_1,\dots,t_{i-1},t_{i+1},\dots,t_{n+1})\in\vec{I^n}. \end{align*} \emph{The geometric realization} of a cubical set $K$ is the quotient space \begin{equation} |K|=\coprod_{n\geq 0} K_n \times \vec{I}^n/\sim, \end{equation} where $\sim$ is generated by $(c,\delta^\varepsilon_i(\mathbf{t}))\sim (d^\varepsilon_i(c),\mathbf{t})$ and $(c,\sigma_j(\mathbf{t}))\sim (s_j(c),\mathbf{t})$. There are (at least) two natural d-structures on $|K|$: \begin{itemize} \item{The minimal quotient d-structure $\vec{P}_\text{min}(|K|)$, namely, a continues path $\alpha$ is a d-path if there exists a sequence of numbers $0=t_0<t_1<\dots<t_k=1$, cubes $c_i\in K_{d(i)}$ and d-paths $\beta_i:[t_{i-1},t_i]\to \vec{I}^n$ such that $\alpha(t)=(c_i,\beta_i(t))$ for $t\in [t_{i-1},t_i]$. This is a minimal d-structure such that the inclusions of cubes induce d-maps. 
} \item{The completed d-structure $\vec{P}_{c}(|K|)$, namely, a path $\alpha$ is a d-path if there exists a (non-necessarily finite) cover of $[0,1]$ by closed intervals $E_a$, $a\in A$, such that $\alpha|_{E_a}$ has the form $(c,\beta)$, for $c\in K_n$ and a d-path $\beta:E_a\to \vec{I}^n$. This coincides with the completed d-structure on a quotient space.} \end{itemize} The geometric realization of $K$ with the minimal quotient (resp. the completed) d-structure completed d-structure will be denoted by $|K|_{\text{min}}$ (resp.\ $|K|_c$). We will need the following technical result. \begin{prp}[\!(abel{cubicalNeighborhood} Let $K$ be a cubical set, $L$ its subset, and $p\in K_0$. Then there exists a subset $A\subseteq |K|$ such that the interior of $A$ contains $|L|$ and the maps \[ \vec{P}(|K|)_p^{|L|}\to \vec{P}(|K|)_p^A, \] induced by the inclusion is a homotopy equivalence, where $\vec{P}$ stands for either $\vec{P}_\text{min}$ or $\vec{P}_c$. \end{prp} \begin{proof} The same argument is valid in both cases. Let $r:\vec{I}\to\vec{I}$ be a d-map given by the formula \[ r(s)=\begin{cases} 0 & \text{for $s\in[0,\tfrac{1}{3}]$,}\\ 3(s-\tfrac{1}{2})+\tfrac{1}{2} & \text{for $s\in[\tfrac{1}{3},\tfrac{2}{3}]$,}\\ 1 & \text{for $s\in[\tfrac{2}{3},1]$,} \end{cases} \] and let $r^n:\vec{I}^n\to\vec{I}^n$ denote its $n$--th product. Furthermore, let $r^n_t=(1-t)id_{\vec{I}^n}+tr^n$, $t\in[0,1]$, be a homotopy between the identity map on $\vec{I}^n$ and $r^n$. For every $t$, the family $r^n_t$, $n\geq 0$, commutes with the face maps $\delta^\varepsilon_i$ and the degeneracy maps $\sigma_i$; as a consequence, the formula \[ R_t:|K|\ni (c,(t_1,\dots,t_n))\mapsto (c,r^n(t_1,\dots,t_n))\in |K| \] defines a homotopy between the identity map on $|K|$ and $R_1:|K|\to |K|$. Notice that, for every $n\geq 0$, the interior of $(r^n)^{-1}(\partial \vec{I}^n)$ contains $\partial \vec{I}^n$. Therefore, the interior of $A=(R_1)^{-1}(|L|)$ contains $|L|$. Finally, we need to show that the inclusion $\vec{P}(|K|)_p^{|L|}\subseteq \vec{P}(|K|)_p^{A}$ is a homotopy equivalence. Indeed, $R_1$ induces the map $(R_1)^*:\vec{P}(|K|)_p^{A}\to \vec{P}(|L|)_p^{|L|}$ which is a homotopy inverse of the inclusion, with suitable homotopies induced by $R_t$. \end{proof} {\bf sec}tion{Connectedness of $\vec{\Omega}(X\wedge Y)$}[\!(abel{smash} Here we prove a general fact about directed structures in the smash product \[ X\wedge Y=(X\times Y)/(X\vee Y) \] of pointed directed spaces. We will apply it for $\Sigma X=S^1_{tot}\wedge X$ and $\vec{\Sigma} X=\vec{S}^1\wedge X$. The minimal directed structure $\vec{P}_{\text{min}}(X\wedge Y)\subset P(X\wedge Y)$ consists of the paths which are concatenations of a finite number of d-paths coming from $X\times Y$. \begin{df}[\!(abel{fingen} We say that a directed structure $\vec{P}(X\wedge Y)$ on the smash product of pointed directed spaces $X,Y$ is \emph{finitely generated} if it contains the minimal structure and, for any pair of points $a,b\in X\wedge Y$, the space $\vec{P}_{\text{min}}(X\wedge Y)_a^b$ is dense in $\vec{P}(X\wedge Y)_a^b$. In particular, $\vec{\Omega}_{\text{min}}(X\wedge Y)$ is dense in $\vec{\Omega}(X\wedge Y)$.\end{df} \begin{prp}[\!(abel{p:CompletedDStructureIsFinitelyGenerated} The completed d-structure $\vec{P}_c(X\wedge Y)$ is finitely generated. \end{prp} \begin{proof} Denote $Z=X\times Y$, $V=X\vee Y$ and fix $\alpha\in \vec{P}_c(Z/V)$. 
Let $\{U_j\}_{j\in J}$ be a decreasing local basis of $V/Z$ at the point $Z$, i.e., such that $U_j\supseteq U_{j'}$ for $j<j'$. Fix $j\in J$. Consider a cover of $[0,1]$ with maximal connected subsets of $\alpha^{-1}(Z\setminus V)$ and $\alpha^{-1}(U_j)$. By the compactness of $[0,1]$, there exists a finite number of pairwise disjoint intervals $E_i=[a_i,b_i]$, $i=1,\dots,n(j)$ such that $\alpha(a_i)=\alpha(b_i)=V$ and $\alpha(t)\in U_j$ whenever $t\not\in \bigcup E_i$. Define a path \[ \alpha_j(t)=\begin{cases} \alpha(t) & \text{for $t\in \bigcup E_i$}\\ V & \text{otherwise.} \end{cases} \] Now $(\alpha_j)_{j\in J}$ is a sequence of paths of $\vec{P}_\text{min}(Z/V)$ that converges to $\alpha$. We have used here generalized sequences (or nets) and our argument works in the general topological setup. \end{proof} \begin{thm}[\!(abel{spojnosc} Suppose that $X$ and $Y$ are path connected. We assume that the directed structure in $X\wedge Y$ is finitely generated. Then the space of directed loops $\vec{\Omega}(X\wedge Y)$ is connected. \end{thm} \begin{proof} It is enough to show that $\vec{\Omega}_{\text{min}}(X\wedge Y)$ is connected since the closure of a connected set is connected. Every path in $\vec{\Omega}_{\text{min}}(X\wedge Y)$ can be lifted to $X\times Y$ uniquely away from the base point. Therefore, every path is a concatenation of a finite number of images of paths \[\alpha:[0,a]\to X\times Y\] such that $$\alpha(0),\,\alpha(a)\in X\vee Y\,.$$ We will deform such paths to constant paths. Let $x_0$ and $y_0$ be the distinguished points in $X$ and $Y$. There are four possibilities: \begin{enumerate} \item ~ $\alpha(0)=(x_0,y)$ and $\alpha(a)=(x,y_0)$, \item ~ $\alpha(0)=(x_0,y)$ and $\alpha(a)=(x_0,y')$, \item ~ $\alpha(0)=(x,y_0)$ and $\alpha(a)=(x_0,y)$, \item ~ $\alpha(0)=(x,y_0)$ and $\alpha(a)=(x',y_0)$. \end{enumerate} Let $(\alpha_X,\alpha_Y)$ be the coordinates of $\alpha$. The path $\alpha$ is homotopic to the concatenation \begin{equation}[\!(abel{rozklad}(\alpha_X(0),\alpha_Y)*(\alpha_X,\alpha_Y(a))\,.\end{equation} Here $ \alpha_X(0)$ and $\alpha_Y(a)$ denote constant paths of the lengths $a$. In the first case ($\alpha_X(0)=x_0$, $\alpha_Y(a)=y_0$) such concatenation is projected to a constant path. In the second case ($\alpha_X(0)=x_0$, $\alpha_X(a)=x_0$), we note that the first component of ()\!]ef{rozklad}) is projected to a constant path. Since the $Y$ is path connected, the second component can be deformed to the path of the form $(\alpha_X,y_0)$. Again, this path is projected to the constant path. The third and fourth cases are analogous.\end{proof} {\bf sec}tion{The directed suspension} [\!(abel{suspension} Let $X$ be a space with a distinguished point $x_0$. We will describe the suspension $\Sigma{X}=S^1\wedge X$ as a quotient of $\mathbb{R}\times X$. First, we present the circle $S^1$ as the quotient space \[ S^1=\mathbb{R}/\big((-\infty,-1]\cup [1,\infty)\big)\, . \] Then $\Sigma X$ is identified with the quotient of $\mathbb{R}\times X$ by the subspace \[ \big((-\infty,-1]\cup [1,\infty)\big)\times X\cup \vec{\R}\times \{x_0\}\,. \] The point of $\Sigma X$ which is the image of $(t,x)\in \mathbb{R}\times X$ will be denoted by $[\!( t,x )\!]$. The distinguished point of $\Sigma X$ is \[ *=[\!( t,x_0)\!]=[\!( s,x)\!]\quad\text{for }|s|\geq 1\,. \] The coordinates in $\Sigma X$ are defined only away from the distinguished point. Let $$p:\Sigma X\setminus\{*\}\to X$$ and $$h:\Sigma X\setminus\{*\}\to (-1,1)$$ be the projections onto the respective coordinates. 
For a given d-structure $\vec{P}(X)$ on $X$, there is a variety of ``natural'' d-structures on $\Sigma{X}$ which can be defined. They are: \begin{itemize} \item{The directed suspension structure $\vec{P}_{\text{min}}(\vec{S}^1\wedge X)$, in which a path is directed if and only if it is a concatenation of a finite number of paths which are the images of directed paths in $\vec{\R}\times X$. It is the minimal structure such that the quotient map $\vec{S}^1\times X\to \Sigma X$ is a d-map.} \item{The completed directed suspension structure $\vec{P}_c(\vec{S}^1\wedge X)$, which allows ``infinite'' concatenations of d-paths.} \item{The completed structure $\vec{P}_c({S}^1_{\mathrm{tot}}\wedge X)$, which is the total d-structure on $\Sigma X$. } \end{itemize} We will consider all these cases simultaneously. We will even work in greater generality and assume only that the d-structure satisfies the following definition: \begin{df}\label{translatable} We say that a directed structure on $\Sigma X$ is \emph{translatable} if it contains the paths \[ [-1,1]\ni t\mapsto [\!( t,x )\!]\in \Sigma X \] for all $x\in X$, and the maps \[ \Sigma X\ni [\!( t,x )\!] \mapsto [\!( \lambda t+\mu, x )\!]\in \Sigma X \] preserve the directed structure for all $\lambda,\mu\in\mathbb{R}$, $\lambda>0$, $-\lambda+\mu\leq-1$ and $\lambda+\mu\geq 1$. \end{df} All d-structures listed above are translatable. Notice that we do not assume that a translatable d-structure is induced, in any way, from a d-structure on $X$. Without specifying any particular directed structure in the suspension, we prove the results in Section \ref{homologicznie} for $\vec{P}_{\text{min}}(\vec{S}^1\wedge X)$, $\vec{P}_c(\vec{S}^1\wedge X)$ and $\vec{P}_c({S}^1_{\mathrm{tot}}\wedge X)=P(\Sigma X)$ simultaneously. We will assume that the directed structure is finitely generated (see Definition \ref{fingen}); this is true in the three listed cases thanks to Proposition \ref{p:CompletedDStructureIsFinitelyGenerated}. We will also need another technical but natural condition, which excludes point-set pathology (see Definition \ref{exci}). In Section \ref{homotopijnie} we assume that all directed paths are non-decreasing along the $\mathbb{R}$ coordinate away from the base point. This assumption allows us to construct a map from the loop space to the James construction. \section{James construction} \label{James} \subsection{One-sided Moore paths} In this section we will use one-sided Moore paths, i.e., we do not assume that the paths are parametrized by the unit interval, but by an arbitrary interval $[0,e]$ for $e\geq 0$. The concatenation of such paths is strictly associative (when it is defined). \begin{df}[One-sided Moore paths]\label{Moore-one} Let $Y$ be a directed space, $y_0\in Y$. By $\mathsf{P}s(Y,y_0)$ we denote the space of directed Moore paths $\alpha$ such that $\alpha(0)=y_0$. Formally, \begin{multline*}\mathsf{P}s(Y,y_0)=\{(\alpha,t^\infty_\alpha)\in \map(\mathbb{R}_+,Y)\times \mathbb{R}_+\,:\\ \,\alpha|_{[0,t^\infty_\alpha]}\text{ is directed},\;\alpha(0)=y_0\,,\;\alpha(s)=\alpha(t^\infty_\alpha)\;\text{for }s>t^\infty_\alpha\,\}\,.\end{multline*} An element of $\mathsf{P}s(Y,y_0)$ will be regarded as a map $$\alpha:[0,t^\infty_\alpha]\to Y\,.$$ \end{df} \begin{prp} The space $\mathsf{P}s(Y,y_0)$ is contractible.\end{prp} \begin{proof} The contracting homotopy is given by the formula $H_s(\alpha)=\alpha|_{[0,s\,t_\infty^\alpha]}$ for $s\in [0,1]$.
\end{proof} We will use the following notation: $$\en:\mathsf{P}s(Y,y_0)\to Y$$ is the evaluation at the end of the path $\alpha(t^\alpha_\infty)$. \begin{rem} If $Y$ has a total directed structure, i.e., every path is directed, then the map "$\en$" is a Serre fibration with the fiber $\Omega(Y,y_0)$. It is not true in general; it might happen that the set $\en^{-1}(y)$ (paths from $y_0$ to $y$) is empty. \end{rem} \subsection{Paths in the suspension}[\!(abel{stozkiL} Fix a space $X$ and fix a translatable d-structure on $\Sigma X$. Denote by $C_+$ (resp.\ $C_-$) the upper (resp. the lower) sub-cone of $\Sigma X$, i.e., the image of $(-\infty,0]\times X$ (resp.\ $[0,\infty)\times X$) in $\Sigma X$. Let \begin{align*}&L_-=\en^{-1}(C_-)=\{\alpha\in \mathsf{P}s(\Sigma X,*)\;:\;\alpha(t_\infty^\alpha)\in C_-\,\}\,,\\ &L_+=\en^{-1}(C_+)=\{\alpha\in \mathsf{P}s(\Sigma X,*)\;:\;\alpha(t_\infty^\alpha)\in C_+\,\}\,,\\ &L_0=L_+\cap L_-=\en^{-1}(\{0\}\times X)=\{\alpha\in \mathsf{P}s(\Sigma X,*)\;:\;\alpha(t_\infty^\alpha)\in X\times\{0\}\,\}\,.\end{align*} For $x\in X$, $a,b\in\mathbb{R}$, $a<b$ let $$\beta_x^{a,b}:[0,b-a]\to \Sigma X$$ be the path given by $$\beta_x^{a,b}(t)=[\!( a+t,x)\!] \,.$$ Note that, according to the Definition )\!]ef{translatable}, such paths are directed. In general, we will use the letter $\beta$ to denote various versions of paths, which have $x$ fixed. Let us write $\beta(x)$ for the loop $\beta_x^{-1,1}$ and let us consider the map $$\beta:X\to\vec{\Omega}\Sigma X\,,$$ $$x\mapsto \beta(x)=\beta_x^{-1,1}\,.$$ \subsection{A map from $J(X)$ to $\vec{\Omega}\Sigma X$} Let us recall some details from \cite{CaMi}. There a convenient way of constructing a map $J(X)\to \vec{\Omega}\Sigma X$ is presented. The map $\beta:X\to\vec{\Omega}\Sigma X$ does not preserve the base point, since the path $\beta(x_0)$ has length 2. Let us modify the space $X$. Let \[ X'=X\vee [0,1]=X\sqcup [0,1]/{x_0\sim 1} \] and extend the map $\beta$ \[ \beta'(x)=\begin{cases}\beta(x)&\text{if }x\in X\\ \text{constant path of length }2t&\text{if }x=t\in[0,1]\end{cases} \] Let $0\in X'$ be the distinguished point in $X'$. The map $\beta'$ preserves the distinguished points; therefore, it induces a map of monoids: $$J(\beta'):J(X')\to \vec{\Omega}\Sigma X\,.$$ Also we have a map $$J(r):J(X')\to J(X)$$ induced by the retraction $r:X'\to X$. This map is a homotopy equivalence. Therefore, we obtain a map defined up to homotopy $$J(\beta')\circ J(r)^{-1}:J(X)\to \vec{\Omega}\Sigma X\,.$$ Note that the James construction $J(X')$ contains the disjoint union $$\bigsqcup_{n>0} X^n$$ and the restriction of the map $J(\beta')$ to $X^n$ is the concatenation $$\beta(x_1)*\beta(x_2)*\dots*\beta(x_n)\,.$$ Later in \S)\!]ef{homotopijnie}, we will construct a map $${\bf sec}:\vec{\Omega}\Sigma X\to J(X)$$ such that $${\bf sec}\circ J(\beta')(x_1,x_2,\dots,x_n)=[(x_1,x_2,\dots,x_n)]\in J(X)\,.$$ It will follow that ${\bf sec}$ is a homotopy inverse of $J(\beta')\circ J(r)^{-1}$. {\bf sec}tion{Main homological result} [\!(abel{homologicznie} \subsection{The statement of the homological theorem} In this section, we consider an arbitrary directed structure in $\Sigma X$ which is translatable ()\!]ef{translatable}). Consider the space of directed loops \[ \vec{\Omega}\Sigma X=\en^{-1}(*)\,. \] The homology $H_*(\vec{\Omega}\Sigma X)$ with coefficients in a field has an algebra structure since $\vec{\Omega}\Sigma X$ is a topological monoid. 
All homology groups appearing in this section will have coefficients in a field $\mathbb{F}$. The main result is the following Theorem )\!]ef{homomain} and its corollaries. We will use singular homology, which works in a general topological context but we need some general condition allowing to apply Mayer-Vietoris sequence (see \cite[Ch 4.6, p.188]{Sp} for the definition of excisive couple). This technical issue forces a condition which we impose in the directed structure of $\Sigma X$. \begin{df}[\!(abel{exci} A d-structure on $\Sigma X$ is \emph{excisive} if the pair $(L_+,L_-)$ (defined in \S)\!]ef{stozkiL}) is an excisive couple in $\vec{P}(\Sigma{X})_*$ in the sense of \cite[Ch 4.6, p.188]{Sp}. \end{df} \begin{thm}[\!(abel{homomain} Let $X$ be a space, and let $\vec{P}(\Sigma X)$ be a directed structure on $\Sigma X$ that is excisive and translatable. Assume that $\vec{\Omega}\Sigma X$ is connected. Let $V=H_{>0}(X)=\widetilde H_*(X)$. Then the map $$\beta_*:V\to H_*(\vec{\Omega}\Sigma X)$$ induces an isomorphism of algebras $$T(V)\to H_*(\vec{\Omega}\Sigma X)\,,$$ where $T(V)$ is the free algebra generated by the vector space $V$. \end{thm} \begin{cor} [\!(abel{cormain} Under assumptions as above, the extension of the inclusion \[ \beta:X\hookrightarrow\vec{\Omega}\Sigma X \] to the James construction \[ J(\beta'):J(X')\hookrightarrow\vec{\Omega}\Sigma X \] is a weak homotopy equivalence. In particular, the homotopy type of $\vec{\Omega}\Sigma X$ does not depend on the directed structure on $\Sigma X$. \end{cor} It follows that: \begin{cor}Under the assumptions as above, the inclusion $$\vec{\Omega}\Sigma X\to \Omega\Sigma X$$ is a homotopy equivalence.\end{cor} Finally, we will show that the assumptions on the d-structure on $\Sigma{X}$ are satisfied for directed suspensions of cubical complexes. \begin{prp} Assume that $X$ is a geometric realization of a cubical set $B$, and the directed structure is the directed suspension structure (minimal or completed). Then $\vec{P}(\vec{\Sigma} X)$ is both translatable and excisive. If $X$ is connected, then $\vec{\Omega} \vec{\Sigma} X$ is connected. \end{prp} \begin{proof} As before, the translatability is clear. Let $E_-$, $E_+$ be $1$--dimensional cubes with the single non-degenerate vertices $i_-$ and $i_+$ respectively, and let $E=E_{-}\cup_{d^1_1(i_-)=d^0_1(i_+)} E_+$. There is a d-homeomorphism of triples $(|E|,|E_-|,|E_+|)\cong ([-1,1],[-1,0],[0,1])$. As a consequence, the geometric realization of \[ K=E\otimes B/ (E\otimes\{b_0\} \cup \{v_-,v_+\}\otimes B) \] is d-homeomorphic to $\Sigma{X}$. Let $K_-$ and $K_+$ be the images of $E_-\otimes B$ and $E_+\otimes B$. There is a d-homeomorphism of triples \[ (\Sigma X,C_+,C_-)\cong (|K|,|K_+|,|K_-|). \] By )\!]ef{cubicalNeighborhood}, there are subsets $A_-,A_+\subseteq |K|$ that contain $|K_-|$ and $|K_+|$ in their interiors, respectively, and \[ L_-\cong \en^{-1}(|K_-|)\subseteq \en^{-1}(A_-),\quad L_+\cong \en^{-1}(|K_+|)\subseteq \en^{-1}(A_+) \] are homotopy equivalences. The pair $(\en^{-1}(A_-),\en^{-1}(A_+))$ is excisive; this implies that $(L_-,L_+)$ is also excisive. The last essertion follows from Theorem )\!]ef{spojnosc}. \end{proof} \subsection{Basic homotopy equivalences} Fix a d-structure on $\Sigma X$ that is translatable. Let $$\phi^-:\Sigma X\to \Sigma X$$ be the map induced defined by $$[\!( s,x)\!]\mapsto [\!( 2s-1,x)\!]\,.$$ This map shrinks $C_-$ to the distinguished point. 
The map $\phi^-$ is d-homotopic (i.e., homotopic via d-maps) to the identity: the homotopy is defined for $t\in[0,1]$: \begin{equation}[\!(abel{phit}\phi^-_t([\!( s,x)\!])=[\!((t+1)s-t,x)\!]\,.\end{equation} The map $\phi^-_t$ shrinks the image of $(-\infty,\tfrac{t-1}{t+1}]\times X$ to the distinguished point. The map $\phi^-$ induces a map of the space of the directed paths. For $\alpha\in L_-$ we have $\phi^-\circ\alpha\in \vec{\Omega}\Sigma X$. Therefore \begin{prp}[\!(abel{prop1}The inclusion $i_-:\vec{\Omega}\Sigma X\to L_-$ is a homotopy equivalence. Its inverse is given by $$\mathsf{P}hi^-(\alpha)=\phi^-\circ \alpha\,.\qed$$\end{prp} Similarly we have a map $$\phi^+=\phi^+_1:\Sigma X\to \Sigma X$$ $$\phi^+([\!( s,x)\!])= [\!( 2s+1,x)\!]\,,$$ which shrinks $C_+$ to the distinguished point. The homotopy between $\phi^+$ and the identity is given by the formula $$\phi^+_t([\!( s,x)\!])= [\!((t+1)s+t,x)\!]\,.$$ The map $\phi^+_t$ shrinks the image of $[\tfrac{1-t}{t+1},\infty)\times X$ to the distinguished point. We obtain \begin{prp}[\!(abel{prop2}The inclusion $i_+:\vec{\Omega}\Sigma X\to L_+$ is a homotopy equivalence. The inverse is given by $$\mathsf{P}hi^+(\alpha)=\phi^+\circ \alpha\,.$$\end{prp} \begin{prp}[\!(abel{prop3}The map $F:X\times \vec{\Omega}\Sigma X\to L_0$ given by $$F(x,\alpha)=\alpha*\beta_x^{-1,0}\,.$$ is a homotopy equivalence. The inverse is given by $$G(\alpha)=(\en(\alpha),\phi^-\circ \alpha)\in (\{0\}\times X)\times \vec{\Omega}\Sigma X\simeq X\times \vec{\Omega}\Sigma X \,.$$ \end{prp} \begin{proof} The composition $$GF((x,\alpha))=G(\alpha*\beta_x^{0,1})= (x,\phi^-\circ(\alpha*\beta_x^{-1,0}))$$ is joined with the identity by the homotopy $$H_t((x,\alpha))=\big(x,\phi^-_t\circ(\alpha*\beta_ x^{-1,(t-1)/(1+t)})\big)\,,$$ $$H_0(x,\alpha)=(x,\alpha)\,,\quad H_1(x,\alpha)=GF(x,\alpha)\,,$$ where $\phi^-_t$ is given by ()\!]ef{phit}). The opposite composition is equal to $$FG(\alpha)=F(p(\alpha),\phi^-\circ \alpha)=(\phi^-\circ\alpha)*\beta_x^{-1,0}\,.$$ The homotopy to the identity is given by $$H'_t(\alpha)=(\phi^-_t\circ\alpha)*\beta_x^{(t-1)/(1+t),0}\,,$$ $$H'_0(\alpha)=\alpha\,,\quad H'_1(\alpha)=FG(\alpha)\,.\qedhere$$ \end{proof} \subsection{Proof of the main homological theorem} Proof of Theorem )\!]ef{homomain} is a modification of the proof in the classical situation \cite{Ja} and \cite[\S5.3]{CaMi}. \begin{proof} We consider the decomposition $$\mathsf{P}s(\vec{\Omega}\Sigma X,*) =L_-\cup_{L_0}L_+\,.$$ We may apply Mayer-Vietoris exact sequence since we know, by assumption, that $(L_-,L_+)$ is an excisive couple. Let $j_\pm:L_\pm\to\mathsf{P}s(\Sigma X,*)$ be the inclusion. Since the space $\mathsf{P}s(\Sigma X,*)$ is contractible, the Mayer-Vietoris exact sequence for the reduced homology gives us an isomorphism $$\widetilde H_*(L_0){\xrightarrow{\;(j_-,j_+)\;}} \widetilde H_*(L_-)\oplus \widetilde H_*(L_+)\,.$$ Due to Propositions )\!]ef{prop1}-)\!]ef{prop3}, we obtain an isomorphism $$\widetilde H_*(X\times \vec{\Omega}\Sigma X)\stackrel\simeq[\!(ongrightarrow \widetilde H_*(\vec{\Omega}\Sigma X)\oplus \widetilde H_*(\vec{\Omega}\Sigma X)\,.$$ The key point is to analyze the isomorphism map in terms of the algebra structure of $H_*(\vec{\Omega}\Sigma X)$. 
We have a commutative diagram \begin{equation}\label{dia} \begin{diagram} \node{\widetilde H_*(L_0)} \arrow{e,t}{(j_-,j_+)_*} \node{\widetilde H_*(L_-)\oplus \widetilde H_*(L_+)} \arrow{s,r}{(\mathsf{P}hi_-,\mathsf{P}hi_+)_*} \\ \node{\widetilde H_*(X\times \vec{\Omega}\Sigma X)} \arrow{e,t}{\simeq} \arrow{n,l}{F_*} \node{\widetilde H_*(\vec{\Omega}\Sigma X)\oplus \widetilde H_*(\vec{\Omega}\Sigma X)} \end{diagram} \end{equation} The map $$\mathsf{P}hi_-\circ j_-\circ F:X\times \vec{\Omega}\Sigma X\to \vec{\Omega}\Sigma X$$ $$\alpha\mapsto \phi_-\circ(\alpha*\beta_x^{-1,0})=(\phi_-\circ\alpha)*(\phi_-\circ \beta_x^{-1,0})=(\phi_-\circ\alpha)*const$$ is homotopic to the projection on $\vec{\Omega}\Sigma X$. On the other hand, $$\mathsf{P}hi_+\circ j_+\circ F:X\times \vec{\Omega}\Sigma X\to \vec{\Omega}\Sigma X$$ $$\alpha\mapsto \phi_+\circ(\alpha*\beta_x^{-1,0})=(\phi_+\circ\alpha)*(\phi_+\circ \beta_x^{-1,0})\,,$$ and after a reparametrisation of $\beta$ this is equal to $$(\phi_+\circ\alpha)*\beta_x^{-1,1}=(\phi_+\circ\alpha)*\beta(x)\,.$$ This map is homotopic to $$\alpha\mapsto \alpha*\beta(x)\,.$$ Denote by $A$ the algebra $H_*(\vec{\Omega}\Sigma X)=H_*(\vec{\Omega}\Sigma X;\mathbb{F})$. This is a graded algebra with $A_0=\mathbb{F}$, by the assumption that $\vec{\Omega}\vSigma X$ is connected. Thus, $$A=\mathbb{F}\oplus\widetilde A\,,\quad \widetilde A=\widetilde H_*(\vec{\Omega}\Sigma X)=H_{>0}(\vec{\Omega}\Sigma X)\,.$$ The reduced homology of $X\times \vec{\Omega}\Sigma X$ can be decomposed as $$\widetilde H_*(X\times \vec{\Omega}\Sigma X)=\widetilde H_*(X)\oplus \widetilde H_*(\vec{\Omega}\Sigma X)\oplus\widetilde H_*(X)\otimes \widetilde H_*(\vec{\Omega}\Sigma X)=V\oplus \widetilde A\oplus V\otimes \widetilde A\,.$$ The bottom isomorphism of the diagram (\ref{dia}) is of the form $$V\oplus \widetilde A\oplus V\otimes \widetilde A\to \widetilde A\oplus \widetilde A\,,$$ $$(v,a,w\otimes b)\mapsto (a,\beta_*(v)+\beta_*(w)\cdot b)\,.$$ We subtract one summand $\widetilde A$ from the above mapping and, since $V\oplus V\otimes\widetilde A=V\otimes A$, we obtain an isomorphism $$V\otimes A\to \widetilde A\,,$$ $$w\otimes b\mapsto \beta_*(w)\cdot b\,.$$ This property characterizes the tensor algebra $T(V)$, see \cite[Prop.~5.3.2]{CaMi}.\end{proof} \subsection{Proof of Corollary \ref{cormain}} The spaces $JX$ and $\vec{\Omega}\Sigma X$ are topological monoids; therefore their fundamental groups are commutative. A map between such spaces is a weak homotopy equivalence if and only if it induces an isomorphism of homology groups. It is enough to check that the induced maps are isomorphisms for homology with coefficients in the fields $\mathbb{Z}_p$ (for any prime $p$) and in $\mathbb{Q}$. By \cite{Ja} the homology of $J(X)$ is the free tensor algebra generated by $V=H_{>0}(X)$. The map $$X\longrightarrow JX'\stackrel{J(\beta')}{\longrightarrow} \vec{\Omega}\Sigma X$$ gives rise to the map on homology: $$V\longrightarrow H_*(JX)\simeq T(V)\longrightarrow H_*(\vec{\Omega}\Sigma X)\,.$$ By Theorem \ref{homomain} the last map is an isomorphism. \qed \section{Topological construction} \label{homotopijnie} \subsection{Increasing paths in the directed suspension} We will concentrate on \emph{the directed suspension} of a pointed d-space $X$, i.e., we assume that $\vec{P}(\vec{\Sigma} X)$ is either $\vec{P}_{\mathrm{min}}(\vec{S}^1\wedge X)$ or $\vec{P}_{\mathrm{c}}(\vec{S}^1\wedge X)$.
We do not assume that $X$ is connected but it is important that, for every directed path $\alpha\in \vec{P}(\vec{\Sigma} X)$, the composition $h\circ \alpha$ is non-decreasing on any interval on which it is well-defined. To emphasize this, we will write $\vec{\Sigma} X$ rather than $\Sigma X$. \begin{df} A path $\alpha\in \vec{P}(\vec{\Sigma} X)$ is \emph{strictly increasing} if, for every $0[\!(eq s <t [\!(eq 1$ one of the following conditions are satisfied: \begin{itemize} \item{there exists $s<u<t$ such that $\alpha(u)=*$,} \item{the map $(h\circ\alpha)|_{[\!(eft(s,t)\!]ight)}$ is strictly increasing.} \end{itemize} Let $\vec{P}_{inc}(\vec{\Sigma} X)\subseteq \vec{P}(\vec{\Sigma} X)$ be the space of strictly increasing paths, and let \[ \vec{\Omega}_{inc}\vSigma X:=\{\alpha\in\vec{P}_{inc}(\vec{\Sigma}{X}):\; \alpha(t_0^\alpha)=\alpha(t_\infty^\alpha)=*\} \] be the space of strictly increasing loops. \end{df} \begin{prp} The inclusion $\vec{\Omega}_{inc}\vSigma X\subseteq \vec{\Omega}\vSigma X$ is a homotopy equivalence. \end{prp} \begin{proof} For every $0<\varepsilon<1$, the map $R:\vec{\Omega}\vSigma X\to \vec{\Omega}_{inc}\vSigma X$ \[ R(\alpha)(t) = [\!( (h(\alpha(t))+\varepsilon\bar t\,)(1-\varepsilon)^{-1} , p(\alpha(t)) )\!] \,, \] where $$\bar t=\tfrac{t-t^\alpha_0}{t^\alpha_\infty-t^\alpha_0}$$ is the normalized parameter of $\alpha$, is a homotopy inverse. \obrazek{rys1}{5cm} $$h(\alpha(t))\quad\quad\quad h(\alpha(t))+\varepsilon\bar t\quad\quad\quad \tfrac{ h(\alpha(t))+\varepsilon\bar t}{1-\varepsilon}$$ \begin{center}Time $\times\;\;\mathbb{R}$--coordinate of $R(\alpha)$\end{center} \end{proof} \subsection{Topological James equivalence} \begin{df} For an open neighborhood $U$ of $x_0$, and a sequence $(U_i)_{i=1}^k$ of open subsets of $X$ such that $U_i\cap U=\emptyset$, define the following subset of $J(X)$ \[ \mathfrak{U}((U_i),U)=[\!(eft[\!(brace(x_i)_{i=1}^n:\; \exists_{1[\!(eq m_1<\dots< m_k[\!(eq n}\;(\forall_{j\in\{1,\dots,k\}}\; x_{m_j}\in U_j \;\wedge\; \forall_{i\not\in\{m_1,\dots,m_k\}}\; x_i\in U) )\!]ight)\!]brace \] Let $J_c(X)$ denote the set $J(X)$ with the topology generated by the sets $\mathfrak{U}((U_i),U)$. \end{df} \begin{rem} The identity map $J(X)\to J_c(X)$ is continuous but its inverse is not in general. Nevertheless, it induces a weak homotopy equivalence since it is homeomorphism on each compact subset. \end{rem} Let $\Omega_{\text{min}}:=\vec{\Omega}_{\text{min,inc}}(\vec{\Sigma} X)$ be the space of directed strictly increasing Moore paths on $\vec{\Sigma}{X}$ that consist of finitely many segments. For every path $\omega\in\Omega_{\text{min}}$, the inverse image \begin{equation} \omega^{-1}(X\setminus\{x_0\}\times \{0\})=\{x^\omega_1<\dots<x^\omega_{k(\omega)}\} \end{equation} is a finite set. \begin{prp} The map \[{\bf sec}:\Omega_{\text{)\!]m min}}\to J_c(X)\] \[ \omega\mapsto (x^\omega_1,\dots,x^\omega_{k(\omega)}) \] is continuous. \end{prp} \begin{proof} Let $(U_i)_{i=1}^k$, $U\ni x_0$ be open subsets of $X$ such that $U_i\cap U$ for all $i$. Fix $\omega\in \Omega_{\text{min}}$ such that ${\bf sec}(\omega)\in \mathfrak{U}((U_i),U)$. Then there exists a sequence $t_1<\dots<t_k$ such that $\omega(t_i)\in U_i\times \{0\}$ and $\omega(\mathbb{R}_+\setminus\{t_1,\dots,t_k\})\cap (X\setminus\{x_0\})\times \{0\}=\emptyset$. 
Furthermore, there exist $t^a_i<t_i<t^b_i\in \mathbb{R}_+$ such that $h(\omega(t))<0$ for $t\in[\!(eft[t^a_i,t_i)\!]ight[$, $h(\omega(t))>0$ for $t\in[\!(eft]t_i,t^b_i)\!]ight]$ and $p(\omega(t))\in U_i$ for $i\in [t^a_i,t^b_i]$. Let $\mathfrak{F}(K,V)\subseteq \Omega_{\text{min}}$ denote the space of paths $\alpha$ such that $\alpha(K)\subseteq V$. The set \begin{multline*} \mathfrak{F}[\!(eft(\mathbb{R}_+\setminus \bigcup_{i=1}^k (t^a_i,t^b_i), \{[\!( s,x)\!]:\; \text{$s\neq 0$ or $x\in U$}\} )\!]ight) \cap \bigcap_{i=1}^k \mathfrak{F}(\{t^a_i\},\{[\!( s,x )\!]:\; -1<s<0 \} )\cap\\ \mathfrak{F}(\{t^b_i\},\{[\!( s,x )\!]:\; 0<s<1 \}) \cap \mathfrak{F}([t^a_i,t^b_i],\{[\!( s,x )\!]:\; 0<s<1,\; x\in U_i \}) \end{multline*} is an open neighborhood of $\omega$ contained in ${\bf sec}^{-1}(\mathfrak{U}((U_i),U))$. \end{proof} It is clear from the construction that for $(x_1,x_2,\dots,x_n)\in X^n\subset J(X')$ we have $${\bf sec}(J(\beta'(x_1,x_2,\dots,x_n))=[(x_1,x_2,\dots,x_n)]\,,$$ i.e. $${\bf sec}(J(\beta'(x_1,x_2,\dots,x_n))=J(r)(x_1,x_2,\dots,x_n)\,,$$ where $J(r)$ is the map induced by the retraction $r:X'\to X$. With the assumption that $\vec{\Omega}\vSigma X$ is connected, it follows from Theorem )\!]ef{homomain} that ${\bf sec}$ is a homotopy equivalence. Below we present a construction of a retraction of $\Omega_{\text{min}}$ to a subspace consisting of the paths which are, up to a reparametrization, in the image of $J(\beta')$. \begin{exa} If $X$ is a discrete d-space, then every directed loop $\omega\in \Omega_{\text{)\!]m min}}$ is a non-decreasing reparametrization of a concatenation \[ \beta_{x_1}*\dots*\beta_{x_n}. \] As a consequence, the map ${\bf sec}$ induces a continuous bijection between the space of traces (cf.~\cite[Section 2]{R}) \[\vec{T}(\vec\Sigma X_\delta)_*^*:=\vec{\Omega}\vec{\Sigma} X_\delta / \{\text{non-decreasing reparametrizations}\}\] and $J_c(X)$. \end{exa} {\bf sec}tion{Straightening a path} [\!(abel{straightening} \subsection{A single segment} We show a construction which allows to construct a deformation of an arbitrary path to the concatenation of paths of the form $\beta(x)$. Let $\alpha:[0,a]\to C_-\subset\vec{\Sigma} X$ be a directed increasing path such that $$\alpha(0)=*\quad\text{and}\quad \alpha(a)\in \{0\}\times X\,.$$ In order to construct a deformation, we introduce the map $$\psi_t^-:C_-\to C_-$$ $$(s,x)\mapsto (s-t,x)\,.$$ Now let us define the homotopy $$ \widetilde H_t^-(\alpha)=(\psi^-_t\circ\alpha) *\beta^{-t,0}_{p(\alpha(a))}(-/a)\, ,$$ i.e., $$ \widetilde H_t^-(\alpha)(s)=\begin{cases}\psi^-_t(\alpha(s))&\text{for } s\in[0,a]\\ [\!( \tfrac{s-a}a-t,p(\alpha(a)) )\!] &\text{for } s\in[a,a(1+t)].\end{cases}$$ The path $\widetilde H_t^-(\alpha)$ is of length $(1+t)a$. We rescale the parameter linearly in order to obtain a path of length $a$. The resulting deformation has the property $$H_0^-(\alpha)=\alpha\,,$$ and $H_1^-(\alpha)(s)$ is a concatenation of a constant path with $\beta^{-1,0}_{p\alpha(a)}(s/2a)$. Similarly, we deform paths $\alpha:[0,a]\to C_+\subset\vec{\Sigma} X$ such that $$\alpha(0)\in \{0\}\times X\quad\text{and}\quad\alpha(a)=*$$ by the formula $$\widetilde H_t^+(\alpha)=\beta^{0,t}_{p(\alpha(0))}(-/a)*(\psi_{t}^+\circ\alpha)\,,$$ where $\psi_t^+(s,x)=(s+t,x)$. That is $$ \widetilde H_t^+(\alpha)(s)=\begin{cases} [\!( \tfrac{s}a,p(\alpha(0)) )\!] &\text{for } s\in[0,at]\\ \psi^+_t(\alpha(s-at))&\text{for } s\in[at,a(1+t)].\\\end{cases}$$ We reparametrize $\widetilde H^+_t(\alpha)$ to have a path of length $a$. 
Finally, we construct a deformation of a directed increasing path satisfying \begin{align}\label{pojedyncza1}&\alpha(0)=\alpha(a)=*\,,\\ \label{pojedyncza2}&\alpha(s)\ne*~\text{ for }~s\in(0,a)\,.\end{align} Such a path has a unique intersection with the set $(\{0\}\times X)\setminus \{*\}$. Let $b\in(0,a)$ be the parameter for which $\alpha(b)\in \{0\}\times X$. Then $\alpha$ can be written as the concatenation $$\alpha=\alpha^-*\alpha^+,$$ with $\alpha^-$ of length $b$. We define a deformation $$ \hat H_t(\alpha)=H^-_t(\alpha^-)*H^+_t(\alpha^+)\,.$$ The line coordinate of $\hat H_1(\alpha)$ is piecewise linear (increasing linearly on $[b/2,b]$ and on $[b,(b+a)/2)$ and constant elsewhere). The path $\hat H_1(\alpha)$ can be deformed further (in a uniform way) to obtain a path $$\beta^{-1,1}_{p(\alpha(b))}(s/a)\,.$$ We have obtained a canonical deformation $$\alpha(s)\;\leadsto\;\beta^{-1,1}_{p(\alpha(b))}(s/a)\,.$$ \obrazek{rys3}{5cm} \begin{center}$X\;\times\; \mathbb{R}$\end{center} \noindent We omit the proof that this procedure is continuous with respect to $\alpha$. The rigorous proof is quite tedious and we leave it to the reader, since we do not need it in the presence of Theorem \ref{homomain} and Corollary \ref{cormain}. \subsection{Deformation of a chain} Every path $\alpha\in \Omega_{\text{min}}$ is a concatenation of a chain $$const_0*\alpha_1*const_1*\alpha_2*const_2*\dots*\alpha_n*const_n\,,$$ where the paths $\alpha_i$ satisfy (\ref{pojedyncza1}--\ref{pojedyncza2}) and the $const_i$ are constant paths. These constant paths can be deformed to paths of length zero in a canonical way, and the paths $\alpha_i$ are deformed using the homotopy defined in the previous subsection. The resulting retraction $\alpha\mapsto H_1(\alpha)$ leaves unchanged the paths which are the images $J(\beta')(x_1,x_2,\dots,x_n)$. \section{Concluding remarks} Let us summarize the results of the homological and geometric constructions. \begin{itemize} \item For $X$ connected, according to Corollary \ref{cormain}, the map \[ \beta'\circ J(r)^{-1}:J(X)\to \vec{\Omega}\Sigma X \] is a weak homotopy equivalence under suitable assumptions on the directed structure, including the minimal or completed structure for cubical complexes. \item We have constructed a map ${\bf sec}:\Omega_{\text{min}}\to JX$ such that \[ {\bf sec}\circ\beta'\circ \iota:\bigsqcup_{n>0}X^n\to J(X) \] is equal to the natural projection. Here $\Omega_{\text{min}}$ is a subset of $\vec{\Omega}\vSigma X$ which is homotopy equivalent to it. It follows that the map ${\bf sec}$ is essentially a homotopy inverse of $\beta'$. \item In addition, we show how to deform an individual path to a path which is in the image of $\beta'$. The deformation is done in a canonical way. \end{itemize} We hope that our results will add a new flavour to the classical James construction. \end{document}
\begin{document} \title{Moments for multi-dimensional Mandelbrot's cascades} \author{Chunmao HUANG \footnote{Email address: [email protected] (C. Huang).} \\ \small{\emph{Harbin Institute of Technology at Weihai, Department of Mathematics, 264209, Weihai, China}}} \date{} \maketitle \begin{abstract} We consider the distributional equation $\textbf{Z}\stackrel{d}{=}\sum_{k=1}^N\textbf{A}_k\textbf{Z}(k)$, where $N$ is a random variable taking values in $\mathbb N_0=\{0,1,\cdots\}$, $\textbf{A}_1,\textbf{A}_2,\cdots$ are $p\times p$ non-negative random matrices, and $\textbf{Z},\textbf{Z}(1),\textbf{Z}(2),\cdots$ are i.i.d.\ random vectors in $\mathbb{R}_+^p$ with $\mathbb{R}_+=[0,\infty)$, which are independent of $(N,\textbf{A}_1,\textbf{A}_2,\cdots)$. Let $\{\mathbf Y_n\}$ be the multi-dimensional Mandelbrot's martingale defined as sums of products of random matrices, indexed by nodes of a Galton-Watson tree, applied to an appropriate vector. Its limit $\mathbf Y$ is a solution of the equation above. For $\alpha>1$, we show a sufficient condition and a necessary condition, respectively, for $\mathbb{E}\|\mathbf Y\|^\alpha\in(0,\infty)$. Then for a non-degenerate solution $\mathbf Z$ of the equation above, we show the decay rates of $\mathbb{E} e^{-\mathbf t\cdot \mathbf Z}$ as $\|\mathbf t\|\rightarrow\infty$ and those of the tail probability $\mathbb P(\mathbf y\cdot \mathbf Z\leq x)$ as $x\rightarrow 0$ for given $\mathbf y=(y^1,\cdots,y^p)\in \mathbb R_{+}^p$, as well as the existence of the harmonic moments of $\mathbf y\cdot \mathbf Z$. As an application, the above results about the moments (of positive and negative orders) of $\mathbf Y$ are applied to a special multitype branching random walk. Moreover, for the case where all the vectors and matrices of the equation above are complex, a sufficient condition for the $L^\alpha$ convergence and the $\alpha$th moment of Mandelbrot's martingale $\{\mathbf Y_n\}$ is also established.\\ \emph{Key words:} moments, harmonic moments, Mandelbrot's martingales, multiplicative cascades, multitype branching random walks\\ \emph{AMS subject classification:} 60K37, 60J80 \end{abstract} \section{Introduction} We consider a multi-dimensional Mandelbrot's martingale $\{\mathbf Y_{n}\}$ defined as sums of products of random matrices (weights), indexed by nodes of a Galton-Watson tree, applied to an appropriate vector. We are interested in the existence of the moments of positive and negative orders of its limit $\mathbf Y$. For the one-dimensional case, the classical model of Mandelbrot \cite{Mandelbrot74} corresponds to the case where the tree is a fixed $r$-ary tree ($r\geq 2$ being a constant), and all the weights are one-dimensional random variables. This classical model and its variations were studied by many authors in different contexts, see for example: Bingham \& Doney \cite{Bingham74, Bingham75} for branching processes and general age-dependent branching processes; Kahane \& Peyri\`{e}re \cite{Kahane76}, Guivarc'h \cite{Guivarch90} and Barral \cite{Barral99} for multiplicative cascades; Biggins \cite{Biggins77} and Biggins \& Kyprianou \cite{Biggins97} for branching random walks; Durrett \& Liggett \cite{Durrett83} for some infinite particle systems; R\"osler \cite{Rosler92} for the Quicksort algorithm. A general one-dimensional model (called Mandelbrot's cascades) which unifies the study of cascades and branching random walks was presented by Liu \cite{liu2000}, where a number of applications were shown.
The model considered here is a generalization of the model presented in \cite{liu2000} to the multi-dimensional case. Similar to the one-dimensional case, our model also corresponds to multi-type branching random walks, which have attracted attention recently; see for example Kyprianou \& Rahimzadeh Sani \cite{ks}, Biggins \& Rahimzadeh Sani \cite{bs} and Biggins \cite{b12}. This paper is our first exploration of multi-dimensional Mandelbrot's cascades. As a natural starting point, we begin with the existence of the moments of $\mathbf Y$, which are useful for studying the asymptotic properties of $\{\mathbf Y_{n}\}$. Let us present our model and problems. We consider the distributional equation of $\textbf{Z}$: \begin{equation}\tag{E}\label{E} \textbf{Z}\stackrel{d}{=}\sum_{k=1}^N\textbf{A}_k\textbf{Z}(k), \end{equation} where $N$ is a random variable taking values in $\mathbb N_0=\{0,1,\cdots\}$; $\textbf{A}_1,\textbf{A}_2,\cdots$ are $p\times p$ non-negative random matrices; $\textbf{Z},\textbf{Z}(1),\textbf{Z}(2),\cdots$, which are independent of $(N,\textbf{A}_1,\textbf{A}_2,\cdots)$, are i.i.d.\ random vectors in $\mathbb{R}_+^p$ with $\mathbb{R}_+=[0,\infty)$. We say a matrix $\mathbf{A}$ is \emph{finite} if all entries of $\mathbf A$ are finite, and say $\mathbf A$ is \emph{strictly positive} if for some positive integer $n$, all entries of $\mathbf A^n$ are positive. When a matrix $\mathbf A$ is finite and strictly positive, the Perron--Frobenius theorem shows that $\mathbf A$ has a positive maximal eigenvalue $\rho$ and associated positive right and left eigenvectors $\mathbf{v}=(v_1,\cdots,v_p)$ and $\mathbf{u}=(u_1,\cdots,u_p)$. Moreover, $\mathbf{u}$, $\mathbf{v}$ can be normalized so that $\sum\limits_{i=1}^{p}u_i=\sum\limits_{i=1}^{p}u_iv_i=1$. Throughout this paper, we assume that\\ \\* \textbf{Assumption (H)}. The matrix \emph{$\textbf{M}:=\mathbb E\sum\limits_{k=1}^N\textbf{A}_k$ is finite and strictly positive with the maximum-modulus eigenvalue $1$ and the corresponding left and right eigenvectors $\textbf{U}=(U_1,\cdots,U_p),\textbf{V}=(V_1,\cdots,V_p)$ normalized such that $\sum\limits_{i=1}^pU_i=\sum\limits_{i=1}^pU_iV_i=1$.} \\* We are interested in the existence of a solution of equation (\ref{E}) with finite $\alpha$th-moment ($\alpha>1$), and furthermore, in the existence of its harmonic moments. It is clear that there exists a solution of equation (\ref{E}). In fact, we can construct a solution (denoted by $\mathbf Y$) in the following way. Let $\mathbb N=\{1,2, \cdots\}$ and write \begin{equation*} I=\bigcup_{n=0}^{\infty }\mathbb{N}^{n} \end{equation*} for the set of all finite sequences $u=u_1\cdots u_n$ with $u_i\in \mathbb N$, where by convention ${\mathbb{N}}^{0}=\{\emptyset \}$ contains the null sequence $\emptyset $. If $u=u_1\cdots u_n\in I$, we write $|u|=n$ for the length of $u$; if $u=u_1\cdots u_n, v=v_1\cdots v_m\in I$, we write $uv=u_1\cdots u_n v_1\cdots v_m$ for the sequence obtained by juxtaposition. In particular, $u\emptyset=\emptyset u=u$. We partially order $I$ by writing $u\le v$ to mean that for some $u'\in I$, $v=uu'$, and by writing $u<v$ to mean that $u\le v$ and $u\ne v$. Let $\{(N_u,\textbf{A}_{u1},\textbf{A}_{u2},\cdots)\}$ be a family of independent copies of $(N,\textbf{A}_1,\textbf{A}_2,\cdots)$, indexed by all the finite sequences $u\in I$. For simplicity, we write $(N, \mathbf A_{1}, \mathbf A_{2},\cdots )$ for $(N_{\emptyset },\textbf{A}_{\emptyset 1},\textbf{A}_{\emptyset 2},\cdots )$.
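As a purely illustrative aside (not part of the original construction), the recursive structure behind equation (\ref{E}) can be explored numerically: starting from the vector $\mathbf V$ and iterating $\mathbf Z\mapsto\sum_{k=1}^N\mathbf A_k\mathbf Z(k)$ with fresh independent copies at every step produces samples of the martingale $\mathbf Y_n$ constructed below. In the following Python sketch the offspring law, the weight distribution and the normalization are toy choices, made only so that Assumption (H) holds; they are not taken from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = 2                      # dimension of Z and Y_n
V = np.ones(p)             # right eigenvector of M for the toy model below

def sample_weights():
    # Toy model (illustration only): N is 1 or 2 with equal probability,
    # each A_k has i.i.d. Uniform(0,1) entries, and the scaling makes
    # M = E sum_k A_k have all entries 1/p, hence Perron eigenvalue 1.
    N = rng.integers(1, 3)                     # N in {1, 2}, so E N = 1.5
    A = rng.uniform(0.0, 1.0, size=(N, p, p))  # each entry has mean 0.5
    return A / (1.5 * 0.5 * p)                 # E (sum_k A_k)_{ij} = 1/p

def sample_Y(n):
    # One draw of Y_n via Y_n = sum_k A_k Y_{n-1}(k), with independent
    # copies Y_{n-1}(k) generated recursively for the children.
    if n == 0:
        return V
    A = sample_weights()
    return sum(A[k] @ sample_Y(n - 1) for k in range(A.shape[0]))

samples = np.array([sample_Y(6) for _ in range(2000)])
print("empirical mean of Y_6 (should be close to V):", samples.mean(axis=0))
print("empirical second moment of ||Y_6||:",
      (np.linalg.norm(samples, ord=1, axis=1) ** 2).mean())
\end{verbatim}
In this toy example the empirical mean stays close to $\mathbf V$, reflecting the martingale property $\mathbb E\mathbf Y_n=\mathbf V$; for parameters violating the moment conditions of the next section one would expect the empirical $\alpha$th moments to grow with $n$ instead.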
Let $\mathbb{T}$ be the Galton-Watson tree with defining elements ($N_{u}$) ($u\in I$): (i) $\emptyset \in \mathbb{T}$; (ii) if $u\in \mathbb{T}$, then $uk\in \mathbb{T}$ if and only if $1\leq k\leq N_{u}$; (iii) if $u k\in \mathbb{T}$, then $u\in \mathbb{T}$. Here the null sequence $\emptyset $ is the root of the tree $\mathbb{T}$, which can be regarded as the initial particle; $u k$ represents the $k$-th child of $u$; $N_{u}$ represents the number of offspring of the particle $u$. Each node of the tree $\mathbb{T}$ is marked with the random vector $(N_u,\mathbf{A}_{u1},\textbf{A}_{u2},\cdots)$. We can imagine that the random matrix $\textbf{A}_{u k}$ is the ``weight'' associated with the edge $(u,uk)$ linking the nodes $u$ and $u k$ if $u\in \mathbb{T}$ and $1\leq k\leq N_{u}$; the values $\textbf{A}_{u k}$ for $k>N_{u}$ are of no influence for our purpose, and will be taken as $0$ for convenience. Let $\mathbb{T}_{n}=\{u\in\mathbb{T}: |u|=n\}$ be the set of sequences $u$ in $\mathbb{T}$ with length $|u|=n$. Put \begin{equation*} \mathbf X_{u}=\mathbf A_{u_{1}}\mathbf A_{u_{1}u_{2}}\cdots \mathbf A_{u_{1}\cdots u_{n}}\quad \mathrm{ if }\ u=u_{1} \ldots u_{n}\in I \quad \mathrm{ for } \; n\geq 1, \end{equation*} and define \begin{equation}\label{beqYn} \mathbf Y_{0}=\mathbf V \quad \mbox{ and }\quad \mathbf Y_{n}=\sum_{u\in \mathbb{T}_{n}} \mathbf X_{u}\mathbf V \quad \mathrm{ for } \; n\geq 1. \end{equation} It is not difficult to verify that $\textbf{Y}_n=(Y_{n,1},\cdots,Y_{n,p})$ is a non-negative martingale with respect to the filtration $$\mathcal {F}_n=\sigma((N_{u},\mathbf A_{u1},\mathbf A_{u2},\cdots ):|u|<n),$$ the $\sigma$-field that contains all information up to generation $n$. We call $\{\textbf{Y}_n\}$ the {\em multi-dimensional Mandelbrot's martingale}. It reduces to the classical Mandelbrot's martingale when the dimension $p=1$. Clearly, there exists a non-negative random vector $\mathbf Y=(Y_1, \cdots, Y_p)\in \mathbb R_+^p$ such that $$\mathbf Y=\lim_{n\rightarrow \infty }\mathbf Y_{n}$$ almost surely (a.s.) with $\mathbb EY_i\leq V_i$ for all $1\leq i\leq p$ by Fatou's lemma. Notice that \begin{eqnarray}\label{MCE1} \mathbf Y_n =\sum_{u\in \mathbb{T}_{1}}\mathbf A_{u}\mathbf Y_{n-1}(u), \end{eqnarray} where $\{\mathbf Y_{n-1}(u)\}$ ($u\in\mathbb{T}_1$) are independent copies of $\mathbf Y_{n-1}$ which are independent of $\mathcal F_1$. Denote $\mathbf Y(u)=\lim\limits_{n\rightarrow\infty}\mathbf Y_n(u)$. Letting $n\rightarrow\infty$ in (\ref{MCE1}), we have \begin{equation}\label{MCE2} \textbf{Y}{=}\sum_{k=1}^N\textbf{A}_k\textbf{Y}(k), \end{equation} which means that $\mathbf Y$ is a solution of the equation (\ref{E}). \paragraph{Example 1.1 Multitype branching random walk (MBRW)} A multitype branching random walk (MBRW) with $p$ types is defined as follows. A single particle $\emptyset$, of type $i\in\{1,2,\cdots, p\}$, is located at the origin of the real line $\mathbb{R}$. It gives birth to children of the first generation, which are scattered on $\mathbb{R}$, according to a vector point process $\mathbf{L}_i=(L_{i1},L_{i2},\cdots,L_{ip})$, where $L_{ij}$ is the point process counting the number of particles of type $j\in\{1,2,\cdots, p\}$ born to the particle of type $i$. These particles of the first generation reproduce particles to form the second generation. The displacements of the offspring of a particle of type $j$, relative to their parent's position, are given by the point process $\mathbf{L}_j$.
These particles of the second generation reproduce children to form the next generation, and so on. All particles behave independently. We denote the position of a particle $u$ by $S_u$ and the type of $u$ by $\tau(u)$; then the position of $uk$, the $k$-th child of $u$, satisfies $$S_{uk}=S_u+l_{uk},$$ where $l_{uk}$ denotes the displacement of $uk$ relative to $u$, whose distribution is determined by $L_{\tau(u)\tau(uk)}$. Assume that $N_i:=\sum\limits_{j=1}^pL_{ij}(\mathbb R)$ has the same distribution for all $1\leq i\leq p$, which means that all particles produce offspring according to the same distribution if we do not distinguish the types. Under this assumption, all particles $u\in I$ associated with the numbers of their offspring $N_u$ form a Galton-Watson tree $\mathbb T$ as described above. We remark that this assumption is not necessary in a usual MBRW, so the example presented here is just a special case of MBRW. For more information and more results about the usual MBRW, cf.\ \cite{bs, b12, ks}. For $t\in \mathbb{R}$, define the matrix $\tilde {\mathbf M}(t)=(\tilde M_{ij}(t))$ as $$ \tilde M_{ij}(t)=\mathbb E\sum_{\substack{u\in\mathbb T_1\\\tau(u)=j}}e^{-tS_u}\qquad\text{($\tau(\emptyset)=i$).} $$ Assume that $\tilde{\mathbf M}(t)$ defined above is finite and strictly positive. Denote the positive maximal eigenvalue of $\tilde{\mathbf M}(t)$ by $\tilde \rho(t)$ and the associated normalized positive left and right eigenvectors by $\tilde{\mathbf U}(t)=(\tilde U_1(t),\cdots, \tilde U_p(t))$ and $\tilde{\mathbf V}(t)=(\tilde V_1(t),\cdots, \tilde V_p(t))$ respectively. For each $i=1,2,\cdots, p$, let \begin{equation*} W_{n,i}(t):=\frac{\sum\limits_{u\in\mathbb{T}_n}\tilde V_{\tau(u)}(t)e^{-tS_u}}{\tilde V_i(t)\tilde \rho(t)^n}\qquad (\tau(\emptyset)=i). \end{equation*} It is known that for each $i=1,2,\cdots, p$, $\{W_{n,i}(t)\}$ forms a non-negative martingale with mean one, hence it converges a.s. to a non-negative random variable $W_i(t)$ with $\mathbb EW_i(t)\leq1$. Write $$\mathbf Y_n=\left(W_{n,1}(t)\tilde V_1(t),\;\;W_{n,2}(t)\tilde V_2(t),\;\;\cdots\;\;, W_{n,p}(t)\tilde V_p(t)\right).$$ We can see that the martingale $\{\mathbf Y_n\}$ is just the Mandelbrot's martingale defined in (\ref{beqYn}) if we put the random matrix $\mathbf A_k=((\mathbf A_k)_{ij})$, where $$(\mathbf A_k)_{ij}=\frac{e^{-tS_k}}{\tilde \rho(t)}\mathbf{1}_{\{\tau(k)=j\}}\qquad (\tau(\emptyset)=i).$$ Indeed, with this $\mathbf A_k$, we have $\mathbf M=\frac{\tilde{\mathbf M}(t)}{\tilde \rho(t)}$, so that $\mathbf V=\tilde{\mathbf V}(t)$. Notice that for $u=u_1\cdots u_n$, $$(\mathbf X_{u})_{ij}=\frac{e^{-tS_u}}{\tilde \rho(t)^n}\mathbf{1}_{\{\tau(u)=j\}}\qquad (\tau(\emptyset)=i).$$ Thus by (\ref{beqYn}), for each $i=1,2,\cdots, p$, with $\tau(\emptyset)=i$, $$Y_{n,i}=\sum_{u\in \mathbb{T}_{n}}\frac{e^{-tS_u}}{\tilde \rho(t)^n}\tilde V_{\tau(u)}(t) =W_{n,i}(t)\tilde V_i(t).$$ Therefore, the limit of $\mathbf Y_n$, namely, $$\mathbf Y=\left(W_1(t)\tilde V_1(t),\;\;W_2(t)\tilde V_2(t),\;\;\cdots\;\;, W_p(t)\tilde V_p(t)\right),$$ satisfies (\ref{MCE2}). \section{Main results} Let $\mathbf Y$ be the limit of the Mandelbrot's martingale $\{\mathbf Y_n\}$. We first discuss the existence of the $\alpha$th-moment ($\alpha>1$) of $\textbf{Y}$, which implies its non-degeneracy. For $t\in\mathbb{R}$ fixed, define the random matrix $\textbf{A}_k^{(t)}=((\textbf{A}_k^{(t)})_{ij})$ as $(\textbf{A}_k^{(t)})_{ij}:=[(\textbf{A}_k)_{ij}]^t$.
Let $$\mathbf M{(t)}:=\mathbb E \sum_{k=1}^N \mathbf A_k^{(t)}.$$ When $\mathbf M{(t)}$ is finite and strictly positive, we denote the its positive maximal eigenvalue by $ \rho(t)$ and the corresponding positive left and right eigenvectors by ${\mathbf U}(t)=( U_1(t),\cdots, U_p(t))$ and ${\mathbf V}(t)= (V_1(t),\cdots, V_p(t))$ normalized such that $\sum\limits_{i=1}^pU_i(t)=\sum\limits_{i=1}^pU_i(t)V_i(t)=1$. Define \begin{equation*} \mathbf X_{u}^{(t)}=\mathbf A_{u_{1}}^{(t)}\mathbf A_{u_{1}u_{2}}^{(t)}\cdots \mathbf A_{u_{1}\cdots u_{n}}^{(t)}\quad \mathrm{ if }\ u=u_{1} \ldots u_{n}\in I \quad \mathrm{ for } \; n\geq 1, \end{equation*} \begin{equation}\label{beqYn1} \mathbf Y_{0}^{(t)}=\mathbf V(t) \quad \mbox{ and }\quad \mathbf Y_{n}^{(t)}=\sum_{u\in \mathbb{T}_{n}} \mathbf X_{u}^{(t)}\mathbf V(t)\quad \mathrm{ for } \; n\geq 1. \end{equation} Clearly, $\textbf{Y}_n^{(t)}=(Y_{n,1}^{(t)},\cdots,Y_{n,p}^{(t)})$ is a non-negative martingale with mean $\textbf{V}(t)$, so it converges a.s. to a random vector $\textbf{Y}^{(t)}=(Y^{(t)}_1,\cdots,Y^{(t)}_p)$. In particular, when $t=1$, we have $\textbf{X}_u^{(1)}=\textbf{X}_u$, $\rho(1)=1$ and $\textbf{V}(1)=\textbf{V}$, hence $\textbf{Y}_n^{(1)}=\textbf{Y}_n$ and $\mathbf Y^{(1)}=\mathbf Y$. Further more, define the matrix $\textbf{M}_n(t)=((\textbf{M}_n(t))_{ij})$ as $$(\textbf{M}_n(t))_{ij}:=\mathbb E\sum_{u\in \mathbb{T}_n}[(\textbf{X}_u)_{ij}]^t$$ with the maximum-modulus eigenvalue denoted by $\rho_n(t)$ and the corresponding normalized positive left and right eigenvectors by $\textbf{U}_n(t)=(U_{n,1}(t)\cdots,U_{n,p}(t))$ and $\textbf{V}_n(t)=(V_{n,1}(t),\cdots,V_{n,p}(t))$. In particular, $\rho_1(t)=\rho(t)$. We declare that throughout this paper the notation norm $\|\mathbf A\|$ represents any one of the matrix norms if $\mathbf A$ is a matrix, and $\|\mathbf u\|=\sum\limits_{j=1}^p|u_j|$ is the $L^1$-norm of $\mathbf u=(u_1, \cdots, u_p)$ if $\mathbf u$ is a vector. \begin{thm}[Moments]\label{MCT1} Let $\alpha>1$. \begin{itemize} \item[(a)]If $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ and $p^{(\alpha-1)}\rho_n(\alpha)<1$ for some positive integer $n$, then $$0<\mathbb E\|\mathbf Y\|^\alpha<\infty\qquad \text{and}\qquad \mathbb E\mathbf Y=\mathbf V.$$ \item[(b)] Conversely, if $0<\mathbb E\|\mathbf Y\|^\alpha<\infty$, then $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ and $\rho_n(\alpha)\leq 1$ for all $n$. If additionally \begin{equation}\label{MCEC2} \text{$\mathbb P(\text{$\forall k\in\{1, 2,\cdots, N\}$, $\mathbf A_k$ has a positive column vector})>0$,} \end{equation} then $\rho_n(\alpha)<1$ for all $n$. \end{itemize} \end{thm} \noindent\textbf{Remark 2.1. } (i) For $\alpha>1$, under Assumption (H), the condition $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ ensures that $\mathbf M(\alpha)$ is finite and strictly positive, so that $\rho(\alpha)$ exists. Notice that for each $t\in \mathbb R$ fixed, $$[\textbf{M}(t)]^n\leq\textbf{ M}_n(t)\leq p^{(\alpha-1)(n-1)}[\textbf{M}(t)]^n,$$ where for two matrix $\mathbf A=(a_{ij}), \mathbf B=(b_{ij})$, the inequality $\mathbf A\leq \mathbf B$ means that $a_{ij}\leq b_{ij}$ for all $i, j$. 
Thus the existence of $\rho(t)$ is equivalent to that of $\rho_n(t)$, and moreover, for each $t\in \mathbb R$ fixed, $$\text{$\rho(t)^n\leq \rho_n(t)\leq p^{(\alpha-1)(n-1)}\rho(t)^n$.}$$ Therefore, under Assumption (H) and the condition $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$, $\rho_n(t)$ exists for all $t\in [1,\alpha]$ and for all $n$. Besides, we remark that the condition $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ is equivalent to $\mathbb E \|\mathbf Y_1\|^\alpha<\infty$. (ii) Under Assumption (H), $\mathbb E\|\mathbf Y\|^\alpha>0$ is equivalent to $\mathbb E(Y_i)^\alpha>0$ for all $i\in\{1,\cdots,p\}$. Indeed, by (\ref{MCE2}), one can see that $\mathbb E \mathbf Y$ is an eigenvector associated with the eigenvalue $1$. If it is non-trivial, i.e.\ $\mathbb E \mathbf Y\neq \mathbf 0$, then $\mathbb E \mathbf Y=c\mathbf V$ for some constant $c> 0$, which implies that $\mathbb E \mathbf Y$ is positive. Theorem \ref{MCT1}(a) shows a sufficient condition for the existence of the $\alpha$th-moment ($\alpha>1$) of $\textbf{Y}$, or equivalently, the $L^\alpha$ convergence of the martingale $\{\mathbf Y_n\}$ to its limit $\textbf{Y}$. If $\mathbb E(Y_i)^\alpha<\infty$, it is obvious that $\mathbb E Y_i=V_i$ and $\mathbb P(Y_i>0)>0$. As $\mathbf Y$ is a solution of the equation (\ref{E}), Theorem \ref{MCT1}(a) in fact also gives the existence of a non-trivial solution of equation (\ref{E}). Moreover, if $p^{(\alpha-1)}\rho_n(\alpha)<1$ for some positive integer $n$, Theorem \ref{MCT1} implies that $0<\mathbb E \|\mathbf Y\|^\alpha<\infty$ if and only if $\mathbb E \|\mathbf Y_1\|^\alpha<\infty$, which reveals that $\mathbf Y_{1}$ and $\mathbf Y$ share the same asymptotic properties. In particular, for $p=1$, if $\mathbb P(\forall k\in\{1, 2,\cdots, N\}, A_k>0 )>0$, Theorem \ref{MCT1} says that $$ 0<\mathbb E Y^\alpha<\infty\qquad \text{if and only if}\qquad\mathbb EY_1^\alpha<\infty \;\;\text{and}\;\; \rho(\alpha)<1.$$ This result was obtained by Liu (\cite{liu2000}, Theorem 2.1) with the help of a size-biased measure. Here our proof presents a different idea based on martingale inequalities. Our method, which is available for both $p=1$ and $p>1$, also avoids the trouble of finding a convenient size-biased measure for the case where $p>1$. We mention that this method can also be applied to the complex case where $\textbf{A}_k$ are complex random matrices and $\textbf{Z}$ and $\textbf{Z}(k)$ are complex random vectors, see Section 6. Now we consider the existence of harmonic moments of $\textbf{Y}$, i.e., $\mathbb E (Y_i)^{-\lambda}<\infty$, for each $i\in\{1,2,\cdots, p\}$, where $\lambda>0$. We shall deal with a more general case, with a general non-trivial solution of equation (\ref{E}), denoted still by $\mathbf Z$, instead of $\mathbf Y$. Let $\mathbf Z$ be a non-trivial solution of equation (\ref{E}). Then we have $\mathbb P(\mathbf Z>\mathbf 0)>0$, where $\mathbf Z>\mathbf 0$ means that $Z_i>0$ for all $i=1,2,\cdots, p$. Assume that (\ref{MCEC2}) holds, and \begin{equation}\label{MCA1} \mathbb P(N=0)=0,\qquad \mathbb P(N=1)<1. \end{equation} In fact, assumption (\ref{MCEC2}) serves to ensure that the probability $\mathbb P(\mathbf Z=\mathbf 0)$ is a solution of the equation $f(q)=q$, where $f(s)=\mathbb E s^N$ ($0\leq s\leq 1$) is the generating function of $N$.
Since $\mathbb P(\mathbf Z=\mathbf 0)<1$, under assumptions (\ref{MCA1}), by the unity of solution, we have $\mathbb P(\mathbf Z=\mathbf 0)=0$, or namely, $\mathbb P(\mathbf Z>\mathbf 0)=1$. Let \begin{equation} \phi(\mathbf t)=\mathbb Ee^{-\mathbf t\cdot\mathbf Z},\qquad\mathbf t=(t_1,\cdots,t_p)\in \mathbb R_{+}^p, \end{equation} be the Laplace transform of $\mathbf Z$, where we write $\mathbf u\cdot\mathbf v=\sum\limits_{j=1}^pu_jv_j$ for the inner product of two vectors $\mathbf u$ and $\mathbf v$. We are interested in the decay rate of $\phi(\mathbf t)$ as $\|\mathbf t\|\rightarrow\infty$ and that of the tail probability $\mathbb P(\mathbf y\cdot \mathbf Z\leq x)$ as $x\rightarrow0$, for given $\mathbf y=(y_1,\cdots,y_p)\in \mathbb R_{+}^p$ , as well as the harmonic moment $\mathbb E (\mathbf y\cdot \mathbf Z)^{-\lambda}$ for $\lambda>0$. Set $$\underline {m}:=\mbox{\emph{essinf}} \;N$$ Then $\underline {m}\geq1$, since $\mathbb P(N=0)=0$. We have the following result. \begin{thm}[Harmonic moments]\label{MCT2} Assume (\ref{MCEC2}) and (\ref{MCA1}). Write $a_{ij}=(\mathbf A_1)_{ij}$. If $$\mathbb E \left(\min\limits_i\sum\limits_{j=1}^p a_{ij}\right)^{-\lambda}<\infty\qquad \text{and}\qquad\mathbb E \left[\left(\min\limits_i\sum\limits_{j=1}^p a_{ij}\right)^{-\lambda}\mathbf 1_{\{N=1\}}\right]<1 $$ for some $\lambda>0$, then $$\phi(\mathbf t)=O(\|\mathbf t\|^{-\lambda})\quad(\|\mathbf t\|\rightarrow\infty),$$ and for every fixed non-zero $\mathbf y=(y_1,\cdots,y_p)\in \mathbb R_{+}^p$, $$\mathbb P(\mathbf y\cdot \mathbf Z\leq x)=O(x^{\lambda})\quad(x\rightarrow0),\qquad\quad \mathbb E (\mathbf y\cdot \mathbf Z)^{-\lambda_1}<\infty\quad(0<\lambda_1<\lambda). $$ If additionally $\underline {m}>1$ and $\mathbb E\left[\prod\limits_{k=1}^{\underline {m}}\left(\min\limits_i\sum\limits_{j=1}^p (\mathbf A_k)_{ij}\right)^{-\lambda}\right]<\infty$, then $$\phi(\mathbf t)=O(\|\mathbf t\|^{-\underline {m}\lambda})\;\;(\|\mathbf t\|\rightarrow\infty),\quad \mathbb P(\mathbf y\cdot \mathbf Z\leq x)=O(x^{\underline {m}\lambda})\;\;(x\rightarrow0),\quad \mathbb E (\mathbf y\cdot \mathbf Z)^{-\underline {m}\lambda_1}<\infty\;\;(0<\lambda_1<\lambda).$$ \end{thm} From Theorem \ref{MCT2}, we can deduce similar results for each component $Z_i$ of $\mathbf Z$. Let $\phi_i(t)=\mathbb Ee^{-tZ_i}$ ($t>0$) be the Laplace transform of $Z_i$. Denote by $\mathbf e_i$ the vector which the $i$-th component is $1$ and the others are $0$. Then $\phi_i(t)=\phi(t\mathbf e_i)$, and $\mathbf e_i\cdot \mathbf Z=Z_i$. Applying Theorem \ref{MCT2} to $\phi(t\mathbf e_i)$ and $\mathbf e_i\cdot \mathbf Z$, we immediately get the following corollary. \begin{co}\label{MCC1} Under the conditions of Theorem \ref{MCT2}, we have for each $i\in\{1,2,\cdots,p\}$, $$\phi_i( t)=O( t^{-\lambda})\;\;( t\rightarrow\infty),\quad \mathbb P(Z_i\leq x)=O(x^{\lambda})\;\;(x\rightarrow0),\quad \mathbb E (Z_i)^{-\lambda_1}<\infty\;\;(0<\lambda_1<\lambda).$$ If additionally $\underline {m}>1$ and $\mathbb E\left[\prod\limits_{k=1}^{\underline {m}}\left(\min\limits_i\sum\limits_{j=1}^p (\mathbf A_k)_{ij}\right)^{-\lambda}\right]<\infty$, then $$\phi_i( t)=O( t^{-\underline {m}\lambda})\;\;( t\rightarrow\infty),\quad \mathbb P(Z_i\leq x)=O(x^{\underline {m}\lambda})\;\;(x\rightarrow0),\quad \mathbb E (Z_i)^{-\underline {m}\lambda_1}<\infty\;\;(0<\lambda_1<\lambda).$$ \end{co} For $p=1$, Theorem \ref{MCT2} (or Corollary \ref{MCC1}) coincides with the results of Liu (\cite{liu2}, Theorems 2.1 and 2.4). 
But when $p>1$, to find the critical value for the existence of harmonic moments like \cite{liu2} seems difficult. Similar to (\cite{liu2}, Theorem 2.5), we also have result below about the exponential decay rate of $\phi(\mathbf t)$. \begin{thm}[The exponential case]\label{MCT3} Assume that (\ref{MCEC2}) holds, $\underline{m}\geq 2$ and $\min\limits_{i,j}\mathbf (A_k)_{ij}\geq \underline {a}$ a.s. for some constant $\underline {a}>0$ and all $1\leq k\leq \underline {m}$. \begin{itemize} \item[(a)] If $\mathbb P\left(N=\underline {m}\right)>0$, then there exists a constant $C_1>0$ such that for all $\|\mathbf t\|>0$ large enough, $$\phi(\mathbf t)\leq \exp\{-C_1\|\mathbf t\|^\gamma\},$$ and for every fixed non-zero $\mathbf y=(y_1,\cdots,y_p)\in \mathbb R_{+}^p$, there exists a constant $C_{1,\mathbf y}>0$ such that for all $x>0$ small enough, $$\mathbb P(\mathbf y\cdot \mathbf Z\leq x)\leq \exp\{-C_{1,\mathbf y}x^{-\gamma/(1-\gamma)}\},$$ where $\gamma=-\log\underline{m}/\log{(\underline{a}p)}\in(0,1)$. \item[(b)]For some $\varepsilon>0$ satisfying $(\underline{a}+\varepsilon)p\underline {m}<1$, if $\mathbb P\left(N=\underline {m},\; \max\limits_{ij}(A_k)_{ij}\leq \underline{a}+\varepsilon\; \text{for all } 1\leq k\leq \underline {m}\right)>0$, then there exists a constant $C_2>0$ such that for all $\|\mathbf t\|>0$ large enough, $$\phi(\mathbf t)\geq \exp\{-C_2\|\mathbf t\|^{\gamma(\varepsilon)}\},$$ and for every fixed $\mathbf y=(y_1,\cdots,y_p)\in \mathbb R_{+}^p$, there exists a constant $C_{2,\mathbf y}>0$ such that for all $x>0$ small enough, $$\mathbb P(\mathbf y\cdot \mathbf Z\leq x)\geq \exp\{-C_{2,\mathbf y}x^{-\gamma(\varepsilon)/(1-\gamma(\varepsilon))}\},$$ where $\gamma(\varepsilon)=-\log\underline{m}/\log{[(\underline{a}+\varepsilon) p]}\in(0,1)$. \end{itemize} \end{thm} Finally, as applications of the above moment results for the limit $\mathbf Y$ of Mandelbrot's martingale $\{\mathbf Y_n\}$, we consider the MBRW described in Example 1.1 and show the sufficient conditions for the existence of moments (of positive and negative orders) of $W_i(t)$, for each $i=1, 2, \cdots, p$ and for $t\in\mathbb R$ fixed. For MBRW, it is obvious that (\ref{MCEC2}) is satisfied. Notice that $\mathbf{M}(\alpha)=\frac{\tilde{\mathbf M}(\alpha t)}{\tilde \rho(t)^\alpha}$, which leads to $\rho(\alpha)=\frac{\tilde{\rho}(\alpha t)}{\tilde \rho(t)^\alpha}$. Applying Theorem \ref{MCT1} yields the result for the moments of positive orders, and Theorem \ref{MCT2} yields the one for the moments of negative orders. \begin{co}[Application to MBRW]\label{MCC2}We consider the MBRW described in Example 1.1. \begin{itemize} \item[(a)]Let $\alpha>1$. If $\max\limits_{i}\mathbb E\left(W_{1,i}(t)\right)^\alpha<\infty$ and $p^{\alpha-1}\frac{\tilde{\rho}(\alpha t)}{\tilde \rho(t)^\alpha}<1$, then $\max\limits_{i}\mathbb E [W_i(t)]^\alpha<\infty$. \item[(b)]Assume (\ref{MCA1}). Denote by $S_1^i$ the displacement of the first child ${1}$ of the initial particle $\emptyset$ of type $i\in\{1,\cdots, p\}$. Let $\lambda>0$. If $\max\limits_{i}\mathbb E e^{-(\lambda+\varepsilon) t S_1^i}<\infty$ and $\mathbb E \max\limits_{i}e^{-(\lambda+\varepsilon) S_1^i}\mathbf 1_{\{N=1\}}<1$ for some $\varepsilon>0$, then $\max\limits_{i}\mathbb E [W_i(t)]^{-\lambda}<\infty$. \end{itemize} \end{co} Corollary \ref{MCC2}(a) gives a sufficient condition for the existence of $\alpha$th-moment of $W_i(t)$. 
In fact, if we deal with the martingale $\{W_{n,i}(t)\}$ directly according to the ideas in the proof of Theorem \ref{MCT1}, the condition $p^{\alpha-1}\frac{\tilde{\rho}(\alpha t)}{\tilde \rho(t)^\alpha}<1$ can be weaken to $\frac{\tilde{\rho}(\alpha t)}{\tilde \rho(t)^\alpha}<1$ (see Huang \cite{huangT}, where we show that $\max\limits_{i}\mathbb E\left(W_{1,i}(t)\right)^\alpha<\infty$ and $\frac{\tilde{\rho}(\alpha t)}{\tilde \rho(t)^\alpha}<1$ is a necessary and sufficient condition for $\max\limits_{i}\mathbb E [W_i(t)]^\alpha<\infty$. The rest part of the paper is arranged as follows. In next section, we shall establish two auxiliary inequalities for the martingale $\{\mathbf Y_n^{(t)}\}$, which will be used in Section 4 for the proof of Theorem \ref{MCT1}. In Section 5, we shall prove Theorems \ref{MCT2} and \ref{MCT3}. Finally, we shall consider the complex case in Section 6, where we shall show sufficient conditions for the $L^\alpha$ convergence and the $\alpha$th-moment of the Mandelbrot's martingale $\{\mathbf Y_n\}$. \section {The martingale $\{\mathbf Y_n^{(t)}\}$} The critical idea of the proof of Theorem \ref{MCT1} is to notice the double martingale structure (cf \cite{GK} for more information) of the martingale $\{\mathbf Y_n^{(t)}\}$ and apply the inequality of martingale (Burkholder's inequality) to it. We shall go along the proof of Theorem \ref{MCT1} according to the lines of Huang \& Liu \cite{huang3} or Alsmeyer \emph{et al.} \cite{IK}. In this section, we show two lemmas (inequalities) to the martingale $\{\mathbf Y_n^{(t)}\}$ which will be used in the proof of Theorem \ref{MCT1}. \begin{lem}\label{MCL1.2.1} Let $\alpha>1$. Fix $t\in \mathbb R$. If $\max\limits_i\mathbb E\left[Y_{1,i}^{(t)}\right]^\alpha<\infty$, then for each $i=1,\cdots,p$, \begin{itemize} \item[(a)] for $\alpha\in(1,2]$, \begin{equation}\label{MCE1.2.1} \mathbb E\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha\leq C p^{(\alpha-1)n}\left[\frac{\rho(\alpha t)}{\rho(t)^\alpha}\right]^n; \end{equation} \item[(b)] for $\alpha>2$, \begin{equation}\label{MCE1.2.2} \mathbb E\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha\leq C p^{\alpha n/2}\left[\frac{\rho(2 t)^{\alpha/2}}{\rho(t)^\alpha}\right]^n \mathbb E[Y_{n,i}^{(2t)}]^{\alpha/2}, \end{equation} \end{itemize} where $C$ is a constant depending on $\alpha,p,t$. \end{lem} \begin{proof} We can decompose $Y_{n,i}^{(t)}$ as \begin{eqnarray*} Y_{n,i}^{(t)}=\frac{1}{\rho(t)^n}\sum_{j=1}^{p}\sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(t)})_{ij}V_j(t) =\frac{1}{\rho(t)^n}\sum_{j=1}^{p}\sum_{u\in\mathbb{T}_{n-1}}(\textbf{X}_u^{(t)})_{ij}Y_{1,j}^{(t)}(u), \end{eqnarray*} where $\mathbf Y_1^{(t)}(u)$ is a version of $\mathbf Y_1^{(t)}$ at root $u$. Hence $$Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}=\frac{1}{\rho(t)^n}\sum_{j=1}^{p} \sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(t)})_{ij}\left[Y_{1,j}^{(t)}(u)-V_j(t)\right].$$ By Burkholder's inequality (see for example \cite{chow}), \begin{eqnarray*} \mathbb E\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha&\leq& \frac{p^{\alpha-1}}{\rho(t)^{\alpha n}}\sum_{j=1}^{p}\mathbb E\left| \sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(t)})_{ij}\left[Y_{1,j}^{(t)}(u)-V_j(t)\right]\right|^\alpha\\ &\leq&\frac{C}{\rho(t)^{\alpha n}}\sum_{j=1}^{p}\mathbb E\left(\sum_{u\in\mathbb{T}_n} \left[(\textbf{X}_u^{(t)})_{ij}\right]^2\left[Y_{1,j}^{(t)}(u)-V_j(t)\right]^2\right)^{\alpha/2}. 
\end{eqnarray*} Noticing the fact that \begin{eqnarray*} \mathbb E\left(\sum_{u\in\mathbb{T}_n} \left[(\textbf{X}_u^{(t)})_{ij}\right]^2\left[Y_{1,j}^{(t)}(u)-V_j(t)\right]^2\right)^{\alpha/2} \leq\left\{\begin{array}{ll} \mathbb E\left(\sum\limits_{u\in \mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^\alpha\right) \mathbb E\left|Y_{1,j}^{(t)}-V_j(t)\right|^\alpha, & \;\text{for $\alpha\in(1,2]$,}\\ \mathbb E\left(\sum\limits_{u\in \mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^2\right)^{\alpha/2} \mathbb E\left|Y_{1,j}^{(t)}-V_j(t)\right|^\alpha,&\;\text{for $\alpha>2$}, \end{array} \right. \end{eqnarray*} we have \begin{eqnarray}\label{MCES31} \mathbb E\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha \leq\left\{\begin{array}{ll} \frac{C}{\rho(t)^{\alpha n}}\sum\limits_{j=1}^{p}\mathbb E\left(\sum\limits_{u\in \mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^\alpha\right) \mathbb E\left|Y_{1,j}^{(t)}-V_j(t)\right|^\alpha, & \text{for $\alpha\in(1,2]$,}\\ \frac{C}{\rho(t)^{\alpha n}}\sum\limits_{j=1}^{p}\mathbb E\left(\sum\limits_{u\in \mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^2\right)^{\alpha/2} \mathbb E\left|Y_{1,j}^{(t)}-V_j(t)\right|^\alpha,& \text{for $\alpha>2$}. \end{array} \right. \end{eqnarray} Note that for $u\in\mathbb{T}_n$, \begin{equation}\label{MCE1.2.5} \left[(\textbf{X}_u^{(t)})_{ij}\right]^\alpha\leq p^{(\alpha-1)(n-1)}(\textbf{X}_u^{(\alpha t)})_{ij},\qquad \forall \alpha>1, \end{equation} and \begin{equation}\label{MCE1.2.6} \sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(t)})_{ij}\leq \frac{\rho(t)^n}{V_j(t)}Y_{n,i}^{(t)}. \end{equation} Thus \begin{eqnarray*} \mathbb E\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^\alpha \leq p^{(\alpha-1)(n-1)}\mathbb E\sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(\alpha t)})_{ij} \leq\frac{V_i(\alpha t)}{V_j(\alpha t)}p^{(\alpha-1)(n-1)}\rho(\alpha t)^n. \end{eqnarray*} Applying this inequality to the first inequality of (\ref{MCES31}), and noticing that $\max_i\mathbb E\left[Y_{1,i}^{(t)}\right]^\alpha<\infty$, we obtain (\ref{MCE1.2.1}). To get (\ref{MCE1.2.2}), we only need to see that \begin{eqnarray*} \mathbb E\left(\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^2\right)^{\alpha/2}&\leq&p^{\alpha (n-1)/2}\mathbb E\left(\sum_{u\in\mathbb{T}_n}(\textbf{X}_u^{(2 t)})_{ij}\right)^{\alpha/2}\\ &\leq&V_j(2 t)^{-\alpha/2} p^{\alpha (n-1)/2}\rho(2 t)^{\alpha n/2}\mathbb E\left[Y_{n,i}^{(2t)}\right]^{\alpha/2}, \end{eqnarray*} and combing this inequality with the second inequality of (\ref{MCES31}). \end{proof} \begin{lem}\label{MCL1.2.2} Let $\alpha>1$. Fix $t\in\mathbb R$. If $\max\limits_i\left[\mathbb{E} Y_{1,i}^{(t)}\right]^\alpha<\infty$, then for each $i=1,\cdots,p$, \begin{equation}\label{MCE1.2.7} \mathbb E\left[Y_{n,i}^{(t)}\right]^\alpha\leq Cn^{1+\frac{2^m-1}{2^m}\alpha}\left[\max\{1,\; p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha},\; p^{\frac{2^l-1}{2^l}\alpha}\frac{\rho(2^lt)^{\alpha/2^l}}{\rho(t)^\alpha},l=1,2,\cdots,m\}\right]^n \end{equation} for $\alpha\in(2^m,2^{m+1}]$, where $m\geq0$ is an integer. \end{lem} \begin{proof} At first, for $m=0$, $\alpha\in(1,2]$. 
Applying Burkholder's inequality to the martingale $\{Y_{n,i}^{(t)}\}$ and by Lemma \ref{MCL1.2.1}, \begin{eqnarray*} \mathbb E\left|Y_{n+1,i}^{(t)}-1\right|^\alpha&\leq& C\sum_{k=0}^{n-1}\mathbb E\left|Y_{k+1,i}^{(t)}-Y_{k,i}^{(t)}\right|^\alpha\\ &\leq&C\sum_{k=0}^{n-1}p^{(\alpha-1)k}\left(\frac{\rho(\alpha t)}{\rho(t)^\alpha}\right)^k\\ &\leq& Cn\left[\max\{1,p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha}\}\right]^n. \end{eqnarray*} Thus $$\mathbb E\left[Y_{n,i}^{(t)}\right]^\alpha\leq Cn\left[\max\{1,p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha}\}\right]^n.$$ So (\ref{MCE1.2.7}) holds for $m=0$. Now suppose that (\ref{MCE1.2.7}) holds for some $m\geq0$, we shall prove it still holds for $m+1$. For $\alpha\in(2^{m+1},2^{m+2}]$, we have $\alpha/2\in(2^{m},2^{m+1}]$. Since $\max\limits_i\mathbb E\left[Y_{1,i}^{(t)}\right]^\alpha<\infty$ ensures that $\max\limits_i\mathbb E\left[Y_{1,i}^{(2t)}\right]^{\alpha/2}<\infty$, by induction, we have \begin{equation}\label{MCE1.2.8} \mathbb E\left[Y_{1,i}^{(2t)}\right]^{\alpha/2}\leq Ck^{1+\frac{2^m-1}{2^{m+1}}\alpha}\left[\max\{1,\; p^{\alpha/2-1} \frac{\rho(\alpha t)}{\rho(2t)^{\alpha/2}},\; p^{\frac{2^l-1}{2^{l+1}}\alpha}\frac{\rho(2^{l+1}t)^{\alpha/2^{l+1}}}{\rho(2t)^{\alpha/2}},l=1,2,\cdots,m\}\right]^k. \end{equation} Hence combing (\ref{MCE1.2.8}) with (\ref{MCE1.2.2}) we get \begin{equation}\label{MCE1.2.9} \mathbb E\left|Y_{k+1,i}^{(t)}-Y_{k,i}^{(t)}\right|^\alpha \leq Ck^{1+\frac{2^m-1}{2^{m+1}}\alpha}\left[\max\{ p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha},\; p^{\frac{2^l-1}{2^l}\alpha}\frac{\rho(2^lt)^{\alpha/2^l}}{\rho(t)^\alpha},l=1,2,\cdots,m+1\}\right]^k. \end{equation} By Burkholder's inequality and Minkowski's inequality, and applying (\ref{MCE1.2.9}), \begin{eqnarray*} \mathbb E\left|Y_{n,i}^{(t)}-1\right|^\alpha&\leq&C\left(\sum_{k=0}^{n-1}\left(\mathbb E\left|Y_{k+1,i}^{(t)}-Y_{k,i}^{(t)}\right|^\alpha\right)^{2/\alpha}\right)^{\alpha/2}\\ &\leq& C\left(\sum_{k=0}^{n-1}k^{(1+\frac{2^m-1}{2^{m+1}}\alpha)\frac{2}{\alpha}} \left[\max\{ p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha},\;p^{\frac{2^l-1}{2^l}\alpha} \frac{\rho(2^lt)^{\alpha/2^l}}{\rho(t)^\alpha},l=1,\cdots,m+1\}\right]^{2k/\alpha}\right)^{\alpha/2}\\ &\leq& Cn^{1+\frac{2^m-1}{2^{m+1}}\alpha+\frac{\alpha}{2}} \left[\max\{1, p^{\alpha-1}\frac{\rho(\alpha t)}{\rho(t)^\alpha},\;p^{\frac{2^l-1}{2^l}\alpha} \frac{\rho(2^lt)^{\alpha/2^l}}{\rho(t)^\alpha},l=1,\cdots,m+1\}\right]^{n}, \end{eqnarray*} which implies that (\ref{MCE1.2.7}) holds for $m+1$. This completes the proof. \end{proof} \noindent\textbf{Remark 3.1. } Lemmas \ref{MCL1.2.1}(b) and \ref{MCL1.2.2} also holds with $\beta$ in place of $2$ for any $\beta\in(1,2]$. To see this fact, observing that $$\mathbb E\left(\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^2\right)^{\alpha/2} \leq \mathbb E\left(\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u^{(t)})_{ij}\right]^\beta\right)^{\alpha/\beta}$$ in the proof of Lemma \ref{MCL1.2.1}, one just need to repeat the proofs of Lemmas \ref{MCL1.2.1}(b) and \ref{MCL1.2.2} with $\beta$ in place of $2$ for the case where $\alpha>2$. \section {Proof of Theorem \ref{MCT1}} Now we give the proof of Theorem \ref{MCT1}, by using the inequalities for the martingale $\{\mathbf Y_n^{(t)}\}$ (Lemmas \ref{MCL1.2.1} and \ref{MCL1.2.2}) which are obtained in Section 3. \begin{proof}[Proof of Theorem \ref{MCT1}]The proof of (a) is composed by two steps. 
Step 1: we will show that if $\mathbb E\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ and $p^{(\alpha-1)}\rho(\alpha)<1$, then for each $i$, $\mathbb EY_i=V_i$ and $\mathbb E[Y_i]^\alpha<\infty$, which implies that $\mathbb E\mathbf Y=\mathbf V$ and $0<\mathbb{E}\|\mathbf Y\|^\alpha<\infty$. In fact, it suffices to prove that $\sup\limits_n\mathbb E[Y_{n,i}]^\alpha<\infty$ for each $i$, which is equivalent to $Y_{n,i}\rightarrow Y_i$ in $L^\alpha$, so that $\mathbb EY_i=V_i$ and $0<\mathbb E[Y_i]^\alpha<\infty$. The condition $\mathbb{E}\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$, or equivalently, $\max\limits_i\mathbb E[Y_{1,i}]^\alpha<\infty$, ensures the finiteness of $\textbf{M}(t)$ for all $t\in[1,\alpha]$. Moreover, since $\textbf{M}(1)=\textbf{M}$ is strictly positive, by the log-convexity of $(\textbf{M}(t))_{ij}$, we have $\textbf{M}(t)$ is strictly positive for all $t\in[1,\alpha]$, so $\rho(t)$ exists for all $t\in[1,\alpha]$. For $\alpha\in(1,2]$, by Burkholder's inequality and Lemma \ref{MCL1.2.1}, $$\sup_n\mathbb E|Y_{n,i}-1|^\alpha\leq C\sum_{n=0}^{\infty}\mathbb E|Y_{n+1,i}-Y_{n,i}|^2\leq C\sum_{n=0}^{\infty}p^{(\alpha-1)n}\rho(\alpha)^n<\infty.$$ For $\alpha>2$, by Burkholder's inequality and Minkowski's inequality, $$\sup_n\mathbb E|Y_{n,i}-1|^\alpha\leq C\left(\sum_{n=0}^{\infty}(\mathbb E|Y_{n+1,i}-Y_{n,i}|^\alpha)^{2/\alpha}\right)^{\alpha/2}.$$ We shall show the series $\sum\limits_{n=0}^{\infty}(\mathbb E|Y_{n+1,i}-Y_{n,i}|^\alpha)^{2/\alpha}<\infty$. Observing that $\max\limits_i\mathbb E\left[Y_{1,i}^{(2)}\right]^{\alpha/2}<\infty$ since $\max\limits_i\mathbb E(Y_{1,i})^{\alpha}<\infty$, by Lemma \ref{MCL1.2.2}, we have for $\alpha\in(2^m,2^{m+1}]$ ($m\geq1$ is an integer), \begin{equation}\label{MCE1.2.11} \mathbb E\left[Y_{1,i}^{(2)}\right]^{\alpha/2}\leq Cn^\gamma\left[\max\{1,p^{\alpha/2-1}\frac{\rho(\alpha)}{\rho(2)^{\alpha/2}}, p^{\frac{2^l-1}{2^{l+1}}\alpha}\frac{\rho(2^{l+1})^{\alpha/2^{l+1}}}{\rho(2)^{\alpha/2}},l=1,\cdots,m-1\}\right]^n, \end{equation} where $\gamma=1+\frac{2^{m-1}-1}{2^m}\alpha\leq\frac{\alpha}{2}$. By Lemma \ref{MCL1.2.1} and (\ref{MCE1.2.11}), \begin{eqnarray*} \mathbb E|Y_{n+1,i}-Y_{n,i}|^\alpha&\leq&Cp^{\alpha n/2}\rho(2)^{\alpha n/2}\mathbb E\left[(Y_n^{(2)})^i\right]^{\alpha/2}\\ &\leq&Cn^{\alpha/2}\left[\max\{p^{\alpha-1}\rho(\alpha),p^{\frac{2^l-1}{2^l}\alpha}\rho(2^l)^{\alpha/2^l},l=1,\cdots,m\}\right]^n. \end{eqnarray*} Therefore $$\sum_{n=0}^{\infty}\left(\mathbb E|Y_{n+1,i}-Y_{n,i}|^\alpha\right)^{2/\alpha}\leq C\sum_nn\left[\max\{p^{\alpha-1}\rho(\alpha),p^{\frac{2^l-1}{2^l}\alpha}\rho(2^l)^{\alpha/2^l},l=1,\cdots,m\}\right]^{2n/\alpha}.$$ The series in the right side of the inequality above converges if and only if \begin{equation}\label{MCE1.2.12} \max\{p^{\alpha-1}\rho(\alpha),p^{\frac{2^l-1}{2^l}\alpha}\rho(2^l)^{\alpha/2^l},l=1,\cdots,m\}<1. \end{equation} Note that $\rho(t)$ is log-convex since $(\textbf{M}(t))_{ij}$ is log-convex (Kingman 1961). We have $\forall \beta\in(1,\alpha)$, $\rho(\beta)\leq\rho(\alpha)^{(\beta-1)/(\alpha-1)}$. Thus $$p^{\frac{\beta-1}{\beta}\alpha}\rho(\beta)^{\alpha/\beta} \leq\left[p^{\alpha-1}\rho(\alpha)\right]^{\frac{\alpha(\beta-1)}{\beta(\alpha-1)}}<1,$$ and so (\ref{MCE1.2.12}) is true from this fact. Step 2: we will prove that if $\mathbb{E}\|\sum\limits_{k=1}^N\textbf{A}_k\|^\alpha<\infty$ and $p^{(\alpha-1)}\rho_r(\alpha)<1$ for some $r$, then for each $i$, $\mathbb EY_i=V_i$ and $\mathbb E[Y_i]^\alpha<\infty$. 
Let $\bar{N}:=\#\mathbb{T}_r$ be the population size of the $r$-th generation and $\bar{\textbf{A}}_i:=\textbf{X}_{u^i}$, where $u^i$ denotes the $i$-th particle of the $r$-th generation. We consider $(\bar{N},\bar{\textbf{A}}_1,\bar{\textbf{A}}_2,\cdots)$. Clearly, $\bar{\textbf{M}}:=\mathbb E\sum\limits_{i=1}^{\bar{N}}\bar{\textbf{A}}_i$ is finite and strictly positive with the maximum-modulus eigenvalue $1$ and the corresponding eigenvectors $\bar{\textbf{U}}=\textbf{U},\bar{\textbf{V}}=\textbf{V}$. Let $\mathbb{\bar T}$ be the corresponding Galton-Watson tree and $\mathbb{\bar T}_n=\{u\in \mathbb{\bar T}: |u|=n\}$. Define \begin{equation*} \bar{\textbf{Y}}_n:=\sum_{u\in\mathbb{\bar T}_n}\bar{\textbf{X}}_u\textbf{V}\quad\text{with}\; \bar{\textbf{X}}_u:=\bar{\textbf{A}}_{u_1}\cdots\bar{\textbf{A}}_{u_1\cdots u_n}\; \text{for}\;u\in\mathbb{\bar T}_n. \end{equation*} Similarly, we define $\bar{\textbf{M}}(t)$, $\bar{\textbf{Y}}_n(t)$, $\bar{\rho}(t)$ and $\bar{\textbf{V}}(t)$ as in Section 2. It is easy to see that $\bar{\textbf{Y}}_n$ has the same distribution as $\textbf{Y}_{nr}$, therefore, $\bar{\textbf{Y}}:=\lim\limits_{n\rightarrow\infty}\bar{\textbf{Y}}_n$ a.s. has the same distribution as $\textbf{Y}$. To get $\mathbb EY_i=V_i$ and $\mathbb{E}[Y_i]^\alpha<\infty$, by Step 1, we only need to verify $\mathbb{E}\|\sum\limits_{i=1}^{\bar{N}}\bar{\textbf{A}}_i\|^\alpha<\infty$ and $p^{(\alpha-1)}\bar{\rho}(\alpha)<1$. The latter is obvious since $\bar{\textbf{M}}(t)=\textbf{M}_r(t)$ and so $\bar{\rho}(\alpha)=\rho_r(\alpha)$. To verify the former, we notice that $\mathbb E\|\sum\limits_{i=1}^{\bar{N}}\bar{\textbf{A}}_i\|^\alpha<\infty$ is equivalent to $\max\limits_i\mathbb E[\bar{Y}_{1,i}]^\alpha<\infty$, which is true by Lemma \ref{MCL1.2.2}, since $\mathbb E[\bar {Y}_{1,i}]^\alpha=\mathbb E[ Y_{r,i}]^\alpha$. Now we prove the converse (b). Suppose that $0<\mathbb{E}\|\mathbf Y\|^\alpha<\infty$, which implies that $\max\limits_i\mathbb E[Y_i]^\alpha<\infty$ and $\mathbf Y$ is non-degenerate. As $\mathbf Y$ is a non-trivial solution of the equation (\ref{E}), we have $\mathbb{E} \mathbf Y=\mathbf M \mathbb{E}\mathbf Y$ with $\mathbb{E} \mathbf Y\neq \mathbf 0$, which means that $\mathbb{E} \mathbf Y$ is a non-trivial eigenvector corresponding to the eigenvalue $1$, and so $\mathbb{E} \mathbf Y=c\mathbf V$ for some constant $c>0$. By equation (\ref{E}), for each $i$, $$Y_i=\sum_{k=1}^N(\mathbf A_k \mathbf Y(k))_i=\sum_{k=1}^N\sum_{j=1}^p(\mathbf A_k)_{ij}Y_j(k).$$ By Jensen's inequality, for each $i$, \begin{eqnarray*} \mathbb{E}[Y_i]^\alpha&\geq & \mathbb{E}\left[\mathbb{E}\left(\sum_{k=1}^N\sum_{j=1}^p(\mathbf A_k)_{ij}Y_j(k)\big{|}\mathcal F_1\right)\right]^\alpha\\ &=&\mathbb{E}\left[\sum_{k=1}^N\sum_{j=1}^p(\mathbf A_k)_{ij}\mathbb{E} Y_j\right]^\alpha \\&=&c^\alpha\mathbb{E}\left[\sum_{k=1}^N(\mathbf A_k \mathbf V)_i\right]^\alpha\\ &=&c^\alpha\mathbb{E}[Y_{1,i}]^\alpha. \end{eqnarray*} Thus $\max\limits_i\mathbb E[Y_i]^\alpha<\infty$ implies $\max\limits_i\mathbb E[Y_{1,i}]^\alpha<\infty$, or equivalently, $\mathbb{E}\|\sum_{k=1}^N\textbf{A}_k\|^\alpha<\infty$. Next, we consider $\rho_n(t)$.
Since \begin{eqnarray}\label{MCE1.2.13} [Y_i]^\alpha=\left[\sum_{j=1}^p\sum_{u\in\mathbb{T}_n}(\textbf{X}_u)_{ij}Y_j(u)\right]^\alpha\geq \sum_{j=1}^p\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u)_{ij}\right]^\alpha\left[ Y_j(u)\right]^\alpha, \end{eqnarray} we obtain \begin{equation} \label{MCE1.2.14} \mathbb{E}[Y_i]^\alpha \geq \sum_{j=1}^p\mathbb{E}\sum_{u\in\mathbb{T}_n}\left[(\textbf{X}_u)_{ij}\right]^\alpha \mathbb{E}[ Y_j]^\alpha=\sum_{j=1}^p(\textbf{M}_n(\alpha))_{ij} \mathbb{E}[ Y_j]^\alpha. \end{equation} Thus \begin{equation} \label{MCE1.2.15} \sum_{i=1}^pU_{n,i}(\alpha)\mathbb{E}[Y_i]^\alpha\geq\sum_{j=1}^p \mathbb{E}[ Y_j]^\alpha\sum_{i=1}^pU_{n,i}(\alpha) (\textbf{M}_n(\alpha))_{ij}=\rho_n(\alpha)\sum_{j=1}^pU_{n,j}(\alpha)\mathbb{E}[Y_j]^\alpha, \end{equation} which leads to $\rho_n(\alpha)\leq1$. If additionally (\ref{MCEC2}) holds, then for each $i$, $\mathbb{P}\left(\sum\limits_{j=1}^p\sum\limits_{u\in \mathbb{T}_n} \mathbf{1}_{\{(\textbf{X}_u)_{ij}>0\}}=0\; or \;1\right)<1$. Hence the strictly inequality in (\ref{MCE1.2.13}) holds with positive probability, and so both (\ref{MCE1.2.14}) and (\ref{MCE1.2.15}) are strictly inequalities, which leads to $\rho_n(\alpha)<1$. \end{proof} \section {Proof of Theorems \ref{MCT2} and \ref{MCT3}} We will prove Theorems \ref{MCT2} and \ref{MCT3} based on the equation (\ref{E}), with ideas from Liu \cite{liu2}. Recall that $\phi(\mathbf t)=\mathbb{E} e^{-\mathbf t\cdot\mathbf Z}$ is the Laplace transform of the non-trivial solution $\mathbf Z$ to the equation (\ref{E}). By (\ref{E}), $\phi(\mathbf t)$ satisfies the functional equation \begin{equation}\label{E1} \phi(\mathbf t)=\mathbb{E}\prod_{k=1}^N\phi(\mathbf t \mathbf A_k). \end{equation} Our proofs are based on this equation. To prove Theorem \ref{MCT2}, the two lemmas below are necessary. \begin{lem}\label{MCLN1} Let $\phi:\mathbb R_+^p\mapsto\mathbb R_+$ be a bounded function, and $\mathbf A=(a_{ij})$ be a non-zero matrix such that for some $0<q<1$, $t_\varepsilon>0$ and all $\mathbf t$ satisfying $\|\mathbf t\|>t_\varepsilon$, \begin{equation}\label{MCLE41} \phi(\mathbf t)\leq q \mathbb{E}\phi(\mathbf t\mathbf A). \end{equation} If $q\mathbb{E}\left(\min\limits_i\sum\limits_j a_{ij}\right)^{-\lambda}<1$, then $\phi(\mathbf t)=O(\|\mathbf t\|^{-\lambda})$ ($\|\mathbf t\|\rightarrow\infty$). \end{lem} \begin{proof} Assume that $\phi$ is bounded by a constant $K$. For $\mathbf t\neq \mathbf 0$, if $\|\mathbf t\|\leq t_\varepsilon$, then $\phi(\mathbf t)\leq K\leq K t_\varepsilon^\lambda \|\mathbf t\|^{-\lambda}$, which yields by (\ref{MCLE41}) \begin{equation}\label{MCLE42} \phi(\mathbf t)\leq q \mathbb{E}\phi(\mathbf t\mathbf A)+ C \|\mathbf t\|^{-\lambda},\quad \text{for all $\mathbf t\neq \mathbf 0$,} \end{equation} where $C$ is a general positive constant. Let $\{\mathbf A_k\}$ be a family of i.i.d copies of $\mathbf A$. 
By induction on (\ref{MCLE42}), \begin{equation}\label{MCLE43} \phi(\mathbf t)\leq q^n \mathbb{E}\phi(\mathbf t\mathbf A_1\cdots \mathbf A_n)+ C \left[\sum_{k=1}^{n-1}\left(q^{k-1}\mathbb{E}\|\mathbf t\mathbf A_1\cdots \mathbf A_{k-1}\|\right)^{-\lambda}+\|\mathbf t\|^{-\lambda}\right],\quad \text{for all $\mathbf t\neq \mathbf 0$.} \end{equation} Note that for any matrix $\mathbf A=(a_{ij})$ and vector $\mathbf t$, we have $$\|\mathbf t\mathbf A\|=\sum_j(\mathbf t \mathbf A)_j=\sum_j\sum_i t_i a_{ij}\geq \sum_i t_i \min_i\sum_j a_{ij}=\|\mathbf t\|\left(\min_i\sum_j a_{ij}\right).$$ Thus by the independency of $\{\mathbf A_k\}$, \begin{equation}\label{MCLE44} \mathbb{E}\|\mathbf t\mathbf A_1\cdots \mathbf A_{k}\|^{-\lambda} \leq \|\mathbf t\|^{-\lambda}\mathbb{E}\left(\prod_{l=1}^k\left(\min_i\sum_j(\mathbf A_l)_{ij}\right)\right)^{-\lambda}=\|\mathbf t\|^{-\lambda}\left[\mathbb{E}\left(\min_i\sum_j a_{ij}\right)^{-\lambda }\right]^k. \end{equation} Combing (\ref{MCLE44}) with (\ref{MCLE43}) and letting $n\rightarrow\infty$ leads to $\phi(\mathbf t)=O(\|\mathbf t\|^{-\lambda})$ ($\|\mathbf t\|\rightarrow\infty$). \end{proof} \begin{lem}[\cite{liu1999}, Lemma 4.4]\label{MCLN2} \label{MDL3.2} Let $X$ be a positive random variable. For $0<a<\infty$, consider the following statements:\newline \begin{equation*} \begin{array}{ll} (i)\;\mathbb{E}X^{-a}<\infty; & (ii)\;\mathbb{E}e^{-tX}=O(t^{-a})(t\rightarrow\infty); \\ (iii)\;\mathbb{P}(X\leq x)=O(x^a)(x\rightarrow0); & (iv)\;\forall b\in(0,a), \mathbb{E}X^{-b}<\infty. \\ \end{array} \end{equation*} Then the following implications hold: (i) $\mathbb{R}ightarrow$ (ii) $\Leftrightarrow$ (iii) $\mathbb{R}ightarrow$ (iv). \end{lem} \begin{proof}[Proof of Theorem \ref{MCT2}] Let $N_{\delta}=\sum\limits_{k=1}^N\mathbf 1_{\{\min\limits_{i}\sum\limits_{j} ( \mathbf A_k)_{ij}>\delta\}}$ for $\delta>0$. Then $N_\delta \uparrow N$, as $\delta\downarrow0$. Since $\phi(\mathbf t)=\mathbb E e^{-\mathbf t\cdot \mathbf Z}\rightarrow 0$ as $\|\mathbf t\|\rightarrow\infty$, there exists $t_\varepsilon>0$ such that for $\|\mathbf t\|>t_\varepsilon$, $\phi(\mathbf t)<\varepsilon$. For $\|\mathbf t\|>t_\varepsilon/\delta$, if $\min\limits_{i}\sum\limits_{j} ( \mathbf A_k)_{ij}>\delta$, we have $\|\mathbf t \mathbf A_k\|\geq \|\mathbf t\|\min\limits_{i}\sum\limits_{j} ( \mathbf A_k)_{ij}>t_\varepsilon$. By equation (\ref{E1}), \begin{eqnarray*} \phi(\mathbf t)\leq \mathbb{E}\phi(\mathbf t\mathbf A_1)\left(\varepsilon^{N_\delta-1}\mathbf 1_{\{N_\delta\geq1\}}+\mathbf 1_{\{N_\delta=0\}}\right)=q_{\varepsilon,\delta}\mathbb E \phi(\mathbf t\tilde{\mathbf A}), \end{eqnarray*} where $q_{\varepsilon,\delta}=\mathbb{E}\left(\varepsilon^{N_\delta-1}\mathbf 1_{\{N_\delta\geq1\}}+\mathbf 1_{\{N_\delta=0\}}\right)$ and $\tilde{\mathbf A}=(\tilde a_{ij})$ is a random matrix whose distribution is determined by $\mathbb E g(\tilde{\mathbf A})=\frac{1}{q_{\varepsilon, \delta}}\mathbb{E} g({\mathbf A}_1)\left(\varepsilon^{N_\delta-1}\mathbf 1_{\{N_\delta\geq1\}}+\mathbf 1_{\{N_\delta=0\}}\right)$ for all bounded and measurable function $g$ on $\mathbb R_+^{p^2}$. 
We can see that by the dominated convergence theorem, $$q_{\varepsilon,\delta}\stackrel{\delta\downarrow0}{\longrightarrow}\mathbb{E}\varepsilon^{N-1}\mathbf 1_{\{N\geq1\}}\stackrel{\varepsilon\downarrow0}{\longrightarrow}\mathbb{P}(N=1)<1,$$ and since $\mathbb{E}\left(\min\limits_{i}\sum\limits_{j} a_{ij}\right)^{-\lambda}<\infty$, \begin{eqnarray*} q_{\varepsilon,\delta}\mathbb{E}\left(\min\limits_{i}\sum\limits_{j}\tilde{ a}_{ij}\right)^{-\lambda}&=&\mathbb{E}\left(\min\limits_{i}\sum\limits_{j} a_{ij}\right)^{-\lambda}\left(\varepsilon^{N_\delta-1}\mathbf 1_{\{N_\delta\geq1\}}+\mathbf 1_{\{N_\delta=0\}}\right)\\ &\stackrel{\delta\downarrow0}{\longrightarrow}&\mathbb{E}\left(\min\limits_{i}\sum\limits_{j} a_{ij}\right)^{-\lambda}\mathbf 1_{\{N\geq1\}}\stackrel{\varepsilon\downarrow0}{\longrightarrow}\mathbb{E}\left(\min\limits_{i}\sum\limits_{j} a_{ij}\right)^{-\lambda}\mathbf 1_{\{N=1\}}<1. \end{eqnarray*} By Lemma \ref{MCLN1}, $\phi(\mathbf t)=O(\|\mathbf t\|^{-\lambda})$ ($\|\mathbf t\|\rightarrow\infty$). Thus for given non-zero $\mathbf y=(y_1,\cdots,y_p)\in \mathbb R_{+}^p$, $\mathbb{E} e^{-t\mathbf y\cdot \mathbf Z}=O(t^{-\lambda})$ ($t\rightarrow\infty$), so that by Lemma \ref{MCLN2}, $\mathbb{P}(\mathbf y\cdot \mathbf Z\leq x)=O(x^\lambda)$ ($x\rightarrow0$) and $\mathbb{E}(\mathbf y\cdot \mathbf Z)^{-\lambda_1}<\infty$, $\forall 0<\lambda_1<\lambda$. For the second part, notice that we have obtained $ \phi(\mathbf t)\leq C \|\mathbf t\|^{-\lambda}$ for all $\|\mathbf t\|>0$ in the first part, where $C$ is a positive constant. By equation (\ref{E1}), $$\phi(\mathbf t)\leq \mathbb{E}\prod_{k=1}^{\underline{m}}\phi(\mathbf t\mathbf A_k)\leq C^{\underline{m}}\mathbb{E}\prod_{k=1}^{\underline{m}}\|\mathbf t\mathbf A_k\|^{-\lambda}\leq C^{\underline{m}}\|\mathbf t\|^{-\underline{m}\lambda} \mathbb E\left[\prod\limits_{k=1}^{\underline {m}}\left(\min\limits_i\sum\limits_{j=1}^p (\mathbf A_k)_{ij}\right)^{-\lambda}\right].$$ The remaining results follow from Lemma \ref{MCLN2}. \end{proof} \begin{proof}[Proof of Theorem \ref{MCT3}]We only prove the results for $\phi(\mathbf t)$. The assertions for $\mathbb{P}(\mathbf y\cdot \mathbf Z\leq x)$ follow from those about $\mathbb{E}e^{-t\mathbf y\cdot \mathbf Z}$ and the Tauberian theorem of exponential type (cf.\ \cite{liu96}). We first prove (a). By equation (\ref{E}), $$1=\sum_{j=1}^p V_j=\mathbb{E}\sum_{k=1}^N\sum_{j,l=1}^p (\mathbf A_k)_{jl}V_l>\mathbb{E}\sum_{k=1}^{\underline{m}}\sum_{j,l=1}^p (\mathbf A_k)_{jl}V_l\geq \underline{a}\;\underline{m}\;p.$$ The strict inequality holds because of (\ref{MCEC2}) and $\mathbb{P}(N=\underline{m})>0$. Therefore, we have $\gamma=-\log\underline{m}/\log{(\underline{a}p)}\in(0,1)$. Since for all $k$, $$\phi(\mathbf t\mathbf A_k)=\mathbb{E}\exp\left\{-\sum_{i,j=1}^pt_i(\mathbf A_k)_{ij}Z_j\right\}\leq \mathbb{E}\exp\{-\underline{a}\|\mathbf t\|\mathbf e\cdot\mathbf Z\}=\phi(\underline{a}\|\mathbf t\|\mathbf e),$$ where $\mathbf e=(1,1,\cdots, 1)$, by equation (\ref{E1}), \begin{equation}\label{MCNT21} \phi(\mathbf t)\leq \mathbb{E}\prod_{k=1}^{\underline{m}}\phi(\mathbf t \mathbf A_k)\leq \left[\phi(\underline{a}\|\mathbf t\|\mathbf e)\right]^{\underline{m}}. \end{equation} Applying (\ref{MCNT21}) with $\mathbf t=\underline{a}\|\mathbf t\|\mathbf e$, we have $\phi(\underline{a}\|\mathbf t\|\mathbf e)\leq \left[\phi(\underline{a}^2p\|\mathbf t\|\mathbf e)\right]^{\underline{m}}$, so that $\phi(\mathbf t)\leq \left[\phi(\underline{a}^2p\|\mathbf t\|\mathbf e)\right]^{\underline{m}^2}$.
By iteration, we get \begin{equation}\label{MCNT22} \phi(\mathbf t)\leq \left[\phi(\underline{a}^kp^{k-1}\|\mathbf t\|\mathbf e)\right]^{\underline{m}^k}. \end{equation} As $\underline{a}p<1$, for $\|\mathbf t\|\geq p$, there exists an integer $k\geq0$ such that $p/(\underline{a}p)^k\leq\|\mathbf t\|< p/(\underline{a}p)^{k+1}$. So this $k$ satisfies \begin{equation}\label{MCNT24} \frac{\log p-\log \|\mathbf t\|}{\log {(\underline{a}p})}-1<k\leq \frac{\log p-\log \|\mathbf t\|}{\log {(\underline{a}p})}. \end{equation} For any $x\geq1$, one can see that \begin{equation}\label{MCNT23} \phi(x\mathbf e)=\mathbb{E}\exp\left\{-x\sum_{j=1}^pZ_j\right\}\leq \mathbb{E}\exp\{-\sum_{j=1}^pZ_j\}=\phi(\mathbf e)<1. \end{equation} Since $\underline{a}^kp^{k-1}\|\mathbf t\|\geq 1$, by (\ref{MCNT22}), (\ref{MCNT23}) and (\ref{MCNT24}) , we have $$\log \phi(\mathbf t)\leq \underline{m}^k\log \phi(\mathbf e)\leq \exp\left\{\frac{\log\underline{m}}{\log(\underline{a}p)}(\log p-\log\|\mathbf t\|)\right\}=-C_1\|\mathbf t\|^\gamma,$$ where $C_1=- p^{-\gamma}\log \phi(\mathbf e)>0 $. We then prove (b). The proof is similar to that of (a). If $\max_{ij}(\mathbf A_k)_{ij}\leq \underline{a}+\varepsilon$, then $$\phi(\mathbf t\mathbf A_k)\geq\phi\left((\underline{a}+\varepsilon)\|\mathbf t\|\mathbf e\right).$$ By equation (\ref{E1}), \begin{equation}\label{MCNT25} \phi(\mathbf t)\geq \mathbb{E}\prod_{k=1}^{\underline{m}}\phi(\mathbf t \mathbf A_k)\mathbf 1_{\{N=\underline{m},\; \max\limits_{ij}(\mathbf A_k)_{ij}\leq \underline{a}+\varepsilon,\forall k\}} \geq \rho\left[\phi((\underline{a}+\varepsilon)\|\mathbf t\|\mathbf e)\right]^{\underline{m}}, \end{equation} where $\rho=\mathbb{P}(N=\underline{m},\; \max\limits_{ij}(\mathbf A_k)_{ij}\leq \underline{a}+\varepsilon,\forall k)<1$. By iteration, we get \begin{equation}\label{MCNT26} \phi(\mathbf t)\geq \rho^{\sum_{j=0}^{k-1}\underline{m}^j}\left[\phi((\underline{a}+\varepsilon)^kp^{k-1}\|\mathbf t\|\mathbf e)\right]^{\underline{m}^k}. \end{equation} As $(\underline{a}+\varepsilon)p<1$, for $\|\mathbf t\|\geq p$, there exists an integer $k\geq1$ such that $p/((\underline{a}+\varepsilon)p)^{k-1}\leq\|\mathbf t\|< p/((\underline{a}+\varepsilon)p)^{k}$. Since $\phi(x\mathbf e)\geq \phi(\mathbf e)$ for any $x<1$ and $(\underline{a}+\varepsilon)^kp^{k-1}\|\mathbf t\|< 1$, we have $$\phi((\underline{a}+\varepsilon)^kp^{k-1}\|\mathbf t\|\mathbf e)\geq \phi(\mathbf e).$$ Therefore, (\ref{MCNT26}) yields \begin{eqnarray*} \log \phi(\mathbf t)&\geq &\underline{m}^k\left(\log \phi(\mathbf e)+ \underline{m}^{-k} \sum\limits_{j=0}^{k-1}\underline{m}^j\log \rho \right)\\ &\geq &\underline{m}^k\left(\log \phi(\mathbf e)+ \frac{\log \rho}{\underline{m}-1} \right)\\ &\geq& \exp\left\{\frac{\log\underline{m}}{\log((\underline{a}+\varepsilon)p)}(\log p-\log\|\mathbf t\|)+\log \underline{m}\right\}\left(\log \phi(\mathbf e)+ \frac{\log \rho}{\underline{m}-1} \right)\\ &=&-C_2\|\mathbf t\|^{\gamma(\varepsilon)}, \end{eqnarray*} where $C_2=- p^{-\gamma(\varepsilon)}\underline{m}\left(\log \phi(\mathbf e)+ \frac{\log \rho}{\underline{m}-1} \right)>0$. \end{proof} \section {Moments for the complex case} In this section, we consider the complex case, where in equation (\ref{E}), all the matrix $\mathbf A_k$ and the vectors $\mathbf Z, \mathbf Z(k)$ are complex (with $\mathbb C$ in place of $\mathbb R_+$). 
Here we still interested in the existence of the $\alpha$th-moment ($\alpha>1$) solution, or in other words, the $L^\alpha$ convergence and the $\alpha$th-moment of the Mandelbrot's martingale $\{\mathbf Y_n\}$ defined by (\ref{beqYn}). Besides Assumption (H), we assume moreover that $$\hat{\textbf{M}}:=\mathbb{E}\sum\limits_{i=1}^{N}\hat{\textbf{A}}_k\quad\text{with}\;(\hat{\textbf{A}}_k)_{ij}:=|(\textbf{A}_k)_{ij}|$$ is finite and strictly positive. For $t\in\mathbb{R}$ fixed, let $$\hat{\textbf{M}}(t):=\mathbb{E}\sum_{i=1}^N\hat{\textbf{A}}_k^{(t)}\quad\text{with}\; (\hat{\textbf{A}}_k^{(t)})_{ij}:=(\hat{\textbf{A}}_k)^t_{ij},$$ whose maximum-modulus eigenvalue is denoted by $\hat\rho(t)$ and the corresponding normalized left and right positive eigenvectors by $\hat{\textbf{U}}(t),\hat{\textbf{V}}(t)$. Define $$\hat{\textbf{Y}}_n^{(t)}:=\frac{\sum\limits_{u\in\mathbb{T}_n}\hat{\textbf{X}}_u^{(t)}\hat{\textbf{V}}(t)}{\hat\rho(t)^n}\quad\text{with}\; \hat{\textbf{X}}_u^{(t)}:=\hat{\textbf{A}}_{u_1}^{(t)}\cdots\hat{\textbf{A}}_{u_1\cdots u_n}^{(t)}\; \text{for}\;u\in\mathbb{T}_n.$$ Obviously, $\{\hat{\textbf{Y}}_n^{(t)}\}$ has the same structure as the martingale $\{\textbf{Y}_n^{(t)}\}$ of the real case for which we have established inequalities in Section 3, therefore, we can apply these results (Lemmas \ref{MCL1.2.1} and \ref{MCL1.2.2}) to the martingale $\{\hat{\textbf{Y}}_n^{(t)}\}$. Following similar arguments to the proof of Theorem \ref{MCT1}, we reach the following result for the the complex case. \begin{thm}[Complex case]\label{MCT21}Assume that all the matrix $\mathbf A_k$ and the vectors $\mathbf Z, \mathbf Z(k)$ are complex. Let $\alpha>1$. If $\mathbb{E}\|\sum\limits_{i=1}^N\hat{\textbf{A}_i} \|^\alpha<\infty$ and either of the following assertions holds: \begin{itemize} \item[(i)] $\alpha \in(1,2]$ and $p^{(\alpha-1)}\hat{\rho}(\alpha)<1$; \item[(ii)] $\alpha>2$ and $\max\{p^{\alpha-1}\hat{\rho}(\alpha), p^{\alpha/\beta}\hat{\rho}(\beta)\}<1$ for some $\beta\in(1,2]$, \end{itemize} then $\sup\limits_n\mathbb{E}\|\mathbf Y_n\|^\alpha<\infty$, and $\{\mathbf Y_n\}$ converges a.s. and in $L^\alpha$ to a random vector $\mathbf Y$, so that $\mathbb{E}\mathbf Y=\mathbf V$ and $0<\mathbb{E}\|\mathbf Y\|^\alpha<\infty$. \end{thm} In particular, for the case $p=1$, it is easy to see that $V=1$ and $\hat{\rho}(t)=\hat{m}(t)=\mathbb{E}\sum\limits_{i=1}^N|A_i|^t$. \begin{co}[case p=1]Let $p=1$ and $\alpha>1$. If $\mathbb{E} \left(\sum\limits_{i=1}^N|A_i|\right)^\alpha<\infty$ and either of the following assertions holds:\\ \begin{itemize} \item[(i)] $\alpha \in(1,2]$ and $ \hat{\rho}(\alpha)<1$; \item[(ii)]$\alpha>2$ and $\max\{ \hat{\rho}(\alpha), \hat{\rho}(\beta)\}<1$ for some $\beta\in(1,2]$, \end{itemize} then $\sup_n\mathbb{E}|Y_n|^\alpha<\infty$ and $\{Y_n\}$ converges a.s. and in $L^\alpha$ to a random variable $Y$, so that $\mathbb{E} Y=1$ and $0<\mathbb{E} Y^\alpha<\infty$. \end{co} The proof of Theorem \ref{MCT21} is similar to that of Theorem \ref{MCT1}. We first show several lemmas for the martingale $\{\mathbf Y_n^{(t)}\}$. \begin{lem}\label{MCL2.1}Let $\alpha>1$. Fix $t\in\mathbb R$. Assume that $\max\limits_i\mathbb{E}|Y_{1,i}^{(t)}|^\alpha<\infty$. 
Then for each $i=1,2,\cdots, p$, \begin{itemize} \item[(a)]if $\alpha\in(1,2]$, \begin{equation}\label{MCE2.2.1} \mathbb{E}\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha\leq C p^{(\alpha-1)n}\left[\frac{\hat{\rho}(\alpha t)}{|\rho(t)|^\alpha}\right]^n; \end{equation} \item[(b)]if $\alpha>2$, for any $\beta\in(1,2]$, \begin{equation}\label{MCE2.2.2} \mathbb{E}\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha\leq C p^{\alpha n/2}\left[\frac{\hat{\rho}(\beta t)^{\alpha/\beta}}{|\rho(t)|^\alpha}\right]^n \mathbb{E}\left[\hat{Y}_{n,i}^{(\beta t)}\right]^{\alpha/\beta }, \end{equation} \end{itemize} where $C$ is a constant depending on $\alpha,p,t$. \end{lem} \begin{proof} Notice that $|(\textbf{X}_u^{(t)})_{ij}|\leq(\hat{\textbf{X}}_u^{(t)})_{ij}$. Applying Burkholder's inequality, we get \begin{eqnarray*} \mathbb{E}\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha&\leq& \frac{C}{|\rho(t)|^{\alpha n}}\sum_{j=1}^p\mathbb{E}\left( \sum_{u\in\mathbb{T}_n}\left|(\textbf{X}_u^{(t)})_{ij}\right|^2\left|Y_{1,j}^{(t)}(u)-V_j(t)\right|^2\right)^{\alpha/2}\\ &\leq&\frac{C}{|\rho(t)|^{\alpha n}}\sum_{j=1}^p\mathbb{E}\left( \sum_{u\in\mathbb{T}_n}\left[(\hat{\textbf{X}}_u^{(t)})_{ij}\right]^2\left|Y_{1,j}^{(t)}(u)-V_j(t)\right|^2\right)^{\alpha/2}. \end{eqnarray*} Then repeat the proof of Lemma \ref{MCL1.2.1} (with $\beta$ in place of $2$ for the case where $\alpha>2$). \end{proof} Apply Lemma \ref{MCL1.2.2} (with $\beta$ in place of $2$) to (\ref{MCE2.2.2}), we immediately get the following lemma. \begin{lem}\label{MCL2.2} Let $\alpha>1$. Fix $t\in\mathbb R$. If $\max\limits_i\mathbb{E}\left|Y_{1,i}^{(t)}\right|^\alpha<\infty$ and $\max\limits_i\mathbb{E}\left|\hat{Y}_{1,i}^{(\beta t)}\right|^{\alpha/\beta}<\infty$ for some $\beta\in(1,2]$, then for $\alpha\in(\beta^m,\beta^{m+1}]$ ($m\geq 1$ is an integer), $$\mathbb{E}\left|Y_{n+1,i}^{(t)}-Y_{n,i}^{(t)}\right|^\alpha\leq C n^{\alpha/\beta}\left[\max\{p^{\alpha-1}\frac{\hat{\rho}(\alpha t)}{|\rho(t)|^\alpha},\; p^{\frac{\beta^l-1}{\beta^l}\alpha}\frac{\hat{\rho}(\beta^lt)^{\alpha/\beta^l}}{|\rho(t)|^\alpha},l=1,\cdots,m \}\right]^n.$$ \end{lem} Combing Lemmas \ref{MCL2.1} and \ref{MCL2.2} leads to Lemma \ref{MCL2.3} below. \begin{lem}\label{MCL2.3} Let $\alpha>1$. Assume that $\mathbb{E}\|\sum\limits_{k=1}^N\hat{\textbf{A}_k}\|^\alpha<\infty$. Then for each $i=1,2,\cdots, p$, \begin{itemize} \item[(a)]if $\alpha\in(1,2]$, \begin{equation}\label{MCE2.2.3} \mathbb{E}\left|Y_{n+1,i} -Y_{n,i} \right|^\alpha\leq C \left[ p^{(\alpha-1)}\hat{\rho}(\alpha ) \right]^n; \end{equation} \item[(b)]if $\alpha>2$, for any $\beta\in(1,2]$, \begin{equation}\label{MCE2.2.4} \mathbb{E}\left|Y_{n+1,i} -Y_{n,i} \right|^\alpha\leq C n^{\alpha/\beta}\max\{p^{\alpha-1}\hat{\rho}(\alpha), p^{\alpha/\beta}\hat{\rho}(\beta)^{\alpha/\beta}\}^n, \end{equation} \end{itemize} where $C$ is a constant depending on $\alpha,p,t$. \end{lem} \begin{proof}Firstly, we remark that $\hat{\rho}(t)$ exists for all $t\in[1,\alpha]$ since $\mathbb{E}\|\sum\limits_{k=1}^N\hat{\textbf{A}}_k\|^\alpha<\infty$ and $\hat{\textbf{M}}(1)=\hat{\textbf{M}}$ is finite and strictly positive. Furthermore, $\mathbb{E}\|\sum\limits_{k=1}^N\hat{\textbf{A}}_k\|^\alpha<\infty$ implies that $\max\limits_i\mathbb{E}|Y_{1,i}|^\alpha<\infty$ and $\max\limits_i\mathbb{E}|\hat{Y}_{1,i}^{(\beta)}|^{\alpha/\beta}<\infty$. So (\ref{MCE2.2.3}) is directly from (\ref{MCE2.2.1}). 
For $\alpha>2$, by Lemma \ref{MCL2.2}, we have \begin{eqnarray*} \mathbb{E}\left|Y_{n+1,i} -Y_{n,i} \right|^\alpha&\leq &C n^{\alpha/\beta}\left[\max\{p^{\alpha-1}\hat{\rho}(\alpha) ,\; p^{\frac{\beta^l-1}{\beta^l}\alpha}\hat{\rho}(\beta^l)^{\alpha/\beta^l} ,l=1,\cdots,m\}\right]^n\\ &\leq&C n^{\alpha/\beta}\left(\sup_{\beta\leq x\leq\alpha}\{p^{1-1/x}\hat{\rho}(x)^{1/x}\}\right)^{\alpha n}, \end{eqnarray*} if $\alpha\in(\beta^m,\beta^{m+1}]$ ($m\geq 1$ is an integer). Let $$g(x):=\log(p^{1-1/x}\hat{\rho}(x)^{1/x})=(1-\frac{1}{x})\log p+\frac{1}{x}\log\hat{\rho}(x).$$ Clearly, $g(x)$ is differentiable on $(1,\alpha)$ with derivative $$g'(x)=\frac{h(x)}{x^2},\quad \text{where} \; h(x):=\log p+x\frac{\hat{\rho}'(x)}{\hat{\rho}(x)}-\log\hat{\rho}(x).$$ The log-convexity of $\hat{\rho}(x)$ implies that $h(x)$ is increasing, hence $g(x)$ attains its maximum over any closed interval at one of the endpoints. We have $$ \sup_{\beta\leq x\leq\alpha}\{p^{1-1/x}\hat{\rho}(x)^{1/x}\}=\max\{p^{1-1/\alpha}\hat{\rho}(\alpha)^{1/\alpha}, p^{1/\beta}\hat{\rho}(\beta)^{1/\beta}\}. $$ The proof is complete. \end{proof} Now we prove Theorem \ref{MCT21}. \begin{proof}[Proof of Theorem \ref{MCT21}] By Lemma \ref{MCL2.3} and the assumptions of the theorem, the series $\sum_{n}\left(\mathbb{E}|Y_{n+1,i}-Y_{n,i}|^\alpha\right)^{1/\alpha}$ converges. Observing that $$\left(\mathbb{E}|Y_{n,i}|^\alpha\right)^{1/\alpha}\leq\sum_{k=0}^{n-1}\left(\mathbb{E}|Y_{k+1,i}-Y_{k,i}|^\alpha \right)^{1/\alpha}+1,$$ we immediately get $$\sup_n\mathbb{E}|Y_{n,i}|^\alpha\leq\left(\sum_{n=0}^{\infty}\left(\mathbb{E}|Y_{n+1,i}-Y_{n,i}|^\alpha \right)^{1/\alpha}+1\right)^\alpha<\infty.$$ Notice that $\mathbb{E}\sum\limits_n|Y_{n+1,i}-Y_{n,i}|\leq\sum\limits_{n}\left(\mathbb{E}|Y_{n+1,i}-Y_{n,i}|^\alpha\right)^{1/\alpha}<\infty$. This implies the a.s. convergence of the series $\sum\limits_n|Y_{n+1,i}-Y_{n,i}|$, which shows that $\{Y_{n,i}\}$ is a.s. a Cauchy sequence, so there exists a random variable $Y_i$ such that $Y_{n,i}\rightarrow Y_i$ a.s. By Fatou's Lemma, we have \begin{eqnarray*} \mathbb{E}|Y_{n,i}-Y_i|^\alpha=\mathbb{E}\lim_{l\rightarrow\infty}|Y_{n+l,i}-Y_{n,i}|^\alpha \leq\liminf_{l\rightarrow\infty}\mathbb{E}|Y_{n+l,i}-Y_{n,i}|^\alpha \leq\left(\sum_{k=n}^\infty(\mathbb{E}|Y_{k+1,i}-Y_{k,i}|^\alpha )^{1/\alpha}\right)^\alpha\stackrel{n\rightarrow\infty}{\longrightarrow }0. \end{eqnarray*} Thus $Y_{n,i}\rightarrow Y_i$ in $L^\alpha$, so that $\mathbb{E} Y_i=V_i$ and $0<\mathbb{E}|Y_i|^\alpha<\infty$. \end{proof} \end{document}
\begin{document} \title{Zeros of $L$-functions outside the critical strip} \author{Andrew R.~Booker} \address{School of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW, United Kingdom} \email{[email protected]} \thanks{A.~R.~B.\ was supported by EPSRC Grants EP/H005188/1, EP/L001454/1 and EP/K034383/1.} \thanks{F.~T.'s work was partially supported by the National Science Foundation under Grant No. DMS-1201330.} \author{Frank Thorne} \address{Department of Mathematics, University of South Carolina, 1523 Greene Street, Columbia, SC 29208, USA} \email{[email protected]} \begin{abstract} For a wide class of Dirichlet series associated to automorphic forms, we show that those without Euler products must have zeros within the region of absolute convergence. For instance, we prove that if $f\in S_k(\Gamma_1(N))$ is a classical holomorphic modular form whose $L$-function does not vanish for $\mathbb{R}e(s)>\frac{k+1}2$, then $f$ is a Hecke eigenform. Our proof adapts and extends work of Saias and Weingartner \cite{SW}, who proved a similar result for degree $1$ $L$-functions. \end{abstract} \maketitle \section{Introduction} In \cite{SW}, Saias and Weingartner showed that if $L(s)=\sum_{m=1}^\infty\frac{\lambda(m)}{m^s}$ is a Dirichlet series with periodic coefficients, then either $L(s)=0$ for some $s$ with real part $>1$, or $\lambda(m)$ is multiplicative at almost all primes (so that $L(s)=D(s)L(s,\chi)$ for some primitive Dirichlet character $\chi$ and finite Dirichlet series $D$). Earlier work of Davenport and Heilbronn \cite{dh1,dh2} established this result for the special case of the Hurwitz zeta-function $\zeta(s,\alpha)$ with rational parameter $\alpha$, and proved an analogue for the degree $2$ Epstein zeta-functions. Also in degree $2$, Conrey and Ghosh \cite{cg} showed that the $L$-function associated to the square of Ramanujan's $\Delta$ modular form has infinitely many zeros outside of its critical strip. In this paper, we generalize all of these results and study the extent to which, among all Dirichlet series associated to automorphic forms (appropriately defined), the existence of an Euler product is characterized by non-vanishing in the region of absolute convergence. For instance, for classical degree $2$ $L$-functions, we prove the following: \begin{theorem}\label{thm:classical} Let $f\in S_k(\Gamma_1(N))$ be a holomorphic cuspform of arbitrary weight and level. If the associated complete $L$-function $\Lambda_f(s)=\int_0^\infty f(iy)y^{s-1}\,dy$ does not vanish for $\mathbb{R}e(s)>\frac{k+1}2$ then $f$ is an eigenfunction of the Hecke operators $T_p$ for all primes $p\nmid N$. \end{theorem} Our method is sufficiently general to apply to $L$-functions of all degrees, and in fact we obtain Theorem~\ref{thm:classical} as a corollary of the following general result: \begin{theorem}\label{thm:main} Fix a positive integer $n$. For $j=1,\ldots,n$, let $r_j$ be a positive integer and $\pi_j$ a unitary cuspidal automorphic representation of $\GL_{r_j}(\mathbb{A}_\mathbb{Q})$ with $L$-series $L(s,\pi_j)=\sum_{m=1}^\infty\lambda_j(m)m^{-s}$. Assume that the $\pi_j$ satisfy the generalized Ramanujan conjecture at all finite places (so that, in particular, $|\lambda_j(p)|\le r_j$ for all primes $p$) and are pairwise non-isomorphic. Let $$ R=\left\{\sum_{m=1}^M\frac{a_m}{m^s}:M\in\mathbb{Z}_{\ge0}, (a_1,\ldots,a_M)\in\mathbb{C}^M\right\} $$ denote the ring of finite Dirichlet series, and let $P\in R[x_1,\ldots,x_n]$ be a polynomial with coefficients in $R$. 
Then either $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ has a zero with real part $>1$ or $P=D(s)x_1^{d_1}\cdots x_n^{d_n}$ for some $D\in R$, $d_1,\ldots,d_n\in\mathbb{Z}_{\ge0}$. \end{theorem} \begin{remarks}\hspace{1cm} \begin{enumerate} \item For $\pi_j$ as in the statement of the theorem, it is known (see \cite{jacquet-shalika}) that $L(s,\pi_j)$ does not vanish for $\mathbb{R}e(s)\ge1$. Thus if $P=D(s)x_1^{d_1}\cdots x_n^{d_n}$ is a monomial then whether or not $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ vanishes for some $s$ with $\mathbb{R}e(s)>1$ is determined entirely by the finite Dirichlet series $D(s)$. Further, the Grand Riemann Hypothesis (GRH) predicts that each $L(s,\pi_j)$ does not vanish for $\mathbb{R}e(s)>\frac12$. Theorem~\ref{thm:main} demonstrates that the GRH, if it is true, is a very rigid phenomenon. \item By the almost-periodicity of Dirichlet series, if $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ has at least one zero with real part $>1$ then it must have infinitely many such zeros. In fact, our proof shows that there is some number $\eta=\eta(P;\pi_1,\ldots,\pi_n)>0$ such that for any $\sigma_1,\sigma_2$ with $1<\sigma_1<\sigma_2\le1+\eta$, we have \begin{equation}\label{eqn:Nlowerbound} \#\bigl\{s\in\mathbb{C}:\mathbb{R}e(s)\in[\sigma_1,\sigma_2], \Im(s)\in[-T,T], P(L(s,\pi_1),\ldots,L(s,\pi_n))=0\bigr\} \gg T \end{equation} for $T$ sufficiently large (where both the implied constant and the meaning of ``sufficiently large'' depend on $\sigma_1,\sigma_2$ as well as $P$ and $\pi_1,\ldots,\pi_n$). On the other hand, if we restrict to $\mathbb{C}$-linear combinations (i.e.\ homogeneous degree $1$ polynomials $P\in\mathbb{C}[x_1,\ldots,x_n]$) and $\pi_1,\ldots,\pi_n$ with a common conductor and archimedean component $\pi_{1,\infty}\cong\ldots\cong\pi_{n,\infty}$, Bombieri and Hejhal \cite{bh} showed, subject to GRH and a weak form of the pair correlation conjecture for $L(s,\pi_j)$, that asymptotically $100\%$ of the non-trivial zeros of $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ have real part $\frac12$. \item The assumption of the Ramanujan conjecture in Theorem~\ref{thm:main} could be relaxed. For instance, it would suffice to have, for each fixed $j$: \begin{itemize} \item[(i)]some mild control over the coefficients of the logarithmic derivative $$\frac{L'}{L}(s,\pi_j)=\sum_{m=1}^\infty c_j(m)m^{-s}$$ at prime powers, namely $\sum_p\frac{|c_j(p^k)|^2}{p^k}<\infty$ for any fixed $k\ge2$ (cf.\ \cite[Hypothesis H]{rs}); \item[(ii)]an average bound for $|\lambda_j(p)|^4$ over arithmetic progressions of primes, namely $$ \limsup_{x\to\infty} \frac{\sum_{\substack{p\le x\\p\equiv a\;(\text{mod }q)}}|\lambda_j(p)|^4} {\sum_{\substack{p\le x\\p\equiv a\;(\text{mod }q)}}1} \le C_j, $$ for all co-prime $a,q\in\mathbb{Z}_{>0}$, where $C_j>0$ is independent of $a,q$. \end{itemize} Note that (i) is known to hold when $r_j\le 4$ (see \cite{rs,kim}). Further, both estimates follow from the Rankin--Selberg method if, for instance, the tensor square $\pi_j\otimes\pi_j$ is automorphic for each $j$. Since this is known when $r_j=2$ (see \cite{gj}), Theorem~\ref{thm:main} could be extended to include the $L$-functions associated to Maass forms. \item The main tool used in the proof is the quasi-orthogonality of the coefficients $\lambda_j(p)$, i.e.\ asymptotic estimates for sums of the form $\sum_{p\le x}\frac{\lambda_j(p)\overline{\lambda_k(p)}}p$ as $x\to\infty$. 
These follow from the Rankin--Selberg method, and were obtained in a precise form independently by Wu--Ye \cite[Thm.~3]{wu-ye} and Avdispahi\'c--Smajlovi\'c \cite[Thm.~2.2]{as}. (We also make use of similar estimates for sums over $p$ in an arithmetic progression---see Lemma~\ref{lem:sump} for the exact statement---though it is likely that this could be avoided at the expense of making the proof more complicated.) Since quasi-orthogonality and the Ramanujan conjecture are essentially the only properties of automorphic $L$-functions that we require, one could instead take these as hypotheses and state the theorem for an axiomatically-defined class of $L$-functions, such as the Selberg class. However, it has been conjectured that the Selberg class coincides with the class of automorphic $L$-functions, so this likely offers no greater generality. \item The conclusion of Theorem~\ref{thm:main} is interesting even for $n=1$. For instance, Nakamura and Pa\'nkowski \cite{np} have shown very recently, for a wide class of $L$-functions $L(s)$, that if $P\in R[x]$ is not a monomial and $\delta>0$ then $P(L(s))$ necessarily has zeros in the half-plane $\mathbb{R}e(s)>1-\delta$. Our result strengthens this to $\mathbb{R}e(s)>1$. (On the other hand, \cite{np} also yields the estimate \eqref{eqn:Nlowerbound} for any $[\sigma_1,\sigma_2]\subseteq(\frac12,1)$, which does not follow from our method.) \item Our results are related to universality results for zeta and $L$-functions. Voronin \cite{voronin} proved that for any compact set $K$ with connected complement contained within the strip $\mathbb{R}e(s) \in (\frac12, 1)$, and any nonvanishing, continuous function $f : K \rightarrow \mathbb{C}$ that is holomorphic on the interior of $K$, the function $f$ can be uniformly approximated by vertical translates of the zeta function. Voronin's results were extended by a number of authors. One result similar to ours, due to Laurin\v{c}ikas and Matsumoto \cite{LM}, states that, given $m$ functions $f_1, \dots, f_m$ as above and $L$-functions $L_j(s, F)$ associated to twists of a Hecke newform $F$ by pairwise inequivalent Dirichlet characters, the $f_j$ may be simultaneously approximated by a single vertical translate of the functions $L_j(s, F)$. This implies \cite[Theorem 4]{LM} that non-trivial linear combinations of the $L_j(s, F)$ must have zeros inside the critical strip with $\mathbb{R}e(s) > \frac{1}{2}$. References to many more works on universality can be found in \cite{LM}. \end{enumerate} \end{remarks} \subsection*{Summary of the proof} Our proof closely follows Saias and Weingartner's in broad outline, but becomes more technical in some places. The reader may wish to read \cite{SW} first. The technical heart of our paper is Proposition \ref{prop:lem2}, an extension of Lemma 2 of \cite{SW}. Given $n$ complex numbers $z_1, \cdots, z_n$ (bounded away from $0$ and $\infty$), we would like to simultaneously solve the equations $L(s, \pi_j) = z_j$, leading to a quick proof of the main theorem. As a substitute, we solve equations of the form $\prod_{p > y} L(\sigma + it_p, \pi_{j, p}) = z_j$, where the ordinate of $s$ is allowed to vary for each prime. Given this, in Section \ref{sec:main} we prove our main theorem, following the proof of Theorem 2 in \cite{SW}. As in \cite{SW}, the main tools are Weyl's criterion, allowing us to simultaneously approximate all of the $p^{-\sigma - it_p}$ by $p^{- \sigma - it}$ for a single $t$, and Rouch\'e's theorem, which guarantees that actual zeros must exist near approximate zeros.
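To make the role of Weyl's criterion concrete, the following minimal numerical sketch (our illustration only, not part of the proof; the prime set, the target phases, and the search grid are arbitrary choices) looks for a single $t$ whose phases $p^{-it}$ simultaneously approximate prescribed unimodular targets for a handful of primes, exploiting the linear independence of the numbers $\log p$ over $\mathbb{Q}$.
\begin{verbatim}
# Illustrative sketch (not from the paper): approximating prescribed phases
# p^{-i t_p} by a single common t, in the spirit of Weyl's criterion.
import numpy as np

primes = np.array([2, 3, 5, 7, 11])
rng = np.random.default_rng(0)
targets = np.exp(2j * np.pi * rng.random(len(primes)))  # prescribed p^{-i t_p}

ts = np.arange(0.0, 5.0e4, 0.1)                      # candidate common values of t
phases = np.exp(-1j * np.outer(ts, np.log(primes)))  # p^{-i t} for each candidate
errors = np.abs(phases - targets).max(axis=1)        # worst error over the primes

best = ts[errors.argmin()]
print(f"t = {best:.1f},  max_p |p^(-it) - target_p| = {errors.min():.3f}")
\end{verbatim}
In the proof itself the approximation is of course obtained for all primes $p<M$ at once, on a set of $t$ of positive lower density, rather than by an exhaustive search.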
The proof of Proposition \ref{prop:lem2} follows those of Lemmas 1 and 2 of \cite{SW}. However, in \cite{SW} the Dirichlet coefficients $\lambda(m)$ are all periodic to some fixed modulus, and this fact, combined with the prime number theorem for arithmetic progressions, allows for easy control of various partial sums that need to be estimated. Here, we must do without this periodicity. To prove Proposition \ref{prop:lem2}, we choose (in Proposition \ref{prop:sw1}) a partition of the set of primes $p > y$ into disjoint subsets $S$, and complex numbers $\epsilon_p \in S^1$ for each $p > y$, so that the vectors of partial sums $\sum_{p \in S} \epsilon_p \lambda_j(p) p^{-\sigma}$ are linearly independent in a precise quantitative sense. Our main tool is the Rankin--Selberg method (substituting for periodicity and orthogonality of Dirichlet characters); see Lemma~\ref{lem:sump}. We also rely on the rather technical Proposition \ref{prop:td}, which says that for matrices $g_1, \dots, g_m$, we can continuously solve equations of the form $\sum_{i = 1}^m g_i f_i(z) = z$ for $n$-tuples of complex numbers $z = (z_1, \cdots, z_n)$. The $g_i$ are constructed from the sums over $p \in S$ considered in Proposition \ref{prop:sw1}, but we are able to formulate Proposition \ref{prop:td} in a general manner, without reference to automorphic forms or primes. The conclusion of Proposition \ref{prop:td} is guaranteed only for large $m$, so that the number of subsets $S$ needed may be large. We choose these subsets to be arithmetic progressions, for which the Rankin--Selberg estimates presented in Lemma \ref{lem:sump} are known to hold. If such estimates were unavailable, it seems likely that we could still obtain our result by constructing the $S$ in a more {\itshape ad hoc} fashion instead. In any case, and in contrast to Saias--Weingartner, the modulus of the arithmetic progression has no particular arithmetic significance, and is chosen to be coprime to all the conductors of the $\pi_j$. \section{Preliminaries} \subsection{Automorphic $L$-functions} Let $\pi_j$ be as in the statement of Theorem~\ref{thm:main}. Each $\pi_j$ can be written as a restricted tensor product $\pi_{j,\infty}\otimes\bigotimes_p\pi_{j,p}$ of local representations, where $p$ runs through all prime numbers. Then we have \begin{equation}\label{eq:l_def_1} L(s,\pi_j)=\prod_pL(s,\pi_{j,p}),\quad\text{for }\mathbb{R}e(s)>1. \end{equation} Here each local factor $L(s,\pi_{j,p})$ is a rational function of $p^{-s}$, of the form \begin{equation}\label{eq:l_def_2} L(s,\pi_{j,p})=\frac1{(1-\alpha_{j,p,1}p^{-s})\cdots(1-\alpha_{j,p,r_j}p^{-s})} \end{equation} for certain complex numbers $\alpha_{j,p,\ell}$. The generalized Ramanujan conjecture asserts that $|\alpha_{j,p,\ell}|\le 1$, with equality holding for all $p\nmid\cond(\pi_j)$, where $\cond(\pi_j)\in\mathbb{Z}_{>0}$ is the conductor of $\pi_j$. In particular, $|\lambda_j(p)|=|\alpha_{j,p,1}+\ldots+\alpha_{j,p,r_j}|\le r_j$. \begin{lemma}\label{lem:sump}\footnote{[Added after publication.] As Mattia Righetti pointed out to us, this lemma is incorrect as claimed, although the statement is true with $O(\sigma - 1)$ replaced by $O\big( \frac{1}{\log(2/(\sigma - 1))}\big)$. The final line of the proof does not follow with the uniformity claimed, but in a published correction we prove that the bound $\sum_{p > y} p^{-\sigma} \gg \frac{y^{1 - \sigma}}{\log y} \log \frac{2}{\sigma - 1}$ does. 
The lemma is used in the proof of Proposition \ref{prop:sw1}, where it is needed only that the error tend to $0$ as $\sigma \rightarrow 1^+$. The corrected error term is sufficient, and the remaining results remain valid with only cosmetic changes to the proofs.} Let $a$ and $q$ be positive integers satisfying $\bigl(q,a\prod_{j=1}^n\cond(\pi_j)\bigr)=1$. Then \[ \sum_{\substack{p>y\\p\equiv{a}\;(\text{\rm mod }q)}} \frac{|u_1\lambda_1(p)+\ldots+u_n \lambda_n(p)|^2}{p^\sigma} = \bigg( \frac{1}{ \phi(q)} + O(\sigma-1)\bigg) \sum_{p>y}p^{-\sigma} \] for all $y>0$, $\sigma\in(1,2]$ and all unit vectors $(u_1, \ldots, u_n)$, where the implied constant depends only on $\pi_1,\ldots,\pi_n$ and $q$. \end{lemma} \begin{proof} Let $\chi\;(\text{mod }q)$ be a Dirichlet character, not necessarily primitive. We consider the sum \[ E_{jk\chi}(x)= \sum_{p\le x}\big(\lambda_j(p)\overline{\lambda_k(p)}\chi(p) -\delta_{jk\chi}\big)\frac{\log{p}}{p}, \] running over primes $p\le x$, where $\delta_{jk\chi}=1$ if $j=k$ and $\chi$ is the trivial character, and $0$ otherwise. Applying \cite[(2) and (3)]{as} with $(\pi,\pi')=(\pi_j\otimes\chi,\pi_k)$ and, if $\chi$ is imprimitive, subtracting any contribution from the terms with $p|q$, we obtain the bound $E_{jk\chi}(x)\ll_q 1$. Next, for any non-integral $y\ge\frac32$ and any $\sigma\in(1,2]$, we have $$ \sum_{p>y} \frac{\lambda_j(p)\overline{\lambda_k(p)}\chi(p)-\delta_{jk\chi}}{p^\sigma} =\int_y^\infty\frac{t^{1-\sigma}}{\log{t}}\,dE_{jk\chi}(t). $$ Integrating by parts and applying the above estimate for $E_{jk\chi}$, we see that this is $\ll_q y^{1-\sigma}/\log{y}$. Now, expanding the square and using orthogonality of Dirichlet characters, we have $$ \begin{aligned} \sum_{\substack{p>y\\p\equiv{a}\;(\text{mod }q)}} \frac{|u_1\lambda_1(p)+\ldots+u_n\lambda_n(p)|^2}{p^\sigma}&= \frac1{\phi(q)}\sum_{j=1}^n\sum_{k=1}^n \sum_{\chi\;(\text{mod }q)}u_j\overline{u_k}\overline{\chi(a)} \sum_{p>y}\frac{\lambda_j(p)\overline{\lambda_k(p)}\chi(p)}{p^\sigma}\\ &=O_q\!\left(\frac{y^{1-\sigma}}{\log{y}}\right) +\frac1{\phi(q)}\sum_{p>y}p^{-\sigma}. \end{aligned} $$ Finally, by the prime number theorem we have $\sum_{p>y}p^{-\sigma}\gg\frac{y^{1-\sigma}}{(\sigma-1)\log{y}}$, uniformly for $y\ge\frac32$ and $\sigma\in(1,2]$. The lemma follows. \end{proof} \subsection{A few lemmas} In the remainder of this section we discuss the topology of $\GL_n(\mathbb{C})$ and prove some simple lemmas, to be used in the more technical propositions which follow. Let $\Mat_{n\times n}(\mathbb{C})$ denote the set of $n\times n$ matrices with entries in $\mathbb{C}$. For $A=(a_{ij})\in\Mat_{n\times n}(\mathbb{C})$, the {\itshape Frobenius norm} is defined by \[ \|A\|=\sqrt{\tr\left(\overline{A}^T A\right)}=\sqrt{\sum|a_{ij}|^2}. \] Note that this agrees with the Euclidean norm under the identification of $\Mat_{n\times n}(\mathbb{C})$ with $\mathbb{C}^{n^2}$. By the Schwarz inequality, we have $|Av|\le\|A\|\cdot|v|$ for any $A\in\Mat_{n\times n}(\mathbb{C})$ and $v\in\mathbb{C}^n$. We endow $\GL_n(\mathbb{C})=\{g\in\Mat_{n\times n}(\mathbb{C}):\det{g}\ne0\}$ with the subspace topology. In particular, it is easy to see that a set $K\subseteq\GL_n(\mathbb{C})$ is compact if and only if $K$ is closed in $\Mat_{n\times n}(\mathbb{C})$ and there are positive real numbers $c$ and $C$ such that $$ \|g\|\le C\text{ and }|\det{g}|\ge c \quad\text{for all }g\in K. 
$$ Since $g^{-1}$ can be expressed in terms of $\frac1{\det{g}}$ and the cofactor matrix of $g$, it follows that $\|g^{-1}\|$ is bounded on $K$ (and indeed the map $g\mapsto g^{-1}$ is continuous, so that $\GL_n(\mathbb{C})$ is a topological group with this topology). \begin{lemma}\label{lem:nbhd_stability} Suppose $K$ is a compact subset of $\GL_n(\mathbb{C})$, $g\in K$, and $U\subseteq\mathbb{C}^n$ contains an open $\delta$-neighborhood of some point. Then $gU$ contains an $\varepsilon$-neighborhood, where $\varepsilon > 0$ depends only on $\delta$ and $K$. \end{lemma} \begin{proof} By linearity, we may assume without loss of generality that $U$ contains the $\delta$-neighborhood of the origin, $N_\delta$. Since $K$ is compact, there is a number $C>0$ such that $\|g^{-1}\|\le C$ for all $g\in K$. Put $\varepsilon=C^{-1}\delta$, and let $N_\varepsilon$ be the $\varepsilon$-neighborhood of the origin. For any $v\in N_\varepsilon$ we have $|g^{-1}v|\le\|g^{-1}\|\cdot|v|<C\varepsilon=\delta$, so that $v=g(g^{-1}v)\in gN_\delta$. Since $v$ was arbitrary, $gN_\delta\supseteq N_\varepsilon$. \end{proof} \begin{lemma}\label{lem:sq_root_avg} For any $v_0, \ldots, v_k \in \mathbb{C}^n$, there exist $\theta_0, \ldots, \theta_k \in [0, 1]$ such that \[ \left| \sum_{j = 0}^k e(\theta_j) v_j\right| \leq \sqrt{\sum_{j=0}^k|v_j|^2}. \] \end{lemma} \begin{proof} We have \[ \int_{[0, 1]^k}\left|\sum_{j=0}^k e(\theta_j) v_j\right|^2 d\theta_1\cdots d\theta_k = \sum_{j=0}^{k}|v_j|^2. \] Thus, some choice of $(\theta_0,\ldots,\theta_k)$ does no worse than the average, and the conclusion follows. \end{proof} \begin{lemma}\label{lem:monomial} Let $P\in\mathbb{C}[x_1,\ldots,x_n]$. Suppose that every solution to the equation $P(x_1,\ldots,x_n)=0$ satisfies $x_1\cdots x_n=0$. Then $P$ is a monomial, i.e., $P=cx_1^{d_1}\ldots x_n^{d_n}$ for some $c\neq0$ and non-negative integers $d_1,\ldots,d_n$. \end{lemma} \begin{proof} Let $V=\{(x_1,\ldots,x_n)\in\mathbb{C}^n:P(x_1,\ldots,x_n)=0\}$ be the vanishing set of $P$. By hypothesis, the polynomial $x_1\cdots x_n$ vanishes on $V$. Thus, since $\mathbb{C}$ is algebraically closed, Hilbert's Nullstellensatz implies that there is some $d\in\mathbb{Z}_{\ge0}$ such that $(x_1\cdots x_n)^d$ is contained in the ideal generated by $P$, i.e.\ $P|(x_1\cdots x_n)^d$. Since $\mathbb{C}[x_1,\ldots,x_n]$ is a unique factorization domain, this is only possible if $P$ is a monomial. \end{proof} \begin{lemma}\label{lem:poly_cont} Let $P\in\mathbb{C}[x_1,\ldots,x_n]$ and suppose that $y\in\mathbb{C}^n$ is a zero of $P$. Then for any $\varepsilon>0$ there exists $\delta>0$ such that any polynomial $Q\in\mathbb{C}[x_1,\ldots,x_n]$, obtained by changing any of the nonzero coefficients of $P$ by at most $\delta$ each, has a zero $z\in\mathbb{C}^n$ with $|y-z|<\varepsilon$. \end{lemma} \begin{proof} If $P$ is identically $0$ then so is $Q$, so we may take $z=y$. Otherwise, set $$ p(t)=P(y+tu)\quad\text{and}\quad q(t)=Q(y+tu) $$ for $t\in\mathbb{C}$, where $u$ is any unit vector for which $p(t)$ does not vanish identically; shrinking $\varepsilon$ if necessary, assume that $p(t)$ does not vanish on $C_\varepsilon=\{t\in\mathbb{C}:|t|=\varepsilon\}$; and let $\gamma>0$ be the minimum of $|p(t)|$ on $C_\varepsilon$. For $t\in C_\varepsilon$ we have \[ |q(t) - p(t)| < \delta N \Big(1+\varepsilon+|y|\Big)^{\deg P} \] where $N$ is the number of nonzero coefficients of $P$. Choosing $\delta$ so that the right side of this expression is bounded by $\gamma$, we have $|q(t)-p(t)|<|p(t)|$ for $t\in C_\varepsilon$.
By Rouch\'e's theorem $q(t)$ has a zero $t_0$ of modulus $|t_0|<\varepsilon$, and taking $z=y+t_0u$ completes the proof. \end{proof} \section{Simultaneous representations of $n$-tuples of complex numbers} The technical heart of our work is the following analogue of Lemma 2 of \cite{SW}: \begin{proposition}\label{prop:lem2} For any real numbers $y,R>1$ there exists $\eta>0$ such that, for all $\sigma\in(1,1+\eta]$, we have \begin{align*} \bigg\{\biggl(\prod_{p > y} L(\sigma+it_p,\pi_{j,p})\biggr)_{j=1,\ldots,n}&\ : t_p\in\mathbb{R}\text{ for each prime }p>y\bigg\}\\ &\supseteq \Big\{ (z_1, \ldots, z_n) \in \mathbb{C}^n : R^{-1} \leq |z_j| \leq R \textnormal{ for all } j \Big\}. \end{align*} \end{proposition} Loosely speaking, after simultaneously approximating the $t_p$ by a common $t$, it will follow that we can make the $L(s, \pi_j)$ independently approach any desired $n$-tuple of nonzero complex numbers, and this will allow us to find zeros in linear or polynomial combinations. The proof relies on an analogue of Lemma 1 of \cite{SW}, whose adapation is not especially straightforward. We carry out this work by proving two technical propositions; the first establishes the existence of solutions to a certain equation involving matrices in a fixed compact subset of $\GL_n(\mathbb{C})$. \begin{proposition}\label{prop:td} Let \[ T = \{ (z_1, \ldots, z_n) \in \mathbb{C}^n \ : \ |z_1| = \ldots = |z_n| = 1 \}, \] \[ D = \{ (z_1, \ldots, z_n) \in \mathbb{C}^n \ : \ |z_1|, \ldots, |z_n| \leq 1 \}, \] and fix a compact set $K \subseteq \GL_n(\mathbb{C})$. Then there is a number $m_0 > 0$ such that for every $m \geq m_0$ and all $(g_1, \ldots, g_m) \in K^m$, there are continuous functions $f_1, \ldots, f_m : D \rightarrow T$ such that $\sum_{i = 1}^m g_i f_i(z) = z$ for all $z \in D$. \end{proposition} We will carry out the proof in three steps: \begin{enumerate}[(1)] \item\label{it_small_neigh} We first show that there exist $\varepsilon > 0$ and $m_1$ such that for all $m \geq m_1$ and all $(g_1, \ldots, g_m) \in K^m$, the set $\{ \sum_{i = 1}^m g_i t_i : t_1,\ldots,t_m \in T \}$ contains an open $\varepsilon$-neighborhood of a point in $\mathbb{C}^n$. \item\label{it_big_neigh} `Fattening' the neighborhood constructed in the first step, we will show that there exists $m_2$ such that for $m \geq m_2$ and all $(g_1,\ldots,g_m)\in K^m$, $\{ \sum_{i = 1}^m g_i t_i : t_1,\ldots,t_m \in T \}$ contains the closed ball of radius $2$, $\{(z_1,\ldots,z_n):|z_1|^2+\ldots+|z_n|^2\le4\}$. \item\label{it_cont} Although the previous step yields a parametrization of a large closed set, it is not obviously continuous. By repeating the construction from step \eqref{it_small_neigh} using the added knowledge of step \eqref{it_big_neigh}, we show that one can achieve a continuous parametrization of $D$. \end{enumerate} \begin{proof} We begin by showing \eqref{it_small_neigh}. By compactness, there is an $m_1$ such that for any $m\ge m_1$ and any $m$-tuple $(g_1, \ldots, g_m)$, there is a distinct pair of indices $i,j$ such that $\| g_i^{-1} g_j - I \| < \frac{1}{3 \sqrt{n}}$. Assume, without loss of generality, that $(i,j)=(1,2)$, and put $\Delta = g_1^{-1} g_2 - I$. Then for any choice of $t_1, t_2 \in T$, we have \[ g_1 t_1 + g_2 t_2 = g_1 (t_1 + (I + \Delta) t_2), \] where $\| \Delta \|<\frac1{3\sqrt{n}}$. We introduce some notation. First, define $A=\{z\in\mathbb{C}:|z-1|\le\frac13\}$ and $B=\{ z \in \mathbb{C} : |z - 1| \leq \frac{2}{3} \}$. 
Next, let $s_1, s_2: B \rightarrow \mathbb{C}$ be the unique continuous functions satisfying $z = s_1(z) + s_2(z)$, $|s_1(z)| = |s_2(z)| = 1$ and $\Im(\frac{s_1(z)}{s_2(z)}) > 0$ for all $z\in B$. For $j=1,2$, let $t_j:B^n\rightarrow T$ be defined by $t_j(z_1,\ldots,z_n)=(s_j(z_1),\ldots,s_j(z_n))$. Given an arbitrary element $w\in A^n$, we define a continuous function $h_w: B^n \rightarrow \mathbb{C}^n$ by $h_w(z) = w - \Delta{t_2(z)}$. Since $|t_2(z)|=\sqrt{n}$ and $\|\Delta\|<\frac1{3\sqrt{n}}$, we have $|\Delta t_2(z)|<\frac13$. In particular, each entry of $\Delta t_2(z)$ is bounded in magnitude by $\frac13$, so by the triangle inequality, the image of $h_w$ is contained in $B^n$. By the Brouwer fixed point theorem, there exists $z \in B^n$ with $h_w(z) = z$, so that \[ t_1(z) + (I + \Delta) t_2(z) = z+\Delta{t_2(z)} = z+w-h_w(z) = w. \] Therefore, all of $A^n$ is in the image of the map $z \mapsto t_1(z) + (I + \Delta) t_2(z)$, so that in particular \[ A^n \subseteq \{ t_1 + g_1^{-1} g_2 t_2 \ : \ t_1, t_2 \in T \}. \] Applying Lemma \ref{lem:nbhd_stability} with $\delta=\frac13$, there is an $\varepsilon>0$ depending only on $K$ such that $\{ g_1 t_1 + g_2 t_2 \ : \ t_1, t_2 \in T \}$ contains an $\varepsilon$-neighborhood of some point in $\mathbb{C}^n$. We conclude the same of the set $\{ g_1 t_1 + \ldots + g_m t_m \ : \ t_1, \ldots, t_m \in T \}$ by choosing arbitrary fixed $t_3, \ldots, t_m \in T$. \\ \\ Proceeding to step \eqref{it_big_neigh}, let $k_1$ be a large integer to be determined later, set $m_2 = m_1 k_1$, and for any $m \geq m_2$ write $m = k m_1 + l$ with $k\ge k_1$ and $0 \leq l < m_1$. For each $j$ with $0\leq j<k$, applying step \eqref{it_small_neigh} to $(g_{jm_1+1},\ldots,g_{jm_1+m_1})$, we obtain an $\varepsilon$-neighborhood centered at some $v_j\in \mathbb{C}^n$. Further, we put $v_k = g_{km_1 + 1} \overrightarrow{1} + \ldots + g_{k m_1 + l} \overrightarrow{1}$, where $\overrightarrow{1} = (1, \ldots, 1) \in T$. Since $m_1$ is fixed and $K$ is compact, we have $|v_j| \leq C$ for $0\leq j\leq k$, for some $C$ independent of the individual $g_i$. Let $N_\varepsilon=\{(z_1,\ldots,z_n)\in\mathbb{C}^n:|z_1|^2+\ldots+|z_n|^2<\varepsilon^2\}$ be the $\varepsilon$-neighborhood of the origin in $\mathbb{C}^n$. Then by the above observations, for any $\theta_0,\ldots,\theta_k\in[0,1]$, $\big\{ \sum_{i = 1}^m g_i t_i \, : \, t_1,\ldots,t_m \in T \big\}$ contains the set $$ \sum_{j=0}^{k-1}e( \theta_j)(v_j+N_{\varepsilon})+e(\theta_k)v_k =\sum_{j=0}^ke(\theta_j)v_j+kN_{\varepsilon}. $$ By Lemma~\ref{lem:sq_root_avg}, there is a choice of $\theta_0,\ldots,\theta_k$ for which $\bigl|\sum_{j=0}^ke(\theta_j)v_j\bigr|\le C\sqrt{k+1}$. Now let $k_1$ be the smallest positive integer satisfying $k_1\varepsilon > C \sqrt{k_1 + 1} + 2$. Then for $k\ge k_1$, we have shown that $\{ \sum_{i = 1}^m g_i t_i : t_1,\ldots,t_m\in T \}$ contains the closed ball of radius $2$. \\ \\ Proceeding to step \eqref{it_cont}, we put $m_0=3nm_2$. Suppose that $m\geq m_0$ and $(g_1,\dots,g_m)$ are given, and choose a partition of $\{1,\ldots,m\}$ into $3n$ sets $I_{j,\ell}$ (for $1\leq j\leq n$, $1\leq \ell\leq3$), each of size at least $m_2$. For each $j$ with $1\leq j\leq n$, write \[ v_j = v_{j,1} = v_{j,2} = v_{j,3} = (0,\ldots,0,2,0,\ldots,0), \] where the $2$ is in the $j$th position. 
For each $j$ and $\ell$ we use step \eqref{it_big_neigh} to express $v_{j,\ell}$ in the form \begin{equation}\label{eqn:step3_idecomp} v_{j,\ell} = \sum_{i\in I_{j,\ell}}g_i t_i \end{equation} for some $t_i\in T$. Next, note that the set $\bigl\{2[(1,\ldots,1)+\alpha+\beta]\ : \alpha,\beta\in T\}$ contains $D$. As in the proof of step \eqref{it_small_neigh}, we can choose continuous functions $\alpha=(\alpha_1,\ldots,\alpha_n), \beta=(\beta_1,\ldots,\beta_n):D\to T$ such that $z_j=2[1+\alpha_j(z)+\beta_j(z)]$ for every $z=(z_1,\ldots,z_n)\in D$. Thus, $$ z=\sum_{j=1}^n[1+\alpha_j(z)+\beta_j(z)]v_j =\sum_{j=1}^n [v_{j,1}+\alpha_j(z)v_{j,2}+\beta_j(z)v_{j,3}]. $$ Finally, we use \eqref{eqn:step3_idecomp} to rewrite this as $$ z=\sum_{j=1}^n\left[ \sum_{i\in I_{j,1}}g_it_i +\sum_{i\in I_{j,2}}g_i\big(t_i\alpha_j(z)\big) +\sum_{i\in I_{j,3}}g_i\big(t_i\beta_j(z)\big)\right], $$ which is a decomposition of the type required. \end{proof} Next, we use the quasi-orthogonality of the coefficients $\lambda_j(p)$ (Lemma~\ref{lem:sump}) to show that, by choosing an arbitrary ``twist'' $\epsilon_p\in S^1$ for each large prime $p$, we can make sums of the $\epsilon_p\lambda_j(p)$ line up in linearly independent directions, as quantified in the following proposition. Given a real parameter $y>0$, we write \[ S(y) = \{p \text{ prime}: p > y \} \quad\text{and}\quad s(y,\sigma) = \sum_{p\in S(y)} p^{-\sigma}. \] \begin{proposition}\label{prop:sw1} There is a compact set $K \subseteq \GL_n(\mathbb{C})$, explicitly defined in \eqref{eqn:def_K} depending only on the degrees $r_1,\ldots,r_n$, with the following property: Let $m$ be a positive integer. Then there is a real number $\delta>0$ (depending on the $\pi_j$ and $m$) such that for any $y>0$ and any $\sigma\in(1,1+\delta]$, there exists a partition of $S(y)$ into $mn$ pairwise disjoint subsets $S_{ik}(y)$ ($i=1,\ldots,m$, $k=1,\ldots,n$) and a choice of $\epsilon_p \in S^1$ for each $p\in S(y)$, such that the $m$-tuple of matrices $(g_1,\ldots,g_m)$ defined by \begin{equation}\label{eqn:gidef} g_i = \bigg( \frac{mn}{s(y,\sigma)} \sum_{p \in S_{ik}(y)} \frac{ \epsilon_p \lambda_j(p)}{p^{\sigma}} \bigg)_{1 \leq j, k \leq n}, \quad i=1,\ldots,m \end{equation} lies in $K^m$. \end{proposition} \begin{proof} Let $q$ be the smallest prime number satisfying $q\equiv1\pmod{mn}$ and $q\nmid\prod_{j=1}^n\cond(\pi_j)$. We put $t=\frac{q-1}{mn}$ and define $S_{ik}^\circ(y)$ to be the union of residue classes $$ S_{ik}^\circ(y)=\bigcup_{\ell=1}^t \bigl\{p\in S(y):p\equiv tn(i-1)+t(k-1)+\ell\pmod{q}\bigr\}, $$ and $$ S_{ik}(y)=\begin{cases} S_{ik}^\circ(y)\cup\{q\}&\text{if } i=k=1\text{ and }y<q,\\ S_{ik}^\circ(y)&\text{otherwise}. \end{cases} $$ Then the $S_{ik}(y)$ are pairwise disjoint and cover $S(y)$. For a fixed choice of $i$, let $v_k$ denote the $k$th column of $g_i$, as defined in \eqref{eqn:gidef}, with the $\epsilon_p$ yet to be chosen. We will show by induction that there is a choice of the $\epsilon_p$ such that \begin{equation}\label{eqn:projspan} |v_\ell - \proj_{\textspan\{v_1, \ldots, v_{\ell - 1}\}} v_\ell| \geq \frac{1}{2r} \end{equation} holds for every $\ell=1,\ldots,n$, where $r=\sqrt{r_1^2+\ldots+r_n^2}$. To that end, let $k$ be given, and assume that \eqref{eqn:projspan} has been established for $\ell=1,\ldots,k-1$. Choose a unit vector $u = (u_1, \ldots, u_n)$ orthogonal to $v_1, \ldots, v_{k - 1}$. 
By the Schwarz inequality and the Ramanujan bound $|\lambda_j(p)|\le r_j$, for each prime $p$ we have $ |\bar{u}_1 \lambda_1(p) + \ldots + \bar{u}_n \lambda_n(p)| \leq r. $ Therefore \begin{align}\label{eqn:inn_prod} \frac{mn}{s(y, \sigma)} \sum_{p\in S_{ik}(y)} \frac{|\bar{u}_1\lambda_1(p)+\ldots+\bar{u}_n\lambda_n(p)|}{p^\sigma} &\geq\frac{mn}{r s(y, \sigma)} \sum_{p\in S_{ik}^\circ(y)} \frac{|\bar{u}_1\lambda_1(p)+\ldots+\bar{u}_n\lambda_n(p)|^2}{p^\sigma}\\ &=\frac{1+O_{m, n}(\sigma-1)}{r}, \nonumber \end{align} the latter equality following by Lemma \ref{lem:sump}. We choose $\delta$ so that the $O$ term above is bounded in modulus by $\frac12$, and for each $p \in S_{ik}(y)$ we choose $\epsilon_p$ such that $\epsilon_p (\bar{u}_1 \lambda_1(p) + \ldots + \bar{u}_n \lambda_n(p))$ is real and nonnegative. Then the left side of \eqref{eqn:inn_prod} equals \[ \langle u, v_k \rangle = \langle u, v_k - \proj_{\textspan\{v_1, \ldots, v_{k - 1}\}} v_k \rangle \leq |v_k - \proj_{\textspan\{v_1, \ldots, v_{k - 1}\}} v_k|, \] so that \eqref{eqn:projspan} follows for $\ell=k$. Applying Gram--Schmidt orthogonalization to $v_1,\ldots,v_n$, it follows from \eqref{eqn:projspan} that $|\det{g_i}|\ge (2r)^{-n}$. Moreover, by the Schwarz inequality and Lemma \ref{lem:sump} again, each entry of $g_i$ is bounded above by $1+O_{m, n}(\sigma-1)$, so that $\|g_i\|\le2n$ for a suitable choice of $\delta$. Thus, \begin{equation}\label{eqn:def_K} K=\{g\in\GL_n(\mathbb{C}):\|g\|\le 2n, |\det{g}|\ge(2r)^{-n}\} \end{equation} has the desired properties. \end{proof} We are now ready to prove Proposition \ref{prop:lem2}, largely following \cite{SW}. \begin{proof}[Proof of Proposition \ref{prop:lem2}] We use Propositions \ref{prop:sw1} and \ref{prop:td} to determine a compact set $K\subseteq\GL_n(\mathbb{C})$, a positive integer $m_0$, and a real number $\delta>0$ with the properties described there. Taking $m=m_0$, the aforementioned propositions yield, for any $\sigma\in(1,1+\delta]$, an $m$-tuple of matrices $(g_1,\ldots,g_m)\in K^m$, elements $\epsilon_p \in S^1$ for each prime $p > y$, and continuous functions $f_1,\dots,f_m:D\rightarrow T$ such that \begin{equation}\label{eqn:id_decomp} \sum_{i=1}^m g_i f_i(z) = z\quad\text{for all }z\in D. \end{equation} Now, let $\mu=\frac{s(y,\sigma)}{mn}$. For each prime $p>y$, we define a continuous function $t_p : \mu D\rightarrow\mathbb{R}$ satisfying \begin{equation}\label{eqn:def_tp} p^{- i t_p(z)} = \epsilon_p f_i(\mu^{-1}z)_k, \end{equation} where $(i,k)$ is the unique pair of indices for which $p\in S_{ik}(y)$ and $f_i(\mu^{-1}z)_k$ denotes the $k$th component of $f_i(\mu^{-1}z)$. (Note that the lift from $S^1$ to $\mathbb{R}$ is possible since $D$ is simply connected.) Define an error term $E(z)=(E_1(z),\ldots,E_n(z))$ by writing, for each $j=1,\ldots,n$, $$ E_j(z)=\sum_{p>y}\left(\log L(\sigma+it_p(z),\pi_{j,p}) -\lambda_j(p)p^{-(\sigma+it_p(z))}\right). $$ By the Ramanujan bound, we have $$ \log L(s,\pi_{j,p})-\lambda_j(p)p^{-s}=O(p^{-2}) $$ uniformly for $\mathbb{R}e(s)\ge1$. Since $\sum_p p^{-2}$ converges, the continuity of $E$ follows from that of the individual $t_p$. Moreover, each component $E_j(z)$ is bounded by a number $C>0$, independent of $j$, $z$, $y$, or $\sigma$. Set $R'=\sqrt{\pi^2+\log^2{R}}$. We take $\eta\in(0,\delta]$ small enough that the condition $\sigma\in(1,1+\eta]$ ensures that $\mu\ge C+R'$. 
By \eqref{eqn:id_decomp}, \eqref{eqn:def_tp}, and Proposition \ref{prop:sw1} we have $$ \sum_{p>y}\lambda_j(p)p^{-(\sigma+it_p(z))}= \sum_{i=1}^m\sum_{k=1}^n \sum_{p\in S_{ik}(y)} \frac{\lambda_j(p)\epsilon_pf_i(\mu^{-1}z)_k}{p^{\sigma}}=z_j, $$ for any $z=(z_1,\ldots,z_n)\in\mu{D}$. Now fix $w\in R'D$ and define a function $F_w:(C+R')D\to\mathbb{C}$ by $F_w(z)=w-E(z)$. By the estimate for $E_j(z)$ above, the image of $F_w$ is contained in $(C+R')D$. Thus, by the Brouwer fixed point theorem, there exists $z\in(C+R')D$ with $F_w(z)=z$, so that \[ \biggl(\sum_{p>y}\log L(\sigma+it_p(z),\pi_{j,p})\biggr)_{j=1,\ldots,n} =z+E(z)=z+w-F_w(z)=w. \] Taking exponentials yields the proposition. \end{proof} \section{Proof of Theorem \ref{thm:main}}\label{sec:main} The proof will be carried out in two steps: \begin{enumerate}[(1)] \item Applying our previous results, we show that unless $P$ is a monomial (as described in Theorem \ref{thm:main}), for every $\sigma > 1$ sufficiently close to $1$ there are real numbers $t_p$ (for each prime $p$) and $t_0$ such that $P|_{s=\sigma+it_0}$ vanishes at $\bigl(\prod_pL(\sigma+it_p,\pi_{1,p}),\ldots, \prod_pL(\sigma+it_p,\pi_{n,p})\bigr)$. \item Simultaneously approximating the $p^{-it_p}$ by $p^{-it}$ for a common value of $t$, we use Rouch\'e's theorem to find a zero of $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ close to $\sigma+it$. \end{enumerate} Note that the second step is standard and is applied in \cite{SW} in much the same way. \\ \\ We begin with a polynomial $P$ whose coefficients are finite Dirichlet series $D(s)=\sum_{m=1}^M a_mm^{-s}$, and let $y$ be the largest value of $M$ occurring in any of these coefficients. We rewrite each $L(s,\pi_j)$ as $L_{\leq{y}}(s,\pi_j)L_{>y}(s,\pi_j)$, splitting each Euler product into products over primes $p\leq y$ and $p>y$ respectively. Setting $$ Q(x_1,\ldots,x_n) =P\bigl(L_{\le{y}}(s,\pi_1)x_1,\ldots,L_{\le{y}}(s,\pi_n)x_n\bigr), $$ we have $P(L(s,\pi_1),\dots,L(s,\pi_n))=Q(L_{>y}(s,\pi_1),\dots,L_{>y}(s,\pi_n))$. The coefficients of $Q$ are rational functions of the $p^{-s}$ for $p\le y$. More precisely, for any monomial term $D(s)x_1^{d_1}\cdots x_n^{d_n}$ in the expansion of $P$, the corresponding term of $Q$ is \[D(s)L_{\le{y}}(s,\pi_1)^{d_1}\cdots{}L_{\le{y}}(s,\pi_n)^{d_n} x_1^{d_1}\cdots{}x_n^{d_n}. \] Since the finite Euler products $L_{\le{y}}(s,\pi_j)$ are non-vanishing holomorphic functions on $\{s\in\mathbb{C}:\mathbb{R}e(s)\ge1\}$, the corresponding terms of $P$ and $Q$ have the same zeros there. Let $D_1(s),\ldots,D_m(s)$ run through the coefficients of $P$ which do not vanish identically, and consider their product $f(s)=D_1(s)\cdots D_m(s)$. Then $f$ is itself a finite Dirichlet series which does not vanish identically. By complex analysis, $f$ cannot vanish at $1+it$ for every $t\in\mathbb{R}$, so there is some $t_0$ for which $D_1(1+it_0),\ldots,D_m(1+it_0)$ are all non-zero, and the same holds for the corresponding terms of $Q$. Next we specialize the coefficients of $Q$ to a fixed value of $s$, obtaining a polynomial $h_s\in\mathbb{C}[x_1,\dots,x_n]$. Considering $s=1+it_0$, Lemma \ref{lem:monomial} implies that either $h_{1+it_0}=cx_1^{d_1}\cdots x_n^{d_n}$ for some $c\in\mathbb{C}$ and $d_1,\ldots,d_n\in\mathbb{Z}_{\ge0}$, or that there are $y_1,\ldots,y_n\in\mathbb{C}$, none zero, for which $h_{1+it_0}(y_1,\ldots,y_n)=0$. In the former case, it follows from our choice of $t_0$ that $P=D(s)x_1^{d_1}\cdots x_n^{d_n}$ is a monomial, as allowed in the conclusion of Theorem~\ref{thm:main}. 
Henceforth we assume that we are in the latter case, and aim to show that $P(L(s,\pi_1),\ldots,L(s,\pi_n))$ has a zero with $\mathbb{R}e(s)>1$. We choose $R>1$ so that $R^{-{1/2}}\leq|y_j|\leq R^{1/2}$ for every $j$. By Lemma \ref{lem:poly_cont}, there is a number $\varepsilon>0$ such that for every $\sigma\in(1,1+\varepsilon]$, there exists $(z_1(\sigma),\ldots,z_n(\sigma))\in\mathbb{C}^n$ satisfying $h_{\sigma+it_0}(z_1(\sigma),\ldots,z_n(\sigma))=0$ and $R^{-1}\leq|z_j(\sigma)|\leq R$ for every $j$. We use Proposition \ref{prop:lem2} to determine $\eta$ in terms of $y$ and $R$, and assume that $\eta \le \varepsilon$ by shrinking $\eta$ if necessary. Proposition~\ref{prop:lem2} then guarantees that, for every $\sigma\in(1,1+\eta]$, we can solve the simultaneous system of equations $$ \prod_{p>y}L(\sigma+it_p,\pi_{j,p})=z_j(\sigma),\quad j=1,\ldots,n, $$ in the $t_p$ for $p>y$. For $p\le y$ we set $t_p=t_0$, thereby completing step (1). \\ \\ Turning to step (2), let $\sigma_1,\sigma_2\in\mathbb{R}$ with $1<\sigma_1<\sigma_2\le1+\eta$, and put $\sigma=\frac{\sigma_1+\sigma_2}2$. With the $t_0$ and $t_p$ resulting from step (1) for this choice of $\sigma$, let $P_{it_0}$ denote the polynomial obtained from $P$ by replacing $s$ by $s+it_0$ in all of its coefficients, and define \begin{equation}\label{eqn:Fdef} F(s)=P_{it_0}\!\left( \prod_pL(s+it_p,\pi_{1,p}),\ldots,\prod_pL(s+it_p,\pi_{n,p})\right). \end{equation} Then $F$ is holomorphic for $|s-\sigma|<\sigma-1$ and satisfies $F(\sigma)=0$ by construction. It follows that there is a number $\rho\in(0,\frac{\sigma_2-\sigma_1}2]$ such that $F(s)\neq0$ for all $s\in C_\rho=\{s\in\mathbb{C}:|s-\sigma|=\rho\}$. Write $\gamma$ for the minimum of $|F(s)|$ on $C_\rho$. Next, by abuse of notation, we write $P(s)$ as shorthand for $P(L(s,\pi_1),\ldots,L(s,\pi_n))$. As $P(s)=\sum_{m=1}^\infty a_mm^{-s}$ converges absolutely as a Dirichlet series for $\mathbb{R}e(s) > 1$, there is an integer $M > 0$ with $\sum_{m=M}^{\infty}|a_m|m^{-\sigma_1}\le\frac{\gamma}3$. By \eqref{eqn:Fdef} we have $F(s)=\sum_{m=1}^\infty b_mm^{-s}$, where $b_m=a_m\prod_{p|m}p^{-it_p\ord_p(m)}$, and by the joint uniform distribution of $p^{it}$ for primes $p<M$, it follows that the set of $t\in\mathbb{R}$ satisfying \[\sum_{m=1}^{M-1}\frac{|a_mm^{-it}-b_m|}{m^{\sigma_1}}<\frac{\gamma}3\] has positive lower density. For any such $t$ the triangle inequality yields $|P(s+it)-F(s)|<\gamma$ for all $s$ with $\mathbb{R}e(s)\ge\sigma_1$, and in particular for all $s\in C_\rho$. By Rouch\'e's theorem, it follows that $P(s+it)$ has a zero $s$ with $|s-\sigma|<\rho$. Thus, $P(s)$ has zeros with real part in $[\sigma_1,\sigma_2]$, and indeed we have $$ \#\{s\in\mathbb{C}:\mathbb{R}e(s)\in[\sigma_1,\sigma_2], \Im(s)\in[-T,T], P(s)=0\} \gg_{\sigma_1,\sigma_2}T $$ for all $T\ge T_0(\sigma_1,\sigma_2)$. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Imaging in scattering media via the second-order correlation of light field} \author{Wenlin Gong} \author{Pengli Zhang} \author{Xia Shen} \author{ Shensheng Han} \email{[email protected]} \affiliation{ Key Laboratory for Quantum Optics and Center for Cold Atom Physics of CAS, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China } \date{\today} \begin{abstract} Imaging with the second-order correlation of two light fields is a method to image an object by two-photon interference involving the joint detection of two photons at distant space-time points. We demonstrate for the first time that an image with high quality can still be obtained in scattering media by applying the second-order correlation of the illuminating light field. The scattering effect on the visibility of the images is analyzed both theoretically and experimentally. Potential applications and methods to further improve the visibility of the images in scattering media are also discussed. \end{abstract} \pacs{42.50.Ar, 42.68.Mj, 42.50.Dv, 42.62.Be} \maketitle Multiple scattering has a great influence on the quality of images: the information carried by the light is degraded and the images suffer reduced resolution and contrast, as in light propagation and imaging in the atmosphere \cite{Mckechnie}, neutron imaging \cite{Raine}, and imaging and diagnosis in the life and medical sciences \cite{Maher}. In medical and clinical diagnosis, compared with X-rays, optical photons provide nonionizing and safe radiation, and optical techniques are becoming an increasingly interesting approach for imaging in biological tissues, such as optical coherence tomography (OCT), diffuse optical tomography (DOT), and photoacoustic tomography (PAT) \cite{Wang,Gibson,Yodh,Arridge,Zhang,Hsiung}. Although these techniques enhance the quality of images in scattering media to some extent, many problems remain difficult to settle. Because all conventional imaging methods are based on the first-order correlation of the light field, we ``see'' an image only when we look at the object; that is, detection and imaging are not separated in the conventional imaging process. So when the information of the object is distorted by multiple scattering, and neither the scattering nor the object is known, we cannot, in principle, exactly recover an image destroyed by multiple scattering. In contrast to classical imaging, the field of quantum imaging aims to devise novel optical imaging techniques by exploiting the quantum nature of light \cite{Gatti}. Based on the Bose-Einstein correlation of light fields and the theory of two-photon interference \cite{Glauber}, imaging by the second-order correlation of two light fields (for example, ghost imaging with an entangled source or thermal light) has become a new kind of imaging technique \cite{Angelo}. In ghost imaging schemes, even if the test detector is a pointlike or bucket detector, the image of the object can still be obtained with both entangled sources and thermal light by measuring the second-order correlation function of the two light fields and scanning the positions of the photons that never actually passed through the object \cite{Pittman,Valencia,Cheng,Gatti1,Gatti2,Bennink2,Zhang1,Zhang2}. This imaging method, for the first time in the history of imaging technology, leads to the separation of detection and imaging.
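As a minimal numerical illustration of the fluctuation-correlation imaging principle recalled above (our sketch, not the experimental configuration of Fig.~1; the one-dimensional object, the speckle model, and all parameters are arbitrary choices), a ghost image can be recovered by correlating the reference-arm intensity with the bucket signal:
\begin{verbatim}
# Illustrative 1D ghost-imaging simulation: pseudo-thermal speckle illuminates
# an object, a bucket detector records the total transmitted intensity, and the
# image is recovered from <I_r(x) B> - <I_r(x)><B>.
import numpy as np

rng = np.random.default_rng(1)
npix, nframes, speckle = 200, 15000, 5    # pixels, speckle frames, speckle size

x = np.arange(npix)
obj = (((x > 60) & (x < 80)) | ((x > 120) & (x < 140))).astype(float)  # two slits

kernel = np.exp(-0.5 * (np.arange(-15, 16) / speckle) ** 2)  # field correlation

frames = np.empty((nframes, npix))
buckets = np.empty(nframes)
for n in range(nframes):
    field = rng.normal(size=npix) + 1j * rng.normal(size=npix)
    field = np.convolve(field, kernel, mode="same")  # spatially correlated field
    frames[n] = np.abs(field) ** 2                   # reference-arm intensity
    buckets[n] = (frames[n] * obj).sum()             # bucket signal (test arm)

ghost = (frames - frames.mean(axis=0)).T @ (buckets - buckets.mean()) / nframes
print("correlation of reconstruction with object:", np.corrcoef(ghost, obj)[0, 1])
\end{verbatim}
The reconstruction is a speckle-blurred copy of the object's transmission function, which is the behaviour analyzed quantitatively in what follows.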
In this letter, when there is multiple scattering in the test path but no multiple scattering in the reference path, imaging in scattering media with thermal light is investigated by the second-order correlation of two light fields. Fig. 1 shows the schematic of the experimental setup. The thermal light source $S$, produced by the method described in Refs. \cite{Zhang2,Martienssen,Liu}, first propagates to a beam splitter, where it is divided into a test and a reference path. In the test path, the light goes through a thin lens of focal length $f_1$ and the scattering media, and then to the test detector $D_t$. In the reference path, the light propagates through another thin lens of focal length $f$ and then to the CCD camera $D_r$ with pixel size 6.45$\times$6.45$\mu$m. \begin{figure} \caption{Scheme for the second-order correlation with thermal light in the scattering media.} \end{figure} In Fig. 1, the impulse response function between the plane $x$ and the plane $x_0$ is \cite{Wang,Vesperinas,Holstein} \begin{eqnarray} h(x,x_0 ) = \alpha h_{in} (x,x_0 ) + \beta h_{sca} (x,x_0 ). \end{eqnarray} \begin{eqnarray} \left| \alpha \right|^2 + \left| \beta \right|^2 = 1. \end{eqnarray} Here $h_{in}(x,x_0)$ is the impulse response function with no scattering media, and $h_{sca}(x,x_0)$ is the impulse response function due to multiple scattering. $\alpha$ and $\beta$ are the probability amplitudes of the incident light and the scattered light, respectively. The probability distribution function in scattering media can be represented by a point scattering function (PSF). Generally, there are two forms of the PSF: Lorentzian-shaped and Gaussian-shaped distributions \cite{Hassanein,Segre}. \begin{subequations} \label{eq:whole} \begin{eqnarray} h_{sca} (x,x_0 ) \propto \int dx' P(x',x_0 )_{L_A} h(x,x')_{(L_{2}+z_{2})}, \end{eqnarray} \begin{eqnarray} P(x',x_0 )_{L_A} =[\frac{2}{{\pi \Delta x_{L_A } ^2 }}]^{1/4} \exp \left\{ { - (\frac{{x' - x_0 }}{{\Delta x_{L_A} }})^2 } \right\}, \end{eqnarray} \begin{eqnarray} \int {\left| {P(x',x_0 )_{L_A} } \right|} ^2 dx'=1. \end{eqnarray} \begin{eqnarray} \beta \propto \frac{{D^{a_\beta } w^{c_\beta } {L_A}^{d_\beta } n^{e_\beta } }}{{\lambda ^{b_\beta } }};\Delta x_{L_A } \propto \frac{{D^{a_x } w^{c_x } {L_A}^{d_x } n^{e_x } }}{{\lambda ^{b_x } }}. \end{eqnarray} \end{subequations} Here, $P(x',x_0 )_{L_A}$ is the Gaussian-shaped point scattering probability amplitude from position $x_0$ to $x'$ when the light goes through the scattering medium with effective thickness $L_A$. $\Delta x_{L_A}$, $D$, $w$, $\lambda$ and $n$ are the broadening width, the diameter of the scattering particles, the concentration of the suspended particles, the wavelength of the incident light, and the refractive index of the scattering medium, respectively. $\alpha$, $\beta$ and $\Delta x_{L_A}$ are all determined by the specific experimental conditions. By optical coherence theory \cite{Cheng,Gatti1,Glauber}, the second-order correlation function between the two detectors is: \begin{eqnarray} \Delta G^{(2,2)} (x_r ,x_t ) = < I(x_r )I(x_t ) > - < I(x_r ) > < I(x_t ) >\nonumber\\ =\left| \right.\int {dx_1 } \int {dx_2} G^{(1,1)} (x_1 ,x_2 ) h_r ^ * (x_1 ,x_r )h_t (x_2 ,x_t )\left. \right|^2 .
\end{eqnarray} where $< >$ denotes statistical average of the ensemble, and $G^{(1,1)} (x_1 ,x_2)$ is the first-order correlation function on the source plane, $h_t(x_t ,x_2)$ is the impulse response function in the test path whereas $h_r^*(x_r,x_1)$ denotes phase conjugate of the impulse response function in the reference path. Suppose the light source is fully spatially incoherent, \begin{equation} G^{(1,1)} (x_1 ,x_2 ) = I_0 \delta (x_1 - x_2 ). \end{equation} where $I_0$ is a constant, and $\delta(x)$ is Dirac delta function. Under the paraxial approximation, and when the effective apertures of the lenses in the optical system are large enough, the impulse response function of the reference system is \begin{equation} h_r (x_r ,x_1) \propto \exp \left\{ {\frac{{j\pi }}{{\lambda f}}(1 - \frac{z}{f})x_1 ^2 - \frac{{2j\pi }}{{\lambda f}}x_r x_1 } \right\}. \end{equation} when the object plane, the thin lens $f_2$ and the test detector plane satisfy the thin lens equation \begin{equation} \frac{1}{{z_2+L_2 }} + \frac{1}{{z_3 }} = \frac{1}{{f_2 }}. \end{equation} then the impulse response function in the test path is \begin{subequations} \label{eq:whole} \begin{eqnarray} h_t(x_t ,x_2 ) \propto \int {dx'} [\alpha _1 \exp \left\{ { - \frac{{2j\pi }}{{\lambda f_1 }}x'x_2 } \right\} + \beta _1 \int d x_2 '\nonumber\\\times P(x',x_2 ')_{L_{1A} } \exp \left\{ {- \frac{{2j\pi }}{{\lambda f_1 }}x_2 'x_2 } \right\}] t(x')C(x')\nonumber\\\times\exp \left\{ {\frac{{j\pi }}{{\lambda f_1 }}(1 - \frac{{z_1 + L_1 }}{{f_1 }})x_2 ^2 } \right\}. \end{eqnarray} \begin{eqnarray} C(x') = \alpha _2 \delta (x' + \frac{{z_2 + L_2 }}{{z_3 }}x_t ) + \beta _2 P( - \frac{{z_2 + L_2 }}{{z_3 }}x_t ,x')_{L_{2A} }. \end{eqnarray} \end{subequations} where t(x) is the transmission function of the object. If \begin{eqnarray} \frac{{1 - \frac{z}{f}}}{f} = \frac{{1 - \frac{{z_1 + L_1 }}{{f_1 }}}}{{f_1 }}. \end{eqnarray} and $\frac{{f_1 }}{f} = \frac{{z_2 + L_2 }}{{z_3 }}$, then the same speckle distributions can be obtained on the two detector planes without the scattering media and the object. Substituting Eqs. (5)-(9) into Eq. (4), the correlation function is \begin{eqnarray} \Delta G^{(2,2)} (x_r ,x_t ) \propto \left| {\{ \alpha _1 \alpha _2 \delta (x_r + x_t ) + P( - \frac{{f_1 }}{f}x_t ,\frac{{f_1 }}{f}x_r )_{L_{2A} } } \right.\nonumber\\ \times \beta _1 \alpha _2 \} t( - \frac{{f_1 }}{f}x_t ) + \alpha _1 \beta _2 P( - \frac{{f_1 }}{f}x_t ,\frac{{f_1 }}{f}x_r )_{L_{1A} } t(\frac{{f_1 }}{f}x_r )\nonumber\\ \left. { + \beta _1 \beta _2 \int {dx'(x')} P(x',\frac{{f_1 }}{f}x_r )_{L_{1A} } P( - \frac{{f_1 }}{f}x_t ,x')_{L_{2A} } } \right|^2. \end{eqnarray} If the test detector is a bucket detector, integrating $x_t$ in Eq. (10), then \begin{eqnarray} \Delta G^{(2)} (x_r ) = \int {\Delta G^{(2,2)} (x_r ,x_t )dx_t }. \end{eqnarray} If the test detector is a CCD camera, because the images on the two camera planes are inverse, in the case of $x_t=-x_r$, Eq. (10) can be rewritten as \begin{eqnarray} \Delta G^{(2,2)} (x_r , - x_r ) \propto \left| {\left\{ {\alpha _1 \alpha _2 + \alpha _1 \beta _2 \left[ {\frac{2}{{\pi \Delta x_{L_{2A} } ^2 }}} \right]^{1/4} } \right.} \right.\nonumber\\\left. { + \beta _1 \alpha _2 \left. {\left[ {\frac{2}{{\pi \Delta x_{L_{1A} } ^2 }}} \right]^{1/4} } \right\}t(\frac{{f_1 }}{f}x_r ) + \beta _1 \beta _2 C_0 } \right|^2 . \end{eqnarray} \begin{eqnarray} C_0 = \int {dx't(x')} P(x',\frac{{f_1 }}{f}x_r )_{L_{1A} } P( \frac{{f_1 }}{f}x_r ,x')_{L_{2A} }. \end{eqnarray} From Eq. 
(12)-(13), for $L_1$=0 (namely, $\beta_1$=0) or $L_2$=0 (namely, $\beta_2$=0), images with high quality can always be reconstructed by the second-order correlation of two light fields in scattering media. However, if the object is fixed in the middle of the scattering media, the image quality is the worst. In the experiments, we prepare a suspension composed of emulsion polymerization particles with particle diameter $D$=3.26$\mu$m in an NaCl solution with density $\rho$=1.19 g/cm$^3$. The vessel holding the suspension measures 40mm$\times$10mm$\times$20mm. We take $\lambda$=650nm, $f_1$=400.0mm, $f$=150.0mm, $f_2$=250.0mm, $z$=211.0mm, $z_1$=300.0mm, $z_2+L_2$=390.0mm, $z_3$=243.8mm. The average size of the speckles on the CCD camera plane is $40.6\mu$m. When the light goes through the scattering media with thickness $L_1+L_2$=40mm, we have $|\frac{\beta}{\alpha}|^2\approx$694 and $\Delta x_{L_A}\approx$1.36mm, and the scattering coefficient of the medium is $\mu_s\approx$1.64/cm. The minimum characteristic scale of the object (``\textbf{zhong}'' ring) is 60$\mu$m and the diameter of the ring is 1.6mm. \begin{figure} \caption{(I) Axial sections of the images when the thermal light from a single slit (slit width $a$=0.2mm) fixed before the media went through different media with scattering thickness $L_1+L_2$=40mm. (II) Images of the object obtained when the sample was fixed in different positions of the scattering media (averaged over 15,000 speckle frames). (a) $L_1$=0mm, $L_2$=40mm, $\triangle x_{L_{A}}$} \end{figure} Fig. 2(I) represents the point scattering functions when the thermal light goes through different media. The Gaussian envelope becomes wider because of multiple scattering. For conventional imaging, as shown in Fig. 2(II)(2), the quality of the images degrades as the scattering thickness $L_2$ increases. Conversely, when the test detector is a bucket detector, the quality of the images reconstructed by the second-order correlation between the two paths decays as the scattering thickness $L_1$ is increased [Fig. 2(II)(3)]. However, as shown in Fig. 2(II)(4-c), if the test detector is a CCD camera with high spatial resolution, we can also obtain images with high quality via the second-order correlation of the two light fields when there is only strong multiple scattering between the object plane and the source. In addition, if the object is fixed in the middle of the scattering media, the visibility of the image is reduced, but the resolution does not decay [Fig. 2(II)(4-b)]. \begin{figure} \caption{Effect of multiple scattering between the source and the object plane on $g^{(2)}$.} \end{figure} The visibility of the images reconstructed by the second-order correlation of two light fields can be explained by the normalized second-order correlation function \cite{Glauber} \begin{eqnarray} g^{(2)} (x_1 ,x_2 = 0) = 1 + \frac{{\Delta G^{(2,2)} (x_1 ,x_2 = 0)}}{{ < I(x_1 ) > < I(x_2 = 0) > }}. \end{eqnarray} Here $g^{(2)} (x_1 ,x_2 = 0)$ denotes the degree of second-order cross correlation between the two spatial positions $x_2$=0 and $x_1$. For the thermal light field, $g^{(2)}(x_{t1},x_{t2}=0)$ reveals the fluctuation of the light field, whereas $g^{(2)}(x_r,x_t=0)$ describes the cross correlation between the two paths. As shown in Fig.
As shown in Fig. 3, both the maximum values of the second-order correlation degrees $g^{(2)}(x_{t1}=0,x_{t2}=0)$ and $g^{(2)}(x_r=0,x_t=0)$ decrease sharply as the scattering thickness $L_1$ is increased, so the visibility of the images is reduced (see Fig. 2(II)(3)). However, even when $g^{(2)}(x_r=0,x_t=0)$ is lower than 1.05, a high-quality image can still be reconstructed if the detectors in both paths are CCD cameras with high axial resolution (see Fig. 2(II)(4)). Multiple scattering between the source and the object plane destroys the cross correlation between the two paths, so both the visibility of the image and $g^{(2)}(x_r=0,x_t=0)$ are degraded as the multiple scattering increases. A higher $g^{(2)}(x_r=0,x_t=0)$ and a larger transverse coherence width on the object plane can both enhance the visibility of the image reconstructed in scattering media from the second-order correlation between the test and reference paths \cite{Shen,Gatti3}. In addition, according to the results described by Eq. (12) and shown in Figs. 2-3, when the detectors in both paths are CCD cameras with high axial resolution, the quality of the images obtained from the second-order correlation of the two light fields is also determined by the quality of the images registered by the CCD camera $D_t$. Thus, almost all existing conventional imaging schemes for scattering media can be applied in the test path to further improve the performance of the correlation imaging system.

When $L_1 \ll L_2$, the quality of the image reconstructed from the second-order correlation of the two light fields is mainly determined by the mechanism of ``ghost'' imaging. Because detection and imaging are separated in ``ghost'' imaging, the test detector is only used to collect the thermal light transmitted through the object, while the imaging system is located in the reference path. Since there is no multiple scattering in the reference path, the image quality is not affected. If $L_1 \gg L_2$, the quality of the reconstructed image mainly depends on the conventional imaging system in the test path. Because the scattering between the object and the CCD camera $D_t$ is weak, a good-quality image can also be obtained. Generally speaking, from the second-order correlation of the two light fields we can always obtain an image of much better quality than that achieved with conventional first-order correlation optical imaging methods in scattering media. Entangled sources and other nonclassical light sources with higher $g^{(2)}(x_{t1}=0,x_{t2}=0)$ may be applied to further improve the image quality obtained with the second-order correlation imaging system in scattering media \cite{Chan,Walls,Wang1}.

In conclusion, imaging via the second-order correlation of two light fields provides a new route for imaging in scattering media. We demonstrate for the first time that the second-order correlation imaging method always yields an image of much better quality than conventional first-order correlation imaging methods in scattering media. This will be very useful for the imaging, testing, and diagnosis of biological tissues with infrared and near-infrared light.

This work was partly supported by the Hi-Tech Research and Development Program of China under Grant No. 2006AA12Z115 and the Shanghai Fundamental Research Project under Grant No. 06JC14069.

\end{document}
\begin{document} \preprint{APS/123-QED} \title{ Small-world complex network generation on a digital quantum processor} \author{Eric B. Jones} \email{[email protected]} \affiliation{National Renewable Energy Laboratory, Golden, CO 80401, USA} \author{Logan E. Hillberry} \affiliation{Department of Physics, University of Texas, Austin, TX 78712, USA} \author{Matthew T. Jones} \affiliation{Department of Physics, Colorado School of Mines, Golden, CO 80401, USA} \affiliation{NVIDIA Corporation, Boulder, CO 80302, USA} \author{Mina Fasihi} \affiliation{Department of Physics, Colorado School of Mines, Golden, CO 80401, USA} \author{Pedram Roushan} \affiliation{Google Quantum AI, Santa Barbara, CA 93101, USA} \author{Zhang Jiang} \affiliation{Google Quantum AI, Santa Barbara, CA 93101, USA} \author{Alan Ho} \affiliation{Google Quantum AI, Santa Barbara, CA 93101, USA} \author{Charles Neill} \affiliation{Google Quantum AI, Santa Barbara, CA 93101, USA} \author{Eric Ostby} \affiliation{Google Quantum AI, Santa Barbara, CA 93101, USA} \author{Peter Graf} \affiliation{National Renewable Energy Laboratory, Golden, CO 80401, USA} \author{Eliot Kapit} \email{[email protected]} \affiliation{Quantum Engineering Program, Colorado School of Mines, Golden, CO 80401, USA} \affiliation{Department of Physics, Colorado School of Mines, Golden, CO 80401, USA} \author{Lincoln D. Carr} \email{[email protected]} \affiliation{Quantum Engineering Program, Colorado School of Mines, Golden, CO 80401, USA} \affiliation{Department of Physics, Colorado School of Mines, Golden, CO 80401, USA} \date{\today} \maketitle \textbf{ Quantum cellular automata (QCA) evolve qubits in a quantum circuit depending only on the states of their neighborhoods \cite{arrighi2019overview, farrelly2020review} and model how rich physical complexity can emerge from a simple set of underlying dynamical rules \cite{bleh2012quantum}. For instance, Goldilocks QCA depending on trade-off principles exhibit non-equilibrating coherent dynamics and generate complex mutual information networks \cite{hillberry2021entangled}, much like the brain \cite{bullmore2009complex}. The inability of classical computers to simulate large quantum systems is a hindrance to understanding the physics of quantum cellular automata, but quantum computers offer an ideal simulation platform \cite{nielsen2002quantum, feynman2018simulating}. Here we demonstrate the first experimental realization of QCA on a digital quantum processor, simulating a one-dimensional Goldilocks rule on chains of up to 23 superconducting qubits. Employing low-overhead calibration and error mitigation techniques, we calculate population dynamics and complex network measures indicating the formation of small-world mutual information networks. Unlike random states \cite{arute2019quantum}, these networks decohere at fixed circuit depth independent of system size; the largest of which corresponds to 1,056 two-qubit gates. Such computations may open the door to the employment of QCA in applications like the simulation of strongly-correlated matter \cite{brun2020quantum, duranthon2021coarse, shah2019quantum} or beyond-classical computational demonstrations.} One of the most profound observations regarding the natural world is that, despite the simple set of physical laws that underpin it, the universe displays a plethora of complex, emergent phenomena, encountered in fields as diverse as biology, sociology, and physics \cite{anderson1972more, anderson2018basic, jensen1998self}. 
Examples of classical systems where complexity arises as a result of many interacting degrees of freedom are ecosystems, the human brain, and power grids \cite{turcotte2002self}. Certain classical cellular automata (CA) show how complexity can arise from simple rules without the controlling hand of a designer \cite{adamatzky2010game}. CA possess the ability to generate oscillatory, self-replicating structures and in some instances are themselves Turing complete \cite{wolfram1983statistical, wolfram1985cryptography, lindgren1988complexity, chopard1998cellular, cook2004universality}. It is known however, that the laws constituting our best model of the universe are quantum mechanical rather than classical \cite{donoghue2014dynamics}. Therefore, in order to simulate the emergence of complexity more fundamentally, one ought to investigate computational models that are predicated upon quantum mechanics. Goldilocks quantum cellular automata (QCA) \cite{hillberry2021entangled}, are a class of computational models that exhibit emergent complexity despite being constructed from repeated blocks of simple local unitary operators \cite{farrelly2020review}. They involve trade-offs in the local neighborhood such as are known to be sources of complexity in classical systems and essential to self-organized criticality \cite{carlson2002complexity}. Some Goldilocks QCA have been shown to generate mutual information networks that exhibit signatures of complexity, such as large network clustering, short average path length, and broad node-strength distribution, typically only observed in classical, small-world networks like social or biological networks \cite{hillberry2021entangled}. In addition, QCA have been proposed for other applications such as lattice discretization in the simulation of strongly-correlated matter, quantum field, and gravitational theories \cite{shah2019quantum, brun2020quantum, duranthon2021coarse}. However, the categorical limitation on the ability of classical computers to simulate the time evolution of large quantum systems is a bottleneck for the discovery and exploration of QCA more generally, hampering the theoretical illumination of the class of systems as a whole \cite{nielsen2002quantum}. \begin{figure*} \caption{\label{fig:circuits} \label{fig:circuits} \end{figure*} The last few years have seen the creation of sizeable digital quantum processors that are already demonstrating their value as tools of scientific discovery \cite{wright2019benchmarking, arute2019quantum, arute2020observation, arute2020hartree, pino2021demonstration, neill2021accurately, jurcevic2021demonstration}. Due to their universality, such processors are ideal platforms on which to elucidate the physics and complexity characteristics of QCA. Herein, we simulate a particular one-dimensional QCA on a Sycamore-class superconducting processor depicted schematically in Figs.~\ref{fig:circuits}a-d. Through the calculation of population dynamics and a complex-network characterization of the two-body mutual information matrix we establish that such QCA form small-world mutual information networks and thereby exhibit emergent physical complexity. Moreover, we take the first step towards enabling the widespread use of near-term quantum processors as QCA simulators and offer a template for how to experimentally investigate QCA generally. 
\noindent \textbf{Quantum cellular automata} A one-dimensional (1D) quantum elementary cellular automaton may be defined as a chain of $L$ quantum bits (qubits) whose states are updated according to repeated blocks of neighborhood-local unitary operations along a discrete time axis. When every qubit's state has been updated, a QCA \textit{cycle}, $t$, is completed. After selecting 1D chains of high-quality qubits from the available hardware graph (Fig.~\ref{fig:circuits}a; see also Supplementary Information), the structure (Fig.~\ref{fig:circuits}b) of a 1D QCA experiment is comprised of an initialization step, followed by the application of some number of QCA cycle unitaries out to cycle $t\in \{ 0, 1, \ldots, t_{\text{max}}\}$, and the measurement of appropriate observables. \begin{figure*} \caption{\label{fig:populations} \label{fig:populations} \end{figure*} The particular QCA that we simulate is the totalistic, three-site Goldilocks rule $T_6$ with a uniform Hadamard activation unitary applied to each qubit and boundary conditions equivalent to fixed $|0\rangle$s (see Supplementary Information) \cite{hillberry2021entangled}. We note that the QCA notation $T_6$ should not be confused with decoherence times, which we will denote $\tilde{T}_1$ and $\tilde{T}_2$ where applicable. Fig.~\ref{fig:circuits}c shows how rule $T_6$ is compiled down to quantum gates. A single, central bit flip initialization is followed by one QCA cycle, and finally, measurement in the computational $z$-basis. The local update, represented by two non-Clifford $\text{CH}$ gates (green box), does nothing if there are zero or two adjacent $|1\rangle$s and applies the Hadamard activator to the central qubit if there is exactly one adjacent $|1\rangle$: this is the trade-off rule that gives rise to the \textit{Goldilocks} nomenclature. \noindent \textbf{Population dynamics and error mitigation} The quantum processor on which we run our QCA simulations is a 53-qubit superconducting processor, Weber, which follows the design of the Sycamore architecture outlined in Ref. \cite{arute2019quantum} (see also Supplementary Information). Typical performance characteristics for Weber are: single-qubit gate error $e_1\approx 0.1\%$, two-qubit gate error $e_2\approx 1.4\%$, $|0\rangle$-state readout error $e_{r0}\approx 2\%$, $|1\rangle$-state readout error $e_{r1}\approx 7\%$, and population relaxation time $\tilde{T}_1\approx 15 \mu s$ \cite{qcdatasheet}. Fig.~\ref{fig:circuits}d shows the decomposition of a single QCA cycle (red box) to the native $\sqrt{\text{iSWAP}}^{\dagger}$ two-qubit and $\text{PhXZ}(a, x, z) \equiv \text{Z}^z \text{Z}^a \text{X}^x \text{Z}^{-a}$ family of single-qubit gates. Strictly speaking, the native two-qubit gate is better modelled by $\sqrt{\text{iSWAP}}^{\dagger} \times \text{CPHASE}(\varphi)$ where the parasitic cphase is $\varphi \approx \pi/23$ \cite{arute2020observation}. We apply a suite of low-overhead circuit optimization, calibration, and error mitigation techniques to optimize circuit performance including moment alignment, spin-echo insertion, Floquet calibration \cite{arute2020observation, neill2021accurately}, parasitic cphase compensation, and most importantly, post-selection (see Supplementary Information). 
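For concreteness, the ideal (noiseless) rule-$T_6$ circuit described above can be written in a few lines of Cirq. The sketch below assumes the two controlled-Hadamard construction of Fig.~\ref{fig:circuits}c and fixed $|0\rangle$ boundaries; it is a statevector-level illustration, not the compiled, calibrated circuit executed on the hardware.
\begin{verbatim}
import cirq

def t6_cycle(qubits):
    # One T_6 Goldilocks cycle: sweep one sublattice, then the other.  Each
    # site receives a controlled-H from each neighbor; since H^2 = 1, the
    # target is acted on by H iff exactly one neighbor is |1>.  Fixed |0>
    # boundaries mean edge sites have only one controlling neighbor.
    L = len(qubits)
    ops = []
    for parity in (0, 1):
        for i in range(parity, L, 2):
            for j in (i - 1, i + 1):
                if 0 <= j < L:
                    ops.append(cirq.H(qubits[i]).controlled_by(qubits[j]))
    return ops

L, cycles = 5, 3
qubits = cirq.LineQubit.range(L)
circuit = cirq.Circuit([cirq.X(qubits[L // 2])])       # single central bit flip
for _ in range(cycles):
    circuit.append(t6_cycle(qubits))
circuit.append(cirq.measure(*qubits, key='z'))

shots = cirq.Simulator().run(circuit, repetitions=10000).measurements['z']
populations = shots.mean(axis=0)        # <n_i> for the ideal, noiseless circuit
\end{verbatim}
Because $H^2=1$, the two controlled-$H$ gates cancel when both neighbors are $|1\rangle$ and act once when exactly one neighbor is $|1\rangle$, which is precisely the Goldilocks trade-off.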
\begin{figure*} \caption{\label{fig:clusterings} \label{fig:clusterings} \end{figure*} At each QCA cycle depth we measure the output of the circuit in the $z$-basis $N_c=100,000$ times, resulting in a set of $L$-bit strings $\{ |z\rangle \}$ and associated probabilities $\{ P_z \approx N_z/N_c \}$, where $N_z$ is the number of times bit string $|z\rangle$ is observed. The local population on each site is calculated via $\langle n_i \rangle = (1-\sum_z P_z (-1)^{z_i})/2$ and averaged over four 1D qubit chains. The left panel of Fig.~\ref{fig:populations}a shows the numerical emulation of such a procedure initialized with a single, central bit flip on 21 qubits, out to 30 QCA cycles. The two large-scale blue diamonds indicate coherent dynamics. When repeated on the Weber processor, a combination of photon loss, gate error, and state preparation and measurement (SPAM) error leads to nearly total population decoherence by $t\approx10$ (Fig.~\ref{fig:populations}a, center panel). We therefore post-select experimental data and discard any measurements whose eigenvalues of the Ising-like operator $\mathcal{O} = \sum_{i=0}^L Z_i Z_{i+1}$ differ from the corresponding eigenvalue of the initial state. That is, $\mathcal{O}$ is a dynamical invariant of the $T_6$ rule that keeps track of the number of domain walls in the system. The right panel of Fig.~\ref{fig:populations}a shows that post-selection results in coherent population dynamics that persist beyond $t\approx 10$, although different observables can degrade with noise on slightly different timescales (see Fig.~\ref{fig:clusterings}). The cycle-stamped population vignettes shown in Fig.~\ref{fig:populations}b support these observations more quantitatively, with error bars representing one standard deviation on the four different chains. After $t \approx 15$, error bars on the post-selected data become more significant and while some qualitative features of the emulated population dynamics appear to persist, such as larger population towards the center of the chain, it is unclear from Fig.~\ref{fig:populations} alone as to what the underlying nature of these qualitative similarities is. Moreover, our complex-network analysis of the behavior of rule $T_6$ relies on the calculation of two-body observables beyond the one-body observables depicted in Fig.~\ref{fig:populations}. As such, we turn to a calculation of Shannon mutual information both to more deeply understand the long-time population dynamics of our QCA and to establish their complex-network behavior. \begin{figure*} \caption{\label{fig:complex_network} \label{fig:complex_network} \end{figure*} \noindent \textbf{Mutual information network analysis} Following the complex-network approach in neuroscience wherein functional connectivity of the brain is characterized between spatially non-adjacent regions \cite{bullmore2009complex}, we calculate the classical, Shannon mutual information between all pairs of qubits in each 1D Goldilocks QCA chain \begin{equation} I_{ij} \equiv \sum_{z_i=0}^1 \sum_{z_j=0}^1 p(z_i, z_j) \log_2 \frac{p(z_i, z_j)}{p(z_i)p(z_j)} \end{equation} and regard it as an adjacency matrix of correlations that defines the QCA network at each cycle. We choose to use classical, rather than a measure of quantum, mutual information because its calculation only requires measurements in the computational $z$-basis, which we have shown are amenable to post-selection. 
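Both the population estimate and the post-selection step act directly on the measured bit strings, as does the mutual information estimate defined above. A minimal Python sketch with placeholder data standing in for processor shots (all variable names are illustrative):
\begin{verbatim}
import numpy as np

def domain_wall_eigenvalue(bits):
    # Eigenvalue of O = sum_i Z_i Z_{i+1} for one computational-basis string,
    # including the fixed |0> boundary qubits as zero padding.
    padded = np.pad(np.asarray(bits, dtype=int), 1)
    signs = 1 - 2 * padded                      # bit 0 -> +1, bit 1 -> -1
    return int(np.sum(signs[:-1] * signs[1:]))

def post_select(shots, initial_bits):
    # Keep only measured strings in the same O sector as the initial state.
    target = domain_wall_eigenvalue(initial_bits)
    keep = np.array([domain_wall_eigenvalue(s) == target for s in shots])
    return shots[keep], keep.mean()

def shannon_mi_matrix(shots):
    # Pairwise Shannon mutual information I_ij (in bits) estimated from
    # z-basis bit strings; the diagonal is left at zero so the result can be
    # used directly as a weighted adjacency matrix.
    N, L = shots.shape
    I = np.zeros((L, L))
    for i in range(L):
        for j in range(i + 1, L):
            joint = np.array([[np.mean((shots[:, i] == a) & (shots[:, j] == b))
                               for b in (0, 1)] for a in (0, 1)])
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            mask = joint > 0
            I[i, j] = I[j, i] = np.sum(
                joint[mask] * np.log2(joint[mask] / np.outer(pi, pj)[mask]))
    return I

# Placeholder data standing in for processor shots of shape (N_c, L):
rng = np.random.default_rng(0)
L = 5
initial = np.zeros(L, dtype=int)
initial[L // 2] = 1                             # single central bit flip
shots = rng.integers(0, 2, size=(100000, L))
kept, retained_fraction = post_select(shots, initial)
populations = kept.mean(axis=0)                 # <n_i> = P(z_i = 1), post-selected
mi_network = shannon_mi_matrix(kept)            # weighted adjacency matrix I_ij
\end{verbatim}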
Moreover, we show in Supplementary Information that the Shannon (classical) mutual information acts as a reliable proxy for von Neumann (quantum) mutual information for the $T_6$ QCA. Complex networks are ones that are neither purely regular, such as a lattice or complete graph, nor entirely random \cite{albert2002statistical}. The classic demonstration that a network has complex, small-world character involves showing persistently large clustering and simultaneously short path length \cite{watts1998collective}, with a power-law node strength distribution resulting in highly-connected nodes. By analogy with transportation networks, these features describe networks that are easily traversed both locally (clustering) and globally (path length), and exhibit hubs (broad node strength distribution). Through decoherence, the state of a quantum processor approaches an incoherent uniformly random state with all amplitudes equal to $2^{-L/2}$ and for which $I_{ij} = 0$ for all $L\geq2$. Hence, the incoherent uniformly random state is neither locally nor globally traversable and is thus not a typical random network. Upon post-selection, the decohered state is no longer uniform and the corresponding mutual information network is both non-zero and non-random, although to a much lesser extent than the states generated by Goldilocks QCA. The complexity of networks generated by Goldilocks QCA is established by computing network measures for emulated, raw processor, and post-selected processor states and then comparing each of these to post-selected incoherent uniform random states. Clustering measures local network transitivity and is defined as the ratio of the weighted number of closed triangles in the network to the weighted total number of length-2 paths in the network (i.e., the number of \textit{potentially} closed triangles, see Supplementary Information)). The first relevant signature of network complexity is intermediate to large clustering values that do not decay with system size, in contrast to random networks. The emulated clustering (blue curves) of the QCA exhibits this signature and actually increases slightly as a function of system size, $L$ (see Fig.~\ref{fig:clusterings}). While we plot three of the larger system sizes we simulated here, $L=15, 17, \text{and} \: 19$, this proves true for all other system sizes simulated as well. Next, we note that without post-selection, the clustering $\mathcal{C}$ calculated from raw data from Weber (red points) rises briefly but then quickly decays toward zero, the incoherent uniformly random limit (black dotted curve), at $t\approx 12$ for all three system sizes. In contrast, the green curves in Fig.~\ref{fig:clusterings} show that with post-selection the experimental clustering tracks the emulated clustering closely until $t\approx 6$ and remains larger than post-selected uniform randomness (black dashed curve) until $t\approx 12$, independent of system size. There is therefore a window between $t\approx 4$ to $12$ over which we can analyze the formation of a non-random complex network in the QCA for all system sizes simulated. We provide a more detailed description of our cycle windowing process in Supplementary Information. Fig.~\ref{fig:complex_network}a shows the coherence window, cycle-and-chain-averaged emulated (blue), raw (red), and post-selected (green) clustering coefficient for $L=5$ to $23$ qubits. 
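For reference, the three network measures can be evaluated directly on the mutual information matrix. The sketch below uses one common reading of the triangle-to-path clustering ratio and takes the edge distance as the reciprocal mutual information for the path length, a frequent convention for correlation networks; the precise weighted definitions used in this work are given in Supplementary Information.
\begin{verbatim}
import numpy as np

def weighted_clustering(A):
    # One common reading of the triangle-to-path ratio: weighted closed
    # triangles over weighted length-2 paths (A is symmetric, zero diagonal).
    closed = np.trace(A @ A @ A)
    paths = np.sum(A @ A) - np.trace(A @ A)
    return closed / paths if paths > 0 else 0.0

def node_strengths(A):
    # Node strength g_i = sum_j A_ij of the mutual-information network.
    return A.sum(axis=1)

def average_path_length(A, eps=1e-12):
    # Weighted shortest path length with edge distance d_ij = 1/A_ij,
    # averaged over all connected pairs of distinct nodes.
    n = A.shape[0]
    D = np.where(A > eps, 1.0 / np.maximum(A, eps), np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):                          # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, k, None] + D[None, k, :])
    off_diag = ~np.eye(n, dtype=bool)
    finite = D[off_diag & np.isfinite(D)]
    return finite.mean() if finite.size else np.inf
\end{verbatim}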
After the finite-size effects encountered for $L\leq 11$, it is clear that while the raw clustering trends towards zero-- that of a incoherent uniformly random state network-- both the emulated and post-selected clustering stabilize towards $\mathcal{C}\approx 0.3$ and appear to trend towards larger values as a function of system size, indicating substantial network transitivity beyond post-selected randomness (black dashed curve). Fig.~\ref{fig:complex_network}b shows the coherence window, cycle-and-chain-averaged weighted shortest path length, $\ell$, as a function of system size, which gauges global network traversability (see Supplementary Information). The raw data path length (red) in Fig. 4b is large and increases as a function of system size. The post-selected (green) path length tracks the emulated (blue) path length closely, trends downward, and is always one to two orders of magnitude smaller than the raw path length. Interestingly, post-selected path length tracks the path length of post-selected randomness (black dashed line) nearly as well as it does emulated path length. Taken together however, Figs.~\ref{fig:complex_network}a-b signal the existence of small-world mutual information networks generated in the coherence window of the Goldilocks QCA beyond what can be obtained by post-selecting incoherent uniform randomness. The end of the coherence window ($t=12$) for the largest system size simulated ($L=23$) corresponds to $1,056 \: \sqrt{\text{iSWAP}}^{\dagger}$ gates. Fig.~\ref{fig:complex_network}c further indicates the formation of small-world mutual information networks, showing that the size-normalized emulated (blue) and post-selected (green) node strength distributions (see Supplementary Information), $P[g_i/(L-1)]$, are relatively flat between $1\times 10^{-2}$ to $2 \times 10^{-1}$ compared with those of the post-selected random node strengths (black dashes), which peak between $\sim 2.5 \times 10^{-2}$ to $1.5\times10^{-1}$, and raw (red) node strengths, which are heavily biased towards much smaller values, indicating a deficit in network connectivity. Finally, Figs.~\ref{fig:complex_network}d-g visually depict how the mutual information for $L=23$ at $t=9$ differs for the raw QCA data, which approaches the network structure of incoherent uniform randomness, and the emulated and post-selected QCA networks, which both display lattice girder-like small-world structure that resemble one another more closely than they do post-selected randomness. \noindent \textbf{Towards beyond-classical QCA} In addition to their intrinsic scientific value as quantum models for emergent complexity, QCA also present intriguing prospects for establishing new inroads to the beyond-classical era. In the instance of Goldilocks rule $T_6$, identification of a dynamical invariant makes simulation less fragile to noise than a fully chaotic random quantum circuit (RQC). Meanwhile, the system's dynamics still occupy a significant fraction of Hilbert space (see Supplementary Information), which scales as $\sim 1.08^L$. For context, the long-time, cycle-averaged bond entropy of rule $T_6$ was shown to scale between a 1D area and volume law \cite{hillberry2021entangled}. 
Although simulation of Goldilocks rules has shown the failure of direct matrix-product-state approaches \cite{vargas2016quantum}, given this intermediate scaling it is an open problem as to whether efficient classical simulatability may be achieved using a modified tensor network approach \cite{hillberry2021entangled, eisert2010colloquium}. Moreover, efficient simulatability of area law-scaling states in two-dimensions (2D) or higher using tensor network approaches, while promising, is even less assured than in 1D. Hence, 2D QCA that exhibit area-law scaling (or worse) may be good candidates for beyond-classical demonstrations. Here we have demonstrated that existing quantum processors can efficiently simulate 1D QCA with high fidelity at large gate volume. While reliant on the availability of high-fidelity hardware, the main circuit design principles that enable this goal consist of identification of particular QCA rules that: i) generate significant complexity signatures, ii) efficiently compile to low-depth sequences of hardware-native gates, and iii) are amenable to post-selection through identification of one or more dynamical invariants. We therefore expect these design principles to aid in discovering QCA that support beyond-classical demonstrations or are otherwise useful in quantum computational domain applications. In particular, employing such design principles for QCAs that model correlated quantum matter could be a promising route toward beyond-classical simulation of novel physical systems in the near term. \noindent \textbf{Methods} Please see Supplementary Information. \noindent \textbf{Data availability} The data supporting this work will be made available upon reasonable request to the corresponding authors. \noindent \textbf{Code availability} The code supporting this work will be made available upon reasonable request to the corresponding authors. \noindent \textbf{Acknowledgements} We thank the Google Quantum AI team. This work was supported in part by the NSF under grant PHY-1653820 (EK); and DGE-2125899, CCF-1839232, PHY-1806372, and OAC-1740130 (MF, MTJ, LDC). This work was authored in part by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. This work was supported in part by the Laboratory Directed Research and Development (LDRD) Program at NREL. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains, a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes. \noindent \textbf{Author contributions} Conceptualization (EK, LDC). Data curation (EBJ, LEH, MTJ, MF, ZJ, AH). Formal Analysis (EBJ, LEH, MTJ, MF, PR, ZJ, AH, CN, EO, EK, LDC). Funding acquisition (PG, EK, LDC). Investigation (EBJ, LEH, MTJ, MF, PR, ZJ, AH, CN, EO, EK, LDC). Methodology (EBJ, LEH, MTJ, MF, PR, ZJ, AH, CN, EO, EK, LDC). Project administration (PR, AH, EO, EK, LDC). Resources (PR, ZJ, AH, CN, EO). Software (EBJ, LEH, MTJ, ZJ, AH). Supervision (PG, EK, LDC). Validation (all authors). Visualization (EBJ, LEH, PR, ZJ, AH, CN, EK, LDC). Writing – original draft (EBJ). Writing – review \& editing (all authors). 
\noindent \textbf{Competing interests} The authors declare no competing interests. \noindent \textbf{Materials \& Correspondence} Materials requests to E.B. Jones. Correspondence to E.B. Jones, E. Kapit, or L.D. Carr. \begin{thebibliography}{39} \providecommand{\natexlab}[1]{#1} \providecommand{\url}[1]{\texttt{#1}} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi: #1}\else \providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi \bibitem[Adamatzky(2010)]{adamatzky2010game} A.~Adamatzky. \newblock \emph{Game of life cellular automata}, volume~1. \newblock Springer, 2010. \bibitem[AI(2021)]{qcdatasheet} G.~Q. AI. \newblock Quantum computer data sheet. \newblock \url{https://quantumai.google/hardware/datasheet/weber.pdf}, May 2021. \bibitem[Albert and Barab{\'a}si(2002)]{albert2002statistical} R.~Albert and A.-L. Barab{\'a}si. \newblock Statistical mechanics of complex networks. \newblock \emph{Reviews of modern physics}, 74\penalty0 (1):\penalty0 47, 2002. \bibitem[Anderson(1972)]{anderson1972more} P.~W. Anderson. \newblock More is different. \newblock \emph{Science}, 177\penalty0 (4047):\penalty0 393--396, 1972. \bibitem[Anderson(2018)]{anderson2018basic} P.~W. Anderson. \newblock \emph{Basic notions of condensed matter physics}. \newblock CRC Press, 2018. \bibitem[Arrighi(2019)]{arrighi2019overview} P.~Arrighi. \newblock An overview of quantum cellular automata. \newblock \emph{Natural Computing}, 18\penalty0 (4):\penalty0 885--899, 2019. \bibitem[Arute et~al.(2019)Arute, Arya, Babbush, Bacon, Bardin, Barends, Biswas, Boixo, Brandao, Buell, et~al.]{arute2019quantum} F.~Arute, K.~Arya, R.~Babbush, D.~Bacon, J.~C. Bardin, R.~Barends, R.~Biswas, S.~Boixo, F.~G. Brandao, D.~A. Buell, et~al. \newblock Quantum supremacy using a programmable superconducting processor. \newblock \emph{Nature}, 574\penalty0 (7779):\penalty0 505--510, 2019. \bibitem[Arute et~al.(2020{\natexlab{a}})Arute, Arya, Babbush, Bacon, Bardin, Barends, Bengtsson, Boixo, Broughton, Buckley, et~al.]{arute2020observation} F.~Arute, K.~Arya, R.~Babbush, D.~Bacon, J.~C. Bardin, R.~Barends, A.~Bengtsson, S.~Boixo, M.~Broughton, B.~B. Buckley, et~al. \newblock Observation of separated dynamics of charge and spin in the fermi-hubbard model. \newblock \emph{arXiv preprint arXiv:2010.07965}, 2020{\natexlab{a}}. \bibitem[Arute et~al.(2020{\natexlab{b}})Arute, Arya, Babbush, Bacon, Bardin, Barends, Boixo, Broughton, Buckley, Buell, et~al.]{arute2020hartree} F.~Arute, K.~Arya, R.~Babbush, D.~Bacon, J.~C. Bardin, R.~Barends, S.~Boixo, M.~Broughton, B.~B. Buckley, D.~A. Buell, et~al. \newblock Hartree-fock on a superconducting qubit quantum computer. \newblock \emph{Science}, 369\penalty0 (6507):\penalty0 1084--1089, 2020{\natexlab{b}}. \bibitem[Bleh et~al.(2012)Bleh, Calarco, and Montangero]{bleh2012quantum} D.~Bleh, T.~Calarco, and S.~Montangero. \newblock Quantum game of life. \newblock \emph{EPL (Europhysics Letters)}, 97\penalty0 (2):\penalty0 20012, 2012. \bibitem[Brun and Mlodinow(2020)]{brun2020quantum} T.~A. Brun and L.~Mlodinow. \newblock Quantum cellular automata and quantum field theory in two spatial dimensions. \newblock \emph{Physical Review A}, 102\penalty0 (6):\penalty0 062222, 2020. \bibitem[Bullmore and Sporns(2009)]{bullmore2009complex} E.~Bullmore and O.~Sporns. \newblock Complex brain networks: graph theoretical analysis of structural and functional systems. \newblock \emph{Nature reviews neuroscience}, 10\penalty0 (3):\penalty0 186--198, 2009. 
\bibitem[Carlson and Doyle(2002)]{carlson2002complexity} J.~M. Carlson and J.~Doyle. \newblock Complexity and robustness. \newblock \emph{Proceedings of the national academy of sciences}, 99\penalty0 (suppl 1):\penalty0 2538--2545, 2002. \bibitem[Chopard and Droz(1998)]{chopard1998cellular} B.~Chopard and M.~Droz. \newblock \emph{Cellular automata}, volume~1. \newblock Springer, 1998. \bibitem[Cook et~al.(2004)]{cook2004universality} M.~Cook et~al. \newblock Universality in elementary cellular automata. \newblock \emph{Complex systems}, 15\penalty0 (1):\penalty0 1--40, 2004. \bibitem[Cotler and Wilczek(2020)]{cotler2020quantum} J.~Cotler and F.~Wilczek. \newblock Quantum overlapping tomography. \newblock \emph{Physical review letters}, 124\penalty0 (10):\penalty0 100401, 2020. \bibitem[Developers(2021)]{cirq_developers_2021_5182845} C.~Developers. \newblock Cirq, Aug. 2021. \newblock URL \url{https://doi.org/10.5281/zenodo.5182845}. \newblock {See full list of authors on Github: https://github .com/quantumlib/Cirq/graphs/contributors}. \bibitem[Donoghue et~al.(2014)Donoghue, Golowich, and Holstein]{donoghue2014dynamics} J.~F. Donoghue, E.~Golowich, and B.~R. Holstein. \newblock \emph{Dynamics of the standard model}, volume~35. \newblock Cambridge university press, 2014. \bibitem[Duranthon and Di~Molfetta(2021)]{duranthon2021coarse} O.~Duranthon and G.~Di~Molfetta. \newblock Coarse-grained quantum cellular automata. \newblock \emph{Physical Review A}, 103\penalty0 (3):\penalty0 032224, 2021. \bibitem[Eisert et~al.(2010)Eisert, Cramer, and Plenio]{eisert2010colloquium} J.~Eisert, M.~Cramer, and M.~B. Plenio. \newblock Colloquium: Area laws for the entanglement entropy. \newblock \emph{Reviews of modern physics}, 82\penalty0 (1):\penalty0 277, 2010. \bibitem[Farrelly(2020)]{farrelly2020review} T.~Farrelly. \newblock A review of quantum cellular automata. \newblock \emph{Quantum}, 4:\penalty0 368, 2020. \bibitem[Feynman(2018)]{feynman2018simulating} R.~P. Feynman. \newblock Simulating physics with computers. \newblock In \emph{Feynman and computation}, pages 133--153. CRC Press, 2018. \bibitem[Fruchterman and Reingold(1991)]{fruchterman1991graph} T.~M. Fruchterman and E.~M. Reingold. \newblock Graph drawing by force-directed placement. \newblock \emph{Software: Practice and experience}, 21\penalty0 (11):\penalty0 1129--1164, 1991. \bibitem[Gamel(2016)]{gamel2016entangled} O.~Gamel. \newblock Entangled bloch spheres: Bloch matrix and two-qubit state space. \newblock \emph{Physical Review A}, 93\penalty0 (6):\penalty0 062320, 2016. \bibitem[Hillberry et~al.(2021)Hillberry, Jones, Vargas, Rall, Halpern, Bao, Notarnicola, Montangero, and Carr]{hillberry2021entangled} L.~E. Hillberry, M.~T. Jones, D.~L. Vargas, P.~Rall, N.~Y. Halpern, N.~Bao, S.~Notarnicola, S.~Montangero, and L.~D. Carr. \newblock Entangled quantum cellular automata, physical complexity, and goldilocks rules. \newblock \emph{Quantum Science and Technology}, 6\penalty0 (4):\penalty0 045017, 2021. \newblock URL \url{http://iopscience.iop.org/article/10.1088/2058-9565/ac1c41}. \bibitem[Jensen(1998)]{jensen1998self} H.~J. Jensen. \newblock \emph{Self-organized criticality: emergent complex behavior in physical and biological systems}, volume~10. \newblock Cambridge university press, 1998. \bibitem[Jurcevic et~al.(2021)Jurcevic, Javadi-Abhari, Bishop, Lauer, Bogorin, Brink, Capelluto, G{\"u}nl{\"u}k, Itoko, Kanazawa, et~al.]{jurcevic2021demonstration} P.~Jurcevic, A.~Javadi-Abhari, L.~S. Bishop, I.~Lauer, D.~F. 
Bogorin, M.~Brink, L.~Capelluto, O.~G{\"u}nl{\"u}k, T.~Itoko, N.~Kanazawa, et~al. \newblock Demonstration of quantum volume 64 on a superconducting quantum computing system. \newblock \emph{Quantum Science and Technology}, 6\penalty0 (2):\penalty0 025020, 2021. \bibitem[Lindgren and Nordahl(1988)]{lindgren1988complexity} K.~Lindgren and M.~G. Nordahl. \newblock Complexity measures and cellular automata. \newblock \emph{Complex Systems}, 2\penalty0 (4):\penalty0 409--440, 1988. \bibitem[Muldoon et~al.(2016)Muldoon, Bridgeford, and Bassett]{muldoon2016small} S.~F. Muldoon, E.~W. Bridgeford, and D.~S. Bassett. \newblock Small-world propensity and weighted brain networks. \newblock \emph{Scientific reports}, 6\penalty0 (1):\penalty0 1--13, 2016. \bibitem[Neill et~al.(2021)Neill, McCourt, Mi, Jiang, Niu, Mruczkiewicz, Aleiner, Arute, Arya, Atalaya, et~al.]{neill2021accurately} C.~Neill, T.~McCourt, X.~Mi, Z.~Jiang, M.~Niu, W.~Mruczkiewicz, I.~Aleiner, F.~Arute, K.~Arya, J.~Atalaya, et~al. \newblock Accurately computing the electronic properties of a quantum ring. \newblock \emph{Nature}, 594\penalty0 (7864):\penalty0 508--512, 2021. \bibitem[Nielsen and Chuang(2002)]{nielsen2002quantum} M.~A. Nielsen and I.~Chuang. \newblock Quantum computation and quantum information, 2002. \bibitem[Pino et~al.(2021)Pino, Dreiling, Figgatt, Gaebler, Moses, Allman, Baldwin, Foss-Feig, Hayes, Mayer, et~al.]{pino2021demonstration} J.~M. Pino, J.~M. Dreiling, C.~Figgatt, J.~P. Gaebler, S.~A. Moses, M.~Allman, C.~Baldwin, M.~Foss-Feig, D.~Hayes, K.~Mayer, et~al. \newblock Demonstration of the trapped-ion quantum ccd computer architecture. \newblock \emph{Nature}, 592\penalty0 (7853):\penalty0 209--213, 2021. \bibitem[Shah and Gorard(2019)]{shah2019quantum} R.~Shah and J.~Gorard. \newblock Quantum cellular automata, black hole thermodynamics, and the laws of quantum complexity. \newblock \emph{arXiv preprint arXiv:1910.00578}, 2019. \bibitem[Turcotte and Rundle(2002)]{turcotte2002self} D.~L. Turcotte and J.~B. Rundle. \newblock Self-organized complexity in the physical, biological, and social sciences, 2002. \bibitem[Vargas(2016)]{vargas2016quantum} D.~L. Vargas. \newblock Quantum complexity: Quantum mutual information, complex networks, and emergent phenomena in quantum cellular automata. \newblock Master's thesis, Colorado School of Mines. Arthur Lakes Library, 2016. \bibitem[Watts and Strogatz(1998)]{watts1998collective} D.~J. Watts and S.~H. Strogatz. \newblock Collective dynamics of ‘small-world’networks. \newblock \emph{nature}, 393\penalty0 (6684):\penalty0 440--442, 1998. \bibitem[Wolfram(1983)]{wolfram1983statistical} S.~Wolfram. \newblock Statistical mechanics of cellular automata. \newblock \emph{Reviews of modern physics}, 55\penalty0 (3):\penalty0 601, 1983. \bibitem[Wolfram(1985)]{wolfram1985cryptography} S.~Wolfram. \newblock Cryptography with cellular automata. \newblock In \emph{Conference on the Theory and Application of Cryptographic Techniques}, pages 429--432. Springer, 1985. \bibitem[Wright et~al.(2019)Wright, Beck, Debnath, Amini, Nam, Grzesiak, Chen, Pisenti, Chmielewski, Collins, et~al.]{wright2019benchmarking} K.~Wright, K.~Beck, S.~Debnath, J.~Amini, Y.~Nam, N.~Grzesiak, J.-S. Chen, N.~Pisenti, M.~Chmielewski, C.~Collins, et~al. \newblock Benchmarking an 11-qubit quantum computer. \newblock \emph{Nature communications}, 10\penalty0 (1):\penalty0 1--6, 2019. 
\end{thebibliography} \part{Supplementary Information} \tableofcontents \section{Quantum Cellular Automata in 1D} A one-dimensional quantum cellular automaton is defined as a chain of $L$ identical qubits whose states are updated according to homogeneous blocks of neighborhood-local unitary operators \cite{arrighi2019overview, farrelly2020review}. Each local unitary takes as an input the state of the target qubit's neighbors and outputs a particular operator, either the identity or the activation operator, to be applied to the target qubit, conditioned on the neighborhood's state. When such an update unitary has been applied to all $L$ qubits, a QCA \textit{cycle} is complete. In the specific construction with nearest-neighbor connectivity, corresponding to three-site (closed) neighborhoods (denoted $T_R$), a specific qubit's open neighborhood (its two neighbors) can be in a superposition of any $2^2$ states. The target qubit's state can either be activated or not depending upon these $4$ configurations. As such there are $2^{2^2}=16$ possible transition functions on a three-site closed neighborhood. We label each of these transition functions, or \textit{rules}, as $R\in \{0, 1, \ldots, 15\}$. To specify a three-site rule, we write $T_R$. The rule which activates for only a balanced neighborhood of 0-1 or 1-0, $T_6$, is called the \textit{Goldilocks rule} and is the main focus of our study on the quantum processor. For efficient circuit parallelization purposes, we update all even qubits in parallel followed by all odd qubits. In this ordering, the general form for the $L$-qubit unitary operator that evolves the system from cycle $t$ to cycle $t+1$ is \cite{hillberry2021entangled} \begin{equation}\label{eq:qca_unitary} \begin{split} U(T_R;t, t+&1) = \prod_{o=1, 3, \ldots}^{L} U_o(T_R) \prod_{e=2, 4,\ldots}^{L-1} U_e(T_R), \\ U_{e(o)}(T_R) = &\sum_{m, n=0}^1 P_{e(o)-1}^{(m)} \otimes V^{c_{mn}}_{e(o)} \otimes P_{e(o)+1}^{(n)} \end{split} \end{equation} where $i=e(o)$ indexes each even (odd) qubit in the 1D chain, $P_j^{(m)} = |m\rangle \langle m|$ is a single-qubit projection operator, and $V_i$ is the chosen activation unitary applied to qubit $i$ (which is typically taken to be the same operator for all $i$). In order to convert from a rule number to a unitary operator, one first performs a binary expansion of the rule number where the expansion coefficients are themselves labelled by 2-bit strings, $R=\sum_{m,n=0}^1 c_{mn}2^{m+n}$. The coefficients $c_{mn}$ describe which neighborhood configurations activation occurs on. For Goldilocks rule $T_6$ considered in the main text, we have that $6=1\times 2^1 + 1\times 2^2$ so that $c_{00}=c_{11}=0$ while $c_{01}=c_{10}=1$. Therefore, the local $T_6$ cycle unitary reads \begin{equation} U_i(T_6)=|00 \rangle 1\langle 00| + |01 \rangle V\langle 01| +|10 \rangle V\langle 10| + |11 \rangle 1\langle 11|, \end{equation} which applies the activation unitary $V$ to qubit $i$ if its surrounding qubits are in the configurations $|01\rangle$ or $|10\rangle$ and does nothing otherwise. We refer to $T_6$ as a \textit{totalistic} rule because activation obeys a left-right symmetry and only depends on the total number of adjacent $|1\rangle$s. Another important totalistic rule is $T_1$, also called the PXP model in many-body quantum literature when run in continuous time~\cite{hillberry2021entangled}; here we consider discrete time, i.e., a quantum circuit, in line with the main concept of cellular automata. 
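To make the construction concrete, the sketch below builds $U_i(T_R)$ directly from the rule number, assembles one cycle for a small chain, and checks numerically that the domain-wall operator $\mathcal{O}$ introduced in the Post-Selection section below commutes with the $T_6$ cycle. The expansion coefficient $c_{mn}$ is read from the bit of $R$ indexed by the two-bit neighborhood string $(mn)_2$, which reproduces the worked example $6=2^1+2^2$. This is an illustrative brute-force construction, not the compiled circuits used in the experiment.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron_chain(ops):
    return reduce(np.kron, ops)

def local_update(L, i, R, V):
    # U_i(T_R) on an L-site chain with fixed |0> boundaries; c_mn is read
    # from bit position (mn)_2 = 2m + n of the rule number R.
    U = np.zeros((2**L, 2**L), dtype=complex)
    for m in (0, 1):
        for n in (0, 1):
            c = (R >> (2 * m + n)) & 1
            ops = [I2] * L
            ops[i] = V if c else I2
            if i - 1 >= 0:
                ops[i - 1] = P1 if m else P0
            elif m == 1:
                continue              # virtual left neighbor fixed to |0>
            if i + 1 < L:
                ops[i + 1] = P1 if n else P0
            elif n == 1:
                continue              # virtual right neighbor fixed to |0>
            U += kron_chain(ops)
    return U

def cycle_unitary(L, R=6, V=H):
    # Update one sublattice, then the other; updates within a sweep commute.
    U = np.eye(2**L, dtype=complex)
    for parity in (0, 1):
        for i in range(parity, L, 2):
            U = local_update(L, i, R, V) @ U
    return U

L = 4
U = cycle_unitary(L)
assert np.allclose(U.conj().T @ U, np.eye(2**L))      # the cycle is unitary

# Domain-wall operator O = sum_i Z_i Z_{i+1} with fixed |0> boundary padding.
O = np.zeros((2**L, 2**L))
for i in range(-1, L):
    ops = [I2] * L
    if i >= 0:
        ops[i] = Z
    if i + 1 < L:
        ops[i + 1] = Z
    O += kron_chain(ops)
assert np.allclose(O @ U, U @ O)                      # [O, U(T_6)] = 0
\end{verbatim}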
Note that that for 1D, three-site neighborhoods there are only $2^3=8$ totalistic rules. Restricting a QCA search space to require totalistic updates is a useful tool for discovering emergent complexity. Finally, we note that $V=X$, the bit-flip operator, corresponds to a reversible classical cellular automaton. In the main text we choose $V=H$, the Hadamard operator. It has been shown that the particular form of the activation unitary affects the complexity outcomes of $T_6$ relatively little so long as $V$ is able to move classical states off the poles of the Bloch sphere, which $H$ does \cite{hillberry2021entangled}. \section{Quantum Hardware Specifications and Qubit Picking} \subsection{Weber Processor} \begin{figure} \caption{\label{fig:readout_error} \label{fig:readout_error} \end{figure} \begin{figure} \caption{\label{fig:t1_time} \label{fig:t1_time} \end{figure} \begin{figure} \caption{\label{fig:two_qubit_error} \label{fig:two_qubit_error} \end{figure} The Weber quantum processing unit is a 53-qubit superconducting processor that follows the design of the Sycamore-class architecture in Ref.~\cite{arute2019quantum}. As discussed in the main text, typical performance characteristics for Weber are: single-qubit gate error $e_1\approx 0.1\%$, two-qubit gate error $e_2\approx 1.4\%$, $|0\rangle$-state readout error $e_{r0}\approx 2\%$, $|1\rangle$-state readout error $e_{r1}\approx 7\%$, and population relaxation time $\tilde{T}_1\approx 15 \mu s$ \cite{qcdatasheet}. Of these, the three performance characteristics that are potentially the most deleterious to calculating accurate observables are $e_{r1}$, $e_2$, and $\tilde{T}_1$ (relative to total circuit execution time). Selecting high-quality chains of qubits therefore involves simultaneously optimizing for these three characteristics, which change slightly between processor calibrations. For the particular calibration displayed, Fig.~\ref{fig:readout_error} shows that while the parallel readout $e_{r1}$ on Weber is relatively homogeneous across the chip, it is slightly lower on the right-hand side. Meanwhile, the idle $\tilde{T}_1$ decoherence times shown in Fig.~\ref{fig:t1_time} are somewhat longer on the right-hand side of the processor as well and relatively uniform within its top right quadrant. Finally, $e_2$ rates for the $\sqrt{\text{iSWAP}}^{\dagger}$ gate when executed in parallel on Weber and characterized by cross-entropy benchmarking (XEB) are lower both in the processor's top-right quadrant and in a small region left of center. Taken together, these observations indicate that the top-right quadrant of Weber is an ideal region within which to choose 1D qubit embeddings. For instance, a particular five qubit embedding that exploits this quadrant has qubit indices (1, 6) $\rightarrow$ (2, 6) $\rightarrow$ (2, 7) $\rightarrow$ (3, 7) $\rightarrow$ (4, 7). Of course, as chains become longer, that is, as $L$ becomes larger, the available real-estate for picking the highest-quality qubits becomes constrained. \subsection{Rainbow Processor} \begin{figure} \caption{\label{fig:readout_error_rain} \label{fig:readout_error_rain} \end{figure} \begin{figure} \caption{\label{fig:t1_time_rain} \label{fig:t1_time_rain} \end{figure} \begin{figure} \caption{\label{fig:two_qubit_error_rain} \label{fig:two_qubit_error_rain} \end{figure} The Rainbow quantum processing unit is a 23-qubit superconducting processor with typical performance characteristics similar to Weber. 
While the results in the main text were generated on the Weber processor, we used Rainbow in order to assess the effect of varying initial conditions in the product state. These varying initial conditions were assessed in a 17-qubit chain, the largest 1D chain embeddable on Rainbow. This size of the chain is large enough to not be susceptible to the finite-size effects encountered in smaller chains. At the same time it is not so large that low retained count fractions cause problems with measurement statistics and thus create large error bars. Figs.~\ref{fig:readout_error_rain}, \ref{fig:t1_time_rain}, and \ref{fig:two_qubit_error_rain} show the three most relevant performance characteristics for choosing high-quality chains, parallel $e_{r1}$, $\tilde{T}_1$, and parallel $e_2$, respectively. However, the ability to avoid noisy qubits on Rainbow was severely constrained by the large size of the chain relative to the size of the processor. For reference, we used the embedding (5, 0) $\rightarrow$ (5, 1) $\rightarrow$ (4, 1) $\rightarrow$ (4, 2) $\rightarrow$ (4, 3) $\rightarrow$ (5, 3) $\rightarrow$ (5, 2) $\rightarrow$ (6, 2) $\rightarrow$ (7, 2) $\rightarrow$ (7, 3) $\rightarrow$ (6, 3) $\rightarrow$ (6, 4) $\rightarrow$ (6, 5) $\rightarrow$ (7, 5) $\rightarrow$ (7, 4) $\rightarrow$ (8, 4) $\rightarrow$ (8, 5). \section{Circuit Compilation, Calibration, and Optimization} In order to simulate the $T_6$ QCA with high-fidelity, we compile, calibrate, and optimize circuits in the following manner. The Cirq open source framework was used for the workflow \cite{cirq_developers_2021_5182845}. \begin{enumerate} \item Each $CH(q_i, q_j)$ gate between control qubit $q_i$ and target qubit $q_j$ in the local $T_6$ update unitary is compiled into \begin{equation} CH(q_i, q_j) = Y^{1/4}(q_j) CZ(q_i, q_j) Y^{-1/4}(q_j), \end{equation} where up to a global phase $Y^{t}=R_Y(\pi t)$. \item Each $CZ$ gate is further decomposed into two bare $\sqrt{\text{iSWAP}}^{\dagger}$ gates and single-qubit rotations. A controlled-phase gate between control qubit $q_i$ and target qubit $q_j$ can be decomposed as \begin{align}\label{eq:cphase_decomp} \begin{split} \mathrm{CPHASE}(\phi)_{ij}&=e^{i(2\varphi-\phi)/4}\\ \times &R_{Z_i}(\pi-\phi/2) \otimes R_{Z_j}(-\phi/2)\\ \times &R_{X_i}(-\xi_i) \otimes R_{X_j}(-\xi_j) \\ \times &K(\theta)_{ij}\mathrm{CPHASE}(\varphi)_{ij}\\ \times &R_{Z_i}(\pi+\varphi/2)\otimes R_{Z_j}(\varphi/2)\\ \times &R_{X_i}(-2\alpha) \otimes 1_j\\ \times &K(\theta)_{ij}\mathrm{CPHASE}(\varphi)_{ij}\\ \times &R_{Z_i}(\phi/2) \otimes R_{Z_j}(\phi/2)\\ \times &R_{X_i}(\xi_i)\otimes R_{X_j}(\xi_j), \end{split} \end{align} which is equivalent to the decomposition presented in Ref. \cite{arute2020observation}. Here, $K(\theta)$ is a continuously-parameterized fractional-iSWAP gate such that $\sqrt{\text{iSWAP}}^{\dagger}=K(\pi/4)$. In this step, we take $\varphi=0$ and account for the parasitic $\varphi \approx \pi/23$ using Floquet calibration in a later step. The other decomposition parameters are given by \begin{equation} \sin(\alpha) = \sqrt{\frac{\sin^2(\phi/4)-\sin^2(\varphi/2)}{\sin^2(\theta)-\sin^2(\varphi/2)}}, \end{equation} \begin{align} \begin{split} \xi_i &= \tan^{-1}\Bigg(\frac{\tan(\alpha)\cos(\theta)}{\cos(\varphi/2)}\Bigg)\\ &+\frac{\pi}{2}\Big(1-\text{sgn}\Big(\cos(\varphi/2)\Big)\Big), \end{split} \end{align} and \begin{align} \begin{split} \xi_j &= \tan^{-1}\Bigg(\frac{\tan(\alpha)\sin(\theta)}{\sin(\varphi/2)}\Bigg)\\ &+\frac{\pi}{2}\Big(1-\text{sgn}\Big(\sin(\varphi/2)\Big)\Big). 
\end{split} \end{align} To decompose the $CZ$ gate, we simply set $\phi=\pi$. \item Strings of consecutive single-qubit gates are merged together into $\text{PhXZ}(a, x, z) \equiv Z^z Z^a X^x Z^{-a}$ gates using the cirq.google \textit{optimized\_for\_sycamore} utility. This compresses into alternating moments of parallel single-qubit gates followed by parallel two-qubit gates. \item Spin echo insertion: Wherever a qubit is idle for more than one single-qubit gate layer, pairs of Pauli operators are inserted to decrease idle qubit crosstalk. For instance, $XX=1$. \item Floquet characterization: Our targeted native two-qubit gate on Weber is the $\sqrt{\text{iSWAP}}^{\dagger}=K(\pi/4)$ gate. However, due to calibration drift and cross-talk errors, the effective gate implemented has the form (see Supplementary Information of Refs.~\cite{arute2020observation, neill2021accurately}) \begin{equation} \begin{split} \quad \quad & K(\theta,\zeta, \chi,\gamma,\varphi) = \\ & \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & e^{-i\gamma-i\zeta}\cos(\theta) & -i e^{-i\gamma +i\chi}\sin(\theta) & 0\\ 0 & -i e^{-i\chi - i\gamma}\sin(\theta) & e^{-i\gamma+i\zeta}\cos(\theta) & 0 \\ 0 & 0 & 0 & e^{-2i\gamma - i\varphi} \end{bmatrix}. \end{split} \end{equation} In the absence of calibration drift and cross-talk errors, $\theta=\pi/4$ and $\zeta=\chi=\gamma=\varphi=0$, and one recovers the target gate as $\sqrt{\text{iSWAP}}^{\dagger}=K(\pi/4, 0, 0, 0, 0)$. When such drift and errors are present, the goal of Floquet characterization is to determine the values of $\theta,\zeta, \chi,\gamma,$ and $\varphi$ with a view towards subsequently correcting for some or all of them. This is done by repeating circuit moments with two-qubit gates many times with interleaved, parameterized z-rotations as probes, in order to amplify the calibration drift and control errors embodied in $\theta,\zeta, \chi,\gamma,$ and $\varphi$. All five angles except for $\chi$ are able to be characterized, since $\chi$ corresponds to a complex hopping phase and requires closed loops in order to characterize. For more details, see Refs. \cite{arute2020observation, neill2021accurately}. \item After Floquet characterization, the first angle we correct for is the parasitic cphase angle, $\varphi$. We do this by reconstructing our circuits and re-decomposing our $CZ$ gates using Eq.~\ref{eq:cphase_decomp} where now instead of $\varphi=0$ we use the value determined by Floquet characterization on each moment, which is typically in the vicinity of $\varphi\approx \pi/23$. \item Next, we re-merge single-qubit gates according to the procedure in Step 3. \item Re-insert spin echoes according to the logic of Step 4. \item Floquet calibration: In addition to $\varphi$, the angles $\gamma$ and $\zeta$ can be corrected for by inserting z-rotations on either side of an imperfect two-qubit gate. The relationship between the general excitation-number-conserving gate and our target gate (with parasitic cphase) is \begin{equation}\label{eq:angle_correction} \begin{split} \quad \quad &K(\theta, 0, 0, 0, \varphi) = \\ &R_Z(-\beta,\beta)R_Z(\gamma, \gamma)K(\theta,\zeta,\chi,\gamma,\varphi)R_Z(-\alpha, \alpha), \end{split} \end{equation} where $\alpha=(\zeta+\chi)/2$, $\beta=(\zeta-\chi)/2$, and we use the following shorthand for pairs of two single-qubit z-rotations, $R_Z(z_i,z_j)=e^{i(z_i+z_j)/2}R_{Z_i}(z_i)\otimes R_{Z_j}(z_j)$. Since we cannot characterize $\chi$, we set $\chi=0$. 
The resulting simplification in Eq.~\ref{eq:angle_correction} will correct for $\gamma$ and $\zeta$. As such, after the insertion of the z-rotations in Eq.~\ref{eq:angle_correction}, each $CZ$ gate in the original QCA circuit is compiled correctly as in Eq.~\ref{eq:cphase_decomp} with the effective two-qubit native gate $\sim K(\pi/4,0,0,0,\pi/23) = K(\pi/4)\mathrm{CPHASE}(\pi/23)$. The angles $\theta\sim\pi/4$ and $\chi\sim 0$ remain uncorrected. The inserted, corrective z-rotations are merged with the execution of two-qubit gates at the hardware level and do not need to be re-merged into $\text{PhXZ}$ gates. \item Send circuits to the quantum processor, Weber or Rainbow, in temporal batches, averaging over qubit arrangements. \end{enumerate} \section{Post-Selection} \begin{figure} \caption{\label{fig:ps_overhead} \label{fig:ps_overhead} \end{figure} Once measurement statistics are gathered by measuring $N_c$ times in the $z$-basis, post-selection is used to discard measurements, i.e. bit strings, that result from error and thus reside outside the invariant-protected sector of Hilbert space. We emphasize that with a higher fidelity, post-selection would be unnecessary for the establishment of coherence windows; generally however, in the NISQ era post-selection is an important and useful tool for quantum computing. The dynamical invariant that defines the protected sector of Hilbert space is the operator \begin{equation}\label{eq:invariant} \begin{split} \mathcal{O} \equiv \sum_{i=0}^L \mathcal{O}_i,\\ \mathcal{O}_i \equiv Z_i Z_{i+1} \end{split} \end{equation} where the sites $i=0$ and $i=L+1$ refer to padding boundary qubits fixed to the $|0\rangle$ state. We first give a proof that $\mathcal{O}$ is a dynamical invariant of the $T_6$ rule. Consider the commutator between Eq.~\ref{eq:qca_unitary} and Eq.~\ref{eq:invariant} \begin{equation}\label{eq:new_full_comm} \begin{split} \Big[ \mathcal{O}, U(T_R;t,t+1) \Big] = \sum_{i=0}^L \Big[\mathcal{O}_i,\prod_o U_o \prod_e U_e \Big]\\ = \sum_{i=0}^L \Big( \Big[\mathcal{O}_i,\prod_o U_o \Big]\prod_e U_e + \prod_o U_o \Big[\mathcal{O}_i,\prod_e U_e \Big] \Big). \end{split} \end{equation} Using standard commutator identities, the two commutators in the second line of Eq.~\ref{eq:new_full_comm} can be expressed as \begin{equation}\label{eq:half_comm} \begin{split} \Big[\mathcal{O}_i, \prod_j U_j \Big]=\sum_j \prod_{l'>j} U_{l'}\Big[\mathcal{O}_i,U_j\Big]\prod_{l<j}U_l\\ =\sum_j \prod_{l'>j} U_{l'}\Big( Z_i\Big[Z_{i+1},U_j\Big]+\Big[Z_{i},U_j\Big]Z_{i+1}\Big)\prod_{l<j}U_l, \end{split} \end{equation} where $j$ can run over either the even or odd index set. Next, note that since any projection operator commutes with the Pauli $Z$ operator, \begin{equation}\label{eq:term_comm} \begin{split} \Big[ Z_{k} , U_j \Big] = P_{j-1}^{(m)}\Big[ Z_j, V^{c_{mn}}_j \Big] \delta_{k,j} P_{j+1}^{(n)}. \end{split} \end{equation} Substituting Eqs.~\ref{eq:term_comm} and~\ref{eq:half_comm} into Eq.~\ref{eq:new_full_comm} and noting that $Z_k P_k^{(m)} = P_k^{(m)} Z_k = (-1)^m P_k^{(m)}$ we find \begin{equation} \begin{split} \Big[ \mathcal{O}, U(T_R;t,t+1) \Big] =\\ \sum_o \prod_{l'>o} U_{l'}\sum_{m,n}P_{o-1}^{(m)}\Big[Z_o,V_o^{c_{mn}}\Big]P_{o+1}^{(n)}\Big( (-1)^m \\ + (-1)^n \Big)\prod_{l<o}U_l \prod_e U_e\\ + \prod_o U_o \sum_e \prod_{l'>e} U_{l'}\sum_{m,n}P_{e-1}^{(m)}\Big[Z_e,V_e^{c_{mn}}\Big]P_{e+1}^{(n)}\Big( (-1)^m \\ + (-1)^n \Big)\prod_{l<e}U_l \end{split} \end{equation} We now specify to the rule $T_6$. There are four combinations of $m$ and $n$ to consider. 
In the instance where $m=n=0$ or $m=n=1$, the commutator $[Z_j,V_j^{c_{m=n}=0}]=0$. Meanwhile, when $m\neq n$, $(-1)^m + (-1)^n = 0$. Therefore, each term indexed by $(m,n)$ vanishes and \begin{equation} \Big[ \mathcal{O}, U(T_6;t,t+1) \Big]=0, \end{equation} proving that $\mathcal{O}$ is a dynamical invariant of rule $T_6$ for \textit{any} unitary $V_j$. In addition, $\mathcal{O}$ has eigenstates diagonal in the $z$-basis. Hence, any $z$-basis measurement whose eigenvalue under $\mathcal{O}$ is different than the $\mathcal{O}$-eigenvalue of the QCA's initialized state must have resulted from an error in the computation. In order to see this, and that Eq.~\ref{eq:invariant} is related to domain wall conservation, consider the action of $\mathcal{O}$ on a five-qubit chain, initialized with a single, central bit flip, and padded by fixed boundary $|0\rangle$s \begin{equation} \mathcal{O}\,|0\rangle \!\otimes\! |00100\rangle \!\otimes\! |0\rangle = 2\, |0\rangle \!\otimes\! |00100\rangle \!\otimes\! |0\rangle. \end{equation} Now consider the action of $\mathcal{O}$ on another state with the same number of domain walls \begin{equation}\label{eq:good_state} \mathcal{O}\,|0\rangle \!\otimes\! |01110\rangle \!\otimes\! |0\rangle = 2\, |0\rangle \!\otimes\! |01110\rangle \!\otimes\! |0\rangle. \end{equation} If however a bit flip were to occur on the central qubit in the chain, the action of $\mathcal{O}$ would be \begin{equation} \mathcal{O}\,|0\rangle \!\otimes\! |01010\rangle \!\otimes\! |0\rangle = -2\, |0\rangle \!\otimes\! |01010\rangle \!\otimes\! |0\rangle. \end{equation} Hence, the calculated eigenvalue of $-2$, rather than $+2$, indicates that we should discard this measurement from our statistics when calculating observables. Note that $\mathcal{O}$ does not protect against all longitudinal relaxation errors. For example, if the state in Eq.~\ref{eq:good_state} relaxes on the qubit just left or right of center, $\mathcal{O}$ still has eigenvalue $+2$. In addition, if two photons are lost, say in both the central qubit and in the one just to the right or left, $\mathcal{O}$ will also have eigenvalue $+2$. In instances such as these, superposition amplitudes will renormalize in such a way as to degrade the fidelity of the computation while still keeping the processor's state superposition in the protected subspace. Fig.~\ref{fig:ps_overhead} shows the fraction of initial processor counts, $N_c$, that are retained after applying post-selection to our experimental measurements from Weber as a function of QCA cycle depth for all system sizes considered, $L=5, 7, 9, \ldots, 23$. Unsurprisingly, from $t=0$ to $t=14$, each set of data points with constant $L$ decays exponentially as a function of QCA cycle as the post-selection procedure has to discard exponentially many measurements in order to compensate for $\tilde{T}_1$ decoherence. At $t=15$, there is an evident step change in the decay rate of the retained count fractions. The nominal gate execution times for $\text{PhXZ}$ and $\sqrt{\text{iSWAP}}^{\dagger}$ on Weber are 25 ns and 32 ns, respectively. Since each QCA cycle has eight alternating layers of each, at $t=15$, the nominal circuit execution time (excluding readout) is about $6.84 \: \mu \text{s}$. While median $\tilde{T}_1$ times are about $15 \: \mu \text{s}$, low end (10th percentile) $\tilde{T}_1$ times are about $11 \: \mu \text{s}$ on Weber. 
Given these gate times and $\tilde{T}_1$ values, at $6.84 \: \mu \text{s}$ the probability of a 10th percentile qubit erroneously relaxing from the $|1\rangle$ state is about $47 \: \%$, making it difficult for post-selection to compensate for multi-photon loss processes or delocalizing errors. QCA cycle $t=15$ is also the point referenced in the main text (e.g., main text Fig. 2b) where the error bars on population dynamics begin to grow significantly. In addition, the retained count fraction also decays roughly exponentially at fixed $t$ as a function of $L$. At fixed QCA cycle depth, the number of gate layers, and thus the circuit execution time relative to $\tilde{T}_1$, does not change with system size. Since gate \textit{volume}, however, grows linearly in $L$, it is likely that this decay in the retained count fraction as a function of system size is a result of accumulating gate error, as is well-established within the digital error model \cite{arute2019quantum}. \begin{figure} \caption{\label{fig:hilbert_dim} Relative dimension $\dim(\mathcal{H}_{\mathcal{O}})/\dim(\mathcal{H})$ of the invariant-protected subspace as a function of system size $L$, for initial conditions consisting of a few isolated $|1\rangle$'s.} \end{figure} Since $T_6$ conserves $\mathcal{O}$, the system evolves in a Hilbert space of reduced dimension, $\mathcal{H}_{\mathcal{O}}$. In Fig.~\ref{fig:hilbert_dim} we plot the relative dimension $\dim(\mathcal{H}_{\mathcal{O}}) / \dim(\mathcal{H})$ for initial conditions consisting of a few $|1 \rangle$'s separated by $|0 \rangle$'s. For the case studied in the main text, a single $|1\rangle$, the relative Hilbert space dimension decays as $\propto e^{-L/1.63}$. Thus, since $\dim(\mathcal{H})=2^L$, we have $\dim(\mathcal{H}_{\mathcal{O}}) \propto (2e^{-1/1.63})^L \approx 1.08^L$. \section{Mutual Information Measures} \begin{figure} \caption{\label{fig:shannon_v_vN} Comparison of Shannon and von Neumann mutual information for an emulated $L=19$ chain evolved under $T_6$ for 10,000 cycles: (a) distribution of the relative Frobenius distance between the two matrices; (b) relative differences of the derived network measures.} \end{figure} Consider a set of $z$-basis measurements $\{|z\rangle \ \text{s.t.} \ |z\rangle \in \{|0\rangle, |1\rangle\}^{\otimes N}\}$ from an $N$-qubit (subset of a) quantum processor and associated probabilities, $\{P_z\}$. For any two qubits, $q_i$ and $q_j$, one can construct a $z$-basis joint and two marginal probability distributions via \begin{align} p(z_i,z_j) &= \sum_{z'} P_{z'} \delta_{z_i,z_i'} \delta_{z_j,z_j'}\\ p(z_{i(j)}) &= \sum_{z_{j(i)}} p(z_i,z_j). \end{align} For two classical variables (e.g., bits) $z_i$ and $z_j$ with joint probability distribution $p(z_i,z_j)$ and marginal probability distributions, $p(z_i)$ and $p(z_j)$, the Shannon mutual information is defined in terms of the Shannon entropy of a distribution, ${\rm H}(p)$, as (see e.g., Ref.~\cite{nielsen2002quantum}) \begin{equation} I_{ij} \equiv {\rm H}[p(z_i)] + {\rm H}[p(z_j)] -{\rm H}[p(z_i,z_j)] \end{equation} and can be written as \begin{equation} I_{ij} \equiv \sum_{z_i=0}^1 \sum_{z_j=0}^1 p(z_i, z_j) \log_2 \frac{p(z_i, z_j)}{p(z_i)p(z_j)}. \end{equation} It measures, as a relative entropy, correlations in the variables $z_i$ and $z_j$. That is, $I_{ij}\geq 0$ and $I_{ij}=0$ if and only if $p(z_i,z_j)=p(z_i)p(z_j)$. The quantum mechanical generalization of the classical probability distributions is the reduced density matrix (RDM). For a Hilbert space composed of two subsystems, $\mathcal{H}=\mathcal{H}_A \otimes \mathcal{H}_B$, and a density matrix $\rho \in \mathcal{H}$, the RDM of subsystem $\mathcal{H}_A$ is defined as \begin{equation} \rho_A \equiv \text{Tr}_B \rho, \end{equation} where the trace is taken over the degrees of freedom of subsystem $\mathcal{H}_B$. RDMs can also be constructed from few-body quantum observables.
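In practice, the Shannon mutual information matrix defined above is estimated directly from the (post-selected) bit strings. The following sketch is ours and assumes the measurement record is simply a list of equal-weight bit strings; the use of \texttt{numpy} is an implementation choice not specified in the text.
\begin{verbatim}
import numpy as np

def shannon_mi_matrix(bitstrings):
    """Pairwise Shannon mutual information I_ij (in bits), estimated
    from z-basis bit strings via empirical joint/marginal distributions."""
    data = np.asarray(bitstrings, dtype=int)
    n_shots, n_qubits = data.shape
    I = np.zeros((n_qubits, n_qubits))
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            joint = np.zeros((2, 2))
            for zi, zj in zip(data[:, i], data[:, j]):
                joint[zi, zj] += 1.0 / n_shots
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            mi = sum(joint[a, b] * np.log2(joint[a, b] / (pi[a] * pj[b]))
                     for a in (0, 1) for b in (0, 1) if joint[a, b] > 0)
            I[i, j] = I[j, i] = mi
    return I
\end{verbatim}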
Returning to the construction of RDMs from few-body observables: the single-qubit RDM for qubit $q_i$ can be written as \begin{equation} \rho_i = \frac{1}{2}\sum_{\mu=0}^3 \langle \sigma_i^{\mu} \rangle \sigma_i^{\mu}, \end{equation} where $\sigma_i^0=1_i$, $\sigma_i^1=X_i$, $\sigma_i^2=Y_i$, and $\sigma_i^3=Z_i$ are elements of the Pauli algebra, and expectation values are calculated from quantum processor measurements in the appropriate basis \cite{gamel2016entangled}. Similarly, a $k$-qubit RDM can be calculated as \begin{equation} \rho_{i_1\ldots i_k}=\frac{1}{2^k}\sum_{\mu_1,\ldots,\mu_k=0}^3 \langle \sigma_{i_1}^{\mu_1}\otimes \ldots \otimes \sigma_{i_k}^{\mu_k} \rangle \sigma_{i_1}^{\mu_1}\otimes \ldots \otimes \sigma_{i_k}^{\mu_k}. \end{equation} Expectation values of long Pauli strings can be efficiently calculated via schemes such as quantum overlapping tomography \cite{cotler2020quantum}. With an RDM in hand, the von Neumann entropy is defined as \begin{equation} S(\rho) \equiv - \text{Tr} \rho \log_2 \rho \end{equation} and the von Neumann mutual information between two qubits follows similarly \begin{equation} I_{ij}^{\rm vN} \equiv S(\rho_i) + S(\rho_j) - S(\rho_{ij}). \end{equation} Calculation of $I_{ij}^{\rm vN}$ therefore requires measurements of observables, such as $\langle X_i Z_j \rangle$, in bases other than the $z$-basis. However, the dynamical invariant introduced in the main text, $\mathcal{O}=\sum_{i=0}^L Z_i Z_{i+1}$, can only be used to post-select against errors when measurement is performed in the $z$-basis. As such, we choose to use Shannon mutual information for the calculation of complex network measures. As validation for this choice, we numerically emulate $L=19$ qubits initialized with a single bit flip and evolving under $T_6$ for 10,000 cycles. At each cycle we calculate the relative Frobenius distance $\Vert I - I^{\rm vN} \Vert_{\rm F} / \Vert I \Vert_{\rm F}$, where the Frobenius norm of a matrix $M$ with elements $M_{ij}$ is defined as \begin{equation} \Vert M \Vert_{\rm F} = \sqrt{\sum_{i,j=0}^{L-1} \vert M_{ij}\vert^2}. \end{equation} Fig.~\ref{fig:shannon_v_vN}a shows the distribution of the relative Frobenius distance between the Shannon and von Neumann mutual information across 10,000 cycles. The median relative distance is $9.7\%$. We also examine relative differences of network measures (defined in detail in the next section) in Fig.~\ref{fig:shannon_v_vN}b. Clustering is the most similar between Shannon and von Neumann-based mutual information (blue histogram, median relative difference $0.7\%$). Path length (gold histogram) exhibits a median relative difference of $-18\%$, while that of node strength (green histogram) is $8\%$. We conclude that the Shannon mutual information is a reliable proxy for the von Neumann mutual information, with relative differences ranging from below one percent (clustering) to a few tens of percent (path length). Moreover, since we use only the Shannon mutual information in the main text, our conclusions regarding QCA network complexity, as compared to random networks and post-selected incoherent uniform random states, stand irrespective of the mutual information definition employed. \section{Complex Network Measures} \subsection{Clustering Coefficient} In order to establish the small-world character of the $T_6$ QCA's mutual information network, we focus on three canonical complex network measures: clustering, path length, and node strength distribution. Intuitively, one can view the mutual information matrix $I_{ij}$ as an adjacency matrix, in analogy with that of a transportation network.
For example, the economic activities of two cities are likely to be more tightly intertwined if they are connected by high-speed rail (large adjacency weight) rather than by a rutted dirt track (low adjacency weight). Similarly, the dynamics of two qubits will be more strongly correlated if they share more mutual information. Continuing with this analogy, the clustering coefficient gauges how locally traversable a network is. Suppose Alice is attempting to reach City C from City A, but rail lines exist only from City A to City B and from City B to City C, while no line, or perhaps just a dirt track, exists between City A and City C. Locally, this is an inefficient network to traverse, since Alice either has to travel through City B or has to take the dirt track to get between Cities A and C. A highly traversable network at the local scale also implies high network transitivity; that is, if Cities A and B are well-connected and Cities B and C are well-connected, then so should be Cities A and C. The \textit{global clustering coefficient} assesses this local transitivity in a manner that averages over the entire network. At a particular cycle depth, $t$, it is defined as \begin{equation}\label{eq:clustering} \mathcal{C} \equiv \frac{\text{Tr}[I^3]}{\sum_{i\neq j=1}^L [I^2]_{ij}}. \end{equation} The numerator of Eq.~\ref{eq:clustering} counts the weighted number of closed triangles (node triplets) in the network, while the denominator is the weighted number of length-2 paths in the network, that is, the weighted number of potentially closed triangles. In the main text, we show that the Goldilocks rule $T_6$ exhibits sizeable clustering, much larger than the post-selected uniform random state. The clustering grows with system size within the Weber processor coherence window, indicating experimental formation of a series of locally traversable networks. \subsection{Average Shortest Path Length} A network's \textit{average shortest path length} measures the extent to which it is globally traversable. In the transportation network analogy, a network of cities (e.g., Atlanta, Nashville, Indianapolis, Detroit, Pittsburgh, Baltimore, Raleigh, and Charlotte) laid out in a one-dimensional ring topology but only connected to their nearest neighbors (Atlanta and Indianapolis for Nashville) and next-nearest neighbors (Charlotte and Detroit for Nashville) via high-speed rail lines would have very high local traversability, i.e., transitivity. However, it would take a relatively long time to traverse between antipodes of the ring (e.g., Nashville to Baltimore) and thus those cities' economic activities might be less correlated. From the standpoint of network weights, a large mutual information between two nodes should contribute to shortening path length, just as a rail line that carries more passengers would, while a small mutual information should increase the path length by virtue of its low traversability, like the dirt track. We therefore define the network-averaged weighted path length for a particular cycle depth as \begin{equation} \begin{aligned} \ell \equiv \frac{1}{L(L-1)} &\sum_{i\neq j=1}^L d_{ij},\\ d_{ij} = \min_{p_{ij} \in \mathcal{P}_{ij}} &\sum_{\langle k, l \rangle \in p_{ij}} 1/I_{kl}, \end{aligned} \end{equation} where $d_{ij}$ is the minimum distance between nodes (i.e., qubits) $q_i$ and $q_j$, the sum in the first line runs over all pairs of qubits, and $L(L-1)$ is the number of pairs in the network. The sum in the second line runs over all edges ($\langle k,l \rangle$) in a particular path ($p_{ij}$) between nodes $q_i$ and $q_j$, and the minimum is taken over all possible paths ($\mathcal{P}_{ij}=\{p_{ij}\}$) between the two nodes. As discussed, we use $1/I_{kl}$ as weights in the summand under the intuition that edges with large mutual information are traversed easily, while small mutual information values should be detrimental to short path length, consistent with \cite{muldoon2016small}.
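As a concrete reference for how these two measures are evaluated, the sketch below (ours) computes Eq.~\ref{eq:clustering} and the weighted distances $d_{ij}$ from a mutual-information matrix; relying on \texttt{scipy}'s shortest-path routine is an implementation choice of ours, not one specified in the text.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import shortest_path

def clustering_coefficient(I):
    """Global clustering coefficient: Tr[I^3] over the off-diagonal sum
    of I^2 (weighted triangles over weighted length-2 paths)."""
    I2 = I @ I
    return np.trace(I2 @ I) / (I2.sum() - np.trace(I2))

def average_path_length(I):
    """Network-averaged weighted shortest path length, with edge weights
    1/I_kl; entries with I_kl = 0 are treated as absent edges."""
    with np.errstate(divide='ignore'):
        weights = np.where(I > 0, 1.0 / I, 0.0)
    d = shortest_path(weights, method='D', directed=False)
    L = I.shape[0]
    return d[~np.eye(L, dtype=bool)].mean()
\end{verbatim}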
\subsection{Node-Strength Distribution} The last complex network measure we consider is the \textit{node-strength distribution}. In the transportation network analogy, an efficiently traversable network will have many nodes like Grand Central Station (termed \textit{hubs}) that have many, strong connections to other nodes, as well as some nodes that have few or weak connections, like Golden, Colorado, whose servicing light rail line terminates three miles short of the center of town. Mathematically, this translates into a broad, flat node-strength distribution. We calculate a size-invariant (un-normalized) node-strength distribution, \begin{equation}\label{eq:node_strength} P[g_i/(L-1)] = \text{hist}\Bigg[ \frac{1}{L-1} \sum_j I_{ij} \Bigg] . \end{equation} Practically, Eq.~\ref{eq:node_strength} is evaluated by aggregating all size-scaled values of $g_i/(L-1)=\sum_{j}I_{ij}/(L-1)$ across all system sizes, $L\in\{5, 7, \ldots, 23\}$, into a set which is then histogrammed. The resulting distribution is increasingly biased towards low node strength as the network approaches randomness. Small-world networks have broad, flat distributions, while distributions that are more sharply peaked around smaller ranges of values indicate increasing non-random regularity in the network. \section{Establishing the Coherence Window} \begin{figure} \caption{\label{fig:coherence_window} Post-selected experimental clustering coefficient from Weber (green points) as a function of QCA cycle for all system sizes studied; the horizontal dashed line marks the clustering of the post-selected incoherent uniform random state, and the vertical dashed lines delimit the coherence window.} \end{figure} As discussed in the main text, our Goldilocks QCA circuits, when aided by post-selection, generate mutual information complex network observables that exhibit structure in excess of post-selected incoherent uniform randomness. Fig.~\ref{fig:coherence_window} shows the clustering coefficient calculated from the post-selected data generated from the Weber processor (green data points), as a function of QCA cycle and for all system sizes simulated, $L\in \{5, 7, \ldots, 23\}$. Over the first few cycles, the clustering coefficient climbs from zero until it crosses the horizontal dashed line in each panel, which is the value of the clustering coefficient for the incoherent uniform random state subject to the same post-selection procedure as the QCA data from Weber. The left-most vertical dashed line demarcates this point, which is the QCA cycle at which the post-selected experimental clustering becomes larger than post-selected randomness. In all cases, the post-selected experimental clustering continues to climb for a few cycles until decoherence mechanisms begin to degrade the data beyond what post-selection can mitigate, typically at $t\approx 8$. After a few more cycles, the post-selected experimental clustering drops to the clustering value of post-selected incoherent uniform randomness and remains there for the rest of the simulation. The right-most vertical dashed line demarcates this point, which occurs at $t\approx 12$ roughly independently of system size.
For each system size, the region between the two vertical dashed lines is where the post-selected experimental clustering is in excess of post-selected incoherent uniform randomness and is therefore the \textit{coherence window} over which we calculate the cycle-averaged complex network observables. We remark that after the end of the coherence window, $t\approx 12$, for $L=21$ and $L=23$ the post-selected experimental clustering fails to return to the value associated with post-selected incoherent uniform randomness. We do not regard this as an extension of the coherence window. Rather, inspection of Fig.~\ref{fig:ps_overhead} shows that after $t\approx 12$, the 21-qubit chains retain fewer than roughly 100 of the 100,000 initial measurements, while the 23-qubit chains retain fewer than about 10 of the initial 100,000 measurements. As such, error bars become significant and calculation of observables after $t\approx 12$ in each case becomes unreliable. \section{Effect of Higher Product State Filling} \begin{figure*} \caption{\label{fig:clusterings_ic} Clustering dynamics of a 17-qubit chain on the Rainbow processor for initial conditions with two (left), three (middle), and four (right) isolated bit flips, comparing emulation, raw experiment, and post-selected experiment.} \end{figure*} Throughout the main text, we have focused on a particular initial condition for our Goldilocks QCA circuits, namely $|0\ldots 010 \ldots 0\rangle$, a single central bit flip in a chain of $L$ qubits. This was done so as to be able to perform finite-size scaling analyses of complex network measures in a sensible and well-controlled manner, and because of the close analogy to a standard initial condition for elementary classical cellular automata. Moreover, it was previously shown that the ability of Goldilocks QCA to generate physical complexity is largely insensitive to initial conditions \cite{hillberry2021entangled}. However, it is an important question whether our experimental protocol for effectively generating coherent small-world mutual information networks, including post-selection, generalizes beyond the particular instance of the single bit-flip initialization. Fig.~\ref{fig:clusterings_ic} shows the effect of increasing the number of isolated bit flips in the initial state on clustering dynamics of a 17-qubit chain simulated on Google's 23-qubit Rainbow processor. In particular, the left panel corresponds to the initial condition $|00000100000100000\rangle$, the middle panel corresponds to the initial condition $|00010000100001000\rangle$ and the right panel corresponds to the initial condition $|00100010001000100\rangle$, i.e., equally spaced $|1\rangle$'s. We observe the following. First, the long-time average clustering values of the emulated data fall as the number of initial bit flips increases. Second, the raw clustering follows the expected behavior in all the panels, rising briefly before decaying down to incoherent uniform randomness at around $t\approx 12$. Third, and most importantly, we note that increasing the number of initial bit flips degrades the ability of our post-selection to correct for errors. While two initial bit flips (left panel) result in post-selected experimental clustering that follows the emulated curve relatively closely until $t\approx 4$, when it begins to degrade, three bit flips (middle panel) result in agreement out to only $t\approx 2$, and four initial bit flips (right panel) see immediate disagreement between emulation and post-selected experiment after $t\approx 1$.
In fact, for four initial bit flips, there is no QCA cycle for which post-selected experimental clustering is meaningfully improved over raw experimental clustering during the coherence window. This indicates that while our experimental protocol, including post-selection, is most appropriately applied at very low initial bit flip filling, it struggles to produce emergent complexity in the face of noise at higher filling fractions. This is likely because at higher initial isolated bit-flip filling, there are more error processes that cause amplitude renormalization but still keep the state vector in the protected sector of Hilbert space. That is, there are more locations in a higher-filling bit string where an erroneous bit flip will nonetheless result in domain wall conservation. \end{document}
\begin{document} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \font \tensbold = cmssbx10 \font \sevensbold = cmssbx10 at 7pt \font \fivesbold = cmssbx10 at 5pt \newfam\sboldfam \textfont\sboldfam=\tensbold \scriptfont\sboldfam=\sevensbold \scriptscriptfont\sboldfam=\fivesbold \def\sbold{\fam\sboldfam\tensbold} \parskip = 0 pt \parindent = 10 pt \abovedisplayskip=13pt plus 3pt minus 9pt \belowdisplayskip=13pt plus 3pt minus 9pt \centerline{\bf SUBDYNAMICS THROUGH TIME SCALES AND SCATTERING MAPS}\par \centerline{\bf IN QUANTUM FIELD THEORY}\par \vskip 5 pt \centerline{Ludovico~Lanz$^{(1)}$, Olaf~Melsheimer$^{(2)}$ and Bassano~Vacchini$^{(1)}$}\par {\it \centerline{$^{(1)}$Dipartimento di Fisica dell'Universit\`a di Milano, INFN, Sezione di Milano,} \centerline{Via Celoria 16, I-20133, Milano, Italy} \centerline{$^{(2)}$Fachbereich Physik, Philipps-Universit\"at, Renthof 7, D-3550 Marburg, Germany} } \vskip 10 pt \centerline{\sc Abstract}\par { \baselineskip=11pt \it It is argued that the dynamics of an isolated system, due to the concrete procedure by which it is separated from the environment, has a non-Hamiltonian contribution. By a unified quantum field theoretical treatment of typical subdynamics, e.g., hydrodynamics, kinetic theory, master equation for a particle interacting with matter, we look for the structure of this more general dynamics. }\par \vskip 10 pt \noindent \par \section{The concept of physical system} \par Quantum mechanics (QM) has non-separability as its most striking feature, i.e., one cannot attribute ``properties'' to parts of a system, and therefore typical problems like the measurement process and EPR situations arise. This feature is so deeply rooted in the mathematical structure of QM that we believe one should not try to make it less stringent, e.g., by attempts like ``spontaneous reduction''~\cite{GRW}. We prefer instead to weaken the very concept of physical system: usually the ``isolation'' of a physical system is taken for granted, while in our opinion the way in which isolation is achieved belongs to the very definition of the system. Any attempt inside QM to obtain the subdynamics for a subsystem enforces the introduction of a suitable time scale in order to break the correlations with the environment; in a completely sharp description of the dynamics of a subsystem the physics of the whole universe would enter. The preparation procedure leading to a system, isolated during a time interval $[t_0,t_1]$ and confined in a spatial region $\omega$, covers a time interval $[T,t_0]$, which will be called the ``preparation time''. Due to the confinement the basic space-time symmetries are broken and by suitable boundary conditions ``peculiar'' properties of the system are introduced. This obviously reduces the universal character of the dynamical description; however, an important universal behaviour still remains due to symmetry and locality (or short range character) of effective interactions, whose relevance becomes particularly evident in the quantum field theoretical approach. We thus regard a physical system as a part of the world under control by a suitable preparation, whose local behaviour is explained in terms of locally interacting quantum fields. The choice of these fields depends on the level of description of the system.
A large part of physics can be explained in terms of quantum fields related to molecules with a typical time scale of the order $\approx 10^{-13}$ s; a much more refined description arises if the basic fields are related to nuclei and electrons, then the basic theory would be QED and a much smaller time scale $\approx 10^{-23}$ s could be considered: however such role of QED as basic theory of macrosystems is far from being exploited. \par In a sense our viewpoint can appear as opposite to the most widely spread one, which we synthetise as follows: particles are the primary systems, related to non-confined quantum fields and to basic symmetries, all other systems are structure of particles; one then tries to obtain a typical macroscopic behaviour in some suitable thermodynamic limit. According to us, on the contrary, macroscopic systems are to be taken as the primary systems, even if in their definition time scales and spatial confinement must be carefully taken into account; the theoretical framework for their description is quantum field theory, locality and quantisation taking the place of the atomistic model. In this context particles are a derived concept. The description of non-equilibrium systems is put in the foreground and at least in principle should be performed taking boundary effects into account; procedures like ``continuous'' limit should be applied only at the end, if one wants to get rid of boundary effects. This standpoint is closer to thermodynamics and electromagnetism, while the former one originates from classical mechanics. The relevance of macroscopic systems for the foundations of QM is the starting point of Ludwig's axiomatic approach to QM~\cite{Foundations}. The insistence on the distinction between these two attitudes is due to the fact that they lead in a natural way to two different formulations of the dynamics. In the first approach one associates a wave function $\psi$ to each system ($\psi ({\mbox {\bf x}},t)$ for one particle, $\ldots\,$, $\psi ({\mbox {\bf x}}_1,{\mbox {\bf x}}_2,\ldots,{\mbox {\bf x}}_N,t)$ for N particles); obviously if one describes situations like ``unpolarised'' particles it is appropriate to use a statistical operator, in order to take a lack of control of the experimental specification into account. This aspect becomes increasingly important for large N, so that statistical operators are very useful for macroscopic systems, nevertheless the basic dynamics is given by an evolution operator for the wavefunction $\psi$. On the other hand, starting with a macroscopic system, one is led to assume a statistical operator ${\hat \varrho}_t$ as the most appropriate mathematical representation of the preparation procedure until time $t$. The set $\cal K({\cal H})$ of statistical operators on the Hilbert space ${\cal H}$ becomes most important and the space $\cal T ({\cal H}) $ of trace-class operators, which is generated in a natural way by $\cal K({\cal H})$ [$\cal K({\cal H})$ is the base of the base-norm space $\cal T ({\cal H})$], plays a role similar to that of ${\cal H}$ in the previous formalism. Correspondingly unitary operators on ${\cal H}$ in the first approach are replaced by affine maps of $\cal K({\cal H})$ in $\cal K({\cal H})$, i.e. by positive, trace-preserving maps on $\cal T ({\cal H})$. 
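A standard example, recalled here only for illustration, is the non-selective measurement map \[ {\hat \varrho} \;\longmapsto\; \sum_k {\hat P}_k {\hat \varrho} {\hat P}_k , \qquad {\hat P}_k={\hat P}_k^{\scriptscriptstyle\dagger}={\hat P}_k^2 , \quad \sum_k {\hat P}_k = \hat 1 : \] it is an affine map of $\cal K({\cal H})$ into $\cal K({\cal H})$, positive and trace-preserving, but it generally maps pure states into mixtures and therefore cannot be written in the form ${\hat U} \cdot {\hat U}^{\scriptscriptstyle\dagger}$ with ${\hat U}$ unitary.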
If the system is isolated in the time interval $[t_0,t_1]$ the spontaneous repreparations ${\hat \varrho}_t$, $t \in [t_0,t_1]$, are related together by $\hat \varrho_t = {\cal M}_{t t'} \hat \varrho_{t'}$ ($t\geq t'$), where the evolution system $ \left \{ {{\cal M}}_{t'' t'} \, t''\geq t' \right \} $ satisfies the composition rule ${\cal M}_{t''' t'}={\cal M}_{t''' t''}{\cal M}_{t'' t'}$ ($t'\leq t''\leq t'''$). We stress the fact that there is no reason to assume that ${\cal M}_{t'' t'}$ has an inverse. If ${\cal M}^{-1}_{t'' t'}$ exists then ${\cal M}_{t'' t'}= {\hat U}_{t''t'} \cdot {\hat U}^{\scriptscriptstyle\dagger}_{t''t'} $ (see for example~\cite{Davies}), with ${\hat U}_{t''t'}$ unitary or antiunitary operator and one is brought back to the Hilbert space formalism: $ \psi_{t''} = {\hat U}_{t''t'}\psi_{t'}$. The dynamics in the present framework is indeed more general and has irreversibility as typical phenomenon. \par To determine the maps ${\cal M}_{t'' t'}$ a choice of relevant observables is necessary: when a time scale is introduced, only those observables should be considered, whose expectation values do not appreciably vary in a time interval of the order of the time scale. It is thus necessary to work in the Heisenberg picture, i.e. with the adjoint map ${\cal M}^{'}_{t t_{0}}$, and consider expressions of the form ${\cal M}^{'}_{t t_{0}}{\hat A}$, ${\hat A}$ being a relevant observable. For the same system different descriptions can be given by different choices of relevant observables and corresponding time scales: e.g., hydrodynamic or kinetic description of a continuum. Skipping questions of mathematical rigour we can assume the differential equation \begin{equation} \label{0} { d \over dt } {\cal M}^{'}_{t t_0} = {\cal L}^{'}_t {\cal M}^{'}_{t t_0} , \end{equation} and represent ${\cal M}^{'}_{t t_0}$ in the form ${\cal M}^{'}_{t t_0} = T \left( e^{\int_{t_0}^t dt' \, {\cal L}^{'}_{t'}} \right)$ in terms of the generator ${\cal L}_t^{'}$. It is well known (rigorously for bounded ${\cal L}^{'}_t$) that if ${\cal M}^{'}_{t'' t'}$ has the additional property of complete positivity (CP), ${\cal L}^{'}_t$ has the Lindblad structure~\cite{Lind}: \begin{equation} {\cal L}^{'}_t{\hat B}= +{i \over \hbar} \left( {{{\hat H}}_t {\hat B}-{\hat B} {{\hat H}}_t} \right) - { {1\over\hbar}} \left( {{\hat A}_t {\hat B} + {\hat B} {\hat A}_t}\right) +{1 \over \hbar} \sum_j {{\hat L}}_{tj}^{{\scriptscriptstyle \dagger}} {\hat B} {{\hat L}}_{tj} \label{1} \end{equation} \[ {{\hat H}}_t ={{\hat H}}_t^{\scriptscriptstyle \dagger} \qquad {\hat A}_t = {1\over2} \sum_j {{\hat L}}_{tj} {{\hat L}}_{tj}^{{\scriptscriptstyle \dagger}}. \] In our framework the assumption of CP can appear too restrictive since only a suitable subset of observables is relevant and one expects that a modified concept of CP of ${\cal M}^{'}_{t'' t'}$ relatively to these observables should be given, leading to a more general structure of ${\cal L}^{'}_t$; we shall return to this point in the sequel. \par The more general description of the dynamics that we are considering allows the introduction of the concept of trajectory in quantum theory. In fact in the general formalism of continuous measurement approach~\cite{misure cont} one has that an evolution system $ \{ {\cal M}_{t'' t'} \quad t''\geq t' \} $ with ${\cal L}_t$ having the Lindblad structure can be decomposed on a space $Y^{t_1}_{t_0}$ of trajectories for stochastic variables $\xi(t)$. 
The concept of trajectory is particularly useful in the field of quantum optics, as is shown by the many interesting applications to be found in the literature. More precisely one can consider Wiener and jump processes related to the operators ${{\hat L}}_{tj}$ in the sense that the expectations of the increments for example in the case of jump processes are given by~\cite{Lanz6}: \[ \langle d \xi_j(t) \rangle = dt \, {\mbox{\rm Tr}} \left( ( {{\hat L}}^{\scriptscriptstyle\dagger}_{tj}{{\hat L}}_{tj}) \hat \varrho (t) \right) . \] One can define $\sigma$-algebras ${\cal B}(Y^{t''}_{t'})$ of subsets $ \Omega^{t''}_{t'} $ of $Y^{t''}_{t'}$ and construct operation valued measures $ {\cal F}^{t''}_{t'}( \Omega^{t''}_{t'} )$ on ${\cal B}(Y^{t''}_{t'})$ in such a way that $ {\cal M}_{t'' t'} = {\cal F}(Y^{t''}_{t'}). $ Then for any decomposition $Y^{t''}_{t'} = \cup_{\alpha}({\Omega^{t''}_{t'}}_{\alpha}) $ with disjoint subsets ${\Omega^{t''}_{t'}}_{\alpha}$ one has \begin{equation} \label{4} {\cal M}_{t'' t'}= \sum_\alpha {\cal F}({\Omega^{t''}_{t'}}_{\alpha}) . \end{equation} One can therefore claim that the quantum dynamics of the system is compatible with the evolution of classical stochastic variables; typically the probability that the trajectory of these variables for $t'\leq t \leq t''$ belongs to a subset ${\Omega^{t''}_{t'}}_{\alpha}$ is given by $ p({\Omega^{t''}_{t'}}_{\alpha}) = {\mbox{\rm Tr}} \left( {\cal F}^{t''}_{t'}( {\Omega^{t''}_{t'}}_{\alpha}) \hat \varrho (t') \right) $. The decomposition of ${\cal M}_{t'' t'}$ by operation valued stochastic processes ${\cal F}^{t''}_{t'}$ on a suitable trajectory space $Y^{t''}_{t'}$ is not unique, i.e. there are many compatible objective ``classical'' pictures which are consistent with the quantum evolution, a feature that can be linked with a ``generalised concept of complementarity''. The very possibility of recovering some kind of classical insight into QM is due to the non-Hamiltonian evolution; obviously (\ref{4}) would be inconsistent with $ {\cal M}_{t'' t'}= {\hat U}_{t''t'} \cdot {\hat U}^{\scriptscriptstyle\dagger}_{t''t'} $, since for $\hat \varrho_{t'}= | \psi_{t'} \rangle \langle \psi_{t'} | $ the l.h.s. of (\ref{4}) is a pure state and the r.h.s. is a mixture. \par In the framework we have now presented dynamics is given by ${\cal L}_t^{'}$~\cite{Lanz1}; while one expects that the Hamiltonian part is fixed by the local interactions, the remaining part is connected to the preparation procedure of the isolated system: it cannot be strictly derived from a Hamiltonian theory of a larger system surrounding it (in fact also for this larger system an ${\cal L}_t^{'}$ should be determined). The problem arises to give reasonable criteria for the construction of ${\cal L}_t^{'}$. One can expect that near to equilibrium only the Hamiltonian part $ {i\over\hbar} [\hat H, \cdot] $ is important, as it is clearly indicated by the great success of equilibrium statistical mechanics; however the non-Hamiltonian part of ${\cal L}_t^{'}$ is relevant for irreversible behaviour and for a full explanation of approach to equilibrium: in fact energy conservation ${\cal L}_t^{'}{\hat H}=0$ (at least on the relevant time scale) grants for the existence of an ``eigenstate'' of ${\cal L}_t^{'}$ with practically zero eigenvalue, while one expects that the other eigenvalues of ${\cal L}_t^{'}$ have a negative real part. 
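An elementary illustration of this spectral structure (the example is ours) is obtained from (\ref{1}) for a single two-level system with ${\hat H}_t={\hbar\omega \over 2}\,{\hat \sigma}_z$ and a single operator ${\hat L}_{t}=\sqrt{\hbar\gamma}\,{\hat \sigma}_z$: then ${\cal L}^{'}{\hat H}=0$ and the eigenvalues of ${\cal L}^{'}$ are \[ 0 , \quad 0 , \quad -2\gamma+i\omega , \quad -2\gamma-i\omega , \] the vanishing eigenvalues corresponding to $\hat 1$ and to ${\hat H}$ itself, while the coherences ${\hat \sigma}_{\pm}$ decay at the rate $2\gamma$.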
\par \section{A microsystem interacting with matter} \par We shall take on later the problem of an explicit construction of $\hat \varrho (t)$ for a macrosystem ${\cal S}_{\rm M}$. We assume now that $\hat \varrho (t)$ is given (e.g., the system is at equilibrium) and consider the problem of describing the new system ${\cal S} = {\cal S}_{\rm M} +{}$ a microsystem, ${\cal S}$ still being confined inside a region $\omega$. The Hamiltonian of ${\cal S}$ is: \[ {\hat H}={\hat H}_0 + {\hat H}_{\rm M} + {{\hat V}} \qquad {\hat H}_0 = \sum_f {E_f} {{\hat a}^{\scriptscriptstyle \dagger}_{f}} {{\hat a}_{{f}}} \qquad \left[{{{\hat a}_{{f}}},{{\hat a}^{\scriptscriptstyle \dagger}_{g}}}\right]_{\mp}=\delta_{fg} , \] where $E_f$ are the eigenvalues of the operator $ -{ \hbar^2 \over 2m } \Delta_2 $ : \[ -{ \hbar^2 \over 2m } \Delta_2 u_f({\mbox{\bf x}})= E_f u_f({\mbox{\bf x}}) \qquad u_f({\mbox{\bf x}})=0 \quad {\mbox{\bf x}}\in \partial\omega . \] Let us assume for the statistical operator of ${\cal S}$ the following structure: \begin{equation} {\hat \varrho}(t)= \sum_{{g} {f}}{} {{\hat a}^{\scriptscriptstyle \dagger}_{g}} {{\hat \varrho}_{\rm M}}(t) {{\hat a}_{{f}}} {{ \varrho}}_{gf}(t) , \label{5} \end{equation} \begin{equation} \label{6} {{\hat a}_{{f}}}{{\hat \varrho}_{\rm M}}=0 . \end{equation} The coefficients ${{\varrho}}_{gf}$ build a positive, trace one matrix, which can be considered as the representative of a statistical operator ${{\hat \varrho}}^{(1)}(t)$ in the Hilbert space ${{\cal H}^{(1)}}$ spanned by the states $u_f$ ($ {{\varrho}}_{gf}= \langle u_g | {{\hat \varrho}}^{(1)} | u_f \rangle $). Equation (\ref{6}) indicates that the system ${\cal S}_{\rm M}$ has charge $ {\hat Q}=\sum_f {{\hat a}^{\scriptscriptstyle \dagger}_{f}} {{\hat a}_{{f}}} $ with value zero, i.e. it does not contain the microsystem. Equation (\ref{5}) represents the fact that ${\cal S}_{\rm M}$ has been perturbed by the additional particle and therefore presents a new dynamical behaviour contained in the coefficients ${{ \varrho}}_{gf}(t)$ that can be picked out studying the time evolution of the observables $ {\hat A} = \sum_{h,k} {\hat a^{\scriptscriptstyle \dagger}_{h}} {A}_{hk} {\hat a_{k}} $; in fact the subdynamics of these observables provides the QM of a one-particle system with Hilbert space ${\cal H}^{(1)}$, statistical operator ${\hat \varrho}^{(1)}$ and observables ${\hat{\mbox{\sf A}}}^{(1)}$, with matrix elements $ A_{hk}= \langle u_h | {\hat{\mbox{\sf A}}}^{(1)} u_k \rangle_{{\cal H}^{(1)}} $, through the formula \begin{equation} \label{alfa} {\hbox{\rm Tr}}_{{\cal H}} \left( {{\hat A}{\hat \varrho}}(t) \right) = \sum_{hk} {\hbox{\rm Tr}}_{{\cal H}} \left( e^{{i\over \hbar}{\hat H}t} {\hat a}_h^{\scriptscriptstyle\dagger} {\hat a}_k e^{-{i\over \hbar}{\hat H}t} {\hat \varrho} \right) A_{hk} = {\hbox{\rm Tr}}_{{{\cal H}^{(1)}}} \left( {\hat {{\mbox{\sf A}}}^{(1)} {{\hat \varrho}}^{(1)}}(t) \right) . \end{equation} One has to study the expression $e^{{i\over \hbar}{\hat H}t} {\hat a}_h^{\scriptscriptstyle\dagger} {\hat a}_k e^{-{i\over \hbar}{\hat H}t}$, exploiting the fact that the expectation values $ \langle {\hat A} \rangle_t $ are ``slowly varying'' if the matrix $A_{hk}$ is ``quasi-diagonal'' (if $A_{hk}=\delta_{hk}$, ${\hat A}$ is a conserved charge). We give here only a sketchy account of the main points (for details see~\cite{art1}). 
It proves useful to use in the Heisenberg picture a formalism in ${\cal B}({\cal H})$ reminiscent of usual scattering theory in ${\cal H}$, by means of superoperators, typically: \[ {\cal H}^{'}={i \over \hbar} [{{\hat H}},\cdot], \quad {\cal H}^{'}_0={i \over \hbar} [{{\hat H}}_0 + {\hat H}_{\rm M},\cdot], \quad {\cal V}^{'}={i \over \hbar} [{\hat V},\cdot]. \] In such a context operators of the form $ {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} $ are ``eigenstates'' of ${\cal H}^{'}_0$ with eigenvalues $ {i \over \hbar} \left( E_h - E_k \right) $. Setting ${\cal U}^{'}(t)=e^{{\cal H}^{'}t}$ one has: \begin{eqnarray} \label{7} {{{\cal U}^{'}(t)}} \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) &=& \left( {{{\cal U}^{'}(t)}}{{\hat a}^{\scriptscriptstyle \dagger}_{h}} \right) \left( {{{{\cal U}^{'}(t)}}{{\hat a}_{k}}} \right) \\ &=& {\int\limits_{-i\infty+\eta}^{+i\infty + \eta}}{ dz_1 \over 2\pi i } \, e^{z_1 t} \left( { {1\over{ z_1 - {\cal H}^{'}}} {{\hat a}^{\scriptscriptstyle \dagger}_{h}}} \right) {\int\limits_{-i\infty+\eta}^{+i\infty + \eta}}{ dz_2 \over 2\pi i } \, e^{z_2 t} \left( { {1\over{ z_2 - {\cal H}^{'}}} {{\hat a}_{k}} } \right) \nonumber \end{eqnarray} \begin{equation} \label{8} {{ {1\over{ z - {\cal H}^{'}}} }} = {{ {1\over{ z - {\cal H}^{'}_0}} }} +{{ {1\over{ z - {\cal H}^{'}_0}} }} {\cal T}(z){{ {1\over{ z - {\cal H}^{'}_0}} }} \quad \mbox{\rm where} \quad {\cal T}(z) \equiv {\cal V}^{'} + {\cal V}^{'}{{ {1\over{ z - {\cal H}^{'}}} }}{\cal V}^{'} . \end{equation} ${\cal T}(z)$ is reminiscent of the usual T-matrix of scattering theory and plays a central role in this treatment: it will be called ``scattering map''. The operator ${\cal T}(z)$ has poles on the imaginary axis for $z={i\over\hbar}(e_\alpha-e_\beta)$, $e_\alpha$ being the eigenvalues of ${\hat H}$. In the calculation of expression (\ref{7}) we shall assume that the function ${\cal T}(z)$ for $Re z \approx \varepsilon$ with $ \varepsilon \gg \delta$ ($\delta$ typical spacing between the poles) is smooth enough, so that the only relevant contribution from the singularities of $({ z - {\cal H}^{'}})^{-1}$ stems from the singularities of $({ z - {\cal H}^{'}_0})^{-1}$; this smoothness property is linked to the fact that the set of poles of $({ z - {\cal H}^{'}})^{-1}$ goes over to a continuum if the confinement is removed yielding an analytic function with a cut along the imaginary axis, that can be continued across the cut without singularities if no absorption of the microsystem occurs. More precisely ${\cal T}(iy+\varepsilon)$ is considered as practically constant for variations $\Delta y \approx {\hbar \over \tau_0}$, where $\tau_0$ has to be interpreted as a collision time. Treating expression (\ref{7}) we make use of the inequality \begin{equation} \label{9} \left | E_h - E_k \right | \ll { \hbar \over \tau_0 } , \end{equation} whose physical meaning is that the typical variation time $\tau_1$ of the quantities $ \langle {\hat A} \rangle_t $ is much larger than $\tau_0$. 
One then arrives at the following very perspicuous structure for $ {{{\cal U}^{'}(t)}} \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) $: \[ {{{\cal U}^{'}(t)}} \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) = {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} + t {\cal L}' \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) \] \begin{equation} \label{tipstr} {\cal L}' \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) = {i\over\hbar} \left[ {\hat H}_{\rm \scriptscriptstyle eff}, {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} \right] - {1\over \hbar} \left( \left[ {\hat \Gamma}^{(1)} , {\hat a}^{\scriptscriptstyle\dagger}_h \right] {\hat a}_k - {\hat a}^{\scriptscriptstyle\dagger}_h \left[ {\hat \Gamma}^{(1)}, {\hat a}_k \right] \right) + {1\over\hbar} \sum_\lambda {\hat R}^{(1)}_{h \lambda}{}^{\dagger} {\hat R}^{(1)}_{k \lambda}, \end{equation} where ${\hat H}_{\rm \scriptscriptstyle eff} = {\hat H}_0 + {\hat V}^{\rm \scriptscriptstyle eff}$ and \begin{eqnarray*} {\hat V}^{\rm \scriptscriptstyle eff} \!\!&=&\!\! \sum_{gr} {\hat a}^{\scriptscriptstyle \dagger}_{r} {\hat V}{}^{\rm \scriptscriptstyle eff}_{rg} {\hat a}_{g} \\ \!\!&=&\!\! i \hbar \sum_{\lambda \lambda' \atop gr} {\hat a}^{\scriptscriptstyle \dagger}_{r} | \lambda \rangle \langle \lambda | \frac 12 \left[ \left( {\cal T} \left({-{i \over \hbar} {E_r} +\varepsilon}\right) {\hat a}_r \right) {{\hat a}^{\scriptscriptstyle \dagger}_{g}} + {\hat a}_r \left( {\cal T} \left({{i \over \hbar} {E_g} +\varepsilon}\right) {\hat a}_g^{\scriptscriptstyle\dagger} \right) \right] | \lambda' \rangle \langle \lambda' | {\hat a}_{g} \\ {\hat \Gamma}^{(1)} \!\!&=&\!\! \sum_{gr} {\hat a}^{\scriptscriptstyle \dagger}_{r} {\hat \Gamma}_{rg} {\hat a}_{g} \\ \!\!&=&\!\! i \hbar \sum_{\lambda \lambda' \atop gr} {\hat a}^{\scriptscriptstyle \dagger}_{r} | \lambda \rangle \langle \lambda | \frac i2 \left[ \left( {\cal T} \left({-{i \over \hbar} {E_r} +\varepsilon}\right) {\hat a}_r \right) {{\hat a}^{\scriptscriptstyle \dagger}_{g}} - {\hat a}_r \left( {\cal T} \left({{i \over \hbar} {E_g} +\varepsilon}\right) {\hat a}_g^{\scriptscriptstyle\dagger} \right) \right] | \lambda' \rangle \langle \lambda' | {\hat a}_{g} \\ {\hat R}^{(1)}_{k\lambda} \!\!&=&\!\! \sqrt{2\varepsilon \hbar^3} \sum_{g\lambda'} { \langle \lambda | \left( {\cal T} \left({-{i \over \hbar} {E_g} +\varepsilon}\right) {\hat a}_g^{\hphantom{\scriptscriptstyle \dagger}} \right) {{\hat a}^{\scriptscriptstyle \dagger}_{k}} | \lambda' \rangle \over E_g + E_{\lambda} - E_k - E_{\lambda' } - i\hbar\varepsilon } \langle \lambda' | {\hat a}_g \end{eqnarray*} and $| \lambda \rangle $ denotes an eigenvector of ${\hat H}_{\rm M}$ with eigenvalue $E_\lambda$ and of ${\hat H}_0$ with eigenvalue zero. Eq.(\ref{tipstr}) shows a typical structure arising in the calculation, which we will also find in the more complex situation examined in $\S\,3$, where the form of the different operators is further commented on. Let us observe that ${\hat V}{}^{\rm \scriptscriptstyle eff}_{rg}$ and ${\hat \Gamma}_{rg}$ are not c-number coefficients, but operators acting in the Fock-space for the macrosystem, as stressed by the hats; they are connected respectively to the self-adjoint and anti-self-adjoint part of what can be considered as an operator valued T-matrix. The last contribution displays the ``bilinear structure'' of the third term in the r.h.s. 
of (\ref{1}), connected to irreversibility and CP and not reproducible in the Hilbert space formalism, even resorting to an interaction potential which is not self-adjoint. Within the approximation leading to (\ref{tipstr}) one has ${\hat \Gamma}^{(1)} \approx {1\over2} \sum_{h \lambda} {\hat R}_{h \lambda}^{(1)}{}^{\dagger} {\hat R}_{h \lambda}^{(1)}$ and therefore ${\cal L}^{'}{\hat N}=0$. Appealing to (\ref{alfa}) we may obtain an evolution equation for the matrix elements ${\varrho}_{fg}$ which is meaningful on a time scale much longer than the correlation time for ${\cal S}_{\rm M}$: \[ { d {\varrho}_{gf} \over dt } = -{i \over \hbar} \left( {{E_g}-{E_f}} \right) {\varrho}_{gf} + {1 \over \hbar} \sum_h {\varrho}_{gh} {\mbox{\sf Q}}^{\scriptscriptstyle \dagger}_{hf} + {1 \over \hbar} \sum_k {\mbox{\sf Q}}_{gk} {\varrho}_{kf} + {1 \over \hbar} \sum_{hk \atop \lambda\xi} \left( {{\mbox{\sf L}}_{\lambda\xi}} \right)_{gk} {\varrho}_{kh} \left( {{{\mbox{\sf L}}_{\lambda\xi}}} \right)^*_{fh} \] with \begin{eqnarray*} {\mbox{\sf Q}}_{kf} &=& \hbar{\hbox{\rm Tr}}_{{{{\cal H}}}} \left[{ \left( {\cal T} \left({-{i \over \hbar} {E_k} +\varepsilon}\right) {{\hat a}_{k}} \right) {{\hat a}^{\scriptscriptstyle \dagger}_{f}} {{\hat \varrho}_{\rm M}(t)} }\right] \\ \left( {{\mbox{\sf L}}_{\lambda\xi}} \right) _{kf} &=& \sqrt{2\varepsilon\hbar^3 \pi_\xi} {\langle \lambda \vert \left[{ \left( {{\cal T} \left({-{i \over \hbar} {E_k} +\varepsilon}\right) {{\hat a}_{k}}} \right) {{\hat a}^{\scriptscriptstyle \dagger}_{f}} }\right] { 1 \over {{E_k}+{E_{{\lambda}}}-{E_f}-{H}_{\rm M} -i\hbar\varepsilon} } \vert {\xi(t)} \rangle} ; \end{eqnarray*} $\xi(t)$ is a complete system of eigenvectors of ${{\hat \varrho}_{\rm M}(t)}$, $({{\hat \varrho}_{\rm M}(t)}=\sum_{\xi(t)} \pi_{\xi(t)} \vert {{\xi(t)}} \rangle {\langle \xi(t) \vert})$. To show the connection with (\ref{1}) we introduce in ${{\cal H}^{(1)}}$ the operators ${\hat {\mbox{\sf Q}}}^{(1)}, {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}$: \[ \langle {k} \vert {\hat {\mbox{\sf Q}}}^{(1)} \vert {f} \rangle =\mbox{\sf Q}_{kf} \quad , \quad \langle {k} \vert {{\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}} \vert {f} \rangle = {\bigl( {{\mbox{\sf L}}_{\lambda\xi}} \bigr)}_{kf} , \] thus attaining in the Schr\"odinger picture the full evolution of ${\varrho}^{(1)}$, given by the typical Lindblad generator: \begin{equation} { d {\hat {\varrho}}^{(1)} \over dt } = -{i \over \hbar} \left[ { {\hat {\mbox{\sf H}}}_{\rm eff},{\hat {\varrho}}^{(1)}} \right] + {1\over 2\hbar} \left \{ { \left( {\hat {\mbox{\sf Q}}}^{(1)}+ {{\hat {\mbox{\sf Q}}}^{(1)}}{}^{ \dagger} \right) ,{\hat {\varrho}}^{(1)}} \right \} + {1 \over \hbar} \sum^{}_{{\xi,\lambda }} {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi} {\hat {\varrho}}^{(1)} {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}{}^{ \dagger}\ , \label{Meq} \end{equation} where ${{\hat {\mbox{\sf H}}}}^{(1)}_{\rm \scriptscriptstyle eff} ={\hat {\mbox{\sf H}}}^{(1)}_0 + {i\over 2} \left( {{\hat {\mbox{\sf Q}}}^{(1)}- {{\hat {\mbox{\sf Q}}}^{(1)}}{}^{ \dagger}} \right)$. Furthermore according to preservation of trace we have ${{\hat {\mbox{\sf Q}}}^{(1)}+ {{\hat {\mbox{\sf Q}}}^{(1)}}{}^{ \dagger}} = - \sum_{\lambda\xi} {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}{}^{ \dagger} {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}$. \par As it is well-known (\ref{Meq}) is apt to describe very different physical situations. 
If the last contribution, which we will call ``incoherent'', may be neglected, at least as a first approximation, eq.(\ref{Meq}) is equivalent to a Schr\"odinger equation with a possibly complex potential. In the case of a particle interacting with matter this equation is well-suited to describe a coherent optical behaviour, for example in terms of a refractive index, as it is usually done in neutron optics~\cite{Erice,Sears} and recently also in atom optics~\cite{Mlynek,Vigue}. In this framework the operator ${\hat {\mbox{\sf Q}}}^{(1)}$ is to be interpreted as an optical potential, which in our formalism is naturally linked to matrix elements of the T-operator, thus showing the connection between the effective, macroscopic description through an index of refraction and quantities characterising the local interactions. The T-operator may be replaced by phenomenological expressions (for example the Fermi pseudo-potential in the case of neutron optics). This picture is particularly useful to deal with particle interferometry. To see how the last contribution may be linked to an interaction having a measuring character let us introduce the reversible mappings $ {\cal A}_{t'' t'}={\hat U}^{(1)}_{t'' t'} \cdot {\hat U}^{(1)}_{t'' t'}{}^{\dagger} $, where ${\hat U}_{t'' t'}^{(1)} = T \exp ( {-{i \over \hbar} \int_{t'}^{t''} dt \, ({{\hat {\mbox{\sf H}}}^{(1)}_0(t)+i{\hat {\mbox{\sf Q}}}^{(1)}(t)} } ) ) $, corresponding to a coherent contractive evolution of the microsystem during the time interval $[t',t'']$, and the CP mappings ${\cal L}_{\lambda \xi}= {\hat {\mbox{\sf L}}}_{\lambda\xi}^{(1)}(t) \cdot {\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}{}^{ \dagger}(t)$, having a measuring character, as it is clear from their very structure, reminiscent of the reduction postulate. The expression of the operators ${\hat {\mbox{\sf L}}}^{(1)}_{\lambda\xi}$ shows how these mappings may be linked with a transition inside the macrosystem specified by the pair of indexes $\xi,\lambda$, as a result of scattering with the microsystem. These transitions are in general not detectable, but under suitable conditions they could prime real events. The solution of (\ref{Meq}) may be written in the form: \begin{equation} \label{Subcoll} {\hat {\varrho}}_t = {\cal A}_{t t_0}{\hat {\varrho}}_{t_0} + \sum_{\lambda_1 \xi_1} \int_{t_0}^{t} dt_1 \, {\cal A}_{t t_1} {\cal L}_{\lambda_1 \xi_1}(t_1) {\cal A}_{t_1 t_0}{\hat {\varrho}}_{t_0} +{} \ldots \, , \end{equation} that is a sum over subcollections corresponding to the realization of no event, one event and so on. The set of variables $N_{\lambda\xi}(\tau)$, $\tau\geq t_0$, (number of transitions up to time $\tau$), define a multicomponent classical stochastic process, and (\ref{Subcoll}) corresponds to the decomposition of the evolution map on the space of trajectories for $N_{\lambda\xi}(\tau)$. This is a straightforward generalization of the typical ``counting process'' considered by Srinivas and Davies~\cite{Srinivas}. Of course other decompositions in terms of operation valued maps are possible on trajectory spaces related to different observables, as indicated in $\S\,1$. When the first term in (\ref{Subcoll}) is largely predominant a wavelike description as given by the Schr\"odinger equation is sufficiently accurate and small disturbances, conveyed by the other terms, play no significant role. 
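Reading (\ref{Subcoll}) term by term in the light of the continuous measurement formalism recalled in $\S\,1$, the probability that no transition occurs in the interval $[t_0,t]$ may be identified with \[ P_0(t)= {\mbox{\rm Tr}}_{{\cal H}^{(1)}} \left( {\cal A}_{t t_0}{\hat {\varrho}}_{t_0} \right) , \] while the trace of the second term, integrated over $t_1$ and summed over $\lambda_1,\xi_1$, gives the probability of exactly one transition, and so on; the statement that the first term is largely predominant thus amounts to $P_0(t)\approx 1$ on the time scale of interest.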
In a different physical context, however, as would be the case for the Brownian motion of a particle interacting with an ideal gas, the interplay between the contractive and the incoherent part plays a major role. Being interested in the dynamics far away from the walls, the quantum number $h$ corresponds to the momentum variable ${\mbox{\bf p}_h}$, and supposing that momentum transfers are small (Fokker-Planck approximation) one arrives at an equation describing diffusion in phase-space which has the following form (see~\cite{Diosi}): \[ { d {\hat \varrho} \over dt } = -{i\over\hbar} \left[ {{\hat {\mbox{\sf H}}}_{\rm \scriptscriptstyle eff}} ,{\hat \varrho} \right] - D_{pp} \left[ {\hat {\mbox{\sf x}}}, \left[ {\hat {\mbox{\sf x}}},{\hat \varrho} \right] \right] - D_{qq} \left[ {\hat {\mbox{\sf p}}}, \left[ {\hat {\mbox{\sf p}}},{\hat \varrho} \right] \right] -{ i\eta \over 2 M } \left[ {\hat {\mbox{\sf x}}} , \left \{ {\hat {\mbox{\sf p}}},{\hat \varrho} \right \} \right] , \] $D_{pp}$, $D_{qq}$ and $\eta$ being diffusion coefficients linked in different ways to the operators ${\hat {\mbox{\sf L}}}_{\lambda\xi}^{(1)}$, and $M$ the mass of the particle. \par \section{Theory of a macrosystem: thermodynamic evolution by a scattering map} \par \setcounter{equation}{0} We consider a very schematic model of a macrosystem in the non-relativistic case, built from one type of molecule with mass $m$ confined inside a region $\omega$, interacting by a two-body potential $V( \left | {\mbox{\bf x}} -{\mbox{\bf y}} \right | )$; for the sake of simplicity no internal structure of the molecules is taken into account. In the field theoretical language the system is described by a quantum Schr\"odinger field (QSF): \begin{equation} \label{campo} {\hat \psi}({\mbox{\bf x}}) = \sum_f u_f ({\mbox{\bf x}}) {\hat a}_f \qquad { \left[ {\hat a}_f , {\hat a}^{\scriptscriptstyle\dagger}_g \right] }_\pm = \delta_{fg} \end{equation} \[ -{ \hbar^2 \over 2m } \Delta_2 u_f({\mbox{\bf x}})= E_f u_f({\mbox{\bf x}}), \qquad u_f({\mbox{\bf x}})=0 \quad {\mbox{\bf x}}\in \partial\omega . \] We shall assume the following Hamiltonian to take local interactions and confinement into account: \begin{equation} \label{10} {\hat H}= \sum_f E_f {\hat a}_f^{\scriptscriptstyle\dagger} {\hat a}_f + {1\over 2} \sum_{l_1 l_2 \atop f_1 f_2} {\hat a}_{l_1}^{\scriptscriptstyle\dagger} {\hat a}_{l_2}^{\scriptscriptstyle\dagger} V_{l_1 l_2 f_2 f_1} {\hat a}_{f_2} {\hat a}_{f_1} \end{equation} \begin{equation} \label{11} V_{l_1 l_2 f_2 f_1} = \int_{\omega} d^3 \! {\mbox{\bf x}} \int_{\omega} d^3 \! {\mbox{\bf y}} \, u_{l_1}^{*}({\mbox{\bf x}}) u_{l_2}^{*}({\mbox{\bf y}}) V( \left | {\mbox{\bf x}} -{\mbox{\bf y}} \right | ) u_{f_2}({\mbox{\bf y}}) u_{f_1}({\mbox{\bf x}}) . \end{equation} Eq.(\ref{campo}) is linked to the basic ``local'' Hamiltonian for the non-confined field (NC) \begin{eqnarray*} {\hat H}_{\rm \scriptscriptstyle NC} &=& \int d^3\! {\mbox{\bf x}} \, {{ \hbar^2 \over 2m }} {\mbox{\rm grad}} {\hat \psi}_{\rm \scriptscriptstyle NC}^{\scriptscriptstyle\dagger} ({\mbox{\bf x}}) \cdot {\mbox{\rm grad}} {\hat \psi}_{\rm \scriptscriptstyle NC}({\mbox{\bf x}}) \\ &\hphantom{=}& + {1\over 2} \int d^3\! {\mbox{\bf x}} d^3\!
{\mbox{\bf r}} \, {\hat \psi}_{\rm \scriptscriptstyle NC}^{\scriptscriptstyle\dagger} \left( {\mbox{\bf x}}- {{\mbox{\bf r}}\over 2} \right) {\hat \psi}_{\rm \scriptscriptstyle NC}^{\scriptscriptstyle \dagger} \left( {\mbox{\bf x}}+ {{\mbox{\bf r}}\over 2} \right) V(r) {\hat \psi}_{\rm \scriptscriptstyle NC} \left( {\mbox{\bf x}}+{{\mbox{\bf r}}\over 2} \right) {\hat \psi}_{\rm \scriptscriptstyle NC} \left( {\mbox{\bf x}}-{{\mbox{\bf r}}\over 2} \right) \end{eqnarray*} \[ { \left[ {\hat \psi}_{\rm \scriptscriptstyle NC}({\mbox{\bf x}}), {\hat \psi}_{\rm \scriptscriptstyle NC}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}') \right] }_\pm = \delta ({\mbox{\bf x}}-{\mbox{\bf x}}'), \] simply selecting the part of ${\hat H}_{\rm \scriptscriptstyle NC}$ related to the ``normal modes'' $u_f$ typical of the confinement: in fact the preparation procedure should imply a kind of relaxation of ${\hat \psi}_{\rm \scriptscriptstyle NC}({\mbox{\bf x}})$ to ${\hat {\psi}}({\mbox{\bf x}})$. Skipping this problem and also the related question of the full explicit structure of ${\cal L}^{'}$ for a realistic system, we shall simply take the Hamiltonian (\ref{10}) containing only the normal modes of the field inside $\omega$. If we are interested in a hydrodynamic description relevant observables are constructed starting with the densities of the typical constants of motion, mass and energy: \begin{eqnarray} \label{12} {\hat \rho}_{m}({\mbox{\bf x}}) &=& m {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) {\hat \psi}({\mbox{\bf x}}) \\ {\hat e}({\mbox{\bf x}}) &=& {{ \hbar^2 \over 2m }} {\mbox{\rm grad}} {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) \cdot {\mbox{\rm grad}} {\hat \psi}({\mbox{\bf x}}) \nonumber \\ &\hphantom{=}& + {1\over 2} \int_{\omega_x} d^3\! {\mbox{\bf r}} \, {\hat \psi}^{\scriptscriptstyle\dagger} \left( {\mbox{\bf x}}- {{\mbox{\bf r}}\over 2} \right) {\hat \psi}^{\scriptscriptstyle\dagger} \left( {\mbox{\bf x}}+ {{\mbox{\bf r}}\over 2} \right) V(r) {\hat \psi} \left( {\mbox{\bf x}}+{{\mbox{\bf r}}\over 2} \right) {\hat \psi} \left( {\mbox{\bf x}}-{{\mbox{\bf r}}\over 2} \right) , \nonumber \end{eqnarray} where the dependence of $\omega_{x}$ on ${\mbox{\bf x}}$ is generally negligible if $V(r)$ is a short range potential. In the case of a kinetic description we replace (\ref{12}) by the ``Boltzmann'' operator density ${\hat f}({\mbox{\bf x}},{\mbox{\bf p}}) = m \sum_{hk} {\hat a}_h^{\scriptscriptstyle\dagger} \langle u_h | {\hat {\mbox{\sf F}}}^{(1)} ({\mbox{\bf x}},{\mbox{\bf p}}) | u_k \rangle {\hat a}_k$, where ${\hat {\mbox{\sf F}}}^{(1)}$ is the density of joint one particle position-momentum observables~\cite{Lanz5,Holevo}. These densities lead to slowly varying quantities if they are integrated over regions large enough in space or phase-space, since one has constants if the integration is extended over the whole space. The constants of motion leading to this subdynamics are linked to very fundamental symmetries: time translation invariance and gauge symmetry. Our relevant observables have the general structure: \begin{equation} \label{relobs} \sum_{hk} {\hat a}_h^{\scriptscriptstyle\dagger} A_{hk}(\xi) {\hat a}_k \quad , \quad \sum_{k_1 k_2 \atop h_1 h_2} {\hat a}_{h_1}^{\scriptscriptstyle\dagger} {\hat a}_{h_2}^{\scriptscriptstyle\dagger} A_{h_1 h_2 k_2 k_1} ({\mbox{\bf x}}) {\hat a}_{k_2} {\hat a}_{k_1} \end{equation} \[ A_{h_1 h_2 k_2 k_1} ({\mbox{\bf x}}) = \frac 12 \int_{\omega_x} d^3\! 
{\mbox{\bf r}} \, u_{h_1}^{*} \left( {\mbox{\bf x}}- {{\mbox{\bf r}}\over 2} \right) u_{h_2}^{*} \left( {\mbox{\bf x}}+ {{\mbox{\bf r}}\over 2} \right) V(r) u_{k_2} \left( {\mbox{\bf x}}+{{\mbox{\bf r}}\over 2} \right) u_{k_1} \left( {\mbox{\bf x}}-{{\mbox{\bf r}}\over 2} \right) . \] We thus have to study in Heisenberg picture the expressions: \begin{equation} \label{37} \sum_{hk} e^{{i\over \hbar}{\hat H}t} {\hat a}_h^{\scriptscriptstyle\dagger} {\hat a}_k e^{-{i\over \hbar}{\hat H}t} A_{hk}(\xi) , \qquad \sum_{h_1 h_2 \atop k_1 k_2} e^{{i\over \hbar}{\hat H}t} {\hat a}_{h_1}^{\scriptscriptstyle\dagger} {\hat a}_{h_2}^{\scriptscriptstyle\dagger} {\hat a}_{k_2} {\hat a}_{k_1} e^{-{i\over \hbar}{\hat H}t} A_{h_1 h_2 k_2 k_1} ({\mbox{\bf x}}) , \end{equation} and shall take into account that by the slow variability only terms that are ``diagonal enough'' are really relevant; the sums should be restricted to indexes such that: \[ \frac 1{\hbar} |E_h - E_k| < {1\over\tau_1} \qquad \frac 1{\hbar} |E_{h_1} +E_{h_2}- E_{k_1}-E_{k_2}| < {1\over\tau_1}, \] where $\tau_1$ is the characteristic variation time of the relevant quantities. We would like to stress the fact that the QSF is the basic tool to describe a massive continuum, just like the quantum electromagnetic field describes a massless continuum. The dynamics of the QSF, $ {\hat \psi}({\mbox{\bf x}},t)= e^{{i\over \hbar}{\hat H}t} {\hat \psi}({\mbox{\bf x}}) e^{-{i\over \hbar}{\hat H}t} $, in terms of which one can rewrite (\ref{37}), is given by the simple field equation \begin{equation} \label{13} i\hbar {\partial\over \partial t} {\hat \psi}({\mbox{\bf x}},t) =-{ \hbar^2 \over 2m } \Delta_2 {\hat \psi}({\mbox{\bf x}},t) + \int d^3\! {\mbox{\bf y}} \, {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf y}},t) V( \left | {\mbox{\bf x}}- {\mbox{\bf y}} \right | ) {\hat \psi}({\mbox{\bf y}},t) {\hat \psi}({\mbox{\bf x}},t) , \end{equation} however no such equation holds for the expectation value of the field $ {\psi}({\mbox{\bf x}},t)= \langle {\hat \psi}({\mbox{\bf x}},t) \rangle $ due to correlations in the non-linear term; ${\psi}({\mbox{\bf x}},t)$ is not useful to calculate the expectations of operators (\ref{37}) and therefore a classical Schr\" odinger field equation for ${\psi}({\mbox{\bf x}},t)$ has no physical meaning in general. In this respect the case of electromagnetism, where no self-interaction of the field occurs, is deeply different and allows classical electrodynamics to play an important role. To the macrosystem one associates typical ``thermodynamic state'' parameters: the velocity field ${{\mbox{\bf v}}}({\mbox{\bf x}},t)$, the temperature field $\beta({\mbox{\bf x}},t)$, the chemical potential field $\mu ({\mbox{\bf x}},t)$ in the case of the hydrodynamic description or more generally a field $\mu({\mbox{\bf x}},{\mbox{\bf p}},t)$ on the one-particle phase-space in the kinetic case~\cite{Roepke}. The parameters $\beta({\mbox{\bf x}},t)$ and $\mu ({\mbox{\bf x}},t)$ ($\mu({\mbox{\bf x}},{\mbox{\bf p}},t)$) determine the expectation values of energy density and mass density (the Boltzmann operator); let us briefly recall how the relation between state variables and expectation values is established~\cite{Robin}. At any time $t$ one considers the whole set of statistical operators $ \left \{ {\hat w} \right \} $ which yield the expectation values assigned at that time: \[ \langle {\hat e}^{(0)}({\mbox{\bf x}}) \rangle_t \! = \! {\mbox{\rm Tr}} \! 
\left( {\hat e}^{(0)}({\mbox{\bf x}}) {\hat w} \right) , \ \langle {\hat \rho}_{m}({\mbox{\bf x}}) \rangle_t \! = \! {\mbox{\rm Tr}} \! \left( {\hat \rho}_{m}({\mbox{\bf x}}) {\hat w} \right) , \ \langle {\hat f}^{(0)}({\mbox{\bf x}},{\mbox{\bf p}}) \rangle_t \! = \! {\mbox{\rm Tr}} \! \left( {\hat f}^{(0)}({\mbox{\bf x}},{\mbox{\bf p}}) {\hat w} \right) \] \begin{eqnarray} \label{densita} {\hat e}^{(0)}({\mbox{\bf x}}) &=& \frac {1}{2m} \left( i\hbar {\partial\over\partial{\mbox{\bf x}}} - m{{\mbox{\bf v}}} ({\mbox{\bf x}},t) \right) {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) \cdot \left( -i\hbar {\partial\over\partial{\mbox{\bf x}}} - m{{\mbox{\bf v}}}({\mbox{\bf x}},t) \right) {\hat \psi}({\mbox{\bf x}}) \\ &\hphantom{=}& + \frac 12 \int_{\omega_x} d^3\! {\mbox{\bf r}} \, {\hat \psi}^{\scriptscriptstyle\dagger} \left( {\mbox{\bf x}}- {{\mbox{\bf r}}\over 2} \right) {\hat \psi}^{\scriptscriptstyle\dagger} \left( {\mbox{\bf x}}+ {{\mbox{\bf r}}\over 2} \right) V(r) {\hat \psi} \left( {\mbox{\bf x}}+{{\mbox{\bf r}}\over 2} \right) {\hat \psi} \left( {\mbox{\bf x}}-{{\mbox{\bf r}}\over 2} \right) \nonumber \\ {\hat \rho}_{m}({\mbox{\bf x}}) &=& {\hat \rho}^{(0)}_{m}({\mbox{\bf x}}) = m {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) {\hat \psi}({\mbox{\bf x}}) , \qquad {\hat f}^{(0)}({\mbox{\bf x}},{\mbox{\bf p}}) = {\hat f}({\mbox{\bf x}},{\mbox{\bf p}}-m{\mbox{\bf v}}({\mbox{\bf x}},t)) \nonumber \end{eqnarray} where the quantities indexed by $(0)$ (depending explicitly on the velocity field ${{\mbox{\bf v}}} ({\mbox{\bf x}},t)$) represent densities in the reference frame in which the continuum is locally at rest. The velocity field is related to the expectation value of the momentum density ${\hat {\mbox{\bf p}}}({\mbox{\bf x}})$ through the relation $ \langle {\hat {\mbox{\bf p}}}^{(0)}({\mbox{\bf x}}) \rangle =0 $, where \[ {\hat {\mbox{\bf p}}}^{(0)}({\mbox{\bf x}}) = \frac {1}{2} \! \left \{ {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) \left( -i\hbar {\partial\over\partial{\mbox{\bf x}}} - m{{\mbox{\bf v}}} ({\mbox{\bf x}},t) \right) {\hat \psi}({\mbox{\bf x}}) + \left[ \left( i\hbar {\partial\over\partial{\mbox{\bf x}}} - m{{\mbox{\bf v}}} ({\mbox{\bf x}},t) \right) {\hat \psi}^{\scriptscriptstyle\dagger}({\mbox{\bf x}}) \right] {\hat \psi}({\mbox{\bf x}}) \right \} \] or equivalently $ \langle {\hat {\mbox{\bf p}}}({\mbox{\bf x}}) \rangle_t = {{\mbox{\bf v}}}({\mbox{\bf x}},t) \langle {\hat \rho}_{m}({\mbox{\bf x}}) \rangle_t $. Then one looks for a statistical operator in the set $ \left \{ {\hat w} \right \} $ such that the Von-Neumann entropy $S=-k{\mbox{\rm Tr} \left( {\hat w} \log {\hat w} \right) }$ is maximal, i.e. the most unbiased choice of a statistical operator leading to the given expectation values. The unique solution of this problem is \begin{equation} \label{14} {\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] = { e^{-{ \int_{\omega} d^3\! {\bf \scriptscriptstyle x} \, \beta({\bf \scriptscriptstyle x},t) \left[ {\hat e}^{(0)}({\bf \scriptscriptstyle x}) - \mu({\bf \scriptscriptstyle x},t){\hat \rho}_{m} ({\bf \scriptscriptstyle x}) \right] }} \over {\mbox{{\rm Tr}}} \, e^{-{ \int_{\omega} d^3\! {\bf \scriptscriptstyle x} \, \beta({\bf \scriptscriptstyle x},t) \left[ {\hat e}^{(0)}({\bf \scriptscriptstyle x}) - \mu({\bf \scriptscriptstyle x},t){\hat \rho}_{m} ({\bf \scriptscriptstyle x}) \right] }} } \end{equation} and analogously in the kinetic case. 
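As a purely illustrative numerical aside (not part of the derivation above), the relation between the state parameters and the assigned expectation values can be mimicked on a toy model: one prescribes target values of the energy and mass (number) observables and solves for the parameters of a state of the form (\ref{14}). In the sketch below the spectra, the target values and the assumption of homogeneous fields (so that $\beta$ and $\mu$ are single numbers rather than fields) are arbitrary choices made only for illustration; since the toy observables commute, the state is diagonal and the computation reduces to classical probabilities.
\begin{verbatim}
# Toy illustration of the maximum-entropy state (14): find beta, mu such
# that w = exp(-beta*(H - mu*N))/Z reproduces prescribed expectations.
# Spectra and target values below are arbitrary assumptions.
import numpy as np
from scipy.optimize import fsolve

e = np.array([0.0, 1.0, 1.5, 2.5])    # toy energy eigenvalues
n = np.array([0.0, 1.0, 1.0, 2.0])    # toy particle-number eigenvalues

def gibbs(beta, mu):
    p = np.exp(-beta * (e - mu * n))
    return p / p.sum()

def moments(params):
    beta, mu = params
    p = gibbs(beta, mu)
    return np.array([p @ e, p @ n])

target = np.array([1.2, 1.0])          # assigned <energy>, <number>
beta, mu = fsolve(lambda x: moments(x) - target, x0=[0.1, 0.0])
p = gibbs(beta, mu)
S = -(p * np.log(p)).sum()             # von Neumann entropy of w (k = 1)
print(beta, mu, moments([beta, mu]), S)
\end{verbatim}
The entropy printed in the last line is the quantity discussed next.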
\par \noindent The corresponding $S=-k{\mbox{\rm Tr} ( {\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] \log {\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] ) }$ is the thermodynamic entropy of the macrosystem. If the time evolution of the expectation values $ \langle {\hat e}^{(0)}({\mbox{\bf x}}) \rangle_t , \langle {\hat \rho}_{m}({\mbox{\bf x}}) \rangle_t , \left( \langle {\hat f}({\mbox{\bf x}},{\mbox{\bf p}}) \rangle_t \right) $ is given by the Hamiltonian evolution (\ref{37}) or more generally by a map ${\cal M}^{'}_{t t_0}$, having a preadjoint ${\cal M}_{t t_0}$ which does not decrease the Von-Neumann entropy, one immediately has that the thermodynamic entropy is non-decreasing. In this way one establishes the second principle of thermodynamics on a very clear dynamical basis. \par In the simplest scheme of macroscopic dynamics the thermodynamic state parameters ${{\mbox{\bf v}}}({\mbox{\bf x}},t)$, $\beta({\mbox{\bf x}},t)$, $\mu ({\mbox{\bf x}},t)$ ($\mu({\mbox{\bf x}},{\mbox{\bf p}},t)$) at time $t_0$ determine its evolution for $t>t_0$, e.g., by differential equations. Phenomenology shows that this is very often the case. Tackling the problem from the theoretical viewpoint one is induced, considering the operators \[ \begin{array}{ccccccc} {\dot {\hat \rho}}_{m}({\mbox{\bf x}}) &=& {i\over\hbar}[{\hat H},{{\hat \rho}}_{m}({\mbox{\bf x}})] &\qquad& {\dot {\hat { \mbox{\bf p}}}}({\mbox{\bf x}}) &=& {i\over\hbar}[{\hat H},{{\hat {\mbox{\bf p}}}}({\mbox{\bf x}})] \\ {\dot {\hat e}}({\mbox{\bf x}}) &=& {i\over\hbar}[{\hat H},{{\hat e}}({\mbox{\bf x}})] &\qquad& ( {\dot {\hat f}}({\mbox{\bf x}},{\mbox{\bf p}}) &=& {i\over\hbar}[{\hat H},{{\hat f}}({\mbox{\bf x}},{\mbox{\bf p}})] ) , \end{array} \] to calculate their expectations with the statistical operator given by (\ref{14}). This leads to wrong results as can be seen from the fact that the expectation values of the currents which can be associated, through a conservation equation, to these operators would vanish~\cite{Zubarev}, due to time reversal invariance of microphysics, thus failing to describe any dissipative flow (e.g., heat conduction, viscosity, etc.). The idea of a time scale for the thermodynamic evolution and of a related subdynamics for the basic densities leads to a refinement of the aforementioned procedure: assume that ${i\over\hbar}[{\hat H},\cdot]$ can be replaced by a mapping ${\cal L}^{'}$, initially defined on the linearly independent elements $ {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} $, $ {\hat a}^{\scriptscriptstyle \dagger}_{h_1} {\hat a}^{\scriptscriptstyle \dagger}_{h_2} {\hat a}_{k_2} {\hat a}_{k_1} $, giving the slow time evolution of the relevant variables. In this way not only the statistical operator ${\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] $, but also the evolution operator is tuned to the relevant observables. Then one has the following set of closed evolution equations for the thermodynamic fields ${{\mbox{\bf v}}}({\mbox{\bf x}},t)$, $\beta({\mbox{\bf x}},t)$, $\mu ({\mbox{\bf x}},t)$, ($\mu({\mbox{\bf x}},{\mbox{\bf p}},t)$) related to the basic observables $ {\hat A}={ {\hat \rho}}_{m}({\mbox{\bf x}}),\, { {\hat {\mbox{\bf p}}}}({\mbox{\bf x}}),\,{{\hat e}}({\mbox{\bf x}}),\, ({ {\hat f}}({\mbox{\bf x}},{\mbox{\bf p}})) $: \begin{equation} \label{one} { d \over dt } {\mbox{\rm Tr}} \left( {\hat A} {\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] \right) = {\mbox{\rm Tr}} \left( ({\cal L}^{'} {\hat A}) {\hat w}[\beta(t) , \mu (t), {{\mbox{\bf v}}}(t)] \right) . 
\end{equation} The non-Hamiltonian form of the map ${\cal L}^{'}$ eliminates the aforementioned difficulties with vanishing dissipative flows; preliminary investigations of the consequences of (\ref{one}) in the case of a dilute gas indicate that it could be the right solution. The map ${\cal L}^{'}$ that adequately replaces ${i\over\hbar}[{\hat H},\cdot]$ for the slow variables must generate an evolution of the relevant observables that preserves their positivity properties (e.g., ${ {\hat \rho}}_{m}({\mbox{\bf x}}), { {\hat f}}({\mbox{\bf x}},{\mbox{\bf p}}) $) and also conservation of mass (${\hat M}= \int d^3 \! {\mbox{\bf x}} \, { {\hat \rho}}_{m}({\mbox{\bf x}})= \int d^3 \! {\mbox{\bf x}}d^3 \! {\mbox{\bf p}}\, { {\hat f}}({\mbox{\bf x}},{\mbox{\bf p}}) $) and of energy (${\hat E}= \int d^3 \! {\mbox{\bf x}} \, { {\hat e}}({\mbox{\bf x}}) $). Then ${\cal L}^{'}{\hat M}=0$ and ${\cal L}^{'}{\hat E}=0$, while positivity with respect to observables constructed in terms of creation and annihilation operators could arise by a stronger property, reminding CP: \begin{equation} \label{a} \sum_{hk} \langle \psi_h | {\cal U}^{'} \left( {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} \right) \psi_k \rangle >0 \qquad \sum_{h_1 h_2 \atop k_1 k_2} \langle \psi_{h_1 h_2} | {\cal U}^{'} \left( {\hat a}^{\scriptscriptstyle \dagger}_{h_1} {\hat a}^{\scriptscriptstyle \dagger}_{h_2} {\hat a}_{k_2} {\hat a}_{k_1} \right) \psi_{k_1 k_2} \rangle >0 , \end{equation} for any choice of $ \left \{ \psi_h \right \} $ and $ \left \{ \psi_{h_1 h_2} \right \} $. \par The time evolution of the typical expressions (\ref{37}) can be studied by a procedure quite similar to that already shown in $\S\,2$, based on the representation (\ref{7}) in terms of the ``superoperator'' ${\cal H}_0$, having $ {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} $, $ {\hat a}^{\scriptscriptstyle \dagger}_{h_1} {\hat a}^{\scriptscriptstyle \dagger}_{h_2} {\hat a}_{k_2} {\hat a}_{k_1} $ as eigenstates and of the superoperator ${\cal T}(z)$, which was called scattering map. If suitable smoothness properties of ${\cal T}(z)$ occur, essentially only the poles of $({ z - {\cal H}^{'}_0})^{-1}$ contribute to the calculation of (\ref{7}), so that the following asymptotic representation holds: \begin{equation} \label{39} {{{\cal U}^{'}(t)}} \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) = {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} + t {\cal L}' \left( {{{\hat a}^{\scriptscriptstyle \dagger}_{h}}{{\hat a}_{k}}} \right) \qquad \tau_0 \ll t \ll { \hbar \over |E_h - E_k| } ; \end{equation} $\tau_0$ is linked to smoothness properties of ${\cal T}(z)$ and can be interpreted as the typical duration of a collision between two particles interacting through the potential $V( \left | {\mbox{\bf x}} -{\mbox{\bf y}} \right | )$; $\tau_0$ fixes a time scale that is assumed to be much smaller than the typical variation time $\tau_1$ of our relevant observables. ${\cal L}^{'}$ is a linear mapping defined initially on the linearly independent elements $ {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} $, $ {\hat a}^{\scriptscriptstyle \dagger}_{h_1} {\hat a}^{\scriptscriptstyle \dagger}_{h_2} {\hat a}_{k_2} {\hat a}_{k_1} $. For brevity we simply describe the structure of ${\cal L}^{'}$, skipping the derivation. The formalism produces the typical structure of two-particle QM, with an N-body correction due to the Pauli principle. 
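To make the role of the two time scales in (\ref{39}) concrete, the following toy sketch (a generic Lindblad-type Heisenberg generator on a two-level system, chosen only for illustration and not the map ${\cal L}^{'}$ constructed here) compares the exact evolution of an off-diagonal observable with its linear-in-$t$ truncation: the truncation is accurate for times short compared with both the damping time and $\hbar/|E_h-E_k|$, and degrades beyond.
\begin{verbatim}
# Toy check of a short-time expansion U'(t)(A) ~ A + t*L'(A): a generic
# Lindblad-type Heisenberg generator on a two-level system (illustrative
# choice only), represented as a matrix acting on vec(A).
import numpy as np
from scipy.linalg import expm

hbar, gamma = 1.0, 0.05
H = np.diag([0.0, 1.0])                    # |E_h - E_k| = 1
sm = np.array([[0.0, 1.0], [0.0, 0.0]])    # lowering operator
I2 = np.eye(2)

def sup(A, B):   # superoperator of X -> A @ X @ B on column-stacked vec(X)
    return np.kron(B.T, A)

L = (1j / hbar) * (sup(H, I2) - sup(I2, H)) + gamma * (
    sup(sm.conj().T, sm)
    - 0.5 * (sup(sm.conj().T @ sm, I2) + sup(I2, sm.conj().T @ sm)))

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # plays the role of a_h^dag a_k
vecA = A.reshape(-1, order="F")

for t in [0.01, 0.1, 1.0, 10.0]:
    exact = expm(t * L) @ vecA
    linear = vecA + t * (L @ vecA)
    print(t, np.linalg.norm(exact - linear))
\end{verbatim}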
A two-particle scattering operator is defined by \begin{equation} \label{310} {{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(z) = {\hat V}{}^{(2)} + {\hat V}{}^{(2)} { 1 \over z - {{\hat H}_L}^{(2)} } {\hat V}{}^{(2)}_{L} \qquad {{\hat H}_L}^{(2)} = {{\hat H}^{(2)}}_0 + {{\hat V}_L}^{(2)} , \end{equation} where these operators, labelled by the index (2), are defined in the Hilbert space ${\cal H}^{(2)}$ of two identical particles by matrix elements in the two-particle (symmetric or antisymmetric) basis $|l_2 l_1 \rangle$; the matrix elements are: \begin{eqnarray} \label{311} \langle l_2 l_1 | {\hat H}^{(2)}_{0} | f_2 f_1 \rangle &=& (E_{f_1}+E_{f_2}) {1\over 2!} \left( \delta_{l_2 f_2} \delta_{l_1 f_1}\pm \delta_{l_2 f_1} \delta_{l_1 f_2} \right) \nonumber \\ \langle l_2 l_1 | {\hat V}^{(2)} | f_2 f_1 \rangle &=& V_{l_1 l_2 f_2 f_1} \\ \langle l_2 l_1 | {\hat V}^{(2)}_{L} | f_2 f_1 \rangle &=& (1 \pm {\hat n}_{l_1} \pm {\hat n}_{l_2}) V_{l_1 l_2 f_2 f_1} \end{eqnarray} the coefficients $E_f, V_{l_1 l_2 f_2 f_1}$ are given in (\ref{10}) and (\ref{11}), the factor $(1 \pm {\hat n}_{l_1} \pm {\hat n}_{l_2})$ is given in a more indirect way: the ``two-particle'' QM expressed by the aforementioned operators provides c-number coefficients in Fock-space operator expressions initially defined on the Fock-space basis $|\ldots { n}_f \ldots \rangle$, ${ n}_f$ being the occupation numbers of the different field modes $f$: ${ n}_f \in {\rm I\!\!N}$ in the Bose case, ${ n}_f=0,1$ in the Fermi case; therefore the factor $(1 \pm {\hat n}_{l_1} \pm {\hat n}_{l_2})$ depends on the Fock-space basis elements on which the final Fock-space operator is acting. Also the adjoint operator in the two-particle Hilbert space will be useful: \begin{equation} \label{312} {{\hat V}_R}^{(2)} = {{\hat V}_L^{(2)}}{}^{\dagger}, \quad {{\hat H}_R}^{(2)}= {{\hat H}_L^{(2)}}{}^{\dagger}, \quad \left[ {{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(z) } \right]^{\dagger}= {\hat V}{}^{(2)} + {\hat V}{}^{(2)}_{R} { 1 \over z^* - {{\hat H}_R}^{(2)} } {\hat V}{}^{(2)} . \end{equation} The superoperator ${\cal L}^{'}$ consists of an Hamiltonian part ${i\over\hbar}[{\hat H}_{\rm \scriptscriptstyle eff},\cdot]$ and of a part, analogous to the one in (\ref{tipstr}), reminding the Lindblad structure (\ref{1}). The formally self-adjoint Hamilton operator ${\hat H}_{\rm \scriptscriptstyle eff}$ is initially defined on the Fock-space basis $|\ldots {n}_f \ldots \rangle$ by the following expression: \begin{eqnarray} \label{313} {\hat H}_{\rm \scriptscriptstyle eff} &=& \sum_f E_f {\hat a}_f^{\scriptscriptstyle\dagger} {\hat a}_f + {1\over 2} \sum_{l_1 l_2 \atop f_1 f_2} {\hat a}_{l_1}^{\scriptscriptstyle\dagger} {\hat a}_{l_2}^{\scriptscriptstyle\dagger} V{}_{l_1 l_2 f_2 f_1}^{eff} {\hat a}_{f_2} {\hat a}_{f_1} \\ V{}_{l_1 l_2 f_2 f_1}^{eff} &=& \langle {l_2 l_1} | \frac 12 \! \! \left( {{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(E_{f_1}+E_{f_2}+ i\hbar\varepsilon)} + { \left[ {{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(E_{l_1}+E_{l_2}+ i\hbar\varepsilon)} \right] }^{\dagger} \right) | {f_2 f_1} \rangle. 
\nonumber \end{eqnarray} By comparison with (\ref{campo}) one can notice that introducing the time scale $\tau\gg \tau_0$ the coefficients $V_{l_1 l_2 f_2 f_1}$ related to the basic interaction between the field modes is replaced by $V{}_{l_1 l_2 f_2 f_1}^{eff}$, linked with a full, Pauli principle corrected description of the two body collisions in the medium, expressed in terms of the self-adjoint part of the operator ${{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(z)}$. The anti-self-adjoint part $\frac i2 ({{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(z)} -{{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(z) }^{\dagger} )$ is not zero if one goes beyond Born approximation and provides a contribution to ${\cal L}^{'}$ analogous to the second term in the l.h.s. of (\ref{1}), of the form $ - {1\over \hbar} \left( \left[ {\hat \Gamma}^{(2)} , {\hat a}^{\scriptscriptstyle\dagger}_h \right] {\hat a}_k - {\hat a}^{\scriptscriptstyle\dagger}_h \left[ {\hat \Gamma}^{(2)}, {\hat a}_k \right] \right) $, that due to sign ``-'' cannot be rewritten as $\left[ {\hat \Gamma}^{(2)}, \cdot \right]$; the operator ${\hat \Gamma}^{(2)}$ is defined on the Fock-space basis by \[ {1\over 2} \sum_{f_1 f_2 \atop l_1 l_2} {\hat a}^{\scriptscriptstyle\dagger}_{l_1} {\hat a}^{\scriptscriptstyle\dagger}_{l_2} \langle {l_2 l_1} | \frac i2 \! \! \left( {{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(E_{f_1}+E_{f_2}+ i\hbar\varepsilon)} - \left[ {{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(E_{l_1}+E_{l_2}+i\hbar\varepsilon) \right]^{\dagger} \right) | {f_2 f_1} \rangle {\hat a}_{f_2} {\hat a}_{f_1}. \] At this point one immediately expects a third contribution to ${\cal L}^{'}$ related to the product structure $ {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} $ and involving both ${{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}$ and ${{{\hat {{\mbox{\fam\sboldfam\tensbold T}}}}{}^{(2)}}}^{\dagger}$; this contribution is given by $ {1\over\hbar} \sum_{\lambda} \! {\hat R}^{(2)}_{h \lambda}{}^{\dagger} {\hat R}^{(2)}_{k \lambda} $ and reminds the structure of the third term at the l.h.s. of the Lindblad expression (\ref{1}), where it accounted for decoherence (or state reduction, or event production). The operators ${\hat R}^{(2)}_{k \lambda}$ are defined on the Fock-space basis by: \[ {\hat R}^{(2)}_{k \lambda} = -i \sqrt{2\varepsilon (1 \pm {\hat n}_{\lambda} \pm {\hat n}_{k} )} \sum_{f_1 f_2} { \langle {k \lambda} | {{{\hat {\mbox{\fam\sboldfam\tensbold T}}}{}^{(2)}}(E_{f_1}+E_{f_2}+ i\hbar\varepsilon)} | {f_2 f_1} \rangle \over E_k +E_\lambda - E_{f_1}-E_{f_2} -i\hbar\varepsilon } {\hat a}_{f_2} {\hat a}_{f_1} . \] The factor $\sqrt{2\varepsilon (1 \pm {\hat n}_{\lambda} \pm {\hat n}_{k} )}$ arises in the approximate factorisation of a Pauli correction term depending both on ${\hat n}_{k}$ and ${\hat n}_{h}$: \begin{equation} \label{314} 2\varepsilon \left( 1 \pm {\hat n}_{\lambda} \pm \frac 12 ({\hat n}_{h} +{\hat n}_{k}) \right) \approx \sqrt{2\varepsilon (1 \pm {\hat n}_{\lambda} \pm {\hat n}_{h} )} \sqrt{2\varepsilon (1 \pm {\hat n}_{\lambda} \pm {\hat n}_{k} )} , \end{equation} this factorisation, which is a good approximation if the Pauli corrections are not very large, is a typical quantum condition, which together with $\tau_0 \ll \tau_1$ must be satisfied for the validity of the simple thermodynamic behaviour that we are considering in this section. 
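The quality of the factorisation (\ref{314}) is easy to probe numerically. In the small sketch below (illustrative only, with arbitrary sample occupation numbers) the relative deviation between the two sides stays at the per-mille level for small occupations and grows to the ten-percent level when the Pauli corrections become large, in line with the remark above.
\begin{verbatim}
# Numerical probe of the factorisation (314): arithmetic-mean form (lhs)
# versus geometric-mean factorisation (rhs). Occupation numbers are
# arbitrary sample values; eps is an overall scale.
import numpy as np

eps = 1.0
samples = [(0.05, 0.02, 0.08), (0.2, 0.1, 0.3), (0.45, 0.4, 0.5)]
for sign, label in [(+1.0, "Bose"), (-1.0, "Fermi")]:
    for n_lam, n_h, n_k in samples:
        lhs = 2 * eps * (1 + sign * (n_lam + 0.5 * (n_h + n_k)))
        rhs = np.sqrt(2 * eps * (1 + sign * (n_lam + n_h))) \
            * np.sqrt(2 * eps * (1 + sign * (n_lam + n_k)))
        print(label, n_lam, n_h, n_k, abs(lhs - rhs) / lhs)
\end{verbatim}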
The final structure of ${\cal L}^{'}$ is formally the same as in (\ref{tipstr}) \begin{equation} \label{x} {\cal L}^{'} {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} = {i\over\hbar} \left[ {\hat H}_{\rm \scriptscriptstyle eff}, {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} \right] - {1\over \hbar} \left( \left[ {\hat \Gamma}^{(2)} , {\hat a}^{\scriptscriptstyle\dagger}_h \right] {\hat a}_k - {\hat a}^{\scriptscriptstyle\dagger}_h \left[ {\hat \Gamma}^{(2)}, {\hat a}_k \right] \right) + {1\over\hbar} \sum_\lambda {\hat R}^{(2)}_{h \lambda}{}^{\dagger} {\hat R}^{(2)}_{k \lambda}, \end{equation} the main difference lying in the space in which these operators act, according to the two different physical situations. As a consequence of unitarity of ${\cal U}^{'}$ one can prove that within the approximation leading to expression (\ref{x}) one has: \begin{equation} \label{xx} {\hat \Gamma}^{(2)} \approx {1\over4} \sum_{h \lambda} {\hat R}^{(2)}_{h \lambda}{}^{\dagger} {\hat R}^{(2)}_{h \lambda} , \end{equation} therefore one can replace the expression $ {\hat \Gamma}^{(2)} \approx {1\over4} \sum_{h \lambda} {\hat R}^{(2)}_{h \lambda}{}^{\dagger} {\hat R}^{(2)}_{h \lambda}$ in (\ref{x}): in this way the conservation relation ${\cal L}^{'}{\hat M}=0$ is exactly satisfied. It can be easily shown that $ \sum_{hk} \langle \psi_h | ( [ 1+\tau {\cal L}^{'} ] {\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k} ) \psi_k \rangle \ge 0 $ to first order in $\tau$, so that the positivity property (\ref{a}) is satisfied. \par As a preliminary check of the formalism let us report that the calculation of ${\cal L}^{'} ({\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k}) $ yields the typical structure of the collision term of the Boltzmann equation with the Pauli principle corrections: $ {1\over\hbar} {\hat R}^{(2)}_{h \lambda}{}^{\dagger} {\hat R}^{(2)}_{h \lambda} $ and $ - {1\over \hbar} \left[ {\hat \Gamma}^{(2)} , {\hat a}^{\scriptscriptstyle\dagger}_h \right] {\hat a}_h + c.c. $ are respectively the ``gain'' and the ``loss'' part of the collision term; ${\cal L}^{'} ({\hat a}^{\scriptscriptstyle \dagger}_{h} {\hat a}_{k}) $ yields also the streaming term of the Boltzmann equation. The study of ${\cal L}^{'} ( {\hat a}^{\scriptscriptstyle \dagger}_{h_1} {\hat a}^{\scriptscriptstyle \dagger}_{h_2} {\hat a}_{k_2} {\hat a}_{k_1} ) $ is not yet finished, it should end up with a full hydrodynamic and kinetic description of a one component continuum. The approximations leading to ${\cal L}^{'}$ are based on the smoothness assumption related to the condition $\tau_0\ll\tau_1$ and to a ``one mode'' approximation for the description of the dynamics in the time interval $\tau_0$. A finite parameter $\varepsilon \simeq \hbar\tau_0$ appears in the formal expression of ${\cal L}^{'}$; the final results do not appreciably depend on this by-product of the approximations if $\tau_0\ll\tau_1$. In a sense the simple thermodynamic behaviour expressed by (\ref{one}) arises by an approximation and this is indicated by the presence of $\varepsilon$: an appreciable dependence of the results on $\varepsilon$ indicates a failure of the smoothness assumption and of the related approximations. Let us stress finally that in this approach existence of closed evolution equations for the thermodynamic state variables avoids any factorisation assumption for the distribution functions and therefore goes far beyond the approach based on the truncation of a hierarchy. 
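As a minimal numerical illustration of the gain--loss structure just described (a toy model only, not the full ${\cal L}^{'}$), one can evolve the occupations of four modes coupled by a single energy-conserving two-body channel with Pauli-blocking factors; particle number and energy are conserved along the evolution while the occupations relax towards detailed balance. All rates, energies and initial occupations below are arbitrary illustrative choices.
\begin{verbatim}
# Toy quantum-Boltzmann collision term with Pauli blocking for a single
# two-body channel (1,2) <-> (3,4) with E1+E2 = E3+E4.
import numpy as np

E = np.array([0.0, 2.0, 0.5, 1.5])       # E[0]+E[1] == E[2]+E[3]
n = np.array([0.9, 0.7, 0.1, 0.2])       # initial occupations (Fermi case)
W, dt, steps = 1.0, 1e-3, 20000

def collision(n):
    gain = n[2] * n[3] * (1 - n[0]) * (1 - n[1])   # "gain" into modes 1,2
    loss = n[0] * n[1] * (1 - n[2]) * (1 - n[3])   # "loss" out of modes 1,2
    return W * (gain - loss) * np.array([1.0, 1.0, -1.0, -1.0])

for _ in range(steps):
    n = n + dt * collision(n)

print("occupations:", n)
print("conserved N and E:", n.sum(), n @ E)
\end{verbatim}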
\par \section{Dynamics with memory effects} \par According to $\S\,3$, when $ {i\over\hbar} [\hat H, \cdot] $ can be replaced by the map ${\cal L}^{'}$ given by (\ref{x}) the family of generalised Gibbs states ${\hat w}(t)$ replaces the family of statistical operators $\hat \varrho_t = {\cal U}({t- t_0}) \hat \varrho_{t_0}$ (for simplicity we assume now time-translation invariance). This means that to determine the evolution of the thermodynamic state from a given time $ \bar t$ onwards nothing else than $\beta$, $\mu $, ${{\mbox{\bf v}}}$ at that time has to be taken into account: i.e. no bias comes by the previous history $\beta({\mbox{\bf x}},t')$, $\mu ({\mbox{\bf x}},t')$, ${{\mbox{\bf v}}}({\mbox{\bf x}},t')$, $t' < \bar t$. This is no longer true if the condition $\tau_1 \gg \tau_0$ [see eq. (\ref{39})] is not satisfied. Let us assume that at an initial time T the statistical operator ${\hat \varrho}_T$ can be identified with the Gibbs state ${\hat w}(T)$ related to it: \begin{equation} \label{41} {\hat \varrho}_T={\hat w}(T). \end{equation} By a straightforward calculation~\cite{Robin} one has: \begin{eqnarray} \label{42} {\hat \varrho}_t &=& e^{-{i\over \hbar}{\hat H}(t- T)} {\hat \varrho}_T e^{{i\over \hbar}{\hat H}(t-T)} = { e^{ - \langle \beta(T) \cdot {\hat e}^{(0)}[-(t-T)] \rangle + \langle [\mu(T)\beta(T)] \cdot{\hat \rho}_{m}[-(t-T)] \rangle } \over {\mbox{\rm Tr}} \, e^{ - \langle \beta(T) \cdot {\hat e}^{(0)}[-(t-T)] \rangle + \langle [\mu(T)\beta(T)] \cdot{\hat \rho}_{m}[-(t-T)] \rangle } } \\ &=& { e^{ - \langle \beta(t) \cdot{\hat e}^{(0)} \rangle + \langle [\mu(t)\beta(t)] \cdot{\hat \rho}_{m} \rangle - \int_0^{t-T} d\tau \, { d \over d\tau } \left( { \langle \beta(t-\tau) \cdot{\hat e}^{(0)}(-\tau) \rangle - \langle [\mu(t-\tau)\beta(t-\tau)] \cdot{\hat \rho}_{m}(-\tau) \rangle } \right) } \over {\mbox{\rm Tr}} \, e^{ - \langle \beta(t) \cdot{\hat e}^{(0)} \rangle + \langle [\mu(t)\beta(t)] \cdot{\hat \rho}_{m} \rangle - \int_0^{t-T} d\tau \, { d \over d\tau } \left( { \langle \beta(t-\tau) \cdot{\hat e}^{(0)}(-\tau) \rangle - \langle [\mu(t-\tau)\beta(t-\tau)] \cdot{\hat \rho}_{m}(-\tau) \rangle } \right) } } \nonumber \end{eqnarray} where ${\hat A}(\tau)= e^{{i\over \hbar}{\hat H}\tau} {\hat A} e^{-{i\over \hbar}{\hat H}\tau} $ and $\langle \beta(t) \cdot {\hat A} \rangle = \int_\omega d^3\, {\mbox{\bf x}}\, \beta(t,{\mbox{\bf x}}){\hat A}({\mbox{\bf x}}) $. In the last term of the exponent the history of the thermodynamic state during the time interval $[T,t]$ appears; by ${\dot {\hat \rho}}_{m} = -{\mbox{\rm div}}{\hat {\mbox{\bf J}}}^{(0)}_{{{m}}}$, $ {\dot {\hat e}}= -{\mbox{\rm div}}{\hat {\mbox{\bf J}}}^{(0)}_l $ it can be rewritten in the more perspicuous form: \begin{eqnarray} \label{43} && - \int_0^{t-T} d\tau \, { d \over d\tau } \left( { \langle \beta(t-\tau) \cdot{\hat e}^{(0)}(-\tau) \rangle - \langle [\mu(t-\tau)\beta(t-\tau)] \cdot{\hat \rho}_{{{m}}}(-\tau) \rangle } \right) = \hphantom{{}-{}ttttttt} \\ && \hphantom{{}-{}} = \int_T^t d\tau' \left[ \langle { d \over d\tau' } \beta(\tau') \cdot {\hat e}^{(0)}(\tau' - t) \rangle - \langle { d \over d\tau' } [\beta(\tau')\mu(\tau')] \cdot {\hat \rho}_{{{m}}}^{(0)}(\tau' - t) \rangle \right. \nonumber \\ && \hphantom{{}-{}= \int_T^t d\tau'[[} - \left. 
\langle {\mbox{\rm grad}}\beta(\tau')\cdot {\hat {\mbox{\bf J}}}^{(0)}_l(\tau' - t) \rangle + \langle {\mbox{\rm grad}} (\beta(\tau')\mu(\tau'))\cdot {\hat {\mbox{\bf J}}}^{(0)}_{{{m}}} (\tau' - t) \rangle \right] \nonumber \\ && \hphantom{ {}-{}= } + \int_T^{t_0} d\tau' \int_{\partial\omega} d\sigma \, {\mbox{\bf n}}\cdot \left( \langle \beta(\tau',{\mbox{\bf x}}) {\hat {\mbox{\bf J}}}^{(0)}_l({\mbox{\bf x}},\tau' - t) \rangle - \langle \beta(\tau',{\mbox{\bf x}}) \mu(\tau',{\mbox{\bf x}}) {\hat {\mbox{\bf J}}}^{(0)}_{{{m}}} ({\mbox{\bf x}}, \tau' - t) \rangle \right) \nonumber \end{eqnarray} where time and space derivatives of the thermodynamic fields appear on the same footing; by the last term also matter and energy exchanges with the environment during the preparation time $[T, t_o]$ can be described. Taking expression (\ref{42}) for ${\hat \varrho}_t$ to calculate the basic expectation values and thus determining the thermodynamic state variables for $t > t_0$ as in (\ref{one}), one has for them closed evolution equations having as input $\beta({\mbox{\bf x}},\tau')$, $\mu ({\mbox{\bf x}},\tau')$, ${{\mbox{\bf v}}}({\mbox{\bf x}},\tau')$ $\tau' \in [T,t_0]$. Such equations are generally used and work under the hypothesis that memory decays within a typical correlation time. This also helps to attenuate the problem of the initial choice (\ref{41}). We mention in passing that no problem about condition (\ref{41}) exists if one assumes the point of view of ``informational thermodynamics'': then ${\hat \varrho}_T={\hat w}(T)$ is just dictated by the measured values of the relevant variables at time $T$; however this approach does not explain why the previous history of a concrete collection of macrosystems is irrelevant just before $T$. Let us also mention the solution given by Zubarev: $T$ is shifted to $-\infty$, thus taking off any previous history; however this limit is highly critical and since also a thermodynamic limit is involved, it shifts the problem of thermodynamic evolution to a cosmological one. In our framework a time scale is associated to the system and the generator ${\cal L}^{'}$ should have a non-Hamiltonian part. This provides a mechanism by which memory can decay. Let us assume that the preparation time $t_o - T$ is larger than the decay time of the memory: at this point the history that comes before $T$ is irrelevant for the dynamics that ${\cal L}^{'}$ is able to describe. Then the choice (\ref{41}), that is not biased by this history, is adequate. When $ {\cal L}^{'}= {i\over\hbar} [\hat H, \cdot] + {\tilde {\cal L}}^{'} $, calling $ {\hat \varrho}({[\beta,\mu,{\mbox{\bf v}}]},t,T) $ the operator on the l.h.s. of (\ref{42}) one has: \begin{eqnarray*} && {\mbox{\rm Tr}} \left( {\hat A} {\cal U}{(t,T)} {\hat \varrho}_T \right) = {\mbox{\rm Tr}} \left[ \left( T e^{ \int_T^t d\tau \, {\tilde {\cal L}}^{'}(\tau) } {\hat A} \right) {\hat \varrho}({[\beta,\mu,{\mbox{\bf v}}]},t,T) \right] , \\ && {\tilde {\cal L}}^{'}(\tau) = e^{{{\cal L}_0}^{'}(T-\tau)} {\tilde {\cal L}}^{'} e^{-{{\cal L}_0}^{'}(T-\tau)} , \qquad {{\cal L}_0}^{'}= {i\over\hbar} [\hat H, \cdot] . \end{eqnarray*} \par \vskip 20pt \end{document}
\begin{document}
\newtheorem{theorem}{\bf Theorem}[section] \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{definition}[theorem]{\bf Definition} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{example}[theorem]{\bf Example} \newtheorem{exam}[theorem]{\bf Example} \newtheorem{remark}[theorem]{\bf Remark} \newtheorem{lemma}[theorem]{\bf Lemma} \newtheorem{conclusions}[theorem]{\bf Conclusions}
\newcommand{\nrm}[1]{|\!|\!| {#1} |\!|\!|} \newcommand{\dm}[1]{ {\displaystyle{#1} } } \newcommand{\inp}[2]{\langle {#1} ,\,{#2} \rangle}
\def\bmatrix#1{\left[ \begin{matrix} #1 \end{matrix} \right]} \def \noin{\noindent}
\def \R{{\mathbb R}} \def \C{{\mathbb C}} \def \K{{\mathbb K}} \def \J{{\mathbb J}} \def \Lb{\mathrm{L}} \def \T{{\mathbb T}} \def \Pb{\mathrm{P}} \def \N{{\mathbb N}} \def \Ib{\mathrm{I}} \def \Ls{{\Lambda}_{m-1}} \def \Gb{\mathrm{G}} \def \Hb{\mathrm{H}} \def \Lam{{\Lambda_{m}}} \def \Qb{\mathrm{Q}} \def \Rb{\mathrm{R}} \def \Mb{\mathrm{M}} \def \norm{\nrm{\cdot}\equiv \nrm{\cdot}} \def \P{{\mathbb P}_m(\C^{n\times n})} \def \A{{{\mathbb P}_1(\C^{n\times n})}} \def \H{{\mathbb H}} \def \L{{\mathbb L}} \def \G{{\mathcal G}} \def \S{{\mathbb S}} \def \sigmin{\sigma_{\min}} \def \elam{\sigma_{\epsilon}} \def \slam{\sigma^{\S}_{\epsilon}} \def \Tb{\mathrm{T}} \def \d{{\delta}} \def \Delta{\triangle} \def \Rar{\Rightarrow} \def \p{{\mathsf{p}(\lam; v)}} \def \D{{\mathbb D}} \def \tr{\mathrm{Tr}} \def \cond{\mathrm{cond}} \def \lam{\lambda} \def \sig{\sigma} \def \sign{\mathrm{sign}} \def \ep{\epsilon} \def \diag{\mathrm{diag}} \def \rev{\mathrm{rev}} \def \vec{\mathrm{vec}} \def \spsd{\mathsf{spsd}} \def \spd{\mathsf{spd}} \def \sk{\mathsf{skew}} \def \sy{\mathsf{sym}} \def \en{\mathrm{even}} \def \odd{\mathrm{odd}} \def \even{\mathrm{even}} \def \rank{\mathrm{rank}} \def \pf{{\bf Proof: }} \def \dist{\mathrm{dist}} \def \rar{\rightarrow} \def \Re{\mathsf{Re}} \def \Im{\mathsf{Im}} \def \re{\mathsf{re}} \def \im{\mathsf{im}} \def \sym{\mathsf{sym}} \def \sksym{\mathsf{skew\mbox{-}sym}} \def \herm{\mathsf{Herm}} \def \skherm{\mathsf{skew\mbox{-}Herm}} \def \str{\mathrm{ Struct}} \def \eproof{$\blacksquare$} \def \proof{\noin\pf} \def \bS{{\bf S}} \def \cA{{\cal A}} \def \E{{\mathcal E}} \def \X{{\mathcal X}} \def \F{{\mathcal F}} \def \range{\mathrm{Range}} \def \pal{\mathrm{palindromic}} \def \palpen{\mathrm{palindromic~~ pencil}} \def \palpoly{\mathrm{palindromic~~ polynomial}}
\title{Vector space of linearizations for the quadratic two-parameter matrix polynomial}
\author{ Bibhas Adhikari\thanks{E-mail: [email protected]} \\ Indian Institute of Technology Rajasthan \\ Jodhpur, India }
\date{} \maketitle \thispagestyle{empty}
\noin\textbf{Abstract:} Given a quadratic two-parameter matrix

polynomial $Q(\lam,\mu)$, we develop a systematic approach to generating a vector space of linear two-parameter matrix polynomials. We identify a set of linearizations of $Q(\lam,\mu)$ that lie in the vector space. Finally, we determine a class of linearizations for a quadratic two-parameter eigenvalue problem.\\ \noin\textbf{Key words:} matrix polynomial, two-parameter matrix polynomial, quadratic two-parameter eigenvalue problem, two-parameter eigenvalue problem, linearization \\ \noin\textbf{AMS classification:} 65F15, 15A18, 15A69, 15A22 \section{Introduction} We consider two-parameter quadratic matrix polynomials of the form \begin{equation}gin{equation}\label{def:qtp} Q(\lam, \mu) =A + \lam B + \mu C + \lam^2 D + \lam\mu E + \mu^2 F \end{equation} where $\lam, \mu$ are scalars and the coefficient matrices are real or complex matrices of order $n\times n.$ If $(\lam,\mu)\in \C\times\C$ and nonzero $x\in \C^n$ satisfy $Q(\lam,\mu)x = 0$, then $x$ is said to be an eigenvector of $Q(\lam, \mu)$ corresponding to the eigenvalue $(\lam,\mu)$. The classical approach to solving spectral problems for matrix polynomials is to first perform a linearization, that is, to transform the given polynomial into a linear matrix polynomial, and then work with this linear polynomial (see \cite{Kha97,Kub98,Kha07,mupl08,mupl081,mackey3} and the references therein). Therefore, given a quadratic two-parameter matrix polynomial $Q(\lam,\mu)$, we seek linear two-parameter matrix polynomials $L(\lam,\mu)= \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$, called \emph{linearizations}, which have the same eigenvalues as $Q(\lam,\mu).$ The one-parameter matrix polynomials have been an active area of research in numerical linear algebra \cite{mackey3,fiedler1,fiedler2}. In \cite{mackey3}, Mackey et al. have investigated the one-parameter polynomial eigenvalue problem extensively and they have produced vector spaces of linearizations for a one-parameter matrix polynomial by generalizing the companion forms of the one-parameter polynomial. Adopting a similar approach we derive a set of linearizations of a quadratic two-parameter matrix polynomial. The quadratic two-parameter eigenvalue problem is concerned with finding scalars $\lam, \mu \in\C$ and non-zero vectors $x_1\in\C^{n_1}, x_2\in\C^{n_2}$ such that \begin{equation}gin{equation}\label{defn:qevp1} \left\{ \begin{equation}gin{array}{ll} Q_1(\lam,\mu)x_1=(A_1 + \lam B_1 + \mu C_1 + \lam^2 D_1 + \lam\mu E_1 + \mu^2 F_1)x_1=0 & \hbox{} \\ Q_2(\lam,\mu)x_2=(A_2 + \lam B_2 + \mu C_2 + \lam^2 D_2 + \lam\mu E_2 + \mu^2 F_2)x_2=0 & \hbox{} \end{array} \right. \end{equation} where $A_i, B_i, \hdots, F_i\in\C^{n_i\times n_i}, i=1,2.$ A pair $(\lam, \mu)$ satisfying (\ref{defn:qevp1}) is called an eigenvalue of (\ref{defn:qevp1}) and $x_1\otimes x_2,$ where $\otimes$ is the Kronecker product, is the corresponding eigenvector. This problem appears in stability analysis of different systems, for example, time-delay systems of single delay \cite{HocMuPl10,jar08,jarhocs09,mupl08}. The standard approach to solving (\ref{defn:qevp1}) is by linearizing the problem into a two-parameter eigenvalue problem of larger size and then by converting it into an equivalent coupled generalized eigenvalue problem which is then solved by numerical methods, see \cite{mupl08,mupl081,HocMuPl10}. 
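\noin For orientation, the following NumPy sketch (illustrative only, with random matrices) shows the second half of this standard route on a small \emph{linear} two-parameter eigenvalue problem: using the operator determinants $\Delta_0, \Delta_1, \Delta_2$ recalled in section 3, the coupled generalized eigenvalue problems deliver pairs $(\lam,\mu)$ at which both determinants $\det L_i(\lam,\mu)$ vanish.
\begin{verbatim}
# Illustrative sketch (random data): solve a linear two-parameter
# eigenvalue problem (A_i + lam*B_i + mu*C_i) w_i = 0 via the coupled
# generalized eigenvalue problems Delta_1 z = lam Delta_0 z and
# Delta_2 z = mu Delta_0 z, with z = w_1 (x) w_2 (see section 3).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n1, n2 = 2, 3
A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))

D0 = np.kron(B1, C2) - np.kron(C1, B2)
D1 = np.kron(C1, A2) - np.kron(A1, C2)
D2 = np.kron(A1, B2) - np.kron(B1, A2)

lams, Z = eig(D1, D0)
for lam, z in zip(lams, Z.T):
    mu = (z.conj() @ D2 @ z) / (z.conj() @ D0 @ z)  # recovers mu (simple eigenvalues)
    r1 = abs(np.linalg.det(A1 + lam * B1 + mu * C1))
    r2 = abs(np.linalg.det(A2 + lam * B2 + mu * C2))
    print(lam, mu, r1, r2)   # r1, r2 are numerically zero
\end{verbatim}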
Given (\ref{defn:qevp1}), we seek a two-parameter eigenvalue problem
\begin{equation}\label{sec1:defn:qlin} \left\{ \begin{array}{ll} L_1(\lam,\mu)w_1 := (A^{(1)} + \lam B^{(1)} + \mu C^{(1)})w_1 =0& \hbox{} \\ L_2(\lam,\mu)w_2 := (A^{(2)} + \lam B^{(2)} + \mu C^{(2)})w_2 =0& \hbox{} \end{array} \right. \end{equation}
with the same eigenvalues, where $w_i \in\C^{3n_i}\setminus\{0\}$ and $A^{(i)}, B^{(i)}, C^{(i)}\in \C^{3n_i \times 3n_i},$ $ i=1,2.$ In such a case (\ref{sec1:defn:qlin}) is called a linearization of (\ref{defn:qevp1}). The choice of a linearization may have an adverse effect on the sensitivity of the eigenvalues. Therefore, it is important to identify potential linearizations and describe their constructions. In this paper, we develop a systematic approach that enables us to generate a class of linearizations for a quadratic two-parameter eigenvalue problem.
\section{Linearizations for quadratic two-parameter matrix polynomial}
In this section we construct a set of linearizations of a quadratic two-parameter matrix polynomial.
\begin{definition}(\cite{mupl08}) An $ln\times ln$ linear matrix polynomial $L(\lam, \mu) = \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$ is a linearization of an $n\times n$ matrix polynomial $Q(\lam, \mu)$ if there exist polynomials $P(\lam, \mu)$ and $R(\lam, \mu),$ whose determinants are non-zero constants independent of $\lam$ and $\mu,$ such that $$\bmatrix{Q(\lam,\mu) & 0 \\ 0 & I_{(l-1)n}} = P(\lam, \mu)L(\lam, \mu)R(\lam, \mu).$$ \end{definition}
Let $Q(\lam, \mu)$ be a quadratic two-parameter matrix polynomial given by $$Q(\lam, \mu)=\lam^2A_{20} + \mu^2A_{02}+\lam\mu A_{11}+\lam A_{10}+\mu A_{01}+A_{00}$$ where the coefficient matrices are of order $n\times n.$ Assume that $x$ is an eigenvector corresponding to an eigenvalue $(\lam,\mu)$ of $Q(\lam, \mu),$ that is, $Q(\lam, \mu)x =0.$ With a view to constructing linearizations of $Q(\lam, \mu)$, we denote $x=x_{00}, \lam x_{00}= x_{10}, \mu x_{00} = x_{01}.$ Then we have \begin{equation} A_{20}(\lam x_{10}) + A_{02}(\mu x_{01}) + A_{11} (\lam x_{01}) + A_{10}x_{10} + A_{01}x_{01} + A_{00}x_{00} =0. \end{equation}
Consequently we have \begin{equation}\label{def:l1} \left(\lam \underbrace{\bmatrix{A_{20} & A_{11} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & I}}_{\widehat{A}_1} + \mu \underbrace{\bmatrix{0 & A_{02} & 0 \\ 0 & 0 & I\\ 0 & 0 & 0}}_{\widehat{A}_2} + \underbrace{\bmatrix{A_{10} & A_{01} & A_{00} \\ 0 & -I & 0\\ -I & 0 & 0}}_{\widehat{A}_3} \right) \bmatrix{x_{10} \\ x_{01} \\ x_{00}} = 0.\end{equation}
Observe that $\bmatrix{x_{10} \\ x_{01} \\ x_{00}} = \bmatrix{\lam x \\ \mu x \\ x} = \bmatrix{\lam\\ \mu \\ 1}\otimes x.$ We denote $\Lambda:= \bmatrix{\lam\\ \mu \\ 1}.$ Thus $x$ is an eigenvector corresponding to an eigenvalue $(\lam, \mu)$ of $Q(\lam, \mu)$ if and only if $L(\lam, \mu) w =0$ where $w =\Lambda\otimes x$ and $L(\lam, \mu) = \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3,$ that is, $w$ is an eigenvector corresponding to an eigenvalue $(\lam,\mu)$ of $L(\lam,\mu).$ We show that $L(\lam,\mu)$ is a linearization of $Q(\lam,\mu).$ Define \begin{eqnarray*} E(\lam,\mu) &:=& \bmatrix{\lam I_n & I_n & 0\\ \mu I_n & 0 & I_n \\ I_n & 0 & 0},\\ F(\lam,\mu) &:=& \bmatrix{I_n & \mu A_{02}+\lam A_{11}+A_{01} & \lam A_{20}+A_{10} \\ 0 & 0 & I_n \\ 0 & I_n & 0}.\end{eqnarray*}
Notice that $E, F$ are unimodular two-parameter matrix polynomials, that is, the determinants of $E$ and $F$ are non-zero constants.
Then we can easily check that \begin{equation}gin{equation} F(\lam,\mu)L(\lam,\mu)E(\lam,\mu) = \bmatrix{Q(\lam,\mu) & 0\\ 0 & I_{2n}}.\end{equation} Thus we have $\mbox{det}Q(\lam,\mu)=\gamma\mbox{det}L(\lam,\mu)$ for some $\gamma\neq 0.$ This implies $L(\lam,\mu)$ preserves the eigenvalues of $Q(\lam, \mu)$ and hence is a linearization of order $3n\times 3n.$ We call this linearization the \textit{standard} linearization of $Q(\lam,\mu)$. It is interesting to observe that the linearization proposed in \cite{mupl08} is up to some permutations of block rows and columns of the standard linearization. However, for the \textit{standard} linearization we have \begin{equation}gin{equation}\label{erual:LQ} L(\lam, \mu) \cdot (\Lambda\otimes x) =\left[ (Q(\lam, \mu)x)^T \,\, 0 \,\, \hdots \,\, 0\right]^T \,\, \mbox{for all} \,\, x\in\C^n,\end{equation} and therefore, any solution of (\ref{def:l1}) gives a solution of the original problem $Q(\lam, \mu)x=0.$ Further, by (\ref{erual:LQ}) we have \begin{equation}gin{equation}\label{lincondition} L(\lam, \mu) \cdot (\Lambda\otimes I_n) = L_1(\lam, \mu) \bmatrix{\lam I_n \\ \mu I_n \\ I_n} = \bmatrix{Q(\lam, \mu) \\ 0 \\ 0} = e_1 \otimes Q(\lam, \mu), e_1=\bmatrix{1\\ 0\\ 0}. \end{equation} We restrict our attention to the equation (\ref{lincondition}) which is satisfied by the \textit{standard} linearization. It would be worthy to find linear two-parameter matrix polynomials $L(\lam,\mu)$ that satisfy \begin{equation}gin{equation}\label{intro:v}L(\lam,\mu)\cdot (\Lambda\otimes I_n) = v \otimes Q(\lam, \mu)\end{equation} for some vector $v\in\C^3.$ Therefore, we introduce the notation \begin{equation}gin{equation} \mathcal{V}_Q =\{ v\otimes Q(\lam, \mu) : v\in\K^3\} \end{equation} and define \begin{equation}gin{equation}\label{def:L1} \L(Q(\lam, \mu)) := \left\{ L(\lam, \mu) = \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3, \widehat{A}_i\in\K^{3n\times 3n} : L(\lam, \mu) \cdot (\Lambda \otimes I_n) \in\mathcal{V}_Q\right\}.\end{equation} Note that $\L(Q(\lam, \mu))\neq\emptyset$ as the \textit{standard} linearization $L(\lam,\mu)\in \L(Q(\lam, \mu)).$ It is easy to check that $\L(Q(\lam, \mu)) $ is a vector space. If $L(\lam,\mu)\in \L(Q(\lam, \mu)) $ for some $v\in\C^3$ then call $v$ is an \textit{ansatz} vector associated with $L(\lam,\mu)$. To investigate the structure of each $L(\lam,\mu)\in \L(Q(\lam, \mu)),$ we define a ``box-addition" for three $3n\times 3n$ block matrices as follows. 
\begin{equation}gin{definition} Let $\widehat{X}, \widehat{Y}, \widehat{Z} \in \C^{3n\times 3n}$ be three block matrices of the form $$\widehat{X}=\bmatrix{X_{11} & X_{12} & X_{13} \\ X_{21} & X_{22} & X_{23} \\ X_{31} & X_{32} & X_{33}}, \widehat{Y} = \bmatrix{Y_{11} & Y_{12} & Y_{13} \\ Y_{21} & Y_{22} & Y_{23} \\ Y_{31} & Y_{32} & Y_{33}}, \widehat{Z}= \bmatrix{Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & Z_{23} \\ Z_{31} & Z_{32} & Z_{33}}.$$ Define \begin{equation}ano \widehat{X} \boxplus \widehat{Y} \boxplus \widehat{Z} &=& \bmatrix{X_{11} & X_{12} & X_{13} \\ X_{21} & X_{22} & X_{23} \\ X_{31} & X_{32} & X_{33}} \boxplus \bmatrix{Y_{11} & Y_{12} & Y_{13} \\ Y_{21} & Y_{22} & Y_{23} \\ Y_{31} & Y_{32} & Y_{33}} \boxplus \bmatrix{Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & Z_{23} \\ Z_{31} & Z_{32} & Z_{33}} \\ &=& \bmatrix{X_{11} & X_{12} & 0 & X_{13} & 0 & 0 \\ X_{21} & X_{22} & 0 & X_{23} & 0 & 0 \\ X_{31} & X_{32} & 0 & X_{33} & 0 & 0} + \bmatrix{0 & Y_{11} & Y_{12} & 0 & Y_{13} & 0 \\ 0 & Y_{21} & Y_{22} & 0 & Y_{23} & 0 \\ 0 & Y_{31} & Y_{32} & 0 & Y_{33} & 0} \\ && + \bmatrix{0 & 0 & 0 & Z_{11} & Z_{12} & Z_{13} \\ 0 & 0 & 0 & Z_{21} & Z_{22} & Z_{23} \\ 0 & 0 & 0 & Z_{31} & Z_{32} & Z_{33}} \end{equation}ano where $`+$' is the usual matrix addition. \end{definition} For the \textit{standard} linearization $L(\lam, \mu)=\lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3\in\L(Q(\lam, \mu)) $ we have \begin{equation}ano \widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3 &=& \bmatrix{A_{20} & A_{11} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & I} \boxplus \bmatrix{0 & A_{02} & 0 \\ 0 & 0 & I\\ 0 & 0 & 0} \boxplus \bmatrix{A_{10} & A_{01} & A_{00} \\ 0 & -I & 0\\ -I & 0 & 0} \\ &=& \bmatrix{A_{20} & A_{11} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 &0 & 0 & 0\\ 0& 0 & 0 & I & 0 & 0} + \bmatrix{0 & 0 & A_{02} & 0 & 0 & 0\\ 0 & 0 & 0 &0 & I & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 } \\ && { + \bmatrix{ 0 & 0 &0 &A_{10} & A_{01} & A_{00} \\ 0 & 0 & 0& 0 & -I & 0\\ 0 & 0 & 0 & -I & 0 & 0}} \\ &=& \bmatrix{A_{20} & A_{11} & A_{02} & A_{10} & A_{01} & A_{00}\\ 0 & 0 & 0 &0 & 0 & 0\\ 0& 0 & 0 & 0 & 0 & 0} \\ &=& e_1 \otimes \bmatrix{A_{20} & A_{11} & A_{02} & A_{10} & A_{01} & A_{00}} .\end{equation}ano Thus we have the following lemma. \begin{equation}gin{lemma}\label{lem:sec2} Let $Q(\lam, \mu) = \lam^2A_{20} + \mu^2A_{02}+\lam\mu A_{11}+\lam A_{10}+\mu A_{01}+A_{00}$ be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order $n\times n$, and $L(\lam, \mu)= \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$ a $3n\times 3n$ two-parameter linear matrix polynomial. Then \begin{equation}ano L(\lam, \mu) \cdot (\Lambda\otimes I_n) &=& v\otimes Q(\lam,\mu) \Leftrightarrow \\ && \widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3 = v\otimes \bmatrix{A_{20} & A_{11} & A_{02} & A_{10} & A_{01} & A_{00}}. \end{equation}ano \end{lemma} \noin\pf Computational and easy to check. 
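\noin The box-addition and the lemma are straightforward to check numerically. The following NumPy sketch (illustrative only, with random coefficient blocks of size $n=2$) assembles the \textit{standard} linearization, forms $\widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3$, and verifies both the box identity and the ansatz identity $L(\lam,\mu)(\Lambda\otimes I_n)=e_1\otimes Q(\lam,\mu)$ at a sample point.
\begin{verbatim}
# Illustrative numerical check (random blocks) of the box-addition and of
# the lemma above for the standard linearization.
import numpy as np

rng = np.random.default_rng(1)
n = 2
A20, A11, A02, A10, A01, A00 = (rng.standard_normal((n, n)) for _ in range(6))
I, O = np.eye(n), np.zeros((n, n))

def Q(lam, mu):
    return lam**2*A20 + mu**2*A02 + lam*mu*A11 + lam*A10 + mu*A01 + A00

# blocks of the standard linearization
A1h = np.block([[A20, A11, O], [O, O, O], [O, O, I]])
A2h = np.block([[O, A02, O], [O, O, I], [O, O, O]])
A3h = np.block([[A10, A01, A00], [O, -I, O], [-I, O, O]])

def bcols(M):                 # the three block columns of a 3n x 3n matrix
    return [M[:, i*n:(i+1)*n] for i in range(3)]

def boxplus(X, Y, Z):         # X [+] Y [+] Z as a 3n x 6n matrix
    Xc, Yc, Zc = bcols(X), bcols(Y), bcols(Z)
    Z3 = np.zeros((3*n, n))
    return (np.hstack([Xc[0], Xc[1], Z3, Xc[2], Z3, Z3])
            + np.hstack([Z3, Yc[0], Yc[1], Z3, Yc[2], Z3])
            + np.hstack([Z3, Z3, Z3, Zc[0], Zc[1], Zc[2]]))

e1 = np.array([[1.0], [0.0], [0.0]])
row = np.hstack([A20, A11, A02, A10, A01, A00])
print(np.allclose(boxplus(A1h, A2h, A3h), np.kron(e1, row)))      # box identity

lam, mu = 0.7, -1.3
L = lam*A1h + mu*A2h + A3h
Lam = np.array([[lam], [mu], [1.0]])
print(np.allclose(L @ np.kron(Lam, I), np.kron(e1, Q(lam, mu))))  # ansatz identity
\end{verbatim}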
\begin{equation}gin{example}\label{exp1} Consider a quadratic two-parameter polynomial $$ Q(\lam, \mu) = \lam^2A_{20} + \mu^2A_{02}+\lam\mu A_{11}+\lam A_{10}+\mu A_{01}+A_{00} $$ where $A_{ij}\in \C^{n\times n}$ and \begin{equation}ano L(\lam,\mu) = && \lam\bmatrix{A_{20} & A_{11}+A_{20} & A_{10}+A_{01} \\ A_{20} & A_{00} & 0 \\ 2A_{20} & A_{02} + 2A_{11} & I } \\ && +\mu \bmatrix{-A_{20} & A_{02} & A_{01} \\ A_{11}-A_{00} & A_{02} & 0\\ -A_{02} & 2A_{02} & A_{01}} + \bmatrix{-A_{01} & 0 & A_{00} \\ A_{01} & A_{01} & A_{00}\\ -I+2A_{10} & A_{01} & 2A_{00}}.\end{equation}ano Then $L(\lam,\mu)\in\L(Q(\lam, \mu))$ since $$\widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3 = [1 \,\, 1 \,\, 2]^T \otimes \bmatrix{A_{20} & A_{11} & A_{02} & A_{10} & A_{01} & A_{00}}.$$ \end{example} Using Lemma \ref{lem:sec2} we characterize the structure of any $L(\lam, \mu) \in \L(Q(\lam, \mu)).$ \begin{equation}gin{theorem}\label{thm:charlin} Let $Q(\lam, \mu) = \lam^2A_{20} + \mu^2A_{02}+\lam\mu A_{11}+\lam A_{10}+\mu A_{01}+A_{00}$ be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order $n\times n$, and $v\in\C^3.$ Then a linear two-parameter matrix polynomial $L(\lam, \mu) \in \L(Q(\lam,\mu))$ corresponding to the ansatz vector $v$ is of the form $L(\lam, \mu)= \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$ where \begin{equation}ano \widehat{A}_1 &=& \bmatrix{v\otimes A_{20} & -Y_1+v\otimes A_{11} & -Z_1+v\otimes A_{10}}\\ \widehat{A}_2 &=& \bmatrix{Y_1 & v\otimes A_{02} & -Z_2+v\otimes A_{01} }\\ \widehat{A}_3 &=& \bmatrix{Z_1 & Z_2 & v\otimes A_{00}}\end{equation}ano where $Y_1=\bmatrix{Y_{11} \\ Y_{21} \\ Y_{31}}, Z_1=\bmatrix{Z_{11} \\ Z_{21} \\ Z_{31}}, Z_2=\bmatrix{Z_{12} \\ Z_{22} \\ Z_{32}}\in \C^{3n\times n}$ are arbitrary. \end{theorem} \noin\pf Let $\mathcal{M}: \L(Q(\lam, \mu)) \rightarrow \mathcal{V}_Q$ be a multiplicative map defined by $L(\lam, \mu)\mapsto L(\lam, \mu)(\Lambda\otimes I_n).$ Its easy to see that $\mathcal{M}$ is linear. First we show that $\mathcal{M}$ is surjective. Let $v\otimes Q(\lam, \mu)$ be an arbitrary element of $\mathcal{V}_Q.$ Construct $L(\lam, \mu)= \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$ where \begin{equation}ano \widehat{A}_1 &=& \bmatrix{v\otimes A_{20} & v\otimes A_{11} & v\otimes A_{10}}\\ \widehat{A}_2 &=& \bmatrix{0 & v\otimes A_{02} & v\otimes A_{01} }\\ \widehat{A}_3 &=& \bmatrix{0 & 0 & v\otimes A_{00}}.\end{equation}ano Then obviously we have $\widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3 =v\otimes \bmatrix{A_{20} & A_{02} & A_{11} & A_{10} & A_{01} & A_{00}},$ so by Lemma \ref{lem:sec2} $L(\lam, \mu)$ is an $\mathcal{M}$-pre-image of $v\otimes Q(\lam, \mu).$ The set of all $\mathcal{M}$-preimages of $v\otimes Q(\lam, \mu)$ is $L(\lam, \mu)+ \mbox{Ker}\mathcal{M}$, so all that remains is to compute $\mbox{Ker}\mathcal{M}$. Further by Lemma \ref{lem:sec2} $\mbox{Ker}\mathcal{M}$ contains $L(\lam, \mu)= \lam \widehat{A}_1 + \mu \widehat{A}_2 + \widehat{A}_3$ that satisfies $\widehat{A}_1 \boxplus \widehat{A}_2 \boxplus \widehat{A}_3 = 0.$ The definition of ``box-addition" implies that $\widehat{A}_1, \widehat{A}_2, \widehat{A}_3 $ are of the following form \begin{equation}ano \widehat{A}_1 &=& \bmatrix{0 & -Y_1 & -Z_1}\\ \widehat{A}_2 &=& \bmatrix{Y_1 & 0 & 0 }\\ \widehat{A}_3 &=& \bmatrix{Z_1 & Z_2 & 0}\end{equation}ano where $Y_1, Z_1, Z_2 \in \C^{3n\times n}$ are arbitrary. This completes the proof. 
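\noin The parametrisation given in the theorem can be exercised numerically in the same spirit. The sketch below (illustrative only; random blocks, with the ansatz vector $v=[1 \,\, 1 \,\, 2]^T$ of the example above) assembles $L(\lam,\mu)$ from arbitrary $Y_1, Z_1, Z_2$ and confirms the defining identity $L(\lam,\mu)(\Lambda\otimes I_n)=v\otimes Q(\lam,\mu)$ at a random point.
\begin{verbatim}
# Illustrative check (random data) of the theorem's parametrisation:
# arbitrary Y1, Z1, Z2 and an ansatz vector v give an element of L(Q).
import numpy as np

rng = np.random.default_rng(2)
n = 2
A20, A11, A02, A10, A01, A00 = (rng.standard_normal((n, n)) for _ in range(6))
I = np.eye(n)
v = np.array([[1.0], [1.0], [2.0]])                 # ansatz vector
Y1, Z1, Z2 = (rng.standard_normal((3*n, n)) for _ in range(3))

A1h = np.hstack([np.kron(v, A20), -Y1 + np.kron(v, A11), -Z1 + np.kron(v, A10)])
A2h = np.hstack([Y1, np.kron(v, A02), -Z2 + np.kron(v, A01)])
A3h = np.hstack([Z1, Z2, np.kron(v, A00)])

def Q(lam, mu):
    return lam**2*A20 + mu**2*A02 + lam*mu*A11 + lam*A10 + mu*A01 + A00

lam, mu = rng.standard_normal(2)
L = lam*A1h + mu*A2h + A3h
Lam = np.array([[lam], [mu], [1.0]])
print(np.allclose(L @ np.kron(Lam, I), np.kron(v, Q(lam, mu))))
\end{verbatim}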
\begin{equation}gin{example}\label{exp2} In Example \ref{exp1} we achieve the linear two-parameter polynomial $L(\lam,\mu)\in\L(Q(\lam,\mu))$ by choosing $$Y_1:=\bmatrix{-A_{20} \\ -A_{00}+A_{11} \\ A_{02}}, Z_1:=\bmatrix{-A_{01} \\ A_{10} \\ -I + 2A_{10}}, Z_2:=\bmatrix{0 \\ A_{01} \\ A_{01}}.$$ The \textit{standard} linearization $L(\lam,\mu)\in\L(Q(\lam,\mu))$ is achieved by choosing $$Y_1:=\bmatrix{0 \\ 0 \\ 0}, Z_1:=\bmatrix{A_{10} \\ 0 \\ -I}, Z_2:=\bmatrix{A_{01} \\ -I \\ 0}.$$ \end{example} \begin{equation}gin{corollary} Dimension of $\L(Q(\lam,\mu))= 9n^2 + 3.$ \end{corollary} \begin{equation}gin{remark} For quadratic one-parameter matrix polynomial \begin{equation}gin{equation}\label{def:pol} P(\lam) = \lam^2 A_{20} + \lam A_{10} + A_{00}, A_{i0}\in\C^{n\times n}, i=0,1,2, \end{equation} a vector space $\L_1(P)$ of matrix pencils of the form $L(\lam)=X+\lam Y \in\C^{2n\times 2n}$ is obtained in \cite{mackey3}. Setting $\mu=0$ in $Q(\lam,\mu)$ we have $Q(\lam,0)=P(\lam).$ Then from the constructions of linear two-parameter polynomials given in Theorem \ref{thm:charlin} it is easy to check that $\L(Q(\lam,\mu)) = \L_1(P).$ In fact, if $\mu =0$ then $\L(Q(\lam, \mu))$ contains matrix pencils $L(\lam)=\lam \widehat{A}_1 + \widehat{A}_3 \in \C^{2n\times 2n}$ where $$\widehat{A}_1 = \bmatrix{v\otimes A_{20} & -Z_1+v\otimes A_{10}}, \widehat{A}_3 = \bmatrix{Z_1 & v\otimes A_{00}},$$ $v\in\C^2$ and $Z_1\in\C^{2n\times n}$ is arbitrary. Thus we obtain the same vector space of matrix pencils obtained in \cite{mackey3} for a given quadratic one-parameter matrix polynomial $Q(\lam, 0)=\lam^2 A_{20} + \lam A_{10} + A_{00} = P(\lam).$ \end{remark} \subsection{Construction of linearizations} It is not very clear that whether all linear two-parameter matrix polynomials in the space $\L(Q(\lam,\mu))$ are linearizations of $Q(\lam, \mu).$ For example, consider any $L(\lam,\mu)\in \L(Q(\lam,\mu))$ corresponding to \textit{ansatz} vector $v = 0$. Thus given a quadratic two-parameter matrix polynomial $Q(\lam,\mu)$ we need to identify which $L(\lam,\mu)$ in $\L(Q(\lam,\mu))$ are linearizations. We begin with a result concerning the special case of the \textit{ansatz} vector $v=\alpha e_1$ where $e_1=\bmatrix{1&0&0}^T$ and $0\neq\alpha\in\C.$ \begin{equation}gin{theorem}\label{thm:linearization} Let $Q(\lam, \mu)=\lam^2 A_{20} + \lam\mu A_{11} + \mu^2 A_{02} + \lam A_{10} + \mu A_{01} + A_{00}$ be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order $n\times n$. 
Suppose $L(\lam, \mu)=\lam\widehat{A}_1 + \mu\widehat{A}_2 + \widehat{A}_3 \in \L(Q(\lam, \mu))$ with respect to the ansatz vector $v=\alpha e_1\in\C^3,$ where \begin{equation}ano \widehat{A}_1 &=& \bmatrix{\alpha e_1\otimes A_{20} & -Y_1+\alpha e_1\otimes A_{11} & -Z_1+\alpha e_1\otimes A_{10}}\\ \widehat{A}_2 &=& \bmatrix{Y_1 & \alpha e_1\otimes A_{02} & -Z_2+\alpha e_1\otimes A_{01} }\\ \widehat{A}_3 &=& \bmatrix{Z_1 & Z_2 & \alpha e_1\otimes A_{00}},\end{equation}ano $Y_1=\bmatrix{Y_{11} \\ 0 \\ 0}, Z_1=\bmatrix{Z_{11} \\ Z_{21} \\ Z_{31}}, Z_2=\bmatrix{Z_{12} \\ Z_{22} \\ Z_{32}}\in \C^{3n\times n}, \,\, \mbox{det}\bmatrix{Z_{21} & Z_{22} \\ Z_{31} & Z_{32}} \neq 0.$ Then $L(\lam,\mu)$ is a linearization of $Q(\lam, \mu).$ \end{theorem} \noin\pf By Theorem \ref{thm:charlin}, any linear two-parameter matrix polynomial $L(\lam, \mu) = \lam\widehat{A}_1 + \mu\widehat{A}_2 + \widehat{A}_3 \in \L(Q(\lam,\mu))$ corresponding to the \textit{ansatz} vector $v=\alpha e_1$ is of the form \begin{equation}gin{small} \begin{equation}ano L(\lam,\mu) = && \lam \bmatrix{\alpha A_{20} & -Y_{11}+\alpha A_{11} & -Z_{11}+\alpha A_{10} \\ 0 & -Y_{21} & -Z_{21} \\ 0 & -Y_{31} & -Z_{31}} + \mu \bmatrix{Y_{11} & \alpha A_{02} & -Z_{12}+\alpha A_{01} \\ Y_{21} & 0 & -Z_{22} \\ Y_{31} & 0 & -Z_{32}} \\ && + \bmatrix{Z_{11} & Z_{12} & \alpha A_{00} \\ Z_{21} & Z_{22} & 0 \\ Z_{31} & Z_{32} & 0}. \end{equation}ano \end{small} Thus we have $$L(\lam,\mu) = \bmatrix{W_1(\lam, \mu) & W_2(\lam, \mu) & W_3(\lam, \mu) \\ \mu Y_{21}+Z_{21} & -\lam Y_{21}+Z_{22} & -\lam Z_{21}-\mu Z_{22} \\ \mu Y_{31}+Z_{31} & -\lam Y_{31}+Z_{32} & -\lam Z_{31}-\mu Z_{32}}$$ where $W_1(\lam, \mu) = \alpha \lam A_{20}+\mu Y_{11}+Z_{11}, W_2(\lam, \mu) = \alpha \mu A_{02}+\lam \alpha A_{11}-\lam Y_{11}+Z_{12}$ and $W_3(\lam, \mu) = \alpha\lam A_{10}-\lam Z_{11}+\alpha \mu A_{01}-\mu Z_{12}+\alpha A_{00}.$ Define $$E(\lam,\mu) = \bmatrix{\frac{\lam}{\alpha} I & I & 0 \\ \frac{\mu}{\alpha} I & 0 & I \\ \frac{1}{\alpha} I & 0 &0 }.$$ Consequently, we have $$L(\lam,\mu)E(\lam,\mu) = \bmatrix{Q(\lam,\mu) & W_1(\lam,\mu) & W_2(\lam,\mu) \\ 0 & \mu Y_{21}+Z_{21} & -\lam Y_{21}+Z_{22} \\ 0 & \mu Y_{31}+Z_{31} & -\lam Y_{31}+Z_{32}}.$$ Setting $Y_{21}=0=Y_{31}$ we have $L(\lam,\mu)E(\lam,\mu) = \bmatrix{Q(\lam,\mu) & W(\lam,\mu) \\ 0 & Z}$ where $W(\lam,\mu)=\bmatrix{W_1(\lam,\mu) & W_2(\lam,\mu)}\in\C^{n\times 2n}, Z=\bmatrix{Z_{21} & Z_{22} \\ Z_{31} & Z_{32}}\in\C^{2n\times 2n}.$ Since $Z$ is nonsingular, we define $$F(\lam,\mu) = \bmatrix{I & -W(\lam,\mu)Z^{-1} \\ 0 & Z^{-1}}.$$ Then we have $$F(\lam,\mu)L(\lam,\mu)E(\lam,\mu)= \bmatrix{ Q(\lam,\mu) & 0 \\ 0 & I_{2n}}.$$ Note that both $E(\lam, \mu)$ and $F(\lam, \mu)$ are unimodular polynomials. Hence we have $\mbox{det}L(\lam,\mu)=\gamma\mbox{det}Q(\lam,\mu)$ for some nonzero $\gamma\in\C.$ Thus $L(\lam,\mu)$ is a linearization. This completes the proof. Let $Q(\lam,\mu)$ quadratic two-parameter matrix polynomial and $L(\lam, \mu)\in\L(Q(\lam, \mu))$ corresponding to an \textit{ansatz} vector $0\neq v\in\C^3.$ Then the following is a procedure for determining a set of linearizations of $Q(\lam, \mu).$\\ \noin\textbf{Procedure to determine linearizations in $\L(Q(\lam,\mu))$:} \begin{equation}gin{enumerate} \item Suppose $Q(\lam,\mu)$ is a quadratic two-parameter matrix polynomial and $L(\lam,\mu) = \lam¸\widehat{A}_1 +\mu \widehat{A}_2 + \widehat{A}_3 \in \L(Q(\lam,\mu))$ corresponding to \textit{ansatz} vector $v\in\C^3$ i.e. 
$L(\lam,\mu) (\Lambda\otimes I_n) = v\otimes Q(\lam,\mu).$ \item Select any nonsingular matrix $M=\bmatrix{m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}}$ such that $Mv = \alpha e_1\in \C^3, \alpha\neq 0.$ A list of nonsingular matrices $M$ depending on the entries of $v$ is given in the Appendix. \item Apply the corresponding block-transformation $M \otimes I_n$ to $L(\lam,\mu).$ Then we have $\widetilde{L}(\lam, \mu)= (M\otimes I_n)L(\lam,\mu)= \lam \widetilde{A_1} + \mu \widetilde{A_2} + \widetilde{A_3}$ such that \begin{equation}ano \widetilde{A_1} &=& \bmatrix{\alpha e_1\otimes A_{20} & -\widetilde{Y_1}+\alpha e_1\otimes A_{11} & -\widetilde{Z_1}+\alpha e_1\otimes A_{10}}\\ \widetilde{A_2} &=& \bmatrix{\widetilde{Y_1} & \alpha e_1\otimes A_{02} & -\widetilde{Z_2}+\alpha e_1\otimes A_{01} }\\ \widetilde{A_3} &=& \bmatrix{\widetilde{Z_1} & \widetilde{Z_2} & \alpha e_1\otimes A_{00}}\end{equation}ano where \begin{equation}ano \widetilde{Y_1} &=& (M\otimes I_n)Y_1=(M\otimes I_n)\bmatrix{Y_{11}\\ 0\\ 0} = \bmatrix{m_{11}Y_{11} \\ m_{21}Y_{11} \\ m_{31}Y_{11}} \\ \widetilde{Z_1} &=& (M\otimes I_n)Z_1=(M\otimes I_n)\bmatrix{Z_{11}\\ Z_{21}\\ Z_{31}} =\bmatrix{m_{11}Z_{11} +m_{12}Z_{21} + m_{13}Z_{31} \\ m_{21}Z_{11} +m_{22}Z_{21} + m_{23}Z_{31} \\ m_{31}Z_{11} +m_{32}Z_{21} + m_{33}Z_{31}} \\ \widetilde{Z_2}&=&(M\otimes I_n)Z_2 = (M\otimes I_n)\bmatrix{Z_{12}\\ Z_{22}\\ Z_{32}}=\bmatrix{m_{11}Z_{12} +m_{12}Z_{22} + m_{13}Z_{32} \\ m_{21}Z_{12} +m_{22}Z_{22} + m_{23}Z_{32} \\ m_{31}Z_{12} +m_{32}Z_{22} + m_{33}Z_{32}}\end{equation}ano are arbitrary. \item For $\widetilde{L}(\lam, \mu)$ to be linearization we need to choose $Y_1, Z_1, Z_2$ as follows. If $m_{21}=m_{31}=0$ then choose $Y_{11}$ arbitrary; otherwise choose $Y_{11}=0.$ Further we need to choose $Z_1=\bmatrix{Z_{11}\\Z_{21}\\Z_{31}}, Z_2=\bmatrix{Z_{12}\\Z_{22}\\Z_{32}}$ for which \begin{equation}gin{equation}\label{eqn:cond}\mbox{det}\bmatrix{m_{21}Z_{11} +m_{22}Z_{21} + m_{23}Z_{31} & m_{21}Z_{12} +m_{22}Z_{22} + m_{23}Z_{32} \\ m_{31}Z_{11} +m_{32}Z_{21} + m_{33}Z_{31} & m_{31}Z_{12} +m_{32}Z_{22} + m_{33}Z_{32}}\neq 0.\end{equation} From the construction of $M$ given in the Appendix it is easy to check that we can always choose suitable $Z_1, Z_2$ for which the condition (\ref{eqn:cond}) is satisfied. \end{enumerate} \section{Linearization of two-parameter quadratic eigenvalue problem} The quadratic two-parameter eigenvalue problem is concerned with finding a pair $(\lam,\mu)\in\C\times\C$ and nonzero vectors $x_i\in\C^{n_i}$ for which \begin{equation}\label{defn:qevp2} Q_i(\lam,\mu)x_i = 0, i=1,2, \end{equation} where \begin{equation}\label{def:q2pp} Q_i(\lam,\mu)=A_i + \lam B_i + \mu C_i + \lam^2 D_i + \lam\mu E_i + \mu^2 F_i, \end{equation} $A_i, B_i, \hdots, F_i\in\C^{n_i\times n_i}.$ The pair $(\lam,\mu)$ is called an eigenvalue of (\ref{defn:qevp2}) and $x_1\otimes x_2$ is called the corresponding eigenvector. The spectrum of a quadratic two-parameter eigenvalue problem is the set \begin{equation}\sigma_Q :=\left\{ (\lam,\mu)\in\C\times\C : \mbox{det}Q_i(\lam,\mu)=0, i=1,2\right\}.\end{equation} In the generic case, we observe that (\ref{defn:qevp2}) has $4n_1n_2$ eigenvalues by using the following theorem. \begin{equation}gin{theorem}\label{thm:Bezout}(Bezout's Theorem, \cite{CoxLiOs97}) Let $f(x,y)=g(x,y)=0$ be a system of two polynomial equations in two unknowns. 
If it has only finitely many common complex zeros $(x,y)\in\C\times\C,$ then the number of those zeros is at most $\mbox{degree}(f)\cdot\mbox{degree}(g).$ \end{theorem} The usual approach to solving (\ref{defn:qevp2}) is to linearize it as a two-parameter eigenvalue problem given by \begin{equation}gin{equation}\label{defn:qlin2} \left\{ \begin{equation}gin{array}{ll} L_1(\lam,\mu)w_1 = (A^{(1)}) + \lam B^{(1)} + \mu C^{(1)})w_1=0 & \hbox{} \\ L_2(\lam,\mu)w_2 = (A^{(2)}) + \lam B^{(2)} + \mu C^{(2)})w_2=0 & \hbox{} \end{array} \right. \end{equation} where $A^{(i)}, B^{(i)}, C^{(i)} \in \C^{m_i\times m_i}, m_i\geq 2n_i, i=1,2,$ and $w_i = \Lambda \otimes x_i. $ A pair $(\lam,\mu)$ is called an eigenvalue of (\ref{defn:qlin2}) if $L_i(\lam, \mu)w_i = 0$ for a nonzero vector $w_i$ for $i =1, 2$, and $w_1\otimes w_2$ is the corresponding eigenvector. Thus the spectrum of the linearized two-parameter eigenvalue problem is given by \begin{equation}\sigma_L :=\left\{ (\lam,\mu)\in\C\times\C : \mbox{det}L_i(\lam,\mu)=0, i=1,2\right\}.\end{equation} Therefore, in the generic case, the problem (\ref{defn:qlin2}) has $m_1m_2\geq 4n_1n_2$ eigenvalues. A standard approach to solve a two-parameter eigenvalue problem (\ref{defn:qlin2}) is by converting it into a coupled generalized eigenvalue problem given by \begin{equation} \Delta_1 z=\lam \Delta_0 z \,\, \mbox{and} \,\, \Delta_2 z=\mu \Delta_0 z\end{equation} where $z=w_1\otimes w_2$ and \begin{equation}ano \Delta_0 &=& B^{(1)} \otimes C^{(2)} - C^{(1)}\otimes B^{(2)} \\ \Delta_1 &=& C^{(1)}\otimes A^{(2)} - A^{(1)}\otimes C^{(2)} \\ \Delta_2 &=& A^{(1)}\otimes B^{(2)} - B^{(1)}\otimes A^{(2)}.\end{equation}ano The two-parameter eigenvalue problem is called singular (resp. nonsingular) if $\Delta_0$ is singular (resp. nonsingular), see \cite{mupl08}. As mentioned earlier, we are interested in finding linear two-parameter polynomials $L_i(\lam,\mu)$ for a given quadratic two-parameter eigenvalue problem (\ref{defn:qevp2}) such that $\sigma_Q=\sigma_L.$ Thus we have the following definition. \begin{equation}gin{definition} Let (\ref{defn:qevp2}) be a quadratic two-parameter eigenvalue problem. A two-parameter eigenvalue problem (\ref{defn:qlin2}) is said to be a linearization of (\ref{defn:qevp2}) if $L_i(\lam, \mu)$ is a linearization of $Q_i(\lam, \mu).$ \end{definition} Thus if we consider a linearization of a quadratic two-parameter eigenvalue problem then $\sigma_Q = \sigma_L$ is guaranteed. It is also easy to observe that $x_1\otimes x_2$ is an eigenvector corresponding to an eigenvalue $(\lam, \mu)$ of a quadratic two-parameter eigenvalue problem if and only if $w_1\otimes w_2 $ is an eigenvector corresponding to the eigenvalue $(\lam, \mu)$ of the linearization. Making use of the construction of linearizations for a two-parameter quadratic matrix polynomial described in section 2, we construct linearizations for a quadratic two-parameter eigenvalue problem. \begin{equation}gin{theorem}\label{Thm:q2pp} Let (\ref{defn:qevp2}) be a quadratic two-parameter eigenvalue problem. 
A class of linearizations of (\ref{defn:qevp2}) is given by $$L_i(\lam,\mu)w_i= (A^{(i)} + \lam B^{(i)} + \mu C^{(i)})w_i=0, w_i=\Lambda\otimes x_i, i=1,2,$$ where \begin{eqnarray*} A^{(i)} &=& \bmatrix{Z_1^{(i)} & Z_2^{(i)} & \alpha_ie_1\otimes A_i }, \\ B^{(i)} &=& \bmatrix{\alpha_ie_1\otimes D_i & -Y_1^{(i)} + \alpha_ie_1\otimes E_i & -Z_1^{(i)}+\alpha_ie_1\otimes B_i}, \\ C^{(i)} &=& \bmatrix{Y_1^{(i)} & \alpha_ie_1\otimes F_i & -Z_2^{(i)}+\alpha_ie_1\otimes C_i},\end{eqnarray*} $\alpha_i\neq 0, Y_1^{(i)}=\bmatrix{Y_{11}^{(i)} \\ 0 \\ 0}, Z_1^{(i)} = \bmatrix{Z_{11}^{(i)} \\Z_{21}^{(i)} \\ Z_{31}^{(i)} }, Z_2^{(i)} = \bmatrix{Z_{12}^{(i)} \\Z_{22}^{(i)} \\ Z_{32}^{(i)} } \in \K^{3n_i\times n_i}, \mbox{det}\bmatrix{Z_{21}^{(i)} & Z_{22}^{(i)} \\ Z_{31}^{(i)} & Z_{32}^{(i)}} \neq 0.$ \end{theorem} \pf Consider the linearizations $L_i(\lam,\mu)= A^{(i)} + \lam B^{(i)} + \mu C^{(i)}$ of $Q_i(\lam,\mu)$ associated with the \textit{ansatz} vector $0\neq \alpha_ie_1\in \C^3, i=1,2,$ given by Theorem \ref{thm:linearization}. This completes the proof. Now we show that the linearizations for a quadratic two-parameter eigenvalue problem described in Theorem \ref{Thm:q2pp} are singular linearizations. The following theorem plays an important role in the sequel. \begin{theorem}\label{Thm:det} The determinant of a block-triangular matrix is the product of the determinants of the diagonal blocks.\end{theorem} Now we have the following result. \begin{theorem} The linearizations for (\ref{defn:qevp2}) derived in Theorem \ref{Thm:q2pp} are singular linearizations. \end{theorem} \pf Consider the linearizations $L_i(\lam,\mu)w_i= (A^{(i)} + \lam B^{(i)} + \mu C^{(i)})w_i = 0, i=1,2,$ of $Q_i(\lam,\mu)$ where \begin{eqnarray*} B^{(i)} = \bmatrix{\alpha_iD_i & -Y_{11}^{(i)} + \alpha_i E_i & -Z_{11}^{(i)} + \alpha_i B_i \\ 0 & 0 & -Z_{21}^{(i)} \\ 0 & 0 & -Z_{31}^{(i)} }, C^{(i)} = \bmatrix{Y_{11}^{(i)}& \alpha_i F_i & -Z_{12}^{(i)} + \alpha_i C_i \\ 0 & 0 & -Z_{22}^{(i)} \\ 0 & 0 & -Z_{32}^{(i)} }. \end{eqnarray*} Consequently we have \begin{eqnarray*} \Delta_0 &=& B^{(1)}\otimes C^{(2)} - C^{(1)}\otimes B^{(2)} \\ &=& \bmatrix{\alpha_1D_1\otimes C^{(2)} & (-Y_{11}^{(1)} + \alpha_1 E_1)\otimes C^{(2)} & (-Z_{11}^{(1)} + \alpha_1 B_1)\otimes C^{(2)} \\ 0 & 0 & -Z_{21}^{(1)}\otimes C^{(2)} \\ 0 & 0 & -Z_{31}^{(1)}\otimes C^{(2)} } \\ && - \bmatrix{Y_{11}^{(1)}\otimes B^{(2)} & \alpha_1 F_1\otimes B^{(2)} & (-Z_{12}^{(1)} + \alpha_1 C_1)\otimes B^{(2)} \\ 0 & 0 & -Z_{22}^{(1)}\otimes B^{(2)}\\ 0 & 0 & -Z_{32}^{(1)}\otimes B^{(2)} }. \end{eqnarray*} Observe that $\Delta_0$ is a block-triangular matrix in which one of the diagonal blocks is $0.$ Hence by Theorem \ref{Thm:det} we have $\mbox{det}\Delta_0 =0.$ This completes the proof. \begin{remark} Note that, given a quadratic two-parameter eigenvalue problem (\ref{defn:qevp2}), we chose linearizations $L_i(\lam,\mu)$ of $Q_i(\lam,\mu)$ associated with the \textit{ansatz} vector $0\neq \alpha_ie_1\in\C^3$ and constructed linearizations $L_i(\lam,\mu)w_i=0, w_i=\Lambda\otimes x_i,$ of (\ref{defn:qevp2}). However, we can derive a large class of singular linearizations by choosing linearizations $L_i(\lam,\mu)$ of $Q_i(\lam,\mu)$ associated with an \textit{ansatz} vector $0\neq v_i\in\C^3$ as described in section $2$.
\end{remark} \section{Conclusions} Given a quadratic two-parameter matrix polynomial $Q(\lam,\mu)$, we construct a vector space of linear two-parameter matrix polynomials and identify a set of linearizations of $Q(\lam,\mu).$ We also describe the construction of each of these linearizations. Finally, using these linearizations we determine a class of singular linearizations for a quadratic two-parameter eigenvalue problem.\\ \noin\textbf{Acknowledgement} The author wishes to thank Bor Plestenjak and Rafikul Alam for their stimulating comments that have significantly improved the quality of the results. Thanks are due to the referees for their constructive remarks.\\\\ \noin \textbf{Appendix} Let $0\neq \alpha\in\C$ and $e_1=\bmatrix{1 \\0 \\0}.$ Given a vector $v=\bmatrix{a\\b\\c}\in\C^3$ we can always pick a nonsingular matrix $M\in\C^{3\times 3}$ for which $Mv=\alpha e_1$ as follows. $$M=\left\{ \begin{array}{ll} \small{\bmatrix{\alpha/a & 0 & 0 \\ 1/a & -1/b & 0 \\ 1/a & 0 & -1/c}}, & \hbox{if $a\neq 0, b\neq 0, c\neq 0$} \\ \small{\bmatrix{0 & \alpha/b & 0 \\ 0 & -1/b & 1/c \\ 1 & 0 & 0}}, & \hbox{if $a=0, b\neq 0, c\neq 0$} \\ \small{\bmatrix{1 & 1 & \alpha/c \\ 1 & 1 & 0 \\ 0 & 1 & 0}}, & \hbox{if $a= 0, b= 0, c\neq 0$}\\ \small{\bmatrix{\alpha/a & 0 & 0 \\ 0 & 1 & 0 \\ -1/a & 0 & 1/c}}, & \hbox{if $a\neq 0, b= 0, c\neq 0$}\\ \small{\bmatrix{\alpha/a & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1}}, & \hbox{if $a\neq 0, b= 0, c= 0$}\\ \small{\bmatrix{\alpha/a & 0 & 1 \\ 1/a & -1/b & 1 \\ -1/a & 1/b & 0}}, & \hbox{if $a\neq 0, b\neq 0, c= 0$}\\ \small{\bmatrix{1 & \alpha/b & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 1}}, & \hbox{if $a= 0, b\neq 0, c= 0$}\\ \small{\bmatrix{\alpha/a & 0 & 0 \\ 1/a & 0 & -1/c \\ 0 & 1 & 0}}, & \hbox{if $a\neq 0, b= 0, c\neq 0.$} \end{array} \right.$$ \begin{thebibliography}{abcd} \bibitem{atk:72} {\sc F. V. Atkinson}, {\em Multiparameter eigenvalue problems}, {Academic Press, New York, 1972.} \bibitem{chennett95} {\sc J. Chen, G. Gu and C.N. Nett}, {\it A new method for computing delay margins for stability of linear delay systems,} {Systems Control Lett. 26 (1995) pp. 107-117.} \bibitem{CoxLiOs97}{\sc D. Cox, J. Little and D. O'Shea}, {\it Using algebraic geometry,} {Springer Verlag, 1997.} \bibitem{funich06} {\sc P. Fu, S.-I. Niculescu and J. Chen,} {\it Stability of linear neutral time-delay systems: exact conditions via matrix pencil solutions,} {IEEE Trans. Automat. Control 51 (2006) pp. 1063-1069.} \bibitem{ErOlFa07} {\sc A.F. Ergenc, N. Olgac, H. Fazelina,} {\it Extended Kronecker summation for cluster treatment of LTI systems with multiple delays,} {SIAM J. Control Optim. 46 (2007) pp. 143-155.} \bibitem{guni03} {\sc K. Gu and S.-I. Niculescu,} {\it Survey on recent results in the stability and control of time-delay systems,} {J. Dynam. Syst.-T. ASME 125 (2003) pp. 158-165.} \bibitem{haints85} {\sc J. Hale, E.F. Infante and F.-S. Tsen,} {\it Stability in linear delay equations,} {J. Math. Anal. Appl. 105 (1985) pp. 533-555.} \bibitem{hochkopl05} {\sc M.E. Hochstenbach, T. Ko$\breve{s}$ir and B. Plestenjak,} {\it A Jacobi–Davidson type method for the two-parameter eigenvalue problem,} {SIAM J. Matrix Anal. Appl. 26 (2005) pp. 477-497.} \bibitem{HocMuPl10}{\sc M. E. Hochstenbach, A. Muhic, B. Plestenjak,} {\it On linearizations of the quadratic two-parameter eigenvalue problems,} March 2010, accepted in Linear Algebra Appl. \bibitem{jar08} {\sc E. Jarlebring,} {\it On critical delays for linear neutral delay systems,} {Proc. Europ. Contr.
Conf, 2007.} \bibitem{jarthesis08} {\sc E. Jarlebring,} {\em The Spectrum of Delay-Differential Equations: Numerical Methods, Stability and Perturbation,} {Ph.D. thesis, TU Braunschweig, 2008.} \bibitem{jar09} {\sc E. Jarlebring,} {\it Critical delays and polynomial eigenvalue problems,} {J. Comput. Appl. Math. 224 (2009) pp. 296-306.} \bibitem{jarhocs09} {\sc E. Jarlebring and M.E. Hochstenbach}, {\it Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations}, { Linear Algebra Appl. 431(2009) pp. 369-380.} \bibitem{kamen80} {\sc E.W. Kamen,} {\it On the relationship between zero criteria for two-variable polynomials and asymptotic stability of delay differential equations,} {IEEE Trans. Automat. Control 25 (1980) pp. 983-984.} \bibitem{kharit01} {\sc V. Kharitonov,} {\it Robust stability analysis of time delay systems: a survey,} {Annual Rev. Control 23 (1999) pp. 185-196.} \bibitem{Kha97} {\sc V. B. Khazanov,} {\it The multiparameter eigenvalue problem: Jordan vector semilattices} {Journal of Mathematical Sciences 86(1997) pp. 2944-2949.} \bibitem{Kha07} {\sc V. B. Khazanov,} {\it To solving spectral problems for multiparameter polynomial matrices} {Journal of Mathematical Sciences 141(2007) pp. 1690-1700.} \bibitem{Kosir:94} {\sc T. Ko$\breve{s}$ir}, {\it Finite dimensional multiparameter spectral theory: the nonderogatory case}, {Linear Algebra Appl., 212-213 (1994) pp. 45–70}. \bibitem{Kub98} {\sc V. N. Kublanovskaya,} {\it An approach to solving multiparameter algebraic problems} {Journal of Mathematical Sciences 89(1998) pp. 1715-1749.} \bibitem{Lou01} {\sc J. Louisell,} {\it A matrix method for determining the imaginary axis eigenvalues of a delay system,} {IEEE Trans. Automat. Control 46 (2001) pp. 2008-2012.} \bibitem{mackey3} {\sc D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann,} {\it Vector spaces of linearizations for matrix polynomials,}{ SIAM J. Matrix Anal. Appl., 28(2006), pp.971-1004.} \bibitem{michni07} {\sc W. Michiels and S.-I. Niculescu,} {\em Stability and Stabilization of Time-Delay Systems: An Eigenvalue-Based Approach,} {Advances in Design and Control, vol. 12, SIAM Publications, Philadelphia, 2007.} \bibitem{mupl081}{\sc A. Muhi$\breve{c}$ and B. Plestenjak}, {\it On the singular two-parameter eigenvalue problem,} {Electron. J. Linear Algebra 18 (2009) pp. 420-437}. \bibitem{mupl08}{\sc A. Muhi$\breve{c}$ and B. Plestenjak,} {\it On the quadratic two-parameter eigenvalue problem and its linearization,} {Linear Algebra Appl. 432 (2010) pp. 2529-2542}. \bibitem{nicul98} {\sc S.-I. Niculescu,} {\it Stability and hyperbolicity of linear systems with delayed state: a matrix-pencil approach,} {IMA J. Math. Control Inform. 15 (1998) pp. 331-347.} \bibitem{niculfych06} {\sc S.-I. Niculescu, P. Fu and J. Chen,} {\it On the stability of linear delay-differential algebraic systems: exact conditions via matrix pencil solutions,} {Proceedings of the 45th IEEE Conference on Decision and Control, 2006.} \bibitem{fiedler1}{\sc F. De Ter$\acute{A}$n, F. M. Dopico, and D. S. Mackey,} {\it Fiedler companion linearizations for rectangular matrix polynomials,} submitted. \bibitem{fiedler2}{\sc F. De Ter$\acute{A}$n, F.M. Dopico and D. S. Mackey}, {\it Linearizations of singular matrix polynomials and the recovery of minimal indices,} { Electron. J. of Linear Algebra 18 (2009), pp. 371-402. } \end{thebibliography} \end{document}
\begin{document} \baselineskip 16pt \title{\textbf{On the $\pi\mathfrak{F}$-norm and the $\mathfrak{H}$-$\mathfrak{F}$-norm of a finite group}\thanks{Research is supported by an NNSF grant of China (Grant \#11371335) and the Research Fund for the Doctoral Program of Higher Education of China (Grant 20113402110036).}} \author{Xiaoyu Chen, Wenbin Guo\thanks{Corresponding author.}\\ {\small Department of Mathematics, University of Science and Technology of China,}\\ {\small Hefei 230026, P. R. China}\\ {\small E-mails: [email protected], [email protected]} } \date{} \maketitle \begin{abstract} Let $\mathfrak{H}$ be a Fitting class and $\mathfrak{F}$ a formation. We call a subgroup $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$ of a finite group $G$ the $\mathfrak{H}$-$\mathfrak{F}$-norm of $G$ if $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$ is the intersection of the normalizers of the products of the $\mathfrak{F}$-residuals of all subgroups of $G$ and the $\mathfrak{H}$-radical of $G$. Let $\pi$ denote a set of primes and let $\mathfrak{G}_\pi$ denote the class of all finite $\pi$-groups. We call the subgroup $\mathcal{N}_{\mathfrak{G}_\pi,\mathfrak{F}}(G)$ of $G$ the $\pi\mathfrak{F}$-norm of $G$. A normal subgroup $N$ of $G$ is called $\pi\mathfrak{F}$-hypercentral in $G$ if either $N=1$ or $N>1$ and every $G$-chief factor below $N$ of order divisible by at least one prime in $\pi$ is $\mathfrak{F}$-central in $G$. Let $Z_{\pi\mathfrak{F}}(G)$ denote the $\pi\mathfrak{F}$-hypercentre of $G$, that is, the product of all $\pi\mathfrak{F}$-hypercentral normal subgroups of $G$. In this paper, we study the properties of the $\mathfrak{H}$-$\mathfrak{F}$-norm, especially of the $\pi\mathfrak{F}$-norm of a finite group $G$. In particular, we investigate the relationship between the $\pi'\mathfrak{F}$-norm and the $\pi\mathfrak{F}$-hypercentre of $G$.\par \end{abstract} \renewcommand{\empty}{\empty} \footnotetext{Keywords: norm, $\pi\mathfrak{F}$-norm, $\pi\mathfrak{F}$-hypercentre, $\mathfrak{F}$-maximal subgroups, $\mathfrak{F}$-critical group.} \footnotetext{Mathematics Subject Classification (2000): 20D10, 20D15.} \section{Introduction} \noindent All groups considered in this paper are finite, and all classes of groups $\mathfrak{X}$ mentioned are non-empty. $G$ always denotes a group, $p$ denotes a prime, $\pi$ denotes a set of primes, and $\mathbb{P}$ denotes the set of all primes. Also, let $\pi(G)$ denote the set of all prime divisors of the order of $G$, and let $\pi(\mathfrak{X})=\bigcup\{\pi(G):G\in \mathfrak{X}\}$ for a class of groups $\mathfrak{X}$.\par Recall that a class of groups $\mathfrak{F}$ is called a formation if $\mathfrak{F}$ is closed under taking homomorphic images and subdirect products. A formation $\mathfrak{F}$ is said to be saturated if $G\in \mathfrak{F}$ whenever $G/\Phi(G)\in \mathfrak{F}$. The $\mathfrak{F}$-residual of $G$, denoted by $G^\mathfrak{F}$, is the smallest normal subgroup $N$ of $G$ with $G/N\in \mathfrak{F}$. The formation product $\mathfrak{X}\circ\mathfrak{F}$ of a class of groups $\mathfrak{X}$ and a formation $\mathfrak{F}$ is the class of all groups $G$ such that $G^\mathfrak{F}\in\mathfrak{X}$. A class of groups $\mathfrak{H}$ is called a Fitting class if $\mathfrak{H}$ is closed under taking normal subgroups and products of normal $\mathfrak{H}$-subgroups. The $\mathfrak{H}$-radical of $G$, denoted by $G_\mathfrak{H}$, is the maximal normal $\mathfrak{H}$-subgroup of $G$.
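As a small illustrative example (a standard fact, added here only to fix ideas), take $G=S_3$, the symmetric group of degree $3$, whose only proper nontrivial normal subgroup is $A_3$. For the formation $\mathfrak{F}$ of all abelian groups and the Fitting class $\mathfrak{H}$ of all nilpotent groups, $$S_3^{\,\mathfrak{F}}=[S_3,S_3]=A_3 \qquad \mbox{and} \qquad (S_3)_{\mathfrak{H}}=F(S_3)=A_3,$$ since $S_3/A_3$ is abelian while $S_3$ itself is not, and $A_3$ is the largest normal nilpotent subgroup of $S_3$.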
The Fitting product $\mathfrak{H}\diamond\mathfrak{X}$ of a Fitting class $\mathfrak{H}$ and a class of groups $\mathfrak{X}$ is the class of all groups $G$ such that $G/G_\mathfrak{H}\in \mathfrak{X}$. A class of groups $\mathfrak{B}$ is called a Fitting formation if $\mathfrak{B}$ is both a formation and a Fitting class. Note that for a Fitting formation $\mathfrak{B}$, a formation $\mathfrak{F}$ and a Fitting class $\mathfrak{H}$, $\mathfrak{H}\diamond(\mathfrak{B}\circ\mathfrak{F})=(\mathfrak{H}\diamond\mathfrak{B})\circ\mathfrak{F}$ always holds, and we denote it by $\mathfrak{H}\diamond\mathfrak{B}\circ\mathfrak{F}$.\par The class of the groups of order $1$ is denoted by $1$, and the class of all finite groups is denoted by $\mathfrak{G}$. We use $\mathfrak{S}$ (resp. $\mathfrak{N}$, $\mathfrak{U}$, $\mathfrak{A}$) to denote the class of finite solvable (resp. nilpotent, supersolvable, abelian) groups and $\mathfrak{S}_\pi$ (resp. $\mathfrak{N}_\pi$, $\mathfrak{U}_\pi$) to denote the class of finite $\pi$-solvable (resp. $\pi$-nilpotent, $\pi$-supersolvable) groups. Also, the symbol $\mathfrak{G}_\pi$ denotes the class of all finite $\pi$-groups.\par A formation function $f$ is a local function $f$: $\mathbb{P}\rightarrow\{\mbox{classes of groups}\}$ such that $f(p)$ is a formation for all $p\in\mathbb{P}$. Let $LF(f)$ denote the set of all groups $G$ whose chief factors $L/K$ are all $f$-central in $G$, that is, $G/C_G(L/K)\in f(p)$ for all $p\in \pi(L/K)$. The canonical local definition of a saturated formation $\mathfrak{F}$ is the uniquely determined formation function $F$ such that $\mathfrak{F}=LF(F)$, $F(p)\subseteq \mathfrak{F}$ and $\mathfrak{G}_p\circ F(p)=F(p)$ for all $p\in\mathbb{P}$ (for details, see \cite[Chap. IV]{Doe}).\par Following \cite[Chap. II]{Doe}, for a class of groups $\mathfrak{X}$, we define closure operations as follows: $\mathtt{S}\mathfrak{X}=(G:G\leq H\ \mbox{for some}\ H\in \mathfrak{X})$; $\mathtt{S_n}\mathfrak{X}=(G:G\ \mbox{is subnormal in }H\ \mbox{for some}\ H\in \mathfrak{X})$; $\mathtt{Q}\mathfrak{X}=(G: \mbox{ there exist}\ H\in \mathfrak{X}$ $\mbox{and an epimorphism from}$ $H\ \mbox{onto}\ G)$; $\mathtt{E}\mathfrak{X}=(G:\mbox{there exists a series}$ $\mbox{of subgroups of }G:$ $1=G_0\unlhd G_1\unlhd\cdots\unlhd G_n=G$ $\mbox{with each }G_i/G_{i-1}\in \mathfrak{X})=\bigcup^{\infty}_{r=1}\mathfrak{X}^r$.\par Recall that the norm $\mathcal{N}(G)$ of $G$ is the intersection of the normalizers of all subgroups of $G$, and the Wielandt subgroup $\omega(G)$ of $G$ is the intersection of the normalizers of all subnormal subgroups of $G$. These concepts were introduced by R. Baer \cite{Bae} and H. Wielandt \cite{Wie} in 1934 and 1958, respectively. Much investigation has focused on using the concepts of the norm and the Wielandt subgroup to determine the structure of finite groups (see, for example,\cite{Sch,Bae1,Bae2,Bei,Bry,Cam,Cos,Kap,Orm}).\par Recently, Li and Shen \cite{Li} considered the intersection of the normalizers of the derived subgroups of all subgroups of $G$. Also, in \cite{Gon} and \cite{She}, the authors considered the intersection of the normalizers of the nilpotent residuals of all subgroups of $G$. Furthermore, for a formation $\mathfrak{F}$, Su and Wang \cite{Su} investigated the intersection of the normalizers of the $\mathfrak{F}$-residuals of all subgroups of $G$ and the intersection of the normalizers of the products of the $\mathfrak{F}$-residuals of all subgroups of $G$ and $O_{p'}(G)$. 
As a continuation of the above ideas, we now introduce the notion of $\mathfrak{H}$-$\mathfrak{F}$-norm as follows:\par \noindent\textbf{Definition 1.1.} Let $\mathfrak{H}$ be a Fitting class and $\mathfrak{F}$ a formation. We call a subgroup $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$ of $G$ the \textit{$\mathfrak{H}$-$\mathfrak{F}$-norm} of $G$ if $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$ is the intersection of the normalizers of the products of the $\mathfrak{F}$-residuals of all subgroups of $G$ and the $\mathfrak{H}$-radical of $G$, that is, $$\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)=\bigcap_{H\leq G}N_G(H^{\mathfrak{F}}G_{\mathfrak{H}}).$$ In particular, when $\mathfrak{H}=1$, the subgroup $\mathcal{N}_{1,\mathfrak{F}}(G)$ of $G$ is called the \textit{$\mathfrak{F}$-norm} of $G$, and we denote it by $\mathcal{N}_{\mathfrak{F}}(G)$, that is, $$\mathcal{N}_{\mathfrak{F}}(G)=\bigcap_{H\leq G}N_G(H^\mathfrak{F});$$ when $\mathfrak{H}=\mathfrak{G}_\pi$, the subgroup $\mathcal{N}_{\mathfrak{G}_\pi,\mathfrak{F}}(G)$ of $G$ is called the \textit{$\pi\mathfrak{F}$-norm} of $G$, and we denote it by $\mathcal{N}_{\pi\mathfrak{F}}(G)$, that is, $$\mathcal{N}_{\pi\mathfrak{F}}(G)=\bigcap_{H\leq G}N_G(H^\mathfrak{F}O_\pi(G)).$$\par \noindent\textbf{Definition 1.2.} Let $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{0}(G)=1$ and $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{i}(G)/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{i-1}(G)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{i-1}(G))$ for $i=1,2,\cdots$. Then there exists a series of subgroups of $G$: $$1=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{0}(G)\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{1}(G) \leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{2}(G)\cdots \leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{n}(G)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{n+1}(G)=\cdots.$$ Denote $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)$ the terminal term of this ascending series. In particular, when $\mathfrak{H}=1$, we denote $\mathcal{N}_{1,\mathfrak{F}}^{\infty}(G)$ by $\mathcal{N}_{\mathfrak{F}}^{\infty}(G)$; when $\mathfrak{H}=\mathfrak{G}_\pi$, we denote $\mathcal{N}_{\mathfrak{G}_\pi,\mathfrak{F}}^{\infty}(G)$ by $\mathcal{N}_{\pi\mathfrak{F}}^{\infty}(G)$.\par Let $\mathfrak{F}$ be a formation. A $G$-chief factor $L/K$ is said to be $\mathfrak{F}$-central in $G$ if $(L/K)\rtimes (G/C_G(L/K))\in \mathfrak{F}$. Following \cite{Guo}, a normal subgroup $N$ of $G$ is called \textit{$\pi\mathfrak{F}$-hypercentral} in $G$ if either $N=1$ or $N>1$ and every $G$-chief factor below $N$ of order divisible by at least one prime in $\pi$ is $\mathfrak{F}$-central in $G$. Let $Z_{\pi\mathfrak{F}}(G)$ denote the \textit{$\pi\mathfrak{F}$-hypercentre} of $G$, that is, the product of all $\pi\mathfrak{F}$-hypercentral normal subgroups of $G$. The $\mathbb{P}\mathfrak{F}$-hypercentre of $G$ is called the $\mathfrak{F}$-hypercentre of $G$, and we denote it by $Z_{\mathfrak{F}}(G)$.\par Let $\mathfrak{X}$ be a class of groups. Recall that a subgroup $U$ of $G$ is called \textit{$\mathfrak{X}$-maximal} in $G$ if $U\in \mathfrak{X}$ and $G$ does not have a subgroup $V$ such that $U<V$ and $V\in \mathfrak{X}$. Following \cite{Ski}, we use $\mbox{Int}_\mathfrak{X}(G)$ to denote the intersection of all $\mathfrak{X}$-maximal subgroups of $G$.\par In \cite[Remark 4]{Bei1}, J. C. Beidleman and H. 
Heineken observed that $\mathcal{N}_{\mathfrak{N}_c}^{\infty}(G)$ coincides with $\mbox{Int}_{\mathfrak{N}\circ\mathfrak{N}_c}(G)$ for every group $G$, where $\mathfrak{N}_c$ denotes the class of nilpotent groups of class at most $c$. In \cite{Ski}, A. N. Skiba gave conditions under which the $\mathfrak{F}$-hypercentre $Z_\mathfrak{F}(G)$ coincides with $\mbox{Int}_{\mathfrak{F}}(G)$ for every group $G$. Also, Guo and A. N. Skiba \cite{Guo} gave conditions under which the $\pi\mathfrak{F}$-hypercentre $Z_{\pi\mathfrak{F}}(G)$ coincides with $\mbox{Int}_{\mathfrak{F}}(G)$ for every group $G$.\par Motivated by the above observations, the following questions naturally arise:\par \noindent\textbf{Problem (I).} \textit{Under what conditions does $\mathcal{N}_{\mathfrak{F}}^{\infty}(G)$ coincide with the $\mathfrak{N}\circ\mathfrak{F}$-hypercentre $Z_{\mathfrak{N}\circ\mathfrak{F}}(G)$\textup{?} More generally, under what conditions does $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$ coincide with the $\pi(\mathfrak{N}\circ\mathfrak{F})$-hypercentre $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$\textup{?}}\par \noindent\textbf{Problem (II).} \textit{Under what conditions does $\mathcal{N}_{\mathfrak{F}}^{\infty}(G)$ coincide with $\mbox{\textup{Int}}_{\mathfrak{N}\circ\mathfrak{F}}(G)$\textup{?} More generally, under what conditions does $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$ coincide with $\mbox{\textup{Int}}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$\textup{?}}\par For a class of groups $\mathfrak{X}$, a group $G$ is called $\mathtt{S}$-critical for $\mathfrak{X}$ if $G\notin \mathfrak{X}$ but all proper subgroups of $G$ belong to $\mathfrak{X}$. Let $\mbox{Crit}_\mathtt{S}(\mathfrak{X})$ denote the set of all groups $G$ which are $\mathtt{S}$-critical for $\mathfrak{X}$. For convenience, we introduce the following definition.\par \noindent\textbf{Definition 1.3.} We say that a formation $\mathfrak{F}$ satisfies:\par (1) \textit{The $\pi$-boundary condition} (I) if $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{N_{\pi}}\circ\mathfrak{F}$ (equivalently, $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{S_{\pi}}\circ\mathfrak{F}$, see Lemma 2.7 below).\par (2) \textit{The $\pi$-boundary condition} (II) if for any $p\in \pi$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\subseteq \mathfrak{S}_\pi\circ\mathfrak{F}$.\par (3) \textit{The $\pi$-boundary condition} (III) if for any $p\in \pi$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\subseteq \mathfrak{N}_\pi\circ\mathfrak{F}$.\par (4) \textit{The $\pi$-boundary condition \textup{(III)} in $\mathfrak{S}$} if for any $p\in \pi$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\cap \mathfrak{S}\subseteq \mathfrak{N}_\pi\circ\mathfrak{F}$.\par Note that a formation $\mathfrak{F}$ satisfies the $\pi$-boundary condition (III) (resp. the $\pi$-boundary condition \textup{(III)} in $\mathfrak{S}$) if and only if $\mathfrak{N}_\pi\circ\mathfrak{F}$ satisfies the $\pi$-boundary condition (resp. the $\pi$-boundary condition in $\mathfrak{S}$) in the sense of \cite{Guo}.\par \noindent\textbf{Remark 1.4.} If a formation $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II), then clearly $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I). However, the converse does not hold. For example, let $\pi=\mathbb{P}$ and $\mathfrak{F}=\mathfrak{N}_3$. By \cite[Chap. IV, Satz 5.4]{Hup}, $\mbox{Crit}_\mathtt{S}(\mathfrak{N}_3)\subseteq \mathfrak{N}\circ\mathfrak{N}_3$.
Now let $G=A_5$, where $A_5$ is the alternating group of degree 5. Then $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_3\circ\mathfrak{N}_3)$, but $G\notin \mathfrak{S}\circ\mathfrak{N}_3$. Hence $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_3\circ\mathfrak{N}_3)\nsubseteq \mathfrak{S}\circ\mathfrak{N}_3$.\par \noindent\textbf{Remark 1.5.} If a formation $\mathfrak{F}$ satisfies the $\pi$-boundary condition (III), then $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II). However, the converse does not hold. For example, let $\pi=\mathbb{P}$ and $\mathfrak{F}=\mathfrak{G}_3$. For any prime $p\neq 3$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{G}_3)\subseteq \mathfrak{N}_3\cup \mbox{Crit}_\mathtt{S}(\mathfrak{N}_3)$. If there exists a group $H$ such that $H\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{G}_3)\backslash (\mathfrak{S}\circ\mathfrak{G}_3)$, then by \cite[Chap. IV, Satz 5.4]{Hup}, we have that $H\in \mathfrak{N}_3$. Hence $H$ has the normal 3-complement $A$. If $A<H$, then $A\in \mathfrak{G}_p\circ\mathfrak{G}_3\subseteq \mathfrak{S}$, and thereby $H\in \mathfrak{S}$, a contradiction. Therefore, $H=A\in \mathfrak{G}_p\cup\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p)\subseteq \mathfrak{S}$, also a contradiction. This shows that $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{G}_3)\subseteq \mathfrak{S}\circ\mathfrak{G}_3$, and so $\mathfrak{G}_3$ satisfies the $\mathbb{P}$-boundary condition (II). Now let $G=S_3$, where $S_3$ is the symmetric group of degree 3. Then it is easy to see that $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_2\circ\mathfrak{G}_3)$, but $G\notin \mathfrak{N}\circ\mathfrak{G}_3$. Hence $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_2\circ\mathfrak{G}_3)\nsubseteq \mathfrak{N}\circ\mathfrak{G}_3$.\par Firstly, we give a characterization of $\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$-groups by using their $\mathfrak{H}$-$\mathfrak{F}$-norms.\par \noindent\textbf{Theorem A.} Let $\mathfrak{H}$ be a saturated Fitting formation such that $\mathfrak{G}_{\pi'}\subseteq\mathfrak{H}=\mathtt{E}\mathfrak{H}$ and $\mathfrak{F}$ a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Suppose that one of the following holds:\par (i) $G^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}\in \mathfrak{S_{\pi}}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par \noindent Then the following statements are equivalent:\par (1) $G\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$.\par (2) $G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$.\par (3) $G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$.\par (4) $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)>1$ for every proper normal subgroup $N$ of $G$.\par (5) $G=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)$.\par The main purpose of this paper is to give answers to Problem (I) and (II). In the universe of all groups, we prove:\par \noindent\textbf{Theorem B.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. 
Then:\par (1) If $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II), then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G$.\par (2) If $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G$, then $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par (3) $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G$ if and only if $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G\in \bigcup_{p\in\pi} (\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\backslash (\mathfrak{S}_\pi\circ\mathfrak{F}))$.\par \noindent\textbf{Remark 1.6.} The converse of statement (2) of Theorem B does not hold. For example, let $\pi=\mathbb{P}$ and $\mathfrak{F}=\mathfrak{U}$. By K. Doerk's result \cite{Doe1}, $\mbox{Crit}_\mathtt{S}(\mathfrak{U})\subseteq \mathfrak{N}\circ\mathfrak{U}$. This means that $\mathfrak{U}$ satisfies the $\mathbb{P}$-boundary condition (I). Let $A$ be the $2$-Frattini module of $A_5$, where $A_5$ is the alternating group of degree 5. By \cite[Example 1]{Gri}, the dimension of $A$ is 5. Then by \cite[Appendix $\beta$, Proposition $\beta.5$]{Doe}, there exists a Frattini extension $G$ such that $G/A\cong A_5$ and $A=\Phi(G)$. Now we show that $\mathcal{N}_{\mathfrak{U}}(G)=\Phi(G)$. As $\mathcal{N}_{\mathfrak{U}}(G)<G$, it will suffice to prove that for any subgroup $H$ of $G$, $\Phi(G)\leq N_G(H^\mathfrak{U})$. If $H/H\cap \Phi(G)\in \mathfrak{U}$, then $H^\mathfrak{U}\leq \Phi(G)$, and so $\Phi(G)\leq N_G(H^\mathfrak{U})$. Hence, consider that $H/H\cap \Phi(G)\notin \mathfrak{U}$. Since $G/\Phi(G)\cong A_5$, $H\Phi(G)/\Phi(G)\cong A_4$, where $A_4$ is the alternating group of degree 4. This implies that $H\Phi(G)$ is a Hall $5'$-subgroup of $G$, and thereby $H$ is a Hall $5'$-subgroup of $G$. Thus $\Phi(G)\leq H$, and consequently $\Phi(G)\leq N_G(H^\mathfrak{U})$. Therefore, $\mathcal{N}_{\mathfrak{U}}(G)=\Phi(G)$. If $\mathcal{N}_{\mathfrak{U}}^{\infty}(G)=Z_{\mathfrak{N}\circ\mathfrak{U}}(G)$, then $Z_{\mathfrak{N}\circ\mathfrak{U}}(G)=\Phi(G)$. Since $G^{\mathfrak{N}\circ\mathfrak{U}}=G$, by \cite[Chap. IV, Theorem 6.10]{Doe}, $Z(G)=\Phi(G)$. It follows that $G$ is quasisimple. By \cite[Table 4.1]{Gor}, the Schur multiplier of $A_5$ is a cyclic group of order 2, a contradiction. Hence $\mathcal{N}_{\mathfrak{U}}^{\infty}(G)\neq Z_{\mathfrak{N}\circ\mathfrak{U}}(G)$. Besides, we currently do not know whether the converse of statement (1) of Theorem B is true or not.\par \noindent\textbf{Theorem C.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Then the following statements are equivalent:\par (1) $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every group $G$.\par (2) $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every group $G$.\par (3) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (III).\par In the universe of all solvable groups, we prove:\par \noindent\textbf{Theorem D.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. 
Then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G\in \mathfrak{S_{\pi}}\circ\mathfrak{F}$.\par \noindent\textbf{Theorem E.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Then the following statements are equivalent:\par (1) $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every $G\in \mathfrak{S}$.\par (2) $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every $G\in \mathfrak{S}$.\par (3) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (III) in $\mathfrak{S}$.\par \section{Preliminaries} The following two lemmas are well known.\par \noindent\textbf{Lemma 2.1.} Let $\mathfrak{F}$ be a formation. Suppose that $H\leq G$ and $N\unlhd G$. Then:\par (1) $G^\mathfrak{F}N/N=(G/N)^\mathfrak{F}$.\par (2) If $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ (resp. $\mathfrak{F}=\mathtt{S_n}\mathfrak{F}$), then $H^\mathfrak{F}\leq G^\mathfrak{F}\cap H$ (resp. $N^\mathfrak{F}\leq G^\mathfrak{F}\cap N$).\par \noindent\textbf{Lemma 2.2.} Let $\mathfrak{H}$ be a Fitting class. Suppose that $H\leq G$ and $N\unlhd G$. Then:\par (1) $G_\mathfrak{H}\cap N=N_\mathfrak{H}$.\par (2) If $\mathfrak{H}=\mathtt{S}\mathfrak{H}$, then $G_\mathfrak{H}\cap H\leq H_\mathfrak{H}$.\par (3) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$, then $G_\mathfrak{H}N/N\leq (G/N)_\mathfrak{H}$.\par (4) If $\mathfrak{H}=\mathtt{E}\mathfrak{H}$ and $N\leq G_\mathfrak{H}$, then $(G/N)_\mathfrak{H}\leq G_\mathfrak{H}/N$.\par \noindent\textbf{Lemma 2.3.} Let $\mathfrak{H}$ be a Fitting class and $\mathfrak{F}$ a formation. Suppose that $H\leq G$ and $N\unlhd G$. Then:\par (1) $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)\cap N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(N)$.\par (2) If $\mathfrak{H}=\mathtt{S}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)\cap H\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(H)$.\par (3) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)N/N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)$.\par (4) If $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ and $G\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, then either $G=1$ or $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)>1$.\par \noindent\textit{Proof.} (1) By definition and Lemma 2.2(1), $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)\cap N=(\bigcap_{H\leq G}N_G(H^{\mathfrak{F}}G_{\mathfrak{H}}))\cap N\leq\bigcap_{H\leq N}\linebreak[4]N_N(H^{\mathfrak{F}}G_{\mathfrak{H}})\leq \bigcap_{H\leq N}N_N(H^{\mathfrak{F}}(G_{\mathfrak{H}}\cap N))=\bigcap_{H\leq N}N_N(H^{\mathfrak{F}}N_{\mathfrak{H}})=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(N)$.\par The proof of statement (2) is similar to (1).\par (3) By definition, Lemma 2.1(1) and Lemma 2.2(3), $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)N/N=$$(\bigcap_{H\leq G}N_G(H^{\mathfrak{F}}G_{\mathfrak{H}}))N/N\linebreak[4]\leq \bigcap_{N\leq H\leq G}$$N_G(H^{\mathfrak{F}}$$G_{\mathfrak{H}})/N$ $\leq \bigcap_{H/N\leq G/N}N_{G/N}((H^{\mathfrak{F}}N/N)$$(G/N)_{\mathfrak{H}})=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)$.\par (4) We may suppose that $G>1$ and $G_\mathfrak{H}=1$. Since $G\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, $G=G/G_\mathfrak{H}\in \mathfrak{N}\circ\mathfrak{F}$. Then $G^\mathfrak{F}\in \mathfrak{N}$, and so $Z(G^\mathfrak{F})>1$. 
As $\mathfrak{F}=\mathtt{S}\mathfrak{F}$, we have that $H^\mathfrak{F}\leq G^\mathfrak{F}$ for every subgroup $H$ of $G$ by Lemma 2.1(2). It follows that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)\geq Z(G^\mathfrak{F})>1$.\par \noindent\textbf{Lemma 2.4.} Let $f$ be a subgroup functor assigning to every group $G$ a characteristic subgroup $f(G)$ of $G$. Define a subgroup functor $f_i$ as follows: for every group $G$, $f_0(G)=1$; $f_i(G)/f_{i-1}(G)=f(G/f_{i-1}(G))$ for $i=1,2,\cdots$.\par (1) If $f(G)N/N\leq f(G/N)$ for every group $G$ and every normal subgroup $N$ of $G$, then $f_i(G)N/N\leq f_i(G/N)$ for every group $G$ and every normal subgroup $N$ of $G$.\par (2) If $f(G)N/N\leq f(G/N)$ and $f(G)\cap N\leq f(N)$ for every group $G$ and every normal subgroup $N$ of $G$, then $f_i(G)\cap N\leq f_i(N)$ for every group $G$ and every normal subgroup $N$ of $G$.\par (3) If $f(G)N/N\leq f(G/N)$ for every group $G$ and every normal subgroup $N$ of $G$, and $f(G)\cap H\leq f(H)$ for every group $G$ and every subgroup $H$ of $G$, then $f_i(G)\cap H\leq f_i(H)$ for every group $G$ and every subgroup $H$ of $G$.\par \noindent\textit{Proof.} (1) By induction, we may suppose that $f_{i-1}(G)N/N\leq f_{i-1}(G/N)$. Let $f_{i-1}(G/N)=A_{i-1}/N$ and $f_{i}(G/N)=A_{i}/N$. Then $f_{i-1}(G)\leq A_{i-1}$ and $A_i/A_{i-1}=f(G/A_{i-1})$. It follows that $(f_i(G)A_{i-1}/f_{i-1}(G))/(A_{i-1}/f_{i-1}(G))\leq f((G/f_{i-1}(G))/(A_{i-1}/f_{i-1}(G)))=(A_i/f_{i-1}(G))/\linebreak[4](A_{i-1}/f_{i-1}(G))$. Therefore, $f_i(G)\leq A_i$, and so $f_i(G)N/N\leq f_i(G/N)$.\par (2) By induction, we may assume that $f_{i-1}(G)\cap N\leq f_{i-1}(N)$. Let $f_{i-1}(G)\cap N=C_{i-1}$ and $f_i(G)\cap N=C_i$. Then $f(N/C_{i-1})(f_{i-1}(N)/C_{i-1})/(f_{i-1}(N)/C_{i-1})\leq f((N/C_{i-1})/(f_{i-1}(N)/C_{i-1}))\linebreak[4]=(f_{i}(N)/C_{i-1})/(f_{i-1}(N)/C_{i-1})$. This implies that $f(N/C_{i-1})\leq f_{i}(N)/C_{i-1}$. Clearly, $C_{i}f_{i-1}(G)/f_{i-1}(G)=f(G/f_{i-1}(G))\cap (f_{i-1}(G)N/f_{i-1}(G))\leq f(f_{i-1}(G)N/f_{i-1}(G))$. It follows that $C_{i}/C_{i-1}\leq f(N/C_{i-1})\leq f_{i}(N)/C_{i-1}$. Therefore, $C_{i}\leq f_i(N)$, and so $f_i(G)\cap N\leq f_i(N)$.\par The proof of statement (3) is similar to (2).\par \noindent\textbf{Lemma 2.5.} Let $\mathfrak{H}$ be a Fitting class and $\mathfrak{F}$ a formation. Suppose that $H\leq G$ and $N\unlhd G$. 
Then:\par (1) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\cap N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(N)$.\par (2) If $\mathfrak{H}$ is a Fitting formation such that $\mathfrak{H}=\mathtt{S}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\cap H\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(H)$.\par (3) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)N/N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)$.\par (4) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$ and $N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)/N$.\par (5) If $\mathfrak{H}=\mathtt{Q}\mathfrak{H}$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)=\bigcap\{N|N\unlhd G, \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)=1\}$.\par \noindent\textit{Proof.} Statements (1)-(3) directly follow from Lemmas 2.3 and 2.4.\par (4) By definition and (3), we have that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)/(\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)/N)\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}((G/N)/(\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)/N))\linebreak[4]=1$. Therefore, $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)/N$.\par (5) By definition, $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G))=1$. On the other hand, if $N\unlhd G$ such that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)=1$, then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)=1$. Hence by (3), $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\leq N$. Therefore, we have that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)=\bigcap\{N|N\unlhd G, \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)=1\}$.\par \noindent\textbf{Lemma 2.6.} Let $\mathfrak{H}$ be a Fitting class and $\mathfrak{F}$ a formation such that $\mathfrak{F}\subseteq \mathfrak{S}$. Suppose that $G_1$ and $G_2$ are groups with $(|G_1|,|G_2|)=1$. Then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_1\times G_2)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_1)\times \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_2)$ and $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G_1\times G_2)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G_1)\times \mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G_2)$.\par \noindent\textit{Proof.} We only need to prove that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_1\times G_2)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_1)\times \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_2)$. Let $G=G_1\times G_2$. Since $(|G_1|,|G_2|)=1$, for every subgroup $H$ of $G$, we have that $H=(H\cap G_1)\times (H\cap G_2)$. By \cite[Chap. IV, Theorem 1.18]{Doe}, $H^\mathfrak{F}=(H\cap G_1)^\mathfrak{F}\times (H\cap G_2)^\mathfrak{F}$. Then it is easy to see that $G_\mathfrak{H}={(G_1)}_\mathfrak{H}\times {(G_2)}_\mathfrak{H}$, and so $H^\mathfrak{F}G_\mathfrak{H}=(H\cap G_1)^\mathfrak{F}{(G_1)}_\mathfrak{H}\times (H\cap G_2)^\mathfrak{F}{(G_2)}_\mathfrak{H}$. This implies that $N_G(H^\mathfrak{F}G_\mathfrak{H})=N_{G_1}((H\cap G_1)^\mathfrak{F}{(G_1)}_\mathfrak{H})\times N_{G_2}((H\cap G_2)^\mathfrak{F}{(G_2)}_\mathfrak{H})$. 
Hence $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)=\bigcap_{H\leq G}N_G(H^{\mathfrak{F}}G_{\mathfrak{H}})=\bigcap_{H\leq G}N_{G_1}((H\cap G_1)^{\mathfrak{F}}{(G_1)}_{\mathfrak{H}})\times \bigcap_{H\leq G}N_{G_2}((H\cap G_2)^{\mathfrak{F}}{(G_2)}_{\mathfrak{H}})=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_1)\times \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G_2)$.\par \noindent\textbf{Lemma 2.7.} Let $\mathfrak{F}$ be a formation. Then $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I) if and only if $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{S}_\pi\circ\mathfrak{F}$.\par \noindent\textit{Proof.} The necessity is evident. So we only need to prove the sufficiency. Suppose that $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{S}_\pi\circ\mathfrak{F}$. Let $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{F})$. If $G^\mathfrak{F}\leq \Phi(G)$, then there is nothing to prove. We may, therefore, assume that $G^\mathfrak{F}\nleq \Phi(G)$. Let $G^\mathfrak{F}/L$ be a $G$-chief factor. Clearly, $G^\mathfrak{F}/L\in \mathfrak{N}_\pi$. If $L\nleq \Phi(G)$, then $G$ has a maximal subgroup $M$ such that $G=LM$. Since $M\in \mathfrak{F}$, $G/L\cong M/L\cap M\in \mathfrak{F}$, and so $G^\mathfrak{F}\leq L$, which is absurd. Hence $L\leq \Phi(G)$. This implies that $L=G^\mathfrak{F}\cap \Phi(G)$. Since $G^\mathfrak{F}\Phi(G)/\Phi(G)\cong G^\mathfrak{F}/G^\mathfrak{F}\cap \Phi(G)\in \mathfrak{N}_\pi$, we have that $G^\mathfrak{F}\in \mathfrak{N}_\pi$ by \cite[Lemma 3.1]{Bal1}. This shows that $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$, and thus $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{N}_\pi\circ\mathfrak{F}$.\par \noindent\textbf{Lemma 2.8.} Let $\mathfrak{F}$ be a saturated formation and $\pi\subseteq \pi(\mathfrak{F})$. Suppose that $H\leq G$ and $N\unlhd G$. Then:\par (1) If $N\leq Z_{\pi\mathfrak{F}}(G)$, then $Z_{\pi\mathfrak{F}}(G/N)=Z_{\pi\mathfrak{F}}(G)/N$.\par (2) $Z_{\pi\mathfrak{F}}(G)N/N\leq Z_{\pi\mathfrak{F}}(G/N)$.\par (3) If $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ (resp. $\mathfrak{F}=\mathtt{S_n}\mathfrak{F}$), then $Z_{\pi\mathfrak{F}}(G)\cap H\leq Z_{\pi\mathfrak{F}}(H)$ (resp. $Z_{\pi\mathfrak{F}}(G)\cap N\leq Z_{\pi\mathfrak{F}}(N)$).\par (4) If $\mathfrak{G}_{\pi'}\circ\mathfrak{F}=\mathfrak{F}$ and $G/Z_{\pi\mathfrak{F}}(G)\in \mathfrak{F}$, then $G\in \mathfrak{F}$.\par (5) If $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ (resp. $\mathfrak{F}=\mathtt{S_n}\mathfrak{F}$), $\mathfrak{G}_{\pi'}\circ\mathfrak{F}=\mathfrak{F}$ and $H\in \mathfrak{F}$ (resp. $N\in \mathfrak{F}$), then $HZ_{\pi\mathfrak{F}}(G)\in \mathfrak{F}$ (resp. $NZ_{\pi\mathfrak{F}}(G)\in \mathfrak{F}$).\par (6) $Z_{\pi\mathfrak{F}}(G)=Z_{\pi(\mathfrak{G}_{\pi'}\circ\mathfrak{F})}(G)$.\par (7) If $\mathfrak{F}=\mathtt{S_n}\mathfrak{F}$, then $Z_{\pi\mathfrak{F}}(G)\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$.\par \noindent\textit{Proof.} Statement (1) is evident by definition.\par Statements (2)-(5) were proved in \cite[Lemma 2.2]{Guo}.\par (6) Let $\mathfrak{F}=LF(F)$, where $F$ is the canonical local definition of $\mathfrak{F}$. Then by \cite[Chap. IV, Theorem 3.13]{Doe}, $\mathfrak{G}_{\pi'}\circ\mathfrak{F}=LF(H)$, where $H(p)=F(p)$ for all $p\in \pi$ and $H(p)=\mathfrak{G}_{\pi'}\circ\mathfrak{F}$ for all $p\in \pi'$. Then by definition, it is easy to see that $Z_{\pi\mathfrak{F}}(G)=Z_{\pi(\mathfrak{G}_{\pi'}\circ\mathfrak{F})}(G)$.\par Statement (7) follows from (5) and (6).\par \noindent\textbf{Remark 2.9.} Note that there exist several minor mistakes in \cite{Guo}. 
In \cite[Lemmas 2.2(6) and 2.2(7)]{Guo} and \cite[Lemma 2.4(g)]{Guo}, ``$\mathfrak{G}_\sigma\circ\mathfrak{F}=\mathfrak{F}$" should be corrected as ``$\mathfrak{G}_{\pi'}\circ\mathfrak{F}=\mathfrak{F}$"; and in \cite[Lemma 2.2(5)]{Guo}, ``$Z_{\pi\mathfrak{F}}(H)\cap A$" should be corrected as ``$Z_{\pi\mathfrak{F}}(A)\cap H$".\par \noindent\textbf{Lemma 2.10.}\cite[Lemma 2.5]{Ski} Let $\mathfrak{F}=LF(F)$ be a saturated formation, where $F$ is the canonical local definition of $\mathfrak{F}$, and $E$ a normal $p$-subgroup of $G$. If $E\leq Z_\mathfrak{F}(G)$, then $G/C_G(E)\in F(p)$.\par \noindent\textbf{Lemma 2.11.} Let $\mathfrak{F}$ be a formation and $\mathfrak{B}=\mathfrak{N}_\pi\circ \mathfrak{F}$. Then:\par (1) $\mathfrak{B}=LF(b)$ with $b(p)=\mathfrak{F}$ for all $p\in \pi$ and $b(p)=\mathfrak{B}=\mathfrak{N}_\pi\circ \mathfrak{F}$ for all $p\in \pi'$.\par (2) The canonical local definition $B$ of $\mathfrak{B}$ can be defined as follows: $B(p)=\mathfrak{G}_p\circ\mathfrak{F}$ for all $p\in \pi$ and $B(p)=\mathfrak{B}=\mathfrak{N}_\pi\circ\mathfrak{F}$ for all $p\in \pi'$.\par \noindent\textit{Proof.} Statement (1) directly follows from \cite[Lemma 1]{Ski2}, and Statement (2) follows from \cite[Chap. IV, Lemma 3.13]{Doe}.\par \noindent\textbf{Lemma 2.12.} Let $\mathfrak{F}$ be a formation. Then:\par (1) $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=1$ if and only if $C_G(G^\mathfrak{F})=1$ and $O_{\pi'}(G)=1$.\par (2) $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}=Z_{\pi\mathfrak{N}}(G^\mathfrak{F})$.\par (3) If $\mathfrak{F}$ is saturated, then $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=Z_{\pi\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))$.\par \noindent\textit{Proof.} (1) Suppose that $C_G(G^\mathfrak{F})=1$ and $O_{\pi'}(G)=1$. If $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)>1$, then let $N$ be a minimal normal subgroup of $G$ contained in $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. Clearly, $N$ is not a $\pi'$-group. Then by Lemma 2.11(1), we have that $G/C_G(N)\in \mathfrak{F}$, and so $N\leq C_G(G^\mathfrak{F})=1$, a contradiction. Thus $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=1$. Now assume that $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=1$. Then clearly, $O_{\pi'}(G)=1$. Suppose that $C_G(G^\mathfrak{F})>1$, and let $N$ be a minimal normal subgroup of $G$ contained in $C_G(G^\mathfrak{F})$. Then $G^\mathfrak{F}\leq C_G(N)$ and $N$ is not a $\pi'$-group. Hence by Lemma 2.11(1) again, $N\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$, which is impossible. Therefore, $C_G(G^\mathfrak{F})=1$.\par (2) Firstly, we prove that $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. If $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)>1$, then by induction, $Z_{\pi\mathfrak{N}}((G/Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G))^\mathfrak{F})\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G/Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G))=1$. By Lemmas 2.1(1) and 2.8(2), $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. We may, therefore, assume that $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=1$. Then by (1), $C_G(G^\mathfrak{F})=1$ and $O_{\pi'}(G)=1$. It follows that $Z(G^\mathfrak{F})=1$ and $O_{\pi'}(G^\mathfrak{F})=1$. By (1) again, $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=1$. Consequently, $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$.\par Suppose that $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})>1$. 
Then by induction and Lemma 2.1(1), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))\cap (G^\mathfrak{F}/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=Z_{\pi\mathfrak{N}}(G^\mathfrak{F}/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=1$. Hence by Lemma 2.8(1), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}=Z_{\pi\mathfrak{N}}(G^\mathfrak{F})$. We may, therefore, assume that $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=1$. Then by (1), $Z(G^\mathfrak{F})=1$ and $O_{\pi'}(G^\mathfrak{F})=1$. If $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}>1$, then let $N$ be a minimal normal subgroup of $G$ contained in $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}$. Since $O_{\pi'}(G^\mathfrak{F})=1$, $N$ is not a $\pi'$-group. It follows from Lemma 2.11(1) that $G/C_G(N)\in \mathfrak{F}$, and so $N\leq Z(G^\mathfrak{F})$, a contradiction. Therefore, $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}=1$.\par (3) If $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})>1$, then by induction, $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=Z_{\pi\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))$. Hence by (2) and Lemma 2.8(1), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=Z_{\pi\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))$. We may, therefore, assume that $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=1$. Then by (2), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}=1$, and so $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq C_G(G^\mathfrak{F})$. By \cite[Chap. IV, Theorem 6.13]{Doe}, $C_G(G^\mathfrak{F})=Z_\mathfrak{F}(G)\leq Z_{\pi\mathfrak{F}}(G)\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. This implies that $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=Z_{\pi\mathfrak{F}}(G)$.\par \noindent\textbf{Lemma 2.13.}\cite[Lemma 2.10]{Ski} Let $\mathfrak{F}=LF(F)$ be a saturated formation with $p\in \pi(\mathfrak{F})$, where $F$ is the canonical local definition of $\mathfrak{F}$. Suppose that $G$ is a group of minimal order in the set of all groups $G\in \mbox{Crit}_\mathtt{S}(F(p))$ and $G\notin \mathfrak{F}$. Then $G^\mathfrak{F}$ is the unique minimal normal subgroup of $G$ and $O_p(G)=\Phi(G)=1$.\par \section{Proofs of Main Results} \noindent\textbf{Lemma 3.1.} Let $\mathfrak{H}$ be a saturated Fitting formation such that $\mathfrak{G}_{\pi'}\subseteq\mathfrak{H}=\mathtt{E}\mathfrak{H}$ and $\mathfrak{F}$ a formation. Then $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$ if one of the following holds:\par (i) $\mathfrak{F}=\mathtt{S_n}\mathfrak{F}$ and $G^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}\in \mathfrak{S_{\pi}}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par \noindent\textit{Proof.} Assume that the result is false and let $G$ be a counterexample of minimal order. Note that if the condition (i) holds, then since $\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}=\mathtt{S_n}(\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F})$, we have that $(\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G))^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}\leq G^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}\in \mathfrak{S}_\pi$ by Lemma 2.1(2). Hence the condition (i) holds for $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)$ when the condition (i) holds for $G$. 
If $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)<G$, then by the choice of $G$ and Lemma 2.5(1), $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G))\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, a contradiction. We may, therefore, assume that $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)=G$. Let $N$ be any minimal normal subgroup of $G$. Then by Lemma 2.1(1), the condition (i) holds for $G/N$ when the condition (i) holds for $G$. Hence by the choice of $G$ and Lemma 2.5(3), $G/N=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G/N)\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$. Clearly, $\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$ is a saturated formation by \cite[Chap. IV, Theorem 4.8]{Doe}. This implies that $N$ is the unique minimal normal subgroup of $G$.\par If $G_\mathfrak{H}>1$, then $N\leq G_\mathfrak{H}$. By Lemmas 2.2(3) and 2.2(4), $(G/N)_\mathfrak{H}=G_\mathfrak{H}/N$. Since $G/N\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, $G/G_\mathfrak{H}\in \mathfrak{N}\circ\mathfrak{F}$, and thus $G\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, a contradiction. Therefore, $G_\mathfrak{H}=1$, and so $O_{\pi'}(G)=1$. If $N\leq \Phi(G)$, then $G\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$, which is impossible. Hence $N\nleq \Phi(G)$. It follows that $G$ has a maximal subgroup $M$ such that $N\nleq M$. Since $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)>1$ and $N$ is the unique minimal normal subgroup of $G$, we have that $N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$. Then by the definition of $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$, $N\leq N_G(M^\mathfrak{F})$. This induces that $M^\mathfrak{F}\unlhd G$. Hence $M^\mathfrak{F}=1$, and so $M\in \mathfrak{F}$. It follows that $G/N\cong M/N\cap M\in \mathfrak{F}$, and thereby $G^\mathfrak{F}\leq N$. Since $1<G^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}\leq G^\mathfrak{F}\leq N$, $N=G^{\mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}}=G^\mathfrak{F}$.\par We claim that $N\in \mathfrak{N}$. If the condition (i) holds, then $N\in \mathfrak{S_{\pi}}$. As $O_{\pi'}(G)=1$, $N\in \mathfrak{N}$. Now assume that the condition (ii) holds. Then since $G\notin \mathfrak{F}$, we may take a subgroup $K$ of $G$ such that $K\in \mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{N_{\pi}}\circ\mathfrak{F}$. If $N\notin \mathfrak{N}$, then $C_G(N)=1$. Since $N\leq \mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G)$ and $G_\mathfrak{H}=1$, we have that $N\leq N_G(K^\mathfrak{F})$, and so $N\cap K^\mathfrak{F}\unlhd N$. As $K^\mathfrak{F}\in \mathfrak{N_{\pi}}$ and $O_{\pi'}(G)=1$, $K^\mathfrak{F}\in \mathfrak{N}$. By \cite[Chap. A, Proposition 4.13(b)]{Doe}, $N\cap K^\mathfrak{F}=1$. It follows that $K^\mathfrak{F}\leq C_G(N)=1$, and thus $K\in \mathfrak{F}$, a contradiction. Hence $N\in \mathfrak{N}$. Therefore, our claim holds. This induces that $G\in \mathfrak{N}\circ\mathfrak{F}\subseteq \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$. The final contradiction completes the proof.\par \noindent\textbf{Proof of Theorem A.} It is obvious that (1) implies (2) and (2) implies (3). Suppose that (3) holds, that is, $G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$. 
If $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)=1$ for some proper normal subgroup $N$ of $G$, then by Lemma 2.5(5), $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)\leq N$, and so $G/N\in \mathfrak{H}\diamond\mathfrak{N}\circ\mathfrak{F}$. Hence by Lemma 2.3(4), either $G=N$ or $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/N)>1$, a contradiction. This induces that (3) implies (4). Now assume that (4) holds. Then since $\mathcal{N}_{\mathfrak{H},\mathfrak{F}}(G/\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G))=1$, we have that $G=\mathcal{N}_{\mathfrak{H},\mathfrak{F}}^{\infty}(G)$. Hence (4) implies (5). Finally, by Lemma 3.1, we get that (5) implies (1). This finishes the proof of the theorem.\par Since $\mathfrak{N}_\pi=\mathfrak{G}_{\pi'}\circ\mathfrak{N}=\mathfrak{G}_{\pi'}\diamond\mathfrak{N}$, the next corollary directly follows from Theorem A, which is also a generalization of \cite[Theorem A]{Su} and \cite[Theorem B]{Su}.\par \noindent\textbf{Corollary 3.2.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Suppose that one of the following holds:\par (i) $G\in \mathfrak{S_{\pi}}\circ \mathfrak{F}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par \noindent Then the following statements are equivalent:\par (1) $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par (2) $G/\mathcal{N}_{\pi'\mathfrak{F}}(G)\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par (3) $G/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par (4) $\mathcal{N}_{\pi'\mathfrak{F}}(G/N)>1$ for every proper normal subgroup $N$ of $G$.\par (5) $G=\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$.\par In the sequel of this section, we restrict our attention to $\pi\mathfrak{F}$-norms.\par \noindent\textbf{Lemma 3.3.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Then $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$.\par \noindent\textit{Proof.} If $\mathcal{N}_{\pi'\mathfrak{F}}(G)>1$, then by induction, $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))=1$. By Lemma 2.8(2), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=1$, and so $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$. Hence we may assume that $\mathcal{N}_{\pi'\mathfrak{F}}(G)=1$. Since $\mathfrak{F}=\mathtt{S}\mathfrak{F}$, $H^\mathfrak{F}\leq G^\mathfrak{F}$ for every subgroup $H$ of $G$ by Lemma 2.1(2), and thereby $C_G(G^\mathfrak{F})O_{\pi'}(G)\leq N_G(H^\mathfrak{F}O_{\pi'}(G))$. This implies that $C_G(G^\mathfrak{F})=1$ and $O_{\pi'}(G)=1$. Then by Lemma 2.12(1), $1=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$.\par \noindent\textbf{Proofs of Theorem B(1) and Theorem D.} We need to prove that if either $G\in \mathfrak{S_{\pi}}\circ \mathfrak{F}$ or $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II), then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. Suppose that the result is false and let $L$ be a counterexample of minimal order. By Lemma 3.3, $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L)$. We may, therefore, assume that $\mathcal{N}_{\pi'\mathfrak{F}}(L)>1$. 
If $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)>1$, then by the choice of $L$ and Lemma 2.8(1), $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L/Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L))=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L/Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L))=1$. Hence by Lemma 2.5(4), $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)$, a contradiction. Therefore, $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)=1$, and thereby $O_{\pi'}(L)=1$.\par Now let $N$ be any minimal normal subgroup of $L$ contained in $\mathcal{N}_{\pi'\mathfrak{F}}(L)$. If $L$ has a minimal normal subgroup $R$ which is different from $N$, then by the choice of $L$, $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L/R)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L/R)$. It follows from Lemma 2.5(3) that $NR/R\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L/R)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L/R)$. By $L$-isomorphism $N\cong NR/R$, we have that $N\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)$, which is absurd. Thus $N$ is the unique minimal normal subgroup of $L$. If $N\nleq \Phi(L)$, then $L$ has a maximal subgroup $M$ such that $N\nleq M$. Since $N\leq \mathcal{N}_{\pi'\mathfrak{F}}(L)$, $N\leq N_L(M^\mathfrak{F})$ for $O_{\pi'}(L)=1$. This implies that $M^\mathfrak{F}\unlhd L$, and so $M^\mathfrak{F}=1$. Therefore, $M\in \mathfrak{F}$. Then $L/N\cong M/N\cap M\in \mathfrak{F}$. As $N\leq \mathcal{N}_{\pi'\mathfrak{F}}(L)$, $L/\mathcal{N}_{\pi'\mathfrak{F}}(L)\in \mathfrak{F}$. Note that $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I) if $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II). By Corollary 3.2, we have that $L\in \mathfrak{N}_{\pi}\circ \mathfrak{F}$. This induces that $L=Z_{\pi(\mathfrak{N}_\pi\circ\mathfrak{F})}(L)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)$ by Lemma 2.8(6), a contradiction. Hence $N\leq \Phi(L)$, and so $N$ is an elementary abelian $p$-group with $p\in \pi$. Let $M$ be any maximal subgroup of $L$. By the choice of $L$, $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(M)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(M)$. Then by Lemma 2.5(2), $N\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(L)\cap M\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(M)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(M)$. Thus $N\leq Z_{\mathfrak{N}\circ\mathfrak{F}}(M)$. By Lemmas 2.10 and 2.11(2), $M/C_M(N)\in \mathfrak{G}_p\circ\mathfrak{F}$. If $C_L(N)\nleq M$, then $L=C_L(N)M$, and so $L/C_L(N)\cong M/C_M(N)\in \mathfrak{G}_p\circ\mathfrak{F}$. This shows that $N\leq Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(L)$ by Lemma 2.11(2), which is impossible. Hence $C_L(N)\leq M$, and thereby $C_L(N)\leq \Phi(L)$. Since $N$ is the unique minimal normal subgroup of $L$, $\Phi(L)$ is a $p$-subgroup of $L$. Therefore, $C_L(N)$ is also a $p$-subgroup of $L$. This implies that $M\in \mathfrak{G}_p\circ\mathfrak{F}$. If $L\in \mathfrak{G}_p\circ\mathfrak{F}$, then $L\in \mathfrak{N}_{\pi}\circ \mathfrak{F}$, a contradiction as above. Hence $L\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})$. Then, in both cases, $L\in \mathfrak{S_{\pi}}\circ \mathfrak{F}$.\par Let $F_p(L)$ be the $p$-Fitting subgroup of $L$, that is, the $\mathfrak{N}_p$-radical of $L$. As $N$ is the unique minimal normal subgroup of $L$, we have that $O_{p'}(L)=1$, and so $F_p(L)=O_p(L)$. By \cite[Chap. A, Theorem 13.8(a)]{Doe}, $F_p(L)\leq C_L(N)\leq \Phi(L)$. This induces that $F_p(L)=\Phi(L)$. Since $L\in \mathfrak{S_{\pi}}\circ \mathfrak{F}$, $L^\mathfrak{F}\in \mathfrak{S_{\pi}}$. 
If $L^\mathfrak{F}\leq \Phi(L)$, then $L\in \mathfrak{G}_p\circ\mathfrak{F}$, which is absurd. Thus $L^\mathfrak{F}\nleq \Phi(L)$. Let $A/\Phi(L)$ be an $L$-chief factor contained in $L^\mathfrak{F}\Phi(L)/\Phi(L)$. Then $A/\Phi(L)\in \mathfrak{N_{\pi}}$, and so $A/\Phi(L)\in \mathfrak{N}_p$. Hence by \cite[Lemma 3.1]{Bal1}, we have that $A\in \mathfrak{N}_p$. This implies that $A\leq F_p(L)=\Phi(L)$, a contradiction. The proof is thus completed.\par \noindent\textbf{Proofs of Theorems B(2) and B(3).} (2) Suppose that $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$ holds for every group $G$. Obviously, for any group $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{F})$, $G=\mathcal{N}_{\pi'\mathfrak{F}}(G)=\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$. It follows that $G=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$, and so $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$ by Lemma 2.8(7). Hence $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par (3) The necessity is obvious. So we only need to prove the sufficiency. For any group $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{F})$ and any $p\in \pi$, either $G\in \mathfrak{S}_\pi\circ \mathfrak{F}$ or $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\backslash (\mathfrak{S}_\pi\circ \mathfrak{F})$. In the former case, $G\in \mathfrak{N}_\pi\circ \mathfrak{F}$ by Lemma 2.7. In the latter case, a same discussion as in the proof of (2) shows that $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$. Therefore, $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I). The rest of the proof is similar to the proofs of Theorem B(1) and Theorem D.\par \noindent\textbf{Corollary 3.4.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Suppose that one of the following holds:\par (i) $G\in \mathfrak{S}_\pi\circ\mathfrak{F}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II).\par \noindent Then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=\mathcal{N}_{\pi'\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})$. In particular, if $\mathfrak{F}$ is saturated, then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=\mathcal{N}_{\pi'\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)/Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=Z_{\pi\mathfrak{F}}(G/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))$.\par \noindent\textit{Proof.} Obviously, $Z_{\pi\mathfrak{N}}(G^\mathfrak{F}/Z_{\pi\mathfrak{N}}(G^\mathfrak{F}))=1$. By induction, Lemma 2.5(4) and Lemma 2.8(1), we may assume that $Z_{\pi\mathfrak{N}}(G^\mathfrak{F})=1$. Then by Lemma 2.12(2), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\cap G^\mathfrak{F}=1$, and thus $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq C_G(G^\mathfrak{F})$. Since $\mathfrak{F}=\mathtt{S}\mathfrak{F}$, $H^\mathfrak{F}\leq G^\mathfrak{F}$ for every subgroup $H$ of $G$, and so $C_G(G^\mathfrak{F})\leq \mathcal{N}_{\pi'\mathfrak{F}}(G)$. Hence $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}(G)$. By Theorem B(1) and Theorem D, $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. This implies that $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mathcal{N}_{\pi'\mathfrak{F}}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)$. Suppose further that $\mathfrak{F}$ is saturated. 
Then by Lemma 2.12(3), $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=Z_{\pi\mathfrak{F}}(G)$, and so $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mathcal{N}_{\pi'\mathfrak{F}}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=Z_{\pi\mathfrak{F}}(G)$.\par \noindent\textbf{Lemma 3.5.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Then $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ if one of the following holds:\par (i) $G\in \mathfrak{S}_\pi\circ\mathfrak{F}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I).\par \noindent\textit{Proof.} Let $H$ be any subgroup of $G$ such that $H\in \mathfrak{N}_\pi\circ\mathfrak{F}$. Then we only need to prove that $H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\in \mathfrak{N}_\pi\circ\mathfrak{F}$. By Lemma 2.5(2), we have that $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))$. It follows that $H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))\cong(H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))/(\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))/\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G))\in \mathfrak{N}_\pi\circ\mathfrak{F}$. Then by Corollary 3.2, $H\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\in \mathfrak{N}_\pi\circ\mathfrak{F}$. Hence $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$.\par \noindent\textbf{Proof of Theorem C.} By Lemma 2.11(2), the canonical local definition $F$ of $\mathfrak{N}_\pi\circ\mathfrak{F}$ can be defined as follows: $F(p)=\mathfrak{G}_p\circ\mathfrak{F}$ for all $p\in \pi$; $F(p)=\mathfrak{N}_\pi\circ\mathfrak{F}$ for all $p\in \pi'$. Note that $Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=Z_{\pi(\mathfrak{N}_\pi\circ\mathfrak{F})}(G)$ by Lemma 2.8(6). Then by \cite[Theorem A]{Guo}, (2) is equivalent to (3).\par Next we show that (1) is equivalent to (3). Suppose that (3) holds, that is, $\mathfrak{F}$ satisfies the $\pi$-boundary condition (III). Then clearly, $\mathfrak{F}$ satisfies the $\pi$-boundary condition (I). Therefore, for every group $G$, we have that $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ by Lemma 3.5. Since (2) is equivalent to (3), $\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$ by Lemma 3.3. Consequently, $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every group $G$, and so (3) implies (1).\par Now suppose that $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=\mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(G)$ holds for every group $G$, and there exists a prime $p\in \pi$ such that $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\nsubseteq \mathfrak{N}_\pi\circ\mathfrak{F}$. Let $L$ be a group of minimal order in the set of all groups $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ\mathfrak{F})\backslash(\mathfrak{N}_\pi\circ\mathfrak{F})$. Then by Lemma 2.13, $L^{\mathfrak{N}_\pi\circ\mathfrak{F}}$ is the unique minimal normal subgroup of $L$ and $O_p(L)=\Phi(L)=1$. Hence by \cite[Chap. B, Theorem 10.3]{Doe}, there exists a simple $\mathbb{F}_pL$-module $P$ which is faithful for $L$. Let $V=P\rtimes L$. 
For any subgroup $H$ of $V$ such that $H\in \mathfrak{N}_\pi\circ\mathfrak{F}$, if $PH=V$, then $P\cap H\unlhd V$. This implies that $P\cap H=1$ for $P$ is a simple $\mathbb{F}_pL$-module, and so $H\cong V/P\cong L\notin \mathfrak{N}_\pi\circ\mathfrak{F}$, a contradiction. Hence $PH<V$. Then clearly, $PH\cap L<L$, and thus $PH/P=P(PH\cap L)/P\cong PH\cap L\in \mathfrak{G}_p\circ\mathfrak{F}$. It follows that $PH\in \mathfrak{G}_p\circ\mathfrak{F}\subseteq \mathfrak{N}_\pi\circ\mathfrak{F}$. Therefore, $P\leq \mbox{Int}_{\mathfrak{N}_\pi\circ\mathfrak{F}}(V)=\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(V)$. If $P\nleq \mathcal{N}_{\pi'\mathfrak{F}}(V)$, then $P\cap \mathcal{N}_{\pi'\mathfrak{F}}(V)=1$. Since $C_V(P)=P$, $\mathcal{N}_{\pi'\mathfrak{F}}(V)\leq C_V(P)=P$, and so $\mathcal{N}_{\pi'\mathfrak{F}}(V)=1$, which is absurd. Hence $P\leq \mathcal{N}_{\pi'\mathfrak{F}}(V)$. It follows that $P\leq N_V(L^\mathfrak{F}O_{\pi'}(V))$, and thereby $L^\mathfrak{F}O_{\pi'}(V)\unlhd V$. Then $L^\mathfrak{F}O_{\pi'}(V)\leq C_V(P)=P$. This induces that $L^\mathfrak{F}=1$. Therefore, $L\in \mathfrak{F}$, a contradiction. This shows that (1) implies (3). Consequently, (1) is equivalent to (3). The theorem is thus proved.\par \noindent\textbf{Proof of Theorem E.} We can prove the theorem similarly as in the proof of Theorem C by using \cite[Theorem 4.22]{Guo}.\par Now we give some conditions under which the formations satisfy the $\mathbb{P}$-boundary condition (I) (resp. the $\mathbb{P}$-boundary condition (II), the $\mathbb{P}$-boundary condition (III), the $\mathbb{P}$-boundary condition (III) in $\mathfrak{S}$). Recall that if $\sigma$ denotes a linear ordering on $\mathbb{P}$, then a group $G$ is called a Sylow tower group of complexion (or type) $\sigma$ if there exists a series of normal subgroups of $G$: $1=G_0\leq G_1\leq\cdots\leq G_n=G$ such that $G_i/G_{i-1}$ is a Sylow $p_i$-subgroup of $G/G_{i-1}$ for $1\leq i\leq n$, where $p_1\prec p_2\prec\cdots\prec p_n$ is the ordering induced by $\sigma$ on the distinct prime divisors of $|G|$. Let $\mathfrak{T}_\sigma$ denote the class of all Sylow tower groups of complexion $\sigma$. By \cite[Chap. IV, Example 3.4(g)]{Doe}, $\mathfrak{T}_\sigma$ is a saturated formation. Also, a formation $\mathfrak{F}$ is said to be a $\check{S}$-formation (or have the Shemetkov property) if $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mbox{Crit}_\mathtt{S}(\mathfrak{N})\cup \{\mbox{cyclic groups of prime order}\}$. Clearly, $\mathfrak{N}_\pi$ is a $\check{S}$-formation. For details and more examples, see \cite[Section 3.5]{Guo4}. Moreover, a group $G$ is said to be $\pi$-closed if $G$ has a normal Hall $\pi$-subgroup. Let $\mathfrak{C}_{\pi}$ denote the formation of all $\pi$-closed groups.\par \noindent\textbf{Proposition 3.6.} A formation $\mathfrak{F}$ satisfies the $\mathbb{P}$-boundary condition (I) if one of the following holds:\par (1) $\mathfrak{F}\subseteq \mathfrak{T}_\sigma$.\par (2) $\mathfrak{F}$ is a $\check{S}$-formation.\par (3) $\mathfrak{F}\subseteq \mathfrak{C}_{2}$.\par (4) $\mathfrak{F}\subseteq \mathfrak{N}_{2}$.\par \noindent\textit{Proof.} (1) By \cite[Theorem 8]{Ros}, $\mbox{Crit}_\mathtt{S}(\mathfrak{T}_\sigma)\subseteq \mathfrak{S}$, and so $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{T}_\sigma\cup \mbox{Crit}_\mathtt{S}(\mathfrak{T}_\sigma)\subseteq \mathfrak{S}$.\par Statement (2) is clear by definition.\par (3) Note that $\mathfrak{C}_{2}$ is a $\check{S}$-formation by \cite[Remark]{Sta}. 
Then by Feit-Thompson Theorem, $\mbox{Crit}_\mathtt{S}(\mathfrak{F})\subseteq \mathfrak{C}_{2}\cup \mbox{Crit}_\mathtt{S}(\mathfrak{C}_{2})\subseteq \mathfrak{S}$.\par The proof of statement (4) is similar to (3).\par \noindent\textbf{Proposition 3.7.} A formation $\mathfrak{F}$ satisfies the $\mathbb{P}$-boundary condition (II) if one of the following holds:\par (1) $\mathfrak{F}\subseteq \mathfrak{N}$.\par (2) $\mathfrak{F}\subseteq \mathfrak{G}_{2'}$ (equivalently, $2\notin\pi(\mathfrak{F})$).\par \noindent\textit{Proof.} (1) By \cite[Chap. IV, Satz 5.4]{Hup}, for any $p\in\mathbb{P}$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ \mathfrak{N})=\mbox{Crit}_\mathtt{S}(\mathfrak{N}_{p'})\subseteq \mbox{Crit}_\mathtt{S}(\mathfrak{N})\subseteq\mathfrak{S}$. Hence $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ \mathfrak{F})\subseteq \mathfrak{N}_{p'}\cup\mbox{Crit}_\mathtt{S}(\mathfrak{N}_{p'})\subseteq \mathfrak{S}$.\par (2) Note that by \cite[Remark]{Sta}, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_2\circ \mathfrak{G}_{2'})=\mbox{Crit}_\mathtt{S}(\mathfrak{C}_{2})\subseteq \mathfrak{S}$, and for any odd prime $p$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ \mathfrak{G}_{2'})=\mbox{Crit}_\mathtt{S}(\mathfrak{G}_{2'})=\{\mbox{cyclic group of order 2}\}\subseteq \mathfrak{S}$. Hence for any $p\in\mathbb{P}$, $\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ \mathfrak{F})\subseteq (\mathfrak{G}_p\circ \mathfrak{G}_{2'})\cup\mbox{Crit}_\mathtt{S}(\mathfrak{G}_p\circ \mathfrak{G}_{2'})\subseteq \mathfrak{S}$.\par Recall that a group $G$ is called $p$-decomposable if there exists a subgroup $H$ of $G$ such that $G=P\times H$ for some Sylow $p$-subgroup $P$ of $G$. Also, we use $\mathfrak{N}^r$ to denote the class of all groups $G$ with $l(G)\leq r$, where $l(G)$ is the Fitting length of $G$.\par \noindent\textbf{Proposition 3.8.} (1) Let $\mathfrak{F}$ be a formation with $\pi(\mathfrak{F})=\mathbb{P}$ such that $\mathfrak{F}\subseteq \mathfrak{N}$. Then $\mathfrak{F}$ satisfies the $\mathbb{P}$-boundary condition (III).\par (2) Let $\mathfrak{L}$ be the formation of all $p$-decomposable groups. Then $\mathfrak{N}^r\circ\mathfrak{L}$ satisfies the $\mathbb{P}$-boundary condition (III) in $\mathfrak{S}$.\par (3) Let $\mathfrak{F}$ be a formation with $\pi(\mathfrak{F})=\mathbb{P}$ such that $\mathfrak{F}\subseteq \mathfrak{N}$. Then $\mathfrak{N}^r\circ\mathfrak{F}$ satisfies the $\mathbb{P}$-boundary condition (III) in $\mathfrak{S}$.\par \noindent\textit{Proof.} Statement (1) was proved in \cite[Proposition 4.9(ii)]{Guo}, and statement (2) follows from \cite[Lemma 5.2]{Ski1} and \cite[Proposition 4.9(i)]{Guo}.\par (3) By \cite[Chap. IV, Theorem 1.16]{Doe}, we have that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. It follows from (1) and \cite[Proposition 4.9(i)]{Guo} that $\mathfrak{N}^r\circ\mathfrak{F}$ satisfies the $\mathbb{P}$-boundary condition (III) in $\mathfrak{S}$.\par \section{Applications} In this section, we investigate the structure of groups $G$ whose minimal subgroups are contained in $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$. Let $\Psi_{p}(G)=\langle x|x\in G, o(x)=p\rangle$ if $p$ is odd, and $\Psi_2(G)=\langle x|x\in G, o(x)=2\rangle$ if the Sylow 2-groups of $G$ are quaternion-free, otherwise $\Psi_2(G)=\langle x|x\in G, o(x)=2\mbox{ or }4\rangle$.\par \noindent\textbf{Lemma 4.1.} Suppose that $\Psi_{p}(G^{\mathfrak{N}_p})\leq Z_{\mathfrak{N}_p}(G)$. 
Then $G\in \mathfrak{N}_p$.\par \noindent\textit{Proof.} By Lemma 2.8(3), for any subgroup $H$ of $G$, $\Psi_{p}(H^{\mathfrak{N}_p})\leq H\cap Z_{\mathfrak{N}_p}(G)\leq Z_{\mathfrak{N}_p}(H)$. Then by induction, $H\in \mathfrak{N}_p$. We may, therefore, assume that $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{N}_p)$. It follows from \cite[Chap. IV, Satz 5.4]{Hup} that $G^{\mathfrak{N}_p}$ is a Sylow $p$-subgroup of $G$. By \cite[Theorem 1.1]{Sem}, $G^{\mathfrak{N}_p}/\Phi(G^{\mathfrak{N}_p})$ is a $G$-chief factor, and the exponent of $G^{\mathfrak{N}_p}$ is $p$ or $4$ (when $p=2$ and $G^{\mathfrak{N}_p}$ is non-abelian). If $p=2$ and $G^{\mathfrak{N}_2}$ is non-abelian and quaternion-free, then by \cite[Theorem 3.1]{War}, $G^{\mathfrak{N}_2}$ has a characteristic subgroup $L$ of index 2. This induces that $L=\Phi(G^{\mathfrak{N}_2})$, and so $G^{\mathfrak{N}_2}$ is cyclic, which is contrary to our assumption. Hence $p$ is odd or $p=2$ and $G^{\mathfrak{N}_2}$ is either abelian or not quaternion-free. This implies that $G^{\mathfrak{N}_p}=\Psi_{p}(G^{\mathfrak{N}_p})\leq Z_{\mathfrak{N}_p}(G)$, and thereby $G\in \mathfrak{N}_p$. The lemma is thus proved.\par \noindent\textbf{Lemma 4.2.} Let $\mathfrak{F}$ be a saturated formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ and $\pi\subseteq \pi(\mathfrak{F})$. Suppose that $\Psi_{p}(G^{\mathfrak{F}})\leq Z_{\pi\mathfrak{F}}(G)$ for every $p\in \pi$. Then $G\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$.\par \noindent\textit{Proof.} Assume that the result is false and let $G$ be a counterexample of minimal order. If $O_{\pi'}(G)>1$, then for every $p\in \pi$, $\Psi_{p}(G^{\mathfrak{F}}O_{\pi'}(G)/O_{\pi'}(G))\leq Z_{\pi\mathfrak{F}}(G)/O_{\pi'}(G)=Z_{\pi\mathfrak{F}}(G/O_{\pi'}(G))$ by Lemma 2.8(1). Hence by the choice of $G$, $G/O_{\pi'}(G)\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$, and thereby $G\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$, which is impossible. Therefore, $O_{\pi'}(G)=1$. Let $M$ be any maximal subgroup of $G$. Since $M^{\mathfrak{F}}\leq G^{\mathfrak{F}}$ by Lemma 2.1(2), for every $p\in \pi$, $\Psi_{p}(M^{\mathfrak{F}})\leq M\cap Z_{\pi\mathfrak{F}}(G)\leq Z_{\pi\mathfrak{F}}(M)$ by Lemma 2.8(3). Then by the choice of $G$, $M\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$. We may, therefore, assume that $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_{\pi'}\circ\mathfrak{F})$.\par If $Z_{\pi\mathfrak{F}}(G)\nleq \Phi(G)$, then $G$ has a maximal subgroup $M$ such that $G=Z_{\pi\mathfrak{F}}(G)M$. It follows that $G/Z_{\pi\mathfrak{F}}(G)\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$. By Lemmas 2.8(4) and 2.8(6), $G\in \mathfrak{G}_{\pi'}\circ\mathfrak{F}$, a contradiction. Hence $Z_{\pi\mathfrak{F}}(G)\leq \Phi(G)$ is nilpotent. Since $O_{\pi'}(G)=1$, $Z_{\pi\mathfrak{F}}(G)$ is a $\pi$-group. Then it is easy to see that $Z_{\pi\mathfrak{F}}(G)=Z_{\mathfrak{F}}(G)$. By \cite[Chap. IV, Theorem 6.10]{Doe}, for every $p\in \pi$, $\Psi_{p}(G^{\mathfrak{F}})\leq Z_{\mathfrak{F}}(G)\cap G^{\mathfrak{F}}\leq Z(G^{\mathfrak{F}})$. It follows from Lemma 4.1 that $G^{\mathfrak{F}}\in \mathfrak{N}_\pi$. As $O_{\pi'}(G)=1$, we have that $G^{\mathfrak{F}}\in \mathfrak{N}\cap \mathfrak{G}_\pi$. Since $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{G}_{\pi'}\circ\mathfrak{F})$ and $\mathfrak{F}=\mathtt{S}\mathfrak{F}$, $G\in \mbox{Crit}_\mathtt{S}(\mathfrak{F})$. 
Then a similar discussion as in the proof of Lemma 4.1 shows that $G^{\mathfrak{F}}$ is a $p$-group with $p\in\pi$ such that the exponent of $G^{\mathfrak{F}}$ is $p$ or $4$ (when $p=2$ and $G^{\mathfrak{F}}$ is not quaternion-free) by using \cite[Theorem 1.1]{Sem}. This implies that $G^{\mathfrak{F}}=\Psi_{p}(G^{\mathfrak{F}})\leq Z_{\mathfrak{F}}(G)$, and so $G\in \mathfrak{F}$. The final contradiction ends the proof.\par \noindent\textbf{Theorem 4.3.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$. Suppose that $\Psi_{p}(G^{\mathfrak{N}_\pi\circ\mathfrak{F}})\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)$ for every $p\in \pi$ and one of the following holds:\par (i) $G\in \mathfrak{S_{\pi}}\circ\mathfrak{F}$.\par (ii) $\mathfrak{F}$ satisfies the $\pi$-boundary condition (II).\par (iii) $2\in\pi$ and $\mathfrak{F}$ satisfies the $\{2\}$-boundary condition (II).\par (iv) $\{2,q\}'\subseteq\pi$, where $q$ is an odd prime, and $\mathfrak{F}$ satisfies the $\{2,q\}'$-boundary condition (II).\par \noindent Then $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par \noindent\textit{Proof.} If either the condition (i) or the condition (ii) holds, then by Theorem B(1), Theorem D and Lemma 2.8(6), $\Psi_{p}(G^{\mathfrak{N}_\pi\circ\mathfrak{F}})\leq \mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)=Z_{\pi(\mathfrak{N}\circ\mathfrak{F})}(G)=Z_{\pi(\mathfrak{N}_{\pi}\circ\mathfrak{F})}(G)$ for every $p\in \pi$. Hence in both cases, the theorem follows from Lemma 4.2.\par Now suppose that the condition (iii) holds. Then it is easy to see that $\mathcal{N}_{\pi'\mathfrak{F}}(G)\leq \mathcal{N}_{2'\mathfrak{F}}(G)$ by definition, and so $\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mathcal{N}_{2'\mathfrak{F}}^{\infty}(G)$. It follows that $\Psi_{2}(G^{\mathfrak{N}_2\circ\mathfrak{F}})\leq \Psi_{2}(G^{\mathfrak{N}_\pi\circ\mathfrak{F}})\leq\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mathcal{N}_{2'\mathfrak{F}}^{\infty}(G)$. By applying the condition (ii), $G\in \mathfrak{N}_2\circ\mathfrak{F}\subseteq \mathfrak{S}\circ\mathfrak{F}$, and thereby the condition (i) holds. Therefore, $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par Finally, we assume that the condition (iv) holds. Then for every $p\in \{2,q\}'$, $\Psi_{p}(G^{\mathfrak{N}_{\{2,q\}'}\circ\mathfrak{F}})\leq \Psi_{p}(G^{\mathfrak{N}_\pi\circ\mathfrak{F}})\leq\mathcal{N}_{\pi'\mathfrak{F}}^{\infty}(G)\leq \mathcal{N}_{\{2,q\}\mathfrak{F}}^{\infty}(G)$. By applying the condition (ii), $G\in \mathfrak{N}_{\{2,q\}'}\circ\mathfrak{F}\subseteq \mathfrak{S}\circ\mathfrak{F}$ by Burnside's $p^aq^b$-theorem, and so the condition (i) holds. Hence $G\in \mathfrak{N}_\pi\circ\mathfrak{F}$.\par The next two corollaries can be regarded as generalizations of \cite[Theorem 5.2]{She} and \cite[Theorem 5.3]{She}, respectively.\par \noindent\textbf{Corollary 4.4.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ and $\mathfrak{F}\subseteq \mathfrak{U}$. Suppose that all cyclic subgroups of $G$ of odd prime order are contained in $\mathcal{N}_{\mathfrak{F}}^{\infty}(G)$. Then:\par (1) $G\in \mathfrak{S}$.\par (2) The $p$-length of $G$ is at most 2 for every odd prime $p$, and if $\mathfrak{F}\subseteq \mathfrak{N}$, then the $p$-length of $G$ is at most 1 for every odd prime $p$.\par (3) The Fitting length of $G$ is bounded by 4, and if $\mathfrak{F}\subseteq \mathfrak{N}$, then the Fitting length of $G$ is bounded by 3.\par \noindent\textit{Proof.} By \cite[Chap. 
IV, Satz 5.4]{Hup}, $\mathfrak{N}_2$ satisfies the $2'$-boundary condition (II), and so $\mathfrak{F}$ also satisfies the $2'$-boundary condition (II) for $\mathfrak{F}\subseteq \mathfrak{U}\subseteq \mathfrak{N}_2$. Since $\Psi_{p}(G^{\mathfrak{N}_{2'}\circ\mathfrak{F}})\leq \mathcal{N}_{\mathfrak{F}}^{\infty}(G)\leq \mathcal{N}_{\{2\}\mathfrak{F}}^{\infty}(G)$ for every odd prime $p$, by Theorem 4.3, $G\in \mathfrak{N}_{2'}\circ\mathfrak{F}\subseteq \mathfrak{S}$. Hence for every odd prime $p$, $G^{\mathfrak{U}}\leq G^\mathfrak{F}\in \mathfrak{N}_{2'}\subseteq \mathfrak{N}_p$, and so the $p$-length of $G$ is at most 2 for every odd prime $p$. It is clear that $G^{\mathfrak{N}^2}\leq G^\mathfrak{U}\in \mathfrak{N}_{2'}\subseteq \mathfrak{N}^2$. This implies that the Fitting length of $G$ is bounded by 4. Now suppose that $\mathfrak{F}\subseteq \mathfrak{N}$. The argument is similar to the one above.\par \noindent\textbf{Corollary 4.5.} Let $\mathfrak{F}$ be a formation such that $\mathfrak{F}=\mathtt{S}\mathfrak{F}$ and $\mathfrak{F}\subseteq \mathfrak{U}$. Suppose that all cyclic subgroups of $G$ of prime order or of order 4 are contained in $\mathcal{N}_{\mathfrak{F}}^{\infty}(G)$. Then:\par (1) $G\in \mathfrak{S}$.\par (2) The $p$-length of $G$ is at most 2 for every $p\in\mathbb{P}$, and if $\mathfrak{F}\subseteq \mathfrak{N}$, then the $p$-length of $G$ is at most 1 for every $p\in\mathbb{P}$.\par (3) The Fitting length of $G$ is bounded by 3, and if $\mathfrak{F}\subseteq \mathfrak{N}$, then the Fitting length of $G$ is bounded by 2.\par \noindent\textit{Proof.} The corollary can be proved similarly to Corollary 4.4.\par \noindent\textbf{Acknowledgments.}\par The authors are very grateful to the referee for his/her careful reading and helpful comments.\par \end{document}
\begin{document} \begin{frontmatter}[classification=text] \title{Simplicial Homeomorphs and Trace-Bounded Hypergraphs} \author[jl]{Jason Long} \author[bn]{Bhargav Narayanan\thanks{Supported in part by NSF grants DMS-1800521 and CCF-1814409}} \author[cy]{Corrine Yap\thanks{Supported in part by NSF grant DMS-1800521}} \begin{abstract} Our first main result is the following basic fact about simplicial complexes: for each $k \in \mathbb{N}$, there exists an exponent $\lambda_k \ge k^{-2k^2}$ such that for any $k$-complex $\mathcal{S}$, every $k$-complex on $n \ge n_0(\mathcal{S})$ vertices with at least $n^{k+1 - \lambda_k}$ facets contains a homeomorphic copy of $\mathcal{S}$. The existence of these exponents was suggested by Linial in 2006 but was previously known only in dimensions one and two, both by highly dimension-specific arguments: the existence of $\lambda_1$ is a result of Mader from 1967, and the existence of $\lambda_2$ was established by Keevash--Long--Narayanan--Scott in 2020. We deduce this geometric theorem from a purely combinatorial result about trace-bounded hypergraphs, where an $r$-partite $r$-graph $H$ with partition classes $V_1, V_2, \dots, V_r$ is said to be $d$-trace-bounded if for each $2 \le i \le r$, all the vertices of $V_i$ have degree at most $d$ in the trace of $H$ on $V_1 \cup V_2 \cup \dots \cup V_i$. Our second main result is the following fact about degenerate trace-bounded hypergraphs: for all $r \ge 2$ and $d\in\mathbb{N}$, there exists an exponent $\alpha_{r,d} \ge (5rd)^{1-r}$ such that for any $d$-trace-bounded $r$-partite $r$-graph $H$, every $r$-graph on $n \ge n_0(H)$ vertices with at least $n^{r - \alpha_{r,d}}$ edges contains a copy of $H$. This strengthens a theorem of Conlon--Fox--Sudakov from 2009, who showed that a similar result holds for $r$-partite $r$-graphs $H$ satisfying the stronger hypothesis that the vertex-degrees in all but one of its partition classes are bounded (in $H$, as opposed to in its traces). \end{abstract} \end{frontmatter} \section{Introduction} This paper aims to answer the following basic geometric question about $k$-dimensional simplicial complexes (or \emph{$k$-complexes} for short) that arises in the `high-dimensional combinatorics' programme of Linial~\cite{linial4, linial3}. \begin{problem}\label{kgraphhomeo} Given a $k$-complex $\mathcal{S}$, how many facets can a $k$-complex on $n$ vertices have if it contains no homeomorphic copy of $\mathcal{S}$? \end{problem} For a $k$-complex $\mathcal{S}$, let $\lambda(\mathcal{S})$ be the supremum over all $\lambda$ for which the maximum number of facets in a $k$-complex on $n$ vertices with no homeomorphic copy of $\mathcal{S}$ is $O(n^{k+1 - \lambda})$. It is essentially folklore --- see~\cite{myhomeo} for a discussion --- that $\lambda(\mathcal{S}) > 0$ for every $k$-complex $\mathcal{S}$. A much more intriguing possibility, namely that for every $k$-complex $\mathcal{S}$, $\lambda(\mathcal{S})$ is bounded below uniformly by some universal exponent $\lambda_k > 0$ that depends only on the dimension $k$, was suggested by Linial~\cite{linialques1, linialques2} (explicitly for dimension two and implicitly for higher dimensions); our first main result establishes this in every dimension.
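To make the definition of $\lambda(\mathcal{S})$ concrete, it is perhaps worth recording the folklore one-dimensional computation that anchors it; the specific complex below is chosen purely for illustration. Take $k = 1$ and let $\mathcal{S}$ be the triangle, namely the $1$-complex with facets $\{1,2\}$, $\{2,3\}$ and $\{1,3\}$. A $1$-complex (that is, a graph) on $n$ vertices containing no homeomorphic copy of the triangle contains no cycle at all, and is therefore a forest with at most $n-1$ facets, while any tree on $n$ vertices attains this bound; the extremal count is thus exactly $n-1$, which is $O(n^{2 - \lambda})$ precisely when $\lambda \le 1$, so $\lambda(\mathcal{S}) = 1$ here. In particular, no universal exponent $\lambda_1$ as above can exceed $1$.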
\begin{theorem}\label{mainthm1} For all $k \in \mathbb{N}$, there is a $\lambda_k \ge k^{-2k^2}$ such that for any $k$-complex $\mathcal{S}$, every $k$-complex on $n \ge n_0(\mathcal{S})$ vertices with at least $n^{k+1 - \lambda_k}$ facets contains a homeomorphic copy of $\mathcal{S}$. \end{theorem} The existence of such universal exponents $\lambda_k$ as in Theorem~\ref{mainthm1} was previously known only for $k = 1$ and $k=2$; that the optimal value of $\lambda_1$ is $1$ is a classical result of Mader~\cite{mader1}, and that $\lambda_2 \ge 1/5$ was shown recently by Keevash, Scott and the first and second authors~\cite{myhomeo}. The conjecturally optimal value of $\lambda_2$ is $1/2$, and establishing this remains open, though a beautiful recent result of Kupavskii, Polyanskii, Tomon, and Zakharov~\cite{surfaces} establishes that $\lambda(\mathcal{S}) = 1/2$ whenever $\mathcal{S}$ is the triangulation of any closed orientable two-dimensional surface, generalising a classical result of Brown, Erd\H{o}s and S\'os~\cite{bes} establishing this fact for the two-sphere. It is worth mentioning that all of~\cite{bes, myhomeo, surfaces, mader1} proceed via arguments that are highly specific to dimensions one and two. Indeed, the main novelty in the proof of Theorem~\ref{mainthm1} is our ability to simultaneously handle all dimensions; this generality comes at a cost, however, since the resulting bounds in low dimensions are not very competitive with those in the aforementioned results. We shall deduce Theorem~\ref{mainthm1} from a purely combinatorial result, of some independent interest, about the Tur\'an numbers of a large class of $r$-uniform hypergraphs (or \emph{$r$-graphs}, for short). For an $r$-partite $r$-graph $H$, let $\alphapha(H)$ be the supremum over all $\alphapha$ for which the maximum number of edges in an $r$-graph on $n$ vertices with no copy of $H$ is $O(n^{r - \alphapha})$. A well-known result of Erd\H{o}s~\cite{degen} says that $\alphapha(H) > 0$ for every $r$-partite $r$-graph $H$; this value $\alphapha(H)$ is called the \emph{Tur\'an exponent} of $H$, and the determination of these exponents is the central problem --- see~\cite{turan3, turan2} --- of degenerate Tur\'an theory. To state our second result, we need a definition. We say that an $r$-partite $r$-graph $H$ with partition classes $V_1, V_2, \dots, V_r$ is \emph{$d$-trace-bounded} if for each $2 \le i \le r$, all the vertices of $V_i$ have degree at most $d$ in the trace of $H$ on $V_1 \cup V_2 \cup \dots \cup V_i$. Our second main result establishes the existence of universal lower bounds on the Tur\'an exponents of degenerate trace-bounded hypergraphs. \begin{theorem}\label{mainthm2} For all $r \ge 2$ and $d\in\mathbb{N}$, there is an $\alphapha_{r,d} \ge (5rd)^{1-r}$ such that for any $d$-trace-bounded $r$-partite $r$-graph $H$, every $r$-graph on $n \ge n_0(H)$ vertices with at least $n^{r - \alphapha_{r,d}}$ edges contains a copy of $H$. \end{theorem} Theorem~\ref{mainthm2} generalises a result of Conlon, Fox and Sudakov~\cite{CFS} which asserts, for all $r\ge 2$ and $d \in \mathbb{N}$, the existence of exponents $\alphapha'_{r,d} > 0$ (of order roughly $(rd)^{1-r}$ as well) with the following property: for any $r$-partite $r$-graph $H$ with partition classes $V_1, V_2, \dots, V_r$ in which the degrees of the vertices in each of $V_2, V_3, \dots, V_r$ are at most $d$ in $H$, we have $\alphapha(H) \ge \alphapha'_{r,d}$. 
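As a simple illustration of the gap between these two hypotheses, fix $m \in \mathbb{N}$ and consider the $3$-partite $3$-graph $H$ with partition classes $V_1 = \{a\}$, $V_2 = \{b\}$ and $V_3 = \{c_1, c_2, \dots, c_m\}$ whose edges are the sets $\{a, b, c_i\}$ for $1 \le i \le m$; this toy example is included only to make the definitions concrete. The vertex $b$ has degree $m$ in $H$, so the degree hypothesis just described applies only with $d \ge m$. On the other hand, the trace of $H$ on $V_1 \cup V_2$ consists of the single pair $\{a,b\}$, and every vertex of $V_3$ lies in exactly one edge, so $H$ is $1$-trace-bounded and Theorem~\ref{mainthm2} applies with $d = 1$.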
It is easy to see that any $r$-partite $r$-graph $H$ to which the aforementioned result applies is $d$-trace-bounded as well, so Theorem~\ref{mainthm2} clearly implies this result. Of course, Theorem~\ref{mainthm2} is genuinely stronger than the result in~\cite{CFS} since not every trace-bounded $r$-partite $r$-graph has bounded vertex-degrees in all but one of its partition classes, and indeed, the full strength of Theorem~\ref{mainthm2} will be crucial in proving Theorem~\ref{mainthm1}. Two more remarks about Theorem~\ref{mainthm2} are in order. First, the fact that the optimal value of $\alphapha_{2,d}$ is $1/d$ (as opposed to the $1/(10d)$ promised by Theorem~\ref{mainthm2}) is a result of Alon, Krivelevich and Sudakov~\cite{AKS} which may also be read out of earlier work of F\"uredi~\cite{Furedi}. Second, it is known that, in a sense, something like the trace-boundedness hypothesis in Theorem~\ref{mainthm2} is necessary if one expects to control the Tur\'an exponent in terms of vertex-degrees; indeed, from~\cite{KMV}, we know that for every $\ensuremath{\varepsilon} > 0$, there exists a 3-partite 3-graph $H$ with all the vertex-degrees in one of its partition classes being 1 for which $\alphapha(H) \le \ensuremath{\varepsilon}$. Let us summarise the discussion above by specialising to $3$-graphs. For a 3-partite 3-graph $H$ with partition classes $V_1$, $V_2$ and $V_3$, we have the following conclusions about its Tur\'an exponent $\alphapha(H)$, listed below in order of decreasing strength of the hypotheses on $H$. \begin{enumerate} \item If the degrees of the vertices in both $V_3$ and $V_2$ are bounded above by $d$ in $H$, then~\cite{CFS} shows that $\alphapha(H)\ge 1/(15d)^2$. \item If the degrees of the vertices in $V_3$ are bounded above by $d$ in $H$, and the degrees of the vertices in $V_2$ are bounded above by $d$ in the trace of $H$ on $V_1 \cup V_2$, then Theorem~\ref{mainthm2} says that $\alphapha(H) \ge 1/(15d)^2$. \item If all we know is that the degrees of the vertices in $V_3$ are bounded above by $d$ in $H$, then~\cite{KMV} shows that $\alphapha(H)$ need not be bounded below uniformly in terms of $d$, even when $d = 1$. \end{enumerate} This paper is organised as follows. We begin by establishing some notation and making precise some of the undefined terminology appearing in the introduction in Section~\ref{sec:prelim}. The deduction of Theorem~\ref{mainthm1} from Theorem~\ref{mainthm2} is given in Section~\ref{sec:homs}, and the proof of Theorem~\ref{mainthm2} follows in Section~\ref{sec:proof}. We conclude with a discussion of some open problems in Section~\ref{sec:conc}. \section{Preliminaries}\label{sec:prelim} We shall only consider \emph{homogeneous} $k$-complexes, namely $k$-complexes all of whose facets are $k$-dimensional. Consequently, we may specify a $k$-complex $\mathcal{S}$ over a vertex set $V$ by listing the family $F$ of its $k$-dimensional facets, each of which is a subset of $V$ of cardinality $k+1$ (though $\mathcal{S}$ is, strictly speaking, the family of all subsets of its facets). We say that a $k$-complex $\mathcal{T}$ contains a \emph{homeomorph} (or a \emph{homeomorphic copy}) of a $k$-complex $\mathcal{S}$ if there is a subcomplex of $\mathcal{T}$ that is homeomorphic to $\mathcal{S}$. An $r$-graph $G$ on a vertex set $V$ is a family $E$ of $r$-element subsets of $V$ called the edges of $G$. A $k$-complex $\mathcal{S}$ may hence be identified with a $(k+1)$-graph $G$ by viewing the facets of $\mathcal{S}$ as the edges of $G$, and vice versa. 
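For concreteness, the boundary of the tetrahedron on the vertex set $\{1,2,3,4\}$, namely the $2$-complex whose facets are the four triples $\{1,2,3\}$, $\{1,2,4\}$, $\{1,3,4\}$ and $\{2,3,4\}$, corresponds under this identification to the complete $3$-graph on four vertices.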
When we specify a $k$-complex by its set of facets alone, or an $r$-graph by its edge set alone, the vertex set of the $k$-complex or the $r$-graph in question is taken to be the span of the facets or the edges respectively. Since most of the work here will be in proving Theorem~\ref{mainthm2}, let us set out some more notation for working with an $r$-graph $G$. We write $v(G)$ and $e(G)$ for the number of vertices and edges of $G$ respectively. The \emph{link $\mathcal{L}(v, G)$} of a vertex $v\in V(G)$ in $G$ is the $(r-1)$-graph whose edges are those sets $S$ for which $\{v\} \cup S$ is an edge of $G$, and the \emph{degree $\Deg(v, G)$} of $v$ is the number of edges of $G$ containing $v$, or equivalently $\Deg(v, G) = e(\mathcal{L}(v, G))$. For an $(r-1)$-graph $J$ with $V(J) \subset V(G)$, its \emph{common neighbourhood $\Gamma(J, G)$} in $G$ is the set of vertices $v \in V(G)$ for which $\{v\} \cup S$ is an edge of $G$ for each edge $S \in E(J)$. Finally, for a subset $U \subset V(G)$ of the vertex set of $G$, the \emph{trace $\Tr(G, U)$} of $G$ on $U$ is the family $\{S \cap U: S \in E(G)\}$. An $r$-graph $G$ is \emph{$r$-partite} if its vertex set admits a partition $V(G) = V_1 \cup V_2 \cup \dots \cup V_r$ such that every edge of $G$ contains exactly one vertex each from each of these $r$ partition classes. When $G$ is an $r$-partite $r$-graph with partition classes $V_1, V_2, \dots, V_r$, we abbreviate $\Tr(G, V_1\cup V_2 \cup \dots \cup V_i)$ by $\Tr_i(G)$, noting that $\Tr_i(G)$ is an $i$-graph for each $1 \le i \le r$. Finally, we say that an $r$-partite $r$-graph $G$ with partition classes $V_1, V_2, \dots, V_r$ is \emph{$d$-trace-bounded} if for each $2 \le i \le r$, we have $\Deg(v, \Tr_i(G)) \le d$ for each $v \in V_i$. It will be convenient for us to work with a large $r$-partite subgraph of a given $r$-graph; the following fact facilitates this, and follows from an easy averaging argument. \begin{proposition}\label{triedges} Any $r$-graph on $rn$ vertices with $m$ edges contains an $r$-partite subgraph with partition classes each of size $n$ and at least $(r! / r^r)m$ edges. \qed \end{proposition} \section{Homeomorphs}\label{sec:homs} Our proof of Theorem~\ref{mainthm1} relies on the following construction. Given a $k$-complex $\mathcal{S}$, the \emph{canonical subdivision of $\mathcal{S}$} is a $k$-complex $\tilde \mathcal{S}$ homeomorphic to $\mathcal{S}$ constructed as follows. The vertex set of $\tilde \mathcal{S}$ is given by \[ V(\tilde \mathcal{S}) = V(\mathcal{S}) \cup \{v_T: T \subset V(\mathcal{S}), |T| \ge 2, \text{ and $T$ is contained in some facet of $\mathcal{S}$} \};\] in other words, we start with $V(\mathcal{S})$ and for each $2 \le t \le k+1$, we introduce a new vertex $v_T$ for each $t$-set $T$ contained in some facet of $\mathcal{S}$. The facets $F(\tilde \mathcal{S})$ of $\tilde \mathcal{S}$ are then obtained by subdividing each facet of $\mathcal{S}$ into $(k+1)!$ facets as follows: for each facet $S \in F(\mathcal{S})$ of $\mathcal{S}$, consider the $(k+1)!$ possible chains \[ \{v\} \subsetneq T_2 \subsetneq T_3 \subsetneq \dots \subsetneq T_k \subsetneq S \] with $v$ a vertex of $\mathcal{S}$, and include $\{v, v_{T_2}, v_{T_3}, \dots, v_{T_k}, v_S\}$ in $F(\tilde \mathcal{S})$. It is not hard to see that $\tilde \mathcal{S}$ is homeomorphic to $\mathcal{S}$, as illustrated in Figure~\ref{fig:subd}. 
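To illustrate the construction with the smallest interesting case, take $k = 2$ and let $\mathcal{S}$ consist of a single facet $S = \{1,2,3\}$. Then $\tilde \mathcal{S}$ has the seven vertices $1, 2, 3, v_{\{1,2\}}, v_{\{1,3\}}, v_{\{2,3\}}, v_S$, and its $3! = 6$ facets are the sets $\{i, v_T, v_S\}$ with $i \in S$ and $T$ one of the two $2$-subsets of $S$ containing $i$; for instance, the chain $\{1\} \subsetneq \{1,2\} \subsetneq S$ contributes the facet $\{1, v_{\{1,2\}}, v_S\}$. In other words, in this special case $\tilde \mathcal{S}$ is precisely the barycentric subdivision of the triangle.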
\begin{proof}[Proof of Theorem~\ref{mainthm1}] We shall prove the result with $\lambda_k = \alphapha_{k+1, (k+1)!}$, where $\alphapha_{k+1, (k+1)!}$ is as promised by Theorem~\ref{mainthm2}. We note that this establishes the bound \[\lambda_k \ge (5(k+1)(k+1)!)^{-k} \ge k^{-2k^2}\] for $k \ge 3$; that $\lambda_k \ge k^{-2k^2}$ for all $k \in \mathbb{N}$ follows from the facts, respectively from~\cite{mader1} and~\cite{myhomeo}, that $\lambda_1 \ge 1$ and $\lambda_2 \ge 1/5$. Given a $k$-complex $\mathcal{S}$, we first construct its canonical subdivision $\tilde \mathcal{S}$ as described above. When this $k$-complex $\tilde \mathcal{S}$ is viewed as a $(k+1)$-graph, it is clear that it is $(k+1)$-partite with partition classes $V_1, V_2, \dots, V_{k+1}$, where $V_1 = V(\mathcal{S})$ and for $2 \le t \le k+1$, $V_t$ consists of those new vertices $v_T$ introduced in $\tilde \mathcal{S}$ for each $t$-set $T$ contained in some facet of $\mathcal{S}$. Furthermore, $\tilde \mathcal{S}$ is $((k+1)!)$-trace-bounded; indeed, it is easy to see, for each $2 \le t \le k+1$, that for every $v \in V_t$, we have $\Deg(v, \Tr_t(\tilde \mathcal{S})) = t! \le (k+1)!$. It follows from Theorem~\ref{mainthm2} that provided $n \ge n_0(\mathcal{S})$ is large enough, any $k$-complex on $n$ vertices with $n^{k+1-\lambda_k}$ facets contains $\tilde \mathcal{S}$ as a subcomplex, and therefore, a homeomorph of $\mathcal{S}$. \end{proof} \begin{figure} \caption{The construction of $\tilde \mathcal{S} \label{fig:subd} \end{figure} \section{Trace-bounded hypergraphs}\label{sec:proof} We start with a lemma that says that if an $r$-partite $r$-graph has many edges and a small number of (small) subgraphs that are `marked' as being bad, then we may pass to an $(r-1)$-partite $(r-1)$-graph in one of its traces that has similar properties. To state this key lemma, we need a little set up, to which we now turn. For each $r, d \in \mathbb{N}$, let $\mathcal{H}(r, d)$ denote the (finite) family of all nonempty $r$-partite $r$-graphs with at most $d$ edges, taken up to isomorphism; recall our convention that the vertex set of an $r$-graph specified by its edge set alone is the span of its edges, whence $r \le v(J) \le rd$ for each $J \in \mathcal{H}(r,d)$. Suppose that $r \ge 2$, and let $G$ be an $r$-partite $r$-graph with partition classes $X_1, X_2, \dots, X_r$ each of size $n$. Suppose that for each $J \in \mathcal{H}(r, d)$, a subset $\mathcal{B}(J, G)$ of the copies of $J$ in $G$ have been \emph{marked (as being bad)}. We say that an $(r-1)$-graph $L \subset \Tr_{r-1}(G)$ with at most $d$ edges is \emph{$\beta$-bad with respect to $G$} if \begin{enumerate}[label = {\bfseries{B\arabic{enumi}}}] \item\label{B1} either $|\Gamma(L, G)|\le n^{1-\beta}$, or \item\label{B2} if there exists some $J \in \mathcal{H}(r, d)$ such that the number of marked copies of $J$ in $G$ containing $L$ is at least \[n^{-2\beta}\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}|\Gamma(L, G)|.\] \end{enumerate} The following lemma will be the workhorse that drives the proof of our main result. \begin{lemma}\label{mainlem2} For a fixed $r \ge 2$, $d\in\mathbb{N}$, and $\ensuremath{\varepsilon}, \delta >0$, the following holds for all sufficiently large $n \in \mathbb{N}$. 
Let $G$ be an $r$-partite $r$-graph with partition classes $X_1, X_2, \dots, X_r$ each of size $n$ and $e(G) \ge 2^r n^{r-\ensuremath{\varepsilon}}$ in which, for each $J \in \mathcal{H}(r, d)$, there is a set $\mathcal{B}(J, G)$ of at most $n^{v(J)-\delta}$ marked copies of $J$ in $G$. Then, for $\beta =\delta/(rd+1)$, there is an $(r-1)$-partite $(r-1)$-graph $G' \subset \Tr_{r-1}(G)$ with partition classes $X_1, X_2, \dots, X_{r-1}$ such that \begin{enumerate} \item $e(G') \ge 2^{r-1} n^{r-1-\ensuremath{\varepsilon}}$, and \item for each $L \in \mathcal{H}(r-1, d)$, the set $\mathcal{B}(L, G')$ of copies of $L$ in $G'$ that are $\beta$-bad with respect to $G$ satisfies \[|\mathcal{B}(L, G')| \le n^{v(L)-\beta + 2\ensuremath{\varepsilon}}.\] \end{enumerate} \end{lemma} \begin{proof} Choose a vertex $x\in X_r$ uniformly at random and let $G' = \mathcal{L}(x, G) \subset \Tr_{r-1}(G)$ be the link of $x$ in $G$. It is clear that \beq{eg'} \mathbb{E}[e(G')] = e(G) / n \ge 2^{r} n^{r-1-\ensuremath{\varepsilon}}. \enq For each $L \in \mathcal{H}(r-1, d)$, let $P(L)$ be the set of copies $L'$ of $L$ in $G'$ with $|\Gamma(L', G)|\le n^{1-\beta}$. Next, for $L \in \mathcal{H}(r-1,d)$ and $J\in \mathcal{H}(r,d)$, we say that a copy $J'$ of $J$ in $G$ \emph{extends} a copy $L'$ of $L$ in $\Tr_{r-1}(G)$ if $\Tr_{r-1}(J')=L'$. Let $Q(L, J)$ be the set of copies $L'$ of $L$ in $G'$ which extend to at least \[ n^{-2\beta}\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}|\Gamma(L', G)|\] marked copies of $J$ belonging to $\mathcal{B}(J, G)$. With these definitions in place, we then have \[\mathcal{B}(L, G')=P(L) \cup \mathopen{}\mathclose\bgroup\originalleft(\bigcup_{J\in\mathcal{H}(r,d)}Q(L,J)\aftergroup\egroup\originalright) \] for each $L \in \mathcal{H}(r-1, d)$; indeed, the first term above accounts for~\ref{B1} and the second for~\ref{B2}. First, we note that \beq{pl} \mathbb{E}[|P(L)|] \le n^{v(L)-\beta}, \enq since each copy $L'$ of $L$ in $\Tr_{r-1}(G)$ with $|\Gamma(L', G)|\le n^{1-\beta}$ survives in $G'$ with probability at most $n^{-\beta}$, and the number of copies of $L$ in $\Tr_{r-1}(G)$ is trivially at most $n^{v(L)}$. Next, since $|\mathcal{B}(J, G)|\le n^{v(J)-\delta}$ for each $J\in \mathcal{H}(r,d)$, we have \[\sum_{L'\in Q(L, J) } n^{-2\beta}\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}|\Gamma(L', G)|\le |\mathcal{B}(J, G)| \le n^{v(J)-\delta},\] and by rearranging this, we get \begin{align*} \sum_{L'\in Q(L, J) } \frac{|\Gamma(L', G)|}{n} & \le n^{2\beta}\mathopen{}\mathclose\bgroup\originalleft(n^{\beta-1}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}n^{v(J)-1-\delta} \\ & = n^{v(L)-\delta+\beta(1+v(J)-v(L))} \le n^{v(L)-\delta+\beta rd}, \end{align*} where the last inequality uses the trivial facts that $v(J) \le rd$ and $v(L) \ge 1$. It follows that \beq{ql} \mathbb{E}[|Q(L,J)|] \le n^{v(L)-\delta+\beta rd} \enq for each $L \in \mathcal{H}(r-1,d)$ and $J \in \mathcal{H}(r,d)$. Putting the estimates~\eqref{pl} and~\eqref{ql} together, we get \beq{bbl} \mathbb{E}[|\mathcal{B}(L, G')|] \le n^{v(L)-\beta}+|\mathcal{H}(r,d)| n^{v(L)-\delta+\beta rd} = C_1 n^{v(L)-\beta}, \enq where $C_1 = (1 + |\mathcal{H}(r,d)|)$, the last equality following from the fact that $\delta= \beta(rd+1)$. 
To finish, we set $C_2 = |\mathcal{H}(r-1,d)|$, and combine~\eqref{eg'} and~\eqref{bbl} to get \[ \mathbb{E}\mathopen{}\mathclose\bgroup\originalleft[\frac{e(G')}{2^{r-1}n^{r-1-\ensuremath{\varepsilon}}}- 1 - \frac{1}{C_2}\sum_{L\in \mathcal{H}(r-1,d)} \frac{|\mathcal{B}(L, G')|}{C_1 n^{v(L)-\beta}} \aftergroup\egroup\originalright] \ge 0; \] consequently, there is at least one vertex in $X_r$ whose link $G'$ has the following properties: \begin{enumerate} \item $e(G') \ge 2^{r-1} n^{r-1-\ensuremath{\varepsilon}} $, and \item for every $L \in \mathcal{H}(r-1,d)$, we have \[|\mathcal{B}(L, G')| \le \frac{C_1 C_2 n^{v(L)-\beta}e(G')}{2^{r-1} n^{r-1-\ensuremath{\varepsilon}}} \le 2^{1-r}C_1 C_2 n^{v(L)-\beta +\ensuremath{\varepsilon}} \le n^{v(L)-\beta + 2\ensuremath{\varepsilon}}; \] here, we use the facts that $e(G') \le n^{r-1}$, that $\ensuremath{\varepsilon} > 0$, and that $n$ is sufficiently large. \end{enumerate} Such an $(r-1)$-graph $G'$ has all the properties we require, proving the lemma. \end{proof} With Lemma~\ref{mainlem2} in hand, we are now ready to prove our second main result. \begin{proof}[Proof of Theorem~\ref{mainthm2}] Let $H$ be a $d$-trace-bounded $r$-partite $r$-graph with partition classes $Y_1, Y_2, \dots, Y_r$. For convenience, we prove that any large $r$-graph with sufficiently many edges on a vertex set whose cardinality is \emph{divisible by $r$} must contain a copy of $H$; of course, this divisibility assumption makes no material difference beyond allowing us to drop floor and ceiling signs. We shall prove the result with the precise value of \[\alphapha_{r,d} = \frac{1}{10d}\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{rd+1}\aftergroup\egroup\originalright)^{r-2},\] noting that $\alphapha_{r,d} \ge (5rd)^{1-r}$ for all $r \ge 2$ and $d\ge1$. Given an $r$-graph on $rn$ vertices with at least $(rn)^{r - \alphapha_{r,d}}$ edges, then provided $n$ is sufficiently large, we may, by Proposition~\ref{triedges}, pass to an $r$-partite subgraph with partition classes $X_1, X_2, \dots, X_r$ each of size $n$ containing $2^rn^{r-\ensuremath{\varepsilon}}$ edges, for some \[0 < \ensuremath{\varepsilon} \le \frac{1}{9d}\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{rd+1}\aftergroup\egroup\originalright)^{r-2};\] we shall only work with this $r$-partite $r$-graph, which we call $G$, in what follows. Our goal now is to show that we are guaranteed to find a copy of $H$ in $G$ provided $n \ge n_0(H)$ is sufficiently large. Our proof proceeds in two stages. In the first stage, we shall inductively construct a sequence of $i$-partite $i$-graphs $G_i \subset \Tr_i(G)$ with partition classes $X_1, X_2, \dots, X_{i}$ for $r-1 \ge i \ge 1$, with $G_{i}$ being constructed by feeding $G_{i+1}$ into Lemma~\ref{mainlem2}. To accomplish this iterative construction, we need to find a suitable $G_{r-1}$ from which to start, which we do as follows. 
\begin{claim}\label{mainlem1} There is an $(r-1)$-partite $(r-1)$-graph $G_{r-1} \subset \Tr_{r-1}(G)$ with partition classes $X_1, X_2, \dots, X_{r-1}$ such that \begin{enumerate} \item $e(G_{r-1}) \ge 2^{r-1} n^{r-1-\ensuremath{\varepsilon}} $, and \item for each $J \in \mathcal{H}(r-1,d)$, the set $\mathcal{B}(J, G_{r-1})$ of copies of $J$ in $G_{r-1}$ that are contained in the link of fewer than $v(H)$ different vertices of $X_r$ in $G$ satisfies \[|\mathcal{B}(J, G_{r-1})| \le n^{v(J)-1/2}.\] \end{enumerate} \end{claim} \begin{proof} The proof mirrors that of Lemma~\ref{mainlem2}, but involves less work since we are aiming to accomplish less. Indeed, choose a vertex $x\in X_r$ uniformly at random and let $G' = \mathcal{L}(x, G) \subset \Tr_{r-1}(G)$ be the link of $x$ in $G$. As before, we clearly have \beq{edges} \mathbb{E}[e(G')] = e(G)/n = 2^{r} n^{r-1-\ensuremath{\varepsilon}}. \enq For each $J \in \mathcal{H}(r-1,d)$, the probability that a copy of $J$ in $\Tr_{r-1}(G)$ contained in the link of fewer than $v(H)$ different vertices of $X_r$ survives in $G'$ is at most $v(H)/ n$, so it follows that $\mathcal{B}(J, G')$ satisfies \beq{badjs} \mathbb{E} [ |\mathcal{B}(J, G')|] \le v(H) n^{v(J)-1}. \enq Setting $C = |\mathcal{H}(r-1,d)| v(H) $ and putting~\eqref{edges} and~\eqref{badjs} together, we get \[\mathbb{E} \mathopen{}\mathclose\bgroup\originalleft[\frac{e(G')}{2^{r-1}n^{r-1-\ensuremath{\varepsilon}}}- 1 - \frac{1}{C} \sum_{J \in \mathcal{H}(r-1,d)}\frac{|\mathcal{B}(J, G')|}{n^{v(J)-1}}\aftergroup\egroup\originalright] \ge 0.\] Consequently, there is at least one vertex in $X_r$ whose link $G'$ has the following properties: \begin{enumerate} \item $e(G') \ge 2^{r-1}n^{r-1-\ensuremath{\varepsilon}} $, and \item for every $J \in \mathcal{H}(r-1,d)$, we have \[|\mathcal{B}(J, G')| \le \frac{C n^{v(J)-1}e(G')}{2^{r-1}n^{r-1-\ensuremath{\varepsilon}}} \le 2^{1-r} C n^{v(J)-1 +\ensuremath{\varepsilon}} \le n^{v(J)-1/2}; \] here, we use the facts that $e(G') \le n^{r-1}$, that $\ensuremath{\varepsilon}<1/2$, and that $n$ is sufficiently large. \end{enumerate} The claim follows by taking $G_{r-1}$ to be a link $G'$ with the above properties. \end{proof} Let $G_{r-1}$ be the $(r-1)$-graph promised by Claim~\ref{mainlem1} and set $\delta_{r-1}=1/2$. We know that \begin{enumerate} \item $e(G_{r-1}) \ge 2^{r-1} n^{r-1-\ensuremath{\varepsilon}}$, and \item for each $J \in \mathcal{H}(r-1,d)$, the set $\mathcal{B}(J, G_{r-1})$ of copies of $J$ in $G_{r-1}$ contained in fewer than $v(H)$ distinct links in $G$ satisfies $|\mathcal{B}(J, G_{r-1})| \le n^{v(J)-\delta_{r-1}}$. \end{enumerate} For $r-2 \ge i \ge 1$, having constructed $G_{i+1}$ with \begin{enumerate} \item $e(G_{i+1}) \ge 2^{i+1}n^{i+1-\ensuremath{\varepsilon}}$ along with, \item for each $J \in \mathcal{H}(i+1, d)$, a set $\mathcal{B}(J, G_{i+1})$ of at most $n^{v(J)-\delta_{i+1}}$ bad copies of $J$ in $G_{i+1}$, \end{enumerate} we apply Lemma~\ref{mainlem2} to $G_{i+1}$ to construct $G_i$ such that \begin{enumerate} \item $e(G_i) \ge 2^{i}n^{i-\ensuremath{\varepsilon}}$, and \item for each $J \in \mathcal{H}(i, d)$, the set $\mathcal{B}(J, G_i)$ of copies of $J$ in $G_i$ that are $\beta_i$-bad with respect to $G_{i+1}$ satisfies \[|\mathcal{B}(J, G_i)| \le n^{v(J)-\delta_i},\] where $\beta_i=\delta_{i+1}/((i+1)d+1)$ and $\delta_{i}=\beta_i-2\ensuremath{\varepsilon}$. 
\end{enumerate} Since $\delta_{r-1}=1/2$ and $\delta_i={\delta_{i+1}}/{((i+1)d+1)}-2\ensuremath{\varepsilon}\ge {\delta_{i+1}}/{(rd+1)}-2\ensuremath{\varepsilon}$ for each $r-2 \ge i \ge 1$, we get that \[ \delta_1\ge \frac12\mathopen{}\mathclose\bgroup\originalleft(\frac{1}{rd+1}\aftergroup\egroup\originalright)^{r-2}-2\ensuremath{\varepsilon} \mathopen{}\mathclose\bgroup\originalleft(\frac{1- (1/(rd+1))^{r-2}}{1- 1/(rd+1)} \aftergroup\egroup\originalright)\ge \frac12 \mathopen{}\mathclose\bgroup\originalleft(\frac{1}{rd+1}\aftergroup\egroup\originalright)^{r-2} -3\ensuremath{\varepsilon} > d\ensuremath{\varepsilon}, \] where the last inequality above uses the fact that $\ensuremath{\varepsilon} \le (1/9d) (1/(rd+1))^{r-2}$. Consequently, we have \begin{enumerate} \item $1/2 = \delta_{r-1} \ge \delta_{r-2} \ge \dots \ge \delta_1 > d \ensuremath{\varepsilon}$, and \item $1/2 \ge \beta_{r-2} \ge \beta_{r-3} \ge \dots \ge \beta_1 > 0$. \end{enumerate} We are now ready for the second stage of the proof where we embed $H$ into $G$. First, we may assume that $H$ has no isolated vertices, i.e., any vertices $y$ for which $\Deg(y, H) = 0$; indeed, if $H$ has isolated vertices, we may embed the rest of $H$ into $G$ first, and then embed the isolated vertices into $G$ arbitrarily provided $n$ is sufficiently large. By assuming $H$ has no isolated vertices, we have $\Tr_r(H) = H$ and furthermore, the link of any vertex in $\Tr_j(H)$ is nonempty for each $2 \le j \le r$. For $1 \le j \le r-1$, we shall sequentially construct injective maps $\phi_j : Y_j \to X_j$ in such a way that \begin{enumerate} \item each induced map $\Phi_{j}: Y_1 \cup Y_2 \cup \dots \cup Y_j \to X_1 \cup X_2 \cup \dots \cup X_j$ is an embedding of $\Tr_{j}(H)$ into $G_{j}$, and \item for each nonempty subgraph $J$ of $\Tr_j(H)$ with at most $d$ edges, no copy of $J$ in the image of $\Phi_{j}$ is in $\mathcal{B}(J, G_{j})$. \end{enumerate} Observe that $G_1$ is a 1-graph on $X_1$, which is just a subset of $X_1$, so $v(G_1) = e(G_1)$, and we have \[m = v(G_1) = e(G_{1})\ge 2n^{1-\ensuremath{\varepsilon}}.\] We construct $\phi_1: Y_1 \to X_1$ in such a way that for every $J \in \mathcal{H}(1,d)$, no copy of $J$ in $\phi_1 (\Tr_1(H))$ is in $\mathcal{B}(J, G_1)$, i.e., is $\beta_1$-bad with respect to $G_2$. By construction, we have $|\mathcal{B}(J, G_1)| \le n^{v(J)-\delta_1}$ for each $J \in \mathcal{H}(1,d)$. Note that $\mathcal{H}(1,d)$ consists of $d$ elements, namely one set of cardinality $t$ for each $1 \le t \le d$, and also note that the number of $\beta_1$-bad $t$-sets in $G_1$ is $o(m^t)$ for each $1 \le t\le d$ since $\delta_1/t \ge \delta_1/d > \ensuremath{\varepsilon}$. It follows that the number of problematic $d$-sets in $G_1$, namely those containing a $\beta_1$-bad $t$-set for some $1 \le t \le d$, is $o(m^d)$, so we may choose a subset of $X_1$ of size $|Y_1|$ which does not contain any such problematic $d$-set provided $n$ is sufficiently large, as can be seen, for example, by applying a bound of de Caen~\cite{deCaen} to the $d$-graph of all problematic $d$-sets. In other words, we can choose a subset $S$ of $X_1$ of size $|Y_1|$ in such a way that for each $J \in \mathcal{H}(1,d)$, no copy of $J$ in $\mathcal{B}(J, G_1)$ is contained in $S$; we take $\phi_1$ to be any injective map from $Y_1$ to $S$. For $2 \le j \le r-2$, suppose that we have constructed injective maps $\phi_1, \phi_2, \dots, \phi_{j-1}$ as above. 
We now extend $\Phi_{j-1}$ to $\Phi_j$ by defining a suitable map $\phi_j:Y_j\to X_j$ randomly as follows. Since $H$ is $d$-trace-bounded, each vertex $y\in Y_j$ has degree at most $d$ in $\Tr_j(H)$. Given a vertex $y\in Y_j$, let $L(y) = \mathcal{L}(y, \Tr_j(H))$ be the link of $y$ in $\Tr_j(H)$, so that $L(y)$ is a nonempty subgraph of $\Tr_{j-1}(H)$ with at most $d$ edges. Inductively, we know that $\Phi_{j-1}(L(y))$ is a subgraph of $G_{j-1}$. We choose $\phi_j(y)$ uniformly at random from the common neighbourhood $\Gamma(\Phi_{j-1}(L(y)), G_j) \subset X_j$. By choosing, for each $y \in Y_j$, the image $\phi_j(y)$ of $y$ from the set $\Gamma(\Phi_{j-1}(L(y)), G_j)$, we have ensured that the induced map $\Phi_j$ is a homomorphism from $\Tr_j(H)$ into $G_j$. We claim that with positive probability, both of the following events hold: \begin{enumerate}[label = {\bfseries{E\arabic{enumi}}}] \item\label{AA} the induced map $\Phi_{j}$ is injective, i.e., is an embedding of $\Tr_{j}(H)$ into $G_{j}$, and \item\label{BB} for each nonempty subgraph $J$ of $\Tr_j(H)$ with at most $d$ edges, no copy of $J$ in the image of $\Phi_{j}$ is in $\mathcal{B}(J, G_{j})$. \end{enumerate} To deal with~\ref{AA}, note that for any vertex $y\in Y_j$, the $(j-1)$-graph $L(y) = \mathcal{L}(y, \Tr_j(H))$ is nonempty and has at most $d$ edges, and inductively, its image $\Phi_{j-1}(L(y))$ is not in $\mathcal{B}(L(y), G_{j-1})$, namely the set of copies of $L(y)$ in $G_{j-1}$ that are $\beta_{j-1}$-bad with respect to $G_j$, so it follows that \beq{choices}|\Gamma(L(y), G_j)|\ge n^{1-\beta_{j-1}}. \enq Since $\beta_{j-1} \le 1/2$, the probability that $\phi_j$, and therefore $\Phi_j$, fails to be injective is easily seen to be $o(1)$, whence we certainly have $\mathbb{P}(\text{\textbf{E1}}) > 1/ 2$ provided $n$ is sufficiently large. To address~\ref{BB}, we argue as follows. Let $J$ be a nonempty subgraph of $\Tr_j(H)$ with at most $d$ edges, and let $L=\Tr_{j-1}(J)$. The probability of the event that $\Phi_j(J) \in \mathcal{B}(J ,G_j)$ may be bounded above as follows. By the bound in~\eqref{choices}, the number of choices for $\phi_{j}$ on $V(J) \setminus V(L)$ is at least $(n^{1-\beta_{j-1}})^{v(J)-v(L)}$. On the other hand, since $L$ is nonempty with at most $d$ edges, we know inductively that $\Phi_{j-1}(L) \notin \mathcal{B}(L, G_{j-1})$. Therefore, $\Phi_{j-1}(L)$ is not $\beta_{j-1}$-bad with respect to $G_j$, from which it follows that $\Phi_{j-1}(L)$ is contained in at most \[n^{-2\beta_{j-1}}\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta_{j-1}}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}|\Gamma(\Phi_{j-1}(L), G_j)|\] copies of $J$ in $G_j$ that belong to $\mathcal{B}(J, G_j)$. Hence, the probability of $\Phi_j(J) \in \mathcal{B}(J ,G_j)$ is at most \[\frac{n^{-2\beta_{j-1}}\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta_{j-1}}\aftergroup\egroup\originalright)^{v(J)-v(L)-1}|\Gamma(\Phi_{j-1}(L), G_j)|}{\mathopen{}\mathclose\bgroup\originalleft(n^{1-\beta_{j-1}}\aftergroup\egroup\originalright)^{v(J)-v(L)}}\le \frac{n^{1-2\beta_{j-1}}}{n^{1-\beta_{j-1}}}=n^{-\beta_{j-1}}=o(1),\] where we use the facts that $|\Gamma(\Phi_{j-1}(L), G_j)| \le n$ and that $\beta_{j-1} > 0$. Summing this estimate over the $O(1)$ choices of nonempty subgraphs $J$ of $\Tr_j(H)$ with $e(J) \le d$ shows that $\mathbb{P}(\text{\textbf{E2}}) > 1/ 2$ provided $n$ is sufficiently large as well, which together with the fact that $\mathbb{P}(\text{\textbf{E1}}) > 1/ 2$ establishes the existence of an appropriate $\phi_j$. 
Now, we finish by extending $\Phi_{r-1}$ to an embedding $\Phi_r: Y_1 \cup Y_2 \cup \dots \cup Y_r \to X_1 \cup X_2 \cup \dots \cup X_r$ of $H$ into $G$ by defining a final injective map $\phi_r:Y_r \to X_r$. Recall that for each $J \in \Tr_{r-1}(H)$, the set $\mathcal{B}(J, G_{r-1})$ consists of those copies $J'$ of $J$ in $G_{r-1}$ for which the number of vertices $x \in X_r$ whose link $\mathcal{L}(x, G)$ contains $J'$ as a subgraph is at most $v(H)$. We may now define $\phi_r$ by greedily picking, for each $y \in Y_r$, a distinct vertex $\phi_r(y) \in \Gamma(\Phi_{r-1}(\mathcal{L}(y, H)), G) \subset X_r$; this is always possible since $\Phi_{r-1}(\mathcal{L}(y, H)) \notin \mathcal{B}(\mathcal{L}(y, H), G_{r-1})$. It follows that $\Phi_r(H)$ is a copy of $H$ in $G$, completing the proof. \end{proof} \section{Conclusion}\label{sec:conc} A number of open problems remain, and we conclude by highlighting those that we think are particularly deserving of attention. With regard to homeomorphs, the outstanding problem is to determine the optimal values of the exponents $\lambda_k$ in Theorem~\ref{mainthm1} for each $k \ge 2$. As mentioned earlier, even the optimal value of $\lambda_2$ is not known, though there are good reasons to expect it to be $1/2$. In higher dimensions, we are only able to speculate: could it be that for each $k \ge 2$, the optimal value of $\lambda_k$ is precisely $\lambda(S^k)$, where $S^k$ is (any triangulation of) the $k$-sphere? This is indeed the underlying mechanism behind the prediction of the value of $1/2$ in dimension two, but we do not know, nor do we have a guess for, the value of $\lambda(S^k)$ for any $k \ge 3$; for example, a combination of a random construction and a generalisation of the argument of Brown, Erd\H{o}s and S\'os~\cite{bes} shows that $1/4 \le \lambda(S^3) \le 1/3$, but we have no reason to think either bound reflects the truth. With regard to trace-bounded hypergraphs, the main problem again is to determine the optimal values of the exponents $\alpha_{r,d}$ in Theorem~\ref{mainthm2} for each $r \ge 3$ and $d\in \mathbb{N}$. As remarked upon earlier, we know that the optimal value of $\alpha_{2,d}$ is $1/d$, but even formulating a natural guess for the optimal value of $\alpha_{r,d}$ when $r \ge 3$, let alone proving it, would be of some interest. \begin{dajauthors} \begin{authorinfo}[jl] Jason Long\\ Cambridge CB1\thinspace1JH, UK\\ jasonlong272\imageat{}gmail\imagedot{}com \end{authorinfo} \begin{authorinfo}[bn] Bhargav Narayanan\\ Department of Mathematics, Rutgers University\\ Piscataway, NJ, USA\\ narayanan\imageat{}math\imagedot{}rutgers\imagedot{}edu \\ \url{https://sites.math.rutgers.edu/~narayanan/} \end{authorinfo} \begin{authorinfo}[cy] Corrine Yap\\ Department of Mathematics, Rutgers University\\ Piscataway, NJ, USA\\ corrine\imagedot{}yap\imageat{}rutgers\imagedot{}edu\\ \url{http://www.corrineyap.com/} \end{authorinfo} \end{dajauthors} \end{document}
\begin{document} \title{Contact seaweeds } \author[*]{Vincent E. Coll, Jr.} \author[*]{Nicholas Mayers} \author[*]{Nicholas Russoniello} \author[$\dagger$] {Gil Salgado} \affil[*]{Department of Mathematics, Lehigh University, Bethlehem, PA, 18015} \affil[$\dagger$]{Facultad de Ciencias, Universidad Autonoma de San Luis Potos\'{i}; SLP, M\'{e}xico} \maketitle \begin{abstract} \noindent A ($2k+1$)$-$dimensional contact Lie algebra is one which admits a one-form $\varphi$ such that $\varphi \wedge (d\varphi)^k\ne0$. Such algebras have index one, but this is not generally a sufficient condition. Here we show that index-one type-$A$ seaweed algebras are necessarily contact. Examples, together with a method for their explicit construction, are provided. \end{abstract} \noindent \textit{Mathematics Subject Classification 2020}: 17Bxx, 53D10 \noindent \textit{Key Words and Phrases}: contact Lie algebra, contact structure, Frobenius Lie algebra, seaweeds, meanders, regular one-forms \blfootnote{Nicholas Mayers, [email protected], is the corresponding author. Gil Salgado would like to acknowledge the support received through CONACYT Grant \#A1-S-45886 and PRODEP grant UASLP-CA-228. } \section{Introduction} The \textit{index} of a Lie algebra $(\mathfrak{g}, [-,-])$ is an important algebraic invariant which was first formally introduced by Dixmier (\textbf{\cite{D}}, 1977). It is defined by \[ {\rm ind \hspace{.1cm}} \mathfrak{g}=\min_{\varphi\in \mathfrak{g}^*} \dim (\ker (B_\varphi)), \] where $\varphi$ is an element of the linear dual $\mathfrak{g}^*$ and $B_\varphi$ is the associated skew-symmetric \textit{Kirillov form} defined by \[ B_\varphi(x,y)=\varphi([x,y]), \textit{ for all }~ x,y\in\mathfrak{g}. \] \noindent On a given Lie algebra $\mathfrak{g}$, index-realizing linear forms, i.e., those $\varphi\in\mathfrak{g}^*$ which satisfy $\dim(\ker(B_{\varphi}))={\rm ind \hspace{.1cm}}\mathfrak{g},$ are called \textit{regular} and exist in profusion, being dense in both the Zariski and Euclidean topologies of $\mathfrak{g}^*$ (see \textbf{\cite{DK}}). \noindent Using the above notation, the index is used to describe certain important classes of algebras. \begin{itemize} \item If dim $\mathfrak{g}=2n$ and if there exists a $\varphi$ such that $(d\varphi)^n\ne 0$, then $\mathfrak{g}$ is said to be \textit{Frobenius}, and $\varphi$ is called a \textit{Frobenius form}. Deformation theorists are interested in Frobenius Lie algebras because each such $\mathfrak{g}$ provides a solution to the classical Yang-Baxter equation, which in turn quantizes to a universal deformation formula, i.e., a Drinfel'd twist which deforms any algebra which admits an action of $\mathfrak{g}$ by derivations (see \textbf{\cite{twist}}). A Lie algebra is Frobenius precisely when its index is zero. \item If dim $\mathfrak{g}=2n+1$ and if there exists a $\varphi$ such that $\varphi \wedge (d\varphi)^n\ne 0$, then $\mathfrak{g}$ is said to be $\textit{contact}$, $\varphi$ is called a \textit{contact form}, and $\varphi \wedge (d\varphi)^n$ is a \textit{volume form} on the underlying Lie group. The construction and classification of contact manifolds is a central problem in differential topology (see \textbf{\cite{wein}}). If a Lie algebra is contact, then its index is equal to one.
The converse is not true in general.\footnote{The Lie algebra $\mathfrak{g}=\langle e_1,e_2,e_3\rangle$ with relations $[e_1,e_2]=e_2$ and $[e_1,e_3]=e_3$ has index one but is not contact.} However, there are a few important families of Lie algebras for which index one identifies if a given Lie algebra is contact. For example, index-one nilpotent Lie algebras and real, compact Lie algebras of index one are necessarily contact (see \textbf{\cite{RG}}). \end{itemize} \noindent \begin{remark}\label{ex:ind1ncont} Frobenius and contact Lie algebras are tightly interwoven. Every Frobenius Lie algebra has a codimension-one contact ideal, and every Frobenius Lie algebra is a codimension-one ideal of a contact Lie algebra \textup(see \textup{\textbf{\cite{codim1}}}\textup). \end{remark} Here, we seek to classify contact algebras among a class of matrix algebras called \textit{seaweed algebras} (or simply, ``seaweeds''). These algebras, along with their evocative name, were first introduced by Dergachev and A. Kirillov in (\textbf{\cite{DK}}, 2000), where they defined such algebras as complex subalgebras of $\mathfrak{gl}(n)$ preserving certain flags of subspaces of $\mathbb{C}^n$ developed from two compositions of $n$. The passage to seaweeds of ``classical type" is realized by requiring that elements of the seaweed subalgebra of $\mathfrak{gl}(n)$ satisfy additional algebraic conditions. For example, the type-$A$ case ($A_{n-1}=\mathfrak{sl}(n,\mathbb{C})$) is defined by a vanishing trace condition. Ongoing, we will assume that all Lie algebras are over $\mathbb{C}$. We can now state the main result of this article, which asserts that index one is sufficient for seaweed subalgebras of $\mathfrak{sl}(n)=A_{n-1}$ to be contact. \begin{theorem*}\label{main} If $\mathfrak{g}$ is a type-A seaweed, then $\mathfrak{g}$ is contact if and only if ${\rm ind \hspace{.1cm}}\mathfrak{g}=1.$ \end{theorem*} The structure of the paper is as follows. In Section~\ref{sec:seaweeds}, we define type-$A$ seaweeds and detail the construction of an associated planar graph, called a \textit{meander}, which is helpful in computing the seaweed's index. In Section~\ref{sec:framework}, we discuss a framework for constructing regular one-forms on type-$A$ seaweeds and explicitly compute the kernels of the associated Kirillov forms. In Section~\ref{sec:main}, we establish that the regular one-forms of Section~\ref{sec:framework} are, in fact, contact, thus proving the main theorem. In Section~\ref{sec:examples}, the reader will find algebraic technology which can be used to generate a spate of both Frobenius and contact seaweeds of arbitrarily high dimension. \section{Seaweeds and Meanders}\label{sec:seaweeds} \noindent A type-$A$ seaweed in its ``standard (matrix) form''\footnote{Since every seaweed is conjugate to one in standard form, we have presumed this in the definition of a type-$A$ seaweed in order to ease exposition. A basis-free definition reckons seaweed subalgebras of a reductive Lie algebra $\mathfrak{g}$ as the intersection of two parabolic algebras whose sum is $\mathfrak{g}$ (see \textbf{\cite{Pan}}). For this reason, seaweed algebras have elsewhere been called \textit{biparabolic} (see \textbf{\cite{Joseph}}). We do not require the latter definition for our present discussion.} can be described as a subalgebra of $\mathfrak{sl}(n)$ constructed as follows. First, fix two ordered compositions of $n$, $(a)=(a_1,\dots,a_m)$ and $(b)=(b_1,\dots,b_t)$. 
Let $D_{(a)}$ be the subalgebra of block-diagonal matrices whose blocks have sizes $a_1\times a_1,\dots,a_m\times a_m$ and similarly for $D_{(b)}$. A \textit{type-A seaweed algebra} (or simply, ``type-A seaweed'') is the subalgebra of $\mathfrak{sl}(n)$ spanned by the intersection of $D_{(a)}$ with the lower-triangular matrices and the intersection of $D_{(b)}$ with the upper-triangular matrices. We call the locations of potentially nonzero entries in the seaweed \textit{admissible locations}. For a type-$A$ seaweed defined by two compositions of $n$ as above, we write $\mathfrak{p}_n^A\;\frac{a_1|\cdots|a_m}{b_1|\cdots|b_t}$. See Example~\ref{ex:seaweed}. \begin{remark} We tacitly assume a standard \textup(Chevalley\textup) basis for a seaweed $\mathfrak{g}=\mathfrak{p}_n^A \frac{a_1|\dots|a_m}{b_1|\dots|b_t}$ given by the union of the following two sets of matrix units: \begin{itemize} \item $\{e_{i,i}-e_{i+1,i+1} ~|~ 1\leq i\leq n-1 \}$, and \item $\{e_{i,j}~|~ 1\leq i\neq j\leq n$ and $(i,j)$ is an admissible location\}. \end{itemize} \end{remark} \begin{Ex}\label{ex:seaweed} We illustrate a type-$A$ seaweed in its standard matrix form -- revealing its characteristic wavy seaweed ``shape.'' The asterisks represent admissible locations and entries in non-admissible locations are zeroes. See Figure \ref{Aseaweed}. \end{Ex} \begin{figure} \caption{ The seaweed $\mathfrak{p} \label{Aseaweed} \end{figure} To each seaweed $\mathfrak{p}_n^A\;\frac{a_1|\cdots|a_m}{b_1|\cdots|b_t}$ we associate a planar graph called a \textit{{meander}}, constructed as follows. First, place $n$ vertices $v_1$ through $v_n$ in a horizontal line. Next, create two partitions of the vertices by forming \textit{{top}} and {\textit{bottom blocks}} of vertices of size $a_1$, $a_2$, $\cdots$, $a_m$, and $b_1$, $b_2$, $\cdots$, $b_t$, respectively. Place edges in each top (likewise bottom) block in the same way. Add an edge from the first vertex of the block to the last vertex of the same block. Repeat this edge addition with the second vertex and the second-to-last vertex of the same block, and so on, within each block of both partitions. Top edges are drawn concave down and bottom edges are drawn concave up. Let $M(\mathfrak{g})$ denote the meander associated with $\mathfrak{g}$. We place a counterclockwise orientation on $M(\mathfrak{g})$ to produce the \textit{directed meander} $\overrightarrow{M}(\mathfrak{g}).$ See Example~\ref{ex:meanders}. \begin{Ex}\label{ex:meanders} We illustrate the meander and directed meander of $\mathfrak{g}=\mathfrak{p}_8^A\frac{2|6}{8}.$ See Figure~\ref{MeanderinSeaweed}. \begin{figure} \caption{$M(\mathfrak{g} \label{MeanderinSeaweed} \end{figure} \end{Ex} \noindent A meander can be visualized inside its associated seaweed $\mathfrak{g}$ if one views the diagonal locations $\{(i,i)\}_{i=1}^n$ of $\mathfrak{g}$ as the $n$ vertices $\{v_i\}_{i=1}^n$ of the meander and reckons the top edges $\{(v_i,v_j) ~|~ i<j\}$ of the meander as the unions of line segments connecting the matrix locations $(i,i)\rightarrow(j,i)\rightarrow(j,j)$ and the bottom edges $\{(v_i,v_j) ~|~ i<j\}$ of the meander as the unions of line segments connecting the matrix locations $(i,i)\rightarrow(i,j)\rightarrow(j,j)$. See Figure \ref{fig:MeanderinSeaweed}. \begin{figure} \caption{The meander of $\mathfrak{g} \label{fig:MeanderinSeaweed} \end{figure} Remarkably, the index of the seaweed can be computed by counting the number and type of the connected components in the associated meander.
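As a quick illustration of this principle (an editorial example added here, anticipating Theorem~\ref{thm:indexA} below), trace the meander of Example~\ref{ex:meanders}: its edges are $(v_1,v_2)$, $(v_3,v_8)$, $(v_4,v_7)$, $(v_5,v_6)$ on top and $(v_1,v_8)$, $(v_2,v_7)$, $(v_3,v_6)$, $(v_4,v_5)$ on the bottom, which together form a single cycle through all eight vertices and no paths, so the formula below gives $${\rm ind \hspace{.1cm}}\mathfrak{p}_8^A\frac{2|6}{8}=2(1)+0-1=1.$$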
In a given meander, we call a connected component a \textit{path} if it is not a cycle. Note that a path may be degenerate, i.e., consist of a single vertex. \begingroup \makeatletter \apptocmd{\thetheorem}{ \unless\ifx\protect\@unexpandable@protect\textnormal{(Dergachev and A. Kirillov} \textbf{\cite{DK}}, \textnormal{2000)}\protect\footnote{The authors actually established the formula $2C+P$ for the index of a seaweed subalgebra of $\mathfrak{gl}(n)$, but only a minor algebraic argument is required to extend to the type-$A$ case yielding (\ref{eqn:indexA}). See \textbf{\cite{seriesA}}.}\fi}{}{} \makeatother \begin{theorem}\label{thm:indexA} If $\mathfrak{g}$ is a type-A seaweed, then \begin{equation}\label{eqn:indexA} {\rm ind \hspace{.1cm}}\mathfrak{g}=2C+P-1, \end{equation} \noindent where $C$ is the number of cycles and $P$ is the number of paths in $M(\mathfrak{g})$. \end{theorem} \endgroup \begin{Ex} Consider the type-A seaweed $\mathfrak{b}=\mathfrak{p}_5^A\frac{1|1|1|1|1}{5},$ which is the standard Borel subalgebra of $\mathfrak{sl}(5)$ \textup(see Figure~\ref{fig:borel} \textup(left\textup)\textup). Since $M(\mathfrak{b})$ consists of zero cycles and three paths \textup(see Figure~\ref{fig:borel} \textup(right\textup)\textup), it follows from Theorem~\ref{thm:indexA} that $${\rm ind \hspace{.1cm}}\mathfrak{b}=2(0)+3-1=2.$$ \begin{figure} \caption{The Borel $\mathfrak{b} \label{fig:borel} \end{figure} \end{Ex} Any meander can be contracted, or ``wound down," to the empty meander through a sequence of graph-theoretic moves -- detailed in Lemma~\ref{lem:winding} below -- each of which is uniquely determined by the structure of the meander at the time of the move application. Such a sequence is called the \textit{signature} of the meander (see \textbf{ \cite{meanders2}}). The signature is essentially a graph theoretic recasting of Panyushev’s reduction algorithm (see \textbf{\cite{Pan}}). \begin{tcolorbox}[breakable, enhanced] \begin{lemma}[Coll et al. \textbf{\cite{meanders2}}]\label{lem:winding} \label{winding down} Let $\mathfrak{g}=\mathfrak{p}_n^A\frac{a_1|\dots|a_m}{b_1|\dots|b_t}$ be a type-A seaweed with associated meander $M(\mathfrak{g})$ -- in this setting, we say the meander $M=M(\mathfrak{g})$ is of type $\frac{a_1|\dots|a_m}{b_1|\dots|b_t}.$ Create a meander $M'$ by one of the following moves. \begin{enumerate} \item {Pure Contraction \textup(P\textup):} If $a_1>2b_1$, then $M\mapsto M'$ of type $\frac{(a_1-2b_1)|b_1|a_2|\cdots|a_m}{b_2|b_3|\cdots|b_t}$. \item {Block Elimination \textup(B\textup):} If $a_1=2b_1$, then $M\mapsto M'$ of type $\frac{b_1|a_2|\cdots|a_m}{b_2|b_3|\cdots|b_t}$. \item {Rotation Contraction \textup(R\textup):} If $b_1<a_1<2b_1$, then $M\mapsto M'$ of type $\frac{b_1|a_2|\cdots|a_m}{(2b_1-a_1)|b_2|\cdots|b_t}$. \item {Component Deletion \textup(C\textup(c\textup)\textup):} If $a_1=b_1=c$, then $M\mapsto M'$ of type $\frac{a_2|\cdots|a_m}{b_2|\cdots|b_t}$. \item {Flip \textup(F\textup):} If $a_1<b_1$, then $M\mapsto M'$ of type $\frac{b_1|b_2|\cdots|b_t}{a_1|\cdots|a_m}$. \end{enumerate} These moves are called \textit{{winding-down moves}}. For all moves, except the Component Deletion move, $\mathfrak{g}$ and $\mathfrak{g}'$ \textup(the seaweed with meander $M(\mathfrak{g}')=M'$\textup) have the same index. 
\end{lemma} \end{tcolorbox} \noindent If $\mathfrak{g}$ is as in Lemma \ref{winding down} and $M(\mathfrak{g})$ is a meander for which the collection of component deletions in its signature is $\{C(c_1),\dots,C(c_q)\}$, then we say that $M(\mathfrak{g})$ and, by an abuse of terminology, $\mathfrak{g}$ each have \textit{homotopy type} $\mathcal{H}(c_1,\dots,c_q).$ See Example \ref{ex:winding}. \begin{Ex}\label{ex:winding} The meander of type $\frac{2|6}{8}$ of Example~\ref{ex:meanders} has signature $FPFRBC(2),$ and so has homotopy type $\mathcal{H}(2).$ The winding-down moves associated with the signature are illustrated in Figure~\ref{fig:winding}. \begin{figure} \caption{Winding down the meander of type $\frac{2|6} \label{fig:winding} \end{figure} \end{Ex} Since contact seaweeds are the main focus, we need only consider those seaweeds with index one. Theorem $\ref{thm:indexA}$ tells us how to find them. For emphasis, we record this in the following corollary to Theorem~\ref{thm:indexA}. \begin{theorem}\label{cor:index A} A type-A seaweed has index one if and only if its associated meander consists of exactly one cycle or exactly two paths, i.e., its associated meander has homotopy type $\mathcal{H}(2)$ or $\mathcal{H}(1,1).$ \end{theorem} \section{Framework for regular forms}\label{sec:framework} In her dissertation, Dougherty (\textbf{\cite{Adiss}}, 2019) establishes a complicated inductive framework for the explicit construction of families of regular one-forms on seaweeds of classical type (cf. \textbf{\cite{aria}}). When restricted to type-$A$ seaweeds of index one, this framework simplifies considerably. Importantly, some of these regular families consist of one-forms that are also contact. We detail this ``type-$A$ framework'' in the following subsections. \subsection{Component meanders} \textit{Notation}: Ongoing, $\mathfrak{g}$ will be assumed to be a type-$A$ seaweed. If $\mathfrak{g}$ has homotopy type $\mathcal{H}(c_1,\dots,c_q),$ define the \textit{component meander} $CM(\mathfrak{g})$ associated with $\mathfrak{g}$ to be the meander with the same signature as $\mathfrak{g}$ except that the component deletions are all of size one. The construction of $CM(\mathfrak{g})$ involves an implicit identification of components, i.e., vertices and edges, in $M(\mathfrak{g})$. The vertices of $CM(\mathfrak{g})$ are $\{v_{I_1},\dots,v_{I_t}\},$ where $I_i$ is the set of indices for the vertices in $M(\mathfrak{g})$ that were collapsed into one vertex in $CM(\mathfrak{g}).$ Note that $|I_i|=c_j$ if $v_{I_i}$ is a vertex of $CM(\mathfrak{g})$ arising from the collapsing of a component of size $c_j.$ Also note $CM(\mathfrak{g})$ may be oriented as usual to yield $\overrightarrow{CM}(\mathfrak{g})$. See Example \ref{ex:componentmeander}. \begin{Ex}\label{ex:componentmeander} Consider $\mathfrak{g}=\mathfrak{p}_6^A\frac{2|1|1|2}{6}.$ In Figure \ref{fig:commeanex}, we illustrate the meander, component meander, and directed component meander of $\mathfrak{g}$. \begin{figure} \caption{$M(\mathfrak{g} \label{fig:commeanex} \end{figure} \end{Ex} \subsection{The Core and Peak} The core and peak of $\mathfrak{g}$ are sets of sets of admissible locations and their definitions are based on a vector space decomposition of $\mathfrak{g}$ developed from the homotopy type of $\mathfrak{g}$ as follows. 
If $\mathfrak{g}$ has homotopy type $\mathcal{H}(c_1,\dots,c_q)$ then \begin{equation}\label{eqn:decomp} \mathfrak{g}=\bigoplus_{i=1}^q\mathfrak{g}|_{c_i}, \end{equation} where $\mathfrak{g}|_{c_i}$ is the subspace of $\mathfrak{g}$ corresponding to a particular component of size $c_i$ in $M(\mathfrak{g}).$ See Example~\ref{ex:decomp}. \begin{Ex}\label{ex:decomp} Consider $\mathfrak{g}=\mathfrak{p}_6^A\frac{2|1|1|2}{6}$ of our running Example \ref{ex:componentmeander}. Note that $\mathfrak{g}$ has homotopy type $\mathcal{H}(2,1)$, which yields the vector space decomposition $\mathfrak{g}=\mathfrak{g}|_2\oplus\mathfrak{g}|_1.$ See Figure~\ref{fig:decomp}. \begin{figure} \caption{Vector space decomposition of $\mathfrak{g} \label{fig:decomp} \end{figure} \end{Ex} We are now ready to formally define the core and peak of $\mathfrak{g}$. First, fix a component $\mathfrak{g}|_{c_i}$ of homotopy type $c_i$ and define the sets $$ V_{c_i}=\{I\;|\;v_I\text{ is a vertex on the path of $CM(\mathfrak{g})$ corresponding to }c_i\},$$ $$\textgoth{C}_{c_i}=\{I\times I\;|\;I\in V_{c_i}\}.$$ $$\textgoth{P}_{c_i}=\{I\times J\;|\;I,J\in V_{c_i}\text{ and } (v_I,v_J) \text{ is an edge in }\overrightarrow{CM}(\mathfrak{g})\}.$$ The set $\textgoth{C}_{c_i}$ is the \textit{core set of $\mathfrak{g}|_{c_i}$} -- the set of $c_i\times c_i$ blocks on the diagonal of $\mathfrak{g}$ contained in $\mathfrak{g}|_{c_i}$ -- and the set $\textgoth{P}_{c_i}$ is the \textit{peak set of $\mathfrak{g}|_{c_i}.$} Now, we define the \textit{core of $\mathfrak{g}$} and the \textit{peak of $\mathfrak{g}$} as the union of the core sets and peak sets, respectively, of the components in the vector space decomposition (\ref{eqn:decomp}). In other words, $$\textgoth{C}_\mathfrak{g}=\bigcup_{i=1}^q\textgoth{C}_{c_i}\hspace{2em}\text{ and }\hspace{2em}\textgoth{P}_\mathfrak{g}=\bigcup_{i=1}^q\textgoth{P}_{c_i}.$$ \noindent Since $\textgoth{C}_{\mathfrak{g}}$ and $\textgoth{P}_{\mathfrak{g}}$ consist of sets of ordered pairs of indices defining blocks of admissible locations in $\mathfrak{g},$ we refer to elements of $\textgoth{C}_{\mathfrak{g}}$ and $\textgoth{P}_{\mathfrak{g}}$ as \textit{core blocks} and \textit{peak blocks}, respectively. \begin{Ex} We illustrate the definitions above by constructing the core and peak sets of the seaweed of Example~\ref{ex:componentmeander}, $\mathfrak{g}=\mathfrak{p}_6^A\frac{2|1|1|2}{6}.$ Recall $\mathfrak{g}$ has homotopy type $\mathcal{H}(2,1).$ As illustrated in Figure~\ref{fig:commeanex} \textup(center\textup) above, the vertices of $CM(\mathfrak{g})$ are written as $v_{\{1,2\}},v_{\{3\}},v_{\{4\}},$ and $v_{\{5,6\}}.$ So, we have that $$V_2=\Big\{\{1,2\},\{5,6\}\Big\}\quad\quad\text{and}\quad\quad V_1=\Big\{\{3\},\{4\}\Big\},$$ so the core set of $\mathfrak{g}$ is $$\textgoth{C}_{\mathfrak{g}}=\Big\{\{(1,1),(1,2),(2,1),(2,2)\},\{(3,3)\},\{(4,4)\},\{(5,5),(5,6),(6,5),(6,6)\}\Big\}.$$ \noindent Now, to construct the peak set of $\mathfrak{g},$ consider $\overrightarrow{CM}(\mathfrak{g}).$ As illustrated by Figure~\ref{fig:commeanex} \textup(right\textup), we have that $$\textgoth{P}_2=\Big\{\{(1,5),(1,6),(2,5),(2,6)\}\Big\}\quad\quad\text{and}\quad\quad\textgoth{P}_1=\Big\{\{(3,4)\}\Big\},$$ so the peak set of $\mathfrak{g}$ is $$\textgoth{P}_{\mathfrak{g}}=\Big\{\{(1,5),(1,6),(2,5),(2,6)\},\{(3,4)\}\Big\}.$$ \noindent The core blocks and peak blocks of $\mathfrak{g}$ are bolded and outlined, respectively, in the seaweed in Figure~\ref{fig:cp}. 
\begin{figure} \caption{The core and peak blocks identified within $\mathfrak{g} \label{fig:cp} \end{figure} \end{Ex} \subsection{Regular one-forms} The core and peak of an index-one $\mathfrak{g}$ facilitate the definition of a family $\Phi$ of regular one-forms on $\mathfrak{g}$ reliant on the homotopy type of $\mathfrak{g}$. An element of $\Phi$ is of the form $\sum e_{i,j}^*$, where $e_{i,j}^*$ denotes the dual of the matrix unit $e_{i,j}$ and the sum is over a restricted set of admissible locations $(i,j)$ of $\mathfrak{g}$. The restrictions are determined by the homotopy type of $\mathfrak{g}$, which, for an index-one $\mathfrak{g}$, must be either $\mathcal{H}(2)$ or $\mathcal{H}(1,1)$. \subsection{Homotopy type $\mathcal{H}(2)$}\label{sec:h2} Let $\mathfrak{g}$ have homotopy type $\mathcal{H}(2)$, with attendant peak and core. Here, we construct a regular one-form $\varphi_{(2)}\in\Phi$ using $\textgoth{P}_{\mathfrak{g}}$ and $\textgoth{C}_{\mathfrak{g}}$. We find it convenient to mark the admissible locations which define $\varphi_{(2)}$ by black dots -- a black dot is placed in location $(i,j)$ if and only if $e_{i,j}^*$ is a nonzero summand of $\varphi_{(2)}$ -- according to the following schema. Note that the core and peak blocks are all $2\times 2$. A single black dot is placed in the upper left corner of each of the core blocks. For the peak blocks, black dots are placed along the diagonal. See Example~\ref{ex:Afunc}. \begin{Ex}\label{ex:Afunc} Recall that $\mathfrak{g}=\mathfrak{p}_8^A \frac{2|6}{8}$ has homotopy type $\mathcal{H}(2).$ The regular functional $\varphi_{(2)}$ has summands determined by locations of black dots in Figure~\ref{fig:Adots}. \begin{figure} \caption{The summands of $\varphi_{(2)} \label{fig:Adots} \end{figure} \noindent As Figure \ref{fig:Adots} displays, $\varphi_{(2)}$ is given by $$\varphi_{(2)}=e_{1,1}^*+e_{3,3}^*+e_{5,5}^*+e_{7,7}^*+e_{1,7}^*+e_{2,8}^*+e_{3,5}^*+e_{4,6}^*+e_{7,3}^*+e_{8,4}^*.$$ \end{Ex} \subsubsection{The kernel of $\varphi_{(2)}$ } In the proof of the main theorem in Section~\ref{sec:main}, we require a specific description of $\ker(B_{\varphi_{(2)}})=span\{k\}.$ To identify $k,$ we first require the following technical lemmas. \begin{lemma}\label{lem:evenpart} If $\mathfrak{g} = \mathfrak{p}_n^A \frac{a_1|\dots|a_m}{b_1|\dots|b_t}$ has homotopy type $\mathcal{H}(2)$, then $a_i$ and $b_j$ are even for all $1\leq i\leq m$ and $1\leq j\leq t$. \end{lemma} \begin{proof} Since $\mathfrak{g}$ has homotopy type $\mathcal{H}(2),$ its meander $M(\mathfrak{g})$ consists of exactly one cycle; moreover, each vertex has degree 2. In particular, each vertex is incident with exactly one top edge and exactly one bottom edge. However, if $a_i$ (resp. $b_j$) is odd for some $i$ (resp. $j$), then the middle vertex of block $a_i$ (resp. $b_j$), i.e., $v_{\sum_{k=1}^{i-1}a_k+\big\lceil \frac{a_i}{2}\big\rceil}$ \Big(resp. $v_{\sum_{k=1}^{j-1}b_k+ \big\lceil\frac{b_j}{2}\big\rceil}$\Big) is not incident to a top (resp. bottom) edge. The result follows. \end{proof} \begin{lemma}\label{lem:oppsign} Let $\mathfrak{g}$ have homotopy type $\mathcal{H}(2).$ If $e^*_{i,j}$ occurs as a nontrivial summand in $\varphi_{(2)}$, then $i$ and $j$ have the same parity. 
\end{lemma} \begin{proof} Let $\mathfrak{g}=\mathfrak{p}_n^A\frac{a_1|\dots|a_m}{b_1|\dots|b_t}$ have homotopy type $\mathcal{H}(2),$ and without loss of generality, assume there exists $s$ such that $a_s>2.$ Consider the collection of peak blocks determined by $a_s;$ in particular, consider the summands of $\varphi_{(2)}$ within each of these peak blocks. Such summands are $e_{i,j}^*$ and $e_{i+1,j+1}^*,$ where $$i=\sum_{r=1}^{s}a_{r}-1+2t\text{\ \ \ \ \ and\ \ \ \ \ } j=\sum_{r=1}^{s-1}a_{r}+1+2t$$ for $0\le t\le \lfloor\frac{a_s}{4}\rfloor$. Notice that $i$ and $j$ are both odd for all $t$, by Lemma~\ref{lem:evenpart}. Since $a_s$ was arbitrary and the same argument applies for all $b_s,$ the result follows. \end{proof} \noindent With Lemmas~\ref{lem:evenpart} and \ref{lem:oppsign} established, the following result identifies the generator of $\ker(B_{\varphi_{(2)}}).$ \begin{theorem}\label{thm:h2a} Let $\mathfrak{g}\subset\mathfrak{sl}(n)$ be a type-A seaweed with homotopy type $\mathcal{H}(2)$. If $\varphi_{(2)}$ is defined as above, then $$\ker(B_{\varphi_{(2)}})=\textup{span}\{k\},$$ where $$k=\sum_{i=1}^n(-1)^{i+1}e_{i,i}.$$ \end{theorem} \begin{proof} Since $\varphi_{(2)}$ is regular on $\mathfrak{g},$ we need only show that $k\in\ker(B_{\varphi_{(2)}}).$ As a consequence of Lemma~\ref{lem:evenpart}, we have that $n$ is even, so $tr(k)=0$ and $k\in\mathfrak{g}.$ Now, to see that $k\in\ker(B_{\varphi_{(2)}}),$ consider the following: \begin{itemize} \item[\textbf{(a)}] $\varphi_{(2)}([k,e_{i,i}-e_{i+1,i+1}])=0,$ for all $1\leq i\leq n-1,$ \item[\textbf{(b)}] $\varphi_{(2)}([k,e_{i,j}])=0,$ for all $e_{i,j}\in\mathfrak{g}$ such that $e_{i,j}^*$ is not a summand of $\varphi_{(2)},$ and \item[\textbf{(c)}] $\varphi_{(2)}([k,e_{i,j}])=\varphi_{(2)}(e_{i,j}-e_{i,j})=0,$ for all $e_{i,j}\in\mathfrak{g}$ such that $e_{i,j}^*$ is a summand of $\varphi_{(2)}.$ \end{itemize} Equations \textbf{(a)} and \textbf{(b)} follow immediately from $k$ being a diagonal element of $\mathfrak{g},$ and Equation \textbf{(c)} follows from Lemma~\ref{lem:oppsign}. Therefore, $k\in\ker(B_{\varphi_{(2)}}).$ \end{proof} \begin{Ex} Returning to the seaweed $\mathfrak{g}=\mathfrak{p}_8^A\frac{2|6}{8}$ from Example~\ref{ex:Afunc}, we have that $$\ker(B_{\varphi_{(2)}})=\textup{span}\{e_{1,1}-e_{2,2}+e_{3,3}-e_{4,4}+e_{5,5}-e_{6,6}+e_{7,7}-e_{8,8}\}.$$ \end{Ex} \subsection{Homotopy type $\mathcal{H}(1,1)$}\label{sec:h11} \textit{Notation}: Throughout this section, for a given graph $G,$ we denote its vertex set by $V(G)$ and its edge set by $E(G).$ When $\mathfrak{g}$ has homotopy type $\mathcal{H}(1,1),$ the core and peak blocks have dimension one, so $CM(\mathfrak{g})=M(\mathfrak{g})$. Moreover, $M(\mathfrak{g})$ must consist of two paths, say $P_1$ and $P_2$. Let $P_1$ be the path which contains $v_1$. Now define the one-form $\varphi_{(1,1)}\in \Phi$ as follows: \begin{eqnarray}\label{summation} \varphi_{(1,1)}=\sum_{(v_i,v_j)\in E(\overrightarrow{M}(\mathfrak{g}))}e_{i,j}^*+\sum_{v_i\in V(P_1)}e_{i,i}^*. \end{eqnarray} \noindent Note that all the terms in the first summation of (\ref{summation}) can be read directly from the edges of the directed meander in the obvious way. See Example \ref{ex:h11form}. \begin{Ex}\label{ex:h11form} Consider the seaweed $\mathfrak{g}=\mathfrak{p}_5^A\frac{1|4}{3|1|1}$ with directed meander shown in Figure~\ref{fig:H11ex} below. 
Note that $\mathfrak{g}$ has homotopy type $\mathcal{H}(1,1),$ so the above construction yields the regular one-form $$\varphi_{(1,1)}=e_{1,3}^*+e_{4,3}^*+e_{5,2}^*+e_{1,1}^*+e_{3,3}^*+e_{4,4}^*.$$ \begin{figure} \caption{\centering The directed meander $\protect\overrightarrow{M} \label{fig:H11ex} \end{figure} \end{Ex} \subsubsection{The kernel of $\varphi_{(1,1)}$ } \noindent By construction, the one-form $\varphi_{(1,1)}$ is regular. As in the $\mathcal{H}(2)$ case, we can also describe the kernel of its associated Kirillov form. See Theorem \ref{thm:h11ker}. \begin{theorem}\label{thm:h11ker} If $\mathfrak{g}$ has homotopy type $\mathcal{H}(1,1)$ with $\varphi_{(1,1)}$ defined as in \textup{(\ref{summation})}, then $$\ker(B_{\varphi_{(1,1)}})=\textup{span}\{h\},$$ where $$h=|V(P_2)|\sum_{i\in V(P_1)}e_{i,i}-|V(P_1)|\sum_{j\in V(P_2)}e_{j,j}.$$ \end{theorem} \begin{proof} Since $\varphi_{(1,1)}$ is regular on $\mathfrak{g},$ we need only show that $h\in\ker(B_{\varphi_{(1,1)}}).$ We first establish that $h\in\mathfrak{g}$ by noting that $tr(h)=|V(P_2)||V(P_1)|-|V(P_1)||V(P_2)|=0$. Now, to see that $h\in\ker(B_{\varphi_{(1,1)}}),$ consider the following: \begin{itemize} \item $\varphi_{(1,1)}([h,e_{i,i}-e_{i+1,i+1}])=0,$ for all $1\leq i\leq |V(P_1)|+|V(P_2)|-1,$ \item $\varphi_{(1,1)}([h,e_{i,j}])=0,$ for all $e_{i,j}\in\mathfrak{g}$ such that $e_{i,j}^*$ is not a summand of $\varphi_{(1,1)},$ \item $\varphi_{(1,1)}([h,e_{i,j}])=\varphi_{(1,1)}(|V(P_2)|(e_{i,j}-e_{i,j}))=0,$ for all $e_{i,j}\in\mathfrak{g}$ such that $(v_i,v_j)\in E(P_1),$ and \item $\varphi_{(1,1)}([h,e_{i,j}])=\varphi_{(1,1)}(|V(P_1)|(e_{i,j}-e_{i,j}))=0,$ for all $e_{i,j}\in\mathfrak{g}$ such that $(v_i,v_j)\in E(P_2).$ \end{itemize} The result follows. \end{proof} \begin{Ex} Returning to the seaweed $\mathfrak{g}=\mathfrak{p}_5^A\frac{1|4}{3|1|1}$ from Example~\ref{ex:h11form}, we have that $$\ker(B_{\varphi_{(1,1)}})=\textup{span}\{2e_{1,1}-3e_{2,2}+2e_{3,3}+2e_{4,4}-3e_{5,5}\}.$$ \end{Ex} \section{Main results}\label{sec:main} In this section, we establish the main result of this article. We first need some notation and a few general results about contact Lie algebras. \noindent \textit{Notation: In this section, we will use $\mathfrak{f}$ to denote an arbitrary Lie algebra and we will continue with our established convention of letting $\mathfrak{g}$ represent a type-$A$ seaweed.} Let $\mathfrak{f}$ be a $(2k+1)-$dimensional contact Lie algebra with contact form $\varphi.$ Fix an ordered basis\linebreak $\mathscr{B}=\{E_1,\dots,E_{2k+1}\}$ of $\mathfrak{f}$ and define $C(\mathfrak{f},\mathscr{B})=([E_i,E_j])_{1\leq i,j\leq 2k+1}$ to be the \textit{commutator matrix} associated to $\mathfrak{f}$ and indexed by basis $\mathscr{B}$. The contact form $\varphi$ can be applied to each element of $C(\mathfrak{f},\mathscr{B})$ to yield the matrix $$[B_{\varphi}]=\varphi(C(\mathfrak{f},\mathscr{B})).$$ Denote by $[\varphi]=(x_1\dots x_{2k+1})^t$ the coordinate vector in $\mathbb{C}^{2k+1}$ such that $\varphi=\sum_{i=1}^{2k+1} x_iE_i^*$ where $\{E_1^*,\dots,E_{2k+1}^*\}$ is the ``dual basis'' associated to $\mathscr{B}$ and consider the square $(2k+2)-$dimensional skew-symmetric matrix $$\left[\widehat{B}_{\varphi}\right]=\begin{bmatrix} 0 & [\varphi]^t\\ -[\varphi] & \varphi(C(\mathfrak{\mathfrak{f}})) \end{bmatrix}.$$ \noindent A straightforward computation yields the following technical lemma. 
\begin{lemma}[Salgado \textbf{\cite{Sally}}, 2019]\label{lem:det} Let $\mathfrak{f}$ be a Lie algebra with $\dim\mathfrak{g}=2k+1$ and $\varphi\in\mathfrak{f}^*.$ Using the notation developed above, $$\varphi\wedge(d\varphi)^k=\det\left(\left[\widehat{B}_{\varphi}\right]\right) E_1^*\wedge \dots \wedge E_{2k+1}^*. $$ Therefore, $\varphi$ is a contact form on $\mathfrak{f}$ if and only if $$\det\left(\left[\widehat{B}_{\varphi}\right]\right)\neq 0.$$ \end{lemma} \noindent From Lemma~\ref{lem:det} above, we obtain the following useful characterization of a contact Lie algebra. \begin{lemma}\label{lem:kernel} An index-one Lie algebra $\mathfrak{f}$ is contact if and only if there exists a regular one-form $\varphi\in\mathfrak{f}^*$ for which there is an element $x\in\mathfrak{f}$ with $\ker(B_{\varphi})=span\{ x\}$ and $\varphi(x)\neq 0.$ \end{lemma} \begin{proof} To establish the forward implication, we need only recall that every contact form $\varphi$ has a unique Reeb vector, $x_{\varphi}\in\mathfrak{f}$ defined by the equations $$B_{\varphi}(x_{\varphi},-)=0$$ and $$\varphi(x_{\varphi})=1.$$ For the reverse implication, let $\varphi\in\mathfrak{f}^*$ be a regular one-form, i.e., $\dim\ker(B_{\varphi})={\rm ind \hspace{.1cm}}\mathfrak{f}=1.$ Let $x\in\mathfrak{f}$ denote a generator of $\ker(B_{\varphi}),$ then extend $x$ to a basis of $\mathfrak{f},$ and consider $\det\left[\widehat{B}_{\varphi}\right]$ with respect to this basis. Computing, we have that $$\det\left[\widehat{B}_{\varphi}\right]=\varphi(x)^2\det\left[B'_{\varphi}\right],$$ where $B'_{\varphi}$ is the submatrix of $\left[B_{\varphi}\right]$ with the row and column indexed by $\varphi(x)$ removed. Since $\varphi$ is a regular one-form, $B'_{\varphi}$ has full rank, so $\det\left[B'_{\varphi}\right]\neq 0.$ The result follows from Lemma~\ref{lem:det}. \end{proof} We are now in a position to prove the main theorem of this article. \begin{theorem}\label{thm:main} If $\mathfrak{g}$ is a type-A seaweed, then $\mathfrak{g}$ is contact if and only if ${\rm ind \hspace{.1cm}}\mathfrak{g}=1.$ \end{theorem} \begin{proof} Let $\mathfrak{g}$ be an index-one seaweed subalgebra of $\mathfrak{sl}(n)$. Recall from Theorem~\ref{cor:index A} that this means $\mathfrak{g}$ has one of the following homotopy types: $\mathcal{H}(2)$ or $\mathcal{H}(1,1)$. We proceed by treating each homotopy type as its own case, using the constructions and notation of Section~\ref{sec:framework} and then applying Lemma~\ref{lem:kernel}. \noindent \textbf{Case 1:} $\mathfrak{g}$ has homotopy type $\mathcal{H}(2).$ Using the notation of Section~\ref{sec:h2}, we claim that $\varphi_{(2)}$ is a contact form on $\mathfrak{g}.$ Recall from Theorem~\ref{thm:h2a} that $$\ker(B_{\varphi_{(2)}})=\textup{span}\{k\}=\textup{span}\left\{\sum_{i=1}^n(-1)^{i+1}e_{i,i}\right\},$$ and notice that $\varphi_{(2)}(k)=\frac{n}{2}\neq 0.$ An application of Lemma~\ref{lem:kernel} establishes the claim. \noindent \textbf{Case 2:} $\mathfrak{g}$ has homotopy type $\mathcal{H}(1,1).$ Using the notation of Section~\ref{sec:h11}, we claim that $\varphi_{(1,1)}$ is a contact form on $\mathfrak{g}.$ Recall from Theorem~\ref{thm:h11ker} that $$\ker(B_{\varphi_{(1,1)}})=\textup{span}\{h\}=\textup{span}\left\{|V(P_2)|\sum_{i\in V(P_1)}e_{i,i}-|V(P_1)|\sum_{j\in V(P_2)}e_{j,j}\right\},$$ and notice that $\varphi_{(1,1)}(h)=|V(P_2)||V(P_1)|\neq 0.$ An application of Lemma~\ref{lem:kernel} establishes the claim. 
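As a concrete check of this last computation (an illustrative verification added here, not part of the original argument), consider the seaweed $\mathfrak{g}=\mathfrak{p}_5^A\frac{1|4}{3|1|1}$ of Example~\ref{ex:h11form}, for which $|V(P_1)|=3$ and $|V(P_2)|=2$: indeed, $$\varphi_{(1,1)}(h)=\varphi_{(1,1)}\big(2e_{1,1}-3e_{2,2}+2e_{3,3}+2e_{4,4}-3e_{5,5}\big)=2+2+2=6=|V(P_2)||V(P_1)|\neq 0.$$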
\end{proof} The following corollary identifies the only $n$ for which $\mathfrak{sl}(n,\mathbb{C})$ is contact, providing another proof for a well-known result (cf. \textbf{\cite{RG}}). \begin{corollary} The Lie algebra $\mathfrak{sl}(n,\mathbb{C})$ is contact if and only if $n=2.$ \end{corollary} \begin{proof} Note that $\mathfrak{sl}(n,\mathbb{C})$ is a type-$A$ seaweed; in particular, it is the seaweed $\mathfrak{p}_n^A\frac{n}{n}.$ Further, its meander consists of $\lfloor\frac{n}{2}\rfloor$ cycles and, if $n$ is odd, a degenerate path. By Theorem~\ref{thm:indexA}, we have that ${\rm ind \hspace{.1cm}}\mathfrak{sl}(n,\mathbb{C})=1$ if and only if $n=2,$ and the result follows from an application of Theorem~\ref{thm:main}. \end{proof} \section{Examples}\label{sec:examples} Theorem~\ref{thm:indexA} provides an elegant combinatorial formula for the index -- and is critical to the proof heuristics of our main Theorem~\ref{thm:main} -- but to use it to quickly identify, or construct, contact seaweeds is nettlesome, having to first construct the associated meander and then count the number and type of connected components. Fortunately, for seaweeds consisting of a small number of parts, one can determine the index with some dispatch. In particular, the following theorem provides an explicit index formula presented in terms of the greatest common divisor of two arguments, each of which is a linear combination of the terms in the seaweed's defining composition (see \textbf{\cite{Cam}}). We can use these formulas to manufacture an unlimited supply of contact seaweeds, of arbitrarily large dimension. \begin{theorem}[Coll et al. \textbf{\cite{meanders2}}, 2015]\label{thm:3parts} The seaweed $\mathfrak{p}_n^A\frac{a|b|c}{n}$ has index $\gcd(a+b,b+c)-1.$ \end{theorem} \noindent So, to generate contact seaweeds of the form $\mathfrak{p}_n^A\frac{a|b|c}{n},$ we need only ensure $\gcd(a+b,b+c)=2.$ This is easy to do; consider, for example, $\mathfrak{p}_5^A\frac{1|1|3}{5}$, $\mathfrak{p}_7^A\frac{1|3|3}{7}$, $\mathfrak{p}_{12}^A\frac{4|2|6}{12},$ etc. Moreover, if $b=0$, an immediate corollary to Theorem~\ref{thm:3parts} gives an index formula in the maximal parabolic case. \begin{theorem}[Elashvili \textbf{\cite{Elashvili}}, 1990]\label{thm:elashvili} The seaweed $\mathfrak{p}_n^A\frac{a|c}{n}$ has index $\gcd(a,c)-1.$ \end{theorem} \noindent Using Theorem \ref{thm:elashvili}, the reader will have no difficulty constructing examples of maximally parabolic type-$A$ contact seaweeds. Of course, Theorems~\ref{thm:3parts} and~\ref{thm:elashvili} can also be used to generate Frobenius seaweeds since, by Theorem \ref{thm:indexA}, their associated meanders must consist of a single path. Here are a couple of examples: $\mathfrak{p}_5^A\frac{2|3}{5}$ and $\mathfrak{p}_8^A\frac{1|2|5}{8}.$ See Figure~\ref{fig:frob}, where the meanders of these two Frobenius seaweeds are displayed. \begin{figure} \caption{Meanders} \label{fig:frob} \end{figure} \end{document}
\begin{document} \title[On Periodic Solutions to Lagrangian System] { On Periodic Solutions to Lagrangian System With Singularities } \author[Oleg Zubelevich]{Oleg Zubelevich\\ \\\tt Dept. of Theoretical mechanics, \\ Mechanics and Mathematics Faculty,\\ M. V. Lomonosov Moscow State University\\ Russia, 119899, Moscow, MGU \\ } \date{} \thanks{Partially supported by grant RFBR 18-01-00887} \subjclass[2000]{34C25, 70F20 } \keywords{Lagrangian systems, periodic solutions, inverse square potential, two fixed center problem.} \begin{abstract}A Lagrangian system with singularities is considered. The configuration space is a non-compact manifold that depends on time. A set of periodic solutions has been found. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}{Definition}[section] \section{Introduction} Let us start from a model example. Consider the plane $\mathbb{R}^2$ with the standard Cartesian frame $$Oxy,\quad \boldsymbol r=x\boldsymbol e_x+y\boldsymbol e_y$$ and the standard Euclidean norm $|\cdot|$. A particle of mass $m$ moves in the plane under the influence of a force with the potential $$V(\boldsymbol r)=-\gamma\Big(\frac{1}{|\boldsymbol r-\boldsymbol r_0|^n}+\frac{1}{|\boldsymbol r+\boldsymbol r_0|^n}\Big),$$ here $\gamma>0$ is a constant; $\boldsymbol r_0\ne 0$ is a given vector. The Lagrangian of this system is as follows \begin{equation}\label{dtgt66}L(\boldsymbol r,\boldsymbol {\dot r})=\frac{m}{2}|\boldsymbol {\dot r}|^2-V(\boldsymbol r).\end{equation} If $n=1$ then this system is the classical two fixed centers problem, and it is integrable \cite{arn}. The case of potentials of type $V(\boldsymbol r)\sim -1/|\boldsymbol r|^2$ is mainly encountered in quantum mechanics, but the classical statement is also studied; see \cite{Martinez}, \cite{Krishnaswami} and references therein. One of the consequences of the main Theorem \ref{main_theor} (see the next section) is as follows. \begin{prop}\label{xvfvvvv} For any $\omega>0,\quad n\ge 2$ and for any $m\in\mathbb{N}$ system (\ref{dtgt66}) has an $\omega$-periodic solution $\boldsymbol r(t)$ that first coils $m$ times clockwise about the point $\boldsymbol r_0$ and then coils $m$ times counterclockwise about the point $-\boldsymbol r_0$. If $m=1$ then the solution $\boldsymbol r(t)$ forms an $8$-like curve in the plane. \end{prop} This fact does not hold for $n=1$ \cite{ger}. Consider another purely classical mechanics example. Introduce in our space an inertial Cartesian frame $Oxyz$ such that the gravity is $\boldsymbol g=-g\boldsymbol e_z$. Let a particle of mass $m$ be influenced by gravity and slide without friction on the surface $$z=f(\boldsymbol r)=-\gamma\Big(\frac{1}{|\boldsymbol r-\boldsymbol r_0|^n}+\frac{1}{|\boldsymbol r+\boldsymbol r_0|^n}\Big),\quad \boldsymbol r=x\boldsymbol e_x+y\boldsymbol e_y.$$ The corresponding Lagrangian is \begin{equation}\label{fgrgtcc}L(\boldsymbol r,\boldsymbol {\dot r})=\frac{m}{2}\Big(|\boldsymbol{\dot r}|^2+\big(\nabla f(\boldsymbol r),\boldsymbol{\dot r}\big)^2\Big)-mgf(\boldsymbol r).\end{equation} Proposition \ref{xvfvvvv} holds for system (\ref{fgrgtcc}) also. Existence problems for periodic solutions to Lagrangian systems have been studied intensively since the beginning of the 20th century and even earlier. There is an immense number of different results and methods developed in this field. We mention only a few of them which are most closely related to this article.
In \cite{capozzi} periodic solutions have been obtained for the Lagrangian system with Lagrangian $$L(t,x,\dot x)=\frac{1}{2}g_{ij}(x)\dot x^i\dot x^j-W(t,x),\quad x=(x^1,\ldots,x^m)\in\mathbb{R}^m.$$ Here and in the sequel we use the Einstein summation convention. The form $g_{ij}$ is symmetric and positive definite: $$g_{ij}\xi^i\xi^j\ge \mathrm{const}_1\cdot|\xi|^2.$$ The potential is as follows: $W(t,x)=V(x)+g(t)\sum_{i=1}^m x^i,$ where $V$ is a bounded function, $|V(x)|\le\mathrm{const}_2$, and $g$ is an $\omega-$periodic function. The functions $V,g_{ij}$ are even. Under these assumptions the authors prove that there exists a nontrivial $\omega-$periodic solution. Our main tool for obtaining periodic solutions is the variational technique. Variational problems and Hamiltonian systems have been studied extensively. Classic references for these subjects are \cite{23}, \cite{28}, \cite{14}. \section{The Main Theorem} Introduce some notation. Let $x=(x^1,\ldots, x^m)$ and $\ph=(\ph^1,\ldots,\ph^n)$ be points of the standard $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Then let $z$ stand for the point $(x,\varphi)\in \mathbb{R}^{m+n}$. By $|\cdot|$ denote the standard Euclidean norm of $\mathbb{R}^k,\quad k=m,m+n,$ that is, $|x|^2=\sum_{i=1}^k(x^i)^2.$ The variable $z$ may consist of $x$ alone, without $\varphi$, or conversely. Assume that we are provided with a set $\sigma\subset\mathbb{R}^{m+n}$ such that 1) if $z=(x,\varphi)\in\sigma$ then $-z\in\sigma$ and $(x,\varphi+2\pi p)\in\sigma$ for any $p=(p^1,\ldots ,p^n)\in\mathbb{Z}^n$; 2) the set $\sigma$ does not have accumulation points. Introduce a manifold $\Sigma=\mathbb{R}^{m+n}\backslash\sigma.$ \begin{rem}\label{remqq}Actually, the space $\mathbb{R}^{m+n}$ wraps a cylinder $\mathcal C=\mathbb{R}^m\times \mathbb{T}^n$, where $\mathbb{T}^n$ is the torus with angle variables $\varphi$. For the same reason, $\Sigma$ can be regarded as the cylinder with several points removed. Nevertheless, for analysis purposes we prefer to deal with $\mathbb{R}^{m+n}$. \end{rem} The main object of our study is the following Lagrangian system with Lagrangian \begin{equation}\label{dtgd5}L(t,z,\dot z)=\frac{1}{2}g_{ij}\dot z^i\dot z^j+a_i\dot z^i-V,\quad z=(z^1,\ldots,z^{m+n}).\end{equation} \begin{rem}The term $a_i\dot z^i$ in the Lagrangian corresponds to the so-called gyroscopic forces. For example, the Coriolis force and the Lorentz force are gyroscopic. \end{rem} The functions $g_{ij},a_i$ depend on $(t,z)$ and belong to $C^2(\mathbb{R}\times \Sigma)$; moreover, all these functions are $2\pi-$periodic in each variable $\varphi^j$ and $\omega-$periodic in the variable $t,\quad \omega>0$. For all $(t,z)\in \mathbb{R}\times \Sigma$ it follows that $g_{ij}=g_{ji}$. The function $V(t,z)\in C^2(\mathbb{R}\times\Sigma)$ is also $2\pi-$periodic in each variable $\varphi^j$ and $\omega-$periodic in the variable $t$. We also assume that there are positive constants $C,M,A,K,P$ such that for all $(t,z)\in\mathbb{R}\times \Sigma$ and $ \xi\in \mathbb{R}^{m+n}$ we have \begin{equation}\label{sdfff}|a_i(t,z)|\le C+M|z|,\quad \frac{1}{2}g_{ij}(t,z)\xi^i\xi^j\ge K|\xi|^2;\end{equation} for all $(t,z)\in\mathbb{R}\times\Sigma,\quad \sigma'\in\sigma$ it follows that \begin{equation}\label{q_sdfff}V(t,z)\le A|z|^2-\frac{P}{|z-\sigma'|^2}+C_1.\end{equation} The Lagrangian $L$ is defined up to an additional constant (see Definition \ref{dfg5} below), so the constant $C_1$ is not essential.
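To see how these hypotheses cover the model problem from the Introduction (an illustrative verification added here, not part of the original text), take $z=\boldsymbol r$, $\sigma=\{\boldsymbol r_0,-\boldsymbol r_0\}$ and the two-center potential of (\ref{dtgt66}) with $n\ge 2$. Dropping one of the two negative terms, for every $\sigma'\in\sigma$ we have $$V(\boldsymbol r)\le-\frac{\gamma}{|\boldsymbol r-\sigma'|^{n}}\le-\frac{\gamma}{|\boldsymbol r-\sigma'|^{2}}+\gamma,$$ since $t^{-n}\ge t^{-2}$ for $0<t\le 1$ and $\gamma t^{-2}\le\gamma$ for $t\ge 1$. Thus (\ref{q_sdfff}) holds with $P=\gamma$, $C_1=\gamma$ and $A>0$ arbitrarily small, while (\ref{sdfff}) holds with $a_i\equiv 0$ and $K=m/2$, the kinetic term of (\ref{dtgt66}) being $\frac{m}{2}|\boldsymbol{\dot r}|^2$.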
System (\ref{dtgd5}) obeys the following ideal constraints: \begin{equation}\label{constr} f_j(t,z)=0,\quad j=1,\ldots,l<m+n,\quad f_j\in C^2(\mathbb{R}^{m+n+1}).\end{equation} The functions $f_j$ are also $2\pi-$periodic in each variable $\varphi^j$ and $\omega-$periodic in the variable $t$. Introduce a set $$F(t)=\{z\in\mathbb{R}^{m+n}\mid f_j(t,z)=0,\quad j=1,\ldots,l\}.$$ Assume that \begin{equation}\label{dfggfgfg3e} \mathrm{rank}\,\Big(\frac{\partial f_j}{\partial z^k}(t,z)\Big)=l\end{equation}for all $z\in F(t)$. So that $F(t)$ is a smooth manifold. Assume also that all the functions $f_j$ are either odd: \begin{equation}\label{ncdnvv1} f_j(-t,-z)=-f_j(t,z)\end{equation} or even \begin{equation}\label{ncdnvv2} f_j(-t,-z)=f_j(t,z).\end{equation} The set $\sigma$ belongs to $F(t)$: \begin{equation}\label{dvt5y56}\sigma\subset F(t).\end{equation} \begin{rem} Actually, it is sufficient to say that all the functions are defined and have formulated above properties in some open symmetric vicinity of the manifold $F(t)$. We believe that this generalization is unimportant and keep referring to the whole space $\mathbb{R}^{m+n}$ just for simplicity of exposition. \end{rem} \begin{rem} The set $\sigma$ can be void. In this case the condition (\ref{dvt5y56}) is dropped and formula (\ref{q_sdfff}) takes the form $$V(t,z)\le A|z|^2+C_1.$$ \end{rem} \begin{df}\label{dfg5}We shall say that a function $z(t)\in C^2(\mathbb{R},\Sigma)$ is a solution to system (\ref{dtgd5}), (\ref{constr}) if there exists a set of functions $$\{\alpha^1,\ldots,\alpha^l\}\subset C(\mathbb{R})$$ such that \begin{equation}\label{xvvv}\frac{d}{dt}\frac{\partial L}{\partial \dot z^i}(t,z(t),\dot z(t))-\frac{\partial L}{\partial z^i}(t,z(t),\dot z(t))=\alpha^j(t)\frac{\partial f_j}{\partial z^i}(t,z(t))\end{equation} and \begin{equation}\label{cfgbgg}z(t)\in F(t)\end{equation} for all real $t$. In the absence of constraints (\ref{constr}) right side of (\ref{xvvv}) is equal to zero and the condition (\ref{cfgbgg}) is dropped. \end{df}The functions $\alpha^i$ are defined from equations (\ref{xvvv}), (\ref{constr}) uniquely. \begin{df}Let $$\mathcal V_p\subset C(\mathbb{R},\Sigma),\quad p=(p^1,\ldots,p^n)\in\mathbb{Z}^n$$ stand for a set of functions $z(t)=(x(t),\varphi(t))$ that satisfy the following properties for all $t\in\mathbb{R}$: 1) $x(t+\omega)=x(t);\quad \varphi(t+\omega)=\varphi(t)+2\pi p,$ 2) $z(t)\in F(t)$. In the absence of constraints (\ref{constr}) condition 2) is omitted. \end{df} \begin{df} We shall say that two functions $z_1,z_2\in\mathcal V_p$ are homotopic to each other iff there exists a mapping $z(s,t)\in C([0,1]\times\mathbb{R},\Sigma)$ such that 1) $z(s,\cdot)\in\mathcal V_p,$ 2) $z(0,t)=z_1(t),\quad z(1,t)=z_2(t).$ By $[z]$ we denote the homotopy class of the function $z$. 
\end{df} \begin{theorem}\label{main_theor} Assume that 1) all the functions are even: $$g_{ij}(-t,-z)=g_{ij}(t,z),\quad a_i(-t,-z)=a_i(t,z),\quad V(-t,-z)=V(t,z);$$ 2) the following inequality holds \begin{equation}\label{xcxddfd}K-\frac{M\omega}{\sqrt 2}-\frac{A\omega^2}{2}>0;\end{equation} 3) for some $\nu\in\mathbb{Z}^n$ the set $\mathcal V_\nu$ is non-void and there is a function $$ \tilde z(t)\in\mathcal V_\nu\cap C^1(\mathbb{R},\Sigma);\quad \tilde z(-t)=-\tilde z(t);$$ 4) and \begin{equation}\label{cvbvggdfgkk} \inf\{\|z-\sigma'\|_{C[0,\omega]}\mid z\in [\tilde z],\quad \sigma'\in\sigma\}>0.\end{equation} Then system (\ref{dtgd5}), (\ref{constr}) has an odd solution $$z_*(t)\in [\tilde z],\quad z_*(-t)=-z_*(t).$$ This assertion remains valid in the absence of constraints (\ref{constr}). When $\sigma=\emptyset$, the assertion remains valid with condition 4) omitted. \end{theorem} Actually, the solution stated in this theorem is as smooth as the smoothness of the Lagrangian $L$ and the functions $f_j$ allows, up to $C^\infty$. Loosely speaking, condition 4) of this theorem (formula (\ref{cvbvggdfgkk})) means that we look for a solution among the curves that cannot be shrunk to a point. See, for example, the problem from the Introduction. \begin{rem} If none of the functions depends on $t$, then we can choose $\omega$ to be arbitrarily small and inequality (\ref{xcxddfd}) is satisfied. Taking a vanishing sequence of $\omega$, we obtain infinitely many periodic solutions of the same homotopy type. \end{rem} \begin{rem}\label{dtrgrr}Condition 2) of the Theorem is essential. Indeed, the system $$L(t,x,\dot x)=\frac{1}{2}\dot x^2-\frac{1}{2}\Big(x-\sin t\Big)^2$$ obeys all the conditions except inequality (\ref{xcxddfd}). It is easy to see that the corresponding equation $\ddot x+x=\sin t$ does not have periodic solutions. \end{rem} \subsection{Examples} Our first example is as follows. \begin{figure} \caption{the tube and the ball} \label{overflow} \end{figure} {\it A thin tube can rotate freely in the vertical plane about a fixed horizontal axis passing through its centre $O$. The moment of inertia of the tube about this axis is equal to $J$. The mass of the tube is distributed symmetrically such that the tube's centre of mass is placed at the point $O$. Inside the tube there is a small ball which can slide without friction. The mass of the ball is $m$. The ball can pass by the point $O$ and fall out from the ends of the tube. The system undergoes the standard gravity field $\boldsymbol g$.} It seems evident that, for a typical motion, the ball reaches an end of the tube and falls out of it. It is surprising, at least at first glance, that this system has very many periodic solutions such that the tube turns around several times during the period. The sense of the generalized coordinates $\phi,x$ is clear from Figure \ref{overflow}. The Lagrangian of this system is as follows \begin{equation}\label{Lagr}L(x,\phi,\dot x,\dot \phi)= \frac{1}{2}\Big(mx^2+J\Big)\dot\phi^2+\frac{1}{2}m\dot x^2-mgx\sin\phi.\end{equation} From Theorem \ref{main_theor} it follows that for any constant $\omega>0$ system (\ref{Lagr}) has a solution $\phi(t),x(t),\quad t\in\mathbb{R}$ such that 1) $x(t)=-x(-t),\quad \phi(t)=-\phi(-t);$ 2) $x(t+\omega)=x(t),\quad \phi(t+\omega)=\phi(t)+2\pi.$ This result shows that for any $\omega>0$ the system has an $\omega-$periodic motion such that the tube turns around once during the period. The length of the tube should be chosen properly.
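It may be worth recording why Theorem \ref{main_theor} applies here (an illustrative check added by us, not part of the original text). For the Lagrangian (\ref{Lagr}) the gyroscopic terms vanish, $a_i\equiv 0$, and $$\frac{1}{2}\Big(mx^2+J\Big)\dot\phi^2+\frac{1}{2}m\dot x^2\ge\frac{\min(m,J)}{2}\big(\dot x^2+\dot\phi^2\big),$$ so one may take $K=\min(m,J)/2$ and $M$ arbitrarily small; moreover, $mgx\sin\phi\le mg|x|\le Ax^2+\frac{(mg)^2}{4A}$ for every $A>0$. Hence, for any prescribed $\omega>0$, choosing $A$ and $M$ small enough yields inequality (\ref{xcxddfd}); the evenness conditions are immediate, $\sigma=\emptyset$, there are no constraints, and the odd curve $\tilde z(t)=\big(0,2\pi t/\omega\big)$ lies in $\mathcal V_1\cap C^1(\mathbb{R},\Sigma)$.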
Our second example is a counterexample. Let us show that the first condition of the theorem \ref {main_theor} can not be omitted. {\it Consider a mass point $m$ that slides on a right circular cylinder of radius $r$. The surface of the cylinder is perfectly smooth. The axis $x$ of the cylinder is parallel to the gravity $\boldsymbol g$ and directed upwards.} The Lagrangian of this system is \begin{equation}\label{xvfvffvvf}L(x,\varphi,\dot x,\dot\varphi)=\frac{m}{2}\Big(r^2\dot \varphi^2+\dot x^2\Big)-mgx.\end{equation} All the conditions except the evenness are satisfied but it is clear this system does not have periodic solutions. \section{ Proof of Theorem \ref{main_theor} } In this section we use several standard facts from functional analysis and the Sobolev spaces theory \cite{edv}, \cite{adams}. By $c_1,c_2,\ldots$ we denote inessential positive constants. \subsection{} Recall that the Sobolev space $H^1_{\mathrm{loc}}(\mathbb{R})$ consists of functions $u(t),t\in\mathbb{R}$ such that $u,\dot u\in L^2_{\mathrm{loc}}(\mathbb{R})$. The following embedding holds $H^1_{\mathrm{loc}}(\mathbb{R})\subset C(\mathbb{R})$. Recall another standard fact. \begin{lemma}\label{xddddd} Let $u\in H^1_{\mathrm{loc}}(\mathbb{R})$ and $u(0)=0$. Then for any $a>0$ we have $$\|u\|^2_{L^2(0,a)}\le \frac{a^2}{2}\|\dot u\|^2_{L^2(0,a)},\quad \|u\|^2_{C[0,a]}\le a \|\dot u\|^2_{L^2(0,a)}.$$\end{lemma} Here and below the notation $\|\dot u\|_{L^2(0,a)}$ implies that $$\Big\|\dot u\Big|_{(0,a)}\Big\|_{L^2(0,a)}$$ the same is concerned to $\|u\|_{C[0,a]}$ etc. \subsection{}Here we collect several spaces which are needed in the sequel. \begin{df} By $X$ denote a space of functions $u\in H^1_{\mathrm{loc}}(\mathbb{R})$ such that for all $t\in\mathbb{R}$ the following conditions hold $$u(-t)=-u(t),\quad u(t+\omega)=u(t).$$ \end{df} By virtue of lemma \ref{xddddd}, the mapping $u\mapsto \|\dot u\|_{L^2(0,\omega)}$ determines a norm in $X$. This norm is denoted by $\|u\|$. The norm $\|\cdot\|$ is equivalent to the standard norm of $H^1[0,\omega]$. The space $(X,\|\cdot\|)$ is a Banach space. Since the norm $\|\cdot\|$ is generated by an inner product $$(u,v)_X=\int_0^{\omega}\dot u(t)\dot v(t)dt$$ the space $X$ is also a real Hilbert space, particularly this implies that $X$ is a reflexive Banach space. \begin{df} Let $\Phi$ stand for the space $\{ct+u(t)\mid c\in\mathbb{R},\quad u\in X\}$. \end{df} By the same argument, $(\Phi,\|\cdot\|)$ is a reflexive Banach space. Observe also that $\Phi=\mathbb{R}\oplus X$ and by direct calculation we get $$\|\psi\|^2=\omega c^2+\|u\|^2,\quad \psi(t)=ct+u(t)\in\Phi.$$ Observe that $X\subset \Phi$. \begin{df} Let $E$ stand for the space $$X^m\times\Phi^n=\{z(t)=(x^1,\ldots, x^m,\varphi^1,\ldots,\ph^n)(t)\mid x^i\in X,\quad \varphi^j\in\Phi\}.$$\end{df} The space $E$ is also a real Hilbert space with an inner product defined as follows $$(z,y)_E=\int_0^{\omega}\sum_{i=1}^{m+n}\dot z^i(t)\dot y^i(t)dt,$$ where $ z=(z^k),y=(y^k)\in E,\quad k=1,\ldots, m+n.$ We denote the corresponding norm in $E$ by the same symbol and write $$\|z\|^2=\big\||z|\big\|^2=\sum_{k=1}^{m+n}\|z^k\|^2.$$ The space $E$ is also a reflexive Banach space. Introduce the following set $$E_0=\Big\{(x,\varphi)\in E\Big|\varphi^j=\frac{2\pi \nu_j}{\omega}t+u_j,\quad \forall x\in X,\quad \forall u_j\in X,\quad j=1,\ldots,n\Big\}.$$ This set is a closed plane of codimension $n$ in $E$. 
If $(x,\varphi)\in E_0$ then $\varphi(t+\omega)=\varphi(t)+2\pi \nu.$ \begin{df} Let $Y$ stand for the space $$\big\{u\in L^2_{\mathrm{loc}}(\mathbb{R})\mid u(t)=u(-t),\quad u(t+\omega)=u(t)\quad \mbox{almost everywhere in }\mathbb{R}\big\}.$$ \end{df} The space $Y^{m+n}$ is a Hilbert space with respect to the inner product $$(z,y)_Y=\int_0^{\omega}\sum_{i=1}^{m+n} z^i(t) y^i(t)dt,$$ where $ z=(z^k),y=(y^k)\in Y^{m+n},\quad k=1,\ldots, m+n.$ \subsection{} Fix a positive constant $K_1$ and consider a function \begin{equation}\label{vfdgdfg66}u:[0,\omega]\to\Sigma,\quad u(t)=(u^1,\ldots,u^{m+n})(t),\quad u^i\in H^1[0,\omega]\end{equation} such that the following two conditions hold \begin{equation}\label{fgggfdvv} \int_0^\omega\frac{dt}{|u(t)-\hat u|^2}\le K_1,\quad \|u\|_{H^1[0,\omega]}\le K_1\end{equation} with some constant vector $\hat u\in \mathbb{R}^{m+n}$. \begin{lemma}\label{xvcvcccc} There exists a positive constant $K_2$ such that for any function $u$ that satisfies (\ref{vfdgdfg66}) and (\ref{fgggfdvv}) one has $$|u(t)-\hat u|\ge K_2\sqrt{\sum_{k=1}^{n+m}\|u^k-\hat u^k\|^2_{C^{0,1/2}[0,\omega]}},\quad t\in[0,\omega],\quad j=1,\ldots,N.$$The constant $K_2$ depends only on $K_1$. \end{lemma}Here $C^{0,1/2}[0,\omega]$ is the Holder space. {\it Proof of Lemma \ref{xvcvcccc}.} By $c_1,c_2,\ldots$ we denote inessential positive constants. Observe that $H^1[0,\omega]\subset C^{0,1/2}[0,\omega]$ and \begin{equation}\label{xfvbvvv} \|\cdot\|_{C^{0,1/2}[0,\omega]}\le c_1\|\cdot\|_{H^1[0,\omega]}.\end{equation} Take arbitrary $t'\in[0,\omega]$. Fix $j$ and introduce the following notations $v=(v^1,\ldots, v^{m+n}),$ $$ v^k=u^k-\hat u^k,\quad a=|v(t')|,\quad b=\sqrt{\sum_{k=1}^{n+m}\|v^k\|^2_{C^{0,1/2}[0,\omega]}}.$$ Observe that from conditions of the Lemma and formula (\ref{xfvbvvv}) we get\begin{equation}\label{vfvfgffffhdfgh}b\le c_3.\end{equation} It follows that $$|v^k(t)-v^k(t')|\le \|v^k\|_{C^{0,1/2}[0,\omega]}|t-t'|^{1/2},\quad t\in[0,\omega]$$ and $$|v^k(t)|\le |v^k(t')|+\|v^k\|_{C^{0,1/2}[0,\omega]}|t-t'|^{1/2}.$$Consequently, we obtain $$|v(t)|\le a+b|t-t'|^{1/2}.$$ We then have $$ K_1\ge\int_0^\omega\frac{dt}{|v(t)|^2}\ge \int_0^{t'}\frac{dt}{\big(a+b(t'-t)^{1/2}\big)^2}+ \int_{t'}^\omega\frac{dt}{\big(a+b(t-t')^{1/2}\big)^2}.$$ The last two integrals are computed easily with the help of the change $\xi=a+b(t'-t)^{1/2}$ for the first integral, and $\xi=a+b(t-t')^{1/2}$ for the second integral respectively. Assume for definiteness that $t'\in[0,\omega/2]$. Other case is carried out analogously. Then inequality $$ \int_{t'}^\omega\frac{dt}{\big(a+b(t-t')^{1/2}\big)^2}\le K_1$$ and formula (\ref{vfvfgffffhdfgh}) imply $$a\ge c_2b.$$ The Lemma is proved. \begin{df}Let $W$ stand for a set $E_0\cap \mathcal [\tilde z].$\end{df} \subsection{} Introduce the Action Functional $S:W\to\mathbb{R},$ $$S(z)=\int_0^{\omega}L(t,z,\dot z)dt.$$ Our next goal is to prove that this functional attains its minimum. By using estimates (\ref{sdfff}), (\ref{q_sdfff}) we get $$S(z)\ge \int_0^{\omega}\big(K|\dot z|^2-|\dot z|(C+M|z|)-A|z|^2\big)dt.$$ From the Cauchy inequality and Lemma \ref{xddddd} it follows that \begin{align} \int_0^{\omega}|\dot z|| z|dt&\le\frac{\omega}{\sqrt 2}\|z\|^2, \nonumber\\ \int_0^{\omega}| z|^2dt&\le \frac{\omega^2}{2}\|z\|^2, \nonumber\\ \int_0^{\omega}| \dot z|dt&\le \sqrt{\omega}\|z\|.\nonumber \end{align} We finally obtain \begin{equation}\label{xfffff} S(z)\ge \Big(K-\frac{M\omega}{\sqrt 2}-\frac{A\omega^2}{2}\Big)\|z\|^2-C\sqrt{\omega}\|z\|. 
\end{equation} By formula (\ref{xcxddfd}) the functional $S$ is coercive: \begin{equation}\label{dvfdffvfv}S(z)\to \infty\end{equation}as $\|z\|\to\infty.$ Note that the Action Functional which corresponds to system (\ref{xvfvffvvf}) is also coercive but, as we see above, property (\ref{dvfdffvfv}) by itself does not imply existence results. \subsection{} Let $\{z_k\}\subset W$ be a minimizing sequence: $$S(z_k)\to \inf_{z\in W} S(z)$$ as $k\to\infty.$ By formula (\ref{dvfdffvfv}) the sequence $\{z_k\}$ is bounded: $\sup_k\|z_k\|<\infty.$ \begin{lemma}\label{qwwfdf} The following estimates hold \begin{align} \inf\{|z_k(t)-&\sigma'|\mid t\in\mathbb{R},\quad \sigma'\in\sigma,\quad k\in\mathbb{N}\}\nonumber\\ &=\inf\{|z_k(t)-\sigma'|\mid t\in[0,\omega],\quad \sigma'\in\sigma,\quad k\in\mathbb{N}\}>0.\label{vgffdddd}\end{align}\end{lemma} {\it Proof of lemma \ref{qwwfdf}.} The middle equality in formula (\ref{vgffdddd}) follows from the fact that $z_k\in\mathcal V_\nu$, we will prove inequality. It is sufficient to check inequality (\ref{vgffdddd}) for $\sigma'\in \sigma\cap\Lambda,$ $$\Lambda=\bigcup_{k\in\mathbb{N}}\{z\in\mathbb{R}^{m+n}\mid |z-z_k(t)|\le 1,\quad t\in[0,\omega]\}.$$ The set $ \sigma\cap\Lambda$ is finite. By formula (\ref{q_sdfff}) we obtain \begin{align} P\int_0^\omega&\frac{dt}{|z_k(t)-\sigma'|^2}+\Psi_k\nonumber\\ &\le c_7-\int_0^\omega V(t,z_k(t))dt+\Psi_k\le S(z_k)+c_7.\label{xvfffff}\end{align} Here $$\Psi_k=\int_0^\omega a_i(t,z_k(t))\dot z_k(t)dt,\quad c_7=A\|z_k\|^2_{L^2[0,\omega]}+C_1\omega.$$ By formula (\ref{sdfff}) and lemma \ref{xddddd} this sequence is bounded: $$|\Psi_k|\le c_5\|z_k\|+c_6\|z_k\|^2.$$ So that from formulas (\ref{xvfffff}), (\ref{cvbvggdfgkk}) and Lemma \ref{xvcvcccc}, Lemma \ref{qwwfdf} is proved. Since the space $E$ is reflexive, this sequence contains a weakly convergent subsequence. Denote this subsequence in the same way: $z_k\to z_*$ weakly in $E$. Moreover, the space $H^1[0,\omega]$ is compactly embedded in $C[0,\omega]$. Thus extracting a subsequence from the subsequence and keeping the same notation we also have \begin{equation}\label{scr 4vt}\max_{t\in[0,\omega]}|z_k(t)-z_*(t)|\to 0,\end{equation} as $k\to\infty$; and $$\inf\{|z_*(t)-\sigma'|\mid t\in\mathbb{R},\quad \sigma'\in\sigma\}>0.$$ The set $E_0$ is convex and strongly closed therefore it is weakly closed: $z_*\in E_0$. By continuity (\ref{scr 4vt}) one also gets $z_*\in W$ (see Lemma \ref{xzsdffgff} in the Appendix). \subsection{} Let us show that $\inf_{z\in W} S(z)=S(z_*).$ \begin{lemma}\label{xvvfvv}Let a sequence $\{u_k\}\subset \Phi$ weakly converge to $u\in\Phi$ (or $u_k,u\in X$ and $u_k\to u$ weakly in $X$); and also $\max_{t\in [0,\omega]}|u_k(t)- u(t)|\to 0$ as $k\to\infty$. Then for any $f\in C(\mathbb{R})$ and for any $v\in L^2(0,\omega)$ it follows that $$\int_0^{\omega}f(u_k)\dot u_k vdt\to \int_0^{\omega}f(u)\dot u vdt,\quad \mbox{as}\quad k\to\infty.$$ \end{lemma} Indeed, \begin{align} \int_0^{\omega}&f(u_k)\dot u_k vdt\nonumber\\ &=\int_0^{\omega}\big(f(u_k)-f(u)\big)\dot u_k vdt+\int_0^{\omega}f(u)\dot u_k vdt.\nonumber\end{align} The function $f$ is uniformly continuous in a compact set $$\Big[\min_{t\in[0,\omega]}\{u(t)\}-c,\max_{t\in[0,\omega]}\{u(t)\}+c\Big]$$ with some constant $c>0$. 
Consequently we obtain $$\max_{t\in [0,\omega]}|f(u_k(t))- f(u(t))|\to 0.$$ Since the sequence $\{u_k\}$ is weakly convergent it is bounded: $$\sup_k\|u_k\|<\infty$$ particularly, we get $$\|\dot u_k\|_{L^2(0,\omega)}<\infty.$$ So that as $k\to\infty$ \begin{align} \Big|&\int_0^{\omega}\big(f(u_k)-f(u)\big)\dot u_k vdt\Big|\nonumber\\ &\le \big\|v\big(f(u_k)- f(u)\big)\big\|_{L^2(0,\omega)}\cdot \|\dot u_k\|_{L^2(0,\omega)}\to 0.\nonumber \end{align} To finish the proof it remains to observe that a function $$w\mapsto\int_0^{\omega}f(u)\dot w vdt$$ belongs to $\Phi'$ (or to $X'$). Indeed, $$\Big|\int_0^{\omega}f(u)\dot w vdt\Big|\le \max_{t\in[0,\omega ]}|f(u(t))|\cdot\|v\|_{L^2(0,\omega)}\|w\|.$$ \subsection{} The following lemma is proved similarly. \begin{lemma}\label{xvvfvv1}Let a sequence $\{u_k\}\subset \Phi$ (or $\{u_k\}\subset X$) be such that $$\max_{t\in [0,\omega]}|u_k(t)- u(t)|\to 0$$ as $k\to\infty.$ Then for any $f\in C(\mathbb{R})$ and for any $v\in L^1(0,\omega)$ it follows that $$\int_0^{\omega}f(u_k) vdt\to \int_0^{\omega}f(u) vdt$$ as $k\to\infty.$ \end{lemma} \subsection{} Introduce a function $p_k(t,\xi)=L(t,z_k,\dot z_*+\xi)$. The function $p_k$ is a quadratic polynomial of $\xi\in\mathbb{R}^{m+n}$, so that $$p_k(t,\xi)=L(t,z_k,\dot z_*)+\frac{\partial L}{\partial \dot z^i}(t,z_k,\dot z_*)\xi^i+\frac{1}{2}\frac{\partial^2 L}{\partial \dot z^j\partial \dot z^i}(t,z_k,\dot z_*)\xi^i\xi^j.$$ The last term in this formula is non-negative: $$\frac{\partial^2 L}{\partial \dot z^j\partial \dot z^i}(t,z_k,\dot z_*)\xi^i\xi^j=g_{ij}(t,z_k)\xi^i\xi^j\ge 0.$$ We consequently obtain $$p_k(t,\xi)\ge L(t,z_k,\dot z_*)+\frac{\partial L}{\partial \dot z^i}(t,z_k,\dot z_*)\xi^i.$$ It follows that \begin{align}S(z_k)&=\int_0^{\omega}p_k(t,\dot z_k-\dot z_*)dt\ge \int_0^{\omega}L(t,z_k,\dot z_*)dt\nonumber\\&+ \int_0^{\omega}\frac{\partial L}{\partial \dot z^i}(t,z_k,\dot z_*)(\dot z_k^i-\dot z_*^i)dt.\label{xfbf}\end{align} From Lemma \ref{xvvfvv} and Lemma \ref{xvvfvv1} it follows that $$\int_0^{\omega}L(t,z_k,\dot z_*)dt\to \int_0^{\omega}L(t,z_*,\dot z_*)dt\quad \mbox{as}\quad k\to\infty,$$ and $$\int_0^{\omega}\frac{\partial L}{\partial \dot z^i}(t,z_k,\dot z_*)(\dot z_k^i-\dot z_*^i)dt\to 0\quad \mbox{as}\quad k\to\infty.$$ Passing to the limit as $k\to\infty$ in (\ref{xfbf}) we finally yield $$\inf_{z\in W}S(z)\ge S(z_*)\Longrightarrow \inf_{z\in W}S(z)= S(z_*).$$ \begin{rem} Basing upon these formulas one can estimate the norm $\|z_*\|$. Indeed, take a function $\hat z\in W$ then due to formula (\ref{xfffff}) one obtains $$S(\hat z)\ge S(z_*)\ge \Big(K-\frac{M\omega}{\sqrt 2}-\frac{A\omega^2}{2}\Big)\|z_*\|^2-C\sqrt{\omega}\|z_*\| ,$$ here $S(\hat z)$ is an explicitly calculable number. \end{rem} \subsection{} Now from this point we begin proving the theorem under the assumption that the constraints are odd (\ref{ncdnvv1}). 
Thus for any $v\in X^{m+n}$ such that $$\frac{\partial f_j}{\partial z^k}(t,z_*) v^k(t)=0$$ it follows that $$\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}S(z_*+\varepsilon v)=0.$$ Introduce a linear functional $$b:X^{m+n}\to\mathbb{R},\quad b(v)=\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}S(z_*+\varepsilon v),$$ and a linear operator $$A:X^{m+n}\to X^{l},\quad (Av)_j=\frac{\partial f_j}{\partial z^k}(t,z_*) v^k.$$ It is clear, both these mappings are bounded and $$\ker A\subset\ker b.$$ \begin{lemma}\label{xfff} The operator $A$ maps $X^{m+n}$ onto $X^l$ that is $$A(X^{m+n})=X^l.$$\end{lemma} {\it Proof.} By $\tilde A(t)$ denote the matrix $$\frac{\partial f_j}{\partial z^k}\big(t,z_*(t)\big).$$ It is convenient to consider our functions to be defined on the circle $t\in \mathbb S=\mathbb{R}/(\omega\mathbb{Z}).$ Fix an element $w\in X^l$. Let us cover the circle $\mathbb{S}$ with open intervals $U_i,\quad i=1,\ldots, N$ such that there exists a set of functions $$v_i\in H^1(U_i),\quad \tilde A(t) v_i(t)=w(t),\quad t\in U_i,\quad i=1,\ldots, N.$$ And let $\psi_i$ be a smooth partition of unity subordinated to the covering $\{U_i\}$. A function $\tilde v(t)=\sum_{i=1}^N\psi_i(t) v_i(t)$ belongs to $H^1(\mathbb{S})$ and for each $t$ it follows that $\tilde A(t)\tilde v(t)=w(t)$. But the function $\tilde v$ is not obliged to be odd. Since $\tilde A(-t)=\tilde A(t)$ we have $$\tilde A(t) v(t)=w(t),\quad v(t)=\frac{\tilde v(t)-\tilde v(-t)}{2}\in X^{m+n}.$$ The Lemma is proved. \subsection{} Recall a lemma from functional analysis \cite{KF}. \begin{lemma}\label{dfgdfdddd} Let $E,H,G$ be Banach spaces and $$A:E\to H,\quad B:E\to G$$ be bounded linear operators; $\ker A\subseteq\ker B.$ If the operator $A$ is onto then there exists a bounded operator $\Gamma:H\to G$ such that $B=\Gamma A.$\end{lemma} Thus there is a linear function $\Gamma\in (X^l)'$ such that $$b(v)=\Gamma A(v),\quad v\in X^{m+n}.$$ Or by virtue of the Riesz representation theorem, there exists a set of functions $\{\gamma^1,\ldots,\gamma^l\}\subset X$ such that $$\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}S(z_*+\varepsilon v)=\int_0^\omega\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)v^k(t)\Big)dt$$ for all $v\in X^{m+n}$. \subsection{}Every element $v\in X^{m+n}$ is presented as follows $$v(t)=\int_0^ty(s)ds,$$ where $y\in Y^{m+n}$ is such that $$\int_0^{\omega}y(s)ds=0.$$ Introduce a linear operator $h:Y^{m+n}\to\mathbb{R}^{m+n}$ by the formula $$h(y)=\int_0^{\omega}y(s)ds.$$ Define a linear functional $q:Y^{m+n}\to\mathbb{R}$ by the formula $$q(y)=(b-\Gamma A)v,\quad v(t)=\int_0^t y(s)ds.$$ Now all our observations lead to $$\ker h\subseteq \ker q.$$ Therefore, there exists a linear functional $\lambda:\mathbb{R}^{m+n}\to\mathbb{R}$ such that $$q=\lambda h$$ Let us rewrite the last formula explicitly. 
There are real constants $\lambda_k$ such that for any $y^k\in Y$ one has \begin{align} &\int_0^{\omega}\Big(\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*)y^k(t)+\frac{\partial L}{\partial z^k}(t,z_*,\dot z_*)\int_0^ty^k(s)ds\Big)dt\nonumber\\ &=\int_0^\omega\dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)y^k(t)dt+\int_0^\omega\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)\Big)\int_0^ty^k(s)dsdt\nonumber\\ &+\lambda_k\int_0^{\omega}y^k(s)ds.\nonumber\end{align} \subsection{}By the Fubini theorem we obtain \begin{align}&\int_0^{\omega}\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*)y^k(t)dt+\int_0^{\omega}y^k(s)\int_s^{\omega}\frac{\partial L}{\partial z^k}(t,z_*,\dot z_*)dtds\nonumber\\ &=\int_0^\omega\dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)y^k(t)dt+\int_0^\omega y^k(s)\int_s^\omega\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)\Big)dtds\nonumber\\ &+\lambda_k\int_0^{\omega}y^k(s)ds.\label{sfr44}\end{align} In this formula the functions $$\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*),\quad \dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)$$ are even and $\omega$-periodic functions of $t$. The functions $$\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)\Big),\quad \frac{\partial L}{\partial z^k}(t,z_*,\dot z_*) $$ are odd and $\omega$-periodic. Employ the following trivial observation.\begin{prop}\label{cfgggtg}If $w\in L^1_{\mathrm{loc}}(\mathbb{R})$ is an $\omega-$periodic and odd function then for any constant $a\in\mathbb{R}$ a function $$t\mapsto\int_a^tw(s)ds$$ is also $\omega-$periodic.\end{prop} So that the functions $$\int_s^\omega\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)\Big)ds,\quad\int_s^\omega \frac{\partial L}{\partial z^k}(t,z_*,\dot z_*) ds$$ are even and $\omega$-periodic in $s$. Therefore, equation (\ref{sfr44}) is rewritten as $(y,\eta)_Y=0$ for any $y=(y^1,\ldots y^{m+n})\in Y^{m+n}$ and $\eta=(\eta_1,\ldots,\eta_{m+n})$ stands for \begin{align}\eta_k&=\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*)+\int_t^{\omega}\frac{\partial L}{\partial z^k}(s,z_*,\dot z_*)ds\nonumber\\ &-\dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)-\int_t^\omega\dot\gamma^j(s)\frac{d}{ds}\Big(\frac{\partial f_j}{\partial z^k}(s,z_*)\Big)ds-\lambda_k\nonumber\in Y.\end{align} Consequently we obtain the following system \begin{align}&\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*)+\int_t^{\omega}\frac{\partial L}{\partial z^k}(s,z_*,\dot z_*)ds\nonumber\\ &=\dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)+\int_t^\omega\dot\gamma^j(s)\frac{d}{ds}\Big(\frac{\partial f_j}{\partial z^k}(s,z_*)\Big)ds+\lambda_k.\label{sfddgrf44}\end{align} here $ k=1,\ldots,m+n.$ If we formally differentiate both sides of equations (\ref{sfddgrf44}) in $t$ then we obtain the Lagrange equations (\ref{xvvv}) with $\alpha^j=\ddot\gamma^j$. Equations (\ref{sfddgrf44}) hold for almost all $t\in(0,\omega)$ but all the functions contained in (\ref{sfddgrf44}) are defined for all $t\in\mathbb{R}$. The functions $$\int_t^\omega\dot\gamma^j(s)\frac{d}{ds}\Big(\frac{\partial f_j}{\partial z^k}(s,z_*)\Big)ds,\quad \int_t^{\omega}\frac{\partial L}{\partial z^k}(s,z_*,\dot z_*)ds$$ are $\omega-$periodic by Proposition \ref{cfgggtg}. Equation (\ref{sfddgrf44}) holds for almost all $t\in\mathbb{R}$. \subsection{}Let $g^{ij}$ stand for the components of the matrix inverse to $(g_{ij}):\quad g^{ij}g_{ik}=\delta_k^j$. 
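Before proceeding, we record for completeness a one-line verification of Proposition \ref{cfgggtg}, which was used above; this remark is ours and introduces no new notation. \begin{rem} Let $w\in L^1_{\mathrm{loc}}(\mathbb{R})$ be odd and $\omega$-periodic. For every $t\in\mathbb{R}$ and every $a\in\mathbb{R}$ we have $$\int_a^{t+\omega}w(s)ds-\int_a^{t}w(s)ds=\int_t^{t+\omega}w(s)ds=\int_{-\omega/2}^{\omega/2}w(s)ds=0,$$ since the integral of an $\omega$-periodic function over any interval of length $\omega$ does not depend on the interval, and the last integral vanishes by the oddness of $w$. Hence the function $t\mapsto\int_a^tw(s)ds$ is $\omega$-periodic. \end{rem}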
Present equation (\ref{sfddgrf44}) in the form \begin{align}\label{csr4f}\dot z^j_*(t)&=g^{kj}(t,z_*(t))\nonumber\\ &\cdot\Big(\lambda_k+\dot\gamma^i(t)\frac{\partial f_i}{\partial z^k}(t,z_*(t))+\int_t^\omega\dot\gamma^i(s)\frac{d}{ds}\Big(\frac{\partial f_i}{\partial z^k}(s,z_*(s))\Big)ds\nonumber\\ &-\int_t^{\omega}\frac{\partial L}{\partial z^k}(s,z_*(s),\dot z_*(s))ds -a_k(t,z_*(t))\Big).\end{align} Together with equation (\ref{csr4f}) consider equations \begin{equation}\label{fvvfvfvfsfg} \frac{\partial f_j}{\partial t}(t,z_*(t))+\frac{\partial f_j}{\partial z^k}(t,z_*(t))\dot z^k_*(t)=0.\end{equation} These equations follow from (\ref{constr}). Recall that by the Sobolev embedding theorem, $z_*\in X\subset C(\mathbb{R}).$ Due to (\ref{dfggfgfg3e}) we have $$\det B(t,z_*)\ne 0,\quad B(t,z_*)=\Big(g^{kj}(t,z_*)\frac{\partial f_i}{\partial z^k}(t,z_*)\frac{\partial f_l}{\partial z^j}(t,z_*)\Big)$$ for all $t$. Substituting $\dot z_*$ from (\ref{csr4f}) to (\ref{fvvfvfvfsfg}) we can express $\dot\gamma^j$ and see $\dot\gamma^j\in C(\mathbb{R})$. Thus from (\ref{csr4f}) it follows that $\dot z_*\in C(\mathbb{R}).$ Applying this argument again we obtain $\ddot\gamma^j,\ddot z_*\in C(\mathbb{R}).$ This proves the theorem for the case of odd constraints. \subsection{} Let us discuss the proof of the theorem under the assumption that the constraints are even (\ref{ncdnvv2}). \begin{df} By $Z$ denote a space of functions $u\in H^1_{\mathrm{loc}}(\mathbb{R})$ such that for all $t\in\mathbb{R}$ the following conditions hold $$u(-t)=u(t),\quad u(t+\omega)=u(t).$$ \end{df} The space $Z^l$ is a real Hilbert space with respect to an inner product $$(u,v)_Z=\sum_{i=1}^l\int_0^\omega\big(u_i(t)v_i(t)+\dot u_i(t)\dot v_i(t)\big)dt.$$ This is the standard inner product in $H^1[0,\omega]$. So what is changed now? The operator $A$ takes the space $X^{m+n}$ onto the space $Z^l$. The proof of this fact is the same as in Lemma \ref{xfff}. By the Riesz representation theorem, there exists a set of functions $\{\gamma^1,\ldots,\gamma^l\}\subset Z$ such that \begin{align}b(v)&=\int_0^\omega\dot\gamma^j(t)\frac{d}{dt}\Big(\frac{\partial f_j}{\partial z^k}(t,z_*)v^k(t)\Big)dt\nonumber\\ &+\int_0^\omega\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)v^k(t)dt\nonumber\end{align} for all $$v=\int_0^t y(s)ds,\quad y\in Y^{m+n},\quad \int_0^\omega y(s)ds=0.$$ So that equation (\ref{sfddgrf44}) is replaced with the following one \begin{align}&\frac{\partial L}{\partial \dot z^k}(t,z_*,\dot z_*)+\int_t^{\omega}\frac{\partial L}{\partial z^k}(s,z_*,\dot z_*)ds\nonumber\\ &=\dot\gamma^j(t)\frac{\partial f_j}{\partial z^k}(t,z_*)+\int_t^\omega\dot\gamma^j(s)\frac{d}{ds}\Big(\frac{\partial f_j}{\partial z^k}(s,z_*)\Big)ds\nonumber\\ &+\int_t^\omega\gamma^j(s)\frac{\partial f_j}{\partial z^k}(s,z_*)ds +\lambda_k.\nonumber\end{align} Here $ k=1,\ldots,m+n.$ By the same argument the functions $$\int_t^\omega\gamma^j(s)\frac{\partial f_j}{\partial z^k}(s,z_*)ds,\quad \int_t^\omega\dot\gamma^j(s)\frac{d}{ds}\Big(\frac{\partial f_j}{\partial z^k}(s,z_*)\Big)ds$$ are $\omega-$periodic and one can put $\alpha^j=\ddot\gamma^j-\gamma^j.$ Other argument is the same as above. The theorem is proved. \section{Appendix} \begin{lemma}\label{xzsdffgff}Fix a positive constant $\delta$. 
Let $z_1,z_2$ be functions from $\mathcal V_p$ such that $$\inf\{|z_i(t)-\sigma'|\mid t\in[0,\omega],\quad \sigma'\in \sigma,\quad i=1,2\}\ge\delta.$$ There exists a number $\varepsilon>0$ such that if $$\max_{t\in[0,\omega]}|z_1(t)-z_2(t)|<\varepsilon$$ then these functions are homotopic. \end{lemma} {\it Proof of Lemma \ref{xzsdffgff}.} Our argument is quite standard, so we present only a sketch of the proof. It is convenient to consider $z_1,z_2$ as functions with values in $\mathcal C$ (see Remark \ref{remqq}). In the same sense $F(t)$ is a submanifold of $\mathcal C$, and the functions $z_1,z_2$ define a pair of closed curves in $\mathcal C$. Choose a Riemannian metric on $\mathcal C$, for example $$d\tau^2=\sum_{k=1}^{m+n} (dz^k)^2.$$ This metric induces a metric on $F(t)$. Under the conditions of the Lemma, any two points $z_1(t),z_2(t)\in F(t)$ are connected in $F(t)$ by a unique shortest geodesic arc $\chi(\xi,t),\quad \xi\in[0,\tilde \xi(t)],$ such that $$\chi(0,t)=z_1(t),\quad \chi(\tilde \xi(t),t)=z_2(t).$$ Here $\xi$ is the arc-length parameter. Define the homotopy by $z(s,t)=\chi(s\tilde \xi(t),t),\quad s\in[0,1]$. The Lemma is proved. \subsection*{Acknowledgments} The author wishes to thank Professor E. I. Kugushev for useful discussions. \end{document}
\begin{document} \title[Lebesgue inequalities for Chebyshev Greedy Algorithms]{Lebesgue inequalities for Chebyshev Thresholding Greedy Algorithms} \author{P. M. Bern\'a} \address{Pablo M. Bern\'a \\ Departamento de Matem\'aticas \\ Universidad Aut\'onoma de Madrid \\ 28049 Madrid, Spain} \email{[email protected]} \author{\'O. Blasco} \address{\'Oscar Blasco \\ Department of Mathematical Analysis \\ Universidad de Valencia, Campus de Burjassot\\ Valencia, 46100, Spain} \email{[email protected]} \author{G. Garrig\'os} \address{Gustavo Garrig\'os \\ Departamento de Matem\'aticas \\ Universidad de Murcia \\ 30100 Murcia, Spain} \email{[email protected]} \author{E. Hern\'andez} \address{Eugenio Hern\'andez \\ Departamento de Matem\'aticas \\ Universidad Aut\'onoma de Madrid \\ 28049 Madrid, Spain} \email{[email protected]} \author{T. Oikhberg} \address{Timur Oikhberg \\ Department of Mathematics \\ University of Illinois Urbana-Champaign \\ Urbana, IL 61807, USA} \email{[email protected]} \thanks{The research of the first, third and fourth authors is partially supported by the grants MTM-2016-76566-P (MINECO, Spain) and 19368/PI/14 (\emph{Fundaci\'on S\'eneca}, Regi\'on de Murcia, Spain). Also, the first author is supported by a PhD Fellowship from the program "Ayudas para contratos predoctorales para la formaci\'on de doctores 2017" (MINECO, Spain). The second author is supported by Grant MTM-2014-53009-P (MINECO, Spain). The third author is partially supported by grant MTM2017-83262-C2-2-P (Spain). The fourth author has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No 777822} \subjclass{41A65, 41A46, 46B15.} \keywords{thresholding Chebyshev greedy algorithm, thresholding greedy algorithm, quasi-greedy basis, semi-greedy bases.} \begin{abstract} We establish estimates for the Lebesgue parameters of the Chebyshev Weak Thresholding Greedy Algorithm in the case of general bases in Banach spaces. These generalize and slightly improve earlier results in \cite{DKO}, and are complemented with examples showing the optimality of the bounds. Our results also correct certain bounds recently announced in \cite{SY}, and answer some questions left open in that paper. \end{abstract} \maketitle \section{Introduction} \setcounter{equation}{0}\setcounter{footnote}{0} \setcounter{figure}{0} Let $\mathbb X$ be a Banach space over $\SK = \SR$ or $\SC$, let $\SX^*$ be its dual space, and consider a system $\lbrace {\mathbf e}_n, {\mathbf e}^*_n\rbrace_{n=1}^\infty\subset \SX\times\SX^*$ with the following properties: \begin{itemize} \item[a)] $0<\inf_n \lbrace \Vert {\mathbf e}_n\Vert, \Vert {\mathbf e}^*_n\Vert\rbrace\leq \sup_n\lbrace \Vert {\mathbf e}_n\Vert, \Vert {\mathbf e}^*_n\Vert\rbrace<\infty$ \item[b)] ${\mathbf e}^*_n({\mathbf e}_m) = \delta_{n,m}$, for all $n,m\geq 1$ \item[c)] $\SX= \overline{\span\lbrace {\mathbf e}_n : n\in\mathbb N\rbrace}$ \item [d)] $\SX^* = \overline{\span\lbrace {\mathbf e}^*_n : n\in\mathbb N\rbrace}^{w^*}$. \end{itemize} Under these conditions $\mathcal B=\lbrace {\mathbf e}_n\rbrace_{n=1}^\infty$ is called a \textit{seminormalized Markushevich basis for $\SX$} (or M-basis for short), with \textit{dual system} $\lbrace {\mathbf e}^*_n\rbrace_{n=1}^\infty$.
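To fix ideas, we include a minimal illustration of conditions a)--d); this example is ours and is not used later in the paper. \begin{Remark}{\rm Let $\SX=\ell_p$, $1\leq p<\infty$, let ${\mathbf e}_n$ be the canonical unit vectors and ${\mathbf e}^*_n$ the coordinate functionals. Then $\Vert{\mathbf e}_n\Vert=\Vert{\mathbf e}^*_n\Vert=1$, so a) holds; b) is the biorthogonality of the coordinates; c) holds because the finitely supported sequences are dense in $\ell_p$; and d) holds because every $x^*\in\SX^*$ is the $w^*$-limit of its truncations $\sum_{n\leq N}x^*({\mathbf e}_n)\,{\mathbf e}^*_n$. Thus $\lbrace{\mathbf e}_n\rbrace_{n=1}^\infty$ is a seminormalized M-basis (in fact a Schauder basis, cf.\ e) below).}\end{Remark}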
Sometimes we shall consider the following special cases \begin{itemize} \item[e)] $\mathcal B$ is a \textit{Schauder basis} if $K_b:=\sup_N \Vert S_N\Vert<\infty$, where $S_Nx:=\sum_{n=1}^N {\mathbf e}^*_n(x){\mathbf e}_n$ is the $N$-th partial sum operator \end{itemize} \begin{itemize} \item[f)] $\mathcal B$ is a \textit{Ces\`aro basis} if $\sup_N \Vert F_N\Vert<\infty$, where $F_N:= \frac{1}{N}\sum_{n=1}^N S_n$ is the $N$-th Ces\`aro operator. In this case we use the constant \begin{equation} \beta=\;\max\big\{\sup_N\|F_N\|,\,\sup_N\|I-F_N\|\big\}. \label{beta}\end{equation} \end{itemize} With every $x\in\mathbb X$, we shall associate the formal series $x\sim \sum_{n=1}^{\infty} {\mathbf e}^*_n(x){\mathbf e}_n$, where a)-c) imply that $\lim_{n}{\mathbf e}^*_n(x)=0$. As usual, we denote $\mathop{\rm supp} x=\lbrace n\in\mathbb N : {\mathbf e}^*_n(x)\neq 0\rbrace$. \ We recall standard notions about (weak) greedy algorithms; see e.g. the texts \cite{Tem1, Tem15} for details and historical background. Fix $t\in(0,1]$. We say that $A$ is a \textit{$t$-greedy set for $x$ of order $m$}, denoted $A\in G(x, m, t)$, if $\vert A\vert=m$ and \begin{equation}\min_{n\in A}\vert {\mathbf e}^*_n(x)\vert \geq t\cdot\max_{n\not\in A}\vert{\mathbf e}^*_n(x)\vert.\label{tineq}\end{equation} A \textit{$t$-greedy operator of order $m$} is any mapping $\mathcal G_m^t: \SX\to \SX$ which at each $x\in\SX$ takes the form $$\mathcal G_m^t(x)=\sum_{n\in A}{\mathbf e}^*_n(x){\mathbf e}_n,\quad \text{for some set}\quad A=A(x,\mathcal G^t_m)\in G(x, m, t).$$ We write $\SG_m^t$ for the set of all $t$-greedy operators of order $m$. The approximation scheme which assigns a sequence $\{\mathcal G_m^t(x)\}_{m=1}^\infty$ to each vector $x\in\SX$ is called a \textit{Weak Thresholding Greedy Algorithm} (WTGA), see \cite{KT2,TemW}. When $t=1$ one just says Thresholding Greedy Algorithm (TGA), and drops the super-index $t$, that is $\mathcal G_m^1 = \mathcal G_m$, etc. \ It is standard to quantify the efficiency of these algorithms, among all possible $m$-term approximations, in terms of \emph{Lebesgue-type inequalities}. That is, for each $m=1, 2,...$, we look for the smallest constant ${\mathbf L}_m^t$ such that \begin{eqnarray}\label{weakin} \Vert x-\mathcal G_m^t(x)\Vert \leq {\mathbf L}_m^t\sigma_m(x),\quad \forall\; x\in\SX, \quad\forall\; \mathcal G_m^t\in \SG_m^t, \end{eqnarray} where $$\sigma_m(x):=\inf\Big\lbrace \Big\Vert x-\sum_{n\in B}b_n {\mathbf e}_n\Big\Vert\mid b_n\in\SK,\quad \vert B\vert \leq m\Big\rbrace.$$ We call the number ${\mathbf L}_m^t$ the \emph{Lebesgue parameter} associated with the WTGA, and we just write ${\mathbf L}_m$ when $t=1$. We refer to \cite[Chapter 3]{Tem15} for a survey on such inequalities, and to \cite{GHO,DKO,AA1,BBG,BBGHO} for recent results. It is known that ${\mathbf L}_m^t=O(1)$ holds for a fixed $t$ if and only if it holds for all $t\in(0,1]$, and if and only if $\mathcal B$ is unconditional and democratic; see \cite{KT} and \cite[Thm 1.39]{Tem1}. In this special case $\mathcal B$ is called a \emph{greedy basis}. \ In this paper we shall be interested in \textit{Chebyshev thresholding greedy algorithms}. These were introduced by Dilworth, Kalton and Kutzarova, see \cite[\S3]{DKK}, as an enhancement of the TGA. Here, we use the weak version considered in \cite{DKO}.
Namely, for fixed $t\in(0,1]$ we say that ${\mathfrak{CG}}_m^t:\SX\to\SX$ is a \textit{Chebyshev $t$-greedy operator} of order $m$ if for every $x\in\SX$ the set $A=\mathop{\rm supp}{\mathfrak{CG}}_m^t(x)\in G(x, m, t)$ and moreover $$\Vert x-{\mathfrak{CG}}_m^t(x)\Vert = \min\mathcal Big\lbrace \big\Vert x-\sum_{n\in A}a_n{\mathbf e}_n\big\Vert\mid a_n\in\SK\mathcal Big\rbrace.$$ Finally, we define the \textit{weak Chebyshevian Lebesgue parameter} ${\mathbf {L}}_m^{{\rm ch}, t}$ as the smallest constant such that $$\Vert x-{\mathfrak{CG}}_m^t(x)\Vert \leq {\mathbf {L}}_m^{{\rm ch}, t}\mathop{\rm sign}ma_m(x),\quad \forall\; x\in\SX,\quad \forall \;{\mathfrak{CG}}_m\in \mathcal Gtm\,,$$ where $\mathcal Gtm$ is the collection of all Chebyshev $t$-greedy operators of order $m$. As before, when $t=1$ we shall omit the index $t$, that is $\mathbf{L}_m^{{\rm ch}}:=\mathbf{L}_m^{{\rm ch},1}$. When $\mathbf{L}_m^{{\rm ch}}=O(1)$ the system $\mathcal B$ is called semi-greedy; see \cite{DKK}. We remark that the first author recently established that a Schauder basis $\mathcal B$ is semi-greedy if and only if is quasi-greedy and democratic; see \cite{B}. \ In this paper we shall be interested in quantitative bounds of ${\mathbf {L}}_m^{{\rm ch}, t}$ in terms of the quasi-greedy and democracy parameters of a general M-basis $\mathcal B$. Earlier bounds were obtained by Dilworth, Kutzarova and Oikhberg in \cite{DKO} when $\mathcal B$ is a quasi-greedy basis, and very recently, some improvements were also announced by C. Shao and P. Ye in \cite[Theorem 3.5]{SY}. Unfortunately, various arguments in the last paper seem not to be correct, so one of our goals here is to give precise statements and proofs for the results in \cite{SY}, and also settle some of the questions which are left open there. \ To state our results, we recall the definitions of the involved parameters. Given a finite set $A\subset\SN$, we shall use the following standard notation for the indicator sums: $$\bone_A = \sum_{n\in A}{\mathbf e}_n\mand \bone_{\varepsilon A}=\sum_{n\in A}\varepsilon_n {\mathbf e}_n,\quad\e\in\Upsilon$$ where $\Upsilon$ is the set of all $\varepsilon = \lbrace\varepsilon_n\rbrace_n\subset\SK$ with $\vert\varepsilon_n\vert=1$. Similarly, we write \[ P_A(x)=\sum_{n\in A}{\mathbf e}n(x){\mathbf e}_n. \] The relevant parameters for this paper are the following: {\mathbf e}gin{itemize} \item Conditionality parameters: $$k_m := \sup_{\vert A\vert \leq m}\Vert P_A\Vert\mand k_m^c=\sup_{\vert A\vert \leq m}\Vert I-P_A\Vert.$$ \item Quasi-greedy parameters: $$g_m := \sup_{\mathcal G_k\in \SG_k,\,k\leq m}\,\|\mathcal G_k\| \mand g_m^c := \sup_{\mathcal G_k\in \SG_k,\,k\leq m}\,\|I-\mathcal G_k\|. $$ Below we shall also use the variant \[ \tg_m:=\sup_{{\mathcal G'<\mathcal G}\alphatop{\mathcal G\in\SG_k,\;k\leq m}}\|\mathcal G-\mathcal G'\|, \] where $\mathcal G'<\mathcal G$ means that $A(x,\mathcal G')\subset A(x,\mathcal G)$ for all $x$; see \cite{BBG}. 
\item Super-democracy parameters: $$\tilde{\mu}_m = \sup_{\underset{\vert\varepsilon\vert=\vert\eta\vert=1}{\vert A\vert=\vert B\vert\leq m}}\dfrac{\Vert \bone_{\varepsilon A}\Vert}{\Vert \bone_{\eta B}\Vert}\mand \tilde{\mu}_m^d=\sup_{\underset{\vert\varepsilon\vert=\vert\eta\vert=1}{\vert A\vert=\vert B\vert\leq m, \; A\cap B=\emptyset}}\dfrac{\Vert \bone_{\varepsilon A}\Vert}{\Vert \bone_{\eta B}\Vert}.$$ \item[$\bullet$] Quasi-greedy parameters for constant coefficients (see \cite[(3.11)]{BBG})\[ \ga_m=\sup_{\underset{B\subset A,\;\vert A\vert\leq m}{\vert\varepsilon\vert=1}}\dfrac{\Vert \bone_{\varepsilon B}\Vert}{\Vert \bone_{\e A}\Vert}. \] \end{itemize} Note that $\ga_m\leq g_m\leq \tg_m\leq 2g_m$, but in general $\ga_m$ may be much smaller than $g_m$; see e.g. \cite[\S5.5]{BBG}. Likewise, in \S5 below we show that $\tmu^d_m$ may be much smaller than $\tmu_m$, except for Schauder bases in which both quantities turn out to be equivalent; see Theorem \ref{th_mudN}. \ Our first result is a general upper bound, which improves and extends \cite[Theorem 2.4]{SY}. {\mathbf e}gin{theorem}\label{main2} Let $\mathcal B$ be an M-basis in $\mathbb X$, and let $\mathfrak{K}=\sup_{n,j}\Vert {\mathbf e}n\Vert\Vert {\mathbf e}_j\Vert$. Then, \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t} \leq\, 1\,+\,(1+\tfrac1t)\,\mathfrak{K}\,m\,, \quad \forall\;m\in\SN,\;\; t\in(0,1].\label{ineq_th1}\end{equation} Moreover, there exists a pair $(\SX,\mathcal B)$ where the equality is attained for all $m$ and $t$. \end{theorem} The second result is a slight generalization of \cite[Theorem 4.1]{DKO}, and gives a correct version of \cite[Theorem 3.5]{SY}. {\mathbf e}gin{theorem}\label{main1} Let $\mathcal B$ be an M-basis in $\mathbb X$. Then, for all $m\geq 1$ and $t\in (0,1]$, \mathcal Be{\mathbf {L}}_m^{{\rm ch}, t} \;\leq \;g_{2m}^c\,+\,\frac2t\,\min\big\{\,\tg_m\tmu_m\,,\;\ga_{2m}\tg_{2m}\tmu^d_m\, \big\}.\label{Lmain}\end{equation} \end{theorem} Our next result concerns lower bounds for ${\mathbf {L}}_m^{{\rm ch}, t}$, for which we need to introduce weaker versions of the democracy parameters with an additional separation condition. For two finite sets $A,B\subset\SN$ and $c\geq1$, the notation $A>cB$ will stand for $\min A >c\max B$. \mathcal Bi\item Given an integer $c\geq2$, we define {\mathbf e}gin{equation} \label{vmc} \vartheta_{m,c} := \sup\left\lbrace \dfrac{\Vert \bone_{\varepsilon A}\Vert}{\Vert \bone_{\eta B}\Vert}\mid \vert \varepsilon\vert=\vert\eta\vert=1,\;|A|=|B|\leq m\;\mbox{ with $A>cB$ or $B>cA$} \right\rbrace. \end{equation} \end{itemize} {\mathbf e}gin{theorem}\label{th3} If $\mathcal B$ is a Ces\`aro basis in $\SX$ with constant ${\mathbf e}ta$, then for every $c\geq 2$ $${\mathbf {L}}_m^{{\rm ch}, t}\geq \frac{1}{t{\mathbf e}ta^2}\frac{c-1}{c+1}\vartheta_{m,c},\quad \forall\;m\in\SN,\;t\in(0,1].$$ \end{theorem} We shall also establish, in Theorem \ref{Thm_thetam} below, a similar lower bound valid for more general M-bases (not necessarily of Ces\`aro type), in terms of a new parameter $\theta_m$ which is invariant under rearrangements of $\mathcal B$. {\mathbf e}gin{Remark} {\rm One may compare the bounds for ${\mathbf {L}}_m^{\rm ch}$ above with those for ${\mathbf L}_m$ given in \cite{BBG} \[ (1)\;{\mathbf L}_m\leq 1+3\frak{K}m,\qquad (2)\; {\mathbf L}_m\leq k^c_{2m}+\tg_m\tmu_m,\quad\mand\;(3)\; {\mathbf L}_m\geq \tmu_m^d, \] which illustrate a slightly better behavior of the Chebishev TGA. 
Observe that one also has the trivial inequalities \[ {\mathbf {L}}_m^{{\rm ch}, t}\leq {\mathbf L}_m^t\leq k^c_m\,{\mathbf {L}}_m^{{\rm ch}, t}. \] Indeed, ${\mathbf {L}}_m^{{\rm ch}, t} \leq {\mathbf L}_m^t$ is direct by definition, while ${\mathbf L}_m^t \leq k_m^c {\mathbf {L}}_m^{{\rm ch}, t}$ can be proved as follows: take $x\in\mathbb X$ and $A=\mathop{\rm supp} \mathcal G^t_m(x)$. Pick a Chebyshev greedy operator ${\mathfrak{CG}}^t_m$ such that $\mathop{\rm supp} {\mathfrak{CG}}^t_m (x)=A$. Then \[ \Vert x-\mathcal{G}_m^t(x)\Vert = \|(I-P_A)x\|=\Vert (I-P_A)(x-\mathfrak{CG}_m^t(x))\Vert \leq k_m^c\Vert x-\mathfrak{CG}_m^t(x)\Vert, \] so ${\mathbf L}_m^t\leq k_m^c {\mathbf {L}}_m^{{\rm ch}, t}$. Hence, when $\mathcal B$ is unconditional then ${\mathbf L}_m^t \alphapprox {\mathbf {L}}_m^{{\rm ch}, t}$. However for all conditional quasi-greedy and democratic bases we have ${\mathbf {L}}_m^{\rm ch}=O(1)$, but ${\mathbf L}_m\to\infty$.} \end{Remark} The paper is organized as follows. Section \ref{previous} is devoted to preliminary lemmas. In Section \ref{proof} we prove Theorems \ref{main2}, \ref{main1} and \ref{th3}, and also establish the more general lower bound in Theorem \ref{Thm_thetam}, giving various situations in which it applies. Section \ref{examples} is devoted to examples illustrating the optimality of the results; in particular, an optimal bound of ${\mathbf {L}}_m^{\rm ch}$ for the trigonometric system in $L^1(\ST)$, settling a question left open in \cite{SY}. In Section \ref{s:comparison_mu_mu_d} we investigate the equivalence between $\tmu^d_m$ and $\tmu_m$ and show Theorem \ref{th_mudN}. Finally, in Section 6 we study the convergence of ${\mathfrak{CG}}_m (x)$ and $\mathcal G_m(x)$ to $x$ under the \emph{strong} M-basis assumption, settling a gap in \cite{SY,Wo}. \section{Preliminary results}\label{previous} We recall some basic concepts and results that will be used later in the paper; see \cite{DKK,BBG}. For each $\alphal>0$ we define the $\alphal$-truncation of a scalar $y\in\mathbb K$ as $$T_\alphal(y)=\alphal\, \mathrm{sign \,} y\; \mbox{ if }\, \vert y\vert \geq \alphal,\mand T_\alphal(y)=y\; \mbox{ if }\, \vert y\vert\leq\alphal.$$ We extend $T_\alphal$ to an operator in $\mathbb X$ by formally assigning $T_\alphal(x)\sim\sum_{n=1}^\infty T_\alphal({\mathbf e}n(x)){\mathbf e}_n$, that is $$T_\alphal(x):= \alphal \bone_{\varepsilon \Lambda_\alphal(x)}+(I-P_{\Lambda_\alphal(x)})(x),$$ where $\Lambda_\alphal(x)=\lbrace n : \vert{\mathbf e}n(x)\vert>\alphal\rbrace$ and $\varepsilon=\lbrace\mathrm{sign \,}({\mathbf e}n(x))\rbrace$. Of course, this operator is well defined since $\Lambda_\alphal(x)$ is a finite set. In \cite{BBG} we can find the following result: {\mathbf e}gin{lemma}{\cite[Lemma 2.5]{BBG}}\label{trun1} For all $\alphal>0$ and $x\in\mathbb X$, we have $$\Vert T_\alphal(x)\Vert \leq g_{\vert\Lambda_\alphal(x)\vert}^c\Vert x\Vert.$$ \end{lemma} We also need a well known property from \cite{DKK,DKKT}, formulated as follows. {\mathbf e}gin{lemma}{\cite[Lemma 2.3]{BBG}}\label{propc} If $x\in\mathbb X$ and $\varepsilon=\lbrace\mathrm{sign \,}({\mathbf e}n(x))\rbrace$, then \mathcal Be\min_{n\in G}\vert{\mathbf e}n(x)\vert \Vert\bone_{\varepsilon G}\Vert \leq \tg_{\vert G\vert} \Vert x\Vert,\quad \forall G\in G(x,m,1).\label{propG}\end{equation} \end{lemma} The following version of \eqref{propG}, valid even if $G$ is not greedy, improves \cite[Lemma 2.2]{DKO}. 
{\mathbf e}gin{lemma}\label{propc1} Let $x\in\mathbb X$ and $\varepsilon=\lbrace\mathrm{sign \,}({\mathbf e}n(x))\rbrace$. For every set finite $A\subset\SN$, if $\alphal=\min_{n\in A}|{\mathbf e}^*_n(x)|$, then \mathcal Be\alphal \Vert\bone_{\varepsilon A}\Vert \,\leq \,\ga_{|A\cup\La_\alphal(x)|}\,\tg_{|A\cup\La_\alphal(x)|} \Vert x\Vert,\label{propA}\end{equation} where $\Lambda_\alphal(x)=\lbrace n : \vert{\mathbf e}n(x)\vert>\alphal\rbrace$. \end{lemma} {\mathbf e}gin{proof} Call $G=A\cup\La_\alphal(x)$, and notice that it is a greedy set for $x$. Then, \[ \alphal \Vert\bone_{\varepsilon A}\Vert \,\leq\,\alphal\,\ga_{|G|}\|\bone_{\e G}\|\,\leq\,\ga_{|G|}\,\tg_{|G|}\, \|x\|,\] using \eqref{propG} in the last step. \end{proof} {\mathbf e}gin{Remark} \label{propR1}{\rm The following is a variant of \eqref{propA} with a different constant \mathcal Be \min_{n\in A}|{\mathbf e}^*_n(x)|\; \Vert\bone_{\varepsilon A}\Vert \,\leq \,k_{|A|}\, \Vert x\Vert. \label{propR}\end{equation} A similar proof as the one in Lemma \ref{propc1} can be seen in \cite[Proposition 2.5]{BB1}.} \end{Remark} Finally, using convexity as in \cite[Lemma 2.7]{BBG}, one has the elementary lemma. {\mathbf e}gin{lemma}\label{conv} For all finite sets $A\subset\SN$ and scalars $a_n\in\SK$ it holds \[ \mathcal Big\|\sum_{n\in A} a_n {\mathbf e}_n\mathcal Big\|\leq \,\max_{n\in A}|a_n|\,\sup_{|\e|=1}\big\|\bone_{\e A}\big\|. \] \end{lemma} \section{Proof of the main results}\label{proof} \subsection{Proof of Theorem \ref{main2}} Let $x\in\mathbb X$ and ${\mathfrak{CG}}t_m\in \mathcal Gtm$ be a fixed Chebyshev $t$-greedy operator, and denote by $A=\mathop{\rm supp}{\mathfrak{CG}}t_mx\in G(x,m,t)$. Pick any $z=\sum_{n\in B}b_n{\mathbf e}_n$ such that $\vert B\vert = m$. By definition of the Chebyshev operators, {\mathbf e}gin{eqnarray*} \Vert x-\mathfrak{CG}_m^t(x)\Vert \leq \Vert x-P_{A\cap B}(x)\Vert\leq \Vert P_{B\setminus A}(x)\Vert + \Vert x-P_B(x)\Vert. \end{eqnarray*} On the one hand, using \eqref{tineq}, $$\Vert P_{B\setminus A}(x)\Vert \leq \sup_n\Vert {\mathbf e}_n\Vert\sum_{j\in B\setminus A}\vert \mathbf{e}_j^*(x)\vert\leq \frac{1}{t}\sup_n\Vert{\mathbf e}_n\Vert \sum_{j\in A\setminus B}\vert \mathbf{e}_j^*(x-z)\vert \leq \frac{1}{t}\mathfrak{K}m\Vert x-z\Vert.$$ On the other hand, using the inequality (3.9) of \cite{BBG}, $$\Vert x-P_B(x)\Vert=\Vert (I-P_B)(x-z)\Vert \leq k_m^c\Vert x-z\Vert \leq (1+\mathfrak{K}m)\Vert x-z\Vert.$$ Hence, ${\mathbf {L}}_m^{{\rm ch}, t}\leq 1+\left(1+\frac{1}{t}\right)\mathfrak{K}m$. Finally, the fact that the equality in \eqref{ineq_th1} can be attained is witnessed by Examples \ref{summing} and \ref{diff} below. \subsection{Proof of Theorem \ref{main1}} The scheme of the proof follows the lines in \cite[Theorem 3.2]{DKK} and \cite[Theorem 4.1]{DKO}, with some additional simplifications introduced in \cite{BBG}. Given $x\in\mathbb X$ and ${\mathfrak{CG}}t_m\in \mathcal Gtm$, we denote by $A=\mathop{\rm supp}{\mathfrak{CG}}t_mx\in G(x,m,t)$. Pick any $z=\sum_{n\in B}b_n{\mathbf e}_n$ such that $\vert B\vert = m$. By definition of the Chebyshev operators, \mathcal Be\|x-{\mathfrak{CG}}^t_mx\|\leq \|x-p\|,\quad \mbox{for any $p=\sum_{n\in A}a_n{\mathbf e}_n$.}\label{chtp}\end{equation} We make the selection of $p$ suggested in \cite{DKK}. Namely, if $\alphal=\max_{n\notin A}|{\mathbf e}^*_n(x)|$, we let \[ p=P_A(x)- P_A\big(T_\alphal(x-z)\big). 
\] It is easily verified that \mathcal Bea x-p & = & (I-P_A)\big(x-T_\alphal(x-z)\big)+T_\alphal(x-z) \nonumber\\ & = & P_{B\setminus A}\big(x-T_\alphal(x-z)\big)+T_\alphal(x-z). \label{p12}\end{equation}a Since $\La_\alphal(x-z)=\{n\mid |{\mathbf e}^*_n(x-z)|>\alphal\}\subset A\cup B$, then Lemma \ref{trun1} gives \mathcal Be \big\|T_\alphal(x-z)\big\|\leq g^c_{2m}\|x-z\|. \label{Taz}\end{equation} Next we treat the first term in \eqref{p12}. Observe that $\max_{n\in B\setminus A}\vert {\mathbf e}n(x-T_\alphalpha(x-z))\vert\leq 2\alphalpha$, so Lemma \ref{conv} gives \mathcal Bea \big\|P_{B\setminus A}\big(x-T_\alphal(x-z)\big)\big\| & \leq & 2\alphal\,\sup_{|\e|=1}\big\|\bone_{\e(B\setminus A)}\big\|\nonumber\\ & \leq & \frac2t\,\min_{n\in A\setminus B}|{\mathbf e}^*_n(x-z)|\,\sup_{|\e|=1}\big\|\bone_{\e(B\setminus A)}\big\|=(*).\label{PBA} \end{equation}a At this point we have two possible approaches. Let $\eta_n=\mathrm{sign \,} [e^*_n(x-z)]$. In the first approach we pick a greedy set $\Ga\in G(x-z,|A\setminus B|,1)$, and control \eqref{PBA} by \mathcal Be (*) \,\leq \,\frac2t\,\min_{n\in \Ga}|{\mathbf e}^*_n(x-z)|\,\,\tmu_m\,\big\|\bone_{\eta\Ga}\big\|\, \leq \, \frac2t\,\tmu_m\,\tg_{m}\|x-z\|,\label{PBA1}\end{equation} using Lemma \ref{propc} in the last step. In the second approach, we argue as follows \mathcal Be (*) \,\leq \,\frac2t\,\min_{n\in A\setminus B}|{\mathbf e}^*_n(x-z)|\,\,\tmu^d_m\,\big\|\bone_{\eta(A\setminus B)}\big\|\, \leq \, \frac2t\,\ga_{2m}\,\tg_{2m}\,\tmu_m^d\,\|x-z\|,\label{PBA2}\end{equation} using in the last step Lemma \ref{propc1} and the fact that, if $\dt=\min_{A\setminus B}|{\mathbf e}^*_n(x-z)|$, then the set $(A\setminus B)\cup\{n\mid|{\mathbf e}^*_n(x-z)|>\dt\}\subset A\cup B$ and hence has cardinality $\leq 2m$. \ We can now combine the estimates displayed in \eqref{chtp}-\eqref{PBA2} and obtain \[ \|x-{\mathfrak{CG}}^t_mx\|\leq \,\big[g^c_{2m}+\frac2t\,\min\big\{\,\tg_m\tmu_m\,,\;\ga_{2m}\tg_{2m}\tmu^d_m\,\big\}\big]\,\|x-z\|, \] which after taking the infimum over all $z$ establishes Theorem \ref{main1}. \ProofEnd {\mathbf e}gin{Remark} {\rm In \cite[Theorem 3.5]{SY} a stronger inequality is stated (for $t=1$), namely \mathcal Be {\mathbf {L}}_m^{\rm ch}\leq g^c_{2m}+2\tg_m\tmu^d_m.\label{Ye}\end{equation} The proof, however, seems to contain a gap, and a missing factor $k_m^c$ should also appear in the last summand. Nevertheless, it is still fair to ask whether the inequality \eqref{Ye} asserted in \cite{SY} may be true with a different proof.} \end{Remark} {\mathbf e}gin{Remark} {\rm Using Remark \ref{propR1} in place of Lemma \ref{propc1} in \eqref{PBA2} above leads to an alternative and slightly simpler estimate \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t} \;\leq \;g_{2m}^c\,+\,\frac2t\,k_m\tmu_m^d\,.\label{cLk}\end{equation} However, this would not be as efficient as \eqref{Lmain} when $\mathcal B$ is quasi-greedy and conditional.} \end{Remark} {\mathbf e}gin{Remark} {\rm When $\mathcal B$ is quasi-greedy with constant ${\mathbf q}=\sup_m g_m<\infty$, then Theorem \ref{main1} implies the following \[ {\mathbf {L}}_m^{{\rm ch}, t} \leq {\mathbf q}+ 4t^{-1}\, {\mathbf q}^2 \,\tilde{\mu}^d_m.\] This is a slight improvement with respect to \cite[Theorem 4.1]{DKO}. }\end{Remark} \subsection{Proof of Theorem \ref{th3}} Recall that $S_N=\sum_{n=1}^N {\mathbf e}^*_n(\cdot){\mathbf e}_n$ and \[ F_N(x)=\frac1N\sum_{n=1}^NS_n(x)=\sum_{n=1}^N\big(1-\frac{n-1}N\big){\mathbf e}^*_n(x){\mathbf e}_n. 
\] For $M>N$ we define the operators (of de la Vall\'ee-Poussin type) \mathcal Bea V_{N,M}(x) & = & \frac M{M-N}\,F_{M}(x)-\frac N{M-N}F_{N}(x)\nonumber\\ & = & \sum_{n=1}^N{\mathbf e}^*_n(x){\mathbf e}_n\,+\,\sum_{n=N+1}^M\big(1-\frac{n-N-1}{M-N}\big)\,{\mathbf e}^*_n(x){\mathbf e}_n.\label{VNM}\end{equation}a In particular, observe that, for ${\mathbf e}ta$ as in \eqref{beta} we have \mathcal Be \max\big\{\|V_{N,M}\|, \|I-V_{N,M}\|\big\}\,\leq\, \frac{M+N}{M-N}\,{\mathbf e}ta. \label{VNMb}\end{equation} We next prove that, if $c\geq2$, then for all $A,B\subset\SN$ such that $B>cA$ with $|A|=|B|\leq m$ it holds \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac1{t{\mathbf e}ta}\,\frac{c-1}{c+1}\,\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|},\quad\forall\;|\e|=|\eta|=1. \label{beta1}\end{equation} Pick any set $C>B$ such that $|B\cup C|=m$, and let \[ x=\bone_{\e A}+t\bone_{\eta B}+t\bone_C. \] Then $B\cup C\in G(x,m,t)$, and hence there is a Chebyshev $t$-greedy operator so that \[ x-{\mathfrak{CG}}^t_m(x)=\bone_{\e A}+\sum_{n\in B\cup C} a_n{\mathbf e}_n, \] for some scalars $a_n\in\SK$. Clearly, \[ \|x-{\mathfrak{CG}}^t_m(x)\|\leq {\mathbf {L}}_m^{{\rm ch}, t}\mathop{\rm sign}ma_m(x)\,\leq\,{\mathbf {L}}_m^{{\rm ch}, t}\,\|t\bone_{\eta B}\|, \] using $z=\bone_{\e A}+t\bone_C$ an $m$-term approximant. On the other hand, let $N=\max A$. Since $\min B\cup C> cN$, then \eqref{VNM} yields \[ V_{N,cN}(x-{\mathfrak{CG}}^t_mx)=\bone_{\e A}. \] Therefore, \eqref{VNMb} implies that \[ \|x-{\mathfrak{CG}}^t_m(x)\|\geq \frac{\|V_{N,cN}(x-{\mathfrak{CG}}^t_mx)\|}{\|V_{N,cN}\|}\geq \frac{c-1}{(c+1){\mathbf e}ta}\,\|\bone_{\e A}\|. \] We have therefore proved \eqref{beta1}. We next show that when $|A|=|B|\leq m$ satisfy $A>cB$ then \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac1{t{\mathbf e}ta^2}\,\frac{c-1}{c+1}\,\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|},\quad\forall\;|\e|=|\eta|=1. \label{beta2}\end{equation} This together with \eqref{beta1} is enough to establish Theorem \ref{th3}. We shall actually show a slightly stronger result: {\mathbf e}gin{lemma} \label{lem_x} Let $|A|=|B|\leq m$ and let $y\in\SX$ be such that $|y|_\infty:=\sup_n|{\mathbf e}^*_n(y)|\leq1$ and $A>c(B\mathbin{\mathaccent\cdot\cup} \mathop{\rm supp} y)$. Then \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac1{t{\mathbf e}ta^2}\,\frac{c-1}{c+1}\,\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}+y\|},\quad\forall\;|\e|=|\eta|=1. \label{beta3}\end{equation} \end{lemma} Observe that the case $y=0$ in \eqref{beta3} yields \eqref{beta2}. We now show \eqref{beta3}. Pick a large integer $\la>1$ and a set $C>\la A$ such that $|B\cup C|=m$. Let \[ x=\bone_{\e A}+ty+t\bone_{\eta B}+t\bone_C. \] As before, $B\cup C\in G(x,m,t)$, and hence for some Chebyshev $t$-greedy operator we have \[ x-{\mathfrak{CG}}^t_m(x)=\bone_{\e A}+ty+\sum_{n\in B\cup C} a_n{\mathbf e}_n, \] for suitable scalars $a_n\in\SK$. Choosing $\bone_{\e A}+t\bone_C$ as $m$-term approximant of $x$ we see that \[ \|x-{\mathfrak{CG}}^t_m(x)\|\leq {\mathbf {L}}_m^{{\rm ch}, t}\mathop{\rm sign}ma_m(x)\,\leq\,{\mathbf {L}}_m^{{\rm ch}, t}\,t\,\|\bone_{\eta B}+y\|. \] On the other hand, calling $N=\max(B\mathbin{\mathaccent\cdot\cup}\mathop{\rm supp} y)$ and $L=\max A$ we have \[ (I-V_{N,cN})\circ V_{L,\la L}\big(x-{\mathfrak{CG}}^t_mx\big)=\bone_{\e A} \] Thus, \[ \|x-{\mathfrak{CG}}^t_m(x)\|\geq \frac{\|\bone_{\e A}\|}{\|I-V_{N,cN}\|\|V_{L,\la L}\|}\geq \frac{c-1}{(c+1){\mathbf e}ta}\,\frac{\la-1}{(\la+1){\mathbf e}ta}\,\|\bone_{\e A}\|. 
\] Therefore we obtain \[ {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac1{t{\mathbf e}ta^2}\,\frac{c-1}{c+1}\,\frac{\la-1}{\la+1}\,\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}+y\|} \] which letting $\la\to\infty$ yields \eqref{beta3}. This completes the proof of Lemma \ref{lem_x}, and hence of Theorem \ref{th3}. {\mathbf e}gin{Remark} {\rm When $\mathcal B$ is a Schauder basis, a similar proof gives the following lower bound, which is also obtained in \cite[Theorem 2.2]{SY} \[ {\mathbf {L}}_m^{{\rm ch}, t}\geq\,\frac1{(K_b+1)t}\,\sup\mathcal Big\{\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\mid |A|=|B|= m, \;\mbox{$A>B$ or $B>A$ },\;|\e|=|\eta|=1\mathcal Big\}. \] The statement for Ces\`aro bases, however, will be needed for the applications in \S\ref{ex_trig}. }\end{Remark} \subsection{Lower bounds for general M-bases} Observe that \[ \vartheta_{m,c}=\sup_{|A|\leq m}\vartheta_c(A),\quad\mbox{where}\quad \vartheta_c(A)=\sup_{{B\mid |B|=|A|}\alphatop{{B>cA}\alphatop{\e,\eta\in\Upsilon}}} \max\mathcal Big\{\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|},\frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|}\mathcal Big\}. \] We consider a new parameter\mathcal Be \vartheta_m=\sup_{|A|\leq m}\;\inf_{c\geq1}\;\vartheta_c(A). \label{theta2}\end{equation} We remark that, unlike $\vartheta_{m,c}$, the parameter $\vartheta_m$ depends on $\{{\mathbf e}_n\}_{n=1}^\infty$ but not on the reorderings of the system. We shall give a lower bound for ${\mathbf {L}}_m^{{\rm ch}, t}$ in terms of $\vartheta_m$ in a less restrictive situation than the Ces\`aro basis assumption on $\{{\mathbf e}_n\}_{n=1}^\infty$. \ Given $\rho\geq1$, we say that $\{{\mathbf e}_n\}_{n=1}^\infty$ is $\rho$-admissible if the following holds: \emph{for each finite set $A\subset\SN$, there exists $n_0=n_0(A)$ such that, for all sets $B$ with $\min B\geq n_0$ and $|B|\leq|A|$,} \mathcal Be \big\|\sum_{n\in A}\alphal_n{\mathbf e}_n\big\|\leq \rho\,\big\|\sum_{n\in A\cup B}\alphal_n{\mathbf e}_n\big\|,\quad\forall\;\alphal_n\in\SK. \label{A1} \end{equation} Observe that \eqref{A1} implies that \mathcal Be \big\|\sum_{n\in B}\alphal_n{\mathbf e}_n\big\|\leq (\rho+1)\,\big\|\sum_{n\in A\cup B}\alphal_n{\mathbf e}_n\big\|,\quad\forall\;\alphal_n\in\SK. \label{B1} \end{equation} This condition is clearly satisfied by all Schauder and Ces\`aro bases (with $\rho=K_b$ or $\rho>{\mathbf e}ta$), but we shall see below that it also holds in more general situations. {\mathbf e}gin{proposition}\label{Ptheta1} Let $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ be an M-basis such that $\{{\mathbf e}_n\}_{n=1}^\infty$ is $\rho$-admissible. Then \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac{\vartheta_m}{(\rho+1)t},\quad\forall\;m\in\SN,\quad t\in(0,1]. \label{thetam}\end{equation} \end{proposition} {\mathbf e}gin{proof} Fix $A\subset\SN$ such that $|A|\leq m$. Choose $C$ disjoint with $A$ such that $|A\cup C|=m$. Let $n_0=n_0(A\cup C)$ as in the above definition, which we may assume larger than $\max A\cup C$. Pick any $B$ with $\min B\geq n_0$ and $|B|=|A|$, and any $\e,\eta\in\Upsilon$. Let $x=t\bone_{\e A}+t \bone_C+\bone_{\eta B}$. Then $A\cup C\in G(x,m,t)$, and there is a Chebyshev $t$-greedy operator with ${\mathfrak{CG}}t_m(x)$ supported in $A\cup C$. Thus, \[ \|x-{\mathfrak{CG}}t_m(x)\|\leq {\mathbf {L}}_m^{{\rm ch}, t}\,\mathop{\rm sign}ma_m(x)\leq {\mathbf {L}}_m^{{\rm ch}, t}\,\|x-(\bone_{\eta B}+t\bone_C)\|={\mathbf {L}}_m^{{\rm ch}, t}\,t\,\|\bone_{\e A}\|. 
\] On the other hand, using the property in \eqref{B1} one obtains\[ \|x-{\mathfrak{CG}}t_m(x)\|\geq \frac{\|\bone_{\eta B}\|}{\rho+1}. \] Thus, \[ {\mathbf {L}}_m^{{\rm ch}, t}\,\geq\,\frac{1}{(\rho+1)t}\,\frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|}. \] We now assume additionally that $\min B\geq n_0+m$, and pick $D\subset[n_0,n_0+m-1]$ such that $|B|+|D|=m$. Let $y=\bone_{\e A}+t\bone_{\eta B}+t\bone_D$. Then $B\cup D\in G(y,m,t)$ and a similar reasoning gives \[ \frac{\|\bone_{\e A}\|}\rho\leq \|y-{\mathfrak{CG}}t_m(y)\|\leq {\mathbf {L}}_m^{{\rm ch}, t}\,\mathop{\rm sign}ma_m(y)\leq \,{\mathbf {L}}_m^{{\rm ch}, t}\, t\,\|\bone_{\eta B}\|. \] Thus, \[ {\mathbf {L}}_m^{{\rm ch}, t}\,\geq\,\frac{1}{(\rho+1)t}\,\max\mathcal Big\{\frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|},\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\mathcal Big\}, \] and taking the supremum over all $|B|=|A|$ with $B\geq (n_0+m)A$ and all $\e,\eta\in\Upsilon$, we see that \[ {\mathbf {L}}_m^{{\rm ch}, t}\,\geq\,\frac{\vartheta_{n_0+m}(A)}{(\rho+1)t}\,\geq \frac{\inf_{c\geq1} \vartheta_{c}(A)}{(\rho+1)t}. \] Finally, a supremum over all $|A|\leq m$ leads to \eqref{thetam}. \end{proof} \ We now give some general conditions in $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ and $\SX$ under which $\rho$-admissibility holds. We recall a few standard definitions; see e.g. \cite{Hajek}. We use the notation $[{\mathbf e}_n]_{n\in A}={\overline{\span}}\{{\mathbf e}_n\}_{n\in A}$, for $A\subset\SN$. A sequence $\{{\mathbf e}_n\}_{n=1}^\infty$ is \emph{weakly null} if\[ \lim_{n\to\infty}x^*({\mathbf e}_n)=0,\quad\forall\;x^*\in\SX^*. \] Given a subset $Y\subset\SX^*$, we shall say that $\{{\mathbf e}_n\}_{n=1}^\infty$ is \emph{$Y$-null} if\[ \lim_{n\to\infty}y({\mathbf e}_n)=0,\quad\forall\;y\in Y. \] Given $\kappa\in (0,1]$, we say that a set $Y\subset \SX^*$ is $\kappa$-norming whenever $$ \sup_{x^* \in Y, \Vert x^* \Vert \leq 1} \vert x^*(x) \vert\,\geq \,\kappa\,\|x\|, \quad\forall\;x\in\SX .$$ {\mathbf e}gin{proposition} \label{Ptheta2} Let $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ be a biorthogonal system in $\SX\times\SX^*$. Suppose that the sequence $\{{\tilde\be}_n:=\|{\mathbf e}^*_n\|\,{\mathbf e}_n\}_{n=1}^\infty\subset\SX$ is $Y$-null, for some subset $Y \subset \SX^*$ which is $\kappa$-norming. Then $\{{\mathbf e}_n\}_{n=1}^\infty$ is $\rho$-admissible for every $\rho>1/\kappa$. \end{proposition} {\mathbf e}gin{proof} Consider a finite set $A\subset\SN$ with say $\vert A \vert = m$ and denote $$E :=[{\mathbf e}_n]_{n\in A}.$$ Given $\varepsilon>0$, one can find a finite set $S \subset Y \cap \{x^*\in\SX^*\mid\|x^*\|=1\}$ so that {\mathbf e}gin{equation}\label{e1} \max_{x^* \in S} \vert x^*(e) \vert \geq (1-\varepsilon) \kappa \Vert e \Vert,\quad\forall\;e\in E . \end{equation} Indeed, it suffices to verify the above inequality for $e$ of norm $1$. Pick an $\varepsilon \kappa/2$-net $(z_k)_{k=1}^N$ in the unit sphere of $E$. For any $k$ find a norm one $z_k^* \in Y$ so that $|z_k^*(z_k)| > (1-\varepsilon/2) \kappa$. We claim that $S = \{z_k^* : 1 \leq k \leq N\}$ has the desired properties. To see this, pick a norm one $e \in E$, and find $k$ with $\|e - z_k\| \leq \varepsilon \kappa/2$. Then $$ \max_{x^* \in S} \vert x^*(e) \vert \geq |z_k^*(e)| \geq |z_k^*(z_k)| - \|e-z_k\| \geq (1-\varepsilon/2) \kappa - \varepsilon \kappa/2 = (1-\varepsilon) \kappa . 
$$ Next, since the sequence $\{\|{\mathbf e}^*_n\|\,{\mathbf e}_n\}$ is $Y$-null, for each $\delta>0$ we can find an integer $n_0> \max A$ so that $$ \max_{x^* \in S} \vert x^*({\mathbf e}_n) \vert\,\|{\mathbf e}^*_n\|\, \leq \frac{\delta\kappa}{m},\quad\forall\; n\ge n_0 . $$ Pick any $B$ of cardinality $m$ with $\min B\geq n_0$, and let $$G := [{\mathbf e}_n]_{n\in B}.$$ For $f = \sum_{n \in B} {\mathbf e}^*_n(f) {\mathbf e}_n \in G$, we have {\mathbf e}gin{equation}\label{e2} \max_{x^* \in S} \vert x^*(f) \vert \leq \max_{x^*\in S}\,\sum_{n\in B}|x^*({\mathbf e}_n)|\,\|{\mathbf e}^*_n\|\,\|f\|\,\leq\, \delta\kappa \Vert f \Vert.\end{equation} We claim that {\mathbf e}gin{equation} \Vert e+f \Vert \geq \frac{(1-\e-\dt)\kappa}{1+\dt\kappa}\,\|e\|,\, \, \, {\textrm{ for any }} \, e \in E, \, f \in G . \label{eq:direct_sum} \end{equation} To show this, we fix $\ga>0$ (to be chosen later), and assume first that $\|f\|\geq(1+\ga)\|e\|$. Then, \[ \|e+f\|\geq\|f\|-\|e\|\geq \ga\|e\|. \] Next assume that $\|f\|<(1+\ga)\|e\|$, then using (\ref{e1}) and (\ref{e2}) we obtain that $$ \Vert e+f \Vert \geq \max_{x^* \in S} \vert x^* (e+f) \vert \geq (1-\varepsilon)\kappa \Vert e \Vert - \delta\kappa \Vert f \Vert > (1-\varepsilon- \delta(1+\gamma))\kappa \Vert e \Vert . $$ We now choose $\ga$ so that $\ga=(1-\varepsilon- \delta(1+\gamma))\kappa$, that is, \[ \ga=\frac{(1-\varepsilon- \delta)\kappa}{1+\dt\kappa}, \] which shows the claim in \eqref{eq:direct_sum}. Now, given $\rho>1/\kappa$, we may pick $\dt=\e$ sufficiently small so that the above number $\ga>1/\rho$. Then, \eqref{eq:direct_sum} becomes \[ \Vert e+f \Vert \geq \frac1\rho\,\|e\|,\, \, \, {\textrm{ for any }} \, e \in [e_n]_{n\in A}, \, f \in [e_n]_{n\in B},\quad \] for all $B$ with $\min B\geq n_0$ and $|B|=|A|=m$. Thus, $\{{\mathbf e}_n\}_{n=1}^\infty$ is $\rho$-admissible. \end{proof} \ We mention a few cases where the hypotheses in the above proposition can be applied: \bline (1) When the sequence $\{{\tilde\be}_n\}_{n=1}^\infty$ is weakly null, since $Y = \SX^*$ is always $1$-norming. \sline (2) When $\sup_{n\geq1}\|{\mathbf e}_n\|\,\|{\mathbf e}^*_n\|<\infty$ and $Y=[{\mathbf e}^*_n]_{n\in\SN}$ is $\kappa$-norming, since the first condition implies that $\{{\tilde\be}_n\}_{n=1}^\infty$ is $Y$-null. In particular, when $\{{\mathbf e}_n\}_{n=1}^\infty$ is a Schauder basis in $\SX$, in which case the above conditions hold with $\kappa=1/K_b$; see \cite[Theorems I.3.1 and I.12.2]{SingerI}. \sline (3) In every separable Banach space $\SX$, if one picks $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ to be an $M$-basis with the properties in (2) and $\kappa=1$; see e.g. \cite[Theorem III.8.5]{SingerII} for the existence of such bases. \sline (4) Let $\SX=C(K)$ where $K$ is a compact Hausdorff set and let $\mu$ be a Radon probability measure in $K$ with $\mathop{\rm supp}\mu=K$. Then, the natural embedding of $C(K)$ into $L_\infty(\mu)$ is isometric, and therefore $Y=L_1(\mu)$ is $1$-norming in $\SX$. Let $\{{\mathbf e}_n\}_{n=1}^\infty$ be a complete system in $\SX$ which is orthonormal with respect to $\mu$ and uniformly bounded, that is, $\int_K {\mathbf e}_n {\overline{{\mathbf e}_m}} \, d\mu = \dt_{n,m}$ and $\sup_n\|{\mathbf e}_n\|_\infty<\infty$. Then the sequence $\{{\mathbf e}_n\}_{n=1}^\infty$ is $L_1(\mu)$-null in $\SX$. Indeed, this follows from case (2), and the fact that $C(K)$ is dense in $L_1(\mu)$. 
Examples of such systems in $C(K)$ include the trigonometric system in $C[0,1]$ (in the real or complex case), as well as certain polygonal versions of the Walsh system \cite{Cie68, ropela79,weisz01}, or any reorderings of them (which may cease to be Ces\`aro bases). \sline (5) As a dual of the previous, if $\SX=L^1(\mu)$ then every system $\{{\mathbf e}_n\}_{n=1}^\infty$ as in (4) is weakly null, and hence case (1) applies. \sline (6) Recall the definition of the \emph{right fundamental function}: $ {\mathbf{\varphi}}_r(m) = \sup \lbrace \Vert \bone_A \Vert : \vert A \vert \leq m \rbrace$. If $\{{\mathbf e}_n\}_{n=1}^\infty$ is such that ${\mathbf{\varphi}}_r(m) = {\mathbf{o}}(m)$, then this system is weakly null. Indeed, first note that also $ {\mathbf{\tphi}}_r(m) = \sup \lbrace \Vert \bone_{\eta A} \Vert : \vert A \vert \leq m,\;|\eta|=1 \rbrace={\mathbf{o}}(m)$. Assume that the system is not weakly null. Then there exist a norm one $x^* \in \SX^*$ and $\varepsilon_0 > 0$ so that the set $A=\{ n \in \mathbb N : \vert x^*({\mathbf e}_n) \vert \geq \varepsilon_0 \} $ is infinite. Pick any $F \subset A$ with $|F| = m$ and let $\eta_n=\mathop{\rm sign}[ x^*({\mathbf e}_n)]$; then $$ {\mathbf{\tphi}}_r(m) \geq \Vert \bone_{{\overline{\eta}}F} \Vert \geq \vert x^*( \sum_{n \in F} {\overline{\eta_n}}{\mathbf e}_n ) \vert =\sum_{n\in F}|x^*({\mathbf e}_n) | \geq m\varepsilon_0 , $$ contradicting our assumption. \ Finally, as a consequence of Propositions \ref{Ptheta1} and \ref{Ptheta2} one obtains {\mathbf e}gin{theorem}\label{Thm_thetam} Let $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ be a seminormalized M-basis such that the sequence $\{{\mathbf e}_n\}_{n=1}^\infty$ is $Y$-null for some subset $Y \subset \SX^*$ which is $\kappa$-norming. Then, if $\vartheta_m$ is as in \eqref{theta2}, we have \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \frac{\kappa\,\vartheta_m}{(\kappa+1)t},\quad\forall\;m\in\SN,\quad t\in(0,1]. \label{chtheta}\end{equation} \end{theorem} \section{Examples}\label{examples} The first two examples are variants of those in \cite[\S5.1]{BBG} and \cite[\S8.1]{BBGHO}. \subsection{Example \ref{summing}: The summing basis.}\label{summing} Let $\mathbb X$ be the closure of the set of all finite sequences $\mathbf{a}=(a_n)_n\in c_{00}$ with the norm $$\Vert \mathbf{a}\Vert = \sup_m\mathcal Big\vert \sum_{n=1}^m a_n\mathcal Big\vert.$$ The canonical system $\mathcal B=\lbrace {\mathbf e}_n\rbrace_{n=1}^\infty$ is a Schauder basis in $\SX$ with $K_b=1$ and $\Vert {\mathbf e}_n\Vert= 1$ for all $n$. Also, $\Vert \mathbf{e}_1^*\Vert = 1$, $\Vert {\mathbf e}n\Vert = 2$ if $n\geq 2$, so $\mathfrak{K}=2$ in Theorem \ref{main2}; see \cite[\S5.1]{BBG}. We now show that, for this example of $(\SX,\mathcal B)$, the bound of Theorem \ref{main2} is sharp. As in \cite[\S5.1]{BBG}, we consider the element: $$x=\mathcal Big(\underbrace{\frac{1}{2},\frac{1}{t},\frac{1}{2}},...,\underbrace{\frac{1}{2},\frac{1}{t},\frac{1}{2}}; \frac{1}{2}; \underbrace{-1,1},...,\underbrace{-1,1},0,...\mathcal Big),$$ where we have $m$ blocks of $\left(\frac{1}{2},\frac{1}{t},\frac{1}{2}\right)$ and $m$ blocks of $(-1,1)$. 
Picking $A=\lbrace n : x_n=-1\rbrace$ as a $t$-greedy set of $x$, we see that {\mathbf e}gin{eqnarray*} \Vert x-{\mathfrak{CG}}_m^t(x)\Vert &=& \min_{a_i, i =1,...,m}\mathcal Big\Vert \mathcal Big(\frac{1}{2},\frac{1}{t},\frac{1}{2},...,\frac{1}{2},\frac{1}{t},\frac{1}{2}; \frac{1}{2}; a_1, 1, a_2, 1,...,a_m,1,0,...\mathcal Big)\mathcal Big\Vert\\ &\geq& \mathcal Big\Vert \mathcal Big(\frac{1}{2},\frac{1}{t},\frac{1}{2},...,\frac{1}{2},\frac{1}{t},\frac{1}{2}; \frac{1}{2}; 0,...\mathcal Big)\mathcal Big \Vert = m+\frac{m}{t}+\frac{1}{2}. \end{eqnarray*} On the other hand, {\mathbf e}gin{eqnarray*} \mathop{\rm sign}ma_m(x)&\leq& \mathcal Big\Vert x-\frac{t+1}{t}(0,1,0,...,0,1,0; 0,...)\mathcal Big\Vert\\ &=& \mathcal Big\Vert \mathcal Big(\frac{1}{2},-1,\frac{1}{2},...,\frac{1}{2},-1,\frac{1}{2}; \frac{1}{2};-1,1,...,-1,1,0...\mathcal Big)\mathcal Big\Vert=\frac{1}{2}. \end{eqnarray*} Hence, ${\mathbf {L}}_m^{{\rm ch}, t}\geq 1+2(1+\frac1{t})m$ and we conclude that ${\mathbf {L}}_m^{{\rm ch}, t}= 1+2(1+\frac1{t})m$ by Theorem \ref{main2}. As a consequence, observe that in this case ${\mathfrak{CG}}^t_m(x)=0$. {\mathbf e}gin{Remark} {\rm The above example strengthens \cite[Theorem 2.4]{SY}, where the authors are only able to show that $1+4m\leq {\mathbf {L}}_m^{\rm ch}\leq 1+6m$. } \end{Remark} \subsection{Example \ref{diff}: the difference basis.}\label{diff} Let $\lbrace {\mathbf e}_n\rbrace_{n=1}^\infty$ be the canonical basis in $\ell^1(\mathbb N)$ and define the elements $$y_1 = {\mathbf e}_1,\; y_n = {\mathbf e}_n-{\mathbf e}_{n-1},\; n=2,3,...$$ The new system $\mathcal B= \lbrace y_n\rbrace_{n=1}^\infty$ is called the difference basis of $\ell^1$. We recall some basic properties used in \cite[\S 8.1]{BBGHO}. If $(b_n)_n\in c_{00}$ then $$\Vert \sum_{n=1}^\infty b_n y_n\Vert = \sum_{n=1}^\infty \vert b_n-b_{n+1}\vert.$$ Also, $\mathcal B$ is a monotone basis with $\Vert y_1\Vert = 1$, $\Vert y_n\Vert = 2$ if $n\geq 2$, and $\Vert y_n^*\Vert = 1$ for all $n\geq 1$ (in fact, the dual system corresponds to the summing basis). So, $\mathfrak{K}=2$ and Theorem \ref{main2} gives ${\mathbf {L}}_m^{{\rm ch}, t} \leq 1+2(1+\frac{1}{t})m$ for all $t\in (0,1]$. To show the equality we consider the vector $x=\sum_nb_n y_n$ with coefficients $(b_n)$ given by $$\mathcal Big(1,\underbrace{1,1,-\tfrac{1}{t},1},...,\underbrace{1,1,-\tfrac{1}{t},1},0,...\mathcal Big),$$ where the block $\mathcal Big(1,1,\frac{-1}{t},1\mathcal Big)$ is repeated $m$ times. If we take $\Gamma =\lbrace 2,6,...,4m-2\rbrace$ as a $t$-greedy set for $x$ of cardinality $m$, then {\mathbf e}gin{eqnarray*} \Vert x-{\mathfrak{CG}}_m^t(x)\Vert&=&\inf_{(a_j)_{j=1}^m}\Vert x-\sum_{j=1}^m a_jy_{4j-2}\Vert\\ &=& \inf_{(a_j)_{j=1}^m}\mathcal Big\Vert \mathcal Big(1,1-a_1,1,\frac{-1}{t},1,...,1-a_m,1,\frac{-1}{t},1,0,...\mathcal Big)\mathcal Big\Vert\\ &=& \inf_{(a_j)_{j=1}^m}2\sum_{j=1}^m \vert a_j\vert + 2m\mathcal Big(1+\frac{1}{t}\mathcal Big)+1=2m\mathcal Big(1+\frac{1}{t}\mathcal Big)+1. \end{eqnarray*} Hence, in this case we also have ${\mathfrak{CG}}^t_m(x)=0$. On the other hand \[ \mathop{\rm sign}ma_m(x)\leq \big\|x+(1+\tfrac1t)\sum_{j=1}^m y_{4j}\big\|= \Vert (1,1,1,1,1,...,1,1,1,1,0,...)\Vert =1.\] This shows that ${\mathbf {L}}_m^{{\rm ch}, t} = 1+2(1+\frac{1}{t})m$. \subsection{Example \ref{ex_trig}: the trigonometric system in $L^p(\mathbb T)$.}\label{ex_trig} Consider $\mathcal B=\{e^{i nx}\}_{n\in\SZ}$ in $L^p(\ST)$ for $1\leq p<\infty$, and in $C(\ST)$ if $p=\infty$. 
In \cite{Tem98trig}, Temlyakov showed that \[ c_pm^{|\frac1p-\frac12|}\leq {\mathbf L}_m\leq 1+3m^{|\frac1p-\frac12|}, \] for some $c_p>0$ and all $1\leq p\leq \infty$. Adapting his argument, Shao and Ye have recently established, in \cite[Theorem 2.1]{SY}, that for $1<p\leq\infty$ it also holds \mathcal Be{\mathbf {L}}_m^{\rm ch}\alphapprox m^{|\frac1p-\frac12|}.\label{LcT}\end{equation} The case $p=1$ is left as an open question, and only the estimate $\frac{\sqrt m}{\ln(m)}\lesssim {\mathbf {L}}_m^{\rm ch}\lesssim \sqrt m$ is given; see \cite[(2.24)]{SY}. Moreover, the proof of the case $p=\infty$ seems to contain some gaps and may not be complete. Here, we shall give a short proof ensuring the validity of \eqref{LcT} in the full range $1\leq p\leq\infty$, with a reasoning similar to \cite[\S 5.4]{BBG}. More precisely, we shall prove the following. {\mathbf e}gin{proposition} Let $1\leq p\leq \infty$. Then there exists $c_p>0$ such that \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\,\geq \, c_p\,t^{-1}\, m^{|\frac 1p-\frac 12|},\quad \forall\;m\in\SN,\quad t\in(0,1]. \label{Tt}\end{equation} \end{proposition} We remark that in the cases $p=1$ and $p=\infty$ the trigonometric system is not a Schauder basis, but it is a Ces\`aro basis\footnote{We equip $\mathcal B$ with its natural ordering $\{1,e^{ix}, e^{-ix}, e^{2ix}, e^{-2ix}, \ldots\}$.}. So we may use the lower bounds in Theorem \ref{th3}, namely \mathcal Be {\mathbf {L}}_m^{{\rm ch}, t}\geq \,{c'_p}\;t^{-1}\,\sup_{{{|A|=|B|\leq m}\alphatop{A>2B\;\;\text{or}\;\;B>2A}}}\,\sup_{|\e|=|\eta|=1}\;\frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}. \label{prop1main} \end{equation} {\mathbf e}gin{itemize} \item Case $1<p\leq 2$. Assume that $m=2\ell+1$ or $2\ell+2$ (that is, $\ell=\lfloor \frac{m-1}2\rfloor$). We choose $B=\{-\ell,...,\ell\}$, so that $\bone_B=D_\ell$ is the $\ell$-th Dirichlet kernel, and hence \[ \|\bone_B\|_p=\|D_\ell\|_{L^p(\ST)}\alphapprox m^{1-\frac1p}. \] Next we take a lacunary set $A=\lbrace 2^j : j_0\leq j\leq j_0+2\ell\rbrace$, so that \mathcal Be \|\bone_A\|_p\alphapprox\sqrt m, \label{Ap}\end{equation} and where $j_0$ is chosen such that $2^{j_0}\geq m$, and hence $A>2B$. Then, \eqref{prop1main} implies \[ {\mathbf {L}}_m^{{\rm ch}, t}\geq \,{c_p}\;t^{-1}\frac{m^{1/2}}{m^{1-\frac1p}}\,=\,c_p\,t^{-1}\, m^{|\frac 1p-\frac 12|}. \] \item Case $2\leq p<\infty$. The same proof works in this case, just reversing the roles of $A$ and $B$. \item Case $p=\infty$. We replace the lacunary set by a Rudin-Shapiro polynomial of the form \[ R(x)=e^{iNx}\,\sum_{n=0}^{2^L-1}\e_n e^{inx},\quad \mbox{with }\;\e_n\in\{\pm1\}, \] where $L$ is such that $2^L\leq m<2^{L+1}$; see e.g. \cite[p. 33]{katz}. Then, $R=\bone_{\e B}$ with $B=N+\{0,1,\ldots,2^L-1\}$ and \[ \|\bone_{\e B}\|_\infty=\|R\|_{L^\infty(\ST)}\alphapprox \sqrt m. \] If we pick $N\geq 2\cdot 2^L$, then $B>2A$ with $A=\{\pm1,\ldots,\pm (2^L-1)\}$. Finally, \[ \|\bone_A\|_\infty=\|D_{2^L-1}-1\|_{L^\infty(\ST)} \alphapprox\, m. \] So, \eqref{prop1main} implies the desired bound. \item Case $p=1$. We use the lower bound in Lemma \ref{lem_x}, namely\mathcal Be {\mathbf {L}}_m^{{\rm ch}, t} \,\geq\, c_1'\;t^{-1}\;\frac{\|\bone_A\|}{\|\bone_B+y\|}, \label{cL1} \end{equation} for all $|A|=|B|\leq m$ and all $y$ such that $A>2(B\mathbin{\mathaccent\cdot\cup}\mathop{\rm supp} y)$ and $\sup_n|{\mathbf e}^*_n(y)|\leq 1$. As before, let $m=2\ell+1$ or $2\ell+2$, and choose the same sets $A$ and $B$ as in the case $1<p\leq2$. 
Next choose $y$ so that the vector \[ V_\ell=\bone_B+y \] is a de la Vall\'ee-Poussin kernel as in \cite[p. 15]{katz}. Then, the Fourier coeffients ${\mathbf e}^*_n(y)$ have modulus $\leq1$ and are supported in $\{n\mid \ell<|n|\leq 2\ell+1\}$, so the condition $A>2(B\mathbin{\mathaccent\cdot\cup}\mathop{\rm supp} y)$ holds if $2^{j_0}\geq 2m+1$. Finally, \[ \|\bone_B+y\|_1=\|V_\ell\|_{L^1(\ST)}\leq 3, \] so the bound ${\mathbf {L}}_m^{{\rm ch}, t}\gtrsim t^{-1}\sqrt m$ follows from \eqref{cL1}. \end{itemize} {\mathbf e}gin{Remark} {\rm Using the trivial upper bound ${\mathbf {L}}_m^{{\rm ch}, t}\leq {\mathbf L}_m^t\lesssim t^{-1} m^{|\frac1p-\frac12|}$, we conclude that ${\mathbf {L}}_m^{{\rm ch}, t} \alphapprox t^{-1}m^{\vert \frac{1}{p}-\frac{1}{2}\vert}$ for all $1\leq p\leq \infty$.} \end{Remark} \section{Comparison between $\tmu_m$ and $\tmu^d_m$ }\label{s:comparison_mu_mu_d} \setcounter{equation}{0}\setcounter{footnote}{0} \setcounter{figure}{0} In this section we compare the democracy constants $\tmu_m$ and $\tmu^d_m$ defined in \S1 above. Let us first note that \mathcal Be \tmu^d_m\leq \tmu_m\leq (\tmu^d_m)^2 \label{mu2} \end{equation} and \mathcal Be \tmu_m^d\leq \tmu_m\leq (1+2\kappa)\ga_m\tmu_m^d,\label{mumud} \end{equation} where $\kappa=1$ or 2 depending if $\SK=\SR$ or $\SC$. Indeed, the left inequality in (\ref{mu2}) is immediate by definition, and the right one follows from \[ \frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|}= \frac{\|\bone_{\eta B}\|}{\|\bone_{C}\|}\,\frac{\|\bone_{C}\|}{\|\bone_{\e A}\|}\leq (\tmu^d_m)^2, \] for any $|A|=|B|\leq m$ and any $C$ disjoint with $A\cup B$ with $|C|=|A|=|B|$. Concerning the right inequality in (\ref{mumud}), we use that if $|A|=|B|\leq m$ then \[ \frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\leq \frac{\|\bone_{\e (A\setminus B)}\|+\|\bone_{\e(A\cap B)}\|}{\|\bone_{\eta B}\|}\leq \ga_m \frac{\|\bone_{\e (A\setminus B)}\|}{\|\bone_{\eta (B\setminus A)}\|} +\frac{\|\bone_{\e(A\cap B)}\|}{\|\bone_{\eta B}\|}\leq \ga_m\,\tmu_m^d+2\kappa\ga_m, \] using in the last step \cite[Lemma 3.3]{BBG}. From (\ref{mumud}) we see that $\tmu_m\alphapprox\tmu_m^d$ when $\mathcal B$ is quasi-greedy for constant coefficients. In the next subsection we shall show that $\tmu_m\alphapprox \tmu^d_m$ for all Schauder bases, a result which seems new in the literature. \subsection{Equivalence for Schauder bases} We begin with a simple observation. {\mathbf e}gin{lemma}\label{s:alternative_mu} \mathcal Be \tmu^d_m=\sup\mathcal Big\{\frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|}\mid |B|\leq |A|\leq m,\;\;A\cap B=\emptyset,\;\;|\e|=|\eta|=1\mathcal Big\}. \label{tmudN1}\end{equation} \end{lemma} {\mathbf e}gin{proof} Let $|\e|=|\eta|=1$ and $|B| \leq |A| \leq m$ with $A\cap B=\emptyset$. We must show that $\|\bone_{\eta B}\|/\|\bone_{\e A}\|\leq \tmu^d_m$. Pick any set $C$ disjoint with $A\cup B$ such that $|B|+|C| = |A|$. We now use the elementary inequality \mathcal Be \|x\|=\mathcal Big\|\frac{x+y}2+\frac{x-y}2\mathcal Big\|\leq\max\{\|x+y\|,\|x-y\|\},\label{max_xy} \end{equation} with $x=\bone_{\eta B}$ and $y=\bone_C$. Let $\eta'\in\Upsilon$ be such that $\eta'|_B = \eta|_B$ and $\eta'|_C=\pm1$, according to the sign that reaches the maximum in \eqref{max_xy}. Then $\|\bone_{\eta B}\|\leq \|\bone_{\eta'(B\cup C)}\|\leq \tmu^d_m\|\bone_{\e A}\|$, and the result follows. 
\end{proof} {\mathbf e}gin{theorem}\label{th_mudN} If $K_b$ is the basis constant and $\varkappa=\sup_{n}\Vert {\mathbf e}n\Vert\Vert {\mathbf e}_n\Vert$, then \mathcal Be \tmu_m\leq 2(K_b+1)\tmu^d_m +\varkappa\,K_b.\label{mudN}\end{equation} \end{theorem} {\mathbf e}gin{proof} Let $|A|=|B|\leq m$, and $|\e|=|\eta|=1$. Then \[ \frac{\|\bone_{\eta B}\|}{\|\bone_{\e A}\|} \leq \frac{\|\bone_{\eta(B\setminus A)}\|}{\|\bone_{\e A}\|}+\frac{\|\bone_{\eta (B\cap A)}\|}{\|\bone_{\e A}\|} = I+II. \] Lemma \ref{s:alternative_mu} implies $I\leq \tmu_m^d$. We now bound $II$. Pick an integer $n_0$ such that $A_1=\{n\in A\mid n\leq n_0\}$ and $A_2=A\setminus A_1$ satisfy \[ |A_1|=|A_2| \quad\mbox{(if $|A|$ is even), \; or}\quad |A_1|=\frac{|A|-1}2=|A_2|-1\quad \mbox{(if $|A|$ is odd)}.\] Then \mathcal Beas II & \leq & \frac{\|\bone_{\eta (B\cap A_1)}\|}{\|\bone_{\e A}\|}+\frac{\|\bone_{\eta (B\cap A_2)}\|}{\|\bone_{\e A}\|}\\ & \leq & (K_b+1)\frac{\|\bone_{\eta (B\cap A_1)}\|}{\|\bone_{\e A_2}\|}+K_b\frac{\|\bone_{\eta (B\cap A_2)}\|}{\|\bone_{\e A_1}\|}\;=\;II_1+II_2, \end{equation}as using in the second line the basis constant bound for the denominator. Since $|B\cap A_1|\leq |A_1|\leq |A_2|$, we see that \[ II_1\leq (K_b+1)\tmu^d_m.\] On the other hand, picking any number $n_1\in B\cap A_2$, and using $\|{\mathbf e}^*_{n_1}\|\|\bone_{\e A}\|\geq |{\mathbf e}^*_{n_1}(\bone_{\e A})|=1$, we see that \[ II_2\leq K_b\frac{\|\bone_{\eta (B\cap A_2\setminus\{n_1\})}\|}{\|\bone_{\e A_1}\|}+ K_b\|{\mathbf e}_{n_1}\|\|{\mathbf e}^*_{n_1}\|\leq K_b\tmu^d_m+\varkappa K_b, \] the last bound due to $|B\cap A_2\setminus\{n_1\}|\leq |A_2|-1\leq |A_1|$ and Lemma \ref{s:alternative_mu}. Putting together the previous bounds easily leads to \eqref{mudN}. \end{proof} {\mathbf e}gin{Remark}{\rm A similar argument shows the equivalence of the standard (unsigned) democracy parameters \mathcal Be \mu_m=\sup_{|A|=|B|\leq m}\frac{\|\bone_{B}\|}{\|\bone_{A}\|}\mand \mu^d_m=\sup_{{|A|=|B|\leq m}\alphatop{A\cap B=\emptyset}}\frac{\|\bone_{B}\|}{\|\bone_{A}\|}. \label{unsign}\end{equation} Indeed, in this case, the analog of \eqref{tmudN1} takes the weaker form \mathcal Be \mu^d_m\leq \sup_{{|B|\leq |A|\leq m}\alphatop{A\cap B=\emptyset}}\frac{\|\bone_{ B}\|}{\|\bone_{ A}\|}\leq K_b\mu^d_m. \label{mudN1}\end{equation} Then, \eqref{mudN1} and the same proof we gave for Theorem \ref{th_mudN} (with $\eta=\e\equiv1$) leads to \mathcal Be \mu_m\leq 2(K_b+1)K_b\,\mu^d_m +\varkappa\,K_b. \label{mudN2} \end{equation} } \end{Remark} \ \subsection{An example where $\tmu_m$ grows faster than $\tmu^d_m$} The following example also seems to be new in the literature. As in \eqref{unsign}, we denote by $\mu_m$, $\mu^d_m$ the democracy parameters corresponding to constant signs. {\mathbf e}gin{theorem}\label{Timur} There exists a Banach space $\SX$ with an M-basis $\mathcal B$ such that\[ \limsup_{m\to\infty}\frac{\tmu_m}{[\tmu^d_m]^{2-\e}}=\limsup_{m\to\infty}\frac{\mu_m}{[\mu^d_m]^{2-\e}}=\infty,\quad \forall\;\e>0. \] \end{theorem} {\mathbf e}gin{proof} Let $N_0=1$, and define recursively $N_k=2^{2^{N_{k-1}}}$, and $N_k'=N_1+\ldots +N_{k-1}$. Consider the blocks of integers \[ S_k=\big\{N'_k+1,\ldots, N'_k+N_k \big\},\] and denote the tail blocks by $T_k=\cup_{j\geq k+1} S_{j}$. Finally, let \[ \mathfrak{N}_k=\big\{(\mathop{\rm sign}ma_j)_{j\in S_k}\mid\mathop{\rm sign}ma_j\in\{\pm1\}\mand \sum_{j\in S_k}\mathop{\rm sign}ma_j=0\big\}. 
\] We define a real Banach space $\SX$ as the closure of $c_{00}$ with the norm \[ \|x\|\;=\;\max\mathcal Big\{\;\|x\|_\infty, \;\sup_{k\geq1} \alphal_k\,\sup_{\mathop{\rm sign}ma\in \mathfrak{N}_k}\big|\langle \bone_{\mathop{\rm sign}ma S_k},x\rangle\big|,\; \sup_{k\geq1}{\mathbf e}ta_k\sup_{{S\subset T_k}\alphatop{|S|=N_k}}\sum_{j\in S}|x_j|\;\mathcal Big\}, \] where the weights $\alphal_k$ and ${\mathbf e}ta_k$ are chosen as follows: \[ \alphal_k=2^{-N_{k-1}}=\frac1{\log_2 N_k}\mand {\mathbf e}ta_k=\frac1{\sqrt{N_k}}. \] Observe that \[ N'_k=N_1+\ldots+N_{k-1}\leq 2 N_{k-1}=2\log_2\log_2 N_k \mand \frac{\alphal_k}{{\mathbf e}ta_k}=\frac{\sqrt{N_k}}{\log_2 N_k}. \] {\bf Claim 1:} $\quad\mathcal Ds\tmu_{N_k}\geq \mu_{N_k}\geq \frac{N_k/2}{(\log_2 N_k)\sqrt{\log_2\log_2 N_k}}$, for all $k\geq1$. {\mathbf e}gin{proof} Pick any $A\subset S_k\cup S_{k+1}$ such that $|A|=N_k$ and $|A\cap S_k|=|A\cap S_{k+1}|=N_k/2$. Then \[ \|\bone_A\|\geq \alphal_k\,N_k/2=\frac{N_k/2}{\log_2 N_k}. \] Next, pick $B=S_k$, so that $|B|=|A|=N_k$ and \[ \|\bone_B\|=\max\mathcal Big\{\;1,\;\alphal_k\cdot 0,\;\sup_{n\leq k-1}{\mathbf e}ta_n N_n\;\mathcal Big\}={\mathbf e}ta_{k-1}N_{k-1}=\sqrt{N_{k-1}}= \sqrt{\log_2\log_2 N_k}. \] Then $\mu_{N_k}\geq \|\bone_A\|/\|\bone_B\|\geq \frac{N_k/2}{(\log_2 N_k)\sqrt{\log_2\log_2 N_k}}$. \end{proof} \bline {\bf Claim 2:} $\quad\mathcal Ds\mu^d_{N_k}\leq \tmu^d_{N_k}\leq \sqrt{N_k}$, for all $k\geq2$. {\mathbf e}gin{proof} Let $A,B$ be any pair of disjoint sets with $|A|=|B|\leq N_k$, and let $|\e|=|\eta|=1$. If $|A|=|B|\leq \sqrt{N_k}$, then the trivial bounds $\|\bone_{\e A}\|\leq |A|$ and $\|\bone_{\eta B}\|\geq1$ give \[ \frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\leq \sqrt{N_k}. \] So, it remains to consider the cases $\sqrt{N_k}<|A|=|B|\leq N_k$. We split $A$ into three parts\[ A_0=A\cap S_k,\quad A_+=A\cap T_k,\quad A_-=A\cap[S_1\cup\ldots\cup S_{k-1}]. \] Then, we have the following upper bound \mathcal Beas \|\bone_{\e A}\| & \leq& \max\mathcal Big\{1,\; \sup_{n<k}\alphal_{n}|A_-|, \;\alphal_k|A_0|,\;\sup_{n>k}\alphal_n N_k,\; \sup_{n<k}{\mathbf e}ta_nN_n,\;\sup_{n\geq k}{\mathbf e}ta_n|A|\;\mathcal Big\}\\ & \leq & \max\mathcal Big\{\; N'_k, \;\alphal_k|A_0|,\;{\mathbf e}ta_k|A|\;\mathcal Big\}, \end{equation}as due to the elementary inequalities \mathcal Bi \item $\sup_{n<k} \alphal_n|A_-|\leq |A_-|\leq N'_k $ \item $\sup_{n>k}\alphal_n N_k=\alphal_{k+1} N_k=N_k2^{-N_k}\leq1$ \item $\sup_{n<k}{\mathbf e}ta_nN_n=\sqrt{N_{k-1}}\leq N_{k-1}\leq N'_k$ \item $\sup_{n\geq k}{\mathbf e}ta_n|A|={\mathbf e}ta_k|A|$. \end{itemize} Moreover, since ${\mathbf e}ta_k|A|\leq\min\{{\mathbf e}ta_kN_k=\sqrt{N_k},\;\alphal_k|A|\;\}$, we derive \mathcal Be \|\bone_{\e A}\|\leq \max \{\sqrt{N_k}, \alphal_k|A_0|\}\mand \|\bone_{\e A}\|\leq \max \{N'_k,\alphal_k|A|\}. \label{upperA}\end{equation} We now give a lower bound for $\|\bone_{\eta B}\|$. The key estimate will rely on the following {\mathbf e}gin{lemma} Let $B_0=B\cap S_k$ and $B^c_0=S_k\setminus B_0$. Then \mathcal Be \sup_{\mathop{\rm sign}ma\in \mathfrak{N}_k}\big|\langle\bone_{\mathop{\rm sign}ma S_k},\bone_{\eta B_0}\rangle\big|\,\geq\, \min\{|B_0|,|B_0^c|\}. 
\label{Timur0}\end{equation} \end{lemma} {\mathbf e}gin{proof} If $|B_0| \leq N_k/2$, then we may select any $\mathop{\rm sign}ma \in \mathfrak{N}_k$ such that $\mathop{\rm sign}ma|_{B_0} = \eta$ (which is possible since $\vert B_0^c \vert \geq \vert B_0 \vert$), which gives $$ |\langle \bone_{\mathop{\rm sign}ma S_k}, \bone_{\eta B_0} \rangle | = |B_0|=\min\{|B_0|,|B_0^c|\}. $$ Assume now that $\vert B_0 \vert > N_k/2$. Pick any $S \subset B_0$ with $\vert S \vert = \vert B_0^c \vert = N_k - \vert B_0 \vert$. Choose $\nu \in \lbrace -1,1\rbrace^{B_0^c}$ so that $\sum_{i \in S} \eta_i + \sum_{i \in B_0^c} \nu_i = 0$. Choose $\tau \in \lbrace -1,1\rbrace^{B_0 {\mathbf a}ckslash S}$ so that $\sum_{i \in B_0 {\mathbf a}ckslash S} \tau_i = 0$. Replacing $\tau$ by $-\tau$, if necessary, we may assume that $\sum_{i\in B_0\setminus S}\tau_i\eta_i\geq0$. Finally, define $\mathop{\rm sign}ma\in \mathfrak{N}_k$ by setting \[ \mathop{\rm sign}ma|_S= \eta|_S, \quad \mathop{\rm sign}ma|_{B^c_0}= \nu|_{B^c_0}, \quad \mathop{\rm sign}ma|_{B_0 {\mathbf a}ckslash S}=\tau|_{B_0 {\mathbf a}ckslash S}. \] Then, $$ |\langle \bone_{\mathop{\rm sign}ma S_k}, \bone_{\eta B_0} \rangle |\, =\, \sum_{i\in S} \eta_i^2 + \sum_{i \in B_0 {\mathbf a}ckslash S} \tau_i \eta_i \, \geq |S| = |B_0^c| =\min\{|B_0|,|B_0^c|\} \,. \qedhere $$ \end{proof} From the lemma and the definition of the norm we see that \mathcal Be \|\bone_{\eta B}\|\geq \max\mathcal Big\{\,1,\; \alphal_k\min\{|B_0|,|B^c_0|\},\;{\mathbf e}ta_k|B_+|\;\mathcal Big\}. \label{lowB}\end{equation} We shall finally combine the estimates in \eqref{upperA} and \eqref{lowB} to establish Claim 2. We distinguish two cases \sline\emph{Case 1:} $\min\{|B_0|,|B^c_0|\}=|B^c_0|$. Then, since $A_0\subset B^c_0$, we see that \[ \alphal_k|A_0|\leq \alphal_k|B^c_0|\leq \|\bone_{\eta B}\|, \] and therefore the first estimate in \eqref{upperA} gives\[ \frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\leq\frac{\max\{\sqrt{N_k},\|\bone_{\eta B}\|\}}{\|\bone_{\eta B}\|} \leq \sqrt{N_k}. \] \sline\emph{Case 2:} $\min\{|B_0|,|B^c_0|\}=|B_0|$. Then, \eqref{lowB} reduces to \[ \|\bone_{\eta B}\| \geq \max\big\{\,\alphal_k|B_0|,\;{\mathbf e}ta_k|B_+|\;\big\}\geq{\mathbf e}ta_k\frac{|B_0|+|B_+|}2= {\mathbf e}ta_k\frac{|B|-|B_-|}2\geq {\mathbf e}ta_k|B|/4,\] since $|B_-|\leq N'_k\leq\sqrt{N_k}/2\leq |B|/2$, if $k\geq2$. Also, the second bound in \eqref{upperA} reads \[ \|\bone_{\e A}\|\leq \alphal_k |A|, \] since $N'_k\leq \sqrt{N_k}/\log_2 N_k=\alphal_k\sqrt{N_k}\leq \alphal_k|A|$, if $k\geq2$. Thus \[ \frac{\|\bone_{\e A}\|}{\|\bone_{\eta B}\|}\leq\frac{\alphal_k|A|}{{\mathbf e}ta_k|B|/4}=\frac{4\alphal_k}{{\mathbf e}ta_k}= \frac{4\sqrt{N_k}}{\log_2 N_k} \leq \sqrt{N_k}. \] This establishes Claim 2. \end{proof} From Claims 1 and 2 we now deduce that \[ \frac{\mu_{N_k}}{[\tmu^d_{N_k}]^{2-\e}}\geq \frac{N_k^{\e/2}/2}{(\log_2 N_k)\sqrt{\log_2\log_2 N_k}}\to\infty, \] and therefore \[ \limsup_{N\to\infty}\frac{\mu_{N}}{[\mu^d_{N}]^{2-\e}}=\limsup_{N\to\infty}\frac{\tmu_{N}}{[\tmu^d_{N}]^{2-\e}}=\infty. \qedhere \] \end{proof} \section{Norm convergence of ${\mathfrak{CG}}t_m x$ and $\mathcal G_m^t x$} In this section we search for conditions in $\mathcal B=\{{\mathbf e}_n\}_{n=1}^\infty$ under which it holds \mathcal Be \Vert x-{\mathfrak{CG}}_m(x)\Vert\to0,\quad \forall\;x\in\SX.\label{SYconv}\end{equation} In \cite[Theorem 1.1]{SY} this convergence is asserted for all ``bases'' $\{{\mathbf e}_n,{\mathbf e}^*_n\}_{n=1}^\infty$ satisfying (a)-(b)-(c). 
The proof however, does not seem complete, so we investigate here whether \eqref{SYconv} may be true in that generality. \ The solution to this question requires the notion of \emph{strong M-basis}; see \cite[Def 8.4]{SingerII}. We say that $\mathcal B$ is a strong M-basis if additionally to the conditions (a)-(d) in \S1 it also holds \mathcal Be \,{\overline{\span\{{\mathbf e}_n\}_{n\in A}}}\,=\,\big\{x\in\SX\mid \mathop{\rm supp} x\subset A\big\},\quad \forall\;A\subset\SN. \label{strongM}\end{equation} Clearly, all Schauder or Ces\`aro bases (in some ordering) are strong M-bases; see e.g. \cite{ruckle} for further examples. However, there exist M-bases which are not strong M-bases, see e.g. \cite[p. 244]{SingerII}, or \cite{dov}\footnote{We thank V. Kadets for kindly providing this reference.} for seminormalized examples in Hilbert spaces. {\mathbf e}gin{lemma} If $\mathcal B$ is an M-basis which is not a strong M-basis, then there exists an $x_0\in\SX$ such that, for all Chebyshev greedy operators ${\mathfrak{CG}}_m$, \mathcal Be \liminf_{m\to\infty}\|x_0-{\mathfrak{CG}}_mx_0\|>0.\label{liminf} \end{equation} \end{lemma} {\mathbf e}gin{proof} If $\mathcal B$ is not a strong M-basis there exists some set $A\subset\SN$ (necessarily infinite) and some $x_0\in\SX$ with $\mathop{\rm supp} x_0\subset A$ such that \[ \dt=\mathop{\rm dist} (x_0,[{\mathbf e}_n]_A)>0.\] Since $\mathop{\rm supp}{\mathfrak{CG}}_mx_0$ is always a subset of $A$, this implies \eqref{liminf}. \end{proof} {\mathbf e}gin{Remark}{\rm The above reasoning also implies that $\liminf_{m}\|x_0-\mathcal G_mx_0\|>0$, for all greedy operators $\mathcal G_m$. In particular, for M-bases which are not strong, the quasi-greedy condition \mathcal Be\label{Cq} C_q:=\sup_{{\mathcal G_m\in\SG_m}\alphatop{m\in\SN}}\|\mathcal G_m\|<\infty\end{equation} does not imply that $\mathcal G_mx$ converges to $x$ for all $x\in\SX$. So the standard characterization in \cite[Theorem 1]{Wo} needs the extra assumption that $\mathcal B$ is a strong M-basis. }\end{Remark} A corrected version of \cite[Theorem 1.1]{SY} (and also of ``$3\mathbb Rightarrow1$'' in \cite[Theorem 1]{Wo}) is the following. {\mathbf e}gin{proposition} If $\mathcal B$ is a strong M-basis then, for all Chebyshev $t$-greedy operators ${\mathfrak{CG}}^t_m$, \mathcal Be \lim_{m\to\infty}\|x-{\mathfrak{CG}}^t_mx\|=0,\quad\forall\;x\in\SX. \label{limch} \end{equation} If additionally $C_q<\infty$, then for all $t$-greedy operators $\mathcal G^t_m$, \mathcal Be \lim_{m\to\infty}\|x-\mathcal G^t_mx\|=0,\quad\forall\;x\in\SX. \label{limcG} \end{equation} \end{proposition} {\mathbf e}gin{proof} Given $x\in\SX$ and $\e>0$, by \eqref{strongM} there exists $z=\sum_{n\in B}b_n{\mathbf e}_n$ such that $\|x-z\|<\e$, for some finite set $B\subset\mathop{\rm supp} x$. Let $\alphal=\min_{n\in B}|{\mathbf e}^*_n(x)|$ and \[{\mathbf L}_ma_{\alphal}=\{n\mid |{\mathbf e}^*_n(x)|\geq \alphal\}.\] Since $\alphal>0$, this is a finite greedy set for $x$ which contains $B$. Moreover, we claim that \mathcal Be {\mathbf L}_ma_{\alphal}\subset \mathop{\rm supp}{\mathfrak{CG}}^t_m x=:A,\quad \forall\; m> |{\mathbf L}_ma_{t\alphal}|. \label{incl}\end{equation} Indeed, if this was not the case there would exist $n_0\in{\mathbf L}_ma_{\alphal}\setminus A$, and since $A$ is a $t$-greedy set for $x$, then $\min_{n\in A}|{\mathbf e}n(x)|\geq t|{\mathbf e}^*_{n_0}(x)|\geq t\alphal$. So, $A\subset{\mathbf L}_ma_{t\alphal}$, which is a contradiction since $m=|A|>|{\mathbf L}_ma_{t\alphal}|$. 
Therefore, \eqref{incl} holds and hence \[ \|x-{\mathfrak{CG}}^t_mx\|\leq \|x-\sum_{n\in B}b_n{\mathbf e}_n\|<\e,\quad \forall\; m> |{\mathbf L}_ma_{t\alphal}|. \] This establishes \eqref{limch}. \ We now prove \eqref{limcG}. As above, let $z=\sum_{n\in B}b_n{\mathbf e}_n$ with $B\subset\mathop{\rm supp} x$ and $\|x-z\|<\e$. Performing if necessary a small perturbation in the $b_n$'s, we may assume that $b_n\not={\mathbf e}n(x)$ for all $n\in B$. Let now \[ \alphal_1=\min_{n\in B}|{\mathbf e}^*_n(x)|,\quad \alphal_2=\min_{n\in B}|{\mathbf e}^*_n(x-z)|,\mand \alphal=\min\{\alphal_1,\alphal_2\}>0.\] Consider the sets \[{\mathbf L}_ma_{t\alphal}=\{n\mid |{\mathbf e}^*_n(x)|\geq t\alphal\}=\{n\mid |{\mathbf e}^*_n(x-z)|\geq t\alphal\},\] which for all $t\in(0,1]$ are greedy sets for \emph{both} $x$ and $x-z$, and contain $B$. We claim that, \mathcal Be \mbox{if $m>|{\mathbf L}_ma_{t\alphal}|$ and $A:=\mathop{\rm supp}\mathcal G^t_mx$,} \quad {\rm then }\quad {\mathbf L}_ma_{\alphal}\subset A \mand A\in G(x-z,m,t). \label{incl2}\end{equation} The assertion ${\mathbf L}_ma_{\alphal}\subset A$ is proved exactly as in \eqref{incl}. Next, we must show that \[ \mbox{if $n\in A$} \quad {\rm then }\quad |{\mathbf e}^*_n(x-z)|\;\geq\; t\,\max_{k\notin A}|{\mathbf e}^*_k(x-z)|\,=\; t\,\max_{k\notin A}|{\mathbf e}^*_k(x)|. \] This is clear if $n\in A\setminus B$ since ${\mathbf e}n(x-z)={\mathbf e}n(x)$, and $A\in G(x,m,t)$. On the other hand, if $n\in B$, then $|{\mathbf e}n(x-z)|\geq \alphal_2\geq \alphal\geq \max_{k\in A^c}|{\mathbf e}^*_k(x)|$, the last inequality due to ${\mathbf L}_ma_\alphal\subset A$. Thus \eqref{incl2} holds true, and therefore \[ \mathcal G^t_m(x)-z=\sum_{n\in A}{\mathbf e}n(x-z){\mathbf e}_n\,=\,{\mathbf a}r{\mathcal G}^t_m(x-z), \] for some ${\mathbf a}r{\mathcal G}^t_m\in\SG^t_m$. Thus, \[ \|\mathcal G^t_m(x)-x\| \,=\, \|(I-{\mathbf a}r{\mathcal G}^t_m)(x-z)\| \,\leq \, (1+\|{\mathbf a}r{\mathcal G}^t_m\|)\,\e,\] and the result follows from $\sup_{m}\|{\mathbf a}r{\mathcal G}^t_m\|\leq (1+4C_q/t)C_q$, by \cite[Lemma 2.1]{DKO}. \end{proof} {\mathbf e}gin{thebibliography}{1} \bibitem{AA1} \textsc{F. Albiac and J.L. Ansorena}, \emph{Characterization of 1-almost greedy bases}. Rev. Matem. Compl. {\bf 30} (1) (2017), 13--24. \bibitem{B} \textsc{P. M. Bern\'a}, \emph{Equivalence between almost-greedy and semi-greedy bases}. J. Math. Anal. Appl. {\bf 417} (2019), 218--225. \bibitem{BB1} \textsc{P. M. Bern\'a, \'O. Blasco}, \emph{Characterization of greedy bases in Banach spaces}. J. Approx. Theory {\bf 215} (2017), 28--39. \bibitem{BBG} \textsc{P. M. Bern\'a, \'O. Blasco, G. Garrig\'os}, \emph{Lebesgue inequalities for the greedy algorithm in general bases}. Rev. Mat. Complut. \textbf{30} (2017), 369--392. \bibitem{BBGHO} \textsc{P. M. Bern\'a, \'O. Blasco, G. Garrig\'os, E. Hern\'andez, T. Oikhberg}, \emph{Embeddings and Lebesgue-Type Inequalities for the Greedy Algorithm in Banach Spaces}. Constr. Approx. {\bf 48} (3) (2018), 415--451. \bibitem{Cie68} \textsc{Z. Ciesielski}, A bounded orthonormal system of polygonals. Studia Math. {\bf 31} (1968) 339--346. \bibitem{DKK} \textsc{S.J. Dilworth, N.J. Kalton, D. Kutzarova}, \emph{On the existence of almost greedy bases in Banach spaces}, Studia Math. 159 (1) (2003), 67--101. \bibitem{DKKT} \textsc{S.J. Dilworth, N.J. Kalton, D. Kutzarova, and V.N. Temlyakov}, \emph{The Thresholding Greedy Algorithm, Greedy Bases, and Duality}, Constr. Approx. 19, (2003), 575--597. \bibitem{DKO} \textsc{S.J. Dilworth, D. Kutzarova, T. 
Oikhberg}, \emph{Lebesgue constants for the weak greedy algorithm}, Rev. Matem. Compl. 28 (2) (2015), 393--409. \bibitem{dov} \textsc{L. N. Dovbysh, N. K. Nikolskii, V. N. Sudakov}, \emph{How good can a nonhereditary family be?} J. Sov. Math. {\bf 34} (6) (1986), 2050--2060. \bibitem{GHO} \textsc{G. Garrig\'os, E. Hern\'andez and T. Oikhberg}, \emph{Lebesgue-type inequalities for quasi-greedy bases}, Constr. Approx. 38 (3) (2013), 447--470. \bibitem{Hajek} \textsc{P. Hajek, V. Montesinos Santaluc\'\i a, J. Vanderwerff, V. Zizler}, \emph{ Biorthogonal systems in Banach spaces}. Springer-Verlag 2008. \bibitem{katz} \textsc{Y. Katznelson}, \emph{An introduction to Harmonic Analysis}, 2nd ed. Dover Publ Inc, New York, 1976. \bibitem{KT} \textsc{S.V. Konyagin and V.N. Temlyakov}, \emph{A remark on greedy approximation in Banach spaces}, East. J. Approx. 5 (1999), 365--379. \bibitem{KT2} \textsc{S.V. Konyagin and V.N. Temlyakov}, \emph{Greedy approximatin with regard to bases and general minimal systems}, Serdica Math. J. \textbf{28} (2002), 305--328. \bibitem{ropela79} \textsc{S. Ropela}, Properties of bounded orthogonal spline bases. In \emph{Approximation theory (Papers, VIth Semester, Stefan Banach Internat. Math. Center, Warsaw, 1975)}, pp. 197--205, Banach Center Publ., 4, Warsaw, 1979. \bibitem{ruckle} \textsc{W.H. Ruckle}, On the classification of biorthogonal sequences. Canadian J. Math {\bf 26} (1974), 721--733. \bibitem{SY} \textsc{C. Shao, P. Ye}, \emph{Lebesgue constants for Chebyshev thresholding greedy algorithms}, Journal of Inequalities and Applications (2018), Paper No. 102, 23 pp. \bibitem{SingerI} \textsc{I. Singer}. Bases in Banach spaces I. Springer-Verlag, 1970. \bibitem{SingerII} \textsc{I. Singer}. Bases in Banach spaces II. Springer-Verlag, 1981. \bibitem{Tem98trig} \textsc{V. N. Temlyakov}, \emph{Greedy algorithm and n-term trigonometric approximation}, Const.Approx. 14 (1998), 569--587. \bibitem{Tem1} \textsc{V.N. Temlyakov}, {\it Greedy approximation}. Cambridge University Press, 2011. \bibitem{TemW} \textsc{V. N. Temlyakov}, \emph{The best m-term approximation and greedy algorithms}, Adv. Comput. {\bf 8} (1998), 249--265. \bibitem{Tem15} \textsc{V. N. Temlyakov}, Sparse approximation with bases. Ed. by S. Tikhonov. Advanced Courses in Mathematics. CRM Barcelona. Birkhäuser-Springer, 2015. \bibitem{TYY2} \textsc{V. N. Temlyakov, M. Yang, P. Ye}, \emph{Lebesgue-type inequalities for greedy approximation with respect to quasi-greedy bases}, East J. Approx {\bf 17} (2011), 127--138. \bibitem{weisz01} \textsc{F. Weisz}, On the Fej\'er means of bounded Ciesielski systems. Studia Math. {\bf 146} (3) (2001), 227--243. \bibitem{Wo} \textsc{P. Wojtaszczyk}, \emph{Greedy Algorithm for General Biorthogonal Systems}, Journal of Approximation Theory, 107, (2000), 293--314. \end{thebibliography} \vskip 1truemm \end{document}
\begin{document} \title{Comment on ``Optimal convex approximations of quantum states" } \author{Xiao-Bin Liang} \affiliation{School of Mathematics and Computer science, Shangrao Normal University, Shangrao 334001, China} \author{Bo Li} \email{[email protected]} \affiliation{School of Mathematics and Computer science, Shangrao Normal University, Shangrao 334001, China} \author{Shao-Ming Fei} \affiliation{Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany}\affiliation{School of Mathematical Sciences, Capital Normal University, Beijing 100048, China} \begin{abstract} In a recent paper, M. F. Sacchi [Phys. Rev. A 96, 042325 (2017)] addressed the general problem of approximating an unavailable quantum state by the convex mixing of different available states. For the case of qubit mixed states, we show that the analytical solutions in some cases are invalid. In this Comment, we present complete analytical solutions for the optimal convex approximation. Our solutions can be viewed as correcting and supplementing the results in the aforementioned paper. \end{abstract} \pacs{03.67.-a, 03.65.Ud, 03.65.Yz} \maketitle In Sec. III of Ref. \cite{Sacchi15}, the problem of optimally generating a desired quantum state $\rho$ by the given set of the eigenstates of all Pauli matrices was provided. Namely, consider the optimal convex approximation of a quantum state with respect to the set \begin{align}\langlebel{b3} & B_3=\{|0\ranglengle,|1\ranglengle,|2\ranglengle=\frac{\sqrt{2}}{2}(|0 \ranglengle +|1 \ranglengle),|3\ranglengle=\frac{\sqrt{2}}{2}(|0 \ranglengle - |1 \ranglengle),\nonumber\\ & |4\ranglengle=\frac{\sqrt{2}}{2}(|0 \ranglengle + \sqrt{-1} |1 \ranglengle),|5\ranglengle=\frac{\sqrt{2}}{2}(|0 \ranglengle - \sqrt{-1} |1 \ranglengle)\}.\nonumber \end{align} The optimal convex approximation of $\rho$ with respect to $B_3$ is defined as $D_{B_3}(\rho)=min\{\parallel\rho-\sum_i p_i\rho_i\parallel_1\}$, where $\rho_i=|i\ranglengle\langlengle i|$, $0\leq p_i\leq 1$, $\sum_i p_i=1$, the minimum is taken over all possible probability weights $\{p_i\}$, and $\parallel A\parallel_1$ denotes the trace norm of $A$, that is, $\parallel A\parallel_1=Tr\sqrt{A^\dag A}=\sum_is_i(A)$ with $\{s_i(A)\}$ representing the singular values of $A$. The optimal convex approximate set is given by $S(\rho^{opt})=\{\rho^{opt}|D_{B_3}(\rho)=\parallel\rho-\rho^{opt}\parallel_1\}$. Here we point out that the analytical solution given in [Phy. Rev. A 96, 042325(2017)] is invalid in some cases. We first provide a simple example. Consider the target qubit $\rho$ given by \begin{equation}\langlebel{taget} \rho =\left( \begin{array}{cc} 1-a & k \sqrt{a(1-a)}e^{-i\phi }\\ k \sqrt{a(1-a)}e^{i\phi } & a \\ \end{array} \right ) \; \end{equation} with $a \in [0,1]$, $\phi \in [0,2\pi]$, and $k\in [0, 1]$. If we set $a=1/2$, $k=1$, $\phi=\pi/4$, it is easily verified that the point belongs to the region of case (i) in Ref. \cite{Sacchi15}, that is, $k_{th}\equiv a/({\sqrt{a(1-a)}(\cos\phi+\sin\phi)})< k\leq a/({\sqrt{a(1-a)} })$. Then the optimal convex approximation and the corresponding optimal weights are given by Eq. (18) and (19) in \cite{Sacchi15}, respectively. However, if one substitutes $a=1/2$, $k=1$ and $\phi=\pi/4$ into Eq. (19) in \cite{Sacchi15}, one has $p_0=1-4a/3-2k\sqrt{a(1-a)}(\cos\phi+\sin\phi)/3=(1-\sqrt{2})/3<0$, which implies that the optimal probability is negative and this solution is invalid. 
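As a quick sanity check of the counterexample above, the following short Python sketch (an illustrative script of our own, not taken from Ref.~\cite{Sacchi15}) re-evaluates the weight $p_0$ quoted from Eq.~(19) of \cite{Sacchi15} and, independently, minimizes $\parallel\rho-\sum_i p_i\rho_i\parallel_1$ over the probability simplex with a standard constrained optimizer; the six states are ordered as in the set $B_3$ above.
\begin{verbatim}
# Illustrative numerical check (not from Ref. [1]): a = 1/2, k = 1, phi = pi/4.
import numpy as np
from scipy.optimize import minimize

a, k, phi = 0.5, 1.0, np.pi / 4
off = k * np.sqrt(a * (1 - a)) * np.exp(-1j * phi)
rho = np.array([[1 - a, off], [np.conj(off), a]])      # target state rho

# the six eigenstates of the Pauli matrices, ordered as in B_3
kets = [np.array(v, complex) for v in
        ([1, 0], [0, 1], [1, 1], [1, -1], [1, 1j], [1, -1j])]
kets = [v / np.linalg.norm(v) for v in kets]
projs = [np.outer(v, v.conj()) for v in kets]

def trace_dist(p):
    delta = rho - sum(w * P for w, P in zip(p, projs))
    return np.linalg.svd(delta, compute_uv=False).sum()   # trace norm

# p_0 of Eq. (19) in Ref. [1], as quoted in the text: equals (1-sqrt(2))/3 < 0
p0 = 1 - 4*a/3 - 2*k*np.sqrt(a*(1 - a))*(np.cos(phi) + np.sin(phi))/3
print("case-(i) weight p_0 =", p0)

# direct minimization over the simplex {p_i >= 0, sum_i p_i = 1}
res = minimize(trace_dist, np.full(6, 1/6), method="SLSQP",
               bounds=[(0, 1)]*6,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print("numerical D_B3 =", round(res.fun, 4), "weights =", np.round(res.x, 4))
\end{verbatim}
The printed optimum can then be compared with the closed-form cases given below.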
In the following, in terms of the method used in \cite{liang16} (see also the Karush-Kuhn-Tucker theorem and its conclusion in \cite{Forst}, p46-60), we provide the complete analytical solution for the optimal convex approximation of a quantum state under $B_3$ distance and the corresponding optimal weights. For simplicity, we denote $u=k\sqrt{a(1-a)}\cos \phi$, $v=k\sqrt{a(1-a)}\sin \phi$, where $k\in[0,1]$, $a\in[0,\frac{1}{2}]$ and $\phi\in[0,\pi/2]$. When $a-u-v\geq 0$, one has $D_{B_3}(\rho)=0$. The pertaining weights corresponding to $\rho_i$ are given by \begin{eqnarray}\langlebel{protaget} &&p_0=1-a-u-v -t_1-t_2,\nonumber \\& & p_1=a-u-v -t_1-t_2, \nonumber \\& & p_2=2u+t_1,\nonumber \\& & p_3=t_1,\nonumber \\& & p_4=2v+t_2,\nonumber \\& & p_5=t_2, \end{eqnarray} where $t_1$ and $t_2$ are arbitrary non-negative arguments such that $p_1\geq0$. If $t_1=t_2=0$, then Eq. (\ref{protaget}) reduce to Eq. (14) in Ref. \cite{Sacchi15}. However, if one sets $t_1=a-u-v,t_2=0$ in (\ref{protaget}), one gets $p_0=1-2a$, $p_1=0$, $p_2=a+u-v$, $p_3=a-u-v$, $p_4=2v$ and $p_5=0$. This is another kind of decomposition which is different from the one in Ref. [1]. Thus, our decompositions can be viewed as a complete supplement to the results in Ref. \cite{Sacchi15}. The previous complete analytical solution can be classified into the following four cases, see proof in Supplemental Material: \par \noindent $i)$ If $a<u+v\leq(3-4a)/2$, $a-v+2u\geq0$ and $a-u+2v\geq0$, the optimal convex approximation of $\rho$ is given by \begin{eqnarray} D_{B_3}(\rho)= \frac{\sqrt{3}}{3}(\langlengle\sigma_x\ranglengle+\langlengle\sigma_y\ranglengle+\langlengle\sigma_z\ranglengle-1) \;,\nonumber \end{eqnarray} with corresponding optimal weights \begin{eqnarray} &&p_0=1-4a/3-2u/3 -2v/3, \nonumber \\& & p_2=2a/3-2v/3+4u/3, \nonumber \\& & p_4= 2a/3-2u/3+4v/3, \nonumber \\& & p_1=p_3=p_5=0. \nonumber \end{eqnarray} \par \noindent $ii)$ If $a<u+v\leq(3-4a)/2$, $a-v+2u\geq0$ and $a-u+2v<0$, the optimal convex approximation of $\rho $ is given by \begin{eqnarray} D_{B_3}(\rho)= \sqrt{\langlengle\sigma_y\ranglengle^2+\frac{1}{2}(\langlengle\sigma_x\ranglengle+\langlengle\sigma_z\ranglengle-1)^2} \;,\nonumber \end{eqnarray} with the corresponding optimal weights \begin{eqnarray} &&p_0=1-a-u, \nonumber \\& & p_2=a+u,\nonumber \\& & p_1=p_3=p_4=p_5=0. \nonumber \end{eqnarray} \par \noindent $iii)$ If $a<u+v\leq(3-4a)/2$, $a-v+2u<0$ and $a-u+2v\geq0$, the optimal convex approximation of $\rho $ is given by \begin{eqnarray} D_{B_3}(\rho)= \sqrt{\langlengle\sigma_x\ranglengle^2+\frac{1}{2}(\langlengle\sigma_y\ranglengle+\langlengle\sigma_z\ranglengle-1)^2} \;.\nonumber \end{eqnarray} The related optimal weights are given by \begin{eqnarray} & &p_0=1-a-v, \nonumber\\ & &p_4=a+v,\nonumber \\ & &p_1=p_2=p_3=p_5=0.\nonumber \end{eqnarray} \par \noindent $iv)$ If $u+v>(3-4a)/2$, we have \begin{eqnarray} D_{B_3}(\rho)= \sqrt{\langlengle\sigma_z\ranglengle^2+\frac{1}{2}(\langlengle\sigma_y\ranglengle+\langlengle\sigma_x\ranglengle-1)^2}\nonumber \end{eqnarray} with the pertaining optimal weights \begin{eqnarray}\langlebel{e3} &&p_2=1/2+u-v,\nonumber \\& & p_4=1/2-u+v,\nonumber \\& & p_0=p_1=p_3=p_5=0. \end{eqnarray} Up to now, we have refined the conclusions in Sec. III of Ref. \cite{Sacchi15}. Particularly, we have added the case $iv)$ as a valid supplement. Moreover, we point out that the Fig. 2 in Ref. \cite{Sacchi15} is inaccurate in some areas. 
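For ease of comparing the case conditions (stated in terms of $u$, $v$ and $a$) with the optimal values above (stated in terms of Pauli expectation values), we record the elementary identities, which follow immediately from the parametrization of the target state $\rho$:
\begin{eqnarray}
\langle\sigma_x\rangle=2u,\qquad \langle\sigma_y\rangle=2v,\qquad \langle\sigma_z\rangle=1-2a.\nonumber
\end{eqnarray}
For instance, in case $i)$ these give $D_{B_3}(\rho)=\frac{2\sqrt{3}}{3}(u+v-a)$, which is strictly positive precisely when $a<u+v$.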
In the following, we plot the accurate $D_{B_3}(\rho)$ for the fixed value of the phase parameter $\phi=\frac{\pi}{3}$; see Fig. \ref{fig_1}.
\begin{figure}
\caption{$D_{B_3}(\rho)$ for the fixed phase parameter $\phi=\pi/3$.}\label{fig_1}
\end{figure}
As another related example, consider $k=1$ and $a=1/2$ (with $\phi=\pi/3$ as above). According to Eq. (18) in Ref. \cite{Sacchi15}, we get that $D_{B_3}(\rho)$ is about 0.2113. From Eq. (19) of \cite{Sacchi15}, $p_0=(1-\sqrt{3})/6<0$. In fact, according to (\ref{e3}), the accurate $D_{B_3}(\rho) \approx0.2588$, and the corresponding probabilities are $p_0=p_1=p_3=p_5=0$, $p_2=(3-\sqrt{3})/4$ and $p_4=(1+\sqrt{3})/4$. We plot the difference between Fig. \ref{fig_1} of our paper and Fig. 2 in \cite{Sacchi15}; see Fig. \ref{fig_2}.
\begin{figure}
\caption{The difference between $D_{B_3}(\rho)$ in Fig. \ref{fig_1} and Fig. 2 of Ref. \cite{Sacchi15}.}\label{fig_2}
\end{figure}
In summary, we have derived the complete solution for the optimal convex approximation of a qubit mixed state under the $B_3$ distance. We have revised the problem related to the result for $a<u+v$ in Ref. \cite{Sacchi15}. In addition, if $a\geq u+v$, our decompositions are the complete supplement to the representative decompositions in Ref. \cite{Sacchi15}. We would like to say that the idea of looking for the least distinguishable states is nice, and the condition Eq. (13) in Ref. \cite{Sacchi15} for exact convex decomposition is also correct. We would also point out that the discussion in the last section of Ref. \cite{Sacchi15}, on the case of many copies of quantum states, the non-additivity of the distance, and the role of correlations, maintains general validity.
\section{Appendix}\label{Appendix}
We now provide a detailed proof of the state classification in the main text. Finding $D_{B_3}(\rho)=\min\parallel\rho-\sum_i p_i\rho_i\parallel_1$ is equivalent to searching for the solution of the following minimization problem,
\begin{eqnarray}
\min \Big\{2\sqrt{\big| \det\big(\rho-\sum _{i} p_i|i\rangle\langle i| \big)\big|}\Big\},
\end{eqnarray}
such that $p_i\geq 0$ and $\sum _{j}p_j=1$. Denote
\begin{align}\label{question}
f(p_0,p_1,p_2,p_3,p_4,p_5)&=&\mid \det(\rho-\sum^5 _{i=0} p_i|i\rangle\langle i| ) \mid \nonumber\\ &&-\sum^5 _{i=0}\lambda_i p_i-\lambda\sum^5 _{i=0}p_i.
\end{align}
According to the Karush-Kuhn-Tucker theorem \cite{Forst} and the related conclusions (see pages 46--60 in \cite{Forst}), the above problem is equivalent to
\begin{align}\label{question1}
\nabla f=0,~\lambda_ip_i=0,~\lambda_i\geq0,~p_i\geq0,~\sum^5 _{j=0}p_j=1
\end{align}
for $i=0,1,2,3,4,5$. One then obtains the following equations and inequalities
\begin{eqnarray}\label{question2}
&&p_1+p_2/2+p_3/2+p_4/2+p_5/2+\lambda_0+\lambda-a=0, \nonumber \\& & p_0+p_2/2+p_3/2+p_4/2+p_5/2+\lambda_1+\lambda-1+a=0,\nonumber \\& & p_0/2+p_1/2+p_3+p_4/2+p_5/2+\lambda_2+\lambda-1/2+u=0,\nonumber \\& & p_0/2+p_1/2+p_2+p_4/2+p_5/2+\lambda_3+\lambda-1/2-u=0,\nonumber \\& & p_0/2+p_1/2+p_2/2+p_3/2+p_5+\lambda_4+\lambda-1/2+v=0,\nonumber \\& & p_0/2+p_1/2+p_2/2+p_3/2+p_4+\lambda_5+\lambda-1/2-v=0,\nonumber \\& & \lambda_ip_i=0,~\lambda_i\geq0,~p_i\geq0,~i=0,1,2,3,4,5,\nonumber \\& & \sum _ip_i=1,
\end{eqnarray}
where $u=k\sqrt{a(1-a)}\cos \phi$ and $v=k\sqrt{a(1-a)}\sin \phi$. From (\ref{question2}) we have:
(1) If $p_0\neq0$ and $p_1\neq0$, from $\lambda_0=\lambda_1=0$ we have $\lambda=0$ and $\lambda_i=0$ ($i=2,3,4,5$). Similarly, if $p_2\neq0,~ p_3\neq0$ or $p_4\neq0,~ p_5\neq0$, or at least four of the $\{p_i\}$ are nonzero, (\ref{question2}) is equivalent to $\nabla f=0,$ $\lambda=0,$ $\lambda_i=0$ ($i=0,1,2,3,4,5$), $\sum _ip_i=1$. Thus we have
\begin{eqnarray}
&&p_0=1-a-u-v -t_1-t_2,\nonumber \\& & p_1=a-u-v -t_1-t_2, \nonumber \\& & p_2=2u+t_1,\nonumber \\& & p_3=t_1,\nonumber \\& & p_4=2v+t_2,\nonumber \\& & p_5=t_2,\nonumber
\end{eqnarray}
where $t_1$ and $t_2$ are arbitrary non-negative numbers such that $p_1\geq0$. In this case, $D_{B_3}(\rho)=0$, and the condition $p_i\geq0$, $i=0,1,2,3,4,5$, is transformed to $a-u-v\geq 0$.
(2) Only three of the $\{p_i\}$ are nonzero. According to (1), $D_{B_3}(\rho)=0$ for $a-u-v\geq 0$. For $a-u-v< 0$, the assumption that only three of the $\{p_i\}$ are nonzero leads to the following cases, $(i)$: $p_0\neq0$, $p_2\neq0$, $p_4\neq0$; $(ii)$: $p_0\neq0$, $p_2\neq0$, $p_5\neq0$; $(iii)$: $p_0\neq0$, $p_3\neq0$, $p_5\neq0$; $(iv)$: $p_1\neq0$, $p_2\neq0$, $p_4\neq0$; $(v)$: $p_1\neq0$, $p_2\neq0$, $p_5\neq0$; $(vi)$: $p_1\neq0$, $p_3\neq0$, $p_5\neq0$. However, the solution of (\ref{question2}) exists only for the case $(i)$, that is,
\begin{eqnarray}
&&p_0=1-4a/3-2u/3 -2v/3, \nonumber \\& & p_2=2a/3-2v/3+4u/3, \nonumber \\& & p_4= 2a/3-2u/3+4v/3, \nonumber \\& & p_1=p_3=p_5=0. \nonumber
\end{eqnarray}
We obtain that $a<u+v\leq(3-4a)/2$, $a-v+2u\geq0$ and $a-u+2v\geq0$. Now we illustrate that for the case $(iv)$ no solution exists. According to the assumption, $\lambda_1=0$, $\lambda_2=0$ and $\lambda_4=0$. Then $p_1=4a/3-1/3-2u/3-2v/3$, $p_2= 2/3-2a/3+4u/3-2v/3$ and $p_4=2/3-2a/3-2u/3+4v/3$. Notice that $p_1=4a/3-1/3-2u/3-2v/3\geq0$. One obtains $a<u+v\leq2a-1/2$, namely, $a>1/2$, which results in a contradiction. One can similarly show that for the cases $(ii)$, $(iii)$, $(v)$ and $(vi)$ no solutions exist either.
(3) Now consider the region $a<u+v\leq(3-4a)/2$. Case $i$: $a<u+v\leq(3-4a)/2$, $a-v+2u\geq0$ and $a-u+2v<0$. For $p_0\neq0$ and $p_2\neq0$, Eq. (\ref{question2}) has the following solution,
\begin{eqnarray}
&&p_0=1-a-u, \nonumber \\& & p_2=a+u,\nonumber \\& & p_1=p_3=p_4=p_5=0. \nonumber
\end{eqnarray}
Case $ii$: $a<u+v\leq(3-4a)/2$, $a-v+2u<0$ and $a-u+2v\geq0$, that is, $p_0\neq0$ and $p_4\neq0$; the solution of (\ref{question2}) is given by
\begin{eqnarray}
& &p_0=1-a-v, \nonumber \\& & p_4=a+v,\nonumber \\& & p_1=p_2=p_3=p_5=0.\nonumber
\end{eqnarray}
(4) The last case, $u+v>(3-4a)/2$ and only two of the $\{p_i\}$ are nonzero. In this case, only for $p_2\neq0$ and $p_4\neq0$ does one have the following solution,
\begin{eqnarray}
&&p_2=1/2+u-v,\nonumber \\& & p_4=1/2-u+v,\nonumber \\& & p_0=p_1=p_3=p_5=0.\nonumber
\end{eqnarray}
For the other six cases with only two nonzero $\{p_i\}$, no solutions exist. For example, let us consider the case $p_1\neq0$ and $p_2\neq0$. By $\lambda_1=0$ we have $\lambda\leq 0$. Thus, $p_1=a-u$, $p_2=1-a+u$, $\lambda=1/2-a/2-u/2$, which forces $1-a\leq u\leq a$ and hence $p_1=a-u=0$, a contradiction.
\noindent {\bf Acknowledgments} This work is supported by NSFC (11765016, 11675113), the Jiangxi Education Department Fund (KJLD14088) and the Key Project of the Beijing Municipal Commission of Education under No. KZ201810028042.
\end{document}
\begin{document} \title[Solutions of Darboux Equations]{Solutions of Darboux Equations, its Degeneration and Painlev\'e VI Equations} \author{Yik-Man Chiang} \email{[email protected]} \address{Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, SAR} \author{Avery Ching} \email{[email protected]} \address{Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, SAR} \author{Chiu-Yin Tsang} \email{[email protected]} \address{Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, SAR} \subjclass{Primary 54C40, 14E20; Secondary 46E25, 20C20} \classification{33E10 (primary), 34M35 (secondary).} \keywords{Darboux Equation, Lam\'e Equation, Heun Equation, Hypergeometric Equation.} \thanks{The first and third authors are partially supported by Hong Kong Research Grant Council \#16300814.} \begin{abstract} In this paper, we study the Darboux equations in both classical and system form, which give the elliptic Painlev\'e VI equations by the isomonodromy deformation method. Then we establish the full correspondence between the special Darboux equations and the special Painlev\'e VI equations. Instead of the system form, we especially focus on the Darboux equation in a scalar form, which is the generalization of the classical Lam\'{e} equation. We introduce a new infinite series expansion (in terms of the compositions of hypergeometric functions and Jacobi elliptic functions) for the solutions of the Darboux equations and regard special solutions of the Darboux equations as those terminating series. The Darboux equations characterized in this manner have an almost (but not completely) full correspondence to the special types of the Painlev\'e VI equations. Finally, we discuss the convergence of these infinite series expansions. \end{abstract} \maketitle \vspace*{6pt}\tableofcontents \section{Introduction} The Lam\'e equation \[ \dfrac{d^2 y}{du^2}+[h-k^2\alpha(\alpha+1)\textstyle{\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2(u,k)}]y=0 \] was introduced by Lam\'e in 1837 to solve the Laplace equation in the ellipsoidal coordinates (he searched for isothermal surfaces in \cite{Lame}, and arrived at equation 17). Because of its importance in mathematical physics, the Lam\'e equations were solved by series expansions by a number of mathematicians like Ince \cite{Ince2, Ince1}, Erd\'elyi \cite{Erdelyi} and Sleeman \cite{Sleeman}. 
Then in 1882, Darboux \cite{Darboux} introduced its generalization
\begin{eqnarray}\label{E:darboux}
&&\frac{{d}^{2}y}{{du}^{2}}+\Big(h-\frac{\xi(\xi+1)}{\mathrm{sn}^2(u,k)}-\frac{\eta(\eta+1)\,\mathrm{dn}^2(u,k)}{\mathrm{cn}^2(u,k)}-\nonumber\\
&&\hspace{2.5cm}\frac{\mu(\mu+1)k^2\,\mathrm{cn}^2(u,k)}{\mathrm{dn}^2(u,k)}-\nu(\nu+1)k^2\,\mathrm{sn}^2(u,k)\Big)y=0,
\end{eqnarray}
or in the following Weierstrass form
\begin{eqnarray}\label{E:darbouxweierstrass}
&&\frac{{d}^{2}y}{{du}^{2}}+\Big(h-\xi(\xi+1)\wp(u,\tau)-\eta(\eta+1)\wp(u+\frac{1}{2},\tau)-\nonumber\\
&&\hspace{2.5cm}\mu(\mu+1)\wp(u+\frac{\tau}{2},\tau)-\nu(\nu+1)\wp(u+\frac{1+\tau}{2},\tau)\Big)y=0,
\end{eqnarray}
where $\wp(\cdot,\tau)$ is the Weierstrass elliptic function with periods $1$ and $\tau$. Some other forms of the Darboux equations are possible. For instance, they can be de-normalized to become the Sparre equations (see Matveev and Smirnov \cite{MS} for details). In either form, the Darboux equation is simply the pull-back of the Heun equation via a double cover. Not much was known about these linear equations, but the research was soon redirected, as explained in the following paragraphs.
It is well known that two second order linear equations have their solutions related by a first order operator if they differ by a gauge transformation. Thus, a sensible strategy is to tackle a class of gauge equivalent equations rather than one particular equation. Such a gauge equivalence class is known as an isomonodromic family. In modern language, one writes a family of Heun equations (parametrized by the cross ratio of the four singular points) as a system of first order linear PDEs
\begin{equation}\label{E:heunsystem}
dY=\Big[A_0\dfrac{dx}{x}+A_1\dfrac{dx}{x-1}+A_2\dfrac{d(x-t)}{x-t}\Big]Y.
\end{equation}
Then the flatness condition (or integrability condition) of such a system of over-determined (two unknown functions satisfying four equations) PDEs yields the Painlev\'e VI equation in rational form. The reader may refer to the work of Jimbo and Miwa \cite{JM} for a complete treatment. The search for special solutions of the Painlev\'e VI transcendents is a prominent research subject, while another, more natural path is almost forgotten. Painlev\'e \cite{Painleve} introduced the elliptic Painlev\'e VI equation (later studied by Manin \cite{Manin} in an algebro-geometric setup)
\begin{eqnarray}\label{E:ellipticpainleve}
\frac{{d}^{2}u(\tau)}{{d\tau}^{2}}&=&-\frac{1}{8\pi^2}\Big[a_0^2{\wp'(u(\tau);\tau)} +{a_1^2\wp'(u(\tau)+\frac{1}{2};\tau)}\nonumber\\
&&\ + {a_2^2\wp'(u(\tau)+\frac{\tau}{2};\tau)}+(a_3-1)^2{\wp'(u(\tau)+\frac{1+\tau}{2};\tau)}\Big]
\end{eqnarray}
by pulling back the rational Painlev\'e VI equation via a double cover. In elliptic form, the Painlev\'e VI equation shares the same properties as the rational Painlev\'e VI equation, but it appears to be much more symmetric. On the other hand, the original Darboux equation drew little attention in the century following its discovery. There has been only occasional use of it, such as the study of the Schwarz maps of certain Heun equations by Nehari \cite{Nehari}.
The true importance of \eqref{E:darboux} was established by Treibich and Verdier, who showed that a Schr\"odinger-type operator has the finite-gap property if and only if it is of the form \eqref{E:darbouxweierstrass} with integral parameters, or its degeneration (see the work of Treibich and Verdier \cite{TV, Verdier} for details, or a concise account by Veselov \cite{Veselov}). These Darboux equations with integral parameters admit the so-called finite-gap solutions \cite{Take3, smirnov1}, which are integral representations of a certain kind. However, solutions of Darboux equations with general parameters are not as well understood as their integral-parameter counterparts. In this paper, we plan to
\begin{enumerate}
\item single out the Darboux equations which admit certain special solutions. Our method, whose origins go back at least to \cite{Erdelyi1}, is to expand the solutions of the Darboux equations appropriately, such as
\[
\mathrm{sn}(u,k)^{\xi+1}\,\mathrm{cn}(u,k)^{\eta+1}\,\mathrm{dn}(u,k)^{\mu+1} \sum_{m=0}^{\infty}X_m\,\mathrm{sn}(u,k)^{2m}\,\sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} a+m,b+m\\ c+2m \end{matrix}};\ \mathrm{sn}(u,k)^2\right).
\]
We then seek the conditions on the parameters of the Darboux equations under which the infinite sum above terminates. We show that these ``termination conditions" match exactly the well-known conditions, given by the affine Weyl group $\tilde{D}_4$ and discovered by Okamoto \cite{Okamoto}, under which the Painlev\'e VI equation admits special solutions in explicit form (see below for more explanation). Moreover, we point out that these terminating sums correspond to Schlesinger transformations \cite{JM}, where a general theory was developed of linear transformations between two differential equations whose local monodromies differ by an integer. These Darboux equations are summarized in Theorem \ref{terminate1}.
\item study the convergence of the series expansions of solutions of the Darboux equations by applying the theory of three-term recursions due to Poincar\'e and Perron in Section \ref{S:Convergence}.
\item single out the elliptic Painlev\'e VI equations which admit special solutions in Theorem \ref{riccati} (see Clarkson \cite{Clarkson} or the well-known book \cite{GLS} for details).
\end{enumerate}
We observe that under the same conditions on the parameters, both the Darboux equations and the elliptic Painlev\'e VI equations admit special solutions. Our strategy towards an understanding of this phenomenon is to assign a linear-algebraic meaning to these parameters via the following steps.
\begin{enumerate}
\item The classical Darboux equations are rewritten in a system form in Definition \ref{darbouxsystem}, so that the parameters of the classical Darboux equations become the eigenvalues of the residue matrices of this system form of the Darboux equations at the various singularities.
\item The Darboux equations in a system form are set to undergo an isomonodromic deformation. The elliptic Painlev\'e VI equations are derived from these deformations in Section \ref{EPainleve}.
The parameters of the elliptic Painlev\'e VI equation remain to be the eigenvalues of the residue matrices of the system for of the Darboux equation above at the various singularities. \end{enumerate} Therefore we arrive at the main observation (Corollary \mathfrak{R}ef{observation}) that under the same conditions, both the Darboux equations and the Painlev\'e VI equations have special solutions. In a certain sense, such correspondence is anticipated by Jimbo and Miwa \cite{MW}. \section{The Darboux Connection and the Elliptic Painlev\'e VI Equation} \leftarrowbel{S:connection} As we have revised in the introduction, the rational Painlev\'e VI equation comes from the flatness condition of the system of PDEs \eqref{E:heunsystem}. Therefore, one expects the elliptic Painlev\'e VI equation \eqref{E:ellipticpainleve} to come from the flatness condition of a family of Darboux equations in system form \[ dY=\left[A_0\dfrac{d\sigma(\wp^{-1}(x);\tau)}{\sigma(\wp^{-1}(x);\tau)}+ A_1\dfrac{d\sigma(\wp^{-1}(x)+\frac{1}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{1}{2};\tau)}+ A_2\dfrac{d\sigma(\wp^{-1}(x)+\frac{\tau}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{\tau}{2};\tau)}+ A_3\dfrac{d\sigma(\wp^{-1}(x)+\frac{1+\tau}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{1+\tau}{2};\tau)}\mathfrak{R}ight]Y. \] We shall give a brief account of such a construction in this section. \subsection{The Darboux Equation in a System Form} \leftarrowbel{DConnection} Let $\tau\in\mathbb{H}$ be a point in the upper half plane, and \[ k^2=\dfrac{\wp(\frac{1+\tau}{2},\tau)-\wp(\frac{1}{2},\tau)}{\wp(\frac{1+\tau}{2},\tau)-\wp(\frac{\tau}{2},\tau)} \] is the $\leftarrowmbda$-invariant of the complex torus $\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$. It is well-known that the map \[ \begin{array}{rcl} E=\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})&\to&\{(x:y:z)\in\mathbb{P}^2:y^2z=4x(z-x)(z-k^2x)\}\\ u&\mapsto&(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2u:(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2)'(u):1) \end{array} \] is biholomorphic as well as a group isomorphism. \begin{remark} In the classical description of the Jacobi elliptic functions, the $\leftarrowmbda$-invariant of a complex torus is usually specified, whereas in the classical description of the Weierstrass elliptic functions, the period quotient of a complex torus is usually specified. \end{remark} The projective curve $\{(x:y:z)\in\mathbb{P}^2:y^2z=4x(z-x)(z-k^2x)\}$ admits an involution $-1\times:(x:y:z)\mapsto(x:-y:z)$. The quotient by $\pm 1$ admits an isomorphism \[ \begin{array}{rcl} \{(x,y)\in\mathbb{C}\times(\mathbb{C}\backslash\{0\}):y^2=4x(1-x)(1-k^2x)\}/\{\pm 1\}&\to&\mathbb{C}\backslash\{0,1,k^{-2}\}\\ (x:y:z)&\mapsto&(x:z) \end{array} \] Now there is a well-known equation called the Heun equation in system form, which is defined on the punctured Riemann sphere $\mathbb{P}^1\backslash\{0,1,k^{-2},\infty\}$. Let $A_0$, $A_1$, $A_2$ $A_3=-A_0-A_1-A_2$ be $2\times 2$ matrices with complex entries and consider the matrix-valued one-form \[ \omega=A_0\dfrac{dx}{x}+A_1\dfrac{dx}{x-1}+A_2\dfrac{dx}{x-k^{-2}}. \] The Heun equation in system form asks for a vector-valued function $Y$ satisfying $dY=\omega Y$. The Darboux equation is indeed the pull-back of Heun equation via the map $E\backslash\{0,\frac{1}{2},\frac{\tau}{2},\frac{1+\tau}{2}\}\stackrel{p}{\to} \mathbb{P}^1\backslash\{0,1,k^{-2},\infty\}$. The pull-back $p^*\omega$ has a simple pole at the four points of order two. 
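For completeness, we indicate why this map lands on the stated cubic and why $p^*\omega$ acquires its poles exactly at the points of order two (a standard verification, using the Jacobi identities $(\mathrm{sn})'=\mathrm{cn}\,\mathrm{dn}$, $\mathrm{cn}^2=1-\mathrm{sn}^2$, $\mathrm{dn}^2=1-k^2\mathrm{sn}^2$, up to the constant rescaling of $u$ implicit in the identification of $E$ with $\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$):
\[
\big((\mathrm{sn}^2)'(u)\big)^2=4\,\mathrm{sn}^2(u)\,\mathrm{cn}^2(u)\,\mathrm{dn}^2(u)
=4\,\mathrm{sn}^2(u)\big(1-\mathrm{sn}^2(u)\big)\big(1-k^2\mathrm{sn}^2(u)\big),
\]
so $(x:y:z)=(\mathrm{sn}^2 u:(\mathrm{sn}^2)'(u):1)$ satisfies $y^2z=4x(z-x)(z-k^2x)$, while $x=\mathrm{sn}^2 u$ takes the values $0$, $1$, $k^{-2}$ and $\infty$ precisely at the four points of order two.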
So our target is to obtain an analogue of $dx/(x-a)$ in the torus so as to write the Darboux equation in a system form. Recall that for each $a\in\mathbb{C}$, ``$x-a$" means the linear form \[ \begin{array}{rcl} \mathbb{C}^2\backslash\{(0,0)\}&\to&\mathbb{C}\\ (x,y)&\mapsto&x-ay \end{array} \] which fails to be well-defined on the projective space $\mathbb{P}^1$, whereas its (unique) zero locus is well-defined (so that this linear form is a section of a line bundle of degree one.). The Weierstrass sigma function has the same property in the sense that $\sigma:\mathbb{C}\to\mathbb{C}$ is not periodic, but its (unique) zero locus is well-defined modulo $\mathbb{Z}+\tau\mathbb{Z}$. Therefore, for a fixed $\tau\in\mathbb{H}$, $p^*\omega$ can be written as \begin{eqnarray*} p^*\omega&=&A_0\dfrac{d\sigma(z)}{\sigma(z)}+A_1\dfrac{d\sigma(z+\frac{1}{2})}{\sigma(z+\frac{1}{2})} +A_2\dfrac{d\sigma(z+\frac{\tau}{2})}{\sigma(z+\frac{\tau}{2})} +A_3\dfrac{d\sigma(z+\frac{1+\tau}{2})}{\sigma(z+\frac{1+\tau}{2})}\\ &=&\left(A_0\zeta(z)+A_1\zeta(z+\frac{1}{2})+A_2\zeta(z+\frac{\tau}{2})+A_3\zeta(z+\frac{1+\tau}{2})\mathfrak{R}ight) dz. \end{eqnarray*} Recall that for each $z\in\mathbb{C}$, \[ \zeta(z+1)=\zeta(z)+2\zeta(\frac{1}{2})\;\;\mbox{ and }\;\;\zeta(z+\tau)=\zeta(z)+2\zeta(\frac{\tau}{2}). \] Thus, \begin{eqnarray*} &&A_0\zeta(z+1)+A_1\zeta(z+\frac{1}{2}+1)+A_2\zeta(z+\frac{\tau}{2}+1)+A_3\zeta(z+\frac{1+\tau}{2}+1)\\ &=&A_0\zeta(z)+A_1\zeta(z+\frac{1}{2})+A_2\zeta(z+\frac{\tau}{2})+A_3\zeta(z+\frac{1+\tau}{2})+ 2(A_0+A_1+A_2+A_3)\zeta(\frac{1}{2})\\ &=&A_0\zeta(z)+A_1\zeta(z+\frac{1}{2})+A_2\zeta(z+\frac{\tau}{2})+A_3\zeta(z+\frac{1+\tau}{2}), \end{eqnarray*} as the four matrices $A_0$,..., $A_3$ are chosen so that $A_0+A_1+A_2+A_3=0$. Hence the matrix-valued form $p^*\omega$ has a period $1$. Similarly, $\tau$ is a period of $p^*\omega$ also. Consequently, $p^*\omega$ is well-defined on the torus $E$. Therefore, we arrive at a reformulation of the Darboux equation in a system form. \begin{definition}\leftarrowbel{darbouxsystem} Let $A_0$, $A_1$, $A_2$, $A_3\in\mathfrak{s}l_2(\mathbb{C})$ such that $A_0+A_1+A_2+A_3=0$. Let \begin{equation} \leftarrowbel{connection} \Omega=A_0\dfrac{d\sigma(z)}{\sigma(z)}+A_1\dfrac{d\sigma(z+\frac{1}{2})}{\sigma(z+\frac{1}{2})} +A_2\dfrac{d\sigma(z+\frac{\tau}{2})}{\sigma(z+\frac{\tau}{2})} +A_3\dfrac{d\sigma(z+\frac{1+\tau}{2})}{\sigma(z+\frac{1+\tau}{2})} \end{equation} be a matrix-valued one-form defined on $\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$, except at the points of order two. Then the Darboux equation in a system form is defined by \[ dY=\Omega Y. \] \end{definition} The correspondence between the system form and the classical form of the Darboux equation can be made explicit as follows. First of all, we need an elementary lemma. \begin{lemma} Let $A=(a_{ij})_{1\leq i,j\leq 2}$ be a matrix-valued holomorphic function. If $Y=(y_1,y_2)^T$ is a vector-valued function satisfying $Y'=AY$. Then, \[ y''_1+[-a_{11}-a_{22}-\dfrac{a'_{12}}{a_{12}}]y'_1+[a_{11}a_{22}-a_{21}a_{12}-a_{12}(\dfrac{a_{11}}{a_{12}})']y_1=0. \] \end{lemma} \begin{proof} Direct calculation. \end{proof} In this lemma, we observe that $y_1$ satisfies a second order ODE with two kinds of singularities: \begin{itemize} \item[(i)] poles of $a_{11}+a_{22}$ and that of $a_{11}a_{22}-a_{12}a_{21}$. These are called {\it essential singularities}. \item[(ii)] zeros or poles of $a_{12}$. These are called {\it apparent singularities}. 
\end{itemize} Note that the essential singularities of this equation are invariant under similarity transformations of $A$. The same is not true for its apparent counterpart. Now we revise Definition \mathfrak{R}ef{darbouxsystem}. From the definition of $\Omega$ in (\mathfrak{R}ef{connection}), the first component of $Y$ satisfies a second order ODE with essential singularities $0$, $\frac{1}{2}$, $\frac{\tau}{2}$ and $\frac{1+\tau}{2}$. The (1,2) entry of $\Omega$ is an elliptic one-form with four poles, and hence it has four zeros. So it is sensible to make the following normalization: \begin{itemize} \item[(i)] $A_0$, $A_1$, $A_2$ and $A_3$ have traces zero and \item[(ii)] the (1,2) entry of $\Omega$ has four zeros at $0$, $\frac{1}{2}$, $\frac{\tau}{2}$ and $\frac{1+\tau}{2}$ by applying a simultaneous similarity transformation to $A_0$, $A_1$, $A_2$ and $A_3$ \end{itemize} so that the first component of $Y$ satisfies a second ODE with vanishing first order term. This second order ODE is nothing but the classical Darboux equation. \begin{remark} Instead of the Weierstrass sigma functions, the one-form $\Omega$ in (\mathfrak{R}ef{connection}) can be written in terms of the theta functions as follows \[ A_0\dfrac{d\vartheta_1(z)}{\vartheta_1(z)}+A_1\dfrac{d\vartheta_2(z)}{\vartheta_2(z)} +A_2\dfrac{d\vartheta_3(z)}{\vartheta_3(z)} +A_3\dfrac{d\vartheta_4(z)}{\vartheta_4(z)}. \] \end{remark} The following theorem provides the sufficient conditions for the Darboux equation having special solutions. \begin{theorem}\leftarrowbel{SpSoln} Let $A_0$, $A_1$, $A_2$, $A_3\in\mathfrak{s}l_2(\mathbb{C})$ such that $A_0+A_1+A_2+A_3=0$, and let $\pm a_j/2$ be the eigenvalues of $A_j$ ($j=0,1,2,3$). If \begin{itemize} \item[(i)] one of $a_0$, $a_1$, $a_2$ or $a_3$ is an integer, then there exists $h\in\mathbb{C}$ such that the Darboux equation \begin{equation}\leftarrowbel{DarbouxEqn} \dfrac{dY}{dz}-\left[A_0\dfrac{d\sigma(z)}{\sigma(z)}+A_1\dfrac{d\sigma(z+\frac{1}{2})}{\sigma(z+\frac{1}{2})} +A_2\dfrac{d\sigma(z+\frac{\tau}{2})}{\sigma(z+\frac{\tau}{2})} +A_3\dfrac{d\sigma(z+\frac{1+\tau}{2})}{\sigma(z+\frac{1+\tau}{2})}\mathfrak{R}ight]Y=hY \end{equation} has a solution which is a sum of finitely many Gauss hypergeometric functions. \item[(ii)] $a_0\pm a_1 \pm a_2 \pm a_3$ is an even integer, then there exists $h\in\mathbb{C}$ such that the Darboux equation (\mathfrak{R}ef{DarbouxEqn}) has a solution which is a product of powers of elliptic functions times an elliptic function with poles at the order two points. \end{itemize} \end{theorem} \begin{proof} (i) is a rephrasing of part iv) of Theorem \mathfrak{R}ef{terminate1}, while (ii) is a rephrasing of part i), ii) and iii) of Theorem \mathfrak{R}ef{terminate1}. \end{proof} \begin{remark} Instead of Darboux equations, a sheaf theoretic description of two types of special solutions of Heun equations (which are sections of sheaves constructed from rigid local systems) will be given in \cite{CCT1}. \end{remark} \subsection{The Correspondence to the Elliptic Painlev\'e VI Equation} \leftarrowbel{EPainleve} It is well-known that the Painlev\'e VI equation comes from an isomonodromic family of Heun equations (see \cite[Fuchs]{Fuchs}, \cite[Schlesinger]{Schlesinger}, \cite[Jimbo-Miwa]{JM}, \cite[Takemura]{Take3}). One expects that an isomonodromic family of Darboux equations (the monodromy of Darboux equations remains unchanged as $\tau$ varies) yields the elliptic version of Painlev\'e VI equation in the same manner. 
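In concrete terms (we recall this only to fix the meaning of ``isomonodromic''; no condition beyond the ones already mentioned is being imposed), one considers a family of connection forms of the above type whose residue matrices $A_j=A_j(\tau)$ are allowed to depend on $\tau$, and requires that the monodromy representation \[ \rho_\tau:\pi_1\big((\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z}))\setminus\{\mbox{points of order two}\},x_0\big)\to GL_2(\mathbb{C}) \] be independent of $\tau$ up to overall conjugation. This requirement translates into a differential equation governing the $\tau$-dependence of the $A_j$'s, and it is this deformation equation which, after the reduction described below, becomes the elliptic Painlev\'e VI equation.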
Our first task is to describe a family of elliptic curves. Let $\mathbb{H}\subset\mathbb{C}$ be the upper half plane and let $g_2$, $g_3:\mathbb{H}\to\mathbb{C}$ be the usual Eisenstein series. Let \[ \mathbb{E}=\{(x,y,\tau)\in\mathbb{C}^2\times\mathbb{H}:y^2=4x^3-g_2(\tau)x-g_3(\tau)\}. \] Then $\mathbb{E}\to\mathbb{H}$ is a family of elliptic curves. Moreover, \[ \begin{array}{l} D_0=(\mbox{line at }\infty\times\mathbb{H})\cap\mathbb{E},\\ D_1=\{(x,y,\tau)\in\mathbb{E}:x=\wp(\frac{1}{2},\tau)\},\\ D_2=\{(x,y,\tau)\in\mathbb{E}:x=\wp(\frac{\tau}{2},\tau)\},\\ D_3=\{(x,y,\tau)\in\mathbb{E}:x=\wp(\frac{1}{2}+\frac{\tau}{2},\tau)\} \end{array} \] are the four half-periods loci, so that $\mathbb{E}\backslash (D_0\cup D_1\cup D_2\cup D_3)\to\mathbb{H}$ is the family of elliptic curves with the four points of order two deleted. Notice that $\mathbb{E}$ admits an obvious involution $(x,y,\tau)\mapsto(x,-y,\tau)$. The quotient of $\mathbb{E}$ by such an involution is the usual family of $\mathbb{P}^1$ with four points deleted, as described as follows. Consider the following hypersurfaces in $\mathbb{C}\times\mathbb{C}$, \[ \begin{array}{l} H_0=\{(z,t)\in\mathbb{C}\times\mathbb{C}:z=0\},\\ H_1=\{(z,t)\in\mathbb{C}\times\mathbb{C}:z=1\},\\ H_2=\{(z,t)\in\mathbb{C}\times\mathbb{C}:z=t\}, \end{array} \] then the projection to the second coordinate $(\mathbb{C}\times\mathbb{C})\backslash (H_0\cup H_1\cup H_2)\to\mathbb{C}\backslash\{0,1\}$ is the family of four-punctured Riemann sphere with one of the punctures varying. The map \[ p:(\mathbb{E}\backslash (D_0\cup D_1\cup D_2\cup D_3))\to (\mathbb{C}\times\mathbb{C})\backslash (H_0\cup H_1\cup H_2) \] is the two-to-one map described above. Pick four matrix-valued analytic functions $A_0$, $A_1$, $A_2$, $A_3:\mathbb{C}\backslash\{0,1\}\to\mathfrak{s}l(2)$ such that $A_0+A_1+A_2+A_3=0$, then \[ \omega=-\left[\dfrac{A_0(t)}{x}+\dfrac{A_1(t)}{x-1}+\dfrac{A_2(t)}{x-t}\mathfrak{R}ight]dx \] is a one-form defined on $(\mathbb{C}\times\mathbb{C})\backslash(H_0\cup H_1\cup H_2)$, and \begin{eqnarray*} \Omega&=&p^*\omega\\ &=&-\left[A_0\dfrac{d\sigma(\wp^{-1}(x);\tau)}{\sigma(\wp^{-1}(x);\tau)}+ A_1\dfrac{d\sigma(\wp^{-1}(x)+\frac{1}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{1}{2};\tau)}+ A_2\dfrac{d\sigma(\wp^{-1}(x)+\frac{\tau}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{\tau}{2};\tau)}+ A_3\dfrac{d\sigma(\wp^{-1}(x)+\frac{1+\tau}{2};\tau)}{\sigma(\wp^{-1}(x)+\frac{1+\tau}{2};\tau)}\mathfrak{R}ight] \end{eqnarray*} or equivalently, \[ \Omega=-\sum_{j=0}^3A_j\dfrac{d\vartheta_{j+1}(\wp^{-1}(x);\tau)}{\vartheta_{j+1}(\wp^{-1}(x);\tau)} \] is a well-defined matrix-valued one form defined on $\mathbb{E}\backslash(D_0\cup D_1\cup D_2\cup D_3)$, with log singularities only. The system of PDEs $dY=\omega Y$ is over-determined in general and does not have any local solutions. However, the flatness condition $d\omega=\omega\wedge\omega$ guarantees the existence of local solutions of $dY=\omega Y$ (see \cite{Poberezhny}). If one chooses a convenient basis so that $A_3$ is diagonal, such a flatness condition implies that $X(t)$, the zero locus of the (1,2) entry of $\omega$, satisfies the usual Painlev\'e VI equation \cite[\S3]{Mahoux}, \cite[\S2]{Take5}. Similarly, the flatness condition $d\Omega=\Omega\wedge\Omega$ guarantees the existence of local solutions of the family of Darboux equations $dY=\Omega Y$. In case $A_3$ is diagonal, the ODE satisfied by $u(\tau)$, the zero locus of the (1,2) entry of $\Omega$, is known as the elliptic Painlev\'e VI equation. 
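To make the definition of $X(t)$ concrete, the following elementary computation (a sketch, implicit in the references just cited) may be helpful. Write $b_j(t)$ for the $(1,2)$ entry of $A_j(t)$. If $A_3$ is diagonal, then $b_0+b_1+b_2=0$ and the $(1,2)$ entry of $\omega$ equals \[ -\left(\dfrac{b_0}{x}+\dfrac{b_1}{x-1}+\dfrac{b_2}{x-t}\right)dx =-\dfrac{b_0(x-1)(x-t)+b_1x(x-t)+b_2x(x-1)}{x(x-1)(x-t)}\,dx. \] Since $b_0+b_1+b_2=0$, the numerator is affine in $x$, so for generic $t$ it has exactly one zero, \[ X(t)=\dfrac{b_0(t)\,t}{\big(b_0(t)+b_1(t)\big)\,t-b_1(t)}, \] and it is this function of $t$ that satisfies the rational Painlev\'e VI equation; its pull-back under $p$ then produces $u(\tau)$, as noted below.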
Note that $\Omega=p^*\omega$. The flatness condition $d\omega=\omega\wedge\omega$ implies that of $\Omega$. Thus $u(\tau)=p^*X(t)$. The elliptic Painlev\'e VI equation is the pull-back of the rational Painlev\'e VI equation by $p$. This is not a trivial computation and the geometric detail can be found in \cite{Manin}. The consequence is the following form of the elliptic Painlev\'e equation: \begin{eqnarray} \leftarrowbel{EP6} \frac{{d}^{2}u(\tau)}{{d\tau}^{2}}&=&-\frac{1}{8\pi^2}\Big[a_0^2{\wp'(u(\tau);\tau)} +{a_1^2\wp'(u(\tau)+\frac{1}{2};\tau)}\nonumber\\ &&\ + {a_2^2\wp'(u(\tau)+\frac{\tau}{2};\tau)}+(a_3-1)^2{\wp'(u(\tau)+\frac{1+\tau}{2};\tau)}\Big], \end{eqnarray} where $\pm a_j/2$ are the eigenvalues of $A_j$ ($j=0,1,2,3$). Following the idea of Manin \cite{Manin}, the elliptic Painlev\'e VI equation (\mathfrak{R}ef{EP6}) is equivalent to the classical Painlev\'e VI equation \begin{eqnarray} \leftarrowbel{P6} \frac{d^2X}{dt^2}&=&\frac{1}{2}\begin{itemize}gg(\frac{1}{X}+\frac{1}{X-1}+\frac{1}{X-t}\begin{itemize}gg)\left(\frac{dX}{dt}\mathfrak{R}ight)^2 -\begin{itemize}gg(\frac{1}{t}+\frac{1}{t-1}+\frac{1}{X-t}\begin{itemize}gg)\frac{dX}{dt}\nonumber\\&& \ +\frac{X(X-1)(X-t)}{t^2(t-1)^2}\begin{itemize}gg(\alpha +\beta \frac{t}{X^2} \ +\gamma\frac{t-1}{(X-1)^2}+\delta\frac{t(t-1)}{(X-t)^2}\begin{itemize}gg), \end{eqnarray} where \[(a_0^2,a_1^2,a_2^2,(a_3-1)^2)=(-2\beta,2\gamma,1-2\delta,2\alpha).\] Moreover, the group of symmetries of (\mathfrak{R}ef{EP6}) can be generated by the following transformations (see \cite[\S3.2]{Manin}): \begin{itemize} \item[(i)] $(a_i)\mapsto(-a_i)$ for $i=0,1,2$ and $(a_3-1)\mapsto(-a_3+1)$ \item[(ii)] Permutations of $(a_0,a_1,a_2,a_3-1)$; \item[(iii)] $(a_0,a_1,a_2,a_3-1)\mapsto(a_0+n_0,a_1+n_1,a_2+n_2,a_3-1+n_3)$, where $n_0+n_1+n_2+n_3\equiv0 \mod 2$ and $n_i\in\mathbb{Z}.$ \end{itemize} The following theorem gives the conditions for the Painlev\'e VI equation (\mathfrak{R}ef{P6}) having one-parameter families of solutions expressed in terms of the hypergeometric functions (see \cite[Theorem 48.3]{GLS}): \begin{theorem} \leftarrowbel{riccati} If either \begin{equation}\leftarrowbel{condition} a_0+\epsilon_1a_1+\epsilon_2a_2+\epsilon_3a_3\in2\mathbb{Z} \mbox{ for some } \epsilon_1,\epsilon_2,\epsilon_3\in\{1,-1\} \end{equation} or \begin{equation}\leftarrowbel{condition1} (a_0-n)(a_1-n)(a_2-n)(a_3-n)=0 \mbox{ for some } n\in\mathbb{Z}, \end{equation} then the Painlev\'e VI equation (\mathfrak{R}ef{P6}) (or (\mathfrak{R}ef{EP6})) has one-parameter families of solutions expressed in terms of the hypergeometric functions. \end{theorem} \begin{example} For the case that $a_0=0,$ the Painlev\'e VI equation (\mathfrak{R}ef{EP6}) (and (\mathfrak{R}ef{P6}) resp.) has a trivial solution $u(\tau)\equiv0$ (and $X(t)\equiv0$ resp.). \end{example} The following important observation can be seen from Theorem \mathfrak{R}ef{SpSoln} and Theorem \mathfrak{R}ef{riccati}: \begin{corollary} \leftarrowbel{observation} If the eigenvalues $\pm a_j/2$ of $A_j$ satisfy the condition (\mathfrak{R}ef{condition}) for some $\epsilon_1,\epsilon_2,\epsilon_3\in\{1,-1\}$ or the condition (\mathfrak{R}ef{condition1}) for some $n\in\mathbb{Z}$, then both the Darboux equation (\mathfrak{R}ef{DarbouxEqn}) and Painlev\'e VI equation (\mathfrak{R}ef{P6}) (or (\mathfrak{R}ef{EP6})) have special solutions in the sense in Theorem \mathfrak{R}ef{SpSoln} and Theorem \mathfrak{R}ef{riccati} respectively. 
\end{corollary} \section{The Darboux Equation in a Scalar Form} In the previous section, we saw that given a Darboux equation in system form, its first component satisfies the classical Darboux equation. The second component would then be determined by its first component and this second component would be a special kind of function if the first component is of the same kind. Thus, we shall focus on the first component in this section and search for special types of solutions of the classical Darboux equations. Recall that different types of series expansions of the solution of the Lam\'{e} equation \begin{equation*}\leftarrowbel{E:lame} \frac{{d}^{2}y}{{du}^{2}}+\Big(h-{\nu(\nu+1)}{\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^2}\Big)y=0 \end{equation*} had been worked on by well-known mathematicians. For instance, the power series expansions and the Fourier-Jacobi expansions were developed by Ince in \cite{Ince2,Ince1}. Then Erd\'{e}lyi \cite{Erdelyi} and Sleeman \cite{Sleeman} expanded the {L}am\'e functions into series of associated Legendre functions of the first kind and the second kind respectively. As an example, one of the expansions was (see \cite[(12.1)]{Erdelyi}) \begin{equation}\leftarrowbel{E:legendre} \sum_{m=0}^\infty C_m \Gamma(\nu-2m+1) P^{2m}_\nu\begin{itemize}g(\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)\begin{itemize}g), \end{equation} where \begin{eqnarray*} P^n_\nu(z)&=&(-1)^n\frac{\Gamma(\nu+n+1)}{\Gamma(\nu-n+1)\Gamma(n+1)}\left(\frac{1-z}{1+z}\mathfrak{R}ight)^\frac{n}{2} \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} -\nu, \nu+1\\ n+1 \end{matrix}};\ \frac{1-z}{2}\mathfrak{R}ight)\\ &=&(-1)^n\frac{\Gamma(\nu+n+1)}{2^n\Gamma(\nu-n+1)\Gamma(n+1)}\left({1-z^2}\mathfrak{R}ight)^\frac{n}{2} \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} -\nu+n, \nu+n+1\\ n+1 \end{matrix}};\ \frac{1-z}{2}\mathfrak{R}ight). \end{eqnarray*} Notice that the series expansion (\mathfrak{R}ef{E:legendre}) can also be expressed in terms of hypergeometric functions \begin{equation*} \sum_{m=0}^\infty C_m \frac{\Gamma(\nu+2m+1)}{2^{2m}\Gamma(2m+1)}\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^m \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} -\nu+2m, \nu+2m+1\\ 2m+1 \end{matrix}};\ \frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)}{2}\mathfrak{R}ight). \end{equation*} By the quadratic tranformations of hypergeometric functions, it becomes \begin{equation} \sum_{m=0}^\infty C_m \frac{\Gamma(\nu+2m+1)}{2^{2m}\Gamma(2m+1)}\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^m \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} -\nu/2+m, \nu/2+1/2+m\\ 2m+1 \end{matrix}};\ \mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^2 \mathfrak{R}ight). 
\end{equation} In Section \mathfrak{R}ef{S:series}, we will generalize the series expansion in terms of the compositions of hypergeometric functions and Jacobi elliptic functions (see (\mathfrak{R}ef{E:2F1}) in Definition \mathfrak{R}ef{Series}) for local solutions of the Darboux equation\footnote{Also known as {D}arboux-{T}reibich-{V}erdier equation \cite{Veselov}.} \begin{eqnarray} &&\frac{{d}^{2}y}{{du}^{2}}+\Big(h-\frac{\xi(\xi+1)}{\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2(u,k)}-\frac{\eta(\eta+1)\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds}^2(u,k)}{\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}^2(u,k)}-\nonumber\\ &&\hspace{2.5cm}\frac{\mu(\mu+1)k^2\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}^2(u,k)}{\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds}^2(u,k)}-\nu(\nu+1)k^2{\textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2(u,k)}\Big)y=0 \end{eqnarray} on a torus of $\mathbb{C}$ modulo the lattice $\Leftarrowmbda=\{m\omega_1+n\omega_2:m,n\in\mathbb{Z}\}$, which can be specified by the Riemann $P$-scheme \[ P_{\mathbb{C}\slash\Leftarrowmbda} \begin{Bmatrix} \ 0\ &\ K(k)\ &\ K(k)+iK'(k)\ &\ iK'(k)\ &\\ \xi+1&\eta+1&\mu+1&\nu+1&u;\, h\\ -\xi&-\eta&-\mu&-\nu& \end{Bmatrix}, \] where the entries on the top row represent the locations of the regular singularities and the entries of the next two rows under the corresponding singularities represent the two exponents of the local solutions there. $u$, $h$ are the independent variable and the accessory parameter respectively. The Riemann $P$-scheme of a linear ODE is important as its local monodromies are revealed. Let $X$ be the complement of $\{0,K,iK',K+iK'\}$ in $\mathbb{C}/\Leftarrowmbda$. Choose a base point $x_0\in X$ and let $V$ be the space of solutions of equation \eqref{E:darboux} in a small neighborhood of $x_0$. Then the monodromy of equation \eqref{E:darboux} is the group representation \[ \mathfrak{R}ho:\pi_1(X,x_0)\to GL(V) \] defined by analytic continuation along paths. Some important partial information about such monodromy is read off from the Riemann $P$-scheme as follows. Let $\gamma_0$, $\gamma_K$, $\gamma_{iK'}$ and $\gamma_{K+iK'}\in\pi_1(X,x_0)$ be loops based at $x_0$ which wind around $0$, $K$, $iK'$ and $K+iK'$ respectively for once. Then $\mathfrak{R}ho(\gamma_0)$, $\mathfrak{R}ho(\gamma_K)$, $\mathfrak{R}ho(\gamma_{K+iK'})$ and $\mathfrak{R}ho(\gamma_{iK'})$ have eigenvalues $e^{2\pi i(\xi+1)}$, $e^{-2\pi i\xi}$; $e^{2\pi i(\eta+1)}$, $e^{-2\pi i\eta}$; $e^{2\pi i(\mu+1)}$, $e^{-2\pi i\mu}$; $e^{2\pi i(\nu+1)}$, $e^{-2\pi i\nu}$ respectively. Sometimes, it is useful to projectivise $\mathfrak{R}ho$ to yield \[ \sigma: \pi_1(X,x_0)\stackrel{\mathfrak{R}ho}{\to}GL(V)\to\mathbb{P}GL(V). \] (This projectivised monodromy plays an important role in the Schwarz's map.) Then the conjugacy classes \[ \begin{array}{l} \sigma(\gamma_0)\cong e^{2\pi i(2\xi+1)};\\ \sigma(\gamma_K)\cong e^{2\pi i(2\eta+1)};\\ \sigma(\gamma_{K+iK'})\cong e^{2\pi i(2\mu+1)};\\ \sigma(\gamma_{iK'})\cong e^{2\pi i(2\nu+1)} \end{array} \] are read off from the difference of local exponents in the Riemann $P$-scheme above. The monodromy representation of a linear ODE is important. For instance, let $L_1$ and $L_2$ be second order linear ordinary differential operators. Denote the germs of solutions of $L_1$ and $L_2$ at an ordinary point $x_0$ by $V$ and $W$ respectively. 
Suppose that there is a first order differential operator $G$ (called a gauge transformation) which transforms $V$ to $W$. Then such a gauge transformation induces a $\pi_1(X,x_0)$-linear map from $V$ to $W$. In particular, when $V$ and $W$ are irreducible $\pi_1(X,x_0)$ representations, the gauge transformation $G$ induces an equivalence between the $\pi_1(X,x_0)$ representations $V$ and $W$ (see \cite{Kimura1} for a nice working example). Conversely, If $L_1$ and $L_2$ have equivalent monodromy representations, there exists a gauge transformation with rational coefficients sending germs of solutions of $L_1$ to that of $L_2$. A special case is worth extra attention. Suppose that $L$ is a second order ordinary differential operator which has a regular singular point at $a$. Suppose further that $L$ has local exponent difference $1$ at $a$ and the local monodromy of $L$ at $a$ is diagonalizable. There is a special kind of gauge transformation called an {\it elementary Schlesinger transformation} which sends germs of solutions of $L$ to that of an operator $L'$ which does not have a singularity at $a$. The readers may refer to \cite{JM} for the detail. \subsection{The Hypergeometric Function Series Expansion of Darboux Solutions and the Corresponding Special Solutions} \leftarrowbel{S:series} In this section, we consider one local solution at $u=0$ with exponent $\xi+1$ and call it the {\it local Darboux solution}, denoted by $Dl(\xi,\eta,\mu,\nu;h;u,k)$. The expansions of $Dl$ at the other regular singular points $K,\, iK^\prime,\, K+iK^\prime$ can be obtained after applying the symmetries of the Darboux equation (for more details, see \cite{CCT}). If $\xi=-\frac{3}{2},-\frac{5}{2},\cdots$, then $Dl(\xi,\eta,\mu,\nu;h;u,k)$ will generically be logarithmic and we do not discuss this degenerate case further in this paper. Recall the definition in \cite{CCT} that \begin{definition}[({\cite[Definition 7.1]{CCT}})] Suppose that $\xi\neq-\frac{3}{2},-\frac{5}{2},\cdots$. Let $Dl(\xi,\eta,\mu,\nu;h;u,k)$ be defined by the following series expansion \begin{equation}\leftarrowbel{E:series} \mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^{\xi+1}\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)^{\eta+1}\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds}(u,k)^{\mu+1}\sum_{m=0}^\infty C_m\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}(u,k)^{2m}, \end{equation} where the coefficients $C_m(\xi,\eta,\mu,\nu;h;k)$ satisfy the relation {\begin{eqnarray*}n\leftarrowbel{3term}&&(2m+2)(2m+2\xi+3)C_{m+1}\nonumber\\ &&\hspace{.5cm} +\{h-[2m+\eta+\xi+2]^2-k^2[2m+\mu+\xi+2]^2+(k^2+1)(\xi+1)^2\}C_{m}\nonumber\\ &&\hspace{1cm} +k^2(2m+\xi+\eta+\mu+\nu+2)(2m+\xi+\eta+\mu-\nu+1)C_{m-1}=0,\nonumber\\ \end{enumerate}qn} where $m\geq0$ and the initial conditions $C_{-1}=0$, $C_0=1$. \end{definition} The Heun equation \begin{equation} \leftarrowbel{heun} \frac{d^2y}{dt^2}+\Big(\frac{\gamma}{t}+\frac{\delta}{t-1}+\frac{\epsilon}{t-a}\Big)\frac{dy}{dt}+ \frac{\alpha\beta t-q}{t(t-1)(t-a)}y=0, \end{equation} is connected to the Darboux equation by a simple change of dependent and independent variables, and Erd\'{e}lyi \cite[Eqn(4.2)]{Erdelyi1} (1942) gave the hypergeometric function series expansion of the local Heun solution. So it makes sense to study the corresponding expansion of the local Darboux solution $Dl(\xi,\eta,\mu,\nu;h;u,k)$. 
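Although we work directly on the torus, it may help to recall one standard way of realizing this change of variables (the precise normalization is not used below). Taking $a=k^{-2}$ in the Heun equation above, the independent variables are related by $t=\mathop{\rm sn}^2(u,k)$, so that $dt/du=2\mathop{\rm sn}(u,k)\mathop{\rm cn}(u,k)\mathop{\rm dn}(u,k)$ and the singular points $t=0,1,k^{-2},\infty$ correspond to the points $u=0$, $K$, $K+iK'$, $iK'$ of order two; the dependent variables differ by the factor $\mathop{\rm sn}(u,k)^{\xi+1}\mathop{\rm cn}(u,k)^{\eta+1}\mathop{\rm dn}(u,k)^{\mu+1}$ already visible in the series expansion above.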
{For abbreviation, we let $\op_{\pm\pm\pm\pm}$ stand for $\pm\xi\pm\eta\pm\mu\pm\nu$ such that \begin{equation*} \alpha=\frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g),\ \beta=\frac{1}{2}\begin{itemize}g(\op_{+++-}+3\begin{itemize}g). \end{equation*} {We also let $\op_{\pm0\pm0}$ stand for $\pm\xi\pm\mu$.} \begin{definition} \leftarrowbel{Series} Suppose that $\xi\neq-\frac{3}{2},-\frac{5}{2},\cdots$. Let $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ be defined by the following series expansion \begin{eqnarray}\leftarrowbel{E:2F1} &&(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{\xi+1}(\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} u)^{\eta+1}(\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds} u)^{\mu+1} \sum_{m=0}^\infty \Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-++}+2m+3)\Bigg)\Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-+-}+2m+2\begin{itemize}g)\Bigg)\nonumber\\ &&\times \frac{X_m(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{2m}}{\Gamma\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)} \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g)+m, \frac{1}{2}\begin{itemize}g(\op_{+++-}+3\begin{itemize}g)+m\\ \op_{+0+0}+2m+3 \end{matrix}};\ \textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u\mathfrak{R}ight), \nonumber\\ \end{eqnarray} in which the coefficients $X_m(\xi,\eta,\mu,\nu;h;k)$ satisfy the relation \[ L_0X_0+M_0X_1=0, \] \begin{equation} \leftarrowbel{threeterm} K_mX_{m-1}+L_mX_m+M_mX_{m+1}=0,\mbox{ for }m>0, \end{equation} where \[ K_{m}=\frac{\begin{itemize}g(\op_{++++}+2m+2\begin{itemize}g)\begin{itemize}g(\op_{+++-}+2m+1\begin{itemize}g) \begin{itemize}g(\op_{+0+0}+m+1\begin{itemize}g)(2\mu+2m+1)} {2\begin{itemize}g(\op_{+0+0}+2m+1\begin{itemize}g)\begin{itemize}g(\op_{+0+0}+2m\begin{itemize}g)},\] \begin{eqnarray*} L_{m}&=&\Big[\frac{\begin{itemize}g(\op_{++++}+2m+4\begin{itemize}g)\begin{itemize}g(\op_{+-++}+2m+3\begin{itemize}g)} {2\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)\begin{itemize}g(\op_{+0+0}+2m+1\begin{itemize}g)} +\frac{\begin{itemize}g(\op_{+++-}+2m+3\begin{itemize}g)\begin{itemize}g(\op_{+-+-}+2m+2\begin{itemize}g)} {2\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)\begin{itemize}g(\op_{+0+0}+2m+1\begin{itemize}g)}\\ &&\ -\frac{2}{\begin{itemize}g(\op_{+0+0}+2m+1\begin{itemize}g)}\Big](2\xi+2m+1)m -\frac{(2\mu+3)\begin{itemize}g(\op_{++++}+2m+4\begin{itemize}g) \begin{itemize}g(\op_{+++-}+2m+3\begin{itemize}g)}{2\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)}\\ &&\ +2(2\mu+3)m+h-\xi(\xi+1)k^2+(\xi+1)(1-k^2)+2(\mu+1)(\xi+1)(1-k^2)\\ &&\ -4k^2m\begin{itemize}g(\op_{+0+0}+m+2\begin{itemize}g)-\nu(\nu+1)+(\mu+1)^2(1-k^2)+2(\mu+1)(\eta+1)+\mu+\eta+2, \end{eqnarray*} \[ M_{m}=\frac{(m+1)(2\xi+2m+3)\begin{itemize}g(\op_{+-++}+2m+3\begin{itemize}g)\begin{itemize}g(\op_{+-+-}+2m+2\begin{itemize}g)} {2\begin{itemize}g(\op_{+0+0}+2m+4\begin{itemize}g)\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)}. \] \end{definition} We show that the series always converges for \[\left|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}\mathfrak{R}ight|<\min\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^2,\ |k - ik'\begin{itemize}g|^2\Big\}\] (with the possible exception of some branch cut) in Section \mathfrak{R}ef{S:Convergence}. 
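Before turning to termination, one elementary consistency check may be worth recording (for generic parameters, so that the Gamma factors and the lower parameters of the hypergeometric functions are finite): each ${}_2F_1$ factor above equals $1+O(\mathop{\rm sn}^2u)$ near $u=0$, so the $m$-th summand contributes a constant multiple of $\mathop{\rm sn}(u,k)^{2m}\big(1+O(u^2)\big)$ there. Consequently $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ has the same leading behaviour \[ \mathop{\rm sn}(u,k)^{\xi+1}\mathop{\rm cn}(u,k)^{\eta+1}\mathop{\rm dn}(u,k)^{\mu+1}\big(1+O(u^2)\big) \] as $Dl(\xi,\eta,\mu,\nu;h;u,k)$, so that, where it converges, it represents (up to a constant multiple) the same local solution with exponent $\xi+1$ at $u=0$.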
Now we look at the conditions for the existence of special solutions, that is, the termination of the series. This is the case in which convergence is not an issue. \begin{theorem}\leftarrowbel{terminate} If there exists a non-negative integer $q$ such that one of the following cases \begin{equation} \leftarrowbel{termination} \op_{++++}=\xi+\eta+\mu+\nu=-2q-4 \mbox{ or } \op_{+++-}=\xi+\eta+\mu-\nu=-2q-3 \end{equation} or \begin{equation} \leftarrowbel{termination0} \displaystyle \mu=-\frac{2q+3}{2} \end{equation} holds, then there exist $q+1$ values $h_0,\, \cdots,\, h_q$ of $h$ such that the series expansion $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ ($j=0,1,\cdots,q$) terminates. \end{theorem} \begin{proof} It follows from $(\mathfrak{R}ef{threeterm})$ that the local Darboux solution becomes the finite series \begin{eqnarray*} &&(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{\xi+1}(\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} u)^{\eta+1}(\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds} u)^{\mu+1} \sum_{m=0}^q \Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-++}+2m+3)\Bigg)\Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-+-}+2m+2\begin{itemize}g)\Bigg)\nonumber\\ &&\times \frac{X_m(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{2m}}{\Gamma\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)} \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g)+m, \frac{1}{2}\begin{itemize}g(\op_{++-+}+3\begin{itemize}g)+m\\ \op_{+0+0}+2m+3 \end{matrix}};\ \textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u\mathfrak{R}ight), \mbox{ with } X_q\neq0 \nonumber\\ \end{eqnarray*} if and only if $$K_{q+1}(\xi,\eta,\mu,\nu;k)=0=X_{q+1}(\xi,\eta,\mu,\nu;h;k).$$ Notice that $X_{q+1}(\xi,\eta,\mu,\nu;h;k)=0$ is equivalent to saying the vanishing of the finite continued-fraction \begin{eqnarray*}n\leftarrowbel{fcf} \tilde{f}(\xi,\eta,\mu,\nu;h;k):=L_0/M_0-\frac{K_1/M_1}{L_1/M_1-}\frac{K_2/M_2}{L_2/M_2-}\cdots\frac{K_q/M_q}{L_q/M_q}=0.\end{enumerate}qn If $\xi,\eta,\mu,\nu$ are chosen such that $K_{q+1}(\xi,\eta,\mu,\nu;k)=0,$ that is, (\mathfrak{R}ef{termination}) or (\mathfrak{R}ef{termination0}) holds, then there exists $q+1$ values $h_0,\, \cdots,\, h_q$ of $h$ such that (\mathfrak{R}ef{fcf}) holds, and hence the series terminates. \end{proof} \begin{remark} If (\mathfrak{R}ef{termination}) holds, then the solution in Theorem \mathfrak{R}ef{terminate} becomes the {\it Darboux polynomial}, denoted by $Dp(\xi,\eta,\mu,\nu;h_j;u,k)$ ($j=0,\cdots,q$) (see the details in \cite[p. 34]{CCT}.). In this case, an {\it invariant subspace} $\mathcal{V}_q$ under the Darboux operator $\mathcal{D}$ can be constructed as follows: let the Darboux operator $\mathcal{D}$ such that the Darboux equation (\mathfrak{R}ef{E:darboux}) can be rewritten as $(\mathcal{D}+h)y=0$ and let $\mathcal{U}_q$ be the space of all the even elliptic functions on $\mathbb{C}\slash\Leftarrowmbda$ having exactly one pole at $iK'(k)$ of order at most $2q$. Note that $\mathcal{U}_q$ can be viewed as the space of polynomials in $\textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u$ with degree at most $q$. 
Then it can be verified that the space \[\mathcal{V}_q:=\{(\mathop{\rm sn} u)^{\xi+1}(\mathop{\rm cn} u)^{\eta+1}(\mathop{\rm dn} u)^{\mu+1}f:\ f\in \mathcal{U}_q\}\] is invariant under the Darboux operator $\mathcal{D}$, that is, $\mathcal{D}g\in\mathcal{V}_q$ for all $g\in\mathcal{V}_q$. The search for the accessory parameters $h_j$ above thus becomes a finite-dimensional eigenvalue problem. \end{remark} \begin{remark} If (\ref{termination0}) holds, then we obtain in Theorem \ref{terminate} a new type of special solution, namely a finite sum of hypergeometric functions. In this case, the Darboux equation is indeed the pull-back of a Heun equation having an {\it apparent singularity}, so that it can be reduced to a hypergeometric equation by a gauge transformation; this is the case considered by Kimura in \cite{Kimura1}. \end{remark} \begin{theorem}\label{gauge} If there exists a non-negative integer $q$ such that $\mu=-\dfrac{2q+3}{2}$, then there exist $q+1$ complex numbers $h_0$,..., $h_q$ such that $\tilde{Dl}(\xi,\eta,\mu,\nu;h_j;u,k)$ has the form of a finite sum. There also exists a first order differential operator $R$ with elliptic coefficients (called a gauge transformation) such that \[ \tilde{Dl}(\xi,\eta,\mu,\nu;h_j;u,k)=R \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\big(\op_{++++}+4\big), \frac{1}{2}\big(\op_{+++-}+3\big)\\ \op_{+0+0}+3 \end{matrix}};\ \textstyle\mathop{\rm sn}^2 u\right). \] \end{theorem} \begin{proof} From Theorem \ref{terminate}, the Darboux equation \eqref{E:darboux} has a solution of the form $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$.
There also exist $h_0$,..., $h_q$ such that $\tilde{Dl}(\xi,\eta,\mu,\nu;h_j;u,k)$ is a finite sum of the form \begin{eqnarray} &&(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{\xi+1}(\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} u)^{\eta+1}(\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds} u)^{\mu+1} \sum_{m=0}^q \Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-++}+2m+3)\Bigg)\Gamma\Bigg(\frac{1}{2}\begin{itemize}g(\op_{+-+-}+2m+2\begin{itemize}g)\Bigg)\nonumber\\ &&\times \frac{X_m(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{2m}}{\Gamma\begin{itemize}g(\op_{+0+0}+2m+3\begin{itemize}g)} \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g)+m, \frac{1}{2}\begin{itemize}g(\op_{+++-}+3\begin{itemize}g)+m\\ \op_{+0+0}+2m+3 \end{matrix}};\ \textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u\mathfrak{R}ight), \nonumber\\ \end{eqnarray} Now by the classical contiguous relations, for each $a$, $b$, $c$ and non-negative integer $m$, \[ \begin{array}{ll} &\sideset{_2}{_1}{\operatorname{F}}(a+m,b+m;c+2m;x)\\ =&\dfrac{c+2m-1}{(a+m-1)(b+m-1)}\dfrac{d}{dx}\sideset{_2}{_1}{\operatorname{F}}(a+m-1,b+m-1;c+2m-1;x)\\ =&\dfrac{c+2m-1}{(a+m-1)(b+m-1)}\dfrac{d}{dx}\\ &\left(\dfrac{c+2(m-1)}{(c-a+(m-1))(c-b+(m-1))}(1-x)\dfrac{d}{dx}-\dfrac{c-a-b}{(c-a+(m-1))(c-b+(m-1))}\mathfrak{R}ight)\\ &\sideset{_2}{_1}{\operatorname{F}}(a+m-1,b+m-1;c+2(m-1);x)\\ \end{array} \] Apply this relation repeatedly to the expansion above, we obtain a differential operator $L$, of order $2q$ in $\dfrac{d}{d(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u)}$ and with elliptic coefficients such that \[ \tilde{Dl}(\xi,\eta,\mu,\nu;h_j;u,k)=L \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g), \frac{1}{2}\begin{itemize}g(\op_{+++-}+3\begin{itemize}g)\\ \op_{+0+0}+3 \end{matrix}};\ \textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u\mathfrak{R}ight). \] Finally, we let $H$ be the second order differential operator which defines the hypergeometric function above. Applying the division algorithm, we obtain differential operators $Q$ and $R$ so that $R$ is of order one in $\dfrac{d}{d(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u)}$, and \[ L=QH+R \] Consequently, \[ \tilde{Dl}(\xi,\eta,\mu,\nu;h_j;u,k)=R \sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \frac{1}{2}\begin{itemize}g(\op_{++++}+4\begin{itemize}g), \frac{1}{2}\begin{itemize}g(\op_{+++-}+3\begin{itemize}g)\\ \op_{+0+0}+3 \end{matrix}};\ \textstyle\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2 u\mathfrak{R}ight). \] \end{proof} Now we rephrase the result above for Darboux equations in system form. \begin{corollary} Let $A_0$, $A_1$, $A_2$, $A_3\in\mathfrak{s}l_2(\mathbb{C})$ such that $A_0+A_1+A_2+A_3=0$ and an eigenvalue of $A_2$ is a half-integer. 
Then there exists $h\in\mathbb{C}$ and a gauge transformation $G$ which transforms the Darboux equation \begin{equation}\leftarrowbel{A} \dfrac{dY}{dz}-[A_0\dfrac{d\vartheta_1(z)}{\vartheta_1(z)}+A_1\dfrac{d\vartheta_2(z)}{\vartheta_2(z)} +A_2\dfrac{d\vartheta_3(z)}{\vartheta_3(z)} +A_3\dfrac{d\vartheta_4(z)}{\vartheta_4(z)}]Y=hY \end{equation} to \begin{equation}\leftarrowbel{B} \dfrac{dY}{dz}-[B_0\dfrac{d\vartheta_1(z)}{\vartheta_1(z)}+B_1\dfrac{d\vartheta_2(z)}{\vartheta_2(z)} +B_3\dfrac{d\vartheta_4(z)}{\vartheta_4(z)}]Y=h'Y \end{equation} for some $B_0$, $B_1$, $B_3\in\mathfrak{s}l_2(\mathbb{C})$ and $h'\in\mathbb{C}$. \end{corollary} \begin{proof} Let the eigenvalues of $A_2$ be $\pm\mu$. If $\mu>-\frac{1}{2}$, then Theorem \mathfrak{R}ef{gauge} assures that there exists a gauge transformation which transforms solutions of equation \eqref{A} to that of equation \eqref{B}. If $\mu=-\frac{1}{2}$, the difference of the local exponents of equation \eqref{A} at $K+iK'$ is $1$. There exists an elementary Schlesinger transformation which transforms its solutions to that of equation \eqref{B}. \end{proof} \subsection{All other Special Solutions via Symmetries}\leftarrowbel{symmetries} Using the symmetries of the Darboux Equation studied in \cite{CCT}, we can generate $2\times2\times2\times24=192$ solutions of Darboux equation (\mathfrak{R}ef{E:darboux}) in the following form when $\xi,\eta,\mu,\nu\notin\frac{2{\mathbb Z}+1}{2}$: \[ \tilde{Dl}\begin{itemize}g(\sigma_{X_i}(\xi)^{s_\xi},\sigma_{X_i}(\eta)^{s_\eta},\sigma_{X_i}(\mu)^{s_\mu},\sigma_{X_i}(\nu);h_X;\tau_{X_i}(u,k),\kappa_X(k)\begin{itemize}g), \] where ${s_\xi},{s_\eta},{s_\mu}=+$ or $-$; $X=I,\, A,\,B,\, C,\, D,\, E$; $i=0,1,2,3$, see more details in \cite[\S8]{CCT}. Then we obtain the following \begin{theorem} \leftarrowbel{terminate1} Assume one of the following conditions \begin{enumerate} \item $\op_{++++}\in2{\mathbb Z}\setminus\{-2\}$ or \item one of $\op_{+++-}, \op_{++-+}, \op_{+-++}, \op_{-+++}$ is in $(2{\mathbb Z}+1)\setminus\{-1\}$ or \item one of $\op_{++--}, \op_{+--+}, \op_{+-+-}$ is in $2{\mathbb Z}\setminus\{0\}$ or \item one of $\displaystyle \xi,\eta,\mu,\nu$ is in $\frac{2{\mathbb Z}+1}{2}\setminus\{-\frac{1}{2}\}$ \end{enumerate} holds. Then there exist finitely many values of $h_X$ such that at least one of the 192 local solutions in the form $$\tilde{Dl}(\sigma_{X_i}(\xi)^{s_\xi},\sigma_{X_i}(\eta)^{s_\eta},\sigma_{X_i} (\mu)^{s_\mu},\sigma_{X_i}(\nu);h_X;\tau_{X_i}(u,k),\kappa_X(k))$$ terminates. \end{theorem} \begin{proof} By the assumption, we choose a positive integer $q$ such that $$\sigma_{X_i}(\xi)^{s_\xi}+\sigma_{X_i}(\eta)^{s_\eta}+\sigma_{X_i}(\mu)^{s_\mu}+\sigma_{X_i}(\nu)=-2q-4,$$ or $$\sigma_{X_i}(\xi)^{s_\xi}+\sigma_{X_i}(\eta)^{s_\eta}+\sigma_{X_i}(\mu)^{s_\mu}-\sigma_{X_i}(\nu)=-2q-3,$$ or $$\sigma_{X_i}(\mu)^{s_\mu}=-\frac{2q+3}{2}$$ holds. Thus the proof follows from Theorem \mathfrak{R}ef{terminate}. \end{proof} Now we obtain the following result for Darboux equation in system form. \begin{corollary} Let $A_0$, $A_1$, $A_2$, $A_3\in\mathfrak{s}l_2(\mathbb{C})$ with eigenvalues $\pm\xi$, $\pm\eta$, $\pm\mu$, $\pm\nu$ respectively such that $A_0+A_1+A_2+A_3=0$. 
If \begin{enumerate} \item $\op_{++++}\in2{\mathbb Z}$ or \item one of $\op_{+++-}, \op_{++-+}, \op_{+-++}, \op_{-+++}$ is in $(2{\mathbb Z}+1)$ or \item one of $\op_{++--}, \op_{+--+}, \op_{+-+-}$ is in $2{\mathbb Z}$, \end{enumerate} then there exists $h\in\mathbb{C}$ such that the monodromy of the Darboux equation \begin{equation} \dfrac{dY}{dz}-[A_0\dfrac{d\vartheta_1(z)}{\vartheta_1(z)}+A_1\dfrac{d\vartheta_2(z)}{\vartheta_2(z)} +A_2\dfrac{d\vartheta_3(z)}{\vartheta_3(z)} +A_3\dfrac{d\vartheta_4(z)}{\vartheta_4(z)}]Y=hY \end{equation} is reducible. Moreover if one of $\xi$, $\eta$, $\mu$ or $\nu$ is in $\frac{2{\mathbb Z}+1}{2}$, there exists $h\in\mathbb{C}$ and a gauge transformation which transforms solutions of the Darboux equation above to that of a Darboux equation with one of the singularities removed. \end{corollary} \begin{remark} Theorem \mathfrak{R}ef{terminate1} states that \begin{itemize} \item (iv) the degeneration of the projectivised local monodromy of equation \eqref{E:darboux} at a point is a necessary condition for equation \eqref{E:darboux} to have a special solution which is a finite sum of hypergeometric functions. In Theorem \mathfrak{R}ef{gauge}, this finite sum is expressed as a gauge transformation between a Darboux equation with a local exponent difference $0$ and one with local exponent difference $m\in\mathbb{N}$. Such a gauge transformation is a Schlesinger transformation which was studied in detail in \cite{JM}. \item (i), (ii) or (iii) are the necessary conditions for the reducibility of the monodromy of equation \eqref{E:darboux} so that a Darboux polynomial type solution is observed. To see this, let $\mathfrak{R}ho:\pi_1(X,x_0)\to GL(2)$ be the monodromy representation of a Darboux equation which has a sub-representation spanned by a function $w$ defined around $x_0$. From this assumption, for each loop $\gamma_j$ ($j\in\{0,K,iK',K+iK'\}$), $\mathfrak{R}ho(\gamma_j)w$ is a multiple of $w$. So we infer from the Riemann $P$-scheme of the Darboux equation that \begin{eqnarray*} \mathfrak{R}ho(\gamma_0)w=e^{2\pi i(\pm\xi)}w;\\ \mathfrak{R}ho(\gamma_K)w=e^{2\pi i(\pm\eta)}w;\\ \mathfrak{R}ho(\gamma_{iK'})w=e^{2\pi i(\pm\nu)}w;\\ \mathfrak{R}ho(\gamma_{K+iK'})w=e^{2\pi i(\pm\mu)}w, \end{eqnarray*} for some choices of $\pm$. Now we also observe from the topology of the four-punctured torus that \[ \gamma_0\gamma_{K}\gamma_{iK'}\gamma_{K+iK'}=\delta_K\delta_{iK'}\delta^{-1}_K\delta^{-1}_{iK'} \] where $\delta_K$ is the straight edge joining $0$ to $K$. The same for $\delta_{iK'}$. Let both sides act on $w$, we obtain \[ \exp(2\pi i(\op_{abcd}))w=\mathfrak{R}ho(\delta_K)\mathfrak{R}ho(\delta_{iK'})\dfrac{1}{\mathfrak{R}ho(\delta_K)}\dfrac{1}{\mathfrak{R}ho(\delta_{iK'})}w=w, \] for some $a$, $b$, $c$, $d\in\{\pm\}$. Thus, we obtain $\op_{abcd}\in\mathbb{Z}$. \end{itemize} \end{remark} \subsection{Convergence of the Hypergeometric Function Series Expansion} \leftarrowbel{S:Convergence} In this section, we discuss the convergence when the series (\mathfrak{R}ef{E:2F1}) is non-terminating. \begin{theorem}\leftarrowbel{convergent} Suppose that $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ is non-terminating. 
Then it converges on the domain \[\left\{u\in\mathbb{C}:\Big|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}\Big|<\max\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^{-2},\ |k - ik'\begin{itemize}g|^{-2}\Big\}\mathfrak{R}ight\}\] if \begin{eqnarray*}n\leftarrowbel{ifcf} \tilde{g}(\xi,\eta,\mu,\nu;h;k):=L_0/M_0-\frac{K_1/M_1}{L_1/M_1-}\frac{K_2/M_2}{L_2/M_2-}\cdots=0\end{enumerate}qn holds. Otherwise, it converges only on the smaller domain \[\left\{u\in\mathbb{C}:\Big|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd}(u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}\Big|<\min\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^{-2},\ |k - ik'\begin{itemize}g|^{-2}\Big\}\mathfrak{R}ight\}.\] \end{theorem} \begin{proof} For simplicity, let the series expansion $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ be denoted by \[(\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd} u)^{\xi+1}(\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} u)^{\eta+1}(\mathop{\rm dn}} \def\dc{\mathop{\rm dc}} \def\ds{\mathop{\rm ds} u)^{\mu+1}\sum_{m=0}^\infty X_m \varphi_m(u,k).\] As $$ \lim_{m\mathfrak{R}ightarrow\infty}K_m/m^2=\lim_{m\mathfrak{R}ightarrow\infty}M_m/m^2=\frac{1}{4k^2} \mbox{ and } \lim_{m\mathfrak{R}ightarrow\infty}L_m/m^2=\frac{1}{2k^2}-1 $$ it follows from Theorem \mathfrak{R}ef{poin} that $\lim_{m\mathfrak{R}ightarrow\infty}X_{m+1}/X_m$ exists, where the coefficient $X_m$ is defined in \eqref{threeterm}, and is equal to one of the roots of the quadratic equation $$t^2-2(2{k}^2-1)t+1=0,$$ that is, $\lim_{m\mathfrak{R}ightarrow\infty}X_{m+1}/X_m=(k\pm ik')^2.$ In fact, the limit depends on whether the infinite continued fraction $\tilde{g}(\xi,\eta,\mu,\nu;h;k)=0$ holds or not. By Theorem \mathfrak{R}ef{perron}, $$ \lim_{m\mathfrak{R}ightarrow\infty}|X_{m+1}/X_m|= \begin{cases} \min\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^2,\ |k - ik'\begin{itemize}g|^2\Big\}&\mbox{ if (\mathfrak{R}ef{ifcf}) holds}\\ \max\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^2,\ |k - ik'\begin{itemize}g|^2\Big\}&\mbox{ otherwise} \end{cases}. $$ On the other hand, it follows from Theorem \mathfrak{R}ef{asymptotic} that \[ \frac{\varphi_{m+1}(u,k)}{\varphi_m(u,k)}\sim e^{-\zeta} \mbox{ as } m\mathfrak{R}ightarrow\infty, \mbox{ where } \mbox{$\mathop{\rm sn}} \def\sc{\mathop{\rm sc}} \def\sd{\mathop{\rm sd}^2$}(u,k) = \frac{2}{1-\cosh \zeta} \] and hence \[ \lim_{m\mathfrak{R}ightarrow\infty}\frac{\varphi_{m+1}(u,k)}{\varphi_m(u,k)}=\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}. 
\] Thus by the ratio-test, the series expansion converges for $$\left|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}\mathfrak{R}ight|<\begin{cases} \max\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^{-2},\ |k - ik'\begin{itemize}g|^{-2}\Big\} &\mbox{ if (\mathfrak{R}ef{ifcf}) holds}\\ \min\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^{-2},\ |k - ik'\begin{itemize}g|^{-2}\Big\} &\mbox{ otherwise} \end{cases}.$$ \end{proof} \begin{definition} If $h=\hat{h}$ is chosen such that (\mathfrak{R}ef{ifcf}) holds, then $\tilde{Dl}(\xi,\eta,\mu,\nu;h;u,k)$ converges on the larger domain \[\left\{u\in\mathbb{C}:\left|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,k)}\mathfrak{R}ight|< \max\Big\{\begin{itemize}g|k + ik'\begin{itemize}g|^{-2},\ |k - ik'\begin{itemize}g|^{-2}\Big\}\mathfrak{R}ight\}\] and in this case we call the solution the {\it Darboux function}, denoted by $\tilde{Df}(\xi,\eta,\mu,\nu;\hat{h};u,k)$. \end{definition} \begin{remark} When the parameters $\xi$, $\eta$, $\mu$ in the $\tilde{Df}$ become zero (or $-1$), we recover Erd\'elyi's series expansions (see formula (12.1) in \cite{Erdelyi1}). \end{remark} Finally, using the symmetries of the Darboux Equation studied in \cite{CCT}, we also have \begin{theorem} If $h$ is chosen such that $h_X(\xi,\eta,\mu,\nu;h;k)$ satisfies the infinite continued fraction $$\tilde{g}(\sigma_{X_i}(\xi)^{s_\xi},\sigma_{X_i}(\eta)^{s_\eta},\sigma_{X_i}(\mu)^{s_\mu},\sigma_{X_i}(\nu);h_X;\kappa_X(k))=0,$$ then the series $\tilde{Dl}(\sigma_{X_i}(\xi)^{s_\xi},\sigma_{X_i}(\eta)^{s_\eta},\sigma_{X_i}(\mu)^{s_\mu},\sigma_{X_i}(\nu);h_X;\tau_{X_i}(u,k),\kappa_X(k))$ converges on the domain \[\left\{u\in\mathbb{C}:\left|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,\kappa_X(k))}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,\kappa_X(k))}\mathfrak{R}ight|< \max\Big\{\begin{itemize}g|\kappa_X(k) + i\kappa'_X(k)\begin{itemize}g|^{-2},\ |\kappa_X(k) - i\kappa'_X(k)\begin{itemize}g|^{-2}\Big\}\mathfrak{R}ight\}.\] Otherwise, it converges only on the smaller domain \[\left\{u\in\mathbb{C}:\left|\frac{1-\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,\kappa_X(k))}{1+\mathop{\rm cn}} \def\cs{\mathop{\rm cs}} \def\cd{\mathop{\rm cd} (u,\kappa_X(k))}\mathfrak{R}ight|< \min\Big\{\begin{itemize}g|\kappa_X(k) + i\kappa'_X(k)\begin{itemize}g|^{-2},\ |\kappa_X(k) - i\kappa'_X(k)\begin{itemize}g|^{-2}\Big\}\mathfrak{R}ight\}.\] \end{theorem} \appendix \section{Asymptotic Expansions of Hypergeometric Functions} Let $\cosh\zeta=1-2x$. \begin{theorem}[({\cite[\S 6]{Watson}})] \leftarrowbel{asymptotic} \begin{eqnarray*} &&\frac{\Gamma(\alpha-\gamma+1+m)\Gamma(\gamma-\beta+m)} {\Gamma(\alpha-\beta+1+2m)}x^m\sideset{_2}{_1}{\operatorname{F}}\left({\begin{matrix} \alpha+m,\, \alpha-\gamma+1+m\\ \alpha-\beta+1+2m \end{matrix}};\ x\mathfrak{R}ight)\\ &\sim& 2^{\alpha+\beta}x^{-\alpha}(1-e^{-\zeta})^{1/2-\gamma}(1-e^{-\zeta})^{\gamma-\alpha-\beta-1/2} e^{-(\alpha+m)\zeta}\sum_{s=0}^\infty c_s\Gamma(s+1/2)m^{-s-1/2}. \end{eqnarray*} \end{theorem} \section{Results on Three-Term Recursion Relations}\leftarrowbel{S:three term} In this section, we review some useful results about three-term recursion relations. We refer to the readers to Gautschi \cite{Gautschi} for more details. 
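Before stating these results, the following toy computation (ours, in Python, and purely illustrative; the numerical values have nothing to do with the Darboux coefficients) may help fix ideas: for a three-term recursion whose characteristic roots are $2$ and $1/2$, a generic choice of initial data picks up the larger root, while the minimal solution, the one singled out by the vanishing continued fraction in Perron's theorem below, follows the smaller one.
\begin{verbatim}
# Illustration only: R*C[r+1] + S*C[r] + P*C[r-1] = 0 with constant
# coefficients; the characteristic roots of t^2 - 2.5*t + 1 are 2 and 1/2.
R, S, P = 1.0, -2.5, 1.0
C_prev, C = 0.0, 1.0            # C_{-1} = 0, C_0 = 1 (generic initial data)
for r in range(40):
    C_prev, C = C, -(S*C + P*C_prev)/R
print(C/C_prev)                 # ~ 2.0: the ratio tends to the larger root
# The minimal solution C_r = (1/2)**r has ratio 1/2 instead; it is the one
# selected by the vanishing of the continued fraction in Perron's theorem.
\end{verbatim}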
Consider the three-term recursion \[R_rC_{r+1}+S_rC_r+P_rC_{r-1}=0\, (r=0,1,2,\cdots),\] where $C_{-1}=0$ and $R_r\neq0$ for all $r=0,1,2,\cdots$. Assume that the limits $\lim_{r\to\infty} P_r:=P$, $\lim_{r\to\infty} S_r:=S$ and $\lim_{r\to\infty} R_r:=R$ exist. The limit of ${C_{r+1}}/{C_r}$ as $r\to\infty$ can be determined by Poincar\'e's Theorem and Perron's Theorem. \begin{theorem}[(Poincar\'e's Theorem (see \cite{Poincare} or {\cite[p.527]{MT}}))]\label{poin} Let $t_1$ and $t_2$ be the roots of the quadratic equation $Rt^2+St+P=0$, and suppose that $|t_1|\neq|t_2|$. Then \[\lim_{r\to\infty} \frac{C_{r+1}}{C_r}= t_1\ \mbox{ or }\ t_2.\] \end{theorem} \begin{theorem}[(Perron's Theorem (see {\cite[\S 57]{Perron}}))]\label{perron} Suppose that $|t_1|<|t_2|.$ If the infinite continued fraction \[S_0/R_0-\frac{P_1/R_1}{S_1/R_1-}\frac{P_2/R_2}{S_2/R_2-}\cdots=0\] holds, then \[\lim_{r\to\infty} \Bigg|\frac{C_{r+1}}{C_r}\Bigg|= |t_1|.\] Otherwise, \[\lim_{r\to\infty} \Bigg|\frac{C_{r+1}}{C_r}\Bigg|= |t_2|.\] \end{theorem} \end{document}
\begin{document} \title[The rate of increase of mean values] {The rate of increase of mean values of functions in weighted Hardy spaces} \author{Chengji Xiong \& Junming Liu} \address{Department of Mathematics, Shantou University, Shantou, Guangdong, 515063,\newline P.~R.~China} \email{chengji\[email protected],\ [email protected]} \begin{abstract} Let $0<p<\infty$ and $0\leq q<\infty$. For each $f$ in the weighted Hardy space $H_{p,q}$, we show that $d\|f_r\|_{p,q}^p/dr=o\big(1/(1-r)\big)$ as $r\rightarrow 1$. \end{abstract} \keywords{Hardy space, weighted Hardy space, mean value.} \subjclass[2000]{30H10} \maketitle \section{Introduction} Let ${\mathbb D}$ be the unit disk in the complex plane and $0<p<\infty$. The Hardy space $H^{p}(\mathbb{D})$ is the family of all analytic functions $f$ in ${\mathbb D}$ satisfying $$\|f\|_{p}= \lim_{r\rightarrow 1}\|f_{r}\|_{p}<\infty ,$$ where $$\|f_{r}\|_{p}=\Big( \frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta\Big)^{1/p}\ .$$ If $0\leq q<\infty$, define $$\|f_r\|_{p,q}=\Big(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}(1-r^{2})^{q}d\theta \Big)^{1/p}\ .$$ The weighted Hardy space $H_{p,q}(\mathbb D)$ is the family of all analytic functions $f$ in $\mathbb D$ satisfying $$\|f\|_{p,q}=\sup_{0<r<1}\|f_r\|_{p,q}<\infty .$$ Clearly $H_{p,0}$ is just the classical Hardy space $H^p$, and $H_{p,q}$ is a Banach space for $p\ge 1$. Many properties of $H^p$ and $H_{p,q}$ can be found in the books of Duren \cite{D} and Zhu \cite{Zhu}. It is well known that $\|f_{r}\|_{p}$ is a nondecreasing function of $r$. Furthermore, Hardy \cite{GH} showed that $\log\|f_{r}\|_{p}$ is a convex function of $\log r$. Other properties of mean values of analytic functions can be found in \cite{EF, EFW, HS}. Recently, Mashreghi \cite{JM} showed that $d\|f_{r}\|_{p}/dr=o\big(1/(1-r)\big)$ as $r\rightarrow1$. In this paper, we generalize this result to weighted Hardy spaces. \section{Lemmas} Suppose that $f\not\equiv 0$ is analytic in $\mathbb D$. Let $W(z)=|f(z)|^p(1-|z|^2)^q$ for $0<p<\infty$, $0\le q<\infty$, and let $\nabla$ be the gradient operator. \begin{lemma}Let $0<r\le 1$ and $D_r=\{z:|z|<r\}$. For $z_0\in D_r$ and small $\varepsilon >0$, let $D(z_{0}, \varepsilon)=\{z:|z-z_0|\le\varepsilon\}\subset D_r$ and $$I_{\varepsilon}=\int_{\partial D(z_{0}, \varepsilon)}\Big( \log(r/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(r/|z|)}{\partial n} \Big)d\ell .$$ If $z_0\neq 0$ is a zero of $f$, then $\lim_{\varepsilon\rightarrow 0}I_{\varepsilon}=0$. If $z_0=0$, then $\lim_{\varepsilon\rightarrow 0}I_{\varepsilon}=2\pi |f(0)|^p$.
\end{lemma} \begin{proof} Since $W(z)=|f(z)|^p(1-|z|^2)^q$, direct computation gives $$\nabla W(z)=(1-|z|^2)^q\nabla |f(z)|^p+|f(z)|^p\nabla (1-|z|^2)^q$$ and $$|\nabla |f(z)|^p|\le p|f(z)|^{p-1}|f'(z)| ,\quad |\nabla (1-|z|^2)^q|=2q|z|(1-|z|^2)^{q-1} .$$ So $$\Big| \frac{\partial W(z)}{\partial n} \Big|\le\big| \nabla W(z) \big|\le (1-|z|^2)^q|\nabla |f(z)|^p|+|f(z)|^p|\nabla (1-|z|^2)^q|$$ $$\le p|f(z)|^{p-1}|f'(z)|(1-|z|^2)^q+2q|z||f(z)|^p(1-|z|^2)^{q-1}$$ and $$\Big|\frac{\partial\log(r/|z|)}{\partial n}\Big|\le |\nabla \log(r/|z|)|=\frac{1}{|z|}.$$ Write \begin{eqnarray*} I_{\varepsilon} &=& \int_{\partial D(z_{0}, \varepsilon)}\Big( \log(r/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(r/|z|)}{\partial n} \Big)d\ell \\ &=& \int_{0}^{2\pi}\log(r/|z_0+\varepsilon e^{i\theta}|)\frac{\partial W(z_0+\varepsilon e^{i\theta})}{\partial n}\varepsilon d\theta - \int_{\partial D(z_{0}, \varepsilon)}W(z)\frac{\partial\log(r/|z|)}{\partial n}d\ell \\ &=& I_1-I_2. \end{eqnarray*} For convenience, let $C>0$ be a constant independent of $\varepsilon$, whose value may change from line to line. If $z_0\neq 0$ is a zero of order $k\ge 1$, then $|I_1|\le C\varepsilon^{kp}$ and $|I_2|\le C\varepsilon^{k+1}$. Thus $\lim_{\varepsilon\rightarrow 0}I_{\varepsilon}(z_0)=0$. If $z_0=0$ and $f(z_0)\neq 0$, then $|I_1|\le \varepsilon\log (r/\varepsilon)$ and $$-I_2=\int_{0}^{2\pi}W(\varepsilon e^{i\theta})d\theta=(1-\varepsilon^2)^q\int_{0}^{2\pi}|f(\varepsilon e^{i\theta})|^p d\theta .$$ Hence $\lim_{\varepsilon\rightarrow 0}I_{\varepsilon}=2\pi |f(0)|^p$. At last, if $z_0=0$ is a zero of $f$ of order $k\ge 1$, then $|I_1|\le C\varepsilon^{kp}\log (r/\varepsilon)$ and $|I_2|\le C\varepsilon ^{kp}$. We have $\lim_{\varepsilon\rightarrow 0}I_{\varepsilon}=2\pi |f(0)|^p$ also. \end{proof} \begin{lemma} Let $f\in H_{p,\ q}(\mathbb{D})$, $0<p<\infty$, $0\leq q<\infty$, and $f\not\equiv 0$. Then $$\lim_{r\rightarrow 1}\int_{0}^{2\pi}|f(re^{i\theta})|^{p} (1-r^{2})^{q}d\theta - 2\pi |f(0)|^{p} =\int_{|z|<1}\log(1/|z|)G(z) dxdy\ ,$$ where $$G(z)=(1-|z|^{2})^{q}\nabla^{2}|f|^{p}+2\nabla|f|^{p}\cdot \nabla(1-|z|^{2})^{q} +|f|^{p}\nabla^{2}(1-|z|^{2})^{q}.$$ \end{lemma} \begin{proof}Since $f\not\equiv 0$ is analytic in $\mathbb D$, for any $0<R<1$, we can choose $r$, $R<r<1$ such that $f$ has finite many zeros in $D_r=\{|z|<r\}$ and no zeros on the circle $\partial D_r$. Let $\{z_1,z_2,\cdots,z_n\}$ be the set consisting of $0$ and all zeros of $f$ in $D_r$. Take $\varepsilon >0$ so small that $\{z: |z-z_k|\le\varepsilon\}\subset D_r$ for $1\le k\le n$, and all these disks are disjoint. Denote $$\Omega=D_r\backslash\bigcup_{k=1}^{n}\{z:|z-z_k|\le\varepsilon\}.$$ The function $W(z)=|f(z)|^{p}(1-|z|^{2})^{q}$ is infinitely differentiable in a neighborhood of $\overline{\Omega}$. Then by Green's theorem, we have $$\int_{\Omega}\log(r/|z|)\nabla^{2}W(z)dxdy=\int_{\partial \Omega} \Big( \log(r/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(r/|z|)}{\partial n} \Big)d\ell . \eqno{(2.1)}$$ Direct computation gives $$\nabla^{2}W(z)=(1-|z|^{2})^{q}\nabla^{2}|f|^{p}+2\nabla|f|^{p}\cdot \nabla(1-|z|^{2})^{q} +|f|^{p}\nabla^{2}(1-|z|^{2})^{q}=G(z).$$ The direction of every small circle in $\partial\Omega$ is clockwise. 
For $\varepsilon\rightarrow 0$, by Lemma 1, formula (2.1) becomes $$\int_{\partial D_r} \Big( \log(r/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(r/|z|)}{\partial n} \Big)d\ell-2\pi |f(0)|^p=\int_{D_r}\log(r/|z|)G(z) dxdy.$$ The left-hand side of the above formula is equal to $$\int_{0}^{2\pi}W(re^{i\theta})d\theta-2\pi |f(0)|^p.$$ Letting $r\rightarrow 1$, we obtain the lemma. \end{proof} \section{The rate of increase of $\|f_{r}\|_{p,q}^p$} \begin{theorem} Let $f\in H_{p,q}(\mathbb{D})$ with $f\not\equiv 0$. Then $$\frac{d\|f_r\|_{p,q}^p}{dr}=o\big(1/(1-r)\big) \mbox{ as } r\rightarrow 1.$$ \end{theorem} \begin{proof}As in the proof of Lemma 2, we choose suitable $r$ and $\varepsilon$. Replacing $\log (r/|z|)$ by $\log (1/|z|)$ and applying Green's theorem, we have $$\int_{\Omega}\log(1/|z|)\nabla^{2}W(z)dxdy=\int_{\partial \Omega} \Big( \log(1/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(1/|z|)}{\partial n} \Big)d\ell . \eqno{(3.1)}$$ For $\varepsilon\rightarrow 0$, using Lemma 1 for $r=1$, formula (3.1) turns into $$\int_{D_r}\log(1/|z|)G(z)dxdy=\int_{\partial D_r} \Big( \log(1/|z|)\frac{\partial W}{\partial n}-W\frac{\partial\log(1/|z|)}{\partial n} \Big)d\ell-2\pi|f(0)|^p$$ $$=\int_{0}^{2\pi}\Big(\log\frac{1}{r}\frac{\partial W}{\partial r}(re^{i\theta})+\frac{W(re^{i\theta})}{r}\Big)rd\theta-2\pi|f(0)|^p$$ $$=\int_{0}^{2\pi}W(re^{i\theta})d\theta-r\log r\int_{0}^{2\pi}\frac{\partial W}{\partial r}(re^{i\theta})d\theta-2\pi|f(0)|^p$$ $$=\int_{0}^{2\pi}|f(re^{i\theta})|^p(1-r^2)^qd\theta-2\pi r\log r\frac{d\|f_r\|_{p,q}^p}{dr}-2\pi |f(0)|^p.$$ Now let $r\rightarrow 1$. By Lemma 2, we have $$\lim_{r\rightarrow 1}\log r\frac{d\|f_r\|_{p,q}^p}{dr}=0.$$ Since $-\log r\sim 1-r$ as $r\rightarrow 1$, this gives $$\frac{d\|f_r\|_{p,q}^p}{dr}=o\big(1/\log(1/r)\big)=o\big(1/(1-r)\big) \mbox{ as } r\rightarrow 1.$$ \end{proof} By Theorem 1, if $\lim_{r\rightarrow 1}\|f_r\|_{p,q}\neq 0$, or if $q=0$ (so that $\|f_r\|_{p,q}$ is nondecreasing in $r$), then we also get $d\|f_r\|_{p,q}/dr=o\big(1/(1-r)\big)$ as $r\rightarrow 1$. In the proof of Theorem 1, using $1$ instead of $\log (1/|z|)$, a similar argument yields $$2\pi r\frac{d\|f_r\|_{p,q}^p}{dr}=\int_{|z|<r}G(z)dxdy.$$ \begin{corollary} If $f\in H_{p,q}$ with $0<p<\infty,\ 0\leq q<\infty$, then we have $$2\lim_{r\rightarrow 1}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}(1-r^{2})^{q}d\theta= \int_{\mathbb D}(1-|z|^{2})\nabla^{2}\Big(|f(z)|^{p}(1-|z|^{2})^{q}\Big)dxdy$$ $$+\ 4\int_{\mathbb{D}}|f(z)|^{p}(1-|z|^{2})^{q}dxdy. \eqno{(3.2)}$$ \end{corollary} \begin{proof}Denote $$J_{\varepsilon}=\int_{\partial D(z_{0}, \varepsilon)}\Big( (1-|z|^{2})\frac{\partial W} {\partial n}-W\frac{\partial(1-|z|^{2})}{\partial n} \Big)d\ell ,$$ where $z_0$ is a zero of $f$. By arguments similar to those in the proof of Lemma 1, we have $\lim_{\varepsilon\rightarrow 0}J_{\varepsilon}=0$. Choosing suitable $r$ and $\varepsilon$ as in the proof of Theorem 1, by Green's theorem we have $$\int_{\Omega}\Big((1-|z|^{2})\nabla^{2}W(z)-\nabla^{2}(1-|z|^{2})\cdot W(z)\Big)dxdy=\int_{\partial \Omega} \Big( (1-|z|^{2})\frac{\partial W}{\partial n}-W\frac{\partial(1-|z|^{2})}{\partial n} \Big)d\ell .$$ As $\varepsilon\rightarrow 0$, the above formula turns into $$\int_{D_r}\Big((1-|z|^{2})\nabla^{2}W(z)+4W(z)\Big)dxdy=\int_{\partial D_r} \Big( (1-|z|^{2})\frac{\partial W}{\partial n}-W\frac{\partial(1-|z|^{2})}{\partial n} \Big)d\ell$$ $$=\int_{0}^{2\pi}\Big((1-r^{2})\frac{\partial W}{\partial r}(re^{i\theta})+2rW(re^{i\theta}) \Big)rd\theta .
\eqno{(3.3)}$$ By Theorem 1, we have $$\lim_{r\rightarrow 1}\int_{0}^{2\pi}r(1-r^{2})\frac{\partial W}{\partial r}(re^{i\theta})d\theta=2\lim_{r\rightarrow 1}\left[(1-r)\frac{\partial}{\partial r}\int_{0}^{2\pi}W(re^{i\theta})d\theta\right]=0.$$ The proof is completed by letting $r\rightarrow 1$ in formula (3.3). \end{proof} Note that $\nabla^{2}|f|^p=p^2|f|^{p-2}|f'|^2$. Taking $q=0$ in formula (3.2), we obtain $$\int_{0}^{2\pi}|f(e^{i\theta})|^{p}d\theta=\frac{p^2}{2} \int_{\mathbb D}(1-|z|^{2})|f(z)|^{p-2}|f'(z)|^{2}dxdy+ 2\int_{\mathbb D}|f(z)|^{p}dxdy ,$$ which is reminiscent of the Hardy--Stein identity $$\frac{1}{2\pi}\int_0^{2\pi}|f(e^{i\theta})|^pd\theta=|f(0)|^p+\frac{p^2}{2\pi}\int_{\mathbb D}\log\frac{1}{|z|}|f(z)|^{p-2}|f'(z)|^2dxdy.$$ \end{document}
\begin{document} \markboth{J. Brust et al.}{SC-SR1: MATLAB Software for Solving Shape-Changing L-SR1 Trust-Region Subproblems} \title[Algorithm xxx: SC-SR1: MATLAB Software for Limited-Memory SR1 Trust-Region Methods] {Algorithm xxx: SC-SR1: MATLAB Software for Limited-Memory SR1 Trust-Region Methods} \author[J. Brust]{Johannes J. Brust} \email{[email protected]} \address{Argonne National Laboratory} \author[O. Burdakov]{Oleg Burdakov} \email{[email protected]} \address{Link\"{o}ping University} \author[J. Erway]{Jennifer B. Erway} \email{[email protected]} \address{Wake Forest University} \author[R. Marcia]{Roummel F. Marcia} \email{[email protected]} \address{University of California, Merced} \thanks{ This research is supported in part by National Science Foundation grants CMMI-1333326, CMMI-1334042, IIS-1741264, and IIS-1741490, and the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) under Contract DE-AC02-06CH11347.} \begin{abstract} We present a MATLAB implementation of the shape-changing symmetric rank-one (SC-SR1) method, which solves trust-region subproblems in which a limited-memory symmetric rank-one (L-SR1) matrix is used in place of the true Hessian matrix, as is common in large-scale optimization. The method takes advantage of two shape-changing norms~\cite{BYuan02,Burdakov2016} to decompose the trust-region subproblem into two separate problems. Using one of the proposed norms, the resulting subproblems have closed-form solutions. Using the other proposed norm, one of the resulting subproblems has a closed-form solution, while the other is easily solvable using techniques that exploit the structure of L-SR1 matrices. Numerical results suggest that the SC-SR1 method is able to solve trust-region subproblems to high accuracy even in the so-called ``hard case''.
{\color{black} When integrated into a trust-region algorithm, extensive numerical experiments suggest that the proposed algorithms perform well, when compared with widely used solvers, such as truncated CG.} \end{abstract} \keywords{Large-scale unconstrained optimization, trust-region methods, limited-memory quasi-Newton methods, symmetric rank-one update, shape-changing norm} \maketitle \makeatletter \def{\small BFGS}{{\small BFGS}} \def{\small L-BFGS}{{\small L-BFGS}} \def{\small L-SR1}{{\small L-SR1}} \def{\small SR1}{{\small SR1}} \def{\small CG}{{\small CG}} \def{\small DFP}{{\small DFP}} \def{\small LMTR}{{\small LMTR}} \def{\small OBS}{{\small OBS}} \def{\small OBS}SC{{\small OBS-SC}} \def{\small PSB}{{\small PSB}} \def{\small SC-SR1}{{\small SC-SR1}} \def{\small QR}{{\small QR}} \def{\small MATLAB}{{\small MATLAB}} \newcommand{\mathop{\operator@font{minimize}}ize}[1]{{\displaystyle\mathop{\operator@font{minimize}}_{#1}}} \newcommand{\mathop{\operator@font{minimize}}}{\mathop{\operator@font{minimize}}} \newcommand{\;\;}{\;\;} \newcommand{\;\;\;}{\;\;\;} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{\defined}{\mathop{\,{\scriptstyle\stackrel{\triangle}{=}}}\,} \newcommand{\mathop{\operator@font{subject\ to}}}{\mathop{\operator@font{subject\ to}}} \newcommand{\words}[1]{\;\;\text{#1}\;\;} \newcommand\diag{\mathop{\operator@font diag}\nolimits} \newcommand{\rfm}[1]{\textcolor{black}{#1}} \newcommand{\obs}[1]{\textcolor{black}{#1}} \newcommand{\je}[1]{\textcolor{black}{#1}} \newcommand{\jeo}[1]{\textcolor{black}{#1}} \newcommand{\jb}[1]{\textcolor{black}{#1}} \newcommand{\jbo}[1]{\textcolor{black}{#1}} \renewcommand{PROCEDURE}{ALGORITHM} \newcommand{\mathbf{p}}{\mathbf{p}} \newcommand{\mathbf{v}}{\mathbf{v}} \newcommand{\bfm}[1]{\mathbf{#1}} \newcommand{\bk}[1]{\mathbf{#1}_k} \newcommand{\bko}[1]{\mathbf{#1}_{k+1}} \newcommand{\bsk}[1]{\boldsymbol{#1}_k} \newcommand{\bsko}[1]{\boldsymbol{#1}_{k+1}} \newcommand{\bs}[1]{\boldsymbol{#1}} \newcounter{pseudocode}[section] \def\thesection.\arabic{pseudocode}{\thesection.\arabic{pseudocode}} \newenvironment{pseudocode}[2] { \refstepcounter{pseudocode} \AlgBegin {{\bfseries Algorithm \thesection.\arabic{pseudocode}.}\rule[-1.25pt]{0pt}{10pt}#1} #2} {\AlgEnd} \newcounter{Pseudocode}[section] \def\thesection.\arabic{Pseudocode}{\thesection.\arabic{Pseudocode}} \newenvironment{Pseudocode}[2] { \refstepcounter{Pseudocode} \AlgBegin {{\bfseries #1.}\rule[-1.25pt]{0pt}{10pt}} #2} {\AlgEnd} \def\tilde{\nu}{\tilde{\nu}} \newcounter{procedureC} \newcounter{algorithm saved} \newenvironment{procedureAlg}[1][H]{ \setcounter{algorithm saved}{\value{algocf}} \setcounter{algocf}{\value{procedureC}} \renewcommand{PROCEDURE}{PROCEDURE} \begin{algorithm}[#1] }{\end{algorithm} \setcounter{procedureC}{\value{algocf}} \setcounter{algocf}{\value{algorithm saved}} } \makeatother \section{Introduction} \label{intro} At each iteration of a trust-region method for minimizing a general nonconvex function $f(\mathbf{x})$, the so-called \emph{trust-region subproblem} must be solved to obtain a step direction: \begin{equation} \label{eq:trustProblem} \mathop{\operator@font{minimize}}ize{ \mathbf{p}\in\mathbb{R}^n}\;\;{\mathcal Q}\left(\mathbf{p}\right) \defined \mathbf{g}^T\mathbf{p} + \frac{1}{2} \mathbf{p}^T\mathbf{B} \mathbf{p} \;\;\;\mathop{\operator@font{subject\ to}} \;\; \|\mathbf{p}\| \le \delta, \end{equation} where $\mathbf{g}\defined\nabla f\left(\mathbf{x}_k\right)$, $\mathbf{B}$ is an approximation to $\nabla^2 f\left(\mathbf{x}_k\right)$, $\delta$ is a 
positive constant, and $\|\cdot\|$ is a given norm. In this article, we describe a {\small MATLAB}{} implementation for solving the trust-region subproblem (\ref{eq:trustProblem}) when $\bfm{B}$ is a limited-memory symmetric rank-one ({\small L-SR1}) matrix approximation of $\nabla^2 f(\bfm{x}_k)$. In large-scale optimization, solving (\ref{eq:trustProblem}) represents the bulk of the computational effort in trust-region methods. The norm used in (\ref{eq:trustProblem}) not only defines the trust-region shape but also determines the difficulty of solving each subproblem. The most widely-used norm chosen to define the trust-region subproblem is the two-norm. One reason for this choice of norm is that the necessary and sufficient conditions for a global solution to the subproblem defined by the two-norm are well known~\cite{Gay81,MorS83,Sor82}; many methods exploit these conditions to compute high-accuracy solutions to the trust-region subproblem (see, e.g., \cite{EG10,EGG09,ErwayM14,Gould2010,BruEM15,MorS83}). The infinity-norm is sometimes used to define the subproblem; however, when $\mathbf{B}$ is indefinite, as can be the case when $\mathbf{B}$ is an {\small L-SR1}{} matrix, the subproblem is NP-hard~\cite{MK87,Vav92}. For more discussion of other choices of norm, we refer the reader to~\cite{ConGT00a}. In this article, we consider the trust-region subproblems defined by \emph{shape-changing} norms originally proposed in~\cite{BYuan02}. Generally speaking, shape-changing norms are norms that depend on $\mathbf{B}$; thus, in the quasi-Newton setting, where the quasi-Newton matrix $\mathbf{B}$ is updated at each iteration, the shape of the trust region changes from one iteration to the next. One of the earliest references to shape-changing norms is found in \cite{Gold80}, where a norm is implicitly defined by the product of a permutation matrix and a unit lower triangular matrix that arise from a symmetric indefinite factorization of $\mathbf{B}$. Perhaps the most widely-used shape-changing norm is the so-called ``elliptic norm'' given by $\|\mathbf{x}\|_{\mathbf{A}}\defined \sqrt{\mathbf{x}^T\mathbf{A}\mathbf{x}}$, where $\mathbf{A}$ is a positive-definite matrix (see, e.g.,~\cite{ConGT00a}). A well-known use of this norm is found in the Steihaug method~\cite{Ste83}, and, more generally, truncated preconditioned conjugate-gradients ({\small CG})~\cite{ConGT00a}; these methods reformulate a two-norm trust-region subproblem using an elliptic norm to maintain the property that the iterates from preconditioned {\small CG}{} are increasing in norm. Other examples of shape-changing norms include those defined by vectors in the span of $\mathbf{B}$ (see, e.g.,~\cite{ConGT00a}). The shape-changing norms proposed in~\cite{BYuan02,Burdakov2016} have the advantage of breaking the trust-region subproblem into two separate subproblems. Using one of the proposed shape-changing norms, the subproblem has a closed-form solution. Using the other proposed norm, one of the subproblems has a closed-form solution while the other is easily solvable. The publicly-available {\small LMTR}{} codes~\cite{Burdakov2018} solve trust-region subproblems (\ref{eq:trustProblem}) defined by these shape-changing norms and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) updates of $\mathbf{B}$. To our knowledge, there are no other implementations for solving trust-region subproblems defined by these shape-changing norms.
\subsection{Overview of the proposed method} In this paper, we \jb{develop} a {\small MATLAB}{} implementation for solving trust-region \jb{(TR)} subproblems defined by the two shape-changing norms \jb{described} in~\cite{BYuan02} when {\small L-SR1}{} approximations to the Hessian are used instead of {\small L-BFGS}{} approximations. \jb{For limited-memory algorithms, a re-scaling strategy (i.e., effectively re-initializing the Hessian approximation at each iteration) is often important for the practical performance of the method. Yet, because the structure of {\small L-SR1}{} matrices can be exploited to reduce the memory usage even further when a constant initialization is used (i.e., no re-scaling), we provide an option to choose between the two strategies. Moreover, our implementation enables the testing and addition of new solvers by swapping out the respective TR subproblem algorithm. Using this mechanism, we conduct numerical experiments on large-scale CUTEst problems \cite{GouOT03}, comparing the shape-changing methods to truncated CG and an $\ell_2$-norm based algorithm.} The proposed method, called the shape-changing SR1 method ({\small SC-SR1}), \jb{enables high-accuracy subproblem solutions by exploiting the structure of {\small L-SR1}{} matrices}. This paper is organized as follows: In Section 2, we review {\small L-SR1}{} matrices, including the compact representation for these matrices and a method to efficiently compute their eigenvalues and a partial eigenbasis. In Section 3, we demonstrate how the shape-changing norms decouple the original trust-region subproblem into two problems and describe the proposed solver for each subproblem. Finally, for each shape-changing norm, we show how to construct a global solution to (\ref{eq:trustProblem}) from the solutions of the two decoupled subproblems. Optimality conditions are presented for each of these decoupled subproblems in Section 4. In Section 5, we demonstrate the accuracy of the proposed solvers, \jb{and compare them on a collection of large-scale optimization problems.} Concluding remarks can be found in Section 6. \subsection{Notation} In this article, the identity matrix of dimension $d$ is denoted by $\mathbf{I}_d = \left[ \mathbf{e}_1| \cdots | \mathbf{e}_d \right]$, and depending on the context the subscript $d$ may be suppressed. Finally, we assume that all {\small L-SR1}{} updates are computed so that the {\small L-SR1}{} matrix is well defined. \section{L-SR1 matrices} Suppose $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is a smooth objective function and $\{ {\bfm{x}}_i\}$, $i=0,\ldots,k$, is a sequence of iterates. Then the symmetric rank-one ({\small SR1}) matrix is defined using pairs $(\bfm{s}_i,\bfm{y}_i)$ where $$ \mathbf{s}_i\defined \mathbf{x}_{i+1} - \mathbf{x}_i \quad \text{and} \quad \mathbf{y}_i \defined \nabla f(\mathbf{x}_{i+1}) - \nabla f(\mathbf{x}_{i}), $$ and $\nabla f$ denotes the gradient of $f$. Specifically, given an initial matrix $\mathbf{B}_0$, $\mathbf{B}_{k+1}$ is defined recursively as \begin{equation}\label{eq:sr1} \mathbf{B}_{k+1} \defined \mathbf{B}_k + \frac{(\mathbf{y}_k- \mathbf{B}_k\mathbf{s}_k)(\mathbf{y}_k - \mathbf{B}_k\mathbf{s}_k)^T}{(\mathbf{y}_k - \mathbf{B}_k\mathbf{s}_k)^T\mathbf{s}_k}, \end{equation} provided $(\mathbf{y}_k - \mathbf{B}_k\mathbf{s}_k)^T\mathbf{s}_k \ne 0$.
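For concreteness, a single dense SR1 update, together with a commonly used safeguard (not required by the text above) that skips the update when this denominator is too small, might be sketched in MATLAB as follows; the function name and tolerance are illustrative and are not part of the released {\small SC-SR1}{} code.

\begin{verbatim}
% Illustrative dense SR1 update of B from the pair (s,y); not the
% limited-memory routine of the SC-SR1 package.
function B = sr1Update(B, s, y, tol)
  if nargin < 4, tol = 1.0e-8; end
  r = y - B*s;                      % residual y - B*s
  d = r'*s;                         % denominator of the SR1 update
  if abs(d) > tol*norm(s)*norm(r)   % skip the update if d is too small
      B = B + (r*r')/d;             % rank-one correction
  end
end
\end{verbatim}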
In practice, $\mathbf{B}_0 = \bfm{B}^{(k)}_0$ is often taken to be a scalar multiple of the identity matrix \jb{that re-scales $\bk{B}$ each iteration}; for the duration of this article we assume that $\mathbf{B}_0=\gamma_k \mathbf{I}$, $\gamma_k\in\mathbb{R}$. \emph{Limited-memory} symmetric rank-one matrices ({\small L-SR1}) store and make use of only the $m$ most-recently computed pairs $\{(\mathbf{s}_i,\mathbf{y}_i)\}$, where $m\ll n$ (for example, Byrd et al.~\cite{ByrNS94} suggest $m\in [3,7]$). For simplicity of notation, we assume that the current iteration number $k$ is less than the number of allowed stored limited-memory pairs $m$. The {\small SR1}{} update is a member of the Broyden class of updates (see, e.g.,~\cite{NocW06}). Unlike widely-used updates such as the Broyden-Fletcher-Goldfarb-Shanno ({\small BFGS}) and the Davidon-Fletcher-Powell ({\small DFP}) updates, this update can yield indefinite matrices; that is, {\small SR1}{} matrices can incorporate negative curvature information. In fact, the {\small SR1}{} update has convergence properties superior to those of other widely-used quasi-Newton updates that maintain positive definiteness, such as {\small BFGS}{}; in particular,~\cite{ConnGT91} give conditions under which the {\small SR1}{} update formula generates a sequence of matrices that converge to the true Hessian. (For more background on the {\small SR1}{} update formula, see, e.g.,~\cite{GrNaS09,KelS98,KhaBS93,NocW06,SunY06,Wol94}.) \subsection{Compact representation} The compact representation of {\small SR1}{} matrices can be used to compute the eigenvalues and a partial eigenbasis of these matrices. In this section, we review the compact formulation of {\small SR1}{} matrices. To begin, we define the following matrices: \begin{eqnarray*} \mathbf{S}_k &\defined& [ \ \mathbf{s}_0 \ \ \mathbf{s}_1 \ \ \mathbf{s}_2 \ \ \cdots \ \ \mathbf{s}_{k-1} \ ] \ \in \ \mathbb{R}^{n \times k}, \\ \mathbf{Y}_k &\defined& [ \ \mathbf{y}_0 \ \ \mathbf{y}_1 \ \ \mathbf{y}_2 \ \ \cdots \ \ \mathbf{y}_{k-1} \ ] \ \in \ \mathbb{R}^{n \times k}. \end{eqnarray*} The matrix $\mathbf{S}_k^T\mathbf{Y}_k\in\mathbb{R}^{k \times k}$ can be written as the sum of the following three matrices: $$ \mathbf{S}_k^T\mathbf{Y}_k = \mathbf{L}_k + \mathbf{D}_k + \mathbf{R}_k, $$ where $\mathbf{L}_k$ is strictly lower triangular, $\mathbf{D}_k$ is diagonal, and $\mathbf{R}_k$ is strictly upper triangular. Then, $\mathbf{B}_{k}$ can be written as \begin{equation}\label{eq:form} \mathbf{B}_{k} \ = \ \gamma_{k}\mathbf{I} + \mathbf{\Psi}_k \mathbf{M}_k \mathbf{\Psi}_k^T, \end{equation} where $\mathbf{\Psi}_k \in \mathbb{R}^{n \times k}$ and $\mathbf{M}_k \in \mathbb{R}^{k \times k}$. In particular, $\mathbf{\Psi}_k$ and $\mathbf{M}_k$ are given by \begin{equation}\label{eqn-PsiM} \mathbf{\Psi}_k \ = \ \mathbf{Y}_k - \gamma_k\mathbf{S}_k \quad \text{and} \quad \mathbf{M}_k \ = \ (\mathbf{D}_k + \mathbf{L}_k + \mathbf{L}_k^T - \gamma_k\mathbf{S}_k^T\mathbf{S}_k)^{-1}. \end{equation} The right-hand side of equation (\ref{eq:form}) is the \emph{compact representation} of $\mathbf{B}_{k}$; this representation is due to Byrd et al.~\cite[Theorem 5.1]{ByrNS94}. For the duration of this paper, we assume that updates are \jb{made} only when the next {\small SR1}{} matrix $\mathbf{B}_{k}$ is well-defined and $\mathbf{M}_k$ exists \cite[Theorem 5.1]{ByrNS94}.
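As a point of reference, the compact representation (\ref{eq:form})--(\ref{eqn-PsiM}) can be assembled directly from the stored pairs. A minimal MATLAB sketch (with illustrative identifiers that differ from those of the released code, and without any limited-memory bookkeeping) is:

\begin{verbatim}
% S, Y are n-by-k, gamma is a scalar; v is an arbitrary n-vector.
Psi  = Y - gamma*S;                   % n-by-k
SY   = S'*Y;                          % k-by-k
D    = diag(diag(SY));                % diagonal part of S'*Y
L    = tril(SY,-1);                   % strictly lower triangular part
Minv = D + L + L' - gamma*(S'*S);     % inverse of M_k
Bv   = gamma*v + Psi*(Minv\(Psi'*v)); % product B_k*v without forming B_k
\end{verbatim}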
For notational simplicity, we assume $\mathbf{\Psi}_k$ has full column rank; when $\mathbf{\Psi}_k$ does not have full column rank, we refer to~\cite{Burdakov2016} for the modifications needed for computing the eigenvalues, which we also review later in this section. Notice that computing $\mathbf{M}_k$ is computationally inexpensive, since it is the inverse of a very small symmetric square matrix. \subsection{\jb{Limited-Memory Updating}}\label{sec-lm} \jb{For large optimization problems, limited-memory approaches store only a small number of vectors to define the {\small L-SR1}{} representations. Depending on the initialization strategy, specifically whether $ \gamma_k $ varies between iterations or is constant ($ \gamma_k = \bar{\gamma} $), the matrices in \eqref{eqn-PsiM} can be stored and updated efficiently. We describe these techniques in the following subsections. By setting the parameter $m \ll n$, limited-memory techniques keep the computations inexpensive; at each iteration, one column of $ \bfm{Y}_k $ and of $ \bfm{S}_k $ is inserted or replaced. Let an underline below a matrix represent the matrix with its first column removed. That is, $ \underline{\bfm{S}}_k $ represents $ \bfm{S}_k $ without its first column. With this notation, a column update of a matrix, say $ \bk{S} $, by a vector $ \bk{s} $ is defined as follows. \begin{equation*} \text{colUpdate}\left(\bk{S},\bk{s} \right) \defined \begin{cases} [\: \bk{S} \: \bk{s}\: ] & \text{ if } k < m \\ [\: \underline{\bfm{S}}_k \: \bk{s}\: ] & \text{ if } k \ge m. \\ \end{cases} \end{equation*} This column update can be implemented efficiently, without copying large amounts of memory, by appropriately updating a vector that stores index information (``mIdx''). A function to do so is described in Procedure \ref{alg-colUpdate}: \begin{procedureAlg} \caption{Limited-memory column updating of $ \bk{S} $ by the vector $ \bk{s} $}\label{alg-colUpdate} \begin{algorithmic}[1] \ENSURE [$\bk{S}$,\text{mIdx}]=\texttt{colUpdate}($\bk{S}$, $\bk{s}$, mIdx, $m$, $k$); \IF {$k = 0$} \STATE{$\text{mIdx} \gets \text{zeros}(m,1)$;} \ENDIF \IF {$ k < m $} \STATE{$ \text{mIdx}(k+1) \gets k+1$;} \STATE{$ \bk{S}(:,\text{mIdx}(k+1)) \gets \bk{s} $;} \ELSIF {$m \le k$} \STATE{$ k_m \gets \text{mIdx}(1) $;} \STATE{$ \text{mIdx}(1:(m-1)) \gets \text{mIdx}(2:m) $;} \STATE{$ \text{mIdx}(m) \gets k_m $;} \STATE{$ \bk{S}(:,\text{mIdx}(m)) \gets \bk{s} $;} \ENDIF \RETURN $\bk{S}$, $\text{mIdx}$; \end{algorithmic} \end{procedureAlg} Note that this procedure does not copy (or overwrite) large blocks of memory as would commands such as $ \{ \bk{S}(:,1:(m-1)) \gets \bk{S}(:,2:m); \bk{S}(:,m) \gets \bk{s} \} $, but instead accesses the relevant locations using a stored vector of indices. Certain matrix products can also be updated efficiently. In particular, the product $ \bk{S}^T \bk{Y} $ does not have to be re-computed from scratch. In order to describe the matrix product updating mechanism, let an overline above a matrix represent the matrix with its first row removed. That is, $ \overline{\bfm{S}^{T}_k \bfm{Y}}_k $ represents $ \bfm{S}^{T}_k \bk{Y} $ without its first row.
With this notation, a product update of, say, $ \bk{S}^T\bk{Y} $, by matrices $ \bk{S} $, $\bk{Y} $ and vectors $ \bk{s} $, $\bk{y}$ is defined as: \begin{equation*} \text{prodUpdate} \left( \bk{S}^T\bk{Y}, \bk{S}, \bk{Y}, \bk{s}, \bk{y} \right) \defined \begin{cases} \left[ \begin{array}{ c c } \bk{S}^T\bk{Y} & \bk{S}^T\bk{y} \\ \bk{s}^T\bk{Y} & \bk{s}^T \bk{y} \end{array} \right] & \text{ if } k < m \\ \left[ \begin{array}{ c c } \left(\underline{\overline{\bfm{S}^{T}_k \bfm{Y}_k}}\right) & \underline{\bfm{S}}_k^T\bk{y} \\ \bk{s}^T \underline{\bfm{Y}}_k & \bk{s}^T \bk{y} \end{array} \right] & \text{ if } k \ge m .\\ \end{cases} \end{equation*} This product update can be implemented without recomputing potentially large multiplications, by storing previous products and information about the column order in $ \bk{S} $ and $ \bk{Y} $. In particular, updating the matrix product is based on storing $ \bk{S}^T \bk{Y} $, $ \bk{S}, \bk{Y} $ and the vector ``mIdx''. Although a different order is possible, we apply the product update after the column updates of $ \bk{S}, \bk{Y} $ have been performed. In this situation, the vector that stores the appropriate index information (``mIdx'') is already defined at that point.}\\ \jb{\begin{procedureAlg} \caption{Limited-memory product update $ \bk{S}^T \bk{Y} $ ($ \bk{s} $, $ \bk{y} $ are column updates to $ \bk{S} $, $ \bk{Y} $) }\label{alg-prodUpdate} \begin{algorithmic}[1] \ENSURE [$\bk{S}^T\bk{Y}$]=\texttt{prodUpdate}($\bk{S}^T\bk{Y}$, $\bk{S}$, $\bk{Y}$, $ \bk{s} $, $ \bk{y} $, mIdx, $m$, $k$); \IF {$ k < m $} \STATE{$\bk{S}^T\bk{Y}(1:(k+1),k+1) \gets \bk{S}(:,\text{mIdx}(1:(k+1)))^T \bk{y}$;} \STATE{$\bk{S}^T\bk{Y}(k+1,1:k) \gets \bk{s}^T\bk{Y}(:,\text{mIdx}(1:k))$;} \ELSIF {$m \le k$} \STATE{$ \bk{S}^T\bk{Y}(1:(m-1),1:(m-1)) \gets \bk{S}^T\bk{Y}(2:m,2:m) $; \label{alg-prodUpdate:0}} \STATE{$ \bk{S}^T\bk{Y}(1:m,m) \gets \bk{S}(:,\text{mIdx}(1:m))^T \bk{y} $; \label{alg-prodUpdate:1}} \STATE{$\bk{S}^T\bk{Y}(m,1:(m-1)) \gets \bk{s}^T\bk{Y}(:,\text{mIdx}(1:(m-1)))$; \label{alg-prodUpdate:2}} \ENDIF \RETURN $\bk{S}^T\bk{Y}$; \end{algorithmic} \end{procedureAlg} Note that such a product update is computationally much more efficient than recomputing the product from scratch. Specifically, when $ m \le k $, forming the product $ \bk{S}^T \bk{Y} $ directly requires $ \mathcal{O}(m^2n) $ multiplications. However, Procedure \ref{alg-prodUpdate} performs the update with approximately $ 2mn $ multiplications in lines \ref{alg-prodUpdate:1} and \ref{alg-prodUpdate:2}, by reusing previous values from line \ref{alg-prodUpdate:0}. Moreover, when the product is symmetric, e.g., when Procedure \ref{alg-prodUpdate} is invoked as $ \texttt{prodUpdate}( \bk{S}^T\bk{S}, \bk{S}, \bk{S}, \bk{s}, \bk{s}, \text{mIdx}, m, k ) $, then $ \bk{S}(:,\text{mIdx}(1:m))^T \bk{s} $ can be stored in line \ref{alg-prodUpdate:1} and reused in line \ref{alg-prodUpdate:2} (thus only one matrix-vector product is needed, instead of two). Since the limited-memory updating of the {\small L-SR1}{} matrices varies with the chosen initialization strategy, we describe the cases of non-constant initializations $ \gamma_k $ and constant $ \gamma_k = \bar{\gamma} $ next.
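Before describing those cases, we note that Procedure \ref{alg-colUpdate} translates into MATLAB essentially verbatim. A minimal illustrative sketch (assuming $ \bk{S} $ has been preallocated with $m$ columns; this is not the code shipped with the package) is:

\begin{verbatim}
% Limited-memory column update of S by the vector s (cf. Procedure 1).
function [S, mIdx] = colUpdate(S, s, mIdx, m, k)
  if k == 0, mIdx = zeros(m,1); end
  if k < m
      mIdx(k+1)      = k+1;
      S(:,mIdx(k+1)) = s;
  else
      km           = mIdx(1);          % oldest slot is recycled
      mIdx(1:m-1)  = mIdx(2:m);
      mIdx(m)      = km;
      S(:,mIdx(m)) = s;
  end
end
\end{verbatim}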
\subsubsection{Limited-memory updating of \eqref{eqn-PsiM} using non-constant $ \gamma_k $} \label{sec-up:noncons} When $ \gamma_k $ varies for every iteration, $ \bsk{\Psi} $ is best implicitly represented by storing $ \bk{S} $ and $ \bk{Y} $, instead of explicitly forming it (forming $ \bsk{\Psi} $ explicitly incurs additional $ \mathcal{O}(mn) $ memory locations in $ \bsk{\Psi} = \bk{Y} - \gamma_k \bk{S} $). By storing the previous $ m $ pairs $ \{ \bfm{s}_i, \bfm{y}_i \}_{i=k-m}^{k-1} $ in the limited-memory matrices $ \bk{S} = [\: \bfm{s}_{k-m} \: \cdots \: \bfm{s}_{k-1} \:] \in \mathbb{R}^{n \times m} $ and $ \bk{Y} = [\: \bfm{y}_{k-m} \: \cdots \: \bfm{y}_{k-1} \:] \in \mathbb{R}^{n \times m} $ the matrix-vector product $ \bsk{\Psi}^T \bfm{g} $ (for a vector $\bfm{g}$) is done as \begin{equation*} \bsk{\Psi}^T \bfm{g} = \bk{Y}^T\bfm{g} - \gamma_k ( \bk{S}^T \bfm{g} ). \end{equation*} \subsubsection{Limited-memory updating of \eqref{eqn-PsiM} using constant $ \gamma_k = \bar{\gamma} $} \label{sec-up:cons} When $ \gamma_k = \bar{\gamma} $ is constant, then $ \bk{Y} $ and $ \bk{S} $ do not have to be stored separately. Instead the limited-memory method stores $m$ previous vectors $ \{ \bs{\psi}_i = \bfm{y}_i - \bar{\gamma} \bfm{s}_i \}_{i=k-m}^{k-1} $, concatenated in the matrix \begin{equation*} \bsk{\Psi} = \left[ \ \bs{\psi}_{k-m} \ \ \cdots \ \ \bs{\psi}_{k-1} \ \right] \in \mathbb{R}^{n \times m} \end{equation*} Matrix vector products are directly computed as $ \bsk{\Psi}^T \bk{g} $. Subsequently, $ \bk{M} $ from \eqref{eqn-PsiM} can be updated efficiently by noting that \begin{equation*} \bk{M}^{-1}\bk{e} = \left(\bk{D} + \bk{L} + \bk{L}^T - \bar{\gamma} \bk{S}^T \bk{S} \right)\bk{e} = \bsk{\Psi}^T\bk{s}. \end{equation*} Because of these simplifications an L-SR1 algorithm with constant initialization strategy can be implemented with about half the memory footprint (storing only $ \bsk{\Psi} $ as opposed to $ \bk{Y}, \bk{S} $ (and previous small products)). However, often the ability to rescale the computations via a non-constant $\gamma_k$ parameter can be advantageous in solving large-scale optimization problems. We provide an option to choose between constant or non-constant initialization strategies in our implementations.} \subsection{Eigenvalues}\label{sec-eigs} In this subsection, we demonstrate how the eigenvalues and a partial eigenbasis can be computed for {\small SR1}{} matrices. In general, this derivation can be done for any limited-memory quasi-Newton matrix that admits a compact representation; in particular, it can be done for any member of the Broyden convex class~\cite{ErwayM15,B18,BMP18}. This discussion is based on~\cite{Burdakov2016}. Consider the problem of computing the eigenvalues of $\mathbf{B}_{k}$, which is assumed to be an {\small L-SR1}{} matrix, obtained from performing $m$ rank-one updates to $\mathbf{B}_0= \mathbf{\gamma I}$. For notational simplicity, we drop subscripts and consider the compact representation of $\mathbf{B}$: \begin{equation}\label{eq:Bk} \mathbf{B} = \gamma \mathbf{I} + \mathbf{\Psi} \mathbf{M} \mathbf{\Psi}^T. \end{equation} The ``thin'' QR factorization of $\mathbf{\Psi}$ can be written as $\mathbf{\Psi} = \mathbf{Q}\mathbf{R}$ where $\mathbf{Q} \in \mathbb{R}^{n \times m}$ and $\mathbf{R} \in \mathbb{R}^{m \times m}$ is invertible because, as it was assumed above, $\mathbf{\Psi}$ has full column rank. Then, \begin{equation}\label{eq:eig-1} \mathbf{B} = \gamma \mathbf{I} + \mathbf{Q}\mathbf{R}\mathbf{M}\mathbf{R}^T\mathbf{Q}^T. 
\end{equation} The matrix $\mathbf{RMR}^T\in\mathbb{R}^{m\times m}$ is of a relatively small size, and thus, it is computationally inexpensive to compute its spectral decomposition. We define the spectral decomposition of $\mathbf{R}\mathbf{M}\mathbf{R}^T$ as $\mathbf{U}\hat{\Lambda}\mathbf{U}^T,$ where $\mathbf{U} \in \mathbb{R}^{m \times m}$ is an orthogonal matrix whose columns are made up of eigenvectors of $\mathbf{R}\mathbf{M}\mathbf{R}^T$ and $\hat{\Lambda}=\diag(\hat{\lambda}_1,\allowbreak \dots,\allowbreak \hat{\lambda}_{m})$ is a diagonal matrix whose entries are the associated eigenvalues. Thus, \begin{equation}\label{eqn-eig1-nopermute} \mathbf{B} = \gamma \mathbf{I} + \mathbf{Q}\mathbf{U}\hat{\Lambda}\mathbf{U}^T\mathbf{Q}^T. \end{equation} Since both $\mathbf{Q}$ and $\mathbf{U}$ have orthonormal columns, $\mathbf{P}_\parallel \defined \mathbf{Q}\mathbf{U}\in\mathbb{R}^{n\times m}$ also has orthonormal columns. Let $\bfm{P}_\perp$ denote the matrix whose columns form an orthonormal basis for $\left({\bfm{P}}_\parallel\right)^\perp$. Thus, the spectral decomposition of $\mathbf{B}$ is defined as $\mathbf{B} = \mathbf{P}\Lambda_{\gamma} \mathbf{P}^T,$ where \begin{equation}\label{eqn-PL} \mathbf{P}\defined \begin{bmatrix}\mathbf{P_\parallel} \,\,\ \mathbf{P_\perp} \end{bmatrix} \quad \text{and} \quad \Lambda_{\gamma} \defined \begin{bmatrix} \Lambda & 0 \\ 0 & \gamma \mathbf{I}_{n-m} \end{bmatrix}, \end{equation} with $\Lambda_{\gamma}=\diag(\lambda_1,\ldots,\lambda_n)$ and $\Lambda=\diag(\lambda_1,\ldots,\lambda_{m})=\hat{\Lambda}+\gamma\mathbf{I}\in\mathbb{R}^{m\times m}$. We emphasize three important properties of the eigendecomposition. First, all eigenvalues of $\mathbf{B}$ are explicitly obtained and represented by $ \Lambda_{\gamma} $. Second, only the first $ m $ eigenvectors of $ \mathbf{B} $ can be explicitly computed, if needed; they are represented by $\mathbf{P}_{\parallel} $. In particular, since $ \mathbf{\Psi} = \mathbf{Q}\mathbf{R} $, then \begin{equation} \label{eqn-matvec-withP} \mathbf{P}_{\parallel} = \mathbf{Q}\mathbf{U} = \mathbf{\Psi} \mathbf{R}^{-1}\mathbf{U}. \end{equation} If $\mathbf{P}_\parallel$ needs to only be available to compute matrix-vector products then one can avoid explicitly forming $\mathbf{P}_\parallel$ by storing $\mathbf{\Psi}$, $\mathbf{R}$, and $\mathbf{U}$. Third, the eigenvalues given by the parameter $ \gamma $ can be interpreted as an estimate of the curvature of $ f $ in the space spanned by the columns of $ \mathbf{P}_{\perp}$. While there is no reason to assume the function $f$ has negative curvature throughout the entire subspace $\mathbf{P}_\perp$, in this paper, we consider the case $\gamma\le 0$ for the sake of completeness. For the duration of this article, we assume the first $m$ eigenvalues in $\Lambda_{\gamma}$ are ordered in increasing values, i.e., $\Lambda=\diag(\lambda_1,\ldots,\lambda_{m})$ where $\lambda_1\le \lambda_2 \le \ldots \le \lambda_{m}$ and that $r$ is the multiplicity of $\lambda_1$, i.e., $\lambda_1 = \lambda_2 = \cdots = \lambda_r < \lambda_{r+1}$. For details on updating this partial spectral decomposition when a new quasi-Newton pair is computed, see~\cite{ErwayM15}. \subsection{Implementation} In the above presentation, the {\small QR} factorization was used for \jb{ease of readability} to find a partial spectral decomposition of $\mathbf{B}$. However, there are other approaches that may be better suited for different applications. 
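For orientation, the QR-based computation just described can be sketched in a few lines of MATLAB; the identifiers are illustrative, $\mathbf{\Psi}$ is assumed to have full column rank, and, as discussed next, this is not the factorization used in the released implementation.

\begin{verbatim}
% Eigenvalues of B and the factors of P_parallel via a thin QR of Psi.
[Q, R]        = qr(Psi, 0);           % thin QR: Psi = Q*R
T             = R*(Minv\R');          % T = R*M*R', with Minv = M^{-1}
T             = (T + T')/2;           % symmetrize against round-off
[U, LamHat]   = eig(T);
[lamHat, idx] = sort(diag(LamHat));   % ascending order
U             = U(:, idx);
lambda        = lamHat + gamma;       % eigenvalues of B in range(Psi)
Ppar          = Q*U;                  % P_parallel (or keep Psi, R, U as factors)
\end{verbatim}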
An alternative approach to computing the eigenvalues of $\mathbf{B}$, presented in~\cite{Lu96}, replaces the QR factorization of $\mathbf{\Psi}$ with an SVD of an $m\times m$ matrix and an eigendecomposition of a $t\times t$ matrix, where $t\le m$. (For more details, see~\cite{Lu96}.) \jeo{However, experiments in~\cite{BruBEM19} indicate that the QR version of this computation outperforms the SVD approach. } \jeo{When} $\mathbf{\Psi}^T \bfm{\Psi}$ is positive definite (i.e., $\mathbf{\Psi}$ has full column rank), the Cholesky factorization of $\mathbf{\Psi}^T \bfm{\Psi}=\mathbf{R}^T \bfm{R}$ provides the same $\mathbf{R}$ needed to form $\mathbf{P}_{\parallel}$ in (\ref{eqn-matvec-withP})~\cite{Burdakov2016}. \jb{Since $ \bs{\Psi} $ is not explicitly formed when a non-constant initialization $ \gamma = \gamma_k $ is used (in this case $ \bs{\Psi} $ is defined by storing $ \bfm{Y}, \bfm{S} $), the product matrix $ \bs{\Psi}^T \bs{\Psi} $ is represented by \begin{equation} \label{eq:impsipsi} \bs{\Psi}^T \bs{\Psi} = \bfm{Y}^T \bfm{Y} - 2 \gamma \bfm{Y}^T \bfm{S} + \gamma^2 \bfm{S}^T \bfm{S} \end{equation} (in \eqref{eq:impsipsi} the matrices $ \bfm{Y}^T\bfm{Y}, \bfm{Y}^T\bfm{S} $ and $ \bfm{S}^T \bfm{S} $ are stored and updated). In contrast, with a constant initialization $ \gamma_k = \bar{\gamma} $ the product $ \bs{\Psi}^T \bs{\Psi} $ can be directly updated.} For the algorithm proposed in this paper, it is necessary to be able to compute the eigenvalues of $\bfm{B}$ and to be able to compute products with $\bfm{P}_\parallel$. However, in our application, it could be the case that $\mathbf{\Psi}$ does not have full column rank; in this case, it is preferable to use the ${\small \text{LDL}^T}$ decomposition~\cite{gvl96} of $\mathbf{\Psi}^T \mathbf{\Psi}$ as proposed in~\cite{Burdakov2016}. Specifically, $$\mathbf{\Pi}^T \mathbf{\Psi}^T\mathbf{\Psi \Pi}=\mathbf{LDL}^T,$$ where $\mathbf{\Pi}$ is a permutation matrix. If $\mathbf{\Psi}$ is rank-deficient, i.e., $\text{rank}(\mathbf{\Psi})=r<m$, then at least one diagonal entry of $\mathbf{D}$ is zero. \jeo{(In computer arithmetic, it will be relatively small.)} In the proposed algorithm, we use the following criterion to determine whether entries in $\mathbf{D}$ are sufficiently large: the $i$th entry of $\mathbf{D}$, i.e., $d_i$, is sufficiently large provided that \begin{equation} \label{eqn-ldlcriteria} \jeo{d_i > 10^{-8}\times[\mathbf{\Pi}^T\mathbf{\Psi}^T\mathbf{\Psi\Pi}]_{ii}.} \end{equation} Now, let $J$ be the set of indices that satisfy (\ref{eqn-ldlcriteria}), \jeo{i.e., $r=|J|$.} Furthermore, define $\mathbf{D}_\dagger$ to be the matrix $\mathbf{D}$ with the rows and columns indexed by elements not in $J$ removed, and $\mathbf{L}_\dagger$ to be the matrix $\mathbf{L}$ with the columns indexed by elements not in $J$ removed. Then, $$ \mathbf{\Psi}^T \mathbf{\Psi}\approx\mathbf{\Pi L}_\dagger\mathbf{D}_\dagger\mathbf{L}^T_\dagger\mathbf{ \Pi}^T=\mathbf{\Pi R}_\dagger^T\mathbf{R}_\dagger\mathbf{\Pi}^T ,$$ where $\mathbf{R}_\dagger \defined \sqrt{\mathbf{D}_\dagger}\mathbf{L}_\dagger^T\in\mathbb{R}^{r\times m}$.
Furthermore, \begin{equation}\label{eq:eig-1P} \mathbf{B} \approx \gamma \mathbf{I} + \mathbf{Q}_\dagger\mathbf{R}_\dagger\mathbf{\Pi}^T\mathbf{M\Pi}\mathbf{R}_\dagger^T\mathbf{Q}_\dagger^T \quad \text{with} \quad \mathbf{Q}_\dagger\defined \left( \mathbf{\Psi\Pi}\right)_\dagger\mathbf{R}^{-1}_{\ddagger}\in\mathbb{R}^{n\times r}, \end{equation}\jeo{where} $\left(\mathbf{\Psi\Pi}\right)_\dagger$ is the matrix $\mathbf{\Psi\Pi}$ having deleted any columns indexed by an element not in $J$, and $\mathbf{R}_\ddagger\in\mathbb{R}^{r\times r}$ is the matrix $\mathbf{R}_\dagger$ having removed columns indexed by elements not in $J$. Notice that the matrix $\mathbf{R}_\dagger\mathbf{\Pi}^T \mathbf{M}\mathbf{\Pi}\mathbf{R}_\dagger^T \in\mathbb{R}^{r\times r}$ is full rank. Thus, the eigenvalue decomposition $\mathbf{U}\hat{\Lambda}\mathbf{U}^T$ is now computed not for $\mathbf{RMR}^T$ as in Section \ref{sec-eigs}, but for $\mathbf{R_{\dagger}\Pi}^T \mathbf{M\Pi R_{\dagger}}^T$. Furthermore, $\bfm{P}_\parallel$ in (\ref{eqn-matvec-withP}) is computed as \begin{equation}\label{eqn-Pparallel} \mathbf{P}_\parallel = \mathbf{Q_\dagger U} = \left(\mathbf{\Psi\Pi}\right)_\dagger \mathbf{R}_{\ddagger}^{-1} \mathbf{U} \end{equation} \jb{when a constant initialization is used (since $ \bs{\Psi} $ is explicitly formed), and as \begin{equation}\label{eqn-Pparallel_nc} \mathbf{P}_\parallel = \mathbf{Q_\dagger U} = \left(\bfm{Y\Pi}\right)_\dagger \mathbf{R}_{\ddagger}^{-1} \mathbf{U} - \gamma\left(\bfm{S\Pi}\right)_\dagger \mathbf{R}_{\ddagger}^{-1} \mathbf{U} \end{equation} when a non-constant initialization is used.} Algorithm 1 details the computation of the elements needed to form $\mathbf{P}_\parallel$, using the ${\small \text{LDL}^T}$ decomposition. It produces $\mathbf{\Lambda}$, $\mathbf{R}_{\ddagger}$, $\mathbf{U}$, and $\mathbf{\Pi}$. There are several pre-processing and post-processing steps in this algorithm. Namely, lines 7 and 9 are used to remove any spurious complex round-off error, line 10 is to order the eigenvalues and associated eigenvectors, and line 12 sets any small eigenvalue (in absolute value) to zero. An alternative to forming and storing $\bfm{R}_{\ddagger}$ is to maintain $\bfm{R}_{\dagger}$ and the index set $J$. 
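To illustrate the rank-revealing step behind Algorithm 1 (shown below), the following MATLAB sketch performs the column selection with the built-in \texttt{ldl} factorization and the threshold (\ref{eqn-ldlcriteria}). It assumes that the factorization of the positive-semidefinite matrix $\mathbf{\Psi}^T\mathbf{\Psi}$ returns only $1\times 1$ pivots (so that $\mathbf{D}$ is diagonal), and the identifiers are ours rather than those of the released code.

\begin{verbatim}
% Column selection from an LDL' factorization of PsiTPsi = Psi'*Psi.
[Lf, Df, pvec] = ldl(PsiTPsi, 'vector');  % PsiTPsi(pvec,pvec) = Lf*Df*Lf'
d    = diag(Df);
dA   = diag(PsiTPsi);                     % diagonal of Psi'*Psi
J    = find(d > 1e-8*dA(pvec));           % "sufficiently large" pivots
Ddag = Df(J,J);
Ldag = Lf(:,J);
Rdag = sqrt(Ddag)*Ldag';                  % R_dagger, an r-by-m matrix
\end{verbatim}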
\jb{Moreover, since it is typically more efficient to update the product $ \bs{\Psi}^T \bs{\Psi} $ instead of forming it from scratch, the argument ``$ \Psi \lor \Psi^T \Psi $" is used to enable passing either of the two inputs, depending on the context.} \begin{algorithm}[H] \caption{Computing $\mathbf{R}_\ddagger$, $\mathbf{\Lambda}$, $\mathbf{U}$, and $\mathbf{\Pi}$ using the ${\small \text{LDL}^T}$ decomposition}\label{alg-ldl} \begin{algorithmic}[1] \ENSURE [$\mathbf{R}_\ddagger$, $\mathbf{\Lambda}$, $\mathbf{U}$, $\mathbf{\Pi}$, \jeo{$J$}]=\texttt{ComputeSpectral}(\jb{$\Psi \lor \Psi^T \Psi $}, $\mathbf{M}^{-1}$, $\gamma$, $\tau$); \STATE Compute the ${\small \text{LDL}^T}$ decomposition of $\bs{\Psi}^T \bs{\Psi}$ and store the factors $\mathbf{L}$ and $\mathbf{D}$ matrices, and store $\mathbf{\Pi}$ (as a vector with the permutation information); \STATE Find the indices of elements of $\mathbf{D}$ that are sufficiently large using (\ref{eqn-ldlcriteria}) and store as $J$; \STATE Form $\mathbf{D}_\dagger$ by storing the rows and columns of $\mathbf{D}$ corresponding to indices of $J$; \STATE Form $\mathbf{L}_\dagger$ by storing the columns of $\mathbf{L}$ corresponding the indices of $J$; \STATE $\mathbf{R}_\dagger \gets \sqrt{\mathbf{D}_\dagger}\mathbf{L}_\dagger^T$; \STATE $\mathbf{T}\gets \mathbf{R}_\dagger\mathbf{\Pi}^T\mathbf{M\Pi R}^T_\dagger$; \STATE Compute the spectral decomposition $\mathbf{U}\mathbf{\hat{\Lambda}}\mathbf{U}^T$of $(\mathbf{T}+\mathbf{T}^T)/2$; \STATE Form $\mathbf{R}_\ddagger $ by storing the columns of $\mathbf{R}_\dagger$ corresponding to columns of $J$; \STATE $\mathbf{\hat{\Lambda}}\gets \text{real}(\mathbf{\hat{\Lambda}})$ \STATE Order the entries in $\mathbf{\hat{\Lambda}}$ from low to high and rearrange the columns of $\mathbf{U}$ accordingly to maintain the spectral decomposition of $(\mathbf{T}+\mathbf{T}^T)/2$; \STATE $\mathbf{\Lambda}\gets \mathbf{\hat{\Lambda}}+\gamma \bfm{I}$; \IF {$|\mathbf{\Lambda}_{ii}|<\tau$ for any $i$ } \STATE {$\mathbf{\Lambda}_{ii}\gets 0$}; \ENDIF \RETURN $\mathbf{R}_\ddagger$, $\mathbf{\Lambda}$, $\mathbf{U}$, $\mathbf{\Pi}$; \end{algorithmic} \end{algorithm} The output of Algorithm 1 includes the factors of $\mathbf{P}_\parallel$ (see (\ref{eqn-Pparallel})), i.e., $\mathbf{R}_\ddagger$, $\mathbf{U}$, and $\mathbf{\Pi}$, \jeo{as well as $J$.} For the method proposed in Section 3, products with $\mathbf{P}_\parallel$ are computed as a sequence of explicit matrix-vector products with the factors of $\mathbf{P}_\parallel$. In practice, the permutation matrix $\mathbf{\Pi}$ is not stored explicitly; instead, the permutation is applied implicitly using a vector that maintains the order of the columns after the permutation matrix is applied. Thus, products with $\mathbf{P}_\parallel$ are computed using only matrix-vector products together with a rearranging of columns. \section{Proposed method} \label{sec-proposed} The proposed method is able to solve the {\small L-SR1}\ trust-region subproblem to high accuracy, even when $\mathbf{B}$ is indefinite. The method makes use of the eigenvalues of $\mathbf{B}$ and the factors of $\mathbf{P}_\parallel$. To describe the method, we first transform the trust-region subproblem \eqref{eq:trustProblem} so that the quadratic objective function becomes separable. 
Then, we describe the shape-changing norms proposed in \cite{BYuan02,Burdakov2016} that decouples the separable problem into two minimization problems, one of which has a closed-form solution while the other can be solved very efficiently. Finally, we show how these solutions can be used to construct a solution to the original trust-region subproblem. \subsection{Transforming the Trust-Region Subproblem} Let $\mathbf{B} = \mathbf{P}\Lambda_{\gamma}\mathbf{P}^T$ be the eigendecomposition of $\mathbf{B}$ described in Section 2.2. Letting $\mathbf{v} = \mathbf{P}^T\mathbf{p}$ and $\mathbf{g}_{\mathbf{P}} = \mathbf{P}^T\mathbf{g}$, the objective function $\mathcal{Q}(\mathbf{p})$ in \eqref{eq:trustProblem} can be written as a function of $\bfm{v}$: \begin{equation*} {\mathcal Q}\left( \mathbf{p} \right) = \mathbf{g}^T\mathbf{p} + \frac{1}{2} \mathbf{p}^T \mathbf{B} \mathbf{p} = \mathbf{g}^T_{\mathbf{P}}\mathbf{v} + \frac{1}{2}\mathbf{v}^T \Lambda_{\gamma} \mathbf{v} \defined q\left( \mathbf{v} \right). \end{equation*} With $ \mathbf{P} = \left[ \mathbf{P}_{\parallel} \quad \mathbf{P}_{\perp} \right] $, we partition $\mathbf{v}$ and $\mathbf{g}_{\mathbf{P}}$ as follows: \begin{equation*} \mathbf{v} = \mathbf{P}^T\mathbf{p} = \left[ \begin{array}{c} \mathbf{P}^T_{\parallel} \mathbf{p} \\ \mathbf{P}^T_{\perp} \mathbf{p} \\ \end{array} \right] = \left[ \begin{array}{c} \mathbf{v}_{\parallel}\\ \mathbf{v}_{\perp} \\ \end{array} \right] \quad \text{and} \quad \mathbf{g}_{\mathbf{P}} = \left[ \begin{array}{c} \mathbf{P}^T_{\parallel} \mathbf{g} \\ \mathbf{P}^T_{\perp}\mathbf{g} \end{array} \right] = \left[ \begin{array}{c} \mathbf{g}_{\parallel} \\ \mathbf{g}_{\perp} \end{array} \right], \end{equation*} where $\mathbf{v}_{\parallel}, \mathbf{g}_{\parallel} \in \mathbb{R}^{m}$ and $\mathbf{v}_{\perp}, \mathbf{g}_{\perp} \in \mathbb{R}^{n-m}$. Then, \begin{eqnarray} \label{eq:q_decomp} q\left( \mathbf{v} \right) &=& \begin{bmatrix} \mathbf{g}_{\parallel}^T \ & \mathbf{g}_{\perp}^T \end{bmatrix} \begin{bmatrix} \mathbf{v}_{\parallel} \\ \mathbf{v}_{\perp} \end{bmatrix} + \frac{1}{2} \begin{bmatrix} \mathbf{v}_{\parallel}^T \ & \mathbf{v}_{\perp}^T \end{bmatrix} \begin{bmatrix} \Lambda \\ & \gamma\mathbf{I}_{n-m} \end{bmatrix} \begin{bmatrix} \mathbf{v}_{\parallel} \\ \mathbf{v}_{\perp} \end{bmatrix} \nonumber \\ &=& \mathbf{g}^T_{\parallel}\mathbf{v}_{\parallel} + \mathbf{g}^T_{\perp}\mathbf{v}_{\perp} + \frac{1}{2} \left( \mathbf{v}^T_{\parallel}\Lambda\mathbf{v}_{\parallel} + \gamma \left\| \mathbf{v}_{\perp} \right\|^2 \right) \nonumber \\ &=& q_{\parallel} \left( \mathbf{v}_{\parallel} \right) + q_{\perp} \left( \mathbf{v}_{\perp} \right), \end{eqnarray} where $$ q_{\parallel} \left( \mathbf{v}_{\parallel} \right) \defined \mathbf{g}^T_{\parallel}\mathbf{v}_{\parallel} + \frac{1}{2}\mathbf{v}^T_{\parallel}\Lambda\mathbf{v}_{\parallel} \quad \text{and} \quad q_{\perp} \left( \mathbf{v}_{\perp} \right) \defined \mathbf{g}^T_{\perp}\mathbf{v}_{\perp} + \frac{\gamma}{2} \left\| \mathbf{v}_{\perp} \right\|^2. $$ Thus, the trust-region subproblem \eqref{eq:trustProblem} can be expressed as \begin{equation} \label{eq:qv} \mathop{\operator@font{minimize}}ize{\| \mathbf{ Pv } \| \le \delta } \: \: q\left( \mathbf{v}\right) = \left\{ q_{\parallel}\left( \mathbf{v}_{\parallel}\right) + q_{\perp}\left( \mathbf{v}_{\perp}\right) \right\}. \end{equation} Note that the function $q(\mathbf{v})$ is now separable in $\mathbf{v}_{\parallel}$ and $\mathbf{v}_{\perp}$. 
To completely decouple \eqref{eq:qv} into two minimization problems, we use a shape-changing norm so that the norm constraint $\| \mathbf{ Pv } \| \le \delta$ decouples into separate constraints, one involving $\mathbf{v}_{\parallel}$ and the other involving $\mathbf{v}_{\perp}$. \subsection{Shape-Changing Norms} Consider the following shape-changing norms proposed in~\cite{BYuan02,Burdakov2016}: \begin{align} \label{eq:sc_2} \| \mathbf{p} \|_{\mathbf{P},2} &\defined \max\left( \| \mathbf{P}_{\parallel}^T\mathbf{p} \|_2, \| \mathbf{P}_{\perp}^T\mathbf{p} \|_2 \right ) \hspace{.125cm} = \max\left( \| \mathbf{v}_{\parallel} \|_2, \| \mathbf{v}_{\perp} \|_2\right), \\ \label{eq:sc_inf} \| \mathbf{p} \|_{\mathbf{P},\infty} &\defined \max\left( \| \mathbf{P}_{\parallel}^T\mathbf{p} \|_{\infty}, \| \mathbf{P}_{\perp}^T\mathbf{p} \|_2 \right ) = \max\left( \| \mathbf{v}_{\parallel} \|_{\infty} , \| \mathbf{v}_{\perp} \|_2 \right). \end{align} We refer to them as the $(\mathbf{P},2)$ and the $(\mathbf{P},\infty)$ norms, respectively. Since $\mathbf{p} = \mathbf{Pv}$, the trust-region constraint in \eqref{eq:qv} can be expressed in these norms as \begin{eqnarray*} \| \mathbf{Pv} \|_{\mathbf{P},2} \ \le \delta \quad &\text{if and only if}& \quad \| \mathbf{v}_{\parallel} \|_2 \ \le \delta \ \text{and} \ \| \mathbf{v}_{\perp} \|_2 \le \delta, \\ \| \mathbf{Pv} \|_{\mathbf{P},\infty} \le \delta \quad &\text{if and only if}& \quad \| \mathbf{v}_{\parallel} \|_{\infty} \le \delta \ \text{and} \ \| \mathbf{v}_{\perp} \|_2 \le \delta. \end{eqnarray*} Thus, from \eqref{eq:qv}, the trust-region subproblem is given for the $(\mathbf{P},2)$ norm by \begin{equation} \label{eq:qv2} \mathop{\operator@font{minimize}}ize{\| \mathbf{ Pv } \|_{\mathbf{P},2} \le \delta } \: \: q\left( \mathbf{v}\right) = \mathop{\operator@font{minimize}}ize{\| \mathbf{ v }_{\parallel} \|_{2} \le \delta } \: \: q_{\parallel}\left( \mathbf{v}_{\parallel}\right) + \mathop{\operator@font{minimize}}ize{\| \mathbf{ v }_{\perp} \|_{2} \le \delta } \: \: q_{\perp}\left( \mathbf{v}_{\perp}\right), \end{equation} and using the $(\mathbf{P},\infty)$ norm it is given by \begin{equation} \label{eq:qvinfty} \mathop{\operator@font{minimize}}ize{\| \mathbf{ Pv } \|_{\mathbf{P},\infty} \le \delta } \: \: q\left( \mathbf{v}\right) = \mathop{\operator@font{minimize}}ize{\| \mathbf{ v }_{\parallel} \|_{\infty} \le \delta } \: \: q_{\parallel}\left( \mathbf{v}_{\parallel}\right) + \mathop{\operator@font{minimize}}ize{\| \mathbf{ v }_{\perp} \|_{2} \le \delta } \: \: q_{\perp}\left( \mathbf{v}_{\perp}\right). \end{equation} As shown in~\cite{Burdakov2016}, these norms are equivalent to the two-norm, i.e., \begin{eqnarray*} \frac{1}{\sqrt{2}} \| \mathbf{p} \|_2 \ \le & \| \mathbf{p} \|_{\mathbf{P},2} & \le \ \| \mathbf{p} \|_2 \\ \frac{1}{\sqrt{m+1}} \| \mathbf{p} \|_2 \ \le & \| \mathbf{p} \|_{\mathbf{P},\infty} & \le \ \| \mathbf{p} \|_2. \end{eqnarray*} Note that the latter equivalence factor depends on the number of stored quasi-Newton pairs $m$ and not on the number of variables ($n$). Notice that the shape-changing norms do not treat the two subspaces equally, since the trust region has a different size and shape in each of them. However, because of the norm equivalence, the shape-changing trust region differs only modestly from the region defined by the two-norm, the most commonly used choice of norm. We now show how to solve the decoupled subproblems.
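Before doing so, we remark that both norms can be evaluated without ever forming $\mathbf{P}_\perp$, using only products with $\mathbf{P}_\parallel$ and the orthogonality of $\mathbf{P}$. A minimal MATLAB sketch (with illustrative variable names) is:

\begin{verbatim}
% Shape-changing norms of a vector p; Ppar holds (factors of) P_parallel.
vpar     = Ppar'*p;
nperp    = sqrt(max(norm(p)^2 - norm(vpar)^2, 0));  % = norm(P_perp'*p)
normP2   = max(norm(vpar),     nperp);              % (P,2)-norm of p
normPinf = max(norm(vpar,Inf), nperp);              % (P,infinity)-norm of p
\end{verbatim}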
\subsection{Solving for the optimal $\mathbf{v}_{\perp}^*$} \label{sec:v_perp} The subproblem \begin{equation} \label{eq:sp_vperp_2} \mathop{\operator@font{minimize}}ize{\| \mathbf{v}_{\perp} \|_2 \le \delta} \quad q_{\perp}\left ( \mathbf{v}_{\perp}\right) \equiv \mathbf{g}^T_{\perp}\mathbf {v}_{\perp} + \frac{\gamma}{2} \| \mathbf{v}_{\perp} \|_2^2 \end{equation} appears in both \eqref{eq:qv2} and \eqref{eq:qvinfty}; its optimal solution can be computed by formula. For the quadratic subproblem (\ref{eq:sp_vperp_2}) the solution $\mathbf{v}_{\perp}^*$ must satisfy the following optimality conditions found in~\cite{Gay81,MorS83,Sor82} associated with \eqref{eq:sp_vperp_2}: For some $\sigma_{\perp}^* \in \mathbb{R}^+$, \begin{subequations} \label{eq:opt_vperp} \begin{align} \label{eq:vperp_c1} \left( \gamma+\sigma^{*}_{\perp} \right)\mathbf{v}^*_{\perp} &= -\mathbf{g}_{\perp}, \\ \label{eq:vperp_c2} \sigma^{*}_{\perp} \left( \|\mathbf{v}^*_{\perp}\|_2 - \delta \right) &= 0, \\ \label{eq:vperp_c4} \|\mathbf{v}^*_{\perp }\|_2 & \le \delta, \\ \label{eq:vperp_c5} \gamma + \sigma^{*}_{\perp} & \ge 0. \end{align} \end{subequations} Note that the optimality conditions are satisfied by $(\mathbf{v}_{\perp}^*, \sigma_{\perp}^*)$ given by \begin{equation} \label{eq:soln_vperp} \mathbf{v}^*_{\perp} = \begin{cases} -\frac{1}{\gamma} \mathbf{g}_{\perp} & \text{ if } \gamma > 0 \text{ and } \left \| \mathbf{g}_{\perp} \right \|_2 \le \delta | \gamma|, \\ \delta \mathbf{u} & \text{ if } \gamma \le 0 \text{ and } \| \mathbf{g}_{\perp} \|_2 = 0, \\ -\frac{ \delta}{\| \mathbf{g}_{\perp} \|_2} \mathbf{g}_{\perp} & \text{ otherwise, } \end{cases} \end{equation} and \begin{equation} \label{eq:soln_sigmaperp} \sigma^*_{\perp} = \begin{cases} 0 & \text{ if } \gamma > 0 \text{ and } \left \| \mathbf{g}_{\perp} \right \|_2 \le \delta |\gamma|, \\ \frac{\left\| \mathbf{g}_{\perp} \right\|_2}{\delta} - \gamma & \text{ otherwise,} \end{cases} \end{equation} where $\mathbf{u} \in \mathbb{R}^{n-m}$ is any unit vector with respect to the two-norm. \subsection{Solving for the optimal $\mathbf{v}_{\parallel}^*$} \label{sec:v_parallel} \noindent In this section, we detail how to solve for the optimal $\mathbf{v}_\parallel^*$ when either the $(\mathbf{P}, \infty)$-norm or the $(\mathbf{P},2)$-norm is used to define the trust-region subproblem. \noindent \textbf{$(\mathbf{P},\infty)$-norm solution}. If the shape-changing $(\mathbf{P}, \infty)$-norm is used in \eqref{eq:qv}, then the subproblem in $ \mathbf{v}_{\parallel} $ is \begin{equation} \label{eq:sp_vpar_inf} \mathop{\operator@font{minimize}}ize{\left\| \mathbf{v}_{\parallel} \right\|_{\infty} \le \delta} \quad q_{\parallel}\left( \mathbf{v}_{\parallel} \right) = \mathbf{g}^T_{\parallel} \mathbf{v}_{\parallel} + \frac{1}{2} \mathbf{v}^T_{\parallel} \Lambda \mathbf{v}_{\parallel}. \end{equation} The solution to this problem is computed by separately minimizing $ m $ scalar quadratic problems of the form \begin{equation} \label{eq:sp_vpar_infi} \mathop{\operator@font{minimize}}ize{ |[ \mathbf{v}_{\parallel}]_i|\le \delta } \quad q_{\parallel,i}( [\mathbf{v}_{\parallel}]_i) = \left[ \mathbf{g}_{\parallel} \right]_i\left[ \mathbf{v}_{\parallel} \right]_i + \frac{\lambda_i}{2} \left( \left[ \mathbf{v}_{\parallel} \right]_i \right)^2, \quad \quad 1 \le i \le m. \end{equation} The minimizer depends on the convexity of $q_{\parallel,i}$, i.e., the sign of $\lambda_i$. 
The solution to \eqref{eq:sp_vpar_infi} is given as follows: \begin{equation} \label{eq:vpar_inf} [\mathbf{v}^*_{||} ]_i = \begin{cases} -\frac{\left[ \mathbf{g}_{||}\right]_i}{ \lambda_i} & \text{ if } \left| \frac{ \left[ \mathbf{g}_{||}\right]_i }{\lambda_i} \right| \le \delta \text{ and } \lambda_i > 0, \\ c & \text{ if } \left[ \mathbf{g}_{\parallel}\right]_i = 0, \: \lambda_i = 0,\\ - \text{sgn}(\left [ \mathbf{g}_{\parallel} \right ]_i) \delta & \text{ if } \left[ \mathbf{g}_{\parallel}\right]_i \ne 0, \: \lambda_i = 0,\\ \pm\delta & \text{ if } \left[ \mathbf{g}_{\parallel}\right]_i = 0, \: \lambda_i < 0,\\ -\frac{\delta}{ \left| \left[ \mathbf{g}_{||}\right]_i \right|} \left[ \mathbf{g}_{||}\right]_i & \text{ otherwise}, \end{cases} \end{equation} where $c$ is any real number in $[-\delta, \delta]$ and ``sgn'' denotes the signum function (see~\cite{Burdakov2016} for details). \noindent \textbf{$(\mathbf{P},2)$-norm solution}: If the shape-changing $(\mathbf{P}, 2)$-norm is used in \eqref{eq:qv}, then the subproblem in $ \mathbf{v}_{\parallel} $ is \begin{equation} \label{eq:sp_vpar_2} \mathop{\operator@font{minimize}}ize{\| \mathbf{v}_{\parallel} \|_2 \le \delta} \quad q_{\parallel}\left( \mathbf{v}_{\parallel} \right) = \mathbf{g}^T_{\parallel} \mathbf{v}_{\parallel} + \frac{1}{2} \mathbf{v}^T_{\parallel} \Lambda \mathbf{v}_{\parallel}. \end{equation} The solution $\mathbf{v}^*_{\parallel}$ must satisfy the following optimality conditions~\cite{Gay81,MorS83,Sor82} associated with (\ref{eq:sp_vpar_2}): For some $\sigma_\parallel^*\in\mathbb{R}^+$, \begin{subequations} \label{eq:opt_vpar} \begin{align} \label{eq:vpar_c1} (\Lambda+\sigma^{*}_{\parallel}\mathbf{I})\mathbf{v}^*_{\parallel} &= -\mathbf{g}_{\parallel}, \\ \label{eq:vpar_c2} \sigma^{*}_{\parallel} \left( \|\mathbf{v}^*_{\parallel}\|_2 - \delta \right) &= 0, \\ \label{eq:vpar_c4} \|\mathbf{v}^*_{\parallel}\|_2 & \le \delta, \\ \label{eq:vpar_c5} \lambda_i + \sigma^{*}_{\parallel} & \ge 0 \quad \text{ for } 1 \le i \le m. \end{align} \end{subequations} A solution to the optimality conditions (\ref{eq:vpar_c1})-(\ref{eq:vpar_c5}) can be computed using the method found in~\cite{BruEM15}. For completeness, we outline the method here; this method depends on the sign of $\lambda_1$. Throughout these cases, we make use of the expression of $\mathbf{v}_{\parallel}$ as a function of $\sigma_\parallel$. That is, from the first optimality condition \eqref{eq:vpar_c1}, we write \begin{equation} \label{eq:v_sigma} \mathbf{v}_{\parallel}\left( \sigma_{\parallel} \right) = -\left( \Lambda + \sigma_{\parallel} \mathbf{I} \right)^{-1} \mathbf{g}_{\parallel}, \end{equation} with $\sigma_{\parallel} \ne -\lambda_i$ for $1 \le i \le m$. \noindent \textbf{Case 1 ($\lambda_1 > 0 $).} When $\lambda_1>0$, the unconstrained minimizer is computed (setting $\sigma_\parallel^*=0$): \begin{equation} \label{eq:v_par_solpos} \mathbf{v}_{\parallel} \left( 0 \right) = - \Lambda^{-1}\mathbf{g}_{\parallel}. \end{equation} If $\mathbf{v}_{\parallel}(0)$ is feasible, i.e., $\| \mathbf{v}_{\parallel} \left( 0 \right) \|_2 \le \delta $ then $\mathbf{v}_{\parallel}^* = \mathbf{v}_{\parallel} (0)$ is the global minimizer; otherwise, $ \sigma^*_{\parallel} $ is the solution to the secular equation \eqref{eq:secular} (discussed below). The minimizer to the problem \eqref{eq:sp_vpar_2} is then given by \begin{equation} \label{eq:v_secularsoln} \mathbf{v}^*_{\parallel} = -\left( \Lambda + \sigma^*_{\parallel}\mathbf{I}\right)^{-1}\mathbf{g}_{\parallel}. 
\end{equation} \noindent \textbf{Case 2 ($ \lambda_1 = 0 $).} If $ \mathbf{g}_{\parallel} $ is in the range of $\Lambda$, i.e., $[\mathbf{g}_{\parallel}]_i = 0$ for $ 1 \le i \le r$, then set $\sigma_{\parallel} = 0$ and let \begin{equation*} \mathbf{v}_{\parallel}\left( 0 \right) = - \Lambda^{\dagger} \mathbf{g}_{\parallel}, \end{equation*} where $ {\dagger} $ denotes the pseudo-inverse. If $ \| \mathbf{v}_{\parallel} (0) \|_2 \le \delta $, then \begin{equation*} \label{eq:v_par_solsing} \mathbf{v}^*_{\parallel} = \mathbf{v}_{\parallel} \left( 0 \right) = - \Lambda^{\dagger}\mathbf{g}_{\parallel} \end{equation*} satisfies all optimality conditions (with $ \sigma^*_{\parallel} = 0 $). Otherwise, i.e., if either $[\mathbf{g}_{\parallel}]_i \ne 0 $ for some $ 1 \le i \le r $ or $\| \Lambda^{\dagger} \mathbf{g}_{\parallel} \|_2 > \delta $, then $ \mathbf{v}^*_{\parallel} $ is computed using \eqref{eq:v_secularsoln}, where $ \sigma^*_{\parallel} $ solves the secular equation in \eqref{eq:secular} (discussed below). \noindent \textbf{Case 3 ($ \lambda_1 < 0 $):} If $ \mathbf{g}_{\parallel} $ is in the range of $\Lambda - \lambda_1 \mathbf{I}$, i.e., $[\mathbf{g}_{\parallel}]_i = 0$ for $ 1 \le i \le r$, then we set $ \sigma_{\parallel} = -\lambda_1 $ and \begin{equation*} \mathbf{v}_{\parallel}\left( -\lambda_1 \right) = -\left( \Lambda -\lambda_1\mathbf{I} \right)^{\dagger} \mathbf{g}_{\parallel}. \end{equation*} If $\| \mathbf{v}_{\parallel} ( -\lambda_1 )\|_2 \le \delta $, then the solution is given by \begin{equation} \label{eq:v_par_solnhc} \mathbf{v}^*_{\parallel} = \mathbf{v}_{\parallel} \left( -\lambda_1 \right) + \alpha \mathbf{e}_1, \end{equation} where $ \alpha = \sqrt{ \delta^2 - \left\| \mathbf{v}_{\parallel}\left( -\lambda_1 \right) \right\|^2_2 } $. (This case is referred to as the ``hard case''~\cite{ConGT00a,MorS83}.) Note that $\mathbf{v}_{\parallel}^*$ satisfies the first optimality condition \eqref{eq:vpar_c1}: \begin{align*} \left( \Lambda -\lambda_1 \mathbf{I} \right) \mathbf{v}^*_{\parallel} &= \left( \Lambda -\lambda_1 \mathbf{I} \right) \left( \mathbf{v}_{\parallel} \left( -\lambda_1 \right) + \alpha \mathbf{e}_1 \right) = -\mathbf{g}_{\parallel}. \end{align*} The second optimality condition \eqref{eq:vpar_c2} is satisfied by observing that \begin{equation*} \| \mathbf{v}^*_{\parallel} \|^2_2 = \| \mathbf{v}_{\parallel} ( -\lambda_1 ) \|^2_2 + \alpha^2 = \delta^2. \end{equation*} Finally, since $ \sigma^*_{\parallel} = -\lambda_1 > 0 $ the other optimality conditions are also satisfied. On the other hand, if $ [ \mathbf{g}_{\parallel} ]_i \ne 0$ for some $ 1 \le i \le r $ or $ \| ( \Lambda -\lambda_1\mathbf{I} )^{\dagger}\mathbf{g}_{\parallel} \|_2 > \delta $, then $ \mathbf{v}^*_{\parallel} $ is computed using \eqref{eq:v_secularsoln}, where $ \sigma^*_{\parallel} $ solves the secular equation \eqref{eq:secular}. \noindent \textbf{The secular equation.} We now summarize how to find a solution of the so-called \emph{secular equation}. Note that from (\ref{eq:v_sigma}), \begin{equation*} \| \mathbf{v}_{\parallel} (\sigma_{\parallel}) \|_2^2 = \sum_{i=1}^{m} \frac{(\mathbf{g}_{\parallel})_i^2}{(\lambda_i + \sigma_{\parallel})^2}. 
\end{equation*} If we combine the terms above that correspond to the same eigenvalues and remove the terms with zero numerators, then for $\sigma_{\parallel} \ne -\lambda_i$, we have $$ \| \mathbf{v}_{\parallel} (\sigma_{\parallel}) \|_2^2 = \sum_{i=1}^{\ell} \frac{\bar{a}_i^2}{(\bar{\lambda}_i + \sigma_{\parallel})^2}, $$ where $\bar{a}_i \ne 0$ for $i = 1, \dots, \ell$ and $\bar{\lambda}_i$ are \emph{distinct} eigenvalues of $\mathbf{B}$ with $\bar{\lambda}_1 < \bar{\lambda}_2 < \cdots < \bar{\lambda}_{\ell}$. Next, we define the function \begin{equation} \phi_{\parallel}\left( \sigma_{\parallel} \right) = \begin{cases} \displaystyle \frac{1}{ \sqrt{\displaystyle \sum_{i=1}^{\ell} \frac{\bar{a}_i^2}{(\bar{\lambda}_i + \sigma_{\parallel})^2}} } - \frac{1}{\delta} & \text{if $\sigma_{\parallel} \ne - \bar{\lambda}_i$ where $1 \le i \le \ell$} \\[1.3cm] \displaystyle -\frac{1}{\delta} & \text{otherwise}. \end{cases} \end{equation} From the optimality conditions \eqref{eq:vpar_c2} and \eqref{eq:vpar_c5}, if $\sigma^*_{\parallel} \ne 0$, then $\sigma_{\parallel}^*$ solves the \emph{secular equation} \begin{equation} \label{eq:secular} \phi_{\parallel}\left( \sigma_{\parallel} \right) = 0, \end{equation} with $\sigma_{\parallel} \ge \max \{0, -\lambda_1\}$. Note that $\phi_{\parallel}$ is monotonically increasing and concave on the interval $[ - \lambda_1, \infty)$; thus, with a judicious choice of initial $\sigma_{\parallel}^0$, Newton's method can be used to efficiently compute $\sigma_{\parallel}^*$ in \eqref{eq:secular} (see~\cite{BruEM15}). More details on the solution method for subproblem \eqref{eq:sp_vpar_2} are given in \cite{BruEM15}. \subsection{Computing $\mathbf{p}^*$} Given $\mathbf{v}^* = [ \mathbf{v}^*_{\parallel} \,\, \mathbf{v}^*_{\perp}]^T$, the solution to the trust-region subproblem \eqref{eq:trustProblem} using either the $(\mathbf{P},2)$ or the $(\mathbf{P},\infty)$ norms is \begin{equation} \label{eq:p_general} \mathbf{p}^* = \mathbf{P}\mathbf{v}^* = \mathbf{P}_{\parallel}\mathbf{v}^*_{\parallel}+\mathbf{P}_{\perp}\mathbf{v}^*_{\perp}. \end{equation} (Recall that using either of the two norms generates the same $ \bfm{v}_\perp^*$ but different $ \bfm{v}_\parallel^*$.) It remains to show how to form $\mathbf{p}^*$ in (\ref{eq:p_general}). Matrix-vector products involving $ \mathbf{P}_{\parallel} $ are possible using \eqref{eqn-Pparallel} or \eqref{eqn-Pparallel_nc}, and thus, $ \mathbf{P}_{\parallel}\mathbf{v}^*_{\parallel} $ can be computed; however, an explicit formula to compute products $\mathbf{P}_{\perp}$ is not available. To compute the second term, $\mathbf{P}_{\perp}\mathbf{v}^*_{\perp}$, we observe that $ \mathbf{v}^*_{\perp} $, as given in \eqref{eq:soln_vperp}, is a multiple of either $ \mathbf{g}_{\perp} = \mathbf{P}^T_{\perp}\mathbf{g}$ or a vector $ \mathbf{u}$ with unit length, depending on the sign of $\gamma$ and the magnitude of $\mathbf{g}_\perp$. In the latter case, define $ \mathbf{u} = \frac{ \mathbf{P}^T_{\perp}\mathbf{e}_i }{ \| \mathbf{P}^T_{\perp}\mathbf{e}_i \|_2 } $, where $ i \in \left\{1,2,\ldots,k+2 \right\} $ is the first index such that $ \left\| \mathbf{P}^T_{\perp}\mathbf{e}_i \right\|_2 \neq 0 $. (Such an $\mathbf{e}_i$ exists since rank($\mathbf{P}_{\perp}) = n-m$.) 
Thus, we obtain \begin{equation} \label{eq:opt_p} \mathbf{p}^* = \mathbf{P}_{\parallel}(\mathbf{v}^*_{\parallel} - \mathbf{P}^T_{\parallel}\mathbf{w}^*) + \mathbf{w}^*, \end{equation} where \begin{equation} \label{eq:opt_w} \mathbf{w}^* = \begin{cases} -\frac{1}{\gamma}\mathbf{g} & \text{ if } \gamma > 0 \text{ and } \left \| \mathbf{g}_{\perp} \right \|_2 \le \delta |\gamma |, \\ \frac{\delta}{\left\| \mathbf{P}^T_{\perp}\mathbf{e}_i \right\|_2}\mathbf{e}_i & \text{ if } \gamma \le 0 \text{ and } \| \mathbf{g}_{\perp} \|_2 = 0, \\ -\frac{ \delta }{\| \mathbf{g}_{\perp} \|_2}\mathbf{g} & \text{ otherwise.} \end{cases} \end{equation} Algorithm~\ref{alg-w} summarizes the computation of $\mathbf{w}^*$. \begin{algorithm}[h!] \caption{Computing $\mathbf{w}^*$}\label{alg-w} \begin{algorithmic}[1] \ENSURE [$\mathbf{w}^*$, $\beta$, $\text{hasBeta}$]=\texttt{ComputeW}($\mathbf{g},\delta,\gamma,\|\mathbf{g}_\perp\|_2, \mathbf{\Pi},\mathbf{\Psi},\mathbf{R}_\ddagger,\mathbf{U},\tau,[\jb{\texttt{varargin}=\{\bfm{S}, \bfm{Y} \}}]$); \IF {$\gamma>0$ \AND $\|\mathbf{g}_\perp\|_2\le \delta\gamma$} \STATE {$\beta \gets -(1/\gamma)$, $ \text{hasBeta} \gets 1 $;} \STATE {$\mathbf{w}^*\gets \beta\mathbf{g}$;} \ELSIF{$\gamma\le 0$ \AND $\|\mathbf{g}_\perp\|_2<\tau$} \STATE {Find the first index $i$ such that $\|\mathbf{P}^T_\perp \mathbf{e}_i\|_2\ne 0$;} \STATE{$\beta \gets 0$, $ \text{hasBeta} \gets 0 $;} \STATE {$\mathbf{w}^*\gets \left(\delta/\|\mathbf{P}^T_\perp \mathbf{e}_i\|_2\right)\mathbf{e}_i$;} \ELSE \STATE {$\beta \gets -(\delta/\|\mathbf{g}_\perp\|_2)$, $ \text{hasBeta} \gets 1 $;} \STATE {$\mathbf{w}^*\gets \beta\mathbf{g}$;} \ENDIF \RETURN $\mathbf{w}^*$; \end{algorithmic} \end{algorithm} The quantities $ \left\| \mathbf{g}_{\perp} \right\|_2 $ and $\left\| \mathbf{P}^T_{\perp}\mathbf{e}_i \right\|_2$ are computed using the orthogonality of $\bfm{P}$, which implies \begin{equation} \label{eq:ng_perp} \left \| \mathbf{g}_{\parallel} \right \|^2_2 + \| \mathbf{g}_{\perp} \|^2_2 = \| \mathbf{g} \|^2_2, \words{and} \| \mathbf{P}^T_{\parallel}\mathbf{e}_i \|^2_2 + \| \mathbf{P}^T_{\perp}\mathbf{e}_i \|^2_2 = 1. \end{equation} Then $\| \mathbf{g}_{\perp} \|_2 = \sqrt{ \| \mathbf{g}\|^2_2 - \| \mathbf{g}_{\parallel} \|^2_2 } $ and $ \| \mathbf{P}^T_{\perp}\mathbf{e}_i \|_2 = \sqrt{ 1 - \|\mathbf{P}^T_{\parallel}\mathbf{e}_i \|^2_2 } $. Note that $\mathbf{v}_{\perp}^*$ is never explicitly computed. \jb{Since $ \bs{\Psi} $ is either explicitly computed when a constant initialization is used or represented through $ \bfm{S} $ and $ \bfm{Y} $, the optional input $ [\texttt{varargin}= \{\bfm{S}, \bfm{Y} \}]$ can be used to pass $ \bfm{S}, \bfm{Y} $ if $ \bs{\Psi} $ is represented implicitly.} \section{The Proposed Algorithms} In this section, we summarize Section 3 in two algorithms that solve the trust-region subproblem using the $(\mathbf{P},\infty)$ and the $(\mathbf{P},2)$ norms. The required inputs \jb{depend on the initialization strategy and often} include $\mathbf{g}$, $\mathbf{S}$, $\mathbf{Y}$, $\gamma$, and $\delta$ which define the trust-region subproblem (including the {\small L-SR1}{} matrix). The input $\tau$ is a small positive number used as a tolerance. The output of each algorithm is $\bfm{p}^*$, the solution to the trust-region subproblem in the given shape-changing norm. 
In Algorithm~\ref{alg-pinfty}, we detail the algorithm for solving (\ref{eq:trustProblem}) using the $(\mathbf{P},\infty)$ norm to define the subproblem; Algorithm~\ref{alg-p2} solves the subproblem using the $(\mathbf{P},2)$ norm. Both algorithms accept either the matrices that hold the quasi-Newton pairs, $\mathbf{S}$ and $\mathbf{Y}$, or factors for the compact formulation $\mathbf{\Psi}$ and $\mathbf{M}^{-1}$. To reflect this option, the second and third input parameters are labeled ``$S\lor\Psi$'' and ``$Y\lor M^{-1}$'' in both algorithms. If the inputs are the quasi-Newton pairs $ \bfm{S}, \bfm{Y} $, the input parameter \texttt{flag} must be ``0'', and then factors for the compact formulation are computed; if the inputs are factors for the compact formulation, \texttt{flag} must be ``1'', and then $\mathbf{\Psi}$ and $\mathbf{M}^{-1}$ are set to the second and third inputs. \jb{Another option is to pass the product $ \bs{\Psi}^T \bs{\Psi} $ and the matrix $ \bfm{M}^{-1} $ along with $ \bfm{S} $ and $ \bfm{Y} $. This can be particularly advantageous when the matrix $ \bs{\Psi}^T \bs{\Psi} $ is updated, instead of recomputed (by using e.g., Procedure 2).} \begin{algorithm}[H] \caption{The SC-SR1 algorithm for the shape-changing $(\mathbf{P},\infty)$ norm} \label{alg-pinfty} \begin{algorithmic}[1] \ENSURE [$\mathbf{p}^*$]=\texttt{sc\_sr1\_infty}($\mathbf{g}$, $S\lor\Psi$, $Y\lor M^{-1}$, $\gamma$, $\delta$, flag, $\jb{[\texttt{varargin}=\{\Psi^T\Psi, M^{-1} \}]}$) \STATE{Pick $\tau$ such that $0<\tau\ll 1$;} \IF{flag$=0$} \STATE{\jb{$\mathbf{S}\gets S\lor\Psi$ and $\mathbf{Y}\gets Y\lor M^{-1}$;}} \IF{\jb{$ \texttt{isempty(varargin)}$}} \STATE {\jb{Compute $\mathbf{\Psi}$ and $\mathbf{M}^{-1}$ as in (\ref{eqn-PsiM})}} \ELSE \STATE{ \jb{$\Psi \lor \Psi^T \Psi \gets \texttt{varargin}\{1\}$ and $\bfm{M}^{-1} \gets \texttt{varargin}\{2\}$;}} \ENDIF \ELSE \STATE {$\mathbf{\Psi}\gets S\lor\Psi$; $\mathbf{M}^{-1}\gets Y\lor M^{-1}$;} \ENDIF \STATE [$\mathbf{R}_\ddagger$, $\mathbf{\Lambda}$, $\mathbf{U}$, $\mathbf{\Pi}$, $J$]=\texttt{ComputeSpectral}($\Psi \lor \Psi^T \Psi$, $\mathbf{M}^{-1}$, $\gamma$, $\tau$); \STATE{\jeo{$m\gets|J|$ };} \STATE $\mathbf{g}_\parallel\gets \bfm{P}_\parallel^T \bfm{g}$ using (\ref{eqn-Pparallel}); \STATE{$\|\mathbf{g}_\perp\|\gets\sqrt{\|\mathbf{g}\|_2^2-\|\mathbf{g}_\parallel\|^2}$;} \IF{$\|\mathbf{g}_\perp\|<\tau$} \STATE{$\|\mathbf{g}_\perp\|\gets 0$;} \ENDIF \FOR{$i=1$ \TO $m$}\STATE{\IF{$\left|[\mathbf{g}_\parallel]_i\right|<\delta\left|[\mathbf{\Lambda}]_{ii}\right|$ \AND $[\mathbf{\Lambda}]_{ii}>\tau$} \STATE{$[\mathbf{v}_\parallel]_i\gets -[\mathbf{g}_\parallel]_i/[\mathbf{\Lambda}]_{ii}$;} \ELSIF{$\left|[\mathbf{g}_\parallel]_i\right|<\tau$ \AND $\left|[\mathbf{\Lambda}]_{ii}\right|<\tau$} \STATE{$[\mathbf{v}_\parallel]_i\gets\delta/2$;} \ELSIF{$\left|[\mathbf{g}_\parallel]_i\right|>\tau$ \AND $\left|[\mathbf{\Lambda}]_{ii}\right|<\tau$} \STATE{$[\mathbf{v}_\parallel]_i\gets -\text{sgn}\left([\mathbf{g}_\parallel]_i\right)\delta$;} \ELSIF{$\left|[\mathbf{g}_\parallel]_i\right|<\tau$ \AND $[\mathbf{\Lambda}]_{ii}<-\tau$} \STATE{$[\mathbf{v}_\parallel]_i\gets\delta$;} \ELSE \STATE{$[\mathbf{v}_\parallel]_i\gets-\left(\delta/\left|[\mathbf{g}_\parallel]_i\right|\right) [\mathbf{g}_\parallel]_i$;} \ENDIF} \ENDFOR \STATE{[$\mathbf{w}^*$,$\beta$,$\text{hasBeta}$]=\texttt{ComputeW}($\mathbf{g},\delta,\gamma,\|\mathbf{g}_\perp\|_2, \mathbf{\Pi},\mathbf{\Psi},\mathbf{R}_\ddagger,\mathbf{U},\tau,\jb{[\texttt{varargin}=\{\bfm{S},\bfm{Y}\}]} $); } 
\IF{\text{hasBeta} = 1} \STATE{$\mathbf{p}^*\gets \mathbf{P}_\parallel ( \bfm{v}_\parallel- \beta \mathbf{g}_\parallel) + \mathbf{w}^*$} \ELSE \STATE{$\mathbf{p}^*\gets \mathbf{P}_\parallel ( \bfm{v}_\parallel- \mathbf{P}_\parallel^T\mathbf{w}^*) + \mathbf{w}^*$} \ENDIF \RETURN $\mathbf{p}^*$ \end{algorithmic} \end{algorithm} The computation of $\mathbf{p}^*$ in both Algorithms~\ref{alg-pinfty} and~\ref{alg-p2} is performed as in (\ref{eq:opt_p}) using two matrix-vector products with $\mathbf{P}^T_\parallel$ and $\mathbf{P}_\parallel$, respectively, in order to avoid matrix-matrix products. Products with $\mathbf{P}_\parallel$ are done using the factors $\mathbf{\Psi}$, $\mathbf{R}$, and $\mathbf{U}$ (see Section~\ref{sec-eigs}). \begin{algorithm}[H] \footnotesize \caption{The SC-SR1 algorithm for the shape-changing $(\mathbf{P},2)$ norm} \label{alg-p2} \begin{algorithmic}[1] \ENSURE [$\mathbf{p}^*$]=\texttt{sc\_sr1\_2}($\mathbf{g}$, $S\lor\Psi$, $Y\lor M^{-1}$, $\gamma$, $\delta$, flag, $\jb{[\texttt{varargin}=\{\Psi^T\Psi, M^{-1} \}]}$) \STATE{Pick $\tau$ such that $0<\tau\ll 1$;} \IF{flag$=0$} \STATE{\jb{$\mathbf{S}\gets S\lor\Psi$ and $\mathbf{Y}\gets Y\lor M^{-1}$;}} \IF{\jb{$ \texttt{isempty(varargin)}$}} \STATE {\jb{Compute $\mathbf{\Psi}$ and $\mathbf{M}^{-1}$ as in (\ref{eqn-PsiM})}} \ELSE \STATE{ \jb{$\Psi \lor \Psi^T \Psi \gets \texttt{varargin}\{1\}$ and $\bfm{M}^{-1} \gets \texttt{varargin}\{2\}$;}} \ENDIF \ELSE \STATE {$\mathbf{\Psi}\gets S\lor\Psi$; $\mathbf{M}^{-1}\gets Y\lor M^{-1}$;} \ENDIF \STATE [$\mathbf{R}_\ddagger$, $\mathbf{\Lambda}$, $\mathbf{U}$, $\mathbf{\Pi}$, $J$]=\texttt{ComputeSpectral}($\Psi \lor \Psi^T \Psi$, $\mathbf{M}^{-1}$, $\gamma$, $\tau$); \STATE{$\mathbf{g}_\parallel\gets \bfm{P}_\parallel^T \bfm{g}$ using (\ref{eqn-Pparallel}), and $\|\mathbf{g}_\perp\|\gets\sqrt{\|\mathbf{g}\|_2^2-\|\mathbf{g}_\parallel\|^2}$ }; \IF{$\|\mathbf{g}_\perp\|<\tau$} \STATE{$\|\mathbf{g}_\perp\|\gets 0$;} \ENDIF \IF{$[\mathbf{\Lambda}]_{11}>\tau$} \IF{$\|\mathbf{\Lambda^{-1}}\mathbf{g}_\parallel\|<\delta$} \STATE{$\sigma_\parallel\gets 0$;} \ELSE \STATE{Use Newton's method with $\sigma_0=0$ to find $\sigma_\parallel$, a solution to (\ref{eq:v_sigma});} \ENDIF \STATE{$\mathbf{v}_\parallel \gets -\left(\mathbf{\Sigma}+\sigma_\parallel \bfm{I}\right)^{-1}\mathbf{g}_\parallel$;} \ELSIF{$\left|[\mathbf{\Lambda}]_{11}\right|<\tau$ } \STATE{Define $r$ to be the first $i$ such that $\left|\Lambda_{ii}\right|>\tau$;} \IF{$\left| \mathbf{g}_{ii}\right|<\tau $ for $1\le i \le r$ \AND $\|\mathbf{\Lambda}^\dagger \mathbf{g}_\parallel\|<\delta$} \STATE{$\sigma_\parallel\gets 0$;} \STATE{$\mathbf{v}_\parallel\gets -\mathbf{\Lambda}^\dagger \mathbf{g}_\parallel$;} \ELSE \STATE {$\hat{\sigma}=\text{max}_i ( [\mathbf{g}_\parallel]_i/\delta-\mathbf{\Lambda}_{ii})$;} \STATE{Use Newton's method with $\sigma_0=\hat{\sigma}$ to find $\sigma_\parallel$, a solution to (\ref{eq:v_sigma});} \STATE{$\mathbf{v}_\parallel \gets -\left(\mathbf{\Lambda}+\sigma_\parallel \bfm{I}\right)^{-1}\mathbf{g}_\parallel$;} \ENDIF \ELSE \STATE {Define $r$ to be the first $i$ such that $\left|[\Lambda]_{ii}\right|>\tau$;} \IF {$\left|\mathbf{g}_{ii}\right|<\tau $ for $1\le i \le r$ } \STATE{$\sigma_\parallel=-[\mathbf{\Lambda}]_{11}$, $\mathbf{v} \gets \left(\mathbf{\Lambda}-[\mathbf{\Lambda}]_{11}\bfm{I}\right)^\dagger \mathbf{g}_\parallel$;} \IF{$\|\mathbf{v}\|<\delta$} \STATE{$\alpha \gets \sqrt{\delta^2-\|\mathbf{v}\|^2}$;} \STATE{$\mathbf{v}_\parallel = \bfm{v}+\alpha \bfm{e}_1$, where $\bfm{e}_1$ is the 
first standard basis vector;} \ELSE \STATE{$\hat{\mathbf{\sigma}}\gets \text{max}_i ( [\mathbf{g}_\parallel]_i/\delta-[\mathbf{\Lambda}]_{ii})$;} \STATE{Use Newton's method with $\sigma_0=\text{max}(\hat{\sigma},0)$ to find $\sigma_\parallel$, a solution to (\ref{eq:v_sigma});} \STATE{$\mathbf{v}_\parallel \gets -\left(\mathbf{\Lambda}+\sigma_\parallel \bfm{I}\right)^{-1}\mathbf{g}_\parallel$;} \ENDIF \ELSE \STATE{$\hat{\mathbf{\sigma}}\gets \text{max}_i ( [\mathbf{g}_\parallel]_i/\delta-[\mathbf{\Lambda}]_{ii})$;} \STATE{Use Newton's method with $\sigma_0=\text{max}(\hat{\sigma},0)$ to find $\sigma_\parallel$, a solution to (\ref{eq:v_sigma});} \STATE{$\mathbf{v}_\parallel \gets -\left(\mathbf{\Lambda}+\sigma_\parallel \bfm{I}\right)^{-1}\mathbf{g}_\parallel$;} \ENDIF \ENDIF \STATE{[$\mathbf{w}^*$,$\beta$,$\text{hasBeta}$]=\texttt{ComputeW}($\mathbf{g},\delta,\gamma,\|\mathbf{g}_\perp\|_2, \mathbf{\Pi},\mathbf{\Psi},\mathbf{R}_\ddagger,\mathbf{U},\tau,\jb{[\texttt{varargin}=\{\bfm{S},\bfm{Y}\}]} $); } \IF{\text{hasBeta} = 1} \STATE{$\mathbf{p}^*\gets \mathbf{P}_\parallel ( \bfm{v}_\parallel- \beta \mathbf{g}_\parallel) + \mathbf{w}^*$} \ELSE \STATE{$\mathbf{p}^*\gets \mathbf{P}_\parallel ( \bfm{v}_\parallel- \mathbf{P}_\parallel^T\mathbf{w}^*) + \mathbf{w}^*$} \ENDIF \end{algorithmic} \end{algorithm} \jb{Besides the optional arguments,} the {\small MATLAB} implementation of both algorithms have an additional input and output variable. The additional input variable is a verbosity setting; the additional output variable is a flag that reveals whether there were any detected run-time errors. \jb{As described in Section \ref{sec-lm} the limited-memory updating techniques vary with the choice of initialization strategy $ \bfm{B}^{(k)}_0 = \gamma_k \bfm{I} $. A commonly used value is $ \gamma_k = \frac{ \| \bk{y} \|^2 }{ \bk{s}^T \bk{y} } $ \cite{bb88}. Recall from Section \ref{sec-eigs} that $ \gamma_k $ is the eigenvalue for the large $ n-m $ dimensional subspace, spanned by the eigenvector in $ \bfm{P}_{\perp} $. At a local minimum all eigenvalues of $ \nabla^2 f(\bfm{x}) $ will be non-negative, motivating non-negative values of $\gamma_k$. For our implementation we tested three different strategies, one of which uses a constant initialization (C Init.) \begin{equation} \label{eq:init} \gamma_k = \begin{cases} \text{max}\left(\text{min}\left(\frac{\| \bfm{y}_0 \|^2}{\bfm{s}^T_0 \bfm{y}_0}, \gamma_{\text{max}}\right),1\right) & \text{C Init.} \\ \frac{\| \bk{y} \|^2}{\bk{s}^T \bk{y}} \: \left(\text{if } \bk{s}^T \bk{y} > 0 \right) & \text{Init. 1} \\ \text{max}\left(\frac{\| \bfm{y}_{k-q} \|^2}{\bfm{s}^T_{k-q} \bfm{y}_{k-q}}, \cdots, \frac{\| \bk{y} \|^2}{\bk{y}^T \bk{s}} \right) & \text{Init. 2} \end{cases} \end{equation} Observe that Init. 2 includes the additional parameter $ q > 0 $, which determines the number of pairs $\{\bfm{s}_i,\bfm{y}_i\}$ to use. For C Init., the parameter $\gamma_{\text{max}}$ ensures that the constant initialization, which uses $ \bfm{s}_0, \bfm{y}_0 $ does not exceed this threshold value. In the experiments the parameter is set as $ \gamma_{\text{max}} = 1 \times 10^4 $.} \subsection{Computational Complexity} \label{subsec:cc} We estimate the cost of one iteration using the proposed method to solve the trust-region subproblem defined by shape-changing norms (\ref{eq:sc_2}) and (\ref{eq:sc_inf}). We make the practical assumption that $\gamma>0$. 
Computational savings can be achieved by reusing previously computed matrices and by not forming certain matrices explicitly. We begin by highlighting the \jb{case when a non-constant initialization strategy is used.} First, we do not form $\mathbf{\Psi}=\mathbf{Y}-\gamma\mathbf{S}$ explicitly. Rather, we compute matrix-vector products with $\mathbf{\Psi}$ by computing matrix-vector products with $\mathbf{Y}$ and $\mathbf{S}$. Second, to form $\mathbf{\Psi}^T\mathbf{\Psi}$, we only store and update the small $m \times m$ matrices $\mathbf{Y}^T\bfm{Y}$, $\mathbf{S}^T\bfm{Y}$, and $\mathbf{S}^T\bfm{S}$. This update involves only $3m$ vector inner products. Third, assuming we have already obtained the Cholesky factorization of $\mathbf{\Psi}^T\mathbf{\Psi}$ associated with the previously-stored limited-memory pairs, it is possible to update the Cholesky factorization of the new $\mathbf{\Psi}^T\mathbf{\Psi}$ at a cost of $O(m^2)$ \cite{Ben65,GilGMS74}.
We now consider the dominant cost for a single subproblem solve. The eigendecomposition $\mathbf{R_{\dagger}\Pi}^T \bfm{M\Pi R_{\dagger}} = \mathbf{U} \mathbf{\hat{\Lambda}} \mathbf{U}^T$ costs $O(m^3) = \left ( \frac{m^2}{n} \right ) O(mn)$, where $m \ll n$. To compute $\mathbf{p}^*$ in \eqref{eq:opt_p}, one needs to compute $\mathbf{v}^*$ from Section \ref{sec:v_parallel} and $\mathbf{w}^*$ from \eqref{eq:opt_w}. The dominant cost for computing $\mathbf{v}^*$ and $\mathbf{w}^*$ is forming $\mathbf{\Psi}^T\mathbf{g}$, which requires $\jb{2mn}$ operations. \jb{Note that both $ \bfm{v}^*_{\parallel} $ and $ \bfm{P}^T_{\parallel} \bfm{w}^* $ are typically computed from $ \bfm{P}^T_{\parallel} \bfm{g} $, whose main operation is $ \bs{\Psi}^T \bfm{g} $. Subsequently, computing $ \bfm{P}_{\parallel}( \bfm{v}^*_{\parallel} - \bfm{P}_{\parallel}^T \bfm{w}^*) $ incurs $O(2mn)$ additional multiplications, as this operation reduces to $ \bs{\Psi} \bfm{f} $ for a vector $ \bfm{f} $. Thus, the dominant complexity is $ O(2mn + 2mn) = O(4mn) $.} The following theorem summarizes the dominant computational costs.
\begin{theorem}
The dominant computational cost of solving one trust-region subproblem for the proposed method is $4mn$ floating point operations.
\end{theorem}
We note that the floating point operation count of $O(4mn)$ is the same cost as for {\small L-BFGS}~\cite{Noc80}. \jb{If a constant initialization is used, the complexity can essentially be halved, because the matrix-vector products $ \bs{\Psi}^T \bfm{g} $ and $ \bs{\Psi} \bfm{f} $ (for some vector $ \bfm{f} $) each take $ O(mn) $ multiplications, for a total of $ O(2mn) $.}
\subsection{Characterization of global solutions}
It is possible to characterize global solutions of the trust-region subproblem defined by the shape-changing $\left(\mathbf{P},2\right)$ norm. The following theorem is based on well-known optimality conditions for the two-norm trust-region subproblem~\cite{Gay81,MorS83}.
\begin{theorem}
\label{thrm-charact}
A vector $\mathbf{p}^*\in \mathbb{R}^n$ such that $ \left\| \mathbf{P}^T_{\parallel} \mathbf{p}^* \right\|_2 \le \delta$ and $ \left\| \mathbf{P}^T_{\perp} \mathbf{p}^* \right\|_2\le \delta$ is a global solution of (\ref{eq:trustProblem}) defined by the $\left(\mathbf{P},2 \right)$-norm if and only if there exist unique $\sigma_\parallel^*\ge 0$ and $\sigma_\perp^*\ge 0$ such that
\begin{eqnarray}
\label{eq:optim_sc2}
\left( \mathbf{B} + \mathbf{C}_{\parallel} \right)\mathbf{p}^* +\mathbf{g} = 0, \quad \sigma^*_{\parallel} \left( \left\| \mathbf{P}^T_{\parallel}\mathbf{p}^* \right \|_2 - \delta \right) = 0, \quad \sigma^*_{\perp} \left( \left \|\mathbf{P}_\perp^T\mathbf{p}^*\right\|_2 - \delta \right) = 0,
\end{eqnarray}
where $ \mathbf{C}_{\parallel} \defined \sigma^*_{\perp}\mathbf{I} + \left( \sigma^*_{\parallel} - \sigma^*_{\perp} \right)\mathbf{P}_{\parallel}\mathbf{P}^T_{\parallel}$, the matrix $\mathbf{B+C_\parallel}$ is positive semi-definite, and $ \mathbf{P}=[\mathbf{P}_\parallel \,\, \mathbf{P}_\perp]$ and $\Lambda=\diag(\lambda_1,\ldots,\lambda_{m})=\hat{\Lambda}+\gamma\mathbf{I}$ are as in (\ref{eqn-PL}).
\end{theorem}
When run in ``verbose'' mode, \texttt{sc\_sr1\_2.m} returns the values needed to establish the optimality of $\mathbf{p}^*$ using this theorem. In particular, the code computes $\sigma^*_\parallel$, which, depending on the case, is either $0$, the absolute value of the most negative eigenvalue, or the value obtained from Newton's method. The code also computes $\sigma^*_\perp$ using (\ref{eq:soln_sigmaperp}), and $\|\mathbf{P}^T_\perp \bfm{p}^*\|$ is computed by noting that $\|\mathbf{P}^T_\perp \bfm{p}^*\|_2^2= \|\mathbf{p}^*\|^2_2-\|\mathbf{P}^T_\parallel \bfm{p}^*\|^2_2$. The variables \texttt{opt1}, \texttt{opt2}, and \texttt{opt3} contain the errors in each of the equations in (\ref{eq:optim_sc2}); \texttt{spd\_check} computes the minimum eigenvalue of $\mathbf{B}+\mathbf{C}_\parallel$ in (\ref{eq:optim_sc2}), enabling one to verify that $\mathbf{B}+\mathbf{C}_\parallel$ is positive semi-definite; and $\sigma^*_\parallel$ and $\sigma^*_\perp$ are displayed to verify that they are nonnegative.
\section{Numerical experiments}
In this section, we report on numerical experiments with the proposed shape-changing SR1 ({\small SC-SR1}) algorithm implemented in {\small MATLAB}{} to solve limited-memory SR1 trust-region subproblems. \jb{The experiments are divided into (i) solving the TR subproblems with Algorithms \ref{alg-pinfty} and \ref{alg-p2} and (ii) solving general unconstrained minimization problems, which use the TR subproblem solvers, on 62 large-scale CUTEst problems \cite{GouOT03}.}
\subsection{$(\mathbf{P},2)$-norm results}
The {\small SC-SR1}{} algorithm was tested on randomly-generated problems of size $n=10^3$ to $n=10^7$, organized as five experiments in which there is no closed-form solution to the shape-changing trust-region subproblem and one experiment designed to test the {\small SC-SR1}{} method in the so-called ``hard case''. These six cases occur only when using the $(\mathbf{P},2)$-norm trust region. (In the case of the $(\mathbf{P},\infty)$ norm, $\mathbf{v}_\parallel^*$ has the closed-form solution given by (\ref{eq:vpar_inf}).) The six experiments are outlined as follows:
\begin{enumerate}
\item[(E1)] {$\mathbf{B}$ is positive definite with $\| \mathbf{v}_\parallel(0) \|_2 \ge \delta$}.
\item[(E2)] {$\mathbf{B}$ is positive semidefinite and singular with $[\mathbf{g}_{\parallel}]_i \ne 0 $ for some $ 1 \le i \le r $.}
\item[(E3)] {$\mathbf{B}$ is positive semidefinite and singular with $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ and $\| \Lambda^{\dagger} \mathbf{g}_{\parallel} \|_2 > \delta $.}
\item[(E4)] {$\mathbf{B}$ is indefinite and $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ with $ \| ( \Lambda -\lambda_1\mathbf{I} )^{\dagger}\mathbf{g}_{\parallel} \|_2 > \delta $.}
\item[(E5)] {$\mathbf{B}$ is indefinite and $ [ \mathbf{g}_{\parallel} ]_i \ne 0$ for some $ 1 \le i \le r $.}
\item[(E6)] {$\mathbf{B}$ is indefinite and $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ with $\| \mathbf{v}_{\parallel} ( -\lambda_1 )\|_2 \le \delta $ (the ``hard case'').}
\end{enumerate}
For these experiments, $\mathbf{S}$, $\mathbf{Y}$, and $\mathbf{g}$ were randomly generated and then altered to satisfy the requirements described above for each experiment. In experiments (E2) and (E5), $\delta$ was chosen as a random number. (In the other experiments, $\delta$ was set in accordance with the experiments' rules.) All randomly-generated vectors and matrices were formed using the {\small MATLAB}{} \texttt{randn} command, which draws from the standard normal distribution. The initial {\small SR1}{} matrix was set to $\mathbf{B}_0=\gamma \mathbf{I}$, where $\gamma=|10*\texttt{randn(1)}|$. Finally, the number of limited-memory updates $m$ was set to 5, and $r$ was set to 2.
In the five cases in which there is no closed-form solution, {\small SC-SR1}{} uses Newton's method to find a root of $\phi_\parallel$. We use the same procedure as in \cite[Algorithm 2]{BruEM15} to initialize Newton's method since it guarantees monotonic and quadratic convergence to $\sigma^*_{\parallel}$. The Newton iteration was terminated when the $i$th iterate satisfied $|\phi_\parallel(\sigma^i)|\le \texttt{eps}\cdot|\phi_\parallel(\sigma^0)| + \sqrt{\texttt{eps}}$, where $\sigma^0$ denotes the initial iterate for Newton's method and $\texttt{eps}$ is machine precision. This stopping criterion combines a relative and an absolute test, and it is the only stopping criterion used by {\small SC-SR1}. (A minimal sketch of this iteration is given below.)
In order to report on the accuracy of the subproblem solves, we make use of the optimality conditions found in Theorem~\ref{thrm-charact}. For each experiment, we report the following: (i) the norm of the residual of the first optimality condition, \texttt{opt 1} $\defined \| (\mathbf{B} + \mathbf{C}_{\parallel}) \mathbf{p}^* + \mathbf{g} \|_2$; (ii) the first complementarity condition, \texttt{opt 2} $\defined |\sigma^*_{\parallel} ( \|\mathbf{P}^T_{\parallel}\mathbf{p}^* \|_2 - \delta ) |$; (iii) the second complementarity condition, \texttt{opt 3} $\defined |\sigma^*_{\perp} ( \|\mathbf{P}^T_{\perp}\mathbf{p}^* \|_2 - \delta ) |$; (iv) the minimum eigenvalue of $\mathbf{B}+\mathbf{C}_\parallel$; (v) $\sigma_\parallel^*$; (vi) $\sigma_\perp^*$; (vii) $\gamma$; and (viii) time. The quantities (i)-(vi) are reported to check the optimality conditions given in Theorem~\ref{thrm-charact}. Finally, we ran each experiment five times and report one representative result for each experiment.
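For concreteness, the following is a minimal {\small MATLAB}{} sketch of the Newton iteration and stopping rule described above. It is an illustrative sketch only: the inputs \texttt{abar} and \texttt{lbar} denote the coefficients $\bar{a}_i$ and the distinct eigenvalues $\bar{\lambda}_i$ appearing in $\phi_\parallel$, the starting value \texttt{sigma0} is assumed to be chosen as in \cite[Algorithm 2]{BruEM15}, and the actual implementation in \texttt{sc\_sr1\_2.m} may differ.
\begin{verbatim}
% Newton's method for the secular equation phi_par(sigma) = 0.
% abar, lbar : coefficients and distinct eigenvalues defining phi_par;
% delta      : trust-region radius;
% sigma0     : initial iterate, sigma0 >= max(0, -lambda_1).
function sigma = newton_secular(abar, lbar, delta, sigma0)
  phi  = @(s) 1/sqrt(sum(abar.^2 ./ (lbar + s).^2)) - 1/delta;
  dphi = @(s) sum(abar.^2 ./ (lbar + s).^3) ...
              * (sum(abar.^2 ./ (lbar + s).^2))^(-3/2);
  sigma = sigma0;
  tol   = eps*abs(phi(sigma0)) + sqrt(eps);   % stopping rule stated above
  while abs(phi(sigma)) > tol
    sigma = sigma - phi(sigma)/dphi(sigma);   % Newton step
  end
end
\end{verbatim}
Once $\sigma^*_{\parallel}$ has been found in this way, $\mathbf{v}^*_{\parallel}$ is recovered from \eqref{eq:v_secularsoln}.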
\begin{table}[!h] \setlength\tabcolsep{.95mm} \caption{ Experiment 1: $\mathbf{B}$ is positive definite with $\| \mathbf{v}_\parallel(0) \|_2 \ge \delta$.} { \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda(B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline $1\times 10^3$& \texttt{2.45e-14} & \texttt{0.00e+00} & \texttt{2.45e-14} & \texttt{4.33e+01} & \texttt{1.09e+01} & \texttt{5.89e+02}& \texttt{1.63e+01} & \texttt{9.97e-03} \\ \hline $ 1\times 10^4 $& \texttt{1.21e-13} & \texttt{2.82e-16} & \texttt{4.26e-13} & \texttt{3.25e+01} & \texttt{8.14e+00} & \texttt{1.98e+03}& \texttt{1.22e+01} & \texttt{1.55e-03} \\ \hline $1\times 10^5$& \texttt{5.32e-13} & \texttt{2.28e-16} & \texttt{1.40e-13} & \texttt{2.19e+01} & \texttt{5.47e+00} & \texttt{5.05e+03}& \texttt{8.14e+00} & \texttt{4.49e-03} \\ \hline $1\times 10^6$& \texttt{3.56e-12} & \texttt{5.51e-16} & \texttt{2.05e-11} & \texttt{1.44e+01} & \texttt{3.61e+00} & \texttt{9.57e+03}& \texttt{5.32e+00} & \texttt{8.03e-02} \\ \hline $1\times 10^7$& \texttt{1.46e-11} & \texttt{1.16e-11} & \texttt{3.64e-11} & \texttt{4.07e+01} & \texttt{1.02e+01} & \texttt{5.52e+04}& \texttt{1.52e+01} & \texttt{9.66e-01} \\ \hline \end{tabular} } \end{table} \setlength\tabcolsep{.95mm} \begin{table}[!h] \caption{ Experiment 2: $\mathbf{B}$ is positive semidefinite and singular and $[\mathbf{g}_{\parallel}]_i \ne 0 $ for some $ 1 \le i \le r $.} { \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda (B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline $1\times 10^3$& \texttt{1.14e-14} & \texttt{0.00e+00} & \texttt{0.00e+00} & \texttt{9.19e+00} & \texttt{9.19e+00} & \texttt{5.45e+02}& \texttt{1.82e+01} & \texttt{3.24e-03} \\ \hline $ 1\times 10^4 $& \texttt{4.24e-14} & \texttt{1.39e-11} & \texttt{1.29e-13} & \texttt{6.55e+00} & \texttt{6.55e+00} & \texttt{3.86e+02}& \texttt{5.33e-01} & \texttt{2.81e-03} \\ \hline $1\times 10^5$& \texttt{4.02e-13} & \texttt{9.37e-14} & \texttt{2.04e-12} & \texttt{2.81e+00} & \texttt{2.81e+00} & \texttt{8.56e+02}& \texttt{1.16e+01} & \texttt{1.80e-02} \\ \hline $1\times 10^6$& \texttt{2.53e-12} & \texttt{3.54e-15} & \texttt{3.55e-11} & \texttt{2.65e+00} & \texttt{2.65e+00} & \texttt{2.01e+03}& \texttt{1.86e+01} & \texttt{8.18e-02} \\ \hline $1\times 10^7$& \texttt{1.77e-11} & \texttt{1.61e-11} & \texttt{2.44e-10} & \texttt{4.90e+00} & \texttt{4.90e+00} & \texttt{6.29e+03}& \texttt{9.44e+00} & \texttt{9.51e-01} \\ \hline \end{tabular} } \end{table} \setlength\tabcolsep{.95mm} \begin{table}[!h] \caption{ Experiment 3: $\mathbf{B}$ is positive semidefinite and singular with $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ and $\| \Lambda^{\dagger} \mathbf{g}_{\parallel} \|_2 > \delta $.} { \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda(B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline $1\times 10^3$& \texttt{1.38e-14} & \texttt{1.35e-09} & \texttt{1.21e-14} & \texttt{1.99e+00} & \texttt{1.99e+00} & \texttt{1.45e+02}& \texttt{2.80e+00} & \texttt{3.84e-03} \\ \hline $ 1\times 10^4 $& \texttt{7.38e-14} & \texttt{2.98e-17} & \texttt{4.35e-13} & \texttt{8.60e+00} & \texttt{8.60e+00} & \texttt{3.80e+03}& \texttt{1.29e+01} & \texttt{2.03e-03} \\ \hline 
$1\times 10^5$& \texttt{1.73e-13} & \texttt{8.84e-17} & \texttt{4.17e-12} & \texttt{3.19e+00} & \texttt{3.19e+00} & \texttt{3.19e+03}& \texttt{4.67e+00} & \texttt{6.31e-03} \\ \hline
$1\times 10^6$& \texttt{2.04e-12} & \texttt{1.22e-11} & \texttt{4.25e-11} & \texttt{8.57e+00} & \texttt{8.57e+00} & \texttt{2.97e+04}& \texttt{1.28e+01} & \texttt{7.37e-02} \\ \hline
$1\times 10^7$& \texttt{3.98e-11} & \texttt{7.53e-11} & \texttt{2.42e-10} & \texttt{4.47e+00} & \texttt{4.47e+00} & \texttt{2.25e+04}& \texttt{6.63e+00} & \texttt{9.42e-01} \\ \hline
\end{tabular} } \end{table}
\setlength\tabcolsep{0.95mm}
\begin{table}[!h]
\caption{ Experiment 4: $\mathbf{B}$ is indefinite and $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ with $ \| ( \Lambda-\lambda_1\mathbf{I} )^{\dagger}\mathbf{g}_{\parallel} \|_2 > \delta $.}
{ \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
$n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda(B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline
$1\times 10^3$& \texttt{1.95e-14} & \texttt{2.57e-16} & \texttt{0.00e+00} & \texttt{2.34e+00} & \texttt{3.09e+00} & \texttt{2.38e+02}& \texttt{3.04e+00} & \texttt{3.03e-03} \\ \hline
$ 1\times 10^4 $& \texttt{8.69e-14} & \texttt{2.16e-16} & \texttt{0.00e+00} & \texttt{2.18e+00} & \texttt{2.59e+00} & \texttt{4.63e+02}& \texttt{2.91e+00} & \texttt{6.16e-03} \\ \hline
$1\times 10^5$& \texttt{2.52e-13} & \texttt{4.65e-17} & \texttt{1.72e-12} & \texttt{1.33e+01} & \texttt{1.34e+01} & \texttt{2.15e+04}& \texttt{1.98e+01} & \texttt{6.44e-03} \\ \hline
$1\times 10^6$& \texttt{4.45e-12} & \texttt{1.24e-12} & \texttt{1.91e-11} & \texttt{7.02e+00} & \texttt{7.21e+00} & \texttt{2.58e+04}& \texttt{1.04e+01} & \texttt{6.93e-02} \\ \hline
$1\times 10^7$& \texttt{2.52e-11} & \texttt{5.27e-10} & \texttt{7.46e-11} & \texttt{1.02e+00} & \texttt{1.21e+00} & \texttt{1.71e+04}& \texttt{8.35e-01} & \texttt{9.23e-01} \\ \hline
\end{tabular} } \end{table}
Tables I-VI show the results of the experiments. In all tables, the residuals of the optimality conditions \texttt{opt 1}, \texttt{opt 2}, and \texttt{opt 3} are on the order of $1\times10^{-10}$ or smaller. The column reporting $\min(\lambda(\mathbf{B}+\mathbf{C}_\parallel))$ shows that $\mathbf{B}+\mathbf{C}_\parallel$ is positive semidefinite in every case, and the $\sigma_{\parallel}^*$ and $\sigma_\perp^*$ columns show that both multipliers are nonnegative. Thus, the solutions obtained by {\small SC-SR1}{} for these experiments satisfy the optimality conditions to high accuracy. Also recorded for each experiment is the number of Newton iterations. In the first five experiments, no more than four Newton iterations were required to obtain $\sigma_\parallel$ to high accuracy. In the hard case, no Newton iterations are required since $\sigma_\parallel^*=-\lambda_1$; this is reflected in Table VI, where the minimum eigenvalue of $\mathbf{B}+\mathbf{C}_\parallel$ is zero, consistent with $\sigma_\parallel^*=-\lambda_1$. The final column reports the time required by {\small SC-SR1}{} to solve each subproblem. Consistent with the best limited-memory methods, as $n$ gets large, the time required to solve each subproblem appears to grow linearly with $n$, as predicted in Section~\ref{subsec:cc}.
Additional experiments were run with $\mathbf{g}_\parallel\rightarrow 0$. In particular, the experiments were rerun with $\bfm{g}$ scaled by factors of $10^{-2}, 10^{-4},$ $10^{-6}$, $10^{-8}$, and $10^{-10}$.
All experiments resulted in tables similar to those in Tables I-VI: the optimality conditions were satisfied to high accuracy, no more than three Newton iterations were required in any experiment to find $\sigma_\parallel^*$, and the CPU times are similar to those found in the tables. \setlength\tabcolsep{0.95mm} \begin{table}[!h] \caption{ Experiment 5: $\mathbf{B}$ is indefinite and $ [ \mathbf{g}_{\parallel} ]_i \ne 0$ for some $ 1 \le i \le r $.} { \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda(B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline $1\times 10^3$& \texttt{9.11e-15} & \texttt{5.14e-16} & \texttt{4.35e-15} & \texttt{7.54e-01} & \texttt{1.16e+00} & \texttt{1.31e+01}& \texttt{2.27e+01} & \texttt{5.60e-03} \\ \hline $ 1\times 10^4 $& \texttt{6.04e-14} & \texttt{8.71e-12} & \texttt{1.25e-13} & \texttt{1.88e+00} & \texttt{2.23e+00} & \texttt{1.41e+02}& \texttt{4.15e+00} & \texttt{1.75e-03} \\ \hline $1\times 10^5$& \texttt{3.16e-13} & \texttt{3.27e-11} & \texttt{2.36e-12} & \texttt{6.23e-01} & \texttt{1.24e+00} & \texttt{3.86e+02}& \texttt{4.89e+00} & \texttt{7.91e-03} \\ \hline $1\times 10^6$& \texttt{1.19e-12} & \texttt{0.00e+00} & \texttt{2.82e-11} & \texttt{3.01e+00} & \texttt{3.59e+00} & \texttt{1.89e+03}& \texttt{1.77e+01} & \texttt{7.00e-02} \\ \hline $1\times 10^7$& \texttt{5.25e-11} & \texttt{1.02e-14} & \texttt{1.30e-10} & \texttt{7.37e-01} & \texttt{1.43e+00} & \texttt{4.32e+03}& \texttt{1.48e+01} & \texttt{9.40e-01} \\ \hline\end{tabular} } \end{table} \setlength\tabcolsep{0.95mm} \begin{table}[!h] \caption{ Experiment 6: $\mathbf{B}$ is indefinite and $[\mathbf{g}_{\parallel}]_i = 0 $ for $ 1 \le i \le r $ with $\| \mathbf{v}_{\parallel} ( -\lambda_1 )\|_2 \le \delta $ (the ``hard case'').} { \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & \texttt{opt 1} & \texttt{opt 2} & \texttt{opt 3} & \texttt{min($\lambda(B+C_\parallel)$)} & $ \sigma^*_{\parallel}$ & $\sigma^*_{\perp} $ & $ \gamma $ & time \\ \hline $1\times 10^3$& \texttt{1.58e-14} & \texttt{1.21e-17} & \texttt{2.83e-14} & \texttt{0.00e+00} & \texttt{1.09e-01} & \texttt{1.45e+02}& \texttt{1.19e+00} & \texttt{2.06e-03} \\ \hline $ 1\times 10^4 $& \texttt{9.07e-14} & \texttt{2.65e-17} & \texttt{2.62e-13} & \texttt{0.00e+00} & \texttt{3.19e-01} & \texttt{4.49e+02}& \texttt{9.14e+00} & \texttt{1.31e-03} \\ \hline $1\times 10^5$& \texttt{8.34e-13} & \texttt{8.80e-17} & \texttt{1.86e-12} & \texttt{0.00e+00} & \texttt{1.67e-01} & \texttt{1.45e+03}& \texttt{5.04e+00} & \texttt{4.45e-03} \\ \hline $1\times 10^6$& \texttt{3.87e-12} & \texttt{7.21e-17} & \texttt{5.46e-12} & \texttt{0.00e+00} & \texttt{1.30e-01} & \texttt{3.51e+03}& \texttt{3.31e+00} & \texttt{6.77e-02} \\ \hline $1\times 10^7$& \texttt{4.19e-11} & \texttt{1.30e-17} & \texttt{3.05e-10} & \texttt{0.00e+00} & \texttt{2.68e-02} & \texttt{2.81e+04}& \texttt{1.19e+01} & \texttt{9.45e-01} \\ \hline \end{tabular} } \end{table} \subsection{$(\mathbf{P},\infty)$-norm results} The {\small SC-SR1}{} method was tested on randomly-generated problems of size $n=10^3$ to $n=10^7$, organized as five experiments that test the cases enumerated in Algorithm~\ref{alg-pinfty}. 
Since Algorithm~\ref{alg-pinfty} proceeds componentwise (i.e., the components of $\mathbf{g}_\parallel$ and $\mathbf{\Lambda}$ determine how the algorithm proceeds), the experiments were designed to ensure that at least one randomly-chosen component satisfied the conditions of the given experiment. The five experiments are as follows:
\begin{enumerate}
\item[(E1)] {$\left|[\mathbf{g}_\parallel]_i\right|<\delta\left|[\mathbf{\Lambda}]_{ii}\right|$ and $[\mathbf{\Lambda}]_{ii}>\tau$.}
\item[(E2)] {$\left|[\mathbf{g}_\parallel]_i\right|<\tau$ and $\left|[\mathbf{\Lambda}]_{ii}\right|<\tau$.}
\item[(E3)] {$\left|[\mathbf{g}_\parallel]_i\right|>\tau$ and $\left|[\mathbf{\Lambda}]_{ii}\right|<\tau$.}
\item[(E4)] {$\left|[\mathbf{g}_\parallel]_i\right|<\tau$ and $[\mathbf{\Lambda}]_{ii}<-\tau$.}
\item[(E5)] {$\left|[\mathbf{g}_\parallel]_i\right|>\delta\left|[\mathbf{\Lambda}]_{ii}\right|$ and $\left|[\mathbf{\Lambda}]_{ii}\right|>\tau$.}
\end{enumerate}
For these experiments, $\mathbf{S}$, $\mathbf{Y}$, and $\mathbf{g}$ were randomly generated and then altered to satisfy the requirements described above for each experiment. In (E2)-(E4), $\delta$ was chosen as a random number (in the other experiments, it was set in accordance with the experiments' rules). All randomly-generated vectors and matrices were formed using the {\small MATLAB}{} \texttt{randn} command, which draws from the standard normal distribution. The initial {\small SR1}{} matrix was set to $\mathbf{B}_0=\gamma \mathbf{I}$, where $\gamma=|10*\texttt{randn(1)}|$. Finally, the number of limited-memory updates $m$ was set to 5, and, for simplicity, the randomly-chosen $i$ (that defines (E1)-(E5)) was taken to be an integer in the range $[1, 5]$.
\begin{table}[h!]
\setlength\tabcolsep{1.5mm}
\caption{ Results using the $(\mathbf{P},\infty)$ norm.}
{ \centering \begin{tabular}{|l|c|c|c|} \hline
& $n$ & $ \gamma $ & time \\ \hline
Experiment 1 & $1\times 10^3$& \texttt{8.76e+00} & \texttt{1.34e-03} \\ \hline
&$ 1\times 10^4$& \texttt{1.80e-01} & \texttt{1.21e-03} \\ \hline
&$1\times 10^5$& \texttt{7.39e+00} & \texttt{6.71e-03} \\ \hline
&$1\times 10^6$& \texttt{2.13e-02} & \texttt{1.12e-01} \\ \hline
&$1\times 10^7$& \texttt{1.11e+01} & \texttt{1.51e+00} \\ \hline
Experiment 2 &$1\times 10^3$& \texttt{4.47e+00} & \texttt{1.05e-03} \\ \hline
& $ 1\times 10^4 $& \texttt{6.38e+00} & \texttt{8.74e-04} \\ \hline
&$1\times 10^5$& \texttt{1.10e+00} & \texttt{7.37e-03} \\ \hline
&$1\times 10^6$& \texttt{2.74e+00} & \texttt{7.94e-02} \\ \hline
&$1\times 10^7$& \texttt{8.30e-01} & \texttt{1.39e+00} \\ \hline
Experiment 3 &$1\times 10^3$& \texttt{2.09e+01} & \texttt{1.07e-03} \\ \hline
& $ 1\times 10^4 $& \texttt{4.67e+00} & \texttt{9.63e-04} \\ \hline
&$1\times 10^5$& \texttt{1.39e+01} & \texttt{6.63e-03} \\ \hline
&$1\times 10^6$& \texttt{1.76e+01} & \texttt{7.38e-02} \\ \hline
&$1\times 10^7$& \texttt{1.51e+01} & \texttt{1.45e+00} \\ \hline
Experiment 4 &$1\times 10^3$& \texttt{1.08e+01} & \texttt{1.43e-03} \\ \hline
& $ 1\times 10^4 $& \texttt{1.34e+01} & \texttt{1.06e-03} \\ \hline
&$1\times 10^5$& \texttt{7.43e+00} & \texttt{1.23e-02} \\ \hline
&$1\times 10^6$& \texttt{3.16e+00} & \texttt{9.00e-02} \\ \hline
&$1\times 10^7$& \texttt{2.22e+00} & \texttt{1.41e+00} \\ \hline
Experiment 5 &$1\times 10^3$& \texttt{1.04e+01} & \texttt{1.15e-03} \\ \hline
& $ 1\times 10^4 $& \texttt{1.74e+01} & \texttt{9.40e-04} \\ \hline
&$1\times 10^5$& \texttt{4.38e+00} & \texttt{1.15e-02} \\ \hline
&$1\times 10^6$& \texttt{5.21e+00} & \texttt{9.05e-02} \\ \hline
&$1\times 10^7$& \texttt{2.01e+00} & \texttt{1.40e+00} \\ \hline
\end{tabular} } \end{table}
Table VII displays the results of the five experiments. Each experiment was run five times; the results of the third run are reported in Table VII. In all cases, the results of the third run were representative of all runs. The first column of the table denotes the experiment, the second column displays the size of the problem, and the third column reports the value of $\gamma$. Finally, the fourth column reports the time taken to obtain the solution.
\subsection{Trust-Region Algorithm}
\label{sec:TRalg}
\jb{In this experiment, we embed the TR subproblem solvers in a trust-region algorithm to solve unconstrained optimization problems.} In particular, we implemented our subproblem solvers in an algorithm that is based on \cite[Algorithm 6.2]{NocW06}. A feature of this algorithm is that the L-SR1 matrix is updated by every pair $\{(\mathbf{s}_i,\mathbf{y}_i)\}^k_{\jbo{i=k-m+1}} $ as long as $ |\mathbf{s}^T_i(\mathbf{y}_i - \mathbf{B}_{0}\mathbf{s}_i)| \ge \jbo{ \| \mathbf{s}_i \|_2 \| \mathbf{y}_i - \mathbf{B}_{i}\mathbf{s}_i \|_2} \epsilon_{\text{SR1}} $ (updates are skipped if this condition is not met). \jbo{If a full-memory strategy is used (i.e., $m=\infty$),} then the \jbo{SR1} matrix is updated by almost every pair $\{(\mathbf{s}_i,\mathbf{y}_i)\} $ in order to help achieve the superlinear convergence rate of quasi-Newton methods, \jb{in contrast to updating the matrix only when a step is accepted}. An outline of our trust-region method implementation (Algorithm 5) is included in the Appendix.
\jb{In our comparisons we use the following four algorithms to solve the TR subproblems:
\begin{center}
\begin{tabular}{l l}
TR:SC-INF & Algorithm \ref{alg-pinfty} \\
TR:SC-L2 & Algorithm \ref{alg-p2} \\
TR:L2 & $\ell_2$-norm \cite[Algorithm 1]{BruEM15} \\
TR:CG & truncated CG \cite[Algorithm 7.5.1]{ConGT00a} \\
\end{tabular}
\end{center}
Initially, we also included a fifth algorithm, LSTRS \cite{RSS08}, which performed markedly worse than all of the above solvers and is therefore not reported as part of the outcomes in this section. We also found that Init. 2 performed significantly better than Init. 1, and therefore report the outcomes with Init. 2 below. Because the limited-memory updating mechanism differs depending on whether a constant or non-constant initialization strategy is used, we describe our results separately for C Init. and Init. 2. As part of our comparisons, we first select the best algorithm when using only C Init. and when using only Init. 2. Subsequently, we compare the best algorithms to each other. In order to find default parameters for our best algorithms, Figures \ref{fig:default_C}, \ref{fig:default_I2_mIN}, and \ref{fig:default_I2_qIN} report results for a considerable range of $ m$ and $q$ values.}
\jb{All remaining experiments are for the general unconstrained minimization problem}
\begin{equation}
\label{eq:min}
\underset{ \mathbf{x} \in \mathbb{R}^n }{ \text{ minimize } } f(\mathbf{x}),
\end{equation}
where $ f: \mathbb{R}^n \to \mathbb{R} $. We consider this problem solved once $ \| \nabla f (\mathbf{x}_k) \|_{\infty} \le \varepsilon $. Our convergence tolerance is set to $ \varepsilon = 5\times 10^{-4} $. \jbo{With $\gamma$ fixed, an L-SR1 algorithm can be implemented by only storing the matrices $ \mathbf{\Psi}_k $ and $\mathbf{M}_k^{-1}$.
In particular, with a fixed $ \gamma = \gamma_k $ in \eqref{eqn-PsiM} then $ \mathbf{M}_k^{-1} \mathbf{e}_k = \mathbf{\Psi}^T_k \mathbf{s}_k $, so that updating the symmetric matrix $ \mathbf{M}_k^{-1} $ only uses $O(nm)$ multiplications. In this way, the overall computational complexity and memory requirements of the L-SR1 method are reduced as compared to non-constant initializations.} \jb{However, using a non-constant initialization strategy can adaptively incorporate additional information, which can be advantageous. Therefore, we compare the best algorithms for constant and non-constant initialization strategies in Sections \ref{subsec:compC}, \ref{subsec:compI} and \ref{subsec:compAll}. } Parameters in Algorithm 5 are set as follows: $c_1 = 9\times 10^{-4}$, $c_2 = 0.75$, $c_3 = 0.8 $, $c_4=2$, $c_5=0.1$, $c_6=0.75$, $c_7=0.5$ and $ \varepsilon_{\text{SR1}} = 1 \times 10^{-8} $. \jb{Extended performance profiles as in \cite{MahajanLeyfferKirches11} are provided. These profiles are an extension of the well known profiles of Dolan and Mor\'{e} \cite{DolanMore02}. We compare total computational time for each solver on the test set of problems. The performance metric $ \rho_s(\tau) $ with a given number of test problems $ n_p $ is \begin{equation*} \rho_s(\tau) = \frac{\text{card}\left\{ p : \pi_{p,s} \le \tau \right\}}{n_p} \quad \text{and} \quad \pi_{p,s} = \frac{t_{p,s}}{ \underset{1\le i \le S, i \ne s}{\text{ min } t_{p,i}} }, \end{equation*} where $ t_{p,s}$ is the ``output'' (i.e., time) of ``solver'' $s$ on problem $p$. Here $ S $ denotes the total number of solvers for a given comparison. This metric measures the proportion of how close a given solver is to the best result. The extended performance profiles are the same as the classical ones for $ \tau \ge 1 $. In the profiles we include a dashed vertical grey line, to indicate $ \tau = 1 $. The solvers are compared on 62 large-scale CUTEst problems, which are the same problems as in \cite{Burdakov2016}. Additionally, Appendix \ref{subsec:EX_QuadRos} includes supplementary comparisons on quadratics and the Rosenbrock objectives.} \subsection{\jb{Comparisons with constant initialization strategy (C Init.)}} \label{subsec:compC} \jb{This experiment compares the algorithms when the constant initialization C Init. from \eqref{eq:init} is used. Because the memory allocation is essentially halved (relative to a non-constant initialization) the memory parameter $m$ includes larger values, too (such as $m=24$). For each individual solver we first determine its optimal $ m $ parameter in Figure \ref{fig:default_C}. After selecting the best parameters, these best solvers are then compared in Figure \ref{fig:compC}.} \begin{figure*} \caption{\jb{Comparison of best algorithms with C Init. (constant initialization), which are selected from Figure \ref{fig:default_C} \label{fig:compC} \end{figure*} \subsection{\jb{Comparisons with non-constant initialization strategy (Init. 2)}} \label{subsec:compI} \jb{Since Init. 2 depends on the parameter $q$, Figures \ref{fig:default_I2_mIN} and \ref{fig:default_I2_qIN} test each algorithm on a combination of $m$ and $q$ values. 
A comparison of the best values for each algorithm is in Figure \ref{fig:compI}.}
\begin{figure*}
\caption{\jb{Comparison of best algorithms with Init. 2 (non-constant initialization), which are selected as the best ones from Figures \ref{fig:default_I2_mIN} and \ref{fig:default_I2_qIN}.}}
\label{fig:compI}
\end{figure*}
\subsection{\jb{Comparisons of best outcomes}}
\label{subsec:compAll}
\jb{The overall best algorithms from Figures \ref{fig:compC} and \ref{fig:compI} are compared in Figure \ref{fig:compALL}. These comparisons indicate that the best-performing algorithm over the sequence of experiments is TR:SC-INF with the indicated parameter values.}
\begin{figure*}
\caption{\jb{Overall comparison of best algorithms by selecting the winners in Figures \ref{fig:compC} and \ref{fig:compI}.}}
\label{fig:compALL}
\end{figure*}
\section{Concluding remarks}
In this paper, we presented a high-accuracy trust-region subproblem solver for the case in which the Hessian is approximated by {\small L-SR1}{} matrices. The method makes use of special shape-changing norms that decouple the original subproblem into two separate subproblems. Numerical experiments using the $(\mathbf{P},2)$ norm verify that solutions are computed to high accuracy in cases when there are no closed-form solutions and also in the so-called ``hard case''. \jb{Experiments on large-scale unconstrained optimization problems demonstrate that the proposed algorithms perform well when compared to widely used methods, such as truncated CG or an $\ell_2$ TR subproblem algorithm.}
\appendix
\section*{APPENDIX}
\setcounter{section}{1}
\label{app:LSR1alg}
This appendix lists our implementation of the L-SR1 trust-region algorithm used in the numerical experiments of Section~\ref{sec:TRalg}. This trust-region algorithm uses the trust-region radius adjustments from \cite[Algorithm 6.2]{NocW06} and the subproblem solvers in Algorithms 3 and 4, as well as the orthonormal basis method (OBS) from \cite{BruEM15}.
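For orientation, a minimal call to the listed routine might look as follows in {\small MATLAB}{}. This is an illustrative sketch only: the field names \texttt{whichSub}, \texttt{whichInit}, and \texttt{storePsiPsi} are taken from line 1 of Algorithm 5, the starting point matches the Rosenbrock experiments of Appendix~\ref{subsec:EX_QuadRos}, and the specific field values and the handles \texttt{rosenbrock\_f} and \texttt{rosenbrock\_g} are hypothetical placeholders; the exact interface of the released code may differ.
\begin{verbatim}
% Illustrative call of the L-SR1 shape-changing trust-region driver
% on the Rosenbrock-type objective used in the appendix experiments.
n  = 1e4;
x0 = zeros(n,1);  x0(1) = 30;         % starting point from the appendix
fh = @(x) rosenbrock_f(x);            % objective handle (hypothetical helper)
gh = @(x) rosenbrock_g(x);            % gradient handle  (hypothetical helper)

pars             = struct();
pars.whichSub    = 'sc_sr1_infty';    % use the (P,inf)-norm solver (Algorithm 3)
pars.whichInit   = 'Init2';           % non-constant initialization strategy
pars.storePsiPsi = 1;                 % update and reuse Psi'*Psi between iterations

[xk, gk, fk, out] = LSR1_SC(x0, fh, gh, pars);
fprintf('iterations = %d, ||grad|| = %8.2e\n', out.numiter, out.ng);
\end{verbatim}
The full listing of the driver follows.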
\\ \hrule \noindent \small{\textbf{ALGORITHM 5:} L-SR1 Shape-Changing Trust-Region Algorithms (LSR1\_SC)} \hrule \begin{algorithmic}[1] \ENSURE [$\mathbf{x}_k,\mathbf{g}_k,f_k,\text{out}$]=\texttt{LSR1\_SC}($\mathbf{x}$, $f(\mathbf{x})$, $\nabla f(\mathbf{x})$, \text{pars}) \STATE{Set constants from \text{pars}: $ 0 < c_1 < 1\times 10^{-3} $, $0 < c_2$, $ 0 < c_3 < 1 $, $ 1 < c_4 $, $ 0 < c_5 \le c_2 $, $ 0 < c_6 < 1 $, $0 < c_7 < 1$,$ 0 < \varepsilon $, $ 0 < m $, $ 0 < q $, $0 < \varepsilon_{\text{SR1}} $, $\text{ALG} \gets \text{pars.whichSub}$, $\text{INIT} \gets \text{pars.whichInit}$, $\text{SAVE} \gets \text{pars.storePsiPsi}$, ;} \STATE{Initialize $k \gets 0$, $k_m \gets 0$, $\mathbf{x}_k \gets \mathbf{x}$, $0<\gamma_k$, $0<\gamma_{\text{max}}$, $\text{inv}\mathbf{M}_k\gets[]$, $\text{mIdx}\gets1:m$, $\text{iEx}\gets 0$;} \STATE{$f_k \gets f(\mathbf{x}_k)$, $\mathbf{g}_k \gets \nabla f(\mathbf{x}_k)$;} \STATE{$[\mathbf{x}_{k+1},\mathbf{g}_{k+1},f_{k+1}] \gets \text{lineSearch}(\mathbf{x}_{k},\mathbf{g}_{k},f_{k})$;} \STATE{$ \mathbf{s}_k \gets \mathbf{x}_{k+1}-\mathbf{x}_k, \mathbf{y}_k \gets \mathbf{g}_{k+1} - \mathbf{g}_k $;} \IF {$\text{INIT}=\text{C.Init.}$} \STATE{\texttt{\% Constant initialization}} \STATE{$\gamma_k \gets \text{max}(\text{min}(\| \bfm{y}_0 \|^2 / \bfm{s}^T_0 \bfm{y}_0, \gamma_{\text{max}}),1) $} \STATE{$\mathbf{\Psi}_k \gets[]$;} \ELSE \STATE{\texttt{\% Non-constant initialization.}} \STATE{$ \gamma_k \gets \| \bfm{y}_k \|^2 / \bfm{s}^T_k \bfm{y}_k $;} \STATE{$\mathbf{S}_k \gets[]$, $\mathbf{Y}_k \gets[]$, $\mathbf{D}_k \gets[]$, $\mathbf{L}_k \gets[]$, $\mathbf{T}_k \gets[]$, $\mathbf{SS}_k \gets[]$, $\mathbf{YY}_k \gets[]$; } \ENDIF \IF {$\text{SAVE}=1$} \STATE{$ \bs{\Psi}\bs{\Psi}_k \gets[] $;} \ENDIF \STATE{$ b_k \gets \mathbf{s}^T_k(\mathbf{y}_k - \gamma_k \mathbf{s}_k) $;} \IF { $\varepsilon_{\text{SR1}} \jbo{\| \mathbf{s}_k \|_2 \| \mathbf{y}_k - \gamma_k \mathbf{s}_k \|_2} < \text{abs}(b_k)$} \STATE{$k_m \gets k_m+1$;} \STATE{$\text{inv}\mathbf{M}_k(k_m,k_m) \gets b_k$;} \IF {$\text{INIT}=\text{C.Init.}$} \STATE{$[\bs{\Psi}_k,\text{mIdx}] = \texttt{colUpdate}(\bsk{\Psi},\mathbf{y}_k - \gamma_k \mathbf{s}_k,\text{mIdx},m,k) $ \texttt{\% From Procedure 1}} \ELSE \STATE{$[\bk{Y},\sim] = \texttt{colUpdate}(\bk{Y},\mathbf{y}_k ,\text{mIdx},m,k) $;} \STATE{$[\bk{S},\text{mIdx}] = \texttt{colUpdate}(\bk{S},\mathbf{s}_k,\text{mIdx},m,k) $;} \STATE{$\bk{D}(k_m,k_m) = \bk{s}^T\bk{y}$;} \STATE{$\bk{L}(k_m,k_m) = \bk{s}^T\bk{y}$;} \STATE{$\bk{T}(k_m,k_m) = \bk{s}^T\bk{y}$;} \STATE{$\bk{SS}(k_m,k_m) = \bk{s}^T\bk{s}$;} \STATE{$\bk{YY}(k_m,k_m) = \bk{y}^T\bk{y}$;} \ENDIF \ENDIF \STATE{$\delta_k \gets 2 \| \mathbf{s}_k \|$;} \STATE{$k\gets k+1$;} \WHILE {($\varepsilon \le \| \mathbf{g}_k \|_2$) \text{and} ($ k \le \text{maxIt} $) } \STATE{Choose TR subproblem solver to compute $ \bk{s} $ (E.g., Alg. 3, Alg. 
4, $\ell_2$-norm, truncated CG);} \STATE{\texttt{\% For example: sc\_sr1\_infty with $ \Psi^T\Psi $ updating}} \STATE{ $\mathbf{s}_k \gets $ \texttt{sc\_sr1\_infty}($\mathbf{g}_k$,$\mathbf{S}_k(:,\text{mIdx}(1:k_m))$, $\mathbf{Y}_k(:,\text{mIdx}(1:k_m))$, $\gamma_k$, $\delta_k$, $1$, $0$, $\ldots$\\ $ \bsk{\Psi\Psi}(1:k_m,1:k_m) $,$ \text{inv}\mathbf{M}_k(1:k_m,1:k_m) $); } \STATE{$\widehat{\mathbf{x}}_{k+1} \gets \mathbf{x}_k + \mathbf{s}_k, \widehat{f}_{k+1} \gets f(\widehat{\mathbf{x}}_{k+1}), \widehat{\mathbf{g}}_{k+1} \gets \nabla f(\widehat{\mathbf{x}}_{k+1})$;} \IF {$\text{INIT}=\text{C.Init}$} \STATE{$\mathbf{b}_k(1:k_m) \gets \mathbf{\Psi}_k(:,\text{mIdx}(1:k_m))^T \mathbf{s}_k$;} \ELSE \STATE{\texttt{\% Non-constant initialization, stores additionally $ \bk{b1}, \bk{b2} $}} \STATE{$\mathbf{b1}_k(1:k_m) \gets \mathbf{Y}_k(:,\text{mIdx}(1:k_m))^T \mathbf{s}_k$;} \STATE{$\mathbf{b2}_k(1:k_m) \gets \mathbf{S}_k(:,\text{mIdx}(1:k_m))^T \mathbf{s}_k$;} \STATE{$\mathbf{b}_k(1:k_m) \gets \bk{b1}(1:k_m) - \gamma_k \bk{b2}(1:k_m) $;} \ENDIF \STATE{$(sBs)_k \gets \gamma_k\mathbf{s}^T_k \mathbf{s}_k + \frac{1}{2} \mathbf{b}_k(1:k_m)^T(\text{inv} \mathbf{M}_k(1:k_m,1:k_m) \backslash \mathbf{b}_k(1:k_m) ) $;} \IF{$\textnormal{INIT} = \textnormal{Init. 2}$} \STATE{\texttt{\% Other non-constant initialization strategies can be implemented here}} \STATE{$ \gamma_k \gets \text{max}(\| \bfm{y}_{k-q} \|^2/\bfm{s}^T_{k-q} \bfm{y}_{k-q}, \cdots, \| \bk{y} \|^2/ \bk{y}^T \bk{s} )$} \ENDIF \STATE{$\rho_k \gets \frac{\widehat{f}_{k+1} - f_k}{\mathbf{s}^T_k \mathbf{g}_k +(sBs)_k} $; } \IF {$c_1 < \rho_k$} \STATE{$\mathbf{x}_{k+1} \gets \widehat{\mathbf{x}}_{k+1}$;} \STATE{$\mathbf{g}_{k+1} \gets \widehat{\mathbf{g}}_{k+1}$;} \STATE{$f_{k+1} \gets \widehat{f}_{k+1}$;} \ELSE \STATE{$\mathbf{x}_{k+1} \gets \mathbf{x}_{k}$;} \ENDIF \IF {$c_2 < \rho_k$} \IF{$\| \mathbf{s}_k \|_2 \le c_3 \delta_k$} \STATE{$\delta_k \gets \delta_k$;} \ELSE \STATE{$\delta_k \gets c_4 \delta_k$;} \ENDIF \ELSIF {$c_5 \le \rho_k \le c_6$} \STATE{$\delta_k \gets \delta_k$;} \ELSE \STATE{$\delta_k \gets c_7\delta_k$;} \ENDIF \STATE{$\mathbf{y}_k \gets \widehat{\mathbf{g}}_{k+1} - \mathbf{g}_k$;} \STATE{$b_k \gets \mathbf{s}_k^T\mathbf{y}_k + (sBs)_k$;} \IF{$ \varepsilon_{\text{SR1}} \jbo{\| \mathbf{s}_k \|_2 \| \mathbf{y}_k - \gamma_k \mathbf{s}_k \|_2} \le \text{abs}(b_k) $} \IF{$\text{INIT}=\text{C.Init.}$} \STATE{$[\bs{\Psi}_k,\text{mIdx}] = \texttt{colUpdate}(\bsk{\Psi},\mathbf{y}_k - \gamma_k \mathbf{s}_k,\text{mIdx},m,k) $;} \IF{($k_m < m$)} \STATE{$k_m \gets k_m+1$;} \ENDIF \STATE{$\text{inv}\mathbf{M}_k(1:(k_m-1),k_m) \gets \mathbf{b}_k(1:(k_m-1))$;} \STATE{$\text{inv}\mathbf{M}_k(k_m,1:(k_m-1)) \gets \mathbf{b}_k(1:(k_m-1))$;} \STATE{$\text{inv}\mathbf{M}_k(k_m,k_m) \gets b_k$;} \IF {$ \textnormal{SAVE} = 1$} \STATE{\texttt{\% Update and store the product $ \Psi_k^T \Psi_k $}} \STATE{$\bsk{\Psi\Psi}(1:k_m,1:k_m) = \texttt{prodUpdate}(\bsk{\Psi\Psi},\bsk{\Psi},\bsk{\Psi},\bk{y}-\gamma_k\bk{s},\bk{y}-\gamma_k\bk{s},\text{mIdx},m,k) $;} \ENDIF \ELSE \STATE{\texttt{\% Non-constant initialization}} \STATE{$[\bk{Y},\sim] = \texttt{colUpdate}(\bk{Y},\mathbf{y}_k ,\text{mIdx},m,k) $;} \STATE{$[\bk{S},\text{mIdx}] = \texttt{colUpdate}(\bk{S},\mathbf{s}_k,\text{mIdx},m,k) $;} \STATE{$\bk{T} = \texttt{prodUpdate}(\bk{T},\bk{S},0,\bk{s},\bk{y},\text{mIdx},m,k) $;} \STATE{$\bk{YY} = \texttt{prodUpdate}(\bk{YY},\bk{Y},\bk{Y},\bk{y},\bk{y},\text{mIdx},m,k) $;} \IF{($k_m < m$)} \STATE{$k_m \gets k_m+1$;} \ENDIF \STATE{$\bk{D}(k_m,k_m) \gets 
\bk{s}^T \bk{y}$;} \STATE{$\bk{L}(k_m,1:(k_m-1)) \gets \mathbf{b1}_k(1:(k_m-1))$;} \STATE{$\bk{SS}(1:(k_m-1),k_m) \gets \mathbf{b2}_k(1:(k_m-1))$;} \STATE{$\bk{SS}(k_m,1:(k_m-1)) \gets \mathbf{b2}_k(1:(k_m-1))$;} \STATE{$\bk{SS}(k_m,k_m) \gets \bk{s}^T\bk{s}$;} \STATE{$\text{inv}\mathbf{M}_k(1:k_m,1:k_m) \gets \bk{D}(1:k_m,1:k_m) + \bk{L}(1:k_m,1:k_m) + \bk{L}(1:k_m,1:k_m)^T - \gamma_k \bk{SS}(1:k_m,1:k_m)$;} \IF {$ \textnormal{SAVE} = 1$} \STATE{\texttt{\% Update and store the product $ \Psi_k^T \Psi_k $ with non-constant initialization}} \STATE{$\bsk{\Psi\Psi}(1:k_m,1:k_m) = \bk{YY}(1:k_m,1:k_m) - \gamma_k ( \bk{T}(1:k_m,1:k_m) + \bk{T}(1:k_m,1:k_m)^T + \bk{L}(1:k_m,1:k_m) + \bk{L}(1:k_m,1:k_m)^T ) + \gamma_k^2 \bk{SS}(1:k_m,1:k_m) $;} \ENDIF \ENDIF \ENDIF \STATE{$\mathbf{x}_k \gets \mathbf{x}_{k+1}$, $ \mathbf{g}_{k} \gets \mathbf{g}_{k+1} $, $f_{k} \gets f_{k+1}$, $k \gets k+1$; } \ENDWHILE \STATE{$\text{out.numiter}\gets k, \text{out.ng} \gets \| \mathbf{g}_k \|$;} \RETURN $\mathbf{x}_k, \mathbf{g}_k, f_k, \text{out}$ \end{algorithmic} \hrule \subsection{\jb{Experiments to determine default parameters with constant Initialization (C Init.)}} \label{subsec:EXC_DEFAULT} \begin{figure*} \caption{\jb{Comparison of the computational times for the 4 algorithms $\{\textnormal{TR:SC-INF, TR:SC-L2, TR:L2, trCG} \label{fig:default_C} \end{figure*} \subsection{\jb{Experiments to determine default parameters with non-constant Initialization (Init. 2)}} \label{subsec:EXI2_DEFAULT} \begin{figure*} \caption{\jb{Comparison of the computational times for the 4 algorithms $\{\textnormal{TR:SC-INF, TR:SC-L2, TR:L2, trCG} \label{fig:default_I2_mIN} \end{figure*} \begin{figure*} \caption{\jb{Comparison of the computational times for the 4 algorithms $\{\textnormal{TR:SC-INF, TR:SC-L2, TR:L2, trCG} \label{fig:default_I2_qIN} \end{figure*} \subsection{\jb{Experiments on quadratics and the Rosenbrock functions}} \label{subsec:EX_QuadRos} In this set of experiments we vary the problem dimension as $ n = [5\times10^2,1\times10^3,5\times10^3,1\times10^4,5\times10^4,1\times10^5,3\times10^5] $, set the memory parameter $m=5$, use Init. 2 for all solvers and set the maximum iterations as $ \text{maxIt} = 500 $. In Table VIII, we let $ f (\mathbf{x}) $ be the Rosenbrock function defined by $ f(\mathbf{x}) = \sum_{i=1}^n (\mathbf{x}_{2i} - \mathbf{x}^2_{2i-1})^2 + (1-\mathbf{x}^2_{2i-1})^2 $. We initialize the trust-region algorithm (Algorithm 5) from the starting point $ [\mathbf{x}_0]_1 = 30, [\mathbf{x}_0]_{2:n} = 0 $. (With this initial point the gradient norm $ \| \nabla f(\mathbf{x}_0) \|_2 \approx 10^5 $). Table VIII reports the outcomes of using the trust-region algorithm with these three different subproblem solvers. \begin{table}[h!] \label{tbl:trResults} \setlength\tabcolsep{0.75mm} \caption{ Results of solving problem \eqref{eq:min} with the Rosenbrock objective function. The maximum number of iterations are: $ \text{maxIt} = 500 $ and the convergence tolerance is $ \| \nabla f(\mathbf{x}_k) \|_{\infty} \le 1\times 10^{-4} $. The memory parameter is $ m = 5 $. The column $\text{nA}$ denotes the number of ``accepted" search directions, which corresponds to line 55 in Algorithm 5 being true. Observe that all algorithms converged to the prescribed tolerances on all problem instances.} { \centering \begin{tabular}{|c| c c c c| c c c c| c c c c|} \hline $n$ & \multicolumn{4}{c|}{TR:SC-INF (Alg. 3)} & \multicolumn{4}{c|}{TR:SC-L2 (Alg. 
4)} & \multicolumn{4}{c|}{TR-L2} \\ \cline{2-13} & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ \\ \hline $5 \times 10^2$ & \texttt{40} & \texttt{26} & \texttt{2.00e-02} & \texttt{7.18e-05} & \texttt{46} & \texttt{29} & \texttt{1.59e-02} & \texttt{3.11e-05} & \texttt{36} & \texttt{24} & \texttt{1.34e-02} & \texttt{2.89e-06} \\ $1 \times 10^3$ & \texttt{38} & \texttt{24} & \texttt{1.13e-02} & \texttt{2.23e-05} & \texttt{41} & \texttt{24} & \texttt{1.28e-02} & \texttt{3.83e-05} & \texttt{32} & \texttt{22} & \texttt{1.14e-02} & \texttt{2.02e-05} \\ $5 \times 10^3$ & \texttt{42} & \texttt{31} & \texttt{3.02e-02} & \texttt{1.17e-05} & \texttt{38} & \texttt{29} & \texttt{2.75e-02} & \texttt{6.38e-05} & \texttt{43} & \texttt{26} & \texttt{4.26e-02} & \texttt{5.03e-05} \\ $1 \times 10^4$ & \texttt{46} & \texttt{30} & \texttt{5.22e-02} & \texttt{4.57e-07} & \texttt{40} & \texttt{28} & \texttt{4.20e-02} & \texttt{5.80e-05} & \texttt{48} & \texttt{29} & \texttt{6.30e-02} & \texttt{8.87e-05} \\ $5 \times 10^4$ & \texttt{47} & \texttt{33} & \texttt{2.14e-01} & \texttt{1.01e-06} & \texttt{39} & \texttt{28} & \texttt{1.73e-01} & \texttt{1.22e-05} & \texttt{54} & \texttt{35} & \texttt{2.85e-01} & \texttt{6.92e-05} \\ $1 \times 10^5$ & \texttt{40} & \texttt{31} & \texttt{3.94e-01} & \texttt{6.82e-05} & \texttt{58} & \texttt{39} & \texttt{4.81e-01} & \texttt{1.06e-05} & \texttt{44} & \texttt{27} & \texttt{4.97e-01} & \texttt{1.57e-08} \\ $3 \times 10^5$ & \texttt{60} & \texttt{39} & \texttt{2.74e+00} & \texttt{1.63e-06} & \texttt{53} & \texttt{33} & \texttt{2.49e+00} & \texttt{3.52e-06} & \texttt{68} & \texttt{43} & \texttt{3.53e+00} & \texttt{1.70e-05} \\ \hline \end{tabular}} \end{table} In table IX, we let $ f (\mathbf{x}) $ be quadratic functions defined by $ f(\mathbf{x}) = \mathbf{g}^T \mathbf{x} + \frac{1}{2}( \mathbf{x}^T( \phi \mathbf{I} + \mathbf{Q} \mathbf{D} \mathbf{Q}^T) \mathbf{x}) $. In particular, we let $ \mathbf{Q} \in \mathbb{R}^{n \times r} $ be a rectangular matrix and $ \mathbf{D} \in \mathbb{R}^{r \times r} $ be a diagonal matrix. We initialize the trust-region algorithm (Algorithm 5) from the starting point $ \mathbf{x}_0 = \mathbf{0} $. We generate $ \mathbf{Q} = \texttt{rand}(n,r) $, $ \mathbf{D} = \text{diag}(\texttt{rand}(r,1)) $ and $ \mathbf{g} = \texttt{randn}(n,1) $, after initializing the random number generator by the command $ \texttt{rng}(\texttt{`default'}) $. Moreover, we set $ r =10 $, $ \phi = 100 $ and the maximum number of iterations as $ \text{maxIt} = 500 $. All other parameters of the method are as before. Table IX reports the outcomes of using the trust-region algorithm with the three different subproblem solvers. \begin{table}[h!] \label{tbl:trResults} \setlength\tabcolsep{0.75mm} \caption{ Results of solving problem \eqref{eq:min} with quadratic objective functions. The maximum number of iterations are set as $ \text{maxIt} = 500 $ and the convergence tolerance $ \| \nabla f(\mathbf{x}_k) \|_{\infty} \le 1\times 10^{-4} $. The memory parameter is $ m = 5 $. The column $\text{nA}$ denotes the number of ``accepted" search directions (line 55 in Algorithm 5 is true). Observe that Alg. 3 and Alg. 4 converged on all problems. Moreover, Alg. 3 and Alg. 4 were fastest on the on the largest two problem instances.} { \centering \begin{tabular}{|c| c c c c| c c c c| c c c c|} \hline $n$ & \multicolumn{4}{c|}{TR:SC-INF (Alg. 
3)} & \multicolumn{4}{c|}{TR:SC-L2 (Alg. 4)} & \multicolumn{4}{c|}{TR:L2 ($\ell_2 $ \cite{BruEM15})} \\ \cline{2-13} & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ & $k$ & nA & Time & $ \| \nabla f(\mathbf{x}_k) \| $ \\ \hline $5 \times 10^2$ & \texttt{8} & \texttt{6} & \texttt{5.75e-02} & \texttt{6.56e-06} & \texttt{8} & \texttt{6} & \texttt{2.66e-02} & \texttt{6.56e-06} & \texttt{6} & \texttt{4} & \texttt{2.89e-02} & \texttt{8.03e-06} \\ $1 \times 10^3$ & \texttt{8} & \texttt{6} & \texttt{7.25e-03} & \texttt{4.06e-05} & \texttt{8} & \texttt{6} & \texttt{7.51e-03} & \texttt{4.06e-05} & \texttt{6} & \texttt{4} & \texttt{6.67e-03} & \texttt{5.11e-05} \\ $5 \times 10^3$ & \texttt{21} & \texttt{15} & \texttt{2.47e-02} & \texttt{8.96e-05} & \texttt{21} & \texttt{15} & \texttt{2.83e-02} & \texttt{9.14e-05} & \texttt{16} & \texttt{10} & \texttt{2.79e-02} & \texttt{3.71e-05} \\ $1 \times 10^4$ & \texttt{23} & \texttt{18} & \texttt{3.86e-02} & \texttt{7.79e-05} & \texttt{23} & \texttt{18} & \texttt{3.65e-02} & \texttt{5.21e-05} & \texttt{19} & \texttt{14} & \texttt{3.65e-02} & \texttt{4.28e-05} \\ $5 \times 10^4$ & \texttt{45} & \texttt{33} & \texttt{2.16e-01} & \texttt{1.58e-05} & \texttt{60} & \texttt{46} & \texttt{2.28e-01} & \texttt{9.71e-05} & \texttt{27} & \texttt{21} & \texttt{1.20e-01} & \texttt{9.13e-05} \\ $1 \times 10^5$ & \texttt{62} & \texttt{49} & \texttt{5.04e-01} & \texttt{9.80e-05} & \texttt{79} & \texttt{64} & \texttt{5.86e-01} & \texttt{9.72e-05} & \texttt{500} & \texttt{494} & \texttt{4.05e+00} & \texttt{4.09e-04} \\ $3 \times 10^5$ & \texttt{20} & \texttt{15} & \texttt{8.99e-01} & \texttt{3.49e-05} & \texttt{22} & \texttt{17} & \texttt{8.37e-01} & \texttt{3.86e-05} & \texttt{26} & \texttt{17} & \texttt{1.11e+00} & \texttt{9.97e-05} \\ \hline \end{tabular}} \end{table} Remarkably, observe in the outcomes of Tables VIII and IX that a limited memory trust-region algorithm using our subproblem solvers is able to solve large optimization problems, with $ n \approx 1 \times 10^5 $, within seconds. Moreover, we observe that the proposed algorithms (Algorithm 3 and Algorithm 4) may require fewer iterations on some problems than a $\ell_2$-norm method and use less computational time. Future research, can investigate the effectiveness of a L-SR1 trust-region algorithm for non-convex objective functions and improve on the efficiency of the implementation. \end{document}
\begin{document} \title{Decoupling Quantile Representations from Loss Function} \begin{abstract} The simultaneous quantile regression (SQR) technique has been used to estimate uncertainties for deep learning models, but its application is limited by the requirement that the solution at the median quantile ($\tau=0.5$) must minimize the mean absolute error (MAE). In this article, we address this limitation by demonstrating a duality between quantiles and estimated probabilities in the case of simultaneous binary quantile regression (SBQR). This allows us to decouple the construction of quantile representations from the loss function, enabling us to assign an arbitrary classifier $f({\bm{x}})$ at the median quantile and generate the full spectrum of SBQR quantile representations at different $\tau$ values. We validate our approach through two applications: (i) detecting out-of-distribution samples, where we show that quantile representations outperform standard probability outputs, and (ii) calibrating models, where we demonstrate the robustness of quantile representations to distortions. We conclude with a discussion of several hypotheses arising from these findings. \end{abstract} \section{Introduction} \label{sec:intro} Deep learning models have become ubiquitous across diverse domains and are increasingly being used for several critical applications. Two questions commonly arise in practice: (a) Can this model be used on the given data input? and (b) If so, how much can one trust the probability prediction obtained? The former is the problem of Out-of-Distribution (OOD) detection \cite{DBLP:conf/iclr/HendrycksG17,DBLP:conf/nips/FortRL21} and the latter is the problem of Calibration \cite{DBLP:conf/icml/GuoPSW17,DBLP:conf/nips/Lakshminarayanan17, DBLP:conf/nips/LiuLPTBL20}. Understanding the implications of a deep learning model is also a topic of current research \cite{DBLP:conf/kdd/Ribeiro0G16, 10.3982/ECTA16901, DBLP:conf/cvpr/NguyenYC15,DBLP:conf/nips/JiangKGG18}. Quantile regression techniques \cite{koenker_2005, 10.2307/25146433} provide much richer information about the model, allowing for a more comprehensive analysis and a better understanding of the relationships between different variables. However, these techniques are not widely used in modern deep-learning-based systems because \cite{NEURIPS2021_5b168fdb}: (a) the loss function is restricted to be the mean absolute error (MAE) or the pinball loss, which might not be compatible with domain-specific losses; (b) the loss function is difficult to optimize (due to its non-convex nature); and (c) adapting the quantile regression approach to classification is challenging due to the piecewise constant behavior of the loss function. In \cite{DBLP:conf/nips/TagasovskaL19} the authors show how simultaneous quantile regression (SQR) techniques can be used to estimate the uncertainties of a deep learning model in the case of regression problems. \textbf{Motivation and Contributions:} In this article we decouple the construction of the quantile representations from the choice of loss function by identifying a \emph{Duality} property between quantiles and probabilities. We leverage the duality to construct \emph{Quantile-Representations} for any given classifier $f({\bm{x}})$ in section~\ref{ssec:genquantrep}. Such quantile representations are shown to capture the training distributions in section~\ref{ssec:exp1}.
We show that these representations outperform the baseline for OOD detection in section~{\textnormal{e}}f{ssec:exp2}. We also show that quantile-representations predict probabilities which are invariant to distortions in section~{\textnormal{e}}f{ssec:calibration}. Proof-of-concept experiments to improve OOD detection (appendix~{\textnormal{e}}f{sec:A2}), illustrating the insufficiency of existing calibration techniques (appendix~{\textnormal{e}}f{sec:A4}) and identifying distribution shifts within the data (appendix~{\textnormal{e}}f{sec:A6}) are discussed in the appendix. \paragraph{Illustrating the Construction of Quantile Representations:} Before diving into the rigorous definition of quantile representations, we illustrate the construction using a simple toy example. Figure~{\textnormal{e}}f{fig:intro(a)} shows a simple toy example with 3 classes - $0,1,2$. Class $0$ is taken to be out-of-distribution (OOD), while classes $1,2$ are taken to in-distribution (ID). To get the quantile representation - (step 1) we first construct a simple classifier to differentiate classes $1,2$, (step 2) To get a classifier at quantile $\tau$, construct $y_{i}^{+} = I[p_i > \tau]$\footnote{$I[.]$ indicates the indicator function}, where $p_i$ denotes the predicted probability in (step 1). Construct a classifier using the new labels $y_{i}^{+}$. Figure~{\textnormal{e}}f{fig:intro(b)} illustrates the classifiers obtained at different $\tau$. In (step 3) concatenate the outputs (predictions) of all the classifiers at different $\tau$ to get the quantile-representations. \begin{figure*} \caption{Original Data} \label{fig:intro(a)} \caption{Classifiers at different quantiles ($\tau$)} \label{fig:intro(b)} \caption{OOD Detection with Quantile-Representations} \label{fig:intro(c)} \caption{OOD Detection using single classifier predictions} \label{fig:intro(d)} \caption{Illustrating the construction of Quantile Representations. (a) Simple toy example. (b) Illustrates different classifiers obtained for different $\tau$. (c) OOD detection using Quantile Representations. (d) OOD detection using the predictions from a single classifier. The brightness of {\color{red} \label{fig:intro} \end{figure*} The quantile representations obtained through our approach can be applied to a variety of situations. As an example, we demonstrate their use in out-of-distribution (OOD) detection, as shown in Figure~{\textnormal{e}}f{fig:intro(c)}. We use the One-Class-SVM method from scikit-learn (\cite{scikit-learn-oneclasssvm}) for OOD detection. Figure~{\textnormal{e}}f{fig:intro(c)} illustrates that the quantile representations are able to differentiate between in-distribution (ID) and OOD samples. In contrast, Figure~{\textnormal{e}}f{fig:intro(d)} shows the OOD detection performance using the standard outputs from the median classifier. \textbf{Remark:} It is worth noting that the construction of quantile representations described above does not rely on a specific loss function, but rather on a ``base-classifier'' and the thresholding of its predictions. This procedure is based on a duality between quantiles and probabilities, which is discussed in more detail in Section {\textnormal{e}}f{sec:SBQR}. Intuitively, the information captured by quantile representations can be thought of as the ``aspects of the feature space that the classifier uses for classification,'' which is expected to be greater than the information contained in probabilities, but less than the entire feature space. 
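To make these three steps concrete, the following minimal sketch (in Python, with illustrative names and a synthetic two-blob dataset standing in for classes $1,2$; this is not the code used for the experiments reported later) builds the base classifier, trains one thresholded classifier per quantile, concatenates their outputs, and scores OOD samples with a One-Class SVM. Section~\ref{sec:SBQR} uses the equivalent thresholding convention $I[p_i > 1-\tau]$, which amounts to the relabeling $\tau \mapsto 1-\tau$.

\begin{verbatim}
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.svm import OneClassSVM

# In-distribution data: two blobs playing the role of classes 1 and 2.
X_id, y_id = make_blobs(n_samples=400, centers=[(-2, 0), (2, 0)], random_state=0)

# Step 1: base classifier separating the two in-distribution classes.
base = LogisticRegression(max_iter=1000).fit(X_id, y_id)
p = base.predict_proba(X_id)[:, 1]            # predicted probability of class "1"

# Step 2: one classifier per quantile tau, trained on thresholded labels.
taus = np.linspace(0.05, 0.95, 19)
per_tau = []
for tau in taus:
    y_plus = (p > tau).astype(int)            # modified labels at quantile tau
    if y_plus.min() == y_plus.max():          # degenerate split: keep a constant score
        per_tau.append(float(2 * y_plus[0] - 1))
    else:
        per_tau.append(LogisticRegression(max_iter=1000).fit(X_id, y_plus))

# Step 3: concatenate the per-quantile outputs into the quantile representation.
def quantile_representation(X):
    cols = [c.decision_function(X) if hasattr(c, "decision_function")
            else np.full(len(X), c) for c in per_tau]
    return np.column_stack(cols)

# OOD detection on the quantile representation (class "0" plays the OOD role).
detector = OneClassSVM(nu=0.1).fit(quantile_representation(X_id))
X_ood = np.random.default_rng(0).normal(loc=(0.0, 5.0), scale=0.3, size=(50, 2))
print(detector.predict(quantile_representation(X_ood)))     # mostly -1 (flagged OOD)
print(detector.predict(quantile_representation(X_id))[:10]) # mostly +1 (in-distribution)
\end{verbatim}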
As we will see in the rest of this article, this information is sufficient for improving out-of-distribution detection and estimating the calibration error of a given classifier.
\section{Simultaneous Binary Quantile Regression (SBQR)}
\label{sec:SBQR}
The construction of quantile representations shown in Figure~\ref{fig:intro} is made possible by the duality observed in the case of simultaneous binary quantile regression (SBQR). In this section, we present the theoretical foundations for constructing these quantile representations. Let $p_{\rm{data}}(X,Y)$ denote the distribution from which the data is generated, where $X$ denotes the features and $Y$ denotes the targets (class labels). A classification algorithm predicts the latent \emph{logits} $Z$, which are used to make predictions on $Y$. Let ${\bm{x}} \in {\mathbb{R}}^{d}$ denote the $d$-dimensional features and $y \in \{0,1,\cdots, k\}$ denote the class labels (targets). We assume that the training set consists of $N$ i.i.d.\ samples $\mathcal{D} = \{ ({\bm{x}}_i, y_i) \}$. Let $\hat{{\bm{z}}_i}=f({\bm{x}}_i; \theta)$ denote the classification model, which predicts the logits ${\bm{z}}_i$. In the binary case, applying the $\sigma$ (sigmoid) function gives the probabilities $p_i = \sigma({\bm{z}}_i)$. For multi-class classification we use $\mathrm{softmax}({\bm{z}}_i)$ to obtain the probabilities. The final class predictions are obtained using $\argmax_k p_{i,k}$, where $k$ denotes the class index.
\subsection{Review - Quantile Regression and Binary Quantile Regression}
\label{ssec:review}
Let $F_Y(y) = P(Y \leq y)$ denote the cumulative distribution function of a random variable $Y$, and let $F_{Y}^{-}(\tau) = \inf \{y : F_Y(y) \geq \tau\}$ denote the corresponding quantile function, where $0 < \tau < 1$. The aim of quantile regression is to predict the $\tau^{th}$ quantile of the variable $Y$ given the features ${\bm{x}}$, that is, to estimate $f_{\tau}(x) \approx F_Y^{-}(\tau \mid X=x)$. This is achieved by minimizing the pinball loss (also called the check loss) \cite{koenker_2005}
\begin{equation}
\rho(\hat{y}, y; \tau) = \begin{cases} \tau (y - \hat{y}) & \text{if } (y - \hat{y}) > 0 \\ (1-\tau) (\hat{y} - y) & \text{otherwise }\\ \end{cases}
\label{eq:checkloss}
\end{equation}
When $\tau=0.5$, the loss reduces to one half of the absolute error, so minimizing it is equivalent to minimizing the mean absolute error (MAE).
\paragraph{Quantile Regression for Classification:} The above formulation is used when the target $Y$ is continuous. In binary classification, since $Y \in \{0,1\}$ is discrete, we replace $\hat{y}_i$ with $p_i$ and use the loss in \eqref{eq:checkloss}. For the multi-class case we follow the one-vs-rest procedure by encoding the target as a one-hot vector: if the class label is $k$, take $Y = {\bm{e}}_k$, where ${\bm{e}}_k(i) = 1$ if $i=k$, and $0$ otherwise. The check loss is applied to each entry separately, and the sum is taken over all entries.
\paragraph{Simultaneous Quantile Regression (SQR):} The loss in \eqref{eq:checkloss} is for a single $\tau$. One can instead minimize the expected loss over all $\tau \in (0,1)$,
\begin{equation}
\mathbb{E}_{\tau \sim U[0,1]}[\rho(\hat{y},y;\tau)]
\label{eq:simulcheckloss}
\end{equation}
This is referred to as simultaneous quantile regression (SQR). Using the loss in \eqref{eq:simulcheckloss} instead of \eqref{eq:checkloss} forces the solution to have the \emph{monotonicity property} \cite{DBLP:conf/nips/TagasovskaL19}.
If ${\mathcal{Q}}({\bm{x}},\tau)$ denotes the solution to \eqref{eq:simulcheckloss}, monotonicity requires
\begin{equation}
{\mathcal{Q}}({\bm{x}}, \tau_i) \leq {\mathcal{Q}}({\bm{x}}, \tau_j) \Leftrightarrow \tau_i \leq \tau_j
\label{eq:monotonicity}
\end{equation}
The classification variant of the loss \eqref{eq:simulcheckloss} assumes $y \in \{0,1\}$ and ${\mathcal{Q}}({\bm{x}},\tau) \in (0,1)$. To differentiate it from SQR, we refer to this setting as \emph{Simultaneous Binary Quantile Regression (SBQR)}. \textbf{Remark:} ${\mathcal{Q}}({\bm{x}},\tau)$ is sometimes written as ${\mathcal{Q}}({\bm{x}},\tau;\theta)$, where $\theta$ indicates the parameters (such as the weights of a neural network). For brevity, we do not include the parameters $\theta$ in this article.
\section{Quantile Representations}
\label{sec:quantrep}
Note that, for a fixed ${\bm{x}}_i$, the solution to \eqref{eq:simulcheckloss} gives ${\mathcal{Q}}({\bm{x}}_i, \tau)$, which is a function of $\tau$. ${\mathcal{Q}}({\bm{x}}_i, \tau)$ can therefore be interpreted as a representation assigned to each ${\bm{x}}_i$. We refer to this as the \emph{Quantile Representation}. A natural question arises: what information do quantile representations encode? Informally, the quantile representation ${\mathcal{Q}}({\bm{x}}_i, \tau)$ encodes \emph{all} the information relevant to the classification of ${\bm{x}}_i$. This is more than what is captured by ${\mathcal{Q}}({\bm{x}}_i, 0.5)$ alone (fixing $\tau=0.5$ gives the median classifier) and less than the information present in the distribution of the features. To understand this difference, consider the toy example in figure~\ref{fig:intro}. Figure~\ref{fig:intro(d)} considers only the predictions at ${\mathcal{Q}}({\bm{x}}_i, 0.5)$ and fails to identify the out-of-distribution (OOD) samples. The quantile representations in figure~\ref{fig:intro(c)} identify the training samples perfectly. An example where quantile representations cannot identify the distribution of the training data is discussed in appendix~\ref{sec:A2}. Since ${\mathcal{Q}}({\bm{x}}_i, \tau)$ encodes the information relevant to the classification of ${\bm{x}}_i$, it allows us to analyze the ``reasons'' for assigning a particular class to ${\bm{x}}_i$. Specifically, this information can help us identify whether a particular sample is out-of-distribution (OOD detection) and also estimate the confidence of our prediction (calibration). We verify this in section~\ref{ssec:calibration}.
\subsection{Generating Quantile Representations Using Duality between Quantiles and Probabilities}
\label{ssec:genquantrep}
Recall that ${\mathcal{Q}}({\bm{x}}_i, \tau)$ minimizes \eqref{eq:simulcheckloss}. This forces ${\mathcal{Q}}({\bm{x}}_i, 0.5)$ to minimize the mean absolute error (MAE), which restricts the use of quantile representations to classifiers trained with MAE. While using MAE as the loss has several advantages \cite{DBLP:conf/aaai/GhoshKS17}, it also enforces a bias on the solutions which is not always suitable in practice: most classifiers in practice are trained with domain-specific losses and regularization terms. This limits the applicability of quantile-regression-based approaches. We can circumvent this restriction by observing a \emph{duality} between quantiles and estimated probabilities in SBQR.
For binary classification, \eqref{eq:checkloss} can be written as \begin{equation} {\textnormal{h}}o(\hat{y}, y;\tau) = \begin{cases} \tau (1 - \hat{y}) & \text{if } y = 1 \\ (1-\tau) (\hat{y}) & \text{if } y = 0\\ \end{cases} \label{eq:bincheckloss} \end{equation} Now, observe that, \begin{equation} {\textnormal{h}}o(\hat{y}, y;\tau) = {\textnormal{h}}o(1-\tau, y; 1-\hat{y}) \label{eq:duality} \end{equation} In words, this implies a solution which predicts the $\tau^{th}$ quantile can be interpreted as the quantile at which the probability is $1-\tau$. This observation leads to the algorithm~{\textnormal{e}}f{alg:quantrep} for generating the quantile representations. \begin{algorithm}[tb] \caption{Generating Quantile Representations.} \label{alg:quantrep} \begin{itemize} \item Let $\mathcal{D}=\{({\bm{x}}_i, y_i)\}$ denote the training dataset. Assume that a pretrained binary classifier $f({\bm{x}})$ is given. The aim is to generate the quantile representations with respect to $f({\bm{x}})$. We also refer to this $f({\bm{x}})$ as base-classifier. \item Assign ${\mathcal{Q}}({\bm{x}}, 0.5) = f({\bm{x}},; \theta)$, that is take the median classifier to be the given classifier. \item Define $y_{i, \tau}^{+} = I[f({\bm{x}}_i) > (1-\tau)]$. We refer to this as modified labels at quantile $\tau$. \item To obtain ${\mathcal{Q}}({\bm{x}}, \tau)$, train the classifier using the dataset $\mathcal{D}_{\tau}^{+}=\{({\bm{x}}_i, y_{i, \tau}^{+})\}$. Repeating this for all $\tau$ allows us to construct the quantile representation ${\mathcal{Q}}({\bm{x}}, \tau)$. \end{itemize} \end{algorithm} \paragraph{Why does algorithm~{\textnormal{e}}f{alg:quantrep} return quantile representations?} Assume for a arbitrary ${\bm{x}}_i$, we have ${\mathcal{Q}}({\bm{x}}_i, 0.5) = p_i$. Standard interpretation states - at quantile $\tau=0.5$, the probability of ${\bm{x}}_i$ in class $1$ is $p_i$. However, thanks to duality in \eqref{eq:duality}, this can also be interpreted as - At quantile $\tau = (1-p_i)$, the probability of ${\bm{x}}_i$ in class $1$ is 0.5. Thanks to monotonocity property in \eqref{eq:monotonicity}, we have for all $\tau < (1-p_i)$, probability of ${\bm{x}}_i$ in class $1$ is $ < 0.5$, and hence belongs to class $0$. And for all $\tau > (1-p_i)$, probability of ${\bm{x}}_i$ in class $1$ is $ > 0.5$, and hence belongs to class $1$. This implies that at a given quantile $\tau^{*}$, ${\bm{x}}_i$ will belong to class $1$ if $\tau^{*} > (1-p_i)$ or if $p_i > (1- \tau^{*})$ or if $f({\bm{x}}_i) > (1- \tau^{*})$. Defining, $y_{i, \tau^{*}}^{+} = I[f({\bm{x}}_i) > (1-\tau^{*})]$, we have that the classifier at quantile $\tau^{*}$, ${\mathcal{Q}}({\bm{x}}, \tau^{*})$ fits the data $\mathcal{D}_{\tau}^{+}=\{({\bm{x}}_i, y_{i, \tau^{*}}^{+})\}$ and thus can be used to identify ${\mathcal{Q}}({\bm{x}}, \tau^{*})$. This gives us the algorithm~{\textnormal{e}}f{alg:quantrep} to get the quantile representations for an arbitrary classifier $f({\bm{x}})$. \textbf{Remark (Sigmoid vs Indicator function):} In theory, we approximate $\hat{y_i} = I[\hat{{\bm{z}}_i} > 0]$ (i.e Indicator function) with the sigmoid as $\hat{y_i} = \sigma(\hat{{\bm{z}}_i})$. The algorithm~{\textnormal{e}}f{alg:quantrep} gives a solution up to this approximation. In particular we have the following theorem \begin{theorem} \label{thm:1} Assume that we generate the output using the indicator function $\hat{y_i} = I[\hat{{\bm{z}}_i} > 0]$, instead of sigmoid function $\hat{y_i} = \sigma({\bm{z}}_i)$. 
Further assume that the base classifier $f({\bm{x}}) = \sigma(\phi({\bm{x}}_i))$ is obtained using the MAE loss, i.e by minimizing \begin{equation} \min_{\phi} \sum_i |y_i - \sigma(\phi({\bm{x}}_i))| \end{equation} Then the solution ${\mathcal{Q}}({\bm{x}}, \tau)$ obtained by algorithm~{\textnormal{e}}f{alg:quantrep} minimizes the cost in \eqref{eq:simulcheckloss} over the dataset $\mathcal{D}$, i.e \begin{equation} \min_{\psi} \mathbb{E}_{\tau \in U[0,1]}\left[\frac{1}{N}\sum_{i=1}^{N}{\textnormal{h}}o(I[\psi({\bm{x}}_i, \tau) \geq 0],y_i;\tau){\textnormal{i}}ght] \end{equation} \end{theorem} A sketch of the proof for the above theorem is given in the appendix~{\textnormal{e}}f{sec:A1}. To summarize, thanks to the duality in \eqref{eq:duality}, one can compute the quantile representations for any arbitrary classifier. This allows for detailed analysis of the classifier and the features learned. In the following section we first discuss the implementation of algorithm~{\textnormal{e}}f{alg:quantrep} in practice and provide both qualitative and quantitative analysis for specific models. \section{Experiments and Analysis} \label{sec:experiments} \begin{figure*} \caption{Quantile Representations (Resnet34)} \label{fig:quantvsactual(a)} \caption{Original Features (Resnet34)} \label{fig:quantvsactual(b)} \caption{Scatterplot} \label{fig:quantvsactual(e)} \caption{Do quantile representations capture the relevant information for classification? (a) Cross-correlations obtained using Quantile representations for Resnet34 on CIFAR10 (b) Cross-correlations obtained using train features for Resnet34 on CIFAR10. (c) Scatterplot with best fit line (using Locally Weighted Scatterplot Smoothing\cite{cleveland_robust_1979} \label{fig:quantvsactual1} \end{figure*} \begin{figure*} \caption{Quantile Representations (Densenet)} \label{fig:quantvsactual(c)} \caption{Original Features (Densenet)} \label{fig:quantvsactual(d)} \caption{Scatterplot} \label{fig:quantvsactual(f)} \caption{Do quantile representations capture the relevant information for classification? (a) Cross-correlations obtained using Quantile representations for Densenet on CIFAR10 (b) Cross-correlations obtained using train features for Densenet on CIFAR10. (c) Scatterplot with best fit line (using Locally Weighted Scatterplot Smoothing\cite{cleveland_robust_1979} \label{fig:quantvsactual2} \end{figure*} \subsection{Generating Quantile Representations in practice} To generate the quantile representations in practice we follow algorithm~{\textnormal{e}}f{alg:quantrep} as described with a few modifications as follows. (\textbf{Remark}: Code is attached as supplementary material and will be made public after publication of the manuscript) \textbf{Use Logits instead of Probabilities:} Observe that both the base-classifier $f({\bm{x}})$ and the quantile representations ${\mathcal{Q}}({\bm{x}},\tau)$ are taken to be probabilities, and hence belong to the interval $(0,1)$. However, in practice we observed precision issues when considering the probability outputs using the $\sigma(.)$ function. Hence we use the logits instead of the probabilities as this reduces the precision problems associates with the $\sigma(.)$ function. Observe that since logits and probabilities are related by a monotonic transformation, this does not make a difference for classification. \textbf{Taking discrete quantiles:} Observe that the quantile representation ${\mathcal{Q}}({\bm{x}},\tau)$ is a continuous function of $\tau$. 
In practice, we consider $n_{\tau} = 1000$ equally spaced quantiles between $0.01$ and $0.99$ as an approximation.
\textbf{Use one-vs-rest for multi-class classification:} The algorithm~\ref{alg:quantrep} assumes binary classes. For multi-class classification we use the one-vs-rest approach to generate the quantile representations. Accordingly, the quantile representation size is $n_{\text{classes}} \times n_{\tau}$. \textbf{Remark:} This increases the computational cost of generating the quantile representations, which is further discussed in section~\ref{sec:limitation}.
\textbf{Using weighted quantiles for multi-class classification:} The interpretation of quantile regression coefficients is related to the class distribution. For instance, if the number of data points in class $0$ is 200 and in class $1$ is 800, the coefficients at quantile $0.8$ are much more meaningful than at quantile $0.5$ \cite{10.1093/jjfinec/nbh023}. In the one-vs-rest approach to multi-class classification the classes are always imbalanced. To counter this, we use \emph{weighted quantiles} by assigning to the data points weights inversely proportional to their class size.
\textbf{Interpolating the quantile representations:} To generate the quantile representation at each $\tau$, one has to train a classifier. This can lead to increased computational requirements when considering a large number of quantiles $\tau$. However, the coefficients of the classifiers at different $\tau$ are expected to vary smoothly with $\tau$. Hence, to reduce the computational cost, we generate the coefficients at $100$ distinct values of $\tau$ equally spaced between $0.01$ and $0.99$ and use cubic interpolation \cite{hastie2009elements} to generate coefficients for $1000$ quantiles.
\textbf{Normalizing the coefficients of the classifier:} Note that if $\beta$ denotes the coefficients of the boundary of a linear classifier, then $c\beta$ defines the same boundary. Since the algorithm involves constructing and comparing different classifiers, we normalize the coefficients of each classifier using the $L^2$ norm. This allows for a fair comparison of logits across different models.
\subsection{Cross-correlation of features}
\label{ssec:exp1}
To illustrate that the quantile representations capture the aspects of the data distribution relevant to classification, we perform the following experiment: construct the cross-correlation between features using (i) quantile representations and (ii) feature values extracted from the training data. If our hypothesis is accurate, the cross-correlations obtained using quantile representations and feature values should be similar. In Figures~\ref{fig:quantvsactual1} and \ref{fig:quantvsactual2}, we present the results of using features from Resnet34 and Densenet on the CIFAR10 dataset. Figures~\ref{fig:quantvsactual(a)} and \ref{fig:quantvsactual(b)} show the results for Resnet34, and Figures~\ref{fig:quantvsactual(c)} and \ref{fig:quantvsactual(d)} show the results for Densenet. To visualize the cross-correlations, we use a heatmap whose row and column ordering is obtained from average-linkage clustering of the training features; this ordering is common to both the quantile representations and the extracted features. It is evident from the figures that the cross-correlation between features is similar whether it is computed using extracted features or quantile representations.
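Before turning to the applications, the practical recipe above is summarized in the following illustrative sketch (hypothetical function and variable names; the supplementary code should be consulted for the actual implementation). For brevity, the sketch fits one logistic-regression classifier per quantile on the full grid instead of interpolating coefficients from $100$ to $1000$ quantiles, and it uses one plausible reading of the class-size weighting described above.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def quantile_representations(F_train, y_train, F_eval, n_classes, n_tau=100):
    """Illustrative one-vs-rest quantile representations built on logits."""
    taus = np.linspace(0.01, 0.99, n_tau)
    columns = []
    for k in range(n_classes):                          # one-vs-rest
        y_bin = (y_train == k).astype(int)
        base = LogisticRegression(max_iter=1000).fit(F_train, y_bin)
        p = base.predict_proba(F_train)[:, 1]           # base-classifier probabilities
        # "Weighted quantiles": weights inversely proportional to the
        # one-vs-rest class sizes (one reading of the weighting above).
        w = np.where(y_bin == 1, 1.0 / max(y_bin.sum(), 1),
                     1.0 / max((1 - y_bin).sum(), 1))
        for tau in taus:
            y_plus = (p > 1.0 - tau).astype(int)        # modified labels at quantile tau
            if y_plus.min() == y_plus.max():            # degenerate split: constant score
                columns.append(np.full(len(F_eval), float(2 * y_plus[0] - 1)))
                continue
            clf = LogisticRegression(max_iter=1000).fit(F_train, y_plus, sample_weight=w)
            scale = np.linalg.norm(clf.coef_)           # L2-normalize the boundary
            columns.append((F_eval @ clf.coef_.ravel() + clf.intercept_[0]) / scale)
    return np.column_stack(columns)                     # (n_eval, n_classes * n_tau)
\end{verbatim}

Evaluating this function on the training and test features yields the representations that the detectors and calibration estimates of the following subsections operate on.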
\subsection{OOD Detection} \label{ssec:exp2} \begin{table*} \caption{Comparison of Quantile-Representations with baseline for OOD Detection. Observe that Quantile-Representations outperform the baseline in all the cases.} \label{table:1} {\bm{s}}kip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{|c|c|ccccc|} \toprule & & \multicolumn{5}{c|}{DenseNet (Baseline/Quantile-Rep)} \\ \midrule & & LSUN(C) & LSUN(R) & iSUN & Imagenet(C) & Imagenet(R) \\ \hline \multirow{3}{*}{\parbox{2cm}{\centering CIFAR10}} & AUROC & 92.08/93.64 & 93.86/94.61 & 92.84/93.74 & 90.93/91.72 & 90.93/92.06 \\ & TNR@TPR95 & 58.19/64.56 &63.07/66.89 &59.64/64.68 &53.94/56.34 & 54.44/58.22 \\ & Det. Acc & 85.58/87.14 & 87.66/88.60& 86.29/87.42 & 84.11/84.93 & 84.10/85.33\\ \hline \multirow{3}{*}{\parbox{2cm}{\centering SVHN}} & AUROC & 91.80/92.29& 90.75/90.70& 91.21/91.30& 91.93/91.97 & 91.93/92.01 \\ & TNR@TPR95 & 54.61/58.77& 47.67/48.55& 48.24/50.15& 52.38/53.68 & 52.43/53.64 \\ & Det. Acc & 85.10/85.37& 84.32/84.16& 84.80/84.77&85.42/85.55 & 85.46/85.50 \\ \toprule & & \multicolumn{5}{c|}{Resnet34 (Baseline/Quantile-Rep)} \\ \midrule \multirow{3}{*}{\parbox{2cm}{\centering CIFAR10}} & AUROC & 91.43/91.76 & 92.64/93.08 & 91.89/92.34 & 90.59/90.81& 89.12/89.39 \\ & TNR@TPR95 & 54.96/56.76 & 63.24/65.75 &58.56/60.94 & 52.86/54.89& 47.41/49.93 \\ & Det. Acc & 84.63/84.96 & 85.41/86.06& 84.39/85.17 & 83.24/83.44& 81.74/82.05 \\ \hline \multirow{3}{*}{\parbox{2cm}{\centering SVHN}} & AUROC & 94.80/94.87 & 94.37/94.46 & 95.13/95.22 & 95.73/95.85 & 95.62/95.70 \\ & TNR@TPR95 & 76.19/76.15 & 72.10/72.87 & 75.88/76.25 & 79.16/79.53 & 78.34/78.82 \\ & Det. Acc & 89.58/89.72 & 88.82/88.87 & 89.78/89.85 & 90.72/90.87 & 90.54/90.60 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} {\bm{s}}kip -0.1in \end{table*} An assumption made across all machine learning models is that - Train and test datasets share the same distributions. However, test data can contain samples which are out-of-distribution (OOD) whose labels have not been seen during the training process \cite{DBLP:conf/cvpr/NguyenYC15}. Such samples should be ignored during inference. Hence OOD detection is a key component of different ML systems. Several methods \cite{DBLP:conf/iclr/HendrycksG17,DBLP:conf/nips/LeeLLS18,DBLP:conf/nips/BibasFH21} have been proposed for OOD detection. Here we check how quantile representations compare to the baseline method in \cite{DBLP:conf/iclr/HendrycksG17} for OOD detection. Consider the following model for binary classification \begin{equation} \begin{aligned} {\bm{z}} &= g({\bm{x}}) + {\epsilon}ilon({\bm{x}}) \\ {\bm{y}} &= I[{\bm{z}} \geq 0] \end{aligned} \label{eq:binmodel} \end{equation} where ${\epsilon}ilon({\bm{x}})$ is some error distribution dependent on ${\bm{x}}$ (for example hetero-skedastic models). The base classifier would then be obtained using $f({\bm{x}}) = P(g({\bm{x}}) + {\epsilon}ilon({\bm{x}}) \geq 0)$. Consider a regression problem, that is assume latent ${\bm{z}}$ is known. Then for a test sample ${\bm{x}}_t$, $g({\bm{x}}_t)$ denotes the estimates latent score. If this observed latent score is an outlier with respect to the distribution ${\epsilon}ilon$, then it is most likely that ${\bm{x}}_t$ is OOD sample. However, we do not have the information about the latent scores for the classification problem. Hence the classifier constructed using \eqref{eq:binmodel} cannot differentiate between OOD samples and samples with high confidence. 
This is illustrated in figure~{\textnormal{e}}f{fig:intro(d)}. Quantile representations solve this by constructing different classifiers at different distances from the base-classifier (illustrated in figure~{\textnormal{e}}f{fig:intro(b)}). This allows us to differentiate between OOD samples and samples with high confidence. Thus we expect quantile representations to improve the OOD detection. This is verified empirically below. \paragraph{Experimental Setup} For this study, we use the CIFAR10\cite{krizhevsky2014cifar} and SVHN\cite{netzer2011reading} datasets as in-distribution (ID) datasets and the iSUN\cite{xu2015turkergaze}, LSUN\cite{yu15lsun}, and TinyImagenet\cite{DBLP:conf/iclr/LiangLS18} datasets as out-of-distribution (OOD) datasets. To extract features, we employ two network architectures: Resnet34\cite{he2016deep} and Densenet\cite{DBLP:conf/cvpr/HuangLMW17}. We train a base classifier using Logistic Regression, on the extracted features with the labels, and use this classifier to construct quantile representations. For evaluation we use (i) AUROC: The area under the receiver operating characteristic curve of a threshold-based detector. A perfect detector corresponds to an AUROC score of 100\%. (ii) TNR at 95\% TPR: The probability that an OOD sample is correctly identified (classified as negative) when the true positive rate equals 95\%. (iii) Detection accuracy: Measures the maximum possible classification accuracy over all possible thresholds. \paragraph{Methodology and Results} The baseline method uses the outputs from the base-classifier and then use Local Outlier Factor (LOF) \cite{10.1145/335191.335388} to identify the OOD samples. The quantile approach uses quantile-representations instead and then use the LOF approach to detect OOD samples. Table~{\textnormal{e}}f{table:1} shows the results obtained using quantile-representations and outputs from the base classifier. Observe that quantile-representations outperform the baseline approach for detection of OOD samples. \textbf{Remark:} Observe that the improvement of OOD detection using quantile-representations in Densenet is better than for Resnet34. This is due to the fact that features are much more correlated in Resnet34 (figure~{\textnormal{e}}f{fig:quantvsactual1}) compared to Densenet (figure~{\textnormal{e}}f{fig:quantvsactual2}). So, the amount of improvement itself can be used to quantify the importance of the features. \paragraph{Can we improve OOD detection?} Observe that OOD detection problem considers train labels as a guide. In the sense that, OOD samples are defined as the ones which do not belong to train labels. If we generalize this to identifying the train distribution alone - It can be achieved by using random labels along with quantile representations. In fact this method, by suitable sampling the points and assigning pseudo-labels, can identify any region of the space. See appendix~{\textnormal{e}}f{sec:A2} for illustration. \begin{figure*} \caption{ECE (Resnet34)} \label{fig:calib(a)} \caption{Accuracy (Resnet34)} \label{fig:calib(b)} \caption{ECE (Densenet)} \label{fig:calib(c)} \caption{Accuracy (Densenet)} \label{fig:calib(d)} \caption{Quantile representations can be effective for calibration because they estimate probabilities using Equation~\eqref{eq:quantprob} \label{fig:calib} \end{figure*} \subsection{Calibration of ML models} \label{ssec:calibration} For several applications the confidence of the predictions is important. 
This is measured by considering how well the output probabilities from the model reflect it's predictive uncertainty. This is referred to as \tilde{p}h{Calibration}. Several methods \cite{PlattProbabilisticOutputs1999, DBLP:conf/kdd/ZadroznyE02, DBLP:conf/nips/Lakshminarayanan17, DBLP:journals/corr/abs-2110-01052, DBLP:conf/nips/LiuLPTBL20} are used to improve the calibration of the deep learning models. Most of these methods consider a part of the data (apart from train data) to adjust the probability predictions. However, in \cite{DBLP:journals/corr/abs-1906-02530, DBLP:conf/nips/MindererDRHZHTL21} it has been shown that most of the calibration approaches fail under distortions. In this section we show that calibration using quantile-representations are invariant to distortions. A perfectly calibrated model (binary class) will satisfy \cite{DBLP:conf/icml/GuoPSW17} \begin{equation} P({\bm{y}} = 1 | f({\bm{x}}) = p^*) = p^* \label{eq:bincalbirated} \end{equation} For multi-class cases this is adapted to \begin{equation} P({\bm{y}} = \argmax_{k}(p_{k}) | \max_{k}(f({\bm{x}})) = p^*) = p^* \label{eq:multicalbirated} \end{equation} The degree of miscalibration is measured using \tilde{p}h{Expected Calibration Error (\texttt{ECE})} \begin{equation} E[| p^* - E[P({\bm{y}} = \argmax_{k}(p_{i,k}) | \max_{k}(f({\bm{x}})) = p^*)] |] \label{eq:ECE} \end{equation} This is computed by binning the predictions into $m$ bins - $B_1, B_2, \cdots, B_m$ and computing\ \begin{equation} \hat{\texttt{ECE}} = \sum_{i=1}^{m} \frac{|B_i|}{n} | \texttt{acc}(B_i) - \texttt{conf}(B_i)| \label{eq:ECEestimate} \end{equation} where $ \texttt{acc}(B_i) = (1/|B_i|)\sum_{j \in B_i} I[{\bm{y}}_j = \argmax(p_{j})]$ denotes the accuracy of the predictions lying in $B_i$, and $\texttt{conf}(B_i) = \sum_{j \in B_i} \max(f({\bm{x}}_j))$ indicates the average confidence of the predictions lying in $B_i$. In the ideal scenario, we have that quantile representations predict perfectly calibrated probabilities as illustrated in the following theorem. \begin{theorem} \label{thm:2} Assume that the data is generated using the model in \eqref{eq:binmodel}, where $f({\bm{x}}) = P(g({\bm{x}}) + {\epsilon}ilon({\bm{x}}) \geq 0)$ denotes the base-classifier. Let ${\mathcal{Q}}({\bm{x}}, \tau)$ denote the quantile representations obtained on this data. Define the probabilities as \begin{equation} P({\bm{y}} = 1) = \int_{\tau=0}^{1} I[{\mathcal{Q}}({\bm{x}}, \tau) \geq 0] d\tau \label{eq:quantprob} \end{equation} The probabilities obtained using \eqref{eq:quantprob} are perfectly calibrated. \end{theorem} The proof for theorem~{\textnormal{e}}f{thm:2} is given in appendix~{\textnormal{e}}f{sec:A3}. We can use theorem~{\textnormal{e}}f{thm:2} as a basis to predict the calibration error using quantile representations (illustrated below), even though, in practice, the model predicted from train data may not be perfectly calibrated. \paragraph{Experimental Setup} In this study, we employ the CIFAR10 dataset and the Resnet34 and Densenet models to investigate the robustness of classifiers. To evaluate the classifiers' robustness, we use the distorted CIFAR10 dataset introduced in \cite{DBLP:conf/iclr/HendrycksD19}, which contains 15 types of common corruptions at five severity levels. This dataset is a standard benchmark for testing the robustness of classifiers. We use quantile-representations trained on the CIFAR10 training data to assess the generalization performance of the classifiers on the distorted dataset. 
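A minimal sketch of how the probabilities defined by \eqref{eq:quantprob} and the estimator \eqref{eq:ECEestimate} might be computed from a discretized quantile representation is given below (illustrative names only; for simplicity, the bins here are equally spaced in probability).

\begin{verbatim}
import numpy as np

def quantile_probabilities(Q):
    # Q: array (n_samples, n_tau) of quantile-representation logits on an
    # equally spaced tau grid; the integral in eq. (eq:quantprob) becomes
    # the fraction of tau values for which Q >= 0.
    return (Q >= 0).mean(axis=1)

def expected_calibration_error(p_pred, y_true, y_pred, n_bins=5):
    # p_pred: confidence of the predicted class; y_pred: predicted labels.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_pred, bins) - 1, 0, n_bins - 1)
    ece, n = 0.0, len(p_pred)
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        acc = (y_true[mask] == y_pred[mask]).mean()     # acc(B_i)
        conf = p_pred[mask].mean()                      # conf(B_i)
        ece += mask.sum() / n * abs(acc - conf)
    return ece
\end{verbatim}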
We compare the performance with the Maximum Softmax Probability (\texttt{MSP}) as a baseline and evaluate both accuracy and calibration error. We construct the bins $\{B_i\}$ using 5 equally spaced quantiles of the predicted probabilities. The probabilities of each class are predicted using \eqref{eq:quantprob}. (\textbf{Remark:} These probabilities do not add up to 1 since we consider a one-vs-rest approach.) We present the results in Figure~\ref{fig:calib}. As we increase the severity of the distortions, the accuracy of both the quantile representations and \texttt{MSP} decreases. However, the probabilities obtained from quantile representations are robust to distortions, as their Expected Calibration Error (\texttt{ECE}) does not increase with severity in the same way as \texttt{MSP}'s does. This indicates that quantile representations can more accurately predict the calibration error and are more resistant to distortions. \textbf{Remark:} An unexpected finding in our experiments was that correcting the probabilities obtained from quantile representations using methods like isotonic regression and Platt scaling did not maintain their robustness to distortions. This result is discussed in more detail in appendix~\ref{sec:A4}.
\section{Related Work}
\label{sec:related work}
\cite{koenker_2005, 10.2307/4144436, 10.2307/2241522, Probal1992} provide a comprehensive overview of approaches related to quantile regression and the identification of its parameters. \cite{doi:10.1080/01621459.1996.10476954} extends quantiles to the multivariate case. \cite{DBLP:conf/nips/TagasovskaL19, DBLP:journals/tai/TambwekarMDS22} use quantile-regression-based approaches for estimating the confidence of neural-network-based predictions. \cite{DBLP:journals/corr/abs-2110-01052, NEURIPS2021_1006ff12} use conformal methods to calibrate probabilities, which is closely related to computing quantiles. \cite{NEURIPS2021_5b168fdb} proposes a similar algorithm to overcome the restriction to the pinball loss for regression problems. \cite{DBLP:journals/corr/abs-2110-00816} generates predictive regions using quantile regression techniques.
\section{Conclusion, Limitations and Future Work}
\label{sec:limitation}
To summarize, in this article we show the duality between quantiles and probabilities in the case of SBQR. Exploiting this duality, we propose an algorithm to compute quantile representations for any given base classifier. We verify that the quantile representations model the training distribution well, both qualitatively (by plotting cross-correlations of the features) and quantitatively (against the OOD detection baseline). We further show that the probabilities from quantile representations are robust to distortions. Interestingly, we found that traditional approaches cannot be used to correct the calibration error. The main limitation of the approach is the computational cost of algorithm~\ref{alg:quantrep} on large-scale datasets. We expect that modeling the function ${\mathcal{Q}}({\bm{x}}, \tau)$ with a single neural network can reduce the training time; this is considered for future work. In appendix~\ref{sec:A2} we propose using random labels to improve OOD detection and illustrate this on a toy example. In appendix~\ref{sec:A6} we illustrate (on a toy example) how distribution shift may be identified using quantile representations. These are considered for future work as well.
\appendix \onecolumn \section{Proof for theorem~{\textnormal{e}}f{thm:1}} \label{sec:A1} Let $\mathcal{D} = \{({\bm{x}}_i, y_i)\}$ denote the train dataset of size $N$. Then the minima over the dataset $\mathcal{D}$, is obtained by solving, \begin{equation} \min_{\psi} \mathbb{E}_{\tau \sim U[0,1]}\left[\frac{1}{N} \sum_{i} {\textnormal{h}}o(I[\psi({\bm{x}}_i, \tau) \geq 0],y_i;\tau){\textnormal{i}}ght] \label{eq:a1_1} \end{equation} Let ${\mathcal{Q}}({\bm{x}}, \tau)$ denotes the solution obtained using the algorithm~{\textnormal{e}}f{alg:quantrep}. Let ${\mathcal{P}}({\bm{x}}, \tau)$ denote the solution obtained by solving \eqref{eq:a1_1}. We aim to show that ${\mathcal{Q}}({\bm{x}}_i, \tau) = I[{\mathcal{P}}({\bm{x}}_i, \tau) \geq 0]$ for all the points in $\mathcal{D} = \{({\bm{x}}_i, y_i)\}$. First, observe that, since the base classifier $f({\bm{x}})$ is obtained using MAE we have that ${\mathcal{Q}}({\bm{x}}_i, 0.5) = I[f({\bm{x}}_i) > 0.5] = I[{\mathcal{P}}({\bm{x}}_i, 0.5) \geq 0]$. Next for arbitrary $\tau$, we show that ${\mathcal{Q}}({\bm{x}}_i, \tau) = I[{\mathcal{P}}({\bm{x}}_i, \tau) \geq 0]$ over the dataset $\mathcal{D} = \{({\bm{x}}_i,y_i)\}$. We approximate the indicator function as $I[{\bm{x}} \geq 0] \approx \lim_{k\to \infty} K_k({\bm{x}})$. For instance one can consider $K_k({\bm{x}}) = \sigma({\bm{x}} k)$. Observe that a solution to minimize \eqref{eq:a1_1} can be obtained by \begin{equation} {\mathcal{P}}({\bm{x}},\tau) = \lim_{k \to \infty} \argmin_{\psi} \mathbb{E}_{\tau \sim U[0,1]} \left[\frac{1}{N} \sum_{i} {\textnormal{h}}o\left( K_k\left(\psi({\bm{x}}_i, \tau){\textnormal{i}}ght), y_i; \tau {\textnormal{i}}ght) {\textnormal{i}}ght] \end{equation} Let \begin{equation} {\mathcal{P}}^{(k)}({\bm{x}},\tau) = \argmin_{\psi} \mathbb{E}_{\tau \sim U[0,1]} \left[\frac{1}{N} \sum_{i} {\textnormal{h}}o\left( K_k\left(\psi({\bm{x}}_i, \tau){\textnormal{i}}ght), y_i; \tau {\textnormal{i}}ght) {\textnormal{i}}ght] \end{equation} Also, since $f({\bm{x}})$ optimizes MAE, we have $K_k({\mathcal{P}}^{(k)}({\bm{x}}, 0.5)) = f({\bm{x}})$. Using this, we have for all $k$, \begin{equation} \begin{aligned} I[f({\bm{x}}) \geq 1-\tau] &= I[K_k({\mathcal{P}}^{(k)}({\bm{x}}, 0.5)) \geq 1-\tau] \\ &= I[K_k({\mathcal{P}}^{(k)}({\bm{x}}, \tau)) \geq 0.5] \\ &= I[{\mathcal{P}}^{(k)}({\bm{x}}, \tau)) \geq 0 ] \\ \end{aligned} \end{equation} where the second equality follows from the duality in \eqref{eq:duality}. Taking $k \to \infty$, we have \begin{equation} I[f({\bm{x}}) \geq 1-\tau] = I[{\mathcal{P}}({\bm{x}}, \tau) \geq 0] \end{equation} On the other hand, for all data points in $\mathcal{D}$ (from the definition of on the construction of ${\mathcal{Q}}({\bm{x}}, \tau)$), \begin{equation} I[f({\bm{x}}_i) \geq 1-\tau] = {\mathcal{Q}}({\bm{x}}_i, \tau) \end{equation} Since, $ {\mathcal{Q}}({\bm{x}}_i, \tau) = I[{\mathcal{P}}({\bm{x}}_i, \tau) \geq 0]$ for all datapoints in $\mathcal{D}$, it follows that ${\mathcal{Q}}({\bm{x}}_i, \tau)$ optimizes \eqref{eq:a1_1}. 
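As an aside, the duality \eqref{eq:duality} invoked in the argument above admits a quick numerical sanity check; the following illustrative sketch (not part of the supplementary code) compares the two sides of the identity on random binary targets.

\begin{verbatim}
import numpy as np

def rho(y_hat, y, tau):
    # check (pinball) loss of eq. (eq:checkloss); for binary y it coincides
    # with the binary form of eq. (eq:bincheckloss)
    return np.where(y - y_hat > 0, tau * (y - y_hat), (1 - tau) * (y_hat - y))

rng = np.random.default_rng(0)
y_hat = rng.uniform(size=1000)        # predicted probabilities in (0, 1)
y = rng.integers(0, 2, size=1000)     # binary targets
tau = rng.uniform(size=1000)
assert np.allclose(rho(y_hat, y, tau), rho(1 - tau, y, 1 - y_hat))
\end{verbatim}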
\section{Proof for theorem~{\textnormal{e}}f{thm:2}} \label{sec:A3} The proof follows from the fact that \begin{equation} {\mathcal{Q}}({\bm{x}}_i, \tau) \geq 0 \Leftrightarrow f({\bm{x}}_i) \geq (1-\tau) \Leftrightarrow P(g({\bm{x}}_i) + {\epsilon}ilon({\bm{x}}_i)\geq 0) \geq 1- \tau \end{equation} Assuming that $\tau^* = P(g({\bm{x}}_i) + {\epsilon}ilon({\bm{x}}_i)\geq 0)$, So, we have \begin{equation} \begin{aligned} \int_{\tau=0}^{1} I[{\mathcal{Q}}({\bm{x}}_i, \tau) \geq 0] d\tau &= \int_{\tau=0}^{1} I[\tau^* \geq (1-\tau)] d\tau = \int_{\tau=0}^{1} I[\tau^* \geq (1-\tau)] d\tau \\ &= \int_{\tau=0}^{1} I[\tau \geq (1-\tau^*)] d\tau = \int_{\tau=(1-\tau^*)}^{1} 1 d\tau = \tau^* \end{aligned} \end{equation} Thus the theorem follows. \section{A case where quantile representations do not capture the entire distribution} \label{sec:A2} \begin{figure*} \caption{Original Data} \label{fig:intro_app(a)} \caption{OOD Detection using quantile representations} \label{fig:intro_app(b)} \caption{OOD Detection using Probabilities} \label{fig:intro_app(c)} \caption{Using Random Labels} \label{fig:intro_app(d)} \caption{ Illustrating a case where quantile representations do not capture the distribution perfectly. (a) Original Dataset. (b) The region detected as in-distribution by using quantile representations. (c) Region detected as in-distribution by using the outputs from a single classifier. Observe that quantile representations still perform better than single classifier outputs. (d) Using random labels instead of ground-truth. Observe that the two moons structure is faithfully preserved in this image. The brightness of {\color{red} \label{fig:intro_appendix} \end{figure*} In figure~{\textnormal{e}}f{fig:intro_appendix} we illustrate an example where quantile representations do not capture the entire distribution. Here we use the same data as in figure~{\textnormal{e}}f{fig:intro}, but with different class labels. This is shown in figure~{\textnormal{e}}f{fig:intro_app(a)}. When we perform the OOD detection we get the region as in figure~{\textnormal{e}}f{fig:intro_app(b)}. Observe that while it does detect points far away from the data as out-of-distribution, the moon structure is not identified. In particular, the spaces between the moons is not considered OOD. This illustrates a case when quantile representations might fail. However, OOD detection using a single classifier also fail, as illustrated in figure~{\textnormal{e}}f{fig:intro_app(c)}. Observe that the region identified by quantile representations is much better than the one obtained using a single classifier. \paragraph{A simple fix for OOD detection:} If OOD detection were the aim, then it is possible to change the approach slightly by considering \tilde{p}h{random labels} instead of the ground-truth labels. This allows us to identify arbitrary regions where the data is located. This is illustrated in figure~{\textnormal{e}}f{fig:intro_app(d)}. Observe that this method can be used to identify any region in the space by suitably sampling and assigning pseudo-labels. In this case, we identify the training data perfectly. \section{Cannot Correct Calibration Error} \label{sec:A4} \begin{figure*} \caption{Densenet} \label{fig:A4(a)} \caption{Resnet} \label{fig:A4(b)} \caption{Correcting calibration error on the validation set may not improve performance on corrupted datasets. 
The figure illustrates the use of Platt scaling and Isotonic regression to correct the calibration error on the validation set, but this leads to an increase in the calibration error on the corrupted dataset. This suggests that it may not be possible to correct the calibration error simply by adjusting the probability scores.} \label{fig:A4} \end{figure*} Figure~{\textnormal{e}}f{fig:calib} shows that calibration error from quantile representations is robust to noise. So, an obvious question which follows is - Can we then correct it using validation data and improve the calibration score? It turns out that this is not possible. \textbf{Remark:} A similar result is also obtained in Proposition 1 of \cite{NEURIPS2021_5b168fdb}. To verify this we perform the same experiment as in section~{\textnormal{e}}f{ssec:calibration}. Further we use Platt Scaling and Isonotic regression on validation data and accordingly transform the probability estimates for the corrupted datasets. These results are shown in figure~{\textnormal{e}}f{fig:A4}. Surprisingly, we find that correcting for calibration on the validation set actually leads to an increase in the calibration error on corrupted datasets. It is currently unclear why this is the case. Our hypothesis is that this calibration error may be due to the insufficient modeling of the underlying distribution by $f({\bm{x}})$. For example, if the true underlying model is quadratic but we are using a linear model for classification, the calibration error caused by this mismatch cannot be corrected. \textbf{Remark:} Using techniques as described in \cite{DBLP:journals/corr/abs-2206-02757} can be useful to correct this, since the correction is dependent on the underlying output features and not just the probabilities. \section{Matching Quantile-Representations to correct the distribution} \label{sec:A6} In this part we illustrate the matching of quantile-representations to correct for distribution shifts following the ideas from \cite{doi:10.1080/01621459.2014.929522}. Let $X$ denote the original distribution of the data, and let $\Phi(X)$ denote the modified distribution. We assume that the function $\Phi(.)$ is deterministic but unknown. If both $X$ and $\Phi(X)$ are known, then it is easy to estimate $\Phi(.)$ using some model such as neural networks. However, in reality we do not have this information. Once the environment changes, the data collected will be very different from the original ones and we do not know how $\Phi(.)$ distorts the original data. So, the aim is to estimate $\Phi(.)$ without the knowledge of $X$ and $\Phi(X)$. This is where the fact that - quantile-representations capture the distribution information becomes relevant. \paragraph{Is this even possible?} Let ${\mathcal{Q}}({\bm{x}}, \tau)$ denote the quantile representation obtained using $X$, and ${\mathcal{Q}}_{\Phi}({\bm{x}}, \tau)$ denote the quantile representation obtained using $\Phi(X)$. Let the data collected in the new environment be $\{\hat{{\bm{x}}}_i\}$, then we should have that \begin{equation} \int_{\tau=0}^{1} |{\mathcal{Q}}(\Phi^{-}(\hat{{\bm{x}}}_i), \tau) - {\mathcal{Q}}_{\Phi}(\hat{{\bm{x}}_i}, \tau)| = 0 \label{eq:get_phi} \end{equation} Using this it is possible to estimate $\Phi^{-}$ and hence $\Phi$. Observe the following - Functions ${\mathcal{Q}}(.,.)$ and ${\mathcal{Q}}_{\Phi}(.,.)$ are learnt from the data using the labels, and depends on it. So, one needs the labels to specify the directions in which distribution should be the same. 
For instance, consider the following example - Assume we wish to classify the candidates as suitable/not-suitable for a job based on a set of features. Now, what is suitable/not-suitable changes with with time. As well as the ability (represented in features) of the general population. So, we collect data at time $t=t_0$, $\{({\bm{x}}_{i,t_0}, y_{i,t_0})\}$ and at time $t=t_1$, $\{({\bm{x}}_{i,t_1}, y_{i,t_1})\}$. However we do not know the relation between ${\bm{x}}_{i, t_0}$ and ${\bm{x}}_{i, t_1}$. In such cases, matching quantile representations can be useful. \begin{figure*} \caption{Original Dataset ($X_{t_0} \label{fig:A6(a)} \caption{Shifted Dataset ($X_{t_1} \label{fig:A6(b)} \caption{Boundaries at different quantiles (Original data)} \label{fig:A6(c)} \caption{Boundaries at different quantiles (Shifted data)} \label{fig:A6(d)} \caption{Distribution of $X_{t_1} \label{fig:A6(e)} \caption{Distribution of data $\Phi(X_{t_0} \label{fig:A6(f)} \caption{Matching quantile representations. Observe that the estimated distribution at time $t_1$ is similar to the actual distribution at time $t_1$. This shows that the estimate of $\Phi()$ is accurate.} \label{fig:A6} \end{figure*} \paragraph{Illustration: Matching of quantile representations} The above procedure is illustrated in figure~{\textnormal{e}}f{fig:A6}. Consider the data at $t_0$ as in figure~{\textnormal{e}}f{fig:A6(a)} and data at $t_1$ as in figure~{\textnormal{e}}f{fig:A6(b)}. This data in figure~{\textnormal{e}}f{fig:A6(a)} is generated using 2d Gaussian distribution with centers $[[0,0],[1,1]]$ and standard deviation $[[0.1,0.3],[0.3,0.11]]$. We refer to this distribution as $X_{t_0}$. Data in figure~{\textnormal{e}}f{fig:A6(b)} is obtained by generating a new sample with the same distribution as $X_{t_0}$ and transforming it using a random orthogonal matrix. We refer to this distribution using $X_{t_1}$. Note that there is no correspondence between the data samples at $X_{t_0}$ and $X_{t_1}$. Figures~{\textnormal{e}}f{fig:A6(c)} and {\textnormal{e}}f{fig:A6(d)} illustrate the quantile representations obtained using the class labels at both these times. We then estimate $\Phi()$ using \eqref{eq:get_phi}. Figure~{\textnormal{e}}f{fig:A6(e)} shows the density at $X_{t_1}$ and Figure~{\textnormal{e}}f{fig:A6(f)} shows the density of $\Phi(X_{t_0})$. Observe that the estimate of the density and the actual density match. This shows that quantile representations can be used to correct distribution shifts. \paragraph{Caveat:} However, quantile-representations cannot estimate $\Phi()$ which do not change the distribution of the samples. For instance if $X_{t_1} = - X_{t_0}$, and if $X_{t_0}$ is symmetric around $0$, then the quantile-representations are identical. Under what conditions can $\Phi(.)$ be estimated is considered for future work. \paragraph{Advantage of using quantile-representations} A question which follows is - Why not simply retrain the classifier at $t_0$? (i) As can be gleaned from the above experiments, it is not possible to estimate $\Phi()$ from the single classifier alone, but can be done using quantile-representations (ii) The labels considered for constructing the quantile-representations need not be the same as the classification labels. They would correspond to important attributes of the data. For instance, one can consider aspects like technical skill of the candidate instead of simply suitable/not-suitable classification. \end{document}
\begin{document} \maketitle
\begin{abstract}
We consider the space of chord-arc curves on the plane passing through infinity with their parametrizations $\gamma$ on the real line, and embed this space into the product of the BMO Teichm\"uller spaces. The fundamental theorem we prove about this representation is that the correspondence $\gamma \mapsto \log \gamma'$ also gives a biholomorphic homeomorphism onto its image in the complex Banach space of BMO functions. Using these two equivalent complex structures, we develop a clear exposition of the analytic dependence of the mappings involved between certain subspaces. In particular, we examine the parametrization of a chord-arc curve induced by the Riemann mapping and its dependence on the arc-length parametrization. As a consequence, we settle a conjecture of Katznelson, Nag, and Sullivan from 1990 by showing that this dependence is not continuous.
\end{abstract}
\section{Introduction}
A quasicircle $\Gamma$ is the image of the real line $\mathbb R$ by a quasiconformal homeomorphism of the complex plane $\mathbb C$. (In this paper, we always assume that a closed curve passes through the point at infinity.) The family of all such quasicircles modulo affine translations is identified with the universal Teichm\"uller space $T$. However, if we consider $\Gamma$ with its parametrization, namely, if we consider an embedding $\mathbb R \to \mathbb C$ that is induced by a quasiconformal homeomorphism of $\mathbb C$, then the family of all such normalized (i.e., $0$, $1$, and $\infty$ are fixed) embeddings is identified with the product of the Teichm\"uller spaces $T(\mathbb U) \times T(\mathbb L)$ defined on the upper and lower half-planes. This representation has been used to investigate quasifuchsian spaces via the Bers simultaneous uniformization. We apply this method to chord-arc curves. A chord-arc curve $\Gamma$ is the image of $\mathbb R$ by a bi-Lipschitz homeomorphism of $\mathbb C$. The family of all such chord-arc curves modulo affine translations is identified with a proper subset of the BMO Teichm\"uller space $T_b$ introduced by Astala and Zinsmeister \cite{AZ}. To give a parametrization of $\Gamma$, we introduce normalized BMO embeddings of $\mathbb R$ into $\mathbb C$, whose totality is identified with the product $T_b(\mathbb U) \times T_b(\mathbb L)$. The subset of all normalized BMO embeddings whose images are chord-arc curves is denoted by $\rm CA$. Every $\gamma \in {\rm CA}$ is locally absolutely continuous, and $\log \gamma'$ belongs to the complex Banach space ${\rm BMO}(\mathbb R)$ of all complex-valued BMO functions on $\mathbb R$ modulo constant functions. In this paper, we prove the following fundamental result on $\rm CA$ from the viewpoint of quasiconformal Teichm\"uller theory (see Theorem \ref{biholo}). In contrast to the case of quasicircles, the better regularity of chord-arc curves makes it possible for $\rm CA$ to have another representation of its complex structure in the complex Banach space ${\rm BMO}(\mathbb R)$.
\begin{theorem}\label{fundamental}
$\rm CA$ is an open subset of $T_b(\mathbb U) \times T_b(\mathbb L)$, and the map $L:{\rm CA} \to {\rm BMO}(\mathbb R)$ defined by $L(\gamma)=\log \gamma'$ is a biholomorphic homeomorphism onto its image.
\end{theorem}
For the study of plane curves that are by nature objects of harmonic analysis, this point of view had been missing until recently, when we began to investigate Weil--Petersson curves (see \cite{WM-4}).
In this paper, we will show that the above theorem greatly simplifies and clarifies the arguments concerning chord-arc curves by giving simple proofs for existing important results and also by answering open problems in this subject. A quasisymmetric homeomorphism $f:\mathbb R \to \mathbb R$ is the extension of a quasiconformal homeomorphism of $\mathbb U$ onto itself. If $f$ is locally absolutely continuous and its derivative $f'$ is an $A_\infty$-weight of Muckenhoupt, then $f$ is called strongly quasisymmetric. The group of all normalized strongly quasisymmetric homeomorphisms on $\mathbb R$ is denoted by ${\rm SQS}$, which can be identified with the BMO Teichm\"uller space $T_b$. Let ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ denote the convex open subset of the real Banach space of all real-valued BMO functions $u$ such that $e^u$ is an $A_\infty$-weight. Then, for every $f \in {\rm SQS}$, $\log f'$ belongs to ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ by definition, and in fact, this correspondence $\Psi:{\rm SQS} \cong T_b \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ is bijective. Moreover, it is known that $\Psi$ is a homeomorphism. However, in our formulation, we can apply a simpler argument to this map $\Psi$ to obtain a stronger result. Since ${\rm SQS}$ is a real-analytic submanifold of ${\rm CA}$ corresponding to the diagonal locus of $T_b(\mathbb U) \times T_b(\mathbb L)$ and $\Psi$ is the restriction of $L$ to ${\rm SQS}$, Theorem \ref{fundamental} implies the following assertion (see Corollary \ref{real-analytic}). This gives an independent argument for an expected claim by Fan, Hu and Shen \cite[Remark 4.4]{FHS}. \begin{corollary}\label{1.2} $\Psi:{\rm SQS} \cong T_b \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ is a real-analytic homeomorphism whose inverse $\Psi^{-1}$ is also real-analytic. \end{corollary} The importance of this result lies in the fact that we can convert the real-analytic dependence upon the BMO norm to that upon the analytic structure of the Teichm\"uller space. This is the translation of harmonic analysis aspects into complex analysis aspects. For instance, the tangent space of ${\rm SQS}$ can be described by the solution of a certain time-dependent flow equation on $\mathbb R$, but the dependence of the solution should be given by the BMO norm (see \cite{WS}). Since the tangent space of $T_b$ is represented by Beltrami differentials in the complex analytic theory, we need a translation of the relation ${\rm SQS} \cong T_b$ at the level of tangent spaces. Corollary \ref{1.2} is useful for this. Next, we consider a problem on the dependence of the Riemann mapping parametrization of chord-arc curves. For each chord-arc curve $\Gamma$, the normalized Riemann mapping from $\mathbb U$ to the left domain bounded by $\Gamma$ defines a parametrization of $\Gamma$ by its extension to $\mathbb R$. Let ${\rm RM}^\circ$ be the subset of ${\rm CA}$ consisting of all such Riemann mapping parametrizations of chord-arc curves. This is a complex-analytic submanifold of $\rm CA$. Let $\Phi:{\rm CA} \to {\rm RM}^\circ$ be defined by taking the Riemann mapping parametrization with the same chord-arc image. Associated with $\Phi$, another map $\Pi:{\rm CA} \to {\rm SQS}$ is defined by requiring that $\gamma$ is the reparametrization of $\Phi(\gamma)$ by the strongly quasisymmetric homeomorphism $\Pi(\gamma)$ for every $\gamma \in {\rm CA}$. Namely, there is a unique decomposition $\gamma=\Phi(\gamma) \circ \Pi(\gamma)$.
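For orientation, we note the two extreme cases of this decomposition, which follow immediately from the definitions and are not used later: if $\gamma \in {\rm SQS}$, then its image is $\mathbb R$ itself, the normalized Riemann mapping of $\mathbb U$ onto the left domain $\mathbb U$ is the identity, and hence $\Phi(\gamma)$ is the identity embedding and $\Pi(\gamma)=\gamma$; if $\gamma \in {\rm RM}^\circ$, then $\Phi(\gamma)=\gamma$ and $\Pi(\gamma)$ is the identity.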
This map $\Pi$ can be regarded as the projection onto the real-analytic submanifold ${\rm SQS}$, and hence it is real-analytic. On the other hand, since a chord-arc curve $\Gamma$ is rectifiable, it has an arc-length parametrization. Let ${\rm ICA}$ be the subset of $\rm CA$ consisting of the arc-length parametrizations of chord-arc curves. This is the inverse image under $L$ of some open subset $\Omega$ of the real Banach subspace $i{\rm BMO}_{\mathbb R}(\mathbb R)$ of purely imaginary BMO functions, and hence ${\rm ICA}$ is a real-analytic submanifold of ${\rm CA}$. The problems we will consider concern the maps $\Pi$ and $\Phi$ restricted to ${\rm ICA}$. By the identification of ${\rm ICA}$ with $\Omega \subset i{\rm BMO}_{\mathbb R}(\mathbb R)$ (which is denoted by $i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ later) and also by ${\rm SQS} \cong {\rm BMO}_{\mathbb R}^*(\mathbb R)$, we can define a map $\lambda:\Omega \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ as the conjugate of $\Pi|_{\rm ICA}$ by $L$. Properties of this $\lambda$ have been intensively investigated, and the arguments have been developed further in the literature. Among them, one of the important results Coifman and Meyer \cite{CM} obtained is that $\lambda$ is real-analytic. See also Semmes \cite{SeB} and Wu \cite{Wu}. In our formulation, since $\Pi|_{\rm ICA}$ is merely the projection from the real-analytic submanifold $\rm ICA$, we see that this is an immediate consequence of the fact that $L$ is biholomorphic (see Theorem \ref{CM}). \begin{corollary}\label{1.3} $\lambda=L \circ \Pi|_{\rm ICA} \circ L^{-1}:\Omega \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ is real-analytic. \end{corollary} We also consider $\Phi|_{\rm ICA}$. This map indicates how the Riemann mapping varies as the chord-arc curve $\Gamma$ changes. By composing with the map $\Pi^*$ identifying ${\rm RM}$ with the Teichm\"uller space $T_b \cong \rm SQS$, we also obtain a correspondence from ${\rm ICA}$ to the conformal welding homeomorphisms in $\rm SQS$. The map $\rho:\Omega \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ given as the conjugate of $\Pi^* \circ \Phi|_{\rm ICA}$ by $L$ was considered in Katznelson, Nag and Sullivan \cite{KNS}, where they mentioned that $\rho$ was not known to be continuous, and stated several preferable consequences ``if this were continuous''. In contrast, Shen and Wu \cite{SW} proved that the mapping corresponding to $\rho$ is continuous in the analogous setting of Weil--Petersson curves. This can also be explained simply by another fundamental theorem for the space of Weil--Petersson curves (see \cite{WM-4}). Our conclusion for this problem in this paper is: \begin{theorem}\label{1.4} $\Phi|_{\rm ICA}:{\rm ICA} \to {\rm RM}^\circ$ and $\rho=L \circ (\Pi^* \circ \Phi|_{\rm ICA}) \circ L^{-1}:\Omega \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ are not continuous. \end{theorem} See Theorem \ref{main} and Corollary \ref{another}. This is proved by using the fact that $T_b \cong {\rm SQS}$ is not a topological group. This property implies that $\Phi$ is not continuous on ${\rm CA}$, but in order to apply this to ${\rm ICA}$, we have to investigate a local property of $\lambda$ at the origin. By using the fact that the derivative of $\lambda$ at the origin is given by the Hilbert transformation on ${\rm BMO}_{\mathbb R}(\mathbb R)$, we see that $\lambda$ is a local homeomorphism at the origin. This gives us the freedom of choosing elements in ${\rm ICA}$ that yield a particular example showing the discontinuity of $\Phi|_{\rm ICA}$.
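Before outlining the contents of the paper, we record a simple example for orientation; the curve $\gamma_\theta$ below is introduced only for this illustration, its stated properties can be verified directly from the characterizations recalled in Section \ref{curves}, and the example is not used in the sequel. For a fixed $\theta$ with $|\theta|<\pi$, let
$$
\gamma_\theta(x)=
\begin{cases}
x, & x \geq 0,\\
e^{i\theta}x, & x<0.
\end{cases}
$$
Then $\gamma_\theta$ fixes $0$, $1$, and $\infty$, satisfies $|\gamma_\theta'| \equiv 1$, and its image is the union of two rays meeting at the origin, which is a chord-arc curve; hence $\gamma_\theta$ belongs to ${\rm ICA}$ and $L(\gamma_\theta)=\log \gamma_\theta'=i\theta \cdot 1_{(-\infty,0)}$ is a bounded element of $i{\rm BMO}_{\mathbb R}(\mathbb R)$. Moreover, the decomposition $\gamma_\theta=\Phi(\gamma_\theta) \circ \Pi(\gamma_\theta)$ can be written down explicitly: $\Phi(\gamma_\theta)$ is the boundary extension of the Riemann mapping $H(z)=z^{1+\theta/\pi}$ of $\mathbb U$ onto the sector $\{0<\arg w<\pi+\theta\}$, and $\Pi(\gamma_\theta)(x)={\rm sign}(x)\,|x|^{\pi/(\pi+\theta)}$.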
Here is a brief introduction of the contents in this paper. Section 2 is a collection of preliminary results used later. It contains BMO functions, $A_\infty$-weights, Carleson measures, and their roles in the quasiconformal theory of Teichm\"uller spaces. In Section 3, we introduce BMO embeddings of the real line in general and give a rather complete survey on the results concerning such embeddings whose images are chord-arc curves. Sections 4--8 are the main body of the new contribution of this paper. The aforementioned Bers coordinates by the BMO Teichm\"uller spaces for the total space of BMO embeddings are introduced in Section 4. The mappings $\Phi$ and $\Pi$ are also defined explicitly here by using the Bers coordinates. From Section 5, we focus on the space ${\rm CA}$ consisting of BMO embeddings with the chord-arc images. First, curve theoretical representations of an element of ${\rm CA}$, arc-length parametrization and change of parameter, are discussed, and the coordinates of $L({\rm CA})$ are given in the space of BMO functions. Theorem \ref{fundamental} and Corollary \ref{1.2} are proved in Section 6. Properties of the biholomorphic mapping $L$ are also discussed. Section 7 is devoted to the setup for the problem on the Riemann mapping parametrization, and Corollary \ref{1.3} is presented there. A more important result on the real-analyticity of the inverse of $\lambda$ is also explained. Finally in Section 8, we prove our main achievement in this paper, Theorem \ref{1.4}. Questions about the discontinuity of certain related mappings are also answered. \section{Preliminaries} \subsection{BMO functions and $A_\infty$-weights} A locally integrable complex-valued function $u$ on $\mathbb R$ is of {\it BMO} if $$ \Vert u \Vert_*=\sup_{I \subset \mathbb R}\frac{1}{|I|} \int_I |u(x)-u_I| dx <\infty, $$ where the supremum is taken over all bounded intervals $I$ on $\mathbb R$ and $u_I$ denotes the integral mean of $u$ over $I$. The set of all complex-valued BMO functions on $\mathbb R$ is denoted by ${\rm BMO}(\mathbb R)$. This is regarded as a Banach space with norm $\Vert \cdot \Vert_*$ by ignoring the difference of complex constant functions. The {\it John--Nirenberg inequality} for BMO functions (see \cite[VI.2]{Ga}) asserts that there exists two universal positive constants $C_0$ and $C_{JN}$ such that for any complex-valued BMO function $u$, any bounded interval $I$ of $\mathbb{R}$, and any $\lambda > 0$, it holds that \begin{equation*}\label{JN} \frac{1}{|I|} |\{t \in I: |u(t) - u_I| \geq \lambda \}| \leq C_0 \exp\left(\frac{-C_{JN}\lambda}{\Vert u \Vert_*} \right). \end{equation*} A locally integrable non-negative measurable function $\omega \geq 0$ on $\mathbb R$ is called a weight. We say that $\omega$ is an $A_p$-weight of Muckenhoupt \cite{M} for $p>1$ if there exists a constant $C_p(\omega) \geq 1$ such that \begin{equation*}\label{Ap} \left(\frac{1}{|I|} \int_I \omega(x)dx \right)\left(\frac{1}{|I|} \int_{I} \left(\frac{1}{\omega(x)}\right)^{\frac{1}{p-1}}dx\right)^{p-1} \leq C_p(\omega) \end{equation*} for any bounded interval $I \subset \mathbb R$. We define $\omega$ to be an {\it $A_\infty$-weight} if $\omega$ is an $A_p$-weight for some $p>1$, that is, $A_\infty=\bigcup_{p>1} A_p$. 
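For orientation, we recall the standard family of power weights; this classical example is not needed in the sequel. For $a \in \mathbb R$, the weight $\omega_a(x)=|x|^a$ on $\mathbb R$ is an $A_p$-weight precisely when $-1<a<p-1$, and hence it is an $A_\infty$-weight precisely when $a>-1$; for $a \leq -1$, the function $|x|^a$ is not even locally integrable, so it is not a weight in the above sense.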
It is known that $\omega$ is an $A_\infty$-weight if and only if there are positive constants $\alpha(\omega)$, $K(\omega)>0$ such that \begin{equation}\label{SD} \frac{\int_E \omega(x)dx}{\int_I \omega(x)dx}\leq K(\omega)\left(\frac{|E|}{|I|}\right)^{\alpha(\omega)} \end{equation} for any bounded interval $I \subset \mathbb{R}$ and for any measurable subset $E \subset I$ (see \cite[Theorem V]{CF}, \cite[Lemma VI.6.11]{Ga}). We also define $\omega$ to be an $A_1$-weight if there exists a constant $C_1(\omega) \geq 1$ such that $$ \frac{1}{|I|} \int_I \omega(x)dx \leq C_1(\omega)\, {\rm ess}\!\!\! \inf_{x \in I \quad}\!\!\! \omega(x) $$ for any bounded interval $I \subset \mathbb R$. It holds $A_1 \subset A_p$ for every $p>1$. Another characterization of $A_\infty$-weights can be given by the inverse Jensen inequality. Namely, $\omega \geq 0$ belongs to the class of $A_\infty$-weights if and only if there exists a constant $C_\infty(\omega) \geq 1$ such that \begin{equation*}\label{iff} \frac{1}{|I|} \int_I \omega(x) dx \leq C_\infty(\omega) \exp \left(\frac{1}{|I|} \int_I \log \omega(x) dx \right) \end{equation*} for every bounded interval $I \subset \mathbb R$ (see \cite[Theorem IV.2.15]{GR} and \cite{Hr}). We see that if $\omega$ is an $A_\infty$-weight on $\mathbb R$, then $\log \omega$ belongs to ${\rm BMO}_{\mathbb R}(\mathbb R)$ which is the real subspace of ${\rm BMO}(\mathbb R)$ consisting of all real-valued BMO functions, and conversely, we know the following fact (see \cite[p.409]{GR} and \cite[Lemma VI.6.5]{Ga}). \begin{proposition}\label{C_0} Suppose that a weight $\omega \geq 0$ satisfies $\log \omega \in {\rm BMO}_{\mathbb R}(\mathbb R)$. If the BMO norm $\Vert \log \omega \Vert_*$ is less than the constant $C_{JN}$, then $\omega$ is an $A_\infty$-weight. \end{proposition} There is an example of $u \in {\rm BMO}_{\mathbb R}(\mathbb R)$ such that $e^u$ is not an $A_\infty$-weight. Let ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ denote the proper subset of the real subspace ${\rm BMO}_{\mathbb R}(\mathbb R)$ consisting of all real-valued BMO functions $u$ with $e^u$ being an $A_\infty$-weight. \begin{proposition}\label{convex} ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ is a convex open subset of ${\rm BMO}_{\mathbb R}(\mathbb R)$. \end{proposition} \begin{proof} To show the convexity, we use the following property of $A_\infty$-weight: if $\omega_1$ and $\omega_2$ are $A_\infty$-weights, then $\omega_1^s\omega_2^t$ is also an $A_\infty$-weight for $s,t \geq 0$ with $s+t=1$. This can be proved by decomposing these $A_p$-weights for some $p>1$ into the product of $A_1$-weights by the Jones factorization theorem (see \cite[Corollary IV.5.3]{GR}) and then verifying the same claim for $A_1$-weights. As another property of $A_\infty$-weight, we know that if $\omega$ is an $A_\infty$-weight, then there is some $\varepsilon >0$ such that $\omega^r$ is an $A_\infty$-weight for every $r \in [0,1+\varepsilon)$ (see \cite[Theorem IV.2.7]{GR}). Combining these properties with the fact in Proposition \ref{C_0} that the open neighborhood of the origin of ${\rm BMO}_{\mathbb R}(\mathbb R)$ within $C_{JN}$ is contained in ${\rm BMO}_{\mathbb R}^*(\mathbb R)$, we can prove that ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ is open. Indeed, ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ is the union of open cones spanned by the $C_{JN}$-neighborhood of the origin having any points of ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ as their vertices. 
\end{proof} \subsection{Strongly quasisymmetric homeomorphisms} A quasisymmetric homeomorphism $f:\mathbb R \to \mathbb R$ is called {\it strongly quasisymmetric} if it is locally absolutely continuous and the derivative $f'$ is an $A_\infty$-weight. In this case, $\log f' \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. Conversely, for any $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, the indefinite integral $$ f_u(x) = \frac{\int_0^x e^{u(t)}dt}{\int_0^1 e^{u(t)}dt} $$ defines a strongly quasisymmetric homeomorphism $f_u$ on $\mathbb R$ that fixes $0$, $1$ and $\infty$. We denote the set of all strongly quasisymmetric homeomorphisms of $\mathbb R$ onto itself with this normalization by ${\rm SQS}$. Hence, ${\rm SQS}$ and ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ correspond bijectively. By (\ref{SD}), $f$ is a strongly quasisymmetric homeomorphism if and only if there are constants $K$ and $\alpha$ such that \begin{equation}\label{alphaK} \frac{|f(E)|}{|f(I)|}\leq K\left(\frac{|E|}{|I|}\right)^{\alpha} \end{equation} for any bounded interval $I \subset \mathbb{R}$ and for any measurable subset $E \subset I$. From this property, we see that ${\rm SQS}$ is a group under the composition. A strongly quasisymmetric homeomorphism $f:\mathbb R \to \mathbb R$ can be also characterized by the operator on the Banach space of BMO functions $u$. The pre-composition of $f$ to $u$ gives a change of the parameter, and we consider this linear operator on ${\rm BMO}(\mathbb R)$ induced by $f$. \begin{theorem}[\cite{Jo}]\label{pullback} The increasing homeomorphism $f$ from $\mathbb R$ onto itself is strongly quasisymmetric if and only if the composition operator $P_f: u \mapsto u\circ f$ gives an isomorphism of ${\rm BMO}(\mathbb R)$, that is, $P_f$ and $(P_f)^{-1}$ are bounded linear operators. \end{theorem} \subsection{Quasiconformal extension and Carleson measure} Let $M(\mathbb U)$ denote the open unit ball of the Banach space $L^{\infty}(\mathbb U)$ of all essentially bounded measurable functions on $\mathbb U$. An element in $M(\mathbb U)$ is called a {\it Beltrami coefficient}. We say that a measure $\lambda$ on $\mathbb U$ is a {\it Carleson measure} if $$ \Vert \lambda \Vert_c=\sup_{I \subset \mathbb R} \frac{\lambda(I \times (0,|I|))}{|I|} <\infty, $$ where the supremum is taken over all bounded intervals $I$ in $\mathbb R$. For $\mu \in L^\infty(\mathbb U)$, we set $\lambda_\mu=|\mu(z)|^2dxdy/y$ and define $\Vert \mu \Vert_c=\Vert \lambda_\mu \Vert_c^{1/2}$. Then, we introduce a new norm $\Vert \mu \Vert_{\infty}+\Vert \mu \Vert_{c}$ for $\mu$. Let $\mathcal{L}(\mathbb U)$ denote a linear subspace of $L^{\infty}(\mathbb U)$ consisting of all elements $\mu$ with $\Vert \mu \Vert_{\infty}+\Vert \mu \Vert_{c}<\infty$, namely, $\lambda_\mu$ is a Carleson measure on $\mathbb U$. This is a Banach space with this norm. Moreover, we consider the corresponding spaces of Beltrami coefficients as $\mathcal{M}(\mathbb U) = M(\mathbb U) \cap \mathcal{L}(\mathbb U)$. We consider a quasiconformal homeomorphism $F$ of $\mathbb U$ onto itself whose complex dilatation $\mu_F=\bar \partial F/\partial F$ belongs to $\mathcal M(\mathbb U)$. Concerning its continuous extension to the boundary $\mathbb R$, we know the following result. \begin{theorem}[\mbox{\cite[Theorem 2.3]{FKP}}]\label{basic} If $F$ is a quasiconformal homeomorphism of $\mathbb U$ onto $\mathbb U$ whose complex dilatation $\mu_F$ belongs to ${\mathcal M}(\mathbb U)$, then its extension to $\mathbb R$ is a strongly quasisymmetric homeomorphism. 
\end{theorem} Conversely to Theorem \ref{basic}, there is a way of extending a strongly quasisymmetric homeomorphism of $\mathbb R$ to a quasiconformal homeomorphism of $\mathbb U$ onto itself whose complex dilatation induces a Carleson measure. Let $\phi(x)=\frac{1}{\sqrt \pi}e^{-x^2}$ and $\psi(x)=\phi'(x)=-2x \phi(x)$. We extend a strongly quasisymmetric homeomorphism $f:\mathbb R \to \mathbb R$ to $\mathbb U$ by setting a differentiable map $F: \mathbb{U} \to \mathbb{C}$ by \begin{equation*}\label{F} \begin{split} &F(x, y) = U(x, y) + iV(x, y);\\ U(x,y)&=(f \ast \phi_y)(x),\ V(x,y)=(f \ast \psi_y)(x), \end{split} \end{equation*} where $\varphi_y(x)=y^{-1} \varphi(y^{-1}x)$ for $x \in \mathbb R$ and $y>0$, and $\ast$ is the convolution. We call this extension the variant of the {\it Beurling--Ahlfors extension} by the heat kernel. The former statement of the next theorem follows from \cite[Theorem 4.2]{FKP}. See \cite[Theorem 3.4]{WM-2} for its exposition. The latter statement is given in \cite{WM-4}. \begin{theorem}\label{FKP} For a strongly quasisymmetric homeomorphism $f:\mathbb R \to \mathbb R$, the map $F$ given by the variant of the Beurling--Ahlfors extension by the heat kernel is a quasiconformal diffeomorphism of $\mathbb U$ onto itself whose complex dilatation $\mu_F$ belongs to $\mathcal M(\mathbb U)$. Moreover, $F$ is bi-Lipschitz with respect to the hyperbolic metric. \end{theorem} We note that in the case where the BMO norm of $\log f'$ is sufficiently small for a strongly quasisymmetric homeomorphism $f$, Semmes \cite{Se} used a modified Beurling--Ahlfors extension $F$ by compactly supported kernels $\phi$ and $\psi$ to prove the same properties as in Theorem \ref{FKP}. By dividing the weight $f'$ into small pieces and composing the resulting maps, the assumption on the small BMO norm can be removed to obtain a quasiconformal extension of the same properties. \subsection{The BMO Teichm\"uller space} The {\it universal Teichm\"uller space} $T=T(\mathbb U)$ is the set of all Teichm\"uller equivalence classes of Beltrami coefficients in $M(\mathbb U)$. Here, $\mu_1$ and $\mu_2$ in $M(\mathbb U)$ are equivalent if $F^{\mu_1}=F^{\mu_2}$ on $\mathbb R$, where $F^{\mu}$ denote the unique quasiconformal homeomorphism of $\mathbb U$ onto itself extendable to $\mathbb R$ that has complex dilatation $\mu \in M(\mathbb U)$ and keeps the points $0$, $1$ and $\infty$ fixed. We denote the quotient projection by $\pi:M(\mathbb U) \to T(\mathbb U)$, which is called the {\it Teichm\"uller projection}. For $\mathcal M(\mathbb U) \subset M(\mathbb U)$, we define the {\it BMO Teichm\"uller space} $T_b=T_b(\mathbb U)$ by $\pi(\mathcal M(\mathbb U))$ equipped with the quotient topology from $\mathcal M(\mathbb U)$. This space was introduced in \cite{AZ}. We can prove that $T_b$ has a complex Banach manifold structure such that $\pi:\mathcal M(\mathbb U) \to T_b(\mathbb U)$ is holomorphic and has a local holomorphic section (see Theorem \ref{model} below). Let ${\rm SQS}$ be the set of all strongly quasisymmetric homeomorphisms of $\mathbb R$ satisfying the normalized condition keeping $0$, $1$ and $\infty$ fixed. Then, the correspondence $\mu \mapsto F^{\mu}|_{\mathbb R}$ induces a well-defined bijection between $T_b$ and ${\rm SQS}$. Under this identification, structures on $T_b$ and on ${\rm SQS}$ are imported to each other. In particular, $T_b$ is endowed with the group structure and ${\rm SQS}$ is endowed with the complex Banach manifold structure. 
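As a concrete illustration of the relation between ${\rm SQS}$ and ${\rm BMO}_{\mathbb R}^*(\mathbb R)$, recorded here only for orientation and not used later, take $u(x)=a\log|x|$ with $a>-1$. Then $e^u=|x|^a$ is an $A_\infty$-weight for $a>-1$ (a classical fact), so $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, and the normalized indefinite integral introduced above is
$$
f_u(x)=\frac{\int_0^x |t|^a\,dt}{\int_0^1 |t|^a\,dt}={\rm sign}(x)\,|x|^{1+a},
$$
a normalized strongly quasisymmetric homeomorphism of $\mathbb R$ with $\log f_u'=a\log|x|$ modulo constants. This gives a one-parameter family of points of $T_b \cong {\rm SQS}$ whose images under $f \mapsto \log f'$ are unbounded BMO functions.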
By Proposition \ref{convex}, $\rm BMO_{\mathbb R}^*(\mathbb R)$ is the open convex subset of $\rm BMO_{\mathbb R}(\mathbb R)$ consisting of all real-valued BMO functions $u$ on $\mathbb R$ such that $e^u$ is an $A_{\infty}$-weight. Concerning the bijective relation between $T_b \cong {\rm SQS}$ and $\rm BMO_{\mathbb R}^*(\mathbb R)$, the following result was obtained in \cite[Theorem 7.1]{SWei}. \begin{proposition}\label{topequiv} The map $\Psi:{\rm SQS} \to \rm BMO_{\mathbb R}^*(\mathbb R)$ given by $f \mapsto \log f'$ is a surjective homeomorphism. \end{proposition} This claim implies that the real BMO space provides ${\rm SQS}$, which can be regarded as the real model of the BMO Teichm\"uller space $T_b$, with a real Banach manifold structure. Later in Corollary \ref{real-analytic}, we will see that this map is a real-analytic homeomorphism. In particular, the topology on $T_b \cong {\rm SQS}$ is equivalent to that on ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ defined by the BMO norm. Thus, we have the identifications \begin{equation}\label{trinity} T_b \cong {\rm SQS} \cong {\rm BMO}_{\mathbb R}^*(\mathbb R), \end{equation} and their correspondence becomes clear. Especially, we consider the group structure on $T_b$. As we have mentioned, we can regard $T_b$ as a group by the identification of $T_b$ with ${\rm SQS}$. The group operation is denoted by $[\mu] \ast [\nu]$ for $[\mu], [\nu] \in T_b$ and the inverse by $[\mu]^{-1}$. More explicitly, $[\mu] \ast [\nu]$ is the Teichm\"uller class of the complex dilatation $\mu \ast \nu$ of $F^\mu \circ F^\nu$ and $[\mu]^{-1}$ is that of the complex dilatation $\mu^{-1}$ of $(F^\mu)^{-1}$. The following result, which says that $T_b$ is a {\it partial topological group} in the sense of \cite{GS}, was proved in \cite{Wei}. \begin{proposition}\label{nearid} If $[\mu]$ and $[\nu]$ converge to $[\rm 0]$ in $T_b$, then $[\mu] \ast [\nu] \to [\rm 0]$ and $[\nu]^{-1} \to [\rm 0]$. \end{proposition} The correspondence $f \mapsto \log f'$ for $f \in {\rm SQS}$ gives a topological equivalence of the BMO Teichm\"uller space $T_b$ with ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ by Proposition \ref{topequiv}. Translating Proposition \ref{nearid} to ${\rm BMO}_{\mathbb R}^*(\mathbb R)$, we see that $\Vert \log (h\circ f)' \Vert_* \to 0$ as $\Vert \log h' \Vert_* \to 0$ and $\Vert \log f' \Vert_* \to 0$ for $h, f \in {\rm SQS}$. \subsection{Conformal extension and BMOA} For $\mu \in M(\mathbb L)$, we take a quasiconformal homeomorphism $H=H_\mu$ of $\mathbb C$ onto itself that is conformal on $\mathbb U$ and whose complex dilatation on $\mathbb L$ is $\mu$. We can characterize the condition $\mu \in \mathcal M(\mathbb L)$ by the conformal map $H|_{\mathbb U}$ and analytic function spaces defined as follows. Let $H^2(\mathbb U)$ be the Hardy space of holomorphic functions on $\mathbb U$ defined in a conformally invariant way from that on $\mathbb D$ (see \cite[Chapter 11]{Du}). We introduce $$ {\rm BMOA}(\mathbb U)=\{\phi \in H^2(\mathbb U) \mid \Vert \lambda^1_\phi \Vert_c<\infty\}, $$ where $\lambda^1_\phi=|\phi'(z)|^2ydxdy$ is a Carleson measure on $\mathbb U$. This space modulo constants is a Banach space with norm $\Vert \lambda^1_\phi \Vert_c^{1/2}$. For the set ${\rm Hol}(\mathbb U)$ of holomorphic functions on $\mathbb U$, we also define $$ B(\mathbb U)=\{\varphi \in {\rm Hol}(\mathbb U) \mid \Vert \lambda^2_\varphi \Vert_c<\infty\}, $$ where $\lambda^2_\varphi=|\varphi(z)|^2y^3dxdy$ is a Carleson measure on $\mathbb U$. 
This is a Banach space with norm $\Vert \lambda^2_\varphi \Vert_c^{1/2}$. With the aid of Theorems \ref{basic} and \ref{FKP} and the geometric characterization of the image $H(\mathbb U)$, we can summarize the equivalent conditions for $H=H_\mu$ as follows (see \cite[Theorem A]{SWei}). \begin{theorem}[\mbox{\cite[Theorem 4]{AZ}, \cite[Theorem 4]{BJ}}]\label{Guo11} Let $H:\mathbb U \to \mathbb C$ be a conformal mapping on $\mathbb U$ with $\lim_{z \to \infty}H(z)=\infty$ that extends to a quasiconformal homeomorphism of $\mathbb C$ whose complex dilatation on $\mathbb L$ is $\mu$. Then, the following conditions are equivalent $\!:$ \begin{enumerate} \item[(a)] $\mu$ belongs to $\mathcal M(\mathbb L);$ \item[(b)] $\mathcal L_H =\log H'\in {\rm BMOA}(\mathbb U);$ \item[(c)] $\mathcal S_H = \mathcal L_H'' - \frac{1}{2}(\mathcal L_H')^2 \in B(\mathbb U)$; \item[(d)] $\Gamma = \partial H(\mathbb U)$ is a quasicircle satisfying the Bishop--Jones condition. \end{enumerate} \end{theorem} Here, a Jordan curve $\Gamma$ satisfies the {\it Bishop--Jones condition} if there are constants $\delta>0$ and $C>0$ such that the domain $\Omega$ bounded by $\Gamma$ satisfies the following: for every $z \in \Omega$, there exists a Jordan domain $\Omega_z \subset \Omega$ containing $z$ and having the rectifiable boundary $\partial \Omega_z$ such that the length of $\partial \Omega_z$ is less than $C d(z,\Gamma)$ and the harmonic measure of $\partial \Omega_z \cap \Gamma$ with respect to $z \in \Omega_z$ is greater than $\delta$. This condition is invariant under a bi-Lipschitz homeomorphism of $\mathbb C$ onto itself in the Euclidean metric. We consider a subset ${\mathcal T}_b \subset {\rm BMOA}(\mathbb U)$ consisting of all $\mathcal L_H$ such that $H:\mathbb U \to \mathbb C$ is a conformal mapping on $\mathbb U$ with $\lim_{z \to \infty}H(z)=\infty$ that can be extended quasiconformally to $\mathbb C$. We also consider a subset ${\mathscr T}_b \subset B(\mathbb U)$ consisting of all $\mathcal S_H$ for those $H$. It is known that ${\mathcal T}_b$ and ${\mathscr T}_b$ are contractible open subsets that correspond bijectively to $T_b$. Hence, they serve as the models of the BMO Teichm\"uller space. The correspondence of the elements in these spaces is described as follows, and in particular, the complex Banach manifold structure on $T_b$ can be introduced in this way. The following theorem is based on the arguments in \cite[Sections 5, 6]{SWei} with the adaptation to the case of $\mathbb U$ as in \cite[Section 7]{WM-4}. \begin{theorem}\label{model} $(1)$ The map $\beta:{\mathcal M}(\mathbb L) \to {\mathscr T}_b$ defined by $\mu \mapsto \mathcal S_{H_\mu}$ is holomorphic and there is a local holomorphic inverse of $\beta$ at every point of ${\mathscr T}_b$. Moreover, $\beta \circ \pi^{-1}$ for the Teichm\"uller projection $\pi:{\mathcal M}(\mathbb L) \to T_b$ gives a well-defined surjective homeomorphism $T_b \to {\mathscr T}_b$. $(2)$ The map $\alpha:{\mathcal T}_b \to {\mathscr T}_b$ defined by $\phi \mapsto \varphi=\phi''-\frac{1}{2} (\phi')^2$ is a biholomorphic homeomorphism. \end{theorem} Concerning the boundary extension of $\phi \in {\rm BMOA}(\mathbb U) \subset H^2(\mathbb U)$, we note that $\phi$ has the non-tangential limit almost everywhere on $\mathbb R$ and the Poisson integral of this boundary function reproduces $\phi$. This links BMO properties of $\phi$ on $\mathbb U$ and on $\mathbb R$. The following theorem is well-known, which can be seen from \cite[Theorems 9.17 and 9.19]{Zh}. 
\begin{theorem}\label{131} Let $b(\phi)$ be the boundary extension of $\phi \in {\rm BMOA}(\mathbb U)$ defined by the non-tangential limit on $\mathbb R$. Then, $b(\phi) \in {\rm BMO}(\mathbb R)$, and the boundary extension operator $b:{\rm BMOA}(\mathbb U) \to {\rm BMO}(\mathbb R)$ is an isomorphism onto the image. \end{theorem} \section{BMO embeddings and chord-arc curves}\label{curves} We generalize strongly quasisymmetric homeomorphisms $\mathbb R \to \mathbb R$ to BMO embeddings $\gamma:\mathbb R \to \mathbb C$ and consider those whose images are chord-arc curves. These are defined below, but we note here that a BMO embedding is a mapping $\gamma$ of $\mathbb R$, that is, it carries not only its image $\Gamma=\gamma(\mathbb R)$ but also its parametrization, whereas a chord-arc curve refers to the image $\Gamma$ of a certain special embedding $\gamma$. \begin{definition} A homeomorphic embedding $\gamma:\mathbb R \to \mathbb C$ passing through $\infty$ is called a {\it BMO embedding} if there is a quasiconformal homeomorphism $G$ of $\mathbb C$ onto itself with $G|_{\mathbb R}=\gamma$ whose complex dilatation $\mu=\bar \partial G/\partial G$ satisfies $\mu|_{\mathbb U} \in \mathcal M(\mathbb U)$ and $\mu|_{\mathbb L} \in \mathcal M(\mathbb L)$. Such a map $G$ is called a {\it BMO quasiconformal homeomorphism}. \end{definition} \begin{definition} The image $\Gamma=\gamma(\mathbb R)$ of a homeomorphic embedding $\gamma:\mathbb R \to \mathbb C$ passing through $\infty$ is called a {\it chord-arc curve} if $\Gamma$ is locally rectifiable and there exists a constant $K \geq 1$ such that the length of the arc $\gamma([a,b])$ for any $a, b \in \mathbb R$ with $a<b$ is bounded by $K|\gamma(a)-\gamma(b)|$. \end{definition} In a similar way, we can also define bounded BMO embeddings and bounded chord-arc curves, which are mappings of the unit circle $\mathbb S$ into $\mathbb C$ not passing through $\infty$. However, for the convenience of the arguments, we consider the unbounded case in this paper. The image $\Gamma=\gamma(\mathbb R)$ of a homeomorphic embedding $\gamma:\mathbb R \to \mathbb C$ passing through $\infty$ is called a quasicircle if $\Gamma$ is the image of $\mathbb R$ under a quasiconformal homeomorphism of $\mathbb C$. This is known to be equivalent to a weaker condition than the above, obtained by replacing the length of $\gamma([a,b])$ with the diameter of $\gamma([a,b])$, even though $\Gamma$ is not necessarily locally rectifiable (see \cite[Theorems IV.4, 5]{Ah}). Hence, a chord-arc curve is a quasicircle. The corresponding characterization of a chord-arc curve as the image of $\mathbb R$ was shown in \cite[Proposition 1.13]{JK} as follows. \begin{proposition}\label{biLip} $\Gamma$ is a chord-arc curve if and only if $\Gamma$ is the image of $\mathbb R$ under a bi-Lipschitz homeomorphism of $\mathbb C$ with respect to the Euclidean metric. \end{proposition} We prepare the following lemma, which is used throughout this paper. The statement is known and has already been used in several places. This result originates in \cite[Lemma 10]{CZ}. There is an exposition in \cite{WM-5}. \begin{lemma}\label{composition} Let $F$ be a quasiconformal homeomorphism of $\mathbb U$ onto itself that is bi-Lipschitz with respect to the hyperbolic metric and whose complex dilatation $\nu$ is in $\mathcal M(\mathbb U)$. Then, the complex dilatation of $F^{-1}$ belongs to $\mathcal M(\mathbb U)$.
Let $H$ be a quasiconformal homeo\-morphism of $\mathbb U$ into $\mathbb C$ whose complex dilatation $\mu$ is in $\mathcal M(\mathbb U)$. Then, the complex dilatation of $H \circ F$, denoted by $F^*\mu$, also belongs to $\mathcal M(\mathbb U)$. \end{lemma} From this lemma, we also see that if $f:\mathbb R \to \mathbb R$ is a strongly quasisymmetric homeomorphism and $h:\mathbb R \to \mathbb C$ is a BMO embedding, then $h \circ f:\mathbb R \to \mathbb C$ is also a BMO embedding. Indeed, by Theorem \ref{FKP}, we can take the bi-Lipschitz quasiconformal extension $F$ of $f$. We first consider the derivative of a BMO embedding $\gamma:\mathbb R \to \mathbb C$. The following claim has been proved in a more general setting in \cite[Theorem 6.2]{Mac}, but to present a typical argument in this section and also introduce the notation, we show our proof here. \begin{proposition}\label{curve} A BMO embedding $\gamma:\mathbb R \to \mathbb C$ has its derivative $\gamma'$ almost everywhere on $\mathbb R$ and $\log \gamma'$ belongs to ${\rm BMO}(\mathbb R)$. \end{proposition} \begin{proof} Let $G:\mathbb C \to \mathbb C$ be a BMO quasiconformal homeomorphism associated with $\gamma$, and set $\mu_1=\mu|_{\mathbb U} \in \mathcal M(\mathbb U)$ and $\mu_2=\mu|_{\mathbb L} \in \mathcal M(\mathbb L)$ for the complex dilatation $\mu$ of $G$. We take a quasiconformal homeomorphism $F:\mathbb C \to \mathbb C$ whose complex dilatation is $\mu_1(z)$ for $z \in \mathbb U$ and $\overline{\mu_1(\bar z)}$ for $z \in \mathbb L$, which maps $\mathbb R$ onto itself. By Theorem \ref{basic}, $f=F|_{\mathbb R}$ is strongly quasisymmetric and $\log f'$ belongs to ${\rm BMO}_{\mathbb R}^*(\mathbb R)$. By Theorem \ref{FKP}, we may assume that $F$ is bi-Lipschitz on $\mathbb L$ by replacing the quasiconformal extension of $f$. Next, we take a quasiconformal homeomorphism $H:\mathbb C \to \mathbb C$ that is conformal on $\mathbb U$ and whose complex dilatation on $\mathbb L$ is the push-forward $F_* \mu_2$ of $\mu_2$ by $F$. Namely, the complex dilatation of $H \circ F|_{\mathbb L}$ is $\mu_2$. Then, $H \circ F$ coincides with $G$ up to an affine transformation of $\mathbb C$, and hence, we may assume that $H \circ F=G$. The complex dilatation $F_* \mu_2=(F^{-1})^* \mu_2$ belongs to $\mathcal M(\mathbb L)$ by Lemma \ref{composition}. Then, ${\mathcal L}_{H|_{\mathbb U}}=\log (H|_{\mathbb U})' \in {\rm BMOA}(\mathbb U)$ by Theorem \ref{Guo11}, and it follows from Theorem \ref{131} that the boundary function $b(\log (H|_{\mathbb U})')$ defined by the non-tangential limit of $\log (H|_{\mathbb U})'$ belongs to ${\rm BMO}(\mathbb R)$. Since $\log (H|_{\mathbb U})'$ has the finite non-tangential limit almost everywhere on $\mathbb R$, so does $(H|_{\mathbb U})'$. This implies that $H|_{\mathbb U}$ has a finite angular derivative almost everywhere on $\mathbb R$ by \cite[Proposition 4.7]{Pom}. However, since $H(\mathbb U)$ is a quasidisk, \cite[Theorem 5.5]{Pom} asserts that the angular derivative at $x \in \mathbb R$ coincides with $$ h'(x)=\lim_{\mathbb R \ni \xi \to x} \frac{h(\xi)-h(x)}{\xi-x} $$ for $h=H|_{\mathbb R}$. This shows that the non-tangential limit of $(H|_{\mathbb U})'$ coincides with the ordinary derivative $h'$ almost everywhere on $\mathbb R$. By taking the logarithm, we have $b(\log (H|_{\mathbb U})')=\log h'$. By $H \circ F=G$, we see that $\gamma=G|_{\mathbb R}$ has the derivative $\gamma'$ almost everywhere on $\mathbb R$, and satisfies $$ \log h' \circ f + \log f'=\log \gamma'. 
$$ We have seen that $\log f' \in {\rm BMO}(\mathbb R)$. Since $f$ is strongly quasisymmetric and $\log h' \in {\rm BMO}(\mathbb R)$, Theorem \ref{pullback} shows that $\log h' \circ f \in {\rm BMO}(\mathbb R)$. Thus, we obtain that $\log \gamma' \in {\rm BMO}(\mathbb R)$. \end{proof} Theorem \ref{Guo11} implies that any quasicircle $\Gamma$ satisfying the Bishop--Jones condition is the image $\gamma(\mathbb R)$ of some BMO embedding $\gamma$. (Conversely, the image $\gamma(\mathbb R)$ of any BMO embedding $\gamma$ is a quasicircle $\Gamma$ satisfying the Bishop--Jones condition. Indeed, in the proof of Proposition \ref{curve}, we see that $\Gamma=\gamma(\mathbb R)=h(\mathbb R)$ for $h=H|_{\mathbb R}$ with $\log (H|_{\mathbb U})' \in {\rm BMOA}(\mathbb U)$. Then by Theorem \ref{Guo11}, $\Gamma$ is a quasicircle satisfying the Bishop--Jones condition.) Since a chord-arc curve is the image of $\mathbb R$ under some bi-Lipschitz homeomorphism of $\mathbb C$ by Proposition \ref{biLip} and since the Bishop--Jones condition is invariant under a bi-Lipschitz homeomorphism of $\mathbb C$, any chord-arc curve satisfies the Bishop--Jones condition. Thus, we have: \begin{proposition}\label{image} Any chord-arc curve $\Gamma$ is the image $\gamma(\mathbb R)$ of some BMO embedding $\gamma$. \end{proposition} The converse is not true. In fact, there are examples of BMO embeddings whose images are not locally rectifiable (see \cite{Bi} and \cite{Se0}). On the contrary, we can show its converse if we assume the $A_{\infty}$ property on the derivative. The following claim in \cite[Theorem 4.2]{JK} has an essential role for this argument. In the case of the unit disk, we may also refer to \cite[Theorem 7.11]{Pom}. Notations are the same as those in the proof of Proposition \ref{curve}, and conditions (a) and (b) below are always satisfied in our setting of BMO embeddings. \begin{lemma}\label{JK4.2} Suppose that $\Gamma$ is a locally rectifiable Jordan curve passing through $\infty$ and $H$ is a conformal homeomorphism of $\mathbb U$ onto the left domain bounded by $\Gamma$ with its extension $h$ to $\mathbb R$. If \begin{enumerate} \item[(a)] $\Gamma$ is a quasicircle, \item[(b)] $\log |H'(z)|$ is represented by the Poisson integral of its boundary function, and \item[(c)] $|h'|$ is an $A_{\infty}$-weight, \end{enumerate} then $\Gamma$ is a chord-arc curve. \end{lemma} We obtain the characterization of a chord-arc curve as follows. \begin{theorem}\label{Ainfty} The image of a BMO embedding $\gamma:\mathbb R \to \mathbb C$ is a chord-arc curve if and only if $|\gamma'|$ is an $A_\infty$-weight on $\mathbb R$. Moreover, $\gamma$ is locally absolutely continuous in this case. \end{theorem} \begin{proof} We still use the same notations as in the proof of Proposition \ref{curve}. We consider the quasidisk $H(\mathbb U)$, and suppose that this is bounded by the chord-arc curve $\Gamma=\gamma(\mathbb R)$. In this case, the angular derivative of $H|_{\mathbb U}$ coincides with the ordinary derivative $h'$ on $\mathbb R$. Since the arc-length of $\Gamma$ is given by the integration of $|h'|$, the theorem of Lavrentiev (see \cite[p.222]{JK}) implies that $|h'|$ is an $A_\infty$-weight. By $H \circ F=G$, we have $|h'| \circ f \cdot f'=|\gamma'|$ on $\mathbb R$. Let $\eta(x)=\int_0^x |h'(t)|dt$, which is a strongly quasisymmetric homeomorphism of $\mathbb R$. Then, so is the composition $\eta \circ f$ whose derivative coincides with $|\gamma'|$. Thus, we see that $|\gamma'|$ is an $A_\infty$-weight. 
We borrow Lemma \ref{JK4.2} to prove the sufficiency. Suppose that $|\gamma'|$ is an $A_\infty$-weight, which is equivalent to that $|h'|$ is an $A_\infty$-weight. By ${\mathcal L}_{H|_{\mathbb U}}=\log (H|_{\mathbb U})' \in {\rm BMOA}(\mathbb U)$, we see that $\log |(H|_{\mathbb U})'|$ is the Poisson integral of its non-tangential limit. Thus, it remains to show the Jordan curve $\Gamma$ is locally rectifiable. We point out that in the case that $\Gamma$ is the image of the unit circle $\mathbb S$ with $\infty \notin \Gamma$, this is simple. Indeed, $\log (H|_{\mathbb D})' \in {\rm BMOA}(\mathbb D)$ implies that $(H|_{\mathbb D})'$ is an outer function. Then, $(H|_{\mathbb D})' \in H^1(\mathbb D)$ if and only if $|h' | \in L^1(\mathbb S)$ (see \cite[p.64]{Ga}). Since $|h'| \in L^{1+\varepsilon}(\mathbb S)$ for some $\varepsilon >0$ by a property of $A_\infty$-weight, we see that $(H|_{\mathbb D})'$ belongs to the Hardy space $H^1(\mathbb D)$, and then $\Gamma$ is rectifiable (see \cite[Theorem 6.8]{Pom}). However, in our case, we need more arguments which are given in the following. We define a strongly quasisymmetric homeomorphism $\eta(x)=\int_0^x |h'(t)|dt$. Let $T:\overline{\mathbb U} \to \overline{\mathbb D}$ be the Cayley transformation defined by $T(z)=(z-i)/(z+i)$. Then, the conjugate $\tilde \eta=T \circ \eta \circ T^{-1}$ is a strongly quasisymmetric homeomorphism of $\mathbb S$ fixing $1$ and $-1$, and $|\tilde \eta'|$ is an $A_\infty$-weight on $\mathbb S$. By computation, we have $$ |\tilde \eta'|=\frac{|T'| \circ \eta \circ T^{-1}}{|T'| \circ T^{-1}} \cdot \eta' \circ T^{-1} = \frac{|T'| \circ (T^{-1} \circ \tilde \eta)}{|T'| \circ T^{-1}} \cdot |h'| \circ T^{-1}. $$ Hence, \begin{equation}\label{Holder} \begin{split} |h'| \circ T^{-1}(\xi) &=|\tilde \eta'(\xi)|\cdot \frac{|T'| \circ T^{-1}(\xi)}{|T'| \circ (T^{-1} \circ \tilde \eta(\xi))}\\ & =|\tilde \eta'(\xi)| \cdot \frac{|(T^{-1})'| \circ \tilde \eta(\xi)}{|(T^{-1})'|(\xi)}\\ &=|\tilde \eta'(\xi)| \cdot \frac{|\xi - 1|^2}{|\tilde \eta(\xi) - \tilde\eta(1)|^2} \lesssim |\tilde \eta'(\xi)||\xi-1|^{2(1-\frac{1}{\alpha})}. \end{split} \end{equation} Here, in the last step, we used the H\"older continuity of the quasisymmetric homeomorphism $\tilde \eta^{-1}$ for some $\alpha \in (0,1)$ (see \cite[Theorem III.2]{Ah}). Set $m = 2(\frac{1}{\alpha} - 1)>0$. By the conformal invariance of $\rm BMOA$, $\log (H|_{\mathbb U})'\circ T^{-1}$ belongs to ${\rm BMOA}(\mathbb D)$. It is well-known that $\log(w-1)$ belongs to ${\rm BMOA}(\mathbb D)$ (see \cite{Dan}). Hence, these can be represented by the Poisson integral. Then, we have \begin{equation*} \begin{split} |H'\circ T^{-1}(w)(w-1)^m| &=|\exp[\,\log H'\circ T^{-1}(w)+m\log(w-1)]|\\ &=\left| \exp \left[\frac{1}{2\pi} \int_{\mathbb S}(\log h' \circ T^{-1}(\xi)+m\log(\xi-1))\,{\rm Re}\,\left(\frac{\xi+w}{\xi-w}\right)|d\xi|\right]\right|\\ &=\exp \left[\frac{1}{2\pi} \int_{\mathbb S}\log (|h'| \circ T^{-1}(\xi)|\xi-1|^m)\,{\rm Re}\,\left(\frac{\xi+w}{\xi-w}\right)|d\xi|\right]\\ &\leq\frac{1}{2\pi}\int_{\mathbb S} |h'| \circ T^{-1}(\xi)|\xi-1|^m \,{\rm Re}\,\left(\frac{\xi+w}{\xi-w}\right)|d\xi|, \end{split} \end{equation*} where the last inequality is due to the Jensen inequality. Then, by the estimate in \eqref{Holder} and the fact that the $A_\infty$-weight $|\tilde \eta'|$ is in $L^{1}(\mathbb S)$, the last Poisson integral above is convergent. From this, we see that $H' \circ T^{-1}(w)(w-1)^m$ belongs to $H^1(\mathbb D)$. 
Let $I$ be a closed interval in $\mathbb S\setminus \{1\}$ and let $0<\theta_0 < \theta_1 < \cdots <\theta_n<2\pi$ be any finite sequence such that $e^{i\theta_{\nu}} \in I$ for $\nu=0,1,\ldots,n$ with $e^{i\theta_0}$ and $e^{i\theta_n}$ being the endpoints of $I$. For $M_I=\max_{\xi \in I}2|\xi-1 |^{-2-m}$, we have \begin{equation*} \begin{split} \sum_{\nu = 1}^{n} |h\circ T^{-1}(e^{i\theta_{\nu}}) - h\circ T^{-1}(e^{i\theta_{\nu - 1}})| & = \lim_{r \to 1^-}\sum_{\nu = 1}^{n} |H \circ T^{-1}(re^{i\theta_{\nu}}) - H\circ T^{-1}(re^{i\theta_{\nu - 1}})|\\ & \leq \lim_{r \to 1^-} \int_{I}|(H\circ T^{-1})'(r\xi)| |d\xi|\\ &\leq M_I \lim_{r \to 1^-} \int_{I} |H'\circ T^{-1}(r\xi)(r\xi-1)^m| |d\xi|\\ & = M_I \int_{I} |h'\circ T^{-1}(\xi)(1 - \xi)^m| |d\xi|. \end{split} \end{equation*} This integral is bounded by the norm of $H' \circ T^{-1}(w)(w-1)^m \in H^1(\mathbb D)$. Then by the definition of arc length, we see that $h \circ T^{-1}(\mathbb S \setminus \{1\})=h(\mathbb R)=\Gamma$ is locally rectifiable. More strongly, it is easy to see that this estimate implies that $h\circ T^{-1}$ is locally absolutely continuous on $\mathbb S \setminus \{1\}$. See \cite[Theorem 10.11]{Pom75}. Then, $h$ is locally absolutely continuous on $\mathbb R$, and since $\gamma=h \circ f$ and $f$ is locally absolutely continuous, so is $\gamma$. \end{proof} \begin{corollary} A Jordan curve $\Gamma$ passing through $\infty$ is a chord-arc curve if and only if $\Gamma$ is the image of some BMO embedding $\gamma:\mathbb R \to \mathbb C$ such that $|\gamma'|$ is an $A_{\infty}$-weight. \end{corollary} \begin{remark} It was shown in \cite[p.877]{Mac} that being a chord-arc curve is a M\"obius invariant. In particular, $\Gamma$ is a chord-arc curve if and only if $T(\Gamma)$ is a bounded chord-arc curve. Then, by Theorem \ref{Ainfty}, we see that $|\gamma'|$ is an $A_\infty$-weight on $\mathbb R$ if and only if $|(T \circ \gamma \circ T^{-1})'|$ is an $A_\infty$-weight on $\mathbb S$. \end{remark} Furthermore, if we assign a proper parametrization to a chord-arc curve, then this mapping itself is realized as a BMO embedding. This result, which is stated precisely below, was proved in \cite[Theorem 0.2]{Se} where such a mapping was called a {\it strongly quasisymmetric embedding}. This can be derived also from Theorem \ref{Ainfty} if we assume Proposition \ref{image}, which is a consequence from several crucial results as we have seen. \begin{theorem}\label{0.2} Suppose $\gamma$ maps $\mathbb R$ homeomorphically onto a chord-arc curve $\Gamma$. If $\gamma$ is locally absolutely continuous and $|\gamma'|$ is an $A_{\infty}$-weight on $\mathbb R$, then $\gamma$ is a BMO embedding. \end{theorem} \begin{proof} By Proposition \ref{image}, there is a BMO embedding $h:\mathbb R \to \Gamma$, and by Theorem \ref{Ainfty}, $h$ is absolutely continuous and $|h'|$ is an $A_\infty$-weight. We set $f=h^{-1} \circ \gamma$, which is an increasing homeomorphism of $\mathbb R$ onto itself. From $h \circ f=\gamma$, we see that $f$ maps a set of null measure to a set of null measure because $\gamma$ is locally absolutely continuous and $|h'(x)| > 0$ almost everywhere on $\mathbb R$. Hence, $f$ is locally absolutely continuous. Taking the derivative, we have $|h'| \circ f \cdot f'=|\gamma'|$. Then, for strongly quasisymmetric homeomorphisms $$ \tilde h(x)=\int_0^x |h'(t)|dt \quad{\rm and} \quad \tilde \gamma(x)=\int_0^x |\gamma'(t)|dt $$ of $\mathbb R$, we have $\tilde h \circ f=\tilde \gamma$. 
This implies that $f$ is also a strongly quasisymmetric homeomorphism of $\mathbb R$. By Theorem \ref{FKP}, we can choose a quasiconformal extension $F:\mathbb C \to \mathbb C$ of $f$ such that its complex dilatation $\mu$ satisfies $\mu|_{\mathbb U} \in \mathcal M(\mathbb U)$ and $\mu|_{\mathbb L} \in \mathcal M(\mathbb L)$, and such that $F|_{\mathbb U}$ and $F|_{\mathbb L}$ are bi-Lipschitz homeomorphisms in the hyperbolic metrics on $\mathbb U$ and $\mathbb L$, respectively. We also choose a BMO quasiconformal homeomorphism $H$ of $\mathbb C$ with $H|_{\mathbb R}=h$. Then by Lemma \ref{composition}, $H \circ F$ is a BMO quasiconformal homeomorphism whose restriction to $\mathbb R$ is $h \circ f=\gamma$. Hence, $\gamma$ is a BMO embedding. \end{proof} Recall that ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ denotes the convex open subset of the real subspace ${\rm BMO}_{\mathbb R}(\mathbb R)$ consisting of real-valued BMO functions $u$ with $e^u$ being an $A_\infty$-weight. The above argument implies that $\log |\gamma'|={\rm Re}\, (\log \gamma')$ belongs to ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ for any BMO embedding $\gamma:\mathbb R \to \mathbb C$ whose image is a chord-arc curve. Conversely, if a BMO embedding $\gamma$ satisfies, in particular, $|\gamma'|=1$, which means that $\gamma$ is parametrized by arc-length, then $\gamma(\mathbb R)$ is a chord-arc curve. \section{The Bers coordinates using Teichm\"uller spaces} We provide canonical coordinates for the space of BMO embeddings and regard this space as a complex Banach manifold. This is done by a standard method due to Bers in Teich\-m\"uller theory for investigating quasiconformal deformation spaces such as quasifuchsian spaces. We point out that this coordinate system had never been used until we recently introduced it, though it is very natural and more powerful for the study of curves than one might expect. To make those arguments applicable, we deal with the family of all BMO embeddings, which carry not just the curves but also their parametrizations. We impose the normalization $\gamma(0)=0$, $\gamma(1)=1$, and $\gamma(\infty)=\infty$ on every BMO embedding $\gamma$. Let ${\rm BE}$ be the set of all normalized BMO embeddings. For $\mu_1 \in \mathcal M(\mathbb U)$ and $\mu_2 \in \mathcal M(\mathbb L)$, we denote by $G(\mu_1,\mu_2)$ the normalized BMO quasiconformal homeomorphism $G$ of $\mathbb C$ ($G(0)=0$, $G(1)=1$, and $G(\infty)=\infty$) whose complex dilatation $\mu$ satisfies $\mu|_{\mathbb U}=\mu_1$ and $\mu|_{\mathbb L}=\mu_2$. Then, we define a map $$ \widetilde \iota:\mathcal M(\mathbb U) \times \mathcal M(\mathbb L) \to {\rm BE} $$ by $\widetilde \iota(\mu_1,\mu_2)=G(\mu_1,\mu_2)|_{\mathbb R}$. The BMO Teichm\"uller space $T_b(\mathbb U)$ on the upper half-plane $\mathbb U$ is defined to be the set of all Teichm\"uller equivalence classes $[\mu]$ for $\mu \in \mathcal M(\mathbb U)$. The Teichm\"uller projection $\pi:\mathcal M(\mathbb U) \to T_b(\mathbb U)$ is defined by the quotient map $\mu \mapsto [\mu]$. The BMO Teichm\"uller space $T_b(\mathbb L)$ on the lower half-plane $\mathbb L$ and related concepts are defined similarly. Then, by the argument of simultaneous uniformization due to Bers, we see the following fact. \begin{proposition}\label{id} The space ${\rm BE}$ of normalized BMO embeddings is identified with $T_b(\mathbb U) \times T_b(\mathbb L)$.
More precisely, $\widetilde \iota$ splits into a well-defined bijection $$ \iota:T_b(\mathbb U) \times T_b(\mathbb L) \to {\rm BE} $$ by the product of the Teichm\"uller projections $$ \widetilde \pi:\mathcal M(\mathbb U) \times \mathcal M(\mathbb L) \to T_b(\mathbb U) \times T_b(\mathbb L), $$ such that $\widetilde \iota=\iota \circ \widetilde \pi$. \end{proposition} We call the representation of $\rm BE$ by $T_b(\mathbb U) \times T_b(\mathbb L)$ the {\it Bers coordinates} of ${\rm BE}$. Moreover, we provide complex Banach manifold structures for $T_b(\mathbb U)$ and $T_b(\mathbb L)$ by using the pre-Schwarzian derivative models as in Theorem \ref{model}. Namely, $T_b(\mathbb U)$ is identified with the domain $\mathcal T_b(\mathbb L)$ of ${\rm BMOA}(\mathbb L)$, and $T_b(\mathbb L)$ is identified with the domain $\mathcal T_b(\mathbb U)$ of ${\rm BMOA}(\mathbb U)$: \begin{align*} T_b(\mathbb U) &\cong \mathcal T_b(\mathbb L)=\{ {\mathcal L}_{G(\mu,0)} \in {\rm BMOA}(\mathbb L) \mid \mu \in \mathcal M(\mathbb U)\};\\ T_b(\mathbb L) &\cong \mathcal T_b(\mathbb U)=\{ {\mathcal L}_{G(0,\mu)} \in {\rm BMOA}(\mathbb U) \mid \mu \in \mathcal M(\mathbb L)\}. \end{align*} Then, by the identification ${\rm BE} \cong T_b(\mathbb U) \times T_b(\mathbb L)$ in Proposition \ref{id}, we may also regard ${\rm BE}$ as a domain of ${\rm BMOA}(\mathbb L) \times {\rm BMOA}(\mathbb U)$. Here, we review the canonical biholomorphic automorphisms of the BMO Teichm\"uller space $T_b(\mathbb U)$. Let $F^\nu$ be the normalized quasiconformal homeomorphism of $\mathbb U$ onto itself whose complex dilatation is $\nu \in \mathcal M(\mathbb U)$. By Theorem \ref{FKP}, we can make $F^\nu$ bi-Lipschitz in the hyperbolic metric by choosing some $\nu$ in any Teichm\"uller class $[\nu] \in T_b(\mathbb U)$. We define the right translation $r_\nu$ of $\mathcal M(\mathbb U)$ by $r_\nu(\mu)=\mu \ast \nu$ for every $\mu \in \mathcal M(\mathbb U)$, where $\mu \ast \nu$ denotes the complex dilatation of $F^{\mu} \circ F^{\nu}$. By Lemma \ref{composition}, $r_\nu(\mu) \in \mathcal M(\mathbb U)$, and since the inverse of $r_\nu$ is $r_{\nu^{-1}}$ where $\nu^{-1}$ denotes the complex dilatation of $(F^\nu)^{-1}$, we see that $r_\nu:\mathcal M(\mathbb U) \to \mathcal M(\mathbb U)$ is a bijection. Then, the projection of $r_{\nu}$ to $T_b(\mathbb U)$ by $\pi:\mathcal M(\mathbb U) \to T_b(\mathbb U)$ is well-defined as $\pi \circ r_{\nu} \circ \pi^{-1}$, which yields a bijection $R_{[\nu]}:T_b(\mathbb U) \to T_b(\mathbb U)$ for any $[\nu] \in T_b(\mathbb U)$. This is the right translation of $T_b(\mathbb U)$. \begin{lemma}\label{auto} If $F^\nu$ is a bi-Lipschitz diffeomorphism with $\nu \in \mathcal M(\mathbb U)$, then $r_\nu:\mathcal M(\mathbb U) \to \mathcal M(\mathbb U)$ is a biholomorphic automorphism of $\mathcal M(\mathbb U)$. For every $[\nu] \in T_b(\mathbb U)$, $R_{[\nu]}:T_b(\mathbb U) \to T_b(\mathbb U)$ is a biholomorphic automorphism of $T_b(\mathbb U)$. \end{lemma} \begin{proof} The argument for showing these statements is outlined in \cite[Remark 5.1]{SWei}. Here, we add supplemental remarks to this. To prove that $r_\nu$ is holomorphic, it is enough to show that $r_\nu$ is continuous or locally bounded, and then verify certain weak holomorphy of $r_\nu$. To prove that $R_{[\nu]}$ is holomorphic, we consider the Teichm\"uller projection $\pi:\mathcal M(\mathbb U) \to T_b(\mathbb U)$, which is holomorphic by \cite[Theorem 5.1]{SWei}. A local inverse of $\pi$ is also given there. 
Again, we can show that this is continuous or locally bounded first, and then verify its weak holomorphy. These arguments have been done for a different Teichm\"uller space in \cite{Mat}. See \cite{WM-6}. \end{proof} By Proposition \ref{curve}, we can consider a map $L:{\rm BE} \to {\rm BMO}(\mathbb R)$ defined by $L(\gamma)=\log \gamma'$. Then, with respect to the complex structure of ${\rm BE}$ given as above, we see the following: \begin{theorem}\label{holo} The map $L:{\rm BE} \to {\rm BMO}(\mathbb R)$ is holomorphic. \end{theorem} \begin{proof} We will prove that $L$ is holomorphic at any point $\gamma=G(\mu_1,\mu_2)|_{\mathbb R}$ in ${\rm BE}$. Since ${\rm BE}$ can be regarded as a domain of the product ${\rm BMOA}(\mathbb L) \times {\rm BMOA}(\mathbb U)$ of the Banach spaces, the Hartogs theorem for Banach spaces (see \cite[Theorem 14.27]{Ch} and \cite[Theorem 36.8]{Mu}) implies that we have only to prove that $L$ is separately holomorphic. Thus, by fixing $[\mu_1] \in T_b(\mathbb U)$, we will show that $\log (G(\mu_1,\mu)|_{\mathbb R})' \in {\rm BMO}(\mathbb R)$ depends holomorphically on $[\mu] \in T_b(\mathbb L)$. The other case is similarly treated. By the proof of Proposition \ref{curve}, we have $$ \log (G(\mu_1,\mu)|_{\mathbb R})' =\log h' \circ f + \log f', $$ where $f: \mathbb R \to \mathbb R$ is the boundary extension of the fixed bi-Lipschitz diffeomorphism $F:\mathbb L \to \mathbb L$ for the Teichm\"uller class $[\mu_1]$, and $h:\mathbb R \to \mathbb C$ is the restriction of the quasiconformal homeomorphism $H$ of $\mathbb C$ that is conformal on $\mathbb U$ and has the complex dilatation $F_* \mu$ on $\mathbb L$. Since $H|_{\mathbb U}$ depends on $R_{[\mu_1]}^{-1}([\mu]) \in T_b(\mathbb L)$ and $R_{[\mu_1]}^{-1}([\mu])$ is holomorphic on $[\mu]$ by Lemma \ref{auto}, we see that ${\mathcal L}_{{H([\mu])}}=\log (H|_{\mathbb U})' \in {\rm BMOA}(\mathbb U)$ depends on $[\mu] \in T_b(\mathbb L)$ holomorphically. By Theorem \ref{131}, we see that the boundary extension $b:{\rm BMOA}(\mathbb U) \to {\rm BMO}(\mathbb R)$ is a bounded linear operator. Moreover, by Theorem \ref{pullback}, the composition operator $P_f:{\rm BMO}(\mathbb R) \to {\rm BMO}(\mathbb R)$ induced by $f \in {\rm SQS}$ is also a bounded linear operator. Therefore, $$ \log h' \circ f=P_f \circ b ({\mathcal L}_{{H([\mu])}}) \in {\rm BMO}(\mathbb R) $$ depends on $[\mu] \in T_b(\mathbb L)$ holomorphically, and so does $\log (G(\mu_1,\mu)|_{\mathbb R})'$. \end{proof} We introduce canonical biholomorphic automorphisms of ${\rm BE}$. For $\nu \in {\mathcal M}(\mathbb U)$, the same symbol $\nu$ still denotes the symmetric complex dilatation $\overline{\nu(\bar z)}$ for $z \in \mathbb L$ in ${\mathcal M}(\mathbb L)$. This also gives the identification of $T_b(\mathbb U)$ and $T_b(\mathbb L)$, which is often denoted just by $T_b$. For any $[\nu] \in T_b$, we define the right translation of ${\rm BE}$ by $$ \widetilde R_{[\nu]}:([\mu_1],[\mu_2]) \mapsto ([\mu_1] \ast [\nu], [\mu_2] \ast [\nu]). $$ Since $R_{[\nu]}$ is a biholomorphic automorphism of $T_b$ by Lemma \ref{auto}, $\widetilde R_{[\nu]}$ yields a biholomorphic automorphism of ${\rm BE}$. By the proof of Theorem \ref{holo}, we see that the holomorphic map $L$ can be represented by the composition of some biholomorphic automorphism $\widetilde R_{[\,\cdot\,]}$ and the boundary extension of the logarithm of the derivative of some Riemann mapping. We will find a more explicit relation between $L$ and $\widetilde R_{[\,\cdot\,]}$ in the next section. 
Based on the proof of Theorem \ref{holo}, we also define the {\it conformal welding coordinates} of ${\rm BE}$ as follows. Under the Bers coordinates ${\rm BE} \cong T_b(\mathbb U) \times T_b(\mathbb L)$, the subspace ${\rm SQS} \subset {\rm BE}$ of all normalized strongly quasisymmetric homeomorphisms is identified with the diagonal locus $$ \{([\mu],[\mu]) \in T_b(\mathbb U) \times T_b(\mathbb L) \mid [\mu] \in T_b\}, $$ which is a real-analytic submanifold of ${\rm BE}$. Let ${\rm RM}$ be the set of all BMO embeddings $\gamma:\mathbb R \to \mathbb C$ of {\it Riemann mapping parametrization}, that is, those extending conformally to $\mathbb U$. The subspace ${\rm RM} \subset {\rm BE}$ is identified with the second coordinate axis $$ \{([0],[\mu]) \in T_b(\mathbb U) \times T_b(\mathbb L) \mid [\mu] \in T_b\}, $$ which is a complex-analytic submanifold of ${\rm BE}$. We define the projections to these submanifolds $$ \Pi: {\rm BE} \to {\rm SQS}, \qquad \Phi: {\rm BE} \to {\rm RM} $$ by $\Pi([\mu_1],[\mu_2])=([\mu_1],[\mu_1])$ and $\Phi([\mu_1],[\mu_2])=([0],[\mu_2] \ast [\mu_1]^{-1})$ represented in the Bers coordinates. Then, every $\gamma \in {\rm BE}$ is decomposed uniquely into $\gamma=\Phi(\gamma) \circ \Pi(\gamma)$. Clearly, $\Pi$ is real-analytic. We will see that $\Phi$ is not continuous later in Proposition \ref{continuity}. The biholomorphic automorphism $\widetilde R_{[\nu]}$ of ${\rm BE}$ for $[\nu] \in T_b$ satisfies that $\Phi \circ \widetilde R_{[\nu]}=\Phi$. The projections $\Pi$ and $\Phi$ define another product structure ${\rm SQS} \times {\rm RM}$ on ${\rm BE}$. Namely, we have a bijection $$ (\Pi,\Phi):{\rm BE} \to {\rm SQS} \times {\rm RM}. $$ However, $(\Pi,\Phi)$ is not a homeomorphism. Since ${\rm SQS}$ and ${\rm RM}$ are both identified with $T_b$, $(\Pi,\Phi)$ is the coordinate change of ${\rm BE}$ from the Bers coordinates to the one we may call the conformal welding coordinates: $$ T_b(\mathbb U) \times T_b(\mathbb L) \to T_b \times T_b: \quad([\mu_1], [\mu_2]) \mapsto ([\mu_1],[\mu_2] \ast [\mu_1]^{-1}). $$ \section{Parameter change and arc-length parametrization} To investigate the image $L({\rm BE})$ in ${\rm BMO}(\mathbb R)$, we prepare canonical biholomorphic auto\-morphisms of ${\rm BMO}(\mathbb R)$ that keep $L({\rm BE})$ invariant. Under $L$ restricted to the subset of BMO embeddings whose images are chord-arc curves, these biholomorphic automorphisms correspond to change of parameters for chord-arc curves. We have assumed that ${\rm BMO}(\mathbb R)$ is the complex Banach space of the equivalence classes of BMO functions modulo complex constant functions, and thus we can regard it as the set of representatives $w$ satisfying the normalization condition $\int_0^1 e^{w(t)}dt=1$. Let ${\rm BMO}_{\mathbb R}(\mathbb R)$ and $i{\rm BMO}_{\mathbb R}(\mathbb R)$ denote the real subspaces of ${\rm BMO}(\mathbb R)$ consisting of all real-valued and purely imaginary-valued functions respectively, both of which are taken modulo complex constant functions. Let $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ and let $\gamma_{u}:\mathbb R \to \mathbb R$ be the strongly quasisymmetric homeomorphism in ${\rm SQS}$ defined by $\gamma_{u}(x)=\int_0^x e^{u(t)}dt$. Then, the composition operator $P_{\gamma_u}:{\rm BMO}(\mathbb R) \to {\rm BMO}(\mathbb R)$ is given by $w \mapsto w \circ \gamma_{u}$ for $w \in {\rm BMO}(\mathbb R)$, which is a linear isomorphism of the Banach space ${\rm BMO}(\mathbb R)$ onto itself by Theorem \ref{pullback}. 
Moreover, we define $Q_{u}(w)=P_{\gamma_u}(w)+u$ for $w \in {\rm BMO}(\mathbb R)$, which is an affine isomorphism of ${\rm BMO}(\mathbb R)$ onto itself. Clearly, $Q_{u}$ preserves the real subspace ${\rm BMO}_{\mathbb R}(\mathbb R)$. We first show the correspondence of two translations, $\widetilde R_{[\nu]}$ for $\rm BE$ and $Q_u$ for ${\rm BMO}(\mathbb R)$. \begin{proposition}\label{Lequation} It holds that $$L \circ \widetilde R_{[\nu]}=Q_{L([\nu],[\nu])} \circ L$$ on ${\rm BE}$ for every $[\nu] \in T_b$. For any $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, $Q_u$ is a biholomorphic automorphism of ${\rm BMO}(\mathbb R)$ that keeps $L({\rm BE})$ invariant. \end{proposition} \begin{proof} For any $\gamma=([\mu_1],[\mu_2]) \in {\rm BE}$ in the Bers coordinates, we have \begin{equation*} \begin{split} L \circ \widetilde R_{[\nu]}([\mu_1],[\mu_2])&=L([\mu_1] \ast [\nu], [\mu_2] \ast [\nu])\\ &=\log \gamma'_{L([\mu_1] \ast [\nu], [\mu_2] \ast [\nu])}=\log (\gamma_{L([\mu_1],[\mu_2])}\circ \gamma_{L([\nu],[\nu])})'\\ &=\log \gamma'_{L([\mu_1],[\mu_2])}\circ \gamma_{L([\nu],[\nu])}+ \log \gamma'_{L([\nu],[\nu])}\\ &=L([\mu_1],[\mu_2]) \circ \gamma_{L([\nu],[\nu])}+L([\nu],[\nu])=Q_{L([\nu],[\nu])} \circ L([\mu_1],[\mu_2]) \end{split} \end{equation*} as required. Since $Q_u$ is an affine isomorphism of ${\rm BMO}(\mathbb R)$, this is biholomorphic. For any $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, we choose the corresponding $[\nu] \in T_b$ under (\ref{trinity}). Then, $L([\nu],[\nu])=u$. By the above formula, we have $$ Q_u(L({\rm BE}))=Q_{L([\nu],[\nu])} \circ L({\rm BE})=L \circ \widetilde R_{[\nu]}({\rm BE}) =L({\rm BE}), $$ and thus $L({\rm BE})$ is invariant under $Q_u$. \end{proof} Let $iv \in i{\rm BMO}_{\mathbb R}(\mathbb R) ^{\circ}$ for the subset given as $$ i{\rm BMO}_{\mathbb R}(\mathbb R) ^{\circ}=i{\rm BMO}_{\mathbb R}(\mathbb R) \cap L({\rm BE}). $$ Then, $\gamma_{iv}(x)=\int_0^x e^{iv(t)}dt$ is a BMO embedding satisfying $|\gamma_{iv}'|=1$. This condition means that $\gamma$ is parametrized by its arc-length. By Theorem \ref{Ainfty}, the image of $\gamma_{iv}$ is a chord-arc curve. We consider the parameter change of BMO embeddings whose images are chord-arc curves. \begin{proposition}\label{arclength} Let $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ and $iv \in i{\rm BMO}_{\mathbb R}(\mathbb R) ^{\circ}$. Then, $\gamma_{Q_u(iv)}(x)$ is obtained from the BMO embedding $\gamma_{iv}(x')$ of arc-length parametrization by the change of parameter $x'= \gamma_u(x)$, which is also a BMO embedding whose image is a chord-arc curve. Hence, an injective map $$ J:{\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R) ^{\circ} \to L({\rm BE}) \subset {\rm BMO}(\mathbb R) $$ is defined by $J(u,iv)=Q_u(iv)=u+iP_{\gamma_u}(v)$. \end{proposition} \begin{proof} Since $Q_u(iv)=u+iP_{\gamma_u}(v)=u+iv \circ \gamma_u$, we have \begin{equation*} \begin{split} \gamma_{Q_u(iv)}(x)=\int_0^x e^{u(t)} e^{iv \circ \gamma_u(t)}dt =\int_0^x \gamma'_u(t) e^{iv \circ \gamma_u(t)}dt =\int_0^{\gamma_u(x)} e^{iv(s)}ds=\gamma_{iv}(\gamma_u(x)) \end{split} \end{equation*} by $s=\gamma_u(t)$. By Proposition \ref{Lequation}, we see that the parameter change of a BMO embedding by $\gamma_u \in {\rm SQS}$ is also a BMO embedding. \end{proof} Let ${\rm CA}$ be a subset of ${\rm BE}$ consisting of all normalized BMO embeddings $\gamma:\mathbb R \to \mathbb C$ whose images are chord-arc curves. 
By Theorem \ref{Ainfty}, we see that \begin{equation}\label{LCA} L({\rm CA})=L({\rm BE}) \cap \{w \in {\rm BMO}(\mathbb R) \mid {\rm Re}\, w \in {\rm BMO}_{\mathbb R}^*(\mathbb R)\}. \end{equation} In particular, both ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ and $i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ are contained in $L({\rm CA})$. \begin{proposition}\label{BMO*} The affine isomorphism $Q_u$ for every $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ keeps the convex open subset ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ invariant, and hence it keeps $L(\rm CA)$ invariant. More explicitly, $Q_u$ maps the affine subspace $u_1+i{\rm BMO}_{\mathbb R}(\mathbb R)$ for any $u_1 \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ onto $u_2+i{\rm BMO}_{\mathbb R}(\mathbb R)$ for $u_2=Q_u(u_1)\in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. \end{proposition} \begin{proof} For $u, u' \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, we take the corresponding strongly quasisymmetric homeo\-morphisms $\gamma_u, \gamma_{u'} \in {\rm SQS}$. Then, the indefinite integral of $$ \exp(Q_u(u'))=\exp(u'\circ \gamma_u+u) $$ is $\gamma_{u'} \circ \gamma_{u}$. Since ${\rm SQS}$ is a group under the composition, we see that $\gamma_{u'} \circ \gamma_{u} \in {\rm SQS}$, and thus $Q_u(u') \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. \end{proof} Proposition \ref{arclength} implies that the image of ${\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ by the injection $J$ is in fact contained in $L({\rm CA})$. Further, we have: \begin{theorem}\label{bijective} Every BMO embedding whose image is a chord-arc curve is represented by $\gamma_{Q_u(iv)}$ for some $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ and $iv \in i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$. Hence, $$ J:{\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to L({\rm CA}) $$ is a bijection. \end{theorem} \begin{proof} Let $\gamma_{u+iv'}$ be any BMO embedding for $u+iv' \in L({\rm CA})$. Theorem \ref{Ainfty} shows that $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. Then, by choosing $v \in {\rm BMO}_{\mathbb R}(\mathbb R)$ satisfying $P_{\gamma_u}(v)=v'$, we see that $\gamma_{u+iv'}(x)$ is obtained from $\gamma_{iv}(x')$ by the change of the parameter $x'= \gamma_u(x)$. Hence, $\gamma_{u+iv'}=\gamma_{Q_u(iv)}$. This implies that $J$ is surjective. Combined with Proposition \ref{arclength}, this proves that $J$ is bijective. \end{proof} We conclude that $i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ is a parameter space of all chord-arc curves with arc-length parametrizations and ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ is a parameter space for the change of their parameters. Although $J$ gives a bijection onto $L({\rm CA})$, we will see later that $J$ is not a homeomorphism (Theorem \ref{mapL}). \begin{remark} In \cite[Section 5]{FHS}, the set of all normalized strongly quasisymmetric embeddings of $\mathbb R$ called the {\it extended BMO Teichm\"ul\-ler space} was denoted by $\hat T_b$. We see that $\hat T_b$ coincides with $\rm CA$ if we use Theorem \ref{0.2} in our paper. It was also proved that $L|_{\rm CA}$ is a bijection onto an open subset of ${\rm BMO}(\mathbb R)$ (Theorem 5.2). Moreover, the product structure of $\hat T_b$ was given (Theorem 5.3), which is ${\rm CA}={\rm SQS} \times {\rm ICA}$ in our notation explained in Section 7. This essentially defines the bijection $J$ through $L$. Under this translation, Problem 5.4 in \cite{FHS} can be understood as asking whether $J$ is a homeomorphism or not. 
\end{remark} \section{Biholomorphic correspondence on the space of chord-arc curves} We will show that $L$ is biholomorphic on ${\rm CA}$. We note that the space ${\rm SQS} \subset {\rm BE}$ of all normalized strongly quasisymmetric homeomorphisms $\gamma:\mathbb R \to \mathbb R$ is contained in ${\rm CA}$ because the image of $\gamma$ is $\mathbb R$, which is of course a chord-arc curve. Hence, we in particular obtain that there is a biholomorphic correspondence between the neighborhood of ${\rm SQS}$ in $\rm BE$ and that of ${\rm BMO}^*_{\mathbb R}(\mathbb R)$ in ${\rm BMO}(\mathbb R)$ under $L$. \begin{theorem}\label{biholo} The holomorphic map $L:{\rm BE} \to {\rm BMO}(\mathbb R)$ restricted to ${\rm CA} \subset {\rm BE}$ is a biholomorphic homeomorphism onto its image. This in particular implies that ${\rm CA}$ is open in ${\rm BE}$ and $L({\rm CA})$ is open in ${\rm BMO}(\mathbb R)$. \end{theorem} \begin{proof} It suffices to show that $L$ has a local holomorphic inverse at any point $w \in L({\rm CA}) \subset {\rm BMO}(\mathbb R)$. We see that if $w=iv \in i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$, then there is a neighborhood $V_{iv} \subset L({\rm CA})$ of $iv$ and a holomorphic map $\Psi_{iv}^*:V_{iv} \to {\rm BE}$ such that $L \circ \Psi_{iv}^*={\rm id}|_{V_{iv}}$. Indeed, it was proved in \cite[Proposition 4.13]{Se} that there is a BMO quasiconformal homeomorphism $G_w:\mathbb C \to \mathbb C$ with complex dilatation $\mu_w$ for every $w$ in some neighborhood $V_{iv}$ so that the map $\Lambda_{iv}:V_{iv} \to \mathcal M(\mathbb U) \times \mathcal M(\mathbb L)$ defined by this correspondence is continuous at $w=iv$. Since ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ is open by Proposition \ref{convex}, we may choose a smaller neighborhood $V_{iv}$ so that ${\rm Re}\, w \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ if $w \in V_{iv}$. In this case, the image of the BMO embedding $G_w|_{\mathbb R}$ is a chord-arc curve by Theorem \ref{Ainfty}. Then, the local inverse $\Psi_{iv}^*$ of $L$ is given by $\widetilde \pi \circ \Lambda_{iv}$, and $V_{iv} \subset L({\rm CA})$ by (\ref{LCA}). The holomorphy of $\Lambda_{iv}$ is proved as follows. We may assume that $\Lambda_{iv}$ is bounded on $V_{iv}$ by the continuity of $\Lambda_{iv}$ at $iv$. Then, it suffices to show that $\Lambda_{iv}$ is a G\^ateaux holomorphic function (see \cite[Theorem 14.9]{Ch}, \cite[Theorem 36.5]{Mu}). However, because $\mu_w$ can be represented explicitly in terms of $w \in V_{iv}$, we can verify this weak holomorphy as in \cite[Theorem 6.1]{SW} though the norm used there is different from ours. A similar argument using the norm of $\mathcal M(\mathbb U)$ is contained in \cite{WM-6}. If $w=u+iv'$ is an arbitrary point in $L({\rm CA})$, then by Theorem \ref{bijective}, we can find $iv \in i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ satisfying $Q_u(iv)=u+iv'$. Since $Q_u$ is a biholomorphic automorphism of ${\rm BMO}(\mathbb R)$ keeping $L({\rm CA})$ invariant by Proposition \ref{BMO*}, we see that $\widetilde R_{[\nu]}\circ \Psi_{iv}^* \circ Q_u^{-1}$ is holomorphic on $Q_u(V_{iv}) \subset L({\rm CA})$ for $[\nu] \in T_b$ corresponding to $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. Then, Proposition \ref{Lequation} shows that $$ L \circ \widetilde R_{[\nu]} \circ \Psi_{iv}^* \circ Q_u^{-1}=Q_{L([\nu],[\nu])} \circ L \circ \Psi_{iv}^* \circ Q_u^{-1} =Q_{L([\nu],[\nu])} \circ Q_u^{-1}=\rm id $$ on $Q_u(V_{iv})$. Hence, $\widetilde R_{[\nu]}\circ \Psi_{iv}^* \circ Q_u^{-1}$ is a local holomorphic inverse of $L$ on $Q_u(V_{iv})$. 
\end{proof} Let ${\rm ICA} \subset {\rm CA}$ denote the subset of all normalized chord-arc curves with arc-length parametrization. Namely, $$ {\rm ICA}=L^{-1}(i{\rm BMO}_{\mathbb R}(\mathbb R)^{\circ}). $$ As $i{\rm BMO}_{\mathbb R}(\mathbb R)^{\circ}$ is a real-analytic submanifold of the domain $L({\rm CA})$ in the complex Banach space ${\rm BMO}(\mathbb R)$, ${\rm ICA}$ is a real-analytic submanifold of the complex manifold ${\rm CA}$. Similarly, since ${\rm SQS}=L^{-1}({\rm BMO}_{\mathbb R}^*(\mathbb R))$, this is also a real-analytic submanifold of ${\rm CA}$. \begin{corollary}\label{real-analytic} $(1)$ ${\rm SQS}$ is a real-analytic submanifold of ${\rm CA}$ that is the diagonal of $T_b(\mathbb U) \times T_b(\mathbb L)$ in the Bers coordinates, and $L|_{{\rm SQS}}$ is a real-analytic homeomorphism onto ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ whose inverse is also real-analytic. $(2)$ ${\rm ICA}$ is a real-analytic submanifold of ${\rm CA}$, and $L|_{\rm ICA}$ is a real-analytic homeomorphism onto $i{\rm BMO}_{\mathbb R}(\mathbb R)^{\circ}$ whose inverse is also real-analytic. \end{corollary} Part (1) of the above corollary extends Proposition \ref{topequiv} concerning $\Psi=L|_{\rm SQS}$. This shows that ${\rm SQS}$ is equipped with both the complex-analytic structure of $T_b$ and the real-analytic structure of ${\rm BMO}_{\mathbb R}^*(\mathbb R)$, which are real-analytically equivalent. Introducing the real-analytic structure on $T_b$ via the real-analytic submanifold ${\rm SQS}$ on the diagonal of ${\rm BE} \cong T_b(\mathbb U) \times T_b(\mathbb L)$ is standard from the viewpoint of the theory of quasifuchsian deformation spaces. \begin{remark} We consider the problem of the connectivity of $\rm CA$ (see \cite[p.614]{AZ} and \cite[Problem 5.5]{FHS}). Since $L$ is a homeomorphism on $\rm CA$, we may transfer this problem to $L({\rm CA}) \subset {\rm BMO}(\mathbb R)$. By Theorem \ref{bijective}, $J:{\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to L({\rm CA})$ is bijective. Even though $J$ is not continuous, for each fixed $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, $J(u, \cdot)=Q_u=u+P_{\gamma_u}$ is continuous on $i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$. Thus, the problem on the connectivity for $L({\rm CA})$ is reduced to that for $i{\rm BMO}_{\mathbb R}(\mathbb R) ^{\circ} =i{\rm BMO}_{\mathbb R}(\mathbb R) \cap L({\rm BE})$. Since ${\rm BE} \cong T_b(\mathbb U) \times T_b(\mathbb L)$, $L({\rm BE})$ is connected. Then, the problem is to show the connectivity of the intersection of $L({\rm BE})$ with the linear subspace $i{\rm BMO}_{\mathbb R}(\mathbb R)$. If this were not connected, then distinct components would be joined by a path in $L({\rm BE})$ taking a detour whose real parts escape from ${\rm BMO}_{\mathbb R}^*(\mathbb R)$. \end{remark} We add one more property to the biholomorphic mapping $L:{\rm CA} \to {\rm BMO}(\mathbb R)$ restricted to ${\rm SQS}$, which concerns the correspondence of bounded subsets. Here, boundedness in ${\rm SQS}$ is considered with respect to a metric structure of ${\rm SQS} \cong T_b$. The invariant metric provided for $T_b$ is the Carleson metric (see \cite{WM-3}); let $d_c$ denote the Carleson distance in $T_b$. The composition operator $P_f$ is a bounded linear operator as in Theorem \ref{pullback}. Concerning uniform boundedness of this operator norm, we have the following proposition. It is stated in \cite[p.18]{CM}; we provide a proof for it here.
\begin{proposition}\label{uniformPh} For any $f \in {\rm SQS}$, let $P_f:{\rm BMO}(\mathbb R) \to {\rm BMO}(\mathbb R)$ be the bounded linear operator defined by $w \mapsto w \circ f$ for every $w \in {\rm BMO}(\mathbb R)$. Then, there exist constants $\tau_0>0$ and $C_0>0$ such that the operator norm of $P_f$ satisfies $\Vert P_f \Vert \leq C_0$ for every $f \in {\rm SQS}$ with $\Vert \log f' \Vert_* \leq \tau_0$. \end{proposition} \begin{proof} Lemma \ref{nearid} implies that there exist $\delta_0 >0$ and $\tau_0>0$ such that if $\Vert \log h' \Vert_* \leq \delta_0$ and $\Vert \log f' \Vert_* \leq \tau_0$, then $\Vert \log (h\circ f)' \Vert_* \leq 1$. We may choose $\tau_0$ so that $\tau_0 \leq 1$. Then, for any $u=\log h' \in {\rm BMO}_{\mathbb R}(\mathbb R)$ with $\Vert u \Vert_* \leq \delta_0$ and $f \in {\rm SQS}$ with $\Vert \log f'\Vert_* \leq \tau_0$, we have $$ \Vert P_f(u) \Vert_* \leq \Vert \log(h \circ f)'\Vert_*+\Vert \log f'\Vert_*\leq 2. $$ For $w=u+iv \in {\rm BMO}(\mathbb R)$ with $\Vert w \Vert_* = \delta_0$, we apply this estimate to $\Vert P_f(u) \Vert_*$ and $\Vert P_f(v) \Vert_*$; we obtain that if $\Vert w \Vert_* = \delta_0$ and $\Vert \log f'\Vert_* \leq \tau_0$, then $\Vert P_f(w) \Vert_* \leq 4$. This completes the proof by setting $C_0=4/\delta_0$. \end{proof} This can be extended to the local uniform boundedness of the operator norm as follows. \begin{lemma}\label{general} For any $f \in {\rm SQS}$, the operator norm $\Vert P_f \Vert$ of the composition operator $P_f:{\rm BMO}(\mathbb R) \to {\rm BMO}(\mathbb R)$ is bounded by a constant depending only on the Carleson distance $d_c(f,{\rm id})$. \end{lemma} \begin{proof} For the constant $\tau_0$ in Proposition \ref{uniformPh}, we choose a constant $r_0>0$ such that if $f \in {\rm SQS}$ satisfies $d_c(f,{\rm id}) \leq r_0$ then $\Vert \log f' \Vert_* \leq \tau_0$. Any element $f \in {\rm SQS}$ can be joined to $\rm id$ by a curve in ${\rm SQS}$ with its length arbitrarily close to $d_c(f,{\rm id})$. We choose the minimal number of consecutive points $$ {\rm id}=f_0, f_1, \ldots, f_n=f $$ on the curve such that $d_c(f_i,f_{i-1}) < r_0$ for any $i=1,\ldots,n$. Then, the number $n$ is determined by $d_c(f,{\rm id})$, and the invariance of $d_c$ under the right translation implies that the composition $f_i \circ f_{i-1}^{-1}$ satisfies $d_c(f_i \circ f_{i-1}^{-1},{\rm id}) <r_0$, and hence $\Vert \log (f_i \circ f_{i-1}^{-1})'\Vert_* \leq \tau_0$. By decomposing $f$ into these $n$ mappings, we have $$ P_f=P_{f_1 \circ f_0^{-1}} \circ P_{f_2 \circ f_1^{-1}} \circ \cdots \circ P_{f_n \circ f_{n-1}^{-1}}. $$ Then, Proposition \ref{uniformPh} shows that $\Vert P_f \Vert \leq C_0^n$. \end{proof} \begin{remark} It is known that $\Vert P_f \Vert$ can be estimated only by the constants $\alpha$ and $K$ for the strongly quasisymmetric condition (\ref{alphaK}) of $f \in {\rm SQS}$. See \cite[Example 2.3]{Got}. \end{remark} \begin{theorem}\label{b-b} For any bounded subset $W \subset {\rm SQS}$, the image $L(W)$ is also bounded in ${\rm BMO}_{\mathbb R}(\mathbb R)$. More precisely, if $f \in {\rm SQS}$ is within distance $r$ from $\rm id$ in the Carleson distance $d_c$, then $\Vert \log f' \Vert_*$ is bounded by a constant depending only on $r$. \end{theorem} \begin{proof} For $f \in {\rm SQS}$ with $d_c(f,{\rm id}) \leq r$, we decompose $f$ as in the proof of Lemma \ref{general}.
Then, by this lemma, we have \begin{align*} \Vert \log f'\Vert_* &=\Vert \log ((f_n \circ f_{n-1}^{-1}) \circ (f_{n-1} \circ f_{n-2}^{-1}) \circ \cdots \circ (f_{1} \circ f_{0}^{-1}) )'\Vert_*\\ & \leq \Vert P_{f_1 \circ f_0^{-1}} \circ P_{f_2 \circ f_1^{-1}} \circ \cdots \circ P_{f_{n-1} \circ f_{n-2}^{-1}}(\log (f_n \circ f_{n-1}^{-1})')\Vert_*\\ &\quad + \cdots +\Vert \log(f_{1} \circ f_{0}^{-1})' \Vert_*\\ & \leq C_0^{n-1} \tau_0+ C_0^{n-2} \tau_0 + \cdots +\tau_0. \end{align*} Since $n$ depends only on $r$, the statement is proved. \end{proof} \section{Arc-length and Riemann mapping parametrizations} Any chord-arc curve $\Gamma$ admits two canonical parametrizations. One is arc-length parametrization and the other is Riemann mapping parametrization, which is given by the boundary extension of the Riemann mapping from $\mathbb U$ to the left domain bounded by $\Gamma$. In this section, we consider the correspondence of these parametrizations. Let ${\rm RM}^\circ={\rm RM} \cap {\rm CA}$. This is the subset of ${\rm BE}$ consisting of all chord-arc curves with Riemann mapping parametrizations. Since ${\rm CA}$ is an open subset of ${\rm BE}$ by Theorem \ref{biholo}, ${\rm RM}^\circ$ is an open subset of the complex-analytic submanifold ${\rm RM}$ of ${\rm BE}$. Under the identification ${\rm RM} \cong T_b$, this corresponds to an open subset $T_c$ of $T_b$ investigated in \cite{WM-3}. Every $\gamma \in {\rm CA}$ is decomposed uniquely into $\gamma=h \circ f$, where $h=\Phi(\gamma) \in {\rm RM}^\circ$ and $f=\Pi(\gamma) \in {\rm SQS}$. Conversely, the parameter change of $h \in {\rm RM}^\circ$ by $f \in {\rm SQS}$ gives $\gamma=h \circ f \in {\rm CA}$. In this sense, ${\rm CA}$ is represented by the product ${\rm SQS} \times {\rm RM}^\circ$, which is in fact induced by the conformal welding coordinates $T_b \times T_b$ of ${\rm BE}$. Similarly, ${\rm CA}$ is represented by the product ${\rm SQS} \times {\rm ICA}$ (see \cite[Theorem 5.3]{FHS}). Namely, every $\gamma \in {\rm CA}$ is decomposed uniquely into $\gamma=\gamma_0 \circ f$, and vice versa, which can be understood as the parameter change of $\gamma_0 \in {\rm ICA}$ by $f \in {\rm SQS}$. Under the map $J^{-1} \circ L$, this decomposition corresponds to the product structure ${\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^{\circ}$. Having two product structures ${\rm SQS} \times {\rm RM}^\circ$ and ${\rm SQS} \times {\rm ICA}$ on ${\rm CA}$, we compare the arc-length parametrizations ${\rm ICA}$ with the Riemann mapping para\-metri\-zations ${\rm RM}^\circ$. Each fiber of the projection $\Phi:{\rm CA} \to {\rm RM}^\circ$ consists of a family of normalized BMO embeddings with the same image, and hence they have the same arc-length parametrization. This observation leads to the following. \begin{proposition}\label{fibers} There is a bijection between ${\rm ICA}$ and ${\rm RM}^\circ$ keeping the images of the corresponding embeddings the same. This bijection is nothing but $\Phi|_{\rm ICA}:{\rm ICA} \to {\rm RM}^\circ$. \end{proposition} We consider the other projection $\Pi$ restricted to ${\rm ICA}$, which has been studied with great interest in the literature. For any $\gamma_0 \in {\rm ICA}$, $\Pi(\gamma_0) \in {\rm SQS}$ is defined by the strongly quasisymmetric homeomorphism of $\mathbb R$ inducing the parameter change from $\gamma_0$ to $\Phi(\gamma_0) \in {\rm RM}^\circ$. The following result was proved by Coifman and Meyer \cite[Theorem 1]{CM} by operator theoretical arguments. 
See also \cite[Section 6]{Se} and \cite{Wu}. In our formulation, this statement itself is already self-evident. \begin{theorem}\label{CM} The map $\Pi|_{\rm ICA}:{\rm ICA} \to {\rm SQS}$ is real-analytic. Hence, $$ \lambda=L \circ \Pi \circ L^{-1}|_{i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ}:i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to {\rm BMO}_{\mathbb R}^*(\mathbb R) $$ is also real-analytic. \end{theorem} \begin{proof} By Corollary \ref{real-analytic}, ${\rm ICA}$ is a real-analytic submanifold of ${\rm CA}$. Hence, the restriction $\Pi|_{\rm ICA}$ of the projection $\Pi:{\rm CA} \to {\rm SQS}$ is real-analytic. Since $L$ is biholomorphic by Theorem \ref{biholo}, the conjugate map $\lambda$ is real-analytic. \end{proof} However, the essential properties of $\Pi|_{\rm ICA}$ are that it is injective and that its inverse is also real-analytic. See \cite[Theorem 5]{SeB}. In fact, the original arguments for Theorem \ref{CM} and their generalization were obtained by investigating this inverse map. \begin{theorem}\label{inverse} The map $\Pi|_{\rm ICA}$ is injective, its image is an open subset of ${\rm SQS}$, and the inverse $(\Pi|_{\rm ICA})^{-1}$ is real-analytic. Equivalently, $\lambda=L \circ \Pi \circ L^{-1}|_{i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ}$ is injective, its image is an open subset of ${\rm BMO}_{\mathbb R}^*(\mathbb R)$, and the inverse $\lambda^{-1}$ is real-analytic. \end{theorem} We will only prove the injectivity of $\lambda$ and explain the existence of the derivative $d_0(\lambda^{-1})$ at the origin, which is invertible. Then, we in particular see that some neighborhoods of the origin in $\rm ICA$ and ${\rm SQS}$ correspond homeomorphically under $\Pi|_{\rm ICA}$. This claim will be used in the next section. Every $\gamma_0 \in {\rm ICA}$ is decomposed uniquely into $\gamma_0=h \circ f$ for $h \in {\rm RM}^\circ$ and $f \in {\rm SQS}$. Taking the logarithm of the derivative of this equation, we have \begin{equation*}\label{logder} \log \gamma_0'= \log h' \circ f+\log f'. \end{equation*} Since $\log \gamma_0'=iv$ is purely imaginary and $\log f'$ is real, the real and the imaginary parts of this equation become \begin{equation}\label{ReIm} 0={\rm Re} \log h' \circ f +\log f'\quad {\rm and} \quad v={\rm Im} \log h' \circ f. \end{equation} Moreover, since $\log h'$ is the boundary extension of the holomorphic function $\log H'$ for the Riemann mapping $H$ on $\mathbb U$, ${\rm Re}\log h'$ and ${\rm Im}\log h'$ are related by the Hilbert transformation $T$ on $\mathbb R$: \begin{equation}\label{Hilbert} {\rm Im}\log h'=T({\rm Re}\log h'). \end{equation} Then, the combination of (\ref{ReIm}) and (\ref{Hilbert}) yields that \begin{equation}\label{injective} -P_f\circ T \circ P_f^{-1}(\log f')=v. \end{equation} This shows that $v$ is determined by $f$ and thus $\lambda:\log \gamma_0' \mapsto \log f'$ is injective. Let $u=\log f' \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$. Then, equation (\ref{injective}) also gives $$ \lambda^{-1}(u)=-iP_f\circ T \circ P_f^{-1}(u). $$ Here, we use the fact that the conjugate $P_f\circ T \circ P_f^{-1}$ of $T$ by the composition operator $P_f$ tends to the Hilbert transformation $T$ in the operator norm as $f \to \rm id$ in ${\rm SQS} \cong T_b$ (in spite of the fact that $P_f \nrightarrow I$ in Corollary \ref{last}). This was proved in \cite[Theorem 1]{CM0} in a more general form. In fact, $P_f\circ T \circ P_f^{-1}$ was shown to be real-analytic as an operator valued function, from which Theorem \ref{inverse} follows.
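For the reader's convenience, the elementary manipulation behind (\ref{injective}) can be spelled out explicitly. Recalling that $P_f(w)=w \circ f$, the first identity in (\ref{ReIm}) reads $P_f({\rm Re}\log h')=-\log f'$, and hence
$$
{\rm Re}\log h'=-P_f^{-1}(\log f').
$$
Combining this with (\ref{Hilbert}) and the second identity in (\ref{ReIm}), we obtain
$$
v={\rm Im}\log h' \circ f=P_f\bigl(T({\rm Re}\log h')\bigr)=-P_f\circ T \circ P_f^{-1}(\log f'),
$$
which is exactly (\ref{injective}) and, with $u=\log f'$, also gives the formula $\lambda^{-1}(u)=iv=-iP_f\circ T \circ P_f^{-1}(u)$ above.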
A conceptual explanation for the real-analyticity of $P_f\circ T \circ P_f^{-1}$ as an operator valued function is given in \cite[p.86]{CS}. Indeed, $P_f\circ T \circ P_f^{-1}$ can be extended from the function on $u=\log f' \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ to that on $w=\log \gamma' \in L({\rm CA})$ as the Cauchy integral operator on a chord-arc curve. Then, local boundedness and weak holomorphy of $P_\gamma\circ T \circ P_\gamma^{-1}$ imply that this function is holomorphic. The local boundedness of $P_f$ is shown in Proposition \ref{uniformPh}. In addition, we can find a proof for an estimate $\Vert P_f\circ T \circ P_f^{-1} -T \Vert \lesssim \Vert u \Vert_*$ when $\Vert u \Vert_*$ is small in \cite[p.206]{Se1}. By $P_f\circ T \circ P_f^{-1} \to T$ as $f \to \rm id$, we obtain that $\lambda^{-1}$ is differentiable at the origin and $d_0(\lambda^{-1})=-iT$. Since the Hilbert transformation $T$ is an isomorphism of ${\rm BMO}_{\mathbb R}(\mathbb R)$ onto itself, we see that $d_0(\lambda^{-1})$ is invertible and in fact $d_0\lambda=-iT$. By the inverse function theorem, there are some neighborhoods of the origin in $i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ and ${\rm BMO}_{\mathbb R}^*(\mathbb R)$ that are real-analytically equivalent under $\lambda$. \section{Discontinuity of the correspondence to Riemann mappings} In this section, we prove that the bijection $\Phi|_{\rm ICA}:{\rm ICA} \to {\rm RM}^\circ$ is not continuous. This means that the correspondence of Riemann mappings to the arc-length parametrizations of chord-arc curves is not continuous. To this end, we first consider a problem on the topological group structure of $T_b \cong {\rm SQS}$. It is well-known that the universal Teichm\"uller space $T$, which can be identified with the group ${\rm QS}$ of all normalized quasisymmetric homeomorphisms of $\mathbb R$ onto itself, does not constitute a topological group. An example of discontinuity of the map $([\mu_1],[\mu_2]) \mapsto [\mu_2] \ast [\mu_1]^{-1}$ is given in \cite{Le}. We adjust this example slightly to see that $T_b$ is not a topological group either. On the other hand, Proposition \ref{nearid} implies that $T_b$ is a partial topological group. In this case, \cite[Lemma 1.2]{GS} proved that the subgroup consisting of all elements $[\nu] \in T_b$ such that $[\nu] \ast [\mu_n] \ast [\nu]^{-1} \to [0]$ for any sequence $[\mu_n] \in T_b$ converging to $[0]$ is a closed topological group, which is called the characteristic topological subgroup of $T_b$. We denote this subgroup by ${\rm char}(T_b)$. \begin{proposition}\label{continuity} The map $\Phi:{\rm BE} \to {\rm RM}$ is not continuous at some point $([0],[\nu]) \in {\rm BE}$ with $[\nu] \neq [0]$, where $[\nu]$ can be taken in any small neighborhood of the origin of $T_b$; on the other hand, it is continuous at a point $([\mu_1],[\mu_2]) \in {\rm BE}$ if $[\mu_2] \ast [\mu_1]^{-1}$ belongs to ${\rm char}(T_b)$. \end{proposition} \begin{proof} We modify the example in \cite[Theorem III.3.3]{Le}. For $k>0$, let $f_k: \mathbb R \to \mathbb R$ be a normalized quasisymmetric homeomorphism defined by $$ f_k(x)= \left\{ \begin{array}{ll} x & (0 \leq x)\\ \frac{k}{k+1}x & (-\frac{k+1}{k} < x <0)\\ x+\frac{1}{k} & (x \leq -\frac{k+1}{k})\quad. \end{array} \right. $$ Since $\Vert \log f_k' \Vert_* \to 0$ as $k \to \infty$, we have $f_k \in {\rm SQS}$ if $k>0$ is sufficiently large. Under the identification $T_b \cong {\rm SQS}$, we take $[\nu_k] \in T_b$ that corresponds to $f_k$. Since $[\nu_k] \to [0]$ in $T_b$ as $k \to \infty$, we can choose and fix some $k>0$ so that $[\nu]=[\nu_k] \neq [0]$ is arbitrarily close to $[0]$. Set also $f=f_k$.
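Let us note in passing why $\Vert \log f_k' \Vert_* \to 0$: the function $\log f_k'$ takes only the two values $\log\frac{k}{k+1}=-\log(1+\frac{1}{k})$ (on the interval $(-\frac{k+1}{k},0)$) and $0$ (elsewhere), so its mean oscillation over any interval is at most $2\log(1+\frac{1}{k})$, which tends to $0$ as $k \to \infty$.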
Next, for every $n \in \mathbb N$, let $\ell_n: \mathbb R \to \mathbb R$ be a normalized quasisymmetric homeomorphism defined by $$ \ell_n(x)= \left\{ \begin{array}{ll} x & (x \geq 0)\\ \frac{n}{n+1}x & (x <0) \quad. \end{array} \right. $$ Similarly to the above, since $\Vert \log \ell_n' \Vert_* \to 0$ as $n \to \infty$, the sequence $[\varepsilon_n] \in T_b$ corresponding to $\ell_n$ tends to $[0]$ as $n \to \infty$. Then, $([\varepsilon_n],[\nu])$ converges to $([0],[\nu])$. On the contrary, we see that $\Phi([\varepsilon_n],[\nu])=([0],[\nu] \ast [\varepsilon_n]^{-1})$ does not converge to $\Phi([0],[\nu])=([0],[\nu])$. Indeed, a simple computation yields that $$ \log (f \circ \ell_n^{-1} \circ f^{-1})'(x)= \left\{ \begin{array}{ll} \log \left(1+\frac{1}{n} \right) & (-\frac{n}{n+1} \leq x<0)\\ \log \left(1+\frac{1}{n} \right)+\log \left(1+\frac{1}{k} \right) & (-1 \leq x <-\frac{n}{n+1}) \quad , \end{array} \right. $$ and hence $$ \Vert \log(f \circ \ell_n^{-1} \circ f^{-1})'\Vert_* \geq \frac{1}{2}\log\left(1+\frac{1}{k} \right). $$ This implies that $f \circ \ell_n^{-1}$ does not converge to $f$ as $n \to \infty$ in ${\rm SQS}$. Alternatively, the proof of \cite[Theorem III.3.3]{Le} asserts that $[\nu] \ast [\varepsilon_n]^{-1}$ does not converge to $[\nu]$ in the universal Teichm\"uller space $T$. Since the topology of $T_b$ is stronger than that of $T$, $[\nu] \ast [\varepsilon_n]^{-1}$ does not converge to $[\nu]$ in $T_b$ either. This proves that $\Phi$ is not continuous on any small neighborhood of the origin of ${\rm BE}$. For the continuity at $([\mu_1],[\mu_2])$ when $[\mu_2] \ast [\mu_1]^{-1}$ belongs to ${\rm char}(T_b)$, we take any sequences $[\varepsilon_n]$ and $[\varepsilon'_n]$ converging to $[0]$, and show that $\Phi([\varepsilon_n] \ast [\mu_1],[\varepsilon'_n] \ast[\mu_2]) \to \Phi([\mu_1],[\mu_2])$ as $n \to \infty$. This is equivalent to showing that $$ [\varepsilon'_n] \ast ([\mu_2] \ast [\mu_1]^{-1}) \ast [\varepsilon_n]^{-1}\ast([\mu_2] \ast [\mu_1]^{-1})^{-1} \to [0]. $$ By the definition of ${\rm char}(T_b)$, this is satisfied. \end{proof} \begin{remark} The VMO Teichm\"uller space $T_v$ can be defined as the set of all normalized strongly symmetric homeomorphisms of the unit circle onto itself, and this is a closed subspace of $T_b$ (see \cite{SWei}). The subset $T_c$ of $T_b$ consisting of all elements corresponding to chord-arc curves contains $T_v$ (see \cite{WM-3}). It was shown in \cite{Wei} that all strongly symmetric homeomorphisms on the unit circle constitute a topological group, and from its proof, we see that $T_v$ is contained in ${\rm char}(T_b)$. We expect that ${\rm char}(T_b)=T_v$ but it seems that this is not known yet. \end{remark} Proposition \ref{nearid} implies that $\Phi$ is continuous at the origin of ${\rm BE}$, and so is $\Phi|_{\rm ICA}:{\rm ICA} \to {\rm RM}^\circ$. We also consider the continuity of the inverse map $(\Phi|_{\rm ICA})^{-1}$ at the origin. This follows from the continuity of $J^{-1}$ at the origin. \begin{lemma}\label{origin} $J^{-1}:L({\rm CA}) \to {\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ is continuous at $0$. As a consequence, $(\Phi|_{\rm ICA})^{-1}:{\rm RM}^\circ \to {\rm ICA}$ is continuous at $([0],[0])$. 
\end{lemma} \begin{proof} Since $J^{-1}(u+iv)=(u,Q_u^{-1}(u+iv))$ and $Q_{u}^{-1}=L \circ \widetilde R_{[\nu]}^{-1} \circ L^{-1}$ with the correspondence between $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ and $[\nu] \in T_b$, the continuity of $J^{-1}$ at $0$ follows from the continuity of $R_{[\nu]}^{-1}([\mu])=[\mu] \ast [\nu]^{-1}$ at $([0],[0])$ by Proposition \ref{nearid}. Since $$ (\Phi|_{\rm ICA})^{-1}=L^{-1}\circ J \circ p \circ J^{-1}\circ L $$ where $p:{\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ is the projection to the second factor, this is continuous at $([0],[0])$. \end{proof} We are ready to prove the main result, which solves a conjecture of Katznelson, Nag and Sullivan \cite[p.303]{KNS} by showing that the dependence of the Riemann mapping on the arc-length parametrization is not continuous. See also \cite[Section 1]{SW}. \begin{theorem}\label{main} $\Phi|_{\rm ICA}:{\rm ICA} \to {\rm RM}^\circ$ is not continuous. \end{theorem} \begin{proof} By Proposition \ref{continuity}, we can find a point $([0],[\nu]) \in {\rm RM}^\circ$ arbitrarily close to $([0],[0])$ at which $\Phi$ is not continuous. Let $(\Phi|_{\rm ICA})^{-1}([0],[\nu])=([\mu_1],[\mu_2])$, which is in ${\rm ICA}$. Since $(\Phi|_{\rm ICA})^{-1}$ is continuous at $([0],[0])$ by Lemma \ref{origin}, we may assume that $([\mu_1],[\mu_2])$ is sufficiently close to $([0],[0])$. By Theorems \ref{CM} and \ref{inverse}, there are neighborhoods $U$ and $V$ of the origin in ${\rm ICA}$ and ${\rm SQS}$ respectively such that $\Pi|_{U}:U \to V$ is a homeomorphism preserving the origin. We can take $([\mu_1],[\mu_2])$ so that it is in $U$. Then, $\Pi([\mu_1],[\mu_2])=([\mu_1],[\mu_1]) \in V$. We choose a sequence $[\varepsilon_n] \in T_b$ as in the proof of Proposition \ref{continuity}, and consider $([\varepsilon_n] \ast [\mu_1],[\varepsilon_n] \ast[\mu_1]) \in V$. This sequence is mapped to $([\varepsilon_n] \ast [\mu_1],[\varepsilon'_n] \ast[\mu_2]) \in U$ by $(\Pi|_U)^{-1}$, where $[\varepsilon'_n]$ also tends to $[0]$ as $n \to \infty$ by the continuity of $(\Pi|_U)^{-1}$. Thus, we obtain a sequence $([\varepsilon_n] \ast [\mu_1],[\varepsilon'_n] \ast[\mu_2])$ in ${\rm ICA}$ converging to $([\mu_1],[\mu_2])$. The image of this sequence under $\Phi$ is $$ ([0],[\varepsilon'_n] \ast[\mu_2] \ast [\mu_1]^{-1} \ast [\varepsilon_n] ^{-1}) =([0],[\varepsilon'_n] \ast[\nu] \ast [\varepsilon_n] ^{-1}) \in {\rm RM}^\circ. $$ As we have seen in the proof of Proposition \ref{continuity}, $[\nu] \ast [\varepsilon_n] ^{-1}$ does not converge to $[\nu]$ in $T_b$, and neither does $[\varepsilon'_n] \ast[\nu] \ast [\varepsilon_n] ^{-1}$. This implies that $\Phi([\varepsilon_n] \ast [\mu_1],[\varepsilon'_n] \ast[\mu_2])$ does not converge to $\Phi([\mu_1],[\mu_2])=([0],[\nu])$. Hence, $\Phi|_{\rm ICA}$ is not continuous. \end{proof} In fact, the mapping in question was given on the space of BMO functions in \cite{KNS}. Let $\Pi^*:{\rm BE} \to {\rm SQS}$ be the projection defined by $([\mu_1],[\mu_2]) \mapsto ([\mu_2],[\mu_2])$ in parallel to $\Pi$. The identification ${\rm RM} \cong T_b \cong {\rm SQS}$ is realized by $\Pi^*|_{\rm RM}:{\rm RM} \to {\rm SQS}$. Then, $$ \Pi^* \circ \Phi|_{\rm ICA}:{\rm ICA} \to {\rm SQS} $$ is the correspondence of conformal welding homeomorphisms to arc-length para\-metri\-zations. 
Its conjugation by $L$, that is, $$ \rho=L \circ \Pi^* \circ \Phi \circ L^{-1}|_{i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ} :i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to {\rm BMO}_{\mathbb R}^*(\mathbb R) $$ is the map we deal with. \begin{corollary}\label{another} $\rho:i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to {\rm BMO}_{\mathbb R}^*(\mathbb R)$ is not continuous. \end{corollary} \begin{proof} This follows from Theorem \ref{main} and the facts that $L$ and $\Pi^*|_{\rm RM}$ are homeomorphisms. \end{proof} We also have the following properties of the bijection $J$. As mentioned before in Section 5, this answers the problem raised in \cite[Problem 5.4]{FHS} negatively. \begin{theorem}\label{mapL} The bijection $$ J:{\rm BMO}_{\mathbb R}^*(\mathbb R) \times i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ \to L({\rm CA}) \subset {\rm BMO}(\mathbb R) $$ defined by $J(u,iv)=Q_u(iv)=u+iP_{\gamma_u}(v)$ is not continuous but locally bounded. \end{theorem} \begin{proof} All the properties of $J$ stem from those of the composition operator $P_{\gamma_u}$. For a given $u_0 \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$, we consider any $u \in {\rm BMO}_{\mathbb R}^*(\mathbb R)$ that is sufficiently close to $u_0$. For the normalized strongly quasisymmetric homeomorphisms $\gamma_u$ and $\gamma_{u_0}$ in ${\rm SQS}$, we set $f=\gamma_u \circ \gamma_{u_0}^{-1}$, which also belongs to ${\rm SQS}$. Then, $\Vert \log f' \Vert_* \to 0$ as $u \to u_0$. By Proposition \ref{uniformPh}, if $\Vert \log f' \Vert_* \leq \tau_0$, then $\Vert P_f \Vert \leq C_0$. Hence, the operator norm of $P_{\gamma_u}=P_{\gamma_{u_0}} \circ P_f$ is bounded by $C_0 \Vert P_{\gamma_{u_0}} \Vert$ for any $u$ in some neighborhood of $u_0$. This implies that $J$ is locally bounded. For $iv_0 \in i{\rm BMO}_{\mathbb R}(\mathbb R)^\circ$ whose BMO norm is small so that $e^{v_0}$ is an $A_\infty$-weight, we can take the normalized strongly quasisymmetric homeomorphism $\gamma_{v_0}:\mathbb R \to \mathbb R$ such that $\log \gamma_{v_0}'=v_0$. Then, \begin{equation}\label{realv} J(u,iv_0)=u+ i \log \gamma_{v_0}'\circ \gamma_u=i \log(\gamma_{v_0} \circ \gamma_u)'+(1-i)u. \end{equation} We may choose $\gamma_{v_0} \in {\rm SQS}$ so that it corresponds to $[\nu] \in T_b$ in the proof of Proposition \ref{continuity}. We may also choose $\gamma_{u_n} \in {\rm SQS}$ for each $n \in \mathbb N$ so that $\gamma_{u_n} \circ \gamma_{u_0}^{-1}$ corresponds to $[\varepsilon_n] \in T_b$. Then, $[\varepsilon_n] \to [0]$ but $[\nu] \ast [\varepsilon_n]^{-1} \nrightarrow [\nu]$ as $n \to \infty$. This implies that $\gamma_{u_n} \to \gamma_{u_0}$ but $\gamma_{v_0} \circ \gamma_{u_n} \nrightarrow \gamma_{v_0} \circ \gamma_{u_0}$, namely, $u_n \to u_0$ but $\log (\gamma_{v_0} \circ \gamma_{u_n})' \nrightarrow \log (\gamma_{v_0} \circ \gamma_{u_0})'$. By equation (\ref{realv}), this shows that $J$ is not continuous at $(u_0,iv_0)$. \end{proof} This also implies a discontinuity property of the composition operator $P_f:{\rm BMO}(\mathbb R) \to {\rm BMO}(\mathbb R)$ for $f \in {\rm SQS}$. \begin{corollary}\label{last} There are $v_0 \neq 0$ in ${\rm BMO}(\mathbb R)$ with an arbitrarily small norm and a sequence $f_n$ in ${\rm SQS}$ converging to $\rm id$ such that $\Vert P_{f_n}(v_0)-v_0 \Vert_*$ does not converge to $0$ as $n \to \infty$. In particular, the operator norm $\Vert P_{f_n}-I \Vert$ does not necessarily converge to $0$ when $f_n \to {\rm id}$. \end{corollary} \begin{proof} For $u_0=0$, choose $f_n=\gamma_{u_n}$ and $v_0$ as in the proof of Theorem \ref{mapL}.
Then, $P_{f_n}(v_0)=\log (\gamma_{v_0} \circ \gamma_{u_n})'-\log \gamma_{u_n}'$ does not converge to $v_0=\log \gamma_{v_0}'$. \end{proof} \end{document}
\begin{document} \title{An empirical comparison and characterisation of nine popular clustering methods} \author{Christian Hennig,\\ Dipartimento di Scienze Statistiche ``Paolo Fortunati''\\ Universita di Bologna\\ Bologna, Via delle belle Arti, 41, 40126, Italy\\ [email protected]} \maketitle \begin{abstract} Nine popular clustering methods are applied to 42 real data sets. The aim is to give a detailed characterisation of the methods by means of several cluster validation indexes that measure various individual aspects of the resulting clusters such as small within-cluster distances, separation of clusters, closeness to a Gaussian distribution etc. as introduced in \cite{Hennig19}. 30 of the data sets come with a ``true'' clustering. On these data sets the similarity of the clusterings from the nine methods to the ``true'' clusterings is explored. Furthermore, a mixed effects regression relates the observable individual aspects of the clusters to the similarity with the ``true'' clusterings, which in real clustering problems is unobservable. The study gives new insight not only into the ability of the methods to discover ``true'' clusterings, but also into properties of clusterings that can be expected from the methods, which is crucial for the choice of a method in a real situation without a given ``true'' clustering. {\bf Keywords:} Cluster benchmarking \and internal cluster validation \and external cluster validation \and mixed effects model\\ MSC2010 classification: 62H30 \end{abstract} \section{Introduction} \label{sintro} This work compares cluster analysis methods empirically on 42 real data sets. 30 of these data sets come with a given ``true'' classification. The principal aim is to explore how different clustering methods produce solutions with different data analytic characteristics, which can help a user choosing an appropriate method for the research question of interest. This does not require the knowledge of a ``true'' clustering. The performance of the methods regarding recovery of the ``truth'' is reported, but is not the main focus. Cluster analysis plays a central role in modern data analysis and is applied in almost every field where data arise, be it finance, marketing, genetics, medicine, psychology, archaeology, social and political science, chemistry, engineering, or machine learning. Cluster analysis can have well-defined research aims such as species delimitation in biology, or be applied in a rather exploratory manner to learn about potentially informative structure in a data set, for example when clustering the districts of a city. New cluster analysis methods are regularly developed, often for new data formats, but also to fix apparent defects of already existing methods. One reason for this is that cluster analysis is difficult, and all methods, or at least those with which enough experience has been collected, are known to ``fail'' in certain, even fairly regular and non-pathological, situations, where ``failing'' is often taken to mean that a certain pre-specified ``true'' clustering in data is not recovered. A key problem with clustering is that there is no unique and generally accepted definition of what constitutes a cluster. This is not an accident, but rather part of the nature of the clustering problem. In real applications there can be different requirements for a ``good'' clustering, and different clusterings can qualify as ``true'' on the same data set. 
For example, crabs can be classified according to species, or as male or female; paintings can be classified according to style of the painter or according to the motif; a data set of customers of a company may not show any clusters that are clearly separated from each other, but may be very heterogeneous, and the company may be interested in having homogeneous subgroups of customers in order to better target their campaigns, but the data set may allow for different groupings of similar quality; in many situations with given ``true'' classes, such as companies that go bankrupt in a given period vs. those that do not, there is no guarantee that these ``true'' classes correspond to patterns in the data that can be found at all. One could even argue that in a data set that comes with a supposedly ``true'' grouping a clustering that does {\it not} coincide with that grouping is of more scientific interest than reproducing what is already known. Rather than being generally ``better'' or ``worse'', different cluster analysis methods can be seen as each coming with their own implicit definition of what a cluster is, and when cluster analysis is to be applied, the researchers have to decide which cluster concepts are appropriate for the application at hand. Cluster analysis can have various aims, and these aims can be in conflict with each other. For example, clusters that are well separated by clear density gaps may involve quite large within-cluster distances, which may be tolerable in some applications but unacceptable in others. Clusters that can be well represented by cluster centroids may be different from those that correspond to separable Gaussian distributions with potentially different covariance matrices, which in some applications are interpreted as meaningfully different data subsets. See \cite{AckBDLok10,LuWiGu12,Hennig15,Hennig15handbook} for the underlying ``philosophy'' of clustering. The starting point of this work is the collection of cluster validation indexes presented in \cite{Hennig19}. These are indexes defined in order to provide a multivariate characterisation of a clustering, individually measuring aspects such as between-cluster separation, within-cluster homogeneity, or representation of the overall dissimilarity structure by the clustering. They are applied here in order to give general information about how the characteristics of clusterings depend on the clustering method. Many cluster validation indexes have been proposed in the literature, often in order to pick an optimal clustering in a given situation, e.g., by comparing different numbers of clusters, see \cite{HaVaHe15} for an overview. Most of them (such as the Average Silhouette Width, \cite{KauRou90}) attempt to assess the quality of a clustering overall by defining a compromise of various aspects, particularly within-cluster homogeneity and between-cluster separation. Following \cite{Hennig19} and \cite{AkhHen20}, the present work deviates from this approach by keeping different aspects separate in order to inform the user in a more detailed way what a given clustering achieves. A number of benchmark studies for cluster analysis have already been published. Most of them focus on evaluating the quality of clusterings by comparing them to given ``true'' clusterings. 
This has been done for artificially generated data (e.g., \cite{Mil80,BruSte07,SteBru11,SaDoDo13,RCCBACR19}; see \cite{Mil96} for an overview of earlier work), for real data, mostly focusing on specific application areas or types of data (e.g., \cite{dCdLS08,KoPeWa14,BouHat17,LSWZYLD19}), or a mixed collection of real and artificial data, sometimes generating artificial data from models closely derived from a real application (e.g., \cite{MeiHec01,MauBan02,DiBaWiHoMo04,AGMPP13,JLR20}). An exception is \cite{JTLB04}, where different clustering methods were mapped according to the similarity of their clusterings on various data sets (something similar is done here, see Section \ref{scharacterisation}). \cite{AndHen14} contrasted recovery of a ``true'' classification in artificial data sets with the requirement of having homogeneous clusters. All of these studies attempt to provide a neutral comparison of clustering methods, which is to be distinguished from the large number of studies, using real and artificial data, that have been carried out by method developers in order to demonstrate that their newly proposed method compares favourably with existing methods. Due to selection effects, the results of such work, although of some value in their own right, cannot be taken as objective indicators of the quality of methods (\cite{BoLaEu13,Hennig18}). The study presented here is meant to be neutral; I have not been involved in the development of any of the compared methods, and have no specific interest in portraying any of them as particularly good or bad. Note that ``true'' neutrality can never be secured and is probably never given; for example, I have been actively promoting my own ``philosophy'' of clustering (e.g., \cite{Hennig15handbook}) and may be suspected to favour results that are in line with the idea that the success of clustering methods strongly depends on the application. However, no selections have been made depending on results (\cite{Boulesteix15}); the 42 data sets from which results are reported are all that were involved. Section \ref{sdesign} explains the design of the study, i.e., the clustering methods, the data sets, and the validation indexes. Section \ref{sresults} presents the results, starting with the characterisation of the methods in terms of the internal indexes, then results regarding the recovery of the ``true'' clusters, and ultimately connecting ``true'' cluster recovery with the characteristics of the clustering solutions using a mixed effects regression model. A discussion concludes the paper. \section{Study design} \label{sdesign} For the study design, recommendations for benchmark studies as given, e.g., in \cite{Boulesteix15,VBDDGHLS18} have been taken into account. One important issue is a definition of the scope of the study. There is an enormous number of clustering methods, and clustering is applied to data of very different formats. It is not even remotely possible to cover everything that could potentially be of interest. Therefore the present study constrains its scope in the following way: \begin{itemize} \item Only clustering methods for $p$-dimensional Euclidean data, $p \ge 2$, that can be treated as continuous are used. Methods that work with dissimilarities are run using the Euclidean distance. \item Accordingly, data sets contain numerical variables only.
Some data sets include discrete variables, which are treated as admissible for the study if they carry numerical information and take at least three different values (variables taking a small number of values, particularly three or four, are very rare in the study). \item The number of clusters is always treated as fixed. Only methods that allow to fix the number of clusters are used; methods to estimate the number of clusters are not involved. For data sets with a given ``true'' clustering, the corresponding number of clusters was taken. For data sets without such information, a number of clusters was chosen subjectively considering data visualisation and, where possible, subject matter information. \item The included clustering methods were required to have an R-implementation that can be used in a default way without additional tuning in order to allow for a comparison that is not influenced by different tuning flexibilities. \item No statistical structure (such as time series or regression clustering) is taken into account, and neither is any automatic dimension reduction involved as part of any method. All data is treated as plain $p$-dimensional Euclidean. \item Methods are only admissible for the study if they always produce crisp partitions. Every observation always is classified (also in the given ``true'' clusterings) to belong to one and only one cluster. \end{itemize} \subsection{Clustering methods} \label{sclustering} The involved clustering methods are all well established and widely used, as far as my knowledge goes. They represent the major classes of clustering methods listed in \cite{HenMei15} with the exception of density-based clustering, which was excluded because standard density-based methods such as DBSCAN (\cite{EsKrSaXu96}) do not accept the number of clusters as input and often do not produce partitions. Another popular method that was not involved was Ward's method, as this is based on the same objective function as $K$-means and can be seen as just another technique to optimise this function locally (\cite{EvLaLeSt11}). On the other hand, including mixtures of t- and skew t-distributions means that mixture model-based clustering is strongly represented. The motivation for this is that the other included methods are not meant to fit distributional shapes including outliers and skewness, which may be widespread in practice; alternatives would be methods that have the ability to not include observations classified as ``outliers'' in any cluster, but this is beyond the scope of the present study. Here are the included methods. \begin{description} \item[K-means] as implemented in the R-function \verb|kmeans| using the algorithm by \cite{HarWon79}. \item[Partitioning Around Medoids (clara)] (\cite{KauRou90}) as implemented in the R-function \verb|claraCBI| (therefore abbreviated ``clara'' in the results) in R-package \verb|fpc| (\cite{Hennig20}), which calls function \verb|pam| in R-package \verb|cluster| (\cite{MRSHH19}) using (unsquared) Euclidean distances. \item[Gaussian mixture model (mclust)] fitted by Maximum Likelihood using the EM-algorithm, where the best of various covariance matrix models is chosen by the Bayesian Information Criterion (BIC) (\cite{FraRaf02}) as implemented in the R-function \verb|mclustBIC| in R-package \verb|mclust| (\cite{ScFoMuRa16}). 
\item[Mixture of skew t-distributions (emskewt)] fitted by Maximum Likelihood using the EM-algorithm (\cite{LeeMcL13}), including fully flexible estimation of the degrees of freedom and the shape matrix, as implemented in the function \verb|EmSkew| with parameter \verb|distr="mst"| in the R-package \verb|EMMIXskew| (\cite{WaNgMc18}). \item[Mixture of t-distributions (teigen)] fitted by Maximum Likelihood using the EM-algorithm (\cite{McLPee00}), where the best of various covariance matrix models is chosen by the BIC (\cite{AndMcN12}) as implemented in the R-function \verb|teigen| in R-package \verb|teigen| (\cite{AnWiBoMc18}). \item[Single linkage hierarchical clustering] as implemented in the R-function \verb|hclust| and the dendrogram cut at the required number of clusters to produce a partition, as is done also for the other hierarchical methods. See \cite{EvLaLeSt11} for an explanation and historical references for all involved hierarchical methods. \item[Average linkage hierarchical clustering] as implemented in the R-function \verb|hclust|. \item[Complete linkage hierarchical clustering] as implemented in the R-function \verb|hclust|. \item[Spectral clustering] (\cite{NgJoWe01}) as implemented in the R-function \verb|specc| in R-package \verb|kernlab| (\cite{KaSmHoZe04}). \end{description} The functions were mostly run using the default settings. In some cases, e.g., \verb|hclust|, parameters had to be provided in order to determine which exact method was used. Some amendments were required. In particular, all methods were run in such a way that they would always deliver a valid partition as a result. See Appendix A1 for more computational detail. \subsection{Data sets} The data sets used in this study are a convenience sample, collected from mostly well known benchmark data sets in widespread use together with some data sets that I have come across in my work. 21 data sets are from the UCI repository (\cite{UCI17}), further ones are from Kaggle, \verb|www.openml.org|, example data sets of R-packages, open data accompanying books and research papers, and some were collected by myself or provided to me by collaborators and advisory clients with permission to use them. Details about the data sets are given in Appendix A2. There were some criteria on top of those stated above according to which data sets have been selected, which define the scope of the study. There was a target number of collecting at least 30 data sets with and at least 10 data sets without given ``true'' classes; ultimately there are 30 data sets with and 12 data sets without true classes. The aim was to cover a large range of application areas, although due to the availability of data sets, this has not been perfectly achieved. 17 of the data sets come from the related areas of biology, genetics, medicine, and chemistry. Eight are from the social sciences, two from finance, eight can be classified as engineering including typical pattern recognition tasks, the remaining seven data sets come from miscellaneous areas. As some of the clustering methods cannot handle data with a smaller number of observations $n$ than the number of variables $p$ within clusters, all data sets have $p$ substantially smaller than $n$. The calibration of validation indexes requires repeated computations based on $n\times n$ distance matrices (see Section \ref{sinternal}), for this reason the biggest data set has $n=4601$, and generally data sets with $n<3000$ were preferred. The maximum $p$ is 72. 
$p=1$ is excluded, as it could not be handled by two methods. The maximum number of ``true'' clusters $K$ is 100. The aim was to achieve a fairly even representation of $p$ and $K$ up to 10 and a number of instances for these values larger than 10, although there are apparently far more data sets in benchmark use with $k=2$ than with larger $K$. Data sets without missing values were preferred, but some data sets with a very small number of missing values were admitted. In these cases mean imputation was used. Tables \ref{tn}, \ref{tp}, and \ref{tk} show the distributions of $n$, $p$, and $K$, respectively, over the data sets. The variables were scaled to mean 0 and variance 1 before clustering, except for data sets in which the variables have compatible units of measurement and there seems to be a subject matter justification to make their impact for clustering proportional to the standard deviation. See Appendix A2 for details on the preprocessing for some data sets. \begin{table}[tb] \caption{Numbers of observations for the 42 data sets.} \begin{tabular}{|l|r|} \hline Observations & Number of data sets\\ \hline $n\le 100$ & 5 \\ $100<n\le 200$ & 6\\ $200<n\le 300$ & 8\\ $300<n\le 500$ & 5\\ $500\le n<1000$ & 7\\ $1000\le n<2000$ & 6\\ $n>2000$ & 5\\ \hline \end{tabular} \label{tn} \end{table} \begin{table}[tb] \caption{Numbers of variables for the 42 data sets} \begin{tabular}{|l|r|} \hline Variables & Number of data sets\\ \hline $p=2$ & 2\\ $p=4$ & 5\\ $p=5$ & 5\\ $6\le p\le 8$ & 6\\ $9\le p \le 11$ & 11\\ $12\le p \le 20$ & 6\\ $21\le p \le 50$ & 4\\ $p > 50$ & 3\\ \hline \end{tabular} \label{tp} \end{table} \begin{table}[tb] \caption{Numbers of clusters for the 30 data sets with given ``true'' clusterings, and for the 12 data sets without ``true'' clusterings, as chosen by the author.} \begin{tabular}{|l|r|r|} \hline Number of clusters & With ``true'' clustering & Without ``true'' clustering\\ \hline $k=2$ & 8 & 1\\ $k=3$ & 3 & 3\\ $k=4$ & 3 & 1\\ $k=5$ & 2 & 6\\ $6\le k\le 7$ & 5 & 1\\ $8\le k \le 11$ & 6 & 0\\ $k>11$ & 3 & 0\\ \hline \end{tabular} \label{tk} \end{table} An issue with the ``representativity'' of these data sets for real clustering problems is that the availability of ``true'' clusterings constitutes a difference to the real unsupervised problems to which clustering is usually applied. This is an issue with almost all collections of data sets for benchmarking clustering algorithms. In particular, several such data sets have been constructed in order to have all clusters represented by the same number of observations. This is the case for eight of the 30 data sets with ``true'' clusterings used here (seven of these have exactly equal cluster sizes). This is not possible for unsupervised problems in practice. Such data sets will favour methods that tend to produce clusters of about equal sizes. \subsection{Internal validation indexes} \label{sinternal} Internal validation indexes are used here with the aim of measuring various aspects of a clustering that can be seen as desirable, depending on the specific application. It is then investigated to what extent the different clustering methods work well according to these aspects. \cite{Hennig15handbook} lists and discusses a number of aspects that can be relevant. \cite{Hennig19} and \cite{AkhHen20} formalised many of these aspects, partly using already existing indexes, partly introducing new ones. Here the indexes used in the present study are listed. 
For more background and discussion, including possible alternatives, see \cite{Hennig19} and \cite{AkhHen20}. The indexes attempt to formalise clustering aspects in a direct intuitive manner, without making reference to specific models (unless it is of interest whether data look as if generated by a particular probability model, see below). The indexes as defined here do not allow comparison between or aggregation over different data sets. In order to do this, they need to be calibrated, which is treated in Section \ref{scalibration}.

The data set is denoted as ${\cal D}=\{x_1,\ldots,x_n\}$. Here the observations $x_1,\ldots,x_n$ are assumed to lie in $\mathbb{R}^p$, and $d(x,y)$ is the Euclidean distance between $x$ and $y$, although the indexes can be applied to more general types of data and distances. A clustering is a set ${\cal C}=\{C_1,\ldots,C_K\}$ with $C_j\subseteq {\cal D},\ j=1,\ldots,K$. For $j=1,\ldots,K$, $n_j=|C_j|$ is the number of objects in $C_j$. Assume ${\cal C}$ to be a partition, i.e., $j\neq k \Rightarrow C_j \cap C_k=\emptyset$ and $\bigcup_{j=1}^K C_j={\cal D}$. Let $\gamma:\ \{1,\ldots,n\}\mapsto \{1,\ldots,K\}$ be the assignment function, i.e., $\gamma(i)=j \Leftrightarrow x_i\in C_j$.
\begin{description}
\item[Average within-cluster distances] (avewithin; aw; \cite{AkhHen20}). This index measures homogeneity in the sense of small distances within clusters. Smaller values are better.
\begin{displaymath}
I_{avewithin}(\mathcal{C}) = \frac{1}{n} \sum_{k=1}^{K} \frac{1}{n_k-1}\sum_{x_{i} \neq x_{j} \in C_{k}} d(x_{i},x_{j}).
\end{displaymath}
\item[Representation of cluster members by centroids.] In some applications cluster centroids are used in order to represent the clustered objects, and an important aim is that this representation is good for all cluster members. This is directly formalised by the objective functions of $K$-means (sum of squared distances from the cluster mean) and Partitioning Around Medoids (sum of distances from the cluster medoid). Both of these criteria have been used as internal validation indexes in the present study; however, results are not presented, because over all results both of them turn out to have a correlation larger than 0.95 with $I_{avewithin}$, so $I_{avewithin}$ can be taken to measure this clustering aspect as well.
\item[Maximum diameter] (maxdiameter; md). In some applications there may be a stricter requirement that large distances within clusters cannot be tolerated, rather than having only the distance average small. This can be formalised by
\begin{displaymath}
I_{maxdiameter}(\mathcal{C})=\max_{C\in\mathcal{C};x_i,x_j\in C}d(x_i,x_j).
\end{displaymath}
Smaller values are better.
\item[Widest within-cluster gap] (widestgap; wg; \cite{Hennig19}). Another interpretation of cluster homogeneity is that there should not be different parts of the same cluster that are separated from each other. This can be formalised by
\begin{displaymath}
I_{widestgap}({\cal C})=\max_{C\in {\cal C}, D, E:\ C=D\cup E}\min_{x\in D, y\in E}d(x,y).
\end{displaymath}
Smaller values are better.
\item[Separation index] (sindex; si; \cite{Hennig19}). This index measures whether clusters are separated in the sense that the closest distances between clusters are large. For every object $x_{i} \in C_{k}$, $i = 1, \ldots, n$, $k \in \{1, \ldots, K\}$, let $d_{k:i} = \min_{x_{j} \notin C_{k}} d(x_{i},x_{j})$.
Let $d_{k:(1)} \leq \ldots \leq d_{k:(n_{k})}$ be the values of $d_{k:i}$ for $x_{i} \in C_{k}$ ordered from the smallest to the largest, and let $[pn_{k}]$ be the largest integer $\leq pn_{k}$. The parameter $p$ tunes what proportion of observations counts as ``close to the border'' of a cluster with another; here, $p=0.1$. Then,
\begin{displaymath}
I_{sindex}(\mathcal{C};p) = \frac{1}{\sum_{k=1}^{K} [pn_{k}]} \sum_{k=1}^{K} \sum_{i=1}^{[pn_{k}]} d_{k:(i)}.
\end{displaymath}
Larger values are better. Analogously to the maximum diameter, the minimum separation, i.e., the minimum distance between any two clusters, may also be of interest. In the present study, this has a correlation of 0.93 with $I_{sindex}$, and results for the minimum separation are omitted for reasons of redundancy.
\item[Pearson-version of Hubert's $\Gamma$] (pearsongamma; pg; \cite{HubSch76}). This index measures to what extent the clustering corresponds to, or represents, the distance structure in the data. Let ${\bf d}={\rm vec}\left([d(x_i,x_j)]_{i<j}\right)$ be the vector of pairwise distances. Let ${\bf c}={\rm vec}\left([c_{ij}]_{i<j}\right)$, where $c_{ij}=1(\gamma(i)\neq\gamma(j))$, and $1(\bullet)$ denotes the indicator function, be a vector of ``clustering induced dissimilarities''. With $r$ denoting the sample Pearson correlation,
\begin{displaymath}
I_{Pearson\Gamma}({\cal C})=r({\bf d},{\bf c}).
\end{displaymath}
Larger values are better. This is one version of a family of indexes introduced in \cite{HubSch76}, sometimes referred to as ``Hubert's $\Gamma$''.
\item[Density mode index] (dmode; dm). An intuitive idea of a cluster is that it is associated with a density mode, and that the density goes down toward the cluster border. This is formalised by the ``dmode'' index. It is based on a simple kernel density estimator $h$ that assigns a density value $h(x)$ to every observation. Let $q_{d,p}$ be the $p$-quantile of the vector of dissimilarities ${\bf d}$, e.g., for $p=0.1$, the 10\% smallest dissimilarities are $\le q_{d,0.1}$. Define the kernel and density as
\begin{displaymath}
\kappa(d)=\left(1-\frac{1}{q_{d,p}}d\right)1(d\le q_{d,p}),\qquad h(x)=\sum_{i=1}^n \kappa(d(x,x_i)).
\end{displaymath}
The following algorithm constructs a sequence of neighbouring observations from the mode in such a way that the density should always go down, and penalties are incurred if the density goes up. It also constructs a set $T$ that collects information about high dissimilarities between high density observations used below. $I_{densdec}$ collects the penalties.
\begin{description}
\item[Initialisation] $I_{d1}=0$, $T=\emptyset$. For $j=1,\ldots,K$:
\item[Step 1] $S_j=\{x\}$, where $x=\mathop{\rm arg\,max}\limits_{y \in C_j}h(y)$.
\item[Step 2] Let $R_j=C_j\setminus S_j$. If $R_j=\emptyset$: $j=j+1$; if $j\le K$ go to Step 1, if $j>K$ go to Step 5. Otherwise:
\item[Step 3] Find $(x,y)=\mathop{\rm arg\,min}\limits_{(z_1,z_2): z_1\in R_j, z_2\in S_j}d(z_1,z_2)$. $S_j=S_j\cup\{x\}$, $T=T\cup \{\max_{z\in R_j}h(z)d(x,y)\}$.
\item[Step 4] If $h(x)>h(y):\ I_{d1}=I_{d1}+(h(x)-h(y))^2$. Go back to Step 2.
\item[Step 5] $I_{densdec}({\cal C})=\sqrt{\frac{I_{d1}}{n}}.$
\end{description}
It is possible that there is a large gap between two observations with high density, which does not incur penalties in $I_{densdec}$ if there are no low-density observations in between. This can be picked up by
\begin{displaymath}
I_{highdgap}({\cal C})=\max T.
\end{displaymath}
These two indexes, which are both better for smaller values, were defined in \cite{Hennig19}, but they can be seen as contributing to the measurement of the same aspect, with $I_{highdgap}$ just adding information missed by $I_{densdec}$. An aggregate version, which is used here, can be defined as
\begin{displaymath}
I_{dmode}({\cal C})=0.75 I_{densdec}^*({\cal C})+0.25 I_{highdgap}^*({\cal C}),
\end{displaymath}
where $I_{densdec}^*$ and $I_{highdgap}^*$ are suitably calibrated versions of $I_{densdec}$, $I_{highdgap}$, respectively, see Section \ref{scalibration}. The weights 0.75 and 0.25 in the definition of $I_{dmode}$ can be interpreted as the relative impact of the two sub-indexes.
\item[Cluster boundaries cutting through density valleys] (denscut; dc; \cite{Hennig19}). A complementary aspect of the idea that clusters are associated with high density regions is that cluster boundaries should run through density valleys rather than density mountains. The ``denscut''-index penalises a high contribution of points from different clusters to the density values in a cluster (measured by $h_o$ below).
\begin{displaymath}
\mbox{For }x_i,\ i=1,\ldots,n:\ h_o(x_i)=\sum_{k=1}^n \kappa(d(x_i,x_k)) 1(\gamma(k)\neq\gamma(i)).
\end{displaymath}
A penalty is incurred if for observations with a large density $h(x)$ there is a large contribution $h_o(x)$ to that density from other clusters:
\begin{displaymath}
I_{denscut}({\cal C})=\frac{1}{n}\sum_{j=1}^K\sum_{x\in C_j} h(x)h_o(x).
\end{displaymath}
Smaller values are better.
\item[Entropy] (en; \cite{Shannon48}). Although not normally listed as a primary aim of clustering, in many applications very small clusters are not very useful, and cluster sizes should optimally be close to uniform. This is measured by the well known entropy:
\begin{displaymath}
I_{entropy}(\mathcal{C}) = - \sum_{k=1}^{K} \frac{n_{k}}{n} \log\left(\frac{n_{k}}{n}\right).
\end{displaymath}
Larger values are better.
\item[Gaussianity of clusters] (kdnorm; nor; \cite{CorHen16}). Due to the Central Limit Theorem and a widespread belief that the Gaussian distribution approximates many real random processes, it may be of interest in its own right to have clusters that are approximately Gaussian. The index $I_{kdnorm}$ is defined, following \cite{CorHen16}, as the Kolmogorov distance between the empirical distribution of within-cluster Mahalanobis distances to the cluster means, and a $\chi^2_p$-distribution, which is the distribution of Mahalanobis distances in perfectly Gaussian clusters. Smaller values are better.
\item[Coefficient of variation of distances to within-cluster neighbours] (cvnnd; cvn; \cite{Hennig19}). Another within-cluster distributional shape of potential interest is uniformity, where clusters are characterised by a uniform within-cluster density level. This can be characterised by the coefficient of variation (CV) of the dissimilarities to the $k$th nearest within-cluster neighbour $d^k_w(x)$ ($k=2$ is used here). Define for $j=1,\ldots,K$, assuming $n_j>k$:
\begin{displaymath}
m(C_j;k)=\frac{1}{n_j}\sum_{x\in C_j}d^k_w(x),\qquad {\rm CV}(C_j)=\frac{\sqrt{\frac{1}{n_j-1}\sum_{x\in C_j}(d^k_w(x)-m(C_j;k))^2}} {m(C_j;k)}.
\end{displaymath}
Using this,
\begin{displaymath}
I_{cvnnd}({\cal C})=\frac{\sum_{j=1}^K n_j {\rm CV}(C_j)1(n_j>k)} {\sum_{j=1}^K n_j1(n_j>k)}.
\end{displaymath}
Smaller values are better.
\item[Average Silhouette Width] (asw; \cite{KauRou90}).
This is a popular internal validation index that deviates somewhat from the ``philosophy'' behind the collection of indexes presented here, because it attempts to balance two aspects of cluster quality, namely homogeneity and separation. It has been included in the study anyway, because it also uses an intuitive direct formalisation of clustering characteristics of interest. For $i = 1, \ldots, n$, define the ``silhouette width''
\begin{displaymath}
s_{i}=\frac{b_{i}-a_{i}}{\max{\left\{a_{i}, b_{i}\right\}}} \in [-1,1],
\end{displaymath}
where, denoting by $l_i=\gamma(i)$ the cluster to which $x_i$ is assigned,
\begin{displaymath}
a_{i}=\frac{1}{n_{l_i}-1} \sum_{x_{j} \in C_{l_i}} d(x_{i}, x_{j}),\ b_{i}=\min_{h \neq l_i} \frac{1}{n_{h}} \sum_{x_{j} \in C_{h}} d(x_{i}, x_{j}).
\end{displaymath}
The Average Silhouette Width is then defined as
\begin{displaymath}
I_{asw}(\mathcal{C}) = \frac{1}{n} \sum_{i=1}^{n} s_{i}.
\end{displaymath}
Larger values are better.
\end{description}

\subsection{Calibrating the indexes}
\label{scalibration}
In order to aggregate the indexes introduced in Section \ref{sinternal} over different data sets, and to compare the performance of a clustering method across the indexes in order to characterise it, it is necessary to calibrate the values of the indexes so that they become comparable. This is done as in \cite{Hennig19,AkhHen20}. The idea is to generate a large number $m$ of ``random clusterings'' $\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm}$ on the data. Denote the clusterings of the $q=9$ methods from Section \ref{sclustering} by ${\mathcal C}_1,\ldots,\mathcal{C}_q$. For a given data set ${\cal D}$ and index $I$, first change $I$ to $-I$ in case that smaller values are better according to the original definition of $I$, so that for all calibrated indexes larger values are better. Then use these clusterings to standardise $I$:
\begin{eqnarray*}
m(I,{\cal D})&=&\frac{1}{m+q}\left(\sum_{i=1}^m I(\mathcal{C}_{Ri})+ \sum_{i=1}^q I(\mathcal{C}_{i})\right),\\
s^2(I,{\cal D})&=& \frac{1}{m+q-1}\left(\sum_{i=1}^m \left[I(\mathcal{C}_{Ri})- m(I,{\cal D})\right]^2+ \sum_{i=1}^q \left[I(\mathcal{C}_{i})-m(I,{\cal D})\right]^2\right),\\
I^*(\mathcal{C}_{i})&=&\frac{I(\mathcal{C}_i)-m(I,{\cal D})}{s(I,{\cal D})},\ i=1,\ldots,q.
\end{eqnarray*}
$I^*$ is therefore scaled so that its values can be interpreted as expressing the quality (larger is better) compared to what the collection of clusterings $\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm},{\mathcal C}_1,\ldots,\mathcal{C}_q$ achieves on the same data set.

The approach depends on the definition of the random clusterings. These should generate enough random variation in order to work as a tool for calibration, but they also need to be reasonable as clusterings, because if all random clusterings are several standard deviations away from the ``proper'' clusterings, the exact distance may not be very meaningful. They also need to be fast to generate, as many of them will be required in order to calibrate the index values for every single data set. Four different algorithms are used for generating the random clusterings; for details see \cite{AkhHen20}. For clusterings with $K$ clusters, these are:
\begin{description}
\item[Random $K$-centroids:] Draw $K$ observations from ${\cal D}$. Assign every observation to the nearest centroid.
\item[Random nearest neighbour:] Draw $K$ observations as starting points for the $K$ clusters.
At every stage, among the observations that are not yet clustered, let $x$ be the observation with the smallest distance to its nearest already clustered neighbour; assign $x$ to the cluster of that neighbour.
\item[Random farthest neighbour:] As random nearest neighbour, but $x$ is the observation that has the smallest distance to the minimum farthest cluster member.
\item[Random average distances:] As random nearest neighbour, but $x$ is the observation that has the smallest average distance to the closest cluster.
\end{description}
Experience shows that these methods generate a range of clusterings that have sufficient variation in characteristics and are mostly reasonably close to the proper clustering methods (as can be seen in \cite{AkhHen20} as well as from the results of the present study). Here, 50 random clusterings from each algorithm are generated, i.e., $m=200$. All results in Section \ref{sresults} are given in terms of calibrated indexes $I^*$.

\subsection{External validation indexes}
``Truth'' recovery is measured by external validation indexes that quantify the similarity between two clusterings on the same data, here the ``true'' one and a clustering generated by one of the clustering methods. Probably the most popular external validation index is the Adjusted Rand Index (ARI; \cite{HubAra85}). This index is based on the relative number of pairs of points that are in the same cluster in both clusterings or in different clusters in both clusterings, adjusted for the number of clusters and the cluster sizes in such a way that its expected value under random cluster labels with the same number and sizes of clusters is 0. The maximum value is 1 for perfect agreement. Values can be negative, but already a value of 0 can be interpreted as indicating that the two clusterings have nothing to do with each other.

In some work, the ARI has been criticised, often in the framework of an axiomatic approach where it can be shown that it violates some axioms taken to be desirable, e.g., \cite{Meila07,AGAV09}. Alternative indexes have been proposed that fulfill the presented axioms. \cite{Meila07} introduced the Variation of Information (VI), which is a proper metric between partitions. This means that, as opposed to the ARI, smaller values are better. In Section \ref{sresults}, the negative VI is considered so that for all considered indexes larger values are better. The VI is defined by comparing the entropies of the two clusterings with the so-called ``mutual information'', which is based on the entropy of the intersections between two clusters from the two different clusterings. If the two clusterings are the same, the entropy of the intersections between clusters is the same as the entropy of the original clusterings, meaning that the VI is zero, its minimum value. \cite{AGAV09} show that their axioms are fulfilled by an index called BCubed, first proposed in \cite{BagBal98}. This index is based on observation-wise concepts of ``precision'' and ``recall'', i.e., what percentage of observations in the same cluster are from the same ``true'' class, and what percentage of observations in a different cluster is ``truly'' different. It takes values between 0 and 1, with 1 corresponding to perfect agreement. See \cite{Meila15} for further discussion and some more alternatives.

\section{Results}
\label{sresults}
Three issues are addressed:
\begin{itemize}
\item How can the clusters produced by the methods be characterised in terms of the internal validation indexes?
\item How do the methods perform regarding the recovery of the ``true'' clusterings?
\item Can the recovery of the ``true'' clusterings be related to the internal validation indexes?
\end{itemize}

\subsection{Characterisation of the methods in terms of the internal indexes}
\label{scharacterisation}
The methods can be characterised by the distribution of values of the calibrated internal validation indexes, highlighting the dominating features of the clusterings that they produce. In order to do this, parallel coordinate plots will be used that show the full results, including how results belonging to the same data set depend on each other. I decided against running null hypothesis tests due to issues of multiple testing and model assumptions; the plots allow a good assessment of whether differences between methods are meaningful, dominated by random variation, or borderline. Although the calibrated values could in principle be compared across indexes, as they are all relative to the same ensemble of method-based and random clusterings, the plots compare the different clustering methods for each index, as the comparison of the clustering methods gives information beyond the performance relative to the random clusterings.

\begin{figure}
\caption{Calibrated values of $I_{avewithin}$ (left) and $I_{maxdiameter}$ (right). Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fawmd}
\end{figure}

\begin{description}
\item[Average within-cluster distances] (left side of Figure \ref{fawmd}): The two centroid-based methods $K$-means and clara achieve the best results. The Gaussian and $t$-mixture are about at the same level as spectral clustering; complete linkage and the mixture of skew $t$-distributions are worse. Average linkage is behind these, and single linkage is the worst by some distance. Results regarding representation of the data by centroids are not shown and look largely the same. The only additional distinctive feature is that $K$-means is better than clara looking at squared Euclidean distances to the centroid, whereas clara is better for unsquared distances. This was to be expected, as it corresponds to what $K$-means and clara, respectively, attempt to optimise.
\item[Maximum diameter] (right side of Figure \ref{fawmd}): Unsurprisingly, complete linkage is best; at each step it merges clusters so that the maximum diameter is the smallest possible, although it is not optimal for every single data set (the hierarchical scheme will not normally produce a global optimum). Average linkage is second best, followed by $K$-means, clara, and single linkage, which somewhat surprisingly avoids large distances within clusters more than spectral clustering and the three mixture models. Another potential surprise is that the Gaussian mixture does not do better than the $t$-mixture in this respect; a flexible covariance matrix can occasionally allow for very large within-cluster distances.

\begin{figure}
\caption{Calibrated values of $I_{widestgap}$ (left) and $I_{sindex}$ (right). Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fwgsi}
\end{figure}

\item[Widest within-cluster gap] (left side of Figure \ref{fwgsi}): The three linkage methods are best at avoiding large within-cluster gaps, with single linkage in first place, as it will not join sets between which there is a large gap. The two centroid-based methods follow; however, differences between them, the three mixture models, and spectral clustering look small compared to the variance, and dominated by outliers. The skew $t$-mixture produces very large within-cluster gaps for a number of data sets. With strong skewness there can be large distances in a tail of a cluster.
\item[Separation index] (right side of Figure \ref{fwgsi}): Single linkage achieves the best results here. Its clustering process keeps separated subsets in distinct clusters (often one-point clusters with strongly separated outliers). The two other linkage methods follow. Complete linkage is sometimes portrayed as totally prioritising within-cluster homogeneity over separation, but in fact regarding separation it does better than spectral clustering, which is still a bit better than the centroid-based and the mixture models, between which differences look insignificant.

\begin{figure}
\caption{Calibrated values of $I_{Pearson\Gamma}$ (left) and $I_{dmode}$ (right). Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fpgdm}
\end{figure}

\item[Pearson-$\Gamma$] (left side of Figure \ref{fpgdm}): The average results for the methods regarding the representation of the distance structure by the clustering vary relatively little compared to the variation over data sets. Average linkage is overall best, and the skew $t$-mixture worst, even if the latter has good results in some data sets. Single linkage occasionally does very well, but also worse than the others for a number of data sets.
\item[Density mode index] (right side of Figure \ref{fpgdm}): Results here are dominated by variation between data sets as well. Interestingly, the methods based on mixtures of unimodal distributions do not do best here, but rather clara and spectral clustering. Once more the mixture of skew $t$-distributions does worst, with outliers in both directions.

\begin{figure}
\caption{Calibrated values of $I_{denscut}$ (left) and $I_{entropy}$ (right). Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fdcen}
\end{figure}

\item[Density cutting] (left side of Figure \ref{fdcen}): Due to its focus on cluster separation, single linkage is best at avoiding cutting through density mountains. The skew $t$- and $t$-mixture have the strongest tendency to put cluster boundaries in high density areas, but differences between methods are not large.
\item[Entropy] (right side of Figure \ref{fdcen}): clara yields the highest average entropy followed by $K$-means, but differences between these and the three mixture models do not seem significant. This runs counter to the idea, sometimes found in the literature, that $K$-means favours similar cluster sizes more than mixtures, or even implicitly assumes them. The other four methods have a clear tendency to produce less balanced clusters, particularly single linkage, but also average and complete linkage, and to a lesser extent spectral clustering.

\begin{figure}
\caption{Calibrated values of $I_{kdnorm}$ (left) and $I_{cvnnd}$ (right). Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fnorcvd}
\end{figure}

\item[Gaussianity] (left side of Figure \ref{fnorcvd}): Although the Gaussian mixture produces on average the most Gaussian-looking clusters, as was to be expected, the differences between all nine methods look largely insignificant. The Gaussian mixture has positive and negative outliers, the skew $t$-mixture only negative ones.
\item[CV of distances to within-cluster neighbours] (right side of Figure \ref{fnorcvd}): Despite one lower outlier, the Gaussian mixture tends to produce the largest cvnnd, i.e., the lowest within-cluster CVs. It probably helps that large variance clusters can bring together observations that have large distances between each other and to the rest. clara and the $t$-mixture produce the lowest cvnnd values. Differences between the other methods are rather small.

\begin{figure}
\caption{Calibrated values of the ASW. Values belonging to the same data set are connected by lines.
The thick red line gives the average values.} \label{fasw} \end{figure} \item[Average silhouette width] (left side of Figure \ref{fasw}): Average linkage is a method that explicitly balances separation and homogeneity, and consequently it achieves the best ASW values. $K$-means achieves higher values than complete linkage, but the remaining methods do worse than the linkage methods. ASW had been originally proposed for use with clara (\cite{KauRou90}), but clara does not produce particularly high ASW values, if better than the mixture models and spectral clustering. \end{description} These results characterise the clustering methods as follows: \begin{description} \item[kmeans] clearly favours within-cluster homogeneity over separation. It does not favour entropy as strongly as some literature suggests; in this respect it is in line with clara and the mixture models, ahead of the remaining methods. It should be noted that entropy is treated here as a potentially beneficial feature of a clustering, whereas some literature makes it seem like a defect of kmeans that such solutions are favoured (as far as this in fact happens). \item[clara] has largely similar characteristics to kmeans. It is slightly worse regarding the representation of the distance structure and the ASW. It is slightly better regarding clusters with density decrease from the mode. This may have to do with the fact that the density goes down faster from the mode for the multivariate Laplace distribution (where the log-likelihood sums up unsquared distances) than for the Gaussian distribution (which corresponds to squared distances). \item[mclust] produces clusters with the highest Gaussianity, but only by a rather insignificant distance. It is best regarding uniformity as measured by cvnnd. The reason for this is probably its ability to build clusters with large within-cluster variation collecting observations that have large distances to all or most other points, whereas other methods either need to isolate such observations in one-point clusters, or integrate them in clusters with denser cores. Mixtures of $t$- and skew $t$-distributions could in principle also produce large variance clusters, but the shapes of $t$- and skew $t$-distributions allow to integrate outlying observations more easily with denser regions. mclust often tolerates large within-cluster distances, whereas its clusters are not on average better separated than those from $K$-means. On the other hand, its cluster sizes are not significantly less well balanced. Its ability to produce clusters with strongly different within-cluster variance makes it less suitable regarding Pearson-$\Gamma$ and the ASW, which treat distances in the same way in all clusters. \item[emskewt] looks bad on almost all internal indexes. It is not particularly bad regarding recovery of the ``true'' clusters though, see Section \ref{srecovery}. This means that the current collection of internal indexes does not capture favourable characteristics of skewly distributed clusters appropriately; it also means that emskewt is not an appropriate method for finding clusters with the characteristics that are formalised by the internal indexes. \item[teigen] has a profile that is by and large very similar to the one of mclust, apart from being slightly better regarding the maximum diameter, and slightly worse regarding Gaussianity and uniformity. \item[single linkage] has a very distinct profile. 
It is best regarding separation, avoiding wide within-cluster gaps, and cluster boundaries through density valleys, and worst by some distance regarding within-cluster homogeneity and entropy. \item[average linkage] has similar strengths and weaknesses as single linkage, but not as extreme. It is the best method regarding Pearson-$\Gamma$ and the ASW, both of which balance homogeneity and separation and measure therefore how much the clustering is in line with the distance structure. \item[complete linkage] is best regarding the maximum diameter. In most other respects it stands between single and average linkage on one side and the centroid- and mixture-based methods on the other side. \item[spectral] is another method that provides a compromise between the rather separation-oriented single and average linkage on one side and the rather homogeneity-oriented centroid- and mixture-based methods. Its maximum cluster diameter is rather high on average. Its mode index value is good if not clearly different from the one of clara. Its mid-range entropy value may look attractive in applications in which a considerable degree of imbalance in the cluster sizes may seem realistic but the tendency of the linkage methods to produce one-point clusters should be avoided. \end{description} The multivariate characterisation of the clustering methods also allows to map them, using a principal components analysis (PCA). Results of this are shown in Figure \ref{fpc}. On the left side, PCs are shown using every index value for every data set as a separate value, i.e., 42*11 variables. The first two PCs carry 30.9\% and 16.6\% of the variance, respectively. On the right side, the PCA is performed on 11 variables that give average index values over all data sets. While this reduces information, it allows to show the indexes as axes in a biplot. The first two PCs here carry 50.0\% and 19.7\% of the variance, respectively. After rotation, the maps are fairly similar. Using the more detailed data set information, spectral seems much closer to kmeans and clara than to mclust and teigen, but the apparent similarity to the latter ones using average index values is an effect of dimension reduction; involving information from the third PC (not shown), the similarity structure is more similar to that of the plot using all 42*11 variables. The biplot on the right side shows the opposite tendencies of separation on one hand and entropy and average within distances on the other hand when characterising the methods, with indexes such as maximum diameter, density mode, Pearson-$\Gamma$, and the ASW opening another dimension, rather corresponding to kmeans, average, and complete linkage. Qualitative conclusions from these maps agree roughly with those in \cite{JTLB04}, where more clustering algorithms, but fewer data sets, were involved. \begin{figure} \caption{Clustering methods mapped on first two principal components from using all data sets separately (left side), and from using mean values over the data sets (right side).} \label{fpc} \end{figure} The study data allow to also investigate the values of the internal indexes computed for the ``true'' clusterings. These are shown in Figure \ref{ftrueindex}. Only the entropy and Gaussianity are clearly above the mean zero of the random clustering ensemble (which includes the solutions from the proper clustering methods as a small minority), and also above the mean for the clustering methods. 
The clustering methods are on average all above zero, which should be expected, because these are meant to be desirable features of a good clustering, and as such should be better for the proper clustering methods than for the random ones. The methods achieve the highest average for the ASW, which makes sense as this attempts to measure general clustering quality. The fact that index values are mostly below zero for the ``true'' clusterings can be interpreted in such a way that many given ``true'' clusterings are data analytically wanting. The high values for entropy are probably artificial, due to a biased choice of data sets. The high values for Gaussianity, however, could suggest that there is a tendency in some real clusters, i.e., homogeneous subpopulations, to approximate the Gaussian distribution. A possible explanation is that in a crisp clustering of a data set produced by a clustering method, tails of a within-cluster distribution tend to be cut off in the direction of other clusters, whereas ``true'' clusters tend to have some proper overlap (clearly separated clusters are in my experience indeed rare in real data), which is in line with the low values of the separation and denscut (cluster boundaries running through density valleys) index. This probably also affects the ASW and Pearson-$\Gamma$. \begin{figure} \caption{The boxplots show the distributions of the internal indexes computed on the ``true'' clusterings. The red line shows the average index values produced by the clustering methods.} \label{ftrueindex} \end{figure} \subsection{Recovery of ``true'' clusterings} \label{srecovery} The quality of the recovery of the ``true'' clusterings is measured by the ARI, BCubed, and the VI. Figure \ref{fari} shows the ARI-values achieved by the different clustering methods. On average, there is a clear advantage of the centroid- and mixture-based methods compared with the linkage methods (single linkage is clearly the worst), and spectral clustering is in between. Every method achieves good results on some data sets, but the linkage methods produce an ARI around zero on many data sets. Differences between kmeans, clara, mclust, emskewt, and teigen do not seem significant but are clearly dominated by variation. On some data sets all methods produce very low values, and no method achieves an ARI larger than 0.5 on more than half of the data sets. The mean ARI is 0.28, the mean ARI of the best clusterings for every data set is 0.46. Interpreting these numbers, it has to be kept in mind that the given ``true'' clustering does not necessarily qualify as the best clustering of the data from a data analytic point of view; some of these are neither homogeneous nor separated. Furthermore there may be meaningful clusters in the data that differ from those declared as ``truth''. A better recovery does not necessarily mean that a method delivers the most useful clustering that can be found. On the other hand, some given ``true'' clusterings correspond to clearly visible patterns in the data, and at least some methods manage to find them. Overall, the variation is quite high. The picture changes strongly looking at the results regarding BCubed and particularly VI, see Figure \ref{fbcvi}. BCubed still shows single linkage as the weakest method, but otherwise differences look hardly significant, and according to the VI, the average quality of the methods is almost uniform. \begin{figure} \caption{Adjusted Rand Index values by method. Values belonging to the same data set are connected by lines. 
The thick red line gives the average values.}
\label{fari}
\end{figure}

\begin{figure}
\caption{BCubed and negative Variation of Information values by method. Values belonging to the same data set are connected by lines. The thick red line gives the average values.}
\label{fbcvi}
\end{figure}

Further exploratory analysis (not shown) reveals that better values of the external indexes are systematically associated with lower data dimension $p$ and lower sample size $n$, the latter probably because of confounding with the correlated dimension. There was no clear interaction with the methods, and no clear pattern regarding the number of clusters $K$.

Table \ref{tbest} shows how often the different methods come out as the best according to the indexes. This portrays mclust as very successful at recovering the ``truth''. Spectral clustering is hardly ever on top, but it has values very close to the best for a number of data sets. Given that emskewt looks so bad regarding the internal indexes in Section \ref{scharacterisation}, its performance regarding the external indexes looks surprisingly good. The most striking difference between the indexes is that single linkage is not the best method for a single data set with respect to the ARI, but it is the best for 11 data sets with respect to the VI. This is explored in the following.

\begin{table}
\begin{tabular}{|r|lllllllll|}
\hline
 & \multicolumn{9}{|c|}{Clustering methods}\\
\hline
Index & \tiny{kmeans} & \tiny{clara} & \tiny{mclust} & \tiny{emskewt} & \tiny{teigen} & \tiny{single} & \tiny{average} & \tiny{complete} & \tiny{spectral}\\
\hline
ARI & 3 & 4 & 8 & 5 & 5 & 0 & 3 & 1 & 1\\
BCubed & 2 & 2 & 7 & 5 & 3 & 4 & 4 & 2 & 1\\
VI & 2 & 1 & 6 & 3 & 3 & 11 & 2 & 1 & 1\\
\hline
\end{tabular}
\caption{Number of times that a method comes out best according to the three external indexes.}
\label{tbest}
\end{table}

\begin{figure}
\caption{Pairs plot of ARI, BCubed, and VI.}
\label{fexternal}
\end{figure}

Figure \ref{fexternal} shows how the three indexes are related to each other over all nine clustering methods applied to the 30 data sets with ``true'' clusterings. VI and BCubed have a correlation $\rho$ of -0.94, but the ARI is correlated substantially more weakly with both, $\rho=0.75$ with BCubed and $\rho=-0.57$ with VI. BCubed can therefore be seen as a compromise between the two. Exploring what causes the differences between ARI and VI, Figure \ref{fexternal} shows that the major issue is that the VI can produce fairly good values close to zero in some situations in which the ARI is around zero (indicating unrelated clusterings) or only slightly better. Generally these situations tend to occur where one clustering is very imbalanced, mostly with one or more one-point clusters, whereas the other one (more often the ``true'' one) is not. The VI involves cluster-wise percentages of points occurring together in the same cluster in the other clustering, and therefore assesses one-point clusters favourably, whereas the random labels model behind the ARI indicates that what happens with the object in a one-point cluster in another (potentially ``true'') clustering is random and therefore not meaningful as long as it appears in a substantially bigger cluster there. For example, consider the data set ``22 - Wholesale'' (see Appendix A2). According to the VI, the single linkage clustering is optimal (VI$=0.64$), but this has an ARI-value of about 0. It is second best according to BCubed with a value of 0.72.
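To make the difference in behaviour concrete, the following minimal base-R sketch (for illustration only; it is not the code used for the study, and the function names are made up here) computes the ARI and the VI directly from the cross-tabulation of two clusterings. Applied to the contingency table for ``22 - Wholesale'' given in Table \ref{tslvi} below, and using natural logarithms for the VI, it reproduces the values reported above up to rounding.
\begin{verbatim}
## ARI from a contingency table of two clusterings
ari_from_table <- function(tab) {
  n <- sum(tab)
  sum_ij <- sum(choose(tab, 2))            # pairs together in both clusterings
  sum_a  <- sum(choose(rowSums(tab), 2))   # pairs together in clustering 1
  sum_b  <- sum(choose(colSums(tab), 2))   # pairs together in clustering 2
  expected <- sum_a * sum_b / choose(n, 2)
  (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)
}

## VI = H(U) + H(V) - 2 I(U,V), with natural logarithms
vi_from_table <- function(tab) {
  n <- sum(tab)
  p <- rowSums(tab) / n
  q <- colSums(tab) / n
  pij <- tab / n
  mi <- sum(ifelse(pij > 0, pij * log(pij / outer(p, q)), 0))
  -sum(p * log(p)) - sum(q * log(q)) - 2 * mi
}

wholesale <- matrix(c(297, 1, 142, 0), nrow = 2, byrow = TRUE)
ari_from_table(wholesale)  # about 0 (slightly negative)
vi_from_table(wholesale)   # about 0.64
\end{verbatim}
(Base-2 logarithms are also common for the VI in the literature; natural logarithms are what reproduces the value reported here.)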
Table \ref{tslvi} shows how this is related to the ``true'' clustering. In favour of this clustering it can be said that single linkage cluster 2 is ``pure'' regarding the truth; however, it is clear that any random clustering that fixes one cluster size as 1 will be about equally good. This is a rather extreme case, however most of the assessment differences between ARI and VI (and to a lesser extent BCubed) are of a similar kind. This makes the ARI look like the more appropriate index here. \begin{table}[tbp] \caption{Contingency table of ``true'' clustering and single linkage clustering for data set ``22 - Wholesale''} \begin{tabular}{|r|ll|} \hline & \multicolumn{2}{|c|}{Single linkage cluster} \\ \hline Truth & 1 & 2 \\ \hline 1 & 297 & 1\\ 2 & 142 & 0\\ \hline \end{tabular} \label{tslvi} \end{table} \subsection{Relating ``true'' cluster recovery to the internal indexes} \begin{figure} \caption{Correlation matrix of internal and external validation indexes} \label{fcor13} \end{figure} It is of interest whether the internal index values, which are observable in a real situation, can explain to some extent the performance regarding the ``true'' cluster recovery. A tool to assess this is a linear regression with an external index as response, and the internal indexes as explanatory variables. There is dependence between the different clusterings on the same data set, and this can be appropriately handled using a random data set effect. An important issue is that the internal indexes are correlated, which can make the interpretation of the regression results difficult. Figure \ref{fcor13} shows the correlation structure among the internal indexes, ARI and -VI (BCubed is not taken into account in this section due to the high correlation with VI). The order of indexes in Figure \ref{fcor13} was determined by a hierarchical clustering using correlation dissimilarity, however -VI and ARI were put on top due to their different role in the regression, and the ASW was put at the bottom. The ASW is not involved in the regression, as it is defined in order to compromise between homogeneity and separation, which themselves are represented by other internal indexes. It is involved in Figure \ref{fcor13} because its correlation to the other indexes may be of interest anyway. One thing that can be seen is that it is fairly strongly correlated to a number of other indexes, particularly maximum diameter, Pearson-$\Gamma$, and the separation index, but rather weakly to the average within-cluster distances meant to formalise homogeneity. Considerable correlation occurs between the average within-cluster distances and the entropy. Both of these are the internal indexes with the highest correlation to the ARI. This is a problem for interpretation because this means that entropy and homogeneity are confounded when explaining recovery success. Furthermore, both, entropy in particular, are strongly negatively correlated with separation, which may explain the negative correlation between separation and the ARI. There is no further high ($>0.2$) correlation between either -VI or ARI and other internal indexes. It is obvious that the ARI is closer connected to entropy and homogeneity, whereas the -VI is more positively connected to separation. There are a number of further correlations among the internal indexes; separation, the density mode and cut indexes, Pearson-$\Gamma$, the maximum cluster diameter, and the absence of large within-cluster gaps are all positively connected. 
The Gaussianity index and the nearest neighbours CV are correlated 0.24 to each other; all their other correlations are lower.

\begin{table}
\caption{Mixed-effects regression results regressing ARI, -VI, respectively, on the internal indexes excluding the ASW.}
\begin{tabular}{|l|rrr|rrr|}
\hline
Response & \multicolumn{3}{|c|}{ARI} & \multicolumn{3}{|c|}{-VI}\\
\hline
Indexes & Coefficient & $t$ & $p$ & Coefficient & $t$ & $p$ \\
\hline
Intercept & .324 & 6.91 & .000 & -1.54 & -10.11 & .000\\
avewithin & -.019 & -1.34 & .181 & 0.03 & 0.88 & .377\\
maxdiameter & -.025 & -4.03 & .000 & 0.01 & 0.64 & .520\\
widestgap & .014 & 2.00 & .047 & -0.00 & -0.21 & .814\\
sindex & -.010 & -1.65 & .101 & 0.05 & 3.84 & .000\\
pearsongamma & .020 & 2.43 & .016 & -0.04 & -1.86 & .064\\
dmode & .009 & 0.89 & .374 & 0.05 & 1.92 & .056\\
denscut & .000 & 0.03 & .978 & -0.05 & -1.80 & .074\\
entropy & .088 & 4.69 & .000 & 0.00 & 0.01 & .990\\
kdnorm & .024 & 3.51 & .001 & 0.02 & 1.44 & .151\\
cvnnd & -.006 & -0.86 & .388 & -0.01 & -0.48 & .633\\
\hline
random eff. (data set) & & &.000 & & & .000\\
\hline
\end{tabular}
\label{treg}
\end{table}

Table \ref{treg} gives the results of two regression analyses, with ARI and -VI as responses, with a random data set effect. These results have been obtained with the R-function \verb|lme| in the R-package \verb|nlme| (\cite{PinBat00}). $p$-values are interpreted in an exploratory manner, as they are not precise. However, the null hypotheses of zero effect of a variable, given that all other variables are in the model, are of interest here.

The ARI regression has maximum diameter, entropy, and Gaussianity as highly significant effects; Pearson-$\Gamma$ is clearly significant at the 5\% level. widestgap is borderline significant, which is potentially not meaningful given the number of tests. The interpretation of entropy (which has the clearly largest $t$-value) is problematic for two reasons. Firstly, due to correlation, its coefficient may partly carry information belonging to avewithin. Secondly, eight data sets have artificially balanced classes, which may favour entropy among good clusterings. The regression was re-run excluding those data sets (not shown), yielding by and large the same significances including entropy, but its $t$-value fell to 2.75. Even in this scenario it cannot be excluded that the ``sample'' of data sets with known ``true'' clusters favours entropy artificially. Gaussianity seems to be a valuable predictor for recovery of ``true'' classes. The maximum diameter has a negative coefficient, meaning that on average, and controlled for all other indexes, a larger (therefore worse) maximum cluster diameter went with a better ``truth'' recovery regarding the ARI. It is however clearly correlated with Pearson-$\Gamma$ and widestgap, which have positive effects.

Despite a positive relationship between ARI and -VI, the results of the VI-regression are very different, mainly because -VI can achieve high values for clusterings with very low entropy even if the ``true'' clustering is balanced. This means that there is no bias in favour of entropy by the data set sample; rather, the VI seems biased against entropy by definition, see above. The only clearly significant index for -VI is the separation index, with a positive coefficient, which was not significant in the ARI-regression. Plots of the fitted values of both regressions against their response variable (not shown) look satisfactorily linear.
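As a pointer for readers who wish to reproduce this type of analysis, the following is a minimal sketch of the model specification behind Table \ref{treg}, assuming a data frame with one row per combination of data set and clustering method. The data are simulated here purely for illustration, the variable names are only chosen to match the index abbreviations, and a subset of the indexes is included for brevity.
\begin{verbatim}
## Sketch only: random intercept per data set, fixed effects for the
## (calibrated) internal indexes; simulated data, not the study data.
library(nlme)

set.seed(1)
idx <- data.frame(
  dataset      = factor(rep(1:30, each = 9)),  # 30 data sets x 9 methods
  avewithin    = rnorm(270),
  maxdiameter  = rnorm(270),
  pearsongamma = rnorm(270),
  entropy      = rnorm(270),
  kdnorm       = rnorm(270)
)
## response simulated loosely along the lines of the estimated effects
idx$ARI <- 0.3 + 0.09 * idx$entropy + 0.02 * idx$kdnorm + rnorm(270, sd = 0.2)

fit <- lme(ARI ~ avewithin + maxdiameter + pearsongamma + entropy + kdnorm,
           random = ~ 1 | dataset, data = idx)
summary(fit)$tTable  # fixed-effect coefficients, t- and p-values
\end{verbatim}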
In principle, the regressions could be used to predict the ARI or VI for data sets with unknown ``truth'' from the observable internal indexes, but this will not work very well, due to the strong data set effect. Overall these results do not allow clear-cut conclusions, due to correlation, issues with the representativity of the data sets, and the very different patterns observed for ARI and VI. The character of the ``true'' clusterings may just be so diverse that no general statement about which clustering characteristics allow for good recovery can be made. Preferring the ARI as external index, the only safely interpretable significance seems to be the one of Gaussianity, due to its low correlation with other indexes. Separation seems to help in terms of the VI, but this includes favouring clusterings that separate outliers as one-point clusters, arguably an issue with the VI.

\section{Discussion}
\label{sdiscussion}
The aim of this study is to characterise the clustering methods in terms of the internal indexes, and to learn about the recovery of ``true'' clusterings, both regarding the methods and regarding characteristics that could be connected to recovery.

Regarding the characterisation of the clustering methods, the right side of Figure \ref{fpc} is probably most expressive, locating the clustering methods relative to the internal indexes. Some indexes do not separate the methods very strongly. Single linkage stands out as being quite different from most other methods in many respects. On the other hand, the centroid-based methods, the mixture-based methods and spectral clustering have much in common; one surprising result is that $K$-means does not favour balanced cluster sizes particularly strongly, compared to the mixture-based methods. Another result is that single and complete linkage are not opposite extremes; rather, on most of the characteristics on which single linkage is distinctive, complete linkage is closer to single linkage (with average linkage in between) than to the centroid-based and mixture-based methods. Gaussian mixture-based clustering stands out more by its good value regarding uniformity (cvnnd) than regarding Gaussianity of the clusters.

Regarding the recovery of ``true'' clusterings, there is large variation between the data sets. According to the ARI and BCubed, the Gaussian mixture is the best for the largest number of data sets. Single linkage does badly regarding the ARI. Differences between the other methods are not that pronounced, and all of them did best on some data sets. This includes the skew $t$-mixture, which does not look good according to the internal indexes but looks better regarding the external indexes. There is currently no index, at least in the collection used here, that formalises in which sense such a mixture can yield a good clustering. This is a topic for further work. According to the VI (and to some extent BCubed), single linkage does much better, but this indicates a problem with these indexes rather than a good performance of single linkage. Explaining the ``true'' cluster recovery by the internal indexes does not deliver very clear results, except that Gaussianity seems to help, which is sometimes achieved by the Gaussian mixture, but only insignificantly more often than by some other methods. A critical interpretation could be that quality according to the internal indexes does not really measure what is important for recovery.
On the other hand one could argue that this shows the heterogeneity of ``true'' clusterings, and that there is no ``one fits it all approach'', neither for clustering, nor for measuring clustering quality. The given ``true'' clusterings are of such a nature that their recovery cannot be reliably predicted from observable cluster characteristics. Some problems were exposed with the non-representativity of the data sets, with ``true'' clusterings, and with the VI (and somewhat less extreme the BCubed) index. These problems are not exclusive to the present study, and it can be hoped that these issues are on the radar whenever such benchmark studies are run. These problems affect analyses involving the ``true clusterings'' in particular. There is no reason to believe that the results regarding the internal validation indexes are biased for these reasons. \section*{Appendix} \subsection*{A1: Computational details} The following amendments were made to the clustering functions listed in Section \ref{sclustering}: \begin{description} \item[Mixture models:] Crisp partitions have always been enforced by assigning observations to the cluster with maximum posterior probability of membership. \item[kmeans:] This was run with parameter \verb|runs=10|, governing the number of random initialisations. The default value is \verb|runs=1|, which yields very unstable results. \item[emskewt] The function \verb|EmSkew| would occasionally produce errors or invalid results. It is run inside a wrapper function that enforces a solution in the following way: For each covariance matrix model\footnote{The shape of a skew t-distribution is defined by the covariance matrix of an involved Gaussian mixture, see \cite{LeeMcL13}, although this is not the covariance matrix of the resulting skew t-distribution.}, starting from (1) the fully flexible model, 5 attempts (different random initialisations) are made to find a solution. If all attempts for a model fail, a less flexible model is tried out, in the order (2) diagonal covariance matrices, (3) flexible but equal covariance matrices, (4) equal diagonal covariance matrices, (5) equal spherical covariance matrices, until a valid solution is found. If none of these is successful, the same routine is carried out with a mixture of skew normal distributions, and if this does not yield a valid solution either, \verb|mclustBIC| is called with default settings. \item[teigen] The function \verb|teigen| would occasionally produce errors or invalid results. It is run inside a wrapper function that enforces a solution in the following way: If no valid solution is found, the wrapper-function for \verb|EmSkew| as explained above is called, but with \verb|dist="mvt"|, fitting a multivariate t-distribution. \item[specc] The function \verb|specc| would occasionally produce errors or invalid results. It is run inside a wrapper function. 10 attempts (different random initialisations) are made to find a solution. If they all fail, all observations are assigned to cluster 1. While this approach may seem unfair for spectral clustering in comparison to \verb|EmSkew|, which ultimately calls \verb|mclust| and can as such still produce a reasonable clustering, the motivation is that a Gaussian mixture model can be seen as a constrained version of a mixture of skew t-distributions, whereas spectral clustering has no straightforward constrained version that can guarantee a valid solution. 
\end{description} In principle there can be situations in which also \verb|mclustBIC| fails to deliver a valid solution, however such a situation did not occur in the study. Exhausting all attempts, both \verb|specc| and \verb|EmSkew| failed twice before resorting to a one-cluster solution or \verb|mclustBIC|, respectively, and \verb|teigen| failed 5 times; in all of these cases \verb|EmSkew| with \verb|distr="mvt"| delivered a valid solution. \subsection*{A2: More details on data sets} Tables \ref{tdata1} and \ref{tdata2} give a list of the data sets used in the study. \begin{table}[tbp] \tiny \caption{Overview of data sets used in the study. As ``Source'', the source is given from which the data set was retrieved for the study, which in some cases is not the original source (most data sets retrieved from www.openml.org and many from R-packages are from UCI). Missing references: (i) Turing Institute, Glasgow, (ii) www.bundestag.de (iii) maps.met.police.uk/tables.htm} \begin{tabular}{|l|l|r|r|r|r|r|r|} \hline Number & Name & $n$ & $p$ & $K$ & ``Truth'' given & Source & Reference\\ \hline 1 & Crabs & 200 & 5 & 4 & Yes & R-MASS & \cite{CamMah74} \\ \hline \multicolumn{8}{|c|}{Morphological measurements of crabs, two species, two sexes}\\ \hline 2 & Dortmund & 170 & 5 & 5 & No & See reference & \cite{SomWei05} \\ \hline \multicolumn{8}{|c|}{Various characteristics of the districts of the city of Dortmund}\\ \hline 3 & Iris & 150 & 4 & 3 & Yes & R-datasets & \cite{Anderson35}\\ \hline \multicolumn{8}{|c|}{Measurements on 50 flowers from each of 3 species of iris}\\ \hline 4 & Vowels & 990 & 10 & 11 & Yes & See reference & \cite{HaTiFr01}\\ \hline \multicolumn{8}{|c|}{Recognition of British English vowels}\\ \hline 5 & Bats & 2677 & 72 & 8 & Yes & V. Zamora-Gutierrez & \cite{Zamora16}\\ \hline \multicolumn{8}{|c|}{Acoustic identification of Mexican bat species}\\ \hline 6 & USArrests & 50 & 4 & 2 & No & R-datasets & \cite{McNeil77}\\ \hline \multicolumn{8}{|c|}{Arrests per 100,000 residents for various crimes in US states 1973}\\ \hline 7 & OliveOil & 572 & 8 & 9 & Yes & R-pdfcluster & \cite{FoArLaTi83}\\ \hline \multicolumn{8}{|c|}{Chemical decomposition of Italian olive oils from 9 regions}\\ \hline 8 & OldFaithful & 299 & 2 & 3 & No & R-MASS & \cite{AzzBow90}\\ \hline \multicolumn{8}{|c|}{Duration and waiting times for eruptions of Old Faithful geyser}\\ \hline 9 & Tetragonula & 236 & 4 & 9 & Yes & R-prabclus & \cite{FCGRO04}\\ \hline \multicolumn{8}{|c|}{Genetic information on 9 species of tetragonula bees}\\ \hline 10 & Thyroid & 215 & 6 & 3 & Yes & R-mclust & \cite{CoBrJoMa83}\\ \hline \multicolumn{8}{|c|}{Results of five laboratory tests diagnosing thyroid gland patients}\\ \hline 11 & Spam & 4601 & 57 & 2 & Yes & R-kernlab & \cite{HaTiFr01}\\ \hline \multicolumn{8}{|c|}{Email spam classification from word and character frequencies}\\ \hline 12 & Wisconsin & 569 & 30 & 2 & Yes & UCI & \cite{StrWolMan93}\\ \hline \multicolumn{8}{|c|}{Diagnosis of breast cancer, measurements of features of image}\\ \hline 13 & Yeast & 1484 & 8 & 10 & Yes & UCI & \cite{HorNak96}\\ \hline \multicolumn{8}{|c|}{Discriminative features for protein Localization Sites in cells}\\ \hline 14 & Vehicle & 846 & 18 & 4 & Yes & R-mlbench & (i)\\ \hline \multicolumn{8}{|c|}{Recognising vehicle type from silhouettes}\\ \hline 15 & Letters & 2000 & 16 & 26 & Yes & R-mlbench & \cite{FreSla91}\\ \hline \multicolumn{8}{|c|}{Recognising handwritten letters from pixel displays}\\ \hline 16 & Bundestag & 299 & 5 & 5 & No & 
R-flexclust & (ii)\\ \hline \multicolumn{8}{|c|}{German Bundestag election results 2009 of 5 major parties by constituency}\\ \hline 17 & Finance & 889 & 4 & 2 & Yes & R-Rmixmod & \cite{duJSev10}\\ \hline \multicolumn{8}{|c|}{Predicting firm bankruptcy from four financial ratios}\\ \hline 18 & BankNotes & 200 & 6 & 2 & Yes & R-mclust & \cite{FluRie88}\\ \hline \multicolumn{8}{|c|}{Identifying counterfeit Swiss bank notes from measurements}\\ \hline 19 & StoneFlakes & 79 & 8 & 3 & No & Thomas Weber & \cite{Weber09}\\ \hline \multicolumn{8}{|c|}{Measurements on prehistoric stone tools}\\ \hline 20 & Leaf & 340 & 14 & 30 & Yes & UCI & \cite{SiMaAl12}\\ \hline \multicolumn{8}{|c|}{Shape and consistency measurements on leafs from 30 plant species}\\ \hline 21 & London & 32 & 9 & 4 & No & See reference & (iii) \\ \hline \multicolumn{8}{|c|}{Relative numbers of various crimes in the boroughs of London 2014}\\ \hline \end{tabular} \label{tdata1} \end{table} \begin{table}[tbp] \tiny \caption{Overview of data sets used in the study (part 2). As ``Source'', the source is given from which the data set was retrieved for the study, which in some cases is not the original source (most data sets retrieved from www.openml.org and many from R-packages are from UCI). Missing references: (i) Deepraj Baidya (ii) Dukascopy Historical Data Feed (iii) www.decathlon2000.com} \begin{tabular}{|l|l|r|r|r|r|r|r|} \hline Number & Name & $n$ & $p$ & $K$ & ``Truth'' given & Source & Reference\\ \hline 22 & Wholesale & 440 & 7 & 2 & Yes & \verb|www.openml.org| & \cite{Abreu11}\\ \hline \multicolumn{8}{|c|}{Spending on various product categories by clients of wholesale distributor}\\ \hline 23 & Heart & 200 & 13 & 5 & Yes & \verb|www.openml.org| & \cite{DJSPSSGLF89}\\ \hline \multicolumn{8}{|c|}{Diagnosing different stages of heart disease by diagnostic measurements}\\ \hline 24 & MachineKnow & 403 & 5 & 5 & No & \verb|www.openml.org| & \cite{KaSaCo13}\\ \hline \multicolumn{8}{|c|}{Students' knowledge status about the subject of Electrical DC Machines}\\ \hline 25 & PlantLeaves & 1599 & 64 & 100 & Yes & \verb|www.openml.org| & \cite{Yanetal13}\\ \hline \multicolumn{8}{|c|}{Plant species classification by texture detected from leaf images}\\ \hline 26 & RNAYan & 90 & 2 & 7 & Yes & Bioconductor & \cite{Yanetal13}\\ \hline \multicolumn{8}{|c|}{RNA sequencing data distinguishing cell types in human embryonic development}\\ \hline 27 & RNAKolo & 704 & 5 & 3 & Yes & Bioconductor & \cite{Kolodziejczyketal15}\\ \hline \multicolumn{8}{|c|}{RNA sequencing data on mouse embryonic stem cell growth}\\ \hline 28 & Cardiotocography & 2126 & 23 & 10 & Yes & \verb|www.openml.org| & \cite{ABGMP00}\\ \hline \multicolumn{8}{|c|}{Classification of cardiotocograms into pattern classes}\\ \hline 29 & Stars & 240 & 4 & 6 & Yes & Kaggle & (i) \\ \hline \multicolumn{8}{|c|}{Predict star types from features of stars}\\ \hline 30 & Kidney & 203 & 11 & 2 & Yes & R-teigen & \cite{UCI17} \\ \hline \multicolumn{8}{|c|}{Presence or absence of chronic kidney disease from diagnostic features}\\ \hline 31 & BreastTissue & 106 & 9 & 4 & Yes & \verb|www.openml.org| & \cite{Jossinet96}\\ \hline \multicolumn{8}{|c|}{Classes of breast carcinoma diagnosed by impedance measurements}\\ \hline 32 & FOREX & 1832 & 10 & 2 & Yes & \verb|www.openml.org| & (ii) \\ \hline \multicolumn{8}{|c|}{Historical price data EUR/JPY for predicting direction next day}\\ \hline 33 & SteelPlates & 1941 & 24 & 7 & Yes & \verb|www.openml.org| & \cite{Buscema98}\\ \hline 
\multicolumn{8}{|c|}{Classification of steel plates faults from various measurements}\\ \hline 34 & BostonHouse & 506 & 13 & 5 & No & \verb|www.openml.org| & \cite{HarRub78}\\ \hline \multicolumn{8}{|c|}{Multivariate characterisation of different areas of Boston}\\ \hline 35 & Ionosphere & 351 & 32 & 2 & Yes & \verb|www.openml.org| & \cite{SWHB89}\\ \hline \multicolumn{8}{|c|}{Radar data to distinguish free electron patterns from noise in ionosphere}\\ \hline 36 & Glass & 214 & 9 & 6 & Yes & R-MASS & \cite{VenRip02}\\ \hline \multicolumn{8}{|c|}{Identify type of glass from chemical analysis}\\ \hline 37 & CustomerSat & 1811 & 10 & 5 & Yes & R-bayesm & \cite{RoAlMc05}\\ \hline \multicolumn{8}{|c|}{Responses to a satisfaction survey for a product}\\ \hline 38 & Avalanches & 394 & 9 & 5 & No & Margherita Maggioni & \cite{Maggioni04}\\ \hline \multicolumn{8}{|c|}{Avalanche frequencies by size and other factors for mapping release areas}\\ \hline 39 & Decathlon & 2580 & 10 & 6 & No & R-GDAdata & (iii) \\ \hline \multicolumn{8}{|c|}{Points per event of decathlon athletes}\\ \hline 40 & Alcohol & 125 & 10 & 5 & Yes & UCI & \cite{ALJY20}\\ \hline \multicolumn{8}{|c|}{Five types of alcohol classified by QCM sensor data}\\ \hline 41 & Augsburg & 95 & 11 & 3 & No & See reference & \cite{TheUrb09}\\ \hline \multicolumn{8}{|c|}{Tax data for districts of the city of Augsburg before and after Thirty Years War}\\ \hline 42 & ImageSeg & 2310 & 16 & 7 & Yes & \verb|www.openml.org| & \cite{UCI17}\\ \hline \multicolumn{8}{|c|}{$3\times 3$ pixel regions of outdoor images classified as object}\\ \hline \end{tabular} \label{tdata2} \end{table} A zip file with all data sets in the form in which they were analysed in the present study (i.e., after all pre-processing) is planned to be provided as online supplement of the published version of this article. Where a ``true'' clustering is given this is in the first variable. All variables were scaled to zero mean and unit variance before clustering, except where stated in the following list, which gives information about data pre-processing where this was applied. \begin{description} \item[2 Dortmund] The original data set has 203 variables, many of which are not of much substantial interest, with several linear dependencies. The version used here is described in \cite{CorHen16}. \item[4 Vowels] The original data set is split into test and training data for supervised classification. Both are used together here. \item[5 Bats] The used data set is a preliminary version of what is analysed in \cite{Zamora16} that was provided to me for testing purposes by Veronica Zamora-Gutierrez. A small number of missing values were imputed by mean imputation. \item[7 OliveOil] The original data set contains classification by 9 regions, which are subclasses of 3 macro areas. The regions were used as ``true'' clustering. \item[9 Tetragonula] This data set originally contains categorical genetic information. The version used here was generated by running a four-dimensional multidimensional scaling on genetic distances as proposed by \cite{HauHen10}. The resulting data were not scaled before clustering; the original scales represent the original distances. \item[15 Letters] This data set has originally 20,000 observations, which is too big for handling a full distance matrix. Only the first 2,000 have been used. \item[16 Bundestag] The data set was not scaled before clustering. The unscaled version represents comparable voter percentages. 
\item[19 StoneFlakes] A small number of missing values were imputed by mean imputation. \item[21 London] The data were retrieved from the website \verb|maps.met.police.uk/tables.htm| in December 2015. The website has been reorganised in the meantime and the original data are probably no longer available there, however more recent data of the same kind is available. Only the major crime categories were used, divided by the total number of offences; the total number of offences was used as a variable divided by the number of inhabitants. After constructing these features, variables were scaled (the number of serious crimes is very low, and not scaling the relative numbers would have strongly reduced their influence on clustering). \item[24 MachineKnow] A classification variable is provided, but this was not used as ``true'' clustering here, because according to the documentation this was constructed from the data by a machine learning algorithm, and does not qualify as ``ground truth''. \item[26 RNAYan] This is originally a data set with $p\gg n$. Unscaled principal components were used as explained in \cite{BatHen20} in line with some literature cited there. \item[27 RNAKolo] This is originally a data set with $p\gg n$. Unscaled principal components were used as explained in \cite{BatHen20} in line with some literature cited there. \item[28 Cardiotocography] A variable with less than four distinct values has been removed. \item[33 SteelPlates] Three variables with less than four distinct values have been removed. \item[34 BostonHouse] This was originally a regression problem. The original response variable ``housing price'' is used together with the other variables for clustering here. A binary variable has been removed. \item[35 Ionosphere] Two binary variables have been removed. \item[38 Avalanches] On top of the first six variables, which give geographical information, the original data has frequencies for avalanches of ten different sizes, categorised by what percentage of the release areas is covered. This information has been reduced to the three variables ``number of avalanches'', ``mean coverage'' and ``variance of coverages''. \item[39 Decathlon] Only data from the year 2000 onward are used, in order to generate a data set of manageable size. Variables were not scaled, because the decathlon points system is meant to make the original points values comparable. \item[41 Augsburg] For four count variables the meaning of missing values was ``not existing'', and these were set to zero. Some other missing values were imputed by mean imputation. \item[42 ImageSeg] Three variables with less than five distinct values have been removed. \end{description} \end{document}
\begin{document} \title{Bent Vectorial Functions, Codes and Designs} \author{Cunsheng Ding\footnote{Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong. Email: [email protected]}, Akihiro Munemasa\footnote{Research Center for Pure and Applied Mathematics, Graduate School of Information Sciences, Tohoku University, Sendai 980-8579, Japan. Email: [email protected]}, Vladimir D. Tonchev\footnote{Department of Mathematical Sciences, Michigan Technological University, Houghton, MI 49931, USA. Email: [email protected]}} \maketitle \begin{abstract} Bent functions, or equivalently, Hadamard difference sets in the elementary Abelian group $({\mathrm{GF}}(2^{2m}), +)$, have been employed to construct symmetric and quasi-symmetric designs having the symmetric difference property \cite{Kantor75}, \cite{DS87}, \cite{Kantor83}, \cite{JT91}, \cite{JT92}. The main objective of this paper is to use bent vectorial functions for a construction of a two-parameter family of binary linear codes that do not satisfy the conditions of the Assmus-Mattson theorem, but nevertheless hold $2$-designs. A new coding-theoretic characterization of bent vectorial functions is presented. \end{abstract} {\bf Keywords:} bent function, bent vectorial function, linear code, 2-design. {\bf MSC:} 94B05, 94B15, 05B05. \section{Introduction, motivations and objectives}\label{sec-Introd} We start with a brief review of combinatorial $t$-designs (cf. \cite{AK92}, \cite{BJL}, \cite{T88}). Let ${\mathcal{P}}$ be a set of $v \ge 1$ elements, called {\it points}, and let ${\mathcal{B}}$ be a collection of $k$-subsets of ${\mathcal{P}}$, called {\it blocks}, where $k$ is a positive integer, $1 \leq k \leq v$. Let $t$ be a non-negative integer, $t \leq k$. The pair ${\mathbb{D}} = ({\mathcal{P}}, {\mathcal{B}})$ is called a $t$-$(v, k, \lambda)$ {\em design\index{design}}, or simply a {\em $t$-design\index{$t$-design}}, if every $t$-subset of ${\mathcal{P}}$ is contained in exactly $\lambda$ blocks of ${\mathcal{B}}$. We usually use $b$ to denote the number of blocks in ${\mathcal{B}}$. A $t$-design is called {\em simple\index{simple}} if ${\mathcal{B}}$ does not contain any repeated blocks. In this paper, we consider only simple $t$-designs. Two designs are {\it isomorphic} if there is a bijection between their point sets that maps every block of the first design to a block of the second design. An {\it automorphism} of a design is any isomorphism of the design to itself. The set of all automorphisms of a design ${\mathbb{D}}$ forms the (full) automorphism group of ${\mathbb{D}}$. It is clear that $t$-designs with $k = t$ or $k = v$ always exist. Such $t$-designs are called {\em trivial}. In this paper, we consider only $t$-designs with $v > k > t$. The incidence matrix of a design ${\mathbb{D}}$ is a $(0,1)$-matrix $A=(a_{ij})$ with rows labeled by the blocks and columns labeled by the points, where $a_{i,j}=1$ if the $i$th block contains the $j$th point, and $a_{i,j}=0$ otherwise. If the incidence matrix is viewed over ${\mathrm{GF}}(q)$, its rows span a linear code of length $v$ over ${\mathrm{GF}}(q)$, which is denoted by ${\mathcal{C}}_q({\mathbb{D}})$ and is called the code of the design. Note that a $t$-design can be employed to construct linear codes in different ways. The supports of codewords of a given Hamming weight $k$ in a code ${\mathcal{C}}$ may form a $t$-design, which is referred to as a design supported by the code.
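To make these notions concrete, the following short Python sketch (an illustration added here, not part of the constructions of this paper) computes the binary code ${\mathcal{C}}_2({\mathbb{D}})$ of the Fano plane, the $2$-$(7,3,1)$ design: the rows of its incidence matrix span the $[7,4,3]$ Hamming code, and the supports of the weight-$3$ codewords are exactly the blocks of the design.
\begin{verbatim}
# Illustrative sketch (not from this paper): the binary code C_2(D) of the
# Fano plane, a 2-(7,3,1) design, and the design supported by its
# minimum-weight codewords.
from itertools import combinations
from collections import Counter

blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]       # the lines of the Fano plane
v = 7

# every pair of points lies on exactly one line, so this is a 2-(7,3,1) design
assert all(sum(p in B and q in B for B in blocks) == 1
           for p, q in combinations(range(v), 2))

rows = [tuple(1 if p in B else 0 for p in range(v)) for B in blocks]

code = {tuple([0] * v)}            # GF(2) span of the incidence matrix
for r in rows:
    code |= {tuple(a ^ b for a, b in zip(c, r)) for c in code}

print(len(code))                   # 16, i.e. dimension 4
print(sorted(Counter(sum(c) for c in code).items()))
# [(0, 1), (3, 7), (4, 7), (7, 1)] -- the [7,4,3] Hamming code
supports = {frozenset(i for i, c_i in enumerate(c) if c_i)
            for c in code if sum(c) == 3}
print(supports == set(map(frozenset, blocks)))   # True
\end{verbatim}
The same brute-force span computation serves as a quick sanity check for the small parameter sets considered below, before one passes to a dedicated system such as Magma.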
A design is called {\em symmetric\index{symmetric design}} if $v = b$. A $2$-$(v, k, \lambda)$ design is symmetric if and only if every two blocks share exactly $\lambda$ points. A $2$-design is \emph{quasi-symmetric}\index{quasi-symmetric} with intersection numbers $x$ and $y$ ($x < y$) if any two blocks intersect in either $x$ or $y$ points. Let ${\mathbb{D}}=({\mathcal{P}}, \,{\mathcal{B}})$ be a $2$-$(v, k, \lambda)$ symmetric design, where ${\mathcal{B}}=\{B_1,\, B_2,\, \cdots, \, B_v\}$ and $v \geq 2$. Then \begin{itemize} \item $(B_1, \, \{B_2 \cap B_1,\, B_3 \cap B_1,\, \cdots,\, B_v \cap B_1 \})$ is a $2$-$(k,\, \lambda,\, \lambda-1)$ design, called the \emph{derived design}\index{derived design} of ${\mathbb{D}}$ with respect to $B_1$; \item $(\overline{B}_1,\, \{B_2 \cap \overline{B}_1,\, B_3 \cap \overline{B}_1,\, \cdots, B_v \cap \overline{B}_1 \})$ is a $2$-$(v-k,\, k-\lambda,\, \lambda)$ design, called the \emph{residual design}\index{residual design} of ${\mathbb{D}}$ with respect to $B_1$, where $\overline{B_1}={\mathcal{P}} \setminus B_1$. \end{itemize} If a symmetric design ${\mathbb{D}}$ has parameters \begin{eqnarray}\label{eqn-SDPparameters} 2-(2^{2m},\, 2^{2m-1}-2^{m-1},\, 2^{2m-2}-2^{m-1}), \end{eqnarray} its derived designs have parameters \begin{eqnarray*} 2-( 2^{2m-1}-2^{m-1},\, 2^{2m-2}-2^{m-1},\, 2^{2m-2}-2^{m-1}-1), \end{eqnarray*} and its residual designs have parameters \begin{eqnarray*} 2-( 2^{2m-1}+2^{m-1},\, 2^{2m-2},\, 2^{2m-2}-2^{m-1}). \end{eqnarray*} A symmetric $2$-design is said to have the \emph{symmetric difference property}, or to be a \emph{symmetric SDP design}\index{symmetric SDP design} (Kantor \cite{Kantor75, Kantor83}), if the symmetric difference of any \emph{three} blocks is either a block or the complement of a block. Any derived or residual design of a symmetric SDP design is quasi-symmetric, and has the property that the symmetric difference of every two blocks is either a block or the complement of a block. The derived and residual designs of a symmetric SDP design are called quasi-symmetric SDP designs \cite{JT92}. The binary codes of quasi-symmetric SDP designs give rise to an exponentially growing number of inequivalent linear codes that meet the Grey-Rankin bound \cite{JT91}. It was proved in \cite{Tonchev93} that any quasi-symmetric SDP design can be embedded as a derived or a residual design in exactly one (up to isomorphism) symmetric SDP design. A coding-theoretical characterization of symmetric SDP designs was given by Dillon and Schatz \cite{DS87}, who proved that any symmetric SDP design with parameters (\ref{eqn-SDPparameters}) is supported by the codewords of minimum weight in a binary linear code ${\mathcal{C}}$ of length $2^{2m}$, dimension $2m+2$ and weight enumerator given by \begin{equation} \label{sdpcode} 1 + 2^{2m}z^{2^{2m-1}-2^{m-1}} + (2^{2m+1} - 2)z^{2^{2m-1}}+ 2^{2m}z^{2^{2m-1}+2^{m-1}} + z^{2^{2m}}, \end{equation} where ${\mathcal{C}}$ is spanned by the first order Reed-Muller code ${\mathrm{RM}}_2(1, 2m)$ and a vector $u$ being the truth table (introduced in Section~\ref{sec-333}) of a bent function in $2m$ variables, or equivalently, $u$ is the incidence vector of a Hadamard difference set in the additive group of ${\mathrm{GF}}(2)^{2m}$ with parameters \begin{equation*} \label{eqn-MenonHadamardPara} (2^{2m}, \,2^{2m-1} \pm 2^{m-1}, \,2^{2m-2} \pm 2^{m-1}).
\end{equation*} One of the objectives of this paper is to give a coding-theoretical characterization of bent vectorial functions (Theorem \ref{thm-bentvectf}), which generalizes the Dillon and Schatz characterization of single bent functions \cite{DS87}. Another objective is to present in Theorem \ref{main} a two-parameter family of binary linear codes with parameters \[ [2^{2m},2m+1 +\ell,2^{2m-1} - 2^{m-1}], \ m \ge 2, \ 1\le \ell \le m, \] that are based on bent vectorial functions and support $2$-designs, even though these codes do not satisfy the conditions of the Assmus-Mattson theorem (see Theorem \ref{thm-AM1}). The subclass of codes with $\ell=1$ consists of codes introduced by Dillon and Schatz \cite{DS87} that are based on bent functions and support symmetric SDP designs. Examples of codes with $\ell=m$ are given that are optimal in the sense that they have the maximum possible minimum distance for the given length and dimension, or have the largest known minimum distance for the given length and dimension (see Note \ref{Note6} in Section \ref{Section4}, and the examples thereafter). \section{The classical constructions of $t$-designs from codes} A simple sufficient condition for the supports of codewords of any given weight in a linear code to support a $t$-design is that the code admits a $t$-transitive or $t$-homogeneous automorphism group. All codes considered in this paper are of even length $n$ of the form $n=2^{2m}$. It is known that any 2-homogeneous group of even degree is necessarily 2-transitive (Kantor \cite{K69, K85}). Another sufficient condition is given by the Assmus-Mattson theorem. Let ${\mathcal{C}}$ be a $[v, \kappa, d]$ linear code over ${\mathrm{GF}}(q)$, and let $A_i =A_i({\mathcal{C}})$ be the number of codewords of Hamming weight $i$ in ${\mathcal{C}}$ ($0 \leq i \leq v$). For each $k$ with $A_k \neq 0$, let ${\mathcal{B}}_k$ denote the set of the supports of all codewords of Hamming weight $k$ in ${\mathcal{C}}$, where the code coordinates are indexed by $1,2, \ldots, v$. Let ${\mathcal{P}}=\{ 1, 2, \ldots, v \}$. The following theorem, proved by Assmus and Mattson, provides sufficient conditions for the pair $({\mathcal{P}}, {\mathcal{B}}_k)$ to be a $t$-design. \begin{theorem}[The Assmus-Mattson Theorem \cite{AM69}]\label{thm-AM1} Let ${\mathcal{C}}$ be a binary $[v, \kappa, d]$ code, and let $d^{\perp}$ be the minimum weight of the dual code ${\mathcal{C}}^{\perp}$. Suppose that $A_i=A_i({\mathcal{C}})$ and $A_i^{\perp}=A_i({\mathcal{C}}^{\perp})$, $0 \leq i \leq v$, are the weight distributions of ${\mathcal{C}}$ and ${\mathcal{C}}^{\perp}$, respectively. Fix a positive integer $t$ with $t < d$, and let $s$ be the number of $i$ with $A_i^{\perp} \ne 0$ for $0 < i \leq v-t$. If $s \leq d -t$, then \begin{itemize} \item the codewords of weight $i$ in ${\mathcal{C}}$ hold a $t$-design provided that $A_i \ne 0$ and $d \leq i \leq v$, and \item the codewords of weight $i$ in the code ${\mathcal{C}}^{\perp}$ hold a $t$-design provided that $A_i^{\perp} \ne 0$ and $d^{\perp} \leq i \leq v-t$. \end{itemize} \end{theorem} The parameter $\lambda$ of a $t$-$(v,w,\lambda)$ design supported by the codewords of weight $w$ in a binary code ${\mathcal{C}}$ is determined by \begin{equation*} A_w = \lambda { v \choose t}/{w \choose t}.
\end{equation*} \section{Bent functions and bent vectorial functions}\label{sec-333} Let $f=f(x)$ be a Boolean function from ${\mathbb{M}athrm{GF}}(2^{n})$ to ${\mathbb{M}athrm{GF}}(2)$. The \emph{support} $S_f$ of $f$ is defined as $$ S_f=\{x \in{\mathbb{M}athrm{GF}}(2^{n}) : f(x)=1\} \subseteq {\mathbb{M}athrm{GF}}(2^{n}). $$ The $(0,1)$ incidence vector of $S_f$, having its coordinates labeled by the elements of ${\mathbb{M}athrm{GF}}(2^n)$, is called the {\it truth table} of $f$. The {\em Walsh transform} of $f$ is defined by \begin{eqnarray*} \hat{f}(w)=\sum_{x \in {\mathbb{M}athrm{GF}}(2^{n})} (-1)^{f(x)+{\mathbb{M}athrm{Tr}}_{n/1}(wx)} \end{eqnarray*} where $w \in {\mathbb{M}athrm{GF}}(2^{n})$ and ${\mathbb{M}athrm{Tr}}_{n/n'}(x)$ denotes the trace function from ${\mathbb{M}athrm{GF}}(2^n)$ to ${\mathbb{M}athrm{GF}}(2^{n'})$. Two Boolean functions $f$ and $g$ from ${\mathbb{M}athrm{GF}}(2^n)$ to ${\mathbb{M}athrm{GF}}(2)$ are called \emph{weakly affinely equivalent} or \emph{EA-equivalent} if there are an automorphism $A$ of $({\mathbb{M}athrm{GF}}(2^n), +)$, a homomorphism $L$ from $({\mathbb{M}athrm{GF}}(2^n),+)$ to $({\mathbb{M}athrm{GF}}(2), +)$, an element $a \in {\mathbb{M}athrm{GF}}(2^n)$ and an element $b \in {\mathbb{M}athrm{GF}}(2)$ such that $$ g(x)=f(A(x)+a)+ L(x) +b $$ for all $x \in {\mathbb{M}athrm{GF}}(2^n)$. A Boolean function $f$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)$ is called a \emph{bent\index{bent}} function if $|\hat{f}(w)|= 2^{m}$ for every $w \in {\mathbb{M}athrm{GF}}(2^{2m})$. It is well known that a function $f$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)$ is bent if and only if $S_f$ is a difference set in $({\mathbb{M}athrm{GF}}(2^{2m}),\,+)$ with parameters (\ref{eqn-MenonHadamardPara}) \cite{Mesnagerbook}. A Boolean function $f$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)$ is a bent function if and only if its truth table is at Hamming distance $2^{2m-1} {\mathrm{Paley}}m 2^{m-1}$ from every codeword of the first order Read-Muller code ${\mathbb{M}athrm{RM}}_2(1, 2m)$ \cite[Theorem 6, page 426]{McS}. It follows that \begin{eqnarray*} |S_f|=2^{2m-1} {\mathrm{Paley}}m 2^{m-1}. \end{eqnarray*} There are many constructions of bent functions. The reader is referred to \cite{CMSurvey} and \cite{Mesnagerbook} for detailed information about bent functions. Let $\ell$ be a positive integer, and let $f_1(x), \cdots, f_\ell(x)$ be Boolean functions from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)$. The function $F(x)=(f_1(x), \cdots, f_\ell(x))$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^\ell$ is called a $(2m,\ell)$ {\it vectorial} Boolean function. A $(2m,\ell)$ vectorial Boolean function $F(x)=(f_1(x), \cdots, f_\ell(x))$ is called a \emph{bent vectorial function} if $\sum_{j=1}^\ell a_j f_j(x)$ is a bent function for each nonzero $(a_1, \cdots, a_\ell) \in {\mathbb{M}athrm{GF}}(2)^\ell$. For another equivalent definition of bent vectorial functions, see \cite{CMbentvect} or \cite[Chapter 12]{Mesnagerbook}. Bent vectorial functions exist only when $\ell \leq m$ (cf. \cite[Chapter 12]{Mesnagerbook}). There are a number of known constructions of bent vectorial functions. The reader is referred to \cite{CMbentvect} and \cite[Chapter 12]{Mesnagerbook} for detailed information. Below we present a specific construction of bent vectorial functions from \cite{CMbentvect}. \begin{example}\label{exam-bentvectfunc1} \cite{CMbentvect}. 
Let $m \geq 1$ be an odd integer, $\beta_1, \beta_2, \cdots, \beta_{m}$ be a basis of ${\mathbb{M}athrm{GF}}(2^{m})$ over ${\mathbb{M}athrm{GF}}(2)$, and let $u \in {\mathbb{M}athrm{GF}}(2^{2m}) \setminus {\mathbb{M}athrm{GF}}(2^m)$. Let $i$ be a positive integer with $\gcd(2m, i)=1$. Then \begin{eqnarray*} \left({\mathbb{M}athrm{Tr}}_{2m/1}(\beta_1 u x^{2^i+1}), {\mathbb{M}athrm{Tr}}_{2m/1}(\beta_2 u x^{2^i+1}), \cdots, {\mathbb{M}athrm{Tr}}_{2m/1}(\beta_{m} u x^{2^i+1}) \right) \end{eqnarray*} is a $(2m, m)$ bent vectorial function. \end{example} Under a basis of ${\mathbb{M}athrm{GF}}(2^{\ell})$ over ${\mathbb{M}athrm{GF}}(2)$, $({\mathbb{M}athrm{GF}}(2^\ell), +)$ and $({\mathbb{M}athrm{GF}}(2)^\ell, +)$ are isomorphic. Hence, any vectorial function $F(x)=(f_1(x), \cdots, f_\ell(x))$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^\ell$ can be viewed as a function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2^\ell)$. It is well known that a function $F$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2^\ell)$ is bent if and only if ${\mathbb{M}athrm{Tr}}_{\ell/1}(aF(x))$ is a bent Boolean function for all $a \in {\mathbb{M}athrm{GF}}(2^\ell)^*$. Any such vectorial function $F$ can be expressed as ${\mathbb{M}athrm{Tr}}_{2m/\ell}(f(x))$, where $f$ is a univariate polynomial. This presentation of bent vectorial functions is more compact. We give two examples of bent vectorial functions in this form. \begin{example}\label{exam-bentvectfunc2} (cf. \cite[Chapter 12]{Mesnagerbook}). Let $m>1$ and $i \geq 1$ be integers such that $2m/\gcd(i, 2m)$ is even. Then ${\mathbb{M}athrm{Tr}}_{2m/m}(a x^{2^i+1})$ is bent if and only if $\gcd(2^i+1, 2^m+1) \neq 1$ and $a \in {\mathbb{M}athrm{GF}}(2^{2m})^* \setminus \langle \alpha^{\gcd(2^i+1, 2^m+1)} \rangle$, where $\alpha$ is a generator of ${\mathbb{M}athrm{GF}}(2^{2m})^*$. \end{example} \begin{example}\label{exam-bentvectfunc3} (cf. \cite[Chapter 12]{Mesnagerbook}). Let $m>1$ and $i \geq 1$ be integers such that $\gcd(i, 2m)=1$. Let $d=2^{2i}-2^i+1$. Let $m$ be odd. Then ${\mathbb{M}athrm{Tr}}_{2m/m}(a x^{d})$ is bent if and only if $a \in {\mathbb{M}athrm{GF}}(2^{2m})^* \setminus \langle \alpha^{3} \rangle$, where $\alpha$ is a generator of ${\mathbb{M}athrm{GF}}(2^{2m})^*$. \end{example} \section{A construction of codes from bent vectorial functions} \label{Section4} Let $q=2^{2m}$, let ${\mathbb{M}athrm{GF}}(q)=\{u_1, u_2, \cdots, u_{q}\}$, and let $w$ be a generator of ${\mathbb{M}athrm{GF}}(q)^*$. For the purposes of what follows, it is convenient to use the following generator matrix of the binary $[2^{2m}, 2m+1,2^{2m-1}]$ first-order Reed-Muller code ${\mathbb{M}athrm{RM}}_2(1,2m)$: \begin{eqnarray*} G_0=\left[ \begin{array}{cccc} 1 & 1 & \cdots & 1 \\ {\mathbb{M}athrm{Tr}}_{2m/1}(w^0u_1) & {\mathbb{M}athrm{Tr}}_{2m/1}(w^0u_2) & \cdots & {\mathbb{M}athrm{Tr}}_{2m/1}(w^0u_q) \\ \vdots & \vdots & \ddots & \vdots \\ {\mathbb{M}athrm{Tr}}_{2m/1}(w^{2m-1}u_1) & {\mathbb{M}athrm{Tr}}_{2m/1}(w^{2m-1}u_2) & \cdots & {\mathbb{M}athrm{Tr}}_{2m/1}(w^{2m-1}u_q) \end{array} \right]. \end{eqnarray*} The weight enumerator of ${\mathbb{M}athrm{RM}}_2(1,2m)$ is \begin{equation}\label{rm1} 1+(2^{2m+1}-2)z^{2^{2m-1}} + z^{2^{2m}}. \end{equation} Two binary linear codes are equivalent if there is a permutation of coordinates that sends the first code to the second. Up to equivalence, ${\mathbb{M}athrm{RM}}_2(1,2m)$ is the unique linear binary code with parameters $[2^{2m}, 2m+1,2^{2m-1}]$ \cite{DS87}. 
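Before describing the general construction, we illustrate the case $2m=4$ with a small self-contained Python sketch (added for illustration only; it identifies ${\mathrm{GF}}(2^{2m})$ with ${\mathrm{GF}}(2)^{2m}$, so the rows of $G_0$ become the all-one row and the coordinate functions, and the bent function $f(x_1,\ldots,x_4)=x_1x_2+x_3x_4$ is an assumed example, not one taken from this paper). The sketch checks bentness through the Walsh transform, recovers the weight enumerator (\ref{rm1}) of ${\mathrm{RM}}_2(1,4)$, and then appends the truth table of $f$ and recovers the Dillon-Schatz weight enumerator (\ref{sdpcode}).
\begin{verbatim}
# A minimal sketch for 2m = 4 (illustration only, written over GF(2)^4).
from itertools import product
from collections import Counter

n = 4                                  # 2m with m = 2
pts = list(product([0, 1], repeat=n))  # the 2^{2m} points

def f(x):                              # an assumed bent function: x1x2 + x3x4
    return (x[0] & x[1]) ^ (x[2] & x[3])

def walsh(g, w):                       # Walsh transform of g at w
    return sum((-1) ** (g(x) ^ (sum(wi & xi for wi, xi in zip(w, x)) % 2))
               for x in pts)

assert all(abs(walsh(f, w)) == 2 ** (n // 2) for w in pts)  # f is bent

# G_0: the all-one row and the n coordinate (linear) rows, as truth tables
G0 = [[1] * len(pts)] + [[x[i] for x in pts] for i in range(n)]
Ff = [f(x) for x in pts]               # truth table of f

def span(rows):                        # all GF(2) linear combinations
    words = {tuple([0] * len(pts))}
    for r in rows:
        words |= {tuple(a ^ b for a, b in zip(c, r)) for c in words}
    return words

def weights(code):
    return sorted(Counter(sum(c) for c in code).items())

print(weights(span(G0)))         # [(0, 1), (8, 30), (16, 1)]  -> (rm1)
print(weights(span(G0 + [Ff])))  # [(0, 1), (6, 16), (8, 30), (10, 16), (16, 1)]
\end{verbatim}
For $\ell \ge 2$ one simply appends the truth tables of the remaining coordinate functions of a bent vectorial function; the resulting weight distribution is the subject of Theorem \ref{thm-bentvectf} below. We now return to the code ${\mathrm{RM}}_2(1,2m)$ itself.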
Its dual code is the $[2^{2m}, 2^{2m}- 1 -2m,4]$ Reed-Muller code of order $2m-2$. Both codes hold 3-designs since they are invariant under a 3-transitive affine group. Note that ${\mathbb{M}athrm{RM}}_2(1,2m)^{\mathrm{Paley}}erp$ is the unique, up to equivalence, binary linear code for the given parameters, hence it is equivalent to the extended binary linear Hamming code. Let $F(x)=(f_1(x), f_2(x), \cdots, f_\ell(x))$ be a $(2m, \ell)$ vectorial function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^\ell$. For each $i$, $1 \le i \le \ell$, we define a binary vector \begin{eqnarray*} F_i=(f_i(u_1), f_i(u_2), \cdots, f_i(u_q)) \in {\mathbb{M}athrm{GF}}(2)^{2^{2m}}, \end{eqnarray*} which is the truth table of the Boolean function $f_i(x)$ introduced in Section \ref{sec-333}. Let $\ell$ be an integer in the range $1 \le \ell \le m$. We now define a $(2m+1 + \ell) \times 2^{2m}$ matrix \begin{eqnarray} \label{G} G=G(f_{1}, \cdots, f_{\ell})=\left[ \begin{array}{c} G_0 \\ F_{1} \\ \vdots \\ F_{\ell} \end{array} \right], \end{eqnarray} where $G_0$ is the generator matrix of ${\mathbb{M}athrm{RM}}_2(1,2m)$. Let ${\mathcal{C}}(f_{1}, \cdots, f_{\ell})$ denote the binary code of length $2^{2m}$ with generator matrix $G(f_{1}, \cdots, f_{\ell})$ given by (\ref{G}). The dimension of the code has the following lower and upper bounds: $$ 2m+1 \leq \dim({\mathcal{C}}(f_{1}, \cdots, f_{\ell})) \leq 2m+1+\ell. $$ The following theorem gives a coding-theoretical characterization of bent vectorial functions. \begin{theorem}\label{thm-bentvectf} A $(2m, \ell)$ vectorial function $F(x)=(f_1(x), f_2(x), \cdots, f_\ell(x))$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^\ell$ is a bent vectorial function if and only if the code ${\mathcal{C}}(f_1, \cdots, f_\ell)$ with generator matrix $G$ given by (\ref{G}) has weight enumerator \begin{eqnarray}\label{eqn-wtenumerator111} 1 + (2^\ell-1)2^{2m} z^{2^{2m-1} - 2^{m-1}} + 2(2^{2m}-1)z^{2^{2m-1}} + (2^\ell-1)2^{2m} z^{2^{2m-1} + 2^{m-1}} + z^{2^{2m}}. \end{eqnarray} \end{theorem} \begin{proof} By the definition of $G$, the code ${\mathcal{C}}(f_1, \cdots, f_\ell)$ contains the first-order Reed-Muller code ${\mathbb{M}athrm{RM}}_2(1, 2m)$ as a subcode, having weight enumerator (\ref{rm1}). It follows from (\ref{G}) that every codeword of ${\mathcal{C}}(f_1, \cdots, f_\ell)$ must be the truth table of a Boolean function of the form \begin{eqnarray*} f_{(u, v, h)}(x)=\sum_{i=1}^\ell u_i f_i(x) + \sum_{j=0}^{2m-1} v_j {\mathbb{M}athrm{Tr}}_{2m/1}(w^jx) + h, \end{eqnarray*} where $u_i, v_j, h \in {\mathbb{M}athrm{GF}}(2)$, $x \in {\mathbb{M}athrm{GF}}(2^{2m})$. Suppose that $F(x)=(f_1(x), f_2(x), \cdots, f_\ell(x))$ is a $(2m,\ell)$ bent vectorial function. When $(u_1, \cdots, u_\ell)=(0, \cdots, 0)$, $(v_0, v_1, \cdots, v_{2m-1})$ runs over ${\mathbb{M}athrm{GF}}(2)^{2m}$ and $h$ runs over ${\mathbb{M}athrm{GF}}(2)$, the truth tables of the functions $f_{(u, v, h)}(x)$ form the code ${\mathbb{M}athrm{RM}}_2(1, 2m)$. Whenever $(u_1, \cdots, u_\ell) \neq (0, \cdots, 0)$, it follows from (\ref{G}) that $f_{(u, v, h)}(x)$ is a bent function, and the corresponding codeword has Hamming weight $2^{2m-1} {\mathrm{Paley}}m 2^{m-1}$. Since the all-one vector belongs to ${\mathbb{M}athrm{RM}}_2(1, 2m)$, the code ${\mathcal{C}}(f_1, \cdots, f_\ell)$ is self-complementary, and the desired weight enumerator of ${\mathcal{C}}(f_1, \cdots, f_\ell)$ follows. 
Suppose that ${\mathcal{C}}(f_1, \cdots, f_\ell)$ has weight enumerator given by \eqref{eqn-wtenumerator111}. Then ${\mathcal{C}}(f_1, \cdots, f_\ell)$ has dimension $2m+1+\ell$. Consequently, $\sum_{i=1}^\ell u_i f_i(x)$ is the zero function if and only if $(u_1, \cdots, u_\ell)=(0, \cdots, 0)$. It then follows that the codewords corresponding to $f_{(u, v, h)}(x)$ must have Hamming weight $2^{2m-1} {\mathrm{Paley}}m 2^{m-1}$ for all $u=(u_1, \cdots, u_\ell) \neq (0, \cdots, 0)$ and all $(v_0, v_1, \cdots, v_{2m-1}) \in {\mathbb{M}athrm{GF}}(2)^{2m}$. Notice that $$ \sum_{j=0}^{2m-1} v_j {\mathbb{M}athrm{Tr}}_{2m/1}(w^jx) $$ ranges over all linear functions from ${\mathbb{M}athrm{GF}}(2^m)$ to ${\mathbb{M}athrm{GF}}(2)$ when $(v_0, v_1, \cdots, v_{2m-1})$ runs over ${\mathbb{M}athrm{GF}}(2)^{2m}$. Consequently, $F(x)$ is a bent vectorial function. \end{proof} \begin{note} \label{Note6} Let $F(x)=(f_1(x), f_2(x), \cdots, f_m(x))$ be a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^m$. Then the code ${\mathcal{C}}(f_1, \cdots, f_m)$ has parameters $$ [2^{2m}, 3m+1, 2^{2m-1} - 2^{m-1}]. $$ In particular, if $m=2$, any code ${\mathcal{C}}(f_1, f_2)$ based on a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{4})$ to ${\mathbb{M}athrm{GF}}(2)^2$ has parameters $[16, 7, 6]$ and is optimal (cf. \cite{MG}). An $[n,k,d]$ code is optimal if $d$ is the maximum possible minimum distance for the given $n$ and $k$. If $m=3$, any code ${\mathcal{C}}(f_1, f_2, f_3)$ based on a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{6})$ to ${\mathbb{M}athrm{GF}}(2)^3$ has parameters $[64, 10, 28]$ and is optimal \cite{MG}. If $m=4$, any code ${\mathcal{C}}(f_1, \cdots, f_6)$ based on a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{8})$ to ${\mathbb{M}athrm{GF}}(2)^4$ has parameters $[256, 13, 120]$ and has the largest known minimum distance for the given code length and dimension \cite{MG}. \end{note} \begin{theorem} \label{ex6} Up to equivalence, there is exactly one $[16,7,6]$ code that can be obtained from a $(4,2)$ bent vectorial function. \end{theorem} \begin{proof} The weight enumerator of the second order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(2,4)$ is given by \begin{equation*} 1+140z^4 + 448z^6 + 870z^8 + 448z^{10} + 140z^{12} + z^{16}. \end{equation*} The truth table of a bent function $f$ from ${\mathbb{M}athrm{GF}}(2^4)$ to ${\mathbb{M}athrm{GF}}(2)$ is a codeword $c_f$ of ${\mathbb{M}athrm{RM}}_{2}(2,4)$ of weight 6. The linear code ${\mathcal{C}}(f)$ spanned by $c_f$ and ${\mathbb{M}athrm{RM}}_{2}(1,4)$ is a subcode of ${\mathbb{M}athrm{RM}}_{2}(2,4)$ of dimension 6, having weight enumerator \begin{equation*} 1 + 16z^6 + 30z^8 + 16z^{10} + z^{16}. \end{equation*} The codewords of ${\mathcal{C}}(f)$ of weight 6 form a symmetric 2-$(16,6,2)$ SDP design, whose blocks correspond to the supports of 16 bent functions. Now, let $(f_1,f_2)$ be a $(4,2)$ bent vectorial function. Then, the intersection of the codes ${\mathcal{C}}(f_1)$, ${\mathcal{C}}(f_2)$ consists of the first order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(1,4)$. It follows that the set of 448 codewords of weight 6 in ${\mathbb{M}athrm{RM}}_{2}(2,4)$ is a union $\cal{U}$ of 28 pairwise disjoint subsets of size 16, corresponding to the incidence matrices of symmetric 2-$(16,6,2)$ SDP designs associated with 28 different $[16,6]$ codes defined by single bent functions. 
If ${\mathcal{C}}(f_1,f_2)$ is a $[16,7]$ code defined by a bent vectorial function $(f_1,f_2)$, its weight enumerator is given by \begin{equation}\label{f1f2} 1 + 48z^6 + 30z^8 + 48z^{10} +z^{16}. \end{equation} The set of 48 codewords of weight 6 of ${\mathcal{C}}(f_1,f_2)$ is a union of the incidence matrices of three SDP designs from $\cal{U}$ with pairwise disjoint sets of blocks. A quick check shows that there are exactly 56 such collections of 48 codewords that generate a code having weight enumerator (\ref{f1f2}). Therefore, the number of distinct $[16,7,6]$ subcodes of ${\mathbb{M}athrm{RM}}_{2}(1,4)$ based on $(4,2)$ bent vectorial functions is 56. The $7 \times 16$ generator matrix $G$ of one such $[16,7,6]$ code is listed below: \begin{eqnarray*} \left[ \begin{array}{cccccccccccccccc} 0& 0& 0& 0& 0& 0& 0& 0& 1& 1& 1& 1& 1& 1& 1& 1 \\ 0& 0& 0& 0& 1& 1& 1& 1& 0& 0& 0& 0& 1& 1& 1& 1 \\ 0& 0& 1& 1& 0& 0& 1& 1& 0& 0& 1& 1& 0& 0& 1& 1 \\ 0& 1& 0& 1& 0& 1& 0& 1& 0& 1& 0& 1& 0& 1& 0& 1 \\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1 \\ 0& 0& 0& 1& 0& 1& 1& 1& 0& 1& 0& 0& 0& 0& 1& 0 \\ 0& 0& 0& 0& 0& 1& 0& 1& 0& 0& 1& 1& 0& 1& 1& 0 \end{array} \right]. \end{eqnarray*} The first five rows of $G$ form a generator matrix of ${\mathbb{M}athrm{RM}}_{2}(1,4)$, while the last two rows are codewords of weight 6 in ${\mathbb{M}athrm{RM}}_{2}(2,4)$. The full automorphism group of the $[16,7,6]$ code generated by $G$ is of order 5760. Since the order of the automorphism group of ${\mathbb{M}athrm{RM}}_{2}(1,4)$ is 322560, and \[ 322560/5760 = 56, \] it follows that all 56 $[16,7,6]$ codes based on $(4,2)$ bent vectorial functions are pairwise equivalent. \end{proof} The next two examples illustrate that there are at least three inequivalent optimal $[64,10,28]$ codes that are obtainable from bent vectorial functions from ${\mathbb{M}athrm{GF}}(2^{6})$ to ${\mathbb{M}athrm{GF}}(2)^3$. The parameters $[64,10,28]$ correspond to $m=3$ in Note \ref{Note6}. \begin{example} \label{bch6410} The binary cyclic $[63,10]$ code ${\mathcal{C}}$ with parity check polynomial $h(x)=(x+1)(x^3 + x^2 +1)(x^6 + x^5 + x^4 + x + 1)$ has weight enumerator \[ 1 + 196z^{27} + 252z^{28} + 63z^{31} +63z^{32}+252z^{35} + 196z^{36} +z^{63}. \] The $[63,7]$ subcode ${\mathcal{C}}'$ of ${\mathcal{C}}$ having check polynomial $h'(x)=(x+1)(x^6 + x^5 + x^4 + x + 1)$ has weight enumerator \[ 1 + 63z^{31} +63z^{32}+ z^{63}. \] The extended $[64,7]$ code $({\mathcal{C}}')^*$ of ${\mathcal{C}}'$ has weight enumerator \[ 1 + 126z^{32} + z^{64}, \] hence, $({\mathcal{C}}')^*$ is equivalent to the first order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(1,6)$. The extended $[64,10]$ code ${\mathcal{C}}^*$ of ${\mathcal{C}}$ has weight enumerator given by \begin{equation} \label{641028} 1 + 448z^{28} + 126z^{32} + 448 z^{36} + z^{64}. \end{equation} Since ${\mathcal{C}}^*$ contains a copy of the first order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(1,6)$ as a subcode, it follows from Theorem \ref{thm-bentvectf} that ${\mathcal{C}}^*$ can be obtained from a $(6,3)$ bent vectorial function from ${\mathbb{M}athrm{GF}}(2^6)$ to ${\mathbb{M}athrm{GF}}(2^3)$. The full automorphism group of ${\mathcal{C}}^*$ is of order \[677,376 = 2^9 \cdot 3^3 \cdot 7^2. \] Magma was used for these computations. 
\end{example} \begin{example} \label{example9} Let $M$ be the 7 by 64 $(0,1)$-matrix with the following structure: the $i$th column of the 6 by 64 submatrix $M'$ of $M$ consisting of its first six rows is the binary presentation of the number $i$ ($i =0, 1, \ldots 63$), while the last row of $M$ is the all-one row. Clearly, $M$ is a generator matrix of a binary linear $[64,7]$ code equivalent to the first order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(1,6)$. The first six rows of $M$ can be viewed as the truth tables of the single Boolean variables $x_1, x_2, \dots x_6$, while the seventh row of $M$ is the truth table of the constant ${\bf 1}$. We consider the Boolean bent functions given by \begin{eqnarray*} f_{1}(x_1,\ldots, x_6) & = & x_{1}x_6 + x_{2}x_{5} + x_{3}x_4, \\ f_{2}(x_1,\ldots, x_6) & = & x_{1}x_5 + x_{2}x_4 + x_{3}x_5 + x_{3}x_6,\\ f_{3}(x_1,\ldots, x_6) & = & x_{1}x_4 + x_{2}x_5 + x_{2}x_6 +x_{3}x_4 + x_{3}x_5 + x_{5}x_{6},\\ f_{4}(x_1,\ldots, x_6) & = & x_{1}x_4 +x_{2}x_3 + x_{3}x_6 + x_{5}x_6. \end{eqnarray*} The vectorial functions $F_{1}=(f_{1}, f_{2}, f_{3})$, $F_{2}=(f_{1}, f_{2}, f_{4})$ give via Theorem \ref{thm-bentvectf} binary linear codes ${\mathcal{C}}_1, \ {\mathcal{C}}_2$ with parameters $[64,10,28]$, having weight enumerator given by (\ref{641028}). The automorphism groups of the codes ${\mathcal{C}}_1, \ {\mathcal{C}}_2$ were computed using the computer-algebra package Magma \cite{magma}. The code ${\mathcal{C}}_1$ has full automorphism group of order \[ 10,752 = 2^{9}\cdot 3\cdot 7. \] The code ${\mathcal{C}}_2$ has full automorphism group of order \[ 4,032 = 2^{6}\cdot 3^{2}\cdot 7. \] Thus, ${\mathcal{C}}_1$, ${\mathcal{C}}_2$ and the extended cyclic code ${\mathcal{C}}^*$ from Example \ref{bch6410} are pairwise inequivalent. We note that the code ${\mathcal{C}}_1$ cannot be equivalent to any extended cyclic code because its group order is not divisible by 63. \end{example} \begin{note} \label{note10} The full automorphism group of ${\mathcal{C}}_1$ from Example \ref{example9} cannot be 2-transitive because its order is not divisible by 63. Thus, the code ${\mathcal{C}}_1$ does not satisfy the classical sufficient condition to support 2-designs based on the 2-transitivity of its automorphism group (recall that according to \cite{K69}, any 2-homogeneous group of degree 64 is necessarily 2-transitive). In addition, the minimum distance of its dual code ${{\mathcal{C}}_1}^{\mathrm{Paley}}erp$ is 4, thus the Assmus-Mattson theorem guarantees only 1-designs to be supported by ${\mathcal{C}}_1$. We will prove in the next section that all codes obtained from bent vectorial functions support 2-designs. \end{note} \section{A construction of $2$-designs from bent vectorial functions} The following theorem establishes that the binary codes based on bent vectorial functions support 2-designs, despite that these codes do not meet the conditions of the Assmus-Mattson theorem for 2-designs. \begin{theorem}\label{main} Let $F(x)=(f_1(x), f_2(x), \cdots, f_\ell(x))$ be a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^\ell$, where $m\ge 2$ and $1\le \ell \le m$. Let ${\mathcal{C}} = {\mathcal{C}}(f_1, \cdots, f_\ell)$ be the binary linear code with parameters $[2^{2m}, 2m+1 + \ell, 2^{2m-1} - 2^{m-1}]$ defined in Theorem \ref{thm-bentvectf}. 
(a) The codewords of ${\mathcal{C}}$ of minimum weight hold a 2-design ${\mathbb{D}}$ with parameters \begin{equation} \label{par} 2-(2^{2m}, 2^{2m-1} - 2^{m-1}, (2^\ell -1)(2^{2m-2} - 2^{m-1})). \end{equation} (b) The codewords of ${\mathcal{C}}$ of weight $2^{2m-1} + 2^{m-1}$ hold a 2-design $\overline{{\mathbb{D}}}$ with parameters \begin{equation}\label{prc} 2-(2^{2m}, 2^{2m-1} + 2^{m-1}, (2^\ell -1)(2^{2m-2} + 2^{m-1})). \end{equation} \end{theorem} \begin{proof} Since ${\mathcal{C}}$ contains ${\mathbb{M}athrm{RM}}_2(1,2m)$, and the minimum distance of ${\mathbb{M}athrm{RM}}_2(1,2m)^{\mathrm{Paley}}erp$ is 4, the minimum distance $d^{{\mathrm{Paley}}erp}$ of ${\mathcal{C}}^{{\mathrm{Paley}}erp}$ is at least 4. Applying the MacWilliams transform (see, for example \cite[p. 41]{vanLint}) to the weight enumerator (\ref{eqn-wtenumerator111}) of ${\mathcal{C}}$ shows that $d^{{\mathrm{Paley}}erp} =4$. It follows from the Assmus-Mattson theorem (Theorem \ref{thm-AM1}) that the codewords of any given nonzero weight $w< 2^{2m}$ in ${\mathcal{C}}$ hold a 1-design. However, we will prove that ${\mathcal{C}}$ actually holds 2-designs, despite that the Assmus-Mattson theorem guarantees only 1-designs to be supported by ${\mathcal{C}}$. Since the subcode ${\mathbb{M}athrm{RM}}_2(1,2m)$ of ${\mathcal{C}}$ contains all codewords of ${\mathcal{C}}$ of weight $2^{2m-1}$, the codewords of this weight hold a 3-design $\cal{A}$ with parameters 3-$(2^{2m}, 2^{2m-1}, 2^{2m-2} -1)$. We note that $\cal{A}$ is a 2-design with \begin{equation}\label{eq2m} \lambda_2 = \frac{2^{2m}-2}{2^{2m-1} -2}\cdot(2^{2m-2} -1)=2^{2m-1}-1. \end{equation} Let ${\mathbb{D}}$ be the 1-design supported by codewords of weight $2^{2m-1} - 2^{m-1}$. Since the number of codewords of weight $2^{2m-1} - 2^{m-1}$ is equal to $(2^\ell -1)2^{2m}$, ${\mathbb{D}}$ is a 1-design with parameters 1-$(2^{2m}, 2^{2m-1} - 2^{m-1}, (2^\ell -1)(2^{2m-1} - 2^{m-1}))$. Every codeword of ${\mathcal{C}}$ of weight $2^{2m-1} + 2^{m-1}$ is the sum of a codeword of weight $2^{2m-1} - 2^{m-1}$ and the all-one vector. Thus, the codewords of weight $2^{2m-1} + 2^{m-1}$ hold a 1-design $\overline{{\mathbb{D}}}$ having parameters 1-$(2^{2m}, 2^{2m-1} + 2^{m-1}, (2^\ell -1)(2^{2m-1} + 2^{m-1}))$. Clearly, $\overline{{\mathbb{D}}}$ is the complementary design of ${\mathbb{D}}$, that is, every block of $\overline{{\mathbb{D}}}$ is the complement of some block of ${\mathbb{D}}$. Let $M$ be the $2^{2m+1+\ell} \times 2^{2m}$ $(0,1)$-matrix having as rows the codewords of ${\mathcal{C}}$. Since $d^{{\mathrm{Paley}}erp} =4$, $M$ is an orthogonal array of strength 3, that is, for every integer $i$, $1 \le i \le 3$, and for every set of $i$ distinct columns of $M$, every binary vector with $i$ components appears exactly $2^{2m+1+\ell - i}$ times among the rows of the $2^{2m+1+\ell} \times i$ submatrix of $M$ formed by the chosen $i$ columns. In particular, any $2^{2m+1+\ell} \times 2$ submatrix consisting of two distinct columns of $M$ contains the binary vector $(1,1)$ exactly $2^{2m+\ell -1}$ times as a row. Among these $2^{2m+\ell -1}$ rows, one corresponds to the all-one codeword of ${\mathcal{C}}$, $2^{2m-1}-1$ rows correspond to codewords of weight $2^{2m-1}$ (by equation (\ref{eq2m})), and the remaining \begin{equation}\label{eqw3} 2^{2m +\ell -1} - 1 - (2^{2m-1}-1)=(2^{\ell} -1)2^{2m-1} \end{equation} rows are labeled by codewords of weight $2^{2m-1} {\mathrm{Paley}}m 2^{m-1}$, corresponding to blocks of ${\mathbb{D}}$ and $\overline{{\mathbb{D}}}$. 
Let now $1\le c_1 < c_2 \le 2^{2m}$ be two distinct columns of $M$. These two columns label two distinct points of ${\mathbb{D}}$ (resp. $\overline{{\mathbb{D}}}$). Let $\lambda$ denote the number of blocks of ${\mathbb{D}}$ that are incident with $c_1$ and $c_2$. Then the pair $\{ c_1, c_2 \}$ is incident with \begin{equation}\label{eqw4} (2^\ell -1)2^{2m} -2(2^\ell -1)(2^{2m-1}- 2^{m-1}) + \lambda =(2^\ell -1)2^m + \lambda \end{equation} blocks of the complementary design $\overline{{\mathbb{D}}}$. It follows from (\ref{eqw4}) and (\ref{eqw3}) that \[ (2^\ell -1)2^m + 2\lambda =(2^{\ell} -1)2^{2m-1}, \] whence \[ \lambda = (2^\ell -1)(2^{2m-2} - 2^{m-1}), \] and the statements (a) and (b) of the theorem follow. \end{proof} The special case $\ell =1$ in Theorem \ref{main} implies as a corollary the following result of Dillon and Schatz \cite{DS87}. \begin{theorem}\label{thm-DSthm} Let $f(x)$ be a bent function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)$. Then the code ${\mathcal{C}}(f)$ has parameters $[2^{2m}, 2m+2, 2^{2m-1} - 2^{m-1}]$ and weight enumerator (\ref{sdpcode}). The minimum weight codewords form a symmetric SDP design with parameters (\ref{eqn-SDPparameters}). \end{theorem} \begin{proof} The weight enumerator (\ref{sdpcode}) is obtained by substitution $\ell =1$ in (\ref{eqn-wtenumerator111}). Since the number of minimum weight vectors is equal to the code length $2^{2m}$, the 2-design ${\mathbb{D}}$ supported by the codewords of minimum weight is symmetric. Since every two blocks $B_1, B_2$ of ${\mathbb{D}}$ intersect in $\lambda = 2^{2m-2} - 2^{m-1}$ points, the sum of the two codewords supporting $B_1$, $B_2$ is a codeword $c_{1,2} $ of weight $2^{2m-1}$ that belongs to the subcode ${\mathbb{M}athrm{RM}}_2(1,2m)$. Let $B_3$ be a block distinct from $B_1$ and $B_2$, and let $c_3$ be the codeword associated with $B_3$. Since $c_3$ is the truth table of a bent function, the sum $c_{1,2}+c_3$ is a codeword of weight $2^{2m-1} {\mathrm{Paley}}m 2^{m-1}$, thus its support is either a block or the complement of a block of ${\mathbb{D}}$. Therefore, ${\mathbb{D}}$ is an SDP design. \end{proof} \begin{theorem}\label{minw} The code ${\mathcal{C}} = {\mathcal{C}}(f_1, \cdots, f_\ell)$ from Theorem \ref{main} is spanned by the set of codewords of minimum weight. \end{theorem} \begin{proof} All we need to prove is that the copy of ${\mathbb{M}athrm{RM}}_{2}(1,2m)$ which is a subcode of ${\mathcal{C}}$, is spanned by some minimum weight codewords of ${\mathcal{C}}$. It is known that the 2-rank (that is, the rank over ${\mathbb{M}athrm{GF}}(2)$) of the incidence matrix of any symmetric SDP design ${\mathbb{D}}$ with $2^{2m}$ points is equal to $2m+2$ (for a proof, see \cite{JT92}). This implies that the binary code spanned by ${\mathbb{D}}$ contains the first order Reed-Muller code ${\mathbb{M}athrm{RM}}_{2}(1,2m)$. Consequently the minimum weight vectors of the subcode ${\mathcal{C}}_{f_1} = {\mathcal{C}}(f_1)$ of ${\mathcal{C}}={\mathcal{C}}(f_1,\ldots, f_{\ell})$ span the subcode of ${\mathcal{C}}$ being equivalent to ${\mathbb{M}athrm{RM}}(1,2m)$. \end{proof} \begin{corollary}\label{thm-equivcodedesign} Two codes ${\mathcal{C}}_f ={\mathcal{C}}(f_1, \cdots, f_s)$, ${\mathcal{C}}_g ={\mathcal{C}}(g_1, \cdots, g_s)$ obtained from bent vectorial functions $F(f_1, \cdots, f_s)$, $F(g_1, \cdots, g_s)$ are equivalent if and only if the designs supported by their minimum weight vectors are isomorphic. \end{corollary} \begin{example} \label{ex15} Let $m=5$. 
Let $w$ be a generator of ${\mathrm{GF}}(2^{10})^*$ with $w^{10} + w^6 + w^5 + w^3 + w^2 + w + 1=0$. Let $\beta=w^{2^5+1}$. Then $\beta$ is a generator of ${\mathrm{GF}}(2^5)^*$. Define $\beta_j=\beta^j$ for $1 \leq j \leq 5$. Then $\{\beta_1, \beta_2, \beta_3, \beta_4, \beta_5\}$ is a basis of ${\mathrm{GF}}(2^5)$ over ${\mathrm{GF}}(2)$. Now consider the bent vectorial function $(f_1, f_2, f_3, f_4, f_5)$ in Example \ref{exam-bentvectfunc1} and the code ${\mathcal{C}}(f_1, f_2, f_3)$. For $i=1$ and $i=7$, the two resulting codes ${\mathcal{C}}(f_1, f_2, f_3)$ have parameters $[1024, 14, 496]$ and weight enumerator $$ 1 + 7168z^{496} + 2046z^{512} + 7168z^{528} +z^{1024}. $$ The two codes are not equivalent according to Magma. It follows from Corollary \ref{thm-equivcodedesign} that the two designs with parameters $2$-$(1024, 496, 1680)$ supported by these codes are not isomorphic. \end{example} \begin{note} \label{note-conj} Examples \ref{bch6410} and \ref{example9} give three inequivalent $[64,10,28]$ codes, and Example \ref{ex15} lists two inequivalent codes with parameters $[1024, 14, 496]$, obtained from bent vectorial functions. As we pointed out in Note \ref{note10}, the code ${\mathcal{C}}_1$ from Example \ref{example9} does not have a 2-transitive group. \end{note} These examples, as well as further evidence provided by Theorem \ref{thnew} below, suggest the following plausible statement that we formulate as a conjecture. \begin{conj} \label{conjecture} For any given $\ell$ in the range $1\le \ell \le m$, the number of inequivalent codes with parameters $[2^{2m}, 2m+1 +\ell, 2^{2m-1}-2^{m-1}]$ obtained from $(2m,\ell)$ bent vectorial functions via Theorem \ref{thm-bentvectf} grows exponentially with linear growth of $m$, and most of these codes do not admit a 2-transitive automorphism group. \end{conj} As is customary, by ``most'' we mean that the ratio of the number of 2-transitive codes to the total number of codes approaches zero as $m$ grows to infinity. The next theorem proves Conjecture \ref{conjecture} in the case $\ell = 1$. \begin{theorem} \label{thnew} (i) The number of inequivalent $[2^{2m},2m+2,2^{2m-1}-2^{m-1}]$ codes obtained from single bent functions from ${\mathrm{GF}}(2^{2m})$ to ${\mathrm{GF}}(2)$ grows exponentially with linear growth of $m$. (ii) For every given $m\ge 2$, there is exactly one (up to equivalence) code with parameters $[2^{2m},2m+2,2^{2m-1}-2^{m-1}]$ obtained from a bent function from ${\mathrm{GF}}(2^{2m})$ to ${\mathrm{GF}}(2)$ that admits a 2-transitive automorphism group. \end{theorem} \begin{proof} (i) By the Dillon-Schatz Theorem \ref{thm-DSthm}, the minimum weight codewords of a code ${\mathcal{C}}(f)$ with parameters $[2^{2m},2m+2,2^{2m-1}-2^{m-1}]$ obtained from a bent function $f$ form a symmetric SDP design ${\mathbb{D}}(f)$ with parameters (\ref{eqn-SDPparameters}). It follows from Theorem \ref{minw} that two codes ${\mathcal{C}}(f_1)$, ${\mathcal{C}}(f_2)$ obtained from bent functions $f_1$, $f_2$ are equivalent if and only if the corresponding designs ${\mathbb{D}}(f_1)$, ${\mathbb{D}}(f_2)$ are isomorphic. Since the number of nonisomorphic SDP designs with parameters (\ref{eqn-SDPparameters}) grows exponentially when $m$ grows to infinity (Kantor \cite{Kantor83}), the proof of part (i) is complete.
(ii) It follows from Theorem \ref{minw} that the automorphism group of a code ${\mathcal{C}}(f)$ obtained from a bent function $f$ coincides with the automorphism group of the design ${\mathbb{D}}(f)$ supported by the codewords of minimum weight. The design ${\mathbb{D}}(f)$ is a symmetric 2-design with parameters (\ref{eqn-SDPparameters}). It was proved by Kantor \cite{K85-2} that for every $m\ge 2$, there is exactly one (up to isomorphism) symmetric design with parameters (\ref{eqn-SDPparameters}) that admits a 2-transitive automorphism group. This completes the proof of part (ii). \end{proof} By Theorem \ref{thm-DSthm}, the codes based on single bent functions support symmetric 2-designs. The next theorem determines the block intersection numbers of the design ${\mathbb{D}}(f_1, \cdots, f_\ell)$ supported by the minimum weight vectors in the code ${\mathcal{C}}(f_1, \cdots, f_\ell)$ from Theorem \ref{main}. \begin{theorem}\label{thm-int} Let ${\mathbb{D}}={\mathbb{D}}(f_1,\ldots,f_\ell)$, ($1\le \ell \le m$), be a 2-design with parameters $$ 2-(2^{2m}, 2^{2m-1} - 2^{m-1}, (2^\ell -1)(2^{2m-2} - 2^{m-1})) $$ supported by the minimum weight codewords of a code ${\mathcal{C}} ={\mathcal{C}}(f_1,\ldots,f_\ell)$ defined as in Theorem \ref{main}. (a) If $\ell =1$, ${\mathbb{D}}$ is a symmetric SDP design, with block intersection number $\lambda = 2^{2m-2} - 2^{m-1}$. (b) If $2 \leq \ell \leq m$, ${\mathbb{D}}$ has the following three block intersection numbers: \begin{equation}\label{s123} s_1 = 2^{2m -2} - 2^{m-2}, \ s_2 = 2^{2m -2} - 2^{m-1}, \ s_3 = 2^{2m -2} - 3\cdot 2^{m-2}. \end{equation} For every block of ${\mathbb{D}}$, these intersection numbers occur with multiplicities \begin{equation}\label{n123} n_1 =2^{m}(2^m +1)(2^{\ell-1} -1), \ n_2 = 2^{2m} -1, \ n_3 = 2^{m}(2^m -1)(2^{\ell-1} -1). \end{equation} \end{theorem} \begin{proof} Case (a) follows from Theorem \ref{thm-DSthm}. (b) Assume that $2 \leq \ell \leq m$. Let $w_1$, $w_2$ be two distinct codewords of weight $2^{2m-1} - 2^{m-1}$. The Hamming distance $d(w_1,w_2)$ between $w_1$ and $w_2$ is equal to \[ 2(2^{2m-1} - 2^{m-1}) - 2s, \] where $s$ is the size of the intersection of the supports of $w_1$ and $w_2$. Since the distance between $w_1$ and $w_2$ is either $2^{2m-1} - 2^{m-1}$, or $2^{2m-1}$, or $2^{2m-1} + 2^{m-1}$, the size $s$ of the intersection of the two blocks of ${\mathbb{D}}$ supported by $w_1$, $w_2$ can take only the values $s_i$, $1\le i \le 3$, given by (\ref{s123}). Let $B$ be a block of ${\mathbb{D}}$ supported by a codeword of weight $2^{2m-1} - 2^{m-1}$, and let $n_i$, ($1\le i \le 3$), denote the number of blocks of ${\mathbb{D}}$ that intersect $B$ in $s_i$ points. Let ${\bf r} = (2^\ell -1)(2^{2m-1} - 2^{m-1})$ denote the number of blocks of ${\mathbb{D}}$ containing a single point, and let $b = (2^\ell -1)2^{2m}$ denote the total number of blocks of ${\mathbb{D}}$. Finally, let $k = 2^{2m-1} - 2^{m-1}$ denote the size of a block, and let $\lambda=(2^\ell -1)(2^{2m-2} - 2^{m-1})$ denote the number of blocks containing two points. We have \begin{eqnarray*} n_1 + n_2 + n_3 & = & b - 1,\\ s_{1}n_1 + s_{2}n_2 + s_{3}n_3 & = & k({\bf r} - 1), \\ s_{1}(s_{1} - 1)n_1 + s_{2}(s_{2} -1)n_2 + s_{3}(s_{3} -1)n_3 & = & k(k-1)(\lambda -1). \end{eqnarray*} The second and the third equation count in two ways the appearances of single points and ordered pairs of points of $B$ in other blocks of ${\mathbb{D}}$. The unique solution of this system of equations for $n_1,\ n_2, \ n_3$ is given by (\ref{n123}). 
\end{proof} \begin{note} A {\it bent set} is a set $S$ of bent functions such that the sum of every two functions from $S$ is also a bent function \cite{BK}. Since every $(2m,\ell)$ bent vectorial function gives rise to a bent set consisting of $2^\ell$ functions \cite[Proposition~1]{BK}, it follows from \cite[Theorem~1]{BK} that the set of blocks of the design ${\mathbb{D}}$ is a union of $2^\ell-1$ linked system of symmetric 2-$(2^{2m}, 2^{2m-1}-2^{m-1},2^{2m-2}-2^{m-1})$ designs. This gives an alternative proof of Theorem~\ref{main} and Theorem~\ref{thm-int}(b). \end{note} \begin{note} For every integer $m \ge 2$, any code ${\mathcal{C}}(f_1, f_2, \ldots, f_m)$ based on a bent vectorial function $F(x)=(f_1(x), f_2(x), \cdots, f_m(x))$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2)^m$, contains $2^{m}-1$ subcodes ${\mathcal{C}}'={\mathcal{C}}'(f_{j_1},\ldots, f_{j_s})$, $j_1 < \cdots < j_s \le m$, such that $$ {\mathbb{M}athrm{RM}}_2(1, 2m) \subset {\mathcal{C}}' \subseteq {\mathcal{C}}(f_1, \ldots, f_m). $$ Each subcode ${\mathcal{C}}'$ holds $2$-designs. This may be the only known chain of linear codes, included in each other, other than the chain of the Reed-Muller codes, \[ {\mathbb{M}athrm{RM}}_2(1, 2m) \subset {\mathbb{M}athrm{RM}}_2(2, 2m) \subset \cdots \subset {\mathbb{M}athrm{RM}}_2(m-2, 2m). \] such that all codes in the chain support nontrivial 2-designs. \end{note} \begin{note} We would demonstrate that the characterization of bent vectorial functions in Theorem \ref{thm-bentvectf} can be used to construct bent vectorial functions. To this end, consider the extended binary narrow-sense primitive BCH code of length $2^{2m}-1$ and designed distance $2^{2m-1}-1-2^{m-1}$, which is affine-invariant and holds $2$-designs \cite{DingZhou}. This code has the desired weight enumerator of (\ref{eqn-wtenumerator111}) for $\ell = m$ \cite{DingZhou}. It can be proved with the Delsarte theorem that the trace representation of this code is equivalent to the following code: \[\left\{\left(f_{a,b,h}(x)\right)_{x\in{\mathbb{M}athrm{GF}}(2^{2m})}: a \in {\mathbb{M}athrm{GF}}(2^m), \, b \in {\mathbb{M}athrm{GF}}(2^{2m}), \, h \in {\mathbb{M}athrm{GF}}(2) \right\},\] where \[f_{a,b,h}(x)= {\mathbb{M}athrm{Tr}}_{m/1}\left[a {\mathbb{M}athrm{Tr}}_{2m/m}\left(x^{1+2^{m-1}}\right) \right] + {\mathbb{M}athrm{Tr}}_{2m/1}(bx) + h.\] It then follows from Theorem \ref{thm-bentvectf} that ${\mathbb{M}athrm{Tr}}_{2m/m}(x^{1+2^{m-1}})$ is a bent vectorial function from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2^m)$. Note that this bent vectorial function may not be new. But our purpose here is to show that bent vectorial functions could be constructed from special linear codes. Conversely, we could say that the extended narrow-sense BCH code of length $2^{2m}-1$ and designed distance $2^{2m-1}-1-2^{m-1}$ is in fact generated from the bent vectorial function ${\mathbb{M}athrm{Tr}}_{2m/m}(x^{1+2^{m-1}})$ from ${\mathbb{M}athrm{GF}}(2^{2m})$ to ${\mathbb{M}athrm{GF}}(2^m)$ using the construction of Note \ref{note-tracecons}. Example \ref{bch6410} gives a demonstration of that. Thus, all known binary codes with the weight enumerator (\ref{eqn-wtenumerator111}) for some $1\le \ell \le m$ and arbitrary $m\ge 2$ are obtained from the bent vectorial function construction. As shown in Example \ref{ex6}, all $[16,7,6]$ codes obtained from $(4,2)$ bent vectorial functions are equivalent. 
Example \ref{example9} shows that there are at least three inequivalent $[64, 10, 28]$ binary codes from bent vectorial functions, one of these codes being an extended BCH code. \end{note} \begin{note} It is known that two designs ${\mathbb{D}}(f)$ and ${\mathbb{D}}(g)$ from two single bent Boolean functions $f$ and $g$ on ${\mathrm{GF}}(2^{2m})$ are isomorphic if and only if $f$ and $g$ are weakly affinely equivalent \cite{DS87}. Although the classification of bent Boolean functions into weakly affinely equivalent classes is open, the results from \cite{Kantor83} and \cite{DS87} imply that the number of nonisomorphic SDP designs and inequivalent bent functions in $2m$ variables grows exponentially with linear growth of $m$. \end{note} \begin{note}\label{note-equivconstruct} Two $(n, \ell)$ vectorial Boolean functions $(f_1(x), \cdots, f_\ell(x))$ and $(g_1(x), \cdots, g_\ell(x))$ from ${\mathrm{GF}}(2^n)$ to ${\mathrm{GF}}(2)^\ell$ are said to be \emph{EA-equivalent} if there are an automorphism $A$ of $({\mathrm{GF}}(2^n), +)$, a homomorphism $L$ from $({\mathrm{GF}}(2^n),+)$ to $({\mathrm{GF}}(2)^\ell, +)$, an $\ell \times \ell$ invertible matrix $M$ over ${\mathrm{GF}}(2)$, an element $a \in {\mathrm{GF}}(2^n)$, and an element $b \in {\mathrm{GF}}(2)^\ell$ such that \[(g_1(x), \cdots, g_\ell(x))= (f_1(A(x)+a), \cdots, f_\ell(A(x)+a))M +L(x) +b \] for all $x \in {\mathrm{GF}}(2^n)$. Let $(f_1(x), \cdots, f_\ell(x))$ and $(g_1(x), \cdots, g_\ell(x))$ be two bent vectorial functions from ${\mathrm{GF}}(2^{2m})$ to ${\mathrm{GF}}(2)^\ell$. We conjecture that the designs ${\mathbb{D}}(f_{1}, \cdots, f_{\ell})$ and ${\mathbb{D}}(g_{1}, \cdots, g_{\ell})$ are isomorphic if and only if $(f_1(x), \cdots, f_\ell(x))$ and $(g_1(x), \cdots, g_\ell(x))$ are EA-equivalent. The reader is invited to attack this open problem. \end{note} ~\\ Suppose that ${\mathbb{D}}$ is a 2-design with parameters (\ref{par}) obtained from a bent vectorial function $F(x)=(f_1(x), f_2(x), \cdots, f_\ell(x))$, ($1 \le \ell \le m$), via the construction from Theorem \ref{main}. Let $\cal{B}$ be the block set of ${\mathbb{D}}$. If $B$ is a block of ${\mathbb{D}}$, we consider the collection ${\mathcal{B}}^{de}$ of new blocks consisting of the intersections $B \cap B'$ such that $B' \in \cal{B}$ and $| B \cap B' |=2^{2m-2} - 2^{m-1}$. \begin{theorem} \label{t25} For each block $B$ of ${\mathbb{D}}$, the incidence structure $(B, {\mathcal{B}}^{de})$ is a quasi-symmetric design with parameters $$ 2-( 2^{2m-1}-2^{m-1},\, 2^{2m-2}-2^{m-1},\, 2^{2m-2}-2^{m-1}-1) $$ and intersection numbers $2^{2m-3} - 2^{m-2}$ and $2^{2m-3} - 2^{m-1}$. \end{theorem} \begin{proof} By Theorem \ref{thm-int}, there are exactly $2^{2m} -1$ blocks that intersect $B$ in $2^{2m-2}-2^{m-1}$ points. Together with $B$, these blocks form a symmetric SDP design $D$ with parameters 2-$(2^{2m}, 2^{2m-1} - 2^{m-1}, 2^{2m-2} - 2^{m-1})$. The incidence structure $(B, {\mathcal{B}}^{de})$ is a derived design of $D$. It was proved in \cite{JT92} that each derived design of a symmetric SDP 2-$(2^{2m}, 2^{2m-1} - 2^{m-1}, 2^{2m-2} - 2^{m-1})$ design is a quasi-symmetric design with intersection numbers $2^{2m-3} - 2^{m-2}$ and $2^{2m-3} - 2^{m-1}$, having the additional property that the symmetric difference of every two blocks is either a block or the complement of a block. \end{proof} \begin{note}\label{note-tracecons} Let $m >1$ be an integer.
Let $F$ be a bent vectorial function from ${\mathrm{GF}}(2^{2m})$ to ${\mathrm{GF}}(2^m)$. Let $A$ be a subgroup of order $2^s$ of $({\mathrm{GF}}(2^m), +)$. Define a binary code by \begin{eqnarray*} {\mathcal{C}}_{A}:=\{({\mathrm{Tr}}_{m/1}(aF(x))+{\mathrm{Tr}}_{2m/1}(bx)+c)_{x \in {\mathrm{GF}}(2^{2m})}: a \in A, b \in {\mathrm{GF}}(2^{2m}), c \in {\mathrm{GF}}(2)\}. \end{eqnarray*} It can be shown that ${\mathcal{C}}_{A}$ can be viewed as a code ${\mathcal{C}}(f_{i_1}, \cdots, f_{i_s})$ obtained from a bent vectorial function $(f_{i_1}, \cdots, f_{i_s})$. \end{note} \section{Summary and concluding remarks} The contributions of this paper are the following. \begin{itemize} \item A coding-theoretic characterization of bent vectorial functions (Theorem \ref{thm-bentvectf}). \item A construction of a two-parameter family of four-weight binary linear codes with parameters $[2^{2m}, 2m+1+\ell, 2^{2m-1}-2^{m-1}]$ for all $1 \leq \ell \leq m$ and all $m\ge 2$, obtained from $(2m, \ell)$ bent vectorial functions (Theorem \ref{main}). The parameters of these codes appear to be new when $2 \leq \ell \leq m-1$. This family of codes includes some optimal codes, as well as codes meeting the BCH bound. These codes do not satisfy the conditions of the Assmus-Mattson theorem, but nevertheless hold $2$-designs. It is plausible that most of these codes do not admit 2-transitive automorphism groups (Conjecture \ref{conjecture} and Theorem \ref{thnew}). \item A new construction of a two-parameter family of $2$-designs with parameters \begin{eqnarray} \label{pm} 2\mbox{--}(2^{2m}, \ 2^{2m-1}-2^{m-1}, \ (2^\ell-1)(2^{2m-2}-2^{m-1})), \end{eqnarray} and having three block intersection numbers, where $2\le \ell \le m$, based on bent vectorial functions (Theorem \ref{main} and Theorem \ref{thm-int}). This construction is a generalization of the construction of SDP designs from single bent functions given in \cite{DS87}. \item The number of nonisomorphic designs with parameters (\ref{pm}) in the special case when $\ell =1$ grows exponentially with $m$ by a known theorem of Kantor \cite{Kantor83}. It is an interesting open problem to prove that the number of nonisomorphic designs with parameters (\ref{pm}) grows exponentially for any fixed $\ell >1$. \end{itemize} Finally, we would like to mention that vectorial Boolean functions were employed in a different way to construct binary linear codes in \cite{TCZ17}. The codes from \cite{TCZ17} have different parameters from the codes described in this paper. \end{document}
\begin{document} \selectlanguage{english} \title{Fluid models with phase transition for kinetic equations in swarming} \author{Miha\"i BOSTAN \thanks{Aix Marseille Universit\'e, CNRS, Centrale Marseille, Institut de Math\'ematiques de Marseille, UMR 7373, Ch\^ateau Gombert 39 rue F. Joliot Curie, 13453 Marseille FRANCE. E-mail : {\tt [email protected]}} \;,\; Jos\'e Antonio CARRILLO \thanks{Department of Mathematics, Imperial College London, London SW7 2AZ UK. E-mail : {\tt [email protected]}} } \date{ (\today)} \maketitle \begin{abstract} We concentrate on kinetic models for swarming with individuals interacting through self-propelling and friction forces, alignment and noise. We assume that the velocity of each individual relaxes to the mean velocity. In our present case, the equilibria depend on the density and the orientation of the mean velocity, whereas the mean speed is not anymore a free parameter and a phase transition occurs in the homogeneous kinetic equation. We analyze the profile of equilibria for general potentials identifying a family of potentials leading to phase transitions. Finally, we derive the fluid equations when the interaction frequency becomes very large. \end{abstract} \paragraph{Keywords:} Swarming, Cucker-Smale model, Phase transition. \paragraph{AMS classification:} 92D50, 82C40, 92C10. \\ \\ \section{Introduction} \label{intro} This paper concerns the derivation of fluid models for populations of self-propelled individuals, with alignment and noise \cite{CarDorPan09, UAB25, ChuDorMarBerCha07} starting from their kinetic description. The alignment between particles is imposed by relaxing the individuals velocities towards the mean velocity \cite{CFRT10, review, CS2, HL08, HT08, MT11}. We refer to \cite{Neu77,BraHep77,Dob79,CCR10,BCC11,BCC12,HLL09,CCH,CCHS} and the references therein for a derivation of kinetic equations for collective behavior from microscopic models. We concentrate on models with phase transition \cite{BD14, BCCD16, Li19, DFL10, DFL15, FL11, VicCziBenCohSho95}. We denote by $f = f(t,x,v)\geq 0$ the particle density in the phase space $(x,v) \in \R^d \times \R^d$, with $d \geq 2$. The self-propulsion and friction mechanism writes $\Divv \{f \nabla _v V(|\cdot|)\}$, where $v \mapsto V(|v|)$ is a confining potential. When considering $V_{\alpha, \beta} (|v|) = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}$, with $\alpha, \beta >0$, we obtain the term $\Divv \{f(\beta |v|^2 - \alpha ) v \}$ see \cite{BosCar13, BosCar15} and also \cite{BosAsyAna, BosTraEquSin, BosGuiCen3D} for results based on averaging methods in magnetic confinement. The relaxation towards the mean velocity is given by $\Divv \{f (v - u[f]) \}$ cf. \cite{DM08}, where for any particle density the notation $u[f]$ stands for the mean velocity \[ u[f] = \frac{\intv{f(v)\;v}}{\intv{f(v)}}. \] Including noise with respect to the velocity variable, we obtain the Fokker-Planck type equation \begin{equation} \label{Equ1} \partial _t f + v \cdot \nabla _x f = Q(f):= \Divv\{\sigma \nabla _v f + f(v - u[f]) + f \nabla _v V(|\cdot|)\},\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d\,. \end{equation} When considering large time and space scales in \eqref{Equ1}, we are led to the kinetic equation \begin{equation} \label{Equ3} \partial _t \fe + v \cdot \nabla _x \fe = \frac{1}{\eps} Q(\fe),\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d. \end{equation} We investigate the asymptotic behavior of the family $(\fe)_{\eps >0}$, when $\eps$ becomes small. 
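For the reader's convenience, we sketch one standard way to arrive at \eqref{Equ3}; this is only a formal computation, under the assumption that $f$ is a smooth solution of \eqref{Equ1}. Observing the dynamics on time and space scales of order $1/\eps$ amounts to setting \[ \fe (t,x,v) = f \left ( \frac{t}{\eps}, \frac{x}{\eps}, v \right ),\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d. \] Since the operator $Q$ acts only on the velocity variable and is local in $(t,x)$, we obtain \[ \partial _t \fe + v \cdot \nabla _x \fe = \frac{1}{\eps} \left ( \partial _t f + v \cdot \nabla _x f \right ) \left ( \frac{t}{\eps}, \frac{x}{\eps}, v \right ) = \frac{1}{\eps} Q \left ( f \left ( \frac{t}{\eps}, \frac{x}{\eps}, \cdot \right ) \right ) (v) = \frac{1}{\eps} Q ( \fe (t,x,\cdot) ) (v), \] which is exactly \eqref{Equ3}.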
We expect that the limit density $f(t,x,\cdot) = \lime \fe (t,x,\cdot)$ is an equilibrium for the interaction mechanism \[ Q(f(t,x,\cdot)) = 0,\;\;(t,x) \in \R_+ \times \R^d. \] For any $u \in \R^d$ we introduce the notations \[ \Phiu (v) = \frac{|v - u|^2}{2} + V(|v|),\;\;Z(\sigma, u) = \intv{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) },\;\;M_u (v) = \frac{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) }{Z(\sigma, u)}. \] Actually the function $Z$ depends only on $\sigma$ and $|u|$, see Proposition \ref{Current}, and thus we will write $Z = Z(\sigma, l = |u|)$. Notice that for any smooth particle density $f$ and any $u \in \R^d$ we have \begin{equation*} \sigma \nabla _v f + f(v - u) + f \nabla _v V(|\cdot|) = \sigma M_u (v) \nabla _v \left ( \frac{f}{M_u}\right ) \end{equation*} leading, with the choice $u = u[f]$, to the following representation formula \[ Q(f) = \sigma \Divv \left ( \muf \nabla _v \left ( \frac{f}{\muf }\right )\right ). \] Multiplying by $f/\muf$ and integrating by parts with respect to the velocity imply that any equilibrium satisfies \[ f = \rho [f] \muf,\;\;\rho[f] = \intv{f(v)}. \] Recall that $u[f]$ is the mean velocity, and therefore we impose \begin{equation} \label{Equ6} \intv{f(t,x,v) (v - u[f(t,x,\cdot)])} = 0,\;\;(t,x) \in \R_+ \times \R^d. \end{equation} Notice that $\Phiu$ is left invariant by any orthogonal transformation preserving $u$. Consequently, we deduce (see Proposition \ref{Current}) that $\intv{f(v)\;v}$ is parallel to $u$, and therefore the constraint \eqref{Equ6} fixes only the modulus of the mean velocity, and not its orientation (which remains a free parameter). Our first important observation gives a characterization of the bifurcation diagram of stationary solutions of $Q(f) = 0$. We prove that $M_u$ is an equilibrium if and only if $l = |u|$ is a critical point of $Z(\sigma, \cdot)$, cf. Proposition \ref{Current}. Moreover, depending on the diffusion coefficient $\sigma$, either several values of $|u|$ or only one are admissible; in the former case we will say that a phase transition occurs. Notice that in this work we do not distinguish between phase transitions and bifurcation points. For any particle density $f = f(v)$, the notation $\Omega[f]$ stands for the orientation of the mean velocity $u[f]$, if $u[f] \neq 0$ \[ \Omega[f] = \frac{u[f]}{|u[f]|} = \frac{\intv{f(v) \;v }}{\left | \intv{f(v) \;v } \right |} \] and any vector in $\sphere$, if $u[f] = 0$. Here $\sphere$ is the set of unit vectors in $\R^d$. Notice also that we always have \[ u[f] = |u[f]|\; \Omega [f]. \] Finally, for any $(t,x) \in \R_+ \times \R^d$, the limit particle density is a von Mises-Fisher distribution $f(t,x,v) = \rho (t,x) M_{|u|\Omega(t,x)} (v)$ parametrized by the concentration $\rho (t,x) = \rho [f(t,x,\cdot)]$ and the orientation $\Omega (t,x) = \Omega [f(t,x,\cdot)]$. We identify a class of potentials $v \mapsto V(|v|)$ such that a phase transition occurs and we derive the fluid equations satisfied by the macroscopic quantities $\rho, \Omega$.
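The critical points of $l \mapsto Z(\sigma, l)$ can also be explored numerically from the polar reduction of $Z$ established in the proof of Proposition \ref{Current}. The following minimal Python sketch is purely illustrative; it assumes NumPy and SciPy are available, and the potential parameters, the dimension, the truncation radius and the grids are arbitrary choices made for the example. It evaluates $Z(\sigma,l)$ for a quartic potential and locates its maximiser by a crude grid search; this maximiser plays the role of the critical value $l(\sigma)$ introduced below.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

alpha, beta, d = 0.5, 1.0, 3     # illustrative choices, not taken from the paper

def V(r):
    # quartic potential V(r) = beta r^4/4 - alpha r^2/2
    return beta * r**4 / 4.0 - alpha * r**2 / 2.0

def Z(sigma, l, rmax=6.0):
    # polar reduction: Z(sigma, l) = |S^{d-2}| int_0^rmax int_0^pi r^{d-1}
    #   sin(theta)^{d-2} exp(-(r^2/2 - r l cos(theta) + l^2/2 + V(r))/sigma) dtheta dr
    surf = 2.0 * np.pi ** ((d - 1) / 2.0) / gamma((d - 1) / 2.0)   # |S^{d-2}|
    integrand = lambda theta, r: (
        r ** (d - 1) * np.sin(theta) ** (d - 2)
        * np.exp(-(r**2 / 2.0 - r * l * np.cos(theta) + l**2 / 2.0 + V(r)) / sigma)
    )
    val, _ = dblquad(integrand, 0.0, rmax, 0.0, np.pi)
    return surf * val

def argmax_l(sigma, lmax=2.0, npts=41):
    # crude grid search for the maximiser of l -> Z(sigma, l)
    grid = np.linspace(0.0, lmax, npts)
    values = [Z(sigma, l) for l in grid]
    return grid[int(np.argmax(values))]

for sigma in (0.05, 0.2, 0.5, 1.0):
    print("sigma = %.2f   argmax of Z(sigma, .) ~ %.3f" % (sigma, argmax_l(sigma)))
\end{verbatim}
In agreement with the analysis of Section \ref{PropEqui}, for small $\sigma$ the maximiser is expected to be strictly positive and close to $r_0 = \sqrt{\alpha/\beta}$, while for $\sigma$ above the critical diffusion coefficient the maximum of $Z(\sigma, \cdot)$ is attained at $l = 0$, the disordered state.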
More exactly we assume that the potential $v \mapsto V(|v|)$ satisfies \begin{equation} \label{EquWellDefZ} \lim _{|v| \to + \infty } \frac{\frac{|v|^2}{2} + V(|v|) }{|v|} = +\infty \end{equation} (such that $Z$ is well defined) and belongs to the family $\mathcal{V}$ defined by: there exists $\sigma _0>0$ verifying \begin{enumerate} \item For any $0 < \sigma < \sigma _0$ there is $l(\sigma) >0 $ such that $Z(\sigma,l)$ is stricly increasing on $[0,l(\sigma)]$ and strictly decreasing on $[l(\sigma), +\infty[$; \item For any $\sigma \geq \sigma _0, Z(\sigma,l)$ is strictly decreasing on $[0,+\infty[$. \end{enumerate} The first important result in this work shows that potentials in $\mathcal{V}$ have a phase transition at $\sigma=\sigma_0$ as shown in Section 2. \begin{remark} The potential $V(|v|) = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}$ belongs to the family $\mathcal{V}$ as shown in \cite{Tugaut2013Phase,BCCD16,Li19} in any dimension. \end{remark} \begin{thm} \label{MainRes1} Assume that the potential $v \mapsto V(|v|)$ satisfies \eqref{EquWellDefZ}, belongs to the family $\mathcal{V}$ defined above and that $0 < \sigma < \sigma _0$. Let us consider $(\fe)_{\eps >0}$ satisfying \begin{equation}\label{asymplim} \partial _t \fe + v \cdot \nabla _x \fe = \frac{1}{\eps} \Divv \{\sigma \nabla _v \fe + \fe ( v - u [\fe] + \nabla _v V(|\cdot|)\;) \},\;(t,x,v) \in \R_+ \times \R^d \times \R^d. \end{equation} Therefore, at any $(t,x) \in \R_+ \times \R^d$ the dominant term in the Hilbert expansion $\fe = f + \eps \fo + ...$ is an equilibrium distribution of $Q$, that is $f(t,x,v) = \rho (t,x) M_{u(t,x)} (v)$, where \begin{equation} \label{Equ50} u(t,x) = l(\sigma) \Omega (t,x),\;\;(t,x) \in \R_+ \times \R^d \end{equation} \begin{equation} \label{Equ51} \partial _t \rho + \Divx (\rho u ) = 0,\;\;(t,x) \in \R_+ \times \R^d \end{equation} \begin{equation} \label{Equ52} \partial _t \Omega + l(\sigma) c_{\perp} \;(\Omega \cdot \nabla _x ) \Omega + \frac{\sigma}{l(\sigma)} (I_d - \Omega \otimes \Omega ) \frac{\nabla _x \rho}{\rho} = 0,\;\;(t,x) \in \R_+ \times \R^d. \end{equation} The constant $c_\perp$ is given by \[ c_\perp = \frac{\int_{\R_+}r^{d+1} \intth{\cos \theta \, \chi (\cos \theta, r) \, e(\cos \theta, r, l(\sigma)) \sin ^{d-1} \theta}\md r}{l(\sigma)\int_{\R_+}r^{d} \intth{\chi (\cos \theta, r) \, e(\cos \theta, r, l(\sigma)) \sin ^{d-1} \theta}\md r} \] and the function $\chi$ solves \begin{align} \label{EquChiRes1} - \sigma & \partial _c \left [r^{d-3} (1 - c^2) ^{\frac{d-1}{2}} e(c,r, l(\sigma)) \partial _c \chi \right ] - \sigma \partial _r \left [r^{d-1}(1 - c^2 ) ^{\frac{d-3}{2}} e(c,r, l(\sigma)) \partial _r \chi \right ] \\ & + \sigma (d-2) r ^{d-3} (1- c^2) ^{\frac{d-5}{2}} e(c,r, l(\sigma)) \chi = r^d (1-c^2) ^{\frac{d-2}{2}}e(c,r,l(\sigma)), \; (c,r) \in ]-1,1[ \times \R_+\nonumber \end{align} where $e(c, r, l) = \exp \left ( - \frac{r^2}{2\sigma} + \frac{rcl}{\sigma} - \frac{V(r)}{\sigma} \right )$. \end{thm} \begin{remark} Several considerations regarding the hydrodynamic equations \eqref{Equ50}-\eqref{Equ52} and the asymptotic limit to obtain them are needed: \begin{itemize} \item The asymptotic limit in \eqref{asymplim} is different from the one analysed in \cite{BosCar15} where the friction term is penalized at higher order. 
The main technical difficulty in \cite{BosCar15} compared to our present work is that to solve for the different orders on the expansion in \cite{BosCar15} we had to deal with Fokker-Planck equations on the velocity sphere with speed $\sqrt{\frac{\alpha}{\beta}}$. \item The hydrodynamic equations \eqref{Equ50}-\eqref{Equ52} in the particular case of the potential $V(|v|) = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}$ recover the ones obtained in \cite{DM08,DFL10,DFL15,BosCar15} by taking the limit $\alpha\to\infty$ with $\beta/\alpha=O(1)$. In this limit, the particle density $f$ is squeezed to a Dirac on the velocity sphere with speed $\sqrt{\frac{\alpha}{\beta}}$. The constants can be computed exactly based on \cite{Li19} and they converge towards the exact constants obtained in \cite{DM08,DFL15,BosCar15}. This is left to the reader for verification. \item The hydrodynamic equations \eqref{Equ50}-\eqref{Equ52} have the same structure as the equations derived in \cite{DM08,DFL15,BosCar15} just with different constants, and therefore they form a hyperbolic system as shown in \cite[Subsection 4.4]{DM08}. \end{itemize} \end{remark} When $V(|\cdot|)$ belongs to the family $\mathcal{V}$, we know that $|u| \in \{0,l(\sigma)\}$, for any $0 < \sigma < \sigma _0$ and $|u| = 0$ for any $\sigma \geq \sigma _0$. There is no time evolution for $|u|$. But the modulus of the mean velocity evolves in time for other potentials. For example, let us assume that there is $\sigma >0$, $ 0 \leq l_1 (\sigma) < l_2 (\sigma)\leq +\infty$ such that the function $l \mapsto Z(\sigma, l)$ is stricly increasing on $[0,l_1(\sigma)]$, constant on $[l_1(\sigma), l_2(\sigma)[$, and strictly decreasing on $[l_2(\sigma), +\infty[$. In that case, we obtain a balance for $|u|$ as well. \begin{thm} \label{MainRes2} Assume that the potential $v \mapsto V(|v|)$ satisfies \eqref{EquWellDefZ} and verifies the above hypothesis for some $\sigma >0$. Let us consider $(\fe)_{\eps >0}$ satisfying \[ \partial _t \fe + v \cdot \nabla _x \fe = \frac{1}{\eps} \Divv \{\sigma \nabla _v \fe + \fe ( v - u [\fe] + \nabla _v V(|\cdot|) \; )\},\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d. \] Therefore, at any $(t,x) \in \R_+ \times \R^d$ the dominant term in the Hilbert expansion $\fe = f + \eps \fo + ...$ is an equilibrium distribution of $Q$, that is $f(t,x,v) = \rho (t,x) M_{u(t,x)} (v)$, where \begin{equation} \label{Equ57} \partial _t \rho + \Divx (\rho u ) = 0,\;\;(t,x) \in \R_+ \times \R^d \end{equation} \begin{align} \label{Equ58} \partial _t u & + [ c_{\perp} (I_d - \Omega \otimes \Omega) + c_\parallel \Omega \otimes \Omega] (u\cdot \partial _x) u + [ (c _\perp - 1) (I_d - \Omega \otimes \Omega) + (c_\parallel - 1) \Omega \otimes \Omega ] \nabla _x \frac{|u|^2}{2} \nonumber \\ & + \sigma \frac{\nabla _x \rho }{\rho} + c_\parallel ^\prime \;\Divx \Omega \;|u| u = 0,\;\;(t,x) \in \R_+ \times \R^d. 
\end{align} The constants $c_\perp, c_\parallel, c_\parallel ^\prime$ are given by \[ c_\perp = \frac{\int_{\R_+}r^{d+1} \intth{\cos \theta \, \chi (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d-1} \theta}\md r}{|u|\int_{\R_+}r^{d} \intth{\chi (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d-1} \theta}\md r} \] \[ c_\parallel = \frac{\int_{\R_+}r^{d+1} \intth{\cos ^2 \theta \, \chi _\Omega (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d-2} \theta}\md r}{2 |u|\int_{\R_+}r^{d} \intth{\cos \theta \chi _\Omega (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d-2} \theta}\md r} \] \[ c_\parallel ^\prime = \frac{\int_{\R_+}r^{d+1} \intth{\chi _\Omega (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d} \theta}\md r}{(d-1) |u|\int_{\R_+}r^{d} \intth{\cos \theta \chi _\Omega (\cos \theta, r) \, e(\cos \theta, r, |u|) \sin ^{d-2} \theta}\md r} \] the function $\chi$ solves \eqref{EquChiRes1} and the function $\chi _\Omega$ solves \begin{align*} - & \sigma \partial _c \{ r ^{d-3} (1 - c^2) ^{\frac{d-1}{2}} e(c,r,|u|) \partial _c \chi _\Omega \} - \sigma \partial _r \{ r ^{d-1} (1 - c^2 ) ^{\frac{d-3}{2}} e(c, r, |u|) \partial _r \chi _\Omega \} \\ & = r^{d-1 } (r c - |u|) (1 - c^2) ^{\frac{d-3}{2}}e(c,r,|u|),\;(c,r) \in ]-1,1[ \times ]0,+\infty[\nonumber. \end{align*} \end{thm} Our paper is organized as follows. In Section \ref{PropEqui} we investigate the function $Z$, whose variations will play a crucial role when determining the equilibria of the interaction mechanism $Q$. We identify a family of potentials such that a phase transition occurs for some critical diffusion coefficient $\sigma _0$. Section \ref{LinIntMec} is devoted to the study of the linearization of $Q$ and of its formal adjoint. We are led to study the spectral properties of the pressure tensor. The kernel of the adjoint of the linearization of $Q$ is studied in Section \ref{Invariants}. These elements will play the role of the collision invariants, when determining the macroscopic equations by the moment method. The main results, Theorem \ref{MainRes1}, \ref{MainRes2}, are proved in Section \ref{MainTh}. Some examples are presented in Section \ref{Example}. \section{Phase transitions and Potentials: Properties of Equilibria} \label{PropEqui} For any $u \in \R^d$ we denote by $\calTu$ the family of orthogonal transformations of $\R^d$ preserving $u$. Notice that $\calTz$ is the family of all orthogonal transformations of $\R^d$. \begin{remark} \label{Sym} The functions on $\R^d$ which are left invariant by the family $\calTz$ are those depending only on $|v|$. The functions on $\R^d$ which are left invariant by the family $\calTu, u \neq 0$, are those depending on $v \cdot u$ and $|v|$. \end{remark} \begin{lemma} \label{InvVectField} Let $u$ be a vector in $\R^d$ and $a : \R^d \to \R^d$ be a integrable vector field on $\R^d$, which is left invariant by the family $\calTu$ {\it i.e.,} \[ a(^t \mathcal{O}v ) = \;^t \mathcal{O} a(v),\;\;v \in \R^d,\;\;\mathcal{O} \in \calTu. \] Then $\intv{a(v)} \in \R u$. \end{lemma} \begin{proof} For any $\calO \in \calTu$, we have \[ \intv{a(v)} = \intvp{a(\;^t \calO \vp ) } = \;^t \calO \intvp{a(\vp)}. \] For any $\xi \in \sphere \cap ( \R u ) ^\perp$, we consider $\calO _\xi = I_d - 2 \xi \otimes \xi \in \calTu$, and thus we obtain \[ \intv{a(v)} = (I_d - 2 \xi \otimes \xi ) \intvp{a(\vp)}, \] or equivalently $\xi \cdot \intv{a(v)} = 0$. Therefore, we have $\intv{a(v)} \in ((\R u ) ^\perp) ^ \perp = \R u$. 
\end{proof} We assume that \begin{equation} \label{Equ11} \lim _{|v| \to + \infty } \frac{\frac{|v|^2}{2} + V(|v|) }{|v|} = +\infty. \end{equation} Observe that \begin{align*} Z(\sigma, u ) & = \exp \left ( - \frac{|u|^2}{2\sigma} \right ) \intv{ \exp \left ( - \frac{\frac{|v|^2}{2} + V(|v|) }{\sigma} + \frac{v \cdot u }{\sigma} \right ) } \\ & \leq \exp \left ( - \frac{|u|^2}{2\sigma} \right )\intv{ \exp \left [ - \frac{|v|}{\sigma} \left ( \frac{\frac{|v|^2}{2} + V(|v|) }{|v|} - |u|\right ) \right ] }, \end{align*} and therefore, under the hypothesis \eqref{Equ11}, it is easily seen that $Z(\sigma, u)$ is finite for any $\sigma >0$ and $u \in \R^d$. Similarly we check that for any $\sigma >0$ and $u \in \R^d$, all the moments of $M_u$ are finite \[ \intv{|v|^p M_u (v)} < +\infty, \;\;p \in \N. \] For further developments, we recall the formula \begin{equation} \label{Equ12} \intv{\chi \left ( \frac{v \cdot \Omega}{|v|},|v| \right )} = |\Sp ^{d-2}| \int _{\R_+} r^{d-1} \intth{\chi(\cos \theta, r) \sin ^{d-2} \theta }\mathrm{d}r, \end{equation} for any non negative measurable function $\chi = \chi (c,r) : ]-1,1[\times \R_+ ^\star \to \R$, any $\Omega \in \Sp ^{d-1}$ and $d \geq 2$. Here $|\Sp ^{d-2}|$ is the surface of the unit sphere in $\R^{d-1}$, for $d \geq 3$, and $|\Sp ^0| = 2$ for $d = 2$. \begin{pro} \label{Current} Assume that the potential $v \mapsto V(|v|)$ satisfies \eqref{Equ11}. Then the following statements hold true~: \begin{enumerate} \item The function $Z(\sigma, u)$ depends only on $\sigma$ and $|u|$. We will simply write \[ \intv{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) }= Z(\sigma, l = |u|). \] \item For any $u \in \R^d$, we have $\intv{M_u (v) v } \in \R_+ u$ and obviously, $\intv{M_0 (v) v } = 0$. \item The von Mises-Fisher distribution $M_u$ is an equilibrium if and only if $\partial _l Z(\sigma, l) = 0$. For any $\sigma >0, M_0 (v) = Z^{-1} (\sigma, 0) \exp \left ( - \Phi _0 (v)/\sigma\right )$ is an equilibrium. \end{enumerate} \end{pro} \begin{proof}$\;$\\ 1. Applying formula \eqref{Equ12} with $\Omega = u /|u|$, if $u \neq 0$, and any $\Omega \in \Sp ^{d-1}$ if $u = 0$, we obtain \begin{align*} Z & = \intv{\exp \left ( - \frac{|v|^2}{2\sigma} - \frac{|u|^2}{2\sigma} + \frac{v\cdot u}{\sigma} - \frac{V(|v|)}{\sigma} \right )}\\ & = |\Sp ^{d-2} | \exp \left ( - \frac{|u|^2}{2\sigma}\right ) \int_{\R_+} \exp \left ( - \frac{r^2}{2\sigma} - \frac{V(r)}{\sigma} \right ) r ^{d-1} \intth{\exp \left ( \frac{r |u| \cos \theta }{\sigma} \right ) \sin ^{d-2} \theta }\mathrm{d}r , \end{align*} and therefore $Z$ depends only on $\sigma$ and $|u|$. \\ 2. We consider the integrable vector field $a(v) = M_u (v) v, v \in \R^d$. It is easily seen that for any $\calO \in \calTu$, we have \[ \Phiu (\;^t \calO v) = \Phiu (v),\;\;M_u (\;^t \calO v ) = M_u (v),\;\;v \in \R^d, \] and therefore the vector field $a$ is left invariant by $\calTu$. Our conclusion follows by Lemma \ref{InvVectField}. It remains to check that $\intv{M_u (v) ( v \cdot u) } >0$, when $u \neq 0$. Indeed, we have \[ Z\intv{M_u (v) ( v \cdot u ) } = \int _{v \cdot u >0} \left [ \exp \left (- \frac{\Phiu (v)}{\sigma}\right ) - \exp \left (- \frac{\Phiu (-v)}{\sigma}\right ) \right ] ( v \cdot u ) \;\mathrm{d}v, \] and we are done observing that for any $v$ such that $v \cdot u >0$ we have \[ - \Phiu (v) = - \frac{|v|^2}{2} + v \cdot u - \frac{|u|^2}{2} - V(|v|) > - \frac{|v|^2}{2} - v \cdot u - \frac{|u|^2}{2} - V(|v|) = - \Phiu (-v). \] 3. 
The von Mises-Fisher distribution $M_u$ is an equilibrium if and only if $\intv{M_u (v) (v-u) } = 0$. By the previous statement we know that $\intv{M_u (v) v } \in \R u$ and therefore $M_u$ is an equilibrium iff $\intv{M_u (v) (v \cdot \Omega - |u|)} = 0$, where $\Omega = \frac{u}{|u|}$ if $u \neq 0$ and $\Omega$ is any vector in $\Sp ^{d-1}$ if $u = 0$. But we have \begin{align} \label{Equ12Bis} \partial _l Z(\sigma, |u|) & = |\Sp ^{d-2}| \exp \left ( - \frac{|u|^2}{2\sigma}\right ) \int _{\R_+} \exp \left ( - \frac{r^2}{2\sigma} - \frac{V(r)}{\sigma} \right ) r^{d-1} \\ &\,\,\,\,\,\, \times \intth{\exp \left ( \frac{r |u| \cos \theta}{\sigma} \right ) \frac{r \cos \theta - |u|}{\sigma} \sin ^{d-2} \theta }\mathrm{d}r \nonumber \\ & = \intv{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) \frac{v \cdot \Omega - |u|}{\sigma} } \nonumber \\ & = \frac{Z(\sigma, |u|)}{\sigma}\intv{M_u (v) ( v \cdot \Omega - |u| ) },\nonumber \end{align} and therefore $M_u$ is an equilibrium if and only if $l = |u|$ is a critical point of $Z(\sigma, \cdot)$. \end{proof} \begin{remark} \label{Modulus} As $Z$ depends only on $\sigma, |u|$, we can write \begin{align*} Z(\sigma, |u|) & = \intv{\exp \left ( - \frac{|v - \Omega |u||^2}{2\sigma} - \frac{V(|v|)}{\sigma} \right )}\\ & = \intv{\exp \left (- \frac{|v|^2}{2\sigma} + \frac{(v \cdot \Omega) |u|}{\sigma} - \frac{|u|^2}{2\sigma} - \frac{V(|v|)}{\sigma} \right ) }, \end{align*} for any $\Omega \in \Sp ^{d-1}$ and $u \in \R^d$. We deduce that for any $\Omega \in \Sp ^{d-1}$ and $u \in \R^d$, we have \begin{align*} \partial _l Z(\sigma, |u|) & = \intv{\exp \left ( - \frac{|v|^2}{2\sigma} + \frac{(v \cdot \Omega) |u|}{\sigma} - \frac{|u|^2}{2\sigma} - \frac{V(|v|)}{\sigma}\right )\frac{v \cdot \Omega - |u|}{\sigma}}\\ & = \intv{\exp \left ( - \frac{\Phi_{|u|\Omega} (v)}{\sigma} \right ) \frac{v \cdot \Omega - |u|}{\sigma} }\\ & = \intv{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) \frac{(v-u ) \cdot \Omega [u]}{\sigma}} \end{align*} and \begin{align*} \partial ^2 _{ll} Z(\sigma, |u|) & = \intv{\exp \left ( - \frac{\Phi_{|u|\Omega} (v)}{\sigma} \right ) \frac{[v \cdot \Omega - |u|\;]^2 - \sigma}{\sigma ^2} }\\ & = \intv{\exp \left ( - \frac{\Phiu (v)}{\sigma} \right ) \frac{[(v-u ) \cdot \Omega [u]\;]^2 - \sigma}{\sigma ^2}}, \end{align*} where $\Omega = \frac{u}{|u|}$ if $u \neq 0$ and $\Omega$ is any vector in $\Sp ^{d-1}$ if $u = 0$ (compare with \eqref{Equ12Bis}, established for $\Omega = u/|u|$, if $u \neq 0$). \end{remark} At this point, we know that for any $\sigma >0$, the equilibria are related to the critical points of $Z(\sigma, \cdot)$. In order to find possible bifurcation points of the disordered state $u=0$, let us analyze the variations of $Z(\sigma, \cdot)$ for small $\sigma$. We assume the following hypothesis on the potential \begin{equation} \label{Equ14} V ( |\cdot|) \in C^2 (\R^d),\;\;v \mapsto \frac{|v|^2}{2} + V(|v|)\; \mbox{ is strictly convex on } \R^d. \end{equation} For such a potential, we can minimize $\Phiu(v)$ with respect to $v \in \R^d$, for any $u \in \R^d$. Indeed, the function $\Phiu$ is convex, continuous on $\R^d$ and \begin{align*} \Phiu (v) & = \frac{|v- u |^2}{2} + V(|v|) = \frac{|v|^2}{2} + V(|v|) - v \cdot u + \frac{|u|^2}{2} \nonumber \\ & = |v| \left (\frac{\frac{|v|^2}{2} + V (|v|)}{|v|} - \frac{v \cdot u }{|v|} \right ) + \frac{|u|^2}{2} \\ & \geq |v| \left (\frac{\frac{|v|^2}{2} + V (|v|)}{|v|} - |u| \right ) + \frac{|u|^2}{2}. 
\end{align*} By \eqref{Equ11} we deduce that $\lim _{|v| \to +\infty} \Phiu (v) = +\infty$ and therefore $\Phiu $ has a minimum point $\vb \in \R^d$. This minimum point is unique (use $\vb - u + (\nabla _v V ( |\cdot |) ) (\vb) = 0$ and the strict convexity of $v \mapsto \frac{|v|^2}{2} + V(|v|)$\;). We intend to analyze the sign of $\partial _l Z(\sigma, |u|)$ for small $\sigma$. Performing the change of variable $v = \vb + \sqrt{\sigma} w$ leads to \begin{align} \label{Equ17} & \partial _l Z (\sigma, |u|) \sigma ^{1 - d/2} \exp \left ( \frac{\Phiu (\overline{v})}{\sigma} \right ) = \intv{\Exp{- \frac{\Phiu (v) - \Phiu (\overline{v}) - \nabla _v \Phiu (\overline{v}) \cdot (v - \overline{v})}{\sigma} }\\ & \times \frac{(v-u) \cdot \Omega [u] }{\sigma ^{d/2}}} \nonumber \\ & = \int_{\R^d}\Exp{- \frac{\Phiu(\overline{v} + \sqrt{\sigma} w) - \Phiu(\overline{v}) - \sqrt{\sigma} \nabla _v \Phiu (\overline{v}) \cdot w}{\sigma}} ( \overline{v} + \sqrt{\sigma} w - u ) \cdot \Omega[u]\;\mathrm{d}w \nonumber \\ & = \int_{\R^d}\Exp{- \frac{\Phi _0(\overline{v} + \sqrt{\sigma} w) - \Phi _0(\overline{v}) - \sqrt{\sigma} \nabla _v \Phi _0 (\overline{v}) \cdot w}{\sigma}} ( \overline{v} + \sqrt{\sigma} w - u ) \cdot \Omega[u]\;\mathrm{d}w \nonumber. \end{align} We need to determine the sign of $(\overline{v} - u) \cdot \Omega[u]$, where $\overline{v}$ is the minimum point of $\Phiu$. As $V(|\cdot|) \in C^1(\R^d)$, we have $V^\prime (0) = 0$. We assume that $V(\cdot)$ possesses another critical point $r_0 >0$ and \begin{equation} \label{Equ15} V^\prime (r) <0 \;\mbox{ for any } 0 < r < r_0 \;\mbox{ and } V^\prime (r) >0\;\mbox{ for any } r >r_0. \end{equation} Notice that this is the case for $V_{\alpha, \beta} (r) = \beta \frac{r^4}{4} - \alpha \frac{r^2}{2}, \alpha, \beta >0$, with $r_0 = \sqrt{\alpha/\beta}$. \begin{pro} \label{Sign} Assume that \eqref{Equ11}, \eqref{Equ14}, \eqref{Equ15} hold true. Then \begin{enumerate} \item The function $r \mapsto r + V^\prime (r)$ is strictly increasing on $\R_+$ and maps $[0,r_0]$ to $[0,r_0]$, and $]r_0, +\infty[$ to $]r_0, +\infty[$. \item We have \[ (\overline{v} - u) \cdot \Omega[u] >0\;\mbox{ for any } 0 < |u| < r_0,\;\;\inf _{\delta \leq |u| \leq r_0 - \delta} (\overline{v} - u) \cdot \Omega[u] >0, \;\; 0 < \delta < \frac{r_0}{2} \] and \[ (\overline{v} - u) \cdot \Omega[u] <0\;\mbox{ for any } |u| > r_0,\;\;\inf _{ |u| \geq r_0 + \delta} (u - \overline{v}) \cdot \Omega[u] >0, \;\; \delta >0. \] \end{enumerate} \end{pro} \begin{proof}$\;$\\ 1. By \eqref{Equ14} we know that $\Phi _0$ is strictly convex on $\R^d$ and we deduce that $r \mapsto \frac{r^2}{2} + V(r)$ is strictly convex on $\R_+$. Therefore the function $r \mapsto r + V^\prime (r)$ is strictly increasing on $\R_+$ and maps $[0,r_0]$ to $[0,r_0]$. It remains to check that it is unbounded when $r \to +\infty$. Suppose that there is a constant $C$ such that $r + V^\prime (r) \leq C, r \in \R_+$. After integration with respect to $r$, one gets \[ \frac{r^2}{2} + V(r) \leq V(0) + Cr,\;\;r \in \R_+, \] implying that \[ \frac{\frac{r^2}{2} + V(r)}{r} \leq \frac{V(0)}{r} + C,\;\;r \in \R_+, \] which contradicts \eqref{Equ11}.\\ 2. Let us consider $0 < |u| < r_0$. Therefore, $\overline{v} \neq 0$ and \[ \left ( |\overline{v}| + V^\prime (|\overline{v}| ) \right ) \frac{\overline{v}}{|\overline{v}|} = u , \] implying that $ |\overline{v}| + V^\prime (|\overline{v}| ) = |u| \in ]0,r_0[$. 
By the previous statement we obtain $0 < |\overline{v}| < r_0$, $\Omega [\overline{v}] = \frac{\overline{v}}{|\overline{v}|} = \frac{u}{|u|} = \Omega [u]$, and thus \[ ( \overline{v} - u) \cdot \Omega[u] = - V^\prime(|\overline{v}|) \frac{\overline{v}}{|\overline{v}|} \cdot \Omega[u] = - V^\prime (|\overline{v}|)>0. \] Clearly, for any $0 < \delta < r_0/2$, we have \[ \inf _{\delta \leq |u| \leq r_0 - \delta} (\overline{v} - u) \cdot \Omega[u] = \inf _{\delta \leq |u| \leq r_0 - \delta} ( - V^\prime(|\overline{v}|) ) >0. \] Similarly, for any $|u| > r_0$, we have $|\overline{v}| > r_0$ and \[ ( \overline{v} - u) \cdot \Omega[u] = - V^\prime(|\overline{v}|) \frac{\overline{v}}{|\overline{v}|} \cdot \Omega[u] = - V^\prime (|\overline{v}|)<0. \] As before, for any $\delta >0$, we obtain \[ \inf _{ |u| \geq r_0 + \delta} (u - \overline{v} ) \cdot \Omega[u] = \inf _{|u| \geq r_0 + \delta} V^\prime(|\overline{v}|) >0. \] \end{proof} The previous arguments allow us to complete the analysis of the variations of $Z(\sigma, |u|)$, when $\sigma$ is small. The convergence when $\sigma \searrow 0$ in \eqref{Equ17} can be handled by dominated convergence, provided that $w \mapsto |w| \Exp{- \frac{\partial _v ^2 \Phi _0 (\overline{v}) w \cdot w }{2}}$ belongs to $L^1(\R^d)$. We assume that there is $\lambda <1$ such that \begin{equation} \label{Equ18} v \mapsto V_\lambda (|v|) := \lambda \frac{|v|^2}{2} + V(|v|) \;\mbox{ is convex on } \R^d. \end{equation} The potentials $V_{\alpha, \beta} (|v|) = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}, 0 < \alpha < 1, \beta >0$ satisfy the above hypothesis. Under \eqref{Equ18}, we write \[ \Phi _0 (v) = (1 - \lambda ) \frac{|v|^2}{2} + V_\lambda (|v|),\;\;v \in \R^d, \] and therefore \[ \partial _v ^2 \Phi _0 (v) = (1 - \lambda ) I_d + \partial _v ^2 V_\lambda (|\cdot|) \geq (1- \lambda ) I_d,\;\;v \in \R^d, \] implying that \[ \int_{\R^d} |w| \Exp{- \frac{\partial _v ^2 \Phi _0 (\overline{v}) w \cdot w }{2}} \;\mathrm{d}w \leq \int_{\R^d} |w| \Exp{- \frac{(1- \lambda) |w|^2 }{2}} \;\mathrm{d}w < + \infty. \] Notice that \eqref{Equ18} guarantees \eqref{Equ11} and \eqref{Equ14}. Indeed, the function $v \mapsto V_\lambda (|v|)$ being convex, it is bounded from below by a linear function \[ \exists \; (v_\lambda, C_\lambda ) \in \R^d \times \R \;\mbox{ such that } V_\lambda (|v|) \geq ( v \cdot v_\lambda ) + C_\lambda,\;\;v \in \R^d, \] and therefore \[ \frac{\Phi _0 (v)}{|v|} = \frac{(1 - \lambda) \frac{|v|^2}{2} + V_\lambda (|v|)}{|v|} \geq (1 - \lambda) \frac{|v|}{2} - |v_\lambda | + \frac{C_\lambda}{|v|} \to +\infty,\;\mbox{ as } |v| \to +\infty. \] Obviously, $\Phi _0 $ is strictly convex, as sum between the strictly convex function $v \mapsto (1- \lambda) \frac{|v|^2}{2}$ and the convex function $v \mapsto V_\lambda (|v|)$. In order to conclude the study of the variations of $Z$ for small $\sigma >0$, we consider potentials $V$ satisfying $V(|\cdot|) \in C^2(\R^d)$, \eqref{Equ15} and \eqref{Equ18}. We come back to \eqref{Equ17}. 
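Before doing so, let us record the verification for the quartic potentials mentioned above (a short remark added for completeness; it is not needed in the sequel): for $V_{\alpha, \beta} (|v|) = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}$ with $0 < \alpha < 1$ and $\beta >0$, the choice $\lambda = \alpha$ in \eqref{Equ18} gives \[ V_\lambda (|v|) = \alpha \frac{|v|^2}{2} + \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2} = \beta \frac{|v|^4}{4}, \] which is convex on $\R^d$ (being the square of the nonnegative convex function $v \mapsto \sqrt{\beta}\,|v|^2/2$); hence \eqref{Equ18} holds, and with it \eqref{Equ11} and \eqref{Equ14}.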
Notice that \begin{align*} \Phi _0 (\overline{v} + \sqrt{\sigma} w) - \Phi _0 (\overline{v}) - \sqrt{\sigma} \nabla _v \Phi _0 (\overline{v}) \cdot w & \geq (1- \lambda) \frac{|\overline{v} + \sqrt{\sigma}w|^2}{2} - (1 - \lambda) \frac{|\overline{v}|^2}{2} \\ & - (1- \lambda) \sqrt{\sigma} \overline{v} \cdot w = (1 - \lambda) \sigma \frac{|w|^2}{2}, \end{align*} implying that, for any $0 < \sigma \leq 1$ \begin{align*} & \left |\Exp{- \frac{\Phi _0 (\overline{v} + \sqrt{\sigma} w) - \Phi _0 (\overline{v}) - \sqrt{\sigma} \nabla _v \Phi _0 (\overline{v}) \cdot w }{\sigma} } (\overline{v} + \sqrt{\sigma} w - u) \cdot \Omega[u] \right | \\ & \leq \Exp{- (1 - \lambda) \frac{|w|^2}{2}} \left [ |(\overline{v} - u ) \cdot \Omega [u] | + |w| \right ]. \end{align*} As the function $w \mapsto \Exp{ - (1 - \lambda) \frac{|w|^2}{2}} \left [ |(\overline{v} - u ) \cdot \Omega [u] | + |w| \right ]$ belongs to $L^1(\R^d)$, we deduce by dominated convergence that \[ \lim _{\sigma \searrow 0 } \left \{ \partial _l Z (\sigma, |u|) \sigma ^{1 - d/2} \exp \left ( \frac{\Phiu (\overline{v})}{\sigma} \right ) \right \} = (\overline{v} - u ) \cdot \Omega [u]\int _{\R^d} \Exp{- \frac{\partial _v ^2 \Phi _0 (\overline{v})w \cdot w}{2}}\;\mathrm{d}w. \] As we know from Proposition \ref{Sign} that $\inf_{|u|\in [\delta, r_0 - \delta] \cup [r_0 + \delta, +\infty[ } | (\overline{v} - u ) \cdot \Omega [u] | >0, 0 < \delta < r_0/2$, we deduce that for any $\delta \in ]0, r_0/2[$, there is $\sigma _\delta >0$ such that \[ \partial _l Z(\sigma, |u|)>0\;\mbox{ for any } 0 < \sigma < \sigma_\delta,\;\;\delta \leq |u| \leq r_0 - \delta \] and \[ \partial _l Z(\sigma, |u|)<0\;\mbox{ for any } 0 < \sigma < \sigma_\delta,\;\;|u| \geq r_0 + \delta. \] Motivated by the above behavior of the function $Z$, we assume that the potential $v \mapsto V(|v|)$ satisfies \eqref{Equ11} (such that $Z$ is well defined) and belongs to the family $\mathcal{V}$ defined by: there exists $\sigma _0>0$ verifying \begin{enumerate} \item For any $0 < \sigma < \sigma _0$ there is $l(\sigma) >0 $ such that $Z(\sigma,l)$ is strictly increasing on $[0,l(\sigma)]$ and strictly decreasing on $[l(\sigma), +\infty[$; \item For any $\sigma \geq \sigma _0, Z(\sigma,l)$ is strictly decreasing on $[0,+\infty[$. \end{enumerate} In fact, at the critical diffusion coefficient $\sigma_0$ the second order derivative of $Z$ with respect to $l$ vanishes at $l = 0$, as shown next. \begin{pro} \label{CriticalDiffusion} Let $V(|\cdot|) \in \mathcal{V}$ be a potential satisfying \eqref{Equ11}. Then we have \[ \partial _{ll} ^2 Z (\sigma, 0) \geq 0,\;\;0 < \sigma < \sigma _0,\;\;\partial _{ll} ^2 Z(\sigma _0, 0) = 0,\;\;\partial _{ll} ^2 Z (\sigma, 0) \leq 0,\;\;\sigma > \sigma _0 \] and \[ \partial _{ll} ^2 Z (\sigma, l(\sigma)) \leq 0,\;\;0 < \sigma < \sigma _0. \] \end{pro} \begin{proof} By Remark \ref{Modulus} we know that $Z(\sigma, \cdot)$ possesses a second order derivative with respect to $l$. As $\partial _l Z(\sigma,0) = 0$, we write \[ \frac{1}{2}\partial _{ll} ^2 Z(\sigma,0) = \lim _{l \searrow 0} \frac{Z(\sigma, l) - Z(\sigma,0) - l \partial _l Z(\sigma,0)}{l^2} = \lim _{l \searrow 0} \frac{Z(\sigma,l) - Z(\sigma,0)}{l^2}. \] We deduce that $\partial _{ll} ^2 Z(\sigma, 0) \geq 0$ for any $0 < \sigma \leq \sigma _0$ and $\partial _{ll} ^2 Z(\sigma, 0) \leq 0$ for any $\sigma \geq \sigma _0$. In particular $\partial _{ll} ^2 Z(\sigma_0,0) = 0$.
For any $0 < \sigma < \sigma _0$, the function $Z(\sigma, \cdot)$ possesses a maximum at $l = l(\sigma)>0$ and therefore $\partial _{ll} ^2 Z(\sigma, l(\sigma)) \leq 0$. \end{proof} It is also easily seen that $\lim _{\sigma \nearrow \sigma_0} l(\sigma) = 0$. Indeed, assume that there is $\eta >0$ and a sequence $(\sigma_n)_{n \geq 1} \nearrow \sigma _0$ such that $0 < \sigma _n < \sigma _0, l(\sigma_n) \geq \eta$ for any $n \geq 1$. We have \[ Z(\sigma_n, l(\sigma_n)) \geq Z(\sigma _n, \eta) > Z(\sigma _n,0),\;\;n \geq 1. \] After passing to the limit when $n \to +\infty$, we obtain a contradiction \[ Z(\sigma_0, \eta) \geq Z(\sigma _0, 0) > Z(\sigma _0, \eta) \] and therefore $\lim_{\sigma \nearrow \sigma_0} l(\sigma) = 0$. In particular, extending $l$ by $l(\sigma) = 0$ for $\sigma \geq \sigma _0$, the function $\sigma \mapsto l(\sigma)$ is continuous at $\sigma _0$. \begin{remark} For a potential $V(|\cdot|) \in \mathcal{V}$, the unique bifurcation point from the disordered state occurs at $\sigma_0$. In fact, define the function $$ H(\sigma,l)=\intv{M_u (v) ( v \cdot \Omega - l ) }\,, $$ as in \cite{BCCD16}. Then by \eqref{Equ12Bis}, we get $\sigma \partial _l Z(\sigma, l) = Z(\sigma, l) H(\sigma,l)$. By taking the derivative with respect to $l$, we obtain $$ \partial_l H = \sigma \left(\frac{\partial^2 _{ll} Z}{Z} - \frac{(\partial _l Z)^2}{Z^2}\right) \,. $$ Therefore, for the curve $l(\sigma)$ such that $H(\sigma,l(\sigma))=0$, we get $\partial_l H (\sigma_0,0)=0$. Using implicit differentiation and the continuity of the curves and the functions involved, it is also easy to check that $\partial_\sigma H (\sigma_0,0)=0$. Therefore, to clarify the behavior of the two curves at $\sigma_0$, one needs to work more to compute $\lim _{\sigma \nearrow \sigma_0}l^\prime (\sigma)$. In any case, this shows that $\sigma_0$ is the only bifurcation point from the manifold of disorder states $u=0$ for potentials $V(|\cdot|) \in \mathcal{V}$ without the need of applying the Crandall-Rabinowitz bifurcation theorem. It would be interesting to use Crandall-Rabinowitz for general potentials to identify more general conditions for bifurcations. \end{remark} In the last part of this section, we explore some properties of the potentials $V$ in the class $\mathcal{V}$. We show that under the hypothesis \eqref{Equ18}, we retrieve a weaker version of \eqref{Equ15}. \begin{pro} Let $V(|\cdot|) \in \mathcal{V}$ be a potential satisfying \eqref{Equ11}. The application $\sigma \mapsto l(\sigma)$ is continuous on $\R_+ ^\star$. Moreover, if $V(|\cdot|) \in C^2(\R^d)$ verifies \eqref{Equ18} and there is the limit $\lims l(\sigma) = r_0 >0$, then \[ V^\prime (r) \leq 0 \;\mbox{ for any } 0 < r \leq r_0 \;\mbox{ and } V^\prime (r) \geq 0 \;\mbox{ for any } r \geq r_0. \] \end{pro} \begin{proof} We are done if we check the continuity at any $\sigma \in ]0,\sigma _0[$. Assume that there is a sequence $(\sigma _n)_{n \geq 1} \subset ]0,\sigma _0[$, $\lim_{n \to +\infty} \sigma _n = \sigma \in ]0,\sigma _0[$ and $\eta >0$ such that $l(\sigma _n ) > l(\sigma) + \eta$ for any $n \geq 1$. We have \[ Z(\sigma_n, l(\sigma_n)) > Z(\sigma_n, l(\sigma) + \eta) > Z(\sigma _n, l(\sigma)),\;\;n \geq 1, \] leading to the contradiction \[ Z(\sigma, l(\sigma) + \eta) \geq Z(\sigma, l(\sigma)) > Z(\sigma, l(\sigma) + \eta). \] Similarly, assume that there is a sequence $(\sigma _n)_{n \geq 1} \subset ]0,\sigma _0[$, $\lim_{n \to +\infty} \sigma _n = \sigma \in ]0,\sigma _0[$ and $\eta \in ]0,l(\sigma)[$ such that $l(\sigma _n ) < l(\sigma) - \eta$ for any $n \geq 1$.
We have \[ Z(\sigma _n, l (\sigma _n)) \geq Z(\sigma _n, l(\sigma) - \eta) > Z(\sigma _n, l(\sigma)), \] leading to the contradiction \[ Z(\sigma, l(\sigma) - \eta) \geq Z(\sigma, l(\sigma)) > Z(\sigma, l(\sigma) - \eta). \] Therefore $\lim _{n \to +\infty} l(\sigma _n) = l(\sigma)$ for any sequence $(\sigma_n)_{n\geq 1}$, $\lim_{n \to +\infty} \sigma _n = \sigma \in ]0,\sigma_0[$. Assume now that $\lims l (\sigma) = r_0 >0$. For any $l \in ]0,r_0[$, we have $0 < l < l(\sigma)$ for $\sigma \in ]0,\sigma _0[$ small enough. As $Z(\sigma, \cdot)$ is strictly increasing on $[0,l(\sigma)]$, we deduce that $\partial _l Z(\sigma, l) >0$ for $\sigma$ small enough, and by \eqref{Equ17} it comes that \[ \intw{\Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}}( - V^\prime(|\vb|) + \sqrt{\sigma} w \cdot \Omega ) } >0, \] where $\vb$ is the minimum point of $\Phi _{l\Omega}$, that is $\vb = |\vb| \Omega, |\vb| + V^\prime (|\vb|) = l$. Passing to the limit when $\sigma \searrow 0$ yields \[ \intw{\Exp{- \frac{\partial _v ^2 \Phi _0 (\vb) w \cdot w}{2} }} \;V^\prime (|\vb|) \leq 0, \] and therefore $V^\prime (|\vb|) \leq 0$. As before, \eqref{Equ18} implies \eqref{Equ14} and therefore $r \mapsto r + V^\prime (r)$ is strictly increasing on $\R_+$. We have $l - |\vb| = V^\prime (|\vb|) \leq 0$ and $l = |\vb| + V^\prime (|\vb|) \geq l + V^\prime (l)$ saying that $V^\prime (l) \leq 0$ for any $l \in ]0,r_0[$, and also for $l = r_0$. Consider now $l >r_0$. For $\sigma \in ]0,\sigma_0[$ small enough we have $l > l(\sigma)$ and therefore $\partial _l Z(\sigma, l) <0$. As before, \eqref{Equ17} leads to $l - |\vb| = V^\prime ( |\vb|) \geq 0$ and we have $l = |\vb| + V^\prime (|\vb|) \leq l + V^\prime (l)$ saying that $V^\prime (l) \geq 0$ for any $l >r_0$, and also for $l = r_0$. In particular $r_0$ is a critical point of $V$. \end{proof} In the next result we analyze the behavior of $l(\sigma)$ for $\sigma$ small. \begin{pro} \label{SmallSigma} Let $V (|\cdot|) \in \mathcal{V}$ be a potential satisfying \eqref{Equ11}, \eqref{Equ18}. If $V (|\cdot|) \in C_b ^3 (\R^d)$ and there is the limit $\lims l(\sigma) = r_0 >0$, then we have for any $\Omega \in \sphere$ \[ V^{\prime \prime} (r_0) \lims \frac{l(\sigma) - r_0}{\sigma} = - \frac{1 + V^{\prime \prime} (r_0)}{6} \frac{\intw{( w \cdot \Omega) \partial _v ^3 \Phi _0 (r_0 \Omega) (w,w,w)\Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2} }}}{\intw{\Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2} }}} \] where $ \partial _v ^3 \Phi _0 (r_0 \Omega) (w,w,w) = \sum _{1\leq i,j,k\leq d} \frac{\partial ^3 \Phi _0 }{\partial _{v_k} \partial _{v_j} \partial _{v_i}} (r_0 \Omega) w_k w_j w_i$. \end{pro} \begin{proof} We fix $\Omega \in \sphere$. For any $\sigma \in ]0,\sigma _0[$ we have $\partial _l Z(\sigma, l(\sigma)) = 0$, and \eqref{Equ17} implies \begin{equation} \label{Equ71} \intw{\Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}}( - V^\prime(|\vb|) + \sqrt{\sigma} w \cdot \Omega )} = 0, \end{equation} where $\vb$ is the minimum point of $\Phi _{l(\sigma)\Omega}$, that is $\vb = |\vb| \Omega, |\vb| + V^\prime (|\vb|) = l(\sigma)$. As the function $r \mapsto r + V^\prime (r)$ is strictly increasing on $\R_+$, when $\sigma \searrow 0$, we have $l(\sigma) \to r_0$ and $|\vb|$ converges toward the reciprocal image of $r_0$, through the function $r \mapsto r + V^\prime(r)$, which is $r_0$. 
We deduce \[ \frac{l(\sigma) - r_0}{\sigma} = \frac{|\vb| - r_0 }{\sigma} + \frac{V^\prime(|\vb|) - V^\prime(r_0)}{|\vb| - r_0} \;\frac{|\vb| - r_0}{\sigma} , \] implying that \[ \lims \frac{l(\sigma) - r_0}{\sigma} =( 1 + V ^{\prime \prime} (r_0 )) \lims \frac{|\vb| - r_0}{\sigma}. \] We will compute \[ \lims \frac{V^\prime (|\vb|)}{\sigma} = V ^{\prime \prime} (r_0 ) \lims \frac{|\vb| - r_0 }{\sigma}. \] Thanks to \eqref{Equ71} we have \begin{align} \label{Equ72} & \intw{\Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w\cdot w}{2}}}\;\lims \frac{V^\prime (|\vb|)}{\sigma}\\ & = \lims \intw{\Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}}\frac{ w \cdot \Omega }{\sqrt{\sigma}} }.\nonumber \end{align} Observe that \begin{align} \label{Equ73} \intw{& \Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}}\frac{ w \cdot \Omega }{\sqrt{\sigma}} } \\ & = \intw{\left [ \Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}} \right. \nonumber \\ & \left. \quad - \Exp{- \frac{\partial _v ^2 \Phi _0 (\vb) w\cdot w}{2}} \right ] \frac{ w \cdot \Omega }{\sqrt{\sigma}}} \nonumber \end{align} and \begin{align*} \lims & \frac{1}{\sqrt{\sigma}}\left [ \Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}} - \Exp{- \frac{\partial _v ^2 \Phi _0 (\vb) w\cdot w}{2}} \right ]\\ & = - \Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w\cdot w}{2}} \\ & \quad \times \lims \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w - \frac{\sigma}{2} \;\partial _v ^2 \Phi _0 (\vb)w\cdot w}{\sigma ^{3/2}}\\ & = - \Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w\cdot w}{2}}\lims \frac{1}{\sqrt{\sigma}} \int _0 ^1 (1-t) [ \partial _v ^2 \Phi _0 (\vb + t \sqrt{\sigma} w) - \partial _v ^2 \Phi _0 (\vb)]w\cdot w\;\md t\\ & = - \frac{1}{6} \partial _v ^3 \Phi _0 (r_0 \Omega) (w,w,w) \Exp{- \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w\cdot w}{2}}. \end{align*} Recall that, thanks to \eqref{Equ18}, we have $\partial _v ^2 \Phi _0 (v) \geq (1- \lambda ) I_d, v \in \R^d$, implying that \[ \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma} = \int _0 ^1 (1-t) \partial _v ^2 \Phi _0 (\vb + t \sqrt{\sigma} w) w \cdot w \;\md t \geq (1 - \lambda) \frac{|w|^2}{2} \] and \[ \frac{\partial _v ^2 \Phi _0 (\vb) w \cdot w}{2} \geq (1 - \lambda) \frac{|w|^2}{2}, \;\;w \in \R^d. \] Therefore the integrand of the right hand side in \eqref{Equ73} can be bounded, uniformly with respect to $\sigma>0$ by a $L^1$ function \begin{align*} & \left | \Exp{- \frac{\Phi _0 (\vb + \sqrt{\sigma} w ) - \Phi _0 (\vb) - \sqrt{\sigma} \nabla _v \Phi _0 (\vb) \cdot w}{\sigma}} - \Exp{-\frac{\partial _v ^2 \Phi _0 (\vb) w \cdot w}{2}}\right |\frac{|(w \cdot \Omega)|}{\sqrt{\sigma}}\\ & \leq \Exp{- (1 - \lambda) \frac{|w|^2}{2}} \frac{|(w \cdot \Omega)|}{\sqrt{\sigma}}\int _0 ^1 (1-t) [ \partial _v ^2 \Phi _0 (\vb + t \sqrt{\sigma} w) - \partial _v ^2 \Phi _0 (\vb) ]w \cdot w\;\md t\\ & \leq \|V(|\cdot |)\|_{C_b ^3 (\R^d)} |w|^2 \Exp{- (1 - \lambda) \frac{|w|^2}{2}},\;\;w \in \R^d. 
\end{align*} Combining \eqref{Equ72}, \eqref{Equ73}, we obtain by dominated convergence \begin{align*} V^{\prime \prime}(r_0) \lims \frac{|\vb| - r_0 }{\sigma} & = \lims \frac{V^\prime(|\vb|)}{\sigma} \\ & = - \frac{\intw{(w \cdot \Omega) \partial _v ^3 \Phi _0 (r_0 \Omega) (w,w,w) \Exp{ - \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2}}} }{6 \intw{\Exp{ - \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2}}} } \end{align*} and therefore \begin{align*} V^{\prime \prime}(r_0)& \lims \frac{l(\sigma) - r_0}{\sigma} = ( 1 + V^{\prime \prime}(r_0) )\;V^{\prime \prime}(r_0) \lims \frac{|\vb| - r_0 }{\sigma} \\ & = - \frac{1 + V^{\prime \prime}(r_0)}{6}\frac{\intw{(w \cdot \Omega) \partial _v ^3 \Phi _0 (r_0 \Omega) (w,w,w) \Exp{ - \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2}}} }{\intw{\Exp{ - \frac{\partial _v ^2 \Phi _0 (r_0 \Omega) w \cdot w}{2}}}}. \end{align*} \end{proof} \section{Linearization of the interaction mechanism} \label{LinIntMec} We intend to investigate the asymptotic behavior of \eqref{Equ3} when $\eps \searrow 0$. We introduce the formal development \[ \fe = f + \eps \fo + ... \] and we expect that $ Q(f) = 0 $ and \begin{equation} \label{Equ21} \partial _t f + v \cdot \nabla _x f = \lime \frac{Q(\fe) - Q(f)}{\eps} = \mathrm{d}Q_f (\fo) = : \calL _f (\fo). \end{equation} As seen before, for any $(t,x) \in \R_+ \times \R^d$, the individual density $f(t,x,\cdot)$ is a von Mises-Fisher distribution \[ f(t,x,v) = \rho (t,x) M_{|u|\Omega (t,x)} (v),\;\;v \in \R^d \] where $|u|$ is a critical point of $Z(\sigma, \cdot)$, that is \[ |u| \in \{0, l(\sigma)\}\;\mbox{ if } 0 < \sigma < \sigma _0\;\mbox{ and }\;|u| = 0\;\mbox{ if } \; \sigma \geq \sigma _0. \] It remains to determine the fluid equations satisfied by the macroscopic quantities $\rho, \Omega$. When $|u| = 0$, the continuity equation leads to $\partial _t \rho = 0$. In the sequel we concentrate on the case $|u| = l(\sigma), 0 < \sigma < \sigma_0$ (that is, the modulus of the mean velocity is given, as a function of $\sigma$). We follow the strategy in \cite{BosCar15, AceBosCarDeg19}. We consider \[ \ltmu = \{ \chi : \R^d \to \R \mbox{ measurable },\;\intv{(\chi (v))^2 M_u(v)} < + \infty \} \] and \[ \homu = \{ \chi : \R^d \to \R \mbox{ measurable },\;\intv{[\;(\chi (v))^2 + |\nabla _v \chi |^2\;]M_u(v)} < + \infty \}. \] We introduce the usual scalar products \[ (\chi, \theta)_{M_u} = \intv{\chi (v) \theta (v) M_u (v)},\;\;\chi, \theta \in \ltmu , \] \[ ((\chi, \theta))_{M_u} = \intv{( \chi (v) \theta (v) + \nabla _v \chi \cdot \nabla _v \theta) M_u (v)},\;\;\chi, \theta \in \homu \] and we denote by $|\cdot |_{M_u}, \|\cdot \|_{M_u}$ the associated norms. Moreover we need a Poincar\'e inequality. This comes from the equivalence between the Fokker-Planck and Schr\"odinger operators. As described in \cite{BonCarGouPav16}, we can write it as \[ - \frac{\sigma}{\sqrt{M_u}} \Divv\left ( M_u \nabla _v \left ( \frac{g}{\sqrt{M_u}}\right ) \right ) = - \sigma \Delta _v g + \left [ \frac{1}{4\sigma} |\nabla _v \Phiu |^2 - \frac{1}{2} \Delta _v \Phiu \right ] g. \] The operator $\mathcal{H}_u = - \sigma \Delta _v + \left [ \frac{1}{4\sigma} |\nabla _v \Phiu |^2 - \frac{1}{2} \Delta _v \Phiu \right ]$ is defined in the domain \[ D(\mathcal{H}_u) = \left \{g \in L^2 (\R^d),\;\left [ \frac{1}{4\sigma} |\nabla _v \Phiu |^2 - \frac{1}{2} \Delta _v \Phiu \right ]g \in L^2 (\R^d),\;\;\Delta _v g \in L^2 (\R^d) \right \}. 
\] We have a spectral decomposition of the operator $\mathcal{H}_u$ under suitable confining assumptions (cf. Theorem XIII.67 in \cite{ReedSimon78}). \begin{lemma} \label{RSVol4} Assume that the function $v \mapsto \frac{1}{4\sigma} |\nabla _v \Phiu |^2 - \frac{1}{2}\Delta _v \Phiu$ belongs to $L^1_{\mathrm{loc}}(\R^d)$, is bounded from below and is coercive {\it i.e.} \[ \lim _{|v| \to + \infty} \left [ \frac{1}{4\sigma} |\nabla _v \Phiu |^2 - \frac{1}{2}\Delta _v \Phiu \right ]= + \infty. \] Then $\mathcal{H}_u ^{-1}$ is a self adjoint compact operator in $L^2(\R^d)$ and $\mathcal{H}_u$ admits a spectral decomposition, that is, a nondecreasing sequence of real numbers $(\lambda _u ^n)_{n\in \N}$, $\lim _{n \to + \infty} \lambda _u ^n = + \infty$, and an $L^2(\R^d)$-orthonormal basis $(\psi _u ^n)_{n \in \N}$ such that $\mathcal{H}_u \psi _u ^n = \lambda _u ^n \psi _u ^n, n \in \N$, $\lambda _u ^0 = 0$, $\lambda _u ^1 >0$. \end{lemma} Therefore, under the hypotheses in Lemma \ref{RSVol4}, for any $u \in \R^d$ there is $\lambda _u >0$ such that for any $\chi \in \homu$ we have \begin{equation} \label{Equ22} \sigma \intv{|\nabla _v \chi |^2 M_u (v)} \geq \lambda _u \intv{\left |\chi (v) - \intvp{\chi (\vp) M_u (\vp)}\right |^2 M_u (v)}. \end{equation} The fluid equations are obtained by taking the scalar product of \eqref{Equ21} with elements in the kernel of the (formal) adjoint of $\calL _f$, that is with functions $\psi = \psi (v)$ such that \[ \intv{(\calL _f g)(v) \psi (v)} = 0,\;\;\mbox{ for any function } \;g = g(v), \] see also \cite{BarGolLevI93,BarGolLevII93,BosDCDS15,BosIHP15,Lev93,Lev96}. For example, $\psi = 1$ belongs to the kernel of $\calL_f ^\star$ \[ \intv{(\calL_f g )(v)} = \intv{\lime \frac{Q(f + \eps g) - Q(f)}{\eps}} = \lime \frac{1}{\eps} \intv{\{Q(f + \eps g) - Q(f)\}} = 0, \] and we obtain the continuity equation \eqref{Equ51} \begin{equation*} \partial _t \intv{f} + \Divx \intv{f v } = \intv{\calL_f (f^1)} = 0. \end{equation*} In the sequel we determine the formal adjoint of the linearization of the collision operator $Q$ around its equilibria. \begin{pro} \label{Linear} Let $f = f(v)$ be an equilibrium with non vanishing mean velocity \[ f = \rho M_u,\;\;\rho = \rho[f],\;\;u = |u|\Omega [f],\;\;|u| = l(\sigma),\;\;0 < \sigma < \sigma _0. \] \begin{enumerate} \item The linearization $\calL _f = \mathrm{d} Q_f$ is given by \[ \calL _f g = \Divv \left \{ \sigma \nabla _v g + g \nabla _v \Phiu - M_u \intvp{(\vp - u) g(\vp)} \right \}. \] \item The formal adjoint of $\calL _f$ is \begin{equation*} \calL _f ^\star \psi = \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u} + (v - u) \cdot W[\psi],\;\;W[\psi] := \intv{M_u (v) \nabla _v \psi }. \end{equation*} \item We have the identity \[ \calL _f (f(v-u)) = \sigma \nabla _v f - \Divv \left ( f \calmu \right ),\;\;\calmu := \intvp{M_u (\vp) (\vp - u) \otimes ( \vp - u)}. \] \end{enumerate} \end{pro} \begin{proof}$\;$\\ 1. We have \[ \calL _f g = \frac{\md}{\md s} \Big |_{s= 0} Q(f+sg) = \Divv \left \{\sigma \nabla _v g + g \nabla _v \Phiu - f \frac{\md}{\md s}\Big|_{s= 0} u[f+sg] \right \} \] and \[ \frac{\md}{\md s}\Big|_{s= 0}u [f+sg] = \frac{\intv{\;(v - u[f])g(v)}}{\intv{f(v)} } . \] Therefore we obtain \[ \calL _f g = \Divv \left \{ \sigma \nabla _v g + g \nabla _v \Phiu - M_u \intvp{\;(\vp - u[f])g(\vp)} \right \}. \] 2.
We have \begin{align*} & \intv{(\calL _f g) (v)\psi(v) } = - \intv{\left \{ \sigma \nabla _v g + g \nabla _v \Phiu - M_u (v) \intvp{\;(\vp - u[f]) g(\vp) }\right \}\cdot \nabla _v \psi }\\ & = \intv{g (v) \left ( \sigma \Divv \nabla _v \psi - \nabla _v \psi \cdot \nabla _v \Phiu \right ) } + \intvp{g(\vp) ( \vp - u[f]) }\cdot \intv{M_u (v) \nabla _v \psi} \\ & = \intv{g(v) \left (\sigma \Divv \nabla _v \psi - \nabla _v \psi \cdot \nabla _v \Phiu + (v - u[f]) \cdot W[\psi] \right ) } \end{align*} implying \[ \calL _f ^\star \psi = \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u} + (v - u[f]) \cdot W[\psi]. \] 3. For any $i \in \{1,...,d\}$ we have \begin{align*} \calL _f (f(v-u)_i)& = \Divv \left [(v - u)_i (\underbrace{\sigma \nabla _v f + f \nabla _v \Phiu}_{= 0}) + \sigma f e_i - M_u \intvp{(\vp - u)_i (\vp - u) f(\vp)} \right ] \\ & = \sigma \partial _{v_i} f - \Divv\left (f \intvp{(\vp - u ) \otimes (\vp - u) M_u (\vp) } \right ) _i \end{align*} and therefore \begin{align*} \calL _f (f(v-u)) & = \sigma \nabla _v f - \Divv ( f \calmu ). \end{align*} \end{proof} We identify now the kernel of $\calL _f ^\star$. \begin{lemma} \label{AdjointRes} Let $f = \rho M_u >0$ be an equilibrium with non vanishing mean velocity. The following statements are equivalent \begin{enumerate} \item The function $\psi = \psi (v)$ belongs to $\ker \calL _f ^\star$. \item The function $\psi = \psi (v)$ satisfies \begin{equation} \label{Equ24} \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u (v)} + (v - u ) \cdot W = 0 \end{equation} for some vector $W \in \ker ( \calmu - \sigma I_d)$. \end{enumerate} Moreover, the linear map $W : \ker \calL _f ^\star \to \ker ( \calmu - \sigma I_d)$, defined by $W[\psi] = \intv{M_u (v) \nabla _v \psi }$ induces an isomorphism between the vector spaces $\ker \calL _f ^\star /\ker W$ and $\ker ( \calmu - \sigma I_d)$, where $\ker W$ is the set of constant functions. \end{lemma} \begin{proof} $\;$\\ 1.$\implies$2. Let $\psi$ be an element of $\ker \calL _f ^\star$. By the last statement in Proposition \ref{Linear} we deduce \begin{align*} 0 & = \intv{\calL _f ^\star \psi \; f (v - u)} = \intv{\psi (v) \calL _f (f(v-u))} \\ & = \intv{\psi (v) \left [ \sigma \nabla _v f - \Divv ( f \calmu ) \right ] }\\ & = - \sigma \intv{f(v) \nabla _v \psi } + \calmu \intv{f(v) \nabla _v \psi }\\ & = \rho ( \calmu - \sigma I_d ) W[\psi]. \end{align*} As $\rho >0$ we deduce that $W[\psi] \in \ker ( \calmu - \sigma I_d)$ and by the second statement in Proposition \ref{Linear} it comes that \[ \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u (v)} + (v - u ) \cdot W = 0,\;\;W = W[\psi] \in \ker ( \calmu - \sigma I_d). \] \noindent 2.$\implies$1. Let $\psi$ be a function satisfying \eqref{Equ24} for some vector $W \in \ker (\calmu - \sigma I_d)$. Multiplying by $M_u(v) (v-u)$ and integrating with respect to $v$ yields \[ - \sigma \intv{M_u (v) \nabla _v \psi } +\calmu W = 0. \] As we know that $W \in \ker (\calmu - \sigma I_d)$, we deduce that $W = W[\psi]$, implying that $\psi$ belongs to $\ker \calL _f ^\star$ \[ \calL _f ^\star \psi = \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u} + (v- u) \cdot W[\psi] = \sigma \frac{\Divv (M_u \nabla _v \psi)}{M_u} +( v - u) \cdot W = 0. \] \end{proof} We focus on the eigenspace $\ker ( \calmu - \sigma I_d)$. \begin{lemma} \label{Spectral} Let $M_u$ be an equilibrium with non vanishing mean velocity. Then we have \[ \calmu - \sigma I_d = \sigma ^2 \frac{\partial _{ll} ^2 Z(\sigma, l(\sigma))}{Z(\sigma, l(\sigma))} \Omega \otimes \Omega \leq 0,\;\;\Omega = \frac{u}{|u|}. 
\] In particular $(\R u ) ^\perp \subset \ker (\calmu - \sigma I_d)$ with equality iff $\partial _{ll} ^2 Z(\sigma, l(\sigma)) \neq 0$. \end{lemma} \begin{proof} Let us consider $\{E_1, \ldots, E_{d-1}\}$ an orthonormal basis of $(\R\Omega) ^\perp$. By using the decomposition \[ v - u = ( \Omega \otimes \Omega ) (v-u) + \sumi ( E_i \otimes E_i ) (v-u) = ( \Omega \otimes \Omega ) (v-u) + \sumi ( E_i \otimes E_i ) v \] we obtain \begin{align*} \calmu & = \intv{\left [ \Omega \otimes \Omega (v-u) + \sumi E_i \otimes E_i v \right ]\otimes \left [ \Omega \otimes \Omega (v-u) + \sumj E_j \otimes E_j v \right ] M_u (v) } \\ & = ( \calmu \Omega \cdot \Omega ) \Omega \otimes \Omega + \sumi (\calmu E_i \cdot E_i ) E_i \otimes E_i \end{align*} since we have \begin{equation} \label{Equ26} \calmu \Omega \cdot E_j = 0,\;\;1 \leq j \leq d-1 \end{equation} and \begin{equation} \label{Equ27} \calmu E_i \cdot E_j = \delta _{ij} \intv{\frac{|v|^2 - (v \cdot \Omega) ^2 }{d-1} M_u (v)},\;\;1\leq i, j \leq d-1. \end{equation} The formula \eqref{Equ26} follows from the change of variable $v = (I_d - 2 E_j \otimes E_j)\vp$, by noticing that $I_d - 2 E_j \otimes E_j \in \calTu$, for any $1 \leq j \leq d-1$ \begin{align*} \calmu \Omega \cdot E_j & = \intv{\Omega \cdot (v - u) (E_j \cdot v) M_u (v)}\\ & = - \intvp{\Omega \cdot (\vp - u) (E_j \cdot \vp) M_u (\vp) }\\ &= - \calmu \Omega \cdot E_j = 0,\;\;1 \leq j \leq d-1. \end{align*} For the formula \eqref{Equ27} with $i \neq j$ we use the rotation $\calO _{ij} \in \calTu$ \[ v = \calO _{ij}\vp,\;\;\calO _{ij} = \Omega \otimes \Omega + \sum_{k \notin\{i,j\}}E_k \otimes E_k + E_i \otimes E_j - E_j \otimes E_i. \] Notice that \[ (E_i \cdot v) (E_j \cdot v) = - (E_j \cdot \vp) (E_i \cdot \vp),\;\;(E_i \cdot v) ^2 = (E_j \cdot \vp)^2 \] and therefore, \begin{align*} \calmu E_i \cdot E_j & = \intv{(E_i \cdot v) (E_j \cdot v) M_u (v)} \\ & = - \intvp{(E_j \cdot \vp) (E_i \cdot \vp) M_u (\vp)} \\ & = - \calmu E_i \cdot E_j = 0,\;\;1\leq i\neq j \leq d-1 \end{align*} and \begin{align*} \calmu E_i \cdot E_i & = \intv{(E_i \cdot v)^2 M_u (v)} \\ & = \intvp{(E_j \cdot \vp)^2 M_u (\vp)} \\ & = \calmu E_j \cdot E_j,\;\;1\leq i, j \leq d-1. \end{align*} As $\sumi (E_i \cdot v)^2 = |v|^2 - (v \cdot \Omega) ^2$, we obtain \[ \intv{(E_i \cdot v) ^2 M_u (v)} = \intv{\frac{|v|^2 - (v \cdot \Omega)^2}{d-1} M_u (v) },\;\;1 \leq i \leq d-1 \] and \[ \calmu = \intv{( \; (v-u) \cdot \Omega )^2 M_u (v) } \;\Omega \otimes \Omega + \intv{\frac{|v|^2 - (v \cdot \Omega)^2}{d-1} M_u (v) }\; (I_d - \Omega \otimes \Omega). \] We claim that $\intv{\frac{|v|^2 - (v \cdot \Omega)^2}{d-1} M_u (v) } = \sigma$. Multiplying $\sigma \nabla _v M_u + M_u (v) \nabla _v \Phiu = 0$ by $(|v|^2 I_d - v \otimes v ) \Omega$ we obtain \[ \intv{\sigma \nabla _v M_u \cdot ( |v|^2 I_d - v \otimes v ) \Omega } + \intv{M_u (v) \nabla _v \Phiu \cdot ( |v|^2 I_d - v \otimes v ) \Omega } = 0. \] But we have \[ \Divv [ ( |v|^2 I_d - v \otimes v ) \Omega] = \Divv [|v|^2 \Omega - (v \cdot \Omega) v ] = -(d-1) (v \cdot \Omega) \] and \[ \nabla _v \Phiu \cdot ( |v|^2 I_d - v \otimes v ) \Omega = \left ( v - u + V^\prime (|v|) \frac{v}{|v|} \right ) \cdot ( |v|^2 I_d - v \otimes v ) \Omega = - (|v|^2 - (v \cdot \Omega) ^2 ) |u|. \] We deduce that \[ (d-1) \sigma \underbrace{\intv{(v \cdot \Omega) M_u (v)}}_{=|u|} - |u| \intv{[|v|^2 - (v \cdot \Omega) ^2 ] M_u (v)} = 0 \] and by taking into account that $|u| = \intv{(v \cdot \Omega) M_u (v)}$, we obtain \[ \intv{\frac{|v|^2 - (v \cdot \Omega)^2}{d-1} M_u (v) } = \sigma.
\] By Remark \ref{Modulus}, we know that \[ \sigma ^2 \frac{\partial _{ll} ^2 Z(\sigma, l(\sigma))}{Z(\sigma, l(\sigma))} = \intv{M_u (v) \{ ((v-u) \cdot \Omega ) ^2 - \sigma \}} = \intv{M_u (v) ((v-u) \cdot \Omega ) ^2} - \sigma \] and finally we have \[ \calmu - \sigma I_d = \left ( \intv{((v-u) \cdot \Omega ) ^2 M_u (v)} - \sigma \right ) \Omega \otimes \Omega = \sigma ^2 \frac{\partial _{ll} ^2 Z(\sigma, l(\sigma))}{Z(\sigma, l(\sigma))} \;\Omega \otimes \Omega. \] As $l(\sigma)$ is a maximum point of $Z(\sigma, \cdot)$, we have $\partial _{ll} ^2 Z(\sigma, l(\sigma)) \leq 0$ and therefore $\calmu \leq \sigma I_d$. \end{proof} \section{The kernel of $\calL _f ^\star$} \label{Invariants} By Lemmas \ref{AdjointRes}, \ref{Spectral}, any solution of \eqref{Equ24} with $W \in (\R u ) ^\perp$ belongs to the kernel of the formal adjoint $\calL_f ^\star$. Generally we will solve the elliptic problem \begin{equation} \label{Equ31} - \sigma \Divv (M_u \nabla _v \psi ) = (v - u) \cdot W M_u (v),\;\;v \in \R^d \end{equation} for any $W \in \R^d$. We consider the continuous bilinear symmetric form $a_u :\homu \times \homu \to \R$ defined by \[ a_u (\varphi, \theta) = \sigma \intv{\nabla _v \varphi \cdot \nabla _v \theta M_u (v)},\;\;\varphi, \theta \in \homu \] and the linear form $L : \homu \to \R, L(\theta) = \intv{\theta (v) (v-u ) \cdot W M_u (v) },\; \theta \in \homu$. Notice that under the hypothesis \eqref{Equ11} $L$ is bounded on $\homu$ \begin{align*} \intv{|\theta (v) (v - u ) \cdot W |M_u } & \leq \left ( \intv{(\theta (v))^2 M_u } \right ) ^{1/2} \left ( \intv{(|v| + |u|) ^2 M_u }\right ) ^{1/2} |W|. \end{align*} We are looking for variational solutions of \eqref{Equ31} {\it i.e.,} \begin{equation} \label{Equ30} \psi \in \homu \;\mbox{ and } a_u (\psi, \theta) = L(\theta)\;\;\mbox{ for any }\theta \in \homu. \end{equation} When taking $\theta = 1 \in \homu$, we obtain the following necessary condition for the solvability of \eqref{Equ31} \begin{equation} \label{Equ32} L(1) = \intv{(v-u) \cdot W M_u (v) } = 0 \end{equation} which is satisfied for any $W \in \R^d$, because $M_u$ has mean velocity $u$. It happens that \eqref{Equ32} also guarantees the solvability of \eqref{Equ31}. For that, it is enough to observe that the bilinear form $a_u$ is coercive on the Hilbert space $\thomu : = \{ \theta \in \homu\;:\; ((\theta, 1))_{M_u} = 0\}$. Indeed, for any $\theta \in \homu$ such that $((\theta, 1))_{M_u} = 0$, we have thanks to the Poincar\'e inequality \eqref{Equ22} \[ \sigma \intv{|\nabla _v \theta|^2 M_u (v) } \geq \lambda _u \intv{(\theta (v))^2 M_u (v)}, \] and therefore \[ a_u (\theta, \theta) \geq \frac{\lambda _u}{2} \intv{(\theta (v))^2 M_u (v)} + \frac{\sigma }{2} \intv{|\nabla _v \theta|^2 M_u (v)} \geq \frac{\min \{\sigma, \lambda _u\}}{2} \|\theta \|^2 _{M_u}. \] Thanks to Lax-Milgram lemma on the Hilbert space $\thomu$, there is a unique function $\psi \in \thomu$ such that \begin{equation} \label{Equ33} a_u (\psi, \tilde{\theta} ) = L(\tilde{\theta})\;\;\mbox{ for any } \tilde{\theta} \in \thomu. \end{equation} The condition \eqref{Equ32} allows us to extend \eqref{Equ33} to $\homu$ (apply \eqref{Equ33} with $\tilde{\theta} = \theta - ((\theta, 1))_{M_u}$, for any $\theta \in \homu$). The uniqueness of the solution of \eqref{Equ33} implies the uniqueness, up to a constant, for the solution of \eqref{Equ30}. From now on, for any $W \in \R^d$, we denote by $\psi _W$ the unique solution of \eqref{Equ30}, verifying $\intv{\psi _W (v) M_u (v)} = 0$. Notice that $\psi _0 = 0$. 
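As a quick illustration of \eqref{Equ30} (anticipating Remark \ref{Maxwellian} below), consider the Maxwellian case $V = 0$, for which $\sigma \nabla _v M_u + M_u (v) (v - u) = 0$. One checks directly that $\psi _W (v) = (v-u) \cdot W$ is the solution in that case: for any $\theta \in \homu$, integrating by parts,
\[
a_u ((v-u)\cdot W, \theta ) = \sigma \intv{W \cdot \nabla _v \theta \;M_u (v)} = - \sigma \intv{\theta (v) \;W \cdot \nabla _v M_u } = \intv{\theta (v) \,(v - u) \cdot W M_u (v)} = L(\theta),
\]
while the normalization $\intv{(v-u)\cdot W M_u (v)} = 0$ holds because $M_u$ has mean velocity $u$.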
The solution $\psi _W$ depends linearly on $W \in \R^d$. Let us introduce the Hilbert spaces \[ \bltmu= \{\xi : \R^d \to \R^d\;\mbox{ measurable }, \sum _{i = 1} ^d \intv{(\xi _i (v))^2 M_u (v)} < +\infty\} \] \[ \bhomu= \{\xi : \R^d \to \R^d\;\mbox{ measurable }, \sum _{i = 1} ^d \intv{\{ (\xi _i (v))^2 + |\nabla _v \xi _i |^2\}M_u (v)} < +\infty\} \] endowed with the scalar product \[ (\xi, \eta)_{M_u} = \sum _{i = 1} ^d \intv{\xi _i (v) \eta _i (v) M_u (v)},\;\;\xi, \eta \in \bltmu \] \[ ((\xi, \eta))_{M_u} = \sum _{i = 1} ^d \intv{\{ \xi _i (v) \eta _i (v) + \nabla _v \xi _i \cdot \nabla _v \eta _i \}M_u (v)},\;\;\xi, \eta \in \bhomu. \] We denote the induced norms by $|\xi |_{M_u} = (\xi, \xi ) _{M_u} ^{1/2}, \;\xi \in \bltmu$ and $\|\xi \|_{M_u} = ((\xi, \xi )) _{M_u} ^{1/2}, \;\xi \in \bhomu$. Obviously, a vector field $\xi = \xi (v)$ belongs to $\bhomu$ iff $\xi _i \in \homu$ for any $i \in \{1,...,d\}$ and we have \[ \|\xi \|_{M_u}^2 = \sum _{i = 1} ^d \|\xi _i \|_{M_u} ^2. \] Let us consider the closed subspace \[ \bthomu = \{ \xi \in \bhomu\;:\;\intv{\xi (v) M_u (v)} = 0\}. \] Thanks to \eqref{Equ22}, for any $\xi \in \bhomu$ we have the inequality \[ \sigma \sum _{i = 1} ^d \intv{|\nabla _v \xi _i |^2 M_u (v) } \geq \lambda _u \sum _{i = 1} ^d \intv{\left |\xi _i (v) - \intvp{\xi _i (\vp) M_u (\vp)}\right | ^ 2 M_u (v)} \] and therefore \begin{align} \label{Equ41} \sigma \sum _{i = 1} ^d \intv{|\nabla _v \xi _i |^2 M_u (v) } & \geq \frac{\min\{\sigma, \lambda _u \}}{2} \sum _{i = 1} ^d \intv{[(\xi _i (v))^2 + |\nabla _v \xi _i |^2 ] M_u (v)} \nonumber \\ & = \frac{\min\{\sigma, \lambda _u \}}{2}\|\xi \|^2 _{M_u},\;\;\xi \in \bthomu. \end{align} We introduce the continuous bilinear symmetric form ${\bf a} _u : \bhomu \times \bhomu \to \R$ defined by \[ {\bf a}_u (\xi, \eta) = \sigma \intv{\partial _v \xi : \partial _v \eta \; M_u (v)} = \sum _{i = 1} ^d a_u (\xi _i, \eta _i),\;\;\xi, \eta \in \bhomu \] and the linear form ${\bf L } : \bhomu \to \R$, ${\bf L }(\eta) = \intv{(v-u) \cdot \eta (v) M_u (v)}, \eta \in \bhomu$. Under the hypothesis \eqref{Equ11}, it is easily seen that ${\bf L}$ is bounded on $\bhomu$ \[ \intv{|(v - u) \cdot \eta (v)|M_u (v)} \leq \left ( \intv{(|v| + |u| ) ^2 M_u (v) } \right ) ^{1/2} \|\eta \|_{M_u},\;\;\eta \in \bhomu. \] \begin{pro} There is a unique solution $F$ of the variational problem \[ F \in \bthomu\;\mbox{ and }\; {\bf a}_u (F, \eta) = {\bf L } (\eta),\;\mbox{ for any } \eta \in \bhomu. \] For any $W \in \R^d$ we have $\psi _W (v) = F(v) \cdot W, v \in \R^d$. The vector field $F$ is left invariant by the family $\calTu$. \end{pro} \begin{proof} The bilinear for ${\bf a}_u$ is coercive on $\bthomu$, thanks to \eqref{Equ41} \[ {\bf a}_u (\xi, \xi) \geq \frac{\min \{\sigma, \lambda _u \}}{2} \|\xi \|_{M_u } ^2,\;\;\mbox{ for any } \xi \in \bthomu. \] By Lax-Milgram lemma, applied on the Hilbert space $\bthomu$, there is a unique vector field $F \in \bthomu$ such that \[ {\bf a}_u (F, \eta) = {\bf L } (\eta),\;\mbox{ for any } \eta \in \bthomu. \] Actually, the above equality holds true for any $\eta \in \bhomu$ \begin{align*} {\bf a}_u (F, \eta) & = \sum _{i = 1} ^d a_u (F_i, \eta _i) = \sum _{i = 1} ^d a_u (F_i, \eta_i - (\eta_i,1)_{M_u} ) \\ & = {\bf L} ( \eta _1 - (\eta _1,1)_{M_u},...,\eta_d - (\eta _d ,1) _{M_u}) \\ & = \sum _{i =1 } ^d \intv{(v_i - u_i) [ \eta _i (v) - (\eta_i, 1)_{M_u}] M_u (v)} \\ & = \sum _{i =1 } ^d \intv{(v_i - u_i) \eta _i (v) M_u (v)} \\ & = {\bf L} (\eta). 
\end{align*} It remains to check that for any $W \in \R^d$, $v \mapsto F(v) \cdot W$ solves \eqref{Equ33}, on $\homu$. Observe that $F \cdot W \in \thomu$. Notice also that for any $\theta \in \homu$ we have $\theta W \in \bhomu$ and \begin{align*} a_u (F\cdot W, \theta) & = \sigma \intv{\;^t \partial _v F W \cdot \nabla _v \theta M_u (v)}\\ & = \sigma \intv{\partial _v F : \partial _v (\theta W) M_u (v)}\\ & = {\bf a}_u (F, \theta W) = {\bf L} (\theta W) \\ & = \intv{(v-u) \cdot W \theta(v) M_u (v) }\\ & = L(\theta). \end{align*} Thank to the uniqueness we obtain $\psi _W (v) = F(v) \cdot W, v \in \R^d, W \in \R^d$. Consider now $\calO \in \calTu$. We are done if we prove that $v \to \calO F (\;^t \calO v)$ solves the same problem as $F$. Clearly we have \[ \intv{|\calO F( \;^t \calO v ) |^2 M_u (v)} = \intvp{|F(\vp)|^2 M_u (\vp) } < +\infty \] \begin{align*} \intv{\partial [ \calO F (\;^t \calO \;\cdot)] : \partial [ \calO F (\;^t \calO \;\cdot )] M_u (v) } & = \intv{\partial F (\;^t \calO \;\cdot ) : \partial F (\;^t \calO \;\cdot ) M_u (v)} \\ & = \intv{\partial F ( \;^t \calO v ) \;^t \calO : \partial F ( \;^t \calO v) \;^t \calO M_u (v) } \\ & = \intvp{\partial F (\vp) : \partial F (\vp) M_u (\vp) } < +\infty. \end{align*} and \[ \intv{\calO F (\;^t \calO v ) M_u (v)} = \calO \intvp{F(\vp) M_u (\calO \vp) } = 0 \] saying that $v \mapsto \calO F(\;^t \calO v)$ belongs to $\bthomu$. For any $\eta \in \bhomu$ we have $^t \calO \eta (\calO \; \cdot ) \in \bhomu$ and \begin{align*} {\bf a}_u (\calO F (\;^t \calO \; \cdot ), \eta) & = \sigma \intv{\partial ( \calO F (\;^t \calO \; \cdot )) : \partial \eta M_u (v)} \\ & = \sigma \intv{\calO \partial F (\;^t \calO v ) \;^t \calO : \partial \eta M_u (v)}\\ & = \sigma \intvp{\partial F (\vp) : \;^t \calO ( \partial \eta) (\calO \vp) \calO M_u (\calO \vp) }\\ & = \sigma \intvp{\partial F (\vp) : \partial ( \;^t O \eta (\calO \; \cdot ) ) (\vp) M_u (\vp)} \\ & = \intvp{(\vp - u) \cdot \;^t \calO \eta (\calO \vp) M_u (\vp)} \\ & = \intv{(v-u) \cdot \eta (v) M_u (v)} = {\bf L}(\eta). \end{align*} \end{proof} The vector field $F$ expresses in terms of two functions which are left invariant by the family $\calTu$. \begin{pro} There is a function $\psi$, which is left invariant by the family $\calTu$, such that \[ F(v) = \psi (v) \frac{v - (v \cdot \Omega)\Omega}{\sqrt{|v|^2 - (v \cdot \Omega)^2 }} + \psi _\Omega (v) \Omega,\;\;v \in \R ^d \setminus (\R \Omega). \] \end{pro} \begin{proof} Obviously we have $F = (F \cdot \Omega) \Omega + F^\prime = \psi _\Omega \Omega + F^\prime$, with $F^\prime = (I_d - \Omega \otimes \Omega) F$. The vector field $F^\prime$ is orthogonal to $\Omega$ and is left invariant by the family $\calTu$ \begin{align*} F^\prime (\;^t \calO v) & = F (\;^t \calO v ) - ( F (\;^t \calO v )\cdot \Omega) \Omega = \;^t \calO F(v) - ( \;^t \calO F(v) \cdot \Omega) \Omega \\ & = \;^t \calO (F(v) - (F(v) \cdot \Omega) \Omega ) = \;^t \calO F^\prime (v),\;\;v \in \R^d. \end{align*} We claim that $F^\prime (v)$ is parallel to the orthogonal projection of $v$ over $(\R \Omega) ^\perp$. Indeed, for any $v \in \R^d \setminus (\R \Omega)$, let us consider \[ E(v) = \frac{(I_d - \Omega \otimes \Omega)v}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}. \] When $d = 2$, since $E(v)$ and $F^\prime (v)$ are both orthogonal to $\Omega$, there exists a function $\psi = \psi (v)$ such that \[ F^\prime(v) = \psi (v) E(v) = \psi (v) \frac{(I_2- \Omega \otimes \Omega)v}{\sqrt{|v|^2 - (\Omega \cdot v)^2}},\;\;v \in \R^2 \setminus ( \R \Omega) \, . 
\] If $d \geq 3$, let us denote by $^\bot E$, any unitary vector orthogonal to $E$ and $\Omega$. Introducing the orthogonal matrix $\calO = I_d - 2 \;^\bot E \otimes \;^\bot E \in \calTu$, we obtain $F^\prime ( \;^t \calO \;\cdot) = \;^t \calO F^\prime $. Observe that \[ 0 = \;^\bot E \cdot E(v) = \;^\bot E\cdot \frac{v - (v \cdot \Omega)\Omega}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}} = \frac{\;^\bot E\cdot v}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}},\;\;\calO v = v \] and thus \[ F^\prime (v) = F^\prime (\calO v ) = \calO F^\prime(v) = (I_d - 2 \;^\bot E \otimes \;^\bot E) F^\prime (v) = F^\prime (v) - 2 ( \;^\bot E \cdot F^\prime (v)) \;^\bot E \] from which it follows that $^\bot E \cdot F^\prime(v) = 0$, for any vector $^ \bot E$ orthogonal to $E$ and $\Omega$. Hence, there exists a function $\psi (v)$ such that \[ F^\prime (v) = \psi (v) E (v) = \psi (v) \frac{(I_d - \Omega \otimes \Omega)v}{\sqrt{|v|^2 - (v \cdot \Omega)^2}},\;\;v \in \R ^d \setminus (\R \Omega). \] It is easily seen that the function $\psi$ is left invariant by the family $\calTu$. Indeed, for any $\calO \in \calTu$ we have \[ \psi (\;^t \calO v) = F^\prime (\;^t \calO v) \cdot E (\;^t \calO v) = \;^t \calO F^\prime (v) \cdot \;^t \calO E(v) = F^\prime (v) \cdot E(v) = \psi (v),\; v \in \R^d. \] Similarly, $\psi _\Omega$ is left invariant by the family $\calTu$ \[ \psi _\Omega (\;^t \calO v ) = F (\;^t \calO v ) \cdot \Omega = \;^t \calO F(v) \cdot \Omega = F(v) \cdot \calO \Omega = F(v) \cdot \Omega = \psi _\Omega (v),\;v \in \R^d, \;\calO \in \calTu. \] \end{proof} The functions $\psi, \psi _\Omega$ will enter the fluid model satisfied by the macroscopic quantities $\rho, \Omega, |u|$. It is convenient to determine the elliptic partial differential equations satisfied by them. \begin{pro} There are two functions $\chi = \chi (c, r) : ]-1, 1[ \times ]0,+\infty[ \to \R$, $\chi _\Omega = \chi _\Omega (c,r) : ]-1,1[ \times ]0,+\infty[ \to \R$ such that $\psi (v) = \chi \left ( v \cdot \Omega/|v|, |v|\right ), \psi _\Omega (v) = \chi _\Omega \left ( v \cdot \Omega/|v|, |v|\right )$, $v \in \R^d \setminus (\R \Omega)$. The above functions satisfy \begin{align} \label{EquChi} - & \sigma \partial _c \{ r ^{d-3} (1 - c^2) ^{\frac{d-1}{2}} e(c,r,|u|) \partial _c \chi \} - \sigma \partial _r \{ r ^{d-1} (1 - c^2 ) ^{\frac{d-3}{2}} e(c, r, |u|) \partial _r \chi \} \\ & + \sigma (d-2) r ^{d-3} (1 - c^2) ^{\frac{d-5}{2}} e(c, r, |u|) \chi = r^d (1 - c^2) ^{\frac{d-2}{2}}e(c,r,|u|),\;(c,r) \in ]-1,1[ \times ]0,+\infty[\nonumber \end{align} and \begin{align} \label{EquChiOme} - & \sigma \partial _c \{ r ^{d-3} (1 - c^2) ^{\frac{d-1}{2}} e(c,r,|u|) \partial _c \chi _\Omega \} - \sigma \partial _r \{ r ^{d-1} (1 - c^2 ) ^{\frac{d-3}{2}} e(c, r, |u|) \partial _r \chi _\Omega \} \\ & = r^{d-1 } (r c - |u|) (1 - c^2) ^{\frac{d-3}{2}}e(c,r,|u|),\;(c,r) \in ]-1,1[ \times ]0,+\infty[\nonumber \end{align} where $e(c, r, l) = \exp \left ( - \frac{r^2}{2\sigma} + \frac{rcl}{\sigma} - \frac{V(r)}{\sigma} \right )$. \end{pro} \begin{proof} The function $\psi _\Omega = F \cdot \Omega$ satisfies \begin{equation} \label{Equ45} \psi _\Omega \in \thomu\;\mbox{ and } \sigma \intv{\nabla _v \psi _\Omega \cdot \nabla _v \theta \;M_u (v) } = \intv{(v-u) \cdot \Omega \;\theta (v) M_u (v)},\;\theta \in \thomu. \end{equation} By Remark \ref{Sym} we know that there is $\chi _\Omega = \chi _\Omega (c,r)$ such that $\psi _\Omega (v) = \chi _\Omega ( v\cdot \Omega /|v|, |v|), v \in \R ^d \setminus ( \R \Omega)$. 
As $\psi _\Omega$ belongs to $\thomu$, which is equivalent to \[ \intv{\psi _\Omega (v) M_u (v)} = 0,\;\;\intv{|\nabla _v \psi _\Omega |^2 M_u (v) } < +\infty \] we are led to the Hilbert space \begin{align*} H_{\parallel, |u|} & = \{ h : ]-1,1[ \times ]0,+\infty[ \to \R,\;\; \intcr{r^{d-1}}{h(c,r) e(c,r,|u|) ( 1 - c^2) ^{\frac{d-3}{2}}} = 0,\\ & \intcr{r^{d-1}}{\left [ (\partial _c h )^2 \frac{1- c^2}{r^2} + (\partial _r h ) ^2\right ] e(c,r,|u|) ( 1 - c^2) ^{\frac{d-3}{2}}} < +\infty\} \end{align*} endowed with the scalar product \[ (h, g)_{\parallel, |u|} = \intcr{r^{d-1}}{\left [ \partial _c h \partial _c g \frac{1- c^2}{r^2} + \partial _r h \partial _r g\right ] e(c,r,|u|) ( 1 - c^2) ^{\frac{d-3}{2}}},\;h, g \in H_{\parallel, |u|}. \] Taking in \eqref{Equ45} $\theta (v) = h ( v\cdot \Omega /|v|, |v|)$, with $h \in H_{\parallel, |u|}$ (which means $\theta \in \thomu$), we obtain \begin{align*} \sigma & \intcr{r^{d-1}}{\left [\partial _c \chi _\Omega \partial _c h \frac{1- c^2}{r^2} + \partial _r \chi _\Omega \partial _r h \right] e(c,r,|u|) ( 1 - c^2) ^{\frac{d-3}{2}}}\\ & = \intcr{r^{d-1}}{(r c - |u|) h(c,r) e(c,r,|u|) ( 1 - c^2) ^{\frac{d-3}{2}}} \end{align*} which implies \eqref{EquChiOme}. We focus now on the equation satisfied by $\psi$. Let us consider an orthonormal basis $\{E_1, ..., E_{d-1}\}$ of $(\R \Omega ) ^\perp$. By Remark \ref{Sym} we know that there is $\chi = \chi (c,r)$ such that $\psi (v) = \chi ( v\cdot \Omega /|v|, |v|)$ and \[ \psi _{E_i} (v) = F(v) \cdot E_i = \psi (v) \frac{v \cdot E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}} = \chi ( v\cdot \Omega /|v|, |v|) \frac{v \cdot E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}} \] for $v \in \R^d \setminus (\R \Omega), i \in \{1, ..., d-1\}$. Let us consider $\psi _{E_i, h} (v) = h( v\cdot \Omega /|v|, |v|) \frac{v \cdot E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}}$, where $h = h(c,r)$ is a function such that $\psi _{E_i, h } \in \homu$. Actually, once that $\psi _{E_i, h} \in \homu$, then $((\psi _{E_i, h}, 1))_{M_u} = \intv{h( v\cdot \Omega /|v|, |v|) \frac{v \cdot E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}}M_u (v)} = 0$, saying that $\psi _{E_i, h} \in \thomu$. A straightforward computation shows that \begin{align*} \nabla _v \psi _{E_i,h} & = \frac{v \cdot E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}}\left [\partial _c h \frac{I_d - \frac{v \otimes v}{|v|^2}}{|v|} \Omega + \partial _r h \frac{v}{|v|} \right ]\\ & + h \left ( \frac{v \cdot \Omega}{|v|}, |v|\right ) \left [ I_d - \frac{(v - (v \cdot \Omega) \Omega ) \otimes v }{|v|^2 - (v \cdot \Omega)^2} \right ]\frac{E_i}{\sqrt{|v|^2 - ( v \cdot \Omega )^2}} \end{align*} and \begin{align*} |\nabla _v \psi _{E_i,h}|^2 & = \frac{(v \cdot E_i )^2}{|v|^4} (\partial _c h )^2 + \frac{(v \cdot E_i )^2}{|v|^2 - (v \cdot \Omega) ^2} (\partial _r h )^2 \\ & + \frac{|v|^2 - (v \cdot \Omega) ^2 - (v \cdot E_i )^2 }{( |v|^2 - ( v \cdot \Omega )^2)^2} h ^ 2\left ( \frac{v \cdot \Omega}{|v|}, |v|\right ). 
\end{align*} The condition $\psi _{E_i,h} \in \homu$ writes \[ \intv{(\psi _{E_i,h})^2 M_u (v)} < + \infty,\;\;\intv{|\nabla _v \psi _{E_i,h}|^2 M_u (v)} < + \infty \] which is equivalent, thanks to the Poincar\'e inequality \eqref{Equ22} to \begin{align*} & \intv{|\nabla _v \psi _{E_i,h}|^2 M_u (v)} \\ & = \intv{ \left [ \frac{(v \cdot E_i )^2}{|v|^4} (\partial _c h )^2 + \frac{(v \cdot E_i )^2 (\partial _r h )^2}{|v|^2 - (v \cdot \Omega) ^2} + \frac{|v|^2 - (v \cdot \Omega) ^2 - (v \cdot E_i )^2 }{( |v|^2 - ( v \cdot \Omega )^2)^2} h ^ 2 \right ] M_u (v)}< + \infty \end{align*} and therefore to $h \in H_{\perp, |u|}$, where we consider he Hilbert space \[ H_{\perp, |u|} = \{h : ]-1,1[ \times ]0,+\infty[ \to \R,\;\;\|h \|^2_{\perp, |u|} = (h,h)_{\perp, |u|} < +\infty\} \] endowed with the scalar product \begin{align*} (g, h) _{\perp, |u|} & = \intcr{r^{d-1}}{\left [ \frac{1-c^2}{r^2} \partial _c g \partial _c h + \partial _r g \partial _r h + \frac{(d-2)gh}{r^2 (1- c^2)} \right ] \\ & e(c, r, |u|) (1 - c^2) ^{\frac{d-3}{2}}}, g, h, \in H_{\perp, |u|}. \end{align*} Taking $\theta = \psi _{E_i, h} \in \thomu $ in \eqref{Equ33} leads to \[ \sigma \intv{\nabla _v \psi _{E_i} \cdot \nabla _v \psi _{E_i, h} M_u (v)} = \intv{\psi _{E_i, h} \;(v \cdot E_i) M_u (v) },\;\;h \in H_{\perp, |u|} \] or equivalently \begin{align*} \sigma & \intcr{r^{d-1}}{{\left [ \frac{1-c^2}{r^2} \partial _c \chi \partial _c h + \partial _r \chi \partial _r h + \frac{(d-2)\chi h}{r^2 (1- c^2)} \right ] e(c, r, |u|) (1 - c^2) ^{\frac{d-3}{2}}}} \\ & = \intcr{r^{d-1}}{r h(c,r) e(c, r, |u|) (1 - c^2) ^{\frac{d-2}{2}}},\;h \in H_{\perp, |u|} \end{align*} which implies \eqref{EquChi}. \end{proof} \section{The fluid model} \label{MainTh} The balances for the macroscopic quantities $\rho, u$ follow by using the elements in the kernel of $\calL _f ^\star$. \begin{proof} (of Theorem \ref{MainRes1})\\ The use of $\psi = 1 \in \ker \calL _f ^\star$ leads to \eqref{Equ51}. By Lemma \ref{Spectral}, we know that $(\R u ) ^\perp \subset \ker ( \calmu - \sigma I_d)$ and thus, for any $(t,x) \in \R_+ \times \R^d$, the vector field \[ v \mapsto F^\prime (t,x,v) = \chi \left (\frac{v \cdot \Omega (t,x)}{|v|}, |v| \right ) \frac{(I_d - \Omega (t,x) \otimes \Omega (t,x))v}{\sqrt{|v|^2 - (v \cdot \Omega (t,x))^2}} = \sumi \psi _{E_i} (v) E_i \] belongs to the kernel of $\calL _f ^\star$, implying that \begin{equation*} \intv{\partial _t \fz \;F^\prime (t,x,v)} + \intv{v \cdot \nabla _x \fz \;F^\prime(t,x,v) } = 0,\;\;(t,x) \in \R_+ \times \R^d. \end{equation*} We have $\partial _t \fz = \partial _t \rho M_u + \rho \frac{M_u }{\sigma} (v - u) \cdot \partial _t u$ and we obtain \begin{align*} & \intv{\partial _t \fz \;F^\prime} = \intv{\left ( \partial _t \rho + \frac{\rho }{\sigma} (v - u) \cdot \partial _t u \right ) \chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right ) \frac{v - ( v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}M_u (v)}\\ & = \partial _t \rho \intv{\chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right )\frac{v - ( v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}M_u (v) } \\ & + \frac{\rho}{\sigma} \intv{\chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right )M_u (v) \frac{[v - ( v \cdot \Omega) \Omega] \otimes [v - (v \cdot \Omega) \Omega + (v \cdot \Omega) \Omega - u]}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}}\;\partial _t u. 
\nonumber \end{align*} It is easily seen (use the change of variable $v = (I_d - 2 E_i \otimes E_i) v^\prime, 1 \leq i \leq d-1$) that \[ \intv{\chi \frac{v - ( v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}M_u (v) } = \sumi \intv{\chi \frac{(v \cdot E_i)E_i}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}M_u (v) } = 0 , \] \begin{align*} \intv{\chi M_u \frac{[v - ( v \cdot \Omega) \Omega] \otimes [(v \cdot \Omega) \Omega - u]}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}} = \sumi \intv{\chi M_u \frac{(v \cdot E_i)E_i \otimes [(v \cdot \Omega) \Omega - u]}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}} = 0, \end{align*} and \begin{align*} \intv{\chi M_u \frac{[v - ( v \cdot \Omega) \Omega] \otimes [v - (v \cdot \Omega) \Omega]}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}} & = \sum_{1 \leq i, j \leq d-1} \intv{\chi M_u \frac{(v \cdot E_i) (v \cdot E_j)}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}}E_i \otimes E_j\\ & = \sumi \intv{\chi \frac{(v \cdot E_i )^2}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}} M_u (v)} E_i \otimes E_i \\ & = \intv{\chi \frac{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}{d-1} M_u (v) } (I_d - \Omega \otimes \Omega). \end{align*} Therefore one gets \begin{equation} \label{Equ53} \intv{\partial _t f F^\prime (t,x,v) } = c_{\perp, 1} \frac{\rho}{\sigma} (I_d - \Omega \otimes \Omega)\partial _t u \end{equation} with \[ c_{\perp, 1} = \intv{\chi \left (\frac{v \cdot \Omega }{|v|}, |v| \right )\frac{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}{d-1} M_u (v) }. \] Observe also that \begin{align*} v \cdot \nabla _x f & = (v \cdot \nabla _x \rho ) M_u + \frac{\rho}{\sigma} \partial _x u v \cdot (v-u) M_u \\ & = ( v \cdot \nabla _x \rho) M_u + \frac{\rho}{\sigma} \partial _x u v \cdot (v - (v \cdot \Omega) \Omega + (v \cdot \Omega)\Omega - u) M_u, \end{align*} and therefore \begin{align} \label{Equ53Bis} & \intv{(v \cdot \nabla _x f ) \;F^\prime} = \intv{\chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right )M_u (v) \frac{(v - (v \cdot \Omega) \Omega)\otimes v}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}} \;\nabla _x \rho \\ & + \frac{\rho}{\sigma} \intv{ \chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right )M_u (v) \frac{(v - (v \cdot \Omega)\Omega) \otimes (v - (v \cdot \Omega) \Omega + (v \cdot \Omega)\Omega - u)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}} \; \partial _x u v}.\nonumber \end{align} As before, using the change of variable $v = (I_d - 2E_i \otimes E_i ) v^\prime, 1 \leq i \leq d-1$, we have \begin{align*} \intv{\chi M_u (v) \frac{(v - (v \cdot \Omega) \Omega)\otimes v}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}} & = \intv{\chi M_u \frac{(v - (v \cdot \Omega) \Omega)\otimes (v - (v \cdot \Omega) \Omega)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}} \\ & + \intv{\chi M_u \frac{(v - (v \cdot \Omega) \Omega)\otimes (v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}} \\ & = c_{\perp, 1} (I_d - \Omega \otimes \Omega). 
\end{align*} For the second integral in the right hand side of \eqref{Equ53Bis}, by noticing that \[ \intv{(v\cdot E_i) (v \cdot E_j) (v\cdot E_k) \chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right ) M_u (v) } = 0,\;\;i,j,k \in \{1, ..., d-1\}, \] we obtain \begin{align*} & \intv{\chi M_u (v) \frac{(v - (v \cdot \Omega) \Omega)\otimes ( v- (v \cdot \Omega) \Omega + (v \cdot \Omega )\Omega - u)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}\partial _x u v } \\ & = \intv{\chi M_u \frac{(v - (v \cdot \Omega) \Omega)\otimes (v - (v \cdot \Omega) \Omega)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}\partial _x u \Omega \;(v \cdot \Omega) } \\ & \quad + \intv{\chi M_u \frac{(v - (v \cdot \Omega) \Omega)\otimes ((v \cdot \Omega) \Omega - u)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}\partial _x u ( v - (v \cdot \Omega)\Omega) } \\ & = c_{\perp, 2} (I_d - \Omega \otimes \Omega) \partial _x u \Omega + \intv{\chi M_u \frac{(v - (v \cdot \Omega) \Omega)\otimes (v - (v \cdot \Omega) \Omega)}{\sqrt{|v|^2 - (v \cdot \Omega)^2}}\;^t \partial _x u [(v \cdot \Omega)\Omega - u] } \\ & = c_{\perp, 2} (I_d - \Omega \otimes \Omega) (\partial _x u + \;^t \partial _x u ) \Omega - c_{\perp, 1} (I_d - \Omega \otimes \Omega) \; (u\cdot \partial _x) u, \end{align*} where \[ c_{\perp, 2} = \intv{(v\cdot \Omega) \chi \frac{\sqrt{|v|^2 - (v \cdot \Omega)^2}}{d-1} M_u (v)}. \] Therefore we deduce \begin{align} \label{Equ54} \intv{(v \cdot \nabla _x f) F^\prime (t,x,v)} & = c_{\perp, 1} (I_d - \Omega \otimes \Omega) \nabla _x \rho + \frac{\rho}{\sigma} c_{\perp, 2} (I_d - \Omega \otimes \Omega) (\partial _x u + \;^t \partial _x u ) \Omega \nonumber \\ & \quad - \frac{\rho}{\sigma} c_{\perp, 1}(I_d - \Omega \otimes \Omega)\; (u\cdot \partial _x) u \end{align} and finally \eqref{Equ53}, \eqref{Equ54} yield \begin{equation} \label{Equ55} (I_d - \Omega \otimes \Omega) \partial _t u + \sigma (I_d - \Omega \otimes \Omega)\frac{\nabla _x \rho}{\rho} + c_\perp (I_d - \Omega \otimes \Omega)(u\cdot \partial _x) u + (c_\perp - 1) (I_d - \Omega \otimes \Omega) \nabla _x \frac{|u|^2}{2} = 0 \end{equation} where \begin{align*} c_\perp & = \frac{c_{\perp, 2}}{|u| \; c_{\perp, 1}} = \frac{\intv{(v \cdot \Omega) \chi\left ( \frac{v \cdot \Omega}{|v|}, |v|\right ) \sqrt{|v|^2 - (v \cdot \Omega) ^2} M_u (v)}}{|u| \;\intv{ \chi \left ( \frac{v \cdot \Omega}{|v|}, |v|\right ) \sqrt{|v|^2 - (v \cdot \Omega) ^2} M_u (v)}}\\ & = \frac{\int_{\R_+} r^{d+1} \intth{\cos \theta \chi (\cos \theta, r) e(\cos \theta, r, l(\sigma)) \sin ^{d-1} \theta }\md r}{l(\sigma) \int_{\R_+} r^{d} \intth{ \chi (\cos \theta, r) e(\cos \theta, r, l(\sigma)) \sin ^{d-1} \theta }\md r}. \end{align*} Recall that $|u| = l(\sigma)$ and therefore we have $u \cdot \partial _t u = \frac{1}{2} \partial _t |u|^2 = 0$, $(u\cdot \partial _x) u = \frac{1}{2} \nabla _x |u|^2 = 0$, implying that \[ \Omega \cdot \partial _t u = 0,\;\;^t \partial _x u \Omega = 0,\;\;\Omega \cdot \partial _x u \Omega = 0. \] The equation \eqref{Equ55} becomes \[ \partial _t \Omega + l(\sigma) c_\perp ( \Omega \cdot \nabla _x ) \Omega + \frac{\sigma}{l(\sigma)} (I_d - \Omega \otimes \Omega) \frac{\nabla _x \rho}{\rho} = 0. \] We have to check that $c_{\perp, 1} \neq 0$. This comes by using the elliptic equations satisfied by $\psi _{E_i}$, that is \[ - \sigma \Divv ( M_u \nabla _v \psi _{E_i}) = (v \cdot E_i) M_u (v),\;\;v \in \R^d,\;\;i \in \{1, ..., d-1\}. 
\] Indeed, we have \begin{align*} c_{\perp, 1} & = \intv{\chi \frac{(v \cdot E_i )^2}{\sqrt{|v|^2 - (v \cdot \Omega) ^2}}M_u (v)} = \intv{(F(v) \cdot E_i) (v \cdot E_i) M_u (v)} \\ & = \intv{\psi _{E_i} (v) (v \cdot E_i) M_u (v)} = \sigma \intv{|\nabla _v \psi _{E_i} |^2 M_u (v)} >0. \end{align*} \end{proof} Other potentials $v \mapsto V(|v|)$ can be handled as well. For example, let us assume that there is $\sigma >0$, $ 0 \leq l_1 (\sigma) < l_2 (\sigma)\leq +\infty$ such that the function $l \mapsto Z(\sigma, l)$ is stricly increasing on $[0,l_1(\sigma)]$, constant on $[l_1(\sigma), l_2(\sigma)[$, and strictly decreasing on $[l_2(\sigma), +\infty[$. In that case, for any $l \in [l_1(\sigma), l_2(\sigma)[$ we have $\partial _{ll} ^2 Z(\sigma, l) = 0$ and by Lemma \ref{Spectral} we deduce that $\calmu = \sigma I_d$, saying that $\ker ( \calmu - \sigma I_d) = \R ^d$. Using the function $\psi _\Omega$, we obtain a balance for $|u|$ as well. \begin{proof} (of Theorem \ref{MainRes2})\\ In this case $\psi _\Omega$ belongs to $\ker \calL _f ^\star$, and therefore we also have the balance \[ \intv{\partial _t \fz \psi _{\Omega (t,x)}(v)} + \intv{(v \cdot \nabla _x \fz ) \psi _{\Omega (t,x)} (v) } = \intv{\calL _{\fz(t,x,\cdot)} (\fo ) \psi _{\Omega (t,x)}} = 0. \] As before, using also $\intv{\psi _\Omega (v) M_u (v)} = 0$, we write \begin{align} \label{Equ61} \intv{\partial _t \fz \psi _\Omega} & = \intv{\left [ \partial _t \rho M_u (v)+ \frac{\rho}{\sigma} M_u (v)(v-u) \cdot \partial _t u \right ]\psi _\Omega (v)}\\ & = \left ( \partial _t \rho - \frac{\rho}{\sigma} u \cdot \partial _t u \right ) \intv{\psi _\Omega (v) M_u (v)} \nonumber \\ & + \frac{\rho}{\sigma} \intv{\chi _\Omega M_u (v) [v - (v \cdot \Omega) \Omega + (v \cdot \Omega) \Omega]} \cdot \partial _t u \nonumber \\ & = \frac{\rho}{\sigma} c_{\parallel, 1} \Omega \cdot \partial _t u \nonumber \end{align} where \[ c_{\parallel, 1} = \intv{(v \cdot \Omega) \psi _\Omega (v) M_u (v)} = \intv{(v - u) \cdot \Omega \;\psi _\Omega M_u } = \sigma \intv{|\nabla _v \psi _\Omega |^2 M_u (v)} >0. 
\] Similarly, observe that \begin{align} \label{Equ62} & \intv{(v \cdot \nabla _x f ) \psi _\Omega } = \intv{\left [v \cdot \nabla _x \rho + \frac{\rho}{\sigma} \partial _x u v \cdot (v-u) \right ] M_u (v)\psi _\Omega (v)}\\ & \intv{(v \cdot \Omega) \psi _\Omega (v) M_u (v) } \;( \Omega \cdot \nabla _x \rho) + \frac{\rho}{\sigma} \intv{\psi _\Omega (v) M_u (v) (v-u) \otimes v } : \partial _x u \nonumber \\ & = c_{\parallel, 1} \Omega \cdot \nabla _x \rho + \frac{\rho}{\sigma} \intv{\psi _\Omega M_u \{ [ v - (v \cdot \Omega) \Omega ] \otimes [v - (v \cdot \Omega) \Omega ] + (v \cdot \Omega)^2 \Omega \otimes \Omega\}}: \partial _x u \nonumber \\ & - \frac{\rho}{\sigma} \intv{\psi _\Omega (v) M_u (v) v } \cdot \;^t \partial _x u u = c_{\parallel, 1} \Omega \cdot \nabla _x \rho \nonumber \\ & + \frac{\rho}{\sigma} \left \{ \intv{\psi _\Omega M_u \frac{|v|^2 - (v \cdot \Omega)^2}{d-1} } \;(I_d - \Omega \otimes \Omega) + \intv{(v \cdot \Omega)^2 \psi _\Omega M_u } \;\Omega \otimes \Omega \right \}: \partial _x u \nonumber \\ & - \frac{\rho}{\sigma} c_{\parallel, 1} \Omega \cdot \;^t \partial _x u u \nonumber \\ & = c_{\parallel, 1} \Omega \cdot \nabla _x \rho + \frac{\rho}{\sigma} ( 2 c_{\parallel, 2} - |u| c_{\parallel, 1} ) \Omega \otimes \Omega : \partial _x u + \frac{\rho}{\sigma} c_{\parallel, 3} (I_d - \Omega \otimes \Omega) : \partial _x u \nonumber \\ & = c_{\parallel, 1} \Omega \cdot \nabla _x \rho + \frac{\rho}{\sigma} \frac{c_{\parallel, 2}}{|u|} ( \Omega \cdot \partial _x u u ) + \frac{\rho}{\sigma} \left ( \frac{c_{\parallel, 2}}{|u|} - c_{\parallel, 1} \right ) \left ( \Omega \cdot \nabla _x \frac{|u|^2}{2} \right ) + \frac{\rho}{\sigma} c_{\parallel, 3} |u|\Divx \Omega \nonumber \end{align} where \[ c_{\parallel, 2} = \intv{\frac{(v \cdot \Omega) ^2}{2} \psi _\Omega (v) M_u (v)},\;\;c_{\parallel, 3} = \intv{\frac{|v|^2 - (v \cdot \Omega) ^2}{d-1} \psi _\Omega (v) M_u (v)}. \] In the above computations we have used the identity $(I_d - \Omega \otimes \Omega) : \partial _x u = |u| \Divx \Omega$. Combining \eqref{Equ61}, \eqref{Equ62} leads to \begin{equation} \label{Equ63} \Omega \cdot \partial _t u + \sigma \Omega \cdot \frac{\nabla _x \rho}{\rho} + c_\parallel ( \Omega \cdot (u\cdot \partial _x) u ) + ( c_\parallel - 1) \left ( \Omega \cdot \nabla _x \frac{|u|^2}{2} \right ) + c_\parallel ^\prime |u|^2 \Divx \Omega = 0 \end{equation} where $c_\parallel = \frac{c_{\parallel, 2}}{|u| c_{\parallel, 1}}, \;c_\parallel ^\prime= \frac{c_{\parallel, 3}}{|u| c_{\parallel, 1}}$. Finally we deduce from \eqref{Equ55}, \eqref{Equ63} the balance for the mean velocity $u$ \begin{align} \partial _t u & + c_\perp (I_d - \Omega \otimes \Omega) \partial _x u u + c_\parallel ( \Omega \otimes \Omega) (u\cdot \partial _x) u + (c_\perp - 1) (I_d - \Omega \otimes \Omega) \nabla _x \frac{|u|^2}{2} \nonumber \\ & + (c _\parallel - 1) (\Omega \otimes \Omega )\nabla _x \frac{|u|^2}{2} + \sigma \frac{\nabla _x \rho }{\rho} + c^\prime _\parallel \Divx \Omega |u| u = 0.\nonumber \end{align} \end{proof} \begin{remark} \label{Maxwellian} When $V = 0$, the equilibria are Maxwellians parametrized by $\rho \in \R_+$ and $u \in \R^d$ \[ M_u (v) = \frac{\rho}{(2\pi \sigma) ^{d/2}} \exp \left ( - \frac{|v-u|^2}{2\sigma} \right ),\;\;v \in \R^d. \] In that case the function $l \to Z(\sigma, l)$ is constant \[ Z(\sigma, l) = \intv{\exp \left ( - \frac{|v-u|^2}{2\sigma} \right )} = (2\pi \sigma ) ^{d/2},\;\;l \in \R_+ ^\star. 
\] It is easily seen that the solution of \[ - \sigma \Divv \{ M_u \partial _v F\} = (v-u) M_u (v),\;\;v \in \R^d,\;\;\intv{M_u (v) F(v) } = 0 \] is $F(v) = v-u, v \in \R^d$, which belongs to $\bthomu$, and therefore the functions $\psi, \psi _\Omega$ such that \[ F(v) = \psi (v) \frac{v - (v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega)^2}} + \psi _\Omega (v) \Omega,\;\;v \in \R^d \setminus ( \R \Omega) \] are given by \[ \psi (v) = (v-u) \cdot \frac{v - (v \cdot \Omega) \Omega}{\sqrt{|v|^2 - (v \cdot \Omega)^2}} = \sqrt{|v|^2 - (v \cdot \Omega)^2},\;\;\psi _\Omega (v) = (v - u) \cdot \Omega,\;\;v \in \R^d \] and $\psi _{E_i}(v) = F(v) \cdot E_i = (v \cdot E_i), v \in \R^d, 1\leq i \leq d-1$. \end{remark} By straightforward computations we obtain \[ c_{\perp, 1} = \intv{\frac{|v|^2 - (v \cdot \Omega) ^2}{d-1} M_u } = \intv{(v\cdot E_1) ^2 M_u } = - \sigma \intv{(v \cdot E_1) (\nabla _v M_u \cdot E_1) } = \sigma \] \begin{align*} c_{\perp, 2} & = \intv{(v \cdot \Omega) \frac{|v|^2 - (v \cdot \Omega) ^2}{d-1} M_u } = \intv{(v \cdot \Omega) (v\cdot E_1) ^2 M_u } \\ & =- \sigma \intv{(v \cdot \Omega) ( v \cdot E_1) \Divv (M_u E_1) } = \sigma \intv{M_u (v)E_1 \cdot [ ( v \cdot E_1) \Omega + (v \cdot \Omega) E_1]} \\ & = \sigma |u|. \end{align*} \[ c_\perp = \frac{c_{\perp, 2}}{|u|c_{\perp, 1}} = 1 \] \[ c_{\parallel, 1} = \sigma \intv{|\nabla _v \psi _\Omega|^2 M_u (v)} = \sigma \] \[ c_{\parallel, 2} = \intv{\frac{(v \cdot \Omega)^2}{2} \psi _\Omega M_u} = - \sigma \intv{\frac{(v \cdot \Omega)^2}{2}\Divv ( M_u \Omega) } = \sigma \intv{(v \cdot \Omega) M_u } = \sigma |u| \] \[ c_\parallel = \frac{c_{\parallel, 2}}{|u|c_{\parallel, 1}} = 1 \] \begin{align*} c_{\parallel, 3} & = \intv{\frac{|v|^2 - (v \cdot \Omega) ^2}{d-1} \psi _\Omega M_u } = \intv{(v \cdot E_1) ^2 \psi _\Omega M_u } = - \sigma \intv{(v \cdot E_1)^2 \Divv ( M_u \Omega) } \\ & = 2\sigma \intv{(v\cdot E_1) (E_1 \cdot \Omega) M_u } = 0. \end{align*} In this case \eqref{Equ57}, \eqref{Equ58} are the Euler equations, as expected when taking the limit $\eps \searrow 0$ in the Fokker-Planck equations \[ \partial _t \fe + v \cdot \nabla _x \fe = \frac{1}{\eps} \Divv \{ \sigma \nabla _v \fe + \fe ( v - u[\fe])\},\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d \] that is \[ \partial _t \rho + \Divx (\rho u) = 0,\;\;\partial _t u + \partial _x u u + \sigma \frac{\nabla _x \rho}{\rho} = 0,\;\;(t,x) \in \R_+ \times \R^d. \] \section{Examples} \label{Example} We analyze now the potentials $v \mapsto \vab = \beta \frac{|v|^4}{4} - \alpha \frac{|v|^2}{2}$. Clearly the hypothesis \eqref{Equ11} is satisfied, and thus the function $Z(\sigma, |u|) = \intv{\exp \left ( - \frac{|v-u|^2}{2\sigma} - \frac{\vab{}}{\sigma} \right )}$ is well defined. As seen in Section \ref{PropEqui}, the sign of $\partial _l Z(\sigma, l)$, for small $\sigma >0$, depends on the sign of $V^\prime _{\alpha, \beta}$. The potential $V_{\alpha, \beta}$ satisfy \eqref{Equ15} with $r _0 = \sqrt{\alpha/\beta}$ \[ V^\prime _{\alpha, \beta} (r) = r ( \beta r^2 - \alpha )<0\;\mbox{ for } \;0 < r < \sqrt{\alpha/\beta}\;\mbox{ and } V^\prime _{\alpha, \beta } (r) >0 \;\mbox{ for any } r > \sqrt{\alpha/\beta}. \] One can check that these potentials belong to the family $\mathcal{V}$, see \cite{Li19}. We include an example $V_{1,1} (|v|) = \frac{|v|^4}{4} - \frac{|v|^2}{2}$ for the sake of completeness. In this case the critical diffusion can be computed explicitly. \begin{pro} Consider the potential $v \mapsto V_{1,1}(|v|) = \frac{|v|^4}{4} - \frac{|v|^2}{2}$. 
The critical diffusion $\sigma _0 $ writes \[ \sigma _0 ^{1/2} = \frac{1}{d} \frac{\int_{\R_+} \exp( - z^4/4) z^{d+1}\;\md z}{\int_{\R_+} \exp( - z^4/4) z^{d-1}\;\md z},\;\;d \geq 2. \] In particular, for $d = 2$ we have $\sigma _0 = 1/\pi$. \end{pro} \begin{proof} We have \[ \Phiu (v) = \frac{|v-u|^2}{2} + V_{1,1} (|v|) = \frac{|v|^4}{4} - v \cdot u + \frac{|u|^2}{2} \] and therefore \begin{align*} Z(\sigma, l) & = \intv{\exp\left ( - \frac{\Phiu (v)}{\sigma} \right ) }= |\mathbb{S} ^{d-2}| \exp \left ( - \frac{l^2}{2\sigma} \right ) \\ & \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d-1} \intth{\exp \left ( \frac{r l \cos \theta}{\sigma} \right ) \sin ^{d-2} \theta }\md r. \end{align*} Taking the second derivative with respect to $l$ one gets cf. Remark \ref{Modulus} \begin{align*} \partial _{ll} ^2 Z(\sigma, l) & = \intv{\exp \left ( - \frac{|v|^4}{4\sigma} + \frac{v \cdot u }{\sigma} - \frac{l^2}{2\sigma} \right ) \frac{(v \cdot \Omega - l) ^2 - \sigma}{\sigma ^2} } = |\mathbb{S} ^{d-2}| \exp \left ( - \frac{l^2}{2\sigma} \right ) \\ & \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d-1} \intth{\exp \left ( \frac{r l \cos \theta}{\sigma} \right ) \frac{(r \cos \theta - l)^2 - \sigma }{\sigma ^2}\sin ^{d-2} \theta }\md r \end{align*} and therefore \begin{align*} \partial _{ll} ^2 Z(\sigma, 0) & = \frac{|\mathbb{S} ^{d-2}|}{\sigma ^2} \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d+1} \;\md r \intth{\cos ^2 \theta \sin ^{d-2} \theta } \\ & - \frac{|\mathbb{S} ^{d-2}|}{\sigma}\int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d-1} \;\md r \intth{ \theta \sin ^{d-2} \theta }. \end{align*} It is easily seen that \begin{align*} \intth{\cos ^2 \theta \sin ^{d-2} \theta } & = \intth{\sin ^{d-2} \theta} + \intth{\cos ^\prime \theta \sin ^{d-1} \theta } \\ & = \intth{\sin ^{d-2}\theta} - (d-1) \intth{\cos ^2 \theta \sin ^{d-2} \theta } \end{align*} and thus \[ \intth{\cos ^2 \theta \sin ^{d-2} \theta } = \frac{1}{d} \intth{\sin ^{d-2} \theta},\;\;d \geq 2. \] We obtain the following expression for the second derivative $\partial _{ll} ^2 Z(\sigma, 0)$ \[ \frac{\partial _{ll} ^2 Z(\sigma, 0)}{|\mathbb{S} ^{d-2}|} = \frac{\intth{\sin ^{d-2} \theta }}{\sigma ^2} \left \{ \frac{1}{d} \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d+1} \;\md r - \sigma \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d-1} \;\md r \right \}. \] Using the change of variable $r = \sigma ^{1/4}z$, we have \[ \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d+1} \;\md r = \int _{\R_+} \exp \left ( - \frac{z^4}{4} \right ) z^{d+1} \;\md z \;\sigma ^{\frac{d+2}{4}} \] \[ \int _{\R_+} \exp \left ( - \frac{r^4}{4\sigma} \right ) r^{d-1} \;\md r = \int _{\R_+} \exp \left ( - \frac{z^4}{4} \right ) z^{d-1} \;\md z \;\sigma ^{\frac{d}{4}} \] and thus $\partial _{ll} ^2 Z(\sigma, 0) >0$ iff \[ \sigma ^{1/2} < \frac{1}{d} \frac{\int _{\R_+} \exp \left ( - \frac{z^4}{4} \right ) z^{d+1} \;\md z}{\int _{\R_+} \exp \left ( - \frac{z^4}{4} \right ) z^{d-1} \;\md z}. \] The critical diffusion $\sigma _0$ is, cf. Proposition \ref{CriticalDiffusion} \[ \sigma _0 ^{1/2} = \frac{1}{d} \frac{\int_{\R_+} \exp( - z^4/4) z^{d+1}\;\md z}{\int_{\R_+} \exp( - z^4/4) z^{d-1}\;\md z},\;\;d \geq 2. 
\] In particular, when $d = 2$, we obtain \[ \int_{\R_+} \exp( - z^4/4) z^{3}\;\md z = \int_{\R_+} \exp( - z^4/4)\; \md \frac{z^4}{4} = \int _{\R_+} \exp (-s) \;\md s = 1 \] and \begin{align*} \int_{\R_+} \exp( - z^4/4) z\;\md z =\int_{\R_+} \exp( - z^4/4)\; \md \frac{z^2}{2} = \int _{\R_+} \exp (-s^2) \;\md s = \frac{\sqrt{\pi}}{2} \end{align*} implying that $\sigma _0 = 1/\pi$. \end{proof} \subsection*{Acknowledgments} This work was initiated during the visit of MB to Imperial College London with an ICL-CNRS Fellowship. MB was supported by the French Federation for Magnetic Fusion Studies (FR-FCM) and the Eurofusion consortium, and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission. MB was supported by the A*MIDEX project (no ANR-11-IDEX-0001-02) funded by the Investissements d'Avenir French Government program, managed by the French National Research Agency (ANR). JAC was partially supported by the EPSRC grant number EP/P031587/1. \end{document}
\begin{document} \title{Hausdorff Dimension of the Record Set of a Fractional Brownian Motion} \begin{abstract} We prove that the Hausdorff dimension of the record set of a fractional Brownian motion with Hurst parameter $H$ equals $H$.\xdef\@thefnmark{}\@footnotetext{\textit{AMS 2010 subject classifications}. 60G15, 60G17, 60G18, 28A78, 28A80.} \xdef\@thefnmark{}\@footnotetext{\textbf{Key words and phrases}. Fractional Brownian motion, record set, Hausdorff dimension.} \end{abstract} \section{Introduction} The statistics of records has been studied in both the physics and mathematics literature, see for example \cite{majumdar2008universal,godreche2017record,feller1966introduction,glick1978breaking,wergen2011record,LeDoussalWiese2008a,arnold1998records}. The record set (denoted Rec) of a random process $X_{t}$ is the set of times $s$ at which $X_{s}=\max_{\left[0,s\right]}X_{t}$. One of the most basic properties of this set is the number of records occurring during a certain time interval. This problem has been well studied for discrete processes such as sequences of i.i.d. random variables \cite{arnold1998records,glick1978breaking} or random walks on $\mathbb{R}$ \cite{andersen1954fluctuations,feller1966introduction}. However, when considering continuous processes (e.g., the Brownian motion) the question is ill-defined. Indeed, an interval will typically contain either zero or infinitely many records. In these cases, a natural way to quantify the size of the record set is to evaluate its Hausdorff dimension. For the Brownian motion, it is shown in \cite{morters2010brownian} that this dimension is $\frac{1}{2}.$ The fractional Brownian motion (fBm) is a continuous Gaussian process $X_{t}$, depending on a parameter $H\in\left(0,1\right)$ called the Hurst index. It has expected value $0$ and covariances given by \[ \left\langle X_{t}X_{s}\right\rangle =\frac{1}{2}\left(\left|t\right|^{2H}+\left|s\right|^{2H}-\left|t-s\right|^{2H}\right). \] The fBm is scale-invariant, namely $\left(a^{-H} X_{at}\right)_{t\ge0}$ has the same law as $\left(X_t\right)_{t\ge0}$ for all $a>0$. We emphasize that, even though in general this process could be defined also for $H=1$, we will only consider here $H$ strictly smaller than $1$. The fractal properties of fBm have been studied extensively (see \cite{adler1981geometry,xiao2013fractal}). In this paper, we show that the Hausdorff dimension of its record set is $H$. \begin{figure} \caption{Two simulations of fBm with different Hurst indices, together with their running maxima and record sets.} \label{fig:fBmswithmaxandrecs} \end{figure} \section{Heuristics} To find the dimension of the record set, first fix a small $\varepsilon>0$ and divide the time interval $\left[0,1\right]$ into $N_{\varepsilon}=\frac{1}{\varepsilon}$ small boxes, each of diameter $\varepsilon$. We will be interested in finding the number $M_{\varepsilon}$ of boxes in which a record has occurred. To do so, we first compute the probability of finding a record during the time interval $\left[\left(n-1\right)\varepsilon,n\varepsilon\right]$. Stated differently, this is the probability that the maximum of $X_{t}$ in $\left[0,n\varepsilon\right]$ is attained in the time interval $\left[\left(n-1\right)\varepsilon,n\varepsilon\right]$. By time reversal symmetry this is the same as the probability that the maximum of $X_t$ during $\left[0,n\varepsilon\right]$ is attained in $\left[0,\varepsilon\right]$. 
Since the maximum during the time interval $\left[0,\varepsilon\right]$ scales like $\varepsilon^{H}$, following \cite{delorme2016perturbative}, we claim that this probability is controlled by the probability that $\max_{[0,n\varepsilon]} X_{t}$ is of order $\varepsilon^{H}$. That probability, as shown in \cite{aurzada2011one}, scales like $\left(\varepsilon^{H}\right)^{\frac{1-H}{H}} = \varepsilon ^{1-H}$. Summing up the argument so far, we get: \begin{eqnarray*} \mathbb{P}\left[\text{Rec}\cap\left[\left(n-1\right)\varepsilon,n\varepsilon\right]\neq\emptyset\right] & = & \mathbb{P}\left[\max_{\left[0,n\varepsilon\right]}X_{t}\text{ is attained during }\left[(n-1)\varepsilon,n\varepsilon\right]\right]\\ & = & \mathbb{P}\left[\max_{\left[0,n\varepsilon\right]}X_{t}\text{ is attained during }\left[0,\varepsilon\right]\right]\\ & \approx & \mathbb{P}\left[\max_{\left[0,n\varepsilon\right]}X_{t}\text{ is of order }\varepsilon^{H}\right]\\ & \approx & \varepsilon^{1-H}. \end{eqnarray*} Thus, the expected number of boxes containing a record scales as $M_{\varepsilon}\approx N_{\varepsilon}\varepsilon^{1-H}=\varepsilon^{-H}$, suggesting a fractal dimension $H$ of the record set. This scaling is verified numerically in figure \ref{fig:numerics}, as well as in \cite{aliakbari2017records}. \begin{figure} \caption{Numerical verification of the scaling $M_{\varepsilon}\approx\varepsilon^{-H}$ of the number of boxes containing a record: (a) the case $H=0.75$\label{fig:H075}; (b) the estimated dimension of the record set as a function of $H$\label{fig:DimvsH}.\label{fig:numerics}} \end{figure} \section{Notation and Presentation of the Result} We start by presenting some notations and definitions that will be used throughout the proof. In order to give the definition of the Hausdorff dimension we first define the $\alpha$-value of a covering: \begin{defn} Let $E$ be a metric space, $\mathfrak{E}=\{E_1, E_2\dots\}$ a covering of $E$, and $\alpha\geqslant 0$. Then the $\alpha$\emph{-value} of $\mathfrak{E}$ is $$\mathcal{S}_\alpha(\mathfrak{E}):=\sum_{i=1}^\infty\mathrm{Diam}(E_i)^\alpha.$$ \end{defn} We can now define the Hausdorff dimension of a set. \begin{defn} Let $E$ be a metric space, and for every $\alpha\geqslant 0$ consider the $\alpha$-\emph{Hausdorff measure} of $E$: $$\mathcal{H}_\alpha(E):=\lim_{\delta \downarrow 0}\inf\left\{\mathcal{S}_\alpha(\mathfrak{E}),\,\, \mathfrak{E}\text{ a covering of }E\text{ and }\mathrm{Diam}(E_i)\leqslant \delta\right\}.$$ Then the \emph{Hausdorff dimension} of $E$ is $$\mathrm{Dim}(E)=\inf\left\{\alpha\geqslant 0, \mathcal{H}_{\alpha}(E)<\infty\right\}=\sup\left\{\alpha\geqslant 0,\mathcal{H}_\alpha(E)=\infty\right\}.$$ \end{defn} ~\\ Recall the definition of the record set $$\mathrm{Rec}=\left\{t\geqslant 0, X_t=\max_{s\in[0,t]}X_s\right\}.$$ The main result we present here is: \begin{thm} \label{main_theorem} \begin{equation} \label{eq:dim_rec} \mathrm{Dim}(\mathrm{Rec})=H \ a.s. \end{equation} \end{thm} Finally, we will prove the following corollary, describing the scaling for the fBm equivalent of the third arcsine law (for results in the physics literature beyond the asymptotics see \cite{SadhuDelormeWiese2017}): \begin{cor} \label{cor} For all $\delta>0$ there exists $\varepsilon_0>0$, such that for all positive $\varepsilon<\varepsilon_0$, \begin{equation} \label{eq:cor} \varepsilon^{1-H+\delta}\le\mathbb{P}\left(\mathrm{Argmax}_{\left[0,1\right]}X_{t}\in\left[0,\varepsilon\right]\right)\le\varepsilon^{1-H-\delta}. \end{equation} \end{cor} \section{Proof of the Result} We will follow the proof from \cite{morters2010brownian}, in which the Hausdorff dimension of the record set is found for the (non-fractional) Brownian motion. 
The main difference comes from the non-Markovian behavior of the fBm, a difficulty that we will control with Lemma \ref{lem:probofrec} below. First, we will get a lower bound on the record set dimension using Lemma 4.21 of \cite{morters2010brownian}: \begin{prop}\label{Holder} Let $f:\,[0,1]\rightarrow \mathbb{R}$, $\alpha$-H\"older continuous, whose maximum is not attained at $0$. Then the Hausdorff dimension of its record set is greater or equal to $\alpha$. \end{prop} For the other bound, we will use the following result, essentially proven in \cite{morters2010brownian}: \begin{prop}\label{Prop6} Let $A\subset [0,1]$ be a random set and $\vartheta>0$ such that for all $b>0$, there exists $C_b>0$ and a sequence of positive numbers $\varepsilon_k$, converging to zero and satisfying \begin{equation} \forall a\geqslant b,\,\,\,\mathds{P}\left[A\cap [a,a+\varepsilon_k]\neq \emptyset\right]\leqslant C_b\varepsilon_k^{1-\vartheta}.\label{eqassum} \end{equation} Then, almost surely, \begin{equation}\label{eqres1} \mathrm{Dim}(A)\leqslant \vartheta. \end{equation} \end{prop} For completeness, we present here the proof: \begin{proof} Let $b>0$, we will show that $\mathrm{Dim}(A\cap[b,1])\leqslant \vartheta$ using assumption \ref{eqassum}. The result will then follow by the countable stability of the Hausdorff dimension.\\ \indent In order to get an upper bound on the dimension, it is enough to find a family of coverings of $[b,1]\cap A$ with diameter going to zero such that the $\vartheta$-value of each covering is finite. To construct such a covering, consider, for $k\in\mathbb{N}$, $$N_k=\sup_{j\in\mathbb{N}}\{b+j\varepsilon_k\leqslant 1\}.$$ Denote, for $j=0\dots N_k-1,$ $I_j:=b+[j\varepsilon_k,(j+1)\varepsilon_k]$ and consider the collection of intervals: $$\mathfrak{I}_k=\{I_j,\,I_j\cap A\ \neq \emptyset\}\cup \{[b+N_k\varepsilon_k,1]\}.$$ \indent Let $j\in\{0,\dots,N_k-1\}$. Taking $a=b+j\varepsilon_k$ in the assumption \ref{eqassum}, we have \begin{equation} \mathds{P}[A\cap I_j\neq\emptyset]\leqslant C_b\varepsilon_k^{1-\vartheta}. \end{equation} Therefore the covering $\mathfrak{I}_k$ of $A\cap[b,1]$ has an expected $\vartheta$-value of \begin{align*} \mathds{E}\left[\mathcal{S}_\vartheta(\mathfrak{I}_k) \right]=\sum_{j=0}^{N_k-1}\mathds{P}[A\cap {I_j}\neq\emptyset]\varepsilon_k^{\vartheta}+\left(1-b-N_k\varepsilon_k\right)^{\vartheta} \leqslant 2C_b. \end{align*} Using Fatou's lemma, we have \begin{equation} \mathds{E}\left[\liminf_{k\rightarrow\infty}\mathcal{S}_\vartheta(\mathfrak{I}_k)\right]\leqslant \liminf_{k\rightarrow\infty}\mathds{E}\left[\mathcal{S}_\vartheta(\mathfrak{I}_k)\right]\leqslant 2C_b. \end{equation} Hence, the liminf is almost surely finite. In particular, there exists a family of coverings whose diameter is going to zero with bounded $\vartheta$-value, and we can conclude that almost surely \begin{equation} \mathrm{Dim}(A\cap[b,1])\leqslant \vartheta. \end{equation} \end{proof} To use this proposition, we will need the following lemma: \begin{lem} \label{lem:probofrec} For all $\delta>0$ and $b>0$, there exists a constant $C(b,\delta,H)>0$, such that, for small enough $\varepsilon >0$, \begin{equation} \forall a\geq b, \quad \mathbb{P}\left[\mathrm{Rec}\cap\left[a,a+\varepsilon\right]\neq\emptyset\right]\le C\,\varepsilon^{1-H-\delta}. 
\end{equation} \end{lem} \begin{proof} We begin by introducing two inequalities concerning the supremum of $X$: \begin{enumerate}[label = (\roman*)] \item For all $\delta'>0$, there exists a constant $M=M(\delta',H)>0$, such that, for small enough $u>0$: \begin{equation} \label{eq:ineq1} \mathbb{P} [ \sup_{0\leq t \leq 1} X_t \leq u] \leq M \, u^{\frac{1-H-\delta'}{H}} . \end{equation} \item There exists a constant $M'=M'(H)>0$, such that, for large enough $v$, we have \begin{equation} \label{eq:ineq2} \mathbb{P} [ \sup_{0\leq t \leq 1} X_t > v] \leq M' v^{1/H} \Psi(v), \end{equation} where $\Psi(v) = \mathbb{P}(N>v)$ for $N$ a standard normal random variable. \end{enumerate} The first inequality is a weak consequence of Corollary 2 in \cite{aurzada2011one}. The statement of the second inequality can be found in Theorem D.4 of \cite{piterbarg2012asymptotic} (see also \cite{adler2009random}). Let $\delta,$ $\varepsilon$ and $\theta$ be three positive real numbers, with $\theta$ satisfying $\theta <H$. By time reversal symmetry, the process $t \mapsto \tilde{X}_t = X_{a+\varepsilon-t} - X_{a+\varepsilon}$ is again a fractional Brownian motion starting at $0$ with Hurst index $H$. Hence, \begin{align*} \mathbb{P}\left[\mathrm{Rec}\cap [a,a+\varepsilon]\neq\emptyset\right] &= \mathbb{P}\left[\sup_{[0,a+\varepsilon]} {X}_t = \sup_{[a,a+\varepsilon]} {X}_t \right] = \mathbb{P}\left[\sup_{[0,a+\varepsilon]} \tilde{X}_t = \sup_{[0,\varepsilon]} \tilde{X}_t \right]\\ &= \mathbb{P}\left[\sup_{[0,a+\varepsilon]} {X}_t = \sup_{[0,\varepsilon]} {X}_t \right]. \end{align*} Decomposing this last term into the two terms \[ A_\varepsilon = \mathbb{P}\left[\sup_{[0,a+\varepsilon]} {X}_t \leq \varepsilon^{H-\theta}, \sup_{[0,a+\varepsilon]} {X}_t = \sup_{[0,\varepsilon]} {X}_t \right],\] \[ B_\varepsilon = \mathbb{P}\left[\sup_{[0,a+\varepsilon]} {X}_t > \varepsilon^{H-\theta} , \sup_{[0,a+\varepsilon]} {X}_t = \sup_{[0,\varepsilon]} {X}_t \right],\] and using the scaling invariance of $X$, we get that \begin{align}\label{eq:Maj_A} A_\varepsilon &\leq \mathbb{P}\left[ \sup_{[0,a+\varepsilon]} \frac{{X}_t}{(a+\varepsilon)^{H}} \leq \frac{\varepsilon^{H-\theta}}{(a+\varepsilon)^{H}} \right] = \mathbb{P}\left[\sup_{[0,1]} {X}_t \leq \frac{\varepsilon^{H-\theta}}{(a+\varepsilon)^{H}} \right]. \end{align} Therefore, for small enough $\varepsilon$, we can apply inequality \ref{eq:ineq1} with a positive parameter $\delta'<1-H$: \begin{align*} A_\varepsilon & \leq M \, \frac{\varepsilon^{1-H-\delta'}}{(a+\varepsilon)^{1-H-\delta'}} \, \varepsilon^{-\theta(1-H-\delta')/H} \leq C_1 \, \varepsilon^{1-H-\delta}, \end{align*} where we now fix $\theta$ and $\delta'$ sufficiently small, chosen so that \[ \delta = \delta' + \theta \frac{1-H-\delta'}{H},\] and where, recalling that $b\leq a$, $C_1$ is defined as: \[C_1(b,\delta,H) = M(\delta',H) \, b^{-1+H+\delta'}.\] We then bound $B_\varepsilon$, using again the scaling invariance and applying inequality \ref{eq:ineq2} for $\varepsilon$ small enough: \begin{align*} B_\varepsilon = \mathbb{P}\left[\sup_{[0,\varepsilon]} {X}_t > \varepsilon^{H-\theta}, \sup_{[0,a+\varepsilon]} {X}_t = \sup_{[0,\varepsilon]} {X}_t \right] &\leq \mathbb{P}\left[\sup_{[0,1]} {X}_t > \varepsilon^{-\theta} \right]\\ & \leq M'(H) \, \varepsilon^{-\theta/H} \Psi(\varepsilon^{-\theta})\\ & \leq C_2 \, \varepsilon^{1-H-\delta}, \end{align*} with $C_2=C_2(\delta,H)$ and where the last inequality is a consequence of the rapid decay of $\Psi$ as $\varepsilon$ tends to $0$. Summing the bounds over $A_\varepsilon$ and $B_\varepsilon$ concludes the proof of the lemma. 
\end{proof} Putting everything together, we are ready to prove Theorem \ref{main_theorem}. \begin{proof}[Proof of Theorem \ref{main_theorem}.] We first prove the lower bound using Proposition \ref{Holder}. Indeed, the sample-paths of the fractional Brownian motion are almost surely $\alpha$-H\"older continuous for any $\alpha<H$ (see Theorem 3.1 of \cite{decreusefond1999stochastic}). Hence, we get that $$\mathrm{Dim}(\mathrm{Rec})\geqslant \alpha$$ for any $\alpha<H$. The lower bound follows by letting $\alpha$ go to $H$.\\ \indent In order to get the upper bound, we use Proposition \ref{Prop6} combined with Lemma \ref{lem:probofrec} to find: $$\mathrm{Dim}(\mathrm{Rec})\leqslant H+\delta$$ for all $\delta>0$. The upper bound follows by letting $\delta$ go to zero. \end{proof} We can now give a proof of Corollary \ref{cor}: \begin{proof} By the time reversibility property of the fractional Brownian motion, we can see that (cf. proof of Lemma \ref{lem:probofrec}): \[ \mathbb{P}\left[\mathrm{Argmax}_{\left[0,1\right]}X_{t}\in\left[0,\varepsilon\right] \right]= \mathbb{P}\left[\mathrm{Rec}\cap [1-\varepsilon,1]\neq\emptyset\right]. \] Therefore, the upper bound in inequality \ref{eq:cor} is a direct consequence of Lemma \ref{lem:probofrec} (taking $\frac{\delta}{2}$ in order to absorb the constant $C$). For the lower bound, we need to show that for all $\delta>0$, there exists $\varepsilon_0>0$ such that \begin{equation} \forall \varepsilon < \varepsilon_0,\quad\varepsilon^{1-H+\delta}\le \mathbb{P}\left[\mathrm{Rec}\cap [1-\varepsilon,1]\neq\emptyset\right]. \end{equation} Reasoning by contradiction, let $\delta >0$ and $(\varepsilon_k)_{k\geq 0}$ be such that $\varepsilon_k \to 0$ and \begin{equation} \label{eq:absurd_ineq} \mathbb{P}\left[\mathrm{Rec}\cap [1-\varepsilon_k,1]\neq\emptyset\right]< \varepsilon_k^{1-H+\delta}. \end{equation} Let $b>0$, $a\geq b$, and $\varepsilon'_k$ to be chosen later on. Consider the rescaled process $t\mapsto Y_t= \left({a+\varepsilon'_k}\right)^{-H} X_{({a+\varepsilon'_k})t}$. By scaling invariance, $Y$ is a fractional Brownian motion of Hurst index $H$ whose record set $\mathrm{Rec}(Y)$ on $[0,1]$ is the rescaled record set of $X$. Hence, \begin{align*} \mathbb{P}\left[\mathrm{Rec}(X)\cap [a,a+\varepsilon'_k]\neq\emptyset\right] &= \mathbb{P}\left[\mathrm{Rec}(Y)\cap [1 - \frac{\varepsilon'_k}{a+\varepsilon'_k},1]\neq\emptyset\right]\\ & \leq \mathbb{P}\left[\mathrm{Rec}\cap [1 - \frac{\varepsilon'_k}{b+\varepsilon'_k},1]\neq\emptyset\right]. \end{align*} Choosing $\varepsilon'_k=\frac{b\,\varepsilon_k}{1-\varepsilon_k}$, so that $\frac{\varepsilon'_k}{b+\varepsilon'_k}=\varepsilon_k$, inequality \ref{eq:absurd_ineq} yields: \[ \mathbb{P}\left[\mathrm{Rec}(X)\cap [a,a+\varepsilon'_k]\neq\emptyset\right] \leq b^{-1+H-\delta} \, {\varepsilon'}_k^{1-(H-\delta)}. \] This is exactly the assumption of Proposition \ref{Prop6}, thus \[\mathrm{Dim}(\mathrm{Rec}) \leq H-\delta,\] in contradiction with Theorem \ref{main_theorem}. \end{proof} \section{Further Questions} There are various topics for further research concerning the record statistics of continuous processes. For example, one may study the duration of the longest record or the waiting time for a first record to occur after some fixed positive time. It could also be interesting to study non-Gaussian or non-stationary processes. Another question would be to extend the study of records to fields of higher dimensions (both in space and in time), given an appropriate order on these spaces. 
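The box-counting heuristics of Section 2 is also easy to probe numerically. The short Python sketch below is only an illustration (it is not the code used for figure \ref{fig:numerics}); the time grid, the sample size and the Cholesky factorization of the fBm covariance are choices made here for simplicity. It counts the boxes of $[0,1]$ that contain a record, and the counts are expected to grow roughly like the number of boxes raised to the power $H$, in agreement with $M_{\varepsilon}\approx\varepsilon^{-H}$.
\begin{verbatim}
import numpy as np

def fbm_cholesky(n, H, T=1.0):
    # Cholesky factor of the fBm covariance at times t_k = k*T/n, k = 1..n
    t = T * np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n))

def record_box_count(x, n_boxes):
    # number of boxes of the time interval containing at least one record time
    records = x >= np.maximum.accumulate(x)
    boxes = (np.arange(len(x)) * n_boxes) // len(x)
    return np.unique(boxes[records]).size

H, n, n_samples = 0.75, 2000, 50
L = fbm_cholesky(n, H)
rng = np.random.default_rng(0)
paths = L @ rng.standard_normal((n, n_samples))    # one fBm path per column
for n_boxes in (10, 20, 40, 80, 160):
    m = np.mean([record_box_count(paths[:, j], n_boxes) for j in range(n_samples)])
    print(n_boxes, m)    # expected to grow roughly like n_boxes**H
\end{verbatim}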
\section*{Acknowledgments} We thank Mathieu Delorme for providing the Python code used to generate the fBm samples in the numerical verification. We also thank Francis Comets for careful reading and helpful comments. \Addresses \end{document}
\begin{document} \title{Hyperentanglement-assisted Bell-state analysis} \author{S. P. Walborn} \email[]{[email protected]} \affiliation{Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, MG 30123-970, Brazil} \author{S. P\'adua} \affiliation{Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, MG 30123-970, Brazil} \author{C. H. Monken} \affiliation{Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, MG 30123-970, Brazil} \date{\today} \begin{abstract} We propose a simple scheme for complete Bell-state measurement of photons using hyperentangled states - entangled in multiple degrees of freedom. In addition to hyperentanglement, our scheme requires only linear optics and single photon detectors, and is realizable with current technology. At the cost of additional classical communication, our Bell-state measurement can be implemented non-locally. We discuss the possible application of these results to quantum dense coding and quantum teleportation. \end{abstract} \pacs{03.67.-a,03.67.Hk,42.50.-p} \maketitle \section{Introduction} Bell-state measurement (BSM) - distinguishing between the four maximally-entangled Bell states - is required in many quantum information schemes, including quantum dense coding \cite{bennett92a,mattle96}, quantum teleportation \cite{bennett93,bouwmeester97,boschi98} and entanglement swapping \cite{bennett93,pan98,jennewein02}. However, it has been proven that a complete BSM (distinguishing between the four states with 100\% efficiency) is impossible using only linear operations and classical communication \cite{vaidman99,lutkenhaus99,calsamiglia01,ghosh01}. In fact, Ghosh \textit{et al.} \cite{ghosh01} have proven that, if only a single copy is provided, the best one can do is discriminate between two Bell states. Calsamiglia and L\"utkenhaus \cite{calsamiglia01} have shown that the maximum efficiency for a linear Bell-state analyzer is $50\%$. \par A favorable characteristic of the photon as a carrier of quantum information is the relative ease with which entangled photons can be created and transported. Two-photon Bell states are easily generated using spontaneous parametric down-conversion (SPDC) in one of several degrees of freedom \cite{rarity90b,tapster94,kwiat95,kwiat99}. There are several methods for optical Bell-state measurement that allow one to distinguish two \cite{weinfurter94,braunstein95,mattle96,michler96,walborn03b} of the four Bell states (resulting in three classes of states). All of these methods use local or non-local two-photon interference effects at beam splitters. For example, Mattle \textit{et al.} \cite{mattle96} have performed an experimental demonstration of dense coding, in which the Bell states were created in the polarization degrees of freedom of the two photons. When the photons meet at a common beam splitter, the overall bosonic symmetry of the two-photon state requires that photons in the antisymmetric singlet state exit in different output ports, while the symmetric triplet states end up in the same output port \cite{zeilinger94}. Polarization analyzers are then used to further discriminate among the triplet states. Weinfurter has proposed a BSM method using momentum-entangled photons that allows one to distinguish all four Bell states with $25\%$ efficiency \cite{weinfurter94}. It is possible to distinguish among the four Bell states using nonlinear optical processes \cite{paris00,kim00a}; however, with present technology these methods suffer from low efficiency.
There have also been several proposals using two-photon absorption \cite{scully99, delre00}. \par In recent years, some attention has been paid to Bell-state analysis using hyperentangled states \cite{kwiat98a,walborn03b}. By utilizing entanglement in additional auxiliary degrees of freedom, it is possible to perform a complete BSM. Due to the enlarged Hilbert space, this type of complete BSM is not restricted to the efficiency limits presented in \cite{vaidman99,lutkenhaus99,calsamiglia01,ghosh01}. Kwiat and Weinfurter \cite{kwiat98a} have proposed a scheme using photons entangled in polarization and momentum (spatial mode). Their method, which relies on linear optics and two-photon interference effects, requires detectors that distinguish between one- and two-photon detection events. We showed that this requirement on the detectors could be removed if the hyperentangled photons were created by SPDC using a Hermite-Gaussian pump beam \cite{walborn03b}. The Hermite-Gaussian beam used is an odd function of the horizontal transverse spatial coordinate, which inverts the two-photon interference behavior \cite{walborn03a}, allowing for identification of all four Bell states in coincidence detections. \par Here we present a new method for complete Bell-state analysis using hyperentangled states. This scheme differs from others in that it (\textit{i}) does not rely on two-photon interference, (\textit{ii}) does not require detectors sensitive to photon number, and (\textit{iii}) can be implemented non-locally with 2 bits of additional classical communication. In section \ref{sec:hyper} we briefly discuss the creation of hyperentangled states. We present our hyperentangled Bell-state analyzer in section \ref{sec:hbsa}, and discuss the application of these results to quantum information protocols such as dense coding and teleportation. \section{Hyperentanglement} \label{sec:hyper} \begin{figure} \caption{\label{fig:source}} \end{figure} We will work with hyperentangled two-photon states of the form $\ket{\Pi} \otimes \ket{\eta} \equiv \ket{\Pi}\ket{\eta} $. Here $\ket{\Pi}$ and $\ket{\eta}$ are the four-dimensional state vectors describing the polarization and momentum degrees of freedom of the two photons, respectively. In the basis defined by linear horizontal ($H$) and linear vertical ($V$) polarization, the polarization-entangled Bell states are: \begin{subequations} \label{eq:1} \begin{align} \ket{\Psi^{\pm}} & = \frac{1}{\sqrt{2}}(\ket{H}_{1}\ket{V}_{2} \pm \ket{V}_{1}\ket{H}_{2} ) \\ \ket{\Phi^{\pm}} & = \frac{1}{\sqrt{2}}(\ket{H}_{1}\ket{H}_{2} \pm \ket{V}_{1}\ket{V}_{2} ) \end{align} \end{subequations} where $\ket{\sigma}_{j}$ ($\sigma = H,V$) stands for the polarization state of photon $j$. Likewise, the momentum-entangled Bell states are: \begin{subequations} \label{eq:2} \begin{align} \ket{\psi^{\pm}} & = \frac{1}{\sqrt{2}}(\ket{a}_{1}\ket{b}_{2} \pm \ket{b}_{1}\ket{a}_{2} ) \\ \ket{\phi^{\pm}} & = \frac{1}{\sqrt{2}}(\ket{a}_{1}\ket{a}_{2} \pm \ket{b}_{1}\ket{b}_{2} ) \end{align} \end{subequations} where $\ket{a}_{j}$ and $\ket{b}_{j}$ represent different spatial modes of photon $j$. We note here that polarization states are denoted with uppercase letters and momentum states with lowercase letters. \par For our hyperentangled-Bell-state analysis we will consider states of the form $\ket{\Pi}\ket{\psi^{+}}$, where $\ket{\Pi}$ is one of the polarization Bell states (\ref{eq:1}), though one could use any of the states (\ref{eq:2}) with similar results.
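To make these definitions concrete, the following short Python (NumPy) sketch builds the hyperentangled state $\ket{\Phi^{+}}\ket{\psi^{+}}$ from (\ref{eq:1}) and (\ref{eq:2}) and checks that either photon, taken alone, is described by the maximally mixed state $\oper{I}_{4}/4$. The binary encoding ($H,a \mapsto 0$; $V,b \mapsto 1$) and the ordering of the tensor indices (polarizations of photons 1 and 2, then modes of photons 1 and 2) are bookkeeping choices made here only for illustration.
\begin{verbatim}
import numpy as np

# Single-photon basis vectors: polarization {H, V} and spatial mode {a, b}.
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def two_photon(u1, u2, v1, v2, sign):
    # (|u1>_1 |u2>_2 + sign |v1>_1 |v2>_2) / sqrt(2), ordered photon 1 (x) photon 2.
    return (np.kron(u1, u2) + sign * np.kron(v1, v2)) / np.sqrt(2.0)

Phi_plus = two_photon(H, H, V, V, +1)      # polarization Bell state of Eq. (1)
psi_plus = two_photon(a, b, b, a, +1)      # momentum Bell state of Eq. (2)

# Hyperentangled state |Phi+>|psi+>, tensor indices ordered (p1, p2, m1, m2).
state = np.kron(Phi_plus, psi_plus).reshape(2, 2, 2, 2)

# Reduced density matrix of photon 2 (indices p2, m2), tracing out photon 1.
rho_2 = np.einsum('abcd,aecf->bdef', state, state.conj()).reshape(4, 4)

print(np.allclose(rho_2, np.eye(4) / 4.0))  # True: photon 2 alone carries no information
\end{verbatim}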
Recently, it was shown that this type of hyperentangled two-photon state could be used to violate a generalized form of the Greenberger-Horne-Zeilinger theorem \cite{chen03}, and may be useful in creating decoherence-free subspaces \cite{kwiat00}. These states can be generated by means of SPDC in several ways \cite{kwiat97,kwiat99,chen03}. For example, the type-I two-crystal source \cite{kwiat99} emits polarization-entangled photons of the same wavelength around the rim of a cone (FIG. \ref{fig:source}). In this source, crystal 1 emits pairs of (say) $H$-polarized photons and crystal 2 emits pairs of $V$-polarized photons in superimposed emission cones. Phase-matching conditions guarantee that photon pairs are emitted on opposite sides of the cone. If the two-crystal interaction region lies entirely within a coherence volume of the pump laser beam, a polarization- and momentum-entangled state of the form $\ket{\Pi}\ket{\eta}$ can be selected using the two pairs of regions $a_{1}b_{2}$ and $a_{2}b_{1}$. One can adjust the phase so that the momentum state is $\ket{\psi^{+}}$. Half- and quarter-wave plates can be used to switch between the four polarization Bell states \cite{kwiat95}. \section{Bell-state analysis} \label{sec:hbsa} \begin{figure} \caption{\label{fig:BSA_hyp}} \end{figure} The hyperentangled Bell-state analyzer is shown in FIG. \ref{fig:BSA_hyp}. The hyperentangled state $\ket{\Pi}\ket{\psi^{+}}$ is first incident on a set of polarizing beam splitters (PBS), which reflect $H$-polarized photons and transmit $V$-polarized photons. Each PBS performs a controlled-not (\textsc{cnot}) logic operation between the polarization (control) and spatial-mode (target) degrees of freedom \cite{simon02}. If the polarization is $V$, the photon is transmitted and switches modes; if the polarization is $H$, the photon is reflected and remains in the same mode. The complete PBS operation on the polarization and spatial mode of photon $j$ is \begin{align} \ket{H}_{j}\ket{a}_{j} & \longrightarrow \ket{H}_{j}\ket{a}_{j} \nonumber \\ \ket{H}_{j}\ket{b}_{j} & \longrightarrow \ket{H}_{j}\ket{b}_{j} \nonumber \\ \ket{V}_{j}\ket{a}_{j} & \longrightarrow \ket{V}_{j}\ket{b}_{j} \nonumber \\ \ket{V}_{j}\ket{b}_{j} & \longrightarrow \ket{V}_{j}\ket{a}_{j} \label{eq:3} \end{align} It is straightforward to show that the four states $\ket{\Pi}\ket{\psi^{+}}$ transform as \begin{align} \ket{\Psi^{\pm}}\ket{\psi^{+}}& \longrightarrow \ket{\Psi^{\pm}}\ket{\phi^{+}} \nonumber \\ \ket{\Phi^{\pm}}\ket{\psi^{+}}& \longrightarrow \ket{\Phi^{\pm}}\ket{\psi^{+}} \label{eq:4} \end{align} A quick look at (\ref{eq:4}) shows that the PBSs mark the momentum state by performing a polarization-controlled logic operation. The momentum state can then be used to discriminate between the $\ket{\Psi^{\pm}}$ and $\ket{\Phi^{\pm}}$ polarization states: coincidence detections in modes $a_{1}a_{2}$ or $b_{1}b_{2}$ imply states $\ket{\Psi^{\pm}}$, while coincidences in $a_{1}b_{2}$ or $b_{1}a_{2}$ imply states $\ket{\Phi^{\pm}}$. Using additional polarization analyzers (PA in FIG. \ref{fig:BSA_hyp}) oriented at $45^{\circ}$, we can discriminate between the respective $\pm$ states. Specifically, $\ket{\Psi^{+}}$ and $\ket{\Phi^{+}}$ give coincidence counts at the $+45,+45$ or $-45,-45$ output ports, while $\ket{\Psi^{-}}$ and $\ket{\Phi^{-}}$ give coincidence counts at the $-45,+45$ or $+45,-45$ output ports.
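The transformation (\ref{eq:4}) and the resulting coincidence signatures are easy to verify numerically. The Python sketch below applies the mode flip of (\ref{eq:3}) to each photon of the input $\ket{\Pi}\ket{\psi^{+}}$ and prints the accompanying momentum Bell state together with the probabilities of coincidences in the mode pairs $a_{1}a_{2}$, $a_{1}b_{2}$, $b_{1}a_{2}$ and $b_{1}b_{2}$; as in the previous sketch, the binary encoding and the tensor-index ordering are illustrative bookkeeping choices, not part of the scheme itself.
\begin{verbatim}
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def two_photon(u1, u2, v1, v2, sign):
    return (np.kron(u1, u2) + sign * np.kron(v1, v2)) / np.sqrt(2.0)

pol = {'Psi+': two_photon(H, V, V, H, +1), 'Psi-': two_photon(H, V, V, H, -1),
       'Phi+': two_photon(H, H, V, V, +1), 'Phi-': two_photon(H, H, V, V, -1)}
mom = {'psi+': two_photon(a, b, b, a, +1), 'psi-': two_photon(a, b, b, a, -1),
       'phi+': two_photon(a, a, b, b, +1), 'phi-': two_photon(a, a, b, b, -1)}

def pbs_cnot(state):
    # Eq. (3): flip the spatial mode of photon j iff its polarization is V.
    out = state.copy()                    # indices (p1, p2, m1, m2)
    out[1] = state[1, :, ::-1, :]         # photon 1: V controls m1
    out2 = out.copy()
    out2[:, 1] = out[:, 1, :, ::-1]       # photon 2: V controls m2
    return out2

for pname, Pi in pol.items():
    before = np.kron(Pi, mom['psi+']).reshape(2, 2, 2, 2)
    after = pbs_cnot(before)
    # Momentum Bell state accompanying the (unchanged) polarization state:
    overlaps = {k: abs(np.sum(after * np.kron(Pi, v).reshape(2, 2, 2, 2)))
                for k, v in mom.items()}
    mname = max(overlaps, key=overlaps.get)
    # Coincidence probabilities for the mode pairs (a1a2, a1b2, b1a2, b1b2):
    p_modes = np.round(np.sum(after**2, axis=(0, 1)).ravel(), 3)
    print(pname, '-> momentum', mname, ', mode probabilities', p_modes)
\end{verbatim}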
A summary of the detector signatures for the polarization Bell states is given in table \ref{tab:1}. \par We note that all four Bell states are identified in the coincidence basis, with no need for detectors that are sensitive to photon number, and without the use of two-photon interference effects. Furthermore, the BSM can be performed non-locally at the cost of 2 bits of classical communication, since the party measuring system $1$ needs to communicate one of four possible outcomes to the party measuring system $2$ (or vice versa). \begin{table} \caption{\label{tab:1} Detector signatures for polarization Bell states using the momentum state $\ket{\psi^{+}}$ as an ancilla. ``$+$'' $\equiv +45^{\circ}$ and ``$-$'' $\equiv -45^{\circ}$.} \begin{ruledtabular} \begin{tabular}{cc} state & detector signature \\ $ \ket{\Psi^{+}}$ & $A_{1+}A_{2+}$ or $B_{1+}B_{2+}$ or $A_{1-}A_{2-}$ or $B_{1-}B_{2-}$ \\ $ \ket{\Psi^{-}}$ & $A_{1+}A_{2-}$ or $B_{1+}B_{2-}$ or $A_{1-}A_{2+}$ or $B_{1-}B_{2+}$ \\ $ \ket{\Phi^{+}}$ & $A_{1+}B_{2+}$ or $B_{1+}A_{2+}$ or $A_{1-}B_{2-}$ or $B_{1-}A_{2-}$ \\ $ \ket{\Phi^{-}}$ & $A_{1+}B_{2-}$ or $B_{1+}A_{2-}$ or $A_{1-}B_{2+}$ or $B_{1-}A_{2+}$ \\ \end{tabular} \end{ruledtabular} \end{table} \par The hyperentangled Bell-state analyzer is well within the scope of present technology, necessitating only wave plates, polarizing beam splitters and single photon detectors. \par A short analysis of the stability of the scheme proposed here may be in order. In addition to mode overlap at the polarizing beam splitters, the use of the ancillary momentum state $\ket{\psi^{+}}$ requires phase stability between the modes. A phase error $\alpha$ of the form \begin{align} \ket{\psi^{+}} \longrightarrow & \frac{1}{\sqrt{2}}(\ket{a}_{1}\ket{b}_{2} + e^{i \alpha} \ket{b}_{1}\ket{a}_{2} ) \nonumber \\ = & \frac{1}{2}(1+e^{i \alpha})\ket{\psi^{+}} + \frac{1}{2}(1 - e^{i \alpha})\ket{\psi^{-}} \label{eq:5} \end{align} introduces an error in the BSM, since the \textsc{cnot}\ operations with the ancillary momentum state $\ket{\psi^{-}}$ are: \begin{align} \ket{\Psi^{\pm}}\ket{\psi^{-}}& \longrightarrow \ket{\Psi^{\mp}}\ket{\phi^{-}}, \nonumber \\ \ket{\Phi^{\pm}}\ket{\psi^{-}}& \longrightarrow \ket{\Phi^{\mp}}\ket{\psi^{-}}. \label{eq:6} \end{align} The momentum states continue to show the same type of correlation (the polarization states $\ket{\Psi^{\pm}}$ still lead to coincidences in $a_{1}a_{2}$ or $b_{1}b_{2}$, etc.); however, the $\pm$ polarization states have been switched. From (\ref{eq:5}), the probability of obtaining the correct Bell state is thus $|1+\exp(i\alpha)|^2/4 = \cos^2(\alpha/2)$. To avoid bilateral phase errors, we could use the momentum state $\ket{\psi^{-}}$ instead of $\ket{\psi^{+}}$ as an ancilla. It has been shown that $\ket{\psi^{-}}$ is insensitive to collective decoherence \cite{kwiat00}. \par It is also possible to use the polarization degrees of freedom as the ancilla and encode information in the momentum Bell states. A hyperentangled Bell-state analyzer for such an implementation is shown in FIG. \ref{fig:BSA_hyp_mom}. The \textsc{cnot}\ operation is performed by half-wave plates (HWP) aligned at $45^\circ$ in modes $b_{1}$ and $b_{2}$. The beam splitters (BS) and polarizing beam splitters (PBS) separate the four hyperentangled states. The beam splitters transform the input modes as $a\longrightarrow (a + b)/\sqrt{2}$ and $b\longrightarrow (a - b)/\sqrt{2}$, which gives different correlations for the $\pm$ states.
The polarizing beam splitters separate the polarization ancilla states $\ket{\Phi^{+}}$ and $\ket{\Psi^{+}}$ after the \textsc{cnot}\ operation. The detector signatures are shown in table \ref{tab:2}. \begin{figure} \caption{\label{fig:BSA_hyp_mom}} \end{figure} \begin{table} \caption{\label{tab:2} Detector signatures for momentum Bell states using the polarization state $\ket{\Psi^{+}}$ as an ancilla.} \begin{ruledtabular} \begin{tabular}{cc} state & detector signature \\ $ \ket{\psi^{+}}$ & $A_{1H}A_{2H}$ or $B_{1H}B_{2H}$ or $A_{1V}A_{2V}$ or $B_{1V}B_{2V}$ \\ $ \ket{\psi^{-}}$ & $A_{1H}B_{2H}$ or $B_{1H}A_{2H}$ or $A_{1V}B_{2V}$ or $B_{1V}A_{2V}$ \\ $ \ket{\phi^{+}}$ & $A_{1H}A_{2V}$ or $B_{1H}B_{2V}$ or $A_{1V}A_{2H}$ or $B_{1V}B_{2H}$ \\ $ \ket{\phi^{-}}$ & $A_{1H}B_{2V}$ or $B_{1H}A_{2V}$ or $A_{1V}B_{2H}$ or $B_{1V}A_{2H}$ \\ \end{tabular} \end{ruledtabular} \end{table} \par We now discuss the possible application of hyperentangled Bell-state analysis to quantum information protocols. The quantum dense coding protocol \cite{bennett92a} allows for the transmission of two bits of information in one quantum bit. Two parties, Alice and Bob, each possess one photon of an entangled Bell state ($\ket{\Phi^{+}}$, for example). Since the reduced density matrix for Bob's photon is $\oper{I}/2$, where $\oper{I}$ is the $2 \times 2$ identity matrix, there is no information present in Bob's photon alone. Some time later, Alice wishes to send a 2 bit message to Bob. She switches among the four Bell states using local operations, and then sends her photon to Bob, who performs a Bell-state measurement on the photon pair and retrieves Alice's message. Since there was no information present in Bob's photon, the 2 bits of information must have been carried by Alice's photon. However, the reduced density matrix of Alice's photon is also $\oper{I}/2$, so an eavesdropper could not extract any information from Alice's photon alone. \par An appealing feature of the hyperentangled product states $\ket{\Pi}\ket{\eta}$ is that the two degrees of freedom can be manipulated independently. For example, one can switch among the polarization Bell states using local operations (quarter- and half-wave plates in modes $a_{j}$ and $b_{j}$) on the polarization state, while leaving the momentum state untouched. One could then implement a dense-coding scheme in which information is encoded in the polarization state, while the momentum state remains as an ancilla to assist in the complete Bell-state measurement. We note that this implementation requires 4 qubits (encoded in 2 photons) to transmit 2 classical bits of information, and thus it may be debatable whether this is actually ``dense'' coding. However, since the density matrix of (say) photon 2 is \begin{equation} \hat{\rho}_{2} = \mathrm{trace}_{1}(\hat{\rho}_{12}) = \frac{1}{4}\oper{I}_{4}, \end{equation} where $\hat{\rho}_{12}$ is the total $16 \times 16$ density matrix of photons 1 and 2 and $\oper{I}_{4}$ is the $4 \times 4$ identity matrix, we can still send 2 classical bits of information in one photon (containing 2 qubits) in such a way that an eavesdropper with access to only this photon cannot extract any information. \par Another use of a Bell-state measurement is in the quantum teleportation protocol \cite{bennett93}, which can be used to swap entanglement \cite{bennett93,pan98,jennewein02} and perform quantum logic operations for quantum computation \cite{gottesman99}.
In quantum teleportation, two spatially separated parties, Alice and Bob, are each in possession of one photon of an entangled pair, prepared in a Bell state. One of them, say Alice, wants to teleport the quantum state of a third photon to Bob. Of course, Alice does not know what state she is teleporting, or else she could simply call Bob on the telephone and tell him to prepare that state. Instead, Alice performs a Bell-state measurement on her two photons. She then communicates classically to Bob, telling him the results of her BSM, at which point Bob performs a local operation and recovers the state of the third photon, now encoded in his photon. In this way Alice can teleport a quantum state to Bob without actually knowing what state she is sending. \par To date, quantum teleportation implementations \cite{bouwmeester97} have used a Bell-state measurement that is 50\% efficient, based on two-photon interference, polarization analysis and single photon detectors. Given an unknown quantum state, Alice, who shares a pair of maximally entangled photons with Bob, can teleport this state to him with 50\% efficiency. Suppose Alice and Bob share a pair of polarization-entangled photons in the Bell state $\ket{\Phi^{+}}_{12}$ and Alice would like to teleport the unknown polarization state $\ket{u}_{3}$ of photon 3. To take advantage of hyperentangled Bell-state analysis, Alice would need to entangle the momentum degree of freedom of photon 2 with that of photon 3. To do so requires a controlled logic operation between the two photons, such as a \textsc{cnot}\ gate. But if she could perform an efficient two-photon \textsc{cnot}\ operation, then she might as well use it to perform her BSM, which can be done with the same efficiency using a \textsc{cnot}\ gate and a single photon Hadamard rotation \cite{chuang00}. \par One might think that Alice could first entangle the momentum degrees of freedom of photons 2 and 3 using an inefficient \textsc{cnot}\ gate, and discard the cases where the gate did not give the desired output (which is usually checked by measuring the ancillary modes). She could then pass photon 3 on to a friend who performs some sort of complex quantum computation using the polarization as a qubit, and then passes it back to Alice for teleportation. The complex quantum computation, which we assume to be much more difficult and time consuming than the teleportation protocol, need be performed only once, since the Bell-state measurement on photons 2 and 3 is now $100\%$ efficient. However, since photon 3 is part of an entangled momentum state, it is now defined by 2 spatial modes. So Alice's friend is required to run the computation on both spatial modes, which in principle is the same as running the computation twice, as is required on average in the $50\%$ efficient teleportation scheme. Thus, a teleportation protocol using Bell-state measurement of hyperentangled states of this form does not present any gain over previous methods. \section{Conclusion} We have presented a simple method for complete Bell-state analysis using hyperentangled photons. Our scheme requires only linear optics and single photon detectors, can be implemented non-locally with 2 bits of classical communication, and is well within the bounds of current technology. We have briefly discussed the application of these results to the quantum teleportation protocol.
Given that our scheme requires photons entangled in multiple degrees of freedom, it does not provide any increase in efficiency to Bell-state measurements for quantum teleportation. However, our method can be applied directly to implementations of quantum dense coding, resulting in the secure transmission of 2 bits of classical information in one photon (containing 2 qubits). \begin{acknowledgments} The authors acknowledge financial support from the Brazilian funding agencies CNPq, CAPES and Instituto do Mil\^enio de Informa\c{c}\~ao Qu\^antica. \end{acknowledgments} \begin{thebibliography}{32} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Bennett and Wiesner}(1992)}]{bennett92a} \bibinfo{author}{\bibfnamefont{C.~H.}~\bibnamefont{Bennett}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~J.}~\bibnamefont{Wiesner}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{69}}, \bibinfo{pages}{2881} (\bibinfo{year}{1992}). \bibitem[{\citenamefont{Mattle et~al.}(1996)\citenamefont{Mattle, Weinfurter, Kwiat, and Zeilinger}}]{mattle96} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Mattle}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{author}{\bibfnamefont{P.~G.}~\bibnamefont{Kwiat}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{4656} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Bennett et~al.}(1993)\citenamefont{Bennett, Brassard, Cre\'epeau, Jozsa, Peres, and Wootters}}]{bennett93} \bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Bennett}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Cr\'epeau}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Jozsa}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Peres}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.~K.} \bibnamefont{Wootters}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{1895} (\bibinfo{year}{1993}). \bibitem[{\citenamefont{Bouwmeester et~al.}(1997)\citenamefont{Bouwmeester, Pan, Mattle, Eibl, Weinfurter, and Zeilinger}}]{bouwmeester97} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bouwmeester}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Mattle}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Eibl}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{390}}, \bibinfo{pages}{575} (\bibinfo{year}{1997}). 
\bibitem[{\citenamefont{Boschi et~al.}(1998)\citenamefont{Boschi, Branca, De Martini, Hardy, and Popescu}}]{boschi98} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Boschi}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Branca}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{De Martini}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Hardy}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Popescu}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{1121} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Pan et~al.}(1998)\citenamefont{Pan, Bouwmeester, Weinfurter, and Zeilinger}}]{pan98} \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bouwmeester}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys.Rev. Lett.} \textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{3891} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Jennewein et~al.}(2002)\citenamefont{Jennewein, Weihs, Pan, and Zeilinger}}]{jennewein02} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Jennewein}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Weihs}}, \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{017903} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Vaidman and Yoran}(1999)}]{vaidman99} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Vaidman}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Yoran}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{59}}, \bibinfo{pages}{116} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{L\"utkenhaus et~al.}(1999)\citenamefont{L\"utkenhaus, Calsamiglia, and Suominen}}]{lutkenhaus99} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L\"utkenhaus}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Calsamiglia}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{K.-A.} \bibnamefont{Suominen}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{59}}, \bibinfo{pages}{3295} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Calsamiglia and L\"utkenhaus}(2001)}]{calsamiglia01} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Calsamiglia}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L\"utkenhaus}}, \bibinfo{journal}{Appl. Phys. B} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{67} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Ghosh et~al.}(2001)\citenamefont{Ghosh, Kar, Roy, Sen(De), and Sen}}]{ghosh01} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Ghosh}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kar}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Roy}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Sen(De)}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Sen}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{87}}, \bibinfo{pages}{277902} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Rarity and Tapster}(1990)}]{rarity90b} \bibinfo{author}{\bibfnamefont{J.~G.}~\bibnamefont{Rarity}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~R.}~\bibnamefont{Tapster}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{64}}, \bibinfo{pages}{2495} (\bibinfo{year}{1990}). 
\bibitem[{\citenamefont{Tapster et~al.}(1994)\citenamefont{Tapster, Rarity, and Owens}}]{tapster94} \bibinfo{author}{\bibfnamefont{P.~R.}~\bibnamefont{Tapster}}, \bibinfo{author}{\bibfnamefont{J.~G.}~\bibnamefont{Rarity}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~C.~M.}~\bibnamefont{Owens}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{1923} (\bibinfo{year}{1994}). \bibitem[{\citenamefont{Kwiat et~al.}(1995)\citenamefont{Kwiat, Mattle, Weinfurter, Zeilinger, Sergienko, and Shih}}]{kwiat95} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Mattle}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{author}{\bibfnamefont{A.~V.} \bibnamefont{Sergienko}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Shih}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{4337} (\bibinfo{year}{1995}). \bibitem[{\citenamefont{Kwiat et~al.}(1999)\citenamefont{Kwiat, Waks, White, Appelbaum, and Eberhard}}]{kwiat99} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Waks}}, \bibinfo{author}{\bibfnamefont{A.~G.} \bibnamefont{White}}, \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Appelbaum}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~H.} \bibnamefont{Eberhard}}, \bibinfo{journal}{Phys. Rev. A.} \textbf{\bibinfo{volume}{60}}, \bibinfo{pages}{R773} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Weinfurter}(1994)}]{weinfurter94} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{journal}{Europhys. Lett.} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{559} (\bibinfo{year}{1994}). \bibitem[{\citenamefont{Braunstein and Mann}(1995)}]{braunstein95} \bibinfo{author}{\bibfnamefont{S.~L.}~\bibnamefont{Braunstein}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Mann}}, \bibinfo{journal}{Phys. Rev. A.} \textbf{\bibinfo{volume}{51}}, \bibinfo{pages}{R1727} (\bibinfo{year}{1995}). \bibitem[{\citenamefont{Michler et~al.}(1996)\citenamefont{Michler, Mattle, Weinfurter, and Zeilinger}}]{michler96} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Michler}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Mattle}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys. Rev. A.} \textbf{\bibinfo{volume}{53}}, \bibinfo{pages}{R1209} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Walborn et~al.}(2003{\natexlab{a}})\citenamefont{Walborn, de~Oliveira, P\'adua, and Monken}}]{walborn03b} \bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Walborn}}, \bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{de~Oliveira}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{P\'adua}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Monken}}, \bibinfo{journal}{Europhys. Lett} \textbf{\bibinfo{volume}{62}}, \bibinfo{pages}{161} (\bibinfo{year}{2003}{\natexlab{a}}). \bibitem[{\citenamefont{Zeilinger et~al.}(1994)\citenamefont{Zeilinger, Bernstein, and Horne}}]{zeilinger94} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{author}{\bibfnamefont{H.~J.} \bibnamefont{Bernstein}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Horne}}, \bibinfo{journal}{J. Mod. Optics} \textbf{\bibinfo{volume}{41}}, \bibinfo{pages}{2375} (\bibinfo{year}{1994}). 
\bibitem[{\citenamefont{Paris et~al.}(2000)\citenamefont{Paris, Plenio, Bose, Jonathan, and D'Ariano}}]{paris00} \bibinfo{author}{\bibfnamefont{M.~G.~A.} \bibnamefont{Paris}}, \bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Bose}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Jonathan}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.~M.} \bibnamefont{D'Ariano}}, \bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{273}}, \bibinfo{pages}{153} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Kim et~al.}(2001)\citenamefont{Kim, Kulik, and Shih}}]{kim00a} \bibinfo{author}{\bibfnamefont{Y.-H.} \bibnamefont{Kim}}, \bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Kulik}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.} \bibnamefont{Shih}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{1370} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Scully et~al.}(1999)\citenamefont{Scully, Englert, and Bednar}}]{scully99} \bibinfo{author}{\bibfnamefont{M.~O.} \bibnamefont{Scully}}, \bibinfo{author}{\bibfnamefont{B.-G.} \bibnamefont{Englert}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~J.} \bibnamefont{Bednar}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{4433} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Del Re et~al.}(2000)\citenamefont{Del Re, Crosignani, and DiPorto}}]{delre00} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Del Re}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Crosignani}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.} \bibnamefont{DiPorto}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{2989} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Kwiat and Weinfurter}(1998)}]{kwiat98a} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{58}}, \bibinfo{pages}{R2623} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Walborn et~al.}(2003{\natexlab{b}})\citenamefont{Walborn, de~Oliveira, P\'adua, and Monken}}]{walborn03a} \bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Walborn}}, \bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{de~Oliveira}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{P\'adua}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Monken}}, \bibinfo{journal}{Phys. Rev. Lett} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{143601} (\bibinfo{year}{2003}{\natexlab{b}}). \bibitem[{\citenamefont{Chen et~al.}(2003)\citenamefont{Chen, Pan, Zhang, Brukner, and Zeilinger}}]{chen03} \bibinfo{author}{\bibfnamefont{Z.-B.} \bibnamefont{Chen}}, \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{Y.-D.} \bibnamefont{Zhang}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Brukner}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{160408} (\bibinfo{year}{2003}). 
\bibitem[{\citenamefont{Kwiat et~al.}(2000)\citenamefont{Kwiat, Berglund, Altepeter, and White}}]{kwiat00} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Berglund}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Altepeter}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{White}}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{290}}, \bibinfo{pages}{498} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Kwiat}(1997)}]{kwiat97} \bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}}, \bibinfo{journal}{J. Mod. Optics} \textbf{\bibinfo{volume}{44}}, \bibinfo{pages}{2173} (\bibinfo{year}{1997}). \bibitem[{\citenamefont{Simon and Pan}(2002)}]{simon02} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Simon}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{257901} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Gottesman and Chuang}(1999)}]{gottesman99} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gottesman}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~L.} \bibnamefont{Chuang}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{402}}, \bibinfo{pages}{390} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Nielsen and Chuang}(2000)}]{chuang00} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Nielsen}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Chuang}}, \emph{\bibinfo{title}{Quantum Computation and Quantum Information}} (\bibinfo{publisher}{Cambridge}, \bibinfo{address}{Cambridge}, \bibinfo{year}{2000}). \end{thebibliography} \end{document}